WO2021128221A1 - Switching Chip - Google Patents

Switching Chip

Info

Publication number
WO2021128221A1
Authority
WO
WIPO (PCT)
Prior art keywords
maq
pipeline
mux
coupled
processing resource
Prior art date
Application number
PCT/CN2019/128905
Other languages
English (en)
French (fr)
Inventor
比克尤尼
李楠
陈略
布里马亚里夫
塔尔亚利克斯
郝宇
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to CN201980102667.3A (CN114747193B)
Priority to PCT/CN2019/128905 (WO2021128221A1)
Publication of WO2021128221A1

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 — Data switching networks
    • H04L 12/02 — Details
    • H04L 12/04 — Switchboards

Definitions

  • This application relates to the field of switching chips, and in particular to a switching chip.
  • The switching chip includes a switch core 11, N ingress pipelines 12, and N egress pipelines 13, where N is a positive integer.
  • The ingress pipelines 12 and the egress pipelines 13 are coupled through the switch core 11.
  • The switch core 11 is used to switch data from the ingress pipelines 12 to the egress pipelines 13.
  • A media access control (MAC) unit 14 receives a packet and sends it to the ingress pipeline 12.
  • After the ingress pipeline 12 performs match-action (MA) processing on the packet and indicates the downstream port, the packet is sent through the switch core 11 to the corresponding egress pipeline 13.
  • The egress pipeline 13 performs MA processing on the packet and then sends it out through the MAC 14.
  • The processing of packets by the ingress pipeline 12 and the egress pipeline 13 uses cascaded MA processing resources.
  • Different types of switches use the MA processing resources differently. For example, as shown in FIG. 2, for a top of rack (TOR) switch, all the MA processing resources of the ingress pipeline 12 and the egress pipeline 13 are used. As shown in FIG. 3, for a core (CORE) switch, only part of the MA processing resources in the ingress pipeline 12 and the egress pipeline 13 is used.
  • To support such different usage of MA processing resources across application scenarios, the ingress pipeline and the egress pipeline need to reserve enough MA processing resources to meet the maximum usage of every application scenario, which may cause excessive chip area and power consumption.
  • The embodiments of this application provide a switching chip that reduces the reserved MA processing resources.
  • According to a first aspect, a switching chip is provided, including a switch core, multiple ingress pipeline groups, and multiple egress pipeline groups.
  • The switch core is used to switch data from the multiple ingress pipeline groups to the multiple egress pipeline groups.
  • An ingress pipeline group or an egress pipeline group includes a first pipeline input terminal, a second pipeline input terminal, a first pipeline output terminal, a second pipeline output terminal, and M match-action quads (MAQs), where M is a positive integer.
  • Each MAQ includes a first MAQ input terminal, a second MAQ input terminal, a first MAQ output terminal, and a second MAQ output terminal. The first MAQ output terminal of the m-th MAQ is coupled to the first MAQ input terminal of the (m+1)-th MAQ, and the second MAQ output terminal of the m-th MAQ is coupled to the second MAQ input terminal of the (m+1)-th MAQ, where 1 ≤ m < M and m is a positive integer. The first MAQ input terminal of the first MAQ is coupled to the first pipeline input terminal, and the second MAQ input terminal of the first MAQ is coupled to the second pipeline input terminal; the first MAQ output terminal of the M-th MAQ is coupled to the first pipeline output terminal, and the second MAQ output terminal of the M-th MAQ is coupled to the second pipeline output terminal.
  • The path between the first pipeline input terminal and the first pipeline output terminal is the first pipeline, and the path between the second pipeline input terminal and the second pipeline output terminal is the second pipeline.
  • Each MAQ further includes a first MA processing resource, a second MA processing resource, a first data selector (MUX), and a second MUX.
  • The first MUX and the second MUX are used to: connect the first MA processing resource and the second MA processing resource in series into the path of the first pipeline; or connect the first MA processing resource into the path of the first pipeline and the second MA processing resource into the path of the second pipeline.
  • With the switching chip provided in the embodiments of this application, when the same pipeline needs a large number of MA processing resources, all the MA processing resources can be switched to the first pipeline; when it needs only a small number, part of the MA processing resources can be switched to the first pipeline and the other part to the second pipeline to increase the packet processing rate.
  • The number of MA processing resources can thus be adapted to different application scenarios, and not every pipeline has to reserve MA processing resources according to the maximum usage, which reduces the reserved MA processing resources.
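As a rough illustration of this resource-allocation idea, the following Python sketch (hypothetical, not taken from the patent) counts how many MA processing resources each pipeline can use for a group of M MAQs, each holding two MA processing resources, in the two modes described above:

```python
# Hypothetical model of per-pipeline MA resource counts for a group of
# M MAQs, each containing two MA processing resources.

def resources_per_pipeline(m_maqs: int, mode: str) -> dict:
    """Return how many MA processing resources each pipeline can use."""
    if mode == "SPP":   # both resources of every MAQ serve the first pipeline
        return {"pipeline_1": 2 * m_maqs, "pipeline_2": 0}
    if mode == "DPP":   # resources are split between the two pipelines
        return {"pipeline_1": m_maqs, "pipeline_2": m_maqs}
    raise ValueError(f"unknown mode: {mode}")

# A ToR-like scenario uses the deep single pipeline; a core-switch-like
# scenario trades depth for a second, faster pipeline.
print(resources_per_pipeline(8, "SPP"))
print(resources_per_pipeline(8, "DPP"))
```

The mode names SPP and DPP here anticipate the single/double processing pipeline modes defined later in the description.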
  • In a possible implementation, the input terminal of the first MA processing resource is coupled to the first MAQ input terminal, and the output terminal of the first MA processing resource is coupled to the first selection input terminal of the first MUX and the second selection input terminal of the second MUX; the third selection input terminal of the first MUX is coupled to the second MAQ input terminal, and the first selection output terminal of the first MUX is coupled to the fourth selection input terminal of the second MUX and the second MAQ output terminal; the second selection output terminal of the second MUX is coupled to the first MAQ output terminal.
  • In a possible implementation, the MAQ further includes a third MA processing resource, and the second MA processing resource is coupled to the fourth selection input terminal of the second MUX and the second MAQ output terminal through the third MA processing resource.
  • In a possible implementation, the MAQ further includes a fourth MA processing resource, and the second selection output terminal of the second MUX is coupled to the first MAQ output terminal through the fourth MA processing resource.
  • In a possible implementation, the MAQ further includes a local lookup memory. The MA processing resources of the MAQ are all coupled to the local lookup memory, which stores the lookup table entries used by the MA processing resources of the MAQ for MA processing. By sharing storage resources in this way, there is no need to allocate separate storage resources for each MA processing resource, which reduces the reserved memory resources.
  • In a possible implementation, the ingress pipeline group or the egress pipeline group further includes a shared lookup memory; all MAQs are coupled to the shared lookup memory, which stores the lookup table entries used by the MAQs for MA processing.
  • When the MA processing resources of an MAQ cannot find a corresponding lookup table entry in the local lookup memory, the entry can be searched for in the shared lookup memory. By sharing storage resources in this way, there is no need to allocate separate storage resources for each MAQ, which reduces the reserved memory resources.
  • FIG. 1 is a schematic structural diagram of a switch chip provided by an embodiment of the application
  • FIG. 2 is a schematic diagram of a pipeline of a TOR switch provided by an embodiment of the application
  • FIG. 3 is a schematic diagram of a pipeline of a CORE switch provided by an embodiment of the application.
  • FIG. 4 is a schematic structural diagram of a switch chip provided by an embodiment of the application.
  • FIG. 5 is a schematic structural diagram of a pipeline resource pool provided by an embodiment of this application.
  • FIG. 6 is a first schematic structural diagram of an MAQ provided by an embodiment of this application.
  • FIG. 7 is a first schematic diagram of the SPP mode of an MAQ provided by an embodiment of this application.
  • FIG. 8 is a first schematic diagram of the DPP mode of an MAQ provided by an embodiment of this application.
  • FIG. 9 is a second schematic structural diagram of an MAQ provided by an embodiment of this application.
  • FIG. 10 is a third schematic structural diagram of an MAQ provided by an embodiment of this application.
  • FIG. 11 is a second schematic diagram of an SPP mode of MAQ provided by an embodiment of this application.
  • FIG. 12 is a second schematic diagram of a DPP mode of MAQ provided by an embodiment of this application.
  • FIG. 13 is a schematic diagram of an SPP mode of a pipeline resource pool provided by an embodiment of this application.
  • FIG. 14 is a schematic diagram of a DPP mode of a pipeline resource pool provided by an embodiment of the application.
  • As described above, the ingress pipeline and the egress pipeline need to reserve MA processing resources that meet the maximum usage of every application scenario, which may cause excessive chip area and power consumption.
  • In the switching chip provided in the embodiments of this application, the MA processing resources are shared by two pipelines to reduce the reserved MA processing resources.
  • In addition, because different application scenarios have different lookup requirements and different lookup table entries, the cascaded MA processing resources also have different memory requirements.
  • One approach is to deploy a large memory for each level of MA processing resources to meet the different requirements of different application scenarios, but this approach has a very large storage-resource overhead.
  • The switching chip and switching method provided in the embodiments of this application share lookup storage resources to reduce the reserved memory resources.
  • Specifically, the embodiments of this application provide a switching chip that can use one chip to meet the requirements of multiple different application scenarios while adding only a small number of logic units and little chip area.
  • The switching chip includes a switch core 41, multiple ingress pipeline groups 42, and multiple egress pipeline groups 43.
  • The number of ingress pipeline groups 42 and egress pipeline groups 43 may be the same, for example, both N, where N is a positive integer.
  • The multiple ingress pipeline groups 42 are coupled to the multiple egress pipeline groups 43 through the switch core 41.
  • An ingress pipeline group 42 serves as an ingress pipeline, and an egress pipeline group 43 serves as an egress pipeline.
  • The switch core 41 is used to switch data from the multiple ingress pipeline groups 42 to the multiple egress pipeline groups 43.
  • The ingress pipeline group 42 performs MA processing on a packet from the MAC 44 and instructs the switch core 41 to send the processed packet to the corresponding egress pipeline group 43.
  • The corresponding egress pipeline group 43 performs MA processing on the packet and then sends it out through the MAC 44.
  • The ingress pipeline group 42 or the egress pipeline group 43 includes a first pipeline input terminal 51a, a second pipeline input terminal 52a, a first pipeline output terminal 51b, a second pipeline output terminal 52b, M match-action quads (MAQs) 51 (for example, MAQ 1 to MAQ M), a first packet parsing core 52, a second packet parsing core 53, a first packet editing core 54, a second packet editing core 55, a first packet memory 56, a second packet memory 57, and a shared lookup memory 58, where M is a positive integer.
  • the path between the first pipeline input terminal 51a and the first pipeline output terminal 51b is the first pipeline, and the path between the second pipeline input terminal 52a and the second pipeline output terminal 52b is the second pipeline.
  • Each MAQ includes: a first MAQ input terminal 61a, a second MAQ input terminal 62a, a first MAQ output terminal 61b, and a second MAQ output terminal 62b.
  • the path from the first MAQ input terminal 61a to the first MAQ output terminal 61b is the first pipeline, and the path from the second MAQ input terminal 62a to the second MAQ output terminal 62b is the second pipeline.
  • The M MAQs 51 are connected in series.
  • the first MAQ output terminal 61b of the mth MAQ is coupled to the first MAQ input terminal 61a of the m+1th MAQ.
  • the first MAQ input terminal 61a of the first MAQ is coupled to the first pipeline input terminal 51a, and the first MAQ output terminal 61b of the Mth MAQ is coupled to the first pipeline output terminal 51b.
  • The above paths belong to the first pipeline, where 1 ≤ m < M and m is a positive integer.
  • the second MAQ output terminal 62b of the m-th MAQ is coupled to the second MAQ input terminal 62a of the m+1-th MAQ.
  • the second MAQ input terminal 62a of the first MAQ is coupled to the second pipeline input terminal 52a, and the second MAQ output terminal 62b of the Mth MAQ is coupled to the second pipeline output terminal 52b.
  • the above path belongs to the second pipeline.
  • The first pipeline input terminal 51a is coupled to the first MAQ input terminal 61a of the first MAQ (MAQ 1) of the M MAQs 51 through the first packet parsing core 52, and the first pipeline input terminal 51a is also coupled to the first packet editing core 54 through the first packet memory 56.
  • The first pipeline output terminal 51b is coupled to the first MAQ output terminal 61b of the last MAQ (MAQ M) of the M MAQs 51 through the first packet editing core 54.
  • The second pipeline input terminal 52a is coupled to the second MAQ input terminal 62a of the first MAQ (MAQ 1) of the M MAQs 51 through the second packet parsing core 53, and the second pipeline input terminal 52a is also coupled to the second packet editing core 55 through the second packet memory 57.
  • The second pipeline output terminal 52b is coupled to the second MAQ output terminal 62b of the last MAQ (MAQ M) of the M MAQs 51 through the second packet editing core 55.
  • The M MAQs 51 are all coupled to the shared lookup memory 58.
  • The shared lookup memory 58 may be a memory or a ternary content-addressable memory (TCAM).
  • The shared lookup memory 58 may be used to store the lookup table entries for the M MAQs 51 to perform MA processing. By sharing storage resources in this way, there is no need to allocate separate storage resources for each MAQ, which reduces the reserved memory resources.
  • The first packet parsing core 52 parses the header data of packets entering the first pipeline.
  • The second packet parsing core 53 parses the header data of packets entering the second pipeline.
  • The first packet editing core 54 edits the header data of packets leaving the first pipeline.
  • The second packet editing core 55 edits the header data of packets leaving the second pipeline.
  • The first packet memory 56 buffers the packet data being processed in the first pipeline.
  • The second packet memory 57 buffers the packet data being processed in the second pipeline.
  • Each MAQ 60 further includes a first MA processing resource 61, a second MA processing resource 62, a first data selector (multiplexer, MUX) 63, a second MUX 64, and a local lookup memory 65.
  • Both the first MUX 63 and the second MUX 64 have two inputs, and their function is to select one of the two inputs as the output.
  • In one configuration, the first MUX 63 and the second MUX 64 connect the first MA processing resource 61 and the second MA processing resource 62 in series into the path of the first pipeline.
  • In this case, the MAQ works in single processing pipeline (SPP) mode.
  • In the other configuration, the first MA processing resource 61 is connected into the path of the first pipeline, and the second MA processing resource 62 is connected into the path of the second pipeline.
  • In this case, the MAQ works in double processing pipeline (DPP) mode.
  • This application does not limit the connection mode among the first MA processing resource 61, the second MA processing resource 62, the first MUX 63, and the second MUX 64.
  • The input terminal of the first MA processing resource 61 is coupled to the first MAQ input terminal 61a.
  • The output terminal of the first MA processing resource 61 is coupled to the first selection input terminal of the first MUX 63 and the second selection input terminal of the second MUX 64.
  • The third selection input terminal of the first MUX 63 is coupled to the second MAQ input terminal 62a.
  • The first selection output terminal of the first MUX 63 is coupled to the fourth selection input terminal of the second MUX 64 and the second MAQ output terminal 62b.
  • The second selection output terminal of the second MUX 64 is coupled to the first MAQ output terminal 61b.
  • MA processing refers to parsing the header data of a packet to generate a match key, searching for the key in the MA flow table to obtain a lookup result, and executing actions according to the lookup result.
  • MA processing resources are hardware structures dedicated to MA processing.
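The three steps just defined (key generation, table lookup, action execution) can be sketched in Python. This is a minimal illustration assuming a dict-based flow table and invented field names; the patent defines only the steps, not this interface:

```python
# Minimal sketch of one match-action (MA) stage. The flow table maps a
# match key built from header fields to an action; field and action names
# are illustrative assumptions.

def ma_process(packet: dict, flow_table: dict) -> dict:
    key = (packet["dst_mac"], packet["vlan"])      # generate the match key from header data
    action = flow_table.get(key, {"op": "drop"})   # search the key in the MA flow table
    if action["op"] == "forward":                  # execute the action per the lookup result
        packet["out_port"] = action["port"]
    else:
        packet["out_port"] = None                  # default action: drop
    return packet

table = {("aa:bb", 10): {"op": "forward", "port": 3}}
pkt = ma_process({"dst_mac": "aa:bb", "vlan": 10}, table)
print(pkt["out_port"])  # 3
```

A real MA stage in the chip is a hardware structure doing the same three steps per clock cycle; the cascaded stages each apply their own flow table to the packet in turn.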
  • The first MUX 63 can connect the first MA processing resource 61 and the second MA processing resource 62, and the second MUX 64 can connect the second MA processing resource 62 to the first MAQ output terminal 61b. That is, the MAQ works in SPP mode.
  • After the packets of the first pipeline enter from the first MAQ input terminal 61a of the MAQ 60, they are processed by the first MA processing resource 61 and the second MA processing resource 62 in sequence, and are output from the first MAQ output terminal 61b of the MAQ 60. It should be noted that, because the second MAQ output terminal 62b is directly connected to the output terminal of the second MA processing resource 62, the second MAQ output terminal 62b also outputs data, which is the same as the data output by the first MAQ output terminal 61b; the data output from the second MAQ output terminal 62b may simply not be used.
  • The first MUX 63 can connect the second MA processing resource 62 to the second MAQ input terminal 62a in the path of the second pipeline, so that the second MA processing resource 62 is connected to the second MAQ output terminal 62b. That is, the MAQ works in DPP mode.
  • After the packets of the first pipeline enter from the first MAQ input terminal 61a of the MAQ 60, they are processed by the first MA processing resource 61 and output from the first MAQ output terminal 61b of the MAQ 60.
  • After the packets of the second pipeline enter from the second MAQ input terminal 62a of the MAQ 60, they are processed by the second MA processing resource 62 and output from the second MAQ output terminal 62b of the MAQ 60.
  • the first MAQ input terminal 61a is coupled to the first selection input terminal of the first MUX 63 and the second selection input terminal of the second MUX 64.
  • the second MAQ input terminal 62a is coupled to the third selection input terminal of the first MUX 63.
  • the first selection output terminal of the first MUX 63 is coupled to the input terminal of the second MA processing resource 62.
  • the output terminal of the second MA processing resource 62 is coupled to the fourth selection input terminal of the second MUX 64 and the second MAQ output terminal 62b.
  • the second selection output terminal of the second MUX 64 is coupled to the input terminal of the first MA processing resource 61.
  • the output terminal of the first MA processing resource 61 is coupled to the first MAQ output terminal 61b.
  • In a possible implementation, the MAQ 60 further includes a third MA processing resource 66, and the second MA processing resource 62 is coupled to the fourth selection input terminal of the second MUX 64 and the second MAQ output terminal 62b through the third MA processing resource 66.
  • In a possible implementation, the MAQ 60 further includes a fourth MA processing resource 67, and the second selection output terminal of the second MUX 64 is coupled to the first MAQ output terminal 61b through the fourth MA processing resource 67.
  • The above-mentioned MAQ can work in SPP mode.
  • When the first MUX 63 selects the output terminal of the first MA processing resource 61 and the second MUX 64 selects the output terminal of the third MA processing resource 66, the first MA processing resource 61, the second MA processing resource 62, the third MA processing resource 66, and the fourth MA processing resource 67 are connected in series; the first pipeline works and the second pipeline does not.
  • The packets of the first pipeline are processed by the first MA processing resource 61, the second MA processing resource 62, the third MA processing resource 66, and the fourth MA processing resource 67 in sequence.
  • Because the second MAQ output terminal 62b is directly connected to the output terminal of the third MA processing resource 66, the second MAQ output terminal 62b also outputs data; however, because this data has not passed through the fourth MA processing resource 67, it differs from the data output by the first MAQ output terminal 61b.
  • The above-mentioned MAQ can also work in DPP mode.
  • When the first MUX 63 selects the second MAQ input terminal 62a of the MAQ 60 and the second MUX 64 selects the output terminal of the first MA processing resource 61, the first MA processing resource 61 and the fourth MA processing resource 67 are connected in series, the second MA processing resource 62 and the third MA processing resource 66 are connected in series, and both the first pipeline and the second pipeline work.
  • After the packets of the second pipeline enter from the second MAQ input terminal 62a of the MAQ 60, they are processed by the second MA processing resource 62 and the third MA processing resource 66 in sequence, and are then output from the second MAQ output terminal 62b of the MAQ 60.
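The datapaths just described for the four-resource MAQ can be simulated with a small Python sketch. This is a hypothetical behavioral model, not from the patent; the "resources" simply stamp their number onto a packet so the traversal order is visible:

```python
# Behavioral model of a four-resource MAQ: in SPP mode the MUXes chain
# resources 1 -> 2 -> 3 -> 4 on the first pipeline; in DPP mode the first
# pipeline uses resources 1 and 4 while the second uses resources 2 and 3.

def make_resource(n):
    """Each 'MA processing resource' appends its number to the packet trace."""
    return lambda pkt: pkt + [n]

def maq_process(mode, pkt1, pkt2=None):
    r1, r2, r3, r4 = (make_resource(n) for n in (1, 2, 3, 4))
    if mode == "SPP":
        # First pipeline only: all four resources in series; second output unused.
        return r4(r3(r2(r1(pkt1)))), None
    # DPP: both pipelines work concurrently on separate resource pairs.
    return r4(r1(pkt1)), r3(r2(pkt2))

print(maq_process("SPP", []))      # ([1, 2, 3, 4], None)
print(maq_process("DPP", [], []))  # ([1, 4], [2, 3])
```

The traces match the text: in SPP mode a packet passes all four resources in order, while in DPP mode each pipeline sees only its own pair.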
  • The pipeline modes of the M MAQs belonging to the same pipeline resource pool are the same; that is, the M MAQs all work in SPP mode or all work in DPP mode. It is not allowed that some MAQs work in SPP mode while others work in DPP mode, because that would cause confusion in data processing.
  • When all MAQs work in SPP mode, all M MAQs can access the entire storage space of the shared lookup memory 58. Assuming that the working frequency of the first pipeline is Freq, the maximum packet rate that can be processed in SPP mode is Freq packets per second (PPS).
  • In SPP mode, the ingress pipeline group 42 can have at least NMAQ_MIN_ING_SPP MAQs and at most NMAQ_MAX_ING_SPP MAQs.
  • In SPP mode, the egress pipeline group 43 can have at least NMAQ_MIN_EGR_SPP MAQs and at most NMAQ_MAX_EGR_SPP MAQs.
  • Alternatively, all MAQs in the pipeline resource pool 50 work in DPP mode; when the pipeline resource pool 50 works in DPP mode, both the first pipeline and the second pipeline work.
  • After the packets of the first pipeline enter the first pipeline input terminal 51a of the pipeline resource pool 50, they are processed by the first packet parsing core 52, the first pipeline of the MAQs 51, and the first packet editing core 54 in sequence, and are then output from the first pipeline output terminal 51b of the pipeline resource pool 50.
  • After the packets of the second pipeline enter the second pipeline input terminal 52a of the pipeline resource pool 50, they are processed by the second packet parsing core 53, the second pipeline of the MAQs 51, and the second packet editing core 55 in sequence, and are then output from the second pipeline output terminal 52b of the pipeline resource pool 50.
  • The MA processing resources of the MAQ 60 are all coupled to the local lookup memory 65.
  • The local lookup memory 65 may be used to store the lookup table entries for the MA processing resources of the MAQ 60 to perform MA processing. By sharing storage resources in this way, there is no need to allocate separate storage resources for each MA processing resource, which reduces the reserved memory resources.
  • The local lookup memory 65 is also coupled to the shared lookup memory 58 outside the MAQ.
  • When a corresponding lookup table entry cannot be found in the local lookup memory 65, it can be searched for in the shared lookup memory 58. By sharing storage resources in this way, there is no need to allocate separate storage resources for each MAQ, which reduces the reserved memory resources.
  • In DPP mode, the first pipeline and the second pipeline of each MAQ can each access half of the storage space of the shared lookup memory 58. Assuming that the working frequency of the first pipeline is Freq, the maximum packet rate that can be processed in DPP mode is 2*Freq PPS.
  • In DPP mode, the ingress pipeline group 42 can have at least NMAQ_MIN_ING_DPP MAQs and at most NMAQ_MAX_ING_DPP MAQs.
  • In DPP mode, the egress pipeline group 43 can have at least NMAQ_MIN_EGR_DPP MAQs and at most NMAQ_MAX_EGR_DPP MAQs.
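The packet-rate claim above is simple arithmetic; the following sketch works it through with an illustrative clock frequency (the patent gives no concrete Freq value, so 1.2 GHz here is an assumption for the example only):

```python
# Worked example of the packet-rate claim: at working frequency Freq, SPP
# mode processes up to Freq packets per second, DPP mode up to 2*Freq,
# because both pipelines process packets concurrently.

def max_pps(freq_hz: float, mode: str) -> float:
    if mode not in ("SPP", "DPP"):
        raise ValueError(mode)
    return freq_hz if mode == "SPP" else 2 * freq_hz

FREQ = 1.2e9  # illustrative 1.2 GHz clock, not a value from the patent
print(max_pps(FREQ, "SPP"))  # one packet per cycle on the single pipeline
print(max_pps(FREQ, "DPP"))  # twice that, split across two pipelines
```

The trade-off is that DPP mode doubles the packet rate but halves the MA processing depth and the shared-memory space available to each pipeline.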
  • With the switching chip provided in the embodiments of this application, when the same pipeline needs a large number of MA processing resources, all the MA processing resources can be switched to the first pipeline; when it needs only a small number, part of the MA processing resources can be switched to the first pipeline and the other part to the second pipeline to increase the packet processing rate.
  • The number of MA processing resources can thus be adapted to different application scenarios, and not every pipeline has to reserve MA processing resources according to the maximum usage, which reduces the reserved MA processing resources.
  • When performing a table lookup, an MA processing resource in an MAQ of the first pipeline parses the header data to generate a key value, generates a lookup request according to the key value and a control field, and sends the request to the local lookup memory 65 of that MAQ.
  • The local lookup memory 65 determines the purpose of the lookup request according to the control field.
  • The purpose may be to search for the key value in the lookup table entries of the local lookup memory 65, or in the lookup table entries of the shared lookup memory 58. If the key value is to be searched for in the lookup table entries of the shared lookup memory 58, the local lookup memory sends the lookup request to the shared lookup memory 58 outside the MAQ.
  • The local lookup memory 65 can pack the lookup requests from different MA processing resources of the MAQ and send them to the shared lookup memory 58 at the same time; the number of lookup requests sent at the same time is less than or equal to the number of MA processing resources in the MAQ.
  • In this way, the embodiments of this application adopt a multi-level shared lookup storage technique, which reduces the reserved memory resources.
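A minimal sketch of this two-level lookup, assuming plain dicts for the local and shared lookup memories and a string control field (both are illustrative assumptions; the hardware uses memories or TCAMs and a binary control field):

```python
# Two-level lookup: the control field tells the local lookup memory whether
# to resolve the request itself or forward it to the shared lookup memory.

def lookup(key, control, local_mem, shared_mem):
    """Resolve a lookup request locally or via the shared lookup memory."""
    if control == "local":
        return local_mem.get(key)       # hit or miss in the MAQ's own memory
    # control field directs the request to the shared lookup memory
    return shared_mem.get(key)

local = {"k1": "entry-local"}           # per-MAQ local lookup table entries
shared = {"k2": "entry-shared"}         # entries shared by all MAQs in the pool
print(lookup("k1", "local", local, shared))   # entry-local
print(lookup("k2", "shared", local, shared))  # entry-shared
```

In the chip, requests destined for the shared lookup memory are additionally packed so that one transfer can carry up to as many requests as the MAQ has MA processing resources; that batching is omitted here for brevity.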
  • The sequence numbers of the above processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation processes of the embodiments of this application.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • The division into units is only a logical function division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection between devices or units through some interfaces, and may be in electrical, mechanical or other forms.
  • The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless manner.
  • The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).


Abstract

This application discloses a switching chip, relating to the field of switching chips, for reducing reserved MA processing resources. The switching chip includes a switch core, multiple ingress pipeline groups, and multiple egress pipeline groups. An ingress pipeline group or an egress pipeline group includes a first pipeline input terminal, a second pipeline input terminal, a first pipeline output terminal, a second pipeline output terminal, and M match-action quads (MAQs). The path between the first pipeline input terminal and the first pipeline output terminal is a first pipeline, and the path between the second pipeline input terminal and the second pipeline output terminal is a second pipeline. Each MAQ includes a first MA processing resource, a second MA processing resource, a first data selector (MUX), and a second MUX. The first MUX and the second MUX are used to: connect the first MA processing resource and the second MA processing resource in series into the path of the first pipeline; or connect the first MA processing resource into the path of the first pipeline and the second MA processing resource into the path of the second pipeline.

Description

Switching Chip

Technical Field

This application relates to the field of switching chips, and in particular, to a switching chip.

Background

In the field of ultra-high-bandwidth switching chips, the industry generally uses a pipeline architecture to obtain ultra-high-bandwidth processing performance.

As shown in FIG. 1, a switching chip includes a switch core 11, N ingress pipelines 12, and N egress pipelines 13, where N is a positive integer. The ingress pipelines 12 and the egress pipelines 13 are coupled through the switch core 11. The switch core 11 is used to switch data from the ingress pipelines 12 to the egress pipelines 13. A media access control (MAC) unit 14 receives a packet and sends it to the ingress pipeline 12; after the ingress pipeline 12 performs match-action (MA) processing on the packet and indicates the downstream port, the packet is sent through the switch core 11 to the corresponding egress pipeline 13, and the egress pipeline 13 performs MA processing on the packet and then sends it out through the MAC 14.

The processing of packets by the ingress pipeline 12 and the egress pipeline 13 uses cascaded MA processing resources. Different types of switches use the MA processing resources differently. For example, as shown in FIG. 2, for a top of rack (TOR) switch, all the MA processing resources of the ingress pipeline 12 and the egress pipeline 13 are used; as shown in FIG. 3, for a core (CORE) switch, only part of the MA processing resources in the ingress pipeline 12 and the egress pipeline 13 is used.

To support such different usage of MA processing resources in different application scenarios, the ingress pipeline and the egress pipeline need to reserve enough MA processing resources to meet the maximum usage of every application scenario, which may cause excessive chip area and power consumption.
Summary
Embodiments of this application provide a switching chip for reducing the amount of reserved MA processing resources.
To achieve the above objective, the embodiments of this application adopt the following technical solutions:
In a first aspect, a switching chip is provided, including a switch core, multiple ingress pipeline groups, and multiple egress pipeline groups. The switch core is configured to switch data from the multiple ingress pipeline groups onto the multiple egress pipeline groups. An ingress or egress pipeline group includes a first pipeline input, a second pipeline input, a first pipeline output, a second pipeline output, and M match-action quads (MAQs), where M is a positive integer. A MAQ includes a first MAQ input, a second MAQ input, a first MAQ output, and a second MAQ output. The first MAQ output of the m-th MAQ is coupled to the first MAQ input of the (m+1)-th MAQ, and the second MAQ output of the m-th MAQ is coupled to the second MAQ input of the (m+1)-th MAQ, where 1 ≤ m < M and m is a positive integer. The first MAQ input of the 1st MAQ is coupled to the first pipeline input, and the second MAQ input of the 1st MAQ is coupled to the second pipeline input. The first MAQ output of the M-th MAQ is coupled to the first pipeline output, and the second MAQ output of the M-th MAQ is coupled to the second pipeline output. The path between the first pipeline input and the first pipeline output is the first pipeline, and the path between the second pipeline input and the second pipeline output is the second pipeline. The MAQ further includes a first MA processing resource, a second MA processing resource, a first multiplexer (MUX), and a second MUX. The first MUX and the second MUX are configured to: connect the first MA processing resource and the second MA processing resource in series into the path of the first pipeline; or connect the first MA processing resource into the path of the first pipeline and the second MA processing resource into the path of the second pipeline.
With the switching chip provided by the embodiments of this application, when a single pipeline needs a large number of MA processing resources, all MA processing resources can be switched into the first pipeline; when a single pipeline needs fewer MA processing resources, part of the MA processing resources can be switched into the first pipeline and the rest into the second pipeline, increasing the packet processing rate. The number of MA processing resources can thus be adapted to different application scenarios, without every pipeline having to reserve MA processing resources for the maximum usage, which reduces the amount of reserved MA processing resources.
In a possible implementation, the input of the first MA processing resource is coupled to the first MAQ input, and the output of the first MA processing resource is coupled to a first selection input of the first MUX and a second selection input of the second MUX; a third selection input of the first MUX is coupled to the second MAQ input, and a first selection output of the first MUX is coupled to a fourth selection input of the second MUX and to the second MAQ output; a second selection output of the second MUX is coupled to the first MAQ output.
In a possible implementation, the MAQ further includes a third MA processing resource, and the second MA processing resource is coupled through the third MA processing resource to the fourth selection input of the second MUX and to the second MAQ output.
In a possible implementation, the MAQ further includes a fourth MA processing resource, and the second selection output of the second MUX is coupled through the fourth MA processing resource to the first MAQ output.
In a possible implementation, the MAQ further includes a local lookup memory. All MA processing resources of the MAQ are coupled to the local lookup memory, which is used to store the lookup table entries used by the MA processing resources of the MAQ for MA processing. By sharing storage resources in this way, a separate memory does not need to be allocated to each MA processing resource, reducing the amount of reserved memory.
In a possible implementation, the ingress pipeline group or the egress pipeline group further includes a shared lookup memory. All MAQs are coupled to the shared lookup memory, which is used to store the lookup table entries used by the MAQs for MA processing. When an MA processing resource of a MAQ cannot find a matching entry in the local lookup memory, it can look the entry up in the shared lookup memory. By sharing storage resources in this way, a separate memory does not need to be allocated to each MAQ, reducing the amount of reserved memory.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a switching chip according to an embodiment of this application;
FIG. 2 is a schematic diagram of the pipelines of a TOR switch according to an embodiment of this application;
FIG. 3 is a schematic diagram of the pipelines of a CORE switch according to an embodiment of this application;
FIG. 4 is a schematic structural diagram of a switching chip according to an embodiment of this application;
FIG. 5 is a schematic structural diagram of a pipeline resource pool according to an embodiment of this application;
FIG. 6 is a first schematic structural diagram of a MAQ according to an embodiment of this application;
FIG. 7 is a first schematic diagram of the SPP mode of a MAQ according to an embodiment of this application;
FIG. 8 is a first schematic diagram of the DPP mode of a MAQ according to an embodiment of this application;
FIG. 9 is a second schematic structural diagram of a MAQ according to an embodiment of this application;
FIG. 10 is a third schematic structural diagram of a MAQ according to an embodiment of this application;
FIG. 11 is a second schematic diagram of the SPP mode of a MAQ according to an embodiment of this application;
FIG. 12 is a second schematic diagram of the DPP mode of a MAQ according to an embodiment of this application;
FIG. 13 is a schematic diagram of the SPP mode of a pipeline resource pool according to an embodiment of this application;
FIG. 14 is a schematic diagram of the DPP mode of a pipeline resource pool according to an embodiment of this application.
Detailed Description
As noted above, the ingress and egress pipelines must reserve enough MA processing resources to satisfy the maximum usage of every application scenario, which can lead to excessive chip area and power consumption. The switching chip and switching method provided by the embodiments of this application reduce the amount of reserved MA processing resources by sharing MA processing resources between two pipelines.
In addition, different application scenarios have different lookup requirements, and for different lookup table entries, the cascaded MA processing resources also have different memory requirements. One approach is to deploy a large memory at every stage of MA processing resources to satisfy the differing needs of all scenarios, but this approach is very expensive in storage resources. The switching chip and switching method provided by the embodiments of this application reduce the amount of reserved memory by sharing lookup storage resources.
Specifically, the embodiments of this application provide a switching chip that, at the cost of a small number of additional logic cells and a small increase in chip area, allows a single chip to satisfy the requirements of several different application scenarios.
As shown in FIG. 4, the switching chip includes a switch core 41, multiple ingress pipeline groups 42, and multiple egress pipeline groups 43.
The numbers of ingress pipeline groups 42 and egress pipeline groups 43 may be equal, for example both N, where N is a positive integer. The multiple ingress pipeline groups 42 are coupled to the multiple egress pipeline groups 43 through the switch core 41. An ingress pipeline group 42 serves as an ingress pipeline, and an egress pipeline group 43 serves as an egress pipeline. The switch core 41 is configured to switch data from the multiple ingress pipeline groups 42 onto the multiple egress pipeline groups 43.
An ingress pipeline group 42 performs MA processing on a packet from the MAC 44 and instructs the switch core 41 to send the processed packet to the corresponding egress pipeline group 43. The corresponding egress pipeline group 43 performs MA processing on the packet and sends it out through the MAC 44.
As shown in FIG. 5, an ingress pipeline group 42 or egress pipeline group 43 includes a first pipeline input 51a, a second pipeline input 52a, a first pipeline output 51b, a second pipeline output 52b, M match-action quads (MAQs) 51 (e.g., MAQ 1 to MAQ M), a first packet parsing core 52, a second packet parsing core 53, a first packet editing core 54, a second packet editing core 55, a first packet memory 56, a second packet memory 57, and a shared lookup memory 58, where M is a positive integer.
The path between the first pipeline input 51a and the first pipeline output 51b is the first pipeline, and the path between the second pipeline input 52a and the second pipeline output 52b is the second pipeline.
Each MAQ includes a first MAQ input 61a, a second MAQ input 62a, a first MAQ output 61b, and a second MAQ output 62b. The path from the first MAQ input 61a to the first MAQ output 61b belongs to the first pipeline, and the path from the second MAQ input 62a to the second MAQ output 62b belongs to the second pipeline. The M MAQs 51 are connected in series.
That is, the first MAQ output 61b of the m-th MAQ is coupled to the first MAQ input 61a of the (m+1)-th MAQ. The first MAQ input 61a of the 1st MAQ is coupled to the first pipeline input 51a, and the first MAQ output 61b of the M-th MAQ is coupled to the first pipeline output 51b. These paths belong to the first pipeline, where 1 ≤ m < M and m is a positive integer.
The second MAQ output 62b of the m-th MAQ is coupled to the second MAQ input 62a of the (m+1)-th MAQ. The second MAQ input 62a of the 1st MAQ is coupled to the second pipeline input 52a, and the second MAQ output 62b of the M-th MAQ is coupled to the second pipeline output 52b. These paths belong to the second pipeline.
The first pipeline input 51a is coupled through the first packet parsing core 52 to the first MAQ input 61a of the first MAQ (MAQ 1) of the M MAQs 51, and is also coupled through the first packet memory 56 to the first packet editing core 54. The first pipeline output 51b is coupled through the first packet editing core 54 to the first MAQ output 61b of the last MAQ (MAQ M) of the M MAQs 51.
The second pipeline input 52a is coupled through the second packet parsing core 53 to the second MAQ input 62a of the first MAQ (MAQ 1) of the M MAQs 51, and is also coupled through the second packet memory 57 to the second packet editing core 55. The second pipeline output 52b is coupled through the second packet editing core 55 to the second MAQ output 62b of the last MAQ (MAQ M) of the M MAQs 51.
All M MAQs 51 are coupled to the shared lookup memory 58. The shared lookup memory 58 may be a memory or a ternary content addressable memory (TCAM), and may be used to store the lookup table entries used by the M MAQs 51 for MA processing. By sharing storage resources in this way, a separate memory does not need to be allocated to each MAQ, reducing the amount of reserved memory.
The first packet parsing core 52 is configured to parse the header data of packets entering the first pipeline, and the second packet parsing core 53 is configured to parse the header data of packets entering the second pipeline.
The first packet editing core 54 is configured to edit the header data of packets leaving the first pipeline, and the second packet editing core 55 is configured to edit the header data of packets leaving the second pipeline.
The first packet memory 56 is configured to buffer the packet data being processed in the first pipeline, and the second packet memory 57 is configured to buffer the packet data being processed in the second pipeline.
As shown in FIG. 6, each MAQ 60 further includes a first MA processing resource 61, a second MA processing resource 62, a first multiplexer (MUX) 63, a second MUX 64, and a local lookup memory 65.
The first MUX 63 and the second MUX 64 each have two inputs; their function is to select one of the two inputs for output.
The first MUX 63 and the second MUX 64 are configured to: connect the first MA processing resource 61 and the second MA processing resource 62 in series into the path of the first pipeline, in which case the MAQ operates in single processing pipeline (SPP) mode; or connect the first MA processing resource 61 into the path of the first pipeline and the second MA processing resource 62 into the path of the second pipeline, in which case the MAQ operates in double processing pipeline (DPP) mode.
This application does not limit the connection among the first MA processing resource 61, the second MA processing resource 62, the first MUX 63, and the second MUX 64. In a possible implementation, the input of the first MA processing resource 61 is coupled to the first MAQ input 61a, and the output of the first MA processing resource 61 is coupled to the first selection input of the first MUX 63 and the second selection input of the second MUX 64. The third selection input of the first MUX 63 is coupled to the second MAQ input 62a, and the first selection output of the first MUX 63 is coupled to the fourth selection input of the second MUX 64 and to the second MAQ output 62b. The second selection output of the second MUX 64 is coupled to the first MAQ output 61b.
In the embodiments of this application, MA processing means parsing the header data of a packet to generate a match key, looking this key up in an MA flow table to obtain a lookup result, and executing an action according to the lookup result. An MA processing resource is a hardware structure dedicated to MA processing.
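For illustration only, the match-action flow described above (parse header data, generate a match key, look it up in a flow table, execute the resulting action) can be sketched as a minimal software model; the field names, table contents, and actions below are hypothetical examples and are not part of this application:

```python
# Minimal illustrative model of match-action (MA) processing:
# parse header -> build match key -> look up flow table -> execute action.

def parse_header(packet):
    # Hypothetical parser: extract the fields used to build the match key.
    return {"dst_mac": packet["dst_mac"], "vlan": packet["vlan"]}

def ma_process(packet, flow_table):
    header = parse_header(packet)
    match_key = (header["dst_mac"], header["vlan"])      # generate match key
    kind, arg = flow_table.get(match_key, ("drop", None))  # flow-table lookup
    if kind == "forward":     # execute the action from the lookup result
        packet["out_port"] = arg
    elif kind == "drop":
        packet["out_port"] = None
    return packet

flow_table = {("aa:bb:cc:dd:ee:ff", 10): ("forward", 3)}
pkt = ma_process({"dst_mac": "aa:bb:cc:dd:ee:ff", "vlan": 10}, flow_table)
```

A hit on the hypothetical key forwards the packet to port 3; any miss falls back to the default drop action.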
For example, as shown in FIG. 7, the first MUX 63 may connect the first MA processing resource 61 to the second MA processing resource 62, and the second MUX 64 may connect the second MA processing resource 62 to the first MAQ output 61b; that is, the MAQ operates in SPP mode.
After a packet of the first pipeline enters through the first MAQ input 61a of the MAQ 60, it is MA-processed first by the first MA processing resource 61 and then by the second MA processing resource 62, and is output from the first MAQ output 61b of the MAQ 60. It should be noted that, because the second MAQ output 62b is directly connected to the output of the second MA processing resource 62, the second MAQ output 62b also outputs data, identical to the data output from the first MAQ output 61b; the data output from the second MAQ output 62b may simply be left unused.
For example, as shown in FIG. 8, the first MUX 63 may connect the second MA processing resource 62 to the second MAQ input 62a in the path of the second pipeline, and the second MUX 64 may connect the second MA processing resource 62 to the second MAQ output 62b; that is, the MAQ operates in DPP mode.
After a packet of the first pipeline enters through the first MAQ input 61a of the MAQ 60, it is MA-processed by the first MA processing resource 61 and output from the first MAQ output 61b of the MAQ 60. After a packet of the second pipeline enters through the second MAQ input 62a of the MAQ 60, it is MA-processed by the second MA processing resource 62 and output from the second MAQ output 62b of the MAQ 60.
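For illustration only, the two routing configurations of FIG. 7 (SPP) and FIG. 8 (DPP) can be modeled with a short software sketch; the stage functions below are placeholders standing in for the MA processing resources, not the actual hardware behavior:

```python
# Illustrative model of a MAQ with two MA stages and two MUXes that route
# the stages either in series on pipeline 1 (SPP) or one per pipeline (DPP).

def ma1(x):
    return x + ["MA1"]   # placeholder for the first MA processing resource

def ma2(x):
    return x + ["MA2"]   # placeholder for the second MA processing resource

def maq(pkt1, pkt2, mode):
    """Return (first MAQ output, second MAQ output) for the given mode."""
    if mode == "SPP":
        # MUX1 routes MA1's output into MA2; MUX2 routes MA2's output to
        # the first MAQ output. Pipeline 2 carries no packets.
        return ma2(ma1(pkt1)), None
    if mode == "DPP":
        # MUX1 routes the second MAQ input into MA2; MA1 serves pipeline 1
        # and MA2 serves pipeline 2, so both pipelines are active.
        return ma1(pkt1), ma2(pkt2)
    raise ValueError(mode)

spp_out, _ = maq([], None, "SPP")
dpp_out1, dpp_out2 = maq([], [], "DPP")
```

In SPP mode a packet traverses both stages in order; in DPP mode each pipeline's packet traverses exactly one stage.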
It should be noted that FIG. 6 to FIG. 8 only illustrate one example way of connecting the modules in the MAQ. As shown in FIG. 9, in another possible implementation, the first MAQ input 61a is coupled to the first selection input of the first MUX 63 and the second selection input of the second MUX 64. The second MAQ input 62a is coupled to the third selection input of the first MUX 63. The first selection output of the first MUX 63 is coupled to the input of the second MA processing resource 62. The output of the second MA processing resource 62 is coupled to the fourth selection input of the second MUX 64 and to the second MAQ output 62b. The second selection output of the second MUX 64 is coupled to the input of the first MA processing resource 61. The output of the first MA processing resource 61 is coupled to the first MAQ output 61b.
Optionally, as shown in FIG. 10, on the basis of FIG. 6 to FIG. 8, the MAQ 60 further includes a third MA processing resource 66; the second MA processing resource 62 is coupled through the third MA processing resource 66 to the fourth selection input of the second MUX 64 and to the second MAQ output 62b.
Optionally, as shown in FIG. 10, on the basis of FIG. 6 to FIG. 8, the MAQ 60 further includes a fourth MA processing resource 67; the second selection output of the second MUX 64 is coupled through the fourth MA processing resource 67 to the first MAQ output 61b.
As shown in FIG. 11, the above MAQ may operate in SPP mode. The first MUX 63 selects the output of the first MA processing resource 61 and the second MUX 64 selects the output of the third MA processing resource 66, so that the first MA processing resource 61, the second MA processing resource 62, the third MA processing resource 66, and the fourth MA processing resource 67 are connected in series; the first pipeline is active and the second pipeline is idle. A packet of the first pipeline enters through the first MAQ input 61a of the MAQ 60, is processed in turn by the first MA processing resource 61, the second MA processing resource 62, the third MA processing resource 66, and the fourth MA processing resource 67, and is output from the first MAQ output 61b of the MAQ 60. As described in connection with FIG. 7, because the second MAQ output 62b is directly connected to the output of the third MA processing resource 66, the second MAQ output 62b also outputs data; but since that data has not been processed by the fourth MA processing resource 67, it differs from the data output from the first MAQ output 61b.
As shown in FIG. 12, the above MAQ operates in DPP mode. The first MUX 63 selects the second MAQ input 62a of the MAQ 60 and the second MUX 64 selects the output of the first MA processing resource 61, so that the first MA processing resource 61 and the fourth MA processing resource 67 are connected in series, the second MA processing resource 62 and the third MA processing resource 66 are connected in series, and both pipelines are active. A packet of the first pipeline enters through the first MAQ input 61a of the MAQ 60, is processed in turn by the first MA processing resource 61 and the fourth MA processing resource 67, and is output from the first MAQ output 61b of the MAQ 60. A packet of the second pipeline enters through the second MAQ input 62a of the MAQ 60, is processed in turn by the second MA processing resource 62 and the third MA processing resource 66, and is output from the second MAQ output 62b of the MAQ 60.
The M MAQs belonging to the same pipeline resource pool share the same pipeline mode: either all M MAQs operate in SPP mode, or all operate in DPP mode. In other words, it is not permitted for some MAQs to operate in SPP mode while others operate in DPP mode, as this would corrupt data processing.
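For illustration only, the single-mode constraint on a pipeline resource pool can be expressed as a small configuration check; the function below is a hypothetical model for clarity, not part of the chip:

```python
# Illustrative check that every MAQ in one pipeline resource pool is
# configured with the same mode (all "SPP" or all "DPP"), as required above.

def pool_mode(maq_modes):
    """Return the pool's mode, or raise if the configuration is invalid."""
    modes = set(maq_modes)
    if len(modes) != 1 or modes - {"SPP", "DPP"}:
        raise ValueError("all MAQs in a pool must share one valid mode")
    return modes.pop()
```

A mixed configuration such as `["SPP", "DPP"]` is rejected, reflecting the rule that mixing modes within one pool would corrupt data processing.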
As shown in FIG. 13, when all MAQs in the pipeline resource pool 50 operate in SPP mode, i.e. when the pipeline resource pool 50 operates in SPP mode, the first pipeline is active and the second pipeline is idle. A packet of the first pipeline enters through the first pipeline input 51a of the pipeline resource pool 50, is processed in turn by the first packet parsing core 52, the first pipelines of the MAQs 51, and the first packet editing core 54, and is output from the first pipeline output 51b of the pipeline resource pool 50. It should be noted that, as described above for FIG. 7 and FIG. 11, in SPP mode the second output of the last MAQ (MAQ M) also outputs data, which may or may not be identical to the data output from that MAQ's first output, so the data output from the second pipeline output 52b is ignored.
Moreover, in this mode all M MAQs can access the entire storage space of the shared lookup memory 58. If the operating frequency of the first pipeline is Freq, the maximum packet rate in SPP mode is Freq packets per second (PPS).
The ingress pipeline group 42 may have at least NMAQ_MIN_ING_SPP MAQs and at most NMAQ_MAX_ING_SPP MAQs; the egress pipeline group 43 may have at least NMAQ_MIN_EGR_SPP MAQs and at most NMAQ_MAX_EGR_SPP MAQs.
As shown in FIG. 14, when all MAQs in the pipeline resource pool 50 operate in DPP mode, i.e. when the pipeline resource pool 50 operates in DPP mode, both the first pipeline and the second pipeline are active. A packet of the first pipeline enters through the first pipeline input 51a of the pipeline resource pool 50, is processed in turn by the first packet parsing core 52, the first pipelines of the MAQs 51, and the first packet editing core 54, and is output from the first pipeline output 51b of the pipeline resource pool 50. A packet of the second pipeline enters through the second pipeline input 52a of the pipeline resource pool 50, is processed in turn by the second packet parsing core 53, the second pipelines of the MAQs 51, and the second packet editing core 55, and is output from the second pipeline output 52b of the pipeline resource pool 50.
All MA processing resources of the MAQ 60 are coupled to the local lookup memory 65, which may be used to store the lookup table entries used by the MA processing resources of the MAQ 60 for MA processing. By sharing storage resources in this way, a separate memory does not need to be allocated to each MA processing resource, reducing the amount of reserved memory.
The local lookup memory 65 is also coupled to the shared lookup memory 58 outside the MAQ. When an MA processing resource of the MAQ 60 cannot find a matching lookup entry in the local lookup memory 65, it can look the entry up in the shared lookup memory 58. By sharing storage resources in this way, a separate memory does not need to be allocated to each MAQ, reducing the amount of reserved memory.
The first pipeline and the second pipeline of each MAQ can each access half of the storage space of the shared lookup memory 58. If the operating frequency of the first pipeline is Freq, the maximum packet rate in DPP mode is 2*Freq PPS.
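For illustration only, the peak-rate relationship between the two modes can be written out explicitly; the 1 GHz clock value below is an assumed example, not a figure taken from this application:

```python
# Illustrative throughput model: in SPP mode one pipeline runs at the clock
# frequency Freq, giving a peak rate of Freq PPS; in DPP mode both pipelines
# run, so the aggregate peak rate doubles to 2 * Freq PPS.

def peak_pps(freq_hz, mode):
    if mode == "SPP":
        return freq_hz        # single active pipeline
    if mode == "DPP":
        return 2 * freq_hz    # two pipelines active in parallel
    raise ValueError(mode)

freq = 1_000_000_000          # hypothetical 1 GHz pipeline clock
spp = peak_pps(freq, "SPP")   # Freq PPS
dpp = peak_pps(freq, "DPP")   # 2 * Freq PPS
```

The trade-off mirrors the text: SPP offers more MA stages per packet at Freq PPS, while DPP halves the stage depth and lookup space per pipeline but doubles the aggregate packet rate.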
The ingress pipeline group 42 may have at least NMAQ_MIN_ING_DPP MAQs and at most NMAQ_MAX_ING_DPP MAQs; the egress pipeline group 43 may have at least NMAQ_MIN_EGR_DPP MAQs and at most NMAQ_MAX_EGR_DPP MAQs.
With the switching chip provided by the embodiments of this application, when a single pipeline needs a large number of MA processing resources, all MA processing resources can be switched into the first pipeline; when a single pipeline needs fewer MA processing resources, part of the MA processing resources can be switched into the first pipeline and the rest into the second pipeline, increasing the packet processing rate. The number of MA processing resources can thus be adapted to different application scenarios, without every pipeline having to reserve MA processing resources for the maximum usage, which reduces the amount of reserved MA processing resources.
The pipeline resource pool described above operates as follows:
Taking the SPP mode shown in FIG. 7 and FIG. 13 as an example: after receiving the header data of a packet from the first packet parsing core 52 or from the previous MAQ, an MA processing resource in a MAQ of the first pipeline parses the header data to generate a match key, generates a lookup request from the key and a control field, and sends the request to the local lookup memory 65 of its own MAQ.
The local lookup memory 65 determines the destination of the lookup request from the control field. The lookup destination may be the lookup table entries in the local lookup memory 65, or the lookup table entries in the shared lookup memory 58. If the key is to be looked up in the shared lookup memory 58, the local lookup memory forwards the lookup request to the shared lookup memory 58 outside the MAQ. It should be noted that the local lookup memory 65 may pack the lookup requests from different MA processing resources of its MAQ and send them to the shared lookup memory 58 together; the number of requests sent together is no greater than the number of MA processing resources in the MAQ.
After the local lookup memory 65 or the shared lookup memory 58 finishes looking up the key, it sends the lookup result to the corresponding MA processing resource, which executes an action according to the result. It should be noted that the shared lookup memory 58 may pack the lookup results for different MA processing resources of the same MAQ and send them together to that MAQ's local lookup memory 65, which extracts the result for each MA processing resource from the packed response and delivers it to the corresponding MA processing resource.
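For illustration only, the two-level lookup (consulting the local lookup memory 65 first and falling back to the shared lookup memory 58 on a miss, as described earlier) can be sketched as follows; the keys and actions are hypothetical:

```python
# Illustrative sketch of the two-level lookup: each MAQ first consults its
# local lookup memory and falls back to the shared lookup memory on a miss.

def lookup(key, local_table, shared_table):
    if key in local_table:           # hit in the MAQ's local lookup memory
        return local_table[key]
    return shared_table.get(key)     # fall back to the shared lookup memory

local = {"k1": "action_local"}       # hypothetical local entries
shared = {"k2": "action_shared"}     # hypothetical shared entries

r1 = lookup("k1", local, shared)     # resolved locally
r2 = lookup("k2", local, shared)     # resolved in the shared memory
r3 = lookup("k3", local, shared)     # miss in both levels
```

Sharing the second level across all MAQs is what allows each local memory to stay small while still covering scenario-specific table sizes.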
As can be seen, the embodiments of this application adopt a multi-level shared lookup storage technique, reducing the amount of reserved memory.
It should be understood that, in the various embodiments of this application, the sequence numbers of the foregoing processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation of the embodiments of this application.
A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementations shall not be considered beyond the scope of this application.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division into units is merely a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces; indirect couplings or communication connections between devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by a software program, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)).
The foregoing is merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (6)

  1. A switching chip, comprising a switch core, a plurality of ingress pipeline groups, and a plurality of egress pipeline groups, wherein the switch core is configured to switch data from the plurality of ingress pipeline groups onto the plurality of egress pipeline groups,
    the ingress pipeline group or the egress pipeline group comprises a first pipeline input, a second pipeline input, a first pipeline output, a second pipeline output, and M match-action quads (MAQs), M being a positive integer;
    the MAQ comprises a first MAQ input, a second MAQ input, a first MAQ output, and a second MAQ output; the first MAQ output of the m-th MAQ is coupled to the first MAQ input of the (m+1)-th MAQ, and the second MAQ output of the m-th MAQ is coupled to the second MAQ input of the (m+1)-th MAQ, wherein 1 ≤ m < M and m is a positive integer; the first MAQ input of the 1st MAQ is coupled to the first pipeline input, and the second MAQ input of the 1st MAQ is coupled to the second pipeline input; the first MAQ output of the M-th MAQ is coupled to the first pipeline output, and the second MAQ output of the M-th MAQ is coupled to the second pipeline output;
    the path between the first pipeline input and the first pipeline output is a first pipeline, and the path between the second pipeline input and the second pipeline output is a second pipeline;
    the MAQ further comprises a first MA processing resource, a second MA processing resource, a first multiplexer (MUX), and a second MUX;
    the first MUX and the second MUX are configured to: connect the first MA processing resource and the second MA processing resource in series into the path of the first pipeline; or connect the first MA processing resource into the path of the first pipeline and connect the second MA processing resource into the path of the second pipeline.
  2. The switching chip according to claim 1, wherein
    the input of the first MA processing resource is coupled to the first MAQ input, and the output of the first MA processing resource is coupled to a first selection input of the first MUX and a second selection input of the second MUX; a third selection input of the first MUX is coupled to the second MAQ input, and a first selection output of the first MUX is coupled to a fourth selection input of the second MUX and to the second MAQ output; a second selection output of the second MUX is coupled to the first MAQ output.
  3. The switching chip according to claim 2, wherein the MAQ further comprises a third MA processing resource, and the second MA processing resource is coupled through the third MA processing resource to the fourth selection input of the second MUX and to the second MAQ output.
  4. The switching chip according to any one of claims 2 to 3, wherein the MAQ further comprises a fourth MA processing resource, and the second selection output of the second MUX is coupled through the fourth MA processing resource to the first MAQ output.
  5. The switching chip according to any one of claims 1 to 4, wherein the MAQ further comprises a local lookup memory, all MA processing resources of the MAQ are coupled to the local lookup memory, and the local lookup memory is configured to store lookup table entries used by the MA processing resources of the MAQ for MA processing.
  6. The switching chip according to any one of claims 1 to 5, wherein the ingress pipeline group or the egress pipeline group further comprises a shared lookup memory, all MAQs are coupled to the shared lookup memory, and the shared lookup memory is configured to store lookup table entries used by the MAQs for MA processing.
PCT/CN2019/128905 2019-12-26 2019-12-26 Switching chip WO2021128221A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980102667.3A CN114747193B (zh) 2019-12-26 2019-12-26 Switching chip
PCT/CN2019/128905 WO2021128221A1 (zh) 2019-12-26 2019-12-26 Switching chip

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/128905 WO2021128221A1 (zh) 2019-12-26 2019-12-26 Switching chip

Publications (1)

Publication Number Publication Date
WO2021128221A1 true WO2021128221A1 (zh) 2021-07-01

Family

ID=76573813

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/128905 WO2021128221A1 (zh) 2019-12-26 2019-12-26 Switching chip

Country Status (2)

Country Link
CN (1) CN114747193B (zh)
WO (1) WO2021128221A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997031462A1 (en) * 1996-02-22 1997-08-28 Fujitsu Ltd. Low latency, high clock frequency plesioasynchronous packet-based crossbar switching chip system and method
CN101795201A (zh) * 2010-01-28 2010-08-04 中国电子科技集团公司第五十四研究所 多级抗毁交换结构装置
CN105516008A (zh) * 2015-12-04 2016-04-20 北京锐安科技有限公司 数据分流设备及其多用户处理的实现方法
CN205320075U (zh) * 2015-12-31 2016-06-15 上海航天科工电器研究院有限公司 一种基于光纤以太网的多业务数字光端机

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101908032B (zh) * 2010-08-30 2012-08-15 湖南大学 可重新配置处理器集合的处理器阵列
US9880768B2 (en) * 2015-01-27 2018-01-30 Barefoot Networks, Inc. Dynamic memory reallocation for match-action packet processing
CN110233800A (zh) * 2019-05-09 2019-09-13 星融元数据技术(苏州)有限公司 一种开放可编程的报文转发方法和系统


Also Published As

Publication number Publication date
CN114747193A (zh) 2022-07-12
CN114747193B (zh) 2023-02-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19957115; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19957115; Country of ref document: EP; Kind code of ref document: A1)