CN107888521B - Method and device for sharing table resource pool by multiple protocols - Google Patents


Publication number
CN107888521B
CN107888521B
Authority
CN
China
Prior art keywords
entry
table entry
variable
resource pool
lpm
Prior art date
Legal status
Active
Application number
CN201710985768.2A
Other languages
Chinese (zh)
Other versions
CN107888521A (en)
Inventor
伍楠
李霞娟
尤子威
杨健鸷
王克非
Current Assignee
Shenzhen Netforward Microelectronic Co ltd
Original Assignee
Shenzhen Netforward Microelectronic Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Netforward Microelectronic Co ltd filed Critical Shenzhen Netforward Microelectronic Co ltd
Priority to CN201710985768.2A
Publication of CN107888521A
Application granted
Publication of CN107888521B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/10 Packet switching elements characterised by the switching fabric construction
    • H04L49/103 Packet switching elements characterised by the switching fabric construction using a shared central buffer; using a shared memory
    • H04L49/25 Routing or path finding in a switch fabric

Abstract

An embodiment of the invention provides a method and a device for sharing a table entry resource pool among multiple protocols, belonging to the technical field of multi-protocol shared resource pools. The method comprises the following steps: receiving a table lookup request; obtaining, according to the table lookup request, the bucket address of a bucket in a preset table entry resource pool; and reading and writing in the table entry resource pool according to the bucket address; wherein the table entry resource pool is shared by layer-2 (L2) table entries, layer-3 (L3) table entries and longest-prefix-match (LPM) table entries. By presetting a shared table entry resource pool and resolving lookups down to individual bucket addresses within it, the invention provides a fine-grained, continuously adjustable way of partitioning the protocol table entry space, and solves the problem that the table entry spaces of the various protocols cannot be flexibly allocated.

Description

Method and device for sharing table resource pool by multiple protocols
Technical Field
The invention relates to multi-protocol shared resource pool technology, and in particular to a method and a device for sharing a table entry resource pool among multiple protocols.
Background
In ethernet switching ASIC chips that support layer-3 or even layer-4 switching, multiple protocol pipelines are generally used to perform layer-3 and layer-4 protocol processing of ethernet packets in parallel. Each pipeline has multiple access ports, through which the protocol tables need to be read and written simultaneously.
Each protocol table is a set of protocol data structures with one or more external read-write access interfaces; internally it consists of one or more blocks of RAM (random-access memory), in which the data structures are stored in a certain address order. The protocol pipelines read and write each table entry through the read-write ports of the protocol table.
In an actual chip, some application scenarios may require a large table entry capacity for one part of the protocol tables, while other scenarios may require a larger capacity for another part. If all protocol tables are designed for the maximum capacity, hardware resources are wasted in scenarios that do not need such a large table entry capacity. If they are not designed for the maximum capacity, the system design requirements cannot be met in certain specific scenarios.
To solve this problem, many existing ethernet switching chips implement a multi-protocol shared protocol table. In the shared protocol table, the entries required by the protocol processing modules are organized into a number of blocks, and the blocks are then allocated to the protocol processing modules according to the application scenario. Thus, in different application scenarios, a different number of blocks is allocated to each protocol according to its table entry capacity requirements. The configuration of table resources can therefore be optimized, improving hardware utilization without lowering the design targets.
Fig. 1 is a structural diagram of a shared protocol table entry organization in a prior-art chip. As shown in fig. 1, protocol processing modules such as layer-2 (L2), layer-3 (L3) and longest prefix match (LPM) each need to access protocol table entries. To share table entry resources among these protocol processing modules, assume that the chip divides all table entries into 10 banks (physical storage banks) of different specifications, bank0, bank1, ..., bank9, where bank0 and bank1 are dedicated to L2 protocol processing, bank6-bank9 are dedicated to L3 protocol processing, and the middle banks bank2, bank3, bank4 and bank5 are shared table entry storage spaces that can also serve as LPM table entry space when needed. If the table entry capacity required by L2 exceeds the combined capacity of bank0 and bank1 in some scenario, bank2, bank3, bank4 and bank5 can be used in that order; if the capacity required by L3 exceeds that of bank6-bank9, bank5, bank4, bank3 and bank2 can be used in that order. In this process, a given bank can be used by only one protocol processing module under a given configuration, and the entries used by the same protocol processing module are stored contiguously. Such a chip therefore has 5 different configuration modes, corresponding to 5 different application scenarios, as follows:
in the mode 0, L2 uses bank0-bank5, L3 uses bank6-bank9, and LPM does not use any table entry;
in the mode 1, L2 uses bank0-bank4, L3 uses bank5-bank9, and LPM does not use any table entry;
in the mode 2, L2 uses bank0-bank3, L3 uses bank4-bank9, and LPM does not use any table entry;
in the mode 3, L2 uses bank0-bank2, L3 uses bank3-bank9, and LPM does not use any table entry;
in mode 4, L2 uses bank0-bank1, L3 uses bank6-bank9, and LPM uses bank2-bank5.
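For illustration, the 5 fixed modes can be written as a bank-ownership table. This sketch is hypothetical Python (the names and representation are invented here, not from the patent); it also shows why the prior art is coarse: ownership can only change a whole bank at a time.

```python
# Bank-ownership map of the 5 prior-art modes: protocol -> range of owned banks.
MODES = {
    0: {"L2": range(0, 6), "L3": range(6, 10), "LPM": range(0, 0)},
    1: {"L2": range(0, 5), "L3": range(5, 10), "LPM": range(0, 0)},
    2: {"L2": range(0, 4), "L3": range(4, 10), "LPM": range(0, 0)},
    3: {"L2": range(0, 3), "L3": range(3, 10), "LPM": range(0, 0)},
    4: {"L2": range(0, 2), "L3": range(6, 10), "LPM": range(2, 6)},
}

def owner(mode, bank):
    """Return which protocol owns a given bank under a given mode."""
    for proto, banks in MODES[mode].items():
        if bank in banks:
            return proto
    return None
```

In every mode, the 10 banks are partitioned exactly among the protocols; there is no way to give a protocol, say, half of bank3.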
It can be seen that the chip of fig. 1 has only 5 configuration modes, i.e. only 5 possible combinations of L2 and L3 table entry capacities, and only one possible configuration in which the LPM uses table entries at all.
Even in high-end chips, domestic ethernet switch chip designers organize the entries used by the protocol processing units into 18 blocks. In use, each block may be used by one of several protocol processing modules, depending on the current configuration. Only some blocks in such a chip may be configured for use by different protocol processing units; the other blocks are fixedly allocated and cannot be changed at will.
In both of the above schemes, the table entry capacity available to a protocol processing unit can be adjusted for different application scenarios and requirements, but the granularity of adjustment is a whole block: all entries in a block can only belong to the same protocol processing unit.
Such schemes can only be configured in a few fixed modes; although easy to implement, they cannot support more flexible and variable application scenarios. In some application environments it is necessary to configure the entries usable by L2, L3 or even LPM more flexibly, and even to adjust the usable entry capacity at will, which neither of the above two schemes can achieve.
Disclosure of Invention
The embodiment of the invention aims to provide a method and a device for sharing a table entry resource pool among multiple protocols, so as to solve the problem that the table entry spaces of the various protocols cannot be flexibly allocated.
In order to achieve the above object, an embodiment of the present invention provides a method for sharing a table entry resource pool among multiple protocols, the method including: receiving a table lookup request; obtaining, according to the table lookup request, the bucket address of a bucket in a preset table entry resource pool; and reading and writing in the table entry resource pool according to the bucket address; wherein the table entry resource pool is shared by layer-2 (L2) table entries, layer-3 (L3) table entries and longest-prefix-match (LPM) table entries.
Optionally, the table entry resource pool includes one or more physical storage banks, each of which includes one or more buckets, and obtaining the bucket address of a bucket in the preset table entry resource pool according to the table lookup request includes: obtaining the bucket address of one of the one or more buckets through a hash index according to the table lookup request; and finding the corresponding bucket according to the bucket address, matching each entry in that bucket, and returning the entry that matches successfully.
Optionally, the L2 table entries, the L3 table entries and the LPM table entries satisfy the following conditions: the sum of the total depth of the L2 table entries, the total depth of the L3 table entries and the total depth of the LPM table entries is not more than the total capacity of the table entry resource pool; the sum of the starting address of the L2 table entries in the global address space and the total depth of the L2 table entries is not more than the starting address of the L3 table entries in the global address space; the sum of the starting address of the L3 table entries in the global address space and the total depth of the L3 table entries is not more than the starting address of the LPM table entries in the global address space; and the sum of the starting address of the LPM table entries in the global address space and the total depth of the LPM table entries is not more than the total depth of the table entry resource pool.
Optionally, each of the one or more buckets includes one or more logical entries, where a bucket is a physical entry.
Optionally, obtaining the address in the preset table entry resource pool according to the table lookup request includes: setting a first variable, a second variable, a third variable and a fourth variable for the L2/L3/LPM table entries, the initial value of the first variable being all 1s; scanning the total depth of the L2/L3/LPM table entries from the highest bit to the lowest bit and, while the current bit is not 1, shifting the first variable right by 1 bit and padding its highest bit with 0; when the current bit is 1, the third variable is the result of a bitwise AND of the local address of the L2/L3/LPM table entry and the first variable; if the third variable is greater than the total depth of the L2/L3/LPM table entries, the second variable is the third variable minus that total depth, otherwise the second variable is the third variable; and the fourth variable is the starting address of the L2/L3/LPM table entries plus the second variable; wherein the fourth variable is the address in the table entry resource pool to be accessed by the table lookup request.
Correspondingly, the invention also provides a device for sharing a table entry resource pool among multiple protocols, including: a receiving module for receiving a table lookup request; an obtaining module for obtaining, according to the table lookup request, the bucket address of a bucket in a preset table entry resource pool; and a read-write module for reading and writing in the table entry resource pool according to the bucket address; wherein the table entry resource pool is shared by L2 table entries, L3 table entries and longest-prefix-match (LPM) table entries.
Optionally, the table entry resource pool includes one or more physical storage banks, each of which includes one or more buckets, and the obtaining module is further configured to: obtain the bucket address of one of the one or more buckets through a hash index according to the table lookup request; and find the corresponding bucket according to the bucket address, match each entry in that bucket, and return the entry that matches successfully.
Optionally, the L2 table entries, the L3 table entries and the LPM table entries satisfy the following conditions: the sum of the total depth of the L2 table entries, the total depth of the L3 table entries and the total depth of the LPM table entries is not more than the total capacity of the table entry resource pool; the sum of the starting address of the L2 table entries in the global address space and the total depth of the L2 table entries is not more than the starting address of the L3 table entries in the global address space; the sum of the starting address of the L3 table entries in the global address space and the total depth of the L3 table entries is not more than the starting address of the LPM table entries in the global address space; and the sum of the starting address of the LPM table entries in the global address space and the total depth of the LPM table entries is not more than the total depth of the table entry resource pool.
Optionally, each of the one or more buckets includes one or more logical entries, where a bucket is a physical entry.
Optionally, the obtaining module is further configured to: set a first variable, a second variable, a third variable and a fourth variable for the L2/L3/LPM table entries, the initial value of the first variable being all 1s; scan the total depth of the L2/L3/LPM table entries from the highest bit to the lowest bit and, while the current bit is not 1, shift the first variable right by 1 bit, padding its highest bit with 0; when the current bit is 1, take the third variable as the result of a bitwise AND of the local address of the L2/L3/LPM table entry and the first variable; if the third variable is greater than the total depth of the L2/L3/LPM table entries, take the second variable as the third variable minus that total depth, otherwise take the second variable as the third variable; and take the fourth variable as the starting address of the L2/L3/LPM table entries plus the second variable; wherein the fourth variable is the address in the table entry resource pool to be accessed by the table lookup request.
Correspondingly, the invention also provides a switching chip, wherein the switching chip is stored with instructions, and the instructions are used for enabling a machine to execute the method.
Through the above technical solution, by presetting a shared table entry resource pool and resolving lookups down to individual bucket addresses within it, the invention provides a fine-grained, continuously adjustable way of partitioning the protocol table entry space, and solves the problem that the table entry spaces of the various protocols cannot be flexibly allocated.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the embodiments of the invention without limiting the embodiments of the invention. In the drawings:
fig. 1 is an organization diagram of a shared protocol table entry in a chip according to the prior art;
FIG. 2 is a flow chart of a method for sharing a resource pool of table entries by multiple protocols according to the present invention;
fig. 3 is a table entry storage structure diagram of a table entry resource pool according to an embodiment of the present invention;
FIG. 4 shows the storage structure of 8 L2 entries in one bucket of the shared table entry resource pool according to an embodiment of the present invention;
FIG. 5 shows the possible layouts when 4 different types of L3 entries are stored together in one bucket of the shared table entry resource pool according to an embodiment of the present invention;
FIG. 6 shows 4 different layouts of one bucket of LPM entries in the shared table entry resource pool according to an embodiment of the present invention; and
FIG. 7 is a block diagram of an apparatus for sharing a resource pool of table entries according to multiple protocols.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration and explanation only, not limitation.
Fig. 2 is a flowchart of a method for sharing a resource pool of table entries by multiple protocols according to the present invention, as shown in fig. 2, the method includes:
step 201, a table lookup request is received.
Step 202, obtaining, according to the table lookup request, the bucket address of a bucket in the preset table entry resource pool.
Step 203, reading and writing in the table entry resource pool according to the bucket address.
The table entry resource pool is shared by layer-2 (L2) table entries, layer-3 (L3) table entries and longest-prefix-match (LPM) table entries.
For the method of sharing a table entry resource pool among multiple protocols, the table entry resource pool is set up in advance and divided into several physical storage banks, which are used jointly by the L2, L3 and LPM processing components. Different storage spaces can be allocated to each protocol table in advance according to the current application scenario, and the allocation of storage space is controlled by externally supplied configuration signals.
Fig. 3 is a table entry storage structure diagram of a table entry resource pool according to an embodiment of the present invention. As shown in fig. 3, the total capacity of the table entry resource pool is 40K entries, i.e. the pool is 40K deep and implemented with single-port RAM. The 40K pool is divided into 8 physical storage banks, each 5K deep, where the physical storage banks are denoted banks.
Specifically, the table entry resource pool includes one or more physical storage banks, each of which includes one or more buckets, and obtaining the bucket address of a bucket in the preset table entry resource pool according to the table lookup request includes: obtaining the bucket address of one of the one or more buckets through a hash index according to the table lookup request; and finding the corresponding bucket according to the bucket address, matching each entry in that bucket, and returning the entry that matches successfully.
Generally, a table entry resource pool includes a plurality of banks, a bank includes a plurality of buckets, and a bucket holds a plurality of entries. In the ethernet switching chip, the L2, L3 and LPM table entries may be indexed by hashing: after a protocol processing component supplies the key value of a hash index, the key is mapped to a bucket address through a hash function, each entry in the bucket is then matched against the key, and the entry that matches successfully is returned.
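The lookup flow just described (hash the key to a bucket address, then match every entry in the bucket) can be sketched as follows. The hash function and entry layout here are placeholders, not the chip's actual hash or storage format.

```python
def lookup(key, pool, num_buckets, hash_fn=lambda k: k):
    """Map a key to a bucket address via a hash, then match entries in the bucket.

    `pool` maps bucket addresses to lists of entries; `hash_fn` stands in for
    the chip's hash function (illustrative only).
    """
    bucket_addr = hash_fn(key) % num_buckets      # hash index -> bucket address
    for entry in pool.get(bucket_addr, []):       # match each entry in the bucket
        if entry["key"] == key:
            return bucket_addr, entry             # return the successfully matched entry
    return bucket_addr, None                      # miss: no entry in the bucket matched
```

Note that all entries sharing the bucket are candidates; only the key comparison decides which one is returned.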
In order to avoid conflicts between the address spaces of the multiple protocol tables, the L2 table entries, the L3 table entries and the LPM table entries satisfy the following conditions: the sum of the total depth of the L2 table entries, the total depth of the L3 table entries and the total depth of the LPM table entries is not more than the total capacity of the table entry resource pool; the sum of the starting address of the L2 table entries in the global address space and the total depth of the L2 table entries is not more than the starting address of the L3 table entries in the global address space; the sum of the starting address of the L3 table entries in the global address space and the total depth of the L3 table entries is not more than the starting address of the LPM table entries in the global address space; and the sum of the starting address of the LPM table entries in the global address space and the total depth of the LPM table entries is not more than the total depth of the table entry resource pool.
Still taking the total capacity of the table entry resource pool as 40K deep as an example, the allocation of the storage space is defined as follows:
l2_addr_depth+l3_addr_depth+lpm_addr_depth≤40K;
l2_addr_start+l2_addr_depth≤l3_addr_start;
l3_addr_start+l3_addr_depth≤lpm_addr_start;
lpm_addr_start+lpm_addr_depth≤40K.
wherein l2_addr_start represents the starting address of the L2 table entries in the global address space; l2_addr_depth represents the total depth of the L2 table entries; l3_addr_start represents the starting address of the L3 table entries in the global address space; l3_addr_depth represents the total depth of the L3 table entries; lpm_addr_start represents the starting address of the LPM table entries in the global address space; and lpm_addr_depth represents the total depth of the LPM table entries.
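Assuming 40K means 40,000 entries (consistent with the 33.6 Mb total given later: 40,000 x 840 bits), the four constraints can be checked as follows. This is an illustrative Python sketch, not part of the patent.

```python
POOL_DEPTH = 40_000  # assumed "40K deep"; 40,000 x 840 bits = 33.6 Mb

def valid_partition(l2_start, l2_depth, l3_start, l3_depth, lpm_start, lpm_depth):
    """Check the four address-space constraints from the description."""
    return (l2_depth + l3_depth + lpm_depth <= POOL_DEPTH  # total depths fit the pool
            and l2_start + l2_depth <= l3_start            # L2 region ends before L3 starts
            and l3_start + l3_depth <= lpm_start           # L3 region ends before LPM starts
            and lpm_start + lpm_depth <= POOL_DEPTH)       # LPM region fits inside the pool
```

Because the constraints only fix ordering and bounds, any split of the 40K addresses among L2, L3 and LPM is allowed, which is exactly the fine-grained flexibility the fixed-mode prior art lacks.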
If a table lookup request is received from the lookup port of some protocol processing unit, with req_index denoting the local address of the protocol table entry to be accessed, it is first determined which protocol table entry space the requesting port belongs to, and then, from the maximum depth of that protocol table entry space, whether the received local address req_index exceeds the range of the table entry space. If req_index is larger than the depth of the protocol's table entry space, req_index is taken modulo that depth, and the result is the offset from the starting address within the local table entry space. Finally, the offset in the local space is added to the starting address of the protocol table entries, giving the address to be queried in the global space.
Because the modulo operation is relatively complicated to implement in hardware, the invention improves on the original modulo operation. Specifically, obtaining the address in the preset table entry resource pool according to the table lookup request includes: setting a first variable, a second variable, a third variable and a fourth variable for the L2/L3/LPM table entries, the initial value of the first variable being all 1s; scanning the total depth of the L2/L3/LPM table entries from the highest bit downward and, while the current bit is not 1, shifting the first variable right by 1 bit and padding its highest bit with 0; when the current bit is 1, the third variable is the result of a bitwise AND of the local address of the L2/L3/LPM table entry and the first variable; if the third variable is greater than the total depth of the L2/L3/LPM table entries, the second variable is the third variable minus that total depth, otherwise the second variable is the third variable; and the fourth variable is the starting address of the L2/L3/LPM table entries plus the second variable. The fourth variable is the address in the table entry resource pool to be accessed by the table lookup request.
Specific examples are given below.
Suppose a table lookup request is issued from a port of the L2 processing element, the local address of the request is l2_index[15:0], the starting address of the L2 table is l2_addr_start[15:0], and the depth of the L2 table is l2_addr_depth[15:0]. The following steps are then performed:
(1) set a 16-bit variable l2_addr_depth_mask[15:0] with an initial value of all 1s, i.e. 16'hffff;
(2) starting from the highest bit, scan l2_addr_depth[15:0] from left to right; if the current bit is not 1, shift l2_addr_depth_mask[15:0] right by 1 bit, padding the highest bit with 0; if the current bit is 1, stop scanning l2_addr_depth[15:0] and go to (3);
(3) set 16-bit variables l2_real_index[15:0] and l2_mask_index[15:0], where l2_mask_index[15:0] is the result of a bitwise AND of l2_index[15:0] and l2_addr_depth_mask[15:0], i.e. l2_mask_index[15:0] = l2_index[15:0] & l2_addr_depth_mask[15:0];
(4) if l2_mask_index[15:0] > l2_addr_depth[15:0], let l2_real_index[15:0] = l2_mask_index[15:0] - l2_addr_depth[15:0]; otherwise, let l2_real_index[15:0] = l2_mask_index[15:0];
(5) set a 16-bit variable l2_global_index[15:0] = l2_addr_start[15:0] + l2_real_index[15:0]; l2_global_index[15:0] is the address in the global address space to be accessed by the table lookup request issued by the L2 processing unit.
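The five steps can be sketched as a software model of the hardware; this is a hedged Python illustration (names and the 16-bit width follow the description, but it is not the chip's RTL), and it keeps the strict '>' comparison exactly as the text states it.

```python
def mask_modulo(index, start, depth, width=16):
    """Model of steps (1)-(5): replace a hardware-unfriendly modulo with a
    mask derived from the position of the highest set bit of `depth`."""
    mask = (1 << width) - 1                 # (1) all ones, i.e. 16'hffff
    for bit in range(width - 1, -1, -1):    # (2) scan depth from MSB to LSB
        if (depth >> bit) & 1:
            break                           # stop at the first 1 bit
        mask >>= 1                          # shift right, MSB padded with 0
    mask_index = index & mask               # (3) bitwise AND with the mask
    if mask_index > depth:                  # (4) strict '>' as in the text
        real_index = mask_index - depth
    else:
        real_index = mask_index
    return start + real_index               # (5) local offset plus starting address
```

For example, with depth = 0x5000 the loop stops at bit 14, so the mask is 0x7fff; an index of 0x9abc is masked down to 0x1abc, which is within the depth and becomes the local offset directly.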
The L3 processing element and the LPM processing element are handled similarly to the L2 processing element and are not described in detail here.
Specific examples of applying the present invention to L2, L3 and LPM are given below, assuming that one bucket is 840 bits wide.
When the L2 protocol processing unit accesses table entries, a hash index is used to find the bucket in the table entry resource pool, the 8 entries in the bucket are accessed and matched in turn, and the entry that matches successfully is returned; the width of one entry may be 103 bits. A bucket containing 8 entries is only an example; those skilled in the art will understand that a bucket may contain any number of entries. Note that an entry slot may comprise 105 bits, of which only 103 bits are used here.
Fig. 4 shows the storage structure of 8 L2 entries in one bucket of the shared table entry resource pool according to an embodiment of the present invention. In the L2 table, an 840-bit-wide physical table entry is divided into 8 logical table entries of 105 bits each, and each logical table entry contains 1 entry of 103 bits. Note that one bucket is one physical table entry, and a physical table entry may be divided into one or more logical table entries.
When the L3 protocol processing component accesses table entries, it finds the bucket in the table entry resource pool by hash index. L3 entries come in 4 different types, a bucket may contain a mix of the 4 entry types, and the width of each type of entry may be chosen as follows:
(1) ipv4unicast type: each entry may be 86 bits wide;
(2) ipv4multicast type: each entry may be 190 bits wide;
(3) ipv6unicast type: each entry may be 190 bits wide;
(4) ipv6multicast type: each entry may be 347 bits wide.
Similar to the L2 protocol processing unit above, take ipv4unicast as an example, where each entry is 86 bits wide. This is only an example, and the width of an entry may take other values: for an 840-bit bucket in the shared resource pool containing 8 entries, an entry slot is generally 105 bits wide, of which the example above uses only 86 bits. It is equally possible for a bucket to contain 4 entries of 210 bits each, of which 190 bits (as for the ipv4multicast type) or 170 bits may be used.
FIG. 5 shows the possible layouts when the 4 different types of L3 entries in the shared table entry resource pool are stored together in one bucket, according to an embodiment of the present invention. As shown in fig. 5, in an L3 table an 840-bit-wide physical table entry may be divided in several different ways, into logical table entries of 105, 210 and 420 bits. During division, the 840-bit physical table entry is first divided into 8 units of 105 bits each. A 105-bit logical table entry may occupy any single unit; a 210-bit logical table entry may occupy units 0 and 1, units 2 and 3, units 4 and 5, or units 6 and 7; and a 420-bit logical table entry may occupy units 0-3 or units 4-7. Each 105-bit logical entry may contain 1 entry of the 86-bit ipv4unicast type, each 210-bit logical entry may contain 1 entry of the 190-bit ipv4multicast or ipv6unicast type, and each 420-bit logical entry may contain 1 entry of the 347-bit ipv6multicast type.
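One way to read the alignment rules just described is to treat the 840-bit bucket as 8 units of 105 bits and restrict where each logical width may start. The following Python sketch is hypothetical (the slot tables and function name are inventions of this illustration, not the patent's):

```python
UNIT_BITS = 105  # an 840-bit physical entry = 8 units of 105 bits

# Allowed unit positions for each logical-entry width, per the description:
ALLOWED_SLOTS = {
    105: [(u,) for u in range(8)],           # any single unit
    210: [(0, 1), (2, 3), (4, 5), (6, 7)],   # aligned unit pairs
    420: [(0, 1, 2, 3), (4, 5, 6, 7)],       # aligned unit quads
}

def can_place(width_bits, occupied):
    """Return the first allowed slot of `width_bits` whose units are all free,
    or None if the bucket cannot accept an entry of that width."""
    for slot in ALLOWED_SLOTS[width_bits]:
        if not any(u in occupied for u in slot):
            return slot
    return None
```

For instance, once units 0-3 hold a 420-bit ipv6multicast entry, a 210-bit entry can still land on units 4 and 5, but a second 420-bit entry fails if any of units 4-7 is taken.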
When the LPM protocol processing unit accesses table entries, a hash index is used to find the bucket in the table entry resource pool; the bucket may be in one of the following 6 cases:
(1) ipv4 type: 12 entries are accessed consecutively, each entry may be 70 bits wide;
(2) ipv4_1 type: 8 entries are accessed consecutively, each entry may be 105 bits wide;
(3) ipv6_64 type: 8 entries are accessed consecutively, each entry may be 105 bits wide;
(4) ipv6_64_1 type: 6 entries are accessed consecutively, each entry may be 140 bits wide;
(5) ipv6_128 type: 4 entries are accessed consecutively, each entry may be 210 bits wide;
(6) ipv6_128_1 type: 4 entries are accessed consecutively, each of which may be 210 bits wide.
Fig. 6 shows 4 different layouts of a bucket of LPM entries in the shared table entry resource pool according to the embodiment of the present invention. As shown in fig. 6, in the LPM table, if the entries are of the ipv4 type, an 840-bit-wide physical table entry contains 12 entries of 70 bits; for the ipv4_1 or ipv6_64 type, it contains 8 entries of 105 bits; for the ipv6_64_1 type, it contains 6 entries of 140 bits; and for the ipv6_128 or ipv6_128_1 type, it contains 4 entries of 210 bits. As with the L2 and L3 protocol processing sections above, fig. 6 is also only an example.
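The per-type entry counts above are simply the 840-bit bucket width divided by the entry width. A small sketch, assuming the widths listed in cases (1)-(6):

```python
# Entry width in bits for each LPM key type, from cases (1)-(6) above.
LPM_WIDTHS = {
    "ipv4": 70, "ipv4_1": 105, "ipv6_64": 105,
    "ipv6_64_1": 140, "ipv6_128": 210, "ipv6_128_1": 210,
}

def entries_per_bucket(lpm_type, bucket_bits=840):
    """Number of entries of the given LPM type that fit in one 840-bit bucket."""
    return bucket_bits // LPM_WIDTHS[lpm_type]
```

All six widths divide 840 exactly, so no bucket bits are wasted regardless of the type stored.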
Thus, the entry storage width of the whole shared table entry resource pool is 840 bits, and its total capacity is 33.6 Mb. The table entry space available to each protocol processing unit (L2, L3, or LPM) is configured by signals such as L2_addr_start, L2_addr_depth, L3_addr_start, L3_addr_depth, LPM_addr_start, and LPM_addr_depth. Each 840-bit-wide physical table entry is one bucket; the number of entries in each bucket and their actual bit widths differ according to the current protocol type.
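The start/depth configuration can be validated against the partition conditions stated in claim 2. A sketch using the signal names above, where `check_partition` and the bucket count of 40000 (33.6 Mb / 840 bits) are illustrative assumptions:

```python
# Assumed total depth of the pool in 840-bit buckets: 40000 * 840 bits = 33.6 Mb.
TOTAL_DEPTH = 40000

def check_partition(l2_start, l2_depth, l3_start, l3_depth, lpm_start, lpm_depth,
                    total_depth=TOTAL_DEPTH):
    """Validate that the three protocol regions are ordered and fit the pool."""
    assert l2_depth + l3_depth + lpm_depth <= total_depth  # regions fit the pool
    assert l2_start + l2_depth <= l3_start                 # L2 region precedes L3
    assert l3_start + l3_depth <= lpm_start                # L3 region precedes LPM
    assert lpm_start + lpm_depth <= total_depth            # LPM region fits
    return True
```

Because the regions are configured at bucket granularity, the boundary between two protocols can be moved one 840-bit bucket at a time.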
In order to make each table entry in the pool satisfy the format requirements of the L2, L3, and LPM tables, in the present invention one physical table entry is 840 bits wide, and when used by different protocol processing units, one 840-bit physical table entry is divided into different logical table entries.
Fig. 7 is a block diagram of an apparatus for sharing a table entry resource pool among multiple protocols. As shown in Fig. 7, the apparatus includes a receiving module 701, an obtaining module 702, and a read-write module 703. The receiving module 701 is configured to receive a table lookup request; the obtaining module 702 is configured to obtain, according to the table lookup request, the bucket address of a bucket in a preset table entry resource pool; and the read-write module 703 is configured to read and write in the table entry resource pool according to the bucket address. The table entry resource pool is shared by layer L2 table entries, layer L3 table entries, and longest prefix match LPM table entries.
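The receive/obtain/read-write flow can be sketched in a few lines. The hash function, bucket layout, and entry/key format here are illustrative assumptions, not the chip's implementation:

```python
# Minimal sketch of the lookup flow through the three modules. The hash
# function, bucket layout, and entry/key format are illustrative assumptions.
def lookup(pool, key, num_buckets):
    """Receive a lookup request, hash-index to a bucket, match its entries."""
    bucket_addr = hash(key) % num_buckets  # obtaining module: hash index -> bucket address
    bucket = pool[bucket_addr]             # read-write module: access by bucket address
    for entry in bucket:                   # match each table entry in the bucket
        if entry["key"] == key:
            return entry                   # return the successfully matched entry
    return None                            # no entry matched

# Toy pool: 8 buckets, one installed route.
pool = [[] for _ in range(8)]
route = {"key": "192.168.0.1", "next_hop": "port3"}
pool[hash(route["key"]) % 8].append(route)
```

A hit returns the matched entry; a miss returns `None`.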
It should be noted that the specific details and benefits of the apparatus for sharing a table entry resource pool among multiple protocols are similar to those of the corresponding method provided by the present invention, and are not described here again.
Correspondingly, the present invention also provides a switch chip storing instructions, the instructions being used to cause a machine to perform the above method.
Although the embodiments of the present invention have been described in detail with reference to the accompanying drawings, the embodiments of the present invention are not limited to the details of the above embodiments, and various simple modifications can be made to the technical solutions of the embodiments of the present invention within the technical idea of the embodiments of the present invention, and the simple modifications all belong to the protection scope of the embodiments of the present invention.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, the embodiments of the present invention do not describe every possible combination.
The invention supports an Ethernet switch chip with layer-3 switching: when the protocol pipelines of different layers share access to the lookup tables, the whole set of table entries is designed as a shared resource pool. Aiming at the problem that, during Ethernet protocol processing, the space of the various protocol tables cannot be flexibly allocated when multiple protocol processing components share protocol table entries, the invention provides fine-grained, continuously adjustable division of the protocol table space and a method for implementing it. Through the technical scheme of the invention, the table entries actually usable by each protocol processing component can be configured at bucket granularity, so the space division of each table is more flexible; according to the different actual entry widths of the different protocol tables, a table entry resource pool accommodating multiple protocol table formats is designed, meeting the requirement that multiple protocols share access to the pool; and the address calculation is simple, which facilitates hardware implementation.
In addition, any combination of various different implementation manners of the embodiments of the present invention is also possible, and the embodiments of the present invention should be considered as disclosed in the embodiments of the present invention as long as the combination does not depart from the spirit of the embodiments of the present invention.

Claims (9)

1. A method for sharing a table entry resource pool among multiple protocols, the method comprising:
receiving a table lookup request;
obtaining a bucket address of a bucket in a preset table entry resource pool according to the table lookup request; and
reading and writing in the table entry resource pool according to the bucket address;
wherein the table entry resource pool is shared by layer L2 table entries, layer L3 table entries, and longest prefix match LPM table entries;
the table entry resource pool comprises one or more physical storage banks, each of which comprises one or more buckets, and obtaining the bucket address of a bucket in the preset table entry resource pool according to the table lookup request comprises:
obtaining the bucket address of one of the one or more buckets through a hash index according to the table lookup request; and
finding the corresponding bucket according to the bucket address, matching each table entry in the corresponding bucket, and returning the successfully matched table entry.
2. The method of claim 1, wherein the L2 entry, the L3 entry, and the LPM entry satisfy the following condition:
the sum of the total depth of the L2 table entry, the total depth of the L3 table entry and the total depth of the LPM table entry is not more than the total capacity of the table entry resource pool;
the sum of the starting address of the L2 table entry in the global address space and the total depth of the L2 table entry is not more than the starting address of the L3 table entry in the global address space;
the sum of the starting address of the L3 table entry in the global address space and the total depth of the L3 table entry is not more than the starting address of the LPM table entry in the global address space; and
the sum of the starting address of the LPM table entry in the global address space and the total depth of the LPM table entry is not more than the total depth of the table entry resource pool.
3. The method of claim 1, wherein each of the one or more buckets includes one or more logical table entries, and wherein one bucket is one physical table entry.
4. The method of claim 1, wherein obtaining the address in the preset table entry resource pool according to the table lookup request comprises:
setting a first variable, a second variable, a third variable and a fourth variable for the L2 table entry/the L3 table entry/the LPM table entry, wherein the initial value of the first variable is set to 1;
scanning the L2 table entry/the L3 table entry/the LPM table entry from the highest bit to the lowest bit: if the current bit is not 1, shifting the first variable right by 1 bit with the highest bit padded with 0; if the current bit is 1, the third variable is the result of a bitwise AND of the local address of the L2 table entry/the L3 table entry/the LPM table entry and the first variable;
if the third variable is greater than the total depth of the L2 table entry/the L3 table entry/the LPM table entry, the second variable is the third variable minus that total depth; otherwise the second variable is the third variable; and
the fourth variable is the starting address of the L2 table entry/the L3 table entry/the LPM table entry plus the second variable;
wherein the fourth variable is the address to be accessed in the table entry resource pool by the table lookup request.
5. An apparatus for sharing a table entry resource pool among multiple protocols, the apparatus comprising:
a receiving module, configured to receive a table lookup request;
an obtaining module, configured to obtain a bucket address of a bucket in a preset table entry resource pool according to the table lookup request; and
a read-write module, configured to read and write in the table entry resource pool according to the bucket address;
wherein the table entry resource pool is shared by layer L2 table entries, layer L3 table entries, and longest prefix match LPM table entries;
the table entry resource pool comprises one or more physical storage banks, each of which comprises one or more buckets, wherein the obtaining module is further configured to:
obtain the bucket address of one of the one or more buckets through a hash index according to the table lookup request; and
find the corresponding bucket according to the bucket address, match each table entry in the corresponding bucket, and return the successfully matched table entry.
6. The apparatus of claim 5, wherein the L2 entry, the L3 entry, and the LPM entry satisfy the following condition:
the sum of the total depth of the L2 table entry, the total depth of the L3 table entry and the total depth of the LPM table entry is not more than the total capacity of the table entry resource pool;
the sum of the starting address of the L2 table entry in the global address space and the total depth of the L2 table entry is not more than the starting address of the L3 table entry in the global address space;
the sum of the starting address of the L3 table entry in the global address space and the total depth of the L3 table entry is not more than the starting address of the LPM table entry in the global address space; and
the sum of the starting address of the LPM table entry in the global address space and the total depth of the LPM table entry is not more than the total depth of the table entry resource pool.
7. The apparatus of claim 5, wherein each of the one or more buckets includes one or more logical table entries, and wherein one bucket is one physical table entry.
8. The apparatus of claim 5, wherein the obtaining module is further configured to:
set a first variable, a second variable, a third variable and a fourth variable for the L2 table entry/the L3 table entry/the LPM table entry, wherein the initial value of the first variable is set to 1;
scan the L2 table entry/the L3 table entry/the LPM table entry from the highest bit to the lowest bit: if the current bit is not 1, shift the first variable right by 1 bit with the highest bit padded with 0; if the current bit is 1, the third variable is the result of a bitwise AND of the local address of the L2 table entry/the L3 table entry/the LPM table entry and the first variable;
if the third variable is greater than the total depth of the L2 table entry/the L3 table entry/the LPM table entry, the second variable is the third variable minus that total depth; otherwise the second variable is the third variable; and
the fourth variable is the starting address of the L2 table entry/the L3 table entry/the LPM table entry plus the second variable;
wherein the fourth variable is the address to be accessed in the table entry resource pool by the table lookup request.
9. A switch chip having stored thereon instructions for causing a machine to perform the method of any one of claims 1 to 4.
CN201710985768.2A 2017-10-20 2017-10-20 Method and device for sharing table resource pool by multiple protocols Active CN107888521B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710985768.2A CN107888521B (en) 2017-10-20 2017-10-20 Method and device for sharing table resource pool by multiple protocols


Publications (2)

Publication Number Publication Date
CN107888521A CN107888521A (en) 2018-04-06
CN107888521B true CN107888521B (en) 2021-01-01

Family

ID=61781840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710985768.2A Active CN107888521B (en) 2017-10-20 2017-10-20 Method and device for sharing table resource pool by multiple protocols

Country Status (1)

Country Link
CN (1) CN107888521B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113098798B (en) * 2021-04-01 2022-06-21 烽火通信科技股份有限公司 Method for configuring shared table resource pool, packet switching method, chip and circuit
CN113254081B (en) * 2021-06-16 2021-10-08 中国人民解放军国防科技大学 Mirror image reading and writing system and method for control path in exchange chip

Citations (3)

Publication number Priority date Publication date Assignee Title
CN1538663A (en) * 2003-04-16 2004-10-20 华为技术有限公司 Method of finding route table item using ltsh chain table
CN1612562A (en) * 2003-10-28 2005-05-04 武汉烽火网络有限责任公司 Method and device for realizing different IP data packet repeating using strategic flow
CN104283782A (en) * 2014-10-14 2015-01-14 杭州华三通信技术有限公司 Method and device for determining message forwarding paths in multi-protocol label switching network

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9408060B2 (en) * 2014-10-14 2016-08-02 Radius Networks Inc. Interleaving multiple bluetooth low energy advertisements


Non-Patent Citations (1)

Title
"Hash算法在网络处理器中的实现";付仲满,张辉,刘涛,;《计算机工程》;20140915;第40卷(第9期);269-274 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Wu Nan

Inventor after: Li Xiajuan

Inventor after: You Ziwei

Inventor after: Yang Jianzhe

Inventor after: Wang Kefei

Inventor before: Wu Nan

Inventor before: Qi Xingyun

Inventor before: Li Xiajuan

Inventor before: You Ziwei

Inventor before: Yang Jianzhe

Inventor before: Wang Kefei

GR01 Patent grant