WO2020107484A1 - ACL rule classification method, search method and apparatus - Google Patents
- Publication number: WO2020107484A1 (application PCT/CN2018/118782)
- Authority: WIPO (PCT)
- Prior art keywords: chip, dictionary tree, rule, bit, rules
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/40—Network security protocols
Description
- This application relates to the field of storage, and in particular to a rule classification method, search method, and apparatus for an access control list (ACL).
- A routing device can process certain special data packets in order to control the data flow. For example, an access control list (ACL) in a firewall can allow some data packets to pass while intercepting others, for example by discarding the intercepted packets; in Internet Protocol Security (IPSec) applications, an ACL can be used to encrypt the data packets that match its rules and forward the data packets that do not.
- The routing device filters data packets by configuring a series of rules, and these rules can be defined by an ACL.
- TCAM is an expensive resource and occupies a large chip area; for example, the storage area of one TCAM is equivalent to that of five or six random access memories (RAMs), which directly increases the cost of the chip.
- The present application provides an ACL rule classification method and apparatus, which are used to solve the problems of large chip area, high power consumption, and high cost caused by storing ACL rules in TCAM.
- In a first aspect, the present application provides an ACL rule classification method for an access control list, which is applied to classifying ACL rules on a chip.
- This method can be executed by the processor of the device.
- The method includes the following steps: acquiring N rules, where N is a positive integer and N ≥ 2, each of the N rules consists of binary 1s and 0s, and the N rules have the same length; constructing a dictionary tree from the N rules; dividing the dictionary tree into M levels in order from the root node to at least one intermediate node and then from the at least one intermediate node to the tail nodes, where M is a positive integer and M ≥ 1; and storing the information of the M levels in at least one random access storage unit.
- the N rules may be N first rules
- the dictionary tree may be an on-chip dictionary tree.
- The dictionary tree includes a root node and at least two branches, where each branch includes at least one intermediate node and a tail node. Bit numbers are set on the root node and the at least one intermediate node; a bit number is the sequence number corresponding to each bit after the bits of the N rules are sorted in a preset order. An index number is set on each tail node, and the index number is used to indicate at least one rule.
- The method provided in this aspect first constructs the N rules into a dictionary tree whose root node and at least one intermediate node carry bit numbers, and then layers and stores the dictionary tree. After the dictionary tree is layered, only the bit numbers of each node need to be saved; compared with storing the N rules directly, the occupied storage space is reduced. Therefore, at least one small-area random access storage unit can store the information of each level of the dictionary tree, and the index numbers merely indicate the N rules, which avoids using a large-area TCAM to store the rules and saves chip cost and power consumption.
- Optionally, the random access storage unit is a random access memory (RAM).
- The RAM includes static random access memory (SRAM) and dynamic random access memory (DRAM).
- the random access storage unit may also be other storage memories, which is not limited in this application.
- Constructing the dictionary tree from the N rules includes: determining at least one bit number according to a first condition; storing the at least one bit number on at least one node of the dictionary tree, where the at least one node includes the root node and the at least one intermediate node; and storing at least one rule index number on the tail node of each branch.
- The first condition is: the first bit among the at least one bit number divides the N rules into a first rule set and a second rule set, where the value at the first bit in the first rule set is a first value and the value at the first bit in the second rule set is a second value; for example, the first bit divides the N rules into the first rule set and the second rule set according to the first value "1" and the second value "0". If the number of rules in the first rule set or the second rule set is still greater than a first threshold, the division continues according to a second bit or a third bit until the number of rules remaining in each divided set is less than or equal to the first threshold.
- the first threshold is determined by the data bit width of the at least one random access storage unit.
- The N rules are divided into different sets according to the bit numbers of the rules and can then be represented in the dictionary tree in the form of a binary tree, which facilitates classification and search of the N rules.
- At least one of the binary 1s or 0s in the N rules may be represented by a mask "*"; that is, a character that could be either "1" or "0" in a rule can be represented by the mask "*".
- using a mask to represent 1 or 0 in binary can flexibly express a variety of different rules, increasing the flexibility of storing N rules.
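- As a minimal illustration (not the patent's implementation), masked matching can be sketched as follows, treating "*" as a don't-care character; `rule_matches` is a hypothetical helper name:

```python
def rule_matches(rule: str, key: str) -> bool:
    """Return True if every non-"*" character of `rule` equals the corresponding key bit."""
    assert len(rule) == len(key)
    return all(r == "*" or r == k for r, k in zip(rule, key))

# "Rule 1" from the description below: the "*" position matches either value.
print(rule_matches("1011101*010111", "10111010010111"))  # True
print(rule_matches("1011101*010111", "10111011010111"))  # True
```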
- Storing the information of the M levels in at least one random access storage unit includes: storing the information of the M levels in M RAMs, where M ≥ 1.
- Dividing the dictionary tree into M levels in order from the root node to at least one intermediate node and then from the at least one intermediate node to the tail nodes includes: dividing the dictionary tree into M levels according to the data bit width of the at least one random access storage unit, where the first of the M levels includes one root node and P intermediate nodes connected to the root node, the second of the M levels includes no more than P+1 intermediate nodes, and P is a positive integer with P ≥ 1.
- Optionally, P is 6.
- P can also take other values, determined by the data bit width, which is not limited in this embodiment.
- the first level or the second level may further include at least one tail node, and an index number is set on the tail node.
- The information of the M levels includes information of the first level and information of the second level; the information of the first level includes the bit number of the root node and the bit numbers of the P intermediate nodes, and the information of the second level includes the bit numbers of no more than P+1 intermediate nodes.
- the M levels may further include a third level, a fourth level, or more levels, and the specific number of levels is determined by the structure of the dictionary tree.
- The method further includes: storing the at least one rule indicated by the index numbers on all tail nodes of the dictionary tree in X RAMs, where X is a positive integer and X ≥ 1; that is, the N first rules are stored in X random access storage units.
- The X random access storage units may be storage units other than the at least one random access storage unit of the foregoing first aspect, or may be among the at least one random access storage unit of the foregoing first aspect; this is not limited in this embodiment.
- When the dictionary tree of the foregoing first aspect is stored on a chip, it may be referred to as an on-chip dictionary tree; another dictionary tree stored outside the chip is called an off-chip dictionary tree. The method further includes: using one or more of the X random access storage units to store the off-chip dictionary tree, where the rules from which the off-chip dictionary tree is constructed may be the same as or different from the N rules of the on-chip dictionary tree.
- the method further includes: using one or more of the X random access storage units to store at least one ACL rule corresponding to the off-chip dictionary tree.
- Optionally, the method is performed by a core in a processor chip, where the processor chip includes the core and X on-chip RAMs and is coupled with Y off-chip RAMs. The method further includes: constructing at least one second rule into an off-chip dictionary tree, where the at least one second rule is stored in the Y off-chip RAMs; dividing the off-chip dictionary tree into at least one level; and storing the information of each level of the off-chip dictionary tree in the Y off-chip RAMs, where Y is a positive integer and Y ≥ 1.
- The method further includes: selecting at least one of the X on-chip RAMs to store information of at least one level of the off-chip dictionary tree; or selecting at least one of the Y off-chip RAMs to store information of at least one level of the on-chip dictionary tree.
- At least one RAM that stores on-chip rules can thus be used to store the off-chip dictionary tree or off-chip rules, so that any hierarchical storage unit on the chip can be dynamically switched to the off-chip dictionary tree. This realizes resource sharing between on-chip and off-chip storage units, improves the flexibility of resource allocation, and meets different product capacity requirements.
- In a second aspect, the present application also provides an ACL rule search method, which is applied to the dictionary tree constructed in the first aspect.
- The method includes the following steps: acquiring message information, where the content of the message information may be represented by binary 1s and 0s; and searching the content of the message information bit by bit according to the bit numbers on the dictionary tree to determine the rule that matches the content of the message information.
- The dictionary tree is constructed from N rules, where N is a positive integer and N ≥ 2. The dictionary tree includes a root node and at least two branches, where each branch includes at least one intermediate node and one tail node. Bit numbers are set on the root node and the at least one intermediate node; a bit number is the sequence number corresponding to each bit after the bits of the N rules are sorted in a preset order. An index number is set on the tail node, and the index number is used to indicate at least one rule.
- The method provided in this aspect utilizes the characteristics of a dictionary tree: each node in the dictionary tree carries a bit number, the bits of the current message information are looked up one by one according to the bit numbers on the dictionary tree, and the rule matching the content of the message information is finally found according to the indication of the index number on a tail node of the dictionary tree. The method thus realizes a quick search of a dictionary tree composed of N rules.
- the message information includes one or more of the following combinations: address information, message protocol type, port information, etc.
- The address information includes an IP address and a MAC address; the message protocol type includes Internet Protocol Version 4 (IPv4) or Internet Protocol Version 6 (IPv6); and the port information includes port numbers, such as a source port number and a destination port number.
- The content of the message information may combine several of the above items; for example, the message information to be searched is a combination of an IP address and a port number with a total length of 32 bits, where the IP address occupies 16 bits and the port number occupies 16 bits.
- the IP address includes at least one mask, and the mask may represent 1 or 0.
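- A minimal sketch (function and field names are illustrative, not from the patent) of how such a 32-bit search key could be assembled from a 16-bit address field and a 16-bit port number:

```python
def build_key(ip_bits: str, port: int) -> str:
    """Concatenate a 16-bit address field and a 16-bit port into one 32-bit key."""
    assert len(ip_bits) == 16
    return ip_bits + format(port, "016b")

key = build_key("1100000010101000", 8080)
print(key, len(key))  # a 32-bit binary string used as the lookup key
```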
- Searching the address information bit by bit according to the bit numbers on the dictionary tree includes: starting from the first bit number on the root node of the dictionary tree, reading the value of the address information at that bit; if the branch corresponding to that value does not lead to a tail node, continuing with the bit number on the intermediate node connected to that branch, until the tail node of the branch is reached.
- The method further includes: determining at least one rule according to the index number of the tail node, and comparing each of the at least one rule with all characters of the address information one by one until a rule identical to the content of the address information is found.
- Determining the rule that matches the target IP address includes: if at least one rule indicated by the index number on the tail node is the same as the address information, determining that a rule matching the address information exists; if every rule indicated by the index number differs from the content of the address information, determining that no matching rule exists and the search fails.
- the dictionary tree described in this aspect may be divided into M levels.
- The dividing process is as follows: the dictionary tree is divided into M levels in order from the root node to at least one intermediate node and then from the at least one intermediate node to the tail nodes, where M is a positive integer and M ≥ 1, and the information of the M levels is stored in at least one random access storage unit.
- The method further includes: if two or more dictionary trees are stored in the random access storage unit, searching the two or more dictionary trees according to the content of the message information; the specific search method is the same as the single-tree search method described above in this aspect, that is, searching bit by bit according to the bit number of each node in each dictionary tree.
- the search efficiency is improved.
- the method provided in this aspect realizes searching and switching of a single dictionary tree or multiple dictionary trees on a chip, thereby achieving the beneficial effects of on-chip small-capacity storage and high-performance search.
- the present application further provides an ACL rule classification device, which is used to implement the foregoing first aspect and the ACL rule classification method in various implementation manners of the first aspect.
- the device includes at least one functional unit or module.
- the at least one functional unit is an acquisition unit or a processing unit.
- the present application also provides an ACL rule search device, which is used to implement the ACL rule search method in the foregoing second aspect and various implementation manners of the second aspect.
- the device includes at least one functional circuit.
- the at least one functional circuit is an acquisition circuit or a processing circuit, etc.
- the device is a processor with a write function.
- The present application also provides a communication device, including a processor and a memory, where the processor is coupled to the memory and the memory is used to store instructions; the processor is used to call the instructions so that the communication device performs the ACL rule classification method in the foregoing first aspect and its various implementations.
- the processor in the device is also used to execute the instructions in the memory, so that the communication device executes the ACL rule search method in the foregoing second aspect and various implementation manners of the second aspect.
- The present application also provides a computer-readable storage medium including instructions that, when run on a computer, implement the method described in the foregoing first aspect or its various implementations, or implement the method described in the foregoing second aspect or its various implementations.
- The present application also provides a computer program product that, when run on a computer, implements the method described in the foregoing first aspect or its various implementations, or implements the method described in the foregoing second aspect or its various implementations.
- The ACL rule classification method and apparatus provided in this application first construct N rules into a dictionary tree whose root node and at least one intermediate node carry bit numbers, and then layer and store the dictionary tree. Because only the bit numbers of each node need to be saved after the dictionary tree is layered, the occupied storage space is reduced compared with storing the N rules directly, so at least one small-area random access storage unit can store the information of each level of the dictionary tree, with the index numbers merely indicating the N rules. This avoids using a large-area TCAM to store rules and saves chip cost and power consumption.
- The ACL rule search method and apparatus provided in this application use the characteristics of the dictionary tree to look up the address information in the current message bit by bit according to the bit numbers on the dictionary tree, and finally find the rule matching the address information through the index number of a tail node of the dictionary tree. The method realizes a quick search of a dictionary tree composed of N rules.
- FIG. 1 is a schematic diagram of a chip structure of a combination of on-chip TCAM and external commercial TCAM provided by this application;
- FIG. 2 is a schematic flowchart of an ACL rule classification method provided by an embodiment of the present application.
- FIG. 3 is a schematic diagram of an ACL rule classification method provided by an embodiment of the present application.
- FIG. 4 is a schematic structural diagram of a dictionary tree provided by an embodiment of this application.
- FIG. 5 is a schematic diagram of a hierarchy of a dictionary tree provided by an embodiment of this application.
- FIG. 6 is a schematic diagram of another dictionary tree hierarchical division provided by an embodiment of this application.
- FIG. 7 is a schematic diagram of a correspondence between an algorithm tree and a storage unit provided by an embodiment of this application.
- FIG. 8 is a schematic structural diagram of an on-chip algorithm tree and an on-chip rule bucket provided by an embodiment of the present application;
- FIG. 9 is a schematic flowchart of an ACL rule search method provided by an embodiment of the present application.
- FIG. 10a is a schematic diagram of another ACL rule classification method provided by an embodiment of the present application.
- FIG. 10b is a schematic diagram of an ACL rule search method provided by an embodiment of the present application.
- FIG. 11 is a schematic structural diagram of an on-chip and off-chip ACL rule mixed storage provided by an embodiment of the present application.
- FIG. 12 is a schematic diagram of multiple SRAM resource sharing provided by an embodiment of the present application.
- FIG. 13a is a schematic structural diagram of an on-chip rule bucket before switching to an off-chip algorithm tree according to an embodiment of the present application;
- FIG. 13b is a schematic structural diagram of an on-chip rule bucket after switching to an off-chip algorithm tree according to an embodiment of the present application;
- FIG. 14 is a schematic structural diagram of a parallel search of two dictionary trees provided by an embodiment of the present application.
- FIG. 15 is a schematic structural diagram of an ACL rule classification device according to an embodiment of the present application.
- FIG. 16 is a schematic structural diagram of an ACL rule search device according to an embodiment of the present application.
- FIG. 17 is a schematic structural diagram of a communication device according to an embodiment of the present application.
- A common carrier for storing ACL rules is a ternary content addressable memory (TCAM), as shown in FIG. 1.
- In the TCAM, each ACL rule is stored as an X value and a Y value; depending on the combination of the X value and the Y value, three states can be represented, namely match 0, match 1, and mismatch. Table 1 shows the content stored in each bit of the TCAM. Through these three states, both exact search and fuzzy search of ACL rules can be achieved.
- For example, there are 1000 rules, each being an IP address 32 bits in length. The 1000 rules are stored in the on-chip TCAM through configuration; when a target address arrives, it is used as the search key, also 32 bits long, and the on-chip TCAM storage unit searches and compares the 1000 stored rules and outputs the search result.
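- A conceptual sketch only (this is not the TCAM circuit or its X/Y bit encoding): each stored bit can require 0, require 1, or be a don't-care, which is what enables both exact and fuzzy matching of a 32-bit key:

```python
REQUIRE_0, REQUIRE_1, DONT_CARE = 0, 1, 2

def tcam_entry_matches(entry: list, key_bits: str) -> bool:
    """entry holds one state per bit; key_bits is a binary string of equal length."""
    return all(state == DONT_CARE or str(state) == bit
               for state, bit in zip(entry, key_bits))

# An entry that matches any 32-bit key whose two leading bits are "10".
entry = [REQUIRE_1, REQUIRE_0] + [DONT_CARE] * 30
print(tcam_entry_matches(entry, "10" + "0" * 30))  # True
```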
- TCAM occupies a large chip area, which increases the cost of the chip and the design cost of the circuit board of the whole device.
- TCAM also has high power consumption, which increases the heat-dissipation and energy costs of the whole device. Therefore, there is a demand to store ACL rules in a storage unit with a small footprint, such as RAM, while still supporting fast access to and search of a large number of rules.
- The embodiments of the present application provide an ACL rule classification method that offers a solution for flexible expansion of ACL capacity and performance. While satisfying fast access to and search of a large number of rules, it also avoids the problems of large chip area, high power consumption, and high cost caused by storing a large number of rules in TCAM.
- the ACL rule classification method may include:
- Step 201: Obtain N first rules, where N is a positive integer and N ≥ 2, each of the N first rules is composed of binary 1s and 0s, and the N first rules have the same length.
- the “acquisition” may be understood as: the device receives N first rules from the outside; or may also be understood as: the device itself generates N rules and obtains the N rules. This embodiment does not limit the source from which the device obtains the N rules.
- the first rule is an ACL rule, that is, N first rules are stored in the ACL.
- each rule is an IP address composed of several bits.
- the same length of the N rules means that the N rules occupy the same number of bits, for example, all are 32-bit or 16-bit IP addresses.
- each of the binary IP addresses can be generated from each original IP address after binary conversion.
- At least one of the binary 1s or 0s in the N rules may be represented by a mask "*"; that is, when the 0 or 1 of a certain bit is not of concern, the mask "*" is used. For example, "rule 1" is 1011101*010111, where the mask "*" (at bit number 6, with bits numbered from 0 starting at the right) indicates that the character in that bit can be either "1" or "0".
- For example, N is 5. It should be understood that more rules may be included, for example 1000 rules; in this embodiment, only 5 rules are used for description.
- Step 202: Construct an on-chip dictionary tree according to the N first rules.
- the constructed dictionary tree is an on-chip dictionary tree.
- The dictionary tree includes a root node and at least two branches, where each branch includes at least one intermediate node and a tail node, and bit numbers are set on the root node and the at least one intermediate node.
- A bit number is the sequence number corresponding to each bit after the bits of the N rules are sorted in a preset order. An index number is set on each tail node, and the index number is used to indicate at least one rule.
- the process of constructing the dictionary tree according to the N rules includes:
- Determining at least one bit number according to a first condition, where the first condition is that the first bit among the at least one bit number divides the N rules into a first rule set and a second rule set: the value at the first bit in the first rule set is a first value, and the value at the first bit in the second rule set is a second value. If the number of rules in the first rule set or the second rule set is still greater than a first threshold, the division continues according to a second bit or a third bit until each resulting set contains no more than the first threshold number of rules.
- The first threshold is determined by the data bit width of the at least one random access storage unit.
- The preset order means that all bits of a rule are numbered from right to left starting from sequence number 0, for example 0, 1, 2, 3, 4, and so on. Alternatively, they may be numbered from left to right; this is not limited in this embodiment.
- In the on-chip dictionary tree construction method provided in this embodiment, as shown in FIG. 3, after the 5 rules are traversed, bit 5 is selected as the first bit, which divides the 5 rules into a first rule set and a second rule set.
- The first rule set includes {rule 2, rule 3, rule 5} and the second rule set includes {rule 1, rule 4}, where the value at bit 5 of the rules in the first rule set is the second value "1" and the value at bit 5 in the second rule set is the first value "0". That is, starting from the root node of the dictionary tree, each node splits as a binary tree, where the character at the bit position on one branch is 0 and the character at the bit position on the other branch is 1.
- the first value is “0" and the second value is “1". This embodiment does not limit whether the first value and the second value are “1" or "0", or the mask "*”.
- The first threshold may be determined by the data bit width of the random access storage unit. The data bit width can be understood as the amount of data that can be transferred by a memory (or video memory) at one time; this amount of data is further related to the memory bandwidth of the random access storage unit.
- the memory bandwidth refers to the data transmission capability provided by the memory bus, which can be determined by the design of the memory chip and the memory module.
- For example, if the first threshold is 1, the number of rules that the random access storage unit can transmit at one time is 1.
- If the first threshold is 7, the amount of data that the storage unit can transmit each time does not exceed 7 rules.
- The method further includes: determining whether the number of rules in the first rule set and in the second rule set is greater than 1, and if so, continuing to divide the two rule sets. The first rule set {rule 2, rule 3, rule 5} is divided according to bit 2 into two sets, {rule 2} and {rule 3, rule 5}, where the value of bit 2 in {rule 2} is "1" and the value of bit 2 in {rule 3, rule 5} is "0".
- {Rule 2} contains only one rule, which does not exceed the first threshold of 1, so the intermediate node of that branch is connected to a tail node and the condition for terminating the division is satisfied.
- The set {rule 3, rule 5} is further divided according to bit 12, generating {rule 3} and {rule 5}; once the first condition is met, the division stops. Similarly, the second rule set {rule 1, rule 4} is divided according to bit 13 into {rule 4} and {rule 1}, respectively.
- Thus the bit numbers used to divide the 5 rules are bits 5, 2, 12, and 13, and the dictionary tree is built from them.
- The bit number of the root node of the dictionary tree is set to bit 5, and the bit number of the intermediate node connected to the root node is bit 13.
- Rule 1 and rule 4 are divided according to binary 0 or 1.
- The bit number of another intermediate node is bit 2, which splits off rule 2; finally, rule 5 and rule 3 are separated by bit 12.
- Each intermediate node (with bit numbers 13, 2, and 12) is connected to a tail node, and an index number is set on each tail node to indicate the storage location of each rule.
- the index number is an IP address, or contains address information of the IP address.
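- The recursive bit-splitting described above can be sketched as follows (a minimal illustration under stated assumptions: rules are equal-length strings of "0", "1", and "*", bits are numbered from 0 starting at the right, and the node classes and the bit-selection heuristic `pick_bit` are illustrative, not the patent's implementation):

```python
from dataclasses import dataclass

@dataclass
class Leaf:
    rule_indices: list        # the "index number": which rules this tail node points to

@dataclass
class Node:
    bit_no: int               # bit number stored on the root/intermediate node
    zero: object              # branch taken when the key bit is 0
    one: object               # branch taken when the key bit is 1

def bit_at(rule: str, bit_no: int) -> str:
    return rule[len(rule) - 1 - bit_no]      # bit 0 is the rightmost character

def pick_bit(rules, indices) -> int:
    # One simple heuristic: choose the bit that splits the set most evenly.
    def balance(b):
        ones = sum(bit_at(rules[i], b) == "1" for i in indices)
        return abs(len(indices) - 2 * ones)
    return min(range(len(rules[0])), key=balance)

def build(rules, indices, threshold):
    if len(indices) <= threshold:            # set is small enough: attach a tail node
        return Leaf(indices)
    b = pick_bit(rules, indices)
    zeros = [i for i in indices if bit_at(rules[i], b) in "0*"]   # "*" goes both ways
    ones = [i for i in indices if bit_at(rules[i], b) in "1*"]
    if len(zeros) == len(indices) or len(ones) == len(indices):
        return Leaf(indices)                 # the chosen bit cannot split this set further
    return Node(b, build(rules, zeros, threshold), build(rules, ones, threshold))

rules = ["0110", "1010", "1*00", "0001", "1111"]   # 5 short example rules (4 bits wide)
tree = build(rules, list(range(len(rules))), threshold=1)
```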
- Step 203: Divide the dictionary tree into M levels in order from the root node to at least one intermediate node and then from the at least one intermediate node to the tail nodes, where M is a positive integer and M ≥ 1.
- dividing the dictionary tree into M levels in step 203 includes: dividing the dictionary tree into M levels according to the data bit width of the at least one random access storage unit.
- The divided first level includes one root node (bit 5) and three intermediate nodes (bits 13, 2, and 12), and further includes 5 tail nodes, each of which carries an index number.
- the dictionary tree may be divided in units of 7 nodes.
- The dictionary tree is divided into M levels, from the first level and the second level up to the M-th level, where the first level of the M levels includes 1 root node and P intermediate nodes connected to the root node, and the second level of the M levels includes no more than P+1 intermediate nodes.
- the first level includes 1 root node and 6 intermediate nodes, and the second level also includes 7 intermediate nodes.
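- One simple way to sketch this level division (reusing the Node/Leaf classes and the `tree` built above; how nodes are actually packed into a RAM word depends on the data bit width and is not prescribed here) is to chunk the root and intermediate nodes into groups of at most 7 in breadth-first order:

```python
from collections import deque

def split_into_levels(root, nodes_per_level: int = 7):
    levels, current, queue = [], [], deque([root])
    while queue:
        node = queue.popleft()
        if isinstance(node, Leaf):
            continue                      # tail nodes are kept with the rule buckets
        current.append(node)
        if len(current) == nodes_per_level:
            levels.append(current)
            current = []
        queue.append(node.zero)
        queue.append(node.one)
    if current:
        levels.append(current)
    return levels                         # the information of level i goes into RAM i

levels = split_into_levels(tree)
print([len(level) for level in levels])
```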
- Step 204: Store the information of the M levels in at least one random access storage unit.
- the random access storage unit includes random access memory (random access memory, RAM). Further, the RAM includes static random access memory (static RAM, SRAM) and dynamic random access memory (dynamic RAM, DRAM), etc.
- Step 204 may be implemented in the following two ways:
- One implementation is to store M levels of information in one RAM.
- the first level is stored in the first storage unit.
- The capacity of the first storage unit is large enough to carry all the information of the dictionary tree, and the first storage unit can be divided into multiple levels, such as level 1, level 2, level 3, and so on, with each level storing the information of one level of the dictionary tree. This implementation also includes storing the 5 rules indicated by the 5 index numbers in other storage units.
- Another implementation is to store the information of the M levels of the dictionary tree in M storage units, where M ≥ 2.
- The dictionary tree shown in FIG. 6 includes multiple branches, where the divided first-level information can be stored in the first storage unit, the second-level information in the second storage unit, and the M-th-level information in the M-th storage unit.
- The information of the first level includes the bit number of the root node and the bit numbers of the P intermediate nodes, and the information of the second level includes the bit numbers of no more than P+1 intermediate nodes.
- Figure 6 shows the process of mapping a set of ACL rules to a multi-level dictionary tree.
- Each level of the dictionary tree can store up to 7 intermediate nodes; when the tree exceeds 7 intermediate nodes, the excess is placed in the next level.
- The gray circles shown in FIG. 6 are tail nodes, and each tail node is provided with an index number or an address space index, which is used to indicate specific rules.
- The method provided in this embodiment first constructs the N rules into a dictionary tree whose root node and at least one intermediate node carry bit numbers, and then layers and stores the dictionary tree. After the dictionary tree is layered, only the bit numbers of each node need to be saved; compared with storing the N rules directly, the occupied storage space is reduced, so at least one small-area random access storage unit can store the information of each level of the dictionary tree, with index numbers merely indicating the N rules. This avoids using a large-area TCAM to store rules and saves chip cost and power consumption.
- In addition, X storage units are provided, where X is a positive integer greater than 1. The method further includes: storing the at least one rule indicated by the index numbers on all tail nodes of the dictionary tree in the X storage units. Further, the N rules may be stored in one storage unit, or in two or more storage units.
- The X storage units used to store the N rules may be called rule buckets; each rule bucket may carry at least one rule, and the same rule may be stored in one or more rule buckets.
- the X memory cells for storing N rules are SRAM or DRAM.
- The dictionary tree described in this application may also be referred to as an algorithm tree; the algorithm tree stored on the chip may be called an on-chip algorithm tree, and correspondingly an algorithm tree stored outside the chip, for example in an external chip, may be called an off-chip algorithm tree.
- The storage units that store the N rules on the chip may be called on-chip rule buckets, and the storage units that store the rules corresponding to an off-chip algorithm tree may be called off-chip rule buckets.
- The ratio of the capacity configured for storing the on-chip dictionary tree to the capacity of the on-chip rule buckets is about 1:6 to 1:7.
- The method provided in this embodiment may be executed by a core in a processor chip, where the processor chip includes the core and X on-chip RAMs and is coupled with Y off-chip RAMs. The method further includes: constructing at least one second rule into an off-chip dictionary tree, where the at least one second rule is stored in the Y off-chip RAMs; dividing the off-chip dictionary tree into at least one level; and storing the information of each level of the off-chip dictionary tree in the Y off-chip RAMs, where Y is a positive integer and Y ≥ 1.
- The method further includes: selecting at least one of the X on-chip RAMs to store information of at least one level of the off-chip dictionary tree; or selecting at least one of the Y off-chip RAMs to store information of at least one level of the on-chip dictionary tree.
- The on-chip dictionary tree (on-chip algorithm tree) is divided into M levels, from level 1 to level M, and stored in the first storage unit.
- The on-chip rule bucket is divided into multiple levels, for example from level 1 to level X, and the on-chip rule bucket of each level is stored in one storage unit.
- The method provided in this embodiment stores the first rules corresponding to the on-chip dictionary tree in the X RAMs and stores the second rules corresponding to the off-chip dictionary tree in the Y RAMs. This embodiment thus achieves small-capacity storage of the on-chip dictionary tree and large-capacity storage of the off-chip dictionary tree, and can further meet different needs.
- an ACL rule search method is also provided. As shown in FIG. 9, the method may be executed by a network device, such as a router.
- the method includes:
- Step 901: Obtain message information.
- the content of the message information may be represented by a binary representation of 1 or 0.
- the message information includes one or more of the following: address information, message protocol type, port information, etc.
- the address information includes an IP address and a MAC address
- The message protocol type includes Internet Protocol Version 4 (IPv4) or Internet Protocol Version 6 (IPv6).
- the port information includes a port number, such as a source port number and a target port number.
- the 0 or 1 constituting the content of the message information can also be represented by a mask "*".
- the message information is obtained through a message header.
- Step 902: Search the content of the message information bit by bit according to the bit numbers of the dictionary tree to determine the rule that matches the message information.
- The dictionary tree is constructed from N first rules, where N is a positive integer and N ≥ 2. The dictionary tree includes a root node and at least two branches, where each branch includes at least one intermediate node and one tail node. Bit numbers are set on the root node and the at least one intermediate node; a bit number is the sequence number corresponding to each bit after the bits of the N rules are sorted in a preset order. An index number is set on the tail node, and the index number is used to indicate at least one rule.
- The dictionary tree is divided into M levels in order from the root node to at least one intermediate node and then from the at least one intermediate node to the tail nodes, where M is a positive integer and M ≥ 1, and the information of the M levels is stored in at least one RAM.
- The at least one bit number is determined according to a first condition, where the first condition is that the first bit among the at least one bit number divides the N rules into a first rule set and a second rule set, the value at the first bit in the first rule set is a first value, and the value at the first bit in the second rule set is a second value; if the number of rules in the first rule set or the second rule set is greater than a first threshold, the first rule set and the second rule set continue to be divided according to a second bit or a third bit, until the number of rules remaining in each divided rule set is less than or equal to the first threshold.
- the first threshold is determined by the data bit width of the at least one RAM.
- In step 902, searching the content of the message information bit by bit according to the bit numbers of the dictionary tree specifically includes: starting from the bit number on the root node of the dictionary tree, reading the value of the message content at that bit; if the branch corresponding to that value does not lead to a tail node, continuing with the bit number on the intermediate node connected to that branch, until the tail node of the branch is reached.
- Determining the rule that matches the content of the message information specifically includes: if at least one rule indicated by the index number on the tail node is the same as the content of the message information, determining that a rule matching the content of the message information exists and the search is successful; otherwise, determining that no matching rule exists, that is, the search fails.
- In one example, a total of 12 rules, rule 1 to rule 12, are included, and the 12 rules are divided in order of bit number to generate the dictionary tree (algorithm tree) shown in FIG. 10b. The first bit number set on the root node is bit 8, which divides the 12 rules into two rule sets; the two rule sets are then divided further until the number of remaining rules in each set does not exceed the first threshold.
- Each intermediate node is connected to a tail node, and the index number on each tail node is used to indicate the location of the corresponding rules.
- The tail nodes indicate 5 rule buckets, namely: a first rule bucket {rule 6, rule 7}, a second rule bucket {rule 2, rule 9}, a third rule bucket {rule 1, rule 3, rule 4}, a fourth rule bucket {rule 5, rule 8, rule 10}, and a fifth rule bucket {rule 9, rule 11, rule 12}.
- In step 901, the IP address indicated in the address information to be searched is 111101010101; in step 902, the bits of the IP address are looked up one by one against the bit numbers of the dictionary tree shown in FIG. 10b to find the matching rule.
- The bit number of the root node of the dictionary tree is bit 8, and the value of the IP address at bit 8 is "1"; following the indication of the dictionary tree, the next bit number is bit 13, and the value of the IP address corresponding to bit 13 is "1". It is then determined that the rules contained in the fifth rule bucket are the candidates to be matched.
- By comparison, rule 11 {1111*10101*1} is the same as the IP address {111101010101}, so the rule matching the address information this time is rule 11.
- the mask "*" in rule 11 can represent either “1" or "0".
- That is, when both masks in rule 11 take the value 0, rule 11 is identical to the IP address corresponding to the address information.
- If more than one rule matches, the rule with the highest priority can be selected as the matching rule according to the priority order of all matching rules.
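- A minimal sketch of this lookup (reusing the Node, Leaf, `bit_at`, `rule_matches`, `tree`, and `rules` names from the earlier sketches, none of which are the patent's implementation): walk the tree by the bit numbers on the nodes, then compare the key against every rule indicated by the tail node's index numbers:

```python
def lookup(tree, rules, key: str):
    node = tree
    # Follow the bit numbers stored on the root and intermediate nodes.
    while isinstance(node, Node):
        node = node.one if bit_at(key, node.bit_no) == "1" else node.zero
    # The tail node's index numbers point into a rule bucket; compare the key
    # against each candidate rule, honoring "*" as a don't-care character.
    matches = [i for i in node.rule_indices if rule_matches(rules[i], key)]
    return matches          # an empty list means the search failed

print(lookup(tree, rules, "1010"))   # using the 4-bit example rules defined above
```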
- This embodiment is based on the first embodiment. To avoid congestion when a large amount of SRAM data is aggregated, the multi-level architecture of the dictionary tree can be multiplexed with the algorithm tree of the off-chip ACL rules.
- a technical solution of on-chip and off-chip ACL rule mixed storage is presented.
- the on-chip dictionary tree is referred to as a first dictionary tree
- the off-chip dictionary tree is referred to as a second dictionary tree.
- the rules used in the construction of the second dictionary tree and the N rules used in the construction of the first dictionary tree may be the same or different.
- the on-chip dictionary tree may also be called an on-chip algorithm tree.
- the off-chip dictionary tree may be called an off-chip algorithm tree.
- this embodiment also provides a method for sharing on-chip and off-chip storage units. Specifically, the method includes: using at least one of the X storage units to store an off-chip dictionary tree (second dictionary tree), or at least one off-chip rule.
- the other storage units in the X storage units may be used to store N rules associated with the first dictionary tree.
- Among the X storage units, a storage unit that stores the N rules of the first dictionary tree may be called an on-chip rule bucket, and a storage unit that stores the rules of the second dictionary tree may be called an off-chip rule bucket.
- The method is executed by a core in a processor chip, where the processor chip includes the core and X on-chip RAMs and is coupled with Y off-chip RAMs. The method further includes: constructing the at least one second rule into an off-chip dictionary tree, where the at least one second rule is stored in the Y off-chip RAMs; dividing the off-chip dictionary tree into at least one level; and storing the information of each level of the off-chip dictionary tree in the Y off-chip RAMs, where Y is a positive integer and Y ≥ 1.
- The method further includes: selecting at least one of the X on-chip RAMs to store information of at least one level of the off-chip dictionary tree; or selecting at least one of the Y off-chip RAMs to store information of at least one level of the on-chip dictionary tree, where the on-chip dictionary tree is composed of the N first rules.
- the dictionary tree is the on-chip dictionary tree or the off-chip dictionary tree.
- The method further includes: determining, through a mode control signal, whether the off-chip dictionary tree or the on-chip dictionary tree is used as the dictionary tree.
- Mode switching between the on-chip rule bucket and the off-chip dictionary tree can be determined by a control signal configured in a register. For example, with a 1-bit register, if the control signal currently indicated by the register is "0", the X RAMs are used in the mode of storing on-chip rule buckets; if the currently indicated control signal is "1", any one or more of the X RAMs are used in the mode of storing the off-chip algorithm tree.
- The processor or processing chip can determine the storage mode currently in use by reading the bit value of the control signal in the register. For example, if the read bit value is "0", the on-chip rule-bucket mode is currently in use; when mode switching is required for the X RAMs, the control register outputs a control signal "1" to instruct the processor to switch from the current on-chip rule-bucket mode to the mode of storing the off-chip algorithm tree, thereby achieving free switching between modes, improving storage flexibility, and meeting different capacity requirements.
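- A minimal sketch of this mode switch (the register layout and the class and method names are assumptions for illustration, not the patent's hardware interface): one mode bit per storage level selects whether that level's RAM holds an on-chip rule bucket ("0") or off-chip algorithm-tree data ("1"):

```python
ON_CHIP_RULE_BUCKET = 0
OFF_CHIP_ALGO_TREE = 1

class ModeRegister:
    def __init__(self, num_levels: int):
        self.bits = [ON_CHIP_RULE_BUCKET] * num_levels

    def read(self, level: int) -> int:
        return self.bits[level]

    def switch_to_off_chip_tree(self, level: int) -> None:
        # Before flipping the bit, the rules held at this level would be migrated
        # to other levels or to off-chip DRAM, as described for FIG. 13a/13b below.
        self.bits[level] = OFF_CHIP_ALGO_TREE

reg = ModeRegister(num_levels=14)        # 14 levels, as in the capacity example below
reg.switch_to_off_chip_tree(level=7)     # level 7 now stores off-chip algorithm-tree data
print(reg.read(7))                       # 1
```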
- The on-chip rule buckets on the left, from level 1 to level X-1, implement the storage of on-chip ACL rules, while the storage units on the right, from level X to level Y, implement the storage of the off-chip algorithm tree.
- the off-chip rule buckets are stored in off-chip memory.
- The storage unit at level X is switched from storing the original on-chip rules to storing the off-chip algorithm tree, thereby allowing free switching between on-chip rule-bucket storage units and off-chip algorithm-tree storage units.
- The on-chip rule-bucket processing logic handles the comparison logic for on-chip rule buckets, and the off-chip algorithm-tree processing logic handles the search logic for the off-chip algorithm tree.
- The multiple SRAMs in the middle are a collection of storage units that store either on-chip rule-bucket data structures or off-chip algorithm-tree data structures.
- Each level can switch between the on-chip rule bucket and the off-chip algorithm tree through a mode configuration register, thereby achieving small capacity and low latency for storing on-chip ACL rules, and large capacity with higher latency for storing off-chip ACL rules.
- Users can deploy one or more DRAMs to store off-chip ACL rules, thereby achieving flexible switching of storage resources between on-chip and off-chip.
- First, the rules stored at level 7 are moved to other levels or off-chip; for example, the rules originally stored at level 7 are moved to the off-chip DRAM.
- the algorithm trees corresponding to these rules need to be moved to the off-chip algorithm tree together.
- the storage unit at level 7 can be switched to store the off-chip algorithm tree, as shown in FIG. 13b.
- After the switch, the storage unit can hold a larger set of off-chip ACL rules, but the on-chip capacity for storing ACL rules becomes correspondingly smaller.
- For example, if each level of SRAM (each storage unit) is 4.5 Mbit and there are 14 levels of on-chip rule buckets/off-chip algorithm trees, the ACL capacity corresponding to different allocations between on-chip rule buckets and the off-chip algorithm tree can be derived accordingly.
- The capacity needed to store an off-chip algorithm tree built from N1 rules is generally smaller than the capacity occupied by a rule bucket storing N2 rules, because storing the algorithm tree only requires some bit numbers, whereas storing the rule bucket requires saving all the rules and therefore takes much more space.
- According to different requirements, a storage unit currently used for the off-chip algorithm tree can also be switched back to storing the on-chip rule bucket; for the specific process, refer to the above process of switching from the on-chip rule bucket to the off-chip algorithm tree, which is not repeated here.
- The method provided in this embodiment implements resource sharing between on-chip storage units and off-chip storage units through a multi-level search structure, and dynamically switches between multiple storage units that store on-chip rule buckets or off-chip algorithm trees, improving the utilization of storage resources to meet the needs of different application scenarios.
- this embodiment can further improve the search efficiency of address information, such as keys.
- The specific method includes: through configuration, dynamically switching to a storage structure of two or more dictionary trees. As shown in FIG. 14, a random access storage unit that stores the on-chip algorithm tree can store two algorithm trees, each including level 1 to level M; in addition, a storage unit stores the rule bucket corresponding to each algorithm tree.
- The two algorithm trees can be searched at the same time, that is, two pipeline structures are processed in parallel, which provides a performance of two searches per clock cycle, thereby achieving multi-pipeline simultaneous search and improving search efficiency.
- This embodiment can also be extended to three or more parallel pipeline structures to realize fast search, and the allocation of capacity resources among the multiple on-chip algorithm trees can be configured through registers; the capacities of the on-chip algorithm tree and the off-chip rule bucket, as well as the switching between them, can likewise be configured through registers.
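- A minimal sketch of the dual-pipeline idea (reusing `build` and `lookup` from the earlier sketches; whether the two pipelines serve one key against a split rule set or two independent keys is a design choice, and the sketch shows the two-key variant, issuing one lookup per tree per "cycle"; real hardware would run the pipelines concurrently rather than sequentially in software):

```python
def dual_pipeline_search(tree_a, rules_a, tree_b, rules_b, key_pairs):
    results = []
    for key_a, key_b in key_pairs:        # one pair of keys ~ one clock cycle
        results.append((lookup(tree_a, rules_a, key_a),
                        lookup(tree_b, rules_b, key_b)))
    return results

rules_b = ["00*1", "1101"]                # a second, independent example rule set
tree_b = build(rules_b, [0, 1], threshold=1)
print(dual_pipeline_search(tree, rules, tree_b, rules_b, [("1010", "0001")]))
```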
- an ACL rule classification device is further provided, which is used to implement the ACL rule classification method described in the foregoing embodiment.
- the ACL rule classification device includes an acquisition unit 1501 and a processing unit 1502.
- The device may also include more units or modules, such as a sending unit and a storage unit; this is not limited in this embodiment.
- The obtaining unit 1501 is configured to obtain N rules, where N is a positive integer and N ≥ 2, each of the N rules is composed of binary 1s and 0s, and the N rules have the same length.
- The processing unit 1502 is configured to construct a dictionary tree according to the N rules, divide the dictionary tree into M levels, where M is a positive integer and M ≥ 1, and store the information of the M levels in at least one random access storage unit.
- The dictionary tree includes a root node and at least two branches, where each branch includes at least one intermediate node and a tail node, and bit numbers are set on the root node and the at least one intermediate node. A bit number is the sequence number corresponding to each bit after the bits of the N rules are sorted in a preset order. An index number is set on the tail node, and the index number is used to indicate at least one rule.
- the dictionary tree formed by the N rules is an on-chip dictionary tree, and the N rules are N first rules.
- The processing unit 1502 is specifically configured to: determine at least one bit number according to the first condition; store the at least one bit number on at least one node of the first dictionary tree, where the at least one node includes the root node and the at least one intermediate node; and store at least one rule index number on the tail node of each branch.
- The first condition is that the first bit among the at least one bit number divides the N rules into a first rule set and a second rule set, where the value at the first bit in the first rule set is the first value and the value at the first bit in the second rule set is the second value; if the number of rules in the first rule set or the second rule set is greater than the first threshold, the first rule set and the second rule set continue to be divided according to a second bit or a third bit, until the number of rules remaining in each divided rule set is less than or equal to the first threshold, where the first threshold is determined by the data bit width of the at least one storage unit.
- At least one of the binary 1s or 0s in the N rules may be represented by a mask "*".
- The processing unit 1502 is specifically configured to store the information of the M levels in M RAMs, where M is a positive integer and M ≥ 2.
- The processing unit 1502 is specifically configured to divide the first dictionary tree into M levels according to the data bit width of the at least one RAM, where the first level of the M levels includes 1 root node and P intermediate nodes connected to the root node, the number of intermediate nodes included in the second level of the M levels does not exceed P+1, and P is a positive integer with P ≥ 1.
- Optionally, P is 6.
- The information of the first level includes the bit number of the root node and the bit numbers of the P intermediate nodes, and the information of the second level includes the bit numbers of no more than P+1 intermediate nodes.
- The processing unit 1502 is further configured to store the at least one rule indicated by the index numbers on all tail nodes of the first dictionary tree in X storage units, where X is a positive integer and X ≥ 1.
- The processing unit 1502 is further configured to use at least one of the X storage units to store the on-chip dictionary tree or at least one first rule indicated by the index numbers of the on-chip dictionary tree.
- The processing unit 1502 is further configured to construct the at least one second rule into an off-chip dictionary tree, where the at least one second rule is stored in the Y off-chip RAMs; divide the off-chip dictionary tree into at least one level; and store the information of each level of the off-chip dictionary tree in the Y off-chip RAMs, where Y is a positive integer and Y ≥ 1.
- The processing unit 1502 is further configured to select at least one of the X on-chip RAMs to store information of at least one level of the off-chip dictionary tree, or select at least one of the Y off-chip RAMs to store information of at least one level of the on-chip dictionary tree.
- the device includes a core in a processor chip, the processor chip includes the core and X on-chip RAMs, and the processor chip is coupled with Y off-chip RAMs.
- the ACL rule search device is used to search for ACL rules.
- The search device includes an acquisition circuit 1601 and a processing circuit 1602. The acquisition circuit 1601 is used to obtain message information, where the content of the message information may be represented by binary 1s and 0s; the processing circuit 1602 is used to search the content of the message information bit by bit according to the bit numbers on the dictionary tree to determine at least one rule that matches the content of the address information.
- the acquisition circuit 1601 and the processing circuit 1602 may also be integrated into one circuit, such as a processing circuit.
- the search device is a processing chip or processor, and has a data writing function.
- the method further includes: determining at least one rule according to the index number of the tail node, and comparing each rule in the at least one rule with all characters included in the address information one by one until a Up to the rule that the content of the address information is the same.
- the dictionary tree is constructed by N first rules, N is a positive integer and N ⁇ 2, the dictionary tree includes a root node and at least two branches, where each branch includes at least one middle node and one tail node , The root node and the at least one intermediate node are provided with bit numbers, and the bit numbers are the sequence numbers corresponding to each bit after the N rules are sequentially sorted according to a preset order. An index number is set on the tail node, and the index number is used to indicate at least one first rule.
- the processing circuit 1602 is specifically configured to search for the message information starting from the number of the first bit on the root node of the dictionary tree The value in the corresponding bit in the content, if the branch connected to the value in the bit is not a tail node, continue to search for the bit number of the intermediate node connected to the branch until the tail node of the branch is found until.
- the processing circuit 1602 is specifically configured to determine that a matching rule exists if at least one first rule indicated by the index number on the tail node is the same as the content of the message information; otherwise, to determine that no matching rule exists.
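A standalone sketch of this two-stage match, assuming dict-encoded nodes, bit numbering from the right starting at 0 (one of the orderings the description allows), and '*' as the mask character; none of these encodings are mandated by the patent.

```python
# Illustrative sketch only: walk the tree by bit number, then check the rules
# in the indicated bucket character by character, with '*' matching 0 or 1.
def bit_at(key: str, bit_no: int) -> str:
    return key[len(key) - 1 - bit_no]

def matches(rule: str, key: str) -> bool:
    return len(rule) == len(key) and all(r in ("*", k) for r, k in zip(rule, key))

def lookup(root: dict, key: str, buckets: dict):
    node = root
    while "bit" in node:                       # interior node carries a bit number
        node = node["1"] if bit_at(key, node["bit"]) == "1" else node["0"]
    for rule in buckets[node["index"]]:        # tail node -> bucket of rules
        if matches(rule, key):
            return rule                        # first hit; priority ordering is assumed
    return None                                # lookup failed

# Tiny hand-built tree: test bit 3 first, then bit 1 on the '1' side.
tree = {"bit": 3,
        "0": {"index": 0},
        "1": {"bit": 1, "0": {"index": 1}, "1": {"index": 2}}}
buckets = {0: ["0*00"], 1: ["1*01"], 2: ["1110", "1*11"]}
assert lookup(tree, "0100", buckets) == "0*00"
assert lookup(tree, "1010", buckets) is None
```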
- the processing circuit 1602 is specifically configured to divide the dictionary tree into M levels in the order from the root node to at least one intermediate node, and then from the at least one intermediate node to the tail node, where M is a positive integer and M ≥ 1, and the information of the M levels is stored in at least one RAM.
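A possible way to model the level division in software is sketched below; grouping interior nodes in breadth-first order into chunks of at most P+1 bit numbers is an assumed policy, since the text only bounds how many nodes a stored level may hold.

```python
# Illustrative sketch only: divide the constructed tree into storage levels
# that each hold at most P+1 interior-node bit numbers, so that one level
# fits one RAM word/row. Breadth-first chunking is an assumed policy.
from collections import deque

P = 6

def split_into_levels(root: dict, max_nodes: int = P + 1):
    """Return a list of levels; each level is a list of node bit numbers."""
    levels, current = [], []
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if "bit" not in node:          # tail nodes carry index numbers, not bit numbers
            continue
        current.append(node["bit"])
        if len(current) == max_nodes:
            levels.append(current)
            current = []
        queue.append(node["0"])
        queue.append(node["1"])
    if current:
        levels.append(current)
    return levels

# Reusing the dict node encoding of the lookup sketch above; the bit numbers
# 5, 13, 2 and 12 follow the small example tree described in this document,
# while the placement of tail-node index numbers is arbitrary.
tree = {"bit": 5,
        "0": {"bit": 13, "0": {"index": 0}, "1": {"index": 1}},
        "1": {"bit": 2,
              "0": {"bit": 12, "0": {"index": 2}, "1": {"index": 3}},
              "1": {"index": 4}}}
assert split_into_levels(tree) == [[5, 13, 2, 12]]
```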
- the number of the at least one bit is determined according to a first condition, where the first condition is that the first bit among the at least one bit number divides the N first rules into a first rule set and a second rule set, the value on the first bit in the first rule set is a first value, and the value on the first bit in the second rule set is a second value;
- if the number of rules in both the first rule set and the second rule set is greater than a first threshold, the first rule set and the second rule set continue to be divided according to a second bit or a third bit, until the number of rules remaining in each divided rule set is less than or equal to the first threshold, where the first threshold is determined by the data bit width of the at least one storage unit.
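The following sketch shows one way the first condition could drive tree construction: pick an unused bit that splits the current rule set as evenly as possible, send masked rules to both sides, and stop once a subset fits under the first threshold. The balance heuristic and the dict node encoding are assumptions; in practice the threshold would come from the RAM data bit width.

```python
# Illustrative sketch only: choose dividing bits and split the rule set until
# every remaining subset fits in one bucket (the first threshold). A rule with
# '*' at the chosen bit is placed in both subsets, as the description allows.
def bit_value(rule: str, bit_no: int) -> str:
    return rule[len(rule) - 1 - bit_no]          # bits numbered from the right, from 0

def split(rules, bit_no):
    zeros = [r for r in rules if bit_value(r, bit_no) in ("0", "*")]
    ones  = [r for r in rules if bit_value(r, bit_no) in ("1", "*")]
    return zeros, ones

def choose_bit(rules, used):
    best, best_score = None, None
    for b in range(len(rules[0])):
        if b in used:
            continue
        zeros, ones = split(rules, b)
        if len(zeros) == len(rules) or len(ones) == len(rules):
            continue                              # this bit separates nothing
        score = abs(len(zeros) - len(ones))       # prefer the most even split
        if best_score is None or score < best_score:
            best, best_score = b, score
    return best

def build(rules, threshold, used=frozenset()):
    """Return a dict-encoded tree node (same encoding as the sketches above)."""
    if len(rules) <= threshold or (bit := choose_bit(rules, used)) is None:
        return {"index": tuple(rules)}            # tail node; a real implementation stores a bucket index
    zeros, ones = split(rules, bit)
    return {"bit": bit,
            "0": build(zeros, threshold, used | {bit}),
            "1": build(ones, threshold, used | {bit})}

rules = ["1011101*010111", "0101*0011*1110", "0110100010*010",
         "01101010010001", "11010*00111010"]
tree = build(rules, threshold=1)
assert "bit" in tree                              # at least one dividing bit was chosen
```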
- the processing circuit 1602 is further configured to construct the at least one second rule into an off-chip dictionary tree, where the at least one second rule is stored in Y off-chip RAMs; and to divide the off-chip dictionary tree into at least one level and store the information of each level of the off-chip dictionary tree in the Y off-chip RAMs, where Y is a positive integer and Y ≥ 1.
- the processing circuit 1602 is further configured to select at least one of the X on-chip RAMs to store the information of at least one level of the off-chip dictionary tree, or to select at least one of the Y off-chip RAMs to store the information of at least one level of the on-chip dictionary tree, where the on-chip dictionary tree is composed of the N first rules.
- the dictionary tree is the on-chip dictionary tree or the off-chip dictionary tree;
- the processing circuit 1602 is specifically configured to determine, through a mode control signal, the off-chip dictionary tree as the dictionary tree to be searched, or the on-chip dictionary tree as the dictionary tree to be searched.
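A hedged sketch of the mode control described above, modelling a per-level 1-bit register that decides whether a RAM level currently serves the on-chip rule buckets or the off-chip dictionary tree; the register layout, the two mode constants and the migration hook are illustrative assumptions.

```python
# Illustrative sketch only: a 1-bit mode register per RAM level selecting
# between on-chip rule-bucket storage and off-chip dictionary-tree storage.
ON_CHIP_RULE_BUCKET = 0
OFF_CHIP_TREE = 1

class LevelModeRegister:
    def __init__(self, num_levels: int):
        self.mode = [ON_CHIP_RULE_BUCKET] * num_levels

    def switch_to_off_chip_tree(self, level: int, migrate):
        """Switch one level after its resident rules have been migrated."""
        migrate(level)                      # caller moves the stored rules elsewhere first
        self.mode[level] = OFF_CHIP_TREE

    def target_for_lookup(self, level: int) -> str:
        return ("off-chip dictionary tree"
                if self.mode[level] == OFF_CHIP_TREE
                else "on-chip rule bucket")

regs = LevelModeRegister(num_levels=14)
regs.switch_to_off_chip_tree(level=7, migrate=lambda lvl: None)   # placeholder migration
assert regs.target_for_lookup(7) == "off-chip dictionary tree"
assert regs.target_for_lookup(0) == "on-chip rule bucket"
```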
- the ACL search method provided in this embodiment looks up the bits of the message information content one by one in the order of the bit numbers of the nodes in the dictionary tree, enabling fast lookup of the message information to be queried.
- in addition, when multiple dictionary trees are stored, the rules matching the IP address can be searched in parallel through two pipelines, which further improves search efficiency.
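As a software stand-in for the two-pipeline arrangement (not a description of the hardware), the sketch below issues the same key to several dictionary trees concurrently and keeps the first hit; the thread pool and the "first tree wins" priority rule are assumptions.

```python
# Illustrative sketch only: query two or more dictionary trees in parallel
# with the same key and return the highest-priority match.
from concurrent.futures import ThreadPoolExecutor

def parallel_lookup(trees_and_buckets, key, lookup_fn):
    """trees_and_buckets: list of (root, buckets) pairs searched concurrently;
    lookup_fn: a single-tree lookup such as the sketch given earlier."""
    with ThreadPoolExecutor(max_workers=len(trees_and_buckets)) as pool:
        hits = list(pool.map(lambda tb: lookup_fn(tb[0], key, tb[1]),
                             trees_and_buckets))
    matched = [h for h in hits if h is not None]
    return matched[0] if matched else None      # assumed priority: earlier tree wins
```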
- the communication device may be the device in the foregoing embodiment, such as a network device, or may also be a component (such as a chip) for the network device.
- the communication device can implement the ACL rule classification method and search method in the foregoing embodiments.
- the communication device may include a transceiver 171 and a processor 172; it may further include a memory 173, which may be used to store code or data.
- the communication device may further include more or less components, or combine some components, or arrange different components, which is not limited in this application.
- the processor 172 is the control center of the communication device; it connects the various parts of the entire communication device through various interfaces and lines, and performs the various functions of the communication device or processes data by running or executing the software programs or modules stored in the memory 173 and calling the data stored in the memory.
- the processor includes a transmission module.
- the transmission module is used to obtain the N rules.
- the transmission module may also be used to obtain other information, such as address information.
- the processor 172 may be composed of an integrated circuit (integrated circuit, IC), for example, a single packaged IC, or multiple connected packaged ICs with the same or different functions.
- the processor may include only a central processing unit (CPU), or may be a GPU, a digital signal processor (DSP), or a combination of a CPU and an NP.
- the processor may further include a hardware chip.
- the hardware chip may be an ASIC, a programmable logic device (programmable logic device, PLD), or a combination thereof.
- the PLD may be a complex programmable logic device (complex programmable logic device, CPLD), a field programmable gate array (field-programmable gate array, FPGA), a general array logic (generic array logic, GAL), or any combination thereof.
- the processor 172 includes a processing chip, and the processing chip may include one or more random access storage units for storing on-chip dictionary trees and off-chip rule buckets.
- the one or more random access memory units include but are not limited to random access memory (random access memory, RAM), and further, the RAM includes static random access memory (static RAM, SRAM) and dynamic random access memory (dynamic RAM, DRAM).
- the memory may include a non-volatile memory, such as a flash memory, a hard disk drive (HDD), or a solid state drive (SSD); the memory may also include a combination of the foregoing types of memories.
- the transceiver 171 is used to establish a communication channel for the communication device to connect to the network through the communication channel, thereby achieving communication transmission between the communication device and other devices.
- the transceiver 171 may be a module that completes the transceiver function.
- the transceiver may include a communication module such as a wireless local area network (WLAN) module, a Bluetooth module or a baseband module, and a radio frequency (RF) circuit corresponding to the communication module, which is used for WLAN communication, Bluetooth communication, infrared communication and/or cellular communication system communication, such as wideband code division multiple access (WCDMA) and/or high speed downlink packet access (HSDPA).
- the transceiver is used to control the communication of various components in the communication device, and can support direct memory access.
- the processor 172 may be used to implement all or part of the steps in the foregoing embodiments.
- the function to be implemented by the acquiring unit 1501 in FIG. 15 may be implemented by the transmission module on the processor chip;
- the function to be implemented by the processing unit 1502 may be implemented by the processor chip of the communication device.
- both the acquisition circuit 1601 and the processing circuit 1602 shown in FIG. 16 can be implemented by the processor 172.
- the communication device may be a network device, and the network device includes but is not limited to a router, a switch, etc.
- the network device may also include a chip.
- the communication device in this embodiment may further include an external chip, and the external chip is used to store an off-chip algorithm tree and an off-chip rule bucket.
- the external chip may include a processing unit and at least one random access storage unit, where the processing unit is used to implement the ACL rule classification and search functions for the off-chip algorithm tree, and the at least one random access storage unit is used to store the off-chip dictionary tree and off-chip rules.
- in a specific implementation in which the communication device is a router, in addition to storing the dictionary tree and rule bucket on the processor chip, the router can also attach an external chip through a communication interface to implement the classification and search of off-chip ACL rules, thereby meeting the needs of small-capacity on-chip storage and large-capacity off-chip storage.
- an embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a program that, when executed, may include some or all of the steps of each embodiment of the ACL rule classification method and the ACL rule search method provided by this application. The storage medium may be a magnetic disk, an optical disc, a ROM, a RAM, or the like.
- embodiments of the present application also provide a computer program product containing instructions, which when run on a computer, causes the computer to perform all or part of the steps of the methods described in the foregoing embodiments.
- the computer program product includes one or more computer instructions.
- the computer may be a general-purpose computer, a dedicated computer, a computer network, or other programmable devices.
- the computer instructions may be stored in a computer-readable storage medium or transferred from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server or data center to another website, computer, server or data center via a wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) connection.
- the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media.
- the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium, for example, a solid state disk (SSD).
Landscapes
- Engineering & Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
本申请公开了一种访问控制列表ACL的规则分类方法、查找方法和装置。其中所述ACL规则分类方法包括:网络设备获取N条规则,根据所述N条规则构建字典树,将所述第一字典树按照从所述根节点到至少一个中间节点,再从所述至少一个中间节点到尾节点的顺序划分成M个层级;将所述M个层级的信息存储在至少一个随机存取存储单元中,本方法由于对字典树分层后只需保存字典树上各个节点的比特位编号,相比于原来存储N条规则而言,占用的存储空间减小,因此可以通过占用面积较小的随机存取存储单元来保存字典树中各个层级的信息,并利用索引编号来只指示N条规则,从而避免使用占用面积较大的TCAM来存储规则,节约了芯片的成本和功耗。
Description
本申请涉及存储领域,尤其是涉及一种访问控制列表ACL的规则分类方法、查找方法和装置。
路由设备具有对一些特殊数据包进行处理的功能,以达到控制数据流的目的。比如在防火墙中利用访问控制列表(access control list,ACL)可以允许一些数据包通过,也可以拦截一些数据包,例如将这些拦截的数据包丢弃;在网络安全协议(Internet Protocol security,IPSec)中应用ACL可以将符合规则的数据包进行加密处理,而将其它不符合规则的数据包正常转发。路由设备通过配置一系列的规则来选择数据包,其中选择的规则可以通过ACL确定。利用定义ACL可以实现对数据包的有效分流,并且针对ACL的特点,设计研发了一系列算法,例如华为公司设计的自适应决策位压缩(adaptive compress bit choice ACBC)算法来解决各种不同应用场景对ACL查找性能、容量需求的问题。
目前大部分ACL解决方案都采用三元物可寻址存储(ternary content addressable memory,TCAM)来实现,包括芯片上部集成TCAM和通过高速串行接口ILA(Interlaken Look-Aside)接口外挂商用TCAM芯片。如图1所示,传统方案采用内部集成TCAM和外挂商用TCAM芯片组合的方式来实现对ACL规则的存储功能,其中,为了实现片上小容量ACL规则查找和片外大容量ACL规则查找,芯片需要额外增加一个ILA接口来实现和外挂TCAM芯片的对接。
由于TCAM是一种昂贵的资源,并且TCAM占用芯片的面积较大,比如存储一个TCAM的面积相当于五、六条随机存储记忆体(random access memory,RAM),进而直接导致芯片的成本升高以及整机电路板设计成本提高,另外TCAM消耗的功耗大,导致整机散热成本和能耗成本上升,无法满足大容量存储的要求。
发明内容
本申请提供了一种ACL的规则分类方法和装置,用于解决TCAM存储ACL规则占用芯片的面积和功耗大、成本高的问题。
为解决上述问题,本申请提供了以下技术方案:
第一方面,本申请提供了一种访问控制列表ACL的规则分类方法,应用于对芯片上部ACL规则的划分,该方法可以由设备的处理器来执行,所述方法包括以下步骤:获取N条规则,N为正整数且N≥2,所述N条规则中的每条规则均由二进制表示的1或0组成,并且所述N条规则的长度均相同;根据所述N条规则构建字典树,将所述字典树按照从所述根节点到至少一个中间节点,再从所述至少一个中间节点到尾节点的顺序划分成M个层级,M为正整数且M≥1;将所述M个层级的信息存储在至少一个随机存取存储单元中;
其中,所述N条规则可以是N条第一规则,所述字典树可以是片上字典树。
所述字典树包括根节点和至少两条分支,其中每条所述分支包括至少一个中间节点和 一个尾节点,所述根节点和所述至少一个中间节点上设置有比特位的编号,所述比特位的编号为所述N条规则按照预设顺序依次排序后每个比特位所对应的次序号,所述尾节点上设置有索引编号,所述索引编号用于指示至少一条规则。
本方面提供的方法,先将N条规则构建成字典树,所述字典树的根节点和至少一个中间节点上设置有比特位的编号,然后再对该字典树进行分层和存储,由于对字典树分层后只需保存字典树上各个节点的比特位编号,相比于原来存储N条规则而言,占用的存储空间减小,因此可以通过占用面积较小的至少一个随机存取存储单元来保存字典树中各个层级的信息,并利用索引编号来只指示N条规则,从而避免了使用占用面积较大的TCAM来存储规则,节约了芯片的成本和功耗。
可选的,所述随机存取存储单元为随机存取存储器RAM。进一步地,所述RAM包括静态随机存储器SRAM和动态随机存储器DRAM,此外,所述随机存取存储单元还可以为其他的存储记忆体,本申请对此不予限制。
结合第一方面,在第一方面的一种可能的实现中,根据所述N条规则构建字典树包括:按照第一条件确定至少一个所述比特位的编号;将所述至少一个比特位的编号存储在所述字典树的至少一个节点上,所述至少一个节点包括所述根节点和所述至少一个中间节点;将至少一条规则的索引编号存储在每条分支的尾节点上。
可选的,所述第一条件为:所述至少一个比特位的编号中的第一比特位将所述N条规则分为第一规则集合和第二规则集合,其中所述第一规则集合中的第一比特位上的值为第一取值,所述第二规则集合中的第一比特位上的值为第二取值;例如第一比特位将N条规则按照第一取值“1”和第二取值“0”分成第一规则集合和第二规则集合;
如果所述第一规则集合和所述第二规则集合中的规则的数量均大于第一阈值,则继续对所述第一规则集合和所述第二规则集合按照第二比特位或第三比特位进行划分,直到每个划分后的规则集合中剩余的规则数量小于等于第一阈值时为止,所述第一阈值由所述至少一个随机存取存储单元的数据位宽决定。
本实现方式中,按照每个规则的比特位的编号将N条规则划分成不同的集合,进而可以通过二叉树的形式表示在字典树中,便于对N条规则进行分类和查找。
可选的,结合第一方面,在第一方面的另一种可能的实现中,组成所述N条规则中的二进制1或0中至少一个1或0用掩码*表示,即每条规则中的字符“1”或者“0”可以用掩码“*”来表示。本实现方式中,利用掩码表示二进制中的1或0可以灵活地表达多种不同的规则,增加了存储N条规则的灵活性。
结合第一方面,在第一方面的又一种可能的实现中,将所述M个层级的信息存储在至少一个随机存取存储单元中包括:将所述M个层级的信息存储在M个RAM中,M≥1。
结合第一方面,在第一方面的又一种可能的实现中,将所述字典树从所述根节点到至少一个中间节点,再从所述至少一个中间节点到尾节点的顺序划分成M个层级包括:根据所述至少一个随机存取存储单元的数据位宽对所述字典树划分M个层级;其中所述M个层级中的第一层级包括1个根节点和与所述根节点相连接的P个中间节点,所述M个层级中的第二层级包括的中间节点的数量不超过P+1个,P为正整数且P≥1。
可选的,P=6。此外,P还可以取其他值,由所述数据位宽来确定,本实施例不予限制。
可选的,所述第一层级或所述第二层级中还可以包括至少一个尾节点,所述尾节点上 设置有索引编号。
结合第一方面,在第一方面的又一种可能的实现中,所述M个层级的信息包括第一层级的信息和第二层级的信息;所述第一层级的信息包括所述根节点的比特位编号和所述P个中间节点的比特位编号,所述第二层级的信息包括不超过P+1个中间节点的比特位编号。
可选的,所述M个层级中还可以包括第三层级、第四层级或者更多的层级,具体的层级数量由所述字典树的结构来确定。
结合第一方面,在第一方面的又一种可能的实现中,所述方法还包括:将所述字典树的所有尾节点上的索引编号所指示的至少一条规则存储在X个RAM中,X为正整数且X≥1,即将N条第一规则存储在X个存储存取存储单元。其中,所述X个存储存取存储单元可以是前述第一方面的至少一个存储存取存储单元以外的存储单元;还可以是前述第一方面的至少一个存储存取存储单元中的存储单元,本实施例对此不予限制。
结合第一方面,在第一方面的又一种可能的实现中,在前述第一方面的所述字典树被存储在芯片上时,可称为片上字典树;存储在所述芯片之外的其他字典树称为片外字典树;所述方法还包括:将所述X个随机存取存储单元中的一个或多个用于存储所述片外字典树,其中,构建所述片外字典树的规则可以与所述片上字典树的N条规则相同,也可以不同。
可选的,所述方法还包括:将所述X个随机存取存储单元中的一个或多个用于存储所述片外字典树所对应的至少一条ACL规则。
结合第一方面,在第一方面的又一种可能的实现中,所述方法由处理器芯片中的核来执行,所述处理器芯片包括所述核以及X个片上RAM,所述处理器芯片与Y个片外RAM耦合,所述方法还包括:将所述至少一条第二规则构建成片外字典树,所述至少一条第二规则存储于所述Y个片外RAM中;对所述片外字典树划分至少一个层级,将所述片外字典树的每个所述层级的信息存储在所述Y个片外RAM中,其中Y为正整数且Y≥1。
结合第一方面,在第一方面的又一种可能的实现中,所述方法还包括:选择所述X个片上RAM中的至少一个,存储所述片外字典树的至少一个层级的信息;或者,选择所述Y个片外RAM中的至少一个,存储所述片上字典树的至少一个层级的信息。
本实现方式中,将存储片上规则的至少一个RAM用于存储片外字典树或片外规则,使得片上任意一层级存储单元与片外字典树之间可以动态切换,从而实现了芯片片上和片外存储单元的资源共享,提高了资源配置的灵活性,从而可以满足不同的产品容量需求。
第二方面,本申请还提供了一种ACL的规则查找方法,应用于第一方面中构建的字典树,所述方法包括以下步骤:获取报文信息,所述报文信息的内容可以通过二进制表示的1或0表示;将所述报文信息的内容按照字典树上的比特位编号逐一查找,确定与所述报文信息内容相匹配的规则;
其中所述字典树由N条规则构建,N为正整数且N≥2,所述字典树包括根节点和至少两条分支,其中每条所述分支包括至少一个中间节点和一个尾节点,所述根节点和所述至少一个中间节点上设置有比特位的编号,所述比特位的编号为所述N条规则按照预设顺序依次排序后每个比特位所对应的次序号,所述尾节点上设置有索引编号,所述索引编号用于指示至少一条规则。
本方面提供的方法,利用字典树的特点,所述字典树中的各个节点上设置有比特位编号,按照字典树上的比特位编号逐一查找当前报文信息中的各个比特,最后通过字典树上 尾节点的索引编号的指示来查找到与所述报文信息内容相匹配的规则,本方法实现了对由N条规则组成的字典树的快速查找。
可选的,所述报文信息包括以下一种或多种组合:地址信息、报文协议类型、端口信息等,进一步地,所述地址信息包括IP地址、MAC地址,所述报文协议类型包括:互联网协议第四版本(Internet Protocol Version 4,IPv4),也可以是基于互联网协议第六版本(Internet Protocol Version 6,IPv6),所述端口信息包括端口号,例如源端口号和目标端口号等。
可选的,所述报文信息的内容包括上述各种信息的组合,例如当前带查找的报文信息由IP地址和端口号组合,长度共32比特,其中所述IP地址占16比特,端口号占16比特长度。
可选的,所述IP地址中包括至少一个掩码,所述掩码可以表示1或者表示0。
结合第二方面,在第二方面的一种可能的实现中,将所述地址信息按照字典树上的比特位编号逐一查找包括:从所述字典树的根节点上的第一比特位的编号开始,查找所述地址信息中对应第一比特位上的值,若与所述第一比特位上的值相连接的分支不是尾节点,则继续查找所述分支所连接的中间节点上的比特位编号,直到查找出所述分支的尾节点为止。
另外,所述方法还包括:根据所述尾节点的索引编号确定至少一条规则,将所述至少一条规则中的每条规则分别与所述地址信息包含的所有字符进行逐一比对,直到找到与所述地址信息内容相同的规则为止。
结合第二方面,在第二方面的另一种可能的实现中,所述确定与所述目标IP地址相匹配的规则包括:如果按照所述尾节点上的索引编号指示的至少一条规则与所述地址信息相同,则确定存在与所述地址信息相匹配的规则;如果所述索引标号指示的至少一条规则均与所述地址信息的内容不同,则确定不存在与该地址信息相匹配的规则,即查找失败。
可选的,本方面所述的字典树可以被划分成M个层级,具体地,所述划分过程为:将所述字典树是按照从根节点到至少一个中间节点,再从所述至少一个中间节点到尾节点的顺序划分成M个层级,M为正整数且M≥1,且所述M个层级的信息存储在至少一个随机存取存储单元中。
结合第二方面,在第二方面的又一种可能的实现中,所述方法还包括:如果所述随机存取存储单元中存储有两个或两个以上字典树,则同时对所述两个或两个以上字典树按照所述报文信息的内容进行查找,具体地查找方法与本方面中上述对一个字典树的查找方法相同,即按照每个字典树上各个节点的比特位编号之一查找。本实现方式中,通过对两条或两条以上的字典树进行并行查找,提高了查找效率。
本方面提供的方法,实现了对片上单个字典树或多个字典树的查找和切换,进而达到了片上小容量存储且高性能查找的有益效果。
可以理解地,按照上述第一方面和第二方面的方法,还可实现对芯片外其他字典树的存储和快速查找,从而满足芯片外大容量存储的需求。
第三方面,本申请还提供了一种ACL的规则分类装置,所述装置用于实现前述第一方面以及第一方面各种实现方式中的ACL的规则分类方法。
可选的,所述装置包括至少一个功能单元或模块,进一步地,所述至少一个功能单元 为获取单元或处理单元等。
第四方面,本申请还提供了一种ACL的规则查找装置,所述装置用于实现前述第二方面以及第二方面各种实现方式中的ACL的规则查找方法。
可选的,所述装置包括至少一个功能电路,进一步地,所述至少一个功能电路为获取电路或处理电路等,进一步地,所述装置为具备写功能的处理器。
第五方面,本申请还提供了一种通信装置,包括处理器和存储器,所述处理器与所述存储器耦合,所述存储器用于存储指令;所述处理器用于调用所述指令使得所述通信装置执行前述第一方面以及第一方面各种实现方式中的ACL的规则分类方法。
此外,所述装置中的处理器,还用于执行所述存储器中的指令,使得所述通信装置执行前述第二方面以及第二方面各种实现方式中的ACL的规则查找方法。
第六方面,本申请还提供了一种计算机可读存储介质,其特征在于,包括指令,当所述指令在计算机上运行时,实现如前述第一方面或第一方面各种实现方式所述的方法,或实现如前述第二方面或第二方面各种实现方式所述的方法。
第七方面,本申请还提供了一种计算机程序产品,当其在计算机上运行时,实现如前述第一方面或第一方面各种实现方式所述的方法,或实现如前述第二方面或第二方面各种实现方式所述的方法。
本申请提供的ACL规则分类方法和装置,先将N条规则构建成字典树,所述字典树的根节点和至少一个中间节点上设置有比特位的编号,然后再对该字典树进行分层和存储,由于对字典树分层后只需保存字典树上各个节点的比特位编号,相比于原来存储N条规则而言,占用的存储空间减小,因此可以通过占用面积较小的至少一个随机存取存储单元来保存字典树中各个层级的信息,并利用索引编号来只指示N条规则,从而避免了使用占用面积较大的TCAM来存储规则,节约了芯片的成本和功耗。
此外,本申请提供的ACL规则查找方法和装置,利用字典树的特点,按照字典树上的比特位编号逐一查找当前报文中的地址信息,最后通过字典树上尾节点的索引编号的指示来查找到与所述地址信息相匹配的规则,本方法实现了对由N条规则组成的字典树的快速查找。
图1为本申请提供的一种片上TCAM和外挂商用TCAM组合的芯片结构示意图;
图2为本申请实施例提供的一种ACL规则分类方法的流程示意图;
图3为本申请实施例提供的一种ACL规则分类方法的示意图;
图4为本申请实施例提供的一种字典树的结构示意图;
图5为本申请实施例提供的一种字典树层级划分的示意图;
图6为本申请实施例提供的另一种字典树层级划分的示意图;
图7为本申请实施例提供的一种算法树与存储单元对应关系的示意图;
图8为本申请实施例提供的一种片上算法树和片上规则桶的结构示意图;
图9为本申请实施例提供的一种ACL规则查找方法的流程示意图;
图10a为本申请实施例提供的另一种ACL规则分类方法的示意图;
图10b为本申请实施例提供的一种ACL规则查找方法的示意图;
图11为本申请实施例提供的一种片上和片外ACL规则混合存储的结构示意图;
图12为本申请实施例提供的一种多个SRAM资源共享的示意图;
图13a为本申请实施例提供的一种片上规则桶切换成片外算法树之前的结构示意图;
图13b为本申请实施例提供的一种片上规则桶切换成片外算法树之后的结构示意图;
图14为本申请实施例提供的一种两个字典树并向查找的结构示意图;
图15为本申请实施例提供的一种ACL规则分类装置的结构示意图;
图16为本申请实施例提供的一种ACL规则查找装置的结构示意图;
图17为本申请实施例提供的一种通信装置的结构示意图。
在对本申请实施例的技术方案说明之前,首先结合附图对本申请实施例的技术场景进行介绍。
本申请实施例提供的技术方案可应用于对ACL规则存储的技术领域。一般的对ACL规则进行存储的载体可以是三元物可寻址存储(ternary content addressable memory,TCAM),如图1所示。包括片上TCAM和外挂片外TCAM,其中片上的TCAM的容量一般可以存储32K(K=1024)条ACL规则,并且每条所述ACL规则的长度都是160比特(bit),片外的商用芯片的容量一般则可以达到存储512K条ACL规则的容量,查找性能受限于ILA接口的带宽。
在片上TCAM或片外TCAM存储ACL规则时,每条ACL规则是通过X值和Y值进行存储,根据X值和Y值的不同,可以表示三种状态,分别是:匹配0、匹配1和不匹配。如下表1所示,示出了TCAM中每1比特(bit)存储的内容,通过这三种状态的指示,能够实现对ACL规则的精确查找和模糊查找。
X值 | Y值 | 匹配状态 |
0 | 0 | 匹配0或匹配1 |
0 | 1 | 匹配1 |
1 | 0 | 匹配0 |
1 | 1 | 不匹配 |
表1
利用TCAM存储ACL规则,可以实现对多条规则的快速访问和查找。比如对于片上TCAM存储的1000条规则,每条规则均为一个长度是32bit的IP地址,首先通过配置将这1000条规则存储在片上TCAM中,然后当输入一个目标地址,该目标地址可以作为查找密钥(key),且长度也是32bit,片上TCAM存储单元能够对其存储的1000条规则同时进行查找和对比,并输出查找结果。
但是TCAM需要占用芯片的面积较大,导致芯片的成本升高以及整机电路板设计成本提高,另外TCAM消耗的功耗大,导致整机散热成本和能耗成本上升,无法满足大容量存储的要求。所以有一种需求利用占用面积较小的存储单元,比如RAM来存储ACL规则,并且也要满足对多条规则进行快速访问和查找的功能。
本申请实施例提供了一种ACL的规则分类方法,该方法揭示了一种灵活扩展ACL容量和性能的解决方案。在满足对多条规则进行快速访问和查找的同时,还能避免利用TCAM存储大量规则产生的占用芯片面积大,功耗和成本高的问题。
实施例一
如图2所示,本实施例提供的ACL的规则分类方法,所述方法可以包括:
步骤201:获取N条第一规则,N为正整数且N≥2,所述N条第一规则中的每条规则均由二进制表示的1或0组成,并且所述N条第一规则的长度均相同。
其中,所述“获取”可以理解为:设备从外部接收N条第一规则;或者还可以理解为:设备自身生成N规则并得到所述N条规则。本实施例对设备获取所述N条规则的来源不进行限制。
可选的,所述第一规则为ACL规则,即ACL中存储有N条第一规则。
可选的,每条所述规则为由若干比特位组成的IP地址。所述N条规则的长度相同是指N条规则所占用的比特数均相同,例如都是32bit或者16bit的IP地址。进一步地,每个所述二进制的IP地址可以由每个原始的IP地址经过二进制转换后生成。
可选的,组成所述N条规则中的二进制1或者0中至少一个1或0用掩码*表示,即当对某一比特位的0或1表示不关心时,可以通过掩码“*”来替代,例如“规则1”为1011101*010111,在从右往前数的第6比特位上的掩码“*”表示该比特位上的字符既可以是“1”,也可以是“0”。
参见表2,示出了5条ACL规则,N=5。应理解,还可以包括更多条规则,比如1000条,本实施例仅举出5条规则进行说明。
规则编号 | 规则内容 |
规则1 | 1011101*010111 |
规则2 | 0101*0011*1110 |
规则3 | 0110100010*010 |
规则4 | 01101010010001 |
规则5 | 11010*00111010 |
表2
步骤202:根据所述N条第一规则构建片上字典树。
可选的,所述构建的字典树为片上字典树。
所述字典树(trie)包括根节点和至少两条分支,其中每条所述分支包括至少一个中间节点和一个尾节点,所述根节点和所述至少一个中间节点上设置有比特位的编号,所述比特位的编号为所述N条规则按照预设顺序依次排序后每个比特位所对应的次序号,所述尾节点上设置有索引编号,所述索引编号用于指示至少一条规则。
具体地,根据所述N条规则构建字典树的过程包括:
1、按照第一条件确定至少一个所述比特位的编号;
2、将所述至少一个比特位的编号存储在字典树的至少一个节点上,所述至少一个节点包括所述根节点和所述至少一个中间节点;
3、将至少一条规则的索引编号存储在每条分支的尾节点上;
其中所述第一条件为:所述至少一个比特位的编号中的第一比特位将所述N条规则分为第一规则集合和第二规则集合,所述第一规则集合中的第一比特位上的值为第一取值,所述第二规则集合中的第一比特位上的值为第二取值;如果所述第一规则集合和所述第二 规则集合中的规则的数量均大于第一阈值,则继续对所述第一规则集合和所述第二规则集合按照第二比特位或第三比特位进行划分,直到每个划分后的规则集合中剩余的规则数量小于等于第一阈值时为止。
可选的,所述第一阈值由所述至少一个随机存取存储单元的数据位宽决定。
在一具体应用例中,根据上面表2中的5条规则构建字典树过程为:
如图3和图4所示,先确定字典树的根节点和中间节点上的比特位编号。具体地,按照所述第一条件将5条规则尽可能地对半分,划分时尽量避免带有掩码“*”的比特位编号。可选的,如果在按照某一比特位编号进行划分时,遇到掩码“*”时,则划分的两个规则集合中均存在该带有掩码“*”的规则,因为“*”表示可1可0。
可选的,所述预设顺序为所有规则的比特位按照从右往左、从序号0开始的次序来排序,比如依次是0,1,2,3,4……。此外,还可以按照从左往右的顺序排序,本实施例对此不进行限定。
本实施例提供的片上字典树构建方法,如图3所示,遍历5条规则之后,选择第一比特位为第5bit,对5条规则进行划分,生成第一规则集合和第二规则集合,所述第一规则集合中包括{规则2、规则3、规则5},所述第二规则集合中包括{规则1、规则4},其中第一规则集合中的编号第5bit上的值为第二取值“1”,所述第二规则集合中的编号第5bit上的值为第一取值“0”。即从字典树的根节点开始,每个节点分成二叉树,其中一条分支的比特位上对应的字符为0,另一条分支的比特位上对应的字符为1。
可选的,所述第一取值为“0”,所述第二取值为“1”。本实施例对第一取值和第二取值是“1”还是“0”,或者掩码“*”不做限制。
所述第一阈值可以由随机存取存储单元的数据位宽来确定,所述数据位宽可以理解为内存或显存一次能传输的数据量,进一步地,所述数据量与所述随机存取存储单元的内存带宽相关,所述内存带宽是指内存总线所能提供的数据传输能力,该能力可以由内存芯片和内存模组的设计来确定。
本实施例中,如果第一阈值为1,则随机存取存储单元一次能够传输的规则条数为1条。可选的,如果所述第一阈值为7,则表示每次所述存储单元可以传输的数据量不超过7条规则。
本实施例中,所述方法还包括:判断第一规则集合和第二规则集合中的规则数量是否均大于1,如果是,则继续对这两个规则集合进行划分,对于第一规则集合{规则2、规则3、规则5},按照编号第2bit,划分成两个集合分别是{规则2},{规则3、规则5},其中{规则2}中第2bit上的取值为“1”,集合{规则3、规则5}中第2bit上的取值为“0”。此时,{规则2}中仅包含一条规则不超过第一阈值1,该条分支的中间节点连接有一个尾节点,满足终止划分的条件。
同理地,对集合{规则3、规则5}进行划分,按照编号第12bit划分,划分后生成{规则3}和{规则5},满足所述第一条件,则停止划分。同理地,对所述第二规则集合{规则1、规则4}按照编号为第13bit进行划分,得到两个集合分别是{规则4}和{规则1}。
根据上述步骤中划分5个规则使用的比特位编号,分别是第5bit、第2bit、第12bit和第13bit构建字典树。如图4所示,该字典树的根节点设置的第一比特位为5bit,然后与根节点相连接的中间节点的第二比特位为13bit,按照二进制0或1划分出规则1和 规则4。在另一条分支中,一个中间节点的第三比特位为2bit,划分出规则2;最后通过第12比特将规则5和规则3划分出来。其中每个中间节点(包括13bit、2bit和12bit)都连接一个尾节点,每个尾节点上设置有一个索引编号,用于指示每一条规则的存储位置。
可选的,所述索引编号为一IP地址,或者包含所述IP地址的地址信息。
步骤203:将所述字典树按照从所述根节点到至少一个中间节点,再从所述至少一个中间节点到尾节点的顺序划分成M个层级,M为正整数且M≥1。
其中,步骤203中将所述字典树划分成M个层级包括:根据所述至少一个随机存取存储单元的数据位宽对所述字典树划分M个层级。
如图5所示,划分的第一层级中包括一个根节点(5bit),三个中间节点(13bit、2bit和12bit),此外还包括5个尾节点,每个所述尾节点上均设置有索引编号。
可选的,若是字典树中包括更多的中间节点和尾节点,如图6所示,则可以按照7个节点为单位对字典树进行划分。例如,将字典树划分为M个层级,从第一层级、第二层级至第M层级,其中所述M个层级中的第一层级包括1个根节点和与所述根节点相连接的P个中间节点,所述M个层级中的第二层级包括的中间节点的数量不超过P+1个。
可选的,若P=6,则所述第一层级包括1个根节点和6个中间节点,第二层级也包括7个中间节点。
步骤204:将所述M个层级的信息存储在至少一个随机存取存储单元中。
其中,所述随机存取存储单元包括随机存取存储器(random access memory,RAM),进一步地,所述RAM包括静态随机存储器(static RAM,SRAM)和动态随机存储器(dynamic RAM,DRAM)等。
具体地,在步骤204中包括以下两种实现方式:
一种实现方式是,将M个层级的信息存储在一个RAM中。如图5所示,将第一层级存储在第一存储单元中,可以理解地,所述第一存储单元的容量足够大,可以承载字典树的所有信息,且所述第一存储单元中可以分成多个级,比如第1级、第2级、第3级、……,且每一级对应存储字典树的一个层级的信息。另外还包括:将5个索引编号指示的5条规则存储在其它存储单元中。
另一种实现方式是,将字典树的M个层级的信息存储在M个存储单元中,M≥2。如图6所示的字典树,包括多条分支,其中划分的第一层级的信息可以存储在第一存储单元中,第二层级的信息存储在第二存储单元中,第M层级的信息存储在第M存储单元中。且所述第一层级的信息包括所述根节点的比特位编号和所述P个中间节点的比特位编号,所述第二层级的信息包括不超过P+1个中间节点的比特位编号。
图6所示为一组ACL规则映射到多级字典树的过程,其中每一级字典树最多可以存储7个中间节点,当树的规模超过7个中间节点后就需要将超出部分的放到下一层级中去。图6中所示的灰色圆圈为尾节点,每个所述尾节点上设置有索引编号或者地址空间索引。以用于指示具体的规则。
本实施例提供的方法,先将N条规则构建成字典树,所述字典树的根节点和至少一个中间节点上设置有比特位的编号,然后再对该字典树进行分层和存储,由于对字典树分层后只需保存字典树上各个节点的比特位编号,相比于原来存储N条规则,占用的存储空间减小,因此可以通过占用面积较小的至少一个随机存取存储单元来保存字典树中各个层级 的信息,并利用索引编号来只指示N条规则,从而避免了使用占用面积较大的TCAM来存储规则,节约了芯片的成本和功耗。
另外,本方法中还包括X个存储单元,X为大于1的正整数;所述方法还包括:将所述字典树的所有尾节点上的索引编号所指示的至少一条规则存储在X个存储单元中。进一步地,可以将N条规则存储在一个存储单元中,或者存储在两个或两个以上的存储单元中。
可选的,用于存储所述N条规则的X个存储单元可以称为规则桶(rule bucket),每个所述规则桶可以承载至少一条规则,且同一条规则可以存储在一个或多个规则桶中。
示例性地,如图7所示,用于存储N条规则的X个存储单元为SRAM或DRAM。
可选的,本申请所述的字典树可称为算法树(algorithm tree),对于存储在芯片上的算法树可称为片上算法树,对应地对于存储在芯片外,例如外挂芯片的算法树可称为片外算法树。同理地,对于存储芯片上的N条规则的存储单元可称为片上规则桶,对于存储外挂芯片算法树所对应的规则可称为片外规则桶。
一般地,在芯片上,配置用于存储片上字典树的容量与存储片上规则桶的容量之比大约在1:6到1:7之间。例如,存储10兆bit(1兆=1M=1024*1024)的算法树所需要的规则桶的容量大约在60Mbit至70Mbit。因为算法树中存储的是不超过P+1个节点的比特位编号以及尾节点的索引编号,所以相比于存储16bit或32bit长度的多条规则而言,占用的存储容量较小。
此外,本实施例提供的方法可以由处理器芯片中的核来执行,所述处理器芯片包括所述核以及X个片上RAM,所述处理器芯片与Y个片外RAM耦合,所述方法还包括:将至少一条第二规则构建成片外字典树,所述至少一条第二规则存储于Y个片外RAM中;对所述片外字典树划分至少一个层级,将所述片外字典树的每个所述层级的信息存储在所述Y个片外RAM中,其中Y为正整数且Y≥1。
此外,所述方法还包括:选择所述X个片上RAM中的至少一个,存储所述片外字典树的至少一个层级的信息;或者,选择所述Y个片外RAM中的至少一个,存储所述片上字典树的至少一个层级的信息。
在一具体应用例中,比如对于片上(on-chip)ACL的规则存储方案,如图8所示,构建的片上字典树,或者片上算法树被划分成M个层级,具体包括第1层级至第M层级,且被存储在第一存储单元中。对于片上规则桶则被划分成多个级,例如从第1级至第X级,且每一级的片上规则桶均被存储在一个存储单元中。
一般地,对于存储每一级的片上规则桶所占用容量为4.5Mbit,在总共被划分为14级情况下,片外可达到的存储容量为14×4.5M=63M bit,对应地,63M bit的片外规则桶可以存储片上算法树的ACL规则的数量为526K(1K=1000)条,且这些规则的长度均为160bit。
本实施例提供的方法,通过X个RAM实现了对片上字典树所对应的第一规则的存储,并利用Y个RAM实现对片外字典树对应的第二规则的存储,本实施例实现了芯片上字典树的小容量存储和片外字典树的大容量存储,进而可以满足不同的需求。
本实施例中,根据上述构建字典树的算法,还提供了一种ACL的规则查找方法,如图9所示,该方法的执行主体可以是网络设备,比如路由器,所述方法包括:
步骤901:获取报文信息,所述报文信息的内容可以由二进制表示的1或0表示。
其中,所述报文信息包括以下一种或多种组合:地址信息、报文协议类型、端口信息等,进一步地,所述地址信息包括IP地址、MAC地址,所述报文协议类型包括:互联网协议第四版本(Internet Protocol Version 4,IPv4),也可以是基于互联网协议第六版本(Internet Protocol Version 6,IPv6),所述端口信息包括端口号,例如源端口号和目标端口号等。
进一步地,组成所述报文信息内容的0或1还可以通过掩码“*”表示。
可选的,所述报文信息通过报文头获得。
步骤902:将所述报文信息的内容按照字典树的比特位编号逐一查找,确定与所述报文信息相匹配的规则。
其中所述字典树由N条第一规则构建,N为正整数且N≥2,所述字典树包括根节点和至少两条分支,其中每条所述分支包括至少一个中间节点和一个尾节点,所述根节点和所述至少一个中间节点上设置有比特位的编号,所述比特位的编号为所述N条规则按照预设顺序依次排序后每个比特位所对应的次序号,所述尾节点上设置有索引编号,所述索引编号用于指示至少一条规则。
其中,所述字典树按照从所述根节点到至少一个中间节点,再从所述至少一个中间节点到尾节点的顺序划分成M个层级,M为正整数且M≥1,且所述M个层级的信息存储在至少一个RAM中。
可选的,所述至少一个所述比特位的编号是按照第一条件确定的;其中所述第一条件为所述至少一个比特位的编号中的第一比特位将所述N条规则分为第一规则集合和第二规则集合,其中所述第一规则集合中的第一比特位上的值为第一取值,所述第二规则集合中的第一比特位上的值为第二取值;如果所述第一规则集合和所述第二规则集合中的规则的数量均大于第一阈值,则继续对所述第一规则集合和所述第二规则集合按照第二比特位或第三比特位进行划分,直到每个划分后的规则集合中剩余的规则数量小于等于第一阈值时为止。
其中,所述第一阈值由所述至少一个RAM的数据位宽决定。
进一步地,步骤901中,将所述报文信息的内容按照字典树的比特位编号逐一查找,具体包括:从所述字典树的根节点上的比特位的编号开始,查找所述报文信息内容对应比特位上的值,若与所述比特位上的值相连接的分支不是尾节点,则继续查找所述分支所连接的中间节点上的比特位编号,直到查找出所述分支的尾节点为止。
进一步地,步骤902中,确定与所述报文信息内容相匹配的规则,具体包括:如果按照所述尾节点上的索引编号指示的至少一条规则与所述报文信息的内容相同,则确定存在与所述报文信息的内容相匹配的规则,查找成功;否则,确定不存在所述相匹配的规则,即查找失败。
在一个具体的应用例中,如图10a所示,共包括规则1至规则12的12条规则,按照比特位编号的顺序对这12条规则进行划分,生成字典树/算法树如图10b所示,所述根节点上设置的第一比特位的编号为第8bit,该第8bit将12个规则划分为两个规则集合;然后再对这两个规则集合进行划分,直到满足剩余的规则数量不超过第一阈值,本实施例中,假设第一阈值为3,则划分后每个规则桶中的规则数量不得超过3条(即规则桶的桶深为 3),依次划分后的第二比特为编号为第11bit,第三比特位的编号为第13bit,第四比特位的编号为第1bit;最后每一个中间节点再连接一个尾节点,每个所述尾节点上的索引编号用于指示对应规则的位置。
本实施例中,共有5个尾节点,可以指示5个规则桶,分别为:第一规则桶{规则6,规则7},第二规则桶{规则2,规则9},第三规则桶{规则1,规则3,规则4},第四规则桶{规则5,规则8,规则10},第五规则桶{规则9,规则11,规则12}。
在步骤901中,获取地址信息中指示的需要查找的IP地址为:111101010101;步骤902:将该IP地址在图10b所示的字典树中的比特位的编号逐一查找相匹配的规则,所述字典树的根节点的比特位编号为第8bit,则在所述IP地址中的第8bit上查找指示的取值为“1”,按照所述字典树的指示第二比特位编号为13bit,则对应到所述IP地址中的第13bit的取值为“1”,进而确定上述第五规则桶中包括的规则为待匹配的规则。
最后,再将第五规则桶中的{规则9,规则11,规则12}分别与所述IP地址个每个比特位进行比较,若都相同,则存在满足匹配的规则,否则不存在。本实施例中,只有规则11{1111*10101*1}与所述IP地址{111101010101}相同,所以本次查找到与所述地址信息相匹配的规则为规则11。
其中,需要说明的是,规则11中的掩码“*”既可以表示“1”也可以表示“0”,本实施例中规则11中的第1比特位编号表示0,第9比特位编号也表示0时,与所述地址信息所对应的IP地址相同。
此外,若在第五规则桶中的3条规则均与所述IP地址不同时,则确定不存在所述地址信息对应的规则,本次查找失败。
可选的,若是有两个或两个以上相匹配的规则,则可以根据所有相匹配的规则的优先级顺序,选择优先级最高的作为相匹配的规则。
实施例二
本实施例在实施例一的基础上,为避免了大量SRAM数据汇聚时发生拥塞,通过字典树这种多级的架构,可以实现和片外ACL规则的算法树复用,参见图11,提供了一种片上和片外ACL规则混合存储的技术方案。
需要说明的是,为了区别片上字典树和片外字典树,本实施例将片上字典树称为第一字典树,将片外字典树称为第二字典树。其中,所述第二字典树的构建所使用的规则与所述第一字典树的构建所使用的N条规则可以相同,也可以不同。
可选的,所述片上字典树还可以称为片上算法树,同理地,所述片外字典树可以称为片外算法树。
本实施例在实施例一的基础上,还提供了一种片上和片外存储单元的共享方法。具体地,所述方法包括:使用X个存储单元中的至少一个存储单元来存储片外的字典树(第二字典树),或者片外的至少一条规则。
其中,所述X个存储单元中的其他存储单元可用于存储第一字典树所关联的N条规则。
可选的,所述X个存储单元中用于存储第一字典树的N条规则的存储单元可称为片上规则桶;用于存储第二字典树的规则的存储单元可称为片外规则桶。
其中,所述方法由处理器芯片中的核来执行,所述处理器芯片包括所述核以及X 个片上RAM,所述处理器芯片与Y个片外RAM耦合,所述方法还包括:将所述至少一条第二规则构建成片外字典树,所述至少一条第二规则存储于所述Y个片外RAM中;对所述片外字典树划分至少一个层级,将所述片外字典树的每个所述层级的信息存储在所述Y个片外RAM中,其中Y为正整数且Y≥1。
进一步地,所述方法还包括:选择所述X个片上RAM中的至少一个,存储所述片外字典树的至少一个层级的信息;或者,选择Y个片外RAM中的至少一个,存储片上字典树的至少一个层级的信息,所述片上字典树由所述N条第一规则构成。
可选的,所述字典树为所述片内字典树或所述片外字典树,在所述将报文信息的内容按照字典树上的比特位编号逐一查找的步骤之前,所述方法还包括:通过模式控制信号选择将所述片外字典树确定为所述字典树,或者将所述片内字典树确定为所述字典树。
具体地,可以由寄存器里配置的控制信号来确定对片上规则桶和片外字典树的模式切换。例如,对于所述寄存器是1比特的寄存器而言,若当前寄存器指示的控制信号是“0”,则表示采用X个RAM存储片上规则桶的模式,若当前指示的控制信号是比特“1”,则表示选择X个RAM中的任意一个或多个用于存储片外算法树的模式。
处理器或处理芯片可以通过读取寄存器上的控制信号的比特值来确定当前采用的存储模式。比如,若读取的比特值是“0”,则确定当前使用存储片上规则桶模式;当需要对X个RAM进行模式切换时,可以通过控制寄存器输出控制信号“1”,来指示处理器将当前存储片上规则桶的模式切换为存储片外算法树的模式,从而达到不同模式间的自由切换,提高存储的灵活性,满足不同的容量需求。
如图11所示,对于片上算法树,左侧第1级到第X-1级的片上规则桶实现片上ACL规则的存储功能,右侧第X级到第Y级的片上规则桶用来实现片外算法树的存储功能,片外的规则桶则存放在片外存储器。其中第X级的存储单元从原来存储片上的规则切换至用于存储片外的算法树,进而实现了片上规则桶的存储单元与片外算法树的存储单元之间的自由切换。
如图12所示,片上规则桶处理逻辑(on-chip rule bucket process)是处理片上规则桶比较逻辑处理,片外算法树处理逻辑(off-chip algorithm tree process)是处理片外算法树查找逻辑处理,中间的多个SRAM为存储片上规则桶数据结构或者片外算法树数据结构的存储单元集合。其中,每一级都可以通过一个模式配置寄存器在片上规则桶和片外算法树之间切换,进而达到存储片上ACL规则占用的容量小、延时小,存储片外ACL规则占用的容量大、延时大的效果。另外,用户可以通过外接一个或多个DRAM来存储片外ACL规则,进而达到在片上和片外之间存储资源灵活切换的效果。
具体地,如图13a和图13b所示,当片外需要更大存储容量时,比如需要将片上第7级的存储单元从片上规则桶切换成用于存储片外的算法树,首先需要将第7级上存储的规则搬移到其他级的片上或者片外。图13a所示将原来存储在第7级的规则搬移到片外的DRAM,其中,在搬移的时候需要将这些规则对应的算法树也一起搬移到片外算法树上面。当第7级的片上规则桶内的规则全部搬移完成之后,就可以将该第7级的存储单元切换成用于存储片外的算法树,如图13b所示,切换以后片外算法树的存储单元可以存储容量更大的片外ACL规则,但是对应的片上用于存储ACL规则的容量会相应变小。
如下表3所示,根据每级SRAM(或每个存储单元)所能存储的容量为4.5Mbit,一共14级片上规则桶/片外算法树的时,不同级数的片上算法桶和片外算方法树所对应的ACL容量。
表3
如表3所示,以存储片外算法树的级数与存储片上规则桶的级数均为7为例,即7个存储单元(SRAM)用于存储片外算法树的信息,7个存储单元(SRAM)用于存储片上规则桶的规则。则用于存储片外的ACL规则数量(N2)可以达到80,000条,且每条规则的长度为160bit;扩展到片外用于存储算法树的ACL规则数量(N1)可以达到1,000,000条,且每条规则的长度为160bit。
需要说明的是,用于存储由N1条规则所形成的片外算法树的容量一般小于用于存储N2条规则的规则桶所占用的容量,因为用于存储算法树的容量仅包括部分比特位的编号,而用于存储规则桶的容量需要保存所有规则,所以需要占用大量空间。
此外,还可以根据不同需求将用于存储片外规则桶的存储单元切换至用于存储片上的算法树,具体的过程可以参见上述由片上规则桶切换至片外算法树的过程,本申请实施例对此不详细赘述。
本实施例提供的方法,通过多级查找结构来实现片上存储单元和片外存储单元的资源共享,用于存储片上规则桶或片外算法树的多个存储单元之间的动态切换,以提高存储资源的利用率,从而满足不同的应用场景需求。
另外,基于构建的字典树或算法树,实现用占面积较小的存储单元来存储大量ACL规则,进而节约芯片的硬件成本,相比于TCAM减少了功耗。
实施例三
本实施例在上述实施例的基础上,可进一步地提高对地址信息,比如key的查找效率。
本实施例可应用于一些对ACL规则存储的容量需求小,但是对查找性能要求比较高的场景,具体地方法包括:通过配置,动态切换成两个或两个以上字典树的存储结构,如图14所示,用于存储片上算法树的随机存取存储单元,可以存储了两个算法树,每个算法树中包括第1层级至第M层级。此外,还包括用于存储每个算法树所对应的规则桶的存储单元。
在查找过程中,本实施例中可以对两个算法树同时进行查找,即两条流水结构并行处理,可以提供每个时钟周期查找两次的性能,进而实现了多流水同时查找功能,提高了查找效率。
可以理解地,本实施例还可以拓展3条或者更多的并行流水结构,来实现快速查找功能,且对于存储片上多个算法树的容量资源分配可以通过寄存器配置来实现,片上算法树和片外规则桶的容量大小,以及之间互相切换也可以通过寄存器配置完成。
在本申请的另一个实施例中,还提供了一种ACL的规则分类装置,用于实现前述实施例所述的ACL规则分类方法。
如图15所示,所述ACL的规则分类装置包括获取单元1501和处理单元1502,此外所述装置还可以包括发送单元、存储单元等更多的单元或模块,本实施例对此不进行限制。
具体地,获取单元1501,用于获取N条规则,N为正整数且N≥2,所述N条规则中的每条规则均由二进制表示的1或0组成,并且所述N条规则的长度均相同。
处理单元1502,用于根据所述N条规则构建字典树,将所述字典树划分成M个层级,M为正整数且M≥1,以及将所述M个层级的信息存储在至少一个随机存取存储单元中,所述字典树包括根节点和至少两条分支,其中每条所述分支包括至少一个中间节点和一个尾节点,所述根节点和所述至少一个中间节点上设置有比特位的编号,所述比特位的编号为所述N条规则按照预设顺序依次排序后每个比特位所对应的次序号,所述尾节点上设置有索引编号,所述索引编号用于指示至少一条规则。
可选的,由所述N条规则构成的字典树为片上字典树,所述N条规则为N条第一规则。
可选的,在本实施例的一种可能的实现方式中,所述处理单元1502,具体用于按照第一条件确定至少一个所述比特位的编号;将所述至少一个比特位的编号存储在第一字典树的至少一个节点上,所述至少一个节点包括所述根节点和所述至少一个中间节点;将至少一条规则的索引编号存储在每条分支的尾节点上。
其中,所述第一条件为所述至少一个比特位的编号中的第一比特位将所述N条规则分为第一规则集合和第二规则集合,其中所述第一规则集合中的第一比特位上的值为第一取值,所述第二规则集合中的第一比特位上的值为第二取值;如果所述第一规则集合和所述第二规则集合中的规则的数量均大于第一阈值,则继续对所述第一规则集合和所述第二规则集合按照第二比特位或第三比特位进行划分,直到每个划分后的规则集合中剩余的规则数量小于等于第一阈值时为止,所述第一阈值由所述至少一个存储单元的数据位宽决定。
可选的,在本实施例的另一种可能的实现方式中,组成所述N条规则中的二进制1或0中至少一个1或0用掩码*表示。
可选的,在本实施例的又一种可能的实现方式中,所述处理单元1402,具体用于将所述M个层级的信息存储在M个RAM中,M为正整数且M≥2。
可选的,在本实施例的又一种可能的实现方式中,所述处理单元1402,具体用于根据所述至少一个RAM的数据位宽对所述第一字典树划分M个层级;其中所述M个层级中的第一层级包括1个根节点和与所述根节点相连接的P个中间节点,所述M个层级中的第二层级包括的中间节点的数量不超过P+1个,P为正整数且P≥1。
示例性地,所述P为6。
可选的,在本实施例的又一种可能的实现方式中,所述第一层级的信息包括所述根节点的比特位编号和所述P个中间节点的比特位编号,所述第二层级的信息包括不超过P+1个中间节点的比特位编号。
可选的,在本实施例的又一种可能的实现方式中,所述处理单元1502,还用于将所述第一字典树的所有尾节点上的索引编号所指示的至少一条规则存储在X个存储单元中,X为正整数且X≥1。
可选的,在本实施例的又一种可能的实现方式中,所述处理单元1502,还用于将所述X个存储单元中的至少一个存储单元用作存储片上字典树或者所述片上字典树的索引编号所指示的至少一条第一规则。
可选的,在本实施例的又一种可能的实现方式中,所述处理单元1502,将所述至少一条第二规则构建成片外字典树,所述至少一条第二规则存储于所述Y个片外RAM中;对所述片外字典树划分至少一个层级,将所述片外字典树的每个所述层级的信息存储在所述Y个片外RAM中,其中Y为正整数且Y≥1。
可选的,在本实施例的又一种可能的实现方式中,所述处理单元1502,还用于选择所述X个片上RAM中的至少一个,存储所述片外字典树的至少一个层级的信息;或者,选择所述Y个片外RAM中的至少一个,存储所述片上字典树的至少一个层级的信息。
所述装置包括处理器芯片中的核,所述处理器芯片包括所述核以及X个片上RAM,所述处理器芯片与Y个片外RAM耦合。
另外,本实施例还提供的ACL规则的查找装置,用于对ACL的规则进行查找,具体地,如图16所示,所述查找装置包括:获取电路1601和处理电路1602,其中所述获取电路1601用于获取报文信息,所述报文信息的内容可以由二进制表示的1或0表示;处理电路1602用于将所述报文信息的内容按照字典树上的比特位编号逐一查找,确定与所述地址信息内容相匹配的至少一条规则。
可选的,所述获取电路1601和处理电路1602还可以集合于一个电路,比如处理电路。
可选的,所述查找装置为一种处理芯片或者处理器,具备写数据功能。
另外,所述方法还包括:根据所述尾节点的索引编号确定至少一条规则,将所述至少一条规则中的每条规则分别与所述地址信息包含的所有字符进行逐一比对,直到找到与所述地址信息内容相同的规则为止。
其中所述字典树由N条第一规则构建,N为正整数且N≥2,所述字典树包括根节点和至少两条分支,其中每条所述分支包括至少一个中间节点和一个尾节点,所述根节点和所述至少一个中间节点上设置有比特位的编号,所述比特位的编号为所述N条规则按照预设顺序依次排序后每个比特位所对应的次序号,所述尾节点上设置有索引编号,所述索引编号用于指示至少一条第一规则。
可选的,在本实施例的一种可能的实现方式中,所述处理电路1602,具体用于从所述字典树的根节点上的第一比特位的编号开始,查找所述报文信息内容中对应比特位上的值,若与所述比特位上的值相连接的分支不是尾节点,则继续查找所述分支所连接的中间节点比特位编号,直到查找出所述分支的尾节点为止。
可选的,在本实施例的另一种可能的实现方式中,所述处理电路1602,具体用于如果按照所述尾节点上的索引编号指示的至少一条第一规则与所述报文信息内容相同,则确定存在与所述报文信息相匹配的规则;否则,确定不存在所述相匹配的规则。
可选的,在本实施例的又一种可能的实现方式中,所述处理电路1602,具体用于将所述字典树按照从所述根节点到至少一个中间节点,再从所述至少一个中间节点到尾节点的 顺序划分成M个层级,M为正整数且M≥1,且所述M个层级的信息存储在至少一个RAM中。
可选的,在本实施例的又一种可能的实现方式中,所述至少一个所述比特位的编号是按照第一条件确定的;其中所述第一条件为所述至少一个比特位的编号中的第一比特位将所述N条第一规则分为第一规则集合和第二规则集合,其中所述第一规则集合中的第一比特位上的值为第一取值,所述第二规则集合中的第一比特位上的值为第二取值;
如果所述第一规则集合和所述第二规则集合中的规则的数量均大于第一阈值,则继续对所述第一规则集合和所述第二规则集合按照第二比特位或第三比特位进行划分,直到每个划分后的规则集合中剩余的规则数量小于等于第一阈值时为止,所述第一阈值由所述至少一个存储单元的数据位宽决定。
可选的,在本实施例的又一种可能的实现方式中,所述处理电路1602,还用于将所述至少一条第二规则构建成片外字典树,所述至少一条第二规则存储于Y个片外RAM中;对所述片外字典树划分至少一个层级,将所述片外字典树的每个所述层级的信息存储在所述Y个片外RAM中,其中Y为正整数且Y≥1。
可选的,在本实施例的又一种可能的实现方式中,所述处理电路1602,还用于选择所述X个片上RAM中的至少一个,存储所述片外字典树的至少一个层级的信息;或者,选择所述Y个片外RAM中的至少一个,存储片上字典树的至少一个层级的信息,所述片上字典树由所述N条第一规则构成。
可选的,在本实施例的又一种可能的实现方式中,所述字典树为所述片内字典树或所述片外字典树,所述处理电路1602,具体用于通过模式控制信号选择将所述片外字典树确定为所述字典树,或者将所述片内字典树确定为所述字典树。
本实施例提供的ACL查找方法,按照字典树上节点的比特位编号的顺序,逐个查找报文信息内容中的比特,实现了对待查询报文信息的快速查找。另外,若包括有多个字典树时,还可以通过两条流水线并行查找与IP地址相匹配的规则,进一步地提高了查找效率。
参见图17,为本申请实施例提供的一种通信装置的结构示意图。该通信装置可以是前述实施例中的装置,比如网络设备,或者还可以是用于网络设备的部件(例如芯片)。该通信装置可以实现前述实施例中的ACL规则分类方法和查找方法。
如图17所示,所述通信装置可以包括收发器171、处理器172;进一步,还可以包括存储器173,所述存储器173可以用于存储代码或者数据。所述通信装置还可以包括更多或更少的部件,或者组合某些部件,或者不同的部件布置,本申请对此不进行限定。
其中,处理器172为通信装置的控制中心,利用各种接口和线路连接整个通信装置的各个部分,通过运行或执行存储在存储器173内的软件程序或模块,以及调用存储在存储器内的数据,以执行通信装置的各种功能或处理数据。所述处理器中包括传输模块,所述传输模块用于获取所述N条规则,此外,所述传输模块还可以用于获取其他信息,例如地址信息等。
所述处理器172可以由集成电路(integrated circuit,IC)组成,例如可以由单颗封装的IC所组成,也可以由连接多颗相同功能或不同功能的封装IC而组成。举例来说,处理器可以仅包括中央处理器(central processing unit,CPU),也可以是GPU、数字信号处理器(digital signal processor,DSP)、或者CPU和NP的组合。
进一步地,所述处理器还可以进一步包括硬件芯片。所述硬件芯片可以是ASIC,可编程逻辑器件(programmable logic device,PLD)或其组合。上述PLD可以是复杂可编程逻辑器件(complex programmable logic device,CPLD),现场可编程逻辑门阵列(field-programmable gate array,FPGA),通用阵列逻辑(generic array logic,GAL)或其任意组合。
可选的,所述处理器172中包括处理芯片,所述处理芯片中可以包括一个或多个随机存取存储单元,用于存储片上字典树和片外规则桶。进一步地,所述一个或多个随机存取存储单元包括但不限于随机存取存储器(random access memory,RAM),进一步地,所述RAM包括静态随机存储器(static RAM,SRAM)和动态随机存储器(dynamic RAM,DRAM)。另外,所述存储器可以包括非易失性存储器,例如flash memory,硬盘HDD或SSD;所述存储器还可以包括上述种类的存储器的组合。
收发器171用于建立通信信道,使通信装置通过所述通信信道以连接至网络,从而实现通信装置与其他设备之间的通信传输。其中,所述收发器171可以是完成收发功能的模块。例如,所述收发器可以包括无线局域网(wireless local area network,WLAN)模块、蓝牙模块、基带(base band)模块等通信模块,以及所述通信模块对应的射频(radio frequency,RF)电路,用于进行无线局域网络通信、蓝牙通信、红外线通信及/或蜂窝式通信系统通信,例如宽带码分多重接入(wideband code division multiple access,WCDMA)及/或高速下行封包存取(high speed downlink packet access,HSDPA)。所述收发器用于控制通信装置中的各组件的通信,并且可以支持直接内存存取(direct memory access)。
在本申请实施例中,处理器172可以用于实现前述实施例中方法的全部或部分步骤。示例性地,图15中的所述获取单元1501所要实现的功能可以由所述处理器芯片上部的传输模块来实现,所述处理单元1502所要实现的功能则可以由该通信装置的处理器芯片实现。此外,图16所示获取电路1601和处理电路1602的能够均可以通过处理器172实现。
在具体实现中,所述通信装置可以是一种网络设备,所述网络设备包括但不限于路由器、交换机等,另外所述网络设备还可以包括芯片。
需要说明的是,本实施例中所述通信装置中,还可以包括外挂芯片,所述外挂芯片用于存储片外算法树和片外规则桶。具体地,所述外挂芯片可以包括处理单元和至少一个随机存取存储单元,其中,所述处理单元用于实现对片外算法树的ACL规则分类和查找功能,所述至少一个随机存取存储单元用于存储片外字典树和片外规则。
在一具体实现中,比如通信装置为一路由器,则该路由器除了处理器的芯片上存储有字典树和规则桶之外,利用通信接口还可以外挂一个芯片,用于实现片外ACL规则的分类和查找功能,进而满足芯片上小容量存储、片外大容量存储的需求。
一种可能的实现方式中,本申请实施例还提供一种计算机存储介质,其中,该计算机存储介质可存储有程序,该程序执行时可包括本申请提供的ACL规则分类方法、ACL规则查找方法的各实施例中的部分或全部步骤。所述的存储介质可为磁碟、光盘、ROM或RAM等。
此外,本申请实施例还提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得所述计算机执行上述各实施例所述的方法全部或部分步骤。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。 当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本发明实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质,例如固态硬盘(solid state disk,SSD)等。
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于系统及装置实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
以上所述的本申请实施方式,并不构成对本申请保护范围的限定。任何在本申请的精神和原则之内所作的修改、等同替换和改进等,均应包含在本申请的保护范围之内。
Claims (36)
- 一种访问控制列表ACL的规则分类方法,其特征在于,应用于对芯片上部ACL规则的划分,所述方法包括:获取N条第一规则,N为正整数且N≥2,所述N条第一规则中的每条规则均由二进制表示的1或0组成,并且所述N条第一规则的长度均相同;根据所述N条第一规则构建片上字典树,所述片上字典树包括根节点和至少两条分支,其中每条所述分支包括至少一个中间节点和一个尾节点,所述根节点和所述至少一个中间节点上设置有比特位的编号,所述比特位的编号为所述N条规则按照预设顺序依次排序后每个比特位所对应的次序号,所述尾节点上设置有索引编号,所述索引编号用于指示至少一条第一规则;将所述片上字典树按照从所述根节点到至少一个中间节点,再从所述至少一个中间节点到尾节点的顺序划分成M个层级,M为正整数且M≥1;将所述M个层级的信息存储在至少一个随机存取存储单元RAM中。
- 根据权利要求1所述的方法,其特征在于,根据所述N条第一规则构建片上字典树包括:按照第一条件确定至少一个所述比特位的编号;将所述至少一个比特位的编号存储在所述片上字典树的至少一个节点上,所述至少一个节点包括所述根节点和所述至少一个中间节点;将至少一条规则的索引编号存储在每条分支的尾节点上。
- 根据权利要求2所述的方法,其特征在于,所述第一条件为所述至少一个比特位的编号中的第一比特位将所述N条第一规则分为第一规则集合和第二规则集合,其中所述第一规则集合中的第一比特位上的值为第一取值,所述第二规则集合中的第一比特位上的值为第二取值;如果所述第一规则集合和所述第二规则集合中的规则的数量均大于第一阈值,则继续对所述第一规则集合和所述第二规则集合按照第二比特位或第三比特位进行划分,直到每个划分后的规则集合中剩余的规则数量小于等于第一阈值时为止,所述第一阈值由所述至少一个RAM的数据位宽决定。
- 根据权利要求1至3任一项所述的方法,其特征在于,组成所述N条第一规则中的二进制1或0中至少一个1或0用掩码*表示。
- 根据权利要求1至4任一项所述的方法,将所述M个层级的信息存储在至少一个RAM中,包括:将所述M个层级的信息存储在M个RAM中。
- 根据权利要求1至5任一项所述的方法,其特征在于,将所述片上字典树从所述根节点到至少一个中间节点,再从所述至少一个中间节点到尾节点的顺序划分成M个层级包括:根据所述至少一个RAM的数据位宽对所述片上字典树划分M个层级;其中,所述M个层级中的第一层级包括1个根节点和与所述根节点相连接的P个中间节点,所述M个层级中的第二层级包括的中间节点的数量不超过P+1个,P为正整数且P≥1。
- 根据权利要求6所述的方法,其特征在于,所述M个层级的信息包括第一层级的信息和第二层级的信息;所述第一层级的信息包括所述根节点的比特位编号和所述P个中间节点的比特位编号,所述第二层级的信息包括不超过P+1个中间节点的比特位编号。
- 根据权利要求1至7任一项所述的方法,其特征在于,所述方法还包括:将所述片上字典树的所有尾节点上的索引编号所指示的至少一条第一规则存储在X个RAM中,X为正整数且X≥1。
- 根据权利要求1至8任一项所述的方法,其特征在于,所述方法由处理器芯片中的核来执行,所述处理器芯片包括所述核以及X个片上RAM,所述处理器芯片与Y个片外RAM耦合,所述方法还包括:将所述至少一条第二规则构建成片外字典树,所述至少一条第二规则存储于所述Y个片外RAM中;对所述片外字典树划分至少一个层级,将所述片外字典树的每个所述层级的信息存储在所述Y个片外RAM中,其中Y为正整数且Y≥1。
- 根据权利要求9所述的方法,其特征在于,所述方法还包括:选择所述X个片上RAM中的至少一个,存储所述片外字典树的至少一个层级的信息;或者,选择所述Y个片外RAM中的至少一个,存储所述片上字典树的至少一个层级的信息。
- 一种访问控制列表ACL的规则查找方法,其特征在于,所述方法包括:获取报文信息,所述报文信息的内容通过二进制表示的1或0表示;将所述报文信息的内容按照字典树上的比特位编号逐一查找,确定与所述报文信息内容相匹配的规则;其中所述字典树由N条第一规则构建,N为正整数且N≥2,所述字典树包括根节点和至少两条分支,其中每条所述分支包括至少一个中间节点和一个尾节点,所述根节点和所述至少一个中间节点上设置有比特位的编号,所述比特位的编号为所述N条第一规则按照预设顺序依次排序后每个比特位所对应的次序号,所述尾节点上设置有索引编号,所述索引编号用于指示至少一条所述第一规则。
- 根据权利要求11所述的方法,其特征在于,将所述报文信息的内容按照字典树上的比特位编号逐一查找包括:从所述字典树的根节点上的第一比特位的编号开始,查找所述报文信息内容中对应第一比特位上的值,若与所述第一比特位上的值相连接的分支不是尾节点,则继续查找所述分支所连接的中间节点上的比特位编号,直到查找出所述分支的尾节点为止。
- 根据权利要求11或12所述的方法,其特征在于,所述确定与所述报文信息内容相匹配的规则包括:如果按照所述尾节点上的索引编号指示的至少一条规则与所述报文信息内容相同,则确定存在与所述报文信息相匹配的规则。
- 根据权利要求11至13任一项所述的方法,其特征在于,所述字典树按照从所述根节点到至少一个中间节点,再从所述至少一个中间节点 到尾节点的顺序划分成M个层级,M为正整数且M≥1,且所述M个层级的信息存储在至少一个随机存取存储单元RAM中。
- 根据权利要求11至14任一项所述的方法,其特征在于,所述方法由处理器芯片中的核来执行,所述处理器芯片包括所述核以及X个片上RAM,所述处理器芯片与Y个片外RAM耦合,所述方法还包括:将所述至少一条第二规则构建成片外字典树,所述至少一条第二规则存储于所述Y个片外RAM中;对所述片外字典树划分至少一个层级,将所述片外字典树的每个所述层级的信息存储在所述Y个片外RAM中,其中Y为正整数且Y≥1。
- 根据权利要求15所述的方法,其特征在于,所述方法还包括:选择所述X个片上RAM中的至少一个,存储所述片外字典树的至少一个层级的信息;或者,选择所述Y个片外RAM中的至少一个,存储片上字典树的至少一个层级的信息,所述片上字典树由所述N条第一规则构成。
- 根据权利要求15或16所述的方法,其特征在于,所述字典树为所述片内字典树或所述片外字典树,在所述将报文信息的内容按照字典树上的比特位编号逐一查找的步骤之前,所述方法还包括:通过模式控制信号选择将所述片外字典树确定为所述字典树,或者将所述片内字典树确定为所述字典树。
- 一种访问控制列表ACL的规则分类装置,其特征在于,应用于对芯片上部ACL规则的划分,所述装置包括:获取单元,用于获取N条第一规则,N为正整数且N≥2,所述N条第一规则中的每条规则均由二进制1或0组成,并且所述N条第一规则的长度均相同;处理单元,用于根据所述N条第一规则构建片上字典树,将所述片上字典树按照从所述根节点到至少一个中间节点,再从所述至少一个中间节点到尾节点的顺序划分成M个层级,M为正整数且M≥1,以及将所述M个层级的信息存储在至少一个随机存取存储单元RAM中,其中所述字典树包括根节点和至少两条分支,每条所述分支包括至少一个中间节点和一个尾节点,所述根节点和所述至少一个中间节点上设置有比特位的编号,所述比特位的编号为所述N条规则按照预设顺序依次排序后每个比特位所对应的次序号,所述尾节点上设置有索引编号,所述索引编号用于指示至少一条所述第一规则。
- 根据权利要求18所述的装置,其特征在于,所述处理单元,具体用于按照第一条件确定至少一个所述比特位的编号;将所述至少一个比特位的编号存储在所述片上字典树的至少一个节点上,所述至少一个节点包括所述根节点和所述至少一个中间节点;将至少一条规则的索引编号存储在每条分支的尾节点上。
- 根据权利要求19所述的装置,其特征在于,所述第一条件为所述至少一个比特位的编号中的第一比特位将所述N条第一规则分为第一规则集合和第二规则集合,其中所述第一规则集合中的第一比特位上的值为第一取值,所述第二规则集合中的第 一比特位上的值为第二取值;如果所述第一规则集合和所述第二规则集合中的规则的数量均大于第一阈值,则继续对所述第一规则集合和所述第二规则集合按照第二比特位或第三比特位进行划分,直到每个划分后的规则集合中剩余的规则数量小于等于第一阈值时为止,所述第一阈值由所述至少一个随机存取存储单元RAM的数据位宽决定。
- 根据权利要求18至20任一项所述的装置,其特征在于,组成所述N条第一规则中的二进制1或0中至少一个1或0用掩码*表示。
- 根据权利要求18至21任一项所述的装置,所述处理单元,具体用于将所述M个层级的信息存储在M个RAM中。
- 根据权利要求18至22任一项所述的装置,其特征在于,所述处理单元,具体用于根据所述至少一个RAM的数据位宽对所述片上字典树划分M个层级;其中所述M个层级中的第一层级包括1个根节点和与所述根节点相连接的P个中间节点,所述M个层级中的第二层级包括的中间节点的数量不超过P+1个,P为正整数且P≥1。
- 根据权利要求23所述的装置,其特征在于,所述M个层级的信息包括第一层级的信息和第二层级的信息;所述第一层级的信息包括所述根节点的比特位编号和所述P个中间节点的比特位编号,所述第二层级的信息包括不超过P+1个中间节点的比特位编号。
- 根据权利要求18至24任一项所述的装置,其特征在于,所述处理单元,还用于将所述片上字典树的所有尾节点上的索引编号所指示的至少一条规则存储在X个RAM中,X为正整数且X≥1。
- 根据权利要求18至24任一项所述的装置,其特征在于,所述处理单元,还用于将所述至少一条第二规则构建成片外字典树,所述至少一条第二规则存储于Y个片外RAM中;对所述片外字典树划分至少一个层级,将所述片外字典树的每个所述层级的信息存储在所述Y个片外RAM中,其中Y为正整数且Y≥1
- 根据权利要求26所述的装置,其特征在于,所述处理单元,还用于选择所述X个片上RAM中的至少一个,存储所述片外字典树的至少一个层级的信息;或者,选择所述Y个片外RAM中的至少一个,存储所述片上字典树的至少一个层级的信息。
- 一种访问控制列表ACL的规则查找装置,其特征在于,所述装置包括:获取电路,用于获取报文信息,所述报文信息的内容通过二进制表示的1或0表示;处理电路,用于将所述报文信息的内容按照字典树上的比特位编号逐一查找,确定与所述报文信息内容相匹配的规则;其中所述字典树由N条第一规则构建,N为正整数且N≥2,所述字典树包括根节点和至少两条分支,其中每条所述分支包括至少一个中间节点和一个尾节点,所述根节点和所述至少一个中间节点上设置有比特位的编号,所述比特位的编号为所述N条第一规则按照预设顺序依次排序后每个比特位所对应的次序号,所述尾节点上设置有 索引编号,所述索引编号用于指示至少一条第一规则。
- 根据权利要求28所述的装置,其特征在于,所述处理电路,具体用于从所述字典树的根节点上的第一比特位的编号开始,查找所述报文信息的内容中对应第一比特位上的值,若与所述第一比特位上的值相连接的分支不是尾节点,则继续查找所述分支所连接的中间节点上的比特位编号,直到查找出所述分支的尾节点为止。
- 根据权利要求28或29所述的装置,其特征在于,所述处理电路,具体用于如果按照所述尾节点上的索引编号指示的至少一条规则与所述报文信息内容相同,则确定存在与所述报文信息相匹配的规则。
- 根据权利要求28至30任一项所述的装置,其特征在于,所述处理电路,具体用于将所述字典树按照从所述根节点到至少一个中间节点,再从所述至少一个中间节点到尾节点的顺序划分成M个层级,M为正整数且M≥2,且所述M个层级的信息存储在至少一个RAM中。
- 根据权利要求28至31任一项所述的装置,其特征在于,所述处理电路,还用于将所述至少一条第二规则构建成片外字典树,所述至少一条第二规则存储于Y个片外RAM中;对所述片外字典树划分至少一个层级,将所述片外字典树的每个所述层级的信息存储在所述Y个片外RAM中,其中Y为正整数且Y≥1。
- 根据权利要求32所述的装置,其特征在于,所述处理电路,还用于选择所述X个片上RAM中的至少一个,存储所述片外字典树的至少一个层级的信息;或者,选择所述Y个片外RAM中的至少一个,存储片上字典树的至少一个层级的信息,所述片上字典树由所述N条第一规则构成。
- 根据权利要求32或33所述的装置,其特征在于,所述字典树为所述片内字典树或所述片外字典树,所述处理电路,具体用于通过模式控制信号选择将所述片外字典树确定为所述字典树,或者将所述片内字典树确定为所述字典树。
- 一种通信装置,包括处理器和存储器,所述处理器与所述存储器耦合,其特征在于,所述存储器用于存储指令;所述处理器用于调用所述指令并执行如权利要求1至10中任一项所述的方法。
- 一种计算机可读存储介质,当所述计算机可读存储介质在装置上运行时,执行如权利要求1至10中任一项所述的方法。
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201880095689.7A CN112425131B (zh) | 2018-11-30 | 2018-11-30 | 一种acl的规则分类方法、查找方法和装置 |
PCT/CN2018/118782 WO2020107484A1 (zh) | 2018-11-30 | 2018-11-30 | 一种acl的规则分类方法、查找方法和装置 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/118782 WO2020107484A1 (zh) | 2018-11-30 | 2018-11-30 | 一种acl的规则分类方法、查找方法和装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020107484A1 true WO2020107484A1 (zh) | 2020-06-04 |
Family
ID=70852516
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/118782 WO2020107484A1 (zh) | 2018-11-30 | 2018-11-30 | 一种acl的规则分类方法、查找方法和装置 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112425131B (zh) |
WO (1) | WO2020107484A1 (zh) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112804206A (zh) * | 2020-12-31 | 2021-05-14 | 北京知道创宇信息技术股份有限公司 | 基于查找树的报文匹配方法、装置和电子设备 |
CN113268613A (zh) * | 2021-04-30 | 2021-08-17 | 上海右云信息技术有限公司 | 一种用于获取侵权线索的方法、设备、介质及程序产品 |
CN115633097A (zh) * | 2022-12-21 | 2023-01-20 | 新华三信息技术有限公司 | 一种访问控制列表acl压缩方法及装置 |
CN115834515A (zh) * | 2022-10-28 | 2023-03-21 | 新华三半导体技术有限公司 | 一种报文处理方法、装置、设备及介质 |
CN116156026A (zh) * | 2023-04-20 | 2023-05-23 | 中国人民解放军国防科技大学 | 一种支持rmt的解析器、逆解析器、解析方法及交换机 |
CN116633865A (zh) * | 2023-07-25 | 2023-08-22 | 北京城建智控科技股份有限公司 | 网络流量控制方法、装置、电子设备及存储介质 |
WO2024016863A1 (zh) * | 2022-07-20 | 2024-01-25 | 华为技术有限公司 | 规则查找方法、装置、设备及计算机可读存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101557312A (zh) * | 2009-05-08 | 2009-10-14 | 中兴通讯股份有限公司 | 控制网络设备的访问控制列表的方法及装置 |
CN102487374A (zh) * | 2010-12-01 | 2012-06-06 | 中兴通讯股份有限公司 | 一种访问控制列表实现方法及装置 |
CN102739520A (zh) * | 2012-05-31 | 2012-10-17 | 华为技术有限公司 | 查找方法及装置 |
CN104750834A (zh) * | 2015-04-03 | 2015-07-01 | 浪潮通信信息系统有限公司 | 一种规则的存储方法、匹配方法及装置 |
CN106326475A (zh) * | 2016-08-31 | 2017-01-11 | 中国科学院信息工程研究所 | 一种高效的静态哈希表实现方法及系统 |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7366728B2 (en) * | 2004-04-27 | 2008-04-29 | International Business Machines Corporation | System for compressing a search tree structure used in rule classification |
US9596222B2 (en) * | 2011-08-02 | 2017-03-14 | Cavium, Inc. | Method and apparatus encoding a rule for a lookup request in a processor |
US9098601B2 (en) * | 2012-06-27 | 2015-08-04 | Futurewei Technologies, Inc. | Ternary content-addressable memory assisted packet classification |
CN104113516A (zh) * | 2013-04-19 | 2014-10-22 | 中国移动通信集团设计院有限公司 | 一种识别防火墙的规则冲突的方法和终端 |
CN104579941A (zh) * | 2015-01-05 | 2015-04-29 | 北京邮电大学 | 一种OpenFlow交换机中的报文分类方法 |
CN106487769B (zh) * | 2015-09-01 | 2020-02-04 | 深圳市中兴微电子技术有限公司 | 一种访问控制列表acl的实现方法及装置 |
- 2018
- 2018-11-30 WO PCT/CN2018/118782 patent/WO2020107484A1/zh active Application Filing
- 2018-11-30 CN CN201880095689.7A patent/CN112425131B/zh active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101557312A (zh) * | 2009-05-08 | 2009-10-14 | 中兴通讯股份有限公司 | 控制网络设备的访问控制列表的方法及装置 |
CN102487374A (zh) * | 2010-12-01 | 2012-06-06 | 中兴通讯股份有限公司 | 一种访问控制列表实现方法及装置 |
CN102739520A (zh) * | 2012-05-31 | 2012-10-17 | 华为技术有限公司 | 查找方法及装置 |
CN104750834A (zh) * | 2015-04-03 | 2015-07-01 | 浪潮通信信息系统有限公司 | 一种规则的存储方法、匹配方法及装置 |
CN106326475A (zh) * | 2016-08-31 | 2017-01-11 | 中国科学院信息工程研究所 | 一种高效的静态哈希表实现方法及系统 |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112804206A (zh) * | 2020-12-31 | 2021-05-14 | 北京知道创宇信息技术股份有限公司 | 基于查找树的报文匹配方法、装置和电子设备 |
CN113268613A (zh) * | 2021-04-30 | 2021-08-17 | 上海右云信息技术有限公司 | 一种用于获取侵权线索的方法、设备、介质及程序产品 |
CN113268613B (zh) * | 2021-04-30 | 2024-04-09 | 上海右云信息技术有限公司 | 一种用于获取侵权线索的方法、设备、介质及程序产品 |
WO2024016863A1 (zh) * | 2022-07-20 | 2024-01-25 | 华为技术有限公司 | 规则查找方法、装置、设备及计算机可读存储介质 |
CN115834515A (zh) * | 2022-10-28 | 2023-03-21 | 新华三半导体技术有限公司 | 一种报文处理方法、装置、设备及介质 |
CN115633097A (zh) * | 2022-12-21 | 2023-01-20 | 新华三信息技术有限公司 | 一种访问控制列表acl压缩方法及装置 |
CN115633097B (zh) * | 2022-12-21 | 2023-04-28 | 新华三信息技术有限公司 | 一种访问控制列表acl压缩方法及装置 |
CN116156026A (zh) * | 2023-04-20 | 2023-05-23 | 中国人民解放军国防科技大学 | 一种支持rmt的解析器、逆解析器、解析方法及交换机 |
CN116633865A (zh) * | 2023-07-25 | 2023-08-22 | 北京城建智控科技股份有限公司 | 网络流量控制方法、装置、电子设备及存储介质 |
CN116633865B (zh) * | 2023-07-25 | 2023-11-07 | 北京城建智控科技股份有限公司 | 网络流量控制方法、装置、电子设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN112425131B (zh) | 2022-03-04 |
CN112425131A (zh) | 2021-02-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020107484A1 (zh) | 一种acl的规则分类方法、查找方法和装置 | |
US11102120B2 (en) | Storing keys with variable sizes in a multi-bank database | |
US10135734B1 (en) | Pipelined evaluations for algorithmic forwarding route lookup | |
US10778721B1 (en) | Hash-based ACL lookup offload | |
Liu | Efficient mapping of range classifier into ternary-CAM | |
US8089961B2 (en) | Low power ternary content-addressable memory (TCAMs) for very large forwarding tables | |
US7325059B2 (en) | Bounded index extensible hash-based IPv6 address lookup method | |
Bando et al. | Flashtrie: Hash-based prefix-compressed trie for IP route lookup beyond 100Gbps | |
US20050259672A1 (en) | Method to improve forwarding information base lookup performance | |
Jedhe et al. | A scalable high throughput firewall in FPGA | |
Bando et al. | FlashTrie: beyond 100-Gb/s IP route lookup using hash-based prefix-compressed trie | |
US20120136846A1 (en) | Methods of hashing for networks and systems thereof | |
US9319299B2 (en) | Method and apparatus for link aggregation using links having different link speeds | |
Le et al. | Scalable tree-based architectures for IPv4/v6 lookup using prefix partitioning | |
US8848707B2 (en) | Method for IP longest prefix match using prefix length sorting | |
US20180107759A1 (en) | Flow classification method and device and storage medium | |
CN1216473C (zh) | 支持多个下一跳的三态内容可寻址存储器查找方法及系统 | |
CN105791455B (zh) | 三态内容寻址存储器tcam空间的处理方法及装置 | |
WO2021104393A1 (zh) | 多规则流分类的实现方法、设备和存储介质 | |
CN107977160B (zh) | 交换机存取资料的方法 | |
US20240056393A1 (en) | Packet forwarding method and device, and computer readable storage medium | |
WO2024037243A1 (zh) | 一种数据处理方法、装置和系统 | |
US20230367720A1 (en) | Data search method and apparatus, and integrated circuit | |
US7702882B2 (en) | Apparatus and method for performing high-speed lookups in a routing table | |
Jia et al. | FACL: A Flexible and High-Performance ACL engine on FPGA-based SmartNIC |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18941301 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18941301 Country of ref document: EP Kind code of ref document: A1 |