WO2019241926A1 - Method and device for managing an access control list - Google Patents

Method and device for managing an access control list

Info

Publication number
WO2019241926A1
WO2019241926A1 (PCT/CN2018/091954)
Authority
WO
WIPO (PCT)
Prior art keywords
sram
node
stored
keyword
compression
Prior art date
Application number
PCT/CN2018/091954
Other languages
English (en)
Chinese (zh)
Inventor
杨升
董细金
高峰
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to PCT/CN2018/091954 priority Critical patent/WO2019241926A1/fr
Priority to CN201880091018.3A priority patent/CN111819552A/zh
Publication of WO2019241926A1 publication Critical patent/WO2019241926A1/fr

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00 - Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/40 - Network security protocols
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 - Addressing or allocation; Relocation
    • G06F 12/06 - Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication

Definitions

  • the present application relates to the field of data processing technologies, and in particular, to a method and device for managing an access control list (Access Control List, ACL).
  • ACL Access Control List
  • ACL is a list of instructions configured in forwarding devices such as routers or switches.
  • the ACL includes multiple rules.
  • when a forwarding device receives a packet, it can extract a keyword from the packet, find the rule matching the extracted keyword among the multiple rules included in the ACL, and process the received packet accordingly. Because the number of rules included in an ACL can be extremely large, to facilitate searching, a forwarding device usually creates a corresponding search tree according to the ACL and then stores the ACL according to the layers included in the search tree.
  • the forwarding device may, according to the characteristics of the multiple rules included in the ACL, select multiple specific bits in the rules with which to divide the rules, thereby dividing the multiple rules into multiple buckets.
  • the maximum capacity of each of the multiple buckets is the same.
  • the forwarding device may create a search tree according to the process of dividing the multiple rules, and each leaf node of the search tree eventually points to a bucket. For example, suppose the maximum capacity of the bucket is 3.
  • the ACL includes 12 rules.
  • the forwarding device can first divide the 12 rules into 2 groups according to the 6th bit: rules whose 6th bit is 0 form the first group, rules whose 6th bit is 1 form the second group, and rules whose 6th bit is the wildcard character are placed in both groups at the same time. After the first and second groups are obtained, the number of rules in each of them is greater than 3, that is, greater than the maximum capacity of a bucket, so both groups must be divided further. The first group is divided according to the 3rd bit to obtain the third and fourth groups, and the second group is divided according to the 1st bit to obtain the fifth and sixth groups. The fourth, fifth, and sixth groups each contain no more than 3 rules, so their rules can be stored in three different buckets, while the third group still contains more than 3 rules.
  • the third group can be further divided according to the 13th bit to obtain the seventh and eighth groups. Because the seventh and eighth groups each contain no more than three rules, the rules they include can be stored in two further buckets. Through the above process, the 12 rules are divided into 5 buckets, and from the division process the search tree shown in FIG. 2 can be created. Each leaf node of the search tree stores the storage address of its corresponding bucket, and every node other than the leaf nodes stores the specific bit used at that node to divide the rules, together with segmentation information describing the division.
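The division just described can be sketched as a short recursive procedure. Everything below is illustrative: the ternary-string rule representation, the bucket capacity of 3, and the balance-based `pick_bit` heuristic are assumptions of this sketch, since the description does not fix a particular bit-selection method.

```python
# Illustrative sketch of dividing ternary ACL rules into buckets.
# Rules are strings over {'0', '1', '*'}; a rule with '*' at the split
# bit matches both values, so it is placed in both subgroups.

BUCKET_CAPACITY = 3  # maximum bucket capacity, as in the example above

def pick_bit(rules):
    # Stand-in heuristic (an assumption): choose the bit whose 0/1
    # split is most balanced, penalizing bits that do not split at all.
    width = len(rules[0])
    def imbalance(b):
        zeros = sum(1 for r in rules if r[b] in ('0', '*'))
        ones = sum(1 for r in rules if r[b] in ('1', '*'))
        penalty = len(rules) if zeros == len(rules) or ones == len(rules) else 0
        return abs(zeros - ones) + penalty
    return min(range(width), key=imbalance)

def split(rules):
    """Recursively divide `rules` until every group fits in one bucket."""
    if len(rules) <= BUCKET_CAPACITY:
        return [rules]            # this group becomes one bucket
    bit = pick_bit(rules)
    zeros = [r for r in rules if r[bit] in ('0', '*')]
    ones = [r for r in rules if r[bit] in ('1', '*')]
    return split(zeros) + split(ones)
```

Each returned list corresponds to one bucket; a leaf node of the search tree would then store that bucket's address.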
  • the forwarding device can start from the first layer of the search tree, treat each preset number of adjacent layers as a compression node layer, store the multiple nodes in each compression node layer in at least one compression node, and then store each compression node layer in its own static random access memory (SRAM).
  • a compression node layer includes at least one compression node.
  • the present application provides an access control list management method, device, and computer storage medium, which can be used to solve the problem in the related art of storage space wasted by each SRAM storing only one compressed node layer.
  • the technical scheme is as follows:
  • a method for managing an access control list includes:
  • obtaining the number N of compressed node layers, where each compressed node layer is formed by layers of the search tree corresponding to the access control list ACL and includes at least one compressed node, and N is an integer greater than or equal to 1;
  • storing the N compression node layers in M SRAMs, where M is a positive integer less than N, and each of the M SRAMs stores at least one compression node layer of the N compression node layers.
  • the forwarding device may store N compression node layers in fewer than N SRAMs, that is, multiple compression node layers may share one SRAM. This effectively avoids the wasted storage space caused by storing only one compression node layer in each SRAM and improves the utilization of the storage space of the SRAM.
  • the storing the N compression node layers in the M SRAMs includes:
  • the method further includes:
  • the searching for a rule matching the keyword based on the keyword and the N compression node layers stored in the M SRAMs includes:
  • if the k-th compressed node layer is stored in the i-th SRAM, finding, based on the keyword, the leaf node corresponding to the keyword from the at least one compressed node included in the k-th compressed node layer.
  • the method further includes:
  • the method further includes:
  • the method further includes:
  • searching for a rule matching the keyword based on the address of a bucket stored in the found leaf node includes:
  • an access control list management device is provided, and the access control list management device has a function of implementing the behavior of the access control list management method in the first aspect.
  • the apparatus for managing an access control list includes at least one module, and the at least one module is configured to implement the method for managing an access control list provided by the first aspect.
  • an access control list management device includes a processor and a memory, and the memory is configured to store a program that supports the access control list management device in performing the method provided by the first aspect.
  • the processor is configured to execute a program stored in the memory.
  • the access control list management device may further include a communication bus for establishing a connection between the processor and the memory.
  • a computer-readable storage medium stores instructions that, when run on a computer, cause the computer to execute the method for managing an access control list according to the first aspect.
  • a computer program product containing instructions which, when executed on a computer, causes the computer to execute the access control list management method described in the first aspect.
  • the technical solution provided by this application has the following beneficial effect: the number N of compressed node layers, each formed by adjacent preset numbers of layers on the search tree corresponding to the ACL, is obtained, and the N compressed node layers are stored in M SRAMs, where M is less than N and each SRAM stores at least one compression node included in at least one of the N compression node layers. In other words, N compression node layers can be stored in fewer than N SRAMs, so more than one compression node layer can be stored in each SRAM, which avoids the wasted storage space caused by storing only one compressed node layer in each SRAM and improves the utilization of the storage space of the SRAM.
  • FIG. 1 is a schematic diagram of dividing multiple rules included in an ACL provided by related technologies
  • FIG. 2 is a schematic diagram of a search tree created according to an ACL division process in the related art
  • FIG. 3 is a schematic diagram of an applicable implementation environment of an ACL management method according to an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a forwarding device according to an embodiment of the present application.
  • FIG. 5 is a flowchart of an ACL management method according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a compressed node layer of a search tree corresponding to an ACL according to an embodiment of the present application
  • FIG. 7 is a schematic diagram illustrating storing N compression node layers on M SRAMs according to an embodiment of the present application.
  • FIG. 8 is a flowchart of an ACL management method according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a distribution of N compression node layers on M SRAMs provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of an access control list management apparatus according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an implementation environment to which an access control list management method according to an embodiment of the present application is applied.
  • the implementation environment may include a transmitting device 301, a forwarding device 302, and a receiving device 303.
  • the transmitting device 301 may communicate with the forwarding device 302 through a wireless network or a wired network, and the receiving device 303 and the forwarding device 302 may also communicate through a wireless network or a wired network.
  • the sending-end device 301 may be used to send a message to the forwarding device 302, and the message may be a message sent by the sending-end device 301 to the receiving-end device 303.
  • the forwarding device 302 includes multiple SRAMs, and the forwarding device 302 may be configured with multiple ACLs. Among them, each ACL can include multiple rules configured by the user.
  • the forwarding device 302 may divide multiple rules in each ACL into multiple buckets, and create a search tree for each ACL according to the division process. Since the number of layers of an ACL search tree may be large, the forwarding device 302 may divide the multiple layers included in the search tree of each ACL into multiple compressed node layers, where the multiple nodes included in each compressed node layer are stored in at least one compression node.
  • the forwarding device 302 may store multiple compressed node layers included in each ACL in multiple SRAMs.
  • the forwarding device 302 may further include a ternary content addressable memory (TCAM).
  • TCAM ternary content addressable memory
  • the forwarding device 302 can be used to parse a received message sent by the sending-end device 301, extract a keyword from the message, find the rule corresponding to the keyword through the search tree of each ACL, and process the message according to the search result. For example, if the rule is found, the forwarding device 302 may forward the message to the receiving device 303; if not, the forwarding device 302 may refuse to forward the message.
  • the sending-end device 301 and the receiving-end device 303 may be computer devices such as a smart phone, a tablet computer, a notebook computer, and a desktop computer.
  • the forwarding device 302 may be a router, a switch, or another routing and switching access integrated device.
  • FIG. 4 is a schematic structural diagram of a forwarding device 302 in the foregoing implementation environment provided by an embodiment of the present application.
  • the forwarding device mainly includes a transmitter 3021, a receiver 3022, a memory 3023, a processor 3024, and a communication bus 3025.
  • the structure of the forwarding device 302 shown in FIG. 4 does not constitute a limitation on the forwarding device 302; it may include more or fewer components than shown in the figure, combine some components, or arrange the components differently, which is not limited in the embodiments of the present application.
  • the transmitter 3021 may be configured to send data and/or signaling to the receiving-end device 303.
  • the receiver 3022 may be used to receive data and/or signaling sent by the sending-end device 301.
  • the memory 3023 may be used to store the multiple rules included in the ACL, the search tree of the ACL, and the data sent by the sending-end device 301 described above, and the memory 3023 may also be used to store the program for executing the method provided by the embodiments of the present application.
  • the memory 3023 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or another magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto.
  • EEPROM Electrically Erasable Programmable Read-Only Memory
  • CD-ROM Compact Disc Read-Only Memory
  • the memory 3023 may exist independently, and is connected to the processor 3024 through a communication bus 3025. The memory 3023 may also be integrated with the processor 3024.
  • the processor 3024 is the control center of the forwarding device 302.
  • the processor 3024 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (Application-Specific Integrated Circuit, hereinafter referred to as ASIC), or one or more integrated circuits used to control the execution of the programs of the solution.
  • ASIC Application-Specific Integrated Circuit
  • the processor 3024 can implement the ACL storage method provided by the embodiment of the present invention by running or executing software programs and / or modules stored in the memory 3023 and calling data stored in the memory 3023.
  • processor 3024 and the memory 3023 can transmit information through a communication bus 3025.
  • FIG. 5 is a flowchart of an ACL management method according to an embodiment of the present application.
  • the method can be applied to the forwarding device 302 in the implementation environment shown in FIG. 3.
  • the method includes the following steps:
  • Step 501 Obtain the number N of compressed node layers.
  • the forwarding device may start from the first layer of the search tree and treat each adjacent preset number of layers as a compressed node layer, thereby dividing the search tree into multiple compressed node layers; the forwarding device can then count the number N of compressed node layers included in the search tree.
  • each compressed node layer includes multiple layers of a search tree
  • each compressed node layer will include multiple nodes, and the forwarding device can store the multiple nodes in each compressed node layer within at least one compressed node.
  • FIG. 6 is a schematic diagram of a compressed node layer of a search tree corresponding to an ACL according to an embodiment of the present application.
  • the forwarding device may start from the first layer, and use the first layer to the fourth layer as the first compression node layer.
  • the multiple nodes included in the first compression node layer may be stored in one compression node, that is, the first compression node layer includes one compression node. Starting from the fifth layer, the fifth to eighth layers are used as the second compression node layer. It should be noted that in the second compression node layer, the leaf nodes in the fifth layer can be stored in one compression node.
  • each top-level node and the branch derived from that node are stored in one compressed node.
  • the two leaf nodes can be stored in one compressed node, and each of the remaining 4 non-leaf nodes serves as a top-level node; each top-level node and the branch derived from it are stored in one compression node, thereby obtaining four compression nodes.
  • the forwarding device can process according to the above method, so as to obtain N compressed node layers.
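As a small worked illustration of obtaining N, the number of compressed node layers follows directly from the tree depth and the preset grouping size. The default of four layers per stage below is taken from the FIG. 6 example and is an assumption, not a mandated value:

```python
import math

def num_compressed_layers(tree_depth, layers_per_stage=4):
    """Number N of compressed node layers when each adjacent group of
    `layers_per_stage` tree layers forms one compressed node layer.
    The default of 4 follows the FIG. 6 example; it is configurable."""
    return math.ceil(tree_depth / layers_per_stage)
```

For an eight-layer search tree grouped four layers at a time, this gives N = 2, matching the two compression node layers of FIG. 6.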
  • Step 502 Store the N compression node layers in M SRAMs, where M is a positive integer less than N, and each of the M SRAMs stores at least one compression node layer of the N compression node layers.
  • if the forwarding device stores the at least one compression node included in each compression node layer in its own SRAM, then storing N compression node layers requires N SRAMs.
  • the forwarding device may store N compression node layers in less than N SRAMs, that is, each SRAM may store at least one compression node layer.
  • a plurality of compression node layers can share a SRAM, which effectively avoids the problem of wasted space of the SRAM.
  • the forwarding device may store the N compressed node layers in M SRAMs in the following ways.
  • The first method: according to the sequence of the N compression node layers, start from the first SRAM of the M SRAMs and store one compression node layer on each SRAM in order from the first SRAM to the M-th SRAM; when the M-th SRAM has been reached, return to the first SRAM and again store one compression node layer on each SRAM in order from the first SRAM to the M-th SRAM, and so on until all N compression node layers are stored.
  • the forwarding device may determine the sequence of the compressed node layer according to the position of the compressed node layer on the search tree and in a top-down order. That is, the compressed node layer at the top of the search tree is the first compressed node layer, the next layer pointed to by the last node in the first compressed node layer is the second compressed node layer, and so on.
  • the M SRAMs may also be arranged in a certain order.
  • the forwarding device may sequentially store the first compression node layer to the M-th compression node layer, in the order of the N compression node layers, on the first SRAM to the M-th SRAM.
  • the first compression node layer is stored on the first SRAM
  • the second compression node layer is stored on the second SRAM
  • the i-th compression node layer is stored on the i-th SRAM
  • the forwarding device can return to the first SRAM, store the (M+1)-th compression node layer on the first SRAM, store the (M+2)-th compression node layer on the second SRAM, and so on, until all N compression node layers are completely stored.
  • the i-th SRAM of the M SRAMs will store the i-th compression node layer, the (M+i)-th compression node layer, the (2M+i)-th compression node layer, and so on, where each such index is less than or equal to N.
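The first storage method is a plain round-robin mapping. A minimal sketch, with layer and SRAM indices 1-based to match the text:

```python
def round_robin_layout(n_layers, m_srams):
    """Method 1: layer k of the N compression node layers is stored on
    SRAM ((k - 1) % M) + 1, so the i-th SRAM holds layers i, M+i, 2M+i, ..."""
    return {k: ((k - 1) % m_srams) + 1 for k in range(1, n_layers + 1)}
```

For the FIG. 7 example (N = 10, M = 4) this places layers 1, 5, and 9 on the first SRAM and layers 2, 6, and 10 on the second.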
  • FIG. 7 is a schematic diagram of storing N compression node layers on M SRAMs according to an embodiment of the present application.
  • SRAM1 is the first SRAM
  • SRAM2 is the second SRAM
  • Conde_stage_01 is the first compression node layer
  • Conde_stage_02 is the second compression node layer
  • Conde_stage_10 is the tenth compression node layer.
  • the forwarding device can store Conde_stage_01 on SRAM1, Conde_stage_02 on SRAM2, Conde_stage_03 on SRAM3, and Conde_stage_04 on SRAM4.
  • the forwarding device can return to SRAM1, store Conde_stage_05 on SRAM1, Conde_stage_06 on SRAM2, Conde_stage_07 on SRAM3, and Conde_stage_08 on SRAM4. After that, the forwarding device can return to SRAM1 again, store Conde_stage_09 on SRAM1, and store Conde_stage_10 on SRAM2. In this way, 10 compression node layers are stored on 4 SRAMs.
  • Conde_stage_01, Conde_stage_05, and Conde_stage_09 are stored on SRAM1
  • Conde_stage_02, Conde_stage_06, and Conde_stage_10 are stored on SRAM2
  • Conde_stage_03 and Conde_stage_07 are stored on SRAM3
  • Conde_stage_04 and Conde_stage_08 are stored on SRAM4.
  • The second method: according to the sequence of the N compressed node layers, the N compressed node layers are divided into Q groups in order; each of the first Q-1 groups includes M compressed node layers, and the last of the Q groups includes no more than M compressed node layers. The i-th compressed node layer of each group in the Q groups is stored in the i-th of the M SRAMs, where i is an integer greater than 0 and not greater than M.
  • the foregoing first method mainly introduces the implementation process of storing N compression node layers one by one into M SRAMs.
  • the forwarding device may first divide the N compressed node layers into Q groups in order, and then store the compressed node layers included in each group in the Q group in parallel.
  • the forwarding device may, according to the sequence of the N compression node layers and starting from the first compression node layer, place each adjacent M compression node layers into a group, thereby obtaining Q groups. If N is an integer multiple of M, each of the Q groups includes M compression node layers; if N is not an integer multiple of M, the last of the Q groups includes fewer than M compression node layers.
  • the forwarding device can store the first compression node layer of each group on the first SRAM of the M SRAMs, the second compression node layer on the second SRAM, and so on, until the compressed node layers included in each group are stored.
  • the forwarding device can store the compression node layers included in each group of the Q groups in parallel, which speeds up storage.
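A sketch of the second method, under the same 1-based indexing assumption. The resulting layer-to-SRAM mapping is the same as in the first method; the difference is that each group of up to M layers can be written to the M SRAMs in parallel:

```python
def grouped_layout(n_layers, m_srams):
    """Method 2: split the layers into Q consecutive groups of up to M
    layers; within each group, the i-th layer goes to the i-th SRAM."""
    layers = list(range(1, n_layers + 1))
    groups = [layers[s:s + m_srams] for s in range(0, n_layers, m_srams)]
    placement = {}
    for group in groups:                  # each group is one parallel write
        for i, layer in enumerate(group, start=1):
            placement[layer] = i
    return groups, placement
```

With N = 10 and M = 4 this yields Q = 3 groups, the last holding only two layers.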
  • The third method: according to the sequence of the N compression node layers, start from the first SRAM of the M SRAMs and store one compression node layer on each SRAM in order from the first SRAM to the M-th SRAM. When the M-th SRAM has been reached, if the number of remaining compression node layers is less than M, then, according to the size of the remaining storage space of each SRAM in the M SRAMs, select from the M SRAMs as many SRAMs as there are remaining compression node layers, and store the remaining compression node layers in the selected SRAMs.
  • If the number of remaining compression node layers is not less than M, storage restarts from the first SRAM in the M SRAMs, and one compression node layer is stored in turn on each SRAM in order from the first SRAM to the M-th SRAM.
  • the forwarding device may first store the first to the M-th of the N compression node layers on the first to the M-th SRAMs in sequence. After that, the forwarding device can check whether the number of remaining unstored compression node layers is less than M. If it is not less than M, it can start again from the first SRAM and store the (M+1)-th to the 2M-th compression node layers in turn from the first SRAM to the M-th SRAM.
  • the forwarding device can count the size of the remaining storage space of each SRAM in the M SRAMs, sort the SRAMs in descending order of remaining storage space, and select from the sorted result the first W SRAMs, where W is equal to the number of remaining compressed node layers.
  • the remaining compressed node layers are then stored in the W selected SRAMs.
  • Assume there are two remaining compression node layers and four SRAMs. The forwarding device can select from the four SRAMs the SRAM with the most remaining storage space and the SRAM ranked second. Assume the SRAM with the most remaining storage space is the third of the 4 SRAMs and the SRAM ranked second is the first SRAM; then the first of the remaining compression node layers is stored in the first SRAM, and the second of the remaining compression node layers is stored in the third SRAM.
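The third method can be sketched as follows. The per-layer sizes, the uniform SRAM capacity, and the rule that the selected SRAMs are filled in ascending index order (which reproduces the worked example above) are all assumptions of this sketch:

```python
def place_with_free_space(n_layers, m_srams, layer_size, capacity):
    """Method 3: full rounds go round-robin; the final partial round of
    W leftover layers goes to the W SRAMs with the most free space."""
    free = {s: capacity for s in range(1, m_srams + 1)}
    placement = {}
    k = 1
    for _ in range(n_layers // m_srams):      # complete round-robin rounds
        for s in range(1, m_srams + 1):
            placement[k] = s
            free[s] -= layer_size[k]
            k += 1
    leftover = list(range(k, n_layers + 1))   # fewer than M layers remain
    # Pick the W SRAMs with the largest remaining space, then fill them
    # in ascending SRAM order (assumed from the example above).
    chosen = sorted(sorted(free, key=free.get, reverse=True)[:len(leftover)])
    for layer, s in zip(leftover, chosen):
        placement[layer] = s
        free[s] -= layer_size[layer]
    return placement
```

With N = 10, M = 4, and sizes chosen so that the third SRAM has the most free space and the first SRAM the second most after two full rounds, the 9th layer lands on the first SRAM and the 10th on the third, as in the example.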
  • N compression node layers are stored on M SRAMs.
  • the forwarding device may store N compression node layers in fewer than N SRAMs, that is, multiple compression node layers may share one SRAM. This effectively avoids the wasted storage space caused by storing only one compression node layer in each SRAM and improves the utilization of the storage space of the SRAM.
  • After the forwarding device receives a message, the method provided by the following embodiment can be used to find the rule that matches the message. Based on this, referring to FIG. 8, another ACL management method is provided. The method includes the following steps:
  • Step 801 Obtain the number N of compressed node layers.
  • For the implementation of this step, reference may be made to step 501 in the foregoing embodiment.
  • Step 802 Store the N compression node layers in M SRAMs, where M is a positive integer less than N, and each of the M SRAMs stores at least one compression node layer of the N compression node layers.
  • For the implementation of this step, reference may be made to step 502 in the foregoing embodiment.
  • Step 803 When a message is received, keywords are extracted from the message.
  • the forwarding device may extract corresponding keywords from the message according to the rules in the ACL.
  • if the rules in the ACL classify packets based on the source address of the packet, the forwarding device can extract the source address information from the packet to obtain the keyword.
  • if the rules in the ACL classify packets based on time period information, the forwarding device can obtain the time period information of the packet and use it as the keyword.
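As an illustration of keyword extraction for a source-address rule, the address can be flattened into a bit string to be matched against the rule patterns. The bit-string representation of the keyword is an assumption of this sketch:

```python
def ipv4_to_bits(addr):
    """Turn a dotted-quad IPv4 source address into the 32-bit string
    that would serve as the lookup keyword against ternary rule patterns."""
    return ''.join(f'{int(octet):08b}' for octet in addr.split('.'))
```

For example, `ipv4_to_bits('192.168.0.1')` yields a 32-character string of 0s and 1s, one bit per address bit.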
  • Step 804 Based on the keywords and the N compressed node layers stored in the M SRAMs, find a rule that matches the keywords.
  • the forwarding device can find the leaf node that matches the keyword in the N compressed node layers stored in the M SRAMs, and then obtain the rule matching the keyword based on the address of the bucket stored in the found leaf node. If the forwarding device does not find a leaf node matching the keyword in the N compression node layers, the forwarding device may discard the message.
  • for the k-th compressed node layer: based on the keyword, find the leaf node corresponding to the keyword from the at least one compressed node included in the k-th compressed node layer; if a leaf node corresponding to the keyword is found, find the rule matching the keyword based on the address of the bucket stored in the found leaf node.
  • the forwarding device may search from the first SRAM to determine whether the first compression node layer is stored in the first SRAM, and if so, it may search the at least one compression node included in the first compression node layer for a leaf node matching the keyword. If such a leaf node is found in the first compression node layer, the forwarding device can obtain the address of the bucket stored in the leaf node and pass the address of the bucket from the first SRAM to the second SRAM, from the second SRAM to the third SRAM, and so on until the M-th SRAM, which outputs the address of the bucket. After that, the forwarding device can go to the corresponding bucket based on the bucket address and obtain the rule matching the keyword.
  • if the first SRAM does not store the first compression node layer, the forwarding device may look for the first compression node layer in the second SRAM; if the second SRAM still does not store it, the forwarding device can continue to the next SRAM until it finds the first compression node layer, and then decide whether to search for the second compression node layer according to the search result in the first compression node layer.
  • when the first or second storage method is used, the compression node layers are stored in the SRAMs one by one: when searching from the first SRAM, the first compression node layer is stored in the first SRAM, the second compression node layer in the second SRAM, and the i-th compression node layer correspondingly in the i-th SRAM, so the case where the k-th compressed node layer is not stored in the expected SRAM does not occur.
  • in this case, the forwarding device need not perform the step of judging whether the k-th compression node layer is stored in the i-th SRAM, but may directly search the k-th compression node layer stored in the i-th SRAM for the leaf node corresponding to the keyword.
  • under the third storage method, the forwarding device may first determine whether the k-th compressed node layer is stored in the i-th SRAM. If it is, the forwarding device searches the k-th compressed node layer for the leaf node corresponding to the keyword; if not, it continues to look for the k-th compressed node layer in the next SRAM until it is found.
  • the forwarding device can go to the SRAM after the i-th SRAM to look for the next compression node layer after the k-th compression node layer. If that compression node layer is stored in the next SRAM, the forwarding device can look for the leaf node corresponding to the keyword in that compression node layer.
  • if the i-th SRAM is the last of the M SRAMs, the forwarding device needs to start again from the first SRAM and look for the next compression node layer after the k-th compression node layer in the first SRAM.
  • Assume that the forwarding device stores the N compressed node layers according to the first method or the second method.
  • the forwarding device when performing a search, can start searching from SRAM1, and first find the leaf node corresponding to the keyword in Conde_stage_01 stored in SRAM1. If a leaf node is found, the forwarding device can pass the address of the bucket stored in the leaf node from SRAM1 to SRAM2, then from SRAM2 to SRAM3, from SRAM3 to SRAM4, and the forwarding device obtains the address of the bucket from the SRAM4. And according to the address of the bucket, the corresponding rule in the bucket is obtained.
  • If no leaf node is found, the forwarding device can search Conde_stage_02 stored in SRAM2. If a leaf node is found there, it is processed as described above; if not, the forwarding device continues to search downward. If no leaf node has been found by Conde_stage_04 in SRAM4, the forwarding device may return to SRAM1 and continue to look for leaf nodes in order, starting from Conde_stage_05.
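The ordered search just walked through can be sketched in a few lines. This is an illustration, not the patent's implementation: `find_leaf`, `sram_of`, and the toy lookup table are assumptions for the example, with 8 compressed node layers (Conde_stage_01 through Conde_stage_08) stored two per SRAM across SRAM1 to SRAM4.

```python
M = 4         # number of SRAMs
STAGES = 8    # number of compressed node layers (Conde_stage_01..08)

def sram_of(stage):
    """Under this storage order, stage s lives in SRAM ((s - 1) % M) + 1,
    so Conde_stage_05 wraps back around to SRAM1."""
    return (stage - 1) % M + 1

def search(keyword, find_leaf):
    """Probe the stages in order; on a hit, the hardware would forward the
    bucket address from the hitting SRAM through the remaining SRAMs up to
    SRAM M. Here we just return the address and the hitting SRAM."""
    for stage in range(1, STAGES + 1):
        addr = find_leaf(stage, keyword)
        if addr is not None:
            return addr, sram_of(stage)
    return None, None

# Toy lookup table: the keyword "k1" has its leaf node in Conde_stage_06.
leaves = {("k1", 6): 0xBEEF}
print(search("k1", lambda stage, kw: leaves.get((kw, stage))))  # prints (48879, 2)
```

Conde_stage_06 sits in SRAM2 on the second pass, matching the walkthrough in the next bullets.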
  • the forwarding device stores the N compressed node layers according to the third method.
  • Since Conde_stage_06 is not stored in SRAM2, the forwarding device can continue to determine whether Conde_stage_06 is stored in SRAM3. Because Conde_stage_06 is stored in SRAM3, the forwarding device looks for the leaf node corresponding to the keyword in Conde_stage_06. If it is found, the forwarding device can obtain the address of the bucket from the found leaf node and pass the address of the bucket to SRAM4. The forwarding device can then obtain the address of the bucket from SRAM4 and, according to that address, find the rule matching the keyword in the corresponding bucket.
  • The forwarding device can look for the leaf node corresponding to the keyword in any of the N compressed node layers stored in the M SRAMs. As soon as the leaf node corresponding to the keyword is found, the forwarding device can pass the address of the bucket stored in the leaf node to the M-th SRAM and end the search process.
  • In contrast, when each compressed node layer occupies its own SRAM, the forwarding device can obtain the address of the bucket stored in the leaf node only at the N-th SRAM.
  • In that case, the search delay equals the delay from the first compressed node layer to the last compressed node layer. It can be seen that, with the storage method and corresponding search method provided in the embodiments of the present application, when the forwarding device finds the leaf node corresponding to the keyword in a certain compressed node layer, the search process can be ended as early as possible, which effectively reduces the search delay.
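As a rough back-of-the-envelope comparison (an illustration under assumed costs, not figures from the patent): if each SRAM stage adds one unit of pipeline delay, a scheme with one layer per SRAM always traverses all N stages, while under the shared-SRAM scheme each pass over the M SRAMs covers M consecutive layers, so an early hit costs correspondingly fewer stages. The simple delay model below is my assumption.

```python
import math

def delay_per_layer_srams(n_layers):
    """One layer per SRAM: every lookup traverses the full n_layers-stage pipeline."""
    return n_layers

def delay_shared_srams(hit_stage, m_srams):
    """Shared SRAMs: a hit in layer hit_stage happens on pass ceil(hit_stage / M);
    the bucket address then forwards through the remaining SRAMs of that pass,
    so the lookup traverses passes * M stages in total."""
    passes = math.ceil(hit_stage / m_srams)
    return passes * m_srams

N, M = 8, 4
print(delay_per_layer_srams(N))   # 8 stages, regardless of where the leaf is
print(delay_shared_srams(2, M))   # leaf in Conde_stage_02: 4 stages
print(delay_shared_srams(6, M))   # leaf in Conde_stage_06: 8 stages (worst case)
```

Only a lookup that hits in the last pass pays as much as the per-layer scheme; every earlier hit ends sooner.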
  • FIG. 10 is a block diagram of an access control list management apparatus according to an embodiment of the present invention.
  • the apparatus is applied to a forwarding device.
  • the apparatus 1000 includes an acquisition module 1001 and a storage module 1002.
  • the obtaining module 1001 is configured to execute step 501 in the foregoing embodiment
  • the storage module 1002 is configured to execute step 502 in the foregoing embodiment.
  • the storage module 1002 is specifically configured to:
  • the device further includes:
  • an extraction module, configured to extract a keyword from a message when the message is received; and
  • a search module, configured to find a rule matching the keyword based on the keyword and the N compressed node layers stored in the M SRAMs.
  • the search module includes:
  • a first search submodule, configured to: if the k-th compressed node layer is stored in the i-th SRAM, search, based on the keyword, the at least one compression node included in the k-th compressed node layer for the leaf node corresponding to the keyword; and
  • a second search submodule, configured to: if the leaf node corresponding to the keyword is found, find the rule matching the keyword based on the address of the bucket stored in the found leaf node.
  • the search module further includes:
  • the second search submodule is specifically configured to:
  • The forwarding device can store the N compression node layers in fewer than N SRAMs; that is, multiple compression node layers can share one SRAM. This effectively avoids the waste of storage space caused by storing only one compression node layer in each SRAM and improves the utilization of the storage space of the SRAM.
  • The leaf node corresponding to the keyword can be found in any of the N compressed node layers stored in the M SRAMs. As soon as the leaf node corresponding to the keyword is found, the forwarding device can pass the address of the bucket stored in the leaf node to the M-th SRAM and end the search process.
  • The search process can therefore be ended as early as possible, which effectively reduces the search delay.
  • When the management apparatus of the access control list provided in the foregoing embodiment manages the access control list, the division into the functional modules described above is used only as an example. In actual applications, the above functions can be allocated to different functional modules as needed; that is, the internal structure of the apparatus can be divided into different functional modules to complete all or part of the functions described above.
  • the device for managing an access control list provided by the foregoing embodiment belongs to the same concept as the method embodiments shown in FIG. 5 and FIG. 8, and the specific implementation process thereof is detailed in the method embodiment, and details are not described herein again.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (for example, infrared, radio, or microwave).
  • The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or data center, that integrates one or more available media.
  • The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital versatile disc (DVD)), a semiconductor medium (for example, a solid state disk (SSD)), or the like.
  • a computer-readable storage medium is provided.
  • When the instructions in the computer-readable storage medium are run on a computer, the computer is caused to execute the steps of the method for managing an access control list shown in FIG. 5 or FIG. 8.
  • the program may be stored in a computer-readable storage medium.
  • the storage medium mentioned may be a read-only memory, a magnetic disk or an optical disk.

Abstract

The present invention relates to a method and device for managing an access control list, belonging to the technical field of data processing. The method comprises: obtaining N compressed node layers, where the compressed node layers are formed by the layers of a search tree corresponding to an access control list (ACL) and each compressed node layer comprises at least one compressed node, N being an integer greater than or equal to 1; and storing the N compressed node layers in M SRAMs, M being an integer greater than 0 and less than N, each of the M SRAMs storing at least one of the N compressed node layers. That is, in the embodiments of the present invention, multiple compressed node layers can share one SRAM, which effectively avoids the problem of wasted storage space caused by storing only one compressed node layer in each SRAM and improves the utilization of the storage space of the SRAM.
PCT/CN2018/091954 2018-06-20 2018-06-20 Procédé et dispositif de gestion de liste de contrôle d'accès WO2019241926A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/091954 WO2019241926A1 (fr) 2018-06-20 2018-06-20 Procédé et dispositif de gestion de liste de contrôle d'accès
CN201880091018.3A CN111819552A (zh) 2018-06-20 2018-06-20 访问控制列表的管理方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/091954 WO2019241926A1 (fr) 2018-06-20 2018-06-20 Procédé et dispositif de gestion de liste de contrôle d'accès

Publications (1)

Publication Number Publication Date
WO2019241926A1 true WO2019241926A1 (fr) 2019-12-26

Family

ID=68982877

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/091954 WO2019241926A1 (fr) 2018-06-20 2018-06-20 Procédé et dispositif de gestion de liste de contrôle d'accès

Country Status (2)

Country Link
CN (1) CN111819552A (fr)
WO (1) WO2019241926A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113328948A (zh) * 2021-06-02 2021-08-31 杭州迪普信息技术有限公司 资源管理方法、装置、网络设备及计算机可读存储介质
CN116633865A (zh) * 2023-07-25 2023-08-22 北京城建智控科技股份有限公司 网络流量控制方法、装置、电子设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101055535A (zh) * 2006-04-13 2007-10-17 国际商业机器公司 并行计算机和定位并行计算机中硬件故障的方法
CN102487374A (zh) * 2010-12-01 2012-06-06 中兴通讯股份有限公司 一种访问控制列表实现方法及装置
CN102662855A (zh) * 2012-04-17 2012-09-12 华为技术有限公司 一种二叉树的存储方法、系统
CN103858392A (zh) * 2011-08-02 2014-06-11 凯为公司 包分类规则的增量更新
CN107577756A (zh) * 2017-08-31 2018-01-12 南通大学 一种基于多层迭代的改进递归数据流匹配方法

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101577662B (zh) * 2008-05-05 2012-04-04 华为技术有限公司 一种基于树形数据结构的最长前缀匹配方法和装置
EP2515487B1 (fr) * 2010-01-26 2019-01-23 Huawei Technologies Co., Ltd. Procédé et dispositif pour stocker et rechercher un mot-clef

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101055535A (zh) * 2006-04-13 2007-10-17 国际商业机器公司 并行计算机和定位并行计算机中硬件故障的方法
CN102487374A (zh) * 2010-12-01 2012-06-06 中兴通讯股份有限公司 一种访问控制列表实现方法及装置
CN103858392A (zh) * 2011-08-02 2014-06-11 凯为公司 包分类规则的增量更新
CN102662855A (zh) * 2012-04-17 2012-09-12 华为技术有限公司 一种二叉树的存储方法、系统
CN107577756A (zh) * 2017-08-31 2018-01-12 南通大学 一种基于多层迭代的改进递归数据流匹配方法

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113328948A (zh) * 2021-06-02 2021-08-31 杭州迪普信息技术有限公司 资源管理方法、装置、网络设备及计算机可读存储介质
CN116633865A (zh) * 2023-07-25 2023-08-22 北京城建智控科技股份有限公司 网络流量控制方法、装置、电子设备及存储介质
CN116633865B (zh) * 2023-07-25 2023-11-07 北京城建智控科技股份有限公司 网络流量控制方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN111819552A (zh) 2020-10-23

Similar Documents

Publication Publication Date Title
US11102120B2 (en) Storing keys with variable sizes in a multi-bank database
US10389633B2 (en) Hash-based address matching
EP2643762B1 (fr) Procédé et appareil pour table de hachage à haute performance, actualisable et déterministe pour équipement de réseau
US7827182B1 (en) Searching for a path to identify where to move entries among hash tables with storage for multiple entries per bucket during insert operations
JP6004299B2 (ja) フローテーブルをマッチングするための方法及び装置、並びにスイッチ
Yu et al. Efficient multimatch packet classification and lookup with TCAM
CN112425131B (zh) 一种acl的规则分类方法、查找方法和装置
WO2015127721A1 (fr) Procédé et appareil de mise en correspondance de de données et support de stockage informatique
CN110955704A (zh) 一种数据管理方法、装置、设备及存储介质
US10587516B1 (en) Hash lookup table entry management in a network device
CN107948060A (zh) 一种新型的路由表建立、以及ip路由查找方法和装置
WO2019241926A1 (fr) Procédé et dispositif de gestion de liste de contrôle d'accès
CN104025520A (zh) 查找表的创建方法、查询方法、控制器、转发设备和系统
KR20210121253A (ko) 트래픽 분류 방법 및 장치
CN104253754A (zh) 一种acl快速匹配的方法和设备
CN109672623A (zh) 一种报文处理方法和装置
CN110704419A (zh) 数据结构、数据索引方法、装置及设备、存储介质
JP7133037B2 (ja) メッセージ処理方法、装置およびシステム
CN112231398A (zh) 数据存储方法、装置、设备及存储介质
US20160062888A1 (en) Least disruptive cache assignment
CN106209556B (zh) 一种地址学习、报文传输的方法及装置
JP2007243976A (ja) データ処理装置
CN113225308B (zh) 网络访问的控制方法、节点设备及服务器
CN111506658B (zh) 数据处理方法、装置、第一设备及存储介质
WO2024037243A1 (fr) Procédé, appareil et système de traitement de données

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18923383

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18923383

Country of ref document: EP

Kind code of ref document: A1