CN114281712A - Table lookup method and device, FPGA and readable storage medium - Google Patents



Publication number
CN114281712A
Authority
CN
China
Legal status
Pending
Application number
CN202111590931.8A
Other languages
Chinese (zh)
Inventor
周志伟
张阿珍
Current Assignee
Beijing Topsec Technology Co Ltd
Beijing Topsec Network Security Technology Co Ltd
Beijing Topsec Software Co Ltd
Original Assignee
Beijing Topsec Technology Co Ltd
Beijing Topsec Network Security Technology Co Ltd
Beijing Topsec Software Co Ltd
Application filed by Beijing Topsec Technology Co Ltd, Beijing Topsec Network Security Technology Co Ltd, Beijing Topsec Software Co Ltd filed Critical Beijing Topsec Technology Co Ltd


Abstract

The application provides a table lookup method, a table lookup apparatus, an FPGA, and a readable storage medium, relating to the field of computer technology. In this scheme, a CACHE is placed between the FPGA's lookup logic and the external memory: the FPGA queries the internal CACHE first, and because the CACHE's access speed is high, lookups complete quickly and lookup efficiency improves. Whether to then query the external memory is decided from the lookup result returned by the CACHE; since the external memory has a larger capacity and stores many more entries, an entry missed in the CACHE can still be found there.

Description

Table lookup method and device, FPGA and readable storage medium
Technical Field
The application relates to the field of computer technology, and in particular to a table lookup method, a table lookup apparatus, an FPGA, and a readable storage medium.
Background
In Field Programmable Gate Array (FPGA) designs, practical application scenarios usually require attaching large-capacity external memories, such as Double Data Rate (DDR) or Quad Data Rate (QDR) memory, to store large numbers of table entries. These off-chip memories offer high capacity at low cost, but their access speed is slow. Because the FPGA must read and write entries in the off-chip memory frequently at run time, the off-chip read/write speed becomes the FPGA's performance bottleneck. The usual remedy, adding multiple off-chip memory channels for parallel lookup, raises the design cost.
Disclosure of Invention
An object of the embodiments of the present application is to provide a table lookup method, an apparatus, an FPGA, and a readable storage medium that address the prior-art problems of slow off-chip table lookup and the high cost of parallel lookup across multiple off-chip memories.
In a first aspect, an embodiment of the present application provides a table lookup method, which is applied to a field programmable gate array FPGA, and the method includes:
sending a table lookup instruction to a cache memory (CACHE) inside the FPGA, wherein the table lookup instruction carries corresponding entry information and instructs the CACHE to search for a matching entry according to that entry information;
receiving a table look-up result returned by the CACHE;
and determining whether to perform table lookup from an external memory of the FPGA according to the table lookup result.
In this implementation, a CACHE is added between the FPGA's lookup logic and the external memory. The FPGA queries the internal CACHE first; because the CACHE's access speed is high, lookups complete quickly and lookup efficiency improves. Whether to query the external memory is then decided from the result the CACHE returns; since the external memory has a larger capacity and holds many more entries, an entry missed in the CACHE can still be found there.
Optionally, the determining whether to perform table lookup from an external memory of the FPGA according to the table lookup result includes:
if the lookup result is a lookup failure, sending a table lookup instruction to the external memory of the FPGA, instructing it to search for a matching entry according to the entry information. Sending the instruction to the external memory only after the CACHE misses means that high-speed lookup happens in the CACHE whenever possible, the number of external-memory accesses falls, and the FPGA's lookup performance improves.
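The miss-then-fallback flow described above can be sketched in software. This is an illustrative Python model only, not the patent's hardware implementation; the class and function names (`Cache`, `ExternalMemory`, `lookup`) are invented for the sketch.

```python
# Software model of the two-level lookup: query the fast on-chip CACHE
# first; only on a miss is the slower external memory consulted.

class Cache:
    def __init__(self):
        self.entries = {}                 # entry_id -> entry content

    def lookup(self, entry_id):
        return self.entries.get(entry_id)  # None models "lookup failure"

class ExternalMemory:
    def __init__(self, table):
        self.table = dict(table)          # full table: large but slow

    def lookup(self, entry_id):
        return self.table.get(entry_id)

def lookup(cache, ext_mem, entry_id):
    result = cache.lookup(entry_id)       # fast path: on-chip CACHE
    if result is not None:
        return result
    return ext_mem.lookup(entry_id)       # slow path: external memory

ext = ExternalMemory({1: "entry-1", 2: "entry-2"})
c = Cache()
c.entries[1] = "entry-1"                  # entry 1 happens to be cached
print(lookup(c, ext, 1))                  # served by the CACHE
print(lookup(c, ext, 2))                  # CACHE miss, served by external memory
```

In hardware the two memories differ in latency rather than in data type; the dictionaries here only stand in for the matching logic.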
Optionally, the sending a table lookup instruction to a CACHE memory CACHE inside the FPGA includes:
sending a table lookup instruction to both the CACHE inside the FPGA and the external memory of the FPGA, instructing each to search for a matching entry according to the entry information;
the determining whether to perform table lookup from an external memory of the FPGA according to the table lookup result includes:
if the table lookup result is a table lookup failure, sending a table lookup failure signal to the external memory, wherein the table lookup failure signal is used for indicating the external memory to continue table lookup operation;
and if the table look-up result is that the table look-up is successful, sending a table look-up success signal to the external memory, wherein the table look-up success signal is used for indicating the external memory to cancel the table look-up operation.
In this implementation, the lookup command is sent to the external memory and the internal CACHE at the same time, giving a parallel lookup. Because the CACHE searches faster, its result arrives first: on a CACHE miss the external-memory lookup simply continues, while on a CACHE hit the external memory no longer needs to be accessed and the CACHE's result is used directly. This reduces the number of external-memory accesses and improves lookup performance.
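The parallel scheme can be modeled as follows. This is a sequential Python simulation of behavior that is concurrent in hardware, offered only as an illustration; the names (`ExternalMemory`, `parallel_lookup`, `on_signal`) are assumptions, not terms from the patent.

```python
# Model of the parallel scheme: the lookup command goes to both the
# CACHE and the external memory at once; the CACHE answers first, and
# its result decides whether the external memory continues or cancels
# its (slower) search.

class ExternalMemory:
    def __init__(self, table):
        self.table = dict(table)
        self.pending = None                    # id of an in-flight lookup

    def start_lookup(self, entry_id):          # lookup command received
        self.pending = entry_id

    def on_signal(self, cache_hit):
        """React to the signal derived from the CACHE's result."""
        if cache_hit:                          # lookup-success signal: cancel
            self.pending = None
            return None
        entry_id, self.pending = self.pending, None
        return self.table.get(entry_id)        # lookup-failure signal: continue

def parallel_lookup(cache_entries, ext_mem, entry_id):
    ext_mem.start_lookup(entry_id)             # issued in parallel...
    result = cache_entries.get(entry_id)       # ...but CACHE answers first
    if result is not None:
        ext_mem.on_signal(cache_hit=True)      # cancel the external lookup
        return result
    return ext_mem.on_signal(cache_hit=False)  # let the external lookup finish

ext = ExternalMemory({1: "entry-1", 2: "entry-2"})
print(parallel_lookup({1: "entry-1"}, ext, 2))  # CACHE miss -> external memory
```

A real design would overlap the two searches in time; the point the sketch preserves is that the external memory only completes work the CACHE could not serve.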
Optionally, if the table lookup result is a table lookup failure, the method further includes:
obtaining the entry found by the external memory; and
updating the entry found by the external memory into the CACHE.
In this implementation, when a CACHE lookup fails, the entry found in the external memory is written back into the CACHE, so the next lookup for that entry hits in the CACHE and the external memory need not be accessed again, which makes searching more efficient.
Optionally, the entries in the CACHE are stored in a linked-list structure: an entry newly inserted into the CACHE is linked at the tail of the list, and the entry at the head of the list is deleted first.
In this implementation, the CACHE stores its entries in a linked-list structure, which allows entries to be queried quickly, further improving lookup efficiency, and supports fast deletion, insertion, and similar operations, so entries can be managed flexibly.
Optionally, the entries in the CACHE are arranged in order of priority: high-priority entries sit at the tail of the linked list and low-priority entries at the head. High-priority entries are thus retained in the CACHE preferentially, which raises their hit probability.
Optionally, the method further comprises:
if a new entry is inserted into the CACHE, or an entry in the CACHE ages out, the entry deleted from the CACHE is written back into the external memory. An entry that can no longer be found in the CACHE can therefore still be found in the external memory.
Optionally, the entry information of the entries in the CACHE is stored in a random access memory (RAM) of the FPGA, and the location in the CACHE originally used to store the entry information instead stores the address of that RAM. This expands the CACHE's storage resources without sacrificing its lookup performance.
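The indirection can be sketched as follows. This is an illustrative Python model, not the patent's circuit; `EntryRam` and the node layout are invented for the sketch.

```python
# Sketch of the RAM indirection: the CACHE's linked-list node keeps only
# a RAM address where the (larger) entry content lives, expanding CACHE
# capacity without changing the lookup flow.

class EntryRam:
    def __init__(self, size):
        self.cells = [None] * size
        self.free = list(range(size))     # free-address pool

    def store(self, content):
        addr = self.free.pop()
        self.cells[addr] = content
        return addr                       # the CACHE node stores this address

    def load(self, addr):
        return self.cells[addr]

ram = EntryRam(size=4)
cache_node = {"table_id": 7, "ram_addr": ram.store({"ip": "10.0.0.1"})}
# Lookup matches on table_id in the CACHE, then dereferences into RAM:
entry = ram.load(cache_node["ram_addr"])
print(entry["ip"])
```

The match step still runs at CACHE speed because only the small `table_id` is compared; the bulky entry content is fetched from RAM only on a hit.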
In a second aspect, an embodiment of the present application provides a table lookup apparatus, which operates on a field programmable gate array FPGA, and the apparatus includes:
the instruction sending module, configured to send a table lookup instruction to a cache memory (CACHE) inside the FPGA, wherein the instruction carries corresponding entry information and instructs the CACHE to search for a matching entry according to that information;
the result acquisition module is used for receiving the table look-up result returned by the CACHE;
and the judging module is used for determining whether to look up the table from the external memory of the FPGA according to the table look-up result.
In a third aspect, an embodiment of the present application provides an FPGA comprising a processing unit and a storage unit, the storage unit including a cache memory (CACHE) and storing computer-readable instructions that, when executed by the processing unit, perform the steps of the method provided in the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, performs the steps in the method as provided in the first aspect above.
Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the present application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the present application and should therefore not be regarded as limiting its scope; those skilled in the art may derive other related drawings from them without inventive effort.
Fig. 1 is a schematic structural diagram of an FPGA provided in an embodiment of the present application;
fig. 2 is a flowchart of a table lookup method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram illustrating a table lookup process in a CACHE and an external memory according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a CACHE provided in the prior art;
fig. 5 is a schematic diagram illustrating that a list item is stored in a linked list structure according to an embodiment of the present application;
fig. 6 is a schematic diagram of entry insertion and entry deletion according to an embodiment of the present application;
fig. 7 is a schematic diagram illustrating that table entry information is stored by using a RAM according to an embodiment of the present application;
fig. 8 is a block diagram of a table lookup apparatus according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
It should be noted that the terms "system" and "network" may be used interchangeably in the embodiments of the present application. "Plurality" means two or more, and may accordingly be understood as "at least two". "And/or" describes an association between objects and covers three cases: for A and/or B, it may mean A alone, both A and B, or B alone. In addition, unless otherwise specified, the character "/" generally indicates an "or" relationship between the objects before and after it.
The embodiment of the application provides a table lookup method. When a lookup is performed, a table lookup instruction is first sent to the cache memory (CACHE) inside the FPGA, instructing the CACHE to perform the lookup; the lookup result returned by the CACHE is then received, and whether to perform a lookup in the external memory is determined from that result. Because the external memory has a larger capacity and stores many more entries, an entry missed in the CACHE can still be found there.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an FPGA 100 according to an embodiment of the present disclosure. The FPGA 100 includes a storage unit 110 and a processing unit 120, the storage unit 110 includes a CACHE 112, and the FPGA 100 is further connected to an external memory 200.
Processing unit 120 may be configured to perform the table lookup method in this application, for example, send a table lookup instruction to CACHE112, and instruct CACHE112 to perform table lookup according to the table lookup instruction. After the table lookup is completed, CACHE112 returns the table lookup result to processing unit 120, and processing unit 120 may further determine whether to perform table lookup from external memory 200 according to the table lookup result.
It will be appreciated that the storage unit 110 stores computer-readable instructions; when these are executed by the processing unit 120, the FPGA 100 performs the table lookup method described below.
For the specific implementation process of this embodiment, refer to the following description of the table lookup method. Referring to fig. 2, fig. 2 is a flowchart of the table lookup method provided in the embodiment of the present application. The method is applied to the processing unit in the FPGA and may include the following steps:
step S210: and sending a table look-up instruction to a CACHE inside the FPGA.
When the FPGA needs to access an entry, the processing unit in the FPGA can send a table lookup instruction to the CACHE. Because the CACHE's read speed is high, a lookup completes in about two clock cycles, so sending the instruction to the CACHE first yields a result quickly. The instruction carries the entry information of the entry to be found and instructs the CACHE to search for a matching entry accordingly. For example, if the entry information is an entry ID, the CACHE searches for the entry matching that ID; the entry information may also be concrete five-tuple information, in which case the CACHE searches for the entry matching that five-tuple.
After receiving the entry information carried in the lookup instruction, the CACHE matches it against the entry information of each stored entry to find a matching entry; when the search completes, the CACHE returns the corresponding lookup result to the processing unit.
Step S220: and receiving a table look-up result returned by the CACHE.
After the CACHE finishes its lookup, it returns the corresponding result to the processing unit; the result indicates either that a matching entry was found or that no entry was found.
Step S230: and determining whether to perform table lookup from an external memory of the FPGA according to the table lookup result.
If the lookup result returned by the CACHE is that no entry was found, the entry being sought is not stored in the CACHE. In that case (a lookup failure), the processing unit can forward the lookup instruction to the external memory, instructing it to search for a matching entry according to the entry information carried in the instruction. After receiving the instruction, the external memory matches that entry information against the entry information of each entry it stores, and if a matching entry is found, returns the corresponding result to the processing unit.
Sending the lookup instruction to the external memory only after a CACHE miss means that high-speed lookup happens preferentially in the CACHE, the number of external-memory accesses is reduced, and the FPGA's lookup performance improves.
If the lookup result returned by the CACHE is a successful lookup, the matching entry has already been found and the external memory need not be accessed at all; because the CACHE's read speed exceeds the external memory's, finding the entry in the CACHE achieves a fast lookup.
It should be understood that frequently accessed entries may be cached in the CACHE in advance: the external memory stores the full, large set of entries, and the CACHE holds a subset of them. Since the external memory's capacity far exceeds the CACHE's, looking up directly in the external memory would require matching against many more entries, and its read speed is lower than the CACHE's, so the lookup would be slower.
The external memory may be a DDR memory, a QDR memory, or the like, or may be other memories with larger capacities, such as a disk memory.
In this implementation, a CACHE is added between the FPGA's lookup logic and the external memory. The FPGA queries the internal CACHE first; because the CACHE's access speed is high, lookups complete quickly and lookup efficiency improves. Whether to query the external memory is then decided from the result the CACHE returns; since the external memory has a larger capacity and holds many more entries, an entry missed in the CACHE can still be found there.
Building on the above embodiment, to further improve lookup efficiency, the processing unit may instead send the lookup instruction to the CACHE and the external memory simultaneously, instructing both to search for the matching entry according to the entry information.
Since the FPGA also contains a read/write control module for the external memory, the processing unit can send the lookup instruction to the external memory through this module, as shown in fig. 3 (where the external memory is a DDR, as an example). Because the external memory searches more slowly, the processing unit obtains the CACHE's result first. If the CACHE's result is a lookup failure, the processing unit sends a lookup-failure signal to the external memory through the read/write control module, instructing it to continue its lookup; if the CACHE's result is a successful lookup, the external lookup is no longer needed, and the processing unit sends a lookup-success signal instructing the external memory to cancel it.
For example, after receiving the lookup instruction, the external memory begins searching its stored entries. If it then receives a lookup-failure signal, it continues searching until the matching entry is found and returns the result to the processing unit; if no entry matches by the end of the search, it likewise returns the corresponding result. If it instead receives a lookup-success signal, it cancels the current lookup and searches no further.
In this implementation, the lookup command is sent to the external memory and the internal CACHE at the same time, giving a parallel lookup. Because the CACHE searches faster, its result arrives first: on a CACHE miss the external-memory lookup simply continues, while on a CACHE hit the external memory no longer needs to be accessed and the CACHE's result is used directly. This reduces the number of external-memory accesses and improves lookup performance.
Building on the above embodiment, if the CACHE lookup fails, the entry found in the external memory is obtained and written into the CACHE.
A lookup failure in the CACHE indicates that the sought entry is not stored there. If the matching entry is found in the external memory, it can be written into the CACHE through the external memory's read/write control module, so that the next lookup for that entry is served directly from the CACHE rather than the external memory, improving lookup efficiency.
As shown in fig. 4, fig. 4 is a schematic structural diagram of a prior-art CACHE composed of a Content-Addressable Memory (CAM) and a Random Access Memory (RAM): the CAM stores addresses and match data, and the RAM stores the data entry corresponding to each matched address. Part of the external memory's physical address is used as the CAM address, and the remaining address bits are stored in the CAM as data. The RAM's address space matches the CAM's, and the RAM holds the data actually written to the physical addresses of the external memory.
When a piece of data is to be written to the external memory, the CAM is searched and an address comparison module determines whether the current address is already cached; on a conflict the data is written directly to the external memory only, otherwise the CAM is updated and the data is written to both the external memory and the RAM. When internal logic (such as a control module) requests data from the external memory, the process mirrors the write: the addresses in the CAM are compared first; on a match the data is fetched directly from the RAM, otherwise it is read from the external memory and returned to the internal logic.
Caching addresses with a CAM in this way therefore gives the CACHE a low hit rate; only one address entry can be searched at a time, parallel search is impossible, and efficiency is low. Moreover, data aging must share the address interface with reads and writes, lowering read/write efficiency further. To solve these problems, in the embodiment of the present application the entries in the CACHE are stored in a linked-list structure: a newly inserted entry is linked at the tail of the list, and the entry at the head is deleted first.
The linked-list structure of the CACHE entries is shown in fig. 5. The entries inside the CACHE are connected as a pointer linked list, and each entry contains a Peer_ptr predecessor pointer, a next_ptr successor pointer, a Table_id entry identifier, Table_content entry information, and a Table_priority entry priority/timestamp.
Table_id may be an ID value generated from the entry information with a hash algorithm, for example from the Media Access Control (MAC) address, Internet Protocol (IP) address, Virtual Local Area Network (VLAN), and other fields of the entry. During lookup, the instruction can carry the ID value of the sought entry, so a matching entry is found quickly by comparing ID values directly. Table_content holds the concrete entry information, i.e. the specific MAC address, IP address, VLAN, and update information (such as forwarded byte counts, forwarded packet counts, and hit counts). Lookup can alternatively match on this information: the instruction carries the concrete entry information, and the matching entry is found by comparing it.
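One plausible way to derive such a Table_id is sketched below. The hash choice (a CRC via `zlib.crc32`), the field layout, and the 16-bit width are assumptions for illustration; the patent does not specify the hash algorithm.

```python
# Derive a short Table_id by hashing the entry's key fields (MAC, IP,
# VLAN) so that lookup can match a compact ID instead of the full entry.

import zlib

def table_id(mac: str, ip: str, vlan: int, bits: int = 16) -> int:
    key = f"{mac}|{ip}|{vlan}".encode()          # concatenate key fields
    return zlib.crc32(key) & ((1 << bits) - 1)   # truncate to `bits` bits

tid = table_id("aa:bb:cc:dd:ee:ff", "192.168.1.10", 100)
print(hex(tid))
```

Because the same key fields always hash to the same ID, both the entry stored in the CACHE and the lookup instruction can compute the ID independently and match on it.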
In this implementation, the CACHE stores its entries in a linked-list structure, so entries can be queried quickly, further improving lookup efficiency, and fast deletion, insertion, and similar operations are supported, so entries are managed flexibly. In addition, the linked-list management mechanism occupies few hardware resources, reducing the FPGA's fan-out.
To store more entries and improve search efficiency, the FPGA of the present application may include multiple CACHEs, so that entry contents can be matched in parallel and parallel lookup across multiple CACHEs is supported simultaneously.
To allow fast parallel matching of entry ID values or entry contents within the CACHE, each entry can be held in a register set, and each CACHE can contain 16 entries. When an old entry must be deleted and a new one inserted, the deletion and the insertion search can complete within the same clock cycle.
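The parallel match step can be modeled as follows. In hardware, 16 comparators fire simultaneously in one clock cycle; the Python list comprehension below only stands in for those comparators, and the names and sizes are assumptions.

```python
# Model of the parallel match: with each CACHE's 16 entries held in
# registers, all ID comparisons happen at once; here a comprehension
# stands in for the 16 parallel comparators.

ENTRIES_PER_CACHE = 16

def parallel_match(entries, lookup_id):
    """Return the indices of all entries whose Table_id matches."""
    return [i for i, e in enumerate(entries)
            if e is not None and e["table_id"] == lookup_id]

bank = [None] * ENTRIES_PER_CACHE
bank[3] = {"table_id": 0x2A, "content": "flow-A"}
print(parallel_match(bank, 0x2A))
```

With multiple CACHEs, each bank would run this comparison independently, which is what makes the scheme scale by adding CACHEs rather than by lengthening the search.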
In practice, the number of CACHEs can be scaled to actual needs or to the FPGA's logic resources, allowing more entries to be cached, raising the CACHE search hit rate, and reducing read/write traffic to the external memory.
Building on the above embodiment, to manage the CACHE entries flexibly, they may be ordered by priority, with high-priority entries at the tail of the linked list and low-priority entries at the head.
For example, when entries are first loaded into the CACHE, the 16 highest-priority entries can be selected from all entries: if there are 100 entries in total, all 100 are stored in the external memory and the 16 with the highest priority are copied into the CACHE.
Entry priority can be set flexibly according to actual requirements, for example by entry importance: more important entries get higher priority, less important ones lower. Importance can in turn be determined from an entry's hit count, hit probability, or forwarded-packet count.
Among the entries stored in the CACHE, low-priority entries sit at the list head Head_ptr and high-priority entries at the list tail Tail_ptr. The entries are chained together through the Peer_ptr predecessor and next_ptr successor pointers.
Of course, for convenience the CACHE entries may instead be stored in first-in-first-out order rather than by priority: the entry that entered the CACHE first is placed at the list head Head_ptr and later entries toward the list tail Tail_ptr. Entries may also be ordered by timestamp, for example with larger timestamps at the head Head_ptr and smaller timestamps at the tail Tail_ptr.
In practice, the storage mode can be chosen according to actual requirements; the CACHE supports all of these ordering modes, making it more flexible.
When a new entry is stored into the CACHE, the entry at the head of the list (i.e., the low-priority entry) is deleted, and the new entry, or the entry with high priority or a recent timestamp, is inserted at the tail of the list, as shown in fig. 6. A newly entered entry need not have its priority or timestamp compared; it can be inserted directly at the tail to avoid re-ordering the stored entries. Alternatively, the new entry's priority or timestamp can be compared with the existing entries' and the list re-ordered accordingly.
For first-in first-out ordering, for example, the design rule is: when a new entry is inserted into the CACHE, it is linked directly at the position of the linked-list pointer Tail_ptr; if the CACHE is already full, the entry at the position of Head_ptr is deleted and the Head_ptr pointer is moved to point at the second entry. This is equivalent to the new entry replacing the entry that has been stored in the CACHE the longest.
If the entry to be looked up matches an entry in the CACHE, the matched entry may be moved, like a newly inserted entry, to the position of the linked-list pointer Tail_ptr; the entries before and after it are re-linked through their pointers, and the other entries are not moved. In this way, subsequent hits adjust the ordering position of each entry.
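The first-in first-out ordering with hit promotion described above behaves like a least-recently-used list, and can be sketched in software as follows. This is a minimal model, not the hardware implementation; the class name and fixed capacity are illustrative assumptions.

```python
from collections import OrderedDict

class FifoLruCache:
    """Entries enter at Tail_ptr; when full, the entry at Head_ptr is
    evicted. A hit moves the matched entry back to Tail_ptr, so the
    head always holds the oldest (least recently hit) entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # iteration order: Head_ptr -> Tail_ptr

    def insert(self, key, value):
        if key in self.entries:
            self.entries.pop(key)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # delete the entry at Head_ptr
        self.entries[key] = value              # link the new entry at Tail_ptr

    def lookup(self, key):
        if key not in self.entries:
            return None                        # table-lookup failure
        self.entries.move_to_end(key)          # hit: shift the entry to Tail_ptr
        return self.entries[key]
```

With capacity 2, inserting a, b and then hitting a before inserting c causes b (now at the head) to be replaced, while a survives.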
For ordering by priority, the design rule is: each entry is assigned a priority; entries with low priority are sorted toward the head of the linked list, entries with high priority toward the tail, and entries with the same priority are sorted first-in first-out. For a newly entered entry, its priority is compared with those of the stored entries, the ordering of the entries is adjusted accordingly, and the entry with the lowest priority is deleted; since that entry is at the head of the list, it can be removed directly, so the new entry replaces the lowest-priority entry in the original CACHE. During a lookup, if an entry is matched, its priority may be changed, and the ordering positions of the entries are then readjusted according to the new priority.
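A software sketch of this priority rule follows. The sorted list and the insertion counter used for first-in first-out tie-breaking are assumptions made for illustration; eviction always removes the element at the head, which holds the lowest priority.

```python
import bisect

class PriorityCache:
    """Low-priority entries sit at the head (evicted first), high-priority
    entries at the tail; equal priorities fall back to FIFO order."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = []      # sorted list of (priority, seq, key, value)
        self.seq = 0           # insertion counter: FIFO tie-break for equal priority

    def insert(self, key, value, priority):
        self.seq += 1
        bisect.insort(self.entries, (priority, self.seq, key, value))
        if len(self.entries) > self.capacity:
            self.entries.pop(0)        # delete the lowest-priority entry at the head

    def lookup(self, key, new_priority=None):
        for i, (prio, seq, k, v) in enumerate(self.entries):
            if k == key:
                if new_priority is not None:   # a hit may change the priority;
                    self.entries.pop(i)        # the entry is then re-sorted
                    bisect.insort(self.entries, (new_priority, seq, k, v))
                return v
        return None                            # table-lookup failure
```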
In this implementation, as new entries are subsequently inserted into the CACHE, the ordering of the stored entries is updated according to the chosen storage mode, and entries with low priority, long residence time, or large timestamps are deleted.
In the above embodiment, a newly inserted entry in the CACHE may be an entry found in the external memory. If the FPGA had to look an entry up in the external memory, that entry was not present in the CACHE, so the entry found in the external memory can be updated into the CACHE; subsequent lookups then hit the CACHE directly, without accessing the external memory again, which makes them fast. In this case a new entry must be inserted into the CACHE. Alternatively, to keep the entries in the CACHE up to date, an aging time may be set for each entry according to its priority, timestamp, or position, for example a shorter aging time for the entry at the head of the list and a longer aging time for the entry at the tail. Thus entries need to be deleted from the CACHE either when a new entry is inserted or when an entry in the CACHE ages out.
If a new entry is inserted into the CACHE, the entry at the head of the linked list may be deleted; when an entry ages out, the aged entry is deleted. After a lookup hits the CACHE, the content of the matched entry must be updated, for example its byte count or hit count. The deleted entry may therefore be one whose content was updated in the CACHE but not yet stored in the external memory. In that case the deleted entry is updated into the external memory: when it is removed from the CACHE, it is written to the external memory through the external memory's read/write control module. Then, even when the deleted entry can no longer be found in the CACHE, it can still be found in the external memory.
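The write-back behaviour just described can be modelled as follows. The dict standing in for the external memory's read/write control module and the per-entry hit counter are illustrative assumptions; the point shown is that a dirty (updated) entry is written to external memory only when it is deleted from the CACHE.

```python
external_memory = {}      # stands in for the external memory and its R/W control module

class WriteBackCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # key -> {"hits": n, "dirty": bool}; dicts keep insertion order

    def insert(self, key):
        if key in self.entries:
            return
        if len(self.entries) >= self.capacity:
            old_key = next(iter(self.entries))     # entry at the list head
            old = self.entries.pop(old_key)
            if old["dirty"]:                       # updated but not yet stored externally:
                external_memory[old_key] = old     # write back on deletion
        self.entries[key] = {"hits": 0, "dirty": False}

    def lookup(self, key):
        entry = self.entries.get(key)
        if entry is None:
            return external_memory.get(key)        # miss: still findable externally
        entry["hits"] += 1                         # a hit updates the entry content...
        entry["dirty"] = True                      # ...so it must be written back later
        return entry
```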
On the basis of the above embodiment, when the specific entry information of an entry (such as the MAC address, IP address, or VLAN information mentioned above) would occupy too many logic register resources in the CACHE, those logic resources cannot be used to store the entry information itself. A RAM may therefore be added inside the FPGA to store the entry information of the CACHE entries, and the position originally used to store the entry information in the CACHE (i.e. Table_content) is used instead to store the address information of the RAM; a schematic diagram is shown in fig. 7. When the entry information is needed, it is found in the RAM through the RAM address information (Content_addr) held by the entry in the CACHE. This expands the storage resources of the CACHE without sacrificing its lookup performance.
In this case, to keep lookups fast, a matched entry may be found by matching entry ID values rather than by matching the full entry information.
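The indirection of fig. 7, combined with the ID matching just described, can be sketched as follows; the field names and the simple address allocator are assumptions based on the labels quoted above (Content_addr, Table_content).

```python
ram = {}          # internal FPGA RAM: Content_addr -> wide entry information
cache_rows = {}   # CACHE row: entry ID -> Content_addr (in place of Table_content)
next_addr = 0     # naive allocator for free RAM addresses

def store_entry(entry_id, entry_info):
    global next_addr
    ram[next_addr] = entry_info          # wide fields (MAC/IP/VLAN) go to RAM
    cache_rows[entry_id] = next_addr     # the CACHE keeps only the RAM address
    next_addr += 1

def lookup_by_id(entry_id):
    addr = cache_rows.get(entry_id)      # fast match on the entry ID value
    if addr is None:
        return None                      # table-lookup failure
    return ram[addr]                     # one RAM read fetches the entry info
```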
Referring to fig. 8, fig. 8 is a block diagram of a table lookup apparatus 300 provided by an embodiment of the present application; the apparatus 300 may be a module, a program segment, or code on an FPGA. It should be understood that the apparatus 300 corresponds to the method embodiment of fig. 2 above and can perform the steps of that embodiment; for the specific functions of the apparatus 300, refer to the description above, which is not repeated here to avoid redundancy.
Optionally, the apparatus 300 comprises:
an instruction sending module 310, configured to send a table lookup instruction to a CACHE memory CACHE inside the FPGA, where the table lookup instruction carries corresponding table entry information, and the table lookup instruction is used to instruct the CACHE to search for a matched table entry according to the table entry information;
a result obtaining module 320, configured to receive a table lookup result returned by the CACHE;
and the judging module 330 is configured to determine whether to perform table lookup from an external memory of the FPGA according to the table lookup result.
Optionally, the determining module 330 is configured to send a table lookup instruction to an external memory of the FPGA to instruct the external memory to search for a matched table entry according to the table entry information if the table lookup result is a table lookup failure.
Optionally, the instruction sending module 310 is configured to send a table look-up instruction to a CACHE inside the FPGA and an external memory of the FPGA, respectively, so as to instruct the CACHE and the external memory to search for a matched table entry according to the table entry information;
the determining module 330 is configured to send a table lookup failure signal to the external memory if the table lookup result is a table lookup failure, where the table lookup failure signal is used to instruct the external memory to continue a table lookup operation; and if the table look-up result is that the table look-up is successful, sending a table look-up success signal to the external memory, wherein the table look-up success signal is used for indicating the external memory to cancel the table look-up operation.
Optionally, if the table lookup result is a table lookup failure, the apparatus 300 further includes:
an entry update module, configured to determine the entry found by the external memory, and to update the entry found by the external memory into the CACHE.
Optionally, the entries in the CACHE are stored in a linked list structure, the entries newly inserted into the CACHE are inserted into the tail of the linked list structure, and the entries arranged at the head of the linked list structure are preferentially deleted.
Optionally, the entries in the CACHE are arranged according to a high-low order of priority, the entries with high priority are arranged at the tail of the linked list structure, and the entries with low priority are arranged at the head of the linked list structure.
Optionally, the apparatus 300 further comprises:
and an entry update module, configured to update the deleted entries in the CACHE into the external memory if a new entry is inserted into the CACHE or entries in the CACHE are aged.
Optionally, the entry information of the entry in the CACHE is stored in a random access memory RAM of the FPGA, and the location originally used for storing the entry information in the CACHE is used for storing the address information of the RAM.
It should be noted that, for the convenience and brevity of description, the specific working procedure of the above-described apparatus may refer to the corresponding procedure in the foregoing method embodiment, and the description is not repeated herein.
Embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the method processes performed by an electronic device in the method embodiment shown in fig. 2.
Embodiments of the present application disclose a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium. The computer program comprises program instructions which, when executed by a computer, enable the computer to perform the methods provided by the above method embodiments, for example: sending a table look-up instruction to a CACHE memory CACHE inside the FPGA, where the table look-up instruction carries corresponding entry information and is used to instruct the CACHE to look up a matched entry according to the entry information; receiving the table look-up result returned by the CACHE; and determining whether to perform a table lookup from an external memory of the FPGA according to the table look-up result.
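The flow recited above — CACHE first, external memory only on failure, with the found entry updated into the CACHE — can be modelled with this minimal sketch, in which plain dicts stand in for the hardware modules (the function and argument names are assumptions for illustration):

```python
def table_lookup(key, cache_table, external_table):
    """Look up `key` in the CACHE first; fall back to external memory on failure."""
    result = cache_table.get(key)        # table look-up instruction to the CACHE
    if result is not None:
        return result, "cache"           # look-up success: no external access needed
    result = external_table.get(key)     # look-up failure: query the external memory
    if result is not None:
        cache_table[key] = result        # update the found entry into the CACHE
    return result, "external"
```

After a first miss on a key, the entry is updated into the CACHE, so a repeated lookup of the same key hits the CACHE directly.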
To sum up, the embodiments of the present application provide a table lookup method, apparatus, FPGA, and readable storage medium. A CACHE is added between the FPGA's lookup logic and the external memory, and the FPGA first accesses the internal CACHE to perform the lookup; since the access speed of the CACHE is high, lookups are fast and lookup efficiency is higher. Whether to look up the table in the external memory is then determined according to the lookup result returned by the CACHE; since the external memory has a larger storage capacity and can store more entries, the entry can still be found there.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (11)

1. A table look-up method is applied to a Field Programmable Gate Array (FPGA), and comprises the following steps:
sending a table look-up instruction to a CACHE memory CACHE in the FPGA, wherein the table look-up instruction carries corresponding table entry information and is used for indicating the CACHE to look up a matched table entry according to the table entry information;
receiving a table look-up result returned by the CACHE;
and determining whether to perform table lookup from an external memory of the FPGA according to the table lookup result.
2. The method of claim 1, wherein determining whether to perform table lookup from an external memory of the FPGA according to the table lookup result comprises:
and if the table lookup result is table lookup failure, sending a table lookup instruction to an external memory of the FPGA to instruct the external memory to search a matched table entry according to the table entry information.
3. The method according to claim 1, wherein said sending a table lookup instruction to a CACHE memory CACHE internal to said FPGA comprises:
respectively sending a table look-up instruction to a CACHE inside the FPGA and an external memory of the FPGA to indicate the CACHE and the external memory to look up a matched table entry according to the table entry information;
the determining whether to perform table lookup from an external memory of the FPGA according to the table lookup result includes:
if the table lookup result is a table lookup failure, sending a table lookup failure signal to the external memory, wherein the table lookup failure signal is used for indicating the external memory to continue table lookup operation;
and if the table look-up result is that the table look-up is successful, sending a table look-up success signal to the external memory, wherein the table look-up success signal is used for indicating the external memory to cancel the table look-up operation.
4. The method of claim 3, wherein if the table lookup result is a table lookup failure, the method further comprises:
determining the entry found by the external memory;
and updating the entry found by the external memory into the CACHE.
5. The method of claim 1, wherein the entries in the CACHE are stored in a linked list structure, the entries newly inserted into the CACHE are inserted into the tail of the linked list structure, and the entries arranged at the head of the linked list structure are preferentially deleted.
6. The method of claim 5, wherein the entries in the CACHE are arranged according to a high-low order of priority, the entries with high priority are arranged at a tail of the linked list structure, and the entries with low priority are arranged at a head of the linked list structure.
7. The method of claim 5, further comprising:
and if a new table entry is inserted into the CACHE or the table entries in the CACHE are aged, updating the deleted table entries in the CACHE into the external memory.
8. The method according to claim 1, wherein the entry information of the entry in the CACHE is stored in a random access memory RAM of the FPGA, and a location originally used for storing the entry information in the CACHE is used for storing address information of the RAM.
9. A table lookup apparatus operating on a field programmable gate array FPGA, the apparatus comprising:
the instruction sending module is used for sending a table look-up instruction to a CACHE memory CACHE in the FPGA, wherein the table look-up instruction carries corresponding table entry information and is used for indicating the CACHE to look up a matched table entry according to the table entry information;
the result acquisition module is used for receiving the table look-up result returned by the CACHE;
and the judging module is used for determining whether to look up the table from the external memory of the FPGA according to the table look-up result.
10. An FPGA comprising a processing unit and a memory unit, the memory unit comprising a CACHE memory CACHE, the memory unit storing computer readable instructions which, when executed by the processing unit, perform the method of any one of claims 1-8.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 8.
CN202111590931.8A 2021-12-23 2021-12-23 Table lookup method and device, FPGA and readable storage medium Pending CN114281712A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111590931.8A CN114281712A (en) 2021-12-23 2021-12-23 Table lookup method and device, FPGA and readable storage medium

Publications (1)

Publication Number Publication Date
CN114281712A true CN114281712A (en) 2022-04-05

Family

ID=80874603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111590931.8A Pending CN114281712A (en) 2021-12-23 2021-12-23 Table lookup method and device, FPGA and readable storage medium

Country Status (1)

Country Link
CN (1) CN114281712A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115334013A (en) * 2022-08-12 2022-11-11 北京天融信网络安全技术有限公司 Flow statistical method, network card and electronic equipment
CN115334013B (en) * 2022-08-12 2024-01-23 北京天融信网络安全技术有限公司 Flow statistics method, network card and electronic equipment
CN115412893A (en) * 2022-10-19 2022-11-29 成都锐成芯微科技股份有限公司 Low-power-consumption Bluetooth attribute access method and low-power-consumption Bluetooth system
CN115412893B (en) * 2022-10-19 2023-04-21 成都锐成芯微科技股份有限公司 Low-power-consumption Bluetooth attribute access method and low-power-consumption Bluetooth system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination