CN114077380A - Multi-channel table lookup method and device - Google Patents

Multi-channel table lookup method and device

Info

Publication number
CN114077380A
Authority
CN
China
Prior art keywords
storage
hash
module
query request
slice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010844665.6A
Other languages
Chinese (zh)
Inventor
华瑞东
刘衡祁
徐金林
周峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sanechips Technology Co Ltd
Original Assignee
Sanechips Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sanechips Technology Co Ltd filed Critical Sanechips Technology Co Ltd
Priority to CN202010844665.6A
Priority to PCT/CN2021/111716 (published as WO2022037436A1)
Publication of CN114077380A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061: Improving I/O performance
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638: Organizing or formatting or addressing of data
    • G06F 3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0656: Data buffering arrangements
    • G06F 3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671: In-line storage system
    • G06F 3/0673: Single storage device
    • G06F 3/0679: Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the application provide a multi-channel table lookup method and device. The method includes: determining, in a hash module, an access address corresponding to a query request through at least two compression hash operations, where the query request comes from at least two query channels; and executing the query request in a storage module based on the access address and generating a table lookup result. By subjecting query requests from different query channels to multiple compression hash operations, the embodiments of the application reduce the probability that query requests access the same storage module at the same time, reduce access conflicts among query requests, reduce the storage space occupied during multi-channel table lookup, and improve table lookup efficiency.

Description

Multi-channel table lookup method and device
Technical Field
The invention relates to the field of network communication, and in particular to a multi-channel table lookup method and device.
Background
With the development of the information society, the demand for information exchange keeps growing and the requirements on communication network equipment keep rising; for example, the network processor chips in core network devices such as routers and switches face ever higher requirements on storage capacity and lookup speed. The hash table entry is a common entry type in a network processor chip. An existing hash table lookup generally consists of three parts: compression, query and comparison. First, the lookup key value is compression-mapped to obtain an access address; then the data in the corresponding storage space is read through the access address; finally, the read data is compared with the lookup key value to decide whether they are consistent, and the lookup result is output. For multi-channel table lookup, the same table entry content is usually copied and stored in several identical copies so that multiple channels can query entries at the same time; this approach meets the query bandwidth only by increasing storage overhead. Although it realizes multi-channel table lookup, it wastes storage resources and limits the improvement of network processor chip performance.
Disclosure of Invention
Embodiments of the application disclose a multi-channel table lookup method and a multi-channel table lookup device that reduce the storage space occupied during multi-channel table lookup, improve table lookup efficiency, and can also reduce the power consumption of the multi-channel table lookup process.
The embodiment of the application provides a multi-channel table look-up method, which comprises the following steps:
determining an access address corresponding to a query request through at least two compression hash operations in a hash module, wherein the query request is from at least two query channels; and executing the query request in a storage module based on the access address and generating a table lookup result.
The embodiment of the application provides a multichannel table look-up device, and the device includes: a hash module and at least one storage module;
the hash module is used for performing at least two times of compression hash operation on the query request to determine a corresponding access address, wherein the query request is from at least two query channels;
and the storage module is used for storing the target data and executing the query request to generate a table look-up result.
According to the embodiments of the application, the access address corresponding to each query request is determined by performing multiple compression hash operations in the hash module on query requests coming from different query channels, and the query requests then search the storage entries in the storage module according to these access addresses to generate a table lookup result. The multiple compression hash operations reduce the probability that query requests from different query channels access one storage module simultaneously, which reduces access conflicts, reduces the storage space occupied during table lookup, and improves table lookup efficiency.
Drawings
FIG. 1 is a diagram illustrating a prior art multi-channel table lookup method according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of a multi-channel table lookup method according to an embodiment of the present disclosure;
FIG. 3 is a flow chart of another multi-channel table lookup method provided in the embodiments of the present application;
FIG. 4 is a schematic diagram of the structure of a discrete mode selection parameter according to an embodiment of the present application;
FIG. 5 is a flow chart of another multi-channel table lookup method provided in the embodiments of the present application;
FIG. 6 is a flow chart of another multi-channel table lookup method provided in the embodiments of the present application;
FIG. 7 is a flow chart of another multi-channel table lookup method provided in the embodiments of the present application;
FIG. 8 is a flow chart of another multi-channel table lookup method provided in the embodiments of the present application;
FIG. 9 is a schematic structural diagram of a multi-channel table lookup apparatus according to an embodiment of the present disclosure;
FIG. 10 is a block diagram of a hash module according to an embodiment of the present disclosure;
FIG. 11 is a schematic structural diagram of a storage module according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of another multi-channel table lookup apparatus according to an embodiment of the present disclosure;
FIG. 13 is a schematic structural diagram of another multi-channel table lookup apparatus according to an embodiment of the present disclosure;
FIG. 14 is a schematic structural diagram of another multi-channel table lookup apparatus according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present application more apparent, embodiments of the present application will be described in detail below with reference to the accompanying drawings. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
Fig. 1 is a schematic diagram of a multi-channel table lookup method in the prior art, provided in an embodiment of the present application for comparison. In the prior art, a table lookup request may be sent from a programmable processing module of a network processor chip, and the query result is fed back to the programmable processing module after the query is completed. Referring to fig. 1, the contents stored in each of the copy/XOR storage units shown in the figure are kept completely consistent by copying or XOR, so that each copy/XOR storage unit can support simultaneous table lookup by several query channels. Assuming that one copy/XOR storage unit can support simultaneous table lookup by i query channels, n/i copy/XOR storage units are needed to provide the lookup bandwidth of n query channels. If the number of entries to be stored is K, the entry bit width is W and the copy/XOR storage reduction coefficient is j, the copy scheme and the exclusive-or scheme require (n/i) x j x K x W bits of storage space in total. This approach realizes multi-channel table lookup, but it wastes storage resources and restricts the improvement of network processor chip performance.
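As a rough worked example of the storage cost just described (a sketch only; the numeric values below are assumptions and do not come from the patent), the (n/i) x j x K x W figure can be computed as follows:

def copy_scheme_storage_bits(n, i, j, k, w):
    """Storage required by the prior-art copy/XOR scheme: (n / i) * j * K * W bits,
    where n is the number of query channels, i the channels served by one copy/XOR
    storage unit, j the copy/XOR storage reduction coefficient, K the number of
    entries and W the entry bit width."""
    return (n // i) * j * k * w

# Hypothetical sizing: 8 channels, 2 channels per unit, reduction coefficient 1,
# 64K entries of 128 bits each, i.e. four full copies of the table.
bits = copy_scheme_storage_bits(8, 2, 1, 64 * 1024, 128)
print(bits, "bits =", bits // (8 * 1024 * 1024), "MiB")   # 33554432 bits = 4 MiB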
Fig. 2 is a flowchart of a multi-channel table lookup method according to an embodiment of the present application, where the embodiment of the present application is applicable to a case where multiple query channels perform hash table entry query, and the method may be implemented by a multi-channel table lookup apparatus, may be implemented in a software and/or hardware manner, and may be generally integrated in a network processor chip, referring to fig. 2, where the method according to the embodiment of the present application specifically includes the following steps:
step 100, determining an access address corresponding to a query request through at least two compression hash operations in a hash module, wherein the query request is from at least two query channels.
The hash module may be a module for performing compression mapping on the query request, and may map the query request to different access addresses, the compression hash operation may be an operation for performing compression mapping on the query request, the compression hash operation may perform compression mapping on the query request according to a preset hash mapping function, and the hash mapping functions corresponding to each compression hash operation in the hash module may be the same or different. The access address may be a physical address or a logical address of the target data requested to be looked up in the query request.
Specifically, a query request may be a request for querying a stored hash table entry, and query requests may be sent by a plurality of query channels; the more query channels there are, the greater the probability of receiving query requests. After one or more query requests are obtained, each query request may be subjected to multiple compression hash operations. For example, a CRC (Cyclic Redundancy Check) function may be used to discretize the lookup key value of the query request before bit truncation, or the lookup key value may be truncated directly; the calculation result may then be subjected to a compression hash operation again, or the calculation result may first be combined with the lookup key value of the query request and the combination subjected to a compression hash operation again. After the query request has undergone multiple compression hash operations, the operation result of each query request can be used directly as an access address, or the access address stored in association with the operation result can be looked up; the obtained access address can serve as the logical address or physical address to be queried by the query request.
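A minimal sketch of one such compression hash operation, assuming a CRC-32 discretization (zlib) followed by bit truncation; the polynomial, the truncation widths and the way the first result is fed back into a further operation are illustrative assumptions:

import zlib

def compressed_hash(lookup_key: bytes, addr_bits: int = 10) -> int:
    """Discretize the lookup key with a CRC function, then truncate the result
    to addr_bits bits so it can index a small address space."""
    crc = zlib.crc32(lookup_key)
    return crc & ((1 << addr_bits) - 1)

key = b"lookup-key-example"
first = compressed_hash(key, 10)                                # first compression hash operation
second = compressed_hash(key + first.to_bytes(2, "big"), 12)    # key combined with the first result, hashed again
print(first, second)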
Step 110, executing the query request in the storage module based on the access address and generating a table lookup result.
In the embodiment of the present disclosure, the number of the storage modules may include one or more, and the storage spaces among the storage modules may be the same or different. The table lookup result may indicate whether the query request is successful, for example, the table lookup result may be successful if the table lookup key of the query request exists in the storage module, and the table lookup result may be failed if the table lookup key of the query request does not exist in the storage module.
Specifically, the number of the storage modules may be one or more, the physical addresses or logical addresses between the storage modules may be different, the access address may be located in the physical address or logical address of each storage module, when the access address of one query request belongs to a certain storage module, the query request may be sent to the storage module, the query request is executed in the storage module to implement data query, and a table lookup result may be generated according to a result of the data query.
According to the embodiments of the application, the corresponding access address is determined by applying at least two compression hash operations to the query request in the hash module, and the query request is then executed in the storage module through the access address to generate the corresponding table lookup result. Because query requests from different query channels undergo multiple compression hash operations, the probability that query requests access the same storage module at the same time is reduced, access conflicts among query requests are reduced, the storage space occupied during multi-channel table lookup is reduced, and table lookup efficiency is improved.
Fig. 3 is a flowchart of another multi-channel table lookup method according to an embodiment of the present application, which is embodied based on the foregoing embodiment, and referring to fig. 3, the method according to the embodiment of the present application includes the following steps:
step 200, performing a first compression hash operation on the lookup key values of the query requests in the hash module, and obtaining corresponding operation results.
The table lookup key value may be a numerical value used for performing table lookup, and may include an identification number, a numerical value, and the like of data to be looked up, and the operation result may be a numerical value or information generated by compressing and hashing the table lookup key value.
In an exemplary embodiment, a first compressed hash operation may be performed on the lookup key values of the query requests in the hash module, and operation results corresponding to the lookup key values may be obtained. For example, the CRC function is used to perform bit truncation after the lookup key value of the query request is discretized or the CRC function is used to perform bit truncation directly on the lookup key value of the query request, and a value generated after the bit truncation may be used as an operation result of the lookup key value. The hash mode corresponding to the first compressed hash operation in the hash module can be stored in advance, and the compressed hash operation can be directly performed on the table lookup key value of the query request after the query request is obtained.
And step 210, determining a discrete mode selection parameter corresponding to the lookup table key value of each query request based on the operation result.
The discrete mode selection parameter may be a data structure of a second time of compressed hash operation stored in the hash module in advance, and different operation results of a first time of compressed hash operation may correspond to different discrete mode selection parameters. The hash mode may be a compression mapping mode of table lookup key values in the compression hash operation, the types of the hash modes may be multiple, and the hash mode or the identification number of the hash mode corresponding to the second compression hash operation may be stored in the discrete mode selection parameter.
Specifically, the second compression hash operation in the hash module may support several hash modes. A discrete mode selection parameter stored in the hash module may be obtained according to the operation result of the first compression hash operation, and the hash mode of the second compression hash operation may then be selected according to the hash mode, or the identification number of the hash mode, recorded in that parameter. For example, if the operation result of the first compression hash operation on the lookup key value of query request A is 1, the discrete mode selection parameter with sequence number 1 is read, and that parameter stores the hash mode to be used for the second compression hash operation of query request A.
And step 220, performing a second compression hash operation on each table lookup key value according to the hash mode corresponding to the discrete mode mark bit in the discrete mode selection parameter.
The discrete mode flag bit may be a flag bit for identifying a hash mode, and the hash mode may be represented by one or more bits of data.
In this embodiment of the application, the hash module may store a plurality of hash modes in advance, after determining the discrete mode selection parameter corresponding to the lookup key value in each query request, the hash mode corresponding to the data may be used as the hash mode for performing the second compression hash operation on the lookup key value corresponding to the query request, and the hash mode may be used for performing the second compression hash operation on the corresponding lookup key value. Furthermore, in order to save the hash compression time of the query request, the first compression hash operation and the second compression hash operation may be performed on the query request at the same time, where the second compression hash operation may use all the pre-stored hash modes to determine the operation results respectively, then the discrete mode selection parameter may be selected according to the operation result of the first compression hash operation, the target hash mode may be determined according to the discrete mode flag bit in the discrete mode selection parameter, and the operation result corresponding to the target hash mode may be obtained as the operation result of the second compression hash operation.
Step 230, determining the corresponding access address according to the operation result of the second hash compression operation of each query request.
Specifically, access addresses may be stored in association with operation results. After the operation result of the second hash compression operation of each query request is determined, the corresponding access address may be looked up using the operation result, and the found access address may be used as the address with which each query request is queried in the storage module.
Step 240, executing the query request in the storage module based on the access address and generating a table lookup result.
According to the embodiments of the application, the hash mode of the second compression hash operation is determined by the first compression hash operation on the query request, the second compression hash operation is then performed on the query request with that hash mode, the access address is looked up from the operation result, and the table lookup result corresponding to the query request is obtained from the storage module using that access address. This reduces the probability that query requests access the same storage module at the same time, reduces access conflicts among query requests, reduces the storage space occupied during multi-channel table lookup, and improves table lookup efficiency.
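The two-stage flow of this embodiment can be sketched as follows. The concrete hash modes, the 8-bit first-stage result, the mode table and the address table are all hypothetical stand-ins; the sketch only shows how the first operation result selects the hash mode used by the second operation:

import zlib

# Hypothetical hash modes the second compression hash operation may choose from.
HASH_MODES = {
    0: lambda key: zlib.crc32(key) & 0x3FF,             # CRC then keep the low 10 bits
    1: lambda key: zlib.crc32(key[::-1]) & 0x3FF,       # CRC of the reversed key
    2: lambda key: int.from_bytes(key, "big") & 0x3FF,  # direct bit truncation
}

def lookup_access_address(key: bytes, mode_params, address_table):
    """mode_params[r] plays the role of the discrete mode selection parameter for
    first-stage result r; address_table maps second-stage results to access addresses."""
    first = zlib.crc32(key) & 0xFF          # first compression hash operation
    mode_id = mode_params[first]            # discrete mode flag bits select the hash mode
    second = HASH_MODES[mode_id](key)       # second compression hash operation
    return address_table.get(second)        # access address, or None if nothing is stored

params = [r % 3 for r in range(256)]                 # hypothetical per-result mode choices
print(lookup_access_address(b"flow-A", params, {}))  # None: no address stored yet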
Further, on the basis of the foregoing application embodiment, the determining a corresponding access address according to an operation result of the second hash compression operation of each query request includes: determining a result valid flag bit of the operation result in the corresponding discrete mode selection parameter; if the result valid flag bit in the discrete mode selection parameter is set, determining an access address according to the operation result; and if the result valid flag bit in the discrete mode selection parameter is not set, determining that the corresponding query request does not hit the storage item.
The discrete mode selection parameter may be a data structure storing a hash mode and a result valid flag; different discrete modes can be identified by different parameter values, and the result valid flag bit indicates whether the data to be looked up by the query request exists. In an exemplary implementation, fig. 4 is a schematic diagram of the structure of a discrete mode selection parameter provided in an embodiment of the present application. Referring to fig. 4, the discrete mode selection parameter may include p bits of discrete mode selection bits and q bits of result valid flag bits, where each result valid flag bit indicates whether data is stored at the corresponding location in the storage module. For example, when a piece of data whose result value after the second hash compression is 0 is stored, the valid flag bit for result 0 in the discrete mode selection parameter may be set to 1.
In an exemplary embodiment, the operation result of the second compression hash operation of each query request may be obtained and used to look up the corresponding result valid flag bit. For example, if the operation result of the second compression hash operation on the lookup key value is 10, it is determined whether the valid flag bit for result 10 in the hash storage unit is set; if the flag bit is 1, the access address corresponding to the result 10 valid flag bit is used as the access address of the query request in the storage module, and if the result 10 valid flag bit is 0, a miss is returned directly as the query result. Each result valid flag bit may correspond to an actual memory address in the storage module.
Further, on the basis of the foregoing application embodiment, the discrete selection parameter includes a discrete mode flag bit and a result valid flag bit, where the number of the result valid flag bits corresponds to the number of operation results of the second compression hash operation, and the result valid flag bit is set at the time of data storage.
In this embodiment of the present application, the discrete mode selection parameter may consist of discrete mode flag bits and result valid flag bits. The discrete mode flag bits may use one or more bits to form identification information, and different hash modes may be represented by this identification information. There may be multiple result valid flag bits: after the hash mode has been selected by the discrete mode flag bits, each possible result value of the second compression hash operation has a corresponding result valid flag bit, so the number of result valid flag bits corresponds to the number of result values, and the result valid flag bits are set at the time of data storage.
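A sketch of this parameter layout, assuming p discrete mode flag bits and q result valid flag bits held in one small record; the field widths and the class shape are assumptions for illustration:

class DiscreteModeSelectionParameter:
    """p discrete mode flag bits plus q result valid flag bits, one valid bit per
    possible result value of the second compression hash operation."""
    def __init__(self, p_bits: int = 2, q_bits: int = 16):
        self.p_bits, self.q_bits = p_bits, q_bits
        self.mode = 0       # discrete mode flag bits (select the hash mode)
        self.valid = 0      # result valid flag bits, one bit per result value

    def set_valid(self, result: int):
        """Called when data is stored: mark this result value as occupied."""
        assert 0 <= result < self.q_bits
        self.valid |= 1 << result

    def is_valid(self, result: int) -> bool:
        """Checked at lookup time: a cleared bit means the query misses."""
        return bool((self.valid >> result) & 1)

param = DiscreteModeSelectionParameter()
param.mode = 1          # hash mode chosen when the entry was stored
param.set_valid(10)     # a stored entry hashed to result value 10
print(param.is_valid(10), param.is_valid(3))   # True False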
Fig. 5 is a flowchart of another multi-channel table lookup method provided in the embodiment of the present application. Fig. 5 illustrates a specific implementation based on the foregoing embodiment: the query request is buffered in the storage module, and polling access to the storage slices is implemented through scheduling, which resolves access conflicts between different query requests and the same storage slice. Referring to fig. 5, the multi-channel table lookup method provided in the embodiment of the present application includes the following steps:
Step 300, determining an access address corresponding to a query request through at least two compression hash operations in a hash module, wherein the query request comes from at least two query channels.
And step 310, sending the query request to the corresponding storage module according to the access address.
In the embodiment of the present application, a plurality of storage modules are arranged in a device, different storage modules correspond to different physical addresses or logical addresses, and because access addresses of query requests are different, the query requests can be allocated into the storage modules according to the access addresses, and the physical addresses or the logical addresses of the storage modules include the access addresses.
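A small illustration of this dispatch step (a sketch under assumed address ranges; the number of modules and the range boundaries are not from the patent):

# Hypothetical layout: storage module k owns logical addresses [k * 0x400, (k + 1) * 0x400).
MODULE_RANGES = [(k * 0x400, (k + 1) * 0x400) for k in range(4)]

def select_storage_module(access_address: int) -> int:
    """Return the index of the storage module whose address range contains the access address."""
    for idx, (low, high) in enumerate(MODULE_RANGES):
        if low <= access_address < high:
            return idx
    raise ValueError("access address outside all storage modules")

print(select_storage_module(0x9F3))   # 0x9F3 falls in module 2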
And step 320, caching the received query request in the storage module.
Specifically, in order to prevent different query requests from simultaneously requesting access to the same block of storage area, the storage module may cache the query requests after receiving the query requests.
And step 330, distributing the cached inquiry request to a storage slice cache in a corresponding storage module according to the corresponding access address.
The storage slice may be a data storage portion of the storage module, and the storage module may include one or more storage slices. A storage slice may specifically be a small-scale RAM, with the storage module formed by splicing several RAMs in depth; the physical or logical addresses corresponding to different storage slices may differ, and the data stored in different storage slices may differ.
In this embodiment of the present application, for each storage module, the cached query requests may be sent to the corresponding storage slice according to the respective corresponding access address, the order of sending the cached query requests may not be limited, the cached query requests may be sent according to the caching time, the first cached query requests may be sent first, or the query requests may be randomly selected to be sent. The query request is sent to the corresponding storage slice according to the corresponding access address, and the query request can be cached in the corresponding storage slice.
Step 340, scheduling the query request according to a preset rule, comparing the table lookup key value of the query request with the corresponding storage items in the storage slice, if the comparison is the same, the query is hit, otherwise, the query is not hit.
The preset rule may be a rule for scheduling the cached query requests in the storage slice, and may include a sequence for scheduling the query requests, a time for controlling the query requests to access the storage entries in the storage slice, and the like. The storage entry may be data stored within a storage slice, and a plurality of storage entries may be stored within one storage slice.
Specifically, the query requests can be scheduled within the storage slice according to a preset rule, so that query requests accessing the same storage slice do not access its storage entries simultaneously. When a query request is scheduled, its lookup key value is compared with the storage entries in the storage slice; if a storage entry is consistent with the lookup key value, the query request is determined to hit, and if no storage entry is consistent with the lookup key value, the query request is determined to miss.
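The per-slice buffering, scheduling and comparison of steps 320 to 340 can be sketched roughly as below; the FIFO scheduling policy and the in-memory data layout are assumptions, since the embodiment only requires some preset rule:

from collections import deque

class StorageSlice:
    def __init__(self):
        self.pending = deque()    # query requests buffered at this storage slice
        self.entries = {}         # access address -> list of stored entries (keys)

    def enqueue(self, address, lookup_key):
        self.pending.append((address, lookup_key))

    def schedule_one(self):
        """Assumed preset rule: serve buffered requests in FIFO order, one per cycle,
        so no two requests touch the stored entries at the same time."""
        if not self.pending:
            return None
        address, lookup_key = self.pending.popleft()
        stored = self.entries.get(address, [])
        hit = lookup_key in stored      # compare the lookup key with the stored entries
        return (lookup_key, "hit" if hit else "miss")

slice0 = StorageSlice()
slice0.entries[0x2A] = [b"key-1", b"key-2"]    # two entries stored at one address
slice0.enqueue(0x2A, b"key-2")
slice0.enqueue(0x2A, b"key-9")
print(slice0.schedule_one(), slice0.schedule_one())   # hit, then miss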
And step 350, judging output information of the hash module and the storage module to determine the correctness of the table lookup result.
The output information may be information output by the hash module or the storage module, and may include information that valid flag bits in the hash module are not marked, a table lookup result output by the storage module, and the like.
In the embodiment of the present application, the output information of the hash module and of the storage module may be judged separately to determine the correctness of the table lookup result. For example, it may first be determined whether output information from the storage module has been received; if so, the output hit information or miss information is taken as the correct table lookup result. If there is no output from the storage module, it may be determined whether miss information output by the hash module has been received; if so, the query request is determined to be a miss, and the miss information is output as the table lookup result.
Further, on the basis of the embodiment of the above application, the determining the output information of the hash module and the storage module to determine the correctness of the table lookup result includes: determining whether the query request in the storage module hits a storage item, if so, outputting the hit information of the storage item as a table look-up result; if not, determining whether a result marking bit of the query request in the hash module is not set, and if so, outputting the miss information of the query request as a table lookup result.
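A minimal arbitration sketch of the decision just described, assuming the storage module's returned result (when present) takes precedence and a cleared result flag bit in the hash module yields a miss:

def arbitrate(storage_output, hash_flag_set: bool):
    """storage_output is None when the storage module produced nothing for this request;
    hash_flag_set mirrors the result valid flag bit in the hash module."""
    if storage_output is not None:
        return storage_output       # hit or miss information from the storage module
    if not hash_flag_set:
        return "miss"               # the hash module already ruled the key out
    return None                     # still in flight; keep waiting

print(arbitrate("hit", True), arbitrate(None, False))   # hit miss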
In the embodiment of the application, the access address corresponding to each query request is determined by at least two compression hash operations in the hash module, and each query request is sent to a storage module according to its access address and buffered there. The buffered query requests can further be distributed to the caches of the individual storage slices of the storage module, and the query requests cached at each storage slice are scheduled according to the preset rule so that the lookup key value can be compared with the storage entries: when the lookup key value of a query request is consistent with a storage entry, the query request is determined to hit, and otherwise it misses. Dividing the storage module into a plurality of storage slices further reduces conflicts in which query requests access the same data block, and buffering the query requests and scheduling them according to the preset rule resolves the situation in which query requests are directed at the same data block simultaneously; this reduces the physical storage space occupied during table lookup and can improve table lookup efficiency.
Further, on the basis of the embodiment of the above application, the storage module includes at least one storage slice, and the physical storage addresses corresponding to the storage slices are different.
Specifically, the storage module includes one or more storage slices, the storage slices may be RAM with a small storage scale, physical storage addresses corresponding to the storage slices are different, and data stored in the storage slices may be different.
Further, on the basis of the embodiment of the above application, at least one storage entry is included in the storage slice.
In an exemplary embodiment, a memory slice may store one or more memory entries.
Fig. 6 is a flowchart of another multi-channel table lookup method provided in the embodiment of the present application, and referring to fig. 6, the method provided in the embodiment of the present application includes the following steps:
step 500, determining an access address corresponding to a query request through at least two compression hash operations in a hash module, wherein the query request is from at least two query channels.
And 510, splitting the query request according to the access address to realize query request shunting.
Specifically, to further reduce the probability that different query requests collide on the same storage space, after the access address of each query request is determined, the query requests may be split according to their access addresses, increasing the number of query request channels and thereby distributing the requests; for example, n channels of query requests may be split into m channels, where m may be a positive integer greater than n. The m channels of query requests may be preset, and the access address of each query request may be preset; when the n channels of query requests are received, their lookup key values may be filled into the corresponding positions of the preset m channels of query requests, thereby splitting the query requests.
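A rough sketch of this splitting step, assuming the m output channels are chosen from the access address by a simple modulo rule (the embodiment does not fix a particular distribution function):

def split_requests(requests, m):
    """Spread n incoming query requests, given as (access_address, lookup_key) pairs,
    over m output channels so that requests with different addresses rarely share one."""
    channels = [[] for _ in range(m)]
    for access_address, lookup_key in requests:
        channels[access_address % m].append((access_address, lookup_key))
    return channels

incoming = [(0x101, b"a"), (0x205, b"b"), (0x30A, b"c"), (0x101, b"d")]   # n = 4 requests
print([len(c) for c in split_requests(incoming, 8)])                      # spread over m = 8 channels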
Step 520, executing the query request in the storage module based on the access address and generating a table lookup result.
Fig. 7 is a flowchart of another multi-channel table lookup method provided in the embodiment of the present application, and referring to fig. 7, the method provided in the embodiment of the present application includes the following steps:
step 600, executing the query request in a cache module, wherein the cache module caches memory entries with hit times greater than or equal to threshold times.
The cache module may be a device for caching the memory entry, and may specifically be a cache. The threshold number of times may be a value reflecting whether a memory entry is frequently queried, and the memory entry may be cached to the cache module when the number of times the memory entry is hit by a query request within a period of time is greater than or equal to the threshold number of times.
In the embodiment of the application, when a query request is obtained, the lookup key value of the query request may be used to check whether a storage entry with a consistent value exists in the cache module; if such a storage entry exists, the table lookup result of the query request can be returned directly as successful, and otherwise the table lookup continues through the hash module and the storage module.
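A sketch of such a front cache, assuming a simple per-entry hit counter and a fixed threshold; eviction and counter ageing are implementation choices left out here:

class EntryCache:
    """Caches storage entries whose hit count reaches the threshold, so later lookups
    of the same key can be answered without the hash module and storage module."""
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.hits = {}        # lookup key -> number of recent hits
        self.cached = set()   # keys promoted into the cache

    def lookup(self, key):
        return "hit (from cache)" if key in self.cached else None

    def record_hit(self, key):
        """Called when the storage module reports a hit for this key."""
        self.hits[key] = self.hits.get(key, 0) + 1
        if self.hits[key] >= self.threshold:
            self.cached.add(key)

cache = EntryCache(threshold=2)
cache.record_hit(b"hot-key")
cache.record_hit(b"hot-key")
print(cache.lookup(b"hot-key"), cache.lookup(b"cold-key"))   # hit (from cache) None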
Step 610, determining an access address corresponding to the query request through at least two compression hash operations in the hash module, wherein the query request is from at least two query channels.
Step 620, executing the query request in the storage module based on the access address and generating a table lookup result.
Fig. 8 is a flowchart of another multi-channel table lookup method provided in the embodiment of the present application, and referring to fig. 8, the method provided in the embodiment of the present application includes the following steps:
step 700, determining an access address corresponding to a query request through at least two compression hash operations in a hash module, wherein the query request is from at least two query channels;
step 710, executing the query request in the storage module based on the access address and generating a table lookup result.
And 720, when determining that the storage module has the hot spot slice with the access frequency within the threshold time being greater than or equal to the threshold access frequency, copying the storage entry in the hot spot storage slice to the hot spot shunting slice.
The threshold time may be a preset time period, the threshold access frequency may be a critical value at which the storage slice becomes a hotspot storage slice, and when the access frequency of a storage slice in a period of time is greater than or equal to the threshold access frequency, the storage slice may be used as a hotspot storage slice.
Specifically, the number of access times of each storage slice in each storage module, which is queried by a query request, may be continuously counted, and when a storage slice is queried for more than a threshold number of access times within a period of time, the storage slice may be used as a hot spot storage slice, and a storage entry in the hot spot storage slice is copied to a hot spot splitting slice for storage.
And step 730, when acquiring a query request for accessing the hotspot storage slice, sending the query request to the hotspot storage slice and/or the hotspot shunting slice according to the cache level of the hotspot storage slice.
The cache level may be the number of the cached query requests in the storage slice, and the higher the cache level is, the greater the number of the query requests cached in the corresponding cache slice is.
In the embodiment of the application, the access address of a query request can be inspected, and when the access address corresponds to a hotspot storage slice, the query request can be sent, according to the cache water level of the hotspot storage slice, to the original hotspot storage slice and/or to the hotspot shunting slice in which the duplicated storage entries are stored, which prevents query blocking inside a single storage module and improves the performance of the network processor chip. It can be understood that the query request may be sent to either the hotspot storage slice or the hotspot shunting slice depending on the cache water level of the hotspot storage slice, or sent to the hotspot storage slice and the hotspot shunting slice respectively.
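The hotspot detection and routing of steps 720 and 730 might look roughly like the sketch below; the counting window, the access threshold and the water-level rule are all assumptions:

class HotspotSplitter:
    def __init__(self, access_threshold: int = 100, high_water: int = 8):
        self.access_threshold = access_threshold   # accesses per window that make a slice hot
        self.high_water = high_water               # buffered requests above which we divert
        self.access_count = {}                     # slice id -> accesses in the current window
        self.shunt_copies = {}                     # slice id -> copied entries (the shunting slice)

    def record_access(self, slice_id, slice_entries):
        self.access_count[slice_id] = self.access_count.get(slice_id, 0) + 1
        if self.access_count[slice_id] >= self.access_threshold:
            # Copy the hot slice's entries into the hotspot shunting slice.
            self.shunt_copies.setdefault(slice_id, dict(slice_entries))

    def route(self, slice_id, pending_in_slice: int) -> str:
        """Divert to the shunting slice when the hot slice's request buffer is deep."""
        if slice_id in self.shunt_copies and pending_in_slice >= self.high_water:
            return "hotspot shunting slice"
        return "original storage slice"

splitter = HotspotSplitter(access_threshold=2)
splitter.record_access(0, {0x2A: [b"key-1"]})
splitter.record_access(0, {0x2A: [b"key-1"]})
print(splitter.route(0, pending_in_slice=9))   # hotspot shunting slice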
Fig. 9 is a schematic structural diagram of a multi-channel table lookup apparatus according to an embodiment of the present application, where the multi-channel table lookup apparatus according to the embodiment of the present application is applicable to a case where multiple query channels perform hash table entry query, and may be implemented in a software and/or hardware manner, and may be generally integrated in a network processor chip, referring to fig. 9, the multi-channel table lookup apparatus according to the embodiment of the present application includes: a hash module and at least one storage module; the hash module is used for performing at least two times of compression hash operation on the query request to determine a corresponding access address, wherein the query request is from at least two query channels; and the storage module is used for storing the target data and executing the query request to generate a table look-up result. The device provided by the embodiment of the application can further comprise an output arbitration module, and the output arbitration module is used for judging the output information of the hash module and the storage module so as to determine the correctness of the table lookup result.
In an exemplary embodiment, an apparatus may include a hash module configured to perform the first compression hash operation after the lookup key value is input and obtain an address for accessing the hash mode used by the second compression hash operation. The hash module also contains a unit for the second compression hash operation, from which the access address of the query request can be obtained. A common way of performing a compression hash operation is to process the lookup key value with a CRC function and then truncate it, or to truncate the lookup key value directly, so as to obtain the corresponding access address. The output arbitration module can be used to arbitrate between the miss output of the hash module and the table lookup result returned by the storage module, so as to improve the correctness of the table lookup result output.
Fig. 10 is a schematic structural diagram of a hash module provided in an embodiment of the present application, where the hash module shown in fig. 10 may be embodied on the basis of the hash module of the above embodiment, and the hash module includes a primary hash unit, a hash mode storage unit, and a secondary hash unit; the first hash unit is used for carrying out first compression hash operation on the table lookup key value of the query request and acquiring a corresponding operation result; the hash mode storage unit is used for storing the hash mode corresponding to the second compression hash operation; and the secondary hash unit is used for carrying out secondary compression hash operation on the query request according to the hash mode in the hash mode storage unit. The hash storage unit specifically comprises at least one discrete mode selection parameter, wherein the discrete mode selection parameter comprises result valid flag bits and hash mode flag bits, the number of the result valid flag bits corresponds to the number of operation results of the second compression hash operation, and the result valid flag bits are set during data storage.
Referring to fig. 10, in an exemplary embodiment, the hash module includes two parts, a hash storage unit and a compression hash unit, and the hash storage unit stores the discrete mode selection parameters; the hash storage unit may be a RAM. The discrete mode selection parameter includes result valid flag bits and a hash mode flag bit. The workflow of the hash module may be as follows: after the lookup key value of the query request is input, compression hashing is first performed in the primary hash unit, generally by dispersing the lookup key value with a CRC function and then truncating it, or by truncating the lookup key value directly, to obtain an operation result used to access the hash storage unit. After the data in the hash storage unit is read, a hash mode is selected according to the discrete mode selection bits, the secondary compression hash operation is performed on the lookup key value with that hash mode, and the result valid flag bit of the obtained operation result is checked. For example, if the operation result after the secondary compression hash operation on the lookup key value is 10, it is determined whether the valid flag bit for result 10 in the hash storage unit is set; if the flag bit is set to 1, the access address corresponding to the result 10 valid flag bit is used as the access address of the query request in the storage module, and if the result 10 valid flag bit is 0, a miss is returned directly. Each result valid flag bit may correspond to an actual memory address in the storage module.
Fig. 11 is a schematic structural diagram of a storage module according to an embodiment of the present application, where the storage module shown in fig. 11 may be embodied on the basis of the storage module according to the foregoing embodiment, and the storage module includes a request cache unit and at least one storage slice, where physical storage addresses corresponding to the storage slices are different; the request cache unit is used for caching the query request for accessing the storage module; and the storage slice is used for storing at least one storage entry and a query request for scheduling access. The storage slice comprises a scheduling unit and an entry storage unit; the scheduling unit is used for scheduling the query request according to a preset rule so as to compare the table lookup key value with the storage items; and the item storage unit is used for carrying out persistent storage on the storage items.
Referring to fig. 11, in an exemplary embodiment, each storage module may consist of a storage part and a comparison part, the storage part being used to store the actual entries. The storage module may be divided into X storage slices, and a single address of each storage slice may store o hash entries; when the query request of a channel accesses the storage module according to its access address, the o stored hash entries may be read and compared with the lookup key value to determine whether the query hits. The workflow of the storage module may be as follows:
1. After the query requests of the n lookup channels have passed through the hash module, at most n requests for accessing the storage modules are obtained, and these query requests are distributed to the storage modules according to their access addresses.
2. The query requests sent by each lookup channel are first buffered in each storage module using a cache.
3. According to the access addresses read from the storage module's request cache, the query requests are distributed to different storage slices for buffering; because multiple channels may access the same storage slice, the query requests of different lookup channels need to be scheduled by a scheduling unit.
4. A scheduled query request reads the entries stored in the storage slice according to the access address it carries (which may be determined by the hash module). Because the same storage address in the storage module may store several storage entries at once, these entries need to be compared simultaneously to judge whether the query hits.
5. The table lookup results of the different storage slices are buffered first and then returned to the corresponding lookup channels according to the sources of the accesses.
Fig. 12 is a schematic structural diagram of another multi-channel table lookup apparatus provided in an embodiment of the present disclosure. Referring to fig. 12, the apparatus of the embodiment of the present disclosure may further include a splitting module, located between the hash module and the storage module, for splitting the query requests according to their access addresses so as to distribute them.
In an exemplary implementation, based on the above application example, a splitting module is added before the storage module, and the splitting module implements scheduling from n channels to m channels, where n is less than m and both n and m may be positive integers. Through the splitting module, the n channels of query requests are dispersed over m lookup channels and output as m query requests, which reduces the probability that multiple query requests access the same storage slice of a storage module at the same time.
Fig. 13 is a schematic structural diagram of another multi-channel table lookup apparatus according to an embodiment of the present disclosure. Referring to fig. 13, the apparatus according to the embodiment of the present disclosure may further include a cache module, connected to the hash module, for executing a query request before the hash module determines the access address corresponding to the query request through at least two compression hash operations; the cache module caches the storage entries whose hit counts are greater than or equal to the threshold count.
In an exemplary embodiment, a cache module is added before the hash module. The cache module can be updated dynamically so that it caches the storage entries that are queried frequently; when the corresponding channel looks up such an entry again, the cache module can return the table lookup result directly without going through the hash module and the storage module, which speeds up lookups of frequently queried entries and reduces the access pressure on the storage module.
Fig. 14 is a schematic structural diagram of another multi-channel table lookup apparatus according to an embodiment of the present disclosure. Referring to fig. 14, the apparatus according to the embodiment of the present disclosure may further include a hotspot shunting slice and a hotspot shunting module. The hotspot shunting slice is connected with the hash module and is used for copying and storing the storage entries of a hotspot storage slice when it is determined that the storage module contains a hotspot storage slice whose number of accesses within the threshold time is greater than or equal to the threshold access count. The hotspot shunting module is connected with the hash module, the hotspot shunting slice and the storage module respectively, and is used for sending a query request to the hotspot storage slice and/or the hotspot shunting slice according to the cache water level of the hotspot storage slice when a query request for accessing the hotspot storage slice is acquired.
In an exemplary implementation, a hotspot shunting slice and a hotspot shunting module are added on the basis of the foregoing embodiment of the application. When it is sensed that a certain storage slice is being accessed frequently by multiple query requests within a short time, it is determined that the storage slice needs hotspot shunting. All storage entries in that storage slice can be copied into the hotspot shunting slice, and the lookup requests for the original storage slice are shunted by the hotspot shunting module: a query request may be sent either to the original storage slice or to the hotspot shunting slice, and the results are converged after the query is completed, which prevents lookup congestion on a single storage slice of the storage module.
The above description is only exemplary embodiments of the present application, and is not intended to limit the scope of the present application.
It will be clear to a person skilled in the art that the term user terminal covers any suitable type of wireless user equipment, such as mobile phones, portable data processing devices, portable web browsers or vehicle-mounted mobile stations.
In general, the various embodiments of the application may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the application is not limited thereto.
Embodiments of the application may be implemented by a data processor of a mobile device executing computer program instructions, for example in a processor entity, or by hardware, or by a combination of software and hardware. The computer program instructions may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages.
Any logic flow block diagrams in the figures of this application may represent program steps, or may represent interconnected logic circuits, modules, and functions, or may represent a combination of program steps and logic circuits, modules, and functions. The computer program may be stored on a memory. The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), and optical storage devices and systems (digital versatile discs (DVDs) or CDs). The computer readable medium may include a non-transitory storage medium. The data processor may be of any type suitable to the local technical environment, such as but not limited to general purpose computers, special purpose computers, microprocessors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), programmable logic devices (FPGAs), and processors based on a multi-core processor architecture.
The foregoing has provided by way of exemplary and non-limiting examples a detailed description of exemplary embodiments of the present application. Various modifications and adaptations to the foregoing embodiments may become apparent to those skilled in the relevant arts in view of the following drawings and the appended claims without departing from the scope of the invention. Therefore, the proper scope of the invention is to be determined according to the claims.

Claims (21)

1. A multi-channel table lookup method, the method comprising:
determining an access address corresponding to a query request through at least two compression hash operations in a hash module, wherein the query request is from at least two query channels;
and executing the query request in a storage module based on the access address and generating a table lookup result.
2. The method of claim 1, wherein determining, at the hash module, the access address corresponding to the query request by at least two compressed hash operations comprises:
performing a first compression hash operation on the table lookup key values of the query requests in a hash module, and acquiring corresponding operation results;
determining discrete mode selection parameters corresponding to the table look-up key values of the query requests based on the operation results;
performing a second compression hash operation on each table lookup key value according to the hash mode corresponding to the discrete mode flag bit in the discrete mode selection parameter;
and determining a corresponding access address according to the operation result of the second compression hash operation of each query request.
3. The method of claim 2, wherein determining the corresponding access address according to the operation result of the second compression hash operation of each query request comprises:
determining a result valid flag bit of the operation result in the corresponding discrete mode selection parameter;
if the result valid flag bit in the discrete mode selection parameter is set, determining an access address according to the operation result;
and if the result valid flag bit in the discrete mode selection parameter is not set, determining that the corresponding query request does not hit the storage item.
4. The method of claim 3, wherein the discrete mode selection parameters comprise discrete mode flag bits and result valid flag bits, wherein the number of result valid flag bits corresponds to the number of operation results of the second compression hash operation, and wherein the result valid flag bits are set at the time of data storage.
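The two-stage hashing described in claims 2 to 4 can be pictured with a minimal C sketch. The hash polynomials, the table size, the number of operation results, and all identifiers below are illustrative assumptions for exposition only, not part of the claimed method.

```c
#include <stdint.h>
#include <stdbool.h>

#define MODE_TABLE_SIZE 1024   /* buckets of discrete mode selection parameters  */
#define NUM_RESULTS     4      /* operation results of the second hash operation */

typedef struct {
    uint8_t mode;                         /* discrete mode flag bits              */
    bool    result_valid[NUM_RESULTS];    /* set when the entry was stored        */
} mode_select_t;

static mode_select_t mode_table[MODE_TABLE_SIZE];

/* First compression hash: indexes the discrete mode selection parameters. */
static uint32_t first_hash(uint64_t key)
{
    return (uint32_t)((key * 0x9E3779B97F4A7C15ULL) >> 32) % MODE_TABLE_SIZE;
}

/* Second compression hash: the selected mode picks a multiplier, and each
 * of the NUM_RESULTS operation results is one candidate access address.  */
static uint32_t second_hash(uint64_t key, uint8_t mode, int result_idx)
{
    static const uint64_t mult[4] = {
        0xC2B2AE3D27D4EB4FULL, 0xFF51AFD7ED558CCDULL,
        0xC4CEB9FE1A85EC53ULL, 0x2545F4914F6CDD1DULL
    };
    return (uint32_t)(((key ^ ((uint64_t)result_idx << 56)) * mult[mode & 3]) >> 40);
}

/* Returns true and writes *address when a result valid flag bit is set;
 * otherwise the query request cannot hit any storage entry.            */
bool get_access_address(uint64_t key, uint32_t *address)
{
    const mode_select_t *sel = &mode_table[first_hash(key)];
    for (int i = 0; i < NUM_RESULTS; i++) {
        if (sel->result_valid[i]) {
            *address = second_hash(key, sel->mode, i);
            return true;
        }
    }
    return false;   /* no result valid flag bit set: report a miss */
}
```

In this reading, an unset result valid flag bit lets the hash stage report a miss without touching storage at all, which is the behaviour recited in claim 3.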
5. The method of claim 1 or 2, wherein executing the query request at a storage module based on the access address and generating a table lookup result comprises:
sending the query request to a corresponding storage module according to an access address;
caching the received query request in the storage module;
distributing the cached query request to a storage slice cache in the corresponding storage module according to the corresponding access address;
and scheduling the query request according to a preset rule, and comparing the table lookup key value of the query request with the storage entries in the corresponding storage slice, wherein if they are the same, the query is a hit, and otherwise the query is a miss.
6. The method of claim 5, wherein the storage module comprises at least one storage slice, and wherein the storage slices have different corresponding physical storage addresses.
7. The method of claim 5, wherein the storage slice comprises at least one storage entry.
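A minimal C sketch of the storage-module side of claims 5 to 7, assuming a fixed number of storage slices, a simple FIFO ring buffer as the per-slice cache, and one storage entry per physical address; these sizes and the scheduling rule are illustrative assumptions, not part of the claims.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_SLICES        4      /* storage slices with distinct address ranges */
#define ENTRIES_PER_SLICE 256    /* storage entries per slice                   */
#define SLICE_QUEUE_DEPTH 8      /* per-slice request cache depth               */

typedef struct { bool used; uint64_t key; uint64_t value; } storage_entry_t;

typedef struct {
    storage_entry_t entries[ENTRIES_PER_SLICE];
    struct { uint32_t addr; uint64_t key; } queue[SLICE_QUEUE_DEPTH];
    int head, tail;              /* ring buffer acting as the slice cache       */
} storage_slice_t;

static storage_slice_t slices[NUM_SLICES];

/* Distribute a buffered query request to the slice owning its access address. */
static bool enqueue_request(uint32_t access_addr, uint64_t key)
{
    storage_slice_t *s = &slices[access_addr % NUM_SLICES];
    int next = (s->tail + 1) % SLICE_QUEUE_DEPTH;
    if (next == s->head)
        return false;            /* slice cache full, caller must retry later   */
    s->queue[s->tail].addr = (access_addr / NUM_SLICES) % ENTRIES_PER_SLICE;
    s->queue[s->tail].key  = key;
    s->tail = next;
    return true;
}

/* Schedule one request from a slice (FIFO here) and compare lookup keys. */
static bool schedule_and_compare(int slice_idx, uint64_t *value_out)
{
    storage_slice_t *s = &slices[slice_idx];
    if (s->head == s->tail)
        return false;            /* nothing queued for this slice               */
    uint32_t addr = s->queue[s->head].addr;
    uint64_t key  = s->queue[s->head].key;
    s->head = (s->head + 1) % SLICE_QUEUE_DEPTH;

    const storage_entry_t *e = &s->entries[addr];
    if (e->used && e->key == key) {   /* table lookup key matches the entry     */
        *value_out = e->value;
        return true;                  /* hit                                    */
    }
    return false;                     /* miss                                   */
}
```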
8. The method of claim 3 or 5, further comprising:
and arbitrating the output information of the hash module and the storage module to determine the correctness of the table lookup result.
9. The method of claim 8, wherein arbitrating the output information of the hash module and the storage module to determine the correctness of the table lookup result comprises:
determining whether the query request in the storage module hits a storage entry, and if so, outputting the hit information of the storage entry as the table lookup result;
and if not, determining whether the result valid flag bit of the query request in the hash module is not set, and if it is not set, outputting the miss information of the query request as the table lookup result.
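The arbitration of claims 8 and 9 can be summarised in a short C sketch; the structure and field names are illustrative assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    bool     storage_hit;        /* storage module matched the lookup key value */
    uint64_t hit_value;          /* entry content returned on a hit             */
    bool     hash_result_valid;  /* result valid flag bit from the hash module  */
} module_outputs_t;

typedef struct { bool hit; uint64_t value; } lookup_result_t;

/* Arbitrate the outputs of the hash module and the storage module. */
lookup_result_t arbitrate(const module_outputs_t *out)
{
    if (out->storage_hit)                         /* hit information wins        */
        return (lookup_result_t){ .hit = true, .value = out->hit_value };
    if (!out->hash_result_valid)                  /* flag not set: definite miss */
        return (lookup_result_t){ .hit = false, .value = 0 };
    /* Flag set but no storage hit: the key is simply absent, also a miss. */
    return (lookup_result_t){ .hit = false, .value = 0 };
}
```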
10. The method of claim 1, further comprising, prior to executing the query request at a storage module based on the access address and generating a table lookup result:
and splitting the query request according to the access address to realize query request distribution.
11. The method of claim 1, wherein before the hash module determines the access address corresponding to the query request through at least two compression hash operations, the method further comprises:
the query request is executed in a cache module, wherein the cache module caches storage entries with the hit number larger than or equal to a threshold number.
12. The method of claim 1, further comprising:
when it is determined that the storage module contains a hotspot storage slice whose number of accesses within a threshold time is greater than or equal to a threshold access count, copying the storage entries in the hotspot storage slice to a hotspot shunting slice;
and when a query request for accessing the hotspot storage slice is acquired, sending the query request to the hotspot storage slice and/or the hotspot shunting slice according to the cache water level of the hotspot storage slice.
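A minimal C sketch of the hotspot shunting of claim 12, assuming one hotspot storage slice, one shunting slice, a per-window access counter, and a single high-water-mark threshold; all thresholds and names are illustrative assumptions.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define ENTRIES_PER_SLICE 256
#define ACCESS_THRESHOLD  1000   /* accesses per window that mark a hotspot     */
#define WATER_LEVEL_HIGH  6      /* cache water level that triggers diversion   */

typedef struct {
    uint64_t entries[ENTRIES_PER_SLICE];
    uint32_t accesses_in_window;   /* cleared at every threshold-time boundary  */
    uint32_t queue_depth;          /* current cache water level of the slice    */
} slice_t;

static slice_t hotspot_slice, shunting_slice;
static bool shunting_active;

/* Detect the hotspot and copy its storage entries to the shunting slice. */
void maybe_activate_shunting(void)
{
    if (!shunting_active &&
        hotspot_slice.accesses_in_window >= ACCESS_THRESHOLD) {
        memcpy(shunting_slice.entries, hotspot_slice.entries,
               sizeof hotspot_slice.entries);
        shunting_active = true;
    }
}

/* Steer a query request for the hotspot slice by its cache water level. */
slice_t *route_request(void)
{
    hotspot_slice.accesses_in_window++;
    if (shunting_active && hotspot_slice.queue_depth >= WATER_LEVEL_HIGH)
        return &shunting_slice;    /* divert while the hotspot queue is deep */
    return &hotspot_slice;
}
```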
13. A multi-channel table lookup apparatus, the apparatus comprising: a hash module and at least one storage module;
the hash module is used for performing at least two compression hash operations on the query request to determine a corresponding access address, wherein the query request is from at least two query channels;
and the storage module is used for storing the target data and executing the query request to generate a table look-up result.
14. The apparatus of claim 13, wherein the hash module comprises a primary hash unit, a hash mode storage unit, and a secondary hash unit;
the primary hash unit is used for performing a first compression hash operation on the table lookup key value of the query request and acquiring a corresponding operation result;
the hash mode storage unit is used for storing the hash mode corresponding to the second compression hash operation;
and the secondary hash unit is used for performing a second compression hash operation on the query request according to the hash mode in the hash mode storage unit.
15. The apparatus according to claim 14, wherein the hash mode storage unit includes at least one discrete mode selection parameter, wherein the discrete mode selection parameter includes a result valid flag bit and a discrete mode flag bit, the number of result valid flag bits corresponds to the number of operation results of the second compression hash operation, and the result valid flag bit is set at the time of data storage.
16. The apparatus according to claim 13 or 14, wherein the storage module comprises a request buffer unit and at least one storage slice, wherein the physical storage addresses corresponding to the storage slices are different;
the request buffer unit is used for caching the query request for accessing the storage module;
and the storage slice is used for storing at least one storage entry and for scheduling the query requests that access it.
17. The apparatus of claim 16, wherein the storage slice comprises a scheduling unit and an entry storage unit;
the scheduling unit is used for scheduling the query request according to a preset rule so as to compare the table lookup key value with the storage entries;
and the entry storage unit is used for persistently storing the storage entries.
18. The apparatus of claim 13, further comprising:
and the output arbitration module is used for arbitrating the output information of the hash module and the storage module to determine the correctness of the table lookup result.
19. The apparatus of claim 13, further comprising:
and the distribution module is positioned between the hash module and the storage module and is used for splitting the query request according to the access address so as to realize distribution of the query request.
20. The apparatus of claim 13, further comprising:
the cache module is connected with the hash module and is used for executing the query request before the hash module determines the access address corresponding to the query request through at least two compression hash operations;
wherein the cache module caches the storage entries whose number of hits is greater than or equal to the threshold number.
21. The apparatus of claim 13, further comprising: a hotspot shunting slice and a hotspot shunting module;
the hotspot shunting slice is connected with the hash module and is used for copying and storing the storage entries of a hotspot storage slice when it is determined that the storage module contains a hotspot storage slice whose number of accesses within a threshold time is greater than or equal to a threshold access count;
and the hotspot shunting module is connected with the hash module, the hotspot shunting slice, and the storage module respectively, and is used for, when a query request for accessing the hotspot storage slice is acquired, sending the query request to the hotspot storage slice and/or the hotspot shunting slice according to the cache water level of the hotspot storage slice.
CN202010844665.6A 2020-08-20 2020-08-20 Multi-channel table lookup method and device Pending CN114077380A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010844665.6A CN114077380A (en) 2020-08-20 2020-08-20 Multi-channel table lookup method and device
PCT/CN2021/111716 WO2022037436A1 (en) 2020-08-20 2021-08-10 Multi-channel table look-up method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010844665.6A CN114077380A (en) 2020-08-20 2020-08-20 Multi-channel table lookup method and device

Publications (1)

Publication Number Publication Date
CN114077380A true CN114077380A (en) 2022-02-22

Family

ID=80282103

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010844665.6A Pending CN114077380A (en) 2020-08-20 2020-08-20 Multi-channel table lookup method and device

Country Status (2)

Country Link
CN (1) CN114077380A (en)
WO (1) WO2022037436A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1184775C (en) * 2002-02-07 2005-01-12 华为技术有限公司 Virtual channel mark/virtual route mark searching method of multipl hash function
CN105049240B (en) * 2015-06-26 2018-08-21 大唐移动通信设备有限公司 A kind of message treatment method and server
US11106672B2 (en) * 2015-09-25 2021-08-31 Micro Focus Llc Queries based on ranges of hash values
CN108111421B (en) * 2017-11-28 2021-02-09 苏州浪潮智能科技有限公司 Message distribution method and device based on multiple Hash

Also Published As

Publication number Publication date
WO2022037436A1 (en) 2022-02-24

Similar Documents

Publication Publication Date Title
CN108153757B (en) Hash table management method and device
US10198363B2 (en) Reducing data I/O using in-memory data structures
US10838622B2 (en) Method and apparatus for improving storage performance of container
US10579522B2 (en) Method and device for accessing a cache memory
US20020138648A1 (en) Hash compensation architecture and method for network address lookup
CN109446114B (en) Spatial data caching method and device and storage medium
US20200349113A1 (en) File storage method, deletion method, server and storage medium
US20040098544A1 (en) Method and apparatus for managing a memory system
CN103019960A (en) Distributed cache method and system
CN111352931A (en) Hash collision processing method and device and computer readable storage medium
CN109933543B (en) Data locking method and device of Cache and computer equipment
US20130117302A1 (en) Apparatus and method for searching for index-structured data including memory-based summary vector
CN112579595A (en) Data processing method and device, electronic equipment and readable storage medium
CN110554911A (en) Memory access and allocation method, memory controller and system
CN114860627B (en) Method for dynamically generating page table based on address information
JP2009015509A (en) Cache memory device
CN114077380A (en) Multi-channel table lookup method and device
EP0528584A1 (en) Directory look-aside table for a virtual data storage system
KR20170107061A (en) Method and apparatus for accessing a data visitor directory in a multicore system
US6915373B2 (en) Cache with multiway steering and modified cyclic reuse
US10698834B2 (en) Memory system
CN114996023B (en) Target cache device, processing device, network equipment and table item acquisition method
US11899642B2 (en) System and method using hash table with a set of frequently-accessed buckets and a set of less frequently-accessed buckets
WO2021008552A1 (en) Data reading method and apparatus, and computer-readable storage medium
US12015602B2 (en) Information security system and method for secure data transmission among user profiles using a blockchain network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination