CN106294191A - Method for processing a table, and method and apparatus for accessing a table - Google Patents

Method for processing a table, and method and apparatus for accessing a table

Info

Publication number
CN106294191A
Authority
CN
China
Prior art keywords
memory
sub-table
hash bucket
storage unit
table entry
Prior art date
Legal status
Granted
Application number
CN201510274566.8A
Other languages
Chinese (zh)
Other versions
CN106294191B
Inventor
王小忠
龚钧
郑远明
Current Assignee
XFusion Digital Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201510274566.8A
Publication of CN106294191A
Application granted
Publication of CN106294191B
Status: Active
Anticipated expiration


Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a method for processing a table, and a method and apparatus for accessing a table. The method for processing a table includes: when the running state of a first memory reaches a preset condition, a processor stores a first sub-table, which is stored in a first storage unit of the first memory, into a second storage unit of a second memory, where the remaining bandwidth of the second memory is higher than the bandwidth occupied by the first sub-table when it is accessed, and the remaining storage space of the second memory is larger than the storage space occupied by the first sub-table; the processor deletes the first sub-table from the first storage unit; and the processor sends the correspondence between the first sub-table and the second storage unit to a network processor, so that the network processor updates the saved correspondence between the first sub-table and the first storage unit to the correspondence between the first sub-table and the second storage unit. By migrating some of the sub-tables stored in a memory into other memories, the present invention can reduce the capacity requirement that a high-performance network processor places on memory.

Description

Method for processing a table, and method and apparatus for accessing a table
Technical field
The present invention relates to the communications field, and in particular to a method for processing a table and a method and apparatus for accessing a table.
Background
The rapid development of mobile networks and the rise of the Internet of Things have led to explosive growth in network traffic, and this rapid growth is expected to continue for the foreseeable future. The fast increase in network traffic makes the performance of network devices a bottleneck, which is both a challenge and an opportunity for network equipment vendors.
The network processor is generally the core of a router, and its performance is key to a router's competitiveness. At present, the rate of a single link interface is about to exceed 50 gigabits per second (Gbps), which means that the single-chip throughput of future network processors may reach 2 terabits per second (Tbps) or higher, an improvement of more than 4 to 8 times over today. Such high throughput places higher requirements on the processing performance inside the network processor.
Memory bandwidth is the performance bottleneck of the network processor. Compared with the rapid improvement of interface capability, memory bandwidth has grown relatively slowly in recent years, especially the bandwidth of Double Data Rate (DDR) Synchronous Dynamic Random Access Memory (SDRAM). In random access mode, with multiple storage arrays (banks) multiplexed, a fourth-generation DDR (DDR4) chip can currently provide a sustained access rate of only about 125*128 megabits (Mb) at most. In most scenarios a packet needs to access the memory holding the service tables more than 10 times from ingress to egress, reading a 128-bit entry each time. At a packet rate of 750 million packets per second (MPPS), the memory therefore needs to provide a total bandwidth of more than 750*10*128 Mb. If the entries of all tables were placed in DDR, more than 60 DDR chips would be needed, which is impractical.
At present, a router needs to support dozens of forwarding services, and each forwarding service accesses about 8 to 20 service tables, so a router holds more than 400 service tables in total. However, in any scenario only one or a few forwarding services on a device run at high speed at the same time, while most other forwarding services are at low speed or idle. That is, in real scenarios only a small number of service tables are accessed at high speed, and most service tables are accessed at a very low frequency or are even idle. Even for a service table accessed at high speed, not all entries are accessed equally often. Research shows that more than 90% of the traffic is concentrated in 5% to 10% of the flows (the large flows), of which 20% to 40% of the traffic is concentrated in 0.1% to 0.5% of the flows (the very large flows). This means that most traffic hits a small number of entries, while most entries are accessed at a very low frequency or not at all. However, the common current solution pre-selects the storage location of a whole table according to the access performance requirement of the whole table; when the table capacity is large, a very large high-bandwidth memory is needed, which exceeds what next-generation network processors can support.
Summary of the invention
The invention provides a method for processing a table, and a method and apparatus for accessing a table, which can reduce the capacity requirement that a high-performance network processor places on memory and make next-generation high-performance network processors easier to implement.
In a first aspect, a method for processing a table is provided, where a first table includes multiple sub-tables, a first memory includes multiple storage units, the multiple sub-tables include a first sub-table, and the multiple storage units include a first storage unit. The method includes: when the running state of the first memory reaches a preset condition, the processor stores the first sub-table, which is stored in the first storage unit, into a second storage unit of a second memory, where the remaining bandwidth of the second memory is higher than the bandwidth occupied by the first sub-table when it is accessed, and the remaining storage space of the second memory is larger than the storage space occupied by the first sub-table; the processor deletes the first sub-table from the first storage unit; and the processor sends the correspondence between the first sub-table and the second storage unit to a network processor, so that the network processor updates the saved correspondence between the first sub-table and the first storage unit to the correspondence between the first sub-table and the second storage unit.
With reference to the first aspect, in a first possible implementation, the storing, by the processor when the running state of the first memory reaches the preset condition, the first sub-table stored in the first storage unit into the second storage unit of the second memory includes: the processor determines the bandwidth occupied by the first memory, and when the bandwidth occupied by the first memory is greater than or equal to a first preset value, the processor stores the first sub-table into the second storage unit of the second memory; or,
the processor obtains the remaining storage space of the first memory, and when the remaining storage space of the first memory is less than or equal to a second preset value, the processor stores the first sub-table into the second storage unit of the second memory.
With reference to the first possible implementation, in a second possible implementation, the storing, by the processor when the bandwidth occupied by the first memory is greater than or equal to the first preset value, the first sub-table stored in the first storage unit of the first memory into the second storage unit of the second memory includes: the processor obtains the access count of each storage unit in the first memory; the processor determines the first storage unit in the first memory according to the access counts, where the access frequency of the first storage unit is higher than the access frequency of the other storage units in the first memory; the processor determines the second memory from among multiple memories, where the remaining bandwidth of the second memory is higher than the remaining bandwidth of the other memories among the multiple memories; and the processor stores the first sub-table into the second storage unit of the second memory.
With reference to the first possible implementation, in a third possible implementation, the storing, by the processor when the running state of the first memory reaches the preset condition, the first sub-table stored in the first storage unit into the second storage unit of the second memory includes: the processor determines the bandwidth occupied by the first memory; when the bandwidth occupied by the first memory is less than a third preset value, the processor obtains the access count of each storage unit in the first memory; the processor determines the first storage unit according to the access counts of the storage units, where the access frequency of the first storage unit is lower than the access frequency of the other storage units in the first memory; the processor determines the second memory from among multiple memories, where the remaining storage space of the second memory is larger than the remaining storage space of the other memories among the multiple memories; and the processor stores the first sub-table into the second storage unit of the second memory.
With reference to the first aspect or any one of the first to third possible implementations, in a fourth possible implementation, before the processor stores the first sub-table stored in the first storage unit into the second storage unit of the second memory when the running state of the first memory reaches the preset condition, the method further includes: the processor obtains a first write request, where the first write request requests a write operation on a first entry of the first table; and the processor allocates the first storage unit to the first sub-table according to the first write request, where the first sub-table includes the first entry.
With reference to the fourth possible implementation, in a fifth possible implementation, the method further includes: the processor obtains a second write request, where the second write request requests a write operation on a second entry of a second sub-table, and the multiple sub-tables further include the second sub-table; and if the processor determines that no storage unit has been allocated to the second sub-table, the processor allocates a third storage unit to the second sub-table according to the second write request.
With reference to any of the foregoing possible implementations of the first aspect, in a sixth possible implementation, the first table is stored by means of hashing, and the keyword (key) of each entry of the first table is stored in an entry of a hash bucket. The method further includes:
when the number of keys already stored in each entry of a first hash bucket is less than a first threshold, the processor stores a key to be stored into the first hash bucket;
when the number of keys already stored in each entry of the first hash bucket is greater than or equal to the first threshold, the processor stores the key to be stored into whichever of the first hash bucket and a second hash bucket holds fewer stored keys, where the first hash bucket and the second hash bucket use different hash functions;
when the entries of the first hash bucket and the second hash bucket are full and the number of keys stored in each entry of a third hash bucket is less than the first threshold, the processor stores the key to be stored into the third hash bucket, where the first hash bucket and the third hash bucket use the same hash function;
when the entries of the first hash bucket and the second hash bucket are full and the number of keys stored in each entry of the third hash bucket is greater than or equal to the first threshold, the processor stores the key to be stored into whichever of the third hash bucket and a fourth hash bucket holds fewer stored keys, where the first hash bucket and the third hash bucket use the same hash function, and the second hash bucket and the fourth hash bucket use the same hash function.
With reference to the sixth possible implementation, in a seventh possible implementation, the method further includes:
when the ratio of the access count of the first hash bucket to the access count of the third hash bucket is less than or equal to a second threshold, the processor controls the network processor to match the third hash bucket first and then match the first hash bucket;
when the ratio of the access count of the second hash bucket to the access count of the fourth hash bucket is less than or equal to the second threshold, the processor controls the network processor to match the fourth hash bucket first and then match the second hash bucket;
when the ratio of the access count of the first hash bucket and the third hash bucket to the access count of the second hash bucket and the fourth hash bucket is less than or equal to the second threshold, the processor controls the network processor to match the second hash bucket and the fourth hash bucket first and then match the first hash bucket and the third hash bucket.
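The following C sketch illustrates, purely as an example, how a control plane might derive these matching-order decisions from per-bucket access counters; the structure, the threshold value, the order codes, and the evaluation order of the three conditions are assumptions made for the sketch and are not taken from the patent.

#include <stdint.h>

/* Hypothetical per-bucket access counters collected by the control plane. */
struct bucket_stats {
    uint64_t hits_b1, hits_b2, hits_b3, hits_b4;
};

/* Matching orders the network processor could be instructed to use. */
enum match_order {
    MATCH_B1_B2_FIRST,       /* default order */
    MATCH_B3_BEFORE_B1,      /* third hash bucket before first hash bucket */
    MATCH_B4_BEFORE_B2,      /* fourth hash bucket before second hash bucket */
    MATCH_B2B4_BEFORE_B1B3,  /* second and fourth buckets before first and third */
};

/* Choose a matching order; 'threshold' plays the role of the second threshold.
 * The group test is evaluated first here, which is one possible ordering of
 * the three conditions described above. */
enum match_order choose_match_order(const struct bucket_stats *s, double threshold)
{
    double r13 = (double)s->hits_b1 / (double)(s->hits_b3 ? s->hits_b3 : 1);
    double r24 = (double)s->hits_b2 / (double)(s->hits_b4 ? s->hits_b4 : 1);
    uint64_t g2 = s->hits_b2 + s->hits_b4;
    double rg  = (double)(s->hits_b1 + s->hits_b3) / (double)(g2 ? g2 : 1);

    if (rg <= threshold)
        return MATCH_B2B4_BEFORE_B1B3;
    if (r13 <= threshold)
        return MATCH_B3_BEFORE_B1;
    if (r24 <= threshold)
        return MATCH_B4_BEFORE_B2;
    return MATCH_B1_B2_FIRST;
}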
In a second aspect, a method for accessing a table is provided, including: a network processor obtains the base address of a memory block mapping information table corresponding to a first table, where the memory block mapping information table includes the correspondence between each sub-table of the first table and a storage unit; the network processor accesses the memory block mapping information table according to the base address and the index of a first entry of the first table, and determines, according to the memory block mapping information table, the index of the storage unit corresponding to the first sub-table in which the first entry is located, where the first sub-table is any sub-table of the first table; the network processor determines the physical address of the first entry according to the index of the storage unit and the index of the first entry; and the network processor accesses the first entry according to the physical address of the first entry.
With reference to the second aspect, in a first possible implementation of the second aspect, the obtaining, by the network processor, the base address of the memory block mapping information table corresponding to the first table includes: the network processor accesses a memory block mapping base address table according to the table identifier (TID) of the first table, and determines the base address of the memory block mapping information table according to the memory block mapping base address table.
With reference to the second aspect or the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the method further includes: the network processor updates the access count of the storage unit.
With reference to the second aspect or the first or second possible implementation of the second aspect, in a third possible implementation of the second aspect, the accessing, by the network processor, the memory block mapping information table according to the base address and the index of the first entry of the first table includes: the network processor accesses the memory block mapping information table at the address determined according to the following formula,
base + entry index / block size
where base is the base address, entry index is the index of the first entry, and block size is the number of entries contained in the first sub-table.
With reference to the second aspect or the first, second, or third possible implementation of the second aspect, in a fourth possible implementation of the second aspect, the determining, by the network processor, the physical address of the first entry according to the index of the storage unit and the index of the first entry includes: the network processor determines the physical address of the first entry according to the following formula,
real block index * block size + entry index % block size
where real block index is the index of the storage unit, block size is the number of entries contained in the first sub-table, the number of entries contained in the first sub-table is the same as the number of entries stored in one storage unit, and entry index is the index of the first entry.
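As an illustration only, the two formulas can be written out in C as below; the function and parameter names simply mirror the formulas, and the flat array standing in for the memory block mapping information table is an assumption of the sketch.

#include <stdint.h>

/* Index of the mapping-table slot for the sub-table that contains the entry
 * (formula: base + entry_index / block_size). */
static inline uint64_t map_slot(uint64_t base, uint32_t entry_index,
                                uint32_t block_size)
{
    return base + entry_index / block_size;
}

/* Physical address of the entry inside the storage unit that holds its sub-table
 * (formula: real_block_index * block_size + entry_index % block_size). */
static inline uint64_t phys_addr(uint32_t real_block_index, uint32_t entry_index,
                                 uint32_t block_size)
{
    return (uint64_t)real_block_index * block_size + entry_index % block_size;
}

/* The two steps combined: 'map_info' stands in for the memory block mapping
 * information table, read at the slot computed from the base address. */
uint64_t locate_entry(const uint32_t *map_info, uint64_t base,
                      uint32_t entry_index, uint32_t block_size)
{
    uint32_t real_block = map_info[map_slot(base, entry_index, block_size)];
    return phys_addr(real_block, entry_index, block_size);
}

The integer division selects the sub-table (and thus the mapping slot), while the remainder gives the offset of the entry within the storage unit.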
In a third aspect, an apparatus for processing a table is provided, where a first table includes multiple sub-tables, a first memory includes multiple storage units, the multiple sub-tables include a first sub-table, and the multiple storage units include a first storage unit. The apparatus includes: a processing unit, configured to store, when the running state of the first memory reaches a preset condition, the first sub-table stored in the first storage unit into a second storage unit of a second memory, where the remaining bandwidth of the second memory is higher than the bandwidth occupied by the first sub-table when it is accessed, and the remaining storage space of the second memory is larger than the storage space occupied by the first sub-table; a deleting unit, configured to delete the first sub-table from the first storage unit; and a sending unit, configured to send the correspondence between the first sub-table and the second storage unit to a network processor, so that the network processor updates the saved correspondence between the first sub-table and the first storage unit to the correspondence between the first sub-table and the second storage unit.
With reference to the third aspect, in a first possible implementation of the third aspect, the processing unit is specifically configured to: determine the bandwidth occupied by the first memory, and when the bandwidth occupied by the first memory is greater than or equal to a first preset value, store the first sub-table into the second storage unit of the second memory; or obtain the remaining storage space of the first memory, and when the remaining storage space of the first memory is less than or equal to a second preset value, store the first sub-table into the second storage unit of the second memory.
With reference to the first possible implementation of the third aspect, in a second possible implementation of the third aspect, the processing unit is specifically configured to: obtain the access count of each storage unit in the first memory; determine, according to the access counts, the first storage unit in the first memory, where the access frequency of the first storage unit is higher than the access frequency of the other storage units in the first memory; determine the second memory from among multiple memories, where the remaining bandwidth of the second memory is higher than the remaining bandwidth of the other memories among the multiple memories; and store the first sub-table into the second storage unit of the second memory.
With reference to the first possible implementation of the third aspect, in a third possible implementation of the third aspect, the processing unit is specifically configured to: determine the bandwidth occupied by the first memory; when the bandwidth occupied by the first memory is less than a third preset value, obtain the access count of each storage unit in the first memory; determine the first storage unit according to the access counts of the storage units, where the access frequency of the first storage unit is lower than the access frequency of the other storage units in the first memory; determine the second memory from among multiple memories, where the remaining storage space of the second memory is larger than the remaining storage space of the other memories among the multiple memories; and store the first sub-table into the second storage unit of the second memory.
With reference to any one of the first to third possible implementations of the third aspect, in a fourth possible implementation of the third aspect, the processing unit is further configured to: before the first sub-table stored in the first storage unit is stored into the second storage unit of the second memory when the running state of the first memory reaches the preset condition, obtain a first write request, where the first write request requests a write operation on a first entry of the first table; and allocate the first storage unit to the first sub-table according to the first write request, where the first sub-table includes the first entry.
With reference to the fourth possible implementation of the third aspect, in a fifth possible implementation of the third aspect, the processing unit is further configured to: obtain a second write request, where the second write request requests a write operation on a second entry of a second sub-table, and the multiple sub-tables further include the second sub-table; and if it is determined that no storage unit has been allocated to the second sub-table, allocate a third storage unit to the second sub-table according to the second write request.
With reference to the third aspect or any of the foregoing possible implementations of the third aspect, in a sixth possible implementation of the third aspect, the first table is stored by means of hashing, the keyword (key) of each entry of the first table is stored in an entry of a hash bucket, and the processing unit is further configured to:
when the number of keys stored in each entry of a first hash bucket is less than a first threshold, store a key to be stored into the first hash bucket;
when the number of keys stored in each entry of the first hash bucket is greater than or equal to the first threshold, store the key to be stored into whichever of the first hash bucket and a second hash bucket holds fewer stored keys, where the first hash bucket and the second hash bucket use different hash functions;
when the entries of the first hash bucket and the second hash bucket are full and the number of keys stored in each entry of a third hash bucket is less than the first threshold, store the key to be stored into the third hash bucket, where the first hash bucket and the third hash bucket use the same hash function;
when the entries of the first hash bucket and the second hash bucket are full and the number of keys stored in each entry of the third hash bucket is greater than or equal to the first threshold, store the key to be stored into whichever of the third hash bucket and a fourth hash bucket holds fewer stored keys, where the first hash bucket and the third hash bucket use the same hash function, and the second hash bucket and the fourth hash bucket use the same hash function.
With reference to the sixth possible implementation of the third aspect, in a seventh possible implementation of the third aspect, the apparatus further includes a control unit, configured to:
when the ratio of the access count of the first hash bucket to the access count of the third hash bucket is less than or equal to a second threshold, control the network processor to match the third hash bucket first and then match the first hash bucket;
when the ratio of the access count of the second hash bucket to the access count of the fourth hash bucket is less than or equal to the second threshold, control the network processor to match the fourth hash bucket first and then match the second hash bucket;
when the ratio of the access count of the first hash bucket and the third hash bucket to the access count of the second hash bucket and the fourth hash bucket is less than or equal to the second threshold, control the network processor to match the second hash bucket and the fourth hash bucket first and then match the first hash bucket and the third hash bucket.
In a fourth aspect, an apparatus for accessing a table is provided, including: a processing unit, configured to: obtain the base address of a memory block mapping information table corresponding to a first table, where the memory block mapping information table includes the correspondence between each sub-table of the first table and a storage unit; access the memory block mapping information table according to the base address and the index of a first entry of the first table, and determine, according to the memory block mapping information table, the index of the storage unit corresponding to the first sub-table in which the first entry is located, where the first sub-table is any sub-table of the first table; and determine the physical address of the first entry according to the index of the storage unit and the index of the first entry; and an access unit, configured to access the first entry according to the physical address of the first entry.
With reference to the fourth aspect, in a first possible implementation of the fourth aspect, the processing unit is specifically configured to: access a memory block mapping base address table according to the table identifier (TID) of the first table, and determine the base address of the memory block mapping information table according to the memory block mapping base address table.
With reference to the fourth aspect or the first possible implementation of the fourth aspect, in a second possible implementation of the fourth aspect, the apparatus further includes: a recording unit, configured to update the access count of the storage unit.
With reference to the fourth aspect or the first or second possible implementation of the fourth aspect, in a third possible implementation of the fourth aspect, the processing unit is specifically configured to access the memory block mapping information table at the address determined according to the following formula,
base + entry index / block size
where base is the base address, entry index is the index of the first entry, and block size is the number of entries contained in the first sub-table.
With reference to the fourth aspect or the first, second, or third possible implementation of the fourth aspect, in a fourth possible implementation of the fourth aspect, the processing unit is specifically configured to determine the physical address of the first entry according to the following formula,
real block index * block size + entry index % block size
where real block index is the index of the storage unit, block size is the number of entries contained in the first sub-table, the number of entries contained in the first sub-table is the same as the number of entries stored in one storage unit, and entry index is the index of the first entry.
Based on the foregoing technical solutions, when the running state of a memory reaches a preset condition, some of the sub-tables stored in the storage units of that memory are migrated into other memories that have sufficient remaining bandwidth and storage space, which can reduce the capacity requirement that a high-performance network processor places on memory and makes next-generation high-performance network processors easier to implement.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the following briefly describes the accompanying drawings needed in the embodiments of the present invention. Apparently, the accompanying drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from these accompanying drawings without creative effort.
Fig. 1 is a schematic diagram of a table and a memory according to an embodiment of the present invention.
Fig. 2 is a schematic flowchart of a method for processing a table according to an embodiment of the present invention.
Fig. 3 is a schematic flowchart of a method for processing a table according to another embodiment of the present invention.
Fig. 4 is a schematic diagram of a statistics unit used in a method for processing a table according to another embodiment of the present invention.
Fig. 5 is a schematic flowchart of a method for processing a table according to another embodiment of the present invention.
Fig. 6 is a schematic flowchart of a method for processing a table according to another embodiment of the present invention.
Fig. 7 is a schematic diagram of hash buckets of a router.
Fig. 8a, 8b, 8c and 8d are schematic diagrams of a method for processing a table according to an embodiment of the present invention.
Fig. 9 is a schematic flowchart of a method for accessing a table according to an embodiment of the present invention.
Fig. 10 is a schematic diagram of a storage system architecture used in a method for accessing a table according to an embodiment of the present invention.
Fig. 11 is a schematic diagram of a memory mapping unit used in a method for accessing a table according to an embodiment of the present invention.
Fig. 12 is a schematic block diagram of an apparatus for processing a table according to an embodiment of the present invention.
Fig. 13 is a schematic block diagram of an apparatus for processing a table according to another embodiment of the present invention.
Fig. 14 is a schematic block diagram of an apparatus for accessing a table according to an embodiment of the present invention.
Fig. 15 is a schematic block diagram of an apparatus for accessing a table according to another embodiment of the present invention.
Fig. 16 is a schematic block diagram of an apparatus for processing a table according to another embodiment of the present invention.
Fig. 17 is a schematic block diagram of an apparatus for accessing a table according to another embodiment of the present invention.
Detailed description of the invention
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
In the specification, claims, and accompanying drawings of this application, the terms "first", "second", "third", and so on are intended to distinguish between different objects rather than to describe a particular order. In addition, the terms "include" and "have" are not exclusive. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, and may also include steps or units that are not listed.
In the embodiments of the present invention, a table includes multiple sub-tables and a memory includes multiple storage units, as shown in Fig. 1. It should be understood that the physical space of a memory in the embodiments of the present invention may be divided according to physical structure, or may be divided logically.
Fig. 2 is a schematic flowchart of a method 200 for processing a table according to an embodiment of the present invention. As shown in Fig. 2, the method 200 includes the following content.
210. When the running state of a first memory reaches a preset condition, the processor stores a first sub-table, which is stored in a first storage unit, into a second storage unit of a second memory, where the remaining bandwidth of the second memory is higher than the bandwidth occupied by the first sub-table when it is accessed, and the remaining storage space of the second memory is larger than the storage space occupied by the first sub-table.
In this embodiment of the present invention, the first table includes multiple sub-tables, the first memory includes multiple storage units, the multiple sub-tables include the first sub-table, and the multiple storage units include the first storage unit. As shown in Fig. 1, the multiple sub-tables of the first table may be stored in the first memory, or may be stored in different memories. One storage unit is used to store one sub-table, and one sub-table contains a fixed number of consecutive entries.
The running state of a memory may include the occupied bandwidth and the occupied storage space. The second storage unit is any storage unit, among the multiple storage units of the second memory, that does not store a sub-table.
It should be understood that the processor in this embodiment of the present invention refers to the processor of the control plane.
220. The processor deletes the first sub-table from the first storage unit.
230. The processor sends the correspondence between the first sub-table and the second storage unit to the network processor, so that the network processor updates the saved correspondence between the first sub-table and the first storage unit to the correspondence between the first sub-table and the second storage unit.
In the prior art, the storage location of a whole table generally needs to be selected according to the line rate that satisfies all typical service scenarios at the same time, which requires the forwarding-plane network processor to provide a large-capacity, high-bandwidth memory system. This approach of statically pre-allocating memory to satisfy the performance requirements of all scenarios cannot support the memory bandwidth and memory capacity demands of network processors above 1 Tbps, and becomes the bottleneck of high-performance network processors. In this embodiment of the present invention, when the running state of a memory reaches a preset condition, the storage locations of some of the sub-tables stored in the memory can be selected flexibly, which can reduce the capacity requirement that a high-performance network processor places on high-bandwidth memory and thereby improve the processing performance of the network processor.
Therefore, in the method for processing a table in this embodiment of the present invention, when the running state of a memory reaches a preset condition, some of the sub-tables stored in the storage units of that memory are migrated into other memories that have sufficient remaining bandwidth and storage space, which can reduce the capacity requirement that a high-performance network processor places on memory and makes next-generation high-performance network processors easier to implement.
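For orientation only, steps 210 to 230 might be arranged on the control plane as in the following C sketch; the types and the helpers copy_subtable, free_unit and notify_np are hypothetical placeholders (declared but not defined here), not an implementation prescribed by the patent.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical handles for a sub-table and a storage unit. */
typedef struct { uint32_t table_id, subtable_id; } subtable_t;
typedef struct { uint32_t memory_id, unit_index; } storage_unit_t;

/* Hypothetical helpers: copy a sub-table into a storage unit, release a unit,
 * and tell the network processor about the new sub-table -> unit mapping. */
bool copy_subtable(const subtable_t *st, storage_unit_t dst);
void free_unit(storage_unit_t unit);
void notify_np(const subtable_t *st, storage_unit_t new_unit);

/* Steps 210-230: store the sub-table into the second storage unit, delete it
 * from the first storage unit, and send the updated mapping to the network
 * processor so that lookups are redirected to the new location. */
bool migrate_subtable(const subtable_t *st, storage_unit_t first_unit,
                      storage_unit_t second_unit)
{
    if (!copy_subtable(st, second_unit))   /* step 210 */
        return false;
    free_unit(first_unit);                 /* step 220 */
    notify_np(st, second_unit);            /* step 230 */
    return true;
}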
Note that this embodiment of the present invention describes only the process in which the processor migrates the first sub-table to the second memory. In this embodiment of the present invention, the processor may migrate multiple sub-tables stored in multiple storage units to one memory, or may migrate multiple sub-tables to multiple memories, which is not limited in this embodiment of the present invention.
Optionally, as shown in Fig. 3, in step 210, the storing, by the processor when the running state of the first memory reaches the preset condition, the first sub-table stored in the first storage unit into the second storage unit of the second memory includes:
211. The processor determines the bandwidth occupied by the first memory.
212. When the bandwidth occupied by the first memory is greater than or equal to a first preset value, the processor stores the first sub-table into the second storage unit of the second memory.
The first preset value may serve as the criterion for judging whether the bandwidth of the memory is nearly exhausted.
This embodiment of the present invention does not limit the method by which the processor determines the bandwidth occupied by a memory. For example, the processor may poll the network processor to obtain the access count of the memory, determine the access frequency of the memory from it, and then determine the occupied bandwidth according to the access frequency. The processor may also poll the network processor to obtain the occupied bandwidth of the memory directly. It should be noted that this embodiment of the present invention does not limit the method by which the network processor determines the bandwidth occupied by a memory either. For example, the network processor may also determine the occupied bandwidth in hardware, by judging whether the length of the ingress queue triggers a threshold.
Optionally, in step 212, the storing, by the processor, the first sub-table into the second storage unit of the second memory includes:
the processor obtains the access count of each storage unit in the first memory;
the processor determines, according to the access counts of the storage units, the first storage unit in the first memory, where the access frequency of the first storage unit is higher than the access frequency of the other storage units in the first memory; and
the processor stores the first sub-table in the first storage unit into the second storage unit of the second memory.
Specifically, the control-plane processor may poll the forwarding-plane network processor to obtain the access count of each storage unit of a memory. For example, the network processor may be provided with a statistics unit, as shown in Fig. 4. The statistics unit includes multiple counters, each associated with one storage unit of the memory and recording the access count of that storage unit. From the access counts of the storage units obtained within a certain period, the processor can determine the access frequency of each storage unit.
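A minimal sketch of such statistics, under assumed sizes and names, is shown below: one counter per storage unit, polled periodically by the control plane to turn count deltas into access rates.

#include <stdint.h>

#define NUM_UNITS 1024            /* assumed number of storage units in one memory */

/* One access counter per storage unit; in this sketch the counters incremented
 * by the network processor are modelled as a plain array read by the control plane. */
static uint64_t access_count[NUM_UNITS];
static uint64_t last_count[NUM_UNITS];

/* Accesses per second of one storage unit over the last polling period. */
uint64_t unit_access_rate(uint32_t unit, uint64_t period_seconds)
{
    uint64_t delta = access_count[unit] - last_count[unit];
    last_count[unit] = access_count[unit];
    return period_seconds ? delta / period_seconds : delta;
}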
It should be noted that, when the bandwidth occupied by the memory is greater than or equal to the first preset value, migrating some of the sub-tables stored in this memory to other memories is enough to achieve the goal of reducing the bandwidth occupied by this memory. Therefore the processor may migrate the sub-tables stored in one or more of the most frequently accessed storage units of the memory to one or more other memories, and may also migrate the sub-tables stored in any one or more storage units of the memory to other memories, which is not limited in this embodiment of the present invention.
In this embodiment of the present invention, during operation, the processor migrates sub-tables according to the actual bandwidth occupancy of each memory and the statistics of the access counts of the storage units, which can satisfy the network processor's demand for memory bandwidth.
Optionally, in step 212, before the processor stores the first sub-table into the second storage unit of the second memory, the method 200 further includes: the processor determines the second memory from among multiple memories, where the remaining bandwidth of the second memory is higher than the remaining bandwidth of the other memories among the multiple memories.
In this embodiment of the present invention, by migrating the most frequently accessed sub-tables to a memory with sufficient remaining bandwidth, the high-bandwidth memory can be made to store the frequently accessed sub-tables, which can greatly reduce the network processor's demand for high-performance memory capacity.
In other words, by counting the access frequencies of memories and sub-tables, placing frequently accessed sub-tables in high-bandwidth memory and rarely accessed sub-tables in low-bandwidth memory can, compared with selecting the storage location of a whole table according to the access performance of the whole table, effectively reduce the capacity requirement on high-bandwidth memory and make next-generation high-performance network processors easier to implement.
For example, selecting the storage location of each sub-table based on per-sub-table access statistics and migrating continuously makes it possible for, say, the on-chip memory to store only the entries that are accessed at high speed, which can greatly reduce the network processor's demand for on-chip memory capacity. On-chip memory refers to memory integrated on the same chip as the processor. Correspondingly, memory that is not integrated on the same chip as the processor is called off-chip memory. On-chip memory has higher bandwidth than off-chip memory.
When the on-chip memory is used as a cache, in the initial state the table data are stored in the off-chip memory and the on-chip memory serves as a backup of the off-chip memory. The processor accesses the on-chip memory first and accesses the off-chip memory only on a cache miss, so data that are accessed continuously can remain in the cache for a long time. The on-chip memory, i.e. the cache, can be very small and can still provide very high access performance when the cache hit ratio is high. A cache is suitable for application scenarios in which data accesses have strong locality, but the memory accesses during packet processing in a router do not have locality characteristics: dozens of different tables are accessed during packet processing, the size of each table ranging from several megabits (Mb) to several hundred Mb, and the entries hit when different packets access the same table in most cases also lack locality. These access characteristics of packet processing can make the cache hit ratio very low, so that packet forwarding performance is low. Thus cache conflicts may occur between high-speed entries, as well as between high-speed entries and low-speed entries. The cache hit ratio may be very high in one scenario but very low in another, because the cache uses static hash-based memory mapping. Therefore, a cache cannot guarantee forwarding service performance.
This embodiment of the present invention counts the access frequency of each sub-table in the memories in real time and migrates sub-tables between different memories according to the actual access frequencies, so that a high-performance memory (for example, on-chip memory) stores only the sub-tables that are accessed at high speed. This makes it possible to support high-speed forwarding services in all scenarios with less on-chip memory, so the solution of this embodiment of the present invention is better than the cache mechanism.
For example, when the bandwidth of a memory is nearly exhausted, say more than 80% occupied, the storage units of this memory with the highest access frequencies are found first (for example, the top few, tens, or hundreds), and then some policy is applied to migrate the sub-tables in part of these storage units to other memories that have sufficient remaining bandwidth. Sub-tables can be migrated between any memories, for example from a first on-chip memory to a second on-chip memory, from an on-chip memory to an off-chip memory, from a first off-chip memory to a second off-chip memory, or from an off-chip memory to an on-chip memory. After the migration, the mapping between storage units and sub-tables in the memory mapping table of the network processor is updated.
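One possible selection policy, sketched in C under assumed structures and an assumed 80% trigger, is shown below: pick the hottest storage unit of the overloaded memory and the memory with the most remaining bandwidth as the destination.

#include <stdint.h>

#define NUM_MEMORIES 4
#define UNITS_PER_MEMORY 1024

/* Hypothetical per-memory statistics gathered by polling the network processor. */
struct memory_stats {
    double bandwidth_used_ratio;                 /* 0.0 .. 1.0 of total bandwidth */
    double remaining_bandwidth;                  /* e.g. in Mb/s */
    uint64_t unit_access[UNITS_PER_MEMORY];      /* per-storage-unit access counts */
};

/* Index of the storage unit with the highest access count in one memory. */
static uint32_t hottest_unit(const struct memory_stats *m)
{
    uint32_t best = 0;
    for (uint32_t u = 1; u < UNITS_PER_MEMORY; u++)
        if (m->unit_access[u] > m->unit_access[best])
            best = u;
    return best;
}

/* Index of the memory (other than 'src') with the most remaining bandwidth. */
static int target_memory(const struct memory_stats mems[NUM_MEMORIES], int src)
{
    int best = -1;
    for (int i = 0; i < NUM_MEMORIES; i++) {
        if (i == src)
            continue;
        if (best < 0 || mems[i].remaining_bandwidth > mems[best].remaining_bandwidth)
            best = i;
    }
    return best;
}

/* If the source memory's bandwidth is nearly exhausted (first preset value,
 * assumed to be 80% here), pick the hottest unit and the destination memory.
 * Returns 0 on success and fills *unit_out / *dst_out. */
int plan_bandwidth_migration(const struct memory_stats mems[NUM_MEMORIES], int src,
                             uint32_t *unit_out, int *dst_out)
{
    if (mems[src].bandwidth_used_ratio < 0.8)
        return -1;                               /* nothing to do */
    *unit_out = hottest_unit(&mems[src]);
    *dst_out = target_memory(mems, src);
    return (*dst_out >= 0) ? 0 : -1;
}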
In addition, a memory may also be associated with a counter. The processor may obtain the value of this counter to determine the access frequency of the memory, and thereby determine the bandwidth occupied by the memory.
Optionally, as shown in Fig. 5, in step 210, the storing, by the processor when the running state of the first memory reaches the preset condition, the first sub-table stored in the first storage unit into the second storage unit of the second memory includes:
213. The processor obtains the remaining storage space of the first memory.
For example, the processor may poll the forwarding-plane network processor to obtain the remaining storage space of the memory.
214. When the remaining storage space of the first memory is less than or equal to a second preset value, the processor stores the first sub-table into the second storage unit of the second memory.
In this embodiment of the present invention, during operation, the processor migrates sub-tables according to the actual occupancy of the storage space of each memory, which can satisfy the network processor's demand for memory capacity.
Specifically, in step 214, the storing, by the processor, the first sub-table into the second storage unit of the second memory includes:
the processor determines the bandwidth occupied by the first memory;
when the bandwidth occupied by the first memory is less than a third preset value, the processor obtains the access count of each storage unit in the first memory;
the processor determines the first storage unit according to the access counts of the storage units, where the access frequency of the first storage unit is lower than the access frequency of the other storage units in the first memory; and
the processor stores the first sub-table in the first storage unit into the second storage unit of the second memory.
In this embodiment of the present invention, when the bandwidth occupancy of a memory is below the preset value but the capacity of the memory is nearly used up, migrating one or more of the sub-tables that occupy the least bandwidth on this memory to other memories that have larger free capacity and sufficient remaining bandwidth can satisfy the network processor's capacity requirement on high-bandwidth memory at the same time.
Optionally, in step 214, before the processor stores the first sub-table into the second storage unit of the second memory, the method 200 may further include: the processor determines the second memory from among multiple memories, where the remaining storage space of the second memory is larger than the remaining storage space of the other memories among the multiple memories. In this embodiment of the present invention, migrating the sub-table to the memory with the largest remaining storage space makes the distribution of storage resources more balanced.
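For symmetry with the bandwidth-driven case, a capacity-driven plan might look as follows in C; the second and third preset values appear as the parameters min_free and bw_limit, and all structures are assumptions of the sketch.

#include <stdint.h>

#define UNITS_PER_MEM 1024

struct mem_state {
    double bandwidth_used_ratio;            /* 0.0 .. 1.0 */
    uint64_t free_space_bytes;
    uint64_t unit_access[UNITS_PER_MEM];
};

/* Choose the least-accessed (coldest) storage unit of one memory. */
static uint32_t coldest_unit(const struct mem_state *m)
{
    uint32_t best = 0;
    for (uint32_t u = 1; u < UNITS_PER_MEM; u++)
        if (m->unit_access[u] < m->unit_access[best])
            best = u;
    return best;
}

/* Plan a capacity-driven migration out of memory 'src'. 'min_free' plays the
 * role of the second preset value and 'bw_limit' the third preset value; the
 * destination is the other memory with the most remaining space. */
int plan_capacity_migration(const struct mem_state *mems, int n, int src,
                            uint64_t min_free, double bw_limit,
                            uint32_t *unit_out, int *dst_out)
{
    if (mems[src].free_space_bytes > min_free ||
        mems[src].bandwidth_used_ratio >= bw_limit)
        return -1;
    *unit_out = coldest_unit(&mems[src]);
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (i == src)
            continue;
        if (best < 0 || mems[i].free_space_bytes > mems[best].free_space_bytes)
            best = i;
    }
    *dst_out = best;
    return (best >= 0) ? 0 : -1;
}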
Before step 210, the method 200 may further include: the processor allocates the first storage unit to the first sub-table.
Specifically, the allocating, by the processor, the first storage unit to the first sub-table includes:
the processor obtains a first write request, where the first write request requests a write operation on a first entry of the first table; and
the processor allocates the first storage unit to the first sub-table according to the first write request, where the first sub-table includes the first entry.
It should be understood that after allocating the first storage unit to the first sub-table, the processor can write the first entry into the first storage unit according to the first write request.
In this embodiment of the present invention, memory is no longer statically pre-allocated for a table. Both the table and the memory are divided into multiple blocks, that is, the table is divided into multiple sub-tables and the memory is divided into multiple storage units. Optionally, all storage units of a memory have the same capacity and all sub-tables of a table have the same size, but this embodiment of the present invention is not limited thereto. For example, the storage units of a memory may also have different capacities, and the sub-tables of a table may also have different sizes.
In the initial state, the processor does not allocate storage space for the whole table. Only when an entry is written does it dynamically allocate storage units from the memory on demand, in units of sub-tables, and record the mapping between sub-tables and storage units. This can greatly reduce the network processor's overall demand for memory.
Therefore, in the method for processing a table in this embodiment of the present invention, dynamically allocating memory to a table on demand and incrementally avoids the waste of memory space caused by reserving memory space for the table, and can thereby reduce the network processor's demand for memory.
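The on-demand allocation can be pictured with the following C sketch; the per-table mapping array, the allocator alloc_storage_unit, and the sizes are assumptions, and only the decision "allocate a storage unit for the enclosing sub-table the first time an entry of that sub-table is written" is taken from the description above.

#include <stdint.h>
#include <stdbool.h>

#define SUBTABLES_PER_TABLE 4096
#define UNALLOCATED 0xFFFFFFFFu

/* Hypothetical per-table mapping from sub-table number to storage-unit index. */
struct table_map {
    uint32_t block_size;                           /* entries per sub-table */
    uint32_t unit_of_subtable[SUBTABLES_PER_TABLE];
};

/* Hypothetical allocator returning a free storage-unit index, or UNALLOCATED. */
uint32_t alloc_storage_unit(void);

/* Handle a write request for 'entry_index': allocate a storage unit for the
 * enclosing sub-table only if none has been allocated yet, then report where
 * the entry should be written (unit index + offset inside the unit). */
bool handle_write_request(struct table_map *t, uint32_t entry_index,
                          uint32_t *unit_out, uint32_t *offset_out)
{
    uint32_t sub = entry_index / t->block_size;
    if (t->unit_of_subtable[sub] == UNALLOCATED) {
        uint32_t unit = alloc_storage_unit();      /* on demand, per sub-table */
        if (unit == UNALLOCATED)
            return false;
        t->unit_of_subtable[sub] = unit;           /* record sub-table -> unit mapping */
    }
    *unit_out = t->unit_of_subtable[sub];
    *offset_out = entry_index % t->block_size;
    return true;
}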
The actual size of each table differs in different scenarios. For example, in scenario 1 the Forwarding Information Base (FIB) table is very large and the Multi-Protocol Label Switching (MPLS) table is very small, while in scenario 2 the FIB table is very small and the MPLS table is very large. If static memory pre-allocation were used, then for the same version to support both scenario 1 and scenario 2, sufficiently large spaces would have to be pre-allocated for both the FIB table and the MPLS table, and in fact, in any scenario, a lot of that memory space would be completely idle. The method for processing a table in this embodiment of the present invention avoids this problem.
Optionally, the method 200 may further include:
the processor obtains a second write request, where the second write request requests a write operation on a second entry of a second sub-table; and
if the processor determines that no storage unit has been allocated to the second sub-table, the processor allocates a third storage unit to the second sub-table according to the second write request.
The second sub-table may be any sub-table, other than the first sub-table, among the multiple sub-tables of the first table, or may be any sub-table among multiple sub-tables of a second table. The third storage unit may be located in the first memory, in the second memory, or in a third memory other than the first memory and the second memory.
In this embodiment of the present invention, when an entry is written, storage space can be dynamically allocated to the table on demand.
In this embodiment of the present invention, the processor may determine, according to the occupancy of the memories, from which memory a storage unit is allocated to a sub-table. The processor may preferentially allocate, for the sub-table, the memory that already stores some sub-tables; when this memory has been used up, it allocates another memory to the sub-table that needs storage space. For example, when the processor determines that the first memory is not full, the processor allocates the third storage unit in the first memory to the second sub-table; when the processor determines that the first memory is full and the second memory is not full, the processor allocates the third storage unit in the second memory to the second sub-table; and when the processor determines that both the first memory and the second memory are full, the processor allocates the third storage unit in a third memory to the second sub-table. The rest can be deduced by analogy and is not repeated here.
The processor may also determine, according to the type of a sub-table, from which memory a storage unit is allocated to the sub-table. For example, the second sub-table is any sub-table among the multiple sub-tables of the second table, the second table is a table of a non-line-rate service, and correspondingly the second sub-table is a table of a non-line-rate service. When the first memory is used to store tables of line-rate services and the second memory and the third memory are used to store tables of non-line-rate services, if the second memory already stores some sub-tables, the processor may preferentially allocate the third storage unit in the second memory to the second sub-table; if the processor determines that the second memory is full, the processor allocates the third storage unit in the third memory to the second sub-table. Optionally, the bandwidth of the first memory is higher than the bandwidth of the second memory or the third memory.
It should be noted that the bandwidth of memorizer refers to total band that memorizer itself can provide here Wide.
When initial storage traffic table, the table of linear speed business is preferably stored in the memorizer that bandwidth is higher, When the memorizer that this bandwidth is higher is finished, just the table of linear speed business is stored in the memorizer that bandwidth is relatively low In, the table of non-linear speed business is preferably stored in the memorizer that bandwidth is relatively low.
Such as, in the case of the first sublist correspondence linear speed business, the corresponding non-linear speed business of the second sublist, First sublist is stored to sheet internal memory, the second sublist is stored to off-chip internal memory.
Optionally, when the initial storage locations are selected, if any two tables of any one forwarding service can be accessed simultaneously, the two tables are stored on different memories, as in the sketch below.
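As an illustration of this initial-placement rule, the following C sketch places a table on on-chip or off-chip memory and keeps two simultaneously accessed tables apart. The two-memory simplification, the descriptor fields and the conflict handling are assumptions made for this example, not the patented implementation.

/* Sketch of initial placement; assumes only two memories for simplicity. */
enum mem_id { ON_CHIP = 0, OFF_CHIP = 1 };

struct table_desc {
    int is_line_rate;     /* 1 if the table belongs to a line-rate service */
    int conflicts_with;   /* index of a table accessed in the same cycle, or -1 */
};

/* placement[] records the memory already chosen for earlier tables */
static int place_table(const struct table_desc *t, const int *placement,
                       int on_chip_has_room)
{
    int mem = (t->is_line_rate && on_chip_has_room) ? ON_CHIP : OFF_CHIP;

    /* two tables that may be accessed simultaneously go on different memories */
    if (t->conflicts_with >= 0 && placement[t->conflicts_with] == mem)
        mem = (mem == ON_CHIP) ? OFF_CHIP : ON_CHIP;

    return mem;
}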
The method of processing a table provided by the embodiment of the present invention can be used together with the usual static pre-allocation of memory. For example, small line-rate tables may still use the static pre-allocation scheme; the static scheme may also be applied to the levels of an algorithm tree that occupy little memory, which reduces the number of accesses to the memory mapping table and therefore reduces packet-processing latency.

Therefore, with the method of processing a table of the embodiment of the present invention, when the running state of a memory reaches a preset condition, some of the sub-tables stored in the storage units of that memory are moved to other memories that have sufficient remaining bandwidth and storage space, which reduces the capacity that a high-performance network processor requires of high-bandwidth memory and makes a next-generation high-performance network processor easier to realize.

Optionally, the first table may be stored in a hash manner, with the keyword (key) of each entry of the first table stored in an entry of a hash bucket. As shown in Figure 6, method 200 may further include the following.
240. When the quantity of keys already stored in each entry of the first hash bucket is less than a first threshold, the processor stores the key to be stored into the first hash bucket.

It should be noted that the key may be a compressed key or a complete key; the embodiment of the present invention does not limit this.

250. When the quantity of keys already stored in each entry of the first hash bucket is greater than or equal to the first threshold, the processor stores the key to be stored into whichever of the first hash bucket and the second hash bucket holds fewer keys; the first hash bucket and the second hash bucket use different hash functions.

260. When the entries of the first hash bucket and the second hash bucket are full, and the quantity of keys stored in each entry of the third hash bucket is less than the first threshold, the processor stores the key to be stored into the third hash bucket; the first hash bucket and the third hash bucket use the same hash function.

270. When the entries of the first hash bucket and the second hash bucket are full, and the quantity of keys stored in each entry of the third hash bucket is greater than or equal to the first threshold, the processor stores the key to be stored into whichever of the third hash bucket and the fourth hash bucket holds fewer keys; the first hash bucket and the third hash bucket use the same hash function, and the second hash bucket and the fourth hash bucket use the same hash function. A sketch of this insertion policy follows.
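The following C sketch illustrates the insertion policy of steps 240 to 270 under the assumption that the threshold is checked against the target entry of each bucket and that each entry holds up to M keys. The bucket sizes, hash functions and helper names are illustrative assumptions, not the patented implementation.

/* Sketch of the four-bucket insertion policy of steps 240-270. */
#define M          6                 /* keys per bucket entry (example) */
#define THRESHOLD  (M / 2)           /* the first threshold */
#define N_ENTRIES  1024              /* entries per bucket (example) */

struct bucket_entry { int n_keys; unsigned key[M]; };

static struct bucket_entry bucket[5][N_ENTRIES];   /* buckets 1..4 are used */

static unsigned hash_a(unsigned k) { return (k * 2654435761u) % N_ENTRIES; }
static unsigned hash_b(unsigned k) { return (k ^ (k >> 16)) % N_ENTRIES; }

static int insert_key(unsigned k)
{
    struct bucket_entry *e1 = &bucket[1][hash_a(k)];
    struct bucket_entry *e2 = &bucket[2][hash_b(k)];
    struct bucket_entry *e3 = &bucket[3][hash_a(k)];  /* extension of bucket 1 */
    struct bucket_entry *e4 = &bucket[4][hash_b(k)];  /* extension of bucket 2 */
    struct bucket_entry *t;

    if (e1->n_keys < THRESHOLD)                        /* step 240 */
        t = e1;
    else if (e1->n_keys < M || e2->n_keys < M)         /* step 250 */
        t = (e1->n_keys <= e2->n_keys) ? e1 : e2;
    else if (e3->n_keys < THRESHOLD)                   /* step 260 */
        t = e3;
    else if (e3->n_keys < M || e4->n_keys < M)         /* step 270 */
        t = (e3->n_keys <= e4->n_keys) ? e3 : e4;
    else
        return -1;                                     /* all four entries full */

    t->key[t->n_keys++] = k;
    return 0;
}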
It should be understood that, before step 240, the size of each of the two hash buckets of the prior art is reduced, and at least one extension bucket is added for each hash bucket.

It should also be understood that each entry of a hash bucket holds a keyword and a value, i.e. a key-value pair; while storing the key to be stored into a hash bucket, the processor may also store the corresponding value into the hash bucket. For brevity, the embodiment of the present invention is described using only the key as an example, which does not limit the scope of the embodiment of the present invention.

In the prior art, many traffic tables in a router are stored and matched in a hash manner, for example the Media Access Control (MAC) table and the Address Resolution Protocol (ARP) table. To reduce hash collisions, two hash buckets with different hash functions are generally used, as shown in Figure 7. However, because of the randomness of hashing, keys are scattered randomly across the hash buckets, so even a small number of keys (for example 1%) forces the driver to request memory for all of the hash buckets, which lowers the utilization of hash-bucket memory. In the embodiment of the present invention, keys are stored into the hash buckets according to the strategy of steps 240, 250, 260 and 270, which not only fits the incremental memory allocation used by the embodiment of the present invention to process tables and improves memory utilization, but also improves the utilization of hash-bucket memory and effectively solves this problem.

Therefore, with the method of processing a table of the embodiment of the present invention, when the number of keys is small, the memory occupied can be greatly reduced, and the number of hash-bucket accesses is also reduced.

It should be understood that the first hash bucket and the third hash bucket correspond to one of the two hash buckets of the prior art, and the second hash bucket and the fourth hash bucket correspond to the other.

Optionally, the first hash bucket and the third hash bucket are of the same size, and the second hash bucket and the fourth hash bucket are of the same size, but the present invention is not limited to this.
As an example, the size of each hash bucket shown in Figure 7 is halved, and an extension bucket of the same size is added for each of them; that is, as shown in Figures 8a, 8b, 8c and 8d, hash bucket 1 and hash bucket 3 correspond to hash bucket 1 of Figure 7, and hash bucket 2 and hash bucket 4 correspond to hash bucket 2 of Figure 7. Assume that each entry of a hash bucket can store M keys (for example M = 6) and the first threshold is M/2.

When the quantity of keys in each entry of hash bucket 1 is less than M/2, a new key is stored into hash bucket 1. As shown in Figure 8a, hash bucket 3, hash bucket 2 and hash bucket 4 are all empty and occupy no memory, so at most 1/4 of the memory is occupied and a table lookup needs only one hash-bucket access.

When the quantity of keys in each entry of hash bucket 1 is greater than or equal to M/2, a new key is stored into the entry of whichever of hash bucket 1 and hash bucket 2 holds fewer keys. As shown in Figure 8b, hash bucket 3 and hash bucket 4 are still empty and occupy no memory, so at most 1/2 of the memory is occupied and a table lookup needs at worst two hash-bucket accesses.

When all entries of hash bucket 1 and hash bucket 2 are full, and the quantity of keys in the entry of hash bucket 3 is less than M/2, the new key is stored into hash bucket 3. As shown in Figure 8c, hash bucket 4 is still empty and occupies no memory, so at most 3/4 of the memory is occupied and a table lookup needs at worst three hash-bucket accesses.

When all entries of hash bucket 1 and hash bucket 2 are full, and the quantity of keys in the entry of hash bucket 3 is greater than or equal to M/2, the new key is stored into the entry of whichever of hash bucket 3 and hash bucket 4 holds fewer keys. As shown in Figure 8d, hash bucket 1, hash bucket 3, hash bucket 2 and hash bucket 4 all occupy memory, so the whole memory is occupied and a table lookup needs at worst four hash-bucket accesses.

Therefore, in the embodiment of the present invention, by shrinking each of the two prior-art hash buckets, adding at least one extension bucket for each of them, and storing keys according to the set strategy, the memory occupied can be greatly reduced when the number of keys is small, and the number of hash-bucket accesses is also reduced.
Optionally, the first hash bucket, the second hash bucket, the third hash bucket and the fourth hash bucket may each be associated with a counter, the counters recording the accessed counts of the first, second, third and fourth hash buckets respectively.

In the embodiment of the present invention, the accessed count of each hash bucket may also be collected, and traffic tables that are accessed at high rates may then be moved into the hash buckets located in high-bandwidth memory (for example, on-chip memory).
Method 200 can also include:
when the ratio of the accessed count of the first hash bucket to the accessed count of the third hash bucket is less than or equal to a second threshold, the processor controls the network processor to match the third hash bucket first and then the first hash bucket;

when the ratio of the accessed count of the second hash bucket to the accessed count of the fourth hash bucket is less than or equal to the second threshold, the processor controls the network processor to match the fourth hash bucket first and then the second hash bucket;

when the ratio of the total accessed count of the first hash bucket and the third hash bucket to the total accessed count of the second hash bucket and the fourth hash bucket is less than or equal to the second threshold, the processor controls the network processor to match the second hash bucket and the fourth hash bucket first, and then the first hash bucket and the third hash bucket.

The processor may obtain the accessed count of each hash bucket by polling and compare the counts, and then, according to the ratios of the accessed counts, control the order in which the network processor of the forwarding plane matches the hash buckets.

Specifically, the processor may configure the value (for example 0 or 1) of a register of the forwarding plane according to the ratios of the accessed counts of the hash buckets; the network processor of the forwarding plane reads the value of this register and determines the matching order of the hash buckets accordingly.

The possible values of the register are set in advance; for example, the value of the register may be 0 or 1. A ratio of the accessed counts of two hash buckets that is less than or equal to the second threshold means that the accessed counts of the two hash buckets are close.

It should be noted that, in the prior art, when data are looked up through hash functions, the first of the two hash buckets is matched first, and the second hash bucket is matched only if the first match fails. By analogy, in the embodiment of the present invention the first hash bucket is matched first; if the match fails, the third hash bucket (the extension bucket of the first hash bucket) is matched; if that also fails, the second hash bucket is matched; and if that still fails, the fourth hash bucket (the extension bucket of the second hash bucket) is matched. Therefore, if the accessed count of the first hash bucket is close to that of the third hash bucket, the match success rate of the first hash bucket is low; likewise, if the total accessed count of the first and third hash buckets is close to that of the second and fourth hash buckets, the match success rate of the first and third hash buckets is low. A sketch of the counter comparison follows.
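A minimal C sketch of this control loop is given below. The register names, the encoding of the register values and the value of the second threshold are assumptions made for illustration; the forwarding-plane registers are modelled by a plain array.

#include <stdio.h>

enum { REG_B1_B3_ORDER, REG_B2_B4_ORDER, REG_GROUP_ORDER, REG_COUNT };

static unsigned long hits[5];        /* hits[1..4]: accessed count per hash bucket */
static int fwd_reg[REG_COUNT];       /* stand-in for forwarding-plane registers */

#define SECOND_THRESHOLD 2UL         /* example value for the second threshold */

static void set_fwd_register(int reg, int val) { fwd_reg[reg] = val; }

static void update_match_order(void)
{
    /* bucket 1 vs its extension bucket 3: similar counts => try bucket 3 first */
    set_fwd_register(REG_B1_B3_ORDER,
                     hits[1] <= SECOND_THRESHOLD * hits[3]);
    /* bucket 2 vs its extension bucket 4 */
    set_fwd_register(REG_B2_B4_ORDER,
                     hits[2] <= SECOND_THRESHOLD * hits[4]);
    /* bucket-1 group vs bucket-2 group: similar counts => try buckets 2/4 first */
    set_fwd_register(REG_GROUP_ORDER,
                     hits[1] + hits[3] <= SECOND_THRESHOLD * (hits[2] + hits[4]));
}

The control-plane processor would call update_match_order() on each polling cycle; the forwarding-plane network processor only reads the registers and chooses its bucket-matching order from them.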
In the embodiment of the present invention, by controlling the order in which the network processor matches the hash buckets according to their accessed counts, the number of hash-bucket accesses can be reduced, and the network processor's demand for memory bandwidth is reduced accordingly.
Figure 9 is a schematic flowchart of a method 900 of accessing a table according to an embodiment of the present invention. As shown in Figure 9, method 900 includes the following.

910. The network processor obtains the base address of the memory block mapping information table corresponding to the first table; the memory block mapping information table records the correspondence between each sub-table of the first table and a storage unit.

The memory block mapping base-address table records the correspondence between tables and memory block mapping information tables.

920. The network processor accesses the memory block mapping information table according to the base address and the index of a first entry of the first table, and determines from the memory block mapping information table the index of the storage unit corresponding to the first sub-table in which the first entry is located; the first sub-table is any sub-table of the first table.

930. The network processor determines the physical address of the first entry according to the index of the storage unit and the index of the first entry.

940. The network processor accesses the first entry according to the physical address of the first entry.

In the embodiment of the present invention, the network processor determines the physical address of an entry from the memory block mapping information table according to the index of the entry, and can therefore access the entry by its physical address.
Optionally, in step 910, the network processor obtaining the base address of the memory block mapping information table corresponding to the first table includes: the network processor accesses the memory block mapping base-address table according to the table identifier (TID) of the first table, and determines the base address of the memory block mapping information table from the memory block mapping base-address table.

It should be understood that the network processor may also obtain the base address of the memory block mapping information table in other ways; the embodiment of the present invention does not limit this.
Alternatively, before step 910, method 900 also includes:
The network processor receives, from the processor, the correspondence between each sub-table of the first table and a storage unit.

After receiving this correspondence, the network processor may write it into the memory mapping unit.

Optionally, method 900 further includes: the network processor updates the accessed count of the storage unit.

Of course, the network processor may also update the accessed count of the memory.
Optionally, in step 920, the network processor accessing the memory block mapping information table according to the base address and the index of the first entry includes: the network processor accesses the memory block mapping information table at the address determined by formula (1):

base + ⌊entry index / block size⌋   (1)

where base is the base address of the memory block mapping information table, entry index is the index of the first entry, and block size is the quantity of entries contained in the first sub-table. The symbol ⌊ ⌋ denotes rounding down, and ⌊entry index / block size⌋ is the index of the first sub-table.
Optionally, step 940 includes: the network processor determines the physical address of the first entry according to formula (2):

real block index * block size + entry index % block size   (2)

where real block index is the index of the storage unit, block size is the quantity of entries contained in the first sub-table (which equals the quantity of entries stored by a storage unit), and entry index is the index of the first entry. The symbol "%" denotes the remainder operation, "*" denotes multiplication, and (entry index % block size) is the offset of the first entry within the first sub-table.
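Formulas (1) and (2) amount to two integer operations per lookup. The following C sketch combines them, modelling the memory block mapping information table as a plain array indexed by the address from formula (1); the function and field names are assumptions made for illustration.

#include <stdint.h>

/* formula (1): address of the mapping record for the entry's sub-table */
static uint32_t map_record_addr(uint32_t base, uint32_t entry_index,
                                uint32_t block_size)
{
    return base + entry_index / block_size;   /* integer division = floor */
}

/* formula (2): physical address of the entry inside its storage unit */
static uint32_t entry_phys_addr(uint32_t real_block_index, uint32_t entry_index,
                                uint32_t block_size)
{
    return real_block_index * block_size + entry_index % block_size;
}

/* one full lookup: mem_block_map[] stands in for the memory block mapping
 * information table and holds the storage-unit index of each sub-table */
static uint32_t resolve_entry(const uint32_t *mem_block_map, uint32_t base,
                              uint32_t entry_index, uint32_t block_size)
{
    uint32_t real_block_index =
        mem_block_map[map_record_addr(base, entry_index, block_size)];
    return entry_phys_addr(real_block_index, entry_index, block_size);
}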
The method of accessing a table according to the embodiment of the present invention is described below with reference to Figure 10 and Figure 11.

Figure 10 is a schematic diagram of a memory system architecture according to an embodiment of the present invention. When the network processor needs to access a table, it first sends the identifier of the table to be accessed and the index of the entry, i.e. (tid, entry_index), to the memory mapping unit; the memory mapping unit converts the entry index into the real memory address and then accesses the memory. At the same time, the memory mapping unit issues the access request to the statistics unit, which counts the accessed counts of the corresponding memory and storage unit. To support high-performance processing, the memory mapping unit and the statistics unit may be replicated, for example into 8 copies as shown in Figure 10.

The memory mapping unit can convert the index of any table into a physical memory address. As shown in Figure 11, the memory block mapping base-address table is first accessed according to the TID to obtain the base address of the memory block mapping information table; the memory block mapping information table is then accessed at the address given by formula (1), and the index of the storage unit corresponding to the sub-table is read from it; finally, the physical address of the entry to be accessed is calculated according to formula (2).

Therefore, with the method of the embodiment of the present invention, the network processor determines the physical storage address of an entry in the memory block mapping information table according to the index of the entry, and can access the entry by its physical address.
It should be understood that the magnitudes of the sequence numbers of the above processes do not imply an order of execution; the order in which the processes are executed should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiment of the present invention.

The method of processing a table and the method of accessing a table according to the embodiments of the present invention have been described in detail above. The apparatus for processing a table and the apparatus for accessing a table according to the embodiments of the present invention are described in detail below.
Figure 12 is the schematic block diagram of the device 1200 of process table according to embodiments of the present invention.Such as Figure 12 Shown in, device 1200 includes processing unit 1210 and transmitting element 1220.Should be understood that the present invention implements In example, the first table includes multiple sublist, and first memory includes multiple memory element, the plurality of sublist Include that the first sublist, the plurality of memory element include the first memory element.
Processing unit 1210, is used for: the running status of first memory reach pre-conditioned in the case of, The first sublist being stored in the first memory element is stored the second memory element of second memory In, the bandwidth that the remaining bandwidth of second memory takies when being accessed higher than the first sublist, and the second storage The memory space that the residual memory space of device takies more than the first sublist;The is deleted in the first memory element One sublist.
The transmitting element 1220 is configured to send the correspondence between the first sub-table and the second storage unit to the network processor, so that the network processor updates the saved correspondence between the first sub-table and the first storage unit to the correspondence between the first sub-table and the second storage unit.
Therefore, the device processing table of the embodiment of the present invention, by reaching pre-in the running status of memorizer If in the case of condition, the sublist that part is stored in the memory element of this memorizer is moved enough In other memorizeies of remaining bandwidth and memory space, it is possible to reduce high performance network processor to high bandwidth The capacity requirement of internal memory, makes high performance network processor of future generation easily realize.
Alternatively, processing unit 1210 specifically for:
Determine the bandwidth that first memory takies;
The bandwidth taken at first memory is more than or equal in the case of the first preset value, by the first sublist Store in second memory the second memory element.
Correspondingly, processing unit 1210 specifically for:
Obtain the accessed number of times of each memory element in first memory;
According to the accessed number of times of each memory element, from first memory, determine the first memory element, The accessed frequency of the first memory element is deposited higher than other outside the first memory element in first memory The accessed frequency of storage unit;
Determining second memory from multiple memorizeies, the remaining bandwidth of second memory is higher than multiple storages The remaining bandwidth of other memorizeies outside second memory in device;
First sublist is stored in the second memory element of second memory.
In the embodiment of the present invention, in running, processor accounts for according to the actual of bandwidth of each memorizer Moving of sublist is carried out, it is possible to meet network processes by the statistics of situation and the accessed number of times of memory element The device demand to memory bandwidth.
It addition, by the memorizer that sublist the highest for accessed frequency is moved enough remaining bandwidths On, it is possible to make that there is the sublist that the accessed frequency of storage on the memorizer of high bandwidth is high, so can be big The big reduction network processing unit demand to high-performance memory capacity.
Alternatively, processing unit 1210 specifically for:
Obtain the residual memory space of first memory;
In the case of the residual memory space of first memory is less than or equal to the second preset value, by first Sublist stores in the second memory element of second memory.
In the embodiment of the present invention, in running, processor is according to the reality of the memory space of each memorizer Border takies situation and carries out moving of sublist, it is possible to meet the demand of network processing unit memory size.
Correspondingly, processing unit 1210 specifically for:
Determine the bandwidth that first memory takies;
The bandwidth taken at first memory, less than in the case of the 3rd preset value, obtains in first memory The accessed number of times of each memory element;
According to the accessed number of times of each memory element, determine the first memory element, the first memory element Accessed frequency being accessed less than other memory element outside the first memory element in first memory Frequency;
Determining second memory from multiple memorizeies, the residual memory space of second memory is higher than multiple The residual memory space of other memorizeies outside second memory in memorizer;
First sublist is stored in the second memory element of second memory.
In the embodiment of the present invention, when the bandwidth occupancy of a memory is below the third preset value but its capacity is nearly exhausted, one or more sub-tables that occupy the least bandwidth on this memory are moved to another memory that has both larger free capacity and sufficient bandwidth, which satisfies the network processor's capacity requirement for high-bandwidth memory.

In addition, moving the sub-table into the memory with the largest remaining storage space makes the distribution of storage resources more balanced. The two migration triggers are sketched below.
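The following C sketch summarizes the two migration triggers (bandwidth pressure and capacity pressure). The memory descriptor, the threshold parameters and migrate_subtable() are simplified assumptions made for illustration, not the patented structures.

/* Sketch of the bandwidth-pressure and capacity-pressure migration triggers. */
typedef struct {
    unsigned long total_bw, used_bw;   /* bandwidth provided / currently used */
    unsigned long free_space;          /* remaining storage space */
    unsigned long *unit_hits;          /* accessed count per storage unit */
    int           n_units;
} memory_t;

static void migrate_subtable(memory_t *from, int unit, memory_t *to)
{
    (void)from; (void)unit; (void)to;  /* would copy the sub-table, delete the
                                          source copy and update the mapping */
}

/* index of the most accessed (hottest = 1) or least accessed storage unit */
static int pick_unit(const memory_t *m, int hottest)
{
    int best = 0;
    for (int i = 1; i < m->n_units; i++)
        if (hottest ? m->unit_hits[i] > m->unit_hits[best]
                    : m->unit_hits[i] < m->unit_hits[best])
            best = i;
    return best;
}

/* target memory: most remaining bandwidth, or most remaining space */
static memory_t *pick_target(memory_t *mems, int n, const memory_t *src, int by_bw)
{
    memory_t *best = 0;
    for (int i = 0; i < n; i++) {
        if (&mems[i] == src) continue;
        if (!best ||
            (by_bw ? mems[i].total_bw - mems[i].used_bw >
                     best->total_bw - best->used_bw
                   : mems[i].free_space > best->free_space))
            best = &mems[i];
    }
    return best;
}

static void maybe_migrate(memory_t *mems, int n, memory_t *m,
                          unsigned long bw_high, unsigned long space_low,
                          unsigned long bw_low)
{
    if (m->used_bw >= bw_high)                            /* bandwidth pressure */
        migrate_subtable(m, pick_unit(m, 1), pick_target(mems, n, m, 1));
    else if (m->free_space <= space_low && m->used_bw < bw_low)  /* capacity */
        migrate_subtable(m, pick_unit(m, 0), pick_target(mems, n, m, 0));
}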
Alternatively, processing unit 1210 is additionally operable to, and the running status at first memory reaches to preset bar In the case of part, the first sublist being stored in the first memory element is stored in second memory it Before, it is that the first sublist distributes the first memory element.
Alternatively, processing unit 1210 specifically for: obtaining the first write request, the first write request is used for Ask the first list item of the first table is carried out write operation;It is the first sublist distribution first according to the first write request Memory element, the first sublist includes the first list item.
Alternatively, processing unit 1210 is additionally operable to record the first sublist pass corresponding with the first memory element System.
Alternatively, processing unit 1210 is additionally operable to:
Obtaining the second write request, the second list item of the second sublist is write behaviour for request by the second write request Make, multiple sublists also include the second sublist;
If it is determined that the second unallocated memory element of sublist, then it is the second sublist distribution according to the second write request 3rd memory element.
Alternatively, described first table uses the mode of Hash to store, the pass of the list item of described first table Key word key is stored in the list item of Hash bucket.
Correspondingly, processing unit 1210 is additionally operable to:
The quantity of the keyword key stored in each list item of the first Hash bucket is less than first threshold In the case of, key to be stored is stored to the first Hash bucket;
The quantity of the key stored in each list item of the first Hash bucket is more than or equal to first threshold In the case of, key to be stored is stored the key's that stored to the first Hash bucket and the second Hash bucket In the Hash bucket of negligible amounts, the first Hash bucket and the second Hash bucket use different hash functions;
The fullest at the list item of the first Hash bucket and the second Hash bucket, and in each list item of the 3rd Hash bucket Key to be stored, less than in the case of first threshold, is stored to the 3rd Hash by the quantity of the key of storage In Tong, the first Hash bucket and the 3rd Hash bucket use identical hash function;
The fullest at the list item of the first Hash bucket and the second Hash bucket, and in each list item of the 3rd Hash bucket Key to be stored more than or equal in the case of first threshold, is stored to the by the quantity of the key of storage In the Hash bucket of the negligible amounts of the key stored in three Hash buckets and the 4th Hash bucket, the first Hash bucket Identical hash function, the second Hash bucket and the 4th Hash bucket is used to use identical Kazakhstan with the 3rd Hash bucket Uncommon function.
In the embodiment of the present invention when the negligible amounts of key, it is possible to reduce the internal memory taken, and reduce Kazakhstan The number of times that uncommon bucket accesses.
Alternatively, as shown in figure 13, device 1200 also includes control unit 1230.
Control unit 1230 is used for:
when the ratio of the accessed count of the first hash bucket to the accessed count of the third hash bucket is less than or equal to the second threshold, control the network processor to match the third hash bucket first and then the first hash bucket;

when the ratio of the accessed count of the second hash bucket to the accessed count of the fourth hash bucket is less than or equal to the second threshold, control the network processor to match the fourth hash bucket first and then the second hash bucket;

when the ratio of the total accessed count of the first hash bucket and the third hash bucket to the total accessed count of the second hash bucket and the fourth hash bucket is less than or equal to the second threshold, control the network processor to match the second hash bucket and the fourth hash bucket first, and then the first hash bucket and the third hash bucket.
In embodiments of the present invention, control network processing unit by the accessed number of times according to Hash bucket to mate The order of Hash bucket, it is possible to reduce the access times of Hash bucket, and then it is internal to reduce network processing unit Deposit the demand of bandwidth.
Optionally, the first hash bucket and the third hash bucket are of the same size, and the second hash bucket and the fourth hash bucket are of the same size.
Should be understood that the device 1200 of process table according to embodiments of the present invention may correspond to according to the present invention Processor in the method 200 processing table of embodiment, and the unit of device 1200 is above-mentioned Operate with other and/or function is respectively for the corresponding flow process of implementation method 200, for sake of simplicity, at this not Repeat again.
Therefore, the device processing table of the embodiment of the present invention, by reaching pre-in the running status of memorizer If in the case of condition, the sublist that part is stored in the memory element of this memorizer is moved enough In other memorizeies of remaining bandwidth and memory space, it is possible to reduce high performance network processor to high bandwidth The capacity requirement of internal memory, makes high performance network processor of future generation easily realize.
Figure 14 is the schematic block diagram of the device 1400 of access table according to embodiments of the present invention.Such as Figure 14 Shown in, device 1400 includes processing unit 1410 and accesses unit 1420.
Processing unit 1410, is used for: obtain the plot of memory block map information table corresponding to the first table, interior Counterfoil map information table includes the corresponding relation of each sublist in the first table and memory element;According to plot With the index accesses memory block map information table of the first list item of the first table, and according to memory block map information Table determines the index of memory element corresponding to first sublist at the first list item place, and the first sublist is the first table In any sublist;Index according to memory element and the index of the first list item, determine the thing of the first list item Reason address.
Access unit 1420, for accessing the first list item according to the physical address of the first list item.
In embodiments of the present invention, network processing unit passes through the index according to list item at memory block map information Table determines the physical address of list item, it is possible to access list item according to the physical address of list item.
Optionally, the processing unit 1410 is specifically configured to access the memory block mapping base-address table according to the table identifier (TID) of the first table, and to determine the base address of the memory block mapping information table from the memory block mapping base-address table.
Alternatively, as shown in figure 15, device 1400 can also include: receives unit 1430, is used for Before processing unit 1410 obtains the plot of memory block map information table corresponding to the first table, reception processes Each sublist in the first table that device sends and the corresponding relation of memory element.
Alternatively, processing unit 1410 is additionally operable to update the accessed number of times of memory element.
Optionally, the processing unit 1410 is specifically configured to access the memory block mapping information table at the address determined by formula (1):

base + ⌊entry index / block size⌋   (1)

where base is the base address, entry index is the index of the first entry, and block size is the quantity of entries contained in the first sub-table.
Optionally, the processing unit 1410 is specifically configured to determine the physical address of the first entry according to formula (2):

real block index * block size + entry index % block size   (2)

where real block index is the index of the storage unit, block size is the quantity of entries contained in the first sub-table (which equals the quantity of entries stored by a storage unit), and entry index is the index of the first entry.
Should be understood that the device 1400 of access table according to embodiments of the present invention may correspond to according to the present invention real Network processing unit in the method 900 of the access table executing example, and the unit of device 1400 is upper State with other operation and/or function respectively for the corresponding flow process of implementation method 900, for sake of simplicity, at this Repeat no more.
In embodiments of the present invention, network processing unit passes through the index according to list item at memory block map information Table determines the physical address of list item, it is possible to access list item according to the physical address of list item.
The embodiment of the present invention additionally provides a kind of device 1600 processing table.As shown in figure 16, this device 1600 include processor 1610, memorizer 1620, bus system 1630 and transmitter 1640.Wherein, Processor 1610, memorizer 1620 are connected by bus system 1630 with transmitter 1640, this storage Device 1620 is used for storing instruction, and this processor 1610 is for performing the instruction of this memorizer 1620 storage.
Memorizer 1620 is additionally operable to store sublist.Memorizer 1620 includes first memory and the second storage Device.First memory and second memory include that multiple memory element, a memory element are used for depositing respectively Store up a sublist.
Processor 1610 is used for: for reaching pre-conditioned situation in the running status of first memory Under, the first sublist in the first memory element being stored in first memory is stored second memory The second memory element in, the band that the takies when remaining bandwidth of second memory is accessed higher than the first sublist Width, and the memory space that the residual memory space of second memory takies more than the first sublist;Deposit first Storage unit is deleted the first sublist.
Transmitter 1640 is for being sent to the corresponding relation of the first sublist Yu the second memory element at network Reason device, so that the corresponding relation of the first sublist preserved with the first memory element is updated to by network processing unit First sublist and the corresponding relation of the second memory element.
Therefore, the device processing table of the embodiment of the present invention, by reaching pre-in the running status of memorizer If in the case of condition, the sublist that part is stored in the memory element of this memorizer is moved enough In other memorizeies of remaining bandwidth and memory space, it is possible to reduce high performance network processor to high bandwidth The capacity requirement of internal memory, makes high performance network processor of future generation easily realize.
It should be understood that, in the embodiment of the present invention, the processor 1610 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
This memorizer 1620 can include read only memory and random access memory, and to processor 1610 provide instruction and data.A part for memorizer 1620 can also include that non-volatile random accesses Memorizer.Such as, memorizer 1620 can be with the information of storage device type.
This bus system 1630 is in addition to including data/address bus, it is also possible to includes power bus, always control Line and status signal bus in addition etc..But for the sake of understanding explanation, in the drawings various buses are all designated as always Wire system 1630.
During realizing, each step of said method can be by the collection of the hardware in processor 1610 The instruction becoming logic circuit or software form completes.Step in conjunction with the method disclosed in the embodiment of the present invention Suddenly can be embodied directly in hardware processor to have performed, or by the hardware in processor and software module Combination execution completes.Software module may be located at random access memory, flash memory, read only memory, able to programme The storage medium that this areas such as read only memory or electrically erasable programmable memorizer, depositor are ripe In.This storage medium is positioned at memorizer 1620, and processor 1610 reads the information in memorizer 1620, The step of said method is completed in conjunction with its hardware.For avoiding repeating, it is not detailed herein.
Alternatively, processor 1610 specifically for:
Determine the bandwidth that first memory takies;
The bandwidth taken at first memory is more than or equal in the case of the first preset value, by the first sublist Store in second memory.
Correspondingly, processor 1610 specifically for:
Obtain the accessed number of times of each memory element in first memory;
According to the accessed number of times of each memory element, from first memory, determine the first memory element, The accessed frequency of the first memory element is deposited higher than other outside the first memory element in first memory The accessed frequency of storage unit;
Determining second memory from multiple memorizeies, the remaining bandwidth of second memory is higher than multiple storages The remaining bandwidth of other memorizeies outside second memory in device;
First sublist is stored in the second memory element of second memory.
In the embodiment of the present invention, in running, processor accounts for according to the actual of bandwidth of each memorizer Moving of sublist is carried out, it is possible to meet network processes by the statistics of situation and the accessed number of times of memory element The device demand to memory bandwidth.
It addition, by the memorizer that sublist the highest for accessed frequency is moved enough remaining bandwidths On, it is possible to make that there is the sublist that the accessed frequency of storage on the memorizer of high bandwidth is high, so can be big The big reduction network processing unit demand to high-performance memory capacity.
Alternatively, processor 1610 can also be specifically for:
Obtain the residual memory space of first memory;
In the case of the residual memory space of first memory is less than or equal to the second preset value, by first Sublist stores in second memory.
In the embodiment of the present invention, in running, processor is according to the reality of the memory space of each memorizer Border takies situation and carries out moving of sublist, it is possible to meet the demand of network processing unit memory size.
Correspondingly, processor 1610 specifically for:
Determine the bandwidth that first memory takies;
The bandwidth taken at first memory, less than in the case of the 3rd preset value, obtains in first memory The accessed number of times of each memory element;
According to the accessed number of times of each memory element, determine the first memory element, the first memory element Accessed frequency being accessed less than other memory element outside the first memory element in first memory Frequency;
Determining second memory from multiple memorizeies, the residual memory space of second memory is higher than multiple The residual memory space of other memorizeies outside second memory in memorizer;
First sublist is stored in the second memory element of second memory.
In embodiments of the present invention, when the bandwidth occupancy of memorizer is less than the 3rd preset value, but this memorizer Capacity already close to when taking, by one or more sublists that occupied bandwidth on this memorizer is minimum Move other have bigger vacant capacity with Time Bandwidth meet memorizer on, it is possible to meet network processing unit Capacity requirement to high bandwidth internal memory.
It addition, sublist is moved in the memorizer that residual memory space is maximum, it is possible to make to store resource Distribution more equalize.
Alternatively, processor 1610 is additionally operable to, and the running status at first memory reaches pre-conditioned In the case of, the first sublist in the first memory element being stored in first memory is stored to second Before in second memory element of memorizer, it is that the first sublist distributes the first memory element.
Correspondingly, processor 1610 specifically for: obtain the first write request, the first write request for please The first list item to the first table is asked to carry out write operation;It is that the first sublist distribution first is deposited according to the first write request Storage unit, the first sublist includes the first list item.
Alternatively, memorizer 1620 is additionally operable to the corresponding relation storing the first sublist with the first memory element.
Alternatively, processor 1610 is additionally operable to:
Obtaining the second write request, the second list item of the second sublist is write behaviour for request by the second write request Make, multiple sublists also include the second sublist;
Determine the second unallocated memory element of sublist, be then the second sublist distribution the 3rd according to the second write request Memory element.
Alternatively, the first table uses the mode of Hash to store, the keyword key of the list item of the first table It is stored in the list item of Hash bucket.
Correspondingly, the processor 1610 is further configured to:
The quantity of the keyword key stored in each list item of the first Hash bucket is less than first threshold In the case of, key to be stored is stored to the first Hash bucket;
The quantity of the key stored in each list item of the first Hash bucket is more than or equal to first threshold In the case of, key to be stored is stored the key's that stored to the first Hash bucket and the second Hash bucket In the Hash bucket of negligible amounts, the first Hash bucket and the second Hash bucket use different hash functions;
The fullest at the list item of the first Hash bucket and the second Hash bucket, and in each list item of the 3rd Hash bucket Key to be stored, less than in the case of first threshold, is stored to the 3rd Hash by the quantity of the key of storage In Tong, the first Hash bucket and the 3rd Hash bucket use identical hash function;
The fullest at the list item of the first Hash bucket and the second Hash bucket, and in each list item of the 3rd Hash bucket Key to be stored, more than or equal in the case of first threshold, is stored to the 3rd Hash by the quantity of key In the Hash bucket of the negligible amounts of the key stored in bucket and the 4th Hash bucket, the first Hash bucket and the 3rd Hash bucket uses identical hash function, the second Hash bucket and the 4th Hash bucket to use identical Hash letter Number.
In the embodiment of the present invention, when the negligible amounts of key, it is possible to greatly reduce the internal memory taken, and Reduce the number of times that Hash bucket accesses.
Alternatively, processor 1610 is additionally operable to:
when the ratio of the accessed count of the first hash bucket to the accessed count of the third hash bucket is less than or equal to the second threshold, control the network processor to match the third hash bucket first and then the first hash bucket;

when the ratio of the accessed count of the second hash bucket to the accessed count of the fourth hash bucket is less than or equal to the second threshold, control the network processor to match the fourth hash bucket first and then the second hash bucket;

when the ratio of the total accessed count of the first hash bucket and the third hash bucket to the total accessed count of the second hash bucket and the fourth hash bucket is less than or equal to the second threshold, control the network processor to match the second hash bucket and the fourth hash bucket first, and then the first hash bucket and the third hash bucket.
In embodiments of the present invention, control network processing unit by the accessed number of times according to Hash bucket to mate The order of Hash bucket, it is possible to reduce the access times of Hash bucket, and then it is internal to reduce network processing unit Deposit the demand of bandwidth.
Alternatively, the size of the first Hash bucket and the 3rd Hash bucket is identical, the second Hash bucket and the 4th Hash The size of bucket is identical.
Should be understood that the device 1600 of process table according to embodiments of the present invention may correspond to according to the present invention Processor in the method 200 processing table of embodiment and the dress processing table according to embodiments of the present invention Put 1200, and above and other operation of the unit of device 1600 and/or function are respectively for reality The corresponding flow process of existing method 200, for sake of simplicity, do not repeat them here.
Therefore, the device processing table of the embodiment of the present invention, by reaching pre-in the running status of memorizer If in the case of condition, the sublist that part is stored in the memory element of this memorizer is moved enough In other memorizeies of remaining bandwidth and memory space, it is possible to reduce high performance network processor to high bandwidth The capacity requirement of internal memory, makes high performance network processor of future generation easily realize.
Figure 17 is the schematic block diagram of the device 1700 of access table according to embodiments of the present invention.Such as Figure 17 Shown in, device 1700 includes processor 1710, memorizer 1720 and bus system 1730.Wherein, Processor 1710 is connected by bus system 1730 with memorizer 1720, and this memorizer 1720 is used for depositing Storage instruction, this processor 1710 is for performing the instruction of this memorizer 1720 storage.
Memorizer 1720 can be also used for storing sublist.Memorizer 1720 includes multiple memory element, its In memory element for one sublist of storage.
Processor 1710 specifically for:
Obtaining the plot of memory block map information table corresponding to the first table, memory block map information table includes Each sublist in one table and the corresponding relation of memory element;
Index accesses memory block map information table according to plot and the first list item of the first table, and according to interior Counterfoil map information table determines the index of memory element corresponding to first sublist at the first list item place, first Sublist is any sublist in the first table;
Index according to memory element and the index of the first list item, determine the physical address of the first list item;
Physical address according to the first list item accesses the first list item.
In embodiments of the present invention, network processing unit passes through the index according to list item at memory block map information Table determines the physical address of list item, it is possible to access list item according to the physical address of list item.
It should be understood that, in the embodiment of the present invention, the processor 1710 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
This memorizer 1720 can include read only memory and random access memory, and to processor 1710 provide instruction and data.A part for memorizer 1720 can also include that non-volatile random accesses Memorizer.Such as, memorizer 1720 can be with the information of storage device type.
This bus system 1730 is in addition to including data/address bus, it is also possible to includes power bus, always control Line and status signal bus in addition etc..But for the sake of understanding explanation, in the drawings various buses are all designated as always Wire system 1730.
During realizing, each step of said method can be by the collection of the hardware in processor 1710 The instruction becoming logic circuit or software form completes.Step in conjunction with the method disclosed in the embodiment of the present invention Suddenly can be embodied directly in hardware processor to have performed, or by the hardware in processor and software module Combination execution completes.Software module may be located at random access memory, flash memory, read only memory, able to programme The storage medium that this areas such as read only memory or electrically erasable programmable memorizer, depositor are ripe In.This storage medium is positioned at memorizer 1720, and processor 1710 reads the information in memorizer 1720, The step of said method is completed in conjunction with its hardware.For avoiding repeating, it is not detailed herein.
Alternatively, processor 1710 is additionally operable to update the accessed number of times of memory element.
Alternatively, device 1700 can also include: receptor 1740, by bus system 1730 and place Reason device 1710, memorizer 1720 are connected.Receptor 1740 is for obtaining memory block at processor 1710 Map plot table plot before, receive chain of command processor send the first table each sublist with deposit The corresponding relation of storage unit.
Optionally, the processor 1710 is specifically configured to access the memory block mapping information table at the address determined by formula (1):

base + ⌊entry index / block size⌋   (1)

where base is the base address, entry index is the index of the first entry, and block size is the quantity of entries contained in the first sub-table.
Optionally, the processor 1710 is specifically configured to determine the physical address of the first entry according to formula (2):

real block index * block size + entry index % block size   (2)

where real block index is the index of the storage unit, block size is the quantity of entries contained in the first sub-table (which equals the quantity of entries stored by a storage unit), and entry index is the index of the first entry.
Should be understood that the device 1700 of access table according to embodiments of the present invention may correspond to according to the present invention Network processing unit in the method 900 accessing table of embodiment and access table according to embodiments of the present invention Above and other operation of the unit in device 1400, and device 1700 and/or function are respectively The corresponding flow process of implementation method 900, for sake of simplicity, do not repeat them here.
In embodiments of the present invention, network processing unit passes through the index according to list item at memory block map information Table determines the physical address of list item, it is possible to access list item according to the physical address of list item.
Those of ordinary skill in the art are it is to be appreciated that combine each of the embodiments described herein description The unit of example and algorithm steps, it is possible to electronic hardware or computer software and the knot of electronic hardware Incompatible realization.These functions perform with hardware or software mode actually, depend on the spy of technical scheme Fixed application and design constraint.Professional and technical personnel can use not Tongfang to each specifically should being used for Method realizes described function, but this realization is it is not considered that beyond the scope of this invention.
Those skilled in the art is it can be understood that arrive, and for convenience and simplicity of description, above-mentioned retouches The specific works process of system, device and the unit stated, is referred to the correspondence in preceding method embodiment Process, does not repeats them here.
In several embodiments provided herein, it should be understood that disclosed system, device and Method, can realize by another way.Such as, device embodiment described above is only shown Meaning property, such as, the division of described unit, be only a kind of logic function and divide, actual can when realizing There to be other dividing mode, the most multiple unit or assembly can in conjunction with or be desirably integrated into another System, or some features can ignore, or do not perform.Another point, shown or discussed each other Coupling direct-coupling or communication connection can be the INDIRECT COUPLING by some interfaces, device or unit Or communication connection, can be electrical, machinery or other form.
The described unit illustrated as separating component can be or may not be physically separate, makees The parts shown for unit can be or may not be physical location, i.e. may be located at a place, Or can also be distributed on multiple NE.Can select according to the actual needs part therein or The whole unit of person realizes the purpose of the present embodiment scheme.
It addition, each functional unit in each embodiment of the present invention can be integrated in a processing unit In, it is also possible to it is that unit is individually physically present, it is also possible to two or more unit are integrated in one In individual unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, or the part of it that contributes to the prior art, or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above, the only detailed description of the invention of the present invention, but protection scope of the present invention is not limited to In this, any those familiar with the art, can be easily in the technical scope that the invention discloses Expect change or replace, all should contain within protection scope of the present invention.Therefore, the protection of the present invention Scope should be as the criterion with described scope of the claims.

Claims (26)

1. A method for processing a table, wherein a first table comprises a plurality of sub-tables, a first memory comprises a plurality of storage units, the plurality of sub-tables comprise a first sub-table, and the plurality of storage units comprise a first storage unit, the method comprising:
storing, by a processor, the first sub-table stored in the first storage unit into a second storage unit of a second memory in a case that a running status of the first memory reaches a preset condition, wherein a remaining bandwidth of the second memory is higher than a bandwidth occupied when the first sub-table is accessed, and a remaining storage space of the second memory is greater than a storage space occupied by the first sub-table;
deleting, by the processor, the first sub-table from the first storage unit; and
sending, by the processor, a correspondence between the first sub-table and the second storage unit to a network processor, so that the network processor updates a saved correspondence between the first sub-table and the first storage unit to the correspondence between the first sub-table and the second storage unit.
2. The method according to claim 1, wherein the storing, by the processor, the first sub-table into the second storage unit of the second memory in the case that the running status of the first memory reaches the preset condition comprises:
determining, by the processor, a bandwidth occupied by the first memory, and storing, by the processor, the first sub-table into the second storage unit in a case that the bandwidth occupied by the first memory is greater than or equal to a first preset value;
or,
obtaining, by the processor, a remaining storage space of the first memory, and storing, by the processor, the first sub-table into the second storage unit in a case that the remaining storage space of the first memory is less than or equal to a second preset value.
3. The method according to claim 2, wherein the storing, by the processor, the first sub-table into the second storage unit of the second memory in the case that the bandwidth occupied by the first memory is greater than or equal to the first preset value comprises:
obtaining, by the processor, an accessed count of each storage unit in the first memory;
determining, by the processor, the first storage unit from the first memory according to the accessed count of each storage unit, wherein an accessed frequency of the first storage unit is higher than accessed frequencies of the storage units in the first memory other than the first storage unit;
determining, by the processor, the second memory from a plurality of memories, wherein the remaining bandwidth of the second memory is higher than remaining bandwidths of the memories in the plurality of memories other than the second memory; and
storing, by the processor, the first sub-table into the second storage unit of the second memory.
4. The method according to claim 2, wherein the storing, by the processor, the first sub-table into the second storage unit of the second memory in the case that the remaining storage space of the first memory is less than or equal to the second preset value comprises:
determining, by the processor, the bandwidth occupied by the first memory;
obtaining, by the processor, an accessed count of each storage unit in the first memory in a case that the bandwidth occupied by the first memory is less than a third preset value;
determining, by the processor, the first storage unit according to the accessed count of each storage unit, wherein an accessed frequency of the first storage unit is lower than accessed frequencies of the storage units in the first memory other than the first storage unit;
determining, by the processor, the second memory from the plurality of memories, wherein the remaining storage space of the second memory is greater than remaining storage spaces of the memories in the plurality of memories other than the second memory; and
storing, by the processor, the first sub-table into the second storage unit of the second memory.
5. The method according to any one of claims 1 to 4, wherein before the storing, by the processor, the first sub-table stored in the first storage unit into the second storage unit of the second memory in the case that the running status of the first memory reaches the preset condition, the method further comprises:
obtaining, by the processor, a first write request, wherein the first write request is used to request a write operation on a first entry of the first table; and
allocating, by the processor, the first storage unit to the first sub-table according to the first write request, wherein the first sub-table comprises the first entry.
6. The method according to claim 5, further comprising:
obtaining, by the processor, a second write request, wherein the second write request is used to request a write operation on a second entry of a second sub-table, and the plurality of sub-tables further comprise the second sub-table; and
allocating, by the processor, a third storage unit to the second sub-table according to the second write request if the processor determines that no storage unit has been allocated to the second sub-table.
7. The method according to any one of claims 1 to 6, wherein the first table is stored in a hashed manner, keywords (keys) of the entries of the first table are stored in entries of hash buckets, and the method further comprises:
storing, by the processor, a key to be stored into a first hash bucket in a case that a quantity of keys already stored in each entry of the first hash bucket is less than a first threshold;
storing, by the processor, the key to be stored into whichever of the first hash bucket and a second hash bucket has stored fewer keys, in a case that the quantity of keys already stored in each entry of the first hash bucket is greater than or equal to the first threshold, wherein the first hash bucket and the second hash bucket use different hash functions;
storing, by the processor, the key to be stored into a third hash bucket in a case that the entries of the first hash bucket and the second hash bucket are full and a quantity of keys stored in each entry of the third hash bucket is less than the first threshold, wherein the first hash bucket and the third hash bucket use a same hash function; and
storing, by the processor, the key to be stored into whichever of the third hash bucket and a fourth hash bucket has stored fewer keys, in a case that the entries of the first hash bucket and the second hash bucket are full and the quantity of keys stored in each entry of the third hash bucket is greater than or equal to the first threshold, wherein the first hash bucket and the third hash bucket use a same hash function, and the second hash bucket and the fourth hash bucket use a same hash function.
8. The method according to claim 7, further comprising:
controlling, by the processor, the network processor to match the third hash bucket first and then match the first hash bucket, in a case that a ratio of an accessed count of the first hash bucket to an accessed count of the third hash bucket is less than or equal to a second threshold;
controlling, by the processor, the network processor to match the fourth hash bucket first and then match the second hash bucket, in a case that a ratio of an accessed count of the second hash bucket to an accessed count of the fourth hash bucket is less than or equal to the second threshold; and
controlling, by the processor, the network processor to match the second hash bucket and the fourth hash bucket first and then match the first hash bucket and the third hash bucket, in a case that a ratio of an accessed count of the first hash bucket and the third hash bucket to an accessed count of the second hash bucket and the fourth hash bucket is less than or equal to the second threshold.
9. A method for accessing a table, comprising:
obtaining, by a network processor, a base address of a memory block mapping information table corresponding to a first table, wherein the memory block mapping information table comprises correspondences between the sub-tables in the first table and storage units;
accessing, by the network processor, the memory block mapping information table according to the base address and an index of a first entry of the first table, and determining, according to the memory block mapping information table, an index of a storage unit corresponding to a first sub-table in which the first entry is located, wherein the first sub-table is any sub-table in the first table;
determining, by the network processor, a physical address of the first entry according to the index of the storage unit and the index of the first entry; and
accessing, by the network processor, the first entry according to the physical address of the first entry.
10. The method according to claim 9, wherein the obtaining, by the network processor, the base address of the memory block mapping information table corresponding to the first table comprises:
accessing, by the network processor, a memory block mapping base address table according to a table identifier (TID) of the first table, and determining the base address of the memory block mapping information table according to the memory block mapping base address table.
11. The method according to claim 9 or 10, further comprising:
updating, by the network processor, an accessed count of the storage unit.
12. The method according to any one of claims 9 to 11, wherein the accessing, by the network processor, the memory block mapping information table according to the base address and the index of the first entry comprises:
accessing, by the network processor, the memory block mapping information table according to an address determined by the following formula,
wherein base is the base address, entry index is the index of the first entry, and block size is the quantity of entries contained in the first sub-table.
13. The method according to any one of claims 9 to 12, wherein the determining, by the network processor, the physical address of the first entry according to the index of the storage unit and the index of the first entry comprises:
determining, by the network processor, the physical address of the first entry according to the following formula,
real block index * block size + entry index % block size
wherein real block index is the index of the storage unit, block size is the quantity of entries contained in the first sub-table, the quantity of entries contained in the first sub-table is the same as the quantity of entries stored in the storage unit, and entry index is the index of the first entry.
14. A device for processing a table, wherein a first table comprises a plurality of sub-tables, a first memory comprises a plurality of storage units, the plurality of sub-tables comprise a first sub-table, and the plurality of storage units comprise a first storage unit, the device comprising:
a processing unit, configured to: store the first sub-table stored in the first storage unit into a second storage unit of a second memory in a case that a running status of the first memory reaches a preset condition, wherein a remaining bandwidth of the second memory is higher than a bandwidth occupied when the first sub-table is accessed, and a remaining storage space of the second memory is greater than a storage space occupied by the first sub-table; and delete the first sub-table from the first storage unit; and
a sending unit, configured to send a correspondence between the first sub-table and the second storage unit to a network processor, so that the network processor updates a saved correspondence between the first sub-table and the first storage unit to the correspondence between the first sub-table and the second storage unit.
15. The device according to claim 14, wherein the processing unit is specifically configured to:
determine a bandwidth occupied by the first memory, and store the first sub-table into the second storage unit in a case that the bandwidth occupied by the first memory is greater than or equal to a first preset value; or,
obtain a remaining storage space of the first memory, and store the first sub-table into the second storage unit in a case that the remaining storage space of the first memory is less than or equal to a second preset value.
16. The device according to claim 15, wherein the processing unit is specifically configured to:
obtain an accessed count of each storage unit in the first memory;
determine the first storage unit from the first memory according to the accessed count of each storage unit, wherein an accessed frequency of the first storage unit is higher than accessed frequencies of the storage units in the first memory other than the first storage unit;
determine the second memory from a plurality of memories, wherein the remaining bandwidth of the second memory is higher than remaining bandwidths of the memories in the plurality of memories other than the second memory; and
store the first sub-table into the second storage unit of the second memory.
17. The device according to claim 15, wherein the processing unit is specifically configured to:
determine the bandwidth occupied by the first memory;
obtain an accessed count of each storage unit in the first memory in a case that the bandwidth occupied by the first memory is less than a third preset value;
determine the first storage unit according to the accessed count of each storage unit, wherein an accessed frequency of the first storage unit is lower than accessed frequencies of the storage units in the first memory other than the first storage unit;
determine the second memory from the plurality of memories, wherein the remaining storage space of the second memory is greater than remaining storage spaces of the memories in the plurality of memories other than the second memory; and
store the first sub-table into the second storage unit of the second memory.
18. The device according to any one of claims 14 to 17, wherein the processing unit is further configured to:
obtain, before the first sub-table stored in the first storage unit is stored into the second storage unit of the second memory in the case that the running status of the first memory reaches the preset condition, a first write request, wherein the first write request is used to request a write operation on a first entry of the first table; and
allocate the first storage unit to the first sub-table according to the first write request, wherein the first sub-table comprises the first entry.
19. The device according to claim 18, wherein the processing unit is further configured to:
obtain a second write request, wherein the second write request is used to request a write operation on a second entry of a second sub-table, and the plurality of sub-tables further comprise the second sub-table; and
allocate a third storage unit to the second sub-table according to the second write request if it is determined that no storage unit has been allocated to the second sub-table.
20. The device according to any one of claims 14 to 19, wherein the first table is stored in a hashed manner, keywords (keys) of the entries of the first table are stored in entries of hash buckets, and the processing unit is further configured to:
store a key to be stored into a first hash bucket in a case that a quantity of keys stored in each entry of the first hash bucket is less than a first threshold;
store the key to be stored into whichever of the first hash bucket and a second hash bucket has stored fewer keys, in a case that the quantity of keys stored in each entry of the first hash bucket is greater than or equal to the first threshold, wherein the first hash bucket and the second hash bucket use different hash functions;
store the key to be stored into a third hash bucket in a case that the entries of the first hash bucket and the second hash bucket are full and a quantity of keys stored in each entry of the third hash bucket is less than the first threshold, wherein the first hash bucket and the third hash bucket use a same hash function; and
store the key to be stored into whichever of the third hash bucket and a fourth hash bucket has stored fewer keys, in a case that the entries of the first hash bucket and the second hash bucket are full and the quantity of keys stored in each entry of the third hash bucket is greater than or equal to the first threshold, wherein the first hash bucket and the third hash bucket use a same hash function, and the second hash bucket and the fourth hash bucket use a same hash function.
21. The device according to claim 20, further comprising:
a control unit, configured to:
control the network processor to match the third hash bucket first and then match the first hash bucket, in a case that a ratio of an accessed count of the first hash bucket to an accessed count of the third hash bucket is less than or equal to a second threshold;
control the network processor to match the fourth hash bucket first and then match the second hash bucket, in a case that a ratio of an accessed count of the second hash bucket to an accessed count of the fourth hash bucket is less than or equal to the second threshold; and
control the network processor to match the second hash bucket and the fourth hash bucket first and then match the first hash bucket and the third hash bucket, in a case that a ratio of an accessed count of the first hash bucket and the third hash bucket to an accessed count of the second hash bucket and the fourth hash bucket is less than or equal to the second threshold.
22. A device for accessing a table, comprising:
a processing unit, configured to: obtain a base address of a memory block mapping information table corresponding to a first table, wherein the memory block mapping information table comprises correspondences between the sub-tables in the first table and storage units; access the memory block mapping information table according to the base address and an index of a first entry of the first table, and determine, according to the memory block mapping information table, an index of a storage unit corresponding to a first sub-table in which the first entry is located, wherein the first sub-table is any sub-table in the first table; and determine a physical address of the first entry according to the index of the storage unit and the index of the first entry; and
an access unit, configured to access the first entry according to the physical address of the first entry.
23. The device according to claim 22, wherein the processing unit is specifically configured to access a memory block mapping base address table according to a table identifier (TID) of the first table, and determine the base address of the memory block mapping information table according to the memory block mapping base address table.
24. The device according to claim 22 or 23, wherein the processing unit is further configured to update an accessed count of the storage unit.
25. The device according to any one of claims 22 to 24, wherein the processing unit is specifically configured to:
access the memory block mapping information table according to an address determined by the following formula,
wherein base is the base address, entry index is the index of the first entry, and block size is the quantity of entries contained in the first sub-table.
26. The device according to any one of claims 22 to 25, wherein the processing unit is specifically configured to:
determine the physical address of the first entry according to the following formula,
real block index * block size + entry index % block size
wherein real block index is the index of the storage unit, block size is the quantity of entries contained in the first sub-table, the quantity of entries contained in the first sub-table is the same as the quantity of entries stored in the storage unit, and entry index is the index of the first entry.
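
Taken together, claims 2 to 4 specify two migration triggers with opposite selection policies: under bandwidth pressure the most frequently accessed sub-table is moved to the memory with the most spare bandwidth, while under capacity pressure (with the occupied bandwidth below a third preset value) the least frequently accessed sub-table is moved to the memory with the most spare capacity. The C sketch below illustrates only that decision logic; the mem_t layout, the fixed unit count and all helper names are illustrative assumptions, not taken from the patent.

#include <stddef.h>
#include <stdint.h>

#define MAX_UNITS 64   /* assumed upper bound on storage units per memory */

/* Hypothetical per-memory bookkeeping (not defined by the patent). */
typedef struct {
    uint64_t bandwidth_used;               /* bandwidth currently occupied    */
    uint64_t bandwidth_remaining;          /* spare bandwidth                 */
    uint64_t space_remaining;              /* spare storage space             */
    uint32_t unit_access_count[MAX_UNITS]; /* accessed count per storage unit */
    size_t   unit_count;
} mem_t;

/* Claims 3/4: pick the most accessed (hottest) or least accessed (coldest)
 * storage unit of the first memory. */
static size_t pick_unit(const mem_t *m, int pick_hottest)
{
    size_t best = 0;
    for (size_t i = 1; i < m->unit_count; i++) {
        uint32_t cur = m->unit_access_count[i];
        uint32_t ref = m->unit_access_count[best];
        if ((pick_hottest && cur > ref) || (!pick_hottest && cur < ref))
            best = i;
    }
    return best;
}

/* Claims 3/4: pick the candidate memory with the most spare bandwidth or the
 * most spare storage space. */
static size_t pick_target(const mem_t *cands, size_t n, int by_bandwidth)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++) {
        uint64_t cur = by_bandwidth ? cands[i].bandwidth_remaining
                                    : cands[i].space_remaining;
        uint64_t ref = by_bandwidth ? cands[best].bandwidth_remaining
                                    : cands[best].space_remaining;
        if (cur > ref)
            best = i;
    }
    return best;
}

/* Claim 2: two independent triggers. Returns 1 and fills the source unit and
 * the target memory when a migration should happen, 0 otherwise. */
static int plan_migration(const mem_t *first, const mem_t *cands, size_t n,
                          uint64_t preset1, uint64_t preset2, uint64_t preset3,
                          size_t *unit_out, size_t *target_out)
{
    if (first->bandwidth_used >= preset1) {            /* bandwidth pressure */
        *unit_out   = pick_unit(first, 1);             /* hottest unit, claim 3 */
        *target_out = pick_target(cands, n, 1);        /* most spare bandwidth  */
        return 1;
    }
    if (first->space_remaining <= preset2 &&
        first->bandwidth_used  <  preset3) {           /* capacity pressure */
        *unit_out   = pick_unit(first, 0);             /* coldest unit, claim 4 */
        *target_out = pick_target(cands, n, 0);        /* most spare space      */
        return 1;
    }
    return 0;
}

After the move, the processor would delete the sub-table from its old storage unit and push the new sub-table-to-storage-unit correspondence to the network processor, as claim 1 requires.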
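
Claim 7 (and its device counterpart, claim 20) fixes a cascading insertion order across two hash-function groups: bucket 1 first, spilling to the less-loaded of buckets 1 and 2, and only when both are full falling through to buckets 3 and 4 in the same pattern. The sketch below is one possible reading of that order; the toy bucket geometry (four entries of two keys each) and the helper names are assumptions, since the claim fixes no concrete sizes.

#include <stdint.h>

#define ENTRIES_PER_BUCKET 4   /* assumed bucket geometry, not from the patent */
#define KEYS_PER_ENTRY     2

typedef struct {
    uint64_t keys[ENTRIES_PER_BUCKET][KEYS_PER_ENTRY];
    unsigned count[ENTRIES_PER_BUCKET];   /* keys stored in each entry    */
    unsigned (*hash)(uint64_t key);       /* hash function of this bucket */
} hash_bucket_t;

/* Largest number of keys stored in any single entry of the bucket. */
static unsigned max_entry_keys(const hash_bucket_t *b)
{
    unsigned max = 0;
    for (int i = 0; i < ENTRIES_PER_BUCKET; i++)
        if (b->count[i] > max)
            max = b->count[i];
    return max;
}

/* Total keys held by the bucket. */
static unsigned total_keys(const hash_bucket_t *b)
{
    unsigned n = 0;
    for (int i = 0; i < ENTRIES_PER_BUCKET; i++)
        n += b->count[i];
    return n;
}

/* All entries of the bucket are full. */
static int bucket_full(const hash_bucket_t *b)
{
    return total_keys(b) == ENTRIES_PER_BUCKET * KEYS_PER_ENTRY;
}

/* Append the key to the entry it hashes to, if that entry has room. */
static void insert_key(hash_bucket_t *b, uint64_t key)
{
    unsigned e = b->hash(key) % ENTRIES_PER_BUCKET;
    if (b->count[e] < KEYS_PER_ENTRY)
        b->keys[e][b->count[e]++] = key;
}

/* Claim 7: b1 and b3 use one hash function, b2 and b4 use another. */
void store_key(hash_bucket_t *b1, hash_bucket_t *b2,
               hash_bucket_t *b3, hash_bucket_t *b4,
               uint64_t key, unsigned first_threshold)
{
    if (!bucket_full(b1) || !bucket_full(b2)) {
        if (max_entry_keys(b1) < first_threshold)
            insert_key(b1, key);                      /* case 1: bucket 1 directly  */
        else                                          /* case 2: less-loaded of 1/2 */
            insert_key(total_keys(b1) <= total_keys(b2) ? b1 : b2, key);
    } else {
        if (max_entry_keys(b3) < first_threshold)
            insert_key(b3, key);                      /* case 3: overflow to bucket 3 */
        else                                          /* case 4: less-loaded of 3/4   */
            insert_key(total_keys(b3) <= total_keys(b4) ? b3 : b4, key);
    }
}

Claim 8 then adjusts the lookup side: whenever the ratio of accessed counts between a primary bucket and its overflow bucket drops to or below the second threshold, the network processor is told to probe the overflow bucket (or bucket pair) first, so the more frequently hit buckets are matched earlier.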
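
Claims 12, 13, 25 and 26 reduce a lookup to two small calculations over the memory block mapping information table. The C sketch below assumes the mapping table is indexed by sub-table number, i.e. by entry index / block size, which the claim text leaves to the formula it references; the record layout and names are illustrative only.

#include <stdint.h>

/* One record of the memory block mapping information table (layout assumed):
 * sub-table number -> index of the storage unit that currently holds it. */
typedef struct {
    uint32_t real_block_index;
} map_entry_t;

/* base:        mapping table of the first table, located via its TID and the
 *              memory block mapping base address table (claim 10); modelled
 *              here as a pointer instead of a raw base address.
 * entry_index: index of the entry to access within the first table.
 * block_size:  entries per sub-table, equal to the entries per storage unit. */
static uint32_t entry_physical_address(const map_entry_t *base,
                                       uint32_t entry_index,
                                       uint32_t block_size)
{
    /* Claim 12 (formula assumed): read the mapping record of the sub-table that
     * contains the entry, at offset entry_index / block_size from the base. */
    uint32_t real_block_index = base[entry_index / block_size].real_block_index;

    /* Claim 13: physical address =
     *   real block index * block size + entry index % block size */
    return real_block_index * block_size + entry_index % block_size;
}

With block_size = 1024, for example, entry 5000 falls in sub-table 4; if that sub-table currently sits in storage unit 9, the entry is read at physical address 9 * 1024 + 5000 % 1024 = 10120. Per claim 11, the network processor would also bump the accessed count of that storage unit, which is what lets the processor side identify hot and cold sub-tables for migration.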
CN201510274566.8A 2015-05-26 2015-05-26 The method for handling table, the method and apparatus for accessing table Active CN106294191B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510274566.8A CN106294191B (en) 2015-05-26 2015-05-26 The method for handling table, the method and apparatus for accessing table

Publications (2)

Publication Number Publication Date
CN106294191A true CN106294191A (en) 2017-01-04
CN106294191B CN106294191B (en) 2019-07-09

Family

ID=57634545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510274566.8A Active CN106294191B (en) 2015-05-26 2015-05-26 The method for handling table, the method and apparatus for accessing table

Country Status (1)

Country Link
CN (1) CN106294191B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120078856A1 (en) * 2001-01-10 2012-03-29 Datacore Software Corporation Methods and apparatus for point-in-time volumes
CN101566986A (en) * 2008-04-21 2009-10-28 阿里巴巴集团控股有限公司 Method and device for processing data in online business processing
CN101909068A (en) * 2009-06-02 2010-12-08 华为技术有限公司 Method, device and system for managing file copies

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108572962A (en) * 2017-03-08 2018-09-25 华为技术有限公司 A kind of method and device of storage physical data table
CN112491725A (en) * 2020-11-30 2021-03-12 锐捷网络股份有限公司 MAC address processing method and device
CN112491725B (en) * 2020-11-30 2022-05-20 锐捷网络股份有限公司 MAC address processing method and device
CN114490449A (en) * 2022-04-18 2022-05-13 飞腾信息技术有限公司 Memory access method and device and processor
CN114490449B (en) * 2022-04-18 2022-07-08 飞腾信息技术有限公司 Memory access method and device and processor
CN116016432A (en) * 2022-12-30 2023-04-25 迈普通信技术股份有限公司 Message forwarding method, device, network equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN106294191B (en) 2019-07-09

Similar Documents

Publication Publication Date Title
CN103238145B (en) High-performance in network is equipped, the renewable and method and apparatus of Hash table that determines
US7606236B2 (en) Forwarding information base lookup method
US8705363B2 (en) Packet scheduling method and apparatus
Bando et al. FlashTrie: beyond 100-Gb/s IP route lookup using hash-based prefix-compressed trie
CN106294191A (en) The method processing table, the method and apparatus accessing table
CN104077239B (en) IP hard disk, and memory system and data operation method thereof
CN107526542A (en) Object storage device and its operating method
CN102985909A (en) Method and apparatus for providing highly-scalable network storage for well-gridded objects
CN101741730A (en) Method and equipment for downloading file and method and system for providing file downloading service
CN103929454A (en) Load balancing storage method and system in cloud computing platform
US20200136986A1 (en) Multi-path packet descriptor delivery scheme
CN103997540A (en) Method for achieving distributed storage of network, storage system and customer premise equipment
CN103338242A (en) Hybrid cloud storage system and method based on multi-level cache
CN103905538A (en) Neighbor cooperation cache replacement method in content center network
CN103270727B (en) Bank aware multi-it trie
CN104750432B (en) A kind of date storage method and device
CN108199976A (en) Switching equipment, exchange system and the data transmission method for uplink of RapidIO networks
CN101620623A (en) Method and device for managing list item of content addressable memory CAM
WO2016019554A1 (en) Queue management method and apparatus
CN107667352A (en) File cache and synchronous technology for predictability
CN104836747A (en) Network outbound load balancing method and system
CN109656836A (en) A kind of data processing method and device
JP7241194B2 (en) MEMORY MANAGEMENT METHOD AND APPARATUS
CN101335707B (en) Flow control method and device based on pre-distribution
CN107145449A (en) Storage device and storage method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211223

Address after: 450046 Floor 9, building 1, Zhengshang Boya Plaza, Longzihu wisdom Island, Zhengdong New Area, Zhengzhou City, Henan Province

Patentee after: xFusion Digital Technologies Co., Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Patentee before: HUAWEI TECHNOLOGIES Co.,Ltd.

TR01 Transfer of patent right