CN106294191B - The method for handling table, the method and apparatus for accessing table - Google Patents
Abstract
The invention discloses a method for processing a table, and a method and apparatus for accessing a table. The method for processing a table includes: when the operating status of a first memory reaches a preset condition, a processor stores a first sublist, held in a first storage unit of the first memory, into a second storage unit of a second memory, where the remaining bandwidth of the second memory is higher than the bandwidth occupied when the first sublist is accessed, and the remaining storage space of the second memory is greater than the storage space the first sublist occupies; the processor deletes the first sublist from the first storage unit; and the processor sends the correspondence between the first sublist and the second storage unit to a network processor, so that the network processor updates the saved correspondence between the first sublist and the first storage unit to the correspondence between the first sublist and the second storage unit. By moving some of the sublists stored in one memory into other memories, the present invention can reduce the memory capacity a high-performance network processor requires.
Description
Technical field
The present invention relates to the communications field, and in particular to a method for processing a table, and a method and apparatus for accessing a table.
Background technique
The rise of mobile networks and the flourishing of the Internet of Things have caused network traffic to grow explosively, and this rapid growth is expected to continue for some time. The rapid growth of network traffic makes the performance of network devices a bottleneck, which is both a challenge and an opportunity for network equipment vendors.
The network processor is usually the core of a router, and its performance is the key to a router's competitiveness. At present, the rate of a single-link interface is about to exceed 50 gigabits per second (Gbps), which means that the single-chip throughput of a future network processor may reach 2 terabits per second (Tbps) or even higher, an improvement of 4 to 8 times or more over today. Such high throughput places higher requirements on the processing performance inside the network processor.
Memory bandwidth is a performance bottleneck of the network processor: compared with the rapid improvement of interface capability, it has grown relatively slowly in recent years. This is especially true of the bandwidth of Double Data Rate (DDR) Synchronous Dynamic Random Access Memory (SDRAM). Under random access, even with multiple storage arrays (banks) working in parallel, a fourth-generation DDR (DDR4) chip can currently provide a sustained access rate of only about 125*128 megabits per second (Mb/s). In most scenarios, a packet needs to access the memory holding the traffic tables 10 or more times between entering and leaving the device, reading a 128-bit entry each time. At a packet rate of 750 million packets per second (MPPS), the memories must together provide more than 750*10*128 Mb of bandwidth. If all table entries were placed in DDR, 60 or more DDR chips would be needed, which is impractical.
At present, a router needs to support dozens of forwarding services, each of which accesses roughly 8 to 20 traffic tables, for a total of more than 400 traffic tables on a router. In any given scenario, however, only one or a few forwarding services on a device run at high speed at the same time, while most of the other forwarding services run at low speed or sit idle. That is, in real scenarios only a small number of traffic tables are accessed at high speed; most traffic tables are accessed very infrequently or not at all. Even within a traffic table that is accessed at high speed, not all entries are accessed equally often. Studies have shown that more than 90% of traffic concentrates on 5% to 10% of the flows (the large flows), and 20% to 40% of traffic concentrates on 0.1% to 0.5% of the flows (the very large flows). This means that most of the traffic hits a small number of entries, while most entries are accessed very infrequently or never. The current common scheme, however, pre-selects the storage location of an entire table according to the access-performance requirement of the whole table; when the table capacity is very large, this requires a very large high-bandwidth memory, which exceeds what next-generation network processors can support.
Summary of the invention
The present invention provides a method for processing a table, and a method and apparatus for accessing a table, which can reduce the memory capacity a high-performance network processor requires and make next-generation high-performance network processors easier to implement.
In a first aspect, a method for processing a table is provided. A first table includes multiple sublists, a first memory includes multiple storage units, the multiple sublists include a first sublist, and the multiple storage units include a first storage unit. The method includes: when the operating status of the first memory reaches a preset condition, a processor stores the first sublist, held in the first storage unit, into a second storage unit of a second memory, where the remaining bandwidth of the second memory is higher than the bandwidth the first sublist occupies when accessed, and the remaining storage space of the second memory is greater than the storage space the first sublist occupies; the processor deletes the first sublist from the first storage unit; and the processor sends the correspondence between the first sublist and the second storage unit to a network processor, so that the network processor updates the saved correspondence between the first sublist and the first storage unit to the correspondence between the first sublist and the second storage unit.
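The three steps of this aspect (store, delete, notify) can be sketched as follows. All names and data structures here are illustrative inventions for the sketch, not part of the patent:

```python
def migrate_sublist(sublist, src_mem, memories, mapping):
    """Move `sublist` out of `src_mem` if an eligible second memory exists.

    sublist:  {"id", "unit", "bandwidth", "size"}
    memories: candidate second memories, each
              {"id", "remaining_bandwidth", "remaining_space",
               "free_units", "units"}
    mapping:  sublist id -> (memory id, storage unit), the correspondence
              held by the network processor.
    """
    # Eligibility per the first aspect: enough spare bandwidth and space.
    target = next(
        (m for m in memories
         if m["remaining_bandwidth"] > sublist["bandwidth"]
         and m["remaining_space"] > sublist["size"]),
        None)
    if target is None:
        return False                                  # no eligible memory
    second_unit = target["free_units"].pop()          # pick a free unit
    target["units"][second_unit] = sublist["id"]      # store into second memory
    del src_mem["units"][sublist["unit"]]             # delete the original
    mapping[sublist["id"]] = (target["id"], second_unit)  # update correspondence
    return True
```

The mapping update is what lets the network processor keep resolving entry indexes to the sublist's new physical location.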
With reference to the first aspect, in a first possible implementation, storing the first sublist held in the first storage unit into the second storage unit of the second memory when the operating status of the first memory reaches the preset condition includes: the processor determines the bandwidth the first memory occupies, and when that bandwidth is greater than or equal to a first preset value, stores the first sublist into the second storage unit of the second memory; alternatively, the processor obtains the remaining storage space of the first memory, and when that remaining storage space is less than or equal to a second preset value, stores the first sublist into the second storage unit of the second memory.
With reference to the first possible implementation, in a second possible implementation, storing the first sublist held in the first storage unit into the second storage unit of the second memory when the bandwidth the first memory occupies is greater than or equal to the first preset value includes: the processor obtains the number of times each storage unit in the first memory has been accessed; the processor determines the first storage unit in the first memory according to those access counts, where the access frequency of the first storage unit is higher than that of the other storage units in the first memory; the processor determines the second memory from multiple memories, where the remaining bandwidth of the second memory is higher than that of the other memories among the multiple memories; and the processor stores the first sublist into the second storage unit of the second memory.
With reference to the first possible implementation, in a third possible implementation, storing the first sublist held in the first storage unit into the second storage unit of the second memory when the operating status of the first memory reaches the preset condition includes: the processor determines the bandwidth the first memory occupies; when that bandwidth is less than a third preset value, the processor obtains the number of times each storage unit in the first memory has been accessed; the processor determines the first storage unit according to those access counts, where the access frequency of the first storage unit is lower than that of the other storage units in the first memory; the processor determines the second memory from multiple memories, where the remaining storage space of the second memory is higher than that of the other memories among the multiple memories; and the processor stores the first sublist into the second storage unit of the second memory.
With reference to the first aspect or any one of the first to third possible implementations, in a fourth possible implementation, before the processor stores the first sublist held in the first storage unit into the second storage unit of the second memory when the operating status of the first memory reaches the preset condition, the method further includes: the processor obtains a first write request, which requests a write operation on a first entry of the first table; and the processor allocates the first storage unit to the first sublist according to the first write request, the first sublist including the first entry.
With reference to the fourth possible implementation, in a fifth possible implementation, the method further includes: the processor obtains a second write request, which requests a write operation on a second entry of a second sublist, the multiple sublists further including the second sublist; if the processor determines that the second sublist has no allocated storage unit, the processor allocates a third storage unit to the second sublist according to the second write request.
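The on-demand allocation described by the fourth and fifth implementations can be sketched as follows; a storage unit is assigned to a sublist only when the first write to one of its entries arrives. All names are hypothetical:

```python
def handle_write(table, sublist_id, entry_index, value, free_units):
    """Allocate a storage unit for a sublist only on its first write.

    table:      sublist id -> {"unit": storage unit, "entries": {...}}
    free_units: pool of unallocated storage units.
    """
    if sublist_id not in table:                 # sublist has no unit yet
        table[sublist_id] = {"unit": free_units.pop(), "entries": {}}
    table[sublist_id]["entries"][entry_index] = value
```

Deferring allocation this way means cold sublists that are never written consume no storage units at all.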
With reference to the first aspect or any one of the above possible implementations, in a sixth possible implementation, the first table is stored using hashing, and the keys of the entries of the first table are stored in the entries of hash buckets. The method further includes:
when the number of keys stored in each entry of a first hash bucket is less than a first threshold, the processor stores the key to be stored into the first hash bucket;
when the number of keys stored in each entry of the first hash bucket is greater than or equal to the first threshold, the processor stores the key to be stored into whichever of the first hash bucket and a second hash bucket holds fewer keys, the first hash bucket and the second hash bucket using different hash functions;
when the entries of the first hash bucket and the second hash bucket are full, and the number of keys stored in each entry of a third hash bucket is less than the first threshold, the processor stores the key to be stored into the third hash bucket, the first hash bucket and the third hash bucket using the same hash function;
when the entries of the first hash bucket and the second hash bucket are full, and the number of keys stored in each entry of the third hash bucket is greater than or equal to the first threshold, the processor stores the key to be stored into whichever of the third hash bucket and a fourth hash bucket holds fewer keys, the first hash bucket and the third hash bucket using the same hash function, and the second hash bucket and the fourth hash bucket using the same hash function.
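The four-bucket placement policy above can be sketched as follows. `BUCKET_CAPACITY` stands in for the "first threshold", and the helper names and toy hash functions are assumptions for illustration only:

```python
BUCKET_CAPACITY = 4   # stands in for the "first threshold"

def insert_key(key, primary1, primary2, overflow1, overflow2, h1, h2):
    """Place `key` following the four-bucket policy sketched above.

    primary1/primary2 use different hash functions h1/h2; overflow1
    reuses h1 (bucket 3 pairs with bucket 1) and overflow2 reuses h2
    (bucket 4 pairs with bucket 2).
    """
    b1, b2 = primary1[h1(key)], primary2[h2(key)]
    if len(b1) < BUCKET_CAPACITY:
        b1.append(key)                    # first hash bucket has room
        return "bucket1"
    if len(b2) < BUCKET_CAPACITY:
        b2.append(key)                    # first bucket full: second has fewer keys
        return "bucket2"
    # Both primaries full: fall back to the overflow buckets.
    o1, o2 = overflow1[h1(key)], overflow2[h2(key)]
    if len(o1) < BUCKET_CAPACITY:
        o1.append(key)
        return "bucket3"
    target = min(o1, o2, key=len)         # less-occupied overflow bucket
    target.append(key)
    return "bucket3" if target is o1 else "bucket4"
```

Because buckets 3 and 4 reuse the primary hash functions, a lookup never needs more than the two hash computations already performed for the primary buckets.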
With reference to the sixth possible implementation, in a seventh possible implementation, the method further includes:
when the ratio of the access count of the first hash bucket to the access count of the third hash bucket is less than or equal to a second threshold, the processor controls the network processor to match against the third hash bucket first and then the first hash bucket;
when the ratio of the access count of the second hash bucket to the access count of the fourth hash bucket is less than or equal to the second threshold, the processor controls the network processor to match against the fourth hash bucket first and then the second hash bucket;
when the ratio of the access count of the first and third hash buckets to the access count of the second and fourth hash buckets is less than or equal to the second threshold, the processor controls the network processor to match against the second and fourth hash buckets first and then the first and third hash buckets.
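The probe-order heuristic can be sketched per bucket pair: when an overflow bucket is hit nearly as often as (or more often than) its partner, probe the overflow bucket first. The names and threshold value are illustrative:

```python
RATIO_THRESHOLD = 1.0   # stands in for the "second threshold"

def probe_order(primary_hits, overflow_hits):
    """Decide which bucket of a pair the network processor matches first."""
    if overflow_hits and primary_hits / overflow_hits <= RATIO_THRESHOLD:
        return ["overflow", "primary"]    # overflow bucket is hot: probe it first
    return ["primary", "overflow"]
```

Reordering the probes lets the hot bucket be matched on the first memory access instead of the second.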
In a second aspect, a method for accessing a table is provided, including: a network processor obtains the base address of the memory-block map information table corresponding to a first table, the memory-block map information table containing the correspondence between each sublist in the first table and a storage unit; the network processor accesses the memory-block map information table according to the base address and the index of a first entry of the first table, and determines from the memory-block map information table the index of the storage unit corresponding to the first sublist in which the first entry resides, the first sublist being any sublist in the first table; the network processor determines the physical address of the first entry according to the index of the storage unit and the index of the first entry; and the network processor accesses the first entry according to the physical address of the first entry.
With reference to the second aspect, in a first possible implementation of the second aspect, obtaining the base address of the memory-block map information table corresponding to the first table includes: the network processor accesses a memory-block map base-address table according to the table identifier (TID) of the first table, and determines the base address of the memory-block map information table from the memory-block map base-address table.
With reference to the second aspect or its first possible implementation, in a second possible implementation of the second aspect, the method further includes: the network processor updates the access count of the storage unit.
With reference to the second aspect or its first or second possible implementation, in a third possible implementation of the second aspect, accessing the memory-block map information table according to the base address and the index of the first entry includes: the network processor accesses the memory-block map information table at an address determined according to the following formula,
base+entry index/block size
where base is the base address, entry index is the index of the first entry, and block size is the number of entries the first sublist includes.
With reference to the second aspect or its first, second, or third possible implementation, in a fourth possible implementation of the second aspect, determining the physical address of the first entry according to the index of the storage unit and the index of the first entry includes: the network processor determines the physical address of the first entry according to the following formula,
real block index*block size+entry index%block size
where real block index is the index of the storage unit, block size is the number of entries the first sublist includes, the number of entries the first sublist includes equals the number of entries the storage unit stores, and entry index is the index of the first entry.
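The two address computations can be checked together. The division form of the map-table lookup mirrors the modulo form quoted in the text (each sublist occupies one row of the map information table), and all names are illustrative:

```python
def map_info_address(base, entry_index, block_size):
    # Each sublist occupies one row of the map information table, so the
    # entry index is divided down by the sublist size (integer division).
    return base + entry_index // block_size

def physical_address(real_block_index, entry_index, block_size):
    # Formula from the text:
    #   real block index * block size + entry index % block size
    return real_block_index * block_size + entry_index % block_size

# Entry 37 of a table with 16-entry sublists lies in sublist 37 // 16 = 2;
# suppose that sublist's storage unit index is 5.
print(map_info_address(1000, 37, 16))   # 1002
print(physical_address(5, 37, 16))      # 85
```

A migrated sublist therefore only requires rewriting its one row in the map information table; the entry index arithmetic is unchanged.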
In a third aspect, an apparatus for processing a table is provided. A first table includes multiple sublists, a first memory includes multiple storage units, the multiple sublists include a first sublist, and the multiple storage units include a first storage unit. The apparatus includes: a processing unit, configured to store the first sublist, held in the first storage unit, into a second storage unit of a second memory when the operating status of the first memory reaches a preset condition, where the remaining bandwidth of the second memory is higher than the bandwidth occupied when the first sublist is accessed, and the remaining storage space of the second memory is greater than the storage space the first sublist occupies; a deletion unit, configured to delete the first sublist from the first storage unit; and a sending unit, configured to send the correspondence between the first sublist and the second storage unit to a network processor, so that the network processor updates the saved correspondence between the first sublist and the first storage unit to the correspondence between the first sublist and the second storage unit.
With reference to the third aspect, in a first possible implementation of the third aspect, the processing unit is specifically configured to: determine the bandwidth the first memory occupies, and when that bandwidth is greater than or equal to a first preset value, store the first sublist into the second memory; or obtain the remaining storage space of the first memory, and when that remaining storage space is less than or equal to a second preset value, store the first sublist into the second storage unit of the second memory.
With reference to the first possible implementation of the third aspect, in a second possible implementation of the third aspect, the processing unit is specifically configured to: obtain the number of times each storage unit in the first memory has been accessed; determine the first storage unit from the first memory according to those access counts, where the access frequency of the first storage unit is higher than that of the other storage units in the first memory; determine the second memory from multiple memories, where the remaining bandwidth of the second memory is higher than that of the other memories among the multiple memories; and store the first sublist into the second storage unit of the second memory.
With reference to the first possible implementation of the third aspect, in a third possible implementation of the third aspect, the processing unit is specifically configured to: determine the bandwidth the first memory occupies; when that bandwidth is less than a third preset value, obtain the number of times each storage unit in the first memory has been accessed; determine the first storage unit according to those access counts, where the access frequency of the first storage unit is lower than that of the other storage units in the first memory; determine the second memory from multiple memories, where the remaining storage space of the second memory is higher than that of the other memories among the multiple memories; and store the first sublist into the second storage unit of the second memory.
With reference to the third aspect or any one of its first to third possible implementations, in a fourth possible implementation of the third aspect, the processing unit is further configured to: before storing the first sublist held in the first storage unit into the second storage unit of the second memory when the operating status of the first memory reaches the preset condition, obtain a first write request, which requests a write operation on a first entry of the first table; and allocate the first storage unit to the first sublist according to the first write request, the first sublist including the first entry.
With reference to the fourth possible implementation of the third aspect, in a fifth possible implementation of the third aspect, the processing unit is further configured to: obtain a second write request, which requests a write operation on a second entry of a second sublist, the multiple sublists further including the second sublist; and, if it is determined that the second sublist has no allocated storage unit, allocate a third storage unit to the second sublist according to the second write request.
With reference to the third aspect or any one of the above possible implementations, in a sixth possible implementation of the third aspect, the first table is stored using hashing, and the keys of the entries of the first table are stored in the entries of hash buckets. The processing unit is further configured to:
when the number of keys stored in each entry of a first hash bucket is less than a first threshold, store the key to be stored into the first hash bucket;
when the number of keys stored in each entry of the first hash bucket is greater than or equal to the first threshold, store the key to be stored into whichever of the first hash bucket and a second hash bucket holds fewer keys, the first hash bucket and the second hash bucket using different hash functions;
when the entries of the first hash bucket and the second hash bucket are full, and the number of keys stored in each entry of a third hash bucket is less than the first threshold, store the key to be stored into the third hash bucket, the first hash bucket and the third hash bucket using the same hash function;
when the entries of the first hash bucket and the second hash bucket are full, and the number of keys stored in each entry of the third hash bucket is greater than or equal to the first threshold, store the key to be stored into whichever of the third hash bucket and a fourth hash bucket holds fewer keys, the first hash bucket and the third hash bucket using the same hash function, and the second hash bucket and the fourth hash bucket using the same hash function.
With reference to the sixth possible implementation of the third aspect, in a seventh possible implementation of the third aspect, the apparatus further includes a control unit, configured to:
when the ratio of the access count of the first hash bucket to the access count of the third hash bucket is less than or equal to a second threshold, control the network processor to match against the third hash bucket first and then the first hash bucket;
when the ratio of the access count of the second hash bucket to the access count of the fourth hash bucket is less than or equal to the second threshold, control the network processor to match against the fourth hash bucket first and then the second hash bucket;
when the ratio of the access count of the first and third hash buckets to the access count of the second and fourth hash buckets is less than or equal to the second threshold, control the network processor to match against the second and fourth hash buckets first and then the first and third hash buckets.
In a fourth aspect, an apparatus for accessing a table is provided, including: a processing unit, configured to: obtain the base address of the memory-block map information table corresponding to a first table, the memory-block map information table containing the correspondence between each sublist in the first table and a storage unit; access the memory-block map information table according to the base address and the index of a first entry of the first table, and determine from the memory-block map information table the index of the storage unit corresponding to the first sublist in which the first entry resides, the first sublist being any sublist in the first table; and determine the physical address of the first entry according to the index of the storage unit and the index of the first entry; and an access unit, configured to access the first entry according to the physical address of the first entry.
With reference to the fourth aspect, in a first possible implementation of the fourth aspect, the processing unit is specifically configured to: access a memory-block map base-address table according to the table identifier (TID) of the first table, and determine the base address of the memory-block map information table from the memory-block map base-address table.
With reference to the fourth aspect or its first possible implementation, in a second possible implementation of the fourth aspect, the apparatus further includes: a recording unit, configured to update the access count of the storage unit.
With reference to the fourth aspect or its first or second possible implementation, in a third possible implementation of the fourth aspect, the processing unit is specifically configured to access the memory-block map information table at an address determined according to the following formula,
base+entry index/block size
where base is the base address, entry index is the index of the first entry, and block size is the number of entries the first sublist includes.
With reference to the fourth aspect or its first, second, or third possible implementation, in a fourth possible implementation of the fourth aspect, the processing unit is specifically configured to determine the physical address of the first entry according to the following formula,
real block index*block size+entry index%block size
where real block index is the index of the storage unit, block size is the number of entries the first sublist includes, the number of entries the first sublist includes equals the number of entries the storage unit stores, and entry index is the index of the first entry.
Based on the above technical solutions, by moving some of the sublists stored in the storage units of a memory into other memories with sufficient remaining bandwidth and storage space when the operating status of that memory reaches a preset condition, the memory capacity a high-performance network processor requires can be reduced, making next-generation high-performance network processors easier to implement.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a table and a memory according to an embodiment of the present invention.
Fig. 2 is a schematic flowchart of a method for processing a table according to an embodiment of the present invention.
Fig. 3 is a schematic flowchart of a method for processing a table according to another embodiment of the present invention.
Fig. 4 is a schematic diagram of the statistics unit of a method for processing a table according to another embodiment of the present invention.
Fig. 5 is a schematic flowchart of a method for processing a table according to another embodiment of the present invention.
Fig. 6 is a schematic flowchart of a method for processing a table according to another embodiment of the present invention.
Fig. 7 is a schematic diagram of the hash buckets of a router.
Figs. 8a, 8b, 8c and 8d are schematic diagrams of a method for processing a table according to an embodiment of the present invention.
Fig. 9 is a schematic flowchart of a method for accessing a table according to an embodiment of the present invention.
Fig. 10 is a schematic diagram of the storage-system architecture of a method for accessing a table according to an embodiment of the present invention.
Fig. 11 is a schematic diagram of the memory map unit of a method for accessing a table according to an embodiment of the present invention.
Fig. 12 is a schematic block diagram of an apparatus for processing a table according to an embodiment of the present invention.
Fig. 13 is a schematic block diagram of an apparatus for processing a table according to another embodiment of the present invention.
Fig. 14 is a schematic block diagram of an apparatus for accessing a table according to an embodiment of the present invention.
Fig. 15 is a schematic block diagram of an apparatus for accessing a table according to another embodiment of the present invention.
Fig. 16 is a schematic block diagram of an apparatus for processing a table according to another embodiment of the present invention.
Fig. 17 is a schematic block diagram of an apparatus for accessing a table according to another embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms "first", "second" and "third" in the description, claims and drawings of this application are used to distinguish different objects, not to describe a particular order. In addition, the terms "include" and "have" are not exclusive: a process, method, system, product or device that comprises a series of steps or units is not limited to the listed steps or units, and may also include steps or units that are not listed.
In the embodiments of the present invention, a table includes multiple sublists and a memory includes multiple storage units, as shown in Fig. 1. It should be understood that the physical space of a memory in the embodiments of the present invention may be divided according to its physical structure or divided logically.
Fig. 2 is a schematic flowchart of a method 200 for processing a table according to an embodiment of the present invention. As shown in Fig. 2, the method 200 includes the following steps.
210: In a case in which the running status of a first memory reaches a preset condition, a processor stores a first sublist, which is stored in a first storage unit of the first memory, into a second storage unit of a second memory, where the remaining bandwidth of the second memory is higher than the bandwidth occupied by the first sublist when it is accessed, and the remaining storage space of the second memory is greater than the storage space occupied by the first sublist.
In this embodiment of the present invention, a first table includes multiple sublists, the first memory includes multiple storage units, the multiple sublists include the first sublist, and the multiple storage units include the first storage unit. As shown in Fig. 1, the multiple sublists of the first table may be stored in the first memory, or may be stored in different memories. One storage unit is used to store one sublist, and one sublist includes a fixed number of consecutive table entries.
The running status of a memory may include the occupied bandwidth and the storage space. The second storage unit is any storage unit, among the multiple storage units of the second memory, that does not store a sublist.
It should be understood that the processor in the embodiments of the present invention refers to a processor of the control plane.
220: The processor deletes the first sublist in the first storage unit.
230: The processor sends the correspondence between the first sublist and the second storage unit to a network processor, so that the network processor updates the saved correspondence between the first sublist and the first storage unit to the correspondence between the first sublist and the second storage unit.
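Steps 210 to 230 (store the sublist in the new memory, delete the original copy, update the sublist-to-unit correspondence) can be sketched as follows. This is an illustrative Python model only, not the patented hardware implementation; the class and function names (`Memory`, `migrate_sublist`) are assumptions made for the example.

```python
class Memory:
    """A simplified model of one memory: a pool of storage units, each able to hold one sublist."""
    def __init__(self, name, num_units):
        self.name = name
        self.units = {}                    # unit index -> sublist id
        self.free = set(range(num_units))  # unallocated storage units

    def store(self, sublist_id):
        unit = self.free.pop()             # pick any free storage unit
        self.units[unit] = sublist_id
        return unit

    def delete(self, unit):
        del self.units[unit]
        self.free.add(unit)


def migrate_sublist(mapping, sublist_id, src, src_unit, dst):
    """Steps 210-230: copy the sublist, delete the original, update the mapping."""
    dst_unit = dst.store(sublist_id)            # 210: store into the second memory
    src.delete(src_unit)                        # 220: delete from the first memory
    mapping[sublist_id] = (dst.name, dst_unit)  # 230: new sublist -> unit correspondence
    return dst_unit
```

In the real design the updated correspondence is sent to the network processor's memory mapping table; here a plain dictionary stands in for that table.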
In the prior art, the storage location of an entire table is usually selected according to the line-rate requirements of all typical service scenarios simultaneously, which requires the network processor of the forwarding plane to provide a large-capacity, high-bandwidth memory system. This manner of statically pre-allocating memory to meet the performance requirements of all scenarios cannot satisfy the memory bandwidth and memory capacity demands of network processors of 1 Tbps and above, and becomes the bottleneck of high-performance network processors. In the embodiments of the present invention, however, in a case in which the running status of a memory reaches a preset condition, the storage locations of some of the sublists stored in the memory can be flexibly selected, which makes it possible to reduce the capacity requirement of a high-performance network processor on high-bandwidth memory, thereby improving the processing performance of the network processor.
Therefore, with the method for processing a table of this embodiment of the present invention, in a case in which the running status of a memory reaches a preset condition, sublists stored in some storage units of the memory are migrated to other memories that have sufficient remaining bandwidth and storage space, which can reduce the capacity requirement of a high-performance network processor on memory and makes next-generation high-performance network processors easy to implement.
In the embodiments of the present invention, only the process in which the processor migrates the first sublist to the second memory is described as an example. The processor may migrate multiple sublists stored in multiple storage units to one memory, or may migrate multiple sublists to multiple memories, which is not limited in the embodiments of the present invention.
Optionally, as shown in Fig. 3, in step 210, the storing, by the processor in a case in which the running status of the first memory reaches a preset condition, the first sublist stored in the first storage unit into the second storage unit of the second memory includes:
211: The processor determines the bandwidth occupied by the first memory;
212: In a case in which the bandwidth occupied by the first memory is greater than or equal to a first preset value, the processor stores the first sublist into the second storage unit of the second memory.
The first preset value may serve as a criterion for judging whether the bandwidth of the memory is nearly exhausted.
The method by which the processor determines the bandwidth usage of a memory is not limited in the embodiments of the present invention. For example, the processor may poll the network processor to obtain the number of times the memory has been accessed, further determine the access frequency of the memory, and then determine the bandwidth usage of the memory according to the access frequency. The processor may also poll the network processor to obtain the bandwidth occupied by the memory. It should be noted that the method by which the network processor determines the bandwidth usage of the memory is also not limited in the embodiments of the present invention. For example, the network processor may determine the bandwidth usage of the memory in hardware, triggered by a judgment on the entry queue length.
Optionally, in step 212, the storing, by the processor, the first sublist into the second storage unit of the second memory includes:
The processor obtains the number of times each storage unit in the first memory has been accessed;
The processor determines the first storage unit from the first memory according to the number of times each storage unit has been accessed, where the access frequency of the first storage unit is higher than the access frequency of the other storage units in the first memory except the first storage unit;
The processor stores the first sublist in the first storage unit into the second storage unit of the second memory.
Specifically, the processor of the control plane may poll the network processor of the forwarding plane to obtain the number of times each storage unit in the memory has been accessed. For example, a statistics unit may be set in the network processor; as shown in Fig. 4, the statistics unit includes multiple counters, which are respectively associated with the storage units in the memory and record the number of times each storage unit has been accessed. The processor may determine the access frequency of each storage unit according to the access counts obtained within a certain period.
It should be noted that, in a case in which the bandwidth usage of a memory is greater than or equal to the first preset value, migrating some of the sublists stored in that memory to other memories is sufficient to reduce the bandwidth occupied by that memory. Therefore, the processor may not only migrate the sublists stored in the one or more most frequently accessed storage units of the memory to one or more other memories, but may also migrate the sublists stored in any one or more storage units of the memory to other memories, which is not limited in the embodiments of the present invention.
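The counter-based selection described above might look like the following sketch. The function names and the 80% default are illustrative assumptions; in the actual design the per-unit counters live in the network processor's statistics unit and are read by polling.

```python
def hottest_units(access_counts, top_n=1):
    """Rank storage units by their recorded access counts and return the
    top_n most frequently accessed ones (candidates for migration)."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    return ranked[:top_n]


def should_migrate(occupied_bw, total_bw, first_preset=0.8):
    """Step 212 trigger: bandwidth occupancy at or above the first preset value."""
    return occupied_bw / total_bw >= first_preset
```

A control-plane loop would poll the counters periodically, call `should_migrate` per memory, and feed `hottest_units` into the migration step when the trigger fires.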
In this embodiment of the present invention, during operation, the processor migrates sublists according to the actual bandwidth occupancy of each memory and the statistics on the access counts of the storage units, which can satisfy the demand of the network processor for memory bandwidth.
Optionally, in step 212, before the processor stores the first sublist into the second storage unit of the second memory, the method 200 further includes: the processor determines the second memory from multiple memories, where the remaining bandwidth of the second memory is higher than the remaining bandwidth of the other memories in the multiple memories except the second memory.
In this embodiment of the present invention, by migrating the most frequently accessed sublists to a memory with sufficient remaining bandwidth, the frequently accessed sublists can be stored in the memory with high bandwidth, which can greatly reduce the demand of the network processor for high-performance memory capacity.
In other words, by collecting statistics on the access frequency of the memories and sublists, sublists with high access frequency are placed in high-bandwidth memory and sublists with low access frequency are placed in low-bandwidth memory. Compared with selecting the storage location of an entire table according to the access performance of the entire table, this can effectively reduce the capacity requirement for high-bandwidth memory and makes next-generation high-performance network processors easy to implement.
For example, through continuous migration, selecting the storage locations of sublists based on per-sublist access frequency statistics allows, say, the on-chip memory to store only the table entries accessed at high speed, which can greatly reduce the demand of the network processor for on-chip memory capacity. Here, on-chip memory, also called on-chip storage, refers to memory integrated with the processor on the same chip. Correspondingly, memory that is not integrated with the processor on the same chip is called off-chip memory, also called external memory. On-chip memory has higher bandwidth than off-chip memory.
In a case in which the on-chip memory is used as a cache, in the initial state the table data is stored in off-chip memory and the on-chip memory serves as a backup of the off-chip memory. The processor preferentially accesses the on-chip memory, and accesses the off-chip memory only on a cache miss; continuously accessed data can be retained in the cache for a long time. The on-chip memory, that is, the cache, can be very small and, in a case in which the cache hit rate is high, can provide very high access performance. A cache is suitable for application scenarios in which data accesses have strong locality, but memory accesses during packet processing in a router have no locality characteristics: dozens of different tables are accessed during packet processing, the size of each table ranges from several megabits (Mb) to several hundred Mb, and in most cases the table entries hit when different packets access the same table also have no locality. These access characteristics of packet processing make the cache hit rate very low, so that the packet forwarding performance is low. Cache conflicts are possible both among high-speed table entries and between high-speed and low-speed table entries. The cache hit rate may be very high in one scenario but very low in another, because the cache performs memory mapping using a static hash mechanism. Therefore, a cache cannot guarantee forwarding service performance.
In the embodiments of the present invention, the access frequency of each sublist in memory is collected in real time, and sublists are migrated between different memories according to the actual access frequency, so that the high-performance memory (for example, on-chip memory) stores only the sublists accessed at high speed. This makes it possible to support the high-speed forwarding services of all scenarios using a smaller on-chip memory. Therefore, the solution of the embodiments of the present invention is superior to the cache mechanism.
For example, when the bandwidth of a memory is nearly exhausted, for example its occupancy exceeds 80%, the storage units with the highest access frequency in that memory (for example, the top several, tens or hundreds) are found first, and then, according to a certain policy, the sublists in some of those storage units are migrated to other memories that have sufficient remaining bandwidth. Sublists may be migrated between any memories, for example: from a first on-chip memory to a second on-chip memory, from on-chip memory to off-chip memory, from a first off-chip memory to a second off-chip memory, or from off-chip memory to on-chip memory. After the migration is complete, the mapping between storage units and sublists in the memory mapping table of the network processor is updated.
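Choosing the destination memory for such a migration — one whose remaining bandwidth exceeds the sublist's access bandwidth and whose free space exceeds the sublist's size, preferring the largest remaining bandwidth — could be sketched as follows. The dictionary fields are illustrative assumptions, not the actual register layout.

```python
def pick_destination(memories, sublist_bw, sublist_size):
    """Choose the memory with the highest remaining bandwidth among those that
    satisfy the two conditions stated in step 210: remaining bandwidth above
    the sublist's access bandwidth, and free space above the sublist's size."""
    candidates = [m for m in memories
                  if m["free_bw"] > sublist_bw and m["free_space"] > sublist_size]
    if not candidates:
        return None  # no memory can currently accept the sublist
    return max(candidates, key=lambda m: m["free_bw"])
```

Returning `None` models the case in which no migration is possible and the sublist stays where it is.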
In addition, a memory may also be associated with a counter; the processor may obtain the value of the counter to determine the access frequency of the memory, and thereby determine the bandwidth occupied by the memory.
Optionally, as shown in Fig. 5, in step 210, the storing, by the processor in a case in which the running status of the first memory reaches a preset condition, the first sublist stored in the first storage unit into the second storage unit of the second memory includes:
213: The processor obtains the remaining storage space of the first memory;
For example, the processor may poll the network processor of the forwarding plane to obtain the remaining storage space of the memory.
214: In a case in which the remaining storage space of the first memory is less than or equal to a second preset value, the processor stores the first sublist into the second storage unit of the second memory.
In this embodiment of the present invention, during operation, the processor migrates sublists according to the actual storage-space occupancy of each memory, which can satisfy the demand of the network processor for memory capacity.
Specifically, in step 214, the storing, by the processor, the first sublist into the second storage unit of the second memory includes:
The processor determines the bandwidth occupied by the first memory;
In a case in which the bandwidth occupied by the first memory is less than a third preset value, the processor obtains the number of times each storage unit in the first memory has been accessed;
The processor determines the first storage unit according to the number of times each storage unit has been accessed, where the access frequency of the first storage unit is lower than the access frequency of the other storage units in the first memory except the first storage unit;
The processor stores the first sublist in the first storage unit into the second storage unit of the second memory.
In the embodiments of the present invention, when the bandwidth occupancy of a memory is less than the preset value but the capacity of the memory is nearly exhausted, migrating the one or more sublists that occupy the least bandwidth on that memory to another memory that has both a large vacant capacity and sufficient remaining bandwidth can satisfy the capacity requirement of the network processor for high-bandwidth memory.
Optionally, in step 214, before the processor stores the first sublist into the second storage unit of the second memory, the method 200 may further include: the processor determines the second memory from multiple memories, where the remaining storage space of the second memory is higher than the remaining storage space of the other memories in the multiple memories except the second memory. In this embodiment of the present invention, migrating the sublist to the memory with the largest remaining storage space enables a more balanced distribution of storage resources.
Before step 210, the method 200 may further include: the processor allocates the first storage unit for the first sublist.
Specifically, the allocating, by the processor, the first storage unit for the first sublist includes:
The processor obtains a first write request, where the first write request is used to request a write operation on a first table entry of the first table;
The processor allocates the first storage unit for the first sublist according to the first write request, where the first sublist includes the first table entry.
It should be understood that, after allocating the first storage unit for the first sublist, the processor may write the first table entry into the first storage unit according to the first write request.
In the embodiments of the present invention, the memory for a table is no longer statically allocated in advance; both the table and the memory are divided into multiple blocks, that is, the table is divided into multiple sublists and the memory is divided into multiple storage units. Optionally, the capacity of each storage unit in the memory is the same and the size of each sublist in the table is the same, but this is not limited in the embodiments of the present invention. For example, the capacities of the storage units in the memory may also differ, and the sizes of the sublists in the table may also differ.
In the initial state, the processor does not allocate storage space for the entire table in advance. Only when a table entry is written does it dynamically allocate a storage unit from the memory on demand, taking a sublist as the unit, and record the mapping between the sublist and the storage unit. In this way, the overall demand of the network processor for memory can be greatly reduced.
Therefore, with the method for processing a table of this embodiment of the present invention, allocating memory for a table incrementally and on demand avoids the waste of memory space caused by reserving memory space for the table, so that the demand of the network processor for memory can be reduced.
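The on-demand allocation on a write request might be modeled as follows. This is a sketch under assumed names (`sublist_of`, `write_entry`, a dictionary standing in for the sublist-to-unit mapping table); the fixed entries-per-sublist value is an arbitrary example.

```python
ENTRIES_PER_SUBLIST = 1024  # illustrative fixed number of consecutive entries per sublist

def sublist_of(entry_index):
    """Each sublist covers a fixed number of consecutive table entries."""
    return entry_index // ENTRIES_PER_SUBLIST

def write_entry(mapping, free_units, entry_index, value, table_data):
    """Allocate a storage unit for the entry's sublist only on its first write,
    and record the sublist -> storage-unit mapping."""
    sid = sublist_of(entry_index)
    if sid not in mapping:               # no unit allocated for this sublist yet
        mapping[sid] = free_units.pop()  # incremental, on-demand allocation
    table_data[(mapping[sid], entry_index % ENTRIES_PER_SUBLIST)] = value
```

No space is consumed for sublists that are never written, which is the point of the incremental scheme: a mostly empty table occupies only the units of its populated sublists.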
Because the actual size of each table differs between scenarios — for example, in scenario 1 the Forwarding Information Base (FIB) table is very large and the Multi-Protocol Label Switching (MPLS) table is very small, while in scenario 2 the FIB table is very small and the MPLS table is very large — if the static pre-allocation scheme were used, then, in order for the same version to support both scenario 1 and scenario 2, sufficiently large space would have to be pre-allocated for both the FIB table and the MPLS table, while in fact, in either scenario, much of that memory space would be completely idle. Processing tables with the method of the embodiments of the present invention avoids this problem.
Optionally, the method 200 may further include:
The processor obtains a second write request, where the second write request is used to request a write operation on a second table entry of a second sublist;
If the processor determines that no storage unit has been allocated for the second sublist, the processor allocates a third storage unit for the second sublist according to the second write request.
The second sublist may be any sublist, other than the first sublist, among the multiple sublists of the first table, or may be any sublist among the multiple sublists of a second table. The third storage unit may be located in the first memory, in the second memory, or in a third memory other than the first memory and the second memory.
In the embodiments of the present invention, when a table entry is written, storage space can be dynamically allocated for the table on demand.
In the embodiments of the present invention, the processor may determine, according to the occupancy of the memories, in which memory a storage unit is allocated for a sublist. The processor may preferentially allocate units from a memory that has already stored some sublists; when that memory is full, another memory is allocated for the sublist that needs storage space. For example, when the processor determines that the first memory is not full, the processor allocates the third storage unit for the second sublist in the first memory; when the processor determines that the first memory is full and the second memory is not full, the processor allocates the third storage unit for the second sublist in the second memory; when the processor determines that both the first memory and the second memory are full, the processor allocates the third storage unit for the second sublist in the third memory; and so on, which is not described again.
The processor may also determine, according to the type of a sublist, in which memory a storage unit is allocated for the sublist. For example, the second sublist is any sublist among the multiple sublists of the second table, the second table is a table of a non-line-rate service, and correspondingly the second sublist belongs to a table of a non-line-rate service. When the first memory is used to store tables of line-rate services, and the second memory and the third memory are used to store tables of non-line-rate services, if the second memory has already stored some sublists, the processor may preferentially allocate the third storage unit for the second sublist in the second memory. If the processor determines that the second memory is full, the processor allocates the third storage unit for the second sublist in the third memory. Optionally, the bandwidth of the first memory is higher than the bandwidth of the second memory or of the third memory.
It should be noted that the bandwidth of a memory here refers to the total bandwidth that the memory itself can provide.
When service tables are initially stored, tables of line-rate services are preferentially stored in the memory with higher bandwidth; only when the memory with higher bandwidth is exhausted are tables of line-rate services stored in the memory with lower bandwidth. Tables of non-line-rate services are preferentially stored in the memory with lower bandwidth.
For example, in a case in which the first sublist corresponds to a line-rate service and the second sublist corresponds to a non-line-rate service, the first sublist is stored in on-chip memory and the second sublist is stored in off-chip memory.
Optionally, when the initial storage locations are selected, if any two tables may be accessed simultaneously in any one forwarding service, the two tables are stored in different memories.
The method for processing a table provided in the embodiments of the present invention can be used together with the common method of static memory pre-allocation. For example, some very small line-rate tables may still use the static pre-allocation scheme, and this static pre-allocation scheme may also be applied to the levels of an algorithm tree that occupy little memory, so as to reduce the number of accesses to the memory mapping table, thereby reducing the packet processing delay.
Therefore, with the method for processing a table of this embodiment of the present invention, in a case in which the running status of a memory reaches a preset condition, sublists stored in some storage units of the memory are migrated to other memories that have sufficient remaining bandwidth and storage space, which can reduce the capacity requirement of a high-performance network processor on high-bandwidth memory and makes next-generation high-performance network processors easy to implement.
Optionally, the first table may be stored in a hash manner, with the keywords (keys) of the table entries of the first table stored in the table entries of hash buckets. As shown in Fig. 6, the method 200 may further include the following steps.
240: In a case in which the number of keys stored in each table entry of a first hash bucket is less than a first threshold, the processor stores a key to be stored into the first hash bucket.
It should be noted that the key may be a compressed key or a complete key, which is not limited in the embodiments of the present invention.
250: In a case in which the number of keys stored in each table entry of the first hash bucket is greater than or equal to the first threshold, the processor stores the key to be stored into whichever of the first hash bucket and a second hash bucket has stored fewer keys, where the first hash bucket and the second hash bucket use different hash functions.
260: In a case in which the table entries of the first hash bucket and the second hash bucket are full, and the number of keys stored in each table entry of a third hash bucket is less than the first threshold, the processor stores the key to be stored into the third hash bucket, where the first hash bucket and the third hash bucket use the same hash function.
270: In a case in which the table entries of the first hash bucket and the second hash bucket are full, and the number of keys stored in each table entry of the third hash bucket is greater than or equal to the first threshold, the processor stores the key to be stored into whichever of the third hash bucket and a fourth hash bucket has stored fewer keys, where the first hash bucket and the third hash bucket use the same hash function, and the second hash bucket and the fourth hash bucket use the same hash function.
It should be understood that, before step 240, the specification of each of the two hash buckets of the prior art is reduced, and at least one extension bucket is added for each hash bucket.
It should also be understood that the content of each table entry in a hash bucket includes a keyword and a value, that is, a key-value pair. While storing the key to be stored into a hash bucket, the processor may store the value corresponding to that key into the hash bucket. For brevity, the embodiments of the present invention are described by taking only the key as an example, which is not intended to limit the scope of the embodiments of the present invention.
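Steps 240 to 270 amount to the following insertion policy, sketched here as a simplified Python model in which each bucket is a flat list with a total key capacity; the real buckets hold multiple hashed table entries, and the function name and capacity constants are illustrative assumptions.

```python
M = 6            # keys per table entry (the example value used later in the text)
THRESH = M // 2  # first threshold

def insert_key(b1, b2, b3, b4, key, capacity=M):
    """Steps 240-270: fill bucket 1 while below the threshold, then the lighter
    of buckets 1/2, then extension bucket 3, then the lighter of buckets 3/4."""
    if len(b1) < THRESH:                              # 240: bucket 1 only
        b1.append(key)
    elif len(b1) < capacity or len(b2) < capacity:    # 250: lighter of buckets 1/2
        (b1 if len(b1) <= len(b2) else b2).append(key)
    elif len(b3) < THRESH:                            # 260: extension bucket 3
        b3.append(key)
    else:                                             # 270: lighter of buckets 3/4
        (b3 if len(b3) <= len(b4) else b4).append(key)
```

The effect is that buckets 2, 3 and 4 consume no memory at all until the earlier stages are exhausted, which is what keeps the occupied memory small when few keys are stored.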
In the prior art, many service tables in a router are stored and matched in a hash manner, for example, the Media Access Control (MAC) table and the Address Resolution Protocol (ARP) table. To reduce hash conflicts, a scheme of two hash buckets, corresponding to different hash functions, is generally used, as shown in Fig. 7. However, because of the randomness of hashing, keys are randomly scattered among the hash buckets, so that even a small number of keys (for example, 1%) can force the driver to apply for all of the hash-bucket memory, which reduces the utilization of the hash-bucket memory. In the embodiments of the present invention, keys are stored into hash buckets using the strategy described in steps 240, 250, 260 and 270, which not only effectively guarantees the incremental memory-allocation manner of processing tables provided in the embodiments of the present invention and improves memory utilization, but also improves the utilization of the hash-bucket memory, effectively solving this problem.
Therefore, the method for processing a table of this embodiment of the present invention can greatly reduce the occupied memory when the number of keys is small, and reduce the number of hash-bucket accesses.
It should be understood that the first hash bucket and the third hash bucket correspond to one of the two hash buckets of the prior art, and the second hash bucket and the fourth hash bucket correspond to the other of the two hash buckets of the prior art.
Optionally, the first hash bucket and the third hash bucket have the same size, and the second hash bucket and the fourth hash bucket have the same size, but the present invention is not limited to this.
In the following example, the specification of each hash bucket shown in Fig. 7 is halved, and an extension bucket of the same size is added for each hash bucket, as shown in Figs. 8a, 8b, 8c and 8d: hash bucket 1 and hash bucket 3 correspond to hash bucket 1 in Fig. 7, and hash bucket 2 and hash bucket 4 correspond to hash bucket 2 in Fig. 7. Assume that each table entry in a hash bucket can store M keys (for example, M = 6), and the first threshold is M/2.
When the number of keys in each table entry of hash bucket 1 is less than M/2, a new key is stored in hash bucket 1. As shown in Fig. 8a, hash bucket 3, hash bucket 2 and hash bucket 4 are all empty at this time and occupy no memory; that is, at most 1/4 of the memory is occupied, and a table lookup needs to access a hash bucket only once.
When the number of keys in each table entry of hash bucket 1 is greater than or equal to M/2, a new key is stored in whichever table entry of hash bucket 1 or hash bucket 2 holds fewer keys. As shown in Fig. 8b, hash bucket 3 and hash bucket 4 are both empty at this time and occupy no memory; that is, at most 1/2 of the memory is occupied, and a table lookup needs at worst 2 hash-bucket accesses.
When all table entries of hash bucket 1 and hash bucket 2 are full, and the number of keys in the table entries of hash bucket 3 is less than M/2, a new key is stored in hash bucket 3. As shown in Fig. 8c, hash bucket 4 is empty at this time and occupies no memory; that is, at most 3/4 of the memory is occupied, and a table lookup needs at worst 3 hash-bucket accesses.
When all table entries of hash bucket 1 and hash bucket 2 are full, and the number of keys in the table entries of hash bucket 3 is greater than or equal to M/2, a new key is stored in whichever table entry of hash bucket 3 or hash bucket 4 holds fewer keys. As shown in Fig. 8d, hash bucket 1, hash bucket 3, hash bucket 2 and hash bucket 4 all occupy memory at this time, that is, the entire memory is occupied, and a table lookup needs at worst 4 hash-bucket accesses.
Therefore, in the embodiments of the present invention, by reducing the specification of each of the two hash buckets of the prior art, adding at least one extension bucket for each hash bucket, and storing keys according to the set strategy, the occupied memory can be greatly reduced when the number of keys is small, and the number of hash-bucket accesses is reduced.
Optionally, the first hash bucket, the second hash bucket, the third hash bucket and the fourth hash bucket may be associated with counters, where the counters are used to respectively record the number of times the first hash bucket, the second hash bucket, the third hash bucket and the fourth hash bucket have been accessed.
In the embodiments of the present invention, the access count of each hash bucket may also be collected, and then, through migration, the service tables accessed at high speed are placed into the hash buckets of the corresponding high-bandwidth memory (for example, on-chip memory).
The method 200 may further include:
In a case in which the ratio of the access count of the first hash bucket to the access count of the third hash bucket is less than or equal to a second threshold, the processor controls the network processor to match the third hash bucket first, and then the first hash bucket;
In a case in which the ratio of the access count of the second hash bucket to the access count of the fourth hash bucket is less than or equal to the second threshold, the processor controls the network processor to match the fourth hash bucket first, and then the second hash bucket;
In a case in which the ratio of the access count of the first hash bucket and the third hash bucket to the access count of the second hash bucket and the fourth hash bucket is less than or equal to the second threshold, the processor controls the network processor to match the second hash bucket and the fourth hash bucket first, and then the first hash bucket and the third hash bucket.
The processor may obtain and compare the access counts of the hash buckets by polling, and then control the order in which the network processor of the forwarding plane matches the hash buckets according to the ratios of the access counts of the hash buckets.
Specifically, the processor may configure the value of a register of the forwarding plane (for example, 0 or 1) according to the ratio of the access counts of the hash buckets; the network processor of the forwarding plane reads the value of the register and determines the matching order of the hash buckets according to the value of the register.
Wherein the value of register is preset, such as the value of register can be 0 or 1.Two Hash buckets it is interviewed
It asks that the ratio of number is less than or equal to second threshold, illustrates that the accessed number of two Hash buckets is close.
It should be noted that in the prior art, when data is looked up through a hash function, the first of two Hash buckets is matched first, and only if the match fails does the lookup continue to the second Hash bucket. By analogy, it can be understood that in the embodiment of the present invention, the first Hash bucket is matched first; if the match fails, the third Hash bucket (i.e., the extension bucket of the first Hash bucket) is matched; if the match still fails, the second Hash bucket is matched; and if the match again fails, the fourth Hash bucket (i.e., the extension bucket of the second Hash bucket) is matched. Therefore, if the accessed number of the first Hash bucket is close to the accessed number of the third Hash bucket, the match success rate of the first Hash bucket is very low. Similarly, if the accessed number of the first Hash bucket and the third Hash bucket is close to the accessed number of the second Hash bucket and the fourth Hash bucket, the match success rate of the first Hash bucket and the third Hash bucket is very low.
In the embodiment of the present invention, by controlling the order in which the network processing unit matches the Hash buckets according to their accessed numbers, the number of Hash bucket accesses can be reduced, which in turn reduces the network processing unit's demand for memory bandwidth.
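The ordering policy described above can be sketched as a small function. The function name, the counter representation and the concrete threshold value below are assumptions for illustration; the embodiment only specifies the ratio comparisons between a bucket and its extension bucket.

```python
def match_order(cnt1, cnt2, cnt3, cnt4, second_threshold=1.1):
    """Return the order in which the four Hash buckets are matched.

    Bucket 3 and bucket 4 are the extension buckets of bucket 1 and bucket 2.
    When a bucket's accessed number is close to that of its extension bucket
    (their ratio is at or below the threshold), most lookups miss the primary
    bucket, so the extension bucket is matched first.
    """
    pair13 = [1, 3]
    pair24 = [2, 4]
    if cnt3 and cnt1 / cnt3 <= second_threshold:
        pair13 = [3, 1]          # match the extension bucket of bucket 1 first
    if cnt4 and cnt2 / cnt4 <= second_threshold:
        pair24 = [4, 2]          # match the extension bucket of bucket 2 first
    if (cnt2 + cnt4) and (cnt1 + cnt3) / (cnt2 + cnt4) <= second_threshold:
        return pair24 + pair13   # match buckets 2 and 4 before buckets 1 and 3
    return pair13 + pair24
```

In the embodiment, the result of this comparison would be written into the forwarding-plane register rather than returned to a caller.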
Fig. 9 is a schematic flow chart of a method 900 of accessing a table according to an embodiment of the present invention. As shown in Fig. 9, method 900 includes the following.
910, the network processing unit obtains the base address of the memory block map information table corresponding to the first table, where the memory block map information table includes the correspondence between each sublist in the first table and a storage unit.
The memory block mapping base address table records the correspondence between tables and memory block map information tables.
920, the network processing unit accesses the memory block map information table according to the base address and the index of a first list item of the first table, and determines, according to the memory block map information table, the index of the storage unit corresponding to the first sublist in which the first list item resides, where the first sublist is any sublist in the first table.
930, the network processing unit determines the physical address of the first list item according to the index of the storage unit and the index of the first list item.
940, the network processing unit accesses the first list item according to the physical address of the first list item.
In the embodiment of the present invention, the network processing unit determines the physical address of a list item in the memory block map information table according to the index of the list item, and can then access the list item according to its physical address.
Optionally, in step 910, obtaining the base address of the memory block map information table corresponding to the first table includes: the network processing unit accesses the memory block mapping base address table according to the table identifier (Table Identifier, TID for short) of the first table, and determines the base address of the memory block map information table according to the memory block mapping base address table.
It should be understood that the network processing unit can also obtain the base address of the memory block map information table in other ways, and the embodiment of the present invention does not limit this.
Optionally, before step 910, method 900 further includes:
The network processing unit receives the correspondence between each sublist of the first table and a storage unit sent by the processor.
After receiving the correspondence, the network processing unit can write the correspondence into the memory mapping unit.
Optionally, method 900 further includes: the network processing unit updates the accessed number of the storage unit.
Of course, the network processing unit can also update the accessed number of the memory.
Optionally, in step 920, accessing the memory block map information table according to the base address and the index of the first list item includes:
The network processing unit accesses the memory block map information table at the address determined by formula (1);
base+⌊entry index/block size⌋ (1)
Here base is the base address of the memory block map information table, entry index is the index of the first list item, and block size is the number of list items the first sublist includes.
The symbol ⌊ ⌋ denotes rounding down, and ⌊entry index/block size⌋ is the index of the first sublist.
Optionally, step 940 includes: the network processing unit determines the physical address of the first list item according to formula (2);
real block index*block size+entry index%block size (2)
Here real block index is the index of the storage unit, block size is the number of list items the first sublist includes (the number of list items the first sublist includes is the same as the number of list items a storage unit stores), and entry index is the index of the first list item.
The symbol "%" denotes the remainder operation and the symbol "*" denotes multiplication; (entry index%block size) is the offset of the first list item within the first sublist.
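Formulas (1) and (2) can be written out as executable code. The function names below are invented for illustration; the arithmetic follows the formulas directly, assuming all quantities are non-negative integers.

```python
def map_info_address(base, entry_index, block_size):
    # Formula (1): the sublist index is entry_index rounded down after
    # division by block_size, and it offsets the table's base address.
    return base + entry_index // block_size

def entry_physical_address(real_block_index, entry_index, block_size):
    # Formula (2): the start of the storage unit plus the list item's
    # offset within its sublist.
    return real_block_index * block_size + entry_index % block_size
```

For example, with a block size of 4, list item 13 belongs to sublist 3, so formula (1) reads the map information entry at base+3; if that entry maps the sublist to storage unit 5, formula (2) places the list item at 5*4+13%4 = 21.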
The method of accessing a table according to an embodiment of the present invention is described below with reference to Fig. 10 and Fig. 11.
Fig. 10 shows a schematic diagram of a memory system architecture according to an embodiment of the present invention. When the network processing unit is to access a table, it first issues the table identifier of the table and the index of the list item to be accessed, i.e. (tid, entry_index), to the memory mapping unit; the memory mapping unit converts the list item index into the real memory address and then accesses the memory. Meanwhile, the memory mapping unit issues the access request to the statistic unit, and the statistic unit counts the accessed numbers of the corresponding memory and storage unit. To support high-performance processing, the memory mapping unit and the statistic unit can be replicated, for example in 8 copies, as shown in Fig. 10.
The memory mapping unit can implement the conversion from the index of any table to a physical memory address. As shown in Fig. 11, the memory block mapping base address table is first accessed according to the TID to obtain the base address of the memory block map information table; the memory block map information table is then accessed at the address obtained from formula (1), and the physical address of the storage unit corresponding to the sublist is obtained from the memory block map information table; finally, the physical address of the list item to be accessed is calculated according to formula (2).
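The Fig. 11 flow can be sketched end to end. The table contents below are invented sample data; only the sequence of lookups (TID, then base address, then storage unit, then physical address) follows the text.

```python
BLOCK_SIZE = 4
base_table = {7: 100}                      # TID -> base of the map information table
map_info_table = {100: 5, 101: 9, 102: 2}  # address -> storage-unit index

def lookup_physical_address(tid, entry_index):
    base = base_table[tid]                   # memory block mapping base address table
    addr = base + entry_index // BLOCK_SIZE  # formula (1)
    real_block_index = map_info_table[addr]  # storage unit holding the sublist
    # formula (2): unit start plus offset within the sublist
    return real_block_index * BLOCK_SIZE + entry_index % BLOCK_SIZE
```

With this sample data, list item 5 of table 7 falls in sublist 1, which the map information table places in storage unit 9, giving physical address 37.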
Therefore, the method for the processing table of the embodiment of the present invention, network processing unit is by the index according to list item in memory block
The physical storage address of list item is determined in map information table, can access list item according to the physical address of list item.
It should be understood that the sequence numbers of the above processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation process of the embodiments of the present invention.
The method of processing a table and the method of accessing a table according to the embodiments of the present invention have been described in detail above; the device for processing a table and the device for accessing a table according to the embodiments of the present invention are described in detail below.
Figure 12 is a schematic block diagram of a device 1200 for processing a table according to an embodiment of the present invention. As shown in Fig. 12, device 1200 includes a processing unit 1210 and a transmission unit 1220. It should be understood that in the embodiment of the present invention, the first table includes multiple sublists, the first memory includes multiple storage units, the multiple sublists include a first sublist, and the multiple storage units include a first storage unit.
Processing unit 1210 is used for: when the operating status of the first memory reaches a preset condition, storing the first sublist stored in the first storage unit into a second storage unit of the second memory, where the remaining bandwidth of the second memory is higher than the bandwidth occupied when the first sublist is accessed, and the remaining memory space of the second memory is greater than the memory space the first sublist occupies; and deleting the first sublist from the first storage unit.
Transmission unit 1220 is used for sending the correspondence between the first sublist and the second storage unit to the network processing unit, so that the network processing unit updates the saved correspondence between the first sublist and the first storage unit to the correspondence between the first sublist and the second storage unit.
Therefore, in the device for processing a table of the embodiment of the present invention, when the operating status of a memory reaches a preset condition, a sublist stored in a storage unit of that memory is migrated to another memory with sufficient remaining bandwidth and memory space; this can reduce the high-performance network processor's capacity requirement on high-bandwidth memory, and makes next-generation high-performance network processors easier to implement.
Optionally, processing unit 1210 is specifically used for:
determining the bandwidth occupied by the first memory;
when the bandwidth occupied by the first memory is greater than or equal to a first preset value, storing the first sublist into the second storage unit of the second memory.
Correspondingly, processing unit 1210 is specifically used for:
obtaining the accessed number of each storage unit in the first memory;
determining the first storage unit from the first memory according to the accessed number of each storage unit, where the accessed frequency of the first storage unit is higher than the accessed frequency of the other storage units in the first memory except the first storage unit;
determining the second memory from multiple memories, where the remaining bandwidth of the second memory is higher than the remaining bandwidth of the other memories in the multiple memories except the second memory;
storing the first sublist into the second storage unit of the second memory.
In the embodiment of the present invention, during operation the processor migrates sublists according to the actual bandwidth occupancy of each memory and the statistics of the accessed numbers of the storage units, which can satisfy the network processing unit's demand for memory bandwidth.
In addition, by migrating the most frequently accessed sublists onto a memory with sufficient remaining bandwidth, the frequently accessed sublists are stored on high-bandwidth memory, which can greatly reduce the network processing unit's demand for high-performance memory capacity.
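The selection step above can be sketched as follows. All names and the data shapes are invented for illustration; the embodiment only requires picking the most frequently accessed storage unit as the source and a memory whose remaining bandwidth and space exceed what the sublist needs as the target.

```python
def pick_migration(unit_access_counts, memories, sublist_bandwidth, sublist_size):
    # unit_access_counts: {unit_id: accessed number} for the first memory
    # memories: {name: (remaining_bandwidth, remaining_space)} candidate targets
    first_unit = max(unit_access_counts, key=unit_access_counts.get)
    candidates = {name: bw for name, (bw, space) in memories.items()
                  if bw > sublist_bandwidth and space > sublist_size}
    if not candidates:
        return None                    # no memory can absorb the sublist
    # prefer the candidate with the most remaining bandwidth
    second_memory = max(candidates, key=candidates.get)
    return first_unit, second_memory
```

In the capacity-driven variant described later, the same skeleton applies with the least-accessed unit as the source and the memory with the most remaining space as the target.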
Optionally, processing unit 1210 is specifically used for:
obtaining the remaining memory space of the first memory;
when the remaining memory space of the first memory is less than or equal to a second preset value, storing the first sublist into the second storage unit of the second memory.
In the embodiment of the present invention, during operation the processor migrates sublists according to the actual occupancy of the memory space of each memory, which can satisfy the network processing unit's demand for memory capacity.
Correspondingly, processing unit 1210 is specifically used for:
determining the bandwidth occupied by the first memory;
when the bandwidth occupied by the first memory is less than a third preset value, obtaining the accessed number of each storage unit in the first memory;
determining the first storage unit according to the accessed number of each storage unit, where the accessed frequency of the first storage unit is lower than the accessed frequency of the other storage units in the first memory except the first storage unit;
determining the second memory from multiple memories, where the remaining memory space of the second memory is higher than the remaining memory space of the other memories in the multiple memories except the second memory;
storing the first sublist into the second storage unit of the second memory.
In the embodiment of the present invention, when the bandwidth occupancy of a memory is below the third preset value but the capacity of the memory is nearly exhausted, the one or more sublists occupying the least bandwidth on that memory are migrated to another memory that has larger free capacity and at the same time satisfies the bandwidth requirement; this can satisfy the network processing unit's capacity requirement on high-bandwidth memory.
In addition, migrating the sublist into the memory with the most remaining memory space makes the distribution of storage resources more balanced.
Optionally, processing unit 1210 is also used for allocating the first storage unit for the first sublist before, when the operating status of the first memory reaches the preset condition, the first sublist stored in the first storage unit is stored into the second memory.
Optionally, processing unit 1210 is specifically used for: obtaining a first write request, where the first write request is used to request a write operation on a first list item of the first table; and allocating the first storage unit for the first sublist according to the first write request, where the first sublist includes the first list item.
Optionally, processing unit 1210 is also used for recording the correspondence between the first sublist and the first storage unit.
Optionally, processing unit 1210 is also used for:
obtaining a second write request, where the second write request is used to request a write operation on a second list item of a second sublist, and the multiple sublists further include the second sublist;
if it is determined that the second sublist has not been allocated a storage unit, allocating a third storage unit for the second sublist according to the second write request.
Optionally, the first table is stored in a Hash manner, and the keywords key of the list items of the first table are stored in the list items of Hash buckets.
Correspondingly, processing unit 1210 is also used for:
when the quantity of keywords key stored in each list item of the first Hash bucket is less than a first threshold, storing the key to be stored into the first Hash bucket;
when the quantity of keys stored in each list item of the first Hash bucket is greater than or equal to the first threshold, storing the key to be stored into whichever of the first Hash bucket and the second Hash bucket has stored fewer keys, where the first Hash bucket and the second Hash bucket use different hash functions;
when the list items of the first Hash bucket and the second Hash bucket are full, and the quantity of keys stored in each list item of the third Hash bucket is less than the first threshold, storing the key to be stored into the third Hash bucket, where the first Hash bucket and the third Hash bucket use the same hash function;
when the list items of the first Hash bucket and the second Hash bucket are full, and the quantity of keys stored in each list item of the third Hash bucket is greater than or equal to the first threshold, storing the key to be stored into whichever of the third Hash bucket and the fourth Hash bucket has stored fewer keys, where the first Hash bucket and the third Hash bucket use the same hash function, and the second Hash bucket and the fourth Hash bucket use the same hash function.
In the embodiment of the present invention, when the quantity of keys is small, the occupied memory can be reduced, and the number of Hash bucket accesses can be reduced.
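The four cases above can be sketched as one insertion routine. The bucket layout, the entry capacity and the helper names are assumptions for illustration: a bucket is modeled as a list of list items, and each list item holds up to ENTRY_CAPACITY keys.

```python
ENTRY_CAPACITY = 4  # assumed per-list-item key capacity

def insert_key(key, b1, b2, b3, b4, first_threshold):
    def lightly_loaded(bucket):        # every list item below the threshold
        return all(len(item) < first_threshold for item in bucket)
    def full(bucket):                  # every list item at capacity
        return all(len(item) >= ENTRY_CAPACITY for item in bucket)
    def fewer_keys(x, y):              # the bucket with fewer stored keys
        return x if sum(map(len, x)) <= sum(map(len, y)) else y
    def store(bucket):                 # place the key in the emptiest item
        min(bucket, key=len).append(key)

    if lightly_loaded(b1):
        store(b1)                      # case 1: bucket 1 alone suffices
    elif not (full(b1) and full(b2)):
        store(fewer_keys(b1, b2))      # case 2: balance buckets 1 and 2
    elif lightly_loaded(b3):
        store(b3)                      # case 3: spill into extension bucket 3
    else:
        store(fewer_keys(b3, b4))      # case 4: balance extensions 3 and 4
```

Buckets 2 and 4 are only touched once their partners pass the threshold, which is why few keys means less occupied memory and fewer bucket accesses.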
Optionally, as shown in figure 13, device 1200 further includes control unit 1230.
Control unit 1230 is used for:
when the ratio of the accessed number of the first Hash bucket to the accessed number of the third Hash bucket is less than or equal to the second threshold, controlling the network processing unit to match the third Hash bucket first and then the first Hash bucket;
when the ratio of the accessed number of the second Hash bucket to the accessed number of the fourth Hash bucket is less than or equal to the second threshold, controlling the network processing unit to match the fourth Hash bucket first and then the second Hash bucket;
when the ratio of the accessed number of the first Hash bucket and the third Hash bucket to the accessed number of the second Hash bucket and the fourth Hash bucket is less than or equal to the second threshold, controlling the network processing unit to match the second Hash bucket and the fourth Hash bucket first, and then the first Hash bucket and the third Hash bucket.
In the embodiment of the present invention, by controlling the order in which the network processing unit matches the Hash buckets according to their accessed numbers, the number of Hash bucket accesses can be reduced, which in turn reduces the network processing unit's demand for memory bandwidth.
Optionally, the sizes of the first Hash bucket and the third Hash bucket can be the same, and the sizes of the second Hash bucket and the fourth Hash bucket can be the same.
It should be understood that the device 1200 for processing a table according to the embodiment of the present invention can correspond to the processor in the method 200 of processing a table according to the embodiment of the present invention, and the above and other operations and/or functions of the units of device 1200 implement the corresponding processes of method 200; for brevity, details are not described here.
Therefore, in the device for processing a table of the embodiment of the present invention, when the operating status of a memory reaches a preset condition, a sublist stored in a storage unit of that memory is migrated to another memory with sufficient remaining bandwidth and memory space; this can reduce the high-performance network processor's capacity requirement on high-bandwidth memory, and makes next-generation high-performance network processors easier to implement.
Figure 14 is a schematic block diagram of a device 1400 for accessing a table according to an embodiment of the present invention. As shown in Fig. 14, device 1400 includes a processing unit 1410 and an access unit 1420.
Processing unit 1410 is used for: obtaining the base address of the memory block map information table corresponding to the first table, where the memory block map information table includes the correspondence between each sublist in the first table and a storage unit; accessing the memory block map information table according to the base address and the index of a first list item of the first table, and determining, according to the memory block map information table, the index of the storage unit corresponding to the first sublist in which the first list item resides, where the first sublist is any sublist in the first table; and determining the physical address of the first list item according to the index of the storage unit and the index of the first list item.
Access unit 1420 is used for accessing the first list item according to the physical address of the first list item.
In the embodiment of the present invention, the network processing unit determines the physical address of a list item in the memory block map information table according to the index of the list item, and can then access the list item according to its physical address.
Optionally, processing unit 1410 is specifically used for accessing the memory block mapping base address table according to the table identifier TID of the first table, and determining the base address of the memory block map information table according to the memory block mapping base address table.
Optionally, as shown in Fig. 15, device 1400 can also include a receiving unit 1430, used for receiving, before processing unit 1410 obtains the base address of the memory block map information table corresponding to the first table, the correspondence between each sublist in the first table and a storage unit sent by the processor.
Optionally, processing unit 1410 is also used for updating the accessed number of the storage unit.
Optionally, processing unit 1410 is specifically used for: accessing the memory block map information table at the address determined by formula (1),
base+⌊entry index/block size⌋ (1)
where base is the base address, entry index is the index of the first list item, and block size is the number of list items the first sublist includes.
Optionally, processing unit 1410 is specifically used for: determining the physical address of the first list item according to formula (2),
real block index*block size+entry index%block size (2)
where real block index is the index of the storage unit, block size is the number of list items the first sublist includes (the same as the number of list items a storage unit stores), and entry index is the index of the first list item.
It should be understood that the device 1400 for accessing a table according to the embodiment of the present invention can correspond to the network processing unit in the method 900 of accessing a table according to the embodiment of the present invention, and the above and other operations and/or functions of the units of device 1400 implement the corresponding processes of method 900; for brevity, details are not described here.
In the embodiment of the present invention, the network processing unit determines the physical address of a list item in the memory block map information table according to the index of the list item, and can then access the list item according to its physical address.
The embodiment of the present invention also provides a device 1600 for processing a table. As shown in Fig. 16, the device 1600 includes a processor 1610, a memory 1620, a bus system 1630 and a transmitter 1640. The processor 1610, the memory 1620 and the transmitter 1640 are connected through the bus system 1630; the memory 1620 is used for storing instructions, and the processor 1610 is used for executing the instructions stored in the memory 1620.
The memory 1620 is also used for storing sublists. The memory 1620 includes a first memory and a second memory. The first memory and the second memory each include multiple storage units, and one storage unit is used for storing one sublist.
Processor 1610 is used for: when the operating status of the first memory reaches a preset condition, storing the first sublist stored in the first storage unit of the first memory into a second storage unit of the second memory, where the remaining bandwidth of the second memory is higher than the bandwidth occupied when the first sublist is accessed, and the remaining memory space of the second memory is greater than the memory space the first sublist occupies; and deleting the first sublist from the first storage unit.
Transmitter 1640 is used for sending the correspondence between the first sublist and the second storage unit to the network processing unit, so that the network processing unit updates the saved correspondence between the first sublist and the first storage unit to the correspondence between the first sublist and the second storage unit.
Therefore, in the device for processing a table of the embodiment of the present invention, when the operating status of a memory reaches a preset condition, a sublist stored in a storage unit of that memory is migrated to another memory with sufficient remaining bandwidth and memory space; this can reduce the high-performance network processor's capacity requirement on high-bandwidth memory, and makes next-generation high-performance network processors easier to implement.
It should be understood that in the embodiment of the present invention, the processor 1610 can be a central processing unit (Central Processing Unit, CPU for short), and can also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP for short), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), a field-programmable gate array (Field-Programmable Gate Array, FPGA for short) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor can be a microprocessor, or the processor can be any conventional processor.
The memory 1620 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1610. A part of the memory 1620 may also include a non-volatile random access memory. For example, the memory 1620 may also store information about the device type.
The bus system 1630 may include, in addition to a data bus, a power bus, a control bus, a status signal bus, etc. For clarity of description, however, the various buses are all labeled as the bus system 1630 in the figure.
During implementation, the steps of the above method can be completed by integrated logic circuits of hardware in the processor 1610 or by instructions in the form of software. The steps of the method disclosed in connection with the embodiments of the present invention can be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor. The software module can be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1620; the processor 1610 reads the information in the memory 1620 and completes the steps of the above method in combination with its hardware. To avoid repetition, details are not described here.
Optionally, processor 1610 is specifically used for:
determining the bandwidth occupied by the first memory;
when the bandwidth occupied by the first memory is greater than or equal to the first preset value, storing the first sublist into the second memory.
Correspondingly, processor 1610 is specifically used for:
obtaining the accessed number of each storage unit in the first memory;
determining the first storage unit from the first memory according to the accessed number of each storage unit, where the accessed frequency of the first storage unit is higher than the accessed frequency of the other storage units in the first memory except the first storage unit;
determining the second memory from multiple memories, where the remaining bandwidth of the second memory is higher than the remaining bandwidth of the other memories in the multiple memories except the second memory;
storing the first sublist into the second storage unit of the second memory.
In the embodiment of the present invention, during operation the processor migrates sublists according to the actual bandwidth occupancy of each memory and the statistics of the accessed numbers of the storage units, which can satisfy the network processing unit's demand for memory bandwidth.
In addition, by migrating the most frequently accessed sublists onto a memory with sufficient remaining bandwidth, the frequently accessed sublists are stored on high-bandwidth memory, which can greatly reduce the network processing unit's demand for high-performance memory capacity.
Optionally, processor 1610 can also be specifically used for:
obtaining the remaining memory space of the first memory;
when the remaining memory space of the first memory is less than or equal to the second preset value, storing the first sublist into the second memory.
In the embodiment of the present invention, during operation the processor migrates sublists according to the actual occupancy of the memory space of each memory, which can satisfy the network processing unit's demand for memory capacity.
Correspondingly, processor 1610 is specifically used for:
determining the bandwidth occupied by the first memory;
when the bandwidth occupied by the first memory is less than the third preset value, obtaining the accessed number of each storage unit in the first memory;
determining the first storage unit according to the accessed number of each storage unit, where the accessed frequency of the first storage unit is lower than the accessed frequency of the other storage units in the first memory except the first storage unit;
determining the second memory from multiple memories, where the remaining memory space of the second memory is higher than the remaining memory space of the other memories in the multiple memories except the second memory;
storing the first sublist into the second storage unit of the second memory.
In the embodiment of the present invention, when the bandwidth occupancy of a memory is below the third preset value but the capacity of the memory is nearly exhausted, the one or more sublists occupying the least bandwidth on that memory are migrated to another memory that has larger free capacity and at the same time satisfies the bandwidth requirement; this can satisfy the network processing unit's capacity requirement on high-bandwidth memory.
In addition, migrating the sublist into the memory with the most remaining memory space makes the distribution of storage resources more balanced.
Optionally, processor 1610 is also used for allocating the first storage unit for the first sublist before, when the operating status of the first memory reaches the preset condition, the first sublist stored in the first storage unit of the first memory is stored into the second storage unit of the second memory.
Correspondingly, processor 1610 is specifically used for: obtaining a first write request, where the first write request is used to request a write operation on a first list item of the first table; and allocating the first storage unit for the first sublist according to the first write request, where the first sublist includes the first list item.
Optionally, the memory 1620 is also used for storing the correspondence between the first sublist and the first storage unit.
Optionally, processor 1610 is also used for:
obtaining a second write request, where the second write request is used to request a write operation on a second list item of a second sublist, and the multiple sublists further include the second sublist;
if it is determined that the second sublist has not been allocated a storage unit, allocating a third storage unit for the second sublist according to the second write request.
Optionally, the first table is stored in a Hash manner, and the keywords key of the list items of the first table are stored in the list items of Hash buckets.
Correspondingly, processing unit 1610 is also used to:
In the case that the quantity of stored keyword key is less than first threshold in each list item of the first Hash bucket,
Key to be stored is stored to the first Hash bucket;
In the case that the quantity of stored key is greater than or equal to first threshold in each list item of the first Hash bucket,
Key to be stored is stored into the first Hash bucket and the second Hash bucket in the Hash bucket of the negligible amounts of stored key, the
One Hash bucket and the second Hash bucket use different hash functions;
Expire in the list item of the first Hash bucket and the second Hash bucket, and stored in each list item of third Hash bucket
In the case that the quantity of key is less than first threshold, key to be stored is stored into third Hash bucket, the first Hash bucket and the
Three Hash buckets use identical hash function;
Expire in the list item of the first Hash bucket and the second Hash bucket, and the number of the key in each list item of third Hash bucket
In the case that amount is greater than or equal to first threshold, key to be stored is stored into third Hash bucket and the 4th Hash bucket and has been deposited
In the Hash bucket of the negligible amounts of the key of storage, the first Hash bucket and third Hash bucket use identical hash function, the second Hash
Bucket and the 4th Hash bucket use identical hash function.
In the embodiment of the present invention, when the negligible amounts of key, the memory of occupancy can be greatly reduced, and reduce Hash bucket
The number of access.
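The four-case bucket-selection policy above can be sketched as follows. This is a minimal illustration, not the patented implementation: the buckets are modeled as plain lists, and the values of `THRESHOLD` (the first threshold) and `CAPACITY` (a full bucket) are assumed for the example.

```python
# Minimal sketch of the four-case bucket-selection policy described above.
# THRESHOLD (the "first threshold") and CAPACITY (list items per bucket)
# are assumed example values, not taken from the embodiment.

THRESHOLD = 2
CAPACITY = 4

def store_key(key, b1, b2, b3, b4):
    """Append `key` to one of the four candidate buckets (plain lists)."""
    if len(b1) < THRESHOLD:
        b1.append(key)                                   # case 1: bucket 1 below threshold
    elif len(b1) < CAPACITY or len(b2) < CAPACITY:
        (b1 if len(b1) <= len(b2) else b2).append(key)   # case 2: less-full of buckets 1/2
    elif len(b3) < THRESHOLD:
        b3.append(key)                                   # case 3: spill into bucket 3
    else:
        (b3 if len(b3) <= len(b4) else b4).append(key)   # case 4: less-full of buckets 3/4
```

With few keys, only the first bucket is ever touched, which is what keeps both the occupied memory and the number of bucket accesses low.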
Optionally, processor 1610 is further configured to:
when the ratio of the number of times the first hash bucket is accessed to the number of times the third hash bucket is accessed is less than or equal to a second threshold, control the network processor to match the third hash bucket first and then the first hash bucket;
when the ratio of the number of times the second hash bucket is accessed to the number of times the fourth hash bucket is accessed is less than or equal to the second threshold, control the network processor to match the fourth hash bucket first and then the second hash bucket; and
when the ratio of the number of times the first hash bucket and the third hash bucket are accessed to the number of times the second hash bucket and the fourth hash bucket are accessed is less than or equal to the second threshold, control the network processor to match the second hash bucket and the fourth hash bucket first, and then the first hash bucket and the third hash bucket.
In this embodiment of the present invention, by controlling the order in which the network processor matches the hash buckets according to how often each bucket is accessed, the number of hash bucket accesses can be reduced, which in turn reduces the network processor's demand for memory bandwidth.
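The ratio test for one bucket pair can be sketched as below; the function name and return convention are illustrative assumptions, and the same test applies unchanged to the other two cases.

```python
def probe_order(count_first, count_partner, second_threshold):
    """Return the probe order for a hash-bucket pair.

    When the first bucket is accessed much less often than its partner
    (ratio <= second threshold), probing the partner first resolves most
    lookups within a single bucket access."""
    if count_partner > 0 and count_first / count_partner <= second_threshold:
        return ("partner", "first")
    return ("first", "partner")
```

For example, with a second threshold of 0.5, a bucket hit 10 times against a partner hit 100 times yields the order ("partner", "first"), so the hot bucket is matched first.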
Optionally, the first hash bucket and the third hash bucket have the same size, and the second hash bucket and the fourth hash bucket have the same size.
It should be understood that the apparatus 1600 for processing a table according to this embodiment of the present invention may correspond to the processor in the method 200 for processing a table and to the apparatus 1200 for processing a table according to the embodiments of the present invention, and the foregoing and other operations and/or functions of the units in the apparatus 1600 are intended to implement the corresponding procedures of the method 200. For brevity, details are not repeated here.
Therefore, when the operating status of a memory reaches a preset condition, the apparatus for processing a table in this embodiment of the present invention moves some of the sublists stored in the storage units of that memory into another memory with sufficient remaining bandwidth and storage space. This reduces the capacity requirement that a high-performance network processor places on high-bandwidth memory, making a next-generation high-performance network processor easier to implement.
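The migration flow summarized above can be sketched as follows. The memory, mapping, and threshold structures are illustrative assumptions, not the actual device: storage units are list slots, and the "preset condition" is modeled as a free-space floor plus a bandwidth ceiling.

```python
# Hypothetical sketch of the sublist-migration flow. Data structures and
# threshold names are assumptions made for illustration only.

def pick_victim(access_counts):
    """Index of the least-accessed storage unit in the first memory."""
    return min(range(len(access_counts)), key=access_counts.__getitem__)

def migrate(first_units, access_counts, second_units, mapping,
            free_space, space_floor, used_bw, bw_ceiling):
    """Move one sublist out of the first memory when it is under pressure.

    Returns the (unit index, sublist) pair that was moved, or None when
    the preset condition is not reached."""
    if free_space > space_floor or used_bw >= bw_ceiling:
        return None                       # preset condition not reached
    src = pick_victim(access_counts)      # coldest storage unit
    sublist = first_units[src]
    dst = second_units.index(None)        # any free second-memory unit
    second_units[dst] = sublist           # store into the second memory
    first_units[src] = None               # delete from the first memory
    mapping[sublist] = ("mem2", dst)      # new correspondence for the NP
    return src, sublist
```

The returned correspondence is what would be sent to the network processor so that subsequent accesses to the moved sublist are redirected to the second memory.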
Figure 17 is a schematic block diagram of an apparatus 1700 for accessing a table according to an embodiment of the present invention. As shown in Figure 17, the apparatus 1700 includes a processor 1710, a memory 1720, and a bus system 1730. The processor 1710 and the memory 1720 are connected through the bus system 1730; the memory 1720 is configured to store instructions, and the processor 1710 is configured to execute the instructions stored in the memory 1720.
The memory 1720 may also be configured to store sublists. The memory 1720 includes multiple storage units, each of which is used to store one sublist.
The processor 1710 is specifically configured to:
obtain the base address of the memory block map information table corresponding to the first table, where the memory block map information table includes the correspondence between each sublist of the first table and a storage unit;
access the memory block map information table according to the base address and the index of a first list item of the first table, and determine, according to the memory block map information table, the index of the storage unit corresponding to the first sublist in which the first list item is located, where the first sublist is any sublist of the first table;
determine the physical address of the first list item according to the index of the storage unit and the index of the first list item; and
access the first list item according to its physical address.
In this embodiment of the present invention, the network processor determines the physical address of a list item from the memory block map information table according to the index of the list item, and can then access the list item at that physical address.
It should be understood that, in this embodiment of the present invention, the processor 1710 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 1720 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1710. A part of the memory 1720 may also include a non-volatile random access memory. For example, the memory 1720 may also store information about the device type.
In addition to a data bus, the bus system 1730 may further include a power bus, a control bus, a status signal bus, and the like. For clarity of description, however, the various buses are all labeled as the bus system 1730 in the figure.
During implementation, the steps of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 1710 or by instructions in the form of software. The steps of the method disclosed with reference to the embodiments of the present invention may be directly performed by a hardware processor, or performed by a combination of hardware and software modules in the processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1720; the processor 1710 reads the information in the memory 1720 and completes the steps of the foregoing method in combination with its hardware. To avoid repetition, details are not described here.
Optionally, the processor 1710 is further configured to update the number of times a storage unit has been accessed.
Optionally, the apparatus 1700 may further include a receiver 1740, connected to the processor 1710 and the memory 1720 through the bus system 1730. The receiver 1740 is configured to receive, before the processor 1710 obtains the base address from the memory block mapping base-address table, the correspondence between each sublist of the first table and a storage unit sent by a control-plane processor.
Optionally, the processor 1710 is specifically configured to access the memory block map information table at the address determined by formula (1):
base+entry index/block size (1)
where base is the base address, entry index is the index of the first list item, and block size is the quantity of list items included in the first sublist.
Optionally, the processor 1710 is specifically configured to determine the physical address of the first list item according to formula (2):
real block index*block size+entry index%block size (2)
where real block index is the index of the storage unit, block size is the quantity of list items included in the first sublist, the quantity of list items included in the first sublist is the same as the quantity of list items stored in a storage unit, and entry index is the index of the first list item.
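The two formulas amount to a two-level lookup: the first locates the map-table entry for the sublist (block) containing the list item, and the second combines the remapped block index with the in-block offset. A sketch, assuming the division in the first formula is an integer division:

```python
def map_entry_address(base, entry_index, block_size):
    """Formula (1): address of the map-table entry for the sublist
    (block) that contains the list item; `//` is integer division."""
    return base + entry_index // block_size

def physical_address(real_block_index, entry_index, block_size):
    """Formula (2): start of the remapped block plus the list item's
    offset within its block."""
    return real_block_index * block_size + entry_index % block_size
```

For example, with a block size of 4, list item 10 lives in block 2 at offset 2; if the map table says that block now resides in storage unit 7, the list item sits at physical address 7*4+2 = 30.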
It should be understood that the apparatus 1700 for accessing a table according to this embodiment of the present invention may correspond to the network processor in the method 900 for accessing a table and to the apparatus 1400 for accessing a table according to the embodiments of the present invention, and the foregoing and other operations and/or functions of the units in the apparatus 1700 are intended to implement the corresponding procedures of the method 900. For brevity, details are not described here.
In this embodiment of the present invention, the network processor determines the physical address of a list item from the memory block map information table according to the index of the list item, and can then access the list item at that physical address.
A person of ordinary skill in the art may be aware that the units and algorithm steps described with reference to the examples disclosed in the embodiments of this specification can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
It may be clearly understood by a person skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely exemplary. The division into units is merely a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (22)
1. A method for processing a table, wherein a first table comprises multiple sublists, a first memory comprises multiple storage units, the multiple sublists comprise a first sublist, and the multiple storage units comprise a first storage unit, the method comprising:
obtaining, by a processor, the remaining storage space of the first memory; and determining, by the processor, the bandwidth occupied by the first memory when the remaining storage space of the first memory is less than or equal to a second preset value;
obtaining, by the processor, the number of times each storage unit in the first memory has been accessed when the bandwidth occupied by the first memory is less than a third preset value;
determining, by the processor, the first storage unit according to the number of times each storage unit has been accessed, wherein the access frequency of the first storage unit is lower than the access frequency of the other storage units in the first memory except the first storage unit;
storing, by the processor, the first sublist stored in the first storage unit into a second storage unit of a second memory, wherein the remaining bandwidth of the second memory is higher than the bandwidth occupied when the first sublist is accessed, and the remaining storage space of the second memory is greater than the storage space occupied by the first sublist;
deleting, by the processor, the first sublist from the first storage unit; and
sending, by the processor, the correspondence between the first sublist and the second storage unit to a network processor, so that the network processor updates the saved correspondence between the first sublist and the first storage unit to the correspondence between the first sublist and the second storage unit.
2. The method according to claim 1, wherein:
the processor determines the second memory from multiple memories, and the remaining storage space of the second memory is higher than the remaining storage space of the other memories in the multiple memories except the second memory.
3. The method according to claim 1 or 2, wherein before the processor stores the first sublist stored in the first storage unit into the second storage unit of the second memory, the method further comprises:
obtaining, by the processor, a first write request, wherein the first write request is used to request a write operation on a first list item of the first table; and
allocating, by the processor, the first storage unit to the first sublist according to the first write request, wherein the first sublist comprises the first list item.
4. The method according to claim 3, further comprising:
obtaining, by the processor, a second write request, wherein the second write request is used to request a write operation on a second list item of a second sublist, and the multiple sublists further comprise the second sublist; and
if the processor determines that the second sublist has no allocated storage unit, allocating, by the processor, a third storage unit to the second sublist according to the second write request.
5. The method according to claim 1 or 2, wherein the first table is stored in a hash manner, and the keyword (key) of each list item of the first table is stored in a list item of a hash bucket, the method further comprising:
storing, by the processor, a to-be-stored key into a first hash bucket when the quantity of keys stored in each list item of the first hash bucket is less than a first threshold;
storing, by the processor, the to-be-stored key into whichever of the first hash bucket and a second hash bucket stores fewer keys when the quantity of keys stored in each list item of the first hash bucket is greater than or equal to the first threshold, wherein the first hash bucket and the second hash bucket use different hash functions;
storing, by the processor, the to-be-stored key into a third hash bucket when the list items of the first hash bucket and the second hash bucket are full and the quantity of keys stored in each list item of the third hash bucket is less than the first threshold, wherein the first hash bucket and the third hash bucket use the same hash function; and
storing, by the processor, the to-be-stored key into whichever of the third hash bucket and a fourth hash bucket stores fewer keys when the list items of the first hash bucket and the second hash bucket are full and the quantity of keys stored in each list item of the third hash bucket is greater than or equal to the first threshold, wherein the first hash bucket and the third hash bucket use the same hash function, and the second hash bucket and the fourth hash bucket use the same hash function.
6. The method according to claim 5, further comprising:
controlling, by the processor, a network processor to match the third hash bucket first and then the first hash bucket when the ratio of the number of times the first hash bucket is accessed to the number of times the third hash bucket is accessed is less than or equal to a second threshold;
controlling, by the processor, the network processor to match the fourth hash bucket first and then the second hash bucket when the ratio of the number of times the second hash bucket is accessed to the number of times the fourth hash bucket is accessed is less than or equal to the second threshold; and
controlling, by the processor, the network processor to match the second hash bucket and the fourth hash bucket first, and then the first hash bucket and the third hash bucket, when the ratio of the number of times the first hash bucket and the third hash bucket are accessed to the number of times the second hash bucket and the fourth hash bucket are accessed is less than or equal to the second threshold.
7. A method for accessing a table, comprising:
obtaining, by a network processor, the base address of a memory block map information table corresponding to a first table, wherein the memory block map information table comprises the correspondence between each sublist of the first table and a storage unit;
accessing, by the network processor, the memory block map information table according to the base address and the index of a first list item of the first table, and determining, according to the memory block map information table, the index of the storage unit corresponding to the first sublist in which the first list item is located, wherein the first sublist is any sublist of the first table;
determining, by the network processor, the physical address of the first list item according to the index of the storage unit and the index of the first list item; and
accessing, by the network processor, the first list item according to the physical address of the first list item.
8. The method according to claim 7, wherein obtaining, by the network processor, the base address of the memory block map information table corresponding to the first table comprises:
accessing, by the network processor, a memory block mapping base-address table according to the table identifier (TID) of the first table, and determining the base address of the memory block map information table according to the memory block mapping base-address table.
9. The method according to claim 7 or 8, further comprising:
updating, by the network processor, the number of times the storage unit has been accessed.
10. The method according to claim 7 or 8, wherein accessing, by the network processor, the memory block map information table according to the base address and the index of the first list item comprises:
accessing, by the network processor, the memory block map information table at the address determined according to the following formula:
base+entry index/block size
wherein base is the base address, entry index is the index of the first list item, and block size is the quantity of list items included in the first sublist.
11. The method according to claim 7 or 8, wherein determining, by the network processor, the physical address of the first list item according to the index of the storage unit and the index of the first list item comprises:
determining, by the network processor, the physical address of the first list item according to the following formula:
real block index*block size+entry index%block size
wherein real block index is the index of the storage unit, block size is the quantity of list items included in the first sublist, the quantity of list items included in the first sublist is the same as the quantity of list items stored in the storage unit, and entry index is the index of the first list item.
12. An apparatus for processing a table, wherein a first table comprises multiple sublists, a first memory comprises multiple storage units, the multiple sublists comprise a first sublist, and the multiple storage units comprise a first storage unit, the apparatus comprising:
a processing unit, configured to: obtain the remaining storage space of the first memory; determine the bandwidth occupied by the first memory when the remaining storage space of the first memory is less than or equal to a second preset value; obtain the number of times each storage unit in the first memory has been accessed when the bandwidth occupied by the first memory is less than a third preset value; determine the first storage unit according to the number of times each storage unit has been accessed, wherein the access frequency of the first storage unit is lower than the access frequency of the other storage units in the first memory except the first storage unit; store the first sublist stored in the first storage unit into a second storage unit of a second memory, wherein the remaining bandwidth of the second memory is higher than the bandwidth occupied when the first sublist is accessed, and the remaining storage space of the second memory is greater than the storage space occupied by the first sublist; and delete the first sublist from the first storage unit; and
a sending unit, configured to send the correspondence between the first sublist and the second storage unit to a network processor, so that the network processor updates the saved correspondence between the first sublist and the first storage unit to the correspondence between the first sublist and the second storage unit.
13. The apparatus according to claim 12, wherein the processing unit is specifically configured to:
determine the second memory from multiple memories, wherein the remaining storage space of the second memory is higher than the remaining storage space of the other memories in the multiple memories except the second memory.
14. The apparatus according to claim 12 or 13, wherein the processing unit is further configured to:
before the first sublist stored in the first storage unit is stored into the second storage unit of the second memory, obtain a first write request, wherein the first write request is used to request a write operation on a first list item of the first table; and
allocate the first storage unit to the first sublist according to the first write request, wherein the first sublist comprises the first list item.
15. The apparatus according to claim 14, wherein the processing unit is further configured to:
obtain a second write request, wherein the second write request is used to request a write operation on a second list item of a second sublist, and the multiple sublists further comprise the second sublist; and
if it is determined that the second sublist has no allocated storage unit, allocate a third storage unit to the second sublist according to the second write request.
16. The apparatus according to claim 12 or 13, wherein the first table is stored in a hash manner, the keyword (key) of each list item of the first table is stored in a list item of a hash bucket, and the processing unit is further configured to:
store a to-be-stored key into a first hash bucket when the quantity of keys stored in each list item of the first hash bucket is less than a first threshold;
store the to-be-stored key into whichever of the first hash bucket and a second hash bucket stores fewer keys when the quantity of keys stored in each list item of the first hash bucket is greater than or equal to the first threshold, wherein the first hash bucket and the second hash bucket use different hash functions;
store the to-be-stored key into a third hash bucket when the list items of the first hash bucket and the second hash bucket are full and the quantity of keys stored in each list item of the third hash bucket is less than the first threshold, wherein the first hash bucket and the third hash bucket use the same hash function; and
store the to-be-stored key into whichever of the third hash bucket and a fourth hash bucket stores fewer keys when the list items of the first hash bucket and the second hash bucket are full and the quantity of keys stored in each list item of the third hash bucket is greater than or equal to the first threshold, wherein the first hash bucket and the third hash bucket use the same hash function, and the second hash bucket and the fourth hash bucket use the same hash function.
17. The apparatus according to claim 16, further comprising:
a control unit, configured to:
control a network processor to match the third hash bucket first and then the first hash bucket when the ratio of the number of times the first hash bucket is accessed to the number of times the third hash bucket is accessed is less than or equal to a second threshold;
control the network processor to match the fourth hash bucket first and then the second hash bucket when the ratio of the number of times the second hash bucket is accessed to the number of times the fourth hash bucket is accessed is less than or equal to the second threshold; and
control the network processor to match the second hash bucket and the fourth hash bucket first, and then the first hash bucket and the third hash bucket, when the ratio of the number of times the first hash bucket and the third hash bucket are accessed to the number of times the second hash bucket and the fourth hash bucket are accessed is less than or equal to the second threshold.
18. An apparatus for accessing a table, comprising:
a processing unit, configured to: obtain the base address of a memory block map information table corresponding to a first table, wherein the memory block map information table comprises the correspondence between each sublist of the first table and a storage unit; access the memory block map information table according to the base address and the index of a first list item of the first table, and determine, according to the memory block map information table, the index of the storage unit corresponding to the first sublist in which the first list item is located, wherein the first sublist is any sublist of the first table; and determine the physical address of the first list item according to the index of the storage unit and the index of the first list item; and
an access unit, configured to access the first list item according to the physical address of the first list item.
19. The apparatus according to claim 18, wherein the processing unit is specifically configured to access a memory block mapping base-address table according to the table identifier (TID) of the first table, and determine the base address of the memory block map information table according to the memory block mapping base-address table.
20. The apparatus according to claim 18 or 19, wherein the processing unit is further configured to update the number of times the storage unit has been accessed.
21. The apparatus according to claim 18 or 19, wherein the processing unit is specifically configured to:
access the memory block map information table at the address determined according to the following formula:
base+entry index/block size
wherein base is the base address, entry index is the index of the first list item, and block size is the quantity of list items included in the first sublist.
22. The apparatus according to claim 18 or 19, wherein the processing unit is specifically configured to:
determine the physical address of the first list item according to the following formula:
real block index*block size+entry index%block size
wherein real block index is the index of the storage unit, block size is the quantity of list items included in the first sublist, the quantity of list items included in the first sublist is the same as the quantity of list items stored in the storage unit, and entry index is the index of the first list item.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510274566.8A CN106294191B (en) | 2015-05-26 | 2015-05-26 | The method for handling table, the method and apparatus for accessing table |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106294191A CN106294191A (en) | 2017-01-04 |
CN106294191B true CN106294191B (en) | 2019-07-09 |
Family
ID=57634545
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510274566.8A Active CN106294191B (en) | 2015-05-26 | 2015-05-26 | The method for handling table, the method and apparatus for accessing table |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106294191B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108572962B (en) * | 2017-03-08 | 2020-11-17 | 华为技术有限公司 | Method and device for storing physical data table |
CN112491725B (en) * | 2020-11-30 | 2022-05-20 | 锐捷网络股份有限公司 | MAC address processing method and device |
CN114490449B (en) * | 2022-04-18 | 2022-07-08 | 飞腾信息技术有限公司 | Memory access method and device and processor |
CN116016432A (en) * | 2022-12-30 | 2023-04-25 | 迈普通信技术股份有限公司 | Message forwarding method, device, network equipment and computer readable storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101566986A (en) * | 2008-04-21 | 2009-10-28 | 阿里巴巴集团控股有限公司 | Method and device for processing data in online business processing |
CN101909068A (en) * | 2009-06-02 | 2010-12-08 | 华为技术有限公司 | Method, device and system for managing file copies |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6799258B1 (en) * | 2001-01-10 | 2004-09-28 | Datacore Software Corporation | Methods and apparatus for point-in-time volumes |
- 2015-05-26: Application CN201510274566.8A filed (CN); patent CN106294191B granted; status: Active
Also Published As
Publication number | Publication date |
---|---|
CN106294191A (en) | 2017-01-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7606236B2 (en) | Forwarding information base lookup method | |
US8705363B2 (en) | Packet scheduling method and apparatus | |
JP4483535B2 (en) | Network equipment | |
JP5640234B2 (en) | Layer 2 packet aggregation and fragmentation in managed networks | |
CN106294191B (en) | The method for handling table, the method and apparatus for accessing table | |
CN105446813B (en) | A kind of method and device of resource allocation | |
Bando et al. | FlashTrie: beyond 100-Gb/s IP route lookup using hash-based prefix-compressed trie | |
US11700209B2 (en) | Multi-path packet descriptor delivery scheme | |
US6529897B1 (en) | Method and system for testing filter rules using caching and a tree structure | |
JP7074839B2 (en) | Packet processing | |
US20120294315A1 (en) | Packet buffer comprising a data section and a data description section | |
WO2011015055A1 (en) | Method and system for storage management | |
CN108199976A (en) | Switching equipment, exchange system and the data transmission method for uplink of RapidIO networks | |
CN110083307A (en) | Date storage method, memory and server | |
CN103270727B (en) | Bank aware multi-it trie | |
CN109861931A (en) | A kind of storage redundant system of high speed Ethernet exchange chip | |
CN109656836A (en) | A kind of data processing method and device | |
EP2526478A1 (en) | A packet buffer comprising a data section and a data description section | |
JP7241194B2 (en) | MEMORY MANAGEMENT METHOD AND APPARATUS | |
CN108551485A (en) | A kind of streaming medium content caching method, device and computer storage media | |
CN108650306A (en) | A kind of game video caching method, device and computer storage media | |
CN105704037B (en) | A kind of list item store method and controller | |
US10116588B2 (en) | Large receive offload allocation method and network device | |
US10594631B1 (en) | Methods and apparatus for memory resource management in a network device | |
CN104391751A (en) | Synchronization method and device for algorithmic data processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | | Effective date of registration: 2021-12-23. Patentee after: Super Fusion Digital Technology Co., Ltd., 450046 Floor 9, Building 1, Zhengshang Boya Plaza, Longzihu Wisdom Island, Zhengdong New Area, Zhengzhou City, Henan Province. Patentee before: HUAWEI TECHNOLOGIES Co., Ltd., 518129 Bantian Huawei headquarters office building, Longgang District, Shenzhen, Guangdong |