CN107888513A - Caching method and device for exchange chip - Google Patents
Caching method and device for exchange chip
- Publication number
- CN107888513A
- Authority
- CN
- China
- Prior art keywords
- lookup address
- table lookup
- entry
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/10—Packet switching elements characterised by the switching fabric construction
- H04L49/103—Packet switching elements characterised by the switching fabric construction using a shared central buffer; using a shared memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored program computers
- G06F15/78—Architectures of general purpose stored program computers comprising a single central processing unit
- G06F15/7807—System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
- G06F15/781—On-chip cache; Off-chip memory
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/74—Address processing for routing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
Landscapes
- Engineering & Computer Science (AREA)
- Computer Hardware Design (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computing Systems (AREA)
- Microelectronics & Electronic Packaging (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The embodiment of the present invention provides a caching method and device for an exchange chip, belonging to the field of integrated circuit design. The method includes: receiving a table-entry read request; obtaining a lookup address from the read request; searching cached entries for the lookup address, wherein each entry includes an associated address and data; and, when the lookup address is present, sending the data corresponding to the lookup address. With this technical solution, cached entries are searched for the lookup address and, when the lookup address is present, the data corresponding to it is sent, thereby reducing accesses to the shared memory and improving bandwidth utilization.
Description
Technical field
The present invention relates to the field of integrated circuit design, and in particular to a caching method and device for an exchange chip.
Background art
In chip design, shared memory is widely used in order to save area. Because memory bandwidth is inherently limited, when the aggregate bandwidth of simultaneous accesses exceeds that limit, the ports that cannot be served must be blocked according to a scheduling policy. Port blocking stalls the pipeline stage, which in turn pauses the preceding stages according to the back-pressure rules. When packet bursts are large, such congestion is unavoidable; it is a performance trade-off made in exchange for area.
For an Ethernet switching chip that supports Layer-3 switching, in practice the chip receives packet flows. If every packet accessed the shared resource itself, a large amount of bandwidth would undoubtedly be wasted.
No good solution to the above technical problem exists in the prior art.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a method and a device that can improve the speed of reading packet table-entry data.
To achieve this goal, an embodiment of the present invention provides a caching method for an exchange chip. The method includes: receiving a table-entry read request; obtaining a lookup address from the read request; searching cached entries for the lookup address, wherein each entry includes an associated address and data; and, when the lookup address is present, sending the data corresponding to the lookup address.
Optionally, the method includes: when the lookup address is not present, forwarding the read request to a table, wherein the table provides the data corresponding to the lookup address according to the read request.
Optionally, the method includes: caching the lookup address and the data provided by the table for that lookup address as an entry.
Optionally, before the lookup address and the data provided by the table for that lookup address are cached as an entry, the method includes: polling to determine a free-entry flag bit; and caching the lookup address and the data corresponding to the lookup address as an entry at the identified free-entry flag bit.
Optionally, the method includes: when no free-entry flag bit exists, caching the lookup address and the data corresponding to the lookup address at the flag bit of the entry that was cached earliest.
In another aspect, the present invention provides a caching device for an exchange chip. The device includes: a receiving module, configured to receive a table-entry read request; an acquisition module, configured to obtain a lookup address from the read request; a searching module, configured to search cached entries for the lookup address, wherein each entry includes an associated address and data; and a sending module, configured to send the data corresponding to the lookup address when the lookup address is present.
Optionally, the sending module is configured to forward the read request to a table when the lookup address is not present, wherein the table provides the data corresponding to the lookup address according to the read request.
Optionally, the device further includes a storage module, configured to cache the lookup address and the data provided by the table for that lookup address as an entry.
Optionally, before the lookup address and the data provided by the table for that lookup address are cached as an entry, the storage module is further configured to: poll to determine a free-entry flag bit; and cache the lookup address and the data corresponding to the lookup address as an entry at the identified free-entry flag bit.
Optionally, the storage module is further configured to cache the lookup address and the data corresponding to the lookup address at the flag bit of the entry that was cached earliest when no free-entry flag bit exists.
In another aspect, the present invention also provides an exchange chip. The exchange chip includes a pipeline stage and a buffer that are coupled, wherein the buffer is configured to: receive a table-entry read request from the pipeline stage; obtain a lookup address from the read request; search cached entries for the lookup address, wherein each entry includes an associated address and data; and, when the lookup address is present, send the data corresponding to the lookup address to the pipeline stage.
Optionally, the buffer is configured to forward the read request to a shared memory when the lookup address is not present, wherein the shared memory provides the data corresponding to the lookup address to the pipeline stage according to the read request.
Optionally, the buffer is configured to cache the lookup address and the data provided by the shared memory for that lookup address as an entry.
Optionally, the buffer is configured to: poll to determine a free-entry flag bit before the lookup address and the data provided by the table for that lookup address are cached as an entry; and cache the lookup address and the data corresponding to the lookup address as an entry at the identified free-entry flag bit.
Optionally, the buffer is configured to cache the lookup address and the data corresponding to the lookup address at the flag bit of the entry that was cached earliest when no free-entry flag bit exists.
With the above technical solution, cached entries are searched for the lookup address and, when the lookup address is present, the data corresponding to it is sent, thereby reducing accesses to the shared memory and improving bandwidth utilization.
Further features and advantages of the embodiments of the present invention will be described in detail in the following detailed description.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the embodiments of the present invention and constitute a part of the specification. Together with the following detailed description, they serve to explain the embodiments of the present invention, but do not limit them. In the drawings:
Fig. 1 is a flowchart of a caching method for an exchange chip according to an embodiment of the present invention;
Fig. 2 is a flowchart of a caching method for an exchange chip according to an embodiment of the present invention;
Fig. 3 is a block diagram of a caching device for an exchange chip according to an embodiment of the present invention;
Fig. 4 is a block diagram of an exchange chip according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a system including an exchange chip according to an embodiment of the present invention;
Fig. 6 is a functional block diagram of an exchange chip according to an embodiment of the present invention; and
Fig. 7 is a signal flow diagram of an exchange chip according to an embodiment of the present invention.
Detailed description of the embodiments
The embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be understood that the embodiments described here are intended only to illustrate and explain the present invention and are not intended to limit it.
It should be noted that, in the description of the invention, the terms "first" and "second" are used only to distinguish different components for convenience of description and shall not be understood as indicating or implying an ordinal relationship, relative importance, or an implicit quantity of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature.
In the description of the invention, it should be understood that terms such as "upper", "lower", "inner", "outer", "top" and "bottom" indicate orientations or positional relationships based on those shown in the drawings. They are used only to simplify the description of the present invention and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation; therefore, they shall not be construed as limiting the present invention.
Those skilled in the art will appreciate that the words "comprise" and "include" used in this specification refer to the presence of the stated features, integers, steps, operations, elements and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or combinations thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. In addition, "connected" or "coupled" as used herein may include a wireless connection or coupling. The phrase "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
In general, data accesses follow the principle of temporal or spatial locality. Temporal locality means that once a datum in memory has been accessed, it is likely to be accessed again soon; spatial locality means that once a datum has been accessed, the data adjacent to it is likely to be accessed soon. In the scenario of an Ethernet switching chip supporting Layer-3 switching, table entries indexed by VLAN_ID follow the temporal-locality principle. Typically, over a sustained period a switch receives packets belonging to the same VLAN, and the table entries related to the VLAN_ID are largely shared resources. If every packet went to access the shared resource itself, a large amount of bandwidth would undoubtedly be wasted. To improve access performance, the inventors creatively borrow the principle of a cache and propose a fast caching method for reducing shared-resource conflicts: a small-capacity, high-speed buffer is placed between the processing pipeline and the shared resource and is used by packets within a period of time.
Fig. 1 is a flowchart of a caching method for an exchange chip according to an embodiment of the present invention. As shown in Fig. 1, the caching method for an exchange chip provided by the embodiment of the present invention includes:
S101, receiving a table-entry read request.
The table-entry read request may come from the processing pipeline or a pipeline stage, and may be carried by a packet.
S102, obtaining a lookup address from the read request.
S103, searching cached entries for the lookup address, wherein each entry includes an associated address and data.
In the embodiments of the present invention, cached entries are defined as the entries that already exist in the buffer or caching device before the read request reaches it. Such an entry may have been obtained by the buffer from a memory such as the shared memory, or may have been cached during an earlier packet-processing operation. In embodiments, an entry may be identified by an index, for example a VLAN_ID (virtual LAN identifier), so that entries can be distinguished from one another. In various embodiments, the index may also directly use the lookup address or the data.
S104, when the lookup address is present, sending the data corresponding to the lookup address.
When the lookup address is found to be present, the data corresponding to the lookup address, and/or the lookup address itself, can be returned to the pipeline stage directly in response to the read request. In embodiments, for a read request whose lookup address is present, the buffer may discard or intercept the request instead of forwarding it to the shared resource (for example, the shared memory).
When the lookup address is not present in the cached entries in S103, the flow may proceed to S105: when the lookup address is not present, the read request is forwarded to the table. In embodiments, the table may be the data table itself, or a table stored in a memory such as the shared memory; the table then provides the data corresponding to the lookup address according to the read request.
In embodiments, the table or the shared memory may provide the data corresponding to the lookup address, and/or the lookup address, to the buffer and to the pipeline stage simultaneously. Providing it to the former allows the buffer to learn (cache) it, so that a later read request for the same resource (for example, the same data) can be served directly from the cache, reducing conflicts over shared resources such as bandwidth. Providing it to the latter serves the current read request, delivering the resource from the shared resource to the requester (for example, the pipeline stage).
S106, after obtaining the data corresponding to the lookup address provided by the table, the buffer may cache the lookup address and that data as an entry. In embodiments, a flag (for example, a flag bit) may be used to mark the entry.
With the above technical solution, cached entries are searched for the lookup address and, when the lookup address is present, the data corresponding to it is sent, thereby reducing accesses to the shared memory and improving bandwidth utilization.
Fig. 2 is a flowchart of a caching method for an exchange chip according to an embodiment of the present invention. As shown in Fig. 2, before the lookup address and the data provided by the table for that lookup address are cached as an entry, the method may include:
S201, determining whether a free-entry flag bit exists in the buffer; for example, the free-entry flag bit may be determined by polling.
S202, when a free-entry flag bit exists, the lookup address and the data corresponding to the lookup address may be cached as an entry at the identified free-entry flag bit.
When the buffer is fully occupied, the storage space can be reused in a first-in-first-out (FIFO) manner to replace data, and step S203 may be performed.
S203, when no free-entry flag bit exists, caching the lookup address and the data corresponding to the lookup address at the flag bit of the entry that was cached earliest.
In addition, in embodiments, considered from a practical point of view, assume that a single exchange chip supports 4096 VLAN_IDs in total. If the packets within a period of time all come from the same VLAN, then setting the buffer depth to 1 already achieves the goal of fast access. Assume that within a period of time packets of n VLANs are processed alternately; although setting the depth to n would completely eliminate conflicts over the shared resource, considering the speed of fetching data from the buffer, n should be as small as possible. In embodiments, the depth can be configured by the user according to the actual situation, for example, to any positive integer in the range 1 to n. The above method may include setting the buffer depth for the VLAN_IDs.
In another aspect of the embodiments of the present invention, a caching device for an exchange chip is provided.
Fig. 3 is a block diagram of a caching device for an exchange chip according to an embodiment of the present invention. As shown in Fig. 3, the device may include: a receiving module 301, configured to receive a table-entry read request; an acquisition module 302, configured to obtain a lookup address from the read request; a searching module 303, configured to search cached entries for the lookup address, wherein each entry includes an associated address and data; and a sending module 304, configured to send the data corresponding to the lookup address when the lookup address is present.
In embodiments, the sending module 304 is configured to forward the read request to the table (or the shared memory) when the lookup address is not present, wherein the table provides the data corresponding to the lookup address according to the read request.
In embodiments, the device may further include a storage module 305, configured to cache the lookup address and the data provided by the table for that lookup address as an entry.
In embodiments, before the lookup address and the data provided by the table for that lookup address are cached as an entry, the storage module 305 may further be configured to: poll to determine a free-entry flag bit; and cache the lookup address and the data corresponding to the lookup address as an entry at the identified free-entry flag bit.
In embodiments, the storage module 305 may further be configured to cache the lookup address and the data corresponding to the lookup address at the flag bit of the entry that was cached earliest when no free-entry flag bit exists.
In another aspect of the embodiments of the present invention, an exchange chip is provided.
Fig. 4 is a block diagram of an exchange chip according to an embodiment of the present invention. As shown in Fig. 4, the exchange chip may include a pipeline stage 402 and a buffer 401 that are coupled, wherein the buffer 401 may be configured to: receive a table-entry read request from the pipeline stage 402; obtain a lookup address from the read request; search cached entries for the lookup address, wherein each entry includes an associated address and data; and, when the lookup address is present, send the data corresponding to the lookup address to the pipeline stage 402.
In embodiments, the pipeline stage 402 may contain multiple pipelines 402-1, 402-2, ..., 402-n. The buffer 401 may, for example, return data to a pipeline (for example, 402-1) in the pipeline stage 402 in response to the read request from that pipeline.
Fig. 5 is a schematic diagram of a system including an exchange chip according to an embodiment of the present invention. Referring to Fig. 5, the buffer 401 may be configured to forward the read request to a shared memory (or table) 501 when the lookup address is not present, wherein the shared memory 501 provides the data corresponding to the lookup address to the pipeline stage 402 according to the read request.
In embodiments, the shared memory 501 may, according to the read request, provide the data corresponding to the lookup address to the pipeline stage 402 and to the buffer 401 simultaneously. The buffer 401 may cache the lookup address and the data provided by the shared memory for that lookup address as an entry.
Fig. 6 is a functional block diagram of an exchange chip according to an embodiment of the present invention. As shown in Fig. 6, the buffer 401 can be functionally divided into a write side and a read side, and the two can operate independently of each other. On the write side (for example, the left half of Fig. 6): if table-entry data is written in, it means that the corresponding earlier read request was not intercepted, and the data arriving now is data the buffer 401 does not yet hold, so it needs to be learned into the cache. As an example, the update rule is to fill the blank entries of the buffer 401 in sequence; if the buffer 401 is full, the update may return to the head of the cache and overwrite the old value. On the read side (for example, the right half of Fig. 6): because the buffer 401 can be built from registers, the read address can be compared against all of the cached entries in the buffer 401 simultaneously, producing match-valid flag bits. Finally, a multi-level priority encoder makes the selection and outputs the matched data.
In embodiments, the buffer 401 may be configured to: poll to determine a free-entry flag bit before the lookup address and the data provided by the table for that lookup address are cached as an entry; and cache the lookup address and the data corresponding to the lookup address as an entry at the identified free-entry flag bit. Further, the buffer 401 may be configured to cache the lookup address and the data corresponding to the lookup address at the flag bit of the entry that was cached earliest when no free-entry flag bit exists. In embodiments, the buffer 401 may include multiple registers.
Fig. 7 is a signal flow diagram of an exchange chip according to an embodiment of the present invention. As shown in Fig. 7, a read of one SRAM table entry is taken as an example.
S701: after a packet reaches the pipeline stage 402, it may initiate a read request. The buffer 401 may first intercept the request, compare the lookup address against the cached entries one by one, and check whether this address has already been learned. If the match succeeds, the read request toward the shared resource is shut off, the back-pressure signal returned by the table is ignored entirely, and the flow branches to step S704.
S702: if the address does not match, the request is let through, the buffer prepares to learn the data returned for this address, and the flow proceeds to step S703.
S703: wait for the shared resource to return the data. The buffer 401 internally polls the flag bits of all entries, finds a free one, and caches the index and the data together. If there is no free entry, the old data is simply overwritten in order, following the FIFO principle. This data is then returned to the pipeline stage 402 of the packet for use, and when the next similar packet arrives, the data in the buffer 401 can be used directly.
S704: when the match inside the buffer 401 succeeds, it shows that the data this packet needs to query has already been learned, and the cached data can be returned to the pipeline stage 402 directly.
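The effect of this flow, namely that an address learned by the buffer never has to reach the shared resource again (as the next paragraph notes), can be illustrated with a small self-contained simulation. Everything here, including the toy shared table, the depth-1 cache and the burst of same-VLAN packets, is an assumption used only to show the S701 to S704 sequence in software.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct entry { bool valid; uint32_t addr; uint32_t data; };

static struct entry cache_slot;          /* buffer depth 1, as in the single-VLAN example above */
static uint32_t     shared_table[4096];  /* toy stand-in for the shared SRAM table */
static unsigned     shared_reads;

static uint32_t read_entry(uint32_t vlan_id)
{
    /* S701/S704: intercept the request and check whether the address was learned. */
    if (cache_slot.valid && cache_slot.addr == vlan_id)
        return cache_slot.data;

    /* S702/S703: let the request through, wait for the shared resource, learn the result. */
    shared_reads++;
    uint32_t data = shared_table[vlan_id];
    cache_slot = (struct entry){ true, vlan_id, data };
    return data;
}

int main(void)
{
    shared_table[100] = 0xABCD;           /* pretend table entry for VLAN 100 */

    for (int i = 0; i < 1000; i++)        /* burst of packets from the same VLAN */
        (void)read_entry(100);

    /* Only the first packet reaches the shared memory; the other 999 hit the buffer. */
    printf("shared memory reads: %u\n", shared_reads);  /* prints 1 */
    return 0;
}
```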
It can be seen that, as long as an address and its data have been learned by the buffer from the flow, the shared resource does not need to be accessed again, which greatly reduces its bandwidth consumption. The whole caching process is completed automatically and is transparent to the outside; only a very small number of registers are used to control it or to reflect its working state. The capacity and size of the registers can be set by passing parameters at the time of actual use.
The optional embodiments of the present invention have been described in detail above with reference to the accompanying drawings. However, the embodiments of the present invention are not limited to the specific details of the above embodiments. Within the scope of the technical concept of the embodiments of the present invention, many simple variations can be made to the technical solutions of the embodiments, and these simple variations all fall within the protection scope of the embodiments of the present invention.
It should be further noted that the specific technical features described in the above embodiments can be combined in any suitable manner as long as they do not contradict one another. To avoid unnecessary repetition, the embodiments of the present invention do not further describe the various possible combinations separately.
In addition, the various embodiments of the present invention can also be combined with one another; as long as such combinations do not run counter to the idea of the embodiments of the present invention, they shall likewise be regarded as part of the disclosure of the embodiments of the present invention.
Claims (15)
1. A caching method for an exchange chip, characterized in that the method comprises:
receiving a table-entry read request;
obtaining a lookup address from the read request;
searching cached entries for the lookup address, wherein each entry comprises an associated address and data; and
when the lookup address is present, sending the data corresponding to the lookup address.
2. The method according to claim 1, characterized in that the method comprises:
when the lookup address is not present, forwarding the read request to a table, wherein the table provides the data corresponding to the lookup address according to the read request.
3. The method according to claim 2, characterized in that the method comprises:
caching the lookup address and the data provided by the table corresponding to the lookup address as an entry.
4. The method according to claim 3, characterized in that, before the lookup address and the data provided by the table corresponding to the lookup address are cached as an entry, the method comprises:
polling to determine a free-entry flag bit; and
caching the lookup address and the data corresponding to the lookup address as an entry at the identified free-entry flag bit.
5. The method according to claim 4, characterized in that the method comprises:
when no free-entry flag bit exists, caching the lookup address and the data corresponding to the lookup address at the flag bit of the entry that was cached earliest.
6. A caching device for an exchange chip, characterized in that the device comprises:
a receiving module, configured to receive a table-entry read request;
an acquisition module, configured to obtain a lookup address from the read request;
a searching module, configured to search cached entries for the lookup address, wherein each entry comprises an associated address and data; and
a sending module, configured to send the data corresponding to the lookup address when the lookup address is present.
7. The device according to claim 6, characterized in that the sending module is configured to, when the lookup address is not present, forward the read request to a table, wherein the table provides the data corresponding to the lookup address according to the read request.
8. The device according to claim 7, characterized in that the device further comprises:
a storage module, configured to cache the lookup address and the data provided by the table corresponding to the lookup address as an entry.
9. The device according to claim 8, characterized in that the storage module is further configured to, before the lookup address and the data provided by the table corresponding to the lookup address are cached as an entry:
poll to determine a free-entry flag bit; and
cache the lookup address and the data corresponding to the lookup address as an entry at the identified free-entry flag bit.
10. The device according to claim 9, characterized in that the storage module is further configured to, when no free-entry flag bit exists, cache the lookup address and the data corresponding to the lookup address at the flag bit of the entry that was cached earliest.
11. An exchange chip, characterized in that the exchange chip comprises a pipeline stage and a buffer that are coupled, wherein the buffer is configured to:
receive a table-entry read request from the pipeline stage;
obtain a lookup address from the read request;
search cached entries for the lookup address, wherein each entry comprises an associated address and data; and
when the lookup address is present, send the data corresponding to the lookup address to the pipeline stage.
12. The exchange chip according to claim 11, characterized in that the buffer is configured to: when the lookup address is not present, forward the read request to a shared memory, wherein the shared memory provides the data corresponding to the lookup address to the pipeline stage according to the read request.
13. The exchange chip according to claim 12, characterized in that the buffer is configured to:
cache the lookup address and the data provided by the shared memory corresponding to the lookup address as an entry.
14. The exchange chip according to claim 13, characterized in that the buffer is configured to: before the lookup address and the data provided by the table corresponding to the lookup address are cached as an entry, poll to determine a free-entry flag bit; and
cache the lookup address and the data corresponding to the lookup address as an entry at the identified free-entry flag bit.
15. The exchange chip according to claim 14, characterized in that the buffer is configured to:
when no free-entry flag bit exists, cache the lookup address and the data corresponding to the lookup address at the flag bit of the entry that was cached earliest.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710995913.5A | 2017-10-23 | 2017-10-23 | Caching method and device for exchange chip
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710995913.5A | 2017-10-23 | 2017-10-23 | Caching method and device for exchange chip
Publications (1)
Publication Number | Publication Date |
---|---|
CN107888513A true CN107888513A (en) | 2018-04-06 |
Family
ID=61782122
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710995913.5A | Caching method and device for exchange chip | 2017-10-23 | 2017-10-23
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107888513A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1863169A (en) * | 2006-03-03 | 2006-11-15 | 清华大学 | Route searching result cache method based on network processor |
CN101021814A (en) * | 2007-03-16 | 2007-08-22 | 华为技术有限公司 | Storage and polling method and storage controller and polling system |
CN101079817A (en) * | 2007-07-04 | 2007-11-28 | 中兴通讯股份有限公司 | A route searching method and system |
CN106302374A (en) * | 2015-06-26 | 2017-01-04 | 深圳市中兴微电子技术有限公司 | Device and method for improving table-entry access bandwidth and atomicity of operations |
CN105975215A (en) * | 2016-05-25 | 2016-09-28 | 深圳大学 | STL mapping table management method based on Ondemand algorithm |
CN106354664A (en) * | 2016-08-22 | 2017-01-25 | 浪潮(北京)电子信息产业有限公司 | Solid state disk data transmission method and device |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108768859A (en) * | 2018-05-17 | 2018-11-06 | 迈普通信技术股份有限公司 | Data processing method, apparatus and system |
CN108768859B (en) * | 2018-05-17 | 2021-05-25 | 迈普通信技术股份有限公司 | Data processing method, device and system |
CN113098798A (en) * | 2021-04-01 | 2021-07-09 | 烽火通信科技股份有限公司 | Method for configuring shared table resource pool, packet switching method, chip and circuit |
CN113098798B (en) * | 2021-04-01 | 2022-06-21 | 烽火通信科技股份有限公司 | Method for configuring shared table resource pool, packet switching method, chip and circuit |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20180406