CN102385555A - Caching system and method of data caching - Google Patents

Caching system and method of data caching

Info

Publication number
CN102385555A
Authority
CN
China
Prior art keywords
buffer memory
data
speed interface
state
low
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2010102689114A
Other languages
Chinese (zh)
Other versions
CN102385555B (en)
Inventor
罗盛裕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netac Technology Co Ltd
Original Assignee
Netac Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netac Technology Co Ltd
Priority to CN201010268911.4A (CN102385555B)
Publication of CN102385555A
Priority to HK12109119.3A (HK1168435A1)
Application granted
Publication of CN102385555B
Legal status: Active

Abstract

An embodiment of the invention discloses a caching system and a data caching method. The system comprises a high-speed interface, a cache group, a low-speed interface, a status register and a conditional command sequence. The conditional command sequence uses control commands to direct the idle high-speed and low-speed interfaces to perform data caching according to caching conditions; the caching conditions comprise a data write condition and a data read condition, where the data write condition is that a cache in the empty state exists and the data read condition is that a cache in the full state exists. When idle, the high-speed interface and the low-speed interface read the caching conditions and the cache states, judge from the cache states whether a caching condition is met, and, when it is met, perform the data caching while updating the cache states; the status register stores the state of each cache for the high-speed and low-speed interfaces. The embodiments of the invention can improve the transfer rate of data between the high-speed interface and the low-speed interface.

Description

Caching system and method of data caching
Technical field
This application relates to the field of data storage technology, and in particular to a caching system and a data caching method.
Background art
When data is transferred between two modules whose read/write speeds do not match, or between two modules operating in different clock domains, a buffer circuit is usually used to buffer the transferred data, so that the slower module obtains the highest possible data transfer rate. For example, when a computer with a high-speed interface accesses an external storage device, a printer, or another low-speed device with a low-speed interface, the read/write speeds of the high-speed interface and the low-speed interface do not match, so a buffer memory must be placed between the high-speed device and the low-speed device to buffer the data transferred between them.
Generally, data is buffered in one of two ways: ping-pong buffering or circular buffering. A circular buffer uses several single-port RAMs (Random Access Memory), so that when data is transferred between two modules with mismatched read/write speeds, the two modules can simultaneously read from and write to the data buffered in the circular buffer. In addition, when the speed difference between the two modules is large, the high-speed interface is usually a single channel while the low-speed interface consists of several identical channels working in parallel, the equivalent speed of the low-speed interface being the sum of the individual channel speeds. This matches the speeds of the two modules and yields the highest possible data transfer rate.
Refer to Fig. 1, a schematic structural diagram of a prior-art circular buffer system based on a multi-channel low-speed interface. As shown in Fig. 1, the high-speed interface is a single channel H, the low-speed interface consists of three channels A, B and C, and the system contains four caches, one more than the number of low-speed channels. When data is transferred from the high-speed interface to the low-speed interface, channel H writes data into the caches in their fixed order, while channels A, B and C take turns, in a fixed order, reading whichever cache is full. For example, channel H writes data into cache 0, cache 1, cache 2 and cache 3 in sequence; meanwhile, channel A reads the first full cache, channel B reads the next full cache, and channel C reads the one after that. The cycle repeats until all data has been transferred. Likewise, when data is transferred from the low-speed interface to the high-speed interface, channels A, B and C write into the caches in order, while channel H reads each cache in sequence.
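As a minimal sketch of this prior-art round-robin discipline, assuming single-segment transfers and using invented names (this is an editorial illustration, not code from the patent), note how each reader may only take the next full cache when its fixed turn arrives, so one slow reader stalls the readers behind it:

```c
#include <stdbool.h>

#define NUM_CACHES  4
#define NUM_READERS 3            /* low-speed channels A, B and C */

typedef enum { EMPTY, FULL } cache_state_t;

static cache_state_t state[NUM_CACHES];  /* all EMPTY at start */
static int write_idx = 0;        /* next cache channel H must write */
static int read_idx  = 0;        /* next cache to be read, shared   */
static int turn      = 0;        /* whose turn it is to read        */

/* Channel H: may only write the next cache in the fixed sequence. */
bool writer_try_write(void) {
    if (state[write_idx] != EMPTY)
        return false;                    /* wait for that slot to drain */
    /* ... write one segment into cache write_idx ... */
    state[write_idx] = FULL;
    write_idx = (write_idx + 1) % NUM_CACHES;
    return true;
}

/* Reader r: may only read when its turn comes AND the next cache is
   full, so a slow reader stalls every reader queued behind it.      */
bool reader_try_read(int r) {
    if (turn != r || state[read_idx] != FULL)
        return false;
    /* ... read one segment out of cache read_idx ... */
    state[read_idx] = EMPTY;
    read_idx = (read_idx + 1) % NUM_CACHES;
    turn = (turn + 1) % NUM_READERS;     /* pass the turn along */
    return true;
}
```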
However, the inventor found in research that when data caching uses the prior-art circular buffering scheme based on a multi-channel low-speed interface, a high data transfer rate is obtained only if the channels on the low-speed side all run at the same speed. In practice, the channel speeds on the low-speed side are usually unequal. In a flash memory system, for example, whether the flash data contains errors, and differences in the number and position of those errors, all change the correction time, so the speed of a single channel varies over time and the channel speeds can never stay identical. In that case, continuing to use the prior-art multi-channel circular buffering scheme yields a low transfer rate when data is transferred between the high-speed interface and the low-speed interface.
Summary of the invention
To solve the above technical problem, this application provides a caching system and a data caching method, so as to improve the transfer rate of data transferred between a high-speed interface and a low-speed interface.
The embodiments of this application disclose the following technical solutions:
A caching system comprises a high-speed interface, a cache group, a low-speed interface, a status register and a conditional command sequence, wherein the high-speed interface is a single channel, the low-speed interface is at least two channels, and the number of caches in the cache group is at least one more than the number of channels of the low-speed interface. The conditional command sequence is used to control, via control commands, the idle high-speed interface and low-speed interface to perform data caching according to caching conditions; the caching conditions comprise a data write condition for writing data into the cache group and a data read condition for reading data from the cache group; the data write condition is that a cache in the empty state exists, and the data read condition is that a cache in the full state exists. The high-speed interface and the low-speed interface are used, when idle and under the control of the control commands, to read the caching conditions and the state of each cache from the conditional command sequence and the status register respectively, judge from the cache states whether a caching condition is satisfied, perform a data write and update the cache state when the data write condition is satisfied, and perform a data read and update the cache state when the data read condition is satisfied. The status register is used to store the state of each cache.
A method of data caching in the above caching system comprises: when an input channel is idle and a cache in the empty state exists, writing data to be transferred into the empty cache through the idle input channel; and when an output channel is idle and a cache in the full state exists, reading data from the full cache through the idle output channel.
As the above embodiments show, with the data caching method of this application, data to be transferred is written into an empty cache through an idle input channel whenever an idle input channel and an empty cache exist, and data is read from a full cache through an idle output channel whenever an idle output channel and a full cache exist. Channels and caches are thereby used more effectively: caches avoid unnecessary waiting states and channels avoid unnecessary idle states, which improves the overall transfer rate when data is transferred between the high-speed interface and the low-speed interface.
Description of drawings
To explain the embodiments of this application or the prior-art technical solutions more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a prior-art circular buffer system based on a multi-channel low-speed interface;
Fig. 2 is a schematic structural diagram of one embodiment of a caching system of this application;
Fig. 3 is a schematic structural diagram of another embodiment of a caching system of this application;
Fig. 4 is a flow chart of one embodiment of a data caching method of this application;
Fig. 5-1 is a waveform diagram of data caching when data is transferred from the high-speed interface to the low-speed interface using the existing circular buffering technique;
Fig. 5-2 is a waveform diagram of data caching when data is transferred from the high-speed interface to the low-speed interface using the caching technique of this application;
Fig. 5-3 is a diagram comparing the waveforms of Fig. 5-1 and Fig. 5-2;
Fig. 6-1 is a waveform diagram of data caching when data is transferred from the low-speed interface to the high-speed interface using the existing circular buffering technique;
Fig. 6-2 is a waveform diagram of data caching when data is transferred from the low-speed interface to the high-speed interface using the caching technique of this application;
Fig. 6-3 is a diagram comparing the waveforms of Fig. 6-1 and Fig. 6-2;
Fig. 7 is a waveform comparison diagram of data caching with different numbers of caches when data is transferred from the high-speed interface to the low-speed interface in this application;
Fig. 8 is a waveform comparison diagram of data caching with different numbers of caches when data is transferred from the low-speed interface to the high-speed interface in this application.
Embodiment
To make the above objects, features and advantages of this application more obvious and understandable, the embodiments of this application are described in detail below with reference to the drawings.
Embodiment one
Refer to Fig. 2, a schematic structural diagram of one embodiment of a caching system of this application. As shown in Fig. 2, the system comprises a high-speed interface 201, a cache group 202, a low-speed interface 203, a status register 204 and a conditional command sequence 205, wherein the high-speed interface 201 is a single channel, the low-speed interface 203 is at least two channels, and the number of caches in the cache group 202 is at least one more than the number of channels of the low-speed interface.
The conditional command sequence 205 is used to control, via control commands, the idle high-speed interface 201 and low-speed interface 203 to perform data caching according to the caching conditions; the caching conditions comprise a data write condition for writing data into the cache group 202 and a data read condition for reading data from the cache group 202; the data write condition is that a cache in the empty state exists, and the data read condition is that a cache in the full state exists;
The high-speed interface 201 and the low-speed interface 203 are used, when idle and under the control of the control commands, to read the caching conditions and the state of each cache from the conditional command sequence 205 and the status register 204 respectively, judge from the cache states whether a caching condition is satisfied, perform a data write and update the cache state when the data write condition is satisfied, and perform a data read and update the cache state when the data read condition is satisfied;
The status register 204 is used to store the state of each cache.
For example, refer to Fig. 3, a schematic structural diagram of another embodiment of a caching system of this application. In the caching system of Fig. 3, channel H is the high-speed interface, the three channels A, B and C form the low-speed interface, and the cache group contains five caches 0, 1, 2, 3 and 4. The conditional command sequence stores the control commands, which direct the high-speed and low-speed interfaces to perform data caching according to the preset caching conditions. Meanwhile, the status register stores the states of the five caches in the cache group, as updated by the high-speed and low-speed interfaces. When data is transferred from the high-speed interface to the low-speed interface, an idle channel H obtains, under the control of the control commands, the data write condition from the conditional command sequence and the states of the five caches from the status register. Channel H then judges, according to the data write condition, whether a cache in the empty state exists; if so, the data write condition is met: channel H writes data into that empty cache, updates its state from empty to non-empty, and, after the write completes, updates it to full. Meanwhile, among channels A, B and C, an idle channel (say channel A) obtains, under the control of the control commands, the data read condition from the conditional command sequence and the cache states from the status register. Channel A then judges, according to the data read condition, whether a cache in the full state exists; if so, the data read condition is met: channel A reads data from that full cache, updates its state from full to non-full, and, after the read completes, updates it to empty. Conversely, when data is transferred from the low-speed interface to the high-speed interface, data caching proceeds analogously to the above process and is not repeated here.
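A minimal C sketch of the cache state transitions just described, with invented names and an assumed four-state encoding for the patent's empty / non-empty / full / non-full states (in hardware, the test-and-claim would be a single atomic operation on the status register):

```c
typedef enum {
    CACHE_EMPTY,     /* claimable by an idle input channel   */
    CACHE_WRITING,   /* "non-empty": a write is in progress  */
    CACHE_FULL,      /* claimable by an idle output channel  */
    CACHE_READING    /* "non-full": a read is in progress    */
} cache_state_t;

#define NUM_CACHES 5
static cache_state_t status_reg[NUM_CACHES];   /* models the status register */

/* Data write condition: a cache in the empty state exists. On success
   the cache is marked non-empty so no other channel can take it, and
   its index is returned; -1 means the condition is not met.          */
int claim_empty_cache(void) {
    for (int i = 0; i < NUM_CACHES; i++) {
        if (status_reg[i] == CACHE_EMPTY) {
            status_reg[i] = CACHE_WRITING;     /* empty -> non-empty */
            return i;
        }
    }
    return -1;
}

/* Data read condition: a cache in the full state exists. */
int claim_full_cache(void) {
    for (int i = 0; i < NUM_CACHES; i++) {
        if (status_reg[i] == CACHE_FULL) {
            status_reg[i] = CACHE_READING;     /* full -> non-full */
            return i;
        }
    }
    return -1;
}
```

A completed write then moves the cache from non-empty to full, and a completed read moves it from non-full back to empty, which is exactly the update sequence channel H and channel A perform above.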
In the prior art, the CPU controls each channel to perform data caching by polling or by interrupts, which complicates the CPU's operation. In this application, the conditional command sequence 205 configured by the CPU controls each channel to perform data caching automatically according to the preset caching conditions, thereby reducing the CPU's burden.
The CPU stores preconfigured control commands in the conditional command sequence, so that each channel automatically performs data caching according to the caching conditions under the control of those commands. When a channel reads a caching condition, it judges whether the condition currently holds; if it does, the channel executes the control command at once, performs the data caching and immediately updates the cache state; if not, the channel keeps re-evaluating the caching condition until it holds.
It should be noted that a higher data transfer rate may be obtained when the cache group contains two or more caches beyond the number of channels of the low-speed interface. However, if the utilization of a given cache is very low, hardware resources are wasted at the same time. To save hardware effectively while still improving the data transfer rate, the cache group 202 preferably contains one more cache than the number of channels of the low-speed interface.
As the above embodiment shows, with the caching conditions of the data buffering system of this application, data to be transferred is written into an empty cache through an idle input channel whenever an idle input channel and an empty cache exist, and data is read from a full cache through an idle output channel whenever an idle output channel and a full cache exist. Caches thereby avoid unnecessary waiting states and channels avoid unnecessary idle states, which improves the overall transfer rate when data is transferred between the high-speed interface and the low-speed interface.
Embodiment two
Based on the structure of the caching system shown in Fig. 1, an embodiment of this application provides a method of data caching in that caching system. The method comprises: when an input channel is idle and a cache in the empty state exists, writing data to be transferred into the empty cache through the idle input channel; and when an output channel is idle and a cache in the full state exists, reading data from the full cache through the idle output channel.
For example, channel H is the high-speed interface, channels A, B and C are the low-speed interface, and the caching system contains four caches 0, 1, 2 and 3. When data is transferred from the high-speed interface to the low-speed interface, the input channel is channel H and the output channels are channels A, B and C. After channel H has written data into cache 0, channel H is idle; if, among the four caches, only cache 2 is in the empty state, the idle channel H writes the data currently waiting for transfer into cache 2. If none of the four caches is empty, or channel H is not idle, the write waits until an empty cache exists and channel H is idle.
On the other side, after channel A has just finished reading data from cache 0, channel A is idle; if, among the four caches, cache 2 is in the full state, channel A reads the data from cache 2. If none of the four caches is full, or no output channel is idle, the read waits until a full cache and an idle output channel both exist.
When data is transferred from the high-speed interface to the low-speed interface, several output channels may be idle at once, and those idle output channels might simultaneously contend for the same full cache. To avoid this situation, reading data from a full cache through an idle output channel preferably comprises: when data is transferred from the high-speed interface to the low-speed interface, selecting the highest-priority output channel from at least two idle output channels according to preset priorities, and reading data from the full cache through the selected channel. For example, when data is transferred from the high-speed interface to the low-speed interface, the output channels are channels A, B and C. Suppose that, while some data is being transferred, cache 2 is the only full cache and two of the three output channels, A and B, finish their reads at the same time and become idle simultaneously. To ensure that channels A and B do not contend for the same cache 2 and cause a cache conflict, the highest-priority output channel is selected according to the preset priorities. For instance, the priorities of channels A, B and C may be set in advance so that channel A is highest, channel B next and channel C lowest; when channels A and B are both idle, channel A is selected first. Of course, the priorities of channels A, B and C may also be set arbitrarily according to the user's actual needs; this embodiment does not restrict the priority order of the channels.
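A minimal sketch of this fixed-priority selection, assuming channels are numbered A=0, B=1, C=2 and that an idle-channel bitmask is available (both assumptions for illustration, not details from the patent):

```c
#define NUM_OUT_CHANNELS 3   /* A=0 (highest priority), B=1, C=2 (lowest) */

/* Pick the highest-priority idle output channel for a full cache.
   Bit c of idle_mask is set when output channel c is idle.
   Returns the channel index, or -1 when no output channel is idle. */
int select_output_channel(unsigned idle_mask) {
    for (int c = 0; c < NUM_OUT_CHANNELS; c++)
        if (idle_mask & (1u << c))
            return c;        /* A beats B, B beats C */
    return -1;
}
```

With channels A and B both idle (idle_mask = 0x3), the function returns 0, i.e. channel A, matching the example above.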
Alternatively, instead of setting channel priorities and selecting the highest-priority channel, reading data from a full cache through an idle output channel may comprise: when data is transferred from the high-speed interface to the low-speed interface, selecting any one output channel from at least two idle output channels, and reading data from the full cache through the selected channel.
When data is transferred from the low-speed interface to the high-speed interface, several input channels may be idle and several caches may be empty at once, with more idle input channels than empty caches, so that input channels and caches cannot be matched one to one; the idle input channels might then contend for the same empty cache. To avoid this situation, writing data to be transferred into an empty cache through an idle input channel preferably comprises: when data is transferred from the low-speed interface to the high-speed interface, if the number of idle input channels in the caching system exceeds the number of caches in the empty state, selecting the higher-priority input channels according to the preset priority order, and writing data to be transferred into all the empty caches simultaneously through the selected input channels.
For example, when data is transferred from the low-speed interface to the high-speed interface, take the above scenario: the input channels are channels A, B and C. Suppose that, while some data is being transferred, caches 0 and 1 are empty and channels A, B and C are all idle, so there are more idle input channels than empty caches, and channels B and C might both try to occupy cache 1. To avoid the conflict, one approach selects the higher-priority input channels from channels A, B and C according to the preset priority order; for instance, with the priorities of channels A, B and C set in advance so that channel A is highest, channel B next and channel C lowest, channels A and B are selected under this rule, and the data to be transferred is written simultaneously into the empty caches 0 and 1 through the selected channels A and B.
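A sketch of this pairing, assuming (for illustration only) that empty caches are walked in index order and idle input channels are consumed in priority order:

```c
#include <stdbool.h>

#define NUM_IN_CHANNELS 3    /* A=0 (highest priority), B=1, C=2 (lowest) */
#define NUM_CACHES      4

/* Pair the highest-priority idle input channels with the empty caches,
   one channel per cache. channel_for_cache[i] receives the channel
   assigned to empty cache i; the return value is the number of pairs.
   In the example above (caches 0 and 1 empty; A, B and C all idle),
   channel A takes cache 0 and channel B takes cache 1.               */
int assign_inputs_to_empty_caches(unsigned idle_mask,
                                  const bool cache_empty[NUM_CACHES],
                                  int channel_for_cache[NUM_CACHES]) {
    int chan = 0, pairs = 0;
    for (int i = 0; i < NUM_CACHES; i++) {
        if (!cache_empty[i])
            continue;
        while (chan < NUM_IN_CHANNELS && !(idle_mask & (1u << chan)))
            chan++;                      /* next idle channel, by priority */
        if (chan >= NUM_IN_CHANNELS)
            break;                       /* fewer idle channels than empty caches */
        channel_for_cache[i] = chan++;   /* claim this channel */
        pairs++;
    }
    return pairs;
}
```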
Alternatively, instead of setting channel priorities and selecting the highest-priority input channels, writing data to be transferred into an empty cache through an idle input channel may comprise: when data is transferred from the low-speed interface to the high-speed interface, if the number of idle input channels in the caching system exceeds the number of caches in the empty state, selecting input channels arbitrarily, and writing data to be transferred into all the empty caches simultaneously through the selected input channels.
Of course, if two of the three input channels, A and B, finish their writes at the same time and become idle simultaneously while only cache 2 is in the empty state, the highest-priority input channel can likewise be selected according to the preset priorities, to ensure that channels A and B do not contend for the same cache 2 and cause a cache conflict. For instance, with the priorities of channels A, B and C set in advance so that channel A is highest, channel B next and channel C lowest, channel A is selected first when channels A and B are both idle. Again, the priorities of channels A, B and C may be set arbitrarily according to the user's actual needs; this embodiment does not restrict the priority order of the channels.
It should be noted that in the initial state of data caching, the caches, input channels and output channels are all idle, so a data write is performed first and a data read afterwards. That is, data to be transferred is first written into an empty cache through an idle input channel, and then read from a full cache through an idle output channel. As the caching process proceeds, since data writes execute whenever the write condition is satisfied and data reads whenever the read condition is satisfied, the read condition may become true before the write condition, in which case the data read is performed first and the data write afterwards; or the read and write conditions may become true at the same time, in which case the data write and the data read are performed simultaneously.
Accordingly, it should be noted that the embodiments of this application only constrain the conditions under which data writes and data reads execute, not the order in which they execute.
As the above embodiment shows, with the data caching method of this application, data to be transferred is written into an empty cache through an idle input channel whenever an empty cache and an idle input channel exist, and data is read from a full cache through an idle output channel whenever a full cache and an idle output channel exist. Idle channels and empty caches are thereby used more effectively: caches avoid unnecessary waiting states and channels avoid unnecessary idle states, which improves the overall transfer rate when data is transferred between the high-speed interface and the low-speed interface.
Embodiment three
The data caching process is described below from the channel side, with the channel as the executing agent. A channel in this embodiment may be an input channel or an output channel. Refer to Fig. 4, a flow chart of one embodiment of a data caching method of this application, which comprises the following steps:
Step 401: the channel is started by the CPU.
Step 402: when the channel is idle, it reads the caching condition from the conditional command sequence under the control of the control command.
For an input channel, the condition read is the data write condition; for an output channel, it is the data read condition.
Step 403: the channel reads the state of each cache from the status register under the control of the control command.
Step 404: judge from the state of each cache whether the current caching condition is satisfied; if so, go to step 405; if not, return to step 403.
For an input channel, the data write condition is satisfied when a cache in the empty state is found; for an output channel, the data read condition is satisfied when a cache in the full state is found.
Step 405: establish the correspondence between the channel and the cache, and update the cache state.
For example, for an input channel, when cache 3 is empty, the correspondence between the input channel and cache 3 is established and the state of cache 3 is updated from empty to non-empty, preventing any other channel from also occupying cache 3.
Step 406: perform the data caching through the channel.
Step 407: after the channel finishes the data caching, update the cache state and return to step 402.
For example, after an input channel has written its data into cache 3, the state of cache 3 is updated from non-empty to full.
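Steps 401 to 407 can be summarized as one loop per channel. The C sketch below is an illustrative single-threaded model with invented names (transfer_segment is a hypothetical stand-in for the actual transfer), and it glosses over the fact that in hardware the test of step 404 and the claim of step 405 form one atomic operation:

```c
#include <stdbool.h>

typedef enum { CACHE_EMPTY, CACHE_BUSY, CACHE_FULL } cache_state_t;

#define NUM_CACHES 4
static volatile cache_state_t status_reg[NUM_CACHES];

extern void transfer_segment(int cache, bool is_input);  /* hypothetical */

/* One channel's life after the CPU starts it (step 401). is_input selects
   the caching condition: the data write condition for an input channel,
   the data read condition for an output channel.                        */
void channel_run(bool is_input) {
    const cache_state_t wanted = is_input ? CACHE_EMPTY : CACHE_FULL;
    for (;;) {
        /* Steps 402-404: when idle, read the caching condition and the
           cache states, and keep re-checking until the condition holds. */
        int cache = -1;
        while (cache < 0) {
            for (int i = 0; i < NUM_CACHES; i++) {
                if (status_reg[i] == wanted) { cache = i; break; }
            }
        }
        /* Step 405: bind the channel to the cache and update the state
           so that no other channel can claim the same cache.            */
        status_reg[cache] = CACHE_BUSY;
        /* Step 406: perform the data caching through this channel. */
        transfer_segment(cache, is_input);
        /* Step 407: update the cache state, then return to step 402. */
        status_reg[cache] = is_input ? CACHE_FULL : CACHE_EMPTY;
    }
}
```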
The data caching method of this application and the effect of implementing it are explained below by comparison with a prior-art data caching method. In this embodiment's application scenario, the high-speed interface is channel H, the low-speed interface is channels A, B and C, and the number of caches, four, is one more than the number of low-speed channels. The preset priority order of channels A, B and C is: channel A highest, channel B next, channel C lowest. For ease of comparison and explanation, assume the data is transferred one segment at a time: channel H of the high-speed interface takes 1 time unit to transfer a segment, while the transfer speeds of channels A, B and C of the low-speed interface differ from one another and vary over time, each transfer taking 2 or 5 time units.
Refer to Fig. 5-1 and Fig. 5-2. In the waveforms, a black segment indicates that channel H of the high-speed interface is writing data into a cache, the number on the black segment being the sequence number of the data to be transferred; a white segment indicates that channel A, B or C of the low-speed interface is reading data from a cache, the letter on the white segment being the corresponding channel.
Fig. 5-1 is a waveform diagram of data caching when data is transferred from the high-speed interface to the low-speed interface using the existing circular buffering technique; the data is cached according to the prior-art method. Initially, all caches are empty and all output channels are idle. Following the cache order, channel H writes data 0, 1 and 2 into caches 0, 1 and 2 in turn; once those caches are full, channels A, B and C, following the channel order, read data 0, 1 and 2 from caches 0, 1 and 2. After channel H writes data 3 into cache 3, channel A's turn comes again according to the channel order: if channel A is idle, it starts immediately and reads the data in cache 3; if channel A is not idle, the read waits until channel A becomes idle and then channel A starts. The cycle repeats until all data has been transferred.
Fig. 5-2 is a waveform diagram of data caching when data is transferred from the high-speed interface to the low-speed interface using the caching technique of this application. Under the data caching method of this application, whenever a cache is empty, an idle input channel writes data to be transferred into that empty cache; whenever a cache is full, an idle output channel reads data from that full cache. Initially, all caches are empty and all output channels are idle; channel H writes data 0, 1 and 2 into caches 0, 1 and 2 in turn, and once those caches are full, channels A, B and C, following the set channel priorities, read data 0, 1 and 2 from caches 0, 1 and 2. By the time channel H has written data 2 into cache 2 and cache 2 has become full, channel A has already read data 0 out of cache 0, so caches 0 and 3 are both empty and channel A is idle. In the prior art, following the write order, data 3 would be written into cache 3. In this application, as long as some cache is empty, an idle input channel can write data 3 into it; therefore, as shown in Fig. 5-2, channel H writes the data 3 to be transferred into the empty cache 0, rather than into cache 3. After data 3 has been written into the empty cache 0, only cache 3 remains empty; channel H writes data 4 into cache 3, and once cache 3 is full, channel C has just read data 2 out of cache 2 and is idle, so the idle channel C reads data 4 from cache 3. Proceeding in this way, when data 9 has been written into cache 1 and cache 1 has become full, channels B and C are idle at the same time; following the set channel priorities, channel B reads data 9 from cache 1.
Fig. 5-3 compares the waveforms of Fig. 5-1 and Fig. 5-2. Comparing the two figures clearly shows that the data caching method of this application consumes less time than the data caching method of the prior art.
Likewise, Fig. 6-1 is a waveform diagram of data caching when data is transferred from the low-speed interface to the high-speed interface using the existing circular buffering technique; the data is cached according to the prior-art method. Initially, all caches are empty and all input channels are idle. Channels A, B and C write data 0, 1 and 2 into caches 0, 1 and 2 simultaneously. Once caches 0, 1 and 2 are full, channel H reads data 0, 1 and 2 from caches 0, 1 and 2 in cache order. After channel H has read data 2 from cache 2, channel A's turn comes again according to the channel order, and channel A should write data into cache 3: if channel A is idle, it starts immediately and writes into cache 3 through channel A; if channel A is not idle, the write waits until channel A becomes idle and then channel A starts. The cycle repeats until all data has been transferred.
Fig. 6-2 is a waveform diagram of data caching when data is transferred from the low-speed interface to the high-speed interface using the caching technique of this application; the data is cached according to the method of this application. Initially, all caches are empty and all input channels are idle. Channels A, B and C write data 0, 1 and 2 into caches 0, 1 and 2 simultaneously. When cache 0 becomes full, channel H reads data 0 from cache 0; because channel C writes faster than channel B, cache 2 becomes full before cache 1, so channel H reads data 2 from cache 2 first and then reads data 1 from cache 1. When channels A and C finish their writes and become idle at the same time, following the set channel priorities, channel A writes data 3 into cache 3 first. Proceeding in this way, the process continues until all data has been transferred.
Fig. 6-3 compares the waveforms of Fig. 6-1 and Fig. 6-2. Comparing the two figures clearly shows that the data caching method of this application consumes less time than the data caching method of the prior art.
In addition, refer to Fig. 7, a waveform comparison of data caching with different numbers of caches when data is transferred from the high-speed interface to the low-speed interface in this application. In the first configuration, the data is cached according to the method of this application with one more cache than the number of low-speed channels; the resulting waveform is the first diagram in Fig. 7. In the second configuration, the data is cached according to the method of this application with two more caches than the number of low-speed channels; the resulting waveform is the second diagram in Fig. 7. Clearly, the second configuration consumes less time than the first; that is, in this embodiment, a higher data transfer rate is obtained when the cache group contains two more caches than the number of low-speed channels.
Likewise, refer to Fig. 8, a waveform comparison of data caching with different numbers of caches when data is transferred from the low-speed interface to the high-speed interface in this application. In the first configuration, the data is cached according to the method of this application with one more cache than the number of low-speed channels; the resulting waveform is the first diagram in Fig. 8. In the second configuration, the data is cached according to the method of this application with two more caches than the number of low-speed channels; the resulting waveform is the second diagram in Fig. 8. Clearly, the second configuration consumes less time than the first; that is, in this embodiment, a higher data transfer rate is obtained when the cache group contains two more caches than the number of low-speed channels.
It should be noted that those of ordinary skill in the art will appreciate that all or part of the flows in the above method embodiments can be accomplished by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and when executed may comprise the flows of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The caching system and the data caching method provided by this application have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of this application, and the above description of the embodiments is only meant to help in understanding the method of this application and its core idea. Meanwhile, for those of ordinary skill in the art, changes can be made to the specific implementations and the scope of application in accordance with the ideas of this application. In summary, the contents of this description should not be construed as limiting this application.

Claims (7)

1. A caching system, characterized in that it comprises a high-speed interface, a cache group, a low-speed interface, a status register and a conditional command sequence, wherein the high-speed interface is a single channel, the low-speed interface is at least two channels, and the number of caches in the cache group is at least one more than the number of channels of the low-speed interface;
the conditional command sequence is used to control, via control commands, the idle high-speed interface and low-speed interface to perform data caching according to caching conditions, the caching conditions comprising a data write condition for writing data into the cache group and a data read condition for reading data from the cache group, wherein the data write condition is that a cache in the empty state exists and the data read condition is that a cache in the full state exists;
the high-speed interface and the low-speed interface are used, when idle and under the control of the control commands, to read the caching conditions and the state of each cache from the conditional command sequence and the status register respectively, to judge from the cache states whether a caching condition is satisfied, to perform a data write and update the cache state when the data write condition is satisfied, and to perform a data read and update the cache state when the data read condition is satisfied;
the status register is used to store the state of each cache.
2. The caching system according to claim 1, characterized in that the number of caches in the cache group is one more than the number of channels of the low-speed interface.
3. A method of data caching in the caching system of claim 1, characterized in that it comprises:
when an input channel is idle and a cache in the empty state exists, writing data to be transferred into the empty cache through the idle input channel; and when an output channel is idle and a cache in the full state exists, reading data from the full cache through the idle output channel.
4. The data caching method according to claim 3, characterized in that writing data to be transferred into an empty cache through an idle input channel comprises:
when data is transferred from the low-speed interface to the high-speed interface, if the number of idle input channels in the caching system exceeds the number of caches in the empty state, selecting the higher-priority input channels according to a preset priority order;
writing data to be transferred into all the empty caches simultaneously through the selected input channels.
5. The data caching method according to claim 3, characterized in that writing data to be transferred into an empty cache through an idle input channel comprises:
when data is transferred from the low-speed interface to the high-speed interface, if the number of idle input channels in the caching system exceeds the number of caches in the empty state, selecting input channels arbitrarily;
writing data to be transferred into all the empty caches simultaneously through the selected input channels.
6. The data caching method according to claim 3, characterized in that reading data from a full cache through an idle output channel comprises:
when data is transferred from the high-speed interface to the low-speed interface, selecting the highest-priority output channel from at least two idle output channels according to preset priorities;
reading data from the full cache through the selected channel.
7. The data caching method according to claim 3, characterized in that reading data from a full cache through an idle output channel comprises:
when data is transferred from the high-speed interface to the low-speed interface, selecting any one output channel from at least two idle output channels;
reading data from the full cache through the selected channel.
CN201010268911.4A 2010-08-27 2010-08-27 Caching system and method of data caching Active CN102385555B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201010268911.4A CN102385555B (en) 2010-08-27 2010-08-27 Caching system and method of data caching
HK12109119.3A HK1168435A1 (en) 2010-08-27 2012-09-17 A buffer system and a method for data buffering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010268911.4A CN102385555B (en) 2010-08-27 2010-08-27 Caching system and method of data caching

Publications (2)

Publication Number Publication Date
CN102385555A true CN102385555A (en) 2012-03-21
CN102385555B CN102385555B (en) 2015-03-04

Family

ID=45824984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010268911.4A Active CN102385555B (en) 2010-08-27 2010-08-27 Caching system and method of data caching

Country Status (2)

Country Link
CN (1) CN102385555B (en)
HK (1) HK1168435A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103744796A (en) * 2013-09-29 2014-04-23 记忆科技(深圳)有限公司 Caching method and system by means of uSSD
CN108574787A (en) * 2017-03-09 2018-09-25 柯尼卡美能达株式会社 Image forming apparatus
WO2021135763A1 (en) * 2019-12-30 2021-07-08 深圳市中兴微电子技术有限公司 Data processing method and apparatus, storage medium, and electronic apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6530000B1 (en) * 1999-03-24 2003-03-04 Qlogic Corporation Methods and systems for arbitrating access to a disk controller buffer memory by allocating various amounts of times to different accessing units
CN1529243A (en) * 2003-09-29 2004-09-15 港湾网络有限公司 Method and structure for realizing router flow management chip buffer-storage management
CN1585373A (en) * 2004-05-28 2005-02-23 中兴通讯股份有限公司 Ping pong buffer device
CN2724089Y (en) * 2003-09-29 2005-09-07 港湾网络有限公司 Realizing structure for router flow management chip buffer storage

Also Published As

Publication number Publication date
CN102385555B (en) 2015-03-04
HK1168435A1 (en) 2012-12-28

Similar Documents

Publication Publication Date Title
USRE49875E1 (en) Memory system having high data transfer efficiency and host controller
CN109669888A (en) A kind of configurable and efficient embedded Nor-Flash controller and control method
CN109614049B (en) Flash memory control method, flash memory controller and flash memory system
CN101387988A (en) Computer having flash memory and method of operating flash memory
CA2587681C (en) Multimedia card interface method, computer program product and apparatus
CN101706760B (en) Matrix transposition automatic control circuit system and matrix transposition method
US20080005387A1 (en) Semiconductor device and data transfer method
US20220365892A1 (en) Accelerating Method of Executing Comparison Functions and Accelerating System of Executing Comparison Functions
CN102385555A (en) Caching system and method of data caching
WO2022002095A1 (en) Memory initialisation apparatus and method, and computer system
CN101232434B (en) Apparatus for performing asynchronous data transmission with double port RAM
US20180173651A1 (en) Data storage device access method, device and system
CN112256203B (en) Writing method, device, equipment, medium and system of FLASH memory
JP2009199384A (en) Data processing apparatus
CN109935252B (en) Memory device and operation method thereof
EP1988463A1 (en) Memory control apparatus and memory control method
US20200348932A1 (en) Memory control system with a sequence processing unit
JP2009059276A (en) Data processing apparatus and program
CN116991764B (en) High-performance Flash controller and embedded system
US8166228B2 (en) Non-volatile memory system and method for reading and storing sub-data during partially overlapping periods
US6085297A (en) Single-chip memory system including buffer
KR102242957B1 (en) High speed NAND memory system and high speed NAND memory package device
CN116974480A (en) Flash memory chip access method, device, equipment and medium
JP4509946B2 (en) Interrupt priority setting circuit
WO2006046272A1 (en) Memory access device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1168435

Country of ref document: HK

C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1168435

Country of ref document: HK