CN103345452B - Data caching method in multiple buffer storages according to weight information - Google Patents


Info

Publication number
CN103345452B
Authority
CN
China
Prior art keywords
data
buffer
cached data
cache memory
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310302423.4A
Other languages
Chinese (zh)
Other versions
CN103345452A (en)
Inventor
毛永泉
陈平
江多默
游贵喜
卢新灿
汪勇军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Auspicious Poly-Infotech Share Co., Ltd.
Original Assignee
Fujian Auspicious Poly-Infotech Share Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Auspicious Poly-Infotech Share Co., Ltd.
Priority to CN201310302423.4A
Publication of CN103345452A
Application granted
Publication of CN103345452B
Legal status: Active
Anticipated expiration


Abstract

A method for caching data in multiple cache memories according to weight information comprises the steps of: receiving data to be cached together with its weight information; writing the data to be cached into a first cache memory when the weight information indicates a low weight; writing the data to be cached into a second cache memory when the weight information indicates a high weight; and, when the first or second cache memory cannot satisfy the caching requirement, performing replacement operations in each based on the weight information. By storing data of different weights in separate cache memories and replacing entries in each according to the weight information of the data to be cached, the method improves the overall performance of the cache.

Description

A method for caching data in multiple cache memories according to weight information
Technical field
The present invention relates to the field of data storage, and in particular to a data caching method that uses multiple cache memories and buffers data according to its weight information. The method performs data caching during system operation and applies replacement processing to the data held in the cache memories, so as to improve caching performance and efficiency.
Background technology
When a CPU reads data, it first searches the cache; on a hit the data is read immediately and handed to the CPU for processing. On a miss, the data is read from main memory at a comparatively slow speed and handed to the CPU, and at the same time the block containing that data is loaded into the cache, so that subsequent reads of the whole block can be served from the cache without invoking main memory again. This reading mechanism gives the CPU a very high cache hit rate (about 90% for most CPUs): roughly 90% of the data the CPU reads next is already in the cache, and only about 10% must be read from main memory. This greatly reduces the time the CPU spends reading main memory directly and allows the CPU to read data essentially without waiting. In general, the CPU reads data from the cache first and from main memory second.
The cache holds a copy of only a small part of the data in main memory, so when the CPU looks for data in the cache it may fail to find it (because that data has not yet been copied from main memory into the cache). The CPU then fetches the data from main memory, which slows the system down, but it also copies the data into the cache so that the next access need not go to main memory. As time passes, the most frequently accessed data changes: data that was rarely accessed a moment ago may now be accessed frequently, while data that was most frequently accessed may now be accessed rarely. The data in the cache must therefore be replaced regularly according to some algorithm, so that the cache always holds the most frequently accessed data. Existing replacement algorithms, however, are often very complex.
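The read-through behaviour described above can be sketched as follows. This is a minimal illustration, not the patent's method: the class, names, and block size are assumptions; on a miss the whole block containing the address is loaded so that later reads of the block hit.

```python
# Minimal sketch of a read-through cache: on a miss, load the whole block
# from main memory so subsequent reads of that block are served from cache.
BLOCK_SIZE = 4

class ReadThroughCache:
    def __init__(self, main_memory):
        self.main = main_memory          # dict: address -> value
        self.blocks = {}                 # block number -> list of values

    def read(self, addr):
        block_no, offset = divmod(addr, BLOCK_SIZE)
        if block_no not in self.blocks:  # miss: copy the whole block in
            base = block_no * BLOCK_SIZE
            self.blocks[block_no] = [self.main.get(base + i)
                                     for i in range(BLOCK_SIZE)]
        return self.blocks[block_no][offset]  # hit path: no memory access

main = {i: i * 10 for i in range(16)}
cache = ReadThroughCache(main)
print(cache.read(5))   # miss: loads block 1 (addresses 4-7), returns 50
print(cache.read(6))   # hit: same block, served from the cache
```

A real cache would also bound the number of resident blocks and evict by some policy, which is exactly the complexity the patent's weight-based scheme aims to reduce.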
At present, a cache is commonly placed between the host processor and main memory. To improve the efficiency of reading and writing data, a cache controller, operating at a frequency between those of the host processor and main memory, is provided together with the cache. Writing external data to main memory then proceeds as follows: a data write instruction sent from outside via the host processor is passed to the cache controller; the cache controller checks whether the cache has free storage space; if so, it writes the data carried in the write instruction directly into the free space; if not, it determines the least frequently used cached data block from the usage frequencies of the blocks in the cache, clears that block and releases the storage space it occupies, and then writes the data carried in the write instruction into the released space. After the newly written data is in place, the cache controller reads it from the cache and writes it to main memory, completing the write of external data to main memory.
Reading data from main memory for an external requester proceeds as follows: a data read request sent from outside via the host processor is passed to the cache controller; the cache controller checks whether the requested data is stored in the cache; if so, it sends the corresponding cached data directly to the host processor; if not, it forwards the read request to main memory, main memory sends the requested data to the cache controller, the cache controller stores it in the cache, and the data is then read out of the cache and sent via the host processor to the external requester. Existing replacement algorithms require computation, in some cases very complex computation and lookup, which lengthens write and access times.
Summary of the invention
To solve the technical problem of long write and access times, the invention provides a method for caching data in multiple cache memories according to weight information. Multiple cache memories are used, and data to be cached with different weights is stored in different cache memories, i.e., data is processed separately according to its weight information. The cached data comprises at least two classes, low-weight cached data and high-weight cached data; the weights may be further subdivided if necessary, for example into low-weight, middle-weight, and high-weight cached data. Furthermore, different replacement policies are applied to cached data of different weights, which simplifies the complex replacement algorithms of the prior art.
Specifically, the invention discloses a method for caching data in multiple cache memories according to weight information, the method comprising:
S10) receiving data to be cached and its weight information, wherein the amount of the data to be cached is no greater than the total storage space of the first cache memory, and likewise no greater than the total storage space of the second cache memory;
S20) if the weight information indicates that the weight of the data to be cached is low, writing the data to be cached into the first cache memory;
S30) if the weight information indicates that the weight of the data to be cached is high, writing the data to be cached into the second cache memory.
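Steps S10-S30 amount to a dispatch on the weight information. The sketch below illustrates this under simplifying assumptions not taken from the patent: caches are plain dicts, weights are the string labels "low" and "high", and data is identified by a caller-chosen key.

```python
# Sketch of steps S10-S30: route data to the first or second cache memory
# according to its weight information (labels and dict caches are assumed).
LOW, HIGH = "low", "high"

def cache_by_weight(data_id, data, weight, first_cache, second_cache):
    """S20/S30: low-weight data goes to the first cache memory,
    high-weight data to the second."""
    target = first_cache if weight == LOW else second_cache
    target[data_id] = data
    return target

first, second = {}, {}
cache_by_weight("user-log", b"...", LOW, first, second)    # S20 path
cache_by_weight("sys-config", b"...", HIGH, first, second) # S30 path
print(sorted(first), sorted(second))  # ['user-log'] ['sys-config']
```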
Preferably, step S20 comprises the following steps:
S22) if the total free storage space in the first cache memory is less than the amount of data to be cached, clearing the cached data in the first cache memory to release the storage space it occupies, and then writing the data to be cached into the first cache memory;
S23) if the total free storage space in the first cache memory is not less than the amount of data to be cached, writing the data to be cached into the first cache memory.
Preferably, step S30 comprises the following steps:
S32) if the total free storage space in the second cache memory is less than the amount of data to be cached, unloading the cached data in the second cache memory to main memory to release the storage space it occupies, and then writing the data to be cached into the second cache memory;
S33) if the total free storage space in the second cache memory is not less than the amount of data to be cached, writing the data to be cached into the second cache memory.
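The asymmetry between S22 and S32 is the core of the scheme: low-weight victims are simply discarded, while high-weight victims are written back to main memory before being evicted. A sketch under assumed simplifications (dict-based stores, "amount of data" measured as entry count):

```python
# Sketch of the two weight-specific replacement policies (S22/S23, S32/S33).
def write_low(cache, capacity, data_id, data):
    if capacity - len(cache) < 1:       # S22: not enough free space
        cache.clear()                   # low weight: discard, no write-back
    cache[data_id] = data               # S23 / after clearing: write

def write_high(cache, capacity, data_id, data, main_memory):
    if capacity - len(cache) < 1:       # S32: not enough free space
        main_memory.update(cache)       # high weight: unload to main memory
        cache.clear()
    cache[data_id] = data               # S33 / after unloading: write

high_cache, main = {}, {}
write_high(high_cache, 1, "a", 1, main)
write_high(high_cache, 1, "b", 2, main)   # evicts "a" into main memory
print(high_cache, main)  # {'b': 2} {'a': 1}
```

Because the low-weight path never touches main memory, neither replacement policy needs the usage-frequency computation of the prior art.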
Preferably, the total storage space of the first cache memory is the same as the total storage space of the second cache memory.
Preferably, the amount of data to be cached is no greater than the total storage space of the first cache memory and of the second cache memory.
Preferably, after the data to be cached is written into the first cache memory, the storage relation between the data and the first cache memory is updated in a cache mapping table; and after the data to be cached is written into the second cache memory, the storage relation between the data and the second cache memory is updated in the cache mapping table.
Preferably, the cache mapping table is stored in a mapping buffer.
Preferably, the data to be cached is read from the first cache memory or the second cache memory according to its weight and the cache mapping table.
Preferably, the cache mapping table comprises the following fields: data number, cache memory number, storage address, and weight information.
Preferably, the weight information indicates whether the weight of the data to be cached is high or low.
Preferably, the cache memory number indicates the first cache memory or the second cache memory.
Preferably, writing the data to be cached into the first cache memory comprises writing the data together with its weight information into the first cache memory; and writing the data to be cached into the second cache memory comprises writing the data together with its weight information into the second cache memory.
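The cache mapping table can be sketched with the four fields the patent lists. The dataclass, the list-based "mapping buffer", and the convention that low weight maps to cache 1 and high weight to cache 2 are illustrative assumptions, not the patent's concrete encoding.

```python
# Sketch of a cache mapping table entry with the four listed fields.
from dataclasses import dataclass

@dataclass
class MappingEntry:
    data_number: str        # identifies the data to be cached
    cache_number: int       # 1 = first cache memory, 2 = second cache memory
    address: int            # storage address inside that cache memory
    weight: str             # weight information: "low" or "high"

mapping_table = []          # held in the mapping buffer

def record_write(data_number, weight, address):
    """Update the table after a write: low weight -> cache 1, high -> cache 2."""
    cache_number = 1 if weight == "low" else 2
    mapping_table.append(MappingEntry(data_number, cache_number, address, weight))

record_write("blk-7", "high", 0x40)
print(mapping_table[0].cache_number)  # 2
```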
The invention also provides another method for caching data in multiple cache memories according to weight information, comprising:
receiving data to be cached and its weight information;
judging whether the total free storage space of an intermediate buffer is less than the amount of data to be cached; if it is not less, writing the data to be cached directly into the intermediate buffer; otherwise, unloading the low-weight and high-weight cached data in the intermediate buffer into the first cache memory and the second cache memory respectively, based on the weight information, to release the storage space of the intermediate buffer;
writing the data to be cached into the intermediate buffer.
Unloading the low-weight and high-weight cached data in the intermediate buffer into the first and second cache memories respectively, based on the weight information, comprises:
for the low-weight cached data, judging whether the total free storage space in the first cache memory is less than the amount of that data; if it is not less, writing the low-weight cached data directly into the first cache memory; if it is less, clearing the cached data in the first cache memory to release the storage space it occupies, and then writing the low-weight cached data into the first cache memory. For the high-weight cached data, judging whether the total free storage space in the second cache memory is less than the amount of that data; if it is not less, writing the high-weight cached data directly into the second cache memory; if it is less, unloading the cached data in the second cache memory to main memory to release the storage space it occupies, and then writing the high-weight cached data into the second cache memory.
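The intermediate-buffer method above can be sketched as follows. The structure is an assumption for illustration: the buffer holds `(data, weight)` pairs, capacity is counted in entries, and a full buffer is flushed entirely by weight before the new entry is admitted.

```python
# Sketch of the second method: write to an intermediate buffer first; when
# it is full, unload its contents by weight into the two cache memories.
def write_via_intermediate(entry_id, data, weight, intermediate, capacity,
                           first_cache, second_cache):
    if capacity - len(intermediate) < 1:          # buffer cannot hold the entry
        for eid, (edata, eweight) in intermediate.items():
            if eweight == "low":
                first_cache[eid] = edata          # low weight -> first cache
            else:
                second_cache[eid] = edata         # high weight -> second cache
        intermediate.clear()                      # release the buffer space
    intermediate[entry_id] = (data, weight)

inter, first, second = {}, {}, {}
write_via_intermediate("a", 1, "low", inter, 2, first, second)
write_via_intermediate("b", 2, "high", inter, 2, first, second)
write_via_intermediate("c", 3, "low", inter, 2, first, second)  # triggers flush
print(sorted(first), sorted(second), sorted(inter))  # ['a'] ['b'] ['c']
```

Note that the common case (buffer not full) never inspects the weight at all, which is the efficiency gain the third embodiment claims.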
Preferably, after the data to be cached is written into the first cache memory, the storage relation between the data and the first cache memory is updated in the cache mapping table; after it is written into the second cache memory, the storage relation between the data and the second cache memory is updated in the cache mapping table; and after it is written into the intermediate buffer, the storage relation between the data and the intermediate buffer is updated in the cache mapping table.
Preferably, writing the data to be cached into the first cache memory comprises writing the data together with its weight information into the first cache memory; and writing the data to be cached into the second cache memory comprises writing the data together with its weight information into the second cache memory.
The method of the present invention for caching data in multiple cache memories according to weight information has the following effects: it simplifies complex computation and lookup algorithms; data of different weights is cached in different cache memories and replaced separately according to weight. Setting weights and performing replacement accordingly improves the hit rate of processor reads from the cache, the access efficiency of the system, and the throughput of the system.
Brief description of the drawings
The accompanying drawings are included to provide a further understanding of the invention; they form a part of the specification and, together with the specification, explain the principles of the invention. In the drawings:
Fig. 1 is a structural block diagram of the processor of the present invention;
Fig. 2 shows a data caching method according to an embodiment of the invention;
Fig. 3 shows a data caching method according to another embodiment of the invention.
Embodiments
Preferred embodiments of the present invention are further described below with reference to the accompanying drawings.
In the present invention, "low weight" and "of low weight" refer to the same thing, namely having a lower weight. Likewise, "high weight" and "of high weight" refer to having a higher weight, and "middle weight" and "of middle weight" refer to having a weight of intermediate grade. The weights of high-weight, middle-weight, and low-weight data decrease in that order. Data to be cached with a high weight is system data, core data, critical data, data that has recently been accessed and written, or data with a higher access and write frequency, etc. Data to be cached with a low weight is user-specific data, application-specific data, data that has not been accessed for a long time, or data with a lower access and write frequency, etc. Data with a middle weight has a weight between that of high-weight and low-weight data, for example data shared by multiple applications or by multiple users. More specifically, in the present invention the total free storage space in a cache memory refers to the total free storage space of the buffer units in that cache memory, and the total storage space in a cache memory refers to the total storage space of the buffer units in that cache memory. "Receiving data to be cached" likewise refers to receiving the data that is to be cached. "Buffered data" and "cached data" refer to the same thing, as do "weight information", "weight attribute", and "weight attribute information".
Referring to Fig. 1, the processor of the present invention comprises a processor core, a first cache memory, and a second cache memory, as well as a main memory interface through which it is coupled to main memory; it may also comprise a mapping buffer and an intermediate buffer (not shown). The first cache memory comprises a cache controller and buffer units; likewise, the second cache memory, the mapping buffer, and the intermediate buffer each comprise a cache controller and buffer units. The cache controller controls writing to and reading from the buffer units, as well as replacement and clearing. The first cache memory, second cache memory, mapping buffer, and intermediate buffer may have the same storage capacity, i.e., the same total storage space; alternatively, they may have different storage capacities, i.e., different totals of storage space.
Referring to Fig. 2, a first embodiment of the method of the present invention for caching data in multiple cache memories according to weight information is described in detail. In step S10, the processor core receives a write or read instruction from an external device or external application, and at the same time receives the data to be cached and its weight information. The weight information may be separate data or may be carried in the header of the data to be cached, and may be low-weight, high-weight, or middle-weight information indicating the weight of the data to be cached, as described above. In the present invention, the amount of the data to be cached is no greater than the total storage space of the first cache memory, and likewise no greater than the total storage space of the second cache memory.
In response to the write or read instruction from the external device or application, when caching is needed, the processor core first judges the weight information. In step S20, if the weight information indicates that the weight of the data to be cached is low, the data is written into the first cache memory; in step S30, if the weight information indicates a high weight, the data is written into the second cache memory; in step S40, the processor core updates the cache mapping table, which is stored in the mapping buffer and indicates the storage relation between the data to be cached and the first and second cache memories. For example, when an external device or application sends a write instruction, the processor core receives the data to be cached and its weight information and, in response to the write instruction, performs caching: it first sends a cache instruction, based on the weight information, to the cache controller in the first or the second cache memory; e.g., if the weight information indicates a low weight, the processor core sends the cache instruction to the first cache memory, and if it indicates a high weight, to the second cache memory. If the weight information indicates a low weight, the cache controller in the first cache memory writes the data to be cached into the first cache memory in response to the cache instruction; if it indicates a high weight, the cache controller in the second cache memory writes the data into the second cache memory in response to the cache instruction.
In the present invention, the amount of data to be cached is no greater than the total storage space of the first cache memory, nor of the second; for example, the amount of data is controlled by the external device or application before the processor core receives it. If the external device or application does not control the amount of data, then after receiving the data the processor core sends a data-amount confirmation instruction to the first and second cache controllers, which judge respectively whether the amounts of low-weight and high-weight data exceed the total storage space of the first and second cache memories. If the first and second cache memories lack the capacity to store the data to be cached, the data is written directly into main memory. Finally, in response to a cache-completion instruction from the cache controller in the first or second cache memory, the processor core updates the cache mapping table: after data is written into the first cache memory, the storage relation between the data and the first cache memory is updated in the cache mapping table, and after data is written into the second cache memory, the storage relation between the data and the second cache memory is updated in the cache mapping table.
Referring to Fig. 3, a second embodiment of the method of the present invention for caching data in multiple cache memories according to weight information is described in detail. As in the first embodiment, in step S10 the processor core receives a write or read instruction from an external device or application, and at the same time receives the data to be cached and its weight information. In the present invention, the amount of the data to be cached is no greater than the total storage space of the first cache memory, nor of the second.
Then, in response to the write or read instruction sent by the external device or application, when caching is needed, the processor core first judges the weight information of the data to be cached in step S11, and performs caching according to whether the indicated weight is low or high. When the weight information indicates a low weight, the processor core sends a cache instruction to the cache controller in the first cache memory; in step S21 that controller judges whether the total free storage space in the first cache memory is less than the amount of data to be cached. If it is not less, the data is written directly into the first cache memory; if it is less, the cached data in the first cache memory is cleared, the storage space it occupies is released, and the data to be cached is then written into the first cache memory. When the weight information indicates a high weight, the processor core sends a cache instruction to the cache controller in the second cache memory; in step S31 that controller judges whether the total free storage space in the second cache memory is less than the amount of data to be cached. If it is not less, the data is written directly into the second cache memory; if it is less, the cached data in the second cache memory is unloaded to main memory, the storage space it occupies is released, and the data to be cached is then written into the second cache memory. Since the data currently to be cached is the data that will be accessed in the near term, placing it in the cache preferentially improves access speed significantly. Low-weight cached data, by virtue of its low-weight attribute (e.g., it will not be accessed again soon), can be replaced directly when new data to be cached arrives, or even simply cleared as described in the present invention. High-weight cached data, by virtue of its high-weight attribute (e.g., it will still be accessed soon), is unloaded to main memory when new data to be cached arrives, and the new data is cached preferentially. Finally, the processor core updates the cache mapping table; the specific method is the same as the update method in the first embodiment.
In the third embodiment (accompanying drawing is not shown), processor is except comprising the first memory buffer, the second memory buffer, mapped cache device, also include intermediate buffer, intermediate buffer and the first memory buffer, the second memory buffer, map impact damper there is identical structure, but its effect is different from the former, and the function of the cache controller in intermediate buffer is also different from the former cache controller.Preferentially in intermediate buffer, buffer memory is carried out in data cached method in the third embodiment.Particularly, first, in the step s 100, processor core receives write from external unit or applications or reading command, receives simultaneously and treats data cached and treat data cached weight information.In the present invention, wherein saidly the total amount that data cached data volume is not more than the storage space of the first memory buffer, the second memory buffer and intermediate buffer is treated.Then, the write sent in response to external unit or applications or reading command, when needs carry out caching process, processor core sends cache instruction to the cache controller in intermediate buffer, cache controller in intermediate buffer is in response to the cache instruction received from processor core, judge in step S101 the total amount of the idle storage space of described intermediate buffer treats data cached data volume described in whether being less than, if treat data cached data volume described in the total amount of the idle storage space of described intermediate buffer is not less than, then direct described treating data cachedly is written in described intermediate buffer, otherwise the data cached weight information in reading intermediate buffer, and based on read data cached weight information respectively by data cached for the low weight in intermediate buffer and data cached unloading to the first memory buffer of high weight 
and the second memory buffer, the storage space of the intermediate buffer is released, and the data to be cached is then written into the intermediate buffer. The method of flushing the low-weight and high-weight cached data from the intermediate buffer to the first memory buffer and the second memory buffer, respectively, according to the weight information of the data to be cached is similar to the embodiment shown in Figure 3 of the present invention. Specifically, the cache controller of the intermediate buffer sends replacement instructions, based on the weight information, to the cache controllers of the first memory buffer and the second memory buffer. For the low-weight cached data, in response to the replacement instruction sent by the intermediate buffer, the cache controller of the first memory buffer first judges, in step S121, whether the total free storage space in the first memory buffer is less than the data volume of the low-weight cached data. If it is not less, the low-weight cached data is written directly into the first memory buffer; if it is less, the cached data in the first memory buffer is cleared, the storage space it occupied is released, and the low-weight cached data is then written into the first memory buffer. For the high-weight cached data, the cache controller of the second memory buffer first judges, in step S131, whether the total free storage space in the second memory buffer is less than the data volume of the high-weight cached data. If it is not less, the high-weight cached data is written directly into the second memory buffer; if it is less, the cached data in the second memory buffer is flushed to main memory, the storage space it occupied is released, and the high-weight cached data is then written into the second memory buffer.
In this embodiment, adding the intermediate buffer as an intermediate cache means that when the volume of data to be buffered is relatively small, writing it only into the intermediate buffer is sufficient to meet the application's requirements; this removes the step of determining the weight information and improves buffering efficiency. Moreover, when the volume of data to be buffered is relatively large, replacing the cached data in the first memory buffer and the second memory buffer separately according to the weight information reduces algorithmic complexity and makes cache writes and replacements more robust. Finally, the cache mapping table is updated: after the data to be cached is written into the first memory buffer, the storage relation between the data and the first memory buffer is recorded in the cache mapping table; after the data is written into the second memory buffer, the storage relation between the data and the second memory buffer is recorded in the cache mapping table; and after the data is written into the intermediate buffer, the storage relation between the data and the intermediate buffer is recorded in the cache mapping table.
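The weight-dependent flush logic described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the class, field, and function names are assumptions introduced for the sketch:

```python
class Buffer:
    """Fixed-capacity buffer; entries map a data number to (size, weight)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}

    def free(self):
        return self.capacity - sum(size for size, _ in self.entries.values())

def flush_intermediate(intermediate, first, second, main_memory):
    """Empty the intermediate buffer: low-weight entries move to the first
    buffer (which is simply cleared when full), high-weight entries move to
    the second buffer (which is written back to main memory when full)."""
    for key, (size, weight) in list(intermediate.entries.items()):
        target = first if weight == "low" else second
        if target.free() < size:                     # steps S121 / S131
            if target is second:
                main_memory.update(second.entries)   # preserve high-weight data
            target.entries.clear()                   # release the occupied space
        target.entries[key] = (size, weight)
        del intermediate.entries[key]
```

Note how the sketch mirrors the asymmetry in the description: low-weight data may simply be discarded to make room, while high-weight data is always written back to main memory before its space is reclaimed.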
In the various embodiments of the present invention, the cache mapping table is stored in a mapping buffer. The data to be cached is read from the first memory buffer, the second memory buffer, or the intermediate buffer according to its weight and the cache mapping table. The cache mapping table comprises the following fields: data number, memory buffer number, memory address, and weight information. The weight information indicates whether the weight of the data to be cached is a high weight or a low weight, and the memory buffer number indicates the first memory buffer, the second memory buffer, or the intermediate buffer. Writing the data to be cached into the first memory buffer comprises writing the data together with its weight information into the first memory buffer; writing the data to be cached into the second memory buffer comprises writing the data together with its weight information into the second memory buffer.
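The mapping-table fields just listed, and the read path that uses them, can be sketched as follows. The concrete types, encodings, and function names are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class MappingEntry:
    """One row of the cache mapping table, with the four fields above.
    The buffer-number encoding is an assumption for this sketch."""
    data_number: int
    buffer_number: int   # 1 = first buffer, 2 = second buffer, 3 = intermediate
    address: int
    weight: str          # "high" or "low"

def lookup(mapping_table, data_number):
    """Find where a cached datum lives so it can be read back from the
    first, second, or intermediate buffer."""
    for entry in mapping_table:
        if entry.data_number == data_number:
            return entry.buffer_number, entry.address
    return None  # not cached; the caller must fetch from main memory
```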
In the method provided by the present invention for caching data in multiple memory buffers according to weight information, the weight information of the data to be cached is used effectively, so that different replacement processes are applied according to the different weights of the data: low-weight cached data is simply cleared when necessary, while high-weight cached data is flushed to main memory when necessary. Complicated calculation and query operations are thereby avoided, and current data is cached efficiently. The invention increases the hit rate of processor reads from the cache, improving the access efficiency and the throughput of the system.
It should be appreciated that the above is a detailed description of specific embodiments, but the present invention is not limited to these embodiments. Various improvements and modifications can be made without departing from the spirit and scope of the present invention; for example, the caching method of the present invention can be further extended to the case where the weight information indicates a low weight, a middle weight, or a high weight of the data to be cached.
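As a summary of the described method, the following sketch routes incoming data to one of the two buffers by weight, replaces existing contents on overflow, and updates a mapping table. It is a simplified, self-contained illustration; all names are assumptions, and the mapping table here records only the buffer placement:

```python
class WeightedBuffer:
    """Fixed-capacity buffer holding (size, weight) entries; a stand-in
    for a memory buffer in this sketch."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}   # data number -> (size, weight)

    def free(self):
        return self.capacity - sum(s for s, _ in self.entries.values())

def cache_data(key, size, weight, first, second, mapping_table, main_memory):
    """Steps S10-S40 in miniature: route by weight, replace on overflow,
    then record the placement in the cache mapping table."""
    if weight == "low":                        # step S20
        if first.free() < size:                # S22: clear and release space
            first.entries.clear()
        first.entries[key] = (size, weight)
        mapping_table[key] = "first"           # step S40
    else:                                      # step S30
        if second.free() < size:               # S32: write back to main memory
            main_memory.update(second.entries)
            second.entries.clear()
        second.entries[key] = (size, weight)
        mapping_table[key] = "second"          # step S40
```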

Claims (8)

1. A method for caching data in multiple memory buffers according to weight information, the method comprising: S10) receiving data to be cached and the weight information of the data to be cached, wherein the data volume of the data to be cached is not greater than the total storage space of a first memory buffer, and the data volume of the data to be cached is not greater than the total storage space of a second memory buffer;
S20) if the weight information indicates that the weight of the data to be cached is a low weight, writing the data to be cached into the first memory buffer;
S30) if the weight information indicates that the weight of the data to be cached is a high weight, writing the data to be cached into the second memory buffer;
S40) updating a cache mapping table, the cache mapping table indicating the storage relation between the data to be cached and the first memory buffer and the second memory buffer;
wherein step S20 comprises the following steps:
S22) if the total free storage space in the first memory buffer is less than the data volume of the data to be cached, clearing the cached data in the first memory buffer, releasing the storage space occupied by the cached data in the first memory buffer, and writing the data to be cached into the first memory buffer;
S23) if the total free storage space in the first memory buffer is not less than the data volume of the data to be cached, writing the data to be cached into the first memory buffer;
wherein step S30 comprises the following steps:
S32) if the total free storage space in the second memory buffer is less than the data volume of the data to be cached, flushing the cached data in the second memory buffer to main memory, releasing the storage space occupied by the cached data in the second memory buffer, and writing the data to be cached into the second memory buffer;
S33) if the total free storage space in the second memory buffer is not less than the data volume of the data to be cached, writing the data to be cached into the second memory buffer;
wherein step S40 comprises: after the data to be cached is written into the first memory buffer, updating the cache mapping table with the storage relation between the data to be cached and the first memory buffer; and after the data to be cached is written into the second memory buffer, updating the cache mapping table with the storage relation between the data to be cached and the second memory buffer.
2. The method according to claim 1, wherein writing the data to be cached into the first memory buffer comprises writing the data to be cached together with its weight information into the first memory buffer; and writing the data to be cached into the second memory buffer comprises writing the data to be cached together with its weight information into the second memory buffer.
3. The method according to claim 1, wherein the total storage space of the first memory buffer is the same as the total storage space of the second memory buffer.
4. The method according to claim 1, wherein the total storage space of the first memory buffer is different from the total storage space of the second memory buffer.
5. The method according to claim 4, wherein the cache mapping table is stored in a mapping buffer.
6. The method according to claim 4, further comprising reading the data to be cached from the first memory buffer or the second memory buffer according to the weight of the data to be cached and the cache mapping table.
7. The method according to claim 4, wherein the cache mapping table comprises the following fields: data number, memory buffer number, memory address, and weight information.
8. The method according to claim 7, wherein the weight information indicates whether the weight of the data to be cached is a high weight or a low weight, and the memory buffer number indicates the first memory buffer or the second memory buffer.
CN201310302423.4A 2013-07-18 2013-07-18 Data caching method in multiple buffer storages according to weight information Active CN103345452B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310302423.4A CN103345452B (en) 2013-07-18 2013-07-18 Data caching method in multiple buffer storages according to weight information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310302423.4A CN103345452B (en) 2013-07-18 2013-07-18 Data caching method in multiple buffer storages according to weight information

Publications (2)

Publication Number Publication Date
CN103345452A CN103345452A (en) 2013-10-09
CN103345452B true CN103345452B (en) 2015-06-10

Family

ID=49280250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310302423.4A Active CN103345452B (en) 2013-07-18 2013-07-18 Data caching method in multiple buffer storages according to weight information

Country Status (1)

Country Link
CN (1) CN103345452B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102580820B1 (en) * 2016-03-10 2023-09-20 에스케이하이닉스 주식회사 Data storage device and operating method thereof
KR20180041898A (en) * 2016-10-17 2018-04-25 에스케이하이닉스 주식회사 Memory system and operating method of memory system
CN106649139B (en) * 2016-12-29 2020-01-10 北京奇虎科技有限公司 Data elimination method and device based on multiple caches
CN107302505B (en) * 2017-06-22 2019-10-29 迈普通信技术股份有限公司 Manage the method and device of caching
CN111552652B (en) * 2020-07-13 2020-11-17 深圳鲲云信息科技有限公司 Data processing method and device based on artificial intelligence chip and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103019962A (en) * 2012-12-21 2013-04-03 华为技术有限公司 Data cache processing method, device and system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103019962A (en) * 2012-12-21 2013-04-03 华为技术有限公司 Data cache processing method, device and system

Also Published As

Publication number Publication date
CN103345452A (en) 2013-10-09

Similar Documents

Publication Publication Date Title
US7257693B2 (en) Multi-processor computing system that employs compressed cache lines' worth of information and processor capable of use in said system
US9405675B1 (en) System and method for managing execution of internal commands and host commands in a solid-state memory
CN106462495B (en) Memory Controller and processor-based system and method
US7512750B2 (en) Processor and memory controller capable of use in computing system that employs compressed cache lines' worth of information
US8122216B2 (en) Systems and methods for masking latency of memory reorganization work in a compressed memory system
US20170177497A1 (en) Compressed caching of a logical-to-physical address table for nand-type flash memory
CN103345452B (en) Data caching method in multiple buffer storages according to weight information
JP6768928B2 (en) Methods and devices for compressing addresses
JP6859361B2 (en) Performing memory bandwidth compression using multiple Last Level Cache (LLC) lines in a central processing unit (CPU) -based system
US9501419B2 (en) Apparatus, systems, and methods for providing a memory efficient cache
US20110082965A1 (en) Processor-bus-connected flash storage module
US20150149742A1 (en) Memory unit and method
CN103345368B (en) Data caching method in buffer storage
US20150143045A1 (en) Cache control apparatus and method
KR20090054657A (en) Cache memory capable of adjusting burst length of write-back data in write-back operation
US11144464B2 (en) Information processing device, access controller, information processing method, and computer program for issuing access requests from a processor to a sub-processor
US8484424B2 (en) Storage system, control program and storage system control method
KR20160060550A (en) Page cache device and method for efficient mapping
WO2023066124A1 (en) Cache management method, cache management apparatus, and processor
US20240086403A1 (en) In-memory database (imdb) acceleration through near data processing
JP2024029007A (en) Method and apparatus for using a storage system as main memory
KR20090063401A (en) Cache memory and method capable of write-back operation, and system having the same
CN110869916A (en) Method and apparatus for two-layer copy-on-write
US20180225224A1 (en) Reducing bandwidth consumption when performing free memory list cache maintenance in compressed memory schemes of processor-based systems
JP2020502694A (en) Method and apparatus for accessing non-volatile memory as byte addressable memory

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: FUJIAN RIDGE INFORMATION TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: SICHUAN JIUCHENG INFORMATION TECHNOLOGY CO., LTD.

Effective date: 20150513

C41 Transfer of patent application or patent right or utility model
C53 Correction of patent for invention or patent application
CB03 Change of inventor or designer information

Inventor after: Mao Yongquan

Inventor after: Chen Ping

Inventor after: Jiang Duomo

Inventor after: You Guixi

Inventor after: Lu Xincan

Inventor after: Wang Yongjun

Inventor before: Mao Li

COR Change of bibliographic data

Free format text: CORRECT: INVENTOR; FROM: MAO LI TO: MAO YONGQUAN CHEN PING JIANG DUOMO YOU GUIXI LU XINCAN WANG YONGJUN

TA01 Transfer of patent application right

Effective date of registration: 20150513

Address after: 350003, West floor, Sinotrans building, No. 79, East Lake Road, Gulou District, Fujian, Fuzhou, three

Applicant after: The auspicious poly-infotech share company limited in Fujian

Address before: 610041 A, building, No. two, Science Park, high tech Zone, Sichuan, Chengdu, China 103B

Applicant before: Sichuan Jiucheng Information Technology Co., Ltd.

C14 Grant of patent or utility model
GR01 Patent grant