CN103345452A - Data caching method in multiple buffer storages according to weight information - Google Patents


Info

Publication number
CN103345452A
CN103345452A · CN201310302423A · CN103345452B
Authority
CN
China
Prior art keywords
cached data
buffer
data
memory
buffer memory
Prior art date
Legal status
Granted
Application number
CN2013103024234A
Other languages
Chinese (zh)
Other versions
CN103345452B (en)
Inventor
毛力 (Mao Li)
Current Assignee
Fujian Ruiju Information Technology Co., Ltd.
Original Assignee
SICHUAN JIUCHENG INFORMATION TECHNOLOGY Co Ltd
Priority date
Filing date
Publication date
Application filed by SICHUAN JIUCHENG INFORMATION TECHNOLOGY Co Ltd
Priority: CN201310302423.4A
Publication of CN103345452A
Application granted
Publication of CN103345452B
Status: Active

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A method for caching data in a plurality of cache memories according to weight information comprises the steps of: receiving the data to be cached and its weight information; writing the data to be cached into a first cache memory when the weight information indicates that the weight of the data is low; writing the data to be cached into a second cache memory when the weight information indicates that the weight of the data is high; and, when the first or second cache memory cannot meet the caching demand, carrying out replacement operations in each, based on the weight information, respectively. By caching data in a plurality of cache memories according to weight information, the multiple cache memories store data separately and are replaced separately according to the weight information of the data to be cached, and the overall performance of the cache memories is improved.

Description

A method for caching data in a plurality of cache memories according to weight information
Technical field
The present invention relates to the field of data storage, and in particular to a data caching method; more particularly, to a method that uses a plurality of cache memories, and especially to a method that buffers data separately according to the weight information of the data to be cached. The method is used to cache data during system operation, and improves caching performance and efficiency by replacing the cached data held in the cache memories.
Background technology
When a CPU reads data, it first searches the cache; on a hit, the data is delivered to the CPU immediately. On a miss, the data is read from main memory at a comparatively slow speed and handed to the CPU, and at the same time the whole block containing that data is loaded into the cache, so that later reads of the same block can all be served from the cache without accessing main memory again. This read mechanism gives the CPU a very high cache hit rate (most CPUs reach about 90%): about 90% of the data the CPU reads next is already in the cache, and only about 10% must be read from main memory. This greatly reduces the time the CPU spends reading main memory directly, and lets the CPU read data essentially without waiting. In general, the CPU reads data from the cache first and from main memory second.
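The read path described above — check the cache, and on a miss load the whole containing block so later reads of that block hit — can be sketched as follows. This is a minimal simulation, not the patent's claimed method; all names are illustrative.

```python
class SimpleCache:
    """Minimal read-through cache: on a miss, the whole block is copied
    from main memory so that later reads of the same block hit the cache."""

    def __init__(self, memory, block_size=4):
        self.memory = memory          # backing store: address -> value
        self.block_size = block_size
        self.lines = {}               # cached blocks: block_id -> {addr: value}
        self.hits = 0
        self.misses = 0

    def read(self, addr):
        block_id = addr // self.block_size
        if block_id in self.lines:    # found in the cache: serve immediately
            self.hits += 1
        else:                         # miss: fetch the whole block from memory
            self.misses += 1
            base = block_id * self.block_size
            self.lines[block_id] = {
                a: self.memory[a] for a in range(base, base + self.block_size)
            }
        return self.lines[block_id][addr]

memory = {a: a * 10 for a in range(16)}
cache = SimpleCache(memory)
# Address 0 misses and loads block 0; 1-3 then hit; 8 loads block 2; 9 hits.
values = [cache.read(a) for a in [0, 1, 2, 3, 8, 9]]
```

After this access pattern only two block fetches reach main memory, which is the hit-rate effect the paragraph describes.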
The cache holds a copy of only a small part of the data in main memory, so when the CPU looks for data in the cache it may fail to find it (because that data has not yet been copied from main memory into the cache). The CPU must then fetch the data from main memory, which slows the system down; but it also copies the data into the cache, so that the next access need not go to main memory. Over time, however, the most frequently accessed data changes: data that was not accessed often a moment ago may now be accessed frequently, while data that was the most frequently accessed may no longer be. The contents of the cache must therefore be changed regularly according to some algorithm, to guarantee that the cache holds the most frequently accessed data. Yet existing replacement algorithms are often very complicated.
At present, a cache is placed between the host processor and main memory. To improve the efficiency of reading and writing data, a cache controller and a cache operating at a frequency between those of the host processor and main memory are provided. The process of writing external data to main memory is then as follows: the host processor forwards an externally issued data-write instruction to the cache controller; the cache controller checks whether the cache has free storage space; if so, it writes the data carried in the instruction directly into the free space; if not, it computes which cached data block has the lowest usage frequency, clears that block to release the storage space it occupies, and then writes the data carried in the instruction into the released space. After confirming the newly written data, the cache controller reads it from the cache and writes it to main memory, completing the write of external data to main memory.
The process of reading data from main memory for an external requester is as follows: the host processor forwards an externally issued data-read request to the cache controller; the cache controller checks whether the requested data is stored in the cache; if so, it sends the corresponding cached data directly to the host processor; if not, it forwards the read request to main memory, main memory returns the requested data to the cache controller, the cache controller stores the data in the cache, and the host processor then reads the requested data from the cache and sends it to the external requester. Existing replacement algorithms require computation — in particular, very complicated calculation and lookup — which lengthens write and access times.
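The prior-art write path above — scan for the least-frequently-used block and evict it when the cache is full — is the costly step the invention aims to avoid. A hedged sketch (illustrative names; a simplification, not any specific controller's implementation):

```python
from collections import Counter

class LFUWriteCache:
    """Prior-art style write cache: when full, evict the block with the
    lowest usage frequency to make room. Hypothetical sketch only."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}        # key -> cached data
        self.freq = Counter()  # key -> usage count
        self.main_memory = {}  # evicted/confirmed data lands here

    def write(self, key, data):
        if key not in self.store and len(self.store) >= self.capacity:
            # The linear scan for the least-used block is the "complicated
            # calculation and lookup" the patent complains about.
            victim = min(self.store, key=lambda k: self.freq[k])
            self.main_memory[victim] = self.store.pop(victim)
            del self.freq[victim]
        self.store[key] = data
        self.freq[key] += 1
        self.main_memory[key] = data  # write-through to main memory

cache = LFUWriteCache(capacity=2)
cache.write("a", 1)
cache.write("a", 2)  # "a" now has usage count 2
cache.write("b", 3)
cache.write("c", 4)  # cache full: "b" (count 1) is evicted, not "a"
```

The `min(...)` scan grows with cache size, which is why the invention replaces frequency bookkeeping with a fixed weight attribute supplied with the data.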
Summary of the invention
To solve the technical problem of long write and access times, the invention provides a method for caching data in a plurality of cache memories according to weight information. By using a plurality of cache memories, data to be cached is stored in different cache memories according to its weight; that is, it is handled separately according to the weight information of the data to be stored. The cache memories hold at least two classes of cached data, low-weight and high-weight; where necessary, the weights can be subdivided further, for example into low-weight, medium-weight, and high-weight cached data. In addition, different replacement policies are applied to cached data of different weights, thereby simplifying the complicated replacement algorithms of the prior art.
Specifically, the invention discloses a method for caching data in a plurality of cache memories according to weight information, the method comprising:
S10) receiving the data to be cached and its weight information, wherein the data volume of the data to be cached is no greater than the total storage space of the first cache memory, and likewise no greater than the total storage space of the second cache memory;
S20) if the weight information indicates that the weight of the data to be cached is low, writing the data to be cached into the first cache memory;
S30) if the weight information indicates that the weight of the data to be cached is high, writing the data to be cached into the second cache memory.
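Steps S10–S30 amount to a simple routing decision on the weight attribute. The sketch below uses plain Python lists as stand-ins for the two cache memories; the function and variable names are illustrative, not from the patent.

```python
LOW, HIGH = "low", "high"

def cache_by_weight(data, weight, first_cache, second_cache, capacity):
    """S10-S30 as a sketch: low-weight data goes to the first cache,
    high-weight data to the second. `capacity` models the S10 constraint
    that the data must fit in either cache."""
    assert len(data) <= capacity, "S10: data volume must not exceed cache size"
    target = first_cache if weight == LOW else second_cache   # S20 / S30
    target.append(data)
    return target

first, second = [], []
cache_by_weight("user-log", LOW, first, second, capacity=64)
cache_by_weight("sys-core", HIGH, first, second, capacity=64)
```

No frequency statistics are consulted at write time — the branch on the supplied weight replaces the prior art's least-frequently-used search.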
Preferably, step S20 comprises the following steps:
S22) if the total free storage space in the first cache memory is less than the data volume of the data to be cached, clearing the cached data in the first cache memory to release the storage space it occupies, and then writing the data to be cached into the first cache memory;
S23) if the total free storage space in the first cache memory is not less than the data volume of the data to be cached, writing the data to be cached into the first cache memory.
Preferably, step S30 comprises the following steps:
S32) if the total free storage space in the second cache memory is less than the data volume of the data to be cached, unloading the cached data in the second cache memory to main memory to release the storage space it occupies, and then writing the data to be cached into the second cache memory;
S33) if the total free storage space in the second cache memory is not less than the data volume of the data to be cached, writing the data to be cached into the second cache memory.
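The two replacement policies differ only in what happens to the displaced data: low-weight data is simply discarded (S22), while high-weight data is unloaded to main memory before being overwritten (S32). A sketch under the assumption that data volume is measured in characters (all names illustrative):

```python
def write_low(first_cache, cap, item):
    """S22/S23: if the first cache lacks free space, discard its
    contents (low-weight data is cheap to lose), then write the item."""
    used = sum(len(x) for x in first_cache)
    if cap - used < len(item):
        first_cache.clear()            # S22: low-weight data is just cleared
    first_cache.append(item)           # S23 / final write

def write_high(second_cache, cap, item, main_memory):
    """S32/S33: if the second cache lacks free space, flush its contents
    to main memory first (high-weight data must survive), then write."""
    used = sum(len(x) for x in second_cache)
    if cap - used < len(item):
        main_memory.extend(second_cache)   # S32: unload, do not discard
        second_cache.clear()
    second_cache.append(item)              # S33 / final write

main = []
low, high = ["aaaa"], ["bbbb"]
write_low(low, 6, "ccc")          # only 2 free: old low-weight data discarded
write_high(high, 6, "ddd", main)  # only 2 free: "bbbb" flushed to main memory
```

Neither path computes usage frequencies; the weight class alone decides whether displaced data is dropped or preserved.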
Preferably, the total storage space of the first cache memory is the same as the total storage space of the second cache memory.
Preferably, the data volume of the data to be cached is no greater than the total storage space of the first and second cache memories.
Preferably, after the data to be cached is written into the first cache memory, the storage relation between the data and the first cache memory is updated in a cache mapping table; and after the data to be cached is written into the second cache memory, the storage relation between the data and the second cache memory is updated in the cache mapping table.
Preferably, the cache mapping table is stored in a mapping buffer.
Preferably, the data to be cached is read from the first cache memory or the second cache memory according to its weight and the cache mapping table.
Preferably, the cache mapping table comprises the following fields: number of the data to be stored, cache memory number, storage address, and weight information.
Preferably, the weight information indicates whether the weight of the data to be cached is high or low.
Preferably, the cache memory number indicates the first cache memory or the second cache memory.
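The mapping table with its four claimed fields can be sketched as a small lookup structure; field names below are illustrative English renderings, and the lookup function is an assumption about how the read path would use the table.

```python
from dataclasses import dataclass

@dataclass
class MappingEntry:
    """One row of the cache mapping table, with the four claimed fields."""
    data_id: int    # number of the data to be stored
    cache_id: int   # cache memory number: 1 = first cache, 2 = second cache
    address: int    # storage address inside that cache memory
    weight: str     # weight information: "high" or "low"

mapping_table = {}  # held in the mapping buffer, keyed by data number

def record(entry: MappingEntry) -> None:
    """Update the storage relation after a write (steps like S40)."""
    mapping_table[entry.data_id] = entry

def locate(data_id: int):
    """Read path: the table tells us which cache holds the data, and where."""
    e = mapping_table[data_id]
    return e.cache_id, e.address

record(MappingEntry(data_id=7, cache_id=2, address=0x40, weight="high"))
```

With this table, a read never has to probe both caches: the weight and cache number resolve the location in one lookup.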
Preferably, writing the data to be cached into the first cache memory comprises writing the data together with its weight information into the first cache memory; and writing the data to be cached into the second cache memory comprises writing the data together with its weight information into the second cache memory.
The present invention also provides another method for caching data in a plurality of cache memories according to weight information, comprising:
receiving the data to be cached and its weight information;
judging whether the total free storage space of an intermediate buffer is less than the data volume of the data to be cached; if not, writing the data to be cached directly into the intermediate buffer; otherwise, based on the weight information of the cached data, unloading the low-weight cached data in the intermediate buffer into the first cache memory and the high-weight cached data into the second cache memory, releasing the storage space of the intermediate buffer;
writing the data to be cached into the intermediate buffer.
Unloading the low-weight and high-weight cached data in the intermediate buffer into the first and second cache memories respectively, based on the weight information, comprises:
For the low-weight cached data, judging whether the total free storage space in the first cache memory is less than the data volume of the low-weight cached data; if not, writing the low-weight cached data directly into the first cache memory; if so, clearing the cached data in the first cache memory to release the storage space it occupies, and then writing the low-weight cached data into the first cache memory. For the high-weight cached data, judging whether the total free storage space in the second cache memory is less than the data volume of the high-weight cached data; if not, writing the high-weight cached data directly into the second cache memory; if so, unloading the cached data in the second cache memory to main memory to release the storage space it occupies, and then writing the high-weight cached data into the second cache memory.
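The intermediate-buffer flush above can be sketched as one pass over the buffered items, dispatching each by weight and applying that cache's own shortage policy. Capacities here count items rather than bytes, a simplifying assumption; all names are illustrative.

```python
def flush_intermediate(intermediate, first, second, caps, main_memory):
    """Empty the intermediate buffer: low-weight items go to the first
    cache, high-weight items to the second, each with its own policy
    when the target cache is short of space (capacities count items)."""
    for item, weight in intermediate:
        if weight == "low":
            if caps["first"] - len(first) < 1:
                first.clear()                  # low weight: just discard
            first.append(item)
        else:
            if caps["second"] - len(second) < 1:
                main_memory.extend(second)     # high weight: unload to main
                second.clear()
            second.append(item)
    intermediate.clear()                       # storage space released

inter = [("a", "low"), ("b", "high"), ("c", "high")]
first, second, main = [], ["x"], []
flush_intermediate(inter, first, second, {"first": 1, "second": 1}, main)
```

Note that the displaced high-weight items ("x", then "b") accumulate in main memory, while nothing displaced from the first cache survives — exactly the asymmetry the two policies are meant to encode.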
Preferably, after the data to be cached is written into the first cache memory, the storage relation between the data and the first cache memory is updated in the cache mapping table; after the data to be cached is written into the second cache memory, the storage relation between the data and the second cache memory is updated in the cache mapping table; and after the data to be cached is written into the intermediate buffer, the storage relation between the data and the intermediate buffer is updated in the cache mapping table.
Preferably, writing the data to be cached into the first cache memory comprises writing the data together with its weight information into the first cache memory; and writing the data to be cached into the second cache memory comprises writing the data together with its weight information into the second cache memory.
The effects of the present method for caching data in a plurality of cache memories according to weight information are as follows: complicated calculation and search algorithms are simplified; cached data of different weights can be cached in different cache memories; and replacement can be handled differently for cached data of different weights. Setting weights and performing replacement accordingly raises the hit rate when the processor reads data from the cache, improves the access efficiency of the system, and increases system throughput.
Description of drawings
The accompanying drawings are included to provide a further understanding of the invention; they form part of the specification and, together with the specification, explain the principles of the invention. In the drawings:
Fig. 1 is a structural block diagram of the processor in the present invention;
Fig. 2 shows the data caching method according to one embodiment of the invention;
Fig. 3 shows the data caching method according to another embodiment of the invention.
Detailed description of the embodiments
The preferred embodiments of the present invention are further described below with reference to the accompanying drawings.
In the present invention, "low weight" and "the weight is low" refer to the same thing, namely having a lower weight. Likewise, "high weight" and "the weight is high" refer to having a higher weight, and "medium weight" and "the weight is medium" refer to having a weight of intermediate grade. Evidently, high weight, medium weight, and low weight decrease in that order. Data to be cached with high weight is, for example, system data, core data, critical data, data that will be accessed and written in the near term, or data with a high access and write frequency; data to be cached with low weight is, for example, data of a specific user, data of a specific application, data not accessed for a long time, or data with a low access and write frequency; data to be cached with medium weight lies between the high-weight and low-weight cases, for example data shared by multiple applications or by multiple users. More specifically, in the present invention the total free storage space of a cache memory refers to the total free storage space of the buffer units in that cache memory, and the total storage space of a cache memory refers to the total storage space of its buffer units. In the present invention, "buffered data" and "cached data" refer to the same content, and "weight information", "weight attribute", and "weight attribute information" all refer to the same content.
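The weight classes described above could be assigned, for instance, by a rule of the following shape. This is entirely illustrative — the patent names the data categories but specifies no fields, thresholds, or assignment algorithm.

```python
def classify_weight(meta: dict) -> str:
    """Hypothetical weight assignment following the description:
    system/core/recently-used data -> high; data shared by several
    users or applications -> medium; user-specific or cold data -> low.
    Field names and the threshold of 100 are invented for the sketch."""
    if meta.get("system") or meta.get("recent_accesses", 0) > 100:
        return "high"
    if meta.get("shared_users", 1) > 1:
        return "medium"
    return "low"

# e.g. classify_weight({"system": True}) -> "high"
weights = [
    classify_weight({"system": True}),
    classify_weight({"shared_users": 3}),
    classify_weight({"recent_accesses": 2}),
]
```

Because the weight travels with the data (separately or in its header, per the first embodiment), this classification happens once at the producer rather than continuously inside the cache controller.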
Referring to Fig. 1, the processor of the present invention comprises a processor core, a first cache memory, and a second cache memory, as well as a main-memory interface through which it is coupled to main memory; it may also comprise a mapping buffer and an intermediate buffer (not shown in the drawing). The first cache memory comprises a cache controller and buffer units, and likewise the second cache memory, the mapping buffer, and the intermediate buffer each comprise a cache controller and buffer units. The cache controller controls the writing, reading, replacement, and clearing of the buffer units. In the present invention, the first cache memory, the second cache memory, the mapping buffer, and the intermediate buffer may have the same storage capacity, i.e. the same total storage space; alternatively, they may have different storage capacities, i.e. different totals of storage space.
Referring to Fig. 2, the first embodiment of the method for caching data in a plurality of cache memories according to weight information is described in detail. In step S10, the processor core receives a write or read instruction from an external device or application, and at the same time receives the data to be cached and its weight information. The weight information may be separate data or may be carried in the header of the data to be cached, and may be low-weight, high-weight, or medium-weight information indicating the weight of the data to be cached, as described above. In the present invention, the data volume of the data to be cached is no greater than the total storage space of the first cache memory, and likewise no greater than the total storage space of the second cache memory.
In response to the write or read instruction from the external device or application, when caching is required, the processor core first judges the weight information. In step S20, if the weight information indicates that the weight of the data to be cached is low, the data to be cached is written into the first cache memory; in step S30, if the weight information indicates that the weight is high, the data to be cached is written into the second cache memory; in step S40, the processor core updates the cache mapping table, which is stored in the mapping buffer and indicates the storage relation between the data to be cached and the first and second cache memories. For example, when an external device or application issues a write instruction, the processor core receives the data to be cached and its weight information and, in response to the write instruction, performs the caching: it first sends a caching instruction, based on the weight information, to the cache controller in the first cache memory or the cache controller in the second cache memory — to the first cache memory if the weight information indicates low weight, and to the second cache memory if it indicates high weight. If the weight information indicates that the weight of the data to be cached is low, the cache controller in the first cache memory writes the data into the first cache memory in response to the caching instruction; if the weight information indicates that the weight is high, the cache controller in the second cache memory writes the data into the second cache memory in response to the caching instruction.
In the present invention, the data volume of the data to be cached is no greater than the total storage space of the first cache memory and no greater than that of the second cache memory; this may be ensured, for example, by the external device or application controlling the data volume before the processor core receives the data. If the external device or application does not control the data volume, then after receiving the data the processor core sends a data-volume confirmation instruction to the first and second cache controllers, which respectively judge whether the data volume of the low-weight or high-weight data to be cached exceeds the total storage space of the first or second cache memory. If neither the first nor the second cache memory has the capacity to store the data to be cached, the data is written directly into main memory. Finally, in response to the caching-complete instruction sent by the cache controller in the first or second cache memory, the processor core updates the cache mapping table: after the data to be cached is written into the first cache memory, the storage relation between the data and the first cache memory is updated in the table; and after the data is written into the second cache memory, the storage relation between the data and the second cache memory is updated in the table.
Referring to Fig. 3, the second embodiment of the method for caching data in a plurality of cache memories according to weight information is described in detail. As in the first embodiment, in step S10 the processor core receives a write or read instruction from an external device or application, and at the same time receives the data to be cached and its weight information. In the present invention, the data volume of the data to be cached is no greater than the total storage space of the first cache memory, and likewise no greater than the total storage space of the second cache memory.
Then, in response to the write or read instruction, when caching is required, the processor core first judges the weight information in step S11, and performs the caching according to whether the weight information indicates that the weight of the data to be cached is low or high. If the weight information indicates low weight, the processor core sends a caching instruction to the cache controller in the first cache memory; in step S21 that cache controller judges whether the total free storage space in the first cache memory is less than the data volume of the data to be cached. If not, the data to be cached is written directly into the first cache memory; if so, the cached data in the first cache memory is cleared to release the storage space it occupies, and the data to be cached is then written into the first cache memory. If the weight information indicates high weight, the processor core sends a caching instruction to the cache controller in the second cache memory; in step S31 that cache controller judges whether the total free storage space in the second cache memory is less than the data volume of the data to be cached. If not, the data to be cached is written directly into the second cache memory; if so, the cached data in the second cache memory is unloaded to main memory to release the storage space it occupies, and the data to be cached is then written into the second cache memory.
Since the data that currently needs caching is data that will be accessed soon, placing it in the cache preferentially improves access speed significantly. Low-weight cached data, by the nature of its low weight (for example, it will not be accessed again in the near term), can be replaced directly when new data to be cached arrives — or even cleared outright, as described in the present invention. High-weight cached data, by the nature of its high weight (for example, it may still be accessed in the near term), is instead unloaded to main memory when new data to be cached arrives, so that the new data is cached preferentially while the original data is preserved. Finally, the processor core updates the cache mapping table; the specific method is the same as the update method in the first embodiment.
In the third embodiment (not shown in the accompanying drawings), in addition to the first buffer memory, the second buffer memory and the mapping buffer, the processor also includes an intermediate buffer. The intermediate buffer has the same structure as the first buffer memory, the second buffer memory and the mapping buffer, but its role differs from theirs, and the function of the cache controller in the intermediate buffer also differs from that of their cache controllers. In the caching method of the third embodiment, data is cached preferentially in the intermediate buffer. Specifically, first, in step S100, the processor core receives a write or read instruction from an external device or application, and at the same time receives the data to be cached and the weight information of the data to be cached. In the present invention, the data volume of the data to be cached is not greater than the total storage space of the first buffer memory, the second buffer memory and the intermediate buffer. Then, in response to the write or read instruction sent by the external device or application, when caching is required, the processor core sends a cache instruction to the cache controller in the intermediate buffer. In response to the cache instruction received from the processor core, the cache controller in the intermediate buffer judges in step S101 whether the total amount of free storage space in the intermediate buffer is less than the data volume of the data to be cached. If it is not less, the data to be cached is written directly into the intermediate buffer; otherwise, the weight information of the cached data in the intermediate buffer is read and, based on the weight information read, the low-weight cached data and the high-weight cached data in the intermediate buffer are unloaded into the first buffer memory and the second buffer memory respectively, the storage space of the intermediate buffer is released, and the data to be cached is then written into the intermediate buffer.
The method of unloading the low-weight and high-weight cached data from the intermediate buffer into the first and second buffer memories respectively on the basis of the weight information is similar to the embodiment shown in Figure 3 of the present invention. Specifically, based on the weight information, the cache controller in the intermediate buffer sends replacement instructions to the cache controller in the first buffer memory and the cache controller in the second buffer memory respectively. For low-weight cached data, in response to the replacement instruction sent by the intermediate buffer, the cache controller in the first buffer memory first judges in step S121 whether the total amount of free storage space in the first buffer memory is less than the data volume of the low-weight cached data. If it is not less, the low-weight cached data is written directly into the first buffer memory; if it is less, the cached data in the first buffer memory is discarded, the storage space it occupied is released, and the low-weight cached data is then written into the first buffer memory. For high-weight cached data, the cache controller in the second buffer memory first judges in step S131 whether the total amount of free storage space in the second buffer memory is less than the data volume of the high-weight cached data. If it is not less, the high-weight cached data is written directly into the second buffer memory; if it is less, the cached data in the second buffer memory is unloaded to main memory, the storage space it occupied is released, and the high-weight cached data is then written into the second buffer memory.
In this embodiment, adding the intermediate buffer as an intermediate cache means that, when the amount of data to be buffered is relatively small, the data is written only into the intermediate buffer, which suffices for the application's requirements; the step of determining weight information is thereby skipped and caching efficiency is improved. In addition, when the amount of data to be buffered is relatively large, replacing the cached data in the first buffer memory and the second buffer memory separately according to the weight information reduces algorithmic complexity and makes cache writing and replacement more complete. Finally, the cache mapping table is updated: after the data to be cached is written into the first buffer memory, the storage relation between that data and the first buffer memory is updated in the cache mapping table; after the data to be cached is written into the second buffer memory, the storage relation between that data and the second buffer memory is updated in the cache mapping table; and after the data to be cached is written into the intermediate buffer, the storage relation between that data and the intermediate buffer is updated in the cache mapping table.
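The third embodiment's flow — write into the intermediate buffer while space remains, otherwise flush its contents into the first or second buffer memory by weight — might be sketched as below. All function names and capacities are illustrative assumptions; the per-buffer overflow handling reuses the discard/write-back rules of the earlier embodiments.

```python
FIRST_CAPACITY = SECOND_CAPACITY = INTERMEDIATE_CAPACITY = 2

def flush_intermediate(intermediate, first_buf, second_buf, main_memory):
    """Unload intermediate-buffer entries by weight: low -> first buffer
    (discarding its old contents if full), high -> second buffer (writing
    its old contents back to main memory if full)."""
    for item, weight in intermediate:
        if weight == "low":
            if len(first_buf) >= FIRST_CAPACITY:
                first_buf.clear()                  # low-weight data is discarded
            first_buf.append(item)
        else:
            if len(second_buf) >= SECOND_CAPACITY:
                main_memory.extend(second_buf)     # high-weight data goes to main memory
                second_buf.clear()
            second_buf.append(item)
    intermediate.clear()                           # release intermediate-buffer space

def cache_item(item, weight, intermediate, first_buf, second_buf, main_memory):
    """Prefer the intermediate buffer; flush it by weight only when full."""
    if len(intermediate) >= INTERMEDIATE_CAPACITY:
        flush_intermediate(intermediate, first_buf, second_buf, main_memory)
    intermediate.append((item, weight))
```

The point of the sketch is that weight handling only runs on the overflow path: small workloads never leave the intermediate buffer, which is the efficiency gain the embodiment claims.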
In each embodiment of the present invention, the cache mapping table is stored in the mapping buffer. Based on the weight of the data to be cached and the cache mapping table, cached data is read from the first buffer memory, the second buffer memory or the intermediate buffer. The cache mapping table comprises the following fields: number of the data to be stored, buffer memory number, memory address, and weight information, where the weight information indicates whether the weight of the data to be cached is a high weight or a low weight, and the buffer memory number indicates the first buffer memory, the second buffer memory or the intermediate buffer. Writing the data to be cached into the first buffer memory comprises writing the data to be cached and its weight information into the first buffer memory at the same time; writing the data to be cached into the second buffer memory comprises writing the data to be cached and its weight information into the second buffer memory at the same time.
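The mapping-table fields listed above can be modelled directly. In the sketch below the field names are our own, chosen to mirror the patent's list, and the buffer numbering is an assumption; it shows a lookup that routes a read to the right buffer memory via the entry's buffer number and address.

```python
from dataclasses import dataclass

@dataclass
class MappingEntry:
    data_id: int        # number of the data to be stored
    buffer_no: int      # 1 = first, 2 = second, 3 = intermediate buffer
    address: int        # memory address (slot) inside that buffer
    weight: str         # "high" or "low"

def read_cached(mapping, buffers, data_id):
    """Look up a cached item via the mapping table and fetch it from the
    buffer memory the entry points at."""
    entry = mapping[data_id]
    return buffers[entry.buffer_no][entry.address]

# One entry: data item 7 lives in the second buffer at slot 0.
mapping = {7: MappingEntry(data_id=7, buffer_no=2, address=0, weight="high")}
buffers = {1: ["low-item"], 2: ["high-item"], 3: []}
```

A read then consults only the table, never the buffers' contents, which is why keeping the table in its own mapping buffer keeps lookups cheap.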
In the method provided by the present invention for caching data in a plurality of buffer memories according to weight information, the weight information of the data to be cached is used effectively: different replacement processing is carried out according to the different weights of the data to be cached, for example low-weight cached data is discarded directly when necessary while high-weight cached data is unloaded to main memory when necessary. Complicated calculation and query operations are thereby avoided, and efficient caching of current data is achieved. The present invention improves the hit rate when the processor reads data from the cache, improves the access efficiency of the system, and improves the throughput of the system.
It should be appreciated that the embodiments described above are detailed descriptions of specific embodiments, but the present invention is not limited to these embodiments. Various improvements and modifications can be made without departing from the spirit and scope of the present invention; for example, when the weight information indicates that the weight of the data to be cached is a low weight, a middle weight or a high weight, the method of the present invention for caching data in buffer memories can be improved further without departing from its spirit and scope.

Claims (10)

  1. A method for caching data in a plurality of buffer memories according to weight information, the method comprising:
    S10) receiving data to be cached and weight information of the data to be cached, wherein the data volume of the data to be cached is not greater than the total storage space of the first buffer memory, and at the same time the data volume of the data to be cached is not greater than the total storage space of the second buffer memory;
    S20) if the weight information of the data to be cached indicates that the weight of the data to be cached is a low weight, writing the data to be cached into the first buffer memory;
    S30) if the weight information of the data to be cached indicates that the weight of the data to be cached is a high weight, writing the data to be cached into the second buffer memory;
    S40) updating a cache mapping table, the cache mapping table indicating the storage relation between the data to be cached and the first and second buffer memories.
  2. The method according to claim 1,
    wherein step S20 comprises the following steps:
    S22) if the total amount of free storage space in the first buffer memory is less than the data volume of the data to be cached, discarding the cached data in the first buffer memory, releasing the storage space it occupied in the first buffer memory, and writing the data to be cached into the first buffer memory;
    S23) if the total amount of free storage space in the first buffer memory is not less than the data volume of the data to be cached, writing the data to be cached into the first buffer memory;
    and wherein step S30 comprises the following steps:
    S32) if the total amount of free storage space in the second buffer memory is less than the data volume of the data to be cached, unloading the cached data in the second buffer memory to main memory, releasing the storage space it occupied in the second buffer memory, and writing the data to be cached into the second buffer memory;
    S33) if the total amount of free storage space in the second buffer memory is not less than the data volume of the data to be cached, writing the data to be cached into the second buffer memory.
  3. The method according to any of claims 1-2, wherein writing the data to be cached into the first buffer memory comprises: writing the data to be cached and its weight information into the first buffer memory at the same time;
    and writing the data to be cached into the second buffer memory comprises: writing the data to be cached and its weight information into the second buffer memory at the same time.
  4. The method according to any of claims 1-2, wherein the total storage space of the first buffer memory is the same as the total storage space of the second buffer memory.
  5. The method according to any of claims 1-2, wherein the total storage space of the first buffer memory is different from the total storage space of the second buffer memory.
  6. The method according to any of claims 1-2, wherein step S40 comprises:
    after the data to be cached is written into the first buffer memory, updating the storage relation between the data to be cached and the first buffer memory in the cache mapping table; and
    after the data to be cached is written into the second buffer memory, updating the storage relation between the data to be cached and the second buffer memory in the cache mapping table.
  7. The method according to claim 5, wherein the cache mapping table is stored in a mapping buffer.
  8. The method according to claim 5, further comprising reading cached data from the first buffer memory or the second buffer memory according to the weight of the data to be cached and the cache mapping table.
  9. The method according to claim 5, wherein the cache mapping table comprises the following fields: number of the data to be stored, buffer memory number, memory address, weight information.
  10. The method according to claim 9, wherein the weight information indicates whether the weight of the data to be cached is a high weight or a low weight, and the buffer memory number indicates the first buffer memory or the second buffer memory.
CN201310302423.4A 2013-07-18 2013-07-18 Data caching method in multiple buffer storages according to weight information Active CN103345452B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310302423.4A CN103345452B (en) 2013-07-18 2013-07-18 Data caching method in multiple buffer storages according to weight information

Publications (2)

Publication Number Publication Date
CN103345452A true CN103345452A (en) 2013-10-09
CN103345452B CN103345452B (en) 2015-06-10

Family

ID=49280250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310302423.4A Active CN103345452B (en) 2013-07-18 2013-07-18 Data caching method in multiple buffer storages according to weight information

Country Status (1)

Country Link
CN (1) CN103345452B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103019962A (en) * 2012-12-21 2013-04-03 华为技术有限公司 Data cache processing method, device and system

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107179996A (en) * 2016-03-10 2017-09-19 爱思开海力士有限公司 Data storage device and its operating method
CN107179996B (en) * 2016-03-10 2020-12-08 爱思开海力士有限公司 Data storage device and method of operating the same
CN107957958A (en) * 2016-10-17 2018-04-24 爱思开海力士有限公司 Accumulator system and its operating method
CN107957958B (en) * 2016-10-17 2021-07-09 爱思开海力士有限公司 Memory system and operating method thereof
WO2018121242A1 (en) * 2016-12-29 2018-07-05 北京奇虎科技有限公司 Multiple buffer-based data elimination method and device
CN107302505A (en) * 2017-06-22 2017-10-27 迈普通信技术股份有限公司 Manage the method and device of caching
CN107302505B (en) * 2017-06-22 2019-10-29 迈普通信技术股份有限公司 Manage the method and device of caching
CN111552652A (en) * 2020-07-13 2020-08-18 深圳鲲云信息科技有限公司 Data processing method and device based on artificial intelligence chip and storage medium
CN111552652B (en) * 2020-07-13 2020-11-17 深圳鲲云信息科技有限公司 Data processing method and device based on artificial intelligence chip and storage medium

Also Published As

Publication number Publication date
CN103345452B (en) 2015-06-10


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: FUJIAN RIDGE INFORMATION TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: SICHUAN JIUCHENG INFORMATION TECHNOLOGY CO., LTD.

Effective date: 20150513

C41 Transfer of patent application or patent right or utility model
C53 Correction of patent of invention or patent application
CB03 Change of inventor or designer information

Inventor after: Mao Yongquan

Inventor after: Chen Ping

Inventor after: Jiang Duomo

Inventor after: You Guixi

Inventor after: Lu Xincan

Inventor after: Wang Yongjun

Inventor before: Mao Li

COR Change of bibliographic data

Free format text: CORRECT: INVENTOR; FROM: MAO LI TO: MAO YONGQUAN CHEN PING JIANG DUOMO YOU GUIXI LU XINCAN WANG YONGJUN

TA01 Transfer of patent application right

Effective date of registration: 20150513

Address after: 3rd Floor West, Sinotrans Building, No. 79, East Lake Road, Gulou District, Fuzhou, Fujian, 350003

Applicant after: Fujian Ridge Information Technology Co., Ltd.

Address before: Room 103B, Building A, No. 2 Science Park, High-tech Zone, Chengdu, Sichuan, China, 610041

Applicant before: Sichuan Jiucheng Information Technology Co., Ltd.

C14 Grant of patent or utility model
GR01 Patent grant