CN104850507A - Data caching method and data caching device - Google Patents

Data caching method and data caching device

Info

Publication number
CN104850507A
Authority
CN
China
Prior art keywords
data block
remaining space
cursor
cache pool
address
Prior art date
Legal status
Granted
Application number
CN201410055379.6A
Other languages
Chinese (zh)
Other versions
CN104850507B (en)
Inventor
阮佳彬
蔡伟林
陆莉
王海洋
段文文
李映辉
陈秋滢
陈旺林
陈文辉
曾岳锋
秦铭雪
樊伟
Current Assignee
Tencent Technology Shenzhen Co Ltd
Tencent Cloud Computing Beijing Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201410055379.6A
Publication of CN104850507A
Application granted
Publication of CN104850507B
Legal status: Active
Anticipated expiration


Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Embodiments of the invention disclose a data caching method and a data caching device that can store data of variable length and so avoid wasting storage space. The method comprises: when a data block is to be stored in a cache pool, obtaining the capacity of the first remaining space pointed to by a cursor in the cache pool; comparing the capacity of that first remaining space with the length of the data block; if the capacity of the first remaining space pointed to by the cursor is greater than or equal to the length of the data block, storing the component parts of the data block, in order, into the first remaining space starting from the address space the cursor points to; and, after the data block has been stored in the first remaining space, moving the cursor to the tail of the data block, so that the moved cursor points to the first remaining space left in the cache pool after the data block was stored.

Description

Data caching method and data caching device
Technical field
The present invention relates to the field of computer technology, and in particular to a data caching method and a data caching device.
Background technology
In Internet services, a system must often satisfy a large number of concurrent requests. Given current computer architectures, one existing solution is to cache data using a hash caching method.
However, the existing hash caching method has at least the following defects: first, it uses storage blocks of fixed length, so it suits only data whose lengths differ little; second, it must reserve a large amount of space for data of variable length, so the data are not stored compactly and a great deal of storage space is wasted.
Summary of the invention
Embodiments of the invention provide a data caching method and a data caching device that are suited to storing data of variable length and avoid wasting storage space.
To solve the above technical problem, the embodiments of the invention provide the following technical solutions:
In a first aspect, an embodiment of the invention provides a data caching method, comprising:
when a data block is to be stored in a cache pool, obtaining the capacity of the first remaining space pointed to by a cursor in the cache pool;
comparing the capacity of the first remaining space pointed to by the cursor with the length of the data block;
if the capacity of the first remaining space pointed to by the cursor is greater than or equal to the length of the data block, storing the component parts of the data block, in order, into the first remaining space starting from the address space the cursor points to;
after the data block has been stored in the first remaining space, moving the cursor to the tail of the data block, the moved cursor pointing to the first remaining space in the cache pool after the data block has been stored.
In a second aspect, an embodiment of the invention also provides a data caching device, comprising:
a first-remaining-space acquisition module, configured to obtain, when a data block is to be stored in a cache pool, the capacity of the first remaining space pointed to by a cursor in the cache pool;
a judging module, configured to compare the capacity of the first remaining space pointed to by the cursor with the length of the data block;
a data block storage module, configured to store, when the capacity of the first remaining space pointed to by the cursor is greater than or equal to the length of the data block, the component parts of the data block, in order, into the first remaining space starting from the address space the cursor points to;
a cursor moving module, configured to move the cursor to the tail of the data block after the data block has been stored in the first remaining space, the moved cursor pointing to the first remaining space in the cache pool after the data block has been stored.
As can be seen from the above technical solutions, the embodiments of the invention have the following advantages:
In embodiments of the invention, when a data block needs to be stored in the cache pool, the capacity of the first remaining space pointed to by the cursor in the cache pool is obtained, and it is then judged whether that first remaining space is large enough to hold the data block. When the capacity of the first remaining space is greater than or equal to the length of the data block, the component parts of the data block are stored, in order, into the first remaining space starting from the address space the cursor points to, and after the data have been stored the cursor is moved to the tail of that data block, so the moved cursor points to the first remaining space left in the cache pool after the block was stored. Because each data block is stored in the first remaining space pointed to by the cursor, and the cursor is promptly moved to the tail of the block just stored, storage always proceeds according to the first remaining space the cursor indicates, and the cursor always ends up at the tail of the most recently stored block. Even when data blocks have variable length, the cursor can therefore indicate the space in the cache pool that is still available for storing data, and data blocks are stored into that remaining space. The embodiments of the invention are thus suited to storing data of variable length, achieve compact storage of data blocks in the cache pool, and avoid wasting storage space.
The terms "first", "second", and so on in the specification, the claims, and the accompanying drawings are used to distinguish similar objects and do not describe any particular order or sequence. It should be understood that terms used in this way are interchangeable where appropriate; they are merely a way of distinguishing objects of the same kind when describing embodiments of the invention. In addition, the terms "comprise" and "have" and any variants of them are intended to cover non-exclusive inclusion: a process, method, system, product, or device that comprises a series of units is not necessarily limited to those units, and may include other units that are not expressly listed or that are inherent to that process, method, product, or device.
Each of these is described in detail below.
One embodiment of the data caching method of the invention can be applied to caching data in a memory system and may specifically comprise the following steps: when a data block is to be stored in a cache pool, obtaining the capacity of the first remaining space pointed to by a cursor in the cache pool; comparing the capacity of the first remaining space pointed to by the cursor with the length of the data block; if the capacity of the first remaining space pointed to by the cursor is greater than or equal to the length of the data block, storing the component parts of the data block, in order, into the first remaining space starting from the address space the cursor points to; and, after the data block has been stored in the first remaining space, moving the cursor to the tail of the data block, the moved cursor pointing to the first remaining space in the cache pool after the data block has been stored.
Referring to Fig. 1, the data caching method provided by one embodiment of the invention may comprise the following steps:
101. When a data block is to be stored in the cache pool, obtain the capacity of the first remaining space pointed to by the cursor in the cache pool.
In embodiments of the invention, the cache pool is used to store data blocks. A cursor is also maintained in the cache pool to indicate the start position of the next data to be stored. When data are stored in the cache pool, still-idle address space is used first, so the cursor can be understood as pointing to the remaining space in the cache pool that is still available for storing data; the cursor can also be understood simply as a pointer. When the cache pool is in its initial state it is empty, the cursor points to the head of the cache pool, and the entire address space of the pool is idle. In the embodiments of the invention, the address space in the cache pool that is available for storing data after initialization is called the "first remaining space": it runs from the address space the cursor points to up to the tail of the cache pool, the cursor indicating its start position and the tail of the cache pool being its end position. When a data block needs to be stored in the cache pool, the first remaining space is first located through the cursor and its capacity is obtained, so that it is known how long a data block the cache pool can still hold.
In some embodiments of the invention, in order to quickly locate the data blocks stored in the cache pool, data blocks may be stored using hash caching. Each data block then carries a key; when a data block is stored in the cache pool, the hash table can be queried by the key to find the address space in the cache pool where the block is stored, and the address found is the value corresponding to that key. Note that a key lookup may return several values, each corresponding to one data block in the cache pool; these data blocks can be strung together by a doubly linked list. Each data block in the cache pool then comprises the following component parts: the key, the previous-block address, the next-block address, the data length, and the data content. The previous-block address indicates the storage address of the previous data block with the same key, the next-block address indicates the storage address of the next data block with the same key, the data length indicates the length of the data content, and the data content is the actual data carried in the block.
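Purely as an illustration (not part of the original disclosure), the following Python sketch shows one possible layout for such a block: key, previous-block address, next-block address, data length, then the data content. The field widths, the 16-byte key, and the use of -1 as the "no block" marker are assumptions made only for this example.

```python
import struct

HEADER = struct.Struct("<16sqqI")   # key (16 bytes), prev addr, next addr, data length

def pack_block(key: bytes, prev_addr: int, next_addr: int, payload: bytes) -> bytes:
    # Serialize one variable-length block; -1 marks "no previous/next block".
    return HEADER.pack(key.ljust(16, b"\0"), prev_addr, next_addr, len(payload)) + payload

def unpack_block(buf: bytes, offset: int = 0):
    # Read the block starting at `offset`; returns (key, prev, next, payload, total length).
    key, prev_addr, next_addr, length = HEADER.unpack_from(buf, offset)
    start = offset + HEADER.size
    payload = bytes(buf[start:start + length])
    return key.rstrip(b"\0"), prev_addr, next_addr, payload, HEADER.size + length

blk = pack_block(b"user:42", -1, -1, b"hello")
print(unpack_block(blk))
```

Because the data-length field is stored inside the block, blocks of any length can be written back-to-back and still be parsed unambiguously, which is what makes the compact storage described below possible.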
In some embodiments of the invention, obtaining in step 101 the capacity of the first remaining space pointed to by the cursor may specifically comprise: querying the hash table with the key of the data block and finding a fourth data block in the cache pool, the fourth data block being a data block whose key value is identical to that of the data block; the first remaining space then runs from the tail of the fourth data block to the tail of the cache pool. In other words, by querying the hash table with the key carried by the data block, a data block with the same key value can be found in the cache pool; it is called the "fourth data block" in the embodiments of the invention, and the space from the tail of the fourth data block to the tail of the cache pool is the first remaining space.
102. Compare the capacity of the first remaining space pointed to by the cursor with the length of the data block.
In embodiments of the invention, after the capacity of the first remaining space pointed to by the cursor has been obtained, it must be judged whether the first remaining space is large enough to hold the data block, so that the block is guaranteed to be storable in the cache pool. Step 102 therefore compares the capacity of the first remaining space with the length of the data block; the result may be that the capacity of the first remaining space is greater than, equal to, or less than the length of the data block. Specifically, when the capacity of the first remaining space is greater than or equal to the length of the data block, step 103 is triggered.
103. If the capacity of the first remaining space pointed to by the cursor is greater than or equal to the length of the data block, store the component parts of the data block, in order, into the first remaining space starting from the address space the cursor points to.
In embodiments of the invention, if the capacity of the first remaining space pointed to by the cursor is greater than or equal to the length of the data block, the remaining space of the cache pool is still sufficient to hold the data block, and the component parts of the data block are stored, in order, into the first remaining space starting from the address space the cursor points to. Storing the parts in order means that each component part of the block is written into the first remaining space following the block's own layout, so the data block is stored compactly in the first remaining space.
104. After the data block has been stored in the first remaining space, move the cursor to the tail of the data block; the moved cursor points to the first remaining space in the cache pool after the data block has been stored.
In embodiments of the invention, after the data block has been stored in the first remaining space, the cursor in the cache pool is moved to the tail of the data block, so the moved cursor points to the first remaining space left in the cache pool after the block was stored. As data blocks are stored, the cursor keeps moving toward the tail of the cache pool and always rests at the tail of the most recently stored block, so the cursor always points to the remaining space of the cache pool; the next time a data block is stored, the cursor indicates the address space where it can be stored.
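The following minimal Python sketch illustrates steps 101-104 under the assumption that the cache pool is a flat byte buffer and the cursor is simply an integer offset; the class and method names are invented for this example and are not taken from the original disclosure.

```python
class CachePool:
    def __init__(self, capacity: int):
        self.buf = bytearray(capacity)
        self.cursor = 0                        # start of the first remaining space

    def first_remaining(self):
        # Step 101: capacity of the first remaining space (cursor to the tail of the pool).
        return len(self.buf) - self.cursor

    def store(self, block: bytes):
        # Step 102: compare the remaining capacity with the block length.
        if self.first_remaining() < len(block):
            return None                        # caller must evict first (steps A1-A3 below)
        addr = self.cursor
        # Step 103: copy the block's component parts, in order, from the cursor onward.
        self.buf[addr:addr + len(block)] = block
        # Step 104: move the cursor to the tail of the block just stored.
        self.cursor = addr + len(block)
        return addr

pool = CachePool(64)
print(pool.store(b"\x01" * 10), pool.cursor)   # 0 10
```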
In the embodiments of the invention, steps 103 and 104 are performed when the capacity of the first remaining space pointed to by the cursor is greater than or equal to the length of the data block. When the capacity of the first remaining space pointed to by the cursor is less than the length of the data block, the data caching method provided by the embodiments of the invention may further comprise the following steps:
A1. Move the cursor to a first data block stored in the cache pool and evict the first data block from the cache pool; the address space vacated in the cache pool after the first data block has been evicted is the second remaining space pointed to by the cursor;
A2. Store the component parts of the data block, in order, into the second remaining space starting from the address space the cursor points to;
A3. After the data block has been stored in the second remaining space, move the cursor to the tail of the data block; the moved cursor points to the second remaining space in the cache pool after the data block has been stored.
When the capacity of the first remaining space pointed to by the cursor is less than the length of the data block, the current remaining space of the cache pool is not enough to hold the block. Step A1 is then performed: the cursor is moved to a first data block stored in the cache pool, and the first data block is evicted from the pool. After being evicted, the first data block vacates a corresponding address space; the address space obtained by deleting a data block that was stored in the cache pool is called the "second remaining space" in the embodiments of the invention. Note that the first remaining space is the remaining space that exists in the cache pool right after initialization, whereas the second remaining space is address space vacated by evicting data blocks already stored in the cache pool. Because the cursor is moved to the first data block and the first data block is evicted, the cursor now points to the second remaining space. As the foregoing shows, in the embodiments of the invention the cursor moves over time as data blocks are stored in, and evicted from, the cache pool. Specifically, in some embodiments of the invention the first data block may be the data block that has been stored in the cache pool the longest, so that when storage space is limited and new data must be stored, the oldest data can be removed by a simple rule. The first data block may also be a data block of a particular type, for example the data block with the lowest priority; the choice depends on the application scenario.
After the second remaining space has been obtained by evicting the first data block, step A2 is performed: the component parts of the data block are stored, in order, into the second remaining space starting from the address space the cursor points to. That is, when the first remaining space is not enough to hold the data block, some existing data can be discarded from the cache pool to vacate address space for the new block. After the data block has been stored in the second remaining space, step A3 moves the cursor to the tail of the data block, so the moved cursor points to the second remaining space in the cache pool after the block was stored. As data blocks are stored, the cursor keeps moving and always rests at the tail of the most recently stored block, so it always points to the remaining space of the cache pool and indicates, for the next store, the address space where a data block can be stored.
Note that in the embodiments of the invention, the data blocks stored in the cache pool are evicted in step A1 only when the first remaining space pointed to by the cursor is not enough to hold the new data block. Specifically, the evicted block may be the one that has been stored in the cache pool the longest. In the prior-art hash caching approach, old data are evicted by periodically checking how long each entry has been stored, which requires the cache pool to be scanned on a timer and causes unnecessary checking. In the embodiments of the invention, old data are evicted on demand: they are evicted only when the cache pool runs out of space, there is no periodic scan of storage times, and the computer's unnecessary eviction work is reduced.
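As a rough illustration of steps A1-A3 only (the data structures below are assumptions, not the patent's implementation), the oldest block can be tracked in insertion order and evicted on demand, the cursor moving back to its address and then on to the tail of the newly stored block:

```python
from collections import OrderedDict

class EvictionSketch:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cursor = 0
        self.blocks = OrderedDict()            # addr -> length, oldest entry first

    def evict_oldest(self):
        # A1: move the cursor to the longest-stored block and reclaim its address space;
        # the vacated space is the second remaining space. Eviction happens only on demand.
        addr, length = self.blocks.popitem(last=False)
        self.cursor = addr
        return addr, length

    def store_in_reclaimed(self, length: int):
        # A2/A3: store from the cursor, then move the cursor to the new block's tail.
        addr = self.cursor
        self.blocks[addr] = length
        self.cursor = addr + length
        return addr

pool = EvictionSketch(32)
pool.blocks[0] = 12                            # pretend an old block already occupies 0..11
pool.cursor = 12
addr, freed = pool.evict_oldest()
print(addr, freed, pool.store_in_reclaimed(10), pool.cursor)   # 0 12 0 10
```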
Note that in some embodiments of the invention, step A1 may further comprise the following steps after the first data block has been evicted from the cache pool:
A1a. Compare the capacity of the second remaining space pointed to by the cursor with the length of the data block;
A1b. If the capacity of the second remaining space pointed to by the cursor is greater than or equal to the length of the data block, trigger step A2 to store the component parts of the data block, in order, into the second remaining space starting from the address space the cursor points to;
A1c. If the capacity of the second remaining space pointed to by the cursor is less than the length of the data block, evict a second data block stored in the cache pool, merge the address space vacated by evicting the second data block into the second remaining space, and trigger step A2 to store the component parts of the data block, in order, into the second remaining space starting from the address space the cursor points to; the second data block is the block in the cache pool that lies after the cursor and is adjacent to the first data block.
That is, after the first data block has been evicted, it is still necessary to judge whether the second remaining space is enough to hold the data block. When the capacity of the second remaining space pointed to by the cursor is less than the length of the data block, step A1c evicts a second data block stored in the cache pool: when evicting the first data block alone is not enough, further blocks must be evicted. To keep the data blocks stored compactly in the cache pool, the block evicted is the one that lies after the cursor and is adjacent to the first data block (the second data block). The address space vacated by evicting the second data block is merged into the second remaining space, and step A2 is then triggered. It will be understood that when the second remaining space merged from the first and second data blocks is still not enough to hold the data block, steps A1a to A1c may be performed repeatedly to ensure that the new block can be stored in the cache pool.
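The loop in steps A1a-A1c can be pictured with the hedged helper below; the list-of-(address, length) model of the stored blocks is an assumption made purely to show how adjacent blocks are evicted and their space merged until the new block fits.

```python
def reclaim_until_fits(stored_blocks, needed):
    # stored_blocks: (address, length) pairs in pool order, oldest first.
    # Evict the oldest block, then keep evicting the adjacent block and merging its
    # space into the second remaining space (A1c) until the new block fits (A1b).
    start_addr = stored_blocks[0][0]
    freed, evicted = 0, []
    for addr, length in stored_blocks:
        if freed >= needed:
            break
        freed += length
        evicted.append(addr)
    return start_addr, freed, evicted

# Two adjacent blocks must be evicted before a 12-byte block fits in the merged space.
print(reclaim_until_fits([(0, 8), (8, 6), (14, 20)], 12))   # (0, 14, [0, 8])
```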
In other embodiments of the invention, step A1 may further comprise the following steps after the first data block has been evicted from the cache pool:
A1a. Compare the capacity of the second remaining space pointed to by the cursor with the length of the data block;
A1b. If the capacity of the second remaining space pointed to by the cursor is greater than or equal to the length of the data block, trigger step A2 to store the component parts of the data block, in order, into the second remaining space starting from the address space the cursor points to;
A1d. If the capacity of the second remaining space pointed to by the cursor is less than the length of the data block, and no stored data block lies in the cache pool between the address space the cursor points to and the tail of the cache pool, merge the first remaining space into the second remaining space and trigger step A2 to store the component parts of the data block, in order, into the second remaining space starting from the address space the cursor points to.
That is, in some embodiments of the invention the end position of the first remaining space is the tail of the cache pool. When no data block is stored between the second remaining space and the tail of the cache pool, the first remaining space is adjacent to the second remaining space; if the second remaining space is again not enough to hold the data block, the two spaces can be combined by merging the first remaining space into the second remaining space, and processing then proceeds to step A2.
Note that if there is no data block available for eviction between the second remaining space and the tail of the cache pool, and the second remaining space is still not enough to hold the data block even after the first remaining space has been merged into it, steps A1 to A3 may be performed again.
In some embodiments of the invention, step 101 may specifically comprise: querying the hash table with the key of the data block and finding a fourth data block in the cache pool, the fourth data block being a data block whose key value is identical to that of the data block; the first remaining space then runs from the tail of the fourth data block to the tail of the cache pool. In this case the fourth data block comprises: the key, the previous-block address, the next-block address, the data length, and the data content. After step 103 has stored the component parts of the data block, in order, into the first remaining space starting from the address space the cursor points to, the method may further comprise the following steps:
B1. Set the previous-block address of the data block to a preset symbol;
B2. Set the next-block address of the data block to the storage address of the fourth data block;
B3. Set the previous-block address of the fourth data block to the storage address of the data block.
The data block referred to in steps B1 to B3 is the data block stored into the first remaining space in step 103; the fourth data block is another block found by querying the hash table, and the value of its key is identical to the key carried by the data block. If the data blocks in the cache pool include previous-block and next-block addresses, then after the data block has been stored the set of blocks in the cache pool has changed, so the previous-block and next-block addresses of the data block must be set and the previous-block address of the fourth data block must be modified, in order to maintain the doubly linked list in real time and guarantee that the storage locations of all data blocks with the same key value in the cache pool can be found through the doubly linked list. In addition, to make it easy to find the block most recently stored in the cache pool, the fourth data block found by the key is normally the most recently stored block with that key, that is, the head of the doubly linked list. With this implementation, after the data block has been stored in the first remaining space of the cache pool in step 103, it replaces the fourth data block as the head of the doubly linked list: the next-block address of the data block is set to the storage address of the fourth data block, and the fourth data block becomes the most recently stored block apart from this one. Because the data block in step B1 is the newest block stored in the cache pool, it has no previous block, and a preset symbol is used to represent this; for example, a blank may be used as the preset symbol, so that a blank previous-block address means there is no previous block. Other symbols, such as letters, may of course also be used as the preset symbol.
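A small sketch of steps B1-B3, with the hash table as a plain dictionary and -1 standing in for the preset symbol (both assumptions for illustration only): the newly stored block becomes the head of the key's doubly linked list, and the hash table is repointed at it.

```python
NONE_ADDR = -1                                  # the "preset symbol" for no previous block

def insert_at_head(hash_table, blocks, key, new_addr, old_head_addr):
    blocks[new_addr]["prev"] = NONE_ADDR        # B1: the new block has no previous block
    blocks[new_addr]["next"] = old_head_addr    # B2: it points at the former head (fourth block)
    if old_head_addr != NONE_ADDR:
        blocks[old_head_addr]["prev"] = new_addr   # B3: the former head points back at it
    hash_table[key] = new_addr                  # the key now resolves to the new head first

blocks = {100: {"prev": NONE_ADDR, "next": NONE_ADDR},   # the existing "fourth data block"
          200: {"prev": None, "next": None}}             # the block just stored at address 200
table = {"user:42": 100}
insert_at_head(table, blocks, "user:42", 200, 100)
print(table["user:42"], blocks[200], blocks[100])
```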
Note that in some embodiments of the invention, after step 103 has stored the component parts of the data block, in order, into the first remaining space starting from the address space the cursor points to, the method further comprises: modifying the hash table so that the data block can be found when the modified hash table is queried by the key. That is, after the data block has been stored in the first remaining space in step 103, the hash table is modified so that querying the modified hash table with the block's key finds this data block.
In some embodiments of the invention, when the second data block stored in the cache pool comprises the key, the previous-block address, the next-block address, the data length, and the data content, and the second data block is the block in the cache pool that lies after the cursor and is adjacent to the first data block, step A1 further comprises, after the first data block has been evicted from the cache pool: setting the previous-block address of the second data block to the preset symbol. That is, when the first data block is adjacent to the second data block through the doubly linked list, the next-block address of the first data block refers to the storage address of the second data block and the previous-block address of the second data block refers to the storage address of the first data block; after the first data block has been evicted, the doubly linked list must still be maintained, so the link between the first and second data blocks is broken by changing the previous-block address of the second data block to the preset symbol. Moreover, when the first data block is the block that has been stored in the cache pool the longest, once it is evicted the second data block becomes the block that has been stored in the cache pool the longest.
As the above description of the embodiments of the invention shows, when a data block needs to be stored in the cache pool, the capacity of the first remaining space pointed to by the cursor in the cache pool is obtained, and it is then judged whether that first remaining space is large enough to hold the data block. When the capacity of the first remaining space is greater than or equal to the length of the data block, the component parts of the data block are stored, in order, into the first remaining space starting from the address space the cursor points to, and after the data have been stored the cursor is moved to the tail of that data block, so the moved cursor points to the first remaining space left in the cache pool after the block was stored. Because each data block is stored in the first remaining space pointed to by the cursor, and the cursor is promptly moved to the tail of the block just stored, storage always proceeds according to the first remaining space the cursor indicates, and the cursor always ends up at the tail of the most recently stored block. Even when data blocks have variable length, the cursor can therefore indicate the space in the cache pool that is still available for storing data, and data blocks are stored into that remaining space. The embodiments of the invention are thus suited to storing data of variable length, achieve compact storage of data blocks in the cache pool, and avoid wasting storage space.
To aid understanding and implementation of the above schemes of the embodiments of the invention, a concrete application scenario is described below by way of example.
Current computer storage devices are mainly hard disks and memory. A hard disk has large capacity and low cost and keeps its contents without power, but its read and write speeds are slow, so it suits data that are kept long-term but accessed rarely. Memory reads and writes quickly but has small capacity and high cost and needs power to retain data, so it suits data that are kept short-term but accessed frequently. Hash caching exploits exactly this property of the computing architecture to keep a burst of user requests from overwhelming the computer's disk, and it fits the demands of Internet storage services very well. However, existing hash caches mainly store data whose length varies little, generally need to delete expired data on a timer, waste space, are inefficient, and have a narrow range of application. When hash caching is implemented with the data caching method provided by the embodiments of the invention, data of variable length can be stored, storage space is not wasted, no periodic scan is needed, and the eviction of data blocks is simplified.
Providing Internet services places very high demands on a system's storage performance, and because user demands change quickly, solutions must be broadly applicable. Conventional caching schemes waste space, are inefficient, and have a narrow range of application, so they cannot respond quickly to Internet demands. The present invention adds a data-length field to the data blocks in the cache pool, stores data compactly, and evicts old data on demand.
Referring to Fig. 2-a, which is a schematic diagram of the implementation of the data caching method in an embodiment of the invention, the description uses memory as the cache pool. The structure of the data caching device can be designed as shown in Fig. 2-a. Specifically:
Hash table: used to index the data blocks in the cache pool quickly; the storage location of a data block in memory is found quickly through the value of the key.
Cache pool: used to store data blocks. The cache pool is no longer divided into multiple storage blocks of equal length; instead, each data block is stored directly. As shown in Fig. 2-a, n data blocks are stored in the cache pool in the current state: data block 1, data block 2, ..., data block (k-1), data block k, data block (k+1), ..., data block n, where k is less than n and k and n are non-zero natural numbers. By querying the hash table with the key, data block 1 is found in the cache pool.
Cursor: a pointer that indicates the start position where new data will be stored; it cycles within the storage range of the cache pool.
*A: because data block lengths are not fixed, a small piece of free memory may exist at the tail of the cache pool; it is denoted here as the first remaining space.
*B: because data block lengths are not fixed, the address space cleared from the cache pool is denoted as the second remaining space. The cleared address space is generally larger than the space required by the new data block, so after the new data block has been stored a small amount of free memory may remain; this too is called the second remaining space.
Referring to Fig. 2-b, which is a schematic diagram of the component parts of a data block in an embodiment of the invention, each data block comprises: the key, the previous-block address, the next-block address, the data length, and the data content. Specifically:
Key: through the key value and the hash table shown in Fig. 2-a, the storage location of a data block can be found quickly. The data blocks in the cache pool are stored compactly, and blocks with the same key are connected into a doubly linked list through the "previous-block address" and "next-block address" fields.
Previous-block address: indicates the previous data block with the same key value. It links the data blocks into a chain (shown by the dashed curves in Fig. 2-a); the "∧" symbol at the end of the chain denotes an empty address, meaning the chain ends there.
Next-block address: indicates the next data block with the same key value. It also links the data blocks into a chain (shown by the solid curves in Fig. 2-a) that runs in the opposite direction to the dashed chain; together the two chains form a doubly linked list whose head can be indexed quickly by the key in the hash table.
Data length: indicates the length of the data content.
Data content: stores the actual data.
The initialization flow of data in the cache pool is as follows:
Clear the hash table and the cache pool to zero and point the cursor to the head of the cache pool; *A is then the entire cache pool, *B is empty, and no data are stored in the cache pool.
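For illustration only, the initial state might look like the following sketch (the pool size and variable names are assumptions made for this example):

```python
POOL_SIZE = 1024

hash_table = {}                        # cleared: no key maps to any block yet
cache_pool = bytearray(POOL_SIZE)      # cleared to zero
cursor = 0                             # cursor points to the head of the pool
first_remaining = POOL_SIZE - cursor   # *A covers the whole cache pool
second_remaining = 0                   # *B does not exist yet
print(first_remaining, second_remaining)
```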
The main flow of querying data in the cache pool is as follows:
The position where the data block is stored in the cache pool is first found in the hash table through the value of the key; the chain of data blocks can then be traversed through each block's next-block address.
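The query flow can be sketched as below; the dictionary-based hash table and block records are assumptions made only to show the lookup followed by the chain traversal, newest block first.

```python
NONE_ADDR = -1

def query(hash_table, blocks, key):
    # Yield every stored block that carries this key, newest first.
    addr = hash_table.get(key, NONE_ADDR)
    while addr != NONE_ADDR:
        block = blocks[addr]
        yield block["data"]
        addr = block["next"]           # follow the next-block address to the older block

blocks = {0: {"data": b"new", "next": 40}, 40: {"data": b"old", "next": NONE_ADDR}}
print(list(query({"user:42": 0}, blocks, "user:42")))   # [b'new', b'old']
```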
The main flow of storing data in the cache pool is as follows:
First compute the size the new data block will occupy in the cache pool, that is, the length of the new data block, which is the sum of the sizes of the key, the data-length field, the data content, the previous-block address, and the next-block address. Then obtain the capacity of the first remaining space in the cache pool (the *A described above) and compare the capacity of *A with the length of the new data block. If the capacity of *A is greater than or equal to the length of the new data block, store the component parts of the new data block, in order, into *A starting from the address space the cursor points to, and move the cursor to the tail of the new data block. If the capacity of *A is not enough to hold the new data block, move the cursor to the data block that has been stored in the cache pool the longest and evict as few data blocks after the cursor as possible, clearing their address space to form *B and ensuring that the cleared second remaining space *B is enough to hold the new data block; a block is evicted by deleting it from the tail of the doubly linked list it belongs to. After the new data block has been stored, move the cursor down to its tail.
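A condensed sketch of this flow is given below, under stated assumptions (the header layout matches the earlier example, the stored blocks are modelled as a simple list, and linked-list maintenance is omitted): compute the new block's total length, use *A if it fits, otherwise evict just enough of the oldest blocks to open *B.

```python
import struct

HEADER = struct.Struct("<16sqqI")      # key, prev addr, next addr, data-length field

def block_length(payload: bytes) -> int:
    # Total length = key + previous-block address + next-block address
    #              + data-length field + data content.
    return HEADER.size + len(payload)

def choose_store_address(pool_size, cursor, oldest_blocks, payload):
    # oldest_blocks: (address, length) pairs in storage order, oldest first.
    need = block_length(payload)
    if pool_size - cursor >= need:     # *A is large enough: store at the cursor
        return cursor, cursor + need
    # Otherwise evict just enough of the oldest blocks to open *B, then store there.
    addr, freed = oldest_blocks[0][0], 0
    while oldest_blocks and freed < need:
        _, length = oldest_blocks.pop(0)
        freed += length
    return addr, addr + need           # the cursor rests at the tail of the new block

print(choose_store_address(64, 60, [(0, 40), (40, 20)], b"0123456789"))   # (0, 46)
```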
As described above, when a data block is queried by its key, the first data block with the same key value (the head of the doubly linked list) is found through the hash table; the new data block is then inserted at the head of the doubly linked list and the hash table is modified to point to the new data block, so the new data block becomes the first block for that key value. The embodiments of the invention can therefore effectively solve the problem of caching variable-length data of all kinds. Through effective memory management they improve space utilization and the efficiency with which old data are evicted. They suit the storage needs of variable-length data of all kinds and adapt to a very wide range of uses. Compared with existing hash caching schemes, they save storage space, query and store efficiently, and can store multiple types of data, so they fit the fast-changing demands of the Internet.
Note that, for brevity, each of the foregoing method embodiments is described as a series of combined actions, but those skilled in the art should understand that the invention is not limited by the order of actions described, because according to the invention some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should understand that the embodiments described in the specification are all preferred embodiments and that the actions and modules involved are not necessarily required by the invention.
To facilitate implementation of the above schemes of the embodiments of the invention, related apparatus for implementing them is provided below.
Referring to Fig. 3-a, a data caching device 300 provided by an embodiment of the invention may comprise a first-remaining-space acquisition module 301, a judging module 302, a data block storage module 303, and a cursor moving module 304, wherein:
the first-remaining-space acquisition module 301 is configured to obtain, when a data block is to be stored in a cache pool, the capacity of the first remaining space pointed to by a cursor in the cache pool;
the judging module 302 is configured to compare the capacity of the first remaining space pointed to by the cursor with the length of the data block;
the data block storage module 303 is configured to store, when the capacity of the first remaining space pointed to by the cursor is greater than or equal to the length of the data block, the component parts of the data block, in order, into the first remaining space starting from the address space the cursor points to;
the cursor moving module 304 is configured to move the cursor to the tail of the data block after the data block has been stored in the first remaining space, the moved cursor pointing to the first remaining space in the cache pool after the data block has been stored.
Referring to Fig. 3-b, in some embodiments of the invention the data caching device 300 may, relative to the device shown in Fig. 3-a, further comprise a data block eviction module 305, wherein:
the cursor moving module 304 is further configured to move the cursor to a first data block stored in the cache pool when the capacity of the first remaining space pointed to by the cursor is less than the length of the data block;
the data block eviction module 305 is configured to evict the first data block from the cache pool when the capacity of the first remaining space pointed to by the cursor is less than the length of the data block, the address space vacated in the cache pool after the first data block has been evicted being the second remaining space pointed to by the cursor;
the data block storage module 303 is further configured to store the component parts of the data block, in order, into the second remaining space starting from the address space the cursor points to;
the cursor moving module 304 is further configured to move the cursor to the tail of the data block after the data block has been stored in the second remaining space, the moved cursor pointing to the second remaining space in the cache pool after the data block has been stored.
Referring to Fig. 3-c, in some embodiments of the invention the data caching device 300 may, relative to the device shown in Fig. 3-b, further comprise a space merging module 306, wherein:
the judging module 302 is configured to compare the capacity of the second remaining space pointed to by the cursor with the length of the data block after the data block eviction module has evicted the first data block from the cache pool, and, if the capacity of the second remaining space pointed to by the cursor is greater than or equal to the length of the data block, to trigger the data block storage module;
the data block eviction module 305 is further configured to evict a second data block stored in the cache pool when the capacity of the second remaining space pointed to by the cursor is less than the length of the data block;
the space merging module 306 is configured to merge, when the capacity of the second remaining space pointed to by the cursor is less than the length of the data block, the address space vacated by evicting the second data block into the second remaining space and to trigger the data block storage module, the second data block being the block in the cache pool that lies after the cursor and is adjacent to the first data block.
In some embodiments of the invention, the data block comprises: the key, the previous-block address, the next-block address, the data length, and the data content.
In this case the first-remaining-space acquisition module 301 is specifically configured to query the hash table with the key of the data block and find a fourth data block in the cache pool, the fourth data block being a data block whose key value is identical to that of the data block; the first remaining space runs from the tail of the fourth data block to the tail of the cache pool.
Specifically, the fourth data block comprises the key, the previous-block address, the next-block address, the data length, and the data content. Referring to Fig. 3-d, in some embodiments of the invention the data caching device 300 may, relative to the device shown in Fig. 3-a, further comprise a data block modification module 307 configured, after the data block storage module has stored the component parts of the data block, in order, into the first remaining space starting from the address space the cursor points to, to set the previous-block address of the data block to a preset symbol, to set the next-block address of the data block to the storage address of the fourth data block, and to set the previous-block address of the fourth data block to the storage address of the data block.
In addition, when the second data block stored in the cache pool comprises the key, the previous-block address, the next-block address, the data length, and the data content, and the second data block is the block in the cache pool that lies after the cursor and is adjacent to the first data block, the data block modification module 307 is further configured to set the previous-block address of the second data block to the preset symbol after the first data block has been evicted from the cache pool.
Referring to Fig. 3-e, in some embodiments of the invention the data caching device 300 may, relative to the device shown in Fig. 3-a, further comprise a hash table modification module 308 configured to modify the hash table after the data block storage module has stored the component parts of the data block, in order, into the first remaining space starting from the address space the cursor points to, so that the data block can be found when the modified hash table is queried by the key.
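Purely as an orientation aid, the device's modules can be mapped onto plain methods as in the sketch below; the class name, the simplified block bookkeeping, and the omission of the hash table and linked-list modules are assumptions made for this example only.

```python
class DataCacheDevice:
    def __init__(self, capacity: int):
        self.buf = bytearray(capacity)
        self.cursor = 0
        self.blocks = []                       # (address, length) in storage order

    def first_remaining(self):                 # first-remaining-space acquisition module 301
        return len(self.buf) - self.cursor

    def fits(self, length):                    # judging module 302
        return self.first_remaining() >= length

    def evict(self, needed):                   # eviction module 305 + space merging module 306
        addr, freed = self.blocks[0][0], 0
        while self.blocks and freed < needed:
            _, length = self.blocks.pop(0)
            freed += length
        self.cursor = addr

    def store(self, block: bytes):             # storage module 303 + cursor moving module 304
        if not self.fits(len(block)):
            self.evict(len(block))
        addr = self.cursor
        self.buf[addr:addr + len(block)] = block
        self.blocks.append((addr, len(block)))
        self.cursor = addr + len(block)
        return addr

dev = DataCacheDevice(32)
print(dev.store(b"x" * 12), dev.store(b"y" * 12), dev.store(b"z" * 12))   # 0 12 0
```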
In embodiments of the invention, when a data block needs to be stored in the cache pool, the first-remaining-space acquisition module obtains the capacity of the first remaining space pointed to by the cursor in the cache pool, and the judging module then judges whether the first remaining space is large enough to hold the data block. When the capacity of the first remaining space pointed to by the cursor is greater than or equal to the length of the data block, the data block storage module stores the component parts of the data block, in order, into the first remaining space starting from the address space the cursor points to, and after the data have been stored the cursor moving module moves the cursor to the tail of the data block, so the moved cursor points to the first remaining space left in the cache pool after the block was stored. Because each data block is stored in the first remaining space pointed to by the cursor, and the cursor is promptly moved to the tail of the block just stored, storage always proceeds according to the first remaining space the cursor indicates, and the cursor always ends up at the tail of the most recently stored block. Even when data blocks have variable length, the cursor can therefore indicate the space in the cache pool that is still available for storing data, and data blocks are stored into that remaining space. The embodiments of the invention are thus suited to storing data of variable length, achieve compact storage of data blocks in the cache pool, and avoid wasting storage space.
The data caching method of the embodiments of the invention is described below mainly as applied in a terminal. The terminal may be a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, a desktop computer, or the like.
Referring to Fig. 4, which shows a schematic structural diagram of the terminal involved in the embodiment of the invention:
The terminal may comprise a radio frequency (RF) circuit 20, a memory 21 including one or more computer-readable storage media, an input unit 22, a display unit 23, a sensor 24, an audio circuit 25, a Wireless Fidelity (WiFi) module 26, a processor 27 including one or more processing cores, a power supply 28, and other components. Those skilled in the art will understand that the terminal structure shown in Fig. 4 does not limit the terminal, which may include more or fewer components than illustrated, combine some components, or arrange components differently. Specifically:
The RF circuit 20 may be used to receive and send messages or to receive and send signals during a call; in particular, after downlink information from a base station is received it is handed to the one or more processors 27 for processing, and uplink data are sent to the base station. The RF circuit 20 generally includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and so on. The RF circuit 20 can also communicate with networks and other devices by wireless communication, which may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and so on.
The memory 21 may be used to store software programs and modules; the processor 27 runs the software programs and modules stored in the memory 21 to perform various functional applications and data processing. The memory 21 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), and so on, and the data storage area may store data created according to the use of the terminal (such as audio data or a phone book). In addition, the memory 21 may include high-speed random access memory and may also include non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Correspondingly, the memory 21 may also include a memory controller to provide the processor 27 and the input unit 22 with access to the memory 21. There are many kinds of memory 21; by purpose they can be divided into primary memory and secondary memory, primary memory also being known as internal memory.
The input unit 22 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In a specific embodiment, the input unit 22 may include a touch-sensitive surface 221 and other input devices 222. The touch-sensitive surface 221, also called a touch display screen or touchpad, can collect touch operations by the user on or near it (for example, operations performed on or near the touch-sensitive surface 221 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection apparatus according to a preset program. Optionally, the touch-sensitive surface 221 may include a touch detection apparatus and a touch controller: the touch detection apparatus detects the user's touch position, detects the signal produced by the touch operation, and passes the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 27, and can receive and execute commands sent by the processor 27. The touch-sensitive surface 221 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch-sensitive surface 221, the input unit 22 may also include other input devices 222, which may include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys or a power key), a trackball, a mouse, a joystick, and the like.
The display unit 23 may be used to display information entered by the user or provided to the user and the various graphical user interfaces of the terminal, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 23 may include a display panel 231, which may optionally be configured as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch-sensitive surface 221 may cover the display panel 231; when the touch-sensitive surface 221 detects a touch operation on or near it, it passes the operation to the processor 27 to determine the type of the touch event, and the processor 27 then provides a corresponding visual output on the display panel 231 according to the type of the touch event. Although in Fig. 4 the touch-sensitive surface 221 and the display panel 231 implement input and output as two separate components, in some embodiments the touch-sensitive surface 221 and the display panel 231 may be integrated to implement both input and output functions.
The terminal may also include at least one sensor 24, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 231 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 231 and/or the backlight when the terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when at rest, and can be used in applications that recognize the posture of the mobile phone (such as switching between portrait and landscape, related games, and magnetometer posture calibration) and in vibration-recognition functions (such as a pedometer or tapping). Other sensors that can be configured in the terminal, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described here.
Voicefrequency circuit 25, loudspeaker 251, microphone 252 can provide the audio interface between user and terminal.Voicefrequency circuit 25 can by receive voice data conversion after electric signal, be transferred to loudspeaker 251, by loudspeaker 251 be converted to voice signal export; On the other hand, the voice signal of collection is converted to electric signal by microphone 252, voice data is converted to after being received by voicefrequency circuit 25, after again voice data output processor 27 being processed, through RF circuit 20 to send to such as another terminal, or export voice data to storer 21 to process further.Voicefrequency circuit 25 also may comprise earphone jack, to provide the communication of peripheral hardware earphone and terminal.
WiFi belongs to short range wireless transmission technology, and terminal can help user to send and receive e-mail by WiFi module 26, browse webpage and access streaming video etc., and its broadband internet wireless for user provides is accessed.Although Fig. 4 shows WiFi module 26, be understandable that, it does not belong to must forming of terminal, can omit in the scope of essence not changing invention as required completely.
The processor 27 is the control center of the terminal. It connects the various parts of the whole mobile phone through various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 21 and calling the data stored in the memory 21, thereby monitoring the mobile phone as a whole. Optionally, the processor 27 may comprise one or more processing cores; preferably, the processor 27 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs and the like, and the modem processor mainly handles wireless communication. It is understood that the modem processor may also not be integrated into the processor 27.
The terminal also comprises a power supply 28 (such as a battery) that supplies power to the various components. Preferably, the power supply may be logically connected to the processor 27 through a power management system, so that functions such as charging, discharging and power-consumption management are implemented through the power management system. The power supply 28 may also comprise one or more direct-current or alternating-current power sources, a recharging system, a power-failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.
Although not shown, the terminal may also comprise a camera, a Bluetooth module and the like, which are not described here again. Specifically, in this embodiment the display unit of the terminal is a touch-screen display, and the memory 21 of the terminal, similarly to the database described above, can store the sampling time period, the sampling time interval and the frame-rate statistics.
In the terminal of this embodiment, one or more programs are stored in the memory 21 and are configured to be executed by the one or more processors 27, and the one or more programs comprise instructions for performing the following operations:
when a data block is to be stored into the cache pool, obtaining the capacity of the first remaining space pointed to by the vernier (cursor) in the cache pool;
judging the size relationship between the capacity of the first remaining space pointed to by the vernier and the length of the data block;
if the capacity of the first remaining space pointed to by the vernier is greater than or equal to the length of the data block, storing the constituent parts of the data block into the first remaining space in sequence, starting from the address space pointed to by the vernier;
after the data block has been stored into the first remaining space, moving the vernier to the tail of the data block, so that the moved vernier points to the first remaining space of the cache pool after the data block has been stored, as sketched below.
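To make these four operations concrete, the following minimal C sketch shows the fast path in which the first remaining space is large enough. The structure layouts, the field widths and the names block_header_t, cache_pool_t and store_fast_path are illustrative assumptions, not part of the claimed method.

#include <stdint.h>
#include <string.h>

/* Illustrative layout: the embodiment stores a key, a previous-block address,
 * a next-block address, a data length and the data content for each block. */
typedef struct {
    uint64_t key;        /* key of the data block                         */
    uint32_t prev_addr;  /* previous-block address (offset in the pool)   */
    uint32_t next_addr;  /* next-block address (offset in the pool)       */
    uint32_t data_len;   /* length of the data content in bytes           */
    /* the data content follows the header                                */
} block_header_t;

typedef struct {
    uint8_t  *base;      /* start of the cache pool                       */
    uint32_t  size;      /* total capacity of the cache pool in bytes     */
    uint32_t  cursor;    /* the vernier: offset of the first remaining space */
} cache_pool_t;

/* Fast path: the first remaining space (cursor .. end of pool) is large
 * enough, so the constituent parts of the block are written starting at the
 * vernier, and the vernier then moves to the tail of the block just stored. */
static int store_fast_path(cache_pool_t *p, const block_header_t *hdr,
                           const void *data, uint32_t *out_addr)
{
    uint32_t need      = (uint32_t)sizeof(*hdr) + hdr->data_len; /* block length          */
    uint32_t remaining = p->size - p->cursor;                    /* first remaining space */

    if (remaining < need)
        return -1;                      /* caller falls back to the eviction path */

    memcpy(p->base + p->cursor, hdr, sizeof(*hdr));              /* header first   */
    memcpy(p->base + p->cursor + sizeof(*hdr), data, hdr->data_len);
    if (out_addr)
        *out_addr = p->cursor;          /* address at which the block was stored  */
    p->cursor += need;                  /* vernier now points at the new first remaining space */
    return 0;
}

In this reading, moving the vernier is simply advancing an offset, which is what keeps the stored blocks compact even when their lengths are not fixed.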
Specifically, the processor 27 can also be configured to execute the following operation instructions:
if the capacity of the first remaining space pointed to by the vernier is less than the length of the data block, moving the vernier to the first data block stored in the cache pool and eliminating the first data block from the cache pool, where the address space vacated in the cache pool after the first data block has been eliminated is the second remaining space pointed to by the vernier;
storing the constituent parts of the data block into the second remaining space in sequence, starting from the address space pointed to by the vernier;
after the data block has been stored into the second remaining space, moving the vernier to the tail of the data block, so that the moved vernier points to the second remaining space of the cache pool after the data block has been stored.
Specifically, the processor 27 can also be configured to execute the following operation instructions: after the first data block has been eliminated from the cache pool, judging the size relationship between the capacity of the second remaining space pointed to by the vernier and the length of the data block; if the capacity of the second remaining space pointed to by the vernier is greater than or equal to the length of the data block, triggering the operation of storing the constituent parts of the data block into the second remaining space in sequence from the address space pointed to by the vernier; if the capacity of the second remaining space pointed to by the vernier is less than the length of the data block, eliminating a second data block stored in the cache pool from the cache pool, merging the address space vacated in the cache pool after the second data block has been eliminated into the second remaining space, and triggering the operation of storing the constituent parts of the data block into the second remaining space in sequence from the address space pointed to by the vernier, where the second data block is the data block that is located after the vernier in the cache pool and adjacent to the first data block.
Specifically, the processor 27 can also be configured to execute the following operation instructions: after the data block with the longest storage time has been eliminated from the cache pool, judging the size relationship between the capacity of the second remaining space pointed to by the vernier and the length of the data block; if the capacity of the second remaining space pointed to by the vernier is greater than or equal to the length of the data block, triggering the operation of storing the constituent parts of the data block into the second remaining space in sequence from the address space pointed to by the vernier; if the capacity of the second remaining space pointed to by the vernier is less than the length of the data block, and no stored data block exists from the address space pointed to by the vernier up to the tail of the cache pool, merging the first remaining space into the second remaining space and triggering the operation of storing the constituent parts of the data block into the second remaining space in sequence from the address space pointed to by the vernier.
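One possible reading of this eviction path, continuing the C sketch above, is the following. Treating the earliest stored block as sitting at the start of the pool, and the helper names evict_block_at and store_with_eviction, are assumptions of the sketch; removal of evicted keys from the Hash table is omitted.

/* Read the header at 'off' and return the total space occupied by that block.
 * (Removing the block's key from the Hash table is omitted in this sketch.) */
static uint32_t evict_block_at(const cache_pool_t *p, uint32_t off)
{
    block_header_t h;
    memcpy(&h, p->base + off, sizeof(h));
    return (uint32_t)sizeof(h) + h.data_len;
}

/* Eviction path: the vernier moves to the earliest stored block (assumed here
 * to sit at offset 0), adjacent blocks are eliminated one by one and their
 * space is merged into the second remaining space; if no stored block remains
 * between the vernier and the pool tail, the first remaining space is merged
 * in as well, and only then is the new block written. */
static int store_with_eviction(cache_pool_t *p, const block_header_t *hdr,
                               const void *data, uint32_t *out_addr)
{
    uint32_t need     = (uint32_t)sizeof(*hdr) + hdr->data_len;
    uint32_t old_tail = p->cursor;   /* end of the stored blocks before eviction */
    uint32_t freed    = 0;           /* capacity of the second remaining space   */

    p->cursor = 0;                   /* vernier moves to the first stored block  */
    while (freed < need && p->cursor + freed < old_tail)
        freed += evict_block_at(p, p->cursor + freed);  /* eliminate next adjacent block */

    if (freed < need && p->cursor + freed >= old_tail)
        freed = p->size - p->cursor; /* nothing stored up to the pool tail:
                                        merge the first remaining space in      */
    if (freed < need)
        return -1;                   /* the data block is larger than the whole pool */

    memcpy(p->base + p->cursor, hdr, sizeof(*hdr));
    memcpy(p->base + p->cursor + sizeof(*hdr), data, hdr->data_len);
    if (out_addr)
        *out_addr = p->cursor;
    p->cursor += need;               /* vernier moves to the tail of the new block */
    return 0;
}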
Specifically, the data block comprises: a key, a previous-block address, a next-block address, a data length and data content.
Specifically, obtaining the capacity of the first remaining space pointed to by the vernier in the cache pool comprises:
querying a Hash table according to the key of the data block and finding a fourth data block in the cache pool, where the fourth data block is the data block whose key value is identical to that of the data block to be stored, and the space in the cache pool from the tail of the fourth data block up to the tail of the cache pool is the first remaining space.
Specifically, the fourth data block comprises: a key, a previous-block address, a next-block address, a data length and data content. After the constituent parts of the data block have been stored into the first remaining space in sequence from the address space pointed to by the vernier, the previous-block address of the data block is set to a preset symbol, the next-block address of the data block is set to the storage address of the fourth data block, and the previous-block address of the fourth data block is set to the storage address of the data block.
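Continuing the same C sketch, these three address updates could look as follows. The encoding of the preset symbol (PRESET_ADDR) and the function name link_new_version are assumptions, and alignment concerns are ignored for brevity.

#define PRESET_ADDR 0xFFFFFFFFu   /* assumed encoding of the "preset symbol"
                                     marking that no previous block exists  */

/* After the new block for a key has been written at 'new_off', link it in
 * front of the earlier block with the same key (the fourth data block) at
 * 'old_off' by rewriting the previous-block and next-block addresses. */
static void link_new_version(cache_pool_t *p, uint32_t new_off, uint32_t old_off)
{
    block_header_t *new_hdr = (block_header_t *)(p->base + new_off);
    block_header_t *old_hdr = (block_header_t *)(p->base + old_off);

    new_hdr->prev_addr = PRESET_ADDR; /* previous-block address of the new block: preset symbol */
    new_hdr->next_addr = old_off;     /* next-block address of the new block: the fourth block  */
    old_hdr->prev_addr = new_off;     /* previous-block address of the fourth block: new block  */
}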
Specifically, the processor 27 can also be configured to execute the following operation instruction: after the constituent parts of the data block have been stored into the first remaining space in sequence from the address space pointed to by the vernier, modifying the Hash table so that the data block can be found when the modified Hash table is queried according to the key.
Specifically, the second data block stored in the cache pool comprises: a key, a previous-block address, a next-block address, a data length and data content, the second data block being the data block that is located after the vernier in the cache pool and adjacent to the first data block. The processor 27 can also be configured to execute the following operation instruction: after the first data block has been eliminated from the cache pool, setting the previous-block address of the second data block to the preset symbol.
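Continuing the same C sketch, this bookkeeping step reduces to a single header update; the function name is illustrative.

/* After the first data block has been eliminated, the second data block (the
 * one adjacent to it in the pool) no longer has a predecessor, so its
 * previous-block address is reset to the preset symbol. */
static void reset_predecessor_address(cache_pool_t *p, uint32_t second_block_off)
{
    block_header_t *second = (block_header_t *)(p->base + second_block_off);
    second->prev_addr = PRESET_ADDR;  /* preset symbol: no previous block */
}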
As can be seen from the above description of the embodiments of the present invention, when a data block needs to be stored into the cache pool, the capacity of the first remaining space pointed to by the vernier in the cache pool is obtained, and it is then judged whether this first remaining space is large enough to store the data block. When the capacity of the first remaining space pointed to by the vernier is greater than or equal to the length of the data block, the constituent parts of the data block are stored into the first remaining space in sequence from the address space pointed to by the vernier, and after the data block has been stored into the first remaining space the vernier is moved to the tail of the data block, so that the moved vernier points to the first remaining space of the cache pool after the data block has been stored. Because the data block is stored into the first remaining space pointed to by the vernier in the cache pool, and the vernier is moved in time to the tail of the data block just stored, data blocks are always stored according to the first remaining space pointed to by the vernier, and the vernier always moves to the tail of the block that has just been stored. Therefore, even when the lengths of data blocks are not fixed, the vernier can still indicate the remaining space in the cache pool that is available for storing data, and data blocks are stored into that remaining space. The embodiments of the present invention are thus suitable for storing data of non-fixed length, achieve compact storage of data blocks in the cache pool, and avoid wasting storage space.
It should also be noted that the device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiments. In addition, in the drawings of the device embodiments provided by the present invention, the connection relationships between modules indicate that they have communication connections between them, which may be specifically implemented as one or more communication buses or signal lines. Those of ordinary skill in the art can understand and implement the embodiments without creative effort.
Through the above description of the embodiments, those skilled in the art can clearly understand that the present invention may be implemented by software plus the necessary general-purpose hardware, and may of course also be implemented by dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components and the like. In general, any function performed by a computer program can easily be implemented by corresponding hardware, and the specific hardware structures used to implement the same function may also be diverse, for example analog circuits, digital circuits or dedicated circuits. However, in most cases implementation by a software program is the better embodiment for the present invention. Based on such an understanding, the technical solution of the present invention, or the part thereof that contributes to the prior art, may be embodied in the form of a software product. The computer software product is stored in a readable storage medium such as a computer floppy disk, a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc, and comprises a number of instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform the methods described in the embodiments of the present invention.
In summary, the above embodiments are merely intended to describe the technical solutions of the present invention and are not intended to limit them. Although the present invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that the technical solutions described in the above embodiments may still be modified, or some of the technical features may be replaced by equivalents, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed for describing the embodiments are briefly introduced below. Obviously, the accompanying drawings described below show only some embodiments of the present invention, and those skilled in the art can also obtain other drawings from these accompanying drawings.
Fig. 1 is a schematic flow diagram of a data caching method according to an embodiment of the present invention;
Fig. 2-a is a schematic diagram of an implementation process of the data caching method in an embodiment of the present invention;
Fig. 2-b is a schematic diagram of the constituent parts of a data block in an embodiment of the present invention;
Fig. 3-a is a schematic structural diagram of a data caching device according to an embodiment of the present invention;
Fig. 3-b is a schematic structural diagram of another data caching device according to an embodiment of the present invention;
Fig. 3-c is a schematic structural diagram of another data caching device according to an embodiment of the present invention;
Fig. 3-d is a schematic structural diagram of another data caching device according to an embodiment of the present invention;
Fig. 3-e is a schematic structural diagram of another data caching device according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a terminal to which the data caching method provided by an embodiment of the present invention is applied.
Embodiment
The embodiments of the present invention provide a data caching method and a data caching device, which are suitable for storing data of non-fixed length and avoid wasting storage space.
To make the objects, features and advantages of the present invention more apparent and understandable, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the embodiments described below are only some, and not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention shall fall within the protection scope of the present invention.

Claims (18)

1. A data caching method, characterized by comprising:
when a data block is to be stored into a cache pool, obtaining the capacity of a first remaining space pointed to by a vernier in the cache pool;
judging the size relationship between the capacity of the first remaining space pointed to by the vernier and the length of the data block;
if the capacity of the first remaining space pointed to by the vernier is greater than or equal to the length of the data block, storing the constituent parts of the data block into the first remaining space in sequence, starting from the address space pointed to by the vernier; and
after the data block has been stored into the first remaining space, moving the vernier to the tail of the data block, wherein the moved vernier points to the first remaining space of the cache pool after the data block has been stored.
2. The method according to claim 1, characterized in that the method further comprises:
if the capacity of the first remaining space pointed to by the vernier is less than the length of the data block, moving the vernier to a first data block stored in the cache pool and eliminating the first data block from the cache pool, wherein the address space vacated in the cache pool after the first data block has been eliminated is a second remaining space pointed to by the vernier;
storing the constituent parts of the data block into the second remaining space in sequence, starting from the address space pointed to by the vernier; and
after the data block has been stored into the second remaining space, moving the vernier to the tail of the data block, wherein the moved vernier points to the second remaining space of the cache pool after the data block has been stored.
3. The method according to claim 2, characterized in that, after the first data block has been eliminated from the cache pool, the method further comprises:
judging the size relationship between the capacity of the second remaining space pointed to by the vernier and the length of the data block;
if the capacity of the second remaining space pointed to by the vernier is greater than or equal to the length of the data block, triggering the step of storing the constituent parts of the data block into the second remaining space in sequence from the address space pointed to by the vernier; and
if the capacity of the second remaining space pointed to by the vernier is less than the length of the data block, eliminating a second data block stored in the cache pool from the cache pool, merging the address space vacated in the cache pool after the second data block has been eliminated into the second remaining space, and triggering the step of storing the constituent parts of the data block into the second remaining space in sequence from the address space pointed to by the vernier, wherein the second data block is the data block that is located after the vernier in the cache pool and adjacent to the first data block.
4. The method according to claim 2, characterized in that, after the data block with the longest storage time has been eliminated from the cache pool, the method further comprises:
judging the size relationship between the capacity of the second remaining space pointed to by the vernier and the length of the data block;
if the capacity of the second remaining space pointed to by the vernier is greater than or equal to the length of the data block, triggering the step of storing the constituent parts of the data block into the second remaining space in sequence from the address space pointed to by the vernier; and
if the capacity of the second remaining space pointed to by the vernier is less than the length of the data block, and no stored data block exists from the address space pointed to by the vernier up to the tail of the cache pool, merging the first remaining space into the second remaining space, and triggering the step of storing the constituent parts of the data block into the second remaining space in sequence from the address space pointed to by the vernier.
5. The method according to any one of claims 1 to 4, characterized in that the data block comprises: a key, a previous-block address, a next-block address, a data length and data content.
6. The method according to claim 5, characterized in that obtaining the capacity of the first remaining space pointed to by the vernier in the cache pool comprises:
querying a Hash table according to the key of the data block and finding a fourth data block in the cache pool, wherein the fourth data block is the data block whose key value is identical to that of the data block, and the space in the cache pool from the tail of the fourth data block up to the tail of the cache pool is the first remaining space.
7. The method according to claim 6, characterized in that the fourth data block comprises: a key, a previous-block address, a next-block address, a data length and data content;
and that, after the constituent parts of the data block have been stored into the first remaining space in sequence from the address space pointed to by the vernier, the method further comprises:
setting the previous-block address of the data block to a preset symbol;
setting the next-block address of the data block to the storage address of the fourth data block; and
setting the previous-block address of the fourth data block to the storage address of the data block.
8. The method according to claim 6, characterized in that, after the constituent parts of the data block have been stored into the first remaining space in sequence from the address space pointed to by the vernier, the method further comprises:
modifying the Hash table so that the data block can be found when the modified Hash table is queried according to the key.
9. The method according to claim 5, characterized in that a second data block stored in the cache pool comprises: a key, a previous-block address, a next-block address, a data length and data content, the second data block being the data block that is located after the vernier in the cache pool and adjacent to the first data block;
and that, after the first data block has been eliminated from the cache pool, the method further comprises:
setting the previous-block address of the second data block to the preset symbol.
10. A data caching device, characterized by comprising:
a first remaining space acquisition module, configured to, when a data block is to be stored into a cache pool, obtain the capacity of a first remaining space pointed to by a vernier in the cache pool;
a judging module, configured to judge the size relationship between the capacity of the first remaining space pointed to by the vernier and the length of the data block;
a data block storage module, configured to, when the capacity of the first remaining space pointed to by the vernier is greater than or equal to the length of the data block, store the constituent parts of the data block into the first remaining space in sequence, starting from the address space pointed to by the vernier; and
a vernier moving module, configured to, after the data block has been stored into the first remaining space, move the vernier to the tail of the data block, wherein the moved vernier points to the first remaining space of the cache pool after the data block has been stored.
11. The device according to claim 10, characterized in that the data caching device further comprises a data block elimination module, wherein:
the vernier moving module is further configured to, when the capacity of the first remaining space pointed to by the vernier is less than the length of the data block, move the vernier to a first data block stored in the cache pool;
the data block elimination module is configured to, when the capacity of the first remaining space pointed to by the vernier is less than the length of the data block, eliminate the first data block from the cache pool, wherein the address space vacated in the cache pool after the first data block has been eliminated is a second remaining space pointed to by the vernier;
the data block storage module is further configured to store the constituent parts of the data block into the second remaining space in sequence, starting from the address space pointed to by the vernier; and
the vernier moving module is further configured to, after the data block has been stored into the second remaining space, move the vernier to the tail of the data block, wherein the moved vernier points to the second remaining space of the cache pool after the data block has been stored.
12. The device according to claim 11, characterized in that the data caching device further comprises a space merging module, wherein:
the judging module is further configured to, after the data block elimination module has eliminated the first data block from the cache pool, judge the size relationship between the capacity of the second remaining space pointed to by the vernier and the length of the data block, and, if the capacity of the second remaining space pointed to by the vernier is greater than or equal to the length of the data block, trigger the data block storage module;
the data block elimination module is further configured to, when the capacity of the second remaining space pointed to by the vernier is less than the length of the data block, eliminate a second data block stored in the cache pool from the cache pool; and
the space merging module is configured to, when the capacity of the second remaining space pointed to by the vernier is less than the length of the data block, merge the address space vacated in the cache pool after the second data block has been eliminated into the second remaining space and trigger the data block storage module, wherein the second data block is the data block that is located after the vernier in the cache pool and adjacent to the first data block.
13. The device according to claim 11, characterized in that the data caching device further comprises a space merging module, wherein:
the judging module is further configured to, after the data block elimination module has eliminated the data block with the longest storage time from the cache pool, judge the size relationship between the capacity of the second remaining space pointed to by the vernier and the length of the data block, and, if the capacity of the second remaining space pointed to by the vernier is greater than or equal to the length of the data block, trigger the data block storage module; and
the space merging module is configured to, when the capacity of the second remaining space pointed to by the vernier is less than the length of the data block and no stored data block exists from the address space pointed to by the vernier up to the tail of the cache pool, merge the first remaining space into the second remaining space and trigger the data block storage module.
14. The device according to any one of claims 9 to 13, characterized in that the data block comprises: a key, a previous-block address, a next-block address, a data length and data content.
15. The device according to claim 14, characterized in that the first remaining space acquisition module is specifically configured to query a Hash table according to the key of the data block and find a fourth data block in the cache pool, wherein the fourth data block is the data block whose key value is identical to that of the data block, and the space in the cache pool from the tail of the fourth data block up to the tail of the cache pool is the first remaining space.
16. The device according to claim 15, characterized in that the fourth data block comprises: a key, a previous-block address, a next-block address, a data length and data content; and
the data caching device further comprises a data block modification module, configured to, after the data block storage module has stored the constituent parts of the data block into the first remaining space in sequence from the address space pointed to by the vernier, set the previous-block address of the data block to a preset symbol, set the next-block address of the data block to the storage address of the fourth data block, and set the previous-block address of the fourth data block to the storage address of the data block.
17. The device according to claim 15, characterized in that the data caching device further comprises a Hash table modification module, configured to, after the data block storage module has stored the constituent parts of the data block into the first remaining space in sequence from the address space pointed to by the vernier, modify the Hash table so that the data block can be found when the modified Hash table is queried according to the key.
18. The device according to claim 14, characterized in that a second data block stored in the cache pool comprises: a key, a previous-block address, a next-block address, a data length and data content, the second data block being the data block that is located after the vernier in the cache pool and adjacent to the first data block; and
the data caching device further comprises a data block modification module, configured to, after the first data block has been eliminated from the cache pool, set the previous-block address of the second data block to the preset symbol.
CN201410055379.6A 2014-02-18 2014-02-18 A kind of data cache method and data buffer storage Active CN104850507B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410055379.6A CN104850507B (en) 2014-02-18 2014-02-18 A kind of data cache method and data buffer storage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410055379.6A CN104850507B (en) 2014-02-18 2014-02-18 A kind of data cache method and data buffer storage

Publications (2)

Publication Number Publication Date
CN104850507A true CN104850507A (en) 2015-08-19
CN104850507B CN104850507B (en) 2019-03-15

Family

ID=53850160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410055379.6A Active CN104850507B (en) 2014-02-18 2014-02-18 A kind of data cache method and data buffer storage

Country Status (1)

Country Link
CN (1) CN104850507B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101122885A (en) * 2007-09-11 2008-02-13 腾讯科技(深圳)有限公司 Data cache processing method, system and data cache device
CN101739301A (en) * 2009-12-09 2010-06-16 南京联创科技集团股份有限公司 Method of interprocess mass data transmission under Unix environment
CN102169460A (en) * 2010-02-26 2011-08-31 航天信息股份有限公司 Method and device for managing variable length data
CN102325360A (en) * 2011-07-13 2012-01-18 中国联合网络通信集团有限公司 Data frame processing method and wireless access point
CN103425435A (en) * 2012-05-15 2013-12-04 深圳市腾讯计算机系统有限公司 Disk storage method and disk storage system
CN103488717A (en) * 2013-09-11 2014-01-01 北京华胜天成科技股份有限公司 Lock-free data gathering method and lock-free data gathering device

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2016277745B2 (en) * 2016-03-14 2021-02-11 Shenzhen Skyworth-Rgb Electronic Co., Ltd. Linked-list-based method and device for application caching management
WO2017156683A1 (en) * 2016-03-14 2017-09-21 深圳创维-Rgb电子有限公司 Linked list-based application cache management method and device
US10241927B2 (en) * 2016-03-14 2019-03-26 Shenzhen Skyworth-Rgb Electronic Co., Ltd. Linked-list-based method and device for application caching management
CN105847190A (en) * 2016-03-17 2016-08-10 青岛海信电器股份有限公司 Data transmission method and processor
CN105847190B (en) * 2016-03-17 2019-09-20 青岛海信电器股份有限公司 A kind of data transmission method and processor
CN106406756A (en) * 2016-09-05 2017-02-15 华为技术有限公司 Space allocation method of file system, and apparatuses
CN106406756B (en) * 2016-09-05 2019-07-09 华为技术有限公司 A kind of space allocation method and device of file system
CN109688085A (en) * 2017-10-19 2019-04-26 中兴通讯股份有限公司 Transmission control protocol proxy method, storage medium and server
CN107766526A (en) * 2017-10-26 2018-03-06 中国人民银行清算总中心 Data bank access method, apparatus and system
CN107766526B (en) * 2017-10-26 2020-04-28 中国人民银行清算总中心 Database access method, device and system
CN108132759A (en) * 2018-01-15 2018-06-08 网宿科技股份有限公司 A kind of method and apparatus that data are managed in file system
CN108132759B (en) * 2018-01-15 2021-04-16 网宿科技股份有限公司 Method and device for managing data in file system
CN108763109A (en) * 2018-06-13 2018-11-06 成都心吉康科技有限公司 Date storage method, device and its application
CN108763109B (en) * 2018-06-13 2022-04-26 成都心吉康科技有限公司 Data storage method and device and application thereof
CN111159064A (en) * 2019-12-30 2020-05-15 南京六九零二科技有限公司 Low-complexity data block caching method
CN111159064B (en) * 2019-12-30 2023-09-01 南京六九零二科技有限公司 Low-complexity data block caching method
CN111737295A (en) * 2020-06-11 2020-10-02 上海达梦数据库有限公司 Database cursor query method, device, equipment and storage medium
CN111737295B (en) * 2020-06-11 2023-02-03 上海达梦数据库有限公司 Database cursor query method, device, equipment and storage medium
WO2022206474A1 (en) * 2021-03-30 2022-10-06 北京字节跳动网络技术有限公司 Data acquisition method and apparatus, electronic device, and computer-readable storage medium
CN114490459A (en) * 2022-01-27 2022-05-13 重庆物奇微电子有限公司 Data transmission method, device, equipment, receiver and storage medium
CN114845132A (en) * 2022-04-29 2022-08-02 抖动科技(深圳)有限公司 Low-delay live broadcast caching method, device, equipment and medium based on Hash algorithm
CN114845132B (en) * 2022-04-29 2023-05-12 厦门理工学院 Low-delay live broadcast caching method, device, equipment and medium based on hash algorithm

Also Published As

Publication number Publication date
CN104850507B (en) 2019-03-15

Similar Documents

Publication Publication Date Title
CN104850507A (en) Data caching method and data caching device
CN103530040B (en) Object element moving method, device and electronic equipment
CN103327102A (en) Application program recommending method and device
CN103530115B (en) Application program display method and device and terminal equipment
CN104967679A (en) Information recommendation system, method and device
CN103543913A (en) Terminal device operation method and device, and terminal device
CN104636047A (en) Method and device for operating objects in list and touch screen terminal
CN104618794A (en) Method and device for playing video
CN104516887A (en) Webpage data search method, device and system
CN104238918A (en) List view assembly sliding display method and device
CN103455330A (en) Application program management method, terminal, equipment and system
CN104571787A (en) Message display method and communication terminal
CN104301315A (en) Method and device for limiting information access
CN104281394A (en) Method and device for intelligently selecting words
CN104239343A (en) User input information processing method and device
CN103368828B (en) A kind of message temporary storage and system
CN104954159A (en) Network information statistics method and device
CN104519262A (en) Method, device for acquiring video data, and terminal
CN103945241A (en) Streaming data statistical method, system and related device
CN104898936A (en) Page turning method and mobile device
CN103678502A (en) Information collection method and device
CN104424278A (en) Method and device for acquiring hotspot information
CN103327029B (en) A kind of detection method of malice network address and equipment
CN105512150A (en) Method and device for information search
CN104951637A (en) Method and device for obtaining training parameters

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20190730

Address after: Shenzhen Futian District City, Guangdong province 518044 Zhenxing Road, SEG Science Park 2 East Room 403

Co-patentee after: Tencent cloud computing (Beijing) limited liability company

Patentee after: Tencent Technology (Shenzhen) Co., Ltd.

Address before: Shenzhen Futian District City, Guangdong province 518000 Zhenxing Road, SEG Science Park 2 East Room 403

Patentee before: Tencent Technology (Shenzhen) Co., Ltd.