CN103077125A - Self-adaption self-organizing tower type caching method for efficiently utilizing storage space - Google Patents

Self-adaption self-organizing tower type caching method for efficiently utilizing storage space Download PDF

Info

Publication number
CN103077125A
CN103077125A
Authority
CN
China
Prior art keywords
data
caching
container
size
self
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012105400571A
Other languages
Chinese (zh)
Other versions
CN103077125B (en
Inventor
郭俸明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ruian Technology Co Ltd
Original Assignee
Beijing Ruian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Ruian Technology Co Ltd filed Critical Beijing Ruian Technology Co Ltd
Priority to CN201210540057.1A priority Critical patent/CN103077125B/en
Publication of CN103077125A publication Critical patent/CN103077125A/en
Application granted granted Critical
Publication of CN103077125B publication Critical patent/CN103077125B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Abstract

The invention relates to a self-adaptive, self-organizing tower-type caching method that uses storage space efficiently, comprising the following steps: establishing, on a caching server, multi-level cache containers that use data size as a threshold; chaining the cache containers of all levels into an array to form a tower-type cache structure; receiving data from a data center at the caching server; automatically selecting a suitable cache container for storage according to the size of the received data; and, while keeping the total volume of the storage space unchanged, dynamically adjusting the capacity values of the cache containers of each level according to their hit rates over a specified period, so that containers with higher hit rates receive larger capacities. The method automatically selects the appropriate cache according to data size and maximizes the cache hit rate by automatically adjusting cache capacities; it is both self-adaptive and self-organizing.

Description

A self-adaptive, self-organizing tower-type caching method that uses storage space efficiently
Technical field
The invention belongs to the field of computer storage technology and relates to a caching method, specifically a self-adaptive, self-organizing tower-type caching method that allocates and uses storage space efficiently.
Background technology
In today's common heterogeneous systems, caching is a widely adopted technique for bridging the speed gap between client and server. As shown in Figure 1, the principle is as follows: when data fetched from the data center is delivered to a client, it is also cached in the server's memory (or in a dedicated caching service); when a later request matching the same features arrives, the data is returned to the client directly from the cache, avoiding a repeated retrieval and allowing user requests to be answered at high speed. A single cache, however, cannot use memory space well. Take a cache of data lists as an example: suppose the cache has 100 MB of space and holds 100 lists. If the lists average more than 1 MB each, the space will clearly be insufficient; if they average less than 1 MB, the space is under-used. The usual workaround is not to limit the number of cached items but instead to cap the cached data by its total size, which complicates cache management.
In the storage field, multi-level caching is also a mature technique. It is usually built on the speed differences of different physical media: the most frequently used data or instructions are kept in the fastest cache, improving the response speed of the whole system. Another approach builds different caches for different kinds of data to simplify management, as in the invention patent "Self-adaptive multi-level cache system for three-dimensional spatial data based on data content" (patent No. 200910063371.3). But because data items differ in size, and their sizes are hard to predict, existing multi-level caching methods cannot make full use of the storage space.
Summary of the invention
The object of the invention is to address the above problems by proposing a self-adaptive, self-organizing tower-type caching method that uses storage space efficiently. The method automatically selects a suitable cache according to the size of each data item and maximizes the cache hit rate by automatically adjusting cache capacities; it is both self-adaptive and self-organizing.
The invention adopts multi-level caching, where the levels are divided by thresholds on the size of the data to be stored: each level can only hold data whose size falls within its range; the number of levels (the number of caches) is unlimited; and no distinction is made by data content. In this way a limited space can cache more data, maximizing space utilization.
Specifically, the technical solution adopted by the invention is as follows:
A self-adaptive, self-organizing tower-type caching method, whose steps comprise:
1) establishing, on a caching server, multi-level cache containers that use data size as a threshold, and chaining the cache containers of all levels into an array to form a tower-type cache structure;
2) the caching server receiving data from a data center and automatically selecting a suitable cache container for storage according to the size of the received data;
3) while keeping the total volume of the storage space unchanged, dynamically adjusting the capacity values of the cache containers of each level according to their hit rates over a specified period, so that containers with higher hit rates have larger capacities.
Further, the invention can set access priorities for the cache containers of each level according to their hit rates: the higher the hit rate, the higher the priority. When cached data is accessed, the containers are then traversed in priority order, which, probabilistically, shortens cache lookup time and maximizes query efficiency. This refinement is worth considering when the total cache volume is very large and cache efficiency becomes a bottleneck.
The cache structure described above can be regarded as a one-dimensional structure whose threshold is data size; it provides a unified external data-access interface and hides the storage implementation details. Further, on the basis of the multi-level cache containers thresholded by data size, the invention can build an N-dimensional cache structure, with N ≥ 2, that likewise hides storage details and provides a unified access interface. For example, a two-dimensional structure can be built by using the query time needed to obtain the data from the data center as a second threshold: data with longer query times is kept in the cache longer. By the same principle, three-, four- or even higher-dimensional structures can be built.
"Tower-type" means that the capacities of the cache levels are distributed in proportion to their hit rates: the higher the hit rate, the more data items the cache holds. "Self-adaptive" has three aspects: first, when caching data, the suitable cache is selected automatically by the data's size, ensuring that each new item enters exactly one cache; second, when a cache reaches its capacity limit, the least recently used data is automatically evicted; third, when data is fetched from the cache, its location (which level it is stored in) is determined automatically, a unified access interface is provided, and the details are hidden from the user. "Self-organizing" means that the capacities of the cache levels can be adjusted dynamically: based on hit rates collected in real time, and under certain trigger conditions, the per-level capacities are tuned to an optimal state while the total cache space stays constant, so that the cache as a whole achieves the maximum hit rate.
The self-adaptive, self-organizing tower-type caching method of the invention works under the premise of a homogeneous storage medium and homogeneous data. It automatically selects the suitable cache by data size, maximizes the cache hit rate by automatically adjusting cache capacities, is self-adaptive and self-organizing, provides a unified external access interface, and achieves efficient use of storage space.
Description of drawings
Fig. 1 is a schematic diagram of data access based on caching;
Fig. 2 is a schematic diagram of the tower-type cache structure in the embodiment;
Fig. 3 is the flowchart of adding data obtained from the data center to the cache in the embodiment;
Fig. 4 is the flowchart of fetching data from the cache in the embodiment;
Fig. 5 is the flowchart of adjusting the per-level cache capacities in the embodiment.
Embodiment
The invention is further described below through a specific embodiment, with reference to the accompanying drawings.
Fig. 2 is a schematic diagram of the tower-type cache of the invention: data items of different sizes are stored, according to their size, in the corresponding caches; a unified external access interface is provided; and the capacity of each cache level can adjust automatically. The tower-type cache structure consists of a series of interrelated cache containers; the sizes of the data each container can hold, and the numbers of items, form a tower shape. The storage policy of the cache containers of each level can be set in a configuration file and includes:
a) Cache container (Container): the container that stores the data;
b) Storage level (Level): the number of cache containers in the structure;
c) Container threshold (Limit): the upper and lower bounds on the size of data each level can store:
Threshold upper bound (LimitUp): the upper bound on the data size a level can store, which is also the lower bound of the next level; threshold lower bound (LimitDown): the lower bound on the data size a level can store, which is also the upper bound of the previous level;
d) Container capacity (Size): the number of data items each level can store.
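The storage policy above can be made concrete with a minimal sketch. This is not the patent's code; the patent only says the policy lives in "a configuration file" without fixing a format, so a plain Python dict with hypothetical byte values stands in for it here:

```python
# Hypothetical storage policy for a 3-level tower; the field names Level,
# LimitUp and Size follow the text, the byte values are illustrative only.
policy = {
    "Level": 3,                          # number of cache containers
    "LimitUp": [1024, 65536, 1048576],   # per-level upper bound on item size (bytes)
    "Size": [1000, 100, 10],             # per-level capacity (number of items)
}

# LimitDown is implied by the text: level i's lower bound is level i-1's
# upper bound, and the first level's lower bound is 0.
limit_down = [0] + policy["LimitUp"][:-1]
```

Note how the per-level bounds tile the size axis without gaps, which is what guarantees each new item fits exactly one container.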
The cache structure above is a self-adaptive storage structure. After the caching server receives new data from the data center, it automatically selects a suitable cache container according to the data's size. Service is provided externally as follows: given a query feature value (the query condition), the cache levels are searched; on a hit, the data is returned to the user and the feature value is moved to the tail of the queue; if no level contains it, the data is fetched from the data center, returned to the caller, and simultaneously stored in the cache container of the matching size for subsequent queries.
The cache structure above supports precise capacity planning against the size of the available storage space (Capacity), neither wasting space nor risking problems such as memory overflow. Its storage-space formula is:
Capacity = Σ_{i=1}^{Level} LimitUp_i × Size_i    (1)
where Capacity is the size of the total storage space; LimitUp_i is the upper bound on the data size that the i-th level cache container can store; Size_i is the number of data items the i-th level container can store; and Level is the number of cache containers. The summation is the sum of the maximum storage of all cache levels.
The cache structure above provides a unified external data-access interface, getData(key), and hides the storage implementation details: from the user's point of view there is no difference between data obtained from the data center and data obtained from the cache, guaranteeing transparency of the calling convention.
The cache structure above is also a self-organizing storage structure: to keep the hit rate as high as possible, during operation the new capacity values SizeNew_i of the cache levels are adjusted dynamically according to each cache container's hit rate Hitrate_i over a specified period. The formulas are:
Hitrate_i = Hitcount_i / Size_i    (2)
Right = Capacity / Σ_{i=1}^{Level} (Hitrate_i × LimitUp_i × Size_i)    (3)
SizeNew_i = Hitrate_i × Right × Size_i    (4)
where Hitrate_i is the hit rate of the i-th level cache container; Hitcount_i is its hit count; Size_i is the number of data items it can store; Right denotes the weight (or adjustment coefficient); Capacity is the size of the total storage space; LimitUp_i is the upper bound on the data size the i-th level container can store; and SizeNew_i is the number of data items the i-th level container can store after adjustment.
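Formulas (2)–(4) can be sketched together; the example inputs below are invented for illustration and are not from the patent. Note that substituting (4) back into (1) gives Σ SizeNew_i × LimitUp_i = Capacity, i.e. the weight Right is exactly what keeps total space constant:

```python
def adjust_sizes(hitcount, limit_up, size):
    hitrate = [h / s for h, s in zip(hitcount, size)]          # formula (2)
    capacity = sum(u * s for u, s in zip(limit_up, size))      # formula (1)
    weighted = sum(r * u * s
                   for r, u, s in zip(hitrate, limit_up, size))
    right = capacity / weighted                                # formula (3)
    return [r * right * s for r, s in zip(hitrate, size)]      # formula (4)

# Two levels with equal item-size bounds and equal capacity 100; the level
# with 80 hits grows at the expense of the level with 20 hits.
new_sizes = adjust_sizes(hitcount=[80, 20], limit_up=[1, 1], size=[100, 100])
```

Here the busier level ends up near 160 items and the quieter one near 40, while 160 + 40 still equals the original 200 slots.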
After a period of operation, with the total volume of the storage space kept constant, the containers with higher hit rates end up with larger capacities, which guarantees that the cache as a whole always has the maximum hit rate.
The tower-type cache structure has four operations, described in detail below:
(1) Structure initialization
Initialization parameters are read from the configuration file: the number of container levels in the structure (Level); the upper bound on the size of data each level can store (for a stored List, the maximum record count, LimitUp); and the number of data items each level can store (for stored Lists, their number, Size).
Each cache container contains a data map (Map) and a queue (Queue) of data feature values, plus two attributes: the data upper bound and the capacity (the number of items the container can store, as described above). The upper bound on the data size each level can store is also the lower bound of the next level. The map stores the data as key-value pairs: the "key" is the feature value of the data and the "value" is the data itself. The feature-value queue manages the cached data using the LRU (Least Recently Used) algorithm.
The containers of all levels are chained into an array, which constitutes the tower-type cache structure.
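The initialization above can be sketched as follows. This is a hypothetical reading of operation (1), not the patent's code: the configuration is a plain dict, and Python's OrderedDict plays both roles of the Map and the feature-value Queue at once, with insertion order serving as the LRU order:

```python
from collections import OrderedDict

def init_tower(config):
    """Chain per-level containers into one array (the tower structure)."""
    tower, lower = [], 0
    for up, cap in zip(config["LimitUp"], config["Size"]):
        tower.append({
            "limit_down": lower,     # previous level's upper bound
            "limit_up": up,          # bound on the size of items stored here
            "size": cap,             # capacity: max number of items
            "hitcount": 0,           # hits, used later by the adjustment
            "data": OrderedDict(),   # map + feature-value queue in one
        })
        lower = up                   # this level's bound = next level's lower bound
    return tower

tower = init_tower({"Level": 3,
                    "LimitUp": [1024, 65536, 1048576],
                    "Size": [1000, 100, 10]})
```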
(2) Adding data to the cache
Fig. 3 is the flowchart of adding data to the cache; the detailed steps are:
A. Assemble the data obtained from the data center (e.g. a value list queried from the database, Value) and its feature (e.g. the query condition, Key) into a structure Data; initialize the iteration variable i (denoting the i-th level cache container) to 0;
B. Check whether the size of the obtained data (Data.size) is less than the upper bound of the current container (LimitUp); if not, compare with the next container, until the condition holds; then go to the next step;
C. Check whether the number of items the current container holds (Count) is less than the container's capacity (Size); if not, go to step D; if so, go to step E;
D. Remove the head of the queue, delete from the map the data value keyed by the head's feature value, decrement the current container's item count, and return to step C. The operations are:
Pop the head of the current container's queue into Key: Key = Cache[i].queue.pop();
Remove the data keyed by Key from the current container's map: Cache[i].map.remove(Key);
Decrement the current container's item count: Cache[i].count--;
E. Add the data to the map, append its feature value to the tail of the queue, and increment the container's item count. The operations are:
Add the queried data and feature value to the current container's map: Cache[i].map.add(Data.key, Data.value);
Append the query feature value to the tail of the current container's queue: Cache[i].queue.add(Data.key);
Increment the current container's item count: Cache[i].count++.
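Steps A–E above can be sketched in a few lines. This is a hedged illustration, not the patent's code; the tower is a list of plain dicts, and an OrderedDict stands in for the Map plus LRU Queue pair:

```python
from collections import OrderedDict

def make_tower(limit_up, size):
    return [{"limit_up": u, "size": s, "data": OrderedDict()}
            for u, s in zip(limit_up, size)]

def add_to_cache(tower, key, value, data_size):
    # steps A/B: find the first level whose upper bound exceeds the data size
    for level in tower:
        if data_size < level["limit_up"]:
            # steps C/D: if the level is full, evict the LRU head first
            if len(level["data"]) >= level["size"]:
                level["data"].popitem(last=False)
            # step E: insert the pair; the newest entry sits at the tail
            level["data"][key] = value
            return True
    return False  # larger than every level's bound: not cached

tower = make_tower(limit_up=[10, 100], size=[2, 2])
add_to_cache(tower, "a", "x" * 5, 5)
add_to_cache(tower, "b", "y" * 5, 5)
add_to_cache(tower, "c", "z" * 5, 5)   # level 0 is full: "a" is evicted
```

Since each new item enters exactly one level, the "one item, one cache" guarantee of the self-adaptation property falls out of the first-fit loop.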
(3) Searching for and fetching data from the cache
Fig. 4 is the flowchart of searching for and fetching data from the cache; the detailed steps are:
A. Receive a query request from the front end; set the iteration variable to 0;
B. Check whether the current cache container's queue contains the query feature value (Key); if not, increment the iteration variable and check the next level, until the condition holds or all levels have been traversed; if so, go to step C;
C. Increment the current cache's hit count (Hitcount); the purpose of this operation is to record hit counts as input for the capacity adjustment. Then fetch the data value from the cache's map. The operations are:
Increment the current cache's hit count (Hitcount): Cache[i].hitcount++;
Fetch the data value from the current cache's map: Data.value = Cache[i].map.get(key);
Assign the query feature value to the data object's Key attribute: Data.key = key;
D. Move the query feature value to the tail of the cache queue; the purpose of this operation is to keep the most recently hit data maximally active (the LRU principle: what was accessed least recently is evicted first), so that the most frequently accessed data stays resident in the cache longer and the least accessed data is evicted in good time. The operations are:
Remove the query feature value from the current container's queue: Cache[i].queue.remove(key);
Append the query feature value to the tail of the queue: Cache[i].queue.add(key);
E. If the data was found in the cache, return it to the caller; otherwise query the data center to obtain it.
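The lookup steps above can be sketched as follows. This is an assumed illustration rather than the patent's code; OrderedDict.move_to_end performs the "remove from queue, append at tail" pair of step D in one call:

```python
from collections import OrderedDict

def get_from_cache(tower, key):
    # steps A/B: traverse the levels looking for the feature value
    for level in tower:
        if key in level["data"]:
            level["hitcount"] += 1          # step C: record the hit
            level["data"].move_to_end(key)  # step D: move to the queue tail (LRU)
            return level["data"][key]       # step E: return to the caller
    return None  # miss at every level: the caller fetches from the data center

tower = [{"hitcount": 0, "data": OrderedDict([("a", 1), ("b", 2)])}]
value = get_from_cache(tower, "a")
```

After the call, "a" sits at the tail of the queue, so "b" has become the eviction candidate even though it was inserted later.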
(4) Adjusting container capacities
Fig. 5 is the flowchart of adjusting container capacities; the detailed steps are:
A. Initialize the total cache capacity (Capacity) and the temporary weighted total (CapacityTemp) to 0; if the total capacity is specified in the configuration file, this value need not be computed; initialize the iteration variable to 0. The operations are:
Initialize the iteration variable to 0: i = 0;
Initialize the total capacity of the whole cache to 0: Capacity = 0;
Initialize the temporary weighted total to 0: CapacityTemp = 0;
B. Iterate over the cache containers: compute each cache's hit rate; accumulate the total cache capacity; and accumulate the hit-rate-weighted capacity. The operations are:
Hit rate (Hitrate) = hit count (Hitcount) / cache capacity (Size), i.e.:
Hitrate[i] = Cache[i].Hitcount / Cache[i].size;
Total capacity = sum of each level's capacity, i.e.:
Capacity = Capacity + Cache[i].LimitUp * Cache[i].size;
Weighted total (CapacityTemp) = sum of each level's capacity times its hit rate, i.e.:
CapacityTemp = CapacityTemp + Hitrate[i] * Cache[i].LimitUp * Cache[i].size;
C. After the iteration completes, compute the weight; the purpose of the weight is to keep the total cache capacity constant across the adjustment. The operations are:
Weight (Right) = total capacity (Capacity) / weighted total (CapacityTemp), i.e.:
Right = Capacity / CapacityTemp;
Reset the iteration variable to 0: i = 0;
D. Iterate again and adjust each level's capacity value. The operation is:
Adjusted capacity (Size) = hit rate × former capacity × weight, i.e.:
Cache[i].size = Hitrate[i] * Cache[i].size * Right.
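Steps A–D above can be sketched as one function mirroring the Fig. 5 flow. This is a hedged illustration with invented example numbers, not the patent's code:

```python
def adjust_capacities(cache):
    # step A: zero the accumulators
    capacity = capacity_temp = 0.0
    hitrate = []
    # step B: per-level hit rate, total capacity, and weighted capacity
    for c in cache:
        hitrate.append(c["hitcount"] / c["size"])
        capacity += c["limit_up"] * c["size"]
        capacity_temp += hitrate[-1] * c["limit_up"] * c["size"]
    # step C: the weight keeps total space constant across the adjustment
    right = capacity / capacity_temp
    # step D: rescale each level's capacity by hit rate times weight
    for i, c in enumerate(cache):
        c["size"] = hitrate[i] * c["size"] * right
    return cache

cache = [{"limit_up": 1, "size": 100, "hitcount": 80},
         {"limit_up": 1, "size": 100, "hitcount": 20}]
adjust_capacities(cache)
```

The hotter level's capacity rises toward 160 and the colder one's falls toward 40, while the sum of limit_up × size stays at the original 200.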
After these operations, each container's capacity is proportional to its hit rate, so that the cache as a whole has the maximum hit rate.
In the scheme above, whether the capacity-adjustment function is enabled, and how it is triggered, can be configured in the configuration file, so that the capacities adjust themselves according to a predefined policy, realizing self-organization.
The embodiment above only illustrates, rather than limits, the technical solution of the invention; a person of ordinary skill in the art may modify it or substitute equivalents without departing from the spirit and scope of the invention, whose protection scope shall be defined by the claims.

Claims (10)

1. A self-adaptive, self-organizing tower-type caching method, whose steps comprise:
1) establishing, on a caching server, multi-level cache containers that use data size as a threshold, and chaining the cache containers of all levels into an array to form a tower-type cache structure;
2) the caching server receiving data from a data center and automatically selecting a suitable cache container for storage according to the size of the received data;
3) while keeping the total volume of the storage space unchanged, dynamically adjusting the capacity values of the cache containers of each level according to their hit rates over a specified period, so that containers with higher hit rates have larger capacities.
2. The method of claim 1, characterized in that: access priorities of the cache containers of each level are set according to their cache hit rates, a higher hit rate meaning a higher priority.
3. The method of claim 1, characterized in that: the multi-level cache containers provide a unified external data-access interface and hide the storage implementation details.
4. The method of claim 1, characterized in that: each level's cache container contains a data map for storing the data as key-value pairs, and a queue of data feature values.
5. The method of claim 4, characterized in that: the feature-value queue manages the cached data using the LRU algorithm.
6. The method of claim 1, characterized in that the formulas for the dynamic adjustment of the capacity values of the cache containers in step 3) are:
Hitrate_i = Hitcount_i / Size_i;
Right = Capacity / Σ_{i=1}^{Level} (Hitrate_i × LimitUp_i × Size_i);
SizeNew_i = Hitrate_i × Right × Size_i;
wherein Hitrate_i is the hit rate of the i-th level cache container; Hitcount_i is its hit count; Size_i is the number of data items it can store; Right denotes the weight; Capacity is the size of the total storage space; LimitUp_i is the upper bound on the size of data the i-th level container can store; and SizeNew_i is the number of data items the i-th level container can store after adjustment.
7. The method of claim 1, characterized in that: an N-dimensional cache structure, N ≥ 2, is built on the basis of the multi-level cache containers that use data size as a threshold.
8. The method of claim 7, characterized in that: on the basis of the multi-level cache containers that use data size as a threshold, a two-dimensional cache structure is built by using the query time needed to obtain the data from the data center as another threshold, data with longer query times being kept in the cache longer.
9. The method of claim 1, characterized in that: the storage level, container thresholds and container capacities of the cache containers of each level are set in a configuration file.
10. The method of claim 1, characterized in that: whether the capacity-adjustment function is enabled, and how it is triggered, are configured in the configuration file, so as to realize self-organizing adjustment of the capacities.
CN201210540057.1A 2012-12-13 2012-12-13 A kind of tower caching method of self-adaptation self-organization of efficiency utilization storage space Active CN103077125B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210540057.1A CN103077125B (en) 2012-12-13 2012-12-13 A kind of tower caching method of self-adaptation self-organization of efficiency utilization storage space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210540057.1A CN103077125B (en) 2012-12-13 2012-12-13 A kind of tower caching method of self-adaptation self-organization of efficiency utilization storage space

Publications (2)

Publication Number Publication Date
CN103077125A true CN103077125A (en) 2013-05-01
CN103077125B CN103077125B (en) 2015-09-16

Family

ID=48153657

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210540057.1A Active CN103077125B (en) 2012-12-13 2012-12-13 A kind of tower caching method of self-adaptation self-organization of efficiency utilization storage space

Country Status (1)

Country Link
CN (1) CN103077125B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103778071A (en) * 2014-01-20 2014-05-07 华为技术有限公司 Cache space distribution method and device
CN104424119A (en) * 2013-08-26 2015-03-18 联想(北京)有限公司 Storage space configuration method and device
CN107305531A (en) * 2016-04-20 2017-10-31 广州市动景计算机科技有限公司 Buffer memory capacity limit value determines method and apparatus and computing device
CN107408071A (en) * 2015-08-21 2017-11-28 华为技术有限公司 A kind of memory pool access method, device and system
CN107977165A (en) * 2017-11-22 2018-05-01 用友金融信息技术股份有限公司 Data buffer storage optimization method, device and computer equipment
CN110968562A (en) * 2019-11-28 2020-04-07 国网上海市电力公司 Buffer self-adaptive adjustment method and device based on ZFS file system
CN112395322A (en) * 2020-12-07 2021-02-23 湖南新云网科技有限公司 List data display method and device based on hierarchical cache and terminal equipment
CN112988619A (en) * 2021-02-08 2021-06-18 北京金山云网络技术有限公司 Data reading method and device and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5394531A (en) * 1989-04-03 1995-02-28 International Business Machines Corporation Dynamic storage allocation system for a prioritized cache
CN1607508A (en) * 2003-10-16 2005-04-20 国际商业机器公司 System and method of adaptively reconfiguring buffers
CN101655824A (en) * 2009-08-25 2010-02-24 北京广利核系统工程有限公司 Implementation method of double-port RAM mutual exclusion access

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5394531A (en) * 1989-04-03 1995-02-28 International Business Machines Corporation Dynamic storage allocation system for a prioritized cache
CN1607508A (en) * 2003-10-16 2005-04-20 国际商业机器公司 System and method of adaptively reconfiguring buffers
CN101655824A (en) * 2009-08-25 2010-02-24 北京广利核系统工程有限公司 Implementation method of double-port RAM mutual exclusion access

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104424119A (en) * 2013-08-26 2015-03-18 联想(北京)有限公司 Storage space configuration method and device
CN104424119B (en) * 2013-08-26 2018-07-06 联想(北京)有限公司 Memory space configuration method and device
CN103778071A (en) * 2014-01-20 2014-05-07 华为技术有限公司 Cache space distribution method and device
CN107408071A (en) * 2015-08-21 2017-11-28 华为技术有限公司 A kind of memory pool access method, device and system
CN107305531A (en) * 2016-04-20 2017-10-31 广州市动景计算机科技有限公司 Buffer memory capacity limit value determines method and apparatus and computing device
CN107305531B (en) * 2016-04-20 2020-10-16 阿里巴巴(中国)有限公司 Method and device for determining limit value of cache capacity and computing equipment
CN107977165B (en) * 2017-11-22 2021-01-08 用友金融信息技术股份有限公司 Data cache optimization method and device and computer equipment
CN107977165A (en) * 2017-11-22 2018-05-01 用友金融信息技术股份有限公司 Data buffer storage optimization method, device and computer equipment
CN110968562A (en) * 2019-11-28 2020-04-07 国网上海市电力公司 Buffer self-adaptive adjustment method and device based on ZFS file system
CN110968562B (en) * 2019-11-28 2023-05-12 国网上海市电力公司 Cache self-adaptive adjustment method and equipment based on ZFS file system
CN112395322A (en) * 2020-12-07 2021-02-23 湖南新云网科技有限公司 List data display method and device based on hierarchical cache and terminal equipment
CN112395322B (en) * 2020-12-07 2021-06-01 湖南新云网科技有限公司 List data display method and device based on hierarchical cache and terminal equipment
CN112988619A (en) * 2021-02-08 2021-06-18 北京金山云网络技术有限公司 Data reading method and device and electronic equipment

Also Published As

Publication number Publication date
CN103077125B (en) 2015-09-16

Similar Documents

Publication Publication Date Title
CN103077125A (en) Self-adaption self-organizing tower type caching method for efficiently utilizing storage space
CN101916302B (en) Three-dimensional spatial data adaptive cache management method and system based on Hash table
CN106102112B (en) A kind of mobile Sink node method of data capture based on ant group algorithm
CN101692229B (en) Self-adaptive multilevel cache system for three-dimensional spatial data based on data content
CN103366016B (en) E-file based on HDFS is centrally stored and optimization method
CN101201801B (en) Classification storage management method for VOD system
CN108667653B (en) Cluster-based cache configuration method and device in ultra-dense network
CN100578469C (en) Storage and polling method and storage controller and polling system
CN106156331A (en) Cold and hot temperature data server system and processing method thereof
CN107295619B (en) Base station dormancy method based on user connection matrix in edge cache network
CN1499382A (en) Method for implementing cache in high efficiency in redundancy array of inexpensive discs
CN102521405A (en) Massive structured data storage and query methods and systems supporting high-speed loading
CN101373445B (en) Method and apparatus for scheduling memory
CN109597304A (en) Die storehouse Intelligent partition storage method based on artificial bee colony algorithm
CN101916301B (en) Three-dimensional spatial data adaptive pre-scheduling method based on spatial relationship
CN105357247B (en) Multidimensional property cloud resource range lookup method based on layering cloud peer-to-peer network
CN108388666A (en) A kind of database multi-list Connection inquiring optimization method based on glowworm swarm algorithm
CN104714753A (en) Data access and storage method and device
CN109245879A (en) A kind of double hash algorithms of storage and lookup IP address mapping relations
CN110062356B (en) Cache copy layout method in D2D network
CN103200245B (en) A kind of distributed network caching method based on Device Mapper
CN110018794A (en) A kind of rubbish recovering method, device, storage system and readable storage medium storing program for executing
Chang et al. Cooperative edge caching via multi agent reinforcement learning in fog radio access networks
CN108717448A (en) A kind of range query filter method and key-value pair storage system towards key-value pair storage
CN105988720A (en) Data storage device and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Self-adaption self-organizing tower type caching method for efficiently utilizing storage space

Effective date of registration: 20180627

Granted publication date: 20150916

Pledgee: China Construction Bank Corp Beijing Zhongguancun branch

Pledgor: RUN TECHNOLOGIES Co.,Ltd. BEIJING

Registration number: 2018110000015

PE01 Entry into force of the registration of the contract for pledge of patent right
PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20210128

Granted publication date: 20150916

Pledgee: China Construction Bank Corp Beijing Zhongguancun branch

Pledgor: Run Technologies Co.,Ltd. Beijing

Registration number: 2018110000015

PC01 Cancellation of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: An adaptive self organizing pyramid cache method for efficient memory utilization

Effective date of registration: 20210705

Granted publication date: 20150916

Pledgee: China Construction Bank Corp Beijing Zhongguancun branch

Pledgor: Run Technologies Co.,Ltd. Beijing

Registration number: Y2021990000579

PE01 Entry into force of the registration of the contract for pledge of patent right
PC01 Cancellation of the registration of the contract for pledge of patent right

Granted publication date: 20150916

Pledgee: China Construction Bank Corp Beijing Zhongguancun branch

Pledgor: RUN TECHNOLOGIES Co.,Ltd. BEIJING

Registration number: Y2021990000579