CN103077125B - An adaptive, self-organizing tower-type caching method for efficient utilization of storage space - Google Patents


Info

Publication number
CN103077125B
CN103077125B CN201210540057.1A CN201210540057A
Authority
CN
China
Prior art keywords
data
container
caching
size
capacity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210540057.1A
Other languages
Chinese (zh)
Other versions
CN103077125A (en)
Inventor
郭俸明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ruian Technology Co Ltd
Original Assignee
Beijing Ruian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Ruian Technology Co Ltd filed Critical Beijing Ruian Technology Co Ltd
Priority to CN201210540057.1A priority Critical patent/CN103077125B/en
Publication of CN103077125A publication Critical patent/CN103077125A/en
Application granted granted Critical
Publication of CN103077125B publication Critical patent/CN103077125B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to an adaptive, self-organizing tower-type caching method that makes efficient use of storage space. Its steps comprise: establishing on a caching server multi-level cache containers partitioned by data-size thresholds, and linking the containers of all levels into an array to form a tower-type cache structure; the caching server receiving data from a data center and automatically selecting a suitable cache container to store it according to the size of the received data; and, while keeping the total storage space constant, dynamically adjusting the capacity values of the cache containers of all levels according to their hit rates within a specified time period, so that containers with higher hit rates receive larger capacities. The invention automatically selects a suitable cache according to the size of the data itself and maximizes the cache hit rate by automatically adjusting cache capacities, and is therefore both adaptive and self-organizing.

Description

An adaptive, self-organizing tower-type caching method for efficient utilization of storage space
Technical field
The invention belongs to the technical field of computer storage and relates to a caching method, specifically an adaptive, self-organizing tower-type caching method that allocates storage space rationally and uses it efficiently.
Background technology
In today's widespread heterogeneous systems, caching is a commonly adopted technique for bridging the speed gap between client and server. As shown in Figure 1, its principle is that while data obtained from the data center is delivered to the client, it is also cached in the server's memory (or on a dedicated cache service); when a subsequent request with matching characteristics arrives, the data is returned to the client directly from the cache, avoiding a repeated retrieval and thus responding to user requests at high speed. A single cache, however, cannot use memory well. Take cached table data as an example: suppose the cache has 100 MB of space and caches 100 tables. If the tables average more than 1 MB each, the space is obviously insufficient; if they average less than 1 MB, the cache space is underutilized. The common workaround is to leave the number of cacheable items unspecified and to cache by data size, which complicates cache management.
In the storage field, multi-level caching is also a proven technique. One approach exploits the speed differences of different physical media and keeps the most frequently used data or instructions in the fastest cache, thereby improving the response speed of the whole system. Another approach builds different caches for different kinds of data to ease management, as in the invention patent "Self-adaptive multilevel cache system for three-dimensional spatial data based on data content" (patent No. 200910063371.3). But because data items vary in content size and their sizes are hard to estimate, existing multi-level caching methods cannot make full use of storage space.
Summary of the invention
The object of the invention is to address the above problems by proposing an adaptive, self-organizing tower-type caching method that uses storage space efficiently: it automatically selects a suitable cache according to the size of the data itself and maximizes the cache hit rate by automatically adjusting cache capacities, and is therefore both adaptive and self-organizing.
The invention adopts multi-level caching, where the levels are partitioned by thresholds on the size of the data to be stored: each level can only hold data whose size falls within a certain range. The number of levels (caches) is unrestricted, and no distinction is made by data content. In this way a limited space can cache more data, maximizing space utilization.
Specifically, the technical solution adopted by the invention is as follows:
An adaptive, self-organizing tower-type caching method, whose steps comprise:
1) establishing on a caching server multi-level cache containers partitioned by data-size thresholds, and linking the containers of all levels into an array to form a tower-type cache structure;
2) the caching server receiving data from a data center and automatically selecting a suitable cache container to store it according to the size of the received data;
3) while keeping the total storage space constant, dynamically adjusting the capacity values of the cache containers of all levels according to their hit rates within a specified time period, so that containers with higher hit rates receive larger capacities.
Further, the invention can set the access priority of the cache containers according to their hit rates: the higher the hit rate, the higher the priority. When cached data is accessed, the containers are then traversed in priority order, which probabilistically shortens lookup time in the cache and maximizes query efficiency. When the total cache volume is very large and cache efficiency becomes a bottleneck, this refinement may be considered.
The cache structure described above can be regarded as a one-dimensional structure with data size as its threshold dimension; it provides a unified external data-access interface and hides the storage implementation details. Further, on the basis of the multi-level cache containers partitioned by data-size thresholds, the invention can build an N-dimensional cache structure, where N ≥ 2, likewise hiding storage details and providing a unified access interface. For example, a two-dimensional structure can be built by taking the query time needed to obtain the data from the data center as a second threshold, so that data with longer query times is retained in the cache longer. By the same principle, three-, four- or even higher-dimensional structures can be built.
"Tower-type" in the invention means that the capacities of the cache levels are allocated in proportion to their hit rates: the higher the hit rate, the more data items the level can cache. "Adaptive" in the invention has three meanings: first, when data is cached, a suitable cache is selected automatically according to the data's own size, ensuring that a new data item enters exactly one cache; second, when a cache reaches its capacity limit, the least recently used data is automatically evicted; third, when data is fetched from the cache, the position where the data is stored (i.e. which cache level holds it) is determined automatically, and a unified access interface hides these details from the user. "Self-organizing" in the invention means that the capacities of the cache levels can be adjusted dynamically: based on hit rates collected in real time, under certain trigger conditions and with the total cache space held constant, the per-level capacities are adjusted to the optimal configuration so that the whole cache achieves the maximum hit rate.
The adaptive, self-organizing tower-type caching method of the invention can, under the premise of a homogeneous storage medium and homogeneous data, automatically select a suitable cache according to the size of the data itself and maximize the cache hit rate by automatically adjusting cache capacities. It is adaptive and self-organizing, provides a unified external access interface, and achieves efficient utilization of storage space.
Accompanying drawing explanation
Fig. 1 is a schematic diagram of data retrieval based on the caching technique;
Fig. 2 is a schematic diagram of the tower-type cache structure in the embodiment;
Fig. 3 is a flowchart of adding data obtained from the data center to the cache in the embodiment;
Fig. 4 is a flowchart of obtaining data from the cache in the embodiment;
Fig. 5 is a flowchart of adjusting the per-level cache capacities in the embodiment.
Embodiment
The invention is further described below through a specific embodiment, with reference to the accompanying drawings.
Fig. 2 is a structural diagram of the tower-type cache of the invention. Data items of different sizes are stored in the corresponding caches according to their size, a unified external access interface is provided, and the capacity of each cache level can be adjusted automatically. The tower-type cache data structure consists of a series of interrelated cache containers; the size of the data each container can hold, together with the number of items it holds, forms the tower shape. The storage policy of the cache containers at each level can be set in a configuration file, including:
A) Cache container (Container): the container that stores the data;
B) Storage level (Level): the number of cache containers in the structure;
C) Container threshold (Limit): the upper and lower bounds on the size of data each level can store:
The upper threshold (LimitUp) is the upper limit on the data size a level can store and is simultaneously the lower bound of the next level; the lower threshold (LimitDown) is the lower limit on the data size a level can store and is simultaneously the upper bound of the previous level;
D) Container capacity (Size): the number of data items each level can store.
The cache structure above is an adaptive storage structure. After the caching server receives new data from the data center, a suitable cache container is selected automatically according to the size of the data. The service provided externally works as follows: given a query feature value (the query condition), the cache levels are searched; on a hit, the data is returned to the user and the feature value is simultaneously moved to the tail of the queue; if it is not found in any cache level, the data is fetched from the data center, returned to the caller, and simultaneously stored in the cache container of the matching size for subsequent queries.
The cache structure above permits precise capacity planning according to the size of the available storage space (Capacity), neither wasting space nor risking problems such as memory overflow. Its storage-space formula is:
Capacity = Σ_{i=1}^{Level} (LimitUp_i × Size_i)        (1)
where Capacity is the size of the total storage space; LimitUp_i is the upper bound on the size of data the i-th level cache container can store; Size_i is the number of data items the i-th level container can store; Level is the number of cache containers; and the summation is the maximum total storage across all cache levels.
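As an illustrative check of formula (1) (the numbers below are hypothetical, not from the patent), the worst-case space a planned tower cache can occupy is simply the sum over levels of each level's size bound times its item capacity:

```python
# Hypothetical 3-level tower cache plan: per-level upper size bound (bytes)
# and per-level item capacity. Formula (1): Capacity = sum(LimitUp_i * Size_i).
limit_up = [1024, 16 * 1024, 256 * 1024]   # LimitUp_i for levels 1..3
size = [1000, 200, 20]                     # Size_i: item count per level

# Worst-case storage the tower cache can occupy.
capacity = sum(u * s for u, s in zip(limit_up, size))
print(capacity)  # 9543680 bytes, about 9.1 MB
```

The planning also works in reverse: given a fixed Capacity, the per-level Size_i values can be chosen under the selected LimitUp_i thresholds so that formula (1) exactly exhausts the available space.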
The cache structure above provides a unified external data-access interface getData(key) and hides the storage implementation details: from the user's point of view there is no difference in the calling convention between data obtained from the data center and data obtained from the cache, which guarantees transparency to the user.
The cache structure above is also a self-organizing storage structure. To ensure the highest hit rate, during operation the capacity values of the cache levels are adjusted dynamically to new values (SizeNew_i) according to the hit rate (Hitrate_i) of each cache container within the specified time period. The formulas are as follows:
Hitrate_i = Hitcount_i / Size_i        (2)
Right = Capacity / Σ_{i=1}^{Level} (Hitrate_i × LimitUp_i × Size_i)        (3)
SizeNew_i = Hitrate_i × Right × Size_i        (4)
where Hitrate_i is the hit rate of the i-th level cache container; Hitcount_i is its hit count; Size_i is the number of data items it can store; Right denotes the weight (the adjustment factor); Capacity is the size of the total storage space; LimitUp_i is the upper bound on the size of data the i-th level container can store; and SizeNew_i is the number of data items the i-th level container can store after adjustment.
After a period of operation, with the total storage space held constant, the containers with higher hit rates have larger capacities, which ensures that the cache always achieves the maximum hit rate.
The tower-type cache structure above supports four operations, described in detail below:
(1) Structure initialization
The initialization parameters are read from the configuration file: the number of cache-container levels in the structure (Level), the upper bound on the data scale each level can store (e.g. the maximum record count of a stored List, LimitUp), and the number of data items each level can store (e.g. the number of stored Lists, Size).
Each cache container holds a data map (Map) and a queue (Queue) of data feature values, plus two attributes: the data upper bound and the capacity (i.e. the number of data items the level can store, as described above). The upper bound on the data size one level can store is simultaneously the lower bound of the next level. The map stores the key-value pairs of the data: the "key" is the feature value of the data and the "value" is the data itself; the feature-value queue manages the cached data with the LRU (Least Recently Used) algorithm.
Stringing the containers of all levels together into an array forms the tower-type cache structure.
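The per-level structure just described (a Map plus an LRU feature-value Queue, with LimitUp and Size attributes) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the class and field names are our own, and a single OrderedDict stands in for both the map and the queue, since its insertion order can serve as the LRU order:

```python
from collections import OrderedDict

class CacheContainer:
    """One level of the tower cache: a key->value map plus an LRU order."""
    def __init__(self, limit_up, size):
        self.limit_up = limit_up     # LimitUp: upper bound on data size
        self.size = size             # Size: capacity in number of items
        self.map = OrderedDict()     # Map and feature-value Queue in one
        self.hitcount = 0            # Hitcount, fed to the capacity adjuster

def build_tower(limits, sizes):
    """String the per-level containers together into an array (the tower)."""
    return [CacheContainer(u, s) for u, s in zip(limits, sizes)]

# Hypothetical 3-level configuration, as if read from the configuration file.
tower = build_tower([1024, 16 * 1024, 256 * 1024], [1000, 200, 20])
```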
(2) Adding data to the cache
Fig. 3 is the flowchart of adding data to the cache; the detailed steps are as follows:
A. Assemble the data obtained from the data center (e.g. the value list retrieved from the database, Value) and the feature of the data (e.g. the query condition, Key) into a structure Data; initialize the iteration variable i (denoting the i-th level cache container) to 0;
B. Judge whether the scale of the obtained data (Data.size) is less than the upper bound of the current container (LimitUp); if not, continue comparing with the next container until the condition holds, then proceed to the next step;
C. Judge whether the number of data items the current container holds (Count) is less than the container's capacity (Size); if not, go to step D; if so, go to step E;
D. Delete the head of the queue, delete the data value corresponding to the head feature value from the map, decrement the data count of the current container, and return to step C. The operations are as follows:
Delete the head of the current container's queue and assign the feature value to Key: Key=Cache[i].queue.pop();
Remove the data whose feature value is Key from the current container's map: Cache[i].map.remove(key);
Decrement the data count of the current container: Cache[i].count--;
E. Add the data to the map, add the feature value of the data to the tail of the queue, and increment the container's data count. The operations are as follows:
Add the retrieved data and its feature value to the map of the current container: Cache[i].map.add(Data.key, Data.value);
Add the query feature value to the tail of the current container's queue: Cache[i].queue.add(Data.key);
Increment the data count of the current container: Cache[i].count++.
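Steps A through E can be sketched as follows. This is an illustrative rendering with hypothetical level bounds, again assuming each level is an OrderedDict whose insertion order plays the role of the feature-value queue:

```python
from collections import OrderedDict

# Hypothetical two-level tower: per-level (LimitUp, Size), with tiny
# capacities so the eviction path in steps C-D is easy to see.
LIMIT_UP = [1024, 16 * 1024]
SIZE = [2, 2]
cache = [OrderedDict() for _ in LIMIT_UP]

def add_to_cache(key, value, data_size):
    """Steps A-E: pick the first level whose LimitUp exceeds the data size,
    evict from the queue head while the level is full, then append at the
    queue tail. Returns the level index, or -1 if no level fits."""
    for i, limit in enumerate(LIMIT_UP):       # step B: find the level
        if data_size < limit:
            while len(cache[i]) >= SIZE[i]:    # steps C-D: evict LRU head
                cache[i].popitem(last=False)
            cache[i][key] = value              # step E: map + queue tail
            return i
    return -1  # larger than every threshold: not cached

add_to_cache("q1", b"a" * 100, 100)
add_to_cache("q2", b"b" * 100, 100)
add_to_cache("q3", b"c" * 100, 100)   # level 0 is full: "q1" is evicted
```

With SIZE[0] = 2, the third insertion into level 0 evicts the queue head "q1", which is exactly the step C-D loop; a 5000-byte item would skip level 0 and land in level 1.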
(3) Searching for and obtaining data from the cache
Fig. 4 is the flowchart of searching for and obtaining data from the cache; the detailed steps are as follows:
A. Receive a cache query request from the front end; set the iteration variable to 0;
B. Judge whether the queue of the current cache container contains the query feature value (Key); if not, increment the iteration variable and check the next cache level, until the condition holds or all caches have been traversed; if so, proceed to step C;
C. Increment the hit count of the current cache (Hitcount); the purpose of this operation is to record hit counts, preparing the data for the cache-capacity adjustment; then obtain the data value from the cache's map. The operations are as follows:
Increment the hit count of the current cache: Cache[i].hitcount++;
Obtain the data value from the current cache's map: Data.value=Cache[i].map.get(key);
Assign the query feature value to the Key attribute of the data object: Data.key=key;
D. Move the query feature value to the tail of the cache queue; the purpose of this operation is to keep the most recently hit data maximally active (the LRU principle: the most recently accessed is evicted last), so that the most frequently accessed data stays in the cache longer and the least used data is cleared out of the cache in time. The operations are as follows:
Remove the query feature value from the queue of the current container: Cache[i].queue.remove(key);
Add the query feature value to the tail of the queue: Cache[i].queue.add(key);
E. If the data was found in the cache, return it to the caller; otherwise query the data center to obtain it.
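The lookup flow in steps A through E can be sketched as follows (an illustration with hypothetical data; an OrderedDict per level is assumed, so move_to_end realizes the move-to-tail of step D):

```python
from collections import OrderedDict

# Hypothetical two-level cache already holding one entry per level;
# the OrderedDict insertion order stands in for the feature-value queue.
cache = [OrderedDict([("q1", "v1")]), OrderedDict([("q2", "v2")])]
hitcount = [0, 0]

def get_data(key, fetch_from_data_center):
    """Steps A-E: traverse the levels; on a hit, record it and move the
    feature value to the queue tail; on a full miss, go to the data center."""
    for i, level in enumerate(cache):          # step B: traverse the levels
        if key in level:
            hitcount[i] += 1                   # step C: record the hit
            level.move_to_end(key)             # step D: move to queue tail
            return level[key]                  # step E: return the hit value
    return fetch_from_data_center(key)         # step E: miss, ask the DC

hit = get_data("q2", lambda k: "from-data-center")
miss = get_data("q9", lambda k: "from-data-center")
```

A full implementation would, on a miss, also insert the fetched data into the container of matching size, as described in operation (2); that step is omitted here to keep the lookup path isolated.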
(4) Adjusting container capacities
Fig. 5 is the flowchart of adjusting container capacities; the detailed steps are as follows:
A. Initialize the total capacity of the whole cache (Capacity) and the temporary capacity total (CapacityTemp) to 0 (if the total capacity is specified in the configuration file, this value need not be computed); initialize the iteration variable to 0. The operations are as follows:
Initialize the iteration variable to 0: i=0;
Initialize the total capacity of the whole cache to 0: Capacity=0;
Initialize the temporary capacity total to 0: CapacityTemp=0;
B. Iterate over the cache containers, computing each cache's hit rate, accumulating the total cache capacity, and accumulating the unweighted capacity total (each cache's capacity multiplied by its hit rate). The operations are as follows:
Hit rate (Hitrate) = hit count (Hitcount) / cache capacity (Size), i.e.:
Hitrate[i]=Cache[i].Hitcount/Cache[i].size;
Total cache capacity (Capacity) = the sum of the per-level capacities, i.e.:
Capacity=Capacity+Cache[i].LimitUp*Cache[i].size;
Unweighted capacity total (CapacityTemp) = the sum of each level's capacity weighted by its hit rate, i.e.:
CapacityTemp=CapacityTemp+Hitrate[i]*Cache[i].LimitUp*Cache[i].size;
C. After the iteration, compute the weight; the purpose of the weight is to keep the total cache capacity unchanged before and after the adjustment. The operations are as follows:
Weight (Right) = total capacity (Capacity) / unweighted capacity total (CapacityTemp), i.e.:
Right=Capacity/CapacityTemp;
Reset the iteration variable to 0, i.e. i=0;
D. Iterate again, assigning each cache level its adjusted capacity value. The operation is:
Adjusted capacity (Size) = hit rate * former capacity * weight, i.e.:
Cache[i].size=Hitrate[i]*Cache[i].size*Right.
After the operations above, the capacity of each container is proportional to its hit rate, so that the whole cache achieves the maximum hit rate.
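The adjustment loop above is the procedural form of formulas (2)-(4). As a numeric sketch (the two-level statistics below are hypothetical), note how the weight Right shifts capacity toward the hotter level while keeping the occupied space constant:

```python
# Hypothetical per-level statistics: upper size bounds, current capacities,
# and hit counts collected over the measurement window.
limit_up = [1024, 16 * 1024]      # LimitUp_i
size = [100.0, 50.0]              # Size_i (current item capacities)
hitcount = [80, 10]               # Hitcount_i

# Formula (2): per-level hit rate.
hitrate = [h / s for h, s in zip(hitcount, size)]

# Step B accumulations: total capacity and the hit-rate-weighted sum.
capacity = sum(u * s for u, s in zip(limit_up, size))
capacity_temp = sum(r * u * s for r, u, s in zip(hitrate, limit_up, size))

# Formula (3): the weight that keeps total space constant after adjustment.
right = capacity / capacity_temp

# Formula (4): new per-level capacities, proportional to hit rate.
size_new = [r * right * s for r, s in zip(hitrate, size)]

# The adjusted layout occupies the same total space as before.
total_after = sum(u * s for u, s in zip(limit_up, size_new))
```

Here the first level's hit rate (0.8) is four times the second's (0.2), so its item capacity grows from 100 to about 300 while the second shrinks from 50 to about 37.5, and total_after equals capacity, confirming that the weight preserves the space budget.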
In the scheme above, whether the container-capacity adjustment function is enabled, and how it is triggered, can be configured in the configuration file, so that capacity adjustment is self-organized according to a predetermined strategy.
The embodiment above is intended only to illustrate, not to limit, the technical scheme of the invention. A person of ordinary skill in the art may modify the technical scheme of the invention or replace it with equivalents without departing from the spirit and scope of the invention; the scope of protection of the invention shall be determined by the claims.

Claims (9)

1. An adaptive, self-organizing tower-type caching method, whose steps comprise:
1) establishing on a caching server multi-level cache containers partitioned by data-size thresholds, and linking the containers of all levels into an array to form a tower-type cache structure;
2) the caching server receiving data from a data center and automatically selecting a suitable cache container to store it according to the size of the received data;
3) while keeping the total storage space constant, dynamically adjusting the capacity values of the cache containers of all levels according to their hit rates within a specified time period, so that containers with higher hit rates receive larger capacities; the formulas for dynamically adjusting the capacity values of the cache containers being:
Hitrate_i = Hitcount_i / Size_i;
Right = Capacity / Σ_{i=1}^{Level} (Hitrate_i × LimitUp_i × Size_i);
SizeNew_i = Hitrate_i × Right × Size_i;
where Hitrate_i is the hit rate of the i-th level cache container; Hitcount_i is its hit count; Size_i is the number of data items it can store; Right denotes the weight; Capacity is the size of the total storage space; LimitUp_i is the upper bound on the size of data the i-th level container can store; and SizeNew_i is the number of data items the i-th level container can store after adjustment.
2. the method for claim 1, is characterized in that: the access privileges setting caching container at different levels according to buffer memory clicking rate, and the higher then priority of clicking rate is higher.
3. the method for claim 1, is characterized in that: described multi-level buffer container externally provides unified data acquisition interface, and hiding data storage realizes details.
4. the method for claim 1, is characterized in that: every grade of caching container contains one for preserving the data-mapping of the key-value pair of data, and a data feature values queue.
5. method as claimed in claim 4, is characterized in that: described data feature values queue adopts lru algorithm management data cached.
6. the method for claim 1, is characterized in that: on the described multi-level buffer vessel base taking data size as threshold values, build N tie up buffer structure, wherein N >=2.
7. method as claimed in claim 6, it is characterized in that: on the described multi-level buffer vessel base taking data size as threshold values, the query time obtaining data from data center is built two-dimentional buffer structure as another threshold values, and the time that the data that wherein query time is longer retain in the buffer is longer.
8. the method for claim 1, is characterized in that: in configuration file, set the storage level of caching container at different levels, container threshold values and container capacity.
9. the method for claim 1, is characterized in that: configure in configuration file and whether start adjustment container capacity function and how to touch adjustment container capacity function, to realize the self-organization of capacity adjustment.
CN201210540057.1A 2012-12-13 2012-12-13 A kind of tower caching method of self-adaptation self-organization of efficiency utilization storage space Active CN103077125B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210540057.1A CN103077125B (en) 2012-12-13 2012-12-13 A kind of tower caching method of self-adaptation self-organization of efficiency utilization storage space


Publications (2)

Publication Number Publication Date
CN103077125A CN103077125A (en) 2013-05-01
CN103077125B true CN103077125B (en) 2015-09-16

Family

ID=48153657

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210540057.1A Active CN103077125B (en) 2012-12-13 2012-12-13 A kind of tower caching method of self-adaptation self-organization of efficiency utilization storage space

Country Status (1)

Country Link
CN (1) CN103077125B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104424119B (en) * 2013-08-26 2018-07-06 联想(北京)有限公司 Memory space configuration method and device
CN103778071A (en) * 2014-01-20 2014-05-07 华为技术有限公司 Cache space distribution method and device
WO2017031637A1 (en) * 2015-08-21 2017-03-02 华为技术有限公司 Memory access method, apparatus and system
CN107305531B (en) * 2016-04-20 2020-10-16 阿里巴巴(中国)有限公司 Method and device for determining limit value of cache capacity and computing equipment
CN107977165B (en) * 2017-11-22 2021-01-08 用友金融信息技术股份有限公司 Data cache optimization method and device and computer equipment
CN110968562B (en) * 2019-11-28 2023-05-12 国网上海市电力公司 Cache self-adaptive adjustment method and equipment based on ZFS file system
CN112395322B (en) * 2020-12-07 2021-06-01 湖南新云网科技有限公司 List data display method and device based on hierarchical cache and terminal equipment
CN112988619A (en) * 2021-02-08 2021-06-18 北京金山云网络技术有限公司 Data reading method and device and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5394531A (en) * 1989-04-03 1995-02-28 International Business Machines Corporation Dynamic storage allocation system for a prioritized cache
CN1607508A (en) * 2003-10-16 2005-04-20 国际商业机器公司 System and method of adaptively reconfiguring buffers
CN101655824A (en) * 2009-08-25 2010-02-24 北京广利核系统工程有限公司 Implementation method of double-port RAM mutual exclusion access


Also Published As

Publication number Publication date
CN103077125A (en) 2013-05-01

Similar Documents

Publication Publication Date Title
CN103077125B (en) A kind of tower caching method of self-adaptation self-organization of efficiency utilization storage space
CN105205009B (en) A kind of address mapping method and device based on large capacity solid-state storage
CN103366016B (en) E-file based on HDFS is centrally stored and optimization method
CN108667653B (en) Cluster-based cache configuration method and device in ultra-dense network
CN105892947B (en) A kind of SSD and HDD the hybrid cache management method and system of energy conservation storage system
CN101201801B (en) Classification storage management method for VOD system
CN103246613B (en) Buffer storage and the data cached acquisition methods for buffer storage
CN101373445B (en) Method and apparatus for scheduling memory
CN109062505A (en) A kind of write performance optimization method under cache policy write-in layering hardware structure
CN107295619B (en) Base station dormancy method based on user connection matrix in edge cache network
CN108681435A (en) A kind of abrasion equilibrium method of solid state disk, device, equipment and storage medium
CN1499382A (en) Method for implementing cache in high efficiency in redundancy array of inexpensive discs
CN101692229A (en) Self-adaptive multilevel cache system for three-dimensional spatial data based on data content
CN106844740A (en) Data pre-head method based on memory object caching system
CN104572493A (en) Memory resource optimization method and device
CN105335219A (en) Distribution-based task scheduling method and system
CN109739780A (en) Dynamic secondary based on the mapping of page grade caches flash translation layer (FTL) address mapping method
CN110830561B (en) Multi-user ORAM access system and method under asynchronous network environment
CN104699424A (en) Page hot degree based heterogeneous memory management method
CN108647155A (en) A kind of method and apparatus that the multistage cache based on deep learning is shared
CN104598394A (en) Data caching method and system capable of conducting dynamic distribution
CN101021814A (en) Storage and polling method and storage controller and polling system
CN101916301B (en) Three-dimensional spatial data adaptive pre-scheduling method based on spatial relationship
CN103294912B (en) A kind of facing mobile apparatus is based on the cache optimization method of prediction
CN103200245B (en) A kind of distributed network caching method based on Device Mapper

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Self-adaption self-organizing tower type caching method for efficiently utilizing storage space

Effective date of registration: 20180627

Granted publication date: 20150916

Pledgee: China Construction Bank Corp Beijing Zhongguancun branch

Pledgor: RUN TECHNOLOGIES Co.,Ltd. BEIJING

Registration number: 2018110000015

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20210128

Granted publication date: 20150916

Pledgee: China Construction Bank Corp Beijing Zhongguancun branch

Pledgor: Run Technologies Co.,Ltd. Beijing

Registration number: 2018110000015

PC01 Cancellation of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: An adaptive self organizing pyramid cache method for efficient memory utilization

Effective date of registration: 20210705

Granted publication date: 20150916

Pledgee: China Construction Bank Corp Beijing Zhongguancun branch

Pledgor: Run Technologies Co.,Ltd. Beijing

Registration number: Y2021990000579

PE01 Entry into force of the registration of the contract for pledge of patent right
PC01 Cancellation of the registration of the contract for pledge of patent right

Granted publication date: 20150916

Pledgee: China Construction Bank Corp Beijing Zhongguancun branch

Pledgor: RUN TECHNOLOGIES Co.,Ltd. BEIJING

Registration number: Y2021990000579

PC01 Cancellation of the registration of the contract for pledge of patent right