CN106844740A - Data pre-reading method based on memory object caching system - Google Patents

Data pre-reading method based on memory object caching system

Info

Publication number
CN106844740A
CN106844740A (application CN201710077397.8A)
Authority
CN
China
Prior art keywords
data
user
memory object
request
object caching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710077397.8A
Other languages
Chinese (zh)
Other versions
CN106844740B (en)
Inventor
李丁丁
刘继伟
李建国
汤庸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Normal University
Original Assignee
South China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2017-02-14
Filing date: 2017-02-14
Publication date: 2017-06-13
Application filed by South China Normal University
2017-02-14: Priority to CN201710077397.8A
2017-06-13: Publication of CN106844740A
2020-12-29: Application granted; publication of CN106844740B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2458 Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2471 Distributed queries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2455 Query execution
    • G06F16/24552 Database cache management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to a data pre-reading method based on a memory object caching system. When the back-end database serves a user's first request for a piece of data, the method extracts the characteristics of that data access and, according to those characteristics, extracts the associated data; the associated data is returned to the front-end memory object caching system together with the user-requested data in a single pass. A monitoring system is added alongside the pre-reading mechanism. By introducing pre-reading and monitoring into the memory object caching system, the cache hit rate and the stability of the system are effectively improved, system resources are fully utilized, many unnecessary system I/O operations and other resource waste are avoided, and the memory object caching system becomes more intelligent and more responsive in real time, with cache updates becoming more proactive.

Description

Data pre-reading method based on memory object caching system
Technical field
The present invention relates to the field of data reading techniques, and more particularly to a data pre-reading method based on a memory object caching system.
Background art
Memory object caching systems add a dedicated caching layer on top of a traditional database. They not only improve the response time of user requests and transactions executed against the underlying database, but also simplify its operations, for example through key-value data access. In this way, at a modest system cost, the scalability of the underlying database is greatly improved while it copes with massive inflows of data. Concretely, the caching layer alleviates the performance bottleneck brought by traditional external storage devices, brings scalability advantages in both CPU and storage, adapts to the dynamic changes of big data, and substantially reduces unnecessary system I/O.
At present, memory object caching systems have been deployed and applied on a large scale in the data centers of large and medium-sized enterprises at home and abroad, for example: domestically, Taobao, JD.com's flash-sale system, and Didi's real-time ride-hailing system; abroad, Facebook's image cache and Twitter's caching systems at the scale of hundreds of terabytes. The memory object caching systems they use are mainly the two open-source systems Memcached and Redis.
Memcached is a high-performance distributed memory object caching system. By maintaining a single unified, huge hash table in memory, it can store data of various forms. In the typical deployment scenario it sits on front-end nodes and connects to the back-end database over the network. Once the system is running normally, every incoming user query (in the read direction) first tries to find its target data in this front-end memory object caching system, so as to reduce the number of accesses to the back-end database as far as possible and thereby improve the speed, and the user experience, of external applications such as dynamic web services.
Redis, like Memcached, is a memory object caching system implemented in a client/server (C/S) structure. Redis shares many features with Memcached; the differences are that Redis adds a persistence function, supports more data types, and provides transaction control. For caching purposes, Redis and Memcached are broadly similar.
Both systems fill their in-memory data passively. Following the locality principle of computer programs, after a piece of data A has been fetched to satisfy some user request, the system retains that data in the caching system so that the next access can hit it, thereby avoiding another trip to the database, saving overhead, and improving performance. The price paid is that the first access to data A is suboptimal, and a large share of subsequent accesses repeat this process: many data access requests must penetrate the cache all the way to the database (cache penetration) before the requested data is finally retained in the cache. In a high-concurrency big-data system this suboptimal pattern is magnified: resources are under-utilized and the back-end database performs I/O far too frequently. Meanwhile, a large volume of network I/O spawns yet more secondary network I/O, which can easily trigger the cache "bottomless pit" problem (when cache performance is poor, adding nodes does not improve the situation): a batch operation then involves multiple network operations, meaning that as the number of instances grows, the time consumed by batch operations keeps increasing. And if at some moment a large number of requests break through the cache at once and all of them go to the database, the database CPU and memory load can spike instantly and the machine may even crash (the cache avalanche phenomenon).
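To make the passive pattern concrete, here is a minimal cache-aside sketch in Python, assuming a Redis client as the memory object cache; the key name and the stand-in database are hypothetical:

```python
import redis

r = redis.Redis()                        # front-end memory object cache
FAKE_DB = {"user:1": b"alice"}           # stand-in for the back-end database

def get(key):
    value = r.get(key)                   # look in the cache first
    if value is not None:
        return value                     # cache hit: no database I/O
    value = FAKE_DB[key]                 # cache penetration: fall through to the DB
    r.set(key, value)                    # passively retain the data for next time
    return value                         # the first access is always suboptimal
```

Every cold key pays the penetration cost once; the pre-reading method described below aims to pay that cost for several related keys in a single database round trip.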
Summary of the invention
In view of this, in order to reduce or even avoid the occurrence of cache penetration and the cache bottomless-pit phenomenon, and to reduce the occurrence of suboptimal requests, a data pre-reading method for memory object caching systems based on an active mode is provided. The method improves the performance of the memory object caching system: according to the user's requests, it dynamically filters out the data the user is most likely to access next and returns it to the memory object caching system, and it determines the amount of data returned according to the current usage of system resources and the current cache hit rate, so as to maximize the utilization of system resources.
A data pre-reading method based on a memory object caching system: when the back-end database serves the access request by which a user requests a piece of data for the first time, the method extracts the characteristics of this data access and, according to those characteristics, extracts the associated data; the associated data is returned to the front-end memory object caching system together with the user-requested data in a single pass.
Preferably, in order to save space and to compensate for prediction errors of the data prediction algorithm proposed by the present invention, the associated data added to the caching system is given a validity period of 6 hours; if the data is not accessed within 6 hours, the memory object caching system automatically destroys it and releases the corresponding memory space.
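In Redis terms, such a validity period is just a TTL on the speculatively cached keys; a minimal sketch, with a hypothetical key name (6 hours = 21600 seconds):

```python
import redis

r = redis.Redis()
# pre-read data carries a 6-hour TTL so mispredicted entries expire on their own
r.set("prefetch:user:2", b"bob", ex=6 * 3600)
```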
The pre-reading method is implemented by the following steps:
S1: after a user sends a data request, the system first judges whether the requested data is in the memory object caching system; if it hits in the memory object caching system, the system returns the data to the user directly from the memory object caching system and ends this access;
S2: if the requested data does not hit in the memory object caching system, the monitoring system judges, according to the current system performance, whether to enable the pre-reading function; if the current system performance is poor, the monitoring system disables pre-reading, accesses the back-end database directly, adds the user-requested data to the memory object caching system, and then returns that data to the user;
S3: if the current system performance is good, the pre-reading function is enabled; the system then determines the size of the pre-read window according to the current running state of the system, the cache hit rate, and the volume of the user-requested data; the system fetches the user-requested data from the database and adds it to the to-be-cached queue; next, the system judges whether the volume of data added to the queue is smaller than the pre-read window;
S31: if the data in the queue is smaller than the current pre-read window, the system judges whether the newest data in the to-be-cached queue is associated with data of other tables in the database; if an association exists, the system adds the associated data to the to-be-cached queue, until either the data in the queue exceeds the pre-read window or the queued data has no further associated data; once the data in the queue exceeds the pre-read window, jump to S32; if the queued data no longer has any associations but is still smaller than the pre-read window, the system fetches the newest N records of the table containing the newest queued data and adds them to the to-be-cached queue; if the newest N records already exist in the to-be-cached queue, it continues by fetching the next-newest records and adding them to the queue;
S32: if the data in the to-be-cached queue exceeds the pre-read window, the system directly adds the data in the to-be-cached queue to the cache and returns the user-requested data (a sketch of this flow follows these steps).
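The following minimal sketch restates steps S1 to S32; `cache`, `db`, and `monitor` are hypothetical stand-ins for the memory object caching system, the back-end database, and the monitoring system, and `fill_queue` is sketched later in the detailed description:

```python
def handle_request(key, cache, db, monitor, r0=1):
    value = cache.get(key)                 # S1: try the front-end cache first
    if value is not None:
        return value                       # hit: answer directly from the cache

    value = db.fetch(key)                  # miss: go to the back-end database
    if not monitor.performance_is_good():  # S2: poor performance, skip pre-reading
        cache.put(key, value)
        return value

    window = monitor.window_size(r0)       # S3: window from state, hit rate, and r0
    queue = [(key, value)]                 # the to-be-cached queue
    fill_queue(queue, window, db)          # S31: expand with associated/newest rows
    for k, v in queue:                     # S32: flush the whole queue into the cache
        cache.put(k, v)
    return value
```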
Wherein, the judgment of the current system performance includes classifying the system into three levels: an idle state L1, a generally busy state L2, and a busy state L3.
The idle state L1 may be defined as: the sum of the percentage of CPU time spent in user mode and the percentage spent in system mode is below 70 percent; used memory as a percentage of total system memory is at most 60 percent; the number of read/write (I/O) operations performed per second, relative to the maximum read/write rate, is at most 60 percent; the bandwidth in use on the current network, as a percentage of total bandwidth, is at most 60 percent; and the network delay is below 50 ms.
The busy state L3 is defined as: the sum of the percentage of CPU time spent in user mode and the percentage spent in system mode is not less than 85 percent; used memory as a percentage of total system memory is not less than 80 percent; the number of read/write (I/O) operations performed per second, relative to the maximum read/write rate, is at least 80 percent; the bandwidth in use on the current network, as a percentage of total bandwidth, is not less than 80 percent; and the network delay exceeds 100 ms.
Any state between the idle state L1 and the busy state L3 is defined as the generally busy state L2 (a classifier sketch follows).
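A minimal sketch of this three-level classification under the thresholds above; the text does not state whether L3 requires all of its conditions or any one of them, so the sketch assumes all:

```python
def classify(cpu_pct, mem_pct, io_pct, bw_pct, delay_ms):
    """Map resource usage to the performance levels L1/L2/L3.

    cpu_pct:  user-mode plus system-mode CPU time, in percent
    mem_pct:  used memory as a percentage of total system memory
    io_pct:   current IOPS relative to the device maximum, in percent
    bw_pct:   bandwidth in use relative to total bandwidth, in percent
    delay_ms: network delay in milliseconds
    """
    if cpu_pct < 70 and mem_pct <= 60 and io_pct <= 60 and bw_pct <= 60 and delay_ms < 50:
        return "L1"    # idle: pre-read aggressively
    if cpu_pct >= 85 and mem_pct >= 80 and io_pct >= 80 and bw_pct >= 80 and delay_ms > 100:
        return "L3"    # busy: disable pre-reading and reclaim resources
    return "L2"        # generally busy: everything in between
```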
Wherein, the pre-read window size is determined by the following conditions:
system in the idle state L1: when the cache hit rate is low, R1=R0*4*2; when the cache hit rate is high, R1=R0*4;
system in the generally busy state L2: when the cache hit rate is low, R1=R0*2*2; when the cache hit rate is high, R1=R0*2;
system in the busy state L3: the data pre-reading function is disabled, redundant processes are cleaned up, and memory space is released;
where the base unit of the pre-read window is defined as one record in a data table, denoted R, and the data volume corresponding to the user-requested data is denoted R0.
Beneficial effects: by introducing pre-reading and monitoring into the memory object caching system, the cache hit rate and the stability of the system are effectively improved, system resources are fully utilized, many unnecessary system I/O operations and other resource waste are avoided, and the memory object caching system becomes more intelligent and more responsive in real time, with cache updates becoming more proactive.
Brief description of the drawings
Fig. 1 is the technical-route flow chart of the data pre-reading method based on a memory object caching system;
Fig. 2 shows the implementation of the pre-reading function in the data pre-reading method based on a memory object caching system.
Specific embodiment
A data pre-reading method based on a memory object caching system according to the invention is described in detail below with reference to the accompanying drawings.
A data pre-reading method based on a memory object caching system: when the back-end database serves the access request by which a user requests a piece of data for the first time, the method extracts the characteristics of this data access and, according to those characteristics, extracts the associated data; the associated data is returned to the front-end memory object caching system together with the user-requested data in a single pass. More concretely, when the back-end database first returns data A in response to a user's access request, the system identifies the characteristics of this data access and associates with it other data that has not yet been accessed (labeled B here); B is returned to the front-end memory object caching system together with A in a single pass. Later, when a user requests B for the first time, B can be found directly in the memory object caching system without going through the back-end lookup process, improving the performance of the database system. In other words, where the extra overhead is acceptable, a read-ahead algorithm is devised for the memory-object-based caching system and applied to its current running environment, so as to raise the throughput of the whole database system ("whole" meaning front end plus back end).
As shown in Fig. 1, when a user accesses a piece of data in the database for the first time, the pre-reading algorithm associates other not-yet-accessed data with it, and that data is returned together with the accessed data in a single pass. Later, when a user first accesses data that has been pre-read, it hits directly in the cache without the back-end lookup process; this improves cache performance and reduces inessential I/O. On the other hand, the database must be pre-read dynamically: the strategy decides which data, and how much of it, is added to the cache, so that the memory object caching system reaches a favorable state.
When user1 accesses the system and sends request A, the requested data is not in the memory object caching system, so the request penetrates it. The performance monitoring system detects that system performance is excellent, so after request A obtains the required data A from the database, the system analyzes that data, returns B, C, and D, which are associated with data A, to the memory object caching system together with A, and sends the requested data A to user1.
When user2 accesses the system and sends request B, the requested data B is already in the memory object caching system, so the system returns it to user2 directly from the cache. When user3 accesses the system and sends request E, the requested data is not in the cache, so the request penetrates the memory object caching system and asks the background database for the data. However, since current system performance is poor, the system disables pre-reading, simply adds the requested data E to the cache, and returns it to user3; meanwhile, the monitoring system cleans up redundant processes, releases memory space, and removes unnecessary I/O operations to optimize the system.
When user4 accesses the system, system performance is still poor, so the system only fetches the requested data from the database, then updates the cache and answers user4's request.
As can be seen from these request flows, the memory object caching system, equipped with pre-reading and monitoring, becomes more intelligent and more proactive. The addition of pre-reading and monitoring makes the whole system more stable and flexible.
Implementation of the pre-reading method: as shown in Fig. 2, first, after a user sends a data request, the system judges whether the requested data is in the cache; if it hits in the cache, the system returns the data to the user directly from the memory object caching system and ends this access operation;
if the data does not hit in the memory object caching system, the monitoring system decides, according to the CPU and memory usage of the current system and the current I/O situation, whether to enable the pre-reading function; if current system performance is poor, the monitoring system disables the system's pre-reading function, accesses the background database directly, adds the user-requested data to the memory object caching system, and then returns the data to the user;
if the system has pre-reading enabled, it first determines the size of the pre-read window according to the current running state of the system, the cache hit rate, and the volume of the user-requested data; the system then fetches the user-requested data from the database and adds it to the to-be-cached queue; next, the system judges whether the volume of data added to the queue is smaller than the pre-read window; if the data in the to-be-cached queue already exceeds the pre-read window, the system directly adds the queued data to the cache and returns the user-requested data.
If the data in the queue is smaller than the current pre-read window, the system judges whether the newest data in the to-be-cached queue is associated with data of other tables in the database; if an association exists, the system adds the associated data to the to-be-cached queue. If after adding the associated data the queue is still smaller than the pre-read window, the system continues looking for associated data, until either the data in the queue exceeds the pre-read window or the queued data has no more associated data; once the data in the queue exceeds the pre-read window, the system adds the data in the to-be-cached queue to the memory object caching system and returns the data the user needs. If the queued data no longer has any associations but is still smaller than the pre-read window, the system fetches the newest N records of the table containing the newest queued data and adds them to the to-be-cached queue; if those newest N records are already in the to-be-cached queue, it continues by fetching the next-newest records and adding them to the queue. The loop repeats until the data in the to-be-cached queue exceeds the pre-read window (a sketch of this filling loop follows).
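A minimal sketch of this filling loop; `db.associated_rows` and `db.newest_rows` are hypothetical helpers standing in for the association lookup and the newest-N fetch described above:

```python
def fill_queue(queue, window, db, n=10):
    """Grow the to-be-cached queue until its size exceeds the pre-read window."""
    seen = {k for k, _ in queue}
    offset = 0
    while len(queue) <= window:
        newest_key, _ = queue[-1]               # newest entry in the queue
        rows = [(k, v) for k, v in db.associated_rows(newest_key) if k not in seen]
        if not rows:                            # no unseen associated data left:
            while True:                         # fall back to the newest-N records
                batch = db.newest_rows(newest_key, n, offset)
                offset += n                     # next round: the next-newest batch
                if not batch:
                    break                       # table exhausted
                rows = [(k, v) for k, v in batch if k not in seen]
                if rows:
                    break                       # found next-newest unseen records
        if not rows:
            return queue                        # nothing left to add; stop early
        for k, v in rows:
            seen.add(k)
            queue.append((k, v))
    return queue
```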
Here, the association relations between data are fully exploited: as much data associated with the current data as possible is put into the cache, because associated data has the greatest probability of being accessed next.
Similar to microblogs and WeChat Moments, a user's most recently posted messages are the most likely to be viewed or modified in the near future, and the operations the current user performs are very likely to be performed by other users soon after. The data pre-reading algorithm proposed by the present invention therefore takes full advantage of the temporal locality principle of data and user operations (temporal locality: if an item of information is accessed, it is likely to be accessed again in the near future), and actively fetches the newest data of the table corresponding to the user-requested data and adds it to the cache (a sketch of such a query follows).
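For example, the newest-N fetch can take the following shape, assuming (hypothetically) that the requested data lives in a table `messages` whose auto-increment primary key makes larger ids newer:

```python
import sqlite3

def newest_rows(conn: sqlite3.Connection, n: int, offset: int):
    # newest records first; OFFSET skips batches that are already queued
    cur = conn.execute(
        "SELECT id, body FROM messages ORDER BY id DESC LIMIT ? OFFSET ?",
        (n, offset),
    )
    return cur.fetchall()
```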
Meanwhile, data newly added to the cache is given a default validity period of 6 hours; if the data is not accessed within 6 hours, the memory object caching system automatically destroys it and releases the corresponding memory space. The purpose of this is to save space and to compensate for prediction errors of the data prediction algorithm.
When running the pre-reading algorithm, the network load of the front-end and back-end nodes must be taken into account.
Priority is given here to the cache hit rate of the memory object caching system and the running condition of the system; the purpose of monitoring is to achieve the highest possible cache hit rate under the current running state. Therefore, whenever a user request penetrates the cache, the system must immediately determine the CPU, memory, and I/O usage, the usage of the current network, and the cache hit rate, and use these data, together with the characteristics of the data in the user's request, to determine the size of the current pre-read window; the pre-read window size therefore differs from one request to the next.
Through repeated experimental tests, the present invention defines three performance levels of the system, L1, L2, and L3, according to its resource usage, as follows:
where:
user%: the percentage of time the CPU spends in user mode;
sys%: the percentage of time the CPU spends in system mode;
free%: the percentage of total system memory that is free;
IOPS: the number of read/write (I/O) operations performed per second; each disk device has a maximum;
IO%: the current number of read/write (I/O) operations per second as a percentage of the maximum IOPS;
bandwidth%: the percentage of total bandwidth used by the current system;
T: the network delay (a sketch of collecting these metrics follows this list).
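These metrics map naturally onto the psutil library; a sketch under stated assumptions (the maximum IOPS and the total bandwidth are device-specific and passed in as parameters, and the network delay T would come from a separate probe such as ping):

```python
import time
import psutil

def sample_metrics(max_iops, total_bandwidth_bps, interval=1.0):
    cpu = psutil.cpu_times_percent(interval=interval)   # blocks for `interval` s
    mem = psutil.virtual_memory()
    io0, net0 = psutil.disk_io_counters(), psutil.net_io_counters()
    time.sleep(interval)                                # diff the counters
    io1, net1 = psutil.disk_io_counters(), psutil.net_io_counters()

    iops = (io1.read_count + io1.write_count
            - io0.read_count - io0.write_count) / interval
    bps = (net1.bytes_sent + net1.bytes_recv
           - net0.bytes_sent - net0.bytes_recv) * 8 / interval
    return {
        "user%": cpu.user,                              # CPU time in user mode
        "sys%": cpu.system,                             # CPU time in system mode
        "free%": 100 * mem.available / mem.total,       # free memory vs. total
        "IO%": 100 * iops / max_iops,                   # IOPS vs. device maximum
        "bandwidth%": 100 * bps / total_bandwidth_bps,  # used vs. total bandwidth
    }
```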
The base unit of the pre-read window is defined as one record in a data table, denoted R. For example, if a user request involves 10 records of a data table in total, then the data volume of this request is 10R. The data volume corresponding to the user-requested data is denoted R0.
The data pre-read window size is defined in this patent as follows:
when the system is at L1, the pre-read window is 4 times the user-requested data volume; if the hit rate is low (below 70%), the pre-read window is doubled again, i.e. the pre-read window is R1 = R0 * 4 * 2; if the hit rate is high (above 70%), the pre-read window is R1 = R0 * 4;
when system performance is moderate, i.e. the system is in the L2 state, the pre-read window is 2 times the user-requested data volume; if the hit rate is low (below 70%), the pre-read window is doubled again, i.e. the pre-read window is R1 = R0 * 2 * 2; if the hit rate is high (above 70%), the pre-read window is R1 = R0 * 2;
when system performance is extremely poor, i.e. the system is in the L3 state, the system disables the pre-reading function and directly returns the user-requested data.
In summary, the pre-read window size is computed as follows:
in the L1 state:
when the cache hit rate is low: R1=R0*4*2;
when the cache hit rate is high: R1=R0*4;
in the L2 state:
when the cache hit rate is low: R1=R0*2*2;
when the cache hit rate is high: R1=R0*2;
in the L3 state: the data pre-reading function is disabled. In this state, the monitoring system actively cleans up redundant processes, releases memory space, removes unnecessary I/O operations to optimize the system, and frees some network resources, so that the conditions for re-enabling the pre-reading function can be reached (a sketch of the window computation follows).
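A minimal sketch of the window computation, using the 70% hit-rate threshold given above and returning 0 records to signify that pre-reading is disabled at L3:

```python
def window_size(level, hit_rate, r0):
    """Pre-read window in records; r0 is the record count of the user's request."""
    if level == "L3":
        return 0                         # pre-reading disabled under heavy load
    factor = 4 if level == "L1" else 2   # L1 pre-reads more aggressively than L2
    if hit_rate < 0.70:                  # low hit rate: double the window again
        factor *= 2
    return r0 * factor
```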
It can be seen that the data used to fill the memory object caching system depends strongly on the current running state of the system, the cache hit rate, and the volume of data in the user's request; the data that fills the memory object caching system is therefore highly dynamic.
The data pre-reading method for a memory-object-based caching system is applicable to scenarios that combine a traditional database with an in-memory database system, and focuses on how, after the data pre-reading algorithm is introduced, the memory object caching system can reach a high cache hit rate while keeping system performance good. The embodiments of the invention can be fully applied in real systems, and have good prospects and practicality.
The embodiments described above express only several implementations of the invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the invention, and these all fall within the protection scope of the invention. Therefore, the protection scope of this patent shall be determined by the appended claims.

Claims (6)

1. A data pre-reading method based on a memory object caching system, characterized in that the pre-reading method is:
when the back-end database serves the access request by which a user requests a piece of data for the first time, the characteristics of this data access are extracted, and the associated data is extracted according to those characteristics; the associated data is returned to the front-end memory object caching system together with the user-requested data in a single pass.
2. The data pre-reading method based on a memory object caching system according to claim 1, characterized in that the validity period of the associated data added to the caching system is 6 hours; if the data is not accessed within 6 hours, the memory object caching system automatically destroys the data and releases the corresponding memory space.
3. The data pre-reading method based on a memory object caching system according to claim 1, characterized in that the pre-reading method is implemented by the following steps:
S1: after a user sends a data request, the system first judges whether the requested data is in the memory object caching system; if it hits in the memory object caching system, the system returns the data to the user directly from the memory object caching system and ends this access;
S2: if the requested data does not hit in the memory object caching system, the monitoring system judges, according to the current system performance, whether to enable the pre-reading function; if the current system performance is poor, the monitoring system disables pre-reading, accesses the back-end database directly, adds the user-requested data to the memory object caching system, and then returns that data to the user;
S3: if the current system performance is good, the pre-reading function is enabled; the system then determines the size of the pre-read window according to the current running state of the system, the cache hit rate, and the volume of the user-requested data; the system fetches the user-requested data from the database and adds it to the to-be-cached queue; next, the system judges whether the volume of data added to the queue is smaller than the pre-read window;
S31: if the data in the queue is smaller than the current pre-read window, the system judges whether the newest data in the to-be-cached queue is associated with data of other tables in the database; if an association exists, the system adds the associated data to the to-be-cached queue, until either the data in the queue exceeds the pre-read window or the queued data has no further associated data; once the data in the queue exceeds the pre-read window, jump to S32; if the queued data no longer has any associations but is still smaller than the pre-read window, the system fetches the newest N records of the table containing the newest queued data and adds them to the to-be-cached queue; and if the newest N records already exist in the to-be-cached queue, it continues by fetching the next-newest records and adding them to the to-be-cached queue;
S32: if the data in the to-be-cached queue exceeds the pre-read window, the system directly adds the data in the to-be-cached queue to the cache and returns the user-requested data.
4. The data pre-reading method based on a memory object caching system according to claim 3, characterized in that the judgment of the current system performance includes classifying the system into three levels: an idle state L1, a generally busy state L2, and a busy state L3.
5. The data pre-reading method based on a memory object caching system according to claim 4, characterized in that the idle state L1 may be defined as: the sum of the percentage of CPU time spent in user mode and the percentage spent in system mode is below 70 percent; used memory as a percentage of total system memory is at most 60 percent; the number of read/write (I/O) operations performed per second, relative to the maximum read/write rate, is at most 60 percent; the bandwidth in use on the current network, as a percentage of total bandwidth, is at most 60 percent; and the network delay is below 50 ms; the busy state L3 is defined as: the sum of the percentage of CPU time spent in user mode and the percentage spent in system mode is not less than 85 percent; used memory as a percentage of total system memory is not less than 80 percent; the number of read/write (I/O) operations performed per second, relative to the maximum read/write rate, is at least 80 percent; the bandwidth in use on the current network, as a percentage of total bandwidth, is not less than 80 percent; and the network delay exceeds 100 ms; and any state between the idle state L1 and the busy state L3 is defined as the generally busy state L2.
6. The data pre-reading method based on a memory object caching system according to claim 4, characterized in that the pre-read window size is determined by the following conditions:
system in the idle state L1: when the cache hit rate is low, R1=R0*4*2; when the cache hit rate is high, R1=R0*4;
system in the generally busy state L2: when the cache hit rate is low, R1=R0*2*2; when the cache hit rate is high, R1=R0*2;
system in the busy state L3: the data pre-reading function is disabled, redundant processes are cleaned up, and memory space is released;
where the base unit of the pre-read window is defined as one record in a data table, denoted R, and the data volume corresponding to the user-requested data is denoted R0.
CN201710077397.8A 2017-02-14 2017-02-14 Data pre-reading method based on memory object cache system Active CN106844740B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710077397.8A CN106844740B (en) 2017-02-14 2017-02-14 Data pre-reading method based on memory object cache system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710077397.8A CN106844740B (en) 2017-02-14 2017-02-14 Data pre-reading method based on memory object cache system

Publications (2)

Publication Number Publication Date
CN106844740A (en) 2017-06-13
CN106844740B (en) 2020-12-29

Family

ID=59128199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710077397.8A Active CN106844740B (en) 2017-02-14 2017-02-14 Data pre-reading method based on memory object cache system

Country Status (1)

Country Link
CN (1) CN106844740B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102508638A (en) * 2011-09-27 2012-06-20 华为技术有限公司 Data pre-fetching method and device for non-uniform memory access
WO2013066010A1 (en) * 2011-10-31 2013-05-10 에스케이씨앤씨 주식회사 Method for pre-loading in memory and method for parallel processing for high-volume batch processing
CN103902260A (en) * 2012-12-25 2014-07-02 华中科技大学 Pre-fetch method of object file system
CN103257935A (en) * 2013-04-19 2013-08-21 华中科技大学 Cache management method and application thereof
CN103399856A (en) * 2013-07-01 2013-11-20 北京科东电力控制系统有限责任公司 Explosive type data caching and processing system for SCADA system and method thereof
CN103729471A (en) * 2014-01-21 2014-04-16 华为软件技术有限公司 Method and device for database query
CN104657143A (en) * 2015-02-12 2015-05-27 中復保有限公司 High-performance data caching method
CN104715048A (en) * 2015-03-26 2015-06-17 浪潮集团有限公司 File system caching and pre-reading method
CN104731974A (en) * 2015-04-13 2015-06-24 上海新炬网络信息技术有限公司 Dynamic page loading method based on big data stream type calculation
CN104881467A (en) * 2015-05-26 2015-09-02 上海交通大学 Data correlation analysis and pre-reading method based on frequent item set
CN105279240A (en) * 2015-09-28 2016-01-27 暨南大学 Client origin information associative perception based metadata pre-acquisition method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
김종찬 et al., "A Novel Method of Improving Cache Hit-rate in Hadoop MapReduce using SSD Cache", Journal of the Korea Society of Computer and Information *
杨洪章 et al., "Research on a data pre-reading mechanism between small files based on pNFS", Journal of Computer Research and Development *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107329908A (en) * 2017-07-07 2017-11-07 联想(北京)有限公司 A kind of data processing method and electronic equipment
CN107277159A (en) * 2017-07-10 2017-10-20 东南大学 A kind of super-intensive network small station caching method based on machine learning
CN107277159B (en) * 2017-07-10 2020-05-08 东南大学 Ultra-dense network small station caching method based on machine learning
CN110324366A (en) * 2018-03-28 2019-10-11 阿里巴巴集团控股有限公司 Data processing method, apparatus and system
CN110324366B (en) * 2018-03-28 2022-07-29 阿里巴巴集团控股有限公司 Data processing method, device and system
US11344818B2 (en) 2018-10-04 2022-05-31 Acer Incorporated Computer system, game loading method thereof and computer readable storage medium
CN111104528A (en) * 2018-10-29 2020-05-05 浙江宇视科技有限公司 Picture obtaining method and device and client
CN111104528B (en) * 2018-10-29 2023-05-16 浙江宇视科技有限公司 Picture acquisition method and device and client
WO2021042594A1 (en) * 2019-09-03 2021-03-11 浪潮电子信息产业股份有限公司 Method and apparatus for data caching
US11803475B2 (en) 2019-09-03 2023-10-31 Inspur Electronic Information Industry Co., Ltd. Method and apparatus for data caching
CN111258967A (en) * 2020-02-11 2020-06-09 西安奥卡云数据科技有限公司 Data reading method and device in file system and computer readable storage medium
CN111399784B (en) * 2020-06-03 2020-10-16 广东睿江云计算股份有限公司 Pre-reading and pre-writing method and device for distributed storage
CN111399784A (en) * 2020-06-03 2020-07-10 广东睿江云计算股份有限公司 Pre-reading and pre-writing method and device for distributed storage
CN111782391A (en) * 2020-06-29 2020-10-16 北京达佳互联信息技术有限公司 Resource allocation method, device, electronic equipment and storage medium
CN115827508B (en) * 2023-01-09 2023-05-09 苏州浪潮智能科技有限公司 Data processing method, system, equipment and storage medium
CN115827508A (en) * 2023-01-09 2023-03-21 苏州浪潮智能科技有限公司 Data processing method, system, equipment and storage medium

Also Published As

Publication number Publication date
CN106844740B (en) 2020-12-29

Similar Documents

Publication Publication Date Title
CN106844740A (en) Data pre-head method based on memory object caching system
CN102843396B (en) Data write-in and read method and device in a kind of distributed cache system
CN105872040B (en) A method of write performance is stored using gateway node cache optimization distributed block
CN107632784A (en) The caching method of a kind of storage medium and distributed memory system, device and equipment
US10649903B2 (en) Modifying provisioned throughput capacity for data stores according to cache performance
CN101789976B (en) Embedded network storage system and method thereof
CN103853766B (en) A kind of on-line processing method and system towards stream data
CN109766312A (en) A kind of block chain storage method, system, device and computer readable storage medium
CN106528451B (en) The cloud storage frame and construction method prefetched for the L2 cache of small documents
CN108920616A (en) A kind of metadata access performance optimization method, system, device and storage medium
CN108427537A (en) Distributed memory system and its file write-in optimization method, client process method
CN107888687B (en) Proxy client storage acceleration method and system based on distributed storage system
US20190004968A1 (en) Cache management method, storage system and computer program product
CN102279810A (en) Network storage server and method for caching data
CN104657461A (en) File system metadata search caching method based on internal memory and SSD (Solid State Disk) collaboration
CN107133369A (en) A kind of distributed reading shared buffer memory aging method based on the expired keys of redis
CN106201348A (en) The buffer memory management method of non-volatile memory device and device
CN102104494B (en) Metadata server, out-of-band network file system and processing method of system
EP3588913B1 (en) Data caching method, apparatus and computer readable medium
CN108614847A (en) A kind of caching method and system of data
CN107180118A (en) A kind of file system cache data managing method and device
CN101853218B (en) Method and system for reading redundant array of inexpensive disks (RAID)
CN104778132A (en) Multi-core processor directory cache replacement method
CN106201918A (en) A kind of method and system quickly discharged based on big data quantity and extensive caching
CN111506517B (en) Flash memory page level address mapping method and system based on access locality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant