CN107451144A - Cache read method and device - Google Patents

Cache read method and device

Info

Publication number
CN107451144A
CN107451144A (application CN201610374753.8A)
Authority
CN
China
Prior art keywords
cache
validity flag
cache value
caching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610374753.8A
Other languages
Chinese (zh)
Other versions
CN107451144B (en)
Inventor
张帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201610374753.8A priority Critical patent/CN107451144B/en
Publication of CN107451144A publication Critical patent/CN107451144A/en
Application granted granted Critical
Publication of CN107451144B publication Critical patent/CN107451144B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A cache reading method is provided, comprising: reading a cache validity flag and a cache value from the cache; if the validity flag is invalid, locking on the flag and reading it again; if the flag is still invalid, setting the flag to valid, setting the flag's expiry time, and returning the cache value; and starting an asynchronous thread that reads the database to refresh the cache value in the cache, then setting the expiry time of the refreshed cache value, wherein the expiry time of the cache value is greater than the expiry time of the validity flag. Compared with the traditional lock-and-queue approach, the invention improves the throughput of the system and the hit rate of the cache.

Description

Cache read method and device
Technical field
The present invention relates to the field of computer technology, and in particular to a cache reading method and device for handling cache avalanche under high concurrency.
Background technology
In the computer field, caching generally refers to the technique of staging front-end request results in a caching system (e.g. redis or memcache) in order to relieve the bottleneck between database servers and web servers, improve access performance, and shield the database from load.
When an old cache entry has expired and the new one has not yet taken effect, all requests fall through to the database, putting enormous CPU and memory pressure on it while the web front end blocks. This is known as cache avalanche. The impact of the avalanche effect on the underlying system when the cache expires can be severe, and unfortunately there is as yet no perfect solution. Most system designers rely on a locking single thread (or process), or on a queue, to serialize cache rebuilds, so that the flood of concurrent requests during expiry does not land on the underlying storage system.
The content of the invention
According to an exemplary embodiment of the invention, a cache reading method is provided, comprising: reading a cache validity flag and a cache value from the cache; if the validity flag is invalid, locking on the flag and reading it again; if the flag is still invalid, setting the flag to valid, setting the flag's expiry time, and returning the cache value; and starting an asynchronous thread that reads the database to refresh the cache value in the cache, then setting the expiry time of the refreshed cache value, wherein the expiry time of the cache value is greater than the expiry time of the validity flag.
Preferably, if the validity flag read is valid, the cache value is returned directly. Preferably, locking on the validity flag comprises taking a string lock on the flag. Preferably, the expiry time of the cache value is n times that of the validity flag, n being an integer greater than 1.
The invention also provides a cache reading device, comprising: a first reading unit, configured to read a cache validity flag and a cache value; a second reading unit, configured to lock on the validity flag and read it again if the flag is invalid; a validity flag setting unit, configured to set the flag to valid and set its expiry time if the flag is still invalid; and an asynchronous refresh unit, configured to start an asynchronous thread that reads the database to refresh the cache value in the cache and then set the expiry time of the refreshed cache value, wherein the expiry time of the cache value is greater than the expiry time of the validity flag.
Preferably, the second reading unit is further configured to return the cache value directly if the validity flag read is valid. Preferably, the second reading unit is further configured to take a string lock on the validity flag. Preferably, the expiry time of the cache value is n times that of the validity flag, n being an integer greater than 1.
Compared with the traditional lock-and-queue approach, the invention improves the throughput of the system and the hit rate of the cache.
Brief description of the drawings
The accompanying drawings aid understanding of the invention and do not limit it:
Fig. 1 shows a typical prior-art locking approach.
Fig. 2 shows the core business flow of the invention.
Fig. 3 shows a cache reading method according to an embodiment of the invention.
Fig. 4 shows a cache reading device according to an embodiment of the invention.
Embodiment
Exemplary embodiments of the invention are explained below with reference to the drawings, including various details that aid understanding; these should be regarded as merely exemplary. Those skilled in the art will therefore appreciate that various modifications and changes can be made to the embodiments described herein without departing from the scope and spirit of the invention. For clarity and conciseness, descriptions of well-known functions and structures are omitted in what follows.
Fig. 1 shows a typical prior-art locking approach. The front end issues a request to read the cache value CacheValue from redis. If CacheValue is empty, the thread queues on a lock and then reads the cache value from redis again. If the value is still missing, the data is read from the database and written back to redis.
This scheme relieves the pressure on the database only through locking and queuing; while queued, each thread must wait its turn to query the database, so the throughput of the system is not actually improved.
To address the scenario in which cache avalanche occurs, the invention keeps two pieces of data: the cache value and a cache validity flag. The expiry time of the cache value is set to n times the expiry time of the flag. When the flag has expired, reading the timed-out data is still permitted while an asynchronous refresh is started, which raises both the hit rate and the throughput of the cache under an avalanche scenario. The complete technical scheme of the invention is described in detail below.
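The two-key design can be sketched as follows, assuming a redis-like store with per-key expiry. The in-process dict, the `sign:`/`value:` key prefixes, and the TTL constants are illustrative choices for the sketch, not values taken from the patent.

```python
import time

# In-process stand-in for a redis-style store with per-key expiry.
_store = {}  # key -> (value, expire_at)

def cache_set(key, value, ttl):
    _store[key] = (value, time.time() + ttl)

def cache_get(key):
    entry = _store.get(key)
    if entry is None:
        return None
    value, expire_at = entry
    return value if time.time() < expire_at else None  # expired reads as missing

SIGN_TTL = 60              # lifetime of the validity flag (TIME X 1)
N = 3                      # n > 1
VALUE_TTL = SIGN_TTL * N   # the cache value outlives its flag by a factor of n

def write_pair(key, value):
    # The value key keeps a longer TTL than the sign key, so after the sign
    # expires the (stale) value can still be served while a refresh runs.
    cache_set("sign:" + key, "1", SIGN_TTL)
    cache_set("value:" + key, value, VALUE_TTL)
```

The window between the flag expiring and the value expiring is what lets readers keep getting timed-out data instead of blocking.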
Fig. 2 shows the core business flow of the invention:
1) The front end initiates a request.
2) Read the cache validity flag CacheSign, which controls whether currently valid cache data exists.
3) Read the cache value CacheValue, the value actually stored in redis.
4) If the validity flag CacheSign exists, the current cache value CacheValue is valid and is returned directly. Otherwise go to step 5.
5) Following step 4, if the validity flag does not exist (in other words the flag is invalid), a string lock must be taken on the flag, and the flag is then read again. Under multithreading, a new validity flag may by now have been generated by another thread, so a second read is needed once the locked section is entered.
It should be noted that the business flow of the invention may be executed by multiple concurrent threads. Locking on the validity flag CacheSign means that once the flag is locked, subsequent threads enter a queued state; that is, they do not execute the operations in the boxed region of Fig. 2. The first thread, however, finds the flag unlocked and proceeds with the boxed operations. After the first thread performs the subsequent steps (steps 6-8) and generates a new validity flag, that new flag is read again by the queued threads (in step 5) or read first by newly arriving threads (in step 4); both classes of thread thus see a valid flag and return the cache value directly.
6) If a valid validity flag now exists, another thread has already set a new flag, and the cache value can be returned directly. Otherwise go to step 7. The value returned in this step may be a newly refreshed cache value, or it may be a timed-out one.
7) Following step 6, if the validity flag is still absent, set the validity flag in the cache with a timeout of TIME X 1. The lifetime set here for the flag is the effective lifetime of the cached data, and it must be less than the expiry time of the cache value. Setting the flag here ensures that newly arriving threads do not queue but return data directly in step 4.
8) Start a thread that asynchronously reads the database DB and refreshes the cache value, setting its timeout to TIME X n (n > 1). The expiry time of the cache value must be greater than that of the validity flag: this ensures that after the flag indicates the cache has expired, and before the asynchronous DB read has refreshed the value, the timed-out cache value can still be read, safeguarding the throughput of the system.
9) Return the cache value. Reaching step 9 means the current request is the first one after cache expiry: it is responsible for setting the validity flag and starting the asynchronous refresh of the cache value, and it returns the timed-out cache value.
Fig. 3 shows a cache reading method 300 according to an embodiment of the invention. Method 300 may be executed by one of several concurrent threads. In step 301, a cache validity flag and a cache value are read from the cache. In step 302, the validity flag is checked: if valid, the method advances to step 307, returns the cache value directly, and ends. If the flag is invalid, then in step 303 a lock is taken on the flag (for example a string lock) and the flag is read again. In step 304, the re-read flag is checked (under multithreading a new flag may have been generated by another thread): if valid, the method advances to step 307, returns the cache value directly, and ends. If the flag is still invalid, then in step 305 the flag is set to valid, the flag's expiry time is set, and the cache value is returned. Next, in step 306, an asynchronous thread is started that reads the database to refresh the cache value in the cache and sets the expiry time of the refreshed value. It should be noted that the expiry time of the cache value is greater than that of the validity flag; for example, it may be n times the flag's expiry time, n being an integer greater than 1.
Fig. 4 shows a cache reading device 400 according to an embodiment of the invention. The cache reading device 400 comprises a first reading unit 401, a second reading unit 402, a validity flag setting unit 403, and an asynchronous refresh unit 404.
The first reading unit 401 is configured to read the cache validity flag and the cache value. The second reading unit 402 is configured to lock on the validity flag and read it again if the flag is invalid. The validity flag setting unit 403 is configured to set the flag to valid and set its expiry time if the flag is still invalid. The asynchronous refresh unit 404 is configured to start an asynchronous thread that reads the database to refresh the cache value in the cache and then set the expiry time of the refreshed cache value, wherein the expiry time of the cache value is greater than the expiry time of the validity flag.
Preferably, the second reading unit 402 may be further configured to return the cache value directly if the validity flag read is valid.
Preferably, the second reading unit 402 may be further configured to take a string lock on the validity flag. Preferably, the expiry time of the cache value is n times that of the validity flag, n being an integer greater than 1.
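The four units of device 400 can be sketched as methods of a single class. This mapping, the dict-based store, and every name below are illustrative choices for the sketch, not structures from the patent.

```python
import threading
import time

class CacheReadingDevice:
    """Illustrative mapping of device 400's units onto one class; the dict
    stands in for redis and db_load is a placeholder database query."""

    def __init__(self, db_load, sign_ttl=60, n=3):
        self._store = {}                  # key -> (value, expire_at)
        self._lock = threading.Lock()
        self._db_load = db_load
        self._sign_ttl = sign_ttl
        self._value_ttl = sign_ttl * n    # value TTL is n times the flag TTL

    def _get(self, key):
        entry = self._store.get(key)
        return entry[0] if entry and time.time() < entry[1] else None

    def _set(self, key, value, ttl):
        self._store[key] = (value, time.time() + ttl)

    def first_read(self, key):            # first reading unit 401
        return self._get("sign:" + key), self._get("value:" + key)

    def read(self, key):
        sign, value = self.first_read(key)
        if sign is not None:
            return value                  # flag valid: return directly
        with self._lock:                  # second reading unit 402
            sign, value = self.first_read(key)
            if sign is not None:
                return value
            self._set("sign:" + key, "1", self._sign_ttl)   # flag setting unit 403
            threading.Thread(             # asynchronous refresh unit 404
                target=lambda: self._set("value:" + key,
                                         self._db_load(key), self._value_ttl),
                daemon=True).start()
            return value                  # possibly the timed-out value
```

A caller would construct the device with its own database loader, e.g. `CacheReadingDevice(load_from_db)`, and call `read(key)` from any number of threads.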
Compared with the traditional lock-and-queue approach, the invention improves the throughput of the system and the hit rate of the cache. Specifically, by designing a cache validity flag, the invention lets the front end keep reading the timed-out data while the cache is refreshed asynchronously during an avalanche, thereby avoiding lock blocking and raising throughput.
The detailed description above has set forth numerous embodiments of the cache reading method and device by means of schematic diagrams, flowcharts and/or examples. Where such diagrams, flowcharts or examples contain one or more functions and/or operations, those skilled in the art will understand that each such function and/or operation can be implemented, individually and/or jointly, by a wide range of structures, hardware, software, firmware, or virtually any combination thereof. In one embodiment, several portions of the subject matter described in embodiments of the invention may be implemented by application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), or other integrated forms. However, those skilled in the art will appreciate that some aspects of the embodiments disclosed herein may equally be implemented, in whole or in part, in integrated circuits; as one or more computer programs running on one or more computers (for example, as one or more programs running on one or more computer systems); as one or more programs running on one or more processors (for example, as one or more programs running on one or more microprocessors); as firmware; or as substantially any combination thereof. Those skilled in the art will, in light of this disclosure, be capable of designing the circuits and/or writing the software and/or firmware code. Furthermore, those skilled in the art will recognize that the mechanisms of the subject matter described in this disclosure can be distributed as program products in a variety of forms, and that the exemplary embodiments apply regardless of the particular type of signal-bearing medium actually used to carry out the distribution. Examples of signal-bearing media include, but are not limited to: recordable media, such as floppy disks, hard disk drives, compact discs (CDs), digital versatile discs (DVDs), digital tape and computer memory; and transmission media, such as digital and/or analog communication media (for example, fiber-optic cables, waveguides, wired communication links and wireless communication links).
Although the invention has been described with reference to several exemplary embodiments, it is to be understood that the terms used are terms of description and illustration rather than of limitation. Since the invention can be embodied in many forms without departing from its spirit or substance, it should be appreciated that the above embodiments are not limited to any of the foregoing details but should be construed broadly within the spirit and scope defined by the appended claims; all changes and modifications falling within the scope of the claims or their equivalents are therefore intended to be covered by the appended claims.
It should be noted that the foregoing are merely preferred embodiments and principles of the invention. Those skilled in the art will appreciate that the invention is not limited to the specific embodiments herein and that various evident changes, adjustments and substitutions can be made without departing from the scope of protection of the invention, which is defined by the following claims.

Claims (8)

1. A cache reading method, comprising:
reading a cache validity flag and a cache value from a cache;
if the validity flag is invalid, locking on the validity flag and reading it again;
if the validity flag is still invalid, setting the validity flag to valid, setting the validity flag's expiry time, and returning the cache value; and
starting an asynchronous thread that reads a database to refresh the cache value in the cache, then setting the expiry time of the refreshed cache value, wherein the expiry time of the cache value is greater than the expiry time of the validity flag.
2. The method according to claim 1, wherein if the validity flag read is valid, the cache value is returned directly.
3. The method according to claim 1, wherein locking on the validity flag comprises taking a string lock on the validity flag.
4. The method according to claim 1, wherein the expiry time of the cache value is n times that of the validity flag, n being an integer greater than 1.
5. A cache reading device, comprising:
a first reading unit, configured to read a cache validity flag and a cache value;
a second reading unit, configured to, if the validity flag is invalid, lock on the validity flag and read it again;
a validity flag setting unit, configured to, if the validity flag is still invalid, set the validity flag to valid and set its expiry time; and
an asynchronous refresh unit, configured to start an asynchronous thread that reads a database to refresh the cache value in the cache and then set the expiry time of the refreshed cache value, wherein the expiry time of the cache value is greater than the expiry time of the validity flag.
6. The device according to claim 5, wherein the second reading unit is further configured to return the cache value directly if the validity flag read is valid.
7. The device according to claim 5, wherein the second reading unit is further configured to take a string lock on the validity flag.
8. The device according to claim 5, wherein the expiry time of the cache value is n times that of the validity flag, n being an integer greater than 1.
CN201610374753.8A 2016-05-31 2016-05-31 Cache reading method and device Active CN107451144B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610374753.8A CN107451144B (en) 2016-05-31 2016-05-31 Cache reading method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610374753.8A CN107451144B (en) 2016-05-31 2016-05-31 Cache reading method and device

Publications (2)

Publication Number Publication Date
CN107451144A true CN107451144A (en) 2017-12-08
CN107451144B CN107451144B (en) 2019-12-31

Family

ID=60485755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610374753.8A Active CN107451144B (en) 2016-05-31 2016-05-31 Cache reading method and device

Country Status (1)

Country Link
CN (1) CN107451144B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108646980A (en) * 2018-04-27 2018-10-12 江苏华存电子科技有限公司 A method of efficiently using memory bandwidth
CN109491928A (en) * 2018-11-05 2019-03-19 深圳乐信软件技术有限公司 Buffer control method, device, terminal and storage medium
CN110764920A (en) * 2019-10-10 2020-02-07 北京美鲜科技有限公司 Cache breakdown prevention method and annotation component thereof
CN115080625A (en) * 2022-07-21 2022-09-20 成都薯片科技有限公司 Caching method, device and equipment based on Spring Cache framework and storage medium
CN116244216A (en) * 2023-03-17 2023-06-09 摩尔线程智能科技(北京)有限责任公司 Cache control method, device, cache line structure, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116634A (en) * 2012-06-12 2013-05-22 上海雷腾软件有限公司 System for supporting high concurrent cache task queue and asynchronous batch operation method thereof
CN105138587A (en) * 2015-07-31 2015-12-09 小米科技有限责任公司 Data access method, apparatus and system
CN105447092A (en) * 2015-11-09 2016-03-30 联动优势科技有限公司 Caching method and apparatus

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116634A (en) * 2012-06-12 2013-05-22 上海雷腾软件有限公司 System for supporting high concurrent cache task queue and asynchronous batch operation method thereof
CN105138587A (en) * 2015-07-31 2015-12-09 小米科技有限责任公司 Data access method, apparatus and system
CN105447092A (en) * 2015-11-09 2016-03-30 联动优势科技有限公司 Caching method and apparatus

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108646980A (en) * 2018-04-27 2018-10-12 江苏华存电子科技有限公司 A method of efficiently using memory bandwidth
CN109491928A (en) * 2018-11-05 2019-03-19 深圳乐信软件技术有限公司 Buffer control method, device, terminal and storage medium
CN109491928B (en) * 2018-11-05 2021-08-10 深圳乐信软件技术有限公司 Cache control method, device, terminal and storage medium
CN110764920A (en) * 2019-10-10 2020-02-07 北京美鲜科技有限公司 Cache breakdown prevention method and annotation component thereof
CN115080625A (en) * 2022-07-21 2022-09-20 成都薯片科技有限公司 Caching method, device and equipment based on Spring Cache framework and storage medium
CN116244216A (en) * 2023-03-17 2023-06-09 摩尔线程智能科技(北京)有限责任公司 Cache control method, device, cache line structure, electronic equipment and storage medium
CN116244216B (en) * 2023-03-17 2024-03-01 摩尔线程智能科技(北京)有限责任公司 Cache control method, device, cache line structure, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN107451144B (en) 2019-12-31

Similar Documents

Publication Publication Date Title
CN107451144A (en) Cache read method and device
US9805074B2 (en) Compressed representation of a transaction token
Pelley et al. Storage management in the NVRAM era
CN103688259B (en) For the method by compressing and file storage carries out automaticdata placement
Huang et al. Closing the Performance Gap Between Volatile and Persistent {Key-Value} Stores Using {Cross-Referencing} Logs
JP5577350B2 (en) Method and system for efficient data synchronization
US8706687B2 (en) Log driven storage controller with network persistent memory
CN107667364A (en) Use the atomic update of hardware transaction memory control index
CN104951240B (en) A kind of data processing method and processor
US9189512B2 (en) Device and method for acquiring resource lock
CN108874588A (en) A kind of database instance restoration methods and device
CN106575297A (en) High throughput data modifications using blind update operations
JP6192660B2 (en) Computer-implemented process, computer program product, and apparatus for managing a staging area
US8250111B2 (en) Automatic detection and correction of hot pages in a database system
US8200627B2 (en) Journaling database changes using a bit map for zones defined in each page
EP2352090A1 (en) System accessing shared data by a plurality of application servers
CN104317944B (en) A kind of timestamp dynamic adjustment concurrency control method based on formula
Ghandeharizadeh et al. Cache augmented database management systems
CN109947575A (en) Locking, method for releasing and the related system of Read-Write Locks
Levandoski et al. Indexing on modern hardware: Hekaton and beyond
Wang et al. RDMA-enabled concurrency control protocols for transactions in the cloud era
CN106469119A (en) A kind of data write buffer method based on NVDIMM and its device
CN106126878B (en) The coarse granule parallel method and system of electromagnetic functional material optimization design
JP2017529588A (en) Changeable time series for adaptation of randomly occurring event delays
CN106933491A (en) Method and device for managing data access

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant