CN107451144B - Cache reading method and device

Info

Publication number
CN107451144B
Authority
CN
China
Prior art keywords
cache
valid
value
identifier
read
Prior art date
Legal status
Active
Application number
CN201610374753.8A
Other languages
Chinese (zh)
Other versions
CN107451144A (en)
Inventor
张帅
Current Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Jingdong Shangke Information Technology Co Ltd
Priority to CN201610374753.8A
Publication of CN107451144A
Application granted
Publication of CN107451144B
Legal status: Active

Classifications

    • G — Physics
    • G06 — Computing; Calculating or Counting
    • G06F — Electric digital data processing
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 — Information retrieval of structured data, e.g. relational data
    • G06F 16/24 — Querying
    • G06F 16/245 — Query processing
    • G06F 16/2455 — Query execution
    • G06F 16/24552 — Database cache management

Abstract

Provided is a cache reading method, comprising: reading a cache valid identifier and a cache value from a cache; if the cache valid identifier is invalid, locking the cache valid identifier and reading it again; if it is still invalid, setting the cache valid identifier to valid, setting its validity time, and returning the cache value; and starting an asynchronous thread that reads the database to update the cache value in the cache and then sets the validity time of the updated cache value, wherein the validity time of the cache value is longer than that of the cache valid identifier. Compared with the traditional lock-and-queue approach, the invention improves system throughput and cache hit rate.

Description

Cache reading method and device
Technical Field
The present invention relates to the field of computer technology, and in particular to a cache reading method and apparatus for handling cache avalanche under high concurrency.
Background
Caching is widely used in the computer field. Here it refers in particular to a technique that alleviates the bottleneck between the database server and the web server: front-end request results are staticized into a cache system (e.g., redis or memcache) to improve access performance and relieve pressure on the database.
When an old cache entry expires and the new one is not yet in effect, all requests go directly to the database, putting enormous CPU and memory pressure on it and blocking the web front end. This is known as a cache avalanche. The impact of the avalanche effect on the underlying system when the cache fails can be severe, and unfortunately there is currently no perfect solution. Most system designers ensure single-threaded (single-process) cache writes by locking or queuing, thereby preventing a large number of concurrent requests from falling on the underlying storage system when the cache fails.
Disclosure of Invention
According to an exemplary embodiment of the present invention, there is provided a cache reading method, comprising: reading a cache valid identifier and a cache value from a cache; if the cache valid identifier is invalid, locking the cache valid identifier and reading it again; if it is still invalid, setting the cache valid identifier to valid, setting its validity time, and returning the cache value; and starting an asynchronous thread that reads the database to update the cache value in the cache and then sets the validity time of the updated cache value, wherein the validity time of the cache value is longer than that of the cache valid identifier.
Preferably, if the read cache valid identifier is valid, the cache value is returned directly. Preferably, locking the cache valid identifier comprises string-locking the cache valid identifier. Preferably, the validity time of the cache value is n times the validity time of the cache valid identifier, n being an integer greater than 1.
The present invention also provides a cache reading apparatus, comprising: a first reading unit configured to read a cache valid identifier and a cache value; a second reading unit configured to lock the cache valid identifier and read it again if it is invalid; a cache valid identifier setting unit configured to set the cache valid identifier to valid if it is still invalid, set its validity time, and return the cache value; and an asynchronous updating unit configured to start an asynchronous thread that reads the database to update the cache value in the cache and then sets the validity time of the updated cache value, wherein the validity time of the cache value is longer than that of the cache valid identifier.
Preferably, the second reading unit is further configured to directly return the cache value if the read cache valid identifier is valid. Preferably, the second reading unit is further configured to string-lock the cache valid identifier. Preferably, the validity time of the cache value is n times the validity time of the cache valid identifier, n being an integer greater than 1.
Compared with the traditional lock-and-queue approach, the invention improves system throughput and cache hit rate.
Drawings
The accompanying drawings are included to provide a further understanding of the invention, and are not to be construed as limiting the invention in any way, wherein:
fig. 1 illustrates a prior art generalized locking process.
Fig. 2 shows the core business process of the present invention.
FIG. 3 illustrates a cache read method according to an embodiment of the present invention.
Fig. 4 shows a cache read apparatus according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those skilled in the art will recognize that various modifications and changes may be made to the embodiments described herein without departing from the scope and spirit of the present invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
A generalized locking process of the prior art is shown in fig. 1. The front end sends a request to read the cache value CacheValue from redis; if CacheValue is empty, the thread enters a lock queue and reads the cache value from redis again. Next, data is read from the database and written to redis.
This approach relieves database pressure only by locking and queuing; it does not actually improve system throughput, since queued threads must wait sequentially for the database lookup to complete.
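The prior-art lock-and-queue read described above can be sketched as follows. This is an illustrative sketch, not code from the patent: the names (`cache`, `db`, `lock`, `read_with_lock_and_queue`) are assumptions, and a plain dict stands in for redis.

```python
import threading

cache = {}                  # stands in for redis
db = {"key": "db_value"}    # stands in for the database
lock = threading.Lock()

def read_with_lock_and_queue(key):
    """Prior-art style: on a miss, every thread queues on one lock,
    and the winner reads the database and repopulates the cache."""
    value = cache.get(key)
    if value is not None:
        return value
    with lock:                   # all missing threads serialize here
        value = cache.get(key)   # re-check: another thread may have filled it
        if value is None:
            value = db[key]      # synchronous DB read while others wait
            cache[key] = value
    return value
```

Every thread that misses serializes on the single lock until the database read finishes, which is exactly the throughput bottleneck the invention targets.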
Based on the scenario in which a cache avalanche occurs, two data items are designed: a cache value and a cache valid identifier. The validity time of the cache value is set to n times the validity time of the cache valid identifier. When the valid identifier expires, reading the timed-out data is still supported and an asynchronous update of the data is started, which improves the cache hit rate and throughput in a cache avalanche scenario. The complete technical solution of the present invention is described in detail below.
Fig. 2 shows the core business process of the present invention. Wherein:
1) the front end initiates a request.
2) The cache valid identifier CacheSign is read; this identifier controls whether valid cache data currently exists.
3) The cache value CacheValue is read; this is the cache value actually stored in redis.
4) If the cache valid identifier CacheSign exists, the current cache value CacheValue is valid and is returned directly. Otherwise, proceed to step 5.
5) Following step 4, if the cache valid identifier CacheSign does not exist (or is invalid), it is necessary to string-lock the cache valid identifier CacheSign and try to read it again. Under multithreading, a new cache valid identifier may already have been generated by another thread, so a second read is required after entering the locked code.
It should be noted that the business processes of the present invention may be performed by multiple concurrent threads. Locking the cache valid identifier CacheSign means that if CacheSign is already locked, subsequent threads enter a queuing state, i.e., they do not execute the operations shown in the block in fig. 2. The first thread, however, finds the cache valid identifier unlocked, so it continues with the operations shown in the block in fig. 2. Once the first thread performs the subsequent operations (steps 6-8) and a new cache valid identifier CacheSign has been generated, the new identifier can be read again by other threads that are queuing (at step 5) or read for the first time by newly arriving threads (at step 4), so both kinds of threads read a valid CacheSign and directly return the cache value.
6) If a valid cache valid identifier now exists, a new identifier has been set by another thread, and the cache value may be returned directly. Otherwise, proceed to step 7. The cache value returned in this step may be a newly, asynchronously updated value or one that has timed out.
7) Following step 6, if the cache valid identifier still does not exist, it is set into the cache with a timeout of TIME × 1. The validity time of the cache valid identifier set here is the actual validity time of the cached data, and it must be shorter than the validity time of the cache value. This setting ensures that newly arriving threads no longer queue and can return data directly at step 4.
8) A thread is started to asynchronously read the database DB and update the cache value, whose timeout is set to TIME × n (n > 1). The validity time of the cache value must be longer than that of the cache valid identifier, so that when the cache valid identifier indicates the cache is invalid, threads can still read the timed-out cache value before the asynchronous DB read updates it, guaranteeing system throughput.
9) The cache value is returned. Reaching step 9 indicates that the current request is the first one after the cache valid identifier expired: it is responsible for setting the cache valid identifier, asynchronously starting the cache value update, and returning the timed-out cache value in the current request.
FIG. 3 illustrates a cache reading method 300 according to an embodiment of the invention. The method 300 may be performed by one of several concurrent threads. The method 300 includes reading a cache valid identifier and a cache value from a cache at step 301. At step 302, it is determined whether the cache valid identifier is valid; if so, the process proceeds to step 307, where the cache value is returned directly, and ends. If the cache valid identifier is invalid, then at step 303 the cache valid identifier is locked (e.g., string-locked) and read again. At step 304, it is determined again whether the read cache valid identifier is valid (under multithreading, a new cache valid identifier may have been generated by another thread); if so, the process proceeds to step 307 to return the cache value directly and ends. If the cache valid identifier is still invalid, it is set to valid and its validity time is set, and the cache value is then returned at step 305. Next, at step 306, an asynchronous thread is started to read the database to update the cache value into the cache, and the validity time of the updated cache value is set. It should be noted that the validity time of the cache value is longer than that of the cache valid identifier; for example, it may be n times the validity time of the cache valid identifier, where n is an integer greater than 1.
Fig. 4 shows a cache reading apparatus 400 according to an embodiment of the invention. The cache reading apparatus 400 includes a first reading unit 401, a second reading unit 402, a cache valid identifier setting unit 403, and an asynchronous updating unit 404.
The first reading unit 401 is configured to read the cache valid identifier and the cache value. The second reading unit 402 is configured to lock the cache valid identifier and read it again if it is invalid. The cache valid identifier setting unit 403 is configured to set the cache valid identifier to valid if it is still invalid, set its validity time, and return the cache value. The asynchronous updating unit 404 is configured to start an asynchronous thread that reads the database to update the cache value into the cache and then sets the validity time of the updated cache value, wherein the validity time of the cache value is longer than that of the cache valid identifier.
Preferably, the second reading unit 402 may be further configured to directly return the cache value if the read cache valid identifier is valid.
Preferably, the second reading unit 402 may be further configured to string-lock the cache valid identifier. Preferably, the validity time of the cache value is n times the validity time of the cache valid identifier, n being an integer greater than 1.
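The preferred n-times relationship between the two validity times can be illustrated numerically. The names and concrete values below (`SIGN_TTL = 0.1`, `n = 3`) are assumptions chosen only to show the window in which the identifier has expired but the value has not.

```python
import time

SIGN_TTL = 0.1            # validity time of the cache valid identifier
n = 3                     # n > 1, per the preferred embodiment
VALUE_TTL = n * SIGN_TTL  # validity time of the cache value

start = time.time()
sign_expires = start + SIGN_TTL
value_expires = start + VALUE_TTL

time.sleep(0.15)          # past the identifier's validity, within the value's
now = time.time()
sign_valid = now < sign_expires    # the identifier has expired...
value_valid = now < value_expires  # ...but the timed-out value is still readable
```

It is exactly in this window that the first request sets a fresh identifier and returns the timed-out value while the asynchronous update runs.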
Compared with the traditional lock-and-queue approach, the invention improves system throughput and cache hit rate. Specifically, by introducing the cache valid identifier, the cache is updated asynchronously during an avalanche while the front end can still read timed-out data, which avoids lock blocking and improves throughput.
The foregoing detailed description has set forth numerous embodiments of cache reading methods and apparatus using schematics, flowcharts, and/or examples. Where such diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of structures, hardware, software, firmware, or virtually any combination thereof. In one embodiment, portions of the subject matter described by embodiments of the invention may be implemented by Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), Digital Signal Processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one skilled in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal bearing media used to actually carry out the distribution.
Examples of signal bearing media include, but are not limited to: recordable type media such as floppy disks, hard disk drives, Compact Disks (CDs), Digital Versatile Disks (DVDs), digital tape, computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
While the present invention has been described with reference to several exemplary embodiments, it is understood that the terminology used is intended to be in the nature of words of description and illustration, rather than of limitation. As the present invention may be embodied in several forms without departing from the spirit or essential characteristics thereof, it should also be understood that the above-described embodiments are not limited by any of the details of the foregoing description, but rather should be construed broadly within the spirit and scope defined in the appended claims; therefore all changes and modifications that fall within the metes and bounds of the claims, or equivalents of such metes and bounds, are intended to be embraced by the appended claims.
It is to be noted that the foregoing is only illustrative of the preferred embodiments and principles of the present invention. Those skilled in the art will appreciate that the present invention is not limited to the specific embodiments described herein. Numerous obvious variations, adaptations and substitutions will occur to those skilled in the art without departing from the scope of the invention. The scope of the invention is defined by the appended claims.

Claims (10)

1. A cache reading method, comprising:
reading a cache valid identifier and a cache value from a cache;
if the cache valid identifier is invalid, locking the cache valid identifier, and reading the cache valid identifier again;
if the cache valid identifier is still invalid, setting the cache valid identifier to be valid, setting the validity time of the cache valid identifier, and returning a cache value; and
starting an asynchronous thread to read the database to update the cache value into the cache, and then setting the validity time of the updated cache value, wherein the validity time of the cache value is longer than the validity time of the cache valid identifier.
2. The method of claim 1, wherein if the read cache valid identifier is valid, the cache value is returned directly.
3. The method of claim 1, wherein locking the cache valid identifier comprises string-locking the cache valid identifier.
4. The method of claim 1, wherein the validity time of the cache value is n times the validity time of the cache valid identifier, n being an integer greater than 1.
5. A cache reading apparatus, comprising:
a first reading unit configured to read a cache valid identifier and a cache value;
a second reading unit configured to lock the cache valid identifier and read the cache valid identifier again if the cache valid identifier is invalid;
a cache valid identifier setting unit configured to set the cache valid identifier to be valid if the cache valid identifier is still invalid, set the validity time of the cache valid identifier, and return a cache value; and
an asynchronous updating unit configured to start an asynchronous thread to read the database to update the cache value into the cache, and then set the validity time of the updated cache value, wherein the validity time of the cache value is longer than the validity time of the cache valid identifier.
6. The apparatus of claim 5, wherein the second reading unit is further configured to: if the read cache valid identifier is valid, directly return a cache value.
7. The apparatus of claim 5, wherein the second reading unit is further configured to string-lock the cache valid identifier.
8. The apparatus of claim 5, wherein the validity time of the cache value is n times the validity time of the cache valid identifier, n being an integer greater than 1.
9. A cache reading apparatus, comprising:
a memory; and
a processor coupled to the memory, the processor configured to perform the cache reading method of any of claims 1-4 based on instructions stored in the memory.
10. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the cache reading method of any one of claims 1 to 4.
CN201610374753.8A 2016-05-31 2016-05-31 Cache reading method and device Active CN107451144B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610374753.8A CN107451144B (en) 2016-05-31 2016-05-31 Cache reading method and device

Publications (2)

Publication Number Publication Date
CN107451144A CN107451144A (en) 2017-12-08
CN107451144B true CN107451144B (en) 2019-12-31

Family

ID=60485755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610374753.8A Active CN107451144B (en) 2016-05-31 2016-05-31 Cache reading method and device

Country Status (1)

Country Link
CN (1) CN107451144B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108646980A (en) * 2018-04-27 2018-10-12 江苏华存电子科技有限公司 A method of efficiently using memory bandwidth
CN109491928B (en) * 2018-11-05 2021-08-10 深圳乐信软件技术有限公司 Cache control method, device, terminal and storage medium
CN110764920A (en) * 2019-10-10 2020-02-07 北京美鲜科技有限公司 Cache breakdown prevention method and annotation component thereof
CN115080625B (en) * 2022-07-21 2022-11-04 成都薯片科技有限公司 Caching method, device and equipment based on Spring Cache framework and storage medium
CN116244216B (en) * 2023-03-17 2024-03-01 摩尔线程智能科技(北京)有限责任公司 Cache control method, device, cache line structure, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116634A (en) * 2012-06-12 2013-05-22 上海雷腾软件有限公司 System for supporting high concurrent cache task queue and asynchronous batch operation method thereof
CN105138587A (en) * 2015-07-31 2015-12-09 小米科技有限责任公司 Data access method, apparatus and system
CN105447092A (en) * 2015-11-09 2016-03-30 联动优势科技有限公司 Caching method and apparatus

Also Published As

Publication number Publication date
CN107451144A (en) 2017-12-08

Similar Documents

Publication Publication Date Title
CN107451144B (en) Cache reading method and device
CN108984312B (en) Method and device for reading and writing data
US8195702B2 (en) Online index builds and rebuilds without blocking locks
US9098413B2 (en) Read and write requests to partially cached files
US8250111B2 (en) Automatic detection and correction of hot pages in a database system
US10698831B2 (en) Method and apparatus for data access
US8041691B2 (en) Acquiring locks in wait mode in a deadlock free manner
US9075726B2 (en) Conflict resolution of cache store and fetch requests
US11113195B2 (en) Method, device and computer program product for cache-based index mapping and data access
US9652169B2 (en) Adaptive concurrency control using hardware transactional memory and locking mechanism
CN108108486B (en) Data table query method and device, terminal equipment and storage medium
CN109933606B (en) Database modification method, device, equipment and storage medium
CN109213691B (en) Method and apparatus for cache management
US10311033B2 (en) Alleviation of index hot spots in data sharing environment with remote update and provisional keys
CN114586003A (en) Speculative execution of load order queue using page level tracking
US10754842B2 (en) Preplaying transactions that mix hot and cold data
US11494099B2 (en) Method, device, and computer program product for managing storage system
US7558914B2 (en) Data object processing of storage drive buffers
US10452424B2 (en) Unique transaction identifier based transaction processing
US10606757B2 (en) Method, device and computer program product for flushing metadata in multi-core system
US10747627B2 (en) Method and technique of achieving extraordinarily high insert throughput
US10311039B2 (en) Optimized iterators for RCU-protected skiplists
US11494399B2 (en) Method, device, and computer-readable storage medium for bitmap conversion
CN106575306B (en) Method for persisting data on non-volatile memory for fast update and transient recovery and apparatus therefor
US11726877B1 (en) Method, electronic device, and computer program product for accessing storage device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant