CN110083549B - Cache reading and writing method and terminal - Google Patents

Cache reading and writing method and terminal

Info

Publication number
CN110083549B
CN110083549B
Authority
CN
China
Prior art keywords
cache
read
write
common
write cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910255944.6A
Other languages
Chinese (zh)
Other versions
CN110083549A (en)
Inventor
刘德建
林伟
郭玉湖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Tianquan Educational Technology Ltd
Original Assignee
Fujian Tianquan Educational Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Tianquan Educational Technology Ltd filed Critical Fujian Tianquan Educational Technology Ltd
Priority to CN201910255944.6A
Publication of CN110083549A
Application granted
Publication of CN110083549B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0891 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, using clearing, invalidating or resetting means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0893 Caches characterised by their organisation or structure
    • G06F 12/0895 Caches characterised by their organisation or structure of parts of caches, e.g. directory or tag array
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/10 Providing a specific technical effect
    • G06F 2212/1032 Reliability improvement, data loss prevention, degraded operation etc.

Abstract

The invention discloses a method and a terminal for cache read-write. A read-write cache is set between a read-only cache and a common cache. In a read scenario, the read-write cache is queried when the corresponding data is not found in the read-only cache, the common cache is queried when the corresponding data is not found in the read-write cache, and the read-only cache and the read-write cache are respectively updated with the data found in the common cache; in a write scenario, the read-write cache is emptied after the common cache is updated successfully. With the read-write cache acting as a middle layer, only the read-write cache is cleared after the common cache is updated successfully, so the read-write pressure under high concurrency can be effectively relieved when read and write operations occur at the same time, unlike the prior art, in which locking data for reading and writing forces reads to wait.

Description

Cache reading and writing method and terminal
Technical Field
The present invention relates to the field of cache technologies, and in particular, to a method and a terminal for reading and writing a cache.
Background
Cache servers such as redis or memcache are generally used in a system as a cache layer, providing the cache service as a cluster or as a single node. In most cases the system also stores data in a local cache. When local memory is used as the cache, data can be read and written quickly under high concurrency. A new problem arises, however: when the data has to be modified under high concurrency, frequent read-write conflicts occur on the in-memory data.
At present, to avoid inaccurate data caused by read-write conflicts, a lock is usually taken while data is written. This, however, affects read operations: because execution is serialized, once the write path takes the lock, any process that wants to read the data must wait, and the data can only be read after the write has finished. Under high concurrency this can directly lead to serious consequences such as blocked threads, program crashes or server downtime.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method and a terminal for cache read-write that separate reads from writes of cached data and support reading and writing cached data in high-concurrency scenarios.
To solve the above technical problem, the invention adopts the following technical solution:
a method for cache read-write, comprising the steps of:
s1, setting a read-only cache, a read-write cache and a common cache;
S21, in a read scenario, if the corresponding data is not found in the read-only cache, querying the read-write cache; if the corresponding data is not found in the read-write cache, querying the common cache, and updating the read-only cache and the read-write cache respectively with the data found in the common cache;
S22, in a write scenario, emptying the read-write cache after the common cache is updated successfully.
In order to solve the technical problem, the invention adopts another technical scheme as follows:
a terminal for reading and writing a cache comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the computer program to realize the following steps:
s1, setting a read-only cache, a read-write cache and a common cache;
S21, in a read scenario, if the corresponding data is not found in the read-only cache, querying the read-write cache; if the corresponding data is not found in the read-write cache, querying the common cache, and updating the read-only cache and the read-write cache respectively with the data found in the common cache;
S22, in a write scenario, emptying the read-write cache after the common cache is updated successfully.
The invention has the beneficial effects that: a read-write cache is set between a read-only cache and a common cache. In a read scenario, the read-write cache is queried when the corresponding data is not found in the read-only cache, the common cache is queried when the corresponding data is not found in the read-write cache, and the read-only cache and the read-write cache are respectively updated with the data found in the common cache; in a write scenario, the read-write cache is emptied after the common cache is updated successfully. With the read-write cache acting as a middle layer, only the read-write cache is cleared after the common cache is updated successfully, so the read-write pressure under high concurrency can be effectively relieved when read and write operations occur at the same time, unlike the prior art, in which locking data for reading and writing forces reads to wait.
Drawings
FIG. 1 is a flowchart illustrating steps of a method for reading and writing a cache according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a terminal for cache read-write according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating the steps of a read scenario according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating the steps of a write scenario according to an embodiment of the present invention;
description of reference numerals:
1. terminal for cache read-write; 2. memory; 3. processor.
Detailed Description
In order to explain technical contents, achieved objects, and effects of the present invention in detail, the following description is made with reference to the accompanying drawings in combination with the embodiments.
The key concept of the invention is that a read-write cache is arranged between a read-only cache and a common cache; in a read scenario, the read-only cache, the read-write cache and the common cache are queried in sequence until the corresponding data is found, and in a write scenario, the read-write cache is emptied after the common cache is updated successfully.
Referring to fig. 1, a method for reading and writing a cache includes the steps of:
s1, setting a read-only cache, a read-write cache and a common cache;
S21, in a read scenario, if the corresponding data is not found in the read-only cache, querying the read-write cache; if the corresponding data is not found in the read-write cache, querying the common cache, and updating the read-only cache and the read-write cache respectively with the data found in the common cache;
S22, in a write scenario, emptying the read-write cache after the common cache is updated successfully.
From the above description, the beneficial effects of the present invention are as follows. A read-write cache is set between a read-only cache and a common cache. In a read scenario, the read-write cache is queried when the corresponding data is not found in the read-only cache, the common cache is queried when the corresponding data is not found in the read-write cache, and the read-only cache and the read-write cache are respectively updated with the data found in the common cache; in a write scenario, the read-write cache is emptied after the common cache is updated successfully. With the read-write cache acting as a middle layer, only the read-write cache is cleared after the common cache is updated successfully, so the read-write pressure under high concurrency can be effectively relieved when read and write operations occur at the same time, unlike the prior art, in which locking data for reading and writing forces reads to wait.
Further, step S21 specifically includes:
S211, querying the read-only cache according to the received data request, and judging whether the query result of the read-only cache is empty; if so, executing S212, and if not, returning the query result of the read-only cache;
S212, querying the read-write cache according to the received data request, and judging whether the query result of the read-write cache is empty; if so, executing S213, and if not, returning the query result of the read-write cache;
S213, querying the common cache according to the received data request, and judging whether the query result of the common cache is empty; if not, returning the query result of the common cache and writing it into the read-write cache and the read-only cache.
As can be seen from the above description, the read-only cache and the read-write cache are queried first, and the common cache is accessed only when the corresponding data is found in neither of them; the data found in the common cache is then written into the read-only cache and the read-write cache so that it can be found there next time. Read and write operations on the data are thereby effectively separated, reads are no longer affected by write operations on the common cache, read-write conflicts are reduced, and no read-write waiting is needed.
Further, step S22 specifically includes:
updating the common cache according to the received data update request;
clearing the read-write cache after the common cache is updated successfully.
As can be seen from the above description, after the common cache is updated successfully, only the read-write cache is cleared first, which prepares for the subsequent data update of the read-write cache and the read-only cache.
Further, the method further comprises:
S3, polling the read-write cache periodically, and judging whether the read-write cache is empty; if so, emptying the read-only cache.
As can be seen from the above description, the read-write cache is polled periodically and the read-only cache is emptied once the read-write cache is empty. When both the read-only cache and the read-write cache are empty, each data request falls through to the common cache for its query, and the data found in the common cache is written into the read-only cache and the read-write cache. This is equivalent to refilling the two caches, so the updated data is written in: the data update of the read-only cache and the read-write cache is driven by the data requests themselves, which improves working efficiency.
Further, the periodic polling of the read-write cache in step S3 specifically includes:
polling the read-write cache periodically through an asynchronous thread.
As can be seen from the above description, polling the read-write cache periodically through an asynchronous thread allows the read-write cache to be monitored without affecting the original service logic or occupying extra resources, so that corresponding operations can be performed in time according to changes of the read-write cache.
Referring to fig. 2, a terminal for reading and writing a cache includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor executes the computer program to implement the following steps:
s1, setting a read-only cache, a read-write cache and a common cache;
S21, in a read scenario, if the corresponding data is not found in the read-only cache, querying the read-write cache; if the corresponding data is not found in the read-write cache, querying the common cache, and updating the read-only cache and the read-write cache respectively with the data found in the common cache;
S22, in a write scenario, emptying the read-write cache after the common cache is updated successfully.
From the above description, the beneficial effects of the present invention are as follows. A read-write cache is set between a read-only cache and a common cache. In a read scenario, the read-write cache is queried when the corresponding data is not found in the read-only cache, the common cache is queried when the corresponding data is not found in the read-write cache, and the read-only cache and the read-write cache are respectively updated with the data found in the common cache; in a write scenario, the read-write cache is emptied after the common cache is updated successfully. With the read-write cache acting as a middle layer, only the read-write cache is cleared after the common cache is updated successfully, so the read-write pressure under high concurrency can be effectively relieved when read and write operations occur at the same time, unlike the prior art, in which locking data for reading and writing forces reads to wait.
Further, step S21 specifically includes:
S211, querying the read-only cache according to the received data request, and judging whether the query result of the read-only cache is empty; if so, executing S212, and if not, returning the query result of the read-only cache;
S212, querying the read-write cache according to the received data request, and judging whether the query result of the read-write cache is empty; if so, executing S213, and if not, returning the query result of the read-write cache;
S213, querying the common cache according to the received data request, and judging whether the query result of the common cache is empty; if not, returning the query result of the common cache and writing it into the read-write cache and the read-only cache.
As can be seen from the above description, the read-only cache and the read-write cache are queried first, and the common cache is accessed only when the corresponding data is found in neither of them; the data found in the common cache is then written into the read-only cache and the read-write cache so that it can be found there next time. Read and write operations on the data are thereby effectively separated, reads are no longer affected by write operations on the common cache, read-write conflicts are reduced, and no read-write waiting is needed.
Further, step S22 specifically includes:
updating the common cache according to the received data update request;
clearing the read-write cache after the common cache is updated successfully.
As can be seen from the above description, after the common cache is updated successfully, only the read-write cache is cleared first, which prepares for the subsequent data update of the read-write cache and the read-only cache.
Further, the method further comprises:
S3, polling the read-write cache periodically, and judging whether the read-write cache is empty; if so, emptying the read-only cache.
As can be seen from the above description, the read-write cache is polled periodically and the read-only cache is emptied once the read-write cache is empty. When both the read-only cache and the read-write cache are empty, each data request falls through to the common cache for its query, and the data found in the common cache is written into the read-only cache and the read-write cache. This is equivalent to refilling the two caches, so the updated data is written in: the data update of the read-only cache and the read-write cache is driven by the data requests themselves, which improves working efficiency.
Further, the periodic polling of the read-write cache in step S3 specifically includes:
polling the read-write cache periodically through an asynchronous thread.
As can be seen from the above description, polling the read-write cache periodically through an asynchronous thread allows the read-write cache to be monitored without affecting the original service logic or occupying extra resources, so that corresponding operations can be performed in time according to changes of the read-write cache.
Embodiment 1
Referring to fig. 1 and fig. 3-4, a method for reading and writing a cache includes the steps of:
s1, setting a read-only cache, a read-write cache and a common cache;
Specifically, the data stored in the local memory is divided into a read-only cache, a read-write cache and a common cache, where the common cache is the cache that actually stores the data;
the read-only cache, the read-write cache and the common cache are each stored in memory as a MAP data structure.
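The arrangement of these three caches can be pictured with the following minimal Java sketch, assuming ConcurrentHashMap-backed maps keyed by String and a per-entry expiration timestamp; the class, field and method names (TieredCache, Entry, and so on) are illustrative assumptions and are not taken from the patent.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the three cache layers held in local memory as MAP structures.
// Names are illustrative assumptions; the patent only prescribes the three layers.
public class TieredCache {

    // One cached value together with an absolute expiration timestamp (milliseconds).
    static class Entry {
        final Object value;
        final long expireAtMillis;

        Entry(Object value, long expireAtMillis) {
            this.value = value;
            this.expireAtMillis = expireAtMillis;
        }

        boolean expired() {
            return System.currentTimeMillis() > expireAtMillis;
        }
    }

    final Map<String, Entry> readOnlyCache  = new ConcurrentHashMap<>();
    final Map<String, Entry> readWriteCache = new ConcurrentHashMap<>();
    final Map<String, Entry> commonCache    = new ConcurrentHashMap<>(); // the cache that actually stores the data
}
```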
S21, in a read scenario, if the corresponding data is not found in the read-only cache, querying the read-write cache; if the corresponding data is not found in the read-write cache, querying the common cache, and updating the read-only cache and the read-write cache respectively with the data found in the common cache;
Specifically, a client requests the interface API to obtain the relevant data, that is, it sends a data request;
S211, the interface API executes the relevant service logic, queries the read-only cache according to the received data request, and judges whether the query result of the read-only cache is empty; if so, S212 is executed, and if not, the query result of the read-only cache is returned;
S212, the read-write cache is queried according to the received data request, and whether the query result of the read-write cache is empty is judged; if so, S213 is executed, and if not, the query result of the read-write cache is returned;
S213, the common cache is queried according to the received data request, and whether the query result of the common cache is empty is judged; if not, the query result of the common cache is returned and synchronously written into the read-write cache and the read-only cache, so as to refresh the cached data of the read-only cache and the read-write cache;
when the query result of the common cache is synchronously written into the read-write cache and the read-only cache, an expiration time can be set for the data in the read-only cache and the read-write cache, consistent with the expiration time of the corresponding data in the common cache;
as shown in fig. 3, a client requests the interface API; the interface API accesses the read-only cache, accesses the read-write cache if the corresponding data is not obtained there, and accesses the common cache if the corresponding data is still not obtained.
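A minimal sketch of this tiered lookup, written as a method of the hypothetical TieredCache class above, might look as follows; it walks steps S211-S213 and back-fills the read-write cache and the read-only cache with the entry found in the common cache, so the expiration time stays consistent with the common cache.

```java
// Read path (steps S211-S213), continuing the TieredCache sketch above.
// Returns the cached value, or null when no layer holds unexpired data.
public Object get(String key) {
    Entry e = readOnlyCache.get(key);        // S211: query the read-only cache first
    if (e != null && !e.expired()) {
        return e.value;
    }
    e = readWriteCache.get(key);             // S212: fall back to the read-write cache
    if (e != null && !e.expired()) {
        return e.value;
    }
    e = commonCache.get(key);                // S213: finally query the common cache
    if (e == null || e.expired()) {
        return null;                         // nothing found in any layer
    }
    readWriteCache.put(key, e);              // back-fill both upper layers with the same entry,
    readOnlyCache.put(key, e);               // keeping the expiration time consistent
    return e.value;
}
```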
S22, in a write scenario, emptying the read-write cache after the common cache is updated successfully.
Specifically, as shown in fig. 4, background data maintenance is performed according to the received data update request, and the operation and maintenance backend updates the common cache synchronously;
after the common cache is updated successfully, the read-write cache is cleared;
clearing the read-write cache can be implemented by marking all the data in the read-write cache as expired, so that the expired data is deleted.
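As a rough sketch of this write path (step S22), again continuing the hypothetical TieredCache class: the common cache is updated first, and only the read-write cache is emptied afterwards. Here the entries are simply dropped, which has the same effect as marking them all expired.

```java
// Write path (step S22), continuing the TieredCache sketch above.
public void update(String key, Object value, long ttlMillis) {
    long expireAt = System.currentTimeMillis() + ttlMillis;
    commonCache.put(key, new Entry(value, expireAt)); // synchronously update the common cache
    readWriteCache.clear();                           // then empty only the read-write cache
}
```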
S3, polling the read-write cache periodically, and judging whether the read-write cache is empty; if so, emptying the read-only cache.
The polling interval for the read-write cache can be configured as needed; preferably, the interval is 15 seconds or 30 seconds;
when polling finds that the read-write cache is empty, the read-only cache is emptied. Once both the read-only cache and the read-write cache are empty, the next data request repeats the process of steps S211-S213: the read-only cache and the read-write cache are queried without obtaining the corresponding data, so the common cache is queried and the data found there is written into the read-only cache and the read-write cache, achieving data synchronization of the read-only cache and the read-write cache in a service-triggered manner.
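One polling pass of step S3 could look like the following sketch, where the method name pollOnce is an assumption: when a write has emptied the read-write cache, the read-only cache is emptied too, so the next read falls through to the common cache and refills both layers.

```java
// Step S3, continuing the TieredCache sketch above: empty the read-only cache
// once the read-write cache is found empty; the next read request then repeats
// S211-S213 against the common cache and refills both upper layers.
public void pollOnce() {
    if (readWriteCache.isEmpty()) {
        readOnlyCache.clear();
    }
}
```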
Embodiment 2
This embodiment differs from Embodiment 1 in that:
the periodic polling of the read-write cache in step S3 specifically includes:
polling the read-write cache periodically through an independent asynchronous thread.
Specifically, by polling the read-write cache through an independent asynchronous thread, for example at an interval of 30 seconds, the read-write cache can be monitored without affecting the original service logic or occupying extra resources.
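A possible wiring of this independent asynchronous thread, reusing the hypothetical pollOnce() method from the earlier sketch and a single-threaded ScheduledExecutorService, is shown below; the 30-second interval matches the example above, while the class and thread names are assumptions.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Embodiment 2 sketch: poll the read-write cache every 30 seconds on an
// independent asynchronous (daemon) thread, outside the request-handling path.
public class CachePoller {

    public static ScheduledExecutorService start(TieredCache cache) {
        ScheduledExecutorService poller = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "rw-cache-poller");
            t.setDaemon(true); // daemon thread: does not block application shutdown
            return t;
        });
        poller.scheduleAtFixedRate(cache::pollOnce, 30, 30, TimeUnit.SECONDS);
        return poller;
    }
}
```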
Embodiment 3
Referring to fig. 2, a terminal 1 for cache read-write comprises a memory 2, a processor 3, and a computer program stored in the memory 2 and executable on the processor 3, wherein the processor 3 implements the steps of Embodiment 1 when executing the computer program.
Embodiment 4
Referring to fig. 2, a terminal 1 for cache read-write comprises a memory 2, a processor 3, and a computer program stored in the memory 2 and executable on the processor 3, wherein the processor 3 implements the steps of Embodiment 2 when executing the computer program.
In summary, in the method and terminal for cache read-write provided by the invention, a read-write cache is set between a read-only cache and a common cache. In a read scenario, the read-write cache is queried when the corresponding data is not found in the read-only cache, the common cache is queried when the corresponding data is not found in the read-write cache, and the read-only cache and the read-write cache are respectively updated with the data found in the common cache; in a write scenario, the read-write cache is emptied after the common cache is updated successfully. With the read-write cache acting as a middle layer, only the read-write cache is cleared first after the common cache is updated successfully, so the read-write pressure under high concurrency can be effectively relieved when read and write operations occur at the same time; the read-write cache is polled periodically by an asynchronous thread, and the read-only cache is emptied when the read-write cache is empty. Unlike the prior art, in which locking data for reading and writing forces reads to wait, the invention separates reads from writes of cached data and supports reading and writing cached data in high-concurrency scenarios. When the read-only cache and the read-write cache are both empty, each data request falls through to the common cache for its query, and the data found in the common cache is written into the read-only cache and the read-write cache, so data updates of the read-only cache and the read-write cache are driven by the service's own data requests, which improves working efficiency.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent changes made by using the contents of the present specification and the drawings, or applied directly or indirectly to the related technical fields, are included in the scope of the present invention.

Claims (6)

1. A method for cache read-write, comprising the steps of:
s1, setting a read-only cache, a read-write cache and a common cache;
S21, in a read scenario, if the corresponding data is not found in the read-only cache, querying the read-write cache; if the corresponding data is not found in the read-write cache, querying the common cache, and updating the read-only cache and the read-write cache respectively with the data found in the common cache;
S22, in a write scenario, emptying the read-write cache after the common cache is updated successfully;
wherein step S22 specifically includes:
updating the common cache according to the received data update request;
clearing the read-write cache after the common cache is updated successfully;
the method further comprising:
S3, polling the read-write cache periodically, and judging whether the read-write cache is empty; if so, emptying the read-only cache, and repeating S21 for the next data request, thereby achieving data synchronization of the read-only cache and the read-write cache in a service-triggered manner.
2. The method for cache read-write according to claim 1, wherein the step S21 specifically includes:
S211, querying the read-only cache according to the received data request, and judging whether the query result of the read-only cache is empty; if so, executing S212, and if not, returning the query result of the read-only cache;
S212, querying the read-write cache according to the received data request, and judging whether the query result of the read-write cache is empty; if so, executing S213, and if not, returning the query result of the read-write cache;
S213, querying the common cache according to the received data request, and judging whether the query result of the common cache is empty; if not, returning the query result of the common cache and writing it into the read-write cache and the read-only cache.
3. The method for cache read-write according to claim 1, wherein the periodic polling of the read-write cache in step S3 specifically includes:
polling the read-write cache periodically through an asynchronous thread.
4. A terminal for reading and writing a cache, comprising a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor implements the following steps when executing the computer program:
s1, setting a read-only cache, a read-write cache and a common cache;
S21, in a read scenario, if the corresponding data is not found in the read-only cache, querying the read-write cache; if the corresponding data is not found in the read-write cache, querying the common cache, and updating the read-only cache and the read-write cache respectively with the data found in the common cache;
S22, in a write scenario, emptying the read-write cache after the common cache is updated successfully;
wherein step S22 specifically includes:
updating the common cache according to the received data update request;
clearing the read-write cache after the common cache is updated successfully;
further comprising:
S3, polling the read-write cache periodically, and judging whether the read-write cache is empty; if so, emptying the read-only cache, and repeating S21 for the next data request, thereby achieving data synchronization of the read-only cache and the read-write cache in a service-triggered manner.
5. The terminal for cache read-write according to claim 4, wherein the step S21 specifically includes:
S211, querying the read-only cache according to the received data request, and judging whether the query result of the read-only cache is empty; if so, executing S212, and if not, returning the query result of the read-only cache;
S212, querying the read-write cache according to the received data request, and judging whether the query result of the read-write cache is empty; if so, executing S213, and if not, returning the query result of the read-write cache;
S213, querying the common cache according to the received data request, and judging whether the query result of the common cache is empty; if not, returning the query result of the common cache and writing it into the read-write cache and the read-only cache.
6. The terminal for cache read-write according to claim 4, wherein the periodic polling of the read-write cache in step S3 specifically includes:
polling the read-write cache periodically through an asynchronous thread.
CN201910255944.6A 2019-04-01 2019-04-01 Cache reading and writing method and terminal Active CN110083549B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910255944.6A CN110083549B (en) 2019-04-01 2019-04-01 Cache reading and writing method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910255944.6A CN110083549B (en) 2019-04-01 2019-04-01 Cache reading and writing method and terminal

Publications (2)

Publication Number Publication Date
CN110083549A CN110083549A (en) 2019-08-02
CN110083549B true CN110083549B (en) 2021-10-01

Family

ID=67414132

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910255944.6A Active CN110083549B (en) 2019-04-01 2019-04-01 Cache reading and writing method and terminal

Country Status (1)

Country Link
CN (1) CN110083549B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110691219A (en) * 2019-09-11 2020-01-14 视联动力信息技术股份有限公司 Data processing method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107451146A (en) * 2016-05-31 2017-12-08 北京京东尚科信息技术有限公司 The method of data and data cached multi-level buffer device are read using multi-level buffer
CN109213696A (en) * 2017-06-30 2019-01-15 伊姆西Ip控股有限责任公司 Method and apparatus for cache management

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6496907B1 (en) * 1999-10-22 2002-12-17 Apple Computer, Inc. System and method for updating from a read-only to a read-write entry and concurrently invalidating stale cache copies from head-to-tail and tail-to-head directions
US6584550B1 (en) * 1999-11-29 2003-06-24 Apple Computer, Inc. System and method for updating a head entry from read-only to read-write and allowing a list to expand in a cache-coherence sharing list
CN101154230B (en) * 2006-09-30 2010-08-18 中兴通讯股份有限公司 Responding method for large data volume specified searching web pages
CN102541983B (en) * 2011-10-25 2013-07-31 无锡城市云计算中心有限公司 Method for synchronously caching by multiple clients in distributed file system
US8689233B2 (en) * 2012-01-25 2014-04-01 International Business Machines Corporation Distributed function execution for hybrid systems
CN103744975A (en) * 2014-01-13 2014-04-23 锐达互动科技股份有限公司 Efficient caching server based on distributed files
CN103942161B (en) * 2014-04-24 2017-02-15 杭州冰特科技有限公司 Redundancy elimination system and method for read-only cache and redundancy elimination method for cache
CN107493327B (en) * 2017-08-11 2020-05-15 杭州顺网科技股份有限公司 Distributed cache management method, system and data management system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107451146A (en) * 2016-05-31 2017-12-08 北京京东尚科信息技术有限公司 The method of data and data cached multi-level buffer device are read using multi-level buffer
CN109213696A (en) * 2017-06-30 2019-01-15 伊姆西Ip控股有限责任公司 Method and apparatus for cache management

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Implementation of data read-write separation and caching; weixin_30696427; <https://blog.csdn.net/weixin_30696427/article/details/98084502>; 20180625; pages 1-4 *

Also Published As

Publication number Publication date
CN110083549A (en) 2019-08-02

Similar Documents

Publication Publication Date Title
CN1794207B (en) Method and system for implenmenting cache consistency
CN111400268B (en) Log management method of distributed persistent memory transaction system
CN111259071B (en) Concurrent access control method in distributed database system
CN102098344B (en) Method and device for synchronizing editions during cache management and cache management system
CN112612799B (en) Data synchronization method and terminal
CN112468601B (en) Data synchronization method, access method and system of distributed storage system
CN103390041A (en) Method and system for providing data service based on middleware
CN105426321A (en) RDMA friendly caching method using remote position information
US20090240739A1 (en) Optimizing Lock Acquisition on Transaction Logs
EP4216061A1 (en) Transaction processing method, system, apparatus, device, storage medium, and program product
CN113094430B (en) Data processing method, device, equipment and storage medium
CN112307119A (en) Data synchronization method, device, equipment and storage medium
EP3347815B1 (en) Action-based routing of a transaction in an online transaction processing system
CN112181306A (en) Method for generating distributed ID (identity) in number segment mode
EP3404537B1 (en) Processing node, computer system and transaction conflict detection method
WO2022213526A1 (en) Transaction processing method, distributed database system, cluster, and medium
CN111026771B (en) Method for ensuring consistency of cache and database data
US20120290547A1 (en) Transaction processing device, transaction processing method, and transaction processing program
CN110083549B (en) Cache reading and writing method and terminal
CN113010549A (en) Data processing method based on remote multi-active system, related equipment and storage medium
CN112822091B (en) Message processing method and device
CN113220490A (en) Transaction persistence method and system for asynchronous write-back persistent memory
CN114238518A (en) Data processing method, device, equipment and storage medium
CN115495495A (en) Transaction processing method, distributed database system, cluster and medium
CN111221773B (en) Data storage architecture method based on RDMA high-speed network and skip list

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant