CN112131234A - Method, system, equipment and medium for optimizing data cache concurrency - Google Patents

Method, system, equipment and medium for optimizing data cache concurrency

Info

Publication number
CN112131234A
CN112131234A (application CN202010950970.3A)
Authority
CN
China
Prior art keywords: key parameter, cache, key, data, parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010950970.3A
Other languages
Chinese (zh)
Inventor
曹风兵
黄帅
朱英澍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202010950970.3A
Publication of CN112131234A
Legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval of structured data, e.g. relational data
    • G06F16/23 - Updating
    • G06F16/2308 - Concurrency control
    • G06F16/2336 - Pessimistic concurrency control approaches, e.g. locking or multiple versions without time stamps
    • G06F16/2343 - Locking methods, e.g. distributed locking or locking implementation details
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval of structured data, e.g. relational data
    • G06F16/24 - Querying
    • G06F16/245 - Query processing
    • G06F16/2455 - Query execution
    • G06F16/24552 - Database cache management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a method, a system, equipment, and a storage medium for optimizing data cache concurrency. The method comprises: caching server component information in key-value form; in response to receiving an access request, obtaining the corresponding key parameter from the request and determining whether the corresponding value parameter can be read based on that key; setting a lock on the key parameter if the value parameter cannot be read; if the lock is set successfully, reading the database according to the access request and determining whether the read succeeds; and, if the database read succeeds, obtaining the corresponding data and updating the cache. By setting a lock on the data, the method prevents a large number of requests from accessing the database simultaneously after the cache fails, which would otherwise overload the database, and also prevents multiple requests from updating the cache at the same time.

Description

Method, system, equipment and medium for optimizing data cache concurrency
Technical Field
The present invention relates to the field of data caching, and more particularly, to a method, a system, a computer device, and a readable medium for optimizing concurrency of data caching.
Background
The BMC (Baseboard Management Controller) supports the industry-standard IPMI (Intelligent Platform Management Interface) specification, which describes management functions built into the server motherboard: local and remote diagnostics, console support, configuration management, hardware management, and troubleshooting. Reading server component data generally means accessing the physical device or reading database and configuration-file information, so data return times are long and data acquisition is slow. To increase access speed, critical information is typically cached, and a cache-read interface is provided for external processes or users.
When too many processes or users access the cache concurrently, cache entries can fail under the load. Once an entry fails, multiple processes may query the database or a file simultaneously and then set the cache simultaneously; under heavy concurrency this overloads the database and causes the cache to be updated repeatedly. The common scheme today is a cache expiration policy: each entry is given an expiration time, and database access begins once the entry expires. Its weakness is that, under high concurrency, many entries may be created at the same moment with the same expiration time; when that time arrives they all expire together, every request is forwarded to the database, and the database is overloaded.
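The weakness of the fixed-TTL scheme can be sketched in a few lines. The function names and the 10-second jitter window below are illustrative, not part of the patent; jittered expiry is a common mitigation shown only for contrast with the patent's lock-based approach.

```python
import random

BASE_TTL = 60.0  # seconds an entry stays valid

def fixed_expiry(created_at):
    # Entries cached at the same moment all expire at the same moment,
    # so every request then falls through to the database at once.
    return created_at + BASE_TTL

def jittered_expiry(created_at):
    # A common mitigation (not the patent's approach): add random jitter
    # so simultaneous creations do not expire simultaneously.
    return created_at + BASE_TTL + random.uniform(0, 10)
```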
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, a system, a computer device, and a computer-readable storage medium for optimizing data cache concurrency. By setting a lock on the data, they avoid the problem of a large number of requests overloading the database by accessing it simultaneously after the cache fails, and also prevent multiple requests from updating the cache at the same time.
Based on the above object, an aspect of the embodiments of the present invention provides a method for optimizing data cache concurrency, comprising the following steps: caching server component information in key-value form; in response to receiving an access request, obtaining the corresponding key parameter based on the request, and determining whether the corresponding value parameter can be read based on the key parameter; in response to the value parameter being unreadable based on the key parameter, setting a lock on the key parameter; in response to the lock being set successfully, reading the database based on the access request and determining whether the read succeeds; and, in response to the database read succeeding, obtaining the corresponding data and updating the cache.
In some embodiments, obtaining the corresponding key parameter based on the access request comprises: obtaining the lock state of the key parameter and determining whether the lock state is open; and, in response to the lock state being open, putting the access request into a dormant state.
In some embodiments, obtaining the corresponding data and updating the cache comprises: waking the access requests in the dormant state so that they obtain the corresponding data from the updated cache.
In some embodiments, the method further comprises: in response to an unsuccessful database read, setting an identifier for the key parameter in the cache.
In some embodiments, determining whether the corresponding value parameter can be read based on the key parameter comprises: determining whether the key parameter has an identifier.
In some embodiments, the method further comprises: in response to newly added data, determining whether the newly added data matches a key parameter carrying an identifier; and, in response to a successful match, deleting the identifier for that key parameter.
In some embodiments, determining whether the newly added data matches a key parameter carrying an identifier comprises: determining whether the cache contains a key parameter identical to one in the newly added data; and, if so, determining whether that cached key parameter has an identifier.
In another aspect, embodiments of the present invention further provide a system for optimizing data cache concurrency, comprising: a cache module configured to cache server component information in key-value form; a first judgment module configured to, in response to a received access request, obtain the corresponding key parameter based on the request and determine whether the corresponding value parameter can be read based on the key parameter; a lock module configured to set a lock on the key parameter when the value parameter cannot be read; a second judgment module configured to, in response to the lock being set successfully, read the database based on the access request and determine whether the read succeeds; and an obtaining module configured to, in response to a successful database read, obtain the corresponding data and update the cache.
In another aspect of the embodiments of the present invention, there is also provided a computer device, comprising: at least one processor; and a memory storing computer instructions executable on the processor, which, when executed by the processor, implement the steps of the above method.
In a further aspect of the embodiments of the present invention, a computer-readable storage medium is also provided, storing a computer program which, when executed by a processor, implements the above method steps.
The invention has the following beneficial technical effects: by setting a lock on the data, it avoids overloading the database with a large number of requests accessing it simultaneously after the cache fails, and prevents multiple requests from updating the cache at the same time.
Drawings
To illustrate the embodiments of the present invention or prior-art solutions more clearly, the drawings used in their description are briefly introduced below. The drawings described here represent only some embodiments of the invention; those skilled in the art can derive other embodiments from them without creative effort.
FIG. 1 is a schematic diagram of an embodiment of a method for optimizing concurrency of data caches provided by the present invention;
fig. 2 is a schematic hardware structure diagram of an embodiment of a computer device for optimizing concurrency of data caching according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.
It should be noted that the expressions "first" and "second" in the embodiments of the present invention distinguish two entities or parameters that share a name. They are used merely for convenience of description and should not be construed as limiting the embodiments; the following embodiments do not explain them again.
In view of the foregoing, a first aspect of the embodiments of the present invention provides an embodiment of a method for optimizing concurrency of data caches. Fig. 1 is a schematic diagram illustrating an embodiment of a method for optimizing concurrency of data caches provided by the present invention. As shown in fig. 1, the embodiment of the present invention includes the following steps:
S1, caching server component information in key-value form;
S2, in response to receiving an access request, obtaining the corresponding key parameter based on the request and determining whether the corresponding value parameter can be read based on the key parameter;
S3, in response to the value parameter being unreadable based on the key parameter, setting a lock on the key parameter;
S4, in response to the lock being set successfully, reading the database based on the access request and determining whether the read succeeds; and
S5, in response to the database read succeeding, obtaining the corresponding data and updating the cache.
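The five steps above can be sketched as a per-key rebuild lock around a key-value cache. This is a hypothetical illustration of the idea, not the patented implementation; the class and method names are invented, and the backing database is assumed to be dict-like.

```python
import threading

class ComponentCache:
    """Sketch of the S1-S5 flow: a key-value cache where only one
    request at a time may rebuild a missing entry from the database."""

    def __init__(self, database):
        self._db = database            # assumed dict-like backing store
        self._cache = {}               # S1: component info as key-value pairs
        self._locks = {}               # one rebuild lock per key parameter
        self._guard = threading.Lock() # protects the lock table itself

    def get(self, key):
        value = self._cache.get(key)          # S2: try the cache first
        if value is not None:
            return value
        with self._guard:                     # S3: set a lock on the key
            lock = self._locks.setdefault(key, threading.Lock())
        if lock.acquire(blocking=False):      # S4: lock set successfully
            try:
                value = self._db.get(key)     # read the database
                if value is not None:
                    self._cache[key] = value  # S5: update the cache
                return value
            finally:
                lock.release()
        else:
            with lock:   # another request is rebuilding: wait ("dormant")
                pass
            return self._cache.get(key)       # then re-read the cache
```

A caller simply calls `get("cpu_temp")`; concurrent misses on the same key block on the lock instead of each issuing a database read.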
During server management, the use of fans, memory, hard disks, and other critical components must be monitored. Monitoring key information is important for observing and analyzing the operating state of the whole server; such information may include key voltage, key temperature, and power-consumption readings. Monitoring it provides a better basis for analysis and better data support for server maintenance and fault handling.
Server component information is cached in key-value form. The embodiment of the invention periodically receives key component information (such as voltage, temperature, and power consumption), caches it as key-value pairs, and provides an external access interface. Each pair consists of a key parameter and a value parameter, and normally the value parameter can be determined from the key parameter.
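The key-value caching of periodically received component information (S1) might look like this sketch; the sensor names and values are invented for illustration.

```python
# Hypothetical component readings keyed by sensor name (step S1).
cache = {}

def refresh_cache(readings):
    """Store periodically received key component info as key-value pairs."""
    for key, value in readings.items():
        cache[key] = value

# Example of one periodic refresh with invented readings.
refresh_cache({"cpu_vcore_voltage": 1.08,
               "inlet_temperature": 27.5,
               "psu1_power_watts": 310})
```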
In response to receiving an access request, the corresponding key parameter is obtained from the request and it is determined whether the corresponding value parameter can be read based on that key. When an external process or user needs to read data, it sends an access request, and the value parameter is read from the cache according to the key parameter in the request. However, if the cache has failed or the concurrency is too high, the value parameter may not be readable from the cache based on the key parameter.
In response to the value parameter being unreadable based on the key parameter, a lock is set on the key parameter. If the first attempt to read the value parameter by key fails, a lock can be set on the key parameter. The lock indicates that the value parameter cannot currently be read from the cache for that key, so that when a subsequent access request containing the same key parameter is received, it can be put directly into a dormant state.
In some embodiments, obtaining the corresponding key parameter based on the access request comprises: obtaining the lock state of the key parameter and determining whether the lock state is open; and, in response to the lock state being open, putting the access request into a dormant state.
In response to the lock on the key parameter being set successfully, the database is read based on the access request and it is determined whether the read succeeds. In response to a successful read, the corresponding data is obtained and the cache is updated. When the corresponding data is not found in the cache, the database can be read directly; if the data exists there, it is obtained and the cache is updated.
In some embodiments, the method further comprises: in response to an unsuccessful database read, setting an identifier for the key parameter in the cache. If the corresponding data is not found in the database either, the data may have been deleted or be temporarily unreadable; an identifier can then be set for the key parameter, for example by attaching a "NONE" string, so that other requests can be handled directly according to that string. Inserting the "NONE" string avoids an endless loop of repeated queries against the cache and the database when the database cannot return the data.
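A sketch of the identifier mechanism, using the "NONE" string mentioned in the text; the function name and the cache layout are invented for illustration.

```python
MISSING = "NONE"   # identifier string from the patent text

cache = {}

def read_through(key, database):
    """Read a value via the cache, marking keys the database cannot supply."""
    value = cache.get(key)
    if value == MISSING:
        return None           # known-missing: skip the database entirely
    if value is not None:
        return value
    value = database.get(key)
    if value is None:
        cache[key] = MISSING  # mark the key so later requests do not loop
        return None
    cache[key] = value
    return value
```

Once a key is marked, subsequent requests are answered from the marker until newly added data clears it, which is what breaks the query loop.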
In some embodiments, determining whether the corresponding value parameter can be read based on the key parameter comprises: determining whether the key parameter has an identifier. If the key parameter has an identifier, the corresponding value parameter cannot be read; if it does not, a further determination is made.
In some embodiments, obtaining the corresponding data and updating the cache comprises: waking the access requests in the dormant state so that they obtain the corresponding data from the updated cache. After the data is successfully inserted into the database, the corresponding access requests can be woken and the identifier of the key parameter deleted.
In some embodiments, the method further comprises: in response to newly added data, determining whether the newly added data matches a key parameter carrying an identifier; and, in response to a successful match, deleting the identifier for that key parameter. Each key parameter in the newly added data is matched in turn against the key parameters carrying identifiers; if the newly added data contains a key parameter identical to one carrying an identifier, that identifier is deleted.
In some embodiments, determining whether the newly added data matches a key parameter carrying an identifier comprises: determining whether the cache contains a key parameter identical to one in the newly added data; and, if so, determining whether that cached key parameter has an identifier. In the specific determination, it may first be checked whether the cache contains the same key parameter as the newly added data, and if so, whether that key parameter has an identifier.
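The matching-and-deletion step for newly added data might look like the following sketch; the function name and the "NONE" default are taken from the surrounding text, everything else is invented.

```python
def apply_new_data(cache, new_data, missing="NONE"):
    """When fresh component data arrives, clear the identifier of any
    matching key and store the new value."""
    for key, value in new_data.items():
        if cache.get(key) == missing:
            del cache[key]        # delete the identifier for the key
        cache[key] = value        # then store the newly added value
```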
In the embodiment of the invention, when concurrency is too high and the cache fails, the data is locked before the database is accessed; other requests that encounter the lock enter a dormant state and read the cached data again after the cache is restored. This prevents a large number of requests from accessing the database simultaneously after a cache failure and overloading it, and also prevents multiple requests from updating the cache at the same time.
It should be particularly noted that the steps of the above embodiments of the method for optimizing data cache concurrency may be interleaved, replaced, added, or deleted; methods obtained by such reasonable permutations and combinations therefore also fall within the scope of the invention, and the scope should not be limited to the described embodiments.
In view of the above, a second aspect of the embodiments of the present invention provides a system for optimizing data cache concurrency, comprising: a cache module configured to cache server component information in key-value form; a first judgment module configured to, in response to a received access request, obtain the corresponding key parameter based on the request and determine whether the corresponding value parameter can be read based on the key parameter; a lock module configured to set a lock on the key parameter when the value parameter cannot be read; a second judgment module configured to, in response to the lock being set successfully, read the database based on the access request and determine whether the read succeeds; and an obtaining module configured to, in response to a successful database read, obtain the corresponding data and update the cache.
In some embodiments, the first judgment module is configured to: obtain the lock state of the key parameter and determine whether the lock state is open; and, in response to the lock state being open, put the access request into a dormant state.
In some embodiments, the obtaining module is configured to: wake the access requests in the dormant state so that they obtain the corresponding data from the updated cache.
In some embodiments, the system further comprises: an identifier module configured to set an identifier for the key parameter in the cache in response to an unsuccessful database read.
In some embodiments, the first judgment module is configured to: determine whether the key parameter has an identifier.
In some embodiments, the system further comprises: a third judgment module configured to, in response to newly added data, determine whether the newly added data matches a key parameter carrying an identifier, and, in response to a successful match, delete the identifier for that key parameter.
In some embodiments, the third judgment module is configured to: determine whether the cache contains a key parameter identical to one in the newly added data; and, if so, determine whether that cached key parameter has an identifier.
In view of the above object, a third aspect of the embodiments of the present invention provides a computer device, comprising: at least one processor; and a memory storing computer instructions executable on the processor, which, when executed by the processor, perform the following steps: S1, caching server component information in key-value form; S2, in response to receiving an access request, obtaining the corresponding key parameter based on the request and determining whether the corresponding value parameter can be read based on the key parameter; S3, in response to the value parameter being unreadable based on the key parameter, setting a lock on the key parameter; S4, in response to the lock being set successfully, reading the database based on the access request and determining whether the read succeeds; and S5, in response to the database read succeeding, obtaining the corresponding data and updating the cache.
In some embodiments, obtaining the corresponding key parameter based on the access request comprises: obtaining the lock state of the key parameter and determining whether the lock state is open; and, in response to the lock state being open, putting the access request into a dormant state.
In some embodiments, obtaining the corresponding data and updating the cache comprises: waking the access requests in the dormant state so that they obtain the corresponding data from the updated cache.
In some embodiments, the steps further comprise: in response to an unsuccessful database read, setting an identifier for the key parameter in the cache.
In some embodiments, determining whether the corresponding value parameter can be read based on the key parameter comprises: determining whether the key parameter has an identifier.
In some embodiments, the steps further comprise: in response to newly added data, determining whether the newly added data matches a key parameter carrying an identifier; and, in response to a successful match, deleting the identifier for that key parameter.
In some embodiments, determining whether the newly added data matches a key parameter carrying an identifier comprises: determining whether the cache contains a key parameter identical to one in the newly added data; and, if so, determining whether that cached key parameter has an identifier.
Fig. 2 is a schematic hardware structural diagram of an embodiment of the computer device for optimizing concurrency of data caches according to the present invention.
Taking the apparatus shown in fig. 2 as an example, the apparatus includes a processor 301 and a memory 302, and may further include: an input device 303 and an output device 304.
The processor 301, the memory 302, the input device 303 and the output device 304 may be connected by a bus or other means, and fig. 2 illustrates the connection by a bus as an example.
The memory 302, as a non-volatile computer-readable storage medium, can store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for optimizing data cache concurrency in the embodiments of the present application. By running the non-volatile software programs, instructions, and modules stored in the memory 302, the processor 301 executes the server's functional applications and performs data processing, that is, implements the method for optimizing data cache concurrency of the above method embodiments.
The memory 302 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the method of optimizing concurrency of data caching, and the like. Further, the memory 302 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 302 optionally includes memory located remotely from processor 301, which may be connected to a local module via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 303 may receive information such as a user name and a password that are input. The output means 304 may comprise a display device such as a display screen.
Program instructions/modules corresponding to one or more methods of optimizing data cache concurrency are stored in the memory 302 and, when executed by the processor 301, perform the methods of optimizing data cache concurrency in any of the above-described method embodiments.
Any embodiment of the computer device executing the method for optimizing concurrency of data caching can achieve the same or similar effects as any corresponding method embodiment.
The invention also provides a computer readable storage medium storing a computer program which, when executed by a processor, performs the method as above.
Finally, it should be noted that, as those of ordinary skill in the art will appreciate, all or part of the processes of the above embodiments can be implemented by a computer program instructing the relevant hardware. The program of the method for optimizing data cache concurrency can be stored in a computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like. The embodiments of the computer program can achieve effects the same as or similar to those of any of the above method embodiments.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the present disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The numbers of the embodiments disclosed in the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to intimate that the scope of the disclosure, including the claims, of embodiments of the invention is limited to these examples; within the idea of an embodiment of the invention, also technical features in the above embodiment or in different embodiments may be combined and there are many other variations of the different aspects of the embodiments of the invention as described above, which are not provided in detail for the sake of brevity. Therefore, any omissions, modifications, substitutions, improvements, and the like that may be made without departing from the spirit and principles of the embodiments of the present invention are intended to be included within the scope of the embodiments of the present invention.

Claims (10)

1. A method for optimizing data cache concurrency, comprising the steps of:
caching server component information in key-value form;
in response to receiving an access request, obtaining the corresponding key parameter based on the access request, and determining whether the corresponding value parameter can be read based on the key parameter;
in response to the corresponding value parameter being unreadable based on the key parameter, setting a lock on the key parameter;
in response to the lock on the key parameter being set successfully, reading the database based on the access request and determining whether the database read succeeds; and
in response to the database read succeeding, obtaining the corresponding data and updating the cache.
2. The method of claim 1, wherein the acquiring the corresponding key parameter based on the access request comprises:
acquiring the lock state of the key parameter, and determining whether the lock state of the key parameter is open; and
in response to the lock state of the key parameter being open, placing the access request in a dormant state.
3. The method of claim 2, wherein the acquiring corresponding data and updating the cache comprises:
awakening the access request in the dormant state, and causing the access request to acquire corresponding data from the updated cache.
4. The method of claim 1, further comprising:
in response to unsuccessfully reading the database, setting an identifier for the key parameter in the cache.
5. The method of claim 4, wherein the determining whether a corresponding value parameter can be read based on the key parameter comprises:
determining whether the key parameter has an identifier.
6. The method of claim 4, further comprising:
in response to newly added data, determining whether the newly added data successfully matches a key parameter having an identifier; and
in response to the newly added data successfully matching the key parameter having the identifier, deleting the identifier corresponding to the key parameter.
7. The method of claim 6, wherein the determining whether the newly added data successfully matches the key parameter having the identifier comprises:
determining whether a key parameter identical to a key parameter in the newly added data exists in the cache; and
in response to a key parameter identical to the key parameter in the newly added data existing in the cache, determining whether the key parameter existing in the cache has an identifier.
8. A system for optimizing data cache concurrency, comprising:
a cache module configured to cache server component information in the form of key-value pairs;
a first judgment module configured to, in response to receiving an access request, acquire a corresponding key parameter based on the access request and determine whether a corresponding value parameter can be read based on the key parameter;
a lock module configured to set a lock on the key parameter in response to an inability to read a corresponding value parameter based on the key parameter;
a second judgment module configured to, in response to successfully setting the lock on the key parameter, read a database based on the access request and determine whether reading the database is successful; and
an acquisition module configured to, in response to successfully reading the database, acquire corresponding data and update the cache.
9. A computer device, comprising:
at least one processor; and
a memory storing computer instructions executable on the processor, the instructions when executed by the processor implementing the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202010950970.3A 2020-09-11 2020-09-11 Method, system, equipment and medium for optimizing data cache concurrency Withdrawn CN112131234A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010950970.3A CN112131234A (en) 2020-09-11 2020-09-11 Method, system, equipment and medium for optimizing data cache concurrency

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010950970.3A CN112131234A (en) 2020-09-11 2020-09-11 Method, system, equipment and medium for optimizing data cache concurrency

Publications (1)

Publication Number Publication Date
CN112131234A true CN112131234A (en) 2020-12-25

Family

ID=73845554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010950970.3A Withdrawn CN112131234A (en) 2020-09-11 2020-09-11 Method, system, equipment and medium for optimizing data cache concurrency

Country Status (1)

Country Link
CN (1) CN112131234A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113157717A (en) * 2021-05-26 2021-07-23 深圳平安智汇企业信息管理有限公司 Cache refreshing method, device and equipment for long data link and storage medium

Similar Documents

Publication Publication Date Title
CN111427966B (en) Database transaction processing method and device and server
US9262324B2 (en) Efficient distributed cache consistency
CN111506592B (en) Database upgrading method and device
CN110704463B (en) Local caching method and device for common data, computer equipment and storage medium
WO2014161261A1 (en) Data storage method and apparatus
CN109522043B (en) Method and device for managing configuration data and storage medium
CN111177043A (en) Method, system, device and medium for accelerating reading of field replaceable unit information
CN113010476A (en) Metadata searching method, device and equipment and computer readable storage medium
CN112579698A (en) Data synchronization method, device, gateway equipment and storage medium
CN114281653B (en) Application program monitoring method and device and computing equipment
CN112131234A (en) Method, system, equipment and medium for optimizing data cache concurrency
CN109165078B (en) Virtual distributed server and access method thereof
CN114036195A (en) Data request processing method, device, server and storage medium
CN111857939A (en) Method, system, electronic device and storage medium for deleting and pushing mirror image
CN117331902A (en) Dynamic path monitoring method, device, equipment and storage medium
CN115858419B (en) Metadata management method, device, equipment, server and readable storage medium
CN116775646A (en) Database data management method, device, computer equipment and storage medium
CN115587119A (en) Database query method and device, electronic equipment and storage medium
CN110968267A (en) Data management method, device, server and system
CN111813501A (en) Data deleting method, device, equipment and storage medium
US8583596B2 (en) Multi-master referential integrity
CN110647532A (en) Method and device for maintaining data consistency
CN112988208B (en) Data updating method, device, equipment and storage medium
CN117390078B (en) Data processing method, device, storage medium and computer equipment
US12079194B1 (en) Encoding table schema and storage metadata in a file store

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20201225