CN112860794A - Cache-based concurrency capability improving method, device, equipment and storage medium - Google Patents


Info

Publication number
CN112860794A
CN112860794A (application number CN202110150945.1A; granted publication CN112860794B)
Authority
CN
China
Prior art keywords
data, version, version data, service, synchronized
Prior art date
Legal status
Granted
Application number
CN202110150945.1A
Other languages
Chinese (zh)
Other versions
CN112860794B (en)
Inventor
唐小龙
Current Assignee
Bigo Technology Pte Ltd
Original Assignee
Bigo Technology Pte Ltd
Priority date
Filing date
Publication date
Application filed by Bigo Technology Pte Ltd filed Critical Bigo Technology Pte Ltd
Priority to CN202110150945.1A priority Critical patent/CN112860794B/en
Publication of CN112860794A publication Critical patent/CN112860794A/en
Application granted granted Critical
Publication of CN112860794B publication Critical patent/CN112860794B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G Physics; G06 Computing, calculating or counting; G06F Electric digital data processing; G06F16/00 Information retrieval, database structures and file system structures therefor; G06F16/20 Information retrieval of structured data, e.g. relational data
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; G06F16/275 Synchronous replication
    • G06F16/21 Design, administration or maintenance of databases; G06F16/219 Managing data history or versioning
    • G06F16/23 Updating; G06F16/2308 Concurrency control; G06F16/2315 Optimistic concurrency control; G06F16/2329 Optimistic concurrency control using versioning
    • G06F16/23 Updating; G06F16/2365 Ensuring data consistency and integrity


Abstract

The embodiments of this application disclose a cache-based method, apparatus, device, and storage medium for improving concurrency capability. In the technical scheme provided by the embodiments, memory version data loaded from the data layer is cached in the logic layer as read-only version data; the read-only version data is copied and cached in the logic layer as business version data; when one or more data processing requests arrive, business processing is performed on the business version data in the logic layer; the business version data that has undergone business processing is copied and cached as version data to be synchronized; and when the version data to be synchronized is successfully written back to the data layer, the corresponding request processing result is returned to the requester. This achieves low-latency responses to highly concurrent requests, guarantees consistency between the returned results and the data in the data layer, and at the same time effectively improves the capacity for carrying highly concurrent requests.

Description

Cache-based concurrency capability improving method, device, equipment and storage medium
Technical Field
The embodiments of this application relate to the field of computer technology, and in particular to a cache-based method, apparatus, device, and storage medium for improving concurrency capability.
Background
In some internet business scenarios, certain data is accessed and modified by a large number of users at the same time, for example mic-grabbing in live-streaming co-hosting (mic-connect) services and commodity flash-sale ("seckill") scenarios in e-commerce. Such scenarios are generally served with the data layer and the logic layer separated: when a request arrives, data is loaded from the data layer into the logic layer, processed in the logic layer according to the request, written back to the data layer, and finally a processing result is returned to the requester.
In this processing mode, each incoming request triggers a load, a processing step, and a write-back, so the processing time of a single request exceeds the data-transmission delay between the logic layer and the data layer; moreover, the next request can only be processed after the previous one has completed. A contradiction therefore arises between highly concurrent requests and data read-write latency, which limits the capacity to process highly concurrent requests.
Disclosure of Invention
The embodiments of this application provide a cache-based concurrency capability improvement method, apparatus, device, and storage medium, so as to improve the processing capability for highly concurrent requests.
In a first aspect, an embodiment of the present application provides a cache-based concurrency capability improvement method, including:
loading memory version data from a data layer, and caching the memory version data as read-only version data in a logic layer;
copying the read-only version data into business version data, caching the business version data in the logic layer, and performing business processing on the business version data according to one or more data processing requests sent by a requester;
copying the business version data that has undergone business processing into version data to be synchronized, and caching the version data to be synchronized in the logic layer; and
performing a data write-back operation on the data layer according to the version data to be synchronized, and returning a corresponding request processing result to the requester based on the data write-back operation result.
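As a hedged illustration only (all class and function names below are invented for this sketch, not taken from the patent), the four steps of the first aspect can be sketched in Python:

```python
import copy

class DataLayer:
    """Hypothetical stand-in for the data layer holding memory version data."""
    def __init__(self, data):
        self.data = data

    def load(self):
        return copy.deepcopy(self.data)

    def write_back(self, new_data):
        self.data = copy.deepcopy(new_data)
        return True  # the data write-back operation result

class LogicLayer:
    """Hypothetical logic layer caching the three data versions."""
    def __init__(self, data_layer):
        self.data_layer = data_layer
        # Step 1: load memory version data, cache it as read-only version data
        self.read_only = data_layer.load()
        # Step 2: copy the read-only version data into business version data
        self.business = copy.deepcopy(self.read_only)
        self.to_sync = None

    def process_request(self, mutate):
        # Step 2 (continued): business processing on the business version data
        mutate(self.business)

    def sync(self):
        # Step 3: copy the processed business version data into the
        # version data to be synchronized, cached in the logic layer
        self.to_sync = copy.deepcopy(self.business)
        # Step 4: write back, and base the request processing result on the outcome
        ok = self.data_layer.write_back(self.to_sync)
        return "success" if ok else "error"

layer = LogicLayer(DataLayer({"stock": 10}))
layer.process_request(lambda d: d.update(stock=d["stock"] - 1))
result = layer.sync()
```

Note that the read-only version stays untouched by business processing; only the business copy is mutated, which is what lets correctness-sensitive reads be served from the cache at any time.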
In a second aspect, an embodiment of the present application provides a cache-based concurrency capability improvement apparatus, including a data loading module, a data processing module, a data synchronization module, and a data write-back module, where:
the data loading module is configured to load the memory version data from the data layer and cache the memory version data as read-only version data in the logic layer;
the data processing module is configured to copy the read-only version data into business version data, cache the business version data in the logic layer, and perform business processing on the business version data according to one or more data processing requests sent by a requester;
the data synchronization module is configured to copy the business version data that has undergone business processing into version data to be synchronized and cache the version data to be synchronized in the logic layer; and
the data write-back module is configured to perform a data write-back operation on the data layer according to the version data to be synchronized and return a corresponding request processing result to the requester based on the data write-back operation result.
In a third aspect, an embodiment of the present application provides a cache-based concurrency capability improvement device, including: a memory and one or more processors;
the memory is configured to store one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the cache-based concurrency capability improvement method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the cache-based concurrency capability improvement method according to the first aspect.
In the embodiments of this application, memory version data loaded from the data layer is cached in the logic layer as read-only version data, and the read-only version data is copied and cached in the logic layer as business version data; when one or more data processing requests arrive, business processing is performed on the business version data in the logic layer, the processed business version data is copied and cached as version data to be synchronized, and when the version data to be synchronized is successfully written back to the data layer, the corresponding request processing result is returned to the requester. This achieves low-latency responses to highly concurrent requests, guarantees consistency between the returned results and the data in the data layer, and at the same time effectively improves the capacity for carrying highly concurrent requests.
Drawings
Fig. 1 is a flowchart of a cache-based concurrency capability improvement method according to an embodiment of the present application;
Fig. 2 is a schematic diagram illustrating a processing flow for multiple concurrent requests according to an embodiment of the present application;
Fig. 3 is a flowchart of another cache-based concurrency capability improvement method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of the data saving state before a data processing request arrives according to an embodiment of the present application;
Fig. 5 is a schematic diagram of the data saving state when a first data processing request arrives according to an embodiment of the present application;
Fig. 6 is a schematic diagram of the data saving state when a second data processing request arrives according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a cache-based concurrency capability improvement apparatus according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a cache-based concurrency capability improvement device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, specific embodiments of the present application will be described in detail with reference to the accompanying drawings. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some but not all of the relevant portions of the present application are shown in the drawings. Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Fig. 1 is a flowchart of a cache-based concurrency capability improvement method according to an embodiment of the present application. The method may be executed by a cache-based concurrency capability improvement apparatus, which may be implemented in hardware and/or software and integrated into a cache-based concurrency capability improvement device.
The following description takes the case in which the cache-based concurrency capability improvement device executes the method as an example. Referring to fig. 1, the cache-based concurrency capability improvement method includes:
s101: and loading the memory version data from the data layer, and caching the memory version data as read-only version data in the logic layer.
The cache-based concurrency capability improvement device is provided with a data layer and a logic layer, which are used for storing data and processing data respectively; the memory version data described in this embodiment is stored in the data layer.
Illustratively, when a data processing service facing multiple concurrent requests needs to be performed, the corresponding memory version data is loaded from the data layer according to the data content that the service requires, and the loaded memory version data is then cached in the logic layer as read-only version data. In a possible embodiment, the memory version data may also be loaded into the logic layer in advance and cached as read-only version data when the data processing service starts (for example, data such as the mic-seat queue is loaded into the logic layer in advance when the streamer starts broadcasting).
In a possible embodiment, after caching the memory version data as read-only version data in a logic layer, the embodiment of the present application further includes: and providing the read-only version data to the requester based on a correctness service request issued by the requester.
It can be understood that the read-only version data provided in this embodiment is unmodifiable read-only data obtained by copying the memory version data, so its correctness can be guaranteed. When a correctness-sensitive service request is received from a requester (for example, when mic-seat data needs to be pushed to clients periodically in a co-hosting service), the read-only version data cached in the logic layer is retrieved and sent to the corresponding requester, ensuring the correctness of the data obtained.
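A minimal sketch of serving such correctness-sensitive reads from the read-only cache (the field names and data are illustrative, not from the patent):

```python
import copy

# Read-only version data cached in the logic layer, copied from the data layer.
read_only_version = {"mic_queue": ["alice", "bob"], "version": 3}

def handle_correctness_request():
    # Serve the cached read-only copy; because it mirrors the memory version
    # data, correctness is preserved. Hand out a deep copy so the requester
    # cannot mutate the logic-layer cache.
    return copy.deepcopy(read_only_version)

response = handle_correctness_request()
response["mic_queue"].append("mallory")  # a requester-side change stays local
```

Returning a copy rather than the cached object is one way to keep the cache genuinely read-only; an implementation could equally use immutable structures.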
S102: copying the read-only version data into service version data, caching the service version data in the logic layer, and simultaneously carrying out service processing on the service version data according to one or more data processing requests sent by a requester.
Illustratively, after the memory version data is loaded from the data layer and cached as the read-only version data, the read-only version data is copied as the business version data, and the business version data is cached in the logic layer. The service version data provided by this embodiment is used as data for performing service processing, and when a service request arrives, the service version data is subjected to service processing.
Optionally, each time the memory version data is loaded from the data layer and cached as read-only version data, the new read-only version data is copied again and replaces the original business version data, ensuring the correctness of the business version data that undergoes business processing.
Furthermore, when one or more data processing requests sent by the requester are received, the business version data is processed and modified according to the data processing requests. One or more data processing requests can be sent by one requester, or can be sent by a plurality of requesters respectively.
It can be understood that when a plurality of data processing requests are received, subsequent data processing requests are not blocked in the logic layer: the business version data can be processed simultaneously or sequentially according to the data processing requests, and it is unnecessary to wait for the request processing result of a previous data processing request to be returned before processing the next one, meeting the requirement on the carrying capacity for concurrent requests.
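The non-blocking behaviour can be illustrated as follows; this is a simplified single-threaded sketch with invented names (a real deployment would involve an event loop or locking), showing several requests all mutating the cached business version before a single write-back answers them:

```python
import copy

business = {"counter": 0}   # business version data cached in the logic layer
pending = []                # requests processed but not yet answered

def handle_request(request_id, delta):
    # Business processing happens immediately; the request does not wait for
    # any earlier request's write-back to complete.
    business["counter"] += delta
    pending.append(request_id)

for rid, d in enumerate([1, 2, 3]):
    handle_request(rid, d)

# A single write-back covers every request processed so far; their processing
# results are returned only after it succeeds.
version_to_sync = copy.deepcopy(business)
write_back_ok = True  # assume the data layer accepted the batch
results = {rid: ("ok" if write_back_ok else "error") for rid in pending}
```

Batching requests behind one write-back is what decouples request throughput from the data-layer round-trip latency.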
S103: copying the service version data after service processing into version data to be synchronized, and caching the version data to be synchronized in the logic layer.
Illustratively, after the business processing is performed on the business version data, the business version data is additionally copied as version data to be synchronized, and the version data to be synchronized is cached in the logic layer.
The version data to be synchronized provided by this embodiment is obtained by copying the service version data after the service processing, and is used for asynchronously writing back to the data layer.
S104: and performing data write-back operation on the data layer according to the version data to be synchronized, and returning a corresponding request processing result to the requester based on a data write-back operation result.
Illustratively, after the business version data is copied and cached as the version data to be synchronized, a data write-back operation is performed on the data layer according to the version data to be synchronized, that is, the version data to be synchronized cached in the logic layer is written back to the data layer, and the version data to be synchronized is utilized to update the memory version data in the data layer.
Further, after the execution of the data write-back operation is completed, a write-back operation result corresponding to the data write-back operation is determined, and a corresponding request processing result is returned to the requester corresponding to each data processing request according to the write-back operation result.
For example, when the write-back operation result indicates that the data write-back operation succeeded, the request processing result returned to the requester is the data processing result obtained by performing business processing on the business version data. When the write-back operation result indicates that the data write-back operation failed, the request processing result returned to the requester is request processing error information (e.g., a service fault error code).
For any data processing request in this embodiment, the corresponding data processing result is returned to the requester only after the corresponding version data to be synchronized has been successfully written back to the data layer. This guarantees consistency between the memory version data and the returned data processing result, and reduces cases in which a modification is reported as successful without actually having succeeded.
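A small sketch of mapping the write-back outcome to the per-request response described above (the error code and field names are invented for illustration):

```python
def build_response(write_back_ok, processing_result):
    # Return the business-processing result only when the write-back succeeded,
    # so the requester never sees a "successful" modification that the data
    # layer did not actually accept.
    if write_back_ok:
        return {"status": "success", "data": processing_result}
    return {"status": "error", "code": "SERVICE_FAULT"}  # e.g. a service fault error code

ok_resp = build_response(True, {"stock": 9})
err_resp = build_response(False, None)
```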
Fig. 2 is a schematic diagram of a processing flow for multiple concurrent requests according to an embodiment of the present application, in which the cache-based concurrency capability improvement method of this embodiment is applied to handle multiple concurrent requests. Referring to fig. 2, when a data service facing multiple concurrent requests needs to be performed (for example, a multi-user co-hosting service in live streaming or an online commodity flash-sale service), the corresponding memory version data is loaded from the data layer into the logic layer, and the loaded memory version data is cached as read-only version data. When data processing request 1 and data processing request 2 are received, business processing is first performed on the business version data in the logic layer according to requests 1 and 2; the processed business version data is then cached as version data to be synchronized, and the version data to be synchronized is written back to the data layer.
If data processing request 3 is received while the data write-back operation is in progress, business processing is performed on the business version data in the logic layer according to request 3, and when the data write-back operation for the version data to be synchronized succeeds, the data processing results corresponding to requests 1 and 2 are returned to the requester.
Further, the version data to be synchronized is updated with the business version data that was processed according to request 3 and is written back to the data layer; when this data write-back operation succeeds, the data processing result corresponding to request 3 is returned to the requester.
In this multi-concurrent-request processing flow, every request waits until the data has been successfully written back before its result is returned to the requester, which guarantees consistency between the data and the result. Meanwhile, if several requests arrive at the same time, the business version data of the logic layer is modified by all of them, and the finally modified business version data is written back to the data layer as the version data to be synchronized. During the data write-back operation, the logic layer does not block data processing requests but continues to provide data processing service, so the latency of any data processing request does not exceed twice the data-layer delay (the time spent on business processing is much shorter than the data-layer delay and can accordingly be neglected).
Memory version data loaded from the data layer is cached in the logic layer as read-only version data, and the read-only version data is copied and cached in the logic layer as business version data; when one or more data processing requests arrive, business processing is performed on the business version data in the logic layer, the processed business version data is copied and cached as version data to be synchronized, and when the version data to be synchronized is successfully written back to the data layer, the corresponding request processing result is returned to the requester. Low-latency responses to highly concurrent requests are thereby achieved, consistency between the returned results and the data in the data layer is guaranteed, and the capacity for carrying highly concurrent requests is effectively improved.
On the basis of the foregoing embodiment, fig. 3 is a flowchart of another cache-based concurrency capability improvement method provided in the embodiment of the present application, which is an embodiment of the cache-based concurrency capability improvement method. Referring to fig. 3, the cache-based concurrency capability promotion method includes:
s201: and loading the memory version data from the data layer based on a set loading time interval or based on data write-back operation failure, and caching the memory version data as read-only version data in the logic layer.
Specifically, in the embodiment of the application, the memory version data is loaded from the data layer according to the set loading time interval and cached in the logic layer as the read-only version data, so that the read-only version number of the read-only version data is kept consistent with the memory version number of the memory version data as much as possible. And when the loading time interval is reached, the memory version data is loaded from the data layer again, and the read-only version data cache in the logic layer is updated.
Further, in addition to loading the memory version data of the data layer at regular time, in the embodiment of the present application, when the data write-back operation performed on the data layer based on the version data to be synchronized fails, the memory version data is loaded from the data layer again, and the read-only version data cache in the logic layer is updated, so that it is ensured that the read-only version data is consistent with the memory version data.
It can be understood that, each time the memory version data is loaded from the data layer, the read-only version number of the resulting read-only version data cache is consistent with the memory version number of the memory version data, until the memory version data is modified. When the memory version data is modified, the corresponding memory version number is incremented by one; thus, if the memory version data is modified by another service or process, the memory version number runs ahead of the current read-only version number.
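The version-number bookkeeping can be shown with plain counters (the values are arbitrary examples, not from the patent):

```python
memory_version = 3
read_only_version = memory_version  # equal right after loading from the data layer

# Another service or process modifies the memory version data: the memory
# version number is incremented, so the cached read-only version now lags.
memory_version += 1
read_only_is_stale = read_only_version < memory_version

# Reloading from the data layer restores consistency between the two numbers.
read_only_version = memory_version
```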
S202: copying the read-only version data into service version data, increasing the service version number of the service version data, and caching the service version data in the logic layer.
Specifically, after the read-only version data is updated each time, the updated read-only version data is copied as the business version data, and at this time, the business version number of the business version data is consistent with the read-only version number of the read-only version data.
Furthermore, the business version number of the business version data is increased, that is, incremented by one, and the business version data is cached in the logic layer.
S203: and carrying out service processing on the service version data according to one or more data processing requests sent by a requester.
S204: it is determined whether a logical layer is performing a data write back operation on the data layer. If so, go to step S203, otherwise go to step S205.
When the business processing of the business version data has been completed according to the data processing request, the business version data needs to be copied into the version data to be synchronized, which then awaits the data write-back operation.
Specifically, when the service processing of the service version data is completed according to the data processing request, it is determined whether the logic layer is performing data write-back operation on the data layer.
If a data write-back operation is in progress at this time, the business version data can only be copied after that write-back completes; the process therefore jumps to step S203 and continues to perform business processing on the business version data according to the data processing requests, that is, business processing continues according to the subsequent data processing requests waiting to be handled, so that subsequent data processing requests are not blocked.
S205: copying the service version data after service processing into version data to be synchronized, caching the version data to be synchronized in the logic layer, and increasing the service version number of the service version data.
And if the data write-back operation is not carried out at the moment, copying the service version data into the version data to be synchronized. It can be understood that, at this time, the version number to be synchronized of the version data to be synchronized is consistent with the service version number of the service version data.
Further, the version data to be synchronized obtained by copying the business version data is cached in the logic layer, and the business version number of the business version data is increased, that is, incremented by one. It can be understood that in this embodiment the business version number is increased after each copy of the business version data is completed; therefore, after the business version data that has undergone the next round of business processing is copied into the version data to be synchronized, the new version number to be synchronized differs from the previous one by 1, keeping it in step with the memory version number.
S206: and judging whether the version number to be synchronized corresponds to the next version number of the memory version numbers. If so, jumping to step S207, otherwise, reloading the memory version data from the data layer and jumping to step S202.
Specifically, the version number to be synchronized of the version data to be synchronized and the memory version number of the memory version data are determined, the version number to be synchronized and the memory version number are compared, and when the version number to be synchronized is consistent with the next version number of the memory version number (at this time, the version number to be synchronized is equal to the memory version number plus one), the step S207 is skipped to.
And when the version number to be synchronized is not consistent with the next version number of the memory version number, determining that the data write-back operation fails, reloading the memory version data from the data layer, caching the memory version data as read-only version data in the logic layer, and skipping to the step S202.
Specifically, when the version number to be synchronized is not equal to the next version number of the memory version number, it is considered that the memory version data is modified by other processes or services (the read-only version number of the read-only version data lags behind the memory version number of the memory version data at this time), the memory version data needs to be loaded from the data layer again, the memory version data is cached in the logic layer as the read-only version data, it is ensured that the read-only version data is consistent with the memory version data, and the read-only version number of the read-only version data is consistent with the memory version number of the memory version data at this time.
S207: and performing data write-back operation on the data layer according to the version data to be synchronized.
Specifically, when the version number to be synchronized equals the next version number after the memory version number, it can be determined that the memory version data has not been modified by another process or service at this time, and the data write-back operation is then performed on the data layer according to the version data to be synchronized, achieving an asynchronous update of the memory version data. It can be understood that, since the memory version data is modified at this point, the memory version number is also incremented.
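Steps S206 and S207 amount to an optimistic, version-checked write-back. A minimal sketch follows (names are invented; a real data layer would perform this check atomically, e.g. with a compare-and-set primitive):

```python
def try_write_back(data_layer, to_sync_version, to_sync_data):
    # Write back only when the to-be-synchronized version number is exactly
    # the memory version number plus one; otherwise the memory version data
    # was modified by another process or service and must be reloaded.
    if to_sync_version == data_layer["version"] + 1:
        data_layer["data"] = to_sync_data
        data_layer["version"] += 1  # memory version incremented on modification
        return True
    return False  # stale: reload the memory version data and retry

data_layer = {"version": 5, "data": {"queue": []}}
ok = try_write_back(data_layer, 6, {"queue": ["user1"]})      # 6 == 5 + 1
stale = try_write_back(data_layer, 6, {"queue": ["user2"]})   # 6 != 6 + 1
```

The second call fails because the first write-back already advanced the memory version number, which is exactly the signal that a reload is needed.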
In a possible embodiment, if the data write-back operation fails, the process jumps to step S201 to reload the memory version data from the data layer and cache it in the logic layer as read-only version data. If the data write-back operation succeeds, the process proceeds to step S208.
S208: returning a corresponding request processing result to the requester based on the result of the data write-back operation.
S209: based on a successful data write-back operation, updating the read-only version data according to the version data to be synchronized, and updating the version data to be synchronized according to the service version data in preparation for the next data write-back operation.
Specifically, when the data write-back operation succeeds, the version data to be synchronized in the logic layer is copied directly, and the read-only version data is updated from it.
Further, after the read-only version data has been updated, it is copied into the service version data for caching. The service version data is then copied as the version data to be synchronized, updating it in preparation for the next data write-back operation, and the process returns to step S203 (at this point the next data write-back operation can proceed concurrently).
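The copy chain of step S209 can be sketched as a rotation of the logic layer's cached copies after a successful write-back. The sketch below is a simplified illustration (the `cache` dictionary and its key names are assumptions; version numbers are folded into the data for brevity, and the intermediate read-only-to-service copy is omitted):

```python
import copy


def rotate_after_successful_write_back(cache):
    """Rotate the logic layer's cached copies after a successful
    write-back, per step S209: read-only <- to-sync, to-sync <- business."""
    # The version data to be synchronized has just reached the data layer,
    # so it becomes the new read-only version data.
    cache["read_only"] = copy.deepcopy(cache["to_sync"])
    # The service version data (which may already carry newer, not yet
    # written-back modifications) is copied as the next version to synchronize.
    cache["to_sync"] = copy.deepcopy(cache["business"])
    return cache


cache = {"read_only": {"v": 1}, "business": {"v": 3}, "to_sync": {"v": 2}}
rotate_after_successful_write_back(cache)
assert cache["read_only"] == {"v": 2}
assert cache["to_sync"] == {"v": 3}
```

Deep copies stand in for the caching operations so that later service processing cannot mutate the read-only copy in place.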
Fig. 4 is a schematic diagram of the data storage state before a data processing request arrives, according to an embodiment of the present application. As shown in Fig. 4, assume that the memory version number V0 of the memory version data stored in the data layer is initially x. The logic layer loads the memory version data from the data layer in advance, according to the data required by the service, and caches it as read-only version data; at this point the read-only version number V1 is y, with y ≤ x (y = x in the initial state, and y < x when another process or service has modified the memory version data). The read-only version data is then copied and cached as service version data, and the service version number is incremented, so the service version number V2 is y + 1. At this point the version number to be synchronized V3 is 0, and the version data to be synchronized is invalid data.
Fig. 5 is a schematic diagram of the data storage state when the first data requests arrive, according to an embodiment of the present application. As shown in Fig. 5, assume that concurrent data processing requests R1-R8 arrive. Once the service version data has been modified according to requests R1-R4, it is copied and cached as the version data to be synchronized, a data write-back operation is performed, and the service version number is incremented: the read-only version number V1 is y, the version number to be synchronized V3 is y + 1, and the service version number V2 is y + 2. Because V3 = V0 + 1 (the version number to be synchronized corresponds to the next version number of the memory version number), the data write-back operation proceeds normally, and the service version data continues to be modified according to requests R5-R8 while the write-back is in progress. When the data write-back operation succeeds, the corresponding request processing results are returned to the requesters, and the version data to be synchronized is copied as the read-only version data. At this point the read-only version number V1 is x + 1, and the memory version number V0 is x + 1.
Fig. 6 is a schematic diagram of the data storage state when a second round of data requests arrives, according to an embodiment of the present application. As shown in Fig. 6, assume that data processing request R9 is received at this point. The service version data, already processed according to requests R5-R8, is copied as the version data to be synchronized for a data write-back operation, and the service version data continues to be modified according to request R9 while the write-back is in progress. At this point the read-only version number V1 is x + 1, the version number to be synchronized V3 is x + 2, and the service version number V2 is x + 2. Assuming that V3 = V0 + 2, the data write-back operation proceeds normally, and if it succeeds, the corresponding request processing result is returned to the requester. Assuming that V3 is less than V0 + 2, the data write-back operation fails and all executed data processing requests must fail; in that case requests R1-R9 all fail, the latest memory version data is forcibly reloaded from the data layer, and new data processing requests can be handled once the data loading is complete.
The concurrency capability improving method provided by the embodiments of the present application can be applied to scenarios that require high-concurrency request handling capability. For example, in a user live-broadcast co-streaming (microphone-connecting) service, it supports a large number of users in a single live room simultaneously grabbing the microphone, joining the co-streaming waiting queue, and so on; it also enables broader product designs, such as popping up a notification that lets all users in the room join the co-streaming waiting queue at the same time, improving engagement between the anchor and co-streaming users. In an e-commerce flash-sale (seckill) scenario, the quantity data of a commodity is cached in the logic layer; when a user places a rush order, whether the remaining quantity in memory meets the demand is judged according to the quantity data, and if so, a purchase task is generated and the order is completed by executing the logic serially and asynchronously.
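For the flash-sale example above, a hypothetical sketch of the logic-layer check (the `FlashSale` class and method names are illustrative assumptions; a real system would route successful purchases through the version-data write-back described earlier):

```python
from collections import deque


class FlashSale:
    """Illustrative sketch of the flash-sale example: stock is checked
    against the quantity data cached in the logic layer, and successful
    purchases become tasks that are completed serially and asynchronously."""

    def __init__(self, stock):
        self.stock = stock          # quantity data cached in the logic layer
        self.order_tasks = deque()  # purchase tasks, executed serially later

    def try_purchase(self, user_id, amount=1):
        if self.stock < amount:     # remaining quantity does not meet demand
            return False
        self.stock -= amount
        self.order_tasks.append((user_id, amount))  # order completed asynchronously
        return True


sale = FlashSale(stock=2)
assert sale.try_purchase("u1") is True
assert sale.try_purchase("u2") is True
assert sale.try_purchase("u3") is False  # sold out
assert len(sale.order_tasks) == 2
```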
In a possible embodiment, the availability of the service may also be ensured by a current-limiting (rate-limiting) mechanism: for example, when a large number of data processing requests are received, those exceeding a set number are blocked, reducing the impact of the burst of requests on the server.
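Such a current-limiting mechanism might be sketched with a counting semaphore, where requests beyond the set number are rejected outright (the class name and rejection policy are illustrative assumptions, not the patent's implementation):

```python
import threading


class RequestLimiter:
    """Illustrative current-limiting sketch: requests beyond a set
    concurrency limit are rejected instead of queued, shielding the
    server from a burst of data processing requests."""

    def __init__(self, max_in_flight):
        self._sem = threading.Semaphore(max_in_flight)

    def try_handle(self, handler):
        # Non-blocking acquire: over the set number, the request is blocked.
        if not self._sem.acquire(blocking=False):
            return None
        try:
            return handler()
        finally:
            self._sem.release()


limiter = RequestLimiter(max_in_flight=1)
assert limiter.try_handle(lambda: "ok") == "ok"
# While one request holds the only permit, a second concurrent one is rejected.
assert limiter.try_handle(lambda: limiter.try_handle(lambda: "inner")) is None
```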
The memory version data loaded from the data layer is cached in the logic layer as read-only version data, and the read-only version data is in turn cached in the logic layer as service version data. When one or more data processing requests arrive, service processing is performed on the service version data in the logic layer, and the processed service version data is copied and cached as the version data to be synchronized. When the version data to be synchronized is successfully written back to the data layer, the corresponding request processing result is returned to the requester. This achieves low-latency responses to highly concurrent requests while ensuring that the returned results are consistent with the data in the data layer, effectively improving the capacity to bear highly concurrent requests and supporting highly concurrent modification of hot-spot data. Meanwhile, based on a compare-and-swap (CAS) update with a monotonically increasing version number, only the modification based on the latest version number ultimately succeeds in updating the data layer, which effectively ensures the correctness of updates. Because the corresponding request processing result is returned to the requester only when the data write-back operation succeeds, the situation where a request actually fails but a success is returned due to a network or back-end node anomaly (for example, a server losing memory data that it had no time to write back to disk because of a power failure) is reduced, effectively ensuring data consistency.
In the user live-broadcast co-streaming service scenario, the concurrent bearing capacity of room co-streaming is effectively improved, as is the protection against request-flood attacks, reducing the chance that a server becomes abnormal or crashes under a flood of requests. High-concurrency service scenarios are also supported: for example, after an anchor enables free co-streaming, a free co-streaming notification is sent to all users, and their co-streaming requests can be answered by seating them immediately, reducing user waiting time and optimizing the user experience.
Fig. 7 is a schematic structural diagram of a cache-based concurrency capability improving device according to an embodiment of the present application. Referring to Fig. 7, the cache-based concurrency capability improving device includes a data loading module 31, a data processing module 32, a data synchronization module 33, and a data write-back module 34.
The data loading module 31 is configured to load the memory version data from the data layer and cache it in the logic layer as read-only version data. The data processing module 32 is configured to copy the read-only version data into service version data, cache the service version data in the logic layer, and perform service processing on the service version data according to one or more data processing requests issued by a requester. The data synchronization module 33 is configured to copy the service version data after service processing as the version data to be synchronized and cache it in the logic layer. The data write-back module 34 is configured to perform a data write-back operation on the data layer according to the version data to be synchronized, and to return a corresponding request processing result to the requester based on the result of the data write-back operation.
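The four modules could be mirrored by a skeleton such as the following (a hypothetical sketch only; the method names and the dictionary-based data layer are assumptions, and shallow `dict` copies stand in for the caching operations):

```python
class CacheConcurrencyDevice:
    """Skeleton mirroring the four modules of Fig. 7."""

    def load(self, data_layer):       # data loading module (31)
        self.read_only = dict(data_layer)

    def process(self, request):       # data processing module (32)
        self.business = dict(self.read_only)
        self.business.update(request)

    def synchronize(self):            # data synchronization module (33)
        self.to_sync = dict(self.business)

    def write_back(self, data_layer): # data write-back module (34)
        data_layer.update(self.to_sync)
        return True


device = CacheConcurrencyDevice()
layer = {"count": 0}
device.load(layer)
device.process({"count": 1})
device.synchronize()
assert device.write_back(layer) is True
assert layer == {"count": 1}
```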
The memory version data loaded from the data layer is cached in the logic layer as read-only version data, and the read-only version data is in turn cached in the logic layer as service version data. When one or more data processing requests arrive, service processing is performed on the service version data in the logic layer, and the processed service version data is copied and cached as the version data to be synchronized. When the version data to be synchronized is successfully written back to the data layer, the corresponding request processing result is returned to the requester, achieving low-latency responses to highly concurrent requests, ensuring that the returned results are consistent with the data layer, and effectively improving the capacity to bear highly concurrent requests.
The embodiments of the present application also provide cache-based concurrency capability improving equipment, which can integrate the cache-based concurrency capability improving device provided by the embodiments of the present application. Fig. 8 is a schematic structural diagram of the cache-based concurrency capability improving equipment according to an embodiment of the present application. Referring to Fig. 8, the equipment includes: an input device 43, an output device 44, a memory 42, and one or more processors 41. The memory 42 is configured to store one or more programs; when the one or more programs are executed by the one or more processors 41, the one or more processors 41 implement the cache-based concurrency capability improving method provided by the above embodiments. The device and equipment described above can be used to execute the cache-based concurrency capability improving method provided by any of the above embodiments, with corresponding functions and beneficial effects.
Embodiments of the present application further provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the cache-based concurrency capability improving method provided in the foregoing embodiments. Of course, the computer-executable instructions of the storage medium provided in the embodiments of the present application are not limited to the method operations described above, and may also perform related operations in the cache-based concurrency capability improving method provided in any embodiment of the present application. The device, equipment, and storage medium provided in the foregoing embodiments may execute the cache-based concurrency capability improving method provided in any embodiment of the present application; for technical details not described in detail above, reference may be made to the method provided in any embodiment of the present application.
The foregoing describes only the preferred embodiments of the present application and the technical principles employed. The present application is not limited to the particular embodiments described herein; various obvious changes, rearrangements, and substitutions can be made by those skilled in the art without departing from the scope of protection of the present application. Therefore, although the present application has been described in detail through the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from the concept of the present application; the scope of the present application is determined by the scope of the appended claims.

Claims (11)

1. A cache-based concurrency capability promotion method, characterized by comprising the following steps:
loading memory version data from a data layer, and caching the memory version data as read-only version data in a logic layer;
copying the read-only version data into service version data, caching the service version data in the logic layer, and performing service processing on the service version data according to one or more data processing requests sent by a requester;
copying the service version data subjected to service processing into version data to be synchronized, and caching the version data to be synchronized in the logic layer;
performing a data write-back operation on the data layer according to the version data to be synchronized, and returning a corresponding request processing result to the requester based on a result of the data write-back operation.
2. The cache-based concurrency capability promotion method according to claim 1, wherein the loading of the memory version data from the data layer comprises:
and loading the memory version data from the data layer based on the set loading time interval or based on the failure of the data write-back operation.
3. The cache-based concurrency capability promotion method according to claim 1, wherein the copying the read-only version data into business version data and caching the business version data in the logic layer comprises:
copying the read-only version data into service version data, incrementing a service version number of the service version data, and caching the service version data in the logic layer.
4. The cache-based concurrency capability promotion method according to claim 1, wherein the caching the version data to be synchronized in the logic layer comprises:
and caching the version data to be synchronized in the logic layer, and increasing the service version number of the service version data.
5. The cache-based concurrency capability promotion method according to claim 1, wherein the copying the service version data after the service processing into the version data to be synchronized comprises:
determining whether the logic layer is performing a data write-back operation on the data layer;
if so, continuing to perform service processing on the service version data according to the data processing request;
if not, copying the service version data subjected to service processing into the version data to be synchronized.
6. The cache-based concurrency capability promotion method according to claim 1, wherein the performing data write-back operation on the data layer according to the version data to be synchronized comprises:
determining a version number to be synchronized of the version data to be synchronized and a memory version number of the memory version data, and judging whether the version number to be synchronized corresponds to a next version number of the memory version number;
if so, performing a data write-back operation on the data layer according to the version data to be synchronized;
if not, reloading the memory version data from the data layer, and caching the memory version data as read-only version data in the logic layer.
7. The cache-based concurrency capability promotion method according to claim 1, wherein after the returning the corresponding request processing result to the requester, the method further comprises:
and updating the read-only version data according to the version data to be synchronized based on the success of data write-back operation, and updating the version data to be synchronized according to the service version data so as to perform the next data write-back operation.
8. The cache-based concurrency capability promotion method according to claim 1, wherein after caching the memory version data as read-only version data in a logic layer, the method further comprises:
and providing the read-only version data to the requester based on a correctness service request issued by the requester.
9. A cache-based concurrency capability promotion device, characterized by comprising a data loading module, a data processing module, a data synchronization module, and a data write-back module, wherein:
the data loading module is configured to load the memory version data from the data layer and cache the memory version data as read-only version data in the logic layer;
the data processing module is configured to copy the read-only version data into service version data, cache the service version data in the logic layer, and perform service processing on the service version data according to one or more data processing requests issued by a requester;
the data synchronization module is configured to copy the service version data subjected to service processing into version data to be synchronized and cache the version data to be synchronized in the logic layer; and
the data write-back module is configured to perform a data write-back operation on the data layer according to the version data to be synchronized and return a corresponding request processing result to the requester based on a result of the data write-back operation.
10. Cache-based concurrency capability promotion equipment, characterized by comprising: a memory and one or more processors;
the memory being configured to store one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the cache-based concurrency capability promotion method of any one of claims 1-8.
11. A storage medium containing computer-executable instructions which, when executed by a computer processor, perform the cache-based concurrency capability promotion method of any one of claims 1-8.
CN202110150945.1A 2021-02-03 2021-02-03 Concurrency capability lifting method, device, equipment and storage medium based on cache Active CN112860794B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110150945.1A CN112860794B (en) 2021-02-03 2021-02-03 Concurrency capability lifting method, device, equipment and storage medium based on cache


Publications (2)

Publication Number Publication Date
CN112860794A true CN112860794A (en) 2021-05-28
CN112860794B CN112860794B (en) 2024-08-13

Family

ID=75986448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110150945.1A Active CN112860794B (en) 2021-02-03 2021-02-03 Concurrency capability lifting method, device, equipment and storage medium based on cache

Country Status (1)

Country Link
CN (1) CN112860794B (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6578041B1 (en) * 2000-06-30 2003-06-10 Microsoft Corporation High speed on-line backup when using logical log operations
US20100124108A1 (en) * 2008-11-20 2010-05-20 Vishal Sarin Programming methods and memories
CN105740260A (en) * 2014-12-09 2016-07-06 阿里巴巴集团控股有限公司 Method and device for extracting template file data structure
CN106357787A (en) * 2016-09-30 2017-01-25 郑州云海信息技术有限公司 Storage disaster tolerant control system
CN106951456A (en) * 2017-02-24 2017-07-14 广东广信通信服务有限公司 A kind of memory database system and data handling system
CN107545060A (en) * 2017-08-31 2018-01-05 聚好看科技股份有限公司 A kind of method for limiting speed and device of redis principals and subordinates full dose synchrodata
CN107870970A (en) * 2017-09-06 2018-04-03 北京理工大学 A kind of data store query method and system
CN108234641A (en) * 2017-12-29 2018-06-29 北京奇虎科技有限公司 Data read-write method and device based on distributed consensus protocol realization
CN108228669A (en) * 2016-12-22 2018-06-29 腾讯科技(深圳)有限公司 A kind of method for caching and processing and device
CN108595451A (en) * 2017-12-04 2018-09-28 阿里巴巴集团控股有限公司 Service request processing method and device
CN108829413A (en) * 2018-05-07 2018-11-16 北京达佳互联信息技术有限公司 Data-updating method, device and computer readable storage medium, server
CN109413127A (en) * 2017-08-18 2019-03-01 北京京东尚科信息技术有限公司 A kind of method of data synchronization and device
US20190179560A1 (en) * 2017-12-11 2019-06-13 Micron Technology, Inc. Systems and methods for writing zeros to a memory array
CN110059135A (en) * 2019-04-12 2019-07-26 阿里巴巴集团控股有限公司 A kind of method of data synchronization and device
CN110321227A (en) * 2018-03-29 2019-10-11 腾讯科技(深圳)有限公司 Page data synchronous method, electronic device and computer readable storage medium
CN110737682A (en) * 2019-10-17 2020-01-31 贝壳技术有限公司 cache operation method, device, storage medium and electronic equipment
CN111258897A (en) * 2020-01-15 2020-06-09 网银在线(北京)科技有限公司 Service platform testing method, device and system
CN111427853A (en) * 2020-03-23 2020-07-17 腾讯科技(深圳)有限公司 Data loading method and related device
CN111581239A (en) * 2020-04-10 2020-08-25 支付宝实验室(新加坡)有限公司 Cache refreshing method and electronic equipment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIANWEI LIAO et al., "Fine Granularity and Adaptive Cache Update Mechanism for Client Caching", IEEE, 5 September 2018 (2018-09-05), pages 1587-1598 *
HUANG Shanshan et al., "A Highly Trusted Interaction Framework for Service Flow Behavior Control in Network Isolation Architectures" (in Chinese), Computer Systems & Applications, 15 October 2019 (2019-10-15), pages 98-102 *

Also Published As

Publication number Publication date
CN112860794B (en) 2024-08-13

Similar Documents

Publication Publication Date Title
US7640249B2 (en) System and method for transactional session management
US7900085B2 (en) Backup coordinator for distributed transactions
JP3830886B2 (en) Method for storing data in nonvolatile memory
US11429599B2 (en) Method and apparatus for updating database by using two-phase commit distributed transaction
CN106796546B (en) Method and apparatus for implementation in a data processing system
US20230098190A1 (en) Data processing method, apparatus, device and medium based on distributed storage
CN110134550B (en) Data processing method, device and computer readable storage medium
CN110990133B (en) Edge computing service migration method and device, electronic equipment and medium
CN113010549A (en) Data processing method based on remote multi-active system, related equipment and storage medium
CN115599747A (en) Metadata synchronization method, system and equipment of distributed storage system
CN113254536A (en) Database transaction processing method, system, electronic device and storage medium
CN114500416B (en) Delivery method and delivery system for maximum one message delivery
CN112559496B (en) Method and device for realizing transaction atomicity of distributed database
CN113946287A (en) Distributed storage system and data processing method and related device thereof
US20180246949A1 (en) Early thread return with secondary event writes
CN113110948A (en) Disaster tolerance data processing method and device
WO2023216636A1 (en) Transaction processing method and apparatus, and electronic device
CN112860794A (en) Cache-based concurrency capability improving method, device, equipment and storage medium
CN115774621B (en) Request processing method, system, equipment and computer readable storage medium
WO2023274409A1 (en) Method for executing transaction in blockchain system and blockchain node
CN115238006A (en) Retrieval data synchronization method, device, equipment and computer storage medium
CN112084048B (en) Kafka synchronous disk brushing method and device and message server
CN112162988A (en) Distributed transaction processing method and device and electronic equipment
JP2010176512A (en) Storage device, storage device control method, and storage device control program
CN110865874B (en) Transaction commit method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant