CN115061816A - Method and device for processing data in cache

Method and device for processing data in cache

Info

Publication number
CN115061816A
Authority
CN
China
Prior art keywords
cache
target
thread
data
state
Prior art date
Legal status
Pending
Application number
CN202210725664.9A
Other languages
Chinese (zh)
Inventor
Xia Yu (夏宇)
Li Hongzhe (李鸿哲)
Current Assignee
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date
Filing date
Publication date
Application filed by WeBank Co Ltd
Priority to CN202210725664.9A
Publication of CN115061816A
Priority to PCT/CN2022/127576 (WO2023245940A1)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval of structured data, e.g. relational data
    • G06F 16/23: Updating
    • G06F 16/2365: Ensuring data consistency and integrity
    • G06F 16/24: Querying
    • G06F 16/245: Query processing
    • G06F 16/2455: Query execution
    • G06F 16/24552: Database cache management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Computer Security & Cryptography (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a method and a device for processing data in a cache. The method comprises: obtaining a service processing request and allocating at least one target thread to it; and allocating the same target cache identifier to the at least one target thread according to the system state and the association between the target threads. The system state is a first state or a second state: data is obtained from a first cache in the first state and from a second cache in the second state, different caches correspond to different cache identifiers, and the first cache and the second cache store the same data. The first state indicates that the data in the first cache has been updated according to a data update request but the data in the second cache has not; the second state indicates that the data in the second cache has been updated according to the data update request. The target service is then implemented by obtaining the target data, through the at least one target thread, from the cache corresponding to the target cache identifier. The method and the device improve the accuracy of data acquisition and updating.

Description

Method and device for processing data in cache
Technical Field
The embodiment of the application relates to the technical field of data processing, in particular to a method and a device for processing data in a cache.
Background
With the development of computer technology, more and more technologies are applied in the financial field, and the traditional financial industry is gradually shifting to financial technology (Fintech). Data processing technology is no exception, but the security and real-time requirements of the financial industry also place higher demands on it. To meet the growing needs of various financial businesses, the use of caching is becoming more and more common.
In the prior art, when a related service is implemented, a thread may obtain data from a cache, and the service is then implemented using the obtained data.
However, the data in the cache may need to be updated, and the update is applied entry by entry over a period of time. This can cause a thread to read different values for the same data before and after the update, or break the correspondence between associated data, reducing the accuracy of data acquisition and updating and thereby affecting the normal implementation of the service.
Disclosure of Invention
The embodiment of the application provides a method and a device for processing data in a cache, so as to improve the accuracy of data acquisition and updating.
In a first aspect, an embodiment of the present application provides a method for processing data in a cache, including:
acquiring a service processing request corresponding to a target service, and allocating at least one target thread to the service processing request corresponding to the target service;
distributing the same target cache identifier for the at least one target thread according to a system state and an association relation between the target threads, wherein the system state is a first state or a second state, data are obtained from a first cache in the first state, data are obtained from a second cache in the second state, different caches correspond to different cache identifiers, the first cache and the second cache are used for storing the same data, the first state represents that the updating of the data in the first cache is completed according to a data updating request, but the updating of the data in the second cache is not completed, and the second state represents that the updating of the data in the second cache is completed according to the data updating request;
and acquiring target data from the cache corresponding to the target cache identifier through the at least one target thread, and realizing the target service according to the target data.
Optionally, the allocating the same target cache identifier for the at least one target thread according to the system state and the association relationship between threads includes:
determining a system state, and determining a target cache identifier according to the system state;
judging, for each target thread, whether the target thread has a cache identifier allocated by an associated thread;
and if the target thread does not have the cache identifier distributed by the associated thread, distributing the target cache identifier for the target thread.
Optionally, the determining a target cache identifier according to the system state includes:
if the system state is a first state, determining that a target cache identifier is a cache identifier of a first cache corresponding to the first state;
and if the system state is a second state, determining that the target cache identifier is the cache identifier of a second cache corresponding to the second state.
Optionally, the method further includes:
and if the target thread has the cache identifier distributed by the associated thread, determining the cache identifier distributed by the associated thread as the target cache identifier.
Optionally, the cache identifier is stored in an inheritable thread variable, and the allocating at least one target thread to the service processing request corresponding to the target service includes:
creating a sub-thread of the associated thread to obtain at least one initial target thread;
setting the inheritable thread variable of the at least one initial target thread as the inheritable thread variable of the associated thread to obtain at least one target thread;
and allocating at least one target thread for the service processing request corresponding to the target service.
Optionally, the cache identifier is stored in the custom attribute information of the thread, and the allocating at least one target thread to the service processing request corresponding to the target service includes:
acquiring at least one initial target thread from a thread pool;
setting the custom attribute information of the at least one initial target thread as the custom attribute information of the associated thread to obtain at least one target thread;
and allocating at least one target thread for the service processing request corresponding to the target service.
Optionally, the method further includes:
receiving the data updating request, wherein the data updating request comprises a data identifier to be updated and a corresponding data value to be updated;
updating the data value corresponding to the to-be-updated data identifier in the first cache corresponding to the first state into the to-be-updated data value according to the data updating request, and setting the system state to be the first state;
judging whether the thread which is allocated with the cache identifier corresponding to the second state is processed or not;
if the processing is finished, updating the data value corresponding to the to-be-updated data identifier in the second cache corresponding to the second state into the to-be-updated data value;
updating the system state to the second state.
Optionally, the method further includes:
if not, waiting for a random target duration;
and after waiting for the target duration, re-executing the step of judging whether the threads allocated the cache identifier corresponding to the second state have finished processing, and the subsequent steps.
Optionally, after the obtaining, by the at least one target thread, target data from the cache corresponding to the target cache identifier, the method further includes:
and clearing the target cache identification of the at least one target thread.
In a second aspect, an embodiment of the present application provides an apparatus for processing data in a cache, including:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a service processing request corresponding to a target service and distributing at least one target thread for the service processing request corresponding to the target service;
a processing module, configured to allocate the same target cache identifier for the at least one target thread according to a system state and an association relationship between target threads, where the system state is a first state or a second state, data is obtained from a first cache in the first state, data is obtained from a second cache in the second state, and different caches correspond to different cache identifiers, the first cache and the second cache are used to store the same data, the first state indicates that updating of data in the first cache is completed according to a data update request, but updating of data in the second cache is not completed yet, and the second state indicates that updating of data in the second cache is completed according to the data update request;
the processing module is further configured to obtain target data from the cache corresponding to the target cache identifier through the at least one target thread, and implement the target service according to the target data.
The embodiment of the application provides a method and a device for processing data in a cache. After the above scheme is adopted, a service processing request corresponding to a target service is obtained first, and at least one target thread is allocated to it. The same target cache identifier is then allocated to the at least one target thread corresponding to the same service according to the system state and the association between the target threads. The system state may be a first state or a second state: data is obtained from the first cache in the first state and from the second cache in the second state, different caches correspond to different cache identifiers, and the first cache and the second cache store the same data. The first state indicates that the data in the first cache has been updated according to a data update request but the data in the second cache has not; the second state indicates that the data in the second cache has been updated according to the data update request. After the same target cache identifier is allocated to the at least one target thread corresponding to the same service, the target service is implemented by obtaining the target data, through the at least one target thread, from the cache corresponding to the target cache identifier. By allocating different caches to different system states, keeping the data in the cache corresponding to each system state fixed, and then allocating the cache identifier of the same cache to the at least one target thread of the interaction corresponding to one service processing request according to the system state and the association between threads, the data obtained by the corresponding threads within one interaction of the same service is always consistent. This avoids threads reading inconsistent or non-corresponding data because of an update, improves the accuracy of data acquisition and updating, and thereby ensures the normal implementation of the service.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic architecture diagram of an application system of a method for processing data in a cache according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for processing data in a cache according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a method for processing data in a cache according to another embodiment of the present application;
fig. 4 is a schematic flowchart of a cache switching process according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a data processing apparatus in a cache according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the above-described drawings (if any) are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of including other sequential examples in addition to those illustrated or described. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
When implementing related services (e.g., applying for an account, transferring money, purchasing related products), data may be obtained from a cache through a thread, and the related services are then implemented with the obtained data. When the data in the cache needs to be updated, it can be updated by reloading the data into the cache. However, since there is only one cache for storing the data, even if the update logic runs fast, the data is still updated entry by entry, which may cause the following problems:
first, the same thread reads the same data cache value before and after the update process, and different results are obtained. For example, data key1 needs to be updated from v1 to v2, the thread reads key1 before cache load to get v1, and reads key1 again after cache load to get v2, which can be referred to as unrepeatable readability of the parameter cache.
Second, the same thread may get erroneous results when reading associated cached values before and after the update. For example, suppose there are two associated data items, userName and password, which are only correct when they match each other. Currently userName is aaa and password is bbb, and they need to be updated to userName kkk and password vvv. Since the data is updated in the cache one entry at a time, userName may be updated from aaa to kkk before password is updated from bbb to vvv. A thread reading during the update may therefore see userName kkk together with password bbb, so the data values actually read are incorrect. This situation can be referred to as an inconsistent read of the parameter cache.
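This inconsistent read can be reproduced with a minimal Java sketch (for illustration only; the class and helper names below are assumptions, not part of this application):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class SingleCacheProblem {
        // One shared cache, as in the single-cache scheme described above.
        static final Map<String, String> cache = new ConcurrentHashMap<>();

        public static void main(String[] args) throws InterruptedException {
            cache.put("userName", "aaa");
            cache.put("password", "bbb");

            // The reload writes the two associated entries one by one.
            Thread updater = new Thread(() -> {
                cache.put("userName", "kkk");
                sleepMs(20);                      // window between the two writes
                cache.put("password", "vvv");
            });
            // A reader that runs inside the window sees the mixed pair
            // userName=kkk / password=bbb, which never existed together.
            Thread reader = new Thread(() -> {
                sleepMs(10);
                System.out.println(cache.get("userName") + " / " + cache.get("password"));
            });

            updater.start();
            reader.start();
            updater.join();
            reader.join();
        }

        static void sleepMs(long ms) {
            try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }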
The two situations reduce the accuracy of data acquisition and updating, and further influence the normal realization of the service.
In the prior art, the aforementioned problems are sometimes solved with two caches, as follows. When the server starts, two caches are initialized, one read-only and one writable. Before an interactive request starts, the client requests a sessionId from the server; the server allocates the sessionId and maintains the correspondence between the sessionId and the read-only cache. When the client fetches the parameter cache, it carries the sessionId, and the server returns the value in the corresponding parameter cache according to the sessionId; since the sessionId is the same within one interaction, the same cache block is read throughout. Before the interactive request completes, the client notifies the server to delete the correspondence between the sessionId and the read-only cache. The server periodically loads the latest parameters into the writable cache; after writing completes, the caches are switched, the original writable cache becoming read-only and the original read-only cache becoming writable. New interactive requests then read the latest parameters from the switched read-only cache.
However, whether the above scheme can effectively achieve real-time consistency and repeatable reads of parameters depends on the service system reliably calling "apply for sessionId" at every interaction entry (so the server can allocate the correct cache) and "delete sessionId" at every interaction exit (so the server can determine that a cache is no longer in use before updating it). This has high access cost and poor reliability. That is, a user of this scheme must apply for a sessionId at each interactive request entry and delete it at the exit, even though these steps are meaningless to the actual service processing logic. Because an application system usually has multiple interaction entries (for example, a distributed system has multiple listening entries, timed task execution entries, and so on), the user must analyze and find all possible interaction entries in the system and add the "apply for sessionId" logic at each one. Moreover, one interaction has multiple expected exits, such as success, failure, and timeout, as well as unpredictable abnormal exits, such as code bugs, network exceptions, and machine failures. For expected exits the user can manually invoke the delete-sessionId operation, but for unpredictable abnormal exits the user can hardly guarantee that it has been invoked in all cases. Ensuring that the delete-sessionId operation at the interaction exit is never missed requires extensive analysis and validation. If the "apply for sessionId" logic is missed at some interaction entry of the service system, the parameter cache is fetched multiple times inside the interaction, and the aforementioned problems persist.
Based on these technical problems, the present application allocates different caches to different system states, with the data in the cache corresponding to each system state fixed, and then allocates the cache identifier of the same cache to the at least one target thread of the interaction corresponding to one service processing request according to the system state and the association between threads. The data obtained by the corresponding threads within one interaction of the same service is therefore always consistent, which avoids threads reading inconsistent or non-corresponding data because of an update, improves the accuracy of data acquisition and updating, and ensures the normal implementation of the service. In addition, cache identifiers are allocated with the thread as the dimension. Threads are the smallest execution unit of an application system, each interaction (or service) is processed by several associated threads, and the application system knows exactly when a thread starts and ends. This simplifies the allocation and maintenance of cache identifiers and further ensures the normal implementation of the service.
Fig. 1 is a schematic structural diagram of an application system of a data processing method in a cache according to an embodiment of the present application. As shown in fig. 1, the application system may include a server in which caches corresponding to different system states are deployed, for example a first cache corresponding to the first state and a second cache corresponding to the second state. The first state is the system state while the data in the cache is being updated, and the second state is the system state during normal reading.
After acquiring a service processing request corresponding to a target service (one service processing request may correspond to one interaction), the server may allocate at least one target thread to it, then allocate the same target cache identifier to the at least one target thread according to the system state and the association between the target threads, obtain the target data from the cache corresponding to the target cache identifier through the at least one target thread, and implement the target service from the target data. For example, if the target cache identifier is the identifier of the second cache corresponding to the second state, the target data is obtained from that second cache.
Optionally, the service processing request corresponding to the target service may be triggered at a fixed time or in real time. Correspondingly, a service processing request may be triggered every preset period according to a predefined timed task to implement the target service (e.g., a server polling service or a timed notification service), or it may be triggered in real time by a user's touch operation; this is not limited here.
The technical solution of the present application will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a schematic flowchart of a method for processing data in a cache according to an embodiment of the present application, where the method of this embodiment may be executed by a server. As shown in fig. 2, the method of this embodiment may include:
s201: and acquiring a service processing request corresponding to the target service, and allocating at least one target thread to the service processing request corresponding to the target service.
In this embodiment, when the target service is implemented, the service processing request may be triggered. Optionally, the service processing request may be triggered at regular time, or may be triggered in real time based on a touch operation of a user. In addition, the service processing request corresponding to the target service may be directly obtained locally at the server, or the service processing request corresponding to the target service sent by the terminal device may also be obtained, which is not limited in detail here.
In addition, after the service processing request is acquired, at least one target thread may be allocated to the service processing request corresponding to the target service. The number of target threads may be one or more, allocated according to the actual number of threads the target task requires.
S202: and distributing the same target cache identifier for at least one target thread according to the association relationship between the system state and the target threads, wherein the system state is a first state or a second state, the data is obtained from the first cache in the first state, the data is obtained from the second cache in the second state, different caches correspond to different cache identifiers, the first cache and the second cache are used for storing the same data, the first state indicates that the updating of the data in the first cache is finished according to the data updating request, but the updating of the data in the second cache is not finished, and the second state indicates that the updating of the data in the second cache is finished according to the data updating request.
In this embodiment, in order to avoid inconsistent or non-corresponding reads, the same data value must stay the same throughout one execution of the target task (for example, the port number of the target task is always 8888 during one execution). Since the same data value in the cache corresponding to one system state does not change during one execution of the target task, the cache identifier corresponding to that system state, that is, the identifier of the same cache, may be allocated to the at least one target thread of one execution of the target task, where that system state is the state determined when the cache identifier is allocated to the first target thread. That is, the at least one target thread of one interaction may all be allocated the cache identifier corresponding to the first state, or all be allocated the cache identifier corresponding to the second state, so that the target threads allocated for the target task always obtain data from the same cache during one execution, and the data in that cache is fixed in the corresponding state.
In addition, the system state may be a first state or a second state; data is obtained from the first cache in the first state and from the second cache in the second state, and different caches correspond to different cache identifiers. The first cache and the second cache store the same data. The first state indicates that the data in the first cache has been updated according to the data update request but the data in the second cache has not; the second state indicates that the data in the second cache has been updated according to the data update request. Optionally, when the data in the cache needs to be updated, it can be updated by receiving a data update request. When a data update request is received, the system is in the second state, that is, threads are obtaining data from the second cache, so the data in the first cache corresponding to the first state can be updated first. After the data in the first cache is updated, the system state is switched to the first state, and threads then obtain data from the first cache, so the data in the second cache can now be updated, avoiding threads reading inconsistent data because of the update. Furthermore, when updating the data in the second cache, in order to ensure that the threads of one interaction obtain consistent data from the caches, the second cache may be updated only after all threads reading it have finished executing.
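The following is a minimal Java sketch of this update sequence; the class and member names are assumptions made for illustration and are not the application's own code:

    import java.util.Map;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentLinkedQueue;

    public class DualCacheUpdater {
        enum State { FIRST, SECOND }

        final Map<String, String> firstCache = new ConcurrentHashMap<>();
        final Map<String, String> secondCache = new ConcurrentHashMap<>();
        // Threads currently reading the second cache (see the queue mechanism below).
        final Queue<Long> secondCacheReaders = new ConcurrentLinkedQueue<>();
        volatile State systemState = State.SECOND;   // normal operation

        // Handle a data update request for one key/value pair.
        public void applyUpdate(String key, String value) throws InterruptedException {
            firstCache.put(key, value);       // 1. update the first cache; nobody reads it yet
            systemState = State.FIRST;        // 2. new threads now read the first cache
            while (!secondCacheReaders.isEmpty()) {
                Thread.sleep((long) (Math.random() * 100));  // 3. random wait, then recheck
            }
            secondCache.put(key, value);      // 4. no thread still reads the second cache
            systemState = State.SECOND;       // 5. return to the normal state
        }
    }

The random wait and recheck in step 3 corresponds to the retry described in the optional steps above.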
Optionally, there may be one or more target threads. After the service processing request is received, a target thread may be allocated to it, a target cache identifier may be allocated to that thread according to the system state, and the thread obtains the target data from the corresponding cache through the allocated identifier. After the target thread finishes executing, it is determined whether other threads are needed to continue implementing the target service, that is, whether the target service is complete. If it is complete, the process ends; if not, further target threads are allocated, their target cache identifier is determined from their association with the existing target thread, and they obtain target data from the corresponding cache through that identifier until the target task is complete. A newly allocated target thread has the same target cache identifier as the existing target thread, that is, both obtain data from the same cache, and the data in that cache does not change in the current system state, so read consistency is guaranteed.
The data update request may change, delete, or add data in the cache. It may be obtained before the service processing request is received, after it is received, or at the same time; this is not limited here.
S203: and acquiring target data from the cache corresponding to the target cache identifier through at least one target thread, and realizing the target service according to the target data.
In this embodiment, there may be one or more target threads. After a target thread has been allocated the target cache identifier, the target service can be implemented by that thread: the thread obtains the target data from the corresponding cache, and the target service is then implemented from the obtained data. Illustratively, the target service may be an account application service, a transfer service, a credit line change service, and the like. Implementing the target service from the acquired target data can follow existing practice and is not discussed in detail here.
In addition, within one interaction, the data a thread reads from the cache is always fixed: in the first state, the data in the first cache corresponding to the first state does not change; in the second state, the data in the second cache corresponding to the second state does not change. This avoids inconsistent results when the same data is read from the cache more than once within one interaction.
In addition, after target cache identifiers are allocated, target threads with the same identifier can be stored in the same queue, namely the queue corresponding to that cache identifier. Subsequently, when data is obtained through a target thread, the thread can be fetched directly from the queue corresponding to the target cache identifier, and the cache can be determined directly from the identifier corresponding to the queue, so the cache does not have to be determined for each target thread individually, which improves the efficiency of data acquisition.
After the above scheme is adopted, a service processing request corresponding to a target service is obtained first, and at least one target thread is allocated to it. The same target cache identifier is then allocated to the at least one target thread corresponding to the same service according to the system state and the association between the target threads. The system state may be the first state or the second state: data is obtained from the first cache in the first state and from the second cache in the second state, different caches correspond to different cache identifiers, and the first cache and the second cache store the same data. The first state indicates that the data in the first cache has been updated according to the data update request but the data in the second cache has not; the second state indicates that the data in the second cache has been updated according to the data update request. After the same target cache identifier is allocated to the at least one target thread corresponding to the same service, the target service is implemented by obtaining, through the at least one target thread, the target data from the cache corresponding to the target cache identifier. Because different caches are allocated to different system states, the data in the cache corresponding to each system state is fixed, and the cache identifier of the same cache is allocated to the at least one target thread of the interaction corresponding to one service processing request according to the system state and the association between threads, the data obtained by the corresponding threads within one interaction of the same service is always consistent. This avoids threads reading inconsistent or non-corresponding data because of an update, improves the accuracy of data acquisition and updating, and thereby ensures the normal implementation of the service.
Based on the method of fig. 2, the present specification also provides some specific embodiments of the method, which are described below.
In another embodiment, the allocating the same target cache identifier for the at least one target thread according to the system state and the association relationship between the target threads may specifically include:
and determining a system state, and determining a target cache identifier according to the system state.
And judging, for each target thread, whether the target thread has a cache identifier allocated by an associated thread.
And if the target thread does not have the cache identifier distributed by the associated thread, distributing the target cache identifier for the target thread.
In this embodiment, after at least one target thread is allocated to the service processing request of the target service, the system state may be determined first, and the target cache identifier to be allocated determined from it. Then, for each target thread, it is determined whether the thread already has a cache identifier allocated by an associated thread. Generally, the first target thread allocated to the service processing request of the target service is created by the operating system; it has no associated thread and therefore no allocated cache identifier. If one target thread cannot complete the target service, further target threads can be created to continue implementing it, and those later threads are associated threads of the first target thread; a target thread and its associated thread are linked by allocating the cache identifier of the associated thread to the target thread. The target threads may include the associated thread, that is, the associated thread is the first target thread.
Further, the determining a target cache identifier according to the system state may specifically include:
and if the system state is the first state, determining that the target cache identifier is the cache identifier of the first cache corresponding to the first state.
And if the system state is the second state, determining that the target cache identifier is the cache identifier of the second cache corresponding to the second state.
In this embodiment, after the system state is determined, the cache from which data is to be obtained can be determined from it. Correspondingly, there may be two caches: the second cache, which corresponds to normal operation of the system, with the system in the second state, and the first cache, which corresponds to the system being updated, with the system in the first state. If the system state is determined to be the first state, data must be obtained from the first cache, so the target cache identifier is determined to be the identifier of the first cache; if the system state is determined to be the second state, data must be obtained from the second cache, so the target cache identifier is determined to be the identifier of the second cache.
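In code this decision is a single branch. A sketch, reusing the State enum of the DualCacheUpdater sketch above, with A and B as the example identifiers used later in this description:

    // First state -> identifier of the first cache (the updating-state cache);
    // second state -> identifier of the second cache (the normal-state cache).
    static final String FIRST_CACHE_FLAG = "B";
    static final String SECOND_CACHE_FLAG = "A";

    static String decideTargetCacheFlag(DualCacheUpdater.State systemState) {
        return systemState == DualCacheUpdater.State.FIRST ? FIRST_CACHE_FLAG : SECOND_CACHE_FLAG;
    }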
In another embodiment, the method may further include:
and if the target thread has the cache identifier distributed by the associated thread, determining the cache identifier distributed by the associated thread as the target cache identifier.
In this embodiment, if the target thread has a cache identifier allocated by an associated thread, the target thread has an associated thread; the associated thread's cache identifier can be obtained directly and determined to be the target cache identifier. The target thread and its associated thread then both obtain data from the cache corresponding to the same cache identifier, which ensures data consistency.
In addition, there can be two relationships between the target thread and the associated thread. In the first, the associated thread created the target thread, that is, the associated thread is the parent thread of the target thread and the target thread is its child thread. In the second, the target thread is allocated from a thread pool, and the target thread and the associated thread belong to the same interaction, that is, while the associated thread executes the target service, the thread pool must allocate a new target thread to implement the complete target service.
Further, in one implementation, the cache identifier is stored in an inheritable thread variable, and the allocating at least one target thread to the service processing request corresponding to the target service includes:
and creating a sub-thread of the associated thread to obtain at least one initial target thread.
And setting the inheritable thread variable of the at least one initial target thread as the inheritable thread variable of the associated thread to obtain at least one target thread.
And allocating at least one target thread for the service processing request corresponding to the target service.
Specifically, when a child thread of the associated thread is created, the inheritable thread variable of the child thread (i.e. the newly created target thread) can be assigned directly, so the child thread also obtains the parent thread's thread variable and the values of the parent's and child's inheritable variables stay consistent. Subsequently, if the target thread has a cache identifier allocated by the associated thread, that identifier is directly determined to be the target cache identifier.
For example, if the target service is not fully processed in one thread and another new thread is needed to continue processing, a child thread may be created or a thread allocated from the thread pool. If a child thread is created (i.e. there is an inheritance relationship between the threads), then when the child thread (i.e. the target thread) is created, the parent thread (i.e. the associated thread) checks whether an InheritableThreadLocal variable exists; if so, the child thread's inheritable variable is assigned at creation, so that the child thread also obtains the parent's thread variable and the values stay consistent. Correspondingly, the cache identifier can be put into the parent thread's inheritable variable, so the identifier is passed along between threads with an inheritance relationship, ensuring that the cache identifiers of the threads within one interaction are consistent. For example, assuming child thread T1 is created in the associated thread T, the cache identifier of thread T is passed to child thread T1:
InheritableThreadLocal of thread T = { readCacheFlag = A };
InheritableThreadLocal of child thread T1 = { readCacheFlag = A }.
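A runnable Java sketch of this mechanism (the variable name readCacheFlag follows the example above; the class name is assumed) shows the identifier flowing from T to T1:

    public class InheritDemo {
        // The cache read identifier, stored as an inheritable thread variable.
        static final InheritableThreadLocal<String> readCacheFlag = new InheritableThreadLocal<>();

        public static void main(String[] args) throws InterruptedException {
            readCacheFlag.set("A");                         // associated (parent) thread T
            Thread t1 = new Thread(() ->                    // child thread T1
                    System.out.println("T1 sees readCacheFlag = " + readCacheFlag.get()));
            t1.start();                                     // prints "T1 sees readCacheFlag = A"
            t1.join();
        }
    }

InheritableThreadLocal copies the parent's value when the child Thread object is constructed, which is exactly the assignment-at-creation behavior described above.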
Further, in another implementation, the cache identifier is stored in the thread's custom attribute information, and the allocating at least one target thread to the service processing request corresponding to the target service includes:
at least one initial target thread is obtained from the thread pool.
And setting the custom attribute information of the at least one initial target thread as the custom attribute information of the associated thread to obtain at least one target thread.
And allocating at least one target thread for the service processing request corresponding to the target service.
Specifically, if the target thread is allocated from a thread pool, there is no inheritance relationship between pool threads, so the identifier cannot be passed through the InheritableThreadLocal mechanism and the consistency of the parent's and child's cache identifiers cannot be ensured that way. To solve this, a thread decoration class RunnableWrapper can be created, custom attribute information added to the decorated thread object, and the associated thread's cache identifier stored in that custom attribute information.
For example, because there is no inheritance relationship between threads allocated by the thread pool, the identifier can no longer be passed through InheritableThreadLocal to keep the parent and child cache identifiers consistent, so a RunnableWrapper decoration class is created as described above. Although the RunnableWrapper class solves the problem of passing values between pool threads without an inheritance relationship, it forces the user to use RunnableWrapper when creating threads, which is very intrusive. Therefore, the thread pool can be functionally enhanced: the user still uses the native Runnable class when creating a thread, and when the thread is submitted to the pool, the Runnable is automatically converted into a RunnableWrapper, so the user is not affected. For example, suppose a new thread Y is needed in thread T, and the thread pool allocates thread Y_Wrapper for it:
InheritableThreadLocal of thread T = { readCacheFlag = A };
Y_Wrapper object = { parentReadCacheFlag = A; runnable = thread Y }.
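A sketch of such a decoration class follows; the field name parentReadCacheFlag follows the example above, and the readCacheFlag variable is reused from the previous sketch (in a real system there would be one shared flag variable):

    public class RunnableWrapper implements Runnable {
        private final String parentReadCacheFlag;  // captured from the submitting thread T
        private final Runnable runnable;           // the user's original task (thread Y)

        public RunnableWrapper(Runnable runnable) {
            // Constructed on the submitting thread, so this reads T's flag.
            this.parentReadCacheFlag = InheritDemo.readCacheFlag.get();
            this.runnable = runnable;
        }

        @Override
        public void run() {
            // Runs on a pool thread: adopt the submitter's cache identifier first.
            InheritDemo.readCacheFlag.set(parentReadCacheFlag);
            try {
                runnable.run();
            } finally {
                InheritDemo.readCacheFlag.remove();  // clear so the pool thread can be reused
            }
        }
    }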
In summary, by allocating the target cache identifier to threads in these different ways, automatic thread dyeing is achieved, and the target cache identifiers allocated to the threads of one interaction are guaranteed to be the same. The data obtained by the threads of one interaction is therefore the same, which ensures the consistency and accuracy of data reads.
In addition, after the obtaining, by the at least one target thread, target data from the cache corresponding to the cache identifier, the method further includes:
and clearing the target cache identification of the at least one target thread.
In this way, by clearing the target cache identifier of the at least one target thread, the target thread can be recycled directly, which increases the recycling efficiency of the target thread and saves the computing resources of the device.
Fig. 3 is a schematic flow diagram of a method for processing data in a cache according to another embodiment of the present application. As shown in fig. 3, in this embodiment a service processing request of a target service may be received first, a target thread created for it, and thread execution started once creation completes. A cache decision is then made to determine the target cache identifier. If the target cache identifier is that of the first cache corresponding to the first state, the inheritable variable of the target thread is set from that identifier; similarly, if it is that of the second cache corresponding to the second state, the inheritable variable is set from that identifier, and the target thread is added to the corresponding cache read queue. The target service is then executed through the threads in the cache queue: the parameter identifier to be read and the target cache identifier are determined from the service processing request, and the target parameter corresponding to the parameter identifier is obtained from the corresponding cache according to the target cache identifier. It is then judged whether other threads must continue execution. If so, a new target thread is allocated and given the cache identifier of its associated thread, and the new thread continues the above flow; meanwhile the previous target thread (i.e. the associated thread of the newly allocated one) is deleted from the corresponding cache queue and its inheritable thread variable is cleared. If no further threads are needed, the previous target thread is likewise deleted from the corresponding cache queue and its inheritable thread variable cleared, thereby completing the target service.
In this embodiment, when the operating system starts, the global cache block GlobalThreadFlagCache may be initialized first. At this time:
System state machineStatus = NORMAL.
Cache read queue A cacheReadQueueA is empty.
Cache read queue B cacheReadQueueB is empty.
The parameters may then be loaded into cache A and cache B; the user can customize which parameters are loaded. Assume one parameter key-value pair, port = 8888, is loaded. At this time:
Cache A = { port = 8888 }.
Cache B = { port = 8888 }.
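A sketch of this start-up initialization; the name GlobalThreadFlagCache appears in this description, while the field types and the loadParameters helper are assumptions:

    import java.util.Map;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentLinkedQueue;

    public class GlobalThreadFlagCache {
        static volatile String machineStatus = "NORMAL";
        static final Queue<Long> cacheReadQueueA = new ConcurrentLinkedQueue<>();
        static final Queue<Long> cacheReadQueueB = new ConcurrentLinkedQueue<>();
        static final Map<String, String> cacheA = new ConcurrentHashMap<>();
        static final Map<String, String> cacheB = new ConcurrentHashMap<>();

        // Load the user-defined parameters into both caches at start-up.
        static void loadParameters() {
            cacheA.put("port", "8888");
            cacheB.put("port", "8888");
        }
    }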
After the target service is started, the application system may receive a service processing request. If this is the first target thread to start processing, the target thread is created by the application system; assume it is thread T. If the target service cannot be completed in one target thread, another new thread is needed to continue processing, in which case a child thread of the target thread is created or a thread is allocated from the thread pool.
Optionally, if a child thread is created, the parent thread checks whether an InheritableThreadLocal variable exists; if so, the child thread's inheritable variable is assigned when the child is created, so the child thread also obtains the parent's thread variable and the values of parent and child stay consistent. The cache identifier can then be placed in the thread's inheritable variable, so that the identifier is passed along between threads with an inheritance relationship and the cache identifiers of the threads within the interaction stay consistent.
Assuming child thread T1 is created in thread T, the cache read identifier of thread T is passed to child thread T1:
InheritableThreadLocal of thread T = { readCacheFlag = A }.
InheritableThreadLocal of child thread T1 = { readCacheFlag = A }.
Optionally, for threads allocated from the thread pool, there is no inheritance relationship between pool threads, so the identifier cannot be passed through the InheritableThreadLocal mechanism to keep the parent's and child's cache identifiers consistent. To solve this, a thread decoration class RunnableWrapper can be created, an attribute added to the decorated thread object, and the associated thread's cache read identifier stored in it. Although the RunnableWrapper class solves the problem of passing values between pool threads without an inheritance relationship, it forces the user to use RunnableWrapper when creating threads, which is very intrusive. Therefore, the thread pool can be functionally enhanced: the user still uses the native Runnable class when creating a thread, and when the thread is submitted to the pool, the Runnable is automatically converted into a RunnableWrapper, so the user is not affected. For example, suppose a new thread Y is needed in thread T, and the thread pool allocates thread Y_Wrapper for it:
InheritableThreadLocal of thread T = { readCacheFlag = A }.
Y_Wrapper object = { parentReadCacheFlag = A; runnable = thread Y }.
Then the thread-execution flow begins. The two cases, a created target thread and a pool-allocated target thread, are described separately:
optionally, if the created target thread is a target thread, the created thread may be automatically dyed and faded, for example, AOP (Aspect organized Programming) technology may be used, and before the target thread executes the target service, the processing logic that allocates the target cache identifier to the target thread may be automatically entered, and after the target service is executed, the processing logic that recycles the target thread may be automatically entered. The specific treatment process can be as follows: defining a tangent point, and performing function enhancement on all classes which realize the java. Then, a tangent point surrounding method can be defined, and enhancement processing is carried out before and after the target service is executed. Static compilation can then be performed, automatically adding the enhanced processing code to the execution code of all target threads, without requiring additional processing by the user.
For a target thread allocated from the pool, the function enhancement is performed in the RunnableWrapper class around the target thread's execution of the target service. Before the target thread executes the target service, the processing logic that allocates the target cache identifier runs; after the target service has executed, the thread-recycling logic runs. Since the target threads allocated in the thread pool are all RunnableWrapper instances, they all execute the enhanced code automatically.
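An AspectJ-style sketch of this around enhancement; the pointcut and the markThread/clearThreadFlag calls follow this description (the two methods are sketched further below), while the weaving setup itself is an assumption:

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    @Aspect
    public class ThreadDyeAspect {
        // Wrap the run() of every woven Runnable implementor.
        @Around("execution(* java.lang.Runnable+.run(..))")
        public Object aroundRun(ProceedingJoinPoint pjp) throws Throwable {
            GlobalThreadFlagCache.markThread();          // dye: allocate the cache identifier
            try {
                return pjp.proceed();                    // execute the target service
            } finally {
                GlobalThreadFlagCache.clearThreadFlag(); // recycle: leave queue, clear flag
            }
        }
    }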
In addition, the judgeUseWhichCache method of GlobalThreadFlagCache may be called when making the cache decision. The default implementation of this method is: determine the cache the current target thread should use according to the current system state; if the system state is NORMAL (i.e. the second state), return the cache identifier corresponding to the normal state (for example, A), otherwise return the cache identifier corresponding to the updating state (for example, B).
Illustratively, the system state machineStatus = NORMAL.
The cache read identifier readCacheFlag = A.
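Illustratively, a fragment of a GlobalThreadFlagCache sketch with this default decision might read as follows; the SystemState names are assumptions, while the A/B identifiers follow the running example:

```java
// Sketch of the default judgeUseWhichCache implementation; the mapping of
// states to the identifiers A and B follows the running example above.
enum SystemState { NORMAL, UPDATING }

static volatile SystemState machineStatus = SystemState.NORMAL;

static String judgeUseWhichCache() {
    // NORMAL (the second state): read the cache for the normal state (A);
    // otherwise: read the cache for the updated state (B).
    return machineStatus == SystemState.NORMAL ? "A" : "B";
}
```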
In addition, when setting the inheritable thread variable of the target thread, the markThread method of GlobalThreadFlagCache can be called. Specifically, it may be determined whether there is an associated thread; if there is one, the cache read identifier of the associated thread is used to assign the inheritable thread variable of the current thread, and if there is none, the cache read identifier decided in the previous step is used for the assignment.
For example, if the target thread T enters the processing of the target service for the first time, the target thread T has no associated thread, so the target cache identifier decided in the previous step is assigned to its inheritable thread variable. That is, the inheritable thread variable InheritableThreadLocal of target thread T = { readCacheFlag = A }.
If a child thread T1 is created in the target thread T, then because the child thread T1 and the target thread T have an inheritance relationship, the target cache identifier of the target thread T is directly used to assign the inheritable thread variable of the child thread T1. That is, the inheritable thread variable InheritableThreadLocal of child thread T1 = { readCacheFlag = A }.
If the thread is the thread Y_Wrapper allocated by the thread pool for the target thread T, then Y_Wrapper can determine from its parentReadCacheFlag attribute that an associated thread exists, so the cache identifier of the associated thread can be used to assign the inheritable thread variable of the wrapped thread Y. That is, given the Y_Wrapper object = { parentReadCacheFlag = A; runnable = thread Y }, the inheritable thread variable of thread Y = { readCacheFlag = A }.
In addition, the target thread can be added to the corresponding cache read queue; this is also done in the markThread method of GlobalThreadFlagCache. If the target cache identifier of the target thread is A, the target thread is put into cache queue A; otherwise it is put into cache queue B. When the cache is subsequently switched, which caches are in use by threads in the system can be determined by checking whether any target thread exists in each cache queue, and from this it can be judged whether the cache can be switched. At this time:
Cache read queue A cacheReadQueueA = { thread ID of thread T }.
Cache read queue B cacheReadQueueB is empty.
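Illustratively, markThread and the two read queues might be sketched as follows; the concrete field types and the currentReadCacheFlag helper are assumptions for illustration:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of GlobalThreadFlagCache: the inheritable variable plus the two
// cache read queues used later to decide whether a switch is safe.
public class GlobalThreadFlagCache {

    static final InheritableThreadLocal<String> readCacheFlag =
            new InheritableThreadLocal<>();
    static final Set<Long> cacheReadQueueA = ConcurrentHashMap.newKeySet();
    static final Set<Long> cacheReadQueueB = ConcurrentHashMap.newKeySet();

    // Assumed helper used by the RunnableWrapper sketch above: returns the
    // submitting thread's current cache read identifier.
    public static String currentReadCacheFlag() {
        return readCacheFlag.get();
    }

    // decidedFlag: identifier from judgeUseWhichCache; associatedFlag: the
    // associated thread's identifier, or null when there is none.
    public static void markThread(String decidedFlag, String associatedFlag) {
        // Prefer the associated thread's identifier when one exists.
        String flag = (associatedFlag != null) ? associatedFlag : decidedFlag;
        readCacheFlag.set(flag);
        // Register the thread so a later cache switch can tell which cache
        // is still being read.
        Set<Long> queue = "A".equals(flag) ? cacheReadQueueA : cacheReadQueueB;
        queue.add(Thread.currentThread().getId());
    }
}
```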
Through the above process, automatic dyeing of the target thread is achieved, and the flow of executing the target service can then be entered. If the parameter cache needs to be read, the target cache identifier of the current thread is judged first; if the target cache identifier is A, the parameter is read from cache A, otherwise it is read from cache B. Suppose thread T needs to read the cached parameter port at this time:
Because thread T's InheritableThreadLocal is { readCacheFlag = A }, the value of port is read from cache A, and the result 8888 is returned.
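Illustratively, the dyed read path might be sketched as follows, continuing the GlobalThreadFlagCache sketch above; the cache maps cacheA and cacheB are assumptions for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the dyed read path: the parameter is served from whichever
// cache matches the current thread's target cache identifier.
class ParamCache {
    static final Map<String, Object> cacheA = new ConcurrentHashMap<>();
    static final Map<String, Object> cacheB = new ConcurrentHashMap<>();

    static Object readParam(String key) {
        String flag = GlobalThreadFlagCache.readCacheFlag.get();
        Map<String, Object> cache = "A".equals(flag) ? cacheA : cacheB;
        // e.g. readParam("port") returns 8888 from cache A in this example
        return cache.get(key);
    }
}
```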
It is then judged whether other target threads are needed; if so, a value is assigned to the new target thread according to the inheritable thread variable of the target thread. After that, the clearThreadFlag method of GlobalThreadFlagCache can be called to delete the target thread from the corresponding cache queue, indicating that the target thread no longer reads the corresponding cache. At this time:
Cache read queue A cacheReadQueueA is empty.
Cache read queue B cacheReadQueueB is empty.
In addition, the clearThreadFlag method of GlobalThreadFlagCache also sets the inheritable thread variable of the target thread to null. A created thread is recycled by the system and not used again, but a thread allocated from the thread pool is not recycled and may be reused; to avoid a parameter read error caused by a reused thread still carrying the cache read identifier assigned last time, the inheritable thread variable of the thread must be set to null in this step. At this time:
The inheritable thread variable InheritableThreadLocal of thread T is null.
The inheritable thread variable InheritableThreadLocal of thread T1 is null.
The inheritable thread variable InheritableThreadLocal of thread Y is null.
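Illustratively, the recycling step might be sketched as follows, continuing the GlobalThreadFlagCache sketch above:

```java
// Sketch of clearThreadFlag: deregister the thread from both read queues
// and clear the inheritable variable (i.e. set it to null), so that a
// reused pool thread cannot read with a stale identifier.
public static void clearThreadFlag() {
    long id = Thread.currentThread().getId();
    cacheReadQueueA.remove(id);
    cacheReadQueueB.remove(id);
    readCacheFlag.remove(); // equivalent to setting the variable to null
}
```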
In addition, if the target service does not need other threads to continue execution, it indicates that the target service is completed, and the process may be ended.
In conclusion, the present application is non-intrusive to the business logic, so that the application program can achieve consistent and repeatable reading of the parameter configuration without the user side being aware of it. The application program can also safely modify the cache parameters at runtime; for a parameter used within one interaction process, the problems of two reads being inconsistent and of associated parameters being in error are avoided, improving the accuracy of data reading.
In another embodiment, the method may further include:
And receiving the data update request, where the data update request includes the to-be-updated data identifier and the corresponding to-be-updated data value.
And updating the data value corresponding to the to-be-updated data identifier in the first cache corresponding to the first state into the to-be-updated data value according to the data updating request, and setting the system state to be the first state.
And judging whether the processing of the thread which is allocated with the cache identifier corresponding to the second state is finished.
And if the processing is finished, updating the data value corresponding to the to-be-updated data identifier in the second cache corresponding to the second state into the to-be-updated data value.
Updating the system state to the second state.
In this embodiment, the data in the cache may need to be updated. When updating the data in the cache, in order to avoid threads in the same interaction process reading inconsistent data, or reading inconsistent values before and after the update, the following procedure can be used. When a data update request is received, the data value corresponding to the to-be-updated data identifier in the first cache corresponding to the first state is updated to the to-be-updated data value, so that the data update of the first cache is completed first; the system state is then set to the first state, which ensures that the data in the cache used in the current state is fixed. Next, the second cache corresponding to the second state needs to be updated. Since the data read from the cache by threads in the same interaction process must be consistent, the update of the second cache must wait until the threads reading the second cache have finished. Specifically, it may be judged whether the threads that were allocated the cache identifier corresponding to the second state have finished processing; if so, the data value corresponding to the to-be-updated data identifier in the second cache is updated to the to-be-updated data value, and the system state is updated to the second state.
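Illustratively, this two-phase update flow, together with the random wait described next, might be sketched as follows; names such as firstCache, secondCacheReadQueue and SystemState are assumptions for illustration, not the claimed implementation:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ThreadLocalRandom;

// Sketch of the two-phase cache update; field initialization is assumed
// to happen elsewhere.
class CacheUpdater {
    Map<String, Object> firstCache;   // cache read in the first state
    Map<String, Object> secondCache;  // cache read in the second state
    Set<Long> secondCacheReadQueue;   // threads still reading the second cache
    volatile SystemState systemState;

    enum SystemState { FIRST, SECOND }

    void handleDataUpdate(String key, Object newValue) throws InterruptedException {
        // 1. Update the first cache, then enter the first state so that newly
        //    arriving threads read the already-updated first cache.
        firstCache.put(key, newValue);
        systemState = SystemState.FIRST;
        // 2. The second cache may only be rewritten once no thread holding the
        //    second state's cache identifier is still processing.
        while (!secondCacheReadQueue.isEmpty()) {
            // Randomly wait a target duration (any value from 0 to 60 seconds),
            // then re-check whether the queue has drained.
            Thread.sleep(ThreadLocalRandom.current().nextLong(60_001L));
        }
        // 3. Safe to update the second cache and return to the second state.
        secondCache.put(key, newValue);
        systemState = SystemState.SECOND;
    }
}
```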
Further, the method may further include:
and if not, randomly waiting for the target duration.
And after waiting for the target duration, re-executing the steps of judging whether the processing of the thread which is allocated with the cache identifier corresponding to the second state is finished and the subsequent steps.
Specifically, the next cache switch can be performed only when all threads in the application system are reading the first cache. After the system state is set to the first state, new target threads read the first cache corresponding to the first state, so whether all threads are reading the first cache can be known simply by judging whether the queue corresponding to the second cache is empty. If a thread allocated the cache read identifier of the second cache corresponding to the second state has not finished processing, a target duration can be waited at random, and the judgment is performed again after the target duration. The target duration can be any value from 0 to 60 seconds.
In conclusion, by waiting for a randomly chosen target duration, the success rate of switching the cache can be improved, further ensuring the consistency of data reading.
Fig. 4 is a schematic flow chart of the cache switching process provided in this embodiment. As shown in fig. 4, in this embodiment the data in the first cache corresponding to the first state is updated first; after the data in the first cache is updated, the system state is set to the first state. A target duration is then waited at random, after which it is judged whether the second cache corresponding to the second state can be updated. If so, the second cache is updated directly; otherwise, a newly generated random target duration is waited and the judgment is repeated, until the second cache corresponding to the second state can be updated, at which point it is updated directly.
In conclusion, by updating the second cache only after the threads reading it have finished, the consistency of the cache data read by threads in the same interaction process is ensured, improving the accuracy of data reading.
Based on the same idea, an embodiment of the present application further provides a device corresponding to the foregoing method. Fig. 5 is a schematic structural diagram of the data processing device in a cache provided in the embodiment of the present application. As shown in fig. 5, the device provided in this embodiment may include:
an obtaining module 501, configured to obtain a service processing request corresponding to a target service, and allocate at least one target thread to the service processing request corresponding to the target service;
a processing module 502, configured to allocate the same target cache identifier to the at least one target thread according to a system state and an association relationship between target threads, where the system state is a first state or a second state, data is obtained from a first cache in the first state, data is obtained from a second cache in the second state, and different caches correspond to different cache identifiers, the first cache and the second cache are used to store the same data, the first state indicates that updating of data in the first cache is completed according to a data update request, but updating of data in the second cache is not completed yet, and the second state indicates that updating of data in the second cache is completed according to the data update request;
the processing module 502 is further configured to obtain target data from the cache corresponding to the target cache identifier through the at least one target thread, and implement the target service according to the target data.
In another embodiment, the processing module 502 is further configured to:
and determining a system state, and determining a target cache identifier according to the system state.
And judging whether the target thread has a cache identifier distributed by the associated thread or not aiming at each target thread.
And if the target thread does not have the cache identifier distributed by the associated thread, distributing the target cache identifier for the target thread.
In this embodiment, the processing module 502 is further configured to:
and if the system state is a first state, determining that the target cache identifier is the cache identifier of the first cache corresponding to the first state.
And if the system state is the second state, determining that the target cache identifier is the cache identifier of the second cache corresponding to the second state.
In this embodiment, the processing module 502 is further configured to:
and if the target thread has the cache identifier distributed by the associated thread, determining the cache identifier distributed by the associated thread as the target cache identifier.
In this embodiment, where the cache identifier is stored in an inheritable thread variable, the processing module 502 is further configured to:
and creating a sub-thread of the associated thread to obtain at least one initial target thread.
And setting the inheritable thread variable of the at least one initial target thread as the inheritable thread variable of the associated thread to obtain at least one target thread.
And allocating at least one target thread for the service processing request corresponding to the target service.
In this embodiment, where the cache identifier is stored in the custom attribute information of a thread, the processing module 502 is further configured to:
at least one initial target thread is obtained from the thread pool.
And setting the custom attribute information of the at least one initial target thread as the custom attribute information of the associated thread to obtain at least one target thread.
And allocating at least one target thread for the service processing request corresponding to the target service.
In this embodiment, the processing module 502 is further configured to:
and clearing the target cache identification of the at least one target thread.
In another embodiment, the processing module 502 is further configured to:
receiving a data updating request, wherein the data updating request comprises a data identifier to be updated and a corresponding data value to be updated.
And updating the data value corresponding to the to-be-updated data identifier in the first cache corresponding to the first state into the to-be-updated data value according to the data updating request, and setting the system state to be the first state.
And judging whether the processing of the thread which is allocated with the cache identifier corresponding to the second state is finished.
And if the processing is finished, updating the data value corresponding to the to-be-updated data identifier in the second cache corresponding to the second state into the to-be-updated data value.
Updating the system state to the second state.
In this embodiment, the processing module 502 is further configured to:
and if not, randomly waiting for the target duration.
And after waiting for the target duration, re-executing the steps of judging whether the processing of the thread which is allocated with the cache identifier corresponding to the second state is finished and the subsequent steps.
The apparatus provided in the embodiment of the present application can implement the method of the embodiment shown in fig. 2, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application. As shown in fig. 6, a device 600 according to the embodiment includes: a processor 601 and a memory 602 communicatively coupled to the processor. The processor 601 and the memory 602 are connected by a bus 603.
In a specific implementation, the processor 601 executes the computer executable instructions stored in the memory 602, so that the processor 601 executes the method in the above method embodiment.
For a specific implementation process of the processor 601, reference may be made to the above method embodiments, which implement the principle and the technical effect similarly, and details of this embodiment are not described herein again.
In the embodiment shown in fig. 6, it should be understood that the processor may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor, or performed by a combination of hardware and software modules within the processor.
The memory may comprise high speed RAM memory and may also include non-volatile storage NVM, such as at least one disk memory.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The embodiment of the present application further provides a computer-readable storage medium, where a computer execution instruction is stored in the computer-readable storage medium, and when a processor executes the computer execution instruction, the method for processing data in a cache according to the above-mentioned method embodiment is implemented.
An embodiment of the present application further provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, the method for processing data in a cache as described above is implemented.
The computer-readable storage medium may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk. Readable storage media can be any available media that can be accessed by a general purpose or special purpose computer.
An exemplary readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may reside in an Application Specific Integrated Circuit (ASIC). Of course, the processor and the readable storage medium may also reside as discrete components in the apparatus.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method for processing data in a cache, comprising:
acquiring a service processing request corresponding to a target service, and allocating at least one target thread to the service processing request corresponding to the target service;
distributing the same target cache identifier for the at least one target thread according to a system state and an association relation between the target threads, wherein the system state is a first state or a second state, data are obtained from a first cache in the first state, data are obtained from a second cache in the second state, different caches correspond to different cache identifiers, the first cache and the second cache are used for storing the same data, the first state represents that the data in the first cache are updated according to a data updating request, but the data in the second cache are not updated, and the second state represents that the data in the second cache are updated according to the data updating request;
and acquiring target data from the cache corresponding to the target cache identifier through the at least one target thread, and realizing the target service according to the target data.
2. The method of claim 1, wherein assigning the same target cache identification to the at least one target thread according to the system state and the inter-thread relationship comprises:
determining a system state, and determining a target cache identifier according to the system state;
judging whether the target thread has a cache identifier distributed by an associated thread or not aiming at each target thread;
and if the target thread does not have the cache identifier distributed by the associated thread, distributing the target cache identifier for the target thread.
3. The method of claim 2, wherein determining a target cache identity based on the system state comprises:
if the system state is a first state, determining that a target cache identifier is a cache identifier of a first cache corresponding to the first state;
and if the system state is a second state, determining that the target cache identifier is the cache identifier of a second cache corresponding to the second state.
4. The method of claim 2, further comprising:
and if the target thread has the cache identifier distributed by the associated thread, determining the cache identifier distributed by the associated thread as the target cache identifier.
5. The method of claim 4, wherein the cache identifier is stored in an inheritable variable of a thread, and wherein the allocating at least one target thread to the service processing request corresponding to the target service comprises:
creating a sub-thread of the associated thread to obtain at least one initial target thread;
setting the inheritable thread variable of the at least one initial target thread as the inheritable thread variable of the associated thread to obtain at least one target thread;
and allocating at least one target thread for the service processing request corresponding to the target service.
6. The method according to claim 4, wherein the cache identifier is stored in custom attribute information of a thread, and the allocating at least one target thread for the service processing request corresponding to the target service comprises:
acquiring at least one initial target thread from a thread pool;
setting the custom attribute information of the at least one initial target thread as the custom attribute information of the associated thread to obtain at least one target thread;
and allocating at least one target thread for the service processing request corresponding to the target service.
7. The method of any one of claims 1-6, further comprising:
receiving the data updating request, wherein the data updating request comprises a data identifier to be updated and a corresponding data value to be updated;
updating the data value corresponding to the to-be-updated data identifier in the first cache corresponding to the first state into the to-be-updated data value according to the data updating request, and setting the system state to be the first state;
judging whether the thread which is allocated with the cache identifier corresponding to the second state is processed or not;
if the processing is finished, updating the data value corresponding to the to-be-updated data identifier in the second cache corresponding to the second state into the to-be-updated data value;
updating the system state to the second state.
8. The method of claim 7, further comprising:
if not, randomly waiting for the target duration;
and after waiting for the target duration, re-executing the steps of judging whether the processing of the thread which is allocated with the cache identifier corresponding to the second state is finished and the subsequent steps.
9. The method according to any of claims 1-6, further comprising, after said obtaining target data from the cache corresponding to the target cache identifier by the at least one target thread:
and clearing the target cache identification of the at least one target thread.
10. An in-cache data processing apparatus, comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a service processing request corresponding to a target service and distributing at least one target thread for the service processing request corresponding to the target service;
a processing module, configured to allocate the same target cache identifier for the at least one target thread according to a system state and an association relationship between target threads, where the system state is a first state or a second state, data is obtained from a first cache in the first state, data is obtained from a second cache in the second state, and different caches correspond to different cache identifiers, the first cache and the second cache are used to store the same data, the first state indicates that updating of data in the first cache is completed according to a data update request, but updating of data in the second cache is not completed yet, and the second state indicates that updating of data in the second cache is completed according to the data update request;
the processing module is further configured to obtain target data from the cache corresponding to the target cache identifier through the at least one target thread, and implement the target service according to the target data.