CN117493380A - Data processing method, device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN117493380A
Authority
CN
China
Prior art keywords
target
target object
database
cache
target information
Prior art date
Legal status
Pending
Application number
CN202310724153.XA
Other languages
Chinese (zh)
Inventor
孙振华
姜英朕
吴鹏
蒋宁
Current Assignee
Mashang Xiaofei Finance Co Ltd
Original Assignee
Mashang Xiaofei Finance Co Ltd
Priority date
Filing date
Publication date
Application filed by Mashang Xiaofei Finance Co Ltd filed Critical Mashang Xiaofei Finance Co Ltd

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02D: Climate change mitigation technologies in information and communication technologies [ICT], i.e. information and communication technologies aiming at the reduction of their own energy use
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The present disclosure provides a data processing method, apparatus, electronic device, and computer-readable storage medium. The data processing method comprises the following steps: receiving a query request; querying a first cache for target information of a first target object and, when the target information of the first target object is not found, determining a target database according to whether the first target object has received a transaction instruction and whether a transaction is being performed based on the received transaction instruction; when the first target object is performing a transaction based on a received transaction instruction, determining the target database to be the master database; and when the first target object has not received a transaction instruction, or is not performing a transaction based on a received transaction instruction, determining the target database to be the slave database; querying the target database for the target information of the first target object; and sending the target information of the first target object to the first client. The technical scheme provided by the disclosure improves the accuracy of the response data generated when responding to a query request.

Description

Data processing method, device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to a data processing method, apparatus, electronic device, and computer-readable storage medium.
Background
In a software architecture, it is often necessary to synchronize data from a master database to a slave database, so that under highly concurrent data access some of the access requests can be diverted to the slave database, reducing the access pressure on the master database.
Disclosure of Invention
The data processing method, apparatus, electronic device, and computer-readable storage medium provided by the present disclosure can improve the accuracy of the response data generated in the process of responding to a user's data query request.
In a first aspect, an embodiment of the present disclosure provides a data processing method, applied to a server, where the method includes:
receiving a query request sent by a first client, where the query request requests target information of a first target object associated with the first client, and the target information represents the remaining resource quantity of the target object;
querying a first cache for the target information of the first target object and, when the target information of the first target object is not found, determining a target database according to whether the first target object has received a transaction instruction and whether the first target object is performing a transaction based on the received transaction instruction, where the first cache is used to cache target information; the target database is determined to be the master database when the first target object is performing a transaction based on the received transaction instruction, and is determined to be the slave database when the first target object has not received a transaction instruction or is not performing a transaction based on the received transaction instruction, where the master database is the database in which the server stores the target information, and the data in the slave database is synchronized from the master database;
querying the target database for the target information of the first target object;
and sending the target information of the first target object to the first client.
In a second aspect, an embodiment of the present disclosure provides a data processing apparatus, applied to a server, where the apparatus includes:
a receiving module, configured to receive a query request sent by a first client, where the query request requests target information of a first target object associated with the first client, and the target information represents the remaining resource quantity of the target object;
a determining module, configured to query a first cache for the target information of the first target object and, when the target information of the first target object is not found, determine a target database according to whether the first target object has received a transaction instruction and whether the first target object is performing a transaction based on the received transaction instruction, where the first cache is used to cache target information; the target database is determined to be the master database when the first target object is performing a transaction based on the received transaction instruction, and is determined to be the slave database when the first target object has not received a transaction instruction or is not performing a transaction based on the received transaction instruction, where the master database is the database in which the server stores the target information, and the data in the slave database is synchronized from the master database;
a query module, configured to query the target database for the target information of the first target object;
and a sending module, configured to send the target information of the first target object to the first client.
In a third aspect, embodiments of the present disclosure further provide an electronic device comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the method steps of the first aspect described above.
In a fourth aspect, embodiments of the present disclosure further provide a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the method steps of the first aspect described above.
In the embodiments of the present disclosure, when the first target object associated with the first client performs a transaction based on a received transaction instruction, the target information of the first target object stored in the master database may be updated by that transaction. When the data in the master database has been updated but has not yet been synchronized to the slave database, the data stored in the master database is up to date while the data stored in the slave database is dirty. For this reason, when responding to the query request from the first client, it is first determined whether the first target object has received a transaction instruction and whether a transaction is being performed based on the received instruction. If the first target object is performing a transaction based on a received transaction instruction, the target information of the first target object stored in the slave database may be dirty data, so the master database is determined to be the target database and the query request is answered from the master database, avoiding returning dirty data from the slave database to the first client. If the first target object has not received a transaction instruction, or is not performing a transaction based on a received instruction, the target information of the first target object stored in the slave database is consistent with that stored in the master database, so the slave database may be determined to be the target database and the query request answered from the slave database, reducing the access pressure on the master database.
In this way, when responding to a user's data query request, the problem of returning dirty data from the slave database to the user is avoided, improving the accuracy of the generated response data.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings that are needed in the description of the embodiments of the present disclosure will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and other drawings may be obtained according to these drawings without inventive effort to a person of ordinary skill in the art.
FIG. 1 is one of the flowcharts of a data processing method provided by an embodiment of the present disclosure;
FIG. 2 is a second flowchart of a data processing method according to an embodiment of the present disclosure;
FIG. 3 is a third flowchart of a data processing method provided by an embodiment of the present disclosure;
FIG. 4 is a fourth flowchart of a data processing method provided by an embodiment of the present disclosure;
FIG. 5 is one of the schematic structural diagrams of the data processing apparatus provided in the embodiments of the present disclosure;
FIG. 6 is a second schematic diagram of a data processing apparatus according to an embodiment of the disclosure.
Detailed Description
The following description of the technical solutions in the embodiments of the present disclosure will be made clearly and completely with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are some embodiments of the present disclosure, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
In the related art, data synchronization between the master database and the slave database has a certain delay while data in the master database is being updated. That is, before the updated data in the master database has been fully synchronized to the slave database, the data in the master database is the latest data while the data in the slave database may be dirty data (that is, the pre-update data). If the slave database responds to a data query request at this time, dirty data may be returned to the user.
For this reason, in the embodiments of the present disclosure, it is recognized that the main cause of updates to data in the master database is a transaction performed based on a transaction instruction. When a query request is received, it is determined whether the queried object has currently received a transaction instruction and whether a transaction is being performed based on the received instruction. If the first target object is performing a transaction based on a received transaction instruction, the target information of the queried object stored in the master database may already have been updated while the copy in the slave database may not have been, so in this case the master database is determined to be the target database and the query request is answered from the master database, avoiding the problem of returning dirty data from the slave database to the user.
Referring to FIG. 1, FIG. 1 is a flowchart of a data processing method provided in an embodiment of the present disclosure. The execution subject of the method is a server, which may specifically be a back-end server of various transaction services, for example a server of a financial institution. The server may be an independent server or a server cluster formed by a plurality of servers. Specifically, the data processing method comprises the following steps:
Step 101: receiving a query request sent by a first client, where the query request requests target information of a first target object associated with the first client, and the target information represents the remaining resource quantity of the target object;
Step 102: querying a first cache for the target information of the first target object and, when the target information of the first target object is not found, determining a target database according to whether the first target object has received a transaction instruction and whether a transaction is being performed based on the received transaction instruction, where the first cache is used to cache target information; the target database is determined to be the master database when the first target object is performing a transaction based on the received transaction instruction, and is determined to be the slave database when the first target object has not received a transaction instruction or is not performing a transaction based on the received transaction instruction, where the master database is the database in which the server stores the target information, and the data in the slave database is synchronized from the master database;
Step 103: querying the target database for the target information of the first target object;
Step 104: sending the target information of the first target object to the first client.
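Steps 101 to 104 can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the class name `QueryService`, the method `handle_query`, and the dict-backed caches and databases are all assumptions made for the sketch.

```python
class QueryService:
    """Minimal sketch of steps 101-104; all names and structures are assumptions."""

    def __init__(self, first_cache, second_cache, master_db, slave_db):
        self.first_cache = first_cache    # {user_id: target_info}, e.g. account balances
        self.second_cache = second_cache  # set of user_ids currently in a transaction
        self.master_db = master_db        # authoritative store of target information
        self.slave_db = slave_db          # periodically synchronized copy of master_db

    def handle_query(self, user_id):
        # Step 102: query the first cache before touching any database.
        if user_id in self.first_cache:
            return self.first_cache[user_id]
        # A transacting object may have master data not yet synchronized to the
        # slave, so route its query to the master; otherwise use the slave.
        db = self.master_db if user_id in self.second_cache else self.slave_db
        # Steps 103-104: query the target database and return the result.
        info = db[user_id]
        self.first_cache[user_id] = info  # populate the cache for later queries
        return info
```

With `master_db = {"u1": 40}` and a stale `slave_db = {"u1": 100}`, a query for `"u1"` is served from the master while `"u1"` is in the second cache, and from the slave otherwise.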
The target information may be information associated with a user account that changes as the user conducts transactions; for example, the target information may be an account balance or account points. The transaction service may be any service that can cause the target information to change, for example services related to commodity purchase, transfers, or point redemption. It may be appreciated that the first target object may be an account bound to the first client.
The data processing method provided by the embodiment of the present disclosure is further explained below by taking the target information as an account balance as an example:
The transaction instruction may be triggered on the transaction platform by the user to whom the first client belongs. When the first target object receives a transaction instruction, two situations are possible: the transaction proceeds, or it does not. For example, when the transaction instruction is a consumption instruction and the remaining resource quantity of the first target object is greater than or equal to the quantity the instruction requests to consume, the transaction proceeds. Conversely, when the transaction instruction is a consumption instruction and the remaining resource quantity of the first target object is smaller than the quantity requested, the transaction does not proceed. Only when a transaction is actually performed is the target information of the first target object updated.
Based on this, the embodiments of the present disclosure monitor whether the first target object has received a transaction instruction and whether a transaction is being performed based on the received instruction, so as to determine whether the target information of the first target object is being updated. Here, performing a transaction based on the received transaction instruction specifically means: currently being in the process of a transaction. Not performing a transaction based on the received transaction instruction specifically means: not currently being in the process of a transaction.
The first cache may be a local cache of the server. Specifically, when requesting target information, the client generally sends a query request to the server, and the server obtains the corresponding target information from the database and sends it to the client; at the same time, the server may cache the obtained target information locally (that is, store it in the first cache). In this way, when a subsequent query request from the client is received, the first cache is queried first: if the corresponding target information is found, it can be sent to the client directly from the cache; otherwise, the corresponding target information is queried from the target database. This reduces the frequency of access to the target database.
Data in the first cache may be stored in key-value form, where the key records the user account (userId) and the value records the corresponding target information (such as the account balance). Data in the master database and the slave database may likewise be stored in key-value form.
It will be appreciated that data in the master database may be synchronized to the slave database periodically to keep the data in the two databases consistent. Since various requests to query the target information stored in the master database may arrive during service processing, part of the requests may be diverted to the slave database to relieve the access pressure on the master database and reduce its query frequency. Specifically, requests that do not change the target information in the database may be diverted to the slave database; for example, when the server receives a query request, the corresponding data may be read directly from the slave database. Conversely, requests that may change the target information stored in the database may be sent to the master database; for example, when a transaction instruction is received, it may be processed against the master database to ensure that the data stored in the master database is always up to date.
It should be noted that, since a request received by the server generally includes a service identifier and the account information of the corresponding transaction, the server can determine from the service identifier and the account information in the request whether the client's account is in a transaction state.
In this embodiment, when the first target object associated with the first client performs a transaction based on a received transaction instruction, the target information of the first target object stored in the master database may be updated by that transaction. When the data in the master database has been updated but not yet synchronized to the slave database, the data stored in the master database is up to date while the data stored in the slave database is dirty. For this reason, when responding to the query request from the first client, it is first determined whether the first target object has received a transaction instruction and whether a transaction is being performed based on the received instruction. If the first target object is performing a transaction based on a received transaction instruction, the target information of the first target object stored in the slave database may be dirty data, so the master database is determined to be the target database and the query request is answered from the master database, avoiding returning dirty data from the slave database to the first client. If the first target object has not received a transaction instruction, or is not performing a transaction based on a received instruction, the target information of the first target object stored in the slave database is consistent with that stored in the master database, so the slave database may be determined to be the target database and the query request answered from the slave database, reducing the access pressure on the master database.
In this way, when responding to a user's data query request, the problem of returning dirty data from the slave database to the user is avoided, improving the accuracy of the generated response data.
The following specific embodiment further explains how step 102 determines whether the first target object is performing a transaction based on a received transaction instruction:
optionally, before determining the target database based on the service state of the first target object, the method further includes:
querying a second cache and, when the identity of the first target object is found there, determining that the first target object is performing a transaction based on a received transaction instruction;
and querying the second cache and, when the identity of the first target object is not found there, determining that the first target object has not received a transaction instruction or is not performing a transaction based on a received transaction instruction.
Specifically, the second cache may be a local cache of the server, used to store the identities of all target objects currently in the process of a transaction. In this way, when determining whether the first target object is performing a transaction based on a received transaction instruction, the second cache is queried to determine whether it contains the identity of the first target object. If it does, the first target object is determined to be performing a transaction based on a received transaction instruction, and the master database may be determined to be the target database. Otherwise, the first target object is determined not to have received a transaction instruction, or not to be performing a transaction based on a received instruction, and the slave database may be determined to be the target database.
It may be appreciated that when the server receives a transaction instruction sent by a client and performs the transaction, the account associated with that client may be stored in the second cache. Correspondingly, after the response to the transaction instruction completes (that is, after the transaction finishes), the account may be deleted from the second cache. This ensures that the second cache always stores the identities of exactly the target objects currently in the process of a transaction.
In this embodiment, the identities of all target objects currently in the process of a transaction are stored in the second cache, so that when determining the target database, the second cache can be queried to determine whether the current object is performing a transaction based on a received transaction instruction.
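The lifecycle of an identity in the second cache described above can be sketched as follows; the function names are assumptions, and a plain set stands in for the server's local cache.

```python
def begin_transaction(second_cache, user_id):
    # When a transaction instruction is received and the transaction actually
    # proceeds, record the object's identity so queries route to the master.
    second_cache.add(user_id)

def end_transaction(second_cache, user_id):
    # After the response to the transaction instruction completes, delete the
    # identity so later queries may again be served from the slave database.
    second_cache.discard(user_id)
```

`discard` rather than `remove` keeps the cleanup idempotent if the completion hook runs twice for the same account.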
In one embodiment of the present disclosure, when a target object is determined to be in a transaction state, non-latest data local to the server may additionally be deleted, so that query tasks are directed to the latest data in the master database. Specifically:
optionally, the method further comprises:
when a second target object associated with a second client performs a transaction based on a received transaction instruction, updating the target information of the second target object in the target database, deleting the target information of the second target object from the first cache, and adding the identity of the second target object to the second cache.
When the target information of the second target object in the master database is updated, it becomes inconsistent with the target information of the second target object recorded in the first cache, so the cached copy may be deleted at this point. With the target information of the second target object absent from the first cache, a subsequent query request for it can only be served from the target database, ensuring that the queried data is the latest data.
Correspondingly, when the second target object associated with the second client performs a transaction based on a received transaction instruction, the second target object can be determined to be in a transaction state; adding its identity to the second cache at this point ensures that the identities of all target objects in a transaction state are always stored in the second cache. The identity may be a user account, a user nickname, or the like.
In this embodiment, when the second target object associated with the second client performs a transaction based on a received transaction instruction, the target information of the second target object in the target database is updated, its target information is deleted from the first cache, and its identity is added to the second cache. In this way, non-latest data stored in the first cache is deleted in time, and the second cache always stores the identities of all target objects in a transaction state.
Further, the transaction response process ends once the target information of the second target object in the target database has been updated in response to the transaction instruction. At that moment, the update may not yet have been synchronized from the master database to the slave database; if a query request arrives, non-latest data may be read from the slave database and stored in the first cache, and subsequent client requests would then still receive that non-latest data from the first cache. For this reason, in the embodiments of the present disclosure, the deletion of the target information of the second target object from the first cache may be performed again after a period of time, to ensure data consistency. Specifically:
optionally, after the updating of the target information of the second target object in the target database, the deleting of the target information of the second target object from the first cache, and the adding of the identity of the second target object to the second cache, the method further includes:
deleting the target information of the second target object from the first cache at a target time point, where the target time point is a time point after an update time point, and the update time point is the time point at which the target information of the second target object in the target database is updated in response to the transaction instruction.
The time interval between the target time point and the update time point may be set in advance, such that by the target time point the master database has completed data synchronization to the slave database. Deleting the non-latest data from the first cache only after master-slave synchronization has completed ensures that no storage location still holds non-latest data, improving data consistency.
Optionally, the time interval between the target time point and the update time point is greater than the maximum delay duration of data synchronization, where the maximum delay duration of data synchronization is the maximum delay of synchronizing data from the master database to the slave database.
Specifically, aspect-oriented programming (Spring AOP) cross-cutting logic together with a stopwatch (StopWatch) may be used to mark and record the maximum delay duration, as follows: start timing when the master database begins synchronizing data, stop timing when it completes the synchronization, and take the time spent on the synchronization process as the observed delay duration.
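The timing idea can be sketched in plain Python as a stand-in for the Spring AOP and StopWatch approach described above; `sync_fn` is a hypothetical synchronization routine, and `time.sleep` merely simulates its duration.

```python
import time

def timed_sync(sync_fn):
    # Start timing when synchronization begins, stop when it completes,
    # and return the elapsed time as one observed synchronization delay.
    start = time.monotonic()
    sync_fn()
    return time.monotonic() - start

# The maximum over observed synchronizations gives the maximum delay duration.
observed = [timed_sync(lambda: time.sleep(0.01)) for _ in range(3)]
max_delay = max(observed)
```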
In one embodiment of the present disclosure, the time interval between the target time point and the update time point may be calculated using the following formula:
Long delay time = Max(master-slave synchronization delay duration) + 500 ms;
where Long delay time represents the time interval between the target time point and the update time point, and Max(master-slave synchronization delay duration) represents the maximum delay duration of data synchronization.
In this embodiment, waiting out the maximum master-slave synchronization delay ensures that the possibly non-latest data in the first cache is deleted only after synchronization has completed (that is, after the slave database has received the latest data), so that a subsequent query request reads the latest data from the slave database and places it into the first cache. The additional 500 ms accounts for time lost to network jitter during data synchronization. Data consistency in the caches is thus ensured.
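The formula above reduces to a one-line computation; this sketch uses an assumed function name and takes the measured maximum synchronization delay in milliseconds.

```python
def second_delete_delay_ms(max_sync_delay_ms):
    # Long delay time = Max(master-slave synchronization delay) + 500 ms,
    # where the extra 500 ms absorbs data-synchronization network jitter.
    return max_sync_delay_ms + 500
```

For example, a measured maximum synchronization delay of 120 ms yields a 620 ms wait before the second cache deletion.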
While deleting the target information of the second target object in the first cache, the deletion state can be continuously monitored, and in case of a deletion exception, corresponding processing logic is provided to ensure that the target information of the second target object is eventually deleted from the first cache, specifically:
optionally, the deleting the target information of the second target object in the first cache includes:
deleting the target information of the second target object in the first cache, repeating the step of deleting the target information of the second target object in the first cache under the condition of deleting failure, and counting the times of deleting failure;
And outputting alarm information under the condition that the deleting failure times exceed a preset threshold value.
Referring to fig. 2, in one embodiment of the present disclosure, while deleting the target information of the second target object in the first Cache (Delete Cache 1), whether there is an exception may be continuously monitored. It will be appreciated that when deletion fails, it is determined that there is an exception; correspondingly, when deletion succeeds, it is determined that there is no exception. If an exception exists, the exception count n is incremented by 1 and compared with a preset threshold (max): if n < max, the action of deleting the target information of the second target object in the first cache is executed again; if n >= max, alarm information is output to prompt relevant personnel to handle the exception manually.
The alarm recipients may include technicians, product owners, superiors, and the like; the alarm may take the form of mail, short message, telephone, and the like, and may also be delivered through the company's alarm platform.
In this embodiment, when the target information of the second target object in the first cache is deleted and the deletion fails, the step of deleting the target information of the second target object in the first cache is repeatedly performed, and the number of times of deletion failure is counted; and outputting alarm information under the condition that the deleting failure times exceed a preset threshold value, so that the target information of the second target object can be ensured to be deleted from the first cache.
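The retry-then-alarm loop described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; `delete_fn` and `alert_fn` are assumed callables standing in for the cache delete and the alarm platform:

```python
def delete_with_retry(delete_fn, alert_fn, max_failures):
    """Keep deleting the cache entry until it succeeds or the failure
    count reaches the preset threshold, then output an alarm."""
    n = 0  # failure counter
    while True:
        if delete_fn():          # True means the deletion succeeded
            return True
        n += 1                   # deletion failed: count it
        if n >= max_failures:
            alert_fn(n)          # e.g. mail/SMS/phone to on-call staff
            return False
```

A production version would typically also back off between retries rather than looping immediately.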
Optionally, the method further comprises:
monitoring the update state of the target information of all target objects stored in the main database;
and deleting the target information of the third target object in the first cache based on a target scheduling task under the condition that the target information of the third target object is updated, wherein the third target object is any target object in all target objects.
The deleting the target information of the third target object in the first cache based on the target scheduling task includes:
and deleting the target information of the third target object in the first cache based on the target scheduling task, and adding the identity of the third target object to a target queue under the condition that the deletion fails, wherein the target queue is used for storing the identity of the target object corresponding to the target information to be deleted by the target scheduling task in the first cache.
Referring to fig. 3, in one embodiment of the present disclosure, since an update log (binlog) is generated whenever data in the master database is updated, the binlog may be subscribed to. When it is determined through the binlog that the target information of a certain target object (e.g. a third target object) has been updated, the target information of the third target object in the first Cache may be deleted through the target scheduling task (Delete Cache 1). Whether this deletion succeeds can be continuously monitored: if it fails, the identity of the third target object is stored in the target queue; subsequently, the target scheduling task obtains the third target object from the target queue and, upon obtaining it, performs the deletion on the target information of the third target object in the first cache again, until that target information has been deleted from the first cache.
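The binlog-driven invalidation with a retry queue can be sketched as follows. This is an illustrative skeleton only: `delete_fn` stands in for the actual cache delete, and the class name `CacheInvalidator` is not from the patent:

```python
from collections import deque

class CacheInvalidator:
    """Deletes a cache entry when a binlog update event arrives; ids whose
    deletion fails go into a queue for the scheduled task to retry."""
    def __init__(self, delete_fn):
        self.delete_fn = delete_fn   # returns True when the cache delete succeeds
        self.retry_queue = deque()   # the "target queue" of pending identities

    def on_binlog_update(self, object_id):
        # Triggered by the binlog subscription for each updated object.
        if not self.delete_fn(object_id):
            self.retry_queue.append(object_id)

    def run_scheduled_task(self):
        # The target scheduling task drains the queue once;
        # ids that still fail are re-queued for the next run.
        for _ in range(len(self.retry_queue)):
            oid = self.retry_queue.popleft()
            if not self.delete_fn(oid):
                self.retry_queue.append(oid)
```

In practice the binlog subscription would come from a replication-log consumer (e.g. a CDC tool), and the scheduled task would run periodically until the queue is empty.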
Optionally, after the querying the target information of the first target object from the target database, the method further includes:
and adding target information of the first target object to the first cache.
In this embodiment, the server may add the target information obtained from the target database to the first cache, so that when the subsequent client requests to obtain the target information again, the target information may be directly obtained from the first cache, without querying the target database again, so that the number of accesses to the target database may be reduced.
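This back-fill step is the standard read-through pattern; a minimal sketch (the name `get_target_info` and the dict-based cache are illustrative assumptions):

```python
def get_target_info(object_id, cache, query_db):
    """Read-through: serve from the first cache when possible; on a miss,
    query the target database once and back-fill the cache."""
    value = cache.get(object_id)
    if value is None:
        value = query_db(object_id)   # cache miss: go to the database
        cache[object_id] = value      # later requests skip the database
    return value
```

Repeated queries for the same object then hit the database only once, which is exactly the access-reduction effect described above.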
Referring to fig. 4, a flow chart of a data processing method provided in an embodiment of the disclosure specifically includes two processing flows as follows:
process flow 1: when receiving the Query request (Query Req), the server first checks whether the data (Cache 1 Hit. If the data required to be queried by the Query request does not exist in the first Cache, judging whether an account number (Cache 2 Hit. Wherein when there is an update to the data in the master database, the update data (binlog) can be synchronized to the slave database.
Process flow 2: when the server receives a transaction instruction (Trade Req) sent by the second client and performs transaction, deleting target information (Delete Cache 1) of the second target object in the first Cache, and adding an identity of the second target object to the second Cache (Add Cache 2). And under the condition that deleting the target information (Delete Cache 1) of the second target object in the first Cache fails, outputting alarm information to prompt a background person to process. And then, in response to the transaction instruction, updating the target information (Trade Buss) of the second target object in the target database, and after the interval is the maximum delay time, deleting the target information (Delete Cache 1) of the second target object in the first Cache again, and ending the transaction flow (Trade End). And under the condition that the deleting of the target information (Delete Cache 1) of the second target object in the first Cache is failed, retrying to Delete the target information (Delete Cache 1) of the second target object in the first Cache. And when the number of deletion failures exceeds a set value, outputting alarm information.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a data processing apparatus 500 according to an embodiment of the disclosure, where the data processing apparatus 500 includes:
A receiving module 501, configured to receive a query request sent by a first client, where the query request is used to request to query target information of a first target object associated with the first client, where the target information is used to characterize a remaining resource amount of the target object;
a determining module 502, configured to query target information of the first target object in a first cache, and determine a target database according to whether the first target object receives a transaction instruction and whether to perform a transaction based on the received transaction instruction if the target information of the first target object is not queried, where the first cache is configured to cache the target information, and determine that the target database is a master database if the first target object performs a transaction based on the received transaction instruction; under the condition that the first target object does not receive a transaction instruction or does not conduct a transaction based on the transaction instruction, determining the target database as a slave database, wherein the master database is a database used for storing the target information by the server, and data in the slave database are synchronous data in the master database;
A query module 503, configured to query the target database for target information of the first target object;
and a sending module 504, configured to send target information of the first target object to the first client.
Optionally, the determining module 502 is further configured to query in a second cache, and determine, when the identity of the first target object is queried, that the first target object performs a transaction based on receiving a transaction instruction;
the determining module 502 is further configured to query in the second cache, and determine that the first target object does not receive a transaction instruction or does not perform a transaction based on receiving the transaction instruction if the identity of the first target object is not queried.
Optionally, the apparatus further comprises:
the processing module is used for updating the target information of the second target object in the target database under the condition that the second target object associated with the second client carries out transaction based on the received transaction instruction, deleting the target information of the second target object in the first cache and adding the identity of the second target object in the second cache.
Optionally, the processing module is specifically configured to delete, at a target time point, target information of the second target object in the first cache, where the target time point is a time point after an update time point, and the update time point is: and responding to the transaction instruction, and updating the target information of the second target object in the target database.
Optionally, the time interval between the target time point and the update time point is greater than a maximum delay duration of data synchronization, wherein the maximum delay duration of data synchronization is a maximum delay duration of data synchronization in the master database to the slave database.
Optionally, the processing module includes:
a deleting sub-module, configured to delete the target information of the second target object in the first cache, and repeatedly execute the step of deleting the target information of the second target object in the first cache and count the number of deletion failures when the deletion fails;
and the alarm sub-module is used for outputting alarm information under the condition that the deletion failure times exceed a preset threshold value.
Optionally, the apparatus further comprises:
the monitoring module is used for monitoring the update states of the target information of all the target objects stored in the main database;
and the deleting module is used for deleting the target information of the third target object in the first cache based on a target scheduling task under the condition that the target information of the third target object is updated, wherein the third target object is any target object in all target objects.
Optionally, the deleting module is specifically configured to delete, based on the target scheduling task, target information of the third target object in the first cache, and if deletion fails, add an identity of the third target object to a target queue, where the target queue is configured to store an identity of a target object corresponding to the target information to be deleted by the target scheduling task in the first cache.
Optionally, the apparatus further comprises:
and the adding module is used for adding the target information of the first target object to the first cache.
Each of the modules in the above-described data processing apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in the processor in the server side in a hardware form or can be independent of the processor in the server side, and can also be stored in the memory in the server side in a software form, so that the processor can call and execute the operations corresponding to the modules.
The data processing apparatus 500 provided in the embodiments of the present disclosure can implement each process in the embodiments of the data processing method; to avoid repetition, the description is omitted here. Referring to fig. 6, fig. 6 is a block diagram of a data processing apparatus 600 provided in another embodiment of the present disclosure. As shown in fig. 6, the data processing apparatus 600 includes: a processor 601, a memory 602, and a computer program stored on the memory 602 and executable on the processor, the components of the data processing apparatus being coupled together by a bus interface 603, the computer program when executed by the processor 601 performing the steps of:
receiving a query request sent by a first client, wherein the query request is used for requesting to query target information of a first target object associated with the first client, and the target information is used for representing the residual resource quantity of the target object;
inquiring target information of the first target object in a first cache, and determining a target database according to whether the first target object receives a transaction instruction and whether the first target object carries out a transaction based on the received transaction instruction under the condition that the target information of the first target object is not inquired, wherein the first cache is used for caching the target information, and determining the target database as a main database under the condition that the first target object carries out the transaction based on the received transaction instruction; under the condition that the first target object does not receive a transaction instruction or does not conduct a transaction based on the transaction instruction, determining the target database as a slave database, wherein the master database is a database used for storing the target information by the server, and data in the slave database are synchronous data in the master database;
Querying target information of the first target object from the target database;
and sending the target information of the first target object to the first client.
Optionally, before the determining the target database based on the service state of the first target object, the method further includes:
inquiring in a second cache, and determining that the first target object carries out transaction based on the received transaction instruction under the condition that the identity of the first target object is inquired;
and inquiring in the second cache, and determining that the first target object does not receive a transaction instruction or does not conduct transaction based on receiving the transaction instruction under the condition that the identity of the first target object is not inquired.
Optionally, the method further comprises:
under the condition that a second target object associated with a second client carries out transaction based on a received transaction instruction, updating target information of the second target object in the target database, deleting the target information of the second target object in the first cache, and adding an identity of the second target object in the second cache.
Optionally, the method further includes, after updating the target information of the second target object in the target database, deleting the target information of the second target object in the first cache, and adding the identity of the second target object in the second cache:
deleting target information of the second target object in the first cache at a target time point, wherein the target time point is a time point after an update time point, and the update time point is: and responding to the transaction instruction, and updating the target information of the second target object in the target database.
Optionally, the time interval between the target time point and the update time point is greater than a maximum delay duration of data synchronization, wherein the maximum delay duration of data synchronization is a maximum delay duration of data synchronization in the master database to the slave database.
Optionally, the deleting the target information of the second target object in the first cache includes:
deleting the target information of the second target object in the first cache, repeating the step of deleting the target information of the second target object in the first cache under the condition of deleting failure, and counting the times of deleting failure;
And outputting alarm information under the condition that the deleting failure times exceed a preset threshold value.
Optionally, the method further comprises:
monitoring the update state of the target information of all target objects stored in the main database;
and deleting the target information of the third target object in the first cache based on a target scheduling task under the condition that the target information of the third target object is updated, wherein the third target object is any target object in all target objects.
Optionally, deleting the target information of the third target object in the first cache based on the target scheduling task includes:
and deleting the target information of the third target object in the first cache based on the target scheduling task, and adding the identity of the third target object to a target queue under the condition that the deletion fails, wherein the target queue is used for storing the identity of the target object corresponding to the target information to be deleted by the target scheduling task in the first cache.
Optionally, after the querying the target information of the first target object from the target database, the method further includes:
And adding target information of the first target object to the first cache.
Each of the modules in the above-described data processing apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in the processor in the server side in a hardware form or can be independent of the processor in the server side, and can also be stored in the memory in the server side in a software form, so that the processor can call and execute the operations corresponding to the modules.
The embodiment of the disclosure further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program when executed by the processor implements each process of the above method embodiment, and the same technical effects can be achieved, and for avoiding repetition, a detailed description is omitted herein.
The embodiments of the present disclosure further provide a computer readable storage medium, on which a computer program is stored; the computer program, when executed by a processor, implements each process of the foregoing method embodiments and can achieve the same technical effects, and to avoid repetition, a detailed description is omitted here. The computer readable storage medium may be, for example, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present disclosure may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) including several instructions for causing an electronic device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method of the embodiments of the present disclosure.
The embodiments of the present disclosure have been described above with reference to the accompanying drawings, but the present disclosure is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the disclosure and the scope of the claims, which are all within the protection of the present disclosure.

Claims (12)

1. A data processing method, applied to a server, the method comprising:
receiving a query request sent by a first client, wherein the query request is used for requesting to query target information of a first target object associated with the first client, and the target information is used for representing the residual resource quantity of the target object;
inquiring target information of the first target object in a first cache, and determining a target database according to whether the first target object receives a transaction instruction and whether the first target object carries out a transaction based on the received transaction instruction under the condition that the target information of the first target object is not inquired, wherein the first cache is used for caching the target information, and determining the target database as a main database under the condition that the first target object carries out the transaction based on the received transaction instruction; under the condition that the first target object does not receive a transaction instruction or does not conduct a transaction based on the transaction instruction, determining the target database as a slave database, wherein the master database is a database used for storing the target information by the server, and data in the slave database are synchronous data in the master database;
Querying target information of the first target object from the target database;
and sending the target information of the first target object to the first client.
2. The method of claim 1, wherein prior to determining a target database based on the traffic state of the first target object, the method further comprises:
inquiring in a second cache, and determining that the first target object carries out transaction based on the received transaction instruction under the condition that the identity of the first target object is inquired;
and inquiring in the second cache, and determining that the first target object does not receive a transaction instruction or does not conduct transaction based on receiving the transaction instruction under the condition that the identity of the first target object is not inquired.
3. The method according to claim 2, wherein the method further comprises:
under the condition that a second target object associated with a second client carries out transaction based on a received transaction instruction, updating target information of the second target object in the target database, deleting the target information of the second target object in the first cache, and adding an identity of the second target object in the second cache.
4. A method according to claim 3, wherein the updating the target information of the second target object in the target database, and deleting the target information of the second target object in the first cache, and after adding the identity of the second target object in the second cache, the method further comprises:
deleting target information of the second target object in the first cache at a target time point, wherein the target time point is a time point after an update time point, and the update time point is: and responding to the transaction instruction, and updating the target information of the second target object in the target database.
5. The method of claim 4, wherein a time interval between the target point in time and the update point in time is greater than a maximum delay period for data synchronization, wherein the maximum delay period for data synchronization is a maximum delay period for data synchronization in the master database to the slave database.
6. The method of claim 4, wherein said deleting the target information of the second target object in the first cache comprises:
Deleting the target information of the second target object in the first cache, repeating the step of deleting the target information of the second target object in the first cache under the condition of deleting failure, and counting the times of deleting failure;
and outputting alarm information under the condition that the deleting failure times exceed a preset threshold value.
7. The method according to claim 1, wherein the method further comprises:
monitoring the update state of the target information of all target objects stored in the main database;
and deleting the target information of the third target object in the first cache based on a target scheduling task under the condition that the target information of the third target object is updated, wherein the third target object is any target object in all target objects.
8. The method of claim 7, wherein deleting the target information of the third target object in the first cache based on the target scheduling task comprises:
and deleting the target information of the third target object in the first cache based on the target scheduling task, and adding the identity of the third target object to a target queue under the condition that the deletion fails, wherein the target queue is used for storing the identity of the target object corresponding to the target information to be deleted by the target scheduling task in the first cache.
9. The method of claim 1, wherein after querying the target database for the target information of the first target object, the method further comprises:
and adding target information of the first target object to the first cache.
10. A data processing apparatus for application to a server, the apparatus comprising:
the system comprises a receiving module, a first client and a second client, wherein the receiving module is used for receiving a query request sent by the first client, the query request is used for requesting to query target information of a first target object associated with the first client, and the target information is used for representing the residual resource quantity of the target object;
the determining module is used for inquiring the target information of the first target object in a first cache, determining a target database according to whether the first target object receives a transaction instruction and whether the first target object carries out a transaction based on the received transaction instruction under the condition that the target information of the first target object is not inquired, wherein the first cache is used for caching the target information, and determining the target database as a main database under the condition that the first target object carries out the transaction based on the received transaction instruction; under the condition that the first target object does not receive a transaction instruction or does not conduct a transaction based on the transaction instruction, determining the target database as a slave database, wherein the master database is a database used for storing the target information by the server, and data in the slave database are synchronous data in the master database;
The query module is used for querying target information of the first target object from the target database;
and the sending module is used for sending the target information of the first target object to the first client.
11. An electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, which when executed by the processor performs the method steps of any one of claims 1 to 9.
12. A computer-readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, implements the method steps of any of claims 1 to 9.
CN202310724153.XA 2023-06-16 2023-06-16 Data processing method, device, electronic equipment and computer readable storage medium Pending CN117493380A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310724153.XA CN117493380A (en) 2023-06-16 2023-06-16 Data processing method, device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310724153.XA CN117493380A (en) 2023-06-16 2023-06-16 Data processing method, device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN117493380A true CN117493380A (en) 2024-02-02

Family

ID=89673230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310724153.XA Pending CN117493380A (en) 2023-06-16 2023-06-16 Data processing method, device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN117493380A (en)

Similar Documents

Publication Publication Date Title
CN113238913B (en) Intelligent pushing method, device, equipment and storage medium for server faults
KR102167613B1 (en) Message push method and device
CN111277483B (en) Multi-terminal message synchronization method, server and storage medium
CN114328132A (en) Method, device, equipment and medium for monitoring state of external data source
CN112527844A (en) Data processing method and device and database architecture
CN113326146A (en) Message processing method and device, electronic equipment and storage medium
CN111488373B (en) Method and system for processing request
CN113127564A (en) Parameter synchronization method and device
CN112632093A (en) Work order processing method, device, system, storage medium and program product
CN111309693A (en) Data synchronization method, device and system, electronic equipment and storage medium
CN111988391A (en) Message sending method and device
CN117493380A (en) Data processing method, device, electronic equipment and computer readable storage medium
WO2023147716A1 (en) Flow control and billing methods, apparatuses and system, electronic device, medium and product
CN114006946B (en) Method, device, equipment and storage medium for processing homogeneous resource request
CN113590715A (en) Block chain-based information push method, apparatus, device, medium, and program product
CN114048059A (en) Method and device for adjusting timeout time of interface, computer equipment and storage medium
CN113760398A (en) Interface calling method, server, system and storage medium
CN106375354B (en) Data processing method and device
CN101894119B (en) Mass data storage system for monitoring
CN112860746B (en) Cache reduction-based method, equipment and system
CN111586438A (en) Method, device and system for processing service data
CN111049938A (en) Message notification method and device, electronic equipment and readable storage medium
CN111769965B (en) Information processing method, device and equipment
CN112434050B (en) Data synchronization method and device of power grid business processing system and business processing system
JP2007316719A (en) Message communication method, device and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination