CN116775640A - Data storage method, apparatus, electronic device and computer program product


Info

Publication number
CN116775640A
Authority
CN
China
Prior art keywords
data
message data
message
storing
target
Prior art date
Legal status
Pending
Application number
CN202210233315.5A
Other languages
Chinese (zh)
Inventor
周岩
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Jiangsu Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Jiangsu Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Group Jiangsu Co Ltd
Priority claimed from CN202210233315.5A
Publication of CN116775640A
Legal status: Pending

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to the field of data storage, and provides a data storage method, a data storage apparatus, an electronic device and a computer program product. The method comprises the following steps: storing a plurality of generated message data to a message queue; storing data preheating request logs corresponding to the message data to a relational database, wherein the data preheating request logs comprise the objects of the message data; sequentially pulling target message data from the plurality of message data in the message queue; and storing the object of the target message data to an in-memory database based on the target message data and the data preheating request log corresponding to the target message data. The data storage method provided by the embodiments of the application resolves the service jitter caused by cache penetration during multi-center traffic switching, meets high-timeliness requirements on large data sets, and makes multi-center traffic switching imperceptible to users.

Description

Data storage method, apparatus, electronic device and computer program product
Technical Field
The present application relates to the field of data storage technologies, and in particular, to a data storage method, apparatus, electronic device, and computer program product.
Background
As IT high-availability requirements grow, the production environment of an IT business must deploy multiple sets of same-version applications and middleware in different locations, which gives rise to multi-center traffic switching scenarios. In such scenarios, two approaches are currently common: Databus cache synchronization based on binlog, and synchronization based on redis-shake. The first requires continuous maintenance and upgrading, so it is costly to maintain and complex to operate; the second easily causes key-value confusion, degrades the data storage effect, and imposes extra performance overhead on the cache server.
Disclosure of Invention
The embodiments of the application provide a data storage method, a data storage apparatus, an electronic device and a computer program product, which address the poor storage effect and the operational complexity of existing multi-center data storage.
In a first aspect, an embodiment of the present application provides a data storage method, including:
storing the generated plurality of message data to a message queue;
storing data preheating request logs corresponding to the message data to a relational database, wherein the data preheating request logs comprise objects of the message data;
Sequentially pulling target message data from the plurality of message data of the message queue;
and storing the object of the target message data to a memory database based on the target message data and a data preheating request log corresponding to the target message data.
In one embodiment, the storing the object of the target message data in the memory database based on the target message data and the data warm-up request log corresponding to the target message data includes:
based on the target message data, acquiring a data preheating request log corresponding to the target message data from the relational database;
and storing the object of the target message data in the data preheating request log corresponding to the target message data to the memory database.
In one embodiment, before the storing the generated plurality of message data in the message queue, the method comprises:
generating the message data based on the service change information;
based on the message data, invalidating initial cache data from a memory database corresponding to a center to which the message data belongs, wherein the initial cache data is generated based on service information before change.
In one embodiment, the storing the generated plurality of message data in a message queue includes:
and storing the message data to the message queue under the identification corresponding to the center to which the message data belongs.
In one embodiment, after the storing the object of the target message data in the memory database based on the target message data and the data warm-up request log corresponding to the target message data, the method further includes:
generating a data warm-up operation log based on the target message data, in case that the object storage of the target message data is successful, the data warm-up operation log comprising: at least one of an object of the target message data, a center to which the target message data belongs, an execution operation center, a processing success state, and a processing time;
storing the data preheating operation log to the relational database;
transmitting a data retransmission instruction to the message queue under the condition that the object storage of the target message data fails, and generating a data preheating operation log based on the target message data, wherein the data preheating operation log comprises a processing failure state;
And storing the data preheating operation log into the relational database.
In one embodiment, before the storing the generated plurality of message data in the message queue, the method comprises:
monitoring the working state of the message queue;
generating data to be cached based on service change information under the condition that the message queue is abnormal;
and storing the data to be cached into a target memory database and the relational database, wherein the target memory database corresponds to the central information of the data to be cached.
In one embodiment, the method further comprises:
generating alarm information based on at least one of the index information of the message queue, the data preheating request log and the data preheating operation log;
and outputting the alarm information.
In a second aspect, an embodiment of the present application provides a data storage device, including:
the first processing module is used for storing the generated message data into a message queue;
the second processing module is used for storing data preheating request logs corresponding to the message data to a relational database, wherein the data preheating request logs comprise objects of the message data;
A third processing module, configured to sequentially pull target message data from the plurality of message data in the message queue;
and the fourth processing module is used for storing the object of the target message data to an internal memory database based on the target message data and the data preheating request log corresponding to the target message data.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory storing a computer program, where the processor implements the data storage method according to the first aspect when executing the program.
In a fourth aspect, an embodiment of the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the data storage method according to the first aspect.
The data storage method, apparatus, electronic device and computer program product provided by the embodiments of the application use a message queue shared by multiple centers to store the plurality of message data generated by each center, obtain the object of the target message data based on the target message data pulled from the message queue, and store the object of the target message data into the in-memory database of each center. This resolves the jitter caused by cache penetration during multi-center traffic switching, meets high-timeliness requirements on large data sets, and makes multi-center traffic switching imperceptible to users.
Drawings
In order to more clearly illustrate the application or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the application, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a first schematic flow chart of a data storage method according to an embodiment of the present application;
FIG. 2 is a second schematic flow chart of a data storage method according to an embodiment of the present application;
FIG. 3 is a third schematic flow chart of a data storage method according to an embodiment of the present application;
FIG. 4 is a fourth schematic flow chart of a data storage method according to an embodiment of the present application;
FIG. 5 is a fifth schematic flow chart of a data storage method according to an embodiment of the present application;
fig. 6 is a schematic diagram of a message data structure of a data storage method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a data storage device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Fig. 1 is a schematic flow chart of a data storage method according to an embodiment of the present application. Referring to fig. 1, an embodiment of the present application provides a data storage method, which may include: step 110, step 120, step 130 and step 140.
It should be noted that the data storage method may be executed by a multi-center cache synchronization processing system, a data storage apparatus, a server, or a user terminal; the terminal may be a mobile terminal such as a mobile phone or a tablet computer, or a non-mobile terminal such as a PC.
The data storage method can be applied to cross-center switching of multi-center service traffic. The multiple centers may be branches of the same company in different regions; each branch deploys the same version of the applications and middleware to store and process the related services, so that traffic can be split and switched quickly when equipment in one center fails, keeping the IT services running even under force majeure conditions.
For example, if a company deploys multiple centers such as center A and center B, the business data of center A and center B should be kept synchronized.
The following describes the data storage method with the multi-center cache synchronization processing system as an execution subject.
Step 110, storing the generated plurality of message data to a message queue;
In this step, the message data (messageData) is not the complete data to be cached that needs to be stored; it contains at least part of the key information of the complete data to be cached.
For example, the message data may be a key or value of the complete information to be cached.
The data to be cached is data generated by each center based on its own business conditions; it needs to be stored in the in-memory database (Redis) of that center and also synchronized to the in-memory databases (Redis) of the other centers of the same enterprise.
Redis is an open-source (BSD-licensed) in-memory data structure store that serves as a database, cache, and message broker. Redis provides data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, HyperLogLogs, geospatial indexes, and streams. Redis has built-in replication, Lua scripting, LRU eviction, transactions, and different levels of on-disk persistence, and provides high availability through Redis Sentinel and automatic partitioning with Redis Cluster.
The data to be cached includes, but is not limited to, data object type, data object ID, data content, etc.
In this embodiment, the plurality of message data may be from the same center, or may be from different centers under the same enterprise.
A message queue (MQ) is a basic first-in, first-out data structure: the element that enters the queue first is fetched first. Message queues are used to solve problems such as application decoupling, asynchronous messaging, and traffic peak shaving, and have the advantages of high performance, high availability, scalability, and eventual consistency.
The message queue is a core component of the multi-center cache synchronization processing system.
The message queue may be RocketMQ, Kafka, or the like.
Multiple centers under the same company share one message queue; as shown in fig. 2, center A and center B share an MQ.
Each center under the same company can generate message data based on own service conditions and send the generated message data to the MQ for storage.
Each center may generate one or more pieces of message data.
In the actual execution process, a cache synchronization coordination unit (Cache Coordinator Unit) may be deployed in each center, and the cache synchronization coordination unit of each center is electrically connected with the MQ deployed by the company, so as to realize data transmission between the cache synchronization coordination units and the MQ.
The cache synchronization coordination unit is used for generating message data and sending the generated message data to the MQ for storage.
As shown in fig. 2, the multi-center cache synchronization processing system includes a message queue and cache synchronization coordination units. For the first company, a cache synchronization coordination unit A is deployed in center A under the company, a cache synchronization coordination unit B is deployed in center B, and both units transmit data to and from the MQ deployed by the company.
The cache synchronization coordination unit A generates message data A corresponding to the A center and sends the message data A to the MQ for storage; and the cache synchronization coordination unit B generates message data B corresponding to the B center and sends the message data B to the MQ for storage.
In some embodiments, the data structure of the message data in the message queue is as shown in fig. 6, including: data object ID, data object code, version number, operation type, and object type.
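The data structure of fig. 6 could be sketched, purely as an illustrative and non-limiting example in Java, as the following class; the field names are assumptions, since the text only enumerates the five attributes.

public final class MessageData {
    public final String dataObjectId;   // ID of the business object to be cached
    public final String dataObjectCode; // business code of the data object
    public final long version;          // version number of the object
    public final String operationType;  // e.g. "INSERT", "UPDATE" or "DELETE" (assumed values)
    public final String objectType;     // business object type, e.g. "ORDER" (assumed value)

    public MessageData(String dataObjectId, String dataObjectCode, long version,
                       String operationType, String objectType) {
        this.dataObjectId = dataObjectId;
        this.dataObjectCode = dataObjectCode;
        this.version = version;
        this.operationType = operationType;
        this.objectType = objectType;
    }
}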
In some embodiments, step 110 may include: and storing the message data to the message queue under the identification corresponding to the center to which the message data belongs.
In this embodiment, the center to which the message data belongs, i.e., the center that generated the message data, may be represented by an identification of the center that generated the message data.
Wherein the identification may be a name, code or code of the center that generated the message data, etc.
Each center under the same company is correspondingly provided with an identification.
In the actual implementation process, the message queue can be constructed based on the identifiers corresponding to the centers, namely, different topics are built in the MQ cluster, and each topic corresponds to the identifier of one center.
After the cache synchronization coordination unit sends the message data to the MQ, the MQ stores the message data under the corresponding topic based on the center from which the received message data originates, thereby achieving classified storage of message data from different center sources.
For example, the identifier corresponding to the center a may be a, and the identifier corresponding to the center B may be B, then the MQ stores the message data generated by the center a on topic a in the message queue, and stores the message data generated by the center B on topic B in the message queue.
In this embodiment, the message data is stored in the message queue under the identifier corresponding to its center source, so that message data from different center sources is stored in separate categories, the message data stays well organized, and key-value confusion is avoided.
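Purely as an illustrative sketch of how such per-center topics could be used with a RocketMQ producer, the routing might look as follows; the group name, topic naming scheme ("CACHE_SYNC_" + center identifier) and the JSON payload are assumptions, not part of the original disclosure.

import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.common.message.Message;
import java.nio.charset.StandardCharsets;

public class CenterTopicRouter {
    private final DefaultMQProducer producer;

    public CenterTopicRouter(String nameServerAddr) throws Exception {
        this.producer = new DefaultMQProducer("cache-sync-producer-group"); // assumed group name
        this.producer.setNamesrvAddr(nameServerAddr);
        this.producer.start();
    }

    // One topic per center: message data generated by center A lands on topic "CACHE_SYNC_A", etc.
    public void send(String centerId, String messageDataJson) throws Exception {
        String topic = "CACHE_SYNC_" + centerId;
        Message msg = new Message(topic, "warmup",
                messageDataJson.getBytes(StandardCharsets.UTF_8));
        producer.send(msg);
    }
}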
In some embodiments, prior to step 110, the method may include:
generating message data based on the service change information;
based on the message data, the initial cache data is invalidated from the memory database corresponding to the center to which the message data belongs, and the initial cache data is generated based on the service information before change.
In this embodiment, the traffic change information is used to characterize the change in traffic of the center.
The traffic change information may include, but is not limited to, changed traffic information.
It will be appreciated that the traffic change information corresponding to different centers may be different.
After the service changes, the data corresponding to the new service needs to be stored.
In the actual execution process, the data to be cached may be generated based on the service change information, and as shown in fig. 2, this step may be performed by setting corresponding service object trigger units (Object Trigger Unit) in respective centers.
The business object triggering unit is electrically connected with the cache synchronous coordination unit deployed in the center so as to realize data transmission between the business object triggering unit and the cache synchronous coordination unit.
The service object triggering unit is a data observer and is used for monitoring a service instance object to be written into a cache in a center to which the service object triggering unit belongs, generating data to be cached in response to service change information under the condition that the object is changed, and sending the generated data to be cached to the cache synchronous coordination unit.
With continued reference to fig. 2, the multi-center cache synchronization processing system further includes service object triggering units. The service object triggering unit A deployed in center A observes service instance object transaction commit events (i.e., service change information) in center A and generates the data A to be cached that needs to be written, where the service change information may include changes in order information or order-related logistics that occur as customer orders are submitted and circulated.
After generating the data to be cached, the business object triggering unit can send the data to be cached to the cache synchronous coordination unit electrically connected with the business object triggering unit.
The cache synchronization coordination unit generates message data based on the data to be cached, and based on the message data, the initial cache data is invalidated from a memory database corresponding to a center to which the message data belongs.
The initial cache data is old cache data stored in the memory database before service change corresponding to the same service.
Invalidating the initial cache data may be manifested as deleting the initial cache data.
For example, the cache synchronization coordination unit may invalidate initial cache data corresponding to a data object ID from all data cached in the memory database of the present center through the data object type and the data object ID in the data to be cached.
Invalidating the initial cache data prevents dirty reads under high concurrency before the new cache value is written.
In the actual execution process, the cache synchronization coordination unit may first preprocess the data to be cached, including but not limited to de-duplication, such as removing repeatedly pushed object events, so as to generate first data;
then according to the data object type and the data object ID in the first data, invalidating corresponding initial cache data from a memory database deployed in the center;
after the initial cache data is invalid, processing the first data, constructing the first data into message data, and sending the message data to the MQ.
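An illustrative sketch of this coordination step is given below, with assumed helper names: it de-duplicates repeatedly pushed object events, invalidates the old cache entry in the local center's Redis by object type and object ID, and then hands the message data to the shared MQ (reusing the CenterTopicRouter sketch above).

import redis.clients.jedis.Jedis;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class CacheSyncCoordinator {
    private final Jedis localRedis;          // Redis of the center this unit belongs to
    private final CenterTopicRouter router;  // MQ producer from the earlier sketch
    private final Set<String> seenEvents = ConcurrentHashMap.newKeySet(); // naive de-dup store

    public CacheSyncCoordinator(Jedis localRedis, CenterTopicRouter router) {
        this.localRedis = localRedis;
        this.router = router;
    }

    public void onObjectChanged(String centerId, String objectType, String objectId,
                                String messageDataJson) throws Exception {
        // 1. De-duplicate repeatedly pushed object events ("first data").
        //    A real system would bound or expire this set.
        if (!seenEvents.add(objectType + ":" + objectId + ":" + messageDataJson.hashCode())) {
            return;
        }
        // 2. Invalidate the initial cache data by data object type + ID,
        //    preventing dirty reads before the new value is warmed up.
        localRedis.del(objectType + ":" + objectId);   // key rule "objectType:objectId" is assumed
        // 3. Construct the message data and send it to the shared MQ.
        router.send(centerId, messageDataJson);
    }
}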
In some embodiments, prior to step 110, the method may include: and storing the target storage action log in a relational database.
In this embodiment, the target store action log is action information that characterizes storing a plurality of message data to a message queue.
A relational database (DB) is a database that organizes, stores, and manages data based on the relational model.
The relational database is used for storing complete data to be cached and related operation logs corresponding to the data to be cached.
The relational database may be Oracle, MySQL, or the like.
Multiple centers under the same company share a relational database.
It can be understood that the cache synchronization coordination units deployed in each center are respectively and electrically connected with the relational databases corresponding to the company, so as to realize data transmission between the cache synchronization coordination units and the relational databases.
In the actual execution process, after the cache synchronization coordination unit generates the message data and sends it to the MQ, it generates a target storage action log recording the sending action, and sends the generated target storage action log to the DB for storage.
Step 120, storing the data preheating request logs corresponding to the message data into a relational database;
in this step, the data warm-up request log includes objects of message data.
The object of the message data is the complete data to be cached corresponding to the message data, and the object of the message data comprises all information of the data to be cached, such as a data object ID, a data object structure, a data object format, data content and the like.
In the actual execution process, the data preheating request log can be generated by the cache synchronization coordination unit and sent to the DB for storage.
In some embodiments, the data warm-up request log may also include a center to which the message data pertains;
the center to which the message data belongs may be represented by an identification of the center that generated the message data.
In some embodiments, the data warm-up request log may also include a target store action log, which is not described in detail herein.
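A minimal sketch of persisting such a data preheating request log row to the shared relational database follows; the table name, column names and JDBC usage are assumptions, since the text only requires that the log carries the complete object of the message data and the center it belongs to.

import java.sql.Connection;
import java.sql.PreparedStatement;

public class WarmupRequestLogDao {
    private final Connection conn;   // e.g. obtained from DriverManager.getConnection(...)

    public WarmupRequestLogDao(Connection conn) {
        this.conn = conn;
    }

    public void save(String objectId, String centerId, String objectJson) throws Exception {
        String sql = "INSERT INTO warmup_request_log (object_id, center_id, object_json, created_at) "
                   + "VALUES (?, ?, ?, CURRENT_TIMESTAMP)";   // assumed table layout
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, objectId);
            ps.setString(2, centerId);
            ps.setString(3, objectJson);
            ps.executeUpdate();
        }
    }
}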
Step 130, sequentially pulling target message data from a plurality of message data in a message queue;
In this step, the target message data is the earliest message data among the plurality of message data stored in the message queue.
That is, the earlier the message data stored into the message queue, the earliest it will be pulled from the message queue.
In actual execution, this step may be performed by deploying a cache warm-up unit (Cache Preload Unit) separately at each center.
The cache preheating unit is electrically connected with the MQ, the DB and the in-memory database of its own center, respectively.
The cache preheating unit is used for monitoring the message queue.
In the actual execution process, the cache preheating unit monitors topics corresponding to all centers in the message queue, and continuously and orderly pulls the message data from the message queue based on the arrangement sequence of the message data in the message queue.
The target message data pulled in this step may be message data sent by a cache synchronization coordination unit disposed in the center, or may be message data sent by a cache synchronization coordination unit disposed in another center under the same company.
For example, with continued reference to FIG. 2, the multi-center cache synchronization processing system further includes a cache warm-up unit. The message data A corresponding to the center A and the message data B corresponding to the center B are sequentially stored in the MQ, a cache preheating unit A deployed in the center A monitors all message data in the MQ, and the message data A and the message data B are sequentially pulled from a message queue.
And 140, storing the object of the target message data to the memory database based on the target message data and the data preheating request log corresponding to the target message data.
In this step, the memory database may be a memory database deployed in an arbitrary center under the same company.
The data warm-up request log includes: the object of the message data and the center to which the message data belongs.
The data warm-up request log may be generated by a cache synchronization coordination unit disposed at each center and transmitted to the DB to be stored.
In the actual execution process, the object of the target message data can be obtained by the cache preheating unit based on the target message data and the data preheating request log corresponding to the target message data, and the obtained object of the target message data is sent to the memory database in the same center as the cache preheating unit for storage, so that the cache preheating operation is realized.
For example, the cache preheating unit writes the obtained object of the target message data into the Redis of the center where the unit is deployed, thereby completing the cache preheating operation.
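As a rough sketch of how a cache preheating unit could subscribe to the per-center topics with a RocketMQ push consumer, consider the following; the consumer group, topic names and the warmUp(...) hook are assumptions carried over from the earlier sketches.

import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;
import org.apache.rocketmq.common.message.MessageExt;

public abstract class CachePreloadUnit {
    public void start(String nameServerAddr) throws Exception {
        DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("cache-preload-center-a");
        consumer.setNamesrvAddr(nameServerAddr);
        // Listen to the topics of every center so that changes from the local
        // center and from remote centers are both warmed into the local Redis.
        consumer.subscribe("CACHE_SYNC_A", "*");
        consumer.subscribe("CACHE_SYNC_B", "*");
        consumer.registerMessageListener((MessageListenerConcurrently) (msgs, context) -> {
            for (MessageExt msg : msgs) {
                warmUp(msg);    // per-message warm-up, see the following WarmupStep sketch
            }
            return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
        });
        consumer.start();
    }

    // Warms one pulled message into the local in-memory database.
    protected abstract void warmUp(MessageExt msg);
}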
In some embodiments, step 140 may comprise:
based on the target message data, acquiring a data preheating request log corresponding to the target message data from a relational database;
and storing the object of the target message data in the data preheating request log corresponding to the target message data into a target memory database.
In this embodiment, for example, after the cache warm-up unit a pulls the message data a from the MQ, the cache warm-up unit a may acquire the data warm-up request log a corresponding to the message data a in the DB based on the pulled message data a;
then, acquiring an object of the message data A based on the data preheating request log A, and sending the acquired object of the message data A to the memory database A for storage;
Then the cache preheating unit A continues to monitor the MQ, and after the message data B is pulled from the MQ, the cache preheating unit A can acquire a data preheating request log B corresponding to the message data B in the DB based on the pulled message data B;
and then acquiring the object of the message data B based on the data preheating request log B, and sending the acquired object of the message data B to the memory database A for storage.
The execution steps of the cache preheating unit B are similar to those of the cache preheating unit a, and will not be described in detail herein.
In this embodiment, the cache preheating units deployed under the centers sequentially pull the same target message data from the MQ shared by the multiple centers, acquire the objects of the target message data from the shared DB based on the target message data, and store the acquired objects of the target message data in the memory databases of the centers, so as to realize synchronous update of the multiple center data and ensure that the service data of the centers can maintain consistency.
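The per-message warm-up itself could look roughly like the following sketch: the data preheating request log is looked up in the shared DB by the object ID carried in the message data, and the full object is written into the local center's Redis. The payload format, table layout and key rule are all assumptions.

import org.apache.rocketmq.common.message.MessageExt;
import redis.clients.jedis.Jedis;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class WarmupStep {
    private final Connection db;       // shared relational database
    private final Jedis localRedis;    // Redis of the center running this unit

    public WarmupStep(Connection db, Jedis localRedis) {
        this.db = db;
        this.localRedis = localRedis;
    }

    // Returns true when the object was found in the request log and cached successfully.
    public boolean warmUp(MessageExt msg) throws Exception {
        // Simplification: the message body is assumed to carry the data object ID;
        // in practice the full message-data JSON would be parsed instead.
        String objectId = new String(msg.getBody(), StandardCharsets.UTF_8);
        String sql = "SELECT object_type, object_json FROM warmup_request_log WHERE object_id = ?";
        try (PreparedStatement ps = db.prepareStatement(sql)) {
            ps.setString(1, objectId);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return false;   // no corresponding data preheating request log
                }
                // Assumed key rule "objectType:objectId", mirroring the monitoring section.
                localRedis.set(rs.getString("object_type") + ":" + objectId,
                               rs.getString("object_json"));
                return true;
            }
        }
    }
}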
During research and development, the inventor found that when multi-center service traffic is switched across centers, the service instance caches of the different centers differ before the switch, so that after the switch external requests cannot find the data in the cache instance, producing a large number of database penetrations. To avoid service jitter and instantaneous high-concurrency pressure on the database, the related art mainly adopts the following technical means:
1) Databus cache synchronization based on binlog uses database log mining: the database serves as the single source of truth, changes are extracted from the transaction or commit log, and the related derived databases or caches are then notified to achieve synchronization.
Because databases such as Oracle and MySQL have their own proprietary transaction log formats and replication/redundancy solutions, the code that mines the original database logs may break after each version upgrade and must be continuously upgraded, so the maintenance cost is high and implementation is difficult.
2) Synchronization based on redis-shake uses redis-shake master-slave synchronization: the source RDB is first written synchronously into the target RDB, followed by incremental data synchronization, i.e., a unidirectional full-plus-incremental synchronization mode.
Using this method for multi-center bidirectional synchronization can cause key-value confusion, and running redis-shake synchronization on an online production system imposes extra performance overhead on the cache server.
In the present application, a message queue shared by multiple centers is set up to hold the plurality of message data generated by each center, and each center sequentially pulls the target message data from the message queue, which effectively relieves the instantaneous high-concurrency pressure on the database, meets high-timeliness requirements on large data sets, and makes multi-center traffic switching imperceptible to users;
each center obtains the object of the target message data based on the target message data pulled from the message queue, and then stores the object of the target message data into its own in-memory database, which effectively resolves the jitter caused by cache penetration during multi-center traffic switching.
According to the data storage method provided by the embodiments of the application, a message queue shared by multiple centers stores the plurality of message data generated by each center, the object of the target message data is then obtained based on the target message data pulled from the message queue, and the object of the target message data is stored into the in-memory database of each center, thereby resolving the jitter caused by cache penetration during multi-center traffic switching, meeting high-timeliness requirements on large data sets, and making multi-center traffic switching imperceptible to users.
In some embodiments, after step 140, the method may further comprise:
in the case that the object storage of the target message data is successful, generating a data warm-up operation log based on the target message data, the data warm-up operation log including: at least one of an object of the target message data, a center to which the target message data belongs, an execution operation center, a processing success state, and a processing time;
And storing the data preheating operation log into a relational database.
In this embodiment, the object of the target message data is successfully stored, i.e., the object of the target message data is successfully stored in the in-memory database.
It should be noted that, when the objects of the target message data are stored in different centers, the corresponding storage results may be the same or different.
The center to which the target message data belongs is used for representing the originating home center of the target message data.
The execution operation center is used to characterize the center that performs the operation.
In an actual execution, this step may also be performed by the cache warm-up unit.
For example, after the cache preheating unit a successfully stores the message data a in the memory database a, the cache preheating unit a generates a data preheating operation log a based on the object of the message data a, the originating center, the executing operation center, the "processing success" state, the processing time and the like, and sends the data preheating operation log a to the DB for storage.
In this embodiment, the cache warm-up unit writes the data warm-up operation log to the DB to support validity checking and idempotent processing of the message data.
The validity check is mainly used for checking the integrity and the correctness of the format of the data through the regular expression;
Idempotent processing mainly aims at preventing repeated message consumption by message deduplication and mainly uses a distributed lock of redis to establish an anti-duplicate mechanism, so that idempotent guarantee capability is provided for synchronous tasks.
In this embodiment, the data preheating operation log corresponds to the data preheating request log, which facilitates subsequent tracing of the consistency between requests and operations; after successful processing, success is acknowledged to the message queue, thereby completing the consumption of the message.
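The anti-duplicate mechanism built on a Redis distributed lock could be sketched as follows; the key prefix and TTL are assumptions. A message whose lock already exists has been consumed before and is skipped, which gives the synchronization task its idempotency.

import redis.clients.jedis.Jedis;

public class IdempotentGuard {
    private final Jedis redis;

    public IdempotentGuard(Jedis redis) {
        this.redis = redis;
    }

    // Returns true if this message has not been processed yet and the lock was taken.
    public boolean tryAcquire(String messageId) {
        String lockKey = "warmup:dedup:" + messageId;   // assumed key prefix
        // SETNX succeeds only for the first consumer of this message id.
        if (redis.setnx(lockKey, "1") == 1L) {
            redis.expire(lockKey, 24 * 3600);           // keep the marker for one day (assumed TTL)
            return true;
        }
        return false;
    }
}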
In other embodiments, after step 140, the method may further comprise:
under the condition that the object storage of the target message data fails, sending a data retransmission instruction to a message queue, and generating a data preheating operation log based on the target message data, wherein the data preheating operation log comprises a processing failure state;
and storing the data preheating operation log into a relational database.
In this embodiment, failure of the object storage of the target message data means that the object of the target message data failed to be stored in the in-memory database, so the object of the target message data needs to be stored again.
The data retransmission instruction is an instruction for instructing the message queue to perform target message retransmission.
In the actual execution process, under the condition that the object storage of the target message data fails, the cache preheating unit generates a retransmission instruction based on the target message data so as to require a message queue to carry out retransmission of the target message data, and simultaneously, records a data preheating operation log under the condition of failure into the DB.
For example, when the cache warm-up unit B fails to store the message data a to the memory database B, the cache warm-up unit B generates the data warm-up operation log B based on the "processing failure" state, and sends the data warm-up operation log B to the DB for storage.
In this embodiment, the data preheating operation log corresponds to the data preheating request log, which facilitates subsequent tracing of the consistency between requests and operations; when processing fails, the data is stored again, ensuring the integrity of the stored data.
As shown in fig. 3, in some embodiments, prior to step 110, the method may include:
monitoring the working state of a message queue;
generating data to be cached based on service change information under the condition that the message queue is abnormal;
and storing the data to be cached into a target memory database and a relational database, wherein the target memory database corresponds to the central information to which the data to be cached belongs.
In this embodiment, the operating state of the message queue includes: normal state and abnormal state.
During actual execution, the working state of the message queue can be monitored by deploying a consistency monitoring unit (Consistency Monitor Unit) at each center.
The consistency monitoring unit performs consistency checks on the synchronized objects in various ways, such as inspecting database logs, queue consumption, and Redis key comparison, records data whose synchronization is abnormal, and compensates for data loss automatically or manually, thereby ensuring high availability of the system.
The target memory database is a memory database deployed in a center for generating the data to be cached, namely the data to be cached and the target memory database belong to the same center.
The description of this embodiment will be continued taking the above-described a center and B center as examples.
As shown in fig. 3, the multi-center cache synchronous processing system further includes a consistency monitoring unit, wherein the consistency monitoring unit a deployed in the a center monitors the message queue, and closes the synchronous switch when it is determined that the message queue is abnormal, thereby entering an individual management mode of each center.
In actual execution, the synchronization switch may be turned off through Nacos.
In the name Nacos, "Na" comes from naming/NameServer, i.e., the naming service; "co" comes from configuration, i.e., the configuration center; and "service" indicates that the registry/configuration center is service-centric.
Nacos can provide service online configuration capability.
Through the nacos online configuration center, the reliability and availability of the single-center cache can be ensured when the message queue fails.
The Nacos workflow is shown in fig. 5: once the console data changes, the Nacos cluster obtains the latest data based on the received configuration information, stores the latest data in a CacheData object, and recalculates the md5 attribute of the CacheData; when a change in the md5 value is detected, the receiveConfigInfo callback of the Listener bound to the CacheData is triggered.
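Purely as a sketch of how the synchronization switch could be bound to a Nacos configuration item and its receiveConfigInfo callback, consider the following; the dataId, group and "on"/"off" payload are assumptions.

import com.alibaba.nacos.api.NacosFactory;
import com.alibaba.nacos.api.config.ConfigService;
import com.alibaba.nacos.api.config.listener.Listener;
import java.util.Properties;
import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicBoolean;

public class SyncSwitch {
    private final AtomicBoolean syncEnabled = new AtomicBoolean(true);

    public void bind(String nacosServerAddr) throws Exception {
        Properties props = new Properties();
        props.put("serverAddr", nacosServerAddr);
        ConfigService configService = NacosFactory.createConfigService(props);
        configService.addListener("cache-sync-switch", "DEFAULT_GROUP", new Listener() {
            @Override public Executor getExecutor() { return null; }
            @Override public void receiveConfigInfo(String configInfo) {
                // "off" pushed from the Nacos console switches each center
                // into its individual management mode.
                syncEnabled.set(!"off".equalsIgnoreCase(configInfo.trim()));
            }
        });
    }

    public boolean isEnabled() {
        return syncEnabled.get();
    }
}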
After entering the independent management mode of each center, a service object triggering unit A observes a service instance object transaction submitting event of the center A to generate data A to be cached, and sends the data A to be cached to a cache synchronous coordination unit A;
the cache synchronization coordination unit A performs data processing such as de-duplication and the like on the received data A to be cached to generate first data A, and fails corresponding initial cache data from the memory database A of the A center according to the data object type and the data object ID;
the cache synchronization coordination unit A caches the generated first data A into an internal memory database A of the A center, and sends the first data A into a DB common to multiple centers for storage.
After the MQ fault is recovered, the data to be cached that was generated during the fault period is tallied in the DB by time period, and a manual consistency cache-write operation is performed for each center using the batch of object IDs.
For example, the MQ fails in the period of t1-t2, during the failure period, the order of the a center changes, the service object triggering unit a of the a center generates data a to be cached based on the changed order data, the cache synchronization coordination unit a performs data processing such as de-duplication on the received data a to be cached to generate first data a, and then stores the newly generated first data a into the memory databases a and DB respectively.
After operation and maintenance staff repair the MQ fault, the data within the time period t1-t2 in the DB is tallied manually; for example, an order ID can be entered to obtain the first data A, and the first data A is written through a manual operation-and-maintenance interface into the other centers, such as in-memory database B under center B. On the basis of ensuring cache consistency across centers, this keeps synchronization tasks visible and keeps operation and maintenance controllable and simple.
According to the data storage method provided by the embodiment of the application, the working state of the message queue is monitored, the independent management mode of each center is entered under the condition of the fault of the message queue, the data to be cached is independently generated by each center based on the service change information, and the data to be cached is stored in the memory database corresponding to each center, so that the problem that the data to be cached cannot be transmitted through the MQ under the condition of encountering power failure or other unpredictable faults is effectively avoided; in addition, the data to be cached is stored in the relational database shared by the multiple centers, so that the data to be cached generated during the MQ fault period can be written into the memory database under other centers of the multiple centers through the data in the statistical relational database, the consistency writing is realized, the consistency of the business of each center is effectively ensured, and the operation is simple and quick.
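An illustrative sketch of the individual management mode follows: while the MQ is down, the data to be cached is written directly to the local center's Redis and, in parallel, recorded in the shared relational DB so that other centers can be back-filled after the fault is repaired. The table name, column names and key rule are assumptions.

import redis.clients.jedis.Jedis;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class DegradedModeWriter {
    private final Jedis localRedis;
    private final Connection sharedDb;

    public DegradedModeWriter(Jedis localRedis, Connection sharedDb) {
        this.localRedis = localRedis;
        this.sharedDb = sharedDb;
    }

    public void write(String centerId, String objectType, String objectId, String objectJson)
            throws Exception {
        // 1. Keep the local center usable: cache the new value immediately.
        localRedis.set(objectType + ":" + objectId, objectJson);
        // 2. Record the value in the shared DB for the later manual consistency write-back.
        String sql = "INSERT INTO pending_cache_data (center_id, object_type, object_id, object_json, created_at) "
                   + "VALUES (?, ?, ?, ?, CURRENT_TIMESTAMP)";
        try (PreparedStatement ps = sharedDb.prepareStatement(sql)) {
            ps.setString(1, centerId);
            ps.setString(2, objectType);
            ps.setString(3, objectId);
            ps.setString(4, objectJson);
            ps.executeUpdate();
        }
    }
}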
In some embodiments, the method may further comprise:
generating alarm information based on at least one of index information of the message queue, a data preheating request log and a data preheating operation log;
and outputting alarm information.
In this embodiment, the index information of the message queue includes, but is not limited to: consumption state of the message queue, loss information of the message data, working state of the message queue, and the like.
As shown in fig. 4, in the actual execution, the above steps may be performed by the consistency monitoring unit.
The consistency monitoring unit monitors the index information of the queue, the data preheating request log and the data preheating operation log, and performs consistency monitoring by periodically comparing the number of Redis keys of the corresponding objects in each center's Redis, where the key generation rule may be "business type code:business instance ID", and the object type of the message data corresponds to the object ID.
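The periodic key-count comparison could be sketched as follows, assuming the key rule above; for brevity it uses KEYS, whereas a production check would prefer SCAN to avoid blocking Redis.

import redis.clients.jedis.Jedis;

public class KeyCountChecker {
    // Compares the number of keys for one business type between two centers' Redis
    // instances; a non-zero deviation may trigger an alarm.
    public long deviation(Jedis redisCenterA, Jedis redisCenterB, String businessTypeCode) {
        String pattern = businessTypeCode + ":*";   // assumed "business type code:business instance ID" rule
        long countA = redisCenterA.keys(pattern).size();
        long countB = redisCenterB.keys(pattern).size();
        return Math.abs(countA - countB);
    }
}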
The alarm information is used for carrying out fault alarm so as to assist staff in timely operating and maintaining faults.
The output alarm information may be represented as any of the following output forms:
first, the output may be represented as a text output.
In this embodiment, the alarm information may be converted to text form for output.
Second, the output may be represented as a speech output.
In this embodiment, the alarm information may be output by voice.
Third, the output may be represented as an image output.
In this embodiment, the alarm information may be displayed by setting a human operation and maintenance interface.
In this embodiment, operations are exposed through an interface and visualized, so that fault compensation can be performed based on anomalies found by manual or multi-point monitoring, and the operation is simple and convenient.
Of course, in other embodiments, the output may take other forms, such as output through a signal, etc., and may be determined according to actual needs, which is not limited by the embodiment of the present invention.
For example, in some embodiments, the operating state of the MQ and the production and consumption balance of the MQ for caching synchronous topic may be monitored by the consistency monitoring unit, and the existing alert platform may be integrated to alert.
In some embodiments, the consistency monitoring unit may monitor the data preheating request log generated by the cache synchronization coordination unit and the data preheating operation log generated by the cache preheating unit, determine whether the objects of the message data written into the message queue and the message data successfully consumed (stored in the memory database) are balanced, and alarm the service object failed in the preheating process.
In some embodiments, the number of keys of the corresponding business objects of each center redis may be compared at regular time by the consistency monitoring unit to monitor the number of preheat deviations.
In some embodiments, during business idle periods, key-consistency and value-consistency checks may be performed on the business instance classes in conjunction with the redis-full-check tool.
According to the data storage method provided by the embodiments of the application, an alarm service is provided by monitoring at least one of the index information of the message queue, the data preheating request log and the data preheating operation log, so that multi-center cache consistency is monitored and the stability of cross-center switching of multi-center service traffic is improved.
The data storage device provided by the embodiment of the present application is described below, and the data storage device described below and the data storage method described above may be referred to correspondingly.
As shown in fig. 7, the data storage device includes: a first processing module 710, a second processing module 720, a third processing module 730, and a fourth processing module 740.
A first processing module 710, configured to store the generated plurality of message data into a message queue;
a second processing module 720, configured to store data preheating request logs corresponding to the plurality of message data to a relational database, where the data preheating request logs include objects of the message data;
A third processing module 730 for sequentially pulling the target message data from the plurality of message data in the message queue;
the fourth processing module 740 is configured to store the object of the target message data to the memory database based on the target message data and the data warm-up request log corresponding to the target message data.
The data storage device provided by the embodiments of the application uses a message queue shared by multiple centers to store the plurality of message data generated by each center, obtains the object of the target message data based on the target message data pulled from the message queue, and stores the object of the target message data into the in-memory database of each center, thereby resolving the jitter caused by cache penetration during multi-center traffic switching, meeting high-timeliness requirements on large data sets, and making multi-center traffic switching imperceptible to users.
In some embodiments, the fourth processing module 740 may also be configured to:
based on the target message data, acquiring a data preheating request log corresponding to the target message data from a relational database;
and storing the object of the target message data in the data preheating request log corresponding to the target message data into a memory database.
In some embodiments, before storing the generated plurality of message data to the message queue, the apparatus may further include a fifth processing module for:
generating message data based on the service change information;
based on the message data, the initial cache data is invalidated from the memory database corresponding to the center information to which the message data belongs, and the initial cache data is generated based on the service information before change.
In some embodiments, the first processing module 710 may also be configured to: and storing the message data to an identifier corresponding to the central information to which the message data belongs in the message queue.
In some embodiments, after storing the object of the target message data to the in-memory database based on the target message data and the data warm-up request log corresponding to the target message data, the apparatus may further include a sixth processing module for:
in the case that the object storage of the target message data is successful, generating a data warm-up operation log based on the target message data, the data warm-up operation log including: at least one of an object of the target message data, center information to which the target message data belongs, an execution operation center, a processing success state and a processing time;
And storing the data preheating operation log into a relational database.
In some embodiments, after storing the object of the target message data to the in-memory database based on the target message data and the data warm-up request log corresponding to the target message data, the apparatus may further include a seventh processing module for:
under the condition that the object storage of the target message data fails, sending a data retransmission instruction to a message queue, and generating a data preheating operation log based on the target message data, wherein the data preheating operation log comprises a processing failure state;
and storing the data preheating operation log into a relational database.
In some embodiments, before storing the generated plurality of message data to the message queue, the apparatus may further include an eighth processing module for:
monitoring the working state of a message queue;
generating data to be cached based on service change information under the condition that the message queue is abnormal;
and storing the data to be cached into a target memory database and a relational database, wherein the target memory database corresponds to the central information to which the data to be cached belongs.
In some embodiments, the apparatus may further comprise:
a ninth processing module, configured to generate alarm information based on at least one of index information of the message queue, a data preheating request log, and a data preheating operation log;
And the output module is used for outputting alarm information.
Fig. 8 illustrates a physical structure diagram of an electronic device, as shown in fig. 8, which may include: processor 810, communication interface (Communication Interface) 820, memory 830, and communication bus 840, wherein processor 810, communication interface 820, memory 830 accomplish communication with each other through communication bus 840. The processor 810 may call a computer program in the memory 830 to perform the steps of a data storage method, for example comprising: storing the generated plurality of message data to a message queue; storing data preheating request logs corresponding to the message data to a relational database, wherein the data preheating request logs comprise objects of the message data; sequentially pulling target message data from a plurality of message data in a message queue; and storing the object of the target message data to the memory database based on the target message data and the data preheating request log corresponding to the target message data.
Further, the logic instructions in the memory 830 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In another aspect, embodiments of the present application further provide a computer program product, where the computer program product includes a computer program, where the computer program may be stored on a non-transitory computer readable storage medium, where the computer program when executed by a processor is capable of executing the steps of the data storage method provided in the foregoing embodiments, for example, including: storing the generated plurality of message data to a message queue; storing data preheating request logs corresponding to the message data to a relational database, wherein the data preheating request logs comprise objects of the message data; sequentially pulling target message data from a plurality of message data in a message queue; and storing the object of the target message data to the memory database based on the target message data and the data preheating request log corresponding to the target message data.
In another aspect, embodiments of the present application further provide a processor-readable storage medium storing a computer program for causing a processor to execute the steps of the method provided in the above embodiments, for example, including: storing the generated plurality of message data to a message queue; storing data preheating request logs corresponding to the message data to a relational database, wherein the data preheating request logs comprise objects of the message data; sequentially pulling target message data from a plurality of message data in a message queue; and storing the object of the target message data to the memory database based on the target message data and the data preheating request log corresponding to the target message data.
The processor-readable storage medium may be any available medium or data storage device that can be accessed by a processor, including, but not limited to, magnetic storage (e.g., floppy disks, hard disks, magnetic tape, magneto-optical disks (MOs), etc.), optical storage (e.g., CD, DVD, BD, HVD, etc.), semiconductor storage (e.g., ROM, EPROM, EEPROM, nonvolatile storage (NAND FLASH), solid State Disk (SSD)), and the like.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and that such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method of data storage, comprising:
storing the generated plurality of message data to a message queue;
storing data preheating request logs corresponding to the message data to a relational database, wherein the data preheating request logs comprise objects of the message data;
sequentially pulling target message data from the plurality of message data of the message queue;
and storing the object of the target message data to an in-memory database based on the target message data and a data preheating request log corresponding to the target message data.
2. The data storage method according to claim 1, wherein the storing the object of the target message data to the in-memory database based on the target message data and the data preheating request log corresponding to the target message data comprises:
based on the target message data, acquiring the data preheating request log corresponding to the target message data from the relational database;
and storing, to the in-memory database, the object of the target message data contained in the data preheating request log corresponding to the target message data.
3. The data storage method of claim 1, wherein prior to said storing the generated plurality of message data to the message queue, the method comprises:
generating the message data based on the service change information;
and based on the message data, invalidating initial cache data from an in-memory database corresponding to a center to which the message data belongs, wherein the initial cache data is generated based on the service information before the change.
4. A data storage method according to any one of claims 1 to 3, wherein storing the generated plurality of message data to a message queue comprises:
and storing the message data to the message queue under the identification corresponding to the center to which the message data belongs.
5. A data storage method according to any one of claims 1 to 3, wherein after the storing of the object of the target message data to the in-memory database based on the target message data and the data preheating request log corresponding to the target message data, the method further comprises:
generating a data preheating operation log based on the target message data under the condition that the object of the target message data is stored successfully, the data preheating operation log comprising at least one of: an object of the target message data, a center to which the target message data belongs, an executing operation center, a processing success state, and a processing time;
storing the data preheating operation log to the relational database;
transmitting a data retransmission instruction to the message queue under the condition that the storing of the object of the target message data fails, and generating a data preheating operation log based on the target message data, wherein the data preheating operation log comprises a processing failure state;
and storing the data preheating operation log into the relational database.
6. A data storage method according to any one of claims 1 to 3, wherein prior to said storing the generated plurality of message data to the message queue, the method comprises:
monitoring the working state of the message queue;
generating data to be cached based on service change information under the condition that the message queue is abnormal;
and storing the data to be cached into a target in-memory database and the relational database, wherein the target in-memory database corresponds to the center information of the data to be cached.
7. A data storage method according to any one of claims 1 to 3, wherein the method further comprises:
generating alarm information based on at least one of the index information of the message queue, the data preheating request log and the data preheating operation log;
and outputting the alarm information.
8. A data storage device, comprising:
the first processing module is used for storing the generated plurality of message data to a message queue;
the second processing module is used for storing data preheating request logs corresponding to the message data to a relational database, wherein the data preheating request logs comprise objects of the message data;
a third processing module, configured to sequentially pull target message data from the plurality of message data in the message queue;
and the fourth processing module is used for storing the object of the target message data to an in-memory database based on the target message data and the data preheating request log corresponding to the target message data.
9. An electronic device comprising a processor and a memory storing a computer program, characterized in that the processor implements the data storage method of any of claims 1 to 7 when executing the computer program.
10. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the data storage method of any of claims 1 to 7.
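Purely as an illustration of the failure-handling paths recited in claims 5 and 6, and not as an implementation of the claimed subject matter, the sketch below again substitutes standard-library stand-ins for the message queue, the relational database, and the in-memory database; handle_result, store_with_fallback, and the operation-log column names are hypothetical.

import json
import queue
import sqlite3
import time

message_queue: "queue.Queue[dict]" = queue.Queue()   # stand-in for the message queue
relational_db = sqlite3.connect(":memory:")           # stand-in for the relational database
in_memory_db: dict = {}                                # stand-in for the target in-memory database

relational_db.execute(
    "CREATE TABLE warmup_request_log (msg_id TEXT PRIMARY KEY, obj TEXT)"
)
relational_db.execute(
    "CREATE TABLE warmup_operation_log "
    "(msg_id TEXT, center TEXT, executed_by TEXT, state TEXT, processed_at REAL)"
)

def handle_result(target: dict, success: bool, executing_center: str) -> None:
    """Claim 5: on success, store a data preheating operation log with a processing
    success state; on failure, transmit a data retransmission instruction (modelled
    here as re-enqueueing) and store a log with a processing failure state."""
    if not success:
        message_queue.put(target)  # data retransmission instruction
    relational_db.execute(
        "INSERT INTO warmup_operation_log VALUES (?, ?, ?, ?, ?)",
        (target["id"], target.get("center", ""), executing_center,
         "success" if success else "failure", time.time()),
    )
    relational_db.commit()

def store_with_fallback(change: dict, queue_is_healthy: bool) -> None:
    """Claim 6: while the monitored message queue is normal, data goes through the
    queue; when it is abnormal, the data to be cached is written directly to the
    target in-memory database and the relational database."""
    if queue_is_healthy:
        message_queue.put(change)
    else:
        in_memory_db[change["id"]] = change["object"]
        relational_db.execute(
            "INSERT OR REPLACE INTO warmup_request_log VALUES (?, ?)",
            (change["id"], json.dumps(change["object"])),
        )
        relational_db.commit()

# Example: the queue is detected as abnormal, so the change is cached directly;
# a later warm-up attempt for another message is then logged as successful.
store_with_fallback({"id": "order:1003", "object": {"status": "refunded"}, "center": "A"},
                    queue_is_healthy=False)
handle_result({"id": "order:1004", "center": "A"}, success=True, executing_center="B")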
CN202210233315.5A 2022-03-10 2022-03-10 Data storage method, apparatus, electronic device and computer program product Pending CN116775640A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210233315.5A CN116775640A (en) 2022-03-10 2022-03-10 Data storage method, apparatus, electronic device and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210233315.5A CN116775640A (en) 2022-03-10 2022-03-10 Data storage method, apparatus, electronic device and computer program product

Publications (1)

Publication Number Publication Date
CN116775640A true CN116775640A (en) 2023-09-19

Family

ID=88010359

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210233315.5A Pending CN116775640A (en) 2022-03-10 2022-03-10 Data storage method, apparatus, electronic device and computer program product

Country Status (1)

Country Link
CN (1) CN116775640A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination