CN115525684A - Data writing method, device, equipment, computer readable medium and program product - Google Patents


Info

Publication number
CN115525684A
Authority
CN
China
Prior art keywords
cache cluster
information
cache
determining
cached
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211203572.0A
Other languages
Chinese (zh)
Inventor
张志维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Information Technology Co Ltd
Original Assignee
Jingdong Technology Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Information Technology Co Ltd filed Critical Jingdong Technology Information Technology Co Ltd
Priority to CN202211203572.0A priority Critical patent/CN115525684A/en
Publication of CN115525684A publication Critical patent/CN115525684A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0706Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
    • G06F11/0709Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0793Remedial or corrective actions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F16/273Asynchronous replication or reconciliation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Embodiments of the present disclosure disclose data writing methods, apparatuses, devices, computer-readable media, and program products. One embodiment of the method comprises: in response to receiving a data write cache request, determining master cache cluster information in a cache configuration; performing distributed lock addition processing on the master cache cluster corresponding to the master cache cluster information; in response to determining that locking of the master cache cluster succeeds, writing the data set to be cached into the master cache cluster; and in response to determining that the data set to be cached is successfully written into the master cache cluster and that at least one piece of slave cache cluster information exists in the cache configuration, writing the data set to be cached into the at least one slave cache cluster corresponding to that information. The embodiment relates to data caching and can ensure consistency when writing the data set to be cached into the master cache cluster and the at least one slave cache cluster.

Description

Data writing method, device, equipment, computer readable medium and program product
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a data writing method, apparatus, device, computer-readable medium, and program product.
Background
Currently, in order to improve responsiveness, most systems use cache-type services to store data. When storing data with cache-type services, the following approach is generally adopted: the data is stored separately in each of at least one predetermined cache-type service.
However, the inventors have found that storing data in the above manner often gives rise to the following technical problem:
when each cache-type service writes data independently, an individual cache-type service may be interfered with by related tasks, leaving the data written across the cache-type services inconsistent and affecting subsequent retrieval of the data set to be cached.
The information disclosed in this Background section is only for enhancing understanding of the background of the inventive concept and may therefore contain information that does not constitute prior art already known to a person of ordinary skill in the art in this country.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose data writing methods, apparatuses, devices, computer readable media and program products to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a data writing method, including: in response to receiving a data write cache request, determining master cache cluster information in a cache configuration; performing distributed lock addition processing on the master cache cluster corresponding to the master cache cluster information; in response to determining that locking of the master cache cluster succeeds, writing a data set to be cached into the master cache cluster; and in response to determining that the data set to be cached is successfully written into the master cache cluster and that at least one piece of slave cache cluster information exists in the cache configuration, writing the data set to be cached into the at least one slave cache cluster corresponding to that information.
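As a rough illustration, the four steps of the first aspect can be sketched as follows. This is a minimal local sketch, not the patent's implementation: `threading.Lock` stands in for a real distributed lock (in practice, e.g., a Redis- or ZooKeeper-based lock), plain dicts stand in for cache clusters, and all names are illustrative assumptions.

```python
import threading

class CacheWriteCoordinator:
    """Hypothetical sketch of the first-aspect write flow."""

    def __init__(self, cache_config, distributed_lock):
        # cache_config: {"master": <cluster>, "slaves": [<cluster>, ...]}
        self.cache_config = cache_config
        self.lock = distributed_lock  # local stand-in for a distributed lock

    def write(self, data_set):
        master = self.cache_config["master"]        # step 1: master info from config
        if not self.lock.acquire(blocking=False):   # step 2: distributed lock addition
            return False                            # locking failed
        try:
            master.update(data_set)                 # step 3: write the master cluster
            for slave in self.cache_config.get("slaves", []):
                slave.update(data_set)              # step 4: write each slave cluster
            return True
        finally:
            self.lock.release()                     # lock deleted once writes finish
```

Because the master and slave writes both run under the same lock, no other task can interleave between them, which is the consistency argument the method relies on.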
Optionally, the method further includes: deleting the distributed lock for the master cache cluster in response to determining that the data set to be cached has been successfully written into the at least one slave cache cluster.
Optionally, the method further includes: generating write failure information in response to determining that the data set to be cached was not successfully written into the master cache cluster; determining the number of write failures for the master cache cluster within a target time period; adding a predetermined value to that number, according to the write failure information, to obtain an updated write failure count; in response to determining that the updated write failure count is greater than or equal to a predetermined threshold and that the at least one piece of slave cache cluster information exists in the cache configuration, receiving first configuration modification information for the cache configuration, where the first configuration modification information designates any one piece of the at least one piece of slave cache cluster information in the cache configuration as the master cache cluster information; determining the modified master cache cluster information based on the first configuration modification information; and writing the data set to be cached into the cache cluster corresponding to the modified master cache cluster information.
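The failover branch above amounts to a bounded failure counter plus a master swap driven by a configuration change. A hedged sketch, where the function, field names, and the choice of "first slave becomes master" are illustrative assumptions rather than the patent's API:

```python
def handle_write_failure(failure_state, cache_config, threshold=3):
    """Hypothetical sketch of the optional failover branch."""
    # add a predetermined value (here 1) to the write-failure count
    failure_state["failures"] = failure_state.get("failures", 0) + 1
    if failure_state["failures"] >= threshold and cache_config.get("slaves"):
        # first configuration modification: any one slave becomes the master
        cache_config["master"] = cache_config["slaves"].pop(0)
        return "promoted"
    # below the threshold: report the cache failure to the upstream server
    return "report_upstream"
```

Subsequent writes then target `cache_config["master"]`, i.e., the cluster corresponding to the modified master cache cluster information.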
Optionally, the method further includes: sending, to an upstream server, cache failure information indicating that writing the data to be cached has failed, in response to determining that the updated write failure count is less than the predetermined threshold.
Optionally, the method further includes: receiving second configuration modification information in response to determining that fault repair of a target master cache cluster is complete, where the second configuration modification information designates the target master cache cluster as a slave cache cluster, the target master cache cluster being the cluster corresponding to the replaced master cache cluster information in the cache configuration; determining, according to the second configuration modification information, the target master cache cluster as a cluster corresponding to slave cache cluster information; and writing the data set to be cached into the target master cache cluster.
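The recovery path can be sketched in the same style: once the failed former master is repaired, the second configuration modification re-registers it as a slave so that subsequent writes include it again (function and field names are illustrative assumptions):

```python
def handle_recovery(cache_config, repaired_cluster):
    """Hypothetical sketch of the second configuration modification:
    the repaired former master rejoins the configuration as a slave."""
    if repaired_cluster not in cache_config["slaves"]:
        cache_config["slaves"].append(repaired_cluster)
    return cache_config
```

After this change, the write flow treats the repaired cluster exactly like any other slave, so the data set to be cached is written into it as well.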
Optionally, the method further includes: deleting the distributed lock for the master cache cluster in response to determining that no slave cache cluster information exists in the cache configuration.
Optionally, the method further includes: determining locking failure type information in response to determining that locking of the master cache cluster fails; and, in response to determining that the locking failure type information represents a loading fault of the master cache cluster, attempting to add the distributed lock again up to a predetermined number of times.
Optionally, the method further includes: performing a distributed lock addition operation for the master cache cluster in response to determining that the locking failure type information represents a task-occupation fault and that the related occupying task has finished executing.
In a second aspect, some embodiments of the present disclosure provide a data writing apparatus, including: a determining unit configured to determine primary cache cluster information in a cache configuration in response to receiving a data write cache request; the adding unit is configured to perform distributed lock adding processing on the main cache cluster corresponding to the main cache cluster information; a first writing unit configured to write a data set to be cached into the primary cache cluster in response to determining that locking of the primary cache cluster is successful; and a second writing unit, configured to, in response to determining that the data set to be cached is successfully written into the master cache cluster and that at least one piece of slave cache cluster information exists in the cache configuration, write the data set to be cached into at least one slave cache cluster corresponding to the at least one piece of slave cache cluster information.
Optionally, the apparatus further comprises: deleting the distributed lock for the master cache cluster in response to determining that the data set to be cached has been successfully written into the at least one slave cache cluster.
Optionally, the apparatus further comprises: generating write failure information in response to determining that the data set to be cached was not successfully written into the master cache cluster; determining the number of write failures for the master cache cluster within a target time period; adding a predetermined value to that number, according to the write failure information, to obtain an updated write failure count; in response to determining that the updated write failure count is greater than or equal to a predetermined threshold and that the at least one piece of slave cache cluster information exists in the cache configuration, receiving first configuration modification information for the cache configuration, where the first configuration modification information designates any one piece of the at least one piece of slave cache cluster information in the cache configuration as the master cache cluster information; determining the modified master cache cluster information based on the first configuration modification information; and writing the data set to be cached into the cache cluster corresponding to the modified master cache cluster information.
Optionally, the apparatus further comprises: sending, to an upstream server, cache failure information indicating that writing the data to be cached has failed, in response to determining that the updated write failure count is less than the predetermined threshold.
Optionally, the apparatus further comprises: receiving second configuration modification information in response to determining that fault repair of a target master cache cluster is complete, where the second configuration modification information designates the target master cache cluster as a slave cache cluster, the target master cache cluster being the cluster corresponding to the replaced master cache cluster information in the cache configuration; determining, according to the second configuration modification information, the target master cache cluster as a cluster corresponding to slave cache cluster information; and writing the data set to be cached into the target master cache cluster.
Optionally, the apparatus further comprises: deleting the distributed lock for the master cache cluster in response to determining that no slave cache cluster information exists in the cache configuration.
Optionally, the apparatus further comprises: determining locking failure type information in response to determining that locking of the master cache cluster fails; and, in response to determining that the locking failure type information represents a loading fault of the master cache cluster, attempting to add the distributed lock again up to a predetermined number of times.
Optionally, the apparatus further comprises: performing a distributed lock addition operation for the master cache cluster in response to determining that the locking failure type information represents a task-occupation fault and that the related occupying task has finished executing.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method as described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, where the program when executed by a processor implements a method as described in any of the implementations of the first aspect.
In a fifth aspect, some embodiments of the present disclosure provide a computer program product comprising a computer program that, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
The above embodiments of the present disclosure have the following beneficial effects: the data writing method of some embodiments of the present disclosure can ensure consistency when writing the data set to be cached into the master cache cluster and the at least one slave cache cluster. Specifically, the cause of inconsistent writes of the data set to be cached is as follows: when each cache-type service writes data independently, an individual cache-type service may be interfered with by related tasks, leaving the data written across the services inconsistent and affecting subsequent retrieval of the data set to be cached. Based on this, the data writing method of some embodiments of the present disclosure first determines, in response to receiving a data write cache request, the master cache cluster information in the cache configuration, which is used for subsequently writing the data set to be cached and performing the distributed lock addition operation. It then performs distributed lock addition processing on the master cache cluster corresponding to the master cache cluster information, ensuring that the task of writing the data set to be cached into the cache clusters runs without interference from other tasks. Then, in response to determining that locking of the master cache cluster succeeds, the data set to be cached can be accurately written into the master cache cluster, so that the corresponding data can later be retrieved from it. Finally, in response to determining that the data set to be cached is successfully written into the master cache cluster and that at least one piece of slave cache cluster information exists in the cache configuration, the data set to be cached is written into the at least one slave cache cluster corresponding to that information.
Here, by performing distributed locking on the master cache cluster, it is ensured that, while the data set to be cached is being written into the master cache cluster and the at least one slave cache cluster, no other task can interfere, which guarantees write consistency for the data set to be cached.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
FIG. 1 is a schematic diagram of one application scenario of a data writing method according to some embodiments of the present disclosure;
FIG. 2 is a flow diagram of some embodiments of a data writing method according to the present disclosure;
FIG. 3 is a flow chart of further embodiments of a data writing method according to the present disclosure;
FIG. 4 is a schematic block diagram of some embodiments of a data writing apparatus according to the present disclosure;
FIG. 5 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand that, unless the context clearly dictates otherwise, they should be read as "one or more".
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Before performing the operations involving collection, storage, and use of a user's personal information (such as the data set to be cached) described in the present disclosure, the relevant organization or individual should, to the extent possible, fulfil their obligations, including conducting a personal information security impact assessment, notifying the personal information subject, and obtaining the subject's prior authorized consent.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of one application scenario of a data writing method according to some embodiments of the present disclosure.
In the application scenario of fig. 1, first, in response to receiving a data write cache request, the electronic device 101 may determine master cache cluster information 1021 in the cache configuration 102. In this application scenario, the master cache cluster information 1021 may be: { master cache cluster information: A cluster information }. Then, the electronic device 101 may perform distributed lock addition processing on the master cache cluster 103 corresponding to the master cache cluster information 1021. Next, in response to determining that locking of the master cache cluster 103 succeeds, the electronic device 101 may write the data set 105 to be cached into the master cache cluster 103. Finally, in response to determining that the data set 105 to be cached is successfully written into the master cache cluster 103 and that at least one piece of slave cache cluster information 1022 exists in the cache configuration 102, the electronic device 101 may write the data set 105 to be cached into the at least one slave cache cluster 104 corresponding to the at least one piece of slave cache cluster information 1022. In this application scenario, the at least one piece of slave cache cluster information 1022 may be: { slave cache cluster information: B cluster information, C cluster information, D cluster information }. The cluster corresponding to the B cluster information is the slave cache cluster 1041 of the at least one slave cache cluster 104; the cluster corresponding to the C cluster information is the slave cache cluster 1042; and the cluster corresponding to the D cluster information is the slave cache cluster 1043.
The electronic device 101 may be hardware or software. When it is hardware, it may be implemented as a distributed cluster formed by multiple servers or terminal devices, or as a single server or terminal device. When it is software, it may be installed in the hardware devices listed above and may be implemented, for example, as multiple pieces of software or software modules providing distributed services, or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the number of electronic devices in fig. 1 is merely illustrative. There may be any number of electronic devices, as desired for an implementation.
With continued reference to fig. 2, a flow 200 of some embodiments of a data writing method according to the present disclosure is shown. The data writing method comprises the following steps:
step 201, in response to receiving a data write cache request, determining primary cache cluster information in a cache configuration.
In some embodiments, in response to receiving a data write cache request, an execution subject of the data writing method (e.g., the electronic device 101 shown in fig. 1) may determine master cache cluster information in a cache configuration. The data write cache request may be a request indicating that a data set to be cached is to be written into a cache cluster. The cache configuration is the configuration information of the cache clusters into which the data set to be cached needs to be written, and may include the master cache cluster information. In practice, the master cache cluster information may be the cluster name information of the master cache cluster. The cache configuration may be configuration information generated by a configuration center, which is an information management center for generating and modifying cache configurations. The master cache cluster may be the cluster queried when the cached data is subsequently looked up; it may be a group of independent computers, interconnected by a high-speed network, that can be used to cache data. The master cache cluster may be, but is not limited to, one of: a ZooKeeper cluster, a Nacos cluster.
Step 202, performing distributed lock adding processing on the main cache cluster corresponding to the main cache cluster information.
In some embodiments, the execution subject may perform distributed lock addition processing on the master cache cluster corresponding to the master cache cluster information. The purpose of adding the distributed lock is as follows: to make the distributed system process shared resources in an orderly manner and preserve consistency through the principle of task mutual exclusion. For example, suppose the tasks performing resource processing on a shared resource include a first task and a second task. When the first task has added the distributed lock and is processing the shared resource, the second task cannot obtain the lock information of the distributed lock held by the first task and therefore cannot process the shared resource; it can only perform its resource processing on the shared resource after the first task has finished processing it.
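The mutual exclusion described above can be demonstrated with a local `threading.Lock` standing in for the distributed lock. The ordering guarantee is the same in kind, though a real distributed lock coordinates tasks across processes and machines rather than threads:

```python
import threading

shared_resource = []          # stands in for the shared cache resource
lock = threading.Lock()       # local stand-in for the distributed lock

def first_task():
    with lock:                # the first task adds the lock...
        shared_resource.append("first")

def second_task():
    # ...the second task cannot obtain the lock until the first releases it
    with lock:
        shared_resource.append("second")

t1 = threading.Thread(target=first_task)
t2 = threading.Thread(target=second_task)
t1.start(); t1.join()         # first task runs and releases the lock
t2.start(); t2.join()         # only then can the second task proceed
```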
Step 203, in response to determining that the locking of the main cache cluster is successful, writing the data set to be cached into the main cache cluster.
In some embodiments, in response to determining that the locking of the primary cache cluster is successful, the execution agent may write the data set to be cached to the primary cache cluster. The data set to be cached may be a data set corresponding to the data write cache request.
In some optional implementations of some embodiments, after step 203, the step may further include:
in a first step, in response to determining that the locking of the primary cache cluster fails, the execution agent may determine locking failure type information. The locking failure type information may represent a locking failure type. In practice, the locking failure types may include: the primary cache cluster fails to load.
And secondly, in response to determining that the locking failure type information represents a loading failure of the main cache cluster, the execution main body may add the distributed lock again for a predetermined number of times until the adding of the distributed lock is successful. For example, the predetermined number may be "5".
Optionally, the steps may further include:
In response to determining that the locking failure type information represents a task-occupation fault and that the related occupying task has finished executing, the execution subject may perform a distributed lock addition operation for the master cache cluster. The related occupying task may be a task that is currently performing resource processing on the master cache cluster.
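Both optional lock-failure branches can be summarized in one retry loop: on a loading fault, simply retry up to the predetermined number of times; on a task-occupation fault, wait for the occupying task to finish and then retry. The callables and fault labels below are hypothetical stand-ins, not the patent's API:

```python
import time

def add_lock_with_retry(try_lock, classify_failure, max_retries=5, wait_s=0.01):
    """Hypothetical sketch of the lock-failure handling branches."""
    for _ in range(max_retries + 1):
        if try_lock():
            return True               # distributed lock added successfully
        if classify_failure() == "task_occupied":
            time.sleep(wait_s)        # wait for the occupying task to finish
        # on a loading fault, fall through and simply retry
    return False
```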
Step 204, in response to determining that the data set to be cached is successfully written into the master cache cluster and that at least one piece of slave cache cluster information exists in the cache configuration, writing the data set to be cached into at least one slave cache cluster corresponding to the at least one piece of slave cache cluster information.
In some embodiments, in response to determining that the data set to be cached is successfully written into the master cache cluster and that at least one piece of slave cache cluster information exists in the cache configuration, the execution subject may write the data set to be cached into the at least one slave cache cluster corresponding to the at least one piece of slave cache cluster information. The number of pieces of slave cache cluster information may be preset; for example, it may be 1. A slave cache cluster may be, but is not limited to, one of: a ZooKeeper cluster, a Nacos cluster.
Optionally, the cluster type of the cache cluster corresponding to the slave cache cluster information may be the same as or different from the cluster type of the cache cluster corresponding to the master cache cluster information.
In some optional implementations of some embodiments, after step 204, the above step further includes:
in response to determining that the data set to be cached is successfully written into the at least one slave cache cluster, the executing agent may delete the distributed lock for the master cache cluster.
Here, the distributed lock for the master cache cluster is deleted so that subsequent tasks can execute data processing tasks for the master cache cluster and the at least one slave cache cluster.
In some optional implementations of some embodiments, after step 204, the above step further includes:
in response to determining that the at least one slave cache cluster information does not exist in the cache configuration, the executing agent may delete the distributed lock for the master cache cluster.
Here, when no slave cache cluster information exists in the cache configuration, only the master cache cluster needs to write the data set to be cached, so the problem of inconsistent writes of the cached data set does not arise.
The above embodiments of the present disclosure have the following advantages: the data writing method of some embodiments of the disclosure can ensure the consistency of writing the data set to be cached into the master cache cluster and the at least one slave cache cluster. Specifically, writes of the data set to be cached become inconsistent for the following reason: when each cache-class service writes data separately, an individual cache-class service is often interfered with by related tasks, so that the data written by the various cache-class services diverges, which affects subsequent retrieval of the cached data set. Based on this, the data writing method of some embodiments of the present disclosure first determines, in response to receiving a data write cache request, the master cache cluster information in the cache configuration, for use in subsequently writing the data set to be cached and performing the distributed lock adding operation. Next, distributed lock adding processing is performed on the master cache cluster corresponding to the master cache cluster information, ensuring that the task of writing the data set to be cached into the cache cluster is not interfered with by other tasks. Then, in response to determining that locking of the master cache cluster is successful, the data set to be cached can be accurately written into the master cache cluster, so that the corresponding data can subsequently be retrieved from it. Finally, in response to determining that the data set to be cached is successfully written into the master cache cluster and that at least one piece of slave cache cluster information exists in the cache configuration, the data set to be cached is written into the at least one slave cache cluster corresponding to the at least one piece of slave cache cluster information.
Here, by performing distributed locking on the master cache cluster, it can be ensured that the writes of the data set to be cached into the master cache cluster and the at least one slave cache cluster are not interfered with by other tasks, which guarantees write consistency for the data set to be cached.
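The write flow of steps 201 to 204, together with the optional lock deletion, can be sketched as follows. This is a minimal illustration using in-memory stand-ins for the clusters and the distributed lock; all class and function names are assumptions, and a real deployment would use an actual distributed lock service rather than a local flag.

```python
# Minimal sketch of the data writing method, assuming in-memory stand-ins.
class Cluster:
    def __init__(self, name):
        self.name = name
        self.store = {}

    def write(self, data_set):
        self.store.update(data_set)
        return True  # a real cluster write could fail


class DistributedLock:
    """Local stand-in for a distributed lock (e.g. held in a coordination
    service); `acquire` here always succeeds, unlike a real lock."""
    def __init__(self):
        self.held = False

    def acquire(self):
        self.held = True
        return True

    def release(self):
        self.held = False


def write_with_lock(master, slaves, lock, data_set):
    # Step 202: perform distributed lock adding processing on the master.
    if not lock.acquire():
        return False
    # Step 203: on successful locking, write the data set into the master.
    if not master.write(data_set):
        return False
    # Step 204: if slave cache cluster information exists, write to each slave.
    for slave in slaves:
        slave.write(data_set)
    # Optional step: once all writes succeed, delete the distributed lock
    # so that subsequent tasks can run against the clusters.
    lock.release()
    return True


master = Cluster("master")
slaves = [Cluster("slave-1"), Cluster("slave-2")]
ok = write_with_lock(master, slaves, DistributedLock(), {"k": "v"})
print(ok, master.store["k"], slaves[0].store["k"])  # True v v
```

The key design point mirrored here is ordering: the slaves are written only after the master write succeeds, and the lock is released only after every write completes.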
With further reference to fig. 3, a flow 300 of further embodiments of a data writing method according to the present disclosure is shown. The data writing method comprises the following steps:
step 301, in response to receiving a data write cache request, determining primary cache cluster information in a cache configuration.
Step 302, performing distributed lock adding processing on the main cache cluster corresponding to the main cache cluster information.
Step 303, in response to determining that the locking of the primary cache cluster is successful, writing the data set to be cached into the primary cache cluster.
Step 304, in response to determining that the data set to be cached is successfully written into the master cache cluster and that at least one piece of slave cache cluster information exists in the cache configuration, writing the data set to be cached into at least one slave cache cluster corresponding to the at least one piece of slave cache cluster information.
In some embodiments, the specific implementation of steps 301 to 304 and the technical effect thereof may refer to steps 201 to 204 in the embodiment corresponding to fig. 2, which are not described herein again.
Step 305, in response to determining that the data set to be cached is not successfully written into the main cache cluster, generating write failure information.
In some embodiments, an executing agent (e.g., electronic device 101 shown in fig. 1) may generate write failure information in response to determining that the set of data to be cached was not successfully written to the primary cache cluster. The write failure information may be identification information indicating that the data set to be cached is not successfully written into the main cache cluster. For example, the write failure information may be "1", which may represent that the data set to be cached is not successfully written into the primary cache cluster.
Step 306, determining the write failure times corresponding to the primary cache cluster in the target time period.
In some embodiments, the execution agent may determine a number of write failures corresponding to the primary cache cluster within a target time period. The target period may be a preset period. The write-in failure times corresponding to the main cache cluster may be times of failure of the main cache cluster to write in the data set to be cached. For example, the number of write failures is 4.
Step 307, for the write failure information, adding a predetermined value to the write failure times to obtain the added write failure times.
In some embodiments, the execution body may add a predetermined value to the write failure number to obtain an added write failure number for the write failure information. For example, the number of write failures is "4". The predetermined value may be "1". The number of write failures after addition is "5".
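Steps 305 to 307 can be sketched as a time-windowed failure counter: each new write failure adds a predetermined value (one event here) and the count is restricted to the target time period. The window length and class name are illustrative assumptions.

```python
import time

class FailureCounter:
    """Counts write failures for the master cache cluster within a
    target time period (the window length is an assumed parameter)."""

    def __init__(self, window_seconds=60.0):
        self.window = window_seconds
        self.timestamps = []

    def record_failure(self, now=None):
        """Step 307: add a predetermined value (one failure event) and
        return the updated count within the target time period."""
        now = time.monotonic() if now is None else now
        self.timestamps.append(now)
        # Step 306: keep only failures inside the target time period.
        self.timestamps = [t for t in self.timestamps if now - t <= self.window]
        return len(self.timestamps)

counter = FailureCounter(window_seconds=60.0)
for _ in range(4):
    counter.record_failure(now=0.0)           # four earlier failures
print(counter.record_failure(now=1.0))        # 5: four prior plus one new
```

Failures older than the window drop out automatically, so the count compared against the predetermined threshold always reflects recent behavior only.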
Step 308, in response to determining that the added write-in failure times are greater than or equal to a predetermined threshold and the at least one slave cache cluster information exists in the cache configuration, receiving first configuration modification information for the cache configuration.
In some embodiments, in response to determining that the number of write failures after the addition is greater than or equal to a predetermined threshold and that the at least one slave cache cluster information exists in the cache configuration, the execution body may receive first configuration modification information for the cache configuration. The first configuration modification information is information for determining any slave cache cluster information in at least one slave cache cluster information in the cache configuration as master cache cluster information. For example, the predetermined threshold is 3.
In some optional implementations of some embodiments, after step 308, the method further includes:
in response to determining that the added write failure times are smaller than the predetermined threshold, the execution body may send, to an upstream server, cache failure information indicating that writing of the data to be cached failed.
Step 309, determining modified main cache cluster information according to the first configuration modification information.
In some embodiments, the execution body may determine the modified master cache cluster information according to the first configuration modification information. The modified master cache cluster information is the piece of cluster information, among the at least one piece of slave cache cluster information, that is determined as the master cache cluster information.
Step 310, writing the data set to be cached into the cache cluster corresponding to the modified main cache cluster information.
In some embodiments, the executing entity may write the data set to be cached into the cache cluster corresponding to the modified main cache cluster information.
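Steps 308 to 310 amount to a failover: when the updated failure count reaches the predetermined threshold and slave cluster information exists, the first configuration modification promotes one slave entry to master and the write is retried there. The function names, config keys, and threshold value below are assumptions for illustration.

```python
PREDETERMINED_THRESHOLD = 3  # assumed value; the disclosure's example is 3

def maybe_fail_over(config, failure_count, data_set, write_fn):
    """Apply the first configuration modification and retry the write.

    `write_fn(cluster_info, data_set)` is an assumed callable that writes
    the data set to the cluster described by `cluster_info`.
    """
    slaves = config.get("slave_clusters", [])
    if failure_count >= PREDETERMINED_THRESHOLD and slaves:
        # Step 308/309: any piece of slave cache cluster information may be
        # determined as the master; here we simply take the first one.
        new_master = slaves.pop(0)
        old_master = config["master_cluster"]
        config["master_cluster"] = new_master
        config["replaced_master"] = old_master  # remembered for later recovery
        # Step 310: write the data set into the newly designated master.
        return write_fn(new_master, data_set)
    # Below threshold (or no slaves): the failure is reported upstream instead.
    return False

config = {
    "master_cluster": {"name": "m0"},
    "slave_clusters": [{"name": "s1"}, {"name": "s2"}],
}
ok = maybe_fail_over(config, 5, {"k": "v"}, lambda cluster, d: True)
print(ok, config["master_cluster"]["name"])  # True s1
```

Keeping a record of the replaced master makes the later recovery step (demoting the repaired cluster to a slave) straightforward.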
In some optional implementations of some embodiments, after step 310, the method further includes:
in the first step, in response to determining that fault repair of the target master cache cluster is complete, the execution body may receive second configuration modification information. The second configuration modification information represents that the target master cache cluster is determined as a slave cache cluster, the target master cache cluster being the cache cluster corresponding to the replaced master cache cluster information in the cache configuration.
It should be noted that the number of master cache clusters used for data caching may be one. The target master cache cluster is the cluster that originally served as the master cache cluster but developed a fault and was replaced via the first configuration modification information (that is, any one of the at least one slave cache cluster is selected as the new master cache cluster, while the originally faulty master cache cluster undergoes fault recovery processing). Therefore, after the fault of the target master cache cluster has been repaired, it can serve as a slave cache cluster and continue to participate in synchronous data caching.
In the second step, the execution body may determine, according to the second configuration modification information, the target master cache cluster as a cluster corresponding to slave cache cluster information.
Third, the execution subject may write the data set to be cached into the target primary cache cluster.
Writing the data set to be cached into the target master cache cluster ensures the consistency of the data cache.
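The three recovery steps above can be sketched as follows: the repaired target master cache cluster is demoted to a slave via the second configuration modification, and the data set to be cached is written to it so its contents catch up. The config keys (`replaced_master`, `slave_clusters`) and the `write_fn` callable are illustrative assumptions.

```python
def apply_second_modification(config, data_set, write_fn):
    """Demote the repaired target master cluster to a slave and sync it.

    `write_fn(cluster_info, data_set)` is an assumed callable that writes
    the data set to the cluster described by `cluster_info`.
    """
    # The target master cache cluster is the one replaced during failover.
    target = config.pop("replaced_master", None)
    if target is None:
        return False
    # Second step: determine the repaired cluster as a slave cache cluster.
    config.setdefault("slave_clusters", []).append(target)
    # Third step: write the data set to it so the caches stay consistent.
    return write_fn(target, data_set)

config = {
    "master_cluster": {"name": "s1"},
    "slave_clusters": [{"name": "s2"}],
    "replaced_master": {"name": "m0"},
}
ok = apply_second_modification(config, {"k": "v"}, lambda cluster, d: True)
print(ok, [c["name"] for c in config["slave_clusters"]])  # True ['s2', 'm0']
```

After this step the repaired cluster receives every subsequent write along with the other slaves, so no separate resynchronization path is needed.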
As can be seen from fig. 3, compared with the description of some embodiments corresponding to fig. 2, the process 300 of the data writing method in some embodiments corresponding to fig. 3 first generates write failure information when it is determined that the data set to be cached was not successfully written into the primary cache cluster. Then, whether the main cache cluster information needs to be adjusted subsequently is determined by counting the write failures corresponding to the primary cache cluster within the target time period. Further, when the added write failure times are greater than or equal to the predetermined threshold and the at least one piece of slave cache cluster information exists in the cache configuration, the master cache cluster information is adjusted through the first configuration modification information. Finally, the data set to be cached is written into the cache cluster corresponding to the modified main cache cluster information. This cache cluster disaster recovery approach allows the data set to be cached to be written efficiently and safely.
With further reference to fig. 4, as an implementation of the methods illustrated in the above figures, the present disclosure provides some embodiments of a data writing apparatus, which correspond to those of the method embodiments illustrated in fig. 2, and which may be applied in particular to various electronic devices.
As shown in fig. 4, a data writing apparatus 400 includes: a determination unit 401, an addition unit 402, a first writing unit 403, and a second writing unit 404. The determining unit 401 is configured to determine, in response to receiving a data write cache request, primary cache cluster information in a cache configuration; an adding unit 402, configured to perform distributed lock addition processing on the main cache cluster corresponding to the main cache cluster information; a first writing unit 403, configured to write a data set to be cached into the primary cache cluster in response to determining that locking of the primary cache cluster is successful; a second writing unit 404, configured to, in response to determining that the data set to be cached is successfully written into the primary cache cluster and that at least one piece of slave cache cluster information exists in the cache configuration, write the data set to be cached into at least one slave cache cluster corresponding to the at least one piece of slave cache cluster information.
In some optional implementations of some embodiments, the apparatus 400 further includes: a deletion unit (not shown in the figure). Wherein the deletion unit may be configured to: and deleting the distributed lock aiming at the main cache cluster in response to the fact that the data set to be cached is successfully written into the at least one slave cache cluster.
In some optional implementations of some embodiments, the apparatus 400 further includes: an information writing unit, a number determining unit, an adding unit, a first receiving unit, an information determining unit, and a third writing unit (not shown in the figure). Wherein the information writing unit may be configured to: and generating write failure information in response to determining that the data set to be cached is not successfully written into the main cache cluster. The number-of-times determination unit may be configured to: and determining the write failure times corresponding to the main cache cluster in the target time period. The adding unit may be configured to: and adding a preset numerical value to the write-in failure times according to the write-in failure information to obtain the added write-in failure times. The first receiving unit may be configured to: and in response to that it is determined that the number of write failures after the addition is greater than or equal to a predetermined threshold and the at least one piece of slave cache cluster information exists in the cache configuration, receiving first configuration modification information for the cache configuration, where the first configuration modification information is information for determining any piece of slave cache cluster information in the at least one piece of slave cache cluster information in the cache configuration as master cache cluster information. The information determination unit may be configured to: and determining modified main cache cluster information according to the first configuration modification information. The third writing unit may be configured to: and writing the data set to be cached into the cache cluster corresponding to the modified main cache cluster information.
In some optional implementations of some embodiments, the apparatus 400 further includes: a transmitting unit (not shown). Wherein the transmitting unit may be configured to: and sending cache failure information representing the write failure of the data to be cached to an upstream server in response to the fact that the added write failure times are smaller than the preset threshold value.
In some optional implementations of some embodiments, the apparatus 400 further includes: a second receiving unit, a cluster information determining unit, and a fourth writing unit (not shown in the figure). Wherein the second receiving unit may be configured to: and receiving second configuration modification information in response to the fact that the fault repair of the target main cache cluster is completed, wherein the second configuration modification information represents that the target main cache cluster is determined to be a slave cache cluster, and the target main cache cluster is a cache cluster corresponding to the replaced main cache cluster information in the cache configuration. The cluster information determination unit may be configured to: and determining the target main cache cluster as a cluster corresponding to the slave cache cluster information according to the second configuration modification information. The fourth writing unit may be configured to: and writing the data set to be cached into the target main cache cluster.
In some optional implementations of some embodiments, the apparatus 400 further includes: a deletion unit (not shown in the figure). Wherein the deletion unit may be configured to: in response to determining that the at least one slave cache cluster information does not exist in the cache configuration, deleting the distributed lock for the master cache cluster.
In some optional implementations of some embodiments, the apparatus 400 further includes: a type information determination unit and a re-addition unit (not shown in the figure). Wherein the type information determination unit may be configured to: and determining locking failure type information in response to determining that the locking of the main cache cluster fails. The re-adding unit may be configured to: and in response to determining that the locking failure type information represents a loading failure of the main cache cluster, adding the distributed lock again for a predetermined number of times.
In some optional implementations of some embodiments, the apparatus 400 further includes: an execution unit (not shown). Wherein the execution unit may be configured to: in response to determining that the locking failure type information represents a task-occupation load fault and that the related occupying task has finished executing, perform a distributed lock adding operation for the main cache cluster.
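The behavior of the type information determination unit, re-addition unit, and execution unit just described can be sketched as a small retry policy: classify why locking failed and either re-add the lock a predetermined number of times (load fault) or wait for the occupying task to finish and lock again (task-occupation fault). The failure-type names, retry count, and callables are assumptions for illustration.

```python
# Assumed failure-type labels; the disclosure only names the two cases.
LOAD_FAILURE = "load_failure"
TASK_OCCUPIED = "task_occupied"

def add_lock_with_retry(try_lock, classify_failure, wait_for_task,
                        predetermined_times=3):
    """Attempt to add the distributed lock for the master cache cluster.

    `try_lock`, `classify_failure`, and `wait_for_task` are assumed
    callables standing in for the real lock service and task monitor.
    """
    if try_lock():
        return True
    failure_type = classify_failure()
    if failure_type == LOAD_FAILURE:
        # Re-add the distributed lock a predetermined number of times.
        for _ in range(predetermined_times):
            if try_lock():
                return True
        return False
    if failure_type == TASK_OCCUPIED:
        # Wait until the occupying task finishes, then add the lock again.
        wait_for_task()
        return try_lock()
    return False

attempts = {"n": 0}
def flaky_lock():
    attempts["n"] += 1
    return attempts["n"] >= 3  # fails twice, then succeeds

print(add_lock_with_retry(flaky_lock, lambda: LOAD_FAILURE, lambda: None))
# True
```

Distinguishing the two fault types avoids pointless retries while another task legitimately holds the resource.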
It will be understood that the elements described in the apparatus 400 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 400 and the units included therein, and will not be described herein again.
Referring now to FIG. 5, a block diagram of an electronic device (e.g., electronic device 101 of FIG. 1) 500 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 5 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program, when executed by the processing device 501, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determining main cache cluster information in cache configuration in response to receiving a data write cache request; performing distributed lock adding processing on the main cache cluster corresponding to the main cache cluster information; writing a data set to be cached into the main cache cluster in response to determining that the locking of the main cache cluster is successful; and in response to the fact that the data set to be cached is successfully written into the main cache cluster and at least one piece of slave cache cluster information exists in the cache configuration, writing the data set to be cached into at least one slave cache cluster corresponding to the at least one piece of slave cache cluster information.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor comprising: the device comprises a determining unit, an adding unit, a first writing unit and a second writing unit. Where the names of these units do not constitute a limitation on the unit itself in some cases, for example, the determining unit may also be described as a "unit that determines primary cache cluster information in a cache configuration in response to receiving a data write cache request".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
Some embodiments of the present disclosure also provide a computer program product comprising a computer program which, when executed by a processor, implements any of the data writing methods described above.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other technical solutions formed by any combination of the above-mentioned features or their equivalents without departing from the inventive concept, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (12)

1. A data writing method, comprising:
determining main cache cluster information in cache configuration in response to receiving a data write cache request;
performing distributed lock adding processing on a main cache cluster corresponding to the main cache cluster information;
writing a data set to be cached into the main cache cluster in response to determining that the locking of the main cache cluster is successful;
and in response to determining that the data set to be cached is successfully written into the master cache cluster and that at least one piece of slave cache cluster information exists in the cache configuration, writing the data set to be cached into at least one slave cache cluster corresponding to the at least one piece of slave cache cluster information.
2. The method of claim 1, wherein the method further comprises:
deleting the distributed lock for the master cache cluster in response to determining that the set of data to be cached is successfully written to the at least one slave cache cluster.
3. The method of claim 1, wherein the method further comprises:
generating write failure information in response to determining that the data set to be cached is not successfully written into the main cache cluster;
determining the write-in failure times corresponding to the main cache cluster in a target time period;
for the write failure information, adding a predetermined value to the write-in failure times to obtain the added write-in failure times;
in response to determining that the added write-in failure times are greater than or equal to a predetermined threshold and the at least one piece of slave cache cluster information exists in the cache configuration, receiving first configuration modification information for the cache configuration, wherein the first configuration modification information is information for determining any piece of slave cache cluster information in the at least one piece of slave cache cluster information in the cache configuration as master cache cluster information;
determining modified main cache cluster information aiming at the first configuration modification information;
and writing the data set to be cached into the cache cluster corresponding to the modified main cache cluster information.
4. The method of claim 3, wherein the method further comprises:
in response to determining that the added write-in failure times are smaller than the predetermined threshold, sending cache failure information representing a write failure of the data to be cached to an upstream server.
5. The method of claim 3, wherein the method further comprises:
receiving second configuration modification information in response to determining that the fault repair of the target main cache cluster is completed, wherein the second configuration modification information represents that the target main cache cluster is determined to be a slave cache cluster, and the target main cache cluster is a cache cluster corresponding to the replaced main cache cluster information in the cache configuration;
determining the target main cache cluster as a cluster corresponding to the slave cache cluster information according to the second configuration modification information;
and writing the data set to be cached into the target main cache cluster.
6. The method of claim 1, wherein the method further comprises:
in response to determining that the at least one slave cache cluster information does not exist in the cache configuration, deleting the distributed lock for the master cache cluster.
7. The method of claim 1, wherein the method further comprises:
in response to determining that locking of the primary cache cluster fails, determining locking failure type information;
re-adding the distributed lock a predetermined number of times in response to determining that the locking failure type information characterizes a primary cache cluster load failure.
8. The method of claim 7, wherein the method further comprises:
in response to determining that the locking failure type information represents a task-occupation load fault and that the related occupying task has finished executing, executing a distributed lock adding operation for the main cache cluster.
9. A data writing apparatus comprising:
a determining unit configured to determine master cache cluster information in a cache configuration in response to receiving a data write cache request;
an adding unit configured to perform distributed lock adding processing on the master cache cluster corresponding to the master cache cluster information;
a first writing unit configured to write a data set to be cached into the master cache cluster in response to determining that locking of the master cache cluster succeeds; and
a second writing unit configured to write, in response to determining that the data set to be cached is successfully written into the master cache cluster and that at least one piece of slave cache cluster information exists in the cache configuration, the data set to be cached into at least one slave cache cluster corresponding to the at least one piece of slave cache cluster information.
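The write path formed by the units of claim 9 (lock the master, write the data set, replicate to the slaves) can be sketched end to end. This is not the claimed apparatus itself: a local `threading.Lock` stands in for the distributed lock, and all class and key names are illustrative.

```python
# Self-contained sketch of the claim-9 write path: lock the master cluster,
# write the data set to be cached, then replicate to each slave cluster.
from threading import Lock


class CacheCluster:
    def __init__(self, name: str):
        self.name = name
        self.store: dict = {}


def write_through(cache_config: dict, data_set: dict) -> bool:
    master: CacheCluster = cache_config["master"]
    lock: Lock = cache_config["lock"]  # stand-in for a distributed lock
    if not lock.acquire(blocking=False):  # locking failed
        return False
    try:
        master.store.update(data_set)                 # first writing unit
        for slave in cache_config.get("slaves", []):  # second writing unit
            slave.store.update(data_set)
        return True
    finally:
        lock.release()


config = {"master": CacheCluster("m"), "slaves": [CacheCluster("s1")], "lock": Lock()}
ok = write_through(config, {"k": "v"})
```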
10. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8.
11. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-8.
12. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-8.
CN202211203572.0A 2022-09-29 2022-09-29 Data writing method, device, equipment, computer readable medium and program product Pending CN115525684A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211203572.0A CN115525684A (en) 2022-09-29 2022-09-29 Data writing method, device, equipment, computer readable medium and program product

Publications (1)

Publication Number Publication Date
CN115525684A true CN115525684A (en) 2022-12-27

Family

ID=84699011

Country Status (1)

Country Link
CN (1) CN115525684A (en)

Similar Documents

Publication Publication Date Title
EP4033374A1 (en) Method and device for synchronizing node data
CN108897628B (en) Method and device for realizing distributed lock and electronic equipment
CN110851139B (en) Method and device for checking codes and electronic equipment
CN112416632B (en) Event communication method and device, electronic equipment and computer readable medium
CN112422551B (en) SSL certificate updating method and device, electronic equipment and storage medium
CN111338834B (en) Data storage method and device
CN111858381A (en) Application program fault tolerance capability test method, electronic device and medium
CN113760503A (en) Task migration method and device, electronic equipment and computer readable medium
CN111460432B (en) On-line document authority control method, device, equipment and computer readable medium
KR20230017329A (en) Method of responding to operation, apparatus of responding to operation, electronic device, storage medium, and computer program
CN116244004A (en) Resource management method, device, medium and electronic equipment
CN115525684A (en) Data writing method, device, equipment, computer readable medium and program product
CN112559258B (en) Disaster recovery processing method, device, system, equipment and medium
CN114785770A (en) Mirror layer file sending method and device, electronic equipment and computer readable medium
CN113760927A (en) Data processing method and device, electronic equipment and computer readable medium
CN113778850A (en) Data processing method and device, electronic equipment and computer readable medium
CN112163176A (en) Data storage method and device, electronic equipment and computer readable medium
CN112507676A (en) Energy report generation method and device, electronic equipment and computer readable medium
CN114116746B (en) Multisystem data storage method, multisystem data storage device, medium and electronic equipment
CN110262756B (en) Method and device for caching data
CN118035594B (en) Method, apparatus, electronic device and computer readable medium for accessing production document
CN115098453B (en) Information storage method, apparatus, electronic device, and computer readable medium
CN114077639B (en) Data writing method, device, electronic equipment and storage medium
CN112506713B (en) Multistage disaster recovery system and method
CN116700956B (en) Request processing method, apparatus, electronic device and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination