CN112328637A - High-speed distributed data caching method and device, computer equipment and storage medium - Google Patents

High-speed distributed data caching method and device, computer equipment and storage medium

Info

Publication number
CN112328637A
CN112328637A (application CN202011196303.7A)
Authority
CN
China
Prior art keywords
ignite
data
message
node
lock
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011196303.7A
Other languages
Chinese (zh)
Other versions
CN112328637B (en)
Inventor
Zhou Yi (周毅)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Life Insurance Company of China Ltd
Priority to CN202011196303.7A
Publication of CN112328637A
Application granted
Publication of CN112328637B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval of structured data, e.g. relational data
    • G06F16/24: Querying
    • G06F16/242: Query formulation
    • G06F16/2433: Query languages
    • G06F16/23: Updating
    • G06F16/2365: Ensuring data consistency and integrity
    • G06F16/245: Query processing
    • G06F16/2455: Query execution
    • G06F16/24552: Database cache management
    • G06F16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/466: Transaction processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a high-speed distributed data caching method and device, computer equipment and a storage medium, relating to distributed storage technology for cloud storage. The method includes: if a preparation message sent by an application instance embedded with the Ignite program JAR package is detected, acquiring the local lock type; if the lock type is an optimistic lock, sending the preparation message to the other Ignite nodes in the Ignite cluster; sending a confirmation message corresponding to the preparation message to the application instance embedded with the Ignite program JAR package; if cache data sent by the application instance embedded with the Ignite program JAR package is detected, sending a commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster; and sending a transaction confirmation message corresponding to the commit message to the application instance embedded with the Ignite program JAR package. The method synchronously updates the data to be stored to each node with a two-phase commit algorithm and supports a distributed lock mechanism, using pessimistic or optimistic locks when accessing data, thereby achieving consistency of the distributed storage of data.

Description

High-speed distributed data caching method and device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of distributed storage of cloud storage, in particular to a high-speed distributed data caching method and device, computer equipment and a storage medium.
Background
At present, the common data caching solution is Redis, a key/value (k/v) data store. Queries in this mode are severely limited: range queries cannot be performed. Moreover, distributed data consistency cannot be achieved when complex data is stored in k/v form, so errors easily occur during data caching.
Disclosure of Invention
The embodiment of the invention provides a high-speed distributed data caching method and device, computer equipment and a storage medium, aiming to solve the problem in the prior art that a database using the k/v data storage mode cannot achieve distributed data consistency when storing complex data, so that errors easily occur during data caching.
In a first aspect, an embodiment of the present invention provides a method for caching distributed data, including:
if a preparation message sent by an application instance embedded with the Ignite program JAR package is detected, acquiring a local lock type; wherein the lock types include pessimistic locks and optimistic locks;
if the lock type is an optimistic lock, sending the preparation message to the other Ignite nodes in the Ignite cluster;
sending a confirmation message corresponding to the preparation message to the application instance embedded with the Ignite program JAR package;
if cache data sent by the application instance embedded with the Ignite program JAR package is detected, sending a commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster;
sending a transaction confirmation message corresponding to the commit message to the application instance embedded with the Ignite program JAR package; and
if an SQL query statement is detected, querying locally according to the SQL query statement to obtain the corresponding target data.
In a second aspect, an embodiment of the present invention provides a high-speed distributed data caching apparatus, including:
a preparation message detection unit, configured to acquire a local lock type if a preparation message sent by an application instance embedded with the Ignite program JAR package is detected; wherein the lock types include pessimistic locks and optimistic locks;
a preparation message sending unit, configured to send the preparation message to the other Ignite nodes in the Ignite cluster if the lock type is an optimistic lock;
a confirmation message sending unit, configured to send a confirmation message corresponding to the preparation message to the application instance embedded with the Ignite program JAR package;
a commit message sending unit, configured to send a commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster if the cache data sent by the application instance embedded with the Ignite program JAR package is detected;
a transaction confirmation message sending unit, configured to send a transaction confirmation message corresponding to the commit message to the application instance embedded with the Ignite program JAR package; and
a data retrieval unit, configured to query locally according to an SQL query statement to obtain the corresponding target data if the SQL query statement is detected.
In a third aspect, an embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method for caching distributed data according to the first aspect when executing the computer program.
In a fourth aspect, the present invention further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, causes the processor to execute the method for caching distributed data according to the first aspect.
The embodiment of the invention provides a high-speed distributed data caching method and device, computer equipment and a storage medium. If a preparation message sent by an application instance embedded with the Ignite program JAR package is detected, a local lock type is acquired, wherein the lock types include pessimistic locks and optimistic locks; if the lock type is an optimistic lock, the preparation message is sent to the other Ignite nodes in the Ignite cluster; a confirmation message corresponding to the preparation message is sent to the application instance embedded with the Ignite program JAR package; if cache data sent by the application instance embedded with the Ignite program JAR package is detected, a commit message corresponding to the cache data is sent to the other Ignite nodes in the Ignite cluster; a transaction confirmation message corresponding to the commit message is sent to the application instance embedded with the Ignite program JAR package; and if an SQL query statement is detected, a local query is performed according to the SQL query statement to obtain the corresponding target data. The method synchronously updates the stored data to each node with a two-phase commit algorithm and supports a distributed lock mechanism, using pessimistic or optimistic locks when accessing data, thereby achieving consistency of the distributed storage of data.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic view of an application scenario of a high-speed distributed data caching method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for caching distributed data according to an embodiment of the present invention;
FIG. 3 is a schematic block diagram of a high-speed distributed data caching apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic block diagram of a computer device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1 and fig. 2, fig. 1 is a schematic view illustrating an application scenario of a high-speed distributed data caching method according to an embodiment of the present invention; fig. 2 is a schematic flow chart of a distributed data caching method according to an embodiment of the present invention, where the distributed data caching method is applied to a server, and the method is executed by application software installed in the server.
As shown in fig. 2, the method includes steps S110 to S160.
S110, if a preparation message sent by an application instance embedded with the Ignite program JAR package is detected, acquiring a local lock type; wherein the lock types include pessimistic locks and optimistic locks.
In this embodiment, in order to understand the technical solution of the present application more clearly, the terminals involved are described in detail below. The technical solution is described from the perspective of one master node among the nodes included in an Ignite cluster.
First, there is the client, which may also be understood as an Ignite node embedded with the Ignite program JAR package (it is likewise one of the Ignite nodes in the Ignite cluster). A cache space is configured locally at the client, and the Ignite node embedded with the Ignite program JAR package can start a process to synchronize the cache data in the cache space to the other Ignite nodes.
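For illustration, the following minimal sketch shows how such an application instance can start an embedded Ignite node through the standard Apache Ignite Java API (available once the Ignite JAR package is on the classpath); the cache name "myCache" and the sample entry are assumptions made for the example, not part of the patent.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class EmbeddedClient {
    public static void main(String[] args) {
        // Start an embedded Ignite node inside the application process;
        // it joins the cluster formed by the other Ignite nodes.
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true); // client node: holds no primary partitions itself

        Ignite ignite = Ignition.start(cfg);

        // Obtain (or create) the cache space; writes to it are propagated
        // to the server nodes that own the corresponding partitions.
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
        cache.put(1, "cached value");
    }
}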
Second, there is the Ignite cluster, which includes several Ignite nodes. Since processing a given computation task in the Ignite cluster may involve multiple Ignite nodes, the data of each Ignite node in the Ignite cluster must be kept consistent. Once the data in a particular Ignite node is updated, the updated data needs to be synchronized to the other Ignite nodes in the Ignite cluster in time. A two-phase commit protocol is used when synchronizing data within the Ignite cluster.
The two-phase commit protocol includes two phases: the first is the prepare phase and the second is the commit phase. Step S110 is the first step of the prepare phase: the Ignite node embedded with the Ignite program JAR package (i.e., the application instance) first sends a preparation message to the master node in the Ignite cluster. When the master node in the Ignite cluster receives the preparation message, it needs to communicate further with the other nodes in the Ignite cluster; at this point the master node's local lock type must first be acquired to determine whether the master node locks the data when the data synchronization transaction starts, so as to prevent the data from being modified.
If the lock type is an optimistic lock, the lock local to the Ignite node is acquired, and the data is locked, only just before the transaction finishes committing. If the lock type is a pessimistic lock, the lock local to the Ignite node is acquired, and the data is locked, at the beginning of the transaction. In other words, in the optimistic concurrency control model the master node acquires the lock in the prepare phase of the two-phase commit protocol, so the optimistic lock can be acquired as late as just before the transaction commits; in the pessimistic concurrency control model, all data to be read, written, or modified must be locked at the beginning of the transaction.
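These two lock types correspond to the two transaction concurrency modes exposed by Apache Ignite's public API. The sketch below, which assumes a TRANSACTIONAL cache and illustrative keys and values, shows how each mode is requested; it is an example of the general mechanism, not the patent's internal implementation.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class LockModes {
    // OPTIMISTIC: locks are acquired lazily, during the prepare phase of
    // the two-phase commit, just before the transaction finishes committing.
    static void updateOptimistic(Ignite ignite, IgniteCache<Integer, String> cache) {
        try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE)) {
            cache.put(1, "value");
            tx.commit(); // a conflicting concurrent update causes a rollback here
        }
    }

    // PESSIMISTIC: locks are acquired up front, on first access,
    // and are held until the transaction completes.
    static void updatePessimistic(Ignite ignite, IgniteCache<Integer, String> cache) {
        try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
            cache.put(1, "value");
            tx.commit();
        }
    }
}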
In an embodiment, step S110 further includes:
and acquiring a locally stored distributed hash table, and acquiring locally stored data of each Ignite node in the Ignite cluster according to the distributed hash table.
In this embodiment, a distributed hash table is used in the Ignite cluster to determine how data is distributed across the cluster, and the distributed hash table is stored in the master node. That is, if data needs to be synchronized from the cache space of the master node to the corresponding target Ignite nodes, the locally stored data of each Ignite node in the Ignite cluster must be acquired according to the distributed hash table, so that the target Ignite nodes for the data to be synchronized can be obtained by querying the distributed hash table.
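As an illustration, Apache Ignite exposes this distributed-hash-table mapping through its public affinity API, so the node owning a given key can be looked up directly; the cache name "myCache" is an assumption for the example.

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

public class KeyOwnerLookup {
    // Maps a cache key to the cluster node that owns its partition,
    // i.e., queries Ignite's view of the distributed hash table.
    static ClusterNode primaryNodeFor(Ignite ignite, Object key) {
        Affinity<Object> aff = ignite.affinity("myCache");
        return aff.mapKeyToNode(key);
    }
}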
And S120, if the lock type is an optimistic lock, sending the preparation message to other Ignite nodes in the Ignite cluster.
In this embodiment, after receiving the preparation message and determining that the local lock type is an optimistic lock, the master node first sends the preparation message to the other Ignite nodes in the Ignite cluster. By having the master node forward the preparation message, it can be effectively detected whether the other Ignite nodes in the Ignite cluster are ready to synchronize data in the prepare phase.
And S130, sending a confirmation message corresponding to the preparation message to the application instance embedded with the Ignite program JAR package.
In this embodiment, to indicate that they have finished acquiring the lock and are ready to synchronize data, the other Ignite nodes in the Ignite cluster send a confirmation message to the application instance embedded with the Ignite program JAR package. When execution reaches step S130, the prepare phase of the two-phase commit protocol is complete; the subsequent steps execute the commit phase.
And S140, if the cache data sent by the application instance embedded with the Ignite program JAR package is detected, sending a commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster.
In this embodiment, in the commit phase, the master node detects whether cache data sent by the application instance embedded with the Ignite program JAR package has been received; if so, the master node first sends a commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster.
In one embodiment, step S140 includes:
acquiring an internal management transaction request of the transaction coordinator;
writing the key value of the cache data locally;
writing the key value into a local transaction map through the transaction coordinator;
receiving a commit message sent by the application instance embedded with the Ignite program JAR package;
acquiring an optimistic lock locally, and write-locking the Ignite node according to the optimistic lock;
sending a locking confirmation message to the application instance embedded with the Ignite program JAR package;
and sending the commit message to the other Ignite nodes in the Ignite cluster.
In this embodiment, when the master node's local lock is an optimistic lock, an analogy can be drawn with Computer Aided Design (CAD): a designer works on one part of the overall design, usually checking the design out of the central repository into a local workspace, making partial updates, and then checking the result back into the central repository. Since each designer is responsible for only part of the overall design, there is little possibility of conflict with updates to the other parts.
In contrast to the pessimistic concurrency model, the optimistic concurrency model delays lock acquisition, which makes it better suited to applications with less resource contention, such as the CAD example described above.
By executing the above steps in the optimistic lock concurrency mode, the cache data is guaranteed to be written to the master node consistently.
The transaction coordinator is deployed in the master node and can detect internal management transaction requests. Once an internal management transaction request is detected, the key value of the cache data is first written locally, and the key value is then written into the master node's local transaction map. After the local transaction mapping is completed, the master node can be locked according to the optimistic lock; once locking is complete, the master node sends a locking confirmation message to the application instance embedded with the Ignite program JAR package. In this way the cache data is synchronized to the master node first and then synchronized to the other Ignite nodes in the Ignite cluster.
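The following sketch restates the commit-phase sequence above in Java for illustration only; every type and method name in it (Channel, onCommitPhase, and so on) is an assumption made for exposition and is not Apache Ignite's actual internal API.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class CommitPhaseSketch {
    interface Channel { void send(String msg); String receive(); }

    private final Map<String, byte[]> localStore = new ConcurrentHashMap<>();
    private final Map<String, byte[]> txMap = new ConcurrentHashMap<>(); // local transaction map
    private final Object nodeWriteLock = new Object();                   // stand-in for the node write lock

    void onCommitPhase(Channel coordinator, Channel app, Channel cluster,
                       String key, byte[] value) {
        coordinator.receive();            // 1. internal management transaction request
        localStore.put(key, value);       // 2. write the key value of the cache data locally
        txMap.put(key, value);            // 3. coordinator writes it into the local transaction map
        String commit = app.receive();    // 4. commit message from the application instance
        synchronized (nodeWriteLock) {    // 5. acquire the optimistic lock; write-lock this node
            app.send("LOCKED");           // 6. locking confirmation to the application instance
            cluster.send(commit);         // 7. forward the commit to the other Ignite nodes
        }
    }
}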
S150, sending the transaction confirmation message corresponding to the commit message to the application instance embedded with the Ignite program JAR package.
In this embodiment, after the commit message corresponding to the cached data has been sent and the synchronization of the cached data on the master node is complete, the master node sends a transaction confirmation message corresponding to the commit message to the application instance embedded with the Ignite program JAR package. Similarly, after the commit message corresponding to the cached data is sent to the other Ignite nodes in the Ignite cluster, those nodes synchronize the cached data according to the commit message, and once their caching is complete they likewise each send a transaction confirmation message corresponding to the commit message to the application instance embedded with the Ignite program JAR package.
In an embodiment, step S150 is followed by:
acquiring the unique node number corresponding to each Ignite node according to the received transaction confirmation messages to form a current valid node list;
and acquiring the Ignite cluster total list, and taking the complement of the current valid node list with the Ignite cluster total list as the full set to obtain a current failed node list.
In this embodiment, after the Ignite nodes other than the master node in the Ignite cluster receive the commit message forwarded by the master node, each needs to feed a transaction confirmation message back to the master node to indicate whether the current Ignite node is a valid node (i.e., a non-failed node). When some of the Ignite nodes have sent the transaction confirmation message and the others have not, the unique node numbers of the Ignite nodes that did send it form the current valid node list. Taking the known Ignite cluster total list as the full set, the complement of the current valid node list is computed to obtain the current failed node list.
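The complement computation itself is a simple set difference. A minimal sketch follows, assuming node identifiers are represented as UUIDs (as Ignite's ClusterNode.id() returns):

import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

class FailedNodeList {
    // Taking the total cluster list as the full set, the complement of the
    // current valid node list yields the current failed node list.
    static Set<UUID> failedNodes(Set<UUID> clusterTotalList, Set<UUID> validNodeList) {
        Set<UUID> failed = new HashSet<>(clusterTotalList);
        failed.removeAll(validNodeList);
        return failed;
    }
}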
And S160, if the SQL query statement is detected, locally querying and acquiring corresponding target data according to the SQL query statement.
In this embodiment, an SQL engine is embedded in the Ignite cluster; it supports the SQL99 standard, supports indexes, and also supports the KV mode (KV stands for Key-Value). When any Ignite node uses the data, local memory is queried first to find the corresponding data, and this query can be implemented with SQL query statements.
In one embodiment, the step S160 includes:
judging whether query data corresponding to the SQL query statement exists locally in the master node;
if the query data corresponding to the SQL query statement exists locally in the master node, taking the query data as the target data;
and if the query data corresponding to the SQL query statement does not exist locally in the master node, acquiring the corresponding current target Ignite node according to the distributed hash table, and receiving the target data corresponding to the SQL query statement sent by the current target Ignite node.
In this embodiment, since the distributed hash table is stored in the master node, when the corresponding target data is not found on the master node according to the SQL query statement, the master node can determine the Ignite node on which the target data is stored by querying the distributed hash table, and that node sends the target data to the master node to complete the query. When the corresponding target data is found on the master node according to the SQL query statement, the master node directly takes the queried data as the target data.
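For illustration, this local-first lookup can be expressed with Ignite's public SQL API, where setLocal(true) restricts a query to the local node and an ordinary distributed query is routed to the owning nodes; the SQL text and the fallback strategy here are assumptions for the example, not the patent's exact mechanism.

import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

class SqlLookup {
    // Try a node-local query first; if it yields nothing, fall back to a
    // distributed query that is routed to the node(s) owning the data.
    static List<List<?>> find(IgniteCache<?, ?> cache, String sql, Object... args) {
        SqlFieldsQuery localQry = new SqlFieldsQuery(sql).setArgs(args);
        localQry.setLocal(true);
        List<List<?>> rows = cache.query(localQry).getAll();
        if (!rows.isEmpty())
            return rows; // target data found in local memory
        SqlFieldsQuery distQry = new SqlFieldsQuery(sql).setArgs(args);
        return cache.query(distQry).getAll(); // owning node returns the target data
    }
}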
In an embodiment, step S110 is followed by:
if the lock type is a pessimistic lock, detecting whether a confirmation preparation message has been generated;
and if the confirmation preparation message has been generated, receiving the cache data sent by the application instance embedded with the Ignite program JAR package.
In this embodiment, when the master node receives the preparation message and determines that the local lock type is a pessimistic lock, the cache data sent by the application instance embedded with the Ignite program JAR package is received after the confirmation preparation message is detected.
In one embodiment, the step of detecting whether a confirmation preparation message has been generated if the lock type is a pessimistic lock comprises:
acquiring an internal management transaction request of the transaction coordinator;
locally generating a pessimistic lock request, and write-locking the Ignite node according to the pessimistic lock;
and generating a confirmation preparation message and sending it to the application instance embedded with the Ignite program JAR package.
To understand the principle of pessimistic locks more clearly, an example is described below. A typical pessimistic concurrency scenario is a transfer between two bank accounts, which must guarantee that the debit and credit status of both accounts is recorded correctly. Here both accounts need to be locked to ensure that the update completes fully and the balances are correct. In the pessimistic concurrency model, an application must lock all data that is to be read, written, or modified at the beginning of the transaction.
In the pessimistic model, a pessimistic lock is held until the transaction completes, and the lock prevents other transactions from accessing the data; as a result, each Ignite node can only synchronize the cached data sent from the application instance embedded with the Ignite program JAR package.
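A minimal sketch of the bank-transfer example under Ignite's pessimistic mode follows; the account cache, account keys, and balances are illustrative assumptions.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

class BankTransfer {
    // In PESSIMISTIC mode each get() acquires the entry lock immediately,
    // so both accounts stay locked until the transaction completes.
    static void transfer(Ignite ignite, IgniteCache<String, Double> accounts,
                         String from, String to, double amount) {
        try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
            double src = accounts.get(from); // locks 'from' (assumed non-null)
            double dst = accounts.get(to);   // locks 'to'
            accounts.put(from, src - amount);
            accounts.put(to, dst + amount);
            tx.commit(); // locks are released when the transaction completes
        }
    }
}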
The method synchronously updates the stored data to each node with a two-phase commit algorithm, supports a distributed lock mechanism, and achieves consistency of distributed data storage by using pessimistic or optimistic locks when the data is accessed.
Embodiments of the present invention further provide a high-speed distributed data caching apparatus, which is configured to execute any embodiment of the aforementioned high-speed distributed data caching method. Specifically, referring to fig. 3, fig. 3 is a schematic block diagram of a high-speed distributed data caching apparatus according to an embodiment of the present invention. The high-speed distributed data caching apparatus 100 may be configured in a server.
As shown in fig. 3, the high-speed distributed data caching apparatus 100 includes: a preparation message detection unit 110, a preparation message sending unit 120, a confirmation message sending unit 130, a commit message sending unit 140, a transaction confirmation message sending unit 150, and a data retrieval unit 160.
A preparation message detection unit 110, configured to acquire a local lock type if a preparation message sent by an application instance embedded with the Ignite program JAR package is detected; wherein the lock types include pessimistic locks and optimistic locks.
In this embodiment, in order to understand the technical solution of the present application more clearly, the terminals involved are described in detail below. The technical solution is described from the perspective of one master node among the nodes included in an Ignite cluster.
First, there is the client, which may also be understood as an Ignite node embedded with the Ignite program JAR package (it is likewise one of the Ignite nodes in the Ignite cluster). A cache space is configured locally at the client, and the Ignite node embedded with the Ignite program JAR package can start a process to synchronize the cache data in the cache space to the other Ignite nodes.
Second, there is the Ignite cluster, which includes several Ignite nodes. Since processing a given computation task in the Ignite cluster may involve multiple Ignite nodes, the data of each Ignite node in the Ignite cluster must be kept consistent. Once the data in a particular Ignite node is updated, the updated data needs to be synchronized to the other Ignite nodes in the Ignite cluster in time. A two-phase commit protocol is used when synchronizing data within the Ignite cluster.
The two-phase commit protocol includes two phases: the first is the prepare phase and the second is the commit phase. The step executed by the preparation message detection unit 110 is the first step of the prepare phase: the Ignite node embedded with the Ignite program JAR package (i.e., the application instance) first sends a preparation message to the master node in the Ignite cluster. When the master node in the Ignite cluster receives the preparation message, it needs to communicate further with the other nodes in the Ignite cluster; at this point the master node's local lock type must first be acquired to determine whether the master node locks the data when the data synchronization transaction starts, so as to prevent the data from being modified.
If the lock type is an optimistic lock, the lock local to the Ignite node is acquired, and the data is locked, only just before the transaction finishes committing. If the lock type is a pessimistic lock, the lock local to the Ignite node is acquired, and the data is locked, at the beginning of the transaction. In other words, in the optimistic concurrency control model the master node acquires the lock in the prepare phase of the two-phase commit protocol, so the optimistic lock can be acquired as late as just before the transaction commits; in the pessimistic concurrency control model, all data to be read, written, or modified must be locked at the beginning of the transaction.
In one embodiment, the high-speed distributed data caching apparatus 100 further comprises:
and a hash table acquisition unit, configured to acquire a locally stored distributed hash table and acquire the locally stored data of each Ignite node in the Ignite cluster according to the distributed hash table.
In this embodiment, a distributed hash table is used in the Ignite cluster to determine how data is distributed across the cluster, and the distributed hash table is stored in the master node. That is, if data needs to be synchronized from the cache space of the master node to the corresponding target Ignite nodes, the locally stored data of each Ignite node in the Ignite cluster must be acquired according to the distributed hash table, so that the target Ignite nodes for the data to be synchronized can be obtained by querying the distributed hash table.
A preparation message sending unit 120, configured to send the preparation message to the other Ignite nodes in the Ignite cluster if the lock type is an optimistic lock.
In this embodiment, after receiving the preparation message and determining that the local lock type is an optimistic lock, the master node first sends the preparation message to the other Ignite nodes in the Ignite cluster. By having the master node forward the preparation message, it can be effectively detected whether the other Ignite nodes in the Ignite cluster are ready to synchronize data in the prepare phase.
A confirmation message sending unit 130, configured to send a confirmation message corresponding to the preparation message to the application instance embedded with the Ignite program JAR package.
In this embodiment, to indicate that they have finished acquiring the lock and are ready to synchronize data, the other Ignite nodes in the Ignite cluster send a confirmation message to the application instance embedded with the Ignite program JAR package. When the step performed by the confirmation message sending unit 130 has been executed, the prepare phase of the two-phase commit protocol is complete; the subsequent steps execute the commit phase.
A commit message sending unit 140, configured to send a commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster if the cache data sent by the application instance embedded with the Ignite program JAR package is detected.
In this embodiment, in the commit phase, the master node detects whether cache data sent by the application instance embedded with the Ignite program JAR package has been received; if so, the master node first sends a commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster.
In a specific implementation, the commit message sending unit 140 includes:
a first transaction request acquisition unit, configured to acquire an internal management transaction request of the transaction coordinator;
a first key value writing unit, configured to write the key value of the cache data locally;
a first transaction mapping unit, configured to write the key value into a local transaction map through the transaction coordinator;
a commit message receiving unit, configured to receive a commit message sent by the application instance embedded with the Ignite program JAR package;
a first locking unit, configured to acquire an optimistic lock locally and write-lock the Ignite node according to the optimistic lock;
a first locking message sending unit, configured to send a locking confirmation message to the application instance embedded with the Ignite program JAR package;
and a first commit message distribution unit, configured to send the commit message to the other Ignite nodes in the Ignite cluster.
In this embodiment, when the master node's local lock is an optimistic lock, an analogy can be drawn with Computer Aided Design (CAD): a designer works on one part of the overall design, usually checking the design out of the central repository into a local workspace, making partial updates, and then checking the result back into the central repository. Since each designer is responsible for only part of the overall design, there is little possibility of conflict with updates to the other parts.
In contrast to the pessimistic concurrency model, the optimistic concurrency model delays lock acquisition, which makes it better suited to applications with less resource contention, such as the CAD example described above.
By executing the above steps in the optimistic lock concurrency mode, the cache data is guaranteed to be written to the master node consistently.
The transaction coordinator is deployed in the master node and can detect internal management transaction requests. Once an internal management transaction request is detected, the key value of the cache data is first written locally, and the key value is then written into the master node's local transaction map. After the local transaction mapping is completed, the master node can be locked according to the optimistic lock; once locking is complete, the master node sends a locking confirmation message to the application instance embedded with the Ignite program JAR package. In this way the cache data is synchronized to the master node first and then synchronized to the other Ignite nodes in the Ignite cluster.
A transaction confirmation message sending unit 150, configured to send the transaction confirmation message corresponding to the commit message to the application instance embedded with the Ignite program JAR package.
In this embodiment, after the commit message corresponding to the cached data has been sent and the synchronization of the cached data on the master node is complete, the master node sends a transaction confirmation message corresponding to the commit message to the application instance embedded with the Ignite program JAR package. Similarly, after the commit message corresponding to the cached data is sent to the other Ignite nodes in the Ignite cluster, those nodes synchronize the cached data according to the commit message, and once their caching is complete they likewise each send a transaction confirmation message corresponding to the commit message to the application instance embedded with the Ignite program JAR package.
In one embodiment, the high-speed distributed data caching apparatus 100 further comprises:
a current valid node list acquisition unit, configured to acquire the unique node number corresponding to each Ignite node according to the received transaction confirmation messages, so as to form a current valid node list;
and a current failed node list acquisition unit, configured to acquire the Ignite cluster total list and take the complement of the current valid node list with the Ignite cluster total list as the full set, so as to obtain a current failed node list.
In this embodiment, after the Ignite nodes other than the master node in the Ignite cluster receive the commit message forwarded by the master node, each needs to feed a transaction confirmation message back to the master node to indicate whether the current Ignite node is a valid node (i.e., a non-failed node). When some of the Ignite nodes have sent the transaction confirmation message and the others have not, the unique node numbers of the Ignite nodes that did send it form the current valid node list. Taking the known Ignite cluster total list as the full set, the complement of the current valid node list is computed to obtain the current failed node list.
A data retrieval unit 160, configured to query locally according to an SQL query statement to obtain the corresponding target data if the SQL query statement is detected.
In this embodiment, an SQL engine is embedded in the Ignite cluster; it supports the SQL99 standard, supports indexes, and also supports the KV mode (KV stands for Key-Value). When any Ignite node uses the data, local memory is queried first to find the corresponding data, and this query can be implemented with SQL query statements.
In one embodiment, the data retrieval unit 160 includes:
a query data judging unit, configured to judge whether query data corresponding to the SQL query statement exists locally in the master node;
a first data acquisition unit, configured to take the query data as the target data if the query data corresponding to the SQL query statement exists locally in the master node;
and a second data acquisition unit, configured to acquire the corresponding current target Ignite node according to the distributed hash table and receive the target data corresponding to the SQL query statement sent by the current target Ignite node, if the query data corresponding to the SQL query statement does not exist locally in the master node.
In this embodiment, since the distributed hash table is stored in the master node, when the corresponding target data is not found on the master node according to the SQL query statement, the master node can determine the Ignite node on which the target data is stored by querying the distributed hash table, and that node sends the target data to the master node to complete the query. When the corresponding target data is found on the master node according to the SQL query statement, the master node directly takes the queried data as the target data.
In one embodiment, the high-speed distributed data caching apparatus 100 further comprises:
a pessimistic lock detection unit, configured to detect whether a confirmation preparation message has been generated if the lock type is a pessimistic lock;
and a cache data receiving unit, configured to receive the cache data sent by the application instance embedded with the Ignite program JAR package if the confirmation preparation message has been generated.
In this embodiment, when the master node receives the preparation message and determines that the local lock type is a pessimistic lock, the cache data sent by the application instance embedded with the Ignite program JAR package is received after the confirmation preparation message is detected.
In one embodiment, the pessimistic lock detection unit includes:
a second transaction request acquisition unit, configured to acquire an internal management transaction request of the transaction coordinator;
a second locking unit, configured to locally generate a pessimistic lock request and write-lock the Ignite node according to the pessimistic lock;
and a second locking message sending unit, configured to generate a confirmation preparation message and send it to the application instance embedded with the Ignite program JAR package.
To understand the principle of pessimistic locks more clearly, an example is described below. A typical pessimistic concurrency scenario is a transfer between two bank accounts, which must guarantee that the debit and credit status of both accounts is recorded correctly. Here both accounts need to be locked to ensure that the update completes fully and the balances are correct. In the pessimistic concurrency model, an application must lock all data that is to be read, written, or modified at the beginning of the transaction.
In the pessimistic model, a pessimistic lock is held until the transaction completes, and the lock prevents other transactions from accessing the data; as a result, each Ignite node can only synchronize the cached data sent from the application instance embedded with the Ignite program JAR package.
The device synchronously updates the stored data to each node with a two-phase commit algorithm, supports a distributed lock mechanism, and achieves consistency of distributed data storage by using pessimistic or optimistic locks when the data is accessed.
The above-described high-speed distributed data caching apparatus may be implemented in the form of a computer program that can run on a computer device as shown in fig. 4.
Referring to fig. 4, fig. 4 is a schematic block diagram of a computer device according to an embodiment of the present invention. The computer device 500 is a server, and the server may be an independent server or a server cluster composed of a plurality of servers.
Referring to fig. 4, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032, when executed, causes the processor 502 to perform the high-speed distributed data caching method.
The processor 502 is used to provide computing and control capabilities that support the operation of the overall computer device 500.
The internal memory 504 provides an environment for running the computer program 5032 stored in the non-volatile storage medium 503; when the computer program 5032 is executed by the processor 502, the processor 502 performs the high-speed distributed data caching method.
The network interface 505 is used for network communication, such as providing transmission of data information. Those skilled in the art will appreciate that the configuration shown in fig. 4 is a block diagram of only a portion of the configuration associated with aspects of the present invention and is not intended to limit the computing device 500 to which aspects of the present invention may be applied, and that a particular computing device 500 may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
The processor 502 is configured to run a computer program 5032 stored in the memory to implement the high-speed distributed data caching method disclosed in the embodiment of the present invention.
Those skilled in the art will appreciate that the embodiment of a computer device illustrated in fig. 4 does not constitute a limitation on the specific construction of the computer device, and that in other embodiments a computer device may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. For example, in some embodiments, the computer device may only include a memory and a processor, and in such embodiments, the structures and functions of the memory and the processor are consistent with those of the embodiment shown in fig. 4, and are not described herein again.
It should be understood that, in the embodiment of the present invention, the processor 502 may be a Central Processing Unit (CPU), or another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general purpose processor may be a microprocessor, or any conventional processor.
In another embodiment of the invention, a computer-readable storage medium is provided. The computer readable storage medium may be a non-volatile computer readable storage medium. The computer readable storage medium stores a computer program, wherein the computer program, when executed by a processor, implements the high-speed distributed data caching method disclosed by the embodiments of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatuses, devices and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here. Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both; the components and steps of the examples have been described above in general functional terms to illustrate clearly the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the implementation. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only a logical division, and there may be other divisions when the actual implementation is performed, or units having the same function may be grouped into one unit, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on this understanding, the part of the technical solution of the present invention that in essence contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method of caching distributed data, comprising:
if a preparation message sent by an application instance embedded with the Ignite program JAR package is detected, acquiring a local lock type; wherein the lock types include pessimistic locks and optimistic locks;
if the lock type is an optimistic lock, sending the preparation message to the other Ignite nodes in the Ignite cluster;
sending a confirmation message corresponding to the preparation message to the application instance embedded with the Ignite program JAR package;
if cache data sent by the application instance embedded with the Ignite program JAR package is detected, sending a commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster;
sending a transaction confirmation message corresponding to the commit message to the application instance embedded with the Ignite program JAR package; and
if an SQL query statement is detected, querying locally according to the SQL query statement to obtain the corresponding target data.
2. The method of caching distributed data as recited in claim 1, further comprising:
and acquiring a locally stored distributed hash table, and acquiring locally stored data of each Ignite node in the Ignite cluster according to the distributed hash table.
3. The method according to claim 1, wherein, if the cache data sent by the application instance embedded with the Ignite program JAR package is detected, the sending of a commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster includes:
acquiring an internal management transaction request of the transaction coordinator;
writing the key value of the cache data locally;
writing the key value into a local transaction map through the transaction coordinator;
receiving a commit message sent by the application instance embedded with the Ignite program JAR package;
acquiring an optimistic lock locally, and write-locking the Ignite node according to the optimistic lock;
sending a locking confirmation message to the application instance embedded with the Ignite program JAR package;
and sending the commit message to the other Ignite nodes in the Ignite cluster.
4. The method of caching distributed data as recited in claim 1, further comprising:
acquiring the unique node number corresponding to each Ignite node according to the received transaction confirmation messages to form a current valid node list;
and acquiring the Ignite cluster total list, and taking the complement of the current valid node list with the Ignite cluster total list as the full set to obtain a current failed node list.
5. The method of caching distributed data as recited in claim 1, further comprising:
if the lock type is a pessimistic lock, detecting whether a confirmation preparation message has been generated;
and if the confirmation preparation message has been generated, receiving the cache data sent by the application instance embedded with the Ignite program JAR package.
6. The method of claim 5, wherein the detecting whether a confirmation preparation message has been generated if the lock type is a pessimistic lock comprises:
acquiring an internal management transaction request of the transaction coordinator;
locally generating a pessimistic lock request, and write-locking the Ignite node according to the pessimistic lock;
and generating a confirmation preparation message and sending it to the application instance embedded with the Ignite program JAR package.
7. The method according to claim 2, wherein the locally querying according to the SQL query statement to obtain corresponding target data comprises:
judging whether query data corresponding to the SQL query statement exists locally in the master node;
if the query data corresponding to the SQL query statement exists locally in the master node, taking the query data as the target data;
and if the query data corresponding to the SQL query statement does not exist locally in the master node, acquiring the corresponding current target Ignite node according to the distributed hash table, and receiving the target data corresponding to the SQL query statement sent by the current target Ignite node.
8. A high-speed distributed data caching apparatus, comprising:
a preparation message detection unit, configured to acquire a local lock type if a preparation message sent by an application instance embedded with an Ignite program JAR package is detected, wherein the lock types include a pessimistic lock and an optimistic lock;
a preparation message sending unit, configured to send the preparation message to other Ignite nodes in the Ignite cluster if the lock type is an optimistic lock;
a confirmation message sending unit, configured to send a confirmation message corresponding to the preparation message to the application instance embedded with the Ignite program JAR package;
a commit message sending unit, configured to send, if cache data sent by the application instance embedded with the Ignite program JAR package is detected, a commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster;
a transaction confirmation message sending unit, configured to send a transaction confirmation message corresponding to the commit message to the application instance embedded with the Ignite program JAR package; and
a data query unit, configured to perform, if an SQL query statement is detected, a local query according to the SQL query statement to obtain corresponding target data.
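Illustrative note (not part of the claims): the apparatus of claim 8 mirrors the method steps one-to-one. As a purely hypothetical illustration, its functional units can be modeled as methods of a single Java interface; every name below is invented for clarity and is not part of the claim.

    import java.util.List;
    import java.util.UUID;

    // Hypothetical one-to-one mapping of claim 8's units onto an interface.
    public interface DistributedCacheDevice {
        String detectPreparationMessage();              // preparation message detection unit: returns the local lock type
        void sendPreparationMessage(List<UUID> nodes);  // preparation message sending unit (optimistic-lock branch)
        void sendConfirmationMessage();                 // confirmation message sending unit
        void sendCommitMessage(byte[] cacheData);       // commit message sending unit
        void sendTransactionConfirmation();             // transaction confirmation message sending unit
        List<List<?>> queryLocally(String sql);         // data query unit: SQL query for the target data
    }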
9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the high-speed distributed data caching method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to carry out the high-speed distributed data caching method according to any one of claims 1 to 7.
CN202011196303.7A 2020-10-30 2020-10-30 High-speed distributed data caching method, device, computer equipment and storage medium Active CN112328637B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011196303.7A CN112328637B (en) 2020-10-30 2020-10-30 High-speed distributed data caching method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112328637A true CN112328637A (en) 2021-02-05
CN112328637B CN112328637B (en) 2023-11-14

Family

ID=74323734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011196303.7A Active CN112328637B (en) 2020-10-30 2020-10-30 High-speed distributed data caching method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112328637B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102281332A (en) * 2011-08-31 2011-12-14 上海西本网络科技有限公司 Distributed cache array and data updating method thereof
CN108509433A (en) * 2017-02-23 2018-09-07 北京京东金融科技控股有限公司 The method, apparatus and electronic equipment of formation sequence number based on distributed system
CN108776934A (en) * 2018-05-15 2018-11-09 中国平安人寿保险股份有限公司 Distributed data computational methods, device, computer equipment and readable storage medium storing program for executing
CN111651464A (en) * 2020-04-15 2020-09-11 北京皮尔布莱尼软件有限公司 Data processing method and system and computing equipment
CN111797107A (en) * 2020-07-08 2020-10-20 贵州易鲸捷信息技术有限公司 Database transaction concurrency control method for mixing optimistic lock and pessimistic lock

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113377795A (en) * 2021-06-23 2021-09-10 北京沃东天骏信息技术有限公司 Message processing method and device
CN117114559A (en) * 2023-10-24 2023-11-24 广州一链通互联网科技有限公司 Weather factor optimization algorithm in dynamic programming of internal trade containerized route
CN117114559B (en) * 2023-10-24 2024-01-23 广州一链通互联网科技有限公司 Weather factor optimization algorithm in dynamic programming of internal trade containerized route

Also Published As

Publication number Publication date
CN112328637B (en) 2023-11-14

Similar Documents

Publication Publication Date Title
US12001420B2 (en) Disconnected operation within distributed database systems
US11003689B2 (en) Distributed database transaction protocol
US8645319B2 (en) Information processing system, data update method and data update program
EP3803618B1 (en) Distributed transactions in cloud storage with hierarchical namespace
WO2019231689A1 (en) Multi-protocol cloud storage for big data and analytics
EP3803619A1 (en) Cloud storage distributed file system
US7996360B2 (en) Coordinating updates to replicated data
US8600933B2 (en) Multi-master attribute uniqueness
US20180074919A1 (en) Hybrid Database Concurrent Transaction Control
JP7549137B2 (en) Transaction processing method, system, device, equipment, and program
US20150074070A1 (en) System and method for reconciling transactional and non-transactional operations in key-value stores
CN106354732B (en) A kind of off-line data version conflict solution for supporting concurrently to cooperate with
CN112328637B (en) High-speed distributed data caching method, device, computer equipment and storage medium
CN113168371A (en) Write-write collision detection for multi-master shared storage databases
US8996484B2 (en) Recursive lock-and-propagate operation
US9430541B1 (en) Data updates in distributed system with data coherency
CN114691307A (en) Transaction processing method and computer system
US10445338B2 (en) Method and system for replicating data in a cloud storage system
Zhang et al. Dependency preserved raft for transactions
CN114207600A (en) Distributed cross-regional database transaction processing
CN117435574B (en) Improved two-stage commit transaction implementation method, system, device and storage medium
EP4390718A1 (en) Method and apparatus for controlling database transaction, and related device
Witt Distributed Cache-Aware Transactions for Polyglot Persistence
Kim et al. Validation-based reprocessing scheme for updating spatial data in mobile computing environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant