CN112328637B - High-speed distributed data caching method, device, computer equipment and storage medium - Google Patents
- Publication number
- CN112328637B (application CN202011196303.7A)
- Authority
- CN
- China
- Prior art keywords
- ignite
- data
- node
- message
- embedded
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/242—Query formulation
- G06F16/2433—Query languages
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/23—Updating
- G06F16/2365—Ensuring data consistency and integrity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2455—Query execution
- G06F16/24552—Database cache management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/27—Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/466—Transaction processing
Abstract
The invention discloses a high-speed distributed data caching method, a device, computer equipment and a storage medium, relating to the distributed storage technology of cloud storage. The method comprises: if a preparation message sent by an application instance embedded with an Ignite program JAR package is detected, acquiring a local lock type; if the lock type is an optimistic lock, sending the preparation message to the other Ignite nodes in the Ignite cluster; sending confirmation information corresponding to the preparation message to the application instance embedded with the Ignite program JAR package; if cache data sent by the application instance embedded with the Ignite program JAR package is detected, sending a commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster; and sending a transaction confirmation message corresponding to the commit message to the application instance embedded with the Ignite program JAR package. According to the method, the data to be stored is synchronously updated to each node by a two-phase commit algorithm, a distributed lock mechanism is supported, and a pessimistic lock or an optimistic lock is used when the data is accessed, so that the consistency of the distributed storage of the data is ensured.
Description
Technical Field
The present invention relates to the field of distributed storage technologies of cloud storage, and in particular, to a method and apparatus for caching high-speed distributed data, a computer device, and a storage medium.
Background
At present, a common data caching mode is Redis, which is a k/v (key/value) data store. This mode places great limitations on querying: range queries cannot be performed. Moreover, a k/v store cannot achieve distributed data consistency when complex data is stored, so the data caching process is error-prone.
Disclosure of Invention
The embodiment of the invention provides a high-speed distributed data caching method, a device, computer equipment and a storage medium, which aim to solve the problem in the prior art that a database using a k/v data storage mode cannot achieve distributed data consistency when storing complex data, so that the data caching process is error-prone.
In a first aspect, an embodiment of the present invention provides a high-speed distributed data caching method, including:
if a preparation message sent by an application instance embedded with an Ignite program JAR package is detected, acquiring a local lock type; wherein the lock types include a pessimistic lock and an optimistic lock;
If the lock type is an optimistic lock, sending the preparation message to other Ignite nodes in an Ignite cluster;
sending confirmation information corresponding to the preparation message to the application instance embedded with the Ignite program JAR package;
if cache data sent by the application instance embedded with the Ignite program JAR package is detected, sending a commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster;
sending a transaction confirmation message corresponding to the commit message to the application instance embedded with the Ignite program JAR package; and
if an SQL query statement is detected, obtaining corresponding target data by a local query according to the SQL query statement.
In a second aspect, an embodiment of the present invention provides a high-speed distributed data caching apparatus, including:
a preparation message detection unit, configured to acquire a local lock type if a preparation message sent by an application instance embedded with an Ignite program JAR package is detected; wherein the lock types include a pessimistic lock and an optimistic lock;
a preparation message sending unit, configured to send the preparation message to the other Ignite nodes in the Ignite cluster if the lock type is an optimistic lock;
a confirmation message sending unit, configured to send confirmation information corresponding to the preparation message to the application instance embedded with the Ignite program JAR package;
a commit message sending unit, configured to send a commit message corresponding to cache data to the other Ignite nodes in the Ignite cluster if the cache data sent by the application instance embedded with the Ignite program JAR package is detected;
a transaction confirmation message sending unit, configured to send a transaction confirmation message corresponding to the commit message to the application instance embedded with the Ignite program JAR package; and
a data retrieval unit, configured to obtain corresponding target data by a local query according to an SQL query statement if the SQL query statement is detected.
In a third aspect, an embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored on the memory and capable of running on the processor, where the processor implements the high-speed distributed data caching method according to the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the high-speed distributed data caching method according to the first aspect.
The embodiment of the invention provides a high-speed distributed data caching method, a device, computer equipment and a storage medium. The method comprises: if a preparation message sent by an application instance embedded with an Ignite program JAR package is detected, acquiring a local lock type, wherein the lock types include a pessimistic lock and an optimistic lock; if the lock type is an optimistic lock, sending the preparation message to the other Ignite nodes in the Ignite cluster; sending confirmation information corresponding to the preparation message to the application instance embedded with the Ignite program JAR package; if cache data sent by the application instance embedded with the Ignite program JAR package is detected, sending a commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster; sending a transaction confirmation message corresponding to the commit message to the application instance embedded with the Ignite program JAR package; and if an SQL query statement is detected, obtaining corresponding target data by a local query according to the SQL query statement. According to the method, the data to be stored is synchronously updated to each node by a two-phase commit algorithm, a distributed lock mechanism is supported, and a pessimistic lock or an optimistic lock is used when the data is accessed, so that the consistency of the distributed storage of the data is ensured.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings may be obtained from these drawings by a person of ordinary skill in the art without inventive effort.
Fig. 1 is an application scenario schematic diagram of a high-speed distributed data caching method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for caching data in a high-speed distributed manner according to an embodiment of the present invention;
FIG. 3 is a schematic block diagram of a high-speed distributed data caching apparatus according to an embodiment of the present invention;
fig. 4 is a schematic block diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Referring to fig. 1 and fig. 2, fig. 1 is a schematic diagram of an application scenario of a high-speed distributed data caching method according to an embodiment of the present invention; fig. 2 is a flow chart of a high-speed distributed data caching method according to an embodiment of the present invention, where the high-speed distributed data caching method is applied to a server, and the method is executed by application software installed in the server.
As shown in fig. 2, the method includes steps S110 to S160.
S110, if a preparation message sent by an application instance embedded with an Ignite program JAR package is detected, acquiring a local lock type; wherein the lock types include a pessimistic lock and an optimistic lock.
In this embodiment, in order to understand the technical solution of the present application more clearly, the related terminals are described in detail below. The technical solution of the present application is described from the perspective of one master node among the plurality of nodes included in the Ignite cluster.
The first is the client, which can also be understood as an Ignite node embedded with the Ignite program JAR package (it is likewise one of the Ignite nodes in the Ignite cluster). A cache space is configured locally on the client, and the Ignite node embedded with the Ignite program JAR package can start a process to synchronize cache data in the cache space to the other Ignite nodes.
The second is the Ignite cluster, which includes a plurality of Ignite nodes. Since processing a certain computing task in the Ignite cluster may involve a plurality of Ignite nodes, the data of each Ignite node in the Ignite cluster is required to be consistent. Once the data in one Ignite node is updated, the updated data needs to be synchronized to the other Ignite nodes in the Ignite cluster in a timely manner. A two-phase commit protocol is required when synchronizing data in the Ignite cluster.
The two-phase commit protocol comprises two phases: the first is the preparation phase and the second is the commit phase. Step S110 is the first step of the preparation phase: the Ignite node embedded with the Ignite program JAR package (i.e. the application instance) first sends a preparation message to the master node in the Ignite cluster. When the master node in the Ignite cluster receives the preparation message, it needs to communicate further with the other nodes in the Ignite cluster; at this time, the local lock type of the master node is acquired first, to determine whether the master node locks the data when data synchronization starts, so as to prevent the data from being modified.
If the lock type is an optimistic lock, the lock local to the Ignite node may be acquired before the transaction finishes committing, at which point it locks the data. If the lock type is a pessimistic lock, the lock local to the Ignite node is acquired at the beginning of the transaction and locks the data. In the optimistic concurrency control mode, the lock is acquired by the master node in the preparation phase of the two-phase commit protocol, i.e. the optimistic lock is available before the transaction finishes committing. In the pessimistic concurrency control mode, all data to be read, written or modified must be locked at the beginning of the transaction.
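The difference in lock-acquisition timing between the two modes can be sketched as a small single-process simulation (an illustrative model only; the class and method names below are invented for this sketch and are not Ignite's actual implementation):

```python
class Transaction:
    """Illustrative model of lock-acquisition timing (not Ignite's real code)."""

    def __init__(self, mode):
        self.mode = mode      # "PESSIMISTIC" or "OPTIMISTIC"
        self.locks = set()    # keys this transaction currently holds locks on
        self.writes = {}

    def begin(self, keys):
        # Pessimistic mode: all data to be read, written or modified is
        # locked at the beginning of the transaction.
        if self.mode == "PESSIMISTIC":
            self.locks.update(keys)

    def write(self, key, value):
        self.writes[key] = value

    def prepare(self):
        # Optimistic mode: locks are acquired only in the prepare phase of
        # the two-phase commit, i.e. before the commit completes.
        if self.mode == "OPTIMISTIC":
            self.locks.update(self.writes)
        return True

p = Transaction("PESSIMISTIC")
p.begin(["k1"])
assert p.locks == {"k1"}     # locked immediately at transaction start

o = Transaction("OPTIMISTIC")
o.begin(["k1"])
assert o.locks == set()      # nothing locked yet
o.write("k1", 42)
o.prepare()
assert o.locks == {"k1"}     # locked only at prepare time
```

The sketch only demonstrates *when* locks are taken in each mode, which is the property the preceding paragraph relies on.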
In one embodiment, step S110 further includes:
and obtaining a locally stored distributed hash table, and obtaining the local storage data of each Ignite node in the Ignite cluster according to the distributed hash table.
In this embodiment, a distributed hash table is used in the Ignite cluster to determine how data is distributed in the cluster, and the distributed hash table is stored on the master node. That is, if there is data to be synchronized from the cache space to a corresponding target Ignite node, the local storage data of each Ignite node in the Ignite cluster needs to be obtained according to the distributed hash table, so that the target Ignite node of the data to be synchronized can be obtained by querying the distributed hash table.
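The lookup idea can be illustrated with a minimal sketch (a generic hash-modulo placement; Ignite's real affinity function is more elaborate, and the node names here are hypothetical):

```python
import hashlib

# Generic sketch of how a distributed hash table maps a key to the node
# that stores it. Only the lookup idea is illustrated here.
NODES = ["node-1", "node-2", "node-3"]

def owner_of(key, nodes=NODES):
    # Hash the key, then map the digest onto one of the cluster nodes.
    digest = hashlib.sha256(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

# The master node can answer "which node holds this key?" locally:
assert owner_of("user:42") in NODES
# The mapping is deterministic, so every node computes the same owner:
assert owner_of("user:42") == owner_of("user:42")
```

Because every node evaluates the same deterministic function, the master node can resolve the target node of data to be synchronized without asking the rest of the cluster.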
S120, if the lock type is an optimistic lock, sending the preparation message to the other Ignite nodes in the Ignite cluster.
In this embodiment, when the master node receives the preparation message and determines that the local lock type is an optimistic lock, the preparation message may be sent by the master node to the other Ignite nodes in the Ignite cluster. Having the master node forward the preparation message makes it possible to effectively detect, in the preparation phase, whether the other Ignite nodes in the Ignite cluster are ready to synchronize data.
S130, sending the confirmation information corresponding to the preparation message to the application instance embedded with the Ignite program JAR package.
In this embodiment, to indicate that the other Ignite nodes in the Ignite cluster have completed lock acquisition and are ready to synchronize data, the other Ignite nodes send a confirmation message to the application instance embedded with the Ignite program JAR package. Upon execution of step S130, the preparation phase of the two-phase commit protocol is completed. The subsequent steps perform the commit phase.
S140, if cache data sent by the application instance embedded with the Ignite program JAR package is detected, sending the commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster.
In this embodiment, in the commit phase, the master node detects whether cache data sent by the application instance embedded with the Ignite program JAR package has been received; if so, the master node sends a commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster.
In one embodiment, step S140 includes:
acquiring an internal management transaction request of a transaction coordinator;
writing the key value of the cache data locally;
writing the key value into a local transaction map through the transaction coordinator;
receiving a commit message sent by the application instance embedded with the Ignite program JAR package;
acquiring the optimistic lock locally, and write-locking the Ignite node according to the optimistic lock;
sending a locked confirmation message to the application instance embedded with the Ignite program JAR package;
sending the commit message to the other Ignite nodes in the Ignite cluster.
In this embodiment, when the lock local to the master node is an optimistic lock, reference may be made to computer-aided design (CAD): a designer works on one part of the entire design, typically checking that part out of a central repository into a local workspace and, after a partial update, checking the result back into the central repository. Since the designer is responsible for only one part of the entire design, conflicts with updates to other parts are unlikely.
In contrast to the pessimistic concurrency model, the optimistic concurrency model delays the acquisition of locks, which is more suitable for applications with less resource contention, such as the CAD example described above.
By performing the above steps in the optimistic-lock concurrency mode, it is ensured that the cache data is first written to the master node with its consistency guaranteed.
The transaction coordinator is deployed in the master node and can detect the internal management transaction request. Once the internal management transaction request is detected, the key of the cache data is first written locally and then written into the local transaction map of the master node. After the local transaction mapping is completed, the master node can be locked according to the optimistic lock; after the locking is completed, the master node sends a locked confirmation message to the application instance embedded with the Ignite program JAR package, so that the cache data is synchronized to the master node first and then to the other Ignite nodes in the Ignite cluster.
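The ordering of the commit-phase steps on the master node can be sketched as follows (all variable and function names are invented for illustration; this is not the actual Ignite code path):

```python
# Sketch of the commit-phase ordering described above: write the key
# locally, record it in the transaction map via the coordinator, lock,
# acknowledge to the application instance, then forward the commit to
# the other nodes in the cluster.

local_store = {}
tx_map = {}
lock_table = set()
sent = []   # messages "sent" to other parties, recorded for inspection

def handle_commit(tx_id, key, value, other_nodes):
    local_store[key] = value        # 1. write the key value locally
    tx_map[tx_id] = key             # 2. coordinator writes it into the local transaction map
    lock_table.add(key)             # 3. acquire the optimistic lock / write-lock the node
    sent.append(("ack-locked", "application-instance"))  # 4. locked confirmation message
    for node in other_nodes:        # 5. forward the commit to the rest of the cluster
        sent.append(("commit", node))

handle_commit("tx-1", "k1", "v1", ["node-2", "node-3"])
assert local_store["k1"] == "v1"
assert sent[0] == ("ack-locked", "application-instance")
assert ("commit", "node-2") in sent and ("commit", "node-3") in sent
```

The point of the sketch is the ordering: the locked confirmation goes back to the application instance before the commit fans out, matching the "master node first, then the other Ignite nodes" sequence above.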
S150, sending the transaction confirmation message corresponding to the commit message to the application instance embedded with the Ignite program JAR package.
In this embodiment, after the commit message corresponding to the cache data is sent to the master node and synchronization of the cache data is completed in the master node, the master node sends a transaction confirmation message corresponding to the commit message to the application instance embedded with the Ignite program JAR package. Similarly, after the commit message corresponding to the cache data is sent to the other Ignite nodes in the Ignite cluster, those nodes can synchronize the cache data according to the commit message; after the caching process is completed, the transaction confirmation messages of the other Ignite nodes corresponding to the commit message are sent to the application instance embedded with the Ignite program JAR package.
In an embodiment, step S150 further includes:
acquiring the unique node number of each corresponding Ignite node according to the received transaction confirmation messages, so as to form a current valid node list;
and acquiring the Ignite cluster total list, taking the Ignite cluster total list as the complete set, and obtaining the complement of the current valid node list to obtain a current failed node list.
In this embodiment, when the Ignite nodes other than the master node in the Ignite cluster receive the commit message forwarded by the master node, each needs to feed back a transaction confirmation message to the master node to indicate whether it is a valid node (i.e. a non-failed node). When some Ignite nodes have sent transaction confirmation messages and others have not, the unique node numbers of the Ignite nodes that have sent transaction confirmation messages form the current valid node list. Taking the known Ignite cluster total list as the complete set, the complement of the current valid node list is obtained, which yields the current failed node list.
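The failed-node computation is a plain set complement, which can be sketched as (node names are hypothetical):

```python
# The failed-node list is the set complement of the valid-node list
# within the full cluster membership list, as described above.

cluster_total = {"node-1", "node-2", "node-3", "node-4"}

# unique numbers of the nodes whose transaction confirmations arrived
acks_received = {"node-1", "node-3"}

current_valid_nodes = acks_received & cluster_total
current_failed_nodes = cluster_total - current_valid_nodes  # complement

assert current_valid_nodes == {"node-1", "node-3"}
assert current_failed_nodes == {"node-2", "node-4"}
```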
S160, if an SQL query statement is detected, obtaining corresponding target data by a local query according to the SQL query statement.
In this embodiment, an SQL engine is embedded in the Ignite cluster, supporting the SQL99 standard, supporting indexes, and also supporting the KV (Key-Value) mode. When any Ignite node uses the data, the corresponding data can be found by querying the local memory, and the query can be carried out through an SQL query statement.
In one embodiment, the step S160 includes:
judging whether query data corresponding to the SQL query statement exists locally at the master node;
if the query data corresponding to the SQL query statement exists locally at the master node, taking the query data as target data;
and if the query data corresponding to the SQL query statement does not exist locally at the master node, obtaining the corresponding current target Ignite node according to the distributed hash table, and receiving the target data corresponding to the SQL query statement sent by the current target Ignite node.
In this embodiment, since the distributed hash table is stored in the master node, when the master node does not find the corresponding target data according to the SQL query statement, the master node can determine the Ignite node storing the target data by querying the distributed hash table, and that node sends the target data to the master node to complete the query. When the corresponding target data is found on the master node according to the SQL query statement, the master node directly takes the query data as the target data.
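The local-first routing described above can be sketched as follows (illustrative dictionaries stand in for the local memory, the distributed hash table, and the remote nodes; none of these names are Ignite API calls):

```python
# Sketch of local-first query routing: answer from the master node's
# local memory when possible, otherwise look up the owning node in the
# distributed hash table and fetch the target data from it.

master_local = {"k1": "v1"}
dht = {"k2": "node-2"}                   # key -> node that stores it
remote_stores = {"node-2": {"k2": "v2"}}

def query(key):
    if key in master_local:              # hit: serve from local memory
        return master_local[key]
    node = dht[key]                      # miss: find the owner via the DHT
    return remote_stores[node][key]      # the owner returns the target data

assert query("k1") == "v1"   # resolved locally at the master node
assert query("k2") == "v2"   # fetched from the owning Ignite node
```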
In one embodiment, step S110 further includes:
if the lock type is pessimistic, detecting whether a confirmation preparation message has been generated;
if the confirmation preparation message has been generated, receiving the cache data sent by the application instance embedded with the Ignite program JAR package.
In this embodiment, when the master node receives the preparation message and determines that the local lock type is a pessimistic lock, the master node receives the cache data sent by the application instance embedded with the Ignite program JAR package after detecting the confirmation preparation message.
In one embodiment, the step of detecting whether a confirmation preparation message has been generated if the lock type is pessimistic comprises:
acquiring an internal management transaction request of a transaction coordinator;
generating a pessimistic lock request locally, and write-locking the Ignite node according to the pessimistic lock;
generating a confirmation preparation message and sending it to the application instance embedded with the Ignite program JAR package.
To better understand the principle of the pessimistic lock, an example is described below. A typical example of the pessimistic concurrency model is a transfer between two bank accounts, where it is necessary to ensure that the debit and credit status of the two accounts is correctly recorded. Both accounts need to be locked to ensure that the update is complete and the balances are correct. In the pessimistic concurrency model, an application needs to lock, at the beginning of the transaction, all data that is to be read, written or modified.
In the pessimistic model, the pessimistic lock is held until the transaction is completed, and the lock prevents other transactions from accessing the data, so that each Ignite node can only synchronize the cache data sent from the application instance embedded with the Ignite program JAR package.
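The bank-transfer example can be sketched with ordinary in-process locks standing in for Ignite's distributed locks (an assumption made for illustration; the fixed lock-ordering trick is a generic deadlock-avoidance measure, not taken from this patent):

```python
import threading

# Sketch of the bank-transfer example: in the pessimistic model both
# account locks are taken at the start of the transaction and held
# until it completes, so no other transaction can observe a
# half-updated balance. threading.Lock stands in for a distributed lock.

balances = {"A": 100, "B": 50}
locks = {name: threading.Lock() for name in balances}

def transfer(src, dst, amount):
    # Lock both accounts in a fixed (sorted) order to avoid deadlock,
    # then hold both locks for the whole transaction.
    first, second = sorted([src, dst])
    with locks[first], locks[second]:
        balances[src] -= amount
        balances[dst] += amount

transfer("A", "B", 30)
assert balances == {"A": 70, "B": 80}
assert balances["A"] + balances["B"] == 150   # total is preserved
```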
According to the method, the data to be stored is synchronously updated to each node by a two-phase commit algorithm, a distributed lock mechanism is supported, and a pessimistic lock or an optimistic lock is used when the data is accessed, so that the consistency of the distributed storage of the data is ensured.
The embodiment of the invention also provides a high-speed distributed data caching device, which is used for executing any embodiment of the high-speed distributed data caching method. In particular, referring to fig. 3, fig. 3 is a schematic block diagram of a high-speed distributed data caching apparatus according to an embodiment of the present invention. The high-speed distributed data caching apparatus 100 may be configured in a server.
As shown in fig. 3, the high-speed distributed data caching apparatus 100 includes: a ready message detection unit 110, a ready message transmission unit 120, a confirmation message transmission unit 130, a commit message transmission unit 140, a transaction confirmation message transmission unit 150, and a data retrieval unit 160.
A preparation message detection unit 110, configured to acquire a local lock type if a preparation message sent by an application instance embedded with an Ignite program JAR package is detected; wherein the lock types include a pessimistic lock and an optimistic lock.
In this embodiment, in order to understand the technical solution of the present application more clearly, the related terminals are described in detail below. The technical solution of the present application is described from the perspective of one master node among the plurality of nodes included in the Ignite cluster.
The first is the client, which can also be understood as an Ignite node embedded with the Ignite program JAR package (it is likewise one of the Ignite nodes in the Ignite cluster). A cache space is configured locally on the client, and the Ignite node embedded with the Ignite program JAR package can start a process to synchronize cache data in the cache space to the other Ignite nodes.
The second is the Ignite cluster, which includes a plurality of Ignite nodes. Since processing a certain computing task in the Ignite cluster may involve a plurality of Ignite nodes, the data of each Ignite node in the Ignite cluster is required to be consistent. Once the data in one Ignite node is updated, the updated data needs to be synchronized to the other Ignite nodes in the Ignite cluster in a timely manner. A two-phase commit protocol is required when synchronizing data in the Ignite cluster.
The two-phase commit protocol comprises two phases: the first is the preparation phase and the second is the commit phase. The step performed in the preparation message detection unit 110 is the first step of the preparation phase: the Ignite node embedded with the Ignite program JAR package (i.e. the application instance) first sends a preparation message to the master node in the Ignite cluster. When the master node in the Ignite cluster receives the preparation message, it needs to communicate further with the other nodes in the Ignite cluster; at this time, the local lock type of the master node is acquired first, to determine whether the master node locks the data when data synchronization starts, so as to prevent the data from being modified.
If the lock type is an optimistic lock, the lock local to the Ignite node may be acquired before the transaction finishes committing, at which point it locks the data. If the lock type is a pessimistic lock, the lock local to the Ignite node is acquired at the beginning of the transaction and locks the data. In the optimistic concurrency control mode, the lock is acquired by the master node in the preparation phase of the two-phase commit protocol, i.e. the optimistic lock is available before the transaction finishes committing. In the pessimistic concurrency control mode, all data to be read, written or modified must be locked at the beginning of the transaction.
In one embodiment, the high-speed distributed data caching apparatus 100 further includes:
the hash table acquisition unit is used for acquiring a locally stored distributed hash table, and acquiring the locally stored data of each Ignite node in the Ignite cluster according to the distributed hash table.
In this embodiment, a distributed hash table is used in the Ignite cluster to determine how data is distributed across the cluster, and the table is stored on the master node. If data is to be synchronized from the master node's cache space to the corresponding target Ignite nodes, the local storage data of each Ignite node in the cluster is first obtained according to the distributed hash table, so that the target Ignite nodes for the data to be synchronized can be determined by querying that table.
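A key-to-node lookup of the kind the master node performs can be illustrated as follows. This is a deliberate simplification of Ignite's real affinity function, shown only to convey the idea: keys hash into a fixed number of partitions, and each partition is assigned to one node, so finding a key's owner is two cheap table reads. All names here are illustrative.

```java
import java.util.Map;
import java.util.HashMap;

// Illustrative partition table: keys hash to a fixed number of partitions,
// and each partition is owned by exactly one node of the cluster.
public class PartitionTable {
    private final int partitions;
    private final Map<Integer, String> partitionToNode = new HashMap<>();

    public PartitionTable(int partitions, String[] nodes) {
        this.partitions = partitions;
        for (int p = 0; p < partitions; p++) {
            // Round-robin assignment of partitions to nodes (a simplification).
            partitionToNode.put(p, nodes[p % nodes.length]);
        }
    }

    // Map a key to its partition, then to the node owning that partition.
    public String ownerOf(String key) {
        int part = Math.floorMod(key.hashCode(), partitions);
        return partitionToNode.get(part);
    }
}
```

Because the mapping is a pure function of the key and the (shared) partition table, every node that holds a copy of the table resolves the same key to the same owner, which is what makes routing a synchronization or a query to the correct target node possible.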
A preparation message sending unit 120, configured to send the preparation message to other Ignite nodes in the Ignite cluster if the lock type is an optimistic lock.
In this embodiment, when the master node receives the preparation message and determines that its local lock type is an optimistic lock, it may forward the preparation message to the other Ignite nodes in the Ignite cluster. Forwarding the preparation message through the master node makes it possible to verify, during the preparation phase, whether the other Ignite nodes in the cluster are ready to synchronize data.
The acknowledgement message sending unit 130 is configured to send acknowledgement information corresponding to the preparation message to the application instance in which the Ignite program JAR package is embedded.
In this embodiment, to indicate that they have completed lock acquisition and are ready to synchronize data, the other Ignite nodes in the Ignite cluster send an acknowledgement message to the application instance in which the Ignite program JAR package is embedded. At this point, the preparation phase of the two-phase commit protocol is completed by the step in the acknowledgement message sending unit 130; the subsequent steps carry out the commit phase.
The commit message sending unit 140 is configured to send, upon detecting the cache data sent by the application instance in which the Ignite program JAR package has been embedded, a commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster.
In this embodiment, during the commit phase, the master node detects whether it has received the cache data sent by the application instance embedded with the Ignite program JAR package; if so, the master node sends a commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster.
In particular, the commit message sending unit 140 includes:
a first transaction request acquisition unit, configured to acquire an internal management transaction request of a transaction coordinator;
a first key value writing unit, used for writing the key value of the cache data locally;
a first transaction mapping unit, configured to write the key value into a local transaction map through the transaction coordinator;
a commit message receiving unit, configured to receive a commit message sent by the application instance in which the Ignite program JAR package has been embedded;
a first locking unit, used for locally acquiring the optimistic lock and carrying out Ignite node write locking according to the optimistic lock;
a first locking message sending unit, configured to send a locked confirmation message to the application instance in which the Ignite program JAR package is embedded;
and a first commit message distribution unit, used for sending the commit message to other Ignite nodes in the Ignite cluster.
In this embodiment, when the master node's local lock is an optimistic lock, an analogy can be drawn with computer-aided design (CAD): a designer working on one part of an overall design typically checks that part out from a central repository into a local workspace and, after a partial update, checks the result back into the central repository. Because each designer is responsible for only a part of the whole design, conflicts with updates to other parts are unlikely.
In contrast to the pessimistic concurrency model, the optimistic concurrency model delays the acquisition of locks, which makes it better suited to applications with less resource contention, such as the CAD example described above.
Performing the above steps in the optimistic-lock concurrency mode ensures that the cache data is written to the master node first, with consistency guaranteed.
The transaction coordinator is deployed in the master node and detects the internal management transaction request. Once this request is detected, the key value of the cache data is written locally first and then written into the master node's local transaction map. After the local transaction mapping is complete, the master node can be locked according to the optimistic lock; once locking is complete, the master node sends a locked confirmation message to the application instance embedded with the Ignite program JAR package, so that the cache data is synchronized to the master node first and then to the other Ignite nodes in the Ignite cluster.
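The commit-phase sequence on the master node under optimistic locking can be sketched as follows: the key value is staged in a local transaction map first, and the write lock is taken only when the commit arrives (late locking), at which point the staged entries become visible. This is a minimal sketch under stated assumptions; the class, field, and method names are invented for illustration and do not mirror Ignite internals.

```java
import java.util.Map;
import java.util.HashMap;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the optimistic commit-phase steps on the master node: stage
// the key/value without locking, then lock late, apply, and unlock.
public class OptimisticCommit {
    private final Map<String, String> store = new HashMap<>(); // node-local data
    private final Map<String, String> txMap = new HashMap<>(); // local transaction map
    private final ReentrantLock writeLock = new ReentrantLock();

    // Preparation-side step: stage the key/value; nothing is locked yet.
    public void stage(String key, String value) {
        txMap.put(key, value);
    }

    // Commit-side step: acquire the write lock only now (optimistic / late
    // locking), apply the staged entries, then release the lock.
    public boolean commit() {
        writeLock.lock();
        try {
            store.putAll(txMap);   // staged writes become visible atomically
            txMap.clear();
            return true;           // a "locked" confirmation would be sent here
        } finally {
            writeLock.unlock();
        }
    }

    public String get(String key) { return store.get(key); }
}
```

The design point is visibility: readers never observe a staged-but-uncommitted value, because the only path from the transaction map into the store runs through the locked `commit()` section.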
The transaction confirmation message sending unit 150 is configured to send a transaction confirmation message corresponding to the commit message to the application instance in which the Ignite program JAR package is embedded.
In this embodiment, after the commit message corresponding to the cache data is sent to the master node and synchronization of the cache data is completed on the master node, the master node sends a transaction confirmation message corresponding to the commit message to the application instance in which the Ignite program JAR package has been embedded. Similarly, after the commit message corresponding to the cache data is sent to the other Ignite nodes in the Ignite cluster, those nodes synchronize the cache data according to the commit message and, once caching is complete, send their own transaction confirmation messages corresponding to the commit message to the application instance embedded with the Ignite program JAR package.
In one embodiment, the high-speed distributed data caching apparatus 100 further includes:
the current effective node list acquisition unit is used for acquiring the node unique number of the corresponding Ignite node according to the received transaction confirmation message so as to form a current effective node list;
the current fault node list obtaining unit is used for obtaining an Ignite cluster total list, taking the Ignite cluster total list as a complete set, and obtaining the complement of the current valid node list to obtain the current fault node list.
In this embodiment, when the Ignite nodes other than the master node receive the commit message forwarded by the master node, each must feed back a transaction confirmation message to the master node to indicate whether it is currently a valid (i.e., non-failed) node. When some Ignite nodes send transaction confirmation messages and others do not, the node unique numbers of the Ignite nodes that did send them form the current valid node list. Taking the known Ignite cluster total list as the complete set, the complement of the current valid node list is then obtained, yielding the current fault node list.
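The fault-node computation described above is a plain set complement: start from the full cluster member list and remove every node that acknowledged the commit with a transaction confirmation message. A small sketch (node identifiers here are illustrative strings standing in for the node unique numbers):

```java
import java.util.Set;
import java.util.HashSet;

// The current fault node list is the complement of the valid-node list
// with respect to the full cluster member list.
public class FaultNodes {
    public static Set<String> complement(Set<String> clusterTotal, Set<String> validNodes) {
        Set<String> faulted = new HashSet<>(clusterTotal); // complete set
        faulted.removeAll(validNodes);                     // minus acknowledged nodes
        return faulted;
    }
}
```

Every node that failed to acknowledge, for whatever reason (crash, partition, timeout), lands in the result, which is exactly the conservative definition the unit above relies on.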
The data retrieval unit 160 is configured to, if an SQL query statement is detected, obtain the corresponding target data through a local query according to the SQL query statement.
In this embodiment, an SQL engine is embedded in the Ignite cluster; it supports standard SQL-99, supports indexes, and also supports a KV (key-value) mode. When any Ignite node uses the data, the corresponding data can be found by querying local memory, and the query can be expressed as an SQL query statement.
In one embodiment, the data retrieval unit 160 includes:
the query data judging unit is used for judging whether query data corresponding to the SQL query statement exists locally at the master node;
the first data acquisition unit is used for taking the query data corresponding to the SQL query statement as target data if it exists locally at the master node;
and the second data acquisition unit is used for acquiring the corresponding current target Ignite node according to the distributed hash table if no query data corresponding to the SQL query statement exists locally at the master node, and receiving the target data corresponding to the SQL query statement sent by the current target Ignite node.
In this embodiment, since the distributed hash table is stored on the master node, when the master node cannot find the corresponding target data for the SQL query statement locally, it can determine the Ignite node storing the target data by querying the distributed hash table, and that node sends the target data to the master node to complete the query. When the corresponding target data is found locally on the master node, the master node directly uses the query data as the target data.
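The read path above — answer locally when possible, otherwise route to the owning node — reduces to a short local-first lookup. In this hedged sketch, the function standing in for a remote fetch from the owning node is an assumption for illustration; in the apparatus it would be resolved through the distributed hash table and a network call.

```java
import java.util.Map;
import java.util.HashMap;
import java.util.function.Function;

// Local-first query routing: a local hit short-circuits; a miss is
// delegated to a fetcher that stands in for the owning remote node.
public class QueryRouter {
    private final Map<String, String> local = new HashMap<>();
    private final Function<String, String> remoteFetch; // stand-in for the target node

    public QueryRouter(Function<String, String> remoteFetch) {
        this.remoteFetch = remoteFetch;
    }

    public void putLocal(String key, String value) { local.put(key, value); }

    // Return local data when present; otherwise ask the owning node.
    public String query(String key) {
        String hit = local.get(key);
        return (hit != null) ? hit : remoteFetch.apply(key);
    }
}
```

The remote branch is taken only on a local miss, so the common case (data already cached on the querying node) never pays a network round trip.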
In one embodiment, the high-speed distributed data caching apparatus 100 further includes:
a pessimistic lock detection unit, configured to detect whether a confirmation preparation message has been generated if the lock type is a pessimistic lock;
and a cache data receiving unit, used for receiving cache data sent by the application instance embedded with the Ignite program JAR package if the confirmation preparation message has been generated.
In this embodiment, when the master node receives the preparation message and determines that the local lock type is pessimistic, the master node receives the cache data sent by the application instance in which the Ignite program JAR package is embedded after detecting the confirmation preparation message.
In an embodiment, the pessimistic lock detection unit comprises:
a second transaction request acquisition unit configured to acquire an internal management transaction request of the transaction coordinator;
a second locking unit, used for locally generating a pessimistic lock request and carrying out Ignite node write locking according to the pessimistic lock;
and a second locking message sending unit, used for generating a confirmation preparation message and sending the confirmation preparation message to the application instance embedded with the Ignite program JAR package.
To better understand pessimistic locks, consider the following example. A typical case for the pessimistic concurrency model is a transfer between two bank accounts, where the debit and credit of both accounts must be recorded correctly. Both accounts must be locked to ensure that the update completes and the balances are correct. In the pessimistic concurrency model, an application locks all data to be read, written, or modified at the beginning of the transaction.
In the pessimistic model, the pessimistic lock is held until the transaction completes, and the lock prevents other transactions from accessing the data, so that each Ignite node can only synchronize the cache data sent by the application instance in which the Ignite program JAR package is embedded.
The device synchronously updates the stored data to each node using a two-phase commit algorithm, supports a distributed lock mechanism, and uses pessimistic or optimistic locks when data is accessed to ensure the consistency of the distributed data storage.
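The bank-transfer example can be made concrete in a pessimistic style: both account locks are taken up front, before any balance is read or written, and held until the transfer completes. This is an illustrative sketch, not the patent's implementation; the fixed lock-acquisition order (by identity hash) is one conventional way to avoid deadlock when two transfers run in opposite directions.

```java
import java.util.concurrent.locks.ReentrantLock;

// Pessimistic bank transfer: lock everything first, then read/modify,
// and hold both locks until the transaction is done.
public class PessimisticTransfer {
    public static class Account {
        final ReentrantLock lock = new ReentrantLock();
        long balance;
        Account(long balance) { this.balance = balance; }
    }

    public static boolean transfer(Account from, Account to, long amount) {
        // Acquire locks in a fixed order to prevent deadlock between
        // concurrent A->B and B->A transfers.
        Account first  = System.identityHashCode(from) <= System.identityHashCode(to) ? from : to;
        Account second = (first == from) ? to : from;
        first.lock.lock();
        second.lock.lock();
        try {
            if (from.balance < amount) return false; // debit would overdraw
            from.balance -= amount;
            to.balance   += amount;
            return true;
        } finally {
            // Locks are held for the whole transaction, then released.
            second.lock.unlock();
            first.lock.unlock();
        }
    }
}
```

Because no balance is touched until both locks are held, no other transaction can observe the intermediate state where one account has been debited but the other not yet credited.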
The high-speed distributed data caching apparatus described above may be implemented in the form of a computer program that is executable on a computer device as shown in fig. 4.
Referring to fig. 4, fig. 4 is a schematic block diagram of a computer device according to an embodiment of the present invention. The computer device 500 is a server, which may be a stand-alone server or a server cluster formed by a plurality of servers.
With reference to FIG. 4, the computer device 500 includes a processor 502, memory, and a network interface 505, connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032, when executed, may cause the processor 502 to perform a high-speed distributed data caching method.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for the execution of a computer program 5032 in the non-volatile storage medium 503, which computer program 5032, when executed by the processor 502, causes the processor 502 to perform a high-speed distributed data caching method.
The network interface 505 is used for network communication, such as providing transmission of data information. It will be appreciated by those skilled in the art that the architecture shown in fig. 4 is merely a block diagram of part of the architecture relevant to the present inventive arrangements and does not limit the computer device 500 on which the present inventive arrangements may be implemented; a particular computer device 500 may include more or fewer components than shown, combine some of the components, or have a different arrangement of components.
The processor 502 is configured to execute a computer program 5032 stored in a memory, so as to implement the high-speed distributed data caching method disclosed in the embodiment of the present invention.
Those skilled in the art will appreciate that the embodiment of the computer device shown in fig. 4 does not limit the specific construction of the computer device; in other embodiments, the computer device may include more or fewer components than those shown, certain components may be combined, or the components may be arranged differently. For example, in some embodiments, the computer device may include only a memory and a processor; in such embodiments, the structure and function of the memory and the processor are consistent with the embodiment shown in fig. 4 and will not be described again.
It should be appreciated that in embodiments of the present invention, the processor 502 may be a central processing unit (Central Processing Unit, CPU); the processor 502 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
In another embodiment of the invention, a computer-readable storage medium is provided. The computer readable storage medium may be a non-volatile computer readable storage medium. The computer readable storage medium stores a computer program, wherein the computer program when executed by a processor implements the high-speed distributed data caching method disclosed in the embodiment of the invention.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus, device and unit described above may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein. Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the units is merely a logical function division, there may be another division manner in actual implementation, or units having the same function may be integrated into one unit, for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices, or elements, or may be an electrical, mechanical, or other form of connection.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment of the present invention.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units may be stored in a storage medium if implemented in the form of software functional units and sold or used as stand-alone products. Based on such understanding, the technical solution of the present invention is essentially or a part contributing to the prior art, or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and substitutions of equivalents may be made and equivalents will be apparent to those skilled in the art without departing from the scope of the invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.
Claims (7)
1. A method for high-speed distributed data caching, comprising:
if a preparation message sent by an application instance embedded with an Ignite program JAR package is detected, acquiring a local lock type; wherein the lock types include pessimistic locks and optimistic locks; the application instance embedded with the Ignite program JAR package is an Ignite node embedded with the Ignite program JAR package;
if the lock type is an optimistic lock, sending the preparation message to other Ignite nodes in an Ignite cluster;
sending confirmation information corresponding to the preparation message to the application instance embedded with the Ignite program JAR package;
if cache data sent by the application instance embedded with the Ignite program JAR package is detected, sending a commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster;
sending a transaction confirmation message corresponding to the commit message to the application instance embedded with the Ignite program JAR package; and
if the SQL query statement is detected, acquiring corresponding target data in the local query according to the SQL query statement;
the method further comprises the steps of:
acquiring a locally stored distributed hash table, and acquiring local storage data of each Ignite node in the Ignite cluster according to the distributed hash table;
the sending, if cache data sent by the application instance embedded with the Ignite program JAR package is detected, a commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster includes:
acquiring an internal management transaction request of a transaction coordinator;
writing the key value of the cache data locally;
writing the key value into a local transaction map through the transaction coordinator;
receiving a commit message sent by the application instance embedded with the Ignite program JAR package;
acquiring the optimistic lock locally, and performing Ignite node write locking according to the optimistic lock;
sending a locked confirmation message to the application instance embedded with the Ignite program JAR package;
sending the commit message to the other Ignite nodes in the Ignite cluster;
the obtaining the corresponding target data in the local query according to the SQL query statement comprises the following steps:
judging whether query data corresponding to the SQL query statement exists locally at the master node;
if the query data corresponding to the SQL query statement exists locally at the master node, taking the query data as target data;
and if no query data corresponding to the SQL query statement exists locally at the master node, acquiring a corresponding current target Ignite node according to the distributed hash table, and receiving target data corresponding to the SQL query statement sent by the current target Ignite node.
2. The method of high-speed distributed data caching according to claim 1, further comprising:
acquiring a node unique number of a corresponding Ignite node according to the received transaction confirmation message so as to form a current valid node list;
and acquiring an Ignite cluster total list, taking the Ignite cluster total list as a complete set, and acquiring a complement of the current effective node list to obtain a current fault node list.
3. The method of high-speed distributed data caching according to claim 1, further comprising:
if the lock type is pessimistic, detecting whether a confirmation preparation message has been generated;
if the confirmation preparation message is generated, receiving the cache data sent by the application instance embedded with the Ignite program JAR package.
4. The method of claim 3, wherein if the lock type is pessimistic, detecting whether an acknowledge ready message has been generated comprises:
acquiring an internal management transaction request of a transaction coordinator;
generating a pessimistic lock request locally, and performing Ignite node write-locking according to the pessimistic lock;
generating a confirmation preparation message, and sending the confirmation preparation message to the application instance embedded with the Ignite program JAR package.
5. A high-speed distributed data caching apparatus for implementing a high-speed distributed data caching method according to any one of claims 1-4, said apparatus comprising:
the preparation message detection unit is used for acquiring a local lock type if a preparation message sent by an application instance embedded with an Ignite program JAR package is detected; wherein the lock types include pessimistic locks and optimistic locks; the application instance embedded with the Ignite program JAR package is an Ignite node embedded with the Ignite program JAR package;
a preparation message sending unit, configured to send the preparation message to other Ignite nodes in an Ignite cluster if the lock type is an optimistic lock;
a confirmation message sending unit, configured to send confirmation information corresponding to the preparation message to the application instance embedded with the Ignite program JAR package;
a commit message sending unit, configured to send, if cache data sent by the application instance embedded with the Ignite program JAR package is detected, a commit message corresponding to the cache data to the other Ignite nodes in the Ignite cluster;
a transaction confirmation message sending unit, configured to send a transaction confirmation message corresponding to the commit message to the application instance embedded with the Ignite program JAR package; and
the data retrieval unit is used for obtaining corresponding target data in a local query according to the SQL query statement if the SQL query statement is detected;
the apparatus further comprises:
the hash table acquisition unit is used for acquiring a locally stored distributed hash table and acquiring local storage data of each Ignite node in the Ignite cluster according to the distributed hash table;
the commit message sending unit includes:
a first transaction request acquisition unit configured to acquire an internal management transaction request of a transaction coordinator;
the first key value writing unit is used for writing the key value of the cache data into the local;
a first transaction mapping unit, configured to write the key value into a local transaction map through a transaction coordinator;
a commit message receiving unit, configured to receive a commit message sent by the application instance embedded with the Ignite program JAR package;
the first locking unit is used for locally acquiring the optimistic lock and carrying out Ignite node write locking according to the optimistic lock;
a first locking message sending unit, configured to send a locked confirmation message to the application instance embedded with the Ignite program JAR package;
a first commit message distribution unit configured to send the commit message to other Ignite nodes in an Ignite cluster;
the data retrieval unit includes:
the query data judging unit is used for judging whether query data corresponding to the SQL query statement exists locally at the master node;
the first data acquisition unit is used for taking the query data corresponding to the SQL query statement as target data if it exists locally at the master node;
and the second data acquisition unit is used for acquiring the corresponding current target Ignite node according to the distributed hash table if no query data corresponding to the SQL query statement exists locally at the master node, and receiving the target data corresponding to the SQL query statement sent by the current target Ignite node.
6. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the high-speed distributed data caching method of any one of claims 1 to 4 when the computer program is executed by the processor.
7. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, causes the processor to perform the high-speed distributed data caching method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011196303.7A CN112328637B (en) | 2020-10-30 | 2020-10-30 | High-speed distributed data caching method, device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112328637A CN112328637A (en) | 2021-02-05 |
CN112328637B true CN112328637B (en) | 2023-11-14 |
Family
ID=74323734
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117114559B (en) * | 2023-10-24 | 2024-01-23 | 广州一链通互联网科技有限公司 | Weather factor optimization algorithm in dynamic programming of internal trade containerized route |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102281332A (en) * | 2011-08-31 | 2011-12-14 | 上海西本网络科技有限公司 | Distributed cache array and data updating method thereof |
CN108509433A (en) * | 2017-02-23 | 2018-09-07 | 北京京东金融科技控股有限公司 | The method, apparatus and electronic equipment of formation sequence number based on distributed system |
CN108776934A (en) * | 2018-05-15 | 2018-11-09 | 中国平安人寿保险股份有限公司 | Distributed data computational methods, device, computer equipment and readable storage medium storing program for executing |
CN111651464A (en) * | 2020-04-15 | 2020-09-11 | 北京皮尔布莱尼软件有限公司 | Data processing method and system and computing equipment |
CN111797107A (en) * | 2020-07-08 | 2020-10-20 | 贵州易鲸捷信息技术有限公司 | Database transaction concurrency control method for mixing optimistic lock and pessimistic lock |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||