CN114328564A - Method and device for realizing distributed lock - Google Patents

Method and device for realizing distributed lock

Info

Publication number
CN114328564A
CN114328564A (application CN202111679981.3A)
Authority
CN
China
Prior art keywords
lock
record
distributed
target
lock record
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111679981.3A
Other languages
Chinese (zh)
Inventor
朱峰
姚亚峰
刘亮
田健
易剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Postal Savings Bank of China Ltd
Original Assignee
Postal Savings Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Postal Savings Bank of China Ltd filed Critical Postal Savings Bank of China Ltd
Priority to CN202111679981.3A
Publication of CN114328564A
Legal status: Pending

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a method and a device for implementing a distributed lock, where the database comprises a plurality of sub-databases. The method comprises: when target data is called by a target service, inserting a lock record into a distributed lock record table to lock the target data, where the distributed lock record tables correspond one-to-one to the sub-databases and each is stored in the sub-database where its target data resides; and when the target service is completed, querying the lock record in the distributed lock record table and unlocking the target data. By locking and unlocking the target data of each sub-database through its own distributed lock record table, the method improves the security of the lock records through database sharding. Compared with traditional implementations, which are logically a central lock-information node, it avoids the excessive read-write pressure that a central node places on lock records under high concurrency, and thus improves concurrency capability.

Description

Method and device for realizing distributed lock
Technical Field
The present application relates to the field of computer software, and in particular, to a method and an apparatus for implementing a distributed lock, a computer-readable storage medium, and a processor.
Background
The principle of a distributed lock is as follows: in a distributed system, before accessing a shared resource, a process must check whether the resource is already locked. If a distributed lock exists, locking fails and the resource cannot be accessed; if no distributed lock exists, the process adds the distributed lock and accesses the resource.
Common distributed lock implementations fall into the following three categories:
1) Based on ZooKeeper. When a party locks a shared resource, an ephemeral sequential node is created under the directory of the node designated for that resource on the ZooKeeper instance used for the distributed lock. If the sequence number of the created node is the smallest under the directory, the lock is acquired successfully. Releasing the lock simply deletes the ephemeral node.
2) Based on a cache (Redis). Locking uses SETNX, and an EXPIRE command adds a timeout so the lock is released automatically when it expires; the value of the lock is a randomly generated UUID, and on release the lock is deleted with DELETE only when the UUID matches.
3) Based on a database. A distributed lock record table is created in the database, and acquisition and release of the distributed lock are implemented by controlling insert and delete operations on the lock record table.
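As an illustration of the third, database-based approach, here is a minimal sketch. It uses an in-process SQLite database for self-containment, and the table and column names are illustrative rather than taken from the patent: acquiring the lock is an INSERT whose primary-key constraint makes concurrent acquisition attempts mutually exclusive, and releasing it is a DELETE.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lock_record (resource_id TEXT PRIMARY KEY, owner TEXT)")

def acquire(resource_id: str, owner: str) -> bool:
    try:
        conn.execute("INSERT INTO lock_record (resource_id, owner) VALUES (?, ?)",
                     (resource_id, owner))
        conn.commit()
        return True           # insert succeeded: lock acquired
    except sqlite3.IntegrityError:
        return False          # primary-key conflict: lock held by someone else

def release(resource_id: str, owner: str) -> None:
    # Delete only our own record so another holder's lock is never removed.
    conn.execute("DELETE FROM lock_record WHERE resource_id = ? AND owner = ?",
                 (resource_id, owner))
    conn.commit()
```

The uniqueness guarantee comes entirely from the database, which is why this style needs no extra middleware.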
However, existing distributed lock implementations generally suffer from the following problems:
1) They are not lightweight. For example, distributed locks based on Redis or ZooKeeper require introducing an additional open-source component.
2) Locks can be lost. For example, in a distributed lock based on a Redis cluster, if the master node is locked successfully but goes down before synchronizing with its slave nodes, the lock is lost.
3) Transaction isolation can be broken. In common implementations, timeout handling is not associated with the service state; if a lock record is deleted while its service has not yet completed, the isolation of the transaction may be damaged.
4) No lock levels. Existing implementations support only one lock level, the distributed exclusive lock, which limits the concurrency of the system in certain scenarios.
5) Poor behavior under high concurrency. Distributed locks based on a traditional database, Redis, ZooKeeper, and the like are logically a central lock-information node, which easily becomes a bottleneck under high concurrency and limits the concurrency of the system.
The above information disclosed in this background section is only for enhancement of understanding of the background of the technology described herein, and therefore it may include information that does not form prior art already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
The present application mainly aims to provide a method and an apparatus for implementing a distributed lock, a computer-readable storage medium, and a processor, so as to solve the problem of poor concurrency capability of a distributed lock implementation manner in the prior art.
According to an aspect of the embodiments of the present invention, there is provided a method for implementing a distributed lock, where a database includes a plurality of sub-libraries, the method includes: under the condition that target data are called by a target service, locking records are inserted into a distributed lock record table to lock the target data, the distributed lock record table is stored in a sub-library where the target data are located, and the distributed lock record table is in one-to-one correspondence with the sub-libraries; and under the condition that the target service is completed, inquiring the lock record in the distributed lock record table, and unlocking the target data.
Optionally, the lock record includes a primary key and a lock level, where the primary key is a unique identifier of the target data and the lock records of the distributed lock record table correspond one-to-one to the target data. Inserting a lock record into the distributed lock record table to lock the target data includes: when the primary key of a second lock record is the same as the primary key of any first lock record, locking the target data according to the lock level, where the first lock records are the already-inserted lock records and the second lock record is the lock record to be inserted; and when the primary key of the second lock record differs from the primary keys of all the first lock records, or no first lock record exists in the distributed lock record table, inserting the lock record to be inserted into the distributed lock record table and replying that locking succeeded.
Optionally, the lock level includes a distributed exclusive lock and a distributed shared lock, and when the primary key of the second lock record is the same as the primary key of any first lock record, locking the target data according to the lock level includes: replying that locking failed when at least one of the lock level of the second lock record and the lock level of a third lock record is the distributed exclusive lock, where the third lock record is the first lock record whose primary key is the same as that of the second lock record; and updating the third lock record and replying that locking succeeded when the lock level of the second lock record and the lock level of the third lock record are both the distributed shared lock.
Optionally, the lock record includes a lock level and a sharing count, the lock level includes a distributed exclusive lock and a distributed shared lock, and the sharing count is the number of target services currently calling the target data. When the target service is completed, querying the lock record in the distributed lock record table to unlock the target data includes: replying that unlocking succeeded when no target lock record exists, where the target lock record is the lock record corresponding to the target data; deleting the target lock record and replying that unlocking succeeded when the lock level of the target lock record is the distributed exclusive lock; and processing the target lock record according to the sharing count and replying that unlocking succeeded when the lock level of the target lock record is the distributed shared lock.
Optionally, processing the target lock record according to the sharing count and replying that unlocking succeeded includes: decrementing the sharing count by 1 when the sharing count of the target lock record is greater than 1; and deleting the target lock record and replying that unlocking succeeded when the sharing count of the target lock record is equal to 1.
Optionally, the lock record includes a lock level and a lock value, the lock level includes a distributed exclusive lock and a distributed shared lock, and the lock value characterizes the service information of the target service. Before the target service calls the target data, the method further includes: replying that no distributed exclusive lock added by another service exists when no target lock record exists in the distributed lock record table or the lock level of the target lock record is the distributed shared lock, where the target lock record is the lock record corresponding to the target data; replying that no distributed exclusive lock added by another service exists when the lock level of the target lock record is the distributed exclusive lock and the lock value of the target lock record is the same as the lock value of the lock record to be inserted; and replying that a distributed exclusive lock added by another service exists when the lock level of the target lock record is the distributed exclusive lock and the lock value of the target lock record differs from the lock value of the lock record to be inserted.
Optionally, the lock record includes an expiration time, a deferral count, and a lock level; the expiration time is the time by which the target service is expected to complete, the deferral count is the number of times the target service has been deferred, and the lock level includes a distributed exclusive lock and a distributed shared lock. After the target data is locked by inserting the lock record into the distributed lock record table, the method further includes: deleting an overdue lock record when its lock level is the distributed shared lock, where an overdue lock record is a lock record whose expiration time has been reached; deferring the target service corresponding to the overdue lock record when the lock level of the overdue lock record is the distributed exclusive lock and its deferral count is smaller than the maximum deferral count; and suspending the overdue lock record when the lock level of the overdue lock record is the distributed exclusive lock and its deferral count is greater than or equal to the maximum deferral count.
Optionally, after suspending the overdue lock record, the method further comprises: when the target service corresponding to a suspended lock record is completed, deleting the suspended lock record and replying that unlocking succeeded, where a suspended lock record is a suspended overdue lock record; and when the target service corresponding to the suspended lock record is not completed, replying that deletion failed.
Optionally, after suspending the overdue lock record, the method further comprises: forcibly deleting a pending lock record, wherein the pending lock record is the pending overdue lock record.
According to another aspect of the embodiments of the present invention, there is also provided an apparatus for implementing a distributed lock, where a database includes a plurality of sub-libraries. The apparatus includes: a first processing unit, configured to insert a lock record into a distributed lock record table to lock target data when the target data is called by a target service, where the distributed lock record table is stored in the sub-library where the target data is located and the distributed lock record tables correspond one-to-one to the sub-libraries; and a second processing unit, configured to query the lock record in the distributed lock record table and unlock the target data when the target service is completed.
According to another aspect of embodiments of the present invention, there is also provided a computer-readable storage medium including a stored program, wherein the program performs any one of the methods.
According to another aspect of the embodiments of the present invention, there is also provided a processor for executing a program, where the program executes to perform any one of the methods.
In the method for implementing a distributed lock according to the embodiment of the present invention, first, when a target service calls target data, a lock record is inserted into a distributed lock record table to lock the target data, where the distributed lock record table is stored in the sub-library where the target data is located and the distributed lock record tables correspond one-to-one to the sub-libraries; then, when the target service is completed, the lock record in the distributed lock record table is queried to unlock the target data. Because the distributed lock record tables are stored one-to-one in the sub-libraries and are used to lock and unlock the target data of each sub-library, the security of the lock records is improved by database sharding. Compared with traditional implementations, which are logically a central lock-information node, this avoids the system bottleneck caused by excessive read-write pressure on the lock records of a central node under high concurrency, and thus solves the poor concurrency of prior-art distributed lock implementations.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. In the drawings:
FIG. 1 is a flow chart illustrating a method for implementing a distributed lock according to an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a distributed locking flow in an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a distributed lock unlocking flow in an embodiment of the present application;
FIG. 4 is a diagram illustrating a distributed lock timeout process in one embodiment of the present application;
FIG. 5 illustrates an apparatus diagram for implementing a distributed lock of an exemplary embodiment of the present application;
fig. 6 shows a schematic diagram of an overall structure of a distributed lock in an embodiment of the present application.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description, claims, and drawings of this application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It will be understood that when an element such as a layer, film, region, or substrate is referred to as being "on" another element, it can be directly on the other element or intervening elements may also be present. Also, in the specification and claims, when an element is described as being "connected" to another element, the element may be "directly connected" to the other element or "connected" to the other element through a third element.
For convenience of description, some terms or expressions referred to in the embodiments of the present application are explained below:
distributed exclusive locks: when a distributed exclusive lock exists, no other transactions for the same locked object are allowed to occur. A distributed exclusive lock is mutually exclusive with a distributed shared lock.
Distributed shared lock: the distributed shared lock here is not a read-only lock. When a distributed shared lock exists, other transactions that use a distributed shared lock on the same locked object are allowed to proceed simultaneously. The distributed shared lock is mutually exclusive with the distributed exclusive lock.
Re-entry: in the same transaction, adding distributed exclusive locks to the same locked object multiple times is called reentry.
Sharing: in multiple transactions, adding a distributed shared lock to the same locked object at the same time is called sharing.
As mentioned in the background, prior-art distributed lock implementations have poor concurrency capability. To solve the above problems, exemplary embodiments of the present application provide a method, an apparatus, a computer-readable storage medium, and a processor for implementing a distributed lock.
According to an embodiment of the application, a method for implementing a distributed lock is provided.
Fig. 1 is a flowchart of a method for implementing a distributed lock according to an embodiment of the present application. As shown in fig. 1, the method comprises the steps of:
step S101, under the condition that target data are called by a target service, locking records are inserted into a distributed lock record table to lock the target data, the distributed lock record table is stored in a sub-library where the target data are located, and the distributed lock record table is in one-to-one correspondence with the sub-libraries;
and step S102, under the condition that the target service is completed, inquiring the lock record in the distributed lock record table, and unlocking the target data.
In this method for implementing a distributed lock, first, when target data is called by a target service, a lock record is inserted into a distributed lock record table to lock the target data, where the distributed lock record tables correspond one-to-one to the sub-libraries and each is stored in the sub-library where its target data is located; then, when the target service is completed, the lock record in the distributed lock record table is queried to unlock the target data. Because the distributed lock record tables are stored one-to-one in the sub-libraries and are used to lock and unlock the target data of each sub-library, the security of the lock records is improved by database sharding; compared with traditional implementations, which are logically a central lock-information node, this avoids the system bottleneck caused by excessive read-write pressure on lock records under high concurrency, and solves the poor concurrency of prior-art distributed lock implementations.
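The one-table-per-sub-library layout can be sketched as follows. The shard-routing rule here, hashing the data key modulo the shard count, is an illustrative assumption; the patent only requires that the lock record table live in the same sub-library as its target data.

```python
import hashlib

NUM_SHARDS = 4  # illustrative shard count

def shard_of(data_key: str) -> int:
    # Route a data key to its sub-library deterministically.
    digest = hashlib.md5(data_key.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def lock_table_for(data_key: str) -> str:
    # The lock record for a piece of data lives in the SAME sub-library,
    # so lock traffic is spread across shards instead of one central node.
    return f"shard_{shard_of(data_key)}.distributed_lock_record"
```

Because locking a piece of data only touches its own shard, no single node carries the read-write load of all lock records.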
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
In a specific embodiment of the present application, the distributed lock is provided as an SDK and deployed together with the target service application. This is a lightweight implementation that introduces no additional open-source component. In addition, high availability of the distributed lock can be guaranteed by selecting a suitable database system, and because the lock records are written to the database, there is no lock-loss problem.
Note that the lock record includes a plurality of fields, and the meaning of each field is shown in table 1.
TABLE 1
Field | Meaning
lock primary key | unique identifier of the locked target data
lock level | distributed exclusive lock or distributed shared lock
lock value | characterizes the service information of the service holding the lock
sharing count | number of target services currently calling the target data
expiration time | time by which the target service is expected to complete
deferral count | number of times the target service has been deferred
In an embodiment of the present application, as shown in Table 1, the lock record includes a primary key and a lock level, where the primary key is a unique identifier of the target data and the lock records of the distributed lock record table correspond one-to-one to the target data. As shown in Fig. 2, inserting a lock record into the distributed lock record table to lock the target data includes: locking the target data according to the lock level when the primary key of a second lock record is the same as the primary key of any first lock record, where the first lock records are the already-inserted lock records and the second lock record is the lock record to be inserted; and inserting the lock record to be inserted into the distributed lock record table and replying that locking succeeded when the primary key of the second lock record differs from the primary keys of all the first lock records or no first lock record exists in the distributed lock record table.
That is, when a target service calls target data: if the primary key of the second lock record is the same as the primary key of any first lock record, the target data is locked according to the lock level; if the primary key of the second lock record differs from the primary keys of all the first lock records, or no first lock record exists in the distributed lock record table, the target data is locked directly and successfully.
In another embodiment of the present application, as shown in Fig. 2, the lock levels include a distributed exclusive lock and a distributed shared lock, and when the primary key of the second lock record is the same as the primary key of any first lock record, locking the target data according to the lock level includes: replying that locking failed when at least one of the lock level of the second lock record and the lock level of a third lock record is the distributed exclusive lock, where the third lock record is the first lock record whose primary key is the same as that of the second lock record; and updating the third lock record and replying that locking succeeded when the lock level of the second lock record and the lock level of the third lock record are both the distributed shared lock. Replying that locking failed whenever at least one of the second and third lock records is a distributed exclusive lock guarantees completion of the target service. By providing different lock levels, an exclusive lock is used only when a service calling the target data requires exclusivity, and a shared lock is used when no service does, which further improves the concurrency of the system.
It should be noted that when the lock primary key and the lock value are both the same, locking can succeed as a reentry of the exclusive lock. When locking fails, it is retried until the maximum retry count is exceeded, after which locking failure is replied; for example, with a maximum retry count of 3, locking failure is replied after 3 failed retries.
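The locking decision for the two lock levels, including reentry, can be sketched as follows. This is a hedged, in-memory illustration: the dict stands in for the distributed lock record table, and all names are assumptions rather than the patent's own identifiers.

```python
EXCLUSIVE, SHARED = "X", "S"
table = {}   # primary key -> {"level": ..., "value": ..., "count": ...}

def lock(pk: str, level: str, value: str) -> bool:
    rec = table.get(pk)
    if rec is None:
        # no existing record with this primary key: insert and succeed
        table[pk] = {"level": level, "value": value, "count": 1}
        return True
    if rec["level"] == SHARED and level == SHARED:
        rec["count"] += 1            # sharing: update the existing record
        return True
    if rec["level"] == EXCLUSIVE and level == EXCLUSIVE and rec["value"] == value:
        return True                  # reentry: same transaction, same lock value
    return False                     # at least one side is exclusive: fail
```

A caller that receives False would retry up to the maximum retry count before replying that locking failed.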
In another embodiment of the present application, as shown in Table 1, the lock record includes a lock level and a sharing count, the lock level includes a distributed exclusive lock and a distributed shared lock, and the sharing count is the number of target services currently calling the target data. As shown in Fig. 3, when the target service is completed, querying the lock record in the distributed lock record table to unlock the target data includes: replying that unlocking succeeded when no target lock record exists, where the target lock record is the lock record corresponding to the target data; deleting the target lock record and replying that unlocking succeeded when the lock level of the target lock record is the distributed exclusive lock; and processing the target lock record according to the sharing count and replying that unlocking succeeded when the lock level of the target lock record is the distributed shared lock. After the target service is finished, the target data is released: unlocking succeeds directly when no target lock record exists, or after deleting an exclusive target lock record; if the lock level of the target lock record is the distributed shared lock, the sharing count is confirmed and the target lock record is processed according to it before replying that unlocking succeeded. This prevents confusion and conflicts during the release of the target data and ensures its normal release.
It should be noted that when unlocking fails, it is retried until the maximum retry count is exceeded, after which unlocking failure is replied; for example, with a maximum retry count of 3, unlocking failure is replied after 3 failed retries.
In another embodiment of the present application, as shown in Fig. 3, processing the target lock record according to the sharing count and replying that unlocking succeeded includes: decrementing the sharing count by 1 when the sharing count of the target lock record is greater than 1; and deleting the target lock record and replying that unlocking succeeded when the sharing count of the target lock record is equal to 1. Deleting the target lock record when its sharing count equals 1 means the last distributed shared lock has been unlocked, i.e., all distributed shared locks have been released, which further ensures the normal release of the target data.
In another specific embodiment of the present application, as shown in Fig. 3, when the sharing count of the target lock record is greater than 1, after subtracting 1 from the sharing count to obtain an updated sharing count, it is necessary to verify that the sharing count now stored in the record equals the updated sharing count; if it does, unlocking success is replied, and if it does not, the lock record was changed concurrently and unlocking failure is replied.
A change of the lock holder is identified by the lock value, and a change in the number of holders by the sharing count. If either of the two changes during the operation, the lock record was modified concurrently, so the current update or delete operation must not succeed and is retried by the retry mechanism; if the retry count is set to zero, or the current retry count reaches the maximum retry count, unlocking failure is replied.
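The unlocking flow, with the decrement guarded against concurrent modification, can be sketched as follows. The table and column names are illustrative, and the conditional UPDATE (apply the decrement only if the sharing count still has the value that was read) is one plausible realization of the consistency check described above, not the patent's definitive implementation.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE lock_record
                (pk TEXT PRIMARY KEY, level TEXT, share_count INTEGER)""")

def unlock(pk: str) -> bool:
    row = conn.execute(
        "SELECT level, share_count FROM lock_record WHERE pk = ?", (pk,)).fetchone()
    if row is None:
        return True                       # no lock record: already unlocked
    level, count = row
    if level == "X" or count == 1:        # exclusive, or last shared holder
        conn.execute("DELETE FROM lock_record WHERE pk = ?", (pk,))
        conn.commit()
        return True
    # shared with count > 1: decrement, guarded by the count we just read
    cur = conn.execute(
        "UPDATE lock_record SET share_count = share_count - 1 "
        "WHERE pk = ? AND share_count = ?", (pk, count))
    conn.commit()
    return cur.rowcount == 1              # 0 rows: concurrent change, caller retries
```

A False return feeds the retry mechanism; unlocking failure is replied only after the maximum retry count is reached.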
In another embodiment of the present application, as shown in Table 1, the lock record includes a lock level and a lock value, the lock level includes a distributed exclusive lock and a distributed shared lock, and the lock value characterizes the service information of the target service. Before the target service calls the target data, the method further includes: replying that no distributed exclusive lock added by another service exists when no target lock record exists in the distributed lock record table or the lock level of the target lock record is the distributed shared lock, where the target lock record is the lock record corresponding to the target data; replying that no distributed exclusive lock added by another service exists when the lock level of the target lock record is the distributed exclusive lock and the lock value of the target lock record is the same as the lock value of the lock record to be inserted; and replying that a distributed exclusive lock added by another service exists when the lock level of the target lock record is the distributed exclusive lock and the lock value of the target lock record differs from the lock value of the lock record to be inserted. When the target service calls the target data but does not itself need to add a distributed exclusive lock, checking whether another service has added a distributed exclusive lock in this way prevents the target data from being modified while it is locked.
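The pre-call check reduces to a small predicate, sketched below with illustrative names: given the target lock record (or None) and the caller's own lock value, decide whether another service holds a distributed exclusive lock.

```python
def other_exclusive_lock_exists(rec, my_lock_value: str) -> bool:
    if rec is None or rec["level"] == "S":
        return False                      # no record, or only shared locks
    # Exclusive record: it counts as "another service's" lock only when its
    # lock value differs from ours (an equal value means our own reentry).
    return rec["value"] != my_lock_value
```

Only when this returns False may the caller safely proceed to read or modify the target data without taking an exclusive lock itself.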
In another embodiment of the present application, as shown in table 1, the lock record includes an expiration time, a number of deferrals and a lock level, the expiration time is the time when the target service is expected to be completed, the number of deferrals is the number of times the target service has been deferred, the lock level includes a distributed exclusive lock and a distributed shared lock, and after the target data is locked by inserting the lock record into the distributed lock record table, as shown in fig. 4, the method further includes: deleting an overdue lock record in a case where the lock level of the overdue lock record is the distributed shared lock, wherein the overdue lock record is a lock record whose expiration time has been reached by the current time; deferring the target service corresponding to the overdue lock record in a case where the lock level of the overdue lock record is the distributed exclusive lock and the number of deferrals of the overdue lock record is smaller than a maximum number of deferrals; and suspending the overdue lock record in a case where the lock level of the overdue lock record is the distributed exclusive lock and the number of deferrals of the overdue lock record is greater than or equal to the maximum number of deferrals.
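The dispatch among the three outcomes can be sketched as follows; the field names of the `record` mapping and the action labels are illustrative assumptions:

```python
def handle_expired_record(record, max_deferrals=3):
    """Decide how the timeout detection flow (fig. 4) treats an overdue
    lock record: shared locks are deleted, exclusive locks are deferred
    until the maximum number of deferrals is reached, then suspended."""
    if record["lock_level"] == "SHARED":
        return "delete"
    if record["deferrals"] < max_deferrals:
        return "defer"
    return "suspend"
```

The asymmetry reflects the embodiment's intent: an expired shared lock can be removed safely, while an exclusive lock protects a service that may still be running and therefore gets extra time before being escalated.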
It should be noted that, for a deferred exclusive lock record, the service state is checked through the service-state callback confirmation interface provided uniformly by the distributed lock (implemented by the caller, which judges the current service state according to its own service logic and returns whether the service is completed). If the service is completed, the lock record is released directly; if the service is not completed, it is further determined whether to defer or suspend the lock record.
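The caller-implemented confirmation interface can be sketched as below; the class and method names are assumptions, not taken from the embodiment:

```python
class ServiceStateCallback:
    """Caller-implemented confirmation interface: the caller judges from
    its own service logic whether the service behind a lock record is done."""

    def is_completed(self, lock_record):
        raise NotImplementedError


def process_deferred_exclusive(record, callback, release_fn, keep_fn):
    """Before deferring or suspending an overdue exclusive lock record,
    the timeout flow consults the callback; a completed service has its
    lock record released directly."""
    if callback.is_completed(record):
        release_fn(record)   # service done: release the lock record
        return "released"
    keep_fn(record)          # service still running: defer or suspend
    return "kept"
```

Keeping the completion judgment on the caller's side is what prevents the timeout service from deleting a lock record whose transaction is still in flight.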
In another specific embodiment of the present application, an independent timing detection service is deployed to continuously detect and process timed-out records among the lock records, which prevents shared resources from being locked for a long time due to abnormal factors; because the timeout detection service is deployed independently, it does not affect the performance of the application itself. In addition, this embodiment provides a callback confirmation interface, as shown in fig. 4: when a timed-out lock record is detected, the service state is checked through the interface, and the handling of the lock record is determined according to the service state, preventing transaction isolation from being broken by deleting a lock record prematurely.
In another embodiment of the present application, after suspending the overdue lock record, the method further includes: deleting the suspended lock record and replying an unlocking success in a case where the target service corresponding to the suspended lock record is completed, wherein the suspended lock record is the suspended overdue lock record; and replying a deletion failure in a case where the target service corresponding to the suspended lock record is not completed. By deleting the suspended lock record and handling the exception information, the target service can be completed smoothly.
In yet another embodiment of the present application, after suspending the overdue lock record, the method further includes: forcibly deleting the suspended lock record, wherein the suspended lock record is the suspended overdue lock record. Forcibly deleting suspended lock records that failed to be deleted further ensures that the target service is fully completed.
The embodiment of the present application further provides an implementation apparatus of a distributed lock, and it should be noted that the implementation apparatus of a distributed lock according to the embodiment of the present application may be used to execute the implementation method for a distributed lock provided in the embodiment of the present application. The following describes an implementation apparatus of a distributed lock provided in an embodiment of the present application.
Fig. 5 is a schematic diagram of an implementation apparatus of a distributed lock according to an embodiment of the present application. As shown in fig. 5, the database includes a plurality of sub-libraries, and the apparatus includes:
a first processing unit 10, configured to insert a lock record into a distributed lock record table to lock target data when the target service calls the target data, where the distributed lock record table is stored in a sub-library where the target data is located, and the distributed lock record table corresponds to the sub-libraries one to one;
and a second processing unit 20, configured to, when the target service is completed, query the lock record in the distributed lock record table to unlock the target data.
In the device for implementing the distributed lock, when a target service calls target data, a first processing unit inserts a lock record into a distributed lock record table to lock the target data, wherein the distributed lock record table is stored in a sub-library where the target data is located, and the distributed lock record table corresponds to the sub-libraries one by one; and then, the second processing unit inquires the lock record in the distributed lock record table under the condition that the target service is completed, and unlocks the target data. The device adopts the distributed lock record table to lock and unlock the target data of the sub-database, improves the safety of the lock record through the sub-database, and compared with the traditional realization mode that the database is logically a central lock information node, avoids the problem that the central lock information node causes the overlarge read-write pressure of the lock record to become a system bottleneck under a high concurrency scene, and solves the problem that the concurrency capability of the distributed lock realization mode in the prior art is poor.
In a specific embodiment of the present application, in the distributed lock implementation method of the apparatus, the distributed lock implementation is depended on in the form of an SDK and deployed along with the target service application, which is a lightweight implementation manner that requires no additional open-source components. In addition, the high availability of the distributed lock can be guaranteed by selecting an appropriate database system. Furthermore, because the lock records are written into the database, there is no problem of lock record loss.
In an embodiment of the present application, as shown in table 1, the lock record includes a primary key and a lock level, the primary key is a unique identifier of the target data, the lock records of the distributed lock record table correspond to the target data one to one, and the first processing unit includes a first processing subunit and a second processing subunit, where the first processing subunit is configured to lock the target data according to the lock level when the primary key of a second lock record is the same as the primary key of any one first lock record, the first lock record being an inserted lock record and the second lock record being the lock record to be inserted; and the second processing subunit is configured to insert the lock record to be inserted into the distributed lock record table and reply a locking success when the primary key of the second lock record is different from the primary keys of all the first lock records or no first lock record exists in the distributed lock record table.
Because the lock records of the distributed lock record table correspond to the target data one to one, when a target service calls the target data, the first processing subunit locks the target data according to the lock level when the primary key of the second lock record is the same as the primary key of any one first lock record, and the second processing subunit locks the target data directly and successfully when the primary key of the second lock record differs from that of every first lock record or no first lock record exists in the distributed lock record table.
In another embodiment of the present application, the lock levels include a distributed exclusive lock and a distributed shared lock, and the first processing subunit includes a first replying module and a second replying module, wherein the first replying module is configured to reply a locking failure in a case where at least one of the lock level of the second lock record and the lock level of a third lock record is the distributed exclusive lock, the third lock record being the first lock record having the same primary key as the second lock record; and the second replying module is configured to update the third lock record and reply a locking success in a case where the lock level of the second lock record and the lock level of the third lock record are both the distributed shared lock. If at least one of the second lock record and the third lock record is the distributed exclusive lock, a locking failure is replied, which ensures completion of the target service. By setting different lock levels, the exclusive lock need not be adopted whenever any service calls the target data; services without exclusivity requirements can adopt the shared lock, further improving the concurrency capability of the system.
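The acquisition path of the first processing unit can be sketched as an insert guarded by the primary-key constraint; the schema, level encodings, and the use of SQLite are illustrative assumptions:

```python
import sqlite3


def try_lock(conn, resource_id, lock_level, lock_value):
    """Attempt to acquire a lock by inserting a record into the lock table.

    The primary key on resource_id makes the insert atomic: if no record
    exists, locking succeeds directly; if one exists, the lock levels
    decide (shared + shared succeeds, anything exclusive fails).
    """
    try:
        conn.execute(
            "INSERT INTO distributed_lock "
            "(resource_id, lock_level, lock_value, share_count) "
            "VALUES (?, ?, ?, 1)",
            (resource_id, lock_level, lock_value),
        )
        conn.commit()
        return True  # no first lock record existed: locking succeeds
    except sqlite3.IntegrityError:
        pass  # a record with the same primary key already exists
    row = conn.execute(
        "SELECT lock_level FROM distributed_lock WHERE resource_id = ?",
        (resource_id,),
    ).fetchone()
    if row and row[0] == "SHARED" and lock_level == "SHARED":
        # both locks are shared: update the record and reply success
        conn.execute(
            "UPDATE distributed_lock SET share_count = share_count + 1 "
            "WHERE resource_id = ?",
            (resource_id,),
        )
        conn.commit()
        return True
    return False  # at least one side is exclusive: locking fails
```

Relying on the database's own primary-key constraint is the design choice that lets each sub-library arbitrate its lock records without a central lock node.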
In another embodiment of the present application, as shown in table 1, the lock record includes a lock level and a sharing count, the lock level includes a distributed exclusive lock and a distributed shared lock, the sharing count is the number of target services currently calling the target data, and the second processing unit includes a second replying subunit, a third replying subunit and a fourth replying subunit, where the second replying subunit is configured to reply an unlocking success if the target lock record does not exist, the target lock record being the lock record corresponding to the target data; the third replying subunit is configured to delete the target lock record and reply an unlocking success if the lock level of the target lock record is the distributed exclusive lock; and the fourth replying subunit is configured to, when the lock level of the target lock record is the distributed shared lock, process the target lock record according to the sharing count and reply an unlocking success. After the target service is finished, the target data is released: unlocking succeeds directly when the target lock record does not exist or when the lock level of the target lock record is the distributed exclusive lock; if the lock level of the target lock record is the distributed shared lock, the sharing count is confirmed, the target lock record is processed according to the sharing count, and an unlocking success is replied. This prevents confusion and conflicts during the release of the target data and ensures its normal release.
In yet another embodiment of the present application, the fourth replying subunit includes a calculating module and a third replying module, wherein the calculating module is configured to subtract 1 from the sharing count of the target lock record when the sharing count is greater than 1; and the third replying module is configured to delete the target lock record and reply an unlocking success if the sharing count of the target lock record is equal to 1. Successfully unlocking the last distributed shared lock means that all distributed shared locks have been released, further ensuring the normal release of the target data.
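The full release path of the second processing unit can be sketched as a single dispatch; the schema and level encodings are illustrative assumptions:

```python
import sqlite3


def try_unlock(conn, resource_id):
    """Release flow: no record means already unlocked; an exclusive lock
    is deleted; a shared lock has its sharing count decremented, with the
    record deleted when the last sharer leaves."""
    row = conn.execute(
        "SELECT lock_level, share_count FROM distributed_lock "
        "WHERE resource_id = ?",
        (resource_id,),
    ).fetchone()
    if row is None:
        return True  # no target lock record: reply unlocking success
    level, count = row
    if level == "EXCLUSIVE" or count <= 1:
        # exclusive lock, or the last shared holder: delete the record
        conn.execute(
            "DELETE FROM distributed_lock WHERE resource_id = ?",
            (resource_id,),
        )
    else:
        # more sharers remain: just decrement the sharing count
        conn.execute(
            "UPDATE distributed_lock SET share_count = share_count - 1 "
            "WHERE resource_id = ?",
            (resource_id,),
        )
    conn.commit()
    return True
```

A production version would combine this with the compare-and-swap guard on the sharing count described in the method embodiments, so that concurrent unlockers cannot both observe the same count.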
In another specific embodiment of the present application, after the calculation, the calculating module needs to determine whether the updated sharing count is equal to the expected sharing count; if the updated sharing count is equal to the expected sharing count, the target lock record is deleted and an unlocking success is replied; if the updated sharing count is not equal to the expected sharing count, an unlocking failure is replied.
It should be noted that a change of the locking party is identified by the lock value, and a change in the number of locking parties is identified by the sharing count; if either of the two has changed, the lock record was modified during the operation, and the current update or delete operation should not succeed.
In another embodiment of the present application, as shown in table 1, the lock record includes a lock level and a lock value, the lock level includes a distributed exclusive lock and a distributed shared lock, and the lock value is used to characterize service information of the target service, the apparatus further includes a first replying unit, a second replying unit, and a third replying unit, where the first replying unit is configured to reply, before the target service invokes the target data, that no distributed exclusive lock added by another service exists in a case where no target lock record exists in the distributed lock record table or the lock level of the target lock record is the distributed shared lock, the target lock record being the lock record corresponding to the target data; the second replying unit is configured to reply that no distributed exclusive lock added by another service exists if the lock level of the target lock record is the distributed exclusive lock and the lock value of the target lock record is the same as the lock value of the lock record to be inserted; and the third replying unit is configured to reply that a distributed exclusive lock added by another service exists in a case where the lock level of the target lock record is the distributed exclusive lock and the lock value of the target lock record is different from the lock value of the lock record to be inserted. When the target service calls the target data but does not itself need to add a distributed exclusive lock, checking whether another service has added a distributed exclusive lock to the target lock record prevents the target data from being modified while it is locked.
In another embodiment of the present application, as shown in table 1, the lock record includes an expiration time, a delay number, and a lock level, the expiration time is a time when the target service is expected to be completed, the delay number is a number of times when the target service has been delayed, and the lock level includes a distributed exclusive lock and a distributed shared lock, the apparatus further includes a deleting unit, a delay unit, and a suspending unit, where the deleting unit is configured to delete the expiration lock record when the lock level of the expiration lock record is the distributed shared lock after a lock record is inserted in the distributed lock record table to lock the target data, and the expiration lock record is the lock record when a current time reaches the expiration time; the postponing unit is configured to postpone the target service corresponding to the overdue lock record when the lock class of the overdue lock record is the distributed exclusive lock and the postponing number of the overdue lock record is smaller than a maximum postponing number; the suspending unit is configured to suspend the overdue lock record when the lock class of the overdue lock record is the distributed exclusive lock and the number of deferrals of the overdue lock record is greater than or equal to a maximum number of deferrals.
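Taken together, the fields enumerated across these embodiments suggest a lock record layout along the following lines. This DDL is an assumed reconstruction of the unseen "table 1"; every table and column name is illustrative:

```python
import sqlite3

# Assumed reconstruction of the lock record table; not taken from the patent.
LOCK_TABLE_DDL = """
CREATE TABLE distributed_lock (
    resource_id TEXT PRIMARY KEY,   -- unique identifier of the target data
    lock_level  TEXT NOT NULL,      -- 'EXCLUSIVE' or 'SHARED'
    lock_value  TEXT,               -- service information of the locking service
    share_count INTEGER DEFAULT 1,  -- services currently sharing the lock
    expire_time INTEGER,            -- time the target service should complete by
    deferrals   INTEGER DEFAULT 0   -- times the target service has been deferred
)
"""


def create_lock_table(conn):
    """Create the lock record table inside a sub-library's own database."""
    conn.execute(LOCK_TABLE_DDL)
```

One such table would exist per sub-library, stored alongside the service data it protects, which is what removes the central lock information node.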
In another specific embodiment of the present application, an independent timing detection service is deployed to continuously detect and process timed-out records among the lock records, which prevents shared resources from being locked for a long time due to abnormal factors; because the timeout detection service is deployed independently, it does not affect the performance of the application itself. In addition, this embodiment provides a callback confirmation interface, as shown in fig. 4: when a timed-out lock record is detected, the service state is checked through the interface, and the handling of the lock record is determined according to the service state, preventing transaction isolation from being broken by deleting a lock record prematurely.
In another embodiment of the present application, the apparatus further includes a fourth replying unit and a fifth replying unit, where the fourth replying unit is configured to, after the overdue lock record is suspended, delete the suspended lock record and reply an unlocking success when the target service corresponding to the suspended lock record is completed, the suspended lock record being the suspended overdue lock record; and the fifth replying unit is configured to reply a deletion failure when the target service corresponding to the suspended lock record is not completed. By deleting the suspended lock record and handling the exception information, the target service can be completed smoothly.
In yet another embodiment of the present application, the apparatus further includes a unit for forcibly deleting the suspended lock record after the overdue lock record is suspended, wherein the suspended lock record is the suspended overdue lock record. Forcibly deleting suspended lock records that failed to be deleted further ensures that the target service is fully completed.
It should be noted that the implementation method of the distributed lock is implemented by a distributed lock component. As shown in fig. 6, the distributed lock component is composed of modules for distributed lock acquisition, distributed lock release, distributed lock query, distributed lock timeout processing, and distributed lock operation and maintenance management: distributed lock acquisition executes the process in fig. 2, distributed lock release executes the process in fig. 3, distributed lock timeout processing executes the process in fig. 4, distributed lock query is used to query whether a distributed exclusive lock exists on target data, and distributed lock operation and maintenance management is used to process lock records suspended after timeout. The distributed lock component is depended on in the form of an SDK and deployed along with the service application. When the application starts, the distributed lock component caches the relevant configuration of the configuration center locally for each process to use. When the application performs a service operation, it first adds (or queries) a lock record in the distributed lock record table through the API of the distributed lock, then performs service data access, and finally deletes the lock record in the distributed lock record table through the API of the distributed lock. The distributed lock record table and the service data are stored in the same database, which simplifies the database-related configuration of the distributed lock component; the non-functional requirements of the application component and the distributed lock component on the database, such as high availability and scalability, are met through measures such as database clustering and database sharding.
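The add-lock, access-data, delete-lock call sequence around a service operation can be sketched as follows. The `lock_api` object with `acquire` and `release` methods is an assumed interface for the component's API, not a documented one:

```python
def run_with_lock(lock_api, resource_id, lock_value, service_fn):
    """Typical SDK call sequence: add a lock record via the component's
    API, perform the service data access, then delete the lock record."""
    if not lock_api.acquire(resource_id, level="EXCLUSIVE", value=lock_value):
        raise RuntimeError("target data is locked by another service")
    try:
        return service_fn()  # service data access happens under the lock
    finally:
        # the lock record is always deleted, even if the service raises
        lock_api.release(resource_id, value=lock_value)
```

Releasing in a `finally` block mirrors the embodiment's reliance on the timeout detection service only as a safety net, not as the normal unlock path.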
The device for realizing the distributed lock comprises a processor and a memory, wherein the first processing unit, the second processing unit and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to realize corresponding functions.
The processor comprises a kernel, and the kernel calls the corresponding program unit from the memory. One or more kernels may be provided, and the problem of poor concurrency capability of distributed lock implementations in the prior art is addressed by adjusting kernel parameters.
The memory may include volatile memory in a computer readable medium, Random Access Memory (RAM) and/or nonvolatile memory such as Read Only Memory (ROM) or flash memory (flash RAM), and the memory includes at least one memory chip.
An embodiment of the present invention provides a computer-readable storage medium, on which a program is stored, where the program, when executed by a processor, implements the method for implementing the distributed lock.
The embodiment of the invention provides a processor, which is used for running a program, wherein the method for realizing the distributed lock is executed when the program runs.
The embodiment of the invention provides equipment, which comprises a processor, a memory and a program which is stored on the memory and can run on the processor, wherein when the processor executes the program, at least the following steps are realized:
step S101, under the condition that target data are called by a target service, locking records are inserted into a distributed lock record table to lock the target data, the distributed lock record table is stored in a sub-library where the target data are located, and the distributed lock record table is in one-to-one correspondence with the sub-libraries;
and step S102, under the condition that the target service is completed, inquiring the lock record in the distributed lock record table, and unlocking the target data.
The device herein may be a server, a PC, a PAD, a mobile phone, etc.
The present application further provides a computer program product adapted to perform a program of initializing at least the following method steps when executed on a data processing device:
step S101, under the condition that target data are called by a target service, locking records are inserted into a distributed lock record table to lock the target data, the distributed lock record table is stored in a sub-library where the target data are located, and the distributed lock record table is in one-to-one correspondence with the sub-libraries;
and step S102, under the condition that the target service is completed, inquiring the lock record in the distributed lock record table, and unlocking the target data.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a computer-readable storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned computer-readable storage media comprise: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
From the above description, it can be seen that the above-described embodiments of the present application achieve the following technical effects:
1) in the implementation method of the distributed lock, firstly, under the condition that target data is called by a target service, a lock record is inserted into a distributed lock record table to lock the target data, wherein the distributed lock record table is in one-to-one correspondence with the sub-libraries and is stored in the sub-library where the target data is located; then, when the target service is completed, the lock record in the distributed lock record table is queried to unlock the target data. The distributed lock record tables are stored in the sub-databases in a one-to-one correspondence mode, the distributed lock record tables are used for locking and unlocking target data of the sub-databases, the safety of lock records is improved through the sub-databases of the databases, compared with the traditional implementation mode that the database is logically a central lock information node, the problem that the system bottleneck is caused by the fact that the central lock information node causes overlarge reading and writing pressure of the lock records in a high concurrency scene is solved, and the problem that the concurrency capability of the distributed lock implementation mode in the prior art is poor is solved.
2) The device for realizing the distributed lock comprises a first processing unit, a second processing unit and a third processing unit, wherein under the condition that target data are called by a target service, a lock record is inserted into a distributed lock record table to lock the target data, the distributed lock record table is stored in a sub-library where the target data are located, and the distributed lock record table corresponds to the sub-libraries one by one; and then, the second processing unit inquires the lock record in the distributed lock record table under the condition that the target service is completed, and unlocks the target data. The device adopts the distributed lock record table to lock and unlock the target data of the sub-database, improves the safety of the lock record through the sub-database, and compared with the traditional realization mode that the database is logically a central lock information node, avoids the problem that the central lock information node causes the overlarge read-write pressure of the lock record to become a system bottleneck under a high concurrency scene, and solves the problem that the concurrency capability of the distributed lock realization mode in the prior art is poor.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (12)

1. A method for implementing a distributed lock, wherein a database comprises a plurality of sub-libraries, the method comprising:
under the condition that target data are called by a target service, locking records are inserted into a distributed lock record table to lock the target data, the distributed lock record table is stored in a sub-library where the target data are located, and the distributed lock record table is in one-to-one correspondence with the sub-libraries;
and under the condition that the target service is completed, inquiring the lock record in the distributed lock record table, and unlocking the target data.
2. The method of claim 1, wherein the lock record comprises a primary key and a lock level, the primary key is a unique identifier of the target data, the lock records of the distributed lock record table have one-to-one correspondence with the target data, and inserting a lock record into the distributed lock record table locks the target data, comprising:
under the condition that the primary key of a second lock record is the same as the primary key of any one first lock record, locking the target data according to the lock level, wherein the first lock record is the inserted lock record, and the second lock record is the lock record to be inserted;
and under the condition that the primary key of the second lock record is different from the primary keys of all the first lock records or the distributed lock record table does not have the first lock records, inserting the lock records to be inserted into the distributed lock record table and replying the locking success.
3. The method of claim 2, wherein the lock levels comprise a distributed exclusive lock and a distributed shared lock, and wherein locking the target data according to the lock levels in the case that the primary key of the second lock record is the same as the primary key of any one of the first lock records comprises:
replying to a locking failure in a case where at least one of the lock level of the second lock record and the lock level of a third lock record is the distributed exclusive lock, wherein the third lock record is the first lock record having the same primary key as the second lock record;
and under the condition that the lock level of the second lock record and the lock level of the third lock record are both the distributed shared lock, updating the third lock record and replying that locking is successful.
4. The method of claim 1, wherein the lock record comprises a lock level and a sharing count, the lock level comprises a distributed exclusive lock and a distributed shared lock, the sharing count is the number of the target service currently invoking the target data, and when the target service is completed, the lock record in the distributed lock record table is queried to unlock the target data, comprising:
under the condition that a target lock record does not exist, replying that unlocking is successful, wherein the target lock record is the lock record corresponding to the target data;
deleting the target lock record and replying that the unlocking is successful under the condition that the lock level of the target lock record is the distributed exclusive lock;
and under the condition that the lock level of the target lock record is the distributed shared lock, processing the target lock record according to the sharing count and replying that the unlocking is successful.
5. The method of claim 4, wherein processing the target lock record according to the share count and replying to an unlocking success comprises:
in the event that the share count of the target lock record is greater than 1, decrementing the share count by 1;
and under the condition that the sharing count of the target lock record is equal to 1, deleting the target lock record and replying to successful unlocking.
6. The method of claim 1, wherein the lock record comprises a lock level and a lock value, wherein the lock level comprises a distributed exclusive lock and a distributed shared lock, wherein the lock value is used to characterize the service information of the target service, and wherein before the target service invokes the target data, the method further comprises:
under the condition that no target lock record exists in the distributed lock record table or the lock level of the target lock record is the distributed shared lock, replying that there is no distributed exclusive lock added by another service, wherein the target lock record is the lock record corresponding to the target data;
under the condition that the lock level of the target lock record is the distributed exclusive lock and the lock value of the target lock record is the same as the lock value of the lock record to be inserted, replying that there is no distributed exclusive lock added by another service;
and under the condition that the lock level of the target lock record is the distributed exclusive lock and the lock value of the target lock record is not the same as the lock value of the lock record to be inserted, replying that there is a distributed exclusive lock added by another service.
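A sketch, again not from the patent, of the pre-check in claim 6: before the target service invokes the data, the lock value of the existing record is compared with the lock value of the record to be inserted to decide whether another service holds an exclusive lock. The class, field names, and return strings are illustrative.

```python
from dataclasses import dataclass
from typing import Dict

EXCLUSIVE = "exclusive"
SHARED = "shared"

@dataclass
class LockRecord:
    lock_level: str
    lock_value: str  # characterizes the service that inserted the record

def check_exclusive(table: Dict[str, LockRecord], key: str, lock_value: str) -> str:
    """Pre-check of claim 6, run before the target service invokes the data."""
    record = table.get(key)
    if record is None or record.lock_level == SHARED:
        # No record, or only a shared lock: no exclusive lock blocks us.
        return "no exclusive lock added by another service"
    if record.lock_value == lock_value:
        # Same lock value: the exclusive lock belongs to this very service.
        return "no exclusive lock added by another service"
    return "exclusive lock added by another service"
```

Comparing lock values makes the exclusive lock effectively re-entrant for the service that holds it, which is the point of the second branch of claim 6.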
7. The method of any one of claims 1 to 6, wherein the lock record comprises an expiration time, a number of deferrals, and a lock level, wherein the expiration time is the time at which the target service is expected to be completed, wherein the number of deferrals is the number of times the target service has been deferred, wherein the lock level comprises a distributed exclusive lock and a distributed shared lock, and wherein, after the lock record is inserted into the distributed lock record table to lock the target data, the method further comprises:
deleting an overdue lock record under the condition that the lock level of the overdue lock record is the distributed shared lock, wherein the overdue lock record is a lock record whose expiration time has been reached by the current time;
deferring the target service corresponding to the overdue lock record under the condition that the lock level of the overdue lock record is the distributed exclusive lock and the number of deferrals of the overdue lock record is less than a maximum number of deferrals;
and suspending the overdue lock record under the condition that the lock level of the overdue lock record is the distributed exclusive lock and the number of deferrals of the overdue lock record is greater than or equal to the maximum number of deferrals.
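The expiry handling of claim 7 might be sketched as follows; `MAX_DEFERRALS` and `DEFERRAL_STEP` are illustrative values the patent does not specify, and the record class is the same kind of stand-in as above.

```python
from dataclasses import dataclass
from typing import Dict

EXCLUSIVE = "exclusive"
SHARED = "shared"
MAX_DEFERRALS = 3    # illustrative limit, not specified by the patent
DEFERRAL_STEP = 30.0 # seconds added per deferral (assumed)

@dataclass
class LockRecord:
    lock_level: str
    expiration_time: float  # when the target service is expected to finish
    deferrals: int = 0      # how many times the service has been deferred
    suspended: bool = False

def handle_expiry(table: Dict[str, LockRecord], key: str, now: float) -> str:
    """Apply claim 7's three branches to the record at `key`."""
    record = table[key]
    if now < record.expiration_time:
        return "not expired"
    if record.lock_level == SHARED:
        del table[key]                    # expired shared lock: delete it
        return "deleted"
    if record.deferrals < MAX_DEFERRALS:  # exclusive lock: defer the service
        record.deferrals += 1
        record.expiration_time = now + DEFERRAL_STEP
        return "deferred"
    record.suspended = True               # deferral budget exhausted: suspend
    return "suspended"
```

Deferring rather than deleting an exclusive lock protects a still-running writer from losing its lock merely because its time estimate was too optimistic.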
8. The method of claim 7, wherein after suspending the overdue lock record, the method further comprises:
under the condition that the target service corresponding to a suspended lock record is completed, deleting the suspended lock record and replying that unlocking is successful, wherein the suspended lock record is the suspended overdue lock record;
and under the condition that the target service corresponding to the suspended lock record is not completed, replying that deletion failed.
9. The method of claim 7, wherein after suspending the overdue lock record, the method further comprises:
forcibly deleting a suspended lock record, wherein the suspended lock record is the suspended overdue lock record.
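Claims 8 and 9 can be read as two exits for a suspended record, sketched below; the `force` flag models the forcible deletion of claim 9, and all names and return strings are illustrative, not from the patent.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class LockRecord:
    suspended: bool = True  # only suspended records reach this path

def delete_suspended(table: Dict[str, LockRecord], key: str,
                     service_completed: bool, force: bool = False) -> str:
    """Handle a suspended lock record per claims 8 and 9."""
    record = table.get(key)
    if record is None or not record.suspended:
        return "no suspended record"
    if force:
        del table[key]              # claim 9: forcible deletion, no questions asked
        return "force deleted"
    if service_completed:
        del table[key]              # claim 8: service done, unlocking succeeds
        return "unlock success"
    return "delete failed"          # claim 8: service still running, refuse
```

The split lets an operator reclaim a stuck lock (`force=True`) while the normal path still refuses to delete a lock whose service has not finished.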
10. An apparatus for implementing a distributed lock, wherein a database comprises a plurality of sub-libraries, the apparatus comprising:
a first processing unit, configured to insert a lock record into a distributed lock record table to lock target data under the condition that the target data is invoked by a target service, wherein the distributed lock record table is stored in the sub-library where the target data is located, and the distributed lock record table is in one-to-one correspondence with the sub-library;
and a second processing unit, configured to query the lock record in the distributed lock record table and unlock the target data under the condition that the target service is completed.
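The first processing unit's insert-to-lock step maps naturally onto a unique key in each sub-library's lock record table: a successful INSERT acquires the lock, and a duplicate-key error means the data is already locked. A minimal sketch using SQLite in place of a real sharded database follows; the table and column names are assumptions.

```python
import sqlite3

def make_sublibrary() -> sqlite3.Connection:
    """One lock record table per sub-library; the PRIMARY KEY on data_id
    makes the INSERT an atomic try-lock."""
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE distributed_lock_record (
        data_id    TEXT PRIMARY KEY,
        lock_level TEXT NOT NULL,
        lock_value TEXT NOT NULL)""")
    return conn

def try_lock(conn: sqlite3.Connection, data_id: str,
             lock_level: str, lock_value: str) -> bool:
    """Insert a lock record; a duplicate key means someone else holds the lock."""
    try:
        conn.execute("INSERT INTO distributed_lock_record VALUES (?, ?, ?)",
                     (data_id, lock_level, lock_value))
        return True
    except sqlite3.IntegrityError:
        return False

def unlock(conn: sqlite3.Connection, data_id: str) -> None:
    conn.execute("DELETE FROM distributed_lock_record WHERE data_id = ?",
                 (data_id,))
```

Because each table lives in the sub-library that holds its data, lock traffic is spread across the shards instead of funnelling through a single central lock node, which is the concurrency benefit the abstract claims.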
11. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program performs the method of any one of claims 1 to 9.
12. A processor, characterized in that the processor is configured to run a program, wherein the program when running performs the method of any of claims 1 to 9.
CN202111679981.3A 2021-12-31 2021-12-31 Method and device for realizing distributed lock Pending CN114328564A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111679981.3A CN114328564A (en) 2021-12-31 2021-12-31 Method and device for realizing distributed lock

Publications (1)

Publication Number Publication Date
CN114328564A true CN114328564A (en) 2022-04-12

Family

ID=81022940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111679981.3A Pending CN114328564A (en) 2021-12-31 2021-12-31 Method and device for realizing distributed lock

Country Status (1)

Country Link
CN (1) CN114328564A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117742979A (en) * 2024-02-18 2024-03-22 中国电子科技集团公司第十五研究所 Distributed lock method oriented to space-time data processing and electronic equipment
CN117742979B (en) * 2024-02-18 2024-04-23 中国电子科技集团公司第十五研究所 Distributed lock method oriented to space-time data processing and electronic equipment

Similar Documents

Publication Publication Date Title
RU2686594C2 (en) File service using for interface of sharing file access and transmission of represent state
CN106844014B (en) Method and device for realizing suspension prevention of distributed transactions
US9756469B2 (en) System with multiple conditional commit databases
KR100625595B1 (en) Parallel Logging Method of Transaction Processing System
CN107391758B (en) Database switching method, device and equipment
JP4759570B2 (en) Techniques for providing locks for file operations in database management systems
CN104065636B (en) Data processing method and system
US20080243865A1 (en) Maintaining global state of distributed transaction managed by an external transaction manager for clustered database systems
EP3816912B1 (en) Blockchain-based transaction processing method and apparatus, and electronic device
US10924587B1 (en) Live migration for highly available data stores
CN112039970B (en) Distributed business lock service method, server, system and storage medium
US8180746B2 (en) Method and assignment of transaction branches by resource name aliasing
EP3905172A1 (en) Blockchain-based invoice voiding method and apparatus, and electronic device
CN105426469A (en) Database cluster metadata management method and system
US20210216523A1 (en) Data Storage Method, Metadata Server, and Client
AU2018348327B2 (en) Utilizing nonce table to resolve concurrent blockchain transaction failure
CN114328564A (en) Method and device for realizing distributed lock
JP2020184325A (en) Method for processing replica, node, storage system, server, and readable storage medium
US20240119039A1 (en) Writing graph data
JP4356018B2 (en) Asynchronous messaging over storage area networks
CN108733477B (en) Method, device and equipment for data clustering processing
CN112100190B (en) Distributed lock state synchronization method based on update sequence
CN110659303A (en) Read-write control method and device for database nodes
US11422715B1 (en) Direct read in clustered file systems
CN112463757A (en) Resource access method of distributed system and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination