CN108874552B - Distributed lock execution method, device and system, application server and storage medium


Info

Publication number
CN108874552B
CN108874552B (application CN201810688442.8A)
Authority
CN
China
Prior art keywords
distributed lock
thread
cache server
time
lock information
Prior art date
Legal status
Active
Application number
CN201810688442.8A
Other languages
Chinese (zh)
Other versions
CN108874552A (en)
Inventor
张铭明
Current Assignee
Hangzhou Yunyi Network Technology Co.,Ltd.
Original Assignee
Hangzhou Yunyi Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Yunyi Network Technology Co., Ltd.
Priority to CN201810688442.8A
Publication of CN108874552A
Application granted
Publication of CN108874552B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/52 Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F 9/524 Deadlock detection or avoidance

Abstract

The application provides a distributed lock execution method, device and system, an application server and a storage medium. The method comprises the following steps: a thread generates a unique time identifier from the current time acquired from a cache server and sends a locking instruction comprising distributed lock information to the cache server, the distributed lock information including the time identifier; the thread receives the return result corresponding to the locking instruction from the cache server; if the return result indicates that the thread has obtained the distributed lock, the thread accesses the shared resource and executes the task operation corresponding to the shared resource; after the thread finishes executing the task operation, it releases the distributed lock only if the time identifier in its locally stored distributed lock information is consistent with the time identifier in the distributed lock information stored by the cache server; if the return result indicates that the thread did not obtain the distributed lock, the thread executes a lock-preemption operation. The method and device can solve the problem of data inconsistency in shared resources caused by multiple threads obtaining the distributed lock at the same time.

Description

Distributed lock execution method, device and system, application server and storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a distributed lock execution method, apparatus, and system, an application server, and a storage medium.
Background
In a distributed environment, different processes may need mutually exclusive access to shared resources. A process comprises one or more threads; if threads in multiple processes all need to operate on a shared resource, threads in different processes often require mutually exclusive access when accessing that resource, so as to prevent interference with each other. In such cases a distributed lock is usually required.
Redis is a typical implementation of distributed locks. In Redis, a distributed lock may be allocated to a thread of a process and a usage time may be set for the distributed lock; during that time the thread may access the shared resource.
However, when several threads are waiting to access the shared resource and the distributed lock held by one thread times out, an abnormal situation may occur in which two or more threads hold the distributed lock at the same time.
For example, in a Redis single-instance environment, the distributed lock information of thread C1 is given an expiration time in Redis (for example, via an expire-type command). If the distributed lock in Redis has been cleared when that time is reached while C1 is still executing, thread C2 may grab the distributed lock, yet C1 still considers itself the owner of the lock until its execution finishes. Threads C1 and C2 may thus both own the distributed lock.
As another example, in a Redis master-slave environment, thread C1 acquires the distributed lock at master node A of Redis. If master node A goes down before the distributed lock information of thread C1 has been synchronized to slave node B, slave node B becomes the new master node, and it contains no distributed lock information for thread C1. At that point thread C2 can acquire the distributed lock by preempting it, so thread C1 and thread C2 both own the distributed lock.
Disclosure of Invention
In view of this, the present application provides a distributed lock execution method, apparatus and system, an application server and a storage medium, which can solve the problem of data inconsistency of shared resources caused by multiple threads simultaneously obtaining a distributed lock.
In order to achieve the above object, the present application provides the following technical features:
a distributed lock execution method is applied to a plurality of threads needing to access the same shared resource in the process of an application server, and comprises the following steps:
generating, by a thread, a unique time identifier using the current time acquired from a cache server, and sending a locking instruction comprising distributed lock information to the cache server; wherein the distributed lock information comprises the time identifier;
receiving a return result which is sent by the cache server and corresponds to the locking instruction;
if the returned result indicates that the thread obtains the distributed lock, the thread accesses the shared resource and executes task operation corresponding to the shared resource;
after the thread finishes executing the task operation corresponding to the shared resource, releasing the distributed lock only if the time identifier in the locally stored distributed lock information is consistent with the time identifier in the distributed lock information stored by the cache server;
and if the return result indicates that the thread did not acquire the distributed lock, executing, by the thread, a lock-preemption operation.
Optionally, the generating, by the thread, a unique time identifier using the current time obtained from the cache server includes:
the thread sends a timestamp acquisition instruction to the cache server;
the thread acquires a return value fed back by the cache server; wherein the return value comprises a current time of the cache server and a number of microseconds that have elapsed since the current second;
and the thread forms a current time stamp by the current time and the microsecond value, and the current time stamp is the time identifier.
Optionally, before sending the locking instruction including the distributed lock information to the cache server, the method further includes:
constructing a key value pair which takes a distributed lock identifier as a key and takes the time identifier as a value;
distributed lock information including the key-value pair is generated and stored.
Optionally, before sending the locking instruction including the distributed lock information to the cache server, the method further includes:
setting the use time corresponding to the distributed lock;
constructing a key-value pair that takes a distributed lock identifier as the key and the sum of the time identifier and the usage time as the value; or, constructing a key-value pair that takes a distributed lock identifier as the key and the concatenation of the character string of the time identifier and the character string of the usage time as the value;
distributed lock information including the key-value pair is generated and stored.
Optionally, the releasing the distributed lock by the thread only when the timestamp in the locally stored distributed lock information is consistent with the time identifier in the distributed lock information stored by the cache server includes:
after the execution of the task operation corresponding to the shared resource is finished, the thread sends a distributed lock information acquisition instruction to the cache server and receives the current distributed lock information sent by the cache server;
judging whether the time identifier in the distributed lock information locally stored by the thread is consistent with the time identifier in the current distributed lock information;
and if the time identifier in the distributed lock information locally stored by the thread is consistent with the time identifier in the current distributed lock information, sending a distributed lock releasing instruction to the cache server so that the cache server can delete the current distributed lock information.
Optionally, the method further includes:
and if the time identifier in the distributed lock information locally stored by the thread is not consistent with the time identifier in the current distributed lock information, the thread throws an exception prompt message to trigger a rollback mechanism to execute rollback operation on the thread so as to enable the thread to rollback to the latest state before the distributed lock is obtained.
Optionally, the thread performs a lock preemption operation, including:
judging whether the thread occupying the distributed lock is overtime or not;
if the thread occupying the distributed lock is overtime, sending a distributed lock releasing instruction to the cache server so that the cache server can delete the distributed lock information corresponding to the overtime thread; the thread enters the step that the thread generates a unique time identifier by using the current time acquired from the cache server, and a locking instruction comprising distributed locking information is sent to the cache server;
and if the thread occupying the distributed lock at present is not overtime, re-entering the step of judging whether the thread occupying the distributed lock at present is overtime.
Optionally, the distributed lock information includes a time identifier; the determining whether the thread currently occupying the distributed lock is overtime includes:
sending a distributed lock information acquisition instruction to the cache server, and receiving distributed lock information, current time and use time of a distributed lock sent by the cache server;
calculating the difference value between the current time and the time identifier in the distributed lock information;
if the difference is larger than the use time, the thread occupying the distributed lock at present is overtime;
and if the difference is not larger than the use time, indicating that the thread occupying the distributed lock currently does not time out.
Optionally, the distributed lock information includes a key-value pair that takes a distributed lock identifier as the key and the sum of the time identifier and the usage time as the value, and the determining whether the thread currently occupying the distributed lock has timed out then includes:
sending a distributed lock information acquisition instruction to a cache server, and receiving distributed lock information and current time sent by the cache server;
extracting the value (the sum of the time identifier and the usage time) from the distributed lock information;
judging whether the current time is greater than that value;
if yes, indicating that the thread occupying the distributed lock is overtime;
if not, the thread occupying the distributed lock is not overtime.
Optionally, the distributed lock information includes a key-value pair that takes the distributed lock identifier as the key and the concatenation of the character string of the time identifier and the character string of the usage time as the value, and the determining whether the thread currently occupying the distributed lock has timed out includes:
sending a distributed lock information acquisition instruction to a cache server, and receiving distributed lock information and current time sent by the cache server;
calculating the difference value between the current time and the time identifier in the distributed lock information;
if the difference is larger than the service time, determining that the thread occupying the distributed lock is overtime currently;
and if the difference is not larger than the using time, determining that the thread occupying the distributed lock at present is not overtime.
A distributed lock execution device, integrated in a process of an application server, for accessing multiple threads of a same shared resource, the distributed lock execution device comprising:
the locking unit is used for generating, by a thread, a unique time identifier using the current time acquired from the cache server and sending a locking instruction comprising distributed lock information to the cache server; wherein the distributed lock information comprises the time identifier;
the receiving unit is used for receiving a return result which is sent by the cache server and corresponds to the locking instruction;
the access unit is used for accessing the shared resource and executing the task operation corresponding to the shared resource by the thread if the returned result indicates that the thread obtains the distributed lock;
a releasing unit, configured to release the distributed lock only when a timestamp in locally stored distributed lock information is consistent with a time identifier in distributed lock information stored by the cache server after a task execution operation corresponding to the shared resource is completed by the thread;
and the lock-preemption unit is used for executing the lock-preemption operation by the thread if the return result indicates that the thread did not obtain the distributed lock.
Optionally, the generating, by the thread in the locking unit, a unique time identifier by using the current time obtained from the cache server specifically includes:
the timestamp obtaining instruction sending unit is used for sending a timestamp obtaining instruction to the cache server by the thread;
the feedback unit is used for the thread to acquire a return value fed back by the cache server; wherein the return value comprises a current time of the cache server and a number of microseconds that have elapsed since the current second;
and the forming unit is used for forming a current time stamp by the thread according to the current time and the microsecond value, wherein the current time stamp is the time identifier.
Optionally, before sending the locking instruction including the distributed lock information to the cache server, the method further includes generating a distributed lock information unit.
Optionally, the distributed lock information unit includes: the first key value pair building unit is used for building a key value pair which takes a distributed lock identifier as a key and takes the time identifier as a value; a first generating and storing unit to generate and store distributed lock information including the key-value pair.
Optionally, the distributed lock information unit includes:
a second key-value pair building unit, configured to set a usage time corresponding to the distributed lock, and to construct a key-value pair that takes a distributed lock identifier as the key and the sum of the time identifier and the usage time as the value, or a key-value pair that takes a distributed lock identifier as the key and the concatenation of the character string of the time identifier and the character string of the usage time as the value; a second generating and storing unit, configured to generate and store distributed lock information including the key-value pair.
Optionally, the release unit specifically includes:
the distributed lock information acquisition unit is used for sending a distributed lock information acquisition instruction to the cache server and receiving the current distributed lock information sent by the cache server after the thread finishes executing the task operation corresponding to the shared resource;
the judging unit is used for judging whether the time identifier in the distributed lock information locally stored by the thread is consistent with the time identifier in the current distributed lock information;
and a distributed lock releasing instruction sending unit, configured to send a distributed lock releasing instruction to the cache server if a time identifier in the distributed lock information locally stored by the thread is consistent with a time identifier in the current distributed lock information, so that the cache server deletes the current distributed lock information.
Optionally, the method further includes:
and the rollback unit is used for throwing an exception prompt message by the thread if the time identifier in the distributed lock information locally stored by the thread is inconsistent with the time identifier in the current distributed lock information, so as to trigger a rollback mechanism to execute a rollback operation on the thread to enable the thread to rollback to a latest state before the distributed lock is acquired.
Optionally, the lock-preemption unit is configured to execute a lock-preemption operation by the thread, and specifically includes:
a timeout judging unit, configured to judge whether the thread currently occupying the distributed lock has timed out;
the distributed lock releasing unit is used for sending a distributed lock releasing instruction to the cache server if the thread occupying the distributed lock is overtime so that the cache server can delete the distributed lock information corresponding to the overtime thread; the thread enters the step that the thread generates a unique time identifier by using the current time acquired from the cache server, and a locking instruction comprising distributed locking information is sent to the cache server; and if the thread occupying the distributed lock does not time out, re-entering the judging time-out unit.
Optionally, the distributed lock information includes a time identifier; the timeout judging unit is configured to determine whether the thread currently occupying the distributed lock has timed out, and specifically includes:
the distributed lock information acquisition unit is used for acquiring and sending a distributed lock information acquisition instruction to the cache server and receiving distributed lock information, current time and using time of the distributed lock sent by the cache server;
a difference calculating unit, configured to calculate a difference between the current time and a time identifier in the distributed lock information;
the determining unit is used for indicating that the thread occupying the distributed lock at present is overtime if the difference value is larger than the using time; and if the difference is not larger than the use time, indicating that the thread occupying the distributed lock currently does not time out.
Optionally, the distributed lock information includes a key-value pair that takes a distributed lock identifier as the key and the sum of the time identifier and the usage time as the value; the timeout judging unit is then configured to determine whether the thread currently occupying the distributed lock has timed out, and specifically includes:
the distributed lock information acquisition unit is used for sending a distributed lock information acquisition instruction to the cache server and receiving the distributed lock information and the current time sent by the cache server;
an extraction unit, configured to extract the value (the sum of the time identifier and the usage time) from the distributed lock information;
a judging unit, configured to judge whether the current time is greater than that value;
a determining unit, configured to indicate that the thread currently occupying the distributed lock has timed out if the current time is greater than that value, and that it has not timed out otherwise.
Optionally, the distributed lock information includes a key-value pair that takes the distributed lock identifier as the key and the concatenation of the character string of the time identifier and the character string of the usage time as the value; the timeout judging unit is then configured to determine whether the thread currently occupying the distributed lock has timed out, and specifically includes:
the distributed lock information acquisition unit is used for sending a distributed lock information acquisition instruction to the cache server and receiving the distributed lock information and the current time sent by the cache server;
the difference value calculating unit is used for calculating the difference value between the current time and the time identifier in the distributed lock information;
the determining unit is used for determining that the thread occupying the distributed lock at present is overtime if the difference value is larger than the using time; and if the difference is not larger than the using time, determining that the thread occupying the distributed lock at present is not overtime.
A distributed lock execution system comprises a cache server and one or more application servers connected with the cache server; the process of the application server comprises a plurality of threads needing to access the same shared resource;
the thread in the application server process is used for generating a unique time identifier from the current time acquired from the cache server and sending a locking instruction comprising distributed lock information to the cache server, the distributed lock information comprising the time identifier; receiving the return result corresponding to the locking instruction sent by the cache server; if the return result indicates that the thread has obtained the distributed lock, accessing the shared resource and executing the task operation corresponding to the shared resource; after finishing executing the task operation corresponding to the shared resource, releasing the distributed lock only if the time identifier in the locally stored distributed lock information is consistent with the time identifier in the distributed lock information stored by the cache server; and if the return result indicates that the thread did not acquire the distributed lock, executing the lock-preemption operation.
An application server, comprising:
the processor is used for generating, by a thread, a unique time identifier from the current time acquired from the cache server and sending a locking instruction comprising distributed lock information to the cache server, the distributed lock information comprising the time identifier; receiving the return result corresponding to the locking instruction sent by the cache server; if the return result indicates that the thread has obtained the distributed lock, accessing the shared resource and executing the task operation corresponding to the shared resource; after finishing executing the task operation corresponding to the shared resource, releasing the distributed lock only if the time identifier in the locally stored distributed lock information is consistent with the time identifier in the distributed lock information stored by the cache server; and if the return result indicates that the thread did not acquire the distributed lock, executing the lock-preemption operation;
a memory to store the distributed lock information.
A storage medium having stored thereon a computer program for execution by a processor to perform the steps of a distributed lock execution method.
Through the technical means, the following beneficial effects can be realized:
in the application, when the thread sends a locking instruction to the cache server and obtains the distributed lock, the cache server stores the distributed lock information corresponding to the thread.
After the thread accesses the shared resource and executes the task operation, it does not directly release the distributed lock; instead it judges, based on the distributed lock information, whether the distributed lock still belongs to it. If the thread's distributed lock information is consistent with the current distributed lock information, the distributed lock still belongs to the thread; otherwise, the thread's distributed lock has been preempted by another thread.
In order to ensure the consistency of the shared resource, the thread throws an exception to trigger a rollback mechanism once it finds that another thread has occupied the distributed lock, thereby ensuring the consistency of the shared data.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a distributed lock execution system disclosed in an embodiment of the present application;
FIG. 2 is a flowchart of a distributed lock execution method disclosed in an embodiment of the present application;
FIG. 3 is a flowchart of a distributed lock execution method disclosed in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a distributed lock execution apparatus disclosed in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Interpretation of terms:
redis: the system is an open source log-type and Key-Value database which is written by using ANSI C language, supports network, can be based on memory and can also be persistent, and provides application program interfaces of a plurality of languages.
Locking: the core is a mechanism to coordinate the use of shared resources by various users. When a plurality of users mutually exclusive use a certain common resource, in order to avoid uncontrollable modification results caused by parallel use, a mark needs to be recorded in a certain place. The mark can be seen by all users, when the mark does not exist, the mark can be set and the use right of the public resource can be obtained, and when other users find that the mark already exists, the users can only wait for the mark owner to release and then try to set the mark. This marking can be understood as a lock.
Distributed lock: a way to control synchronized access by multiple users to a shared resource across distributed systems. If multiple users share one resource or a group of resources, they often need mutually exclusive access when accessing those resources, to prevent interference with each other and to ensure the consistency of the shared resources; in such cases a distributed lock is used.
To facilitate understanding by those skilled in the art, a distributed lock execution system is described below. Referring to fig. 1, the distributed lock execution system includes: the cache server 100, one or more application servers 200 (three application servers are shown as an example, and are respectively represented by an application server 201, an application server 202 and an application server 203) connected to the cache server 100, and a database device 300 connected to the cache server 100.
Cache server 100 may implement distributed locks and database device 300 may store shared resources. The cache server 100 may implement the distributed lock by using a Redis technology, and certainly may also implement the distributed lock by using technologies such as memcached and tair, which is not limited herein.
Multiple threads in one or more application servers 200 may apply for a distributed lock from the cache server 100. The thread that obtains the distributed lock may access a shared resource stored by database device 300.
The distributed lock execution method is applied to a distributed lock execution system, and a process of an application server of the distributed lock execution system includes a plurality of threads which need to access the same shared resource. Since the execution of multiple threads in one or more application servers 200 is consistent, the distributed lock execution method is described herein using one thread in one application server as an example.
According to an embodiment provided by the present application, an embodiment of a distributed lock execution method is provided. Referring to fig. 2, the method comprises the following steps:
generating a unique time identifier by a thread by using the current time acquired from a cache server, and sending a locking instruction comprising distributed lock information to the cache server; wherein the distributed lock information includes the time identifier. This process can be realized by the following steps S201 to S206.
The thread generating a unique time identifier by using the current time acquired from the cache server comprises the following steps: the thread sends a timestamp acquisition instruction to the cache server; the thread acquires a return value fed back by the cache server; wherein the return value comprises a current time of the cache server and a number of microseconds that have elapsed since the current second; and the thread forms a current time stamp by the current time and the microsecond value, and the current time stamp is the time identifier. The present process can be implemented by using steps S202 to S204.
Step S201: after determining that the shared resource needs to be accessed, the thread in the application server 200 sets the use time of the distributed lock corresponding to the shared resource.
Under the condition that the thread in the application server 200 determines that the thread needs to access the shared resource of the cache server 100, the use time of the distributed lock corresponding to the shared resource can be set according to the condition that the thread accesses the shared resource. The length of the usage time is related to the application scenario corresponding to the specific process, and is not limited herein.
Step S202: the thread in the application server 200 sends a timestamp get instruction to the cache server 100.
In order to unify the timestamps of the application servers, in this embodiment the thread in the application server 200 may use a Redis Lua script to obtain the timestamp of the cache server 100. The script content may be: local a = redis.call('TIME').
Step S203: the cache server 100 extracts the current time and sends a return value to the thread in the application server 200.
Where the return value contains two strings, the first string being the current time (in UNIX timestamp format) and the second string being the number of microseconds that has elapsed since the current second.
For example, the return value is a[1] = "1332395997", a[2] = "952581", where "1332395997" is the current time fetched from the cache server and "952581" is the number of microseconds that have elapsed in the current second.
Step S204: the thread in the application server 200 calculates the current timestamp based on the return value.
It will be appreciated that combining the two values in the return result (the seconds value scaled to microseconds, plus the elapsed microseconds) yields the current timestamp.
The script content is as follows: return (a[1] * 1000000 + a[2]). In the above example, the current timestamp is 1332395997952581.
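For illustration only, the following is a minimal Java sketch of this timestamp acquisition (steps S202-S204), assuming the Jedis client is used to communicate with the cache server; the class name, method name and server address are assumptions of this sketch rather than part of the patent.

    import redis.clients.jedis.Jedis;

    public class RedisTimestamp {
        // Lua script mirroring steps S202-S204: seconds scaled to microseconds, plus elapsed microseconds.
        private static final String TIME_SCRIPT =
                "local a = redis.call('TIME') return (a[1] * 1000000 + a[2])";

        // Returns the cache server's current time as a microsecond timestamp (the time identifier).
        public static long currentTimestampMicros(Jedis jedis) {
            return (Long) jedis.eval(TIME_SCRIPT);
        }

        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) { // address is an assumption
                System.out.println("time identifier: " + currentTimestampMicros(jedis));
            }
        }
    }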
The beneficial effects of steps S202-S204 are:
in this embodiment, the timestamp of the cache server 100 is adopted instead of the timestamp of the application server 200 itself, so that the consistency of the timestamps on the respective application servers 200 can be ensured.
Compared with the prior art in which each application server adopts synchronization operation to unify timestamps (synchronization operation can only be accurate to millisecond level generally), the embodiment can avoid synchronization operation, and has better operability and practicability. In addition, the timestamp acquired according to the method can be accurate to a microsecond level, and the accuracy and consistency of the timestamp are improved.
Step S205: the thread in the application server 200 constructs a key-value pair.
Before sending a locking instruction including distributed lock information to the cache server, constructing a key-value pair. The present application provides two implementations of building key-value pairs:
the first implementation mode comprises the following steps: constructing a key value pair which takes a distributed lock identifier as a key and takes the time identifier as a value; that is, the key is a distributed lock identification and the value is the current timestamp.
The second implementation mode comprises the following steps: constructing a key-value pair with the distributed lock identifier as the key and the sum of the time identifier and the usage time as the value; that is, the key is the distributed lock identifier and the value is the expiry timestamp obtained by adding the usage time to the current timestamp.
Or, constructing a key-value pair that takes the distributed lock identifier as the key and the concatenation of the character string of the time identifier and the character string of the usage time as the value; that is, the key is the distributed lock identifier and the value is the current-timestamp string concatenated with the usage-time string.
And after the key value pairs are constructed according to the first implementation mode and the second implementation mode, generating and storing distributed lock information comprising the key value pairs.
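For illustration, a short Java sketch of the value formats described above; the class name, method names and the separator character are assumptions of this sketch, not requirements of the patent.

    // Sketch of the distributed-lock-information value formats built in step S205.
    public final class LockInfo {
        // First implementation: the value is just the time identifier (current timestamp).
        static String simpleValue(long timestampMicros) {
            return Long.toString(timestampMicros);
        }

        // Second implementation, variant 1: the value is timestamp + usage time (an expiry time).
        static String expiryValue(long timestampMicros, long usageTimeMicros) {
            return Long.toString(timestampMicros + usageTimeMicros);
        }

        // Second implementation, variant 2: the value is the timestamp string concatenated
        // with the usage-time string (':' is an assumed separator).
        static String concatenatedValue(long timestampMicros, long usageTimeMicros) {
            return timestampMicros + ":" + usageTimeMicros;
        }
    }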
Step S206: the thread in the application server 200 sends a locking instruction to the cache server 100. That is, the thread in the application server 200 sends a locking instruction containing distributed lock information to the cache server 100.
The thread in the application server 200 uses the Redis set-if-not-exists instruction (SETNX) to perform the locking operation. The locking instruction carries the distributed lock information, and the distributed lock information is the key-value pair.
Step S207: the cache server 100 receives the locking instruction and sends the return result of the locking instruction to the thread in the application server 200.
After receiving the locking instruction, the cache server 100 determines whether there is a distributed lock corresponding to the distributed lock identifier, and if so, feeds back a returned result indicating that the locking is not successful. If not, the key value pair is stored, and a return result indicating that the locking is successful is fed back.
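A minimal Java sketch of this locking attempt (steps S206-S207), again assuming the Jedis client; SETNX returns 1 when the key was created (lock acquired) and 0 when the key already exists.

    import redis.clients.jedis.Jedis;

    public class LockAcquire {
        // Step S206: try to create the lock key only if it does not exist yet.
        static boolean tryLock(Jedis jedis, String lockName, String lockValue) {
            return jedis.setnx(lockName, lockValue) == 1L;
        }
    }

Here lockValue can be the time identifier obtained in step S204, or one of the combined values built in step S205.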
And if the returned result indicates that the thread obtains the distributed lock, the thread accesses the shared resource and executes the task operation corresponding to the shared resource. This step can be implemented by steps S208 to S209.
Step S208: the thread in the application server 200 determines whether the returned result is a successful locking result, and if so, the process goes to step S209; if not, the process proceeds to step S215.
Step S209: if the thread in the application server 200 determines that the returned result is that the locking is successful, the shared resource of the database device 300 is accessed through the cache server 100, and the obtained shared resource is used to execute the task operation.
If the thread in the application server 200 determines that the returned result is that the locking is successful, it indicates that the thread may access the shared resource of the database device 300 through the cache server 100 within the use time, and perform the task operation corresponding to the shared resource by using the acquired shared resource.
The specific implementation of the task operation is related to a specific application scenario, and is not limited herein.
And after the task execution operation corresponding to the shared resource is finished, the thread releases the distributed lock only under the condition that the time stamp in the locally stored distributed lock information is consistent with the time identifier in the distributed lock information stored by the cache server. The process specifically comprises the following detailed processes:
after the execution of the task operation corresponding to the shared resource is finished, the thread sends a distributed lock information acquisition instruction to the cache server and receives the current distributed lock information sent by the cache server;
judging whether the time identifier in the distributed lock information locally stored by the thread is consistent with the time identifier in the current distributed lock information;
and if the time identifier in the distributed lock information locally stored by the thread is consistent with the time identifier in the current distributed lock information, sending a distributed lock releasing instruction to the cache server so that the cache server can delete the current distributed lock information.
In addition, if the time identifier in the distributed lock information locally stored by the thread is inconsistent with the time identifier in the current distributed lock information, the thread throws an exception prompt message to trigger a rollback mechanism to execute a rollback operation on the thread so that the thread rolls back to a latest state before the distributed lock is obtained. The above process may be implemented using steps S210 to S214.
Step S210: after the threads in the application server 200 access the shared resources and execute the task operation, the threads send the distributed lock information acquisition instruction to the cache server 100.
After the application server 200 executes the task operation by using the acquired shared resource, the distributed lock can be released, and the current distributed lock information is acquired before the distributed lock is released, so as to determine whether the thread still occupies the distributed lock based on the current distributed lock information.
Step S211: the cache server 100 obtains the current distributed lock information and sends the current distributed lock information to the threads in the application server 200.
While a thread in the application server 200 is accessing the shared resource and executing the task operation, an abnormal situation may occur, causing the thread to exceed the usage time of the distributed lock. After the thread times out, the distributed lock may be preempted by another thread; in that case the cache server stores the distributed lock information of that other thread (the distributed lock identifier is the same, only the timestamp differs).
As can be seen from the foregoing description, the distributed lock information includes key-value pairs of the distributed lock.
Step S212: the thread in the application server 200 compares the timestamp it used in the locking instruction with the timestamp in the current distributed lock information; if they are consistent, the process proceeds to step S213; if not, the process proceeds to step S214.
It will be appreciated that, if the distributed lock has not been preempted by another thread, the distributed lock information in the cache server 100 still belongs to this thread; the sign of this is that the timestamp is still this thread's timestamp.
Releasing a distributed lock follows the principle that the one who tied the bell must untie it: when a distributed lock is released normally, a thread may only release the distributed lock that it added itself. Therefore, before releasing the distributed lock, the thread determines whether the distributed lock still belongs to the current thread; that is, it compares the timestamp it used in the locking instruction with the timestamp in the current distributed lock information.
If the timestamps are consistent, the distributed lock still belongs to the thread and has not been preempted by another thread. If they are not consistent, the thread finds that it no longer owns the distributed lock and that the lock has been preempted by another thread.
Step S213: if the two are consistent, an instruction for releasing the distributed lock is sent to the cache server 100.
If the distributed lock belongs to the thread, a distributed lock releasing instruction may be sent to the cache server 100, so as to release the distributed lock, so that other threads can preempt the distributed lock.
After receiving the release-distributed-lock instruction, the cache server 100 deletes the distributed lock information corresponding to the distributed lock identifier, that is, the distributed lock information corresponding to the current thread's timestamp.
Step S214: and if the two are not consistent, throwing out a system exception to trigger a rollback mechanism.
If the timestamps are not consistent, the distributed lock no longer belongs to this thread, and the thread must not release a distributed lock added by another thread. In other words, before the thread finished the task operation, some exception occurred that caused it to occupy the distributed lock beyond the usage time, so the distributed lock has been preempted by another thread.
Since the other threads keep judging whether the distributed lock has timed out, the distributed lock will have been released by one of them once this thread exceeded the usage time (this is an abnormal release, so the other thread may release this thread's distributed lock directly, without verifying, as in step S212, whether the timestamps are consistent).
Because the thread exceeded the usage time of the distributed lock, in order to keep operations on the shared resource consistent, the thread throws a system exception at this point, and the shared resource is restored to the state it was in before the thread occupied the distributed lock.
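A minimal, non-authoritative Java sketch of this check-then-release flow (steps S210-S214), under the same Jedis assumption; the exception type is chosen only for illustration.

    import redis.clients.jedis.Jedis;

    public class LockRelease {
        // Steps S210-S214: release the lock only if it still carries this thread's value,
        // otherwise throw so that an outer rollback mechanism can undo the task operation.
        static void releaseOrRollback(Jedis jedis, String lockName, String myLockValue) {
            String currentValue = jedis.get(lockName);   // current distributed lock information (S210-S211)
            if (myLockValue.equals(currentValue)) {
                jedis.del(lockName);                     // the lock is still ours: delete it (S213)
            } else {
                // The lock was preempted by another thread after this thread timed out (S214).
                throw new IllegalStateException("distributed lock preempted; rolling back");
            }
        }
    }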
The steps S210 to S214 can realize the following beneficial effects:
if execution times out after a thread acquires a distributed lock: if no other thread robs the lock, the lock can be normally executed and released, because the distributed lock information in Redis is not deleted; an exception can be popped if there are other threads that have preempted the lock (the prior art fails to find the exception). At this time, the thread cannot normally release the lock, but throws an exception to trigger a rollback operation, thereby ensuring the consistency of the shared resources.
In addition, steps S210-S214 also neatly solve the fault-tolerance problem of distributed locks under Redis master-slave or cluster deployments. Taking the master-slave case from the background art as an example, the node ends up storing the distributed lock information of thread C2; when thread C1 releases the distributed lock, it finds that the distributed lock information it holds is inconsistent with the information stored at the node, so thread C1 is treated as a timeout case and rolled back. Thus, the present application supports implementing distributed locks via Redis master-slave setups or clusters.
If the return result corresponding to the locking instruction indicates that the thread did not acquire the distributed lock, the thread executes the lock-preemption operation.
The execution of the lock preemption operation by the thread specifically comprises the following steps: judging whether the thread occupying the distributed lock is overtime or not; if the thread occupying the distributed lock is overtime, sending a distributed lock releasing instruction to the cache server so that the cache server can delete the distributed lock information corresponding to the overtime thread; the thread enters the step that the thread generates a unique time identifier by using the current time acquired from the cache server, and a locking instruction comprising distributed locking information is sent to the cache server; and if the thread occupying the distributed lock at present is not overtime, re-entering the step of judging whether the thread occupying the distributed lock at present is overtime.
The above process can be implemented by using steps S215 to S218.
Step S215 (entered from step S208): if the thread in the application server 200 determines that the return result indicates locking was not successful, the thread sends a distributed lock information acquisition instruction to the cache server 100.
Step S216: the cache server 100 obtains the current distributed lock information and sends the current distributed lock information to the threads in the application server 200.
The thread holding the distributed lock adds the distributed lock information to the cache server 100 after the locking is successful.
Corresponding to step S206, the distributed lock information likewise has the following implementations:
the first implementation mode comprises the following steps: the distributed lock information includes key-value pairs.
The locking instruction includes a key-value pair, the key (key) being a distributed lock identification and the value (value) being a current timestamp. The use time is default to a preset use time.
The second implementation mode comprises the following steps: the distributed lock information includes key-value pairs.
Compared with the first implementation mode, the usage time is embedded in the key-value pair of the locking instruction. It can be understood that the usage time of different threads can then be set according to the actual scenario, so this approach is more adaptable.
The value is either the sum of the current timestamp and the usage time (i.e. an expiry timestamp), or a string composed of the current timestamp and the usage time.
Step S217: the thread in the application server 200 determines, based on the current distributed lock information, whether the thread currently holding the distributed lock has timed out. If so, the process proceeds to step S218; otherwise, the process returns to step S215 to continue judging whether the thread currently occupying the distributed lock has timed out.
For a first implementation manner, sending a distributed lock information acquisition instruction to the cache server, and receiving distributed lock information, current time and use time of a distributed lock sent by the cache server; calculating the difference value between the current time and the time identifier in the distributed lock information; if the difference is larger than the use time, the thread occupying the distributed lock at present is overtime; and if the difference is not larger than the use time, indicating that the thread occupying the distributed lock currently does not time out.
That is, the thread in the application server 200 calculates a difference between the current timestamp and the timestamp in the distributed lock information, and determines whether the difference is greater than the use time. If yes, indicating that the thread currently holding the distributed lock is overdue; if not, the thread currently holding the distributed lock is not expired.
For the second implementation, two classes can also be subdivided.
The first type: the distributed lock information includes a key-value pair that takes the distributed lock identifier as the key and the sum of the time identifier and the usage time as the value. Judging whether the thread currently occupying the distributed lock has timed out then includes:
sending a distributed lock information acquisition instruction to the cache server, and receiving the distributed lock information and the current time sent by the cache server; extracting the value (the sum of the time identifier and the usage time) from the distributed lock information; judging whether the current time is greater than that value; if yes, the thread currently occupying the distributed lock has timed out; if not, it has not timed out.
The second type: the distributed lock information includes a key-value pair whose value is the concatenation of the character string of the time identifier and the character string of the usage time. Judging whether the thread currently occupying the distributed lock has timed out then includes:
sending a distributed lock information acquisition instruction to the cache server, and receiving the distributed lock information and the current time sent by the cache server; calculating the difference between the current time and the time identifier in the distributed lock information; if the difference is greater than the usage time, determining that the thread currently occupying the distributed lock has timed out; and if the difference is not greater than the usage time, determining that it has not timed out.
That is, for the first type, the thread in the application server 200 directly judges whether the current timestamp is greater than the timestamp in the distributed lock information (there the value of the key-value pair is an expiry timestamp, i.e. the moment at which the holding thread times out); if so, the thread currently holding the distributed lock has expired; if not, it has not expired.
Step S218: the thread in the application server 200 sends an instruction to release the distributed lock to the cache server 100, and the process proceeds to step S202.
If the thread currently holding the distributed lock has expired, indicating that a deadlock has occurred, the thread in the application server 200 sends a delete instruction to the cache server 100. The cache server 100 deletes the distributed lock directly. The thread in the application server 200 will re-determine the current timestamp and perform the locking operation.
In this embodiment, the cache server itself no longer clears the overtime thread, but finds the overtime thread by the thread, and sends a release instruction to the cache server, so that the cache server deletes the distributed lock information corresponding to the overtime thread stored in the cache server after receiving the clear instruction.
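A rough Java sketch of this wait-and-preempt loop (steps S215-S218), under the same Jedis assumption and using the variant in which the stored value is the time identifier plus the usage time; the helper RedisTimestamp from the earlier sketch, the method name and the sleep interval are all illustrative assumptions.

    import redis.clients.jedis.Jedis;

    public class LockPreemption {
        // Steps S215-S218: while the lock is held by someone else, check whether the holder
        // has timed out; if so, delete the stale lock and return so the caller can lock again (S202).
        static void waitAndPreempt(Jedis jedis, String lockName) throws InterruptedException {
            while (true) {
                String value = jedis.get(lockName);                      // current distributed lock information
                if (value == null) {
                    return;                                              // lock already gone: try to lock again
                }
                long now = RedisTimestamp.currentTimestampMicros(jedis); // current time from the cache server
                if (now > Long.parseLong(value)) {                       // value is timestamp + usage time (expiry)
                    jedis.del(lockName);                                 // holder timed out: release the stale lock
                    return;
                }
                Thread.sleep(10);                                        // brief pause before judging again
            }
        }
    }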
Through the above description, it can be found that the present embodiment has the following beneficial effects:
firstly, under the condition that a thread holding a distributed lock is overtime, a system exception is thrown to trigger a rollback mechanism, so that the consistency of shared resources is ensured.
Secondly, in this embodiment, the timestamps of the cache servers are uniformly used, instead of using the timestamps of the respective application servers, so that the consistency of the timestamps of the respective application servers can be ensured. In addition, the timestamp synchronization operation of each application server can be avoided, and the method has better operability and practicability. In addition, the timestamp acquired according to the method can be accurate to microsecond level, the accuracy and consistency of the timestamp are improved, and a uniform timestamp is provided for the execution of the subsequent distributed locks.
Thirdly, the fault-tolerance problem of distributed locks under Redis master-slave or cluster deployments can be neatly solved. For the master-slave case in the background art, where two threads end up believing they hold the distributed lock but the distributed lock information finally stored at the node is that of thread C2, thread C1 finds when releasing the distributed lock that the information it holds does not match the information at the node, so thread C1 is rolled back as a timeout case. Thus, the present application supports implementing distributed locks via Redis master-slave setups or clusters.
Fourthly, whether in a master-slave or a cluster distributed environment, in situations where multiple threads might otherwise obtain the distributed lock, this embodiment ensures that, once distributed lock information has been added to the cache server, only one piece of distributed lock information exists in the cache server at any moment, and the timestamp in that information ties it to exactly one thread.
Fifthly, as long as the thread holding the distributed lock behaves normally, the remaining threads cannot obtain the distributed lock. When the thread holding the distributed lock times out, the next thread whose locking attempt fails releases the timed-out lock and then rejoins the other waiting threads to compete fairly for the lock.
Referring to fig. 3, an application example of the present application is described below, taking redis as an example to implement distributed locks:
1) a distributed lock is initialized and the lock timeOut time timeOut is set.
2) When locking, firstly, the current time stamp of the redis server is obtained through the lua script, then an item with the key being lockName is created, and the value is the time stamp obtained before.
3) The lock-preemption process starts by executing the Redis instruction setnx key timestamp. If 1 is returned, the lock has been acquired and the business code is executed; if 0 is returned, the lock has not been acquired and the thread continues to wait for the lock.
4) While waiting for the lock, first judge whether the currently existing distributed lock has timed out. If it has timed out, release the lock and then perform the lock-grabbing operation; if not, continue to compete for the lock directly.
5) When the task is finished, perform the lock release operation.
The lock release operation deletes the item whose key is lockName in Redis; the delete is performed only when the value equals the timestamp stored in the current lock object. Otherwise, the lock was already released after a timeout, and an exception must be thrown to perform the rollback operation.
If a process times out after acquiring the distributed lock: if no other process has grabbed the lock, it can still finish normally and release the lock, because its lock record in Redis has not been cleared; if another process has grabbed the lock, the lock cannot be released normally, and rollback processing is performed for the timeout.
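Purely as a usage illustration, the sketches above could be combined roughly as follows; all class names, the lock name and the usage time are assumptions of this example.

    // Illustrative end-to-end flow for one thread (not part of the patent text).
    public class DistributedLockExample {
        public static void main(String[] args) throws InterruptedException {
            try (redis.clients.jedis.Jedis jedis = new redis.clients.jedis.Jedis("localhost", 6379)) {
                String lockName = "lockName";
                long usageTimeMicros = 5_000_000L;                            // 5 seconds, chosen arbitrarily
                while (true) {
                    long ts = RedisTimestamp.currentTimestampMicros(jedis);   // steps S202-S204
                    String value = LockInfo.expiryValue(ts, usageTimeMicros); // step S205
                    if (LockAcquire.tryLock(jedis, lockName, value)) {        // steps S206-S208
                        try {
                            // ... access the shared resource and execute the task operation (S209) ...
                        } finally {
                            LockRelease.releaseOrRollback(jedis, lockName, value); // steps S210-S214
                        }
                        break;
                    }
                    LockPreemption.waitAndPreempt(jedis, lockName);           // steps S215-S218
                }
            }
        }
    }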
The software method executed by the thread in the embodiment can be understood as a middleware, and is provided in the form of a third party import package. Middleware is a non-business technology component. Taking the Java language as an example, Java applications can introduce third party Jar packages for direct use without concern for the details inside. The software method executed by the thread can be independently made into a Jar package for any Java application program to call.
Of course, the software methods executed by the threads may also be combined into an online middleware. The method is released in a Web service mode, and other application programs are called in an Http mode; or issued in Socket form, and other applications are called in TCP form. The application does not limit the implementation form of the software method executed by the thread.
According to an embodiment of the present application, a distributed lock execution apparatus is provided. Referring to fig. 4, the distributed lock execution apparatus includes:
a locking unit 41, configured to generate, by the thread, a unique time identifier by using the current time obtained from the cache server, and send a locking instruction including distributed lock information to the cache server; wherein the distributed lock information includes the time identifier.
Wherein, locking unit 41 includes:
a timestamp obtaining instruction sending unit 411, configured to send a timestamp obtaining instruction to the cache server by the thread;
a feedback unit 412, configured to obtain, by the thread, a return value fed back by the cache server; wherein the return value comprises a current time of the cache server and a number of microseconds that have elapsed since the current second;
a forming unit 413, configured to form, by the thread, a current timestamp from the current time and the microsecond value, where the current timestamp is the time identifier.
Before the locking instruction including the distributed lock information is sent to the cache server, the distributed lock information is generated; this can be implemented in either of the following two manners:
The first implementation manner comprises: a first key-value-pair building unit, configured to build a key-value pair that takes the distributed lock identifier as the key and the time identifier as the value; and a first generating and storing unit, configured to generate and store distributed lock information including the key-value pair.
The second implementation manner comprises: a second key-value-pair building unit, configured to set a usage time corresponding to the distributed lock and build either a key-value pair that takes the distributed lock identifier as the key and the sum of the time identifier and the usage time as the value, or a key-value pair that takes the distributed lock identifier as the key and the merged character string of the time identifier string and the usage time string as the value; and a second generating and storing unit, configured to generate and store distributed lock information including the key-value pair.
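The time identifier of units 411-413 and the two forms of distributed lock information above can be sketched as follows; the helper names (timeIdentifier, simpleLockInfo, expiryLockInfo, mergedLockInfo) and the "seconds:microseconds" and "|"-separated value formats are illustrative assumptions only.

    import java.util.AbstractMap.SimpleEntry;
    import java.util.Map;
    import redis.clients.jedis.Jedis;

    // Illustrative helpers for building the lock's key-value pair in the two manners above.
    public class LockInfoBuilder {
        // Compose the time identifier from the cache server's TIME reply: "<seconds>:<microseconds>".
        static String timeIdentifier(Jedis jedis) {
            java.util.List<String> t = jedis.time();
            return t.get(0) + ":" + t.get(1);
        }

        // First manner: key = distributed lock identifier, value = time identifier only.
        static Map.Entry<String, String> simpleLockInfo(String lockName, String timeId) {
            return new SimpleEntry<>(lockName, timeId);
        }

        // Second manner, sum form: value = time identifier (in seconds) + usage time,
        // i.e. the instant at which the lock is considered timed out.
        static Map.Entry<String, String> expiryLockInfo(String lockName, String timeId,
                                                        long usageSeconds) {
            long expiry = Long.parseLong(timeId.split(":")[0]) + usageSeconds;
            return new SimpleEntry<>(lockName, Long.toString(expiry));
        }

        // Second manner, merged-string form: value = "<time identifier>|<usage time>".
        static Map.Entry<String, String> mergedLockInfo(String lockName, String timeId,
                                                        long usageSeconds) {
            return new SimpleEntry<>(lockName, timeId + "|" + usageSeconds);
        }
    }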
And the receiving unit 42 is configured to receive a return result sent by the cache server and corresponding to the locking instruction.
And an accessing unit 43, configured to, if the returned result indicates that the thread obtains the distributed lock, access the shared resource and execute a task operation corresponding to the shared resource by the thread.
And a releasing unit 44, configured to, after the thread finishes executing the task operation corresponding to the shared resource, release the distributed lock only when the timestamp in the locally stored distributed lock information is consistent with the time identifier in the distributed lock information stored by the cache server.
Wherein, the releasing unit 44 specifically includes:
the distributed lock information obtaining unit 441 is configured to, after the thread finishes executing the task operation corresponding to the shared resource, send a distributed lock information obtaining instruction to the cache server, and receive the current distributed lock information sent by the cache server.
The determining unit 442 is configured to determine whether a time identifier in the distributed lock information locally stored by the thread is consistent with a time identifier in the current distributed lock information.
A release distributed lock instruction sending unit 443, configured to send a release distributed lock instruction to the cache server if a time identifier in the distributed lock information locally stored by the thread is consistent with a time identifier in the current distributed lock information, so that the cache server deletes the current distributed lock information.
And a rollback unit 45, configured to, if the time identifier in the distributed lock information locally stored by the thread is inconsistent with the time identifier in the current distributed lock information, throw an exception notification message by the thread to trigger a rollback mechanism to execute a rollback operation on the thread, so that the thread rolls back to a latest state before the distributed lock is acquired.
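The release path of units 441-443 and the rollback trigger of unit 45 can be sketched as follows. For illustration the compare-and-delete is performed in a single Lua script via EVAL so that the check and the deletion happen atomically on the Redis side; the embodiment itself describes a fetch-compare-delete sequence, so the script, the class name ReleaseSketch and the exception type are assumptions of this sketch only.

    import java.util.Collections;
    import redis.clients.jedis.Jedis;

    // Sketch of the release path (units 441-443) and the rollback trigger (unit 45).
    public class ReleaseSketch {
        // Delete the lock entry only if its value still equals the caller's time identifier.
        private static final String COMPARE_AND_DELETE =
                "if redis.call('get', KEYS[1]) == ARGV[1] then " +
                "  return redis.call('del', KEYS[1]) " +
                "else return 0 end";

        // Returns normally when the lock is released; throws to trigger the rollback mechanism.
        public static void release(Jedis jedis, String lockName, String localTimeId) {
            Object deleted = jedis.eval(COMPARE_AND_DELETE,
                    Collections.singletonList(lockName),
                    Collections.singletonList(localTimeId));
            if (!(deleted instanceof Long) || (Long) deleted != 1L) {
                // The time identifiers differ: the lock was reclaimed after a timeout, so the
                // thread throws and the rollback mechanism restores its state before locking.
                throw new IllegalStateException("distributed lock lost, rolling back");
            }
        }
    }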
And the lock snatching unit 46 is configured to, if the return result indicates that the thread does not obtain the distributed lock, make the thread execute the lock snatching operation. Wherein, the lock snatching unit 46 specifically includes:
a time-out judging unit 461, configured to judge whether a thread currently occupying the distributed lock is time-out; and if the thread occupying the distributed lock at present is not overtime, re-entering the step of judging whether the thread occupying the distributed lock at present is overtime.
A release distributed lock unit 462, configured to send a release distributed lock instruction to the cache server if the thread currently occupying the distributed lock is overtime, so that the cache server deletes the distributed lock information corresponding to the overtime thread; the thread enters the locking unit 41 to execute the step that the thread generates a unique time identifier by using the current time acquired from the cache server, and sends a locking instruction including distributed lock information to the cache server.
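Taken together, units 461 and 462 form a retry loop of the following shape; the interface and method names (tryLock, holderTimedOut, deleteHolderEntry) and the brief 50 ms back-off used to avoid a busy loop are assumptions of this sketch rather than details of the embodiment.

    // Illustrative retry loop for the lock snatching unit (units 461 and 462).
    public class RobLockLoop {
        interface LockOps {
            boolean tryLock();            // locking attempt via the locking unit 41
            boolean holderTimedOut();     // timeout judgment of unit 461
            void deleteHolderEntry();     // release instruction sent by unit 462
        }

        static void robLock(LockOps ops) throws InterruptedException {
            while (!ops.tryLock()) {
                if (ops.holderTimedOut()) {
                    ops.deleteHolderEntry();   // clear the timed-out holder's lock information
                }
                Thread.sleep(50);              // brief back-off before judging the timeout again
            }
        }
    }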
In the case where the distributed lock information includes the time identifier, the judging whether the thread currently occupying the distributed lock has timed out includes:
sending a distributed lock information acquisition instruction to the cache server, and receiving distributed lock information, current time and use time of a distributed lock sent by the cache server; calculating the difference value between the current time and the time identifier in the distributed lock information; if the difference is larger than the use time, the thread occupying the distributed lock at present is overtime; and if the difference is not larger than the use time, indicating that the thread occupying the distributed lock currently does not time out.
If the distributed lock information includes a key-value pair that takes the distributed lock identifier as the key and the sum of the time identifier and the usage time as the value, the judging whether the thread currently occupying the distributed lock has timed out includes:
sending a distributed lock information acquisition instruction to the cache server, and receiving the distributed lock information and the current time sent by the cache server; extracting the sum of the time identifier and the usage time from the distributed lock information; judging whether the current time is greater than that sum; if yes, the thread currently occupying the distributed lock has timed out; if not, the thread currently occupying the distributed lock has not timed out.
If the distributed lock information includes, with the distributed lock identifier as the key, the merged character string of the time identifier string and the usage time string, the judging whether the thread currently occupying the distributed lock has timed out includes:
sending a distributed lock information acquisition instruction to the cache server, and receiving the distributed lock information and the current time sent by the cache server; calculating the difference between the current time and the time identifier in the distributed lock information; if the difference is greater than the usage time, determining that the thread currently occupying the distributed lock has timed out; and if the difference is not greater than the usage time, determining that the thread currently occupying the distributed lock has not timed out.
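The three timeout judgments above reduce to two comparisons, sketched below; the method names and the use of whole seconds are assumptions made for brevity.

    // Illustrative timeout checks matching the storage forms described above.
    public final class TimeoutCheck {
        // Value stores the time identifier, with the usage time kept separately or merged in:
        // the holder has timed out when more than the usage time has elapsed since locking.
        static boolean timedOut(long currentSeconds, long timeIdSeconds, long usageSeconds) {
            return (currentSeconds - timeIdSeconds) > usageSeconds;
        }

        // Value stores the sum of the time identifier and the usage time, i.e. an expiry instant:
        // the holder has timed out when the current time has passed that instant.
        static boolean timedOutBySum(long currentSeconds, long expirySeconds) {
            return currentSeconds > expirySeconds;
        }
    }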
For a specific scheme of the distributed lock execution apparatus, reference may be made to the embodiment shown in fig. 2, and details are not described here.
The present application further provides an application server, comprising:
the processor is used for generating, by a thread, a unique time identifier by using the current time acquired from the cache server, and sending a locking instruction comprising distributed lock information to the cache server; wherein the distributed lock information includes the time identifier; receiving a return result which is sent by the cache server and corresponds to the locking instruction; if the returned result indicates that the thread obtains the distributed lock, the thread accesses the shared resource and executes the task operation corresponding to the shared resource; after the task operation corresponding to the shared resource is finished, the thread releases the distributed lock only under the condition that the timestamp in the locally stored distributed lock information is consistent with the time identifier in the distributed lock information stored by the cache server; if the returned result indicates that the thread does not acquire the distributed lock, the thread executes the lock-robbing operation;
a memory to store the distributed lock information.
The processor executing the step in which the thread generates a unique time identifier by using the current time acquired from the cache server specifically comprises the following steps: the thread sends a timestamp acquisition instruction to the cache server; the thread acquires a return value fed back by the cache server, wherein the return value comprises the current time of the cache server and the number of microseconds that have elapsed within the current second; and the thread composes the current time and the microsecond value into a current timestamp, which is the time identifier.
Before sending the locking instruction including the distributed lock information to the cache server, the processor further includes generating the distributed lock information, which may include the following two implementation manners:
the first implementation mode comprises the following steps: constructing a key value pair which takes a distributed lock identifier as a key and takes the time identifier as a value; distributed lock information including the key-value pair is generated and stored.
The second implementation manner comprises the following steps: setting a usage time corresponding to the distributed lock; constructing a key-value pair which takes the distributed lock identifier as the key and the sum of the time identifier and the usage time as the value, or constructing a key-value pair which takes the distributed lock identifier as the key and the merged character string of the time identifier string and the usage time string as the value; and generating and storing distributed lock information including the key-value pair.
The processor executing the step in which the thread releases the distributed lock only when the timestamp in the locally stored distributed lock information is consistent with the time identifier in the distributed lock information stored in the cache server specifically includes:
after the execution of the task operation corresponding to the shared resource is finished, the thread sends a distributed lock information acquisition instruction to the cache server and receives the current distributed lock information sent by the cache server; judging whether the time identifier in the distributed lock information locally stored by the thread is consistent with the time identifier in the current distributed lock information; and if the time identifier in the distributed lock information locally stored by the thread is consistent with the time identifier in the current distributed lock information, sending a distributed lock releasing instruction to the cache server so that the cache server can delete the current distributed lock information.
And the processor is further used for throwing an exception prompt message by the thread if the time identifier in the distributed lock information locally stored by the thread is inconsistent with the time identifier in the current distributed lock information, so as to trigger a rollback mechanism to execute a rollback operation on the thread, so that the thread is rolled back to a latest state before the distributed lock is obtained.
The processor executing the step in which the thread executes the lock preemption operation specifically comprises the following steps: judging whether the thread currently occupying the distributed lock has timed out; if the thread currently occupying the distributed lock has timed out, sending a distributed lock releasing instruction to the cache server so that the cache server deletes the distributed lock information corresponding to the timed-out thread, and then re-entering the step in which the thread generates a unique time identifier by using the current time acquired from the cache server and sends a locking instruction comprising the distributed lock information to the cache server; and if the thread currently occupying the distributed lock has not timed out, re-entering the step of judging whether the thread currently occupying the distributed lock has timed out.
In the case where the distributed lock information includes the time identifier, the processor executing the step of judging whether the thread currently occupying the distributed lock has timed out specifically includes:
sending a distributed lock information acquisition instruction to the cache server, and receiving distributed lock information, current time and use time of a distributed lock sent by the cache server; calculating the difference value between the current time and the time identifier in the distributed lock information; if the difference is larger than the use time, the thread occupying the distributed lock at present is overtime; and if the difference is not larger than the use time, indicating that the thread occupying the distributed lock currently does not time out.
When the distributed lock information includes a key-value pair that takes the distributed lock identifier as the key and the sum of the time identifier and the usage time as the value, the processor judging whether the thread currently occupying the distributed lock has timed out specifically includes:
sending a distributed lock information acquisition instruction to the cache server, and receiving the distributed lock information and the current time sent by the cache server; extracting the sum of the time identifier and the usage time from the distributed lock information; judging whether the current time is greater than that sum; if yes, the thread currently occupying the distributed lock has timed out; if not, the thread currently occupying the distributed lock has not timed out.
If the distributed lock information includes, with the distributed lock identifier as the key, the merged character string of the time identifier string and the usage time string, the judging whether the thread currently occupying the distributed lock has timed out includes:
sending a distributed lock information acquisition instruction to the cache server, and receiving the distributed lock information and the current time sent by the cache server; calculating the difference between the current time and the time identifier in the distributed lock information; if the difference is greater than the usage time, determining that the thread currently occupying the distributed lock has timed out; and if the difference is not greater than the usage time, determining that the thread currently occupying the distributed lock has not timed out.
The present application further provides a storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the steps of the distributed lock execution method: generating, by a thread, a unique time identifier by using the current time acquired from a cache server, and sending a locking instruction comprising distributed lock information to the cache server; wherein the distributed lock information includes the time identifier; receiving a return result which is sent by the cache server and corresponds to the locking instruction; if the returned result indicates that the thread obtains the distributed lock, the thread accesses the shared resource and executes the task operation corresponding to the shared resource; after the task operation corresponding to the shared resource is finished, the thread releases the distributed lock only under the condition that the timestamp in the locally stored distributed lock information is consistent with the time identifier in the distributed lock information stored by the cache server; and if the returned result indicates that the thread does not acquire the distributed lock, the thread executes the lock-robbing operation.
Specific implementation of each step can be referred to the embodiment shown in fig. 2 and is not described herein again.
The functions described in the method of the present embodiment, if implemented in the form of software functional units and sold or used as independent products, may be stored in a storage medium readable by a computing device. Based on such understanding, part of the contribution to the prior art of the embodiments of the present application or part of the technical solution may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computing device (which may be a personal computer, a server, a mobile computing device or a network device) to execute all or part of the steps of the method described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (21)

1. A distributed lock execution method is applied to a plurality of threads which need to access the same shared resource in the process of an application server, and comprises the following steps:
the method comprises the steps that a thread generates a unique time identifier by using current time acquired from a cache server, distributed lock information is constructed and stored on the basis of the time identifier, and a locking instruction comprising the distributed lock information is sent to the cache server, so that the cache server stores the distributed lock information when determining to issue a distributed lock to the thread; receiving a return result which is sent by the cache server and corresponds to the locking instruction;
if the returned result indicates that the thread obtains the distributed lock, the thread accesses the shared resource and executes task operation corresponding to the shared resource;
after the execution task operation corresponding to the shared resource is finished, the thread releases the distributed lock only under the condition that the locally stored distributed lock information is consistent with the distributed lock information stored by the cache server; the method comprises the following steps: after the execution of the task operation corresponding to the shared resource is finished, the thread sends a distributed lock information acquisition instruction to the cache server and receives the current distributed lock information sent by the cache server; judging whether the time identifier in the distributed lock information locally stored by the thread is consistent with the time identifier in the current distributed lock information; if not, the thread throws an exception prompt message to trigger a rollback mechanism to execute a rollback operation on the thread so that the thread rolls back to the nearest state before the distributed lock is obtained;
if the returned result indicates that the thread does not acquire the distributed lock, the thread executes the lock robbing operation;
and when other threads continue to execute the step of judging whether the distributed lock has timed out, the other threads release the distributed lock in the case that the distributed lock occupied by the thread has timed out.
2. The method of claim 1, wherein the thread generating a unique time identification using a current time obtained from a cache server comprises:
the thread sends a timestamp acquisition instruction to the cache server;
the thread acquires a return value fed back by the cache server; wherein the return value comprises a current time of the cache server and a number of microseconds that have elapsed since the current second;
and the thread forms a current time stamp by the current time and the microsecond value, and the current time stamp is the time identifier.
3. The method of claim 2, wherein the constructing and storing distributed lock information based on the time identification comprises:
constructing a key value pair which takes a distributed lock identifier as a key and takes the time identifier as a value;
distributed lock information including the key-value pair is generated and stored.
4. The method of claim 2, wherein the constructing and storing distributed lock information based on the time identification further comprises:
setting the use time corresponding to the distributed lock;
constructing a key-value pair which takes the distributed lock identifier as a key and takes the sum of the time identifier and the usage time as a value; or, constructing a key-value pair which takes the distributed lock identifier as a key and takes a merged character string of the character string of the time identifier and the character string of the usage time as a value;
distributed lock information including the key-value pair is generated and stored.
5. The method of claim 1,
and if the time identifier in the distributed lock information locally stored by the thread is consistent with the time identifier in the current distributed lock information, sending a distributed lock releasing instruction to the cache server so that the cache server can delete the distributed lock information.
6. The method of claim 1, wherein the thread performs a rob lock operation comprising:
judging whether the thread occupying the distributed lock is overtime or not;
if the thread occupying the distributed lock is overtime, sending a distributed lock releasing instruction to the cache server so that the cache server can delete the distributed lock information corresponding to the overtime thread; the thread enters the step that the thread generates a unique time identifier by using the current time acquired from the cache server, and a locking instruction comprising distributed locking information is sent to the cache server;
and if the thread occupying the distributed lock at present is not overtime, re-entering the step of judging whether the thread occupying the distributed lock at present is overtime.
7. The method of claim 6, wherein the distributed lock information comprises a key-value pair keyed by a distributed lock identification and valued by the time identification; the determining whether the thread currently occupying the distributed lock is overtime includes:
sending a distributed lock information acquisition instruction to the cache server, and receiving distributed lock information, current time and use time of a distributed lock sent by the cache server;
calculating the difference value between the current time and the time identifier in the distributed lock information;
if the difference is larger than the use time, the thread occupying the distributed lock at present is overtime;
and if the difference is not larger than the use time, indicating that the thread occupying the distributed lock currently does not time out.
8. The method of claim 6, wherein the distributed lock information comprises a key-value pair which takes the distributed lock identifier as a key and takes the sum of the time identifier and the usage time as a value, and the determining whether the thread currently occupying the distributed lock has timed out comprises:
sending a distributed lock information acquisition instruction to a cache server, and receiving distributed lock information and current time sent by the cache server;
extracting the sum of the time identifier and the usage time from the distributed lock information;
judging whether the current time is greater than the sum;
if yes, indicating that the thread occupying the distributed lock is overtime;
if not, the thread occupying the distributed lock is not overtime.
9. The method of claim 6, wherein the distributed lock information includes a merged string of a string of the time identifier and a string of the usage time with the distributed lock identifier as a key, and the determining whether the thread currently occupying the distributed lock is timed out includes:
sending a distributed lock information acquisition instruction to a cache server, and receiving distributed lock information and current time sent by the cache server;
calculating the difference value between the current time and the time identifier in the distributed lock information;
if the difference is greater than the usage time, determining that the thread currently occupying the distributed lock has timed out;
and if the difference is not greater than the usage time, determining that the thread currently occupying the distributed lock has not timed out.
10. A distributed lock execution device, integrated in a process of an application server, for multiple threads that need to access the same shared resource, the distributed lock execution device comprising:
the locking unit is used for generating a unique time identifier by a thread by using the current time acquired from the cache server, constructing and storing distributed lock information based on the time identifier, and sending a locking instruction comprising the distributed lock information to the cache server so that the cache server stores the distributed lock information when determining to issue a distributed lock to the thread; the receiving unit is used for receiving a return result which is sent by the cache server and corresponds to the locking instruction;
the access unit is used for accessing the shared resource and executing the task operation corresponding to the shared resource by the thread if the returned result indicates that the thread obtains the distributed lock;
a releasing unit, configured to release the distributed lock only when the locally stored distributed lock information is consistent with the distributed lock information stored by the cache server after the thread finishes executing the task operation corresponding to the shared resource; the method comprises the following steps: after the execution of the task operation corresponding to the shared resource is finished, the thread sends a distributed lock information acquisition instruction to the cache server and receives the current distributed lock information sent by the cache server; judging whether the time identifier in the distributed lock information locally stored by the thread is consistent with the time identifier in the current distributed lock information; if not, the thread throws an exception prompt message to trigger a rollback mechanism to execute a rollback operation on the thread so that the thread rolls back to the nearest state before the distributed lock is obtained;
the lock robbing unit is used for executing the lock robbing operation by the thread if the return result indicates that the thread does not obtain the distributed lock;
and when other threads continue to execute the step of judging whether the distributed lock has timed out, the other threads release the distributed lock in the case that the distributed lock occupied by the thread has timed out.
11. The apparatus as claimed in claim 10, wherein the locking unit, in which the thread generates the unique time identifier using the current time obtained from the cache server, specifically comprises:
the timestamp obtaining instruction sending unit is used for sending a timestamp obtaining instruction to the cache server by the thread;
the feedback unit is used for the thread to acquire a return value fed back by the cache server; wherein the return value comprises a current time of the cache server and a number of microseconds that have elapsed since the current second;
the composition unit is used for the thread to compose a current time stamp from the current time and the microsecond value, and the current time stamp is the time identifier;
and the sending unit is used for sending a locking instruction comprising distributed lock information to the cache server.
12. The apparatus of claim 11, wherein constructing and storing distributed lock information based on the time identification in the locking unit comprises:
the first key value pair building unit is used for building a key value pair which takes a distributed lock identifier as a key and takes the time identifier as a value;
a first generating and storing unit to generate and store distributed lock information including the key-value pair.
13. The apparatus of claim 11, wherein constructing and storing distributed lock information based on the time identification in the locking unit comprises:
the second key-value-pair building unit is used for setting the usage time corresponding to the distributed lock, and building a key-value pair which takes the distributed lock identifier as a key and takes the sum of the time identifier and the usage time as a value; or, constructing a key-value pair which takes the distributed lock identifier as a key and takes a merged character string of the character string of the time identifier and the character string of the usage time as a value;
a second generating and storing unit configured to generate and store distributed lock information including the key-value pair.
14. The apparatus of claim 10, further comprising:
and a distributed lock releasing instruction sending unit, configured to send a distributed lock releasing instruction to the cache server if a time identifier in the distributed lock information locally stored by the thread is consistent with a time identifier in the current distributed lock information, so that the cache server deletes the distributed lock information.
15. The apparatus of claim 10, wherein the preemptive lock unit comprises:
the judging overtime unit is used for judging whether the thread occupying the distributed lock is overtime or not;
the distributed lock releasing unit is used for sending a distributed lock releasing instruction to the cache server if the thread occupying the distributed lock is overtime so that the cache server can delete the distributed lock information corresponding to the overtime thread; the thread enters the step that the thread generates a unique time identifier by using the current time acquired from the cache server, and a locking instruction comprising distributed locking information is sent to the cache server;
and if the thread occupying the distributed lock does not time out, re-entering the judging time-out unit.
16. The apparatus of claim 15, wherein distributed lock information comprises a key-value pair keyed by a distributed lock identification and valued by the time identification; the judging timeout unit includes:
sending a distributed lock information acquisition instruction to the cache server, and receiving distributed lock information, current time and use time of a distributed lock sent by the cache server; calculating the difference value between the current time and the time identifier in the distributed lock information; if the difference is larger than the use time, the thread occupying the distributed lock at present is overtime; and if the difference is not larger than the use time, indicating that the thread occupying the distributed lock currently does not time out.
17. The apparatus of claim 15, wherein the distributed lock information includes a key-value pair which takes the distributed lock identifier as a key and takes the sum of the time identifier and the usage time as a value, and the judging timeout unit comprises:
sending a distributed lock information acquisition instruction to the cache server, and receiving the distributed lock information and the current time sent by the cache server; extracting the sum of the time identifier and the usage time from the distributed lock information; judging whether the current time is greater than the sum; if yes, indicating that the thread occupying the distributed lock has timed out; if not, indicating that the thread occupying the distributed lock has not timed out.
18. The apparatus as claimed in claim 15, wherein the distributed lock information includes, with the distributed lock identifier as a key, a merged character string of the character string of the time identifier and the character string of the usage time, and the judging timeout unit comprises:
sending a distributed lock information acquisition instruction to the cache server, and receiving the distributed lock information and the current time sent by the cache server; calculating the difference between the current time and the time identifier in the distributed lock information; if the difference is greater than the usage time, determining that the thread currently occupying the distributed lock has timed out; and if the difference is not greater than the usage time, determining that the thread currently occupying the distributed lock has not timed out.
19. A distributed lock execution system is characterized by comprising a cache server and one or more application servers connected with the cache server; the process of the application server comprises a plurality of threads needing to access the same shared resource;
the thread in the application server process is used for generating a unique time identifier by the thread by using the current time acquired from the cache server, constructing and storing distributed lock information based on the time identifier, and sending a locking instruction comprising the distributed lock information to the cache server so that the cache server stores the distributed lock information when determining to issue a distributed lock to the thread; receiving a return result which is sent by the cache server and corresponds to the locking instruction; if the returned result indicates that the thread obtains the distributed lock, the thread accesses the shared resource and executes task operation corresponding to the shared resource; after the execution task operation corresponding to the shared resource is finished, the thread releases the distributed lock only under the condition that the locally stored distributed lock information is consistent with the distributed lock information stored by the cache server; the method comprises the following steps: after the execution of the task operation corresponding to the shared resource is finished, the thread sends a distributed lock information acquisition instruction to the cache server and receives the current distributed lock information sent by the cache server; judging whether the time identifier in the distributed lock information locally stored by the thread is consistent with the time identifier in the current distributed lock information; if not, the thread throws an exception prompt message to trigger a rollback mechanism to execute a rollback operation on the thread so that the thread rolls back to the nearest state before the distributed lock is obtained; if the returned result indicates that the thread does not acquire the distributed lock, the thread executes the lock robbing operation; and when other threads continuously execute to judge whether the distributed lock is timed out, releasing the distributed lock by the other threads under the condition that the distributed lock occupied by the thread is timed out.
20. An application server, comprising:
the processor is used for generating a unique time identifier by a thread by using the current time acquired from the cache server, constructing and storing distributed lock information based on the time identifier, and sending a locking instruction comprising the distributed lock information to the cache server; receiving a return result which is sent by the cache server and corresponds to the locking instruction; if the returned result indicates that the thread obtains the distributed lock, the thread accesses the shared resource and executes task operation corresponding to the shared resource; after the execution task operation corresponding to the shared resource is finished, the thread releases the distributed lock only under the condition that the locally stored distributed lock information is consistent with the distributed lock information stored by the cache server; the method comprises the following steps: after the execution of the task operation corresponding to the shared resource is finished, the thread sends a distributed lock information acquisition instruction to the cache server and receives the current distributed lock information sent by the cache server; judging whether the time identifier in the distributed lock information locally stored by the thread is consistent with the time identifier in the current distributed lock information; if not, the thread throws an exception prompt message to trigger a rollback mechanism to execute a rollback operation on the thread so that the thread rolls back to the nearest state before the distributed lock is obtained; if the returned result indicates that the thread does not acquire the distributed lock, the thread executes the lock robbing operation; when other threads continuously execute to judge whether the distributed lock is overtime, the other threads release the distributed lock under the condition that the threads occupy the distributed lock and are overtime;
a memory to store the distributed lock information.
21. A storage medium having stored thereon a computer program for execution by a processor for performing the steps of the distributed lock execution method of any of claims 1-9.
CN201810688442.8A 2018-06-28 2018-06-28 Distributed lock execution method, device and system, application server and storage medium Active CN108874552B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810688442.8A CN108874552B (en) 2018-06-28 2018-06-28 Distributed lock execution method, device and system, application server and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810688442.8A CN108874552B (en) 2018-06-28 2018-06-28 Distributed lock execution method, device and system, application server and storage medium

Publications (2)

Publication Number Publication Date
CN108874552A CN108874552A (en) 2018-11-23
CN108874552B true CN108874552B (en) 2021-09-21

Family

ID=64296497

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810688442.8A Active CN108874552B (en) 2018-06-28 2018-06-28 Distributed lock execution method, device and system, application server and storage medium

Country Status (1)

Country Link
CN (1) CN108874552B (en)

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111258976A (en) * 2018-12-03 2020-06-09 北京京东尚科信息技术有限公司 Distributed lock implementation method, system, device and storage medium
CN109743366B (en) * 2018-12-21 2022-04-05 苏宁易购集团股份有限公司 Resource locking method, device and system for multi-living scene
CN110147386A (en) * 2019-04-16 2019-08-20 平安科技(深圳)有限公司 The caching method of data, device, computer equipment
CN110162384B (en) * 2019-04-19 2023-04-28 深圳壹账通智能科技有限公司 Redis distributed lock-based timeout time dynamic adjustment method and system
CN110377405A (en) * 2019-06-17 2019-10-25 平安科技(深圳)有限公司 The concurrent processing method and relevant device of lightweight request
CN110677465B (en) * 2019-09-09 2022-11-22 北京信安世纪科技股份有限公司 Control method and device of distributed lock
CN110866021B (en) * 2019-10-16 2023-06-30 微梦创科网络科技(中国)有限公司 Method and system for locking and unlocking distributed lock based on Go language
CN110888858B (en) * 2019-10-29 2023-06-30 北京奇艺世纪科技有限公司 Database operation method and device, storage medium and electronic device
CN111026807A (en) * 2019-11-25 2020-04-17 深圳壹账通智能科技有限公司 Distributed lock synchronization method and device, computer equipment and readable storage medium
CN110941496A (en) * 2019-12-16 2020-03-31 上海米哈游天命科技有限公司 Distributed lock implementation method and device, computer equipment and storage medium
CN111158878B (en) * 2019-12-30 2023-08-29 北京三快在线科技有限公司 Resource transfer request thread control method, device and storage medium
CN111208949B (en) * 2020-01-13 2020-12-25 上海依图网络科技有限公司 Method for determining data rollback time period in distributed storage system
CN111258771A (en) * 2020-01-16 2020-06-09 青木数字技术股份有限公司 Method and system for realizing distributed lock based on Raft algorithm
CN111259030A (en) * 2020-01-16 2020-06-09 平安医疗健康管理股份有限公司 Thread execution method and device based on distributed lock and storage medium
CN113138992A (en) * 2020-01-19 2021-07-20 上海静客网络科技有限公司 Method for constructing distributed lock
CN111324622A (en) * 2020-02-20 2020-06-23 中国建设银行股份有限公司 Resource first-aid-purchase processing method, device, equipment and storage medium
CN111405015B (en) * 2020-03-09 2022-09-30 中国建设银行股份有限公司 Data processing method, device, equipment and storage medium
CN111400330B (en) * 2020-03-13 2024-04-09 深圳前海微众银行股份有限公司 Task processing method, device, equipment and computer readable storage medium
CN111552474B (en) * 2020-03-26 2022-03-18 平安医疗健康管理股份有限公司 Processing method and device for executing distributed lock operation
CN111352948B (en) * 2020-03-31 2023-12-26 中国建设银行股份有限公司 Data processing method, device, equipment and storage medium
CN111459963B (en) * 2020-04-07 2024-03-15 中国建设银行股份有限公司 Concurrent processing method and device for core accounting transaction
CN111722934B (en) * 2020-06-12 2022-08-16 南京甄视智能科技有限公司 Distributed timing task processing method and system
CN111880909A (en) * 2020-07-27 2020-11-03 平安科技(深圳)有限公司 Distributed data publishing method and device
CN112650776A (en) * 2020-11-03 2021-04-13 重庆盈熙横纵信息技术有限公司 Redis-based distributed lock implementation method, storage device and server
CN112486696A (en) * 2020-12-11 2021-03-12 上海悦易网络信息技术有限公司 Method and equipment for acquiring distributed lock
CN112835982B (en) * 2021-02-26 2023-03-24 浪潮云信息技术股份公司 Table lock implementation method based on distributed database
CN113282580A (en) * 2021-04-28 2021-08-20 青岛海尔科技有限公司 Method, storage medium and server for executing timed task
CN113127570B (en) * 2021-05-18 2022-11-04 上海莉莉丝科技股份有限公司 Data operation method, system, equipment and storage medium of distributed server
CN113342507B (en) * 2021-07-07 2024-03-29 湖南四方天箭信息科技有限公司 Distributed lock service realization method and device and computer equipment
CN114679464B (en) * 2022-03-24 2024-02-13 深圳九有数据库有限公司 Data rollback method, device, equipment and storage medium based on distributed lock
CN115878696B (en) * 2023-03-06 2023-04-28 中国西安卫星测控中心 High availability method and device for distributed data processing cluster
CN116389579A (en) * 2023-03-22 2023-07-04 安芯网盾(北京)科技有限公司 Report generation method and system based on micro-service
CN117520362B (en) * 2024-01-04 2024-04-12 北京佳讯飞鸿电气股份有限公司 Data management system, method, equipment and storage medium based on distributed lock

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1798645A1 (en) * 2005-12-13 2007-06-20 Deutsche Thomson-Brandt Gmbh Method and apparatus for protecting a data set
CN104268024A (en) * 2014-09-24 2015-01-07 浪潮(北京)电子信息产业有限公司 Method and device for achieving process mutual exclusion
CN107656815A (en) * 2016-07-26 2018-02-02 北京京东尚科信息技术有限公司 Method and apparatus for controlling distributed lock

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1798645A1 (en) * 2005-12-13 2007-06-20 Deutsche Thomson-Brandt Gmbh Method and apparatus for protecting a data set
CN104268024A (en) * 2014-09-24 2015-01-07 浪潮(北京)电子信息产业有限公司 Method and device for achieving process mutual exclusion
CN107656815A (en) * 2016-07-26 2018-02-02 北京京东尚科信息技术有限公司 Method and apparatus for controlling distributed lock

Also Published As

Publication number Publication date
CN108874552A (en) 2018-11-23

Similar Documents

Publication Publication Date Title
CN108874552B (en) Distributed lock execution method, device and system, application server and storage medium
CN110389900B (en) Distributed database cluster testing method and device and storage medium
CN109101341B (en) Distribution method and equipment of distributed lock
CN110888858B (en) Database operation method and device, storage medium and electronic device
WO2018045758A1 (en) Data storage method and data centre
CN109173270B (en) Game service system and implementation method
CN110619226A (en) Platform-based data processing method, system, equipment and storage medium
CN107423942B (en) Service transfer method and device
WO2021107988A1 (en) Distributed processing of transactions in a network using timestamps
CN110032424B (en) Method and device for realizing distributed lock
CN112486695A (en) Distributed lock implementation method under high concurrency service
US20230098963A1 (en) Object processing method and apparatus, computer device, and storage medium
CN111258771A (en) Method and system for realizing distributed lock based on Raft algorithm
CN107463390B (en) Software upgrading method and upgrading server
CN110809051B (en) Service data processing method and system
CN112667409A (en) Implementation method of reentrant distributed exclusive lock
CN111626606A (en) Business rule processing method, device, equipment, system and storage medium
CN111143041A (en) Data consistency method, distributed coordinator and central coordinator
Osrael et al. Axis2-based replication middleware forweb services
CN115086355A (en) User information processing method and device based on block chain
CN101751292A (en) Method for realizing consistency function of multimachine core data in ATC (automatic timing corrector) system
CN112565467A (en) Service processing method, device and storage medium
CN113360272A (en) Access processing method and device of shared resource, electronic equipment and storage medium
CN117608766B (en) Distributed lock processing method, device, storage medium and system
CN115242499B (en) Network request method, network request device, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20200731

Address after: 310000 2 6th Floor, No. 3588 Jiangnan Avenue, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Hangzhou Yunyi Network Technology Co.,Ltd.

Address before: 310053 9 floor, 2 building, 3588 Jiangnan Road, Binjiang District, Hangzhou, Zhejiang.

Applicant before: HANGZHOU YUN-IN NETWORK TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant