CN112799836A - Distributed lock system architecture - Google Patents

Distributed lock system architecture

Info

Publication number
CN112799836A
Authority
CN
China
Prior art keywords
lock
distribution server
locking
module
request
Prior art date
Legal status
Pending
Application number
CN202110108739.4A
Other languages
Chinese (zh)
Inventor
杨金涛
刘远
杨森
郭镔
Current Assignee
Beijing Minglue Zhaohui Technology Co Ltd
Original Assignee
Beijing Minglue Zhaohui Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Minglue Zhaohui Technology Co Ltd filed Critical Beijing Minglue Zhaohui Technology Co Ltd
Priority to CN202110108739.4A priority Critical patent/CN112799836A/en
Publication of CN112799836A publication Critical patent/CN112799836A/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/50 Indexing scheme relating to G06F 9/50
    • G06F 2209/5018 Thread allocation

Abstract

The invention provides a distributed lock system architecture, which comprises: a plurality of lock application clients, each providing a client interface for applying for a lock and sending locking or unlocking requests through that interface; a plurality of lock storage servers for storing lock data information; and a lock distribution server cluster, communicatively connected to the plurality of lock application clients and the plurality of lock storage servers, the lock distribution servers within the cluster also being communicatively connected to one another. The cluster receives locking or unlocking requests, accesses the lock data information on the lock storage servers, processes the requests, and sends the access results to the requesting lock application client. Because the architecture separates the lock's logic processing from the lock's data service, the two no longer need to be handled by the same server; this improves the flexibility, scalability and resource utilization of the distributed lock system architecture and shortens fault recovery time.

Description

Distributed lock system architecture
Technical Field
The invention relates to the technical field of distributed systems, in particular to a distributed lock system architecture.
Background
With the rapid development of the internet industry, the era of cloud computing has arrived. In this era, most services need to be deployed in distributed environments, and many underlying distributed services must maintain high consistency and high reliability of data. To improve the reliability and access speed of data, distributed systems often keep multiple replicas of a piece of data, distributed across different servers. When writing data, a concurrency control technique is required to guarantee that only one process writes a given piece of data at a time; current concurrency control mainly adopts lock-based methods.
Currently, the main existing distributed lock technologies are Chubby, Zookeeper, Redis and Memcached. Chubby is a lock service designed by Google to let loosely coupled small machines in a high-speed network synchronize their behavior or reach agreement on certain data; it is a lock service independent of the data itself, aimed at providing availability to a large number of clients. Apache Zookeeper is an open-source implementation modeled on Chubby, with essentially the same use cases, advantages and disadvantages. Redis is a high-performance key-value NoSQL database that supports a family of instructions ending in NX ("set if not exists"), with which a distributed lock mechanism can be implemented. For example, a client that wants to obtain a lock named M issues the instruction setNX M value: a return of 1 means the lock was acquired, while a return of 0 means the lock is occupied by another process. Memcached is a key-value in-memory database, often used as a distributed cache system, that implements distributed locks through its add instruction: for a given lock, a successful add means the lock was acquired, while a failed add means the lock is occupied.
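The set-if-not-exists pattern described above can be sketched against an in-memory stand-in for Redis. The `FakeRedis` class below is a hypothetical substitute for a real client (e.g. redis-py); only the SETNX semantics matter here:

```python
class FakeRedis:
    """Minimal in-memory stand-in for a key-value store with SETNX semantics."""
    def __init__(self):
        self._data = {}

    def setnx(self, key, value):
        # Set only if the key does not yet exist; return 1 on success,
        # 0 if the key (i.e. the lock) is already occupied.
        if key in self._data:
            return 0
        self._data[key] = value
        return 1

    def delete(self, key):
        self._data.pop(key, None)


def try_acquire(store, lock_name, owner):
    """Return True if the caller obtained the lock, False if it is held."""
    return store.setnx(lock_name, owner) == 1


store = FakeRedis()
assert try_acquire(store, "M", "process-1")      # first caller wins
assert not try_acquire(store, "M", "process-2")  # lock M is occupied
store.delete("M")                                # release the lock
assert try_acquire(store, "M", "process-2")      # now acquirable again
```

Note that, as the next paragraph points out, a real Redis offers no notification mechanism for locks, so `process-2` would have to poll `try_acquire` in a loop.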
However, each of these technologies has drawbacks. Chubby is designed primarily for availability to a large number of clients; other factors such as throughput, response time and scalability are secondary considerations. Because Google's distributed services tend to use long transactions, Chubby's response time and failover time are both long. Moreover, Chubby is not only a distributed lock service: it also addresses data management problems commonly encountered in distributed applications, such as unified naming, state synchronization and cluster management. It solves too many problems at once and is too heavyweight when only a distributed lock service is needed. Redis is a pure NoSQL database and offers no other support for implementing lock logic, so a lock applicant can only poll Redis continuously; this wastes the applicant's resources and increases the load on Redis, and if every client thread needs its own Redis connection, Redis resources are quickly exhausted. Memcached is similar to Redis in that a lock applicant can only poll it, wasting resources; in addition, the servers in a Memcached cluster do not communicate with one another, so Memcached suffers from the single point of failure problem.
Disclosure of Invention
To solve the technical problems of prior-art distributed locks, namely long fault recovery time, poor system scalability and wasted resources, the invention provides a distributed lock system architecture that separates the lock's logic processing from the lock's data service. Since the two no longer need to be handled by the same server, the architecture improves flexibility, system scalability and resource utilization, and shortens fault recovery time.
The invention provides a distributed lock system architecture, comprising:
a plurality of lock application clients, each used to provide a client interface for applying for a lock and to send locking or unlocking requests through that interface;
a plurality of lock storage servers, used to store lock data information;
a lock distribution server cluster, communicatively connected to the plurality of lock application clients and the plurality of lock storage servers, the lock distribution servers within the cluster also being communicatively connected to one another; the cluster is used to receive locking or unlocking requests, access the lock data information on the lock storage servers, process the requests, and send the access results to the requesting lock application client.
In the foregoing distributed lock system architecture, each lock application client includes:
the system comprises a client interface module, a lock processing module and a lock processing module, wherein the client interface module is used for providing a client interface for applying for lock and sending a locking or unlocking request of a process through the client interface;
and the locking/unlocking module is in communication connection with the client interface module and is used for receiving the locking or unlocking request and sending the locking or unlocking request to the lock distribution server.
In the foregoing distributed lock system architecture, each lock application client further includes:
and the load balancing module is in communication connection with the locking/unlocking module and the lock distribution server cluster and is used for receiving the locking or unlocking request and sending the locking or unlocking request to the lock distribution server by adopting a load balancing method.
In the foregoing distributed lock system architecture, sending the locking or unlocking request to one of the lock distribution servers by a load balancing method specifically includes:
the load balancing module periodically sends a server list request to a lock distribution server to acquire the first lock distribution server list stored by that server;
when the name of a new lock distribution server appears in the first lock distribution server list, updating the second lock distribution server list stored by the lock application client based on the first list;
and establishing a communication connection with the new lock distribution server based on the updated second list, and sending the locking or unlocking request to either an existing lock distribution server or the new one.
In the foregoing distributed lock system architecture, each lock application client further includes:
a deadlock prevention module, communicatively connected to the locking/unlocking module, for storing a set of acquired lock processes; when a process applies for a locking request, if the lock to be added is in the set, the locking request is forbidden to be applied; otherwise, adding the locks to be added into the set; when a process applies for an unlock request, the locks that need to be released are removed from the set.
In the foregoing distributed lock system architecture, each lock application client further includes:
the fault detection module is in communication connection with the load balancing module and the lock distribution server cluster and is used for receiving fault information of the load balancing module and deleting the name of the corresponding lock distribution server in the second lock distribution server list according to the fault information;
the system is further configured to periodically and respectively send heartbeat packets to a plurality of lock distribution servers, and acquire a first lock distribution server list stored by the lock distribution servers;
when the returned result of one heartbeat packet is wrong, deleting the name of the corresponding lock distribution server in the second lock distribution server list;
and when the first lock distribution server list stored by one lock distribution server and the first lock distribution server list stored by the other lock distribution servers have different names of the lock distribution servers, deleting the different names of the lock distribution servers from the second lock distribution server list.
In the foregoing distributed lock system architecture, each lock distribution server includes:
a maintenance cluster module, communicatively connected to the load balancing module, the fault detection module, and the remaining lock distribution servers in the lock distribution server cluster, for sending the first lock distribution server list to the load balancing module according to the server list request, and sending the first lock distribution server list to the fault detection module according to the heartbeat packet;
the system is also used for establishing communication connection with a new lock distribution server when the new lock distribution server joins the lock distribution server cluster, updating the first lock distribution server list and sending the updated first lock distribution server list to the balanced load module according to the server list request;
and the lock logic processing module is in communication connection with the load balancing module and the plurality of lock storage servers and is used for receiving the locking or unlocking request, accessing a plurality of lock data information of the lock storage servers, processing the locking or unlocking request and sending an access result to the lock application client.
In the foregoing distributed lock system architecture, each of the lock logic processing modules includes:
the first thread is used for receiving and analyzing the locking or unlocking request and sending the locking or unlocking request;
and the second threads are used for accessing the lock storage servers according to the locking or unlocking request of the first thread, processing the locking or unlocking request and sending an access result to the lock application client.
In the foregoing distributed lock system architecture, the plurality of lock storage servers adopt a master-slave mode, comprising:
a master lock storage server, communicatively connected to the lock distribution server cluster, used for writing and storing the lock data information;
and a plurality of slave lock storage servers, communicatively connected to the master lock storage server and to the lock distribution server cluster, used to replicate the lock data information stored by the master lock storage server and to be accessed by the lock distribution servers for reading that information.
In the foregoing distributed lock system architecture, the lock data information includes:
a lock ID number and lock status information.
The technical effects and advantages of the invention are as follows:
The invention provides a distributed lock system architecture comprising a plurality of lock application clients, which provide client interfaces for applying for locks and send locking or unlocking requests through those interfaces; a plurality of lock storage servers for storing lock data information; and a lock distribution server cluster, which receives locking or unlocking requests, accesses the lock data information on the lock storage servers, processes the requests, and sends the access results to the lock application clients. In this way, the distributed lock system architecture separates the lock's logic processing from the lock's data service; the two no longer need to be handled by the same server, which improves the flexibility, system scalability and resource utilization of the distributed lock system architecture and shortens fault recovery time.
Drawings
Fig. 1 is a schematic structural diagram of a distributed lock system architecture according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a lock application client according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram for implementing load balancing according to an embodiment of the present invention;
FIG. 4 is a block diagram of a lock distribution server according to an embodiment of the present invention;
In the figures:
1. lock application client; 11. client interface module; 12. locking/unlocking module; 13. load balancing module; 14. deadlock prevention module; 15. fault detection module; 2. lock distribution server cluster; 21. lock distribution server; 201. maintenance cluster module; 202. lock logic processing module; 3. lock storage server; 31. master lock storage server; 32. slave lock storage server.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and a person skilled in the art can apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will appreciate, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. References to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural.
As used in this application, the terms "including," "comprising," "having," and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (units) is not limited to the listed steps or units, but may include other steps or units not expressly listed or inherent to such process, method, article, or apparatus. References to "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect.
The term "plurality" as referred to herein means two or more. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. Reference herein to the terms "first," "second," "third," and the like, are merely to distinguish similar objects and do not denote a particular ordering for the objects.
To solve the technical problems of prior-art distributed locks, namely long fault recovery time, poor system scalability and wasted resources, the invention provides a distributed lock system architecture that separates the lock's logic processing from the lock's data service. Since the two no longer need to be handled by the same server, the architecture improves flexibility, system scalability and resource utilization, and shortens fault recovery time.
The technical solution of the present invention will be described in detail below with reference to the specific embodiments and the accompanying drawings.
The present embodiment provides a distributed lock system architecture, including:
the system comprises a plurality of lock application clients 1, a plurality of lock management server and a plurality of lock management server, wherein the lock application clients 1 are used for providing client interfaces for lock application and sending locking or unlocking requests through the client interfaces;
a plurality of lock storage servers 3 for storing lock data information;
a lock distribution server cluster 2, communication connection is a plurality of lock application client 1 and a plurality of lock storage server 3, just communication connection between a plurality of locks distribution server 21 in the lock distribution server cluster 2, lock distribution server cluster 2 is used for receiving locking or unblock request, it is a plurality of to visit lock storage server 3's lock data information, and handle locking or unblock request will visit the result and send one lock application client 1.
In the distributed lock system architecture provided by this embodiment, a policy of separating the logic processing of the lock and the data service of the lock is adopted, and the logic processing of the lock and the data service of the lock do not need to be processed based on the same server, so that the flexibility, the system expansibility and the resource utilization rate of the distributed lock system architecture are improved, and the time for fault recovery is reduced.
Specifically, referring to fig. 1, fig. 1 is a schematic structural diagram of a distributed lock system architecture according to an embodiment of the present invention. The invention provides a distributed lock system architecture, comprising: the system comprises a plurality of lock application clients 1, a plurality of lock storage servers 3 and a lock distribution server cluster 2, wherein the lock distribution server cluster 2 is in communication connection with the lock application clients 1 and the lock storage servers 3. The lock distribution server cluster 2 comprises a plurality of lock distribution servers 21, and the number of the lock application clients 1 in the distributed lock system architecture is far larger than that of the lock distribution servers 21.
Each lock application client 1 provides a client interface for applying for a lock and sends locking or unlocking requests through that interface. Referring to fig. 2, each lock application client 1 includes a client interface module 11, a locking/unlocking module 12, a load balancing module 13, a deadlock prevention module 14 and a fault detection module 15. The client interface module 11 is communicatively connected to the locking/unlocking module 12; the locking/unlocking module 12 is communicatively connected to the deadlock prevention module 14 and the load balancing module 13; the load balancing module 13 is communicatively connected to the lock distribution server cluster 2 and the fault detection module 15; and the fault detection module 15 is communicatively connected to the lock distribution server cluster 2. In this embodiment, the client interface module 11 provides the client interface for applying for a lock, through which a user program sends a process's locking or unlocking request. The locking/unlocking module 12 forwards the locking or unlocking request sent by the client interface module 11 to a lock distribution server 21 in the lock distribution server cluster 2 over a socket connection.
TABLE 1 Client API interface

Function name | Parameter(s)    | Return type | Description
init          | String fileName | bool        | Initialize a client instance
lock          | int lockID      | bool        | Acquire the lock with the given ID
unlock        | int lockID      | bool        | Release the lock with the given ID
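The client API of Table 1 could be sketched as follows. This is a minimal illustration: the `transport` callable and all method bodies are assumptions, since the patent specifies only the signatures.

```python
class LockClient:
    """Sketch of the client API in Table 1. `transport` is a hypothetical
    callable that forwards a request dict to a lock distribution server
    (over a socket in the real architecture) and returns the result."""
    def __init__(self, transport):
        self._transport = transport
        self._ready = False

    def init(self, file_name: str) -> bool:
        # Initialize the client instance from a configuration file name.
        self._ready = bool(file_name)
        return self._ready

    def lock(self, lock_id: int) -> bool:
        if not self._ready:
            return False
        return self._transport({"op": "lock", "lockID": lock_id})

    def unlock(self, lock_id: int) -> bool:
        if not self._ready:
            return False
        return self._transport({"op": "unlock", "lockID": lock_id})


# A trivial transport that grants every request, for illustration only.
client = LockClient(lambda req: True)
assert client.init("client.conf")
assert client.lock(42) and client.unlock(42)
```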
The load balancing module 13 receives the locking or unlocking request from the locking/unlocking module 12 and sends it to a lock distribution server 21 in the lock distribution server cluster 2 by a load balancing method, which specifically includes:
the load balancing module 13 periodically sends a server list request to a lock distribution server 21 and acquires the first lock distribution server list stored by that server;
when the name of a new lock distribution server appears in the first lock distribution server list, the second lock distribution server list stored in the lock application client 1 is updated based on the first list;
based on the updated second lock distribution server list, a communication connection is established with the new lock distribution server, and the locking or unlocking request is sent to either an existing lock distribution server 21 or the new one.
In this embodiment, the target of a locking or unlocking request is selected from the updated second lock distribution server list, which contains both the original lock distribution servers and any new ones. When the first lock distribution server list contains no new server name and agrees with the second list, the load balancing module 13 simply selects a lock distribution server 21 from the second list and sends the locking or unlocking request to it.
In this embodiment, load balancing is implemented at the lock application client 1 by the load balancing module 13. Since the number of lock application clients 1 is much greater than the number of lock distribution servers 21 in the cluster 2, the balancing overhead is spread from the lock distribution servers 21 across the lock application clients 1, which greatly reduces the burden on the distributed lock system architecture and increases its fault handling capacity and flexibility.
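The client-side list refresh and server selection just described can be sketched as follows. All names here are hypothetical, and the RPC that fetches the first list from a server is assumed to happen elsewhere:

```python
import random


class LoadBalancer:
    """Client-side balancer: keeps the local ("second") server list, adopts
    updates from a ("first") list fetched from a distribution server, and
    picks one server per request."""
    def __init__(self, initial_servers):
        self.local_list = list(initial_servers)  # the client's second list

    def refresh(self, fetched_list):
        # If the fetched first list names a server we do not know, a new
        # server has joined the cluster: adopt the updated list.
        if any(name not in self.local_list for name in fetched_list):
            self.local_list = list(fetched_list)

    def pick_server(self, rng=random):
        # Random choice is one simple policy for spreading requests; the
        # patent does not prescribe a specific selection rule.
        return rng.choice(self.local_list)


lb = LoadBalancer(["srv-a", "srv-b"])
lb.refresh(["srv-a", "srv-b", "srv-c"])   # srv-c joined the cluster
assert "srv-c" in lb.local_list
assert lb.pick_server() in lb.local_list
```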
The deadlock prevention module 14 stores a set recording which processes have acquired locks. When a process applies for a locking request, if it already holds a lock recorded in the set, the request is refused; otherwise the lock to be acquired is recorded in the set. When a process applies for an unlocking request, the lock to be released is removed from the set.
Deadlock generally arises from holding resources while waiting for others; this embodiment prevents deadlock by allowing each process to hold at most one resource. The set of lock-holding processes stored by the lock application client 1 is a hash set, the HaveLockThreadSet data structure.
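The rule that a process may hold at most one resource can be sketched as follows. The class and method names are hypothetical; the HaveLockThreadSet of the description is modeled as a Python set:

```python
class DeadlockGuard:
    """Per-client deadlock prevention: a process may hold at most one lock,
    so a second lock request from a process already in the set is refused."""
    def __init__(self):
        self._holders = set()   # processes that currently hold a lock

    def may_lock(self, process_id) -> bool:
        if process_id in self._holders:
            return False        # refuse: would occupy a second resource
        self._holders.add(process_id)
        return True

    def release(self, process_id):
        self._holders.discard(process_id)


guard = DeadlockGuard()
assert guard.may_lock("p1")
assert not guard.may_lock("p1")   # second request from p1 is refused
guard.release("p1")
assert guard.may_lock("p1")       # after unlocking, p1 may lock again
```

Since no process can hold one resource while waiting for another, the circular-wait condition for deadlock can never form.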
The fault detection module 15 receives fault information from the load balancing module 13 and deletes the name of the corresponding lock distribution server from the second lock distribution server list according to that information;
it also periodically sends heartbeat packets to each of the lock distribution servers 21 and acquires the first lock distribution server list stored by each;
when the reply to a heartbeat packet is an error, the name of the corresponding lock distribution server is deleted from the second lock distribution server list;
and when the first list stored by one lock distribution server 21 contains a server name that differs from the first lists stored by the remaining lock distribution servers 21, that differing name is deleted from the second lock distribution server list.
In this embodiment, when a server list request sent by the load balancing module 13 to a lock distribution server 21 returns an RST response, a timeout or an unreachable error, the lock distribution server 21 is considered to have failed: the load balancing module 13 sends fault information to the fault detection module 15, and the failed server's name is deleted from the second lock distribution server list stored in the lock application client 1. Likewise, when a heartbeat packet sent by the fault detection module 15 to a lock distribution server 21 returns an error, that server is deleted from the second list. Finally, if the server names in the first lists acquired from the remaining lock distribution servers 21 all agree, but the first list acquired from one lock distribution server 21 contains a differing server name, the server corresponding to that differing name is considered faulty and is deleted from the second list.
In specific application, over time the names of the failed lock distribution servers are deleted from the second lock distribution server lists stored in all the lock application clients 1, so that rapid fault detection is achieved.
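The two deletion rules above can be condensed into a minimal Python sketch. This is an illustration only, not the patent's implementation; the function and parameter names are invented for clarity, and the network layer is abstracted away as a mapping from server name to the first server list it returned (or `None` when its heartbeat failed):

```python
def reconcile_server_lists(second_list, first_lists):
    """Update a lock application client's second lock distribution server list.

    second_list: the client's current second lock distribution server list.
    first_lists: maps each server name to the first lock distribution server
    list that server returned, or None when its heartbeat came back in error.
    """
    # Rule 1: a server whose heartbeat returned an error is deleted
    # from the second lock distribution server list.
    updated = [name for name in second_list if first_lists.get(name) is not None]
    # Rule 2: a name on which the surviving servers' first lists disagree
    # is treated as a failed server and deleted as well.
    surviving = [set(lst) for lst in first_lists.values() if lst is not None]
    if surviving:
        agreed = set.intersection(*surviving)
        updated = [name for name in updated if name in agreed]
    return updated
```

Running this periodically in every client eventually purges a failed server's name from all second lists, which is the rapid-detection effect described above.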
And the lock distribution server cluster 2 is used for receiving locking or unlocking requests, accessing the lock data information of the plurality of lock storage servers 3, processing the locking or unlocking requests, and sending access results to the lock application client 1. The lock data information includes a lock ID number and lock state information. Specifically, the lock distribution server cluster 2 includes a plurality of lock distribution servers 21, which are communicatively connected to one another. Each lock distribution server 21 includes a maintenance cluster module 201 and a lock logic processing module 202, wherein the maintenance cluster module 201 is communicatively connected to the load balancing module 13, the failure detection module 15 and the remaining lock distribution servers 21 in the lock distribution server cluster 2, and the lock logic processing module 202 is communicatively connected to the load balancing module 13 and the plurality of lock storage servers 3. The maintenance cluster module 201 is configured to send the first lock distribution server list to the load balancing module 13 in response to a server list request, and to send the first lock distribution server list to the failure detection module 15 in response to a heartbeat packet; it is also configured to establish a communication connection with a new lock distribution server when one joins the lock distribution server cluster 2, update the first lock distribution server list, and send the updated first lock distribution server list to the load balancing module 13 in response to a server list request. The lock logic processing module 202 is configured to receive a locking or unlocking request, access the lock data information of the plurality of lock storage servers 3, process the locking or unlocking request, and send the access result to the lock application client 1.
In this embodiment, a lock logic processing module 202 includes:
the first thread is used for receiving and analyzing a locking or unlocking request and sending the locking or unlocking request;
and the second threads are used for accessing the lock storage servers 3 according to the locking or unlocking request of the first thread, processing the locking or unlocking request and sending an access result to the lock application client 1.
In this embodiment, the first thread receives and parses a locking or unlocking request from the load balancing module 13.
In specific application, the first thread can be a Master thread and the second threads can be LockHandler threads. The Master thread receives and parses a locking or unlocking request and stores it in a memory pool; the ID of the lock to be added or released is reduced modulo the number of LockHandler threads to obtain a hash value, different hash values correspond to different LockHandler threads, and the locking or unlocking request is sent to one LockHandler thread according to its hash value. A LockHandler thread receives locking or unlocking requests and creates sockets, accesses the plurality of lock storage servers 3 according to each request, and sends the access result to the lock application client 1: when the access returns correct locking/unlocking information (that is, the lock is unoccupied and the locking or unlocking requirement is met), this information is sent to the lock application client 1; when the access returns information that the lock is occupied, the socket is added to a waiting array until lockable information is returned. The maintenance cluster module 201 has only one Keep thread, which establishes a listening socket, accepts connections from the lock application clients 1 and the other lock distribution servers 21, and then handles communication between lock distribution servers 21 as well as the server list requests and heartbeat packets from the lock application clients 1.
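The Master-to-LockHandler dispatch and the handler's grant-or-wait decision can be sketched as follows. This is a simplified, in-process illustration under assumed names: `dispatch` stands in for the Master thread's routing step, the dictionary `lock_table` stands in for the lock storage servers, and `waiting` plays the role of the waiting array:

```python
import queue

def dispatch(request, handler_queues):
    """Master-thread step: route a parsed locking/unlocking request to a
    LockHandler thread by taking the lock ID modulo the handler count."""
    slot = request["lock_id"] % len(handler_queues)
    handler_queues[slot].put(request)
    return slot

def lock_handler(q, lock_table, waiting):
    """Simplified LockHandler loop; lock_table stands in for the lock
    storage servers (value 1 = lock available, 0 = lock occupied)."""
    while True:
        req = q.get()
        if req is None:                      # shutdown sentinel
            break
        lid = req["lock_id"]
        if req["op"] == "lock":
            if lock_table.get(lid, 1) == 1:
                lock_table[lid] = 0          # grant the lock
                req["reply"] = "locked"
            else:
                # lock occupied: park the request until it is released
                waiting.setdefault(lid, []).append(req)
        else:                                # "unlock"
            lock_table[lid] = 1
            req["reply"] = "unlocked"
```

Because requests for the same lock ID always hash to the same LockHandler thread, operations on one lock are serialized without extra locking inside the server.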
And a plurality of lock storage servers 3 for storing lock data information. In particular, the plurality of lock storage servers 3 are in a master-slave mode, including
A master lock storage server 31, which is in communication connection with the lock distribution server cluster 2 and is used for writing and storing lock data information;
the plurality of slave lock storage servers 32 are communicatively connected with the master lock storage server 31 and the lock distribution server cluster 2, and are used for storing the lock data information stored by the master lock storage server 31 and for being accessed by the lock distribution servers 21 to read the lock data information.
In a specific application, a storage scheme optimized for high-performance reads is used. The lock data information is stored in key-value form: the key is the ID of the lock, and the value is the lock state information, where a value of 1 indicates the lock is available and a value of 0 indicates the lock is occupied. The master-slave storage mode provided by this embodiment improves the availability of the distributed lock system architecture and shortens data recovery time after a failure.
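As a concrete illustration of this key-value scheme (a sketch only; the patent does not name a specific store, and the in-process dictionaries below merely stand in for the master and slave lock storage servers, with replication collapsed to an immediate copy):

```python
class LockStore:
    """Key-value lock state: key = lock ID, value = 1 (available) or 0 (occupied).
    Writes go to the master; reads are served from a slave replica."""

    def __init__(self):
        self.master = {}   # master lock storage server (writes)
        self.slave = {}    # slave replica (reads)

    def _replicate(self, key):
        # Stand-in for master-to-slave replication.
        self.slave[key] = self.master[key]

    def try_lock(self, lock_id):
        # A lock never seen before is written to the master as occupied
        # and the locking request is granted.
        if lock_id not in self.slave:
            self.master[lock_id] = 0
            self._replicate(lock_id)
            return True
        if self.slave[lock_id] == 1:          # available
            self.master[lock_id] = 0
            self._replicate(lock_id)
            return True
        return False                          # occupied

    def unlock(self, lock_id):
        self.master[lock_id] = 1
        self._replicate(lock_id)
```

The read path never touches the master, which is the availability benefit the embodiment claims for the master-slave split.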
The distributed lock system architecture provided by this embodiment adopts a policy of separating the lock's logic processing from the lock's data service, so that the two do not need to run on the same server. This improves the flexibility, extensibility and resource utilization of the distributed lock system architecture, and shortens fault recovery time.
In the distributed lock system architecture provided in this embodiment, a specific workflow of a lock application client for applying for locking or unlocking is as follows:
the user program sends a locking or unlocking request to the locking/unlocking module 12 through the client interface module 11, and the locking/unlocking module 12 forwards the request to the load balancing module 13. The load balancing module 13 selects a lock distribution server 21 based on the second lock distribution server list stored in the lock application client 1 (this list is updated periodically) and sends the locking or unlocking request to that lock distribution server 21, which then accesses a slave lock storage server 32. For a locking request: if the lock to be added is not stored in the slave lock storage server 32, the lock is written into the master lock storage server 31 in key-value form; if the lock to be added is stored in the slave lock storage server 32, the locking request is processed according to the stored lock state information; the access result is then sent to the lock application client 1. For an unlocking request, the request is processed according to the lock state information stored in the slave lock storage server 32, and the access result is sent to the lock application client 1.
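The client-side portion of this workflow can be condensed into a short sketch. Everything here is illustrative: `servers` maps a server name to a callable standing in for the network request, and random selection is just one possible balancing policy (the patent does not fix the algorithm):

```python
import random

def apply_locking(lock_id, second_list, servers):
    """Client-side flow: the load balancing module picks one lock
    distribution server from the (periodically refreshed) second server
    list and forwards the locking request to it."""
    name = random.choice(second_list)           # load balancing step
    request = {"op": "lock", "lock_id": lock_id}
    return servers[name](request)               # access result back to the client
```

A failed send would feed the failure detection module described earlier; that path is omitted here to keep the sketch to the happy case.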
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A distributed lock system architecture, comprising:
a plurality of lock application clients, configured to provide client interfaces for applying for locks and to send locking or unlocking requests through the client interfaces;
the lock storage servers are used for storing lock data information;
and a lock distribution server cluster, communicatively connected to the plurality of lock application clients and the plurality of lock storage servers, wherein the plurality of lock distribution servers in the lock distribution server cluster are communicatively connected to one another; the lock distribution server cluster is configured to receive a locking or unlocking request, access the lock data information of the plurality of lock storage servers, process the locking or unlocking request, and send an access result to the lock application client.
2. The distributed lock system architecture of claim 1, wherein each lock application client comprises:
a client interface module, configured to provide a client interface for applying for a lock and to send a locking or unlocking request of a process through the client interface;
and the locking/unlocking module is in communication connection with the client interface module and is used for receiving the locking or unlocking request and sending the locking or unlocking request to the lock distribution server.
3. The distributed lock system architecture of claim 2, wherein each of the lock application clients further comprises:
and the load balancing module is in communication connection with the locking/unlocking module and the lock distribution server cluster and is used for receiving the locking or unlocking request and sending the locking or unlocking request to the lock distribution server by adopting a load balancing method.
4. The distributed lock system architecture of claim 3, wherein the load balancing module sends the locking or unlocking request to a lock distribution server by using a load balancing method, which specifically comprises:
the load balancing module periodically sends a server list request to the lock distribution server to acquire a first lock distribution server list stored by the lock distribution server;
when the name of a new lock distribution server exists in the first lock distribution server list, updating a second lock distribution server list stored by the lock application client based on the first lock distribution server list;
and establishing communication connection with a new lock distribution server based on the updated second lock distribution server list, and sending the locking or unlocking request to one lock distribution server or one new lock distribution server.
5. The distributed lock system architecture of claim 4, wherein each of the lock application clients further comprises:
a deadlock prevention module, communicatively connected to the locking/unlocking module, configured to store a set of locks already acquired by processes; when a process applies for a locking request, if the lock to be added is in the set, the locking request is refused; otherwise, the lock to be added is added to the set; and when a process applies for an unlocking request, the lock to be released is removed from the set.
6. The distributed lock system architecture of claim 4, wherein each of the lock application clients further comprises:
the fault detection module is in communication connection with the load balancing module and the lock distribution server cluster and is used for receiving fault information of the load balancing module and deleting the name of the corresponding lock distribution server in the second lock distribution server list according to the fault information;
the fault detection module is further configured to periodically send heartbeat packets to the plurality of lock distribution servers, and to acquire the first lock distribution server lists stored by the lock distribution servers;
when the returned result of a heartbeat packet is an error, the name of the corresponding lock distribution server is deleted from the second lock distribution server list;
and when the first lock distribution server list stored by one lock distribution server contains a lock distribution server name different from the first lock distribution server lists stored by the remaining lock distribution servers, the differing lock distribution server name is deleted from the second lock distribution server list.
7. The distributed lock system architecture of claim 6, wherein each of the lock distribution servers comprises:
a maintenance cluster module, communicatively connected to the load balancing module, the fault detection module, and the remaining lock distribution servers in the lock distribution server cluster, for sending the first lock distribution server list to the load balancing module according to the server list request, and sending the first lock distribution server list to the fault detection module according to the heartbeat packet;
the maintenance cluster module is further configured to establish a communication connection with a new lock distribution server when the new lock distribution server joins the lock distribution server cluster, update the first lock distribution server list, and send the updated first lock distribution server list to the load balancing module according to the server list request;
and the lock logic processing module, communicatively connected to the load balancing module and the plurality of lock storage servers, configured to receive the locking or unlocking request, access the lock data information of the plurality of lock storage servers, process the locking or unlocking request, and send an access result to the lock application client.
8. The distributed lock system architecture of claim 7, wherein each of the lock logic processing modules comprises:
the first thread is used for receiving and analyzing the locking or unlocking request and sending the locking or unlocking request;
and the second threads are used for accessing the lock storage servers according to the locking or unlocking request of the first thread, processing the locking or unlocking request and sending an access result to the lock application client.
9. The distributed lock system architecture of claim 1, wherein a plurality of the lock storage servers are in a master-slave mode, including
The master lock storage server is in communication connection with the lock distribution server cluster and is used for writing and storing the lock data information;
the plurality of slave lock storage servers, communicatively connected to the master lock storage server and the lock distribution server cluster, are configured to store the lock data information stored by the master lock storage server, and to be accessed by a lock distribution server to read the lock data information.
10. The distributed lock system architecture of claim 1, wherein the lock data information comprises:
a lock ID number and lock status information.
CN202110108739.4A 2021-01-27 2021-01-27 Distributed lock system architecture Pending CN112799836A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110108739.4A CN112799836A (en) 2021-01-27 2021-01-27 Distributed lock system architecture

Publications (1)

Publication Number Publication Date
CN112799836A true CN112799836A (en) 2021-05-14

Family

ID=75811985


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination