CN104657260B - Method for implementing a distributed lock that controls access to a shared resource between distributed nodes - Google Patents
Method for implementing a distributed lock that controls access to a shared resource between distributed nodes - Download PDF
- Publication number: CN104657260B (application CN201310607160.8A)
- Authority: CN (China)
- Prior art keywords: node, lock, lock operation request
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
An embodiment of the present invention provides a method for implementing a distributed lock that controls access to a shared resource between distributed nodes. The method includes: recording the occupancy of the resource shared by Node_A and Node_B through the lock operation metadata carried in multiple lock operation requests; storing in Node_A the lock operation requests that Node_A is currently executing and the information on the lock operation requests it may take, and storing the corresponding information in Node_B; and, after the node Node_A or Node_B generates a resource access request, allocating a lock operation request OP_NEW to the resource access request and obtaining the grant information of OP_NEW from the lock operation request information stored in Node_A and Node_B. By splitting the resource and describing the occupancy of the whole resource with multiple scattered lock operation requests, the embodiment greatly reduces the memory resources required to describe the distributed lock state without reducing the precision of the distributed lock, reduces the consumption of memory resources, and thereby substantially improves the access efficiency of the distributed lock.
Description
Technical field
The present invention relates to the field of computer application technology, and in particular to a method for implementing a distributed lock that controls access to a shared resource between distributed nodes.
Background
Big data analysis, distributed computing, and cloud computing have become the mainstream technologies driving the development of the IT industry after Internet technology. IT and Internet companies compete fiercely in these fields; whichever company controls the core technology in these fields will control and dominate the next generation of information technology.
With the rapid development of information technology, the target resources of data analysis have grown from the original MB and GB magnitudes to TB and PB magnitudes. As the size of the target resource grows dramatically, the resources required by the distributed lock that controls access to the shared resource between distributed computing nodes also grow rapidly. At present, a single global structure is usually used to describe the occupancy of each indivisible logic unit of the resource, and each operation request for the resource is served by querying and marking that global structure. However, when the resource is large, the global structure is usually also very large, which reduces the access efficiency of the distributed lock. It is therefore highly desirable to develop a distributed lock that consumes few resources and has high access efficiency.
Summary of the invention
The embodiments of the present invention provide a method for implementing a distributed lock that controls access to a shared resource between distributed nodes, so as to improve the access efficiency of the distributed lock.
The present invention provides the following scheme:
A method for implementing a distributed lock that controls access to a shared resource between distributed nodes, suitable for a two-node model composed of Node_A and Node_B, the method specifically including:
recording the occupancy of the resource shared by Node_A and Node_B through the lock operation metadata in multiple lock operation requests;
storing in Node_A the lock operation requests that Node_A is currently executing and the information on the lock operation requests it may take, and storing in Node_B the lock operation requests that Node_B is currently executing and the information on the lock operation requests it may take;
after the node Node_A or Node_B generates a resource access request, allocating a lock operation request OP_NEW to the resource access request, and obtaining the grant information of OP_NEW from the lock operation request information stored in Node_A and Node_B;
after the lock operation request OP_NEW is granted, executing the lock operation request OP_NEW.
The recording of the occupancy of the resource shared by Node_A and Node_B through the lock operation metadata in multiple lock operation requests includes:
dividing the resource shared by Node_A and Node_B into multiple logic units and allocating a start and end address to each logic unit; setting the lock operation metadata in each lock operation request to include the start and end addresses of the logic units covered by this operation, the read/write request type of this operation, and the initiating node of this operation; and integrating the lock operation metadata in all lock operation requests to obtain the occupancy of the resource shared by Node_A and Node_B.
The storing, in Node_A, of the lock operation requests that Node_A is currently executing and the information on the lock operation requests it may take, and the storing, in Node_B, of the corresponding information for Node_B, includes:
storing in each of Node_A and Node_B a current lock operation request list Local_Grant_Link, a lock operation waiting list Operation_Waiting_Link, and a takeable lock operation request cache list Operation_Cache;
wherein the Local_Grant_Link contains all conflict-free lock operation requests generated by the local node that are currently executing; every lock operation request in the Operation_Waiting_Link conflicts with at least one lock operation request in the Local_Grant_Link list; and the Operation_Cache contains all lock operation requests that the local node may take.
The obtaining of the grant information of the lock operation request OP_NEW from the lock operation request information stored in Node_A and Node_B includes:
the node Node_A generates a resource access request, allocates a lock operation request OP_NEW to the resource access request, and configures the lock operation metadata corresponding to OP_NEW;
the node Node_A queries the Local_Grant_Link list of Node_A and checks whether OP_NEW conflicts with the lock operation requests in the Local_Grant_Link list; if there is a conflict, OP_NEW is inserted into the Operation_Waiting_Link list of Node_A to wait;
when OP_NEW does not conflict with any lock operation request in the Local_Grant_Link list, OP_NEW is inserted into the Local_Grant_Link list, and the Operation_Cache list of Node_A is checked for OP_NEW; if it is cached, OP_NEW is granted; if it is not cached, Node_A sends OP_NEW to Node_B, and OP_NEW is granted after the grant message for OP_NEW returned by Node_B is received.
The sending of OP_NEW by Node_A to Node_B, and the granting of OP_NEW after the grant message for OP_NEW returned by Node_B is received, includes:
after Node_B receives the lock operation request OP_NEW sent by Node_A, it queries the Local_Grant_Link list of Node_B and checks whether OP_NEW conflicts with the lock operation requests in the Local_Grant_Link list of Node_B; if there is a conflict, OP_NEW is inserted into the Operation_Waiting_Link list of Node_B to wait; if there is no conflict, all lock operation requests in the Operation_Cache of Node_B that conflict with OP_NEW are deleted from the Operation_Cache of Node_B, and the grant message for OP_NEW is returned to Node_A.
The executing of the lock operation request OP_NEW after OP_NEW is granted includes:
after Node_A receives the grant message for OP_NEW returned by Node_B, it adds OP_NEW to the Operation_Cache list of Node_A, and Node_A waits to execute OP_NEW.
The method further includes:
after Node_A finishes executing the lock operation request OP_NEW, releasing OP_NEW and deleting OP_NEW from the Local_Grant_Link of Node_A;
the node Node_A then processes, one by one, the lock operation requests in the Operation_Waiting_Link of Node_A associated with OP_NEW, and for each of these lock operation requests obtains its grant information from the lock operation request information stored in Node_A and Node_B.
The method further includes:
initializing the Operation_Cache of Node_A as having access rights to the whole resource, so that the Operation_Cache list of Node_A initially stores lock operation requests covering the whole resource; and initializing the Operation_Cache of Node_B as having no direct access rights to the whole resource, so that the Operation_Cache list of Node_B is initially empty.
The resource includes a data file, a logical resource, or an address space resource.
As can be seen from the technical solutions provided by the above embodiments of the present invention, by splitting the resource and describing the occupancy of the whole resource with multiple scattered lock operation requests, the embodiments greatly reduce the memory resources required to describe the distributed lock state without reducing the precision of the distributed lock, reduce the consumption of memory resources, and thereby substantially improve the access efficiency of the distributed lock.
Brief description of the drawings
In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram, provided by an embodiment of the present invention, of dividing a resource into multiple minimum logic units;
Fig. 2 is a schematic diagram, provided by an embodiment of the present invention, of the lock operation metadata describing each lock operation request;
Fig. 3 is a schematic diagram, provided by an embodiment of the present invention, of the lock status view maintained by each node;
Fig. 4 is a schematic diagram, provided by an embodiment of the present invention, of the overall global lock status view;
Fig. 5 is a flow chart, provided by an embodiment of the present invention for the two-node model composed of Node_A and Node_B, of the method for allocating a lock operation request in a distributed lock that controls access to a shared resource between distributed nodes;
Fig. 6 is a flow chart, provided by an embodiment of the present invention for the two-node model composed of Node_A and Node_B, of the method for releasing a lock operation request in a distributed lock that controls access to a shared resource between distributed nodes.
Embodiment
To facilitate understanding of the embodiments of the present invention, several specific embodiments are further explained below with reference to the drawings; the embodiments do not limit the present invention.
An embodiment of the present invention provides a distributed lock implementation applicable to big data distributed computing scenarios; the scheme is suitable for the two-node model composed of Node_A and Node_B.
The resource described by the distributed lock in the embodiments of the present invention is characterized as logically contiguous, divisible, and large. That is, the resource can be logically divided into N contiguous parts, each part being a minimum logic unit that is not divided further, and each logic unit is allocated a start and end address. A schematic diagram of dividing a resource into multiple minimum logic units, provided by an embodiment of the present invention, is shown in Fig. 1: a contiguous resource of size M can be described as [LA_x : LA_x+M-1], where LA is an abbreviation for the logical address (Logical Address) of a logic unit and the subscripts are the start and end logical numbers of the logic units. In big data analysis scenarios, the resource may be a data file, a logical resource, an address space resource, and the like.
To illustrate the above more vividly, consider a resource of size 1 TB whose minimum indivisible logic unit is 4 KB. The resource can then be logically divided into N = 2^40 / (4 * 2^10) = 2^28 = 256M logic units. It follows that, even under this configuration, the number of logic units is still a considerable figure.
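The unit arithmetic above can be sketched in a few lines. This is an illustrative Python fragment, not part of the patent; the function names and the byte-addressed interface are assumptions.

```python
RESOURCE_SIZE = 1 << 40      # 1 TB, as in the example above
UNIT_SIZE = 4 << 10          # 4 KB minimum indivisible logic unit

def unit_count(resource_size: int, unit_size: int) -> int:
    """Number of minimum logic units the resource divides into."""
    return resource_size // unit_size

def unit_range(byte_start: int, byte_len: int) -> tuple[int, int]:
    """Start and end logic-unit numbers [LA_x, LA_y] covering a byte span."""
    first = byte_start // UNIT_SIZE
    last = (byte_start + byte_len - 1) // UNIT_SIZE
    return first, last

print(unit_count(RESOURCE_SIZE, UNIT_SIZE))  # 268435456 = 2**28 = 256M units
```

Any operation on a byte span maps to an inclusive range of logic-unit numbers, which is exactly the [LA_x : LA_y] notation used in Fig. 1.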
For example, in a distributed computing scenario, computing nodes Node_A and Node_B need to cooperate to complete a job whose final output is a very large report file. Node_A and Node_B therefore need to access the report file concurrently, and must work according to the following scheme: first negotiate the file division scheme, that is, determine the size of the indivisible unit and the upper bound on the size of the report file.
The embodiment of the present invention disperses the record of the occupancy of the whole resource into the individual resource lock operation requests, the whole resource being the resource shared by Node_A and Node_B. The resource lock operation requests are configured on demand: each time a resource access request is generated, a resource lock operation request is dynamically allocated for it, and each new lock operation request adds one piece of lock operation metadata.
A schematic diagram of the lock operation metadata describing each lock operation request is shown in Fig. 2. The lock operation metadata includes: the start and end addresses of the logic units covered by this operation, the read/write request type of this operation, and the initiating node of this operation. By combining the metadata in all resource lock operation requests, the occupancy of the resource can be described accurately; integrating the metadata recorded in all lock operation requests describes the occupancy of the whole resource.
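The metadata fields named above can be written down as a small record type, plus the integration step that derives occupancy from all live requests. This is a minimal sketch; the class and field names are illustrative, not from the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LockOpMetadata:
    start: int        # starting logic-unit address of this operation
    end: int          # ending logic-unit address (inclusive)
    is_write: bool    # read/write request type of this operation
    origin: str       # initiating node, e.g. "Node_A" or "Node_B"

def occupancy(ops):
    """Integrate the metadata of all live lock operation requests into the
    set of occupied logic units (the occupancy of the whole resource)."""
    held = set()
    for op in ops:
        held.update(range(op.start, op.end + 1))
    return held

ops = [LockOpMetadata(0, 3, True, "Node_A"),
       LockOpMetadata(10, 11, False, "Node_B")]
print(sorted(occupancy(ops)))  # [0, 1, 2, 3, 10, 11]
```

Only units covered by a live request consume metadata, which is the on-demand property the text describes: memory scales with active requests, not with resource size.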
The embodiment of the present invention describes the resource occupancy of each distributed node by means of views. The lock status view maintained by each node, shown in Fig. 3, consists of three parts:
1. Local_Grant_Link (current lock operation request list): this list describes all conflict-free lock operation requests generated by the local node that are currently executing.
2. Operation_Waiting_Link (lock operation waiting list): every lock operation request in this list conflicts with at least one lock operation request in the Local_Grant_Link list; a lock operation request in this list can be executed only after the corresponding lock operation requests in the Local_Grant_Link list are released.
3. Operation_Cache (takeable lock operation request cache list): this list is composed of historical lock operation requests of the local node; these requests are characterized by having no conflict with the historical lock operations of the peer, so the list essentially stores the lock operation requests that the local node may take. The state of this cache can be maintained using mature, common cache update and invalidation algorithms. The purpose of the cache is to reduce the traffic between the nodes and reduce the waiting time for obtaining a lock, thereby improving the efficiency of lock acquisition.
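The three-part per-node view can be held in one small structure. A minimal sketch under stated assumptions: the attribute names mirror the patent's list names, but the container choices (plain lists, a deque for the waiting queue) are illustrative.

```python
from collections import deque

class LockStatusView:
    """Per-node lock state: the three lists named in the text.
    Cache refresh/invalidation maintenance is deliberately omitted."""
    def __init__(self):
        self.local_grant_link = []             # conflict-free ops now executing
        self.operation_waiting_link = deque()  # ops blocked on a local conflict
        self.operation_cache = []              # ops this node may take without
                                               # consulting the peer
```

Each node holds one such view; together the two views make up the global lock status view of Fig. 4.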
In Fig. 3, each rectangular node represents one lock operation request and its metadata. The overall global lock status view provided by an embodiment of the present invention is shown in Fig. 4: each node jointly describes the resource occupancy status view it observes through its Local_Grant_Link, Operation_Waiting_Link, and Operation_Cache.
For convenience, a lock operation request whose operation type is read is simply called a read operation here, and a lock operation request whose operation type is write is simply called a write operation. In the embodiments of the present invention, a lock operation request without conflict is one that does not conflict with any other lock operation request, which has two aspects: on the one hand, there is never a conflict between read operations, regardless of the size and distribution of their operating resource ranges; on the other hand, when the operating resource range of a read operation does not overlap the operating resource range of a write operation, there is also no conflict between that read operation and that write operation.
Having a conflict likewise has two aspects in the embodiments of the present invention: on the one hand, when the operating resource range of a read operation overlaps the operating resource range of a write operation, there is a conflict between that read operation and that write operation; on the other hand, when the operating resource range of a write operation overlaps the operating resource range of another write operation, there is a conflict between the two write operations.
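The conflict rules just stated reduce to a two-line predicate: two requests conflict exactly when at least one is a write and their unit ranges overlap. A hedged Python sketch, representing each request as a plain dict with illustrative keys:

```python
def overlaps(a_start, a_end, b_start, b_end):
    """True when two inclusive logic-unit ranges share at least one unit."""
    return a_start <= b_end and b_start <= a_end

def conflicts(op_a, op_b):
    """Read-read never conflicts, regardless of range; read-write and
    write-write conflict exactly when the operating ranges overlap."""
    if not (op_a["is_write"] or op_b["is_write"]):
        return False
    return overlaps(op_a["start"], op_a["end"],
                    op_b["start"], op_b["end"])
```

This predicate is the only comparison needed by both the allocation flow (Fig. 5) and the release flow (Fig. 6).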
For the two-node model composed of Node_A and Node_B, the process flow of the method for allocating a lock operation request in a distributed lock that controls access to a shared resource between distributed nodes, provided by an embodiment of the present invention, is shown in Fig. 5 and includes the following processing steps:
Step S410: Node_A and Node_B initialize their respective lock status views.
Node_A initializes its Operation_Cache as having access rights to the whole resource; that is, lock operation requests initiated by Node_A can be granted without consulting Node_B.
Node_B initializes its Operation_Cache as having no direct access rights to the whole resource; that is, lock operation requests initiated by Node_B must consult Node_A.
When Node_A and Node_B initiate conflicting lock requests at the same time, Node_A takes precedence, which avoids deadlock between lock operation requests.
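The asymmetric initialization of step S410 can be sketched as follows. Assumptions are labeled: the dict layout, the 2^28-unit whole-resource range (from the 1 TB example above), and the key names are illustrative, not from the patent.

```python
def init_views():
    """Step S410 sketch: Node_A starts with the whole resource in its
    Operation_Cache, so its requests need no peer consultation; Node_B
    starts with an empty cache, so its requests must consult Node_A."""
    whole = {"start": 0, "end": 2**28 - 1, "is_write": True}
    node_a = {"grant": [], "waiting": [], "cache": [whole]}
    node_b = {"grant": [], "waiting": [], "cache": []}
    return node_a, node_b
```

Giving exactly one node the initial cache entry (together with Node_A winning simultaneous conflicting requests) is what rules out the deadlock case mentioned above.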
Step S420: the node Node_A generates a new resource access request, dynamically allocates a lock operation request OP_NEW for the resource access request, and configures the lock operation metadata corresponding to OP_NEW.
The node Node_A queries the Local_Grant_Link list of Node_A and checks whether OP_NEW conflicts with the lock operation requests in that list, that is, whether it conflicts with other lock operation requests that are executing on Node_A but have not yet been released.
If there is a conflict, OP_NEW is inserted into the Operation_Waiting_Link list of Node_A to wait;
if there is no conflict, step S430 is performed, and OP_NEW is inserted into the Local_Grant_Link list of the Node_A node.
Step S430: the Operation_Cache list of Node_A is queried. The Operation_Cache list of Node_A initially stores lock operation requests covering the whole resource, while the Operation_Cache list of Node_B is initially empty; later, as Node_B initiates lock operation requests, the lock operation requests stored in the Operation_Cache list of Node_A decrease and those stored in the Operation_Cache list of Node_B increase.
Whether OP_NEW is stored in the Operation_Cache list of Node_A is then checked. If it is cached, OP_NEW is granted and the process flow ends; if it is not cached, OP_NEW is sent to Node_B to request the permission of Node_B, and step S440 is performed.
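Steps S420-S430 on the requesting node can be condensed into one function with three outcomes: park, grant locally, or ask the peer. A minimal sketch; the dict keys, helper names, and return strings are assumptions for illustration.

```python
def _overlaps(a, b):
    return a["start"] <= b["end"] and b["start"] <= a["end"]

def _conflicts(a, b):
    # reads never conflict with reads; otherwise ranges must be disjoint
    return (a["is_write"] or b["is_write"]) and _overlaps(a, b)

def _covers(cached, op):
    return cached["start"] <= op["start"] and op["end"] <= cached["end"]

def try_local_grant(node, op):
    """Steps S420-S430 sketch: check OP_NEW against Local_Grant_Link; on
    conflict park it in Operation_Waiting_Link; otherwise insert it into
    Local_Grant_Link and consult Operation_Cache before asking the peer."""
    if any(_conflicts(op, held) for held in node["grant"]):
        node["waiting"].append(op)
        return "waiting"
    node["grant"].append(op)
    if any(_covers(c, op) for c in node["cache"]):
        return "granted"          # cached: no message to the peer is needed
    return "ask_peer"             # must send OP_NEW to the other node
```

The "granted" path is the fast path the Operation_Cache exists for: no inter-node message at all.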
Step S440: after Node_B receives the lock operation request OP_NEW sent by Node_A, it queries the Local_Grant_Link list of Node_B and checks whether OP_NEW conflicts with the lock operation requests in the Local_Grant_Link list of Node_B.
If there is a conflict, OP_NEW is inserted into the Operation_Waiting_Link list of Node_B to wait;
if there is no conflict, the Operation_Cache of Node_B is updated using OP_NEW: all lock operation requests in the Operation_Cache of Node_B that conflict with OP_NEW are deleted from it. Node_B then returns the grant message for OP_NEW to Node_A to notify it that OP_NEW has been granted.
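Step S440 on the receiving node can be sketched the same way: check against the local Local_Grant_Link, and on success evict conflicting cache entries before granting. A hedged illustration only; the boolean return standing in for the grant message is an assumption.

```python
def _overlaps(a, b):
    return a["start"] <= b["end"] and b["start"] <= a["end"]

def _conflicts(a, b):
    return (a["is_write"] or b["is_write"]) and _overlaps(a, b)

def handle_peer_request(node_b, op_new):
    """Step S440 sketch: the receiving node checks OP_NEW against its own
    Local_Grant_Link; on conflict it parks the request, otherwise it evicts
    every conflicting entry from its Operation_Cache and grants."""
    if any(_conflicts(op_new, held) for held in node_b["grant"]):
        node_b["waiting"].append(op_new)
        return False   # no grant message yet; the requester keeps waiting
    node_b["cache"] = [c for c in node_b["cache"]
                       if not _conflicts(op_new, c)]
    return True        # grant message is returned to the requester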
Step S450: after Node_A receives the grant message returned by Node_B, it updates the Operation_Cache on Node_A with the grant message, adding OP_NEW to its own Operation_Cache list. OP_NEW is then finally granted, and Node_A waits to execute it.
After the resource units corresponding to a lock operation request have been accessed, that is, after the lock operation request has finished executing, the lock operation request must be released dynamically. Each release of an old lock operation request removes one piece of lock operation metadata. This on-demand principle keeps the size of the metadata describing the overall resource occupancy at a most reasonable level.
For the two-node model composed of Node_A and Node_B, the process flow of the method for releasing a lock operation request in a distributed lock that controls access to a shared resource between distributed nodes, provided by an embodiment of the present invention, is shown in Fig. 6 and includes the following processing steps:
Step S510: the node Node_A finishes executing the lock operation request OP_OLD and generates a request to release OP_OLD.
Step S520: the node Node_A deletes the lock operation request OP_OLD from the Local_Grant_Link of Node_A.
Step S530: the node Node_A processes, one by one, the lock operation requests in the Operation_Waiting_Link of Node_A associated with OP_OLD, and for each of these lock operation requests performs the process flow shown in Fig. 5 once, so as to satisfy the previously waiting requests.
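The release flow of steps S510-S530 then reduces to removing OP_OLD and re-running the grant check for the parked requests. A minimal sketch under the same illustrative dict layout; the peer-consultation branch of the full Fig. 5 flow is omitted here for brevity.

```python
def _overlaps(a, b):
    return a["start"] <= b["end"] and b["start"] <= a["end"]

def _conflicts(a, b):
    return (a["is_write"] or b["is_write"]) and _overlaps(a, b)

def release(node, op_old):
    """Steps S510-S530 sketch: drop OP_OLD from Local_Grant_Link, then
    re-check each request parked in Operation_Waiting_Link; requests that
    still conflict with a live grant are parked again."""
    node["grant"].remove(op_old)
    parked, node["waiting"] = list(node["waiting"]), []
    for op in parked:
        if any(_conflicts(op, held) for held in node["grant"]):
            node["waiting"].append(op)
        else:
            node["grant"].append(op)
```

Because each release deletes one piece of metadata and may promote waiters, the live metadata tracks only the currently contended part of the resource.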
When the node Node_A releases a lock operation request, it does not notify Node_B; common cache refresh and invalidation algorithms can be used to maintain the Operation_Cache lists of Node_A and Node_B. The refresh operation needs to run in a background thread, and the refresh frequency and invalidation algorithm need to be selected and customized according to the specific application model.
In conclusion the embodiment of the present invention by splitting to resource, is retouched using multiple scattered lock operation requests
The occupancy situation of whole resource is stated, in the case where not reducing distributed lock precision, greatly reduces description distributed lock state
Required memory source, reduces the consumption to memory source, thus substantially increase can distributed lock access efficiency.
The embodiment of the present invention is by setting Operation_Cache(Operable locks operation requests cache chained list), drop significantly
The traffic between low distributed node, further increases the acquisition efficiency of distributed lock.
Those of ordinary skill in the art will appreciate that the drawings are schematic diagrams of one embodiment, and that the modules or flows in the drawings are not necessarily required to implement the present invention.
As can be seen from the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be realized by software plus a necessary general hardware platform. Based on this understanding, the technical solution of the present invention, or the part of it that contributes to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments, or in certain parts of the embodiments, of the present invention.
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may refer to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the device and system embodiments are substantially similar to the method embodiments, their description is relatively simple, and the relevant parts may refer to the description of the method embodiments. The device and system embodiments described above are merely schematic: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
The above are only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can be readily conceived by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the scope of the claims.
Claims (8)
1. A method for implementing a distributed lock that controls access to a shared resource between distributed nodes, characterized in that the method is suitable for a two-node model composed of Node_A and Node_B, the method specifically including:
recording the occupancy of the resource shared by Node_A and Node_B through the lock operation metadata in multiple lock operation requests, including: dividing the resource shared by Node_A and Node_B into multiple logic units, allocating a start and end address to each logic unit, setting the lock operation metadata in each lock operation request to include the start and end addresses of the logic units covered by this operation, the read/write request type of this operation, and the initiating node of this operation, and integrating the lock operation metadata in all lock operation requests to obtain the occupancy of the resource shared by Node_A and Node_B;
storing in Node_A the lock operation requests that Node_A is currently executing and the information on the lock operation requests it may take, and storing in Node_B the lock operation requests that Node_B is currently executing and the information on the lock operation requests it may take;
after Node_A or Node_B generates a resource access request, allocating a lock operation request OP_NEW to the resource access request, and obtaining the grant information of OP_NEW from the lock operation request information stored in Node_A and Node_B;
after the lock operation request OP_NEW is granted, executing the lock operation request OP_NEW.
2. The method for implementing a distributed lock that controls access to a shared resource between distributed nodes according to claim 1, characterized in that the storing, in Node_A, of the lock operation requests that Node_A is currently executing and the information on the lock operation requests it may take, and the storing, in Node_B, of the lock operation requests that Node_B is currently executing and the information on the lock operation requests it may take, includes:
storing in each of Node_A and Node_B a current lock operation request list Local_Grant_Link, a lock operation waiting list Operation_Waiting_Link, and a takeable lock operation request cache list Operation_Cache;
wherein the Local_Grant_Link contains all conflict-free lock operation requests generated by the local node that are currently executing, every lock operation request in the Operation_Waiting_Link conflicts with at least one lock operation request in the Local_Grant_Link list, and the Operation_Cache contains all lock operation requests that the local node may take.
3. The method for implementing a distributed lock that controls access to a shared resource between distributed nodes according to claim 2, wherein obtaining the grant information of the lock operation request OP_NEW according to the lock operation request information stored in Node_A and Node_B comprises:
node Node_A generates a resource access request, allocates a lock operation request OP_NEW for the resource access request, and configures the lock operation metadata corresponding to OP_NEW;
node Node_A queries the Local_Grant_Link linked list of Node_A and checks whether OP_NEW conflicts with any lock operation request in the Local_Grant_Link linked list; if there is a conflict, OP_NEW is inserted into the Operation_Waiting_Link linked list of Node_A and waits;
if OP_NEW does not conflict with any lock operation request in the Local_Grant_Link linked list, OP_NEW is inserted into the Local_Grant_Link linked list, and Node_A checks whether OP_NEW is stored in the Operation_Cache linked list of Node_A; if it is cached, OP_NEW is granted; if it is not cached, Node_A sends OP_NEW to Node_B, and OP_NEW is granted after the approval message for OP_NEW returned by Node_B is received.
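The local decision flow of claim 3 on Node_A can be sketched as one function. The dict-based node state, the pluggable `conflicts` predicate, and the `peer_approves` stub standing in for the message round-trip to Node_B are all illustrative assumptions, not the patent's API.

```python
# Sketch of claim 3 on Node_A: check the local grant list, then the local
# cache, and fall back to asking the peer node only on a cache miss.

def try_grant(node, op_new, conflicts, peer_approves):
    """Return True if OP_NEW is granted, False if it was queued to wait."""
    # Step 1: conflict with any currently granted request -> queue and wait.
    if any(conflicts(op_new, granted) for granted in node["local_grant_link"]):
        node["operation_waiting_link"].append(op_new)
        return False
    # Step 2: no conflict -> record OP_NEW as granted on this node.
    node["local_grant_link"].append(op_new)
    # Step 3: already cached as occupiable -> grant with no peer message.
    if op_new in node["operation_cache"]:
        return True
    # Step 4: cache miss -> ask the peer; grant once the approval arrives
    # (per claim 5, the approved request is then added to the local cache).
    if peer_approves(op_new):
        node["operation_cache"].append(op_new)
        return True
    return False
```

Because a cached request is granted without any network message, repeated locking of the same range by one node costs no inter-node traffic, which is the access-efficiency gain described in the abstract.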
4. The method for implementing a distributed lock that controls access to a shared resource between distributed nodes according to claim 3, wherein Node_A sending the lock operation request OP_NEW to Node_B, and OP_NEW being granted after the approval message for OP_NEW returned by Node_B is received, comprises:
after Node_B receives the lock operation request OP_NEW sent by Node_A, Node_B queries the Local_Grant_Link linked list of Node_B and checks whether OP_NEW conflicts with any lock operation request in the Local_Grant_Link linked list of Node_B; if there is a conflict, OP_NEW is inserted into the Operation_Waiting_Link linked list of Node_B and waits; if there is no conflict, all lock operation requests in the Operation_Cache of Node_B that conflict with OP_NEW are deleted from the Operation_Cache of Node_B, and the approval message for OP_NEW is returned to Node_A.
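Claim 4's handler on the remote side can be sketched the same way. The dict-based node state and the pluggable `conflicts` predicate are illustrative assumptions; the key step is that Node_B revokes its own cached rights that collide with OP_NEW before approving.

```python
# Sketch of claim 4 on Node_B, invoked when OP_NEW arrives from Node_A.

def handle_remote_request(node_b, op_new, conflicts):
    """Return True (approve OP_NEW) or False (queued in Node_B's waiting list)."""
    if any(conflicts(op_new, granted) for granted in node_b["local_grant_link"]):
        # OP_NEW conflicts with a request Node_B is executing: park it.
        node_b["operation_waiting_link"].append(op_new)
        return False
    # No conflict: Node_B gives up every cached grant that would collide
    # with OP_NEW, so both caches stay mutually consistent, then approves.
    node_b["operation_cache"] = [
        cached for cached in node_b["operation_cache"]
        if not conflicts(op_new, cached)
    ]
    return True
```

Dropping the colliding cache entries is what keeps the invariant that at most one node holds a conflicting right in its Operation_Cache at any time.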
5. The method for implementing a distributed lock that controls access to a shared resource between distributed nodes according to claim 4, wherein executing the lock operation request OP_NEW after OP_NEW is granted comprises:
after Node_A receives the approval message for OP_NEW returned by Node_B, Node_A adds OP_NEW to the Operation_Cache linked list of Node_A, and Node_A then executes the pending lock operation request OP_NEW.
6. The method for implementing a distributed lock that controls access to a shared resource between distributed nodes according to claim 5, wherein the method further comprises:
after Node_A has executed the lock operation request OP_NEW, releasing OP_NEW by deleting OP_NEW from the Local_Grant_Link of Node_A;
node Node_A processes, one by one, the lock operation requests in the Operation_Waiting_Link of Node_A that are associated with OP_NEW, and for each such lock operation request obtains its grant information according to the lock operation request information stored in Node_A and Node_B.
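Claim 6's release path can be sketched as follows. Note a simplification: the claim says each woken waiter goes back through the full grant procedure of claim 3 (including the peer's state), while this sketch only re-checks the local grant list; the dict-based node state and conflict predicate are again illustrative assumptions.

```python
# Sketch of claim 6: remove the finished request from Local_Grant_Link,
# then re-evaluate the waiters it may have blocked, one by one in
# arrival order.

def release(node, op_done, conflicts):
    """Release a finished request and promote waiters it no longer blocks."""
    node["local_grant_link"].remove(op_done)
    waiting, node["operation_waiting_link"] = node["operation_waiting_link"], []
    for op in waiting:  # process waiters one by one, preserving order
        if any(conflicts(op, granted) for granted in node["local_grant_link"]):
            node["operation_waiting_link"].append(op)  # still blocked
        else:
            node["local_grant_link"].append(op)        # grantable now
```

Processing waiters in order means an earlier waiter that gets granted can keep a later conflicting waiter queued, preserving FIFO fairness among conflicting requests.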
7. The method for implementing a distributed lock that controls access to a shared resource between distributed nodes according to claim 2, wherein the method further comprises:
initializing the Operation_Cache of Node_A as having access rights to the shared resource, so that the Operation_Cache linked list of Node_A initially stores all lock operation requests for the shared resource, and initializing the Operation_Cache of Node_B as having no direct access rights to the shared resource, so that the Operation_Cache linked list of Node_B is initially empty.
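The asymmetric initialization of claim 7 is small enough to sketch directly; the `all_requests` parameter standing in for "all lock operation requests for the shared resource" is an illustrative assumption.

```python
# Sketch of claim 7: Node_A starts owning the whole resource (full cache),
# Node_B starts with an empty cache and must ask Node_A for any right.

def init_nodes(all_requests):
    node_a = {"local_grant_link": [], "operation_waiting_link": [],
              "operation_cache": list(all_requests)}  # full access rights
    node_b = {"local_grant_link": [], "operation_waiting_link": [],
              "operation_cache": []}                  # no direct access rights
    return node_a, node_b
```

Starting from this asymmetric state, rights migrate between the two caches only through the request/approval exchange of claims 3 and 4, so the caches never both claim a conflicting right.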
8. The method for implementing a distributed lock that controls access to a shared resource between distributed nodes according to claim 1, wherein the shared resource comprises a data file, a logical resource, or an address space resource.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310607160.8A CN104657260B (en) | 2013-11-25 | 2013-11-25 | The implementation method of the distributed lock of shared resource is accessed between control distributed node |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104657260A CN104657260A (en) | 2015-05-27 |
CN104657260B true CN104657260B (en) | 2018-05-15 |
Family
ID=53248427
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310607160.8A Active CN104657260B (en) | 2013-11-25 | 2013-11-25 | The implementation method of the distributed lock of shared resource is accessed between control distributed node |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104657260B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105224255B (en) * | 2015-10-14 | 2018-10-30 | 浪潮(北京)电子信息产业有限公司 | A kind of storage file management method and device |
CN106126673A (en) * | 2016-06-29 | 2016-11-16 | 上海浦东发展银行股份有限公司信用卡中心 | A kind of based on Redis and HBase point of locking method |
CN106446037A (en) * | 2016-08-31 | 2017-02-22 | 南威软件股份有限公司 | Method for realizing consistency of Redis and MYSQL data based on distributed lock |
CN106302825A (en) * | 2016-10-31 | 2017-01-04 | 杭州华为数字技术有限公司 | File access control method and device |
CN108038004A (en) * | 2017-09-30 | 2018-05-15 | 用友金融信息技术股份有限公司 | Distributed lock generation method, device, computer equipment and readable storage medium storing program for executing |
CN109144740B (en) * | 2018-08-16 | 2021-05-04 | 郑州云海信息技术有限公司 | Distributed lock implementation method and device |
CN110134738B (en) * | 2019-05-21 | 2021-09-10 | 中国联合网络通信集团有限公司 | Distributed storage system resource estimation method and device |
CN110515911B (en) * | 2019-08-09 | 2022-03-22 | 济南浪潮数据技术有限公司 | Resource processing method and device |
CN111639309B (en) * | 2020-05-26 | 2021-08-24 | 腾讯科技(深圳)有限公司 | Data processing method and device, node equipment and storage medium |
CN112099961B (en) * | 2020-09-21 | 2024-02-06 | 天津神舟通用数据技术有限公司 | Method for realizing distributed lock manager based on lock state cache |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1945539A (en) * | 2006-10-19 | 2007-04-11 | 华为技术有限公司 | Method for distributing shared resource lock in computer cluster system and cluster system |
CN101800763A (en) * | 2009-02-05 | 2010-08-11 | 威睿公司 | hybrid locking using network and on-disk based schemes |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8495266B2 (en) * | 2004-12-10 | 2013-07-23 | Hewlett-Packard Development Company, L.P. | Distributed lock |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104657260B (en) | The implementation method of the distributed lock of shared resource is accessed between control distributed node | |
DE102012216568B4 (en) | Scheduling and managing compute tasks with different execution priority levels | |
US9652161B2 (en) | System, method, and medium of optimizing load reallocation in an in-memory data management grid | |
JP2010092222A (en) | Caching mechanism based on update frequency | |
US6513056B1 (en) | System and method for efficiently synchronizing cache and persistant data in an object oriented transaction processing system | |
CN109144994A (en) | Index updating method, system and relevant apparatus | |
US8161195B2 (en) | Adaptable management in sync engines | |
CN109643310B (en) | System and method for redistribution of data in a database | |
US20150186051A1 (en) | Data Row Cache for an Acid Compliant In-Memory Row Store in a Page-Based RDBMS Engine | |
CN104184812B (en) | A kind of multipoint data transmission method based on private clound | |
US20140012867A1 (en) | Method And Process For Enabling Distributing Cache Data Sources For Query Processing And Distributed Disk Caching Of Large Data And Analysis Requests | |
WO2007088081A1 (en) | Efficient data management in a cluster file system | |
DE102017118341B4 (en) | Repartitioning of data in a distributed computer system | |
CN110019112A (en) | Data transactions method, apparatus and electronic equipment | |
CN112162846B (en) | Transaction processing method, device and computer readable storage medium | |
CN103959275A (en) | Dynamic process/object scoped memory affinity adjuster | |
CN106326239A (en) | Distributed file system and file meta-information management method thereof | |
DE102013200997A1 (en) | A non-blocking FIFO | |
CN108319496A (en) | resource access method, service server, distributed system and storage medium | |
CN107229593A (en) | The buffer consistency operating method and multi-disc polycaryon processor of multi-disc polycaryon processor | |
CN112596762A (en) | Rolling upgrading method and device | |
CN109376151A (en) | Data divide library processing method, system, device and storage medium | |
US20210064602A1 (en) | Change service for shared database object | |
JP2004102631A (en) | Database retrieving program, data base retrieval method and database retrieval device | |
CN107896248B (en) | A kind of parallel file system application method based on client communication |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||