NZ622122B2 - Clustered client failover - Google Patents
- Publication number
- NZ622122B2
- Authority
- NZ
- New Zealand
- Prior art keywords
- resource
- client
- request
- application instance
- access
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/40—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
Abstract
Disclosed is a method of providing continuous access to a resource in a clustered computing environment. The method includes receiving a first request to access a resource by a process, wherein the request is received from a first client in a client cluster. A first application instance identifier is associated with the resource and the first request to access the resource is granted. The method also includes receiving a second request to access the resource by the process after loss of connection with the first client, wherein the second request is received from a second client in the client cluster and the second client is different from the first client. A second application instance identifier associated with the second request is received and, upon determining that the first and second application instance identifiers are the same, the first request is invalidated. Invalidating the first request comprises: determining that the resource is not located on the first node that receives the second request; sending a request to a second node to invalidate the resource; and granting the second request to access the resource on the first node.
Description
CLUSTERED CLIENT FAILOVER
Background
Clustered environments, e.g., environments where workload is distributed across
multiple machines, are commonly used to provide failover and high availability of
information to clients. Clustered environments allow clients to access resources via the
one or more nodes that are a part of the environment. A clustered environment can act as
a client, a server, or both. In a client cluster, an application may reside on any of
the nodes that make up the cluster. The application may issue requests for resources that
are stored locally within the client cluster or stored remotely. If an error occurs on the
node, the client may fail over, or migrate, to a different node in the cluster. However, when the
client again attempts to access a resource that it was working with at the time of the error,
the resource may be fenced or locked by the server for the previous client node that the
application resided on.
It is with respect to these and other considerations that embodiments have been
made. Also, although relatively specific problems have been discussed, it should be
understood that the embodiments should not be limited to solving the specific problems
identified in the background.
It is an object of preferred embodiments of the present invention to address some
of the aforementioned disadvantages. An additional or alternative object is to at least
provide the public with a useful choice.
Summary
This summary is provided to introduce a selection of concepts in a simplified
form that are further described below in the Detailed Description section. This summary is
not intended to identify key features or essential features of the claimed subject matter, nor
is it intended to be used as an aid in determining the scope of the claimed subject matter.
Systems and methods are disclosed herein that provide an application or a
process with continuous access to a resource after the application migrates to a new node
in a clustered client environment. An application or process running on a node in a client
cluster sends a request to a server to access a resource. In embodiments, a unique
application instance identifier is used to identify an application requesting a resource. The
unique application identifier may be provided with the request. When the client accesses a
resource, the application instance identifier is associated with the requested resource.
Before the application or process completes its operations on the resource, the
node upon which the client resides in the clustered environment may experience an error
that causes it to fail or otherwise lose access to the resource prior to the application
properly releasing the resource. In such circumstances, the resource may remain in a
fenced or locked state on the server per the previous client’s request. Upon failing over to
a different node in the client cluster, the application on the new client node may
reestablish a connection with the server managing the resource and make a second request
for the resource that the application previously had access to at the time of the error. The
second request may include the application instance identifier that was sent with the first
request. Although the second request for the resource may be received from a different
node in the clustered environment, the application instance identifier permits the server
managing the request to determine that the second request belongs to the same application
or process that had previously locked the resource. Doing so allows the server to
invalidate the resource and grant the client's second request to access the resource while
ensuring a conflict situation does not arise.
In one aspect the invention comprises a method of providing continuous access
to a resource. The method comprises receiving a first request to access a resource by a
process, wherein the request is received from a first client in a client cluster; associating a
first application instance identifier with the resource; granting the first request to access
the resource; receiving a second request to access the resource by the process after loss of
connection with the first client, wherein the second request is received from a second
client in the client cluster and the second client is different from the first client; receiving a
second application instance identifier associated with the second request; determining that
the first and second application instance identifiers are the same; and invalidating the first
request, wherein invalidating the first request comprises: determining that the resource is
not located on the first node that receives the second request; sending a request to a second
node to invalidate the resource; and granting the second request to access the resource on
the first node.
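The claimed sequence (receive a request, associate an application instance identifier, grant access, compare identifiers on a second request, and invalidate the stale grant) can be sketched in Python. This is a minimal single-node illustration only, not the patented implementation; the class name and return values are hypothetical:

```python
import uuid

class ResourceServer:
    """Toy server tracking which application instance holds each resource."""

    def __init__(self):
        # resource name -> application instance identifier of current holder
        self._holders = {}

    def request_access(self, resource, app_instance_id):
        holder = self._holders.get(resource)
        if holder is None:
            # Resource is free: grant and remember the holder.
            self._holders[resource] = app_instance_id
            return "granted"
        if holder == app_instance_id:
            # Same application instance reconnecting after failover:
            # invalidate the stale grant, then re-grant to the new client.
            self._holders[resource] = app_instance_id
            return "granted-after-invalidation"
        # A different application instance: the resource stays fenced.
        return "denied"

server = ResourceServer()
app_id = str(uuid.uuid4())  # unique application instance identifier (a GUID)

first = server.request_access("shared.db", app_id)             # from the first client
second = server.request_access("shared.db", app_id)            # second client, after failover
other = server.request_access("shared.db", str(uuid.uuid4()))  # unrelated requestor
```

Because the second request carries the same identifier, it is granted without waiting for the first grant to be released by some other mechanism, while a request from a different identifier is still fenced out.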
The term 'comprising' as used in this specification and claims means 'consisting
at least in part of'. When interpreting statements in this specification and claims which
include 'comprising', other features besides the features prefaced by this term in each
statement can also be present. Related terms such as 'comprise' and 'comprised' are to be
interpreted in a similar manner.
In a further aspect the invention comprises a system for facilitating client failover
in a clustered environment. The system comprises at least one server comprising: at least
one processor configured to execute a first set of computer executable instructions; at least
one computer readable storage media storing the first set of computer executable
instructions, wherein the first set of computer executable instructions, when executed by
the at least one processor comprise instructions for: receiving a first request to access a
resource from a first client in a client cluster, wherein the first client sends the first request
on behalf of a process; associating a first application instance identifier with the first
resource; allowing the process access to the resource; receiving a second request for the
resource from a second client in the client cluster on behalf of the process after loss of
connection with the first client, wherein the second client is different from the first client;
receiving a second application instance identifier associated with the second request;
determining that the first and second application instance identifiers are the same; and,
invalidating the first request, wherein invalidating the first request comprises: determining
that the resource is not located on the first node that receives the second request; sending a
request to a second node to invalidate the resource; and granting the second request to
access the resource on the first node.
In a further aspect the invention comprises a computer readable storage media,
storing computer executable instructions that, when executed by a processor, comprise
instructions for: receiving a first request to access a resource from a first client in a client
cluster, wherein the first client sends the first request on behalf of a process; associating a
first application instance identifier with the first resource; allowing the process access to
the resource; receiving a second request for the resource from a second client in the client
cluster on behalf of the process after loss of connection with the first client, wherein the
second client is different from the first client; receiving a second application instance
identifier associated with the second request; determining that the first and second
application instance identifiers are the same; and invalidating the first request, wherein
invalidating the first request comprises: determining that the resource is not located on the
first node that receives the second request; sending a request to a second node to invalidate
the resource; and granting the second request to access the resource on the first node.
Embodiments may be implemented as a computer process, a computing system
or as an article of manufacture such as a computer program product or computer readable
media. The computer program product may be a computer storage media readable by a
computer system and encoding a computer program of instructions for executing a
computer process. The computer program product may also be a propagated signal on a
carrier readable by a computing system and encoding a computer program of instructions
for executing a computer process.
Brief Description of the Drawings
[0012] Non-limiting and non-exhaustive embodiments are described with reference to
the following figures.
[0013] FIG. 1 illustrates a system that may be used to implement embodiments
described herein.
[0014] FIG. 2 is a block diagram illustrating a software environment that may be used
to implement the embodiments disclosed herein.
[0015] FIG. 3 is an embodiment of a method that a client may perform to gain
continuous access to a resource in a clustered environment.
[0016] FIG. 4 is an embodiment of a method performed by a node in a clustered
environment to provide continuous access to a resource.
[0017] FIG. 5 illustrates a block diagram of a computing environment suitable for
implementing embodiments.
Detailed Description
Various embodiments are described more fully below with reference to the
accompanying drawings, which form a part hereof, and which show specific exemplary
embodiments. However, embodiments may be implemented in many different forms and
should not be construed as limited to the embodiments set forth herein; rather, these
embodiments are provided so that this disclosure will be thorough and complete, and will
fully convey the scope of the embodiments to those skilled in the art. Embodiments may
be practiced as methods, systems or devices. Accordingly, embodiments may take the
form of a hardware implementation, an entirely software implementation or an
implementation combining software and hardware aspects. The following detailed
description is, therefore, not to be taken in a limiting sense.
Embodiments of the present disclosure are directed to providing clustered client
failover mechanisms that allow a requestor to regain access to a resource after a failover
event. In embodiments, a requestor may be a process, an application, or one or more child
processes of an application. A resource may be a file, an object, data, or any other type of
resource in a computing environment. In embodiments, a resource may reside on a
standalone server or it may reside in a clustered environment. In the embodiments
disclosed herein, a clustered environment may include one or more nodes (e.g., client
and/or server devices).
In an example embodiment, an application residing on a node in a clustered
environment may request access to a particular resource. In embodiments, the resource
may be stored locally (e.g., on the client node), in a remote device (e.g., a remote server or
a different node in the client clustered environment), or in a clustered environment (e.g.,
an environment containing multiple nodes) that is different from the client clustered
environment. For example, in embodiments the clustered environment may be a client or
server cluster; however, one of skill in the art will appreciate that the systems and methods
disclosed herein may be employed in any other type of environment, such as, but not
limited to, a virtual network.
In such environments, resources may be shared among clients and applications.
When an application accesses a resource, the resource may be fenced or locked, thereby
prohibiting other applications from accessing the resource until the accessing application
releases the resource. Fencing or locking the resource may be employed to protect against
a conflict, that is, protect against modification of the resource by another application
before the accessing application has performed its operations on the resource. However, if
the node in a clustered client environment fails, the application accessing the resource may
not properly release the resource from a fenced or locked state. For example, the client
node accessing the resource on behalf of the application may lose a network connection,
may crash, or may otherwise lose access to the resource prior to the application
completing its operations and properly releasing the resource. Thus, the resource may
remain in a state in which it is unavailable to other clients or applications. Mechanisms
may be employed that automatically release a resource from a fenced or locked state,
thereby preventing the resource from being permanently locked out. However, such
mechanisms often wait a period of time before releasing a fenced or locked resource.
In some instances, when the application performs a failover to migrate from the
failed client node to a different client node in the client cluster, the application may
attempt to reestablish its previous connection with the server and resume its operation(s)
on the resource via the different client node. However, because the resource was not
properly released by the failed client node, which previously accessed the resource on the
application’s behalf, due to the error, the application that had previous access to the
resource may not be able to resume its access of the resource until the server releases the
resource from its fenced or locked state. However, because a different node is now
attempting to access the resource on the application’s behalf, the server may not be able to
identify the application as the same application that previously established the lock on the
resource. However, because the same application is trying to access the resource, a
conflict situation does not exist. In such situations, waiting for the server to release the
previous lock on the resource may cause an unacceptable delay for the application.
As described, because the application is operating in a clustered client
environment, when the application requests to access the resource a second time, the
request to access the resource may be made from a different location, such as a
different node in the client clustered environment. Thus, the second request may come
from a different location or a different IP address. Because the request may be made from
a different location, a server may have difficulty determining that the client or application
attempting to again access the resource is actually the same client that previously accessed the resource.
The systems and methods disclosed herein provide mechanisms to identify situations
where the same application is attempting to access a resource, thereby avoiding such delay
and providing an application continuous access to the resource.
FIG. 1 illustrates a system 100 that may be used to implement some of the
embodiments disclosed herein. System 100 includes client cluster 102 and a server cluster
106. Client cluster includes multiple nodes, such as clients 102A and 102B. Clients 102A
and 102B may be a device or application residing in client cluster 102. Client cluster 102
may communicate with server cluster 106 through network 108. In embodiments, network
108 may be the Internet, a WAN, a LAN, or any other type of network known to the art.
Server cluster 106 stores resources that are accessed by applications on client cluster 102
(e.g., applications residing on Client 102A or Client 102B). In embodiments, a client
(e.g., Client 102A) may establish a session with cluster 106 to access the resources on
cluster 106 on behalf of an application residing on the client. Although in FIG. 1 client
cluster 102 only includes two clients (e.g., Client 102A and Client 102B), one of skill in
the art will appreciate that any number of clients may be included in client cluster 102.
As shown in FIG. 1, server cluster 106 includes servers 106A, 106B, and 106C,
which provide both high availability and redundancy for the information stored on cluster
106. In embodiments, the cluster 106 may have a file system, a database, or other
information that is accessed by clients 102 and 104. Although three servers are shown in
FIG. 1, in other embodiments cluster 106 may include more than three servers, or fewer
than three servers. Furthermore, while the embodiments herein described relate to a client
communicating with a server that is part of a server cluster, one of skill in the art will
appreciate that the embodiments disclosed herein may also be performed using a
standalone server.
In embodiments, client cluster 102 provides failover mechanisms that allow a
client to migrate from a first client node to a second client node in case of an error or
failure occurring on the first client node. One of skill in the art will appreciate that any
type of failover mechanism may be employed with the systems and methods disclosed
herein. The methods and systems disclosed herein may be employed to avoid undue delay
when an application attempts to regain access to a resource after migrating from one client to
another (e.g., from Client 102A to Client 102B) in the case of a failover. In embodiments,
an application instance identifier identifying the application accessing the resource may be
associated with the resource. The application instance identifier may be a globally unique
identifier (GUID) that is associated with an application, an action performed by an
application, or a child process of an application. For example, in one embodiment an
application may be associated with an application instance identifier that is a GUID. In
another embodiment, an application instance identifier may be associated with a specific
operation or action performed by an application. For example, if the application issues
two different open requests for two different files, each open request may have its own
application instance identifier. In yet another embodiment, an application instance
identifier may be associated with one or more child processes of the application. As will
be clear to one of skill in the art from the embodiments described herein, associating the
application instance identifier of an application with its one or more child processes will
allow the child processes to access the resource if the resource is placed in a locked or
fenced state that belongs to the application. In embodiments, the application instance
identifier may be sent by the client when, or after, sending a request for a resource.
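The identifier granularities described above (one GUID per application, one per open request, or one shared with child processes) can be sketched as follows. All names here are hypothetical; the patent does not prescribe a concrete API:

```python
import uuid

class AppInstance:
    """Hypothetical requestor owning an application instance identifier."""

    def __init__(self):
        # One GUID identifying this application instance.
        self.instance_id = uuid.uuid4()

    def open_request(self, path, per_operation=False):
        # The identifier may be per-application, or each open request
        # may instead carry its own identifier.
        op_id = uuid.uuid4() if per_operation else self.instance_id
        return {"op": "open", "path": path, "app_instance_id": op_id}

    def spawn_child(self):
        # A child process that shares the parent's identifier can reuse a
        # lock fenced to the parent application after a failover.
        child = AppInstance.__new__(AppInstance)
        child.instance_id = self.instance_id
        return child

app = AppInstance()
req_shared = app.open_request("a.txt")                      # shares the app's GUID
req_unique = app.open_request("b.txt", per_operation=True)  # its own GUID
child = app.spawn_child()
```

The choice of granularity decides what can be reclaimed after failover: a per-operation GUID only unfences that one open, while a shared GUID lets the whole application, including its children, reclaim the resource.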
[0027] In accordance with another embodiment, in addition to storing information
accessed by clients that are a part of client cluster 102, server cluster 106 also provides a
failover mechanism that allows continuous access of a resource in case of a server node
failure. Again, one of skill in the art will appreciate that any type of failover mechanism
may be employed with the systems and methods disclosed herein.
[0028] In embodiments, when a client requests access to a resource on behalf of an
application, the application instance identifier of the application is sent with the request.
The server receiving the request may associate the application instance identifier with the
resource. For example, the server cluster may store the application instance identifier in a
table or cache stored on one or more nodes (e.g., servers such as servers 106A, 106B,
and/or 106C) located in the server cluster 106 in such a manner that the application
instance identifier is associated with the resource. Before the client is done with the
resource, the client may experience an error that will force it to lose connection with the
resource. For example, the client hosting the application or performing requests or
operations on the application’s behalf may lose its network connection to the server
cluster, the client may crash, or any other type of error may occur that interferes with the
application's use of the resource. Upon experiencing the error, the application may fail over
to a new client node in the client cluster 102. The new client node may reconnect to the
server cluster and send a second request to access the resource on the application’s behalf.
In embodiments, the client may reconnect to the same node in the server cluster 106 or a
different node. The second request to access the resource may include the application
instance identifier of the application. Upon receiving the second request, the server (e.g., a
Server 106A of server cluster 106) compares the application instance identifier of the
second request with the application instance identifier associated with the resource. If the
two application instance identifiers match, the server node invalidates the resource. In
embodiments, invalidating the resource may comprise closing a file, removing a lock on
the resource, or otherwise taking any action that frees the resource for use. The server
node may then grant the application's second request to access the resource. If the
application instance identifier of the second node does not match the application instance
identifier associated with the resource, the server will not allow access to the resource until the
resource becomes free.
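The point of the comparison is avoiding the wait for a fenced resource to time out. A toy model of that decision, assuming a hypothetical fixed lease timeout (the patent does not specify timeout values), might look like:

```python
import uuid

LEASE_TIMEOUT = 60.0  # hypothetical seconds before a stale lock self-releases

class Lock:
    def __init__(self, app_id, acquired_at):
        self.app_id = app_id            # application instance identifier of holder
        self.acquired_at = acquired_at  # when the lock was taken

def access_delay(lock, app_id, now):
    """Seconds a requestor must wait before the resource can be granted."""
    if lock is None:
        return 0.0                      # resource is free
    if lock.app_id == app_id:
        return 0.0                      # same instance: invalidate at once, no wait
    # Different instance: wait out the remainder of the lease.
    return max(0.0, lock.acquired_at + LEASE_TIMEOUT - now)

app_id = str(uuid.uuid4())
stale = Lock(app_id, acquired_at=100.0)  # left behind by the failed client node

same_app_delay = access_delay(stale, app_id, now=110.0)
other_delay = access_delay(stale, str(uuid.uuid4()), now=110.0)
```

With a matching identifier the delay is zero; without it, the requestor would wait out the remaining lease, which is the "unacceptable delay" the identifier mechanism is designed to avoid.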
To illustrate one embodiment, a requestor (e.g., a process, application, etc.) on
Client 102A in client cluster 102 may request that Client 102A establish a session with a
server of server cluster 106. For example, Client 102A may establish a session with server
106A to access a database that is stored on server 106A or that is a part of server cluster
106, in which server 106A may access the database. Client 102A then sends a request for
a resource on behalf of the requestor. An application instance identifier that identifies the
requestor is associated with the request. In embodiments, the request may include the
application instance identifier or the application instance identifier may be sent separately
in a manner such that the server 106A can determine that the application instance
identifier is associated with the request. In yet another embodiment, the server 106A or
the server cluster 106 may already have information needed to associate the application
instance identifier with the request without having to receive the application instance
identifier along with the request. Server 106A then grants the requestor access to the
resource, thereby allowing the requestor to perform operations on or otherwise access the
resource. When granting the requestor access to the resource, server 106A associates an
application instance identifier with the resource in a manner that indicates the requestor is
currently accessing the resource. The resource may then be fenced or locked so other
clients or applications cannot access or modify the resource until client 102A has completed
its operation.
Before the requestor completes its operations on the resource, an error occurs
that causes Client 102A to fail or to otherwise lose its connection to the resource. Because
the requestor has not completed its operation, it has not released control of the resource. Thus,
the resource may remain in a fenced or locked state. The requestor or client cluster 102
may employ a failover mechanism to migrate the requestor from client 102A to client
102B. Once the failover operation is complete, client 102B may reconnect to server
cluster 106 on the requestor’s behalf. Client 102B may reconnect to server 106A or
establish a new connection with any other server in server cluster 106 (e.g., server 106B or
106C). In an example situation, Client 102B reconnects to server 106A. Upon
reconnecting, the client 102B may send a second request to access the resource on behalf
of the requestor. As previously noted, because the requestor did not release control of the
resource, the resource may still be in a locked or fenced state. In order to access the
resource, without waiting for the server to automatically change the state of the resource,
for example, through a time out operation, the requestor may again provide its application
instance identifier with the second request. Server 106A compares the application
instance identifier provided with the second request to the application instance identifier
associated with the resource, for example, by comparing the application instance
identifier received or otherwise associated with the second request to an application
instance identifier that Server 106A associated with the resource. The associated
application instance identifier may be stored in a local cache or table of server 106A, or it
may be stored elsewhere in server cluster 106. If the application instance identifier stored
in the cache matches the application instance identifier that is associated with the resource,
server 106A invalidates or otherwise frees the resource and allows client 102B to again
access the resource on the requestor’s behalf without waiting for the resource to be
released by some other mechanism (e.g., by the fenced or locked state timing out). If the
application instance identifiers do not match, Client 102B will have to wait for the
resource to become free before accessing it.
While in the above example Client 102B reconnected to the same server 106A, it
is also possible, in other embodiments, for the client to connect to another node in server
cluster 106. For example, client 102B may reconnect to server 106B and submit a second
request to regain access to the resource on behalf of the requestor. The second request
may again be associated with the requestor's application instance identifier, for example,
by being included in the second request or otherwise associated with the second request.
In this example, Server 106B may not have the application instance identifier associated
with the resource stored in its local cache because the original access of the resource was
on server 106A. In such a situation, server 106B may contact the other servers in server
cluster 106 to determine if they have an application instance identifier associated with the resource.
If the application instance identifier associated with the resource is stored on a different node in the
server cluster (e.g., server 106A), the application instance identifier on the other node in
the server cluster is compared with the application instance identifier provided with the
second request. If they match, server 106B may send a request to server 106A to
invalidate the resource, and then server 106B may allow the requestor (now on client
102B) to access the resource. If the application instance identifiers do not match, Client
102B will have to wait for the resource to become free before accessing it.
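The cross-node case, where one server finds the identifier held on a peer and asks that peer to invalidate the resource, could be sketched as follows. The peer-query and invalidation calls are illustrative assumptions, not the patented protocol:

```python
import uuid

class ServerNode:
    """Hypothetical server-cluster node; topology and calls are illustrative."""

    def __init__(self, name):
        self.name = name
        self.locks = {}   # resource -> application instance identifier
        self.peers = []   # other nodes in the server cluster

    def grant(self, resource, app_id):
        self.locks[resource] = app_id

    def invalidate(self, resource):
        # Freeing may mean closing a file or removing a lock;
        # here we simply drop the entry.
        self.locks.pop(resource, None)

    def handle_request(self, resource, app_id):
        if resource in self.locks:                 # identifier held locally
            if self.locks[resource] == app_id:
                self.invalidate(resource)
                self.grant(resource, app_id)
                return "granted"
            return "wait"
        for peer in self.peers:                    # ask the other cluster nodes
            if resource in peer.locks:
                if peer.locks[resource] == app_id:
                    peer.invalidate(resource)      # remote invalidation request
                    self.grant(resource, app_id)
                    return "granted"
                return "wait"
        self.grant(resource, app_id)               # resource was free everywhere
        return "granted"

node_a, node_b = ServerNode("106A"), ServerNode("106B")
node_a.peers, node_b.peers = [node_b], [node_a]

app_id = str(uuid.uuid4())
node_a.grant("file.db", app_id)                    # first request landed on 106A
result = node_b.handle_request("file.db", app_id)  # failover reconnects to 106B
```

After the exchange, the stale lock on the first node has been invalidated and the second node holds the new grant for the same application instance.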
Based on the above examples, one of skill in the art will appreciate that any
client node in the client cluster 102 may request access for, and then provide access to, a
requestor in the client cluster 102. Furthermore, any server node in a server cluster (e.g.,
any server in server cluster 106) is capable of determining whether the requestor
previously had access to the resource even if the access occurred on a different server node
in the server cluster. One of skill in the art will appreciate that the following description is
merely one example of how the embodiment shown in FIG. 1 may operate and other
embodiments exist. For example, rather than accessing resources on a remote server or
server cluster, client nodes may perform the embodiments described herein to provide
requestors (e.g., applications or processes) continuous access to resources residing in the
clustered environment (e.g., on the same or different client cluster nodes that make up the
client cluster). As described in greater detail below, embodiments described herein may
involve various different steps or operations. Furthermore, the embodiments described
herein may be implemented using any appropriate software or hardware component or
module.
Turning now to FIG. 2, the figure illustrates a block diagram of a software
environment 200 showing client node cluster 201 with multiple client nodes (e.g., clients
202 and 204) and a server node cluster 206 with multiple server nodes (e.g., Node 1 208
and Node 2 216). In embodiments, client 202 requests to access a resource, such as
resource 226, in a server cluster environment 206 on behalf of a requestor. Client node
cluster 201 may be a client cluster such as client cluster 102 (. Although not
rated, client r may contain more than two clients. Server node cluster 206, may
be a server cluster, such as server cluster 106 ( or it may be any other type of
red environment such as, but not limited to, a virtual network. Resource 226 may be
stored in a datastore 228 that is part of the clustered environment. Although not shown, in
ate ments, the datastore 228 may not be part of the clustered environment, but
may be connected to the clustered environment over a network. Examples of such a
network include, but are not limited to, the Internet, a WAN, a LAN, or any other type of
network known to the art. In still further embodiments, datastore may be part of a node
(e.g., a device) that is a part of cluster 206.
Server node cluster 206 may include one or more nodes, such as Node 1 208 and
Node 2 216. Although only two nodes are illustrated in FIG. 2, any number of
nodes may be included in clustered environment 206. In embodiments, nodes
208 and 216 are capable of receiving a request for, performing an operation on,
and/or granting access to resource 226. In embodiments, resource 226 may be a file, an
object, an application, data, or any other type of resource stored on or accessible by a node
in node cluster 206 or by a standalone server.
In embodiments, a client sends an initial request 222 to clustered environment
206. As illustrated in FIG. 2, the initial request 222 may be sent by client 202 and
received by Node 1 208. However, in alternate embodiments, the initial request 222 may
be sent by client 204 or any other client node in client cluster 201 and received by Node 2 216
or any other node in server cluster 206. Example requests include, but are not limited
to, requests to create, open, or otherwise access a file. Request 222 may be transmitted
from the client to the node cluster via a network such as, but not limited to, the Internet, a
WAN, a LAN, or any other type of network known to the art. Initial request 222 may
include a request to access a resource, such as resource 226. In embodiments, request 222
may also include an application instance identifier that identifies the requestor that client
202 is making the request on behalf of. In embodiments, initial request 222 may consist of
one or more messages. For example, request 222 may be a single message containing both
the request and an application instance identifier. In another embodiment, request 222
may be multiple messages that include one or more requests as well as one or more
application instance identifiers. In embodiments, client 202 may include an App Instance
Cache 214 that is used to store and/or generate one or more application instance identifiers
that may be transmitted with request 222.
As shown in FIG. 2, Node 1 208 may receive request 222 and an application
instance identifier from client 202. If the requested resource 226 is available, e.g., not
fenced or locked by another client or application, Node 1 may grant the client’s (e.g.,
client 202) request to access resource 226 on behalf of a requestor that is executing on the
client. Upon granting access to resource 226, filter driver 210 may allocate or otherwise
create an association between client 202 and the resource 226 by storing the application
instance identifier it received from client 202. In embodiments, the association may be
stored as an object in app instance cache 212 that is a part of Node 1. Although the
illustrated embodiment shows app instance cache 212 as a part of Node 1 208, in
embodiments, app instance cache 212 may be stored elsewhere as a part of node cluster
206. One of skill in the art will appreciate that node cluster 206 may include one or more app
instance caches, such as app instance cache 220 on Node 2 216. In embodiments when
more than one app instance cache is present, the data stored in the multiple app instance
caches may be replicated across all app instance caches, or each app instance cache may
store separate data.
In one embodiment, the application instance identifier received from client 202 (an
application instance identifier that identifies a requestor, e.g., an application or process)
may be stored in a _NETWORK_APP_INSTANCE_ECP_CONTEXT structure. The
_NETWORK_APP_INSTANCE_ECP_CONTEXT structure may be defined as follows:
typedef struct _NETWORK_APP_INSTANCE_ECP_CONTEXT {
USHORT Size;
USHORT Reserved;
GUID AppInstanceID;
} NETWORK_APP_INSTANCE_ECP_CONTEXT,
*PNETWORK_APP_INSTANCE_ECP_CONTEXT;
In such embodiments, the variable Size may store information related to the size
of the structure and the variable AppInstanceID may be a unique application instance
identifier for a failover cluster client application, such as a requestor executing on client
202. In embodiments, the _NETWORK_APP_INSTANCE_ECP_CONTEXT, or another
object or variable containing the requestor’s application instance identifier, may be stored
in the globally unique identifier (GUID) cache 214. In embodiments, the
_NETWORK_APP_INSTANCE_ECP_CONTEXT structure may be sent from a client to
a server in association with a request to access a resource (e.g., a create or open request).
In one embodiment, the requestor’s application instance identifier may be stored in the GUID
cache of the client node that the requestor is executing on in the clustered client
environment 201. In another embodiment, although not shown in FIG. 2, the client node
cluster 201 may have a central repository that stores application instance identifiers. In
such an embodiment, multiple client nodes in the client node cluster 201 may access the
centralized repository. In yet another embodiment, application instance identifiers may be
stored across multiple GUID caches (e.g., GUID cache 214 and GUID cache 216). In
such embodiments, the client node cluster 201 may employ a replication algorithm to
ensure that the multiple GUID caches contain the same application instance identifiers.
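As a non-authoritative sketch of how a client might populate such a structure before attaching it to a create or open request, consider the following. The USHORT and GUID types are simplified stand-ins for the Windows definitions, and the helper `make_app_instance_ecp` and the sample identifier are hypothetical names, not part of the specification:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-ins for the Windows USHORT and GUID types. */
typedef uint16_t USHORT;
typedef struct {
    uint32_t Data1;
    uint16_t Data2, Data3;
    uint8_t  Data4[8];
} GUID;

typedef struct _NETWORK_APP_INSTANCE_ECP_CONTEXT {
    USHORT Size;      /* must be set to the size of this structure */
    USHORT Reserved;  /* must be set to zero */
    GUID   AppInstanceID;
} NETWORK_APP_INSTANCE_ECP_CONTEXT;

/* Fill in the ECP context for a requestor identified by app_id. */
NETWORK_APP_INSTANCE_ECP_CONTEXT make_app_instance_ecp(const GUID *app_id)
{
    NETWORK_APP_INSTANCE_ECP_CONTEXT ecp;
    memset(&ecp, 0, sizeof(ecp));
    ecp.Size = (USHORT)sizeof(ecp);
    ecp.Reserved = 0;
    ecp.AppInstanceID = *app_id;
    return ecp;
}

/* An arbitrary sample identifier for illustration only. */
const GUID kSampleAppId = { 0x12345678, 0xabcd, 0xef01,
                            { 1, 2, 3, 4, 5, 6, 7, 8 } };
```

The same structure, filled once per application instance, would then accompany every request made on that requestor's behalf, regardless of which client node sends it.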
As previously described, the application instance identifier may be associated
with resource 226 while client 202 accesses resource 226 on behalf of a requestor. A
server node 206 may store such an association in one or more app instance caches that are
part of server node cluster 206, such as app instance caches 212 and 220. In one
embodiment, the application instance identifier may be associated with the resource by
adding it to an Extra Create Parameter (ECP) list for the resource 226. The ECP list may
be stored in an app instance cache that is part of the server node cluster 206, such as app
instance caches 212 and 220. In embodiments, when an ECP is received by a server, the
server extracts an application instance identifier from the ECP and adds it to a cache to be
associated with a resource, resource handle, etc. As described with respect to storing
application instance identifiers in client cluster 201, the application instance identifiers
associated with a node may be stored in an individual app instance cache on a node in server
node cluster 206, in a central repository in server cluster 206, or replicated across multiple
app instance caches on multiple nodes in server node cluster 206.
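The association step can be modeled as a small in-memory cache mapping a resource to the application instance identifier that currently holds it. The sketch below is illustrative only; the fixed-size table, the integer resource handle, and the function names are assumptions of this example, not the patented implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef struct { unsigned char bytes[16]; } app_guid; /* simplified GUID */

typedef struct {
    int      resource_id; /* stands in for a resource or resource handle */
    app_guid owner;       /* application instance identifier of the holder */
    int      in_use;
} cache_entry;

#define CACHE_SLOTS 16
static cache_entry app_instance_cache[CACHE_SLOTS];

/* Record that `owner` has been granted access to `resource_id`. */
int cache_associate(int resource_id, const app_guid *owner)
{
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (!app_instance_cache[i].in_use) {
            app_instance_cache[i].resource_id = resource_id;
            app_instance_cache[i].owner = *owner;
            app_instance_cache[i].in_use = 1;
            return 0;
        }
    }
    return -1; /* cache full */
}

/* Look up the identifier associated with `resource_id`; NULL if none. */
const app_guid *cache_lookup(int resource_id)
{
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (app_instance_cache[i].in_use &&
            app_instance_cache[i].resource_id == resource_id)
            return &app_instance_cache[i].owner;
    return NULL;
}

/* A demonstration identifier, again purely illustrative. */
const app_guid kDemoOwner = {{ 0xaa, 0xbb, 0xcc }};
```

In a clustered deployment, entries like these could additionally be replicated across nodes or kept in a central repository, mirroring the alternatives the text describes.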
In embodiments, resource 226 is fenced or locked while a requestor executing on
client 202 has access to resource 226, thereby preventing other clients or applications from
accessing resource 226 and avoiding any potential conflicts. In embodiments, before the
requestor completes its operation on resource 226, client 202 experiences an error that
causes it to lose connection with the resource. For example, the client may crash, be taken
offline, or lose its network connection to server node 208. In such instances, resource 226
may still be in a fenced or locked state because the requestor did not release a lock on the
resource, thereby preventing other clients from accessing resource 226.
When the error occurs to client 202, the requestor may utilize a client failover
mechanism 232 to migrate to a new client node (e.g., client 204) in the client cluster 201.
One of skill in the art will appreciate that any type of failover mechanism may be
employed at client failover 232. In embodiments, the failover mechanism 232 may also
include the migration of the requestor’s application instance identifier, which may have
been stored in GUID cache 214 on the now failed client 202. Upon completing the
migration, the requestor may attempt to regain access to the resource 226. In
embodiments, client 204 may send a second request 224 to Node 1 to request access to
resource 226 on behalf of the requestor. However, without the continuous access
embodiments disclosed herein, when Node 1 208 receives a request to access the resource
226 on behalf of client 204 (the sender of second request 224), it may deny the request
because resource 226 is still in a fenced or locked state from the previous access that client
202 made on behalf of the requestor. Without the embodiments disclosed herein, Node 1
208 would recognize that the second request to access resource 226 was from a different
location (e.g., client 204). Node 1 208 would not be able to determine that the request is
for the same requestor that holds the lock on resource 226, and would therefore determine
that granting the request would result in a conflict. However, if the same requestor is
attempting to access resource 226, there is no issue of conflict, and forcing the client to
wait for the resource to be freed by the system may result in undue delays.
[0042] The application instance identifier may be used to solve this problem. In
embodiments, the second request 224 may also include the application instance identifier
identifying the requestor that migrated to client 204 during the failover shown at 232.
In embodiments, the requestor’s application instance identifier may be present in the
GUID cache 228 of client 204 prior to the migration of the requestor during the client
failover 232. For example, a replication mechanism may have been employed to replicate
the requestor’s application instance identifier across the nodes in client cluster 201. In
another embodiment, the requestor 203 may store its application instance identifier. In yet
another embodiment, the requestor’s 203 application instance identifier may be migrated
during client failover 232.
[0043] As described with respect to request 222, the application instance identifier may
be transmitted in the same message as the second request 224, or the second request 224
may be composed of a number of different messages. When the second request is
received at the node cluster 206, or an individual node in the cluster, such as Node 1 208,
and the receiving server determines that the resource is fenced or locked, a determination
is made as to whether the application instance identifier in the second request 224 is the
same as the application instance identifier associated with resource 226. In embodiments,
Node 1 208 will compare the application instance identifier received with the second
request 224 with the application instance identifier that is associated with resource 226.
The application identifier associated with resource 226 may be stored in the app instance
cache 212 of Node 1 208. In embodiments where multiple app instance caches exist in
node cluster 206, the determination may check more than one app instance cache
in the node cluster 206. In such embodiments, if a matching application instance identifier
is not located in app instance cache 212, Node 1 208 may send a request to Node 2 216 to
determine if a matching application instance identifier is located in app instance cache
220.
In one embodiment, if the application instance identifier received in second
request 224 does not match the application instance identifier associated with resource 226
(which may be stored in application instance cache 212 and/or 220), the second request
224 may not be granted until resource 226 is free. However, if a match is found, the
receiving server (e.g., Node 1 208) and/or the server node cluster 206 perform actions to
grant access to resource 226 without causing undue delay to client 204 and requestor 203.
In such instances, node cluster 206 may invalidate the resource 226, thereby removing
resource 226 from a fenced or locked state. In embodiments, invalidating a previous access
may comprise any action that brings a resource out of a fenced or locked state. One non-limiting
example is closing an opened file (e.g., if resource 226 is a file). Once the
previous access is invalidated, the second request 224 to access resource 226 may be
granted, thereby providing continuous access to the requestor 203.
In one embodiment, the node receiving the second request 224, such as Node 1
208 in FIG. 2, may perform the required actions to invalidate the previous access of
resource 226 if it, rather than a different node (e.g., Node 2 216), has access and/or permission to
invalidate the previous access. However, in some instances, the node receiving the request
may not have access or permission to invalidate the previous access. For example, such an
instance may occur if the original request 222 was made to Node 2 216, in which case
Node 2 216 may have control over the resource. In such instances, the node receiving the
second request 224 may send a request to the controlling node to invalidate the previous
access. Once the controlling node has invalidated the previous access, the node receiving
the second request 224 may grant the second request 224. In other embodiments, the node
receiving the second request 224 may send a request to a different node to grant client 204
and/or requestor 203 (now residing on client 204) access to resource 226.
The described process avoids undue delay in granting a second request 224 to
access a resource 226 from a requestor 203 that previously accessed, and still holds a lock
on, resource 226 through the use of application instance identifiers. Furthermore, the
application instance identifiers provide the benefit of ensuring that any request granted
does not create a conflict on resource 226. For example, if the request was received from
a different application, the request will include an application instance identifier that is
different from the application instance identifier associated with the resource, which would
result in the request being denied. Because application instance identifiers are globally
unique identifiers, the application instance identifier for different applications will not be
the same.
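The grant/deny logic the preceding paragraphs describe can be summarized in a few lines. This is a minimal sketch under our own naming, assuming a byte-for-byte comparison of identifiers; it is not the patented implementation:

```c
#include <assert.h>
#include <string.h>

typedef struct { unsigned char bytes[16]; } app_guid; /* simplified GUID */

typedef enum {
    ACCESS_GRANTED,                  /* resource was free */
    ACCESS_GRANTED_AFTER_INVALIDATE, /* same requestor reconnecting: release
                                        the stale lock, then grant */
    ACCESS_DENIED                    /* different requestor: would conflict */
} access_decision;

static int guid_equal(const app_guid *a, const app_guid *b)
{
    return memcmp(a->bytes, b->bytes, sizeof a->bytes) == 0;
}

/* locked: is the resource fenced or locked? holder: identifier associated
 * with the lock (ignored when not locked). incoming: identifier sent with
 * the new request. */
access_decision decide_access(int locked, const app_guid *holder,
                              const app_guid *incoming)
{
    if (!locked)
        return ACCESS_GRANTED;
    if (guid_equal(holder, incoming))
        return ACCESS_GRANTED_AFTER_INVALIDATE;
    return ACCESS_DENIED;
}

/* Two distinct illustrative identifiers. */
const app_guid GUID_A = {{ 0x11 }};
const app_guid GUID_B = {{ 0x22 }};
```

Because the identifiers are globally unique, a match can only mean the same application instance is asking again, which is why invalidating and re-granting is safe.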
FIG. 3 illustrates an embodiment of a method 300 that a requestor may employ to gain
continuous access to a resource in a client clustered environment. For example, a
requestor may be a client, such as client 202 (FIG. 2), that employs the method 300 to
access a resource (e.g., resource 226). In embodiments, the resource may reside on a
remote machine, such as a server. The server may be a standalone server or part of a
clustered environment, such as server cluster 206 (FIG. 2). Flow begins at operation 302
where a request for a resource is sent to a server. In embodiments, the request may be to
access a resource. In embodiments, accessing a resource may comprise opening a file,
creating a file, or otherwise accessing or performing an operation on a resource that may
be remote to a client. In embodiments, a requestor may operate in a client clustered
environment. In such embodiments, the request sent at operation 302 may be sent from a
first client in the client clustered environment.
[0048] Flow continues to operation 304 where an application instance identifier is sent,
for example, to a server (e.g., a standalone server or a node in a clustered environment).
In one embodiment, the first client that sent the request may also send the application
instance identifier on behalf of the requestor. As earlier described, an application instance
identifier is a GUID identifying the requestor (e.g., an application, client, or a child
process of an application requesting access to a resource). In one embodiment, the
application instance identifier may be sent in a message transmitted via a network. The
application instance identifier may be transmitted in the same message containing the
request in operation 302 or it may be transmitted in a different message. In such
embodiments, an object containing the application instance identifier, such as but not
limited to the _NETWORK_APP_INSTANCE_ECP_CONTEXT described with respect
to FIG. 2, may be sent at operation 302.
In one embodiment, an interface may be used to send the application instance
identifier at operation 304. The interface may be a kernel level interface located on a
client or available to a client operating in a client clustered environment. In embodiments,
the kernel level interface may be used by the requestor and/or client to send an application
instance identifier to a server. The following is a non-limiting example of a kernel level
interface that may be employed at operation 304 to send an application instance identifier:
#if (NTDDI_VERSION >= NTDDI_WIN8)
//
// ECP context for an application to provide its instance ID.
//
typedef struct _NETWORK_APP_INSTANCE_ECP_CONTEXT {
    //
    // This must be set to the size of this structure.
    //
    USHORT Size;
    //
    // This must be set to zero.
    //
    USHORT Reserved;
    //
    // The caller places a GUID that should always be unique for a single instance of
    // the application.
    //
    GUID AppInstanceID;
} NETWORK_APP_INSTANCE_ECP_CONTEXT,
*PNETWORK_APP_INSTANCE_ECP_CONTEXT;

//
// The GUID used for the NETWORK_APP_INSTANCE_ECP_CONTEXT structure.
// {6AA6BC45-A7EF-4af7-9008-FA462E144D74}
//
DEFINE_GUID(GUID_ECP_NETWORK_APP_INSTANCE, 0x6aa6bc45, 0xa7ef,
    0x4af7, 0x90, 0x8, 0xfa, 0x46, 0x2e, 0x14, 0x4d, 0x74);

#endif // NTDDI_VERSION >= NTDDI_WIN8
Although a specific kernel level interface is provided, one of skill in the art will
appreciate that other kernel level interfaces may be employed at operation 304 to send the
application instance identifier.
In another embodiment, an application programming interface (API) may be employed at
operation 304 to send an application instance identifier. In such an embodiment, the
requestor and/or client may send an application instance identifier by making a call to the
API. The API may be hosted on the client performing the operation 304 (e.g., the first
client in a client cluster) or the API may be hosted on another device and accessed by the
requestor or another application or process. The following is a non-limiting example of an
API that may be employed at operation 304 to send an application instance identifier:
NTSTATUS RegisterAppInstance (
    _in PGUID AppInstance
    );
Although a specific API is provided, one of skill in the art will appreciate that
other APIs may be employed at operation 304. Furthermore, although operation 304 is
illustrated as a separate operation, one of skill in the art will appreciate that sending the
application instance identifier may be performed simultaneously with sending the request
at operation 302.
When the requested resource is not locked, the request sent at operation 302 is
granted and flow continues to operation 306 where the resource is accessed. As
previously described, the server or device controlling the resource may place the resource
in a fenced or locked state while the requestor accesses the resource at operation 306. At
some point while accessing the resource, an error occurs, such as the errors described with
reference to FIG. 2, which causes the client to fail or otherwise lose connection to the
resource. The error may cause the client (e.g., the first client in the client cluster) to lose
access to the resource before the requestor completes its use of the resource. Under such
circumstances, the resource may not be released from its fenced or locked state.
Flow continues to operation 308, where a failover operation is performed. In
embodiments, the failover operation may comprise cloning the requestor and its state to a
different client in the client node cluster (e.g., a second client). In embodiments, the
requestor’s state may be cloned on the second client and the requestor may be
executed on the second client in a manner such that it can resume execution from the point
where the first client failed. In another embodiment, the requestor may be in
communication with the first client (rather than executing on it) at the time of the first client’s
failover. In such embodiments, the failover operation may comprise the requestor
establishing communications with a second client in the client cluster.
In embodiments, state information, including but not limited to the requestor’s
application instance identifier, may be transferred from the first client to the second client.
In one embodiment, the first client may send a message including the requestor’s
application instance identifier and/or the requestor’s state information. The application
instance identifier and/or state may be sent during the failover process or, in embodiments,
may be sent before the first client fails, such as during a replication process that clones
information across the clients in a client clustered environment. In another embodiment,
the requestor’s application instance identifier and/or state information may be stored in a
central location or repository in the client clustered network. In such embodiments, the
failover process may provide the second client with the location of the requestor’s
application instance identifier and/or state information. In yet another embodiment, the
requestor may maintain its application instance identifier. In such embodiments, the client
failover operation may comprise relocating to or otherwise establishing a connection
between the requestor and a second client.
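The first of these alternatives, handing the identifier from the failed client to its successor, can be sketched as follows. The `client_node` model and function names are assumptions of this example only; in practice the transfer might instead happen ahead of time via replication or through a central repository, as noted above:

```c
#include <assert.h>
#include <string.h>

typedef struct { unsigned char bytes[16]; } app_guid; /* simplified GUID */

/* A minimal model of a client node's GUID cache. */
typedef struct {
    app_guid app_instance_id;
    int      has_id;
} client_node;

/* Hand the requestor's identifier from the failed client to its successor,
 * so the successor can present it with the second request. */
int failover_transfer(const client_node *failed, client_node *successor)
{
    if (!failed->has_id)
        return -1; /* nothing to transfer */
    successor->app_instance_id = failed->app_instance_id;
    successor->has_id = 1;
    return 0;
}

/* Illustrative nodes: the first holds an identifier, the second does not. */
client_node demo_first  = { {{ 0xab }}, 1 };
client_node demo_second = { {{ 0 }},    0 };
```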
In embodiments, after the client failover operation, flow continues to operation
310. At operation 310 a second request for the same resource is sent to the clustered
environment. In embodiments, the second request is sent by the second client in the client
cluster on behalf of the requestor. The second request may be sent in the same manner
as described with respect to the first request at operation 302. In order to maintain
continuous access to the resource and avoid undue delay, flow continues to operation 312
where the application instance identifier is again sent to the clustered environment. The
application instance identifier may be sent at operation 312 according to one of the
embodiments described with respect to operation 304. In embodiments, because a
different client (e.g., the second client) is sending the second request, the server receiving
the request may not be able to identify the second request as belonging to the same
requestor that holds a lock on the resource (e.g., because the request is made from a
different machine, a different address, etc.). However, by sending the application instance
identifiers at operations 304 and 312, the server will be able to identify the requests as
belonging to the same requestor, and will grant continuous access to the resource as
previously described with respect to FIGs. 1 and 2. Flow continues to operation 314 and
the requestor resumes access to the resource. In embodiments, the second client may
receive a response to the second request from the server indicating that the server granted
the second request. In embodiments, upon receiving the indication, the second client may
access the resource on behalf of the requestor.
FIG. 4 illustrates an embodiment of a method 400 performed by a node in a server
clustered environment to provide continuous access to a resource. Embodiments of the
method 400 may be performed by a node such as Node 1 208 (FIG. 2) in a clustered
environment, such as node cluster 206 (FIG. 2). In embodiments, method 400 may be
performed by a node that has access to a resource. Flow begins at operation 402 where the
node receives a request for a resource. In embodiments, a resource may be a file, an
object, a method, data, or any other type of resource that is under the control of and/or
may be accessed by the node performing method 400. An application instance identifier
may be received with the request at operation 402.
Flow continues to decision operation 404 where a determination is made as to
whether the resource is in a fenced or locked state. One of skill in the art will appreciate
that any manner of determining whether a resource is fenced or locked may be employed
at operation 404. If the resource is not in a fenced or locked state, flow branches NO to
operation 412 where the request for the resource is granted. In embodiments, granting the
request may comprise granting the requestor access to the resource, performing an
operation on the resource on behalf of the requestor, or permitting any kind of access or
modification to the resource. For example, granting the request in operation 412 may
include opening a file or creating a file.
If the resource is in a fenced or locked state, flow branches YES from operation
404 to decision operation 406. At decision operation 406, the application instance
identifier received with the request at operation 402 is compared to an application instance
identifier that is associated with the resource. For example, as described with respect to
FIG. 2, a node may associate an application instance identifier with a resource when a
client or application accesses a resource. As described earlier, the association of the
application instance identifier of a requestor accessing a resource may be stored on a node,
for example, in an app instance cache, as described in various embodiments discussed with
respect to FIG. 2. In embodiments, the application instance identifier that is included in an ECP sent
with a request for a resource, for example, in a _NETWORK_APP_INSTANCE_ECP_
CONTEXT structure, may be added to an ECP list associated with the resource.
In one embodiment, the association of the application instance identifier with the resource may
reside locally on the node performing method 400. In such instances, the comparison may
be made at a local app instance cache resident on the server. However, as discussed with
respect to FIG. 2, a clustered environment may contain a number of app instance caches
distributed across different nodes. Furthermore, the different app instance caches
may each store separate and/or different data. The application identifier associated with
the fenced or locked resource may be stored on a different node in the clustered
environment. In such instances, operation 406 may include sending a request to a
different node to perform the comparison at operation 406. The request may include the
application instance identifier received at operation 402.
If the received application instance identifier is not the same as the application
instance identifier associated with the resource, flow branches NO to operation 410. At
operation 410 the request to access the resource received at operation 402 is denied. In
embodiments, the request may be denied in order to avoid a resource conflict. Because the
received application identifier is not the same as the associated application instance
identifier, the request to access the resource received at operation 402 is from a different
requestor or application. Granting a request to the different client or application, as may
be the case here, may cause a conflict situation that will interfere with the application
currently accessing the resource. For example, the different application may modify the
resource in a manner that modifies or otherwise interferes with the operations performed
on the resource by the requestor that currently holds a lock on the resource.
However, receiving an application identifier with the request at operation 402 that is the same
as the application identifier associated with the fenced or locked resource indicates that
an error may have occurred that caused the requestor that was accessing the resource to
lose its access to the resource without properly releasing the resource. For example, the
requestor may operate in a client node cluster. The particular client the requestor was
operating on may have lost connection to the server or otherwise failed before the
requestor completed its operations upon the resource. In order to provide continuous
access to the resource, that is, to allow the requestor to regain access to the resource without
experiencing undue or unacceptable delay, flow branches YES to operation 408.
At operation 408, the resource is invalidated. As earlier described herein,
invalidating the resource may include changing the fenced state of the resource or
otherwise removing a lock on the resource. For example, if the resource is a file,
invalidating the resource may include closing the file. One of skill in the art will
appreciate that any method of releasing a fenced or locked resource may be employed at
operation 408.
Referring back to FIG. 2, in embodiments, access to a resource may be under
control of a node in the clustered environment different than the node that receives the
request for access to the resource at operation 402. For example, a handle to the resource
may reside on a different node in the clustered environment. In such embodiments,
invalidating the resource may include sending a request to the node controlling access to
the resource to invalidate the resource. In response to sending the request, the remote
node may invalidate the resource.
After the resource is invalidated, flow continues to operation 412 where the
request to access the resource is granted. Granting the request may comprise granting the
requestor access to the resource, performing an operation on the resource on behalf of the
requestor, or permitting any kind of access or modification to the resource. For example,
granting the request in operation 412 may include opening a file or creating a file.
Granting such access may be performed by the node receiving the request at operation
402, or by another node in the clustered environment.
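Operations 402 through 412 can be condensed into a single server-side handler. This is an illustrative sketch under assumed names, where "invalidation" is reduced to clearing a stale lock before re-granting it to the reconnecting requestor; a real node would instead close the open handle and might delegate to the controlling node as described above:

```c
#include <assert.h>
#include <string.h>

typedef struct { unsigned char bytes[16]; } app_guid; /* simplified GUID */

typedef struct {
    int      locked; /* fenced/locked state of the resource */
    app_guid holder; /* identifier of the requestor holding the lock */
} resource_state;

static int same_requestor(const app_guid *a, const app_guid *b)
{
    return memcmp(a->bytes, b->bytes, sizeof a->bytes) == 0;
}

/* Operations 402-412 in miniature: returns 1 if the request is granted,
 * 0 if it is denied. */
int handle_resource_request(resource_state *res, const app_guid *requestor)
{
    /* Operation 404/406: locked by a different requestor -> operation 410,
     * deny to avoid a conflict. */
    if (res->locked && !same_requestor(&res->holder, requestor))
        return 0;
    /* Operation 408: same requestor reconnecting, invalidate the stale lock. */
    if (res->locked)
        res->locked = 0;
    /* Operation 412: grant the request and re-associate the identifier. */
    res->holder = *requestor;
    res->locked = 1;
    return 1;
}

/* A resource locked by one identifier, plus a matching and a non-matching
 * requestor, for illustration. */
resource_state demo_res = { 1, {{ 0x01 }} };
const app_guid same_id  = {{ 0x01 }};
const app_guid other_id = {{ 0x02 }};
```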
[0066] Methods 300 and 400 are merely some examples of operational flows that may
be performed in accordance with embodiments. Embodiments are not limited to the
specific description provided above with respect to FIGS. 3-6 and may include additional
operations. Further, the operational steps described may be combined into other steps and/or
rearranged. Further, fewer or additional steps may be used or employed with the methods
described in FIGs. 3-4.
FIG. 5 illustrates a general computer system 500, which can be used to
implement the embodiments described herein. The computer system 500 is only one
example of a computing environment and is not intended to suggest any limitation as to
the scope of use or functionality of the computer and network architectures. Neither
should the computer system 500 be interpreted as having any dependency or requirement
relating to any one or combination of components illustrated in the example computer
system 500. In embodiments, system 500 may be used as the clients and/or servers
described above with respect to FIGs. 1 and 2.
In its most basic configuration, system 500 typically includes at least one
processing unit 502 and memory 504. Depending on the exact configuration and type of
computing device, memory 504 may be volatile (such as RAM), non-volatile (such as
ROM, flash memory, etc.), or some combination of the two. This most basic configuration is
illustrated in FIG. 5 by dashed line 506. System memory 504 stores instructions 520, such
as the instructions to perform the continuous availability methods disclosed herein, and
data 522, such as application instance identifiers that may be stored in a file storage system
with storage such as storage 508.
The term computer readable media as used herein may include computer storage
media. Computer storage media may include volatile and nonvolatile, removable and non-removable
media implemented in any method or technology for storage of information,
such as computer readable instructions, data structures, program modules, or other data.
System memory 504, removable storage, and non-removable storage 508 are all computer
storage media examples (e.g. memory storage). Computer storage media may include, but
is not limited to, RAM, ROM, electrically erasable read-only memory (EEPROM), flash
memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other
optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic
storage devices, or any other medium which can be used to store information and which
can be accessed by computing device 500. Any such computer storage media may be part
of device 500. Computing device 500 may also have input device(s) 514 such as a
keyboard, a mouse, a pen, a sound input device, a touch input device, etc. Output
device(s) 516 such as a display, speakers, a printer, etc. may also be included. The
aforementioned devices are examples and others may be used.
The term computer readable media as used herein may also include
communication media. Communication media may be embodied by computer readable
instructions, data structures, program modules, or other data in a modulated data signal,
such as a carrier wave or other transport mechanism, and includes any information
delivery media. The term “modulated data signal” may describe a signal that has one or
more characteristics set or changed in such a manner as to encode information in the
signal. By way of example, and not limitation, communication media may include wired
media such as a wired network or direct-wired connection, and wireless media such as
acoustic, radio frequency (RF), infrared, and other wireless media.
Embodiments of the invention may be practiced via a system-on-a-chip (SOC)
where each or many of the components illustrated in Figure 5 may be integrated onto a
single integrated circuit. Such an SOC device may include one or more processing units,
graphics units, communications units, system virtualization units and various application
functionality all of which are integrated (or “burned”) onto the chip substrate as a single
integrated circuit. When operating via an SOC, the functionality described herein with
respect to providing continuous access to a resource may operate via application-specific
logic integrated with other components of the computing device/system 500 on the single
integrated circuit (chip).
Reference has been made throughout this specification to “one embodiment” or
“an embodiment,” meaning that a particular described feature, structure, or characteristic
is included in at least one embodiment. Thus, usage of such phrases may refer to more
than just one embodiment. Furthermore, the described features, structures, or
characteristics may be combined in any suitable manner in one or more embodiments.
One skilled in the relevant art may recognize, however, that the embodiments
may be practiced without one or more of the specific details, or with other methods,
resources, materials, etc. In other instances, well known structures, resources, or
operations have not been shown or described in detail merely to avoid obscuring aspects
of the embodiments.
While example embodiments and applications have been illustrated and
described, it is to be understood that the embodiments are not limited to the precise
configuration and resources described above. Various modifications, changes, and
variations apparent to those skilled in the art may be made in the arrangement, operation,
and details of the methods and systems disclosed herein without departing from the scope
of the claimed embodiments.
Claims (23)
1. A method of providing continuous access to a resource, the method comprising: receiving a first request to access a resource by a process, wherein the request is received from a first client in a client cluster; associating a first application instance identifier with the resource; granting the first request to access the resource; receiving a second request to access the resource by the process after loss of connection with the first client, wherein the second request is received from a second client in the client cluster and the second client is different from the first client; receiving a second application instance identifier associated with the second request; determining that the first and second application instance identifiers are the same; and invalidating the first request, wherein invalidating the first request comprises: determining that the resource is not located on the first node that receives the second request; sending a request to a second node to invalidate the resource; and granting the second request to access the resource on the first node.
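The failover sequence of claim 1 can be sketched as follows. This is an illustrative model only, not the patented implementation: the `Node` class, its method names, and the dictionary bookkeeping are all hypothetical. It shows the core idea of the claim: a reconnecting client that presents the same application instance identifier from a different cluster client causes the stale open to be invalidated, including on a peer node when the resource is not held locally.

```python
# Hypothetical sketch of the claim 1 failover sequence (all names are
# illustrative, not from the patent).
class Node:
    def __init__(self, name):
        self.name = name
        # resource -> (application instance identifier, client)
        self.opens = {}

    def open_resource(self, resource, app_instance_id, client, peer=None):
        holder = self.opens.get(resource)
        if holder is None and peer is not None and resource in peer.opens:
            # Resource is not located on this node: ask the peer node to
            # invalidate its open before granting the request locally.
            peer_id, _ = peer.opens[resource]
            if peer_id != app_instance_id:
                raise PermissionError("resource held by another instance")
            del peer.opens[resource]       # "invalidate the first request"
        elif holder is not None:
            held_id, held_client = holder
            if held_client != client and held_id != app_instance_id:
                raise PermissionError("resource held by another instance")
            if held_client != client:
                del self.opens[resource]   # same instance, new client: failover
        # Grant the (second) request on this node.
        self.opens[resource] = (app_instance_id, client)
        return True

node_a, node_b = Node("A"), Node("B")
node_b.open_resource("file.txt", "app-42", "client-1")               # first request
node_a.open_resource("file.txt", "app-42", "client-2", peer=node_b)  # failover
print("file.txt" in node_b.opens, "file.txt" in node_a.opens)        # False True
```

The identifier comparison, rather than the client identity, is what authorizes the invalidation: a second client in the cluster can resume the same application instance's access after the first client's connection is lost.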
2. The method of claim 1, wherein the first application instance identifier is associated with an application instance of an open request.
3. The method of claim 1, wherein the first application instance identifier is associated with the process.
4. The method of claim 1, wherein the first application instance identifier is associated with at least one child process of the process.
5. The method of claim 1, further comprising storing the first application instance identifier in a registry.
6. The method of claim 1, wherein associating the first application instance identifier comprises receiving the first application instance identifier in a NETWORK_APP_INSTANCE_ECP_CONTEXT structure.
7. The method of claim 6, wherein the first application instance identifier in the NETWORK_APP_INSTANCE_ECP_CONTEXT structure is added to an Extra Create Parameter (ECP) list.
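Claims 6 and 7 describe the identifier traveling with a create/open request as one entry in an Extra Create Parameter (ECP) list. A minimal sketch of that wire layout follows; the field order (a size field, a reserved field, and a GUID) mirrors the published NETWORK_APP_INSTANCE_ECP_CONTEXT structure, but the surrounding request plumbing (`build_create_request` and its fields) is an assumption for illustration only.

```python
# Hedged sketch: serialize an application instance identifier as an
# Extra Create Parameter and attach it to a create request.
import struct
import uuid

def build_app_instance_ecp(app_instance_id: uuid.UUID) -> bytes:
    # USHORT Size, USHORT Reserved, 16-byte GUID AppInstanceID = 20 bytes.
    return struct.pack("<HH16s", 20, 0, app_instance_id.bytes_le)

def build_create_request(path: str, ecp_list: list) -> dict:
    # Illustrative request carrier: the ECP list rides alongside the path.
    return {"path": path, "ecps": ecp_list}

app_id = uuid.uuid4()
req = build_create_request("share\\file.txt", [build_app_instance_ecp(app_id)])

# The server side can unpack the ECP and recover the identifier.
size, _, guid_bytes = struct.unpack("<HH16s", req["ecps"][0])
print(size, uuid.UUID(bytes_le=guid_bytes) == app_id)  # 20 True
```

Carrying the identifier in an extensible parameter list lets clients that do not participate in failover omit it entirely, while cluster-aware clients attach it to every open.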
8. The method of claim 1, wherein the process comprises an application.
9. A system for facilitating client failover in a clustered environment, the system comprising: at least one server comprising: at least one processor configured to execute a first set of computer executable instructions; at least one computer readable storage media storing the first set of computer executable instructions, wherein the first set of computer executable instructions, when executed by the at least one processor, comprise instructions for: receiving a first request to access a resource from a first client in a client cluster, wherein the first client sends the first request on behalf of a process; associating a first application instance identifier with the first resource; allowing the process access to the resource; receiving a second request for the resource from a second client in the client cluster on behalf of the process after loss of connection with the first client, wherein the second client is different from the first client; receiving a second application instance identifier associated with the second request; determining that the first and second application instance identifiers are the same; and invalidating the first request, wherein invalidating the first request comprises: determining that the resource is not located on the first node that receives the second request; sending a request to a second node to invalidate the resource; and granting the second request to access the resource on the first node.
10. The system of claim 9, wherein the system further comprises: the first client, comprising: at least one processor configured to execute a second set of computer executable instructions; at least one computer readable storage media storing the second set of computer executable instructions, wherein the second set of computer executable instructions, when executed by the at least one processor, comprise instructions for: sending the first request; sending the first application instance identifier to the second client.
11. The system of claim 10, wherein the second client further comprises: at least one processor configured to execute a third set of computer executable instructions; at least one computer readable storage media storing the third set of computer executable instructions, wherein the third set of computer executable instructions, when executed by the at least one processor, comprise instructions for: receiving the first application instance identifier from the first client; and sending the second request to access the resource with the application instance identifier.
12. The system of claim 9, wherein the process comprises an application.
13. The system of claim 9, wherein the first application instance identifier is associated with at least one child process of the process.
14. The system of claim 9, wherein the first application instance identifier is associated with an application instance of an open request.
15. The system of claim 9, wherein the first application instance identifier is associated with the process.
16. A computer readable storage media, storing computer executable instructions that, when executed by a processor, comprise instructions for: receiving a first request to access a resource from a first client in a client cluster, wherein the first client sends the first request on behalf of a process; associating a first application instance identifier with the first resource; allowing the process access to the resource; receiving a second request for the resource from a second client in the client cluster on behalf of the process after loss of connection with the first client, wherein the second client is different from the first client; receiving a second application instance identifier associated with the second request; determining that the first and second application instance identifiers are the same; and invalidating the first request, wherein invalidating the first request comprises: determining that the resource is not located on the first node that receives the second request; sending a request to a second node to invalidate the resource; and granting the second request to access the resource on the first node.
17. The computer readable storage media of claim 16, wherein the process comprises an application.
18. The computer readable storage media of claim 16, wherein the first application instance identifier is associated with at least one child process of the process.
19. The computer readable storage media of claim 16, wherein the first application instance identifier is associated with an application instance of an open request.
20. The computer readable storage media of claim 16, wherein the first application instance identifier is associated with the process.
21. A method of providing continuous access to a resource, substantially as herein described with reference to the accompanying figures.
22. A system for facilitating client failover in a clustered environment, substantially as herein described with reference to the accompanying figures.
23. A computer readable storage media, storing computer executable instructions that, when executed by a processor, comprise instructions substantially as herein described with reference to the accompanying figures.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/228,732 US8788579B2 (en) | 2011-09-09 | 2011-09-09 | Clustered client failover |
US13/228,732 | 2011-09-09 | ||
PCT/US2012/054038 WO2013036697A2 (en) | 2011-09-09 | 2012-09-07 | Clustered client failover |
Publications (2)
Publication Number | Publication Date |
---|---|
NZ622122A NZ622122A (en) | 2015-01-30 |
NZ622122B2 true NZ622122B2 (en) | 2015-05-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8788579B2 (en) | Clustered client failover | |
CN114787781B (en) | System and method for enabling high availability managed failover services | |
US9495293B1 (en) | Zone consistency | |
US10970110B1 (en) | Managed orchestration of virtual machine instance migration | |
KR102438595B1 (en) | File service using a shared file access-rest interface | |
US10404579B1 (en) | Virtual machine instance migration using a hypervisor | |
US11556500B2 (en) | Session templates | |
US11228510B2 (en) | Distributed workload reassignment following communication failure | |
US8977703B2 (en) | Clustering without shared storage | |
US20210326168A1 (en) | Autonomous cell-based control plane for scalable virtualized computing | |
US20150082302A1 (en) | High availability using dynamic quorum-based arbitration | |
WO2016074167A1 (en) | Lock server malfunction processing method and system thereof in distribution system | |
US9313208B1 (en) | Managing restricted access resources | |
US10193767B1 (en) | Multiple available witnesses | |
EP3629178B1 (en) | System and method for providing backup services to high availability applications | |
NZ622122B2 (en) | Clustered client failover | |
US11086846B2 (en) | Group membership and leader election coordination for distributed applications using a consistent database | |
US20230401337A1 (en) | Two person rule enforcement for backup and recovery systems | |
Carter et al. | Implementing AlwaysOn Availability Groups |