US20050091226A1 - Persistent caching directory level support - Google Patents


Info

Publication number
US20050091226A1
Authority
US
United States
Prior art keywords
file
csc
request
client
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/692,212
Inventor
Yun Lin
Navjot Virk
Brian Aust
Shishir Pardikar
David Steere
Mohammed Samji
Current Assignee
Microsoft Technology Licensing LLC
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to US10/692,212
Assigned to MICROSOFT CORPORATION reassignment MICROSOFT CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: PARDIKAR, SHISHIR P., VIRK, NAVJOT, AUST, BRIAN S., LIN, YUN, SAMJI, MOHAMMED A., STEERE, DAVID C.
Priority to US11/064,255 (US7698376B2)
Priority to US11/064,235 (US7702745B2)
Publication of US20050091226A1
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC reassignment MICROSOFT TECHNOLOGY LICENSING, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MICROSOFT CORPORATION
Current legal status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/957: Browsing optimisation, e.g. caching or content distillation
    • G06F16/9574: Browsing optimisation of access to content, e.g. by caching

Definitions

  • the present invention relates generally to client side caching, and more particularly to systems and methods that facilitate persistent caching to shield a user and client applications across connectivity interruptions and/or bandwidth changes such that truth on the client is supported.
  • Computers have become a household staple instead of a luxury, educational tool and/or entertainment center, and provide users with a tool to manage and forecast finances, control household operations like heating, cooling, lighting and security, and store records and images in a permanent and reliable medium.
  • Networking technologies like the Internet provide users with virtually unlimited access to remote systems, information and associated applications.
  • a user interfaces with a client(s) application (e.g., word processing documents, files, etc.) to interact with a network or remote server(s) that stores information in a database that is accessible by the client application.
  • Databases provide a persistent, durable store for data that can be shared across multiple users and applications.
  • Client applications generally retrieve data from the database through a query(s), which returns results containing the subset of data that is interesting to the client application.
  • the client application then consumes, displays, transforms, stores, or acts on those results, and may modify or otherwise manipulate the data retrieved.
  • Every remote name in SMB begins with a prefix that identifies two elements: a server and a share, in the format of a path beginning with “\\server\share\ . . . ”.
  • the server is the physical server (e.g., name of machine) to which the client is talking.
  • the share refers to a name on the machine which can be found on the machine's hard drive.
  • the server and the share were created on the same machine or remote server. Therefore, if any object along the \\server\share\ . . . path was disconnected and/or offline, then the server would be marked as offline as well. Multiple shares can be located on one server; thus, when one share becomes disconnected from the network, the entire server goes offline as well.
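  • As an illustration of the prefix convention described above, a UNC path can be split into its server, share, and remaining components. The following is a sketch only; the function name and error handling are assumptions, not part of the patent, and real redirectors perform far more validation:

```python
def parse_unc(path):
    # Split a UNC path of the form \\server\share\rest into (server, share, rest).
    # Illustrative only; real redirectors perform more validation than this.
    if not path.startswith("\\\\"):
        raise ValueError("not a UNC path")
    parts = path[2:].split("\\")
    if len(parts) < 2 or not parts[0] or not parts[1]:
        raise ValueError("a UNC path must name both a server and a share")
    server, share = parts[0], parts[1]
    return server, share, "\\".join(parts[2:])
```

Note that the server and share components are mandatory, which mirrors the text above: every remote SMB name carries that two-element prefix.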
  • Access by client applications has traditionally been dependent upon the connection state of the remote server. In some cases, however, clients may have access to some data while disconnected from the remote server.
  • the modified client version is usually not visible to the client and/or user when the server returns online. This is commonly referred to as “truth on the server” because the server version of the data is kept and/or maintained when a conflict between the client and server data is detected. Inevitably, this results in incoherent data as seen by client applications, as well as increased server and/or network traffic, in addition to the myriad of other inconveniences and problems for most users.
  • the present invention provides a novel client side caching (CSC) infrastructure which facilitates a seamless operation across connectivity states (e.g., online-offline) between client and remote server.
  • a persistent caching architecture is employed to safeguard the user (e.g., client) and/or the client applications across connectivity interruptions and/or bandwidth changes. This is accomplished in part by caching the desirable file(s) together with the appropriate protocol information (e.g., SMB and WebDAV (Web-based Distributed Authoring and Versioning)) to a local (e.g., client) data store.
  • Such information includes object access rights and share access rights which correspond to the file or group of files being cached.
  • the files to be cached to the local data store (on the client) can be determined in any number of ways according to the preferences of the user. In a first instance, caching can be automatic. In a second instance, caching can be manual. For example, substantially all files accessed at least once by a client application can be cached. Conversely, only certain files marked by the user and/or client application for caching can be cached. In addition, the caching of files accessed by the user can be performed at prescribed time intervals or even at random depending on such user preferences.
  • data requested when connected to a remote server can continue to be accessed, manipulated, and/or modified by the client while disconnected from the server.
  • the files are presented to the client as if they reside on the remote physical server location. For instance, any particular file cached to the local hard drive in the prescribed manner maintains the same name whether the server is offline or online. Hence, it is not apparent to the user or client whether the file was retrieved from the local cache or from the server.
  • file access parameters including read/write capabilities can also be cached for offline use. Therefore, access to files can be granted or denied in a similar manner as when connected to the server. For example, imagine a user has access rights to a document located on the server. The file is cached to the user's local hard drive. Thus, when disconnected from the server, the user can still access that file from his/her local memory as long as the requisite access rights (e.g., object access rights and share access rights) accompany the respective file (e.g., cached with the file). However, if the corresponding access rights are not cached locally, then access may be denied.
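  • A minimal model of this offline access check follows. It is a hypothetical sketch: the cache layout, right masks, and function names are all assumptions; the point is only that an offline open fails unless both the object access rights and the share access rights were cached alongside the file:

```python
READ, WRITE = 0x1, 0x2          # illustrative right masks, not real flags

cache = {}                      # path -> cached file entry

def cache_file(path, data, object_rights=None, share_rights=None):
    # Cache the file data, optionally together with its access rights.
    cache[path] = {"data": data, "object_rights": object_rights,
                   "share_rights": share_rights}

def open_offline(path, desired):
    # Offline open: deny unless the requisite rights were cached with the file.
    entry = cache.get(path)
    if entry is None:
        raise FileNotFoundError(path)          # never cached while online
    if entry["object_rights"] is None or entry["share_rights"] is None:
        raise PermissionError("access rights were not cached; access denied")
    if (desired & entry["object_rights"] & entry["share_rights"]) != desired:
        raise PermissionError("insufficient cached access rights")
    return entry["data"]
```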
  • the user experience is substantially uniform across server types.
  • the user may not know which type of network is serving up the files that he/she is accessing and specifically, the reasons why one server allows a particular feature while another server does not.
  • achieving uniformity across server types is based at least in part upon the location of the CSC component. For example, client side caching can be located above all redirectors, independent of the type of network redirection being performed. As a result, the offline experience remains consistent and without change when switching between server types.
  • I/O requests can be sent to the CSC component before the DFS component to ensure that all relevant information (e.g., identifications of DFS links, corresponding physical shares, share access rights, etc.) is cached before the connection state changes from online to offline.
  • the DFS component can only obtain referrals while online and the connection may be lost at any time.
  • the present invention provides for truth on the client. This is accomplished in part by write back caching.
  • Write back caching involves caching data on the client first and then pushing it back to the server at appropriate times. For example, any file modified or manipulated by the client while disconnected from the remote server can be stored to the client's memory and then uploaded to the server when the client regains its connection to the server. This can be particularly useful when a conflict in the data exists between the client copy and the server copy. User resolution may be needed to resolve the conflict in data; however, when reconnected to the server, the user continues to see its modified version of the file rather than the server's version.
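  • Write back caching as described can be sketched with a small in-memory model. This is an assumption-laden illustration (dictionaries stand in for the local store and the remote share; the class and method names are invented), showing offline writes being pushed back on reconnect with the client copy winning, i.e., truth on the client:

```python
class WriteBackCache:
    # Writes land in the local cache first and are pushed to the server on
    # reconnect; on conflict the client copy wins ("truth on the client").
    def __init__(self, server):
        self.server = server          # dict standing in for the remote share
        self.local = dict(server)     # cached copies taken while online
        self.dirty = set()            # names modified while disconnected

    def write(self, name, data):
        self.local[name] = data       # cache on the client first
        self.dirty.add(name)

    def read(self, name):
        return self.local[name]       # user keeps seeing the client version

    def reconnect(self):
        for name in sorted(self.dirty):
            self.server[name] = self.local[name]   # push back; client wins
        self.dirty.clear()
```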
  • FIG. 1 illustrates a high level schematic block diagram of a remote file system in accordance with one aspect of the present invention.
  • FIG. 2 illustrates a block diagram of a remote file system in accordance with one aspect of the present invention.
  • FIG. 3 illustrates an exemplary data structure in accordance with one aspect of the present invention.
  • FIG. 4 illustrates an exemplary diagram of a user's view of an online, partial cached namespace in accordance with one aspect of the present invention.
  • FIG. 5 illustrates an exemplary diagram of a user's view of an offline, partial cached namespace in accordance with one aspect of the present invention.
  • FIG. 6 illustrates an exemplary diagram of a user's view of an offline, partial cached namespace with shadow instances in accordance with one aspect of the present invention.
  • FIG. 7 illustrates an exemplary diagram of a user's view of an online, server namespace change requiring synchronization between the client and the server in accordance with one aspect of the present invention.
  • FIG. 8 illustrates an exemplary diagram of truth on the client during normal CSC operations in accordance with one aspect of the present invention.
  • FIG. 9 illustrates an exemplary diagram of truth on the client during synchronization between client and server copies of a file object in accordance with one aspect of the present invention.
  • FIG. 10 illustrates an exemplary diagram of truth on the client as normal CSC operations have resumed in accordance with one aspect of the present invention.
  • FIG. 11 illustrates a flow diagram of an exemplary methodology that facilitates maintaining access to remote files (e.g., server-based) during any period of disconnect from a remote location in accordance with one aspect of the present invention.
  • FIG. 12 is a continuation of FIG. 11 , in accordance with one aspect of the present invention.
  • FIG. 13 is a continuation of FIG. 11 , in accordance with one aspect of the present invention.
  • FIG. 14 illustrates an exemplary API in accordance with one aspect of the present invention.
  • FIG. 15 illustrates an exemplary API in accordance with one aspect of the present invention.
  • FIG. 16 illustrates an exemplary operating system in accordance with one aspect of the present invention.
  • a component is intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution.
  • a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
  • an application running on a server and the server can be a computer component.
  • One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
  • a “thread” is the entity within a process that the operating system kernel schedules for execution.
  • each thread has an associated “context” which is the volatile data associated with the execution of the thread.
  • a thread's context includes the contents of system registers and the virtual address space belonging to the thread's process. Thus, the actual data comprising a thread's context varies as it executes.
  • the term “inference” as used herein refers generally to the process of reasoning about or inferring states of the system, environment, and/or user from a set of observations as captured via events and/or data. Inference can be employed to identify a specific context or action, or can generate a probability distribution over states, for example. The inference can be probabilistic; that is, the computation of a probability distribution over states of interest based on a consideration of data and events. Inference can also refer to techniques employed for composing higher-level events from a set of events and/or data. Such inference results in the construction of new events or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources.
  • various aspects of the subject invention can employ probabilistic-based and/or statistical-based classifiers in connection with making determinations and/or inferences in connection with the subject invention.
  • classifiers can be employed in connection with utility-based analyses described herein.
  • a support vector machine (SVM) classifier can be employed which generally operates by finding a dynamically changing hypersurface in the space of possible inputs.
  • Other directed and undirected models/classification approaches that can be employed include, e.g., naive Bayes, Bayesian networks, decision trees, Hidden Markov Models (HMM), data fusion engines, neural networks, expert systems, fuzzy logic, or any suitable probabilistic classification model providing different patterns of independence.
  • Classification as used herein also is inclusive of statistical regression that is utilized to develop models of priority.
  • the present invention involves systems and methods that facilitate client side caching and truth-on-the-client persistent caching.
  • Client side caching provides off-line access to files and/or other data when the network version of the file is otherwise unavailable due to a network outage or intentional disconnection. It also can increase server scalability while connected to the network by reducing file operations directed at remote servers.
  • a client can access the cached copy of a file using the same file name and with the same namespace as when the client is connected to the network.
  • the client may not even be aware that a temporary disconnection from the network (e.g., remote server(s)) is occurring since access to and/or modification of one or more files has not been interrupted.
  • DFS (Distributed File System)
  • DFS links are based at least in part upon logical names which can be expressed in the format of \\domain\name\ . . . , for example.
  • logical names are not necessarily physical names that identify a server. Rather, DFS links can point to a physical server(s) or file(s).
  • DFS links are structured to deal with SMB shares, NFS shares, as well as Webdav (or DAV) shares or any other remote process that an operating system can be pointed at by a DFS share or link.
  • the logical name space can include multiple DFS links which are backed up by a physical share on a server which can be individually online or offline.
  • the client side caching can keep track of any particular DFS link persistently so that it can transition a connection state at a proper logical directory. This effectively minimizes the scope of offlineness to the physical share.
  • client side caching can support state transitions at the directory level on a logical path which represents a DFS link. Therefore, if a physical share is down, only the part of the logical name space hosted on that physical share is offline, with the rest of the logical name space remaining online. In a cascaded DFS scenario, part of the name space can be offline while the next or adjacent part of the name space can still be online. For example, one portion of a path can be offline while the remaining portions remain online.
  • 4051 may be a DFS link pointing to a physical share, and x86gsf may be a file located on a portion of the share. Thus, it appears that share 4051 is offline and, accordingly, any files listed that belong to that offline share will also appear to be offline. Conversely, 4073 may correspond to another DFS link or physical share that is online. Thus, despite being downstream from the offline link or share, any files belonging to other online physical shares remain online.
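  • The directory-level scoping above amounts to a prefix check over a set of offline transition points. The sketch below is hypothetical (the set-based bookkeeping, function names, and the path shape around the link names 4051 and 4073 are assumptions; the link names themselves come from the example above):

```python
offline_dirs = set()   # transition points: directories forced offline

def mark_offline(dir_path):
    # Record that a directory (e.g., a DFS link backed by a down share)
    # has transitioned offline.
    offline_dirs.add(tuple(p for p in dir_path.split("\\") if p))

def is_offline(path):
    # A path is offline only if some offline directory is a prefix of it;
    # the rest of the logical name space stays online.
    parts = tuple(p for p in path.split("\\") if p)
    return any(parts[:len(d)] == d for d in offline_dirs)
```

Marking only the link's directory offline keeps the scope of offlineness to the physical share, as the text requires: siblings under other links stay online.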
  • Referring to FIG. 1, there is illustrated a high-level, schematic diagram of an exemplary remote file system 100 comprising client side caching (CSC) architecture for communication and interaction between one or more clients 110 and a network 120 or remote server(s) in accordance with an aspect of the present invention.
  • a client application makes a request (by way of input to the remote file system 100 ) using paths into a kernel portion of the remote file system 100 .
  • requests typically are directly communicated to a redirector component (not shown) such as SMB or Webdav or some other file system over the relevant network.
  • a CSC component such as a client-side-caching mechanism is situated at about the middle of that path.
  • the CSC component comprises a data store for offline retrieval of data which was previously cached. The previous caching may have taken place when the client was connected (online as depicted by 130 ) to the remote network 120 or server.
  • the local cache on the client's side can determine whether it also has a copy of the file. If it does contain a copy, the CSC component can retrieve the data from the data store, thereby mitigating network traffic. To some extent, it may also be possible to request such information from other clients, provided that they are on the same or similar network.
  • This configuration can be referred to as a distributed client cache system, whereby a plurality of clients can access each other's cache. As the technology continues to advance in this area, the distributed client cache system may become more efficient with respect to performance, speed, and bandwidth consumption.
  • the CSC component operates on logical namespaces (e.g., names of files as users see them) and supports connection state transitions at the directory level on a logical path that is representative of a DFS link.
  • the DFS link can point to a physical share and typically translates a logical path into its physical path.
  • Logical namespaces can be backed up by multiple shares on different physical servers.
  • When a connection state changes (e.g., online to offline) due to a failure returned from a redirector or a network disconnect indication, for example, the CSC component will only transition the directory that hosts the object, placing it on an offline list. The rest of the logical name space is not affected. Therefore, when a file I/O request comes down, the CSC component can cross-reference the list to see if the path is offline. If it is, the request is processed offline. Otherwise, it will be sent to a redirector for further processing. The transition version of the directory where the path is forced offline can be tracked.
  • directory access rights as well as share access rights (if a DFS link) for the respective portions of the logical name space are also stamped on the directory cache entries.
  • the CSC component can check the file access and share access rights to determine whether to allow the request to succeed.
  • Referring to FIG. 2, there is illustrated a block diagram of an exemplary remote file system 200 utilizing client side caching in accordance with an aspect of the present invention.
  • Whenever a client/user application generates a request for the file system, such as to gain access to a directory or file, an I/O Manager 210 initially can determine whether the desired path is a local or remote path. If it is a remote path, then the remote file system 200 as shown in FIG. 2 can be employed.
  • the remote file system 200 comprises a Multiple UNC (Universal Naming Convention) Provider (MUP) 220 , surrogate providers (e.g., CSC 230 , DFS 240 ), and one or more redirectors 250 (e.g., SMB 252 , NFS 254 , and DAV (Webdav) 256 ).
  • CSC mechanism 230 and the DFS 240 are at the same level as the MUP 220 .
  • CSC 230 can receive all or substantially all UNC and drive-letter-based I/O destined for a network or remote server. Because the CSC 230 registers as a surrogate provider, it can also receive pre- and post-views of nearly all IRP and FastIO calls to network providers.
  • an extended mini-redirector interface can be used to communicate with a plurality of mini-redirectors in order to get additional information from the mini-redirector and to simplify callback mechanisms for events such as oplock breaks and transports appearing and disappearing from the redirectors.
  • Substantially all calls fielded by the MUP 220 are handed down to the appropriate redirectors 250 .
  • the CSC mechanism 230 can also filter substantially all calls going through the MUP 220 , thus allowing it the opportunity to decide on appropriate caching pathnames.
  • the MUP 220 in the present invention supports surrogate providers.
  • Surrogate providers such as the CSC mechanism 230 can register to the MUP 220 with pre- and post-process handlers ( 232 , 234 ).
  • the MUP 220 calls the pre-process handler(s) 232 in a predetermined order before calling any of the network providers. A pre-process handler can return one of several statuses when its pre-process is done, e.g., “success” when it has completed the request itself, or a status asking the MUP to proceed with further processing.
  • the CSC post process handler 234 can be called after the request is handled by a network provider and/or another surrogate provider, depending on the status returned on its pre-process.
  • the post-process handler 234 has a chance to handle the request again. For instance, it can store the data returned back from the server if “success” is returned, or take the connection offline, process the request from cache, and return “success”.
  • the post-process handler(s) 234 are called in the opposite order some time thereafter. Since the CSC mechanism 230 is in the path of every call that the MUP handles, the CSC can do the relevant pre- and post-processing that is necessary to obtain the appropriate functionality.
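  • The surrogate-provider calling convention described above can be modeled as follows. This is a simplified sketch; the status strings, the tuple representation of a surrogate, and the decision to run the completing surrogate's own post-process are assumptions, not the patent's API:

```python
def dispatch(request, surrogates, provider_fn, trace):
    # surrogates: list of (name, pre_fn, post_fn), in registration order.
    # A pre_fn returning "success" completes the request (e.g., served from
    # the offline cache); anything else lets the MUP keep going.
    seen = []
    completed = False
    for name, pre, post in surrogates:
        trace.append("pre:" + name)
        seen.append((name, post))
        if pre(request) == "success":
            completed = True
            break
    if not completed:
        trace.append("provider")      # hand the call to a network provider
        provider_fn(request)
    # Post-process handlers run in the opposite order some time thereafter.
    for name, post in reversed(seen):
        trace.append("post:" + name)
        post(request)
```

Because the CSC sits in the path of every call, its pre-handler sees the request before DFS or any redirector, and its post-handler sees the result last, which is what lets it cache logical paths and server replies.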
  • the MUP 220 can call the CSC 230 and/or the DFS 240 , in either order. However it is preferable that the MUP 220 calls the CSC mechanism 230 before the DFS 240 . This can be important for a create request in particular because it gives the CSC mechanism 230 a chance to cache the logical path in a local cache 236 before the DFS 240 translates the file object name into the physical path.
  • FIG. 3 illustrates a schematic block diagram of representative file-based data structures in both logical (CSC) 300 and physical namespace (Mini-Rdr) 310 and the relationships between them when a file is created online.
  • CSC 300 maintains the connection based data structures in logical name space while Mini-Rdr 310 maintains the connection based data structures in physical name space.
  • File based data structures are created by CSC 300 and shared among CSC 300 and Mini-Rdr 310 .
  • Some file based data structures have access to the connection based data structures in both logical and physical name space. Therefore, file I/O requests can be executed by either CSC 300 or Mini-Rdr 310 , based on the circumstances.
  • the CSC can provide and/or facilitate persistent caching to yield truth on the client.
  • At least a portion of CSC consistency is based in part on the last write time stamp and file-size; hence, it has to do various tasks at create/close time to ensure that this information is accurate.
  • a create request comes to the MUP (e.g., MUP 220 in FIG. 2 ).
  • the MUP 220 calls the pre-process handler (e.g., FIG. 2, 232 ) of the CSC (surrogate provider).
  • the CSC attempts to find or create connection data structures for the file object that is issued with the create request.
  • Examples of the connection data structures include a server Connection structure: SrvCall; a share mapping structure: NetRoot; and a per-user share mapping structure: VNetRoot.
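  • A find-or-create walk over this hierarchy might look like the following sketch, in which nested dictionaries stand in for SrvCall (server connection), NetRoot (share mapping), and VNetRoot (per-user share mapping). The function name and representation are illustrative assumptions, not the kernel data structures:

```python
connections = {}   # SrvCall level: server name -> shares

def find_or_create_connection(server, share, user):
    # Walk or create SrvCall -> NetRoot -> VNetRoot for a logical path,
    # returning the per-user share mapping.
    srv_call = connections.setdefault(server, {})        # SrvCall
    net_root = srv_call.setdefault(share, {})            # NetRoot
    return net_root.setdefault(user, {"server": server,  # VNetRoot
                                      "share": share, "user": user})
```

The find-or-create behavior matters on the create path: a second open of the same share by the same user reuses the existing structures instead of building new ones.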
  • if the path is known to be offline, the surrogate (e.g., the CSC) finds or creates the logical namespace structures and returns “success”. However, if the surrogate provider does not have information indicating that the path is offline, it can ask the MUP to proceed with further processing of the create call after creating or finding the above structures in the logical name space.
  • the MUP may continue its operations by supplying the path to the DFS (e.g., FIG. 2, 240 ), which in turn might translate the logical path to an actual server share, depending on whether there is a DFS link along the way.
  • Eventually, one redirector (e.g., FIG. 2, 250 ), a mini-redirector (MINI-RDR), receives the request. The MINI-RDR automatically refers to RDBSS (the redirected drive buffering subsystem) to execute the common create code.
  • When the create call returns to the MUP, the MUP can call the CSC post-process handler. If the call is not fielded by a mini-redirector that supports the CSC, a post-processor routine may tell the MUP that it is not interested in the file, and no subsequent operations are seen by the CSC.
  • the CSC pre-process handler can get the file extension, size, and caching flags of the path by looking at the physical NetRoot of the Fcb (file control block, an abstraction of a file that contains information about the file such as name, size, time stamps, cache map, shared access rights, mini-redirector device object, pointer to NetRoot, etc.) of the parent directory, or by issuing a FSCTL (File System Control) against the handle.
  • the CSC can decide whether to own this file object. If the share characteristics so demand, such as it being a cacheable share, or if the mini-redirector demands caching be on all the time, such as DAV, the CSC can claim ownership and create the file data structures for the open instance (e.g., Fcb, SrvOpen, and Fobx) represented by this file object. However, if the share is marked non-cacheable, the CSC can disassociate itself from the file object so as to not see the operations against this file object thereafter.
  • SrvOpen refers to Server Side Open Context, which is the abstraction of an open sent to the server, and which stores the desired access, share access, security context, server file handle, mini-rdr context, pointer to Fcb, etc. Multiple SrvOpens with different access rights and session IDs can collapse on a single Fcb.
  • Fobx refers to File Object Extensions which is the RDR (redirector) extension of a file object, containing information unique to a handle, such as a directory enumeration template, resume key, etc. It also has the pointer to a SrvOpen. Multiple Fobx can collapse on a single SrvOpen, if their access rights match.
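  • The collapsing rules for Fcb, SrvOpen, and Fobx can be sketched as follows. Field names and the `open_handle` helper are simplifying assumptions; the sketch keeps only the relationships stated above (many Fobx per SrvOpen when access rights and session match, many SrvOpens per Fcb):

```python
class Fcb:
    # File control block: one per file; collects SrvOpens.
    def __init__(self, name):
        self.name = name
        self.srv_opens = []

class SrvOpen:
    # Server-side open context: one per (desired access, session) pair.
    def __init__(self, fcb, access, session):
        self.fcb, self.access, self.session = fcb, access, session
        self.fobxs = []
        fcb.srv_opens.append(self)

class Fobx:
    # File object extension: one per user handle, pointing to a SrvOpen.
    def __init__(self, srv_open):
        self.srv_open = srv_open
        srv_open.fobxs.append(self)

def open_handle(fcb, access, session):
    # Collapse onto an existing SrvOpen when access rights and session match;
    # otherwise send a new open to the server.
    for so in fcb.srv_opens:
        if so.access == access and so.session == session:
            return Fobx(so)
    return Fobx(SrvOpen(fcb, access, session))
```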
  • the CSC can issue a create directly to the mini-redirector with the prefixed file name in the physical namespace which can be obtained through the file handle for querying attributes.
  • the CSC can keep the connection data structure around for some time even if it is a non-cacheable path.
  • it can put the parent directory on the name cache under the NetRoot so that it can quickly determine the persistent caching characteristics on the next create request without issuing an open to it again. With this approach, per directory caching can be obtained.
  • the connection-related data structures can be separated, and the file-related data structures can be shared, between the logical and physical namespaces.
  • the CSC and redirectors can create their own SrvCall, NetRoot, and VNetRoot, and share the Fcb, SrvOpen and Fobx. This way, the CSC and redirectors can handle many different UNC paths (logical and physical) without doubling the resources such as in-memory data structures and cache maps.
  • In a file read operation, the CSC needs to know the buffering state of the file before every read from the persistent cache. If the buffering state is such that read caching is allowed, then the CSC can read the persistently cached data and serve it to the application. However, if the buffering state is at the equivalent of OPLOCK_LEVEL_NONE, then it should not return the data from the cache and should let all the reads go to an underlying provider.
  • the buffering state info can be obtained by checking the FCB_STATE_READCACHING_ENABLED flag on Fcb->FcbState.
  • If read caching is allowed, the CSC fills the user buffer with cached data and returns success from the CSC pre-process; the MUP then returns the request without forwarding it to the mini-rdr. If read caching is disabled or the file is sparse, the CSC sends the read request directly to the mini-redirector in the pre-process. Once the request is completed successfully by the mini-redirector, the data is saved in the cache. If an error is returned from the mini-redirector, the CSC attempts to transition to offline and, if it succeeds, completes the request offline. In either case, the read operation is completed in the CSC pre-process without having the MUP send the request to the mini-redirector.
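  • The read control flow above can be sketched as follows. The flag value, function names, and the use of `ConnectionError` to model a redirector failure are assumptions; only the decision structure (cache hit, direct send, offline transition) comes from the text:

```python
FCB_STATE_READCACHING_ENABLED = 0x1   # flag value is illustrative

def csc_read(fcb_state, cache, read_from_server, try_transition_offline):
    # Serve from the persistent cache when read caching is allowed.
    if fcb_state & FCB_STATE_READCACHING_ENABLED and cache.get("data") is not None:
        return cache["data"], "cache"
    try:
        data = read_from_server()             # send directly to the mini-rdr
    except ConnectionError:
        if try_transition_offline() and cache.get("data") is not None:
            return cache["data"], "offline"   # complete the request offline
        raise
    cache["data"] = data                      # save returned data in the cache
    return data, "server"
```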
  • In a file write operation, the CSC needs to know the buffering state of the file before every write is executed. If the buffering state is such that write caching is allowed, then the CSC can write the persistently cached data and return success to the application. However, if the buffering state is at the equivalent of OPLOCK_LEVEL_II or less, then it should not cache the write and should let all the writes go to the underlying provider. Again, the buffering state info can be obtained by checking the FCB_STATE_WRITECACHING_ENABLED flag on Fcb->FcbState.
  • If write caching is allowed, the CSC sends the write request to the local cache and returns success from the CSC pre-process. The MUP will then return the request without forwarding it to the mini-rdr.
  • the CSC sends the write request directly to the mini-redirector in the pre-process. Once the mini-redirector completes the request successfully, the data is saved in the cache. If the mini-redirector returns an error, the CSC attempts to transition to offline and, if it succeeds, completes the request offline. In either case, the write operation is completed in the CSC pre-process without MUP sending the request to the mini-redirector.
  • the MUP calls CSC pre-process first when a close request comes to the MUP.
  • the CSC pre-process checks whether there is any cached data from a previous write request on this file. If so, the CSC sends back only the sections containing modified data to the server by issuing write requests to the mini-redirector. After the cached data is pushed back to the server, the CSC pre-process sends the close request to the mini-redirector. Once the mini-redirector completes the close request, the CSC pre-process queries the timestamp from the server, sets it on the cached file, closes the cache handle, and returns to MUP. Thus writes are cached until the file is closed. If a durable op-lock is granted to the client, the close request is executed only locally since there is no remote handle to the file; writes are then cached until the op-lock is broken.
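The close-time flush above pushes only the modified sections back to the server. The sketch below is hypothetical dirty-range bookkeeping (the csc_section structure and function name are illustrative, not the driver's actual types):

```c
#include <stddef.h>

/* Hypothetical per-file dirty-range record: only sections marked dirty
 * are written back to the server when the handle is closed. */
typedef struct {
    long long offset;  /* byte offset of the section in the file */
    long long length;  /* section length in bytes */
    int       dirty;   /* nonzero if modified since the last flush */
} csc_section;

/* Counts the sections that would be pushed to the server on close and
 * accumulates the total number of bytes to write in *bytes_out. */
int csc_sections_to_flush(const csc_section *s, size_t n, long long *bytes_out)
{
    int count = 0;
    *bytes_out = 0;
    for (size_t i = 0; i < n; i++) {
        if (s[i].dirty) {
            count++;
            *bytes_out += s[i].length;
        }
    }
    return count;
}
```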
  • the CSC follows the design principle of always processing the request after a redirector takes a first look, except for a create request.
  • exemplary pseudo code for the control flows of other typical operations is similar to that of the read and write operations described above.
  • CSC pre-process handler creates the SrvCall, NetRoot and VNetRoot for the logical path and Fcb, SrvOpen and Fobx for the file object.
  • Fobx is stored on the FileObject->FsContext2 so that mini-rdr can pick it up when it is called from either DFS or MUP.
  • if the Fcb is created by the CSC, Fcb->RxDeviceObject is the CSC device object; otherwise, it is the mini-rdr device object.
  • Mini-rdr also sets the Fcb->MiniRdrDeviceObject on the create path.
  • the subsequent create request on the same file has to wait on the CSC pre-process handler.
  • CSC post process handler signals the other create request after the first one is completed.
  • the present invention facilitates a seamless user experience that includes online to offline transition, offline to online transition, merge and synchronization, and low bandwidth operations.
  • the present invention is capable of transitioning the connection state at the share as well as the directory level. If one share goes offline, other shares on the same server (logical namespace) remain online. Offline at the directory level is built for a logical namespace with DFS link(s). When a physical share is disconnected, the part of the namespace hosted on that physical share is disconnected while the rest of the logical path remains online. In cascaded DFS cases, it is possible to have mixed online and offline directories along the logical path.
  • the CSC keeps a transition list of the directories on the logical NetRoot which are backed up by a physical share.
  • the logical share is the first one on the list.
  • the CSC can add the directories representing DFS links to the list.
  • the directory on the list closest to the object gets transitioned offline. If only the logical share is on the list (e.g., the non-DFS case), the share itself is transitioned offline.
  • the CSC keeps the connection state, version number, cache ID, etc., on each list item.
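Picking the list entry "closest to the object" amounts to a longest-prefix match of the object's logical path against the transition list. A hedged sketch, with illustrative UNC prefixes and an array standing in for the driver's list:

```c
#include <string.h>

/* Given a transition list ordered from the logical share root outward
 * (entry 0 is always the logical share itself), return the index of the
 * entry closest to the object, i.e., the longest prefix of its path. */
int csc_pick_transition_entry(const char **prefixes, int n, const char *path)
{
    int best = 0;          /* fall back to the logical share */
    size_t best_len = 0;
    for (int i = 0; i < n; i++) {
        size_t len = strlen(prefixes[i]);
        if (len > best_len && strncmp(path, prefixes[i], len) == 0) {
            best = i;
            best_len = len;
        }
    }
    return best;
}
```

In the DFS case the deeper entry (a DFS link) wins, so only the portion of the namespace hosted on the disconnected physical share goes offline.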
  • the online-to-offline transition continues to work as seamlessly as in previous versions of the CSC.
  • the transition can be triggered by the network related error during I/O operation or by the transport disconnection indication.
  • the application handle remains valid and continues to work offline.
  • the CSC simply marks the directory (or share) offline, increases the version number, and references the NetRoot so that the connection is maintained until it is transitioned back online.
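The bookkeeping in this bullet can be sketched as below; the csc_dir_state structure and field names are hypothetical, standing in for the per-directory state on the transition list:

```c
/* Hypothetical per-directory (or per-share) transition state. */
typedef struct {
    int      online;    /* connection state of this directory */
    unsigned version;   /* bumped on every transition */
    int      refcount;  /* NetRoot references held */
} csc_dir_state;

/* Offline transition: mark offline, bump the version, and reference the
 * NetRoot so the connection object survives until we go back online. */
void csc_transition_offline(csc_dir_state *d)
{
    d->online = 0;
    d->version++;
    d->refcount++;
}

/* Online transition: reset the state and bump the version; existing
 * handles remain offline until they are backpatched individually. */
void csc_transition_online(csc_dir_state *d)
{
    d->online = 1;
    d->version++;
    d->refcount--;
}
```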
  • the CSC is capable of transitioning back online at the share and directory level.
  • the CSC does not invalidate application handles while transitioning the connection to online. Rather, it eliminates the requirement that the user initiate the transition and then close all applications. This removes a main blocker for the CSC to work in a TS/FUS environment.
  • the subject CSC defers the merge and synchronization process until after transitioning online so that the transition becomes fast. With both improvements, transitioning online becomes as painless as transitioning offline.
  • transitioning to online is initiated by a CSC agent as the result of discovering that a path has become reachable.
  • the CSC agent periodically scans the paths that are offline.
  • a network arrival event can also trigger CSC agent to transition the paths.
  • when the CSC agent detects that a path is reachable, it sends an IOCTL (I/O control) to the CSC driver to initiate an online transition on that path.
  • the CSC driver simply resets the state of the directory on the transition list and increases the version number. All existing handles remain offline until each handle is backpatched individually.
  • Per file offlineness can be accomplished in part by having a flag on SrvOpen indicating the offline state independent of the network connection state.
  • the CSC driver starts to backpatch the handles. It walks through the list of the outstanding Fcbs and backpatches the handles for directories first based at least in part on the following reasons:
  • the CSC completes the pending directory change notification on these handles so that the application can send a new directory enumeration to see the online view.
  • the CSC can merge the results from the server with the results from the cache, such as adding, removing, or modifying entries in the enumeration buffer depending on the cached files.
  • the namespace as seen by the user is a union of the namespace as it exists on the server and as it exists in the offline store. This ensures that the applications, and hence the user, retain a consistent view of the files (such as file sizes and time stamps) after transitioning online automatically, even when files have been modified locally but the changes have not yet been pushed out.
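The union view can be illustrated by merging two sorted enumeration results. The helper below is a sketch of the set-union idea only, not the driver's actual merge code:

```c
#include <stddef.h>
#include <string.h>

/* Counts the distinct names across two NULL-terminated, sorted arrays of
 * names: one from the server enumeration, one from the offline store.
 * An entry is visible to the user if it appears in either source. */
size_t csc_union_count(const char **server, const char **local)
{
    size_t n = 0, i = 0, j = 0;
    while (server[i] && local[j]) {
        int c = strcmp(server[i], local[j]);
        if (c == 0)      { i++; j++; }  /* same name on both sides: one entry */
        else if (c < 0)  { i++; }       /* server-only entry */
        else             { j++; }       /* locally cached/created entry */
        n++;
    }
    while (server[i++]) n++;            /* remaining server-only names */
    while (local[j++])  n++;            /* remaining local-only names */
    return n;
}
```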
  • FIGS. 4-7 illustrate how the contents of the cache and the associated network files are viewed by the user on the client computer by way of a user interface.
  • Each diagram contains three panes.
  • the right-most pane depicts the contents of a network file system
  • the middle pane depicts the contents of a local CSC cache
  • the left-most pane depicts what the user “sees” when viewing the remote namespace through an application of an operating system.
  • the scenarios are represented by a fictitious file system tree composed of one root directory and two subdirectories containing three files each.
  • FIG. 4 demonstrates what a user (end-user) would see when connected to the network with a partially cached namespace.
  • the user view includes the partially cached namespace (e.g., \\bedrock\flintstone\wilma-fred-pebbles) as well as subdirectory rubble with files betty, barney, and bambam, which comes from the network version of the root directory bedrock. Since subdirectory rubble and its files are not cached in the user's local cache, they are no longer viewable by the user when disconnected from the network. This is depicted in the image shown in FIG. 5.
  • the client computer is again offline or disconnected from the network at least temporarily.
  • the cache comprises similar copies of the root directory bedrock.
  • the cache has been updated with the modified portions of sub-directory rubble (see cross-hatched boxes compared to solid white boxes in cache and user views).
  • the network comprises an apparently different version or copy of the bedrock root directory as a whole. This indicates that modifications to some of the files can be stored in the local cache while offline and viewed by the user during the disconnection period.
  • the network version is not updated with the modified version during the offline period.
  • FIGS. 8-10 illustrate schematic diagrams representing a sequence of points in time (e.g., diagrams 800, 900, and 1000) where a user is working on a cached document (FIG. 8), the document is synchronized (FIG. 9), and the user continues to work on the document (FIG. 10).
  • the primary point to observe is that, for cached content, the user is always working from the local cache, not directly from the associated server. A network connection is required only when that content must be synchronized with the associated server(s); otherwise, the connection need not be maintained, thereby reducing bandwidth consumption and network traffic.
  • the first table (Table 1) illustrates 21 different “conditions” that a pair of files (server copy and cached copy) may exist in at the time of synchronization. Some may find the visual nature of this illustration helpful to understand and “visualize” the various conditions.
  • Table 2 describes the behavior of the system for each of the 21 scenarios: first when the file or directory is not open, and second when it is open.

    TABLE 2: Offline -> Online transition behavior

    # | Occurred while offline | File/dir not open | File/dir open
    0 | No changes | None | Handle kept open to cached file.
    1 | File created on client. No server copy exists. | Copy file to server after online. | Handle kept open to cached file.
    2 | Directory created on client. No server copy exists. | Create the directory on server after online. | Handle kept open to cached directory.
    3 | File sparse on client. | Copy file from server after online. | Not available.
    4 | File renamed on server. | Rename cached file after online. | Handle kept open to cached file. File is renamed after close.
    5 | Directory renamed on server.
  • the CSC comprises an offline store which has two on-disk structures: the file system hierarchy and priority queue.
  • the hierarchy is used to keep track of the remote entities that have been cached and the namespaces thereof.
  • the priority queue is used for iterating over the entire offline store in an MRU or an LRU fashion.
  • the design of the offline store may leverage as much of the local file system as possible in order to simplify the recovery logic in cases of a crash or data corruption.
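A toy version of the priority queue's behavior: accessing an item moves it to the MRU end, so iterating the queue from either end yields MRU or LRU order. The fixed-capacity in-memory array is purely illustrative; the real queue is an on-disk structure:

```c
#include <stddef.h>

#define QMAX 16

/* Toy priority queue: items[0] is the LRU end, items[count-1] the MRU end. */
typedef struct {
    int    items[QMAX];
    size_t count;
} csc_queue;

/* Record an access: insert the item if new, then move it to the MRU end. */
void csc_queue_touch(csc_queue *q, int id)
{
    size_t i;
    for (i = 0; i < q->count; i++)
        if (q->items[i] == id)
            break;
    if (i == q->count && q->count < QMAX)
        q->count++;                    /* newly cached item */
    for (; i + 1 < q->count; i++)      /* slide the tail down one slot */
        q->items[i] = q->items[i + 1];
    q->items[q->count - 1] = id;       /* place at the MRU end */
}
```

Iterating items[] forward walks the store LRU-first (useful for eviction); iterating backward walks it MRU-first.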
  • APIs can be employed. Below are exemplary APIs that are part of the user-mode components of CSC. The main goal of these APIs is to allow an operating system to manage the user experience in terms of online/offline states, synchronization, viewing the offline store and cache cleanup operations.
  • Parameters: None.
    Return Value: The function returns TRUE if successful; FALSE is returned if the function fails. GetLastError() can be called to get extended information about the error.
  • CSCPinFile
    BOOL CSCPinFile (
        IN LPTSTR Name,                      // Name of the item
        IN DWORD dwHintFlags,                // Flags to be ORed for pinning, see FLAG_CSC_PIN_XXX
        OUT LPDWORD lpdwStatus,              // Status of the item
        OUT LPDWORD lpdwResultingPinCount,   // Pin count for this file
        OUT LPDWORD lpdwResultingHintFlags
    );
  • This API allows an application to insert a file/directory in the Client-Side-Cache. If this API returns TRUE then the file is resident in the cache. If any of the pin flags are specified, the API takes the appropriate pinning action.
  • the function returns TRUE if successful; FALSE is returned if the function fails.
  • GetLastError can be called to get extended information about the error.
  • CSCUnPinFile
    BOOL CSCUnPinFile (
        IN LPTSTR Name,                      // Name of the file or directory
        IN DWORD dwHintFlagsMask,            // Bits to be removed from the entry
        OUT LPDWORD lpdwStatus,              // Status of the item
        OUT LPDWORD lpdwResultingPinCount,   // Pin count for this file
        OUT LPDWORD lpdwResultingHintFlags
    );
  • This API allows the caller to unpin a file or directory from the client side persistent cache.
  • the function returns TRUE if successful.
  • the status bits indicate more information about the item in the cache. FALSE is returned if the function fails.
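The pin-count semantics implied by these two APIs can be sketched as follows. The structure and helpers are hypothetical, and assume (as the lpdwResultingPinCount outputs suggest) that pinning increments a per-file count, unpinning decrements it, and only a file at pin count zero is eligible for eviction:

```c
/* Hypothetical per-file pin state mirroring the API's output parameters. */
typedef struct {
    unsigned pin_count;   /* cf. lpdwResultingPinCount */
    unsigned hint_flags;  /* cf. lpdwResultingHintFlags */
} pin_state;

/* Pin: bump the count and OR in the requested hint flags. */
void csc_pin(pin_state *p, unsigned flags)
{
    p->pin_count++;
    p->hint_flags |= flags;
}

/* Unpin: drop the count (never below zero) and clear the masked bits. */
void csc_unpin(pin_state *p, unsigned mask)
{
    if (p->pin_count > 0)
        p->pin_count--;
    p->hint_flags &= ~mask;
}

/* A file may be evicted from the cache only when no pins remain. */
int csc_evictable(const pin_state *p)
{
    return p->pin_count == 0;
}
```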
  • CSCFindFirstCachedFile
    HANDLE CSCFindFirstCachedFile (
        LPCTSTR Name,
        OUT LPWIN32_FIND_DATA lpFindFileData,
        OUT LPDWORD lpdwStatus,
        OUT LPDWORD lpdwPinCount,
        OUT LPDWORD lpdwHintFlags,
        OUT FILETIME *lpftOrgTime
    );
  • This API allows the caller to enumerate files in the client side cache.
  • lpftOrgTime The timestamp of the original file on the server. This value makes sense only when the file/directory is a copy of a file on a server. It does not mean anything if the file/directory was created while offline, in which case the status bit FLAG_CSC_LOCALLY_CREATED is set.
  • CSCFindNextCachedFile
    BOOL CSCFindNextCachedFile (
        HANDLE hCSCFindHandle,
        LPWIN32_FIND_DATA lpFindFileData,
        OUT LPDWORD lpdwStatus,
        OUT LPDWORD lpdwPinCount,
        OUT LPDWORD lpdwHintFlags,
        OUT FILETIME *lpftOrgTime
    );
  • This function continues a cache file search from a previous call to the CSCFindFirstCachedFile function.
  • lpftOrgTime The timestamp of the original file on the server. This value makes sense only when the file/directory is a copy of a file on a server. It does not mean anything if the file/directory was created while offline, in which case the status bit FLAG_CSC_LOCALLY_CREATED is set.
  • the CSCFindClose function closes the specified cache search handle.
  • the CSCFindFirstCachedFile and CSCFindNextCachedFile functions use the search handle to locate cached files with names that match the given name.
  • hCSCFindHandle identifies the search handle. This handle must have been previously opened by the CSCFindFirstCachedFile function.
  • CSCFindFirstCachedFileForSid
    HANDLE CSCFindFirstCachedFileForSid (
        LPCTSTR Name,
        PSID pSid,
        OUT LPWIN32_FIND_DATA lpFindFileData,
        OUT LPDWORD lpdwStatus,
        OUT LPDWORD lpdwPinCount,
        OUT LPDWORD lpdwHintFlags,
        OUT FILETIME *lpftOrgTime
    );
  • This API allows the caller to enumerate files in the client side cache for a particular principal, which is the only difference between this API and CSCFindFirstCachedFile.
  • the handle returned by this API can be used by CSCFindNextCachedFile and CSCFindClose APIs.
  • lpftOrgTime The timestamp of the original file on the server. This value makes sense only when the file/directory is a copy of a file on a server. It does not mean anything if the file/directory was created while offline, in which case the status bit FLAG_CSC_LOCALLY_CREATED is set.
  • CSCSetMaxSpace
    BOOL CSCSetMaxSpace (
        DWORD nFileSizeHigh,
        DWORD nFileSizeLow
    );
    Routine Description:
  • This routine allows the caller to set the maximum persistent cache size for files that are not pinned. It is used by the UI that allows the user to set the cache size. The maximum limit in Win2K/Windows XP is 2 GB.
  • CSCDeleteCachedFile
    BOOL CSCDeleteCachedFile (
        IN LPTSTR Name   // Name of the cached file
    );
  • This API deletes the file from the client side cache.
  • the function returns TRUE if successful; FALSE is returned on error, and GetLastError() can be called to get extended information about the error.
  • Example error cases: a) if a directory being deleted has descendants, the call will fail; b) if a file is in use, the call will fail; c) if the share on which the item exists is being merged, the call will fail.
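The three documented failure cases can be expressed as a simple guard. The csc_item structure is hypothetical, standing in for whatever state the cache tracks per item:

```c
#include <stdbool.h>

/* Hypothetical per-item state relevant to CSCDeleteCachedFile. */
typedef struct {
    bool is_directory;
    int  descendants;   /* number of cached children (directories only) */
    bool in_use;        /* an open handle exists on the item */
    bool share_merging; /* the containing share is currently being merged */
} csc_item;

/* Returns true only when none of the documented error cases applies. */
bool csc_can_delete(const csc_item *it)
{
    if (it->is_directory && it->descendants > 0)
        return false;   /* case a: directory with descendants */
    if (it->in_use)
        return false;   /* case b: item is in use */
    if (it->share_merging)
        return false;   /* case c: share is being merged */
    return true;
}
```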
  • CSCBeginSynchronization
    BOOL CSCBeginSynchronizationW (
        IN LPCTSTR lpszShareName,
        LPDWORD lpdwSpeed,
        LPDWORD lpdwContext
    );
  • This API sets up a synchronization context to begin the sync operation. Thus, if user input is needed to synchronize a share, by calling this API the input is obtained only once and is reused to synchronize in both the inward and outward directions.
  • This API cleans up the context obtained on a successful call to CSCBeginSynchronization API.
  • the API cleans up any network connections established, possibly with user supplied credentials, during the CSCBeginSynchronization API.
  • This API allows the caller to initiate a merge of a share that may have been modified offline.
  • the API maps a drive to the share that needs merging and uses that drive to do the merge.
  • the mapped drive is reported in the callback at the beginning of the merge in the cFileName field of the lpFind32 parameter of the callback function.
  • the caller of this API must a) use the supplied drive letter to do any operations on the net, and b) do all the operations in the same thread that issues this API call.
  • This API allows the caller to copy the data for the replica of a remote item out of the CSC offline store into a temporary local file.
  • This API returns the current space consumption by unpinned data in the csc offline store.
  • This API frees up the space occupied by unpinned files in the CSC offline store by deleting them.
  • the passed in parameters are used as a guide to how much space needs to be freed. Note that the API can delete local replicas only if they are not in use at the present time.
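The space-freeing walk can be sketched as below. The types and names are illustrative, and deletion is simulated by zeroing the replica size; the real API deletes the on-disk replica:

```c
#include <stddef.h>

/* Hypothetical record for one locally cached replica. */
typedef struct {
    long long size;    /* bytes occupied in the offline store */
    int       pinned;  /* pinned replicas are never reclaimed */
    int       in_use;  /* replicas with open handles are skipped */
} csc_replica;

/* Deletes (here: zeroes) unpinned, unused replicas until roughly the
 * requested number of bytes has been freed; returns the bytes freed.
 * The goal is a guide, so the last deletion may overshoot it. */
long long csc_free_space(csc_replica *r, size_t n, long long goal)
{
    long long freed = 0;
    for (size_t i = 0; i < n && freed < goal; i++) {
        if (!r[i].pinned && !r[i].in_use) {
            freed += r[i].size;
            r[i].size = 0;   /* simulate deleting the replica */
        }
    }
    return freed;
}
```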
  • This API allows the caller to enumerate a share or the entire CSC offline store to obtain salient statistics. It calls the callback function with CSC_REASON_BEGIN before beginning the enumeration, for each item it calls the callback with CSC_REASON_MORE_DATA and at the end of the callback, it calls it with CSC_REASON_END. For details of parameters with which the callback is made, see below.
  • This API does a rename in the offline store.
  • the rename operation can be used to move a file or a directory tree from one place in the hierarchy to another. Its principal use at the present time is for folder redirection of the MyDocuments share. If a directory is being moved and such a directory exists at the destination, the API tries to merge the two trees. If a destination file already exists and the fReplaceifExists parameter is TRUE, an attempt is made to delete the destination file and put the source file in its place; otherwise, an error is returned.
  • the offline store can have the following status set on it based on the four encryption states:
  • the offline store status is marked to the appropriate XX_PARTIAL_XX state. At the end, if all goes well, it is transitioned to the final state.
  • LPCSCPROC
    DWORD (*LPCSCPROC) (
        LPTSTR lpszName,
        DWORD dwStatus,
        DWORD dwHintFlags,
        DWORD dwPinCount,
        WIN32_FIND_DATA *lpFind32,
        DWORD dwReason,
        DWORD dwParam1,
        DWORD dwParam2,
        DWORD dwContext
    );
  • NetShareSetInfo This API is used to set the CSC attributes of a server share.
  • NET_API_STATUS NetShareSetInfo (
        LPTSTR servername,
        LPTSTR sharename,
        DWORD level,
        LPBYTE buf,
        LPDWORD parm_err
    );
    Parameters:
  • NetShareGetInfo This API is used to get the CSC attributes of a server share.
  • NET_API_STATUS NetShareGetInfo (
        LPTSTR servername,
        LPTSTR sharename,
        DWORD level,
        LPBYTE *bufptr
    );
    Parameters:
  • SHARE_INFO_1007
    typedef struct _SHARE_INFO_1007 {
        DWORD shi1007_flags;
        LPTSTR shi1007_AlternateDirectoryName;
    } SHARE_INFO_1007, *PSHARE_INFO_1007, *LPSHARE_INFO_1007;
    shi1007_flags:
  • FIGS. 11-14 illustrate flow diagrams of exemplary methodologies that facilitate supporting connection state transitions at the directory level (e.g., DFS link) and partial namespace offline in accordance with an aspect of the present invention.
  • FIG. 11 depicts a process 1100 that facilitates maintaining access to remote files (e.g., server-based) during any period of disconnect from the server or network.
  • a client can be connected to a network or remote server(s) at 1110 .
  • one or more file objects, directories, and/or any other data files can be selectively cached to the client's local database or data store (e.g., memory) at 1120 .
  • the selective caching can be based at least in part upon user preferences. For example, file objects that have been accessed while online can be cached to the client's hard drive. Alternatively or in addition, the client can infer which file objects are more likely to be desired for caching based on the user's current online activity.
  • Such file objects can include those files that have been accessed as well as other files that are related thereto. This can be determined in part by file location (e.g., related directory), metadata associated with the respective files, past client behavior (e.g., files were accessed at a similar time in the past), and the like. Selective caching can also be facilitated by learning and/or employing training systems and techniques by the client or end-user.
  • properties associated with the respective file objects can also be cached to facilitate security measures, for example.
  • directory rights can be cached and physical share cache configurations at the DFS link are honored in accordance with an aspect of the present invention.
  • Cache configurations include manual caching and auto caching. For instance, if the physical share is set to no caching, files under the part of the logical namespace hosted on that physical share will not be cached.
  • the client can be disconnected from the server either intentionally or unintentionally.
  • the client, or the user can continue to work on the file as illustrated in FIG. 12 , infra, at 1210 .
  • the user may not even be aware that the connection to the network has been lost because file and/or directory access has not been interrupted. That is, despite the state transition from online to offline, the client can still perform computer operations with respect to remote-based files and directories as if it were connected to the remote server.
  • any modifications or changes to the document can be saved or stored in the local cache on the client.
  • the client version of the file can be pushed to the server if no conflict exists between the client's version and the server version.
  • the caching component satisfies the request with only the local handle and subsequent file I/O operations are performed on the local cache.
  • This feature facilitates deferred synchronization of files in the background after the path is transitioned online, since the user continues to see the file that he/she has been working on during the offline period. Therefore, the particular file operates in the offline state while the path itself is online.
  • the request can be sent to the server and handled in a manner in accordance with the present invention. It should be appreciated that the client maintains a persistent cache, which must be flushed out to the server before the handle closes. This ensures that the existing file semantics continue to work.
  • FIG. 13 illustrates a method that facilitates bandwidth reduction and/or conservation in accordance with an aspect of the present invention.
  • a request can be submitted for a file object, for example.
  • the client cache is searched. If the file object is found in the client cache, the client cache can satisfy the request; thus the server is not accessed and network traffic is mitigated. When a connection is slow, this method also conserves the available bandwidth for instances where only the server can fulfill the request(s).
  • the client version overrides the server version in instances of conflict and availability. That is, the client version can be used to satisfy requests even if the server has the same copy unbeknownst to the user or client since any file accessed from the client cache will appear as if it came from the server, regardless of the connection state.
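The cache-first lookup of this method can be sketched as a simple table scan; a miss (NULL) is the case where the request must go to the server. The cache_entry type is illustrative:

```c
#include <stddef.h>
#include <string.h>

/* Toy stand-in for a cache index entry. */
typedef struct {
    const char *name;
    const char *data;
} cache_entry;

/* Returns the cached data on a hit (served locally, no network traffic),
 * or NULL on a miss, in which case the caller forwards to the server. */
const char *cache_lookup(const cache_entry *cache, size_t n, const char *name)
{
    for (size_t i = 0; i < n; i++)
        if (strcmp(cache[i].name, name) == 0)
            return cache[i].data;
    return NULL;
}
```

Because a hit is served from the cache regardless of the connection state, the client copy is what the user sees even when the server also has a copy, which is the override behavior described above.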
  • the API 1400 involves receiving the create request from an I/O manager at 1410 .
  • a pre-process handler of a CSC surrogate provider is called.
  • the CSC surrogate provider finds or creates a logical namespace structure if part of the logical namespace on which a target of the create request resides is already offline.
  • the create request is passed to a DFS surrogate provider to translate the logical path to a physical server share.
  • the create request is passed to a redirector component (e.g., RDBSS) to allow a particular redirector (e.g., SMB, WebDAV, NFS) to claim the physical path.
  • a post-process handler of the CSC surrogate provider can be called again to express one of either no interest or interest to cache a file object requested by the create request.
  • the API 1500 shown in FIG. 15 involves receiving the create request from an I/O manager at 1510 and calling a pre-process handler of a CSC surrogate provider to handle the request by mapping the logical path to local cache data since redirectors are unavailable to claim the path at 1520 .
  • the CSC surrogate provider handles the request since the DFS component and redirectors are not available to the CSC when offline or disconnected from the remote location.
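The create-path ordering in the preceding bullets can be modeled as a stage sequence. The enum values are illustrative labels for the components named above (CSC pre-process, DFS translation, redirector claim, CSC post-process), not real driver identifiers:

```c
/* Illustrative labels for the components a create request visits. */
typedef enum {
    STAGE_CSC_PRE,           /* CSC pre-process handler */
    STAGE_DFS_TRANSLATE,     /* DFS maps logical path -> physical share */
    STAGE_REDIRECTOR_CLAIM,  /* SMB/WebDAV/NFS claims the physical path */
    STAGE_CSC_POST,          /* CSC post-process: caching interest */
    STAGE_COUNT
} create_stage;

/* Simulates the create path; writes the visit order into out[] (which
 * must hold STAGE_COUNT entries) and returns the number of stages run. */
int csc_create_path(int online, create_stage *out)
{
    int n = 0;
    out[n++] = STAGE_CSC_PRE;
    if (online) {
        out[n++] = STAGE_DFS_TRANSLATE;
        out[n++] = STAGE_REDIRECTOR_CLAIM;
        out[n++] = STAGE_CSC_POST;
    }
    /* Offline: the CSC pre-process maps the logical path to the local
     * cache and completes the request itself, since neither DFS nor any
     * redirector is available. */
    return n;
}
```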
  • FIG. 16 and the following discussion are intended to provide a brief, general description of a suitable operating environment 1610 in which various aspects of the present invention may be implemented. While the invention is described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices, those skilled in the art will recognize that the invention can also be implemented in combination with other program modules and/or as a combination of hardware and software.
  • program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular data types.
  • the operating environment 1610 is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention.
  • Other well known computer systems, environments, and/or configurations that may be suitable for use with the invention include but are not limited to, personal computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include the above systems or devices, and the like.
  • an exemplary environment 1610 for implementing various aspects of the invention includes a computer 1612 .
  • the computer 1612 includes a processing unit 1614 , a system memory 1616 , and a system bus 1618 .
  • the system bus 1618 couples the system components including, but not limited to, the system memory 1616 to the processing unit 1614 .
  • the processing unit 1614 can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as the processing unit 1614 .
  • the system bus 1618 can be any of several types of bus structure(s), including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any of a variety of available bus architectures including, but not limited to, 8-bit bus, Industry Standard Architecture (ISA), Micro-Channel Architecture (MSA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Universal Serial Bus (USB), Advanced Graphics Port (AGP), Personal Computer Memory Card International Association bus (PCMCIA), and Small Computer Systems Interface (SCSI).
  • the system memory 1616 includes volatile memory 1620 and nonvolatile memory 1622 .
  • the basic input/output system (BIOS) containing the basic routines to transfer information between elements within the computer 1612 , such as during start-up, is stored in nonvolatile memory 1622 .
  • nonvolatile memory 1622 can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory.
  • Volatile memory 1620 includes random access memory (RAM), which acts as external cache memory.
  • RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
  • Disk storage 1624 includes, but is not limited to, devices like a magnetic disk drive, floppy disk drive, tape drive, Jaz drive, Zip drive, LS-100 drive, flash memory card, or memory stick.
  • disk storage 1624 can include storage media separately or in combination with other storage media including, but not limited to, an optical disk drive such as a compact disk ROM device (CD-ROM), CD recordable drive (CD-R Drive), CD rewritable drive (CD-RW Drive) or a digital versatile disk ROM drive (DVD-ROM).
  • a removable or non-removable interface is typically used such as interface 1626 .
  • FIG. 16 describes software that acts as an intermediary between users and the basic computer resources described in suitable operating environment 1610 .
  • Such software includes an operating system 1628 .
  • Operating system 1628 which can be stored on disk storage 1624 , acts to control and allocate resources of the computer system 1612 .
  • System applications 1630 take advantage of the management of resources by operating system 1628 through program modules 1632 and program data 1634 stored either in system memory 1616 or on disk storage 1624 . It is to be appreciated that the present invention can be implemented with various operating systems or combinations of operating systems.
  • Input devices 1636 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like. These and other input devices connect to the processing unit 1614 through the system bus 1618 via interface port(s) 1638 .
  • Interface port(s) 1638 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB).
  • Output device(s) 1640 use some of the same type of ports as input device(s) 1636 .
  • a USB port may be used to provide input to computer 1612 and to output information from computer 1612 to an output device 1640 .
  • Output adapter 1642 is provided to illustrate that there are some output devices 1640 like monitors, speakers, and printers among other output devices 1640 that require special adapters.
  • the output adapters 1642 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 1640 and the system bus 1618 . It should be noted that other devices and/or systems of devices provide both input and output capabilities such as remote computer(s) 1644 .
  • Computer 1612 can operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 1644 .
  • The remote computer(s) 1644 can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor-based appliance, a peer device or other common network node and the like, and typically includes many or all of the elements described relative to computer 1612 .
  • For purposes of brevity, only a memory storage device 1646 is illustrated with remote computer(s) 1644 .
  • Remote computer(s) 1644 is logically connected to computer 1612 through a network interface 1648 and then physically connected via communication connection 1650 .
  • Network interface 1648 encompasses communication networks such as local-area networks (LAN) and wide-area networks (WAN).
  • LAN technologies include Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), Ethernet/IEEE 802.3, Token Ring/IEEE 802.5 and the like.
  • WAN technologies include, but are not limited to, point-to-point links, circuit switching networks like Integrated Services Digital Networks (ISDN) and variations thereon, packet switching networks, and Digital Subscriber Lines (DSL).
  • Communication connection(s) 1650 refers to the hardware/software employed to connect the network interface 1648 to the bus 1618 . While communication connection 1650 is shown for illustrative clarity inside computer 1612 , it can also be external to computer 1612 .
  • The hardware/software necessary for connection to the network interface 1648 includes, for exemplary purposes only, internal and external technologies such as modems (including regular telephone-grade modems, cable modems and DSL modems), ISDN adapters, and Ethernet cards.

Abstract

The present invention provides a novel client side caching (CSC) infrastructure that supports transition states at the directory level to facilitate a seamless operation across connectivity states between client and remote server. More specifically, persistent caching is performed to safeguard the user (e.g., client) and/or the client applications across connectivity interruptions and/or bandwidth changes. This is accomplished in part by caching to a client data store the desirable file(s) together with the appropriate file access parameters. Moreover, the client maintains access to cached files during periods of disconnect. Furthermore, portions of a path can be offline while other portions upstream can remain online. CSC operates on the logical path and cooperates with DFS, which operates on the physical path, to keep track of files cached and accessed and of changes in the directories. In addition, truth on the client is facilitated whether or not a conflict of file copies exists.

Description

    TECHNICAL FIELD
  • The present invention relates generally to client side caching, and more particularly to systems and methods that facilitate persistent caching to shield a user and client applications across connectivity interruptions and/or bandwidth changes such that truth on the client is supported.
  • BACKGROUND OF THE INVENTION
  • Computing and networking technologies have transformed many important aspects of everyday life. Computers have become a household staple instead of a luxury, educational tool and/or entertainment center, and provide users with a tool to manage and forecast finances, control household operations like heating, cooling, lighting and security, and store records and images in a permanent and reliable medium. Networking technologies like the Internet provide users with virtually unlimited access to remote systems, information and associated applications.
  • Traditional business practices are evolving with computing and networking technologies. Typically, a user interfaces with a client(s) application (e.g., word processing documents, files, etc.) to interact with a network or remote server(s) that stores information in a database that is accessible by the client application. Databases provide a persistent, durable store for data that can be shared across multiple users and applications. Client applications generally retrieve data from the database through a query(s), which returns results containing the subset of data that is interesting to the client application. The client application then consumes, displays, transforms, stores, or acts on those results, and may modify or otherwise manipulate the data retrieved.
  • Unfortunately, data is typically inaccessible by the client application from the remote server when the remote server is offline or otherwise disconnected. In particular, every remote name in SMB (Server Message Block) begins with a prefix that identifies two elements: a server and a share, in the format of a path beginning with “\\server\share\ . . . ”. The server is the physical server (e.g., name of machine) to which the client is talking. The share refers to a name on the machine which can be found on the machine's hard drive. Conventionally, the server and the share were created on the same machine or remote server. Therefore, if any object along the \\server\share\ . . . path was disconnected and/or offline, then the server would be marked as offline as well. Multiple shares can be located on one server; thus when one share, for example, becomes disconnected from the network, the entire server goes offline as well.
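The two-element prefix structure described above can be sketched as follows. This is an illustrative Python fragment, not part of the patent; real SMB name resolution is considerably more involved.

```python
def parse_unc(path: str):
    """Split a UNC path of the form \\\\server\\share\\... into its parts.

    Illustrative only: returns (server, share, remainder).
    """
    if not path.startswith("\\\\"):
        raise ValueError("not a UNC path")
    parts = path[2:].split("\\")
    if len(parts) < 2:
        raise ValueError("a UNC path must name both a server and a share")
    # The first component is the physical server; the second is the share
    # hosted on that server; everything after is a path within the share.
    return parts[0], parts[1], "\\".join(parts[2:])
```

Under the conventional model the section criticizes, if `parse_unc` yields an unreachable server, every share under that server is treated as offline at once.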
  • As can be seen, the operation of client applications has been traditionally dependent upon the connection state of the remote server. In some cases, however, clients may have access to some data while disconnected from the remote server. Unfortunately, if some of the data has been modified by the client, the modified client version is usually not visible to the client and/or user when the server returns online. This is commonly referred to as “truth on the server” because the server version of the data is kept and/or maintained when a conflict between the client and server data is detected. Inevitably, this results in incoherent data by client applications as well as increased server and/or network traffic in addition to the myriad of other inconveniences and problems for most users.
  • SUMMARY OF THE INVENTION
  • The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. It is not intended to identify key/critical elements of the invention or to delineate the scope of the invention. Its sole purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later.
  • The present invention provides a novel client side caching (CSC) infrastructure which facilitates a seamless operation across connectivity states (e.g., online-offline) between client and remote server. More specifically, a persistent caching architecture is employed to safeguard the user (e.g., client) and/or the client applications across connectivity interruptions and/or bandwidth changes. This is accomplished in part by caching the desirable file(s) together with the appropriate protocol information (e.g., SMB and Webdav (Web-based Distributed Authoring and Versioning)) to a local (e.g., client) data store. Such information includes object access rights and share access rights which correspond to the file or group of files being cached.
  • The files to be cached to the local data store (on the client) can be determined in any number of ways according to the preferences of the user. In a first instance, caching can be automatic. In a second instance, caching can be manual. For example, substantially all files accessed at least once by a client application can be cached. Conversely, only certain files marked by the user and/or client application for caching can be cached. In addition, the caching of files accessed by the user can be performed at prescribed time intervals or even at random depending on such user preferences.
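The automatic-versus-manual caching decision described above can be modeled with a small policy object. This is a hedged sketch in Python with illustrative names; the patent does not prescribe an implementation.

```python
class CachePolicy:
    """Decide whether a file accessed by the client should be cached locally.

    AUTOMATIC caches every file accessed at least once; MANUAL caches only
    files explicitly marked ("pinned") by the user or client application.
    """
    AUTOMATIC = "automatic"
    MANUAL = "manual"

    def __init__(self, mode, pinned=None):
        self.mode = mode
        self.pinned = set(pinned or [])

    def should_cache(self, path):
        if self.mode == self.AUTOMATIC:
            return True          # cache everything the client touches
        return path in self.pinned  # cache only user-marked files
```

A scheduled or randomized sweep, as the section also mentions, would simply call `should_cache` for each recently accessed path at the prescribed interval.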
  • Moreover, data requested when connected to a remote server can continue to be accessed, manipulated, and/or modified by the client while disconnected from the server. However, the files are presented to the client as if they reside on the remote physical server location. For instance, any particular file cached to the local hard drive in the prescribed manner maintains the same name whether the server is offline or online. Hence, it is not apparent to the user or client whether the file was retrieved from the local cache or from the server.
  • In light of security concerns, file access parameters including read/write capabilities can also be cached for offline use. Therefore, access to files can be granted or denied in a similar manner as when connected to the server. For example, imagine a user has access rights to a document located on the server. The file is cached to the user's local hard drive. Thus, when disconnected from the server, the user can still access that file from his/her local memory as long as the requisite access rights (e.g., object access rights and share access rights) accompany the respective file (e.g., cached with the file). However, if the corresponding access rights are not cached locally, then access may be denied.
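The offline access check just described can be sketched as below: an open succeeds only when the file was cached together with object and share access rights that cover the requested access. All names are illustrative, not the patent's actual structures.

```python
from dataclasses import dataclass

@dataclass
class CachedFile:
    data: bytes
    object_access: set   # rights granted on the file itself, e.g. {"read"}
    share_access: set    # rights granted on the hosting share

def open_offline(cache, path, user, wanted):
    """Grant an offline open only if both sets of cached rights cover it."""
    entry = cache.get((path, user))
    if entry is None:
        # Either the file or its access rights were never cached locally.
        raise PermissionError("access denied: not cached with rights")
    if not (wanted <= entry.object_access and wanted <= entry.share_access):
        raise PermissionError("access denied: insufficient cached rights")
    return entry.data
```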
  • According to another aspect of the invention, the user experience, whether offline or online, is substantially uniform across server types. In conventional networking infrastructure, the user may not know which type of network is serving up the files that he/she is accessing and specifically, the reasons why one server allows a particular feature while another server does not. In the present invention, achieving uniformity across server types is based at least in part upon the location of the CSC component. For example, client side caching can be located above all redirectors, independent of the type of network redirection being performed. As a result, the offline experience remains consistent and without change when switching between server types. By way of example, I/O requests can be sent to the CSC component before the DFS component to ensure that all relevant information (e.g., identifications of DFS links, corresponding physical shares, share access rights, etc.) is cached before the connection state changes from online to offline. The DFS component can only obtain referrals while online and the connection may be lost at any time.
  • According to yet another aspect, the present invention provides for truth on the client. This is accomplished in part by write back caching. Write back caching involves caching data on the client first and then pushing it back to the server at appropriate times. For example, any file modified or manipulated by the client while disconnected from the remote server can be stored to the client's memory and then uploaded to the server when the client regains its connection to the server. This can be particularly useful when a conflict in the data exists between the client copy and the server copy. User resolution may be needed to resolve the conflict in data; however, when reconnected to the server, the user continues to see its modified version of the file rather than the server's version.
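The write-back behavior above — cache the write locally first, push it to the server at an appropriate time — can be sketched as follows. This is a minimal illustration assuming a dictionary stands in for the server; it is not the actual CSC implementation.

```python
class WriteBackCache:
    """Truth-on-the-client sketch: writes land in the local cache first and
    dirty files are pushed back to the server when the connection returns.
    The client keeps seeing its own modified copy throughout."""

    def __init__(self):
        self.local = {}      # path -> data, the client's view of truth
        self.dirty = set()   # paths modified since the last push

    def write(self, path, data):
        self.local[path] = data
        self.dirty.add(path)

    def read(self, path):
        return self.local[path]   # always serve the client's copy

    def push_back(self, server):
        """Upload modified files once the server is reachable again."""
        for path in sorted(self.dirty):
            server[path] = self.local[path]
        self.dirty.clear()
```

Conflict resolution (when the server copy also changed) would hook into `push_back`; per the section above, the client copy remains what the user sees until the conflict is resolved.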
  • To the accomplishment of the foregoing and related ends, the invention comprises the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative aspects and implementations of the invention. These are indicative, however, of but a few of the various ways in which the principles of the invention may be employed. Other objects, advantages and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates a high level schematic block diagram of a remote file system in accordance with one aspect of the present invention.
  • FIG. 2 illustrates a block diagram of a remote file system in accordance with one aspect of the present invention.
  • FIG. 3 illustrates an exemplary data structure in accordance with one aspect of the present invention.
  • FIG. 4 illustrates an exemplary diagram of a user's view of an online, partial cached namespace in accordance with one aspect of the present invention.
  • FIG. 5 illustrates an exemplary diagram of a user's view of an offline, partial cached namespace in accordance with one aspect of the present invention.
  • FIG. 6 illustrates an exemplary diagram of a user's view of an offline, partial cached namespace with shadow instances in accordance with one aspect of the present invention.
  • FIG. 7 illustrates an exemplary diagram of a user's view of an online, server namespace change requiring synchronization between the client and the server in accordance with one aspect of the present invention.
  • FIG. 8 illustrates an exemplary diagram of truth on the client during normal CSC operations in accordance with one aspect of the present invention.
  • FIG. 9 illustrates an exemplary diagram of truth on the client during synchronization between client and server copies of a file object in accordance with one aspect of the present invention.
  • FIG. 10 illustrates an exemplary diagram of truth on the client as normal CSC operations have resumed in accordance with one aspect of the present invention.
  • FIG. 11 illustrates a flow diagram of an exemplary methodology that facilitates maintaining access to remote files (e.g., server-based) during any period of disconnect from a remote location in accordance with one aspect of the present invention.
  • FIG. 12 is a continuation of FIG. 11, in accordance with one aspect of the present invention.
  • FIG. 13 is a continuation of FIG. 11, in accordance with one aspect of the present invention.
  • FIG. 14 illustrates an exemplary API in accordance with one aspect of the present invention.
  • FIG. 15 illustrates an exemplary API in accordance with one aspect of the present invention.
  • FIG. 16 illustrates an exemplary operating system in accordance with one aspect of the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The present invention is now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It may be evident, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the present invention.
  • As used in this application, the term “component” is intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a computer component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. A “thread” is the entity within a process that the operating system kernel schedules for execution. As is well known in the art, each thread has an associated “context” which is the volatile data associated with the execution of the thread. A thread's context includes the contents of system registers and the virtual address space belonging to the thread's process. Thus, the actual data comprising a thread's context varies as it executes.
  • Furthermore, the term “inference” as used herein refers generally to the process of reasoning about or inferring states of the system, environment, and/or user from a set of observations as captured via events and/or data. Inference can be employed to identify a specific context or action, or can generate a probability distribution over states, for example. The inference can be probabilistic; that is, the computation of a probability distribution over states of interest based on a consideration of data and events. Inference can also refer to techniques employed for composing higher-level events from a set of events and/or data. Such inference results in the construction of new events or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources.
  • Accordingly, it is to be appreciated that various aspects of the subject invention can employ probabilistic-based and/or statistical-based classifiers in connection with making determinations and/or inferences in connection with the subject invention. For example, such classifiers can be employed in connection with utility-based analyses described herein. A support vector machine (SVM) classifier can be employed which generally operates by finding a dynamically changing hypersurface in the space of possible inputs. Other directed and undirected model/classification approaches, e.g., naive Bayes, Bayesian networks, decision trees, Hidden Markov Models (HMM), data fusion engines, neural networks, expert systems, fuzzy logic, or any suitable probabilistic classification models providing different patterns of independence, can be employed. Classification as used herein also is inclusive of statistical regression that is utilized to develop models of priority.
  • The present invention involves systems and methods that facilitate client side caching and truth on client persistent caching. Client side caching provides off-line access to files and/or other data when the network version of the file is otherwise unavailable due to a network outage or intentional disconnection. It also can increase server scalability while connected to the network by reducing file operations directed at remote servers. By the employment of the present invention, a client can access the cached copy of a file using the same file name and with the same namespace as when the client is connected to the network. Thus, the client may not even be aware that a temporary disconnection from the network (e.g., remote server(s)) is occurring since access to and/or modification of one or more files has not been interrupted.
  • The present invention can be used in conjunction with DFS (Distributed File System) shares or links. DFS links are based at least in part upon logical names which can be expressed in the format of . . . \\domain\name\ . . . , for example. However, logical names are not necessarily physical names that identify a server. Rather, DFS links can point to a physical server(s) or file(s). DFS links are structured to deal with SMB shares, NFS shares, as well as Webdav (or DAV) shares or any other remote process that an operating system can be pointed at by a DFS share or link. It should be understood that the logical name space can include multiple DFS links which are backed up by a physical share on a server which can be individually online or offline. Thus, the client side caching can keep track of any particular DFS link persistently so that it can transition a connection state at a proper logical directory. This effectively minimizes the scope of offlineness to the physical share.
  • Moreover, client side caching can support state transitions at the directory level on a logical path which represents a DFS link. Therefore, if a physical share is down, only part of the logical name space hosted on the physical share is offline—the rest of the logical name space remaining online. In a cascaded DFS scenario, part of the name space can be offline while the next or adjacent part of the name space can still be online. For example, in the following path, the portion from 4051 through x86gsf is offline while the remaining portions remain online.
      • s:\\msdev\release\lab01\4051\x86gsf\4073\file.
  • This is because 4051 may be a DFS link pointing to a physical share and x86gsf may be a file located on a portion of the share. Thus, it appears that share 4051 is offline and accordingly, any files listed that belong to that offline share will also appear to be offline. Conversely, 4073 may correspond to another DFS link or physical share that is online. Thus, despite being downstream from the offline link or share, any files belonging to other online physical shares remain online.
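The cascaded scenario above — each path component served by the nearest enclosing DFS link, so offlineness is scoped to one physical share — can be sketched as follows. This is an illustrative model, not the CSC data structures; the link table and share names are hypothetical.

```python
def serving_link(components, links):
    """Return the deepest DFS-link prefix covering the path; that link's
    physical share is the one that serves the final object."""
    for i in range(len(components), 0, -1):
        prefix = tuple(components[:i])
        if prefix in links:
            return prefix
    return None

def is_offline(path, links, share_online):
    """True only if the nearest enclosing DFS link maps to a share that is
    down. links: {logical-prefix tuple: share name};
    share_online: {share name: bool}."""
    components = path.strip("\\").split("\\")
    link = serving_link(components, links)
    return link is not None and not share_online[links[link]]
```

With a link at `...\4051` backed by a downed share and a deeper link at `...\4073` backed by a live one, the file under 4051 is offline while the file under 4073 stays online, matching the example above.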
  • The figures below further describe the many aspects of client side caching and the manner in which truth on the client can be achieved. It should be appreciated that the systems and methods described herein are merely representative and are not meant to limit the scope of the invention.
  • Referring now to FIG. 1, there is illustrated a high-level, schematic diagram of an exemplary remote file system 100 comprising client side caching (CSC) architecture for communication and interaction between one or more clients 110 and a network 120 or remote server(s) in accordance with an aspect of the present invention. In general, a client application makes a request (by way of input to the remote file system 100) using paths into a kernel portion of the remote file system 100. Such requests typically are directly communicated to a redirector component (not shown) such as SMB or Webdav or some other file system over the relevant network. A CSC component such as a client-side-caching mechanism is situated at about the middle of that path. The CSC component comprises a data store for offline retrieval of data which was previously cached. The previous caching may have taken place when the client was connected (online as depicted by 130) to the remote network 120 or server.
  • Thus, when a data request enters the kernel, a determination can be made as to whether the remote server is online. If it is not online such as depicted by 140, the CSC component can direct the file request to a local cache 150 on the client. If the file was previously cached by the user (client) and the user/client has the requisite access rights, then access to the particular file can be granted to the user (client).
  • However, when the system is online and a file request is made, the local cache on the client's side can determine whether it also has a copy of the file. If it does contain a copy, the CSC component can retrieve the data from the data store, thereby mitigating network traffic. To some extent, it may also be possible to request such information from other clients, provided that they are on the same or similar network. This configuration can be referred to as a distributed client cache system, whereby a plurality of clients can access each other's cache. As the technology continues to advance in this area, the distributed client cache system may become more efficient with respect to performance, speed, and bandwidth consumption.
  • In general, the CSC component operates on logical namespaces (e.g., names of files as users see them) and supports connection state transitions at the directory level on a logical path that is representative of a DFS link. The DFS link can point to a physical share and typically translates a logical path into its physical path. Logical namespaces can be backed up by multiple shares on different physical servers. By querying the DFS component, CSC can identify those directories on a logical path which are DFS links and store them in a list.
  • Hence, when a connection state changes (e.g., online to offline) due to a failure returned from a redirector or a network disconnect indication, for example, the CSC component will only transition the directory on the list that hosts the object. The rest of the logical name space is not affected. Therefore, when a file I/O request comes down, the CSC component can cross-reference the list to see if the path is offline. If it is, the request is processed offline. Otherwise, it will be sent to a redirector for further processing. The transition version of the directory where the path is forced offline can be tracked.
  • In addition, directory access rights as well as share access rights (if a DFS link) for the respective portions of the logical name space are also stamped on the directory cache entries. Thus, when a create request comes down while offline, the CSC component can check the file access and share access rights to determine whether to allow the request to succeed.
  • Referring now to FIG. 2, there is illustrated a block diagram of an exemplary remote file system 200 utilizing client side caching in accordance with an aspect of the present invention. Whenever a client/user application generates a request for the file system such as to gain access to a directory or file, an I/O Manager 210 initially can determine whether the desired path is a local or remote path. If it is a remote path, then the remote file system 200 as shown in FIG. 2 can be employed. The remote file system 200 comprises a Multiple UNC (Universal Naming Convention) Provider (MUP) 220, surrogate providers (e.g., CSC 230, DFS 240), and one or more redirectors 250 (e.g., SMB 252, NFS 254, and DAV (Webdav) 256).
  • One notable aspect in the system 200 is that the CSC mechanism 230 and the DFS 240 are at the same level as the MUP 220. Thus, CSC 230 can receive all or substantially all UNC and drive-letter-based I/O destined for a network or remote server. Because the CSC 230 registers as a surrogate provider, it can also receive nearly if not all pre- and post-view of IRP and FastIO calls to network providers. One advantage to this is that an extended mini-redirector interface can be used to communicate with a plurality of mini-redirectors in order to get additional information from the mini-redirector and to simplify callback mechanisms for events such as oplock breaks and transports appearing and disappearing from the redirectors.
  • Substantially all calls fielded by the MUP 220 are handed down to the appropriate redirectors 250. In general, the CSC mechanism 230 can also filter substantially all calls going through the MUP 220, thus allowing it the opportunity to decide on appropriate caching pathnames.
  • Unlike conventional systems, the MUP 220 in the present invention supports surrogate providers. Surrogate providers such as the CSC mechanism 230 can register to the MUP 220 with pre- and post-process handlers (232, 234). For instance, the MUP 220 calls the pre-process handler(s) 232 in a predetermined order before calling any of the network providers. It can return one of the following statuses when the pre-process is done:
      • STATUS_MORE_PROCESSING_REQUIRED—The request has not been satisfied. The IRP needs to be sent to the next provider.
      • STATUS_PENDING—The surrogate provider needs more time to process the request. A MUP resume routine will be called after the process is done.
      • Other status—The surrogate provider has handled the request. MUP can complete the IRP without invoking the rest of the providers.
  • Subsequently, the CSC post process handler 234 can be called after the request is handled by a network provider and/or another surrogate provider, depending on the status returned on its pre-process. The post-process handler 234 has a chance to handle the request again. For instance, it can store the data returned back from the server if “success” is returned, or take the connection offline, process the request from cache, and return “success”.
  • The post-process handler(s) 234 are called in the opposite order some time thereafter. Since the CSC mechanism 230 is in the path of every call that the MUP handles, the CSC can do the relevant pre- and post-processing that is necessary to obtain the appropriate functionality.
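The call pattern above — pre-process handlers in registration order, then the network providers, then post-process handlers in the opposite order — can be sketched as follows. This is an illustrative model (STATUS_PENDING is omitted for brevity), not the kernel implementation; all class and status names are stand-ins.

```python
MORE_PROCESSING_REQUIRED = "more_processing_required"  # pass to next provider
SUCCESS = "success"                                    # request fully handled

class Surrogate:
    """A surrogate provider (e.g., CSC or DFS) with pre/post handlers."""
    def __init__(self, name, log):
        self.name, self.log = name, log

    def pre(self, request):
        self.log.append(("pre", self.name))
        return MORE_PROCESSING_REQUIRED  # let the next provider see it

    def post(self, request, status):
        self.log.append(("post", self.name))
        return status  # could instead go offline and serve from cache

class Provider:
    """A network provider/redirector (e.g., SMB) that claims the request."""
    def __init__(self, name, log):
        self.name, self.log = name, log

    def handle(self, request):
        self.log.append(("handle", self.name))
        return SUCCESS

def mup_dispatch(request, surrogates, providers):
    """Pre-process in order; providers if still unsatisfied; post-process
    in reverse order of the surrogates whose pre-process was invoked."""
    called, status = [], MORE_PROCESSING_REQUIRED
    for s in surrogates:
        called.append(s)
        status = s.pre(request)
        if status != MORE_PROCESSING_REQUIRED:
            break  # surrogate handled it; skip remaining providers
    if status == MORE_PROCESSING_REQUIRED:
        for p in providers:
            status = p.handle(request)
            if status != MORE_PROCESSING_REQUIRED:
                break
    for s in reversed(called):
        status = s.post(request, status)
    return status
```

Listing CSC before DFS in `surrogates` mirrors the preferred ordering discussed next, since CSC then sees the logical path before any translation.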
  • As can be seen in FIG. 2, the MUP 220 can call the CSC 230 and/or the DFS 240, in either order. However, it is preferable that the MUP 220 calls the CSC mechanism 230 before the DFS 240. This can be important for a create request in particular because it gives the CSC mechanism 230 a chance to cache the logical path in a local cache 236 before the DFS 240 translates the file object name into the physical path.
  • FIG. 3 illustrates a schematic block diagram of representative file-based data structures in both logical (CSC) 300 and physical namespace (Mini-Rdr) 310 and the relationships between them when a file is created online. Essentially, CSC 300 maintains the connection based data structures in logical name space while Mini-Rdr 310 maintains the connection based data structures in physical name space. File based data structures are created by CSC 300 and shared among CSC 300 and Mini-Rdr 310. Some file based data structures have access to the connection based data structures in both logical and physical name space. Therefore, file I/O requests can be executed by either CSC 300 or Mini-Rdr 310, based on the circumstances. By maintaining such data structures, the CSC can provide and/or facilitate persistent caching to yield truth on the client.
  • There are several operations that CSC can perform to implement delay write persistent caching semantics. At least a portion of CSC consistency is based in part on the last write time stamp and file-size; hence, it has to do various tasks at create/close time to ensure that this information is accurate.
  • For example, in a File Create Operation, a create request comes to the MUP (e.g., MUP 220 in FIG. 2). The MUP 220 calls the pre-process handler (e.g., FIG. 2, 232) of the CSC (surrogate provider). The CSC attempts to find or create connection data structures for the file object that is issued with the create request. Examples of the connection data structures include a server Connection structure: SrvCall; a share mapping structure: NetRoot; and a per-user share mapping structure: VNetRoot.
  • If the part of the logical namespace on which the target of the create falls is already offline, the surrogate (e.g., CSC) finds or creates the logical namespace structure and returns “success”. However, if the surrogate provider does not have the information indicating that the path is offline, it can ask the MUP to proceed with further processing of the create call after creating or finding the above structures in the logical name space.
  • The MUP may continue its operations by supplying the path to the DFS (e.g., FIG. 2, 240), which in turn might translate the logical path to an actual server share, depending on whether there is a DFS link along the way. Ultimately, one redirector (e.g., FIG. 2, 250) claims the name. If that redirector is part of RDBSS architecture (e.g., FIG. 2, 260—a communication link between CSC component and redirectors), then a mini-redirector (MINI-RDR) automatically refers to RDBSS to execute the common create code. When the create call returns to the MUP, it can call the CSC post-process handler. If the call is not fielded by a mini-redirector that supports the CSC, a post-processor routine may tell the MUP that it is not interested in the file, and no subsequent operations are seen by the CSC.
  • On a successful open of the path, all the connection structures get established at the appropriate mini-redirector and a handle is available to the file. The CSC pre-process handler can get the file extension, size, and caching flags of the path by looking at the physical NetRoot of the Fcb (file control block—an abstraction of a file that contains information about the file such as name, size, time stamps, cache map, shared access rights, mini-redirector device object, pointer to NetRoot, etc.) of the parent directory, or by issuing a FSCTL (File System Control) against the handle.
  • Once this information is obtained, the CSC can decide whether to own this file object. If the share characteristics so demand, such as it being a cacheable share, or if the mini-redirector demands caching be on all the time, such as DAV, the CSC can claim ownership and create the file data structures for the open instance (e.g., Fcb, SrvOpen, and Fobx) represented by this file object. However, if the share is marked non-cacheable, the CSC can disassociate itself from the file object so as to not see the operations against this file object thereafter.
  • It should be understood that SrvOpen refers to the Server Side Open Context, which is the abstraction of an open sent to the server, and which stores the desired access, share access, security context, server file handle, mini-rdr context, pointer to Fcb, etc. Multiple SrvOpens with different access rights and session IDs can collapse on a single Fcb. Furthermore, Fobx refers to File Object Extensions, which is the RDR (redirector) extension of a file object, containing information unique to a handle, such as a directory enumeration template, resume key, etc. It also has a pointer to a SrvOpen. Multiple Fobx can collapse on a single SrvOpen if their access rights match.
  • After the file data structures are created and linked to the connection based data structures of both logical and physical namespaces, the CSC can issue a create directly to the mini-redirector with the prefixed file name in the physical namespace which can be obtained through the file handle for querying attributes. The CSC can keep the connection data structure around for some time even if it is a non-cacheable path. In addition, it can put the parent directory on the name cache under the NetRoot so that it can quickly determine the persistent caching characteristics on the next create request without issuing an open to it again. With this approach, per directory caching can be obtained.
  • Moreover, the connection-related data structure can be separated and the file-related data structure can be shared between the logical and physical namespaces. The CSC and redirectors can create their own SrvCall, NetRoot, and VNetRoot, and share the Fcb, SrvOpen and Fobx. This way, the CSC and redirectors can handle many different UNC paths (logical and physical) without doubling the resources such as in-memory data structures and cache maps.
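The structure sharing described above can be modeled in miniature. This is a hedged sketch (the class layouts are assumptions loosely following the text, not the actual RDBSS types): the connection-scoped structures exist once per namespace, while the file-scoped structures are shared.

```python
# Minimal model of separate per-namespace connection structures (NetRoot)
# sharing the file-related structures (Fcb, SrvOpen).
from dataclasses import dataclass, field

@dataclass
class NetRoot:            # connection-scoped: one per \\server\share namespace
    path: str

@dataclass
class SrvOpen:            # one open sent to the server
    vnetroot_logical: NetRoot = None
    vnetroot_physical: NetRoot = None

@dataclass
class Fcb:                # file-scoped: shared by the CSC and the redirector
    name: str
    srv_opens: list = field(default_factory=list)

logical = NetRoot(r"\\dfsroot\share")       # created by the CSC
physical = NetRoot(r"\\fileserver\share")   # created by the mini-redirector

fcb = Fcb(r"\docs\report.doc")
so = SrvOpen(vnetroot_logical=logical, vnetroot_physical=physical)
fcb.srv_opens.append(so)

# One Fcb (and hence one cache map), referenced from two UNC namespaces,
# so in-memory resources are not doubled.
assert so.vnetroot_logical is not so.vnetroot_physical
assert fcb.srv_opens[0] is so
```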
  • In a File Read Operation, the CSC needs to know the buffering state of the file before every read from the persistent cache. If the buffering state is such that read caching is allowed then the CSC can read the persistently cached data and serve it to the app. However, if the buffering state is at the equivalent of OPLOCK_LEVEL_NONE, then it should not return the data from the cache and let all the reads go to an underlying provider. The buffering state info can be obtained by checking the FCB_STATE_READCACHING_ENABLED flag on Fcb->FcbState.
  • If read caching is allowed and the file is not sparse, the CSC fills the user buffer with cached data and returns success on the CSC pre-process. The MUP then returns the request without forwarding it to the mini-rdr. If read caching is disabled or the file is sparse, the CSC sends the read request directly to the mini-redirector on the pre-process. Once the request is completed successfully by the mini-redirector, the data is saved in the cache. If an error is returned from the mini-redirector, the CSC attempts to transition to offline and, if it succeeds, completes the request offline. In either case, the read operation is completed on the CSC pre-process without having the MUP send the request to the mini-redirector.
  • With respect to a File Write Operation, the CSC needs to know the buffering state of the file before every write is executed. If the buffering state is such that write caching is allowed, then the CSC can write the persistently cached data and return success to the application. However, if the buffering state is at the equivalent of OPLOCK_LEVEL_II or less, then it should not cache the write and let all the writes go to the underlying provider. Again the buffering state info can be obtained by checking the FCB_STATE_WRITECACHING_ENABLED flag on Fcb->FcbState.
  • If write caching is allowed and the file is not sparse, the CSC sends the write request to the local cache and returns success on the CSC pre-process. The MUP then returns the request without forwarding it to the mini-rdr.
  • If write caching is disabled or the file is sparse, the CSC sends the write request directly to the mini-redirector on the pre-process. Once the request is completed successfully by the mini-redirector, the data is saved in the cache. If an error is returned from the mini-redirector, the CSC attempts to transition to offline and, if it succeeds, completes the request offline. In either case, the write operation is completed on the CSC pre-process without having the MUP send the request to the mini-redirector.
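The read and write pre-process decisions above share one shape, sketched below. The flag names mirror those in the text (FCB_STATE_READCACHING_ENABLED, FCB_STATE_WRITECACHING_ENABLED); the handler itself and the flag values are hypothetical.

```python
# Illustrative decision logic for where a read/write is satisfied on the
# CSC pre-process. Flag values are assumptions for the example.
FCB_STATE_READCACHING_ENABLED = 0x1
FCB_STATE_WRITECACHING_ENABLED = 0x2

def preprocess_io(op, fcb_state, file_is_sparse):
    """Decide how a read or write request is handled on the CSC pre-process."""
    flag = (FCB_STATE_READCACHING_ENABLED if op == "read"
            else FCB_STATE_WRITECACHING_ENABLED)
    if (fcb_state & flag) and not file_is_sparse:
        return "serve-from-cache"     # completed locally; MUP returns it
    return "send-to-mini-rdr"         # CSC forwards directly; on success the
                                      # data is also saved in the cache

assert preprocess_io("read", FCB_STATE_READCACHING_ENABLED, False) == "serve-from-cache"
assert preprocess_io("read", FCB_STATE_READCACHING_ENABLED, True) == "send-to-mini-rdr"
assert preprocess_io("write", 0, False) == "send-to-mini-rdr"
```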
  • In a File Close Operation, the MUP calls the CSC pre-process first when a close request comes to the MUP. The CSC pre-process checks if there is any cached data from a previous write request on this file. If so, the CSC sends back only the sections with modified data to the server by issuing write requests to the mini-redirector. After the cached data is pushed back to the server, the CSC pre-process sends the close request to the mini-redirector. Once the mini-redirector completes the close request, the CSC pre-process queries the timestamp from the server, sets it on the cached file, closes the cache handle, and returns to the MUP. Thus, writes are cached until the file is closed. If a durable op-lock is granted to the client, the close request is only executed locally since there is no remote handle to the file; writes are then cached until the op-lock is broken.
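The flush-on-close step can be illustrated with a small model: only the sections (byte ranges) modified while writes were cached are pushed back before the close is sent to the mini-redirector. All names here are assumptions for illustration.

```python
# Hypothetical sketch: push back only the modified sections on close.
def flush_dirty_sections(dirty_sections, send_write):
    """dirty_sections: list of (offset, data) captured by cached writes.
    send_write: callable issuing one write request per modified section."""
    for offset, data in dirty_sections:
        send_write(offset, data)

pushed = []
flush_dirty_sections([(0, b"hdr"), (4096, b"body")],
                     lambda off, d: pushed.append((off, len(d))))
# Two write requests were issued, one per dirty section, nothing else.
assert pushed == [(0, 3), (4096, 4)]
```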
  • Moreover, if a handle is opened online, the CSC will follow the design principle of always processing the request after a redirector takes a first look, except with a create request. Below is exemplary pseudo code for the control flows of the typical create, read, and write operations. Other file I/O operations are similar to read/write.
    Create
    MupCreate(Irp(FileObject))
    CscSurrogatePreProcess(MupIrpContext) // cscinit.c
    // invoke RxFsdDispatch
    RdbssCommonCreate(Irp(FileObject(LogicalPath))) // Create SrvCall, NetRoot,
    //VNetRoot for the logical
    //path. Create or find Fcb,
    // SrvOpen, Fobx for the file
    //object
    Fcb->RxDeviceObject = CscDeviceObject
    Fcb->CscNetRoot = RxContext->Create.NetRoot;
    SrvOpen->CscVNetRoot = RxContext->Create.VNetRoot;
    CscCreate // openclose.c
    FileObject->FsContexts = Fobx
    If Cache is dirty
    Return STATUS_SUCCESS // local open
    Else
    Return More_Processing_Required // remote open
    If Status == More_Processing_Required // MUP
    DfsSurrogatePreProcess(MupIrpContext)
    If IsDfsPath(FileObject(LogicalPath)) then
    ResolveDfsPath(FileObject(logicalPath))
    RedirectorDispatcher(Irp(FileObject(PhysicalPath)))
    RdbssCommonCreate(Irp(FileObject(PhysicalPath))) // Create SrvCall,
    //NetRoot, VNetRoot
    //for physical path
    Fobx = FileObject->FsContexts
    SrvOpen = Fobx->SrvOpen
    Fcb = SrvOpen->Fcb
    Fcb->MiniRdrDeviceObject = RdrDeviceObject
    Fcb->NetRoot = RxContext->Create.NetRoot;
    SrvOpen->VNetRoot = RxContext->Create.VNetRoot;
    RedirectorCreate(RxContext) // Create mini-rdr extensions, contact server,
    //set mini-rdr device object
    IrpCompletion(Irp) // Mini-rdr
    Return status
    MupResumeIoOperation
    DfsSurrogatePostProcess(MupIrpContext)
    MupResumeIoOperation
    CscSurrogatePostProcess(MupIrpContext)
    CreateCompletion(SrvOpen,Fcb) // Synchronize the create request
    Else // non-DFS path
    Return More_Processing_Required
    MupResumeIoOperation
    MupLocateRedirector(Irp(FileObject(LogicalPath)))
    RedirectorDispatcher(Irp(FileObject(LogicalPath)))
    Fobx = FileObject->FsContexts
    SrvOpen = Fobx->SrvOpen
    Fcb = SrvOpen->Fcb
    Fcb->MiniRdrDeviceObject = RdrDeviceObject
    Fcb->NetRoot = RxContext->Create.NetRoot;
    SrvOpen->VNetRoot = RxContext->Create.VNetRoot;
    RdbssCommonCreate(Irp(FileObject(LogicalPath))) // Create SrvCall,
    //NetRoot, VNetRoot
    RedirectorCreate(RxContext) // Create mini-rdr extensions, contact server,
    //set mini-rdr device object
    IrpCompletion(Irp) // Mini-rdr
    Return status
    MupResumeIoOperation
    DfsSurrogatePostProcess(MupIrpContext)
    MupResumeIoOperation
    CscSurrogatePostProcess(MupIrpContext) // cscinit.c
    CreateCompletion(SrvOpen, Fcb) // Complete SrvOpen and Fcb construction,
    //synchronize other opens with the same
    // path, replace the Mini-rdr dispatch table
    // with CSC dispatch table
    IrpCompletion(Irp) // MUP
    Else // status != More_Processing_Required
    IrpCompletion(Irp) // MUP
  • As can be seen from the above, the CSC pre-process handler creates the SrvCall, NetRoot, and VNetRoot for the logical path and the Fcb, SrvOpen, and Fobx for the file object. The Fobx is stored on FileObject->FsContext2 so that the mini-rdr can pick it up when it is called from either the DFS or the MUP. If the Fcb is created by the CSC, Fcb->RxDeviceObject is the CSC Device Object; otherwise, it will be the mini-rdr Device Object. The mini-rdr also sets Fcb->MiniRdrDeviceObject on the create path. In addition, a subsequent create request on the same file has to wait on the CSC pre-process handler; the CSC post-process handler signals the other create requests after the first one is completed.
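The create-synchronization point above can be illustrated with a toy threading model, where threads stand in for concurrent create IRPs. The event and function names are hypothetical; only the wait/signal ordering reflects the text.

```python
# Toy illustration: a second create on the same file waits in the CSC
# pre-process until the post-process of the first create signals it.
import threading

create_done = threading.Event()
order = []

def first_create():
    order.append("first:pre")
    # ... DFS / mini-rdr processing would happen here ...
    order.append("first:post")
    create_done.set()              # post-process handler signals waiters

def second_create():
    create_done.wait()             # blocked in the CSC pre-process
    order.append("second:pre")

t2 = threading.Thread(target=second_create)
t2.start()
first_create()
t2.join()
# The second create only proceeds after the first one completed.
assert order == ["first:pre", "first:post", "second:pre"]
```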
    Read/Write
    MupFsdIrpPassThrough(Irp(FileObject))
    CscSurrogatePreProcess(MupIrpContext) // cscinit.c
    Return More_Processing_Required
    MupResumeIoOperation
    DfsSurrogatePreProcess(MupIrpContext)
    Return More_Processing_Required
    MupResumeIoOperation
    RedirectorDispatcher(Irp(FileObject))
    RdbssCommonRead/Write(RxContext) // read.c or write.c
    If !PagingIO then
    If CcRead/write(FileObject,Offset,Length) then
    IrpCompletion(Irp)
    Fcb->RxDeviceObject->Dispatcher[Read/Write](RxContext)
    CscRead/Write(RxContext) // readwrit.c
    If oplockAcquired && CacheNotSparse then
    CscRead/WriteCache(RxContext)
    Else
    Fcb->MiniRdrDeviceObject->Dispatcher[Read/Write](RxContext)
    RedirectorRead/Write(RxContext) // read.c or write.c
    CscRead/WriteEpilogue // cache data from or to server
    IrpCompletion(Irp)
    MupResumeIoOperation
    DfsSurrogatePostProcess(MupIrpContext)
    MupResumeIoOperation
    CscSurrogatePostProcess(MupIrpContext)
    Close
    MupFsdIrpPassThrough(Irp(FileObject))
    CscSurrogatePreProcess(MupIrpContext)
    Return More_Processing_Required
    MupResumeIoOperation
    DfsSurrogatePreProcess(MupIrpContext)
    Return More_Processing_Required
    RedirectorDispatcher(Irp(FileObject))
    RdbssCommonClose(RxContext) // close.c
    Fcb->RxDeviceObject->Dispatcher[Close](RxContext)
    CscClose(RxContext) // openclos.c
    If FileIsDirty(RxContext) then
    CscFlushDirtyPages(RxContext)
    Fcb->RxDeviceObject->Dispatcher[Write](RxContext)
    Fcb->RxDeviceObject->MiniRdrDispatcher[Close](RxContext)
    RedirectorClose(RxContext) // openclos.c
    CscCloseEpilogue(RxContext) // close CSC handle
    Dereference(Fobx)
    Dereference(SrvOpen->VNetRoot)
    Dereference(SrvOpen->CscNetRoot)
    Dereference(SrvOpen)
    Dereference(Fcb->NetRoot)
    Dereference(Fcb->CscNetRoot)
    Dereference(Fcb)
    IrpCompletion(Irp)
    MupResumeIoOperation
    DfsSurrogatePostProcess(MupIrpContext)
    MupResumeIoOperation
    CscSurrogatePostProcess(MupIrpContext)
  • If a handle is opened offline, the CSC will handle the file I/O requests on the pre-process, since there is no redirector to claim the path. Below are the control flows of the create, read, write, and close operations. Other file I/O operations are similar to read/write.
    Create
    MupCreate(Irp(FileObject))
    CscSurrogatePreProcess(MupIrpContext)
    RdbssCommonCreate(Irp(FileObject(LogicalPath))) // Create SrvCall, NetRoot,
    //VNetRoot for the logical
    // path. Create or find Fcb,
    // SrvOpen, Fobx for the file
    //object
    Fcb->RxDeviceObject = CscDeviceObject
    Fcb->CscNetRoot = RxContext->Create.NetRoot;
    SrvOpen->CscVNetRoot = RxContext->Create.VNetRoot;
    CscCreate // openclose.c
    If Disconnected then
    Return Success
    Else
    Return More_Processing_Required // remote open
    MupResumeIoOperation
    DfsSurrogatePreProcess(MupIrpContext)
    Return More_Processing_Required
    MupResumeIoOperation
    MupFindRedirector(Irp(FileObject(LogicalPath)))
    return Network_Disconnected
    MupResumeIoOperation
    DfsSurrogatePostProcess
    MupResumeIoOperation
    CscSurrogatePostProcess
    If TransitionOnline(Status) then
    CscCreateEpilogue(RxContext)
    If !CacheSparse then return Success
    CreateCompletion(SrvOpen,Fcb) // Complete SrvOpen and Fcb construction,
    //synchronize other opens with the same
    //path
    IrpCompletion(Irp) // MUP
    Read/Write
    MupFsdIrpPassThrough(Irp(FileObject))
    CscSurrogatePreProcess(MupIrpContext)
    RdbssCommonRead/Write(RxContext)
    If !PagingIO then
    If CcRead/write(FileObject,Offset,Length) then
    IrpCompletion(Irp)
    Fcb->RxDeviceObject->Dispatcher[Read/Write](RxContext)
    CscRead/Write(RxContext)
    Read/WriteCache(RxContext)
    IrpCompletion(Irp) //MUP
    Close
    MupFsdIrpPassThrough(Irp(FileObject))
    CscSurrogatePreProcess(MupIrpContext)
    RdbssCommonClose(RxContext)
    Fcb->RxDeviceObject->Dispatcher[Close](RxContext)
    CscClose(RxContext)
    CscCloseEpilogue(RxContext)
    Dereference(Fobx)
    Dereference(SrvOpen->CscNetRoot)
    Dereference(SrvOpen)
    Dereference(Fcb->CscNetRoot)
    Dereference(Fcb)
    IrpCompletion(Irp) //MUP
  • As previously mentioned, the present invention facilitates a seamless user experience that includes online to offline transition, offline to online transition, merge and synchronization, and low bandwidth operations. Unlike conventional systems, the present invention is capable of transitioning the connection state at the directory level as well as at the share level. If one share goes offline, other shares on the same server (logical name space) remain online. Offline support at the directory level is built for a logical name space with DFS link(s). When a physical share is disconnected, the part of the name space hosted on the physical share is disconnected; the rest of the logical path remains online. In cascaded DFS cases, it is possible to have mixed online and offline directories along the logical path.
  • In one aspect of the invention, the CSC keeps a transition list of the directories on the logical NetRoot which are backed by a physical share. By default, the logical share is the first one on the list. Based on the result returned from the DFS API that tells whether a directory is a DFS link, the CSC can add the directories representing DFS links to the list. When an operation fails with a network error, the directory on the list closest to the object gets transitioned offline. If only the logical share (e.g., the non-DFS case) is on the list, the share is transitioned offline. The CSC keeps the connection state, version number, cache ID, etc., on each list item.
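A minimal sketch of that transition list follows. The list layout and field names are assumptions; the point illustrated is that on a network error only the deepest matching entry (the one closest to the failing object) goes offline, while the rest of the namespace stays online.

```python
# Hypothetical per-NetRoot transition list: the logical share first, then
# each directory that is a DFS link.
transition_list = [
    {"path": r"\\dfsroot\share", "offline": False, "version": 1},
    {"path": r"\\dfsroot\share\projects", "offline": False, "version": 1},  # DFS link
]

def transition_offline(obj_path):
    """Transition offline the list entry closest to the failing object."""
    candidates = [e for e in transition_list
                  if obj_path.lower().startswith(e["path"].lower())]
    entry = max(candidates, key=lambda e: len(e["path"]))  # deepest prefix
    entry["offline"] = True
    entry["version"] += 1
    return entry

e = transition_offline(r"\\dfsroot\share\projects\plan.doc")
assert e["path"] == r"\\dfsroot\share\projects" and e["offline"]
# The logical share itself, and hence the rest of the namespace, stays online.
assert transition_list[0]["offline"] is False
```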
  • The online to offline transition continues to work as seamlessly as in previous versions of the CSC. The transition can be triggered by a network-related error during an I/O operation or by a transport disconnection indication. The application handle remains valid and continues to work offline. The CSC simply marks the directory (or share) offline, increases the version number, and references the NetRoot so that the connection is maintained until it is transitioned back online.
  • Similar to transitioning offline, the CSC is capable of transitioning back online at the share and directory level. The CSC does not invalidate application handles while transitioning the connection to online. Rather, it eliminates the requirement that the user initiate the transition and then close all applications. This removes a main blocker for the CSC to work in a TS/FUS environment. In addition, the subject CSC defers the merge and synchronization process until after transitioning online so that the transition becomes fast. With both improvements, transitioning online becomes as painless as transitioning offline.
  • In practice, for instance, transitioning to online is initiated by a CSC agent as the result of discovering that a path has become reachable. The CSC agent periodically scans the paths that are offline. A network arrival event can also trigger the CSC agent to transition the paths. Once the CSC agent detects that a path is reachable, it sends an IOCTL (I/O control) to a CSC driver to initiate an online transition on that path. The CSC driver simply resets the state of the directory on the transition list and increases the version number. All existing handles remain offline until each handle is backpatched individually. Per-file offlineness can be accomplished in part by having a flag on the SrvOpen indicating the offline state independent of the network connection state.
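The agent's scan loop can be modeled simply. The real agent sends an IOCTL to the CSC driver; in this hedged sketch the driver side is stubbed as a callable, and all names are assumptions.

```python
# Simplified model of the CSC agent's reachability scan.
def agent_scan(offline_paths, is_reachable, send_transition_ioctl):
    """For each offline path that has become reachable, ask the driver
    (via an IOCTL, stubbed here) to initiate an online transition."""
    for path in offline_paths:
        if is_reachable(path):
            send_transition_ioctl(path)  # driver resets the directory state
                                         # and bumps the version number

transitioned = []
agent_scan([r"\\srv\a", r"\\srv\b"],
           is_reachable=lambda p: p.endswith("b"),
           send_transition_ioctl=transitioned.append)
# Only the path that became reachable is transitioned.
assert transitioned == [r"\\srv\b"]
```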
  • Once the connection is online, the CSC driver starts to backpatch the handles. It walks through the list of the outstanding Fcbs and backpatches the handles for directories first based at least in part on the following reasons:
      • (a) it is desirable for users to have online directory view as soon as the connection is online;
      • (b) the directory does not have data to flush; and
      • (c) the directory is less likely to get into conflict.
  • The CSC completes the pending directory change notification on these handles so that the application can send a new directory enumeration to see the online view. To maintain the consistent view of the directory after transitioning online and before outbound synchronization completes, the CSC can merge the results from the server with the results from the cache, such as add, remove, or modify entries on the enumeration buffer depending on the cache files. The namespace as seen by the user is a union of the namespace as it exists on the server and as it exists in the offline store. This ensures that the applications, and hence, the user retains a consistent view of the files after transitioning online automatically, such as file sizes and time stamps, even when the files have been modified locally but the changes have not been pushed out.
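The merged enumeration described above amounts to a union of the two namespaces with local entries taking precedence. A minimal sketch, assuming a simple name-to-metadata mapping (the real enumeration buffer manipulation is more involved):

```python
# Illustrative merge: the user-visible view is the union of the server
# listing and the offline store, with cached (locally modified) entries
# overriding the server's metadata until outbound synchronization completes.
def merged_view(server_entries, cached_entries):
    """Both arguments map name -> metadata dict (e.g., size, mtime)."""
    view = dict(server_entries)
    view.update(cached_entries)   # the local namespace overrides the remote
    return view

server = {"a.txt": {"size": 10}, "b.txt": {"size": 20}}
cache = {"b.txt": {"size": 25}, "new.txt": {"size": 5}}  # modified offline

view = merged_view(server, cache)
assert view["b.txt"]["size"] == 25                   # local change wins
assert set(view) == {"a.txt", "b.txt", "new.txt"}    # union of namespaces
```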
  • FIGS. 4-7 illustrate how the contents of the cache and the associated network files are viewed by the user on the client computer by way of a user interface. Each diagram contains three panes. The right-most pane depicts the contents of a network file system, the middle pane depicts the contents of a local CSC cache, and the left-most pane depicts what the user “sees” when viewing the remote namespace through an application of an operating system. The scenarios are represented by a fictitious file system tree composed of one root directory and two subdirectories containing three files each. In particular, FIG. 4 demonstrates what a user (end-user) would see when connected to the network with a partially cached namespace. As shown, the user view includes the partially cached namespace (e.g., \bedrock\flintstone\wilma-fred-pebbles) as well as subdirectory rubble with files betty, barney, and bambam, which are from the network version of the root directory bedrock. Since sub-directory rubble and its files are not cached in the user's local cache, they are no longer viewable by the user when disconnected from the network. This is depicted in the image shown in FIG. 5.
  • In FIG. 6, the client computer is again offline or disconnected from the network at least temporarily. During this period of disconnection, the cache comprises similar copies of the root directory bedrock. For instance, it is apparent that the cache has been updated with the modified portions of sub-directory rubble (see cross-hatched boxes compared to solid white boxes in the cache and user views). However, the network comprises an apparently different version or copy of the bedrock root directory as a whole. This indicates that modifications to some of the files can be stored in the local cache while offline and viewed by the user during the disconnection period. However, it also clearly illustrates that the network version is not updated with the modified versions during the offline period.
  • In case of conflicts, the local namespace overrides the remote namespace, so at all times, the applications and hence, the user see only additions of what they have not seen while offline, as shown in FIG. 7, infra.
  • Referring now to FIGS. 8-10, there are illustrated schematic diagrams (e.g., diagrams 800, 900, and 1000) that represent a sequence of points in time where a user is working on a cached document (FIG. 8), the document is synchronized (FIG. 9), and the user continues to work on the document (FIG. 10). The primary point to observe is that, for cached content, the user is always working from the local cache, not directly from the associated server. A connection to the network can be required when that content must be synchronized with the associated server(s). Otherwise, the connection does not need to be maintained, thereby reducing bandwidth and network traffic. This is particularly true for snapshot-putback protocols such as WebDAV. For SMB, sharing semantics are maintained while online by keeping handles open on the server, but not propagating any changes while the file is not being shared. Synchronization of offline content can occur automatically, with user intervention required only to resolve synchronization conflicts. This results in a much less intrusive experience for the present CSC user.
  • To ensure Offline Files properly handles all file and directory change scenarios, it can be important that an exemplary set of possible scenarios is understood. The first table (Table 1) illustrates 21 different “conditions” that a pair of files (server copy and cached copy) may exist in at the time of synchronization. Some may find the visual nature of this illustration helpful to understand and “visualize” the various conditions.
    TABLE 1
    (In the original, each condition is illustrated by a pair of client and server icons, omitted here.)
    0. No changes.
    1. File created on client. No server copy exists.
    2. Directory created on client. No server copy exists.
    3. File sparse on client.
    4. File renamed on server.
    5. Directory renamed on server.
    6. File deleted on server.
    7. Directory deleted on server.
    8. File created on client. Different file of same name exists on server.
    9. Directory created on client. Different directory of same name exists on server.
    10. File created on client. Directory of same name exists on server.
    11. Directory created on client. File of same name exists on server.
    12. File renamed on client.
    13. Directory renamed on client.
    14. File deleted on client.
    15. Directory deleted on client.
    16. File changed on client.
    17. File changed on server.
    18. File changed on client. File changed on server.
    19. File changed on client. File deleted on server.
    20. File deleted on client. File changed on server.
  • Table 2 describes the behavior of the system for each of the 21 scenarios; first when the file or directory is not open and second, when the file or directory is open.
    TABLE 2
    Offline->Online transition behavior
    Occurred while offline | File/dir not open | File/dir open
    0. No changes | Nothing | Handle kept open to cached file.
    1. File created on client. No server copy exists. | Copy file to server after online. | Handle kept open to cached file.
    2. Directory created on client. No server copy exists. | Create the directory on server after online. | Handle kept open to cached directory.
    3. File sparse on client. | Copy file from server after online. | Not available.
    4. File renamed on server. | Rename cached file after online. | Handle kept open to cached file. File is renamed after close.
    5. Directory renamed on server. | Rename cached directory after online. | Handle kept open. Directory renamed after close.
    6. File deleted on server. | Delete cached file after online. | Handle kept open to cached file. File is deleted after close.
    7. Directory deleted on server. | Delete cached directory after online. | Handle kept open to cached directory. Directory is deleted after close.
    8. File created on client. Different file of same name exists on server. | See cached file before conflict is resolved. | Handle kept open to cached file. Conflict can be resolved after close.
    9. Directory created on client. Different directory of same name exists on server. | See cached directory before conflict is resolved. | Handle kept open to cached directory. Conflict can be resolved after close.
    10. File created on client. Directory of same name exists on server. | See cached file before conflict is resolved. | Handle kept open to cached file. Conflict can be resolved after close.
    11. Directory created on client. File of same name exists on server. | See cached directory before conflict is resolved. | Handle kept open to cached directory. Conflict can be resolved after close.
    12. File renamed on client. | Rename server file after online. | Handle kept open to cached file while server file is renamed.
    13. Directory renamed on client. | Rename server directory after online. | Handle kept open to cached directory while server directory is renamed.
    14. File deleted on client. | Delete server file after online. | Not available.
    15. Directory deleted on client. | Delete server directory after online. | Not available.
    16. File changed on client. | Sync file after online. | Handle kept open to cached file while file is synced.
    17. File changed on server. | Sync file after online. | Handle kept open to cached file. Sync after close.
    18. File changed on both client and server. | See cached file before conflict is resolved. | Handle kept open to cached file. Conflict can be resolved after close.
    19. File changed on client. Same file deleted on server. | See cached file before conflict is resolved. | Handle kept open to cached file. File is deleted after close.
    20. File deleted on client. Same file changed on server. | Hide file before conflict is resolved. | Not available.
  • TABLE 3
    No. | Silent/User | Description
    0 | Silent | No changes. No action required.
    1 | Silent | Silently copy file to server.
    2 | Silent | Silently create directory on server.
    3 | Silent | Silently fill sparse file on client.
    4 | Silent | Silently rename file in cache.
    5 | Silent | Silently rename directory in cache.
    6 | User | Delete file in cache. Prompt user to confirm deletion. What if the cached file is the ONLY remaining copy?
    7 | User | Delete directory in cache, including all subfolders and files. Prompt user to confirm deletion.
    8 | User | Resolve conflict.
    9 | User | Resolve conflict.
    10 | User | Resolve conflict.
    11 | User | Resolve conflict.
    12 | Silent | Silently rename file on server.
    13 | Silent | Silently rename directory on server.
    14 | User | Delete file on server. Prompt user to confirm deletion.
    15 | User | Delete directory on server. Prompt user to confirm deletion.
    16 | Silent | Silently copy file to server.
    17 | Silent | Silently copy file to client.
    18 | User | Resolve conflict.
    19 | User | Resolve conflict.
    20 | User | Resolve conflict.
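Tables 1-3 can be restated compactly as a mapping from condition number to the action class taken at synchronization. The encoding below is an assumption inferred from the table descriptions (conditions whose descriptions say "Silently ..." are silent; deletions prompt the user; the rest require conflict resolution).

```python
# Hypothetical restatement of the 21 synchronization conditions.
SILENT = {0, 1, 2, 3, 4, 5, 12, 13, 16, 17}
PROMPT_DELETE = {6, 7, 14, 15}
RESOLVE_CONFLICT = {8, 9, 10, 11, 18, 19, 20}

def sync_action(condition):
    """Map a Table 1 condition number to its Table 3 action class."""
    if condition in SILENT:
        return "silent"
    if condition in PROMPT_DELETE:
        return "prompt-user-delete"
    if condition in RESOLVE_CONFLICT:
        return "resolve-conflict"
    raise ValueError(condition)

assert sync_action(1) == "silent"               # file created on client: copy up
assert sync_action(6) == "prompt-user-delete"   # file deleted on server
assert sync_action(18) == "resolve-conflict"    # changed on both sides
```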
  • In order to properly operate in offline mode as described hereinabove, the CSC comprises an offline store which has two on-disk structures: the file system hierarchy and the priority queue. The hierarchy is used to keep track of the remote entities that have been cached and the name spaces thereof. The priority queue is used for iterating over the entire offline store in an MRU or an LRU fashion. The design of the offline store may leverage as much of the local file system as possible in order to simplify the recovery logic in cases of a crash or data corruption.
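The priority-queue behavior can be sketched with an in-memory stand-in. This is an illustrative model only (an OrderedDict substitutes for the on-disk queue; the class and method names are assumptions): accesses move an entry to the MRU end, and the whole store can be iterated in either order.

```python
# In-memory stand-in for the offline store's on-disk priority queue.
from collections import OrderedDict

class OfflineStoreQueue:
    def __init__(self):
        self._q = OrderedDict()          # oldest (LRU) entries first

    def touch(self, name):
        """Record an access: move the entry to the MRU end."""
        self._q.pop(name, None)
        self._q[name] = True

    def iter_lru(self):
        return list(self._q)             # least recently used first

    def iter_mru(self):
        return list(reversed(self._q))   # most recently used first

q = OfflineStoreQueue()
for f in ("a", "b", "c"):
    q.touch(f)
q.touch("a")                             # "a" becomes most recently used
assert q.iter_lru() == ["b", "c", "a"]
assert q.iter_mru() == ["a", "c", "b"]
```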
  • In order to carry out the many aspects of the present invention as described hereinabove, several APIs can be employed. Below are exemplary APIs that are part of the user-mode components of CSC. The main goal of these APIs is to allow an operating system to manage the user experience in terms of online/offline states, synchronization, viewing the offline store and cache cleanup operations.
    CSCIsCSCEnabled
    BOOL
    CSCIsCSCEnabled(
    VOID
    );
    This API allows an application to find out whether CSC is enabled at this time.
    Parameters:
    None
    Return Value:
    The function returns TRUE if successful; FALSE is returned if the function fails.
    GetLastError ( ) can be called to get extended information about the error.
    CSCPinFile
    BOOL
    CSCPinFile (
    IN LPTSTR Name, // Name of the item
    IN DWORD dwHintFlags, // Flags to be OR'ed for pinning,
    // see FLAG_CSC_PIN_XXX
    OUT LPDWORD lpdwStatus, // Status of the item
    OUT LPDWORD lpdwResultingPinCount, // Pin count for this file
    OUT LPDWORD lpdwResultingHintFlags
    );
  • This API allows an application to insert a file/directory in the Client-Side-Cache. If this API returns TRUE then the file is resident in the cache. If any of the pin flags are specified, the API takes the appropriate pinning action.
  • Parameters:
      • Name: The fully qualified UNC name of the file or directory to be pinned into the client cache
      • dwHintFlags: These flags are hints to the CSC as to how to treat this entry. These flags are OR'ed with existing flags on the entry. If the entry is newly created because of this call, then these flags are the only flags on the entry.
      • lpdwStatus: The status of the file/folder as defined by the status flags
      • lpdwResultingPinCount: Each file pinned in the CSC cache has a non-zero PinCount. Each call to CSCPinFile ( ) increments a file's PinCount by one, each call to CSCUnPinFile ( ) decrements the file's PinCount. ResultingPinCount returns the file's PinCount resulting from this call.
      • lpdwResultingHintFlags: hint flags after this operation is successful
        Return Value:
  • The function returns TRUE if successful; FALSE is returned if the function fails.
  • GetLastError ( ) can be called to get extended information about the error.
    CSCUnPinFile
    BOOL
    CSCUnPinFile (
    IN LPTSTR Name, // Name of the file or directory
    IN DWORD dwHintFlagsMask, // Bits to be removed from the entry
    OUT LPDWORD lpdwStatus, // Status of the item
    OUT LPDWORD lpdwResultingPinCount, // Pin count for this file
    OUT LPDWORD lpdwResultingHintFlags
    );
  • This API allows the caller to unpin a file or directory from the client side persistent cache.
  • Parameters:
      • Name: The fully qualified UNC name of the item to be unpinned
      • dwHintFlagsMask: pin flags to remove from the entry. No error is reported if the flags to be removed are not already set on the entry. If one of the inherit flags is removed, the effect applies to subsequently created descendents of that folder; descendents which already received a user/system pin count due to the inherit flag are unaffected.
      • lpdwStatus: The status of the file as defined by the status flags
      • lpdwResultingPinCount: Each file pinned in the CSC cache has a non-zero PinCount. Each call to CSCPinFile ( ) increments a file's PinCount by one; each call to CSCUnPinFile ( ) decrements the file's PinCount. ResultingPinCount returns the file's PinCount resulting from this call. A file is no longer pinned to the CSC cache when ResultingPinCount is zero.
      • lpdwResultingHintFlags: pin flags after this operation is successful.
        Return Value:
  • The function returns TRUE if successful. The status bits indicate more information about the item in the cache. FALSE is returned if the function fails.
  • GetLastError( ) can be called to get extended information about the error.
    CSCFindFirstCachedFile
    HANDLE
    CSCFindFirstCachedFile (
    LPCTSTR Name,
    OUT LPWIN32_FIND_DATA lpFindFileData,
    OUT LPDWORD lpdwStatus,
    OUT LPDWORD lpdwPinCount,
    OUT LPDWORD lpdwHintFlags,
    OUT FILETIME *lpftOrgTime
    );
  • This API allows the caller to enumerate files in the client side cache.
  • Parameters:
      • Name: Points to a null-terminated string that specifies a valid UNC name for a share. The API operates like the win32 FindFirstFile API, except that wild cards are not implemented in the first version.
      • If a NULL parameter is passed in, the API begins enumeration of all the \\server\share entries in the client-side-cache.
      • lpFindFileData: Points to the WIN32_FIND_DATA structure that receives information about the found file or directory. The structure can be used in subsequent calls to the CSCFindNextCachedFile or CSCFindClose function to refer to the file or subdirectory. The elements of the WIN32_FIND_DATA structure are filled in just as they would be for a non-cached file.
      • lpdwStatus: if lpFindFileData is not NULL, this returns the status of the file in terms of the flags defined below.
      • If lpFindFileData is NULL, it returns the status of the share as defined by FLAG_CSC_SHARE_STATUS_XXX.
      • lpdwPinCount: Pin count of the file
      • lpdwHintFlags: Hint flags on the file
  • lpftOrgTime: The timestamp of the original file on the server. This value makes sense only when the file/directory is a copy of a file on a server. It does not mean anything if the file/directory was created while offline, in which case the status bit FLAG_CSC_LOCALLY_CREATED is set.
    CSCFindNextCachedFile
    BOOL
    CSCFindNextCachedFile (
    HANDLE hCSCFindHandle,
    LPWIN32_FIND_DATA lpFindFileData,
    OUT LPDWORD lpdwStatus,
    OUT LPDWORD lpdwPinCount,
    OUT LPDWORD lpdwHintFlags,
    OUT FILETIME *lpftOrgTime
    );
  • This function continues a cache file search from a previous call to the CSCFindFirstCachedFile function.
  • Parameters:
      • hCSCFindHandle: identifies a search handle returned by a previous call to the CSCFindFirstCachedFile function.
      • lpFindFileData: points to the WIN32_FIND_DATA structure that receives information about the found file or subdirectory. The structure can be used in subsequent calls to CSCFindNextCachedFile to refer to the found file or directory. The WIN32_FIND_DATA structure receives data as described in CSCFindFirstCachedFile.
      • lpdwStatus: if the enumeration is for file/folder, this returns the status of the file in terms of the flags defined below.
        • If the enumeration is for \\server\shares\ this returns the status of the share as defined by FLAG_CSC_SHARE_STATUS_XXX.
      • lpdwPinCount: Pin Count of the file
  • lpftOrgTime: The timestamp of the original file on the server. This value makes sense only when the file/directory is a copy of a file on a server. It does not mean anything if the file/directory was created while offline, in which case the status bit FLAG_CSC_LOCALLY_CREATED is set.
    CSCFindClose
    BOOL
    CSCFindClose (
    HANDLE hCSCFindHandle
    );
  • The CSCFindClose function closes the specified cache search handle. The CSCFindFirstCachedFile and CSCFindNextCachedFile functions use the search handle to locate cached files with names that match the given name.
  • Parameters:
  • hCSCFindHandle: identifies the search handle. This handle must have been previously opened by the CSCFindFirstCachedFile function.
    CSCFindFirstCachedFileForSid
    HANDLE
    CSCFindFirstCachedFileForSid (
    LPCTSTR Name,
    PSID pSid,
    OUT LPWIN32_FIND_DATA lpFindFileData,
    OUT LPDWORD lpdwStatus,
    OUT LPDWORD lpdwPinCount,
    OUT LPDWORD lpdwHintFlags,
    OUT FILETIME *lpftOrgTime
    );
  • This API allows the caller to enumerate files in the client side cache for a particular principal, which is the only difference between this API and CSCFindFirstCachedFile. The handle returned by this API can be used by CSCFindNextCachedFile and CSCFindClose APIs.
  • Parameters:
      • Name: Points to a null-terminated string that specifies a valid UNC name for a share. The API operates like the win32 FindFirstFile API, except that wild cards are not implemented in the first version.
      • If a NULL parameter is passed in, the API begins enumeration of all the \\server\share entries in the client-side-cache.
      • lpFindFileData: Points to the WIN32_FIND_DATA structure that receives information about the found file or directory. The structure can be used in subsequent calls to the CSCFindNextCachedFile or CSCFindClose function to refer to the file or subdirectory. The elements of the WIN32_FIND_DATA structure are filled in just as they would be for a non-cached file.
      • pSid: Security ID of the principal for whom the cache is to be enumerated.
      • If NULL, then guest is assumed
      • lpdwStatus: if lpFindFileData is not NULL, this returns the status of the file in terms of the flags defined below.
        • If lpFindFileData is NULL, it returns the status of the share as defined by FLAG_CSC_SHARE_STATUS_XXX.
      • lpdwPinCount: Pin Count of the file
  • lpftOrgTime: The timestamp of the original file on the server. This value makes sense only when the file/directory is a copy of a file on a server. It does not mean anything if the file/directory was created while offline, in which case the status bit FLAG_CSC_LOCALLY_CREATED is set.
    CSCSetMaxSpace
    BOOL
    CSCSetMaxSpace(
    DWORD nFileSizeHigh,
    DWORD nFileSizeLow
    )

    Routine Description:
  • This routine allows the caller to set the maximum persistent cache size for files which are not pinned. It is used by the UI that allows the user to set the cache size. The maximum limit in Win2K/Windows XP is 2 GB.
  • Arguments:
      • nFileSizeLow Low DWORD of the cache size setting
      • nFileSizeHigh High DWORD of the cache size setting
        Returns:
  • The function returns TRUE if successful; FALSE is returned on error and GetLastError ( ) can be called to get extended information about the error.
    CSCDeleteCachedFile
    BOOL
    CSCDeleteCachedFile (
    IN LPTSTR Name // Name of the cached file
    );
  • This API deletes the file from the client side cache.
  • Parameters:
      • Name: The fully qualified UNC name of the file to be deleted
        Return Value:
  • The function returns TRUE if successful; FALSE is returned on error and GetLastError ( ) can be called to get extended information about the error.
  • Notes: Example error cases are: a) If a directory is being deleted and it has descendents, then this call will fail b) If a file is in use, this call will fail. c) If the share on which this item exists is being merged, this call will fail.
    CSCBeginSynchronization
    BOOL
    CSCBeginSynchronizationW(
    IN LPCTSTR lpszShareName,
    LPDWORD lpdwSpeed,
    LPDWORD lpdwContext
    )
  • This API sets up a synchronization context to begin the sync operation. Thus, if user input is needed to synchronize a share, calling this API obtains the input only once, and it is reused to synchronize both inward and outward.
  • Arguments:
      • lpszShareName The name of the share to be synchronized
      • lpdwSpeed A value returned by CSC to indicate to the caller the underlying speed at which the sync operation is being performed. This allows the synchronization UI to tailor its behavior according to the bandwidth
      • lpdwContext A context returned by the API
        Returns:
  • TRUE if the function is successful, FALSE if some error was encountered, or the operation was aborted. GetLastError( ) returns the errorcode.
    CSCEndSynchronization
    BOOL
    CSCEndSynchronization(
    IN LPCTSTR lpszShareName,
    DWORD dwContext
    )
  • This API cleans up the context obtained on a successful call to CSCBeginSynchronization API. The API cleans up any network connections established, possibly with user supplied credentials, during the CSCBeginSynchronization API.
  • Arguments:
      • lpszShareName Name of the share being synchronized
      • dwContext Context obtained from the CSCBeginSynchronization API
        Returns:
  • TRUE if the function is successful, FALSE if some error was encountered, or the operation was aborted. GetLastError( ) returns the errorcode.
    CSCMergeShare
    BOOL
    CSCMergeShare(
    LPTSTR lpszShareName,
    LPCSCPROC lpfnMergeProgress,
    DWORD dwContext
    )
  • This API allows the caller to initiate a merge of a share that may have been modified offline. The API maps a drive to the share that needs merging and uses that drive to do the merge. The mapped drive is reported in the callback at the beginning of the merge in the cFileName field of the lpFind32 parameter of the callback function. The caller of this API must a) use the supplied drive letter for any operations on the net and b) perform all such operations in the same thread that issued this API call.
  • Parameters:
      • lpszShareName Share to be merged. If this is NULL, all modified shares are merged
      • lpfnMergeProgress Callback function that informs the caller about the progress of the merge.
      • dwContext Context returned during callback
        Return:
  • TRUE if the function is successful, FALSE if some error was encountered, or the operation was aborted. GetLastError( ) returns the errorcode.
    CSCFillSparseFiles
    BOOL
    CSCFillSparseFiles(
    IN LPTSTR lpszName,
    IN BOOL fFullSync,
    IN LPCSCPROC lpprocFillProgress,
    IN DWORD dwContext
    );

  • This API allows the caller to fill sparse (partially cached) files in the offline store with data from the server.
  • Parameters:
      • lpszName Share or file name to sparse-fill.
      • fFullSync If TRUE, files which are not sparse are checked for staleness, and a fill is attempted on them
      • lpprocFillProgress Callback function that informs the caller about the progress of the fill
      • dwContext Context returned during callback
        Return:
  • TRUE if the function is successful, FALSE if some error was encountered, or the operation was aborted. GetLastError( ) returns the errorcode.
    CSCCopyReplica
    BOOL
    CSCCopyReplica(
    IN LPTSTR lpszFullPath,
    OUT LPTSTR *lplpszLocalName
    )
  • This API allows the caller to copy the data for the replica of a remote item out of the CSC offline store into a temporary local file.
  • Parameters:
      • lpszFullPath Full path of the file that needs to be moved/copied
      • lplpszLocalName pointer to a fully qualified path of the local file that contains the replica data. This buffer is LocalAlloc'ed by the API. It is the caller's responsibility to free it.
  • Return Value:
  • TRUE if successful, FALSE if failed. If FALSE, GetLastError( ) returns the exact error code.
    CSCGetSpaceUsage
    BOOL
    CSCGetSpaceUsage(
    OUT LPDWORD lpnFileSizeHigh,
    OUT LPDWORD lpnFileSizeLow
    )
  • This API returns the current space consumption by unpinned data in the csc offline store.
  • Parameters:
      • lpnFileSizeHigh High DWORD of the total data size
      • lpnFileSizeLow Low DWORD of the total data size
        Return Value:
  • Returns TRUE if successful. If the return value is FALSE, GetLastError( ) returns the actual error code.
    CSCFreeSpace
    BOOL
    CSCFreeSpace(
    DWORD nFileSizeHigh,
    DWORD nFileSizeLow
    )
  • This API frees up the space occupied by unpinned files in the CSC offline store by deleting them. The passed in parameters are used as a guide to how much space needs to be freed. Note that the API can delete local replicas only if they are not in use at the present time.
  • Parameters:
      • nFileSizeHigh High DWORD of the amount of space to be freed.
      • nFileSizeLow Low DWORD of the amount of space to be freed
        Return Value:
  • Returns TRUE if successful. If the return value is FALSE, GetLastError( ) returns the actual error code.
    CSCEnumForStats
    BOOL
    CSCEnumForStats(
    LPTSTR lpszShareName,
    LPCSCPROC lpfnEnumProgress,
    DWORD dwContext
    )
  • This API allows the caller to enumerate a share or the entire CSC offline store to obtain salient statistics. It calls the callback function with CSCPROC_REASON_BEGIN before beginning the enumeration, calls it with CSCPROC_REASON_MORE_DATA for each item, and calls it with CSCPROC_REASON_END at the end of the enumeration. For details of the parameters with which the callback is made, see below.
  • Parameters:
      • lpszShareName Share to enumerate. If this is NULL, all shares are enumerated.
      • lpfnEnumProgress Callback function that informs the caller about the progress of the enumeration.
        • The callback is invoked on every file/directory on that part of the share/database. The only significant parameters are dwStatus, dwHintFlags, dwPinCount, dwReason, dwParam1 and dwContext.
        • If the item is a file, dwParam1 is 1, for directories, it is 0.
      • dwContext Context returned during callback
        Return:
  • TRUE if the function is successful, FALSE if some error was encountered, or the operation was aborted. GetLastError( ) returns the error code.
    CSCDoLocalRename
    BOOL
    CSCDoLocalRename(
    IN LPCWSTR lpszSource,
    IN LPCWSTR lpszDestination,
    IN BOOL fReplaceFileIfExists
    )
  • This API does a rename in the offline store. The rename operation can be used to move a file or a directory tree from one place in the hierarchy to another. Its principal use at the present time is for folder redirection of the MyDocuments share. If a directory is being moved and such a directory exists at the destination, the API tries to merge the two trees. If a destination file already exists and the fReplaceFileIfExists parameter is TRUE, then an attempt is made to delete the destination file and put the source file in its place; otherwise an error is returned.
  • Parameters:
      • lpszSource Fully qualified source name (must be UNC). This can be a file or any directory other than the root of a share.
      • lpszDestination Fully qualified destination name (must be UNC). This can only be a directory.
      • fReplaceFileIfExists replace the destination file with the source if it exists.
  • Returns:
  • TRUE if successful, FALSE otherwise. If the API fails, GetLastError returns the specific error code.
    CSCDoEnableDisable
    BOOL
    CSCDoEnableDisable(
    BOOL fEnable
    )

    Routine Description:
      • This routine enables/disables CSC. It should be used only by the control panel applet. Enabling CSC always succeeds. Disabling CSC succeeds only if no files or directories from the local offline store are open at the time of issuing this call.
        Parameters:
      • fEnable enable CSC if TRUE, else disable CSC
        Returns:
  • TRUE if successful, FALSE otherwise. If the API fails, GetLastError returns the specific error code.
    CSCCheckShareOnline
    BOOL
    CSCCheckShareOnline(
    IN LPCWSTR lpszShareName
    )

    Routine Description:
      • This routine checks whether a given share is available online.
        Parameters:
      • lpszShareName Name of the share to check
        Returns:
  • TRUE if successful, FALSE otherwise. If the API fails, GetLastError returns the specific error code.
    CSCTransitionServerOnline
    BOOL
    CSCTransitionServerOnline(
    IN LPCWSTR lpszShareName
    )

    Routine Description:
      • This routine transitions the server for the given share to online.
        Arguments:
      • lpszShareName Name of the share whose server is to be transitioned online
        Returns:
  • TRUE if successful, FALSE if a failure occurs. On error, GetLastError is used to obtain the actual error code.
    CSCEncryptDecryptDatabase
    BOOL
    CSCEncryptDecryptDatabase(
    IN BOOL fEncrypt,
    IN LPCSCPROCW lpfnEnumProgress,
    IN DWORD_PTR dwContext
    )

    Routine Description:
      • This routine is used to encrypt/decrypt the entire offline store in system context. The routine checks that the CSC offline store is hosted on a file system that allows encryption. Only admins can do the conversion.
        Arguments:
      • fEncrypt if TRUE, we encrypt the offline store else we decrypt.
      • lpfnEnumProgress callback proc. The usual sequence of CSCPROC_REASON_BEGIN, CSCPROC_REASON_MORE_DATA, and CSCPROC_REASON_END callbacks is sent when the conversion actually begins. Conversion can fail if a file is open or for some other reason, in which case the second-to-last parameter in the CSCPROC_REASON_MORE_DATA callback has the error code. The third-to-last parameter indicates whether the conversion was complete. Incomplete conversion is not an error condition.
      • dwContext callback context
        Returns:
  • TRUE if no errors encountered.
  • Notes:
      • Theory of operations:
        • The CSC offline store encryption code encrypts all the inodes represented by remote files.
      • Who: Only a user in the admin group can do encryption/decryption. This is checked in the kernel.
      • Which context: Files are encrypted in system context. This allows files to be shared while still being encrypted. This solution protects against the stolen-laptop case.
  • The offline store can have the following status set on it based on the four encryption states:
      • a) FLAG_DATABASESTATUS_UNENCRYPTED
      • b) FLAG_DATABASESTATUS_PARTIALLY_UNENCRYPTED
      • c) FLAG_DATABASESTATUS_ENCRYPTED
      • d) FLAG_DATABASESTATUS_PARTIALLY_ENCRYPTED
  • In states a) and b), new files are created unencrypted. In states c) and d), new files are created encrypted.
  • At the beginning of the conversion, the offline store state is marked to the appropriate XX_PARTIAL_XX state. At the end, if all goes well, it is transitioned to the final state.
  • At the time of enabling CSC, if the offline store state is XX_PARTIAL_XX, the kernel code tries to complete the conversion to the appropriate final state.
    LPCSCPROC
    DWORD (*LPCSCPROC)(
    LPTSTR lpszName,
    DWORD dwStatus,
    DWORD dwHintFlags,
    DWORD dwPinCount,
    WIN32_FIND_DATA *lpFind32,
    DWORD dwReason,
    DWORD dwParam1,
    DWORD dwParam2,
    DWORD dwContext
    )
  • Parameters:
      • lpszName fully qualified UNC path
      • dwStatus status of the entry (see FLAG_CSC_COPY_STATUS_xxx)
      • dwHintFlags hint flags on the entry (see FLAG_CSC_HINT_xxx)
      • dwPinCount pin count of the entry
      • lpFind32 WIN32_FIND_DATA structure of the local copy in the offline store.
        • This may be NULL if the callback is CSC_REASON_BEGIN and CSC_REASON_END for a share.
        • During merging this parameter will be non-NULL for CSC_REASON_BEGIN. The cFileName member of this structure will contain the mapped drive letter to the share, through which all net access should be performed.
      • dwReason callback reason (see CSCPROC_REASON_xxx)
      • dwParam1 contents dependent on dwReason above:
        • CSCPROC_REASON_BEGIN: if merging is in progress, a non-zero value of this parameter indicates that this item conflicts with the remote item.
        • CSCPROC_REASON_MORE_DATA: contains the low-order DWORD of the amount of data transferred.
      • dwParam2 contents dependent on dwReason above:
        • CSCPROC_REASON_MORE_DATA: contains the high-order DWORD of the amount of data transferred.
        • CSCPROC_REASON_END: contains error codes as defined in winerror.h. If it is ERROR_SUCCESS, then the operation that was started with CSCPROC_REASON_BEGIN completed successfully.
      • dwContext context passed in by the caller while calling the API
        Return Value:
  • See CSCPROC_RETURN_xxx.
  • File/Folder Status Bit Definitions:
      • FLAG_CSC_COPY_STATUS_DATA_LOCALLY_MODIFIED
      • FLAG_CSC_COPY_STATUS_ATTRIB_LOCALLY_MODIFIED
      • FLAG_CSC_COPY_STATUS_TIME_LOCALLY_MODIFIED
      • FLAG_CSC_COPY_STATUS_STALE
      • FLAG_CSC_COPY_STATUS_LOCALLY_DELETED
      • FLAG_CSC_COPY_STATUS_SPARSE
      • FLAG_CSC_COPY_STATUS_ORPHAN
      • FLAG_CSC_COPY_STATUS_SUSPECT
      • FLAG_CSC_COPY_STATUS_LOCALLY_CREATED
      • FLAG_CSC_USER_ACCESS_MASK
      • FLAG_CSC_GUEST_ACCESS_MASK
      • FLAG_CSC_OTHER_ACCESS_MASK
        Share Status Bit Definitions: (Read only)
      • FLAG_CSC_SHARE_STATUS_MODIFIED_OFFLINE
      • FLAG_CSC_SHARE_STATUS_CONNECTED
      • FLAG_CSC_SHARE_STATUS_FILES_OPEN
      • FLAG_CSC_SHARE_STATUS_FINDS_IN_PROGRESS
      • FLAG_CSC_SHARE_STATUS_DISCONNECTED_OP
      • FLAG_CSC_SHARE_MERGING
        Hint flags Definitions:
      • FLAG_CSC_HINT_PIN_USER When this bit is set, the item is being pinned for the user. Note that there is only one pincount allotted for user.
      • FLAG_CSC_HINT_PIN_INHERIT_USER When this flag is set on a folder, all descendents subsequently created in this folder get pinned for the user.
      • FLAG_CSC_HINT_PIN_INHERIT_SYSTEM When this flag is set on a folder, all descendents subsequently created in this folder get pinned for the system.
      • FLAG_CSC_HINT_CONSERVE_BANDWIDTH When this flag is set on a folder, for executables and other related files, CSC tries to conserve bandwidth by not flowing opens when these files are fully cached.
        CSC callback function related definitions:
        Definitions for callback reasons:
      • CSCPROC_REASON_BEGIN
      • CSCPROC_REASON_MORE_DATA
      • CSCPROC_REASON_END
        Definitions for callback return values:
      • CSCPROC_RETURN_CONTINUE
      • CSCPROC_RETURN_SKIP
      • CSCPROC_RETURN_ABORT
      • CSCPROC_RETURN_FORCE_INWARD //applies only while merging
      • CSCPROC_RETURN_FORCE_OUTWARD //applies only while merging
  • The following APIs are available to manage the CSC settings for SMB shares for Win2K and beyond.
    NetShareSetInfo
    This API is used to set the CSC attributes of a server share.
    NET_API_STATUS
    NetShareSetInfo (
    LPTSTR servername,
    LPTSTR sharename,
    DWORD level,
    LPBYTE buf,
    LPDWORD parm_err
    );

    Parameters:
      • Servername: Pointer to a Unicode string containing the name of the remote server on which the function is to execute. A NULL pointer or string specifies the local computer.
      • ShareName: Pointer to a Unicode string containing the network name of the share to set information on.
  • Level: Has value 1007, indicating that the buf parameter points to a SHARE_INFO_1007 structure (below)
    NetShareGetInfo
    This API is used to get the CSC attributes of a server share.
    NET_API_STATUS
    NetShareGetInfo (
    LPTSTR servername,
    LPTSTR sharename,
    DWORD level,
    LPBYTE *bufptr
    );

    Parameters:
      • Servername: Pointer to a Unicode string containing the name of the remote server on which the function is to execute. A NULL pointer or string specifies the local computer.
      • Sharename: Pointer to a Unicode string containing the network name of the share to get information on.
  • Level: Has value 1007, indicating that level 1007 information should be returned, and bufptr should be set to point to resulting SHARE_INFO_1007 structure. Bufptr should be freed with NetApiBufferFree( ) when no longer needed.
    SHARE_INFO_1007
    typedef struct _SHARE_INFO_1007 {
    DWORD shi1007_flags;
    LPTSTR shi1007_AlternateDirectoryName;
    } SHARE_INFO_1007, *PSHARE_INFO_1007,
    *LPSHARE_INFO_1007;

    shi1007_flags:
      • CSC_CACHEABLE indicates that the client can safely cache files on this directory for off-line access
      • CSC_NOFLOWOPS indicates that the client need not send opens or other operations to the server when accessing its locally cached copies of files in this share
      • CSC_AUTO_INWARD indicates that files changed on the server should automatically replace cached copies on the client
      • CSC_AUTO_OUTWARD indicates that files cached on the client should automatically replace copies on the server
        AlternateDirectoryName
      • If set, this is the name of the alternate directory where COW files should be written. See the (to be written) COW specification for details.
  • Various methodologies in accordance with the subject invention will now be described via a series of acts. It is to be understood and appreciated that the present invention is not limited by the order of acts, as some acts may, in accordance with the present invention, occur in different orders and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all illustrated acts may be required to implement a methodology in accordance with the present invention.
  • Turning now to FIGS. 11-14, there are illustrated flow diagrams of exemplary methodologies that facilitate supporting connection state transitions at the directory level (e.g., DFS link) and partial name space offline in accordance with an aspect of the present invention.
  • FIG. 11 depicts a process 1100 that facilitates maintaining access to remote files (e.g., server-based) during any period of disconnect from the server or network. In particular, a client can be connected to a network or remote server(s) at 1110. While connected to the network, one or more file objects, directories, and/or any other data files can be selectively cached to the client's local database or data store (e.g., memory) at 1120. The selective caching can be based at least in part upon user preferences. For example, file objects that have been accessed while online can be cached to the client's hard drive. Alternatively or in addition, the client can infer which file objects are more likely to be desired for caching based on the user's current online activity. Such file objects can include those files that have been accessed as well as other files that are related thereto. This can be determined in part by file location (e.g., related directory), metadata associated with the respective files, past client behavior (e.g., files were accessed at a similar time in the past), and the like. Selective caching can also be facilitated by learning and/or employing training systems and techniques by the client or end-user.
  • In addition to the file objects, properties associated with the respective file objects can also be cached to facilitate security measures, for example. This includes the object access rights, share access rights and the pertinent DFS link. Moreover, directory rights can be cached and physical share cache configurations at the DFS link are honored in accordance with an aspect of the present invention. Cache configurations include manual caching and auto caching. For instance, if the physical share is set to be no caching, the files under the part of the logical namespace hosted on the physical share will not be cached.
  • At 1130, the client can be disconnected from the server either intentionally or unintentionally. When disconnected from the server, the client, or the user, can continue to work on the file as illustrated in FIG. 12, infra, at 1210. In fact, the user may not even be aware that the connection to the network has been lost because file and/or directory access has not been interrupted. That is, despite the state transition from online to offline, the client can still perform computer operations with respect to remote-based files and directories as if it were connected to the remote server.
  • At 1220, any modifications or changes to the document can be saved or stored in the local cache on the client. When the connection to the server resumes, the client version of the file can be pushed to the server if no conflict exists between the client's version and the server version.
  • In practice, for example, when an open request is sent to a client side caching (CSC) component, it detects whether the file is in conflict with the server version. If a conflict is detected, the caching component satisfies the request with only the local handle, and subsequent file I/O operations are performed on the local cache. This feature facilitates deferred synchronization of the files in the background after the path is transitioned online, since the user continues to see the file that he/she has been working on during the offline period. Therefore, the particular file is operated on in the offline state while the path is still online. However, if no conflict occurs between the local and server copies, the request can be sent to the server and handled in a manner in accordance with the present invention. It should be appreciated that the client maintains a persistent cache, which must be flushed out to the server before the handle closes. This ensures that the existing file semantics continue to work.
  • FIG. 13 illustrates a method that facilitates bandwidth reduction and/or conservation in accordance with an aspect of the present invention. At 1310, a request can be submitted for a file object, for example. Instead of querying the server, the client cache is searched. If the file object is found in the client cache, then the client cache can satisfy the request. Thus, the server is not accessed and network traffic is mitigated. When a connection is slow, this method also facilitates conserving the available bandwidth for instances where only the server can fulfill the request(s). It should be appreciated that the client version overrides the server version in instances of conflict and availability. That is, the client version can be used to satisfy requests even if the server has the same copy unbeknownst to the user or client since any file accessed from the client cache will appear as if it came from the server, regardless of the connection state.
  • Referring now to FIGS. 14 and 15, there are illustrated exemplary APIs for create requests submitted while online with a remote location and offline with a remote location, respectively. As shown in FIG. 14, the API 1400 involves receiving the create request from an I/O manager at 1410. At 1420, a pre-process handler of a CSC surrogate provider is called. Following therefrom at 1430, the CSC surrogate provider finds or creates a logical namespace structure if part of the logical namespace on which a target of the create request resides is already offline. At 1440, the create request is passed to a DFS surrogate provider to translate the logical path to a physical server share. At 1450, the create request is passed to a redirector component (e.g., RDBSS) to allow a particular redirector (e.g., SMB, Webdav, NFS) claim the physical path. At 1460, a post-process handler of the CSC surrogate provider can be called again to express one of either no interest or interest to cache a file object requested by the create request.
  • The API 1500 shown in FIG. 15 involves receiving the create request from an I/O manager at 1510 and calling a pre-process handler of a CSC surrogate provider to handle the request by mapping the logical path to local cache data since redirectors are unavailable to claim the path at 1520. With respect to FIG. 15, the CSC surrogate provider handles the request since the DFS component and redirectors are not available to the CSC when offline or disconnected from the remote location.
  • In order to provide additional context for various aspects of the present invention, FIG. 16 and the following discussion are intended to provide a brief, general description of a suitable operating environment 1610 in which various aspects of the present invention may be implemented. While the invention is described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices, those skilled in the art will recognize that the invention can also be implemented in combination with other program modules and/or as a combination of hardware and software.
  • Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The operating environment 1610 is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Other well-known computer systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include the above systems or devices, and the like.
  • With reference to FIG. 16, an exemplary environment 1610 for implementing various aspects of the invention includes a computer 1612. The computer 1612 includes a processing unit 1614, a system memory 1616, and a system bus 1618. The system bus 1618 couples the system components including, but not limited to, the system memory 1616 to the processing unit 1614. The processing unit 1614 can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as the processing unit 1614.
  • The system bus 1618 can be any of several types of bus structure(s) including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any of a variety of available bus architectures including, but not limited to, 8-bit bus, Industrial Standard Architecture (ISA), Micro-Channel Architecture (MSA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Universal Serial Bus (USB), Advanced Graphics Port (AGP), Personal Computer Memory Card International Association bus (PCMCIA), and Small Computer Systems Interface (SCSI).
  • The system memory 1616 includes volatile memory 1620 and nonvolatile memory 1622. The basic input/output system (BIOS), containing the basic routines to transfer information between elements within the computer 1612, such as during start-up, is stored in nonvolatile memory 1622. By way of illustration, and not limitation, nonvolatile memory 1622 can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. Volatile memory 1620 includes random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
  • Computer 1612 also includes removable/nonremovable, volatile/nonvolatile computer storage media. FIG. 16 illustrates, for example, a disk storage 1624. Disk storage 1624 includes, but is not limited to, devices like a magnetic disk drive, floppy disk drive, tape drive, Jaz drive, Zip drive, LS-100 drive, flash memory card, or memory stick. In addition, disk storage 1624 can include storage media separately or in combination with other storage media including, but not limited to, an optical disk drive such as a compact disk ROM device (CD-ROM), CD recordable drive (CD-R Drive), CD rewritable drive (CD-RW Drive) or a digital versatile disk ROM drive (DVD-ROM). To facilitate connection of the disk storage devices 1624 to the system bus 1618, a removable or non-removable interface is typically used, such as interface 1626.
  • It is to be appreciated that FIG. 16 describes software that acts as an intermediary between users and the basic computer resources described in suitable operating environment 1610. Such software includes an operating system 1628. Operating system 1628, which can be stored on disk storage 1624, acts to control and allocate resources of the computer system 1612. System applications 1630 take advantage of the management of resources by operating system 1628 through program modules 1632 and program data 1634 stored either in system memory 1616 or on disk storage 1624. It is to be appreciated that the present invention can be implemented with various operating systems or combinations of operating systems.
  • A user enters commands or information into the computer 1612 through input device(s) 1636. Input devices 1636 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like. These and other input devices connect to the processing unit 1614 through the system bus 1618 via interface port(s) 1638. Interface port(s) 1638 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB). Output device(s) 1640 use some of the same type of ports as input device(s) 1636. Thus, for example, a USB port may be used to provide input to computer 1612 and to output information from computer 1612 to an output device 1640. Output adapter 1642 is provided to illustrate that there are some output devices 1640 like monitors, speakers, and printers among other output devices 1640 that require special adapters. The output adapters 1642 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 1640 and the system bus 1618. It should be noted that other devices and/or systems of devices provide both input and output capabilities such as remote computer(s) 1644.
  • Computer 1612 can operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 1644. The remote computer(s) 1644 can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor-based appliance, a peer device or other common network node and the like, and typically includes many or all of the elements described relative to computer 1612. For purposes of brevity, only a memory storage device 1646 is illustrated with remote computer(s) 1644. Remote computer(s) 1644 is logically connected to computer 1612 through a network interface 1648 and then physically connected via communication connection 1650. Network interface 1648 encompasses communication networks such as local-area networks (LAN) and wide-area networks (WAN). LAN technologies include Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), Ethernet/IEEE 802.3, Token Ring/IEEE 802.5 and the like. WAN technologies include, but are not limited to, point-to-point links, circuit switching networks like Integrated Services Digital Networks (ISDN) and variations thereon, packet switching networks, and Digital Subscriber Lines (DSL).
  • Communication connection(s) 1650 refers to the hardware/software employed to connect the network interface 1648 to the bus 1618. While communication connection 1650 is shown for illustrative clarity inside computer 1612, it can also be external to computer 1612. The hardware/software necessary for connection to the network interface 1648 includes, for exemplary purposes only, internal and external technologies such as modems (including regular telephone-grade modems, cable modems, and DSL modems), ISDN adapters, and Ethernet cards.
  • What has been described above includes examples of the present invention. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the present invention, but one of ordinary skill in the art may recognize that many further combinations and permutations of the present invention are possible. Accordingly, the present invention is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. Furthermore, to the extent that the term “includes” is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term “comprising” as “comprising” is interpreted when employed as a transitional word in a claim.

Claims (43)

1. A remote file system, comprising:
one or more surrogate providers comprising at least a first surrogate provider that selectively caches at least a subset of data from at least one online server; and
one or more client computers that receive and store the subset of data to their respective local databases for offline use by the respective client computers to facilitate a seamless operation of data retrieval across connectivity states for a user.
2. The system of claim 1, the first surrogate provider is a client side caching (CSC) component that supports connection state transitions at the directory level on a logical namespace.
3. The system of claim 1, further comprising an MUP that supports the one or more surrogate providers at the directory level to handle incoming requests from a user.
4. The system of claim 1, further comprising a second surrogate provider that translates a logical path into a physical path.
5. The system of claim 4, the second surrogate provider is a DFS component that points to at least one physical share or at least one physical server.
6. The system of claim 1, wherein selectively caching comprises automatic caching and manual caching based at least in part upon user preferences.
7. The system of claim 1, the data comprises file access parameters comprising at least one of object access rights and share access rights, the file access parameters corresponding to a cached file object.
8. The system of claim 2, the CSC component caches the logical namespace of a file request such that when accessed during an offline state, the file is presented to a user as if it resides at a remote server location.
9. The system of claim 2, the CSC component maintains connection based data structures in logical namespace, the data structures comprising a server connection structure (SrvCall), a share mapping structure (NetRoot), and a per-user share mapping structure (VNetRoot) to facilitate handling at least one of create, read, and write requests.
10. The system of claim 2, the CSC component creates file based data structures and shares the data structures with one or more redirectors to facilitate handling at least one of create, read, and write requests, the one or more redirectors operatively connected to one or more network providers.
11. The system of claim 1, the first surrogate provider comprises a pre-process handler and a post-process handler which facilitates responding to any one of create, read, and write requests.
12. The system of claim 2, the surrogate providers determine who owns a path request, whereby the CSC component makes an initial determination before allowing the DFS component to examine the path to identify any DFS links.
13. The system of claim 12, the CSC component operates cooperatively with the DFS component to determine whether DFS links are present in the path while in an online connection state.
14. The system of claim 2, the CSC component determines whether to cache an object file associated with the path.
15. The system of claim 2, further comprising a CSC agent that pings the server to determine whether the server is online.
16. The system of claim 2, the CSC component tracking substantially all DFS links included in the logical namespace persistently to transition a connection state at a proper logical directory which facilitates minimizing a scope of offlineness to a physical share.
17. The system of claim 1, the server broadcasts to substantially all CSC agents that it is online to mitigate latency.
18. The system of claim 1, the client computer accesses remote files offline by retrieving them from their respective local databases if file access parameters are satisfied.
19. The system of claim 1, the first surrogate provider keeps track of DFS links corresponding to every object, wherein the DFS links are physical shares.
20. The system of claim 1, the first surrogate provider determines whether the request against a specific object should be carried out offline or not, before returning to MUP, by looking at a corresponding physical share connection state.
21. A method that facilitates maintaining access to remote files (e.g., server-based) during any period of disconnect from a remote location, comprising:
providing one or more client computers, each client computer comprising a local data store; and
selectively caching one or more file objects from at least one online server to the respective data store for subsequent offline use by the client computer.
22. The method of claim 21, further comprising maintaining access to the one or more files cached while offline.
23. The method of claim 21, further comprising caching one or more file access parameters that correspond to the one or more cached file objects to permit client access to the file objects while offline.
24. The method of claim 21, when connected to the remote location, retrieving a file object from the local data store to mitigate bandwidth usage with respect to accessing the remote location despite being connected to the remote location.
25. The method of claim 21, further comprising:
mapping a logical namespace to a physical namespace to facilitate keeping track of cached files and enumerating directories as files are modified or deleted locally at the client or at the remote location; and
tracking connection states and version of physical shares that correspond to at least one object along a path that facilitates updating a tree connect structure in a continuous manner.
26. A method that facilitates seamless operation across connectivity states between at least one client and at least one remote server, comprising:
providing at least a first surrogate provider that receives one or more I/O requests from an MUP, the first surrogate provider comprising a pre-process handler and a post-process handler that facilitate handling the requests at a directory level, the first surrogate provider examining a logical path of the request; and
passing the one or more requests to a second surrogate provider that is operational in an online state, the second surrogate provider translating the logical path of the request into a physical path; and
generating one or more data structures for each respective I/O request that facilitates determining whether the first surrogate provider wants to own or cache a file object related to the request.
27. The method of claim 26, further comprising:
processing the request using the pre-process handler to determine whether the request was handled by at least one of a network provider and the first surrogate provider;
optionally calling the post-process handler after the request is handled to handle the request again;
optionally passing the request to a second surrogate provider, the second surrogate provider examines the request and maps the request path to a physical path at the directory level; and
optionally passing the request to one or more redirectors to allow the one or more redirectors to claim ownership of a file object requested.
28. The method of claim 26, the request is a create request.
29. The method of claim 26, the first surrogate provider is a CSC component and the second surrogate provider is a DFS component, the DFS component identifying DFS links in cooperation with the CSC component only while online.
30. The method of claim 26, the request being one of a read and a write operation request.
31. The method of claim 30, the first surrogate provider is provided with a buffering state of a file before substantially every read from a persistent cache to a client application or before substantially every write is executed, respectively.
32. The method of claim 26, employing the first surrogate provider to keep track of DFS links corresponding to every object, wherein the DFS links are physical shares.
33. The method of claim 26, employing the first surrogate provider to determine whether the request against a specific object should be carried out offline or not, before returning to MUP, by looking at a corresponding physical share connection state.
34. An API that facilitates satisfying a create request on an online remote file system comprising:
receive the create request from an I/O manager;
call a pre-process handler of a CSC surrogate provider;
find or create a logical namespace structure if part of the logical namespace on which a target of the create request resides is already offline;
pass the create request to a DFS surrogate provider to translate the logical path to a physical server share;
pass the create request to a redirector component to allow a redirector to claim the physical path; and
call a post-process handler of the CSC surrogate provider to express one of either no interest or interest to cache a file object requested by the create request.
35. An API that facilitates satisfying a create request on a client computer when disconnected from a remote file system comprising:
receive the create request from an I/O manager; and
call a pre-process handler of a CSC surrogate provider to handle the request by mapping the logical path to local cache data since redirectors are unavailable to claim the path.
36. A system that facilitates maintaining access to remote files (e.g., server-based) during any period of disconnect from a remote location, comprising:
means for providing one or more client computers, each client computer comprising a local data store; and
means for selectively caching one or more file objects from at least one online server to the respective data store for subsequent offline use by the client computer.
37. The system of claim 36, further comprising means for maintaining access to the one or more files cached while offline.
38. The system of claim 36, further comprising means for caching one or more file access parameters that correspond to the one or more cached file objects to permit client access to the file objects while offline.
39. The system of claim 36, when connected to the remote location, means for retrieving a file object from the local data store to mitigate bandwidth usage with respect to accessing the remote location despite being connected to the remote location.
40. The system of claim 36, further comprising:
means for mapping a logical namespace to a physical namespace to facilitate keeping track of cached files and enumerating directories as files are modified or deleted locally at the client or at the remote location; and
means for tracking connection states and version of physical shares that correspond to at least one object along a path that facilitates updating a tree connect structure in a continuous manner.
41. A system that facilitates seamless operation across connectivity states between at least one client and at least one remote server, comprising:
means for providing at least a first surrogate provider that receives one or more I/O requests from an MUP, the first surrogate provider comprising a pre-process handler and a post-process handler that facilitate handling the requests at a directory level, the first surrogate provider examining a logical path of the request; and
means for passing the one or more requests to a second surrogate provider that is operational in an online state, the second surrogate provider translating the logical path of the request into a physical path; and
means for generating one or more data structures for each respective I/O request that facilitates determining whether the first surrogate provider wants to own or cache a file object related to the request.
42. A data packet adapted to be transmitted between two or more computer processes facilitating extracting data from messages, the data packet comprising:
information associated with providing one or more client computers, each client computer comprising a local data store, selectively caching one or more file objects from at least one online server to the respective data store for subsequent offline use by the client computer, and caching one or more file access parameters that correspond to the one or more cached file objects to permit client access to the file objects while offline in connection with seamless connection state transitions at a directory level.
43. A computer readable medium storing computer executable components of claim 1.
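The per-physical-share connection-state check recited in claims 16, 20, and 33 can be sketched as follows. This is an illustrative model only; the function names, DFS-link map, and share names are hypothetical, and the real provider operates on kernel data structures rather than dictionaries.

```python
ONLINE, OFFLINE = "online", "offline"

def route_request(logical_path, dfs_links, share_states):
    """Map a logical path to its physical share and pick offline/online handling."""
    # Longest-prefix match against the tracked DFS links (physical shares),
    # since DFS links are tracked for every object along the path.
    link = max((l for l in dfs_links if logical_path.startswith(l)),
               key=len, default=None)
    if link is None:
        raise LookupError(f"no DFS link tracked for {logical_path}")
    physical = dfs_links[link]
    # The connection state of the *physical* share, not the whole server,
    # decides offline handling, which bounds the scope of offlineness.
    return physical, share_states.get(physical, OFFLINE)

dfs_links = {"\\\\logical\\docs": "\\\\srv1\\docs$",
             "\\\\logical\\build": "\\\\srv2\\build$"}
share_states = {"\\\\srv1\\docs$": OFFLINE, "\\\\srv2\\build$": ONLINE}
assert route_request("\\\\logical\\docs\\a.txt",
                     dfs_links, share_states) == ("\\\\srv1\\docs$", OFFLINE)
assert route_request("\\\\logical\\build\\x.obj",
                     dfs_links, share_states)[1] == ONLINE
```

Note how one share can be offline while another on the same namespace stays online, which is the directory-level granularity the claims describe.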
US10/692,212 2003-10-23 2003-10-23 Persistent caching directory level support Abandoned US20050091226A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US10/692,212 US20050091226A1 (en) 2003-10-23 2003-10-23 Persistent caching directory level support
US11/064,255 US7698376B2 (en) 2003-10-23 2005-02-22 Persistent caching directory level support
US11/064,235 US7702745B2 (en) 2003-10-23 2005-02-22 Persistent caching directory level support

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US11/064,255 Division US7698376B2 (en) 2003-10-23 2005-02-22 Persistent caching directory level support
US11/064,235 Division US7702745B2 (en) 2003-10-23 2005-02-22 Persistent caching directory level support

Publications (1)

Publication Number Publication Date
US20050091226A1 true US20050091226A1 (en) 2005-04-28

Family

ID=34522055

Family Applications (3)

Application Number Title Priority Date Filing Date
US10/692,212 Abandoned US20050091226A1 (en) 2003-10-23 2003-10-23 Persistent caching directory level support
US11/064,255 Expired - Fee Related US7698376B2 (en) 2003-10-23 2005-02-22 Persistent caching directory level support
US11/064,235 Expired - Fee Related US7702745B2 (en) 2003-10-23 2005-02-22 Persistent caching directory level support

US20030028695A1 (en) * 2001-05-07 2003-02-06 International Business Machines Corporation Producer/consumer locking system for efficient replication of file data
US20030066065A1 (en) * 2001-10-02 2003-04-03 International Business Machines Corporation System and method for remotely updating software applications
US6697849B1 (en) * 1999-08-13 2004-02-24 Sun Microsystems, Inc. System and method for caching JavaServer Pages™ responses
US20040064570A1 (en) * 1999-10-12 2004-04-01 Theron Tock System and method for enabling a client application to operate offline from a server
US20050091340A1 (en) * 2003-10-01 2005-04-28 International Business Machines Corporation Processing interactive content offline

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2026474A (en) * 1931-10-30 1935-12-31 Bbc Brown Boveri & Cie Generator
US5452447A (en) * 1992-12-21 1995-09-19 Sun Microsystems, Inc. Method and apparatus for a caching file server
EP0629960B1 (en) * 1993-06-17 2000-05-24 Sun Microsystems, Inc. Extendible file system
US6094684A (en) * 1997-04-02 2000-07-25 Alpha Microsystems, Inc. Method and apparatus for data communication
US6163856A (en) * 1998-05-29 2000-12-19 Sun Microsystems, Inc. Method and apparatus for file system disaster recovery
US6754696B1 (en) * 1999-03-25 2004-06-22 Microsoft Corporation Extended file system
US7296274B2 (en) * 1999-11-15 2007-11-13 Sandia National Laboratories Method and apparatus providing deception and/or altered execution of logic in an information system
US20020091340A1 (en) * 2000-11-13 2002-07-11 Robbins Daniel J. Vibration device for use with a resting unit
US7788335B2 (en) * 2001-01-11 2010-08-31 F5 Networks, Inc. Aggregated opportunistic lock and aggregated implicit lock management for locking aggregated files in a switched file system
US7409420B2 (en) * 2001-07-16 2008-08-05 Bea Systems, Inc. Method and apparatus for session replication and failover

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6047356A (en) * 1994-04-18 2000-04-04 Sonic Solutions Method of dynamically allocating network node memory's partitions for caching distributed files
US5878213A (en) * 1996-02-15 1999-03-02 International Business Machines Corporation Methods, systems and computer program products for the synchronization of time coherent caching system
US6003087A (en) * 1996-02-15 1999-12-14 International Business Machines Corporation CGI response differencing communication system
US5867661A (en) * 1996-02-15 1999-02-02 International Business Machines Corporation Method and apparatus of using virtual sockets for reducing data transmitted over a wireless communication link between a client web browser and a host web server using a standard TCP protocol
US6065043A (en) * 1996-03-14 2000-05-16 Domenikos; Steven D. Systems and methods for executing application programs from a memory device linked to a server
US6018619A (en) * 1996-05-24 2000-01-25 Microsoft Corporation Method, system and apparatus for client-side usage tracking of information server systems
US6185601B1 (en) * 1996-08-02 2001-02-06 Hewlett-Packard Company Dynamic load balancing of a network of client and server computers
US6026474A (en) * 1996-11-22 2000-02-15 Mangosoft Corporation Shared client-side web caching using globally addressable memory
US6096096A (en) * 1996-12-13 2000-08-01 Silicon Graphics, Inc. Web-site delivery
US6446088B1 (en) * 1997-04-01 2002-09-03 The Board Of Trustees Of The University Of Illinois Application-directed variable-granularity caching and consistency management
US5907678A (en) * 1997-05-07 1999-05-25 International Business Machines Corporation Client/server system in which protocol caches for multiple sessions are selectively copied into a common checkpoint cache upon receiving a checkpoint request
US6453343B1 (en) * 1997-05-07 2002-09-17 International Business Machines Corporation Methods, systems and computer program products for maintaining a common checkpoint cache for multiple sessions between a single client and server
US6119153A (en) * 1998-04-27 2000-09-12 Microsoft Corporation Accessing content via installable data sources
US6697849B1 (en) * 1999-08-13 2004-02-24 Sun Microsystems, Inc. System and method for caching JavaServer Pages™ responses
US20040064570A1 (en) * 1999-10-12 2004-04-01 Theron Tock System and method for enabling a client application to operate offline from a server
US20020083148A1 (en) * 2000-05-12 2002-06-27 Shaw Venson M. System and method for sender initiated caching of personalized content
US20020078142A1 (en) * 2000-12-20 2002-06-20 Microsoft Corporation Method and system for enabling offline detection of software updates
US20020109718A1 (en) * 2001-02-14 2002-08-15 Mansour Peter M. Platform-independent distributed user interface server architecture
US20030028695A1 (en) * 2001-05-07 2003-02-06 International Business Machines Corporation Producer/consumer locking system for efficient replication of file data
US20030066065A1 (en) * 2001-10-02 2003-04-03 International Business Machines Corporation System and method for remotely updating software applications
US20050091340A1 (en) * 2003-10-01 2005-04-28 International Business Machines Corporation Processing interactive content offline

Cited By (70)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7349943B2 (en) * 2003-03-12 2008-03-25 Microsoft Corporation Protocol-independent client-side caching system and method
US20040181576A1 (en) * 2003-03-12 2004-09-16 Microsoft Corporation Protocol-independent client-side caching system and method
US20050222895A1 (en) * 2004-04-03 2005-10-06 Altusys Corp Method and Apparatus for Creating and Using Situation Transition Graphs in Situation-Based Management
US20050222810A1 (en) * 2004-04-03 2005-10-06 Altusys Corp Method and Apparatus for Coordination of a Situation Manager and Event Correlation in Situation-Based Management
US20050228763A1 (en) * 2004-04-03 2005-10-13 Altusys Corp Method and Apparatus for Situation-Based Management
US8694475B2 (en) * 2004-04-03 2014-04-08 Altusys Corp. Method and apparatus for situation-based management
US20050235012A1 (en) * 2004-04-15 2005-10-20 Microsoft Corporation Offline source code control
US7779387B2 (en) * 2004-04-15 2010-08-17 Microsoft Corporation Offline source code control
US10311455B2 (en) * 2004-07-08 2019-06-04 One Network Enterprises, Inc. Computer program product and method for sales forecasting and adjusting a sales forecast
US20060177007A1 (en) * 2005-02-07 2006-08-10 Shahriar Vaghar Caching message information in an integrated communication system
US7907704B2 (en) 2005-02-07 2011-03-15 Avaya Inc. Caching user information in an integrated communication system
US8233594B2 (en) 2005-02-07 2012-07-31 Avaya Inc. Caching message information in an integrated communication system
US8391461B2 (en) 2005-02-07 2013-03-05 Avaya Inc. Caching user information in an integrated communication system
US20060177008A1 (en) * 2005-02-07 2006-08-10 David Forney Extensible diagnostic tool
US20080133548A1 (en) * 2005-02-07 2008-06-05 Adomo, Inc. Caching User Information in an Integrated Communication System
US8559605B2 (en) 2005-02-07 2013-10-15 Avaya Inc. Extensible diagnostic tool
US8175233B2 (en) * 2005-02-07 2012-05-08 Avaya Inc. Distributed cache system
US20060177023A1 (en) * 2005-02-07 2006-08-10 Shahriar Vaghar Distributed cache system
US20060177012A1 (en) * 2005-02-07 2006-08-10 David Forney Networked voicemail
US8059793B2 (en) 2005-02-07 2011-11-15 Avaya Inc. System and method for voicemail privacy
US7724880B2 (en) 2005-02-07 2010-05-25 Avaya Inc. Networked voicemail
US20110131287A1 (en) * 2005-02-07 2011-06-02 Avaya, Inc. Caching user information in an integrated communication system
US20060177005A1 (en) * 2005-02-07 2006-08-10 Anthony Shaffer System and method for voicemail privacy
US7808980B2 (en) 2005-02-07 2010-10-05 Avaya Inc. Integrated multi-media communication system
US7885275B2 (en) 2005-02-07 2011-02-08 Avaya Inc. Integrating messaging server directory service with a communication system voice mail message interface
US20070239789A1 (en) * 2006-03-28 2007-10-11 Microsoft Corporation Active cache offline sharing of project files
US7698280B2 (en) * 2006-03-28 2010-04-13 Microsoft Corporation Active cache offline sharing of project files
US20100146017A1 (en) * 2006-04-27 2010-06-10 Canon Kabushiki Kaisha Information processing apparatus and information processing method
US8417742B2 (en) * 2006-04-27 2013-04-09 Canon Kabushiki Kaisha Information processing apparatus and information processing method
US20070255760A1 (en) * 2006-04-27 2007-11-01 Canon Kabushiki Kaisha Information processing apparatus and information processing method
US7574435B2 (en) 2006-05-03 2009-08-11 International Business Machines Corporation Hierarchical storage management of metadata
US20070260592A1 (en) * 2006-05-03 2007-11-08 International Business Machines Corporation Hierarchical storage management of metadata
US20080155082A1 (en) * 2006-12-22 2008-06-26 Fujitsu Limited Computer-readable medium storing file delivery program, file delivery apparatus, and distributed file system
US20080198980A1 (en) * 2007-02-21 2008-08-21 Jens Ulrik Skakkebaek Voicemail filtering and transcription
US8107598B2 (en) 2007-02-21 2012-01-31 Avaya Inc. Voicemail filtering and transcription
US8160212B2 (en) 2007-02-21 2012-04-17 Avaya Inc. Voicemail filtering and transcription
US8064576B2 (en) 2007-02-21 2011-11-22 Avaya Inc. Voicemail filtering and transcription
US8488751B2 (en) 2007-05-11 2013-07-16 Avaya Inc. Unified messenging system and method
US8065381B2 (en) 2008-01-25 2011-11-22 Microsoft Corporation Synchronizing for directory changes performed while offline
US20090193107A1 (en) * 2008-01-25 2009-07-30 Microsoft Corporation Synchronizing for Directory Changes Performed While Offline
US8639734B1 (en) * 2008-03-31 2014-01-28 Symantec Operating Corporation Use of external information about a file to determine virtualization
US9076012B2 (en) * 2009-01-15 2015-07-07 Microsoft Technology Licensing, Llc Access requests with cache intentions
US20140244688A1 (en) * 2009-01-15 2014-08-28 Microsoft Corporation Access requests with cache intentions
US9357029B2 (en) 2009-01-15 2016-05-31 Microsoft Technology Licensing, Llc Access requests with cache intentions
US10225363B2 (en) 2009-05-02 2019-03-05 Citrix Systems, Inc. Methods and systems for providing a consistent profile to overlapping user sessions
US9451044B2 (en) * 2009-05-02 2016-09-20 Citrix Systems, Inc. Methods and systems for providing a consistent profile to overlapping user sessions
US20140237049A1 (en) * 2009-05-02 2014-08-21 Citrix Systems, Inc. Methods and systems for providing a consistent profile to overlapping user sessions
US9588803B2 (en) 2009-05-11 2017-03-07 Microsoft Technology Licensing, Llc Executing native-code applications in a browser
US10824716B2 (en) 2009-05-11 2020-11-03 Microsoft Technology Licensing, Llc Executing native-code applications in a browser
US9860333B2 (en) 2009-12-16 2018-01-02 International Business Machines Corporation Scalable caching of remote file data in a cluster file system
US10659554B2 (en) 2009-12-16 2020-05-19 International Business Machines Corporation Scalable caching of remote file data in a cluster file system
US9176980B2 (en) * 2009-12-16 2015-11-03 International Business Machines Corporation Scalable caching of remote file data in a cluster file system
US20120303686A1 (en) * 2009-12-16 2012-11-29 International Business Machines Corporation Scalable caching of remote file data in a cluster file system
US9158788B2 (en) 2009-12-16 2015-10-13 International Business Machines Corporation Scalable caching of remote file data in a cluster file system
US9323921B2 (en) 2010-07-13 2016-04-26 Microsoft Technology Licensing, Llc Ultra-low cost sandboxing for application appliances
US8489725B2 (en) 2010-07-16 2013-07-16 Research In Motion Limited Persisting file system information on mobile devices
EP2407895A1 (en) * 2010-07-16 2012-01-18 Research In Motion Limited Persisting file system information on mobile devices
US8983902B2 (en) * 2010-12-10 2015-03-17 Sap Se Transparent caching of configuration data
US20120150796A1 (en) * 2010-12-10 2012-06-14 Sap Ag Transparent Caching of Configuration Data
US10289435B2 (en) 2011-05-16 2019-05-14 Microsoft Technology Licensing, Llc Instruction set emulation for guest operating systems
US9495183B2 (en) 2011-05-16 2016-11-15 Microsoft Technology Licensing, Llc Instruction set emulation for guest operating systems
US8346774B1 (en) * 2011-08-08 2013-01-01 International Business Machines Corporation Protecting network entity data while preserving network properties
US20130054734A1 (en) * 2011-08-23 2013-02-28 Microsoft Corporation Migration of cloud applications between a local computing device and cloud
US9425965B2 (en) 2011-12-12 2016-08-23 Microsoft Technology Licensing, Llc Cryptographic certification of secure hosted execution environments
US9413538B2 (en) 2011-12-12 2016-08-09 Microsoft Technology Licensing, Llc Cryptographic certification of secure hosted execution environments
US9389933B2 (en) 2011-12-12 2016-07-12 Microsoft Technology Licensing, Llc Facilitating system service request interactions for hardware-protected applications
US9639583B2 (en) 2014-04-14 2017-05-02 Business Objects Software Ltd. Caching predefined data for mobile dashboard
US10402375B2 (en) * 2016-07-18 2019-09-03 Microsoft Technology Licensing, Llc Cloud content states framework
CN111966283A (en) * 2020-07-06 2020-11-20 云知声智能科技股份有限公司 Client multi-level caching method and system based on enterprise-level super-computation scene
CN112256208A (en) * 2020-11-02 2021-01-22 南京云信达科技有限公司 Offline data packet storage analysis method and device

Also Published As

Publication number Publication date
US7698376B2 (en) 2010-04-13
US7702745B2 (en) 2010-04-20
US20050160096A1 (en) 2005-07-21
US20050165735A1 (en) 2005-07-28

Similar Documents

Publication Publication Date Title
US7698376B2 (en) Persistent caching directory level support
US7441011B2 (en) Truth on client persistent caching
US11388251B2 (en) Providing access to managed content
KR101109340B1 (en) Protocol-independent client-side caching system and method
US7818287B2 (en) Storage management system and method and program
US6256712B1 (en) Scaleable method for maintaining and making consistent updates to caches
US6598060B2 (en) Method and system for creating and maintaining version-specific properties in a distributed environment
EP1612702A1 (en) Systems and methods for conflict handling in peer-to-peer synchronization of units of information
US8156507B2 (en) User mode file system serialization and reliability
JP2004303211A (en) System and method for invalidation of cached database result and derived object
JP3481054B2 (en) Gateway device, client computer and distributed file system connecting them
HU219996B (en) Client computer, as well as method for operating it
US10133744B2 (en) Composite execution of rename operations in wide area file systems
CA2815562C (en) Systems and methods for extensions and inheritance for units of information manageable by a hardware/software interface system
Hicks Improving I/O bandwidth with Cray DVS Client‐side Caching
Krzyzanowski Distributed File Systems Design
Ghosh et al. Storage Systems for Mobile Environment
Wang et al. Cooperative Cache Management in S2FS.
Zhang A mobile file service based on double middleware
ZHANG DOUBLE MIDDLEWARE-BASED MOBILE FILE SERVICE

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIN, YUN;VIRK, NAVJOT;AUST, BRIAN S.;AND OTHERS;REEL/FRAME:014638/0881;SIGNING DATES FROM 20031022 TO 20031023

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034766/0001

Effective date: 20141014