US20130179947A1 - System and method for decentralized online data transfer and synchronization - Google Patents


Info

Publication number
US20130179947A1
Authority
US
United States
Prior art keywords
node
nexus
data
management
client
Prior art date
Legal status
Granted
Application number
US13/734,843
Other versions
US8955103B2 (en)
Inventor
Frank-Robert Kline, III
Aaron Moise Nathan
Jonathan R. Schoenberg
Current Assignee
Open Text Holdings Inc
Original Assignee
Adept Cloud Inc
Priority date
Filing date
Publication date
Priority to US13/734,843 (granted as US8955103B2)
Application filed by Adept Cloud Inc filed Critical Adept Cloud Inc
Assigned to ADEPT CLOUD, INC. reassignment ADEPT CLOUD, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KLINE, FRANK-ROBERT, NATHAN, AARON MOISE, SCHOENBERG, JONATHAN R.
Publication of US20130179947A1
Assigned to HIGHTAIL, INC. reassignment HIGHTAIL, INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: YOUSENDIT, INC.
Assigned to HIGHTAIL, INC. reassignment HIGHTAIL, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: Adept Cloud, Inc.
Assigned to HIGHTAIL, INC. reassignment HIGHTAIL, INC. CORRECTIVE ASSIGNMENT TO CORRECT THE INCORRECT APPLICATION NO. 13/733,351 PREVIOUSLY RECORDED AT REEL: 031288 FRAME: 0656. ASSIGNOR(S) HEREBY CONFIRMS THE CHANGE OF NAME. Assignors: YOUSENDIT, INC.
Publication of US8955103B2
Application granted
Assigned to OPEN TEXT HOLDINGS, INC. reassignment OPEN TEXT HOLDINGS, INC. MERGER (SEE DOCUMENT FOR DETAILS). Assignors: HIGHTAIL, INC.
Assigned to BARCLAYS BANK PLC reassignment BARCLAYS BANK PLC SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: OPEN TEXT HOLDINGS, INC.
Assigned to THE BANK OF NEW YORK MELLON reassignment THE BANK OF NEW YORK MELLON SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: OPEN TEXT HOLDINGS, INC.
Assigned to OPEN TEXT HOLDINGS, INC. reassignment OPEN TEXT HOLDINGS, INC. RELEASE OF SECURITY INTEREST IN PATENTS (REEL/FRAME 063558/0682) Assignors: BARCLAYS BANK PLC
Legal status: Active (current)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/10 Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • H04L63/08 Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0884 Network architectures or network communication protocols for network security for authentication of entities by delegation of authentication, e.g. a proxy authenticates an entity to be authenticated on behalf of this entity vis-à-vis an authentication entity
    • H04L63/0823 Network architectures or network communication protocols for network security for authentication of entities using certificates
    • H04L63/0892 Network architectures or network communication protocols for network security for authentication of entities by using authentication-authorization-accounting [AAA] servers or protocols
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services

Definitions

  • This application relates generally to the technical field of sharing files, and, in one specific example, to allowing organizations to implement internal and external data collaboration without violating the security policies of the organization.
  • employees of an organization often need to share access to files, whether they are working locally (e.g., inside a firewall of the organization) or remotely (e.g., outside the firewall). Additionally, employees of the organization may need to share access to such files, which may otherwise be intended to remain private to the organization, outside the organization (e.g., with employees of other organizations). With existing data collaboration tools, it may be difficult for an organization to control such file sharing such that security policies of the organization are not compromised.
  • FIG. 1 is a screenshot depicting an example embodiment of a user interface of a desktop client
  • FIG. 2 is a screenshot depicting an example embodiment of a user interface in which a cloud (e.g., “my cloud”) has been mounted on a user's computer;
  • FIG. 3 is a screenshot depicting an example embodiment of a user interface of a Cloud Browser
  • FIG. 4 is a screenshot depicting an example embodiment for allowing a user to view files in my cloud using Onsite;
  • FIG. 5 is a screenshot depicting an example embodiment of a user interface presented in a mobile device to allow a user to view files in my cloud;
  • FIG. 6 is a screenshot depicting an example embodiment of a user interface for central management of the storage system
  • FIG. 7 is a screenshot depicting an example embodiment of a user interface for managing computers and devices centrally;
  • FIG. 8 is a screenshot depicting an example embodiment of a user interface for viewing organizations centrally;
  • FIG. 9 is a screenshot depicting an example embodiment of a user interface for managing users and onsite services centrally;
  • FIG. 10 is a block diagram illustrating an example architecture of the system
  • FIG. 11 is an interaction diagram depicting example interactions between components during authentication and data transfer
  • FIG. 12 is a table illustrating examples of data items that each nexus session may persistently keep track of
  • FIG. 13 is a table depicting examples of data items that each node session may keep track of
  • FIG. 14 is a description of an example embodiment of what message construction may look like
  • FIG. 15 is a table illustrating an example embodiment of a database table for an example candidate implementation of a revisioning file storage service
  • FIG. 16 is a block diagram depicting an example embodiment of a design of the access component
  • FIG. 17 is a table illustrating an example embodiment of nexus logging particulars for three classes: user, organization, and cloud;
  • FIG. 18 is a table illustrating an example embodiment of cloud-level logging particulars
  • FIG. 19 is a table illustrating example fields included in a database table for indexing
  • FIG. 20 is a flowchart illustrating an example method of sharing data
  • FIG. 21 is a block diagram of a machine in the example form of a computer system within which instructions for causing the machine to perform operations corresponding to any one or more of the methodologies discussed herein may be executed.
  • methods for sharing data are disclosed.
  • a request from a client node to access data in a share associated with a server node is received.
  • a communication from a management nexus is received.
  • the communication includes a confirmation of an identity of the client node and a confirmation of an authorization for the client node to access the data in the share associated with the server node.
  • the client node is allowed to access the data in the share associated with the server node based on the communication from the management nexus. However, the data is not sent to the management nexus.
  • This method and other methods disclosed herein may be implemented as a computer system having one or more modules (e.g., hardware modules or software modules). This method and other methods disclosed herein may be embodied as instructions stored on a machine-readable medium that, when executed by a processor, cause the processor to perform the method.
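  • A minimal, illustrative sketch (in Java, hinted at elsewhere in the description as an implementation language) of the access flow summarized above: the server node serves share data to the client node only after the management nexus confirms the client's identity and authorization, and the data itself never passes through the nexus. The type and method names (NexusClient, ShareStore, handleReadRequest) are assumptions, not the patent's API.

```java
// Hypothetical sketch of the claimed access-check flow; names are illustrative.
import java.util.Optional;

public class ServerNode {

    /** Answer from the management nexus: confirms identity and authorization only. */
    record NexusConfirmation(boolean identityConfirmed, boolean accessAuthorized) {}

    interface NexusClient {
        // The nexus confirms who the client is and whether it may read the share;
        // the share's data itself is never transmitted to the nexus.
        NexusConfirmation confirm(String clientNodeId, String shareUuid);
    }

    interface ShareStore {
        Optional<byte[]> read(String shareUuid, String relativePath);
    }

    private final NexusClient nexus;
    private final ShareStore shares;

    ServerNode(NexusClient nexus, ShareStore shares) {
        this.nexus = nexus;
        this.shares = shares;
    }

    /** Serve a file from a share directly to the client node, gated by the nexus confirmation. */
    public byte[] handleReadRequest(String clientNodeId, String shareUuid, String relativePath) {
        NexusConfirmation c = nexus.confirm(clientNodeId, shareUuid);
        if (!c.identityConfirmed() || !c.accessAuthorized()) {
            throw new SecurityException("nexus did not confirm identity/authorization");
        }
        // Data flows node-to-node only; it is returned to the client, not sent to the nexus.
        return shares.read(shareUuid, relativePath)
                     .orElseThrow(() -> new IllegalArgumentException("no such file in share"));
    }
}
```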
  • methods and systems described herein may offer a better way for businesses with sensitive data to collaborate internally and externally.
  • methods include providing a private, unlimited solution that is simple to use, easy to manage, and low in complexity and cost.
  • methods include providing systems that are purpose-built for sensitive files.
  • data of a business is stored on private nodes, but not on servers external to the private nodes.
  • Existing cloud-based collaboration solutions may charge businesses for storing data at their internet datacenters, penalizing data generation.
  • the methods and systems described herein do not include such charges for cloud storage space.
  • such methods include enabling colleagues in business to synchronize and share unlimited files of unlimited size.
  • a secure gateway is used to provide access to files from outside a business enclave or to collaborate with other businesses while enabling the business to keep complete control over its network.
  • Onsite technology enables an administrator to efficiently and effortlessly track, view, and restore files from any moment in time.
  • the Onsite technology is privately located, yet managed and monitored centrally (e.g., from a public domain such as adeptcloud.com).
  • a method enables users to collaborate on files in various workflows, such as the workflows described below.
  • Synchronization: Users may actively synchronize files and folders on their computers by mounting Clouds using the client desktop software. Changes a user makes to files/folders within a mounted Cloud will be immediately synchronized in the background to other computers that have the Cloud mounted (as well as backed up to Onsite).
  • Cloud Browser: Enables collaboration with large repositories (e.g., repositories having too much data to synchronize to a local computer), and solves other use cases as well.
  • the Cloud Browser enables access and modification of files/folders in Clouds not mounted to the desktop. Importantly, this enables virtualizing/synthesizing of multiple disparate file repositories into an easy Windows-Explorer-like view.
  • Web-based Access (via Onsite).
  • Onsite may provide web-based access to files and repositories.
  • Mobile (e.g., iOS, Android, BlackBerry, Kindle Fire).
  • mobile clients may operate by providing browse/download/upload into/from existing Clouds.
  • FIG. 1 is a screenshot depicting an example embodiment of a user interface 100 of a desktop client. Once installed, the desktop client may run in the background on user computers. The desktop client user interface 100 may show status via a System Tray icon 102 and may be opened/interacted with via the System Tray icon.
  • FIG. 2 is a screenshot depicting an example embodiment of a user interface 200 in which a Cloud (e.g., “my cloud”) has been mounted on the user's computer.
  • a file icon 202 is overlaid on a folder or file corresponding to the Cloud in an application of an operating system (e.g., Windows Explorer of Microsoft Windows) or a Cloud icon 204 is displayed in a toolbar of the application.
  • the desktop client 206 shows “my cloud” is mounted.
  • FIG. 3 is a screenshot depicting an example embodiment of a user interface 300 of the Cloud Browser.
  • the Cloud Browser enables exploration and collaboration with respect to large repositories.
  • the Cloud Browser may be opened via the desktop client.
  • FIG. 4 is a screenshot depicting an example embodiment of a user interface 400 presented in a web browser with web access to enable a user to view files in my cloud using Onsite.
  • FIG. 5 depicts an example embodiment of a user interface 500 presented in a mobile device (e.g., via an iOS mobile application) to enable a user to view files in my cloud.
  • FIG. 6 is a screenshot depicting an example embodiment of a user interface 600 for central management of the storage system.
  • the management of the system, including accounts and business information, may be performed centrally (e.g., on a public domain, such as adeptcloud.com) even though the data is stored privately (e.g., on private nodes).
  • FIG. 7 is a screenshot depicting an example embodiment of a user interface 700 for managing computers and devices centrally (e.g., from a public domain, such as adeptcloud.com). As shown, the user interface 700 may allow a user to link or unlink computers and devices to and from the user's account, as well as rename the computers and devices.
  • FIG. 8 is a screenshot depicting an example embodiment of a user interface 800 for viewing organizations centrally.
  • Such organizations may include one or more computers and devices that may be managed, as shown in FIG. 7 .
  • FIG. 9 is a screenshot depicting an example embodiment of a user interface 900 for managing users and onsite services centrally.
  • the user interface 900 may enable an administrator to manage users that have access to an organization.
  • the administrator may create a managed user or manage behind-the-firewall services for the organizations, such as the Onsite service.
  • an infrastructure of a system includes one or more nodes and nexuses.
  • nodes may be deployed to users as desktop clients. The node backend may provide indexing and synchronization services to power both mounted cloud synchronization and the Cloud Browser.
  • the node may be composed of a selection of important services and functions, such as those described below.
  • Directory Watcher: Mounted clouds may be watched by directory watchers, which receive notifications when files are changed.
  • the indexing service may index files in mounted clouds, keeping track of their status as reported by directory watchers.
  • the Synchronization service may retrieve updates from remote nodes about changes to files and synchronize those files locally that need to be updated for mounted clouds.
  • the index may keep track of updates to local files in mounted clouds as well as a virtual representation of the state of unmounted clouds.
  • the Cloud Browser may display a global index (e.g., an index for both mounted and unmounted clouds). For unmounted clouds, the Cloud Browser may enable the user to download and upload files by interacting with nodes that have the cloud mounted.
  • the Cloud Browser may be coupled to the Indexing Service to retrieve indexing data.
  • the Nexus may maintain security relationships between the nodes and within Clouds. It may be the central conductor and gateway for communication in the system.
  • the central Nexus may also run the system's web interface; it is the central configuration management interface as well.
  • the Nexus can be distributed in a cluster for redundancy and load balancing.
  • the system adds the ability for Nodes to communicate in non-local networks through the use of an external bidirectional proxy (e.g., an HTTP proxy).
  • Various protocols (e.g., HTTP) may be used for this communication.
  • HTTP may have a limitation in that the creation of a socket is tied to the party making the original request. This may be fine for a client connecting to a public server, but causes issues when a public client is trying to connect to a private (firewalled) server.
  • a Relay Server enables the system to decouple the direction of the connection from who is making the request.
  • FIG. 10 is a block diagram illustrating an example architecture 1000 of the system, including the relay described above.
  • the Relay Server is designed so that it only requires incoming connections (i.e., that Transmission Control Protocol (TCP) sessions originate from Nodes).
  • an exception is communication with the Nexus, which is assumed to be bidirectional.
  • the Relay Server may guarantee some level of performance in detecting if a Node is reachable. In various embodiments, the Relay Server does not need to change the existing timeout thresholds used in direct connection methods.
  • FIG. 11 is an interaction diagram depicting example interactions 1100 between components during authentication and data transfer.
  • the process may be started by Node A (the client, e.g., a Home Desktop) attempting to establish a connection to Node B (the server, e.g., a Workplace Desktop).
  • Node B makes an HTTP connection to the appropriate Relay Server, encoding the original request for Node A along with Node A's computer ID.
  • the Relay Server looks up the control session associated with Node A's computer ID.
  • the Relay Server sends a control message to establish an incoming HTTP connection from Node A to the Relay Server with a randomly generated session ID.
  • Node A makes an HTTP connection to the Relay Server, encoding its computer ID and the session ID in the header (no body).
  • the Relay Server forwards the request from Node B as the response to Node A's HTTP connection, again including the session ID.
  • Node A executes the request and establishes another HTTP connection to the Relay Server sending the result of Node B's request in the HTTP request.
  • the Relay Server forwards the result HTTP request from Node A in the response to Node B's original request, with no session ID.
  • the Relay Server sends a blank response to Node A indicating the relayed request is complete.
  • Node B establishes a control session with the Relay Server.
  • the Relay Server informs the Nexus that Node B is accessible from this Relay.
  • the Nexus may delegate a new relay session dynamically for a Node request (if the requested server node is considered online), find and query the relay server, and return this relay endpoint to the requesting Node.
  • the Node may use this to establish a relay.
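  • The relay exchange described in the preceding steps might be sketched roughly as follows. This is a simplified, assumed implementation of the pairing logic only (control sessions keyed by computer ID, randomly generated session IDs, and routing of the result back to the requester); HTTP framing, the nexus interaction, and timeout handling are omitted, and all names are illustrative.

```java
// Rough sketch of the relay pairing logic; class and method names are illustrative.
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class RelayServer {

    /** Control channel to a node that has connected outward to this relay. */
    interface ControlSession {
        void requestIncomingConnection(String sessionId); // "please open an HTTP connection to me"
    }

    private final Map<String, ControlSession> controlSessions = new ConcurrentHashMap<>();
    private final Map<String, byte[]> pendingRequests = new ConcurrentHashMap<>();
    private final Map<String, CompletableFuture<byte[]>> pendingResults = new ConcurrentHashMap<>();

    /** A node behind a firewall establishes a control session (an outgoing connection to the relay). */
    public void registerControlSession(String computerId, ControlSession session) {
        controlSessions.put(computerId, session);
    }

    /** One node asks the relay to forward an encoded request to the other node's computer ID. */
    public CompletableFuture<byte[]> relay(String targetComputerId, byte[] encodedRequest) {
        String sessionId = UUID.randomUUID().toString();      // randomly generated session ID
        pendingRequests.put(sessionId, encodedRequest);
        CompletableFuture<byte[]> result = new CompletableFuture<>();
        pendingResults.put(sessionId, result);
        // Signal the target over its control session to open an incoming connection for this session ID.
        controlSessions.get(targetComputerId).requestIncomingConnection(sessionId);
        return result;                                         // completed when the target delivers its result
    }

    /** The target node's incoming connection picks up the forwarded request for its session ID. */
    public byte[] fetchRelayedRequest(String sessionId) {
        return pendingRequests.remove(sessionId);
    }

    /** The target node posts the result of executing the request; it is routed back to the requester. */
    public void deliverResult(String sessionId, byte[] result) {
        pendingResults.remove(sessionId).complete(result);
    }
}
```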
  • HTTP timeouts may need to be long enough for the relay to function. Due to the CTRL port architecture, the Nexus may respond just as quickly (if not faster) when a node server is offline. External code may be used to address the switching logic (e.g., whether to use the relay or not).
  • the Relay Servers may have easy-to-query metrics, such as:
  • the Relay Servers may present metric endpoints over the protocol interface (e.g., HTTP interface) to the Nexus for aggregation into the Nexus user interface.
  • the Relay Server may have the ability to deploy its logs over this protocol interface as well to the Nexus, thus enabling centralized administration.
  • the Relay Server may be reconfigurable from the Nexus user interface. There may be a mechanism to force re-election of which nodes connect to which relay server in the case of failure or bad connectivity.
  • the system may include a sharing infrastructure feature.
  • the goal of this feature is to provide a flexible backend for our sharing infrastructure.
  • This means the Nexus may support advanced features like access control lists (ACLs) for each share.
  • a secondary goal is to minimize the number of changes to the overall infrastructure in each phase of the build out. This lets the system or an administrator of the system go back and test and do a sanity check on the overall approach.
  • user A wishes to share a folder with user B and user C that is synchronized across all of their machines.
  • the Nexus' existing endpoints may be slightly modified and used so that in general the shares work identically to the existing “my cloud” sync.
  • UUID: The immutable unique ID assigned when the share is created. It is used to uniquely identify a share across the system.
  • ACL_ID: The access control list identifier, which identifies which users have which permissions with regard to this share. Examples may include OWNER, READ, and WRITE.
  • the Nexus must maintain a list of all shares and govern the unique generation of the UUIDs
  • the Nexus must resolve the ACL_ID into a set of permissions for a particular user
  • the Nexus must enumerate which shares a user has access to;
  • the Node must be able to connect to other users' Nodes to synchronize shares
  • the Node must be able to provide index information independently for each share
  • the Node must NOT be able to connect to other users' Nodes with which it does not need to synchronize shares;
  • the Node must NOT be able to synchronize anything except shares it has access to.
  • a share may be created with an owner.
  • User A uses a GUI to create a Share entitled ABCSHARE.
  • the new share is added to the nexus DB.
  • the nexus forces a Share Data Refresh on User A;
  • the Node checks its cache of shares (empty) against the list received in the Share Data Refresh (UUID,“ABCSHARE”, ACL);
  • the Node identifies this as a new share and automatically creates a folder “ABCSHARE” in the %USERDATA%/Shares folder (this is known as the “mount point”);
  • the Node updates its cache of shares and associates the mount point %USERDATA%/Shares/ABCSHARE with the UUID. This association is stored locally on the node only, using the Folders service. Logic will be needed at some point here to handle the case where the mount point already exists; and
  • the Indexing Service is told to begin indexing the new share (the Indexing Service must be made aware of an indexed path's UUID, the System Properties table must now track an Index Revision for each UUID in addition to the Index Revision for “my cloud” or the endpoint must calculate this dynamically from the Index table).
  • this node is prepared for synchronization of updates with other Nodes.
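  • A rough sketch of how a node might process a Share Data Refresh, following the steps above (check the local share cache, create the mount point under %USERDATA%/Shares, associate it with the share UUID, and start indexing). FoldersService, IndexingService, and ShareInfo are assumed names, and the pre-existing mount point case is deliberately left open, as the text notes.

```java
// Illustrative handling of a Share Data Refresh on a node; names are assumptions.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ShareRefreshHandler {

    record ShareInfo(long uuid, String name, String acl) {}

    interface FoldersService { void associate(long shareUuid, Path mountPoint); }
    interface IndexingService { void beginIndexing(long shareUuid, Path mountPoint); }

    private final Map<Long, Path> shareCache = new ConcurrentHashMap<>(); // local cache of known shares
    private final Path sharesRoot;    // e.g. %USERDATA%/Shares
    private final FoldersService folders;
    private final IndexingService indexing;

    ShareRefreshHandler(Path sharesRoot, FoldersService folders, IndexingService indexing) {
        this.sharesRoot = sharesRoot;
        this.folders = folders;
        this.indexing = indexing;
    }

    public void onShareDataRefresh(Iterable<ShareInfo> sharesFromNexus) throws IOException {
        for (ShareInfo share : sharesFromNexus) {
            if (shareCache.containsKey(share.uuid())) continue;    // already known
            Path mountPoint = sharesRoot.resolve(share.name());     // e.g. %USERDATA%/Shares/ABCSHARE
            // NOTE: real logic must handle a pre-existing mount point, as the text observes.
            Files.createDirectories(mountPoint);
            folders.associate(share.uuid(), mountPoint);             // stored locally on the node only
            shareCache.put(share.uuid(), mountPoint);
            indexing.beginIndexing(share.uuid(), mountPoint);        // Index Revision tracked per UUID
        }
    }
}
```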
  • the following paragraphs describe the synchronization sequence with the shared folder.
  • the heartbeat message may have an optional list of “desired” UUIDs which represent the Share UUIDs for which sync information is also being sought;
  • the Heartbeat message may return a set of UUID-ComputerInformation tuples.
  • the computers belonging to “my cloud” may have an implicit UUID of 0, and shares owned by other users' computers will have a UUID matching that of the share (this allows the node to resolve where assets are based on the assets' UUID);
  • the Node may collapse this returned message to remove duplicate computers and create a list of available UUIDs for each computer. This is stored in a Share Location Service on the node;
  • the handshake process may occur, and modifications in the nexus may allow for sessions to be granted between nodes that have READ or greater access to other nodes via shares.
  • the ACL is checked before each operation to see if that operation is permitted;
  • the remote revision given is the remote revision for that UUID
  • the index is updated against that UUID.
  • the Node External Service may check its local ACLs before allowing nodes to perform certain operations. Currently all operations on this endpoint are READ ONLY except upload, but all should perform an ACL verification.
  • if a user deletes an entire shared folder, this may not be automatically propagated by default, unless the user does it from the web UI or confirms with a dialog on the node, for example.
  • the system specifically logs ACL failures at the nexus and node. This may indicate that a node is very close to being able to do something it shouldn't be able to, most likely pointing to a bug in the client side ACL enforcement code.
  • the ACL cache on the client side may be used to eliminate useless (non-permitted) queries from a client node to a server node. Therefore, it may only be necessary to send down the ACL from the perspective of the requesting client, and instead perform the ACL enforcement on the nexus (in the same call as the session keeping).
  • the renaming/moving of share mount points works the same way as with the my cloud folder implementation.
  • This Nexus Session is considered valid in various scenarios, such as:
  • the client has consistently heartbeated with the nexus within some interval (the nexus session reaper interval);
  • the Time Ended stored for the nexus session is NULL.
  • FIG. 12 is a table illustrating examples of data items 1200 that each nexus session may persistently keep track of.
  • the data items 1200 may include a nexus session ID, a nexus session token, a computer ID, a user ID, a user ACL revision, a time created, or a time ended.
  • the Time Ended will become NON-NULL if the client fails to heartbeat within the nexus session reaper interval or if the ACLs the user is a part of change.
  • the User ACL Revision is instantiated before a session is created. This alleviates any race conditions (e.g., if the ACL revision is updated via a race condition).
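  • A simplified sketch of the nexus-session bookkeeping implied by the items above: the session records the User ACL Revision captured before creation, and a reaper marks Time Ended (making it non-NULL) when heartbeats lapse beyond the reaper interval or the user's ACLs change. Field and method names are illustrative.

```java
// Illustrative nexus-side session record and reaper logic; names are assumptions.
import java.time.Duration;
import java.time.Instant;

public class NexusSession {
    final long sessionId;
    final String sessionToken;
    final long computerId;
    final long userId;
    final long userAclRevision;      // captured before the session is created
    final Instant timeCreated;
    volatile Instant timeEnded;      // null while the session is valid
    volatile Instant lastHeartbeat;

    NexusSession(long sessionId, String token, long computerId, long userId, long aclRevision) {
        this.sessionId = sessionId;
        this.sessionToken = token;
        this.computerId = computerId;
        this.userId = userId;
        this.userAclRevision = aclRevision;
        this.timeCreated = Instant.now();
        this.lastHeartbeat = this.timeCreated;
    }

    void heartbeat() { lastHeartbeat = Instant.now(); }

    /** Called by a periodic "session reaper"; also invoked when the user's ACLs change. */
    void reapIfStale(Duration reaperInterval, long currentAclRevision) {
        boolean missedHeartbeat = Instant.now().isAfter(lastHeartbeat.plus(reaperInterval));
        boolean aclChanged = currentAclRevision != userAclRevision;
        if (timeEnded == null && (missedHeartbeat || aclChanged)) {
            timeEnded = Instant.now();   // becomes NON-NULL, invalidating the session
        }
    }

    boolean isValid() { return timeEnded == null; }
}
```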
  • node session management and canonical storage is performed at the nexus.
  • Node sessions are created by a client requesting to handshake with a node that it has access to data from.
  • this Node Session is considered valid in various scenarios, such as:
  • the Time Ended stored for the node session is NULL.
  • FIG. 13 is a table depicting examples of data items 1300 that each node session may keep track of.
  • the data items 1300 may include a node session ID, a node session token, a from ID, a to ID, a from nexus session ID, a to nexus session ID, a time created, or a time ended.
  • the Time Ended will become NON-NULL if From or To Nexus Sessions become invalid or if a Nexus Session is deleted. In various embodiments, node session storage referencing these deleted nexus sessions will be lost.
  • the nexus can check open sessions to other computers and get the index revision numbers for those other nodes the client has access to.
  • node index revisions are kept track of in the nexus, and heartbeat times are lowered, so sync can happen more quickly.
  • the goal of encryption may be to prevent attackers or unprivileged users from reading data or spoofing data between the nodes or between the nodes and nexus.
  • the encryption feature makes sure of the following:
  • a message cannot be read by any third party
  • the system may implement the encryption feature by using a combination of techniques, such as:
  • AES (Advanced Encryption Standard)
  • HMAC (hash-based message authentication code)
  • FIG. 14 is a description 1400 of an example embodiment of what message construction may look like.
  • communication between a node and a nexus will use standard SSL for authenticity, integrity and encryption.
  • Encryption may be used for password protection.
  • Encryption may be used for access restriction/authentication.
  • Encryption may be used to encrypt additional information (e.g., general network traffic, message data, or video data).
  • the “nexus” is a central server software stack.
  • Nodes are clients that are deployed to and installed on user computers and devices.
  • a “node client” is a node that is requesting information in a transaction between nodes.
  • a “node server” is a node that is serving information in a transaction between nodes.
  • industry-standard SSL is used with 256-bit AES encryption in cipher-block-chaining mode, SHA-1 message authentication, and Diffie-Hellman RSA asymmetric key exchange.
  • 128-bit AES is used in cipher-block-chaining mode with PKCS#5 padding and SHA-1 message authentication.
  • the implementation encrypts the ‘data’ content of messages used in a proprietary protocol between nodes on the Adept Cloud network.
  • asymmetric keys are managed using traditional SSL Public-key infrastructure (PKI).
  • the system may have a wildcard SSL certificate whose private keys are known only to the system; public keys are verified by a trusted SSL root authority.
  • the signature used may be SHA-1 RSA, and the key modulus is 2048 bits.
  • symmetric encryption keys are distributed by way of the node-nexus SSL communication layer.
  • the private keys (and other metadata) may be sent to nodes on demand and upon certain exceptional events (such as a user permission change).
  • node-node keys are never stored on node clients except in temporary memory (e.g., RAM) and have a maximum lifetime of 24 hours.
  • no asymmetric encryption is used for node-node communication, so no modulus sizes are supported.
  • the plain text consists of proprietary messages that define the protocol used between nodes and nodes-nexus. Some of these messages may be compressed using gzip or other industry-standard data compression techniques.
  • node-nexus communication uses standard SSL after which no further post-processing methods are applied to the ciphertext.
  • the ciphertext is encapsulated with an unencrypted message header and an unencrypted message footer.
  • the message header may consist of a hashed client identifier, the length (in bytes) of the message ciphertext, the IV (initialization vector) used to encrypt the ciphertext (randomized for each message) and SHA-1 HMAC of the unencrypted message header to authenticate the header contents.
  • the message footer may contain a SHA-1 HMAC of the union of the message header and the ciphertext.
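  • Putting the preceding pieces together, message construction might look roughly like the following sketch: AES-128 in cipher-block-chaining mode with PKCS#5 padding for the body, an unencrypted header carrying a hashed client identifier, the ciphertext length, a per-message random IV, and a SHA-1 HMAC of the header, followed by a footer HMAC over the header and ciphertext. The exact byte layout shown here is an assumption.

```java
// Illustrative message framing; the precise field layout is an assumption.
import javax.crypto.Cipher;
import javax.crypto.Mac;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;

public class MessageBuilder {

    public static byte[] build(byte[] plaintext, byte[] aesKey16, byte[] hmacKey, String clientId)
            throws Exception {
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);                            // randomized for each message

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");  // 128-bit AES, CBC, PKCS#5 padding
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(aesKey16, "AES"), new IvParameterSpec(iv));
        byte[] ciphertext = cipher.doFinal(plaintext);

        byte[] hashedClientId = MessageDigest.getInstance("SHA-1")
                .digest(clientId.getBytes(StandardCharsets.UTF_8));

        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(hmacKey, "HmacSHA1"));

        // Unencrypted header: hashed client ID | ciphertext length | IV, followed by HMAC(header)
        ByteBuffer headerFields = ByteBuffer.allocate(hashedClientId.length + 4 + iv.length);
        headerFields.put(hashedClientId).putInt(ciphertext.length).put(iv);
        byte[] headerHmac = hmac.doFinal(headerFields.array());

        // Footer: HMAC over the union of the message header and the ciphertext
        hmac.update(headerFields.array());
        hmac.update(headerHmac);
        hmac.update(ciphertext);
        byte[] footerHmac = hmac.doFinal();

        return ByteBuffer.allocate(headerFields.capacity() + headerHmac.length
                        + ciphertext.length + footerHmac.length)
                .put(headerFields.array()).put(headerHmac).put(ciphertext).put(footerHmac)
                .array();
    }
}
```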
  • Node-nexus communication may employ standard SSL over TCP using TLS1.0 or greater.
  • node-node communication may support only a proprietary encryption protocol over TCP or UDP.
  • node-nexus communication may make use of a Java SSL library (e.g., provided by Oracle), which inherently prevents user modification of encryption algorithms, key managements and key space.
  • node-node communication uses a proprietary protocol which does not allow for protocol negotiation. This may prevent users from modifying the encryption algorithms without being denied access to a remote resource. Key management may be enforced for both the client node and the server node by the nexus, so in the event a client attempts to use an old or invalid key, the node-node communication will be terminated, as the key will be denied when the server node attempts to verify the invalid key with the nexus.
  • centralized key management is performed for all users by the system infrastructure (e.g., the nexus). This means there may be a number of encrypted data channels equal to the number of active computers on the system, which may be equal to the aggregated number of computers owned by each user.
  • Businesses may critically need to be able to create an organization in the system, in which they can manage users and their data centrally.
  • I'm an Administrator of an Organization and I want to deploy the system fully set up for my organization.
  • An organization is a managed set of users and resources.
  • An organization may have one or more super-administrators.
  • organizations are not to be tied to a specific domain. For example, soasta.com and gmail.com email addresses may be used in the same organization.
  • Super-administrators may have an option to restrict sharing to only users within the organization.
  • users may be a member of only one organization. If an administrator attempts to add a user to an organization and the user is already in another organization, an error may be thrown back and presented to the administrator. Users may be invited to more than one organization if they haven't yet accepted an invite into an organization.
  • the system uses a permissions-based access control system that is notionally tied to cloud privileges (e.g., for a given cloud, a user will have permission (e.g., Owner, Write, Read, None)).
  • the system uses a privilege-based, resource-centric access control system.
  • a resource is an entity within the Adept Cloud system, such as an organization or a cloud, that requires users to have specific privileges to perform specific actions.
  • a role is a set of privileges.
  • native roles are assigned to group common privileges into colloquial buckets.
  • Role.CLOUD_READER will include Privilege.VIEW_CLOUD and Privilege.READ_CLOUD_DATA.
  • a privilege is a positive access permission on a specific resource.
  • Each privilege has an intrinsic role for simplicity of defining ACLs in implementation.
  • An access control list for a resource may map users to roles. User access queries on an access control list may return a set of all roles.
  • a catch-all role, ROLE_NONE, may be added. In various embodiments, this role can never be granted, and is only returned upon queries for user privileges on a resource when the user has no granted privileges.
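  • A minimal sketch of this privilege/role/ACL model, using only the names given above (Privilege.VIEW_CLOUD, Privilege.READ_CLOUD_DATA, Role.CLOUD_READER, and ROLE_NONE); everything else is illustrative.

```java
// Illustrative privilege/role/ACL model; only the named roles and privileges come from the text.
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class AccessControl {

    enum Privilege { VIEW_CLOUD, READ_CLOUD_DATA }

    enum Role {
        CLOUD_READER(EnumSet.of(Privilege.VIEW_CLOUD, Privilege.READ_CLOUD_DATA)),
        ROLE_NONE(EnumSet.noneOf(Privilege.class));   // never granted; returned when nothing is granted

        final Set<Privilege> privileges;
        Role(Set<Privilege> privileges) { this.privileges = privileges; }
    }

    /** Access control list for one resource: maps users to the roles granted on that resource. */
    private final Map<String, Set<Role>> acl = new ConcurrentHashMap<>();

    void grant(String userId, Role role) {
        acl.computeIfAbsent(userId, k -> EnumSet.noneOf(Role.class)).add(role);
    }

    /** A user access query returns all granted roles, or ROLE_NONE if nothing is granted. */
    Set<Role> rolesFor(String userId) {
        Set<Role> roles = acl.get(userId);
        return (roles == null || roles.isEmpty()) ? EnumSet.of(Role.ROLE_NONE) : roles;
    }

    boolean hasPrivilege(String userId, Privilege p) {
        return rolesFor(userId).stream().anyMatch(r -> r.privileges.contains(p));
    }
}
```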
  • required information for organization creation includes the name of the organization and email address (e.g., provided by an administrator).
  • the system may create an organization. If the administrator is not an existing system user, the system will create a new system account for the email address used for signup. In various embodiments, the system does not send an activation email to the user yet.
  • the system may set the administrator's system account as a user of the organization, with role ORGANIZATION_SUPER_ADMINISTRATOR.
  • the system may send an activation email now.
  • Administration features of the system may include the following:
  • the Admin tab may have applications (Users, Clouds, Computers, Settings);
  • a Users application that enables ORGANIZATION_USER_MANAGERs to: create a user for the organization, with options to set the user's authentication and metadata information; send an activation email tailored to the organization, highlighting the name of the organization in the email and in the activation page; invite an existing cloud user to join their organization; view, modify, and/or delete organization users; and view, modify, and/or delete user computers and devices if the administrator is also an ORGANIZATION_COMPUTER_MANAGER;
  • a clouds application that enables ORGANIZATION_CLOUD_MANAGERs to add/view/modify/delete clouds, managing any cloud created by any organization user;
  • a computers application that enables ORGANIZATION_COMPUTER_MANAGERs to view/modify/unlink computers and devices registered to users of the organization;
  • a settings application that enables ORGANIZATION_SUPER_ADMINISTRATORs to add/view/modify/remove users with admin privileges; view/modify organization-wide settings; optionally limit cloud membership to organization users only; delete the organization and all its users.
  • organization membership will be strongly tied to a user object.
  • a backup and versioning system may be installed to enable the system to recover data in the case of accidental misuse by users or a bug in the system software.
  • the backup and versioning system may include the following features:
  • the backup server is able to serve an organization with a predetermined number of users without becoming a bottleneck and is not to delete files off its file system (e.g., it is only able to write metadata indicating that the files are deleted). This will allow the system to have a better guarantee that it can't possibly permanently lose data due to a programming or user error.
  • the backup server is backed by a data store that includes the following features:
  • Simple to back up and restore (e.g., performing a backup of a “data” directory is enough to recreate the entire state in case the backup server computer needs to be restored).
  • the backup server consists of three major components: (1) a Synchronization Service that polls the nexus for revision updates on each node and then contacts each node to get the contents of those revisions; (2) a Revisioning File Storage Service that provides the backing store for the data content, maintains a revision history on a per-file basis, satisfies the no-delete constraint, intelligently stores revisions to files so as not to use an exorbitant amount of space; maintains an index of the head revision plus all other revisions; (3) a Restore Service that provides the endpoint and backing services for clients to browse for and retrieve versions of backed up files, mimics the existing endpoints for synchronization to the head revision for regular nodes (so standard nodes can sync with the backup node), and works in tandem with the Revisioning File Storage service to actually retrieve the file listings and data itself.
  • the Synchronization Service works mostly like a synchronization service that a node may already have.
  • the general cycle is for a particular entity: (1) Contact the nexus to get the current revision numbers for all other nodes of that entity, and (2) Loop through each node: compare a locally cached revision number for that node against what was received from the nexus; retrieve a list of updates from the node by issuing our last cached revision number (getUpdates); relay each update operation to the Revisioning File Storage Service; and upon success, update the locally cached revision number for that node.
  • the backup server may have one sync service per entity instead of one per server; the backup server sync service may use thread pools instead of an explicit sync thread; the backup server sync service may not have a lock service since its indexing service is only accessed by the sync service (i.e., not the file system (FS) watcher) (alternatively, entity scope locking could be used); and the backup server sync service may send FS operations to the Revisioning File Storage Service instead of performing them directly on the FS.
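  • The backup synchronization cycle described above might be sketched as follows for a single entity; NexusClient, NodeClient, RevisioningFileStorage, and Update are assumed types, and error handling and the thread pools mentioned above are omitted.

```java
// Illustrative single pass of the backup synchronization cycle for one entity.
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class BackupSyncCycle {

    record Update(String path, long revision, byte[] data) {}

    interface NexusClient { Map<String, Long> currentRevisions(String entityId); }      // nodeId -> revision
    interface NodeClient  { List<Update> getUpdates(String nodeId, long sinceRevision); }
    interface RevisioningFileStorage { void apply(Update update); }                      // write-only backing store

    private final Map<String, Long> cachedRevisions = new ConcurrentHashMap<>();
    private final NexusClient nexus;
    private final NodeClient nodes;
    private final RevisioningFileStorage storage;

    BackupSyncCycle(NexusClient nexus, NodeClient nodes, RevisioningFileStorage storage) {
        this.nexus = nexus; this.nodes = nodes; this.storage = storage;
    }

    public void runOnce(String entityId) {
        // (1) Ask the nexus for the current revision numbers of all other nodes of this entity.
        Map<String, Long> remote = nexus.currentRevisions(entityId);
        // (2) For each node that is ahead of our cached revision, pull its updates and store them.
        remote.forEach((nodeId, remoteRevision) -> {
            long cached = cachedRevisions.getOrDefault(nodeId, 0L);
            if (remoteRevision <= cached) return;
            List<Update> updates = nodes.getUpdates(nodeId, cached);   // getUpdates since cached revision
            updates.forEach(storage::apply);                           // relay to the Revisioning File Storage
            cachedRevisions.put(nodeId, remoteRevision);               // update cache only upon success
        });
    }
}
```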
  • Node A, Node B, and backup server S are in the same entity.
  • Node A modified a file F offline.
  • Node B modified the same file F offline, in a different way.
  • Node B comes back online later, and Server S tries to get node B's file F(B).
  • the system may detect a fork and start tracking revisions independently.
  • the backing store is aware of the versioning clock.
  • the system may just pick a revision to track (e.g., the first one) and ignore all conflicts. However, if the wrong revision is picked, data could be lost.
  • the Revisioning File Storage Service is the heart of the backup and versioning service. Effectively this component acts as a write-only, versioning file system that is used by the Backup Synchronization Service (above).
  • Verbs used by the existing Synchronization Service may include the following (see doUpdate):
  • the RFS provides all the features of a regular FS, namely supporting the reading, writing, and deletion of files and the creation and deletion of folders. It differs in the following ways:
  • the FS has the concept of a “revision”, which is a number that represents a global state of the file system.
  • the FS supports queries such as the following:
  • each “transaction” on the FS increments the revision by one.
  • the head revision is the global maximum revision.
  • each file or directory full modification constitutes a transaction.
  • a full modification means a full replacement of the file, so intermediate edits by something like Rsync would not result in a new transaction as this would cause files to have an invalid state.
  • a possible candidate implementation is Revisioning Index+Native File System.
  • every file and folder is stored on the native file system as normal.
  • the cloud has a revisioning index which is a database that contains an entry for each file/folder action and its metadata as well as some global metadata for the database itself. Note that the data stored in the database may be tightly coupled to the underlying file revisioning strategy.
  • the database has a row for every single revision of every file/directory. Therefore, the row ID ends up being the revision number.
  • FIG. 15 is a table illustrating an example embodiment of a database table 1500 for an example candidate implementation of a revisioning file storage service.
  • the database table 1500 includes the following fields:
  • filename: the relative path of the file in the entity;
  • the filename is the same for both directories and files;
  • adler32: the adler32 checksum of the file, 0 for directories;
  • file timestamp: the propagated time stamp of the file;
  • version clock: the version clock associated with this operation.
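  • An in-memory analogue of this revisioning index, for illustration only: one append-only row per file/folder transaction, so the row ID doubles as the revision number, with the field names taken from the list above. A real implementation would back this with a database table, as described.

```java
// Illustrative, append-only revisioning index; a real store would be a database.
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

public class RevisioningIndex {

    record Row(long revision, String filename, long adler32, long fileTimestamp,
               String versionClock, String operation) {}

    private final List<Row> rows = new ArrayList<>();   // append-only: rows are never deleted

    /** Each full modification of a file or directory constitutes one transaction. */
    public synchronized long append(String filename, long adler32, long fileTimestamp,
                                    String versionClock, String operation) {
        long revision = rows.size() + 1;                 // revision number == row ID
        rows.add(new Row(revision, filename, adler32, fileTimestamp, versionClock, operation));
        return revision;
    }

    public synchronized long headRevision() { return rows.size(); }   // the global maximum

    /** Latest entry for a path at or before the given revision. */
    public synchronized Optional<Row> stateAt(String filename, long revision) {
        return rows.stream()
                   .filter(r -> r.revision() <= revision && r.filename().equals(filename))
                   .reduce((a, b) -> b);
    }
}
```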
  • the Restore Service is just JSON endpoints (and possibly a rudimentary GUI/CLI).
  • browseRevisions (Input: Relative Path (string), UUID (long), fromRevision (long); Output: List of Browse Elements, augmented with revisions): returns the immediate children elements of a particular path between the fromRevision and the HEAD.
  • Relative Path may be a folder that existed between fromRevision and Head, otherwise error;
  • the system is easily deployable (e.g., by an organization's IT staff).
  • the process is painless and avoids messing with configuration scripts and the like.
  • the Storage Agents will be deployed by the system at select customer premises. If allowed, LogMeIn or some other remote access solution may be installed so that the system can be managed remotely after on-site deployment.
  • Storage Agents will: automatically download updated Storage Agent software similar to how nodes do now and be able to install updates from a central web interface.
  • the “unboxing” experience includes the following steps:
  • An organization admin (or similar privileged organization user) downloads a “special” storage agent from the nexus;
  • the setup package is injected in real time with a unique token identifying which organization it is associated with;
  • the installer requires the admin to enter the storage agent daemon password for the org;
  • the installer asks the administrator for a name to identify this storage agent
  • the installer asks the administrator for location(s) to store backups
  • the installer optionally presents the administrator with different backup strategies (e.g., how long to keep data, etc.) and whether the backup server should be enabled now; and
  • the installer installs the application as a service.
  • the access service may be a component of Onsite.
  • the access service may give organizations the ability to give their users the ability to access their clouds from a web browser.
  • a user uses the access component to:
  • View content for media (e.g., in a special shadowbox with OTF video conversion where appropriate).
  • an Organization Administrator uses the Access Component to:
  • FIG. 16 is a block diagram depicting an example embodiment of a design 1600 of the access component.
  • the system is referred to as “AdeptCloud” and the Onsite component is referred to as “AdeptOnsite.”
  • the access service acts a lot like the mobile clients on one end and on the other end serves an HTML user interface.
  • Every user who logs into the access service (using their system credentials) spools up a virtual node-like client which communicates with other nodes as if it itself were a node.
  • One option is a thin “node-like” layer that sits in the access service and allows changes to be served right from the access server as if it were a node. In various embodiments, this layer serves until the changes propagate “sufficiently”. Sufficient propagation may be based on propagation to a fixed number of peer nodes or a percentage of peer nodes. Sufficient propagation may be based on a time out.
  • Peers may make an incoming connection to the access server in order to get the changes (using retreiveUpdates etc.). Therefore, at some point messages may be dispatched to the appropriate “user handler” for which the message applies.
  • the cryptoId may be sent to the nexus and the access server would then need to figure out not only who the remote user is, but which local user the remote node is trying to communicate with (which in turn may be answered by the nexus).
  • the flow may be as follows:
  • a regular node requests a cryptoId from the nexus as usual, asking to connect to the Access Server;
  • the nexus issues the cryptoId between the node's comp and the synthetic comp of the access server;
  • the node connects to the access server with this cryptoId
  • the access server attempts to verify this with the nexus, but instead of using its synthetic computer's nexus session, it uses a special nexus session for this purpose;
  • the nexus verifies the cryptoId and also returns who the intended computerId receiver was (which should be one of the synthetic comp IDs represented by the access server); and
  • the request is forwarded to the appropriate access client.
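  • The cryptoId verification flow just outlined might look roughly like this on the access server side. NexusClient.verifyCryptoId, AccessClient, and representing the “special nexus session” as a token are assumptions made for illustration only.

```java
// Illustrative access-server handling of an incoming node connection carrying a cryptoId.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class AccessServer {

    record Verification(boolean valid, long intendedComputerId) {}   // who the intended receiver was

    interface NexusClient {
        // Verified over the access server's dedicated ("special") nexus session, not a synthetic one.
        Verification verifyCryptoId(String specialSessionToken, String cryptoId, long sendingComputerId);
    }

    interface AccessClient { byte[] handle(byte[] request); }         // per-user, virtual node-like client

    private final Map<Long, AccessClient> clientsBySyntheticComputerId = new ConcurrentHashMap<>();
    private final NexusClient nexus;
    private final String specialSessionToken;

    AccessServer(NexusClient nexus, String specialSessionToken) {
        this.nexus = nexus;
        this.specialSessionToken = specialSessionToken;
    }

    void registerAccessClient(long syntheticComputerId, AccessClient client) {
        clientsBySyntheticComputerId.put(syntheticComputerId, client);
    }

    /** Handle an incoming node connection; the nexus both validates the cryptoId and names the receiver. */
    public byte[] onIncomingRequest(long sendingComputerId, String cryptoId, byte[] request) {
        Verification v = nexus.verifyCryptoId(specialSessionToken, cryptoId, sendingComputerId);
        if (!v.valid()) throw new SecurityException("cryptoId rejected by nexus");
        AccessClient target = clientsBySyntheticComputerId.get(v.intendedComputerId());
        if (target == null) throw new IllegalStateException("no access client for intended receiver");
        return target.handle(request);   // forward to the appropriate per-user access client
    }
}
```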
  • Another option is to keep the simple design of running every RPC call on the node, effectively making the access server a message delivery system.
  • the access server may connect to N nodes for the particular cloud the front end is editing. If a user has “low connectivity”, i.e. only a single remote node is on, a small warning indicating this may be presented. With this option, fewer features may need to be custom implemented on the access server.
  • This option may also support a future design where the always available node feature is handled by an external component running on the same computer (or even another server, perhaps a storage agent “lite”).
  • users may see all connected computers via the web UI as well as the associated job queue.
  • New endpoints on the nodes: download file, download files, upload file (either a new file or a new version of an existing file), rename a file (delete it and synthetic upload), delete a file, move a folder (and its contents) to a new location, and move a file to a new location.
  • the node may have specific code which prevents changes made by the node itself, as detected by the file system watcher, from increasing its own version clock entry.
  • these endpoints may need to make modifications to the existing files on the node, but may do so without the node changing the version clock for its own computer. Instead, these actions may change the version clocks by incrementing the entry for the synthetic computer which represents the user on the access server. This way future logging and auditing may ensure the version clocks always represent a truthful map of who and where edits were made. Additionally, the mobile clients may make use of these new endpoints, and the same guarantees may then be made about modifications made on those devices.
  • the audit server gives visibility into the health of the system and provides a central repository for logging user, file and security events.
  • Auditing may be broken down into three primary categories:
  • Cloud-level logging: tracking the version and history of individual files, who edited, where, on what device, etc. (storage agent);
  • Transaction-level logging: node-node communications used for figuring out when two nodes sync.
  • each audited event may be “visible” to certain principals, depending on the event. This is because certain “container” roles change over time, and should have retroactive access into an audit trail. For instance, a new IT administrator may be able to access audit history that occurred before their sign on date. However, users who join a cloud may not be able to get access to audit history before they joined.
  • the goal of the audit server is to record enough information at the time of the audit event to extract these security relationships when the data needs to be accessed.
  • FIG. 17 is a table illustrating an example embodiment of nexus logging particulars 1700 for three classes: user, organization, and cloud.
  • global system visibility allows a super-admin to see all events at the nexus level.
  • FIG. 18 is a table illustrating an example embodiment of cloud-level logging particulars 1800 .
  • this logging is done exclusively at the cloud level.
  • Most of the data may come from the storage agents, with some of the same data coming from the nexus just as above.
  • This type of logging may log when nodes communicate with each other and what they say. In various embodiments, this is just the sync pipeline, which is a combination of RetreiveUpdatesSinceRevision and RetreiveFile.
  • the nodes may log both the server and client sides of a connection; this way, if either node is compromised or if only a single node is “managed”, both sides of the transaction can be found later.
  • the PKI system works as follows:
  • a private key is generated by the audit server for an organization.
  • the public key is then sent to the nexus;
  • Log events are encrypted with the public key before being sent to the nexus.
  • the nexus then queues these events to be sent to an audit server;
  • the audit server retrieves the events and can decrypt them with its private key.
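  • A toy sketch of this audit PKI flow. For brevity it encrypts a short event directly with the organization's 2048-bit RSA public key; real events would more likely use a hybrid (AES plus RSA) scheme, and the queuing at the nexus is not shown.

```java
// Illustrative audit PKI flow: nodes encrypt with the org public key, the audit server decrypts.
import javax.crypto.Cipher;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class AuditPkiSketch {

    public static void main(String[] args) throws Exception {
        // 1. The audit server generates the organization's key pair; the public key goes to the nexus.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair orgKeys = gen.generateKeyPair();

        // 2. A node encrypts a log event with the org public key before sending it to the nexus.
        String event = "user=alice action=READ cloud=ABCSHARE";   // hypothetical event
        Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        rsa.init(Cipher.ENCRYPT_MODE, orgKeys.getPublic());
        byte[] ciphertext = rsa.doFinal(event.getBytes(StandardCharsets.UTF_8));
        // 3. The nexus can queue and forward this ciphertext but cannot read it.

        // 4. The audit server retrieves the queued event and decrypts it with its private key.
        rsa.init(Cipher.DECRYPT_MODE, orgKeys.getPrivate());
        String recovered = new String(rsa.doFinal(ciphertext), StandardCharsets.UTF_8);
        System.out.println(recovered);
    }
}
```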
  • the direct communication system works as follows:
  • Nodes locally “cache” every audit event to a local persistent store on the node (e.g., the database);
  • the node connects with the audit server and delivers updates
  • the node may optionally be required to handshake with an audit server, to avoid totally orphaned nodes never delivering their logs (this could be an organization parameter).
  • a user interface may enable users to perform the following actions:
  • View Statistics (e.g., of logins, time since last login, failed logins, last IP, email, and so on).
  • the synchronization service supports the access server (e.g., browser client) and improves performance for the sync service, including decoupling the indexing, synchronization and filewatcher services.
  • the synchronization service may have the ability to handle conflicts.
  • the synchronization service may also maintain index information from all nodes in mounted and unmounted clouds. Every node may have the version clock of every file in the mounted cloud.
  • the synchronization service may provide file level information of what's available and what's not on each node in the network. The index may be able to handle the thin mounting concept.
  • Use cases may include the following:
  • a user wants to access data from any computer in the world
  • a file is modified offline on two nodes and a conflict is created (e.g., the user wants to resolve the conflict);
  • a user wants more performance out of the node client
  • a user wants to browse unmounted clouds
  • a user wants to download data from an unmounted cloud
  • a user wants to upload data to an unmounted cloud.
  • the synchronization server may support the following workflows:
  • Changes to the index data in the database may include:
  • Add an availability table in which each ID maps to an entry in the adept_index and each column is a computer UUID in the cloud (the list of computers could be truncated); see the schema sketch below;
  • the indexing service may support the following features:
  • a counter on the nexus that tells nodes when they should talk to each other. This may be the primary mechanism that nodes use in the SyncService and NodeExternalServices to communicate.
  • FIG. 19 is a table illustrating example fields included in a database table 1900 for indexing (e.g., adept_index). As shown, the example fields include index_id, computer_id, and version_clock.
  • the adept_index table includes a locally_available column, and stores information about unmounted clouds in addition to mounted clouds.
  • Locally_available is a Boolean to indicate whether the PATH is available on the local node.
  • SHARES may include all clouds (UUIDs) and include a new field to indicate if the cloud is mounted (not just a null PATH).
  • a column “mounted” may indicate if the cloud is locally mounted. Clouds in the SHARES table may be assumed to be mounted.
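  • For illustration, the index and availability layout discussed above might be laid out as follows. Only index_id, computer_id, version_clock, locally_available, and mounted come from the text; the availability data is sketched here as a row-per-computer table rather than one column per computer UUID, which is an implementation choice, not the described design.

```java
// Illustrative DDL for the index/availability layout; column names beyond those named in the
// text are assumptions.
public final class IndexSchema {

    static final String ADEPT_INDEX = """
        CREATE TABLE adept_index (
            index_id          BIGINT PRIMARY KEY,
            computer_id       BIGINT NOT NULL,       -- computer that produced this entry
            version_clock     VARCHAR(255) NOT NULL,
            locally_available BOOLEAN NOT NULL       -- is the PATH present on the local node?
        )""";

    // Sketched as one row per (index entry, computer) instead of one column per computer UUID.
    static final String ADEPT_AVAILABILITY = """
        CREATE TABLE adept_availability (
            index_id    BIGINT NOT NULL REFERENCES adept_index(index_id),
            computer_id BIGINT NOT NULL,
            available   BOOLEAN NOT NULL,
            PRIMARY KEY (index_id, computer_id)
        )""";

    static final String SHARES = """
        CREATE TABLE shares (
            uuid    BIGINT PRIMARY KEY,              -- all clouds (UUIDs), mounted or not
            path    VARCHAR(1024),                   -- may be NULL when unmounted
            mounted BOOLEAN NOT NULL                 -- explicit flag rather than just a NULL path
        )""";

    private IndexSchema() {}
}
```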
  • SyncService: (a) synchronizationServiceThread.performAction: loop over all UUIDs, not just mounted ones (do not skip if the folder service getFolder( ) call returns null for unmounted UUIDs); call syncWithComputer any time; do not call setRevisions for unmounted clouds to tell the nexus your local revision is zero for that cloud; (b) syncWithComputer: for mounted clouds, call doUpdate and only update the index once the transfer has completed and the hash equality has been checked; for unmounted clouds, there are a couple of options:
  • Option 1: use the current check of the foldersService to see if a cloud is mounted by checking if the folder is null.
  • Option 2: explicitly check at the beginning of the function if the UUID is mounted via the SHARES table. If not mounted, process the IndexedPathUpdates via the IndexingStorageService and set the updated remote revision via the foldersService.
  • a type of SynchronizationEvent indicates that just index information is being shared, but this may happen very quickly and perhaps frequently.
  • IndexingService (a) mounted clouds—the fileWatcherEvents may be the primary mechanism for updating the index; (b) unmounted clouds—no FileWatchers are enabled, so the unmounted clouds may not interact with the IndexingService via the QueuedIndexEvents.
  • doUpdateIndexedPath: add computer_id to the call and update the corresponding element in the adept_availability table;
  • getUpdatesSinceRevision: perform a join query to return not just the data in adept_index, but also information from the adept_availability table indicating whether the corresponding element in the adept_index table exists on the given computer (this assumes it will populate the IndexedPath available field with the current node computer_id from the availability table);
  • getIndexRevision: like getMaxIndexRevision, persist data on unmounted clouds in the SHARES table;
  • incrementAndGetCurrentMaxRevision: see getMaxIndexRevision for thoughts on persisting data on unmounted clouds in the SHARES table;
  • overrideIndexRevision: see getMaxIndexRevision for thoughts on persisting data on unmounted clouds in the SHARES table.
  • NodeExternalServices: changes in the IndexedPathResultSetHandler may propagate the unique identifier if an IndexedPath is mounted (and thus available on a given node).
  • IndexedPath: a field may indicate if the IndexedPath is available on the node returning the IndexedPath.
  • IndexedPathResultSetHandler: a translation may set the available field in the IndexedPath based on the data returned from the IndexedPath query.
  • IndexFilter: in various embodiments, has the ability to filter based on data that persists on a given computer_id from the availability table.
  • Mounting a cloud invokes the foldersService.setFolderPath function.
  • Unmounting a cloud invokes foldersService.unmountFolder. Unmounting and clearing the data from the index will cause all remote nodes with this cloud mounted to provide the most up-to-date information, because the local remote revision will be out of sync with the nexus. Options for unmountFolder:
  • Option 1: Add an additional call (and always run it) to increment the local IndexRevision number (how does this get pushed up to the nexus?);
  • Option 2: Add another method to explicitly unmount and increment. This would allow other methods that do not need to increment the index revision to unmount a folder without incrementing;
  • Option 3: Put the call to the nexusClient at the CloudController level.
  • nexus infrastructure may be federated for redundancy and load balancing.
  • a high-level solution may be to partition the location of the sensitive data and partition how access to data is granted. This solution may be realized with a standards-based PKI (Public Key Infrastructure) solution.
  • Identification: provide identification for two peers who are communicating.
  • Authorization: authorize one peer to access a resource from another peer.
  • the PKI feature addresses problem 1, and provides a way for organizations to fairly easily substitute out the nexus for their own PKI solution.
  • each client may establish its identity using a X509 certificate.
  • Each connection between nodes may use two-way TLS, thereby allowing both peers to establish the identity of one another before communicating.
  • the system does this internally by maintaining a map of everyone's certificate to their user/computer ID pair at the nexus. Effectively, the nexus may act as a certificate authority (CA).
  • the nexus may perform the following CA-like activities: accepting a generated public key from a node in the form of a CSR, returning a signed public key certificate with the nexus root CA, maintaining a list of revoked certificates, supporting an Online Certificate Status Protocol (OCSP) (or OCSP like) protocol to check validity of a certificate.
  • a computer token may be generated (nexus side) for each new computer and associated with a computer/user ID pair.
  • a public/private RSA key pair may be generated (node side) and the public key is associated with a computer/user ID pair.
  • security may be session-based.
  • the computer token may be held secret between the node and the nexus, and a temporary handshake token may be generated to establish identity, which leads to a session token which exists for the duration of the logical connection.
  • security may be certificate-based. For example, nodes may directly communicate with one another without ever needing to talk to the nexus (regarding identity) as they may verify the authenticity of the connecting node's identity by verifying the root of the presented certificate.
  • the PKI feature, including its communication infrastructure, may result in significantly reduced load on the nexus and faster connections between nodes because, for example, the node identity may be verified without a round trip to the nexus (e.g., through caching the issuing certificate public key).
  • the nodes may generate a public/private key pair.
  • the nodes may generate RSA keys with 2048 or 4096 bit length.
  • a key store on the node may be the sole location of the node's private key.
  • a trust store on the node may contain the nexus public key. In this way, trust of certificates signed with the nexus public key may be enabled.
  • trust stores may be Java key stores (JKS).
  • trust stores may be non-Java specific trust stores.
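A minimal sketch of the node-side key material described above, using the standard Java KeyStore APIs. The file names and password are placeholders, and the nexus-signed certificate chain (added after enrollment completes) is omitted:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;

public final class NodeKeyMaterial {
    public static void main(String[] args) throws Exception {
        // 1) Generate the node's RSA key pair (2048 or 4096 bits).
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();
        System.out.println("Generated " + pair.getPublic().getAlgorithm() + " key pair");

        // 2) Key store: intended as the sole location of the node's private key.
        //    The private-key entry is added via setKeyEntry(...) once the
        //    nexus-signed certificate chain is available (omitted here).
        KeyStore keyStore = KeyStore.getInstance("JKS");
        keyStore.load(null, null); // create an empty store
        try (FileOutputStream out = new FileOutputStream("keystore.jks")) {
            keyStore.store(out, "changeit".toCharArray());
        }

        // 3) Trust store: contains only the nexus (CA) public certificate, so
        //    certificates signed by the nexus are trusted.
        Certificate nexusCa;
        try (FileInputStream in = new FileInputStream("nexus-ca.pem")) {
            nexusCa = CertificateFactory.getInstance("X.509").generateCertificate(in);
        }
        KeyStore trustStore = KeyStore.getInstance("JKS");
        trustStore.load(null, null);
        trustStore.setCertificateEntry("nexus-ca", nexusCa);
        try (FileOutputStream out = new FileOutputStream("truststore.jks")) {
            trustStore.store(out, "changeit".toCharArray());
        }
    }
}
```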
  • Another registerComputer call may perform a SCEP (Simple Certificate Enrollment Protocol) request.
  • a signed X509 certificate may be issued back to the node as per SCEP from the nexus.
  • the nexus may record the issuer and serial number of the certificate and associate it with that computer/user ID.
  • the node may store this certificate for some time (e.g., one year by default). Thus, the installation of the PKI feature may be completed.
  • Legion of Bouncy Castle APIs may be used to perform various operations.
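As one illustration of such an operation, the following sketch generates a node key pair and a PKCS#10 CSR with the Bouncy Castle PKIX APIs; the subject name is a placeholder, and submitting the encoded CSR to the nexus SCEP endpoint is left abstract:

```java
import org.bouncycastle.asn1.x500.X500Name;
import org.bouncycastle.operator.ContentSigner;
import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;
import org.bouncycastle.pkcs.PKCS10CertificationRequest;
import org.bouncycastle.pkcs.jcajce.JcaPKCS10CertificationRequestBuilder;

import java.security.KeyPair;
import java.security.KeyPairGenerator;

public final class CsrExample {
    public static void main(String[] args) throws Exception {
        // Node-side RSA key pair; the private key never leaves the node.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        // Placeholder subject; the real subject naming is discussed later in
        // the description.
        X500Name subject = new X500Name("CN=example-computer-id");

        ContentSigner signer =
                new JcaContentSignerBuilder("SHA256withRSA").build(pair.getPrivate());
        PKCS10CertificationRequest csr =
                new JcaPKCS10CertificationRequestBuilder(subject, pair.getPublic())
                        .build(signer);

        // csr.getEncoded() would then be submitted to the nexus SCEP endpoint,
        // which returns a certificate signed by the nexus root CA.
        System.out.println("CSR length: " + csr.getEncoded().length + " bytes");
    }
}
```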
  • Node A gets the IP and port of Node B as before.
  • Node A attempts to connect via TLS to Node B directly to the getFile endpoint.
  • Node B challenges Node A for its client certificate.
  • Node A provides the client certificate it got during install, signed by the nexus.
  • Node B performs an OCSP request to the nexus to verify the status of node A's certificate. Alternatively this can be done directly over the existing SSL connection with the nexus.
  • Node B replies with its public certificate, which is subsequently also verified with the nexus (e.g., by Node A).
  • Node A accepts the cert, and the secure channel is created.
  • Node A gets the file from node B.
  • OCSP supports a cache lifetime (like a TTL). This may be set to a default value that organizations may configure later.
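A minimal sketch of how Node A might establish the two-way TLS connection described above using standard JSSE APIs: the key store holds the node's nexus-signed certificate and private key, and the trust store holds only the nexus CA, so both peers can challenge and verify each other. Host names, ports, file names, and the password are placeholders:

```java
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.TrustManagerFactory;
import java.io.FileInputStream;
import java.security.KeyStore;

public final class TwoWayTlsClient {
    public static void main(String[] args) throws Exception {
        char[] password = "changeit".toCharArray(); // placeholder

        // Key store: this node's private key and nexus-signed certificate.
        KeyStore keyStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("keystore.jks")) {
            keyStore.load(in, password);
        }
        KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, password);

        // Trust store: only the nexus CA certificate, so only peers presenting
        // a nexus-signed certificate are accepted.
        KeyStore trustStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("truststore.jks")) {
            trustStore.load(in, password);
        }
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);

        // Node A connecting to Node B; Node B would be configured with
        // setNeedClientAuth(true) so that Node A's certificate is challenged.
        try (SSLSocket socket =
                     (SSLSocket) ctx.getSocketFactory().createSocket("nodeB.example", 8443)) {
            socket.startHandshake();
            // ... issue the getFile request over the established channel ...
        }
    }
}
```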
  • the only things tying the PKI feature to the nexus may be:
  • the location of the SCEP endpoint (i.e., registering a certificate with the CA);
  • the location of the OCSP endpoint (or similar) (i.e. verifying an issued certificate with the CA);
  • the public key that is preloaded into a trust store for the CA (i.e., which CAs the system trusts).
  • relayed connection may be more complex.
  • Many SSL libraries may not support decoupling the socket and the SSL state machine, which may be necessary to inject unencrypted (or at least public to the relay) routing information on the message so the relay knows how to deliver a given message.
  • the solution may be twofold.
  • the system may create a plaintext handshake with the relay server, communicate the routing info, establish the connection to the relayed node, and then transition to an SSL connection before the first bit of ciphertext is ever sent to the relayed client.
  • the relay servers will NOT be performing any part of the SSL handshake; they merely forward the packets to the intended host in a transparent manner. Therefore the relays have absolutely no visibility into the underlying data that is being transmitted (see the sketch below).
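A minimal sketch of the relay STARTTLS approach described above: the node connects to the relay in plaintext, sends routing information the relay can read, and then upgrades the same socket to TLS so the relay only ever forwards ciphertext. The wire format of the plaintext handshake and the method names are assumptions for illustration:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public final class RelayStartTls {
    /**
     * Connects to the relay in plaintext, sends the routing information the
     * relay needs to pair this connection with the target node's control
     * session, and only then upgrades the same TCP socket to TLS. The relay
     * never participates in the TLS handshake; it just forwards bytes.
     */
    static SSLSocket connectViaRelay(SSLContext ctx, String relayHost, int relayPort,
                                     String targetComputerId, String sessionId)
            throws Exception {
        Socket plain = new Socket(relayHost, relayPort);

        // Plaintext handshake: routing info only, no application data.
        OutputStream out = plain.getOutputStream();
        out.write(("RELAY " + targetComputerId + " " + sessionId + "\r\n")
                .getBytes(StandardCharsets.US_ASCII));
        out.flush();

        // Upgrade the existing socket to TLS before any ciphertext is sent.
        SSLSocketFactory factory = ctx.getSocketFactory();
        SSLSocket tls = (SSLSocket) factory.createSocket(plain, relayHost, relayPort, true);
        tls.setUseClientMode(true);
        tls.startHandshake();
        return tls;
    }
}
```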
  • Android may leverage the same code as the normal (e.g., PC) clients and onsite.
  • iOS may need to do Simple Certificate Enrollment Protocol (SCEP) server-side generation and deliver the cert using a PIN.
  • One of the most powerful aspects of the system may be the ability for two or more organizations with separate IT infrastructure to collaborate easily.
  • each client's certificate may identify the common system CA as a trusted root authority, and therefore accept the remote peer's certificate. Effectively it may make no difference that the two nodes are in separate organizations since they trust the same root.
  • Company X and Company Y may agree that they need to collaborate on data.
  • every client in their organization may load both companies' CAs into the client's trusted store, making the client trust certificates issued from either authority.
  • a system application may enforce that clients in company X must be signed with company X's CA, and clients in company Y must be signed by company Y's CA. This is not how typical certificate identification (e.g., standards-based PKI) works.
  • the system may verify not only the identity of an endpoint, but that the endpoint identity is established with a proper chain.
  • a client may have <client id>.<org id>.client.adeptcloud.com in its subject name, which must match the organization ID in the signing CA's certificate.
  • even a single client may be added to trust for establishing finer trust silos.
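A minimal sketch of the organization-scoped chain check described above. How the organization ID is encoded in the client subject name and in the issuing CA's certificate is an assumption here (a CN naming convention); only the <client id>.<org id>.client.adeptcloud.com subject form comes from the description:

```java
import javax.security.auth.x500.X500Principal;
import java.security.cert.X509Certificate;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class OrgChainCheck {
    // Assumed conventions: client CN = "<client id>.<org id>.client.adeptcloud.com",
    // issuing CA CN = "<org id>.ca.adeptcloud.com".
    private static final Pattern CLIENT_CN =
            Pattern.compile("CN=([^.,]+)\\.([^.,]+)\\.client\\.adeptcloud\\.com.*");
    private static final Pattern CA_CN =
            Pattern.compile("CN=([^.,]+)\\.ca\\.adeptcloud\\.com.*");

    /** Returns true only if the client's org ID matches the signing CA's org ID. */
    static boolean orgMatchesIssuer(X509Certificate client, X509Certificate issuingCa) {
        Matcher c = CLIENT_CN.matcher(
                client.getSubjectX500Principal().getName(X500Principal.RFC2253));
        Matcher i = CA_CN.matcher(
                issuingCa.getSubjectX500Principal().getName(X500Principal.RFC2253));
        if (!c.matches() || !i.matches()) {
            return false;
        }
        String clientOrg = c.group(2);
        String caOrg = i.group(1);
        return clientOrg.equals(caOrg);
    }
}
```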
  • Synchronizing and maintaining the trust stores on the clients would be a nightmare in a typical piece of software.
  • the system may use a central server to delegate which clients synchronize which CA's (or client certificates) into their trust stores. This information may come directly from the nexus, or for even more added security, may be delivered using the system onsite servers.
  • Another possible useful configuration may be allowing for organizations to provide intermediate certificates that will be delivered by the system. Clients may have special permission for these types of “chained” certificate configurations, for instance the ability to synchronize more sensitive data.
  • a client-side implementation may include prototype certificate generation, prototype certificate chaining (e.g., signing by a third party), establishing base socket communication (e.g., using Netty with TLS 2.0 and custom certs), streaming interfaces (e.g., interfacing standard Input/Output streams to Netty ByteStreams), refactoring node interfaces in preparation for secure messaging applications (e.g., AdeptSecureMessage), building request/response wrappers (e.g., on top of Netty), tying back Node External Services to the new TLS backend, tying back Onsite External Services to the new TLS backend, building a STARTTLS pipeline factory, updating the relay server to relay STARTTLS, and modifying the relay client to support STARTTLS.
  • a server-side implementation may include adding a serial number entry to the computer field nexus side, implementing SCEP, implementing OCSP, and exposing some OCSP/SCEP configuration options to organizations.
  • FIG. 20 is a flowchart illustrating an example method 2000 of sharing data.
  • a request is received from a client node to access data in a share associated with a server node.
  • the request may be received at the server node or the request may be received at an Onsite service installed within a firewall.
  • a communication is received from a management nexus (e.g., at the server node or the Onsite service).
  • the communication confirms the identity of the client node and a confirmation of an authorization for the client node to access the data in the share associated with the server node.
  • the communication may be sent in response to a request for the confirmation of the identity of the client node and a confirmation of the authorization for the client node to access the data in the share associated with the server node.
  • the client node is allowed to access the data in the share associated with the server node based on the communication received from the management nexus. For example, the client node is allowed to establish a connection with the server node or the Onsite service via a relay endpoint, as described above. In various embodiments, the connection is established based on the security measures described above (e.g., in response to an exchange of certificates between the client node, the server node, and the management nexus). In various embodiments, the data in the share is not transferred to the management nexus. Instead, the data is transferred directly from the server node (or Onsite service) to the client node (e.g., via the relay) without involving the management nexus. Thus the nexus remains unaware of the actual data that is transferred between nodes.
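A minimal control-flow sketch of method 2000, assuming hypothetical interfaces (NexusClient, ShareAccessHandler) that are not part of the description; the point is that the nexus only confirms identity and authorization, while the data itself never passes through it:

```java
public final class ShareAccessHandler {

    interface NexusClient {
        /** Asks the management nexus to confirm identity and authorization. */
        boolean confirmIdentityAndAuthorization(String clientNodeId, String shareUuid);
    }

    private final NexusClient nexus;

    ShareAccessHandler(NexusClient nexus) {
        this.nexus = nexus;
    }

    /**
     * Handles a client node's request for data in a share on this server node
     * (or Onsite service). The nexus only confirms identity and authorization;
     * the data itself is streamed directly (or via the relay) to the client
     * and never passes through the nexus.
     */
    byte[] handleRequest(String clientNodeId, String shareUuid, String path) {
        if (!nexus.confirmIdentityAndAuthorization(clientNodeId, shareUuid)) {
            throw new SecurityException("Nexus did not authorize " + clientNodeId);
        }
        return readFromLocalShare(shareUuid, path); // sent node-to-node, not via nexus
    }

    private byte[] readFromLocalShare(String shareUuid, String path) {
        // Placeholder for reading the requested file from the locally mounted share.
        return new byte[0];
    }
}
```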
  • Modules may constitute either software modules (e.g., code embodied on a machine-readable medium or in a transmission signal) or hardware modules.
  • a hardware module is a tangible unit capable of performing certain operations and may be configured or arranged in a certain manner.
  • In example embodiments, one or more computer systems (e.g., a standalone, client, or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations described herein.
  • a hardware module may be implemented mechanically or electronically.
  • a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations.
  • a hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
  • the term “hardware module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired) or temporarily configured (e.g., programmed) to operate in a certain manner and/or to perform certain operations described herein.
  • In embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time.
  • Where the hardware modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware modules at different times.
  • Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time.
  • Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple of such hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices and can operate on a resource (e.g., a collection of information).
  • processors may be temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions.
  • the modules referred to herein may, in some example embodiments, comprise processor-implemented modules.
  • the methods described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment or as a server farm), while in other embodiments the processors may be distributed across a number of locations.
  • the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), these operations being accessible via a network (e.g., the network 104 of FIG. 1 ) and via one or more appropriate interfaces (e.g., APIs).
  • Example embodiments may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them.
  • Example embodiments may be implemented using a computer program product, e.g., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable medium for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers.
  • a computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, subroutine, or other unit suitable for use in a computing environment.
  • a computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
  • operations may be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output.
  • Method operations can also be performed by, and apparatus of example embodiments may be implemented as, special purpose logic circuitry (e.g., a FPGA or an ASIC).
  • the computing system can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • both hardware and software architectures should be considered.
  • The choice of whether to implement certain functionality in permanently configured hardware (e.g., an ASIC), in temporarily configured hardware (e.g., a combination of software and a programmable processor), or in a combination of permanently and temporarily configured hardware may be a design choice.
  • Hardware (e.g., machine) and software architectures that may be deployed, in various example embodiments, are set out below.
  • FIG. 21 is a block diagram of a machine in the example form of a computer system 5000 within which instructions 5024 for causing the machine to perform operations corresponding to one or more of the methodologies discussed herein may be executed.
  • the machine may operate as a standalone device or may be connected (e.g., networked) to other machines.
  • the machine may operate in the capacity of a server or a client machine in server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
  • the machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
  • the example computer system 5000 includes a processor 5002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both), a main memory 5004 and a static memory 5006 , which communicate with each other via a bus 5008 .
  • the computer system 5000 may further include a video display unit 5010 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)).
  • the computer system 5000 also includes an alphanumeric input device 5012 (e.g., a keyboard), a user interface (UI) navigation (or cursor control) device 5014 (e.g., a mouse), a storage unit 5016 , a signal generation device 5018 (e.g., a speaker) and a network interface device 5020 .
  • the storage unit 5016 includes a machine-readable medium 5022 on which is stored one or more sets of data structures and instructions 5024 (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein.
  • the instructions 5024 may also reside, completely or at least partially, within the main memory 5004 and/or within the processor 5002 during execution thereof by the computer system 5000 , the main memory 5004 and the processor 5002 also constituting machine-readable media.
  • the instructions 5024 may also reside, completely or at least partially, within the static memory 5006 .
  • machine-readable medium 5022 is shown in an example embodiment to be a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions 5024 or data structures.
  • the term “machine-readable medium” shall also be taken to include any tangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present embodiments, or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions.
  • the term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.
  • machine-readable media include non-volatile memory, including by way of example semiconductor memory devices, e.g., Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and compact disc-read-only memory (CD-ROM) and digital versatile disc (or digital video disc) read-only memory (DVD-ROM) disks.
  • the instructions 5024 may further be transmitted or received over a communications network 5026 using a transmission medium.
  • the instructions 5024 may be transmitted using the network interface device 5020 and any one of a number of well-known transfer protocols (e.g., HTTP).
  • Examples of communication networks include a LAN, a WAN, the Internet, mobile telephone networks, POTS networks, and wireless data networks (e.g., WiFi and WiMax networks).
  • the term “transmission medium” shall be taken to include any intangible medium capable of storing, encoding or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such software.
  • inventive subject matter may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A method of sharing data is disclosed. A request from a client node to access data in a share associated with a server node is received. A communication from a management nexus is received. The communication includes a confirmation of an identity of the client node and a confirmation of an authorization for the client node to access the data in the share associated with the server node. The client node is allowed to access the data in the share associated with the server node based on the communication from the management nexus. However, the data is not sent to the management nexus.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application No. 61/583,340, filed Jan. 5, 2012, entitled “SYSTEM AND METHOD FOR DECENTRALIZED ONLINE DATA TRANSFER AND SYNCHRONIZATION,” and U.S. Provisional Application No. 61/720,973, filed Oct. 31, 2012, entitled “PRIVATE DATA COLLABORATION SYSTEM WITH CENTRAL MANAGEMENT NEXUS,” each of which is incorporated herein by reference in its entirety.
  • TECHNICAL FIELD
  • This application relates generally to the technical field of sharing files, and, in one specific example, to allowing organizations to implement internal and external data collaboration without violating the security policies of the organization.
  • BACKGROUND
  • Employees of an organization often need to share access to files, whether they are working locally (e.g., inside a firewall of the organization) or remotely (e.g., outside the firewall). Additionally, employees of the organization may need to share access to such files, which may otherwise be intended to remain private to the organization, outside the organization (e.g., with employees of other organizations). With existing data collaboration tools, it may be difficult for an organization to control such file sharing such that security policies of the organization are not compromised.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Some embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings in which:
  • FIG. 1 is a screenshot depicting an example embodiment of a user interface of a desktop client;
  • FIG. 2 is a screenshot depicting an example embodiment of a user interface in which a cloud (e.g., “my cloud”) has been mounted on a user's computer;
  • FIG. 3 is a screenshot depicting an example embodiment of a user interface of a Cloud Browser;
  • FIG. 4 is a screenshot depicting an example embodiment for allowing a user to view files in my cloud using Onsite;
  • FIG. 5 is a screenshot depicting an example embodiment of a user interface presented in a mobile device to allow a user to view files in my cloud;
  • FIG. 6 is a screenshot depicting an example embodiment of a user interface for central management of the storage system;
  • FIG. 7 is a screenshot depicting an example embodiment of a user interface for managing computers and devices centrally;
  • FIG. 8 is a screenshot depicting an example embodiment of a user interface for viewing organizations centrally;
  • FIG. 9 is a screenshot depicting an example embodiment of a user interface for managing users and onsite services centrally;
  • FIG. 10 is a block diagram illustrating an example architecture of the system;
  • FIG. 11 is an interaction diagram depicting example interactions between components during authentication and data transfer;
  • FIG. 12 is a table illustrating examples of data items that each nexus session may persistently keep track of;
  • FIG. 13 is a table depicting examples of data items that each node session may keep track of;
  • FIG. 14 is a description of an example embodiment of what message construction may look like;
  • FIG. 15 is a table illustrating an example embodiment of a database table for an example candidate implementation of a revisioning file storage service;
  • FIG. 16 is a block diagram depicting an example embodiment of a design of the access component;
  • FIG. 17 is a table illustrating an example embodiment of nexus logging particulars for three classes: user, organization, and cloud;
  • FIG. 18 is a table illustrating an example embodiment of cloud-level logging particulars;
  • FIG. 19 is a table illustrating example fields included in a database table for indexing;
  • FIG. 20 is a flowchart illustrating an example method of sharing data; and
  • FIG. 21 is a block diagram of a machine in the example form of a computer system within which instructions for causing the machine to perform operations corresponding to any one or more of the methodologies discussed herein may be executed.
  • DETAILED DESCRIPTION
  • In the following description, for purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the present subject matter. It will be evident, however, to those skilled in the art that various embodiments may be practiced without these specific details.
  • In various embodiments, methods for sharing data are disclosed. A request from a client node to access data in a share associated with a server node is received. A communication from a management nexus is received. The communication includes a confirmation of an identity of the client node and a confirmation of an authorization for the client node to access the data in the share associated with the server node. The client node is allowed to access the data in the share associated with the server node based on the communication from the management nexus. However, the data is not sent to the management nexus.
  • This method and other methods disclosed herein may be implemented as a computer system having one or more modules (e.g., hardware modules or software modules). This method and other methods disclosed herein may be embodied as instructions stored on a machine-readable medium that, when executed by a processor, cause the processor to perform the method.
  • Existing cloud-based collaboration systems may store data of a business at their internet datacenters, which may result in a loss of privacy, loss of security, or regulatory concerns for the business. Methods and systems described herein may offer a better way for businesses with sensitive data to collaborate internally and externally. In various embodiments, methods include providing a private, unlimited solution that is simple to use, easy to manage, and low in complexity and cost. In various embodiments, methods include providing systems that are purpose-built for sensitive files. In various embodiments, data of a business is stored on private nodes, but not on servers external to the private nodes.
  • Existing cloud-based collaboration solutions may charge businesses for storing data at their internet datacenters, penalizing data generation. In various embodiments, the methods and systems described herein do not include such charges for cloud storage space. In various embodiments, such methods include enabling colleagues in business to synchronize and share unlimited files of unlimited size.
  • Existing private collaboration and storage solutions may come with significant complexity and maintenance burdens for a business, based on, for example, the sole responsibility of the business to deploy, manage, monitor, and upgrade a complex storage system. In various embodiments, unique technology enables configuration, administration, and management of a storage solution at a central location, but does not access or have information about business files or data. In various embodiments, methods enable deployment of a storage solution within minutes that takes care of the system complexities.
  • Existing private collaboration solutions may emphasize data privacy at the cost of usability and access. In various embodiments, methods and systems described herein focus on ease and accessibility. In various embodiments, such methods include an installation procedure that takes just a few clicks within an operating system, such as Windows, Mac, and Linux. In various embodiments, mobile applications (e.g., iOS, Android, Kindle Fire, and Blackberry apps) provide on-the-go access. In various embodiments, a secure gateway is used to provide access to files from outside a business enclave or to collaborate with other businesses while enabling the business to keep complete control over its network.
  • In various embodiments, Onsite technology enables an administrator to efficiently and effortlessly track, view, and restore files from any moment in time. In various embodiments, the Onsite technology is privately located, yet managed and monitored centrally (e.g., from a public domain such as adeptcloud.com).
  • In various embodiments, a method enables users to collaborate on files in various workflows, such as the workflows described below.
  • Synchronization. Users may actively synchronize files and folders on their computers by mounting Clouds using the client desktop software. Changes a user makes to files/folders within a mounted Cloud will be immediately synchronized in the background to other computers who have the Cloud mounted (as well as backed up to Onsite).
  • Browse/Download/Upload (Cloud Browser). Enables collaboration with large repositories (e.g., repositories having too much data to synchronize to a local computer), and solving other use cases as well. The Cloud Browser enables access and modification of files/folders in Clouds not mounted to the desktop. Importantly, this enables virtualizing/synthesizing of multiple disparate file repositories into an easy Windows-Explorer-like view.
  • Web-based Access (via Onsite). Importantly, as data may not be stored externally from private nodes (e.g., in a central repository), web-based access may not be provided at a public domain (e.g., adeptcloud.com). Thus, Onsite may provide web-based access to files and repositories.
  • Mobile (e.g., iOS, Android, Blackberry, Kindle Fire). Similar to the Cloud Browser, mobile clients may operate by providing browse/download/upload into/from existing Clouds.
  • FIG. 1 is a screenshot depicting an example embodiment of a user interface 100 of a desktop client. Once installed, the desktop client may run in the background on user computers. The desktop client user interface 100 may show status via a System Tray icon 102 and may be opened/interacted with via the System Tray icon.
  • FIG. 2 is a screenshot depicting an example embodiment of a user interface 200 in which a Cloud (e.g., “my cloud”) has been mounted on the user's computer. In various embodiments, a file icon 202 is overlaid on a folder or file corresponding to the Cloud in an application of an operating system (e.g., Windows Explorer of Microsoft Windows) or a Cloud icon 204 is displayed in a toolbar of the application. In various embodiments, the desktop client 206 shows “my cloud” is mounted.
  • FIG. 3 is a screenshot depicting an example embodiment of a user interface 300 of the Cloud Browser. In various embodiments, the Cloud Browser enables exploration and collaboration with respect to large repositories. In various embodiments, the Cloud Browser may be opened via the desktop client.
  • FIG. 4 is a screenshot depicting an example embodiment of a user interface 400 presented in a web browser with web access to enable a user to view files in my cloud using Onsite.
  • FIG. 5 depicts an example embodiment of a user interface 500 presented in a mobile device (e.g., via an iOS mobile application) to enable a user to view files in my cloud.
  • FIG. 6 is a screenshot depicting an example embodiment of a user interface 600 for central management of the storage system. The management of the system, including accounts and business information, may be performed centrally (e.g., on a public domain, such as adeptcloud.com) even though the data is stored privately (e.g., on private nodes).
  • FIG. 7 is a screenshot depicting an example embodiment of a user interface 700 for managing computers and devices centrally (e.g., from a public domain, such as adeptcloud.com). As shown, the user interface 700 may allow a user to link or unlink computers and devices to and from the user's account, as well as rename the computers and devices.
  • FIG. 8 is a screenshot depicting an example embodiment of a user interface 800 for viewing organizations centrally. Such organizations may include one or more computers and devices that may be managed, as shown in FIG. 7.
  • FIG. 9 is a screenshot depicting an example embodiment of a user interface 900 for managing users and onsite services centrally. As shown, the user interface 900 may enable an administrator to manage users that have access to an organization. For example, the administrator may create a managed user or manage behind-the-firewall services for the organizations, such as the Onsite service.
  • In various embodiments, an infrastructure of a system includes one or more nodes and nexuses. A node may be deployed to users as a desktop client. Its backend may provide indexing and synchronization services to power both mounted cloud synchronization as well as the Cloud Browser.
  • The composition of the node may consist of a selection of important services and functions, such as those described below.
  • Directory Watcher. Mounted clouds may be watched by directory watchers, receiving notifications when files are changed.
  • Indexing Service. The indexing service may index files in mounted clouds, keeping track of their status as reported by directory watchers.
  • Synchronization Service. The synchronization service may retrieve updates from remote nodes about changes to files and synchronize those files locally that need to be updated for mounted clouds.
  • Index. The index may keep track of updates to local files in mounted clouds as well as a virtual representation of the state of unmounted clouds.
  • Cloud Browser. The Cloud Browser may display a global index (e.g., an index for both mounted and unmounted clouds). For unmounted clouds, the Cloud Browser may enable the user to download and upload files by interacting with nodes that have the cloud mounted. The Cloud Browser may be coupled to the Indexing Service to retrieve indexing data.
  • The Nexus may maintain security relationships between the nodes and within Clouds. It may be the central conductor and gateway for communication in the system. The system's web interface may also run on the central Nexus, which is the central configuration management interface as well. The Nexus can be distributed in a cluster for redundancy and load balancing.
  • Remote Connections
  • In various embodiments, the system adds the ability for Nodes to communicate in non-local networks through the use of an external bidirectional proxy (e.g., an HTTP proxy). Various protocols (e.g., HTTP) may have a limitation in that the creation of a socket is tied to the party making the original request. This may be fine for a client connecting to a public server, but causes issues when a public client is trying to connect to a private (firewalled) server. A Relay Server enables the system to decouple the direction of the connection from who is making the request.
  • FIG. 10 is a block diagram illustrating an example architecture 1000 of the system, including the relay described above.
  • In various embodiments, the Relay Server is designed so that it only requires incoming connections (i.e., that Transmission Control Protocol (TCP) sessions originate from Nodes). In various embodiments, an exception is communication with the Nexus, which is assumed to be bidirectional.
  • In various embodiments, the Relay Server may guarantee some level of performance in detecting if a Node is reachable. In various embodiments, the Relay Server does not need to change the existing timeout thresholds used in direct connection methods.
  • FIG. 11 is an interaction diagram depicting example interactions 1100 between components during authentication and data transfer. The process may be started by Node A (the client, e.g., a Home Desktop) attempting to establish a connection to Node B (the server, e.g., a Workplace Desktop).
  • From the Client's Perspective:
  • 1) Node A asks the Nexus which Relay Server Node B is registered with.
  • 2) The Nexus responds with the Relay hostname to get to Node B (or if Node B is not found, it returns an error).
  • 3) Node A makes an HTTP connection to the appropriate Relay Server, encoding the original request for Node B along with Node B's computer ID.
  • 4) The Relay Server looks up the control session associated with Node B's computer ID.
  • 5) The Relay Server sends a control message to establish an incoming HTTP connection from Node B to the Relay Server with a randomly generated session ID.
  • 6) Node B makes an HTTP connection to the Relay Server, encoding its computer ID and the session ID in the header (no body).
  • 7) The Relay Server forwards the request from Node A as the response to Node B's HTTP connection, again including the session ID.
  • 8) Node B executes the request and establishes another HTTP connection to the Relay Server, sending the result of Node A's request in the HTTP request.
  • 9) The Relay Server forwards the result HTTP request from Node B in the response to Node A's original request, with no session ID.
  • 10) The Relay Server sends a blank response to Node B indicating the relayed request is complete.
  • From the Server's Perspective:
  • 1) Node B starts up and asks the Nexus which Relay Server it should use. It will always do this regardless of whether any clients intend to connect.
  • 2) Node B establishes a control session with the Relay Server.
  • 3) The Relay Server informs the Nexus that Node B is accessible from this Relay.
  • 4) Node B waits until Step 5 in the above process.
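A minimal sketch of the relay-server bookkeeping implied by the two perspectives above: server nodes register a control session keyed by computer ID, and each relayed client request is paired with a randomly generated session ID that the server node echoes back on its callback connection. The types and method names are illustrative assumptions:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public final class RelayServer {

    interface ControlSession {
        /** Asks the server node to open a new incoming HTTP connection for this session. */
        void requestCallback(String sessionId);
    }

    record PendingRequest(String clientComputerId, byte[] encodedRequest) {}

    private final Map<String, ControlSession> controlSessions = new ConcurrentHashMap<>();
    private final Map<String, PendingRequest> pendingBySession = new ConcurrentHashMap<>();

    /** Called when a server node (e.g., Node B) establishes its control session. */
    void registerControlSession(String computerId, ControlSession session) {
        controlSessions.put(computerId, session);
    }

    /** Called when a client node (e.g., Node A) relays a request to a server node. */
    String relayRequest(String clientComputerId, String serverComputerId, byte[] encodedRequest) {
        ControlSession control = controlSessions.get(serverComputerId);
        if (control == null) {
            throw new IllegalStateException("Node " + serverComputerId + " is not registered here");
        }
        String sessionId = UUID.randomUUID().toString();
        pendingBySession.put(sessionId, new PendingRequest(clientComputerId, encodedRequest));
        control.requestCallback(sessionId); // server node calls back with this session ID
        return sessionId;
    }

    /** Called when the server node's callback connection (carrying the session ID) arrives. */
    PendingRequest takePending(String sessionId) {
        return pendingBySession.remove(sessionId);
    }
}
```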
  • In a use case, user A is behind a firewall or other network device preventing a local connection to User B. In this use case, the Nexus may delegate a new relay session dynamically for Node request (if the requested server node is considered online), find and query the relay server, and return this relay endpoint to the requesting Node. The Node may use this to establish a relay.
  • In various embodiments, HTTP timeouts may need to be long enough for the relay to function. Due to the CTRL port architecture, the Nexus may respond just as quickly (if not faster) when a node server is offline. External code may be used to address the switching logic (e.g., whether to use the relay or not).
  • The Relay Servers may have easy-to-query metrics, such as:
      • Current number of active control sessions (this indicates how many nodes are connected in the server capacity);
      • Current number of active relay sessions (aggregate);
      • Average number of sessions/node; and
      • Bandwidth (aggregate and rate).
  • The Relay Servers may present metric endpoints over the protocol interface (e.g., HTTP interface) to the Nexus for aggregation into the Nexus user interface.
  • The Relay Server may have the ability to deploy its logs over this protocol interface as well to the Nexus, thus enabling centralized administration.
  • The Relay Server may be reconfigurable from the Nexus user interface. There may be a mechanism to force re-election of which nodes connect to which relay server in the case of failure or bad connectivity.
  • Sharing Infrastructure
  • The system may include a sharing infrastructure feature. The goal of this feature is to provide a flexible backend for our sharing infrastructure. This means the Nexus may support advanced features like access control lists (ACLs) for each share.
  • A secondary goal is to minimize the number of changes to the overall infrastructure in each phase of the build out. This lets the system or an administrator of the system go back and test and do a sanity check on the overall approach.
  • In a use case, user A wishes to share a folder with user B and user C that is synchronized across all of their machines.
  • It may help to think of shares like regular Nodes on a given user's network. The Nexus' existing endpoints may be slightly modified and used so that in general the shares work identically to the existing “my cloud” sync.
  • As used herein, a “share” is defined by the following elements:
  • UUID: The immutable unique ID assigned when the share is created. It is used to uniquely identify a share across the system.
  • ACL_ID: The access control list identifier which identifies which users have which permissions with regard to this share. Examples may include OWNER, READ, WRITE.
  • Friendly Name: This is what the user calls a share, which is simply a string that will usually end up being the name of the folder representing the share.
  • Implicitly, implementation of a share necessitates the following:
  • The Nexus must maintain a list of all shares and govern the unique generation of the UUIDs;
  • The Nexus must resolve the ACL_ID into a set of permissions for a particular user;
  • The Nexus must enumerate which shares a user has access to;
  • The Node must be informed which Nodes contain shares it has access to;
  • The Node must be able to connect to other users' Nodes to synchronize shares;
  • The Node must be able to provide index information independently for each share;
  • The Node must NOT be able to connect to other users' Nodes with which it does not need to synchronize shares; and
  • The Node must NOT be able to synchronize anything except shares it has access to.
  • Initially, a share may be created with an owner. We'll assume User A uses a GUI to create a Share entitled ABCSHARE. In various embodiments, immediately after this event, the following happens:
  • The new share is added to the nexus DB. A new UUID is generated for this share (i.e., the SHARES table has a new record inserted with: UUID, “ABCSHARE”, the SHARESACL table has a new record inserted with: UUID, UserA.UserId, PERMISSIONS (READ=true,WRITE=true,OWNER=true));
  • The nexus forces a Share Data Refresh on User A;
  • At this point, the share is now available in the system infrastructure and is only accessible to User A's Nodes;
  • On the Share Data Refresh, the Node receives a list of available shares, which currently contains our one new Share (List<Shares> contains one element: (UUID, “ABCSHARE”, ACL));
  • The Node checks its cache of shares (empty) against the list received in the Share Data Refresh (UUID,“ABCSHARE”, ACL);
  • The Node identifies this as a new share and automatically creates a folder “ABCSHARE” in the %USERDATA%/Shares folder (this is known as the “mount point”);
  • The Node updates its cache of shares and associates the mount point %USERDATA%/Shares/ABCSHARE with the UUID. This association is stored locally on the node only, using the Folders service. Logic will be needed at some point here to handle the case where the mount point already exists; and
  • The Indexing Service is told to begin indexing the new share (the Indexing Service must be made aware of an indexed path's UUID, the System Properties table must now track an Index Revision for each UUID in addition to the Index Revision for “my cloud” or the endpoint must calculate this dynamically from the Index table).
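A minimal sketch of the nexus-side share creation sequence above. Table and column names beyond SHARES, SHARESACL, the UUID, and the friendly name are assumptions, as is the forceShareDataRefresh helper:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.UUID;

final class ShareCreation {
    /** Creates a share owned by ownerUserId and triggers a Share Data Refresh. */
    static UUID createShare(Connection db, long ownerUserId, String friendlyName) throws Exception {
        UUID uuid = UUID.randomUUID();
        try (PreparedStatement shares = db.prepareStatement(
                     "INSERT INTO shares (uuid, friendly_name) VALUES (?, ?)");
             PreparedStatement acl = db.prepareStatement(
                     "INSERT INTO sharesacl (uuid, user_id, can_read, can_write, is_owner)"
                     + " VALUES (?, ?, ?, ?, ?)")) {
            shares.setString(1, uuid.toString());
            shares.setString(2, friendlyName);
            shares.executeUpdate();

            // Owner permissions: READ=true, WRITE=true, OWNER=true.
            acl.setString(1, uuid.toString());
            acl.setLong(2, ownerUserId);
            acl.setBoolean(3, true);
            acl.setBoolean(4, true);
            acl.setBoolean(5, true);
            acl.executeUpdate();
        }
        forceShareDataRefresh(ownerUserId); // hypothetical push to the owner's online nodes
        return uuid;
    }

    private static void forceShareDataRefresh(long ownerUserId) {
        // Placeholder: notify the user's online nodes that their share list changed.
    }
}
```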
  • Now this node is prepared for synchronization of updates with other Nodes. The following paragraphs describe the synchronization sequence with the shared folder.
  • First, Nodes Heartbeat with Nexus.
  • The heartbeat message may have an optional list of “desired” UUIDs which represent the Share UUIDs for which sync information is also being sought;
  • The Heartbeat message may return a set of UUID-ComputerInformation tuples. (The computers belonging to “my cloud” may have an implicit UUID of 0, and shares owned by other users' computers will have a UUID matching that of the share. This allows the node to resolve where assets are based on the assets' UUID);
  • The Node (or Nexus) may collapse this returned message to remove duplicate computers and creates a list of available UUIDs for each computer. This is stored in a Share Location Service on the node; and
  • The handshake process may occur, and modifications in the nexus may allow for sessions to be granted between nodes that have READ or greater access to other nodes via shares.
  • Second, the Node Synchronization Service now may iterate through the normal connected computers as well as others that may have our share. For computers that have a UUID of 0, the same existing logic is followed. For computers with a UUID!=0, we follow similar logic for my cloud except, for example:
  • The ACL is checked before each operation to see if that operation is permitted;
  • The remote revision given is the remote revision for that UUID;
  • The index is updated against that UUID; and
  • folders are synced to that UUID's local mount point.
  • Third, independently, the Node External Service may check its local ACLs before allowing nodes to perform certain operations. Currently all operations on this endpoint are READ ONLY except upload, but all should perform an ACL verification.
  • In various embodiments, if a user deletes an entire shared folder, this may not be automatically propagated by default, unless the user does it from the web UI or confirms with a dialog on the node, for example.
  • In various embodiments, the system specifically logs ACL failures at the nexus and node. This may indicate that a node is very close to being able to do something it shouldn't be able to, most likely pointing to a bug in the client side ACL enforcement code.
  • The above description assumes a single session between two nodes that may be used for accessing data granted under many different ACLs. For instance, if two nodes have ten shares that they both independently belong to, only one session token may be given by the nexus.
  • The ACL cache on the client side may be used to eliminate useless (non-permitted) queries from a client node to a server node. Therefore, it may only be necessary to send down the ACL from the perspective of the requesting client, and instead perform the ACL enforcement on the nexus (in the same call as the session keeping).
  • In various embodiments, the renaming/moving of share mount points works the same way as with the my cloud folder implementation.
  • Nexus Session Updates
  • To start the process of migrating to nexus-managed sessions between nodes, we'll need to move to a better structure for tracking the sessions each node keeps with the nexus, the Nexus Session. This Nexus Session is activated by calling the nexus' “refresh” endpoint with a valid ComputerToken.
  • This Nexus Session is considered valid in various scenarios, such as:
  • The client has consistently heartbeated with the nexus within some interval (the nexus session reaper interval);
  • All ACLs the user is a part of have remained constant since the time the nexus session was created (the user ACL revision is current); and
  • The Time Ended stored for the nexus session is NULL.
  • FIG. 12 is a table illustrating examples of data items 1200 that each nexus session may persistently keep track of. As shown, the data items 1200 may include a nexus session ID, a nexus session token, a computer ID, a user ID, a user ACL revision, a time created, or a time ended.
  • The Time Ended will become NON-NULL if the client fails to heartbeat within the nexus session reaper interval or if the ACLs the user is a part of change.
  • In various embodiments, the User ACL Revision is instantiated before a session is created. This alleviates any race conditions (e.g., if the ACL revision is updated via a race condition).
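A minimal sketch of the nexus-session validity rules above; field names and the reaper-interval value are illustrative:

```java
import java.time.Duration;
import java.time.Instant;

final class NexusSession {
    static final Duration REAPER_INTERVAL = Duration.ofMinutes(5); // placeholder value

    final long computerId;
    final long userId;
    final long userAclRevision;   // ACL revision captured when the session was created
    Instant lastHeartbeat;
    Instant timeEnded;            // null while the session is open

    NexusSession(long computerId, long userId, long userAclRevision) {
        this.computerId = computerId;
        this.userId = userId;
        this.userAclRevision = userAclRevision;
        this.lastHeartbeat = Instant.now();
    }

    /** Valid only if heartbeats are fresh, ACLs are unchanged, and Time Ended is NULL. */
    boolean isValid(long currentUserAclRevision, Instant now) {
        boolean heartbeatFresh =
                Duration.between(lastHeartbeat, now).compareTo(REAPER_INTERVAL) <= 0;
        boolean aclUnchanged = userAclRevision == currentUserAclRevision;
        return timeEnded == null && heartbeatFresh && aclUnchanged;
    }

    /** Missed heartbeats or ACL changes end the session (Time Ended becomes non-null). */
    void end(Instant now) {
        this.timeEnded = now;
    }
}
```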
  • Node Session Updates
  • In various embodiments, node session management and canonical storage is performed at the nexus. Node sessions are created by a client requesting to handshake with a node that it has access to data from.
  • In various embodiments, this Node Session is considered valid in various scenarios, such as:
  • The From & To Nexus Sessions that this node session was established under are valid (The To Session is important in case the To computer goes offline); and
  • The Time Ended stored for the nexus session is NULL.
  • FIG. 13 is a table depicting examples of data items 1300 that each node session may keep track of. As shown, the data items 1300 may include a nexus session ID, a nexus session token, a From ID, a to ID, a from nexus session ID, a to nexus session ID, a time created, or a time ended.
  • The Time Ended will become NON-NULL if From or To Nexus Sessions become invalid or if a Nexus Session is deleted. In various embodiments, node session storage referencing these deleted nexus sessions will be lost.
  • During client heartbeating with the nexus, the nexus can check open sessions to other computers and get the index revision numbers for those other nodes the client has access to. In various embodiments, node index revisions are tracked in the nexus and heartbeat times are lowered, so sync can happen more quickly.
  • Encryption
  • The goal of encryption may be to prevent attackers or unprivileged users from reading data or spoofing data between the nodes or between the nodes and nexus.
  • In various embodiments, the encryption feature ensures the following:
  • A message cannot be read by any third party;
  • A message cannot be replayed long after it was originally sent; and
  • It may be verified that the message was not modified in any way during transmission.
  • The system may implement the encryption feature by using a combination of techniques, such as:
  • Advanced Encryption Standard (AES) encryption (e.g., to try to make it impossible for any third party to read the data);
  • Changing the Key for the AES encryption and the hash-based message authentication code (HMAC) (keyed-hashing for message authentication) at some time interval and using a different key for every pair of nodes;
  • Using SHA-1 and HMAC algorithms (e.g., to guarantee message authenticity); and
  • Changing the secret key required by AES and HMAC over time (e.g., to accomplish replay attack resistance).
  • FIG. 14 is a description 1400 of an example embodiment of what message construction may look like.
  • In various embodiments, communication between a node and a nexus will use standard SSL for authenticity, integrity and encryption.
  • Encryption may be used for password protection.
  • Encryption may be used for access restriction/authentication.
  • Encryption may be used to encrypt additional information (e.g., general network traffic, message data, or video data).
  • In various embodiments, the “nexus” is a central server software stack. “Nodes” are clients that are deployed to and installed on user computers and devices. A “node client” is a node that is requesting information in a transaction between nodes. A “node server” is a node that is serving information in a transaction between nodes.
  • In various embodiments, for node-nexus communication, industry-standard SSL is used with 256-bit AES encryption in cipher-block-chaining (CBC) mode, SHA-1 message authentication, and Diffie-Hellman RSA asymmetric key exchange.
  • In various embodiments, for node-node communications, 128-bit AES is used in cipher-block-chaining (CBC) mode with PKCS#5 padding and SHA-1 message authentication. The implementation encrypts the ‘data’ content of messages used in a proprietary protocol between nodes on the Adept Cloud network.
  • In various embodiments, for node-nexus communication, asymmetric keys are managed using traditional SSL Public-key infrastructure (PKI). The system may have a wildcard SSL certificate whose private keys are known only to the system; public keys are verified by a trusted SSL root authority. The signature used may be SHA-1 RSA, and the key modulus is 2048 bits.
  • In various embodiments, for node-node communication, symmetric encryption keys are distributed by way of the node-nexus SSL communication layer. The private keys (and other metadata) may be sent to nodes on demand and upon certain exceptional events (such as a user permission change). In various embodiments, node-node keys are never stored on node clients except in temporary memory (e.g., RAM) and have a maximum lifetime of 24 hours. In various embodiments, no asymmetric encryption is used for node-node communication, so no modulus sizes are supported.
  • In various embodiments, the plain text consists of proprietary messages that define the protocol used between nodes and nodes-nexus. Some of these messages may be compressed using gzip or other industry-standard data compression techniques.
  • In various embodiments, node-nexus communication uses standard SSL after which no further post-processing methods are applied to the ciphertext.
  • In various embodiments, in node-node communication, the ciphertext is encapsulated with an unencrypted message header and an unencrypted message footer. The message header may consist of a hashed client identifier, the length (in bytes) of the message ciphertext, the IV (initialization vector) used to encrypt the ciphertext (randomized for each message) and SHA-1 HMAC of the unencrypted message header to authenticate the header contents. The message footer may contain a SHA-1 HMAC of the union of the message header and the ciphertext. Node-nexus communication may employ standard SSL over TCP using TLS1.0 or greater.
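  • The following Java sketch illustrates, purely as a non-limiting example, how a node-node message with the layout described above might be sealed: an unencrypted header (hashed client identifier, ciphertext length, per-message random IV) with a SHA-1 HMAC of the header, the AES-128/CBC/PKCS#5 ciphertext, and a footer HMAC over the union of the header and the ciphertext. The class and method names are assumptions for this example only.

import javax.crypto.Cipher;
import javax.crypto.Mac;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.security.SecureRandom;

public class NodeMessageCodec {
    // Seals one node-node message: [header][header HMAC][ciphertext][footer HMAC].
    static byte[] seal(byte[] plaintext, byte[] aesKey, byte[] hmacKey,
                       byte[] hashedClientId) throws Exception {
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);                    // randomized for each message

        Cipher aes = Cipher.getInstance("AES/CBC/PKCS5Padding");
        aes.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(aesKey, "AES"),
                new IvParameterSpec(iv));
        byte[] ciphertext = aes.doFinal(plaintext);

        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(hmacKey, "HmacSHA1"));

        // Unencrypted header: hashed client identifier, ciphertext length, IV.
        ByteBuffer header = ByteBuffer.allocate(hashedClientId.length + 4 + iv.length);
        header.put(hashedClientId).putInt(ciphertext.length).put(iv);
        byte[] headerBytes = header.array();
        byte[] headerMac = hmac.doFinal(headerBytes);        // authenticates the header

        // Footer: HMAC over the union of the header and the ciphertext.
        hmac.update(headerBytes);
        hmac.update(ciphertext);
        byte[] footerMac = hmac.doFinal();

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(headerBytes);
        out.write(headerMac);
        out.write(ciphertext);
        out.write(footerMac);
        return out.toByteArray();
    }
}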
  • In various embodiments, node-node communication may support only a proprietary encryption protocol over TCP or UDP.
  • In various embodiments, node-nexus communication may make use of a Java SSL library (e.g., provided by Oracle), which inherently prevents user modification of encryption algorithms, key managements and key space.
  • In various embodiments, node-node communication uses a proprietary protocol which does not allow for protocol negotiation. This may prevent users from modifying the encryption algorithms without being denied access to a remote resource. Key management may be enforced for both the client node and the server node by the nexus, so in the event a client attempts to use an old or invalid key, the node-node communication will be terminated as the key will be denied when the server node attempts to verify the invalid key with the nexus.
  • In various embodiments, centralized key management is performed for all users by the system infrastructure (e.g., the nexus). This means there may be a number of encrypted data channels equal to the number of active computers on the system, which may be equal to the aggregate number of computers owned by all users.
  • Organizational Structure
  • Businesses may have a critical need to create an organization in the system in which they can manage users and their data centrally.
  • Here's an example use case: I'm an Administrator of an Organization and I want to deploy the system fully set up for my organization. As an administrator, I need to be able to add users. As a user of an organization, I want my teammates to be initialized to all members of my organization. As an administrator, I want to be able to create and manage clouds. As an administrator, I want to be able to see and manage all computers/devices in my organization.
  • Organizations
  • An organization is a managed set of users and resources. An organization may have one or more super-administrators. In various embodiments, organizations are not to be tied to a specific domain. For example, soasta.com and gmail.com email addresses may be used in the same organization.
  • Super-administrators may have an option to restrict sharing to only users within the organization.
  • In various embodiments, users have an opportunity to be a member of only one organization. If an administrator attempts to add a user to an organization and the user is already in another organization, an error may be returned and presented to the administrator. Users may be invited to more than one organization if they haven't accepted an invite into an organization yet.
  • Role Based Security
  • In various embodiments, the system uses a permissions-based access control system that is notionally tied to cloud privileges (e.g., for a given cloud, a user will have permission (e.g., Owner, Write, Read, None)).
  • In various other embodiments, the system uses a privilege-based, resource-centric access control system.
  • A resource is an entity within the Adept Cloud system, such as an organization or a cloud, that requires users to have specific privileges to perform specific actions.
  • A role is a set of privileges. In various embodiments, native roles are assigned to group common privileges into colloquial buckets. For example, Role.CLOUD_READER will include Privilege.VIEW_CLOUD and Privilege.READ_CLOUD_DATA.
  • A privilege is a positive access permission on a specific resource. Each privilege has an intrinsic role for simplicity of defining ACLs in implementation.
  • An access control list for a resource may map users to roles. User access queries on an access control list may return a set of all roles.
  • A catch-all role of ROLE_NONE may be added. In various embodiments, this role can never be granted, and is only returned upon queries for user privileges on a resource when the user has no granted privileges.
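  • By way of example and not limitation, the privilege-based model described above might be sketched in Java as follows; the CLOUD_WRITER role and the WRITE_CLOUD_DATA and MANAGE_CLOUD privileges are assumed names added only to round out the example.

import java.util.*;

enum Privilege { VIEW_CLOUD, READ_CLOUD_DATA, WRITE_CLOUD_DATA, MANAGE_CLOUD }

enum Role {
    ROLE_NONE(),                                                   // catch-all, never granted
    CLOUD_READER(Privilege.VIEW_CLOUD, Privilege.READ_CLOUD_DATA),
    CLOUD_WRITER(Privilege.VIEW_CLOUD, Privilege.READ_CLOUD_DATA, Privilege.WRITE_CLOUD_DATA);

    final Set<Privilege> privileges;

    Role(Privilege... granted) {
        Set<Privilege> s = EnumSet.noneOf(Privilege.class);
        s.addAll(Arrays.asList(granted));
        this.privileges = s;
    }
}

class AccessControlList {
    // Access control list for one resource: maps users to granted roles.
    private final Map<String, Set<Role>> userRoles = new HashMap<>();

    void grant(String userId, Role role) {
        if (role == Role.ROLE_NONE) {
            throw new IllegalArgumentException("ROLE_NONE can never be granted");
        }
        userRoles.computeIfAbsent(userId, k -> EnumSet.noneOf(Role.class)).add(role);
    }

    // User access queries return all granted roles, or ROLE_NONE when none exist.
    Set<Role> rolesFor(String userId) {
        Set<Role> roles = userRoles.get(userId);
        return (roles == null || roles.isEmpty()) ? EnumSet.of(Role.ROLE_NONE) : roles;
    }

    boolean hasPrivilege(String userId, Privilege privilege) {
        return rolesFor(userId).stream().anyMatch(r -> r.privileges.contains(privilege));
    }
}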
  • In various embodiments, required information for organization creation includes the name of the organization and email address (e.g., provided by an administrator).
  • The system may create an organization. If an administrator is not an existing system user, the system will create a new system account for the email address used for signup. In various embodiments, the system does not send an activation email to the user yet.
  • The system may set the administrator's system account as a user of the organization, with role ORGANIZATION_SUPER_ADMINISTRATOR.
  • If the administrator was not an existing system user, the system may send an activation email now.
  • Administration features of the system may include the following:
  • An Admin tab available to any super-user;
  • The Admin tab may have applications (Users, Clouds, Computers, Settings);
  • A Users application that enables ORGANIZATION_USER_MANAGERs to create a user for the organization with options to set their authentication and metadata information, send an activation email tailored to the organization, highlight the name of the organization in the email and in the activation page; invite an existing cloud user to join their organization; view, modify, and/or delete organization users; view, modify, and/or delete user computers and devices if the administrator is an ORGANIZATION_COMPUTER_MANAGER;
  • A clouds application that enables ORGANIZATION_CLOUD_MANAGERs to add/view/modify/delete clouds, managing any cloud created by any organization user;
  • A computers application that enables ORGANIZATION_COMPUTER_MANAGERs to view/modify/unlink computers and devices registered to users of the organization;
  • A settings application that enables ORGANIZATION_SUPER_ADMINISTRATORs to add/view/modify/remove users with admin privileges; view/modify organization-wide settings; optionally limit cloud membership to organization users only; delete the organization and all its users.
  • In various embodiments, for user organization mapping, organization membership will be strongly tied to a user object.
  • Backup and Versioning
  • Before deploying the system, a backup and versioning system may be installed to enable the system to recover data in the case of accidental misuse by users or a bug in the system software.
  • The backup and versioning system may include the following features:
  • Maintain a copy of all entities (clouds) for an organization;
  • Serve as a data source for nodes/mobile clients to sync with;
  • Maintain a history of changes made to the files inside the entity; and
  • Have the ability to browse to and serve up specific revisions of the files in an entity from a fat client.
  • In various embodiments, the backup server is able to serve an organization with a predetermined number of users without becoming a bottleneck and is not permitted to delete files from its file system (e.g., it may only write metadata indicating that the files are deleted). This allows the system to better guarantee that it cannot permanently lose data due to a programming or user error.
  • In various embodiments, the backup server is backed by a data store that includes the following features:
  • Independent across entities (e.g., so corruption in one entity doesn't result in a total loss across the organization);
  • Simple to backup and restore (e.g., performing a backup of a “data” directory is enough to recreate the entire state in case the backup server computer needs to be restored); and
  • Eventually scalable so backup servers can be used in parallel or have the workload split in some way between multiple instances.
  • In various embodiments, the backup server consists of three major components: (1) a Synchronization Service that polls the nexus for revision updates on each node and then contacts each node to get the contents of those revisions; (2) a Revisioning File Storage Service that provides the backing store for the data content, maintains a revision history on a per-file basis, satisfies the no-delete constraint, intelligently stores revisions to files so as not to use an exorbitant amount of space; maintains an index of the head revision plus all other revisions; (3) a Restore Service that provides the endpoint and backing services for clients to browse for and retrieve versions of backed up files, mimics the existing endpoints for synchronization to the head revision for regular nodes (so standard nodes can sync with the backup node), and works in tandem with the Revisioning File Storage service to actually retrieve the file listings and data itself.
  • In various embodiments, the Synchronization Service works mostly like a synchronization service that a node may already have. The general cycle is for a particular entity: (1) Contact the nexus to get the current revision numbers for all other nodes of that entity, and (2) Loop through each node: compare a locally cached revision number for that node against what was received from the nexus; retrieve a list of updates from the node by issuing our last cached revision number (getUpdates); relay each update operation to the Revisioning File Storage Service; and upon success, update the locally cached revision number for that node.
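  • A condensed, non-limiting Java sketch of the per-entity backup synchronization cycle just described follows; the Nexus, NodeClient, RevisioningFileStorage, and Update interfaces are assumptions standing in for the corresponding services.

import java.util.Map;

public class BackupSyncCycle {
    private final Nexus nexus;                       // assumed nexus client
    private final Map<Long, Long> cachedRevisions;   // nodeId -> last synced revision
    private final RevisioningFileStorage storage;    // assumed backing store

    BackupSyncCycle(Nexus nexus, Map<Long, Long> cachedRevisions,
                    RevisioningFileStorage storage) {
        this.nexus = nexus;
        this.cachedRevisions = cachedRevisions;
        this.storage = storage;
    }

    void syncEntity(long entityId) {
        // (1) Current revision numbers for all other nodes of this entity.
        Map<Long, Long> remoteRevisions = nexus.getRevisions(entityId);

        // (2) Loop through each node and catch up where the backup is behind.
        for (Map.Entry<Long, Long> e : remoteRevisions.entrySet()) {
            long nodeId = e.getKey();
            long cached = cachedRevisions.getOrDefault(nodeId, 0L);
            if (e.getValue() <= cached) continue;            // already up to date

            for (Update u : nexus.node(nodeId).getUpdates(entityId, cached)) {
                storage.apply(u);                            // relay to the storage service
                cached = u.revision();
            }
            cachedRevisions.put(nodeId, cached);             // persist only upon success
        }
    }

    interface Nexus { Map<Long, Long> getRevisions(long entityId); NodeClient node(long nodeId); }
    interface NodeClient { Iterable<Update> getUpdates(long entityId, long sinceRevision); }
    interface RevisioningFileStorage { void apply(Update update); }
    interface Update { long revision(); }
}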
  • There may be differences between the backup server's sync service and the node's sync service. For example, the backup server may have one sync service per entity instead of one per server; the backup server sync service may use thread pools instead of an explicit sync thread; the backup server sync service may not have a lock service since its indexing service is only accessed by the sync service (i.e., not the file system (FS) watcher) (alternatively, entity scope locking could be used); and the backup server sync service may send FS operations to the Revisioning File Storage Service instead of performing them directly on the FS.
  • Even if the backup service never modified files, a conflict can still occur. For example:
  • Node A, Node B, and backup server S are in the same entity.
  • Node A modified a file F offline.
  • Node B modified the same file F offline, in a different way.
  • Node A comes back online first, and now Server S gets node A's file, F(A) [A−1].
  • Node B comes back online later, and Server S tries to get node B's file F(B) [B−1].
  • Server S detects the conflict.
  • In such cases, the system may detect a fork and start tracking revisions independently. In various embodiments, the backing store is aware of the versioning clock. In other embodiments, the system may just pick a revision to track (e.g., the first one) and ignore all conflicts. However, if the wrong revision is picked, data could be lost.
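  • By way of illustration, a version-clock comparison of the kind implied above may be sketched in Java as follows: if neither clock dominates the other, the two revisions were produced concurrently and a fork should be tracked. The representation used here is an assumption for this example.

import java.util.HashMap;
import java.util.Map;

public class VersionClock {
    private final Map<String, Long> entries = new HashMap<>();   // nodeId -> counter

    void increment(String nodeId) {
        entries.merge(nodeId, 1L, Long::sum);
    }

    // True if this clock is >= the other for every node (i.e., this edit "happened after").
    boolean dominates(VersionClock other) {
        for (Map.Entry<String, Long> e : other.entries.entrySet()) {
            if (entries.getOrDefault(e.getKey(), 0L) < e.getValue()) return false;
        }
        return true;
    }

    // Neither clock dominates: the edits were concurrent, so a fork exists.
    static boolean isConflict(VersionClock a, VersionClock b) {
        return !a.dominates(b) && !b.dominates(a);
    }
}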
  • The Revisioning File Storage Service is the heart of the backup and versioning service. Effectively, this component acts as an append-only, versioning file system that is used by the Backup Synchronization Service (above).
  • Verbs used by the existing Synchronization Service may include the following (see doUpdate):
  • Directory Functions:
  • Delete Directory (recursive); and
  • Create Directory (recursive).
  • File Functions:
  • Delete File If Exists;
  • Write File; and
  • Read file.
  • Each entity may be thought of as having a different revisioning file system (RFS), where that entity acts as the root of the FS.
  • At the basic level, the RFS provides all the features of a regular FS, namely supporting the reading, writing, and deletion of files and the creation and deletion of folders. It differs in the following ways:
  • When a file is deleted, the contents of the file are not actually deleted. Instead, some metadata is changed so the FS reports the file as being deleted;
  • When a folder is deleted, the contents of the folder and files in the folder are not deleted. Instead, the metadata for that folder and its children are changed; and
  • When a file is written to, it is not overwritten; instead, data is stored so that the new revision and some number of previous revisions can be generated upon request.
  • The FS has the concept of a “revision”, which is a number that represents a global state of the file system.
  • The FS supports queries such as the following:
  • Return all revisions where file X changed;
  • Return all revisions where directory X changed;
  • Return file X at revision R;
  • Possibly return directory X at revision R;
  • Return all revisions back to date D (implies a map from D->revisions).
  • In general, each “transaction” on the FS increments the revision by one. When a user is browsing the FS, they generally do so against the head revision (the global maximum), as this should reflect what a standard FS would look like.
  • In various embodiments, each file or directory full modification constitutes a transaction. A full modification means a full replacement of the file, so intermediate edits by something like Rsync would not result in a new transaction as this would cause files to have an invalid state.
  • A possible candidate implementation is Revisioning Index+Native File System. In this implementation, every file and folder is stored on the native file system as normal. In addition, the cloud has a revisioning index which is a database that contains an entry for each file/folder action and its metadata as well as some global metadata for the database itself. Note that the data stored in the database may be tightly coupled to the underlying file revisioning strategy. The database has a row for every single revision of every file/directory. Therefore, the row ID ends up being the revision number.
  • FIG. 15 is a table illustrating an example embodiment of a database table 1500 for an example candidate implementation of a revisioning file storage service.
  • The database table 1500 includes the following fields:
  • revision—the PK (e.g., primary key) of the table, an incremental counter that effectively equals the row index;
  • filename—the relative path of the file in the entity. The filename is the same for both directories and files;
  • directory?—indicates if this is a directory;
  • deleted?—indicates if the file is marked as deleted;
  • adler32—the adler32 checksum of the file, 0 for directories;
  • size—the file size, 0 for directories;
  • file timestamp—the propagated time stamp of the file;
  • backup timestamp—the time when this record was inserted;
  • version clock—the version clock associated with this operation; and
  • data loc—the “pointer” to the data for this version (in the most basic case, this is a file location on the hard drive; Null for directories).
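  • As a non-limiting illustration, the revisioning index of FIG. 15 might be created and appended to as follows (SQLite-style SQL via JDBC and the column names are assumptions made for this example); the primary key serves as the revision number, and rows are only ever inserted, never deleted.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class RevisioningIndex {
    static void createTable(Connection db) throws SQLException {
        try (Statement s = db.createStatement()) {
            s.execute("CREATE TABLE IF NOT EXISTS revision_index ("
                    + " revision INTEGER PRIMARY KEY AUTOINCREMENT,"   // PK == row index
                    + " filename TEXT NOT NULL,"                        // relative path in the entity
                    + " is_directory INTEGER NOT NULL,"
                    + " is_deleted INTEGER NOT NULL,"
                    + " adler32 INTEGER NOT NULL,"                      // 0 for directories
                    + " size INTEGER NOT NULL,"                         // 0 for directories
                    + " file_timestamp INTEGER NOT NULL,"               // propagated file time stamp
                    + " backup_timestamp INTEGER NOT NULL,"             // time this record was inserted
                    + " version_clock TEXT NOT NULL,"
                    + " data_loc TEXT)");                               // pointer to data; null for directories
        }
    }

    // Every full modification of a file or directory inserts one new row,
    // which implicitly increments the head revision.
    static void recordWrite(Connection db, String filename, boolean directory,
                            long adler32, long size, long fileTs,
                            String versionClock, String dataLoc) throws SQLException {
        String sql = "INSERT INTO revision_index (filename, is_directory, is_deleted,"
                + " adler32, size, file_timestamp, backup_timestamp, version_clock, data_loc)"
                + " VALUES (?, ?, 0, ?, ?, ?, ?, ?, ?)";
        try (PreparedStatement ps = db.prepareStatement(sql)) {
            ps.setString(1, filename);
            ps.setInt(2, directory ? 1 : 0);
            ps.setLong(3, adler32);
            ps.setLong(4, size);
            ps.setLong(5, fileTs);
            ps.setLong(6, System.currentTimeMillis());
            ps.setString(7, versionClock);
            ps.setString(8, dataLoc);
            ps.executeUpdate();
        }
    }
}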
  • In various embodiments, the Restore Service consists simply of JSON endpoints (and possibly a rudimentary GUI/CLI).
  • Endpoints may include the following (similar to existing nodes):
  • browseRevisions—Input: Relative Path (string), UUID (long), fromRevision (long); Output: List of Browse Elements (augmented with revisions). Returns immediate children elements of a particular path between the fromRevision and the HEAD. Relative Path may be a folder that existed between fromRevision and Head, otherwise error;
  • retrieveFileRevisions—Input: Relative Path (string), UUID (long), fromRevision (long), Output: List of Browse Elements (augmented with revisions). May return only revisions for the requested file, as a file (meaning if this path was once a directory, those are omitted from this response). Relative path must be a FILE that existed between fromRevision and Head, otherwise error;
  • retrieveFileAtRevision—Input: Relative Path (string), UUID (long), revision (long); Output: file. Relative path may be a FILE that existed at the requested revision, otherwise error;
  • getRevisionFromDate—Input: Revision (long), Output (Date (UTC)).
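  • A minimal Java interface sketch of the Restore Service endpoints listed above is shown below; the BrowseElement type is an assumed stand-in for a browse element augmented with revision information.

import java.io.File;
import java.util.Date;
import java.util.List;

public interface RestoreService {
    // Immediate children of a path, for changes between fromRevision and HEAD.
    List<BrowseElement> browseRevisions(String relativePath, long uuid, long fromRevision);

    // All revisions of a specific file (entries where the path was a directory are omitted).
    List<BrowseElement> retrieveFileRevisions(String relativePath, long uuid, long fromRevision);

    // The file content as it existed at a specific revision.
    File retrieveFileAtRevision(String relativePath, long uuid, long revision);

    // The (UTC) date associated with a given revision, per the endpoint list above.
    Date getRevisionFromDate(long revision);

    class BrowseElement {
        public String relativePath;
        public boolean directory;
        public long revision;
        public long size;
    }
}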
  • Deployment
  • The system is easily deployable (e.g., by an organization's IT staff). The deployment process is designed to be painless and to avoid manual editing of configuration scripts and the like.
  • Initially, the Storage Agents will be deployed by the system at select customer premises. If allowed, LogMeIn or some other remote access solution may be installed so that the system can be managed remotely after on-site deployment.
  • In various embodiments, Storage Agents will automatically download updated Storage Agent software (similar to how nodes do now) and will be able to install updates from a central web interface.
  • In various embodiments, the “unboxing” experience includes the following steps:
  • An organization admin (or similar privileged organization user) downloads a “special” storage agent from the nexus;
  • The setup package is injected in real time with a unique token identifying the organization it is associated with;
  • The installer requires the admin to enter the storage agent daemon password for the org;
  • The installer asks the administrator for a name to identify this storage agent;
  • The installer asks the administrator for location(s) to store backups;
  • The installer optionally presents the administrator with different backup strategies (e.g., how long to keep data, etc.) and whether the backup server should be enabled now; and
  • The installer installs the application as a service.
  • Once this is complete, the Storage Agent application is entirely managed from the nexus.
  • Access
  • In various embodiments, the access service may be a component of Onsite. The access service may give organizations the ability to let their users access their clouds from a web browser.
  • In various embodiments, a user (e.g., an employee of an organization) uses the access component to:
  • Download files in any of my clouds;
  • Upload files to a specific folder in any of my clouds;
  • Delete files in any of my clouds;
  • Search for files by name in one or more of my clouds;
  • Rename files or folders in any of my clouds;
  • Move files or folders within the same cloud for any of my clouds;
  • Access a file on a particular computer that is not in a cloud already;
  • Access previous revisions and deleted folders/files;
  • Get a link to a file the user can send to someone else who already has cloud access;
  • Get a link that is public to the external world (e.g., no username or password);
  • Comment on files, see previous comments, and subscribe to a comment feed;
  • Subscribe to a feed for file modifications on a server (this is a hand-in-hand feature with storage agents);
  • Browse pictures and videos (e.g., in a thumbnail view); and
  • View content for media (e.g., in a special shadowbox with OTF video conversion where appropriate).
  • In various embodiments, an organization administrator uses the access component to:
  • Control at a user level who can use the access server;
  • Control at a cloud level whether access servers can access a cloud;
  • Have the same controls enforced as if files were accessed via a node; and
  • Have similar guarantees of the time it takes for an access list modification to take effect.
  • FIG. 16 is a block diagram depicting an example embodiment of a design 1600 of the access component. In various embodiments, the system is referred to as “AdeptCloud” and the Onsite component is referred to as “AdeptOnsite.”
  • In various embodiments, the access service acts a lot like the mobile clients on one end and on the other end serves an HTML user interface.
  • Effectively, every user who logs into the access service (using their system credentials) spools up a virtual node-like client which communicates with other nodes as if it itself were a node.
  • There is a fundamental issue with the access server: the user expects the state modifications they make to be guaranteed to propagate throughout the system. However, if the node the access server happens to connect to goes offline before the changes propagate, this expectation will be violated, resulting in user frustration. The system has various options for handling this issue.
  • One option is a thin “node-like” layer that sits in the access service and allows changes to be served directly from the access server as if it were a node. In various embodiments, this layer serves until the changes propagate “sufficiently”. Sufficient propagation may be based on propagation to a fixed number of peer nodes or a percentage of peer nodes. Sufficient propagation may be based on a time out.
  • Communication. Peers may make an incoming connection to the access server in order to get the changes (using retreiveUpdates etc.). Therefore, at some point messages may be dispatched to the appropriate “user handler” for which the message applies. The cryptoId may be sent to the nexus and the access server would then need to figure out not only who the remote user is, but which local user the remote node is trying to communicate with (which in turn may be answered by the nexus). The flow may be as follows:
  • A regular node requests a cryptoId from the nexus as usual, asking to connect to the Access Server;
  • The nexus issues the cryptoId between the node's computer and the synthetic computer of the access server;
  • The node connects to the access server with this cryptoId;
  • The access server attempts to verify this with the nexus, but instead of using its synthetic computer's nexus session, it uses a special nexus session for this purpose;
  • The nexus verifies the cryptoId and also returns who the intended computerId receiver was (which should be one of the synthetic computer IDs represented by the access server); and
  • The request is forwarded to the appropriate access client.
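  • By way of illustration only, the access-server side of the cryptoId flow above might be sketched in Java as follows; all interface and method names (e.g., verifyCryptoId, SyntheticClientRegistry) are assumptions for this example.

public class AccessServerDispatcher {
    private final NexusClient nexus;                  // uses the dedicated nexus session
    private final SyntheticClientRegistry clients;    // one virtual node-like client per logged-in user

    AccessServerDispatcher(NexusClient nexus, SyntheticClientRegistry clients) {
        this.nexus = nexus;
        this.clients = clients;
    }

    void handleIncoming(String cryptoId, NodeRequest request) {
        // The nexus both verifies the cryptoId and reports which synthetic
        // computer id the connecting node intended to reach.
        NexusClient.CryptoIdVerification v = nexus.verifyCryptoId(cryptoId);
        if (!v.valid()) {
            request.reject("invalid cryptoId");
            return;
        }
        // Forward the request to the appropriate access client.
        clients.forComputerId(v.intendedComputerId()).handle(request);
    }

    interface NexusClient {
        CryptoIdVerification verifyCryptoId(String cryptoId);
        interface CryptoIdVerification { boolean valid(); long intendedComputerId(); }
    }
    interface SyntheticClientRegistry { AccessClient forComputerId(long computerId); }
    interface AccessClient { void handle(NodeRequest request); }
    interface NodeRequest { void reject(String reason); }
}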
  • Another option is to keep the simple design of running every RPC call on the node, effectively making the access server a message delivery system. However, instead of connecting to just a single node, the access server may connect to N nodes for the particular cloud the front end is editing. If a user has “low connectivity”, i.e. only a single remote node is on, a small warning indicating this may be presented. With this option, fewer features may need to be custom implemented on the access server. This option may also support a future design where the always available node feature is handled by an external component running on the same computer (or even another server, perhaps a storage agent “lite”).
  • In various embodiments, users may see all connected computers via the web UI as well as the associated job queue.
  • Another option is to make the web interface extremely transparent to the fact that it's connected to a single node, and even make the propagation information available in the web. Note that this is actually a small subset of Option 2.
  • New endpoints on the nodes: download file, download files, upload file (either new file or new version of existing file), rename a file (delete it and synthetic upload), delete a file, move a folder (and its contents) to a new location, move a file to a new location.
  • Node Feature Interactions. The node may have specific code that prevents changes made by the node itself from being detected by the file system watcher and thereby incrementing its own version clock entry. In a similar way, these endpoints may need to make modifications to the existing files on the node, but may do so without the node changing the version clock for its own computer. Instead, these actions may change the version clocks by incrementing the entry for the synthetic computer which represents the user on the access server. This way, future logging and auditing may ensure the version clocks always represent a truthful map of who made edits and where. Additionally, the mobile clients may make use of these new endpoints, and the same guarantees may then be made about modifications made on those devices.
  • A note about real-time conflicts: some users may only use the web interface for downloading files, and this may always be made fast because downloading from the system has no effect on other nodes. However, in various embodiments, the modification endpoints will need to “run to completion” before returning, and in order to maintain consistency, the web UI may also wait for these operations to complete on the connected node before returning control to the user.
  • Auditing
  • In various embodiments, the audit server gives visibility into the health of the system and provides a central repository for logging user, file and security events.
  • Auditing may be broken down into three primary categories:
  • 1) Nexus logging—User level events, ACL changes, org changes, server config changes, etc., in the nexus;
  • 2) Cloud-level logging—Tracking the version and history of individual files, who edited, where, on what device, etc. (storage agent);
  • 3) Transaction-level logging—This is node-node communications used for figuring out when two nodes sync.
  • Furthermore, each audited event may be “visible” to certain principals, depending on the event. This is because certain “container” roles change over time, and should have retroactive access into an audit trail. For instance, a new IT administrator may be able to access audit history that occurred before their sign on date. However, users who join a cloud may not be able to get access to audit history before they joined. In various embodiments, the goal of the audit server is to record enough information at the time of the audit event to extract these security relationships when the data needs to be accessed.
  • FIG. 17 is a table illustrating an example embodiment of nexus logging particulars 1700 for three classes: user, organization, and cloud. In various embodiments, global system visibility allows a super-admin to see all events at the nexus level.
  • FIG. 18 is a table illustrating an example embodiment of cloud-level logging particulars 1800. In various embodiments, this logging is done exclusively at the cloud level. Most of the data may come from the storage agents, with some of the same data coming from the nexus just as above.
  • Transaction-Level Particulars. This type of logging may log when nodes communicate with each other and what they say. In various embodiments, this is just the sync pipeline, which is a combination of RetreiveUpdatesSinceRevision and RetreiveFile.
  • The nodes may log both the server and client sides of a connection; this way, if either node is compromised or if only a single node is “managed”, both sides of the transaction can be found later.
  • The difficulty here is that these logs may contain sensitive data that the nexus should not be able to see. There are at least two ways to address this problem, one using PKI, the other using direct communication.
  • In various embodiments, the PKI system works as follows:
  • A private key is generated by the audit server for an organization. The public key is then sent to the nexus;
  • When a node logs in, it is delivered the public key for the single audit server it needs to communicate with;
  • Log events are encrypted with the public key before being sent to the nexus. The nexus then queues these events to be sent to an audit server; and
  • The audit server retrieves the events and can decrypt them with its private key.
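  • The PKI-based audit path may be illustrated, without limitation, by the following Java sketch in which each audit event is sealed so that only the audit server (the holder of the private key) can read it; wrapping a per-event AES key with the audit server's RSA public key is an assumption made here for practicality, since the description above states only that events are encrypted with the public key.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.security.PublicKey;

public class AuditEventSealer {
    // Returns { wrappedKey, iv, ciphertext } for queuing at the nexus; only the
    // audit server's private key can unwrap the event key and read the event.
    static byte[][] seal(byte[] serializedEvent, PublicKey auditServerKey) throws Exception {
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(128);
        SecretKey eventKey = gen.generateKey();

        Cipher aes = Cipher.getInstance("AES/CBC/PKCS5Padding");
        aes.init(Cipher.ENCRYPT_MODE, eventKey);             // provider generates a random IV
        byte[] ciphertext = aes.doFinal(serializedEvent);
        byte[] iv = aes.getIV();

        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.WRAP_MODE, auditServerKey);
        byte[] wrappedKey = rsa.wrap(eventKey);

        return new byte[][] { wrappedKey, iv, ciphertext };
    }
}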
  • In various embodiments, the direct communication system works as follows:
  • Nodes locally “cache” every audit event to a local persistent store on the node (e.g., the database);
  • Asynchronously the node connects with the audit server and delivers updates; and
  • At some interval, the node may be required to handshake with an audit server, to prevent totally orphaned nodes from never delivering their logs (this could be an organization parameter).
  • A user interface may enable users to perform the following actions:
  • Generate reports from combined nexus-level and cloud-level data;
  • View a geographic snapshot of where some filtered number of clients currently are (or where they were at some point in time);
  • View server usage for relays, storage agents, access servers, etc.; and
  • View Statistics (e.g., of logins, time since last login, failed logins, last IP, email, and so on).
  • Advanced Indexing and Synchronization
  • In various embodiments, the synchronization service supports the access server (e.g., browser client) and improves performance for the sync service, including decoupling the indexing, synchronization and filewatcher services. The synchronization service may have the ability to handle conflicts. The synchronization service may also maintain index information from all nodes in mounted and unmounted clouds. Every node may have the version clock of every file in the mounted cloud. The synchronization service may provide file level information of what's available and what's not on each node in the network. The index may be able to handle the thin mounting concept.
  • Use cases may include the following:
  • A user wants to access data from any computer in the world;
  • A file is modified offline on two nodes and a conflict is created (e.g., the user wants to resolve the conflict);
  • A user wants more performance out of the node client;
  • A user wants to browse unmounted clouds;
  • A user wants to download data from an unmounted cloud; and
  • A user wants to upload data to an unmounted cloud.
  • The synchronization server may support the following workflows:
  • Mount an entire cloud;
  • Mount a portion of a cloud;
  • Unmount a portion of a cloud; and
  • Unmount the entire cloud.
  • Changes to the index data in the database may include:
  • Status bit if available locally (don't need to explicitly list this—can do a lookup in the availability table);
  • List of computers that have the head revision—via availability table;
  • Head version clock;
  • On update copy local version clock to head version clock;
  • Local version clock;
  • Add an availability table in which each id maps to an entry in the adept_index and each column is a computer UUID in the cloud (the list of computers could be truncated);
  • On update—create table availability; and
  • Copy the files in the current index NOT marked as deleted to the availability table with existing computer_id.
  • The indexing service may support the following features:
  • Decoupling the updates to the index and updates to the content;
  • Only propagating “local” updates to the index;
  • Updating when a cloud is unmounted to remove all elements in the availability table to not include the local machine; and
  • Updating when a cloud is mounted to include all local files in the availability table to include local machine.
  • In various embodiments, there exists a counter on the nexus that tells nodes when they should talk to each other. This may be the primary mechanism that nodes use in the SyncService and NodeExternalServices to communicate.
  • FIG. 19 is a table illustrating example fields included in a database table 1900 for indexing (e.g., adept_index). As shown, the example fields include index_id, computer_id, and version_clock.
  • In various embodiments, the adept_index table includes a locally_available column, and stores information about unmounted clouds in addition to mounted clouds. Locally_available is a Boolean to indicate whether the PATH is available on the local node.
  • SHARES may include all clouds (UUIDs) and include a new field to indicate if the cloud is mounted (not just a null PATH). A column “mounted” may indicate if the cloud is locally mounted. Clouds in the SHARES table may be assumed to be mounted.
  • The following paragraphs describe example steady state operations for the various services.
  • SyncService: (a) synchronizationServiceThread.performAction—loop over all UUIDs, not just mounted ones (do not skip if folder service getFolder() call returns null for unmounted UUIDs); call syncWithComputer any time; do not call setRevisions for unmounted clouds to tell the nexus your local revision is zero for that cloud; (b) syncWithComputer: mounted clouds—call doUpdate and only update the index once the transfer has completed and the hash equality has been checked; unmounted clouds—a couple of options. Option 1: Use the current check against the foldersService to see if a cloud is mounted by checking if the folder is null. If so, process the IndexedPathUpdates via the IndexingStorageService, set the updated remote revision via the foldersService and move on; Option 2: Explicitly check at the beginning of the function if the UUID is mounted via the SHARES table. If not mounted, process the IndexedPathUpdates via the IndexingStorageService, set the updated remote revision via the foldersService. In various embodiments, a type of SynchronizationEvent indicates that just index information is being shared, but this may happen very quickly and perhaps frequently.
  • IndexingService: (a) mounted clouds—the fileWatcherEvents may be the primary mechanism for updating the index; (b) unmounted clouds—no FileWatchers are enabled, so the unmounted clouds may not interact with the IndexingService via the QueuedIndexEvents.
  • IndexingStorageService:
  • doUpdateIndexedPath—add computer_id to call, update corresponding element in adept_availability table;
  • doAddIndexedPath—add computer_id to call, add corresponding element in adept_availability table;
  • getUpdatesSinceRevision—perform a join query to indicate not just the data in adept_index, but also information from the adept_availability table to indicate if the corresponding element in the adept_index table exists on the given computer (assumes this will populate the IndexedPath available with the current node computer_id from the availability table);
  • getIndexedPaths—if the IndexFilter support a computer_id, this needs to include only data that persists on a given computer_id;
  • getMaxIndexRevision—by grabbing the data from the SHARES table, this suggests the SHARES table might need unmounted cloud revision information;
  • getIndexRevision—like getMaxIndexRevision, persist data on unmounted clouds in SHARES table;
  • incrementAndGetCurrentMaxRevision—see getMaxIndexRevision for thought on persisting data on unmounted clouds in SHARES table;
  • overrideIndexRevision—see getMaxIndexRevision for thought on persisting data on unmounted clouds in SHARES table;
  • clearIndex—Clear the specified UUID data from the adept_availability;
  • NodeExternalServices: changes in the IndexedPathResultSetHandler may propagate the unique identifier if an IndexedPath is mounted (and thus available on a given node).
  • IndexedPath: A field may indicate if the IndexedPath is available on the node returning the IndexedPath.
  • IndexedPathResultSetHandler: A translation may set the available field in the IndexedPath based on the data returned from the IndexedPath query.
  • IndexFilter: In various embodiments, has the ability to filter based on data that persists on a given computer_id from the availability table.
  • Initial Upgrade:
  • Update the database configuration to current number +1;
  • Provide an upgrade task and new upgrade SQL file with the following commands: Create the new adept_availability table; copy all of the (id,version_clock) from the current adept_index table to the adept_availability; assign the computer_id of the current node from the system_properties table to all elements in the newly created elements in the adept_availability table; get the available clouds and put into SHARES table.
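  • A non-limiting sketch of such an upgrade task is shown below (JDBC with SQLite-style SQL and the exact column names are assumptions for this example); populating the SHARES table with the available clouds is elided.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class AvailabilityUpgradeTask {
    static void upgrade(Connection db, String localComputerId) throws SQLException {
        try (Statement s = db.createStatement()) {
            // Create the new adept_availability table.
            s.execute("CREATE TABLE adept_availability ("
                    + " index_id INTEGER NOT NULL,"
                    + " computer_id TEXT NOT NULL,"
                    + " version_clock TEXT NOT NULL)");
        }

        // Copy (id, version_clock) from the current adept_index and assign the
        // computer_id of the current node to every newly created row.
        String seed = "INSERT INTO adept_availability (index_id, computer_id, version_clock)"
                + " SELECT index_id, ?, version_clock FROM adept_index";
        try (PreparedStatement ps = db.prepareStatement(seed)) {
            ps.setString(1, localComputerId);
            ps.executeUpdate();
        }
        // Populating the SHARES table with the available clouds would follow here.
    }
}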
  • Mounting a cloud: invokes the foldersService.setFolderPath function; foldersService, setFolderPath; unmountFolder—unmounting and clearing the data from the index will cause all remote nodes with this cloud mounted to give the most up-to-date information because the local remote revision will be out of sync with the nexus.
  • Unmounting a cloud: Invokes the foldersService.unmountFolder; foldersService; unmountFolder:
  • Option 1: add additional call (and always run) to increment the local IndexRevision number (how does this get pushed up to the nexus);
  • Option 2: add another method to explicitly unmount and increment. This would allow other methods that don't need to increment the index revision to unmount a folder without incrementing;
  • Option 3: Put the call to the nexusClient at the CloudController level.
  • Federation
  • In various embodiments, nexus infrastructure may be federated for redundancy and load balancing.
  • Public Key Infrastructure (PKI) Solution
  • Environments with highly sensitive data may be worried about the insider threat. For example, what can an internal system employee with access to the entirety of internal system resources do if they decide to try to get at someone's data?
  • A high-level solution may be to partition the location of the sensitive data and partition how access to data is granted. This solution may be realized with a standards-based PKI (Public Key Infrastructure) solution.
  • In various embodiments, there are two functions of the overall system: (1) Identification—provide identification for two peers who are communicating and (2) Authorization—authorize one peer to access a resource from another peer.
  • In various embodiments, the PKI feature addresses problem 1, and provides a way for organizations to fairly easily substitute out the nexus for their own PKI solution.
  • In various embodiments, with TLS node-node communication, each client may establish its identity using a X509 certificate. Each connection between nodes may use two-way TLS, thereby allowing both peers to establish the identity of one another before communicating. In various embodiments, the system does this internally by maintaining a map of everyone's certificate to their user/computer ID pair at the nexus. Effectively, the nexus may act as a certificate authority (CA).
  • Specifically, the nexus may perform the following CA-like activities: accepting a generated public key from a node in the form of a CSR, returning a signed public key certificate with the nexus root CA, maintaining a list of revoked certificates, supporting an Online Certificate Status Protocol (OCSP) (or OCSP like) protocol to check validity of a certificate.
  • In a legacy system, a computer token may be generated (nexus side) for each new computer and associated with a computer/user ID pair. With the PKI feature, a public/private RSA key pair may be generated (node side) and the public key is associated with a computer/user id pair.
  • In a legacy system, security may be session-based. For example, the computer token may be held secret between the node and the nexus, and a temporary handshake token may be generated to establish identity, which leads to a session token which exists for the duration of the logical connection. With the PKI feature, security may be certificate-based. For example, nodes may directly communicate with one another without ever needing to talk to the nexus (regarding identity) as they may verify the authenticity of the connecting node's identity by verifying the root of the presented certificate. Thus, the PKI feature, including its communication infrastructure, may result in significantly reduced load on the nexus and faster connections between nodes because, for example, the node identity may be verified without a round trip to the nexus (e.g., through caching the issuing certificate public key).
  • Installation
  • Upon installation, the nodes may generate a public/private key pair. The nodes may generate RSA keys with a 2048 or 4096 bit length. A key store on the node may be the sole location of the node's private key. A trust store on the node may contain the nexus public key. In this way, certificates signed by the nexus may be trusted. In various embodiments, trust stores may be Java key stores (JKS). Alternatively, trust stores may be non-Java-specific trust stores.
  • Another registerComputer call may perform a Simple Certificate Enrollment Protocol (SCEP) request. A signed X509 certificate may be issued back to the node as per SCEP from the nexus. The nexus may record the issuer and serial number of the certificate and associate it with that computer/user ID.
  • The node may store this certificate for some time (e.g., one year by default). Thus, the installation of the PKI feature may be completed.
  • In various embodiments, third-party APIs (e.g., Legion of Bouncy Castle APIs) may be used to perform various operations.
  • Normal Operation:
  • Say Node A wants to perform a getFile on Node B.
  • Node A gets the IP and port of Node B as before.
  • Node A attempts to connect via TLS to Node B directly to the getFile endpoint.
  • Node B challenges Node A for its client certificate.
  • Node A provides the client certificate it got during install, signed by the nexus.
  • Node B performs an OCSP request to the nexus to verify the status of node A's certificate. Alternatively this can be done directly over the existing SSL connection with the nexus.
  • Nexus replies that Node A's certificate is “good.”
  • Node B replies with its public certificate which is subsequently also verified by the nexus (e.g., by node A).
  • Node A accepts the cert, and the secure channel is created.
  • Node A gets the file from node B.
  • A few optimizations may be made:
  • Support resuming TLS sessions, so if a secure connection was still established but idle, it may be reused, allowing the last step of normal operation to be skipped.
  • OCSP supports a cache lifetime (like a TTL). This may be set to a default value that organizations may configure later.
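  • As a non-limiting illustration of the mutual-TLS portion of the normal operation described above, each node may build an SSL context from a key store holding its own nexus-signed certificate and a trust store holding the nexus root CA; the OCSP revocation check is omitted from this sketch, and the method and path names are assumptions.

import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLServerSocket;
import javax.net.ssl.TrustManagerFactory;
import java.io.FileInputStream;
import java.security.KeyStore;

public class NodeTls {
    static SSLContext buildContext(String keyStorePath, char[] keyStorePassword,
                                   String trustStorePath, char[] trustStorePassword)
            throws Exception {
        KeyStore keyStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(keyStorePath)) {
            keyStore.load(in, keyStorePassword);          // node's private key + nexus-signed cert
        }
        KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, keyStorePassword);

        KeyStore trustStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(trustStorePath)) {
            trustStore.load(in, trustStorePassword);      // contains the nexus root CA
        }
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return ctx;
    }

    // On the serving node (Node B), require the client certificate so that the
    // connecting node (Node A) must present the certificate issued by the nexus.
    static SSLServerSocket listen(SSLContext ctx, int port) throws Exception {
        SSLServerSocket server =
                (SSLServerSocket) ctx.getServerSocketFactory().createServerSocket(port);
        server.setNeedClientAuth(true);
        return server;
    }
}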
  • In various embodiments, the only things tying the PKI feature to the nexus may be:
  • The location of the SCEP endpoint (i.e. registering a certificate with the CA);
  • The location of the OCSP endpoint (or similar) (i.e. verifying an issued certificate with the CA); and
  • The public key that is preloaded into a trust store for the CA (i.e. which CAs does the system trust).
  • Implementing the PKI feature on the communication architecture allows the system to interoperate with existing PKI systems, essentially making the system completely blind to what is going on and stripping the system of the ability to modify any communication relationships. In various embodiments, multiple organizations, each with their own PKI, may interoperate with one another.
  • Transparent Relays:
  • With the standards-based communication structure, relayed connections may be more complex. Many SSL libraries may not support decoupling the socket and the SSL state machine, which may be necessary to inject unencrypted (or at least public to the relay) routing information on the message so the relay knows how to deliver a given message.
  • The solution may be twofold. Using STARTTLS, the system may create a plaintext handshake with the relay server, communicate the routing info, establish the connection to the relayed node, and then transition to an SSL connection before the first bit of ciphertext is ever sent to the relayed client.
  • In various embodiments, the relay servers will NOT be performing any part of the SSL handshake; they merely forward the packets to the intended host in a transparent manner. Therefore the relays have absolutely no visibility in the underlying data that is being transmitted.
  • Mobile:
  • In various embodiments, Android may leverage the same code as the normal (e.g., PC) clients and onsite. In various embodiments, iOS may need to do Simple Certificate Enrollment Protocol (SCEP) server-side generation and deliver the cert using a PIN.
  • Private-Private Cloud Communication:
  • One of the most powerful aspects of the system may be the ability for two or more organizations with separate IT infrastructure to collaborate easily.
  • In the normal certificate infrastructure case where the system is the central, common CA, this may be fairly straightforward. Each client's certificate may identify the common system CA as a trusted root authority, and therefore accept the remote peer's certificate. Effectively it may make no difference that the two nodes are in separate organizations since they trust the same root.
  • In various embodiments, when organizations use their own internal PKI, an assumption that each party's root is trusted by the opposite party will not be true. For example, a certificate signed by Company X may not be trusted by a client who only trusts Company Y's CA. Therefore, the system may need to modify the trust relationships to support trusting not just the system root CA, but other CAs or endpoint certificates. In effect, by managing who is trusted, companies may define specific whitelists or “rings of trust” enforced by the system.
  • In one example, Company X and Company Y may agree that they need to collaborate on data. Using the system, every client in their organization may load both companies' CAs into the client's trusted store, making the client trust certificates issued from either authority. Furthermore, a system application may enforce that clients in company X must be signed with company X's CA, and clients in company Y must be signed by company Y's CA. This is not how typical certificate identification (e.g., standards-based PKI) works. However, by using the common name or subject in the certificate, the system may verify not only the identity of an endpoint, but that the endpoint identity is established with a proper chain. For example, a client may have <client id>.<org id>.client.adeptcloud.com in their subject name, which must match the organization ID in the signing CA's certificate. In special circumstances, even a single client may be added to trust for establishing finer trust silos.
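  • The additional identity check described in this example may be sketched in Java as follows; the subject layout (<client id>.<org id>.client.adeptcloud.com) is taken from the example above, and the helper names are assumptions introduced only for illustration.

import javax.security.auth.x500.X500Principal;
import java.security.cert.X509Certificate;

public class OrgTrustCheck {
    // Extracts the CN value from a subject such as "CN=node1.orgX.client.adeptcloud.com, O=Org X".
    static String commonName(X509Certificate cert) {
        String dn = cert.getSubjectX500Principal().getName(X500Principal.RFC2253);
        for (String part : dn.split(",")) {
            part = part.trim();
            if (part.startsWith("CN=")) return part.substring(3);
        }
        return null;
    }

    // True only if the peer's subject encodes the organization that the
    // issuing CA is allowed to sign for.
    static boolean belongsToOrganization(X509Certificate peerCert, String expectedOrgId) {
        String cn = commonName(peerCert);
        if (cn == null) return false;
        String[] labels = cn.split("\\.");
        // Expected layout: <client id>.<org id>.client.adeptcloud.com
        return labels.length >= 4 && labels[1].equals(expectedOrgId);
    }
}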
  • Synchronizing and maintaining the trust stores on the clients would be a nightmare in a typical piece of software. However, leveraging the system's cloud-managed paradigm, the system may use a central server to delegate which clients synchronize which CAs (or client certificates) into their trust stores. This information may come directly from the nexus, or for even more added security, may be delivered using the system onsite servers.
  • Another possible useful configuration may be allowing for organizations to provide intermediate certificates that will be delivered by the system. Clients may have special permission for these types of “chained” certificate configurations, for instance the ability to synchronize more sensitive data.
  • In various embodiments, a client-side implementation may include prototype certificate generation, prototype certificate chaining (e.g., signing by a third party), establishing base socket communication (e.g., using Netty with TLS 2.0 and custom certs), streaming interfaces (e.g., interface standard Input/Output streams to Netty ByteStreams), refactoring node interfaces in preparation for secure messaging applications (e.g., AdeptSecureMessage), building request/response wrappers (e.g., on top of Netty), tying back Node External Services to new TLS backend, tying back Onsite External Services to new TLS backend, building a STARTTLS pipeline factory, updating relay server to relay STARTTLS, and modifying relay client to support STARTTLS.
  • In various embodiments, a server-side implementation may include adding a serial number entry to the computer field nexus side, implementing SCEP, implementing OCSP, and exposing some OCSP/SCEP configuration options to organizations.
  • FIG. 20 is a flowchart illustrating an example method 2000 of sharing data. At operation 2002, a request is received from a client node to access data in a share associated with a server node. For example, the request may be received at the server node or the request may be received at an Onsite service installed within a firewall.
  • At operation 2004, a communication is received from a management nexus (e.g., at the server node or the Onsite service). The communication includes a confirmation of the identity of the client node and a confirmation of an authorization for the client node to access the data in the share associated with the server node. The communication may be sent in response to a request for the confirmation of the identity of the client node and a confirmation of the authorization for the client node to access the data in the share associated with the server node.
  • At operation 2006, the client node is allowed to access the data in the share associated with the server node based on the communication received from the management nexus. For example, the client node is allowed to establish a connection with the server node or the Onsite service via a relay endpoint, as described above. In various embodiments, the connection is established based on the security measures described above (e.g., in response to an exchange of certificates between the client node, the server node, and the management nexus). In various embodiments, the data in the share is not transferred to the management nexus. Instead, the data is transferred directly from the server node (or Onsite service) to the client node (e.g., via the relay) without involving the management nexus. Thus the nexus remains ignorant of the actual data that is transferred between nodes.
  • Modules, Components, and Logic
  • Certain embodiments are described herein as including logic or a number of components, modules, or mechanisms. Modules may constitute either software modules (e.g., code embodied on a machine-readable medium or in a transmission signal) or hardware modules. A hardware module is a tangible unit capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein.
  • In various embodiments, a hardware module may be implemented mechanically or electronically. For example, a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
  • Accordingly, the term “hardware module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired) or temporarily configured (e.g., programmed) to operate in a certain manner and/or to perform certain operations described herein. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where the hardware modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware modules at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time.
  • Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple of such hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices and can operate on a resource (e.g., a collection of information).
  • The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules.
  • Similarly, the methods described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment or as a server farm), while in other embodiments the processors may be distributed across a number of locations.
  • The one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), these operations being accessible via a network (e.g., the network 104 of FIG. 1) and via one or more appropriate interfaces (e.g., APIs).
  • Example embodiments may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. Example embodiments may be implemented using a computer program product, e.g., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable medium for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers.
  • A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
  • In example embodiments, operations may be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output. Method operations can also be performed by, and apparatus of example embodiments may be implemented as, special purpose logic circuitry (e.g., an FPGA or an ASIC).
  • The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In embodiments deploying a programmable computing system, it will be appreciated that both hardware and software architectures should be considered. Specifically, it will be appreciated that the choice of whether to implement certain functionality in permanently configured hardware (e.g., an ASIC), in temporarily configured hardware (e.g., a combination of software and a programmable processor), or a combination of permanently and temporarily configured hardware may be a design choice. Below are set out hardware (e.g., machine) and software architectures that may be deployed, in various example embodiments.
  • FIG. 21 is a block diagram of a machine in the example form of a computer system 5000 within which instructions 5024 for causing the machine to perform operations corresponding to one or more of the methodologies discussed herein may be executed. In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
  • The example computer system 5000 includes a processor 5002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both), a main memory 5004 and a static memory 5006, which communicate with each other via a bus 5008. The computer system 5000 may further include a video display unit 5010 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system 5000 also includes an alphanumeric input device 5012 (e.g., a keyboard), a user interface (UI) navigation (or cursor control) device 5014 (e.g., a mouse), a storage unit 5016, a signal generation device 5018 (e.g., a speaker) and a network interface device 5020.
  • The storage unit 5016 includes a machine-readable medium 5022 on which is stored one or more sets of data structures and instructions 5024 (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 5024 may also reside, completely or at least partially, within the main memory 5004 and/or within the processor 5002 during execution thereof by the computer system 5000, the main memory 5004 and the processor 5002 also constituting machine-readable media. The instructions 5024 may also reside, completely or at least partially, within the static memory 5006.
  • While the machine-readable medium 5022 is shown in an example embodiment to be a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions 5024 or data structures. The term “machine-readable medium” shall also be taken to include any tangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present embodiments, or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. Specific examples of machine-readable media include non-volatile memory, including by way of example semiconductor memory devices, e.g., Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and compact disc-read-only memory (CD-ROM) and digital versatile disc (or digital video disc) read-only memory (DVD-ROM) disks.
  • The instructions 5024 may further be transmitted or received over a communications network 5026 using a transmission medium. The instructions 5024 may be transmitted using the network interface device 5020 and any one of a number of well-known transfer protocols (e.g., HTTP). Examples of communication networks include a LAN, a WAN, the Internet, mobile telephone networks, POTS networks, and wireless data networks (e.g., WiFi and WiMax networks). The term “transmission medium” shall be taken to include any intangible medium capable of storing, encoding or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such software.
  • Although an embodiment has been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the present disclosure. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof show, by way of illustration and not of limitation, specific embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
  • Such embodiments of the inventive subject matter may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed. Thus, although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description.

Claims (20)

What is claimed is:
1. A system comprising:
a processor-implemented server node configured to:
receive a request from a processor-implemented client node to access data in a share associated with the server node;
receive a communication from a processor-implemented management nexus, the communication including a confirmation of an identity of the client node and a confirmation of an authorization for the client node to access the data in the share associated with the server node; and
allow the client node to access the data in the share associated with the server node based on the communication from the management nexus without sending the data to the management nexus.
2. The system of claim 1, wherein the processor-implemented server node is further configured to:
receive a client certificate from the client node; and
send a request to the management nexus to verify a status of the client certificate;
wherein the confirmation of the authorization for the client node to access the data in the share associated with the server node is based on the verifying of the status of the client certificate by the management nexus; and
the accessing of the data in the share associated with the server node includes establishing a secure channel based on the verifying of the status of the certificate by the management nexus.
3. The system of claim 1, wherein the client node is configured to:
receive a notification from the management nexus that the data in the share of the server node is accessible to the client, the notification including information pertaining to accessing the data;
establish a connection to the server node based on the information pertaining to the accessing of the data; and
receive the data over the connection to the server node without the data being sent to the management nexus.
4. The system of claim 3, wherein the information pertaining to the accessing of the data includes an endpoint for the connection, the endpoint being registered by the server node with a relay server created by the management nexus.
5. The system of claim 3, wherein the client node is further configured to:
receive the client certificate from the management nexus;
send the client certificate to the server node;
receive a server certificate from the server node; and
send a request to the management nexus to verify a status of the server certificate; and
wherein the establishing of the connection to the server node is based on the verifying of the status of the server certificate by the management nexus.
6. The system of claim 3, wherein the share is a copy of an original share, the original share being associated with an original server node, the information pertaining to the accessing of the data selected by the management nexus based on a determination that the original server node is unreachable.
7. The system of claim 4, wherein the relay server is created dynamically based on a receiving of a request by the client node to establish the connection and a determining by the management nexus that the server node is behind a firewall.
8. A method comprising:
receiving a request from a client node to access data in a share associated with a server node;
receiving a communication from a management nexus, the communication including a confirmation of an identity of the client node and a confirmation of an authorization for the client node to access the data in the share associated with the server node; and
allowing the client node to access the data in the share associated with the server node based on the communication from the management nexus without sending the data to the management nexus, wherein the allowing of the client node to access the data in the share associated with the server node is performed by one or more processors.
9. The method of claim 8, further comprising:
receiving a client certificate from the client node; and
sending a request to the management nexus to verify a status of the client certificate;
wherein the confirmation of the authorization for the client node to access the data in the share associated with the server node is based on the verifying of the status of the client certificate by the management nexus; and
the accessing of the data in the share associated with the server node includes establishing a secure channel based on the verifying of the status of the certificate by the management nexus.
10. The method of claim 8, wherein the client node is configured to:
receive a notification from the management nexus that the data in the share of the server node is accessible to the client, the notification including information pertaining to accessing the data;
establish a connection to the server node based on the information pertaining to the accessing of the data; and
receive the data over the connection to the server node without the data being sent to the management nexus.
11. The method of claim 10, wherein the information pertaining to the accessing of the data includes an endpoint for the connection, the endpoint being registered by the server node with a relay server created by the management nexus.
12. The method of claim 10, wherein the client node is further configured to:
receive the client certificate from the management nexus;
send the client certificate to the server node;
receive a server certificate from the server node; and
send a request to the management nexus to verify a status of the server certificate; and
wherein the establishing of the connection to the server node is based on the verifying of the status of the server certificate by the management nexus.
13. The method of claim 10, wherein the share is a copy of an original share, the original share being associated with an original server node, the information pertaining to the accessing of the data selected by the management nexus based on a determination that the original server node is unreachable.
14. The method of claim 11, wherein the relay server is created dynamically based on a receiving of a request by the client node to establish the connection and a determining by the management nexus that the server node is behind a firewall.
15. A non-transitory machine readable storage medium storing a set of instructions that, when executed by at least one processor, causes the at least one processor to perform operations, the operations comprising:
receiving a request from a client node to access data in a share associated with a server node;
receiving a communication from a management nexus, the communication including a confirmation of an identity of the client node and a confirmation of an authorization for the client node to access the data in the share associated with the server node; and
allowing the client node to access the data in the share associated with the server node based on the communication from the management nexus without sending the data to the management nexus.
16. The non-transitory machine readable storage medium of claim 15, the operations further comprising:
receiving a client certificate from the client node; and
sending a request to the management nexus to verify a status of the client certificate;
wherein the confirmation of the authorization for the client node to access the data in the share associated with the server node is based on the verifying of the status of the client certificate by the management nexus; and
the accessing of the data in the share associated with the server node includes establishing a secure channel based on the verifying of the status of the certificate by the management nexus.
17. The non-transitory machine readable storage medium of claim 15, wherein the client node is configured to:
receive a notification from the management nexus that the data in the share of the server node is accessible to the client, the notification including information pertaining to accessing the data;
establish a connection to the server node based on the information pertaining to the accessing of the data; and
receive the data over the connection to the server node without the data being sent to the management nexus.
18. The non-transitory machine readable storage medium of claim 17, wherein the information pertaining to the accessing of the data includes an endpoint for the connection, the endpoint being registered by the server node with a relay server created by the management nexus.
19. The non-transitory machine readable storage medium of claim 18, wherein the client node is further configured to:
receive the client certificate from the management nexus;
send the client certificate to the server node;
receive a server certificate from the server node; and
send a request to the management nexus to verify a status of the server certificate; and
wherein the establishing of the connection to the server node is based on the verifying of the status of the server certificate by the management nexus.
20. The non-transitory machine readable storage medium of claim 17, wherein the share is a copy of an original share, the original share being associated with an original server node, the information pertaining to the accessing of the data selected by the management nexus based on a determination that the original server node is unreachable.
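The interaction recited above among the client node, the server node, and the management nexus can be summarized in a minimal Python sketch. Everything in it (the class names ManagementNexus, ServerNode, ClientNode, and Certificate, and the in-memory dictionaries standing in for network messages and shares) is an invented illustration of the flow of claims 1-3, 8-10, and 15-17, not the patented implementation; the relay-server fallback of claims 4, 7, 11, 14, and 18 is omitted. The property the sketch preserves is that the management nexus confirms identity, certificate status, and authorization, while the shared data travels only over the direct connection between the client node and the server node.

    from dataclasses import dataclass

    @dataclass
    class Certificate:
        subject: str            # identity asserted by the certificate (a node id)
        revoked: bool = False   # status the management nexus is asked to verify

    class ManagementNexus:
        """Confirms identities and authorizations; never receives the shared data."""
        def __init__(self):
            self._grants = {}   # (client_id, share_id) -> True

        def issue_certificate(self, node_id):
            return Certificate(subject=node_id)

        def grant(self, client_id, share_id):
            self._grants[(client_id, share_id)] = True

        def verify_certificate(self, cert):
            # Claims 2, 9, 16: a node asks the nexus to verify certificate status.
            return not cert.revoked

        def is_authorized(self, client_id, share_id):
            return self._grants.get((client_id, share_id), False)

    class ServerNode:
        """Holds the share and serves its data directly to the client node."""
        def __init__(self, nexus, shares):
            self.nexus = nexus
            self.shares = shares            # share_id -> data

        def handle_access_request(self, client_cert, share_id):
            # Confirmation of identity and authorization comes from the nexus...
            if not self.nexus.verify_certificate(client_cert):
                raise PermissionError("client certificate rejected by the nexus")
            if not self.nexus.is_authorized(client_cert.subject, share_id):
                raise PermissionError("client not authorized for this share")
            # ...but the data itself is returned directly, never via the nexus.
            return self.shares[share_id]

    class ClientNode:
        def __init__(self, node_id, nexus):
            self.cert = nexus.issue_certificate(node_id)

        def fetch(self, server, share_id):
            # Claims 3, 10, 17: the client connects to the server node and
            # receives the data over that connection.
            return server.handle_access_request(self.cert, share_id)

    nexus = ManagementNexus()
    server = ServerNode(nexus, shares={"share-1": b"project files"})
    client = ClientNode("client-a", nexus)
    nexus.grant("client-a", "share-1")
    print(client.fetch(server, "share-1"))  # b'project files'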
US13/734,843 2012-01-05 2013-01-04 System and method for decentralized online data transfer and synchronization Active US8955103B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/734,843 US8955103B2 (en) 2012-01-05 2013-01-04 System and method for decentralized online data transfer and synchronization

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201261583340P 2012-01-05 2012-01-05
US201261720973P 2012-10-31 2012-10-31
US13/734,843 US8955103B2 (en) 2012-01-05 2013-01-04 System and method for decentralized online data transfer and synchronization

Publications (2)

Publication Number Publication Date
US20130179947A1 true US20130179947A1 (en) 2013-07-11
US8955103B2 US8955103B2 (en) 2015-02-10

Family

ID=47595070

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/734,843 Active US8955103B2 (en) 2012-01-05 2013-01-04 System and method for decentralized online data transfer and synchronization

Country Status (2)

Country Link
US (1) US8955103B2 (en)
WO (1) WO2013103897A1 (en)

Cited By (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8671080B1 (en) * 2008-09-18 2014-03-11 Symantec Corporation System and method for managing data loss due to policy violations in temporary files
US8775972B2 (en) * 2012-11-08 2014-07-08 Snapchat, Inc. Apparatus and method for single action control of social network profile access
US20140258350A1 (en) * 2013-03-05 2014-09-11 Hightail, Inc. System and Method for Cloud-Based Read-Only Folder Synchronization
US20140379647A1 (en) * 2013-06-21 2014-12-25 Box, Inc. Maintaining and updating file system shadows on a local device by a synchronization client of a cloud-based platform
CN104572823A (en) * 2014-12-05 2015-04-29 深圳天珑无线科技有限公司 Intelligent terminal and naming method of apk file thereof
US20160140139A1 (en) * 2014-11-17 2016-05-19 Microsoft Technology Licensing, Llc Local representation of shared files in disparate locations
US20160344700A1 (en) * 2015-05-18 2016-11-24 A2Zlogix, Inc. System and method for reception and transmission optimization of secured video, image, audio, and other media traffic via proxy
US9507795B2 (en) 2013-01-11 2016-11-29 Box, Inc. Functionalities, features, and user interface of a synchronization client to a cloud-based environment
USD774094S1 (en) * 2014-12-30 2016-12-13 Microsoft Corporation Display screen with icon
US20160373431A1 (en) * 2013-07-01 2016-12-22 Thomson Licensing Method to enroll a certificate to a device using scep and respective management application
US9535924B2 (en) 2013-07-30 2017-01-03 Box, Inc. Scalability improvement in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform
US9553758B2 (en) 2012-09-18 2017-01-24 Box, Inc. Sandboxing individual applications to specific user folders in a cloud-based service
US9575981B2 (en) 2012-04-11 2017-02-21 Box, Inc. Cloud service enabled to handle a set of files depicted to a user as a single file in a native operating system
US9633037B2 (en) 2013-06-13 2017-04-25 Box, Inc Systems and methods for synchronization event building and/or collapsing by a synchronization component of a cloud-based platform
US9652741B2 (en) 2011-07-08 2017-05-16 Box, Inc. Desktop application for access and interaction with workspaces in a cloud-based content management system and synchronization mechanisms thereof
USD792910S1 (en) * 2015-08-28 2017-07-25 S-Printing Solution Co., Ltd. Display screen or portion thereof with graphical user interface
USD793450S1 (en) * 2015-08-28 2017-08-01 S-Printing Solution Co., Ltd. Display screen or portion thereof with graphical user interface
US9769203B2 (en) 2014-09-22 2017-09-19 Sap Se Methods, systems, and apparatus for mitigating network-based attacks
US9773051B2 (en) 2011-11-29 2017-09-26 Box, Inc. Mobile platform file and folder selection functionalities for offline access and synchronization
US9794256B2 (en) 2012-07-30 2017-10-17 Box, Inc. System and method for advanced control tools for administrators in a cloud-based service
US20180047330A1 (en) * 2016-08-09 2018-02-15 Jacob Villarreal Rich enterprise service-oriented client-side integrated-circuitry infrastructure, and display apparatus
US9953036B2 (en) 2013-01-09 2018-04-24 Box, Inc. File system monitoring in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform
US9954958B2 (en) * 2016-01-29 2018-04-24 Red Hat, Inc. Shared resource management
US10110683B2 (en) * 2015-08-11 2018-10-23 Unisys Corporation Systems and methods for maintaining ownership of and avoiding orphaning of communication sessions
CN108694102A (en) * 2018-05-11 2018-10-23 携程旅游信息技术(上海)有限公司 A kind of data manipulation method, equipment, system and medium based on Nexus services
US10192063B2 (en) 2015-04-17 2019-01-29 Dropbox, Inc. Collection folder for collecting file submissions with comments
US10235383B2 (en) 2012-12-19 2019-03-19 Box, Inc. Method and apparatus for synchronization of items with read-only permissions in a cloud-based environment
US10530854B2 (en) 2014-05-30 2020-01-07 Box, Inc. Synchronization of permissioned content in cloud-based environments
US10542092B2 (en) 2015-04-17 2020-01-21 Dropbox, Inc. Collection folder for collecting file submissions
US10599671B2 (en) 2013-01-17 2020-03-24 Box, Inc. Conflict resolution, retry condition management, and handling of problem files for the synchronization client to a cloud-based platform
US10599483B1 (en) * 2017-03-01 2020-03-24 Amazon Technologies, Inc. Decentralized task execution bypassing an execution service
US10601916B2 (en) * 2015-04-17 2020-03-24 Dropbox, Inc. Collection folder for collecting file submissions via a customizable file request
CN111444449A (en) * 2018-12-27 2020-07-24 北京奇虎科技有限公司 Http request processing method and apparatus
US10725968B2 (en) 2013-05-10 2020-07-28 Box, Inc. Top down delete or unsynchronization on delete of and depiction of item synchronization with a synchronization client to a cloud-based platform
US10790979B1 (en) 2019-08-29 2020-09-29 Alibaba Group Holding Limited Providing high availability computing service by issuing a certificate
US10846074B2 (en) 2013-05-10 2020-11-24 Box, Inc. Identification and handling of items to be ignored for synchronization with a cloud-based platform by a synchronization client
US10885209B2 (en) 2015-04-17 2021-01-05 Dropbox, Inc. Collection folder for collecting file submissions in response to a public file request
WO2021036186A1 (en) * 2019-08-29 2021-03-04 创新先进技术有限公司 Method and apparatus for providing high-availability computing service by means of certificate issuing
US20210112113A1 (en) * 2019-10-12 2021-04-15 Breezeway Logic Llc Computing and Communication Systems and Methods
US20220021656A1 (en) * 2015-05-27 2022-01-20 Ping Identity Corporation Scalable proxy clusters
US11256711B2 (en) * 2013-10-04 2022-02-22 Hyland Uk Operations Limited Hybrid workflow synchronization between cloud and on-premise systems in a content management system
CN114422488A (en) * 2022-01-24 2022-04-29 北京理工大学重庆创新中心 Multi-thread message processing method based on netty framework
US11399011B2 (en) * 2017-03-03 2022-07-26 Samsung Electronics Co., Ltd. Method for transmitting data and server device for supporting same
US11411940B2 (en) * 2019-12-20 2022-08-09 AT&T Global Network Services Hong Kong LTD Zero-knowledge proof network protocol for N-party verification of shared internet of things assets
US11422912B2 (en) 2019-04-19 2022-08-23 Vmware, Inc. Accurate time estimates for operations performed on an SDDC
US11424940B2 (en) * 2019-06-01 2022-08-23 Vmware, Inc. Standalone tool for certificate management
CN116501559A (en) * 2023-04-18 2023-07-28 杭州指令集智能科技有限公司 Method for realizing distributed HTTP interface performance test based on Netty
US11783033B2 (en) 2017-10-13 2023-10-10 Ping Identity Corporation Methods and apparatus for analyzing sequences of application programming interface traffic to identify potential malicious actions
US11843605B2 (en) 2019-01-04 2023-12-12 Ping Identity Corporation Methods and systems for data traffic based adaptive security
US11855968B2 (en) 2016-10-26 2023-12-26 Ping Identity Corporation Methods and systems for deep learning based API traffic security
US11948473B2 (en) 2015-12-31 2024-04-02 Dropbox, Inc. Assignments for classrooms
US11954072B2 (en) * 2018-10-12 2024-04-09 Open Text Sa Ulc Systems and methods for bidirectional content synching and collaboration through external systems
CN118158238A (en) * 2024-03-07 2024-06-07 北京理工大学 Data synchronization method suitable for large-scale distributed cluster simulation

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USD618248S1 (en) 2008-09-23 2010-06-22 Apple Inc. Graphical user interface for a display screen or portion thereof
WO2013103897A1 (en) 2012-01-05 2013-07-11 Adept Cloud, Inc. System and method for decentralized online data transfer and synchronization
US20140280483A1 (en) * 2013-03-15 2014-09-18 Meteor Development Group, Inc. Client database cache
USD741874S1 (en) 2013-06-09 2015-10-27 Apple Inc. Display screen or portion thereof with animated graphical user interface
USD757737S1 (en) * 2013-06-09 2016-05-31 Apple Inc. Display screen or portion thereof with icon
USD753678S1 (en) 2014-06-01 2016-04-12 Apple Inc. Display screen or portion thereof with animated graphical user interface
USD753711S1 (en) 2014-09-01 2016-04-12 Apple Inc. Display screen or portion thereof with graphical user interface
USD806129S1 (en) * 2016-08-09 2017-12-26 Xerox Corporation Printer machine user interface screen with icon
USD818037S1 (en) 2017-01-11 2018-05-15 Apple Inc. Type font
USD898755S1 (en) 2018-09-11 2020-10-13 Apple Inc. Electronic device with graphical user interface
USD902221S1 (en) 2019-02-01 2020-11-17 Apple Inc. Electronic device with animated graphical user interface
USD900925S1 (en) 2019-02-01 2020-11-03 Apple Inc. Type font and electronic device with graphical user interface
USD900871S1 (en) 2019-02-04 2020-11-03 Apple Inc. Electronic device with animated graphical user interface
US11637710B2 (en) * 2019-12-10 2023-04-25 Jpmorgan Chase Bank, N.A. Systems and methods for federated privacy management
USD971961S1 (en) * 2021-02-08 2022-12-06 Electronics And Telecommunications Research Institute Display panel with graphical user interface
US11501630B2 (en) 2021-03-02 2022-11-15 Walmart Apollo, Llc Systems and methods for processing emergency alert notifications
USD986256S1 (en) * 2021-03-02 2023-05-16 Walmart Apollo, Llc Display screen with graphical user interface
USD1002643S1 (en) 2021-06-04 2023-10-24 Apple Inc. Display or portion thereof with graphical user interface
CN114666195B (en) * 2022-03-21 2022-09-16 江苏红网技术股份有限公司 Multi-level safety protection data exchange sharing system and method thereof

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030093666A1 (en) * 2000-11-10 2003-05-15 Jonathan Millen Cross-domain access control
US20040054779A1 (en) * 2002-09-13 2004-03-18 Yoshiteru Takeshima Network system
US6996841B2 (en) * 2001-04-19 2006-02-07 Microsoft Corporation Negotiating secure connections through a proxy server
US8103876B2 (en) * 2002-03-20 2012-01-24 Research In Motion Limited System and method for checking digital certificate status

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009004732A1 (en) * 2007-07-05 2009-01-08 Hitachi Software Engineering Co., Ltd. Method for encrypting and decrypting shared encrypted files
US8700892B2 (en) 2010-03-19 2014-04-15 F5 Networks, Inc. Proxy SSL authentication in split SSL for client-side proxy agent resources with content insertion
WO2013103897A1 (en) 2012-01-05 2013-07-11 Adept Cloud, Inc. System and method for decentralized online data transfer and synchronization

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030093666A1 (en) * 2000-11-10 2003-05-15 Jonathan Millen Cross-domain access control
US6996841B2 (en) * 2001-04-19 2006-02-07 Microsoft Corporation Negotiating secure connections through a proxy server
US8103876B2 (en) * 2002-03-20 2012-01-24 Research In Motion Limited System and method for checking digital certificate status
US20040054779A1 (en) * 2002-09-13 2004-03-18 Yoshiteru Takeshima Network system

Cited By (97)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8671080B1 (en) * 2008-09-18 2014-03-11 Symantec Corporation System and method for managing data loss due to policy violations in temporary files
US9652741B2 (en) 2011-07-08 2017-05-16 Box, Inc. Desktop application for access and interaction with workspaces in a cloud-based content management system and synchronization mechanisms thereof
US11537630B2 (en) 2011-11-29 2022-12-27 Box, Inc. Mobile platform file and folder selection functionalities for offline access and synchronization
US9773051B2 (en) 2011-11-29 2017-09-26 Box, Inc. Mobile platform file and folder selection functionalities for offline access and synchronization
US10909141B2 (en) 2011-11-29 2021-02-02 Box, Inc. Mobile platform file and folder selection functionalities for offline access and synchronization
US11853320B2 (en) 2011-11-29 2023-12-26 Box, Inc. Mobile platform file and folder selection functionalities for offline access and synchronization
US9575981B2 (en) 2012-04-11 2017-02-21 Box, Inc. Cloud service enabled to handle a set of files depicted to a user as a single file in a native operating system
US9794256B2 (en) 2012-07-30 2017-10-17 Box, Inc. System and method for advanced control tools for administrators in a cloud-based service
US9553758B2 (en) 2012-09-18 2017-01-24 Box, Inc. Sandboxing individual applications to specific user folders in a cloud-based service
US11252158B2 (en) 2012-11-08 2022-02-15 Snap Inc. Interactive user-interface to adjust access privileges
US10887308B1 (en) 2012-11-08 2021-01-05 Snap Inc. Interactive user-interface to adjust access privileges
US8775972B2 (en) * 2012-11-08 2014-07-08 Snapchat, Inc. Apparatus and method for single action control of social network profile access
US10235383B2 (en) 2012-12-19 2019-03-19 Box, Inc. Method and apparatus for synchronization of items with read-only permissions in a cloud-based environment
US9953036B2 (en) 2013-01-09 2018-04-24 Box, Inc. File system monitoring in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform
US9507795B2 (en) 2013-01-11 2016-11-29 Box, Inc. Functionalities, features, and user interface of a synchronization client to a cloud-based environment
US10599671B2 (en) 2013-01-17 2020-03-24 Box, Inc. Conflict resolution, retry condition management, and handling of problem files for the synchronization client to a cloud-based platform
US20240045840A1 (en) * 2013-03-05 2024-02-08 Open Text Holdings, Inc. System and method for cloud-based read-only folder synchronization
US12093223B2 (en) * 2013-03-05 2024-09-17 Open Text Holdings, Inc. System and method for cloud-based read-only folder synchronization
US11500820B2 (en) 2013-03-05 2022-11-15 Open Text Holdings, Inc. System and method for cloud-based read-only folder synchronization
US11822517B2 (en) * 2013-03-05 2023-11-21 Open Text Holdings, Inc. System and method for cloud-based read-only folder synchronization
US10691645B2 (en) 2013-03-05 2020-06-23 Open Text Holdings, Inc. System and method for cloud-based read-only folder synchronization
US9934241B2 (en) * 2013-03-05 2018-04-03 Hightail, Inc. System and method for cloud-based read-only folder synchronization
US20140258350A1 (en) * 2013-03-05 2014-09-11 Hightail, Inc. System and Method for Cloud-Based Read-Only Folder Synchronization
US10725968B2 (en) 2013-05-10 2020-07-28 Box, Inc. Top down delete or unsynchronization on delete of and depiction of item synchronization with a synchronization client to a cloud-based platform
US10846074B2 (en) 2013-05-10 2020-11-24 Box, Inc. Identification and handling of items to be ignored for synchronization with a cloud-based platform by a synchronization client
US10877937B2 (en) 2013-06-13 2020-12-29 Box, Inc. Systems and methods for synchronization event building and/or collapsing by a synchronization component of a cloud-based platform
US9633037B2 (en) 2013-06-13 2017-04-25 Box, Inc Systems and methods for synchronization event building and/or collapsing by a synchronization component of a cloud-based platform
US11531648B2 (en) 2013-06-21 2022-12-20 Box, Inc. Maintaining and updating file system shadows on a local device by a synchronization client of a cloud-based platform
US20140379647A1 (en) * 2013-06-21 2014-12-25 Box, Inc. Maintaining and updating file system shadows on a local device by a synchronization client of a cloud-based platform
US9805050B2 (en) * 2013-06-21 2017-10-31 Box, Inc. Maintaining and updating file system shadows on a local device by a synchronization client of a cloud-based platform
US20160373431A1 (en) * 2013-07-01 2016-12-22 Thomson Licensing Method to enroll a certificate to a device using scep and respective management application
US9930028B2 (en) * 2013-07-01 2018-03-27 Thomson Licensing Method to enroll a certificate to a device using SCEP and respective management application
US9535924B2 (en) 2013-07-30 2017-01-03 Box, Inc. Scalability improvement in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform
US11256711B2 (en) * 2013-10-04 2022-02-22 Hyland Uk Operations Limited Hybrid workflow synchronization between cloud and on-premise systems in a content management system
US11727035B2 (en) * 2013-10-04 2023-08-15 Hyland Uk Operations Limited Hybrid workflow synchronization between cloud and on-premise systems in a content management system
US20240004899A1 (en) * 2013-10-04 2024-01-04 Hyland Uk Operations Limited Hybrid workflow synchronization between cloud and on-premise systems in a content management system
US12019650B2 (en) 2013-10-04 2024-06-25 Hyland Uk Operations Limited Linking of content between installations of a content management system
US20220222273A1 (en) * 2013-10-04 2022-07-14 Hyland Uk Operations Limited Hybrid workflow synchronization between cloud and on-premise systems in a content management system
US10530854B2 (en) 2014-05-30 2020-01-07 Box, Inc. Synchronization of permissioned content in cloud-based environments
US9769203B2 (en) 2014-09-22 2017-09-19 Sap Se Methods, systems, and apparatus for mitigating network-based attacks
US20160140139A1 (en) * 2014-11-17 2016-05-19 Microsoft Technology Licensing, Llc Local representation of shared files in disparate locations
CN104572823A (en) * 2014-12-05 2015-04-29 深圳天珑无线科技有限公司 Intelligent terminal and naming method of apk file thereof
USD774094S1 (en) * 2014-12-30 2016-12-13 Microsoft Corporation Display screen with icon
US12079353B2 (en) 2015-04-17 2024-09-03 Dropbox, Inc. Collection folder for collecting file submissions
US10929547B2 (en) 2015-04-17 2021-02-23 Dropbox, Inc. Collection folder for collecting file submissions using email
US10628595B2 (en) 2015-04-17 2020-04-21 Dropbox, Inc. Collection folder for collecting and publishing file submissions
US10826992B2 (en) * 2015-04-17 2020-11-03 Dropbox, Inc. Collection folder for collecting file submissions via a customizable file request
US10628593B2 (en) 2015-04-17 2020-04-21 Dropbox, Inc. Collection folder for collecting file submissions and recording associated activities
US10621367B2 (en) 2015-04-17 2020-04-14 Dropbox, Inc. Collection folder for collecting photos
US10885210B2 (en) 2015-04-17 2021-01-05 Dropbox, Inc. Collection folder for collecting file submissions
US10885209B2 (en) 2015-04-17 2021-01-05 Dropbox, Inc. Collection folder for collecting file submissions in response to a public file request
US10885208B2 (en) 2015-04-17 2021-01-05 Dropbox, Inc. Collection folder for collecting file submissions and scanning for malicious content
US10192063B2 (en) 2015-04-17 2019-01-29 Dropbox, Inc. Collection folder for collecting file submissions with comments
US10601916B2 (en) * 2015-04-17 2020-03-24 Dropbox, Inc. Collection folder for collecting file submissions via a customizable file request
US10713371B2 (en) 2015-04-17 2020-07-14 Dropbox, Inc. Collection folder for collecting file submissions with comments
US11783059B2 (en) 2015-04-17 2023-10-10 Dropbox, Inc. Collection folder for collecting file submissions
US11475144B2 (en) 2015-04-17 2022-10-18 Dropbox, Inc. Collection folder for collecting file submissions
US11630905B2 (en) 2015-04-17 2023-04-18 Dropbox, Inc. Collection folder for collecting file submissions in response to a public file request
US11157636B2 (en) 2015-04-17 2021-10-26 Dropbox, Inc. Collection folder for collecting file submissions in response to a public file request
US10599858B2 (en) 2015-04-17 2020-03-24 Dropbox, Inc. Collection folder for collecting file submissions
US10542092B2 (en) 2015-04-17 2020-01-21 Dropbox, Inc. Collection folder for collecting file submissions
US11244062B2 (en) 2015-04-17 2022-02-08 Dropbox, Inc. Collection folder for collecting file submissions
US10395045B2 (en) 2015-04-17 2019-08-27 Dropbox, Inc. Collection folder for collecting file submissions and scanning for plagiarism
US12086276B2 (en) 2015-04-17 2024-09-10 Dropbox, Inc. Collection folder for collecting file submissions in response to a public file request
US11270008B2 (en) * 2015-04-17 2022-03-08 Dropbox, Inc. Collection folder for collecting file submissions
US10204230B2 (en) 2015-04-17 2019-02-12 Dropbox, Inc. Collection folder for collecting file submissions using email
US20160344700A1 (en) * 2015-05-18 2016-11-24 A2Zlogix, Inc. System and method for reception and transmission optimization of secured video, image, audio, and other media traffic via proxy
US20220021656A1 (en) * 2015-05-27 2022-01-20 Ping Identity Corporation Scalable proxy clusters
US11641343B2 (en) 2015-05-27 2023-05-02 Ping Identity Corporation Methods and systems for API proxy based adaptive security
US11582199B2 (en) * 2015-05-27 2023-02-14 Ping Identity Corporation Scalable proxy clusters
US10110683B2 (en) * 2015-08-11 2018-10-23 Unisys Corporation Systems and methods for maintaining ownership of and avoiding orphaning of communication sessions
USD792910S1 (en) * 2015-08-28 2017-07-25 S-Printing Solution Co., Ltd. Display screen or portion thereof with graphical user interface
USD793450S1 (en) * 2015-08-28 2017-08-01 S-Printing Solution Co., Ltd. Display screen or portion thereof with graphical user interface
US11948473B2 (en) 2015-12-31 2024-04-02 Dropbox, Inc. Assignments for classrooms
US9954958B2 (en) * 2016-01-29 2018-04-24 Red Hat, Inc. Shared resource management
US20180047330A1 (en) * 2016-08-09 2018-02-15 Jacob Villarreal Rich enterprise service-oriented client-side integrated-circuitry infrastructure, and display apparatus
US11924170B2 (en) 2016-10-26 2024-03-05 Ping Identity Corporation Methods and systems for API deception environment and API traffic control and security
US11855968B2 (en) 2016-10-26 2023-12-26 Ping Identity Corporation Methods and systems for deep learning based API traffic security
US10599483B1 (en) * 2017-03-01 2020-03-24 Amazon Technologies, Inc. Decentralized task execution bypassing an execution service
US11399011B2 (en) * 2017-03-03 2022-07-26 Samsung Electronics Co., Ltd. Method for transmitting data and server device for supporting same
US11783033B2 (en) 2017-10-13 2023-10-10 Ping Identity Corporation Methods and apparatus for analyzing sequences of application programming interface traffic to identify potential malicious actions
CN108694102A (en) * 2018-05-11 2018-10-23 携程旅游信息技术(上海)有限公司 A kind of data manipulation method, equipment, system and medium based on Nexus services
US11954072B2 (en) * 2018-10-12 2024-04-09 Open Text Sa Ulc Systems and methods for bidirectional content synching and collaboration through external systems
CN111444449A (en) * 2018-12-27 2020-07-24 北京奇虎科技有限公司 Http request processing method and apparatus
US11843605B2 (en) 2019-01-04 2023-12-12 Ping Identity Corporation Methods and systems for data traffic based adaptive security
US11422912B2 (en) 2019-04-19 2022-08-23 Vmware, Inc. Accurate time estimates for operations performed on an SDDC
US11424940B2 (en) * 2019-06-01 2022-08-23 Vmware, Inc. Standalone tool for certificate management
WO2021036186A1 (en) * 2019-08-29 2021-03-04 创新先进技术有限公司 Method and apparatus for providing high-availability computing service by means of certificate issuing
US10972272B2 (en) 2019-08-29 2021-04-06 Advanced New Technologies Co., Ltd. Providing high availability computing service by issuing a certificate
US10790979B1 (en) 2019-08-29 2020-09-29 Alibaba Group Holding Limited Providing high availability computing service by issuing a certificate
US11206137B2 (en) 2019-08-29 2021-12-21 Advanced New Technologies Co., Ltd. Providing high availability computing service by issuing a certificate
US11856045B2 (en) * 2019-10-12 2023-12-26 Breezeway Logic Llc Computing and communication systems and methods
US20210112113A1 (en) * 2019-10-12 2021-04-15 Breezeway Logic Llc Computing and Communication Systems and Methods
US11411940B2 (en) * 2019-12-20 2022-08-09 AT&T Global Network Services Hong Kong LTD Zero-knowledge proof network protocol for N-party verification of shared internet of things assets
CN114422488A (en) * 2022-01-24 2022-04-29 北京理工大学重庆创新中心 Multi-thread message processing method based on netty framework
CN116501559A (en) * 2023-04-18 2023-07-28 杭州指令集智能科技有限公司 Method for realizing distributed HTTP interface performance test based on Netty
CN118158238A (en) * 2024-03-07 2024-06-07 北京理工大学 Data synchronization method suitable for large-scale distributed cluster simulation

Also Published As

Publication number Publication date
WO2013103897A1 (en) 2013-07-11
US8955103B2 (en) 2015-02-10

Similar Documents

Publication Publication Date Title
US8955103B2 (en) System and method for decentralized online data transfer and synchronization
US11411944B2 (en) Session synchronization across multiple devices in an identity cloud service
US10581820B2 (en) Key generation and rollover
US10530578B2 (en) Key store service
US11652685B2 (en) Data replication conflict detection and resolution for a multi-tenant identity cloud service
US11907359B2 (en) Event-based user state synchronization in a local cloud of a cloud storage system
US11061929B2 (en) Replication of resource type and schema metadata for a multi-tenant identity cloud service
US10454915B2 (en) User authentication using kerberos with identity cloud service
US10705823B2 (en) Application templates and upgrade framework for a multi-tenant identity cloud service
US10341354B2 (en) Distributed high availability agent architecture
US8996884B2 (en) High privacy of file synchronization with sharing functionality
US8336089B1 (en) Method and apparatus for providing authentication and encryption services by a software as a service platform
US10007767B1 (en) System and method for securing tenant data on a local appliance prior to delivery to a SaaS data center hosted application service
US8291490B1 (en) Tenant life cycle management for a software as a service platform
US8706800B1 (en) Client device systems and methods for providing secure access to application services and associated client data hosted by an internet coupled platform
US20110137991A1 (en) Systems and methods for management and collaboration in a private network
US20240195840A1 (en) Synthetic request injection for real-time cloud security posture management
US11985033B2 (en) Propagating information with network nodes
US11636068B2 (en) Distributed file locking for a network file share
Edge et al. Directory Services Clients
Schrittwieser Analysis and Design of a Secure File Transfer Architecture for Document Management Systems
Aaltonen Strong authentication and NFS version 4 on Linux workstations

Legal Events

Date Code Title Description
AS Assignment

Owner name: ADEPT CLOUD, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KLINE, FRANK-ROBERT;NATHAN, AARON MOISE;SCHOENBERG, JONATHAN R.;REEL/FRAME:030162/0122

Effective date: 20130104

AS Assignment

Owner name: HIGHTAIL, INC., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:YOUSENDIT, INC.;REEL/FRAME:031288/0656

Effective date: 20130710

AS Assignment

Owner name: HIGHTAIL, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ADEPT CLOUD, INC.;REEL/FRAME:031410/0433

Effective date: 20131007

AS Assignment

Owner name: HIGHTAIL, INC., CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE INCORRECT APPLICATION NO. 13/733,351 PREVIOUSLY RECORDED AT REEL: 031288 FRAME: 0656. ASSIGNOR(S) HEREBY CONFIRMS THE CHANGE OF NAME;ASSIGNOR:YOUSENDIT, INC.;REEL/FRAME:033416/0647

Effective date: 20130710

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551)

Year of fee payment: 4

AS Assignment

Owner name: OPEN TEXT HOLDINGS, INC., CALIFORNIA

Free format text: MERGER;ASSIGNOR:HIGHTAIL, INC.;REEL/FRAME:048416/0166

Effective date: 20181220

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

AS Assignment

Owner name: BARCLAYS BANK PLC, NEW YORK

Free format text: SECURITY INTEREST;ASSIGNOR:OPEN TEXT HOLDINGS, INC.;REEL/FRAME:063558/0682

Effective date: 20230428

Owner name: BARCLAYS BANK PLC, NEW YORK

Free format text: SECURITY INTEREST;ASSIGNOR:OPEN TEXT HOLDINGS, INC.;REEL/FRAME:063558/0698

Effective date: 20230501

Owner name: BARCLAYS BANK PLC, NEW YORK

Free format text: SECURITY INTEREST;ASSIGNOR:OPEN TEXT HOLDINGS, INC.;REEL/FRAME:063558/0690

Effective date: 20230430

AS Assignment

Owner name: THE BANK OF NEW YORK MELLON, NEW YORK

Free format text: SECURITY INTEREST;ASSIGNOR:OPEN TEXT HOLDINGS, INC.;REEL/FRAME:064749/0852

Effective date: 20230430

AS Assignment

Owner name: OPEN TEXT HOLDINGS, INC., CANADA

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS (REEL/FRAME 063558/0682);ASSIGNOR:BARCLAYS BANK PLC;REEL/FRAME:067807/0062

Effective date: 20240621