GB2560010A - Data collaboration

Data collaboration

Info

Publication number
GB2560010A
Authority
GB
United Kingdom
Prior art keywords
data
collaborate
server
node
collaborating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1703060.2A
Other versions
GB201703060D0 (en)
GB2560010B (en)
Inventor
Ramsey Mark
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sage UK Ltd
Original Assignee
Sage UK Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sage UK Ltd filed Critical Sage UK Ltd
Priority to GB1703060.2A
Publication of GB201703060D0
Publication of GB2560010A
Application granted
Publication of GB2560010B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/17Details of further file system functions
    • G06F16/176Support for shared access to files; File sharing support
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/17Details of further file system functions
    • G06F16/176Support for shared access to files; File sharing support
    • G06F16/1767Concurrency control, e.g. optimistic or pessimistic approaches
    • G06F16/1774Locking methods, e.g. locking methods for file systems allowing shared and concurrent access to files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • G06F16/2308Concurrency control
    • G06F16/2336Pessimistic concurrency control approaches, e.g. locking or multiple versions without time stamps
    • G06F16/2343Locking methods, e.g. distributed locking or locking implementation details
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1095Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A system is provided for sharing data between two or more collaborating nodes 101 including a collaborate server 103 and network 105. A first node transmits, to the collaborate server, a lock request to lock first data, and the collaborate server locks the first data in response to the request; preventing modification by other nodes. The first node can modify the first data such as by modifying a local version of first data and may generate delta payloads, each representing a sequential modification of the first data or may selectively generate a rebase comprising the entire modified content. The first node can provide the delta payloads or rebase to the collaborate server. The system may determine whether the collaborate server is inoperative and store modifications of the first data and transmit these to the collaborate server when it becomes operable again. The nodes may repeatedly transmit a heartbeat signal to the collaborate server; if a heartbeat is not received from the first node within a threshold time the collaborate server may release the lock of the first data.

Description

(71) Applicant(s): Sage (UK) Ltd, North Park, NEWCASTLE-UPON-TYNE, NE13 9AA, United Kingdom
(72) Inventor(s): Mark Ramsey
(74) Agent and/or Address for Service: HGF Limited, Document Handling - HGF - (York), 1 City Walk, LEEDS, LS11 9DX, United Kingdom
(56) Documents Cited: EP 1452981 A2; US 6067551 A; US 20100023521 A1; US 20080208869 A1; US 7603357 B1; US 20140279846 A1; US 20090006553 A1
(58) Field of Search: INT CL G06F; Other: WPI, EPODOC, Patents Fulltext
(54) Title of the Invention: Data collaboration
Abstract Title: Controlling the locking of data in a collaboration architecture
(57) A system is provided for sharing data between two or more collaborating nodes 101 including a collaborate server 103 and network 105. A first node transmits, to the collaborate server, a lock request to lock first data, and the collaborate server locks the first data in response to the request; preventing modification by other nodes. The first node can modify the first data such as by modifying a local version of first data and may generate delta payloads, each representing a sequential modification of the first data or may selectively generate a rebase comprising the entire modified content. The first node can provide the delta payloads or rebase to the collaborate server. The system may determine whether the collaborate server is inoperative and store modifications of the first data and transmit these to the collaborate server when it becomes operable again. The nodes may repeatedly transmit a heartbeat signal to the collaborate server; if a heartbeat is not received from the first node within a threshold time the collaborate server may release the lock of the first data.
Figure GB2560010A_D0001
At least one drawing originally filed was informal and the print reproduced here is taken from a later filed formal copy.
FIG. 1a (sheet 1/7): simplified schematic of the overall data collaboration architecture.
FIG. 1b (sheet 2/7): high-level overview of the architecture components. Legible labels include: Collaborate Service Security, Payload Management Daemon, Notification Service Web Socket Channel, Collaborate API, Collaborate Client Support Notification EndPoint, ElastiCache (Beta) REDIS, Memcached, Metadata, Sign-up, Configure, Registration, Private, Public (controlled) and Trust Boundary.
FIG. 1c (sheet 3/7): interactions between the components of the data collaboration architecture.
FIG. 2a (sheet 4/7): flowchart of data modification under a lock.
FIG. 2b (sheet 5/7): flowchart for handling an inoperative lock owner.
FIG. 2c (sheet 6/7) flowchart:
239 - Lock request from first collaborating node and collaborate service grants lock
241 - Collaborate service becomes inoperative
243 - First node detects inoperability of collaborate service
245 - First node operates in offline mode and tracks modifications of locked data (lock may not be released)
247 - First node detects operability of collaborate service
249 - First node exits offline mode and publishes delta payloads and/or rebase
FIG. 3 (sheet 7/7): payload replay for two clients at different synchronisation points. Legible labels include: Client 1 (Restore, Rebase 1, Initial Backup, Replay), Client 2 (Restore, Rebase 2 - replaces payloads below, Replay) and Payload n.
Application No. GB1703060.2
RTM
Date: 9 August 2017
Intellectual Property Office
The following terms are registered trade marks and should be read as such wherever they occur in this document: Google, Microsoft, Sharepoint, Amazon, ElastiCache
Intellectual Property Office is an operating name of the Patent Office www.gov.uk/ipo
Data Collaboration
BACKGROUND OF THE INVENTION
Field of the Invention
Certain embodiments of the present invention provide a technique for sharing data between two or more collaborating nodes in a collaborative architecture. For example, certain exemplary embodiments provide a method, apparatus and system for controlling the locking of data in a collaboration architecture. Certain exemplary embodiments provide a method, apparatus and system for managing the synchronisation of data between collaborating nodes in a collaboration architecture.
Description of the Related Art
In traditional (or legacy) applications involving access to and modification of data (e.g. business data), the access and modification are typically performed with respect to locally stored data (e.g. on-premises data), for example data locally stored on a desktop computer, server or the like. In many situations, it is desirable to allow data to be shared with another party. For example, a business may wish to share its data with an accountant or other third party. In this case, a copy of the locally stored data may be provided to the other party, or the other party may visit the site of the locally stored data, to allow the other party to access and modify the data.
However, these techniques require either pausing of modifications to the locally stored data until the other party has finished their modification, or a relatively complex consolidation process of merging the other party’s changes with the locally stored data.
Recently, various data collaboration systems have been developed to facilitate sharing of data among multiple parties, for example Google Apps for Business and Microsoft SharePoint. In a typical collaboration system, multiple users at different locations may simultaneously access and/or modify shared data through local applications executing on computing devices connected by a network. Each entity of the system at which data may be accessed and/or modified may be referred to as a node, end point or client. Some collaboration systems apply cloud computing techniques, for example cloud storage.
In a data collaboration system, it is important to ensure the integrity of data across the system at all times. In particular, it is important to ensure that the data at each node is consistent (i.e. synchronised) with the data at the other nodes and is up-to-date. What is desired is a technique for facilitating data integrity, and a technique for facilitating data synchronisation in an efficient manner.
The above information is presented as background information only to assist with an understanding of the present invention. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the present invention.
SUMMARY OF THE INVENTION
It is an aim of certain exemplary embodiments of the present invention to address, solve and/or mitigate, at least partly, at least one of the problems and/or disadvantages associated with the related art, for example at least one of the problems and/or disadvantages described above. It is an aim of certain exemplary embodiments of the present invention to provide at least one advantage over the related art, for example at least one of the advantages described below.
The present invention is defined in the independent claims. Advantageous features are defined in the dependent claims.
In accordance with an aspect of the present invention, there is provided a system for sharing data between two or more collaborating nodes, the system comprising a collaborate server and a first collaborating node, wherein: the first collaborating node is configured to transmit, to the collaborate server, a lock request with respect to first data; the collaborate server is configured, in response to the lock request, to control locking of the first data, thereby preventing modification, by a collaborating node other than the first collaborating node, of the first data; and the first collaborating node is configured to modify the first data.
In accordance with another aspect of the present invention, there is provided a first collaborating node for sharing data between two or more collaborating nodes in a system comprising a collaborate server and the first collaborating node, wherein the first collaborating node comprises: a transmitter for transmitting, to the collaborate server, a lock request with respect to first data, for requesting the collaborate server to control locking of the first data, thereby preventing modification, by a collaborating node other than the first collaborating node, of the first data; and a data modifier for modifying the first data.
In accordance with another aspect of the present invention, there is provided a collaborate server for sharing data between two or more collaborating nodes in a system comprising the collaborate server and a first collaborating node, wherein the collaborate server comprises: a receiver for receiving, from the first collaborating node, a lock request with respect to first data; and a lock service for controlling locking of the first data in response to the lock request, thereby preventing modification, by a collaborating node other than the first collaborating node, of the first data.
In accordance with another aspect of the present invention, there is provided a method for sharing data between two or more collaborating nodes in a system comprising a collaborate server and a first collaborating node, the method comprising: transmitting, by the first collaborating node, to the collaborate server, a lock request with respect to first data; in response to the lock request, controlling, by the collaborate server, locking of the first data, thereby preventing modification, by a collaborating node other than the first collaborating node, of the first data; and modifying, by the first collaborating node, the first data.
In accordance with another aspect of the present invention, there is provided a method, for a first collaborating node, for sharing data between two or more collaborating nodes in a system comprising a collaborate server and the first collaborating node, the method comprising: transmitting, to the collaborate server, a lock request with respect to first data, for requesting the collaborate server to control locking of the first data, thereby preventing modification, by a collaborating node other than the first collaborating node, of the first data; and modifying the first data.
In accordance with another aspect of the present invention, there is provided a method, for a collaborate server, for sharing data between two or more collaborating nodes in a system comprising the collaborate server and a first collaborating node, the method comprising: receiving, from the first collaborating node, a lock request with respect to first data; and controlling locking of the first data in response to the lock request, thereby preventing modification, by a collaborating node other than the first collaborating node, of the first data.
In accordance with another aspect of the present invention, there is provided a system for sharing data between two or more collaborating nodes, the system comprising a collaborate server, a first collaborating node, and a second collaborating node, wherein the first collaborating node is configured to: sequentially modify a locally stored version of first data one or more times to obtain second data; generate one or more delta payloads, each delta payload representing a corresponding sequential modification of the first data, and to provide the delta payloads to the collaborate server; and selectively generate a rebase comprising the entire content of the second data, and to provide the rebase to the collaborate server.
In accordance with another aspect of the present invention, there is provided a first collaborating node for sharing data between two or more collaborating nodes in a system comprising a collaborate server, the first collaborating node, and a second collaborating node, wherein the first collaborating node comprises: a data modifier for sequentially modifying a locally stored version of first data one or more times to obtain second data; a delta payload generator for generating one or more delta payloads, each delta payload representing a corresponding sequential modification of the first data; a rebase generator for selectively generating a rebase comprising the entire content of the second data; and a transmitter for providing the delta payloads and the rebase to the collaborate server.
In accordance with another aspect of the present invention, there is provided a method, for a first collaborating node, for sharing data between two or more collaborating nodes in a system comprising a collaborate server, the first collaborating node, and a second collaborating node, the method comprising: sequentially modifying a locally stored version of first data one or more times to obtain second data; generating one or more delta payloads, each delta payload representing a corresponding sequential modification of the first data; selectively generating a rebase comprising the entire content of the second data; and providing the delta payloads and the rebase to the collaborate server.
In accordance with another aspect of the present invention, there is provided a computer program comprising instructions or code which, when executed, implement a method, system and/or apparatus in accordance with any aspect, claim, example and/or embodiment disclosed herein. A further aspect of the present invention provides a machine-readable storage storing such a program.
Other aspects, advantages, and salient features of the invention will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, disclose exemplary embodiments of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other aspects, and features and advantages of certain exemplary embodiments and aspects of the present invention will be more apparent from the following detailed description when taken in conjunction with the accompanying drawings, in which:
Figures 1a-c illustrate a data collaboration system according to an exemplary embodiment;
Figures 2a-c illustrate an exemplary method for controlling the locking of data in the system illustrated in Figure 1; and
Figure 3 illustrates an exemplary method for efficiently managing the synchronisation of data between collaborating nodes in the system illustrated in Figure 1.
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS OF THE PRESENT INVENTION
The following description of exemplary embodiments of the present invention, with reference to the accompanying drawings, is provided to assist in a comprehensive understanding of the present invention, as defined by the claims. The description includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the invention.
The same or similar components may be designated by the same or similar reference numerals, although they may be illustrated in different drawings.
Detailed descriptions of techniques, structures, constructions, functions or processes known in the art may be omitted for clarity and conciseness, and to avoid obscuring the subject matter of the present invention.
The terms and words used herein are not limited to the bibliographical or standard meanings, but, are merely used to enable a clear and consistent understanding of the invention.
Throughout the description and claims of this specification, the words “comprise”, “contain” and “include”, and variations thereof, for example “comprising”, “containing” and “including”, mean “including but not limited to”, and are not intended to (and do not) exclude other features, elements, components, integers, steps, processes, functions, characteristics, and the like.
Throughout the description and claims of this specification, the singular form, for example “a”, “an” and “the”, encompasses the plural unless the context otherwise requires. For example, reference to “an object” includes reference to one or more of such objects.
Throughout the description and claims of this specification, language in the general form of “X for Y” (where Y is some action, process, function, activity or step and X is some means for carrying out that action, process, function, activity or step) encompasses means X adapted, configured or arranged specifically, but not necessarily exclusively, to do Y.
Features, elements, components, integers, steps, processes, functions, characteristics, and the like, described in conjunction with a particular aspect, embodiment, example or claim disclosed herein are to be understood to be applicable to any other aspect, embodiment, example or claim described herein unless incompatible therewith.
The techniques described herein may be implemented using any suitably configured apparatus and/or system. Such an apparatus and/or system may be configured to perform a method according to any aspect, embodiment, example or claim disclosed herein. Such an apparatus may comprise one or more elements, for example one or more of processors, controllers, modules, units, and the like, each element configured to perform one or more corresponding processes, operations and/or method steps for implementing the techniques described herein. For example, an operation of X may be performed by a module configured to perform X (or an X-module). The one or more elements may be implemented in the form of hardware, software, or any combination of hardware and software.
Certain embodiments of the present invention may be provided in the form of a collaborating node and/or method therefor. Certain embodiments of the present invention may be provided in the form of a collaborate server or collaborate service and/or a method therefor. Certain embodiments of the present invention may be provided in the form of a system comprising a collaborate server/service and two or more collaborating nodes and/or a method therefor.
Figures 1a-c illustrate a data collaboration architecture according to an exemplary embodiment. Figure 1a is a simplified schematic diagram of the overall data collaboration architecture. Figure 1b provides a high level overview of various components of the data collaboration architecture. Figure 1c shows interactions between the various components of the data collaboration architecture. The system 100 comprises two or more nodes (also referred to as end points or clients) 101, and a collaborate server 103. The nodes 101 and collaborate server 103 are connected by a network 105. In certain embodiments, the nodes 101 may each have the same general functionality. The collaboration architecture enables the sharing of data between two or more collaborating nodes (e.g. nodes 101).
Each node 101 provides an interface to allow a user to access and/or modify shared data. In certain embodiments, a node 101 may be implemented as an application executing on a computing device, for example a desktop computer.
A node 101 may comprise a transmitter for transmitting data (e.g. to the collaborate server 103) and a receiver for receiving data (e.g. from the collaborate server 103). The transmission of information may be wired or wireless. A node 101 may also comprise a memory for storing data and other information during operation of the node 101. A node 101 may further comprise a processor, controller and/or one or more hardware and/or software elements (e.g. modules) for performing various operations of the node 101 as described below.
The collaborate server 103 manages various operations and services in relation to the collaboration architecture. For example, the collaborate server 103 may implement a suitable application programming interface (API), for example based on the SData standard, and provide support for exchange of data (e.g. payloads, for example containing native application data, backups and documents) between collaborating nodes 101. As discussed further below, the collaborate server 103 may provide a Lock Service for controlling the locking of data in the collaboration architecture and/or a Payload Management Service for managing the synchronisation of data between collaborating nodes 101. The collaborate server 103 may also provide one or more further services, for example relating to authentication, authorisation, sign-up and/or registration.
The collaborate server 103 may comprise a transmitter for transmitting data (e.g. to a node 101) and a receiver for receiving data (e.g. from a node). The transmission of information may be wired or wireless. The collaborate server 103 may also comprise a memory for storing data and other information during operation of the collaborate server 103. The collaborate server 103 may further comprise a processor, controller and/or one or more hardware and/or software elements (e.g. modules) for performing various operations of the collaborate server 103 as described below.
In certain embodiments, the collaborate server (or collaborate service) 103 may be implemented as one or more web services, for example a public RESTful (representational state transfer) web service. The collaborate service may be built based on a number of web services. For example Amazon S3 may be used for storage of payloads (e.g. binary large objects, BLOBs) being exchanged between collaborating nodes 101. Amazon ElastiCache may be used as a cache for payloads retrieved from S3. The skilled person will appreciate that the present invention is not limited to any of these examples. For example, in certain alternative embodiments ElastiCache may be replaced with any suitable type of self-managed distributed cache implementation, for example Memcached or Redis.
In certain embodiments, the collaborate service is based on node.js to provide an interface between the collaborate service 103 and the nodes 101. Node.js is a software platform for building highly scalable server-side applications which uses an event-driven, non-blocking I/O model, making it lightweight and efficient. The collaborate service may utilise any suitable node modules for its building blocks, for example Architect, Bluebird, Express, Mocha, Istanbul, PM2 and Socket.io. The skilled person will appreciate that the present invention is not limited to these examples.
As mentioned above, the collaborate service provides a mechanism for exchanging data (e.g. payloads) between collaborating nodes 101. Each payload may be communicated from one node 101 to another in a data structure comprising a number of fields containing relevant information. For example, one exemplary data structure may comprise one or more of:
(i) A Content field containing data (which may be encrypted) representing the data being exchanged between the collaborating nodes 101.
(ii) A Verb field for specifying how the receiving collaborating node 101 should process the payload, for example in the form of a command. In certain embodiments the command may include one or more of: Create (e.g. specifying creation of a record or document), Delete (e.g. specifying deletion of a record or document) and Update (e.g. specifying updating of a record or document).
(iii) A MimeType field specifying the type of data held within the payload, for example native application data, a Word document or an Excel spreadsheet.
(iv) A Properties field including an array of one or more custom attributes associated with the payload.
In certain embodiments, certain data (e.g. payloads) exchanged between nodes 101 may be encrypted. For example, the entire data structure described above, or one or more fields thereof, may be encrypted. In this case, the encryption strategy used may be selected according to any suitable design criteria. In certain embodiments, the encryption strategy may be selected by the application that owns the data (e.g. the application to which the data applies). In some embodiments, the collaborate service may act as the enabler for the exchanging of payloads between the collaborating nodes 101, but may not need to access the payload data. In this case, the collaborate service may not be required to decrypt the exchanged payloads. Accordingly, the chosen encryption strategy may be a private strategy between the collaborating nodes 101, which may be unknown to the collaborate service. For example, if a certain application implements symmetric encryption for encrypting its application data, users corresponding to collaborating nodes 101 may share a chosen secret key with each other using a suitable secure method.
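By way of illustration only, a payload envelope along the lines described above might be modelled as in the following TypeScript sketch. The field and type names are assumptions made for the sketch and are not taken from the specification; the Content field is shown carrying ciphertext that is opaque to the collaborate service, consistent with the private encryption strategy discussed above.

```typescript
// Hypothetical payload envelope following the fields described above.
// Names and types are illustrative assumptions only.
type Verb = "Create" | "Delete" | "Update";

interface PayloadEnvelope {
  content: string;                      // data being exchanged; may be ciphertext agreed privately between nodes
  verb: Verb;                           // how the receiving node should process the payload
  mimeType: string;                     // e.g. native application data, a Word document or an Excel spreadsheet
  properties?: Record<string, string>;  // optional custom attributes associated with the payload
}

// Example: an update payload whose content has been encrypted by the owning application.
const example: PayloadEnvelope = {
  content: "…ciphertext…",              // opaque to the collaborate service
  verb: "Update",
  mimeType: "application/x-native-app-data",
  properties: { recordId: "example-record" },   // hypothetical attribute for illustration
};
```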
An exemplary scenario using the system 100 illustrated in Figure 1 in which a first node 101a and a second node 101b collaborate with respect to the same shared data will now be described.
In this example, it is assumed that the first and second nodes 101a, 101b are initially offline (i.e. are not operationally connected to the data collaboration architecture, and in particular the collaborate server 103). It is also assumed that the collaborate server 103 remains operable and the nodes 101, once online (i.e. once they become operationally connected to the data collaboration architecture, and in particular the collaborate server 103), operate in an online mode in which the nodes 101 maintain communication with the collaborate server 103.
In a first step, the first node 101a comes online. For example, the first node 101a may come online in response to a first user switching on a computing device corresponding to the first node 101a.
The first node 101a then retrieves data (e.g. using a suitable module provided in the node, for example a data retrieving module or a receiver) that the first user wishes to access and/or modify, which may be referred to as collaboration data. The collaboration data may be retrieved from a local store, for example a local hard drive of the computing device, or from a remote location, for example a remote server. In certain embodiments, the collaboration data may be retrieved by downloading the collaboration data from the collaborate server 103.
In certain embodiments, if the collaboration data is retrieved from the collaborate server 103 then it may be assumed that the retrieved collaboration data is up to date since the collaborate server 103 maintains the most up to date data. On the other hand, if the collaboration data is retrieved from a source other than the collaborate server 103 (e.g. a local restore of a backup) then the retrieved collaboration data may not be up to date, for example as a result of another node 101 having previously modified its own local version of the collaboration data. Accordingly, in cases where the retrieved collaboration data is not assumed to be up to date, the first node 101a may determine whether the retrieved collaboration data is up-to-date (e.g. using a suitable module provided in the node, for example an up-to-date checking module).
This determination may be made at any suitable time. For example, the first node 101a may determine whether the collaboration data is up-to-date immediately after the collaboration data is retrieved, or alternatively only once the user has indicated a desire to modify the collaboration data. In certain embodiments, before a lock of the collaboration data can be requested in order to begin modification of the collaboration data, as described further below, the first node 101a should determine whether the collaboration data is up to date, and if the collaboration data is determined to be not up-to-date, the collaboration data should be brought up to date, for example according to a procedure described further below. In certain embodiments, the determination is always made prior to any local modification of the collaboration data.
To determine whether collaboration data is up-to-date, the collaboration data may be associated with information (e.g. a “sync state” variable) representing the current state of the collaboration data. Each time the collaboration data is modified, the sync state associated with that collaboration data changes to reflect the modification. The sync state variable is defined such that two versions of the same collaboration data that are synchronised (i.e. consistent) have the same sync state, while two versions of the same collaboration data that are not synchronised (i.e. inconsistent) have different sync states. Accordingly, the synchronisation or otherwise of two versions of the same collaboration data may be determined by comparing the sync states of the two versions of the collaboration data (e.g. using a comparator provided in the node).
The collaborate server 103 maintains a record of the sync state of the most up to date version of the collaboration data (e.g. in a suitable storage or memory provided in the collaborate server 103). In particular, the collaborate server 103 keeps track of all modifications to the collaboration data made by each collaborating node 101 (e.g. the first and second nodes 101a, 101b) and updates a sync state variable of the collaboration data according to the modifications (e.g. using a suitable module provided in the collaborate server 103, for example a sync state updating module). In addition, each collaborating node 101 maintains and updates (e.g. in a suitable storage or memory provided in the node 101) a sync state variable (e.g. using a suitable module provided in the node 101, for example a sync state updating module) associated with its own locally stored version of the collaboration data according to its own modifications to its own locally stored collaboration data. Accordingly, the sync state maintained by the collaborate server 103 may be thought of as reflecting a global view of all modifications to the collaboration data made by all collaborating nodes 101, whereas the sync state maintained by a certain individual node 101 may be thought of as reflecting a more local view that does not necessarily reflect subsequent modifications to the collaboration data made by other collaborating nodes 101.
When the first node 101a retrieves the collaboration data, the first node 101a also retrieves the sync state (first sync state) associated with the retrieved collaboration data, which may be stored in association with the collaboration data. The first node 101a then queries the collaborate server 103 requesting the sync state (second sync state) of the most up to date version of the collaboration data. The first node 101a then compares the first sync state with the second sync state. If the first sync state and the second sync state are consistent (e.g. equal), the first node 101a determines that the retrieved collaboration data is up to date. On the other hand, if the first sync state and the second sync state are not consistent (e.g. not equal), the first node 101a determines that the retrieved collaboration data is not up to date. In this example, it is assumed that the collaboration data retrieved by the first node 101a is up to date at this stage.
In certain alternative embodiments, the first node 101a may provide the first sync state to the collaborate server 103, and the collaborate server 103 may determine whether the collaboration data retrieved by the first node 101a is up to date by comparing the first sync state with the second sync state known at the collaborate server 103. In certain embodiments, the first node 101a may provide the first sync state to the collaborate server 103 as part of a lock request, described in greater detail below. In certain embodiments, a lock service of the collaborate server 103 may perform the determination.
When the first node 101a updates its locally stored data (e.g. using a suitable module provided in the node 101, for example a data updating module) by downloading and applying delta payloads (as described in greater detail below), the first node 101a may request the next payload by providing the current sync state of the locally stored data of the first node 101a. If there is a payload to download (i.e. the locally stored data of the first node 101a is not yet up to date, as determined from the current sync state), the next payload is downloaded together with the sync state for that payload. The payload is then applied to the locally stored data and the current sync state is updated using the sync state for the payload. This process may then be repeated such that the first node 101a may request subsequent payloads, until the locally stored data of the first node 101a is up to date.
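This catch-up loop can be pictured with a small sketch (reusing the hypothetical PayloadEnvelope type from the earlier sketch). The server interface and method names below are assumptions for illustration; the specification describes the behaviour, not a concrete API.

```typescript
// Illustrative only: the shape of the server interface is assumed, not defined by the specification.
interface NextPayloadResponse {
  payload?: PayloadEnvelope;   // absent when the node is already up to date
  syncState?: string;          // sync state associated with the returned payload
}

interface CollaborateServerClient {
  getNextPayload(currentSyncState: string): Promise<NextPayloadResponse>;
}

async function bringUpToDate(
  server: CollaborateServerClient,
  localSyncState: string,
  apply: (p: PayloadEnvelope) => void,
): Promise<string> {
  // Repeatedly request the next payload, passing the node's current sync state,
  // until the server indicates there is nothing further to download.
  let syncState = localSyncState;
  for (;;) {
    const next = await server.getNextPayload(syncState);
    if (!next.payload || !next.syncState) break;  // local data is now up to date
    apply(next.payload);                          // replay the modification locally
    syncState = next.syncState;                   // adopt the payload's sync state
  }
  return syncState;
}
```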
As described in greater detail below, when the first node 101a wishes to modify the collaboration data (e.g. using a suitable module provided in the node 101, for example a data modifying module), the first node 101a requests the collaborate server 103 to lock the collaboration data, thereby temporarily preventing any other collaborating nodes 101 from modifying the collaboration data during the duration of the lock. As mentioned above, in certain embodiments, before the lock can be requested, it should be determined whether the collaboration data is up to date, and if the collaboration data is determined to be not up to date, the collaboration data should be brought up to date, for example according to a procedure described further below.
Once the collaborate server 103 has granted the lock of the collaboration data, the first node 101a may modify the collaboration data. In this example, modification of the collaboration data may be represented by a sequence of incremental modifications, each incremental modification being represented by a certain operation or command (such as Create, Delete or Update) together with data associated with the command (such as the data to be created, deleted or updated). Accordingly, as the collaboration data is modified a sequence of commands and associated data are applied to the collaboration data.
For each command applied to the collaboration data by the first node 101a, a packet (or delta payload) is generated (e.g. using a suitable module provided in the node 101, for example a delta payload generator module) containing sufficient information to enable a different collaborating node 101 to apply (or ‘replay’) the same modification to its own locally stored version of the collaboration data. For example, the delta payload may comprise a data structure of the form described above. The resulting sequence of delta payloads may be transmitted to the collaborate server 103, which stores the delta payloads (for example in S3 storage). The collaborate server 103 also updates its sync state associated with the collaboration data in accordance with the received delta payloads.
In certain embodiments, the first node 101a may transmit the delta payloads to the collaborate server 103 as and when the delta payloads are generated. Alternatively, in certain other embodiments, the first node 101a may accumulate the delta payloads as the collaboration data is modified during the lock, and the first node 101a may combine or consolidate the accumulated delta payloads to form one or more combined packets (e.g. prior to the lock being released), and transmit the combined packets to the collaborate server 103 (e.g. immediately before the lock is released). The delta payloads or the combined packets may be transmitted by a transmitter provided in the node.
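A minimal sketch of the accumulate-and-consolidate option described above is given below, assuming one delta payload per command applied during the lock (the class and method names, and the JSON consolidation format, are assumptions; it reuses the hypothetical PayloadEnvelope and Verb types from the earlier sketch).

```typescript
// Illustrative accumulation of delta payloads during a lock, consolidated before transmission.
class DeltaBuffer {
  private deltas: PayloadEnvelope[] = [];

  record(verb: Verb, content: string, properties?: Record<string, string>): void {
    // One delta payload per command applied to the locally stored collaboration data.
    this.deltas.push({ verb, content, mimeType: "application/x-native-app-data", properties });
  }

  // Combine the accumulated deltas into a single combined packet prior to release of the lock.
  // Here the combined packet simply carries the ordered list of deltas in its content.
  consolidate(): PayloadEnvelope {
    return {
      verb: "Update",
      mimeType: "application/json",
      content: JSON.stringify(this.deltas),
    };
  }
}
```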
In some circumstances, for example as discussed further below, the first node 101a may additionally or alternatively transmit a backup (or rebase) of the entire collaboration data at an appropriate time, for example after modification of the collaboration data by the first node 101a has finished and/or upon request by the collaborate server 103 and/or at the instigation of the first node 101a.
When the first node 101a transmits the delta payloads and/or rebase to the collaborate server 103, the collaborate server 103 may update its own sync state associated with the collaboration data and may provide the updated sync state to the first node 101a, which then stores the updated sync state locally.
The lock may be released according to any suitable condition. The lock may be released when it is determined that the user has finished modifying the collaboration data. For example, in certain embodiments the lock may be released if the collaboration data has not been modified for a certain threshold amount of time, for example of the order of seconds (e.g. 10 seconds). In some embodiments, the threshold may be different for different nodes 101. For example, the nodes 101 may each be assigned a priority level and the threshold may be set such that higher priority nodes 101 have higher thresholds. This allows certain parties (e.g. on-site owners of the collaboration data) to maintain a lock more easily than other parties (e.g. third parties such as an accountant). The skilled person will appreciate that the present invention is not limited to these specific examples. For example, in certain embodiments a lock may be released in response to a user input indicating that the user has finished modifying the collaboration data.
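For illustration, an idle-timeout release policy of the kind described above might look like the following sketch, in which higher-priority nodes are given longer thresholds. The base threshold, the scaling rule and all names are arbitrary assumptions for the sketch.

```typescript
// Illustrative lock-release policy: release when the lock owner has been idle
// longer than a per-node threshold derived from its priority level.
interface LockState {
  ownerNodeId: string;
  lastModificationAt: number;  // epoch milliseconds of the owner's last modification
}

const BASE_IDLE_THRESHOLD_MS = 10_000;  // of the order of seconds, as described above

function idleThresholdFor(priorityLevel: number): number {
  // Higher-priority nodes (e.g. the on-site owner of the collaboration data) keep the lock for longer.
  return BASE_IDLE_THRESHOLD_MS * Math.max(1, priorityLevel);
}

function shouldReleaseLock(lock: LockState, priorityLevel: number, now = Date.now()): boolean {
  return now - lock.lastModificationAt > idleThresholdFor(priorityLevel);
}
```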
When the second node 101b wishes to collaborate with the same collaboration data (e.g. access and modify the collaboration data), the following procedure may be carried out.
First, the second node 101b comes online, for example in response to a second user switching on a computing device corresponding to the second node 101b. In a similar manner to the first node 101a described above, the second node 101b retrieves its own version of the collaboration data, which at this stage has been previously modified by the first node 101a.
In a similar manner to the first node 101a described above, the second node 101b transmits a request to the collaborate server 103 to lock the data. However, in a similar manner as described above, before the lock can be requested by the second node 101b, the second node 101b determines whether the retrieved collaboration data is up to date, and if the collaboration data is not up to date, the second node 101b brings the collaboration data up to date, for example according to a procedure described further below.
For example, to determine whether the collaboration data is up to date, the second node 101b compares a sync state variable (third sync state) associated with the locally stored version of the collaboration data with a sync state variable (first sync state) received from the collaborate server 103, or alternatively the collaborate server 103 compares the third sync state provided by the second node 101b (e.g. in the lock request) with the first sync state. In this example, since the first node 101a has previously modified the collaboration data, the third sync state will be different from the first sync state, indicating that the locally stored collaboration data retrieved by the second node 101b is not up to date.
In order to bring the locally stored collaboration data of a certain node 101 (e.g. the first node 101a or the second node 101b) up to date, the node 101 downloads necessary information from the collaborate server 103 to apply to its locally stored collaboration data.
In some cases, the relevant node 101 (e.g. the second node 101b) may retrieve one or more delta payloads from the collaborate server 103 (e.g. from S3 storage) representing the modifications to the collaboration data previously made by other collaborating nodes 101 (e.g. the first node 101a) and ‘replay’ the delta payloads against its locally stored version of the collaboration data. For example, the second node 101b may use the information contained in the delta payloads to apply the same sequence of commands to its locally stored version of the collaboration data that the first node 101a previously applied to its locally stored version of the collaboration data.
In other cases, the relevant node 101 (e.g. the second node 101b) may retrieve the most recent rebase from the collaborate server 103 and replace its locally stored version of the collaboration data with the rebase data.
As discussed further below, in certain embodiments, the decision whether to download delta payloads or a rebase may depend on which option is most efficient, for example in terms of the amount of data needed to be downloaded, or any other suitable efficiency measure.
After the second node 101b has brought its version of the collaboration data up to date, the second node 101b may request a lock of the collaboration data to enable the second node 101b to modify the collaboration data. After the lock has been granted by the collaborate server 103, the second node 101b may modify the data. The operations carried out when the second node 101b modifies the data are substantially the same as described above in relation to the first node 101a. That is, the second node 101b may apply one or more commands (e.g. Create, Delete, Update) to the collaboration data, generate one or more corresponding delta payloads, transmit the delta payloads and/or a rebase to the collaborate server 103 at the appropriate time, and update its sync state of the collaboration data. When modification of the collaboration data by the second node 101b is determined to have been completed, for example in a manner described above, the collaborate server 103 releases the lock, thereby allowing other collaborating nodes 101 (e.g. the first node 101a once again or another collaborating node 101) to request a lock of the collaboration data and hence modify the collaboration data.
The Lock Service for controlling the locking of data in the collaboration architecture illustrated in Figure 1 will now be described in more detail with reference to Figures 2a-c.
The Lock Service provides a mechanism for granting a collaborating node 101 exclusive access to certain data, for example a certain area or scope of an application, thus locking out any other collaborating nodes 101 from the same data for the duration of the lock. In certain embodiments, the lock duration may be relatively short, for example of the order of seconds. The Lock Service helps to ensure the integrity of the application data at all times.
As shown in Figure 2a, the Lock Service is called (Step 203) by a node (first node 101a) prior to performing any local data (e.g. local application data) modification. Before calling the lock service, the local application data of the calling node (i.e. the first node 101a) should be up to date (Step 201). If the local application data of the calling node 101a is not up to date then it may be brought up to date by applying a suitable synchronisation procedure, for example as described above. Once the lock service has been called and the lock has been granted, the first node 101a performs modification of the local application data (Step 205), and in response generates and publishes one or more corresponding payloads (Steps 207 and 209). For example, the payloads may be provided to the S3 service described above. The node 101 that requested and is granted the lock may be referred to as the owner of the lock. Once the modification is complete and the modifications have been committed locally (i.e. to the local application data of the first node 101a), the lock is released (Step 211).
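Expressed as an illustrative TypeScript sketch (every interface below is an assumption made for the sketch, reusing the hypothetical PayloadEnvelope type from earlier; the specification describes the flow, not an API), the Figure 2a sequence is roughly:

```typescript
// Illustrative end-to-end flow for Figure 2a.
interface LockService {
  acquire(dataId: string): Promise<void>;   // Step 203: call the Lock Service
  release(dataId: string): Promise<void>;   // Step 211: release once changes are committed locally
}
interface PayloadSink {
  publish(payload: PayloadEnvelope): Promise<void>;  // Steps 207/209: publish generated payloads
}

async function modifyUnderLock(
  dataId: string,
  lockService: LockService,
  sink: PayloadSink,
  ensureUpToDate: () => Promise<void>,      // Step 201: local application data must be up to date first
  modifyLocally: () => PayloadEnvelope[],   // Step 205: apply local modifications, returning delta payloads
): Promise<void> {
  await ensureUpToDate();
  await lockService.acquire(dataId);
  try {
    for (const payload of modifyLocally()) {
      await sink.publish(payload);
    }
  } finally {
    await lockService.release(dataId);
  }
}
```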
In the above scheme, a problem may arise if the collaborating node 101 that called the Lock Service (i.e. the first node 101a) or the collaborate service becomes inoperative (e.g. fails or goes off-line). In order to ensure that access and/or modification of the data can continue uninterrupted under these circumstances, certain embodiments of the present invention implement one or more of the following techniques as part of the Lock Service.
A first technique, illustrated in Figure 2b, may be provided to handle the situation in which a collaborating node 101 requesting a lock becomes inoperative. In response to a lock request from a first collaborating node 101a (Step 221), the first node 101a repeatedly (e.g. periodically) transmits a signal (e.g. a ‘heartbeat’ signal) to the collaborate service to indicate that the first node 101a is still operative (Step 223). In certain embodiments, in response to the lock request, the collaborate service may provide a ‘heartbeat endpoint’ (e.g. a URL), which the first node 101a uses to call the collaborate service (to transmit the heartbeat signals to the collaborate service). The collaborate service may also provide a heartbeat interval defining the period of the calls to the heartbeat endpoint.
If a heartbeat call is not received by the collaborate service within a certain amount of time (e.g. within the heartbeat interval, or an interval based on the heartbeat interval, for example a certain multiple of the heartbeat interval) then the first node 101a is deemed to have become inoperative (Step 225). In this case, the collaborate service releases the lock (Step 227), thereby allowing other collaborating nodes 101 (i.e. nodes 101 other than the first node 101a) to request a lock of the same data.
The skilled person will appreciate that any suitable technique may be used to determine the inoperability of the collaborating node 101 that requested the lock (first node 101a). For example, as an alternative to the heartbeat scheme described above, the collaborate service may repeatedly (e.g. periodically) poll the first node 101a and determine that the first node 101a is inoperative if a poll response is not received from the first node 101a within a certain period of time.
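A compact sketch of the heartbeat variant is given below. The endpoint handling, the grace-period multiple and the use of fetch (available in Node 18+) are assumptions for illustration only.

```typescript
// Client side: call the heartbeat endpoint returned with the lock grant, at the advertised interval.
function startHeartbeat(heartbeatEndpoint: string, heartbeatIntervalMs: number): () => void {
  const timer = setInterval(() => {
    // Errors are ignored here for brevity; a real client would handle them.
    fetch(heartbeatEndpoint, { method: "POST" }).catch(() => {});
  }, heartbeatIntervalMs);
  return () => clearInterval(timer);  // stop heartbeating when the lock is released
}

// Server side: deem the lock owner inoperative if no heartbeat arrives within a grace period
// (here an assumed multiple of the heartbeat interval) and release the lock.
function lockWatchdog(
  heartbeatIntervalMs: number,
  lastHeartbeatAt: () => number,
  releaseLock: () => void,
): ReturnType<typeof setInterval> {
  const graceMs = heartbeatIntervalMs * 3;  // arbitrary multiple for illustration
  return setInterval(() => {
    if (Date.now() - lastHeartbeatAt() > graceMs) releaseLock();
  }, heartbeatIntervalMs);
}
```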
A second technique, illustrated in Figure 2c, may be provided to handle the situation in which the collaborate service becomes inoperative. If the collaborate service becomes inoperative (Step 241), this change of state is detected (Step 243).
Any suitable technique may be used for detecting the inoperability of the collaborate service. For example, the collaborate service may repeatedly (e.g. periodically) transmit a heartbeat signal, and the inoperability of the collaborate service may be detected when a heartbeat signal is not received from the collaborate service within a certain period of time. Alternatively, one or more collaborating nodes 101 may repeatedly (e.g. periodically) poll the collaborate service, and determine that the collaborate service is inoperative if a poll response is not received within a certain period of time.
When the owner of the lock detects the inoperability of the collaborate service, that node 101 may operate in an offline mode (Step 245). When operating in the offline mode, the node 101 may continue to modify the collaboration data. For example, as described above, the node 101 may apply one or more commands to the collaboration data and generate one or more corresponding delta payloads. However, while the collaborate server 103 is inoperative and the node 101 that owns the lock is operating in the offline mode, the lock may not be released and the delta payloads and/or a rebase may not be transmitted to the collaborate service. In this case, the node 101 may track the modifications to the collaboration data by accumulating the delta payloads and updating the sync state of the collaboration data. In certain embodiments, a user of the lock owner may be queried as to whether the node 101 should enter the offline mode to enable modification of the collaboration data to continue, or whether modifications to the collaboration data should be prevented until the collaborate service returns to an operative state.
When the collaborate service returns to an operative state (Step 247), the node 101 may exit the offline mode and may enter the online mode (Step 249). In addition, the lock may be released at the appropriate time (e.g. when it is determined that modification of the data is completed). Furthermore, one or more delta payloads, including those that were accumulated during the period of inoperability of the collaborate service, and/or a rebase may be transmitted to the collaborate service (Step 249) at the appropriate time (e.g. at the time the collaborate service returns to an operable state or when the lock is released). Accordingly, when the collaborate service returns to operability, other collaborating nodes 101 may bring their local data up to date, either by replaying the delta payloads against their own local data or by entirely replacing their own local data with a rebase, and may request a lock of the collaboration data to allow the nodes 101 to modify the collaboration data.
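One way to picture this offline-mode behaviour is the following sketch (the queueing strategy and all names are assumptions; it reuses the hypothetical PayloadEnvelope type from the earlier sketch).

```typescript
// Illustrative offline handling: while the collaborate service is inoperative the lock owner keeps
// modifying locally, accumulates the corresponding delta payloads, and flushes the backlog once
// the service is operative again, after which the lock can be released.
class OfflineAwareNode {
  private offline = false;
  private pending: PayloadEnvelope[] = [];

  constructor(private publish: (p: PayloadEnvelope) => Promise<void>) {}

  onCollaborateServiceDown(): void { this.offline = true; }   // Steps 241/243

  recordModification(delta: PayloadEnvelope): Promise<void> {
    if (this.offline) {
      this.pending.push(delta);        // Step 245: track modifications locally; lock is not released
      return Promise.resolve();
    }
    return this.publish(delta);        // online mode: publish immediately
  }

  async onCollaborateServiceUp(): Promise<void> {             // Steps 247/249
    this.offline = false;
    for (const delta of this.pending) {
      await this.publish(delta);       // flush deltas accumulated during the outage
    }
    this.pending = [];
  }
}
```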
An operative state of the collaborate service may be detected by any suitable technique, for example resumption of heartbeat signals from the collaborate service or receipt of a poll response from the collaborate service.
In certain alternative embodiments, if the collaborate service becomes inoperative, this change of state is detected and one of the nodes 101 takes over management of the lock from the collaborate server 103. For example, certain operations and functions normally performed by the collaborate service to implement the Lock Service are performed (temporarily) by the node 101. When the collaborate service returns to an operative state, management of the lock may pass back to the collaborate service.
The node 101 that takes over management of the lock may be selected using any suitable technique. For example, one of the nodes 101 may be pre-assigned as a master node and management of the lock may pass to the master node. In certain embodiments, the collaborating nodes 101 may be placed in a predefined order of priority, and management of the lock may pass to the highest priority node 101 that is currently operative. In another example, management of the lock may pass to a randomly selected (operative) node 101.
The Payload Management Service for managing the synchronisation of data between collaborating nodes 101 in the collaboration architecture illustrated in Figure 1 will now be described in more detail with reference to Figure 3.
The aim of the Payload Management Service is to efficiently manage the synchronisation of local data (e.g. application data) for nodes 101 that are collaborating against the same shared data. In particular, the Payload Management Service may be configured to ensure that collaborating nodes 101 download the least amount of data required to bring their local data up to date.
In some circumstances, one or more nodes 101 may be disconnected from the collaboration architecture or may be in an inoperable state. For example, the host application may not be running and/or the computing device may be switched off. Such nodes 101 may be referred to as inactive nodes. In this case, if other nodes 101 continue to collaborate against the same shared data while there are inactive nodes 101, a situation arises where the local data of inactive nodes 101 becomes out of date.
Accordingly, when an inactive node 101 becomes active, it is necessary to bring the local data of that node 101 up to date. This may be achieved by downloading payloads, for example from S3 described above, to allow the node 101 to bring their data up to date. For example, the node 101 may download one or more payloads (e.g. delta payloads) corresponding to one or more respective incremental modifications to replay against their own local data, or alternatively may download a single payload (e.g. a baseline payload) corresponding to the entire modified data that the node 101 may use to entirely replace their own local data.
The longer a node 101 remains inactive, the greater the number of delta payloads the node 101 will be required to download and replay to bring their local data up to date. It is the aim of the Payload Manager to bring inactive nodes 101 up to date as efficiently as possible, for example with the smallest total amount of data required to be downloaded.
Bringing a node 101 up to date involves the host application downloading and replaying the relevant payloads, where a payload can be one of the following types (a minimal replay sketch follows this list):
1. Rebase - this type of payload is defined as the initial starting point and can be thought of as an initial backup of all application data. Replaying a rebase typically involves wiping all local application data and replacing it (e.g. using a restore operation) with the contents of the rebase.
2. Delta Payload - this type of payload contains one or more changes/instructions (Create, Update, Delete) which the host application should replay against the local application data.
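A minimal replay sketch follows, assuming a hypothetical local_store object exposing restore/create/update/delete operations and a simple dictionary representation of payloads; the host application's actual interfaces are not specified here.

def replay_payload(local_store, payload):
    # Apply one downloaded payload to the local application data.
    if payload["type"] == "rebase":
        # A rebase is a backup of all application data: wipe the local data
        # and restore it from the rebase contents.
        local_store.restore(payload["contents"])
    elif payload["type"] == "delta":
        # A delta payload carries Create/Update/Delete instructions to replay.
        for instruction in payload["instructions"]:
            op = instruction["op"]
            if op == "create":
                local_store.create(instruction["record"])
            elif op == "update":
                local_store.update(instruction["record"])
            elif op == "delete":
                local_store.delete(instruction["record"])
            else:
                raise ValueError(f"unknown instruction: {op}")
    else:
        raise ValueError(f"unknown payload type: {payload['type']}")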
Figure 3 illustrates a scenario where there are two nodes 101 collaborating against the same shared data, but which are at different synchronisation points. In Figure 3, there are two rebase payloads (Rebase 1 and Rebase 2). The Payload Manager may define two different types of Rebase payloads:
1. (Standard) Rebase - Defined as the initial starting point or as a Refresh Rebase. A Refresh Rebase is an approach where the Payload Manager requests that the host application generate a new Rebase because there has been a significant number of payloads, or a significant consolidated size of payloads, since the prior Rebase. If a new client were to join the collaboration, the Payload Manager would insist that the new client start from the newly generated (Refresh) Rebase instead of having to download the prior Rebase and potentially many delta payloads.
2. Mandatory Rebase - This type of Rebase is host application controlled and ensures that all clients are forced to download this rebase.
Typical scenarios for Mandatory Rebases are: (a) the user has performed a local restore of application data (e.g. restoring local application data from a local backup) on the originating desktop machine, resulting in all shared application data needing to be refreshed; and (b) the host application is performing a significant number of amendments to the shared application data and as such has placed an exclusive lock on the shared data. Once the lock is removed, a mandatory rebase is pushed into Sage Drive, once again resulting in all shared application data needing to be refreshed.
In Figure 3, assuming Rebase 2 is a Refresh Rebase, the download and replay behaviour for Client 1 is as follows:
1. Rebase 1
2. Payload 1
3. Payload 2
4. ... Up to Payload n
5. Skip Rebase 2
6. Payload 1
In step 5 above, Client 1 skips the downloading of (Refresh) Rebase 2. The Payload Manager knows (for example via a provided SyncState) that Client 1 is already up to date because they have downloaded all prior payloads. On the other hand, in an alternative scenario in which Rebase 2 is of a Mandatory Rebase type, step 5 would not be skipped.
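The selection behaviour described above can be sketched as follows, under the assumption that the payload history is modelled as an ordered list of dictionaries and a client's synchronisation point as the index of the last payload it has replayed; these structures are illustrative only, not the Payload Manager's actual schema.

REBASE_KINDS = {"rebase", "refresh_rebase", "mandatory_rebase"}


def build_download_plan(payload_history, sync_point):
    # Return the payloads a client still needs, in replay order.
    # sync_point is -1 for a brand-new client.

    # Locate the most recent rebase the client has not yet replayed.
    latest_rebase = None
    for index in range(len(payload_history) - 1, sync_point, -1):
        if payload_history[index]["kind"] in REBASE_KINDS:
            latest_rebase = index
            break

    if latest_rebase is None:
        # No new rebase: simply replay the outstanding delta payloads.
        return payload_history[sync_point + 1:]

    kind = payload_history[latest_rebase]["kind"]
    if kind == "refresh_rebase" and sync_point == latest_rebase - 1:
        # Client 1 case: already up to date with everything before the Refresh
        # Rebase, so skip it and continue with the payloads that follow it.
        return payload_history[latest_rebase + 1:]

    # New or out-of-date clients (Client 2 case), and any Mandatory Rebase:
    # start from the rebase itself, then replay the payloads that follow it.
    return payload_history[latest_rebase:]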
By contrast, the download and replay behaviour for Client 2, which is at a different synchronisation point, is as follows:
1. Rebase 2
2. Payload 1
As mentioned above, the Payload Manager defines the concept of a Refresh Rebase, which is an approach for efficiently ensuring that new clients are able to bring their local application data up to date by downloading and replaying the smallest number (count or size) of payloads.
In certain embodiments, the Payload Management Service provides a mechanism for informing a node 101 whether a new (Refresh) rebase is required. For example, the Payload Management Service may provide an end point (e.g. ‘rebases\$status’ end point), which the host application may call as required. The response from this service request includes a value indicating whether the host application should push a new (Refresh) Rebase. For example, the response may comprise a Boolean value called ‘requiresPushRebase’, which when set to ‘true’ requests that the host application should push a new (Refresh) Rebase.
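For illustration only, a host application might query this status end point roughly as below; the exact URL layout, authentication and any fields other than 'requiresPushRebase' are assumptions rather than the service's documented interface.

import requests


def should_push_refresh_rebase(base_url, dataset_id, auth_headers):
    # Ask the Payload Management Service whether a new (Refresh) Rebase is
    # needed. The path below is an assumed rendering of the 'rebases\$status'
    # end point.
    response = requests.get(
        f"{base_url}/datasets/{dataset_id}/rebases/$status",
        headers=auth_headers,
        timeout=10,
    )
    response.raise_for_status()
    return bool(response.json().get("requiresPushRebase", False))


# The host application might call this after uploading its delta payloads and,
# if True is returned, generate and push a new (Refresh) Rebase.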
The determination as to whether the host application should push a new (Refresh) Rebase may be made based on any suitable set of one or more criteria. For example, the criteria may be based on the count of payloads since the last rebase and/or the total size of the payloads since the last rebase. In some embodiments, the criteria may be based on comparing these values to certain thresholds.
For example, the calculation the Payload Manager uses to determine whether or not to set the ‘requiresPushRebase’ value to ‘true’ is as follows (a sketch of this check follows the steps below):
1. Retrieve (count) of payloads since last rebase
2. Retrieve (total size) of these payloads
3. If (count) > (threshold value) requiresPushRebase = true
4. If (total size) > (threshold value) requiresPushRebase = true
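A hedged sketch of this server-side check is given below; the payload bookkeeping and the threshold values shown are placeholders for illustration, not the service's actual defaults (default values and per-application overrides are discussed below).

def requires_push_rebase(payload_sizes_since_last_rebase,
                         count_threshold=100,
                         size_threshold=50 * 1024 * 1024):
    # payload_sizes_since_last_rebase is a list of payload sizes in bytes.
    count = len(payload_sizes_since_last_rebase)       # step 1: count of payloads
    total_size = sum(payload_sizes_since_last_rebase)  # step 2: total size of payloads
    if count > count_threshold:                        # step 3
        return True
    if total_size > size_threshold:                    # step 4
        return True
    return False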
In certain embodiments, the total size of the payloads since the last rebase may be compared to the total size of a new rebase, and a new rebase may be pushed if the total size of the payloads since the last rebase is greater than the total size of a new rebase.
Default values may be provided for the threshold values defined above. In certain embodiments, these values can also be overridden on a per-consuming-application basis during the registration of a shared dataset.
The skilled person will appreciate that the present invention is not limited to these specific examples. For example, the criteria for determining whether the host application should push a new rebase may be based on any suitable efficiency consideration. In certain embodiments, the criteria may be chosen to ensure that a node may bring their local data up to date in the most efficient manner possible (e.g. in terms of the amount of data to download, the number of payloads to download, the total time required to download, etc., or in terms of any other resource usage) by downloading delta payloads, a rebase, or a combination thereof. In certain embodiments, if greater efficiency may be obtained by downloading a rebase rather than one or more delta payloads, then the host application should push a new rebase.
As described above, certain embodiments of the present invention provide a data collaboration architecture that is secure, simple to consume, highly scalable, relatively low cost and nondiscriminating (i.e. open to third party developers).
It will be appreciated that embodiments of the present invention can be realized in the form of hardware, software or a combination of hardware and software. Any such software may be stored in the form of volatile or non-volatile storage, for example a storage device like a ROM, whether erasable or rewritable or not, or in the form of memory such as, for example, RAM, memory chips, device or integrated circuits or on an optically or magnetically readable medium such as, for example, a CD, DVD, magnetic disk or magnetic tape or the like.
It will be appreciated that the storage devices and storage media are embodiments of machine-readable storage that are suitable for storing a program or programs comprising instructions that, when executed, implement certain embodiments of the present invention. Accordingly, certain embodiments provide a program comprising code for implementing a method, apparatus or system as claimed in any one of the claims of this specification, and a machine-readable storage storing such a program. Still further, such programs may be conveyed electronically via any medium, for example a communication signal carried over a wired or wireless connection, and embodiments suitably encompass the same.
While the invention has been shown and described with reference to certain embodiments thereof, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the scope of the invention, as defined by the appended claims.

Claims (31)

Claims
1. A system for sharing data between two or more collaborating nodes, the system comprising a collaborate server and a first collaborating node, wherein:
the first collaborating node is configured to transmit, to the collaborate server, a lock request with respect to first data;
the collaborate server is configured, in response to the lock request, to control locking of the first data, thereby preventing modification, by a collaborating node other than the first collaborating node, of the first data; and the first collaborating node is configured to modify the first data.
2. A system according to claim 1, wherein the first collaborating node is configured to transmit the lock request in response to a request to modify the first data.
3. A system according to claim 1 or 2, wherein the first collaborating node is configured to retrieve a local copy of the first data and to modify the first data by modifying the local copy of the first data.
4. A system according to claim 3, wherein the first collaborating node is configured to retrieve the local copy of the first data by one of: downloading a copy of the first data from the collaborate server; and retrieving the local copy of the first data from local storage.
5. A system according to claim 3 or 4, wherein, prior to transmitting the lock request, the first collaborating node is further configured to:
determine whether the local copy of the first data is consistent with an up to date version of the first data stored at the collaborate server; and if the local copy of the first data is not consistent with the up to date version of the first data, update the local copy of the first data to be consistent with the up to date version of the first data.
6. A system according to claim 5, wherein the system is configured to determine whether the local copy of the first data is consistent with the up to date version of the first data by comparing a synchronisation state variable associated with the local copy of the first data with a synchronisation state variable associated with the up to date version of the first data.
7. A system according to claim 5 or 6, wherein the first collaborating node is configured to update the local copy of the first data by:
downloading, from the collaborate server, one or more delta payloads, each delta payload representing an incremental modification to the first data, and applying the delta payloads to the local copy of the first data; or downloading, from the collaborate server, a rebase representing the up to date version of the first data, and replacing the local copy of the first data with the rebase.
8. A system according to any preceding claim, wherein:
the first collaborating node is configured to modify the first data after the first data has been locked; and the collaborate server is configured to release the lock of the first data after the first collaborating node has completed modifying the first data.
9. A system according to claim 8, wherein the collaborate server is configured to release the lock of the first data if no modification of the first data by the first collaborating node has occurred within a threshold time.
10. A system according to claim 8 or 9, wherein the first collaborating node is configured to transmit, to the collaborate server, modification information representing one or more modifications to a local copy of the first data by the first collaborating node.
11. A system according to claim 10, wherein the modification information comprises one or more of: one or more delta payloads, each delta payload representing an incremental modification to the first data; and a rebase representing the entire content of the first data after the modification.
12. A system according to claim 10 or 11, wherein the first collaborating node is configured to transmit the modification information when the lock is released.
13. A system according to claim 10, 11 or 12, wherein the collaborate server is configured to update a synchronisation state variable associated with the first data according to received modification information.
14. A system according to claim 13, wherein:
the collaborate server is configured to provide the updated synchronisation state to the first collaborating node; and the first collaborating node is configured to store the updated synchronisation state in association with the local copy of the first data.
15. A system according to any preceding claim, wherein:
the system is configured to determine whether the collaborate server is in an inoperative state; and if the collaborate server is determined to be in an inoperative state, the first collaborating node is configured to store modification information representing one or more modifications of the first data by the first collaborating node during the period in which the collaborate server is in the inoperative state.
16. A system according to claim 15, wherein:
the system is further configured to determine whether the collaborate server becomes operable after being in the inoperative state; and when the collaborate server is determined to have become operable, the first collaborating node is configured to transmit the stored modification information to the collaborate server.
17. A system according to any preceding claim, wherein:
the first collaborating node is further configured to repeatedly transmit a signal to the collaborate server; and the collaborate server is further configured to release the lock of the first data if a signal is not received from the first collaborating node within a threshold time, such that a collaborating node other than the first collaborating node is allowed to request locking of the first data.
18. A first collaborating node for sharing data between two or more collaborating nodes in a system comprising a collaborate server and the first collaborating node, wherein the first collaborating node comprises:
a transmitter for transmitting, to the collaborate server, a lock request with respect to first data, for requesting the collaborate server to control locking of the first data, thereby preventing modification, by a collaborating node other than the first collaborating node, of the first data; and a data modifier for modifying the first data.
19. A collaborate server for sharing data between two or more collaborating nodes in a system comprising the collaborate server and a first collaborating node, wherein the collaborate server comprises:
a receiver for receiving, from the first collaborating node, a lock request with respect to first data; and a lock service for controlling locking of the first data in response to the lock request, thereby preventing modification, by a collaborating node other than the first collaborating node, of the first data.
20. A method for sharing data between two or more collaborating nodes in a system comprising a collaborate server and a first collaborating node, the method comprising:
transmitting, by the first collaborating node, to the collaborate server, a lock request with respect to first data;
in response to the lock request, controlling, by the collaborate server, locking of the first data, thereby preventing modification, by a collaborating node other than the first collaborating node, of the first data; and modifying, by the first collaborating node, the first data.
21. A method, for a first collaborating node, for sharing data between two or more collaborating nodes in a system comprising a collaborate server and the first collaborating node, the method comprising:
transmitting, to the collaborate server, a lock request with respect to first data, for requesting the collaborate server to control locking of the first data, thereby preventing modification, by a collaborating node other than the first collaborating node, of the first data; and modifying the first data.
22. A method, for a collaborate server, for sharing data between two or more collaborating nodes in a system comprising the collaborate server and a first collaborating node, the method comprising:
receiving, from the first collaborating node, a lock request with respect to first data; and controlling locking of the first data in response to the lock request, thereby preventing modification, by a collaborating node other than the first collaborating node, of the first data.
23. A system for sharing data between two or more collaborating nodes, the system comprising a collaborate server, a first collaborating node, and a second collaborating node, wherein the first collaborating node is configured to:
sequentially modify a locally stored version of first data one or more times to obtain second data;
generate one or more delta payloads, each delta payload representing a corresponding sequential modification of the first data, and to provide the delta payloads to the collaborate server; and selectively generate a rebase comprising the entire content of the second data, and to provide the rebase to the collaborate server.
24. A system according to claim 23, wherein the second collaborating node is configured to perform one or both of:
download one or more delta payloads from the collaborate server, and apply the downloaded delta payloads to a locally stored version of the first data; and download a rebase from the collaborate server, and replace a locally stored version of the first data with the rebase.
25. A system according to claim 23 or 24, wherein the first collaborating node is configured to selectively generate a first rebase in response to a request from the collaborate server.
26. A system according to claim 25, wherein the collaborate server is configured to request the first collaborating node to generate the first rebase if the number of delta payloads exceeds a first threshold, or if the total size of the delta payloads exceeds a second threshold.
27. A system according to any of claims 23 to 26, wherein the first collaborating node is configured to selectively generate a second rebase under control of the first collaborating node, and wherein the collaborate server is configured to control the second collaborating node to unconditionally download the second rebase and replace a locally stored version of the first data with the second rebase.
28. A system according to claim 27, wherein the first collaborating node is configured to generate the second rebase if the number of delta payloads exceeds a first threshold, or if the total size of the delta payloads exceeds a second threshold.
29. A system according to claim 27 or 28, wherein the first collaborating node is configured to generate the second rebase if the first collaborating node has performed a local restore operation on a locally stored version of the first data.
30. A first collaborating node for sharing data between two or more collaborating nodes in a system comprising a collaborate server, the first collaborating node, and a second collaborating node, wherein the first collaborating node comprises:
a data modifier for sequentially modifying a locally stored version of first data one or more times to obtain second data;
a delta payload generator for generating one or more delta payloads, each delta payload representing a corresponding sequential modification of the first data;
a rebase generator for selectively generating a rebase comprising the entire content of the second data; and
a transmitter for providing the delta payloads and the rebase to the collaborate server.
31. A method, for a first collaborating node, for sharing data between two or more collaborating nodes in a system comprising a collaborate server, the first collaborating node, and a second collaborating node, the method comprising:
sequentially modifying a locally stored version of first data one or more times to obtain second data;
generating one or more delta payloads, each delta payload representing a corresponding sequential modification of the first data;
selectively generating a rebase comprising the entire content of the second data; and providing the delta payloads and the rebase to the collaborate server.
Intellectual Property Office
Application No: GB1703060.2
Claims searched: 1 to 22
GB1703060.2A 2017-02-24 2017-02-24 Data collaboration Active GB2560010B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1703060.2A GB2560010B (en) 2017-02-24 2017-02-24 Data collaboration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1703060.2A GB2560010B (en) 2017-02-24 2017-02-24 Data collaboration

Publications (3)

Publication Number Publication Date
GB201703060D0 GB201703060D0 (en) 2017-04-12
GB2560010A true GB2560010A (en) 2018-08-29
GB2560010B GB2560010B (en) 2021-08-11

Family

ID=58544406

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1703060.2A Active GB2560010B (en) 2017-02-24 2017-02-24 Data collaboration

Country Status (1)

Country Link
GB (1) GB2560010B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6067551A (en) * 1997-11-14 2000-05-23 Microsoft Corporation Computer implemented method for simultaneous multi-user editing of a document
EP1452981A2 (en) * 2003-02-28 2004-09-01 Microsoft Corporation A method to delay locking of server files on edit
US20080208869A1 (en) * 2007-02-28 2008-08-28 Henri Han Van Riel Distributed online content
US20090006553A1 (en) * 2007-06-01 2009-01-01 Suman Grandhi Remote Collaboration Tool For Rich Media Environments
US7603357B1 (en) * 2004-06-01 2009-10-13 Adobe Systems Incorporated Collaborative asset management
US20100023521A1 (en) * 2008-07-28 2010-01-28 International Business Machines Corporation System and method for managing locks across distributed computing nodes
US20140279846A1 (en) * 2013-03-13 2014-09-18 CoralTree Inc. System and method for file sharing and updating

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040267697A1 (en) * 2003-06-25 2004-12-30 Javad Hamidi File storage network
US8244678B1 (en) * 2008-08-27 2012-08-14 Spearstone Management, LLC Method and apparatus for managing backup data


Also Published As

Publication number Publication date
GB201703060D0 (en) 2017-04-12
GB2560010B (en) 2021-08-11

Similar Documents

Publication Publication Date Title
RU2421799C2 (en) Safety in applications of equivalent nodes synchronisation
CN102438041B (en) Upgrade of highly available farm server groups
CN101167069B (en) System and method for peer to peer synchronization of files
US7136903B1 (en) Internet-based shared file service with native PC client access and semantics and distributed access control
US11269813B2 (en) Storing temporary state data in separate containers
US20080243847A1 (en) Separating central locking services from distributed data fulfillment services in a storage system
US20060129627A1 (en) Internet-based shared file service with native PC client access and semantics and distributed version control
KR20170002441A (en) File service using a shared file access-rest interface
KR20130131362A (en) Providing transparent failover in a file system
EP3649592A1 (en) Systems and methods for content sharing through external systems
CN105378711A (en) Sync framework extensibility
JP5848339B2 (en) Leader arbitration for provisioning services
CN102332016A (en) Catalogue chance lock
CN101689166A (en) Use has the server process write request of global knowledge
US20180152434A1 (en) Virtual content repository
WO2001033361A1 (en) Internet-based shared file service with native pc client access and semantics
US20220335106A1 (en) Cloud-native content management system
US20110208761A1 (en) Coordinating content from multiple data sources
JP2008046860A (en) File management system and file management method
US9794351B2 (en) Distributed management with embedded agents in enterprise apps
KR20160025282A (en) System and method for providing client terminal to user customized synchronization service
GB2560010A (en) Data collaboration
CN105516343A (en) Network dynamic self-organized file-sharing system and method for implementing same
US20200250146A1 (en) Data storage methods and systems
JP6216673B2 (en) Data management method and data management system