WO2011033174A1 - Method and a storage server for data redundancy - Google Patents
- Publication number: WO2011033174A1
- Application: PCT/FI2010/050710
- Authority
- WO
- WIPO (PCT)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1095—Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1446—Point-in-time backing up or restoration of persistent data
- G06F11/1448—Management of the data involved in backup or backup restore
- G06F11/1453—Management of the data involved in backup or backup restore using de-duplication of the data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1008—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1446—Point-in-time backing up or restoration of persistent data
- G06F11/1456—Hardware arrangements for backup
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1446—Point-in-time backing up or restoration of persistent data
- G06F11/1458—Management of the backup or restore process
- G06F11/1464—Management of the backup or restore process for networked environments
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1446—Point-in-time backing up or restoration of persistent data
- G06F11/1458—Management of the backup or restore process
- G06F11/1469—Backup restoration techniques
Abstract
The present invention relates to a method and a storage server for backing up data (3.1) stored in a mass storage device (3). The system comprises the mass storage devices (3, 5) of the users of the service, which connect to the storage server (2) of the service provider over a public data network (1), for example the Internet. The customers store the data (3.1) with their terminals (4). The data is stored in the file system (3.2) of the mass storage device (3). The mass storage device (3) encrypts (3.3) the stored data (3.1) and transfers the data in encrypted form over the public data network (1) to the storage server (2). The storage server (2) calculates error correction data (2.1) from the encrypted data sent by the mass storage devices (3, 5) of the users. The error correction data (2.3) is stored in the mass memory (2.2) of the storage server (2). When recovering the stored data (3.1) of a user, the storage server (2) requests over the public data network (1) the stored data (3.1) of all the users whose data was used for the error correction data calculation (2.1), to be used for the recovery calculation (2.4).
Description
Title of Invention: METHOD AND A STORAGE SERVER FOR DATA REDUNDANCY
[1] The present invention relates to a method for data redundancy, according to the preamble of claim 1.
[2] In prior art methods for data redundancy over a data network, the original stored data is copied in full, which makes the redundancy expensive: a redundancy service must have the same amount of storage capacity as the combined capacity of all its users. Examples of this type of service are Decho Mozy, Carbonite, Norton On-line Backup and F-Secure Online Backup. The services are marketed as unlimited, but the backup capacity is limited indirectly by constraining the backed-up file types, by limiting the transfer speed or by limiting the data sources that can be backed up, so that the cost is kept as low as possible. The services targeted at business users typically do not have these indirect limitations, but the users pay for the capacity they use for the backup. The services available today are implemented as client applications that send the stored information to the data center of the service provider, where the information is stored in the mass storage of the service provider.
[3] The purpose of the invention is to create a new, more efficient system for data redundancy that requires only a fraction of the storage capacity and storage resources of the prior systems.
[4] The purpose can be achieved by the method of the present invention using error correction algorithms for calculating error correction data from several separate stored data sets and storing the error correction data instead of a full copy of the stored data. Possible error correction algorithms to be used in this method are, for example, XOR (exclusive or) or Reed-Solomon coding. The stored data can be recovered without the full copy if the error correction data and the other stored data used for calculating the error correction data are available. The system can additionally use error correction algorithms that allow recovery of the stored information even if another data set used for the error correction data calculation is unavailable in addition to the data set being recovered.
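The XOR scheme named as an example in paragraph [4] can be sketched in a few lines of Python. The sketch below is illustrative only and not part of the patent text; all function names are hypothetical. A single XOR parity is stored over several equal-length data sets, and one missing set is rebuilt from the parity and the surviving sets:

```python
def xor_bytes(*blocks):
    # XOR equal-length byte strings together, bit by bit.
    acc = bytes(len(blocks[0]))
    for b in blocks:
        acc = bytes(x ^ y for x, y in zip(acc, b))
    return acc

def parity(data_sets):
    # Error correction data: the XOR of all users' data sets,
    # stored instead of full copies.
    return xor_bytes(*data_sets)

def recover(parity_data, surviving_sets):
    # Rebuild the one missing data set from the parity and the others.
    return xor_bytes(parity_data, *surviving_sets)
```

Plain XOR parity tolerates exactly one unavailable data set per group; tolerating more, as paragraph [4] notes, requires a stronger code such as Reed-Solomon.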
[5] The bottleneck in applying the invention over a public network is the network data transfer speed, as the recovery of the stored data requires all the data sets used for the error correction data calculation other than the recovered data itself. In a situation where the stored data is behind slow network connections, the recovery time can be shortened by parallelising the recovery process. The stored data can be split into parts, and the error correction data for each part is calculated with data available through a different network connection. The data required for the recovery can then be transferred in parallel, the number of groups used for the error correction calculation equalling the number of parts.
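The parallelised recovery described above, where the data is split into parts whose parity groups sit behind different network connections, might be sketched as follows. This is illustrative Python; the `fetch` callback and the group layout are assumptions, not from the source:

```python
from concurrent.futures import ThreadPoolExecutor

def xor_all(blocks):
    # XOR equal-length byte strings together.
    acc = bytes(len(blocks[0]))
    for b in blocks:
        acc = bytes(x ^ y for x, y in zip(acc, b))
    return acc

def recover_parallel(part_groups, fetch):
    # part_groups: one (parity, peer_ids) pair per part of the split data.
    # fetch(peer_id) -> bytes downloads one peer's data set for that part.
    # Because each part uses a different peer group, transfers run in
    # parallel instead of serially through one slow connection.
    def recover_part(parity_data, peer_ids):
        return xor_all([parity_data] + [fetch(pid) for pid in peer_ids])
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(recover_part, par, peers)
                   for par, peers in part_groups]
        return b"".join(f.result() for f in futures)
```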
[6] The data can be recovered to a new mass memory storage on the premises of the service provider, thus avoiding the transfer of the recovered data over the data network. The mass memory storage can then be sent physically to the customer.
[7] Encrypted data can be used for the error correction data calculation. If the present invention is applied as a service, encrypting the data ensures that the service provider has no access to the user's data. The error correction data is calculated in the same way as for unencrypted data, and the result of the recovery is the same encrypted data that was used for the error correction calculation.
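Because XOR parity operates on raw bytes, it commutes with encryption: parity computed over ciphertexts recovers the missing ciphertext, which only its owner can decrypt. A toy demonstration follows; the XOR stream cipher below is a deliberately simplified stand-in for whatever real encryption the user's device applies, and every name is hypothetical:

```python
import hashlib

def keystream(key, n):
    # Derive n pseudo-random bytes from a key (toy construction).
    out = b""
    block = hashlib.sha256(key).digest()
    while len(out) < n:
        out += block
        block = hashlib.sha256(block).digest()
    return out[:n]

def toy_cipher(data, key):
    # XOR stream cipher: encrypting and decrypting are the same operation.
    return bytes(d ^ s for d, s in zip(data, keystream(key, len(data))))

def xor_all(blocks):
    # XOR equal-length byte strings together (the parity operation).
    acc = bytes(len(blocks[0]))
    for b in blocks:
        acc = bytes(x ^ y for x, y in zip(acc, b))
    return acc
```

The service only ever sees ciphertexts and their parity; decryption happens on the user's side after recovery.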
[8] If the stored data is changed, the error correction data can be updated by transferring only the changed data and calculating the new error correction data from the old error correction data and the changed data. If the data for which the redundancy is provided is stored in a file system that supports taking snapshots of the data at a specific moment in time, a snapshot can be used to save the state of the storage when the data is transferred to the service for redundancy; the changes in the stored data can later be identified by comparing the snapshot of the last transfer to the current state. If the data is stored in a mass memory device that has components causing noise or consuming a lot of energy, the changed data can be copied to silicon-based memory to wait for the transfer to the redundancy service, so that the mass memory component can be stopped for the duration of the transfer.
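For XOR parity, the incremental update described above reduces to a single identity: the new parity is the old parity XORed with both the old and the new versions of the changed block, so the other users' data never needs to be re-read. A minimal sketch (names are illustrative):

```python
def update_parity(old_parity, old_block, new_block):
    # new_parity = old_parity XOR old_block XOR new_block.
    # The old contribution cancels out and the new one replaces it,
    # using only the changed block and the stored parity.
    return bytes(p ^ o ^ n
                 for p, o, n in zip(old_parity, old_block, new_block))
```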
[9] The members of the data groups used for the error correction data calculation can be selected based on various criteria if the present invention is applied as a service over a public network. The users of the service and the associated risk can be classified based on the probability of the user's data being unavailable or on how old the mass storage device is. The risk analysis aims to ensure that the composite risk level of each error correction calculation group is low enough that not so many data sets become unavailable that the recovery calculation cannot succeed. The risk analysis for new users of the service can be done by initially creating full copies of the user's stored data in the mass storage of the service provider and logging the availability of the user's stored data before the user's data is included in the error correction based redundancy scheme.
[10] The geographical location of the stored data or the service user's data connection speed can be used as a factor in selecting the group of data sets for the error correction calculation. Geographical distribution can be used to lower the risk of a natural disaster causing too many data sets to become unavailable simultaneously. Network speed can be used to form groups where only users with high-speed connections belong to the same group, allowing faster recovery times.
[11] The size of the groups used for the error correction data calculation can also be determined based on user risk analysis, mass storage device age, the geographical location of the stored information, users' network connection speeds, the type of service or the user's selected service level.
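One simple way to realise the speed-based grouping described above is to rank users by connection speed and chunk the ranking, so that fast users share groups and those groups can be recovered sooner. A sketch; the `(user_id, mbps)` schema and the function name are assumptions for illustration:

```python
def groups_by_speed(users, group_size):
    # users: list of (user_id, connection_mbps) pairs.
    # Sort by speed descending and chunk, so each group contains users
    # with similar connection speeds.
    ranked = sorted(users, key=lambda u: u[1], reverse=True)
    return [ranked[i:i + group_size]
            for i in range(0, len(ranked), group_size)]
```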
[12] If the user's stored data is not available within a specified time, the recovery can be started automatically to a new mass storage utilising the error correction data and the stored data that is available from the other users. The user can be removed from the group used for the error correction data calculation if the user's stored data is unavailable for longer than a specified time. The time can be specified in the service contract. If the user's stored data is removed from the error correction calculation, the stored data can be removed from the data center to release the capacity for other use.
[13] In creating the redundancy for the stored data, the data identification information and the content of the data can be secured separately, and the identification information can be copied directly even if the content is part of the error correction based redundancy scheme. This enables the creation of a recovery mechanism where data can be prioritised for recovery based on the identification data. For example, the user can select to recover office documents before music files.
[14] If the present invention is implemented as a service, the error correction data can be stored in the mass storage of the service provider; alternatively, a certain portion of the mass storage device of each user can be allocated for storing error correction data that is not based on the same user's stored data.
[15] The integrity of the stored data can be verified by calculating checksums of the data sets that are used for the calculation of the error correction data; the checksums are stored by the service. When receiving changed data from the user's mass storage, the service also receives the checksum of the original data that the new data replaces. This ensures the integrity of the data. The checksums are calculated with algorithms such as CRC (Cyclic Redundancy Check), MD5 (Message Digest) or SHA (Secure Hash Algorithm).
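The checksum handshake described above can be sketched as follows. Python's `hashlib` and `zlib` cover the algorithm families named in the text (MD5, SHA, CRC); the dictionary-based store and the function names are illustrative assumptions:

```python
import hashlib
import zlib

def checksum(data, algo="sha256"):
    # CRC via zlib; MD5/SHA family via hashlib.
    if algo == "crc32":
        return format(zlib.crc32(data), "08x")
    return hashlib.new(algo, data).hexdigest()

def accept_change(stored_checksums, user_id, old_data, new_data):
    # Accept changed data only if the 'old' data the client sends matches
    # the checksum the service stored for it, then record the new checksum.
    if checksum(old_data) != stored_checksums[user_id]:
        raise ValueError("old data does not match the stored checksum")
    stored_checksums[user_id] = checksum(new_data)
```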
[16] In particular, the method according to the present invention is mainly characterised by what is stated in the characterisation part of claim 1. The storage server according to the present invention is mainly characterised by what is stated in the characterisation part of claim 18.
[17] In the following, a way of carrying out the claimed invention is examined more closely using the attached drawing, which illustrates a system utilising the method of the present invention implemented as a service. The example is non-exclusive. The system comprises the service customers' mass storage devices 3, 5, which connect to the storage server 2 of the service provider over a public data network 1, for example the Internet. Customers store their information 3.1 in the mass storage using their terminals 4. The information is stored in the file system 3.2 of a mass storage device 3. The mass storage device 3 encrypts 3.3 the stored information 3.1 and sends the information in encrypted form over the public data network 1 to the storage server 2. The storage server 2 calculates the error correction data 2.1 using the encrypted information sent by the users' mass storage devices 3, 5. The error correction data 2.3 is stored in the mass storage 2.2 of the storage server 2. When recovering the stored information 3.1 of a user of the service, the storage server 2 requests over the public data network 1 the stored information 3.1 of all the users whose data was part of the error correction data calculation 2.1, for use in the recovery calculation 2.4.
[18] The error correction calculation 2.1 can use, for example, the XOR operation, where each bit of the data is combined into error correction data 2.3 together with the corresponding bits of a specified number of other customers' data. In the case of the XOR operation, the error correction data 2.3 indicates whether the sum of the selected user data bits is odd or even. In a simple case where the error correction data 2.3 is calculated from two users' data, the value of the error correction bit is 0 if both users' data bits are equal and 1 if they are not. If the data of one of the users is lost, it can be recovered by calculating the sum of the user data that is still available and the corresponding error correction data bit: when the sum is even, the recovered data bit is 0, and when the sum is odd, the recovered data bit is 1. The method can utilise error correction algorithms that survive the unavailability of more than one user data set. The size of the group used for the error correction calculation is selected based on the required probability of successful recovery. A failure probability can be approximated for each mass storage device, and using these probabilities a combined probability can be calculated for each group of mass storage devices of simultaneously having more devices, and thus more stored information, unavailable than can be recovered using the error correction data 2.3.
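The group-size selection at the end of paragraph [18] amounts to a binomial tail probability: given an approximate per-device unavailability probability and a code that tolerates a known number of missing data sets, recovery fails when more devices than that are unavailable at once. A sketch assuming independent failures (the function name and independence assumption are illustrative, not from the source):

```python
from math import comb

def p_recovery_fails(n, tolerated, p_unavail):
    # Probability that MORE than `tolerated` of the n devices in a group
    # are unavailable simultaneously, i.e. the code's tolerance is
    # exceeded and the recovery calculation cannot succeed.
    return sum(
        comb(n, i) * p_unavail**i * (1 - p_unavail)**(n - i)
        for i in range(tolerated + 1, n + 1)
    )
```

Sweeping `n` and `tolerated` against a target failure probability gives the group size the text describes selecting.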
[19] The error correction data 2.3 can be calculated based on the blocks of the mass storage device 3, the blocks of the file system 3.2 or the blocks of the files of the file system 3.2. The stored information of a user of the service can be split into several groups used for the error correction data calculation 2.1. The recovery calculation 2.4 requires the data sets used for the error correction data calculation 2.1, excluding the data set being recovered. The data connections of the users of the service to the public data network 1 may be slow, and thus transferring the data required for the recovery calculation 2.4 from the customers to the storage server 2 can take a long time. The data transfer can be parallelised by attaching the user to multiple groups, while ensuring that the number of groups in which the same two users belong together is minimised.
[20] Some file systems include a feature where the state of the file system 3.2 can be saved at a specified moment in time. This feature can be utilised in the system of the present invention by storing the state of the file system 3.2 when the changed information is sent to the storage server 2 for redundancy. Further changed information can then be identified by comparing the saved state to the current state. The changed information can also be saved temporarily to silicon-based memory 3.4, so that a mass storage device that is producing noise or consuming a lot of energy can be stopped for the duration of the transfer of the changed information to the storage server 2 over the public data network 1.
[21] Risk analysis based on the age of the mass storage device 3, the geographical location of the stored information 3.1 and the speed of the user's network connection can be used to select the members of the groups used for the error correction data 2.3 calculation. The probability of the availability of the user's stored information 3.1 can be approximated from the storage server 2 perspective if new users are attached to the service initially so that the stored information is copied to the storage server 2, and the user's mass storage device 3 is added to the error correction calculation 2.1 only once enough statistics about the availability of the stored data 3.1 have been gathered.
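The onboarding policy in paragraph [21], holding a full copy while probing the new user's availability and admitting the user to the error correction calculation only once enough statistics exist, can be sketched as a simple admission check. The thresholds and the boolean-probe representation are illustrative assumptions:

```python
def ready_for_parity_scheme(availability_log,
                            min_samples=30, min_uptime=0.95):
    # availability_log: booleans from periodic reachability probes taken
    # while the new user's data is still held as a full copy. The user is
    # admitted to the error correction scheme only once enough samples
    # exist and the observed availability clears the bar.
    if len(availability_log) < min_samples:
        return False
    uptime = sum(availability_log) / len(availability_log)
    return uptime >= min_uptime
```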
[22] If the user's mass storage device is unreachable for a specified amount of time, the service can start the recovery calculation 2.4 automatically for the user's stored information. The information is recovered to a new mass storage device. The user's stored information can also be removed from the error correction calculation if it is found improbable that the user's mass storage device will become reachable again.
[23] The meta-data of the user's stored information 3.1 can be stored as is on the storage server, so that the file system 3.2 structure and the file attributes are available without the recovery calculation 2.4. This meta-data can, for example, be utilised to offer the user the possibility to select the recovery order of the recovery calculation 2.4, as the recovery of the stored information 3.1 of large mass storage devices 3 over a public data network 1 may be slow, and service users may have an urgent need for a small portion of all the stored data.
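Since the meta-data is stored directly, the prioritised recovery order can be computed without any recovery calculation. A sketch, where the metadata schema and the priority map are illustrative assumptions:

```python
def recovery_order(files, priority):
    # files: metadata dicts with a 'type' key, available without recovery.
    # Sort so higher-priority types (e.g. office documents before music)
    # are recovered first; types the user did not rank go last, keeping
    # their original order (sorted() is stable).
    return sorted(files, key=lambda f: priority.get(f["type"], len(priority)))
```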
[24] Alternatively, the method of the present invention can be implemented so that no error correction data 2.3 is stored in the storage servers 2 of the service provider; instead, a specified portion of each service user's mass storage device 3 is reserved for error correction data storage. A user's own stored data is not included in the error correction calculation whose result is stored on that user's device.
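The constraint in paragraph [24], that parity stored on a user's device must never be derived from that user's own data, can be expressed as a placement rule. A deliberately simple sketch (a real placement would also balance load across devices; all names are hypothetical):

```python
def place_parity(groups, users):
    # groups: {group_id: set of member user_ids}. Assign each group's
    # error correction data to a user outside that group, so no device
    # stores parity derived from its own data.
    placement = {}
    for gid, members in groups.items():
        placement[gid] = next(u for u in users if u not in members)
    return placement
```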
[25] The present invention is not limited to the embodiments or technologies described above, but can be modified within the scope of the attached claims.
Claims
[Claim 1] A method for data redundancy using a data network (1) to transfer the data to be backed up to a remote service (2), comprising the data to be backed up (3.1) and the stored information of other users of the service (5), which are used to calculate error correction data (2.3) that is used to recover the original stored information (3.1) by calculating the original backed-up data from the other users' data (5) and the error correction data (2.3).
[Claim 2] The method of claim 1, wherein the error correction data (2.3) is stored in the storage server (2) of the service provider.
[Claim 3] The method of claim 1, wherein the error correction data (2.3) is stored in the mass storage devices (5) of users that are not part of the group whose stored data was used to calculate the error correction data.
[Claim 4] The method of any of claims 1-3, wherein the data to be backed up (3.1) is encrypted (3.3) before transfer to the service, the error correction data (2.3) is calculated from the encrypted data, and the data is recovered in encrypted form.
[Claim 5] The method of any of claims 1-4, wherein an error correction algorithm that enables recovery of the stored information even if more than one data set used for the calculation is missing is used for the error correction data calculation (2.1).
[Claim 6] The method of any of claims 1-5, wherein the data to be backed up is split into pieces, each having its related error correction data (2.3) calculated with the stored data (5) of different users of the service.
[Claim 7] The method of claim 6, wherein the error correction data (2.3) for a specific portion of the data to be backed up (3.1) is calculated (2.1) with the stored data of users whose data is not used for calculating error correction data (2.3) for any other portion of the same user's data.
[Claim 8] The method of any of claims 1-7, wherein the error correction data (2.3) is calculated again when the stored data (3.1) changes, using a method where the new error correction data (2.3) can be calculated from the changed data and the prior error correction data.
[Claim 9] The method of claim 8, wherein the data is stored in a file system (3.2), the state of the file system (3.2) can be saved at any moment in time, the state can be recovered later, and the difference between the saved state and a later state can be used to identify the changes in the stored data.
[Claim 10] The method of any of claims 8-9, wherein the file system (3.2) is in a mass storage device (3) that has moving components that produce noise, and the changed data is copied to a silicon-based memory (3.4) in the same device, from where the changed data is transferred to the service (2).
[Claim 11] The method of any of claims 1-10, wherein the users whose data belongs to the same group used for calculating the error correction data (2.3) are selected based on the risk analysis of the user, the characteristics of the mass storage device (3), or the characteristics of the network connection of the user.
[Claim 12] The method of claim 11, wherein the user is attached to the service (2) so that the data to be backed up (3.1) is copied to the service (2) and the usage information of the user is used for creating the risk analysis.
[Claim 13] The method of any of claims 1-10, wherein the size of the group used for the error correction data calculation (2.1) is determined based on the risk analysis of the user, the characteristics of the mass storage device (3) used for storing the data, the characteristics of the data network connection of the user, the type of the stored data (3.1), or the user-selected service level.
[Claim 14] The method of any of claims 1-13, wherein, if the stored data (3.1) of the user is unavailable for a specified time period, the data recovery (2.4) is initiated automatically by the service (2).
[Claim 15] The method of claim 14, wherein, if the stored data (3.1) of the user is unavailable for a specified time period, the user's data is removed from the group used for calculating the error correction data (2.1).
[Claim 16] The method of any of claims 1-15, wherein the metadata of the stored data (3.1) is copied to the service (2) separately and the metadata is used to prioritise the recovery order of the recovered data.
[Claim 17] The method of any of claims 1-16, wherein a checksum is calculated from the stored data that is used for the error correction data calculation (2.1), and the checksum is used to verify that the changed data sent by the user's mass storage device (3) represents a change relative to the same data that was used in the previous error correction data calculation.
[Claim 18] A storage server for backing up data using a data network (1) to transfer the data to be backed up from the customers' mass storage devices (3, 5), comprising the data to be backed up (3.1) and the stored information of other users of the service (5), which are used to calculate error correction data (2.3) that is used to recover the original stored information (3.1) by calculating the original backed-up data from the other users' data (5) and the error correction data (2.3).
[Claim 19] The storage server of claim 18, wherein the error correction data (2.3) is stored in the storage server (2) of the service provider.
[Claim 20] The storage server of any of claims 18-19, wherein the data to be backed up is split into pieces, each having its related error correction data (2.3) calculated with the stored data (5) of different users of the service.
[Claim 21] The storage server of claim 20, wherein the error correction data (2.3) for a specific portion of the data to be backed up (3.1) is calculated (2.1) with the stored data of users whose data is not used for calculating error correction data (2.3) for any other portion of the same user's data.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP10816753A EP2478438A4 (en) | 2009-09-16 | 2010-09-15 | Method and a storage server for data redundancy |
US13/496,674 US8954793B2 (en) | 2009-09-16 | 2010-09-15 | Method and a storage server for data redundancy |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FI20095950A FI126228B (en) | 2009-09-16 | 2009-09-16 | A method and a data storage server for data redundancy |
FI20095950 | 2009-09-16 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011033174A1 true WO2011033174A1 (en) | 2011-03-24 |
Family
ID=41136412
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/FI2010/050710 WO2011033174A1 (en) | 2009-09-16 | 2010-09-15 | Method and a storage server for data redundancy |
Country Status (4)
Country | Link |
---|---|
US (1) | US8954793B2 (en) |
EP (1) | EP2478438A4 (en) |
FI (1) | FI126228B (en) |
WO (1) | WO2011033174A1 (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9483657B2 (en) * | 2013-01-14 | 2016-11-01 | Accenture Global Services Limited | Secure online distributed data storage services |
US9442803B2 (en) * | 2014-06-24 | 2016-09-13 | International Business Machines Corporation | Method and system of distributed backup for computer devices in a network |
US9697083B2 (en) | 2014-11-21 | 2017-07-04 | International Business Machines Corporation | Using geographical location information to provision multiple target storages for a source device |
US10353595B2 (en) | 2014-11-21 | 2019-07-16 | International Business Machines Corporation | Using geographical location information and at least one distance requirement to determine a target storage to provision to backup data for a source device |
US9965351B2 (en) * | 2015-01-27 | 2018-05-08 | Quantum Corporation | Power savings in cold storage |
CN107547589B (en) * | 2016-06-27 | 2020-08-14 | 腾讯科技(深圳)有限公司 | Data acquisition processing method and device |
CN110278222B (en) * | 2018-03-15 | 2021-09-14 | 华为技术有限公司 | Method, system and related device for data management in distributed file storage system |
KR102557993B1 (en) * | 2018-10-02 | 2023-07-20 | 삼성전자주식회사 | System on Chip and Memory system including security processor and Operating method of System on Chip |
JP7447495B2 (en) * | 2020-01-08 | 2024-03-12 | 富士フイルムビジネスイノベーション株式会社 | Information processing device and program |
US11989110B2 (en) * | 2021-09-24 | 2024-05-21 | Dell Products L.P. | Guidance system for computer repair |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040049700A1 (en) * | 2002-09-11 | 2004-03-11 | Fuji Xerox Co., Ltd. | Distributive storage controller and method |
US7529834B1 (en) * | 2000-06-02 | 2009-05-05 | Hewlett-Packard Development Company, L.P. | Method and system for cooperatively backing up data on computers in a network |
US20090187609A1 (en) * | 2008-01-18 | 2009-07-23 | James Barton | Distributed backup and retrieval system |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6990667B2 (en) * | 2001-01-29 | 2006-01-24 | Adaptec, Inc. | Server-independent object positioning for load balancing drives and servers |
US6883110B1 (en) * | 2001-06-18 | 2005-04-19 | Gateway, Inc. | System and method for providing a data backup of a server on client systems in a network |
US7685126B2 (en) * | 2001-08-03 | 2010-03-23 | Isilon Systems, Inc. | System and methods for providing a distributed file system utilizing metadata to track information about data stored throughout the system |
US6904547B2 (en) * | 2002-01-04 | 2005-06-07 | Sun Microsystems, Inc | Method and apparatus for facilitating validation of data retrieved from disk |
US7127577B2 (en) * | 2003-01-21 | 2006-10-24 | Equallogic Inc. | Distributed snapshot process |
JP4354233B2 (en) * | 2003-09-05 | 2009-10-28 | 株式会社日立製作所 | Backup system and method |
CA2452251C (en) * | 2003-12-04 | 2010-02-09 | Timothy R. Jewell | Data backup system and method |
US7761678B1 (en) * | 2004-09-29 | 2010-07-20 | Verisign, Inc. | Method and apparatus for an improved file repository |
EP1933536A3 (en) * | 2006-11-22 | 2009-05-13 | Quantum Corporation | Clustered storage network |
JP2010512584A (en) * | 2006-12-06 | 2010-04-22 | フュージョン マルチシステムズ,インク.(ディービイエイ フュージョン−アイオー) | Apparatus, system and method for managing data from a requesting device having an empty data token command |
WO2008118168A1 (en) * | 2007-03-23 | 2008-10-02 | Andrew Winkler | Method and system for backup storage of electronic data |
-
2009
- 2009-09-16 FI FI20095950A patent/FI126228B/en not_active IP Right Cessation
-
2010
- 2010-09-15 WO PCT/FI2010/050710 patent/WO2011033174A1/en active Application Filing
- 2010-09-15 US US13/496,674 patent/US8954793B2/en active Active
- 2010-09-15 EP EP10816753A patent/EP2478438A4/en not_active Withdrawn
Non-Patent Citations (2)
Title |
---|
MORCOS F. ET AL: "iDIBS: An improved distributed backup system", PROCEEDINGS - 12TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS, ICPADS 2006 [ONLINE], 2006, XP010928335, Retrieved from the Internet <URL:http://www.cse.nd.edu/~dthain/papers/idibs-icpads06.pdf> [retrieved on 20101228] * |
See also references of EP2478438A4 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016041928A1 (en) * | 2014-09-17 | 2016-03-24 | Bundesdruckerei Gmbh | Distributed data storage by means of authorisation token |
US10534920B2 (en) | 2014-09-17 | 2020-01-14 | Bundesdruckerei Gmbh | Distributed data storage by means of authorisation token |
US11475137B2 (en) | 2014-09-17 | 2022-10-18 | Bundesdruckerei Gmbh | Distributed data storage by means of authorisation token |
Also Published As
Publication number | Publication date |
---|---|
FI20095950A (en) | 2011-03-17 |
US8954793B2 (en) | 2015-02-10 |
US20120173925A1 (en) | 2012-07-05 |
EP2478438A1 (en) | 2012-07-25 |
FI126228B (en) | 2016-08-31 |
EP2478438A4 (en) | 2013-03-13 |
FI20095950A0 (en) | 2009-09-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8954793B2 (en) | Method and a storage server for data redundancy | |
EP2394220B1 (en) | Distributed storage of recoverable data | |
US8296263B2 (en) | Systems and methods for data upload and download | |
CN106294585B (en) | A kind of storage method under cloud computing platform | |
JP4846156B2 (en) | Hash file system and method for use in a commonality factoring system | |
US8554735B1 (en) | Systems and methods for data upload and download | |
CN106156359B (en) | A kind of data synchronization updating method under cloud computing platform | |
CN102902600B (en) | Efficient application-aware disaster recovery | |
EP2883132B1 (en) | Archival data identification | |
US7203871B2 (en) | Arrangement in a network node for secure storage and retrieval of encoded data distributed among multiple network nodes | |
AU757667B2 (en) | Access to content addressable data over a network | |
US8370307B2 (en) | Cloud data backup storage manager | |
US7529785B1 (en) | Efficient backups using dynamically shared storage pools in peer-to-peer networks | |
WO2015175411A1 (en) | Distributed secure data storage and transmission of streaming media content | |
TW200523753A (en) | Apparatus, system, and method for grid based data storage | |
US10558581B1 (en) | Systems and techniques for data recovery in a keymapless data storage system | |
US8315986B1 (en) | Restore optimization | |
US20020078461A1 (en) | Incasting for downloading files on distributed networks | |
CN105320577A (en) | Data backup and recovery method, system and device | |
JP6671708B2 (en) | Backup restore system and backup restore method | |
CN112416878A (en) | File synchronization management method based on cloud platform | |
JP2002318716A (en) | System and method for delivery, server computer and client computer | |
CN113626251A (en) | Method, apparatus and computer program product for migrating backup systems | |
Douglis | A case for end-to-end deduplication | |
WO2017039538A1 (en) | Systems and methods for unified storage services |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10816753 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2010816753 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13496674 Country of ref document: US |