EP3360034A1 - Method and system for dynamic distributed backup - Google Patents

Method and system for dynamic distributed backup

Info

Publication number
EP3360034A1
Authority
EP
European Patent Office
Prior art keywords
data
server
storage
blocks
servers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP16778810.8A
Other languages
English (en)
French (fr)
Inventor
Francis Pinault
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ROBERTO GIORI CO Ltd
Original Assignee
ROBERTO GIORI CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ROBERTO GIORI CO Ltd filed Critical ROBERTO GIORI CO Ltd
Publication of EP3360034A1 publication Critical patent/EP3360034A1/de
Withdrawn legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0646Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/065Replication mechanisms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1446Point-in-time backing up or restoration of persistent data
    • G06F11/1458Management of the backup or restore process
    • G06F11/1464Management of the backup or restore process for networked environments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/062Securing storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629Configuration or reconfiguration of storage systems
    • G06F3/0631Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/064Management of blocks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Definitions

  • The present invention relates to the computer field, and more particularly to a system and method for storing data associated with a user in a computer network comprising a plurality of storage servers. In other words, it concerns the distributed storage or backup of data over a network.
  • An object of the present invention is thus to improve the security of personal data while it is stored in distributed fashion over a plurality of servers.
  • a first aspect of the present invention relates to a method of storing data associated with a user in a computer network comprising a plurality of storage servers, the method comprising the following steps:
  • the determination of the respective server is a function of a current time instant.
  • the determination of the respective server as a function of a current time instant can be carried out for each block of data, so that the storage server used to store each respective block of data varies periodically in time.
  • the invention relates, according to a second aspect, to a system (which can be integrated into a simple user terminal) for storing data associated with a user in a computer network comprising a plurality of storage servers, the system comprising at least one microprocessor configured to execute, in a system execution environment, the following steps:
  • the determination of the respective server is a function of a current time instant.
  • the method or system according to the invention thus makes it possible to increase the security of personal data, for example confidential data, encrypted or not, or personal programs.
  • The storage server used for storing a particular data block may vary in time, i.e., it is determined according to one or more dynamic distribution laws. The task of locating and retrieving the data blocks is thus made considerably more complex for a malicious person.
  • a new respective storage server is determined, at each new time instant, for each data block, so as to store the data block at a new storage server at each new time instant.
  • This provision specifies the time dependence of the determination of the server to use.
  • the method further comprises the following steps in response to a request for access to the data associated with the user:
  • This arrangement makes it possible to handle the storage discontinuity of the data blocks during a change of time instant. Indeed, depending on how quickly a request for access to the data received near this change of instant is processed, the data blocks may already have been moved from one server to another, according to the new distributed storage scheme applicable at time T + 1.
  • the data is reconstituted using the scheme applicable at time T, and if this data is erroneous (lack of coherence, error on an identification criterion such as a user identity in the reconstituted data, etc.), a reconstruction is performed using the scheme applicable at time T + 1.
  • the determination of the respective server is further dependent on a private binary key associated with the user. It can be any cryptographic key associated with the user, which is used in its binary form.
  • This arrangement makes it possible to encrypt the distribution scheme of the storage servers according to each user, and thus makes it more difficult for an attacker to carry out the operations to be implemented to identify the storage location of each of the data blocks.
  • the step of determining the storage servers includes a step of applying the binary key as a mask to a first server distribution table to identify the storage servers to be used for a portion of the respective data blocks, the first server distribution table associating a server with each block of data.
  • the step of determining the storage servers further comprises a step of applying a complementary binary key as a mask to a second server distribution table to identify the storage servers to be used for the other respective data blocks.
  • said second server distribution table can associate a server with each data block and can be formed from the same elementary table as the first server distribution table.
  • Distribution tables are generated by repeating (and concatenating) the elementary table, the second distribution table being the continuation of the first distribution table with regard to the repetition of the elementary table.
  • the mask formed by the binary key is shifted relative to the first or second server distribution table by a number of positions depending on the current time instant, before being applied to the first or second server distribution table.
  • the current time instant is thus used as disturbing reference in the application of the mask (user's binary key), increasing the security of the distributed storage of the personal data.
  • the mask is formed of a (possibly partial) repetition of the binary key so as to reach the size of the first or second server distribution table, that is to say, the number of data blocks to store.
  • the method further comprises a step of determining an elementary server distribution table, by duplication of which the server distribution table or tables are obtained.
  • the step of determining the elementary table is a function of a performance index associated with each storage server and a confidence index associated with the geographical location of each storage server.
  • a strategy for prioritizing the use of certain servers can be implemented, for example to favor efficient servers and / or located in areas with low geographical risk (eg seismic risk or geopolitical risk).
  • the length of the elementary table is a function of the sum of weights associated with the storage servers, the weight associated with a storage server being determined from the performance and trust indices of the storage server considered.
  • the step of determining the elementary table comprises the following steps:
  • the elementary table thus makes it possible to obtain a complex and interlaced distribution of the servers, in proportions equal to their respective weights, that is to say to their confidence and performance indices. Also, such an elementary table is complex to recreate for a malicious person, while ensuring equity between the servers given their characteristics.
  • the step of dividing the data comprises the following steps:
  • FIG. 1 illustrates an example of a hardware architecture in which the present invention can be implemented, notably in the form of computer programs;
  • FIG. 2 illustrates an example of a computer network comprising a plurality of storage servers in which the invention can be implemented
  • FIG. 3 illustrates, using a flow chart, the general steps of a method of distributed backup of a piece of data according to embodiments of the invention
  • FIG. 4 illustrates, in the form of a flow chart, steps for the determination of storage servers of the method of FIG. 3;
  • FIG. 5 illustrates an example of implementation of the steps of FIG. 4
  • FIG. 6 illustrates, in the form of a flow chart, steps for determining an elementary table of the method of FIG. 3;
  • FIG. 7 illustrates an example of implementation of the steps of FIG. 6.
  • FIG. 8 illustrates, in the form of a flow chart, an example of general steps of a method of accessing a data item saved according to the method of FIG. 3.
  • FIG. 1 illustrates an example of a hardware architecture in which the present invention may be implemented, notably in the form of computer programs.
  • This hardware architecture can be part of a user device or terminal, such as a desktop or laptop computer, a mobile terminal, a mobile tablet, or a server offering distributed data backup and data access services.
  • the hardware architecture 10 comprises in particular a communication bus 100 to which are connected:
  • CPU Central Processing Unit
  • non-volatile memory 120, for example ROM (Read Only Memory), EEPROM (Electrically Erasable Programmable Read Only Memory) or Flash, for storing computer programs for the implementation of the invention and any parameters used by it;
  • a random access memory 130 or cache memory or volatile memory for example RAM (for Random Access Memory), configured to store the executable code of methods according to embodiments of the invention, and to store registers adapted to memorize, at least temporarily, variables and parameters necessary for the implementation of the invention according to embodiments;
  • an input/output (I/O) interface 140, connected for example to a screen, a keyboard, a mouse or other pointing device such as a touch screen, or a remote control, enabling a user to interact with the system via a graphical interface;
  • a communication interface COM 150 adapted to exchange data for example with storage servers via a computer or communication network.
  • the instruction codes of the program stored in non-volatile memory 120 are loaded into RAM 130 for execution by the processing unit CPU 110.
  • the non-volatile memory 120 also stores confidential information of the user, for example a private key in binary form.
  • for example within a Secure Element (SE) or a Hardware Security Module (HSM).
  • The present invention concerns the distributed backup (or storage) of data on storage servers of a communication network, typically a wide-area computer network such as the Internet.
  • FIG. 2 illustrates an example of a computer network 20 comprising a plurality of M storage servers S x .
  • the servers are synchronized on the same reference clock.
  • A user terminal 21, presenting the hardware architecture of FIG. 1, enables a user to request the backup of personal data, sometimes confidential, encrypted or not, and to access this personal data once it has been stored in distributed fashion in the network 20.
  • the user terminal 21 can implement the present invention to manage the distributed storage of such personal data and its subsequent access.
  • the user terminal 21 can access a distributed data backup service and subsequent access to this data, proposed by a server S of the network 20.
  • all the parameters (indices of confidence and performance, user keys, etc.) discussed subsequently can be stored on such a server S, and be retrieved, if necessary, by the user terminals.
  • the general principles of distributed data backup include dividing the data to obtain a plurality of data blocks; determining, for each data block, a respective one of the plurality of storage servers; and storing each block of data at the respective storage server.
  • the present invention provides for increasing the protection, and therefore the security, of the data thus stored according to these solutions, by making the determination of each respective server depend on a current time instant, that is to say, on time.
  • FIG. 3 illustrates, using a flow chart, general steps of an exemplary method according to embodiments of the invention. These steps are implemented in a system according to the invention, which may be the user terminal 21 or the server S of FIG. 2.
  • In step 30, a request to store a personal data item DATA is received from the user (via the user terminal 21 if necessary).
  • This data is personal in that it is attached to a user or group of users. It may consist of a plurality of elementary data, for example confidential. Typically, the personal data is encrypted.
  • the personal data DATA forms a file of size LENGTH.
  • In step 31, the data DATA is divided to obtain a plurality of data blocks.
  • This step is broken down into three sub-steps: dividing the data DATA into elementary data blocks DD_i; duplicating the elementary blocks into duplicated blocks DV_i to provide a sufficient level of redundancy; and interleaving the duplicated blocks to improve the reliability of the storage mechanism.
  • the data DATA can be divided into a plurality of blocks of the same size Lvar, this block size being variable in that it can depend on one or more parameters, for example chosen from the following parameters: the size LENGTH of the DATA data, the user, the operator of the distributed backup and data access service, etc.
  • The number of blocks obtained is then: Nb = ⌈LENGTH / Lvar⌉.
  • Using variable block lengths further improves the security of the data to be backed up.
  • The variability of the block size as a function of the size LENGTH of the data DATA can follow one of the following formulas, where Nbmax is a predefined parameter and Lmin is a predefined minimum integer block size:
  • a formula based on the minimum block size Lmin, under which the number Nb of data blocks obtained tends to Nbmax as the data DATA grows;
  • Nb = min(⌈√LENGTH⌉, Nbmax), which therefore also tends to Nbmax as the data DATA grows.
  • The variability of the block size according to the user may consist of using a unique identifier ID of the user (for example a social security number, a passport number or an identity card number, etc.), normalized into a predefined interval [0; Nbmax], to calculate this number:
  • Nb = ID − Nbmax·⌊ID / Nbmax⌋ (i.e. ID modulo Nbmax), where ⌊·⌋ is the function returning the integer part (floor).
  • an integer of the interval [0; Nbmax] can be randomly assigned to each user and used to define the value Nb.
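The block-count rules above can be sketched as follows. Since the original expressions are partly garbled in translation, the exact forms (in particular the Lmin-based variant and the default parameter values) are assumptions:

```python
import math

# Hedged reconstruction of the block-count rules; exact formulas assumed.
def nb_by_min_size(length: int, lmin: int = 16, nbmax: int = 64) -> int:
    # Assumed form: Nb = min(ceil(LENGTH / Lmin), Nbmax), tending to Nbmax
    # as the data DATA grows
    return min(math.ceil(length / lmin), nbmax)

def nb_by_sqrt(length: int, nbmax: int = 64) -> int:
    # Nb = min(ceil(sqrt(LENGTH)), Nbmax), also tending to Nbmax
    return min(math.ceil(math.sqrt(length)), nbmax)

def nb_by_user_id(user_id: int, nbmax: int = 64) -> int:
    # Nb = ID - Nbmax * floor(ID / Nbmax), i.e. ID modulo Nbmax
    return user_id - nbmax * (user_id // nbmax)
```

All three variants bound the block count by Nbmax, matching the text's requirement that Nb stays within a predefined interval.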
  • the variability of the size of the block according to an operator may consist in providing different levels (value Nb) of dividing the data according to subscription options or performance.
  • Dividing the data into a large number of blocks is safer, but requires more computation, as described later. Such a level of division can therefore be reserved for premium subscribers.
  • these blocks of data can be duplicated to provide data redundancy, making the reconstitution of DATA data more reliable.
  • the redundancy law can be fixed, defining the number of duplications RD by a fixed number, for example 3.
  • the applied redundancy law may be variable in that it may depend on one or more parameters, for example on the confidence indices CS_i assigned to the M servers S_i.
  • With a whole number RD of duplications, Nb′ = RD·Nb duplicated blocks D′_i are obtained from the Nb elementary blocks DD_i.
  • The Nb′ duplicated blocks DV_i can be interleaved to improve the reliability of the backup system with regard to errors occurring in the processing of these blocks.
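The three sub-steps of step 31 (division, duplication, interleaving) can be sketched as follows; the round-robin interleaving order is an assumption, since the text does not fix one:

```python
def divide_duplicate_interleave(data: bytes, nb: int, rd: int) -> list:
    """Split DATA into Nb elementary blocks DD_i, duplicate each RD times,
    and interleave the copies (round-robin over full passes of the blocks)."""
    lvar = -(-len(data) // nb)  # block size Lvar (ceiling division)
    elementary = [data[i * lvar:(i + 1) * lvar] for i in range(nb)]
    # Interleaving: DD_1..DD_Nb, then the next copy of DD_1..DD_Nb, etc.,
    # yielding Nb' = RD * Nb duplicated blocks
    return [block for _ in range(rd) for block in elementary]
```

Interleaving the copies (rather than storing the RD duplicates of one block consecutively) spreads each block's redundancy across the processing sequence, which is the stated aim of the sub-step.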
  • In step 32, an elementary distribution table of the servers S_i, denoted TABLE_E, is obtained.
  • The elementary table TABLE_E consists of an ordered plurality of L_TABLE entries, each identifying one of the servers S_i.
  • This elementary table TABLE_E can be a predefined table retrieved from the non-volatile memory of the system. As a variant, it can be determined according to the method of FIG. 6 described below in order, in particular, to favor trusted and/or efficient servers.
  • a private key of the user is obtained in step 33. It is preferably a cryptographic key obtained from an elliptic curve.
  • This key, noted K, is securely stored in the system embodying the invention, for example using a secure element such as a smart card.
  • The private key K is used to determine the servers to be used to store each block D′_i.
  • A respective storage server is determined, for each data block D′_i, from among the plurality of storage servers, as a function of a current time instant Tu.
  • the current time instant is defined with an accuracy directly dependent on a chosen time unit.
  • the time instant can be defined by the current time if a time unit of the order of the hour is chosen.
  • Alternatively, a time unit of one day can be used, so as to change the storage location of the blocks D′_i thirty or thirty-one times a month.
  • Step 34, an embodiment of which is described in greater detail hereinafter with reference to FIG. 4, thus makes it possible to identify a storage server for each data block D′_i resulting from the division of the initial data DATA, as a function of the current time instant Tu.
  • In step 35, each block of data is stored at the respective storage server thus determined.
  • Conventional secure communication techniques with the storage servers S are preferably implemented.
  • In step 36, the system waits for the next time instant, for example the beginning of the next hour or the next day.
  • Steps 31 to 35 are then reiterated to determine a new respective storage server for each data block D′_i, and thus store the data block at the new storage server during this new time instant.
  • The data blocks are erased from the old storage servers on which they were stored during the time instant that has just ended.
  • Steps 31, 32 and 33 may simply consist in recovering the result of a previous execution of these steps, when these do not involve the current time instant as a parameter (the elementary distribution table may, for example, change over time).
  • Step 34 is itself dependent on the current time instant, ensuring that the storage servers identified for each block of data to back up evolve over time.
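The periodic re-storage loop of steps 34 to 36 can be illustrated as follows; the helper names (`determine_server`, `store`, `erase`) are hypothetical stand-ins for the operations the text describes:

```python
def periodic_restorage(blocks, determine_server, store, erase, instants):
    """For each new time instant Tu, recompute the server of every block,
    store the block there, and erase it from its previous server."""
    previous = {}
    for tu in instants:                        # step 36: next time instant
        for i, block in enumerate(blocks):
            server = determine_server(i, tu)   # step 34: depends on Tu
            store(server, i, block)            # step 35: store at new server
            old = previous.get(i)
            if old is not None and old != server:
                erase(old, i)                  # erase from the old server
            previous[i] = server
```

Storing at the new server before erasing the old copy keeps at least one copy of each block available throughout the transition.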
  • FIG. 4 illustrates an embodiment of the step 34 of determining the storage servers for saving the data blocks D′_i at the current time instant Tu. This determination takes into account, besides the current time instant Tu, the private key K, the elementary distribution table TABLE_E and the size LENGTH of the data DATA to be saved.
  • A first step 40 consists in obtaining a first distribution table TABLE1 from the elementary table TABLE_E, by duplication of the latter, in order to obtain a table TABLE1 of length equal to Nb′ (that is to say, a table TABLE1 of the same length as the number of data blocks D′_i to be saved).
  • the next step 41 is to obtain a MASK bit mask from the private key K and the current time instant Tu. Since this mask MASK will be applied to the first distribution table TABLE1, it has the same size Nb 'as this one.
  • The private key K is used in its binary form (a sequence of '0's and '1's), here a 32-bit key. The mask MASK is then formed by repeating the binary key K until the size Nb′ of the first server distribution table is reached. In the figure, the nine bolded bits come from a repetition of the key K.
  • In step 42, the mask MASK is applied to the first server distribution table TABLE1 to identify the storage servers to use for a portion of the respective data blocks D′_i. According to embodiments, it is at this stage that the current time instant Tu is taken into account to perturb the identification of the storage servers to be used.
  • It may be provided to shift the mask MASK relative to the beginning of the first server distribution table TABLE1 by a number of positions depending on the current time instant Tu, before applying it to this server distribution table.
  • The mask MASK is shifted by Tu positions before application to TABLE1 (offset denoted K→Tu); the result RESULT1 of this masking operation (the '1' bits of the mask identify the servers of the table TABLE1 to keep) identifies only a portion of the storage servers to use.
  • In step 43, a second server distribution table TABLE2 is obtained.
  • The second distribution table can simply be the continuation of the first distribution table with regard to the repetition of the elementary table, as illustrated in FIG. 5.
  • In step 44, a second mask MASK2 is obtained, formed for example of the binary (bitwise) complement of the first mask MASK.
  • the second mask also has a size equal to Nb '.
  • In step 45, the second mask MASK2 is applied to the second distribution table TABLE2 in the same way as in step 42, so as to identify the storage servers to be used for the other data blocks D′_i (those for which step 42 could not identify such servers). Indeed, the use of the complement of the first mask ensures that ultimately each of the blocks D′_i is associated with a respective storage server.
  • the process of FIG. 4 ends at step 46 by merging the results RESULT1 and RESULT2 of the masking operations, so as to obtain a RESULT grid for locating the Nb 'storage servers.
  • This grid thus identifies the storage server S_i to be used for each of the Nb′ data blocks D′_i.
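Steps 40 to 46 can be sketched end to end as follows. The details that a '1' bit selects from TABLE1 and a '0' from TABLE2, and that the Tu shift wraps cyclically, are assumptions consistent with the description above:

```python
def determine_servers(table_e, key_bits, nb_prime, tu):
    """Locate a storage server for each of the Nb' data blocks."""
    # Steps 40 & 43: TABLE1, then TABLE2 as its continuation, by
    # repeating the elementary table TABLE_E
    reps = -(-2 * nb_prime // len(table_e))   # enough repetitions
    flat = table_e * reps
    table1, table2 = flat[:nb_prime], flat[nb_prime:2 * nb_prime]
    # Step 41: MASK = repetition of the binary key K up to size Nb',
    # shifted by Tu positions (assumed cyclic)
    mask = [key_bits[(i + tu) % len(key_bits)] for i in range(nb_prime)]
    # Steps 42, 44, 45, 46: apply MASK to TABLE1 and its complement
    # MASK2 to TABLE2, then merge RESULT1 and RESULT2 into one grid
    return [t1 if bit else t2
            for bit, t1, t2 in zip(mask, table1, table2)]
```

Because MASK2 is the exact complement of MASK, every block position receives a server from exactly one of the two tables, and changing Tu changes the resulting grid.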
  • the step of determining the elementary table is a function of a performance index associated with each storage server and a confidence index associated with the geographical location of each storage server.
  • Each server S_i is associated with a geographical location LS_j.
  • Two servers can have the same location LS_j, hence N ≤ M.
  • a confidence index CS is associated with each location LS j .
  • This confidence index is representative of a local stability with regard to the accessibility of servers located there.
  • For example, a value of 0 indicates low confidence; other ranges of values are possible.
  • Each storage server S is thus indirectly associated with a CS index of confidence, linked to its geographical location.
  • a performance index PS is associated with each storage server S, depending for example on the performance of access to this server and / or the performance of the server itself.
  • There are a large number of techniques for assessing server performance (storage, memory, processor, network, and process performance), so they will not be described in detail here.
  • The performance index PS_i can vary over time: PS_i = f(Tu). In that case, for example, step 32 is re-executed entirely at each new time instant (after step 36).
  • The step of determining the elementary distribution table TABLE_E of the servers begins with a step 60 of obtaining a weight WS_i associated with each storage server S_i.
  • This weight can in particular be representative of the associated confidence and performance.
  • The weight WS_i associated with a storage server S_i can be determined from the performance and confidence indices of the storage server in question, for example by combining the indices CS_i and PS_i associated with S_i.
  • WS_i = (CS_i·PS_i) / (CS_max·PS_max), for a weight ranging from 0 to 1.
  • a repetition frequency F x is determined for each storage server according to the weight WS X associated with the storage server S x considered.
  • F x is representative of a frequency of occurrence of the storage server S x in the elementary distribution table TABLE E , when it will be necessary to create it.
  • 1/F_x = ⌊L_TABLE / WS_x⌋.
  • The table TABLE_E is formed by repeating each server S_x WS_x times, with a frequency F_x.
  • The first position in the elementary table TABLE_E is therefore filled with the server S_1.
  • The position of the filled-in entry is stored in a variable 'z'.
  • a counter of the number of NBOCC occurrences is initialized to 1.
  • step 66 provides for populating the next occurrence of the server S x in the TABLE E.
  • the method then loops back to step 65 making it possible to fill all the occurrences of the server S x in the elementary table TABLE E.
  • The elementary table TABLE_E is filled by repeating, for each server S_i considered in turn and according to its determined repetition frequency F_i, an occurrence of the server within the elementary table until reaching a number of repetitions NBOCC equal to the weight WS_i associated with the considered server.
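Steps 60 to 66 can be sketched as follows. The scaling of the normalized weights to integer slot counts and the collision handling (advancing to the next free position) are assumptions; the text only fixes the relations WS_i = (CS_i·PS_i)/(CS_max·PS_max) and 1/F_x = ⌊L_TABLE/WS_x⌋:

```python
def build_elementary_table(servers, cs, ps, scale=10):
    """Interlace each server S_x into TABLE_E, WS_x times, at frequency F_x."""
    cs_max = max(cs.values())
    ps_max = max(ps.values())
    # Step 60: integer weight per server (normalized weight, scaled up so
    # each server gets at least one slot)
    ws = {s: max(1, round(scale * cs[s] * ps[s] / (cs_max * ps_max)))
          for s in servers}
    length = sum(ws.values())           # L_TABLE = sum of the weights
    table = [None] * length
    for s in servers:
        step = max(1, length // ws[s])  # 1/F_x = floor(L_TABLE / WS_x)
        pos = 0
        for _ in range(ws[s]):          # steps 65-66: fill each occurrence
            while table[pos % length] is not None:
                pos += 1                # collision: next free position
            table[pos % length] = s
            pos += step
    return table
```

The result is a table in which each server appears in proportion to its weight, i.e. to its confidence and performance indices, as the text requires.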
  • TABLE_E(9) is already filled in (for the server S_2), so the following positions are assigned to the server S_3: TABLE_E(10) = S_3, TABLE_E(12) = S_3, TABLE_E(14) = S_3, TABLE_E(16) = S_3, TABLE_E(18) = S_3.
  • This algorithm includes a mechanism for managing the risks of ambiguity relating to the transition from one time instant to the next, upon receipt of a request to access the data DATA.
  • The algorithm starts at step 80 with the reception of a request by a user U to access the data DATA. If necessary, the division, redundancy and interleaving mechanisms (step 31) applied to the data DATA are taken into account, in particular in order to know the number Nb′ of data blocks D′_i to be recovered.
  • A loop variable is initialized to 0, to serve as a mechanism for managing temporal transitions.
  • the temporal instant Tu of reception of the request is memorized.
  • the following steps identify the storage servers storing, at this time, the data blocks forming the data DATA to access.
  • In step 81, the elementary table TABLE_E is obtained in a manner similar to step 32.
  • In step 82, the private key K of the user is obtained in a manner similar to step 33.
  • In step 83, the storage servers of the data blocks D′_i are determined similarly to step 34, for the time instant Tu.
  • In step 84, the data blocks D′_i are recovered from these determined storage servers by conventional mechanisms (for example, secure requests). Then, during step 85, the data DATA is reformed from the blocks D′_i thus recovered.
  • The next step, 86, is to check the consistency of the result of step 85.
  • The verification may relate to the identity of the user U, which must be identical to that indicated in the reformed data DATA (for example, if the data DATA is encrypted, the use of a public key of the user U can verify its authenticity).
  • checksum verification may be carried out (for example if the end of the data DATA consists of a checksum of the rest of the data).
  • Other checks can be carried out such as the dating of the last storage stored compared to a traceability stored operations performed for this user.
  • if the verification fails, the algorithm loops back at most once (test 88) to the previous time instant, in case a temporal transition occurred while the request was being processed;
  • if the verification still fails after this loop, an error message is returned to the user in response to his request (step 90);
  • otherwise, the reformed data is returned to the user in response to his request (step 91).
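The table-filling and timed-access steps above can be sketched in Python. This is a minimal illustration, not the patented implementation: the server names and weights are invented, the mapping from the private key K and the time instant to a TABLE_E position is assumed here to be a SHA-256 hash (the patent only specifies that the key and the instant select positions in the table), and `fetch`, `check` and `to_slot` are hypothetical callables standing in for the secure-request, consistency-check and time-slot mechanisms.

```python
import hashlib
import time

def fill_table_e(weights):
    """Build the elementary table TABLE_E: each server Si is placed at its
    repetition frequency Fi until it occurs exactly WSi (its weight) times;
    positions already taken by another server are skipped, as in the text
    where TABLE_E(9) is already filled for server S2."""
    size = sum(weights.values())
    table = [None] * size
    for server, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
        freq = size / weight              # determined repetition frequency Fi
        pos, placed = 0.0, 0
        while placed < weight:            # stop once NBOCC == WSi
            idx = int(pos) % size
            while table[idx] is not None:  # slot occupied: take next free one
                idx = (idx + 1) % size
            table[idx] = server
            placed += 1
            pos += freq
    return table

def servers_for(table_e, key, t, nb_blocks):
    """Steps 81-83 (illustrative): derive, from the user's private key K and
    the time instant t, the storage server holding each block D'."""
    servers = []
    for i in range(nb_blocks):
        h = hashlib.sha256(f"{key}:{t}:{i}".encode()).digest()
        servers.append(table_e[int.from_bytes(h[:4], "big") % len(table_e)])
    return servers

def access_data(table_e, key, nb_blocks, fetch, check, to_slot):
    """Steps 80-91 (illustrative): locate, fetch and reassemble the blocks,
    looping back at most once (test 88) to the previous time slot to absorb
    a temporal transition occurring while the request is being served."""
    t_u = to_slot(time.time())            # step 80: memorize the instant Tu
    for t in (t_u, t_u - 1):              # at most one loop back (test 88)
        servers = servers_for(table_e, key, t, nb_blocks)
        data = b"".join(fetch(s, i) for i, s in enumerate(servers))  # 84-85
        if check(data):                   # step 86: consistency verification
            return data                   # step 91: reformed data returned
    return None                           # step 90: error reported to user
```

With weights {S1: 4, S2: 3, S3: 5}, `fill_table_e` produces a 12-entry table in which each server occurs exactly its weight's number of times; the same (key, instant) pair then always resolves to the same servers, which is what lets the access algorithm re-locate blocks it stored earlier.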

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Quality & Reliability (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Hardware Redundancy (AREA)
EP16778810.8A 2015-10-08 2016-10-07 Verfahren und system zur dynamisch verteilten sicherung Withdrawn EP3360034A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP15306582.6A EP3153961A1 (de) 2015-10-08 2015-10-08 Verfahren und system zum verteilten dynamischen schutz
PCT/EP2016/074104 WO2017060495A1 (fr) 2015-10-08 2016-10-07 Procédé et système de sauvegarde répartie dynamique

Publications (1)

Publication Number Publication Date
EP3360034A1 true EP3360034A1 (de) 2018-08-15

Family

ID=55027646

Family Applications (2)

Application Number Title Priority Date Filing Date
EP15306582.6A Withdrawn EP3153961A1 (de) 2015-10-08 2015-10-08 Verfahren und system zum verteilten dynamischen schutz
EP16778810.8A Withdrawn EP3360034A1 (de) 2015-10-08 2016-10-07 Verfahren und system zur dynamisch verteilten sicherung

Family Applications Before (1)

Application Number Title Priority Date Filing Date
EP15306582.6A Withdrawn EP3153961A1 (de) 2015-10-08 2015-10-08 Verfahren und system zum verteilten dynamischen schutz

Country Status (5)

Country Link
US (1) US10678468B2 (de)
EP (2) EP3153961A1 (de)
CN (1) CN108139869A (de)
BR (1) BR112018006134A2 (de)
WO (1) WO2017060495A1 (de)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111309528A (zh) * 2020-03-23 2020-06-19 重庆忽米网络科技有限公司 一种基于云计算及分布式存储的数据协同备份系统及方法

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003063423A1 (en) * 2002-01-24 2003-07-31 University Of Southern California Pseudorandom data storage
US7096328B2 (en) * 2002-01-25 2006-08-22 University Of Southern California Pseudorandom data storage
US20060020569A1 (en) * 2004-07-22 2006-01-26 Goodman Brian G Apparatus, system, and method for time-based library scheduling
US7844775B2 (en) * 2005-09-23 2010-11-30 Avid Technology, Inc. Distribution of data in a distributed shared storage system
US9178693B2 (en) * 2006-08-04 2015-11-03 The Directv Group, Inc. Distributed media-protection systems and methods to operate the same
FR2945644A1 (fr) 2009-05-18 2010-11-19 Alcatel Lucent Systeme de sauvegarde de donnees
KR101483127B1 (ko) * 2011-03-31 2015-01-22 주식회사 케이티 클라우드 스토리지 시스템에서 리소스를 고려한 자료분배방법 및 장치
US8732421B2 (en) * 2011-08-26 2014-05-20 Hitachi, Ltd. Storage system and method for reallocating data
CN104255011B (zh) * 2012-03-09 2017-12-08 英派尔科技开发有限公司 云计算安全数据存储
WO2014010016A1 (ja) 2012-07-09 2014-01-16 富士通株式会社 プログラム、データ管理方法、および情報処理装置
CN203149553U (zh) * 2013-03-28 2013-08-21 中国大唐集团财务有限公司 一种带有数据安全校验的异地灾备系统
US20160147838A1 (en) * 2013-06-14 2016-05-26 Nec Corporation Receiving node, data management system, data management method and strage medium
CN103916477A (zh) * 2014-04-09 2014-07-09 曙光云计算技术有限公司 用于云环境的数据存储方法和装置、及下载方法和装置

Also Published As

Publication number Publication date
US10678468B2 (en) 2020-06-09
WO2017060495A1 (fr) 2017-04-13
EP3153961A1 (de) 2017-04-12
BR112018006134A2 (pt) 2018-10-23
US20180284991A1 (en) 2018-10-04
CN108139869A (zh) 2018-06-08

Similar Documents

Publication Publication Date Title
EP1570648B1 (de) Verfahren zur sicherung von software-upgrades
CA2034002C (fr) Procede pour verifier l'integrite d'un logiciel ou de donnees, et systeme pour la mise en oeuvre de ce procede
EP0810506B1 (de) Verfahren und Einrichtung zur gesicherten Identifikation zwischen zwei Endgeräten
EP1055203B1 (de) Zugangskontrollprotokoll zwischen einem schlüssel und einem elektronischen schloss
EP1761835B1 (de) Sicherheitsmodul und verfahren zur individuellen anpassung eines derartigen moduls
CN113810465A (zh) 一种异步二元共识方法及装置
CN111680013A (zh) 基于区块链的数据共享方法、电子设备和装置
EP3360034A1 (de) Verfahren und system zur dynamisch verteilten sicherung
FR2792141A1 (fr) Procede de securisation d'un ou plusieurs ensembles electroniques mettant en oeuvre un meme algorithme cryptographique avec cle secrete, une utilisation du procede et l'ensemble electronique
FR3052894A1 (fr) Procede d'authentification
WO2019175482A1 (fr) Traitement sécurisé de données
EP1609326B1 (de) Verfahren zum schutz eines telekommunikationsendgeräts des mobiltelephontyps
EP1436792B1 (de) Authentisierungsprotokoll mit speicherintegritaetsverifikation
EP3394812A1 (de) Authentifizierungsverfahren
FR3107416A1 (fr) Tokenisation aléatoire efficace dans un environnement dématérialisé
US10262310B1 (en) Generating a verifiable download code
WO2020065185A1 (fr) Procédé cryptographique de comparaison sécurisée de deux données secrètes x et y
FR3121240A1 (fr) Procédé permettant de garantir l’intégrité des données informatiques gérées par une application tout en préservant leur confidentialité
EP3284209B1 (de) Verfahren zur erzeugung und verifizierung eines sicherheitsschlüssels einer virtuellen geldeinheit
EP3842970A1 (de) Verfahren zur überprüfung des passworts eines dongles, entsprechendes computerprogramm, benutzerendgerät und entsprechender dongle
FR3122753A1 (fr) Procédé d'exécution d'un code binaire par un microprocesseur
FR3003058A1 (fr) Systeme et procede de gestion d’au moins une application en ligne, objet portable utilisateur usb et dispositif distant du systeme
FR2987711A1 (fr) Delegation de calculs cryptographiques
EP3564841A1 (de) Authentifizierung eines elektronischen schaltkreises
FR2912529A1 (fr) Couplage d'un programme informatique ou de donnees a un systeme de reference et verification associee.

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20180504

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20211115

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20220326