CN111240579B - Method and device for data storage - Google Patents

Method and device for data storage

Info

Publication number
CN111240579B
CN111240579B (granted publication of application CN201811436152.0A)
Authority
CN
China
Prior art keywords: data, overflow, stored, setting, stored data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811436152.0A
Other languages
Chinese (zh)
Other versions
CN111240579A (en)
Inventor
张乾
赵君杰
苏京
赵砚秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN201811436152.0A priority Critical patent/CN111240579B/en
Priority to PCT/CN2019/121611 priority patent/WO2020108563A1/en
Priority to US17/296,628 priority patent/US11747991B2/en
Publication of CN111240579A publication Critical patent/CN111240579A/en
Application granted granted Critical
Publication of CN111240579B publication Critical patent/CN111240579B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0608 Saving storage space on storage systems
    • G06F3/0614 Improving the reliability of storage systems
    • G06F3/0619 Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0646 Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/0652 Erasing, e.g. deleting, data cleaning, moving of data to a wastebasket
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F3/0671 In-line storage system
    • G06F3/0673 Single storage device
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 Error detection or correction of the data by redundancy in operation
    • G06F11/1402 Saving, restoring, recovering or retrying
    • G06F11/1446 Point-in-time backing up or restoration of persistent data
    • G06F11/1448 Management of the data involved in backup or backup restore
    • G06F11/1451 Management of the data involved in backup or backup restore by selection of backup contents
    • G06F11/1458 Management of the backup or restore process
    • G06F11/1464 Management of the backup or restore process for networked environments
    • G06F11/1469 Backup restoration techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Quality & Reliability (AREA)
  • Computer Security & Cryptography (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure relates to a method for data storage and an apparatus for data storage. A method for data storage comprises: receiving, by a service entity, data sent by an application entity; when there is a data overflow, selecting a corresponding retention policy for the stored data according to a lock setting or an overflow setting; and storing part or all of the overflow data according to the retention policy.

Description

Method and device for data storage
Technical Field
The present application relates to the field of data storage, and in particular to a method for data storage, and an apparatus for data storage.
Background
In the field of data storage, data uploaded by an application entity is generally received by a service entity. In scenarios where the values monitored by the application entity change quickly and high accuracy is required, for example when monitoring ocean currents or seafloor volcanic temperatures, the application entity uploads data rapidly and will soon exceed the service entity's storage limit on data resource capacity. When the uploaded data reaches the maximum resource capacity storage limit of the service entity, the data overflows.
Disclosure of Invention
The present disclosure provides a method for data storage and an apparatus for data storage.
According to one aspect of the present disclosure, there is provided a method for data storage, comprising: receiving, by a service entity, data sent by an application entity; when there is a data overflow, selecting a corresponding retention policy for the stored data according to the lock setting or the overflow setting; and storing part or all of the overflow data according to the retention policy.
According to one aspect of the disclosure, selecting a corresponding retention policy for the data based on the overflow setting comprises selecting at least one of the following retention policies for the stored data: locally backing up the stored data; uploading the stored data for backup; partially retaining the stored data by time point; partially retaining the stored data by rate of change; partially retaining the stored data by differencing; and replacing the stored data with newly uploaded data.
According to one aspect of the disclosure, the overflow setting further comprises: when the stored data reaches an early warning threshold, sending an early warning notification to the application entity.
According to one aspect of the disclosure, the overflow setting indicates whether the stored data is locked, and the lock setting locks the stored data by lock period: when a data overflow occurs, stored data generated within a preset lock period is locked.
According to one aspect of the disclosure, the overflow setting indicates whether the stored data has been given a lock setting.
According to one aspect of the disclosure, the lock setting comprises locking the stored data according to a lock period or a lock data change rate, wherein, upon occurrence of a data overflow, the stored data generated in a preset lock period, or the stored data conforming to the preset lock data change rate, is locked.
According to one aspect of the disclosure, wherein the overflow setting further comprises setting a specific overflow category.
According to one aspect of the disclosure, the overflow category includes one or more of a maximum byte size, a maximum number of instances, and a maximum instance lifetime.
According to one aspect of the disclosure, wherein the overflow setting further comprises setting an overflow state of whether overflow occurs.
According to one aspect of the disclosure, wherein the overflow setting further comprises setting whether there is an associated operation after the overflow.
According to one aspect of the disclosure, wherein the overflow setting further comprises setting an operation to be performed after the overflow, the setting corresponding to at least one of the above-described retention policies.
According to one aspect of the disclosure, when the stored data reaches the early warning threshold, an early warning notification is sent to other application entities subscribed to the resource of the data in the service entity.
According to another aspect of the present disclosure, there is provided a method for data storage, comprising: uploading data to a service entity by using an application entity; and when there is a data overflow, replacing the stored data with the newly uploaded data according to a retention policy determined by a lock setting or overflow setting of the service entity.
According to one aspect of the disclosure, the retention policy comprises at least one of: locally backing up the stored data; uploading the stored data for backup; partially retaining the stored data by time point; partially retaining the stored data by rate of change; partially retaining the stored data by differencing; and replacing the stored data with newly uploaded data.
According to one aspect of the disclosure, the overflow setting comprises: when the stored data reaches the early warning threshold, receiving an early warning notification sent by the service entity.
According to one aspect of the disclosure, another application entity can acquire required target data from the service entity as follows: when all of the target data is stored on the service entity, the other application entity extracts the target data directly from the service entity; when part of the target data is stored on the service entity and part has been backed up to other service entities, the other application entity extracts the corresponding target data from the service entity and the other service entities respectively, or the service entity extracts the missing part of the target data from the other service entities and then sends all of the target data to the other application entity together; and when all of the target data is stored on other service entities, the other application entity extracts the target data directly from them, or the service entity extracts the target data from the other service entities and then sends it to the other application entity.
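The three retrieval cases above can be sketched as follows. This is a minimal illustration, not the patent's normative procedure; the store representation and the `fetch_target_data` helper are hypothetical.

```python
# Hypothetical sketch: gather target data from the service entity's local
# store and, when some of it was backed up elsewhere, from the backup
# service entity as well (covering all three cases described above).
def fetch_target_data(targets, local_store, backup_store):
    """Return the requested target data, taking each item from the local
    store when present, otherwise from the backup store."""
    local = [t for t in targets if t in local_store]
    remote = [t for t in targets if t not in local_store and t in backup_store]
    return local + remote

local_store = {"d1", "d2"}    # data still on the service entity
backup_store = {"d3"}         # data backed up to another service entity
print(fetch_target_data(["d1", "d3"], local_store, backup_store))
```

When all targets are local the remote list is empty; when all were backed up, everything comes from the backup store, matching the three cases in the text.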
According to yet another aspect of the present disclosure, there is provided a method for data storage, comprising: uploading data from an application entity to a service entity; receiving, by the service entity, the data sent by the application entity; when there is a data overflow, selecting a corresponding retention policy for the stored data according to a lock setting or an overflow setting; and storing part or all of the overflow data according to the retention policy.
According to yet another aspect of the present disclosure, there is provided an apparatus for data storage, comprising: a receiving unit for receiving, by a service entity, data sent by an application entity; a selection unit for selecting a corresponding retention policy for the data according to the lock setting or the overflow setting when there is a data overflow; and a storage unit for storing part or all of the overflow data according to the retention policy.
According to one aspect of the disclosure, the selection unit selects at least one of the following retention policies: locally backing up the stored data; uploading the stored data for backup; partially retaining the stored data by time point; partially retaining the stored data by rate of change; partially retaining the stored data by differencing; and replacing the stored data with newly uploaded data.
According to one aspect of the disclosure, wherein the overflow setting includes issuing an early warning notification to the application entity when the stored data reaches a predetermined threshold.
According to yet another aspect of the present disclosure, there is provided an apparatus for data storage, comprising: a transceiving unit for uploading data from an application entity to a service entity; and a storage unit for replacing the stored data with the newly uploaded data, when there is a data overflow, according to a retention policy determined by a lock setting or an overflow setting of the service entity.
According to one aspect of the disclosure, the retention policy comprises at least one of: locally backing up the stored data; uploading the stored data for backup; partially retaining the stored data by time point; partially retaining the stored data by rate of change; partially retaining the stored data by differencing; and replacing the stored data with newly uploaded data.
According to still another aspect of the present disclosure, there is provided a computer-readable storage medium storing a computer-readable program that causes a computer to execute the method for data storage of the above aspect of the present disclosure.
In the above aspects of the disclosure, an early warning is issued when data is about to overflow so that the application entity can adjust the retention policy of the stored data in time, and a notification is issued when data overflows so that the application entity can likewise adjust the retention policy in time, thereby avoiding loss of data.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments thereof with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of embodiments of the disclosure, and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure, without limitation to the disclosure. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 is a schematic diagram of a data storage system for implementing an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method for data storage for implementing an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of setting resources with respect to overflow according to an embodiment of the disclosure;
FIG. 4 is a schematic diagram of an operation A to be performed after setting up overflow resources, in accordance with an embodiment of the disclosure;
FIG. 5 is a schematic diagram of an operation B to be performed after setting up overflow resources, in accordance with an embodiment of the disclosure;
FIG. 6 is a schematic diagram of an example of uploading backup of stored data according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of an operation C to be performed after setting up overflow resources, in accordance with an embodiment of the disclosure;
FIG. 8 is a schematic diagram of an example of partial reservation of stored data according to a point in time, according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of an operation D to be performed after setting up overflow resources, in accordance with an embodiment of the disclosure;
FIGS. 10(a)-10(b) are schematic diagrams of examples of partial retention of stored data by rate of change, according to embodiments of the present disclosure;
FIG. 11 is a schematic diagram of an operation E to be performed after setting up overflow resources, in accordance with an embodiment of the disclosure;
FIGS. 12(a)-12(b) are schematic diagrams of examples of partial retention of stored data by differencing, according to embodiments of the present disclosure;
FIG. 13 is a schematic diagram of an early warning regarding setting overflow resources according to an embodiment of the disclosure;
FIG. 14 is a flow chart of a method for data storage according to another embodiment of the present disclosure;
FIG. 15 is a schematic diagram of an example of other application entities obtaining desired target data from a service entity according to one embodiment of the present disclosure;
FIG. 16 is a schematic diagram of an example of other application entities obtaining desired target data from a service entity according to another embodiment of the present disclosure;
FIG. 17 is a schematic diagram of an example of other application entities acquiring desired target data from a service entity according to yet another embodiment of the present disclosure;
FIG. 18 is a schematic diagram of an example of other application entities obtaining desired target data from a service entity according to yet another embodiment of the present disclosure;
FIG. 19 is a flow chart of a method for data storage according to yet another embodiment of the present disclosure;
FIG. 20 is a schematic diagram of an apparatus for data storage according to an embodiment of the disclosure;
fig. 21 is a schematic diagram of an apparatus for data storage according to another embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure. It will be apparent that the described embodiments are merely some, and not all, of the embodiments of the present disclosure. All other embodiments obtained by one of ordinary skill in the art, based on the embodiments in this disclosure and without inventive effort, are intended to be within the scope of the present disclosure.
First, a data storage system for implementing an embodiment of the present disclosure is described with reference to fig. 1. As shown in fig. 1, in a data storage system, an application entity (Application Entity, AE) 10 uploads data 14 to a service entity (Common Service Entity, CSE) 11, and the service entity 11 then creates a content instance for each request by the application entity 10. For example, the application entity 10 may be a sensor, such as a sensor of an air conditioner, and the data uploaded to the service entity may be humidity, temperature, etc. The service entity 11 may create a resource container (container) for the sensor and store information of content instances of humidity, temperature, etc. under the resource container accordingly. When the data uploaded by the application entity 10 reaches the maximum capacity storage limit of the service entity, an overflow 13 of data occurs. In the present application, the application entity is warned when data is about to overflow so that it can adjust the retention policy of the stored data in time, and is notified when data overflows so that it can likewise adjust the retention policy in time.
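The AE-to-CSE flow above can be sketched in a few lines. This is a minimal sketch, not the patent's implementation; the `Container` class and its `max_instances` limit are hypothetical stand-ins for the resource container and its capacity limit.

```python
# Hypothetical sketch of a service entity's resource container: each upload
# from the application entity becomes a content instance, and an upload that
# would exceed the capacity limit is flagged as an overflow.
class Container:
    def __init__(self, max_instances):
        self.max_instances = max_instances
        self.instances = []          # stored content instances, oldest first

    def upload(self, value):
        """Store a content instance; return True when it causes overflow."""
        if len(self.instances) >= self.max_instances:
            return True              # overflow: a retention policy must decide
        self.instances.append(value)
        return False

container = Container(max_instances=3)
readings = [21.0, 21.5, 22.1, 22.8]  # e.g. temperature samples from a sensor AE
overflowed = [container.upload(r) for r in readings]
print(overflowed)                    # the 4th upload hits the limit
```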
A method for data storage for implementing an embodiment of the present disclosure will be described below with reference to fig. 2. The method may be performed by the service entity 11.
As shown in fig. 2, in step S101, the service entity 11 may receive data transmitted by the application entity 10. For example, the application entity 10 may be a sensor as described above.
In step S102, when there is a data overflow, the service entity 11 selects a corresponding retention policy for the stored data according to the lock setting or the overflow setting.
In step S103, part or all of the overflow data is stored according to the retention policy.
As described above, when data overflow occurs, the common practice is to directly replace the old stored data with the newly uploaded data. This, however, loses the original records, so functions such as retrieving a history table or history graph cannot be realized, and flexibility is poor. This approach also cannot notify the application entity to adjust the retention policy of the stored data in time when data overflows, nor warn the application entity in advance when data is about to overflow. When a corresponding retention policy is instead selected for the stored data according to a lock setting or an overflow setting, such loss of data can be avoided.
The overflow setting (overflowCfg) 30 resource according to an embodiment of the present disclosure will be described below with reference to fig. 3. The service entity 11 may create a resource container (container) for the application entity 10 as described above and store information of content instances of humidity, temperature, etc. under the resource container. In addition, the service entity 11 may create an overflow setting (overflowCfg) 30 resource for the application entity 10 under the resource container (container), storing the overflow-related information as a child resource of the resource container (container).
The overflow setting 30 may include whether a lock setting has been made for the data (i.e., lock state 31).
The lock setting (overflowReserve) 32 may include locking data according to a lock period or a lock data change rate: when data overflow occurs, stored data generated in a preset lock period, or stored data conforming to a preset lock data change rate, will be locked and not deleted. Alternatively, the lock setting (overflowReserve) and the indicator of whether the data has been lock-set (overflowReserveStatus) may be placed directly under the resource container, storing the lock-related information as child resources or attribute values of the resource container (container).
For example, the lock state 31 may be set to yes/no (True/False) to clearly show whether the lock setting has been made on the data.
The lock setting 32 may include setting by lock period. For example, a lock period may be preset, and the data generated in that period is then locked and not deleted after data overflow occurs. For instance, lock periods 00:00:00-00:07:00 and 01:00:00-02:59:59 may be set, so that data generated (dataGenerationTime) in these two periods will be locked and not deleted after data overflow occurs. When one or both of the two lock periods are deleted, the data corresponding to the deleted period loses its locked attribute and, on overflow, is no longer protected by the lock policy. It should be appreciated that the above examples are illustrative only and do not limit what is claimed herein; other suitable examples may also be used.
Further, the lock setting 32 may include setting by lock data change rate. For example, a lock data change rate may be preset, and data exceeding that change rate is then locked and not deleted after data overflow occurs. For instance, the lock data change rate may be set to 1 km/h: when data overflow occurs, the data instances in the data list are screened, and if two data points 1 h apart differ by more than 1 km, all data between them are locked and cannot be deleted after the overflow.
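The two locking criteria above can be sketched as follows. This is a minimal illustration under the assumption that data are (time, value) pairs; the function names and the exact screening rule are hypothetical readings of the description.

```python
# Hypothetical sketch of the two lock settings (overflowReserve) described
# above: lock by preset period, and lock by preset data change rate.
def lock_by_period(data, lock_periods):
    """Return the data whose generation time falls inside any lock period."""
    return [(t, v) for (t, v) in data
            if any(start <= t <= end for (start, end) in lock_periods)]

def lock_by_change_rate(data, max_rate, window):
    """Lock every datum between two samples `window` apart whose change
    exceeds `max_rate * window` (e.g. more than 1 km over a 1 h window)."""
    locked = set()
    for i in range(len(data)):
        for j in range(i + 1, len(data)):
            dt = data[j][0] - data[i][0]
            if dt == window and abs(data[j][1] - data[i][1]) > max_rate * window:
                locked.update(range(i, j + 1))   # lock everything in between
    return [data[k] for k in sorted(locked)]

# Times in hours, values in km: the jump between t=0 and t=1 exceeds 1 km/h.
drift = [(0, 0.0), (1, 1.5), (2, 1.6)]
print(lock_by_change_rate(drift, max_rate=1.0, window=1))
```

On overflow, anything returned by these functions would be exempt from deletion; everything else remains replaceable.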
In addition, overflow settings 30 may also include setting a specific overflow category (overflowCat) 36.
For example, the overflow category 36 may be set to a maximum byte size (maxByteSize), a maximum number of instances (maxNrOfInstances), a maximum instance lifetime (maxInstanceAge), and the like. It will be appreciated that the above arrangements are merely illustrative and do not limit what is claimed in this application; other suitable arrangements may also be used.
Further, the overflow setting may also include setting an overflow state (overflowStatus) 33 indicating whether overflow has occurred.
For example, the overflow state 33 may be set to yes/no (True/False) to clearly show whether overflow has occurred. In addition, an overflow condition may be set for each of the different overflow categories 36; for example, an overflow status may be set for one or more of the maximum byte size, the maximum number of instances, and the maximum instance lifetime, respectively.
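A per-category overflow check can be sketched as follows. This is a minimal sketch, not the patent's implementation; the dictionary layout of the container is hypothetical, while the three category names mirror those above.

```python
import time

# Hypothetical sketch: compute an overflow status per overflow category
# (maxByteSize, maxNrOfInstances, maxInstanceAge in seconds), mirroring
# the per-category overflow states described above.
def overflow_status(container, now=None):
    now = time.time() if now is None else now
    size = sum(len(payload) for (_, payload) in container["instances"])
    oldest_age = (now - min(t for (t, _) in container["instances"])
                  if container["instances"] else 0)
    return {
        "maxByteSize": size >= container["maxByteSize"],
        "maxNrOfInstances": len(container["instances"]) >= container["maxNrOfInstances"],
        "maxInstanceAge": oldest_age >= container["maxInstanceAge"],
    }

c = {"instances": [(100.0, b"abcd"), (200.0, b"ef")],
     "maxByteSize": 6, "maxNrOfInstances": 3, "maxInstanceAge": 3600}
print(overflow_status(c, now=260.0))  # only the byte-size limit is hit
```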
Further, overflow settings 30 may also include setting whether there is an associated operation (overflowCtrl) 34 after an overflow.
For example, whether there is a correlation operation 34 after overflow may be set to: yes/no (True/False) to clearly show if there is a relevant operation after overflow.
Still further, the overflow setting 30 may include setting an operation (overflowOp) 35 to be performed after the overflow, the setting corresponding to at least one of the above-described retention policies.
For example, different numbers may be set for the operations to be performed after overflow: operation A 37 (overflowOpOneDescription), operation B 38 (overflowOpTwoDescription), operation C 39 (overflowOpThreeDescription), operation D 40 (overflowOpFourDescription), operation E 41 (overflowOpFiveDescription), and operation F 42 (overflowOpSixDescription), where different numbers represent different operations, to clearly show whether there is a related operation after overflow and which retention policy is selected.
Selecting a corresponding retention policy for the data based on the overflow setting may comprise selecting at least one of the following retention policies: locally backing up the stored data; uploading the stored data for backup; partially retaining the stored data by time point; partially retaining the stored data by rate of change; partially retaining the stored data by differencing; and replacing the stored data with newly uploaded data.
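The numbered-operation dispatch can be sketched as follows. A minimal illustration, assuming the mapping of numbers to policies shown here, which is hypothetical; only the six policy names come from the text above.

```python
# Hypothetical dispatch from the numbered overflowOp value to a retention
# policy; which number maps to which policy is illustrative only.
RETENTION_POLICIES = {
    1: "local backup of stored data",            # operation A
    2: "upload backup of stored data",           # operation B
    3: "partial retention by time point",        # operation C
    4: "partial retention by rate of change",    # operation D
    5: "partial retention by differencing",      # operation E
    6: "replace stored data with new uploads",   # operation F
}

def select_retention_policy(overflow_ctrl, overflow_op):
    """Return the policy to apply after overflow, or None when overflowCtrl
    indicates that no related operation is configured."""
    if not overflow_ctrl:
        return None
    return RETENTION_POLICIES[overflow_op]

print(select_retention_policy(True, 2))
```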
For example, operation a to be performed after the overflow may be set to perform the above-described local backup of the stored data.
FIG. 4 is a schematic diagram of an operation A to be performed after overflow, set in the overflow resources, according to an embodiment of the disclosure. As shown in fig. 4, for operation A 37 to be performed after overflow, different attribute values may be set so that the operation performs a local backup. For example, a root directory location for the backup (overflowBackUpPosition) 46, a list of what has been backed up (overflowBackUpList) 47, and a point in time at which the backup occurs (overflowBackUpTimePoint) 48 may be set for this operation A. It will be appreciated that the above arrangements are merely illustrative and do not limit what is claimed in this application; other suitable arrangements may also be used.
For example, operation B to be performed after the overflow may be set to perform the above-described upload backup of the stored data.
FIG. 5 is a schematic diagram of an operation B to be performed after overflow, set in the overflow resources, according to an embodiment of the disclosure. As shown in fig. 5, for operation B 38 to be performed after overflow, a root directory location for the upload (overflowUpLoadPosition) 56, a list of what has been uploaded (overflowUpLoadList) 57, a point in time at which the upload occurs (overflowUpLoadTimePoint) 58, and the like may be set for this operation B. It will be appreciated that the above arrangements are merely illustrative and do not limit what is claimed in this application; other suitable arrangements may also be used.
Fig. 6 is a schematic diagram of an example of upload backup of stored data according to an embodiment of the present disclosure. As shown in FIG. 6, AE 60 uploads data 63 to an intermediate node service entity (MN-CSE) 61, and MN-CSE 61 then creates a content instance 64 for each request of AE 60; when the data uploaded by AE 60 reaches the maximum capacity storage limit of MN-CSE 61, overflow 65 of data may result. At this point, the MN-CSE 61 uploads the stored data to another service entity (e.g., the storing CSE 62 shown in FIG. 6) for data backup 66 and deletes the data stored on the original MN-CSE 61; afterwards there is spare capacity on the MN-CSE 61 to store the data that the AE 60 continues to upload 67, and content instances 68 continue to be created for each request of the AE 60.
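The upload-backup flow of Fig. 6 can be sketched in a few lines; the `MNCSE` and `StoringCSE` classes and their method names are illustrative assumptions made for this sketch, not a oneM2M API:

```python
class StoringCSE:
    """Toy stand-in for the storing CSE that receives backups."""
    def __init__(self):
        self.backup = []

class MNCSE:
    """Toy stand-in for the MN-CSE with a capacity limit."""
    def __init__(self, max_instances, storing_cse):
        self.max_instances = max_instances
        self.storing_cse = storing_cse
        self.instances = []

    def create_content_instance(self, data):
        if len(self.instances) >= self.max_instances:       # overflow detected
            self.storing_cse.backup.extend(self.instances)  # data backup
            self.instances.clear()                          # delete local copies
        self.instances.append(data)                         # store new upload

store = StoringCSE()
mn = MNCSE(max_instances=3, storing_cse=store)
for i in range(5):
    mn.create_content_instance(i)
print(store.backup, mn.instances)  # [0, 1, 2] [3, 4]
```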
For example, operation C to be performed after overflow may be set to partially reserve stored data according to a point of time.
FIG. 7 is a schematic diagram of an operation C to be performed after overflow in the overflow resource settings according to an embodiment of the disclosure. As shown in FIG. 7, for the operation C 39 to be performed after overflow, different attribute values may be set so that the operation partially retains stored data by time point. For example, an overflow time interval (overflowTimer) 76 and the like may be set for this operation C. It will be appreciated that the above settings are merely illustrative and not limiting of what is claimed in this application, and that other suitable settings may be included in this application.
Fig. 8 is a schematic diagram of an example of partially retaining stored data by time point according to an embodiment of the present disclosure. As shown in FIG. 8, assume that the overflow time interval 76 is set to 1 h and that the first whole-hour point in the data queue is taken as the starting point, so that the first datum within each interval is retained. After the overflow 80 at time 05:20, 6 new data 81 are added. Taking the first whole-hour point in the queue as the starting point, the first datum 83 of the 00:00-01:00 interval is retained, while the remaining 5 data 84 of that interval are deleted and replaced by the 5 data 81 newly added after the 05:20 overflow. Next, two further data 82 are added; a new interval begins after 01:00, so the first datum 85 after 01:00 is retained, the subsequent two data 86 among the 3 data in 01:00-02:00 are deleted, and they are replaced by the 2 data 82 added after the 05:20 overflow.
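This keep-first-per-interval behavior can be sketched as a small filter over timestamped samples; the function and parameter names are assumptions made for illustration:

```python
def keep_first_per_interval(samples, interval_s):
    """samples: list of (timestamp_s, value) sorted by time.
    Keep only the first sample of each interval of length interval_s,
    so the freed slots can hold data arriving after overflow."""
    kept, seen_intervals = [], set()
    for t, v in samples:
        bucket = t // interval_s           # which interval the sample falls in
        if bucket not in seen_intervals:   # first sample of this interval
            seen_intervals.add(bucket)
            kept.append((t, v))
    return kept

# 1-hour intervals (3600 s): the first sample of each hour survives
data = [(0, "a"), (600, "b"), (3600, "c"), (3700, "d"), (7200, "e")]
print(keep_first_per_interval(data, 3600))  # [(0, 'a'), (3600, 'c'), (7200, 'e')]
```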
For example, the operation D to be performed after overflow may be set to partially retain the stored data according to the change rate as described above.
Fig. 9 is a schematic diagram of an operation D to be performed after overflow in the overflow resource settings according to an embodiment of the disclosure. As shown in FIG. 9, for the operation D 40 to be performed after overflow, different attribute values may be set so that the operation partially retains stored data by change rate. For example, an overflow replacement start point (overflowValuePoint) 96 and an overflow value interval (overflowValueChange) 97 may be set for this operation D. It should be appreciated that the above settings are merely illustrative and not limiting of what is to be protected by the present application, and that other suitable settings may be included in the present application.
Fig. 10 (a)-10 (b) are exemplary diagrams of partially retaining stored data by change rate according to embodiments of the present disclosure. As shown in FIGs. 10 (a)-10 (b), assume that the maximum amount of data the service entity can store is set to 12, so overflow occurs when the 12th datum is created. Assume the overflow replacement start point 96 is set to 2 and the overflow value interval 97 is set to 1; this means that, starting from the overflow, replacement begins from the 2nd originally stored datum, and of two adjacent original data whose values lie in the same interval, the later one is replaced. As shown in FIG. 10 (b), overflow occurs at the 12th datum, and when the 13th new datum is created after the overflow, the rule is applied starting from the leftmost point of the time axis: the 1st originally stored datum is not replaced; the 2nd originally stored datum lies in the interval 1-2, which differs from the interval 2-3 of the 1st originally stored datum (i.e., the values of the first and second data do not lie in the same interval), so it is not replaced; the 3rd originally stored datum lies in the interval 2-3, which differs from the interval 1-2 of the 2nd datum, so it is not replaced; the 4th originally stored datum lies in the interval 3-4, which differs from the interval 2-3 of the 3rd datum, so it is not replaced; the 5th originally stored datum lies in the interval 3-4, the same as the interval 3-4 of the 4th datum (i.e., the values of the fifth and fourth data lie in the same interval), so it is deleted and replaced by the 13th new datum; and so on.
For another example, when the 14th new datum is created, the analysis proceeds as above: the 6th originally stored datum lies in the interval 4-5, which differs from the interval 3-4 of the 5th datum, so it is not replaced; the 7th originally stored datum lies in the interval 3-4, which differs from the interval 4-5 of the 6th datum, so it is not replaced; the 8th originally stored datum lies in the interval 3-4, the same as the interval 3-4 of the 7th datum, so it is deleted and replaced by the 14th new datum; and so on.
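The replacement rule of this change-rate strategy can be sketched as follows; the function name, the 0-based `start` index, and the floor-based bucketing of values into unit intervals are assumptions made for illustration:

```python
import math

def replace_low_change(stored, new_value, start=1, interval=1):
    """stored: list of numeric values; scan from index `start` and, when two
    adjacent values fall in the same interval (the value barely changed),
    delete the later one to make room for the newly created instance."""
    for i in range(start, len(stored)):
        same_bucket = (math.floor(stored[i] / interval)
                       == math.floor(stored[i - 1] / interval))
        if same_bucket:                 # adjacent values in the same interval
            del stored[i]               # delete the redundant sample
            stored.append(new_value)    # store the new instance at the tail
            return stored
    stored.append(new_value)            # no redundant sample found
    return stored

# 3.1 and 3.6 both lie in interval 3-4, so 3.6 is evicted for the new value
print(replace_low_change([2.5, 1.2, 2.8, 3.1, 3.6], 9.9))
# [2.5, 1.2, 2.8, 3.1, 9.9]
```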
For example, operation E to be performed after overflow may be set to partially preserve stored data in a differential manner.
FIG. 11 is a schematic diagram of an operation E to be performed after overflow in the overflow resource settings, according to an embodiment of the disclosure. As shown in FIG. 11, for the operation E 41 to be performed after overflow, different attribute values may be set so that the operation partially retains stored data in a difference manner. For example, an overflow reservation interval (overflowReservationInterval) 111 or the like may be set for this operation E 41. It will be appreciated that the above settings are merely illustrative and not limiting of what is claimed in this application, and that other suitable settings may be included in this application.
Fig. 12 (a)-12 (b) are schematic diagrams of an example of partially retaining stored data in a difference manner according to embodiments of the present disclosure. As shown in FIGs. 12 (a)-12 (b), assume that the maximum amount of data the service entity can store is set to 12, so overflow occurs when the 12th datum is created. Assume the overflow reservation interval 111 is set to 1. Then, as time goes by and counting from the leftmost point of the time axis, the 1st new datum added after overflow (the 13th datum) corresponds to deletion of the 2nd originally stored datum in the queue, the 2nd new datum (the 14th) corresponds to deletion of the 4th originally stored datum, the 3rd new datum (the 15th) corresponds to deletion of the 6th originally stored datum, and so on.
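The eviction pattern of this difference strategy can be sketched directly: with a reservation interval of n, the k-th new datum after overflow evicts the originally stored datum at 1-based position k·(n+1), i.e. the 2nd, 4th, 6th, ... for n = 1, thinning the old queue evenly. The function name is an assumption:

```python
def difference_evict_positions(num_new, reserve_interval=1):
    """Return the 1-based positions of originally stored data evicted by the
    first num_new data added after overflow, keeping reserve_interval old
    data between consecutive evictions."""
    step = reserve_interval + 1
    return [k * step for k in range(1, num_new + 1)]

# reservation interval 1: every other old datum is evicted
print(difference_evict_positions(3))  # [2, 4, 6]
```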
For example, the operation F to be performed after overflow may be set to replace the stored data with the newly uploaded data as described above. This is the same as the conventional method of directly replacing the originally stored data with the overflowing data, and is therefore not described further here.
Returning to FIG. 3, the overflow setting may further comprise an overflow early warning 43: when the stored data reaches an early-warning threshold, an early-warning notification is sent to the application entity; the notification is also sent to other application entities that have subscribed to the resource of the data in the service entity.
Fig. 13 is a schematic diagram of the early warning in the overflow resource settings according to an embodiment of the disclosure. As shown in FIG. 13, an early-warning threshold (warningLevel) 136, an early-warning object (notificationURI) 137, an early-warning category (warningCat) 138, and the like may be set for the overflow early warning 43. The early-warning object (notificationURI) 137 represents the object or target address to be warned. The early-warning category 138 represents the overflow category for which warning is required, as described above one or more of the maximum byte size (maxByteSize), the maximum number of instances (maxNrOfInstances), the maximum instance age (maxInstanceAge), and the like. The early-warning threshold 136 represents the predetermined threshold at which, once reached by the chosen category, an early-warning notification is issued. For example, the early-warning category may be set to the maximum byte size with a corresponding early-warning threshold of 75%; when the stored data has reached 75% of the maximum capacity, an early-warning notification is sent to the target address. It will be appreciated that the above settings are merely illustrative and not limiting of what is claimed in this application, and that other suitable settings may be included in this application.
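The threshold check can be sketched as a small function; the parameter names mirror the attributes above, and the `notify` callable is a hypothetical stand-in for the real notification transport:

```python
def check_overflow_warning(used_bytes, max_byte_size, warning_level,
                           notification_uris, notify):
    """When the monitored category (here maxByteSize usage) reaches
    warning_level (a fraction, e.g. 0.75) of the maximum, send a warning
    to every address in notification_uris and report True."""
    ratio = used_bytes / max_byte_size
    if ratio >= warning_level:                 # threshold reached
        for uri in notification_uris:
            notify(uri, f"storage at {ratio:.0%} of maxByteSize")
        return True
    return False

sent = []
check_overflow_warning(780, 1000, 0.75, ["//cse/ae1"],
                       lambda uri, msg: sent.append((uri, msg)))
print(sent)  # [('//cse/ae1', 'storage at 78% of maxByteSize')]
```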
In addition, besides the early-warning notification function provided in the overflow setting, the subscription-function resource of the service entity corresponding to other application entities may also include a function of sending early-warning notifications. In that case, when the subscribed attribute of the resource exceeds the specified early-warning threshold, an early-warning notification may likewise be sent to the target address, which is not described further here.
A method for data storage, which may be performed by the application entity 10, according to another embodiment of the present disclosure is described below with reference to fig. 14.
In step S201, data is uploaded to a service entity using an application entity.
In step S202, when there is an overflow of data, the stored data is replaced with the newly uploaded data according to a retention policy determined by the overflow setting of the service entity. As described above, when data overflow occurs, the corresponding retention policy is selected for the stored data according to the lock setting or the overflow setting, so that loss of data can be avoided.
Wherein the retention policy may include at least one of: local backup of the stored data, upload backup of the stored data, partial retention of the stored data by time point, partial retention of the stored data by change rate, partial retention of the stored data by difference mode, and replacement of the stored data with newly uploaded data. As these strategies have been described in detail above, they are not repeated here.
Further, the method may further comprise: receiving, when the stored data is about to overflow, an early-warning notification sent by the service entity.
In addition, other application entities can obtain required target data from the service entity, which comprises: when all the target data is stored on the service entity, the other application entities extract the target data directly from the service entity; when part of the target data is stored on the service entity and part has been backed up to other service entities, the other application entities extract the corresponding target data from the service entity and the other service entities respectively, or the service entity extracts from the other service entities the part of the target data it does not hold and then sends all the target data together to the other application entities; when all the target data is stored on other service entities, the other application entities extract the target data directly from those service entities, or the service entity extracts the target data from the other service entities and then sends it to the other application entities.
For example, when the service entity performs upload backup of the data stored for the application entity, other application entities may acquire the required target data from the service entity in two steps: (1) address list acquisition; (2) acquisition based on the addresses. Suppose a using entity wants historical data from a certain moment, for example on day 20 it wants to acquire the typhoon wind-force sampling data of days 1-8: the other application entity should first perform a discovery operation on the registration CSE corresponding to the application entity that uploaded the data (here the registration CSE may be an MN-CSE as described above, and is denoted MN-CSE by way of example). The operation information specifies the time interval of data generation, and the MN-CSE (registration CSE) then replies to the other application entity with a data storage address list, where the cases may be:
(1) All data are on the MN-CSE;
(2) Part of the data exists on the MN-CSE, and part of the data is already backed up on the storage CSE;
(3) All data has been backed up on the storage CSE;
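The dispatch over these three cases can be sketched as one lookup loop; the dict-based "CSE" stores and the `'mn'`/`'storing'` location tags are illustrative assumptions, not part of the protocol:

```python
def fetch_target_data(address_list, mn_cse, storing_cse):
    """address_list: dict mapping data id -> 'mn' or 'storing'.
    Extract each datum from the CSE named in the address list; when the
    list mixes both locations, the result merges data from both CSEs."""
    result = {}
    for data_id, location in address_list.items():
        source = mn_cse if location == "mn" else storing_cse
        result[data_id] = source[data_id]       # extract from the right CSE
    return result

mn = {"d1": 10}
storing = {"d2": 20, "d3": 30}
print(fetch_target_data({"d1": "mn", "d2": "storing", "d3": "storing"},
                        mn, storing))  # {'d1': 10, 'd2': 20, 'd3': 30}
```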
fig. 15-18 are schematic diagrams of examples of other application entities obtaining desired target data from a service entity according to embodiments of the present disclosure.
Fig. 15 is a schematic diagram of an example of other application entities acquiring required target data from the service entity when all target data is on the MN-CSE. As shown in FIG. 15, AE 1501 uploads data 1505 to MN-CSE 1502, and MN-CSE 1502 then creates a content instance 1506 for each request of AE 1501; when the data uploaded by AE 1501 reaches the maximum capacity storage limit of MN-CSE 1502, overflow 1507 of data may result. At this point, the MN-CSE 1502 uploads the stored data to another service entity (e.g., the storing CSE 1503 shown in FIG. 15) for data backup 1508 and deletes the data stored on the original MN-CSE 1502; afterwards there is spare capacity on the MN-CSE 1502 to store the data that AE 1501 continues to upload 1509, and content instances 1510 continue to be created for each request of AE 1501. When another application entity (AE) 1504 wants to acquire target data from the MN-CSE 1502, the AE 1504 first performs target data discovery 1512 on the MN-CSE 1502, thereby obtaining an address list of the required target data, from which the storage locations of the target data can be determined. In FIG. 15, the address list shows that the target data is stored entirely on the MN-CSE 1502, so the AE 1504 sends a target data acquisition request to the MN-CSE 1502, and the MN-CSE 1502 sends the required data to the AE 1504 in its response 1514 to that request, thereby completing the process by which the AE 1504 obtains the target data from the MN-CSE 1502.
Fig. 16 is a schematic diagram of an example of other application entities acquiring required target data from the service entity when part of the data exists on the MN-CSE and part has been backed up on the storing CSE. As shown in FIG. 16, AE 1601 uploads data 1605 to MN-CSE 1602, and MN-CSE 1602 then creates a content instance 1606 for each request of AE 1601; when the data uploaded by AE 1601 reaches the maximum capacity storage limit of MN-CSE 1602, overflow 1607 of data may result. At this point, the MN-CSE 1602 uploads the stored data to another service entity (e.g., the storing CSE 1603 shown in FIG. 16) for data backup 1608 and deletes the data stored on the original MN-CSE 1602; afterwards there is spare capacity on the MN-CSE 1602 to store the data that AE 1601 continues to upload 1609, and content instances 1610 continue to be created for each request of AE 1601. When another application entity (AE) 1604 wants to acquire target data from the MN-CSE 1602, the AE 1604 first performs target data discovery 1612 on the MN-CSE 1602, thereby obtaining an address list of the required target data, from which the storage locations of the target data can be determined. In FIG. 16, the address list shows that part of the required target data is stored on the MN-CSE 1602 and part has been backed up on the storing CSE 1603. The AE 1604 then sends a partial target data acquisition request 1613 to the MN-CSE 1602, and the MN-CSE 1602 in turn sends a partial target data acquisition request 1614 to the storing CSE 1603; the storing CSE 1603 feeds back the partial target data stored on it to the MN-CSE 1602 in its response 1615 to the request 1614, and the MN-CSE 1602 then sends the partial target data stored on it together with the partial target data fed back by the storing CSE 1603, in their entirety, to the AE 1604, thereby completing the process by which the AE 1604 obtains the target data from the MN-CSE 1602.
Fig. 17 is a schematic diagram of another example of acquiring required target data when part of the data exists on the MN-CSE and part has been backed up on the storing CSE. As shown in FIG. 17, AE 1701 uploads data 1705 to MN-CSE 1702, and MN-CSE 1702 then creates a content instance 1706 for each request of AE 1701; when the data uploaded by AE 1701 reaches the maximum capacity storage limit of MN-CSE 1702, overflow 1707 of data may result. At this point, the MN-CSE 1702 uploads the stored data to another service entity (e.g., the storing CSE 1703 shown in FIG. 17) for data backup 1708 and deletes the data stored on the original MN-CSE 1702; afterwards there is spare capacity on the MN-CSE 1702 to store the data that AE 1701 continues to upload 1709, and content instances 1710 continue to be created for each request of AE 1701. When another application entity (AE) 1704 wants to acquire target data from the MN-CSE 1702, the AE 1704 first performs target data discovery 1712 on the MN-CSE 1702, thereby obtaining an address list of the required target data, from which the storage locations of the target data can be determined. In FIG. 17, the address list shows that part of the required target data is stored on the MN-CSE 1702 and part has been backed up on the storing CSE 1703. The AE 1704 then sends a partial target data acquisition request 1714 to the MN-CSE 1702 and, at the same time, a partial target data acquisition request 1713 to the storing CSE 1703; the storing CSE 1703 feeds back the partial target data stored on it to the AE 1704 in its response 1715 to the request 1713, and the MN-CSE 1702 feeds back the partial target data stored on it to the AE 1704 in its response 1716 to the request 1714, thereby completing the process by which the AE 1704 obtains all the target data.
Fig. 18 is a schematic diagram of an example of acquiring required target data when all the data exists on the storing CSE. As shown in FIG. 18, AE 1801 uploads data 1805 to MN-CSE 1802, and MN-CSE 1802 then creates a content instance 1806 for each request of AE 1801; when the data uploaded by AE 1801 reaches the maximum capacity storage limit of MN-CSE 1802, overflow 1807 of data may result. At this point, the MN-CSE 1802 uploads the stored data to another service entity (e.g., the storing CSE 1803 shown in FIG. 18) for data backup 1808 and deletes the data stored on the original MN-CSE 1802; afterwards there is spare capacity on the MN-CSE 1802 to store the data that AE 1801 continues to upload 1809, and content instances 1810 continue to be created for each request of AE 1801. When another application entity (AE) 1804 wants to acquire target data from the MN-CSE 1802, the AE 1804 first performs target data discovery 1812 on the MN-CSE 1802, thereby obtaining an address list of the required target data, from which the storage locations of the target data can be determined. In FIG. 18, the address list shows that all the data exists on the storing CSE, so the AE 1804 sends a target data acquisition request 1813 to the storing CSE 1803, and the storing CSE 1803 feeds back all the target data stored on it to the AE 1804 in its response 1814 to the request 1813, thereby completing the process by which the AE 1804 obtains all the target data. Alternatively, as in FIG. 16, the MN-CSE 1802 may send a target data acquisition request to the storing CSE 1803; the storing CSE 1803 then feeds back the target data stored on it to the MN-CSE 1802 in response to that request, and the MN-CSE 1802 sends all the received target data to the AE 1804, thereby completing the process by which the AE 1804 obtains the target data from the MN-CSE 1802.
The different data storage methods described above can be applied in various fields. For example, when a driving recorder (dash cam) records data in real time, the data is usually stored locally, and because of the limited capacity of the memory card, new data often overwrites the original old data. In such a scenario, once an emergency such as sudden braking or a collision occurs, the video shortly before and after the event must be preserved; even when the memory card is full and resources are being overwritten, certain specific resources should be marked so that they cannot be overwritten by new image data and can still be retrieved and used in subsequent processing. Here, the storage method of the present disclosure as described above may be adopted so that the old data is not completely overwritten and can continue to be retrieved as needed. The steps of a method of using the data storage of the present disclosure for storing data recorded in real time by a driving recorder are described below.
1) Step one:
assuming that a vehicle collision occurs at time T1, a collision trigger is configured according to the storage setting (save 2 min before and after the collision time), in which:
overflowReserve attribute value: add the field (T1-2) min - (T1+2) min to the list-type data;
overflowReserveStatus attribute value: True, representing that an instance on the instance queue has been locked (a locked content instance will not be deleted after data overflows).
2) Step two:
when the memory card space overflows, although the original data is overwritten, newly created content instances will not overwrite the data before and after the collision time, because the locking policy is already in place.
3) Step three:
the data recording video of the collision is reviewed by retrieving the data instances of the collision period. Once these data instances are no longer needed, their locked state is released: the original field (T1-2) min - (T1+2) min is deleted from the list-type data of the overflowReserve attribute value, so that the data of this period loses its lock attribute, and when the memory card overflows again the data of this period can be deleted according to its creation time.
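The three steps above can be sketched as a small lock manager; the `Container` class, method names, and second-based timestamps are illustrative assumptions, while the `overflow_reserve`/`overflow_reserve_status` fields mirror the overflowReserve and overflowReserveStatus attributes of the walk-through:

```python
class Container:
    """Toy container resource tracking locked time intervals."""
    def __init__(self):
        self.overflow_reserve = []           # locked (start, end) pairs
        self.overflow_reserve_status = False

    def lock_collision(self, t1, margin=120):
        """Step one: lock [t1 - 2 min, t1 + 2 min] around a collision."""
        self.overflow_reserve.append((t1 - margin, t1 + margin))
        self.overflow_reserve_status = True

    def is_locked(self, t):
        """Step two: instances inside a locked interval survive overflow."""
        return any(s <= t <= e for s, e in self.overflow_reserve)

    def unlock(self, t1, margin=120):
        """Step three: delete the field so the period can be evicted again."""
        self.overflow_reserve.remove((t1 - margin, t1 + margin))
        self.overflow_reserve_status = bool(self.overflow_reserve)

c = Container()
c.lock_collision(1000)
print(c.is_locked(950), c.is_locked(2000))  # True False
c.unlock(1000)
print(c.overflow_reserve_status)  # False
```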
As another example, consider natural environment data records, such as records of extreme weather including rainstorms, typhoons, and tsunamis. Unlike ordinary smart-home sensor data detection, such data is bursty, voluminous, and rapidly changing, so a large amount of data may be uploaded in a short time; at the same time, the data is of great value for scientific research and application and is therefore of high importance. With the existing standard implementation, the scale and duration of a natural event cannot be anticipated because of its burstiness, making data capacity estimation difficult; once the data storage reaches its capacity limit, deleting old data to store new data risks losing the hidden value of important data.
According to the application, because of the importance of natural environment data, a full backup mode can be adopted. The steps of a method of using the data storage of the present disclosure for storing natural environment data are described below.
1) Step one: overflow arrangement (overflowCfg)
Overflow settings may be made for a corresponding service entity or content storage resource (container) of a corresponding application entity (e.g., sensor), wherein:
overflowStatus attribute value: set to False (the default value when no overflow has occurred), representing that no overflow has been found in the data at the present moment and the maximum resource limit set on the service entity still has a margin;
overflowCat attribute value: set to maxNrOfInstances; since the per-sample data size is essentially fixed by the sensor attributes, it is the number of instances that surges when an emergency is handled;
overflowCtrl attribute value: set to True, representing that once an overflow event occurs, the post-overflow retention policy will be executed;
overflowOp attribute value: set to B, representing that if overflow occurs, the stored data is backed up by uploading, where the root directory location (overflowUpLoadPosition) attribute value of the upload resource is set to, for example, address AAA.
2) Step two: implementing reservation policy in emergency situations
When overflow occurs (t=time1):
the overflowStatus attribute value of the resource overflowCfg changes from False to True, as the first attribute value to change;
and, since overflowOp = B, an upload resource backup is performed: the data in the parent resource corresponding to the resource overflowCfg is uploaded to address AAA, time1 is added to overflowUpLoadTimePoint, and the backup resource address is added to overflowUpLoadList.
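Steps one and two above can be sketched as a plain dict plus an overflow handler; the attribute names mirror the walk-through, while the dict representation and the `upload` callable are hypothetical stand-ins for the real resource and transport:

```python
overflow_cfg = {
    "overflowStatus": False,       # False: no overflow yet (default)
    "overflowCat": "maxNrOfInstances",
    "overflowCtrl": True,          # run the retention policy on overflow
    "overflowOp": "B",             # B: upload backup
    "overflowUpLoadPosition": "AAA",
    "overflowUpLoadTimePoint": [],
    "overflowUpLoadList": [],
}

def on_overflow(cfg, data, now, upload):
    cfg["overflowStatus"] = True   # first attribute value to change
    if cfg["overflowCtrl"] and cfg["overflowOp"] == "B":
        # upload the parent resource's data to the configured address,
        # then record the time of the backup and the backup address
        backup_addr = upload(cfg["overflowUpLoadPosition"], data)
        cfg["overflowUpLoadTimePoint"].append(now)
        cfg["overflowUpLoadList"].append(backup_addr)

on_overflow(overflow_cfg, [1, 2, 3], "time1",
            lambda root, d: f"{root}/backup0")
print(overflow_cfg["overflowStatus"], overflow_cfg["overflowUpLoadList"])
# True ['AAA/backup0']
```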
3) Step three: historical data extraction and utilization
As described above, when the service entity performs upload backup of the data stored for the application entity, other application entities may acquire the required target data from the service entity in two steps: (1) address list acquisition; (2) acquisition based on the addresses. Suppose a using entity wants historical data from a certain moment, for example on day 20 it wants to acquire the typhoon wind-force sampling data of days 1-8: the other application entity should first perform a discovery operation on the MN-CSE (registration CSE) corresponding to the application entity that uploaded the data, where the operation information specifies the time interval of data generation; the MN-CSE (registration CSE) then replies to the other application entity with a data storage address list, where the cases may be: all data is on the MN-CSE; part of the data exists on the MN-CSE and part has been backed up on the storing CSE; or all data has been backed up on the storing CSE.
As analyzed above, there are different policies for the different cases: when all the target data is stored on the service entity, the other application entities extract the target data directly from the service entity; when part of the target data is stored on the service entity and part has been backed up to other service entities, the other application entities extract the corresponding target data from the service entity and the other service entities respectively, or the service entity extracts from the other service entities the part of the target data it does not hold and then sends all the target data together to the other application entities; when all the target data is stored on other service entities, the other application entities extract the target data directly from those service entities, or the service entity extracts the target data from the other service entities and then sends it to the other application entities. The various strategies above have been described in detail with reference to FIGs. 15-18 and are therefore not repeated here.
As described above, the present disclosure provides a method for data storage. FIG. 19 depicts a flow chart of a method for data storage according to yet another embodiment of the present disclosure. As depicted in fig. 19, the method for data storage of the present disclosure includes: uploading data to a service entity using an application entity (S301); receiving data transmitted by an application entity using a service entity (S302); when there is an overflow of data, the service entity selects a corresponding reservation policy for the stored data according to a lock setting or an overflow setting (S303); and storing part or all of the overflow data by the service entity according to the reservation policy (S304).
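The four steps S301-S304 can be sketched end to end as a toy model: the application entity uploads, the service entity receives, a policy is chosen from the overflow setting, and part or all of the overflowing data is stored accordingly. Every name here is an assumption; only the plain-replacement policy (operation F above) is modeled, with other policies left to their own handlers:

```python
def data_storage_flow(uploads, capacity, overflow_setting):
    """uploads: data items from the AE; capacity: max items the service
    entity can hold; overflow_setting: dict with an 'overflowOp' code."""
    stored = []
    for item in uploads:                       # S301/S302: upload and receive
        if len(stored) >= capacity:            # S303: overflow -> pick policy
            if overflow_setting.get("overflowOp") == "F":
                stored.pop(0)                  # S304: replace the oldest datum
            else:
                break                          # other policies handled elsewhere
        stored.append(item)
    return stored

print(data_storage_flow([1, 2, 3, 4], 3, {"overflowOp": "F"}))  # [2, 3, 4]
```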
An apparatus 1900 for data storage according to an embodiment of the present disclosure is described below with reference to fig. 20. Fig. 20 is a schematic diagram of an apparatus for data storage according to an embodiment of the present disclosure. Since the function of the apparatus for data storage of the present embodiment is the same as the details of the method described above with reference to fig. 2, a detailed description of the same is omitted here for simplicity.
As shown in fig. 20, the apparatus 1900 for data storage includes a receiving unit 1901, a selecting unit 1902, and a storage unit 1903. It should be noted that although the apparatus 1900 is shown in fig. 20 as including only three units, this is merely illustrative; the apparatus 1900 may include one or more other units, which are unrelated to the inventive concept and are therefore omitted here.
The apparatus 1900 for data storage of the present disclosure comprises: a receiving unit 1901 for receiving, by the service entity, data transmitted by an application entity; a selecting unit 1902 for selecting, when there is an overflow of the data, a corresponding retention policy for the stored data according to a lock setting or an overflow setting; and a storage unit 1903 for storing part or all of the overflow data according to the retention policy.
The selection unit 1902 selects at least one of the following retention policies: local backup of stored data, uploading backup of stored data, partial reservation of stored data according to time points, partial reservation of stored data according to change rates, partial reservation of stored data according to difference modes, and replacement of stored data with newly uploaded data.
Wherein the overflow arrangement further comprises: and when the stored data reach the early warning threshold value, sending an early warning notice to the application entity, wherein the resources containing the subscription function of the service entity corresponding to other application entities also comprise the function of sending the early warning notice.
Wherein the overflow setting includes whether to lock the data, and the lock setting includes locking the stored data by a locking time period or by a locking data change rate: when data overflow occurs, stored data generated within the preset locking time period, or stored data meeting the preset locking data change rate, is locked and will not be deleted.
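The two locking criteria can be sketched as an eviction filter. This is an illustrative sketch under assumed semantics (names like `Record`, `is_locked`, and the "rate at least the threshold" comparison are interpretations, not the patent's definitions):

```python
from dataclasses import dataclass

@dataclass
class Record:
    timestamp: float    # seconds since epoch when the record was stored
    change_rate: float  # rate of change relative to the previous record

def is_locked(record, now, lock_period=None, lock_change_rate=None):
    """True if the record may not be deleted when the storage overflows."""
    if lock_period is not None and now - record.timestamp <= lock_period:
        return True   # generated within the preset locking time period
    if lock_change_rate is not None and record.change_rate >= lock_change_rate:
        return True   # meets the preset locking data change rate
    return False

def evict_on_overflow(records, now, **lock_args):
    """On overflow, keep only locked records; unlocked ones may be deleted."""
    return [r for r in records if is_locked(r, now, **lock_args)]
```

Locking by change rate keeps records that captured rapid variation in the measured quantity, which are often the ones worth preserving when space runs out.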
Wherein the overflow setting further comprises setting a specific overflow category.
Wherein the overflow setting further comprises setting an overflow state indicating whether overflow has occurred.
Wherein the overflow setting further comprises setting whether a related operation is performed after overflow.
Wherein the overflow setting further includes setting an operation to be performed after the overflow.
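Taken together, the fields the disclosure attaches to an overflow setting could be modeled as a simple record. All attribute names here are hypothetical; the text names only the concepts (lock flag, overflow category, overflow state, post-overflow operation, early-warning threshold), not concrete field names.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OverflowSetting:
    """Illustrative model of an overflow setting; field names are assumptions."""
    lock_data: bool = False                  # whether to lock stored data
    overflow_category: Optional[str] = None  # e.g. maximum bit value, maximum
                                             # number of instances, maximum
                                             # instance lifetime (per claim 6)
    overflow_occurred: bool = False          # overflow state
    has_post_overflow_op: bool = False       # whether an operation follows overflow
    post_overflow_op: Optional[str] = None   # the operation to perform, if any
    warning_threshold: Optional[int] = None  # triggers the early-warning notification
```

A service entity would consult such a record when an upload arrives to decide whether to warn, lock, or apply a retention policy.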
An apparatus 2000 for data storage according to another embodiment of the present disclosure is described below with reference to fig. 21. Fig. 21 is a schematic diagram of an apparatus for data storage according to another embodiment of the present disclosure. Since this apparatus performs the same functions as the method described above with reference to fig. 14, a detailed description of the shared details is omitted here for simplicity.
As shown in fig. 21, the apparatus 2000 for data storage includes a transceiving unit 2001 and a storage unit 2002. It should be noted that although the apparatus 2000 for data storage is shown in fig. 21 as including only two units, this is merely illustrative; the apparatus 2000 may include one or more other units, which are not related to the inventive concept and are therefore omitted here.
The apparatus 2000 for data storage of the present disclosure includes a transceiving unit 2001 configured to upload, via an application entity, data to a service entity, and a storage unit 2002 configured to replace, when data overflow occurs, the stored data with newly uploaded data according to a retention policy determined by an overflow setting of the service entity. In addition, the transceiving unit 2001 may further receive an early-warning notification sent by the service entity when the stored data reaches an early-warning threshold.
Wherein the retention policy comprises at least one of: locally backing up the stored data, uploading a backup of the stored data, partially retaining the stored data by time point, partially retaining the stored data by change rate, partially retaining the stored data by difference mode, and replacing the stored data with newly uploaded data.
Wherein the overflow setting comprises: receiving an early-warning notification sent by the service entity when the stored data reaches the early-warning threshold.
Wherein other application entities may obtain required target data from the service entity, which comprises: when all of the target data is stored on the service entity, the other application entities extract the target data directly from the service entity; when part of the target data is stored on the service entity and part is backed up to other service entities, the other application entities extract the corresponding target data from the service entity and the other service entities respectively, or the service entity extracts from the other service entities the part of the target data it does not hold and then sends all of the target data to the other application entities together; when all of the target data is stored on other service entities, the other application entities extract the target data directly from the other service entities, or the service entity extracts the target data from the other service entities and then sends it to the other application entities.
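The second retrieval variant above — the service entity filling in what it lacks from backups and returning everything together — can be sketched as follows. Entities are modeled as plain dicts purely for illustration; the function name and signature are assumptions.

```python
def fetch_target_data(service, backups, targets):
    """Gather target data that may be split between a service entity and
    backup service entities, returning it to the requester in one batch.

    `service` and each element of `backups` map data identifiers to values.
    """
    # Take whatever the service entity itself holds.
    found = {t: service[t] for t in targets if t in service}
    missing = [t for t in targets if t not in found]
    # Fetch the remainder from the backup service entities.
    for backup in backups:
        for t in list(missing):
            if t in backup:
                found[t] = backup[t]  # fetched from a backup service entity
                missing.remove(t)
    return found
```

The first and third variants are degenerate cases of the same flow: either `missing` starts empty, or `found` does.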
According to still another aspect of the present disclosure, there is provided a computer-readable storage medium storing a computer-readable program that causes a computer to execute the method for data storage of the above aspect of the present disclosure.
Those skilled in the art will appreciate that the various aspects of the invention may be illustrated and described in terms of several patentable categories or circumstances, including any novel and useful process, machine, product, or material, or any novel and useful improvement thereof. Accordingly, aspects of the present application may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present application may take the form of a computer program product embodied in one or more computer-readable media and comprising computer-readable program code.
This application uses specific words to describe embodiments of the application. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is included in at least one embodiment of the present application. Thus, it should be emphasized and appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places in this specification do not necessarily refer to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the present application may be combined as appropriate.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing is illustrative of the present disclosure and is not to be construed as limiting thereof. Although a few exemplary embodiments of this disclosure have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this disclosure. Accordingly, all such modifications are intended to be included within the scope of this disclosure as defined in the claims. It is to be understood that the foregoing is illustrative of the present disclosure and is not to be construed as limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The disclosure is defined by the claims and their equivalents.

Claims (14)

1. A method for data storage, comprising:
receiving, by a service entity, data sent by an application entity;
when data overflow occurs, selecting a corresponding retention policy for the stored data according to a lock setting or an overflow setting; and
storing part or all of the overflow data according to the retention policy;
wherein the lock setting comprises locking the stored data according to a locking time period or a locking data change rate, wherein when data overflow occurs, stored data generated within a preset locking time period or stored data meeting the preset locking data change rate is locked.
2. The method of claim 1, wherein selecting a corresponding retention policy for the stored data based on overflow settings comprises:
selecting at least one of the following retention policies for the stored data: locally backing up the stored data, uploading a backup of the stored data, partially retaining the stored data by time point, partially retaining the stored data by change rate, partially retaining the stored data by difference mode, and replacing the stored data with newly uploaded data.
3. The method of claim 1, wherein the overflow setting comprises:
and when the stored data reaches the early warning threshold, sending an early warning notification to the application entity.
4. The method of claim 1, wherein the overflow setting includes whether to lock the stored data.
5. The method of any of claims 1-4, wherein the overflow setting further comprises setting a specific overflow category.
6. The method of claim 5, wherein the overflow category comprises one or more of a maximum bit value, a maximum number of instances, a maximum instance lifetime.
7. The method of any of claims 1-4, wherein the overflow setting further comprises setting an overflow condition for whether overflow occurs.
8. The method of any of claims 1-4, wherein the overflow setting further comprises setting whether there is an associated operation after overflow.
9. The method of any of claims 2-4, wherein the overflow setting further comprises setting an operation to be performed after overflow, the setting corresponding to at least one of the retention policies.
10. The method of claim 3, wherein when the stored data reaches the early warning threshold, an early warning notification is issued to other application entities subscribed to the resource of the data in the service entity.
11. A method for data storage, comprising:
uploading, by an application entity, data to a service entity;
when data overflow occurs, replacing the stored data with the newly uploaded data according to a retention policy determined by a lock setting or an overflow setting of the service entity;
wherein other application entities may obtain required target data from the service entity, which comprises:
when all of the target data is stored on the service entity, the other application entities extracting the target data directly from the service entity;
when part of the target data is stored on the service entity and part is backed up to other service entities, the other application entities extracting the corresponding target data from the service entity and the other service entities respectively, or the service entity extracting from the other service entities the part of the target data it does not hold and then sending all of the target data to the other application entities together;
when all of the target data is stored on other service entities, the other application entities extracting the target data directly from the other service entities, or the service entity extracting the target data from the other service entities and then sending it to the other application entities.
12. The method of claim 11, wherein the retention policy comprises at least one of:
locally backing up the stored data, uploading a backup of the stored data, partially retaining the stored data by time point, partially retaining the stored data by change rate, partially retaining the stored data by difference mode, and replacing the stored data with newly uploaded data.
13. The method of claim 11, wherein the overflow setting comprises:
and when the stored data reaches the early warning threshold, receiving an early warning notification sent by the service entity.
14. A computer-readable storage medium storing a computer-readable program that causes a computer to perform the method for data storage according to any one of claims 1-13.
CN201811436152.0A 2018-11-28 2018-11-28 Method and device for data storage Active CN111240579B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201811436152.0A CN111240579B (en) 2018-11-28 2018-11-28 Method and device for data storage
PCT/CN2019/121611 WO2020108563A1 (en) 2018-11-28 2019-11-28 Data storage method, general service entity device, and storage medium
US17/296,628 US11747991B2 (en) 2018-11-28 2019-11-28 Method for data storage, general service entity device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811436152.0A CN111240579B (en) 2018-11-28 2018-11-28 Method and device for data storage

Publications (2)

Publication Number Publication Date
CN111240579A CN111240579A (en) 2020-06-05
CN111240579B true CN111240579B (en) 2024-03-19

Family

ID=70854352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811436152.0A Active CN111240579B (en) 2018-11-28 2018-11-28 Method and device for data storage

Country Status (3)

Country Link
US (1) US11747991B2 (en)
CN (1) CN111240579B (en)
WO (1) WO2020108563A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112130771B (en) * 2020-09-27 2023-05-16 广州市优仪科技股份有限公司 Storage control method and device for test box, electronic equipment and storage medium
KR20220154596A (en) * 2021-05-13 2022-11-22 현대자동차주식회사 Method and apparatus for trnsferring large amount of data in machine to machine system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106155595A (en) * 2016-08-01 2016-11-23 乐视控股(北京)有限公司 The storage optimization method of memory and system
CN106909319A (en) * 2017-02-17 2017-06-30 武汉盛信鸿通科技有限公司 A kind of Hadoop framework and scheduling strategy based on virtual memory disk
JP2017147585A (en) * 2016-02-17 2017-08-24 セイコーエプソン株式会社 Data transfer system, data transfer device, receiver, and data transfer method
WO2017218248A1 (en) * 2016-06-14 2017-12-21 Microsoft Technology Licensing, Llc Secure removal of sensitive data

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4432870B2 (en) * 2005-10-04 2010-03-17 ソニー株式会社 RECORDING DEVICE, RECORDING MEDIUM MANAGEMENT METHOD, RECORDING MEDIUM MANAGEMENT METHOD PROGRAM, AND RECORDING MEDIUM MANAGEMENT METHOD PROGRAM
CN100524244C (en) * 2006-12-14 2009-08-05 英业达股份有限公司 Method for early warning insufficiency of memory space of network memory system
US10740353B2 (en) * 2010-12-23 2020-08-11 Mongodb, Inc. Systems and methods for managing distributed database deployments
CN104243425B (en) 2013-06-19 2018-09-04 深圳市腾讯计算机系统有限公司 A kind of method, apparatus and system carrying out Content Management in content distributing network
KR102611638B1 (en) 2016-09-27 2023-12-08 삼성전자주식회사 Method of operating storage device and data storage system including storage device
US10268413B2 (en) * 2017-01-27 2019-04-23 Samsung Electronics Co., Ltd. Overflow region memory management

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017147585A (en) * 2016-02-17 2017-08-24 セイコーエプソン株式会社 Data transfer system, data transfer device, receiver, and data transfer method
WO2017218248A1 (en) * 2016-06-14 2017-12-21 Microsoft Technology Licensing, Llc Secure removal of sensitive data
CN106155595A (en) * 2016-08-01 2016-11-23 乐视控股(北京)有限公司 The storage optimization method of memory and system
CN106909319A (en) * 2017-02-17 2017-06-30 武汉盛信鸿通科技有限公司 A kind of Hadoop framework and scheduling strategy based on virtual memory disk

Also Published As

Publication number Publication date
US11747991B2 (en) 2023-09-05
WO2020108563A1 (en) 2020-06-04
CN111240579A (en) 2020-06-05
US20220019354A1 (en) 2022-01-20

Similar Documents

Publication Publication Date Title
CN103167026B (en) A kind of cloud store environmental data processing method, system and equipment
CN111240579B (en) Method and device for data storage
CN104980343A (en) Sharing method and system of road condition information, automobile data recorder, and cloud server
CN101470645B (en) High-speed cache data recovery method and apparatus
CN108268211B (en) Data processing method and device
CN110941393A (en) Logical volume management-based LV supply method, device, equipment and medium
KR102572702B1 (en) Method and apparatus for managing storage space in an electronic device using context data and user profile data
US20140280387A1 (en) System and method for expanding storage space of network device
JP2020107348A (en) Data management method and data management system for memory device
CN103441910A (en) Device in home network and method for executing commands in home network
CN104615691A (en) Mobile terminal and data storage method
CN103763496A (en) Method and device for prerecording video
CN113022484A (en) Storage battery monitoring method and device, vehicle and computer storage medium
KR20090116595A (en) Method and apparatus for managing binding information on bundles installed into osgi service platform remotely
CN102609532A (en) Method and device for monitoring file directory
CN105824827A (en) File path storage and local file visiting method and apparatus
CN101753944B (en) Method and device for video management of video monitoring system
KR20180126604A (en) A method, apparatus and computer program for managing a storage area of a control unit of a vehicle
DE102015108724B4 (en) Method and device for managing temporary content on a mobile device
CN103702353B (en) Fault Locating Method and system, and access point and Network Management Equipment
US20140089267A1 (en) Information processing apparatus, information processing method, and program
CN116614532A (en) Vehicle information management method, system and computer storage medium
CN115984677A (en) Intelligent data analysis method, device and system
CN110750411B (en) Method and device for monitoring, early warning and repairing file index node
EP2622482A1 (en) System for configurable reporting of network data and related method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant