CN112698789B - Data caching method, device, equipment and storage medium - Google Patents

Data caching method, device, equipment and storage medium

Info

Publication number
CN112698789B
CN112698789B (application CN202011607295.0A)
Authority
CN
China
Prior art keywords
data
incremental
metadata
queue
increment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011607295.0A
Other languages
Chinese (zh)
Other versions
CN112698789A (en)
Inventor
邱龙金
赵勇
王子骏
马立珂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Dingjia Computer Technology Co ltd
Original Assignee
Guangzhou Dingjia Computer Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Dingjia Computer Technology Co ltd filed Critical Guangzhou Dingjia Computer Technology Co ltd
Priority to CN202011607295.0A priority Critical patent/CN112698789B/en
Publication of CN112698789A publication Critical patent/CN112698789A/en
Application granted granted Critical
Publication of CN112698789B publication Critical patent/CN112698789B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0614 Improving the reliability of storage systems
    • G06F 3/0619 Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/14 Error detection or correction of the data by redundancy in operation
    • G06F 11/1402 Saving, restoring, recovering or retrying
    • G06F 11/1446 Point-in-time backing up or restoration of persistent data
    • G06F 11/1458 Management of the backup or restore process
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0656 Data buffering arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Quality & Reliability (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to a data caching method, apparatus, device, and storage medium. The method comprises: while backing up the backup data in a disk file to a backup server, adding each generated incremental data packet to an incremental sending queue, where an incremental data packet represents changed data of the backup data; if the data volume of the incremental sending queue reaches or exceeds a preset first data volume threshold, acquiring and storing change parameters of the backup data, where the change parameters describe the changed data in the backup data; and, once the incremental sending queue has free space, acquiring the corresponding target incremental data packets according to the change parameters and adding them to the incremental sending queue. The technical scheme provided by the embodiments of the application can improve the success rate of real-time data backup.

Description

Data caching method, device, equipment and storage medium
Technical Field
The present application relates to the field of network technologies, and in particular, to a data caching method, apparatus, device, and storage medium.
Background
During real-time data backup, the production-end host sends changed data to the backup server for real-time synchronization. When the host generates changed data faster than it can send the data to the backup server, it must cache the changed data locally.
For local caching, conventional methods typically use local disk space or an in-memory buffer queue as the buffer area and cache the changed data there.
However, with this approach, when changed data is generated faster than it can be sent, the buffer eventually overflows. The real-time backup process is then interrupted, creating a risk of data loss and reducing the success rate of real-time backup.
Disclosure of Invention
Based on this, embodiments of the present application provide a data caching method, apparatus, device, and storage medium that can improve the success rate of real-time data backup.
In a first aspect, a data caching method is provided, and the method includes:
while backing up the backup data in a disk file to a backup server, adding each generated incremental data packet to an incremental sending queue, the incremental data packet representing changed data of the backup data; if the data volume of the incremental sending queue is greater than or equal to a preset first data volume threshold, acquiring and storing change parameters of the backup data, the change parameters describing the changed data in the backup data; and, when the incremental sending queue has free space, acquiring a corresponding target incremental data packet according to the change parameters and adding the target incremental data packet to the incremental sending queue.
In one embodiment, the change parameters include metadata and a change data bitmap, and acquiring and storing the change parameters of the backup data comprises:
acquiring the metadata from a write input/output packet of the disk and storing the metadata in an incremental metadata queue; and, if the data volume of the incremental metadata queue is greater than or equal to a preset second data volume threshold, generating a change data bitmap according to the changed data of the backup data and storing the change data bitmap in a memory mapping file.
In one embodiment, acquiring a corresponding target incremental data packet according to the change parameters and adding the target incremental data packet to the incremental sending queue includes:
acquiring the target incremental data packet from the corresponding backup data according to the metadata, and adding the target incremental data packet to the incremental sending queue, the metadata comprising a timestamp of the input/output packet write, a disk name, an offset in the disk, and the length of the changed data; and, if the data volume of the incremental metadata queue is greater than or equal to a preset second data volume threshold, acquiring the target incremental data packet according to the change data bitmap and adding the target incremental data packet to the incremental sending queue.
In one embodiment, acquiring a target incremental data packet according to a change data bitmap, and adding the target incremental data packet to an incremental transmission queue, includes:
and, when the increment metadata queue is empty, calling a CBT (Changed Block Tracking) processing thread to acquire the target increment data packet from the backup data according to the effective data bits of the change data bitmap, and adding the target increment data packet to the increment sending queue.
In one embodiment, the method further includes:
if the data volume of the incremental metadata queue is smaller than the second data volume threshold, acquiring new metadata and storing the new metadata in a temporary cache queue, the new metadata being generated from the backup data while the CBT processing thread executes; and, after the CBT processing thread finishes executing, adding the new metadata in the temporary cache queue to the increment metadata queue.
In one embodiment, the method further includes:
and, after the CBT processing thread finishes executing, releasing the storage resource corresponding to the memory mapping file.
In one embodiment, the method further includes:
and if the data volume of the increment sending queue is smaller than the first data volume threshold value and the increment metadata queue is empty, adding the currently acquired increment data packet into the increment sending queue.
In one embodiment, the method further includes:
and releasing the storage resource corresponding to the increment metadata queue.
In one embodiment, the method further includes:
and if all the changed data of the backup data are synchronized to the backup server, releasing the storage resources corresponding to the incremental sending queue.
In a second aspect, a data caching apparatus is provided, the apparatus comprising:
the first adding module is used for adding the generated incremental data packet into the incremental sending queue in the process of backing up the backup data in the disk file to the backup server; the incremental data packet represents the changed data of the backup data;
the storage module is used for acquiring and storing the change parameters of the backup data if the data volume of the incremental sending queue is greater than or equal to a preset first data volume threshold; the change parameters are used for describing the change data in the backup data;
and the second adding module is used for acquiring a corresponding target incremental data packet according to the change parameter under the condition that the incremental sending queue has the free space, and adding the target incremental data packet into the incremental sending queue.
In a third aspect, a computer device is provided, comprising a memory and a processor, the memory storing a computer program, the computer program, when executed by the processor, implementing the method steps in any of the embodiments of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, is adapted to carry out the method steps of any of the embodiments of the first aspect described above.
According to the data caching method, apparatus, device, and storage medium, generated incremental data packets are added to an incremental sending queue while the backup data in a disk file is backed up to a backup server; if the data volume of the incremental sending queue is greater than or equal to a preset first data volume threshold, change parameters of the backup data are acquired and stored; and, when the incremental sending queue has free space, the corresponding target incremental data packets are acquired according to the change parameters and added to the incremental sending queue. Because multiple cache modes are switched automatically during real-time data synchronization, the cache mode can be adjusted according to the amount of cached data, so the memory buffer cannot overflow and the real-time synchronization task runs continuously without interruption. This fundamentally eliminates the risk of data loss caused by memory-buffer overflow and improves the success rate of real-time data backup.
Drawings
FIG. 1 is a diagram of an application environment according to an embodiment of the present application;
FIG. 2 is a flowchart of a data caching method according to an embodiment of the present application;
FIG. 3 is a flowchart of a data caching method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an effective area of a disk according to an embodiment of the present application;
FIG. 5 is a flowchart of a data caching method according to an embodiment of the present application;
FIG. 6 is a flowchart of a data caching method according to an embodiment of the present application;
FIG. 7 is a flowchart of a data caching method according to an embodiment of the present application;
FIG. 8 is a block diagram of a data caching apparatus according to an embodiment of the present application;
FIG. 9 is a block diagram of a data caching apparatus according to an embodiment of the present application;
FIG. 10 is a block diagram of a data caching apparatus according to an embodiment of the present application;
FIG. 11 is a block diagram of a data caching apparatus according to an embodiment of the present application;
FIG. 12 is a block diagram of a data caching apparatus according to an embodiment of the present application;
FIG. 13 is a block diagram of a data caching apparatus according to an embodiment of the present application;
FIG. 14 is a block diagram of a data caching apparatus according to an embodiment of the present application;
FIG. 15 is a block diagram of a data caching apparatus according to an embodiment of the present application;
FIG. 16 is a block diagram of a source-side production host according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The data caching method provided by the application can be applied to the application environment shown in FIG. 1, where the source production host 102 communicates with the backup server 104. The source production host 102 is a device whose data needs to be backed up, and the backup server 104 is a server that backs up the data on the source production host 102. The source production host 102 establishes a connection with the backup server 104, performs node registration, reports disk and memory information, receives task control information, and notifies the kernel to respond to server control commands; the kernel disk filter driver in the source production host 102 monitors the system's IO changes and establishes a connection with the backup server. The source production host 102 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, or portable wearable device. The backup server 104 includes a database, a Web console, and a server program: the database stores important configuration data, the Web console provides the human-computer interaction interface, and the server program controls the actual real-time task control and data synchronization process. The backup server 104 may be a single server or a server cluster composed of multiple servers, which is not specifically limited in the embodiments of the present application.
The technical solutions of the present application, and how they solve the above technical problems, are described in detail below with reference to the drawings. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. It should be noted that in the data caching method provided by the present application, the execution subject in FIGS. 2 to 7 is the source production host; the execution subject may also be a data caching apparatus, which may be implemented as part or all of the source production host by software, hardware, or a combination of both.
In an embodiment, FIG. 2 shows a flowchart of a data caching method provided in an embodiment of the present application; the method may include the following steps:
step 201, in the process of backing up the backup data in the disk file to a backup server, adding the generated incremental data packet to an incremental sending queue; the incremental data packets represent the changed data of the backup data.
The backup server is the server that backs up the data on the source production host, and an incremental data packet represents changed data of the backup data. While the backup data in the disk file is being backed up to the backup server, each incremental data packet generated by the source production host is added to the incremental sending queue. The source production host is a host whose data needs backing up; it synchronizes that data to the backup server to realize the backup. Before backup begins, a real-time synchronization agent can be installed on the source production host and configured with the backup server's IP address, port, and related information; the agent then connects to the backup server automatically, and the source production host uploads its disk list and total disk size to the backup server.
Optionally, the disk filter driver in the source production host intercepts the changed data to be written to the disk file. The driver monitors the disk file's Input/Output (IO): when it intercepts a write IO request from an upper layer, data on the source production host is being written and the data in the disk file may change. By intercepting the IO packet, the driver extracts the changed data to be written to the disk file, builds an incremental data packet from it, and adds the packet to the incremental sending queue to cache the changed data. An incremental data packet may include the timestamp of the input/output packet write, the disk name data_dev, the offset in the disk data_offset, the length of the changed data data_length, and the changed data data_buf, so the increment record packet format may be <timestamp, data_dev, data_offset, data_length, data_buf>. The incremental sending queue is a queue in a buffer that the disk filter driver creates in memory. This process is the full cache mode, which may be set as the default cache mode for real-time synchronization from the source production host to the backup server.
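The packet layout and the full-cache enqueue described above can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation: the class names and the threshold parameter are hypothetical, while the fields mirror the <timestamp, data_dev, data_offset, data_length, data_buf> format.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class IncrementPacket:
    timestamp: float   # time the write IO was intercepted
    data_dev: str      # disk name
    data_offset: int   # offset of the write within the disk
    data_length: int   # length of the changed data
    data_buf: bytes    # the changed data itself


class IncrementSendQueue:
    """Full cache mode: buffer whole change-data packets for sending."""

    def __init__(self, first_threshold_bytes: int):
        self.packets = deque()
        self.cached_bytes = 0
        self.first_threshold = first_threshold_bytes

    def add(self, pkt: IncrementPacket) -> bool:
        """Enqueue a packet; return False once the first data volume
        threshold is reached, signalling a switch to a lighter mode."""
        self.packets.append(pkt)
        self.cached_bytes += pkt.data_length
        return self.cached_bytes < self.first_threshold
```

The boolean return models the mode-switch decision: as long as the queue is under the first threshold, full packets keep being cached.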
Step 202, if the data volume of the incremental sending queue is greater than or equal to a preset first data volume threshold, acquiring and storing a change parameter of backup data; the change parameters are used to describe changed data in the backup data.
When the data volume of the incremental sending queue is greater than or equal to the preset first data volume threshold, the source production host is generating data faster than it can send it to the backup server. To avoid buffer overflow, when the disk filter driver now intercepts an upper-layer write IO request, it extracts only the change parameters of the backup data from the write IO packet and does not extract the corresponding changed data. A change parameter describes the changed data in the backup data. It may be metadata, which can include the offset, length, timestamp, and disk name of the changed data written to the disk; it may also be a change data bitmap generated according to how the disk file data changed; other forms are possible as well, and this application does not specifically limit them. After the change parameters of the backup data are extracted, they need to be stored so that the source production host can later obtain the corresponding incremental data packets from the backup data according to the change parameters.
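The switch from caching whole packets to recording only metadata can be sketched as below. This is an illustrative Python sketch with hypothetical names; the dict-based IO packet and the threshold argument are assumptions, not the patent's actual data structures.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class WriteMetadata:
    """Parameters describing changed data, without the data itself."""
    timestamp: float
    data_dev: str
    data_offset: int
    data_length: int


def on_write_io(send_queue, send_bytes, metadata_queue, write_io, first_threshold):
    """Dispatch one intercepted write IO packet.

    While the send queue is under the first data volume threshold, cache
    the full packet (full cache mode); otherwise record only metadata
    (step 202). Returns the updated send-queue byte count."""
    if send_bytes < first_threshold:
        send_queue.append(write_io)                       # full cache mode
        return send_bytes + write_io["data_length"]
    metadata_queue.append(WriteMetadata(                  # metadata only
        write_io["timestamp"], write_io["data_dev"],
        write_io["data_offset"], write_io["data_length"]))
    return send_bytes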
Step 203, when the increment sending queue has free space, acquiring the corresponding target increment data packet according to the change parameters, and adding the target increment data packet to the increment sending queue.
When the incremental data cached in the incremental sending queue has been synchronized to the backup server and the queue has free space, the source production host can acquire the corresponding incremental data from the backup data of the disk file according to the change parameters, package it into target incremental data packets, and add them to the incremental sending queue.
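The repackaging step can be illustrated as follows. This is a hedged sketch: the function name, the use of a plain file to stand in for the disk, the dict-shaped metadata, and the omission of error handling are all simplifying assumptions.

```python
def refill_send_queue(send_queue, metadata_queue, disk_path, free_slots):
    """When the send queue has free space, build target increment packets
    by re-reading each changed range from the backed-up disk file."""
    while free_slots > 0 and metadata_queue:
        meta = metadata_queue.pop(0)          # oldest change parameter first
        with open(disk_path, "rb") as f:
            f.seek(meta["data_offset"])
            buf = f.read(meta["data_length"])
        # repackage: metadata plus the freshly re-read changed data
        send_queue.append({**meta, "data_buf": buf})
        free_slots -= 1
```

The extra read is the cost of the simplified cache mode; in exchange, only a few dozen bytes per write need to be buffered while the send queue is full.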
In this embodiment, the generated incremental data packets are added to the incremental sending queue while the backup data in the disk file is backed up to the backup server; if the data volume of the incremental sending queue is greater than or equal to a preset first data volume threshold, the change parameters of the backup data are acquired and stored; and, when the incremental sending queue has free space, the corresponding target incremental data packets are acquired according to the change parameters and added to the queue. Because multiple cache modes are switched automatically during real-time data synchronization, the cache mode adjusts to the amount of cached data, so the memory buffer cannot overflow and the real-time synchronization task runs continuously without interruption. This fundamentally eliminates the risk of data loss caused by memory-buffer overflow and improves the success rate of real-time data backup.
In an embodiment, the change parameters include metadata and a change data bitmap. FIG. 3 shows a flowchart of a data caching method provided in an embodiment of the present application; this embodiment relates to the process of acquiring and storing the change parameters of the backup data, and the method may include the following steps:
step 301, obtaining metadata from a write input/output packet of the disk, and storing the metadata into an incremental metadata queue.
While the backup data in the disk file of the source production host is being sent to the backup server, the disk filter driver has IO monitoring enabled on the disk file, so it can intercept in real time each write input/output packet (write IO packet) issued by an upper layer and extract metadata from it. The metadata is a parameter describing the changed data in the backup data; once acquired, it is stored in the incremental metadata queue, a queue in the buffer that the disk filter driver creates in memory. This process is the simplified cache mode.
It should be noted that the disk filter driver captures changed data of the disk's effective region in real time. The effective region is shown in FIG. 4: the disk offset of the initialization-synchronization stage marks the boundary of the region whose initialization synchronization has completed. Partitions 1 and 2 are initialized synchronized regions, and partition 3 is an uninitialized region. The initialized synchronized region is the disk's effective region, where the disk filter driver must monitor write IO packets so that changed-data synchronization proceeds alongside initialization synchronization. The uninitialized region is an invalid region: its changed data will reach the backup server as initialization synchronization proceeds, so its write IO packets need not be monitored. The unallocated space contains no data and no IO packets hit it, so it requires neither initialization synchronization nor IO-change monitoring.
Step 302, if the data volume of the incremental metadata queue is greater than or equal to a preset second data volume threshold, generating a change data bitmap according to the change data of the backup data, and storing the change data bitmap into a memory mapping file.
The second data volume threshold is a preset threshold on the amount of data buffered in the incremental metadata queue and may be determined from the size of the incremental-metadata buffer space. When the data volume of the incremental metadata queue is greater than or equal to this threshold, the source production host is generating data far faster than it can send it to the backup server. To avoid buffer overflow, the source production host can then create a Changed Block Tracking (CBT) processing thread, allocate a change data bitmap, and generate the bitmap from the changed data of the backup data; the change data bitmap is likewise a parameter describing the changed data in the backup data. When generating the bitmap, the changed data is tracked through it: each bit whose position corresponds to changed data is set to 1, positions without changed data are set to 0, and the bitmap is stored in a memory mapping file. This process is the CBT cache mode.
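A minimal illustration of a CBT bitmap backed by a memory-mapped file is sketched below. The 4 KiB tracking granularity and all names are assumptions made for the example; the patent does not specify a block size.

```python
import mmap

BLOCK_SIZE = 4096  # hypothetical tracking granularity (one bit per block)


def create_cbt_bitmap(path: str, disk_size: int) -> mmap.mmap:
    """Allocate a change data bitmap backed by a memory-mapped file.
    Its length is fixed by the disk size, so it never grows with writes."""
    nbits = (disk_size + BLOCK_SIZE - 1) // BLOCK_SIZE
    nbytes = (nbits + 7) // 8
    with open(path, "wb") as f:
        f.truncate(nbytes)            # zero-filled file of the right size
    f = open(path, "r+b")
    return mmap.mmap(f.fileno(), nbytes)


def mark_changed(bitmap: mmap.mmap, offset: int, length: int) -> None:
    """Set to 1 every bit whose block intersects the written range."""
    first = offset // BLOCK_SIZE
    last = (offset + length - 1) // BLOCK_SIZE
    for blk in range(first, last + 1):
        bitmap[blk // 8] |= 1 << (blk % 8)
```

Because the bitmap's size depends only on the disk size, repeated writes to the same block cost nothing extra, which is what makes this mode immune to overflow.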
In this embodiment, the metadata is obtained from the write input/output packet of the disk and stored in the incremental metadata queue; if the data volume of the incremental metadata queue is greater than or equal to the preset second data volume threshold, a change data bitmap is generated from the changed data of the backup data and stored in the memory mapping file. Only the metadata and the change data bitmap describing the changed data are stored, and the changed data itself need not be cached, which greatly reduces the amount of cached data, avoids buffer overflow, and improves the success rate of real-time data backup.
In an embodiment, FIG. 5 shows a flowchart of a data caching method provided in an embodiment of the present application; this embodiment relates to one possible process of adding the target incremental data packet to the incremental sending queue, and the method may include the following steps:
Step 501, acquiring a target incremental data packet from the corresponding backup data according to the metadata, and adding the target incremental data packet to the incremental sending queue; the metadata includes the timestamp of the input/output packet write, the disk name, the offset in the disk, and the length of the changed data.
After acquiring a target incremental data packet from the corresponding backup data according to the metadata, the source production host waits until the incremental data cached in the incremental sending queue has been synchronized to the backup server so that the queue has free space, then adds the target incremental data packet to the queue through the created incremental packaging thread.
Step 502, if the data volume of the incremental metadata queue is greater than or equal to the preset second data volume threshold, acquiring the target incremental data packet according to the change data bitmap, and adding the target incremental data packet to the incremental sending queue.
When the data volume of the incremental metadata queue is greater than or equal to the preset second data volume threshold, the target incremental data packet is acquired according to the change data bitmap. The change data bitmap is generated from the changed data and describes it; because each bit of the bitmap covers a fixed length, each effective bit uniquely determines the corresponding incremental data in the backup data of the disk file. An effective bit is a bit whose position in the change data bitmap is set to 1.
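How each effective bit maps back to a unique disk range can be sketched as follows. The block size is hypothetical and must simply match whatever granularity was used when the bitmap was built.

```python
BLOCK_SIZE = 4096  # must match the granularity used to build the bitmap


def iter_changed_ranges(bitmap: bytes):
    """Yield (offset, length) for every effective (set-to-1) bit; each bit
    uniquely identifies one fixed-length block of the disk file."""
    for byte_idx, byte in enumerate(bitmap):
        for bit in range(8):
            if byte & (1 << bit):
                blk = byte_idx * 8 + bit
                yield blk * BLOCK_SIZE, BLOCK_SIZE
```

Reading these ranges from the backup data and packaging them yields the target incremental data packets described above.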
In this embodiment, the target incremental data packet is acquired from the corresponding backup data according to the metadata and added to the incremental sending queue; if the data volume of the incremental metadata queue is greater than or equal to the preset second data volume threshold, the target incremental data packet is acquired according to the change data bitmap and added to the incremental sending queue. Acquiring the target incremental data packet from the backup data according to the metadata in the simplified cache mode adds one extra read of the incremental data from the backup data, but the amount of cached data is small, reducing the risk of buffer overflow. Furthermore, when the simplified cache mode still risks overflow, the system can switch to the CBT cache mode according to the amount of cached data; since the change data bitmap has a fixed length, it does not grow as more data is written, so the buffer cannot overflow, further improving the success rate of real-time data backup.
On the basis of the above embodiment, when the target incremental data packet is obtained, optionally, under the condition that the incremental metadata queue is empty, the CBT processing thread is invoked to obtain the target incremental data packet from the backup data according to the valid data bit of the change data bitmap, and add the target incremental data packet to the incremental sending queue.
When the target incremental data packet is acquired, under the condition that the incremental metadata queue is empty, the source-end production host invokes the CBT processing thread to acquire the target incremental data packet from the backup data according to the valid data bits of the change data bitmap and adds it to the incremental metadata queue; then, when the incremental sending queue has free space, the incremental packaging thread moves the target incremental data packet from the incremental metadata queue to the incremental sending queue.
In this embodiment, when the incremental metadata queue is empty, the CBT processing thread is invoked to acquire the target incremental data packet from the backup data according to the valid data bits of the change data bitmap and add it to the incremental sending queue. Because the change data bitmap describes the change data in a compact form, and each bit uniquely describes the corresponding change data, acquiring the target incremental data packet from the backup data according to the valid bits of the bitmap is efficient, which in turn improves the efficiency of real-time data backup.
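The mapping from valid bits to incremental data can be sketched as follows, assuming (hypothetically) that bit i of the bitmap corresponds to block i of the disk, with a fixed block size; the function name and signature are illustrative only:

```python
def packets_from_bitmap(bitmap: bytes, backup: bytes, block_size: int):
    """Yield (offset, data) for every valid bit (value 1) in the change
    data bitmap; bit i corresponds to fixed-size block i of the disk."""
    for byte_index, byte in enumerate(bitmap):
        for bit in range(8):
            if byte & (0x80 >> bit):              # valid bit: block changed
                block_no = byte_index * 8 + bit
                off = block_no * block_size
                yield off, backup[off:off + block_size]
```

Because the bitmap length depends only on the disk size and block size, it stays constant no matter how much data is written, which is what prevents buffer overflow in the CBT cache mode.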
During the execution of the CBT processing thread, data continues to be generated and written; to avoid data loss, this newly generated data needs to be stored and backed up. As shown in fig. 6, which is a flowchart of a data caching method provided in an embodiment of the present application, this embodiment relates to the process of adding new metadata to the incremental metadata queue, and the method may include the following steps:
Step 601, if the data volume of the incremental metadata queue is smaller than the second data volume threshold, acquiring new metadata and storing the new metadata in a temporary cache queue; the new metadata is metadata generated for the backup data during execution of the CBT processing thread.
Under the condition that the data volume of the incremental metadata queue is smaller than the second data volume threshold, the CBT processing thread is started to process the change data recorded in the change data bitmap. Metadata generated for the backup data during the execution of the CBT processing thread is treated as new metadata and is stored in the temporary cache queue instead of being inserted into the incremental metadata queue.
Step 602, if the CBT processing thread is executed, adding the new metadata in the temporary buffer queue to the incremental metadata queue.
When the execution of the CBT processing thread is completed, that is, after the change data in the change data bitmap has been processed, the new metadata is taken from the temporary cache queue and added to the incremental metadata queue; this is the CBT cache transition mode.
In this embodiment, if the data volume of the incremental metadata queue is smaller than the second data volume threshold, new metadata is acquired and stored in the temporary cache queue; once the CBT processing thread finishes executing, the new metadata in the temporary cache queue is added to the incremental metadata queue. Because the temporary cache queue stores the newly generated metadata while the CBT processing thread runs, data loss is avoided; and because the new metadata is added to the incremental metadata queue only after the CBT processing thread finishes, the order in which the data was written is preserved.
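The transition behavior above can be sketched with a small helper class; this is an illustrative model only (the class and method names are hypothetical, and real code would need thread synchronization around the queues):

```python
from collections import deque

class TransitionBuffer:
    """While the CBT processing thread runs, park new metadata in a
    temporary queue; append it afterwards so write order is preserved."""
    def __init__(self):
        self.metadata_queue = deque()
        self.temp_queue = deque()
        self.cbt_running = False

    def on_new_metadata(self, meta):
        # During CBT execution, do not insert into the metadata queue.
        target = self.temp_queue if self.cbt_running else self.metadata_queue
        target.append(meta)

    def on_cbt_finished(self):
        self.cbt_running = False
        # Appending (not interleaving) keeps the original write order.
        self.metadata_queue.extend(self.temp_queue)
        self.temp_queue.clear()
```

The key design point is that the temporary queue is drained only after the bitmap has been fully converted, so bitmap-derived packets always precede metadata written during the transition.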
Optionally, if the data amount of the increment sending queue is smaller than the first data amount threshold and the increment metadata queue is empty, adding the currently acquired increment data packet to the increment sending queue.
When the data volume of the incremental sending queue is smaller than the first data volume threshold, the speed at which the source-end production host generates data is lower than the speed at which data is sent to the backup server; therefore, when the incremental metadata queue is also empty, the currently acquired incremental data packet is added directly to the incremental sending queue, that is, the system can switch back to the complete cache mode.
In this embodiment, if the data volume of the incremental sending queue is smaller than the first data volume threshold and the incremental metadata queue is empty, the currently acquired incremental data packet is added to the incremental sending queue. Because the data generation speed of the source-end production host is lower than the speed at which data is sent to the backup server, switching back to the complete cache mode improves the efficiency of real-time data backup.
On the basis of the above embodiment, when the real-time synchronization agent receives a control command to stop the real-time data backup task, it sends the stop command to the disk filter driver, and the disk filter driver stops IO monitoring of the disk. If the current cache mode is detected to be the CBT cache mode, the agent waits for the CBT processing thread to finish; after the CBT cache mode has switched to the simplified cache mode, the corresponding storage resource may be released. Optionally, once the CBT processing thread has finished executing, the storage resource corresponding to the memory-mapped file is released. If the current cache mode is detected to be the simplified cache mode, the agent waits for the incremental packaging thread to finish; after the simplified cache mode has switched to the complete cache mode, the incremental packaging thread is stopped and the corresponding storage resource is released. Optionally, the storage resource corresponding to the incremental metadata queue is released. If the current cache mode is detected to be the complete cache mode, the agent waits for the sending thread to finish; when the incremental sending queue is empty, all change data generated before the stop command was received has been synchronized to the backup server, so the sending thread is stopped and the corresponding storage resource may be released. Optionally, once all change data of the backup data has been synchronized to the backup server, the storage resource corresponding to the incremental sending queue is released.
In this embodiment, if the CBT processing thread has finished executing, the storage resource corresponding to the memory-mapped file is released; the storage resource corresponding to the incremental metadata queue is released; and if all change data of the backup data has been synchronized to the backup server, the storage resource corresponding to the incremental sending queue is released. By releasing these storage resources, they become available to store other data, which improves the utilization of storage resources.
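The staged shutdown described above (drain CBT mode, then simplified mode, then complete mode, releasing each structure as its stage empties) can be modeled with a hypothetical sketch; the `state` dictionary, its keys, and the use of plain lists in place of real threads and drivers are all assumptions made for illustration:

```python
def stop_backup_task(state: dict) -> None:
    """Drain each cache stage back toward the complete mode on stop,
    releasing each structure's resources once its stage is empty."""
    state["io_monitoring"] = False        # disk filter driver stops IO monitoring
    if state["mode"] == "cbt":
        # CBT thread finishes converting the bitmap; release the mapped file.
        state["bitmap"] = None
        state["mode"] = "simplified"
    if state["mode"] == "simplified":
        # Packaging thread drains remaining metadata; release the queue.
        state["send_queue"].extend(state["metadata_queue"])
        state["metadata_queue"] = None
        state["mode"] = "complete"
    # Sending thread flushes everything to the backup server, then stops.
    state["sent"] = list(state["send_queue"])
    state["send_queue"] = None
```

The ordering matters: each stage is only torn down after the next stage downstream can absorb its remaining data, so no change data generated before the stop command is lost.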
In an embodiment, as shown in fig. 7, a flowchart of a data caching method provided in an embodiment of the present application is shown, where the method may include the following steps:
Step 701, monitoring disk write IO packets in real time during the initialization synchronization stage and the change data backup stage.
Step 702, the cache mode initially defaults to the complete mode: change data is packaged directly into incremental data packets, which are stored in the incremental sending queue and sent to the backup server by the sending thread.
Step 703, when the cached data amount of the incremental sending queue exceeds the first data amount threshold, automatically switching to the simplified mode and starting the packaging thread; when change data exists, only its metadata is extracted and packaged into an incremental record instead of caching the actual data, and the record is stored in the incremental metadata queue.
Step 704, the packaging thread cyclically reads the records in the incremental metadata queue, obtains the change data directly from the backup data of the corresponding disk according to metadata such as the disk device name, disk offset and change data length, and packages the change data into an incremental data packet; when the incremental sending queue has free space, the packet is stored in the incremental sending queue and sent to the backup server by the sending thread.
Step 705, during simplified-mode processing, monitoring in real time whether the cached data amount of the incremental sending queue falls below the first data amount threshold; if so, switching back to the complete mode, with the packaging thread processing the remaining records in the incremental metadata queue.
Step 706, when the cached data amount of the incremental metadata queue also exceeds the second data amount threshold, automatically switching the cache mode to the CBT mode and creating a CBT change data bitmap; when change data exists, the bit at the corresponding position in the bitmap is simply set to 1.
Step 707, during CBT-mode processing, monitoring in real time whether the cached data amount of the incremental metadata queue falls below the second data amount threshold; if so, switching the cache mode to the transition mode.
Step 708, in the transition mode, starting the CBT processing thread to process the change data recorded in the change data bitmap and convert all of it into incremental data packets; while the CBT processing thread is working, newly generated records are stored in the temporary queue instead of being inserted into the incremental metadata queue, and after the CBT processing thread finishes, insertion switches back from the temporary queue to the incremental metadata queue.
Step 709, after the change data bitmap has been fully processed, releasing the bitmap, stopping the CBT processing thread, and inserting the records in the temporary queue into the incremental metadata queue.
Step 710, switching to the complete mode when the cached data amount of the incremental sending queue is below the first data amount threshold and the incremental metadata queue is empty.
Step 711, continuing the above process until the real-time data backup task is stopped.
The implementation principle and technical effect of each step in the data caching method provided in this embodiment are similar to those in the previous embodiments of the data caching method, and are not described herein again. The implementation manner of each step in the embodiment of fig. 7 is only an example, and is not limited to this, and the order of each step may be adjusted in practical application as long as the purpose of each step can be achieved.
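The mode transitions in steps 701 to 711 can be summarized as a small state machine. The following is an illustrative reconstruction only, not the claimed implementation; the mode names, function name, and strict/non-strict threshold comparisons are assumptions:

```python
def next_mode(mode: str, send_len: int, meta_len: int,
              first_threshold: int, second_threshold: int) -> str:
    """Pick the next cache mode from the current queue sizes, following
    the transitions of steps 703, 705, 706, 707 and 710."""
    if mode == "complete" and send_len > first_threshold:
        return "simplified"                              # step 703
    if mode == "simplified":
        if meta_len > second_threshold:
            return "cbt"                                 # step 706
        if send_len < first_threshold:
            return "complete"                            # step 705
    if mode == "cbt" and meta_len < second_threshold:
        return "transition"                              # step 707
    if mode == "transition" and send_len < first_threshold and meta_len == 0:
        return "complete"                                # step 710
    return mode
```

Because every transition is driven purely by the cached data amounts, the system degrades to cheaper caching as pressure rises and recovers automatically as the queues drain, which is what keeps the memory buffer from overflowing.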
In the technical solution provided by the embodiments of the present application, automatic switching among multiple cache modes is implemented during real-time data synchronization, so the cache mode can be adjusted automatically according to the amount of cached data. As a result, the memory buffer cannot overflow, the real-time synchronization task can run continuously without interruption, the risk of data loss caused by memory buffer overflow is fundamentally eliminated, and the success rate of real-time data backup is improved.
It should be understood that although the various steps in the flowcharts of figs. 2-7 are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in figs. 2-7 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Referring to fig. 8, a block diagram of a data caching device 80 according to an embodiment of the present application is shown, where the data caching device 80 may be configured in a source-end production host. As shown in fig. 8, the data caching apparatus 80 may include a first adding module 81, a storing module 82, and a second adding module 83, wherein:
a first adding module 81, configured to add, to an increment sending queue, an increment data packet generated in a process of backing up backup data in a disk file to a backup server; the incremental data packets represent the changed data of the backup data.
The storage module 82 is configured to obtain and store a change parameter of the backup data if the data amount of the incremental sending queue is greater than or equal to a preset first data amount threshold; the change parameters are used to describe changed data in the backup data.
And a second adding module 83, configured to, when there is a free space in the increment sending queue, obtain a corresponding target increment data packet according to the change parameter, and add the target increment data packet to the increment sending queue.
In one embodiment, please refer to fig. 9, which shows a block diagram of a data caching device 90 according to an embodiment of the present application, where the data caching device 90 may be configured in a source-end production host. The change parameter includes metadata and a change data bitmap, and the storage module 82 includes a first obtaining unit 821 and a generating unit 822, where:
the first obtaining unit 821 is configured to obtain metadata from a write input/output packet of a disk, and store the metadata in an incremental metadata queue.
A generating unit 822, configured to generate a change data bitmap according to the change data of the backup data if the data amount of the incremental metadata queue is greater than or equal to a preset second data amount threshold, and store the change data bitmap in the memory mapped file.
In one embodiment, please refer to fig. 10, which shows a block diagram of a data caching device 100 according to an embodiment of the present application, where the data caching device 100 can be configured in a source-end production host. As shown in fig. 10, the second adding module 83 includes a second obtaining unit 831 and a third obtaining unit 832, wherein:
a second obtaining unit 831, configured to obtain a target incremental data packet from corresponding backup data according to the metadata, and add the target incremental data packet to an incremental sending queue; the metadata includes a time stamp of the input-output packet write, a disk name, an offset in the disk, and a length of the change data.
A third obtaining unit 832, configured to, if the data amount of the incremental metadata queue is greater than or equal to a preset second data amount threshold, obtain a target incremental data packet according to the changed data bitmap, and add the target incremental data packet to the incremental sending queue.
In one embodiment, the third obtaining unit 832 is specifically configured to, when the delta metadata queue is empty, invoke a CBT processing thread to obtain a target delta data packet from the backup data according to the valid data bits of the change data bitmap, and add the target delta data packet to the delta sending queue.
In one embodiment, please refer to fig. 11, which shows a block diagram of a data caching device 110 according to an embodiment of the present application, where the data caching device 110 may be configured in a source-end production host. As shown in fig. 11, the data caching apparatus 110 may include an obtaining module 111 and a third adding module 112, where:
an obtaining module 111, configured to obtain new metadata and store the new metadata in a temporary buffer queue if the data amount of the incremental metadata queue is smaller than a second data amount threshold; the new metadata is metadata generated by the backup data during execution of the CBT processing thread.
A third adding module 112, configured to add the new metadata in the temporary buffer queue to the incremental metadata queue if the CBT processing thread is executed completely.
In one embodiment, please refer to fig. 12, which shows a block diagram of a data caching device 120 according to an embodiment of the present application, where the data caching device 120 may be configured in a source production host. As shown in fig. 12, the data caching apparatus 120 may include a first releasing module 121, wherein:
the first releasing module 121 is configured to release the storage resource corresponding to the memory mapping file if the CBT processing thread is executed completely.
In one embodiment, please refer to fig. 13, which shows a block diagram of a data caching apparatus 130 according to an embodiment of the present application, where the data caching apparatus 130 can be configured in a source-end production host. As shown in fig. 13, the data caching apparatus 130 may include a fourth adding module 131, wherein:
a fourth adding module 131, configured to add the currently acquired incremental data packet to the incremental sending queue if the data amount of the incremental sending queue is smaller than the first data amount threshold and the incremental metadata queue is empty.
In one embodiment, please refer to fig. 14, which shows a block diagram of a data caching device 140 according to an embodiment of the present application, where the data caching device 140 may be configured in a source production host. As shown in fig. 14, the data caching apparatus 140 may include a second releasing module 141, wherein:
and a second releasing module 141, configured to release the storage resource corresponding to the delta metadata queue.
In one embodiment, please refer to fig. 15, which shows a block diagram of a data caching device 150 according to an embodiment of the present application, where the data caching device 150 may be configured in a source-end production host. As shown in fig. 15, the data caching apparatus 150 may include a third releasing module 151, wherein:
a third releasing module 151, configured to release the storage resource corresponding to the incremental sending queue if all the change data of the backup data is synchronized in the backup server.
For the specific limitation of the data caching device, reference may be made to the above limitation on the data caching method, which is not described herein again. The modules in the data caching device can be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute the operations of the modules.
In one embodiment, a computer device is provided, which may be a source production host, and its internal structure diagram may be as shown in fig. 16. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a data caching method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 16 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment of the present application, there is provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the following steps when executing the computer program:
in the process of backing up the backup data in the disk file to a backup server, adding the generated incremental data packet to an incremental sending queue; the incremental data packet represents the changed data of the backup data; if the data volume of the incremental sending queue is larger than or equal to a preset first data volume threshold value, acquiring and storing the change parameters of the backup data; the change parameters are used for describing the change data in the backup data; and under the condition that the increment sending queue has a free space, acquiring a corresponding target increment data packet according to the change parameters, and adding the target increment data packet into the increment sending queue.
In one embodiment, the change parameters include metadata and a change data bitmap;
the processor, when executing the computer program, further performs the steps of:
acquiring metadata from a write input/output packet of a disk, and storing the metadata into an incremental metadata queue; and if the data volume of the incremental metadata queue is greater than or equal to a preset second data volume threshold value, generating a change data bitmap according to the change data of the backup data, and storing the change data bitmap into a memory mapping file.
In one embodiment of the application, the processor when executing the computer program further performs the steps of:
acquiring a target incremental data packet from corresponding backup data according to the metadata, and adding the target incremental data packet into an incremental sending queue; the metadata comprises a timestamp written by the input and output packets, a disk name, an offset in the disk and the length of the change data; and if the data volume of the incremental metadata queue is greater than or equal to a preset second data volume threshold value, acquiring a target incremental data packet according to the changed data bitmap, and adding the target incremental data packet into an incremental sending queue.
In one embodiment of the application, the processor when executing the computer program further performs the steps of:
and under the condition that the increment metadata queue is empty, calling a CBT processing thread to acquire a target increment data packet from the backup data according to the effective data bit of the change data bitmap, and adding the target increment data packet into the increment sending queue.
In one embodiment of the application, the processor when executing the computer program further performs the steps of:
if the data volume of the incremental metadata queue is smaller than a second data volume threshold value, acquiring new metadata, and storing the new metadata into a temporary cache queue; the new metadata is generated by backup data during the execution of the CBT processing thread; and if the CBT processing thread is executed, adding the new metadata in the temporary cache queue to the increment metadata queue.
In one embodiment of the application, the processor when executing the computer program further performs the steps of:
and if the CBT processing thread is executed, releasing the storage resource corresponding to the memory mapping file.
In one embodiment of the application, the processor when executing the computer program further performs the steps of:
and if the data volume of the increment sending queue is smaller than the first data volume threshold value and the increment metadata queue is empty, adding the currently acquired increment data packet into the increment sending queue.
In one embodiment of the application, the processor when executing the computer program further performs the steps of:
and releasing the storage resource corresponding to the increment metadata queue.
In one embodiment of the application, the processor when executing the computer program further performs the steps of:
and if all the changed data of the backup data are synchronized to the backup server, releasing the storage resources corresponding to the incremental sending queue.
The implementation principle and technical effect of the computer device provided by the embodiment of the present application are similar to those of the method embodiment described above, and are not described herein again.
In an embodiment of the application, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of:
in the process of backing up the backup data in the disk file to a backup server, adding the generated incremental data packet to an incremental sending queue; the incremental data packet represents the changed data of the backup data; if the data volume of the incremental sending queue is larger than or equal to a preset first data volume threshold value, acquiring and storing the change parameters of the backup data; the change parameters are used for describing the change data in the backup data; and under the condition that the increment sending queue has a free space, acquiring a corresponding target increment data packet according to the change parameters, and adding the target increment data packet into the increment sending queue.
In one embodiment, the change parameters include metadata and a change data bitmap;
the computer program when executed by the processor further realizes the steps of:
acquiring metadata from a write input/output packet of a disk, and storing the metadata into an incremental metadata queue; and if the data volume of the incremental metadata queue is greater than or equal to a preset second data volume threshold value, generating a change data bitmap according to the change data of the backup data, and storing the change data bitmap into a memory mapping file.
In one embodiment of the application, the computer program when executed by the processor further performs the steps of:
acquiring a target incremental data packet from corresponding backup data according to the metadata, and adding the target incremental data packet into an incremental sending queue; the metadata comprises a timestamp written by the input and output packets, a disk name, an offset in the disk and the length of the change data; and if the data volume of the incremental metadata queue is greater than or equal to a preset second data volume threshold value, acquiring a target incremental data packet according to the changed data bitmap, and adding the target incremental data packet into an incremental sending queue.
In one embodiment of the application, the computer program when executed by the processor further performs the steps of:
and under the condition that the increment metadata queue is empty, calling a CBT processing thread to acquire a target increment data packet from the backup data according to the effective data bit of the change data bitmap, and adding the target increment data packet into the increment sending queue.
In one embodiment of the application, the computer program when executed by the processor further performs the steps of:
if the data volume of the incremental metadata queue is smaller than a second data volume threshold value, acquiring new metadata, and storing the new metadata into a temporary cache queue; the new metadata is generated by backup data during the execution of the CBT processing thread; and if the CBT processing thread is executed, adding the new metadata in the temporary cache queue to the increment metadata queue.
In one embodiment of the application, the computer program when executed by the processor further performs the steps of:
and if the CBT processing thread is executed, releasing the storage resource corresponding to the memory mapping file.
In one embodiment of the application, the computer program when executed by the processor further performs the steps of:
and if the data volume of the increment sending queue is smaller than the first data volume threshold value and the increment metadata queue is empty, adding the currently acquired increment data packet into the increment sending queue.
In one embodiment of the application, the computer program when executed by the processor further performs the steps of:
and releasing the storage resource corresponding to the increment metadata queue.
In one embodiment of the application, the computer program when executed by the processor further performs the steps of:
and if all the changed data of the backup data are synchronized to the backup server, releasing the storage resources corresponding to the incremental sending queue.
The implementation principle and technical effect of the computer-readable storage medium provided by this embodiment are similar to those of the above-described method embodiment, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above examples express only several embodiments of the present application, and their description is specific and detailed, but they should not be construed as limiting the scope of the claims. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the scope of protection of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for caching data, the method comprising:
in the process of backing up backup data in a disk file to a backup server, adding a generated incremental data packet to an incremental sending queue, wherein the incremental data packet represents changed data of the backup data;
if the data volume of the incremental sending queue is greater than or equal to a preset first data volume threshold, acquiring metadata from a disk write input/output packet and storing the metadata in an incremental metadata queue; and if the data volume of the incremental metadata queue is greater than or equal to a preset second data volume threshold, generating a change data bitmap according to the changed data of the backup data and storing the change data bitmap in a memory-mapped file; and
when the incremental sending queue has free space, acquiring a corresponding target incremental data packet according to the metadata and the change data bitmap, and adding the target incremental data packet to the incremental sending queue.
2. The method according to claim 1, wherein acquiring the corresponding target incremental data packet according to the metadata and the change data bitmap and adding the target incremental data packet to the incremental sending queue comprises:
acquiring the target incremental data packet from the corresponding backup data according to the metadata, and adding the target incremental data packet to the incremental sending queue, wherein the metadata comprises a timestamp of the write input/output packet, a disk name, an offset within the disk, and a length of the changed data; and
if the data volume of the incremental metadata queue is greater than or equal to the preset second data volume threshold, acquiring the target incremental data packet according to the change data bitmap and adding the target incremental data packet to the incremental sending queue.
3. The method according to claim 2, wherein acquiring the target incremental data packet according to the change data bitmap and adding the target incremental data packet to the incremental sending queue comprises:
when the incremental metadata queue is empty, invoking a CBT processing thread to acquire the target incremental data packet from the backup data according to the valid data bits of the change data bitmap, and adding the target incremental data packet to the incremental sending queue, wherein the CBT processing thread is a changed block tracking processing thread for tracking changed data.
4. The method of claim 3, further comprising:
if the data volume of the incremental metadata queue is smaller than the second data volume threshold, acquiring new metadata and storing the new metadata in a temporary cache queue, wherein the new metadata is metadata generated for the backup data while the CBT processing thread is executing; and
when the CBT processing thread has finished executing, adding the new metadata in the temporary cache queue to the incremental metadata queue.
5. The method of claim 4, further comprising:
when the CBT processing thread has finished executing, releasing the storage resources corresponding to the memory-mapped file.
6. The method of claim 2, further comprising:
if the data volume of the incremental sending queue is smaller than the first data volume threshold and the incremental metadata queue is empty, adding the currently acquired incremental data packet to the incremental sending queue.
7. A data caching apparatus, comprising:
a first adding module, configured to add a generated incremental data packet to an incremental sending queue in the process of backing up backup data in a disk file to a backup server, wherein the incremental data packet represents changed data of the backup data;
a storage module, configured to: if the data volume of the incremental sending queue is greater than or equal to a preset first data volume threshold, acquire metadata from a disk write input/output packet and store the metadata in an incremental metadata queue; and if the data volume of the incremental metadata queue is greater than or equal to a preset second data volume threshold, generate a change data bitmap according to the changed data of the backup data and store the change data bitmap in a memory-mapped file; and
a second adding module, configured to, when the incremental sending queue has free space, acquire a corresponding target incremental data packet according to the metadata and the change data bitmap and add the target incremental data packet to the incremental sending queue.
8. The apparatus according to claim 7, wherein the second adding module is further configured to: acquire the target incremental data packet from the corresponding backup data according to the metadata and add the target incremental data packet to the incremental sending queue, wherein the metadata comprises a timestamp of the write input/output packet, a disk name, an offset within the disk, and a length of the changed data; and if the data volume of the incremental metadata queue is greater than or equal to the preset second data volume threshold, acquire the target incremental data packet according to the change data bitmap and add the target incremental data packet to the incremental sending queue.
9. A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the steps of the method according to any one of claims 1 to 6.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
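The tiered buffering described in the claims — caching full incremental packets while the sending queue has room, degrading to metadata only once it fills, degrading again to a per-block change bitmap once the metadata queue also fills, and refilling the sending queue in the reverse order as space frees up — can be sketched as follows. This is an illustrative sketch, not the patented implementation: the class and method names, the queue sizes, the fixed-size in-memory bitmap (the patent stores it in a memory-mapped file), and the fast-path condition including a clean bitmap are all assumptions for demonstration.

```python
from collections import deque


class TieredBackupCache:
    """Illustrative three-tier buffer: full packets -> metadata -> change bitmap."""

    def __init__(self, send_limit, meta_limit, disk_blocks):
        self.send_queue = deque()            # tier 1: complete incremental packets
        self.meta_queue = deque()            # tier 2: metadata only, data re-read later
        self.bitmap = [False] * disk_blocks  # tier 3: one bit per changed block (CBT)
        self.send_limit = send_limit         # first data volume threshold
        self.meta_limit = meta_limit         # second data volume threshold

    def on_write_io(self, block, timestamp, disk, offset, length, data):
        """Intercept a disk write I/O and cache it at the cheapest sufficient tier."""
        if (len(self.send_queue) < self.send_limit
                and not self.meta_queue and not any(self.bitmap)):
            # Fast path: send queue has room and there is no backlog to preserve order for.
            self.send_queue.append(("packet", block, data))
        elif len(self.meta_queue) < self.meta_limit:
            # Send queue full: keep only the metadata from the write I/O packet.
            self.meta_queue.append((timestamp, disk, offset, length, block))
        else:
            # Metadata queue also full: degrade to one bit per changed block.
            self.bitmap[block] = True

    def drain_one(self, read_block):
        """When the send queue has free space, refill from metadata first, then bitmap."""
        if len(self.send_queue) >= self.send_limit:
            return False
        if self.meta_queue:
            _ts, _disk, _offset, _length, block = self.meta_queue.popleft()
            self.send_queue.append(("packet", block, read_block(block)))
            return True
        for block, dirty in enumerate(self.bitmap):
            if dirty:  # valid data bit: re-read the block from the backup data
                self.bitmap[block] = False
                self.send_queue.append(("packet", block, read_block(block)))
                return True
        return False
```

The point of the tiering is graceful degradation of memory use: a full packet costs the most, a metadata tuple costs a few dozen bytes, and the bitmap costs one bit per block regardless of how many writes hit it, so the cache never overflows no matter how far the backup server falls behind.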
CN202011607295.0A 2020-12-29 2020-12-29 Data caching method, device, equipment and storage medium Active CN112698789B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011607295.0A CN112698789B (en) 2020-12-29 2020-12-29 Data caching method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011607295.0A CN112698789B (en) 2020-12-29 2020-12-29 Data caching method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112698789A CN112698789A (en) 2021-04-23
CN112698789B true CN112698789B (en) 2022-03-15

Family

ID=75512422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011607295.0A Active CN112698789B (en) 2020-12-29 2020-12-29 Data caching method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112698789B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113342898B (en) * 2021-06-29 2022-10-04 杭州数梦工场科技有限公司 Data synchronization method and device
CN114625502A (en) * 2022-03-03 2022-06-14 盐城金堤科技有限公司 Word-throwing task processing method and device, storage medium and electronic equipment
CN116431396B (en) * 2023-06-07 2023-08-25 成都云祺科技有限公司 Method, system and storage medium for processing real-time backup cache data of volume

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6549992B1 (en) * 1999-12-02 2003-04-15 Emc Corporation Computer data storage backup with tape overflow control of disk caching of backup data stream
JP2014229131A (en) * 2013-05-23 2014-12-08 株式会社日立エルジーデータストレージ Data recording/reproduction system and data recording control method
CN108399186A (en) * 2018-01-12 2018-08-14 联动优势科技有限公司 A kind of collecting method and device
CN109189577A (en) * 2018-08-31 2019-01-11 武汉达梦数据库有限公司 A kind of data prevent memory from overflowing method and apparatus when synchronous
CN109213817A (en) * 2018-08-10 2019-01-15 杭州数梦工场科技有限公司 Incremental data abstracting method, device and server
CN109669818A (en) * 2018-12-20 2019-04-23 广州鼎甲计算机科技有限公司 Continuous data protection method and system without local cache

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103365745A (en) * 2013-06-07 2013-10-23 上海爱数软件有限公司 Block level backup method based on content-addressed storage and system
US9665437B2 (en) * 2013-11-18 2017-05-30 Actifio, Inc. Test-and-development workflow automation
CN110058959B (en) * 2018-01-18 2023-06-16 伊姆西Ip控股有限责任公司 Data backup method, apparatus and computer program product
GB2572136B (en) * 2018-03-12 2020-04-15 Micro Consulting Ltd Backup systems and methods
CN109800260A (en) * 2018-12-14 2019-05-24 深圳壹账通智能科技有限公司 High concurrent date storage method, device, computer equipment and storage medium
CN110413689B (en) * 2019-06-29 2022-04-26 苏州浪潮智能科技有限公司 Multi-node data synchronization method and device for memory database

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6549992B1 (en) * 1999-12-02 2003-04-15 Emc Corporation Computer data storage backup with tape overflow control of disk caching of backup data stream
JP2014229131A (en) * 2013-05-23 2014-12-08 株式会社日立エルジーデータストレージ Data recording/reproduction system and data recording control method
CN108399186A (en) * 2018-01-12 2018-08-14 联动优势科技有限公司 A kind of collecting method and device
CN109213817A (en) * 2018-08-10 2019-01-15 杭州数梦工场科技有限公司 Incremental data abstracting method, device and server
CN109189577A (en) * 2018-08-31 2019-01-11 武汉达梦数据库有限公司 A kind of data prevent memory from overflowing method and apparatus when synchronous
CN109669818A (en) * 2018-12-20 2019-04-23 广州鼎甲计算机科技有限公司 Continuous data protection method and system without local cache

Also Published As

Publication number Publication date
CN112698789A (en) 2021-04-23

Similar Documents

Publication Publication Date Title
CN112698789B (en) Data caching method, device, equipment and storage medium
US9798655B2 (en) Managing a cache on storage devices supporting compression
US11435931B2 (en) Memory data migration method and apparatus where cold data is migrated to shared storage prior to storing in destination storage
US8719639B2 (en) Virtual machine control program, virtual machine control system, and dump capturing method
US11593186B2 (en) Multi-level caching to deploy local volatile memory, local persistent memory, and remote persistent memory
CN110196681B (en) Disk data write-in control method and device for business write operation and electronic equipment
CN112000287B (en) IO request processing device, method, equipment and readable storage medium
US9081692B2 (en) Information processing apparatus and method thereof
CN110865989A (en) Business processing method for large-scale computing cluster
WO2022142312A1 (en) Page processing method and apparatus, computer device and storage medium
US20140059314A1 (en) Preventing data loss during reboot and logical storage resource management device
JP2017120626A (en) System and apparatus including storage device to perform double-writes, and method therefor
US20220253252A1 (en) Data processing method and apparatus
CN110557398B (en) Service request control method, device, system, computer equipment and storage medium
CN108196937B (en) Method and device for processing character string object, computer equipment and storage medium
CN112698987A (en) On-line backup method, device, equipment and storage medium for snapshot-free operating system
CN112162818B (en) Virtual memory allocation method and device, electronic equipment and storage medium
CN113704027B (en) File aggregation compatible method and device, computer equipment and storage medium
US20210048958A1 (en) Concept for Controlling a Memory Performance in a Computer System
CN113157738B (en) In-heap data cache synchronization method and device, computer equipment and storage medium
KR102456017B1 (en) Apparatus and method for file sharing between applications
WO2022262623A1 (en) Data exchange method and apparatus
CN115202892B (en) Memory expansion system and memory expansion method of cryptographic coprocessor
CN111352730B (en) Caching method and device for application program upgrade, computer equipment and storage medium
US20220083267A1 (en) High Bandwidth Controller Memory Buffer For Peer To Peer Data Transfer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Data caching method, device, equipment and storage medium

Effective date of registration: 20221021

Granted publication date: 20220315

Pledgee: Industrial Bank Co.,Ltd. Guangzhou Development Zone sub branch

Pledgor: Guangzhou Dingjia Computer Technology Co.,Ltd.

Registration number: Y2022980018838
