CN106293520B - Method for processing an I/O request - Google Patents

Method for processing an I/O request

Info

Publication number
CN106293520B
CN106293520B CN201610619194.2A
Authority
CN
China
Prior art keywords
local controller
request data
module
write request
cache module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610619194.2A
Other languages
Chinese (zh)
Other versions
CN106293520A (en)
Inventor
杨善松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Beijing Electronic Information Industry Co Ltd
Original Assignee
Inspur Beijing Electronic Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Beijing Electronic Information Industry Co Ltd
Priority to CN201610619194.2A
Publication of CN106293520A
Application granted
Publication of CN106293520B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061 Improving I/O performance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629 Configuration or reconfiguration of storage systems
    • G06F3/0631 Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656 Data buffering arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a method for processing an I/O request. The method comprises: when the host side issues IO write request data to the local controller, the upper-layer cache module in the local controller receives the IO write request data and mirrors the IO request data to the partner-side controller for data backup; the partner-side controller sends feedback information to the local controller, notifying the local controller that the IO write request data has been successfully backed up; the local controller returns to the host side a response message indicating that I/O processing is completed; and the local controller writes the IO write request data into memory. The method thereby reduces IO latency.

Description

Method for processing an I/O request
Technical field
The present invention relates to the field of storage system caching technology, and more particularly to a method for processing an I/O request.
Background technique
Currently, within a storage system, advanced features such as snapshot, clone, remote copy and thin provisioning are essential in high-end storage products, and these advanced features rely on caching to accelerate I/O processing. A conventional storage system, however, has only a single cache layer, which limits system performance to some extent. Normally, with only one cache layer, advanced features such as snapshot, clone, remote copy and thin provisioning still need cache acceleration, otherwise performance degrades. An I/O request issued by the host side must first pass through the processing of these advanced features before it can be issued into the cache; that is, the host's I/O request flows through the advanced features and only then is finally written into the cache. The response time of a host IO write request is therefore affected by the advanced-feature processing, which inevitably causes long-tail host IO latency and excessively long response times.
Summary of the invention
The object of the present invention is to provide a method for processing an I/O request, so as to reduce IO latency.
In order to solve the above technical problem, the present invention provides a method for processing an I/O request, comprising:
when the host side issues IO write request data to the local controller, the upper-layer cache module in the local controller receives the IO write request data and mirrors the IO request data to the partner-side controller for data backup;
the partner-side controller sends feedback information to the local controller, notifying the local controller that the IO write request data has been successfully backed up;
the local controller returns to the host side a response message indicating that I/O processing is completed;
the local controller writes the IO write request data into memory.
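A minimal sketch of this four-step write path, written in Python with purely hypothetical class and method names (the patent itself specifies no code), could look as follows:

    # Hypothetical sketch of the four-step write path described above;
    # all names are illustrative, none are taken from the patent.

    class PartnerController:
        def __init__(self):
            self.mirrored = {}              # backup copies of IO write request data

        def mirror(self, io_id, data):
            """Receive a mirrored copy and acknowledge the backup."""
            self.mirrored[io_id] = data
            return "ACK"                    # feedback: backup succeeded


    class LocalController:
        def __init__(self, partner):
            self.partner = partner
            self.memory = {}                # stands in for the controller's memory

        def handle_write(self, io_id, data):
            ack = self.partner.mirror(io_id, data)   # mirror to the partner side
            assert ack == "ACK"                      # feedback confirms the backup
            print(f"host <- IO {io_id} completed")   # respond to the host side
            self.memory[io_id] = data                # only now write into memory


    local = LocalController(PartnerController())
    local.handle_write(1, b"payload")

The ordering is the point of the design: the host is answered as soon as the mirrored copy is acknowledged, before any further processing of the data.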
Preferably, the local controller includes an upper-layer cache module, an advanced feature module and a lower-layer cache module, with the advanced feature module located between the upper-layer cache module and the lower-layer cache module.
Preferably, the advanced feature module includes a snapshot module, a clone module, a remote copy module or a thin provisioning module.
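One possible way to picture this module layout in code, again a sketch with assumed names rather than the patent's implementation:

    # Illustrative data model of one controller: an upper-layer cache module,
    # a set of advanced feature modules, and a lower-layer cache module.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class CacheModule:
        name: str
        entries: Dict[int, bytes] = field(default_factory=dict)

    @dataclass
    class AdvancedFeatureModule:
        kind: str          # "snapshot", "clone", "remote copy" or "thin provisioning"

    @dataclass
    class Controller:
        upper_cache: CacheModule
        features: List[AdvancedFeatureModule]   # sits between the two cache layers
        lower_cache: CacheModule

    local = Controller(
        upper_cache=CacheModule("upper"),
        features=[AdvancedFeatureModule(k) for k in
                  ("snapshot", "clone", "remote copy", "thin provisioning")],
        lower_cache=CacheModule("lower"),
    )
    print([f.kind for f in local.features])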
Preferably, the local controller writing the IO write request data into memory comprises:
the upper-layer cache module in the local controller transmits the IO write request data to the advanced feature module in the local controller;
the advanced feature module in the local controller processes the IO write request data and, after the processing is completed, transfers the IO write request data to the lower-layer cache module in the local controller, where it is written into the lower-layer cache module.
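Read as code, the two sub-steps amount to handing the data through the advanced-feature chain and then placing the result in the lower-layer cache; the sketch below is an assumption about that hand-off, not the patent's implementation:

    # Hypothetical flush path: upper cache -> advanced feature chain -> lower cache.
    def flush_write(io_id, data, feature_chain, lower_cache):
        for feature in feature_chain:      # e.g. snapshot, clone, remote copy
            data = feature(io_id, data)    # each module processes the write in turn
        lower_cache[io_id] = data          # finally written into the lower-layer cache
        return lower_cache

    def snapshot(io_id, data):
        # placeholder for real snapshot bookkeeping
        return data

    print(flush_write(7, b"block", [snapshot], {}))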
Preferably, the local controller and the partner-side controller are connected through an NTB (non-transparent bridge) channel.
Preferably, the upper-layer cache module in the local controller receiving the IO write request data and mirroring the IO request data to the partner-side controller for data backup comprises:
the upper-layer cache module in the local controller receives the IO write request data and mirrors the IO request data to the upper-layer cache module in the partner-side controller for data backup.
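The mirroring between the two upper-layer cache modules over the NTB channel can be pictured roughly as below; a real NTB exposes a PCIe memory window, so the Python call standing in for it here is only an assumption for illustration:

    # Simplified model of upper-cache-to-upper-cache mirroring over an NTB channel;
    # a plain dict stands in for the partner's memory window.

    class NTBChannel:
        def __init__(self, remote_upper_cache):
            self.remote = remote_upper_cache

        def send(self, io_id, data):
            self.remote[io_id] = data       # data lands in the partner's upper cache
            return True                     # feedback: backup succeeded

    partner_upper_cache = {}
    channel = NTBChannel(partner_upper_cache)

    local_upper_cache = {}
    io_id, data = 42, b"write-data"
    local_upper_cache[io_id] = data         # cached locally first
    acked = channel.send(io_id, data)       # mirrored to the partner's upper cache
    print(acked, partner_upper_cache)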
Preferably, the partner-side controller sending feedback information to the local controller comprises:
the upper-layer cache module in the partner-side controller sends feedback information to the upper-layer cache module in the local controller.
Preferably, the method further comprises:
when a downtime (crash) event occurs in the local controller, the local controller returns to the host side the response message indicating that I/O processing is completed, and the IO write request data is transferred to the advanced feature module in the local controller.
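The point of the mirrored copy is that a local crash does not lose an acknowledged write; the sketch below illustrates that property under assumed names, and is not a description of the patent's recovery procedure:

    # Illustrative crash scenario: the write survives because the partner-side
    # upper-layer cache already holds a mirrored copy.

    local_upper_cache = {}
    partner_upper_cache = {}

    io_id, data = 5, b"important"
    local_upper_cache[io_id] = data
    partner_upper_cache[io_id] = data       # mirrored before the host was answered

    local_upper_cache.clear()               # local controller crashes, cache is lost

    recovered = partner_upper_cache[io_id]  # the backup copy is still available
    assert recovered == data
    print("write recovered after local crash:", recovered)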
In the method for processing an I/O request provided by the present invention, when the host side issues IO write request data to the local controller, the upper-layer cache module in the local controller receives the IO write request data and mirrors the IO request data to the partner-side controller for data backup; the partner-side controller sends feedback information to the local controller, notifying the local controller that the IO write request data has been successfully backed up; the local controller returns to the host side a response message indicating that I/O processing is completed; and the local controller writes the IO write request data into memory. As can be seen, the method introduces a two-layer cache. After the system receives the host side's I/O request, i.e. the IO write request data, the data first enters the upper-layer cache, i.e. the upper-layer cache module. The upper-layer cache module mirrors the IO write request data to the partner side and, after receiving the partner side's feedback, returns to the host side a response message indicating that I/O processing is completed. The host side is thus informed that I/O processing is complete as soon as the copy of the IO write request data is finished; the data only has to enter the upper-layer cache for I/O processing to be reported as complete, and the host receives timely feedback. The I/O processing of the advanced features is masked and system performance is accelerated: the upper-layer cache hides the host IO latency caused by the advanced-feature I/O processing, so the method reduces IO latency and accelerates system performance.
Detailed description of the invention
In order to more clearly explain the technical solutions in the embodiments of the present invention or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flow chart of the method for processing an I/O request provided by the present invention;
Fig. 2 is a structural schematic diagram of the local controller and the partner-side controller;
Fig. 3 is a schematic diagram of the IO flow process.
Specific embodiment
The core of the invention is to provide a method for processing an I/O request, so as to reduce IO latency.
In order to enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, which is a flow chart of the method for processing an I/O request provided by the present invention, the method includes:
S11: when the host side issues IO write request data to the local controller, the upper-layer cache module in the local controller receives the IO write request data and mirrors the IO request data to the partner-side controller for data backup;
S12: the partner-side controller sends feedback information to the local controller, notifying the local controller that the IO write request data has been successfully backed up;
S13: the local controller returns to the host side a response message indicating that I/O processing is completed;
S14: the local controller writes the IO write request data into memory.
As can be seen, the method introduces a two-layer cache. After the system receives the host side's I/O request, i.e. the IO write request data, the data first enters the upper-layer cache, i.e. the upper-layer cache module. The upper-layer cache module mirrors the IO write request data to the partner side and, after receiving the partner side's feedback, returns to the host side a response message indicating that I/O processing is completed. The host side is thus informed that I/O processing is complete as soon as the copy of the IO write request data is finished; the data only has to enter the upper-layer cache for I/O processing to be reported as complete, and the host receives timely feedback. The I/O processing of the advanced features is masked and system performance is accelerated: the upper-layer cache hides the host IO latency caused by the advanced-feature I/O processing, thereby reducing IO latency and accelerating system performance.
Based on the above method, specifically, the local controller includes an upper-layer cache module, an advanced feature module and a lower-layer cache module, with the advanced feature module located between the upper-layer cache module and the lower-layer cache module.
A two-layer cache, i.e. an upper-layer cache module and a lower-layer cache module, is introduced. After receiving an IO write request, the upper-layer cache module mirrors it to the partner side, that is, copies the backup to the partner-side controller, and then feeds back to the host side that I/O processing is completed; the system delay caused by the advanced features during their subsequent data processing can therefore be masked. The method breaks with the conventional single-layer cache implementation: by introducing a two-layer cache into the storage system, the upper-layer cache can mask the host IO latency caused by the data processing of the advanced features. Applied in storage systems, including multi-controller (e.g. 2-way, 4-way, 8-way) storage systems, the method can effectively reduce system IO latency and improve system performance while keeping data safe.
Specifically, the advanced feature module includes a snapshot module, a clone module, a remote copy module or a thin provisioning module.
The local controller and the partner-side controller are connected through an NTB channel.
Step S14 is preferably implemented with the following steps:
S21: the upper-layer cache module in the local controller transmits the IO write request data to the advanced feature module in the local controller;
S22: the advanced feature module in the local controller processes the IO write request data and, after the processing is completed, transfers the IO write request data to the lower-layer cache module in the local controller, where it is written into the lower-layer cache module.
In step S11, the process in which the upper-layer cache module in the local controller receives the IO write request data and mirrors the IO request data to the partner-side controller for data backup is specifically: the upper-layer cache module in the local controller receives the IO write request data and mirrors the IO request data to the upper-layer cache module in the partner-side controller for data backup.
In step S12, the process in which the partner-side controller sends feedback information to the local controller is specifically: the upper-layer cache module in the partner-side controller sends feedback information to the upper-layer cache module in the local controller.
The partner-side controller likewise includes an upper-layer cache module, an advanced feature module and a lower-layer cache module, with the advanced feature module located between the upper-layer cache module and the lower-layer cache module.
Further, when a downtime (crash) event occurs in the local controller, the local controller returns to the host side the response message indicating that I/O processing is completed, and the IO write request data is transferred to the advanced feature module in the local controller.
The method introduces a two-layer cache: after the system receives an I/O request from the host side, the request first enters the upper-layer cache; once the upper-layer cache has mirrored the I/O data to the partner side, completion of I/O processing is returned to the host side, so the I/O processing of the advanced features can be masked and system performance accelerated. After receiving the host IO write request, and before the IO write request is passed to the above advanced feature modules for processing, the upper-layer cache mirrors the IO write request data to the partner side to guarantee data safety, and then informs the host side that I/O processing is completed, thereby shielding the I/O processing latency of the above advanced features and accelerating system performance. By shielding, through the upper-layer cache, the I/O processing latency of snapshot, clone, remote copy and thin provisioning, system performance is improved.
As shown in Fig. 2, each controller in the storage system includes two layers of cache modules, with advanced features such as snapshot, clone, remote copy and thin provisioning located between the two cache layers. The local controller includes an upper-layer cache module, an advanced feature module and a lower-layer cache module, with the advanced feature module located between the upper-layer cache module and the lower-layer cache module. The partner-side controller likewise includes an upper-layer cache module, an advanced feature module and a lower-layer cache module, with the advanced feature module located between the upper-layer cache module and the lower-layer cache module.
As shown in Fig. 3, the steps marked S1-S6 in the figure represent the IO flow, i.e. the flow of the IO write request data: the data starts at S1 and passes through S2, S3, S4, S5 and S6 in turn. The snapshot/clone/remote copy/thin provisioning module in the figure represents the snapshot module, clone module, remote copy module or thin provisioning module; all of these are advanced feature modules. Specifically, in S1 the host side issues the IO write request data to the upper-layer cache module in the local controller; in S2 the upper-layer cache module in the local controller mirrors the IO request data to the upper-layer cache module in the partner-side controller for data backup; in S3 the upper-layer cache module in the partner-side controller sends feedback information to the upper-layer cache module in the local controller to notify the local controller that the IO write request data has been successfully backed up; in S4 the upper-layer cache module in the local controller returns to the host side a response message indicating that I/O processing is completed; in S5 the upper-layer cache module in the local controller transfers the IO write request data to the snapshot/clone/remote copy/thin provisioning module in the local controller; and in S6 the snapshot/clone/remote copy/thin provisioning module in the local controller transfers the IO write request data to the lower-layer cache module in the local controller.
In Fig. 3, through the process of S1 to S6, when the host side issues an IO write request to the local side, the upper-layer cache module receives the IO write request and, through the NTB channel between the controllers, mirrors the data to the partner-side controller for backup; the partner-side controller then feeds back to the local side to notify it that the data has been successfully backed up, after which the local controller can write the IO write request data into memory. If a downtime (crash) event subsequently occurs in the local controller, this IO data will not be lost, so the IO write request data issued by the host side can be considered safe at this point and completion of I/O processing can be returned to the host side. Only afterwards is the IO submitted to the snapshot, clone, remote copy or thin provisioning advanced feature modules for subsequent processing. Doing so also masks the latency of the data processing of these advanced features.
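The six arrows of Fig. 3 can be lined up as one small end-to-end walk-through; everything below is an illustrative assumption, with S1 to S6 matching the labels in the figure:

    # End-to-end walk through the S1-S6 flow of Fig. 3 (illustrative only).
    def io_flow(data):
        local_upper, partner_upper = {}, {}
        feature_stage, local_lower = {}, {}
        io_id = 1

        local_upper[io_id] = data                  # S1: host writes into local upper cache
        partner_upper[io_id] = data                # S2: mirrored to the partner's upper cache
        ack = io_id in partner_upper               # S3: partner feeds back "backup done"
        if ack:
            print("host <- IO completed")          # S4: respond to the host side
        feature_stage[io_id] = data                # S5: hand off to snapshot/clone/... modules
        local_lower[io_id] = feature_stage[io_id]  # S6: written into the lower-layer cache
        return local_lower

    print(io_flow(b"block-0"))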
The drawback of a conventional single-layer cache module is that the response time of a host IO write request is affected by the advanced-feature processing, whereas this method masks the latency that snapshot, clone, remote copy and thin provisioning add to the host IO write request. Moreover, after the upper-layer cache module has written the IO write request data into memory, the advanced feature modules can process the IO on a pointer basis, which also speeds up processing.
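The pointer-based processing can be read as the feature modules sharing a reference to the cached buffer instead of copying the payload; this reading is an assumption, sketched below:

    # Illustrative pointer-style handling: a feature module records a reference
    # to the cached buffer rather than copying the data.
    upper_cache = {1: bytearray(b"cached-write")}

    def snapshot_by_reference(cache, io_id):
        ref = cache[io_id]          # a reference to the cached buffer, not a copy
        return ref                  # snapshot bookkeeping would record this reference

    ref = snapshot_by_reference(upper_cache, 1)
    assert ref is upper_cache[1]    # same object: no extra data movement
    print("snapshot holds a reference, not a copy of the data")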
To sum up, in the method for processing an I/O request provided by the present invention, when the host side issues IO write request data to the local controller, the upper-layer cache module in the local controller receives the IO write request data and mirrors the IO request data to the partner-side controller for data backup; the partner-side controller sends feedback information to the local controller, notifying the local controller that the IO write request data has been successfully backed up; the local controller returns to the host side a response message indicating that I/O processing is completed; and the local controller writes the IO write request data into memory. The method thus introduces a two-layer cache: after the system receives the host side's I/O request, i.e. the IO write request data, the data first enters the upper-layer cache, i.e. the upper-layer cache module, which mirrors it to the partner side and, after receiving the partner side's feedback, returns to the host side a response message indicating that I/O processing is completed. I/O processing is reported complete to the host as soon as the copy of the IO write request data is finished, the host receives timely feedback, the I/O processing of the advanced features is masked, and system performance is accelerated; the upper-layer cache hides the host IO latency caused by the advanced-feature I/O processing, thereby reducing IO latency and accelerating system performance.
The method for processing an I/O request provided by the present invention has been described in detail above. Specific examples have been used herein to explain the principles and implementation of the present invention; the above embodiments are only intended to help understand the method of the present invention and its core concept. It should be pointed out that, for those of ordinary skill in the art, several improvements and modifications can be made to the present invention without departing from its principles, and these improvements and modifications also fall within the protection scope of the claims of the present invention.

Claims (6)

1. A method for processing an I/O request, characterized by comprising:
when the host side issues IO write request data to the local controller, the upper-layer cache module in the local controller receives the IO write request data and mirrors the IO request data to the partner-side controller for data backup;
the partner-side controller sends feedback information to the local controller, notifying the local controller that the IO write request data has been successfully backed up;
the local controller returns to the host side a response message indicating that I/O processing is completed;
the local controller writes the IO write request data into memory;
the local controller includes an upper-layer cache module, an advanced feature module and a lower-layer cache module, with the advanced feature module located between the upper-layer cache module and the lower-layer cache module;
the advanced feature module includes a snapshot module, a clone module, a remote copy module or a thin provisioning module.
2. The method as described in claim 1, characterized in that the local controller writing the IO write request data into memory comprises:
the upper-layer cache module in the local controller transmits the IO write request data to the advanced feature module in the local controller;
the advanced feature module in the local controller processes the IO write request data and, after the processing is completed, transfers the IO write request data to the lower-layer cache module in the local controller, where it is written into the lower-layer cache module.
3. The method as described in claim 1, characterized in that the local controller and the partner-side controller are connected through an NTB channel.
4. The method as described in claim 1, characterized in that the upper-layer cache module in the local controller receiving the IO write request data and mirroring the IO request data to the partner-side controller for data backup comprises:
the upper-layer cache module in the local controller receives the IO write request data and mirrors the IO request data to the upper-layer cache module in the partner-side controller for data backup.
5. The method as claimed in claim 4, characterized in that the partner-side controller sending feedback information to the local controller comprises:
the upper-layer cache module in the partner-side controller sends feedback information to the upper-layer cache module in the local controller.
6. The method as described in any one of claims 1 to 5, characterized by further comprising:
when a downtime (crash) event occurs in the local controller, the local controller returns to the host side the response message indicating that I/O processing is completed, and the IO write request data is transferred to the advanced feature module in the local controller.
CN201610619194.2A 2016-07-29 2016-07-29 Method for processing an I/O request Active CN106293520B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610619194.2A CN106293520B (en) 2016-07-29 2016-07-29 Method for processing an I/O request

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610619194.2A CN106293520B (en) 2016-07-29 2016-07-29 Method for processing an I/O request

Publications (2)

Publication Number Publication Date
CN106293520A CN106293520A (en) 2017-01-04
CN106293520B (en) 2019-03-19

Family

ID=57663793

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610619194.2A Active CN106293520B (en) 2016-07-29 2016-07-29 Method for processing an I/O request

Country Status (1)

Country Link
CN (1) CN106293520B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107797945A (en) * 2017-10-31 2018-03-13 郑州云海信息技术有限公司 A kind of storage system and its date storage method, device, system and equipment
CN109213631A (en) * 2018-08-22 2019-01-15 郑州云海信息技术有限公司 A kind of transaction methods, device, equipment and readable storage medium storing program for executing
CN110780816B (en) * 2019-10-17 2023-01-10 苏州浪潮智能科技有限公司 Data synchronization method, device and medium
CN115422207B (en) * 2022-11-02 2023-03-24 苏州浪潮智能科技有限公司 Method, device, equipment and medium for reducing transmission quantity of mirror image data by double-layer cache
CN116107516B (en) * 2023-04-10 2023-07-11 苏州浪潮智能科技有限公司 Data writing method and device, solid state disk, electronic equipment and storage medium


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7111189B1 (en) * 2000-03-30 2006-09-19 Hewlett-Packard Development Company, L.P. Method for transaction log failover merging during asynchronous operations in a data storage network
US6912669B2 (en) * 2002-02-21 2005-06-28 International Business Machines Corporation Method and apparatus for maintaining cache coherency in a storage system
CN101471955A (en) * 2007-12-28 2009-07-01 英业达股份有限公司 Method for writing equipment data in dual-controller network storage circumstance
CN102137138A (en) * 2010-09-28 2011-07-27 华为技术有限公司 Method, device and system for cache collaboration
CN103577125A (en) * 2013-11-22 2014-02-12 浪潮(北京)电子信息产业有限公司 Cross controller group mirror image writing method and device applied to high-end disk array

Also Published As

Publication number Publication date
CN106293520A (en) 2017-01-04

Similar Documents

Publication Publication Date Title
CN106293520B (en) Method for processing an I/O request
CN104156361B A kind of method and system for realizing data synchronization
US6640291B2 (en) Apparatus and method for online data migration with remote copy
EP0602822B1 (en) Method and apparatus for remote data duplexing
CN103092778B (en) A kind of buffer memory mirror method of storage system
US10929431B2 (en) Collision handling during an asynchronous replication
CN106815251A (en) Distributed data base system, data bank access method and device
CN107025289A (en) The method and relevant device of a kind of data processing
US5940592A (en) Communication processing method in parallel computer and system therefor
CN105808374A (en) Snapshot processing method and associated equipment
JP2008046969A (en) Access monitoring method and device for shared memory
CN110515557A (en) A kind of cluster management method, device, equipment and readable storage medium storing program for executing
US8499133B2 (en) Cache management for increasing performance of high-availability multi-core systems
CN105353984B (en) High-availability cluster controller, control method and system based on soft magnetism disk array
CN101751230A (en) Equipment and method for calibrating time stamp of I/O (input/output) data
CN103970620B (en) Quasi continuity data replication method and device
CN102325171B (en) Data storage method in monitoring system and system
CN102521023B (en) Multi-system transaction integration processing method and transaction integration processing system
CN102103530A (en) Snapshot methods, snapshot device and snapshot system
CN108616591A (en) A kind of interface equipment and method for carrying out data exchange
CN109656754A (en) A kind of BBU failure power-off protection method, device, equipment and storage medium
CN104735386A (en) ADVB sending control circuit and implementation method
CN107643942B (en) State information storage method and device
CN106897024A (en) Method for writing data and device
CN109246202A (en) A kind of method and system for realizing storage dual-active using optical fiber switch

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant