CN107046575A - A cloud storage system and its high-density storage method - Google Patents

A cloud storage system and its high-density storage method

Info

Publication number
CN107046575A
CN107046575A CN201710250990.8A
Authority
CN
China
Prior art keywords
node
memory
disk
osd
memory nodes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710250990.8A
Other languages
Chinese (zh)
Other versions
CN107046575B (en
Inventor
金友兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Zhuo Shengyun Mdt Infotech Ltd
Original Assignee
Nanjing Zhuo Shengyun Mdt Infotech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Zhuo Shengyun Mdt Infotech Ltd filed Critical Nanjing Zhuo Shengyun Mdt Infotech Ltd
Priority to CN201710250990.8A priority Critical patent/CN107046575B/en
Publication of CN107046575A publication Critical patent/CN107046575A/en
Application granted granted Critical
Publication of CN107046575B publication Critical patent/CN107046575B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0653 Monitoring storage devices or systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a cloud storage system comprising several clients connected to a switch by network cables, characterized in that the switch is connected by network cables to a metadata node, a monitor node, and several OSD storage nodes, and each pair of OSD storage nodes is connected by SAS cables to a disk enclosure (JBOD). The high-density storage method of the system works as follows: each OSD storage node manages a portion of the disks in the enclosure; when one storage node crashes, the other storage node takes over all disks in the enclosure, ensuring that all data can still be read and written normally. When the crashed node recovers, the associated disks are taken back, preserving overall storage performance. The invention enables OSD storage nodes to provide high-density storage capacity, substantially reduces the cost of cloud storage, and improves storage performance.

Description

A cloud storage system and its high-density storage method
Technical field
The invention belongs to the field of computer applications, and in particular relates to a cloud storage system and its high-density storage method.
Background technology
With the rapid development of computer technology, cloud storage services are increasingly adopted by enterprises. A cloud storage system usually contains multiple server roles, typically including metadata nodes, monitor nodes, and OSD storage nodes. An OSD storage node, also called an object storage node, is where the main content of user data is stored. In a large-scale storage cluster, the data or file content saved by a user is divided into multiple objects. Each object is saved on some OSD storage node according to a certain algorithm or a decision by the metadata service. To prevent data loss, an object can be replicated into multiple copies saved on different OSD storage nodes; after a storage node crashes or is damaged, the object data can still be read and written through the other copies. This storage approach supports both the scaling of big data and the reliability of the data very well.
However, this storage mode is not suitable for providing too much storage space on a single OSD storage node. Suppose an OSD storage node holds a large number of hard disks and therefore a large amount of data. When that node crashes or is damaged, all of its disks stop serving. When the node later recovers, even if the data on its disks is not lost, the data on every disk lags behind the copies on other nodes, and the system must perform a large amount of data recovery and replication, which severely affects the cloud storage system's ability to serve external requests. If all the disks on that node are lost, a network data storm is even more likely, and the system can hardly provide any service at all. Although the recovery speed can be throttled to preserve the cluster's external service capacity, slowing recovery makes the recovery time too long, so if another device fails during that window, the probability of service interruption or data loss is considerable.
Thus, a common distributed storage system can tolerate the simultaneous failure of a single disk or a small number of disks, but the simultaneous failure of a large number of disks causes serious problems. Consequently, a single OSD storage node generally cannot contain too many disks. Given the rapid development of today's CPUs and memory, the inability of OSD storage nodes to provide high-density storage makes cloud storage too expensive.
The content of the invention
Purpose of the invention: to address the problems in the prior art, the present invention provides a cloud storage system and a high-density storage method that enable OSD storage nodes to achieve high-density storage capacity, substantially reduce the cost of cloud storage, and improve storage performance.
Technical solution: to solve the above technical problems, the present invention provides a cloud storage system comprising several clients. The clients are connected to a switch by network cables; the switch is connected by network cables to a metadata node, a monitor node, and several OSD storage nodes; and each pair of OSD storage nodes is connected by SAS cables to a disk enclosure (JBOD).
Further, the disk enclosure JBOD consists of a backplane and several hard disk drives, the hard disk drives being mounted on the backplane.
A high-density storage method using the cloud storage system described above comprises the following steps:
Step 1: Initialize. The two storage nodes each discover the disks of the JBOD device, and the disks are then given unique numbers;
Step 2: The two storage nodes each take over a certain proportion of the disks and then register with the monitor node;
Step 3: The storage nodes communicate with the monitor node over the network at regular intervals;
Step 4: The monitor node periodically collects storage node states and thereby detects that a storage node has failed;
Step 5: The failed node is removed from the cluster, the peer storage node is notified to take over all of its disks and registers with the monitor node, completing the takeover;
Step 6: Node recovery. After the storage node returns to normal, it registers with the monitor node;
Step 7: The monitor node notifies the peer storage node to relinquish its takeover of the relevant disks and checks whether the relinquishment succeeded. If it succeeded, go to Step 8; if not, keep relinquishing the relevant disks until it succeeds;
Step 8: The original storage node takes over its disks again and registers with the monitor node; the system has then returned to normal.
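The eight steps above can be sketched as a minimal monitor-driven takeover/recovery flow. The following Python sketch is illustrative only; all class and method names are hypothetical and not part of the patent.

```python
# Illustrative sketch of the monitor-driven takeover/recovery flow (Steps 1-8).
# All names are hypothetical; this is not the patented implementation.

class MonitorNode:
    def __init__(self):
        self.registered = {}          # node name -> set of disk ids it serves

    def register(self, node, disks):
        self.registered[node] = set(disks)

    def on_node_failure(self, failed, peer):
        """Step 5: remove the failed node; hand all its disks to the peer."""
        lost = self.registered.pop(failed, set())
        self.registered[peer] |= lost
        return lost

    def on_node_recovery(self, recovered, peer, disks):
        """Steps 6-8: peer relinquishes the disks; original node takes them back."""
        self.registered[peer] -= set(disks)
        self.registered[recovered] = set(disks)


# Steps 1-2: the two storage nodes discover the JBOD disks, the disks get
# unique numbers, and each node takes over a share of them.
monitor = MonitorNode()
all_disks = [f"disk-{i}" for i in range(6)]
monitor.register("node-A", all_disks[:3])
monitor.register("node-B", all_disks[3:])

# Steps 4-5: node-A fails; node-B takes over all of its disks.
taken = monitor.on_node_failure("node-A", "node-B")
assert monitor.registered["node-B"] == set(all_disks)

# Steps 6-8: node-A recovers and takes its original disks back.
monitor.on_node_recovery("node-A", "node-B", taken)
```

After recovery, each node again serves exactly its original share, which is the "final system recovers normal" state of Step 8.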
Further, in Step 1 the takeover ratio of the two storage nodes is computed as follows: a weight is calculated from the CPU core count and memory size of each of the two OSD storage nodes, and disks are automatically allocated in proportion to these weights for the OSD storage nodes to manage, where the weight is computed as: CPU core count * 50% + memory size * 50%.
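The weight formula above can be sketched in code. This assumes the intended formula is an evenly weighted 50/50 combination of CPU core count and memory size (the text notes the ratio is adjustable); the memory unit (GB here) is an assumption, as the patent does not specify one.

```python
# Sketch of the weight-based disk allocation: weight = cores * 50% + memory * 50%.
# Memory is taken in GB here (an assumption); the patent notes the 50/50 ratio
# may be adjusted as needed.

def node_weight(cpu_cores, mem_gb, cpu_ratio=0.5, mem_ratio=0.5):
    return cpu_cores * cpu_ratio + mem_gb * mem_ratio

def split_disks(n_disks, weight_a, weight_b):
    """Allocate n_disks between two nodes in proportion to their weights."""
    share_a = round(n_disks * weight_a / (weight_a + weight_b))
    return share_a, n_disks - share_a

w_a = node_weight(16, 64)   # 16 cores, 64 GB -> weight 40.0
w_b = node_weight(8, 32)    # 8 cores, 32 GB  -> weight 20.0
a, b = split_disks(60, w_a, w_b)   # a 60-slot JBOD split 2:1
```

With identical node configurations the weights are equal and the disks split evenly, matching the even-allocation case described in the initialization procedure.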
Compared with the prior art, the invention has the following advantages:
The implementation of the present invention is simple. A traditional dual-controller storage server can also take over hard disks, but because such a server usually stands alone rather than joining a cloud storage cluster, implementing high availability is extremely complex. For example, a dual-controller storage server generally needs three channels to judge whether its peer is alive: a primary heartbeat, a secondary heartbeat, and an isolation card, and it must guarantee that the three channels are never all disabled at once. Meanwhile, to prevent two nodes from writing to the same disk simultaneously, a complicated suicide (self-fencing) mechanism is also required. In this patent, because the devices are inside a cloud storage cluster, the decision procedure is simple and effective, equivalent to introducing the cluster monitor node as a third arbiter, but this arbitration is simpler and more efficient. In addition, when the monitoring agent on an OSD storage node cannot communicate with the cluster monitor node, the node refuses all read/write requests, ensuring that two OSD storage nodes never write to the same disk at the same time;
Therefore, the invention provides a high-availability scheme for the OSD storage nodes of a cloud storage system, improving the reliability and availability of the whole cloud service, making the cloud storage service more reliable and stable, while better supporting high-density disk enclosures and reducing storage cost.
Brief description of the drawings
Fig. 1 is a structural diagram of the present invention;
Fig. 2 is the overall flow chart of the present invention.
Embodiment
The present invention is further elucidated below with reference to the accompanying drawings and specific embodiments.
The present invention provides a high-density storage method for OSD storage nodes in a storage cluster. The system architecture is shown in Fig. 1. In the hardware connections, each OSD storage node has an HBA card, and each pair of OSD storage nodes jointly connects to one expansion enclosure (JBOD). All disks in the enclosure are recognized through the HBA cards, so both storage nodes can access all disks in the enclosure. The main idea is that each OSD storage node manages a portion of the disks in the enclosure; when one storage node crashes, the other takes over all disks in the enclosure, ensuring that all data can still be read and written normally. When the crashed node recovers, the associated disks are taken back, preserving overall storage performance. To achieve this, the process steps of the invention are as shown in Fig. 2:
1. Initialization: a) Each pair of OSD storage service ends independently discovers all disks in the expansion enclosure; each OSD storage node sees each disk as a device. b) Each disk is registered with a cluster-wide unique device number. c) The ownership of each disk is assigned by a software algorithm. Generally, if the two OSD storage nodes have identical configurations, the disks are divided evenly; if the configurations differ, disk ownership is allocated according to the performance ratio. d) Each OSD storage node mounts the disks it owns and registers them with the monitor node, after which read/write service can be provided. e) While running, each OSD storage node communicates with the monitor node at regular intervals and reports the state of each disk.
2. Crash takeover: a) When one node crashes, the cluster monitor node detects the failure and removes the node from the cluster, so no read/write request is routed to the failed node, while the peer node is notified to perform the takeover. b) The surviving OSD storage node, finding that its peer has left the cluster, takes over all disks in the enclosure; because the device numbers were all recorded earlier, the disks can be mounted directly. c) After the takeover completes, the node registers with the monitor node, so the taken-over disks can be read and written normally. This process completely avoids two nodes operating on one disk at the same time, achieving the goal of OSD storage node high availability.
3. After the failed OSD storage node recovers, it rejoins the cluster. When the monitor node finds that the failed node has recovered, it notifies the peer node to unmount the disks it took over. After the unmount completes, the recovered node takes its original disks back and registers with the monitor node. Because the data is still consistent at this point, no data recovery needs to be performed after the takeover, and the externally provided service capacity is not reduced.
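The cluster-wide unique disk numbering in step b) of the initialization might be keyed on a stable hardware identifier such as the disk serial number, so that both nodes refer to the same physical disk the same way regardless of Linux device names. The following sketch is an assumption: the patent requires only uniqueness and does not specify the numbering algorithm.

```python
# Sketch: assign every discovered disk a cluster-wide unique, stable number.
# The algorithm (sorted union of serial numbers) is an assumption; the patent
# only requires that each disk's number be unique within the cluster.

def assign_unique_numbers(discovered):
    """discovered: mapping of node name -> list of (serial, dev_path) it sees.

    Returns serial -> unique id. The result is identical no matter which node
    discovered a disk first, because ids are assigned over the sorted union
    of all serial numbers.
    """
    serials = sorted({serial for disks in discovered.values()
                      for serial, _ in disks})
    return {serial: idx for idx, serial in enumerate(serials)}

# Both nodes see the same JBOD disks, possibly under different /dev names.
seen = {
    "node-A": [("WD-001", "/dev/sdb"), ("WD-002", "/dev/sdc")],
    "node-B": [("WD-002", "/dev/sdb"), ("WD-001", "/dev/sdc")],
}
numbering = assign_unique_numbers(seen)
```

Because the numbering depends only on the serial numbers and not on the discovering node, the peer can later mount a taken-over disk under the same recorded identity, as the takeover procedure requires.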
A typical cloud storage cluster comprises metadata nodes, monitor nodes, and OSD storage nodes, of which the OSD storage nodes are by far the most numerous. This system replaces the original OSD storage nodes with OSD storage node groups; the hardware connections of the cluster are as shown above, and every piece of hardware is general-purpose. Specifically:
1. Expansion enclosure JBOD (Just a Bunch Of Disks): a storage device with multiple hard disk drives mounted on one backplane. A JBOD usually has many disk slots and can connect a large number of disks; for example, JBODs with 60 or 90 slots are becoming commercially available. Because a JBOD has no controller and no intelligent functions, its construction is simple, and it can achieve extremely high reliability.
2. Server nodes: each must be configured with an appropriate CPU count and memory size. Because the present invention supports high-density storage, the OSD storage nodes can be configured with larger CPU core counts and memory sizes. The two OSD storage nodes must be fitted with HBA cards and connected to the JBOD through SAS ports.
3. Network switch: the whole cluster is connected with a general-purpose switch and provides cloud storage service externally.
In this structure, within an OSD storage node group, storage nodes A and B can both access all disks on the connected disk enclosure JBOD; that is, in a Linux system, the dev devices of the relevant disks are directly visible. If an OSD storage node wants to manage a disk, it simply performs the Linux mount operation on the device and can then read and write the disk; if it wants to abandon read/write access to a disk, it simply performs the Linux umount operation.
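The mount/umount operations described above could be driven from a small management agent. The sketch below only builds the Linux `mount`/`umount` command lines; actually executing them requires root privileges and real devices, so execution is left behind an explicit flag. The paths are illustrative.

```python
# Sketch: take over / release a disk by mounting or unmounting its device,
# as described above for Linux. By default this is a dry run that returns the
# command; pass execute=True (with root and real devices) to actually run it.
import subprocess

def takeover_cmd(dev, mountpoint):
    # Take over a disk by mounting its device at the managed mountpoint.
    return ["mount", dev, mountpoint]

def release_cmd(mountpoint):
    # Abandon read/write access to a disk by unmounting it.
    return ["umount", mountpoint]

def run(cmd, execute=False):
    """Execute the command when execute=True; otherwise return it (dry run)."""
    if execute:
        subprocess.run(cmd, check=True)   # raises on a non-zero exit status
    return cmd
```

In the takeover flow, the surviving node would call `run(takeover_cmd(...), execute=True)` for each disk recorded under the failed peer's ownership, and the inverse `release_cmd` during the recovery handback.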
The present invention is a cloud storage system and its high-density storage method. The system is deployed on both OSD storage nodes; the flow and method that realize high availability are as follows:
1. A weight is calculated from the CPU core count and memory size of each of the two OSD storage nodes, and disks are automatically allocated in proportion for the OSD storage nodes to manage. The weight is computed as: CPU core count * 50% + memory size * 50%, but this ratio may be adjusted as needed;
2. After an OSD storage node starts, it registers its node state and the serial numbers of the disks it manages with the cluster monitor node. After successful registration, the OSD storage node uses these disks to accept client read/write requests and to save and read user data;
3. Every few seconds (configurable), an OSD storage node reports its own status information and the time of its last report to the cluster monitor node. If the cluster monitor node finds that an OSD storage node has not reported for a certain time, it judges the node to have failed and removes it from the cluster; then, whatever the actual state of the failed node, no read/write request will reach it;
4. In addition, every few seconds each OSD storage node obtains the overall state of the cluster through the monitor node to find out whether its peer has failed. If it determines that the peer has failed, it starts the disk takeover process: the program places all disks managed by the peer under this node's management and registers them in the cluster. The cluster can thus still serve all disks normally, while read/write requests never go to the failed node;
5. If the node monitoring program determines that its own node is out of touch with the cluster, the node accepts no read/write requests, which ensures that two nodes never write to the same disk at the same time;
The takeover process does not treat all of the peer's disks as a single unit; instead, it decides per disk whether to take that disk over. When a disk itself is damaged, the takeover process still starts, but because this node also cannot recognize or read the disk, the takeover fails, and the ordinary disk-failure alarm is raised as usual.
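The periodic reporting, timeout-based failure judgment, and self-fencing described in items 3-5 can be sketched as follows. The interval and timeout values are illustrative assumptions; the patent only says the reporting period is configurable.

```python
# Sketch: the monitor judges an OSD storage node failed when no status report
# has arrived within the timeout; a node self-fences (refuses all I/O) when it
# is out of touch with the monitor. Interval/timeout values are assumptions;
# timestamps are passed in explicitly so the logic is easy to follow.

REPORT_INTERVAL = 5      # seconds between status reports (configurable)
FAIL_TIMEOUT = 15        # monitor declares failure after this much silence

class HeartbeatMonitor:
    def __init__(self):
        self.last_report = {}          # node name -> time of last report

    def report(self, node, now):
        self.last_report[node] = now

    def failed_nodes(self, now):
        # Item 3: nodes silent longer than the timeout are judged failed.
        return [n for n, t in self.last_report.items()
                if now - t > FAIL_TIMEOUT]

def may_serve_io(last_monitor_contact, now):
    """Item 5: self-fencing - refuse reads/writes when out of touch."""
    return now - last_monitor_contact <= FAIL_TIMEOUT

mon = HeartbeatMonitor()
mon.report("node-A", now=0)
mon.report("node-B", now=0)
mon.report("node-B", now=10)       # node-A falls silent after t=0
```

Because the monitor's timeout and the node's self-fencing use the same silence criterion, a partitioned node stops serving I/O no later than the moment its peer is told to take over, which is what prevents two nodes from writing one disk simultaneously.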
To guard against transient network or service glitches on a node, after a failed node recovers, the inverse of the takeover operation is usually performed manually to restore full cluster service. This avoids disks oscillating back and forth between nodes in some situations. The main process of this recovery operation is:
1. After the crashed node returns to normal, it registers with the cluster monitor node.
2. The monitor node notifies the peer storage node to relinquish its takeover of the relevant disks. The peer usually stops reads and writes to those disks and, once they complete, executes umount to unmount them.
3. After the unmount succeeds, the recovered storage node re-executes the takeover of these disks, typically by directly performing mount, and registers with the monitor node.
The foregoing are merely embodiments of the present invention and are not intended to limit it. All equivalent substitutions made within the principles of the present invention shall fall within its scope of protection. Matters not elaborated in the present invention belong to the prior art known to those skilled in the art.

Claims (4)

1. A cloud storage system, comprising several clients, the clients being connected to a switch by network cables, characterized in that: the switch is connected by network cables to a metadata node, a monitor node, and several OSD storage nodes, and each pair of OSD storage nodes is connected by SAS cables to a disk enclosure JBOD.
2. The cloud storage system according to claim 1, characterized in that: the disk enclosure JBOD consists of a backplane and several hard disk drives, the hard disk drives being mounted on the backplane.
3. a kind of high density storage method with cloud storage system as described above, it is characterised in that comprise the following steps that:
Step one:Initialized first, two memory nodes have been found that JBOD equipment disks, uniqueness then is carried out to disk Numbering;
Step 2:The respective adapter certain proportion disk of two memory nodes, is then registered to monitor node;
Step 3:Memory node timing carries out network service with monitor node;
Step 4:By monitor node timing acquisition memory node state, then find that certain memory node fails;
Step 5:Failure node is won from cluster, notifies opposite end memory node to take over all disks, and noted to monitor node Volume, finally completes adapter process;
Step 6:Node recovery process, after first recovering normal to memory node, is registered to monitor node;
Step 7:Monitor node notifies opposite end memory node to cancel the adapter to associative disk, and judges whether to cancel successfully, such as Fruit, which is cancelled, successfully then enters step 8, continues the adapter cancelled to associative disk if not cancelling successfully until cancelling into Work(;
Step 8:Original memory node adapter associative disk, is registered to monitor node, and final system recovers normal.
4. The high-density storage method of a cloud storage system according to claim 3, characterized in that the takeover ratio of the two storage nodes in Step 1 is computed as follows: a weight is calculated from the CPU core count and memory size of each of the two OSD storage nodes, and disks are automatically allocated in proportion for the OSD storage nodes to manage, where the weight is computed as: CPU core count * 50% + memory size * 50%.
CN201710250990.8A 2017-04-18 2017-04-18 A kind of high density storage method for cloud storage system Active CN107046575B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710250990.8A CN107046575B (en) 2017-04-18 2017-04-18 A kind of high density storage method for cloud storage system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710250990.8A CN107046575B (en) 2017-04-18 2017-04-18 A kind of high density storage method for cloud storage system

Publications (2)

Publication Number Publication Date
CN107046575A true CN107046575A (en) 2017-08-15
CN107046575B CN107046575B (en) 2019-07-12

Family

ID=59544315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710250990.8A Active CN107046575B (en) 2017-04-18 2017-04-18 A kind of high density storage method for cloud storage system

Country Status (1)

Country Link
CN (1) CN107046575B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109976946A (en) * 2019-02-27 2019-07-05 深圳点猫科技有限公司 It is a kind of for educating the scheduling system history data restoration methods and device of cloud platform
WO2020135889A1 (en) * 2018-12-28 2020-07-02 杭州海康威视系统技术有限公司 Method for dynamic loading of disk and cloud storage system
CN111444157A (en) * 2019-01-16 2020-07-24 阿里巴巴集团控股有限公司 Distributed file system and data access method
CN111901415A (en) * 2020-07-27 2020-11-06 星辰天合(北京)数据科技有限公司 Data processing method and system, computer readable storage medium and processor
CN112579384A (en) * 2019-09-27 2021-03-30 杭州海康威视数字技术股份有限公司 Method, device and system for monitoring nodes of SAS domain and nodes
CN115988008A (en) * 2022-12-29 2023-04-18 江苏倍鼎网络科技有限公司 High-density storage method and system for cloud storage system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049225A (en) * 2013-01-05 2013-04-17 浪潮电子信息产业股份有限公司 Double-controller active-active storage system
WO2016070375A1 (en) * 2014-11-06 2016-05-12 华为技术有限公司 Distributed storage replication system and method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049225A (en) * 2013-01-05 2013-04-17 浪潮电子信息产业股份有限公司 Double-controller active-active storage system
WO2016070375A1 (en) * 2014-11-06 2016-05-12 华为技术有限公司 Distributed storage replication system and method
CN106062717A (en) * 2014-11-06 2016-10-26 华为技术有限公司 Distributed storage replication system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
金友兵: "SurFS Product Description", 《HTTPS://GITHUB.COM/SURCLOUDORG/SURFS/COMMITS/MASTER/SURFS%20PRODUCT%20DESCRIPTION.PDF》 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020135889A1 (en) * 2018-12-28 2020-07-02 杭州海康威视系统技术有限公司 Method for dynamic loading of disk and cloud storage system
CN111381766A (en) * 2018-12-28 2020-07-07 杭州海康威视系统技术有限公司 Method for dynamically loading disk and cloud storage system
CN111381766B (en) * 2018-12-28 2022-08-02 杭州海康威视系统技术有限公司 Method for dynamically loading disk and cloud storage system
CN111444157A (en) * 2019-01-16 2020-07-24 阿里巴巴集团控股有限公司 Distributed file system and data access method
CN111444157B (en) * 2019-01-16 2023-06-20 阿里巴巴集团控股有限公司 Distributed file system and data access method
CN109976946A (en) * 2019-02-27 2019-07-05 深圳点猫科技有限公司 It is a kind of for educating the scheduling system history data restoration methods and device of cloud platform
CN112579384A (en) * 2019-09-27 2021-03-30 杭州海康威视数字技术股份有限公司 Method, device and system for monitoring nodes of SAS domain and nodes
CN111901415A (en) * 2020-07-27 2020-11-06 星辰天合(北京)数据科技有限公司 Data processing method and system, computer readable storage medium and processor
CN111901415B (en) * 2020-07-27 2023-07-14 北京星辰天合科技股份有限公司 Data processing method and system, computer readable storage medium and processor
CN115988008A (en) * 2022-12-29 2023-04-18 江苏倍鼎网络科技有限公司 High-density storage method and system for cloud storage system

Also Published As

Publication number Publication date
CN107046575B (en) 2019-07-12

Similar Documents

Publication Publication Date Title
CN107046575A (en) A kind of cloud storage system and its high density storage method
CN103763383B (en) Integrated cloud storage system and its storage method
US11068350B2 (en) Reconciliation in sync replication
US6678788B1 (en) Data type and topological data categorization and ordering for a mass storage system
US6691209B1 (en) Topological data categorization and formatting for a mass storage system
US7627779B2 (en) Multiple hierarichal/peer domain file server with domain based, cross domain cooperative fault handling mechanisms
US6594775B1 (en) Fault handling monitor transparently using multiple technologies for fault handling in a multiple hierarchal/peer domain file server with domain centered, cross domain cooperative fault handling mechanisms
US6865157B1 (en) Fault tolerant shared system resource with communications passthrough providing high availability communications
US10496320B2 (en) Synchronous replication
US6578160B1 (en) Fault tolerant, low latency system resource with high level logging of system resource transactions and cross-server mirrored high level logging of system resource transactions
US7039827B2 (en) Failover processing in a storage system
US7219260B1 (en) Fault tolerant system shared system resource with state machine logging
CN1770110B (en) Method and system for lockless infinibandtm poll for I/O completion
CN108696569A (en) The system and method that data replicate are provided in NVMe-oF Ethernets SSD
CN106815298A (en) Distributed sharing file system based on block storage
US9471449B2 (en) Performing mirroring of a logical storage unit
TW200401970A (en) Method and apparatus for reliable failover involving incomplete raid disk writes in a clustering system
US20140337457A1 (en) Using network addressable non-volatile memory for high-performance node-local input/output
US20150288752A1 (en) Application server to nvram path
WO2017041616A1 (en) Data reading and writing method and device, double active storage system and realization method thereof
US20120144006A1 (en) Computer system, control method of computer system, and storage medium on which program is stored
CN107422989B (en) Server SAN system multi-copy reading method and storage system
CN104268038B (en) The high-availability system of disk array
CN103473328A (en) MYSQL (my structured query language)-based database cloud and construction method for same
US20170116096A1 (en) Preserving coredump data during switchover operation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant