WO2015180434A1 - Method, node and system for managing data in a database cluster - Google Patents

Method, node and system for managing data in a database cluster

Info

Publication number
WO2015180434A1
Authority
WO
WIPO (PCT)
Prior art keywords
node
ssd
dual port
transaction log
port
Prior art date
2014-05-30
Application number
PCT/CN2014/092140
Other languages
English (en)
French (fr)
Inventor
于巍
刘辉军
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
2014-11-25
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP14893080.3A (EP3147797B1)
Priority to KR1020167036343A (KR101983208B1)
Priority to JP2017514759A (JP6457633B2)
Priority to RU2016152176A (RU2653254C1)
Publication of WO2015180434A1
Priority to US15/365,728 (US10379977B2)
Priority to US16/455,087 (US10860447B2)

Classifications

    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/113 Details of archiving
    • G06F 16/13 File access structures, e.g. distributed indices
    • G06F 16/1734 Details of monitoring file system events, e.g. by the use of hooks, filter drivers, logs
    • G06F 16/182 Distributed file systems
    • G06F 16/1827 Management specifically adapted to NAS
    • G06F 16/25 Integrating or interfacing systems involving database management systems
    • G06F 16/275 Synchronous replication
    • G06F 11/1446 Point-in-time backing up or restoration of persistent data
    • G06F 11/1471 Saving, restoring, recovering or retrying involving logging of persistent data for recovery
    • G06F 11/202 Error detection or correction by redundancy in hardware where processing functionality is redundant
    • G06F 11/2023 Failover techniques
    • G06F 11/2046 Redundant processing components sharing persistent storage
    • G06F 11/2089 Redundant storage control functionality
    • G06F 11/2092 Techniques of failing over between control units
    • G06F 11/2094 Redundant storage or storage space
    • G06F 3/0619 Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G06F 3/064 Management of blocks
    • G06F 3/065 Replication mechanisms
    • G06F 3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F 3/0688 Non-volatile semiconductor memory arrays
    • G06F 2201/80 Database-specific techniques
    • G06F 2201/805 Real-time
    • G06F 2201/82 Solving problems relating to consistency
    • G06F 2213/0026 PCI express

Definitions

  • The present invention relates to the field of databases, and in particular to a method, a node and a system for managing data in a database cluster.
  • All nodes in a database cluster are connected to a shared disk array, which stores the data of all nodes. If a node in the cluster crashes, the data recently updated on that node is unavailable for a period of time, and some services are affected.
  • The usual solution is to write each node's log to the shared disk array. When a node crashes, other nodes read its log to perform recovery; if they cannot read the log, they must wait until the crashed node restarts and recovers its data before services resume, which is very time-consuming and disrupts the business. Moreover, because the volume of logs to be synchronized is large, writing log information to the shared disk array also severely degrades cluster performance.
  • The embodiments of the invention provide a method, a node and a system for managing data of a database cluster, aiming to solve the problem that the recovery process after a node goes down is time-consuming and affects service.
  • In a first aspect, a method for managing data of a database cluster is provided, where the database cluster includes a first dual-port solid-state drive (SSD), a second dual-port SSD, a first node, a second node, and a third node; the first dual-port SSD connects the first node and the second node, and the second dual-port SSD connects the second node and the third node. The method includes:
  • the first node writes a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or, the first node writes the transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and sends it to the third node, and the third node runs the pre-crash data of the first node;
  • the third node, the first node, and the second node can transmit data to one another.
  • Further, the method includes: the first node acquires, at a preset interval, the transaction log generated after a checkpoint on the first dual-port SSD, and archives the post-checkpoint transaction log to the shared disk array.
  • Further, when the first node and the second node are both database instances, the first node directly transmits data to the second node through the first dual-port SSD.
  • At least one port of the first dual-port SSD is a PCIe port, and at least one port of the second dual-port SSD is a PCIe port.
  • Further, the method includes: the second node starts another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the second node.
  • Further, the method includes: the third node starts another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the third node.
  • In a second aspect, a first node is provided. A database cluster includes a first dual-port solid-state drive (SSD), a second dual-port SSD, the first node, a second node, and a third node; the first dual-port SSD connects the first node and the second node, and the second dual-port SSD connects the second node and the third node. The first node includes:
  • a writing unit, configured to write a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or configured to write the transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and sends it to the third node, and the third node runs the pre-crash data of the first node;
  • the third node, the first node, and the second node can transmit data to one another.
  • Further, the first node includes: an acquiring unit, configured to acquire, at a preset interval, the transaction log generated after a checkpoint on the first dual-port SSD; and an archiving unit, configured to archive the post-checkpoint transaction log to the shared disk array.
  • Further, the first node includes: a transmission unit, configured to, when the first node and the second node are both database instances, directly transmit data to the second node through the first dual-port SSD.
  • At least one port of the first dual-port SSD is a PCIe port, and at least one port of the second dual-port SSD is a PCIe port.
  • Further, the first node includes: a startup unit, configured to start another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the second node.
  • Alternatively, the startup unit is configured to start another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the third node.
  • In a third aspect, a system for managing data of a database cluster is provided. The database cluster includes a first dual-port solid-state drive (SSD), a second dual-port SSD, a first node, a second node, and a third node; the first dual-port SSD connects the first node and the second node, and the second dual-port SSD connects the second node and the third node.
  • The first node is configured to write a transaction log into the first dual-port SSD.
  • The second node is configured to: when the first node goes down, acquire the transaction log from the first dual-port SSD and, according to the transaction log, run the pre-crash data of the first node; or, when the first node goes down, acquire the transaction log from the first dual-port SSD and send it to the third node, so that the third node runs the pre-crash data of the first node according to the transaction log.
  • The third node, the first node, and the second node can transmit data to one another.
  • Further, the first node includes: an acquiring unit, configured to acquire, at a preset interval, the transaction log generated after a checkpoint on the dual-port SSD; and an archiving unit, configured to archive the post-checkpoint transaction log to the shared disk array.
  • Further, the first node includes: a transmission unit, configured to, when the first node and the second node are both database instances, directly transmit data to the second node through the first dual-port SSD.
  • At least one port of the first dual-port SSD is a PCIe port, and at least one port of the second dual-port SSD is a PCIe port.
  • Further, the first node includes: a startup unit, configured to start another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the second node.
  • Alternatively, the startup unit is configured to start another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the third node.
  • An embodiment of the present invention provides a method for managing data of a database cluster. The database cluster includes a first dual-port solid-state drive (SSD), a second dual-port SSD, a first node, a second node, and a third node; the first dual-port SSD connects the first node and the second node, and the second dual-port SSD connects the second node and the third node. The first node writes a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or, after acquiring the transaction log from the first dual-port SSD when the first node goes down, the second node sends the transaction log to the third node, and the third node runs the pre-crash data of the first node. The third node, the first node, and the second node can transmit data to one another. Thus, when the first node crashes, the second node or the third node can read the crashed node's log information through the dual-port SSD and, after recovery, provide services in place of the first node, speeding up cluster recovery and improving system availability.
  • FIG. 1 is a structural diagram of a system for managing data of a database cluster according to an embodiment of the present invention.
  • FIG. 2 is a schematic structural diagram of database cluster data management according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of database cluster data management according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of database cluster data management according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a method for managing data of a database cluster according to an embodiment of the present invention.
  • FIG. 6 is a flowchart of a method for managing data of a database cluster according to an embodiment of the present invention.
  • FIG. 7 is a structural diagram of a first-node apparatus according to an embodiment of the present invention.
  • FIG. 8 is a structural diagram of a first-node apparatus according to an embodiment of the present invention.
  • FIG. 1 is a structural diagram of a system for managing data of a database cluster according to an embodiment of the present invention. As shown in FIG. 1, the system includes: a first dual-port solid-state drive (Solid State Disk, SSD) 101, a second dual-port SSD 102, a first node 103, a second node 104, and a third node 105. The first dual-port SSD 101 connects the first node 103 and the second node 104, the second dual-port SSD 102 connects the second node 104 and the third node 105, and the second node 104 is thus connected to both the first dual-port SSD 101 and the second dual-port SSD 102.
  • The first node 103 is configured to write a transaction log into the first dual-port SSD 101.
  • The second node 104 is configured to: when the first node 103 goes down, acquire the transaction log from the first dual-port SSD 101 and, according to the transaction log, run the pre-crash data of the first node 103; or, when the first node 103 goes down, acquire the transaction log from the first dual-port SSD 101 and send it to the third node 105, so that the third node 105 runs the pre-crash data of the first node 103 according to the transaction log.
  • The third node 105, the first node 103, and the second node 104 can transmit data to one another.
  • The first node 103 further includes: an acquiring unit, configured to acquire, at a preset interval, the transaction log generated after a checkpoint on the first dual-port SSD; and an archiving unit, configured to archive the post-checkpoint transaction log to the shared disk array.
  • FIGS. 2-4 are schematic structural diagrams of database cluster data management according to embodiments of the present invention.
  • As shown in FIG. 2, the first node generates transaction logs, which a background log-writing process writes into the SSD; the first node's log-archiving process periodically reads the post-checkpoint logs from the SSD and archives them to the shared disk array; the second node reads the first node's logs through the first dual-port SSD and, after recovery, works in place of the first node.
  • As shown in FIG. 3, after the first node goes down, the second node reads the first node's logs from the first dual-port SSD and starts a new database process to perform recovery; after recovery, the second node provides services externally. The second node can also read the first node's logs through the first dual-port SSD and deliver them to other nodes for recovery.
  • As shown in FIG. 4, after the first node goes down, the second node reads the first node's logs from the SSD and passes them to the third node; after obtaining the first node's data and performing the recovery operation, the third node provides services externally.
  • The first node further includes: a transmission unit, configured to, when the first node and the second node are both database instances, directly transmit data to the second node through the first dual-port SSD.
  • FIG. 5 is a schematic diagram of a method for managing data of a database cluster according to an embodiment of the present invention. As shown in FIG. 5, when the first node and the second node are both database instances, they can transmit data directly through the dual-port SSD, avoiding the slow transfer rates caused by network congestion and the like.
  • At least one port of the first dual-port SSD is a PCIe port, and at least one port of the second dual-port SSD is a PCIe port.
  • The first node further includes: a startup unit, configured to start another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the second node.
  • Alternatively, the startup unit is configured to start another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the third node.
  • An embodiment of the present invention provides a system for managing data of a database cluster. The database cluster includes a first dual-port solid-state drive (SSD), a second dual-port SSD, a first node, a second node, and a third node; the first dual-port SSD connects the first node and the second node, and the second dual-port SSD connects the second node and the third node. The first node writes a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or, after acquiring the transaction log from the first dual-port SSD when the first node goes down, the second node sends the transaction log to the third node, and the third node runs the pre-crash data of the first node. The third node, the first node, and the second node can transmit data to one another.
  • FIG. 6 is a flowchart of a method for managing data of a database cluster according to an embodiment of the present invention. The first dual-port SSD connects the first node and the second node, the second dual-port SSD connects the second node and the third node, and the second node is connected to both the first dual-port SSD and the second dual-port SSD. The method includes:
  • Step 601: The first node writes a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or, the first node writes the transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and sends it to the third node, and the third node runs the pre-crash data of the first node.
  • The third node, the first node, and the second node can transmit data to one another.
  • Further, the first node acquires, at a preset interval, the transaction log generated after a checkpoint on the first dual-port SSD, and archives the post-checkpoint transaction log to the shared disk array.
  • Further, when the first node and the second node are both database instances, the first node directly transmits data to the second node through the first dual-port SSD.
  • At least one port of the first dual-port SSD is a PCIe port, and at least one port of the second dual-port SSD is a PCIe port.
  • For details, refer to the descriptions of FIGS. 2-4.
  • Further, the method includes: the second node starts another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the second node.
  • Further, the method includes: the third node starts another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the third node.
  • For details, refer to the description of FIG. 5.
  • An embodiment of the present invention provides a method for managing data of a database cluster. The database cluster includes a first dual-port solid-state drive (SSD), a second dual-port SSD, a first node, a second node, and a third node; the first dual-port SSD connects the first node and the second node, and the second dual-port SSD connects the second node and the third node. The first node writes a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or, after acquiring the transaction log from the first dual-port SSD when the first node goes down, the second node sends the transaction log to the third node, and the third node runs the pre-crash data of the first node. The third node, the first node, and the second node can transmit data to one another. Thus, when the first node crashes, the second node or the third node can read the crashed node's log information through the dual-port SSD and, after recovery, provide services in place of the first node, speeding up cluster recovery and improving system availability.
  • FIG. 7 is a structural diagram of a first-node apparatus according to an embodiment of the present invention. As shown in FIG. 7, the database cluster includes a first dual-port solid-state drive (SSD), a second dual-port SSD, a first node, a second node, and a third node; the first dual-port SSD connects the first node and the second node, and the second dual-port SSD connects the second node and the third node. The first node includes:
  • a writing unit 701, configured to write a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or configured to write the transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and sends it to the third node, and the third node runs the pre-crash data of the first node;
  • the third node, the first node, and the second node can transmit data to one another.
  • Further, the first node includes: an acquiring unit, configured to acquire, at a preset interval, the transaction log generated after a checkpoint on the first dual-port SSD; and an archiving unit, configured to archive the post-checkpoint transaction log to the shared disk array.
  • Further, the first node includes: a transmission unit, configured to, when the first node and the second node are both database instances, directly transmit data to the second node through the first dual-port SSD.
  • At least one port of the first dual-port SSD is a PCIe port, and at least one port of the second dual-port SSD is a PCIe port.
  • Further, the first node includes: a startup unit, configured to start another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the second node; alternatively, the started database process is independent of the original database process on the third node.
  • For details, refer to the descriptions of FIGS. 2 to 5.
  • An embodiment of the present invention provides a first node. The first node writes a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or, after acquiring the transaction log from the first dual-port SSD when the first node goes down, the second node sends the transaction log to the third node, and the third node runs the pre-crash data of the first node. The third node, the first node, and the second node can transmit data to one another. Thus, when the first node crashes, the second node or the third node can read the crashed node's log information through the dual-port SSD and, after recovery, provide services in place of the first node, speeding up cluster recovery and improving system availability.
  • FIG. 8 is a structural diagram of a first-node apparatus according to an embodiment of the present invention. Referring to FIG. 8, a first node 800 according to an embodiment of the present invention is shown; the specific embodiments of the present invention do not limit the concrete implementation of the first node. The first node 800 includes: a processor 801, a communications interface 802, a memory 803, and a bus 804.
  • The processor 801, the communications interface 802, and the memory 803 communicate with one another via the bus 804.
  • The communications interface 802 is configured to communicate with other devices.
  • The processor 801 is configured to execute a program. Specifically, the program may include program code, and the program code includes computer operating instructions.
  • The processor 801 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
  • The memory 803 is configured to store the program. The memory 803 may be a volatile memory such as a random-access memory (RAM), or a non-volatile memory such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD).
  • The processor 801 executes the following method according to the program instructions stored in the memory 803: the first node writes a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or, the first node writes the transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and sends it to the third node, and the third node runs the pre-crash data of the first node.
  • The third node, the first node, and the second node can transmit data to one another.
  • Further, the first node acquires, at a preset interval, the transaction log generated after a checkpoint on the first dual-port SSD, and archives the post-checkpoint transaction log to the shared disk array.
  • Further, when the first node and the second node are both database instances, the first node directly transmits data to the second node through the first dual-port SSD.
  • At least one port of the first dual-port SSD is a PCIe port, and at least one port of the second dual-port SSD is a PCIe port.
  • Further, the second node starts another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the second node.
  • Further, the third node starts another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the third node.
  • An embodiment of the present invention provides a first node. The first node writes a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or, after acquiring the transaction log from the first dual-port SSD when the first node goes down, the second node sends the transaction log to the third node, and the third node runs the pre-crash data of the first node. The third node, the first node, and the second node can transmit data to one another. Thus, when the first node crashes, the second node or the third node can read the crashed node's log information through the dual-port SSD and, after recovery, provide services in place of the first node, speeding up cluster recovery and improving system availability.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Provided are a method, node and system for managing data in a database cluster. The database cluster includes a first dual-port (101) solid-state drive (SSD), a second dual-port (102) SSD, a first node (103), a second node (104), and a third node (105). The first dual-port SSD (101) connects the first node (103) and the second node (104), and the second dual-port SSD (102) connects the second node (104) and the third node (105). The first node (103) writes a transaction log into the first dual-port SSD (101), so that, when the first node (103) goes down, the second node (104) acquires the transaction log from the first dual-port SSD (101) and, according to the transaction log, runs the pre-crash data of the first node (103). Thus, when the first node (103) crashes, the second node (104) or the third node (105) can read the crashed node's log information through a dual-port SSD and, after recovery, provide services in place of the first node, speeding up cluster recovery and improving system availability.

Description

Method, node and system for managing data in a database cluster
Technical Field
The present invention relates to the field of databases, and in particular to a method, a node and a system for managing data in a database cluster.
Background
All nodes in a database cluster are connected to a shared disk array, which stores the data of all nodes. If a node in the cluster crashes, the data recently updated on that node is unavailable for a period of time, and some services are affected. The usual solution is to write each node's log to the shared disk array; when a node crashes, other nodes read its log to perform recovery. If they cannot read the log, they must wait until the crashed node restarts and recovers its data before services resume, which is very time-consuming and disrupts the business. Moreover, because the volume of logs to be synchronized is large, writing log information to the shared disk array also severely degrades cluster performance.
Technical Problem
The embodiments of the present invention provide a method, a node and a system for managing data of a database cluster, aiming to solve the problem that the recovery process after a node goes down is time-consuming and affects service.
Technical Solution
In a first aspect, a method for managing data of a database cluster is provided. The database cluster includes a first dual-port solid-state drive (SSD), a second dual-port SSD, a first node, a second node, and a third node; the first dual-port SSD connects the first node and the second node, and the second dual-port SSD connects the second node and the third node. The method includes:
the first node writes a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or,
the first node writes a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and then sends the transaction log to the third node, and the third node runs the pre-crash data of the first node;
the third node, the first node, and the second node can transmit data to one another.
With reference to the first aspect, in a first possible implementation of the first aspect, the method further includes:
the first node acquires, at a preset interval, the transaction log generated after a checkpoint on the first dual-port SSD, and archives the post-checkpoint transaction log to a shared disk array.
With reference to the first aspect or its first possible implementation, in a second possible implementation of the first aspect, the method further includes:
when the first node and the second node are both database instances, the first node directly transmits data to the second node through the first dual-port SSD.
With reference to the second possible implementation of the first aspect, in a third possible implementation of the first aspect, at least one port of the first dual-port SSD is a PCIe port, and at least one port of the second dual-port SSD is a PCIe port.
With reference to the first aspect or any one of its first to third possible implementations, in a fourth possible implementation of the first aspect, the method further includes:
the second node starts another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the second node.
With reference to the first aspect or any one of its first to third possible implementations, in a fifth possible implementation of the first aspect, the method further includes:
the third node starts another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the third node.
In a second aspect, a first node is provided. A database cluster includes a first dual-port solid-state drive (SSD), a second dual-port SSD, the first node, a second node, and a third node; the first dual-port SSD connects the first node and the second node, and the second dual-port SSD connects the second node and the third node. The first node includes:
a writing unit, configured to write a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or,
configured to write a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and then sends the transaction log to the third node, and the third node runs the pre-crash data of the first node;
the third node, the first node, and the second node can transmit data to one another.
With reference to the second aspect, in a first possible implementation of the second aspect, the first node further includes:
an acquiring unit, configured to acquire, at a preset interval, the transaction log generated after a checkpoint on the first dual-port SSD;
an archiving unit, configured to archive the post-checkpoint transaction log to a shared disk array.
With reference to the second aspect or its first possible implementation, in a second possible implementation of the second aspect, the first node further includes:
a transmission unit, configured to, when the first node and the second node are both database instances, directly transmit data to the second node through the first dual-port SSD.
With reference to the second possible implementation of the second aspect, in a third possible implementation of the second aspect, at least one port of the first dual-port SSD is a PCIe port, and at least one port of the second dual-port SSD is a PCIe port.
With reference to the second aspect or any one of its first to third possible implementations, in a fourth possible implementation of the second aspect, the first node further includes:
a startup unit, configured to start another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the second node.
With reference to the second aspect or any one of its first to third possible implementations, in a fifth possible implementation of the second aspect, the first node further includes:
a startup unit, configured to start another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the third node.
In a third aspect, a system for managing data of a database cluster is provided. The database cluster includes a first dual-port solid-state drive (SSD), a second dual-port SSD, a first node, a second node, and a third node; the first dual-port SSD connects the first node and the second node, and the second dual-port SSD connects the second node and the third node.
The first node is configured to write a transaction log into the first dual-port SSD.
The second node is configured to: when the first node goes down, acquire the transaction log from the first dual-port SSD and, according to the transaction log, run the pre-crash data of the first node; or,
when the first node goes down, acquire the transaction log from the first dual-port SSD and send the transaction log to the third node, so that the third node runs the pre-crash data of the first node according to the transaction log.
The third node, the first node, and the second node can transmit data to one another.
With reference to the third aspect, in a first possible implementation of the third aspect, the first node further includes:
an acquiring unit, configured to acquire, at a preset interval, the transaction log generated after a checkpoint on the dual-port SSD;
an archiving unit, configured to archive the post-checkpoint transaction log to a shared disk array.
With reference to the third aspect or its first possible implementation, in a second possible implementation of the third aspect, the first node further includes:
a transmission unit, configured to, when the first node and the second node are both database instances, directly transmit data to the second node through the first dual-port SSD.
With reference to the second possible implementation of the third aspect, in a third possible implementation of the third aspect, at least one port of the first dual-port SSD is a PCIe port, and at least one port of the second dual-port SSD is a PCIe port.
With reference to the third aspect or any one of its first to third possible implementations, in a fourth possible implementation of the third aspect, the first node further includes: a startup unit, configured to start another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the second node.
With reference to the third aspect or any one of its first to third possible implementations, in a fifth possible implementation of the third aspect, the first node further includes: a startup unit, configured to start another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the third node.
Beneficial Effects
An embodiment of the present invention provides a method for managing data of a database cluster. The database cluster includes a first dual-port solid-state drive (SSD), a second dual-port SSD, a first node, a second node, and a third node; the first dual-port SSD connects the first node and the second node, and the second dual-port SSD connects the second node and the third node. The first node writes a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or, after acquiring the transaction log from the first dual-port SSD when the first node goes down, the second node sends the transaction log to the third node, and the third node runs the pre-crash data of the first node. The third node, the first node, and the second node can transmit data to one another. Thus, when the first node crashes, the second node or the third node can read the crashed node's log information through the dual-port SSD and, after recovery, provide services in place of the first node, speeding up cluster recovery and improving system availability.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a structural diagram of a system for managing data of a database cluster according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of database cluster data management according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of database cluster data management according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of database cluster data management according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a method for managing data of a database cluster according to an embodiment of the present invention;
FIG. 6 is a flowchart of a method for managing data of a database cluster according to an embodiment of the present invention;
FIG. 7 is a structural diagram of a first-node apparatus according to an embodiment of the present invention;
FIG. 8 is a structural diagram of a first-node apparatus according to an embodiment of the present invention.
Description of Embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to FIG. 1, FIG. 1 is a structural diagram of a system for managing data of a database cluster according to an embodiment of the present invention. As shown in FIG. 1, the system includes:
a first dual-port solid-state drive (Solid State Disk, SSD) 101, a second dual-port SSD 102, a first node 103, a second node 104, and a third node 105. The first dual-port SSD 101 connects the first node 103 and the second node 104, the second dual-port SSD 102 connects the second node 104 and the third node 105, and the second node 104 is thus connected to both the first dual-port SSD 101 and the second dual-port SSD 102.
The first node 103 is configured to write a transaction log into the first dual-port SSD 101.
The second node 104 is configured to: when the first node 103 goes down, acquire the transaction log from the first dual-port SSD 101 and, according to the transaction log, run the pre-crash data of the first node 103; or,
when the first node 103 goes down, acquire the transaction log from the first dual-port SSD 101 and send it to the third node 105, so that the third node 105 runs the pre-crash data of the first node 103 according to the transaction log.
The third node 105, the first node 103, and the second node 104 can transmit data to one another.
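The chained layout of FIG. 1, in which each dual-port SSD bridges exactly one pair of adjacent nodes, determines which node can take over for which. The following is a minimal illustrative sketch in Python; the class and node names are our own and the patent prescribes no such data structure:

    from dataclasses import dataclass, field

    @dataclass
    class DualPortSSD:
        name: str
        ports: tuple   # names of the two nodes attached to the two ports

    @dataclass
    class Cluster:
        ssds: list = field(default_factory=list)

        def peers_of(self, node):
            # Nodes that can read this node's logs directly over a shared SSD.
            return {p for s in self.ssds if node in s.ports
                      for p in s.ports if p != node}

    # Topology of FIG. 1: SSD 101 bridges nodes 103/104, SSD 102 bridges 104/105.
    cluster = Cluster([DualPortSSD("ssd101", ("node103", "node104")),
                       DualPortSSD("ssd102", ("node104", "node105"))])
    assert cluster.peers_of("node103") == {"node104"}
    assert cluster.peers_of("node104") == {"node103", "node105"}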
The first node 103 further includes:
an acquiring unit, configured to acquire, at a preset interval, the transaction log generated after a checkpoint on the first dual-port SSD;
an archiving unit, configured to archive the post-checkpoint transaction log to the shared disk array.
Specifically, refer to FIGS. 2-4, which are schematic structural diagrams of database cluster data management according to embodiments of the present invention. As shown in FIG. 2, the first node generates transaction logs, which a background log-writing process writes into the SSD; the first node's log-archiving process periodically reads the post-checkpoint logs from the SSD and archives them to the shared disk array; the second node reads the first node's logs through the first dual-port SSD and, after recovery, works in place of the first node.
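As a concrete illustration of this write path, the sketch below shows a background log-writing thread that appends length-prefixed transaction-log records to a file and flushes each one to the device. It is a minimal sketch under our own assumptions: the file name, the record format, and the use of os.fsync for durability are illustrative, and in the described scheme the file would reside on the first dual-port SSD.

    import os, queue, struct, threading

    class LogWriter(threading.Thread):
        # Background writer: drains a queue and appends WAL records durably.
        def __init__(self, path="node1.wal"):  # would live on the dual-port SSD
            super().__init__(daemon=True)
            self.records = queue.Queue()
            self.fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)

        def append(self, payload: bytes):
            self.records.put(payload)

        def run(self):
            while True:
                rec = self.records.get()
                # Length prefix lets a peer node re-parse the log after a crash.
                os.write(self.fd, struct.pack("<I", len(rec)) + rec)
                os.fsync(self.fd)  # record is durable before it is acknowledged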
As shown in FIG. 3, after the first node goes down, the second node reads the first node's logs from the first dual-port SSD and starts a new database process to perform recovery; after recovery, the second node provides services externally. The second node can also read the first node's logs through the first dual-port SSD and deliver them to other nodes for recovery.
As shown in FIG. 4, after the first node goes down, the second node reads the first node's logs from the SSD and passes them to the third node; after obtaining the first node's data and performing the recovery operation, the third node provides services externally.
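The two recovery paths of FIG. 3 and FIG. 4 differ only in who replays the log. Below is a minimal sketch of both, reading the length-prefixed records produced by the writer sketch above; apply_record and relay are placeholders for the database's own replay routine and node-to-node transport, neither of which the patent specifies:

    import struct

    def read_log(path):
        # Parse the records the crashed node left on the shared dual-port SSD.
        with open(path, "rb") as f:
            while True:
                header = f.read(4)
                if len(header) < 4:
                    break                   # clean end (or truncated tail)
                (size,) = struct.unpack("<I", header)
                yield f.read(size)

    def take_over(path, apply_record, relay=None):
        for rec in read_log(path):
            if relay is not None:
                relay(rec)          # FIG. 4: forward the log to the third node
            else:
                apply_record(rec)   # FIG. 3: replay locally on the second node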
The first node further includes:
a transmission unit, configured to, when the first node and the second node are both database instances, directly transmit data to the second node through the first dual-port SSD.
Specifically, refer to FIG. 5, which is a schematic diagram of a method for managing data of a database cluster according to an embodiment of the present invention.
As shown in FIG. 5, when the first node and the second node are both database instances, they can transmit data directly through the dual-port SSD, avoiding the slow transfer rates caused by network congestion and the like.
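One plausible realization of such SSD-mediated exchange is a mailbox file on the shared device that one instance publishes atomically and the other polls. This is purely an illustrative assumption, since the patent states only that the two database instances exchange data directly through the dual-port SSD instead of over the network:

    import json, os, time

    MAILBOX = "mailbox.json"  # would reside on the dual-port SSD shared by both nodes

    def send(msg):
        tmp = MAILBOX + ".tmp"
        with open(tmp, "w") as f:
            json.dump(msg, f)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, MAILBOX)  # atomic publish: the peer never sees a torn write

    def receive(timeout=5.0, poll=0.05):
        deadline = time.time() + timeout
        while time.time() < deadline:
            if os.path.exists(MAILBOX):
                with open(MAILBOX) as f:
                    return json.load(f)
            time.sleep(poll)
        raise TimeoutError("no message on the shared SSD")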
Optionally, at least one port of the first dual-port SSD is a PCIe port, and at least one port of the second dual-port SSD is a PCIe port.
Optionally, the first node further includes:
a startup unit, configured to start another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the second node.
Optionally, the first node further includes:
a startup unit, configured to start another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the third node.
An embodiment of the present invention provides a system for managing data of a database cluster. The database cluster includes a first dual-port solid-state drive (SSD), a second dual-port SSD, a first node, a second node, and a third node; the first dual-port SSD connects the first node and the second node, and the second dual-port SSD connects the second node and the third node. The first node writes a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or, after acquiring the transaction log from the first dual-port SSD when the first node goes down, the second node sends the transaction log to the third node, and the third node runs the pre-crash data of the first node. The third node, the first node, and the second node can transmit data to one another. Thus, when the first node crashes, the second node or the third node can read the crashed node's log information through the dual-port SSD and, after recovery, provide services in place of the first node, speeding up cluster recovery and improving system availability.
Referring to FIG. 6, FIG. 6 is a flowchart of a method for managing data of a database cluster according to an embodiment of the present invention. The first dual-port SSD connects the first node and the second node, the second dual-port SSD connects the second node and the third node, and the second node is connected to both the first dual-port SSD and the second dual-port SSD. The method includes:
Step 601: The first node writes a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or,
the first node writes a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and then sends the transaction log to the third node, and the third node runs the pre-crash data of the first node;
the third node, the first node, and the second node can transmit data to one another.
The method further includes:
the first node acquires, at a preset interval, the transaction log generated after a checkpoint on the first dual-port SSD, and archives the post-checkpoint transaction log to the shared disk array.
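A minimal sketch of that periodic archiving loop follows, assuming a checkpoint_offset callback that reports where the last checkpoint ended in the log file; the interval, paths, and segment naming are illustrative choices, not details fixed by the patent:

    import time

    def archive_loop(wal_path="node1.wal", archive_dir="shared_array",
                     period_s=60, checkpoint_offset=lambda: 0):
        while True:
            start = checkpoint_offset()      # archive only post-checkpoint records
            with open(wal_path, "rb") as src:
                src.seek(start)
                tail = src.read()
            if tail:
                seg = "%s/wal-%d.seg" % (archive_dir, int(time.time()))
                with open(seg, "wb") as dst:
                    dst.write(tail)          # lands on the shared disk array
            time.sleep(period_s)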
The method further includes:
when the first node and the second node are both database instances, the first node directly transmits data to the second node through the first dual-port SSD.
At least one port of the first dual-port SSD is a PCIe port, and at least one port of the second dual-port SSD is a PCIe port.
For details, refer to the descriptions of FIGS. 2-4.
The method further includes:
the second node starts another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the second node.
The method further includes:
the third node starts another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the third node.
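Both variants hinge on running recovery in a database process that is independent of the node's original one, so the takeover node keeps serving its existing workload while replaying the crashed node's log. A minimal sketch, where target would be a replay routine such as the take_over helper sketched earlier:

    from multiprocessing import Process

    def start_recovery(target, wal_path, apply_record, relay=None):
        # A separate OS process, independent of the original database process,
        # which continues serving its own workload undisturbed.
        p = Process(target=target, args=(wal_path, apply_record, relay))
        p.start()
        return p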
For details, refer to the description of FIG. 5.
An embodiment of the present invention provides a method for managing data of a database cluster. The database cluster includes a first dual-port solid-state drive (SSD), a second dual-port SSD, a first node, a second node, and a third node; the first dual-port SSD connects the first node and the second node, and the second dual-port SSD connects the second node and the third node. The first node writes a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or, after acquiring the transaction log from the first dual-port SSD when the first node goes down, the second node sends the transaction log to the third node, and the third node runs the pre-crash data of the first node. The third node, the first node, and the second node can transmit data to one another. Thus, when the first node crashes, the second node or the third node can read the crashed node's log information through the dual-port SSD and, after recovery, provide services in place of the first node, speeding up cluster recovery and improving system availability.
Referring to FIG. 7, FIG. 7 is a structural diagram of a first-node apparatus according to an embodiment of the present invention. As shown in FIG. 7,
the database cluster includes a first dual-port solid-state drive (SSD), a second dual-port SSD, a first node, a second node, and a third node; the first dual-port SSD connects the first node and the second node, and the second dual-port SSD connects the second node and the third node. The first node includes:
a writing unit 701, configured to write a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or,
configured to write the transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and then sends it to the third node, and the third node runs the pre-crash data of the first node;
the third node, the first node, and the second node can transmit data to one another.
Optionally, the first node further includes:
an acquiring unit, configured to acquire, at a preset interval, the transaction log generated after a checkpoint on the first dual-port SSD;
an archiving unit, configured to archive the post-checkpoint transaction log to the shared disk array.
Optionally, the first node further includes:
a transmission unit, configured to, when the first node and the second node are both database instances, directly transmit data to the second node through the first dual-port SSD.
At least one port of the first dual-port SSD is a PCIe port, and at least one port of the second dual-port SSD is a PCIe port.
Optionally, the first node further includes:
a startup unit, configured to start another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the second node.
Optionally, the first node further includes:
a startup unit, configured to start another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the third node.
For details, refer to the descriptions of FIG. 2 to FIG. 5.
An embodiment of the present invention provides a first node. The first node writes a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or, after acquiring the transaction log from the first dual-port SSD when the first node goes down, the second node sends the transaction log to the third node, and the third node runs the pre-crash data of the first node. The third node, the first node, and the second node can transmit data to one another. Thus, when the first node crashes, the second node or the third node can read the crashed node's log information through the dual-port SSD and, after recovery, provide services in place of the first node, speeding up cluster recovery and improving system availability.
FIG. 8 is a structural diagram of a first-node apparatus according to an embodiment of the present invention. Referring to FIG. 8, a first node 800 according to an embodiment of the present invention is shown; the specific embodiments of the present invention do not limit the concrete implementation of the first node. The first node 800 includes:
a processor 801, a communications interface 802, a memory 803, and a bus 804.
The processor 801, the communications interface 802, and the memory 803 communicate with one another via the bus 804.
The communications interface 802 is configured to communicate with other devices;
the processor 801 is configured to execute a program.
Specifically, the program may include program code, and the program code includes computer operating instructions.
The processor 801 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
The memory 803 is configured to store the program. The memory 803 may be a volatile memory such as a random-access memory (RAM), or a non-volatile memory such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD). According to the program instructions stored in the memory 803, the processor 801 executes the following method:
the first node writes a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or,
the first node writes a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and then sends the transaction log to the third node, and the third node runs the pre-crash data of the first node;
the third node, the first node, and the second node can transmit data to one another.
The method further includes:
the first node acquires, at a preset interval, the transaction log generated after a checkpoint on the first dual-port SSD, and archives the post-checkpoint transaction log to the shared disk array.
The method further includes:
when the first node and the second node are both database instances, the first node directly transmits data to the second node through the first dual-port SSD.
At least one port of the first dual-port SSD is a PCIe port, and at least one port of the second dual-port SSD is a PCIe port.
The method further includes:
the second node starts another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the second node.
The method further includes:
the third node starts another database process to run the pre-crash data of the first node, where that database process is independent of the original database process on the third node.
An embodiment of the present invention provides a first node. The first node writes a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or, after acquiring the transaction log from the first dual-port SSD when the first node goes down, the second node sends the transaction log to the third node, and the third node runs the pre-crash data of the first node. The third node, the first node, and the second node can transmit data to one another. Thus, when the first node crashes, the second node or the third node can read the crashed node's log information through the dual-port SSD and, after recovery, provide services in place of the first node, speeding up cluster recovery and improving system availability.
The foregoing descriptions are merely preferred specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (18)

  1. A method for managing data of a database cluster, wherein the database cluster comprises a first dual-port solid-state drive (SSD), a second dual-port SSD, a first node, a second node, and a third node; the first dual-port SSD connects the first node and the second node, and the second dual-port SSD connects the second node and the third node; and the method comprises:
    writing, by the first node, a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or,
    writing, by the first node, a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and then sends the transaction log to the third node, and the third node runs the pre-crash data of the first node;
    wherein the third node, the first node, and the second node can transmit data to one another.
  2. The method according to claim 1, further comprising:
    acquiring, by the first node at a preset interval, the transaction log generated after a checkpoint on the first dual-port SSD, and archiving the post-checkpoint transaction log to a shared disk array.
  3. The method according to claim 1 or 2, further comprising:
    when the first node and the second node are both database instances, directly transmitting, by the first node, data to the second node through the first dual-port SSD.
  4. The method according to claim 3, wherein at least one port of the first dual-port SSD is a PCIe port, and at least one port of the second dual-port SSD is a PCIe port.
  5. The method according to any one of claims 1 to 4, further comprising:
    starting, by the second node, another database process to run the pre-crash data of the first node, wherein that database process is independent of the original database process on the second node.
  6. The method according to any one of claims 1 to 4, further comprising:
    starting, by the third node, another database process to run the pre-crash data of the first node, wherein that database process is independent of the original database process on the third node.
  7. A first node, wherein a database cluster comprises a first dual-port solid-state drive (SSD), a second dual-port SSD, the first node, a second node, and a third node; the first dual-port SSD connects the first node and the second node, and the second dual-port SSD connects the second node and the third node; and the first node comprises:
    a writing unit, configured to write a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and, according to the transaction log, runs the pre-crash data of the first node; or,
    configured to write a transaction log into the first dual-port SSD, so that, when the first node goes down, the second node acquires the transaction log from the first dual-port SSD and then sends the transaction log to the third node, and the third node runs the pre-crash data of the first node;
    wherein the third node, the first node, and the second node can transmit data to one another.
  8. The first node according to claim 7, further comprising:
    an acquiring unit, configured to acquire, at a preset interval, the transaction log generated after a checkpoint on the first dual-port SSD;
    an archiving unit, configured to archive the post-checkpoint transaction log to a shared disk array.
  9. The first node according to claim 7 or 8, further comprising:
    a transmission unit, configured to, when the first node and the second node are both database instances, directly transmit data to the second node through the first dual-port SSD.
  10. The first node according to claim 9, wherein at least one port of the first dual-port SSD is a PCIe port, and at least one port of the second dual-port SSD is a PCIe port.
  11. The first node according to any one of claims 7 to 10, further comprising:
    a startup unit, configured to start another database process to run the pre-crash data of the first node, wherein that database process is independent of the original database process on the second node.
  12. The first node according to any one of claims 7 to 10, further comprising:
    a startup unit, configured to start another database process to run the pre-crash data of the first node, wherein that database process is independent of the original database process on the third node.
  13. A system for managing data of a database cluster, wherein the database cluster comprises a first dual-port solid-state drive (SSD), a second dual-port SSD, a first node, a second node, and a third node; the first dual-port SSD connects the first node and the second node, and the second dual-port SSD connects the second node and the third node;
    the first node is configured to write a transaction log into the first dual-port SSD;
    the second node is configured to: when the first node goes down, acquire the transaction log from the first dual-port SSD and, according to the transaction log, run the pre-crash data of the first node; or,
    when the first node goes down, acquire the transaction log from the first dual-port SSD and send the transaction log to the third node, so that the third node runs the pre-crash data of the first node according to the transaction log;
    wherein the third node, the first node, and the second node can transmit data to one another.
  14. The system according to claim 13, wherein the first node further comprises:
    an acquiring unit, configured to acquire, at a preset interval, the transaction log generated after a checkpoint on the dual-port SSD;
    an archiving unit, configured to archive the post-checkpoint transaction log to a shared disk array.
  15. The system according to claim 13 or 14, wherein the first node further comprises:
    a transmission unit, configured to, when the first node and the second node are both database instances, directly transmit data to the second node through the first dual-port SSD.
  16. The system according to claim 15, wherein at least one port of the first dual-port SSD is a PCIe port, and at least one port of the second dual-port SSD is a PCIe port.
  17. The system according to any one of claims 13 to 16, wherein the first node further comprises:
    a startup unit, configured to start another database process to run the pre-crash data of the first node, wherein that database process is independent of the original database process on the second node.
  18. The system according to any one of claims 13 to 16, wherein the first node further comprises:
    a startup unit, configured to start another database process to run the pre-crash data of the first node, wherein that database process is independent of the original database process on the third node.
PCT/CN2014/092140 2014-05-30 2014-11-25 Method, node and system for managing data in a database cluster WO2015180434A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
EP14893080.3A EP3147797B1 (en) 2014-05-30 2014-11-25 Data management method, node and system for database cluster
KR1020167036343A KR101983208B1 (ko) 2014-05-30 2014-11-25 데이터 관리 방법, 노드, 그리고 데이터베이스 클러스터를 위한 시스템
JP2017514759A JP6457633B2 (ja) 2014-05-30 2014-11-25 データベース・クラスタのデータ管理方法、ノード、及びシステム
RU2016152176A RU2653254C1 (ru) 2014-05-30 2014-11-25 Способ, узел и система управления данными для кластера базы данных
US15/365,728 US10379977B2 (en) 2014-05-30 2016-11-30 Data management method, node, and system for database cluster
US16/455,087 US10860447B2 (en) 2014-05-30 2019-06-27 Database cluster architecture based on dual port solid state disk

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410242052.X 2014-05-30
CN201410242052.XA CN103984768B (zh) Method, node and system for managing data in a database cluster

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/365,728 Continuation US10379977B2 (en) 2014-05-30 2016-11-30 Data management method, node, and system for database cluster

Publications (1)

Publication Number Publication Date
WO2015180434A1 (zh)

Family

Family ID: 51276740

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/092140 WO2015180434A1 (zh) Method, node and system for managing data in a database cluster

Country Status (7)

Country Link
US (2) US10379977B2 (zh)
EP (1) EP3147797B1 (zh)
JP (1) JP6457633B2 (zh)
KR (1) KR101983208B1 (zh)
CN (1) CN103984768B (zh)
RU (1) RU2653254C1 (zh)
WO (1) WO2015180434A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103984768B (zh) 2014-05-30 2017-09-29 Huawei Technologies Co., Ltd. Method, node and system for managing data in a database cluster
CN106034137A (zh) * 2015-03-09 2016-10-19 Alibaba Group Holding Ltd. Intelligent scheduling method for a distributed system, and distributed service system
CN106960060B (zh) * 2017-04-10 2020-07-31 Juhaokan Technology Co., Ltd. Database cluster management method and apparatus
CN109697110B (zh) * 2017-10-20 2023-01-06 Alibaba Group Holding Ltd. Transaction coordination processing system, method and apparatus, and electronic device
US10754798B1 (en) * 2019-09-11 2020-08-25 International Business Machines Corporation Link speed recovery in a data storage system
CN111639008B (zh) * 2020-05-29 2023-08-25 Hangzhou Hikvision System Technology Co., Ltd. File system state monitoring method and apparatus based on a dual-port SSD, and electronic device
KR102428587B1 (ko) 2022-04-13 2022-08-03 B2EN Co., Ltd. Apparatus and method for guaranteeing transaction availability and performance based on a microservice architecture
US11983428B2 (en) 2022-06-07 2024-05-14 Western Digital Technologies, Inc. Data migration via data storage device peer channel

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050187891A1 (en) * 2004-02-06 2005-08-25 Johnson Charles S. Transaction processing apparatus and method
US20070130220A1 (en) * 2005-12-02 2007-06-07 Tsunehiko Baba Degraded operation technique for error in shared nothing database management system
US20110113279A1 (en) * 2009-11-12 2011-05-12 International Business Machines Corporation Method Apparatus and System for a Redundant and Fault Tolerant Solid State Disk
CN102236623A (zh) * 2010-04-22 2011-11-09 Sony Corporation Signal control device and signal control method
CN103984768A (zh) * 2014-05-30 2014-08-13 Huawei Technologies Co., Ltd. Method, node and system for managing data in a database cluster

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0991183A (ja) * 1995-09-27 1997-04-04 Toshiba Corp Database recovery device
JP3094888B2 (ja) * 1996-01-26 2000-10-03 Mitsubishi Electric Corp. Numbering mechanism, data consistency confirmation mechanism, transaction re-execution mechanism, and distributed transaction processing system
US8041735B1 (en) * 2002-11-01 2011-10-18 Bluearc Uk Limited Distributed file system and method
US7028218B2 (en) * 2002-12-02 2006-04-11 Emc Corporation Redundant multi-processor and logical processor configuration for a file server
KR100739674B1 2003-05-01 2007-07-13 Samsung Electronics Co., Ltd. Defect management method, apparatus therefor, and disk thereof
JP4141921B2 (ja) * 2003-08-29 2008-08-27 Fujitsu Ltd. File processing method
JP2007157150A (ja) * 2005-12-06 2007-06-21 Samsung Electronics Co Ltd Memory system and memory processing method including the same
US7725446B2 (en) * 2005-12-19 2010-05-25 International Business Machines Corporation Commitment of transactions in a distributed system
JP4856561B2 (ja) * 2007-01-31 2012-01-18 Nippon Telegraph and Telephone Corp. Node control method, node control program, and node
KR101144808B1 2008-09-01 2012-05-11 LG Electronics Inc. Thin-film solar cell manufacturing method and thin-film solar cell using the same
US8429134B2 (en) * 2009-09-08 2013-04-23 Oracle International Corporation Distributed database recovery
US8407403B2 (en) 2009-12-07 2013-03-26 Microsoft Corporation Extending SSD lifetime using hybrid storage
US8677055B2 (en) * 2010-04-12 2014-03-18 Sandisk Enterprises IP LLC Flexible way of specifying storage attributes in a flash memory-based object store
US8954385B2 (en) * 2010-06-28 2015-02-10 Sandisk Enterprise Ip Llc Efficient recovery of transactional data stores
US8392378B2 (en) * 2010-12-09 2013-03-05 International Business Machines Corporation Efficient backup and restore of virtual input/output server (VIOS) cluster
US8589723B2 (en) * 2010-12-22 2013-11-19 Intel Corporation Method and apparatus to provide a high availability solid state drive
US20120215970A1 (en) * 2011-02-22 2012-08-23 Serge Shats Storage Management and Acceleration of Storage Media in Clusters
US10949415B2 (en) 2011-03-31 2021-03-16 International Business Machines Corporation Logging system using persistent memory
JP5514169B2 (ja) 2011-08-15 2014-06-04 Toshiba Corp. Information processing apparatus and information processing method
CN102521389A (zh) * 2011-12-23 2012-06-27 Tianjin Shenzhou General Data Technology Co., Ltd. PostgreSQL database cluster system mixing solid-state drives and conventional hard disks, and optimization method therefor
JP5867902B2 (ja) * 2012-03-06 2016-02-24 NEC Corp. Asynchronous database replication scheme
CN102681952B (zh) * 2012-05-12 2015-02-18 Beijing Memblaze Technology Co., Ltd. Method for writing data into a storage device, and storage device
CN103365987B (zh) * 2013-07-05 2017-04-12 Beijing Renda Jincang Information Technology Co., Ltd. Shared-disk-architecture-based cluster database system and data processing method
CN103729442B (zh) * 2013-12-30 2017-11-24 Huawei Technologies Co., Ltd. Method for recording transaction logs, and database engine
US10169169B1 (en) * 2014-05-08 2019-01-01 Cisco Technology, Inc. Highly available transaction logs for storing multi-tenant data sets on shared hybrid storage pools

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050187891A1 (en) * 2004-02-06 2005-08-25 Johnson Charles S. Transaction processing apparatus and method
US20070130220A1 (en) * 2005-12-02 2007-06-07 Tsunehiko Baba Degraded operation technique for error in shared nothing database management system
US20110113279A1 (en) * 2009-11-12 2011-05-12 International Business Machines Corporation Method Apparatus and System for a Redundant and Fault Tolerant Solid State Disk
CN102236623A (zh) * 2010-04-22 2011-11-09 Sony Corporation Signal control device and signal control method
CN103984768A (zh) * 2014-05-30 2014-08-13 Huawei Technologies Co., Ltd. Method, node and system for managing data in a database cluster

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3147797A4 *

Also Published As

Publication number Publication date
CN103984768A (zh) 2014-08-13
US10379977B2 (en) 2019-08-13
RU2653254C1 (ru) 2018-05-07
EP3147797A4 (en) 2017-04-26
KR101983208B1 (ko) 2019-08-28
US20170083419A1 (en) 2017-03-23
KR20170013319A (ko) 2017-02-06
US20190317872A1 (en) 2019-10-17
JP6457633B2 (ja) 2019-01-23
CN103984768B (zh) 2017-09-29
US10860447B2 (en) 2020-12-08
EP3147797A1 (en) 2017-03-29
EP3147797B1 (en) 2021-01-06
JP2017517087A (ja) 2017-06-22

Similar Documents

Publication Publication Date Title
WO2015180434A1 (zh) Method, node and system for managing data in a database cluster
WO2018103315A1 (zh) Monitoring data processing method and apparatus, server, and storage device
WO2015109804A1 (zh) Dual-machine hot-standby disaster recovery system and method for network services in a virtualized environment
WO2013174210A1 (zh) Method and device for data storage in a flash memory device
WO2018076841A1 (zh) Data sharing method and apparatus, storage medium, and server
WO2014082506A1 (zh) Touch detection method and system for a touch sensor, and touch terminal
WO2013185434A1 (zh) Highly reliable and scalable video recording storage and retrieval method and system
WO2018076865A1 (zh) Data sharing method and apparatus, storage medium, and electronic device
WO2021051492A1 (zh) Database service node switching method, apparatus and device, and computer storage medium
WO2015003516A1 (zh) Method for generating an upgrade package, server, software upgrade method, and mobile terminal
WO2018076867A1 (zh) Data backup deletion method, apparatus and system, storage medium, and server
WO2018227772A1 (zh) Teller machine control update method and apparatus, computer device, and storage medium
WO2018076863A1 (zh) Data storage method and apparatus, storage medium, server, and system
WO2015149374A1 (zh) Method and system for distributing network data under many-core architectures
WO2014044130A1 (zh) Service inspection method and system, and computer storage medium
WO2009136740A2 (ko) Method and apparatus for managing binding information of a bundle remotely installed on an OSGi service platform
WO2015024167A1 (zh) Method for processing user packets and forwarding-plane device
WO2021012481A1 (zh) System performance monitoring method, apparatus and device, and storage medium
WO2018120680A1 (zh) Virtual disk backup system, method and apparatus, service host, and storage medium
WO2018076890A1 (zh) Data backup method and apparatus, storage medium, server, and system
WO2020135049A1 (zh) Overcurrent protection method for a display panel, and display device
WO2012159436A1 (zh) Method and device for adjusting disk partitions under Windows
WO2017067285A1 (zh) Signing method, device and terminal for a system flashing image
WO2018014594A1 (zh) Method and apparatus for processing network requests and responses, terminal, server, and storage medium
WO2013013486A1 (zh) Method and system for converting PDF files into EPUB format

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14893080

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2017514759

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2014893080

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2014893080

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 1020167036343

Country of ref document: KR

ENP Entry into the national phase

Ref document number: 2016152176

Country of ref document: RU

Kind code of ref document: A