US11321354B2 - System, computing node and method for processing write requests - Google Patents

System, computing node and method for processing write requests

Info

Publication number
US11321354B2
US11321354B2
Authority
US
United States
Prior art keywords
redo
consolidated
record
redo record
records
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US16/590,078
Other languages
English (en)
Other versions
US20210097035A1 (en)
Inventor
Xun Xue
Huaxin ZHANG
Yuk Kuen Chan
Wenbin Ma
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to US16/590,078 priority Critical patent/US11321354B2/en
Assigned to HUAWEI TECHNOLOGIES CO., LTD. reassignment HUAWEI TECHNOLOGIES CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHAN, YUK KUEN, XUE, XUN, ZHANG, Huaxin, MA, WENBIN
Priority to EP20872008.6A priority patent/EP4022464A4/en
Priority to CN202080052186.9A priority patent/CN114127707A/zh
Priority to PCT/CN2020/114725 priority patent/WO2021063167A1/en
Publication of US20210097035A1 publication Critical patent/US20210097035A1/en
Application granted granted Critical
Publication of US11321354B2 publication Critical patent/US11321354B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2094Redundant storage or storage space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1446Point-in-time backing up or restoration of persistent data
    • G06F11/1448Management of the data involved in backup or backup restore
    • G06F11/1451Management of the data involved in backup or backup restore by selection of backup contents
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1471Saving, restoring, recovering or retrying involving logging of persistent data for recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1474Saving, restoring, recovering or retrying in transactions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/17Details of further file system functions
    • G06F16/174Redundancy elimination performed by the file system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • G06F16/2358Change logging, detection, and notification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • G06F16/2365Ensuring data consistency and integrity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F16/273Asynchronous replication or reconciliation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F16/275Synchronous replication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2201/00Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/80Database-specific techniques

Definitions

  • the present invention generally relates to the field of databases and, in particular, to a system, a computing node and a method for processing write requests.
  • a redo log comprises redo records that store all changes made to a database and, therefore, serves as a true copy of the data of the database when the database gets corrupted.
  • the redo log is, therefore, a crucial structure for recovery operations.
  • the redo logs need to be processed and transmitted between the computing nodes and master nodes as rapidly as possible.
  • An object of the present disclosure is to provide a technique for processing write requests.
  • an aspect of the present disclosure provides a computing node comprising: a processor and a non-transitory storage medium storing instructions executable by the processor to: receive a plurality of write requests to modify one or more pages of a database; generate a plurality of redo records including one redo record for each write request of the plurality of write requests; select a subset of the plurality of redo records, each redo record of the subset comprising an identical data location identifier; combine the redo records of the subset into a consolidated redo record; and transmit the consolidated redo record to a target node.
  • the data location identifier may comprise a page identifier.
  • the data location identifier may further comprise a space identifier to identify a space, and the page identifier may identify a page within the space.
  • Each respective redo record of the plurality of redo records may comprise: a respective data location identifier, and a content comprising write data; and the consolidated redo record may comprise: a single instance of the identical data location identifier, and information of the redo records of the subset, the information including contents of the redo records of the subset and excluding individual data location identifiers of the redo records of the subset.
  • the consolidated redo record may further comprise an overall length value indicating a combined length of the information of the redo records of the subset combined into the consolidated redo record.
  • Each respective redo record of the plurality of redo records may comprise a type indicator to indicate a type of the respective redo record, and the information of the redo records of the subset included in the consolidated redo record may comprise the type indicators of the redo records of the subset.
  • Each respective redo record of the plurality of redo records may comprise an individual length value indicating a length of the content of the respective redo record, and the information of the redo records of the subset included in the consolidated redo record may comprise individual length values of the redo records of the subset.
  • Each respective redo record of the plurality of redo records may comprise a type indicator indicating a type of the respective redo record, a respective individual data location identifier, an individual length value, and a content comprising write data; and the consolidated redo record may contain a single instance of the identical data location identifier, an overall length value indicating a combined length of the information of the redo records of the subset combined into the consolidated redo record, and a plurality of segments, each segment of the plurality of segments comprising (i) the type indicator, (ii) the individual length value, and (iii) the content of a corresponding redo record of the subset, each segment of the plurality of segments excluding the individual data location identifier of the corresponding redo record.
  • the instructions may be executable on the processor to combine a first value of a first redo record of the subset and a second value of a second redo record of the subset into a merged value included in a segment of the consolidated redo record, the first value representing a first write operation on a first offset in a page, the second value representing a second write operation on a second offset in the page, and the merged value representing a combination of the first write operation and of the second write operation.
  • the instructions may be executable on the processor to receive an additional write request to modify one or more pages of the database; generate an additional redo record for the additional write request; and if the additional redo record comprises the identical data location identifier and if a number of redo records already combined into the consolidated redo record is less than a specified threshold number of redo records combinable into the consolidated redo record, combine the additional redo record into the consolidated redo record.
  • the computing node may be a master computing node to process write requests, and the target node may be a replica computing node at which the database is replicated.
  • a method for processing write requests comprises: receiving, at a computing node, a plurality of write requests to modify one or more pages of a database; generating, at the computing node, a plurality of redo records including one redo record for each write request of the plurality of write requests; selecting a subset of the plurality of redo records, each redo record of the subset comprising an identical data location identifier; combining, at the computing node, the redo records of the subset into a consolidated redo record; and transmitting the consolidated redo record from the computing node to a target node.
  • the data location identifier may comprise a page identifier.
  • the data location identifier may further comprise a space identifier to identify a space, and the page identifier may identify a page within the space.
  • Each respective redo record of the plurality of redo records may comprise a respective data location identifier and a content comprising write data; and the consolidated redo record may comprise: a single instance of the identical data location identifier, and information of the redo records of the subset, the information including contents of the redo records of the subset and excluding individual data location identifiers of the redo records of the subset.
  • the consolidated redo record may further include an overall length value indicating a combined length of the information of the redo records of the subset combined into the consolidated redo record.
  • Each respective redo record of the plurality of redo records may comprise a type indicator to indicate a type of the respective redo record, and the information of the redo records of the subset included in the consolidated redo record may comprise type indicators of the redo records of the subset.
  • Each respective redo record of the plurality of redo records may comprise an individual length value indicating a length of the content of the respective redo record, and the information of the redo records of the subset included in the consolidated redo record may comprise individual length values of the redo records of the subset.
  • Each respective redo record of the plurality of redo records may comprise a type indicator indicating a type of the respective redo record, a respective individual data location identifier, an individual length value, and a content comprising write data; and the consolidated redo record may contain a single instance of the identical data location identifier, an overall length value indicating a combined length of the information of the redo records of the subset combined into the consolidated redo record, and a plurality of segments, each segment of the plurality of segments comprising (i) the type indicator, (ii) the individual length value, and (iii) the content of a corresponding redo record of the subset, each segment of the plurality of segments excluding the individual data location identifier of the corresponding redo record.
  • the method may further comprise combining a first value of a first redo record of the subset and a second value of a second redo record of the subset into a merged value included in a segment of the consolidated redo record, the first value representing a first write operation on a first offset in a page, the second value representing a second write operation on a second offset in the page, and the merged value representing a combination of the first write operation and of the second write operation.
  • the method may further comprise: receiving an additional write request to modify one or more pages of the database; generating an additional redo record for the additional write request; and if the additional redo record comprises the identical data location identifier and if a number of redo records already combined into the consolidated redo record is less than a specified threshold number of redo records combinable into the consolidated redo record, combining the additional redo record into the consolidated redo record.
  • the computing node may be a master computing node to process write requests, and the target node may be a replica node at which the database is replicated.
  • combining, at the computing node, the redo records of the subset into the consolidated redo record may further comprise: receiving a first redo record having a first data location identifier; generating a first consolidated redo record comprising: the first data location identifier and a first segment based on the first redo record; receiving a second redo record having a second data location identifier; and, if the first data location identifier is identical to the second data location identifier: generating a second consolidated redo record by adding, to the first consolidated redo record, a second segment based on the second redo record and excluding the second data location identifier.
  • the method may further comprise: if the first data location identifier is different from the second data location identifier: generating a new consolidated redo record comprising: the second data location identifier and a new segment based on the second redo record, the new consolidated redo record excluding information of the first redo record.
  • a system for processing a plurality of write requests comprises: a database; a computing node adapted to: receive the plurality of write requests to modify one or more pages of the database, generate a plurality of redo records including one redo record for each write request of the plurality of write requests, select a subset of the plurality of redo records, each redo record of the subset comprising an identical data location identifier, combine the redo records of the subset into a consolidated redo record, and transmit the consolidated redo record; and a target node adapted to receive the consolidated redo record from the computing node.
  • Implementations of the present disclosure each have at least one of the above-mentioned object and/or aspects, but do not necessarily have all of them. It should be understood that some aspects of the present disclosure that have resulted from attempting to attain the above-mentioned object may not satisfy this object and/or may satisfy other objects not specifically recited herein.
  • FIG. 1 depicts a schematic diagram of a conventional distributed database system and illustrates conventional processing of redo logs therein;
  • FIG. 2A depicts an illustrative example of a schematic view of a conventional redo record generated for one transaction;
  • FIG. 2B schematically illustrates a fragment of the conventional redo log;
  • FIG. 3 depicts a schematic diagram of a system which is suitable for implementing non-limiting embodiments of the present technology;
  • FIG. 4 schematically illustrates selected conventional redo records and a consolidated redo record, in accordance with at least one non-limiting embodiment of the present disclosure
  • FIG. 5 depicts a flow chart illustrating a method for processing write requests, in accordance with at least one non-limiting embodiment of the present disclosure
  • FIG. 6 depicts a flow chart illustrating a method for generating the consolidated redo record of FIG. 4 , in accordance with at least one other non-limiting embodiment of the present disclosure.
  • FIG. 7 illustrates non-limiting examples of conventional redo records and consolidated redo records when implementing the method of FIG. 6 .
  • aspects of the present disclosure are directed to addressing at least some of the deficiencies of current techniques for processing write requests.
  • the present disclosure describes a system, a computing node and a method for accelerated processing of redo logs.
  • FIG. 1 depicts a schematic diagram of a conventional distributed database system 100 and illustrates conventional processing of redo logs therein.
  • the conventional system 100 comprises a client 110 , a master computing node 112 , a replica computing node 122 , and a customer 140 .
  • the master computing node 112 contains or is connected to a primary database 114 .
  • the primary database 114 may be located, for example, on a primary storage node 116 .
  • the primary storage node includes multiple storage devices for storing the data of the database.
  • the storage devices may be located in a single data center or in multiple data centers at different geographic locations.
  • the master computing node communicates with the primary storage node.
  • the master computing node 112 may receive structured query language (SQL) statements, such as write requests 118 and/or read requests 148 . Originators of SQL statements may be clients, such as client 110 , and/or customers, such as customer 140 . When referred to herein, client 110 sends out write requests 118 , while customer 140 sends out read requests 148 . A single device may operate both as client 110 and customer 140 .
  • the “write request”, as referred to herein, comprises requests to modify, insert, or delete a specific row on a specific page in a relational table of primary database 114 .
  • the write requests 118 may be, for example, requests to add new data to records of pages in primary database 114 .
  • one write request 118 may correspond to one online transaction processing (OLTP) transaction. Examples of OLTP transactions include online banking, booking an airline ticket, purchasing a book online, order entry, and the like.
  • the master computing node 112 processes SQL statements, including write requests 118 and read requests 148 .
  • the replica computing node 122 processes read requests 148 .
  • the master computing node 112 has a master database management system (DBMS) 120 which communicates with clients, such as client 110 , various external applications (not depicted), and primary database 114 .
  • the master DBMS 120 is configured to generate conventional redo logs 130 that describe the modifications to be made to primary database 114 .
  • the conventional redo logs 130 may be generated by a conventional redo log generator 123 .
  • the conventional redo logs 130 may have thousands of conventional redo records described below.
  • FIG. 2A depicts an illustrative example of a schematic view of a conventional redo record 231 (also referred to herein as a “redo record” 231 ) generated for one transaction.
  • the conventional redo record 231 comprises a type indicator 232 and a data location identifier 234 , which may include a space identifier 236 and a page identifier 238 .
  • the conventional redo record 231 also comprises a content 239 , which has a length value identifier embedded therein (not depicted separately).
  • the type indicator 232 is one byte long.
  • the space identifier 236 and page identifier 238 consume one to four bytes each.
  • the format of content 239 depends on the type of conventional redo record 231 that is indicated by type indicator 232 . For each write request 118 , a separate conventional redo record 231 is generated.
  • conventional system 100 also comprises replica computing node 122 connected to a replica database 124 .
  • Conventional system 100 may comprise more than one replica computing node 122 .
  • Each replica computing node 122 is connected to corresponding replica database 124 .
  • the replica database 124 may be located, for example, on a replica storage node 126 .
  • the replica computing node 122 may be configured to handle read requests 148 received from customer 140 .
  • the master computing node 112 continuously generates and propagates conventional redo logs 130 to one or more replica computing nodes 122 .
  • the replica computing node 122 continuously synchronizes with master computing node 112 by applying the conventional redo logs 130 on the data stored in the replica database 124 .
  • a hash table 128 is modified in replica computing node 122 and is used as expressed herein below.
  • a write operation at master computing node 112 is based on a write request 118 received by master computing node 112 .
  • the write operation may be considered to be concluded after all or at least a majority of replica computing nodes 122 of conventional system 100 have received conventional redo logs 130 that correspond to that write operation.
  • replica computing node 122 may perform a read operation after replica computing node 122 has received all redo logs 130 from master computing node 112 and has finished processing them.
  • the redo logs 130 may need to be quickly transmitted from master computing node 112 to one or more replica computing nodes 122 in order to speed up the completion of write operations.
  • the redo logs 130 also may need to be processed rapidly by replica computing node 122 to catch up with the write operations and to ensure freshness of data for the read requests.
  • the replica computing nodes 122 parse and add redo logs 130 to their data pages in replica databases 124 . After the redo logs 130 are added to replica databases 124 , the write operation may be considered to be completed on master computing node 112 .
  • a parsing engine 133 in a replica DBMS 135 performs parsing of conventional redo logs 130 to obtain conventional redo records 231 .
  • the conventional redo records 231 are then grouped based on their data location identifiers.
  • Each group of conventional redo records 231 has conventional redo records 231 with same space identifier 236 and page identifier 238 .
  • the page identifier 238 identifies the page within the corresponding space of the database.
  • Such groups of conventional redo records 231 are then inserted into hash table 128 .
  • FIG. 2B schematically illustrates a fragment 280 of a conventional redo log 130 .
  • the fragment 280 has a first redo record 231 a and a second redo record 231 b that are sequentially arranged.
  • the second redo record 231 b follows the first redo record 231 a in fragment 280 .
  • Fragments of conventional redo log 130 may have more than two redo records 231 a , 231 b.
  • the fragments 280 are transmitted from master computing node 112 to replica computing nodes 122 within dispatch threads (not shown).
  • the dispatch threads form the conventional redo log 130 .
  • Each dispatch thread has fragments 280 that are also sequentially arranged.
  • both master computing node 112 and replica computing node 122 parse the fragment 280 in order to determine the offset to the starting point 291 a or 291 b of the next redo record in fragment 280 .
  • data may be transmitted rapidly between master computing node 112 and replica computing node 122 .
  • parsing of redo logs 130 may be accelerated both at master computing node and replica computing node 122 .
  • fragments 280 may be parsed into individual conventional redo records 231 at each node (both master computing node and replica nodes).
  • the nodes may also allocate memory for all parsed conventional redo records 231 and group the conventional redo records 231 by page number.
  • the nodes then apply all the conventional redo records 231 with the same page number to the corresponding page of database 114 , 124 in an ordered fashion.
  • Such processing of conventional redo logs 130 puts significant pressure on the central processing unit (CPU) when parsing the fragments 280 , grouping the records using hash tables, and changing the sequence of the conventional redo records 231 .
  • the technology as described herein consolidates multiple conventional redo records into a single consolidated redo record and mitigates the requirements for memory allocation.
  • the consolidated redo logs having consolidated redo records, as described herein, may be parsed faster.
  • the technology as described herein may also reduce network traffic.
  • the new consolidated redo record has its data already grouped by page number. Therefore, there is no need to group many conventional redo records. There is also no need to reorder the conventional redo records, because the order in which the redo records were received is preserved inside the consolidated redo record.
  • FIG. 3 depicts a schematic diagram of a system 300 which is suitable for implementing non-limiting embodiments of the present technology.
  • System 300 has a modified master computing node 312 and a modified replica computing node 322 .
  • the modified master computing node 312 has a modified master DBMS 320 which communicates with clients, such as client 110 , various external applications (not depicted), and primary database 114 .
  • modified master DBMS 320 also has a consolidated redo log generator 350 .
  • the consolidated redo log generator 350 is configured to generate consolidated redo logs 360 . Similar to conventional redo logs 130 in conventional system 100 , consolidated redo logs 360 may be used for transmission of data between modified master computing node 312 , modified replica computing node 322 , and primary and replica storage nodes 116 , 126 in system 300 .
  • Modified replica computing node 322 has modified parsing engine 333 that is configured to update hash table 328 .
  • FIG. 4 schematically illustrates a selection of a subset 431 of conventional redo records 231 with a first conventional redo record 231 a , a second conventional redo record 231 b , and a last conventional redo record 231 z .
  • FIG. 4 also schematically illustrates a consolidated redo record 461 , in accordance with at least one embodiment of the present disclosure.
  • the consolidated redo record 461 is generated by consolidated redo log generator 350 from selected ones of the conventional redo records 231 a , 231 b . . . 231 z.
  • the conventional redo records 231 a , 231 b . . . 231 z of subset 431 have been described above.
  • the conventional redo records 231 a , 231 b . . . 231 z have type indicators 232 a , 232 b . . . 232 z indicating types of the respective redo records 231 a , 231 b . . . 231 z , respective individual data location identifiers 234 a , 234 b . . . 234 z , and respective contents 239 a , 239 b . . . 239 z comprising write data and length identifiers embedded therein.
  • the respective individual data location identifiers 234 a , 234 b . . . 234 z comprise space identifiers 236 a , 236 b . . . 236 z and page identifiers 238 a , 238 b . . . 238 z.
  • consolidated redo log generator 350 receives conventional redo records 231 and determines the subset 431 .
  • the conventional redo records 231 a , 231 b . . . 231 z of the subset 431 are selected by consolidated redo log generator 350 when their data location identifiers 234 a , 234 b . . . 234 z are identical.
  • Based on the selected conventional redo records 231 a , 231 b . . . 231 z , consolidated redo log generator 350 generates consolidated redo record 461 .
  • the consolidated redo record 461 comprises a consolidated type indicator 462 , a consolidated data location identifier 464 , and an overall length value 470 .
  • the consolidated type indicator 462 may be a “flag”.
  • the consolidated redo record 461 has a single instance of consolidated data location identifier 464 .
  • the consolidated data location identifier 464 is the same as individual data location identifiers 234 a , 234 b . . . 234 z for all selected conventional redo records 231 a , 231 b . . . 231 z .
  • the consolidated redo record 461 may exclude the individual data location identifiers 234 a , 234 b . . . 234 z of selected conventional redo records 231 a , 231 b . . . 231 z.
  • the overall length value 470 indicates a combined length of the information of selected conventional redo records 231 a , 231 b . . . 231 z combined into consolidated redo record 461 .
  • as conventional redo records are combined into consolidated redo record 461 , overall length value 470 is updated to indicate the new length of consolidated redo record 461 .
  • the consolidated redo record 461 excludes individual length identifiers of each selected conventional redo record 231 .
  • overall length value 470 directly follows consolidated data location identifier 464 . This accelerates processing at modified replica computing node 322 , because modified parsing engine 333 may read the overall length value 470 right after reading consolidated data location identifier 464 .
  • the overall length value 470 may be used by the modified parsing engine 333 to calculate where the beginning of a next overall length value 470 can be found in a fragment of consolidated redo log 360 .
  • modified parsing engine 333 may skip reading the rest of information of consolidated redo record 461 and move on to the next consolidated redo record 461 .
  • the information of the selected conventional redo records 231 a , 231 b . . . 231 z included in the consolidated redo record 461 also comprises segments 475 a , 475 b . . . 475 z corresponding to selected conventional redo records 231 a , 231 b . . . 231 z .
  • each segment 475 a , 475 b , . . . 475 z has a type indicator 232 a , 232 b . . . 232 z and original contents 239 a , 239 b . . . 239 z of the corresponding selected conventional redo record 231 a , 231 b . . . 231 z.
  • the segments 475 a , 475 b . . . 475 z are grouped because they share the same data location identifier of selected conventional redo records 231 a , 231 b . . . 231 z .
  • the segments 475 a , 475 b . . . 475 z follow each other based on the order of arrival of the corresponding conventional redo records 231 a , 231 b . . . 231 z to consolidated redo log generator 350 .
  • FIG. 5 depicts a flow chart illustrating a method 500 for processing write requests, in accordance with at least one non-limiting embodiment of the present disclosure.
  • When describing method 500 , reference is also made to FIGS. 3-4 .
  • master computing node 312 receives one or more write requests 118 to modify a page of a database, which has primary database 114 and one or more replica databases 124 .
  • each write request 118 may correspond to one transaction.
  • conventional redo log generator 123 generates a plurality of conventional redo records 231 , one conventional redo record 231 for each write request 118 .
  • consolidated redo log generator 350 selects the received conventional redo records 231 to determine selected conventional redo records 231 a , 231 b . . . 231 z .
  • the conventional redo records 231 a , 231 b . . . 231 z are selected because they have an identical data location identifier 234 a , 234 b . . . 234 z .
  • the selected conventional redo records 231 a , 231 b . . . 231 z may have the same page identifier 238 a , 238 b . . . 238 z , and may also have the same space identifier 236 a , 236 b . . . 236 z .
  • the page identifier 238 a , 238 b . . . 238 z indicates a page within the space indicated by the corresponding space identifier 236 a , 236 b . . . 236 z.
  • consolidated redo log generator 350 may enforce a pre-determined specified threshold number of conventional redo records.
  • the consolidated redo log generator 350 may stop adding conventional redo records 231 a , 231 b . . . 231 z that share the same data location identifier to the consolidated redo record 461 if a number of conventional redo records 231 a , 231 b . . . 231 z combined in the consolidated redo record 461 already meets the specified threshold number of conventional redo records.
  • if the number of conventional redo records 231 a , 231 b . . . 231 z combined in the consolidated redo record 461 is below the specified threshold number, the consolidated redo log generator 350 may combine the additional redo record into the consolidated redo record 461 .
  • consolidated redo log generator 350 of modified master computing node 312 combines selected conventional redo records 231 a , 231 b . . . 231 z into consolidated redo record 461 .
  • consolidated redo record 461 is transmitted to a target node, which may be any node where the conventional redo logs are sent.
  • the target node may be the modified replica computing node 322 at which primary database 114 is replicated.
  • FIG. 6 depicts a flow chart illustrating a method 600 for generating the consolidated redo record 461 , in accordance with at least one other non-limiting embodiment of the present disclosure.
  • FIG. 7 illustrates non-limiting examples of conventional redo records 231 and consolidated redo records 461 when implementing method 600 .
  • method 600 may be implemented in consolidated redo log generator 350 of the modified master DBMS 320 .
  • FIGS. 3-4 When describing method 600 , reference is also made to FIGS. 3-4 .
  • consolidated redo log generator 350 receives a first conventional redo record 731 a depicted in FIG. 7 .
  • consolidated redo log generator 350 generates a first consolidated redo record 761 a depicted in FIG. 7 .
  • the first consolidated redo record 761 a comprises a first consolidated type indicator 762 a indicating that this is a consolidated redo record.
  • the first consolidated redo record 761 a also comprises a consolidated data location identifier 764 a , which is identical to data location identifier 734 a , and has first space identifier 736 a and first page identifier 738 a of first conventional redo record 731 a .
  • the first consolidated redo record 761 a has a first overall length value 770 a which corresponds to individual length value of first conventional redo record 731 a .
  • the first consolidated redo record 761 a also has a first segment 775 a corresponding to first conventional redo record 731 a .
  • the first segment 775 a has first type indicator 732 a and first content 739 a of first conventional redo record 731 a .
  • first overall length value 770 a indicates the length of first segment 775 a.
  • consolidated redo log generator 350 receives a second conventional redo record 731 b .
  • consolidated redo log generator 350 compares consolidated data location identifier 764 a of first consolidated redo record 761 a and second data location identifier 734 b of second conventional redo record 731 b.
  • consolidated redo log generator 350 modifies first consolidated redo record 761 a to obtain (generate) a modified first consolidated redo record, referred to herein as a “second consolidated redo record 761 b ”, by adding a portion of data of second conventional redo record 731 b to first consolidated redo record 761 a.
  • the second consolidated redo record 761 b is also depicted in FIG. 7 .
  • second consolidated redo record 761 b comprises a second segment 775 b which corresponds to second conventional redo record 731 b.
  • the second segment 775 b has the data of second conventional redo record 731 b .
  • the second segment 775 b may include second type indicator 732 b and second content 739 b .
  • consolidated redo log generator 350 replaces the first overall length value 770 a by second overall length value 770 b .
  • the second overall length value 770 b is based on the full length of second consolidated redo record 761 b .
  • the second overall length value 770 b may indicate a combined length of the information of first and second conventional redo records 731 a and 731 b , such as a combined length of first and second segments 775 a and 775 b .
  • the second overall length value 770 b is larger than first overall length value 770 a because second consolidated redo record 761 b is longer than first consolidated redo record 761 a.
  • if step 620 indicates that consolidated data location identifier 764 a and second data location identifier 734 b are different from each other, consolidated redo log generator 350 generates a third consolidated redo record 761 c , depicted in FIG. 7 , at step 624 .
  • the third consolidated redo record 761 c is based on second conventional redo record 731 b and thus comprises data related to second conventional redo record 731 b .
  • the third consolidated redo record 761 c comprises third consolidated type indicator 762 c , indicating that this is a consolidated redo record and that it consolidates information about multiple individual redo records.
  • the third consolidated redo record 761 c also comprises second space identifier 736 b and second page identifier 738 b of second conventional redo record 731 b .
  • the third consolidated redo record 761 c also has a third overall length value 770 c which corresponds to individual length value of second conventional redo record 731 b .
  • the third consolidated redo record 761 c also has second type indicator 732 b and second content 739 b of second conventional redo record 731 b.
  • consolidated redo record 761 a is replaced with conventional redo record 731 a in order to revert generation of consolidated redo record 761 b .
  • the conventional redo record 731 a may then be transmitted to the target node, such as modified replica computing node 322 .
  • conventional redo logs that are generated and transmitted by conventional redo log generator 123 often have series of many conventional redo records 731 a that have the same data location identifier 734 a .
  • one conventional redo record is frequently followed by another conventional redo record with the same data location identifier. Therefore, generating a consolidated redo record 761 a for each conventional redo record 731 a with a new data location identifier indicating space and page (<s,p>) helps to speed up the generation process of consolidated redo records 761 a.
  • consolidated redo log generator 350 receives a fourth conventional redo record 731 d , depicted in FIG. 7 .
  • consolidated redo log generator 350 determines whether a fourth data location identifier 734 d is the same as consolidated data location identifier 764 a of second consolidated redo record 761 b.
  • if fourth data location identifier 734 d is the same as consolidated data location identifier 764 a , then, at step 630 , a third segment 775 d corresponding to fourth conventional redo record 731 d is added to second consolidated redo record 761 b to obtain a fourth consolidated redo record 761 d .
  • the new overall length value 770 d corresponds to the new total length of fourth consolidated redo record 761 d .
  • the third segment 775 d may include a fourth type indicator 732 d and a fourth content 739 d.
  • otherwise, if fourth data location identifier 734 d is different from consolidated data location identifier 764 a , consolidated redo log generator 350 finalizes second consolidated redo record 761 b and transmits it to the target node, such as modified replica computing node 322 .
  • the consolidated redo log generator 350 also generates a fifth consolidated redo record 761 e based on fourth conventional redo record 731 d .
  • consolidated redo records 761 may be finalized and then transmitted out of consolidated redo log generator 350 to the target node.
  • the finalized consolidated redo record 761 d is transmitted to modified replica computing node 322 .
  • the finalized consolidated redo record 761 d is received by modified parsing engine 333 .
  • the modified parsing engine 333 can process both conventional redo records 231 and consolidated redo records 461 , 761 d by editing hash table 328 .
  • When modified parsing engine 333 applies consolidated redo records 461 to hash table 328 , it parses and applies each such consolidated redo record 461 as a single entity, which speeds up the processing of write requests.
  • consolidated redo records 461 in system 300 described herein may help to save on memory allocation and to reduce network traffic in system 300 .
  • the consolidated redo records 461 are shorter than multiple corresponding conventional redo records 231 a . . . 231 z bearing the same information. Therefore, less data needs to be recorded or transmitted between nodes of system 300 .
  • the conventional parsing engine 133 needs to scan into each conventional redo record 231 a , 231 b deep enough to determine the offset to the starting point 291 a of the next redo record in fragment 280 .
  • the overall length value 470 is located close to the beginning of each consolidated redo record 461 . Therefore, when processing consolidated redo logs 360 with consolidated redo records 461 , the modified parsing engine 333 can skip to the very end of consolidated redo record 461 by parsing the first several bytes of that consolidated redo record 461 and analyzing the overall length value 470 .
  • the modified parsing engine 333 does not need to put the consolidated redo records or conventional redo records in any specific order because the order of arrival of consolidated redo records and conventional redo records to consolidated redo log generator 350 is preserved inside the consolidated redo log 360 .

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Library & Information Science (AREA)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US16/590,078 US11321354B2 (en) 2019-10-01 2019-10-01 System, computing node and method for processing write requests
EP20872008.6A EP4022464A4 (en) 2019-10-01 2020-09-11 SYSTEM, COMPUTER NODE AND METHOD FOR PROCESSING WRITE REQUESTS
CN202080052186.9A CN114127707A (zh) 2019-10-01 2020-09-11 System, computing node and method for processing write requests
PCT/CN2020/114725 WO2021063167A1 (en) 2019-10-01 2020-09-11 System, computing node and method for processing write requests

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US16/590,078 US11321354B2 (en) 2019-10-01 2019-10-01 System, computing node and method for processing write requests

Publications (2)

Publication Number Publication Date
US20210097035A1 (en) 2021-04-01
US11321354B2 (en) 2022-05-03

Family

ID=75163666

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/590,078 Active 2040-05-11 US11321354B2 (en) 2019-10-01 2019-10-01 System, computing node and method for processing write requests

Country Status (4)

Country Link
US (1) US11321354B2 (en)
EP (1) EP4022464A4 (zh)
CN (1) CN114127707A (zh)
WO (1) WO2021063167A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11580110B2 (en) * 2019-12-31 2023-02-14 Huawei Cloud Computing Technologies Co., Ltd. Methods and apparatuses for generating redo records for cloud-based database
CN115114370B (zh) * 2022-01-20 2023-06-13 Tencent Technology (Shenzhen) Co., Ltd. Synchronization method and apparatus for master-slave databases, electronic device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10303564B1 (en) * 2013-05-23 2019-05-28 Amazon Technologies, Inc. Reduced transaction I/O for log-structured storage systems

Patent Citations (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5654558A (en) * 1979-10-09 1981-05-14 Fujitsu Ltd Write control system for main memory unit
US5561778A (en) * 1991-11-12 1996-10-01 International Business Machines Corporation System for representing data object in concatenated multiple virtual address spaces with combined requests for segment mapping
US5819020A (en) * 1995-10-16 1998-10-06 Network Specialists, Inc. Real time backup system
US6480950B1 (en) * 2000-01-24 2002-11-12 Oracle International Corporation Software paging system
US20020087801A1 (en) * 2000-12-29 2002-07-04 Zohar Bogin Method and system for servicing cache line in response to partial cache line request
US7552147B2 (en) * 2005-09-02 2009-06-23 International Business Machines Corporation System and method for minimizing data outage time and data loss while handling errors detected during recovery
KR100725415B1 (ko) * 2005-12-24 2007-06-07 Samsung Electronics Co., Ltd. Method and apparatus for merging logs of a database
US20080120349A1 (en) * 2006-11-16 2008-05-22 Samsung Electronics Co., Ltd. Method for deferred logging and apparatus thereof
US20110251997A1 (en) * 2010-04-12 2011-10-13 Microsoft Corporation Logical replication in clustered database system with adaptive cloning
US20110295804A1 (en) * 2010-05-28 2011-12-01 Commvault Systems, Inc. Systems and methods for performing data replication
US20120166390A1 (en) * 2010-12-23 2012-06-28 Dwight Merriman Method and apparatus for maintaining replica sets
US20130290249A1 (en) * 2010-12-23 2013-10-31 Dwight Merriman Large distributed database clustering systems and methods
US20130246358A1 (en) 2012-01-30 2013-09-19 International Business Machines Corporation Online verification of a standby database in log shipping physical replication environments
US20150012713A1 (en) * 2012-03-02 2015-01-08 Arm Limited Data processing apparatus having first and second protocol domains, and method for the data processing apparatus
US20140089599A1 (en) * 2012-09-21 2014-03-27 Fujitsu Limited Processor and control method of processor
US20140101103A1 (en) * 2012-10-02 2014-04-10 Nextbit Systems Inc. Data synchronization based on file system activities
US20140281131A1 (en) * 2013-03-15 2014-09-18 Fusion-Io, Inc. Systems and methods for persistent cache logging
US20140279920A1 (en) * 2013-03-15 2014-09-18 Amazon Technologies, Inc. Log record management
US10360195B1 (en) * 2013-06-26 2019-07-23 Amazon Technologies, Inc. Absolute and relative log-structured storage
US20150120659A1 (en) * 2013-10-30 2015-04-30 Oracle International Corporation Multi-instance redo apply
US9223843B1 (en) * 2013-12-02 2015-12-29 Amazon Technologies, Inc. Optimized log storage for asynchronous log updates
US20160314161A1 (en) * 2013-12-31 2016-10-27 Huawei Technologies Co., Ltd. Multi-Version Concurrency Control Method in Database and Database System
US20150378830A1 (en) * 2014-06-30 2015-12-31 International Business Machines Corporation Use of replicated copies to improve database backup performance
WO2016064575A1 (en) 2014-10-19 2016-04-28 Microsoft Technology Licensing, Llc High performance transactions in database management systems
US9747295B1 (en) * 2014-11-03 2017-08-29 Sprint Communications Company L.P. Updating a large dataset in an enterprise computer system
US20160283331A1 (en) * 2015-03-27 2016-09-29 International Business Machines Corporation Pooling work across multiple transactions for reducing contention in operational analytics systems
US20170293536A1 (en) * 2016-04-06 2017-10-12 Iucf-Hyu(Industry-University Cooperation Foundation Hanyang University) Database journaling method and apparatus
CN106708968A (zh) 2016-12-01 2017-05-24 Chengdu Huawei Technologies Co., Ltd. Distributed database system and data processing method in a distributed database system
CN107145432A (zh) 2017-03-30 2017-09-08 Huawei Technologies Co., Ltd. Method for establishing a model database, and client

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
International Search Report and Written Opinion of PCT/CN2020/114725; Xinlei Zhao, dated Oct. 30, 2020.

Also Published As

Publication number Publication date
WO2021063167A1 (en) 2021-04-08
EP4022464A4 (en) 2022-12-28
EP4022464A1 (en) 2022-07-06
CN114127707A (zh) 2022-03-01
US20210097035A1 (en) 2021-04-01

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: HUAWEI TECHNOLOGIES CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:XUE, XUN;ZHANG, HUAXIN;CHAN, YUK KUEN;AND OTHERS;SIGNING DATES FROM 20191001 TO 20191007;REEL/FRAME:052866/0410

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE