WO2019195969A1 - Method and apparatus for data synchronization processing - Google Patents

Method and apparatus for data synchronization processing

Info

Publication number
WO2019195969A1
WO2019195969A1 PCT/CN2018/082225 CN2018082225W
Authority
WO
WIPO (PCT)
Prior art keywords
thread
node
data packet
processed
buffer module
Prior art date
Application number
PCT/CN2018/082225
Other languages
English (en)
Chinese (zh)
Inventor
王成
陈旭升
崔鹤鸣
沈伟锋
白龙
毕舒展
刘祖齐
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to PCT/CN2018/082225 (WO2019195969A1)
Priority to CN201880004742.8A (CN110622478B)
Publication of WO2019195969A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication

Definitions

  • the present application relates to the field of computers, and in particular, to a method and apparatus for data synchronization processing.
  • Virtualization partitions part of a compute node (hereafter referred to as a "node") to provide an isolated virtualized computing environment.
  • A typical example of virtualization is a virtual machine (VM).
  • a virtual machine is a virtual device that is simulated on a physical device by virtual machine software. For applications running in virtual machines, these virtual machines work just like real physical devices, which can have operating systems and applications installed on them, and virtual machines can access network resources.
  • The active and standby nodes may run the same database, that is, a distributed database.
  • When the primary virtual machine fails, the standby virtual machine automatically takes over the services.
  • The working status of the active and standby VMs is usually not identical; therefore, the standby VM needs a certain amount of time to synchronize the working status of the active and standby VMs before taking over the services.
  • The node where the primary virtual machine is located performs synchronization processing of the active and standby virtual machines based on a consistency negotiation protocol: for example, the active and standby virtual machines process data packets in the same order and, periodically or irregularly, synchronize the status of the active and standby VMs, thus reducing the difference in their working status.
  • The thread of the master node (for example, the main loop thread) needs to occupy the global mutex lock when performing the synchronization processing of the active and standby VMs.
  • The global mutex lock prohibits other threads from accessing the code corresponding to the master virtual machine, so the master virtual machine cannot process other tasks while the main thread is synchronizing, resulting in a significant drop in the performance of the primary virtual machine.
  • the present application provides a method and apparatus for data synchronization processing, which enables a master node to use a primary virtual machine to process other tasks while performing synchronization processing of the primary and secondary virtual machines, thereby improving performance of the primary node.
  • A data synchronization processing method is provided, applied to a simulator of a master node in a computer system, where the simulator is used to simulate a hardware device of a first virtual device of the master node, and the computer system further includes a standby node connected to the master node. The method includes: acquiring, by a first thread of the simulator, first to-be-processed information, where the first to-be-processed information is a first data packet or first indication information, the first indication information is used to indicate the first data packet, and the first thread is a thread that executes non-thread-safe code; writing, by the first thread, the first to-be-processed information into a buffer module; performing, by a second thread of the simulator, a consistency negotiation process on the first to-be-processed information, where the consistency negotiation process is used to synchronize the order in which the master node and the standby node process the first data packet; and processing, by the first thread, the first data packet according to the result of the consistency negotiation process.
  • The master node can schedule the first thread and the second thread to execute code to complete the corresponding tasks.
  • The first thread is a thread that executes non-thread-safe code; therefore, the first thread needs to occupy the mutex lock when performing operations. For example, the first thread needs to occupy the global mutex lock before acquiring the first to-be-processed information.
  • the manner in which the first thread acquires the first to-be-processed information is not limited. After the first thread obtains the first to-be-processed information, the first to-be-processed information is written into the buffer module, where the buffer module may be a buffer queue or a heap or stack for buffering the first to-be-processed information.
  • the global mutex can be released, and other threads can occupy the global mutex and schedule the virtual machine to perform other tasks.
  • The second thread reads at least one piece of to-be-processed information in the buffer module and determines, based on the consistency negotiation protocol, a common order in which the active and standby nodes process the data packets; the first thread then occupies the global mutex lock and processes the data packets in the order determined by the second thread. Because the consistency negotiation between the active and standby nodes is performed by the second thread, the second thread does not need to occupy the global mutex lock when working. Therefore, the master node can use the primary virtual machine to process other tasks while performing the synchronization processing of the active and standby virtual machines, which improves the performance of the primary node.
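The division of labor described above can be sketched in Python. This is a minimal illustration, not the patented implementation: the names `global_mutex`, `buffer`, and the two functions are hypothetical stand-ins for the first thread, the second thread, and the buffer module.

```python
import queue
import threading

global_mutex = threading.Lock()  # guards the master node's non-thread-safe code
buffer = queue.Queue()           # the buffer module for pending information

def first_thread_enqueue(info):
    # The first thread holds the global mutex only long enough to write
    # the pending information into the buffer module, then releases it
    # so other threads can schedule the virtual machine.
    with global_mutex:
        buffer.put(info)

def second_thread_negotiate():
    # The second thread drains the buffer WITHOUT taking the global
    # mutex; it only fixes the common processing order of the packets.
    order = []
    while not buffer.empty():
        order.append(buffer.get())
    return order

first_thread_enqueue("pkt-1")
first_thread_enqueue("pkt-2")
negotiated_order = second_thread_negotiate()
```

The key point the sketch shows is that the expensive negotiation step runs entirely outside the global mutex, so the lock is held only for the brief enqueue.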
  • Performing the consistency negotiation process on the first to-be-processed information by the second thread of the simulator includes: reading, by the second thread, the first to-be-processed information from the buffer module; performing the consistency negotiation process on the first to-be-processed information to determine the processed order of the first data packet; and writing, by the second thread, the first to-be-processed information to the pipeline according to the processed order of the first data packet, where the pipeline is used by the first thread to read the first to-be-processed information.
  • the first data packet may be a data packet obtained from the client, or may be a data packet generated by the master node, or may be other data packets.
  • the specific content of the first data packet is not limited in this application.
  • Because some program code of the master node is non-thread-safe, the second thread cannot directly call the program code of the master node as a worker thread.
  • In the consistency negotiation processing scheme provided in this embodiment, a pipeline is established to connect the first thread and the second thread; the second thread writes the result of the consistency negotiation to the pipeline so that the first thread can read that result through the pipeline, thereby completing the consistency negotiation while avoiding any impact on the security of the master node.
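A pipeline between the two threads might look like the following sketch. The function names and message format are assumptions for illustration; a real simulator would use its own pipe mechanism and encoding.

```python
import os

# The second thread writes the negotiation result into a pipe; the
# first thread, which runs the non-thread-safe code, reads it back.
read_fd, write_fd = os.pipe()

def second_thread_publish(result: bytes):
    os.write(write_fd, result)       # negotiation result -> pipeline

def first_thread_consume(n: int) -> bytes:
    return os.read(read_fd, n)       # first thread reads the order

second_thread_publish(b"pkt-7:order-3")
result = first_thread_consume(64)
```

Using a pipe keeps the non-thread-safe code on one side: the first thread never shares data structures with the negotiation logic, it only reads bytes.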
  • The reading, by the second thread, of the first to-be-processed information from the buffer module includes: reading the first to-be-processed information from the buffer module by using the second thread at a preset time.
  • the preset time is, for example, a time corresponding to the timer event
  • the second thread may read the first to-be-processed information from the buffer module based on the trigger of the timer event, and the master node may set different timer events. Therefore, the foregoing embodiment can flexibly trigger the second thread to perform the consistency negotiation process.
  • Before the first to-be-processed information is read from the buffer module by the second thread, the method further includes: obtaining, by the second thread, the exclusive permission of the buffer module, where the exclusive permission of the buffer module is used to prohibit two or more threads from accessing the buffer module at the same time. After the consistency negotiation process is performed on the first to-be-processed information by the second thread, the method further includes: when the number of pieces of to-be-processed information in the buffer module is 0, releasing the exclusive permission of the buffer module obtained by the second thread.
  • the exclusive permission may also be called a queue mutual exclusion lock, which is used to prohibit two or more threads from accessing the buffer module at the same time.
  • the second thread releases the queue mutex, and other threads can continue to write new pending information to the buffer module.
  • the foregoing embodiment can prevent the new pending information from being inserted into the to-be-processed information queue that has completed the consistency negotiation process, thereby improving the reliability and efficiency of the consistency negotiation process.
  • Performing the consistency negotiation process on the first to-be-processed information by the second thread includes: determining, by the second thread, the quantity of to-be-processed information in the buffer module; when the quantity of to-be-processed information in the buffer module is greater than 0, writing, by the second thread, the data packets corresponding to the to-be-processed information (including the first data packet) into the consistency log and deleting the to-be-processed information from the buffer module, where the consistency log is used to cache the data packets and the sequence of the data packets in the consistency log corresponds to their processed order; sending, by the second thread, a consistency negotiation request including the first data packet, where the request is used to ask the standby node to accept the processed order of the first data packet; and receiving, by the second thread, a negotiation completion message indicating that the processed order of the first data packet has been accepted.
  • The consistency negotiation process is performed and the to-be-processed information in the buffer module is then deleted, so that the indication information read from the buffer module by the second thread is always new to-be-processed information. This prevents the second thread from reading already-processed information, thereby improving the efficiency of the consistency negotiation process.
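The drain-log-negotiate loop described above can be sketched as follows. The callback names `send_request` and `recv_completion` are hypothetical stand-ins for the consistency negotiation request and the negotiation completion message exchanged with the standby node.

```python
from collections import deque

buffer = deque(["pkt-A", "pkt-B"])  # to-be-processed information
consistency_log = []                # position in the log == processed order

def negotiate(send_request, recv_completion):
    # While the buffer is non-empty: append the packet to the consistency
    # log (fixing its processed order), delete it from the buffer, send a
    # consistency negotiation request, and wait for the completion message.
    while buffer:
        pkt = buffer.popleft()
        consistency_log.append(pkt)
        send_request(pkt)
        if not recv_completion(pkt):
            raise RuntimeError("standby rejected the processed order")

negotiate(send_request=lambda p: None, recv_completion=lambda p: True)
```

Deleting each entry as it is logged is what guarantees that a later pass over the buffer only ever sees new, unnegotiated information.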
  • Before the first thread writes the first to-be-processed information to the buffer module, the method further includes: obtaining, by the first thread, the exclusive permission of the buffer module, where the exclusive permission is used to prohibit two or more threads from accessing the buffer module at the same time. After the first thread writes the first to-be-processed information to the buffer module, the method further includes: releasing, by the first thread, the exclusive permission of the buffer module acquired by the first thread.
  • the exclusive permission may also be called a queue mutual exclusion lock, which is used to prohibit two or more threads from accessing the buffer module at the same time.
  • the second thread can occupy the queue mutex lock and read the pending information in the buffer module.
  • the first virtual device runs a primary database
  • the standby node is configured with a second virtual device
  • the second virtual device runs a standby database
  • The first data packet carries data that the client sends to the primary node for the primary database.
  • Obtaining the first to-be-processed information by the first thread of the simulator includes: acquiring, by the first thread, the first to-be-processed information from the physical network card of the primary node.
  • Processing, by the first thread, the first data packet according to the result of the consistency negotiation process includes: sending, by the first thread, the first data packet to the primary database and the standby database simultaneously, so that the primary node and the standby node process the first data packet in the same order.
  • the method further includes:
  • The third thread of the simulator obtains the load thresholds of the master node and the same-dirty-page ratios of the master node and the standby node in n synchronization operations: the load thresholds of the master node in the n synchronization operations are c1, ..., cn, and the same-dirty-page ratios of the master node and the standby node in the n synchronization operations are w1, ..., wn, where c1 corresponds to w1, ..., cn corresponds to wn, and n is a positive integer greater than or equal to 2.
  • Lm is the load value of the primary node at the current time
  • the synchronization request is used to request synchronization of dirty pages of the primary node and the standby node;
  • The consistency negotiation process is performed on the synchronization request by the second thread, and is used to synchronize the order in which the primary node and the standby node process the synchronization request.
  • the synchronization request is processed by the first thread according to the result of performing the consistency negotiation process on the synchronization request.
  • In the prior art, the master node determines whether to start the synchronization of the active and standby nodes according to the current load value and a fixed load threshold: if the current load value is less than the fixed load threshold, the synchronization of the active and standby nodes is not started; when the current load value is greater than or equal to the fixed load threshold, the synchronization of the active and standby nodes is started.
  • The above prior art has the disadvantage that it is difficult to determine the optimal starting time for synchronizing the active and standby nodes according to a fixed load threshold. If the fixed load threshold is set too small, for example to 0, then although the ratio of identical dirty pages of the active and standby nodes is highest when the load value of the master node meets the condition (because the virtual machines of the active and standby nodes are no longer working, the dirty pages no longer change), the virtual machines of the active and standby nodes are idle between the load detection and the data synchronization, so their resources are wasted. If the fixed load threshold is set too large, the virtual machines of the active and standby nodes are still working when the data is synchronized, so the proportion of identical dirty pages is small, and the active and standby nodes need to transmit more data (that is, data corresponding to the differing dirty pages), which makes the data synchronization between the active and standby nodes take longer.
  • For example, at the first load detection, the processor working time of the master node is 10 minutes and the virtual machines of the active and standby nodes have a same-dirty-page ratio of 80%.
  • At the second load detection, the processor working time of the master node is 20 minutes and the virtual machines of the active and standby nodes have a same-dirty-page ratio of 85%.
  • The above data indicates that the virtual machine of the primary node had stopped working at the time of the second load detection at the latest.
  • If the virtual machine of the primary node stopped working before the second load detection and data synchronization starts only after the second load detection, the virtual machine of the primary node is inevitably idle for some time and virtual machine resources are wasted. Therefore, the preferred data synchronization time is after the first load detection and before the second load detection: when data synchronization starts then, the virtual machine of the primary node has completed most or all of its work, which achieves a better balance between virtual machine resource utilization and the same-dirty-page ratio.
  • In this case, the load threshold is determined according to the load thresholds of the primary node and the same-dirty-page ratios in at least two synchronization operations. For example, the same-dirty-page ratio 80% obtained at the first load detection is used as a weight and multiplied by the load threshold 5 to obtain 4; the same-dirty-page ratio 85% obtained at the second load detection is multiplied by the load threshold 6 to obtain 5.1; and the sum of 4 and 5.1 is divided by the number of load detections, 2, to obtain the weighted average of the load thresholds over the two load detections, 4.55, which is the new load threshold.
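The worked example above reduces to a weighted average. As a quick check, with the values taken directly from the example:

```python
# Load thresholds from the two synchronization operations and the
# corresponding same-dirty-page ratios used as weights.
c = [5, 6]          # load thresholds c1, c2
w = [0.80, 0.85]    # same-dirty-page ratios w1, w2

# New threshold = sum of w_i * c_i divided by the number of detections.
new_threshold = sum(wi * ci for wi, ci in zip(w, c)) / len(c)
print(round(new_threshold, 2))  # 4.55
```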
  • If the processor working time obtained by the third load detection is 22 minutes, the load value of the master node at the third load detection is 2 (the working time 22 obtained by the third load detection minus the working time 20 obtained by the second load detection). This load value is less than the new load threshold 4.55, indicating that the virtual machine of the master node has few remaining tasks and will soon enter the idle state, so the data synchronization operation is started. If the processor working time at the third load detection is 30 minutes, the load value of the master node at the third load detection is 10 (the working time 30 minus the working time 20 obtained by the second load detection), which is greater than the new load threshold 4.55, indicating that the virtual machine of the master node still has many remaining tasks.
  • That is, whether to perform the data synchronization operation is determined by the magnitude relationship between the load value and the new load threshold 4.55.
  • Because the new load threshold in this embodiment is a weighted average determined from the results of multiple load measurements, the load threshold gradually converges to a more suitable value as the number of load detections increases.
  • The load threshold is thus a dynamic, preferable threshold, and the virtual machine resource utilization and the same-dirty-page ratio of the active and standby nodes reach a better balance point when data synchronization is performed.
  • the method further includes:
  • SUMk is the sum of the load values from the first load measurement of the primary node through the k-th load measurement, and k is a positive integer;
  • Tcount is the load measurement count threshold
  • c0 is the load threshold of the first synchronization operation of the master node
  • Initially, the measurement count (COUNT) of the load is equal to 0.
  • At the first measurement the first load value L1 is obtained, and SUM1 is equal to L1.
  • If the measurement count threshold Tcount is 2, the initial load threshold c0 is equal to SUM2 divided by 2; that is, the initial load threshold is positively correlated with SUM2 and negatively correlated with the number of measurements. If the measurement count threshold Tcount is 3, c0 is equal to SUM3 divided by 3.
  • the above embodiment can determine an initial load threshold so that the timing at which the primary node synchronizes data for the first time can be determined.
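Under those definitions, the initial load threshold can be computed with a sketch like this; the function and variable names are illustrative only.

```python
def initial_load_threshold(load_values, t_count):
    # Accumulate SUM_k = L1 + ... + Lk; once the measurement count
    # reaches the threshold T_count, c0 is the running average SUM_k / k.
    total = 0.0       # SUM_k
    count = 0         # COUNT, starts at 0
    for load in load_values:
        count += 1
        total += load
        if count == t_count:
            return total / count  # c0
    return None  # not enough measurements yet

c0 = initial_load_threshold([3.0, 5.0, 7.0], t_count=2)
```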
  • the load value of the primary node includes a processor load value and a memory load value
  • the load threshold of the primary node includes a processor load threshold and a memory load threshold
  • the relationship between the processor load value and the processor load threshold may be compared first, and then the relationship between the memory load value and the memory load threshold may be compared, or the relationship between the memory load value and the memory load threshold may be compared first. Then compare the relationship between the processor load value and the processor load threshold, so that the timing of data synchronization between the active and standby nodes can be flexibly determined.
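Since the earlier example starts synchronization when the load value falls below the threshold, the two-part comparison might be sketched as follows. The names are assumptions, and the both-must-pass policy is one plausible reading; as the text notes, the two comparisons may be made in either order.

```python
def should_start_sync(cpu_load, mem_load, cpu_threshold, mem_threshold):
    # Data synchronization starts only when both the processor load and
    # the memory load are below their respective thresholds, i.e. the
    # master node's virtual machine is about to go idle.
    return cpu_load < cpu_threshold and mem_load < mem_threshold

start = should_start_sync(cpu_load=2, mem_load=1,
                          cpu_threshold=4.55, mem_threshold=3)
```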
  • A data synchronization processing apparatus is provided, which is applied to a simulator of a master node in a computer system, where the simulator is used to simulate a hardware device of a first virtual device of the master node, and the computer system further includes a standby node connected to the master node.
  • the device includes:
  • a first thread control unit, configured to acquire first to-be-processed information, where the first to-be-processed information is the first data packet or the first indication information, and the first indication information is used to indicate the first data packet, the first thread control unit being configured to execute non-thread-safe code; and to write the first to-be-processed information to the buffer module;
  • a second thread control unit configured to perform a consistency negotiation process on the first to-be-processed information, where the consistency negotiation process is used to synchronize the order in which the primary node and the standby node process the first data packet;
  • the first thread control unit is further configured to process the first data packet according to a result of the second thread control unit performing the consistency negotiation process.
  • the data synchronization processing device can execute the code by the first thread control unit and the second thread control unit to complete the corresponding task.
  • The first thread control unit is configured to execute non-thread-safe code; therefore, the first thread control unit needs to occupy the mutex lock when performing operations. For example, the first thread control unit needs to occupy the global mutex lock before acquiring the first to-be-processed information.
  • the method for obtaining the first to-be-processed information by the first thread control unit is not limited in this application.
  • After acquiring the first to-be-processed information, the first thread control unit writes the first to-be-processed information to the buffer module, where the buffer module may be a buffer queue, a heap, or a stack for buffering the first to-be-processed information, or may be another data structure for buffering the first to-be-processed information; this is not limited in this application.
  • the global mutex can be released, and other threads can occupy the global mutex and schedule the virtual machine to perform other tasks.
  • The second thread control unit reads at least one piece of to-be-processed information in the buffer module and determines, based on the consistency negotiation protocol, a common order in which the active and standby nodes process the data packets; the first thread control unit then occupies the global mutex lock and processes the data packets in the order determined by the second thread control unit. Since the consistency negotiation between the active and standby nodes is performed by the second thread control unit, the second thread control unit does not need to occupy the global mutex lock when working. Therefore, the data synchronization processing apparatus enables the master node to use the primary virtual machine to process other tasks while performing the synchronization of the active and standby virtual machines, which improves the performance of the master node.
  • the second thread control unit is specifically configured to:
  • the first to-be-processed information is written to the pipeline according to the processed order of the first data packet, and the pipeline is used by the first thread control unit to read the first to-be-processed information.
  • the first data packet may be a data packet obtained from the client, or may be a data packet generated by the master node, or may be other data packets.
  • The specific content of the first data packet is not limited in this application. Since some program code of the master node is non-thread-safe, the second thread control unit cannot directly call the program code of the master node as a worker thread.
  • In the consistency negotiation processing scheme provided in this embodiment, a pipeline is established to connect the first thread control unit and the second thread control unit; the second thread control unit writes the result of the consistency negotiation to the pipeline so that the first thread control unit can read that result through the pipeline, thereby completing the consistency negotiation while avoiding any impact on the security of the primary node.
  • the second thread control unit is further configured to: read the first to-be-processed information from the buffer module at a preset time.
  • the preset time is, for example, a time corresponding to the timer event
  • The second thread control unit may read the first to-be-processed information from the buffer module based on the trigger of a timer event, and the master node may set different timer events. Therefore, the above embodiment can flexibly trigger the second thread control unit to perform the consistency negotiation process.
  • the second thread control unit is further configured to: obtain exclusive rights of the buffer module, and the exclusive permission of the buffer module is used to prohibit two or more threads. Accessing the buffer module at the same time;
  • the second thread control unit is further configured to: when the number of pieces of information to be processed in the buffer module is 0, release the exclusive permission of the buffer module acquired by the second thread.
  • When the second thread control unit starts to work, it first occupies the exclusive permission of the buffer module, which may also be called a queue mutex lock, used to prohibit two or more thread control units from accessing the buffer module at the same time. When the number of pieces of to-be-processed information in the buffer module is 0, the second thread control unit releases the queue mutex lock, and other threads may continue to write new pending information to the buffer module.
  • the foregoing embodiment can prevent the new pending information from being inserted into the to-be-processed information queue that has completed the consistency negotiation process, thereby improving the reliability and efficiency of the consistency negotiation process.
  • the second thread control unit is further configured to:
  • When the quantity of to-be-processed information in the buffer module is greater than 0, the data packets corresponding to the to-be-processed information are written into the consistency log and the to-be-processed information is deleted; the consistency log is used to cache the data packets corresponding to the to-be-processed information, and the sequence of the data packets in the consistency log corresponds to their processed order. The to-be-processed information includes the first to-be-processed information, and the data packets corresponding to the to-be-processed information include the first data packet.
  • a negotiation completion message is received, the negotiation completion message is used to indicate that the processed sequence of the first data packet has been accepted.
  • The consistency negotiation process is performed and the to-be-processed information in the buffer module is then deleted, so that the indication information read from the buffer module by the second thread control unit is always new to-be-processed information. This prevents the second thread control unit from reading already-processed information, thereby improving the efficiency of the consistency negotiation process.
  • the first thread control unit is further configured to: obtain exclusive rights of the buffer module, and the exclusive permission of the buffer module is used to prohibit two or more threads from being in the same Access the buffer module at any time;
  • the first thread control unit is further configured to: release the exclusive permission of the buffer module acquired by the first thread control unit.
  • the first thread control unit first occupies the exclusive permission of the buffer module before writing to the buffer module, and the exclusive authority may also be referred to as a queue mutex lock, for prohibiting two or more thread control units from accessing the buffer at the same time. Module.
  • the second thread control unit can occupy the queue mutex lock and read the pending information in the buffer module.
  • the foregoing embodiment can prevent the new pending information from being inserted into the queue of the information to be processed that has completed the consistency negotiation process, thereby improving the reliability and efficiency of the consistency negotiation process.
  • the first virtual device runs a primary database
  • the standby node is configured with a second virtual device
  • the second virtual device runs a standby database
  • The first data packet carries data that the client sends to the primary node for the primary database.
  • The first thread control unit is further configured to: obtain the first to-be-processed information from the physical network card of the primary node; and send the first data packet to the primary database and the standby database simultaneously, so that the primary node and the standby node process the first data packet in the same order.
  • the device further includes a third thread control unit, and the third thread control unit is configured to:
  • the load thresholds of the master node in the n synchronization operations are c1, ..., cn, and the same-dirty-page ratios of the master node and the standby node in the n synchronization operations are w1, ..., wn, where c1 corresponds to w1, ..., cn corresponds to wn, and n is a positive integer greater than or equal to 2;
  • Lm is the load value of the primary node at the current time
  • a synchronization request is generated, the synchronization request is used to request synchronization of dirty pages of the primary node and the standby node;
  • the second thread control unit is also specifically used to:
  • the first thread control unit is also specifically used to:
  • the synchronization request is processed according to the result of performing the consistency negotiation process on the synchronization request.
• the load threshold used by the device for data synchronization is a dynamic, progressively refined threshold, so that the virtual machine resource utilization and the same-dirty-page ratio of the active and standby nodes can reach a better balance point when data synchronization is performed.
  • the third thread control unit is further configured to:
• SUM k is the sum of the load values obtained from the first through the kth load measurements of the master node, and k is a positive integer;
  • the above embodiment can determine an initial load threshold so that the timing at which the primary node synchronizes data for the first time can be determined.
  • the load value of the primary node includes a processor load value and a memory load value
  • the load threshold of the primary node includes a processor load threshold and a memory load threshold
• the relationship between the processor load value and the processor load threshold may be compared first and the relationship between the memory load value and the memory load threshold compared second, or the memory comparison may be made first and the processor comparison second, so that the timing of data synchronization between the active and standby nodes can be flexibly determined.
• a data synchronization processing apparatus having the functionality of an execution device implementing the method of the first aspect, comprising means for performing the steps or functions described in the above method aspects.
  • the steps or functions may be implemented by software, or by hardware (such as a circuit), or by a combination of hardware and software.
  • the above apparatus includes one or more processing units and one or more communication units.
  • the one or more processing units are configured to support the apparatus to implement a corresponding function of the execution device of the above method, for example, acquiring the first pending information by the first thread.
  • the one or more communication units are configured to support the device to communicate with other devices to implement receiving and/or transmitting functions. For example, the first packet is obtained from the client.
  • the above apparatus may further comprise one or more memories for coupling with the processor, which store program instructions and/or data necessary for the device.
  • the one or more memories may be integrated with the processor or may be separate from the processor. This application is not limited.
  • the device can be a chip.
  • the communication unit may be an input/output circuit or an interface of the chip.
  • the above apparatus includes a transceiver, a processor, and a memory.
• the processor is configured to control the transceiver or the input/output circuit to transmit and receive signals, the memory is configured to store a computer program, and the processor is configured to execute the computer program in the memory, such that the device performs the method in the first aspect or any possible implementation of the first aspect.
• a computer system comprising the primary node and the standby node according to the first aspect, wherein the primary node is configured to perform the method in the first aspect or any possible implementation of the first aspect.
• in a fifth aspect, a computer readable storage medium for storing a computer program, the computer program comprising instructions for performing the method in the first aspect or any possible implementation of the first aspect.
• a computer program product comprising computer program code which, when run on a computer, causes the computer to perform the method in the first aspect or any possible implementation of the first aspect.
  • Figure 1 is a schematic illustration of a computer system suitable for use with the present application
  • FIG. 2 is a schematic diagram of virtual machine state replication suitable for use in the present application
  • FIG. 3 is a schematic diagram of a data synchronization processing method provided by the present application.
  • FIG. 4 is a schematic diagram of data synchronization between a primary and secondary virtual machine provided by the present application.
  • FIG. 5 is a schematic diagram of a method for determining a timing of data synchronization between a primary and a secondary virtual machine according to the present application
  • FIG. 6 is a schematic diagram of a method for determining an initial load threshold provided by the present application.
  • FIG. 7 is a schematic diagram of another data synchronization processing method provided by the present application.
  • FIG. 8 is a schematic diagram of a consistency negotiation method provided by the present application.
  • FIG. 9 is a schematic diagram of still another data synchronization processing method provided by the present application.
  • FIG. 10 is a schematic diagram of still another data synchronization processing method provided by the present application.
  • FIG. 11 is a schematic structural diagram of a data synchronization processing apparatus provided by the present application.
  • FIG. 12 is another schematic structural diagram of a master node provided by the present application.
  • Figure 1 shows a schematic diagram of a computer system suitable for use in the present application.
  • the computer system 100 includes a host 1 and a host 2.
  • the host 1 includes a hardware platform and a host operating system installed on the hardware platform.
• the host 1 further includes a virtual machine 1 and a quick emulator (Qemu) 1 running on the host operating system, wherein a database 1 runs on the virtual machine 1.
• Qemu provides emulated hardware devices for use by virtual machines.
  • Qemu can monitor the workload of virtual machines running on Qemu.
• the workload of a virtual machine includes the virtual machine's occupancy of the central processing unit (CPU) and the virtual machine's disk usage, where the central processing unit and the disk are set in the hardware platform.
  • the host 2 includes a hardware platform and a host operating system installed on the hardware platform.
  • the host 2 further includes a virtual machine 2 and a Qemu 2 running on the host operating system, wherein the virtual machine 2 runs a database 2.
  • the database 1 is the primary database
  • the database 2 is the standby database.
• the database 2 can replace the database 1 as the primary database for the client to access.
• the virtual machine 1 and the virtual machine 2 can be active and standby virtual machines for each other.
  • the host 1 and the host 2 are mutually active and standby nodes.
  • Host 1 and host 2 can communicate with each other through a network interface card (NIC) and can communicate with the client separately.
  • host 1 is the master node and host 2 is the standby node
• the virtual machine 1 can process the four data packets in the order 1234;
• the consistency negotiation module of Qemu 1 in the master node (host 1) and the consistency negotiation module of Qemu 2 in the host 2 determine that the virtual machine 2 also processes the four data packets in the order 1234, so that the virtual machine 1 and the virtual machine 2 process the four data packets in the same order; the primary node and the standby node therefore differ in only a small number of memory dirty pages and need to transfer only a small amount of data when synchronizing.
  • the consistency negotiation module can implement the consistency negotiation of the data packet processing order by using the paxos algorithm.
• an observer node is further introduced in FIG. 1, wherein the observer node may include a Qemu with a consistency negotiation module, as described in more detail below.
  • the above computer system 100 is merely an example, and the computer system applicable to the present application is not limited thereto.
  • the computer system 100 may further include other hosts.
  • different hosts can communicate via radio waves or communicate over Ethernet.
  • FIG. 2 is a schematic diagram of a virtual machine state replication provided by the present application.
  • the Paxos negotiation module (that is, the consistency negotiation module) is deployed in the Qemu of the active and standby nodes, and all virtual machines run the same database program in parallel.
• after the data packets from the client reach the primary node, the Paxos module of the primary node negotiates the processing order of each received packet with the Paxos modules of the other standby nodes, so that all virtual machines process the same data packets in the same order; the standby node and the primary node then have only a small number of inconsistent memory dirty pages, so that only a small amount of data needs to be transferred to complete the synchronization, which improves the efficiency of synchronization.
• the primary node and the standby node run the same database, and the shaded memory dirty pages (memory dirty pages are also referred to as "dirty pages") represent dirty pages of the virtual machine 2 that differ from those of the virtual machine 1.
  • Paxos negotiation module shown in FIG. 2 is merely an example, and other consistency algorithms are also applicable to the present application.
  • FIG. 3 shows a flow chart of a data synchronization processing method 300 provided by the present application.
  • the method 300 is applied to a master node in a computer system. Specifically, the method 300 is applied to a Qemu1 of a master node in a computer system.
  • the computer system further includes a standby node connected to the master node, and the method 300 includes:
• the first to-be-processed information is obtained by the first thread, where the first to-be-processed information is the first data packet or first indication information, the first indication information is used to indicate the first data packet, and the first thread is a thread that executes non-thread-safe code.
  • the first indication information may be, for example, indication information including a pointer and a data size
  • the pointer is used to indicate a storage address of the first data packet
• the first thread may read the first data packet from the storage address indicated by the pointer, using the data size information to determine how many bytes to read.
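The indication information can be pictured as a small descriptor that lets the first thread locate the packet. The sketch below is illustrative only; the field names and the byte store standing in for packet memory are assumptions, not the patent's data layout:

```python
from dataclasses import dataclass

# Shared packet storage standing in for host memory (illustrative only).
PACKET_STORE = bytearray(64)
PACKET_STORE[16:20] = b"PKT1"  # the first data packet: 4 bytes at offset 16

@dataclass
class Descriptor:
    address: int   # the "pointer": storage address of the first data packet
    size: int      # the data size of the first data packet

def read_packet(desc: Descriptor) -> bytes:
    # The first thread reads the packet at the address indicated by the
    # pointer, using the data size to know how many bytes to fetch.
    return bytes(PACKET_STORE[desc.address:desc.address + desc.size])

first_indication = Descriptor(address=16, size=4)
print(read_packet(first_indication))  # → b'PKT1'
```

Passing such a descriptor through the buffer module instead of the packet itself avoids copying the packet payload between threads.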
  • the buffer module is a buffer queue, and may also be a heap or a stack for buffering the first to-be-processed information, and may also be another data structure for buffering the first to-be-processed information. This application does not limit this.
  • the first data packet is processed by the first thread according to the result of the consistency negotiation process.
  • the master node may mobilize the first thread and the second thread to execute the code to complete the corresponding task.
• the above behavior is sometimes described as "completing the task by the first thread" or "the first thread completes the task"; for example, "obtaining the first to-be-processed information by the first thread" and "the first thread acquires the first to-be-processed information" can both be understood as the master node scheduling the first thread to execute code to obtain the first to-be-processed information.
  • the first thread is a thread that executes non-thread-safe code.
• the first thread is Qemu's main loop thread (Qemu main loop), which executes Qemu's core code and is a dedicated event processing loop thread; the main loop thread calls the corresponding handler to handle an event based on the state change of a file descriptor, and the second thread is a Qemu worker thread.
• Qemu's core code is non-thread-safe, that is, Qemu does not provide data access protection, and multiple Qemu threads could otherwise modify the same data and cause inconsistency; therefore, the first thread needs to occupy a mutex when performing operations. For example, the main loop thread needs to occupy a global mutex before acquiring the first to-be-processed information, and releases the global mutex after writing the first to-be-processed information to the buffer module, thereby ensuring that at any time only the main loop thread occupying the global mutex can perform the operations of acquiring the first to-be-processed information and writing it to the buffer module.
  • the first to-be-processed information is any information to be processed obtained by the master node, and the first to-be-processed information may be a data packet, or may be a descriptor for indicating the data packet (ie, indication information). .
  • the master node may directly write the data packet to the buffer module, or may generate a descriptor indicating the data packet, and write the descriptor to the buffer module, where the descriptor is A pointer to the packet can be included, along with information indicating the length and type of the packet.
  • the first data packet may also be a data packet generated locally by the primary node.
  • the application does not limit the specific content of the first data packet and the method for the primary node to acquire the first data packet.
  • the first thread acquires the first to-be-processed information
  • the first to-be-processed information is written to the buffer module.
  • the second thread reads at least one to-be-processed information in the buffer module, and determines a common order in which the active and standby nodes process the data packets based on a consistency negotiation protocol (eg, Paxos), and then the first thread occupies the global mutex and follows the second The processing sequence determined by the thread processes the packet.
  • the second thread is, for example, a consensus negotiation thread.
• the first thread may process the first data packet according to the type of the first data packet; for example, when the first data packet is a data packet sent by the client, the first thread may send the first data packet to the virtual machine of the primary node for processing.
  • the primary node may perform the synchronous operation of the active and standby nodes according to the request data packet.
• the master node can perform the synchronization operation of the active and standby virtual machines while the virtual machine of the primary node processes other tasks, improving the performance of the primary node.
• the database 1 and the database 2 can be guaranteed to perform accesses in the same order, thereby minimizing the difference between the dirty pages of the active and standby nodes and reducing the number of dirty pages that need to be transferred during active-standby synchronization.
  • S330 includes:
  • the first pending information is read from the buffer module by using the second thread.
  • S333 Write, by the second thread, the first to-be-processed information to the pipeline according to the processed order of the first data packet, where the pipeline is used by the first thread to read the first to-be-processed information.
• the consistency negotiation processing scheme provided in this embodiment builds a pipe for the connection between the first thread and the second thread, and adds the pipe to the event loop list of the Qemu main loop thread; the second thread performs a write operation on the file descriptor, causing the file descriptor to become readable at the end of the Qemu main loop thread, and after the Qemu main loop thread reads the file descriptor, the corresponding program can be called to perform subsequent processing.
• the virtual network card processing code (RTL8139_do_receiver) performs the logical operation of the virtual network card on the first data packet; however, the processing code of the RTL8139 virtual network card is non-thread-safe code, so the second thread cannot call it directly. Instead, the second thread writes the descriptor of the first data packet into the pipeline, and the write operation on the descriptor makes the file descriptor readable at the end of the Qemu main loop thread; the virtual network card processing code is then called by the main loop thread to perform subsequent processing on the first data packet. Therefore, the foregoing embodiment can complete the processing task after the consistency negotiation while guaranteeing the thread safety of the master node.
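The pipe-based handoff between the worker (second) thread and the main loop (first) thread can be sketched with an ordinary OS pipe; the main loop waits for the read end to become readable, much as an event loop watches a file descriptor. This is a simplified sketch under assumed names, not Qemu's actual code:

```python
import os
import select
import threading

r_fd, w_fd = os.pipe()  # the pipe added to the main loop's event list

def worker_thread():
    # Second thread: after consistency negotiation, write the packet
    # descriptor into the pipe; the write makes r_fd readable.
    os.write(w_fd, b"desc:pkt1\n")

t = threading.Thread(target=worker_thread)
t.start()

# First (main loop) thread: blocks until the descriptor fd becomes readable,
# then reads it and calls the non-thread-safe handler itself.
readable, _, _ = select.select([r_fd], [], [], 5.0)
msg = os.read(r_fd, 64) if readable else b""
t.join()
print(msg)  # → b'desc:pkt1\n'
```

The design point is that only the main loop thread ever touches the non-thread-safe handler; the worker merely signals it through the pipe.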
  • S331 includes:
  • S3311 The first to-be-processed information is read from the buffer module by using the second thread at a preset time.
  • the preset time is, for example, a time corresponding to the timer event
  • the second thread may read the first to-be-processed information from the buffer module based on the trigger of the timer event, and the master node may set different timer events. Therefore, the foregoing embodiment can flexibly trigger the second thread to perform the consistency negotiation process.
  • the method 300 further includes:
  • S3301 Obtain exclusive permission of the buffer module by using the second thread, and the exclusive permission of the buffer module is used to prohibit two or more threads from accessing the buffer module at the same time.
  • method 300 further includes:
• when the second thread starts working, it first obtains the exclusive permission of the buffer queue, which may also be called a queue mutex, used to prohibit two or more threads from accessing (including writing and/or reading) the buffer queue at the same time.
  • the second thread releases the queue mutex, and other threads can continue to write new pending information to the buffer queue.
  • the foregoing embodiment can prevent the new pending information from being inserted into the to-be-processed information queue that has completed the consistency negotiation process, thereby improving the reliability and efficiency of the consistency negotiation process.
  • S332 includes:
• the data packets corresponding to the to-be-processed information (including the first data packet) are written into the consistency log by the second thread, and the to-be-processed information in the buffer module is deleted; the consistency log is used to cache the data packets, and the order of the data packets in the consistency log corresponds to their processing order.
  • S3323 Send, by using the second thread, a consistency negotiation request that includes the first data packet, where the consistency negotiation request is used to request the standby node to accept the processed sequence of the first data packet.
  • the negotiation completion message is received by the second thread, where the negotiation completion message is used to indicate that the processed sequence of the first data packet has been accepted.
• when a timer event or an I/O event is triggered, the second thread first occupies the queue mutex and then checks whether the buffer queue is empty. If the buffer queue is empty, the queue mutex is released; if the buffer queue is not empty, the second thread sequentially reads the members in the queue (data packets or packet descriptors), inserts the packet corresponding to each member into the consistency log of the Paxos protocol, then removes the member from the queue and releases the memory occupied by the original packet. The second thread reads until the queue is empty, and then releases the queue mutex.
• after the queue mutex is released, the second thread sends the data packets in the consistency log to the standby node in sequence, requesting the standby node to process the data packets in the consistency log in that order; then, when the second thread receives the negotiation completion message from the standby node, it determines that the processed order of the data packets in the consistency log has been accepted by the standby node.
  • the second thread executes the consistency negotiation process after reading the to-be-processed information, and then deletes the to-be-processed information in the buffer module, so that the indication information in the buffer module that the second thread reads each time can be ensured. It is unprocessed information to be processed, and the second thread is prevented from reading the processed information to be processed, thereby improving the efficiency of the consistency negotiation process.
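The drain-then-negotiate sequence described above can be sketched as follows; queue members stand in for packets or packet descriptors, and the negotiation with the standby node is stubbed out rather than implemented:

```python
import threading
from collections import deque

queue_mutex = threading.Lock()
buffer_queue = deque(["pkt-a", "pkt-b"])
consistency_log = []          # ordered packets awaiting/holding agreement

def negotiate_round():
    # Triggered by a timer or I/O event: occupy the queue mutex first.
    with queue_mutex:
        if not buffer_queue:
            return []         # queue empty: just release the mutex
        # Move members into the consistency log in order, emptying the queue
        # so already-negotiated batches can never receive late insertions.
        while buffer_queue:
            consistency_log.append(buffer_queue.popleft())
    # Mutex released: here the second thread would send the log entries to
    # the standby node in sequence and wait for the negotiation-completion
    # message (stubbed out in this sketch).
    return list(consistency_log)

agreed = negotiate_round()
print(agreed)  # → ['pkt-a', 'pkt-b']
```

Deleting queue members as they enter the log guarantees that each negotiation round sees only unprocessed information, which is the efficiency point the embodiment makes.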
• before S320, the method 300 further includes:
  • S319 Acquire exclusive permission of the buffer module by using the first thread, and the exclusive permission of the buffer module is used to prohibit two or more threads from accessing the buffer module at the same time.
  • method 300 further includes:
  • S321 Release the exclusive permission of the buffer module acquired by the first thread.
  • the exclusive permission may also be called a queue mutual exclusion lock, which is used to prohibit two or more threads from accessing the buffer module at the same time.
  • the second thread can occupy the queue mutex lock and read the pending information in the buffer module.
  • the first virtual device runs a primary database
  • the standby node is configured with a second virtual device
• the second virtual device runs a standby database, where the first data packet carries an access request to the primary database that the client sends to the primary node
  • S310 includes: acquiring, by the first thread, the first to-be-processed information from the physical network card of the primary node.
• S340 includes: transmitting, by the first thread, the first data packet to the primary database and the standby database simultaneously, so that the primary node and the standby node process the first data packet in the same order.
• the first data packet sent by the client reaches the Qemu of the master node through the physical network card of the master node; after the consistency negotiation process of the master node, the first data packet is processed by the master node and the standby node in the same processing order, thereby improving the same-dirty-page ratio of the active and standby nodes.
  • the method 300 further includes:
• L m is the load value of the master node at the current time.
  • the synchronization request is processed by the first thread according to a result of performing a consistency negotiation process on the synchronization request.
• in the prior art, the master node determines whether to start the synchronization of the active and standby nodes according to the load value at the current time and a fixed load threshold: if the current load value is less than the fixed load threshold, the synchronization of the active and standby nodes is not started; when the load value at the current time is greater than or equal to the fixed load threshold, the synchronization of the active and standby nodes is started.
• the above prior art has the disadvantage that it is difficult to determine the optimal timing for starting the synchronization of the active and standby nodes from a fixed load threshold: if the fixed load threshold is set too small, for example to 0, then although the ratio of identical dirty pages of the active and standby nodes is highest when the load value of the master node meets the condition (because the virtual machines of the active and standby nodes are no longer working, the dirty pages no longer change), the virtual machines of the active and standby nodes are idle between the load detection and the data synchronization, so virtual machine resources are wasted.
• conversely, if the fixed load threshold is set too large, the virtual machines of the active and standby nodes are still working when the data is synchronized, so the virtual machines of the active and standby nodes have a small proportion of identical dirty pages, and the active and standby nodes need to transmit more data (that is, the data corresponding to the differing dirty pages), which consumes more network resources during synchronization of the active and standby nodes.
• the processor working time from startup of the virtual machine 1 of the primary node to the first load detection is 10 minutes, and the same-dirty-page ratio of the virtual machines of the active and standby nodes is 80%;
• the processor working time from startup to the second load detection is 20 minutes, and the same-dirty-page ratio of the virtual machines of the active and standby nodes is 85%;
• the processor working time from startup to the third load detection is 20 minutes, and the same-dirty-page ratio of the virtual machines of the active and standby nodes is 85%.
  • the above data indicates that the virtual machine 1 of the primary node is already in an idle state at least during the second load detection.
• the virtual machine 1 of the primary node is already in an idle state before the second load detection; if data synchronization is started only after the second load detection, the virtual machine 1 of the primary node is idle for a period of time and virtual machine resources are wasted. Therefore, the preferred synchronization time of the active and standby nodes is after the first load detection and before the second load detection: in this time period, when the virtual machine 1 of the master node has completed most or all of its work, starting the active-standby synchronization can obtain a better balance point between virtual machine resource utilization and the same-dirty-page ratio.
• the new load threshold is determined according to the load thresholds of the primary node and the same-dirty-page ratios at at least two synchronization operations; for example, the same-dirty-page ratio of 80% obtained at the first load detection is used as a weight and multiplied by the load threshold 5 to obtain 4, the same-dirty-page ratio of 85% obtained at the second load detection is multiplied by the load threshold 6 to obtain 5.1, and the sum of 4 and 5.1 is divided by the number of load detections, 2, to obtain 4.55, the weighted average of the load thresholds of the two load detections, which is the new load threshold.
• if the processor working time obtained by the third load detection is 22, the load value of the master node at the third load detection is 2 (the working time 22 obtained by the third load detection minus the working time 20 obtained by the second load detection gives the load value 2 of the master node at the third load detection); this load value is less than the new load threshold 4.55, indicating that the virtual machine of the primary node does not have many remaining tasks and will soon be idle, so the data synchronization operation is started. If the processor working time obtained by the third load detection is 30, the load value of the master node at the third load detection is 10 (the working time 30 obtained by the third load detection minus the working time 20 obtained by the second load detection gives the load value 10 of the master node at the third load detection); this load value is greater than the new load threshold 4.55, indicating that the virtual machine of the master node still has many remaining tasks, and if synchronization were started now the proportion of identical dirty pages of the active and standby nodes would be small, so the synchronization is not started at this time.
• since the new load threshold in this embodiment is a weighted average determined according to the results of multiple load measurements, the new load threshold will gradually converge to a more suitable load threshold as the number of load detections increases.
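The weighted average in the example above works out as follows; the helper name is hypothetical, but the numbers are a direct transcription of the example (thresholds 5 and 6, same-dirty-page ratios 80% and 85%):

```python
def next_load_threshold(thresholds, dirty_ratios):
    """New threshold = sum of (threshold_i * same-dirty-page ratio_i) / n."""
    assert len(thresholds) == len(dirty_ratios)
    n = len(thresholds)
    return sum(c * w for c, w in zip(thresholds, dirty_ratios)) / n

# Thresholds 5 and 6 at the last two detections, dirty-page ratios 80%, 85%:
new_c = next_load_threshold([5, 6], [0.80, 0.85])
print(round(new_c, 2))  # → 4.55
```

Because each past threshold is weighted by how good a same-dirty-page ratio it produced, thresholds that led to better synchronizations pull the average toward themselves over successive rounds.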
  • the third thread is, for example, a worker thread of the active/standby synchronization module in the master node, that is, a thread responsible for synchronization of the active and standby virtual machines.
• the load threshold is a dynamic, progressively refined threshold, so that the virtual machine resource utilization and the same-dirty-page ratio of the active and standby nodes can reach a better balance point when data synchronization is performed.
  • the method 300 further includes:
• S3001 Acquire, by the third thread, SUM k , where SUM k is the sum of the load values obtained from the first through the kth load measurements of the primary node, and k is a positive integer.
  • T count is the load measurement threshold
  • c 0 is the load threshold of the first synchronization operation of the primary node
• c 0 = SUM k / k.
  • the number of measurements (COUNT) of the load is equal to 0.
  • the first load value L 1 is obtained
  • SUM 1 is equal to L 1
  • the measurement number threshold T count is 2
• the initial load threshold c 0 is equal to SUM 2 divided by 2; that is, the initial load threshold is positively correlated with SUM 2 and negatively correlated with the number of measurements. If the measurement number threshold T count is 3, the initial load threshold c 0 is equal to SUM 3 divided by 3.
  • SUM 1 is equal to the load value obtained by the first load measurement.
  • the above embodiment can determine an initial load threshold so that the timing at which the primary node synchronizes data for the first time can be determined.
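The initial-threshold computation c0 = SUMk / k can be sketched directly; the helper and the sample load values are illustrative, not taken from the patent:

```python
def initial_load_threshold(load_values, t_count):
    """Return c0 = SUM_k / k once the measurement count reaches T_count."""
    count = 0                  # COUNT: number of load measurements so far
    running_sum = 0.0          # SUM_k: sum of the first k load values
    for load in load_values:
        count += 1
        running_sum += load    # SUM_k = L_1 + ... + L_k
        if count == t_count:
            return running_sum / count   # c0 = SUM_k / k
    return None                # not enough measurements yet

# With T_count = 2 and the first two load values 10 and 20: c0 = (10+20)/2.
print(initial_load_threshold([10, 20, 30], t_count=2))  # → 15.0
```

Once this initial c0 exists, the master node has its first reference point for deciding when to synchronize, after which the weighted-average update takes over.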
  • the load value of the primary node includes a processor load value and a memory load value
  • the load threshold of the primary node includes a processor load threshold and a memory load threshold
• the relationship between the processor load value and the processor load threshold may be compared first and the relationship between the memory load value and the memory load threshold compared second, or the memory comparison may be made first and the processor comparison second, so that the timing of data synchronization between the active and standby nodes can be flexibly determined.
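The flexible ordering of the two comparisons can be sketched as below; following the worked example above, synchronization is triggered once each load value falls below its threshold. The helper name and argument order are assumptions:

```python
def should_synchronize(cpu_load, cpu_threshold, mem_load, mem_threshold,
                       cpu_first=True):
    """Synchronize only when both load values are below their thresholds.

    cpu_first chooses which comparison is made first; the outcome is the
    same either way, but the order can be picked flexibly (e.g. to fail
    fast on whichever check is cheaper to evaluate).
    """
    checks = [(cpu_load, cpu_threshold), (mem_load, mem_threshold)]
    if not cpu_first:
        checks.reverse()
    return all(load < threshold for load, threshold in checks)

print(should_synchronize(2, 4.55, 1, 3))   # → True  (both below threshold)
print(should_synchronize(10, 4.55, 1, 3))  # → False (CPU still busy)
```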
  • the primary virtual machine is a virtual machine running on the primary node
  • the standby virtual machine is a virtual machine running on the standby node.
  • the primary and secondary virtual machines are synchronized, that is, the data of the active and standby nodes is synchronized. among them,
  • T0-T1 The primary virtual machine runs with the standby virtual machine and records a list of dirty pages.
  • T1-T2 The primary virtual machine and the standby virtual machine stop running, and each computes a hash value of the dirty page.
• T2-T3 The primary virtual machine compares its dirty-page hash values with those of the standby virtual machine.
  • T3-T4 The primary virtual machine transfers the different dirty pages to the backup virtual machine.
  • the primary virtual machine releases the buffered network output (different dirty page data) and resumes operation, and the standby virtual machine resumes operation.
  • T1 is a time for performing synchronization between the active and standby virtual machines.
  • FIG. 5 shows a method flow for triggering synchronization of the active and standby virtual machines.
  • the method 500 includes:
  • the synchronization module of the active and standby virtual machines records the same memory dirty page ratio of the primary virtual machine and the standby virtual machine at each synchronization, and the CPU load threshold and disk (input/output, I/O) load when the primary and backup virtual machines are synchronized. Threshold.
• a corresponding weight is given to each threshold according to the same-dirty-page ratio; the last n load thresholds are multiplied by their weights, the products are summed, and the total is divided by n to obtain the load threshold for triggering the next synchronization of the active and standby virtual machines.
•   for example, the active/standby synchronization module of the primary node assigns the CPU threshold c j recorded at the j-th synchronization of the primary and standby virtual machines a weight w j equal to the same-dirty-page ratio observed at that synchronization. The CPU thresholds of the last n synchronizations are multiplied by their corresponding weights, the products are summed, and the sum is divided by n to obtain the CPU load value that must be reached to start the (j+1)-th synchronization of the primary and standby virtual machines. The disk I/O load threshold is adjusted in the same way.
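•   The weighted-average update above can be sketched as follows; this is a minimal illustration, and the function and variable names (next_threshold, thresholds, ratios) are ours, not the patent's:

```python
def next_threshold(thresholds, ratios):
    """Weighted-average threshold update: each of the last n load
    thresholds c_j is weighted by the same-dirty-page ratio w_j recorded
    at the j-th synchronization, and the next trigger threshold is the
    weighted sum divided by n."""
    n = len(thresholds)
    return sum(c * w for c, w in zip(thresholds, ratios)) / n

# e.g. three past CPU thresholds and their same-dirty-page ratios
cpu_next = next_threshold([100.0, 120.0, 80.0], [0.5, 0.8, 0.6])
# cpu_next is (100*0.5 + 120*0.8 + 80*0.6) / 3
```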
•   the initial load accumulated value SUM 0 is equal to 0
  • the initial load value CPU_Tick_A0 is equal to 0
  • the count value is the number of load measurements performed by the primary virtual machine.
  • S530 Acquire a current load value, compare the current load value with a set load threshold, and determine whether synchronization is performed.
•   the active/standby synchronization module obtains the workload of the virtual machine, compares it with the threshold, and determines whether to start synchronization.
  • the process is as follows:
•   the thread on the master node responsible for synchronizing the active and standby virtual machines (that is, the synchronization thread) calls the clock function to obtain CPU_Tick 1, the CPU time occupied by the virtual machine from startup to the first moment.
•   Δt 1 is a value set in advance by the master node. In order to be able to detect that the virtual machine is idle in a short time and avoid the error caused by the monitoring time being too short, Δt 1 can be set, for example, to 100 microseconds.
•   the thread responsible for synchronizing the active and standby virtual machines calls the clock function again to obtain CPU_Tick 2, the CPU time occupied by the virtual machine from startup to the second moment.
•   if CPU_Tick 2 - CPU_Tick 1 < c, the CPU is idle; go to step 5. Otherwise the synchronization thread sleeps Δt 1 and then calls the clock function again to obtain the CPU time occupied by the virtual machine from startup to the current moment, until the difference between the current CPU time and the previous CPU time is less than the CPU load threshold, where c is the CPU load threshold that must be reached to trigger synchronization of the active and standby virtual machines.
•   the synchronization thread obtains disk_time 1, the disk I/O time of the virtual machine from startup to the current moment, through the Linux Netlink interface.
•   Δt 2 is a value set in advance by the master node. Δt 2 is determined according to the performance of the physical disk. For example, if a disk I/O operation takes 5 milliseconds, Δt 2 can be set to 5 milliseconds.
•   if disk_time 2 - disk_time 1 < d, the disk I/O is idle and master-standby synchronization is started. Otherwise the synchronization thread sleeps Δt 2 and continues to obtain, through the Linux Netlink interface, the disk I/O time from startup to the current moment, until the difference between the current disk I/O time and the previous disk I/O time (that is, the current disk I/O load value) is less than the disk I/O load threshold, where d is the disk load threshold that must be reached to trigger synchronization of the active and standby virtual machines.
•   the above process first determines whether the CPU load exceeds the CPU load threshold and then determines whether the disk I/O load exceeds the disk I/O load threshold. As an optional example, the disk I/O load may be checked against the disk I/O load threshold first and the CPU load against the CPU load threshold second. In addition, if other parameters affect the same-dirty-page ratio of the active and standby virtual machines, whether to synchronize the active and standby virtual machines can likewise be determined according to the above method.
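•   The CPU and disk polling loops above share one shape; the following is a hedged sketch, where get_busy_time, threshold, and interval are illustrative stand-ins for the clock/Netlink readings, the thresholds c or d, and the sleep intervals Δt 1 or Δt 2:

```python
import time

def wait_until_idle(get_busy_time, threshold, interval):
    """Sample the cumulative busy time (CPU ticks or disk I/O time),
    sleep for the monitoring interval, and report idle once the busy
    time accrued during one interval falls below the load threshold."""
    prev = get_busy_time()
    while True:
        time.sleep(interval)
        cur = get_busy_time()
        if cur - prev < threshold:
            return cur - prev  # idle: synchronization may start
        prev = cur             # still busy: keep polling
```

For instance, with simulated cumulative busy times 10, 19, 25, 26 and threshold 5, the loop sees two busy intervals (9 and 6) before the idle interval (1) allows synchronization to start.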
•   when synchronization is initiated, the master-slave synchronization module generates a special packet descriptor containing a pointer to a null address, together with information indicating the length of the packet (zero) and the type of the packet. The synchronization module occupies the mutex of the buffer queue, inserts the packet descriptor into the buffer queue, and then releases the queue mutex.
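•   A minimal sketch of such a descriptor and its insertion under the queue mutex; the PacketDescriptor fields and names are illustrative, and None stands in for the null-address pointer:

```python
from dataclasses import dataclass
from threading import Lock
from typing import Optional

@dataclass
class PacketDescriptor:
    data: Optional[bytes]  # None plays the role of the null-address pointer
    length: int            # zero for a synchronization request
    ptype: str             # packet type, e.g. "sync_request"

buffer_queue = []
queue_mutex = Lock()       # the queue mutex of the buffer module

def insert_sync_request():
    """Generate the special descriptor and insert it under the mutex."""
    desc = PacketDescriptor(data=None, length=0, ptype="sync_request")
    with queue_mutex:              # occupy the mutex of the buffer queue
        buffer_queue.append(desc)
    # the queue mutex is released when the with-block exits
```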
  • the primary virtual machine transfers inconsistent dirty pages to the standby virtual machine by comparing the dirty pages of the active and standby virtual machines.
  • FIG. 7 shows another flow chart of the data synchronization processing method provided by the present application.
•   the terminal access point (TAP) character device (/dev/tapX) of the master node becomes readable.
•   when the Qemu main loop thread finds that the TAP character device is readable, it attempts to occupy the global mutex and reads the client data packet out of the character device.
•   the Qemu main loop thread then generates a descriptor for the packet, the descriptor including a pointer to the packet and information describing the length of the packet and the type of the packet, wherein the packet type is a client request.
  • the primary node occupies the mutex of the buffer queue and populates the packet descriptor into the buffer queue, then releases the queue mutex.
•   the thread of the middle-tier module responsible for consistency negotiation occupies the mutex of the buffer queue and then checks whether the buffer queue is empty. If the buffer queue is not empty, it reads the members of the queue (that is, the descriptors) in turn, fills the packets described by the members into the consistency log of the Paxos protocol, and then deletes the members from the queue and releases the memory occupied by the original data packets.
•   the thread responsible for consistency negotiation reads until the queue is empty and then releases the mutex of the queue. After the queue mutex is released, the thread responsible for consistency negotiation checks whether there are members waiting to be processed (not yet negotiated) in the consistency log of the Paxos protocol. If so, it negotiates those members with the other nodes according to the Paxos algorithm.
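•   The drain step of the negotiation thread can be sketched as below; a hedged illustration in which the consistency log is a plain list rather than a real Paxos log, and the names are ours:

```python
from threading import Lock

def drain_queue_to_log(buffer_queue, queue_mutex, consistency_log):
    """Occupy the queue mutex, move every queued descriptor into the
    consistency log, then release the mutex; entries still pending
    negotiation remain in the log to be proposed to the other nodes."""
    with queue_mutex:
        while buffer_queue:                 # read until the queue is empty
            member = buffer_queue.pop(0)    # oldest member first
            consistency_log.append(member)  # fill into the consistency log
    # mutex released here; Paxos negotiation of pending entries follows
    return consistency_log
```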
  • the active/standby synchronization module determines the timing of the synchronization between the active and standby virtual machines according to the methods shown in FIG. 5 and FIG. 6.
•   when the active/standby synchronization module determines to trigger synchronization of the active and standby virtual machines, it generates an active/standby synchronization request, occupies the mutex of the buffer queue, inserts the request into the buffer queue, and then releases the queue mutex. Both active/standby synchronization requests and data packets from the client must complete consistency negotiation before they can be processed.
•   after the thread responsible for consistency negotiation determines through the Paxos algorithm that a data packet has completed negotiation, it writes the descriptor of the data packet to the pipeline, so that the Qemu main loop thread can read the descriptor from the pipeline.
•   the packet is processed according to the type of the packet indicated by the descriptor. For example, when the packet is a packet from the client, the packet is sent through the virtual NIC to the virtual machine for processing.
  • FIG. 8 is a schematic diagram of a consistency negotiation method provided by the present application.
•   the distributed system shown in FIG. 8 includes an observer node in addition to the primary node and the standby node, so that the requirements of the Paxos algorithm can be satisfied; the observer node can also be replaced with a standby node.
  • a distributed system suitable for the present application may also include more standby nodes.
  • the primary and standby virtual machines are in hot standby and run the same distributed database program in parallel.
  • the observer node virtual machine is in a standby state.
  • the Qemu of the three nodes all have a consistency negotiation module, and the client network request and the master-slave synchronization request are negotiated according to the Paxos algorithm. Observer nodes only participate in Paxos negotiation work and do not participate in active/standby synchronization.
•   the thread of the middle-layer software module responsible for consistency negotiation is driven by network I/O events triggered by Paxos algorithm message delivery.
•   when the thread responsible for consistency negotiation receives a negotiation message sent by another node, it processes the message according to the Paxos algorithm.
•   after the thread responsible for consistency negotiation determines that a data packet has completed consistency negotiation, if the data packet is a client request, the data packet is sent to the virtual machine; if the data packet is an active/standby synchronization request, the thread responsible for consistency negotiation notifies the active/standby synchronization module to initiate synchronization.
•   on the observer node, the thread of the middle-layer software module responsible for consistency negotiation is likewise driven by network I/O events triggered by Paxos algorithm message delivery.
•   when the thread responsible for consistency negotiation receives a negotiation message sent by another node, it processes the message according to the Paxos algorithm. Since the observer node virtual machine is in the standby state, after the thread responsible for consistency negotiation determines that a data packet has completed consistency negotiation, the negotiated data packet, whether a client request or an active/standby synchronization request, is discarded.
  • FIG. 9 is still another flowchart of the data synchronization processing method provided by the present application.
•   when a client data packet arrives at the physical network card of the master node, the master node (that is, the host operating system) invokes the driver of the physical network card, in which the software bridge in the Linux kernel is used to forward data. At the software bridge layer, the master node determines to which device the packet should be sent and calls the bridge's send function to deliver the packet to the corresponding port. If the packet is destined for the virtual machine, it is forwarded through the TAP device.
•   the TAP is equivalent to an Ethernet device that operates on Layer 2 packets, that is, Ethernet data frames.
•   the character device (/dev/tapX) of the TAP device is responsible for forwarding packets between kernel space and user space.
•   the Qemu main loop thread keeps looping on the select system call to determine which file descriptors have changed state, including the TAP device file descriptor and the pipe device file descriptor.
•   when the Qemu main loop thread finds that the TAP character device is readable, it attempts to occupy the global mutex and reads the client data packet out of the character device.
•   the Qemu main loop then generates a descriptor for the packet, the descriptor containing a pointer to the packet and information indicating the length of the packet and the type of the packet, where the type of the packet is a client data packet.
  • the master node initiates the master-slave synchronization to generate a synchronization request packet.
  • An automatic threshold adjustment algorithm is deployed in the active/standby synchronization module of the primary node Qemu (as shown in S301 to S304).
  • the active/standby synchronization module of the master node monitors the CPU load and disk I/O load of the virtual machine, and compares the load threshold and the virtual machine load to determine whether to initiate synchronization.
•   when the master node initiates synchronization, the master-slave synchronization module generates a special packet descriptor containing a pointer to a null address, together with information indicating the length of the packet (zero) and the type of the packet.
  • the type of the packet here is the primary and secondary synchronization request.
  • the synchronization module occupies the mutex of the buffer queue and populates the packet descriptor into the buffer queue, then releases the queue mutex.
  • the primary virtual machine synchronizes, it compares the dirty pages of the active and standby virtual machines, and only transmits the inconsistent dirty pages to the standby virtual machine.
  • S3 The master node inserts the packet descriptor into the buffer queue and performs consistency negotiation on the data packet.
  • Figure 10 consists of two parts, one part is the processing flow of the Qemu main loop thread.
•   this processing flow consists of three steps: occupying the mutex of the buffer queue, filling the packet descriptor into the buffer queue, and then releasing the queue mutex.
  • the middle-tier thread responsible for consistency negotiation is driven based on events (timer events or network I/O events). For example, when a timer event is triggered, the consistency negotiation thread first occupies the mutex of the buffer queue, and then checks to see if the buffer queue is empty. If the buffer queue is not empty, the consistency negotiation thread reads the members in the queue in turn, inserts the packet described by the member into the consistency log of the Paxos protocol, and then removes the member from the queue and releases the memory occupied by the original packet. The consistency negotiation thread reads until the queue is empty, and then releases the mutex of the queue. After the queue mutex is released, the consistency negotiation thread checks whether there are members waiting to be processed (not negotiated) in the consistency log of the Paxos protocol. If so, the members to be processed negotiate with other nodes according to the Paxos algorithm.
  • S4 The master node determines the type of the data packet after the negotiation is reached.
•   the consistency negotiation thread listens for network I/O events triggered by received Paxos algorithm messages. When the consistency negotiation thread receives a negotiation message sent by another node, it processes the message according to the Paxos algorithm. If the consistency negotiation thread determines through the Paxos algorithm that a data packet has completed consistency negotiation, it determines the type of the packet according to the information contained in the packet. When performing consistency negotiation on an original data packet (the data packet before it was inserted into the buffer queue), the consistency negotiation thread encapsulates the original data packet; the encapsulated data packet contains, in addition to the original data packet, other information, for example information indicating the type of the original data packet. The consistency negotiation thread sends the encapsulated packet to the standby node.
  • the client data packet is forwarded to the Qemu main loop, and the Qemu main loop performs a logical operation of the virtual network card (such as RTL8139) on the client data packet.
  • the consistency negotiation thread first writes the length of the packet in the pipe associated with the Qemu main loop and then writes the packet content.
•   when the Qemu main loop thread finds that the file descriptor of the pipeline has become readable, it occupies the global mutex and reads an integer from the pipeline; this integer is the length of the packet sent through the pipeline. According to the obtained integer, the Qemu main loop thread reads data of the corresponding length, that is, the data packet, from the pipeline.
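•   The length-prefixed pipe protocol described above can be sketched with an ordinary OS pipe; the function names are ours, and Qemu's actual pipe handling differs in detail:

```python
import os
import struct

def write_packet(wfd, payload):
    """Sender side: write the packet length first (a native int),
    then the packet content."""
    os.write(wfd, struct.pack("=i", len(payload)))
    os.write(wfd, payload)

def read_packet(rfd):
    """Reader side: read the integer length, then read exactly that
    many bytes to recover the data packet."""
    (length,) = struct.unpack("=i", os.read(rfd, struct.calcsize("=i")))
    return os.read(rfd, length)

rfd, wfd = os.pipe()
write_packet(wfd, b"client request")
packet = read_packet(rfd)  # b"client request"
```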
•   the Qemu main loop thread then calls the RTL8139_do_receiver function, in which it performs the logical operations equivalent to a hardware RTL8139 NIC.
•   the kernel-based virtual machine operates the virtual RTL8139 through simulated I/O instructions to copy the packet into the client address space and place it at the corresponding I/O address.
  • the Qemu main loop thread releases the global mutex.
  • S6 The application in the virtual machine processes the client data packet.
  • the database program in the virtual machine performs the query action after receiving the client data packet, and returns the execution result.
  • the consistency negotiation thread notifies the active/standby synchronization module to initiate synchronization.
•   a data frame for preparing virtual machine synchronization is generated, and the data frame is placed in a buffer queue of the primary node for transmission.
  • the master-slave synchronization module can be implemented by the third thread of Qemu
•   the consistency negotiation layer module can be implemented by the second thread of Qemu, wherein the second thread and the third thread are both Qemu worker threads.
  • the master node includes corresponding hardware structures and/or software modules for performing various functions.
•   the present application can be implemented in hardware, or in a combination of hardware and computer software, in combination with the elements and algorithm steps of the various examples described in the embodiments disclosed herein. Whether a function is implemented in hardware or in computer software driving hardware depends on the specific application and the design constraints of the solution. A person skilled in the art can use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present application.
•   in the present application, the master node may be divided into functional units according to the above method examples.
  • each functional unit may be divided according to each function, or two or more functions may be integrated into one processing unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of the unit in the present application is schematic, and is only a logical function division, and the actual implementation may have another division manner.
  • FIG. 11 is a schematic structural diagram of a possible data synchronization processing apparatus provided by the present application.
•   the data synchronization processing device 1100 may be a software module or a hardware module included in the master node, and the data synchronization processing device 1100 includes a first thread control unit 1101 and a second thread control unit 1102.
•   the first thread control unit 1101 and the second thread control unit 1102 are used to control and manage the actions of the data synchronization processing device 1100.
•   the first thread control unit 1101 and the second thread control unit 1102 are configured to support the data synchronization processing device 1100 in performing the various steps of Figure 3 and/or other processes for the techniques described herein.
•   the first thread control unit 1101 is configured to acquire first to-be-processed information, where the first to-be-processed information is a first data packet or first indication information, and the first indication information is used to indicate the first data packet; the first thread control unit 1101 is configured to execute the non-thread-safe code and to write the first to-be-processed information into the buffer module;
  • the second thread control unit 1102 is configured to perform a consistency negotiation process on the first to-be-processed information, where the consistency negotiation process is used to synchronize the order in which the primary node and the standby node process the first data packet;
  • the first thread control unit 1101 is further configured to process the first data packet according to the result of the second thread control unit 1102 performing the consistency negotiation process.
  • the data synchronization processing device 1100 can execute code by the first thread control unit 1101 and the second thread control unit 1102 to complete the corresponding task.
•   since the first thread control unit 1101 is configured to execute the non-thread-safe code, the first thread control unit 1101 needs to occupy a mutex when performing operations; for example, the first thread control unit 1101 needs to occupy the global mutex before acquiring the first to-be-processed information, which is not limited in this application.
  • the manner in which the first thread control unit 1101 acquires the first to-be-processed information is not limited.
•   after acquiring the first to-be-processed information, the first thread control unit 1101 writes the first to-be-processed information to the buffer module, where the buffer module may be a buffer queue, a heap or stack for buffering the first to-be-processed information, or another data structure for buffering the first to-be-processed information, which is not limited in this application.
  • the global mutex can be released, and other threads can occupy the global mutex and schedule the virtual machine to perform other tasks.
•   the second thread control unit 1102 reads at least one piece of to-be-processed information in the buffer module and determines, based on the consistency negotiation protocol, a common order in which the active and standby nodes process data packets. Subsequently, the first thread control unit 1101 occupies the global mutex and processes the data packets in the order determined by the second thread control unit 1102. Since the consistency negotiation of the active and standby nodes is performed by the second thread control unit 1102, which does not need to occupy the global mutex while working, a master node configured with the data synchronization processing device 1100 can use the primary virtual machine to process other tasks while performing active/standby synchronization, and therefore has higher performance than prior-art master nodes.
  • the second thread control unit 1102 is specifically configured to:
  • the first to-be-processed information is written to the pipeline according to the processed order of the first data packet, and the pipeline is used by the first thread control unit 1101 to read the first to-be-processed information.
  • the first data packet may be a data packet obtained from the client, or may be a data packet generated by the master node, or may be other data packets.
•   the specific content of the first data packet is not limited in this application. Since some program code executed by the data synchronization processing device 1100 is not thread-safe, the second thread control unit 1102, as a worker thread, cannot directly call the program code of the data synchronization processing device 1100. The consistency negotiation scheme provided in this embodiment therefore establishes a pipe for communication between the first thread control unit 1101 and the second thread control unit 1102: the second thread control unit 1102 writes the result of the consistency negotiation to the pipe, and the first thread control unit 1101 reads the result of the consistency negotiation from the pipe, so that the consistency negotiation can be completed while calls to the non-thread-safe code of the data synchronization processing device 1100 by the second thread control unit 1102 are avoided.
  • the second thread control unit 1102 is further configured to: read the first to-be-processed information from the buffer module at a preset time.
  • the preset time is, for example, the time corresponding to the timer event
•   the second thread control unit 1102 can read the first to-be-processed information from the buffer module based on the triggering of a timer event, and the master node can set different timer events; therefore, the above embodiment can flexibly trigger the second thread control unit 1102 to perform the consistency negotiation process.
•   the second thread control unit 1102 is further configured to: obtain the exclusive right of the buffer module, where the exclusive right of the buffer module is used to prohibit two or more threads from accessing the buffer module at the same time;
  • the second thread control unit 1102 is further configured to: when the number of pieces of information to be processed in the buffer module is 0, release the exclusive right of the buffer module acquired by the second thread.
•   when the second thread control unit 1102 starts to work, it first occupies the exclusive right of the buffer module, which may also be called the queue mutex, used to prohibit two or more thread control units from accessing the buffer module at the same time. When the number of pieces of to-be-processed information in the buffer module is 0, the second thread control unit 1102 releases the queue mutex, and other threads may continue to write new to-be-processed information to the buffer module.
  • the foregoing embodiment can prevent the new pending information from being inserted into the to-be-processed information queue that has completed the consistency negotiation process, thereby improving the reliability and efficiency of the consistency negotiation process.
  • the second thread control unit 1102 is further specifically configured to:
  • the data packet corresponding to the to-be-processed information is written into the consistency log, and the to-be-processed information is deleted.
•   the consistency log is used to cache the data packets corresponding to the to-be-processed information, and the order of the data packets in the consistency log corresponds to the order in which the data packets in the consistency log are processed; the to-be-processed information includes the first to-be-processed information, and the data packets corresponding to the to-be-processed information include the first data packet.
  • a negotiation completion message is received, the negotiation completion message is used to indicate that the processed sequence of the first data packet has been accepted.
•   by deleting the to-be-processed information in the buffer module once the consistency negotiation process has been executed, it can be ensured that the to-be-processed information in the buffer module read by the second thread control unit 1102 is new to-be-processed information, preventing the second thread control unit 1102 from reading already processed to-be-processed information and thereby improving the efficiency of the consistency negotiation process.
•   the first thread control unit 1101 is further configured to: acquire the exclusive right of the buffer module, where the exclusive right of the buffer module is used to prohibit two or more threads from accessing the buffer module at the same time;
  • the first thread control unit 1101 is further configured to: release the exclusive permission of the buffer module acquired by the first thread control unit 1101.
  • the first thread control unit 1101 first occupies exclusive rights of the buffer module before writing to the buffer module, and the exclusive authority may also be referred to as a queue mutex lock for prohibiting two or more thread control units from accessing at the same time. Buffer module.
  • the second thread control unit 1102 can occupy the queue mutex lock and read the pending information in the buffer module.
  • the foregoing embodiment can prevent the new pending information from being inserted into the queue of the information to be processed that has completed the consistency negotiation process, thereby improving the reliability and efficiency of the consistency negotiation process.
  • the first virtual device runs a primary database
  • the standby node is configured with a second virtual device
  • the second virtual device runs a standby database
•   the first data packet carries a request that the client sends to the primary node for the primary database.
  • the first thread control unit 1101 is further configured to: obtain first to-be-processed information from the physical network card of the primary node; send the first data packet to the primary database and the standby database simultaneously, so that the primary node and the standby node are processed in the same order. The first packet.
  • the device further includes a third thread control unit, and the third thread control unit is configured to:
•   the same-dirty-page ratios of the master node and the standby node in the last n synchronization operations are w 1, ..., w n, where c 1 corresponds to w 1, ..., and c n corresponds to w n, and n is a positive integer greater than or equal to 2;
  • L m is the load value of the primary node at the current time
  • a synchronization request is generated, the synchronization request is used to request synchronization of dirty pages of the primary node and the standby node;
  • the second thread control unit 1102 is further specifically configured to:
  • the first thread control unit 1101 is further specifically configured to:
  • the synchronization request is processed according to the result of performing the consistency negotiation process on the synchronization request.
•   the load threshold used by the device for data synchronization is a dynamic, more optimal threshold, so that when data synchronization is performed, the virtual machine resource utilization and the same-dirty-page ratio of the active and standby nodes can reach a better balance point.
  • the third thread control unit is further configured to:
•   SUM k is the sum of the load values obtained from the first through the k-th load measurements of the master node, where k is a positive integer.
  • the above embodiment can determine an initial load threshold so that the timing at which the primary node synchronizes data for the first time can be determined.
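•   One plausible reading of the accumulated-value scheme (starting from SUM 0 = 0) is that the initial threshold is seeded as the running average SUM k / k of the first k measurements; the names below are illustrative and the exact formula is an assumption, not stated in the source:

```python
def initial_threshold(load_values):
    """Seed the load threshold from the first k load measurements:
    SUM_k is the cumulative load, and the assumed threshold is SUM_k / k."""
    k = len(load_values)
    sum_k = sum(load_values)   # SUM_k: accumulated load over k measurements
    return sum_k / k
```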
  • the load value of the primary node includes a processor load value and a memory load value
  • the load threshold of the primary node includes a processor load threshold and a memory load threshold
•   the relationship between the processor load value and the processor load threshold may be compared first and the relationship between the memory load value and the memory load threshold second, or the memory comparison may be performed first and the processor comparison second, so that the timing of data synchronization between the active and standby nodes can be determined flexibly.
  • FIG. 12 shows another possible schematic diagram of the master node involved in the present application.
  • the master node 1200 includes a processor 1202, a transceiver 1203, and a memory 1201.
  • the transceiver 1203, the processor 1202, and the memory 1201 can communicate with each other through an internal connection path to transfer control and/or data signals.
•   the processing unit 1102 can be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and it may implement or carry out the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
• the processor may also be a combination that implements computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
  • the communication unit 1103 can be a transceiver, a transceiver circuit, or the like.
  • the storage unit 1101 may be a memory.
• the master node 1200 provided by the present application handles the consistency negotiation of the active and standby nodes with the second thread, and the second thread does not need to hold the global mutex while working. Therefore, the master node 1200 can use the primary virtual machine to process other tasks while performing the synchronization operation of the active and standby virtual machines, which improves the performance of the primary node.
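The division of labor described here can be sketched with a thread-safe buffer between the two threads. Everything below (the names, the queue-based buffer module, the sentinel shutdown) is an illustrative assumption rather than the patented implementation:

```python
# Hypothetical sketch: a first (VM) thread enqueues pending packets into
# a buffer module and continues with other work, while a second thread
# drains the buffer and performs the consistency negotiation, so the VM
# thread never blocks on a global mutex for negotiation.
import queue
import threading

buffer_module = queue.Queue()        # the "buffer module" between threads
negotiated = []                      # results produced by the second thread

def second_thread_loop():
    while True:
        item = buffer_module.get()   # blocks until work arrives
        if item is None:             # shutdown sentinel
            break
        negotiated.append(("negotiated", item))

worker = threading.Thread(target=second_thread_loop)
worker.start()
for packet in ("pkt-1", "pkt-2"):
    buffer_module.put(packet)        # first thread: enqueue and move on
buffer_module.put(None)
worker.join()
```

The queue's internal lock is held only for the enqueue/dequeue instant, which is the property that lets the first thread keep serving the virtual machine while negotiation proceeds concurrently.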
• the master node in the apparatus embodiments corresponds exactly to the master node in the method embodiments, and the corresponding modules perform the corresponding steps; for example, the communication module performs the sending and receiving steps of the method embodiments, and steps other than sending and receiving may be performed by the processing module or the processor.
• the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the present application.
  • the function of the virtual machine may also be implemented by using a container, where the container and the virtual machine may be referred to as a virtual device.
• the steps of a method or algorithm described in connection with the present disclosure may be implemented in hardware, or by a processor executing software instructions.
• the software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor to enable the processor to read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and the storage medium can be located in an ASIC. Additionally, the ASIC can be located in the master node. Of course, the processor and the storage medium can also exist as discrete components in the master node.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
• the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
• the computer-readable storage medium can be any available medium accessible to a computer, or a data storage device, such as a server or data center, that integrates one or more available media.
• the available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a digital versatile disc (DVD)), a semiconductor medium (e.g., a solid state disk (SSD)), or the like.

Abstract

Disclosed are a data synchronization processing method and apparatus, applied to a master node in a computer system, the computer system further comprising a standby node connected to the master node. The method comprises the steps of: acquiring, by means of a first thread, first information to be processed, the first information being a first data packet or first indication information, the first indication information being used to indicate the first data packet, and the first thread being a thread for executing non-thread-safe code; writing, by means of the first thread, the first information to be processed into a buffer module; performing, by means of a second thread, consistency negotiation processing on the first information to be processed, the consistency negotiation processing being used to synchronize the orders in which the master node and the standby node process the first data packet; and processing the first data packet by means of the first thread according to a result of the consistency negotiation processing. By means of the method and apparatus, a master node processes other tasks by means of a primary virtual machine while performing synchronization processing on the primary virtual machine and a standby virtual machine, thereby improving the performance of the master node.
PCT/CN2018/082225 2018-04-08 2018-04-08 Method and apparatus for data synchronization processing WO2019195969A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/082225 WO2019195969A1 (fr) Method and apparatus for data synchronization processing
CN201880004742.8A CN110622478B (zh) Method and apparatus for data synchronization processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/082225 WO2019195969A1 (fr) Method and apparatus for data synchronization processing

Publications (1)

Publication Number Publication Date
WO2019195969A1 true WO2019195969A1 (fr) 2019-10-17

Family

ID=68162760

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/082225 WO2019195969A1 (fr) Method and apparatus for data synchronization processing

Country Status (2)

Country Link
CN (1) CN110622478B (fr)
WO (1) WO2019195969A1 (fr)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111767339B (zh) * 2020-05-11 2023-06-30 北京奇艺世纪科技有限公司 Data synchronization method and apparatus, electronic device, and storage medium
CN112954133B (zh) * 2021-01-20 2023-03-14 浙江大华技术股份有限公司 Method and apparatus for synchronizing node time, electronic apparatus, and storage medium
CN115454657A (zh) * 2022-08-12 2022-12-09 科东(广州)软件科技有限公司 Method and apparatus for synchronization and mutual exclusion between user-mode virtual machine tasks
CN117632799A (zh) * 2023-12-05 2024-03-01 合芯科技有限公司 Data processing method, apparatus, device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120216193A1 (en) * 2011-02-21 2012-08-23 Samsung Electronics Co., Ltd. Apparatus and method for controlling virtual machine schedule time
CN103309858A (zh) * 2012-03-06 2013-09-18 深圳市腾讯计算机系统有限公司 Multi-thread log management method and apparatus
CN103501290A (zh) * 2013-09-18 2014-01-08 万达信息股份有限公司 Method for constructing a highly reliable service system based on dynamically backed-up virtual machines
CN105224391A (zh) * 2015-10-12 2016-01-06 浪潮(北京)电子信息产业有限公司 Online backup method and system for a virtual machine
CN105607962A (zh) * 2015-10-22 2016-05-25 华为技术有限公司 Virtual machine backup method and apparatus
CN107729129A (zh) * 2017-09-18 2018-02-23 惠州Tcl移动通信有限公司 Synchronization-lock-based multi-thread processing method, terminal, and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101609419B (zh) * 2009-06-29 2012-05-30 北京航空航天大学 Data backup method and apparatus for continuous live migration of a virtual machine
CN102279766B (zh) * 2011-08-30 2014-05-07 华为技术有限公司 Method and system for simulating multiple processors in parallel, and scheduler
JP5700009B2 (ja) * 2012-09-18 2015-04-15 横河電機株式会社 Fault-tolerant system
US9740563B2 (en) * 2013-05-24 2017-08-22 International Business Machines Corporation Controlling software processes that are subject to communications restrictions by freezing and thawing a computational process in a virtual machine from writing data
CN104683444B (zh) * 2015-01-26 2017-11-17 电子科技大学 Data migration method for multiple virtual machines in a data center
CN104915151B (zh) * 2015-06-02 2018-12-07 杭州电子科技大学 Active-sharing memory overcommitment method in a multi-virtual-machine system
CN106168885B (zh) * 2016-07-18 2019-09-24 浪潮(北京)电子信息产业有限公司 LVM-based method and system for dynamically expanding logical volumes


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11727039B2 (en) 2017-09-25 2023-08-15 Splunk Inc. Low-latency streaming analytics
US11645286B2 (en) 2018-01-31 2023-05-09 Splunk Inc. Dynamic data processor for streaming and batch queries
US11615084B1 (en) 2018-10-31 2023-03-28 Splunk Inc. Unified data processing across streaming and indexed data sets
US11886440B1 (en) 2019-07-16 2024-01-30 Splunk Inc. Guided creation interface for streaming data processing pipelines
CN111352944B (zh) * 2020-02-10 2023-08-18 北京百度网讯科技有限公司 Data processing method and apparatus, electronic device, and storage medium
CN111352944A (zh) * 2020-02-10 2020-06-30 北京百度网讯科技有限公司 Data processing method and apparatus, electronic device, and storage medium
US11614923B2 (en) 2020-04-30 2023-03-28 Splunk Inc. Dual textual/graphical programming interfaces for streaming data processing pipelines
CN112714185A (zh) * 2020-12-30 2021-04-27 威创集团股份有限公司 Agent access system
US11650995B2 (en) 2021-01-29 2023-05-16 Splunk Inc. User defined data stream for routing data to a data destination based on a data route
US11636116B2 (en) 2021-01-29 2023-04-25 Splunk Inc. User interface for customizing data streams
US11687487B1 (en) * 2021-03-11 2023-06-27 Splunk Inc. Text files updates to an active processing pipeline
US11663219B1 (en) 2021-04-23 2023-05-30 Splunk Inc. Determining a set of parameter values for a processing pipeline
CN115643237B (zh) * 2022-10-13 2023-08-11 北京华建云鼎科技股份公司 Data processing system for conferences
CN115643237A (zh) * 2022-10-13 2023-01-24 北京华建云鼎科技股份公司 Data processing system for conferences

Also Published As

Publication number Publication date
CN110622478B (zh) 2020-11-06
CN110622478A (zh) 2019-12-27

Similar Documents

Publication Publication Date Title
WO2019195969A1 (fr) Method and apparatus for data synchronization processing
JP5258019B2 (ja) Predictive method for managing, logging, or replaying non-deterministic operations within application process execution
Scales et al. The design of a practical system for fault-tolerant virtual machines
US10411953B2 (en) Virtual machine fault tolerance method, apparatus, and system
WO2017008675A1 (fr) Method and device for transmitting data in a virtual environment
US9652247B2 (en) Capturing snapshots of offload applications on many-core coprocessors
US8402318B2 (en) Systems and methods for recording and replaying application execution
US9489230B1 (en) Handling of virtual machine migration while performing clustering operations
US8812907B1 (en) Fault tolerant computing systems using checkpoints
JP5519909B2 (ja) Non-intrusive method for replaying internal events in an application process, and system implementing the method
WO2019095655A1 (fr) Data interaction method and computing device
US20130047157A1 (en) Information processing apparatus and interrupt control method
TWI624757B (zh) 資料處理方法、資料處理系統與電腦程式產品
JP6305976B2 (ja) コンピューティング装置に対するネットワーク駆動のウェイクアップ操作の実行期間中においてパケットを遅延させる方法、装置およびシステム
US11537430B1 (en) Wait optimizer for recording an order of first entry into a wait mode by a virtual central processing unit
TW201003526A (en) Lazy handling of end of interrupt messages in a virtualized environment
JPH06110740A (ja) チャネル使用時間測定方法及び手段
US9940152B2 (en) Methods and systems for integrating a volume shadow copy service (VSS) requester and/or a VSS provider with virtual volumes (VVOLS)
US10540301B2 (en) Virtual host controller for a data processing system
US20160062854A1 (en) Failover system and method
TW201721458A (zh) 伺服器備份方法及其備份系統
US20140068165A1 (en) Splitting a real-time thread between the user and kernel space
Scales et al. The design and evaluation of a practical system for fault-tolerant virtual machines
US20170235600A1 (en) System and method for running application processes
US20220075671A1 (en) High Availability Events in a Layered Architecture

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18914205

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18914205

Country of ref document: EP

Kind code of ref document: A1