WO2019232926A1 - Data consistency check flow control method and device, electronic device, and storage medium - Google Patents

Data consistency check flow control method and device, electronic device, and storage medium

Info

Publication number
WO2019232926A1
WO2019232926A1 (PCT/CN2018/100171)
Authority
WO
WIPO (PCT)
Prior art keywords
statistical period
load
flow control
data block
control threshold
Prior art date
Application number
PCT/CN2018/100171
Other languages
English (en)
French (fr)
Inventor
陈学伟
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2019232926A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0653Monitoring storage devices or systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1446Point-in-time backing up or restoration of persistent data
    • G06F11/1458Management of the backup or restore process
    • G06F11/1464Management of the backup or restore process for networked environments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Definitions

  • the present application relates to the field of computer technology, and in particular, to a data consistency check flow control method, device, electronic device, and storage medium.
  • A distributed storage system stores data dispersed across multiple independent devices, adopts a scalable system structure and multiple redundancy strategies, uses multiple storage servers to share the storage load, and locates stored information through a corresponding positioning algorithm.
  • A distributed storage system not only improves system reliability, availability, and access efficiency; it is also easy to expand and can eliminate single points of failure. Within the scope of the specified redundancy rules, when a disk of a storage node, or an entire storage node, fails, the impact on front-end user applications is minimal.
  • The consistency check between the replica data on each storage node is generally performed on a fixed cycle. If the data consistency check is triggered while the input/output (IO) pressure of user applications is high, the IO generated by the ongoing consistency check may interfere with the IO of the user applications, degrading the user experience of the applications and even causing system failures.
  • a first aspect of the present application provides a data consistency check flow control method, where the method includes:
  • storing the user data as multiple copies when a write request for the user data is received; detecting whether a trigger condition for a data consistency check is met; when the trigger condition is met, obtaining the flow control threshold corresponding to the current statistical period in the check period; and performing a data consistency check on the multiple copies based on that flow control threshold;
  • a second aspect of the present application provides a data consistency check flow control device, where the device includes:
  • a copy storage module configured to store the user data as multiple copies when receiving a write request of the user data
  • a detection module for detecting whether a trigger condition of a data consistency check is satisfied
  • a flow control acquisition module configured to acquire a flow control threshold corresponding to a current statistical period in a verification period when the detection module detects that a trigger condition of a data consistency check is satisfied;
  • a replica verification module is configured to perform data consistency verification on the multiple replicas based on a flow control threshold corresponding to the current statistical period.
  • a third aspect of the present application provides an electronic device including a processor and a memory, where the processor implements the data consistency check flow control method when executing computer-readable instructions stored in the memory.
  • a fourth aspect of the present application provides a non-volatile readable storage medium storing computer-readable instructions that, when executed by a processor, implement the data consistency check flow control method.
  • The data consistency check flow control method, device, electronic device, and storage medium described in this application store user data as multiple copies when a write request for the user data is received. When the trigger condition for a data consistency check is met, the flow control thresholds corresponding to the different statistical periods within the verification period are obtained, and a data consistency check is performed on the multiple copies based on the threshold of each statistical period. This improves the efficiency of the consistency check and guarantees data consistency among the copies while avoiding a significant impact on normal I/O service performance, giving a good flow control effect.
  • FIG. 1 is a flowchart of a data consistency check flow control method provided in Embodiment 1 of the present application.
  • FIG. 2 is a flowchart of a method for determining a flow control threshold corresponding to a current statistical period according to an IO load of a user application in a previous statistical period according to a second embodiment of the present application.
  • FIG. 3 is a functional module diagram of a data consistency check flow control device provided in Embodiment 3 of the present application.
  • FIG. 4 is a schematic diagram of an electronic device according to a fourth embodiment of the present application.
  • the data consistency check flow control method in the embodiment of the present application is applied to one or more electronic devices.
  • the data consistency check flow control method can also be applied to a hardware environment composed of an electronic device and a server connected to the electronic device through a network.
  • the network includes, but is not limited to: a wide area network, a metropolitan area network, or a local area network.
  • the data consistency check flow control method in the embodiment of the present application may be executed by a server or an electronic device; it may also be executed jointly by the server and the electronic device.
  • the data consistency check flow control function provided by the method of the present application may be directly integrated on the electronic device, or a client for implementing the method of the present application may be installed.
  • the method provided in this application can also run on devices such as servers in the form of a Software Development Kit (SDK) that exposes an interface for the data consistency check flow control function; electronic devices or other devices can then implement the method described in this application through the provided interface.
  • FIG. 1 is a flowchart of a data consistency check flow control method provided in Embodiment 1 of the present application. According to different requirements, the execution order in this flowchart can be changed, and some steps can be omitted.
  • To provide data reliability, distributed storage systems are generally implemented by storing multiple copies of the data. For example, when a user stores a txt file, the underlying distributed storage system keeps three copies of the file on different hard disks in different fault domains. Even if one hard disk is damaged, the txt file is not lost; even if two hard disks are damaged at the same time, the data still survives. After a hard disk is damaged, the distributed storage system generally detects the loss in time and replenishes the missing copies.
  • The distributed storage system needs to set trigger conditions for the data consistency check. When the trigger condition is met, an instruction to perform the data consistency check is considered to have been triggered, and the data consistency check is performed on each copy; when the trigger condition is not met, no such instruction is considered to have been triggered, and the check need not be performed on the copies.
  • the trigger condition of the data consistency check includes one or a combination of the following:
  • 1) a preset time point is reached, for example, 0:00 every day;
  • 2) a read request for the user data is received;
  • 3) a preset interval elapses, for example, every 10 hours.
  • the data consistency check between the copies can be performed periodically or regularly or when the user reads the data to ensure the correctness of the data between the copies.
  • Preferably, the data consistency check between the replicas is performed periodically or at scheduled times; when the distributed system as a whole is relatively large, this helps to centrally control the check and synchronization strategy.
  • When it is detected that the trigger condition of the data consistency check is met, step S13 is executed; when it is detected that the trigger condition is not met, step S12 may be continued or the process may end directly, which this application does not limit.
  • The entire process from starting the data consistency check on the multiple copies to completing it is called a verification period. A verification period can be divided into multiple statistical periods, and a statistical period can be a preset time span; for example, a statistical period may be set to 1 second.
  • Flow control here refers to traffic control. Flow control can be implemented in two ways: one is to implement flow control based on source address, destination address, source port, destination port, and protocol type through the QoS module of routers and switches; the other is to implement application-layer flow control with dedicated flow control equipment.
  • the acquiring the flow control threshold corresponding to the current statistical period within the verification period may specifically include:
  • 1) Determine whether the current statistical period is the first statistical period; this can be done, for example, by checking whether the current time falls in the first second of the verification period.
  • 2) When the current statistical period is the first statistical period, a preset flow control threshold is used as the threshold corresponding to the current statistical period. The flow control threshold corresponding to the first statistical period in the verification period is a preset value that can be configured in advance by the system administrator based on experience.
  • 3) When the current statistical period is not the first statistical period, the IO load of user applications in the previous statistical period is obtained, and the flow control threshold corresponding to the current statistical period is determined according to that IO load. Each remaining statistical period in the verification period (other than the first) corresponds to its own flow control threshold, which is adjusted dynamically: the threshold for the current statistical period can be calculated from the IO load of the previous statistical period, and the threshold for the next statistical period from the IO load of the current statistical period. Specifically, the threshold for the second statistical period is calculated from the IO load in the first statistical period, the threshold for the third statistical period from the IO load in the second statistical period, and so on.
  • Because excessive check traffic would affect the normal functioning of the distributed system, the data consistency check on the multiple copies is performed according to the flow control threshold of the current statistical period, so that the check does not run too fast and does not noticeably impact normal I/O service performance. When the flow control threshold of the current statistical period is larger, the check on the multiple copies is driven at that larger threshold, which speeds up the consistency check on the copies and relieves the pressure of checking the data.
  • FIG. 2 is a flowchart of a method for determining a flow control threshold corresponding to a current statistical period according to an IO load of a user application in a previous statistical period according to a second embodiment of the present application.
  • S21: Obtain the data block size of each IO of the user application in the previous statistical period, and calculate the average IO data block size for the previous statistical period.
  • the average data block size of the IO in the last statistical period may be calculated by using an arithmetic average algorithm, a geometric mean algorithm, or a root mean square algorithm.
  • For example, suppose the user application issued ten IOs in the previous statistical period, with data block sizes of 2M, 1M, 3M, 0.5M, 10M, 4M, 0.1M, 1.2M, 5M, and 8M. Using the arithmetic mean, the average IO data block size for the previous statistical period is (2M + 1M + 3M + 0.5M + 10M + 4M + 0.1M + 1.2M + 5M + 8M) / 10 = 3.48M.
  • The transmission delay refers to the time a node needs to push a data block from the node onto the transmission medium when sending data, that is, the total time from when a sending station starts sending a data frame until the frame has been completely sent, or the total time from when a receiving station starts receiving a data frame until the frame has been completely received.
  • the transmission delay of the data block may be obtained from a load measurement tool or a performance monitoring tool installed in each storage node.
  • The average IO data block delay in the previous statistical period may likewise be calculated with an arithmetic mean, geometric mean, or root-mean-square algorithm. Suppose the transmission delays of the ten IOs in the previous statistical period were 1s, 0.8s, 1.5s, 0.4s, 5s, 2s, 0.02s, 0.6s, 3s, and 4.5s; using the arithmetic mean, the average IO data block delay for that period is (1s + 0.8s + 1.5s + 0.4s + 5s + 2s + 0.02s + 0.6s + 3s + 4.5s) / 10 ≈ 1.88s.
  • It should be understood that the same averaging algorithm is used for both quantities: if the average IO data block size for the previous statistical period is computed with the arithmetic mean, the average IO data block delay is also computed with the arithmetic mean; if the geometric mean is used for the block size, it is also used for the delay; and if the root-mean-square average is used for the block size, it is also used for the delay.
  • The reference value of the IO data block size and the corresponding reference value of the data block delay may be preset by the administrator of the storage system based on experience. For example, if experience shows that a 4K data block has the smallest transmission delay, ideally around 50ms, the reference IO data block size can be set to 4K and the corresponding reference data block delay to 50ms.
  • Assume the average IO data block size in the previous statistical period is X, the average data block delay is Y, the reference data block size is M, and the corresponding reference data block delay is N; the IO load intensity of the previous statistical period is then calculated from X, Y, M, and N (the formula is given as an image in the original filing).
  • the IO load category includes: a high load category, a normal load category, and a low load category.
  • the load classification model includes, but is not limited to, a Support Vector Machine (SVM) model.
  • The average IO data block size, the average IO data block delay, and the IO load intensity of the previous statistical period are used as inputs to the load classification model, which, after computation, outputs the IO load category of the previous statistical period.
  • The training process of the load classification model includes:
  • 1) Obtain IO load data of positive samples and of negative samples, and label the positive-sample data with load categories so that each positive sample carries an IO load category label; for example, 500 pieces of IO load data may be selected for each of the high, normal, and low load categories and labeled "1", "2", and "3" respectively.
  • 2) Randomly split the positive- and negative-sample IO load data into a training set with a first preset ratio (for example, 70%) and a validation set with a second preset ratio (for example, 30%); train the load classification model on the training set and verify its accuracy on the validation set. Training samples of different load categories are first distributed into different folders (high, normal, and low load into a first, second, and third folder respectively); the first preset ratio of samples is drawn from each folder as the overall training samples, and the remaining second preset ratio serves as the overall test samples for verifying the trained model.
  • 3) If the accuracy is greater than or equal to a preset accuracy threshold, training ends and the trained load classification model is used as the classifier to identify the IO load category of the current statistical period; otherwise, the numbers of positive and negative samples are increased and the model is retrained until the accuracy reaches the threshold.
  • calculating the flow control threshold corresponding to the current statistical period according to the IO load category in the previous statistical period may include:
  • 1) When the IO load category of the previous statistical period is the high load category, the flow control threshold corresponding to the previous statistical period is lowered by a first preset amplitude to obtain the threshold for the current statistical period. With a lower threshold, the data consistency check on the multiple copies runs more slowly in the current statistical period, which preserves efficient access for user applications.
  • the first preset amplitude may be 1/2 of a flow control threshold corresponding to a previous statistical period. That is, the flow control threshold corresponding to the current statistical period is 1/2 of the flow control threshold corresponding to the previous statistical period, and the flow control threshold corresponding to the next statistical period is 1/2 of the flow control threshold corresponding to the current statistical period.
  • 2) When the IO load category of the previous statistical period is the low load category, the flow control threshold corresponding to the previous statistical period is raised by a second preset amplitude to obtain the threshold for the current statistical period. With a higher threshold, the data consistency check on the multiple copies runs faster in the current statistical period, which speeds up the check and returns the distributed storage system to a healthy state as soon as possible while still guaranteeing the access quality of user applications.
  • the second preset amplitude may be 1.5 times a flow control threshold corresponding to a previous statistical period. That is, the flow control threshold corresponding to the current statistical period is 1.5 times the flow control threshold corresponding to the previous statistical period, and the flow control threshold corresponding to the next statistical period is 1.5 times the flow control threshold corresponding to the current statistical period.
  • 3) When the IO load category of the previous statistical period is the normal load category, the flow control threshold corresponding to the previous statistical period is used as the flow control threshold corresponding to the current statistical period.
  • In summary, the data consistency check flow control method described in this application stores user data as multiple copies when a write request for the user data is received; when the trigger condition for a data consistency check is met, it obtains the flow control thresholds corresponding to the different statistical periods within the verification period and performs the data consistency check on the multiple copies based on the threshold of each statistical period. This improves the efficiency of the consistency check and guarantees data consistency among the copies while avoiding a significant impact on normal input/output service performance, giving a good flow control effect.
  • Furthermore, the flow control threshold corresponding to the current statistical period is adjusted dynamically and automatically according to the IO load of user applications in the previous statistical period, without manual tuning by an administrator; this reduces the administrator's workload and avoids the imprecise adjustments that an administrator's subjective judgment can cause.
  • FIG. 3 is a functional module diagram of a preferred embodiment of a data consistency check flow control device of this application.
  • the data consistency check flow control device 30 runs in an electronic device.
  • the data consistency check flow control device 30 may include a plurality of function modules composed of program code segments.
  • The program code of each program segment in the data consistency check flow control device 30 may be stored in a memory and executed by at least one processor to perform the data consistency check flow control method (see FIGS. 1-2 and the related description for details).
  • the data consistency check flow control device 30 may be divided into a plurality of functional modules according to functions performed by the data consistency check flow control device 30.
  • the functional modules may include: a copy storage module 301, a detection module 302, a flow control acquisition module 303, a copy verification module 304, a calculation module 305, a determination module 306, and a training module 307.
  • the module referred to in the present application refers to a series of computer-readable instruction segments capable of being executed by at least one processor and capable of performing fixed functions, which are stored in a memory. In some embodiments, functions of each module will be described in detail in subsequent embodiments.
  • the copy storage module 301 is configured to store the user data as multiple copies when a user data write request is received.
  • To provide data reliability, distributed storage systems are generally implemented by storing multiple copies of the data. For example, when a user stores a txt file, the underlying distributed storage system keeps three copies of the file on different hard disks in different fault domains. Even if one hard disk is damaged, the txt file is not lost; even if two hard disks are damaged at the same time, the data still survives. After a hard disk is damaged, the distributed storage system generally detects the loss in time and replenishes the missing copies.
  • the detection module 302 is configured to detect whether a trigger condition of a data consistency check is satisfied.
  • The distributed storage system needs to set trigger conditions for the data consistency check. When the trigger condition is met, an instruction to perform the data consistency check is considered to have been triggered, and the data consistency check is performed on each copy; when the trigger condition is not met, no such instruction is considered to have been triggered, and the check need not be performed on the copies.
  • the trigger condition of the data consistency check includes one or a combination of the following:
  • 1) a preset time point is reached, for example, 0:00 every day;
  • 2) a read request for the user data is received;
  • 3) a preset interval elapses, for example, every 10 hours.
  • the data consistency check between the copies can be performed periodically or regularly or when the user reads the data to ensure the correctness of the data between the copies.
  • Preferably, the data consistency check between the replicas is performed periodically or at scheduled times; when the distributed system as a whole is relatively large, this helps to centrally control the check and synchronization strategy.
  • a flow control acquisition module 303 is configured to acquire a flow control threshold corresponding to a current statistical period in a verification period when the detection module 302 detects that a trigger condition of a data consistency check is satisfied.
  • The entire process from starting the data consistency check on the multiple copies to completing it is called a verification period. A verification period can be divided into multiple statistical periods, and a statistical period can be a preset time span; for example, a statistical period may be set to 1 second.
  • Flow control here refers to traffic control. Flow control can be implemented in two ways: one is to implement flow control based on source address, destination address, source port, destination port, and protocol type through the QoS module of routers and switches; the other is to implement application-layer flow control with dedicated flow control equipment.
  • the flow control acquisition module 303 acquiring the flow control threshold corresponding to the current statistical period in the verification period may specifically include:
  • 1) Determine whether the current statistical period is the first statistical period; this can be done, for example, by checking whether the current time falls in the first second of the verification period.
  • 2) When the current statistical period is the first statistical period, a preset flow control threshold is used as the threshold corresponding to the current statistical period. The flow control threshold corresponding to the first statistical period in the verification period is a preset value that can be configured in advance by the system administrator based on experience.
  • 3) When the current statistical period is not the first statistical period, the IO load of user applications in the previous statistical period is obtained, and the flow control threshold corresponding to the current statistical period is determined according to that IO load. Each remaining statistical period in the verification period (other than the first) corresponds to its own flow control threshold, which is adjusted dynamically: the threshold for the current statistical period can be calculated from the IO load of the previous statistical period, and the threshold for the next statistical period from the IO load of the current statistical period. Specifically, the threshold for the second statistical period is calculated from the IO load in the first statistical period, the threshold for the third statistical period from the IO load in the second statistical period, and so on.
  • the replica verification module 304 is configured to perform data consistency verification on the multiple replicas based on a flow control threshold corresponding to the current statistical period.
  • Because excessive check traffic would affect the normal functioning of the distributed system, the data consistency check on the multiple copies is performed according to the flow control threshold of the current statistical period, so that the check does not run too fast and does not noticeably impact normal I/O service performance. When the flow control threshold of the current statistical period is larger, the check on the multiple copies is driven at that larger threshold, which speeds up the consistency check on the copies and relieves the pressure of checking the data.
  • the calculation module 305 is configured to obtain a data block size of each IO applied by the user in the previous statistical period, and calculate an average data block size of the IO in the previous statistical period.
  • the average data block size of the IO in the last statistical period may be calculated by using an arithmetic average algorithm, a geometric mean algorithm, or a root mean square algorithm.
  • For example, suppose the user application issued ten IOs in the previous statistical period, with data block sizes of 2M, 1M, 3M, 0.5M, 10M, 4M, 0.1M, 1.2M, 5M, and 8M. Using the arithmetic mean, the average IO data block size for the previous statistical period is (2M + 1M + 3M + 0.5M + 10M + 4M + 0.1M + 1.2M + 5M + 8M) / 10 = 3.48M.
  • the calculation module 305 is configured to obtain a transmission delay of each data block in the last statistical period, and calculate an average data block delay of the IO in the last statistical period.
  • The transmission delay refers to the time a node needs to push a data block from the node onto the transmission medium when sending data, that is, the total time from when a sending station starts sending a data frame until the frame has been completely sent, or the total time from when a receiving station starts receiving a data frame until the frame has been completely received.
  • the transmission delay of the data block may be obtained from a load measurement tool or a performance monitoring tool installed in each storage node.
  • The average IO data block delay in the previous statistical period may likewise be calculated with an arithmetic mean, geometric mean, or root-mean-square algorithm. Suppose the transmission delays of the ten IOs in the previous statistical period were 1s, 0.8s, 1.5s, 0.4s, 5s, 2s, 0.02s, 0.6s, 3s, and 4.5s; using the arithmetic mean, the average IO data block delay for that period is (1s + 0.8s + 1.5s + 0.4s + 5s + 2s + 0.02s + 0.6s + 3s + 4.5s) / 10 ≈ 1.88s.
  • It should be understood that the same averaging algorithm is used for both quantities: if the average IO data block size for the previous statistical period is computed with the arithmetic mean, the average IO data block delay is also computed with the arithmetic mean; if the geometric mean is used for the block size, it is also used for the delay; and if the root-mean-square average is used for the block size, it is also used for the delay.
  • the flow control obtaining module 303 is further configured to obtain a preset reference value of the data block size of the IO and a corresponding reference value of the data block delay.
  • The reference value of the IO data block size and the corresponding reference value of the data block delay may be preset by the administrator of the storage system based on experience. For example, if experience shows that a 4K data block has the smallest transmission delay, ideally around 50ms, the reference IO data block size can be set to 4K and the corresponding reference data block delay to 50ms.
  • The calculation module 305 is further configured to calculate the IO load intensity of the previous statistical period based on the average IO data block size and average data block delay of that period, the reference data block size, and the corresponding reference data block delay.
  • Assume the average IO data block size in the previous statistical period is X, the average data block delay is Y, the reference data block size is M, and the corresponding reference data block delay is N; the IO load intensity of the previous statistical period is then calculated from X, Y, M, and N (the formula is given as an image in the original filing).
  • A determining module 306 is configured to determine the IO load category of the previous statistical period by using a pre-trained load classification model according to the IO load intensity of that period.
  • the IO load category includes: a high load category, a normal load category, and a low load category.
  • the load classification model includes, but is not limited to, a Support Vector Machine (SVM) model.
  • The average IO data block size, the average IO data block delay, and the IO load intensity of the previous statistical period are used as inputs to the load classification model, which, after computation, outputs the IO load category of the previous statistical period.
  • the training module 307 is configured to train a load classification model.
  • the process in which the training module 307 trains the load classification model includes:
  • 1) Obtain IO load data of positive samples and of negative samples, and label the positive-sample data with load categories so that each positive sample carries an IO load category label; for example, 500 pieces of IO load data may be selected for each of the high, normal, and low load categories and labeled "1", "2", and "3" respectively.
  • 2) Randomly split the positive- and negative-sample IO load data into a training set with a first preset ratio (for example, 70%) and a validation set with a second preset ratio (for example, 30%); train the load classification model on the training set and verify its accuracy on the validation set. Training samples of different load categories are first distributed into different folders (high, normal, and low load into a first, second, and third folder respectively); the first preset ratio of samples is drawn from each folder as the overall training samples, and the remaining second preset ratio serves as the overall test samples for verifying the trained model.
  • 3) If the accuracy is greater than or equal to a preset accuracy threshold, training ends and the trained load classification model is used as the classifier to identify the IO load category of the current statistical period; otherwise, the numbers of positive and negative samples are increased and the model is retrained until the accuracy reaches the threshold.
  • the flow control acquisition module 303 is further configured to calculate a flow control threshold corresponding to the current statistical period according to the IO load category in the previous statistical period.
  • the flow control obtaining module 303 calculating the flow control threshold corresponding to the current statistical period according to the IO load category in the previous statistical period may include:
  • 1) When the IO load category of the previous statistical period is the high load category, the flow control threshold corresponding to the previous statistical period is lowered by a first preset amplitude to obtain the threshold for the current statistical period. With a lower threshold, the data consistency check on the multiple copies runs more slowly in the current statistical period, which preserves efficient access for user applications.
  • the first preset amplitude may be 1/2 of a flow control threshold corresponding to a previous statistical period. That is, the flow control threshold corresponding to the current statistical period is 1/2 of the flow control threshold corresponding to the previous statistical period, and the flow control threshold corresponding to the next statistical period is 1/2 of the flow control threshold corresponding to the current statistical period.
  • 2) When the IO load category of the previous statistical period is the low load category, the flow control threshold corresponding to the previous statistical period is raised by a second preset amplitude to obtain the threshold for the current statistical period. With a higher threshold, the data consistency check on the multiple copies runs faster in the current statistical period, which speeds up the check and returns the distributed storage system to a healthy state as soon as possible while still guaranteeing the access quality of user applications.
  • the second preset amplitude may be 1.5 times a flow control threshold corresponding to a previous statistical period. That is, the flow control threshold corresponding to the current statistical period is 1.5 times the flow control threshold corresponding to the previous statistical period, and the flow control threshold corresponding to the next statistical period is 1.5 times the flow control threshold corresponding to the current statistical period.
  • 3) When the IO load category of the previous statistical period is the normal load category, the flow control threshold corresponding to the previous statistical period is used as the flow control threshold corresponding to the current statistical period.
  • In summary, the data consistency check flow control device described in this application stores user data as multiple copies when a write request for the user data is received; when the trigger condition for a data consistency check is met, it obtains the flow control thresholds corresponding to the different statistical periods within the verification period and performs the data consistency check on the multiple copies based on the threshold of each statistical period. This improves the efficiency of the consistency check and guarantees data consistency among the copies while avoiding a significant impact on normal input/output service performance, giving a good flow control effect.
  • Furthermore, the flow control threshold corresponding to the current statistical period is adjusted dynamically and automatically according to the IO load of user applications in the previous statistical period, without manual tuning by an administrator; this reduces the administrator's workload and avoids the imprecise adjustments that an administrator's subjective judgment can cause.
  • the above integrated unit implemented in the form of a software functional module may be stored in a non-volatile readable storage medium.
  • The above software functional module is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a dual-screen device, a network device, or the like) or a processor to execute parts of the methods described in the embodiments of this application.
  • FIG. 4 is a schematic diagram of an electronic device according to a fourth embodiment of the present application.
  • the electronic device 4 includes: a memory 41, at least one processor 42, computer-readable instructions 43 stored in the memory 41 and executable on the at least one processor 42, and at least one communication bus 44.
  • The computer-readable instructions 43 may be divided into one or more modules/units, which are stored in the memory 41 and executed by the at least one processor 42 to complete the steps of the above method embodiments of the present application. The one or more modules/units may be a series of computer-readable instruction segments capable of performing specific functions; the instruction segments describe the execution process of the computer-readable instructions 43 in the electronic device 4.
  • the electronic device 4 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • FIG. 4 is only an example of the electronic device 4 and does not constitute a limitation on it; the electronic device 4 may include more or fewer components than shown, combine certain components, or use different components. For example, the electronic device 4 may further include input/output devices, network access devices, a bus, and the like.
  • The at least one processor 42 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 42 may be a microprocessor or any conventional processor. The processor 42 is the control center of the electronic device 4 and connects the various parts of the entire electronic device 4 through various interfaces and lines.
  • The memory 41 may be configured to store the computer-readable instructions 43 and/or the modules/units; the processor 42 implements the various functions of the electronic device 4 by running or executing the computer-readable instructions and/or modules/units stored in the memory 41 and by calling the data stored in the memory 41.
  • The memory 41 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required for at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created according to the use of the electronic device 4 (such as audio data or a phone book).
  • In addition, the memory 41 may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, internal memory, plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card, flash memory card (Flash Card), at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
  • If the integrated modules/units of the electronic device 4 are implemented in the form of software functional units and sold or used as independent products, they can be stored in a non-volatile readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of this application can also be completed by computer-readable instructions instructing the relevant hardware; the computer-readable instructions can be stored in a non-volatile readable storage medium, and when executed by a processor, they can implement the steps of the foregoing method embodiments.
  • the computer-readable instructions include computer-readable instruction codes, and the computer-readable instruction codes may be in a source code form, an object code form, an executable file, or some intermediate form.
  • The computer-readable medium may include: any entity or device capable of carrying the computer-readable instruction code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, a software distribution medium, and so on.
  • each functional unit in each embodiment of the present application may be integrated in the same processing unit, or each unit may exist separately physically, or two or more units may be integrated in the same unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Quality & Reliability (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A data consistency check flow control method and device, an electronic device, and a storage medium. The method includes: when a write request for user data is received, storing the user data as multiple copies (S11); detecting whether a trigger condition for a data consistency check is met (S12); when it is detected that the trigger condition is met, obtaining a flow control threshold corresponding to the current statistical period within the check period (S13); and performing a data consistency check on the multiple copies based on the flow control threshold corresponding to the current statistical period (S14). The method improves the efficiency of data consistency checking in large-scale distributed storage systems while avoiding a significant impact on normal input/output service performance, giving a good flow control effect.

Description

Data consistency check flow control method and device, electronic device, and storage medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on June 4, 2018, with application number 201810566098.5 and the title "Data consistency check flow control method and device, electronic device, and storage medium", the entire content of which is incorporated herein by reference.
Technical Field
This application relates to the field of computer technology, and in particular to a data consistency check flow control method and device, an electronic device, and a storage medium.
Background
A distributed storage system stores data dispersed across multiple independent devices, adopts a scalable system structure and multiple redundancy strategies, uses multiple storage servers to share the storage load, and locates stored information through a corresponding positioning algorithm. A distributed storage system not only improves system reliability, availability, and access efficiency, but is also easy to expand and can eliminate single points of failure: within the scope of the specified redundancy rules, when a disk of a storage node or an entire storage node fails, the impact on front-end user applications is small.
Nevertheless, while guaranteeing the corresponding performance metrics, it is also quite important for a distributed storage system to guarantee the consistency of replica data across the storage nodes.
Current distributed-storage data consistency checking generally performs the consistency check between the replicas on each storage node on a fixed cycle. If the data consistency check happens to be triggered when the input/output (IO) pressure of user applications is high, the IO generated by the ongoing consistency check may interfere with the IO of the user applications, degrading the user experience of the applications and even causing system failures.
How to properly balance the task allocation between distributed-storage data consistency checking and normal user input/output services, improving the efficiency of the consistency check while avoiding a significant impact on the performance of normal data input/output services so that the business system can continuously and stably achieve high input/output operations per second (IOPS) and throughput, is crucial to improving the performance of a distributed storage system.
Summary
In view of the above, it is necessary to provide a data consistency check flow control method and device, an electronic device, and a storage medium that can improve the efficiency of data consistency checking in a large-scale distributed storage system while avoiding a significant impact on normal input/output service performance, with a good flow control effect.
A first aspect of this application provides a data consistency check flow control method, the method including:
when a write request for user data is received, storing the user data as multiple copies;
detecting whether a trigger condition for a data consistency check is met;
when it is detected that the trigger condition for the data consistency check is met, obtaining the flow control threshold corresponding to the current statistical period within the check period;
performing a data consistency check on the multiple copies based on the flow control threshold corresponding to the current statistical period.
A second aspect of this application provides a data consistency check flow control device, the device including:
a copy storage module, configured to store user data as multiple copies when a write request for the user data is received;
a detection module, configured to detect whether a trigger condition for a data consistency check is met;
a flow control acquisition module, configured to obtain the flow control threshold corresponding to the current statistical period within the check period when the detection module detects that the trigger condition for the data consistency check is met;
a copy check module, configured to perform a data consistency check on the multiple copies based on the flow control threshold corresponding to the current statistical period.
A third aspect of this application provides an electronic device including a processor and a memory, where the processor implements the data consistency check flow control method when executing computer-readable instructions stored in the memory.
A fourth aspect of this application provides a non-volatile readable storage medium storing computer-readable instructions that, when executed by a processor, implement the data consistency check flow control method.
With the data consistency check flow control method and device, electronic device, and storage medium described in this application, user data is stored as multiple copies when a write request for the user data is received; when the trigger condition for a data consistency check is met, the flow control thresholds corresponding to the different statistical periods within the check period are obtained, and a data consistency check is performed on the multiple copies based on the threshold of each statistical period. This improves the efficiency of the consistency check and guarantees data consistency among the copies while avoiding a significant impact on normal input/output service performance, giving a good flow control effect.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from the provided drawings without creative work.
FIG. 1 is a flowchart of the data consistency check flow control method provided in Embodiment 1 of this application.
FIG. 2 is a flowchart of the method, provided in Embodiment 2 of this application, for determining the flow control threshold corresponding to the current statistical period according to the IO load of user applications in the previous statistical period.
FIG. 3 is a functional module diagram of the data consistency check flow control device provided in Embodiment 3 of this application.
FIG. 4 is a schematic diagram of the electronic device provided in Embodiment 4 of this application.
The following detailed description further explains this application with reference to the above drawings.
Detailed Description
To make the above objectives, features, and advantages of this application clearer and easier to understand, this application is described in detail below with reference to the drawings and specific embodiments. It should be noted that, where there is no conflict, the embodiments of this application and the features in the embodiments can be combined with one another.
Many specific details are set forth in the following description to facilitate a full understanding of this application. The described embodiments are only some, not all, of the embodiments of this application. Based on the embodiments of this application, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of this application.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field to which this application belongs. The terms used in the specification of this application are only for the purpose of describing specific embodiments and are not intended to limit this application.
The data consistency check flow control method in the embodiments of this application is applied to one or more electronic devices. The method can also be applied to a hardware environment consisting of an electronic device and a server connected to the electronic device through a network. Networks include, but are not limited to, a wide area network, a metropolitan area network, or a local area network. The data consistency check flow control method in the embodiments of this application may be executed by a server or by an electronic device, or may be executed jointly by the server and the electronic device.
For an electronic device that needs to run the data consistency check flow control method, the data consistency check flow control function provided by the method of this application can be integrated directly on the electronic device, or a client for implementing the method of this application can be installed on it. Alternatively, the method provided in this application can run on devices such as servers in the form of a Software Development Kit (SDK) that exposes an interface for the data consistency check flow control function; electronic devices or other devices can then implement the method described in this application through the provided interface.
Embodiment 1
FIG. 1 is a flowchart of the data consistency check flow control method provided in Embodiment 1 of this application. According to different requirements, the execution order in the flowchart can be changed, and some steps can be omitted.
S11: When a write request for user data is received, store the user data as multiple copies.
To provide data reliability, distributed storage systems are generally implemented by storing multiple copies of the data. For example, when a user stores a txt file, the underlying distributed storage system keeps three copies of the file on different hard disks in different fault domains. Even if one hard disk is damaged, the txt file is not lost; even if two hard disks are damaged at the same time, the data still survives. After a hard disk is damaged, the distributed storage system generally detects the loss in time and replenishes the missing copies.
S12: Detect whether a trigger condition for a data consistency check is met.
Multiple copies bring data reliability but also raise consistency issues. The distributed storage system therefore needs to set trigger conditions for the data consistency check. When the trigger condition is met, an instruction to perform the data consistency check is considered to have been triggered, and the data consistency check is performed on each copy; when the trigger condition is not met, no such instruction is considered to have been triggered, and the check need not be performed on the copies.
In a preferred embodiment of this application, the trigger condition for the data consistency check includes one or a combination of the following:
1) a preset time point is reached, for example, 0:00 every day;
2) a read request for the user data is received;
3) a preset interval elapses, for example, every 10 hours.
Performing the data consistency check between the copies periodically, at scheduled times, or when a user reads the data ensures the correctness of the data across the copies.
Preferably, the data consistency check between the copies is performed periodically or at scheduled times; when the distributed system as a whole is relatively large, this helps to centrally control the check and synchronization strategy.
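As a rough illustration of how the trigger conditions listed above might be evaluated, the Python sketch below checks the three conditions described in this embodiment (preset time point, read request received, preset interval elapsed). The class, method, and field names are illustrative assumptions and not part of the patent.

```python
from datetime import datetime, timedelta

# Illustrative sketch only; names and structure are assumptions, not the patent's code.
class ConsistencyCheckTrigger:
    def __init__(self, preset_hour=0, interval=timedelta(hours=10)):
        self.preset_hour = preset_hour        # condition 1): e.g. 0:00 every day
        self.interval = interval              # condition 3): e.g. every 10 hours
        self.last_check = datetime.min
        self.read_request_seen = False

    def on_read_request(self):
        # Condition 2): a read request for the user data was received.
        self.read_request_seen = True

    def should_check(self, now=None):
        now = now or datetime.now()
        at_preset_time = now.hour == self.preset_hour and now.minute == 0   # condition 1)
        interval_elapsed = now - self.last_check >= self.interval           # condition 3)
        triggered = at_preset_time or self.read_request_seen or interval_elapsed
        if triggered:
            self.last_check = now
            self.read_request_seen = False
        return triggered
```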
When it is detected that the trigger condition for the data consistency check is met, step S13 is executed; when it is detected that the trigger condition is not met, step S12 may be continued, or the process may end directly, which this application does not limit.
S13: Obtain the flow control threshold corresponding to the current statistical period within the check period.
The entire process from starting the data consistency check on the multiple copies to completing it is called a check period. A check period can be divided into multiple statistical periods, and a statistical period can be a preset time span; for example, a statistical period may be set to 1 second.
Flow control here refers to traffic control. Flow control can be implemented in two ways: one is to implement flow control based on source address, destination address, source port, destination port, and protocol type through the QoS module of routers and switches; the other is to implement application-layer flow control with dedicated flow control equipment.
In this preferred embodiment, obtaining the flow control threshold corresponding to the current statistical period within the check period may specifically include:
1) Determine whether the current statistical period is the first statistical period.
Whether the current statistical period is the first statistical period can be determined by checking whether the current time falls in the first second of the check period.
2) When the current statistical period is determined to be the first statistical period, determine a preset flow control threshold as the threshold corresponding to the current statistical period.
The flow control threshold corresponding to the first statistical period within the check period of this application is a preset value that can be configured in advance by the system administrator based on experience. That is, a preset flow control threshold is used as the threshold of the first statistical period within the check period.
3) When the current statistical period is determined not to be the first statistical period, obtain the IO load of user applications in the previous statistical period, and determine the flow control threshold corresponding to the current statistical period according to that IO load.
Each remaining statistical period within the check period (other than the first) may correspond to its own flow control threshold. The threshold of each remaining statistical period is adjusted dynamically: the threshold corresponding to the current statistical period can be calculated from the IO load of the previous statistical period, and the threshold corresponding to the next statistical period can be calculated from the IO load of the current statistical period. Specifically, the threshold of the second statistical period is calculated from the IO load in the first statistical period, the threshold of the third statistical period from the IO load in the second statistical period, and so on.
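The per-period threshold schedule described above can be pictured with the following Python sketch: the first statistical period uses the administrator's preset threshold, and every later period derives its threshold from the IO load observed in the period just finished. The helper compute_threshold_from_load stands in for the adjustment logic of Embodiment 2 and is an assumption, not the patent's implementation.

```python
def threshold_for_period(period_index, preset_threshold, previous_load, previous_threshold,
                         compute_threshold_from_load):
    """Return the flow control threshold for one statistical period (illustrative sketch).

    period_index -- 0 for the first statistical period of the check period
    preset_threshold -- administrator-configured threshold for the first period
    previous_load -- IO load observed in the previous statistical period
    compute_threshold_from_load -- callable standing in for the Embodiment 2 adjustment rules
    """
    if period_index == 0:
        # 2) First statistical period: use the preset threshold.
        return preset_threshold
    # 3) Later periods: derive the threshold from the previous period's IO load.
    return compute_threshold_from_load(previous_load, previous_threshold)
```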
For the specific process of determining the flow control threshold corresponding to the current statistical period according to the IO load of user applications in the previous statistical period, see FIG. 2 and its corresponding description.
S14: Perform a data consistency check on the multiple copies based on the flow control threshold corresponding to the current statistical period.
Because excessive check traffic would affect the normal functioning of the distributed system, the data consistency check on the multiple copies is performed according to the flow control threshold of the current statistical period, so that the check does not run too fast and does not noticeably impact normal input/output service performance. When the flow control threshold of the current statistical period is larger, the check on the multiple copies is driven at that larger threshold, which speeds up the consistency check on the copies and relieves the pressure of checking the data.
Embodiment 2
FIG. 2 is a flowchart of the method, provided in Embodiment 2 of this application, for determining the flow control threshold corresponding to the current statistical period according to the IO load of user applications in the previous statistical period.
S21: Obtain the data block size of each IO of the user application in the previous statistical period, and calculate the average IO data block size for the previous statistical period.
The average IO data block size for the previous statistical period may be calculated with an arithmetic mean, geometric mean, or root-mean-square algorithm.
For example, suppose it is detected that the user application issued ten IOs in the previous statistical period, with data block sizes of 2M, 1M, 3M, 0.5M, 10M, 4M, 0.1M, 1.2M, 5M, and 8M. Using the arithmetic mean, the average IO data block size for the previous statistical period is (2M + 1M + 3M + 0.5M + 10M + 4M + 0.1M + 1.2M + 5M + 8M) / 10 = 3.48M.
S22: Obtain the transmission delay of each data block in the previous statistical period, and calculate the average IO data block delay for the previous statistical period.
The transmission delay (delay for short) refers to the time a node needs to push a data block from the node onto the transmission medium when sending data, that is, the total time from when a sending station starts sending a data frame until the frame has been completely sent, or the total time from when a receiving station starts receiving a data frame until the frame has been completely received.
In a preferred embodiment of this application, the transmission delay of a data block can be obtained from a load measurement tool or performance monitoring tool installed on each storage node.
As above, the average IO data block delay for the previous statistical period may also be calculated with an arithmetic mean, geometric mean, or root-mean-square algorithm. Suppose it is detected that the transmission delays of the ten IOs in the previous statistical period were 1s, 0.8s, 1.5s, 0.4s, 5s, 2s, 0.02s, 0.6s, 3s, and 4.5s; using the arithmetic mean, the average IO data block delay for the previous statistical period is (1s + 0.8s + 1.5s + 0.4s + 5s + 2s + 0.02s + 0.6s + 3s + 4.5s) / 10 ≈ 1.88s.
It should be understood that the same averaging algorithm is used for both quantities: if the average IO data block size for the previous statistical period is computed with the arithmetic mean, the average IO data block delay is also computed with the arithmetic mean; if the geometric mean is used for the block size, it is also used for the delay; and if the root-mean-square average is used for the block size, it is also used for the delay.
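As a small, self-contained illustration of S21 and S22, the sketch below computes the two averages for the example values listed in this embodiment (block sizes in MB, delays in seconds) using the arithmetic mean; it is not code from the patent.

```python
def arithmetic_mean(values):
    """Arithmetic mean used for both the block-size and delay averages (S21, S22)."""
    return sum(values) / len(values)

# Example values from Embodiment 2 (sizes in MB, delays in seconds).
block_sizes_mb = [2, 1, 3, 0.5, 10, 4, 0.1, 1.2, 5, 8]
delays_s = [1, 0.8, 1.5, 0.4, 5, 2, 0.02, 0.6, 3, 4.5]

avg_block_size = arithmetic_mean(block_sizes_mb)   # 3.48 MB
avg_delay = arithmetic_mean(delays_s)              # 1.882 s, i.e. about 1.88 s
print(f"average block size: {avg_block_size:.2f} MB, average delay: {avg_delay:.2f} s")
```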
S23: Obtain the preset reference value of the IO data block size and the corresponding reference value of the data block delay.
In a preferred embodiment of this application, the reference value of the IO data block size and the corresponding reference value of the data block delay can be preset by the administrator of the storage system based on experience. For example, if experience shows that a 4K data block has the smallest transmission delay, ideally around 50ms, the reference IO data block size can be set to 4K and the corresponding reference data block delay to 50ms.
S24: Calculate the IO load intensity for the previous statistical period according to the average IO data block size and average data block delay of the previous statistical period, the reference data block size, and the corresponding reference data block delay.
For example, assume the average IO data block size in the previous statistical period is X, the average data block delay is Y, the reference data block size is M, and the corresponding reference data block delay is N; the formula for the IO load intensity of the previous statistical period is then:
(formula given in the original filing as image PCTCN2018100171-appb-000003)
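The filing provides the load-intensity formula only as an image, so its exact form is not reproduced here. Purely as an illustration of the idea (delay that is high relative to the reference, normalized by how large the blocks are relative to the reference block size, indicates a heavier load), one plausible form is sketched below; it is an assumption, not the patent's formula.

```python
def io_load_intensity(avg_block_size, avg_delay, ref_block_size, ref_delay):
    """Illustrative load-intensity measure; the patent's actual formula is an image
    (PCTCN2018100171-appb-000003) and may differ from this sketch.

    avg_block_size = X, avg_delay = Y, ref_block_size = M, ref_delay = N.
    """
    # Observed delay per unit of data, compared with the reference delay per unit of data.
    observed = avg_delay / avg_block_size
    baseline = ref_delay / ref_block_size
    return observed / baseline
```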
S25: According to the IO load intensity of the previous statistical period, use a pre-trained load classification model to determine the IO load category of the previous statistical period.
In a preferred embodiment of this application, the IO load categories include a high load category, a normal load category, and a low load category.
Preferably, the load classification model includes, but is not limited to, a Support Vector Machine (SVM) model. The average IO data block size, the average IO data block delay, and the IO load intensity of the previous statistical period are used as inputs to the load classification model, which, after computation, outputs the IO load category of the previous statistical period.
In a preferred embodiment of this application, the training process of the load classification model includes:
1) Obtain IO load data of positive samples and IO load data of negative samples, and label the positive-sample IO load data with load categories so that each positive sample carries an IO load category label.
For example, 500 pieces of IO load data corresponding to each of the high, normal, and low load categories are selected and labeled, e.g. "1" as the label for high-load IO data, "2" for normal-load IO data, and "3" for low-load IO data.
2) Randomly split the positive-sample IO load data and the negative-sample IO load data into a training set with a first preset ratio and a validation set with a second preset ratio, train the load classification model on the training set, and verify the accuracy of the trained model on the validation set.
The training samples of the different load categories are first distributed into different folders: for example, high-load training samples into a first folder, normal-load samples into a second folder, and low-load samples into a third folder. A first preset ratio (for example, 70%) of the samples is then drawn from each folder as the overall training samples for training the load classification model, and the remaining second preset ratio (for example, 30%) from each folder is used as the overall test samples to verify the accuracy of the trained model.
3) If the accuracy is greater than or equal to a preset accuracy threshold, training ends and the trained load classification model is used as the classifier to identify the IO load category of the current statistical period; if the accuracy is below the preset accuracy threshold, the numbers of positive and negative samples are increased and the load classification model is retrained until the accuracy is greater than or equal to the preset accuracy threshold.
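A minimal sketch of this train/validate loop follows, assuming scikit-learn's SVC as the SVM implementation and a three-column feature layout (average block size, average delay, load intensity); the library choice, the 70/30 split, and the retraining policy details are assumptions layered on the embodiment's description rather than the patent's code.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def train_load_classifier(features, labels, accuracy_threshold=0.9, get_more_samples=None):
    """Train an SVM load classifier as described in Embodiment 2 (sketch, not patent code).

    features -- array of shape (n, 3): avg block size, avg delay, load intensity
    labels   -- 1 = high load, 2 = normal load, 3 = low load
    """
    while True:
        # 2) Split into a 70% training set and a 30% validation set.
        x_train, x_val, y_train, y_val = train_test_split(
            features, labels, train_size=0.7, random_state=0)
        model = SVC(kernel="rbf")
        model.fit(x_train, y_train)
        accuracy = accuracy_score(y_val, model.predict(x_val))
        # 3) Accept the classifier once it reaches the preset accuracy threshold.
        if accuracy >= accuracy_threshold or get_more_samples is None:
            return model, accuracy
        # Otherwise add more positive/negative samples and retrain.
        extra_features, extra_labels = get_more_samples()
        features = np.vstack([features, extra_features])
        labels = np.concatenate([labels, extra_labels])
```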
S26: Calculate the flow control threshold corresponding to the current statistical period according to the IO load category of the previous statistical period.
Specifically, calculating the flow control threshold corresponding to the current statistical period according to the IO load category of the previous statistical period may include:
1) When the IO load category of the previous statistical period is the high load category, lower the flow control threshold corresponding to the previous statistical period by a first preset amplitude to obtain the flow control threshold corresponding to the current statistical period.
When the IO load in the previous statistical period is high, the flow control threshold is lowered by the first preset amplitude so that the data consistency check on the multiple copies runs at a low threshold in the current statistical period; slowing down the consistency check preserves efficient access for user applications.
In a preferred embodiment of this application, the first preset amplitude may be 1/2 of the flow control threshold corresponding to the previous statistical period. That is, the flow control threshold corresponding to the current statistical period is 1/2 of that of the previous statistical period, and the threshold corresponding to the next statistical period is 1/2 of that of the current statistical period.
2) When the IO load category of the previous statistical period is the low load category, raise the flow control threshold corresponding to the previous statistical period by a second preset amplitude to obtain the flow control threshold corresponding to the current statistical period.
When the IO load in the previous statistical period is low, the flow control threshold is raised by the second preset amplitude so that the data consistency check on the multiple copies runs at a high threshold in the current statistical period; on the basis of guaranteeing the access quality of user applications, this speeds up the consistency check and returns the distributed storage system to a healthy state as soon as possible.
In a preferred embodiment of this application, the second preset amplitude may be 1.5 times the flow control threshold corresponding to the previous statistical period. That is, the flow control threshold corresponding to the current statistical period is 1.5 times that of the previous statistical period, and the threshold corresponding to the next statistical period is 1.5 times that of the current statistical period.
3) When the IO load category of the previous statistical period is the normal load category, use the flow control threshold corresponding to the previous statistical period as the flow control threshold corresponding to the current statistical period.
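Putting the three rules together, the compact sketch below applies the per-period adjustment (halve on high load, multiply by 1.5 on low load, keep unchanged on normal load); the numeric category constants are an illustrative assumption matching the label convention of the training example above.

```python
HIGH_LOAD, NORMAL_LOAD, LOW_LOAD = 1, 2, 3   # label convention is an assumption

def next_flow_control_threshold(previous_threshold, previous_load_category):
    """Apply the adjustment rules of S26 to obtain the current period's threshold."""
    if previous_load_category == HIGH_LOAD:
        return previous_threshold * 0.5      # rule 1): lower by the first preset amplitude (1/2)
    if previous_load_category == LOW_LOAD:
        return previous_threshold * 1.5      # rule 2): raise by the second preset amplitude (1.5x)
    return previous_threshold                # rule 3): normal load, keep the threshold unchanged
```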
In summary, with the data consistency check flow control method described in this application, user data is stored as multiple copies when a write request for the user data is received; when the trigger condition for a data consistency check is met, the flow control thresholds corresponding to the different statistical periods within the check period are obtained, and a data consistency check is performed on the multiple copies based on the threshold of each statistical period. This improves the efficiency of the consistency check and guarantees data consistency among the copies while avoiding a significant impact on normal input/output service performance, giving a good flow control effect.
Furthermore, the flow control threshold corresponding to the current statistical period is adjusted dynamically and automatically according to the IO load of user applications in the previous statistical period, without manual tuning by an administrator; this reduces the administrator's workload and avoids the imprecise adjustments that an administrator's subjective judgment can cause.
The above are only specific implementations of this application, but the protection scope of this application is not limited to them; improvements made by those of ordinary skill in the art without departing from the inventive concept of this application also fall within the protection scope of this application.
With reference to FIGS. 3 and 4, the functional modules and hardware structure of the electronic device implementing the above data consistency check flow control method are described below.
Embodiment 3
FIG. 3 is a functional module diagram of a preferred embodiment of the data consistency check flow control device of this application.
In some embodiments, the data consistency check flow control device 30 runs in an electronic device. The device 30 may include multiple functional modules composed of program code segments. The program code of each program segment in the device 30 may be stored in a memory and executed by at least one processor to perform the data consistency check flow control method (see FIGS. 1-2 and the related description for details).
In this embodiment, the data consistency check flow control device 30 can be divided into multiple functional modules according to the functions it performs. The functional modules may include a copy storage module 301, a detection module 302, a flow control acquisition module 303, a copy check module 304, a calculation module 305, a determining module 306, and a training module 307. A module in this application refers to a series of computer-readable instruction segments that can be executed by at least one processor and can perform a fixed function, and that are stored in a memory. In some embodiments, the functions of the modules are described in detail below.
The copy storage module 301 is configured to store user data as multiple copies when a write request for the user data is received.
To provide data reliability, distributed storage systems are generally implemented by storing multiple copies of the data. For example, when a user stores a txt file, the underlying distributed storage system keeps three copies of the file on different hard disks in different fault domains. Even if one hard disk is damaged, the txt file is not lost; even if two hard disks are damaged at the same time, the data still survives. After a hard disk is damaged, the distributed storage system generally detects the loss in time and replenishes the missing copies.
The detection module 302 is configured to detect whether a trigger condition for a data consistency check is met.
Multiple copies bring data reliability but also raise consistency issues. The distributed storage system therefore needs to set trigger conditions for the data consistency check. When the trigger condition is met, an instruction to perform the data consistency check is considered to have been triggered, and the data consistency check is performed on each copy; when the trigger condition is not met, no such instruction is considered to have been triggered, and the check need not be performed on the copies.
In a preferred embodiment of this application, the trigger condition for the data consistency check includes one or a combination of the following:
1) a preset time point is reached, for example, 0:00 every day;
2) a read request for the user data is received;
3) a preset interval elapses, for example, every 10 hours.
Performing the data consistency check between the copies periodically, at scheduled times, or when a user reads the data ensures the correctness of the data across the copies.
Preferably, the data consistency check between the copies is performed periodically or at scheduled times; when the distributed system as a whole is relatively large, this helps to centrally control the check and synchronization strategy.
The flow control acquisition module 303 is configured to obtain the flow control threshold corresponding to the current statistical period within the check period when the detection module 302 detects that the trigger condition for the data consistency check is met.
The entire process from starting the data consistency check on the multiple copies to completing it is called a check period. A check period can be divided into multiple statistical periods, and a statistical period can be a preset time span; for example, a statistical period may be set to 1 second.
Flow control here refers to traffic control. Flow control can be implemented in two ways: one is to implement flow control based on source address, destination address, source port, destination port, and protocol type through the QoS module of routers and switches; the other is to implement application-layer flow control with dedicated flow control equipment.
In this preferred embodiment, the flow control acquisition module 303 obtaining the flow control threshold corresponding to the current statistical period within the check period may specifically include:
1) Determine whether the current statistical period is the first statistical period.
Whether the current statistical period is the first statistical period can be determined by checking whether the current time falls in the first second of the check period.
2) When the current statistical period is determined to be the first statistical period, determine a preset flow control threshold as the threshold corresponding to the current statistical period.
The flow control threshold corresponding to the first statistical period within the check period of this application is a preset value that can be configured in advance by the system administrator based on experience. That is, a preset flow control threshold is used as the threshold of the first statistical period within the check period.
3) When the current statistical period is determined not to be the first statistical period, obtain the IO load of user applications in the previous statistical period, and determine the flow control threshold corresponding to the current statistical period according to that IO load.
Each remaining statistical period within the check period (other than the first) may correspond to its own flow control threshold. The threshold of each remaining statistical period is adjusted dynamically: the threshold corresponding to the current statistical period can be calculated from the IO load of the previous statistical period, and the threshold corresponding to the next statistical period can be calculated from the IO load of the current statistical period. Specifically, the threshold of the second statistical period is calculated from the IO load in the first statistical period, the threshold of the third statistical period from the IO load in the second statistical period, and so on.
For the specific process of determining the flow control threshold corresponding to the current statistical period according to the IO load of user applications in the previous statistical period, see FIG. 2 and its corresponding description.
The copy check module 304 is configured to perform a data consistency check on the multiple copies based on the flow control threshold corresponding to the current statistical period.
Because excessive check traffic would affect the normal functioning of the distributed system, the data consistency check on the multiple copies is performed according to the flow control threshold of the current statistical period, so that the check does not run too fast and does not noticeably impact normal input/output service performance. When the flow control threshold of the current statistical period is larger, the check on the multiple copies is driven at that larger threshold, which speeds up the consistency check on the copies and relieves the pressure of checking the data.
The calculation module 305 is configured to obtain the data block size of each IO of the user application in the previous statistical period and calculate the average IO data block size for the previous statistical period.
The average IO data block size for the previous statistical period may be calculated with an arithmetic mean, geometric mean, or root-mean-square algorithm.
For example, suppose it is detected that the user application issued ten IOs in the previous statistical period, with data block sizes of 2M, 1M, 3M, 0.5M, 10M, 4M, 0.1M, 1.2M, 5M, and 8M. Using the arithmetic mean, the average IO data block size for the previous statistical period is (2M + 1M + 3M + 0.5M + 10M + 4M + 0.1M + 1.2M + 5M + 8M) / 10 = 3.48M.
The calculation module 305 is further configured to obtain the transmission delay of each data block in the previous statistical period and calculate the average IO data block delay for the previous statistical period.
The transmission delay (delay for short) refers to the time a node needs to push a data block from the node onto the transmission medium when sending data, that is, the total time from when a sending station starts sending a data frame until the frame has been completely sent, or the total time from when a receiving station starts receiving a data frame until the frame has been completely received.
In a preferred embodiment of this application, the transmission delay of a data block can be obtained from a load measurement tool or performance monitoring tool installed on each storage node.
As above, the average IO data block delay for the previous statistical period may also be calculated with an arithmetic mean, geometric mean, or root-mean-square algorithm. Suppose it is detected that the transmission delays of the ten IOs in the previous statistical period were 1s, 0.8s, 1.5s, 0.4s, 5s, 2s, 0.02s, 0.6s, 3s, and 4.5s; using the arithmetic mean, the average IO data block delay for the previous statistical period is (1s + 0.8s + 1.5s + 0.4s + 5s + 2s + 0.02s + 0.6s + 3s + 4.5s) / 10 ≈ 1.88s.
It should be understood that the same averaging algorithm is used for both quantities: if the average IO data block size for the previous statistical period is computed with the arithmetic mean, the average IO data block delay is also computed with the arithmetic mean; if the geometric mean is used for the block size, it is also used for the delay; and if the root-mean-square average is used for the block size, it is also used for the delay.
The flow control acquisition module 303 is further configured to obtain the preset reference value of the IO data block size and the corresponding reference value of the data block delay.
In a preferred embodiment of this application, the reference value of the IO data block size and the corresponding reference value of the data block delay can be preset by the administrator of the storage system based on experience. For example, if experience shows that a 4K data block has the smallest transmission delay, ideally around 50ms, the reference IO data block size can be set to 4K and the corresponding reference data block delay to 50ms.
The calculation module 305 is further configured to calculate the IO load intensity for the previous statistical period according to the average IO data block size and average data block delay of the previous statistical period, the reference data block size, and the corresponding reference data block delay.
For example, assume the average IO data block size in the previous statistical period is X, the average data block delay is Y, the reference data block size is M, and the corresponding reference data block delay is N; the formula for the IO load intensity of the previous statistical period is then:
(formula given in the original filing as image PCTCN2018100171-appb-000006)
The determining module 306 is configured to determine the IO load category of the previous statistical period by using a pre-trained load classification model according to the IO load intensity of the previous statistical period.
In a preferred embodiment of this application, the IO load categories include a high load category, a normal load category, and a low load category.
Preferably, the load classification model includes, but is not limited to, a Support Vector Machine (SVM) model. The average IO data block size, the average IO data block delay, and the IO load intensity of the previous statistical period are used as inputs to the load classification model, which, after computation, outputs the IO load category of the previous statistical period.
训练模块307,用于训练负载分类模型。
所述训练模块307训练所述负载分类模型的过程包括:
1)获取正样本的IO负载数据及负样本的IO负载数据,并将正样本的IO负载数据标注负载类别,以使正样本的IO负载数据携带IO负载类别标签。
例如,分别选取500个高负载类别、正常负载类别、低负载类别对应的IO负载数据,并对每个IO负载数据标注类别,可以以“1”作为高负载的IO数据标签,以“2”作为正常负载的IO数据标签,以“3”作为低负载的IO数据标签。
2)将所述正样本的IO负载数据及所述负样本的IO负载数据随机分成第一预设比例的训练集和第二预设比例的验证集,利用所述训练集训练所述负载分类模型,并利用所述验证集验证训练后的所述负载分类模型的准确率。
先将不同负载类别的样本分发到不同的文件夹里。例如,将高负载类别的样本分发到第一文件夹里、正常负载类别的样本分发到第二文件夹里、低负载类别的样本分发到第三文件夹里。然后从不同的文件夹里分别提取第一预设比例(例如,70%)的样本作为总的训练集进行负载分类模型的训练,从不同的文件夹里分别取剩余第二预设比例(例如,30%)的样本作为总的验证集对训练完成的所述负载分类模型进行准确率验证。
3)若所述准确率大于或者等于预设准确率阈值时,则结束训练,以训练后的所述负载分类模型作为分类器识别所述当前统计周期内的IO负载类别;若所述准确率小于预设准确率阈值时,则增加正样本数量及负样本数量以重新训练所述负载分类模型直至所述准确率大于或者等于预设准确率阈值。
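上述训练与验证流程可以用如下基于scikit-learn的Python代码草图示意。其中train_load_classifier函数名、0.9的准确率阈值均为说明目的而假设,样本准备(分文件夹、标注1/2/3标签)不在代码中体现:

from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def train_load_classifier(samples, labels, accuracy_threshold=0.9):
    # samples为[[平均数据块大小, 平均数据块时延, IO负载强度], ...],labels为1/2/3类别标签
    x_train, x_val, y_train, y_val = train_test_split(
        samples, labels, train_size=0.7, test_size=0.3, shuffle=True)
    model = SVC(kernel="rbf")
    model.fit(x_train, y_train)
    accuracy = model.score(x_val, y_val)
    if accuracy >= accuracy_threshold:
        return model          # 准确率达标,结束训练,作为分类器使用
    return None               # 准确率不足,需增加正负样本数量后重新训练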
流控获取模块303,还用于根据上一个统计周期内的IO负载类别计算当前统计周期对应的流控阈值。
具体的,所述流控获取模块303根据上一个统计周期内的IO负载类别计算当前统计周期对应的流控阈值可以包括:
1)当所述上一个统计周期内的IO负载类别为高负载类别时,将所述上一个统计周期对应的流控阈值降低第一预设幅度,得到当前统计周期对应的流控阈值。
在上一个统计周期内的IO负载为高负载时,按照所述第一预设幅度降低流控阈值,以在当前统计周期内以低流控阈值对多个副本进行数据一致性校验,通过降低数据一致性校验的速度来保证用户应用的高效访问。
在本申请的优选实施例中,所述第一预设幅度可以是上一个统计周期对应的流控阈值的1/2。即当前统计周期对应的流控阈值为上一个统计周期对应的流控阈值的1/2,下一个统计周期对应的流控阈值为当前统计周期对应的流控阈值的1/2。
2)当所述上一个统计周期内的IO负载类别为低负载类别时,将所述上一个统计周期对应的流控阈值提高第二预设幅度,得到当前统计周期对应的流控阈值。
在上一个统计周期内的IO负载为低负载时,按照所述第二预设幅度提高流控阈值,以在当前统计周期内以高流控阈值对多个副本进行数据一致性校验,在保证用户应用的访问质量的基础上,提高数据一致性校验的速度,尽快将分布式存储系统恢复到健康状态。
在本申请的优选实施例中,所述第二预设幅度可以是上一个统计周期对应的流控阈值的1.5倍。即当前统计周期对应的流控阈值为上一个统计周期对应的流控阈值的1.5倍,下一个统计周期对应的流控阈值为当前统计周期对应的流控阈值的1.5倍。
3)当所述上一个统计周期内的IO负载类别为正常负载类别时,将所述上一个统计周期对应的流控阈值作为当前统计周期对应的流控阈值。
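上述根据负载类别动态调整流控阈值的规则,可以用如下Python代码草图示意。其中第一个统计周期直接采用预设流控阈值,1/2与1.5倍的调整幅度沿用上文的优选示例,函数名为说明目的而假设:

def get_current_threshold(period_index, preset_threshold,
                          previous_threshold, previous_load_class):
    # period_index从1开始计数;第一个统计周期直接采用预设流控阈值
    if period_index == 1:
        return preset_threshold
    if previous_load_class == "high":        # 高负载:降低阈值,保证用户应用的高效访问
        return previous_threshold * 0.5
    if previous_load_class == "low":         # 低负载:提高阈值,加快一致性校验
        return previous_threshold * 1.5
    return previous_threshold                # 正常负载:沿用上一统计周期的流控阈值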
综上所述,本申请所述的数据一致性校验流控装置,在接收到用户数据的写入请求时,将所述用户数据存储为多个副本,并在满足数据一致性校验的触发条件时,通过获取校验周期内的不同统计周期对应的流控阈值,基于所述每一个统计周期对应的流控阈值,对所述多个副本进行数据一致性校验,在提高数据一致性校验的效率、保证多个副本之间的数据一致性的同时,能够避免对正常输入输出业务性能造成明显冲击,具有很好的流控效果。
其次,当前统计周期对应的流控阈值是根据上一个统计周期内用户应用的IO负载自动进行动态调整,不需管理者手动调节,减少了管理者的工作量,避免了因管理者的主观因素导致的调整不精准的问题。
上述以软件功能模块的形式实现的集成的单元,可以存储在一个非易失性可读存储介质中。上述软件功能模块存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,双屏设备,或者网络设备等)或处理器(processor)执行本申请各个实施例所述方法的部分。
实施例四
图4为本申请实施例四提供的电子设备的示意图。
所述电子设备4包括:存储器41、至少一个处理器42、存储在所述存储器41中并可在所述至少一个处理器42上运行的计算机可读指令43及至少一条通讯总线44。
所述至少一个处理器42执行所述计算机可读指令43时实现上述方法实施例中的步骤。
示例性的,所述计算机可读指令43可以被分割成一个或多个模块/单元,所述一个或者多个模块/单元被存储在所述存储器41中,并由所述至少一个处理器42执行,以完成本申请上述方法实施例中的步骤。所述一个或多个模块/单元可以是能够完成特定功能的一系列计算机可读指令段,该指令段用于描述所述计算机可读指令43在所述电子设备4中的执行过程。
所述电子设备4可以是桌上型计算机、笔记本、掌上电脑及云端服务器等计算设备。本领域技术人员可以理解,所述示意图4仅仅是电子设备4的示例,并不构成对电子设备4的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件,例如所述电子设备4还可以包括输入输出设备、网络接入设备、总线等。
所述至少一个处理器42可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。该处理器42可以是微处理器,也可以是任何常规的处理器等。所述处理器42是所述电子设备4的控制中心,利用各种接口和线路连接整个电子设备4的各个部分。
所述存储器41可用于存储所述计算机可读指令43和/或模块/单元,所述处理器42通过运行或执行存储在所述存储器41内的计算机可读指令和/或模块/单元,以及调用存储在存储器41内的数据,实现所述电子设备4的各种功能。所述存储器41可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据电子设备4的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器41可以包括高速随机存取存储器,还可以包括非易失性存储器,例如硬盘、插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)、至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。
所述电子设备4集成的模块/单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个非易失性可读存储介质中。基于这样的理解,本申请实现上述实施例方法中的全部或部分流程,也可以通过计算机可读指令来指令相关的硬件来完成,所述的计算机可读指令可存储于一非易失性可读存储介质中,该计算机可读指令在被处理器执行时,可实现上述各个方法实施例的步骤。其中,所述计算机可读指令包括计算机可读指令代码,所述计算机可读指令代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。所述计算机可读介质可以包括:能够携带所述计算机可读指令代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、电载波信号、电信信号以及软件分发介质等。需要说明的是,所述计算机可读介质包含的内容可以根据司法管辖区内立法和专利实践的要求进行适当的增减,例如在某些司法管辖区,根据立法和专利实践,计算机可读介质不包括电载波信号和电信信号。
在本申请所提供的几个实施例中,应该理解到,所揭露的电子设备和方法,可以通过其它的方式实现。例如,以上所描述的电子设备实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
另外,在本申请各个实施例中的各功能单元可以集成在相同处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在相同单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件 功能模块的形式实现。
对于本领域技术人员而言,显然本申请不限于上述示范性实施例的细节,而且在不背离本申请的精神或基本特征的情况下,能够以其他的具体形式实现本申请。因此,无论从哪一点来看,均应将实施例看作是示范性的,而且是非限制性的,本申请的范围由所附权利要求而不是上述说明限定,因此旨在将落在权利要求的等同要件的含义和范围内的所有变化涵括在本申请内。不应将权利要求中的任何附图标记视为限制所涉及的权利要求。此外,显然"包括"一词不排除其他单元或步骤,单数不排除复数。系统权利要求中陈述的多个单元或装置也可以由一个单元或装置通过软件或者硬件来实现。第一,第二等词语用来表示名称,而并不表示任何特定的顺序。
最后应说明的是,以上实施例仅用以说明本申请的技术方案而非限制,尽管参照较佳实施例对本申请进行了详细说明,本领域的普通技术人员应当理解,可以对本申请的技术方案进行修改或等同替换,而不脱离本申请技术方案的精神范围。

Claims (20)

  1. 一种数据一致性校验流控方法,其特征在于,所述方法包括:
    接收到用户数据的写入请求时,将所述用户数据存储为多个副本;
    侦测是否满足了数据一致性校验的触发条件;
    当侦测到满足了数据一致性校验的触发条件时,获取校验周期内的当前统计周期对应的流控阈值;
    基于所述当前统计周期对应的流控阈值,对所述多个副本进行数据一致性校验。
  2. 如权利要求1所述的方法,其特征在于,所述数据一致性校验的触发条件包括以下一种或多种的组合:
    满足了预设时间点;
    接收到了用户数据的读取请求;
    每隔预设时间段。
  3. 如权利要求1所述的方法,其特征在于,所述获取校验周期内的当前统计周期对应的流控阈值包括:
    判断当前统计周期是否为第一个统计周期;
    当确定所述当前统计周期为第一个统计周期时,将预设流控阈值确定为所述当前统计周期对应的流控阈值;
    当确定所述当前统计周期不为第一个统计周期时,获取上一个统计周期内用户应用的IO负载,根据所述上一个统计周期内用户应用的IO负载,确定所述当前统计周期对应的流控阈值。
  4. 如权利要求3所述的方法,其特征在于,所述根据所述上一个统计周期内用户应用的IO负载,确定所述当前统计周期对应的流控阈值包括:
    获取上一个统计周期内用户应用的每一个IO的数据块大小,计算所述上一个统计周期内的IO的平均数据块大小;
    获取所述上一个统计周期内的每个数据块的传输时延,计算所述上一个统计周期内的IO的平均数据块时延;
    获取预先设置的IO的数据块大小的基准值及对应的数据块时延的基准值;
    根据所述上一个统计周期内的所述IO的平均数据块大小、平均数据块时延、数据块大小的基准值、对应的数据块时延的基准值,计算所述上一个统计周期内的IO负载强度;
    根据所述上一个统计周期内的IO负载强度,利用预先训练好的负载分类模型确定所述上一个统计周期内的IO负载类别;
    根据上一个统计周期内的IO负载类别计算当前统计周期对应的流控阈值。
  5. 如权利要求4所述的方法,其特征在于,所述根据所述上一个统计周期内的所述IO的平均数据块大小、平均数据块时延、数据块大小的基准值、对应的数据块时延的基准值,计算所述上一个统计周期内的IO负载强度的计 算公式为:
    Figure PCTCN2018100171-appb-100001
    其中,X为上述上一个统计周期内的所述IO的平均数据块大小,Y为所述平均数据块时延,M为所述数据块大小的基准值,N为所述对应的数据块时延的基准值。
  6. 如权利要求4或5所述的方法,其特征在于,所述负载分类模型的训练过程包括:
    获取正样本的IO负载数据及负样本的IO负载数据,并将正样本的IO负载数据标注负载类别,以使正样本的IO负载数据携带IO负载类别标签;
    将所述正样本的IO负载数据及所述负样本的IO负载数据随机分成第一预设比例的训练集和第二预设比例的验证集,利用所述训练集训练支持向量机分类模型,并利用所述验证集验证训练后的所述支持向量机分类模型的准确率;
    若所述准确率大于或者等于预设准确率阈值时,则结束训练,以训练后的所述支持向量机分类模型作为负载分类模型识别所述当前统计周期内的IO负载类别。
  7. 如权利要求4所述的方法,其特征在于,所述根据上一个统计周期内的IO负载类别计算当前统计周期对应的流控阈值包括:
    当所述上一个统计周期内的IO负载类别为高负载类别时,将所述上一个统计周期对应的流控阈值降低第一预设幅度,得到当前统计周期对应的流控阈值;
    当所述上一个统计周期内的IO负载类别为低负载类别时,将所述上一个统计周期对应的流控阈值提高第二预设幅度,得到当前统计周期对应的流控阈值;
    当所述上一个统计周期内的IO负载类别为正常负载类别时,将所述上一个统计周期对应的流控阈值作为当前统计周期对应的流控阈值。
  8. 一种数据一致性校验流控装置,其特征在于,所述装置包括:
    副本存储模块,用于接收到用户数据的写入请求时,将所述用户数据存储为多个副本;
    侦测模块,用于侦测是否满足了数据一致性校验的触发条件;
    流控获取模块,用于当所述侦测模块侦测到满足了数据一致性校验的触发条件时,获取校验周期内的当前统计周期对应的流控阈值;
    副本校验模块,用于基于所述当前统计周期对应的流控阈值,对所述多个副本进行数据一致性校验。
  9. 一种电子设备,其特征在于,所述电子设备包括处理器和存储器,所述处理器用于执行所述存储器中存储的计算机可读指令时实现如下步骤:
    接收到用户数据的写入请求时,将所述用户数据存储为多个副本;
    侦测是否满足了数据一致性校验的触发条件;
    当侦测到满足了数据一致性校验的触发条件时,获取校验周期内的当前统计周期对应的流控阈值;
    基于所述当前统计周期对应的流控阈值,对所述多个副本进行数据一致 性校验。
  10. 如权利要求9所述的电子设备,其特征在于,所述获取校验周期内的当前统计周期对应的流控阈值包括:
    判断当前统计周期是否为第一个统计周期;
    当确定所述当前统计周期为第一个统计周期时,将预设流控阈值确定为所述当前统计周期对应的流控阈值;
    当确定所述当前统计周期不为第一个统计周期时,获取上一个统计周期内用户应用的IO负载,根据所述上一个统计周期内用户应用的IO负载,确定所述当前统计周期对应的流控阈值。
  11. 如权利要求10所述的电子设备,其特征在于,所述根据所述上一个统计周期内用户应用的IO负载,确定所述当前统计周期对应的流控阈值包括:
    获取上一个统计周期内用户应用的每一个IO的数据块大小,计算所述上一个统计周期内的IO的平均数据块大小;
    获取所述上一个统计周期内的每个数据块的传输时延,计算所述上一个统计周期内的IO的平均数据块时延;
    获取预先设置的IO的数据块大小的基准值及对应的数据块时延的基准值;
    根据所述上一个统计周期内的所述IO的平均数据块大小、平均数据块时延、数据块大小的基准值、对应的数据块时延的基准值,计算所述上一个统计周期内的IO负载强度;
    根据所述上一个统计周期内的IO负载强度,利用预先训练好的负载分类模型确定所述上一个统计周期内的IO负载类别;
    根据上一个统计周期内的IO负载类别计算当前统计周期对应的流控阈值。
  12. 如权利要求11所述的电子设备,其特征在于,所述根据所述上一个统计周期内的所述IO的平均数据块大小、平均数据块时延、数据块大小的基准值、对应的数据块时延的基准值,计算所述上一个统计周期内的IO负载强度的计算公式为:
    Figure PCTCN2018100171-appb-100002
    其中,X为上述上一个统计周期内的所述IO的平均数据块大小,Y为所述平均数据块时延,M为所述数据块大小的基准值,N为所述对应的数据块时延的基准值。
  13. 如权利要求11或12所述的电子设备,其特征在于,所述负载分类模型的训练过程包括:
    获取正样本的IO负载数据及负样本的IO负载数据,并将正样本的IO负载数据标注负载类别,以使正样本的IO负载数据携带IO负载类别标签;
    将所述正样本的IO负载数据及所述负样本的IO负载数据随机分成第一预设比例的训练集和第二预设比例的验证集,利用所述训练集训练支持向量机分类模型,并利用所述验证集验证训练后的所述支持向量机分类模型的准确率;
    若所述准确率大于或者等于预设准确率阈值时,则结束训练,以训练后的所述支持向量机分类模型作为负载分类模型识别所述当前统计周期内的IO负载类别。
  14. 如权利要求11所述的电子设备,其特征在于,所述根据上一个统计周期内的IO负载类别计算当前统计周期对应的流控阈值包括:
    当所述上一个统计周期内的IO负载类别为高负载类别时,将所述上一个统计周期对应的流控阈值降低第一预设幅度,得到当前统计周期对应的流控阈值;
    当所述上一个统计周期内的IO负载类别为低负载类别时,将所述上一个统计周期对应的流控阈值提高第二预设幅度,得到当前统计周期对应的流控阈值;
    当所述上一个统计周期内的IO负载类别为正常负载类别时,将所述上一个统计周期对应的流控阈值作为当前统计周期对应的流控阈值。
  15. 一种非易失性可读存储介质,所述非易失性可读存储介质上存储有计算机可读指令,其特征在于,所述计算机可读指令被处理器执行时实现如下步骤:
    接收到用户数据的写入请求时,将所述用户数据存储为多个副本;
    侦测是否满足了数据一致性校验的触发条件;
    当侦测到满足了数据一致性校验的触发条件时,获取校验周期内的当前统计周期对应的流控阈值;
    基于所述当前统计周期对应的流控阈值,对所述多个副本进行数据一致性校验。
  16. 如权利要求15所述的存储介质,其特征在于,所述获取校验周期内的当前统计周期对应的流控阈值包括:
    判断当前统计周期是否为第一个统计周期;
    当确定所述当前统计周期为第一个统计周期时,将预设流控阈值确定为所述当前统计周期对应的流控阈值;
    当确定所述当前统计周期不为第一个统计周期时,获取上一个统计周期内用户应用的IO负载,根据所述上一个统计周期内用户应用的IO负载,确定所述当前统计周期对应的流控阈值。
  17. 如权利要求16所述的存储介质,其特征在于,所述根据所述上一个统计周期内用户应用的IO负载,确定所述当前统计周期对应的流控阈值包括:
    获取上一个统计周期内用户应用的每一个IO的数据块大小,计算所述上一个统计周期内的IO的平均数据块大小;
    获取所述上一个统计周期内的每个数据块的传输时延,计算所述上一个统计周期内的IO的平均数据块时延;
    获取预先设置的IO的数据块大小的基准值及对应的数据块时延的基准值;
    根据所述上一个统计周期内的所述IO的平均数据块大小、平均数据块时 延、数据块大小的基准值、对应的数据块时延的基准值,计算所述上一个统计周期内的IO负载强度;
    根据所述上一个统计周期内的IO负载强度,利用预先训练好的负载分类模型确定所述上一个统计周期内的IO负载类别;
    根据上一个统计周期内的IO负载类别计算当前统计周期对应的流控阈值。
  18. 如权利要求17所述的存储介质,其特征在于,所述根据所述上一个统计周期内的所述IO的平均数据块大小、平均数据块时延、数据块大小的基准值、对应的数据块时延的基准值,计算所述上一个统计周期内的IO负载强度的计算公式为:
    Figure PCTCN2018100171-appb-100003
    其中,X为上述上一个统计周期内的所述IO的平均数据块大小,Y为所述平均数据块时延,M为所述数据块大小的基准值,N为所述对应的数据块时延的基准值。
  19. 如权利要求17或18所述的存储介质,其特征在于,所述负载分类模型的训练过程包括:
    获取正样本的IO负载数据及负样本的IO负载数据,并将正样本的IO负载数据标注负载类别,以使正样本的IO负载数据携带IO负载类别标签;
    将所述正样本的IO负载数据及所述负样本的IO负载数据随机分成第一预设比例的训练集和第二预设比例的验证集,利用所述训练集训练支持向量机分类模型,并利用所述验证集验证训练后的所述支持向量机分类模型的准确率;
    若所述准确率大于或者等于预设准确率阈值时,则结束训练,以训练后的所述支持向量机分类模型作为负载分类模型识别所述当前统计周期内的IO负载类别。
  20. 如权利要求17所述的存储介质,其特征在于,所述根据上一个统计周期内的IO负载类别计算当前统计周期对应的流控阈值包括:
    当所述上一个统计周期内的IO负载类别为高负载类别时,将所述上一个统计周期对应的流控阈值降低第一预设幅度,得到当前统计周期对应的流控阈值;
    当所述上一个统计周期内的IO负载类别为低负载类别时,将所述上一个统计周期对应的流控阈值提高第二预设幅度,得到当前统计周期对应的流控阈值;
    当所述上一个统计周期内的IO负载类别为正常负载类别时,将所述上一个统计周期对应的流控阈值作为当前统计周期对应的流控阈值。
PCT/CN2018/100171 2018-06-04 2018-08-13 数据一致性校验流控方法、装置、电子设备及存储介质 WO2019232926A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810566098.5 2018-06-04
CN201810566098.5A CN108762686B (zh) 2018-06-04 2018-06-04 数据一致性校验流控方法、装置、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2019232926A1 true WO2019232926A1 (zh) 2019-12-12

Family

ID=64002614

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/100171 WO2019232926A1 (zh) 2018-06-04 2018-08-13 数据一致性校验流控方法、装置、电子设备及存储介质

Country Status (2)

Country Link
CN (1) CN108762686B (zh)
WO (1) WO2019232926A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110413441A (zh) * 2019-06-18 2019-11-05 平安科技(深圳)有限公司 主备存储卷同步数据校验方法、装置、设备及存储介质
CN111767578B (zh) * 2020-08-31 2021-06-04 支付宝(杭州)信息技术有限公司 一种数据检验方法、装置及设备
CN112052265B (zh) * 2020-09-02 2024-05-10 平安壹钱包电子商务有限公司 数据核对确认方法、装置、计算机设备及可读存储介质
CN112184306A (zh) * 2020-09-26 2021-01-05 中国建设银行股份有限公司 自动返现方法、装置、电子设备及计算机可读存储介质
CN112231326B (zh) * 2020-09-30 2022-08-30 新华三大数据技术有限公司 一种检测Ceph对象的方法和服务器
CN113672167B (zh) * 2021-07-09 2023-12-22 济南浪潮数据技术有限公司 一种分布式存储系统的数据一致性校验方法、装置及设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103780426A (zh) * 2014-01-13 2014-05-07 南京邮电大学 云存储数据的一致性维护方法及云存储系统
US20160125017A1 (en) * 2014-10-29 2016-05-05 International Business Machines Corporation Detection of data replication consistency
CN106059940A (zh) * 2016-05-25 2016-10-26 杭州昆海信息技术有限公司 一种流量控制方法及装置
CN107220006A (zh) * 2017-06-01 2017-09-29 深圳市云舒网络技术有限公司 一种基于tcmu虚拟磁盘的多数据副本一致性保证方法
CN107219997A (zh) * 2016-03-21 2017-09-29 阿里巴巴集团控股有限公司 一种用于验证数据一致性的方法及装置

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4629342B2 (ja) * 2004-01-09 2011-02-09 株式会社日立製作所 ストレージ装置およびその制御方法
JP4978259B2 (ja) * 2007-03-22 2012-07-18 日本電気株式会社 データ整合性チェック方法およびデータ整合性チェックシステム
JP2009075675A (ja) * 2007-09-18 2009-04-09 Nec Computertechno Ltd 整合性チェック方法及び整合性チェックシステム
US10191674B2 (en) * 2016-04-15 2019-01-29 Netapp, Inc. Shared dense tree repair
CN106897342B (zh) * 2016-07-20 2020-10-09 阿里巴巴集团控股有限公司 一种数据校验方法和设备
CN107818106B (zh) * 2016-09-13 2021-11-16 腾讯科技(深圳)有限公司 一种大数据离线计算数据质量校验方法和装置
CN106649814A (zh) * 2016-12-29 2017-05-10 国网江苏省电力公司南京供电公司 一种配电自动化跨区数据库一致性校验方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103780426A (zh) * 2014-01-13 2014-05-07 南京邮电大学 云存储数据的一致性维护方法及云存储系统
US20160125017A1 (en) * 2014-10-29 2016-05-05 International Business Machines Corporation Detection of data replication consistency
CN107219997A (zh) * 2016-03-21 2017-09-29 阿里巴巴集团控股有限公司 一种用于验证数据一致性的方法及装置
CN106059940A (zh) * 2016-05-25 2016-10-26 杭州昆海信息技术有限公司 一种流量控制方法及装置
CN107220006A (zh) * 2017-06-01 2017-09-29 深圳市云舒网络技术有限公司 一种基于tcmu虚拟磁盘的多数据副本一致性保证方法

Also Published As

Publication number Publication date
CN108762686A (zh) 2018-11-06
CN108762686B (zh) 2021-01-01

Similar Documents

Publication Publication Date Title
WO2019232926A1 (zh) 数据一致性校验流控方法、装置、电子设备及存储介质
WO2019232993A1 (zh) 自适应的数据恢复流控方法、装置、电子设备及存储介质
US10261853B1 (en) Dynamic replication error retry and recovery
WO2019232927A1 (zh) 分布式数据删除流控方法、装置、电子设备及存储介质
US20120209921A1 (en) Instant Message Management Method and Apparatus
WO2019153490A1 (zh) 房产交易方法、装置、计算机可读存储介质及终端设备
US8375200B2 (en) Embedded device and file change notification method of the embedded device
CN110162270B (zh) 基于分布式存储系统的数据存储方法、存储节点及介质
WO2017020614A1 (zh) 一种检测磁盘的方法及装置
WO2020082588A1 (zh) 异常业务请求的识别方法、装置、电子设备及介质
CN110825731B (zh) 数据存储方法、装置、电子设备及存储介质
WO2018166145A1 (zh) 还款数据分批报盘方法和装置
CN111880967A (zh) 云场景下的文件备份方法、装置、介质和电子设备
CN108810832B (zh) 短信下发方法、装置与计算机可读存储介质
WO2019232994A1 (zh) 后台写盘流控方法、装置、电子设备及存储介质
US20240143456A1 (en) Log replay methods and apparatuses, data recovery methods and apparatuses, and electronic devices
WO2019232925A1 (zh) 热点数据迁移流控方法、装置、电子设备及存储介质
CN111159009B (zh) 一种日志服务系统的压力测试方法及装置
CN109298974B (zh) 系统控制方法、装置、计算机及计算机可读存储介质
US20180123866A1 (en) Method and apparatus for determining event level of monitoring result
CN110427293A (zh) 应用处理方法、装置、设备和介质
US9552324B2 (en) Dynamic data collection communication between adapter functions
TW202223920A (zh) 幹細胞密度確定方法、裝置、電腦裝置及儲存介質
CN113873026A (zh) 动态超时响应方法、装置、终端设备及存储介质
CN112114931B (zh) 深度学习程序配置方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18921415

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18921415

Country of ref document: EP

Kind code of ref document: A1