WO2022077605A1 - Wind turbine blade image-based damage detection and localization method - Google Patents

Wind turbine blade image-based damage detection and localization method

Info

Publication number
WO2022077605A1
WO2022077605A1 (PCT application PCT/CN2020/125752)
Authority
WO
WIPO (PCT)
Prior art keywords
wind turbine
damage
model
image
turbine blade
Prior art date
Application number
PCT/CN2020/125752
Other languages
French (fr)
Chinese (zh)
Inventor
曹金凤
郭继鸿
Original Assignee
青岛理工大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 青岛理工大学 filed Critical 青岛理工大学
Publication of WO2022077605A1 publication Critical patent/WO2022077605A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24317Piecewise classification, i.e. whereby each classification requires several discriminant rules
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images

Definitions

  • China is rich in wind energy resources, with about 4,350 GW of exploitable wind energy resources nationwide, and its wind energy reserves rank among the highest in the world.
  • Wind-powered generators (hereinafter "wind turbines") are being deployed rapidly in China. As the scale of wind turbine deployment continues to expand in China and worldwide, monitoring their operating condition and maintaining their safety have attracted increasing attention.
  • Wind turbines are installed in places with abundant wind resources, most of which are by the sea or on hilltops near the coast, where the environment is severe; wind turbines are therefore usually exposed to changing and harsh environments such as high altitudes, deserts, the Gobi and the open sea.
  • Wind turbine blades are the main components of a wind turbine, and extreme conditions such as cold, hail, rain and snow, humidity, corrosion, sandstorms, vibration and high temperature rapidly wear them down.
  • Cracks develop in wind turbine blades under changing loads and fatigue stress during long-term operation, posing a serious safety hazard.
  • Wind turbine blade damage is one of the main faults leading to wind turbine shutdown.
  • Wind turbine blades are huge and installed at great heights; professional tools and trained personnel are required to access them, making manual overhaul and maintenance extremely difficult. As a result, among all wind turbine failures, blade failures have the highest repair cost and the longest repair time.
  • Acoustic emission testing refers to a testing method that evaluates the performance or structural integrity of wind turbine blades by receiving and analyzing the acoustic emission signals of materials.
  • Patent CN103389341A discloses a wind turbine blade crack detection method in which an acoustic emission sensor is installed on the blade and the received acoustic emission signal is transmitted to an acoustic emission acquisition system; the sampling frequency, sampling length and filtering frequency of the signal are determined; the bandwidth parameter of the Morlet wavelet basis function is optimized on the basis of Shannon wavelet entropy to obtain a Morlet wavelet basis function matching the characteristics of the acoustic emission signals of propagating and initiating cracks; the reassigned scalogram of the acoustic emission signal is then computed to judge the crack state; and the propagation state of the crack fault is finally determined from the time-frequency characteristic parameters extracted from the crack acoustic emission signal.
  • Patent CN107657110A discloses a fatigue damage evaluation method for large wind turbine blades in which an acoustic emission sensor is installed on the blade, the received acoustic emission signal is transmitted to an acoustic emission acquisition system and evaluated, and the fatigue level determined from the evaluation set is used to assess the fatigue damage state of the blade and determine its real-time condition.
  • Vibration signal detection refers to the detection of structural vibration signals to reflect the health status of wind turbine blades.
  • Patent CN110568074A discloses a wind turbine blade crack localization method based on non-contact multi-point vibration measurement and the Hilbert transform: the cracked blade is excited with a random signal, the nonlinear vibration response of the blade under random excitation is collected, and the Hilbert transform of the excitation-input and output-position vibration signals is used to determine the crack position.
  • Patent CN109541028A discloses a wind turbine blade crack localization and detection method and system in which the vibration response signals of the blade with and without cracks are collected, the mutual information entropy and the change in the degree of nonlinearity of the vibration response before and after crack damage are computed from these signals, and the crack position is determined from the change.
  • In practice, the above detection methods still have several shortcomings that are difficult to overcome.
  • The operating conditions of wind turbine blades are complex and variable; sensor signals are easily corrupted by large amounts of noise, fault information is easily buried, judgment errors result, and robust fault features are difficult to extract.
  • During operation of the wind turbine, the various signal data are all acquired through sensors arranged on the blades.
  • The sensor layout, service life and measurement accuracy greatly affect the reliability of blade fault detection.
  • Patch-type optical fiber load sensors can be installed on the blades, and the signals collected during operation used to monitor blade cracks.
  • However, this approach overlooks the fact that the sensors' own performance is easily affected by environmental factors.
  • Sensors installed on the blades are prone to damage, which degrades the accuracy of the detection results and increases the detection cost.
  • Hutchinson et al. proposed a statistical-based image evaluation method based on Bayesian decision theory to detect the damage of concrete structures.
  • Cha et al. used deep learning neural networks to detect concrete cracks.
  • Wang et al. proposed a data-driven wind turbine blade damage detection framework for automatic crack detection based on images captured by drones by using an extended cascade classifier.
  • Wang et al. proposed a crack detection method using unsupervised learning with deep features.
  • Wind turbine blade images have complex background information, such as the sky around the turbine, forests and other wind power equipment.
  • Blade damage varies in size, shape and texture.
  • Defects are extremely small compared with the blade, and a general-purpose detector has difficulty locating them precisely.
  • Early blade cracks can be a few centimeters or less.
  • As a result, current detection methods perform poorly in practical applications in terms of detection accuracy (high false-alarm and missed-detection rates) and recognition capability (either damage cannot be localized, or specific damage types cannot be classified).
  • The technical problem addressed by the present invention is that China is a major country in the wind power generation industry and in equipment manufacturing, but not yet a strong one, and is at a disadvantage in the technologies and invention patents held in related fields. Moreover, China's major wind farms are gradually entering a period of high accident incidence, and the safety of wind power equipment in service has become a bottleneck for wind power development. Fault warning and safety assurance for the healthy operation of wind turbines urgently require theoretical and technical breakthroughs. Newly installed wind turbine blades lack detailed characteristic data and expert-level diagnosis and maintenance information under complex loads and harsh natural conditions, and are difficult to describe accurately with precise theoretical models; new theories and methods are urgently needed for health-state assessment, prediction and safety assurance.
  • The present invention introduces deep learning to develop an image recognition algorithm and an effective feature extraction algorithm reflecting the operating state of wind turbine blades, detects blade damage remotely and online, and gives an early warning before the blade fails, effectively avoiding accidents, reducing maintenance and operating costs, improving the reliability of wind turbine blades under complex and harsh geographical and meteorological conditions, ensuring long-term stable and reliable operation of the wind turbine, lowering operation and maintenance costs, and improving the economic benefits and market competitiveness of wind turbines.
  • A wind turbine blade image-based damage detection and localization method, based on a deep convolutional neural network, includes two processes: model training, and damage detection and localization;
  • the model training process includes:
  • step S103 using the sample database established in step S102, according to the classical AdaBoost Haar-like algorithm to train the cascade strong classifier;
  • step S104 building a deep convolutional neural network damage identification model, and using the sample database established in step S102 to train, adjust parameters and verify the convolutional neural network model to obtain a trained damage identification model;
  • the damage detection and localization process includes:
  • S202 Scale the damage target areas detected in step S201 to a uniform size and input them into the damage identification model trained in step S104, which determines the damage type of each area; because the suspected damage target areas detected in the previous step have different scales and sizes, they must be scaled to a uniform size for the next identification step.
  • The suspected areas scaled to a uniform size are input one by one into the damage identification model trained in step S104, which determines which damage type each suspected area belongs to. If a suspected area is judged not to belong to any damage type, it is excluded from the suspected areas (i.e., treated as a normal area);
  • S203 Output a result file, including information such as the position and type of the identified damage target area, and mark the damaged area and type with a box and a number in the corresponding position of the original wind turbine blade image.
  • The damage manually marked in S102 includes four types: glass fiber damage, cracks, skin damage and corrosion.
  • the deep convolutional neural network damage identification model built in S104 includes an input layer, a convolutional layer, a pooling layer and an output layer, without a fully connected layer.
  • the model greatly reduces the number of training parameters, while maintaining the model's excellent recognition ability, which enables the model to have higher training efficiency and better generalization ability.
  • The training step in S104 includes: iteratively updating the weights of the convolutional neural network model with gradient descent so that the output of the loss function decreases and the model's predictions approach the true values of the data, until the loss function converges toward 0; training and optimization of the model are then complete and the trained convolutional neural network model is obtained.
  • Step S201 is specifically: the wind turbine blade image to be inspected is rescaled several times at a given ratio, and at each scale a sliding window is translated horizontally or vertically with a fixed step.
  • The image area covered by the sliding window after each translation is taken as a detection window, the Haar-like features within the detection window are extracted, and the cascaded strong classifier trained in step S103 judges whether the area may contain damage.
  • The image is thus first screened for suspected damage target areas using Haar-like features and a shallow classifier, which greatly reduces the computational cost and lays the foundation for applying deep learning network identification in the next step; if the classifier judges that the detection window may contain damage, the area is marked with a box as a suspected damage target area, otherwise the area is skipped and the next area is judged; the entire image is traversed in this way to obtain a series of suspected damage target areas.
  • The invention discloses a wind turbine blade image-based damage detection method, based on a deep convolutional neural network, that can automatically interpret blade images captured by unmanned aerial vehicles and monitoring cameras.
  • Automatic interpretation of wind turbine blade images enables efficient and accurate identification and localization of multiple categories of blade damage, which supports blade damage assessment and early warning, reduces the number of unexpected wind turbine shutdowns caused by blade failures, and lowers wind turbine operation and maintenance costs.
  • the method can automatically interpret and process the surface images of wind turbine blades captured by unmanned aerial vehicles and monitoring cameras, so as to realize efficient detection and early warning of surface damage of wind turbine blades. Compared with the traditional method, there is no need to consider the signal interference during the operation of the blade, and there is no need to deploy sensors, which can realize the early damage detection of the blade;
  • This method implements a new deep convolutional neural network damage recognition model, which simplifies the traditional convolutional neural network structure while maintaining excellent recognition ability, giving the model higher training efficiency and better generalization.
  • Experimental results show that the recognition and classification accuracy of this model is significantly higher than that of traditional models such as VGG16, supporting the feasibility of recognizing multiple types of damage on wind turbine blade surfaces with this method.
  • This method designs a new damage detection workflow with a two-stage detection process.
  • Suspected damage target areas are first screened out using Haar-like features and a shallow classifier, and are then identified by the deep convolutional neural network damage identification model.
  • This prevents the recognition model from wasting time on the many areas that obviously contain no damage, so that detection time is concentrated on hard-to-recognize areas, greatly improving the detection efficiency for a single image and laying the foundation for rapid identification of wind turbine blade surface damage.
  • This method uses Haar-like features and shallow classifiers to perform multi-scale detection on the image to be tested, thereby realizing damage localization of various sizes from large to small.
  • Figure 1 is a schematic diagram of the model training process
  • Figure 2 is a sample image of a wind turbine blade surface
  • Figure 3 is a schematic diagram of the structure of the deep convolutional neural network damage recognition model
  • Figure 4 is the confusion matrix of the validation results
  • Figure 5 is a schematic diagram of the damage image detection process
  • Figure 6 is a sample image of a wind turbine blade surface to be inspected
  • Figure 7 shows the suspected damage area detection results for the image to be inspected
  • Figure 8 shows the final damage area detection results for the image to be inspected.
  • A wind turbine blade image-based damage detection and localization method, based on a deep convolutional neural network, includes two processes: model training, and damage detection and localization:
  • model training includes the following steps:
  • step S101 a monitoring camera is used to collect the surface image of the wind turbine blade.
  • the image data used in the embodiments of the present invention are all from a wind farm in eastern China, and include a total of 725 wind turbine blade surface images captured by high-resolution cameras.
  • Figure 2 is a sample image of the surface of the wind turbine blade collected at the wind farm site.
  • In step S102, the damage positions and types are manually marked on the wind turbine blade images, and image samples containing damaged areas (positive samples) and image samples of normal blade surface (negative samples) are cropped from the images according to the manual marking, establishing a sample database.
  • The manually marked damage includes four types: glass fiber damage, cracks, skin damage and corrosion. Glass fiber damage refers to severe damage to the blade resulting in delamination of fibers and laminates; a crack refers to rupture or debonding of the blade gel coat; skin damage refers to delamination, breakage and damage of the blade gel coat; and corrosion refers to corrosion of the blade leading edge.
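As an illustration of how the sample database in step S102 might be assembled, the sketch below crops manually annotated damage regions (positive samples) and normal-surface patches (negative samples) from a blade image. The label mapping, the 50×50 patch size and the function names are assumptions for illustration only, not part of the disclosed method.

```python
# Hypothetical sketch of building the S102 sample database: crop annotated damage
# regions (positive samples) and normal-surface patches (negative samples).
# The label mapping and the 50x50 patch size are illustrative assumptions.
import cv2

LABELS = {"normal": 0, "glass_fiber_damage": 1, "crack": 2, "skin_damage": 3, "corrosion": 4}

def crop_samples(image_path, annotations, size=50):
    """annotations: list of (x, y, w, h, label_name) tuples produced by manual marking."""
    image = cv2.imread(image_path)
    samples = []
    for x, y, w, h, name in annotations:
        patch = cv2.resize(image[y:y + h, x:x + w], (size, size))
        samples.append((patch, LABELS[name]))
    return samples
```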
  • step S103 using the sample database established in step S102, the cascading strong classifier is trained according to the classical AdaBoost Haar-like algorithm.
  • The specific algorithm training process is as follows: the image samples are first converted to grayscale, and the Haar-like features of each image are computed.
  • A strong classifier is then trained on the image samples with the AdaBoost algorithm.
  • The training data set is {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)} (where N is the number of training samples) and y_i ∈ {-1, 1}.
  • x_i is a training sample, and y_i = 1 indicates that the sample contains a damaged area.
  • The single-feature weak classifier f(x) thresholds one Haar-like feature h_i at the segmentation threshold t_k.
  • t_k is the segmentation threshold, with t_k = 0.5 × (h_k + h_{k+1}).
  • w_i is the weight of training sample i.
  • step S104 this example designs a deep convolutional neural network damage identification model
  • the input of the deep convolutional neural network model is a three-channel 50 ⁇ 50 image
  • the schematic diagram of the model structure is shown in FIG. 3 .
  • the established deep convolutional neural network model structure includes 4 feature extraction modules.
  • the first feature extraction module consists of 2 convolutional layers, each of which includes 64 convolutional kernels of size 3 ⁇ 3.
  • the second feature extraction module consists of 2 convolutional layers, each of which includes 128 convolutional kernels of size 3 ⁇ 3.
  • the third feature extraction module consists of 3 convolutional layers, each of which includes 256 convolution kernels of size 3 ⁇ 3.
  • the fourth feature extraction module consists of 1 convolutional layer, which includes 512 convolution kernels of size 3 ⁇ 3.
  • the convolution stride of the convolutional layers is all 1.
  • Each feature extraction module is linked by a maximum pooling layer, the pooling size of the pooling layer is 2 ⁇ 2, and the pooling operation step size is 2.
  • the image is output to the global maximum pooling layer after four feature extraction modules, and then output to the output layer after a layer of dropout.
  • the output layer is a one-dimensional vector containing 5 elements, and the values of the 5 elements respectively represent the confidence of the input image to be judged as five types of normal blade, glass fiber damage, crack, skin damage and corrosion.
  • the output layer uses the Softmax function for classification.
  • the activation function of the full model adopts the LReLU function, the Leaky Rate is set to 0.1, and the Dropout rate is set to 0.5.
  • The training process uses the Adam optimizer, the training batch size is between 6 and 24, the learning rate is between 10⁻³ and 10⁻⁴, and the batch size and learning rate are determined according to the cross-validation results.
  • the convolutional neural network model specifically includes an input layer, a convolutional layer, a pooling layer and an output layer, but does not include a fully connected layer.
  • the advantage is that the model greatly reduces the number of training parameters, while maintaining the model's excellent recognition ability, which enables the model to have higher training efficiency and better generalization ability.
  • the convolutional neural network model includes four groups of feature extraction modules including convolutional layers and pooling layers, with a total of 8 convolutional layers and 4 pooling layers.
  • the advantage is that the multi-layered convolution and pooling operations enable the model to capture robust features that are abstract and greatly different between classes in small-sized image samples, thus enabling the model to have better damage recognition performance.
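The architecture described above can be sketched roughly as follows. The layer counts and widths follow the text (2+2+3+1 convolutional layers with 64/128/256/512 kernels of size 3×3 and stride 1, 2×2 max pooling with stride 2, global max pooling, dropout of 0.5, LeakyReLU with a leaky rate of 0.1, and a five-element softmax output); everything else, including the use of Keras and the padding choice, is an assumption, and the five-way softmax is realized here as a small dense output layer.

```python
# Hedged Keras reconstruction of the described damage-identification network.
from tensorflow.keras import layers, models

def build_damage_model(input_shape=(50, 50, 3), num_classes=5):
    inputs = layers.Input(shape=input_shape)
    x = inputs
    # Four feature extraction modules: 2, 2, 3 and 1 conv layers with 64/128/256/512 kernels
    for n_layers, filters in [(2, 64), (2, 128), (3, 256), (1, 512)]:
        for _ in range(n_layers):
            x = layers.Conv2D(filters, 3, strides=1, padding="same")(x)
            x = layers.LeakyReLU(0.1)(x)
        x = layers.MaxPooling2D(pool_size=2, strides=2)(x)  # modules linked by max pooling
    x = layers.GlobalMaxPooling2D()(x)   # replaces fully connected layers
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)  # 5-element output layer
    return models.Model(inputs, outputs)
```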
  • step S104 the convolutional neural network model is trained, adjusted and verified by using the sample database established in step S102 to obtain a convolutional neural network model with recognition ability.
  • Training refers to: iteratively updating the weights of the convolutional neural network model with gradient descent so that the output of the loss function decreases and the model's predictions approach the true values of the data, until the loss function converges toward 0; training and optimization of the model are then complete and the trained convolutional neural network model is obtained.
  • Parameter adjustment refers to: using the cross-validation method to determine the parameters such as the optimal training batch size and learning rate of the model.
  • Validation refers to the performance of the model on the validation data, that is, the accuracy of model validation.
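A training and validation setup consistent with the description might look like the sketch below; the Adam optimizer, the 6-24 batch-size range and the 10⁻³-10⁻⁴ learning-rate range come from the text, while the concrete values, the loss function and the data variables (x_train, y_train, x_val, y_val) are placeholders.

```python
# Illustrative training/validation sketch for the model defined above.
import tensorflow as tf

model = build_damage_model()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # within the stated range
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# x_train, y_train, x_val, y_val: 50x50x3 patches and integer labels from the sample database
history = model.fit(x_train, y_train,
                    batch_size=16,              # chosen from the 6-24 range by cross-validation
                    epochs=100,
                    validation_data=(x_val, y_val))
val_loss, val_acc = model.evaluate(x_val, y_val)  # validation accuracy as described in the text
```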
  • FIG. 4 is a schematic diagram of a confusion matrix showing the verification results in an embodiment of the present invention.
  • The figure shows that the recognition model of the present invention can effectively distinguish normal blades from damaged blades, and in most image samples the model can also correctly distinguish all four damage types.
  • the above experimental results show that the trained convolutional neural network model can be used to identify and judge the surface damage of wind turbine blades.
  • damage detection and localization includes the following steps:
  • step S201 the image of the wind turbine blade to be detected is scaled multiple times according to a certain scale, and on a specific scale, the sliding window is moved horizontally or vertically according to a fixed step size (in this example, the step size is set to 10).
  • the image area covered by the sliding window after each translation is used as the detection window, and the Haar-like features in the detection window are extracted, and identified by the cascade strong classifier trained in step S103 to determine whether the area may contain a damaged area. If the model judges that the detection window may contain a damaged area, mark the area as a suspected damage target area with a box, otherwise skip this area to judge the next area.
  • the above method is used to traverse the entire image to obtain a series of suspected damaged target areas.
  • the detection result of the image proof after step S201 is shown in FIG. 7 , in which the detected suspected damage target area is marked with a box.
  • step S201 the image is firstly screened for suspected damage target areas based on Haar-like features and shallow classifiers, which greatly reduces the computational cost and lays a foundation for deploying deep learning network identification in the next step.
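The first-stage screening in steps S103/S201 can be approximated with OpenCV's Haar cascade machinery, which performs the same kind of multi-scale, fixed-step sliding-window search; the cascade file name and the scan parameters below are assumptions, and this stand-in is not necessarily identical to the classifier trained in the patent.

```python
# Stage 1 sketch: a trained Haar cascade scans the blade image at multiple scales
# and returns candidate (suspected) damage regions as (x, y, w, h) boxes.
import cv2

cascade = cv2.CascadeClassifier("blade_damage_cascade.xml")  # hypothetical trained cascade
image = cv2.imread("blade_to_inspect.jpg")                   # placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

candidates = cascade.detectMultiScale(gray,
                                      scaleFactor=1.2,   # rescaling ratio between scales
                                      minNeighbors=3,
                                      minSize=(20, 20))
```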
  • step S202 since the suspected damaged target areas detected in the previous step have different scales and sizes, these areas need to be scaled to a uniform size for the next step of identification (in this example, the suspected damaged target areas are all scaled to 50 ⁇ 50 size for identification).
  • The suspected areas scaled to a uniform size are input one by one into the damage identification model trained in step S104, which determines which damage type each suspected area belongs to. If a suspected area is judged not to belong to any damage type, it is excluded from the suspected areas (i.e., treated as a normal area).
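Continuing the sketches above, the second stage could crop each candidate box, resize it to 50×50, classify it with the trained network, and discard candidates predicted as normal; the variable names and the normalization step are illustrative assumptions.

```python
# Stage 2 sketch: classify each suspected region and keep only real damage.
import numpy as np

CLASS_NAMES = ["normal", "glass_fiber_damage", "crack", "skin_damage", "corrosion"]

detections = []
for (x, y, w, h) in candidates:
    patch = cv2.resize(image[y:y + h, x:x + w], (50, 50)).astype("float32") / 255.0
    probs = model.predict(patch[np.newaxis, ...], verbose=0)[0]
    label = int(np.argmax(probs))
    if label != 0:                       # exclude regions identified as normal blade surface
        detections.append((x, y, w, h, CLASS_NAMES[label]))
```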
  • In step S203, the detection results of the first two steps are collated and a result file is output, containing information such as the positions and types of the identified damage target areas; the damage areas and categories are marked with boxes and numbered labels at the corresponding positions in the original wind turbine blade image.
  • The detection and recognition results for the sample image after the above steps are shown in Figure 8, in which each identified damage target area is marked with a box and the damage category is written at the upper-left corner of the box. Comparing with Figure 7, after the damage identification model is applied, the normal blade areas detected as suspected areas are essentially excluded and only the real damage areas are identified.
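The S203 output step could then be rendered as in the following sketch, which draws each confirmed box with a numbered label on the original image and writes a simple result file; the file names and formatting are placeholders.

```python
# Output sketch for S203: annotate the original image and write a result file.
for i, (x, y, w, h, name) in enumerate(detections, start=1):
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.putText(image, f"{i}:{name}", (x, max(y - 5, 15)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
cv2.imwrite("blade_detection_result.jpg", image)

with open("blade_detection_result.txt", "w") as f:
    for i, (x, y, w, h, name) in enumerate(detections, start=1):
        f.write(f"{i}\t{name}\t{x},{y},{w},{h}\n")
```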

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Economics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Water Supply & Treatment (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

A wind turbine blade image-based damage detection and localization method. The method is based on a deep convolutional neural network, and comprises two processes, i.e., model training and damage detection and localization. A wind turbine blade image captured by an unmanned aerial vehicle or a monitoring camera can be interpreted automatically, so that multiple types of wind turbine blade damages can be identified and localized efficiently and accurately, the frequency of unexpected downtime of a wind turbine caused by a wind turbine blade failure is reduced, and the operation and maintenance costs of the wind turbine are reduced.

Description

Wind turbine blade image-based damage detection and localization method

Technical field:

The invention belongs to the technical field of fault detection, and in particular relates to a wind turbine blade image-based damage detection and localization method.

Background:
China is rich in wind energy resources, with about 4,350 GW of exploitable wind energy resources nationwide, and its wind energy reserves rank among the highest in the world. Wind-powered generators (hereinafter "wind turbines") are being deployed rapidly in China, and as the scale of deployment continues to expand in China and worldwide, monitoring their operating condition and maintaining their safety have attracted increasing attention. Wind turbines are installed in places with abundant wind resources, most of which are by the sea or on hilltops near the coast, where the environment is severe. Wind turbines are therefore usually exposed to changing and harsh environments such as high altitudes, deserts, the Gobi, and the open sea. Wind turbine blades are the main components of a wind turbine, and extreme conditions such as cold, hail, rain and snow, humidity, corrosion, sandstorms, vibration and high temperature rapidly wear them down. In addition, cracks develop in wind turbine blades under changing loads and fatigue stress during long-term operation, posing a serious safety hazard. According to statistics, blade damage is one of the main faults leading to wind turbine shutdown. Wind turbine blades are huge and installed at great heights; professional tools and trained personnel are required to access them, making manual overhaul and maintenance extremely difficult. As a result, among all wind turbine failures, blade failures have the highest repair cost and the longest repair time.

In recent years, the volume of published data and literature shows that fields such as wind turbine blade damage detection, assessment and prediction are becoming research hotspots for scientific research institutions and related enterprises at home and abroad. Scholars in China and elsewhere have extensively explored online damage detection methods for wind turbine blades; the current mainstream methods are acoustic emission detection and vibration detection.
(1) Acoustic emission detection

Acoustic emission testing evaluates the performance or structural integrity of wind turbine blades by receiving and analyzing the acoustic emission signals of the material. For example, patent CN103389341A discloses a wind turbine blade crack detection method in which an acoustic emission sensor is installed on the blade and the received acoustic emission signal is transmitted to an acoustic emission acquisition system; the sampling frequency, sampling length and filtering frequency of the signal are determined; the bandwidth parameter of the Morlet wavelet basis function is optimized on the basis of Shannon wavelet entropy to obtain a Morlet wavelet basis function matching the characteristics of the acoustic emission signals of propagating and initiating cracks; the reassigned scalogram of the acoustic emission signal is then computed to judge the crack state; and the propagation state of the crack fault is finally determined from the time-frequency characteristic parameters extracted from the crack acoustic emission signal. Patent CN107657110A discloses a fatigue damage evaluation method for large wind turbine blades in which an acoustic emission sensor is installed on the blade, the received acoustic emission signal is transmitted to an acoustic emission acquisition system and evaluated, and the fatigue level determined from the evaluation set is used to assess the fatigue damage state of the blade and determine its real-time condition.
(2) Vibration signal detection

Vibration signal detection measures the vibration signals of the structure to reflect the health status of wind turbine blades. For example, patent CN110568074A discloses a wind turbine blade crack localization method based on non-contact multi-point vibration measurement and the Hilbert transform: a vibration sensor is installed on the blade, the cracked blade is excited with a random signal, the nonlinear vibration response of the blade under random excitation is collected, and the Hilbert transform of the excitation-input and output-position vibration signals is used to determine the crack position. Patent CN109541028A discloses a wind turbine blade crack localization and detection method and system in which the vibration response signals of the blade with and without cracks are collected, the mutual information entropy and the change in the degree of nonlinearity of the vibration response before and after crack damage are computed from these signals, and the crack position is determined from the change.

However, in practice the above detection methods still have several shortcomings that are difficult to overcome: 1. The operating conditions of wind turbine blades are complex and variable; sensor signals are easily corrupted by large amounts of noise, fault information is easily buried, judgment errors result, and robust fault features are difficult to extract. 2. Early blade damage is hard to detect with methods based on acoustic or vibration signals; for example, when a crack is small or located near the blade tip, the changes in natural frequency and vibration response are slight and difficult for sensors to pick up. 3. During operation of the wind turbine, the various signal data are all acquired through sensors arranged on the blades, and the sensor layout, service life and measurement accuracy greatly affect the reliability of blade fault detection. For example, patch-type optical fiber load sensors can be installed on the blades and the signals collected during operation used to monitor blade cracks, but this approach overlooks the fact that the sensors' own performance is easily affected by environmental factors; under the complex operating conditions of a wind turbine, sensors installed on the blades are prone to damage, which degrades the accuracy of the detection results and increases the detection cost.
In recent years, wind farms have used drones and surveillance cameras to capture large numbers of high-resolution images of wind turbine blades, enabling remote, real-time monitoring of turbine operating conditions, and in particular of blade surface damage, which has greatly improved maintenance and operational efficiency. Compared with detection methods based on acoustic emission or vibration signals, wind turbine blade damage detection from images or video is clear and intuitive, is more sensitive to small defects that do not cause noticeable signal changes, and does not rely on sensors to acquire signal data, thereby overcoming the shortcomings of the above methods. However, for the large numbers of images captured by drones and surveillance cameras, the current practice is still manual screening of blade surface damage. In this situation it is of great significance to process the images automatically by computer and generate analysis results, saving labor costs and eliminating human error.

Over the past decade, the rapid development of computer vision and deep learning has promoted the application of image processing and object detection in industrial scenarios. Abroad, Hutchinson et al. proposed a statistics-based image evaluation method grounded in Bayesian decision theory to detect damage in concrete structures. Cha et al. used deep neural networks to detect concrete cracks. Wang et al. proposed a data-driven wind turbine blade damage detection framework that performs automatic crack detection on drone-captured images using an extended cascade classifier. Wang et al. also proposed a crack detection method using unsupervised learning with deep features. The challenges in using computer vision and deep learning to automatically process wind turbine blade images for damage detection and identification are: 1. Wind turbine blade images have complex background information, such as the sky around the turbine, forests and other wind power equipment; 2. Blade damage varies in size, shape and texture; 3. Compared with the size of the blade (tens of meters), defects are extremely small, and a general-purpose detector has difficulty locating them precisely; early blade cracks, for example, can be a few centimeters or less. For these reasons, current detection methods perform poorly in practical applications in terms of detection accuracy (high false-alarm and missed-detection rates) and recognition capability (either damage cannot be localized, or specific damage types cannot be classified).

Therefore, new wind turbine blade fault detection methods are needed that detect blade damage remotely and online and give an early warning before the blade fails, effectively avoiding accidents and reducing maintenance and operating costs.
Summary of the invention:

The technical problem addressed by the present invention is that China is a major country in the wind power generation industry and in equipment manufacturing, but not yet a strong one, and is at a disadvantage in the technologies and invention patents held in related fields. Moreover, China's major wind farms are gradually entering a period of high accident incidence, and the safety of wind power equipment in service has become a bottleneck for wind power development. Fault warning and safety assurance for the healthy operation of wind turbines urgently require theoretical and technical breakthroughs. Newly installed wind turbine blades lack detailed characteristic data and expert-level diagnosis and maintenance information under complex loads and harsh natural conditions, and are difficult to describe accurately with precise theoretical models; new theories and methods are urgently needed for assessing and predicting the health state of wind turbine blades in dynamic operating environments and for guaranteeing their safety.

To solve the above problems, the present invention introduces deep learning to develop an image recognition algorithm and an effective feature extraction algorithm reflecting the operating state of wind turbine blades, detects blade damage remotely and online, and gives an early warning before the blade fails. This effectively avoids accidents, reduces maintenance and operating costs, improves the reliability of wind turbine blades under complex and harsh geographical and meteorological conditions, ensures long-term stable and reliable operation of the wind turbine, lowers operation and maintenance costs, and improves the economic benefits and market competitiveness of wind turbines.
To achieve the above object, the present invention is realized through the following technical solution: a wind turbine blade image-based damage detection and localization method, based on a deep convolutional neural network, comprising two processes, model training and damage detection and localization;

The model training process includes:

S101. Collect surface images of wind turbine blades captured by drones or monitoring cameras;

S102. Manually mark the position and type of damage in the wind turbine blade images, and, according to the manual marking, crop from the images samples containing damaged areas (positive samples) and samples of normal blade surface (negative samples) to establish a sample database;

S103. Using the sample database established in step S102, train a cascaded strong classifier with the classical AdaBoost Haar-like algorithm;

S104. Build a deep convolutional neural network damage identification model, and use the sample database established in step S102 to train, tune and validate the model, obtaining a trained damage identification model;

The damage detection and localization process includes:

S201. Traverse the wind turbine blade image with a sliding window, and use the cascaded strong classifier trained in step S103 to judge whether the image may contain damaged areas;

S202. Scale the damage target areas detected in step S201 to a uniform size and input them into the damage identification model trained in step S104, which determines the damage type of each area. Because the suspected damage target areas detected in the previous step have different scales and sizes, they must be scaled to a uniform size for the next identification step. The suspected areas scaled to a uniform size are input one by one into the damage identification model trained in step S104, which determines which damage type each suspected area belongs to; if a suspected area is judged not to belong to any damage type, it is excluded from the suspected areas (i.e., treated as a normal area);

S203. Output a result file containing information such as the positions and types of the identified damage target areas, and mark the damage areas and categories with boxes and numbered labels at the corresponding positions in the original wind turbine blade image.
Further, the damage manually marked in S102 includes four types: glass fiber damage, cracks, skin damage and corrosion.
Further, the specific algorithm training process in S103 is as follows:

(1) First, the image samples are converted to grayscale, and the Haar-like features of each image are computed;

(2) A strong classifier is then trained on the image samples with the AdaBoost algorithm. Assume a training data set {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)}, where N is the number of training samples, x_i is a training sample, and y_i ∈ {-1, 1}, with y_i = 1 indicating that the training image sample contains a damaged area. Let h_i be one of the Haar-like features computed from training sample x; the single-feature weak classifier f(x) is

$$f(x)=\begin{cases}1, & h_i \ge t_k\\ -1, & h_i < t_k\end{cases}\qquad(1)$$

where t_k is the segmentation threshold, t_k = 0.5 × (h_k + h_{k+1}), and w_i is the weight of training sample i. For each iteration step m = 1, 2, ..., M, the optimal weak classifier is computed by

$$f_m=\arg\min_{f}\sum_{i=1}^{N} w_i\,\mathbf{1}\left(y_i\neq f(x_i)\right)\qquad(2)$$

At the end of each iteration, the strong classifier F(x) and the weights w_i are updated by

$$F(x)\leftarrow F(x)+f_m(x)\qquad(3)$$

$$w_i\leftarrow w_i\exp\left(-y_i f_m(x_i)\right)\qquad(4)$$

At the start of training, F(x) = 0, and the weight distribution of the training data is initialized as w_i = 1/N. After the required number of iterations has been performed, the final strong classifier C(x) can be expressed as

$$C(x)=\operatorname{sign}\left(\sum_{m=1}^{M} f_m(x)\right)\qquad(5)$$
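As a concrete illustration of equations (1)-(5), the following NumPy sketch trains single-feature threshold stumps with the additive update given above; the polarity term in the stump, the weight renormalization and the feature-matrix layout are assumptions added to make the example runnable, and the cascade structure and Haar-like feature extraction are omitted.

```python
# Minimal NumPy sketch of the AdaBoost loop in equations (1)-(5).
# H: (N samples x K Haar-like feature values), y: labels in {-1, +1}.
import numpy as np

def train_strong_classifier(H, y, n_rounds):
    N, K = H.shape
    w = np.full(N, 1.0 / N)                     # initial weight distribution w_i = 1/N
    F = np.zeros(N)                             # F(x) = 0 at the start of training
    stumps = []
    for _ in range(n_rounds):
        best = None
        for k in range(K):
            hk = np.sort(np.unique(H[:, k]))
            thresholds = 0.5 * (hk[:-1] + hk[1:])               # t_k = 0.5 * (h_k + h_{k+1})
            for t in thresholds:
                for polarity in (1, -1):                        # polarity is an added assumption
                    pred = np.where(H[:, k] >= t, polarity, -polarity)   # weak classifier, eq. (1)
                    err = np.sum(w * (pred != y))                        # weighted error, eq. (2)
                    if best is None or err < best[0]:
                        best = (err, k, t, polarity, pred)
        _, k, t, polarity, pred = best
        stumps.append((k, t, polarity))
        F += pred                               # F(x) <- F(x) + f_m(x), eq. (3)
        w *= np.exp(-y * pred)                  # w_i <- w_i exp(-y_i f_m(x_i)), eq. (4)
        w /= w.sum()                            # renormalization added for numerical stability
    return stumps

def strong_classify(stumps, h):
    """C(x) = sign(sum_m f_m(x)), eq. (5); h is one Haar-like feature vector."""
    score = sum(p if h[k] >= t else -p for k, t, p in stumps)
    return 1 if score >= 0 else -1
```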
Further, the deep convolutional neural network damage identification model built in S104 includes an input layer, convolutional layers, pooling layers and an output layer, with no fully connected layer. This greatly reduces the number of training parameters while maintaining the model's excellent recognition ability, giving the model higher training efficiency and better generalization.

Further, the deep convolutional neural network damage identification model built in S104 includes four feature extraction modules composed of convolutional and pooling layers, with a total of 8 convolutional layers and 4 pooling layers. The multi-layer convolution and pooling operations enable the model to capture abstract, highly class-discriminative and robust features from small image samples, giving the model better damage recognition performance.

Further, the training step in S104 includes: iteratively updating the weights of the convolutional neural network model with gradient descent so that the output of the loss function decreases and the model's predictions approach the true values of the data, until the loss function converges toward 0; training and optimization of the model are then complete and the trained convolutional neural network model is obtained.

Further, the parameter tuning step in S104 includes: using cross-validation to determine parameters such as the optimal training batch size and learning rate; the validation step evaluates the model's performance on the validation data, i.e., the validation accuracy.

Further, step S201 is specifically: the wind turbine blade image to be inspected is rescaled several times at a given ratio, and at each scale a sliding window is translated horizontally or vertically with a fixed step; the image area covered by the sliding window after each translation is taken as a detection window, the Haar-like features within the detection window are extracted, and the cascaded strong classifier trained in step S103 judges whether the area may contain damage. The image is thus first screened for suspected damage target areas using Haar-like features and a shallow classifier, which greatly reduces the computational cost and lays the foundation for applying deep learning network identification in the next step. If the classifier judges that the detection window may contain damage, the area is marked with a box as a suspected damage target area; otherwise the area is skipped and the next area is judged. The entire image is traversed in this way to obtain a series of suspected damage target areas.
Aiming at the current lack of mature technology for wind turbine blade damage detection and localization usable in engineering practice, the present invention discloses a wind turbine blade image-based damage detection method based on a deep convolutional neural network that automatically interprets blade images captured by drones and monitoring cameras, efficiently and accurately identifying and localizing multiple categories of wind turbine blade damage. It enables blade damage assessment and early warning, reduces the number of unexpected wind turbine shutdowns caused by blade failures, and lowers wind turbine operation and maintenance costs. The solution offers fast recognition, high accuracy, a fully automated process and a low operating threshold, overcoming the shortcomings of traditional, manually performed methods, which are inefficient, error-prone, time-consuming and labor-intensive. Domestic research results in this area have rarely been reported; the present invention fills the gap in domestic wind turbine blade image damage detection and identification methods and is of great significance for the safe production and development of the national wind power industry. The beneficial effects of the present invention are specifically:

(1) The method automatically interprets and processes blade surface images captured by drones and monitoring cameras, achieving efficient detection and early warning of blade surface damage. Compared with traditional methods, it does not need to consider signal interference during blade operation or deploy sensors, and it can detect early blade damage;

(2) The method uses computer vision and deep learning algorithms to automatically identify the type of blade surface damage and locate it in the image without manual assistance, greatly saving manpower and material resources, saving time, and reducing the operation and maintenance costs of wind farms;

(3) The method implements a new deep convolutional neural network damage recognition model that simplifies the traditional convolutional neural network structure while maintaining excellent recognition ability, giving the model higher training efficiency and better generalization. Experimental results show that the recognition and classification accuracy of this model is significantly higher than that of traditional models such as VGG16, supporting the feasibility of recognizing multiple types of damage on blade surfaces with this method;

(4) The method designs a new damage detection workflow with a two-stage detection process: suspected damage target areas are first screened out using Haar-like features and a shallow classifier, and are then identified by the deep convolutional neural network damage identification model. This prevents the recognition model from wasting time on the many areas that obviously contain no damage, so that detection time is concentrated on hard-to-recognize areas, greatly improving the detection efficiency for a single image and laying the foundation for rapid identification of blade surface damage;

(5) The method applies Haar-like features and a shallow classifier to multi-scale detection of the image under test, achieving damage localization across a range of sizes from large to small.
Description of Drawings
Figure 1 is a schematic diagram of the model training process;
Figure 2 is a sample image of a wind turbine blade surface;
Figure 3 is a schematic diagram of the structure of the deep convolutional neural network damage recognition model,
in which: 1 convolutional layer, 2 pooling layer, 3 global max pooling layer, 4 Dropout operation, 5 output layer;
Figure 4 is the confusion matrix of the validation results;
Figure 5 is a schematic diagram of the damage image detection process;
Figure 6 is a sample image of a wind turbine blade surface to be detected;
Figure 7 is the detection result of suspected damage regions in the image to be detected;
Figure 8 is the final damage region detection result of the image to be detected.
Detailed Description of Embodiments:
To make the purposes, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Example 1:
A wind turbine blade image damage detection and localization method, based on a deep convolutional neural network, comprising two processes, model training and damage detection and localization:
As shown in Figure 1, model training comprises the following steps:
In step S101, wind turbine blade surface images are collected with monitoring cameras. The image data used in this embodiment come from a wind farm in eastern China and comprise 725 blade surface images captured by high-resolution cameras. Figure 2 shows a sample blade surface image collected at the wind farm site.
In step S102, the position and type of damage are manually annotated on the wind turbine blade images, and, according to the annotations, image samples containing damage regions (positive samples) and image samples of normal blade surfaces (negative samples) are cropped from the images to build a sample database. The manually annotated damage comprises four types: glass fiber damage, cracks, skin damage, and erosion. Glass fiber damage refers to severe blade damage in which the fibers and laminate delaminate; a crack refers to cracking or debonding of the blade gel coat; skin damage refers to delamination, breakage, and damage of the blade gel coat; erosion refers to erosion of the blade leading edge.
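A minimal sketch of how such a sample database could be assembled from manual annotations is given below. The annotation file format (a CSV with image path, bounding box, and label) and all paths are illustrative assumptions, not part of the patent.

# Hypothetical sketch: cut positive (damage) and negative (normal) patches
# from manually annotated blade images. The CSV columns used here
# (image_path, x, y, w, h, label) are assumptions for illustration only.
import csv
import os
import cv2

DAMAGE_LABELS = {"fiber_damage", "crack", "skin_damage", "erosion"}  # four types in S102

def build_sample_database(annotation_csv, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    with open(annotation_csv, newline="") as f:
        for i, row in enumerate(csv.DictReader(f)):
            img = cv2.imread(row["image_path"])
            x, y, w, h = (int(row[k]) for k in ("x", "y", "w", "h"))
            patch = img[y:y + h, x:x + w]
            # damage regions become positive samples, normal surface regions negatives
            label = row["label"] if row["label"] in DAMAGE_LABELS else "normal"
            cv2.imwrite(os.path.join(out_dir, f"{label}_{i}.png"), patch)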
In step S103, the cascaded strong classifier is trained with the sample database built in step S102, following the classical AdaBoost Haar-like algorithm. The specific training procedure is as follows:
(1) First, the image samples are converted to grayscale images, and the Haar-like features of each image are computed.
(2) A strong classifier for the image samples is trained with the AdaBoost algorithm. Assume a training data set {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)}, where N is the number of training samples,
Figure PCTCN2020125752-appb-000006
and y_i ∈ {-1, 1}; x_i is a training sample, and y_i = 1 indicates that the training image sample contains a damage region. Let h_i be one of the Haar-like features computed from a training sample x; the single-feature weak classifier f(x) is computed as:
Figure PCTCN2020125752-appb-000007
where t_k is the splitting threshold, t_k = 0.5 × (h_k + h_{k+1}), and w_i is the weight of a training sample. For each iteration step m = 1, 2, ..., M, the optimal weak classifier is obtained from:
Figure PCTCN2020125752-appb-000008
At the end of each iteration, the strong classifier F(x) and the weights w_i are updated by
F(x) ← F(x) + f_m(x)                  (3)
w_i ← w_i exp(-y_i f_m(x_i))                       (4)
At the start of training, F(x) = 0, and the weight distribution of the training data is initialized by
Figure PCTCN2020125752-appb-000009
After the required iterations have been performed, the final strong classifier C(x) can be expressed as:
Figure PCTCN2020125752-appb-000010
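The following is a hedged sketch of this training loop. The weak learners are decision stumps on single Haar-like feature columns with midpoint thresholds t_k = 0.5(h_k + h_{k+1}); the stump form, the uniform weight initialization, and the absence of weight renormalization are assumptions made where the patent's own formulas appear only as formula images, while the updates follow equations (3) and (4) as stated in the text.

# Assumed sketch of the AdaBoost loop; stump form and initialization are assumptions.
import numpy as np

def train_adaboost(H, y, n_rounds=50):
    """H: (N, K) matrix of Haar-like feature values, y: labels in {-1, +1}."""
    N, K = H.shape
    w = np.full(N, 1.0 / N)            # assumed uniform initial weight distribution
    stumps = []                        # (feature index, threshold, polarity)
    for _ in range(n_rounds):
        best = None
        for k in range(K):
            sorted_h = np.sort(H[:, k])
            thresholds = 0.5 * (sorted_h[:-1] + sorted_h[1:])   # midpoints, as t_k above
            for t in thresholds:
                for p in (1, -1):
                    pred = p * np.where(H[:, k] > t, 1, -1)
                    err = np.sum(w * (pred != y))                # weighted error
                    if best is None or err < best[0]:
                        best = (err, k, t, p)
        _, k, t, p = best
        f_m = p * np.where(H[:, k] > t, 1, -1)
        stumps.append((k, t, p))
        w = w * np.exp(-y * f_m)        # equation (4); renormalization often added in practice
    return stumps

def strong_classifier(stumps, h):
    """Sign of the summed weak responses for one feature vector h, per the final C(x)."""
    score = sum(p * (1 if h[k] > t else -1) for k, t, p in stumps)
    return 1 if score > 0 else -1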
In step S104, this example designs a deep convolutional neural network damage recognition model. The input of the model is a three-channel 50×50 image; the model structure is shown schematically in Figure 3. The network comprises four feature extraction modules. The first module consists of 2 convolutional layers, each with 64 kernels of size 3×3. The second module consists of 2 convolutional layers, each with 128 kernels of size 3×3. The third module consists of 3 convolutional layers, each with 256 kernels of size 3×3. The fourth module consists of 1 convolutional layer with 512 kernels of size 3×3. All convolutional layers use a stride of 1. The feature extraction modules are linked by max pooling layers with a pooling size of 2×2 and a stride of 2. After passing through the four feature extraction modules, the feature maps are fed to a global max pooling layer, then through a Dropout layer, and finally to the output layer. The output layer is a one-dimensional vector of 5 elements whose values represent the confidence that the input image belongs to each of the five classes: normal blade, glass fiber damage, crack, skin damage, and erosion. The output layer uses the Softmax function for classification. The LReLU activation function is used throughout the model, with the leaky rate set to 0.1 and the Dropout rate set to 0.5. The training process uses the Adam optimizer, with a training batch size between 6 and 24 and a learning rate between 10^-3 and 10^-4, both determined from cross-validation results.
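A sketch of this architecture written with tf.keras is given below. The layer counts, kernel sizes, leaky rate, and dropout rate follow the text; the "same" padding and the realization of the 5-way softmax output as a small dense classifier after global max pooling are implementation assumptions, since the text describes only a 5-element softmax output vector.

# Assumed tf.keras sketch of the damage recognition network described above.
import tensorflow as tf
from tensorflow.keras import layers

def build_model(num_classes=5, leaky=0.1):
    blocks = [(2, 64), (2, 128), (3, 256), (1, 512)]   # four feature extraction modules
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(50, 50, 3)))        # three-channel 50x50 input
    for n_convs, filters in blocks:
        for _ in range(n_convs):
            model.add(layers.Conv2D(filters, 3, strides=1, padding="same"))
            model.add(layers.LeakyReLU(leaky))           # LReLU with leaky rate 0.1
        model.add(layers.MaxPooling2D(pool_size=2, strides=2))
    model.add(layers.GlobalMaxPooling2D())               # global max pooling
    model.add(layers.Dropout(0.5))
    # 5-element softmax output; realized here as a small dense classifier (assumption)
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model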
The features of this model are:
(1) The convolutional neural network model comprises an input layer, convolutional layers, pooling layers, and an output layer, but contains no fully connected layer. This greatly reduces the number of trainable parameters while retaining excellent recognition ability, giving the model higher training efficiency and better generalization.
(2) The model comprises four feature extraction modules of convolutional and pooling layers, for a total of 8 convolutional layers and 4 pooling layers. The multi-layer convolution and pooling operations allow the model to capture robust, abstract features with large between-class differences from small image samples, giving the model better damage recognition performance.
In step S104, the convolutional neural network model is trained, tuned, and validated with the sample database built in step S102 to obtain a convolutional neural network model with recognition ability. Training means iteratively updating the model weights with a gradient descent method so that the output of the loss function decreases and the model predictions approach the true values of the data, until the loss function converges to 0, completing the training and optimization of the model and obtaining the trained convolutional neural network model. Tuning means determining parameters such as the optimal training batch size and learning rate by cross-validation. Validation means evaluating the model's performance on validation data, i.e., obtaining the model's validation accuracy.
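A hedged sketch of the training and tuning step follows: Adam optimization with the batch size chosen between 6 and 24 and the learning rate between 10^-3 and 10^-4 by k-fold cross-validation. The specific grid, the number of epochs, the loss name, and the use of scikit-learn's KFold are assumptions.

# Assumed cross-validation sketch; build_model refers to the architecture sketch above.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold

def cross_validate(X, y, batch_sizes=(6, 12, 24), learning_rates=(1e-3, 1e-4)):
    best = (None, -1.0)
    for bs in batch_sizes:
        for lr in learning_rates:
            scores = []
            for train_idx, val_idx in KFold(n_splits=5, shuffle=True).split(X):
                model = build_model()
                model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                              loss="sparse_categorical_crossentropy",
                              metrics=["accuracy"])
                model.fit(X[train_idx], y[train_idx], batch_size=bs,
                          epochs=30, verbose=0)          # epoch count is an assumption
                scores.append(model.evaluate(X[val_idx], y[val_idx], verbose=0)[1])
            if np.mean(scores) > best[1]:
                best = ((bs, lr), float(np.mean(scores)))
    return best   # ((batch_size, learning_rate), mean validation accuracy)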
Validation data are drawn by random sampling from the sample database generated in step S102, the trained recognition model is validated, and the results are compared with the classical convolutional neural network model VGG16 and the traditional classification model SVM; the comparison is given in Table 1. According to Table 1, the recognition accuracy of the recognition model of this example reaches 97%, a significant improvement over SVM (88%) and VGG16 (91%).
Table 1 Comparison of model validation results
Figure PCTCN2020125752-appb-000011
Figure 4 is a schematic confusion matrix of the validation results in this embodiment. It shows that the recognition model of this example can effectively distinguish normal blades from damaged blades, and that the model correctly distinguishes all four damage types in most image samples. These experimental results show that the trained convolutional neural network model can be used to identify and judge wind turbine blade surface damage.
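A minimal sketch of this validation step is shown below; the accuracy and the confusion matrix of Figure 4 are computed on a randomly sampled validation split, with scikit-learn assumed for the metrics and the class order taken from the output-layer description above.

# Minimal validation sketch; class order is assumed from the model description.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

CLASS_NAMES = ["normal", "fiber_damage", "crack", "skin_damage", "erosion"]

def validate(model, X_val, y_val):
    y_pred = np.argmax(model.predict(X_val, verbose=0), axis=1)
    print("accuracy:", accuracy_score(y_val, y_pred))
    print(confusion_matrix(y_val, y_pred, labels=range(len(CLASS_NAMES))))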
As shown in Figure 5, damage detection and localization comprise the following steps:
In step S201, the wind turbine blade image to be detected is rescaled several times by a fixed ratio; at each scale, a sliding window is translated horizontally and vertically with a fixed step size (set to 10 in this example). The image region covered by the window after each translation serves as a detection window; the Haar-like features within the detection window are extracted and passed to the cascaded strong classifier trained in step S103 to judge whether the region may contain damage. If the model judges that the detection window may contain damage, the region is marked with a box as a suspected damage target region; otherwise the region is skipped and the next region is examined. The whole image is traversed in this way to obtain a series of suspected damage target regions. The detection result of the sample image after step S201 is shown in Figure 7, in which the detected suspected damage target regions are marked with boxes.
In step S201, the image is first screened for suspected damage target regions based on the Haar-like features and the shallow classifier, which greatly reduces the computational cost and lays the foundation for deploying the deep learning network for recognition in the next step.
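A hedged sketch of step S201 is given below: an image pyramid is scanned with a fixed-stride sliding window (stride 10, as in this example) and each grayscale window is screened by the cascade. The window size, the scale factor, and the interface of the cascade (a callable `cascade(window) -> bool` standing in for the Haar-like feature extraction plus the cascaded strong classifier) are assumptions.

# Assumed multi-scale sliding-window screening; the cascade callable is a placeholder.
import cv2

def propose_regions(image, cascade, win=50, stride=10, scale=1.25):
    proposals = []                    # (x, y, w, h) in original-image coordinates
    factor = 1.0
    pyramid = image.copy()
    while min(pyramid.shape[:2]) >= win:
        gray = cv2.cvtColor(pyramid, cv2.COLOR_BGR2GRAY)
        for y in range(0, gray.shape[0] - win + 1, stride):
            for x in range(0, gray.shape[1] - win + 1, stride):
                window = gray[y:y + win, x:x + win]
                if cascade(window):   # Haar-like features + cascaded strong classifier
                    proposals.append((int(x * factor), int(y * factor),
                                      int(win * factor), int(win * factor)))
        factor *= scale               # next, coarser pyramid level
        pyramid = cv2.resize(image, None, fx=1.0 / factor, fy=1.0 / factor)
    return proposals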
In step S202, because the suspected damage target regions detected in the previous step have different proportions and sizes, they are rescaled to a uniform size for the next recognition step (in this example all suspected damage target regions are rescaled to 50×50). The uniformly resized suspected regions are fed one by one into the damage recognition model trained in step S104; after model recognition, the damage type of each suspected region is determined. If a suspected region is judged not to belong to any damage type, it is excluded from the suspected regions (i.e., treated as a normal region).
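A sketch of step S202 follows: each proposed region is resized to 50×50, normalized, and classified by the trained network, and regions predicted as normal are discarded. The class order and the pixel normalization are assumptions.

# Assumed region classification for step S202.
import cv2
import numpy as np

CLASS_NAMES = ["normal", "fiber_damage", "crack", "skin_damage", "erosion"]

def classify_regions(image, proposals, model):
    detections = []
    for (x, y, w, h) in proposals:
        roi = cv2.resize(image[y:y + h, x:x + w], (50, 50)).astype("float32") / 255.0
        probs = model.predict(roi[np.newaxis], verbose=0)[0]
        label = CLASS_NAMES[int(np.argmax(probs))]
        if label != "normal":          # suspected region confirmed as damage
            detections.append((x, y, w, h, label, float(np.max(probs))))
    return detections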
In step S203, the detection results of the previous two steps are consolidated and a result file is output, including the position and type of each identified damage target region, and the damage regions and their categories are marked with boxes and numbers at the corresponding positions in the original wind turbine blade image. The detection and recognition result of the sample image after the above steps is shown in Figure 8, in which the identified damage target regions are marked with boxes and the damage category is written at the upper-left corner of each box. Comparing with Figure 7, it can be seen that after recognition by the damage recognition model, the normal blade regions that had been detected as suspected regions are essentially excluded, and only the true damage regions are identified.
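A sketch of step S203 is shown below: the confirmed detections are written to a result file and drawn on the original blade image with numbered, labelled boxes. The JSON output format and file names are assumptions; the patent specifies only that position and type information are output and annotated on the image.

# Assumed result export and annotation for step S203.
import cv2
import json

def export_results(image, detections, out_image="detections.png", out_file="detections.json"):
    records = []
    for i, (x, y, w, h, label, conf) in enumerate(detections, start=1):
        records.append({"id": i, "type": label, "bbox": [x, y, w, h], "confidence": conf})
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)      # box
        cv2.putText(image, f"{i}:{label}", (x, max(y - 5, 0)),            # number and type
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    cv2.imwrite(out_image, image)
    with open(out_file, "w") as f:
        json.dump(records, f, indent=2)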

Claims (9)

  1. A wind turbine blade image damage detection and localization method, characterized in that it is based on a deep convolutional neural network and comprises two processes, model training and damage detection and localization;
    wherein the model training process comprises:
    S101, collecting wind turbine blade surface images captured by unmanned aerial vehicles or monitoring cameras;
    S102, manually marking the position and type of damage in the wind turbine blade images, and, according to the manual marking, cropping from the images image samples containing damage regions, i.e., positive samples, and image samples of normal blade surfaces, i.e., negative samples, to build a sample database;
    S103, training a cascaded strong classifier with the sample database built in step S102 according to the classical AdaBoost Haar-like algorithm;
    S104, building a deep convolutional neural network damage recognition model, and training, tuning, and validating the convolutional neural network model with the sample database built in step S102 to obtain a trained damage recognition model;
    the damage detection and localization process comprises:
    S201, traversing the wind turbine blade image with a sliding window method, and judging, by the cascaded strong classifier trained in step S103, whether the image may contain damage regions;
    S202, rescaling the damage target regions detected in step S201 to a uniform size, feeding them into the damage recognition model trained in step S104, and determining, after model recognition, which damage type each region belongs to;
    S203, outputting a result file including the position and type of the identified damage target regions, and marking the damage regions and their categories with boxes and numbers at the corresponding positions in the original wind turbine blade image.
  2. The wind turbine blade image damage detection and localization method according to claim 1, characterized in that the manually marked damage in S102 comprises four types: glass fiber damage, cracks, skin damage, and erosion.
  3. The wind turbine blade image damage detection and localization method according to claim 1, characterized in that the specific algorithm training procedure in S103 is:
    (1) first, converting the image samples into grayscale images and then computing the Haar-like features of each image;
    (2) training a strong classifier for the image samples with the AdaBoost algorithm; assuming a training data set {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)}, where N is the number of training samples,
    Figure PCTCN2020125752-appb-100001
    and y_i ∈ {-1, 1}; x_i is a training sample, and y_i = 1 indicates that the training image sample contains a damage region; letting h_i be one of the Haar-like features computed from a training sample x, the single-feature weak classifier f(x) is computed as:
    Figure PCTCN2020125752-appb-100002
    where t_k is the splitting threshold, t_k = 0.5 × (h_k + h_{k+1}), and w_i is the weight of a training sample; for each iteration step m = 1, 2, ..., M, the optimal weak classifier is obtained from:
    Figure PCTCN2020125752-appb-100003
    at the end of each iteration, the strong classifier F(x) and the weights w_i are updated by
    F(x) ← F(x) + f_m(x)    (3)
    w_i ← w_i exp(-y_i f_m(x_i))    (4)
    at the start of training, F(x) = 0, and the weight distribution of the training data is initialized by
    Figure PCTCN2020125752-appb-100004
    after the required iterations have been performed, the final strong classifier C(x) can be expressed as:
    Figure PCTCN2020125752-appb-100005
  4. The wind turbine blade image damage detection and localization method according to claim 1, characterized in that the deep convolutional neural network damage recognition model built in S104 comprises an input layer, convolutional layers, pooling layers, and an output layer, without a fully connected layer.
  5. The wind turbine blade image damage detection and localization method according to claim 1 or 4, characterized in that the deep convolutional neural network damage recognition model built in S104 comprises four feature extraction modules of convolutional and pooling layers, for a total of 8 convolutional layers and 4 pooling layers.
  6. The wind turbine blade image damage detection and localization method according to claim 1, characterized in that the training step in S104 comprises: iteratively updating the convolutional neural network model weights with a gradient descent method so that the output of the model loss function decreases and the model predictions approach the true values of the data, until the loss function converges to 0, completing the training and optimization of the model and obtaining the trained convolutional neural network model.
  7. The wind turbine blade image damage detection and localization method according to claim 1, characterized in that the training step in S104 comprises: iteratively updating the convolutional neural network model weights with a gradient descent method so that the output of the model loss function decreases and the model predictions approach the true values of the data, until the loss function converges to 0, completing the training and optimization of the model and obtaining the trained convolutional neural network model.
  8. The wind turbine blade image damage detection and localization method according to claim 1, characterized in that the tuning step in S104 comprises: determining parameters such as the optimal training batch size and learning rate of the model by cross-validation; and the validation step comprises: evaluating the model's performance on validation data, i.e., obtaining the model's validation accuracy.
  9. The wind turbine blade image damage detection and localization method according to claim 1, characterized in that step S201 is specifically: rescaling the wind turbine blade image to be detected several times by a fixed ratio; at each scale, translating a sliding window horizontally or vertically with a fixed step size; taking the image region covered by the sliding window after each translation as a detection window, extracting the Haar-like features within the detection window, and judging, by the cascaded strong classifier trained in step S103, whether the region may contain damage; if the model judges that the detection window may contain damage, marking the region with a box as a suspected damage target region, otherwise skipping the region and examining the next region; and traversing the whole image in this way to obtain a series of suspected damage target regions.
PCT/CN2020/125752 2020-10-15 2020-11-02 Wind turbine blade image-based damage detection and localization method WO2022077605A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011101812.7A CN112233091B (en) 2020-10-15 2020-10-15 Wind turbine blade image damage detection and positioning method
CN202011101812.7 2020-10-15

Publications (1)

Publication Number Publication Date
WO2022077605A1 true WO2022077605A1 (en) 2022-04-21

Family

ID=74113081

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/125752 WO2022077605A1 (en) 2020-10-15 2020-11-02 Wind turbine blade image-based damage detection and localization method

Country Status (2)

Country Link
CN (1) CN112233091B (en)
WO (1) WO2022077605A1 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782400A (en) * 2022-05-17 2022-07-22 东风本田发动机有限公司 Method, device, equipment, medium and program product for detecting slag point of metal material
CN115096891A (en) * 2022-05-28 2022-09-23 国营芜湖机械厂 Intelligent inspection method for aero-engine blade
CN115345072A (en) * 2022-08-12 2022-11-15 中山大学 Method and system for predicting impact damage of fan blade and readable storage medium
CN116306231A (en) * 2023-02-06 2023-06-23 大连理工大学 Adhesive joint structure debonding damage identification method and device based on ultrasonic guided wave deep learning
CN116416578A (en) * 2022-12-02 2023-07-11 中国电力工程顾问集团有限公司 Method and device for detecting damage of aerial umbrella cover of high-altitude wind power
CN116503612A (en) * 2023-06-26 2023-07-28 山东大学 Fan blade damage identification method and system based on multitasking association
CN116704266A (en) * 2023-07-28 2023-09-05 国网浙江省电力有限公司信息通信分公司 Power equipment fault detection method, device, equipment and storage medium
CN116883391A (en) * 2023-09-05 2023-10-13 中国科学技术大学 Two-stage distribution line defect detection method based on multi-scale sliding window
CN117232577A (en) * 2023-09-18 2023-12-15 杭州奥克光电设备有限公司 Optical cable distributing box bearing interior monitoring method and system and optical cable distributing box
CN117237367A (en) * 2023-11-16 2023-12-15 江苏星火汽车部件制造有限公司 Spiral blade thickness abrasion detection method and system based on machine vision
WO2024023322A1 (en) * 2022-07-28 2024-02-01 Lm Wind Power A/S Method for performing a maintenance or repair of a rotor blade of a wind turbine
CN117541640A (en) * 2024-01-09 2024-02-09 西南科技大学 Method, equipment and medium for judging uniformity of aerodynamic flow field of cascade test oil flow diagram

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950634B (en) * 2021-04-22 2023-06-30 内蒙古电力(集团)有限责任公司内蒙古电力科学研究院分公司 Unmanned aerial vehicle inspection-based wind turbine blade damage identification method, equipment and system
CN113640297A (en) * 2021-06-30 2021-11-12 华北电力大学 Deep learning-based online blade damage detection method for double-impeller wind driven generator
CN113657193A (en) * 2021-07-27 2021-11-16 中铁工程装备集团有限公司 Segment damage detection method and system based on computer vision and shield machine
CN114004982A (en) * 2021-10-27 2022-02-01 中国科学院声学研究所 Acoustic Haar feature extraction method and system for underwater target recognition
CN114215702B (en) * 2021-12-07 2024-02-23 北京智慧空间科技有限责任公司 Fan blade fault detection method and system
CN114862796A (en) * 2022-05-07 2022-08-05 北京卓翼智能科技有限公司 A unmanned aerial vehicle for fan blade damage detects
CN115239034B (en) * 2022-09-26 2022-11-29 北京科技大学 Method and system for predicting early defects of wind driven generator blade
CN115564740B (en) * 2022-10-17 2023-06-20 风脉能源(武汉)股份有限公司 Fan blade defect positioning method and system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190180150A1 (en) * 2017-12-13 2019-06-13 Bossa Nova Robotics Ip, Inc. Color Haar Classifier for Retail Shelf Label Detection
CN108090512A (en) * 2017-12-15 2018-05-29 佛山市厚德众创科技有限公司 A kind of robust AdaBoost grader construction methods based on Ransac algorithms
CN110314854B (en) * 2019-06-06 2021-08-10 苏州市职业大学 Workpiece detecting and sorting device and method based on visual robot
CN110261394B (en) * 2019-06-24 2022-09-16 内蒙古工业大学 Online real-time diagnosis system and method for damage of fan blade
CN110610492B (en) * 2019-09-25 2023-03-21 空气动力学国家重点实验室 Method and system for identifying external damage of full-size blade of in-service fan, storage medium and terminal
CN111122705B (en) * 2019-12-26 2023-01-03 中国科学院工程热物理研究所 Ultrasonic nondestructive testing method for wind turbine blade

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9652839B2 (en) * 2013-03-15 2017-05-16 Digital Wind Systems, Inc. System and method for ground based inspection of wind turbine blades
CN107154037A (en) * 2017-04-20 2017-09-12 西安交通大学 Fan blade fault recognition method based on depth level feature extraction
CN107144569A (en) * 2017-04-27 2017-09-08 西安交通大学 The fan blade surface defect diagnostic method split based on selective search
CN108416294A (en) * 2018-03-08 2018-08-17 南京天数信息科技有限公司 A kind of fan blade fault intelligent identification method based on deep learning
CN111612030A (en) * 2020-03-30 2020-09-01 华电电力科学研究院有限公司 Wind turbine generator blade surface fault identification and classification method based on deep learning
CN111696075A (en) * 2020-04-30 2020-09-22 航天图景(北京)科技有限公司 Intelligent fan blade defect detection method based on double-spectrum image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHOU ZIFENG: "Research on Blade Surface Damage Detection of Wind Turbine Based on Computer Vision", MASTER THESIS, TIANJIN POLYTECHNIC UNIVERSITY, CN, no. 1, 15 January 2020 (2020-01-15), CN , XP055920677, ISSN: 1674-0246 *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782400A (en) * 2022-05-17 2022-07-22 东风本田发动机有限公司 Method, device, equipment, medium and program product for detecting slag point of metal material
CN115096891A (en) * 2022-05-28 2022-09-23 国营芜湖机械厂 Intelligent inspection method for aero-engine blade
CN115096891B (en) * 2022-05-28 2024-05-07 国营芜湖机械厂 Intelligent inspection method for aero-engine blades
WO2024023322A1 (en) * 2022-07-28 2024-02-01 Lm Wind Power A/S Method for performing a maintenance or repair of a rotor blade of a wind turbine
CN115345072A (en) * 2022-08-12 2022-11-15 中山大学 Method and system for predicting impact damage of fan blade and readable storage medium
CN116416578A (en) * 2022-12-02 2023-07-11 中国电力工程顾问集团有限公司 Method and device for detecting damage of aerial umbrella cover of high-altitude wind power
CN116306231A (en) * 2023-02-06 2023-06-23 大连理工大学 Adhesive joint structure debonding damage identification method and device based on ultrasonic guided wave deep learning
CN116306231B (en) * 2023-02-06 2024-01-23 大连理工大学 Adhesive joint structure debonding damage identification method and device based on ultrasonic guided wave deep learning
CN116503612B (en) * 2023-06-26 2023-11-24 山东大学 Fan blade damage identification method and system based on multitasking association
CN116503612A (en) * 2023-06-26 2023-07-28 山东大学 Fan blade damage identification method and system based on multitasking association
CN116704266B (en) * 2023-07-28 2023-10-31 国网浙江省电力有限公司信息通信分公司 Power equipment fault detection method, device, equipment and storage medium
CN116704266A (en) * 2023-07-28 2023-09-05 国网浙江省电力有限公司信息通信分公司 Power equipment fault detection method, device, equipment and storage medium
CN116883391A (en) * 2023-09-05 2023-10-13 中国科学技术大学 Two-stage distribution line defect detection method based on multi-scale sliding window
CN116883391B (en) * 2023-09-05 2023-12-19 中国科学技术大学 Two-stage distribution line defect detection method based on multi-scale sliding window
CN117232577A (en) * 2023-09-18 2023-12-15 杭州奥克光电设备有限公司 Optical cable distributing box bearing interior monitoring method and system and optical cable distributing box
CN117232577B (en) * 2023-09-18 2024-04-05 杭州奥克光电设备有限公司 Optical cable distributing box bearing interior monitoring method and system and optical cable distributing box
CN117237367B (en) * 2023-11-16 2024-02-23 江苏星火汽车部件制造有限公司 Spiral blade thickness abrasion detection method and system based on machine vision
CN117237367A (en) * 2023-11-16 2023-12-15 江苏星火汽车部件制造有限公司 Spiral blade thickness abrasion detection method and system based on machine vision
CN117541640A (en) * 2024-01-09 2024-02-09 西南科技大学 Method, equipment and medium for judging uniformity of aerodynamic flow field of cascade test oil flow diagram
CN117541640B (en) * 2024-01-09 2024-04-02 西南科技大学 Method, equipment and medium for judging uniformity of aerodynamic flow field of cascade test oil flow diagram

Also Published As

Publication number Publication date
CN112233091B (en) 2021-05-18
CN112233091A (en) 2021-01-15

Similar Documents

Publication Publication Date Title
WO2022077605A1 (en) Wind turbine blade image-based damage detection and localization method
Kurukuru et al. Fault classification for photovoltaic modules using thermography and image processing
Wang et al. A two-stage data-driven approach for image-based wind turbine blade crack inspections
Xu et al. Wind turbine blade surface inspection based on deep learning and UAV-taken images
Wang et al. High-voltage power transmission tower detection based on faster R-CNN and YOLO-V3
CN108038846A (en) Transmission line equipment image defect detection method and system based on multilayer convolutional neural networks
CN111696075A (en) Intelligent fan blade defect detection method based on double-spectrum image
Li et al. Intelligent fault pattern recognition of aerial photovoltaic module images based on deep learning technique
CN104865269A (en) Wind turbine blade fault diagnosis method
Wang et al. Insulator defect recognition based on faster R-CNN
Venkatesh et al. Automatic detection of visual faults on photovoltaic modules using deep ensemble learning network
Sun et al. A novel detection method for hot spots of photovoltaic (PV) panels using improved anchors and prediction heads of YOLOv5 network
CN113205039A (en) Power equipment fault image identification and disaster investigation system and method based on multiple DCNNs
CN116258980A (en) Unmanned aerial vehicle distributed photovoltaic power station inspection method based on vision
CN115170816A (en) Multi-scale feature extraction system and method and fan blade defect detection method
CN114387261A (en) Automatic detection method suitable for railway steel bridge bolt diseases
CN114694130A (en) Method and device for detecting telegraph poles and pole numbers along railway based on deep learning
Lin et al. Identification of icing thickness of transmission line based on strongly generalized convolutional neural network
CN116046796A (en) Photovoltaic module hot spot detection method and system based on unmanned aerial vehicle
Gao et al. Low saliency crack detection based on improved multimodal object detection network: an example of wind turbine blade inner surface
CN115294048A (en) Foreign matter detection method, device, equipment and storage medium
Xia et al. A multi-target detection based framework for defect analysis of electrical equipment
Wang et al. Substation Equipment Defect Detection based on Temporal-spatial Similarity Calculation
Sheng et al. A YOLOX-Based Detection Method of Triple-Cascade Feature Level Fusion for Power System External Defects
Özer et al. An approach based on deep learning methods to detect the condition of solar panels in solar power plants

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20957383

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20957383

Country of ref document: EP

Kind code of ref document: A1