CN112581443A - Light-weight identification method for surface damage of wind driven generator blade - Google Patents


Info

Publication number
CN112581443A
Authority
CN
China
Prior art keywords
damage
image
model
target detection
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011470894.2A
Other languages
Chinese (zh)
Inventor
赵霞
田宏哲
赵霄桁
曾凡春
余瑞锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Huaneng Xinrui Control Technology Co Ltd
Original Assignee
Beijing Huaneng Xinrui Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Huaneng Xinrui Control Technology Co Ltd filed Critical Beijing Huaneng Xinrui Control Technology Co Ltd
Priority to CN202011470894.2A priority Critical patent/CN112581443A/en
Publication of CN112581443A publication Critical patent/CN112581443A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a lightweight identification method for surface damage of a wind driven generator blade, which comprises the following steps: establishing a blade damage database as the training set images and verification set images of a detection model; performing data enhancement operations on the blade damage database to obtain data-enhanced training set images; building a lightweight target detection model, wherein the target detection model adopts the MobileNet algorithm as its feature extraction algorithm and the YOLO-V3 algorithm as its single-stage target detection algorithm; training the target detection model with the data-enhanced training set images, testing the precision of the model's defect detection with the verification set images, and saving the model parameters with the best test performance to obtain a trained target detection model; and exporting and deploying the trained target detection model on portable computing equipment to carry out real-time blade surface damage detection at a wind power plant. The real-time performance and rapidity of blade surface damage detection can thereby be greatly improved.

Description

Light-weight identification method for surface damage of wind driven generator blade
Technical Field
The disclosure belongs to the technical field of blade surface damage identification, and particularly relates to a light-weight identification method for blade surface damage of a wind driven generator.
Background
Deep-learning-based detection techniques use modern artificial intelligence algorithms (e.g., Faster R-CNN, Mask R-CNN) and big data analysis to locate and classify defects or damage. The method mainly comprises the following steps:
a) using a camera unmanned aerial vehicle to collect a large number of blade photos, wherein positive samples and negative samples are included;
b) manually marking the damage in the blade photo by using marking software to establish a blade damage database;
c) a target detection algorithm based on a convolutional neural network is used for building a detection model, and training is carried out to a satisfactory precision by means of a database;
d) using this model to locate and classify damage or defects in blade photographs taken in the field.
The above detection method has a drawback: the deep learning model requires a large amount of computation and is difficult to deploy on a portable device. Inspection personnel cannot see the identification results on site; the photos must be taken back to headquarters for identification on a server. The field of intelligent unmanned-aerial-vehicle inspection currently being researched adopts a cloud computing mode to realize real-time on-site identification, but blade data can only be transmitted quickly between the site and the cloud platform when mobile bandwidth resources are abundant. Wind turbines, however, are usually located in remote areas where base stations are sparse and bandwidth resources are scarce, so online real-time detection relying on cloud computing may not be practical in this scenario.
Disclosure of Invention
The disclosure aims to solve at least one of the technical problems in the prior art and provides a light weight identification method for surface damage of a wind turbine blade.
In one aspect of the disclosure, a lightweight method for identifying surface damage of a wind turbine blade is provided, and the method includes:
establishing a blade damage database as a training set image and a verification set image of a detection model;
performing data enhancement operations on the blade damage database to obtain data-enhanced training set images;
building a lightweight target detection model, wherein the target detection model adopts a MobileNet algorithm as a feature extraction algorithm and adopts a YOLO-V3 algorithm as a single-stage target detection algorithm;
training the target detection model by using the training set image after data enhancement, performing precision test on model detection defects by using the verification set image, and storing model parameters with the best test performance to obtain a trained target detection model;
and exporting and deploying the trained target detection model in portable computing equipment, and carrying out real-time blade surface damage detection on a wind power plant.
In some optional embodiments, the establishing a blade damage database for use as the training set images and validation set images of the detection model includes:
collecting a large number of blade images from a wind power plant by using an unmanned aerial vehicle with a high-definition camera;
respectively carrying out damage marking on each collected blade image to obtain damage marking information, and sorting the blade images, the damage marking information and the blade image list documents according to the YOLO format; the damage marking information comprises a marking format and marking content, the marking format adopts the format of the PASCAL VOC data set or the COCO data set, and the marking content comprises the category of the defect and the position and area where the defect occurs;
and putting the damage marking information in one-to-one correspondence with the blade images to obtain the training set images and the verification set images.
In some optional embodiments, the data enhancement operation comprises at least one of:
randomly changing the brightness, the contrast and the color of the image, randomly filling, randomly cutting, randomly zooming, randomly turning and randomly disordering the arrangement sequence of the real frames.
In some optional embodiments, the performing a data enhancement operation on the blade damage database comprises:
loading an image from an image file or a memory buffer and converting it into RGB format;
placing the original picture onto an expanded canvas filled with the pixel mean, then cropping, scaling and flipping the picture;
adjusting the size of the image using a randomly chosen interpolation mode;
adjusting the size of the image using a specified interpolation mode;
normalizing the pixel values of the image, namely dividing them by 255.0 to scale them into the [0, 1] interval;
padding the bounding boxes with zeros if the number of bounding boxes is less than 50;
converting the bounding boxes from (xmin, ymin, xmax, ymax) form to (xmin, ymin, width, height) format.
In some optional embodiments, the training the target detection model by using the training set image after the data enhancement, performing precision test on the model detection defects by using the verification set image, and saving the model parameters that perform the best test performance to obtain the trained target detection model includes:
generating an anchor frame: dividing the training set image after data enhancement into m multiplied by n small square areas, and generating k anchor frames with different sizes in the center of each area, wherein m, n and k are positive integers more than or equal to 1;
the method for generating the prediction frame comprises the following steps: fine-tuning the center coordinates (b _ x, b _ y) and the size (b _ h × b _ w) of the generated prediction frame on the basis of the center coordinates (center _ x, center _ y) and the size (p _ h × p _ w) of the anchor frame;
each anchor frame is a possible candidate area, the candidate areas are labeled, and the labeling information comprises the real value of (t _ x, t _ y, t _ h, t _ w, P _ obj, P _1, P _2, P _3, …, P _ C);
extracting image features by using a MobileNet backbone network to obtain an output feature map;
calculating the position and the category of a prediction frame according to the output feature map;
establishing three cross entropy loss functions to calculate a loss value between a model predicted value and an actual value, wherein the loss functions are a loss function for representing whether a target object is included, a loss function for representing the position of the object and a loss function for representing the class of the object;
executing a back propagation algorithm to calculate the gradient of the model parameters, and updating the parameters along the gradient descending direction to minimize the overall loss function;
and saving the model parameters once in each iteration N step of the model, calculating the loss value of a verification set once, and taking the model parameter of the iteration turn with the minimum loss value as the best model file in the training process to obtain the trained target detection model.
In some optional embodiments, the labeling the candidate region includes:
calculating the intersection-over-union (IoU) of the k anchor frames generated by each small square area with the real frame, and setting the objectness label to positive for the prediction frame generated by the anchor frame with the largest IoU;
setting IoU a threshold iou _ threshold, and when the objectness of a predicted box is not 1, but IoU of the predicted box and a real box is greater than the iou _ threshold, setting the objectness label to-1, and not participating in the calculation of the loss function; all other prediction boxes with the object tag set to 0, indicating a negative class;
for a prediction frame with objectness 1, making the real frame and the prediction frame coincide, solving inversely for t_x, t_y, t_h and t_w, making the damage category consistent with that of the real frame, and using a one-hot vector (P_1, P_2, P_3, …, P_C) to represent the probability of each damage category for the object in the prediction frame;
for prediction frames with objectness 0 or -1, their location and category are not taken into account.
In some optional embodiments, the calculating a prediction box position and a category according to the output feature map includes:
establishing an association between the output feature map and the prediction frames, wherein each pixel on the output feature map corresponds to one small square area on the original image, realizing the correspondence of spatial positions;
and performing convolution on the output feature map multiple times, setting the final output size to [k × (5 + C)] × m × n, corresponding respectively to the (t_x, t_y, t_h, t_w, P_obj, P_1, P_2, P_3, …, P_C) real-number predicted values of the k prediction frames generated by the m × n small square areas.
In some optional embodiments, the exporting and deploying the trained target detection model in a portable computing device and to a wind farm for real-time blade surface damage detection includes:
acquiring an actual blade video of the wind power plant, selecting a video key frame image and inputting the video key frame image into the trained target detection model carried on the portable computing equipment;
for each key frame image, the trained target detection model executes a forward propagation algorithm to obtain a prediction frame damage category and a score;
eliminating redundant prediction frames with large overlap by using non-maximum suppression;
drawing a rectangular damage frame on the original picture according to the real-number values of the prediction frame, and marking the damage category and score near the drawn frame;
and classifying and storing the identified result according to the damage type.
In another aspect of the present disclosure, an electronic device is provided, including:
one or more processors;
a storage unit for storing one or more programs which, when executed by the one or more processors, enable the one or more processors to implement the method according to the preceding description.
In another aspect of the disclosure, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, is adapted to carry out the method according to the above.
According to the lightweight identification method for surface damage of a wind driven generator blade described above, a lightweight YOLO-V3 target detection model is used so that it can be deployed on portable equipment; the training data set is enlarged through image enhancement processing, which suppresses overfitting and improves the robustness of the model; and the portable equipment is carried to the site to perform rapid damage localization and classification. The real-time performance and rapidity of blade surface damage detection are thereby greatly improved.
Drawings
FIG. 1 is a schematic block diagram of an example electronic device for implementing a lightweight identification method for wind turbine blade surface damage in one embodiment of the present disclosure;
FIG. 2 is a flow chart of a method for identifying damage to the surface of a wind turbine blade in a lightweight manner according to another embodiment of the disclosure;
FIG. 3 is a schematic diagram of various convolutions according to another embodiment of the present disclosure;
FIG. 4 is a block diagram of a different convolution according to another embodiment of the present disclosure;
fig. 5 is a network structure diagram of MobileNet according to another embodiment of the present disclosure.
Detailed Description
For a better understanding of the technical aspects of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
First, an example electronic device for implementing a wind turbine blade surface damage lightweight identification method according to an embodiment of the present disclosure is described with reference to fig. 1.
As shown in FIG. 1, electronic device 200 includes one or more processors 210, one or more memory devices 220, one or more input devices 230, one or more output devices 240, and the like, interconnected by a bus system 250 and/or other form of connection mechanism. It should be noted that the components and structures of the electronic device shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processor 210 may be a Central Processing Unit (CPU), or may be made up of multiple processing cores, or other forms of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 200 to perform desired functions.
Storage 220 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor may execute them to implement the client functionality of the embodiments of the disclosure described below and/or other desired functionality. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 230 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 240 may output various information (e.g., images or sounds) to an outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
Next, a wind turbine blade surface damage lightweight identification method according to another embodiment of the present disclosure will be described with reference to fig. 2.
As shown in fig. 2, a method S100 for identifying damage to a surface of a wind turbine blade in a lightweight manner includes:
and S110, establishing a blade damage database to be used as a training set image and a verification set image of the detection model.
Specifically, in this step, an unmanned aerial vehicle carrying a high-definition camera may be used to acquire a large number of blade images from one or more different wind farms. Then, damage marking is carried out on each collected blade image to obtain damage marking information, and the blade images, the damage marking information and the blade image list documents are sorted according to the YOLO format. Finally, the damage marking information is put in one-to-one correspondence with the blade images to obtain the training set images and the verification set images.
It should be noted that, no limitation is made on the specific content of the damage labeling information, for example, the damage labeling information may include a labeling format and labeling content, and the labeling format may be in the format of a PASCAL VOC data set or a COCO data set. The annotation content includes the category of the defect (C category) and the position and area where the defect occurs. Of course, besides, the damage labeling information may also include other contents, and this embodiment does not specifically limit this.
And S120, performing data enhancement operations on the blade damage database to obtain data-enhanced training set images.
Specifically, at this step, some random variation may be made to the images, resulting in similar but not identical samples. The main effects are that limited data can generate more data, the number and diversity (noise data) of training samples are increased, and the robustness of the model is improved. Data enhancement is typically performed using a combination of several methods:
randomly changing brightness, contrast, color and the like of an image, randomly filling, randomly cutting, randomly zooming, randomly turning, randomly disordering the arrangement sequence of real frames and the like.
S130, building a lightweight target detection model, wherein the target detection model adopts a MobileNet algorithm as a feature extraction algorithm and adopts a YOLO-V3 algorithm as a single-stage target detection algorithm.
Specifically, in this step, MobileNet is used for the following reason: MobileNet uses a depthwise-separable convolution approach that decomposes the conventional convolution into a depthwise convolution plus a 1 × 1 pointwise convolution. As shown in fig. 3, (a) is a conventional convolution (Standard Convolution Filters), while (b) and (c) are the depthwise convolution and the 1 × 1 convolution that together make up the depthwise-separable convolution.
Assume the input feature map has size D_f × D_f × M, the output feature map has size D_f × D_f × N, and the convolution kernel has size D_k × D_k. The computation cost of the conventional convolution is then:
CC_conventional = D_k × D_k × M × N × D_f × D_f
The computation cost of the depthwise-separable convolution is the sum of the costs of the depthwise convolution and the 1 × 1 convolution:
CC_depth = D_k × D_k × M × D_f × D_f + M × N × D_f × D_f
The ratio of the computation cost of the depthwise-separable convolution to that of the conventional convolution is:
CC_depth / CC_conventional = 1/N + 1/(D_k × D_k)
Taking a 28 × 28 × 192 input and a 28 × 28 × 256 output as an example, with a 3 × 3 convolution kernel, the ratio of the two computation costs is 0.1150; the computation cost of the depthwise-separable convolution is reduced to around 1/9 of that of the conventional convolution.
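The computation-cost comparison above can be checked with a short sketch (the function names here are illustrative, not from the patent):

```python
# Multiply-accumulate cost of a conventional convolution vs. a
# depthwise-separable one, using the 28x28x192 -> 28x28x256, 3x3 example.

def conventional_cost(dk, m, n, df):
    # CC_conventional = Dk * Dk * M * N * Df * Df
    return dk * dk * m * n * df * df

def separable_cost(dk, m, n, df):
    # Depthwise part plus 1x1 pointwise part.
    return dk * dk * m * df * df + m * n * df * df

dk, m, n, df = 3, 192, 256, 28
ratio = separable_cost(dk, m, n, df) / conventional_cost(dk, m, n, df)
print(round(ratio, 4))
```

The printed ratio matches the closed form 1/N + 1/(D_k × D_k) = 1/256 + 1/9 ≈ 0.1150.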
As shown in fig. 4, the structure of the conventional convolution (left) is compared with the depthwise-separable convolution in MobileNet (right). Both the depthwise convolution and the 1 × 1 convolution are followed by a BN layer and a ReLU activation layer.
YOLO-V3 was used for the following reasons: unlike the R-CNN series algorithm, YOLO-V3 uses a single network structure to predict object types and positions while generating candidate regions, and does not need to be divided into two stages to complete the detection task. In addition, the YOLO-V3 algorithm produces a much smaller number of prediction boxes than Faster R-CNN. Each real box in Faster R-CNN may correspond to multiple candidate regions labeled positive, while each real box inside YOLO-V3 corresponds to only one positive candidate region. These characteristics allow the YOLO-V3 algorithm to be faster and to reach the level of real-time response.
S140, training the target detection model by using the training set image after data enhancement, performing precision test on the model detection defects by using the verification set image, and storing the model parameters with the best test performance to obtain the trained target detection model.
S150, exporting and deploying the trained target detection model in portable computing equipment, and carrying out real-time blade surface damage detection on a wind power plant.
According to the lightweight identification method for surface damage of a wind driven generator blade described above, a lightweight YOLO-V3 target detection model is used so that it can be deployed on portable equipment; the training data set is expanded through image enhancement processing, which suppresses overfitting and improves the robustness of the model; and finally the portable equipment is carried to the site to perform rapid damage localization and classification, so that the real-time performance and rapidity of blade surface damage detection are greatly improved.
In some optional embodiments, the performing a data enhancement operation on the blade damage database comprises:
loading an image from an image file or a memory buffer and converting it into RGB format;
placing the original picture onto an expanded canvas filled with the pixel mean, then cropping, scaling and flipping the picture;
adjusting the size of the image using a randomly chosen interpolation mode;
adjusting the size of the image using a specified interpolation mode;
normalizing the pixel values of the image, namely dividing them by 255.0 to scale them into the [0, 1] interval;
padding the bounding boxes with zeros if the number of bounding boxes is less than 50;
converting the bounding boxes from (xmin, ymin, xmax, ymax) form to (xmin, ymin, width, height) format.
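A minimal sketch of the last three preprocessing steps (pixel normalization, zero-padding to 50 boxes, and the box-format conversion); the helper names are illustrative:

```python
import numpy as np

def normalize_pixels(img):
    # Divide by 255.0 to scale pixel values into the [0, 1] interval.
    return img.astype(np.float32) / 255.0

def pad_boxes(boxes, max_boxes=50):
    # Zero-pad so every training sample carries a fixed-size box array.
    padded = np.zeros((max_boxes, 4), dtype=np.float32)
    n = min(len(boxes), max_boxes)
    padded[:n] = boxes[:n]
    return padded

def xyxy_to_xywh(boxes):
    # (xmin, ymin, xmax, ymax) -> (xmin, ymin, width, height)
    out = boxes.astype(np.float32).copy()
    out[:, 2] -= out[:, 0]
    out[:, 3] -= out[:, 1]
    return out
```

Fixing the box array at 50 entries lets all samples in a batch share one tensor shape; boxes whose objectness stays zero simply never contribute to the loss.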
In some optional embodiments, the training the target detection model by using the training set image after the data enhancement, performing precision test on the model detection defects by using the verification set image, and saving the model parameters that perform the best test performance to obtain the trained target detection model includes:
generating an anchor frame: dividing the training set image after data enhancement into m multiplied by n small square areas, generating k anchor frames with different sizes in the center of each area, wherein m, n and k are positive integers which are more than or equal to 1.
The method for generating the prediction frame comprises the following steps: fine-tuning the center coordinates (b _ x, b _ y) and the size (b _ h × b _ w) of the generated prediction frame on the basis of the center coordinates (center _ x, center _ y) and the size (p _ h × p _ w) of the anchor frame. The algorithm for the fine tuning is as follows:
σ(x) = 1 / (1 + e^(−x))
b_x = center_x + σ(t_x)
b_y = center_y + σ(t_y)
b_h = p_h × e^(t_h)
b_w = p_w × e^(t_w)
where σ is the sigmoid function, which maps the raw offsets t_x and t_y into the (0, 1) interval.
and randomly assigning values to t _ x, t _ y, t _ h and t _ w to obtain an initial prediction frame.
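The fine-tuning formulas translate directly into code; a sketch following the coordinate convention as written in this disclosure (where (center_x, center_y) denotes the anchor's reference point):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_box(t_x, t_y, t_h, t_w, center_x, center_y, p_h, p_w):
    # Refine the anchor frame (center_x, center_y, p_h x p_w) into a
    # prediction frame using the network outputs t_x, t_y, t_h, t_w.
    b_x = center_x + sigmoid(t_x)
    b_y = center_y + sigmoid(t_y)
    b_h = p_h * math.exp(t_h)
    b_w = p_w * math.exp(t_w)
    return b_x, b_y, b_h, b_w
```

With all offsets at zero the prediction frame keeps the anchor's size and sits at the σ(0) = 0.5 offset, which is why random initial values of t_x, t_y, t_h, t_w still yield a valid initial prediction frame.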
Each of the anchor boxes is a possible candidate region, the candidate region is labeled, and the labeling information includes the true value of (t _ x, t _ y, t _ h, t _ w, P _ obj, P _1, P _2, P _3, …, P _ C). The detailed labeling steps are as follows:
Calculate the intersection-over-union (IoU) of the k anchor frames generated by each small square area with the real frame; for the prediction frame generated by the anchor frame with the largest IoU, set the objectness label (P_obj) to positive.
Setting IoU a threshold iou _ threshold, and when the objectness of a predicted box is not 1, but IoU of the predicted box and a real box is greater than the iou _ threshold, setting the objectness label to-1, and not participating in the calculation of the loss function; all other prediction boxes, with the object tag set to 0, represent a negative class.
For a prediction frame with objectness 1, make the real frame and the prediction frame coincide and solve inversely for t_x, t_y, t_h and t_w; the damage category of the prediction frame is made consistent with that of the real frame, and a one-hot vector (P_1, P_2, P_3, …, P_C) represents the probability of each damage category for the object in the prediction frame.
For prediction frames with objectness 0 or -1, their location and category are not taken into account.
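The IoU used to pick the positive anchor can be computed as follows (a standard sketch; the (xmin, ymin, xmax, ymax) box layout is assumed):

```python
def iou(box_a, box_b):
    # Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

The same routine serves both labeling stages: picking the single positive anchor per real frame, and marking as "ignore" (objectness -1) any other prediction frame whose IoU exceeds iou_threshold.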
And extracting image features by using a MobileNet backbone network to obtain an output feature map. The network structure of MobileNet is shown in fig. 5.
And calculating the position and category of the prediction frames according to the output feature map, specifically comprising the following steps:
and establishing the association between the output characteristic diagram and the prediction frame, wherein each pixel point on the output characteristic diagram corresponds to a small square area on the original drawing respectively, so that the corresponding relation of the spatial positions is realized.
And performing convolution on the output characteristic diagram for multiple times, setting the final output channel number as [ k (5+ C) ] × m × n, and respectively corresponding to (t _ x, t _ y, t _ h, t _ w, P _ obj, P _1, P _2, P _3, …, P _ C) real number predicted values of k prediction frames generated by the m × n small square areas.
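The per-cell channel count follows directly from k and C. For instance, with k = 3 anchors per cell (an illustrative choice) and the C = 2 damage classes labeled in the specific example of this disclosure (crack and spot), each grid cell needs 21 output channels:

```python
def head_channels(k, num_classes):
    # k anchors per cell, each predicting (t_x, t_y, t_h, t_w, P_obj)
    # plus C per-class probabilities.
    return k * (5 + num_classes)

print(head_channels(3, 2))  # 21 channels per grid cell
```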
And establishing three cross entropy loss functions to calculate a loss value between the model predicted value and the model actual value, wherein the loss functions are a loss function for representing whether a target object is included, a loss function for representing the position of the object and a loss function for representing the class of the object.
A back propagation algorithm is performed to calculate the gradient of the model parameters and the parameters are updated in the direction of gradient descent to minimize the overall loss function.
And saving the model parameters once in each iteration N step of the model, calculating the loss value of a verification set once, and taking the model parameter of the iteration turn with the minimum loss value as the best model file in the training process to obtain the trained target detection model.
In some optional embodiments, the exporting and deploying the trained target detection model in a portable computing device and to a wind farm for real-time blade surface damage detection includes:
acquiring an actual blade video of the wind power plant, selecting a video key frame image and inputting the video key frame image into the trained target detection model carried on the portable computing equipment;
and for each key frame image, the trained target detection model executes a forward propagation algorithm to obtain a prediction frame damage category and a score. The prediction box score is defined as the probability of belonging to the damage category multiplied by the objectness probability of whether its prediction box contains the target object.
Non-maximum suppression is used to eliminate redundant prediction frames with large overlap. If two prediction frames are of the same category and their positions overlap substantially, they can be considered to be predicting the same object. Non-maximum suppression selects, for each class, the prediction frame with the highest score, and discards any other frame of that class whose IoU with it exceeds a threshold.
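A greedy sketch of the non-maximum suppression step just described (the default threshold value is an illustrative choice):

```python
def nms(boxes, scores, iou_threshold=0.5):
    # Keep the highest-scoring box, drop any remaining box whose IoU
    # with it exceeds the threshold; repeat on the survivors.
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best, order = order[0], order[1:]
        keep.append(best)
        order = [i for i in order
                 if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep
```

In practice the routine is run separately per damage class, since only same-class frames are treated as duplicate detections of one object.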
And drawing a rectangular damage frame on the original picture according to the real-number values of the prediction box, and marking the damage category and score near the drawn frame.
And classifying and storing the identified results by damage type.
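The scoring and non-maximum-suppression steps above can be sketched as follows. The box format (xmin, ymin, xmax, ymax), the category names, and the threshold value are illustrative assumptions, not the patent's exact implementation.

```python
# Sketch of the post-processing: each box's score is its class probability
# times its objectness, and non-maximum suppression drops lower-scoring
# boxes of the same class whose IoU with a kept box exceeds a threshold.

def iou(a, b):
    """Intersection over union of two (xmin, ymin, xmax, ymax) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections, iou_threshold=0.5):
    """detections: list of (box, class_name, class_prob, objectness)."""
    scored = [(box, cls, p_cls * p_obj) for box, cls, p_cls, p_obj in detections]
    scored.sort(key=lambda d: d[2], reverse=True)   # highest score first
    kept = []
    for box, cls, score in scored:
        # discard if it overlaps a same-class kept box too much
        if all(cls != kcls or iou(box, kbox) <= iou_threshold
               for kbox, kcls, _ in kept):
            kept.append((box, cls, score))
    return kept

dets = [
    ((10, 10, 110, 110), "crack", 0.9, 0.95),   # score 0.855, kept
    ((12, 12, 112, 112), "crack", 0.8, 0.90),   # overlaps the first -> dropped
    ((200, 200, 260, 260), "spot", 0.7, 0.80),  # different region -> kept
]
print(nms(dets))  # two boxes survive
```

Because suppression is applied per class, a crack box never suppresses an overlapping spot box, matching the "same category" condition in the text.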
In summary, although the background environment and illumination intensity of the blade images vary greatly, the disclosed method identifies the two main types of surface damage, cracks and spots, with an accuracy above 90%, so the disclosed technical scheme offers higher robustness and precision than comparable technologies. The lightweight model used by the present disclosure can easily be deployed on portable computing devices without large computational resources. The recognition rate in the specific embodiment reaches 45.5 fps, which fully supports real-time detection. Inspection personnel therefore need not endure the intolerable delay of uploading high-definition blade images or videos to a cloud platform and waiting for the recognition results over the poor network bandwidth available at remote wind farms, nor the inefficiency of carrying images back to headquarters for damage detection.
A specific example will be described below.
Step one, establishing a training set and a testing set
Firstly, a quad-rotor unmanned aerial vehicle (as shown in the accompanying drawings) carrying a high-definition camera is flown to different wind farms to collect a large number of blade images; the acquisition requirements are shown in Table 1 below:
TABLE 1
(Table 1 is reproduced as an image in the original publication and is not available in this text.)
The two major damage types in the image library, cracks (crack) and spots (spot), were labeled using LabelImg.
The xml information file generated with each annotation contains the image name, the image path, the length, width, and number of channels of the annotated image, the name of each annotated target, the position information of each annotated target, and the like.
The pictures, annotation information, and picture-list documents are organized in the PASCAL VOC dataset format: the Annotations directory stores the xml files generated by labeling, JPEGImages stores the pictures, and ImageSets stores the division into training and validation sets. A Main directory is placed under ImageSets, and a label_list.txt is created under Main; label_list.txt lists the names of all annotated damage types. The training set and validation set are randomly drawn in an 8:2 ratio.
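The 8:2 split and the list files described above might be produced by something like the following sketch. The directory and file names (ImageSets/Main, train.txt, val.txt, label_list.txt) follow the text, while `split_dataset` and `write_lists` are hypothetical helpers; files are returned as strings here rather than written to disk.

```python
# Sketch of the random 8:2 train/validation split and the list files of
# the PASCAL VOC layout described above.
import random

def split_dataset(image_ids, train_ratio=0.8, seed=0):
    """Randomly split image ids into training and validation lists (8:2)."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    cut = int(len(ids) * train_ratio)
    return ids[:cut], ids[cut:]

def write_lists(image_ids, root="ImageSets/Main"):
    """Return the list files as {path: contents} (written to disk in practice)."""
    train, val = split_dataset(image_ids)
    labels = ["crack", "spot"]  # label_list.txt: names of all annotated damage types
    return {
        f"{root}/train.txt": "\n".join(train),
        f"{root}/val.txt": "\n".join(val),
        f"{root}/label_list.txt": "\n".join(labels),
    }

files = write_lists([f"img_{i:04d}" for i in range(100)])
print(len(files["ImageSets/Main/train.txt"].splitlines()))  # 80
```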
Step two, data preprocessing; the following operations are carried out in sequence:
Loading an image from an image file or a memory buffer, and converting the image into an RGB format;
putting the original picture into an expanded picture filled with the pixel mean, and cropping, scaling and flipping the picture;
adjusting the size of the image by using a random interpolation mode;
adjusting the size of the image according to a specific interpolation mode;
normalizing the pixel value of the image, namely dividing the pixel value by 255.0, and scaling the pixel value to a [0-1] interval;
padding the bounding-box list with zeros if the number of bounding boxes is less than 50;
converting the bounding boxes from (xmin, ymin, xmax, ymax) form to (xmin, ymin, width, height) format.
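The last three preprocessing operations above can be sketched as follows; the earlier decoding, expansion, and random-augmentation steps are omitted, and the function and constant names are illustrative (the 50-box limit is the one stated in the text).

```python
# Minimal sketch: scale pixel values to [0, 1], pad the box list with zeros
# up to 50 entries, and convert boxes from (xmin, ymin, xmax, ymax) to
# (xmin, ymin, width, height).
import numpy as np

MAX_BOXES = 50  # box-list length stated in the text

def preprocess(image, boxes):
    """image: HxWx3 uint8 array; boxes: list of (xmin, ymin, xmax, ymax)."""
    img = image.astype(np.float32) / 255.0            # normalize to [0, 1]
    out = np.zeros((MAX_BOXES, 4), dtype=np.float32)  # zero padding
    for i, (x1, y1, x2, y2) in enumerate(boxes[:MAX_BOXES]):
        out[i] = (x1, y1, x2 - x1, y2 - y1)           # xyxy -> x, y, w, h
    return img, out

image = np.full((608, 608, 3), 255, dtype=np.uint8)
_, padded = preprocess(image, [(10, 20, 110, 220)])
print(padded[0])  # [ 10.  20. 100. 200.]
```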
Step three, the Baidu PaddlePaddle platform is installed on a cloud platform (system: Ubuntu 18.04; processor: 8 cores; memory: 16 GB; graphics card: a single Tesla V100; storage: 200 GB) to build a lightweight model-training environment. MobileNet is selected as the backbone network and YOLO-v3 as the base network; the maximum number of iteration steps is set to 20000; the pre-trained model comes from the official Baidu repository; the dataset format is set to VOC with two dataset categories; model parameters are saved every 200 steps, with evaluation during training enabled; and the input image size is set to 3 × 608 × 608.
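For reference, the training settings stated in step three can be gathered in one place. The dictionary below merely records those values; its key names are illustrative and it is not the actual PaddlePaddle/PaddleDetection configuration schema.

```python
# The stated training settings as a plain dictionary (illustrative keys only).
train_config = {
    "backbone": "MobileNet",               # feature extraction network
    "architecture": "YOLO-v3",             # single-stage base network
    "max_iters": 20000,                    # maximum iteration steps
    "pretrain_weights": "baidu_official",  # pre-trained model source
    "dataset_format": "VOC",
    "num_classes": 2,                      # crack and spot
    "snapshot_iter": 200,                  # save parameters every 200 steps
    "eval_during_train": True,             # test while training
    "input_shape": (3, 608, 608),          # C x H x W
}
print(train_config["max_iters"] // train_config["snapshot_iter"])  # 100 checkpoints
```

Note that 20000 steps with a snapshot every 200 steps yields exactly the 100 checkpoint files mentioned in step four.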
Step four, starting training
The model generates 100 checkpoint files in total; because training and testing proceed simultaneously, the parameter file that performs best in the test environment can be identified, and the network structure and forward (inference) parameters of the best model are exported.
Step five, detecting damage of the blade in real time
Python-side prediction deployment is carried out on the portable device IP-1370 (system: Ubuntu 18.04; processor: i7-7700T; memory: 4 GB; GPU: Quadro P3000; storage: 500 GB; wireless communication supported). The network structure and forward parameters of the tested optimal lightweight model are downloaded from the cloud platform over a mobile communication network and imported into the Python environment of the IP-1370.
The high-definition unmanned aerial vehicle shoots blade videos on site; video key-frame images are selected and transmitted wirelessly to the portable device over a WiFi local area network.
For each image, the lightweight model executes only a forward-propagation pass to obtain the damage category and score of each prediction box.
Redundant prediction boxes with large overlap are eliminated using non-maximum suppression, with the IoU threshold set to 0.5.
A rectangular damage frame is drawn on the original picture according to the real-number values of the prediction box, and the damage category and score are marked near the drawn frame.
The identified results are classified and stored by damage type.
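The final drawing-and-filing steps can be sketched dependency-free as follows. A real deployment would more likely draw with OpenCV (`cv2.rectangle` / `cv2.putText`); both helper functions here are hypothetical illustrations.

```python
# Sketch: draw a rectangular frame directly into a grayscale NumPy image,
# and group detection results into per-category bins for storage.
import numpy as np
from collections import defaultdict

def draw_box(img, box, value=255, thickness=2):
    """Draw a rectangular frame (xmin, ymin, xmax, ymax) into a 2-D image."""
    x1, y1, x2, y2 = box
    img[y1:y1 + thickness, x1:x2] = value   # top edge
    img[y2 - thickness:y2, x1:x2] = value   # bottom edge
    img[y1:y2, x1:x1 + thickness] = value   # left edge
    img[y1:y2, x2 - thickness:x2] = value   # right edge
    return img

def file_by_category(detections):
    """Group (image_name, category, score) results by damage category."""
    bins = defaultdict(list)
    for name, category, score in detections:
        bins[category].append((name, score))
    return dict(bins)

img = draw_box(np.zeros((100, 100), dtype=np.uint8), (10, 10, 60, 60))
bins = file_by_category([("a.jpg", "crack", 0.91), ("b.jpg", "spot", 0.88),
                         ("c.jpg", "crack", 0.75)])
print(sorted(bins), len(bins["crack"]))  # ['crack', 'spot'] 2
```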
In another aspect of the present disclosure, an electronic device is provided, including:
one or more processors;
a storage unit for storing one or more programs which, when executed by the one or more processors, enable the one or more processors to carry out a method according to the preceding description.
In another aspect of the disclosure, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, is adapted to carry out the method according to the above.
The computer readable medium may be included in the apparatus, device, system, or may exist separately.
The computer readable storage medium may be any tangible medium that can contain or store a program, and may be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device. More specific examples include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, an optical fiber, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination thereof.
The computer readable medium may also include a data signal propagated in baseband or as part of a carrier wave, in which computer readable program code is embodied; such a propagated signal may take any form suitable for carrying the program code.
It is to be understood that the above embodiments are merely exemplary embodiments employed to illustrate the principles of the present disclosure, and that the present disclosure is not limited thereto. It will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the disclosure, and such changes and modifications also fall within the scope of the disclosure.

Claims (10)

1. A lightweight method for identifying surface damage on a wind turbine blade, characterized by comprising the following steps:
establishing a blade damage database as a training set image and a verification set image of a detection model;
performing a data enhancement operation on the blade damage database to obtain data-enhanced training set images;
building a lightweight target detection model, wherein the target detection model adopts a MobileNet algorithm as a feature extraction algorithm and adopts a YOLO-V3 algorithm as a single-stage target detection algorithm;
training the target detection model by using the training set image after data enhancement, performing precision test on model detection defects by using the verification set image, and storing model parameters with the best test performance to obtain a trained target detection model;
and exporting and deploying the trained target detection model on a portable computing device, and performing real-time blade surface damage detection at a wind farm.
2. The method of claim 1, wherein the establishing of the blade damage database for use as the training set images and validation set images of the detection model comprises:
collecting a large number of blade images from a wind farm by using an unmanned aerial vehicle carrying a high-definition camera;
performing damage annotation on each collected blade image to obtain damage annotation information, and organizing the blade images, the damage annotation information, and the blade-image list documents according to a YOLO format; the damage annotation information comprises an annotation format and annotation content, wherein the annotation format adopts the format of a PASCAL VOC dataset or a COCO dataset, and the annotation content comprises the category of the defect and the position and area of the defect;
and placing the damage annotation information in one-to-one correspondence with the blade images to obtain the training set images and the validation set images.
3. The method of claim 1, wherein the data enhancement operation comprises at least one of:
randomly changing the brightness, the contrast and the color of the image, randomly filling, randomly cutting, randomly zooming, randomly turning and randomly disordering the arrangement sequence of the real frames.
4. The method of claim 1, wherein the performing data enhancement operations on the blade damage database comprises:
loading an image from an image file or a memory buffer, and converting the image into an RGB format;
putting an original picture into an expansion picture filled with a pixel mean value, and cutting, scaling and turning the picture;
adjusting the size of the image by using a random interpolation mode;
adjusting the size of the image according to a specific interpolation mode;
normalizing the pixel value of the image, namely dividing the pixel value by 255.0, and scaling the pixel value to a [0-1] interval;
padding the bounding-box list with zeros if the number of bounding boxes is less than 50;
converting the bounding boxes from (xmin, ymin, xmax, ymax) form to (xmin, ymin, width, height) format.
5. The method according to any one of claims 1 to 4, wherein the training the target detection model by using the training set image after the data enhancement, performing precision testing of model detection defects by using the verification set image, and saving the model parameters with the best test performance to obtain the trained target detection model comprises:
generating an anchor frame: dividing the training set image after data enhancement into m multiplied by n small square areas, and generating k anchor frames with different sizes in the center of each area, wherein m, n and k are positive integers more than or equal to 1;
the method for generating the prediction frame comprises the following steps: fine-tuning the center coordinates (b _ x, b _ y) and the size (b _ h × b _ w) of the generated prediction frame on the basis of the center coordinates (center _ x, center _ y) and the size (p _ h × p _ w) of the anchor frame;
each anchor frame is a possible candidate area, the candidate areas are labeled, and the labeling information comprises the real value of (t _ x, t _ y, t _ h, t _ w, P _ obj, P _1, P _2, P _3, …, P _ C);
extracting image features by using a MobileNet backbone network to obtain an output feature map;
calculating the position and the category of a prediction frame according to the output feature map;
establishing three cross entropy loss functions to calculate a loss value between a model predicted value and an actual value, wherein the loss functions are a loss function for representing whether a target object is included, a loss function for representing the position of the object and a loss function for representing the class of the object;
executing a back propagation algorithm to calculate the gradient of the model parameters, and updating the parameters along the gradient descending direction to minimize the overall loss function;
and saving the model parameters every N iterations, computing the validation-set loss at each save, and taking the model parameters of the iteration round with the smallest loss value as the best model file of the training process, to obtain the trained target detection model.
6. The method of claim 5, wherein labeling the candidate region comprises:
calculating the intersection-over-union (IoU) between the k anchor frames generated in each small square area and the real frame, and, for the prediction frame generated from the anchor frame with the largest IoU, setting its objectness label to positive;
setting an IoU threshold iou_threshold; when the objectness of a prediction box is not 1 but its IoU with a real box is greater than iou_threshold, its objectness label is set to -1 and it does not participate in the calculation of the loss function; all other prediction boxes have their objectness label set to 0, indicating the negative class;
for a prediction frame whose objectness is 1, making the real frame and the prediction frame coincide to solve for t_x, t_y, t_h and t_w, making the damage category consistent with the category of the real frame, and using a one-hot vector (P_1, P_2, P_3, …, P_C) to represent the probability value of each damage category for the object in the prediction frame;
for prediction boxes whose objectness is 0 or -1, their location and class are not taken into account.
7. The method of claim 5, wherein computing prediction box locations and classes from the output feature map comprises:
establishing association between the output characteristic diagram and the prediction frame, wherein each pixel point on the output characteristic diagram corresponds to a small square area on the original drawing respectively, so that the corresponding relation of spatial positions is realized;
and performing convolution on the output feature map multiple times, setting the final number of output channels to k × (5 + C) over the m × n grid, corresponding respectively to the (t_x, t_y, t_h, t_w, P_obj, P_1, P_2, P_3, …, P_C) real-number predicted values of the k prediction frames generated for each of the m × n small square areas.
8. The method according to any one of claims 1 to 4, wherein the exporting and deploying of the trained target detection model on a portable computing device and the real-time blade surface damage detection at the wind farm comprises:
acquiring an actual blade video of the wind power plant, selecting a video key frame image and inputting the video key frame image into the trained target detection model carried on the portable computing equipment;
for each key frame image, the trained target detection model executes a forward propagation algorithm to obtain a prediction frame damage category and a score;
eliminating redundant prediction boxes with large overlap using non-maximum suppression;
drawing a rectangular damage frame on the original picture according to the real number value of the prediction frame, and marking the damage category and the damage score near the drawing frame;
and classifying and storing the identified result according to the damage type.
9. An electronic device, comprising:
one or more processors;
a storage unit for storing one or more programs which, when executed by the one or more processors, enable the one or more processors to carry out the method according to any one of claims 1 to 8.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, is able to carry out a method according to any one of claims 1 to 8.
CN202011470894.2A 2020-12-14 2020-12-14 Light-weight identification method for surface damage of wind driven generator blade Pending CN112581443A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011470894.2A CN112581443A (en) 2020-12-14 2020-12-14 Light-weight identification method for surface damage of wind driven generator blade


Publications (1)

Publication Number Publication Date
CN112581443A true CN112581443A (en) 2021-03-30

Family

ID=75135273


Country Status (1)

Country Link
CN (1) CN112581443A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111328A (en) * 2019-05-16 2019-08-09 上海中认尚科新能源技术有限公司 A kind of blade crack of wind driven generator detection method based on convolutional neural networks
CN110163177A (en) * 2019-05-28 2019-08-23 李峥嵘 A kind of wind power generation unit blade unmanned plane automatic sensing recognition methods


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
YAJIE YU et al.: "Defect identification of wind turbine blades based on defect semantic features with transfer feature extractor" *
ZIFENG QIU et al.: "Automatic visual defects inspection of wind turbine blades via YOLO-based small object detection approach" *
QIU Zifeng: "Research on surface damage detection of wind turbine blades based on computer vision" *
ZHANG Chao; WEN Chuanbo: "Defect detection of wind turbine blades based on improved Mask R-CNN" *
MAO Xiwei; XU Yingying: "UAV-based automatic defect detection technology for wind turbine blades" *
FAN Chenliang; LI Guoqing; MA Changxiao; HUANG Hai: "A deep-learning-based crack detection algorithm for wind turbine blades" *
QI Rong et al.: "A lightweight object detection network based on YOLOv3" *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113640297A (en) * 2021-06-30 2021-11-12 华北电力大学 Deep learning-based online blade damage detection method for double-impeller wind driven generator
CN113344911A (en) * 2021-07-06 2021-09-03 北京大都正隆医疗科技有限公司 Method and device for measuring size of calculus
CN113553948A (en) * 2021-07-23 2021-10-26 中远海运科技(北京)有限公司 Automatic recognition and counting method for tobacco insects and computer readable medium
WO2023036346A1 (en) * 2021-09-13 2023-03-16 中国民航大学 Yolov5-based method and apparatus for performing real-time detection of blade crack during operation and maintenance of aero engine
CN114002334A (en) * 2021-09-29 2022-02-01 西安交通大学 Structural damage acoustic emission signal identification method and device and storage medium
WO2023134399A1 (en) * 2022-01-14 2023-07-20 中国科学院深圳先进技术研究院 Phytoplankton detection method and system
CN116416578A (en) * 2022-12-02 2023-07-11 中国电力工程顾问集团有限公司 Method and device for detecting damage of aerial umbrella cover of high-altitude wind power
CN116503612A (en) * 2023-06-26 2023-07-28 山东大学 Fan blade damage identification method and system based on multitasking association
CN116503612B (en) * 2023-06-26 2023-11-24 山东大学 Fan blade damage identification method and system based on multitasking association
CN117541640A (en) * 2024-01-09 2024-02-09 西南科技大学 Method, equipment and medium for judging uniformity of aerodynamic flow field of cascade test oil flow diagram
CN117541640B (en) * 2024-01-09 2024-04-02 西南科技大学 Method, equipment and medium for judging uniformity of aerodynamic flow field of cascade test oil flow diagram


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination