CN115719337A - Wind turbine surface defect detection method - Google Patents

Wind turbine surface defect detection method

Info

Publication number
CN115719337A
CN115719337A (application CN202211414040.1A)
Authority
CN
China
Prior art keywords
network
layer
wind turbine
model
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211414040.1A
Other languages
Chinese (zh)
Inventor
杨宇龙
张银胜
蓝天鹤
吉茹
徐文校
吕宗奎
付相为
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi University
Original Assignee
Wuxi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi University filed Critical Wuxi University
Priority to CN202211414040.1A priority Critical patent/CN115719337A/en
Publication of CN115719337A publication Critical patent/CN115719337A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a wind turbine surface defect detection method comprising the following steps: S1, collecting damage-detection images of the wind turbine surface and annotating the pictures; applying enhancement processing to the annotated pictures and dividing them into a training set, a validation set and a test set; S2, constructing an improved YOLOv5s model by changing the convolution mode, integrating a lightweight-network Bottleneck structure, adding a weighted bidirectional feature pyramid network and adding an efficient channel attention mechanism; S3, performing bbox regression with an α-IoU loss function; and S4, inputting the data set obtained in step S1 into the improved YOLOv5s model for training and testing to obtain a parameter model satisfying the conditions, and outputting a surface-defect effect diagram. The invention enables intelligent detection of various wind turbine defects and prevents defect areas from growing.

Description

Wind turbine surface defect detection method
Technical Field
The invention relates to a surface defect detection method, in particular to a wind turbine surface defect detection method.
Background
As a wind turbine ages, its surface is exposed to sun and wind, and the blades and tower body gradually become damaged and soiled. However, wind turbine towers typically reach about one hundred meters and the blades about fifty meters, which makes wind turbine maintenance difficult. Detecting the type and location of a defect at the early stage of its occurrence is therefore particularly important.
Many researchers have studied this problem. Mao Yulin adopted a cascade R-CNN feature-extraction network based on ResNet-101 as the detection model and introduced transfer learning for fan surface-defect detection, so that the model converges faster; however, after the ResNet network is introduced, the later training period is not stable enough and pattern distortion easily occurs. Other researchers added an FPN (Feature Pyramid Network) structure to the Faster R-CNN backbone and replaced the coarse ROI Pooling algorithm with ROI Align, obtaining anchor boxes better suited to defect targets, but Faster R-CNN usually has long detection time and is not suitable for real-time target detection. Qiu et al. combined the YOLO and CNN models, applied a deconvolutional neural network to the high-level features of the feature pyramid, and trained the extracted multi-scale convolutional features in a classification model using the feature expression of small, feature-rich targets in the middle layers of the CNN. In summary, these algorithms struggle to extract target features effectively, the network models have insufficient capability to fuse multi-scale target features, and target detection precision remains to be improved.
Disclosure of Invention
The invention aims to: the invention aims to provide a wind turbine surface defect detection method which can accurately acquire an initial defect image of the surface of a wind turbine, avoid the increase of the defect area and improve the efficiency of wind power generation.
The technical scheme is as follows: the invention relates to a method for detecting surface defects of a wind turbine, which comprises the following steps:
s1, collecting a damage detection data image of the surface of a wind turbine, and carrying out picture annotation; performing enhancement processing on the marked pictures, and dividing the pictures into a training set, a verification set and a test set;
s2, an improved YOLOv5S model is constructed by changing a convolution mode, integrating a lightweight network Bottleneck structure, adding a weighted bidirectional feature pyramid network and adding a high-efficiency channel attention mechanism method;
s3, performing bbox regression by adopting an alpha IoU loss function;
and S4, inputting the data set obtained in the step S1 into an improved YOLOv5S model for training and testing to obtain a parameter model meeting the conditions, and outputting a surface defect effect diagram.
Further, the detailed implementation steps of step S1 are as follows:
s11, using the LabelImg image-annotation tool to annotate targets in pictures of wind turbines with surface defects, dividing the defects into six detection labels: crack, dirt, void, exposure, rust and thunderstrike; generating xml labels in one-to-one correspondence with each type of defect;
s12, enhancing the annotated pictures by adding image blurring, HSV enhancement, rotation, scaling, translation, shearing, perspective and flipping; the enhanced pictures are divided in an 8:1:1 ratio into a training set, a validation set and a test set.
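The 8:1:1 division in step S12 can be sketched as follows; this is a minimal illustration, and the random seed and use of index lists (rather than image files) are assumptions, not taken from the patent:

```python
import random

def split_dataset(items, seed=0):
    """Shuffle items and divide them 8:1:1 into train/val/test lists."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

# Example with the 2996 annotated images mentioned in the description
train, val, test = split_dataset(range(2996))
```

Because of integer truncation the test split absorbs the remainder, so the three parts always cover the whole set exactly once.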
Further, in step S2, an improved YOLOv5 model is constructed to include an Input end, a Backbone network, a Neck network, and a Head output end, and the detailed implementation steps are as follows:
s21, setting the depth and width of the model; in the Backbone network, constructing the Bottleneck layer structure from the lightweight MobileNetv3 architecture: the first convolution block is a CBH function, and the second through sixth convolution blocks are all Bottleneck networks;
s22, introducing an SE attention mechanism into the Bottleneck network; in the Neck network, adding an ECA channel attention mechanism to each convolution layer in the BiFPN;
and S23, compressing the number of channels of feature graphs of different scales extracted from a Bottleneck layer by adopting a BiFPN weighted bidirectional feature pyramid network, and fusing multi-scale features.
Further, the first Bottleneck layer of the backbone extracts features through DW (depthwise) convolution, introduces the h-swish activation function combined with a SENet attention mechanism, and then uses a 1×1 convolution layer to achieve dimensionality reduction;
the second through fifth Bottleneck layers first use 1×1 convolution to raise the dimensionality, then extract features through DW convolution, introduce an SE attention mechanism combined with the h-swish activation function, and finally use a 1×1 convolution layer to reduce the dimensionality.
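The h-swish activation and the SE channel reweighting used inside these Bottleneck layers can be illustrated with a minimal NumPy sketch; the tensor shapes, weight matrices and reduction ratio below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def h_swish(x):
    """h-swish(x) = x * ReLU6(x + 3) / 6, the activation used in MobileNetv3."""
    return x * np.clip(x + 3.0, 0.0, 6.0) / 6.0

def se_reweight(feat, w1, w2):
    """SE attention on a (C, H, W) feature map: global-average-pool per
    channel (squeeze), two FC layers (excite), sigmoid gate, then rescale
    each channel by its gate value."""
    s = feat.mean(axis=(1, 2))               # squeeze: (C,)
    z = np.maximum(w1 @ s, 0.0)              # excite with ReLU: (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ z)))   # sigmoid gate: (C,)
    return feat * gate[:, None, None]

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8))   # reduction ratio r = 4 (assumed)
w2 = rng.standard_normal((8, 2))
out = se_reweight(h_swish(feat), w1, w2)
```

The gate lies in (0, 1) per channel, so SE can only rescale channels, never change their sign, which is why it is cheap to bolt onto an existing block.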
Further, in step S3, CIoU Loss is used as a Loss function, and the expression is as follows:
IoU = |A∩B| / |A∪B|

L_CIoU = 1 − IoU + P²/C² + βv

v = (4/π²)·(arctan(w^gt/h^gt) − arctan(w/h))²

β = v / ((1 − IoU) + v)

where P is the distance between the center points of the prediction box B and the ground-truth box B^gt; C is the diagonal length of the minimum enclosing rectangle of B and B^gt; v measures the similarity of the aspect ratios of B and B^gt; w^gt and h^gt are the width and height of the ground-truth box, and w and h those of the prediction box; β is a weight coefficient; α is the power exponent of the power IoU loss; A denotes the ground-truth box and B the prediction box.
Further, in step S4, inputting the data set into the improved YOLOv5S model for training and testing, and specifically implementing the steps of obtaining a parameter model satisfying the conditions as follows:
s41, inputting 80% of the data set as a training set and 10% of the data set as a verification set into an improved YOLOv5S model, setting training parameters of Batch-size, learning-rate and Epochs at the same time, and training by using a YOLOv5 weight file to obtain an optimal parameter model;
s42, inputting 10% of the data set into the parameter model obtained in the step S41 as a test set for testing, and outputting a prediction effect graph;
and S43, comparing the test effect graph in the step S42 with the label graph to obtain an improved YOLOv5S comparison output result.
Compared with the prior art, the invention has the following remarkable effects:
1. The method integrates a lightweight network structure into the existing YOLOv5s model to replace the original backbone, adds an efficient channel attention mechanism, adds multi-scale feature fusion and improves the original loss function, so that the feature information of defect targets in the data set is captured accurately and the recognition accuracy and recall of the YOLOv5s model are effectively improved; compared with the mainstream detection models YOLOv4 and YOLOv5s, accuracy is improved by 5.19% and 5.44% respectively, and detection speed is increased by 37.23 FPS and 12.27 FPS respectively;
2. The improved MobileNetv3 network is introduced into the backbone feature-extraction network to balance model lightness against accuracy; a BiFPN fusion mode enhances the multi-scale adaptability of the neural network and improves fusion speed and efficiency; feature weights are adjusted in a lightweight adaptive manner, and an ECAnet channel attention mechanism further improves the feature-extraction capability of the network. For the loss function, the bounding-box regression loss is changed to α-IoU Loss, further improving bbox regression precision. The method effectively prevents the defect area from growing, reduces economic loss, and has research value and application prospects.
Drawings
FIG. 1 is a general flow chart of the invention;
FIG. 2 is a diagram of the improved YOLOv5s_MEB network structure of the invention;
FIG. 3 (a) first-layer Bottleneck network structure diagram;
FIG. 3 (b) network structure diagram of the remaining Bottleneck layers;
FIG. 4 illustrates the preprocessing effect of the improved YOLOv5s_MEB training data set;
FIG. 5 (a) box_loss curve on the training set;
FIG. 5 (b) obj_loss curve on the training set;
FIG. 5 (c) cls_loss curve on the training set;
FIG. 5 (d) accuracy curve;
FIG. 5 (e) recall curve;
FIG. 5 (f) box_loss curve on the validation set;
FIG. 5 (g) obj_loss curve on the validation set;
FIG. 5 (h) cls_loss curve on the validation set;
FIG. 5 (i) mAP_0.5 curve;
FIG. 5 (j) mAP_0.95 curve.
Detailed Description
The invention is described in further detail below with reference to the drawings and the detailed description.
In this method, first, for the various complex scenes encountered in actual detection, the experimental data set is processed with blurring, Gaussian noise and other operations so that it matches complex real-world conditions. Second, in terms of network structure, a MobileNetv3 lightweight network model is adapted in the backbone feature-extraction network, reducing the model volume; an ECAnet channel attention mechanism is then introduced between the backbone and neck networks, strengthening the saliency of the fan surface region; BiFPN then replaces the FPN + PANet structure, so that the model expresses diverse target features more effectively. For the loss function, α-IoU Loss is introduced, which further improves bbox regression precision and raises the recall rate for small targets. When the improved YOLOv5s_MEB model detects defect targets, performance indicators such as accuracy, recall, mAP and frame rate improve over other models by varying margins; the model can complete fan surface-defect detection in complex environments and has application value in industrial deployment.
FIG. 1 is a general flow chart, the present invention includes the following steps:
step 1, collecting a data set of wind turbine surface defects and annotating the pictures; preprocessing the images and performing operations such as data augmentation;
step 2, changing the convolution mode of YOLOv5s, adding a lightweight network model, integrating an efficient channel attention mechanism, and adding a weighted bidirectional feature pyramid network;
step 3, improving the original IoU loss function;
and step 4, inputting the data set obtained in step 1 into the improved YOLOv5s model for training and testing to obtain the optimal parameter model, and outputting a prediction effect diagram.
The detailed implementation process is as follows:
Step 1: following the standard PASCAL VOC target-detection data-set format, the LabelImg image-annotation tool is used to annotate 2996 pictures of wind turbines with surface defects. The defects are divided into six detection labels: crack (large-area defects caused by paint peeling and the like); dirt (soiling caused by wind-blown sand and the like); void (holes and gaps due to natural corrosion); exposure (narrow, elongated defects at component joints); rust (untreated defects that have rusted over time); and thunderstrike (damage to the fan caused by severe weather such as thunderstorms). During annotation, the complete defect area is selected as far as possible to prevent loss of image features. The label file contains the label type, prediction-box coordinates and other information. Xml labels are generated in one-to-one correspondence with each type of defect, and the image size is 608 × 608.
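A PASCAL VOC-style annotation of the kind LabelImg produces can be generated with the standard library alone; the file name, label and box coordinates below are illustrative examples, not data from the patent:

```python
import xml.etree.ElementTree as ET

def voc_annotation(filename, width, height, label, box):
    """Build a minimal PASCAL VOC xml annotation for one object.
    box is (xmin, ymin, xmax, ymax) in pixels."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    for tag, val in (("width", width), ("height", height), ("depth", 3)):
        ET.SubElement(size, tag).text = str(val)
    obj = ET.SubElement(root, "object")
    ET.SubElement(obj, "name").text = label
    bnd = ET.SubElement(obj, "bndbox")
    for tag, val in zip(("xmin", "ymin", "xmax", "ymax"), box):
        ET.SubElement(bnd, tag).text = str(val)
    return ET.tostring(root, encoding="unicode")

xml_text = voc_annotation("turbine_0001.jpg", 608, 608, "crack", (120, 80, 260, 190))
```

One such file per image, with one `object` element per marked defect, matches the one-to-one xml labels described above.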
After HSV enhancement, rotation, scaling, translation, shearing, perspective, flipping and other operations, an image-blurring algorithm is added for detecting fan surface defects in complex environments, simulating pictures shot in severe weather; brightness-contrast transformation is added to simulate pictures shot in dim light; and Gaussian noise is added to part of the image data to complete the augmentation of the data set. These operations improve the robustness of the detection model during training and the detection success rate for small and stacked targets. The data set is divided into three parts: 80% training set, 10% validation set and 10% test set.
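The noise and dim-light augmentations described above can be sketched in NumPy; the noise level, contrast factor and brightness offset are illustrative assumptions:

```python
import numpy as np

def add_gaussian_noise(img, sigma=10.0, seed=0):
    """Simulate sensor noise in bad weather: add N(0, sigma) noise
    and clip back to the valid [0, 255] range."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def adjust_brightness_contrast(img, alpha=0.6, beta=-20):
    """Simulate dim light: out = alpha * img + beta, clipped to [0, 255]."""
    out = alpha * img.astype(np.float64) + beta
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.full((4, 4, 3), 128, dtype=np.uint8)   # flat mid-gray test image
dim = adjust_brightness_contrast(img)
noisy = add_gaussian_noise(img)
```

Applying these only to a random subset of images, as the description does with Gaussian noise, keeps the augmented set from drifting too far from the original distribution.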
Step 2, as shown in fig. 2, in the Backbone network of the backhaul of the encoding part of the network structure diagram of YOLOv5s, the original CSPdarknet layer is mainly replaced by the Bottleneck function in the lightweight network architecture MobileNetv 3. The Bottleneck layer of the MobileNet v3 is used for feature extraction of each feature layer, the Bottleneck layer is improved, dimension reduction of an input end picture is performed through the CBH layer, the CBH function is composed of a Conv layer, BN normalization and an h-swish function, the first layer of Bottleneck extracts features through DW convolution, and the h-swish activation function and SENet (Squeeze-and-Excitation) attention mechanism are introduced to be combined, so that the extraction capability of key features of small targets is improved. Then, dimension reduction is realized by using the 1 × 1 convolution layer. The redundant structure in the original Bottleneck layer is removed, and the network operation speed is improved on the premise of reducing network parameters by changing the convolution layer and the convolution mode, so that the network pays attention to more useful channel information to adjust the weight of each channel. And the rest Bottleneck layers firstly use 1 multiplied by 1 convolution to realize dimension increasing, then extract features through DW convolution, introduce an SE attention mechanism and combine with an h-swish activation function, improve the capture capability of local channel information, inhibit some feature information which is useless to the current task, finally use 1 multiplied by 1 convolution layer to realize dimension reduction, reduce the parameters and the calculated amount of the model to the maximum extent and have small influence on the detection accuracy.
As shown in fig. 3 (a) and 3 (b), given that the SENet attention mechanism has already been introduced into the modified MobileNetv3 during the Bottleneck depthwise convolution, YOLOv5s_MEB introduces an ECAnet (Efficient Channel Attention) module between the backbone network and the neck network to enhance the expressive capability of the neural network. SENet is a common channel attention mechanism at the present stage; it uses global pooling to reduce the parameter count and prevent overfitting in that layer. ECAnet improves the representation of key information more effectively than SENet while reducing computation. The propagation of information between backbone and neck is crucial in a convolutional neural network: ECAnet accepts the feature map output by a Bottleneck layer and is introduced in the up-sampling process of the neck C3 modules, for 7 ECAnet insertion points in total. ECAnet captures local channel information between network layers, obtains the importance of each feature channel, and suppresses feature information useless to the current task. First, a GAP (global average pooling) operation converts the input W × H × C feature map (W and H are its width and height, C its number of channels) into a 1 × 1 × C vector. A one-dimensional convolution whose kernel size k is computed from the number of channels then performs cross-channel interaction to obtain the weight of each channel, and finally normalization and channel-by-channel multiplication yield the channel-attended feature map.
ECAnet performs cross-channel information interaction in a lightweight mode without reducing the channel dimension, so that the calculation cost of the model is reduced, and obvious performance gain is brought.
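A minimal NumPy sketch of the ECA idea described above; the kernel size, uniform kernel weights and input shape are illustrative assumptions, and a real implementation would use a learned 1-D convolution over batched tensors:

```python
import numpy as np

def eca(feat, k=3):
    """Efficient Channel Attention on a (C, H, W) feature map:
    global average pool, 1-D convolution of size k across the channel
    axis (cross-channel interaction without dimensionality reduction),
    sigmoid gate, then channel-wise rescale."""
    s = feat.mean(axis=(1, 2))                   # GAP: (C,)
    kernel = np.full(k, 1.0 / k)                 # illustrative conv weights
    mixed = np.convolve(s, kernel, mode="same")  # local cross-channel mixing
    gate = 1.0 / (1.0 + np.exp(-mixed))          # sigmoid weights in (0, 1)
    return feat * gate[:, None, None]

rng = np.random.default_rng(1)
feat = rng.standard_normal((16, 8, 8))
out = eca(feat)
```

Unlike SE, there is no bottleneck FC pair: the 1-D convolution touches only k neighboring channels per weight, which is the source of ECA's low cost.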
The neck network in YOLOv5s_MEB replaces the FPN + PANet network with a BiFPN weighted bidirectional feature pyramid network. BiFPN deletes nodes that have only one input edge and perform no feature fusion, simplifying the bidirectional network, and treats each bidirectional path as one feature-network layer applied repeatedly to achieve higher-level feature fusion. Features at 3 different scales are extracted from the Bottleneck layers of the backbone; before output, the fourth and fifth Bottleneck layers undergo weighted channel fusion, and the channel counts are compressed and unified before being fed into the BiFPN. The C3 layer receives the Bottleneck output via up-sampling and the inserted ECAnet attention, combines with the CBL layer, and after several Concat groups the convolution computation strengthens the backbone's feature extraction; finally, feature maps of different scales are sent to the detection heads to detect large, medium and small targets respectively.
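BiFPN's weighted channel fusion can be sketched as its "fast normalized fusion" rule, out = Σ wᵢ·Iᵢ / (Σ wᵢ + ε) with wᵢ kept non-negative via ReLU; the weights below are illustrative stand-ins for learned parameters:

```python
import numpy as np

def weighted_fusion(inputs, weights, eps=1e-4):
    """Fast normalized fusion: ReLU the learnable per-input weights,
    normalize them, then take a weighted sum of same-shaped feature maps."""
    w = np.maximum(np.asarray(weights, dtype=np.float64), 0.0)  # ReLU
    w = w / (w.sum() + eps)                                     # normalize
    return sum(wi * x for wi, x in zip(w, inputs))

a = np.ones((4, 4))
b = np.full((4, 4), 3.0)
fused = weighted_fusion([a, b], weights=[1.0, 1.0])
```

The normalization keeps every fused value a convex combination of its inputs, so repeated bidirectional passes stay numerically stable without a softmax.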
In step 3, IoU (intersection over union) loss measures the overlap between the prediction box and the ground-truth box in target detection, where A denotes the ground-truth box and B the prediction box, as shown in equation (1). The YOLOv5s model uses CIoU Loss as its loss function, as shown in equation (2):
IoU = |A∩B| / |A∪B|    (1)

L_CIoU = 1 − IoU + P²/C² + βv    (2)

v = (4/π²)·(arctan(w^gt/h^gt) − arctan(w/h))²    (3)

β = v / ((1 − IoU) + v)    (4)

where P is the distance between the center points of the prediction box B and the ground-truth box B^gt; C is the diagonal length of the minimum enclosing rectangle of B and B^gt; v measures the similarity of the aspect ratios of B and B^gt; w^gt and h^gt are the width and height of the ground-truth box, and w and h those of the prediction box; β is a weight coefficient; α is the power exponent of the power IoU loss.
The CIoU Loss adds a penalty term to alleviate the problem of gradient disappearance, and further considers the center point distance and the aspect ratio between the prediction box and the real box under the penalty condition.
Based on the existing CIoU Loss, the YOLOv5s_MEB model adopts α-IoU Loss, which introduces a power transformation with exponent α. α-IoU Loss is used for accurate bbox (bounding box) regression and target detection, and is a unified exponentiation of existing IoU-based losses. Through extensive experiments and model calculations, researchers have analyzed properties of α-IoU such as order preservation and loss and gradient re-weighting; with α > 1, adaptively increasing the loss and gradient weighting of high-IoU objects improves bbox regression precision. In this embodiment α = 3 is selected; after introducing α-IoU Loss the detection effect is better than the existing CIoU-based loss, without introducing extra parameters or training time, providing greater robustness for defect detection. The expression for α-IoU Loss is as follows:
L_α-CIoU = 1 − IoU^α + (P²/C²)^α + (βv)^α
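A minimal NumPy sketch of the α-CIoU loss for axis-aligned boxes given as (x1, y1, x2, y2). It follows the equations above with α = 3 as in the embodiment, but is an illustrative reimplementation, not the patent's code, and omits batching and gradient handling:

```python
import numpy as np

def alpha_ciou_loss(box_p, box_g, alpha=3.0):
    """alpha-CIoU: 1 - IoU^a + (P^2/C^2)^a + (beta*v)^a for (x1,y1,x2,y2) boxes."""
    # intersection area and IoU
    xi1, yi1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    xi2, yi2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, xi2 - xi1) * max(0.0, yi2 - yi1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area(box_p) + area(box_g) - inter)
    # squared center distance P^2 and enclosing-box diagonal C^2
    cp = ((box_p[0] + box_p[2]) / 2, (box_p[1] + box_p[3]) / 2)
    cg = ((box_g[0] + box_g[2]) / 2, (box_g[1] + box_g[3]) / 2)
    p2 = (cp[0] - cg[0]) ** 2 + (cp[1] - cg[1]) ** 2
    cx1, cy1 = min(box_p[0], box_g[0]), min(box_p[1], box_g[1])
    cx2, cy2 = max(box_p[2], box_g[2]), max(box_p[3], box_g[3])
    c2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2
    # aspect-ratio similarity v and its weight beta
    wp, hp = box_p[2] - box_p[0], box_p[3] - box_p[1]
    wg, hg = box_g[2] - box_g[0], box_g[3] - box_g[1]
    v = (4 / np.pi ** 2) * (np.arctan(wg / hg) - np.arctan(wp / hp)) ** 2
    beta = v / ((1 - iou) + v + 1e-12)
    return 1 - iou ** alpha + (p2 / c2) ** alpha + (beta * v) ** alpha

loss_same = alpha_ciou_loss((0, 0, 10, 10), (0, 0, 10, 10))  # perfect overlap
loss_off = alpha_ciou_loss((2, 2, 12, 12), (0, 0, 10, 10))   # shifted box
```

Setting α = 1 recovers the plain CIoU loss, so the power transform is a drop-in change to the existing loss.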
Step 4: batch-size is set to 16, learning-rate to 0.01 and epochs (number of iterations) to 300. The preprocessing result of the YOLOv5s_MEB detection model is shown in FIG. 4: crack is labeled 0, dirt 1, void 2, exposure 3, rust 4 and thunderstrike 5, and the detection of the various defects by the YOLOv5s_MEB model basically meets the requirements of industrial wind turbine surface-defect detection. As can be seen from figs. 5 (a)-(j), as training progresses the position loss (box_loss) and class loss (cls_loss) of the training set decrease, and the confidence loss (obj_loss) of the training set finally stabilizes at about 0.015 after 200 rounds. The position loss and confidence loss of the validation set stabilize at about 0.01 after 200 rounds and the class loss at about 0.001; after 250 rounds the detection accuracy of the improved model approaches 0.9, the recall rate exceeds 80%, and the mAP reaches about 85%. A weight file is saved at each training round, and after all 300 rounds the weight file with the smallest loss value is obtained. When the improved YOLOv5s_MEB model detects defect targets, performance indicators such as accuracy, recall and mAP improve over other models by varying margins; the model can complete fan surface-defect detection in complex environments and has application value in industrial deployment.

Claims (6)

1. A method of detecting surface defects of a wind turbine, comprising the steps of:
s1, collecting a damage detection data image of the surface of the wind turbine, and carrying out picture marking; performing enhancement processing on the marked pictures, and dividing the pictures into a training set, a verification set and a test set;
s2, an improved YOLOv5S model is constructed by changing a convolution mode, integrating a lightweight network Bottleneck structure, adding a weighted bidirectional feature pyramid network and adding a high-efficiency channel attention mechanism method;
s3, performing bbox regression by adopting an alpha IoU loss function;
and S4, inputting the data set obtained in the step S1 into an improved YOLOv5S model for training and testing to obtain a parameter model meeting the conditions, and outputting a surface defect effect diagram.
2. Method for detecting surface defects of a wind turbine according to claim 1, characterized in that the detailed implementation of step S1 is as follows:
s11, using the LabelImg image-annotation tool to annotate targets in pictures of wind turbines with surface defects, dividing the defects into six detection labels: crack, dirt, void, exposure, rust and thunderstrike; generating xml labels in one-to-one correspondence with each type of defect;
s12, enhancing the annotated pictures by adding image blurring, HSV enhancement, rotation, scaling, translation, shearing, perspective and flipping; the enhanced pictures are divided in an 8:1:1 ratio into a training set, a validation set and a test set.
3. The method for detecting the surface defects of the wind turbine as claimed in claim 1, wherein in step S2, a modified YOLOv5S model is constructed to comprise an Input end, a Backbone network of Backbone, a Neck network of Neck and a Head output end, and the detailed implementation steps are as follows:
s21, setting the depth and width of the model; in the Backbone network, constructing the Bottleneck layer structure from the lightweight MobileNetv3 architecture: the first convolution block is a CBH function, and the second through sixth convolution blocks are all Bottleneck networks;
s22, introducing an SE attention mechanism into the Bottleneck network; in the Neck network, adding an ECA channel attention mechanism to each convolution layer in the BiFPN;
and S23, compressing the number of channels of feature graphs of different scales extracted from a Bottleneck layer by adopting a BiFPN weighted bidirectional feature pyramid network, and fusing multi-scale features.
4. The wind turbine surface defect detection method of claim 3, wherein the first Bottleneck layer of the backbone extracts features through DW convolution, introduces the h-swish activation function combined with a SENet attention mechanism, and then uses a 1×1 convolution layer to achieve dimensionality reduction;
the second through fifth Bottleneck layers first use 1×1 convolution to raise the dimensionality, then extract features through DW convolution, introduce an SE attention mechanism combined with the h-swish activation function, and finally use a 1×1 convolution layer to reduce the dimensionality.
5. The method of claim 1, wherein in step S3, CIoU Loss is expressed as a Loss function as follows:
IoU = |A∩B| / |A∪B|

L_CIoU = 1 − IoU + P²/C² + βv

v = (4/π²)·(arctan(w^gt/h^gt) − arctan(w/h))²

β = v / ((1 − IoU) + v)

where P is the distance between the center points of the prediction box B and the ground-truth box B^gt; C is the diagonal length of the minimum enclosing rectangle of B and B^gt; v measures the similarity of the aspect ratios of B and B^gt; w^gt and h^gt are the width and height of the ground-truth box, and w and h those of the prediction box; β is a weight coefficient; α is the power exponent of the power IoU loss; A denotes the ground-truth box and B the prediction box.
6. The method for detecting surface defects of a wind turbine according to claim 1, wherein in step S4, the data set is input into an improved YOLOv5S model for training and testing, and the specific implementation steps for obtaining a parameter model satisfying the conditions are as follows:
s41, inputting 80% of the data set as a training set and 10% of the data set as a verification set into an improved YOLOv5S model, setting training parameters of Batch-size, learning-rate and Epochs at the same time, and training by using a YOLOv5S weight file to obtain an optimal parameter model;
s42, inputting 10% of the data set as a test set into the parameter model obtained in the step S41 for testing, and outputting a prediction effect graph;
and S43, comparing the test effect graph in the step S42 with the label graph to obtain an improved YOLOv5S comparison output result.
CN202211414040.1A 2022-11-11 2022-11-11 Wind turbine surface defect detection method Pending CN115719337A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211414040.1A CN115719337A (en) 2022-11-11 2022-11-11 Wind turbine surface defect detection method


Publications (1)

Publication Number Publication Date
CN115719337A true CN115719337A (en) 2023-02-28

Family

ID=85255023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211414040.1A Pending CN115719337A (en) 2022-11-11 2022-11-11 Wind turbine surface defect detection method

Country Status (1)

Country Link
CN (1) CN115719337A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116485796A (en) * 2023-06-19 2023-07-25 闽都创新实验室 Pest detection method, pest detection device, electronic equipment and storage medium
CN116681885A (en) * 2023-08-03 2023-09-01 国网安徽省电力有限公司超高压分公司 Infrared image target identification method and system for power transmission and transformation equipment
CN117011231A (en) * 2023-06-27 2023-11-07 盐城工学院 Strip steel surface defect detection method and system based on improved YOLOv5
CN117152443A (en) * 2023-10-30 2023-12-01 江西云眼视界科技股份有限公司 Image instance segmentation method and system based on semantic lead guidance
CN117456610A (en) * 2023-12-21 2024-01-26 浪潮软件科技有限公司 Climbing abnormal behavior detection method and system and electronic equipment


Similar Documents

Publication Publication Date Title
CN115719337A (en) Wind turbine surface defect detection method
CN111461190B (en) Deep convolutional neural network-based non-equilibrium ship classification method
Zhao et al. Cloud shape classification system based on multi-channel CNN and improved FDM
CN108427920A (en) A kind of land and sea border defense object detection method based on deep learning
CN108961235A (en) A kind of disordered insulator recognition methods based on YOLOv3 network and particle filter algorithm
CN114627360A (en) Substation equipment defect identification method based on cascade detection model
CN107742099A (en) A kind of crowd density estimation based on full convolutional network, the method for demographics
CN111178177A (en) Cucumber disease identification method based on convolutional neural network
CN116229052B (en) Method for detecting state change of substation equipment based on twin network
CN110490188A (en) A kind of target object rapid detection method based on SSD network improvement type
CN114155474A (en) Damage identification technology based on video semantic segmentation algorithm
Qiu et al. A lightweight YOLOv4-EDAM model for accurate and real-time detection of foreign objects suspended on power lines
CN115830535A (en) Method, system, equipment and medium for detecting accumulated water in peripheral area of transformer substation
Zhang et al. A meta-learning framework for few-shot classification of remote sensing scene
CN114332083A (en) PFNet-based industrial product camouflage flaw identification method
CN112668662B (en) Outdoor mountain forest environment target detection method based on improved YOLOv3 network
CN117710824A (en) Core image RQD intelligent analysis method based on improved Cascade Mask R-CNN model
CN117496223A (en) Light insulator defect detection method and device based on deep learning
CN117593244A (en) Film product defect detection method based on improved attention mechanism
CN117115616A (en) Real-time low-illumination image target detection method based on convolutional neural network
CN116630989A (en) Visual fault detection method and system for intelligent ammeter, electronic equipment and storage medium
CN113496159B (en) Multi-scale convolution and dynamic weight cost function smoke target segmentation method
Li et al. Focus on local: transmission line defect detection via feature refinement
Yuan et al. LR-ProtoNet: Meta-Learning for Low-Resolution Few-Shot Recognition and Classification
CN117853923B (en) Power grid power infrastructure safety evaluation analysis method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination