CN116664487A - Composite insulator defect detection system and method based on deep learning - Google Patents

Composite insulator defect detection system and method based on deep learning

Info

Publication number
CN116664487A
CN116664487A (application CN202310443642.8A)
Authority
CN
China
Prior art keywords
composite insulator
convolution
semantic features
module
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310443642.8A
Other languages
Chinese (zh)
Inventor
张博阳
刘馨阳
张艳鹏
苏宝林
倪艳姝
卢振生
郭金宇
王长龙
裴兆兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suihua University
Original Assignee
Suihua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suihua University filed Critical Suihua University
Priority to CN202310443642.8A priority Critical patent/CN116664487A/en
Publication of CN116664487A publication Critical patent/CN116664487A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J5/00Radiation pyrometry, e.g. infrared or optical thermometry
    • G01J5/0096Radiation pyrometry, e.g. infrared or optical thermometry for measuring wires, electrical contacts or electronic systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration by the use of histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Abstract

The invention discloses a composite insulator defect detection system and method based on deep learning, and relates to the technical field of defect detection. The image acquisition module acquires infrared images in real time; the YOLOv5 module is used for carrying out target detection on the infrared image and determining a target area containing the composite insulator; the segmentation module is used for segmenting the composite insulator region in the target region; the mapping module is used for establishing a pixel distribution histogram of the composite insulator region; the detection result output module is used for judging whether the temperature value corresponding to the pixel distribution histogram of the composite insulator region exceeds a temperature threshold value; if yes, outputting a detection result with defects; if not, returning to execute the operation of collecting the infrared image in real time. The invention improves the defect detection accuracy of the composite insulator while improving the defect detection efficiency of the composite insulator.

Description

Composite insulator defect detection system and method based on deep learning
Technical Field
The invention relates to the technical field of defect detection, in particular to a composite insulator defect detection system and method based on deep learning.
Background
The composite insulator has the advantages of excellent pollution flashover resistance, light weight and convenient installation, and is an important basic insulating component in high-voltage power transmission and distribution lines and substations, providing both electrical insulation and mechanical support between the conductor and the tower. At present, about 700,000 composite insulators are in service in China's power system. As the scale of deployment and the operating period have grown, composite insulators have in recent years frequently developed defects such as abnormal heating: partial discharge, dielectric loss, or resistive loss when leakage current flows through the insulating material can all cause the insulator temperature to rise. Further development of such defects may lead to severe faults such as internal insulation breakdown or string breakage. Defect detection of composite insulators, ensuring their reliable operation, is therefore of great significance to the stable operation of the power system.
In order to rapidly and accurately detect and locate defective composite insulators, a series of studies have been carried out at home and abroad. These can solve the composite insulator defect detection problem to a certain extent, but shortcomings remain: for example, detection that relies on manual tower climbing suffers from safety hazards, low efficiency and poor accuracy.
Disclosure of Invention
The embodiment of the invention aims to provide a composite insulator defect detection system and method based on deep learning, which can improve the accuracy of composite insulator defect detection while improving the efficiency of composite insulator defect detection.
In order to achieve the above object, the embodiment of the present invention provides the following solutions:
a deep learning based composite insulator defect detection system, comprising:
the image acquisition module is used for acquiring infrared images in real time;
the YOLOv5 module is connected with the image acquisition module and is used for carrying out target detection on the infrared image and determining a target area containing the composite insulator;
the segmentation module is connected with the YOLOv5 module and is used for segmenting the composite insulator region in the target region;
the mapping module is connected with the segmentation module and is used for establishing a pixel distribution histogram of the composite insulator region;
the detection result output module is connected with the mapping module and is used for:
judging whether a temperature value corresponding to a pixel distribution histogram of the composite insulator region exceeds a temperature threshold value or not;
if yes, outputting a detection result with defects;
if not, returning to execute the operation of collecting the infrared image in real time.
Optionally, the YOLOv5 module specifically includes:
the feature extraction unit is connected with the image acquisition module and is used for carrying out feature recognition on the infrared image to obtain depth semantic features;
the feature fusion unit is connected with the feature extraction unit and used for fusing the depth semantic features to obtain multi-scale features;
and the output unit is connected with the characteristic fusion unit and is used for carrying out regression prediction according to the multi-scale characteristics to obtain a target area in the infrared image.
Optionally, the feature extraction unit specifically includes a number a of convolution operation layers, a being greater than or equal to 2; any one of the convolution operation layers specifically comprises:
a first convolution layer for:
performing Depthwise convolution operation on the input data to obtain a first semantic feature; any one convolution kernel in the first convolution layer only carries out convolution operation on one input channel in all input channels; the convolution kernel size is k×k×1, wherein k represents the size of the convolution kernel, and 1 represents that convolution operation is performed on only one input channel in all input channels; the number of output channels of the first convolution layer is the same as the number of input channels;
a second convolution layer, coupled to said first convolution layer, for:
performing Pointwise convolution operation on the input first semantic features to obtain second semantic features; any one convolution kernel in the second convolution layer carries out convolution operation on all input channels of the input first semantic features; the convolution kernel size is 1×1×n, where n represents the number of input channels;
an ECA attention mechanism layer, coupled to the second convolution layer, for:
carrying out global average pooling on the second semantic features to obtain second semantic features after global average pooling;
carrying out convolution operation on the second semantic features subjected to global average pooling, and simultaneously carrying out weight calculation on the second semantic features subjected to global average pooling by adopting an activation function to obtain attention weights; the convolution kernel size of the ECA attention mechanism layer is k; the attention weight is w, and the attention weight corresponds to the second semantic features one by one;
and multiplying the second semantic features by the attention weights by corresponding elements to obtain the depth semantic features.
Optionally, the segmentation module specifically includes:
the rough segmentation unit is connected with the output unit and is used for carrying out rough segmentation on the target area to obtain a rough composite insulator area;
and the fine segmentation unit is connected with the coarse segmentation unit and is used for carrying out fine segmentation on the coarse composite insulator region to obtain the precise composite insulator region.
Optionally, the rough segmentation unit specifically includes:
the third convolution layer is connected with the output unit and is used for carrying out convolution operation on the input target area to obtain third semantic features;
the fourth convolution layer is connected with the third convolution layer and is used for carrying out convolution operation on the input third semantic features to obtain fusion semantic features;
and the decoder is respectively connected with the feature extraction unit and the fourth convolution layer and is used for decoding the depth semantic features and the fusion semantic features to obtain the rough composite insulator region.
Optionally, the fourth convolution layer specifically includes:
n cavity convolution blocks for obtaining N features; n is more than or equal to 4; the expansion rates of the cavity convolution blocks are different;
a global average pooling block for obtaining global average pooling characteristics;
and the N features are fused with the global average pooling feature to obtain the fused semantic feature.
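The fourth convolution layer above, with N dilated ("cavity") convolution blocks of different dilation rates plus a global average pooling block, resembles an ASPP-style structure. A minimal single-channel NumPy sketch follows; the averaging kernel, the rate set and the stack-based fusion are all assumptions for illustration, not the patent's actual implementation.

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """3x3 dilated (atrous) convolution on a single-channel map,
    zero-padded so the output keeps the input's spatial size."""
    k = kernel.shape[0]
    pad = rate * (k - 1) // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            patch = xp[i:i + rate * (k - 1) + 1:rate,
                       j:j + rate * (k - 1) + 1:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

def aspp_like_fusion(x, rates=(1, 6, 12, 18)):
    """Run N=4 dilated-convolution branches plus a global-average-pooling
    branch and stack the results channel-wise as a toy fusion."""
    kernel = np.full((3, 3), 1.0 / 9.0)          # toy averaging kernel
    branches = [dilated_conv2d(x, kernel, r) for r in rates]
    gap = np.full_like(x, x.mean())              # GAP broadcast to map size
    return np.stack(branches + [gap], axis=0)    # (N+1, H, W)
```

Larger dilation rates sample the same 3×3 kernel over a wider receptive field without adding parameters, which is why the blocks use different rates.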
Optionally, the fine segmentation unit specifically includes:
the point selecting layer is connected with the decoder and is used for selecting points of the rough composite insulator region to obtain sampling points;
the feature merging layer is connected with the point selection layer and is used for carrying out feature merging on the depth semantic features and the fusion semantic features of the sampling points to obtain merged features;
and the MLP model is connected with the feature merging layer and used for predicting the merging features to obtain the accurate composite insulator region.
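The point-selection-plus-MLP refinement above resembles a PointRend-style scheme. A minimal NumPy sketch under that assumption follows; the uncertainty heuristic, the feature shapes and the one-layer MLP weights `w`, `b` are all illustrative, not the patent's actual implementation.

```python
import numpy as np

def select_uncertain_points(coarse_prob, k):
    """Pick the k pixels whose coarse foreground probability is closest
    to 0.5, i.e. the points the coarse mask is least certain about."""
    uncertainty = -np.abs(coarse_prob.ravel() - 0.5)
    idx = np.argsort(uncertainty)[::-1][:k]          # most uncertain first
    return np.stack(np.unravel_index(idx, coarse_prob.shape), axis=1)

def refine_points(points, deep_feat, fused_feat, w, b):
    """Concatenate the depth and fusion semantic features at each sampled
    point and re-predict with a one-layer MLP (weights w, b assumed)."""
    merged = np.concatenate(
        [deep_feat[points[:, 0], points[:, 1]],
         fused_feat[points[:, 0], points[:, 1]]], axis=1)
    logits = merged @ w + b
    return 1.0 / (1.0 + np.exp(-logits))             # refined probabilities
```

Only the sampled boundary points are re-predicted, which is what makes this refinement cheap relative to re-running a dense segmentation head.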
Optionally, the mapping module specifically includes:
and the mask unit is connected with the MLP model and is used for:
performing mask processing on the accurate composite insulator region to obtain a masked composite insulator region;
mapping the masked composite insulator region back to the infrared image to obtain an infrared-image composite insulator region with the real pixel distribution;
and mapping the infrared-image composite insulator region with the real pixel distribution to temperature values to obtain the pixel distribution histogram of the composite insulator region.
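The mask-and-map step above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patent's implementation: the linear pixel-to-temperature calibration (t = t_min + v/255 · (t_max − t_min)) and the bin count are assumptions standing in for the infrared camera's radiometric mapping.

```python
import numpy as np

def insulator_histogram(ir_image, mask, t_min, t_max, bins=16):
    """Mask the segmented insulator region in the raw infrared image and
    build a pixel-distribution histogram; pixel values are mapped to
    temperature with an assumed linear radiometric calibration."""
    pixels = ir_image[mask.astype(bool)]          # real pixel distribution
    temps = t_min + pixels / 255.0 * (t_max - t_min)
    hist, edges = np.histogram(temps, bins=bins, range=(t_min, t_max))
    return hist, edges, temps.max()
```

The returned maximum region temperature is what the detection result output module would compare against the threshold.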
In order to achieve the above purpose, the embodiment of the present invention further provides the following solutions:
a composite insulator defect detection method based on deep learning comprises the following steps:
acquiring an infrared image in real time;
performing target detection on the infrared image to determine a target area containing the composite insulator;
dividing a composite insulator region in the target region;
establishing a pixel distribution histogram of the composite insulator region;
judging whether a temperature value corresponding to a pixel distribution histogram of the composite insulator region exceeds a temperature threshold value or not;
if yes, outputting a detection result with defects;
if not, returning to execute the step of collecting the infrared image in real time.
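The method steps above can be sketched as a simple orchestration loop. Everything here is illustrative: the stage callables `detect_roi`, `segment` and `region_max_temp` are assumptions standing in for the YOLOv5, segmentation and mapping modules, and the 18.3 °C default echoes the example threshold given later in the detailed description.

```python
def inspect_frames(frames, detect_roi, segment, region_max_temp,
                   threshold=18.3):
    """Process infrared frames in acquisition order; the first frame whose
    insulator region exceeds the temperature threshold yields a defect
    result, otherwise acquisition continues with the next frame."""
    for i, frame in enumerate(frames):
        roi = detect_roi(frame)            # step 2: target detection
        region = segment(roi)              # step 3: segment insulator region
        t_max = region_max_temp(region)    # step 4: histogram / temperature map
        if t_max > threshold:              # step 5: threshold decision
            return {"frame": i, "result": "defect", "t_max": t_max}
    return {"result": "no defect"}
```

In a real deployment `frames` would be an endless camera stream rather than a finite list; the early return models the "output a defect result" branch and the loop continuation models "return to collecting infrared images".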
According to the embodiment of the invention, the image acquisition module acquires infrared images in real time, which avoids the power outage required for manual tower-climbing inspection, improves detection efficiency, keeps staff away from the composite insulator, and thus improves their safety. The YOLOv5 module locates and extracts the target region in the infrared image, realizing coarse positioning of the composite insulator target region; the segmentation module segments the composite insulator region within the target region, realizing accurate segmentation of the composite insulator region in the infrared image and improving segmentation accuracy. The mapping module establishes the pixel distribution histogram of the composite insulator region, realizing visualization of the composite insulator temperature and allowing staff to check the temperature value in real time. The detection result output module judges whether the temperature value corresponding to the pixel distribution histogram of the composite insulator region exceeds a temperature threshold; if yes, it outputs a detection result indicating a defect; if not, it returns to the operation of collecting infrared images in real time. This realizes an alarm to staff when the composite insulator temperature is too high, improving the safety and stability of power grid operation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a deep learning-based composite insulator defect detection system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a portion of an infrared image set provided by an embodiment of the present invention;
FIG. 3 is a detailed block diagram of a YOLOv5 module according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a first convolution layer and a second convolution layer according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an ECA attention mechanism layer according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a segmentation module according to an embodiment of the present invention;
FIG. 7 is a histogram of pixel distribution according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a mask unit according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a target area in an infrared image according to an embodiment of the present invention;
FIG. 10 is a schematic view of a rough composite insulator region provided by an embodiment of the present invention;
FIG. 11 is a schematic diagram of a precise composite insulator region according to an embodiment of the present invention;
fig. 12 is a schematic flow chart of a method for detecting defects of a composite insulator based on deep learning according to an embodiment of the present invention.
Symbol description:
the device comprises an image acquisition module-1, a YOLOv5 module-2, a feature extraction unit-21, a feature fusion unit-22, an output unit-23, a segmentation module-3, a mapping module-4 and a detection result output module-5.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention aims to provide a composite insulator defect detection system and method based on deep learning, which are used for solving the problems of low efficiency and low accuracy of the existing composite insulator defect detection.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Fig. 1 illustrates an exemplary structure of the deep learning-based composite insulator defect detection system described above. It comprises at least: an image acquisition module 1, a YOLOv5 module 2, a segmentation module 3, a mapping module 4 and a detection result output module 5. The modules are described in detail below.
The image acquisition module 1 is used for acquiring infrared images in real time.
In one example, the image acquisition module 1 may acquire infrared images of the composite insulator in real time, or at regular intervals; it may likewise acquire infrared images of other components in the transmission line at fixed times or in real time. For example, the image acquisition module 1 may be a large intelligent unmanned aerial vehicle. The resolution of each infrared image is 640×512; all the infrared images form a data set containing 1600 infrared images in total, part of which is shown in Fig. 2.
The YOLOv5 module 2 is connected with the image acquisition module 1, and the YOLOv5 module 2 is used for carrying out target detection on the infrared image and determining a target area containing the composite insulator.
In one example, the YOLOv5 module 2 is a single-stage object detection network, characterized by fast inference speed and high detection accuracy, which can quickly and accurately locate and extract the target region in the infrared image. See Fig. 9 for a target region in an infrared image.
The segmentation module 3 is connected to the YOLOv5 module 2, the segmentation module 3 being used for segmenting the composite insulator region in the target region.
In one example, after the YOLOv5 module 2 locates and extracts the target region in the infrared image, the segmentation module 3 segments the composite insulator region in the target region.
The mapping module 4 is connected with the segmentation module 3, and the mapping module 4 is used for establishing a pixel distribution histogram of the composite insulator region.
In one example, after the segmentation module 3 segments the composite insulator region in the target region, the mapping module 4 creates a pixel distribution histogram of the composite insulator region.
The detection result output module 5 is connected with the mapping module 4, and the detection result output module 5 is used for:
judging whether the temperature value corresponding to the pixel distribution histogram of the composite insulator region exceeds a temperature threshold value or not;
if yes, outputting a detection result with defects;
if not, returning to execute the operation of collecting the infrared image in real time.
In one example, the detection result output module 5 determines whether the temperature value corresponding to the pixel distribution histogram of the composite insulator region exceeds the temperature threshold. If the mapped temperature value exceeds the temperature threshold, the detection result output module 5 outputs a detection result indicating a defect; if it does not, the module returns to performing the operation of collecting infrared images in real time. For example, the temperature threshold is 18.3 degrees Celsius.
In summary, in the embodiment of the invention, the image acquisition module acquires infrared images in real time, which avoids the power outage required for manual tower-climbing inspection, improves detection efficiency, keeps staff away from the composite insulator, and thus improves their safety. The YOLOv5 module performs target detection on the infrared image and determines the target region containing the composite insulator, realizing coarse positioning of the composite insulator target region; the segmentation module segments the composite insulator region within the target region, realizing accurate segmentation of the composite insulator region in the infrared image and improving both segmentation accuracy and temperature detection accuracy. The mapping module establishes the pixel distribution histogram of the composite insulator region, realizing visualization of the composite insulator temperature and allowing staff to check the temperature value in real time. The detection result output module judges whether the temperature value corresponding to the pixel distribution histogram of the composite insulator region exceeds a temperature threshold; if yes, it outputs a detection result indicating a defect; if not, it returns to the operation of collecting infrared images in real time. This realizes an alarm to staff when the composite insulator temperature is too high, improving the safety and stability of power grid operation.
In the embodiment of the invention, the deep learning-based composite insulator defect detection system adopts a Coarse-to-Fine strategy. First, the YOLOv5 module locates the target region of the composite insulator in the infrared image; the composite insulator region is then segmented and extracted, with the segmentation module performing fine segmentation to precisely locate the contour of the composite insulator. A mapping between pixel distribution and temperature is established from the segmentation result, so that the temperature information within the insulator contour can be read and analyzed, and the information is finally integrated to realize automatic detection of the composite insulator. The defect temperature value is acquired by importing the infrared image into the software supplied with the infrared camera. The defect temperature detection process is end-to-end: inputting the infrared image yields the defect temperature grade of the defective composite insulator without any manual intervention, which greatly improves the efficiency and the level of intelligence of temperature detection. The system achieves more accurate target localization and segmentation in infrared-image detection of composite insulators and can effectively improve detection efficiency.
Referring to fig. 3, the YOLOv5 module 2 specifically includes: a feature extraction unit 21, a feature fusion unit 22, and an output unit 23.
The feature extraction unit 21 is connected with the image acquisition module 1, and the feature extraction unit 21 is used for carrying out feature recognition on the infrared image to obtain depth semantic features.
In one example, the feature extraction unit 21 is configured to perform feature recognition on the infrared image to obtain depth semantic features. The depth semantic features include at least edge features, texture features, contour features, color features, and the like.
The feature fusion unit 22 is connected with the feature extraction unit 21, and the feature fusion unit 22 is used for fusing the depth semantic features to obtain multi-scale features.
In one example, the feature fusion unit 22 can specifically be a PANet path aggregation network, which is a bi-directional pyramid structure. The PANet path aggregation network fuses the depth semantic features to obtain multi-scale features.
The output unit 23 is connected to the feature fusion unit 22, and the output unit 23 is configured to perform regression prediction according to the multi-scale features to obtain a target region in the infrared image.
In one example, the output unit 23 is configured to perform regression prediction according to the multi-scale features output by the PANet path aggregation network, so as to obtain the prediction result, i.e. the target region in the infrared image.
The feature extraction unit 21 specifically includes a number a of convolution operation layers, a being greater than or equal to 2. Those skilled in the art can flexibly choose the value of a, such as 2, 3 or 4, which is not described further herein. Any one convolution operation layer specifically comprises: a first convolution layer, a second convolution layer, and an ECA attention mechanism layer.
The first convolution layer is to:
performing Depthwise convolution operation on the input data to obtain a first semantic feature; any one convolution kernel in the first convolution layer only carries out convolution operation on one input channel in all input channels; the convolution kernel size is k×k×1, where k represents the size of the convolution kernel and 1 represents performing a convolution operation for only one of all input channels; the number of output channels of the first convolution layer is the same as the number of input channels.
The second convolution layer is connected to the first convolution layer, the second convolution layer being configured to:
performing Pointwise convolution operation on the input first semantic features to obtain second semantic features; any one convolution kernel in the second convolution layer carries out convolution operation on all input channels of the input first semantic features; the convolution kernel size is 1×1×n, where n represents the number of input channels.
Referring to fig. 4, in an embodiment of the present invention, the feature extraction unit 21 may be specifically a depth separable convolution (Depthwise Separable Convolution). The depth separable convolution divides the complete convolution operation into two parts, namely a first convolution layer (Depthwise convolution) and a second convolution layer (Pointwise convolution), so that the parameter quantity of the feature extraction unit 21 can be reduced and the reasoning speed of the feature extraction unit 21 can be improved under the condition that the same feature extraction capacity is obtained.
Still referring to fig. 4, the feature extraction unit 21 (depthwise separable convolution) forms a complete convolution operation by concatenating the first convolution layer (Depthwise convolution) and the second convolution layer (Pointwise convolution); compared with an ordinary convolution, the parameter quantity is reduced by one third.
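A minimal PyTorch sketch of this two-stage structure (the channel counts and kernel size below are illustrative choices, not values taken from the patent) shows how the depthwise and pointwise layers compose and how the parameter count compares with an ordinary convolution:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise (k x k, one kernel per input channel) followed by pointwise (1 x 1)."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # groups=in_ch: each k x k x 1 kernel convolves a single input channel,
        # so the number of output channels equals the number of input channels
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2,
                                   groups=in_ch, bias=False)
        # 1 x 1 x n kernels convolve across all n input channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

def n_params(m):
    return sum(p.numel() for p in m.parameters())

dsc = DepthwiseSeparableConv(64, 128, k=3)
std = nn.Conv2d(64, 128, 3, padding=1, bias=False)
x = torch.randn(1, 64, 32, 32)
assert dsc(x).shape == std(x).shape == (1, 128, 32, 32)
print(n_params(dsc), n_params(std))  # parameter counts for this configuration
```

For these example sizes the separable version has 64·9 + 64·128 = 8768 parameters versus 64·128·9 = 73728 for the ordinary 3×3 convolution, while producing an output of the same shape.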
The ECA attention mechanism layer is connected with the second convolution layer, and the ECA attention mechanism layer is used for:
and carrying out global average pooling on the second semantic features to obtain the second semantic features after global average pooling.
Carrying out convolution operation on the second semantic features subjected to global average pooling, and simultaneously carrying out weight calculation on the second semantic features subjected to global average pooling by adopting an activation function to obtain attention weights; the convolution kernel size of the ECA attention mechanism layer is k; the attention weight is w, and the attention weights correspond to the second semantic features one by one.
And multiplying the second semantic features by the attention weights by corresponding elements to obtain depth semantic features.
Referring to fig. 5, GAP is global average pooling, the convolution kernel size k is 5, and Sigmoid is the activation function. In deep learning, introducing the ECA attention mechanism helps the feature extraction unit 21 simulate the selective attention of human vision, allowing it to focus on the key information in the second semantic features. The ECA attention mechanism assigns each second semantic feature a corresponding attention weight through weight calculation; with training of the feature extraction unit 21, the channels carrying the most relevant second semantic features are assigned the largest weights, improving detection precision, while channels carrying irrelevant information are given low weights, thereby retaining important information and suppressing irrelevant information.
In one example, the ECA attention mechanism (Efficient Channel Attention, ECA) improves on the SE attention mechanism: it efficiently implements local cross-channel information interaction using a 1-dimensional convolution, extracting the dependencies among input channels.
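The ECA layer described above can be sketched in PyTorch as follows (a minimal version; the kernel size k = 5 matches fig. 5, everything else is illustrative):

```python
import torch
import torch.nn as nn

class ECALayer(nn.Module):
    """Efficient Channel Attention: GAP -> 1-D conv across channels -> sigmoid weights."""
    def __init__(self, k=5):
        super().__init__()
        # A single 1-D convolution realizes local cross-channel interaction
        # without the dimensionality reduction used by the SE mechanism
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                         # x: (B, C, H, W) second semantic features
        y = x.mean(dim=(2, 3))                    # global average pooling -> (B, C)
        y = self.conv(y.unsqueeze(1))             # treat the C channels as a 1-D sequence
        w = self.sigmoid(y).squeeze(1)            # one attention weight per channel
        return x * w.unsqueeze(-1).unsqueeze(-1)  # element-wise channel reweighting
```

The returned tensor is the input reweighted channel by channel, matching the "multiply the second semantic features by the attention weights element-wise" step above.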
In the feature extraction unit 21 of the YOLOv5 module 2, the ordinary convolution is replaced by the depthwise separable convolution to reduce the parameter quantity of the feature extraction unit 21 and improve the reasoning speed, and the ECA attention mechanism is introduced to enhance the relevance of the second semantic feature channels extracted by the intermediate layers, providing more discriminative features for the feature extraction unit 21.
The dividing module 3 specifically includes: a rough segmentation unit and a fine segmentation unit.
The rough segmentation unit is connected with the output unit 23 and is used for performing rough segmentation on the target area to obtain a rough composite insulator region. Please refer to fig. 10 for the rough composite insulator region.
The fine segmentation unit is connected with the rough segmentation unit and is used for performing fine segmentation on the rough composite insulator region to obtain the precise composite insulator region. For the precise composite insulator region, please refer to fig. 11.
In one example, the dividing module 3 may specifically be a DeepLabv3+ segmentation network. The composite insulator region located and cut out by the DeepLabv3+ segmentation network alone shows severely connected (adhered) regions at the thread-like shed edges, so fine segmentation cannot be achieved, which in turn affects the temperature detection judgment. To improve the low precision of edge segmentation prediction, a fine segmentation unit is adopted. The DeepLabv3+ segmentation network is mainly divided into an Encoder-Decoder structure, which corresponds to the rough segmentation unit and the fine segmentation unit respectively.
The rough segmentation unit specifically comprises: the third convolution layer, the fourth convolution layer and the decoder.
The third convolution layer is connected to the output unit 23, and the third convolution layer is configured to perform convolution operation on the input target area to obtain a third semantic feature.
The fourth convolution layer is connected with the third convolution layer and is used for carrying out convolution operation on the input third semantic features to obtain fusion semantic features.
In one example, the rough segmentation unit (the Encoder part) is mainly composed of the third convolution layer (a DCNN backbone) and the fourth convolution layer (an Atrous Spatial Pyramid Pooling module, ASPP). The third convolution layer can take either of two network structures: a ResNet variant whose layer4 is changed to hole (atrous) convolution, or a modified Xception. To further lighten the third convolution layer, a lightweight backbone network based on a channel attention mechanism (ECA-MobileNetV3) is used to replace the original ResNet feature extraction backbone; experiments show that this greatly improves the reasoning speed of the DeepLabv3+ segmentation network with almost no loss of accuracy.
50% of the third semantic features output by the third convolution layer are fed into the fourth convolution layer, and the other 50% are fed into the decoder.
The features input into the decoder are likewise divided into two parts: 50% are the fused semantic features coming out of the fourth convolution layer, and the other 50% are the low-level features of the third convolution layer (the third semantic features).
The decoder is respectively connected with the feature extraction unit and the fourth convolution layer, and is used for decoding the depth semantic features and the fusion semantic features to obtain a rough composite insulator region.
In one example, the upsampling portion in the DeepLabv3+ segmentation network is replaced with a 3×3 convolution; the number of channels of the final layer equals the number of categories, and this output is taken as the rough segmentation result, i.e., the rough composite insulator region.
The fourth convolution layer specifically includes: N hole convolution blocks and a global average pooling block.
The N hole convolution blocks are used to obtain N features; N is 4 or more; the expansion rates of the hole convolution blocks differ from one another.
The global average pooling block is used to obtain global average pooling features.
And the N features are fused with the global average pooling features to obtain fused semantic features.
In one example, those skilled in the art can flexibly design the value of N, for example 4, 5, or 6, which will not be described herein. The fourth convolution layer receives the 50% of third semantic features output by the third convolution layer and applies 4 hole convolution blocks with different expansion rates (each comprising convolution, BN, and activation layers) together with a global average pooling block (comprising pooling, convolution, BN, and activation layers), obtaining five features in total. The five features are concatenated and then fused through one 1×1 convolution block (comprising convolution, BN, activation, and dropout layers), and the result is finally sent to the fine segmentation unit.
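A PyTorch sketch of such an ASPP block follows (the dilation rates, channel counts, and dropout probability are illustrative assumptions; the patent specifies only the block composition):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Parallel hole (atrous) convolution blocks at several dilation rates plus a
    global-average-pooling branch, fused by a 1x1 convolution block."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(                          # conv + BN + activation per block
                nn.Conv2d(in_ch, out_ch, 3 if r > 1 else 1,
                          padding=r if r > 1 else 0, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            for r in rates])
        self.gap = nn.Sequential(                   # global average pooling block
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.project = nn.Sequential(               # 1x1 fusion block with dropout
            nn.Conv2d(out_ch * (len(rates) + 1), out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True), nn.Dropout(0.5))

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        g = F.interpolate(self.gap(x), size=x.shape[-2:],
                          mode='bilinear', align_corners=False)
        return self.project(torch.cat(feats + [g], dim=1))
```

The four dilated branches plus the pooling branch give the "five features in total" described above, concatenated and fused by the final 1×1 convolution block.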
The fine segmentation unit specifically includes: a point selection layer, a feature merging layer, and an MLP model.
The point selection layer is connected with the decoder and is used for selecting points in the rough composite insulator region to obtain sampling points.
The feature merging layer is connected with the point selection layer and is used for merging the depth semantic features and the fusion semantic features of the sampling points to obtain merged features.
The MLP model is connected with the feature merging layer and is used for predicting on the merged features to obtain the precise composite insulator region.
Referring to fig. 6, the point selection rule of the point selection layer during model training is: K×N pixel points are randomly generated in the rough composite insulator region; the point selection layer scores the pixel values of these points; the low-scoring points are marked as uncertain points and sorted by score; the β×N (β ∈ [0, K]) most uncertain pixel points are selected, and the remaining (K−β)×N pixel points are selected in a uniformly distributed manner. K, N, and β are positive integers.
The point selection rule of the point selection layer during segmentation inference is: the point selection layer upsamples the composite insulator region by a factor of 2 each time, i.e., two composite insulator regions of the same size are spliced into one image, and then the N most uncertain pixel points are directly selected.
The third semantic features and the fusion semantic features at the positions of the N pixel points extracted from the output rough composite insulator region are combined to obtain the merged features.
The merged features are input as samples into the MLP model (a 1×1 convolution) for inference prediction, yielding re-prediction results for the N pixel points, each result indicating whether the pixel point belongs to the composite insulator. During inference prediction, low-precision points are gradually replaced by iterative upsampling, i.e., the originally predicted rough composite insulator region is refined step by step until the predicted picture size is consistent with the original target region size, giving the precise composite insulator region.
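The training-time sampling rule can be sketched as below. This follows the standard PointRend-style strategy (oversample K·N random points, keep the β·N most uncertain, fill the rest uniformly); the exact bookkeeping of the remainder term is an assumption, and `F.grid_sample` stands in for whatever bilinear point sampler the implementation uses:

```python
import torch
import torch.nn.functional as F

def sample_points(coarse_probs, N, K=3, beta=0.75):
    """Select N sampling points from coarse foreground probabilities (B, 1, H, W)."""
    B = coarse_probs.shape[0]
    coords = torch.rand(B, K * N, 2)                     # K*N random points in [0, 1]^2
    grid = coords.unsqueeze(2) * 2 - 1                   # grid_sample expects [-1, 1]
    probs = F.grid_sample(coarse_probs, grid,
                          align_corners=False)           # (B, 1, K*N, 1)
    probs = probs.squeeze(1).squeeze(-1)                 # (B, K*N)
    uncertainty = -(probs - 0.5).abs()                   # highest near the 0.5 boundary
    n_hard = int(beta * N)
    idx = uncertainty.topk(n_hard, dim=1).indices        # most uncertain points first
    hard = torch.gather(coords, 1, idx.unsqueeze(-1).expand(-1, -1, 2))
    easy = torch.rand(B, N - n_hard, 2)                  # remainder sampled uniformly
    return torch.cat([hard, easy], dim=1)                # (B, N, 2) point coordinates
```

The selected coordinates are then used to gather the per-point third semantic features and fusion semantic features for the MLP head.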
The mapping module specifically comprises: and a mask unit.
The mask unit is connected with the MLP model and is used for:
and carrying out mask treatment on the accurate composite insulator region to obtain a mask composite insulator region.
And mapping the mask composite insulator region back to the infrared image to obtain the infrared image composite insulator region with real pixel distribution.
And mapping the infrared image composite insulator region with real pixel distribution with the temperature value to obtain a pixel distribution histogram of the composite insulator region.
In one example, pixel intervals corresponding to the infrared image and the precise composite insulator region are divided, and the precise composite insulator region is mapped back to the real pixel distribution of the original infrared image to obtain the pixel count of each pixel interval. Those skilled in the art can flexibly design the pixel intervals of the composite insulator, for example dividing the precise composite insulator region equally into 10, 11, or 15 intervals, which will not be described herein. The number of pixels in each pixel interval of the precise composite insulator region is then encoded into a one-dimensional vector as the input of the mask unit, and the temperature value is taken as the label, to obtain the pixel distribution histogram of the precise composite insulator region.
Referring to fig. 7, the abscissa indicates the pixel value and the ordinate indicates the number of pixels. A pixel distribution histogram is established using the precise composite insulator region and the pixel distribution-temperature mapping diagram. Composite insulators with different heating degrees have different pixel distribution histograms; the segmentation result is mapped back to the real pixel distribution of the original infrared image for histogram statistics.
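A minimal NumPy sketch of building the region pixel histogram (the 10-interval split is one of the example choices above; array and function names are illustrative):

```python
import numpy as np

def region_pixel_histogram(ir_image, mask, n_bins=10):
    """Count pixels of the masked insulator region in n_bins equal pixel-value intervals.

    ir_image: 2-D array of raw infrared pixel values
    mask:     boolean array of the same shape, True inside the insulator region
    """
    region = ir_image[mask]                               # keep only insulator pixels
    edges = np.linspace(region.min(), region.max(), n_bins + 1)
    counts, _ = np.histogram(region, bins=edges)
    return counts                                         # 1-D vector, one entry per interval
```

The returned count vector is exactly the one-dimensional encoding that is fed to the mask unit with the temperature value as label.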
In other embodiments of the present invention, the mask unit builds a 5-layer shallow network to perform regression learning on the pixel distribution-temperature map. The network structure is shown in fig. 8: it consists of an input layer, hidden layers formed by fully connected layers, and an output layer; the input is the one-dimensional vector encoding of the number of pixels in the different pixel intervals, and the output is the temperature value of the composite insulator.
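A sketch of such a shallow regression network (the hidden-layer widths are illustrative assumptions; the patent only states an input/hidden/output structure built from fully connected layers):

```python
import torch
import torch.nn as nn

class TemperatureRegressor(nn.Module):
    """Map the one-dimensional pixel-count vector to a scalar temperature value."""
    def __init__(self, n_bins=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins, 64), nn.ReLU(),   # hidden fully connected layers
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1))                   # output: estimated temperature

    def forward(self, hist_vec):                # hist_vec: (B, n_bins)
        return self.net(hist_vec)
```

Training would regress this output against the temperature labels mentioned above, e.g. with an MSE loss.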
In order to achieve the above purpose, the embodiment of the present invention further provides the following solutions:
referring to fig. 12, a method for detecting defects of a composite insulator based on deep learning includes:
step 1: and acquiring infrared images in real time.
Step 1 may be specifically performed by the image acquisition module 1. Please refer to the above for the specific description of the image acquisition module 1, and the detailed description is omitted herein.
Step 2: and carrying out target detection on the infrared image to determine a target area containing the composite insulator.
Step 2 may be performed by YOLOv5 module 2 in particular. Please refer to the above for the detailed description of the YOLOv5 module 2, and the detailed description is omitted here.
Step 3: the composite insulator region in the target region is segmented.
Step 3 may be specifically performed by the segmentation module 3. For a specific description of the segmentation module 3, please refer to the above, and a detailed description is omitted herein.
Step 4: and establishing a pixel distribution histogram of the composite insulator region.
Step 4 may be specifically performed by the mapping module 4. For a specific description of the mapping module 4, please refer to the above, and a detailed description is omitted herein.
Step 5: judging whether the temperature value corresponding to the pixel distribution histogram of the composite insulator region exceeds a temperature threshold value or not;
if yes, outputting a detection result with defects;
if not, returning to execute the step of collecting the infrared image in real time.
Step 5 may be specifically performed by the detection result output module 5. For a specific description of the detection result output module 5, please refer to the above, and a detailed description is omitted herein.
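The step-5 decision reduces to a simple threshold test; the function below is a sketch, and the 75 °C threshold is a placeholder since the patent does not fix a value:

```python
def defect_decision(predicted_temp_c, threshold_c=75.0):
    """Flag a defect when the regressed region temperature exceeds the threshold;
    otherwise the pipeline returns to real-time image acquisition (step 1)."""
    if predicted_temp_c > threshold_c:
        return "defect detected"
    return "no defect: continue acquisition"
```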
In other embodiments of the present invention, locating and extracting a target area in an infrared image specifically includes:
step 21: and carrying out feature recognition on the infrared image to obtain depth semantic features.
Step 21 may be specifically performed by the feature extraction unit 21. For a specific description of the feature extraction unit 21, please refer to the above, and a detailed description is omitted here.
Step 22: and fusing the depth semantic features to obtain multi-scale features.
Step 22 may be specifically performed by the feature fusion unit 22. For a specific description of the feature fusion unit 22, please refer to the above, and a detailed description is omitted herein.
Step 23: and carrying out regression prediction according to the multi-scale characteristics to obtain a target area in the infrared image.
Step 23 may be specifically performed by the output unit 23. For a specific description of the output unit 23, please refer to the above, and a detailed description is omitted herein.
In other embodiments of the present invention, performing feature recognition on the infrared image to obtain the depth semantic features includes a convolution operations, a being greater than or equal to 2; any one convolution operation specifically includes:
step 211: performing Depthwise convolution operation on the input data to obtain a first semantic feature; any one convolution kernel only carries out convolution operation on one input channel in all input channels; the convolution kernel size is kxkx1, wherein k represents the size of the convolution kernel, and 1 represents that convolution operation is performed on only one input channel in all input channels; the number of output channels is the same as the number of input channels.
Step 211 may be performed by the first convolution layer. For a specific description of the first convolution layer, please refer to the above, and a detailed description is omitted herein.
Step 212: performing Pointwise convolution operation on the input first semantic features to obtain second semantic features; any one convolution kernel carries out convolution operation on all input channels of the input first semantic features; the convolution kernel size is 1×1×n, where n represents the number of input channels.
Step 212 may be performed by the second convolution layer. For a specific description of the second convolution layer, please refer to the above, and a detailed description is omitted herein.
Step 213: carrying out global average pooling on the second semantic features to obtain second semantic features after global average pooling;
carrying out convolution operation on the second semantic features subjected to global average pooling, and simultaneously carrying out weight calculation on the second semantic features subjected to global average pooling by adopting an activation function to obtain attention weights; the convolution kernel size of the ECA attention mechanism layer is k; the attention weight is w, and the attention weights correspond to the second semantic features one by one;
and multiplying the second semantic features by the attention weights by corresponding elements to obtain depth semantic features.
Step 213 may be performed by the ECA attention mechanism layer, in particular. For a detailed description of the ECA attention mechanism layer, refer to the above, and the detailed description is omitted here.
In other embodiments of the present invention, the method for dividing the composite insulator region in the target region specifically includes:
step 31: and (3) performing rough segmentation on the target area to obtain a rough composite insulator area.
Step 31 may be performed by a coarse segmentation unit in particular. For a specific description of the rough segmentation unit, please refer to the above, and a detailed description is omitted herein.
Step 32: and (3) finely dividing the rough composite insulator region to obtain the precise composite insulator region.
Step 32 may be performed in particular by a sub-dividing unit. For a detailed description of the sub-dividing unit, refer to the above, and are not repeated here.
In other embodiments of the present invention, rough segmentation is performed on the target area to obtain a rough composite insulator area, which specifically includes:
step 311: and carrying out convolution operation on the input target region to obtain a third semantic feature.
Step 311 may be performed by the third convolution layer. For a specific description of the third convolution layer, please refer to the above, and a detailed description is omitted herein.
Step 312: and carrying out convolution operation on the input third semantic features to obtain fusion semantic features.
Step 312 may be performed by the fourth convolution layer. For a detailed description of the fourth convolution layer, please refer to the above, and a detailed description is omitted herein.
Step 313: and decoding the depth semantic features and the fusion semantic features to obtain a rough composite insulator region.
Step 313 may be performed by a decoder in particular. For a specific description of the decoder, please refer to the above, and a detailed description is omitted herein.
In other embodiments of the present invention, convolution operation is performed on the third semantic feature to obtain a fused semantic feature, which specifically includes:
step 3121: n features are obtained. N is 4 or more.
Step 3121 may be specifically performed by N hole convolution blocks. For a specific description of the N hole convolution blocks, please refer to the above, and a detailed description is omitted herein.
Step 3122: and obtaining global average pooling characteristics. And fusing the N features with the global average pooling feature to obtain a fused semantic feature.
Step 3122 may be specifically performed by a global average pooling block. For a specific description of the global average pooling block, please refer to the above, and a detailed description is omitted herein.
In other embodiments of the present invention, fine division is performed on the rough composite insulator region to obtain a precise composite insulator region, which specifically includes:
step 321: and selecting points of the rough composite insulator region to obtain sampling points.
Step 321 may be performed by the setpoint layer. For a specific description of the selection layer, please refer to the above, and a detailed description is omitted herein.
Step 322: and combining the depth semantic features with the fusion semantic features of the sampling points to obtain combined features.
Step 322 may be performed by the feature merge layer, in particular. For a specific description of the feature combining layer, please refer to the above, and a detailed description is omitted herein.
Step 323: and predicting the merging characteristics to obtain the accurate composite insulator region.
Step 323 may be performed by the MLP model in particular. For a specific description of the MLP model, please refer to the above, and a detailed description is omitted here.
In other embodiments of the present invention, establishing a pixel distribution histogram of the composite insulator region specifically includes:
step 41: and carrying out mask treatment on the accurate composite insulator region to obtain a mask composite insulator region.
And mapping the mask composite insulator region back to the infrared image to obtain the infrared image composite insulator region with real pixel distribution.
And mapping the infrared image composite insulator region with real pixel distribution with the temperature value to obtain a pixel distribution histogram of the composite insulator region.
Step 41 may be specifically performed by a mask unit. For the specific description of the mask unit, please refer to the above, and the detailed description is omitted herein.
In the present specification, the embodiments are described in a progressive manner, each embodiment focusing on its differences from the others; identical and similar parts among the embodiments may refer to one another. Since the system disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief, and the relevant points can be found in the description of the method section.
The principles and implementations of the embodiments of the present invention have been described herein with reference to specific examples; the above description is only intended to help understand the methods and core ideas of the embodiments. Meanwhile, those skilled in the art may make changes to the specific implementations and application scope in light of the ideas of the embodiments. In view of the foregoing, this description should not be construed as limiting the embodiments of the invention.

Claims (9)

1. A deep learning-based composite insulator defect detection system, comprising:
the image acquisition module is used for acquiring infrared images in real time;
the YOLOv5 module is connected with the image acquisition module and is used for carrying out target detection on the infrared image and determining a target area containing the composite insulator;
the segmentation module is connected with the YOLOv5 module and is used for segmenting the composite insulator region in the target region;
the mapping module is connected with the segmentation module and is used for establishing a pixel distribution histogram of the composite insulator region;
the detection result output module is connected with the mapping module and is used for:
judging whether a temperature value corresponding to a pixel distribution histogram of the composite insulator region exceeds a temperature threshold value or not;
if yes, outputting a detection result with defects;
if not, returning to execute the operation of collecting the infrared image in real time.
2. The deep learning based composite insulator defect detection system of claim 1, wherein the YOLOv5 module specifically comprises:
the feature extraction unit is connected with the image acquisition module and is used for carrying out feature recognition on the infrared image to obtain depth semantic features;
the feature fusion unit is connected with the feature extraction unit and used for fusing the depth semantic features to obtain multi-scale features;
and the output unit is connected with the characteristic fusion unit and is used for carrying out regression prediction according to the multi-scale characteristics to obtain a target area in the infrared image.
3. The deep learning-based composite insulator defect detection system of claim 2, wherein the feature extraction unit specifically comprises a convolution operation layers, a being equal to or greater than 2; any one of the convolution operation layers specifically comprises:
a first convolution layer for:
performing Depthwise convolution operation on the input data to obtain a first semantic feature; any one convolution kernel in the first convolution layer only carries out convolution operation on one input channel in all input channels; the convolution kernel size is k×k×1, wherein k represents the size of the convolution kernel, and 1 represents that convolution operation is performed on only one input channel in all input channels; the number of output channels of the first convolution layer is the same as the number of input channels;
a second convolution layer, coupled to said first convolution layer, for:
performing Pointwise convolution operation on the input first semantic features to obtain second semantic features; any one convolution kernel in the second convolution layer carries out convolution operation on all input channels of the input first semantic features; the convolution kernel size is 1×1×n, where n represents the number of input channels;
an ECA attention mechanism layer, coupled to the second convolution layer, for:
carrying out global average pooling on the second semantic features to obtain second semantic features after global average pooling;
carrying out convolution operation on the second semantic features subjected to global average pooling, and simultaneously carrying out weight calculation on the second semantic features subjected to global average pooling by adopting an activation function to obtain attention weights; the convolution kernel size of the ECA attention mechanism layer is k; the attention weight is w, and the attention weight corresponds to the second semantic features one by one;
and multiplying the second semantic features by the attention weights by corresponding elements to obtain the depth semantic features.
4. The deep learning based composite insulator defect detection system of claim 2, wherein the segmentation module specifically comprises:
the rough segmentation unit is connected with the output unit and is used for carrying out rough segmentation on the target area to obtain a rough composite insulator area;
and the fine segmentation unit is connected with the coarse segmentation unit and is used for carrying out fine segmentation on the coarse composite insulator region to obtain the precise composite insulator region.
5. The deep learning based composite insulator defect detection system of claim 4, wherein the rough segmentation unit specifically comprises:
the third convolution layer is connected with the output unit and is used for carrying out convolution operation on the input target area to obtain third semantic features;
the fourth convolution layer is connected with the third convolution layer and is used for carrying out convolution operation on the input third semantic features to obtain fusion semantic features;
and the decoder is respectively connected with the feature extraction unit and the fourth convolution layer and is used for decoding the depth semantic features and the fusion semantic features to obtain the rough composite insulator region.
6. The deep learning based composite insulator defect detection system of claim 5, wherein the fourth convolution layer specifically comprises:
n cavity convolution blocks for obtaining N features; n is more than or equal to 4; the expansion rates of the cavity convolution blocks are different;
a global average pooling block for obtaining global average pooling characteristics;
and the N features are fused with the global average pooling feature to obtain the fused semantic feature.
7. The deep learning based composite insulator defect detection system of claim 5, wherein the sub-segmentation unit specifically comprises:
the point selecting layer is connected with the decoder and is used for selecting points of the rough composite insulator region to obtain sampling points;
the feature merging layer is connected with the point selection layer and is used for carrying out feature merging on the depth semantic features and the fusion semantic features of the sampling points to obtain merged features;
and the MLP model is connected with the feature merging layer and used for predicting the merging features to obtain the accurate composite insulator region.
8. The deep learning based composite insulator defect detection system of claim 7, wherein the mapping module specifically comprises:
and the mask unit is connected with the MLP model and is used for:
performing mask treatment on the accurate composite insulator region to obtain a mask composite insulator region;
mapping the mask composite insulator region back to the infrared image to obtain an infrared image composite insulator region with real pixel distribution;
and mapping the infrared image composite insulator region with real pixel distribution with the temperature value to obtain a pixel distribution histogram of the composite insulator region.
9. The composite insulator defect detection method based on deep learning is characterized by comprising the following steps of:
acquiring an infrared image in real time;
performing target detection on the infrared image to determine a target area containing the composite insulator;
dividing a composite insulator region in the target region;
establishing a pixel distribution histogram of the composite insulator region;
judging whether a temperature value corresponding to a pixel distribution histogram of the composite insulator region exceeds a temperature threshold value or not;
if yes, outputting a detection result with defects;
if not, returning to execute the step of collecting the infrared image in real time.
CN202310443642.8A 2023-04-24 2023-04-24 Composite insulator defect detection system and method based on deep learning Pending CN116664487A (en)

Publications (1)

Publication Number Publication Date
CN116664487A true CN116664487A (en) 2023-08-29


Similar Documents

Publication Publication Date Title
CN112233092A (en) Deep learning method for intelligent defect detection of unmanned aerial vehicle power inspection
CN109977921B (en) Method for detecting hidden danger of power transmission line
CN111832398B (en) Unmanned aerial vehicle image distribution line pole tower ground wire broken strand image detection method
CN111695493B (en) Method and system for detecting hidden danger of power transmission line
CN112541389A (en) Power transmission line fault detection method based on EfficientDet network
CN112819784A (en) Method and system for detecting broken strands and scattered strands of wires of distribution line
CN110619623A (en) Automatic identification method for heating of joint of power transformation equipment
CN111199213A (en) Equipment defect detection method and device for transformer substation
Wang et al. Railway insulator detection based on adaptive cascaded convolutional neural network
CN115546664A (en) Cascaded network-based insulator self-explosion detection method and system
CN112785610B (en) Lane line semantic segmentation method integrating low-level features
CN114119528A (en) Defect detection method and device for distribution line insulator
CN113536944A (en) Distribution line inspection data identification and analysis method based on image identification
CN110555460B (en) Image slice-based bird detection method for power transmission line at mobile terminal
CN116664487A (en) Composite insulator defect detection system and method based on deep learning
CN116580232A (en) Automatic image labeling method and system and electronic equipment
CN116452848A (en) Hardware classification detection method based on improved attention mechanism
CN115100546A (en) Mobile-based small target defect identification method and system for power equipment
CN113033489A (en) Power transmission line insulator identification and positioning method based on lightweight deep learning algorithm
Zhou A lightweight improvement of YOLOv5 for insulator fault detection
Liu et al. Defect Insulator Detection Method Based on Deep Learning
CN113947567B (en) Defect detection method based on multitask learning
CN113205487B (en) Cable state detection method based on residual error network fusion heterogeneous data
CN117409237A (en) Method and system for identifying damage of conducting wire insulation sleeve of power distribution network
CN114663724A (en) Intelligent identification method and system for kite string image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination