CN117392565A - Automatic identification method for unmanned aerial vehicle power inspection defects - Google Patents


Info

Publication number
CN117392565A
CN117392565A (application number CN202311355665.XA)
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
image
inspection
defects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311355665.XA
Other languages
Chinese (zh)
Inventor
靳力
周学华
徐志宗
陈军旗
赵喜宾
孙宇光
王跃
吴瑛杰
秦金伟
姚琛彪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebi Dianxing Electric Co ltd
Original Assignee
Hebi Dianxing Electric Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebi Dianxing Electric Co ltd filed Critical Hebi Dianxing Electric Co ltd
Priority claimed from CN202311355665.XA
Publication of CN117392565A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/17Terrestrial scenes taken from planes or by drones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/422Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation for representing the structure of the pattern or shape of an object therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/54Extraction of image or video features relating to texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Remote Sensing (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of unmanned aerial vehicles and discloses an automatic identification method for unmanned aerial vehicle power inspection defects. Image data are collected during power equipment inspection by an unmanned aerial vehicle carrying camera equipment. The collected image data are then preprocessed: an image enhancement algorithm is applied to improve the contrast and definition of the images, a noise removal operation eliminates interference noise, and edge detection extracts the edge information in the images. The preprocessed image data are then trained and classified using machine learning to realize automatic identification of power equipment defects. By marking and alarming the identified defects, the method saves time and labour and improves the efficiency of defect identification; the alarm function also allows defects to be discovered in time, so that small problems can be eliminated before they cause greater harm.

Description

Automatic identification method for unmanned aerial vehicle power inspection defects
Technical Field
The invention relates to the technical field of unmanned aerial vehicles, in particular to an automatic identification method for unmanned aerial vehicle power inspection defects.
Background
The automatic identification method of the unmanned aerial vehicle power inspection defects needs to combine technologies such as image processing, machine learning, feature extraction, data fusion, real-time monitoring and the like, and realizes automatic identification and positioning of the power equipment defects by analyzing and judging inspection images. The method can improve inspection efficiency, reduce labor investment and timely discover and solve potential power equipment problems.
The automatic identification method for unmanned aerial vehicle power inspection defects in the prior art cannot mark and alarm the identified defects, so that after a defect is found its position must still be analysed before it is known, which wastes time and labour and reduces the efficiency of defect identification; moreover, because no alarm can be given, defects cannot be discovered in time, and a small problem may grow into a big one.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an automatic identification method for unmanned aerial vehicle power inspection defects. It solves the problems that the prior-art methods cannot mark and alarm the identified defects, so that locating a found defect wastes time and labour and reduces the efficiency of defect identification, and that, because no timely alarm can be given, defects are not discovered in time and small problems grow into big ones.
In order to achieve the above purpose, the invention is realized by the following technical scheme: an automatic identification method for unmanned aerial vehicle power inspection defects comprises the following steps:
step one: acquiring image data in the power equipment inspection process through an unmanned aerial vehicle carrying the camera equipment;
step two: preprocessing the image data acquired in the first step, wherein the preprocessing comprises applying an image enhancement algorithm to enhance the contrast and definition of the image, performing a noise removal operation to eliminate interference noise in the image, and performing edge detection to extract the edge information in the image;
step three: training and classifying the image data preprocessed in the second step by using a machine learning technology to realize automatic identification of defects of the power equipment, training and constructing a model of the image data by using a convolutional neural network, classifying the image, and distinguishing and classifying the defects in the inspection image from normal parts;
step four: firstly, extracting the characteristics of defects in the inspection image in the third step, including color information, texture characteristics and shape characteristics, and selecting the characteristics with differentiation degree for describing and differentiating the defects;
step five: acquiring other sensor data of the power equipment, such as temperature, vibration and current, fusing the image data with the other sensor data, comprehensively analyzing the multi-source data, and judging the position, type and severity of the defects, so that the unmanned aerial vehicle power inspection system can be more intelligent and comprehensive, the capability of detecting and evaluating the defects of the power equipment is improved, and the reliability and maintenance efficiency of the power system are improved;
step six: by detecting through the computer vision technology, image data in the inspection process is monitored in real time, the identified defects are marked and alarmed, inspection results are counted and analyzed, an inspection report is generated, the inspection report comprises information of the position, the type and the severity of the defects, potential problems can be automatically identified in the unmanned aerial vehicle inspection process, efficiency is improved, and human errors are reduced.
Preferably, the camera device in the first step includes a visible light camera, an infrared camera, a thermal imaging camera, a multispectral camera and an ultra-high definition camera.
Preferably, the image enhancement algorithm in the second step is a technology for improving image quality and enhancing image details and contrast, and these algorithms can be applied to various image processing tasks, such as image enhancement, object detection and image recognition.
Preferably, the main principle of the machine learning technology is that a computer can automatically learn and improve according to input data so as to be capable of more accurately executing similar tasks in the future, and the machine can more accurately make predictions or decisions in the future when similar tasks are processed.
Preferably, the convolutional neural network is a deep learning model particularly suitable for processing data with a grid structure, such as images and videos; it can extract features from the original input and realize feature learning and pattern recognition of images through layer-by-layer stacked convolution and pooling operations.
Preferably, the computer vision technology in the sixth step is a technology field related to enabling a computer to understand and interpret visual information, and the objective of the computer vision technology is to enable a computer system to perceive, understand and process image and video data like a human being.
Preferably, the edge detection in the second step adopts a Canny edge detection algorithm, and the algorithm is widely used in the fields of computer vision and image processing to effectively detect the edge in the image and accurately locate and refine the edge.
Preferably, in the sixth step, real-time monitoring is realized through an unmanned aerial vehicle control system, and the identified defects are marked and alarmed in real time. The unmanned aerial vehicle control system is a software and hardware system for controlling and operating the unmanned aerial vehicle; it comprises components such as a flight control system, a communication system, a sensor system and a ground control station, and realizes functions such as flight, navigation, telemetry and task execution.
Preferably, the inspection report in the sixth step is generated by the unmanned aerial vehicle system and includes information such as the position, type and severity of the defects, and the unmanned aerial vehicle automatically executes the inspection task according to the specified inspection plan.
Preferably, the unmanned aerial vehicle transmits the image data and the analysis result in real time through a communication network in the inspection process, and the defect identification result is shared by the unmanned aerial vehicle system and other monitoring equipment.
The invention provides an automatic identification method for unmanned aerial vehicle power inspection defects. The beneficial effects are as follows:
1. By marking and alarming the identified defects, the invention makes it possible to know the position of a defect conveniently as soon as it is found, which saves time and labour and improves the efficiency of defect identification; the alarm function also allows defects to be discovered in time, so that small problems can be eliminated before they cause greater harm.
2. According to the invention, a convolutional neural network is used to train and model the image data. Because the convolutional neural network is a deep learning model capable of automatic learning, feature learning and pattern recognition of the images can be realized through layer-by-layer stacked convolution and pooling operations, and the same situation can be recognized rapidly the next time it is met, so the recognition efficiency of the automatic identification method for power inspection defects is improved.
Detailed Description
The technical solutions of the embodiments of the present invention will be clearly and completely described below in conjunction with the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Examples:
the embodiment of the invention provides an automatic identification method for unmanned aerial vehicle power inspection defects, which comprises the following steps:
step one: acquiring image data in the power equipment inspection process through an unmanned aerial vehicle carrying the camera equipment;
step two: preprocessing the image data acquired in the first step, wherein the preprocessing comprises applying an image enhancement algorithm to enhance the contrast and definition of the image, performing a noise removal operation to eliminate interference noise in the image, and performing edge detection to extract the edge information in the image;
step three: training and classifying the image data preprocessed in the second step by using a machine learning technology to realize automatic identification of defects of the power equipment, training and constructing a model of the image data by using a convolutional neural network, classifying the image, and distinguishing and classifying the defects in the inspection image from normal parts;
step four: firstly, extracting the characteristics of defects in the inspection image in the third step, including color information, texture characteristics and shape characteristics, and selecting the characteristics with differentiation degree for describing and differentiating the defects;
step five: acquiring other sensor data of the power equipment, such as temperature, vibration and current, fusing the image data with the other sensor data, comprehensively analyzing the multi-source data, and judging the position, type and severity of the defects, so that the unmanned aerial vehicle power inspection system can be more intelligent and comprehensive, the capability of detecting and evaluating the defects of the power equipment is improved, and the reliability and maintenance efficiency of the power system are improved;
step six: by detecting through the computer vision technology, image data in the inspection process is monitored in real time, the identified defects are marked and alarmed, inspection results are counted and analyzed, an inspection report is generated, the inspection report comprises information of the position, the type and the severity of the defects, potential problems can be automatically identified in the unmanned aerial vehicle inspection process, efficiency is improved, and human errors are reduced.
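The six steps above can be sketched as a minimal pipeline skeleton. Everything here is an illustrative assumption rather than the patent's implementation: the function names, the toy thresholds, and the representation of an image as a nested list of pixel values are invented for the sketch.

```python
def preprocess(image):
    """Step 2: enhancement, denoising and edge detection would go here."""
    return image  # stub: pass-through

def classify(image):
    """Step 3: a trained CNN would score the image; here a toy threshold."""
    return "defect" if max(p for row in image for p in row) > 128 else "normal"

def extract_features(image):
    """Step 4: colour, texture and shape features; here just a mean value."""
    flat = [p for row in image for p in row]
    return {"mean_intensity": sum(flat) / len(flat)}

def fuse(features, sensors):
    """Step 5: combine image features with temperature/vibration/current."""
    return "high" if sensors.get("temperature", 0) > 80 else "low"

def inspect(image, sensors):
    """Step 6: monitor, mark/alarm identified defects, build a report entry."""
    image = preprocess(image)
    label = classify(image)
    features = extract_features(image)
    return {"label": label,
            "features": features,
            "severity": fuse(features, sensors),
            "alarm": label == "defect"}
```

A call such as `inspect([[0, 200], [10, 20]], {"temperature": 90})` would flag a defect of high severity and raise the alarm flag; a real system would replace each stub with the techniques described in the following paragraphs.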
The camera equipment in the first step comprises a visible light camera, an infrared camera, a thermal imaging camera, a multispectral camera and an ultra-high definition camera;
specifically, the visible light camera is the most commonly used camera, can capture light rays in a visible light range, is similar to the observation of human eyes, is suitable for the inspection of most power equipment, and can provide high-resolution color images; the infrared camera can capture infrared radiation, namely heat energy emitted by an object, and can be used for detecting abnormal hot spots, faults or electrical problems because the power equipment often generates heat, and is very useful for finding problems such as abnormal heat, overheating or electricity leakage in the power equipment; the thermal imaging camera is a special infrared camera, can provide a higher-level thermal image to display the temperature distribution of an object, is very useful for detecting the temperature abnormality of the power equipment, and can help to find potential faults or overload conditions; the multispectral camera can capture the spectral information of a plurality of wave bands, including visible light and infrared wave bands, and can provide more image information for analyzing factors such as surface conditions, vegetation interference, pollution and the like of the power equipment; ultra-high definition cameras have higher resolution and image detail, can provide more accurate image data, can capture more detail, and may be more accurate and reliable in detecting electrical devices.
The image enhancement algorithm in the second step is a technology for improving the image quality and enhancing the image detail and contrast, and can be applied to various image processing tasks, such as image enhancement, target detection and image recognition;
specifically, the following are several common image enhancement algorithms and their related formulas:
linear stretching:
the formula: g(x, y) = (f(x, y) − min) × (L − 1) / (max − min)
Where g (x, y) represents the enhanced image pixel value, f (x, y) represents the original image pixel value, min and max represent the minimum pixel value and the maximum pixel value of the original image, respectively, and L represents the pixel value range;
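The linear-stretching formula can be applied directly, here to a grayscale image stored as a nested list of pixel values (a simplified sketch; real inspection images would be handled with an image-processing library):

```python
def linear_stretch(image, L=256):
    """Linear contrast stretch: g = (f - min) * (L - 1) / (max - min)."""
    flat = [p for row in image for p in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:                       # flat image: nothing to stretch
        return [[0 for _ in row] for row in image]
    scale = (L - 1) / (hi - lo)
    return [[round((p - lo) * scale) for p in row] for row in image]
```

For example, a low-contrast patch with values between 50 and 200 is stretched so that its darkest pixel maps to 0 and its brightest to 255.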
histogram equalization:
the formula: g(x, y) = CDF(f(x, y)) × (L − 1)
Where g (x, y) represents the enhanced image pixel value, f (x, y) represents the original image pixel value, CDF represents the cumulative distribution function of the original image, and L represents the pixel value range;
adaptive histogram equalization:
the formula: g(x, y) = CDF(f(x, y)) × (L − 1)
Where g (x, y) represents the enhanced image pixel value, f (x, y) represents the original image pixel value, CDF represents the cumulative distribution function of the local area of the original image, and L represents the pixel value range.
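Global histogram equalization as given by the formula above can be sketched as follows; the adaptive variant would compute the CDF over local tiles instead of the whole image. This simplified version assumes 8-bit grayscale input.

```python
def equalize(image, L=256):
    """Histogram equalization: g = CDF(f) * (L - 1), CDF over all pixels."""
    flat = [p for row in image for p in row]
    n = len(flat)
    hist = [0] * L
    for p in flat:
        hist[p] += 1
    cdf, total = [0.0] * L, 0
    for v in range(L):
        total += hist[v]
        cdf[v] = total / n             # cumulative distribution in [0, 1]
    return [[round(cdf[p] * (L - 1)) for p in row] for row in image]
```
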
The machine learning technology has the main principle that a computer can automatically learn and improve according to input data so as to be capable of more accurately executing similar tasks in the future, and the machine can more accurately make predictions or decisions in the future when similar tasks are processed;
specifically, machine learning is data driven, it learns rules and patterns from a large amount of data, and makes decisions or predictions based on these rules and patterns; machine learning algorithms are a set of mathematical models and techniques for extracting information from data, learning patterns, and making predictions, common learning algorithms include linear regression, decision trees, support vector machines, neural networks, and the like; the machine learning model is trained through a training data set, then the performance of the machine learning model is evaluated by using a test data set, the training data set is the basis of model learning, and the test data set is used for verifying the generalization capability of the model to new data; feature engineering involves selecting, converting, and creating features to maximally reveal information in the data and improve the performance of the model; generalization ability of a machine learning model refers to its behavior when processing new data, and a good machine learning model should be able to make accurate predictions or decisions on unseen data; supervised learning uses tagged data to train models, unsupervised learning uses untagged data to train, and reinforcement learning involves learning by trial and error to maximize rewards in a particular environment; deep learning is a machine learning technique that can learn complex features of a large amount of data based on a neural network model. The method has remarkable results in the fields of image recognition, natural language processing and the like.
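To make the train-then-evaluate cycle described above concrete, here is a minimal nearest-centroid classifier over labelled feature vectors. The class names and feature values are invented for illustration and are not from the patent; a production system would use one of the algorithms named above (e.g. a support vector machine or neural network).

```python
def train(samples):
    """'Training': compute one centroid per class from labelled vectors."""
    sums, counts = {}, {}
    for features, label in samples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(centroids, features):
    """Assign the class whose centroid is nearest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))
```

Evaluating `predict` on feature vectors held out from `train` is exactly the generalization check the paragraph describes: the test set verifies behaviour on data the model has not seen.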
The convolutional neural network is a deep learning model, is particularly suitable for processing data with a grid structure, such as images and videos, can extract features from original input, and realizes feature learning and pattern recognition of the images through layer-by-layer stacked convolution and pooling operation;
specifically, convolutional neural networks are composed of several important components:
the convolutional layer is the core part of a convolutional neural network, and local features are extracted by convolving input data with a series of convolutional kernels (also called filters). Each convolution kernel scans input data through a sliding window, and a corresponding convolution characteristic diagram is calculated;
the activation function is usually applied to the output of the convolution layer, nonlinear transformation is introduced, and the expression capacity of the network is increased;
the pooling layer is used for reducing the space dimension of the feature map, reducing the number of model parameters and calculation amount and increasing the robustness of the model to position change. Common pooling operations have maximum pooling and average pooling;
the fully connected layer is used for flattening the feature map obtained after a series of convolution and pooling layers into a one-dimensional vector, and then performing tasks such as classification or regression;
the following is a schematic formula of the convolution operation:
calculating the output feature map:
feature_map(i, j) = f( Σ_{m=1}^{M} Σ_{n=1}^{N} input(i+m, j+n) · kernel(m, n) + b )
where input(i, j) represents one element of the input data, kernel(m, n) represents a weight of the convolution kernel, f represents the activation function (e.g., ReLU), and b represents the bias term.
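The convolution formula can be implemented directly. The sketch below uses zero-based indices and, as in most deep-learning frameworks, computes cross-correlation (the kernel is not flipped); it also includes the max-pooling operation described above. This is an illustrative toy, not the patent's network.

```python
def conv2d(inp, kernel, bias=0.0):
    """Valid 2-D convolution as in the formula above, followed by ReLU."""
    M, N = len(kernel), len(kernel[0])
    H, W = len(inp) - M + 1, len(inp[0]) - N + 1
    out = []
    for i in range(H):
        row = []
        for j in range(W):
            s = sum(inp[i + m][j + n] * kernel[m][n]
                    for m in range(M) for n in range(N)) + bias
            row.append(max(0.0, s))          # ReLU activation
        out.append(row)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling to reduce the spatial dimensions."""
    return [[max(fmap[i + di][j + dj]
                 for di in range(size) for dj in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]
```

Stacking `conv2d` and `max_pool` layer by layer, then flattening and feeding a fully connected layer, is the structure the components above describe.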
The computer vision technology in the step six is a technical field related to enabling a computer to understand and interpret visual information, and aims at enabling a computer system to perceive, understand and process image and video data like a human being;
specifically, the following are some important aspects and applications of computer vision technology:
image recognition: Image recognition is a core task of computer vision, which involves classifying images into predefined categories or labels; deep learning algorithms, particularly convolutional neural networks, have achieved great success in image recognition tasks, and applications include image classification, image search, merchandise identification and the like;
target detection: Target detection is to identify the positions and types of multiple targets or objects in an image; it has wide application in the fields of automatic driving, video monitoring, object tracking and the like, and common methods include R-CNN, YOLO (You Only Look Once) and the like;
face recognition: the face recognition technology is used for recognizing and verifying the identity of an individual, has wide application in the aspects of security systems, social media, unlocking of mobile equipment and the like, and the deep learning algorithm has made breakthrough progress in the face recognition task;
medical image analysis: computer vision is used in the medical field for analyzing and identifying medical images such as X-rays, MRI, CT scans, etc. to aid doctors in diagnosing diseases, it can be used for cancer detection, lesion analysis, etc.;
automatic driving: computer vision technology plays a key role in the field of automatic driving and is used for identifying and understanding traffic signs, vehicles, pedestrians and the like on roads in real time so as to support decision and control of the automatic driving vehicles;
image generation: In addition to analyzing images, computer vision also involves image generation; realistic images can be generated using technologies such as generative adversarial networks (GANs), which has wide application in artistic creation, virtual reality and video games;
image processing and enhancement: computer vision techniques also include pre-processing, enhancing, and post-processing of the image to improve image quality, remove noise, increase contrast, and the like.
The edge detection in the second step adopts a Canny edge detection algorithm which is widely used in the fields of computer vision and image processing so as to effectively detect the edge in the image and accurately locate and refine the edge;
the Canny edge detection algorithm mainly comprises the following steps:
gaussian filtering: first, gaussian filtering is applied to an input image to smooth the image and reduce the effect of noise. The Gaussian filtering can eliminate details and noise in the image, so that subsequent edge detection is more stable;
calculating gradient amplitude and direction: after gaussian filtering, gradient information is calculated for the smoothed image. This can be done by applying an operator such as Sobel, prewitt to calculate the gradient magnitude and direction of the image in the horizontal and vertical directions;
non-maximum suppression: aiming at the calculated gradient amplitude and direction, carrying out non-maximum value inhibition, wherein the aim of the step is to select the maximum pixel value near the edge and inhibit other non-maximum values so as to obtain a thinned edge;
dual threshold edge connection: after non-maximum suppression, the gradient amplitude is subjected to threshold segmentation, and pixel points in the image are classified into strong edges, weak edges and non-edges according to a set high threshold and a set low threshold. And then, connecting the strong edge pixel points with the adjacent weak edge pixel points to form a complete edge.
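The first, second and fourth Canny steps can be sketched in a few lines of pure Python (non-maximum suppression and the final hysteresis edge linking are omitted for brevity; a production system would use an image-processing library's Canny implementation, and the threshold values here are illustrative):

```python
def convolve(img, k):
    """Valid convolution of a grayscale image with a small square kernel."""
    K = len(k)
    H, W = len(img) - K + 1, len(img[0]) - K + 1
    return [[sum(img[i + m][j + n] * k[m][n]
                 for m in range(K) for n in range(K))
             for j in range(W)] for i in range(H)]

GAUSS = [[1/16, 2/16, 1/16], [2/16, 4/16, 2/16], [1/16, 2/16, 1/16]]
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def canny_sketch(img, low, high):
    """Gaussian filtering, Sobel gradients and double-threshold labelling
    (non-maximum suppression and hysteresis tracking omitted)."""
    smooth = convolve(img, GAUSS)                    # 1. Gaussian filtering
    gx = convolve(smooth, SOBEL_X)                   # 2. gradient, x direction
    gy = convolve(smooth, SOBEL_Y)                   # 2. gradient, y direction
    mag = [[(a * a + b * b) ** 0.5 for a, b in zip(ra, rb)]
           for ra, rb in zip(gx, gy)]
    # 4. double-threshold classification: strong / weak / non-edge
    return [["strong" if m >= high else "weak" if m >= low else "none"
             for m in row] for row in mag]
```

On an image containing a vertical step edge, pixels near the step receive large gradient magnitudes and are labelled strong edges, while pixels further away fall to the weak or non-edge classes; hysteresis would then keep only weak edges connected to strong ones.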
In the sixth step, real-time monitoring is realized through the unmanned aerial vehicle control system, and the identified defects are marked and alarmed in real time. The unmanned aerial vehicle control system is a software and hardware system for controlling and operating the unmanned aerial vehicle; it comprises components such as a flight control system, a communication system, a sensor system and a ground control station, and realizes functions such as flight, navigation, telemetry and task execution of the unmanned aerial vehicle.
In the sixth step, the inspection report is generated by the unmanned aerial vehicle system and includes information such as the position, type and severity of the defects, and the unmanned aerial vehicle automatically executes the inspection task according to the specified inspection plan.
The unmanned aerial vehicle transmits image data and analysis results in real time through a communication network in the inspection process, and the defect identification results are shared by the unmanned aerial vehicle system and other monitoring equipment.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. The automatic identification method for unmanned aerial vehicle power inspection defects is characterized by comprising the following steps:
step one: acquiring image data during power equipment inspection through an unmanned aerial vehicle carrying camera equipment;
step two: preprocessing the image data acquired in step one, wherein the preprocessing comprises applying an image enhancement algorithm to improve the contrast and definition of the images, performing noise removal to eliminate interference noise in the images, and performing edge detection to extract edge information from the images;
step three: training and classifying the image data preprocessed in step two by using machine learning technology to realize automatic identification of power equipment defects, wherein a convolutional neural network is trained on the image data to construct a model, the images are classified, and defects in the inspection images are distinguished from normal parts;
step four: extracting features of the defects in the inspection images of step three, including color information, texture features and shape features, and selecting discriminative features for describing and distinguishing the defects;
step five: acquiring other sensor data of the power equipment, such as temperature, vibration and current, fusing the image data with the other sensor data, and comprehensively analyzing the multi-source data to judge the position, type and severity of the defects, so that the unmanned aerial vehicle power inspection system becomes more intelligent and comprehensive, the capability of detecting and evaluating power equipment defects is improved, and the reliability and maintenance efficiency of the power system are improved;
step six: monitoring the image data during inspection in real time through computer vision detection, marking and alarming the identified defects, and counting and analyzing the inspection results to generate an inspection report comprising the position, type and severity of the defects, whereby potential problems are identified automatically during unmanned aerial vehicle inspection, improving efficiency and reducing human error.
2. The automatic identification method for unmanned aerial vehicle power inspection defects according to claim 1, wherein the camera equipment in step one comprises a visible light camera, an infrared camera, a thermal imaging camera, a multispectral camera and an ultra-high definition camera.
3. The automatic identification method for unmanned aerial vehicle power inspection defects according to claim 1, wherein the image enhancement algorithm in step two is a technique for improving image quality and enhancing image detail and contrast, and can be applied to various image processing tasks such as image enhancement, object detection and image recognition.
4. The automatic identification method for unmanned aerial vehicle power inspection defects according to claim 1, wherein the machine learning technology enables a computer to automatically learn and improve from input data, so that it can make more accurate predictions or decisions when handling similar tasks in the future.
5. The automatic identification method for unmanned aerial vehicle power inspection defects according to claim 1, wherein the convolutional neural network is a deep learning model particularly suited to data with a grid structure, such as images and video, which extracts features from the raw input and performs feature learning and pattern recognition on images through layer-by-layer stacked convolution and pooling operations.
6. The automatic identification method for unmanned aerial vehicle power inspection defects according to claim 1, wherein the computer vision technology in step six is a technical field concerned with enabling computers to understand and interpret visual information, the aim of which is to enable computer systems to perceive, understand and process image and video data as humans do.
7. The automatic identification method for unmanned aerial vehicle power inspection defects according to claim 1, wherein the edge detection in step two adopts the Canny edge detection algorithm, which is widely used in the fields of computer vision and image processing to effectively detect edges in images and to accurately locate and refine them.
8. The automatic identification method for unmanned aerial vehicle power inspection defects according to claim 1, wherein in step six, real-time monitoring is realized through the unmanned aerial vehicle control system and identified defects are immediately marked and alarmed; the unmanned aerial vehicle control system is a software and hardware system for controlling and operating the unmanned aerial vehicle, and comprises components such as a flight control system, a communication system, a sensor system and a ground control station for realizing functions such as unmanned aerial vehicle flight, navigation, telemetry and task execution.
9. The automatic identification method for unmanned aerial vehicle power inspection defects according to claim 1, wherein in step six, the inspection report is generated by the unmanned aerial vehicle system and comprises information such as the position, type and severity of the defects, and the unmanned aerial vehicle automatically executes the inspection task according to the inspection plan.
10. The automatic identification method for unmanned aerial vehicle power inspection defects according to claim 1, wherein the unmanned aerial vehicle transmits image data and analysis results in real time through a communication network during inspection, and the defect identification results are shared between the unmanned aerial vehicle system and other monitoring equipment.
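Claims 1 and 5 rest on layer-by-layer stacked convolution and pooling operations. A minimal single-channel NumPy sketch of one convolution + ReLU + 2x2 max-pooling stage follows; the kernel values, sizes and function names are illustrative only, not part of the claimed method:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """'Valid' cross-correlation of a single-channel image with a small kernel."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool2x2(fm):
    """2x2 max pooling with stride 2 (any trailing odd row/column is dropped)."""
    h, w = fm.shape
    return fm[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def conv_pool_stage(img, kernel):
    """One convolution + ReLU + pooling stage, as stacked layer by layer in a CNN."""
    return max_pool2x2(relu(conv2d_valid(img, kernel)))
```

Stacking several such stages, each halving the spatial resolution, is what lets the network learn progressively more abstract defect features.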
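Step five of claim 1 fuses image data with other sensor data (temperature, vibration, current) to judge defect severity. One simple way to realize such multi-source fusion is a weighted combination of per-source defect scores; the weights, thresholds and severity labels below are illustrative assumptions rather than values from the patent:

```python
import numpy as np

def fuse_defect_scores(image_score, sensor_scores, weights):
    """Combine an image-based defect score with scores derived from other
    sensors (e.g. temperature, vibration, current) by a weighted average,
    then map the fused score to a severity grade (thresholds illustrative)."""
    scores = np.array([image_score] + list(sensor_scores), dtype=float)
    w = np.array(weights, dtype=float)
    w = w / w.sum()                      # normalise so the weights sum to 1
    fused = float(scores @ w)
    if fused >= 0.8:
        severity = "critical"
    elif fused >= 0.5:
        severity = "moderate"
    else:
        severity = "minor"
    return fused, severity
```

In practice the per-sensor scores would come from the image classifier and from normalised temperature, vibration and current readings.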
CN202311355665.XA 2023-10-18 2023-10-18 Automatic identification method for unmanned aerial vehicle power inspection defects Pending CN117392565A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311355665.XA CN117392565A (en) 2023-10-18 2023-10-18 Automatic identification method for unmanned aerial vehicle power inspection defects


Publications (1)

Publication Number Publication Date
CN117392565A true CN117392565A (en) 2024-01-12


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117710369A * 2024-02-05 2024-03-15 山东省科院易达信息科技有限公司 Metal aluminum phosphate film defect detection method and system based on computer vision technology
CN117710369B * 2024-02-05 2024-04-30 山东省科院易达信息科技有限公司 Metal aluminum phosphate film defect detection method and system based on computer vision technology


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination