CN113284102A - Fan blade damage intelligent detection method and device based on unmanned aerial vehicle - Google Patents

Fan blade damage intelligent detection method and device based on unmanned aerial vehicle

Info

Publication number: CN113284102A (application CN202110533752.4A)
Authority: CN (China)
Prior art keywords: scattering transformation, scattering, map, fan blade, feature map
Legal status: Granted; Active
Other versions: CN113284102B (in Chinese, zh)
Inventors: 吴劲芳, 刁嘉, 王斌, 吴寒, 董超, 贾洪岩, 田锰, 魏宏杰, 翟化欣, 臧鹏, 史学伟
Current and original assignee: State Grid Xinyuan Zhangjiakou Scenery Storage Demonstration Power Plant Co., Ltd.
Application CN202110533752.4A filed by State Grid Xinyuan Zhangjiakou Scenery Storage Demonstration Power Plant Co., Ltd.; priority to CN202110533752.4A; publication of CN113284102A; application granted; publication of CN113284102B

Classifications

    • G06T7/0004 — Image analysis; inspection of images; industrial image inspection
    • G06N3/045 — Neural networks; architecture; combinations of networks
    • G06N3/048 — Neural networks; activation functions
    • G06N3/08 — Neural networks; learning methods
    • G06V10/44 — Local feature extraction by analysis of parts of the pattern (edges, contours, corners); connectivity analysis
    • G06T2207/20064 — Wavelet transform [DWT]
    • G06T2207/20081 — Training; learning
    • G06T2207/30108, G06T2207/30164 — Industrial image inspection; workpiece, machine component
    • Y02E10/72 — Wind energy; wind turbines with rotation axis in wind direction


Abstract

The invention discloses an intelligent detection method and device, based on an unmanned aerial vehicle, for damage to fan blades, and relates to the field of intelligent sensing and intelligent information processing. The method comprises the following steps: collecting a hyperspectral image of the fan blade to be detected in a target area by using an unmanned aerial vehicle; performing feature extraction on the acquired hyperspectral image by scattering transformation to obtain a scattering transformation feature map; performing spectral scattering-transformation depth feature extraction on that feature map with a residual network based on an attention mechanism to obtain a scattering transformation depth feature map; predicting an end-member abundance map of the acquired hyperspectral image from the depth feature map with a regression model based on K nearest neighbors combined with a spectral similarity rule; and judging the damage state of the fan blade to be detected from the end-member abundance map. The invention improves both the capability and the efficiency of detecting damaged parts of fan blades.

Description

Fan blade damage intelligent detection method and device based on unmanned aerial vehicle
Technical Field
The invention relates to the technical field of intelligent sensing and intelligent information processing, in particular to a method and a device for intelligently detecting damage of a fan blade based on an unmanned aerial vehicle.
Background
Blades of wind power generators (hereinafter referred to as fans) are the key components that convert wind energy into electrical energy. During long-term operation of a fan, various kinds of damage appear on the blade surface, such as blade cracks, paint peeling, stains and lightning strikes, which seriously affect the operation of the fan; it is therefore necessary to inspect the fan blades frequently for damage and its severity. Existing fan blade damage detection generally requires operation and maintenance personnel to climb the fan or erect a platform to inspect the blade surface. Because this approach requires personnel to work at height, it suffers from low efficiency, high labor intensity and high risk.
Disclosure of Invention
The invention provides an unmanned-aerial-vehicle-based intelligent detection method and device for fan blade damage, aiming to solve the technical problems of low efficiency, high labor intensity and high risk caused by personnel working at height in the conventional approach to fan blade damage detection.
In order to solve the technical problems, the invention provides the following technical scheme:
on one hand, the invention provides an intelligent detection method for damage of a fan blade based on an unmanned aerial vehicle, which comprises the following steps:
collecting a hyperspectral image of a fan blade to be detected in a target area by using an unmanned aerial vehicle;
performing feature extraction on the acquired hyperspectral image by adopting scattering transformation to obtain a scattering transformation feature map;
performing spectrum scattering transformation depth feature extraction on the scattering transformation feature map obtained through scattering transformation by adopting a residual error network based on an attention mechanism to obtain a scattering transformation depth feature map;
predicting an end member abundance map of the acquired hyperspectral image by adopting a regression model based on K nearest neighbor and combining a spectrum similarity rule according to the obtained scattering transformation depth feature map;
and judging the damage state of the fan blade to be detected based on the end member abundance diagram.
Further, performing feature extraction on the acquired hyperspectral image by scattering transformation to obtain a scattering transformation feature map comprises the following steps:
taking the hyperspectral image as input, and constructing a hyperspectral cube map by using an averaging filter;
and constructing a scattering transformation function, and performing scattering transformation on the hyperspectral cube map to obtain a scattering transformation characteristic map.
Further, performing scattering transformation on the hyperspectral cube map to obtain a scattering transformation feature map, comprising:
constructing invariant, stable and information-rich signal representations by means of iterative wavelet decomposition, modulus operation and a low-pass filter, and obtaining the scattering transformation coefficients from zero order to high order;
and constructing a scattering transformation coefficient vector to obtain a scattering transformation characteristic diagram.
Further, spectrum scattering transformation depth feature extraction is carried out on the scattering transformation feature map obtained through scattering transformation by adopting a residual error network based on an attention mechanism, and the scattering transformation depth feature map is obtained, and the method comprises the following steps:
constructing a residual error network model with an attention mechanism based on scattering transformation characteristics;
refining the scattering transformation feature map input into the residual network model by using a convolutional block attention module, so as to extract the spectral scattering-transformation depth features and obtain a scattering transformation depth feature map; the attention module includes channel-based attention and space-based attention, both of which are used to train the scattering transform features in their three-dimensional structure.
Further, the predicting the end member abundance diagram of the acquired hyperspectral image by adopting a regression model based on K nearest neighbor and combining with a spectrum similarity rule according to the obtained scattering transformation depth feature diagram comprises the following steps:
in the regression model, K-neighbor learning regression factors are utilized to predict abundance values corresponding to end members of each pixel according to the obtained scattering transformation depth feature map, and an end member abundance map of the hyperspectral image is obtained.
On the other hand, the invention also provides an intelligent detection device for the damage of the fan blade based on the unmanned aerial vehicle, which comprises the following components:
the unmanned aerial vehicle module is used for acquiring a hyperspectral image of a fan blade to be detected in a target area;
the characteristic extraction module is used for extracting the characteristics of the acquired hyperspectral image by adopting scattering transformation to obtain a scattering transformation characteristic diagram; performing spectrum scattering transformation depth feature extraction on the scattering transformation feature map obtained through scattering transformation by adopting a residual error network based on an attention mechanism to obtain a scattering transformation depth feature map;
the damage detection module is used for predicting an end member abundance map of the acquired hyperspectral image by adopting a regression model based on K nearest neighbor and combining a spectrum similarity rule according to the obtained scattering transformation depth characteristic map; and judging the damage state of the fan blade to be detected based on the end member abundance diagram.
Further, the feature extraction module is specifically configured to:
taking the hyperspectral image as input, and constructing a hyperspectral cube map by using an averaging filter;
and constructing a scattering transformation function, and performing scattering transformation on the hyperspectral cube map to obtain a scattering transformation characteristic map.
Further, performing scattering transformation on the hyperspectral cube map to obtain a scattering transformation feature map, comprising:
constructing invariant, stable and information-rich signal representations by means of iterative wavelet decomposition, modulus operation and a low-pass filter, and obtaining the scattering transformation coefficients from zero order to high order;
and constructing a scattering transformation coefficient vector to obtain a scattering transformation characteristic diagram.
Further, the feature extraction module is specifically further configured to:
constructing a residual error network model with an attention mechanism based on scattering transformation characteristics;
refining the scattering transformation feature map input into the residual network model by using a convolutional block attention module, so as to extract the spectral scattering-transformation depth features and obtain a scattering transformation depth feature map; the attention module includes channel-based attention and space-based attention, both of which are used to train the scattering transform features in their three-dimensional structure.
Further, the damage detection module is specifically configured to:
in the regression model, K-neighbor learning regression factors are utilized to predict abundance values corresponding to end members of each pixel according to the obtained scattering transformation depth feature map, and an end member abundance map of the hyperspectral image is obtained.
In yet another aspect, the present invention also provides an electronic device comprising a processor and a memory; wherein the memory has stored therein at least one instruction that is loaded and executed by the processor to implement the above-described method.
In yet another aspect, the present invention also provides a computer-readable storage medium having at least one instruction stored therein, the instruction being loaded and executed by a processor to implement the above method.
The technical scheme provided by the invention has the beneficial effects that at least:
the method comprises the steps that an unmanned aerial vehicle is adopted to collect hyperspectral images of fan blades to be detected in a target area; performing feature extraction on the acquired hyperspectral image by adopting scattering transformation to obtain a scattering transformation feature map; performing spectrum scattering transformation depth feature extraction on the scattering transformation feature map obtained through scattering transformation by adopting a residual error network based on an attention mechanism to obtain a scattering transformation depth feature map; predicting an end member abundance map of the acquired hyperspectral image by adopting a regression model based on K nearest neighbor and combining a spectrum similarity rule according to the obtained scattering transformation depth feature map; and judging the damage state of the fan blade to be detected based on the predicted end member abundance diagram. The damage detection of the fan blade is automatically and accurately carried out by using the unmanned aerial vehicle, the detection efficiency is effectively improved, technical support is provided for follow-up blade maintenance, and the operation time and cost are saved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flow chart of an intelligent detection method for damage to a fan blade based on an unmanned aerial vehicle according to the present invention;
fig. 2 is a schematic diagram of the structure of the three-dimensional (3D) filter used for hyperspectral unmixing provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
First embodiment
The embodiment provides an intelligent detection method for damage to a fan blade based on an unmanned aerial vehicle, and the method can be realized by electronic equipment, and the electronic equipment can be a terminal or a server. The execution flow of the intelligent detection method for the damage of the fan blade based on the unmanned aerial vehicle is shown in figure 1, and the method comprises the following steps:
s101, collecting hyperspectral images of the fan blade to be detected in a target area by using an unmanned aerial vehicle;
s102, performing feature extraction on the acquired hyperspectral image by adopting scattering transformation to obtain a scattering transformation feature map;
s103, performing spectrum scattering transformation depth feature extraction on the scattering transformation feature map obtained through scattering transformation by adopting a residual error network based on an attention mechanism to obtain a scattering transformation depth feature map;
s104, predicting an end member abundance map of the acquired hyperspectral image by adopting a regression model based on K nearest neighbor and combining a spectrum similarity rule according to the obtained scattering transformation depth feature map;
and S105, judging the damage state of the fan blade to be detected based on the end member abundance diagram.
The unmanned aerial vehicle is provided with data acquisition equipment and an embedded processing system; the hyperspectral image information and the optical video information in the target area can be acquired, and the acquired data is transmitted and processed.
The method for extracting the features of the hyperspectral image by adopting scattering transformation to obtain a scattering transformation feature map comprises the following steps:
(1) taking hyperspectral image information as input, and constructing a hyperspectral image cube by using an average filter; obtaining a hyperspectral cube map, specifically realizing the following process:
for the acquired hyperspectral image of the fan blade area, the v-th pixel spectrum is given as
Figure BDA0003066668820000041
The spectral mixture model f can be described simply as:
Figure BDA0003066668820000051
wherein the content of the first and second substances,
Figure BDA0003066668820000052
is the abundance fraction, avkRepresenting the abundance value of the kth end member.
Figure BDA0003066668820000053
Is an end member matrix of n end members, and l is the number of wave bands of the hyperspectral data.
Figure BDA0003066668820000054
Representing an error vector. a isvk≧ 0 is an abundance non-negative constraint,
Figure BDA0003066668820000055
is a constraint that the sum of abundances is one.
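The linear spectral mixture model above can be sketched in a few lines of NumPy; the dimensions and values below are invented purely for illustration:

```python
import numpy as np

# Hypothetical sizes: l spectral bands, n end members.
l, n = 50, 3
rng = np.random.default_rng(0)

# End-member matrix X (l x n): each column is a pure-material spectrum.
X = rng.uniform(0.0, 1.0, size=(l, n))

# Abundance vector a_v: non-negative and summing to one (the two constraints).
a_v = np.array([0.6, 0.3, 0.1])

# Observed pixel spectrum: linear mixture plus a small error vector.
eps_v = rng.normal(0.0, 0.01, size=l)
r_v = X @ a_v + eps_v
```

Unmixing then amounts to recovering $a_v$ from $r_v$ given $X$, subject to the two constraints.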
The spectral vector at spatial position $(i, j)$ is $r_{i,j}$ $(i = 1, 2, \ldots, M;\; j = 1, 2, \ldots, N)$. A data cube of the hyperspectral image is computed using the three-dimensional filter $f(p, q, o)$. As shown in FIG. 2, for a filter $f(p, q)$ of size $(2P+1) \times (2Q+1)$, the new data cube after filtering is:

$$\bar{r}_{i,j} = \sum_{p=-P}^{P} \sum_{q=-Q}^{Q} f(p, q)\, r_{i+p,\, j+q}$$

where $\{r_{i+P,j+Q}, \ldots, r_{i+1,j+1}, r_{i-1,j-1}, \ldots, r_{i-P,j-Q}\}$ are the pixels adjacent to $r_{i,j}$, and $\{f(P, Q), \ldots, f(1, 1), f(0, 0), f(-1, -1), \ldots, f(-P, -Q)\}$ are the coefficients of the filter.
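As a minimal sketch of this band-wise averaging step (the window size and the edge-replication boundary handling are assumptions, not taken from the patent):

```python
import numpy as np

def average_filter_cube(cube, P=1, Q=1):
    """Filter an M x N x l hyperspectral cube with a (2P+1) x (2Q+1)
    uniform spatial window, applied to every band; edge pixels are
    handled by replicating the border."""
    M, N, l = cube.shape
    padded = np.pad(cube, ((P, P), (Q, Q), (0, 0)), mode="edge")
    out = np.zeros_like(cube, dtype=float)
    w = 1.0 / ((2 * P + 1) * (2 * Q + 1))   # uniform filter coefficients f(p, q)
    for p in range(2 * P + 1):
        for q in range(2 * Q + 1):
            out += w * padded[p:p + M, q:q + N, :]
    return out

cube = np.random.default_rng(1).uniform(size=(8, 8, 5))
smoothed = average_filter_cube(cube)
```

A constant cube passes through unchanged, which is the defining property of an averaging filter with coefficients summing to one.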
(2) Performing scattering transformation on the hyperspectral cube map to obtain the scattering transformation feature map, which comprises: constructing a scattering transformation function, and constructing invariant, stable and information-rich signal representations by means of iterative wavelet decomposition, modulus operation and a low-pass filter; obtaining the scattering transformation coefficients from zero order to high order, and assembling them into a scattering transformation coefficient vector to obtain the scattering transformation feature map. The specific implementation process is as follows:
$\psi_\lambda$ is a wavelet family whose mother wavelet is scaled by $2^{j}$, expressed as:

$$\psi_\lambda(y) = 2^{-2j}\, \psi(2^{-j} y) \tag{2}$$

The wavelet modulus operator $|W_r|$ is defined to combine the wavelet transform and the modulus operation:

$$|W_r| = \{S(r), U(r)\} = \{\, r * \phi_J,\; |r * \psi_\lambda| \,\}_{\lambda} \tag{3}$$

The first part, $S(r)$, is called the scattering coefficient and represents the main information of the input signal in the low-frequency band. The second part, $U(r)$, is the scattering propagator, a nonlinear wavelet-modulus representation. $S(r)$ is the output coefficient of each stage, and $U(r)$ is the input of the next stage of the transform, from which the high-frequency information is obtained. $r$ is the input spectral vector and $\lambda$ is the index variable of the wavelet modulus operator. The scaling function is $\phi_J(y) = 2^{-2J}\, \phi(2^{-J} y)$, with $J = 3$ in this embodiment.
In this way, the scattering transform uses iterative wavelet decomposition, modulus operation and a low-pass filter to construct invariant, stable and information-rich signal representations. The zero-order scattering transform output is:

$$S_0(r) = r * \phi_J(y) \tag{4}$$

Then $U_1(r, \lambda_1)$ is used as the input of the first-order transform:

$$U_1(r, \lambda_1) = |r * \psi_{\lambda_1}| \tag{5}$$

Further, the first-order scattering transform output and the corresponding next-stage input are:

$$S_1(r, \lambda_1) = U_1(r, \lambda_1) * \phi_J \tag{6}$$

$$U_2(r, \lambda_1, \lambda_2) = |\,U_1(r, \lambda_1) * \psi_{\lambda_2}\,| \tag{7}$$

The output set of the scattering transformation coefficients from order zero to order m is finally obtained as:

$$S(r) = \{S_0(r), S_1(r), \ldots, S_m(r)\} \tag{8}$$

which is the feature vector of the hyperspectral pixel.
The main advantages of the scattering transform are translation invariance, stability to local deformations, energy conservation and strong robustness to noise.
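The iteration of equations (4)–(8) can be illustrated for a one-dimensional spectrum with a single-scale Haar wavelet; this is a toy sketch under assumed filters (one wavelet scale per order, mean as the low-pass summary), not the exact wavelets of the embodiment:

```python
import numpy as np

def haar_decompose(x):
    """One level of Haar wavelet decomposition: (low-pass, high-pass)."""
    x = x[: len(x) // 2 * 2]                  # truncate to even length
    low = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation (phi)
    high = (x[0::2] - x[1::2]) / np.sqrt(2.0) # detail (psi)
    return low, high

def scattering_coeffs(r, m=2):
    """Scattering coefficients S_0..S_m of a 1-D signal r: iterate
    wavelet decomposition + modulus, low-pass each stage (eqs. 4-8).
    For simplicity a single wavelet scale is used per order."""
    S, U = [], [np.asarray(r, dtype=float)]
    for _ in range(m + 1):
        S_order, U_next = [], []
        for u in U:
            low, high = haar_decompose(u)
            S_order.append(low.mean())        # low-pass output S_k
            U_next.append(np.abs(high))       # modulus feeds the next order
        S.append(np.array(S_order))
        U = U_next
    return S

r = np.sin(np.linspace(0, 4 * np.pi, 64))
S = scattering_coeffs(r, m=2)                 # orders 0, 1 and 2
```

The modulus at each stage is what recovers high-frequency energy that a plain low-pass filter would discard, which is why higher orders add information rather than repeating $S_0$.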
Therefore, the hyperspectral scattering-feature mixture model can be rewritten as:

$$S(r_v) = \{S_0(r_v), S_1(r_v), \ldots, S_m(r_v)\} = f(a_v, X, \varepsilon_v)$$

with the abundance constraints of equation (1) unchanged.
performing spectrum scattering transformation depth feature extraction on a scattering transformation feature map obtained through scattering transformation by adopting a residual error network based on an attention mechanism to obtain a scattering transformation depth feature map, wherein the method comprises the following steps:
(1) Constructing a residual network based on scattering transformation features

The scattering transform coefficient vector of the v-th pixel, $S_v \in \mathbb{R}^{1 \times L}$, is reshaped into three dimensions, that is, $S_v \in \mathbb{R}^{H \times W \times C}$, where $H$, $W$ and $C$ denote the height, width and channel dimensions of the three-dimensional structure. The reshaped $S_v$ is the input of the deep neural network.
The convolution model is one of the key components of the feature extraction layer. The input $S$ (or a feature map $F^S$) is convolved with a convolution kernel to obtain the following feature map:

$$F_l^S = F_{l-1}^S * W_l + b_l \tag{3}$$

where $F_{l-1}^S$ and $F_l^S$ represent the input and output of the $l$-th convolutional layer, with $F_0^S = S$; $*$ denotes the convolution operation; and $W_l$ and $b_l$ are the weights and bias of the $l$-th convolutional layer.
However, as the depth of the network increases, a plain CNN suffers from the vanishing-gradient problem; in the high-frequency part of the scattering domain in particular, the scattering transformation coefficients become small, so the probability of the gradient vanishing is higher. To eliminate this influence, a residual network is adopted for training, which preserves the efficiency of the network. Defining the residual function to be learned as $\mathcal{F}(F^S)$, one obtains:

$$F_l^S = \mathcal{F}(F_{l-1}^S) + F_{l-1}^S$$
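The residual connection described above can be sketched as follows; the layer shapes and the 1×1 channel-mixing layers are illustrative assumptions, and the identity shortcut is the point:

```python
import numpy as np

def conv1x1(x, W, b):
    """Toy 1x1 'convolution' over the channel axis with ReLU: x is (H, W, C)."""
    return np.maximum(x @ W + b, 0.0)

def residual_block(x, W1, b1, W2, b2):
    """F_l = F(F_{l-1}) + F_{l-1}: the learned residual plus the identity
    shortcut, which keeps gradients flowing even when the residual is small."""
    out = conv1x1(x, W1, b1)
    out = out @ W2 + b2                       # second layer, pre-activation
    return np.maximum(out + x, 0.0)           # add the shortcut, then ReLU

rng = np.random.default_rng(2)
x = rng.normal(size=(4, 4, 8))
W1, b1 = rng.normal(size=(8, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)) * 0.1, np.zeros(8)
y = residual_block(x, W1, b1, W2, b2)
```

If the learned residual collapses toward zero (as small scattering coefficients encourage), the block degrades gracefully to the identity instead of blocking the gradient.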
(2) The scattering transformation feature map in the residual network model is refined by a convolutional block attention module (CBAM) to extract the spectral scattering-transformation depth features and obtain the final scattering transformation depth feature map. In the attention block, both channel-based attention and space-based attention are used to train the scattering transform features in their three-dimensional structure. The specific implementation process is as follows:
The feature map $F^S \in \mathbb{R}^{H \times W \times C}$ is reshaped to the appropriate dimensions, which facilitates the input of the spatial attention network. The two-dimensional spatial attention map is defined as $M_{Sp} \in \mathbb{R}^{H \times W}$, while the one-dimensional channel attention map is defined as $M_{Ch} \in \mathbb{R}^{C}$. Given a feature map $F^S$, the CBAM module computes attention weights along two independent dimensions (the spatial dimension and the channel dimension) and then multiplies them with $F^S$ to realize the refinement of the feature map. The two-dimensional spatial attention map is used to search for the most informative locations in the spatial dimension, while the channel attention map is used to search for the informative content along the channel axis.
For the spatial module, the average-pooling and max-pooling operations are applied along the channel axis and their outputs are concatenated to generate a feature descriptor; a convolutional layer $f^{7 \times 7}$ of size $7 \times 7$ and a sigmoid activation function $\sigma$ then produce the spatial attention map $M_{Sp}$:

$$M_{Sp}(F^S) = \sigma\big(f^{7 \times 7}([\mathrm{AvgPool}(F^S);\, \mathrm{MaxPool}(F^S)])\big)$$

where $\mathrm{AvgPool}$ and $\mathrm{MaxPool}$ represent the average-pooling operator and the max-pooling operator, respectively. Thus, the output of the spatial attention module can be described as:

$$F' = M_{Sp}(F^S) \otimes F^S$$

where $\otimes$ represents element-wise multiplication.
For the channel module, both the max-pooled and the average-pooled descriptors are passed through a shared network consisting of a multi-layer perceptron (MLP) with one hidden layer. The two output feature vectors are summed element-wise, and a sigmoid activation function is then applied to obtain the channel attention map:

$$M_{Ch}(F') = \sigma\big(\mathrm{MLP}_{\omega}(\mathrm{AvgPool}(F')) + \mathrm{MLP}_{\omega}(\mathrm{MaxPool}(F'))\big)$$

where $\omega$ is the shared MLP weight.
The output of the channel attention module can then be described as:

$$F'' = M_{Ch}(F') \otimes F'$$

Finally, the output of the whole scattering-transform attention mechanism is:

$$F'' = M_{Ch}\big(M_{Sp}(F^S) \otimes F^S\big) \otimes \big(M_{Sp}(F^S) \otimes F^S\big)$$
An attention-based residual network with scattering transformation features is thus obtained, and its residual function can be expressed as:

$$F_l^S = \mathcal{F}_{\mathrm{CBAM}}(F_{l-1}^S) + W_s F_{l-1}^S$$

In most cases, no reshaping of the input to the scattering-transform attention mechanism is required, which means that $W_s$ is the identity mapping, and the residual function is finally expressed as:

$$F_l^S = \mathcal{F}_{\mathrm{CBAM}}(F_{l-1}^S) + F_{l-1}^S$$
after the last feature extraction layer is executed, a final scattering transformation depth feature map F is obtainedk S
The specific implementation of predicting the end-member abundance map of the hyperspectral image with a regression model based on K nearest neighbors, from the scattering transformation depth feature map combined with the spectral similarity rule, is as follows. The abundance values corresponding to the end members of each pixel are predicted with the K-NN regression model. In the regression model, the scattering transformation features of the samples are used, and a K-nearest-neighbor (K-NN) learning regressor is trained. K-NN predicts the values for the test set according to the similarity between its features and the spectra of the training set. The abundance map of the hyperspectral image obtained from the regression prediction is then used to distinguish damaged from undamaged areas of the fan blade and to evaluate the damage state of the fan blade.
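A minimal K-NN abundance regression consistent with this description can be sketched as follows (the feature dimension, the value of k and the Euclidean feature-space distance are assumptions):

```python
import numpy as np

def knn_regress(train_feats, train_abund, test_feats, k=3):
    """Predict each pixel's abundance vector as the mean of the abundances
    of its k nearest training pixels in scattering-feature space."""
    preds = []
    for f in test_feats:
        d = np.linalg.norm(train_feats - f, axis=1)   # feature-space distances
        idx = np.argsort(d)[:k]                       # k nearest neighbors
        preds.append(train_abund[idx].mean(axis=0))   # average their abundances
    return np.array(preds)

rng = np.random.default_rng(4)
train_feats = rng.normal(size=(100, 16))      # scattering depth features
train_abund = rng.dirichlet(np.ones(3), 100)  # abundances: >= 0, sum to 1
test_feats = rng.normal(size=(10, 16))
abund_map = knn_regress(train_feats, train_abund, test_feats, k=5)
```

Because each training abundance vector is non-negative and sums to one, their average automatically satisfies the same two constraints, so the K-NN prediction needs no extra projection step.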
In conclusion, in the embodiment, the unmanned aerial vehicle is adopted to collect the hyperspectral image of the fan blade to be detected in the target area; performing feature extraction on the acquired hyperspectral image by adopting scattering transformation to obtain a scattering transformation feature map; performing spectrum scattering transformation depth feature extraction on the obtained scattering transformation feature map by adopting a residual error network based on an attention mechanism to obtain a final scattering transformation depth feature map; predicting an end member abundance map of the acquired hyperspectral image by adopting a regression model based on K nearest neighbor and combining a spectrum similarity rule according to the obtained scattering transformation depth feature map; and judging the damage state of the fan blade to be detected based on the predicted end member abundance diagram. The damage detection of the fan blade is automatically and accurately carried out by using the unmanned aerial vehicle, the detection efficiency is effectively improved, technical support is provided for follow-up blade maintenance, and the operation time and cost are saved.
Second embodiment
This embodiment provides an intelligent detection device for fan blade damage based on an unmanned aerial vehicle, which comprises the following modules:
the unmanned aerial vehicle module is used for acquiring a hyperspectral image of a fan blade to be detected in a target area;
the feature extraction module is used for performing feature extraction on the acquired hyperspectral image by scattering transformation to obtain a scattering transformation feature map, and for performing spectral scattering transformation depth feature extraction on that feature map by a residual network based on an attention mechanism to obtain a scattering transformation depth feature map;
the damage detection module is used for predicting an end-member abundance map of the acquired hyperspectral image from the scattering transformation depth feature map by a K-nearest-neighbor regression model combined with a spectral similarity rule, and for judging the damage state of the fan blade to be detected based on the end-member abundance map.
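As a rough illustration of the convolutional block attention step used by the feature extraction module, the following NumPy sketch applies channel attention followed by spatial attention to a `(C, H, W)` feature map. The fixed averaging weights `W1`/`W2` are hypothetical stand-ins for the learned MLP parameters, which the patent does not give:

```python
import numpy as np

def cbam(feature_map, reduction=2):
    """Sketch of a Convolutional Block Attention Module on a (C, H, W)
    feature map: channel attention from pooled statistics, then
    spatial attention over the channel-refined map."""
    C, H, W = feature_map.shape
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    # Channel attention: shared two-layer MLP over average- and max-pooled
    # channel descriptors (fixed averaging weights stand in for learned ones).
    W1 = np.full((C // reduction, C), 1.0 / C)
    W2 = np.full((C, C // reduction), reduction / C)
    avg_pool = feature_map.mean(axis=(1, 2))
    max_pool = feature_map.max(axis=(1, 2))
    mlp = lambda v: W2 @ np.maximum(W1 @ v, 0.0)   # ReLU hidden layer
    channel_att = sigmoid(mlp(avg_pool) + mlp(max_pool))
    x = feature_map * channel_att[:, None, None]

    # Spatial attention: squash channel-wise mean and max maps (the original
    # CBAM applies a learned 7x7 convolution; they are summed directly here).
    spatial_att = sigmoid(x.mean(axis=0) + x.max(axis=0))
    return x * spatial_att[None, :, :]

refined = cbam(np.random.default_rng(0).random((4, 8, 8)))
```

Both attention maps multiplicatively re-weight the input, which is why the step is described as "re-correcting" the scattering transformation feature map before the depth features are taken.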
The unmanned-aerial-vehicle-based intelligent detection device for fan blade damage of this embodiment corresponds to the unmanned-aerial-vehicle-based intelligent detection method for fan blade damage of the first embodiment; the functions realized by its functional modules correspond one to one to the flow steps of that method, and are therefore not described again here.
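The abundance-prediction step performed by the damage detection module can be sketched with scikit-learn's `KNeighborsRegressor`. The feature dimension, the number of end members, the training data, and the use of inverse-distance weighting as the "spectral similarity rule" are all assumptions made for illustration:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: 100 labeled pixels with 8-dimensional
# scattering-transformation depth features and 3 end-member abundances
# that sum to one per pixel (all sizes are illustrative).
X_train = rng.random((100, 8))
A_train = rng.random((100, 3))
A_train /= A_train.sum(axis=1, keepdims=True)

# K-nearest-neighbour regression; inverse-distance weighting plays the
# role of a simple spectral-similarity rule (more similar pixels weigh more).
knn = KNeighborsRegressor(n_neighbors=5, weights="distance")
knn.fit(X_train, A_train)

# Predict per-end-member abundance values for query pixels.
X_query = rng.random((10, 8))
A_pred = knn.predict(X_query)     # one abundance row per query pixel
```

Because each prediction is a convex combination of training abundance rows, the predicted abundances remain non-negative and sum to one per pixel, as an end-member abundance map requires.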
Third embodiment
The present embodiment provides an electronic device, which includes a processor and a memory; wherein the memory has stored therein at least one instruction that is loaded and executed by the processor to implement the method of the first embodiment.
The electronic device may vary considerably in configuration and performance, and may include one or more processors (CPUs) and one or more memories, where the memory stores at least one instruction that is loaded by the processor to execute the above method.
Fourth embodiment
The present embodiment provides a computer-readable storage medium storing at least one instruction that is loaded and executed by a processor to implement the method of the first embodiment. The computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like. The instructions stored therein may be loaded by a processor in the terminal to perform the above method.
Furthermore, it should be noted that the present invention may be provided as a method, apparatus or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied in the medium.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should also be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
Finally, it should be noted that while the above describes preferred embodiments of the invention, those skilled in the art, once apprised of the basic inventive concept, may make numerous changes and modifications without departing from the principles of the invention, and such changes and modifications shall be deemed to fall within the scope of the invention. Therefore, the appended claims are intended to be interpreted as covering the preferred embodiments and all alterations and modifications falling within the scope of the embodiments of the invention.

Claims (10)

1. An unmanned-aerial-vehicle-based intelligent detection method for fan blade damage, characterized in that it comprises:
collecting a hyperspectral image of a fan blade to be detected in a target area by using an unmanned aerial vehicle;
performing feature extraction on the acquired hyperspectral image by adopting scattering transformation to obtain a scattering transformation feature map;
performing spectral scattering transformation depth feature extraction on the scattering transformation feature map by a residual network based on an attention mechanism to obtain a scattering transformation depth feature map;
predicting an end-member abundance map of the collected hyperspectral image from the scattering transformation depth feature map by a K-nearest-neighbor regression model combined with a spectral similarity rule;
and judging the damage state of the fan blade to be detected based on the end-member abundance map.
2. The unmanned-aerial-vehicle-based intelligent detection method for fan blade damage according to claim 1, wherein the performing feature extraction on the collected hyperspectral image to obtain a scattering transformation feature map comprises:
taking the hyperspectral image as input, and constructing a hyperspectral cube map by using an averaging filter;
and constructing a scattering transformation function and performing scattering transformation on the hyperspectral cube map to obtain the scattering transformation feature map.
3. The unmanned-aerial-vehicle-based intelligent detection method for fan blade damage according to claim 2, wherein the performing scattering transformation on the hyperspectral cube map to obtain a scattering transformation feature map comprises:
constructing invariant, stable and information-rich signal representations by iterative wavelet decomposition, modulus operation and low-pass filtering, and obtaining scattering transformation coefficients from the zeroth order to higher orders respectively;
and constructing a scattering transformation coefficient vector to obtain the scattering transformation feature map.
4. The unmanned-aerial-vehicle-based intelligent detection method for fan blade damage according to claim 1, wherein the performing spectral scattering transformation depth feature extraction on the scattering transformation feature map by a residual network based on an attention mechanism to obtain a scattering transformation depth feature map comprises:
constructing a residual network model with an attention mechanism based on the scattering transformation features;
recalibrating the scattering transformation feature map input to the residual network model with a convolutional block attention module, so as to extract spectral scattering transformation depth features and obtain the scattering transformation depth feature map; wherein the attention module includes channel-based attention and space-based attention, both of which are used to train the scattering transformation features in a three-dimensional structure.
5. The unmanned-aerial-vehicle-based intelligent detection method for fan blade damage according to claim 1, wherein the predicting an end-member abundance map of the collected hyperspectral image from the scattering transformation depth feature map by a K-nearest-neighbor regression model combined with a spectral similarity rule comprises:
predicting, in the regression model, the abundance value of each end member of each pixel from the scattering transformation depth feature map by K-nearest-neighbor learning regressors, to obtain the end-member abundance map of the hyperspectral image.
6. An unmanned-aerial-vehicle-based intelligent detection device for fan blade damage, characterized in that it comprises:
the unmanned aerial vehicle module is used for acquiring a hyperspectral image of a fan blade to be detected in a target area;
the feature extraction module is used for performing feature extraction on the acquired hyperspectral image by scattering transformation to obtain a scattering transformation feature map, and for performing spectral scattering transformation depth feature extraction on that feature map by a residual network based on an attention mechanism to obtain a scattering transformation depth feature map;
the damage detection module is used for predicting an end-member abundance map of the acquired hyperspectral image from the scattering transformation depth feature map by a K-nearest-neighbor regression model combined with a spectral similarity rule, and for judging the damage state of the fan blade to be detected based on the end-member abundance map.
7. The unmanned aerial vehicle-based intelligent detection device for fan blade damage of claim 6, wherein the feature extraction module is specifically configured to:
taking the hyperspectral image as input, and constructing a hyperspectral cube map by using an averaging filter;
and constructing a scattering transformation function and performing scattering transformation on the hyperspectral cube map to obtain the scattering transformation feature map.
8. The unmanned-aerial-vehicle-based intelligent detection device for fan blade damage according to claim 7, wherein the performing scattering transformation on the hyperspectral cube map to obtain a scattering transformation feature map comprises:
constructing invariant, stable and information-rich signal representations by iterative wavelet decomposition, modulus operation and low-pass filtering, and obtaining scattering transformation coefficients from the zeroth order to higher orders respectively;
and constructing a scattering transformation coefficient vector to obtain the scattering transformation feature map.
9. The unmanned aerial vehicle-based intelligent detection device for fan blade damage of claim 6, wherein the feature extraction module is further specifically configured to:
constructing a residual network model with an attention mechanism based on the scattering transformation features;
recalibrating the scattering transformation feature map input to the residual network model with a convolutional block attention module, so as to extract spectral scattering transformation depth features and obtain the scattering transformation depth feature map; wherein the attention module includes channel-based attention and space-based attention, both of which are used to train the scattering transformation features in a three-dimensional structure.
10. The unmanned aerial vehicle-based intelligent detection device for damage to fan blades as claimed in claim 6, wherein the damage detection module is specifically configured to:
predicting, in the regression model, the abundance value of each end member of each pixel from the scattering transformation depth feature map by K-nearest-neighbor learning regressors, to obtain the end-member abundance map of the hyperspectral image.
CN202110533752.4A 2021-05-14 2021-05-14 Fan blade damage intelligent detection method and device based on unmanned aerial vehicle Active CN113284102B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110533752.4A CN113284102B (en) 2021-05-14 2021-05-14 Fan blade damage intelligent detection method and device based on unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN113284102A true CN113284102A (en) 2021-08-20
CN113284102B CN113284102B (en) 2022-11-01

Family

ID=77279460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110533752.4A Active CN113284102B (en) 2021-05-14 2021-05-14 Fan blade damage intelligent detection method and device based on unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN113284102B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110738171A (en) * 2019-10-15 2020-01-31 大连海事大学 Hyperspectral image spectrum space division classification method based on class feature iterative random sampling
CN110852369A (en) * 2019-11-06 2020-02-28 西北工业大学 Hyperspectral image classification method combining 3D/2D convolutional network and adaptive spectrum unmixing
CN111274869A (en) * 2020-01-07 2020-06-12 中国地质大学(武汉) Method for classifying hyperspectral images based on parallel attention mechanism residual error network
CN112580480A (en) * 2020-12-14 2021-03-30 河海大学 Hyperspectral remote sensing image classification method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU, ZHOULE: "Research on Signal Classification Based on Deep Learning and Scattering Transform", China Master's Theses Full-text Database, Information Science and Technology, no. 04, 15 April 2019 (2019-04-15), pages 2 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117307414A (en) * 2023-09-26 2023-12-29 金开智维(宁夏)科技有限公司 Unmanned aerial vehicle aerial photography-based fan blade detection method and device and electronic equipment
CN117307414B (en) * 2023-09-26 2024-05-03 金开智维(宁夏)科技有限公司 Unmanned aerial vehicle aerial photography-based fan blade detection method and device and electronic equipment

Also Published As

Publication number Publication date
CN113284102B (en) 2022-11-01

Similar Documents

Publication Publication Date Title
CN111027575B (en) Semi-supervised semantic segmentation method for self-attention confrontation learning
CN109117858B (en) Method and device for monitoring icing of wind driven generator blade
CN113449680B (en) Knowledge distillation-based multimode small target detection method
CN105938559B (en) Use the Digital Image Processing of convolutional neural networks
Husin et al. Embedded portable device for herb leaves recognition using image processing techniques and neural network algorithm
CN107563433B (en) Infrared small target detection method based on convolutional neural network
CN108345827B (en) Method, system and neural network for identifying document direction
CN104834933A (en) Method and device for detecting salient region of image
CN113269224B (en) Scene image classification method, system and storage medium
CN110991444A (en) Complex scene-oriented license plate recognition method and device
CN113284102B (en) Fan blade damage intelligent detection method and device based on unmanned aerial vehicle
CN112784754A (en) Vehicle re-identification method, device, equipment and storage medium
CN114444757A (en) Combined prediction method for plateau mountain multi-model multi-scale new energy power station output
CN110334775B (en) Unmanned aerial vehicle line fault identification method and device based on width learning
WO2022257407A1 (en) Hyperspectral image classification method and apparatus, electronic device and storage medium
CN116503399B (en) Insulator pollution flashover detection method based on YOLO-AFPS
CN115937071A (en) Image detection method, device, equipment and medium
Jakaria et al. Comparison of classification of birds using lightweight deep convolutional neural networks
CN111666872B (en) Efficient behavior identification method under data imbalance
CN112883964A (en) Method for detecting characters in natural scene
CN116012393A (en) Carton point cloud segmentation method, device and processing equipment
CN115546569A (en) Attention mechanism-based data classification optimization method and related equipment
CN116052097A (en) Map element detection method and device, electronic equipment and storage medium
CN115564709A (en) Evaluation method and system for robustness of power algorithm model in confrontation scene
CN115097451A (en) Sea wave multi-parameter inversion method and system based on SAR satellite data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant