CN116123040A - Fan blade state detection method and system based on multi-mode data fusion


Info

Publication number
CN116123040A
CN116123040A
Authority
CN
China
Prior art keywords
fusion
mode
blade
fan blade
data
Prior art date
Legal status
Pending
Application number
CN202310073745.XA
Other languages
Chinese (zh)
Inventor
陈少雨
卜令国
王金根
王昕炜
项武
赵阳
胡友龙
贺振华
Current Assignee
Xinhua Power Development Investment Co ltd
Shandong University
Original Assignee
Xinhua Power Development Investment Co ltd
Shandong University
Priority date
Filing date
Publication date
Application filed by Xinhua Power Development Investment Co ltd, Shandong University filed Critical Xinhua Power Development Investment Co ltd
Priority to CN202310073745.XA priority Critical patent/CN116123040A/en
Publication of CN116123040A publication Critical patent/CN116123040A/en
Pending legal-status Critical Current


Classifications

    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F03MACHINES OR ENGINES FOR LIQUIDS; WIND, SPRING, OR WEIGHT MOTORS; PRODUCING MECHANICAL POWER OR A REACTIVE PROPULSIVE THRUST, NOT OTHERWISE PROVIDED FOR
    • F03DWIND MOTORS
    • F03D17/00Monitoring or testing of wind motors, e.g. diagnostics
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02EREDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E10/00Energy generation through renewable energy sources
    • Y02E10/70Wind energy
    • Y02E10/72Wind turbines with rotation axis in wind direction


Abstract

The invention provides a fan blade state detection method and system based on multi-mode data fusion, relating to the technical field of wind power generation. The method comprises the following steps: acquiring data in a plurality of modalities from a blade to be detected, extracting features from the data of each modality, and performing feature-level fusion to generate new multi-modal fusion features; inputting the multi-modal fusion features into each trained modality model to obtain the detection result of each modality model; and performing decision-level fusion on the detection results of the modality models to obtain the final fan blade state detection result. By combining feature-level fusion and decision-level fusion of multi-modal data, the method reduces the task load, reduces redundant data, enhances interpretability, improves data utilization, and improves the accuracy, convenience and real-time performance of blade detection overall.

Description

Fan blade state detection method and system based on multi-mode data fusion
Technical Field
The invention belongs to the technical field of wind power generation, and particularly relates to a fan blade state detection method and system based on multi-mode data fusion.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
At present, fan blade detection relies largely on manual inspection, pattern recognition, or artificial-intelligence prediction from single-mode blade data. Manual inspection depends on the experience of technicians; it is simple and effective but consumes enormous human resources. Pattern recognition builds a fault database and determines the faults of the blade to be detected by comparison against that database; the idea is clear, but the accuracy is insufficient. Artificial-intelligence prediction from a single mode acquires data of one modality of the blade, such as an image or sound, and predicts faults with an AI model; this is efficient and accurate, but some blade faults are not reflected in the data of a single mode (an internal crack, for example, does not appear in a surface image), so the approach cannot cover all conditions.
Therefore, a fan blade state detection method based on multi-mode data is needed to achieve more accurate, more timely and more convenient detection.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a fan blade state detection method and system based on multi-mode data fusion. By combining feature-level fusion and decision-level fusion of multi-modal data, the method reduces the task load, reduces redundant data, enhances interpretability, improves data utilization, and improves the accuracy, convenience and real-time performance of blade detection as a whole.
To achieve the above object, one or more embodiments of the present invention provide the following technical solutions:
A first aspect of the invention provides a fan blade state detection method based on multi-mode data fusion.
a fan blade state detection method based on multi-mode data fusion comprises the following steps:
acquiring data in a plurality of modalities from a blade to be detected, extracting features from the data of each modality, and performing feature-level fusion to generate new multi-modal fusion features;
inputting the multi-modal fusion features into each trained modality model to obtain the detection result of each modality model;
and performing decision-level fusion on the detection results of the modality models to obtain the final fan blade state detection result.
Further, the modal data include a blade visible-light image, a blade infrared image, blade sound and a blade vibration signal.
Further, the feature extraction of the visible light image of the blade specifically includes:
(1) Obtaining a surface picture of a blade to be detected through an unmanned aerial vehicle;
(2) Performing defogging treatment;
(3) Extracting features from the defogged picture through a CNN model with an attention mechanism to obtain a visible-light feature map A(i, j), where i and j denote pixel positions.
Further, the attention mechanism comprises channel attention and spatial attention;
the channel attention performs maximum pooling and average pooling on the feature map to obtain two vectors of the same dimension, feeds both into a shared perceptron for learning, adds the outputs element-wise, passes the sum through a sigmoid activation to obtain the channel attention vector, and multiplies this vector with the original features to obtain the feature vector under the attention mechanism;
the spatial attention performs maximum pooling and average pooling on the channel attention output, concatenates the two results and applies a convolution operation to obtain the spatial attention vector, and multiplies this vector with the channel attention output to obtain the final feature map.
Further, the feature level fusion is to fuse features of the visible light image and the infrared image of the blade, specifically:
(1) Obtaining a blade infrared image B (i, j) in an infrared imaging mode, and obtaining a gray level image C (i, j) of the blade infrared image in a gray level processing mode, wherein i and j represent pixel point positions;
(2) Setting a zero matrix D(i, j) of the same size as the gray-level image C and a threshold t; when C(i, j) is greater than t, taking the pixel value of the corresponding point of the blade infrared image, i.e., letting D(i, j) = B(i, j); when C(i, j) is less than or equal to t, taking the pixel value of the corresponding point of the visible-light image, i.e., letting D(i, j) = A(i, j).
Further, the modality model includes: a decision model based on visible light images and infrared images, a decision model based on acoustic features, and a decision model based on vibration signals.
Further, in the decision-level fusion, the detection results of the modality models are weighted and fused by a trained decision perceptron to obtain the final fan blade state detection result.
A second aspect of the invention provides a fan blade state detection system based on multi-mode data fusion.
A fan blade state detection system based on multi-mode data fusion comprises a feature generation module, a modality detection module and a decision fusion module:
a feature generation module configured to: acquire data in a plurality of modalities from a blade to be detected, extract features from the data of each modality, and perform feature-level fusion to generate new multi-modal fusion features;
a modality detection module configured to: input the multi-modal fusion features into each trained modality model to obtain the detection result of each modality model;
a decision fusion module configured to: perform decision-level fusion on the detection results of the modality models to obtain the final fan blade state detection result.
A third aspect of the present invention provides a computer readable storage medium having stored thereon a program which, when executed by a processor, implements the steps of the fan blade state detection method based on multi-mode data fusion according to the first aspect of the present invention.
A fourth aspect of the present invention provides an electronic device comprising a memory, a processor and a program stored on the memory and executable on the processor, the processor implementing, when executing the program, the steps of the fan blade state detection method based on multi-mode data fusion according to the first aspect of the present invention.
The one or more of the above technical solutions have the following beneficial effects:
according to the multi-mode data fusion blade detection method, multi-mode data of the blade are fully utilized, faults of the blade are detected in an omnibearing and accurate mode, and the detection accuracy is greatly improved; meanwhile, the traditional blade detection requires a large amount of manpower to perform field detection, and meanwhile, the requirements on technicians are high, the manpower cost is high, the detection period is long, and the maintenance workload is large; for the method, the manual on-site detection is not needed, the technical requirement is low, the required manual cost is low, the real-time detection can be realized, and the detection workload is low; therefore, the blade detection mode of the invention realizes more accurate, more timely and more convenient detection, and compared with the traditional detection method, the blade detection mode has great improvement.
Additional aspects of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
Fig. 1 is a flow chart of a method of a first embodiment.
Fig. 2 is a structure diagram of the CNN model with the CBAM attention mechanism of the first embodiment.
Fig. 3 is a structural diagram of a convolutional neural network of the first embodiment.
Fig. 4 is a flowchart of image detection according to the first embodiment.
Fig. 5 is a flowchart of sound detection according to the first embodiment.
Fig. 6 is a flow chart of vibration detection according to the first embodiment.
Fig. 7 is a structural diagram of the decision perceptron of the first embodiment.
Fig. 8 is a system configuration diagram of the second embodiment.
Detailed Description
The invention will be further described with reference to the drawings and examples.
Two fusion modes are involved in blade detection:
one is early fusion, i.e., feature-level fusion: after features are extracted from the data of different modalities, the feature vectors are fused into a single fusion feature vector combining the multi-modal data, and a fault detection model is trained on this fusion vector to judge blade faults;
the other is late fusion, i.e., decision-level fusion: features are extracted from the data of each modality, a separate fault detection model is trained for each modality, and the outputs of these models are combined by weighting to give the final detection result.
Comparing the two, feature-level fusion offers a small task load and little data redundancy, while decision-level fusion offers strong interpretability and high data utilization. Building on both, the invention provides a multi-modal data fusion scheme combining feature-level fusion and decision-level fusion, realizing intelligent early warning for the fan blade from multiple angles through data fusion and decision fusion. The physical state involves various parameters, including the blade visible-light image, the blade infrared image, blade sound and the blade vibration signal. The visible-light and infrared images of the blade are obtained by unmanned-aerial-vehicle photography and infrared imaging; blade sound is collected omnidirectionally and comprehensively by sound collectors; vibration-related data are obtained by vibration sensors. Machine learning methods such as defogging, attention mechanisms and convolutional neural networks are then combined to process the data and predict blade faults, realizing real-time intelligent monitoring of the fan blade while reducing the task load, reducing redundant data, enhancing interpretability and improving data utilization.
Example 1
The embodiment discloses a fan blade state detection method based on multi-mode data fusion;
as shown in fig. 1, a fan blade state detection method based on multi-mode data fusion includes:
step S1: acquiring a plurality of modal data of a blade to be detected, extracting characteristics of each modal data from the modal data, and performing characteristic level fusion to generate new multi-modal fusion characteristics;
the modal data comprises visible light images of the blade, infrared images of the blade, sound of the blade and vibration signals of the blade.
The processing steps of the visible light image of the blade are as follows:
(1) Obtaining a surface picture of a blade to be detected through an unmanned aerial vehicle;
(2) Defogging treatment is carried out to obtain a defogged image. Wind resources are abundant in western and coastal regions, so fans are often located in places with frequent fog, which strongly affects the subsequent processing of the acquired images; a surface-image defogging step is therefore adopted. From the principle of atmospheric scattering, the hazy imaging model consists of two parts: the first part comes from the attenuated incident light, and the second part comes from the scattering of other light sources. The mathematical expression of the hazy image is:
I(x)=J(x)t(x)+A(1-t(x)) (1)
where the first part is J(x)t(x) and the second part is A(1-t(x)); I(x) denotes the foggy image acquired by the unmanned aerial vehicle, J(x) the target defogged image, t(x) the transmissivity of the scene, and A the atmospheric light value.
As can be derived from equation (1), the mathematical expression of the target defogging image J (x) is:
J(x)=(I(x)-A(1-t(x)))/t(x) (2)
The gray value of the dark channel of a foggy image is mainly determined by the atmospheric light, so the transmissivity t and the atmospheric light value A can be estimated. Estimation of A: extract the pixel positions whose dark-channel gray values lie in the top 0.1%, locate the corresponding positions in the foggy image, and find the maximum pixel value among them; this pixel value is the atmospheric light value A.
Estimation of t: the transmissivity is refined using guided filtering, whose mathematical expression is:
q=a*I+b (3)
where I is the guide image, q is the output (the target defogged image obtained by the transformation), and a and b are parameters held fixed within a local window; the model assumes that the guide image and the output are linearly related within that window.
Denoting the actual input image by p, the values of a and b are calculated by least squares, minimizing the squared difference between the output q and the input p over each local window ω_k:

E(a_k, b_k) = Σ_{i∈ω_k} ((a_k·I_i + b_k - p_i)² + ε·a_k²)   (4)

a_k = ((1/|ω|)·Σ_{i∈ω_k} I_i·p_i - μ_k·p̄_k) / (σ_k² + ε)   (5)

b_k = p̄_k - a_k·μ_k   (6)

where μ_k and σ_k² are the mean and variance of the guide image I in the window ω_k, p̄_k is the mean of p in ω_k, |ω| is the number of pixels in the window, and ε is a regularization parameter.
By the above method, defogging can be completed rapidly and accurately.
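As an illustration only, the following is a minimal Python sketch of the dark-channel defogging described above, assuming a float RGB image in [0, 1]; the function names, the patch size and the use of scipy's minimum filter are our own choices, and the guided-filtering refinement of t in equations (3)-(6) is omitted for brevity:

    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(img, patch=15):
        # Per-pixel minimum over the RGB channels, then a local minimum filter.
        return minimum_filter(img.min(axis=2), size=patch)

    def estimate_atmospheric_light(img, dark, top_frac=0.001):
        # Brightest pixels among the top 0.1% of dark-channel values give A.
        n = max(1, int(dark.size * top_frac))
        idx = np.argsort(dark.ravel())[-n:]
        return img.reshape(-1, 3)[idx].max(axis=0)

    def dehaze(img, omega=0.95, t0=0.1, patch=15):
        # Inverts I = J*t + A*(1 - t) with a coarse transmission estimate.
        dark = dark_channel(img, patch)
        A = estimate_atmospheric_light(img, dark)
        t = 1.0 - omega * dark_channel(img / A, patch)
        t = np.clip(t, t0, 1.0)[..., None]
        return np.clip((img - A) / t + A, 0.0, 1.0)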
(3) Features are extracted from the defogged image.
This embodiment adopts a feature extraction scheme with an attention mechanism. In actual detection, the main damage modes of the fan blade are cracks and corrosion, so the pictures obtained by the unmanned aerial vehicle are classified as crack, corrosion or no defect. In the blade's environment, complex backgrounds, uneven illumination and small defect proportions frequently occur, and a conventional convolutional neural network often struggles to extract the key features.
To address these problems, the CBAM attention mechanism is used for image feature extraction; the specific model structure is shown in fig. 2.
The CNN is composed of an input layer, convolution layers, pooling layers, a fully connected layer and an output layer; the projection of the input onto the features is obtained by multiplying the input with the convolution kernels, and repeated convolution and pooling computations finally yield the output image feature vector. When processing the fan blade images, an attention mechanism is introduced to handle the uneven illumination, complex background and small defect proportion and to extract the features accurately.
The CBAM attention mechanism is a low-cost way to introduce attention: it improves feature quality with almost no additional computational cost and comprises channel attention and spatial attention.
Channel attention: maximum pooling and average pooling are first applied to the feature map to obtain two vectors of the same dimension; both are fed into a shared perceptron for learning, the outputs are added element-wise and passed through a sigmoid activation to obtain the channel attention vector, which is multiplied with the original features to obtain the feature vector under the attention mechanism. The feature map here is the matrix representation of the defogged image. Channel attention is one part of the attention mechanism: a special operation layer comprising convolution, pooling, fully connected operations and attention fusion, equivalent to adding a dedicated layer to the CNN.
Spatial attention: maximum pooling and average pooling are applied to the channel attention output, the two results are concatenated and a convolution operation is applied to obtain the spatial attention vector, which is multiplied with the channel attention output to obtain the spatially enhanced feature map A(i, j), where i and j denote pixel positions.
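To make the two sub-modules concrete, the following is a sketch of a CBAM block in PyTorch. The patent does not fix layer sizes, so the reduction ratio of 16 and the 7x7 spatial convolution are assumptions taken from the original CBAM paper's defaults:

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            # Shared perceptron applied to both the max- and average-pooled vectors.
            self.mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
            )

        def forward(self, x):                          # x: (B, C, H, W)
            b, c, _, _ = x.shape
            mx = self.mlp(x.amax(dim=(2, 3)))          # spatial max pooling
            av = self.mlp(x.mean(dim=(2, 3)))          # spatial average pooling
            w = torch.sigmoid(mx + av).view(b, c, 1, 1)
            return x * w                               # reweight the channels

    class SpatialAttention(nn.Module):
        def __init__(self, kernel_size=7):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

        def forward(self, x):
            mx = x.amax(dim=1, keepdim=True)           # max over channels
            av = x.mean(dim=1, keepdim=True)           # average over channels
            w = torch.sigmoid(self.conv(torch.cat([mx, av], dim=1)))
            return x * w                               # reweight spatial positions

    class CBAM(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.ca = ChannelAttention(channels)
            self.sa = SpatialAttention()

        def forward(self, x):
            return self.sa(self.ca(x))                 # channel first, then spatial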
Because the unmanned aerial vehicle can only obtain visible-light images of the blade surface and cannot judge internal faults, while infrared images can reveal internal faults, and the blade surface feature map and the infrared feature map carry highly redundant information, the feature maps of the blade visible-light image and the infrared image are fused at the feature level to obtain both internal and external fault features while reducing information redundancy. Specifically:
(1) The blade infrared image B(i, j) is obtained by infrared imaging, and the gray-level image C(i, j) of the infrared image is obtained by gray-scale processing, where i and j denote pixel positions.
(2) A zero matrix D(i, j) of the same size as the gray-level image C is created and a threshold t is set; when C(i, j) is greater than t, the pixel value of the corresponding point of the infrared image is taken, i.e., D(i, j) = B(i, j); when C(i, j) is less than or equal to t, the pixel value of the corresponding point of the visible-light map is taken, i.e., D(i, j) = A(i, j). Expressed as a formula:
D(i, j) = B(i, j) if C(i, j) > t;  D(i, j) = A(i, j) if C(i, j) ≤ t   (7)
This yields the image features D(i, j) fusing the visible-light image and the infrared image. Compared with the visible-light image alone, the fused image can reveal internal blade anomalies; compared with the infrared image alone, it is clearer, can handle tiny faults, and locates faults more accurately.
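The threshold fusion of equation (7) amounts to a masked selection between the two images. A minimal NumPy sketch, assuming A and B are aligned single-channel arrays of equal size and using an arbitrary example threshold (the patent does not specify a value for t):

    import numpy as np

    def fuse_visible_infrared(A, B, t=128):
        # A: visible-light feature map, B: infrared image (equal-shaped arrays).
        # C is the gray-level version of B; with B already grayscale, C = B.
        C = B.astype(np.float32)
        D = np.zeros_like(C)        # the zero matrix D(i, j)
        hot = C > t                 # points where the infrared response exceeds t
        D[hot] = B[hot]             # take the infrared pixel value
        D[~hot] = A[~hot]           # otherwise keep the visible-light pixel value
        return D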
Regarding the processing of blade sounds, specifically:
(1) Obtain the relevant sound data of the fan blade: the sound field of the fan is acquired by an acoustic array sensor; arranging multiple sound sensors realizes omnidirectional, comprehensive collection of the fan sound, reduces the influence of noise on local sound collection, and ensures the completeness of the collected information.
(2) Preprocess the obtained sound signals to remove noise: experiments show that, compared with the acquired fan sound signals, the signals acquired in an environment without fan sound are low-frequency, so the noise is removed by high-pass filtering.
(3) For feature extraction from the acoustic signals, the MFCC method is adopted: the voicebox toolbox is first added to MATLAB, and the subsequent operations such as reading, pre-emphasis, framing, windowing and Fourier transformation are then implemented with the relevant tools in the voicebox toolbox, realizing feature extraction of the sound.
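The patent implements this step with MATLAB's voicebox toolbox. As a functionally similar sketch (a substitution, not the patent's implementation), the high-pass filtering and MFCC extraction could be done in Python with scipy and librosa; the fourth-order filter and the 200 Hz cutoff below are assumed values:

    import numpy as np
    import librosa
    from scipy.signal import butter, sosfiltfilt

    def acoustic_features(y, sr, cutoff_hz=200.0, n_mfcc=13):
        # High-pass filter to suppress low-frequency ambient noise.
        sos = butter(4, cutoff_hz, btype="highpass", fs=sr, output="sos")
        y_hp = sosfiltfilt(sos, y)
        # MFCC extraction (framing, windowing and FFT handled internally by librosa).
        return librosa.feature.mfcc(y=np.asarray(y_hp, dtype=np.float32),
                                    sr=sr, n_mfcc=n_mfcc)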
Regarding the processing of the blade vibration signal, specifically:
(1) Obtaining relevant data of blade vibration through a vibration sensor;
(2) Because the vibration of the fan blade has natural frequencies and a relatively stable spectrum, Fourier-transform filtering is used: the frequency-domain distribution of the vibration signal is obtained by Fourier transform, the noise bands are removed, and the denoised vibration signal is taken as the blade vibration signal feature.
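A minimal sketch of the Fourier-transform denoising, assuming the pass band around the blade's natural frequencies is known in advance; the 0.5-50 Hz band below is a placeholder, not a value from the patent:

    import numpy as np

    def denoise_vibration(x, fs, band=(0.5, 50.0)):
        # Zero out all spectral components outside the assumed pass band,
        # then invert the FFT to recover the denoised time-domain signal.
        x = np.asarray(x, dtype=np.float64)
        X = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        keep = (freqs >= band[0]) & (freqs <= band[1])
        X[~keep] = 0.0
        return np.fft.irfft(X, n=len(x))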
Step S2: input the multi-modal fusion features into each trained modality model to obtain the detection result of each modality model.
the modal models include an image-based fault detection model, an acoustic-based fault detection model, and a vibration signal-based fault detection model.
The image-based fault detection model is built on the convolutional neural network shown in fig. 3 and trained on the image features extracted from the training-set samples; the image features of the blade to be detected are input into the trained model to obtain the image detection result, with the flow shown in fig. 4.
The acoustic fault detection model is built on the convolutional neural network shown in fig. 3 and trained on the acoustic features extracted from the training-set samples; the acoustic features of the blade to be detected are input into the trained model to obtain the acoustic detection result, with the flow shown in fig. 5.
The vibration-signal-based fault detection model is built on the convolutional neural network shown in fig. 3: using the denoised vibration signals of the training-set samples, blade vibration signals in the normal state are taken as positive samples and blade vibration signals in the fault state as negative samples, and the neural network is trained to obtain the model. The blade vibration signal to be detected undergoes Fourier-transform filtering, the denoised signal obtained after filtering is input into the vibration-signal-based fault detection model, and the output is the vibration signal detection result, with the flow shown in fig. 6.
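Fig. 3 is described only as input, convolution, pooling, fully connected and output layers, so the following PyTorch sketch of a per-modality fault classifier fixes the layer counts and widths arbitrarily; it is an illustration of that structure, not the patent's exact network:

    import torch.nn as nn

    class FaultCNN(nn.Module):
        # Input -> conv/pool stages -> fully connected -> class scores
        # (e.g., crack / corrosion / no defect for the image branch).
        def __init__(self, in_channels=1, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(4),
            )
            self.classifier = nn.Linear(32 * 4 * 4, n_classes)

        def forward(self, x):                    # x: (B, in_channels, H, W)
            return self.classifier(self.features(x).flatten(1))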
Step S3: perform decision-level fusion on the detection results of the modality models to obtain the final fan blade state detection result. Specifically:
a decision perceptron, whose structure is shown in fig. 7, is constructed and trained on the image detection results, acoustic detection results and vibration signal detection results of the training-set samples, yielding the trained decision perceptron;
based on the trained decision perceptron, the image detection result, acoustic detection result and vibration signal detection result of the blade to be detected are weighted and fused to obtain the final fan blade state detection result.
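As a sketch, the decision perceptron can be viewed as a learned weighting over the concatenated per-modality scores. The single linear layer and the two blade states (normal/fault) below are assumptions, since fig. 7 is not reproduced here:

    import torch
    import torch.nn as nn

    class DecisionPerceptron(nn.Module):
        # Weighted fusion of the three modality detection results; the learned
        # weights of the linear layer play the role of the fusion coefficients.
        def __init__(self, n_models=3, n_states=2):
            super().__init__()
            self.fuse = nn.Linear(n_models * n_states, n_states)

        def forward(self, image_out, acoustic_out, vibration_out):
            # Each input: (batch, n_states) scores from one modality model.
            z = torch.cat([image_out, acoustic_out, vibration_out], dim=1)
            return self.fuse(z)                  # final blade-state scores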
Example 2
The embodiment discloses a fan blade state detection system based on multi-mode data fusion;
As shown in fig. 8, a fan blade state detection system based on multi-mode data fusion includes a feature generation module, a modality detection module and a decision fusion module:
a feature generation module configured to: acquire data in a plurality of modalities from a blade to be detected, extract features from the data of each modality, and perform feature-level fusion to generate new multi-modal fusion features;
a modality detection module configured to: input the multi-modal fusion features into each trained modality model to obtain the detection result of each modality model;
a decision fusion module configured to: perform decision-level fusion on the detection results of the modality models to obtain the final fan blade state detection result.
Example 3
An object of the present embodiment is to provide a computer-readable storage medium.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps in the fan blade state detection method based on multi-mode data fusion described in the first embodiment of the present disclosure.
Example 4
An object of the present embodiment is to provide an electronic apparatus.
The electronic device comprises a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps in the fan blade state detection method based on multi-mode data fusion described in the first embodiment of the present disclosure.
The above description covers only the preferred embodiments of the present invention and is not intended to limit the present invention; various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A fan blade state detection method based on multi-mode data fusion, characterized by comprising the following steps:
acquiring data in a plurality of modalities from a blade to be detected, extracting features from the data of each modality, and performing feature-level fusion to generate new multi-modal fusion features;
inputting the multi-modal fusion features into each trained modality model to obtain the detection result of each modality model;
and performing decision-level fusion on the detection results of the modality models to obtain the final fan blade state detection result.
2. The fan blade state detection method based on multi-mode data fusion according to claim 1, wherein the modal data comprise a blade visible light image, a blade infrared image, blade sound and a blade vibration signal.
3. The fan blade state detection method based on multi-mode data fusion according to claim 2, wherein the feature extraction of the visible light image of the blade is specifically as follows:
(1) Obtaining a surface picture of a blade to be detected through an unmanned aerial vehicle;
(2) Performing defogging treatment;
(3) Extracting features from the defogged picture through a CNN model with an attention mechanism to obtain a visible-light feature map A(i, j), where i and j denote pixel positions.
4. The fan blade state detection method based on multi-mode data fusion according to claim 3, wherein the attention mechanism comprises channel attention and spatial attention;
the channel attention performs maximum pooling and average pooling on the feature map to obtain two vectors of the same dimension, feeds both into a shared perceptron for learning, adds the outputs element-wise, passes the sum through a sigmoid activation to obtain the channel attention vector, and multiplies this vector with the original features to obtain the feature vector under the attention mechanism;
the spatial attention performs maximum pooling and average pooling on the channel attention output, concatenates the two results and applies a convolution operation to obtain the spatial attention vector, and multiplies this vector with the channel attention output to obtain the final feature map.
5. The method for detecting the state of the fan blade based on multi-mode data fusion according to claim 3, wherein the feature level fusion is to fuse features of a visible light image of the blade and an infrared image of the blade, specifically:
(1) Obtaining a blade infrared image B (i, j) in an infrared imaging mode, and obtaining a gray level image C (i, j) of the blade infrared image in a gray level processing mode, wherein i and j represent pixel point positions;
(2) Setting a zero matrix D(i, j) of the same size as the gray-level image C and a threshold t; when C(i, j) is greater than t, taking the pixel value of the corresponding point of the blade infrared image, i.e., letting D(i, j) = B(i, j); when C(i, j) is less than or equal to t, taking the pixel value of the corresponding point of the visible-light image, i.e., letting D(i, j) = A(i, j).
6. The fan blade state detection method based on multi-modal data fusion of claim 1, wherein the modal model comprises: an image-based fault detection model, an acoustic-based fault detection model, and a vibration signal-based fault detection model.
7. The fan blade state detection method based on multi-mode data fusion according to claim 1, wherein in the decision-level fusion, the detection results of the modality models are weighted and fused by a trained decision perceptron to obtain the final fan blade state detection result.
8. A fan blade state detection system based on multi-mode data fusion, characterized by comprising a feature generation module, a modality detection module and a decision fusion module:
a feature generation module configured to: acquire data in a plurality of modalities from a blade to be detected, extract features from the data of each modality, and perform feature-level fusion to generate new multi-modal fusion features;
a modality detection module configured to: input the multi-modal fusion features into each trained modality model to obtain the detection result of each modality model;
a decision fusion module configured to: perform decision-level fusion on the detection results of the modality models to obtain the final fan blade state detection result.
9. A computer readable storage medium having a program stored thereon, wherein the program, when executed by a processor, implements the steps of the fan blade state detection method based on multi-mode data fusion according to any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the fan blade state detection method based on multi-mode data fusion according to any one of claims 1 to 7.
CN202310073745.XA 2023-01-30 2023-01-30 Fan blade state detection method and system based on multi-mode data fusion Pending CN116123040A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310073745.XA CN116123040A (en) 2023-01-30 2023-01-30 Fan blade state detection method and system based on multi-mode data fusion


Publications (1)

Publication Number Publication Date
CN116123040A true CN116123040A (en) 2023-05-16

Family

ID=86302453

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310073745.XA Pending CN116123040A (en) 2023-01-30 2023-01-30 Fan blade state detection method and system based on multi-mode data fusion

Country Status (1)

Country Link
CN (1) CN116123040A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination