CN114062511A - Single-sensor-based intelligent acoustic emission identification method for early damage of aircraft engine - Google Patents

Single-sensor-based intelligent acoustic emission identification method for early damage of aircraft engine

Info

Publication number
CN114062511A
CN114062511A
Authority
CN
China
Prior art keywords
neural network
acoustic emission
convolutional neural
image
deep convolutional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111237068.8A
Other languages
Chinese (zh)
Inventor
杨国安
韩聪
刘曈
金宇澄
王硕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Chemical Technology
Original Assignee
Beijing University of Chemical Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Chemical Technology
Priority to CN202111237068.8A
Publication of CN114062511A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00 - Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/14 - Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves, using acoustic emission techniques
    • G01N29/44 - Processing the detected response signal, e.g. electronic circuits specially adapted therefor
    • G01N29/4472 - Mathematical theories or simulation
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06N3/08 - Learning methods
    • G01N2291/00 - Indexing codes associated with group G01N29/00
    • G01N2291/02 - Indexing codes associated with the analysed material
    • G01N2291/028 - Material parameters
    • G01N2291/0289 - Internal structure, e.g. defects, grain size, texture

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Analytical Chemistry (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Pathology (AREA)
  • Immunology (AREA)
  • Biochemistry (AREA)
  • Evolutionary Computation (AREA)
  • Chemical & Material Sciences (AREA)
  • Signal Processing (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Algebra (AREA)
  • Acoustics & Sound (AREA)
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)

Abstract

An intelligent acoustic emission identification method for early damage of an aircraft engine based on a single sensor belongs to the field of intelligent identification of early damage of aircraft engines. First, a single sensor is used to acquire the raw acoustic emission signals of the aircraft engine under early damage. Second, time-frequency analysis converts the signals into two-dimensional time-frequency characteristic images. A deep convolutional neural network is then built, and the acoustic emission characteristic images are preprocessed according to its input requirements. The network learns from the preprocessed images to obtain a trained model mapping acoustic emission characteristic images to damage types. Finally, the test-set signals are input into the trained deep convolutional neural network model to obtain intelligent identification results for early damage of the aircraft engine. The invention improves the efficiency of acoustic emission diagnosis and is of great significance for pursuing a lightweight design with few measuring points, ensuring normal and reliable engine operation, and reducing the occurrence of major accidents.

Description

Single-sensor-based intelligent acoustic emission identification method for early damage of aircraft engine
Technical Field
The invention belongs to the field of intelligent recognition of early damage acoustic emission of an aero-engine, and particularly relates to an intelligent recognition method of early damage acoustic emission of an aero-engine based on a single sensor.
Background
The performance of an aircraft engine, the "heart" of the aircraft, determines the speed, maneuverability, reliability and economy of the aircraft. Owing to its complex structure and long-term operation in a harsh environment, an aircraft engine inevitably develops various faults. If fault information such as the type, extent and location of damage is not sensed and evaluated early, catastrophic failure of the aircraft may result, leading to serious casualties. Improving the engine's fault monitoring and early-warning capability is therefore of great practical significance for ensuring safe and reliable aircraft operation.
In the early stages of aircraft engine failure (e.g., cracks, impacts, scuffing, structural deformation), the damage characteristics are weak and the engine does not exhibit significant abnormal behavior. At this stage, conventional condition monitoring methods (including gas path analysis, lubricating oil analysis, borescope inspection and vibration monitoring) cannot detect early damage through real-time online monitoring. By contrast, online structural health monitoring of aircraft engines based on acoustic emission has significant advantages in addressing such problems. When a material undergoes irreversible changes such as plastic deformation or crack formation, strain energy is released rapidly as transient elastic waves; this phenomenon is called acoustic emission. Compared with other nondestructive testing methods, the acoustic emission technique is dynamic, real-time and holistic; it is widely applied to pressure vessels, composite materials, offshore platforms and aerospace, and enables dynamic monitoring and integrity evaluation of early weak faults in materials and structures.
However, the acoustic emission signals emitted from fault sources are mostly multi-modal, multi-frequency and strongly non-stationary, which undoubtedly increases the difficulty of signal processing and feature extraction. Meanwhile, owing to the complex structure and compact space of the aircraft engine, the acoustic emission sensor cannot be installed inside the engine close to a fault source; only the cooler outer surface of the engine is suitable for mounting sensors. Consequently, besides basic attenuation mechanisms such as diffusion, scattering and viscous damping, distortion phenomena such as mode conversion and waveform aliasing occur as the signal propagates, complicating the mapping between the fault and the received signal and increasing the difficulty of damage identification. A common way to overcome the complex propagation characteristics of acoustic emission signals in real structures is to increase the number of sensors. This approach, however, significantly increases the complexity and deployment cost of the acoustic emission system, and there is insufficient space on an aircraft engine to deploy multiple sensors. Realizing acoustic-emission-based intelligent identification of early aircraft engine damage with few sensors is therefore of great practical significance.
Disclosure of Invention
To solve the above problems, the invention provides a single-sensor-based intelligent recognition method for early damage acoustic emission of an aircraft engine. First, a single sensor is used to acquire the raw acoustic emission signals of the aircraft engine under early damage. Second, the acoustic emission signals captured by the sensor are converted into two-dimensional time-frequency feature images using time-frequency analysis. Next, a deep convolutional neural network for intelligent identification of early damage by single-sensor acoustic emission is constructed, and the aeroengine acoustic emission characteristic images are preprocessed to meet the network's input requirements. The initialized deep convolutional neural network then learns from the preprocessed images to obtain a trained model mapping aeroengine acoustic emission characteristic images to damage categories. Finally, the test-set signals are input into the trained deep convolutional neural network model to obtain intelligent identification results for early damage of the aircraft engine. With this technical scheme, single-sensor-based intelligent identification of early damage acoustic emission of the aeroengine can be realized, the efficiency of acoustic emission diagnosis can be improved, and the influence of subjective human factors is reduced; this is of great significance for pursuing a lightweight design with few measuring points, ensuring normal and reliable engine operation, formulating scientific and reasonable equipment maintenance plans, and reducing the occurrence of major accidents.
The problem to be solved is real-time online monitoring and intelligent identification of early damage of the aeroengine with few measuring points.
An early damage acoustic emission intelligent identification method of an aircraft engine based on a single sensor is characterized by comprising the following steps:
step 100: acquiring acoustic emission original signals of the aircraft engine under different early damages by using a single sensor;
step 200: converting the acoustic emission original signal captured by the sensor into a two-dimensional time-frequency characteristic image by using a time-frequency analysis technology of continuous wavelet transform to form a data sample set;
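For illustration, the signal-to-image conversion of step 200 can be sketched as follows. This is a minimal sketch assuming the PyWavelets and Matplotlib libraries; the Morlet wavelet and the scale range are illustrative assumptions, not values fixed by the invention.

```python
# Minimal sketch of step 200: continuous wavelet transform of a 1-D acoustic
# emission signal into a 2-D time-frequency image. The 'morl' wavelet and the
# 1..127 scale range are assumptions for illustration.
import numpy as np
import pywt
import matplotlib.pyplot as plt

def signal_to_scalogram(signal, fs, out_path):
    scales = np.arange(1, 128)                        # assumed scale range
    coeffs, _ = pywt.cwt(signal, scales, 'morl', sampling_period=1.0 / fs)
    plt.figure(figsize=(2, 2), dpi=100)               # roughly 200 x 200 pixels
    plt.imshow(np.abs(coeffs), aspect='auto', cmap='jet')
    plt.axis('off')
    plt.savefig(out_path, bbox_inches='tight', pad_inches=0)
    plt.close()
```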
step 300: constructing a training sample set, a verification sample set and a test sample set. Specifically, all images in the initial sample set are divided into a training set, a verification set and a test set in the ratio 6:2:2, and the data in the data sample set are all labeled image samples; the class labels are one-hot encoded so that distances between label vectors are computed sensibly.
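The 6:2:2 split with one-hot labels can be sketched as below, assuming scikit-learn and the Keras to_categorical utility; all names are illustrative.

```python
# Minimal sketch of step 300: 6:2:2 split of labeled images with one-hot labels.
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical

def split_dataset(images, labels, num_classes):
    y = to_categorical(labels, num_classes)             # one-hot encode the labels
    x_train, x_rest, y_train, y_rest = train_test_split(
        images, y, test_size=0.4, stratify=labels, random_state=0)
    x_val, x_test, y_val, y_test = train_test_split(    # split the held-out 40% in half
        x_rest, y_rest, test_size=0.5, random_state=0)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)
```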
Step 401: using 1 input layer as the start of the deep convolutional neural network for receiving the preprocessed aeroengine acoustic emission feature image. The input image size of the deep convolutional neural network is 200 × 200 × 3, that is, the width is 200 pixels and the height is 200 pixels, and each pixel has three values.
Step 402: and the convolution structure in the deep convolution neural network is used for carrying out feature extraction on the aeroengine acoustic emission feature image so as to obtain image features of different early damages. The convolution structure includes 6 convolution operations, each including 1 convolution layer with a Relu activation function and 1 pooling layer. In the convolution structure, the first convolution layer uses a 5 × 5 kernel with a step size of 1 to extract a learnable pattern quickly. And the rest convolution layers adopt 3 multiplied by 3 kernels to learn signal characteristics through multilayer nonlinear mapping, and the convolution step length is 1. The kernel size and step size of the pooling layer are 2 x 2 and 2, respectively. The 6 convolution operations may reduce the input image size by a factor of 64.
In the convolutional layer, an input image is convolved using a convolution kernel. By adding bias terms and using activation functions, a series of feature maps can be generated. The convolution operation formula is as follows:
$$x_k^{(l,m)} = f\left(\mathbf{w}^{(l,m)} \cdot \mathbf{p}_k^{(l-1,m)} + b^{(l,m)}\right)$$

where $x_k^{(l,m)}$ is the value of the k-th neuron in the m-th frame of layer l, $\mathbf{p}_k^{(l-1,m)}$ is the k-th convolution region in the m-th frame feature map of layer l-1 and $p_{k,i}^{(l-1,m)}$ are its elements, $\mathbf{w}^{(l,m)}$ is the convolution kernel, and $b^{(l,m)}$ is the bias term. The symbol $\cdot$ represents the scalar product between the local region and the convolution kernel. $f(\cdot)$ is the activation function; the mathematical expression of the ReLU activation function is:

$$f(x) = \max(0, x)$$
the convolutional layer is followed by a pooling layer, and the pooling mode is maximum pooling, namely, the maximum value is selected as output in a pooling area. The formula of the maximum pooling is as follows:
$$x_j^{(l+1,m)} = \max_{(j-1)n < k \le jn} x_k^{(l,m)}$$

where $x_k^{(l,m)}$ is the value of the k-th neuron in the m-th frame of layer l, n is the width of the pooling region, and $x_j^{(l+1,m)}$ is the value of the j-th neuron in the m-th frame of layer l+1.
Step 403: the connection structure comprises a tiling operation, a full connection operation and a discarding operation, and specifically comprises 1 tiling layer, 1 full connection layer and 1 discarding layer with the proportion of 0.25. The connection structure first tiles the features after the convolution structure, and then connects each neuron with the neuron of the previous layer by using the full connection layer. Then, a 0.25 scale discard layer was used to avoid the overfitting problem of model training.
Step 404: and the output layer selects a softmax layer to realize early damage classification and output a prediction result. For the multi-classification task, the output model used is softmax, which can be expressed as:
Figure BDA0003318040850000043
in the formula, hjIndicates the classification result, VjFor the input value of the function, i.e., the output of the upper network, M represents the number of classes. Softmax outputs the probability that the input image finally belongs to a certain category, the greater the probability, the greater the likelihood. The loss function of the deep convolutional neural network model is defined as the cross entropy between the real value and the model prediction, which is often used to judge the difference degree between the predicted value and the actual value, and can be expressed as:
Figure BDA0003318040850000044
wherein N represents N samples in a batch, and M represents the number of categories; y isicIndicating an indicator variable (0 or 1), which is 1 if the category is the same as that of the sample i, and is 0 otherwise; p is a radical oficIs to observeAnd measuring the prediction probability that the sample i belongs to the class c.
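Putting steps 401 to 404 together, the network can be sketched in Keras as below. The input size, kernel sizes, strides, pooling, dropout rate and softmax output follow the text; the filter counts and the number of fully connected units are illustrative assumptions, since the publication does not state them.

```python
# Minimal Keras sketch of the network of steps 401-404; filter counts and the
# 128 dense units are assumptions, the remaining hyperparameters follow the text.
from tensorflow.keras import layers, models

def build_model(num_classes=15):
    model = models.Sequential()
    model.add(layers.Input(shape=(200, 200, 3)))                 # step 401: input layer
    model.add(layers.Conv2D(16, 5, strides=1, padding='same',
                            activation='relu'))                  # first layer: 5 x 5 kernel
    model.add(layers.MaxPooling2D(pool_size=2, strides=2))
    for filters in (32, 32, 64, 64, 128):                        # five more 3 x 3 conv blocks
        model.add(layers.Conv2D(filters, 3, strides=1, padding='same',
                                activation='relu'))
        model.add(layers.MaxPooling2D(pool_size=2, strides=2))
    model.add(layers.Flatten())                                  # step 403: tiling (flatten)
    model.add(layers.Dense(128, activation='relu'))              # fully connected layer
    model.add(layers.Dropout(0.25))                              # dropout layer, rate 0.25
    model.add(layers.Dense(num_classes, activation='softmax'))   # step 404: softmax output
    return model
```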
Step 500: initializing the deep convolutional neural network. Setting an initial learning rate to be 0.001, an initial momentum to be 0.9, a batch to be 20, a weight attenuation factor to be 0.000001, a maximum iteration number to be 500 and a callback value to be 10, namely stopping training and storing a model with the highest precision if the verification precision of the model in 10 rounds of circulation is not improved any more; optimizing the learning rate by combining a momentum learning method and a small-batch random gradient descent method; the loss function of the deep convolutional neural network model is defined as the cross entropy between the true value and the model prediction.
Step 601: and preprocessing the aeroengine acoustic emission characteristic image training set and the verification set. Specifically, the sizes of the images in the training set and the verification set are scaled according to the input requirements of the deep convolutional neural network, and then the scaled images are normalized to scale all the pixel values of the scaled images to the interval [0,1 ]. Specifically, the image sizes in the training set and the verification set are scaled to 200 × 200, and then the image pixel values are adjusted by using a formula a '═ a/255, where a is a pixel value of each point in the image, i.e., a pixel value of each point before normalization, and a' is a pixel value of each point in the processed image, i.e., a pixel value of each point after normalization.
Step 602: and inputting the preprocessed training set and verification set and the damage category labels to which the images belong into the initialized deep convolutional neural network as input for training and verification. Particularly, the training set is divided into a plurality of batches, and each batch comprises B1(ii) a preprocessed acoustic emission characteristic image of the aircraft engine, B1Is a positive integer greater than or equal to 1; repeating the training step on the initialized deep convolutional neural network to traverse the whole training set; dividing the verification set into a plurality of batches, each batch containing B2(ii) a preprocessed acoustic emission characteristic image of the aircraft engine, B2Is a positive integer greater than or equal to 1; and verifying the deep convolutional neural network trained each time. Computing the depth volume according to a cross-entropy loss function using a gradient backpropagation algorithm in the training of each batchThe gradient of each weight change in the neural network is integrated, and an optimization method is used for adjusting the value of each weight in the deep convolutional neural network. For the callback values and maximum number of iterations, the model will follow the settings that are satisfied first. And repeatedly and iteratively operating to complete model training to obtain the deep convolutional neural network model after training is completed.
Step 701: preprocessing the aeroengine acoustic emission characteristic image test set to obtain a preprocessed image meeting the input requirement of the deep convolutional neural network; specifically, the size of the image in the test set is scaled according to the input requirement of the deep convolutional neural network, and then the scaled image is normalized to scale all pixel values of the scaled image to an interval [0,1 ]; specifically, the image size in the test set is first scaled to 200 × 200, and then the image pixel values are adjusted by using the formula a '═ a/255, where a is the pixel value of each point in the image, and a' is the pixel value of each point in the processed image.
Step 702: and inputting the preprocessed test set and the damage category labels to which the images belong into the trained deep convolutional neural network together as input for testing to obtain the predicted damage category of the acoustic emission characteristic image of the aircraft engine.
Step 703: calculating the test accuracy, the Euclidean distance of the test precision and the Kappa coefficient evaluation index of the acoustic emission characteristic image of the aero-engine according to the real damage category and the predicted damage category of the acoustic emission characteristic image of the aero-engine; judging whether the evaluation indexes of all the aeroengine acoustic emission characteristic images in the test set meet the requirement of a preset threshold value, namely the test accuracy reaches over 90.00%, the Euclidean distance of the test precision does not exceed 1.0000, and the Kappa coefficient is not lower than 0.9000; if not, executing steps 401-702, adjusting the deep convolutional neural network structure, reinitializing the deep convolutional neural network, and performing training, verification and testing; if the structure and parameters of the deep convolutional neural network model still cannot reach the preset prediction evaluation standard on the test set by adjusting, acquiring more training data on the basis of the original training set, namely acquiring early damage acoustic emission signals of a plurality of aeroengines, and then executing the steps 200-702; if the acoustic emission is judged to be in accordance with the acoustic emission threshold value, the trained deep convolutional neural network model can realize intelligent identification of early damage of the aircraft engine based on single-sensor acoustic emission.
In the method described above, the evaluation indices specifically include: the test accuracy, the Euclidean distance of the test precision, and the Kappa coefficient. The test accuracy is the arithmetic mean of the per-class prediction accuracy of the deep convolutional neural network model:

$$S_m = \frac{1}{M}\sum_{c=1}^{M} S_t^{(c)}, \qquad S_t^{(c)} = \frac{n_{c,c}}{n_{c,t}}$$

where $S_m$ is the arithmetic mean of the model's per-class test accuracy, $S_t^{(c)}$ is the model's test accuracy on the c-th class, M is the total number of damage categories, $n_{c,c}$ is the number of correctly predicted samples in the c-th class, and $n_{c,t}$ is the total number of samples in the c-th class. Generally, the higher the test accuracy, the better the deep convolutional neural network model performs.
The Euclidean distance of the test precision reflects the true gap between the test precision of the deep convolutional neural network and that of an ideal model:

$$D_E = \sqrt{\sum_{c=1}^{M}\left(P_r - S_t^{(c)}\right)^2}$$

where $D_E$ is the Euclidean distance of the model's test precision and $P_r$ is the test precision of the ideal model (a model with 100% test accuracy), taken as 1. The Euclidean distance intuitively shows the true distance between two points in space. The smaller the value, the smaller the gap between the deep convolutional neural network model and the ideal model, the greater their similarity, and the better the model performs.
The Kappa coefficient is a statistical measure of classification consistency:

$$K = \frac{n\sum_{c=1}^{M} n_{c,c} - \sum_{c=1}^{M} a_c b_c}{n^2 - \sum_{c=1}^{M} a_c b_c}$$

where K is the Kappa coefficient of the deep convolutional neural network model, $a_c$ is the number of true samples of the c-th class, $b_c$ is the number of samples predicted as the c-th class, and n is the total number of samples of the deep convolutional neural network model. The higher the coefficient, the higher the classification accuracy of the model. In practice, the value of Kappa generally lies in [0, 1], which can be divided into five consistency levels: 0.0 to 0.20, 0.21 to 0.40, 0.41 to 0.60, 0.61 to 0.80 and 0.81 to 1.00.
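The three evaluation indices can be computed from the test-set confusion matrix; a minimal sketch assuming NumPy and scikit-learn, with y_true and y_pred as integer class labels:

```python
# Minimal sketch of the evaluation indices: mean per-class test accuracy,
# Euclidean distance to the ideal model (P_r = 1), and the Kappa coefficient.
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def evaluate(y_true, y_pred, num_classes=15):
    cm = confusion_matrix(y_true, y_pred, labels=range(num_classes))
    per_class = np.diag(cm) / cm.sum(axis=1)               # n_cc / n_ct per class
    test_accuracy = per_class.mean()                       # arithmetic mean over classes
    euclidean = np.sqrt(np.sum((1.0 - per_class) ** 2))    # distance to the ideal model
    kappa = cohen_kappa_score(y_true, y_pred)              # statistical consistency
    return test_accuracy, euclidean, kappa
```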
The invention provides a single-sensor-based intelligent recognition method for early damage acoustic emission of an aircraft engine, which has the following advantages:
1. the method for intelligently identifying the early damage of the aero-engine based on the acoustic emission overcomes the limitation of the traditional fault diagnosis technology of the aero-engine, and has outstanding advantages in real-time online monitoring and intelligent identification of the early damage of the aero-engine.
2. The acoustic emission signals of the aircraft engine under early damage are picked up by a single sensor, and time-frequency analysis comprehensively reflects the characteristic information of the non-stationary acoustic emission signals in the time domain, frequency domain and joint time-frequency domain, enabling lightweight health monitoring of the aircraft engine with few measuring points.
3. A deep convolutional neural network realizes intelligent identification of early damage of the aeroengine, effectively improving efficiency and accuracy over existing manual identification methods; it can assist technicians in finding early damage that is difficult to spot or is often overlooked manually, and reduces the influence of subjective human factors in the damage identification process.
Drawings
FIG. 1 is a flow chart of the technical scheme of the invention.
FIG. 2 shows the training process curve of the embodiment.
Detailed Description
For a better understanding of the present invention, reference is made to the following detailed description taken in conjunction with the accompanying drawings and technical solutions.
It should be noted that the early damage below includes pencil-lead breaks, active excitation and impacts at different positions of the aircraft engine simulation test bench, for 15 different damage types in total. Specifically, lead breaks and active excitation were applied to the low-pressure casing, the low-pressure support, the intermediate casing, the intermediate support, the high-pressure casing and the high-pressure support, giving 12 different damage types; impacts were applied to the low-pressure casing, the intermediate casing and the high-pressure casing, giving another 3 damage types.
FIG. 1 is a flow chart of a technical scheme of the invention, and the invention provides an intelligent recognition method for early damage acoustic emission of an aircraft engine based on a single sensor, which comprises the following steps:
step 100: experiments are carried out on an aeroengine test bed, and a single sensor is used to acquire the raw acoustic emission signals of the aeroengine under different early damages. Owing to the complex structure and compact space of the aircraft engine, the acoustic emission sensor is mounted on the cooler outer surface of the engine's low-pressure compressor casing.
Step 200: and converting the acoustic emission original signals captured by the sensor into a two-dimensional time-frequency characteristic image by using a time-frequency analysis technology of continuous wavelet transform to form a data sample set.
Step 300: and constructing a training sample set, a verification sample set and a test sample set. All images in the initial sample set are divided into a training set, a verification set and a test set according to the ratio of 6:2:2, and data in the data sample set are all image samples with labels. The tag type is one-hot coded to ensure that the distance between features is more reasonable to calculate.
Step 401: using 1 input layer as the start of the deep convolutional neural network for receiving the preprocessed aeroengine acoustic emission feature image. The input image size of the deep convolutional neural network is 200 × 200 × 3, i.e., 200 pixels in width and 200 pixels in height, each pixel having three values.
Step 402: and a convolution structure in the deep convolution neural network is used for carrying out feature extraction on the aeroengine acoustic emission feature image so as to obtain image features of different early damages. The convolution structure includes 6 convolution operations, each including 1 convolution layer with a Relu activation function and 1 pooling layer. In the convolution structure, the first convolution layer uses a 5 × 5 kernel with a step size of 1 to extract a learnable pattern quickly. The remaining convolutional layers learn signal features through multi-layer nonlinear mapping using smaller kernels (3 × 3), with a convolution step size of 1. The kernel size and step size of the pooling layer are 2 x 2 and 2, respectively. The 6 convolution operations can reduce the input image size by a factor of 64.
In the convolutional layer, an input image is convolved using a convolution kernel. By adding bias terms and using activation functions, a series of feature maps can be generated. The convolution operation formula is:
$$x_k^{(l,m)} = f\left(\mathbf{w}^{(l,m)} \cdot \mathbf{p}_k^{(l-1,m)} + b^{(l,m)}\right)$$

where $x_k^{(l,m)}$ is the value of the k-th neuron in the m-th frame of layer l, $\mathbf{p}_k^{(l-1,m)}$ is the k-th convolution region in the m-th frame feature map of layer l-1 and $p_{k,i}^{(l-1,m)}$ are its elements, $\mathbf{w}^{(l,m)}$ is the convolution kernel, and $b^{(l,m)}$ is the bias term. The symbol $\cdot$ represents the scalar product between the local region and the convolution kernel. $f(\cdot)$ is the activation function; the mathematical expression of the ReLU activation function is:

$$f(x) = \max(0, x)$$
the convolutional layer is followed by a pooling layer, which is pooled in a maximum manner, i.e., the maximum value is selected as output in the pooled region. The formula of the maximum pooling is:
$$x_j^{(l+1,m)} = \max_{(j-1)n < k \le jn} x_k^{(l,m)}$$

where $x_k^{(l,m)}$ is the value of the k-th neuron in the m-th frame of layer l, n is the width of the pooling region, and $x_j^{(l+1,m)}$ is the value of the j-th neuron in the m-th frame of layer l+1.
Step 403: the connection structure comprises a tiling operation, a full connection operation and a discarding operation, and specifically comprises 1 tiling layer, 1 full connection layer and 1 discarding layer with the proportion of 0.25. The connection structure first tiles the features after the convolution structure, and then connects each neuron with the neuron in the previous layer by using the full connection layer. Then, a 0.25 scale discard layer was used to avoid the overfitting problem of model training.
Step 404: and the output layer selects a softmax layer to realize early damage classification and output a prediction result. For the multi-classification task, the output model used is softmax, which can be expressed as:
Figure BDA00033180408500000910
in the formula, hjIndicates the classification result, VjFor the input value of the function, i.e., the output of the upper network, M represents the number of classes. Softmax outputs the probability that the input image finally belongs to a certain category, the greater the probability, the greater the likelihood. The loss function of the deep convolutional neural network model is defined as the cross entropy between the real value and the model prediction, which is often used to judge the difference degree between the predicted value and the actual value, and can be expressed as:
Figure BDA0003318040850000101
wherein N represents N samples in a batch, and M represents the number of categories; y isicIndicating an indicator variable (0 or 1), which is 1 if the category is the same as that of the sample i, and is 0 otherwise; p is a radical oficIs the predicted probability that the observation sample i belongs to class c.
Step 500: and initializing the deep convolutional neural network. Setting an initial learning rate to be 0.001, an initial momentum to be 0.9, a batch to be 20, a weight attenuation factor to be 0.000001, a maximum iteration number to be 500 and a callback value to be 10, namely stopping training and storing a model with the highest precision if the verification precision of the model in 10 rounds of circulation is not improved any more; optimizing the learning rate by combining a momentum learning method and a small-batch random gradient descent method; the loss function of the deep convolutional neural network model is defined as the cross entropy between the true value and the model prediction.
Step 601: and preprocessing the aeroengine acoustic emission characteristic image training set and the verification set. Specifically, the sizes of the images in the training set and the verification set are scaled according to the input requirements of the deep convolutional neural network, and then the scaled images are normalized to scale all the pixel values of the scaled images to the interval [0,1 ]. Specifically, the image sizes in the training set and the verification set are scaled to 200 × 200, and then the image pixel values are adjusted by using a formula a '═ a/255, where a is the pixel value of each point in the image, i.e., the pixel value of each point before normalization, and a' is the pixel value of each point in the processed image, i.e., the pixel value of each point after normalization.
Step 602: and inputting the preprocessed training set and verification set and the damage category labels to which the images belong into the initialized deep convolutional neural network as input for training and verification. Particularly, the training set is divided into a plurality of batches, and each batch comprises B1(ii) a preprocessed acoustic emission characteristic image of the aircraft engine, B1Is a positive integer greater than or equal to 1; repeating the training step on the initialized deep convolutional neural network to traverse the whole training set; dividing the verification set into a plurality of batches, wherein each batch contains B2(ii) a preprocessed acoustic emission characteristic image of the aircraft engine, B2Is a positive integer greater than or equal to 1; and verifying the deep convolutional neural network trained each time. And calculating the gradient of each weight change in the deep convolutional neural network according to a cross entropy loss function by using a gradient back propagation algorithm in the training of each batch, and adjusting the value of each weight in the deep convolutional neural network by using an optimization method. For the callback values and maximum number of iterations, the model will follow the settings that are satisfied first. And repeatedly and iteratively operating to complete model training to obtain the trained deep convolution neural network model.
Step 701: and preprocessing the test set of the acoustic emission characteristic image of the aircraft engine to obtain a preprocessed image meeting the input requirement of the deep convolutional neural network. Specifically, the size of the images in the test set is scaled according to the input requirements of the deep convolutional neural network, and then the scaled images are normalized to scale all pixel values of the scaled images to the interval [0,1 ]. Specifically, the image size in the test set is scaled to 200 × 200, and then the image pixel values are adjusted by using the formula a '═ a/255, where a is the pixel value of each point in the image, and a' is the pixel value of each point in the processed image.
Step 702: and inputting the preprocessed test set and the damage category labels to which the images belong into the trained deep convolutional neural network together as input to test to obtain the predicted damage category of the acoustic emission characteristic image of the aircraft engine.
Step 703: calculating the test accuracy, the Euclidean distance of the test precision and the Kappa coefficient evaluation index of the acoustic emission characteristic image of the aero-engine according to the real damage category and the predicted damage category of the acoustic emission characteristic image of the aero-engine; judging whether evaluation indexes of all the aeroengine acoustic emission characteristic images in the test set meet the requirement of a preset threshold value, namely the test accuracy reaches over 90.00%, the Euclidean distance of the test precision does not exceed 1.0000, and the Kappa coefficient is not lower than 0.9000; if not, executing steps 401-702, adjusting the deep convolutional neural network structure, reinitializing the deep convolutional neural network, and performing training, verification and testing. If the structure and parameters of the deep convolutional neural network model still cannot reach the preset prediction evaluation standard on the test set by adjusting the structure and parameters of the deep convolutional neural network model, acquiring more training data on the basis of the original training set, namely acquiring early damage acoustic emission signals of a plurality of aeroengines, and then executing steps 200-702. If the acoustic emission is judged to be in accordance with the acoustic emission threshold value, the trained deep convolutional neural network model can realize intelligent recognition of early damage of the aircraft engine based on single-sensor acoustic emission.
The evaluation indices specifically include: the test accuracy, the Euclidean distance of the test precision, and the Kappa coefficient. The test accuracy is the arithmetic mean of the per-class prediction accuracy of the deep convolutional neural network model:

$$S_m = \frac{1}{M}\sum_{c=1}^{M} S_t^{(c)}, \qquad S_t^{(c)} = \frac{n_{c,c}}{n_{c,t}}$$

where $S_m$ is the arithmetic mean of the model's per-class test accuracy, $S_t^{(c)}$ is the model's test accuracy on the c-th class, M is the total number of damage categories, $n_{c,c}$ is the number of correctly predicted samples in the c-th class, and $n_{c,t}$ is the total number of samples in the c-th class. Generally, the higher the test accuracy, the better the deep convolutional neural network model performs.
The Euclidean distance of the test precision reflects the true gap between the test precision of the deep convolutional neural network and that of an ideal model:

$$D_E = \sqrt{\sum_{c=1}^{M}\left(P_r - S_t^{(c)}\right)^2}$$

where $D_E$ is the Euclidean distance of the model's test precision and $P_r$ is the test precision of the ideal model, taken as 1. The smaller the value, the smaller the gap between the deep convolutional neural network model and the ideal model, the greater their similarity, and the better the model performs.
The Kappa coefficient is a statistical measure of classification consistency, which can be expressed as:

$$K = \frac{n\sum_{c=1}^{M} n_{c,c} - \sum_{c=1}^{M} a_c b_c}{n^2 - \sum_{c=1}^{M} a_c b_c}$$

where K is the Kappa coefficient of the deep convolutional neural network model, $a_c$ is the number of true samples of the c-th class, $b_c$ is the number of samples predicted as the c-th class, and n is the total number of samples of the deep convolutional neural network model. The higher the Kappa coefficient, the higher the classification accuracy of the model. In practice, the value of Kappa generally lies in [0, 1], which can be divided into five consistency levels: 0.0 to 0.20, 0.21 to 0.40, 0.41 to 0.60, 0.61 to 0.80 and 0.81 to 1.00.
In this case study, a single sensor acquired 9000 sets of early damage acoustic emission signals on the aircraft engine test bench (600 sets for each early damage), and time-frequency transformation yielded 9000 acoustic emission characteristic images across the 15 early damage types (600 images per damage). For each early damage, the samples were divided into training, verification and test sets in the 6:2:2 ratio. The constructed deep convolutional neural network was trained on the training and verification sets of aeroengine acoustic emission characteristic images; after 27 iterations the model's verification accuracy reached 99.17% and did not increase over the following 10 cycles, indicating that training was complete, at which point the training accuracy reached 99.19%. The training process is shown in FIG. 2. Feeding the test set into the trained model gave a prediction accuracy of 98.50% on the test set. The evaluation index results of this model are shown in Table 1.
Table 1: evaluation index result of model test
(Table 1 is reproduced only as an image in the original publication; the individual index values are not recoverable here.)
As can be seen from Table 1, the evaluation indices of the model test all exceed the preset thresholds and show very good classification accuracy. The results show that the model can realize single-sensor-based intelligent acoustic emission identification of early damage of the aircraft engine.
The implementation case realizes intelligent identification on early damage of the aero-engine by adopting a deep convolutional neural network model based on a single-sensor acoustic emission technology, can effectively improve the efficiency of acoustic emission detection, reduces the influence of artificial subjective factors, lays a foundation for the application of a single-sensor acoustic emission diagnosis method in the structural health monitoring and early damage intelligent identification of the engine, and has important significance for pursuing a light weight target with few measuring points, ensuring the normal and reliable running of the engine, making a scientific and reasonable equipment maintenance plan and reducing the occurrence of major accidents.

Claims (1)

1. An early damage acoustic emission intelligent identification method of an aircraft engine based on a single sensor is characterized by comprising the following steps:
step 100: performing an experiment on an aeroengine test bed, and acquiring acoustic emission original signals of the aeroengine under different early damages by using a single sensor;
step 200: converting the acoustic emission original signal captured by the sensor into a two-dimensional time-frequency characteristic image by using a time-frequency analysis technology of continuous wavelet transform to form a data sample set;
step 300: constructing a training sample set, a verification sample set and a test sample set; dividing all images in the initial sample set into a training set, a verification set and a test set according to the ratio of 6:2:2, wherein data in the data sample set are all image samples with labels;
step 401: using 1 input layer as the start of a deep convolutional neural network to receive a preprocessed aeroengine acoustic emission characteristic image; the input image size of the deep convolutional neural network is 200 × 200 × 3, namely a width of 200 pixels and a height of 200 pixels, with three values per pixel;
step 402: a convolution structure in the deep convolutional neural network is used for carrying out feature extraction on the aeroengine acoustic emission characteristic image to obtain image features of different early damages; the convolution structure includes 6 convolution operations, each convolution operation including 1 convolutional layer with a ReLU activation function and 1 pooling layer; in the convolution structure, the first convolutional layer uses a 5 × 5 kernel with a stride of 1 to quickly extract learnable patterns; the remaining convolutional layers use 3 × 3 kernels to learn signal features through multilayer nonlinear mapping, with a convolution stride of 1; the kernel size and stride of the pooling layers are 2 × 2 and 2, respectively; the 6 convolution operations can reduce the input image size by a factor of 64;
in the convolutional layer, the input image is convolved with a convolution kernel; by adding bias terms and applying activation functions, a series of feature maps can be generated; the convolution operation formula is:

$$x_k^{(l,m)} = f\left(\mathbf{w}^{(l,m)} \cdot \mathbf{p}_k^{(l-1,m)} + b^{(l,m)}\right)$$

where $x_k^{(l,m)}$ is the value of the k-th neuron in the m-th frame of layer l; $\mathbf{p}_k^{(l-1,m)}$ is the k-th convolution region in the m-th frame feature map of layer l-1, and $p_{k,i}^{(l-1,m)}$ are its elements; $\mathbf{w}^{(l,m)}$ is the convolution kernel; $b^{(l,m)}$ is the bias term; the symbol $\cdot$ represents the scalar product between the local region and the convolution kernel; $f(\cdot)$ is the activation function, and the mathematical expression of the ReLU activation function is:

$$f(x) = \max(0, x)$$
the convolution layer is followed by a pooling layer, and the pooling mode is maximum pooling, namely selecting the maximum value in a pooling area as output; the formula of the maximum pooling is:
$$x_j^{(l+1,m)} = \max_{(j-1)n < k \le jn} x_k^{(l,m)}$$

where $x_k^{(l,m)}$ is the value of the k-th neuron in the m-th frame of layer l; n is the width of the pooling region; $x_j^{(l+1,m)}$ is the value of the j-th neuron in the m-th frame of layer l+1;
step 403: the connection structure comprises a tiling (flatten) operation, a fully connected operation and a discarding (dropout) operation, and specifically comprises 1 flatten layer, 1 fully connected layer and 1 dropout layer with a rate of 0.25; the connection structure first flattens the features produced by the convolution structure, then uses the fully connected layer to connect each neuron with the neurons of the previous layer; a dropout layer with rate 0.25 is then used to avoid overfitting during model training;
step 404: the output layer selects a softmax layer to realize early damage classification and output a prediction result; for the multi-classification task, the output model used is softmax, which can be expressed as:
$$h_j = \frac{e^{V_j}}{\sum_{i=1}^{M} e^{V_i}}$$

where $h_j$ denotes the classification result for class j; $V_j$ is the input value of the function, i.e., the output of the preceding layer; M denotes the number of classes; the output of softmax is the probability that the input image belongs to a given category, and the larger the probability, the greater the likelihood; the loss function of the deep convolutional neural network model is defined as the cross entropy between the true values and the model predictions, which is used to judge the degree of difference between the predicted and actual values:

$$L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{M} y_{ic} \log(p_{ic})$$

where N denotes the number of samples in a batch and M the number of categories; $y_{ic}$ is an indicator variable that equals 1 if the category is the same as that of sample i and 0 otherwise; $p_{ic}$ is the predicted probability that observation sample i belongs to class c;
step 500: initializing the deep convolutional neural network; setting the initial learning rate to 0.001, the initial momentum to 0.9, the batch size to 20, the weight decay factor to 0.000001, the maximum number of iterations to 500 and the callback value to 10, namely, if the model's validation accuracy does not improve within 10 consecutive epochs, training stops and the highest-accuracy model is saved; the learning rate is optimized by combining a momentum learning method with mini-batch stochastic gradient descent; the loss function of the deep convolutional neural network model is defined as the cross entropy between the true values and the model predictions;
step 601: preprocessing the aeroengine acoustic emission characteristic image training and verification sets; specifically, the images in the training and verification sets are first scaled to the 200 × 200 input size required by the deep convolutional neural network, and the scaled images are then normalized so that all pixel values lie in the interval [0, 1], using the formula a' = a/255, where a is the pixel value of each point before normalization and a' is the corresponding pixel value after normalization;
Step 602: inputting the preprocessed training set and verification set and the damage category labels to which the images belong into the initialized deep convolutional neural network as input for training and verification; particularly, the training set is divided into a plurality of batches, and each batch comprises B1(ii) a preprocessed acoustic emission characteristic image of the aircraft engine, B1Is a positive integer greater than or equal to 1; repeating the training step on the initialized deep convolutional neural network to traverse the whole training set; dividing the verification set into a plurality of batches, wherein each batch contains B2(ii) a preprocessed acoustic emission characteristic image of the aircraft engine, B2Verifying the trained deep convolutional neural network each time for a positive integer greater than or equal to 1; calculating the gradient of each weight change in the deep convolutional neural network according to a cross entropy loss function by using a gradient back propagation algorithm in the training of each batch, and adjusting the value of each weight in the deep convolutional neural network by using an optimization method; for the callback values and maximum number of iterations, the model will follow the first satisfied setting; repeatedly and iteratively operating to complete model training to obtain a deep convolution neural network model after training is completed;
step 701: preprocessing the aeroengine acoustic emission characteristic image test set to obtain preprocessed images meeting the input requirement of the deep convolutional neural network; specifically, the images in the test set are first scaled to 200 × 200 and then normalized so that all pixel values lie in the interval [0, 1], using the formula a' = a/255, where a is the pixel value of each point in the image and a' is the pixel value of each point in the processed image;
step 702: inputting the preprocessed test set and damage category labels to which the images belong into the trained deep convolutional neural network together as input, and testing to obtain a predicted damage category of the acoustic emission characteristic image of the aircraft engine;
step 703: calculating the test accuracy, the Euclidean distance of the test precision and the Kappa coefficient of the aeroengine acoustic emission characteristic images according to their true and predicted damage categories; judging whether the evaluation indices over all aeroengine acoustic emission characteristic images in the test set meet the preset thresholds, namely a test accuracy of at least 90.00%, a Euclidean distance of the test precision not exceeding 1.0000 and a Kappa coefficient of not less than 0.9000; if not, executing steps 401-702 again, adjusting the deep convolutional neural network structure, reinitializing the deep convolutional neural network and performing training, verification and testing; if adjusting the structure and parameters of the deep convolutional neural network model still cannot reach the preset prediction evaluation standard on the test set, acquiring more training data on the basis of the original training set, namely collecting additional early damage acoustic emission signals of the aeroengine, and then executing steps 200-702; if the thresholds are met, the trained deep convolutional neural network model is used to realize intelligent identification of early damage of the aeroengine based on single-sensor acoustic emission.
CN202111237068.8A 2021-10-24 2021-10-24 Single-sensor-based intelligent acoustic emission identification method for early damage of aircraft engine Pending CN114062511A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111237068.8A CN114062511A (en) 2021-10-24 2021-10-24 Single-sensor-based intelligent acoustic emission identification method for early damage of aircraft engine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111237068.8A CN114062511A (en) 2021-10-24 2021-10-24 Single-sensor-based intelligent acoustic emission identification method for early damage of aircraft engine

Publications (1)

Publication Number Publication Date
CN114062511A true CN114062511A (en) 2022-02-18

Family

ID=80235331

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111237068.8A Pending CN114062511A (en) 2021-10-24 2021-10-24 Single-sensor-based intelligent acoustic emission identification method for early damage of aircraft engine

Country Status (1)

Country Link
CN (1) CN114062511A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106257022A (en) * 2015-06-16 2016-12-28 通用汽车环球科技运作有限责任公司 For using the system and method for sonic transducer detection cyclical event
CN109102005A (en) * 2018-07-23 2018-12-28 杭州电子科技大学 Small sample deep learning method based on shallow Model knowledge migration
KR20200080380A (en) * 2018-12-17 2020-07-07 주식회사 포스코 Apparatus and method for fault diagnosis of gearbox using cnn
CN112541510A (en) * 2019-09-20 2021-03-23 宫文峰 Intelligent fault diagnosis method based on multi-channel time series data
CN112541511A (en) * 2019-09-20 2021-03-23 宫文峰 Multi-channel time series data fault diagnosis method based on convolutional neural network
CN110823574A (en) * 2019-09-30 2020-02-21 安徽富煌科技股份有限公司 Fault diagnosis method based on semi-supervised learning deep countermeasure network
AU2020103901A4 (en) * 2020-12-04 2021-02-11 Chongqing Normal University Image Semantic Segmentation Method Based on Deep Full Convolutional Network and Conditional Random Field
CN112762362A (en) * 2021-01-15 2021-05-07 中国海洋石油集团有限公司 Underwater pipeline leakage acoustic emission detection method based on convolutional neural network
CN113155464A (en) * 2021-03-31 2021-07-23 燕山大学 CNN model visual optimization method for bearing fault recognition
CN112989712A (en) * 2021-04-27 2021-06-18 浙大城市学院 Aeroengine fault diagnosis method based on 5G edge calculation and deep learning

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114781441A (en) * 2022-04-06 2022-07-22 电子科技大学 EEG motor imagery classification method and multi-space convolution neural network model
CN114781441B (en) * 2022-04-06 2024-01-26 电子科技大学 EEG motor imagery classification method and multi-space convolution neural network model
CN115100544A (en) * 2022-08-24 2022-09-23 中国电力科学研究院有限公司 Power transmission line satellite-ground cooperative external damage monitoring and early warning method, device, equipment and medium
CN115331155A (en) * 2022-10-14 2022-11-11 智慧齐鲁(山东)大数据科技有限公司 Mass video monitoring point location graph state detection method and system
CN115331155B (en) * 2022-10-14 2023-02-03 智慧齐鲁(山东)大数据科技有限公司 Mass video monitoring point location graph state detection method and system
CN115546224A (en) * 2022-12-06 2022-12-30 新乡学院 Automatic fault identification and control method for motor operation process

Similar Documents

Publication Publication Date Title
CN114062511A (en) Single-sensor-based intelligent acoustic emission identification method for early damage of aircraft engine
CN112131760B (en) CBAM model-based prediction method for residual life of aircraft engine
Zhong et al. A novel gas turbine fault diagnosis method based on transfer learning with CNN
Chao et al. Adaptive decision-level fusion strategy for the fault diagnosis of axial piston pumps using multiple channels of vibration signals
CN104712542B (en) A kind of reciprocating compressor sensitive features based on Internet of Things are extracted and method for diagnosing faults
CN110363151A (en) Based on the controllable radar target detection method of binary channels convolutional neural networks false-alarm
CN111368885B (en) Gas circuit fault diagnosis method for aircraft engine
CN112101426A (en) Unsupervised learning image anomaly detection method based on self-encoder
CN112257530A (en) Rolling bearing fault diagnosis method based on blind signal separation and support vector machine
CN107727333A (en) A kind of diagnostic method for hydraulic cylinder leakage analyzing
CN111042917A (en) Common rail fuel injector weak fault diagnosis method based on GOA-MCKD and hierarchical discrete entropy
Liao et al. Research on a rolling bearing fault detection method with wavelet convolution deep transfer learning
Han et al. Acoustic emission intelligent identification for initial damage of the engine based on single sensor
Shaowu et al. Study on the health condition monitoring method of hydraulic pump based on convolutional neural network
CN114021620A (en) Electrical submersible pump fault diagnosis method based on BP neural network feature extraction
CN117516939A (en) Bearing cross-working condition fault detection method and system based on improved EfficientNetV2
CN112802011A (en) Fan blade defect detection method based on VGG-BLS
CN117037841A (en) Acoustic signal hierarchical cavitation intensity identification method based on hierarchical transition network
CN110779477B (en) Acoustic method for identifying shape of object in real time
CN115587541A (en) Method for describing turbofan engine process characteristics by utilizing multi-feature fusion residual error network
CN113409213A (en) Plunger pump fault signal time-frequency graph noise reduction enhancement method and system
CN114444544A (en) Signal classification and identification method based on convolutional neural network and knowledge migration
CN108710920B (en) Indicator diagram identification method and device
CN116626170B (en) Fan blade damage two-step positioning method based on deep learning and sound emission
KR102372124B1 (en) Method of Fault Classification of Solenoid Pumps based on Machine Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination