CN113255546A - Diagnosis method for aircraft system sensor fault

Diagnosis method for aircraft system sensor fault

Info

Publication number: CN113255546A (granted as CN113255546B)
Application number: CN202110617475.5A
Authority: CN (China)
Prior art keywords: training, layer, data, training set, fault diagnosis
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventor: 鲁方祥
Assignee (current and original): Chengdu Calabar Information Technology Co., Ltd.
Application filed by Chengdu Calabar Information Technology Co., Ltd.
Priority to CN202110617475.5A

Classifications

    • G06F 2218/12 — Pattern recognition specially adapted for signal processing: classification; matching
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/24323 — Tree-organised classifiers
    • G06N 3/044 — Neural-network architectures: recurrent networks, e.g. Hopfield networks
    • G06N 3/084 — Neural-network learning methods: backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Signal Processing (AREA)
  • Testing Or Calibration Of Command Recording Devices (AREA)

Abstract

The invention relates to a method for diagnosing faults of sensors of an aircraft system, comprising the following steps: performing sample and feature processing on the acquired sensor data to obtain a training set for training a fault diagnosis model; training the fault diagnosis model with a decision-tree, random-forest or deep-neural-network method on this training set so as to construct it; using sensor data not subjected to sample and feature processing as a test set to verify the constructed fault diagnosis model; and, once the model is verified, inputting newly acquired sensor data into it to obtain a diagnosis result. Because the training set contains data on multiple performance aspects of the equipment, the trained fault diagnosis model can determine which performance aspect of the equipment has failed and can fully locate the fault point.

Description

Diagnosis method for aircraft system sensor fault
Technical Field
The invention relates to the technical field of aircraft sensor fault detection, in particular to a method for diagnosing faults of a sensor of an aircraft system.
Background
An airplane carries several sensor-acquisition subsystems responsible for acquiring and processing the data of the sensors at the front end of the aircraft electronic system. Sensors on the aircraft include rate gyroscopes, acceleration assemblies, fuel sensors, pitch-rod sensors and the like, together with interface units that collect the data signals from these sensors.
The data collected by aircraft sensors contain the state information of the aircraft. At present, the sensors' operating data are acquired in order to judge whether a sensor has failed, so as to achieve fault diagnosis and isolation maintenance of the sensors in the aircraft and to rapidly locate the faulty unit; this is of great significance for reducing economic loss and improving the safety and combat capability of the aircraft.
However, current sensor fault diagnosis relies on empirical knowledge, or at best the sensor data reveal which sensor's equipment is faulty, not which performance aspect of that equipment has failed; the fault point therefore cannot be fully located.
Disclosure of Invention
The invention aims to provide a method for diagnosing faults of the sensors of an aircraft system that locates the specific fault point of the equipment and reduces errors to the greatest extent.
In order to achieve the above object, the embodiments of the present invention provide the following technical solutions:
a method for diagnosing aircraft system sensor faults, comprising the steps of:
performing sample and feature processing on the acquired sensor data to be used as a training set for training a fault diagnosis model;
training a fault diagnosis model by using a method of a decision tree, a random forest or a deep neural network according to the training set to construct the fault diagnosis model;
using sensor data which is not subjected to sample and feature processing as a test set, and verifying the constructed fault diagnosis model;
and after the fault diagnosis model is verified, inputting the newly acquired sensor data into the fault diagnosis model to obtain a diagnosis result.
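The four claimed steps can be sketched end to end; `preprocess`, `train_model`, the model's `predict` method and the 0.95 verification threshold are all hypothetical placeholders of this sketch, not names or values from the patent:

```python
# Hedged sketch of the claimed pipeline; all interfaces are assumed stand-ins.
def diagnose_pipeline(raw_train_data, test_pairs, new_data,
                      preprocess, train_model, accuracy_threshold=0.95):
    train_set = preprocess(raw_train_data)        # step 1: sample/feature processing
    model = train_model(train_set)                # step 2: DT / RF / DNN training
    correct = sum(model.predict(x) == y for x, y in test_pairs)
    if correct / len(test_pairs) < accuracy_threshold:  # step 3: verification
        raise ValueError("fault diagnosis model failed verification")
    return [model.predict(x) for x in new_data]   # step 4: diagnose new data
```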
In this scheme, real sensor data are used as the training set; when constructing the fault diagnosis model, the decision-tree, random-forest or deep-neural-network methods can be applied to a training set of huge data volume while reducing the difficulty of manual labelling. After the fault diagnosis model is built, it is evaluated with a test set to verify that the constructed model can accurately output a fault result.
The step of performing sample and feature processing on the acquired sensor data as a training set for training a fault diagnosis model includes:
injecting faults of different performances into the equipment whose data the sensor collects, using the sensor to collect the data of the equipment under each performance fault as the training set C, and taking the data collected under each performance fault as a training subset C_1, C_2, ..., C_N, where N is the number of equipment performance faults;
wherein the data of each performance fault further comprise a plurality of condition data, and one training subset is C_i = {a_1^i, a_2^i, ..., a_M^i}, where C_i is the ith training subset, a is a condition datum and M is the number of condition data.
According to the training set, the step of training the fault diagnosis model by using a deep neural network method comprises the following steps:
carrying out DNN forward propagation calculation and DNN backward propagation calculation through a deep neural network layer;
the deep neural network layer comprises an input layer, a hidden layer and an output layer, wherein the hidden layer is an intermediate layer and comprises a plurality of layers;
performing the DNN forward-propagation calculation:

a_j^i = σ( Σ_{k=1}^{m} w_jk^i · a_k^{i-1} + b_j^i )

where w_jk^i is the linear-relation coefficient from the kth neuron of layer i-1 to the jth neuron of layer i; b_j^i is the bias of the jth neuron of layer i; σ is the activation function; and a_j^i is the output value computed by forward propagation for the jth neuron of layer i, there being m neurons in total in layer i-1.

The output value of layer i is expressed with the matrix method as:

a^i = σ(z^i) = σ(W^i · a^{i-1} + b^i)

where, layer i-1 having m neurons and layer i having n neurons, the linear coefficients w of layer i form the n × m matrix W^i, the biases b of layer i form the n × 1 vector b^i, the output a of layer i-1 forms the m × 1 vector a^{i-1}, the pre-activation linear output z of layer i forms the n × 1 vector z^i, and the output a of layer i forms the n × 1 vector a^i.
performing the DNN back-propagation calculation:

Input: the total number of layers L; the number of neurons in each hidden layer and in the output layer; the activation function; the loss function; the iteration step length β; the maximum number of iterations MAX; the stop-iteration threshold ϵ; and the m input training subsets C_1, C_2, ..., C_m.

Output: the linear-relation coefficient matrix W and the bias vector b of each hidden layer and of the output layer.
The step of verifying the constructed fault diagnosis model by using the sensor data which is not subjected to sample and feature processing as a test set comprises the following steps:
the sensor data not subjected to sample and feature processing are: data of the equipment acquired by the sensor under arbitrary conditions, used as a test set; for this acquired test set it is unknown whether any equipment performance has failed, or which performance has failed, and Z = {b_1, b_2, ..., b_n}, where b is a datum of the equipment acquired by the sensor under an arbitrary condition and n is the number of data acquired by the sensor;
and inputting the data of the test set into a fault diagnosis model, and judging whether the result output by the fault diagnosis model is consistent with the original equipment performance fault of the data.
Compared with the prior art, the invention has the beneficial effects that:
(1) The training set of the invention contains data on multiple performance aspects of the equipment; the trained fault diagnosis model can therefore determine which performance aspect of the equipment has failed and can fully locate the fault point.
(2) Real sensor data are used as the training set; when constructing the fault diagnosis model, the decision-tree, random-forest or deep-neural-network methods can be applied to a training set of huge data volume while reducing the difficulty of manual labelling. After the fault diagnosis model is built, it is evaluated with a test set to verify that the constructed model can accurately output a fault result.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a flow chart of a diagnostic method of the present invention;
FIG. 2 is a diagram illustrating deep neural network layers according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating the definition of linear relationship coefficients according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the bias definition according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating output values of a deep neural network according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The invention is realized by the following technical scheme, as shown in fig. 1, a method for diagnosing the faults of the sensors of the aircraft system comprises the following steps:
step S1: and carrying out sample and feature processing on the acquired sensor data to be used as a training set for training a fault diagnosis model.
Usually a sensor on the aircraft is dedicated to acquiring the data of one device: for example, the rate gyroscope acquires the angular rate of the vehicle, the acceleration assembly acquires the acceleration of the vehicle, the fuel sensor acquires the amount of fuel in the fuel tank, and the pitch-rod sensor acquires the operating data of the pitch rod.
However, one device often suffers different performance faults: the fuel tank, for example, may be short of fuel, may leak, or its fuel sensor's power supply may fail; likewise, a fault in the operation of the pitch rod may be a disconnection (broken-rod) fault, a power-supply fault of the pitch-rod sensor, and so on.
Thus a given device exhibits different performance faults, which can be roughly divided into electrical performance faults (voltage, current, power, temperature, etc.) and mechanical performance faults (breakage, jamming, etc.). When the sensor collects data, however, even if the collected data are abnormal it cannot be known directly which performance aspect of the device has failed.
According to the scheme, faults of different performances are first actively injected into the equipment whose data the sensor collects, the sensor collects the data of the equipment under each performance fault, and the data under each performance fault are taken as a training subset, giving C_1, C_2, ..., C_N, where N is the number of equipment performance faults.
For example, when the C-redundancy pitch rod fails, the cause may be a disconnection fault of the C-redundancy pitch rod or a power-supply fault of the sensor corresponding to the C-redundancy pitch rod (only these two cases are discussed for the moment). A disconnection fault is injected into the C-redundancy pitch rod, and the sensor collects the data of the C-redundancy pitch rod during this disconnection performance fault as training subset C_1. A sensor power-supply fault is then injected into the C-redundancy pitch rod, and the data collected by the sensor during this power-supply performance fault are used as training subset C_2.
Training subset C_1 and training subset C_2 are then separately labelled with their performance faults: for example, training subset C_1 is given the label "C-redundancy pitch rod disconnection fault", and training subset C_2 is given the label "C-redundancy pitch rod sensor power-supply fault".
When collecting the data of a training subset, data of the equipment in several different states need to be collected; these collected data are called condition data. For example, when collecting the data of the disconnection performance fault of the C-redundancy pitch rod, the pitch rod is adjusted in turn to 9 different gear values: -20, -15, -10, -5, 0, 5, 10, 15, 20, giving 9 groups of sensor-acquired data: C_1 = {a_1^1(-20), a_2^1(-15), a_3^1(-10), a_4^1(-5), a_5^1(0), a_6^1(5), a_7^1(10), a_8^1(15), a_9^1(20)}.
Similarly, when collecting the data of the sensor power-supply performance fault, the pitch rod is adjusted to the same 9 gear values, giving the 9 groups of sensor-acquired data C_2 = {a_1^2(-20), a_2^2(-15), a_3^2(-10), a_4^2(-5), a_5^2(0), a_6^2(5), a_7^2(10), a_8^2(15), a_9^2(20)}.
Each datum in training subset C_1 and training subset C_2 is a condition datum, and the two training subsets together form the training set, giving 18 condition data in total. Each condition datum is additionally labelled with a condition label; for example, condition datum a_1^1(-20) carries the condition label "-20". Thus, for each datum in the training set, its specific performance fault and the current state of the equipment are known.
Step S2: and training the fault diagnosis model by using a method of a decision tree, a random forest or a deep neural network according to the training set so as to construct the fault diagnosis model.
As one implementable mode, a decision tree is used to train and construct the fault diagnosis model. The condition data in training set C (comprising training subset C_1 and training subset C_2) are used as leaf nodes of the decision tree, with training subset C_1 as one root node and training subset C_2 as another root node. The training set is partitioned by recursive optimal feature selection, so that each condition datum obtains an optimal classification result.
Because every condition datum carries a condition label, after decision-tree training each condition datum can be correctly classified into its corresponding training subset. The steps are repeated until all condition data in the training set are correctly classified, i.e. every condition datum is finally assigned to its corresponding root node; a decision tree is thereby generated and the training of the fault diagnosis model is completed. The test set is then input into the decision tree to complete the construction of the fault diagnosis model. In this embodiment the condition labels are used to classify the condition data, so the condition labels are the selected features.
However, when the data volume of the training set is very large, labelling the condition data one by one increases the workload. Therefore, when selecting features for classification, the selection criterion may instead be the information gain, the information-gain ratio, or the Gini index.
When features are selected by the information-gain criterion, the condition data are treated as a random variable X with probability distribution:

P(X = x_i) = p_i, i = 1, 2, ..., n

where x_i is the ith condition datum, n is the number of condition data, and p_i is the probability of the ith condition datum.

The entropy of the random variable X is then:

H(X) = -Σ_{i=1}^{n} p_i · log p_i
Entropy measures the uncertainty of a random variable: the larger the entropy, the greater the uncertainty. From the entropy of each random variable X, the joint entropy of several random variables can be obtained; for example, the joint entropy of the random variables X and Y is:

H(X, Y) = -Σ_{x,y} p(x, y) · log p(x, y)

From the joint entropy, the expression of the conditional entropy is obtained:

H(X|Y) = -Σ_{x,y} p(x, y) · log p(x|y) = H(X, Y) - H(Y)

The conditional entropy measures the uncertainty remaining in the random variable X once the random variable Y is known, so the information gain expresses the degree to which knowing the information of feature Y reduces the uncertainty of feature X. Let A be a feature of the training set C; the information gain of feature A with respect to training set C is:

g(C, A) = H(C) - H(C|A)

H(C) expresses the uncertainty of classifying the training set C, and H(C|A) expresses the uncertainty of classifying C under the conditions given by feature A, so their difference, the information gain g(C, A), expresses the degree to which the uncertainty of classifying C is reduced by the given feature A. The larger the information gain, the stronger the classification ability of the feature; therefore the feature with the larger information gain can be selected as the classification feature.
The procedure for selecting features by the information-gain criterion is to compute the information gain of each feature and select the feature with the largest information gain for classification. Suppose the training set is C with sample size |C|, and there are K classes D_k, k = 1, 2, ..., K, where |D_k| is the number of samples of class D_k.
Feature A takes n different values {a_1, a_2, ..., a_n}; according to feature A, the training set C is divided into n training subsets C_1, C_2, ..., C_i, ..., C_n, where |C_i| is the number of samples taking the ith value of feature A. Let C_ik be the set of samples in training subset C_i belonging to class D_k, i.e. C_ik = C_i ∩ D_k, with |C_ik| the size of C_ik. The information-gain algorithm is:
1. Input the training set C and the feature A, and compute the entropy H(C):

H(C) = -Σ_{k=1}^{K} (|D_k|/|C|) · log2(|D_k|/|C|)

2. Compute the conditional entropy H(C|A):

H(C|A) = Σ_{i=1}^{n} (|C_i|/|C|) · H(C_i) = -Σ_{i=1}^{n} (|C_i|/|C|) Σ_{k=1}^{K} (|C_ik|/|C_i|) · log2(|C_ik|/|C_i|)

3. Compute the information gain g(C, A):

g(C, A) = H(C) - H(C|A)
Secondly, selecting features by the information-gain ratio avoids the adverse effect of the information gain being biased toward features with many values as the splitting criterion.
The information-gain ratio g_R(C, A) of feature A with respect to training set C is defined as the ratio of its information gain g(C, A) to the entropy H_A(C) of the training set C with respect to feature A:

g_R(C, A) = g(C, A) / H_A(C)

The feature entropy H_A(C) is expressed as:

H_A(C) = -Σ_{i=1}^{n} (|C_i|/|C|) · log2(|C_i|/|C|)

where n is the number of values of feature A, |C_i| is the number of samples taking the ith value of A, and |C| is the sample size.
Thirdly, when features are selected by the Gini-coefficient criterion, suppose there are K categories and the probability of the kth category is p_k; the Gini coefficient is then expressed as:

Gini(p) = Σ_{k=1}^{K} p_k · (1 - p_k) = 1 - Σ_{k=1}^{K} p_k²

The larger the Gini coefficient, the greater the uncertainty of the training set. For the training set C, if a value a of feature A divides C into two parts, training subset C_1 and training subset C_2, then under the condition of feature A the Gini coefficient of training set C is expressed as:

Gini(C, A) = (|C_1|/|C|) · Gini(C_1) + (|C_2|/|C|) · Gini(C_2)
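The two Gini expressions can be sketched directly (illustrative only):

```python
from collections import Counter

def gini(labels):
    """Gini(p) = 1 - Σ_k p_k²."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_split(left_labels, right_labels):
    """Gini(C, A) = |C_1|/|C| · Gini(C_1) + |C_2|/|C| · Gini(C_2)."""
    n = len(left_labels) + len(right_labels)
    return (len(left_labels) / n * gini(left_labels)
            + len(right_labels) / n * gini(right_labels))
```

A split that isolates each class in its own subset drives the conditional Gini coefficient to zero, which is why the split with the smallest Gini(C, A) is preferred.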
In conclusion, selecting features by information gain, information-gain ratio or Gini coefficient yields the classification method of the decision tree, which is suitable for handling samples with missing attributes (for example when data in training set C lack attributes); suitable for handling massive data (for example when the data volume of training set C is huge, feasible and reliable results can be produced from a large data source in a relatively short time); and suitable for cases where the classification details need to be displayed intuitively, being strongly interpretable.
As another implementable mode, a random forest is used to train and construct the fault diagnosis model. The random forest is an ensemble algorithm: by combining several weak classifiers and voting on the final result, the overall fault diagnosis model attains high accuracy and generalization ability.
The random forest uses decision trees whose features are selected by the Gini coefficient as weak classifiers, and improves the construction of the decision tree: an ordinary decision tree selects one optimal feature among all n sample features to split the left and right subtrees,
whereas the random forest selects a subset of n_sub sample features (n_sub < n) and chooses the optimal feature among them for the left/right subtree split. This further strengthens the generalization ability of the constructed fault diagnosis model; the smaller n_sub, the more robust the model. The random-forest algorithm is as follows:
1. Input the training set C and the number of classifier iterations T; for t = 1, 2, ..., T, draw random samples with replacement from the training set C to obtain the sampling set C_t.
2. Use the sampling set C_t to train the tth decision-tree model G_t(x); when training the nodes of the decision-tree model, select a subset of the sample features at each node and choose the optimal feature among them to split the left and right subtrees.
3. Over the T iterations, the category receiving the most votes is taken as the final category of the data in the training set; if two or more categories tie for the most votes, one of them is selected as the final category.
The random forest can thus classify the data. Its application scenarios include not only those of the decision tree, but also cases where no feature selection is performed, cases where no generalization processing is performed, and cases where the weak classifiers need to be processed in parallel.
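The three steps can be sketched with a deliberately tiny weak learner; the depth-1 "stump" below is a toy stand-in for the Gini-grown trees of the text, and every function name here is an assumption of this sketch:

```python
import random
from collections import Counter

def train_stump(sample, feature):
    """Depth-1 'tree' G_t(x): split one feature at its mean value."""
    thr = sum(x[feature] for x, _ in sample) / len(sample)
    def majority(ys, default):
        return Counter(ys).most_common(1)[0][0] if ys else default
    overall = majority([y for _, y in sample], None)
    left = majority([y for x, y in sample if x[feature] <= thr], overall)
    right = majority([y for x, y in sample if x[feature] > thr], overall)
    return feature, thr, left, right

def predict_stump(stump, x):
    feature, thr, left, right = stump
    return left if x[feature] <= thr else right

def random_forest(C, T, n_sub, seed=0):
    """Steps 1-2: T bootstrap samples C_t; each tree sees only n_sub of the
    n features (this stump uses the first of the n_sub drawn features)."""
    rng = random.Random(seed)
    n = len(C[0][0])
    forest = []
    for _ in range(T):
        C_t = [rng.choice(C) for _ in range(len(C))]   # sampling with replacement
        feats = rng.sample(range(n), n_sub)            # n_sub < n random features
        forest.append(train_stump(C_t, feats[0]))
    return forest

def forest_predict(forest, x, seed=0):
    """Step 3: majority vote over the T trees; ties broken by random choice."""
    votes = Counter(predict_stump(s, x) for s in forest)
    top = max(votes.values())
    winners = sorted(c for c, v in votes.items() if v == top)
    return winners[0] if len(winners) == 1 else random.Random(seed).choice(winners)
```

In practice an off-the-shelf implementation such as scikit-learn's `RandomForestClassifier` (whose `max_features` parameter plays the role of n_sub) would replace this toy learner.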
Whether a decision tree or a random forest is used, the fault diagnosis model can be constructed from the training set and test set prepared in advance; after construction, however, the fault diagnosis model must also be evaluated, in order to guarantee its accuracy in use and to check whether errors occurred during its construction.
As another possible implementation manner, the fault diagnosis model is trained by using a deep neural network method, and if the fault diagnosis model output has errors, the fault diagnosis model is repeatedly learned to reduce or eliminate the errors.
The deep neural network (DNN) is a multi-layer feedforward neural network trained with the error back-propagation algorithm, and is currently the most widely applied neural network. The process of evaluating the fault diagnosis model consists of two parts: forward propagation of the signal and backward propagation of the error.
During DNN forward propagation, an input sample enters at the input layer of the fault diagnosis model, is processed layer by layer through all hidden layers, and is passed to the output layer; if the output of the output layer does not match the expectation, the error is passed back layer by layer as an adjustment signal and the connection-weight matrices between neurons are updated so as to reduce the error. Through repeated learning, the error finally falls within an acceptable range.
The deep neural network layers fall into three types: input layer, hidden layers and output layer. Referring to fig. 2, the first layer is the input layer, the intermediate layers are hidden layers and the last layer is the output layer; adjacent layers are fully connected, i.e. any neuron in layer i is connected to every neuron in layer i+1.
When defining the linear-relation coefficient w, refer to fig. 3: for example, w_24^3 denotes the linear coefficient from the 4th neuron of the second layer to the 2nd neuron of the third layer. The superscript 3 is the layer number of the output, and the subscripts correspond to the output index 2 of the third layer and the input index 4 of the second layer. In general, the linear coefficient from the kth neuron of layer i-1 to the jth neuron of layer i is defined as w_jk^i.
When defining the bias b, refer to fig. 4: for example, b_3^2 denotes the bias of the third neuron of the second layer; the superscript 2 is the layer number and the subscript 3 is the index of the neuron. Likewise, the bias of the first neuron of the third layer is expressed as b_1^3. In general, the bias of the jth neuron of layer i is defined as b_j^i.
In carrying out the DNN forward propagation algorithm, the activation function is $\sigma(z)$, and the output values of the hidden layers and the output layer are denoted a; the output of each layer is calculated from the output of the previous layer. See fig. 5: for the outputs of the second layer (the superscript of a represents the layer number, the subscript the neuron index, and x the neurons of the input layer) there are:

$a_{1}^{2} = \sigma(z_{1}^{2}) = \sigma(w_{11}^{2}x_{1} + w_{12}^{2}x_{2} + w_{13}^{2}x_{3} + b_{1}^{2})$

$a_{2}^{2} = \sigma(z_{2}^{2}) = \sigma(w_{21}^{2}x_{1} + w_{22}^{2}x_{2} + w_{23}^{2}x_{3} + b_{2}^{2})$

$a_{3}^{2} = \sigma(z_{3}^{2}) = \sigma(w_{31}^{2}x_{1} + w_{32}^{2}x_{2} + w_{33}^{2}x_{3} + b_{3}^{2})$

Assuming that there are m neurons in the (i-1)th layer, the output $a_{j}^{i}$ of the jth neuron of the ith layer is:

$a_{j}^{i} = \sigma(z_{j}^{i}) = \sigma\left(\sum_{k=1}^{m} w_{jk}^{i} a_{k}^{i-1} + b_{j}^{i}\right)$
Expressing each output element algebraically is cumbersome; the matrix form is more compact. Assuming that there are m neurons in the (i-1)th layer and n neurons in the ith layer, the linear coefficients w of the ith layer form the n x m matrix $W^{i}$, the biases b of the ith layer form the n x 1 vector $b^{i}$, and the output a of the (i-1)th layer forms the m x 1 vector $a^{i-1}$. The output of the ith layer is then represented by the matrix method as:

$a^{i} = \sigma(z^{i}) = \sigma(W^{i} a^{i-1} + b^{i})$
The forward propagation of DNN uses the weight coefficient matrices W, the bias vectors b and the input value vector x to perform a series of linear operations and activation operations, starting from the input layer and calculating backwards layer by layer until the output layer produces the output result.

Thus, DNN forward propagation can be summarized as:

Input: the total number of layers L, the matrices W and bias vectors b corresponding to all the hidden layers and the output layer, and the input value vector x;

Output: the output $a^{L}$ of the output layer.

The method comprises the following steps:

1, initialization: $a^{1} = x$;

2, for i = 2 to L, calculate: $a^{i} = \sigma(W^{i} a^{i-1} + b^{i})$;

the final result is the output $a^{L}$.
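The forward pass summarized above can be sketched in Python with NumPy; the sigmoid activation and the layer sizes are illustrative assumptions, not fixed by the patent:

```python
import numpy as np

def sigmoid(z):
    """Activation function sigma(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def dnn_forward(Ws, bs, x):
    """DNN forward propagation: a^1 = x, then a^i = sigma(W^i a^{i-1} + b^i)
    for i = 2..L; returns the output-layer activation a^L."""
    a = x
    for W, b in zip(Ws, bs):
        a = sigmoid(W @ a + b)
    return a

# Example: a 3-2-1 network with random weights (sizes are illustrative).
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((2, 3)), rng.standard_normal((1, 2))]
bs = [rng.standard_normal(2), rng.standard_normal(1)]
out = dnn_forward(Ws, bs, np.array([0.5, -1.0, 2.0]))
```

Each `W @ a + b` is the linear output z of one layer; chaining the calls layer by layer reproduces steps 1 and 2 of the summary.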
During DNN back propagation, the error serves as an adjusting signal that is passed back layer by layer to update the connection weight matrices between the neurons. Back propagation can be summarized as:

Input: the total number of layers L, the number of neurons in each hidden layer and the output layer, the activation function, the loss function, the iteration step length $\beta$, the maximum iteration number MAX, the stop iteration threshold $\epsilon$, and m input training subsets $C_1, C_2, \ldots, C_m$.

Output: the linear relationship coefficient matrix W and the bias vector b of each hidden layer and the output layer.

The method comprises the following steps:

1, initializing the linear relationship coefficient matrix W and the bias vector b of each hidden layer and the output layer to random values;

2, for iter = 1 to MAX:

2-1, for i = 1 to m:

2-1a, setting the input of the DNN network for the ith training sample to $a^{i,1} = x_i$;

2-1b, for l = 2 to L, performing the forward propagation algorithm calculation: $a^{i,l} = \sigma(W^{l} a^{i,l-1} + b^{l})$;

2-1c, calculating the output-layer error $\delta^{i,L}$ through the loss function;

2-1d, for l = L-1 to 2, performing the back propagation algorithm calculation: $\delta^{i,l} = (W^{l+1})^{T} \delta^{i,l+1} \odot \sigma'(z^{i,l})$;

2-2, for l = 2 to L, updating the linear relationship coefficient matrix $W^{l}$ and the bias vector $b^{l}$ of the lth layer:

$W^{l} = W^{l} - \beta \sum_{i=1}^{m} \delta^{i,l} (a^{i,l-1})^{T}$

$b^{l} = b^{l} - \beta \sum_{i=1}^{m} \delta^{i,l}$

2-3, if all the changes of W and b are smaller than the stop iteration threshold $\epsilon$, jumping out of the iteration loop to the next step;

3, outputting the linear relationship coefficient matrix W and the bias vector b of each hidden layer and the output layer.
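A minimal batch gradient descent sketch of steps 1 to 3, assuming a sigmoid activation and a squared-error loss (both are illustrative choices; the patent leaves the activation and loss functions open):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_dnn(Ws, bs, X, Y, beta=0.5, max_iter=200, eps=1e-6):
    """Forward pass per sample, output-layer delta from the loss
    1/2 * ||a^L - y||^2, backward deltas, then update each W^l and b^l;
    stop early when all updates fall below eps."""
    for _ in range(max_iter):
        grads_W = [np.zeros_like(W) for W in Ws]
        grads_b = [np.zeros_like(b) for b in bs]
        for x, y in zip(X, Y):
            # forward pass, keeping every layer activation
            acts = [x]
            for W, b in zip(Ws, bs):
                acts.append(sigmoid(W @ acts[-1] + b))
            # output-layer delta; a*(1-a) is sigma'(z) for a sigmoid
            delta = (acts[-1] - y) * acts[-1] * (1 - acts[-1])
            for l in range(len(Ws) - 1, -1, -1):
                grads_W[l] += np.outer(delta, acts[l])
                grads_b[l] += delta
                if l > 0:
                    delta = (Ws[l].T @ delta) * acts[l] * (1 - acts[l])
        max_step = 0.0
        for l in range(len(Ws)):
            Ws[l] -= beta * grads_W[l]
            bs[l] -= beta * grads_b[l]
            max_step = max(max_step, np.abs(beta * grads_W[l]).max())
        if max_step < eps:
            break
    return Ws, bs

# Toy usage: separate two condition patterns (sizes are illustrative).
rng = np.random.default_rng(1)
Ws = [rng.standard_normal((4, 2)), rng.standard_normal((1, 4))]
bs = [np.zeros(4), np.zeros(1)]
X = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
Y = [np.array([0.0]), np.array([1.0])]
Ws, bs = train_dnn(Ws, bs, X, Y)
```

The inner loop mirrors steps 2-1a through 2-1d, and the update loop mirrors step 2-2; the `eps` check plays the role of the stop threshold in step 2-3.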
The deep neural network DNN can classify data and is particularly suitable for discovering nonlinear relationships between model inputs and outputs; it can learn and store a large number of input-output mapping relationships without requiring a mathematical equation describing the mapping in advance, making it a good choice for training the fault diagnosis model.
Step S3: using the sensor data which has not been subjected to sample and feature processing as a test set to verify the constructed fault diagnosis model.
A sensor is used to collect data of the equipment under arbitrary conditions as the test set; for the collected test set it is unknown whether, and in what way, the performance of the equipment has failed, and $Z = \{b_1, b_2, \ldots, b_n\}$, where b is the equipment data acquired by the sensor under an arbitrary condition and n is the number of data items acquired by the sensor.

For example, the C-redundancy pitch stick is now in an arbitrary state, and it is unknown whether its performance is faulty, or which specific fault has occurred. The pitch stick is again adjusted to the 9 different gear positions, yielding 9 groups of sensor data as the test set: $Z = \{b_1(-20), b_2(-15), b_3(-10), b_4(-5), b_5(0), b_6(5), b_7(10), b_8(15), b_9(20)\}$.

Since it is unknown whether, or in what way, the performance of the equipment has failed, the data in the test set carry no labels and are treated as random data.
And inputting the data of the test set into a fault diagnosis model, and judging whether the result output by the fault diagnosis model is consistent with the original equipment performance fault of the data.
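The verification step can be illustrated with a small sketch; the stand-in model and the sign-based threshold are hypothetical, and only the compare-prediction-to-known-fault logic mirrors the text:

```python
def verify_model(model, test_data, true_faults):
    """Compare the fault class predicted for each test sample with the
    fault actually present, as in step S3; return the agreement rate."""
    hits = sum(1 for x, y in zip(test_data, true_faults) if model(x) == y)
    return hits / len(test_data)

# Hypothetical stand-in model: classifies a pitch-stick reading by sign.
model = lambda reading: "fault" if reading < 0 else "normal"
test_data = [-20, -15, -10, -5, 0, 5, 10, 15, 20]
true_faults = ["fault"] * 4 + ["normal"] * 5
accuracy = verify_model(model, test_data, true_faults)  # 1.0 here
```

In practice `model` would be the trained fault diagnosis model and `true_faults` the performance faults later confirmed on the equipment.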
Step S4: after the fault diagnosis model is verified, inputting the newly acquired sensor data into the fault diagnosis model to obtain the diagnosis result.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for diagnosing faults in sensors of an aircraft system, characterized by comprising the following steps:
performing sample and feature processing on the acquired sensor data to be used as a training set for training a fault diagnosis model;
training a fault diagnosis model by using a method of a decision tree, a random forest or a deep neural network according to the training set to construct the fault diagnosis model;
using sensor data which is not subjected to sample and feature processing as a test set, and verifying the constructed fault diagnosis model;
and after the fault diagnosis model is verified, inputting the newly acquired sensor data into the fault diagnosis model to obtain a diagnosis result.
2. A method of diagnosing a sensor fault in an aircraft system according to claim 1, wherein: the step of performing sample and feature processing on the acquired sensor data as a training set for training a fault diagnosis model includes:
injecting faults with different performances into the equipment from which the sensor collects data, using the sensor to collect data of the equipment under each performance fault respectively as a training set C, and taking the data collected under each performance fault as a training subset $C_1, C_2, \ldots, C_N$, N being the number of equipment performance faults;

wherein the data of each performance fault further includes a plurality of condition data, and one training subset is $C_i = \{a_1^i, a_2^i, \ldots, a_M^i\}$, where $C_i$ is the ith training subset, a is the condition data and M is the number of condition data.
3. A method of diagnosing a sensor fault in an aircraft system according to claim 2, characterised in that: according to the training set, the step of training the fault diagnosis model by using a decision tree method comprises the following steps:
respectively marking a performance fault label on each training subset, and respectively marking a condition label on each condition data;
taking the plurality of training subsets $C_i$ in the training set C as root nodes of the decision tree, taking the condition data in the training set C as leaf nodes of the decision tree, and classifying the condition data into the corresponding root nodes by recursive feature selection with the condition labels of the condition data as features, so as to generate the decision tree and complete the training of the fault diagnosis model.
4. A method of diagnosing a sensor fault in an aircraft system according to claim 2, characterised in that: according to the training set, the step of training the fault diagnosis model by using a decision tree method comprises the following steps:
taking the plurality of training subsets $C_i$ in the training set C as root nodes of the decision tree, and the condition data in the training set C as leaf nodes of the decision tree;
inputting the training set C and a preset feature A into the decision tree, and calculating the entropy H(C):

$H(C) = -\sum_{k=1}^{K} \frac{|D_k|}{|C|} \log_2 \frac{|D_k|}{|C|}$

wherein H(C) represents the uncertainty of classifying the training set C, |C| is the sample capacity of the training set C, there are K classes $D_k$, k = 1, 2, ..., K, and $|D_k|$ is the number of samples of class $D_k$; the feature A has n different values $\{a_1, a_2, \ldots, a_n\}$, dividing the training set C into n training subsets $C_1, C_2, \ldots, C_i, \ldots, C_n$, and $|C_i|$ is the number of samples taking the ith value of the feature A;

letting $C_{ik}$ be the set of samples of the training subset $C_i$ belonging to class $D_k$, i.e. $C_{ik} = C_i \cap D_k$, with $|C_{ik}|$ the number of samples of $C_{ik}$, the conditional entropy H(C|A) is calculated:

$H(C|A) = \sum_{i=1}^{n} \frac{|C_i|}{|C|} H(C_i) = -\sum_{i=1}^{n} \frac{|C_i|}{|C|} \sum_{k=1}^{K} \frac{|C_{ik}|}{|C_i|} \log_2 \frac{|C_{ik}|}{|C_i|}$

from the entropy H(C) and the conditional entropy H(C|A) of the training set C, the information gain g(C, A) is calculated:

$g(C, A) = H(C) - H(C|A)$

the information gain represents the degree to which the given feature A reduces the classification uncertainty of the training set C, and the feature with the largest information gain is selected as the classification feature of the decision tree.
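The entropy and information gain defined in this claim can be sketched as follows; the example labels and feature values are illustrative:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """H(C) = -sum_k (|D_k|/|C|) * log2(|D_k|/|C|)."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature_values):
    """g(C, A) = H(C) - H(C|A), with H(C|A) the weighted entropy of the
    subsets C_i induced by the values of feature A."""
    n = len(labels)
    subsets = {}
    for y, a in zip(labels, feature_values):
        subsets.setdefault(a, []).append(y)
    h_cond = sum(len(s) / n * entropy(s) for s in subsets.values())
    return entropy(labels) - h_cond

# A feature that splits the classes perfectly has gain equal to H(C).
labels = ["fault", "fault", "normal", "normal"]
gain = information_gain(labels, ["low", "low", "high", "high"])  # = 1.0
```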
5. A diagnostic method for aircraft system sensor failure according to claim 4, characterized in that: according to the training set, the step of training the fault diagnosis model by using a decision tree method further comprises the following steps:
calculating, from the information gain g(C, A) of the training set C, the information gain ratio $g_R(C, A)$ of the feature A to the training set C:

$g_R(C, A) = \frac{g(C, A)}{H_A(C)}$

wherein $H_A(C)$ is the feature entropy:

$H_A(C) = -\sum_{i=1}^{n} \frac{|C_i|}{|C|} \log_2 \frac{|C_i|}{|C|}$

and selecting the feature with the largest information gain ratio as the classification feature of the decision tree.
6. A diagnostic method for aircraft system sensor failure according to claim 4, characterized in that: according to the training set, the step of training the fault diagnosis model by using a decision tree method further comprises the following steps:
assuming that there are K classes and the probability of the kth class is $p_k$, the expression of the Gini coefficient is:

$Gini(p) = \sum_{k=1}^{K} p_k (1 - p_k) = 1 - \sum_{k=1}^{K} p_k^2$

the larger the Gini coefficient, the larger the uncertainty of the training set; for the training set C, if a value a of the feature A divides C into the training subsets $C_1$ and $C_2$, then under the condition of the feature A the Gini coefficient of the training set C is expressed as:

$Gini(C, A) = \frac{|C_1|}{|C|} Gini(C_1) + \frac{|C_2|}{|C|} Gini(C_2)$

and selecting the feature with the smallest Gini coefficient as the classification feature of the decision tree.
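A small sketch of the Gini computation above; the example labels are illustrative:

```python
from collections import Counter

def gini(labels):
    """Gini(p) = 1 - sum_k p_k^2."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_given_split(left, right):
    """Gini(C, A) = |C1|/|C| * Gini(C1) + |C2|/|C| * Gini(C2)."""
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

# A pure split drives the weighted Gini coefficient to zero.
g = gini_given_split(["fault", "fault"], ["normal", "normal"])  # = 0.0
```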
7. A method of diagnosing a sensor fault in an aircraft system according to claim 2, characterised in that: according to the training set, the step of training the fault diagnosis model by using a random forest method comprises the following steps:
inputting the training set C and the number of classifier iterations T; for t = 1, 2, ..., T, randomly sampling the training set C with replacement to obtain a sample set $C_t$;

using the sample set $C_t$ to train the tth decision tree model $G_t(x)$; when training the nodes of the decision tree model, selecting a part of the sample features from all sample features on the node, and selecting an optimal feature among the selected part to divide the left and right subtrees of the decision tree;

and taking the class receiving the most votes over the T iterations as the final class of the condition data in the training set C.
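The bootstrap sampling and majority vote described in this claim can be sketched as follows; the three stand-in "trees" are hypothetical constant classifiers:

```python
from collections import Counter
import random

def bootstrap(data, rng):
    """Sample set C_t: draw |C| samples from C with replacement."""
    return [rng.choice(data) for _ in data]

def majority_vote(classifiers, x):
    """Final class = the class receiving the most votes over the T trees."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Hypothetical ensemble: two trees vote "fault", one votes "normal".
trees = [lambda x: "fault", lambda x: "fault", lambda x: "normal"]
label = majority_vote(trees, x=None)  # "fault"
sample = bootstrap([1, 2, 3], random.Random(0))
```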
8. A method of diagnosing a sensor fault in an aircraft system according to claim 1, wherein: according to the training set, the step of training the fault diagnosis model by using a deep neural network method comprises the following steps:
carrying out DNN forward propagation calculation and DNN backward propagation calculation through a deep neural network layer;
the deep neural network layer comprises an input layer, a hidden layer and an output layer, wherein the hidden layer is an intermediate layer and comprises a plurality of layers;
performing DNN forward propagation calculation:
$a_j^i = \sigma\left(\sum_{k=1}^{m} w_{jk}^{i} a_k^{i-1} + b_j^{i}\right)$

wherein $w_{jk}^{i}$ is a linear relationship coefficient, representing the linear coefficient from the kth neuron of the (i-1)th layer to the jth neuron of the ith layer; $b_j^{i}$ is a bias, representing the bias of the jth neuron of the ith layer; $\sigma$ is the activation function; and $a_j^{i}$ is the output value calculated by forward propagation for the jth neuron of the ith layer, there being m neurons in total in the (i-1)th layer;

the output value of the ith layer is represented using the matrix method:

$a^{i} = \sigma(z^{i}) = \sigma(W^{i} a^{i-1} + b^{i})$

wherein, the (i-1)th layer having m neurons and the ith layer having n neurons, the linear coefficients w of the ith layer form the n x m matrix $W^{i}$, the biases b of the ith layer form the n x 1 vector $b^{i}$, the output a of the (i-1)th layer forms the m x 1 vector $a^{i-1}$, the linear output z of the ith layer before activation forms the n x 1 vector $z^{i}$, and the output a of the ith layer forms the n x 1 vector $a^{i}$;
DNN back propagation calculation is performed:

Input: the total number of layers L, the number of neurons in each hidden layer and the output layer, the activation function, the loss function, the iteration step length $\beta$, the maximum iteration number MAX, the stop iteration threshold $\epsilon$, and m input training subsets $C_1, C_2, \ldots, C_m$;

Output: the linear relationship coefficient matrix W and the bias vector b of each hidden layer and the output layer.
9. A method of diagnosing a sensor fault in an aircraft system according to claim 8, wherein: the step of performing DNN back propagation calculations comprises:
initializing the linear relationship coefficient matrix W and the bias vector b of each hidden layer and the output layer to random values;

setting the input of the DNN network for the ith training sample to $a^{i,1} = x_i$;

performing the forward propagation algorithm calculation for l = 2 to L: $a^{i,l} = \sigma(W^{l} a^{i,l-1} + b^{l})$;

calculating the output-layer error $\delta^{i,L}$ through the loss function;

performing the back propagation algorithm calculation for l = L-1 to 2: $\delta^{i,l} = (W^{l+1})^{T} \delta^{i,l+1} \odot \sigma'(z^{i,l})$;

updating the linear relationship coefficient matrix $W^{l}$ and the bias vector $b^{l}$ of the lth layer:

$W^{l} = W^{l} - \beta \sum_{i=1}^{m} \delta^{i,l} (a^{i,l-1})^{T}$

$b^{l} = b^{l} - \beta \sum_{i=1}^{m} \delta^{i,l}$

if all the changes of W and b are smaller than the stop iteration threshold $\epsilon$, jumping out of the iteration loop to the next step; wherein $W^{l}$ and $W^{l+1}$ are the linear relationship coefficient matrices of the lth and (l+1)th layers, and $b^{l}$ and $b^{l+1}$ are the bias vectors of the lth and (l+1)th layers;

and outputting the linear relationship coefficient matrix W and the bias vector b of each hidden layer and the output layer.
10. A method of diagnosing a sensor fault in an aircraft system according to claim 1, wherein: the step of verifying the constructed fault diagnosis model by using the sensor data which is not subjected to sample and feature processing as a test set comprises the following steps:
the sensor data not subjected to sample and feature processing is: data of the equipment collected by a sensor under arbitrary conditions as a test set, wherein for the collected test set it is unknown whether, and in what way, the performance of the equipment has failed, and $Z = \{b_1, b_2, \ldots, b_n\}$, where b is the equipment data acquired by the sensor under an arbitrary condition and n is the number of data items acquired by the sensor;
and inputting the data of the test set into a fault diagnosis model, and judging whether the result output by the fault diagnosis model is consistent with the original equipment performance fault of the data.
CN202110617475.5A 2021-06-03 2021-06-03 Diagnosis method for aircraft system sensor fault Active CN113255546B (en)

Publications (2)

Publication Number Publication Date
CN113255546A true CN113255546A (en) 2021-08-13
CN113255546B CN113255546B (en) 2021-11-09



Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104596764A (en) * 2015-02-02 2015-05-06 华北电力大学 Fault diagnosing and predicating test device for epicyclic gearbox
CN104635081A (en) * 2015-01-29 2015-05-20 西北工业大学 Adaptive fault diagnosis method of aircraft generator rectifier
CN104950866A (en) * 2014-03-25 2015-09-30 株式会社日立高新技术 Failure cause classification apparatus
CN106154209A (en) * 2016-07-29 2016-11-23 国电南瑞科技股份有限公司 Electrical energy meter fault Forecasting Methodology based on decision Tree algorithms
CN107179194A (en) * 2017-06-30 2017-09-19 安徽工业大学 Rotating machinery fault etiologic diagnosis method based on convolutional neural networks
CN108197014A (en) * 2017-12-29 2018-06-22 东软集团股份有限公司 Method for diagnosing faults, device and computer equipment
CN108241298A (en) * 2018-01-09 2018-07-03 南京航空航天大学 A kind of aerogenerator method for diagnosing faults based on FWA-RNN models
CN108594788A (en) * 2018-03-27 2018-09-28 西北工业大学 A kind of aircraft actuator fault detection and diagnosis method based on depth random forests algorithm
CN109142946A (en) * 2018-06-29 2019-01-04 东华大学 Transformer fault detection method based on ant group algorithm optimization random forest
CN109215165A (en) * 2018-08-08 2019-01-15 南京航空航天大学 A kind of civil aircraft APU Performance Evaluation and fault early warning method
CN109255441A (en) * 2018-10-18 2019-01-22 西安电子科技大学 Spacecraft fault diagnosis method based on artificial intelligence
CN110243405A (en) * 2019-06-25 2019-09-17 东北大学 A kind of Aero-Engine Sensor Failure diagnostic method based on deep learning
CN110489254A (en) * 2019-07-13 2019-11-22 西北工业大学 Large aircraft aviation big data fault detection and causal reasoning system and method based on depth random forests algorithm
CN111175054A (en) * 2020-01-08 2020-05-19 沈阳航空航天大学 Aeroengine fault diagnosis method based on data driving
CN111221919A (en) * 2018-11-27 2020-06-02 波音公司 System and method for generating aircraft failure prediction classifier
CN111259532A (en) * 2020-01-13 2020-06-09 西北工业大学 Fault diagnosis method of aeroengine control system sensor based on 3DCNN-JTFA
CN111474919A (en) * 2020-04-27 2020-07-31 西北工业大学 Aeroengine control system sensor fault diagnosis method based on AANN network group
CN111580498A (en) * 2020-05-08 2020-08-25 北京航空航天大学 Aircraft environmental control system air cooling equipment robust fault diagnosis method based on random forest
CN112182743A (en) * 2020-09-09 2021-01-05 北京航空航天大学 Airplane system fault diagnosis method based on fault transmission characteristic matching
CN112857669A (en) * 2021-03-30 2021-05-28 武汉飞恩微电子有限公司 Fault detection method, device and equipment of pressure sensor and storage medium
CN112881017A (en) * 2021-01-07 2021-06-01 西北工业大学 Intelligent fault diagnosis method for aeroengine control system sensor based on mode gradient spectral entropy


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
CWFLY93: "Introduction to Decision Trees", HTTPS://BLOG.CSDN.NET/U014258807/ARTICLE/DETAILS/80672928 *
S. E. DE LUCENA et al.: "Electro-Hydraulic Actuator Tester for Fly-By-Wire Aircrafts", 2007 IEEE INSTRUMENTATION & MEASUREMENT TECHNOLOGY CONFERENCE IMTC 2007 *
李冠男 et al.: "SVDD-based sensor fault detection and efficiency analysis for chillers", CIESC Journal (化工学报) *
深度机器学习: "Machine Learning (6) - Random Forest", HTTPS://WWW.CNBLOGS.COM/EILEARN/P/8993980.HTML *
漫漫成长: "Deep Neural Networks (DNN)", HTTPS://ZHUANLAN.ZHIHU.COM/P/29815081 *
白杰 et al.: "Aeroengine sensor fault diagnosis based on wavelet neural network", Machine Tool & Hydraulics (机床与液压) *
翟嘉琪 et al.: "A review of machine learning applications in fault detection and diagnosis", Computer Measurement & Control (计算机测量与控制) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4145147A1 (en) * 2021-08-27 2023-03-08 Hamilton Sundstrand Corporation Online health monitoring and fault detection for high voltage dc distribution networks
US11874318B2 (en) 2021-08-27 2024-01-16 Hamilton Sundstrand Corporation Online health monitoring and fault detection for high voltage DC distribution networks
CN114877925A (en) * 2022-03-31 2022-08-09 上海交通大学 Comprehensive energy system sensor fault diagnosis method based on extreme learning machine
CN114877925B (en) * 2022-03-31 2023-08-22 上海交通大学 Comprehensive energy system sensor fault diagnosis method based on extreme learning machine
CN115307947A (en) * 2022-08-09 2022-11-08 吉林大学 Crusher health monitoring system and method based on sensor information fusion
CN117060353A (en) * 2023-07-31 2023-11-14 中国南方电网有限责任公司超高压输电公司电力科研院 Fault diagnosis method and system for high-voltage direct-current transmission system based on feedforward neural network
CN117647367A (en) * 2024-01-29 2024-03-05 四川航空股份有限公司 Machine learning-based method and system for positioning leakage points of aircraft fuel tank
CN117647367B (en) * 2024-01-29 2024-04-16 四川航空股份有限公司 Machine learning-based method and system for positioning leakage points of aircraft fuel tank



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A method of sensor fault diagnosis for aircraft system

Effective date of registration: 20220517

Granted publication date: 20211109

Pledgee: Bank of Chengdu science and technology branch of Limited by Share Ltd.

Pledgor: CHENGDU CALABAR INFORMATION TECHNOLOGY CO.,LTD.

Registration number: Y2022510000125

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20230720

Granted publication date: 20211109

Pledgee: Bank of Chengdu science and technology branch of Limited by Share Ltd.

Pledgor: CHENGDU CALABAR INFORMATION TECHNOLOGY CO.,LTD.

Registration number: Y2022510000125