US6804600B1 - Sensor error detection and compensation system and method - Google Patents

Sensor error detection and compensation system and method

Info

Publication number
US6804600B1
US6804600B1, US10/655,674, US65567403A
Authority
US
United States
Prior art keywords
sensor
expected
value
neural network
under test
Prior art date
Legal status
Expired - Lifetime
Application number
US10/655,674
Inventor
Onder Uluyol
Emmanuel O. Nwadiogbu
Current Assignee
Honeywell International Inc
Original Assignee
Honeywell International Inc
Priority date
Filing date
Publication date
Application filed by Honeywell International Inc filed Critical Honeywell International Inc
Priority to US10/655,674 priority Critical patent/US6804600B1/en
Assigned to HONEYWELL INTERNATIONAL, INC. (Assignment of assignors interest; see document for details). Assignors: NWADIOGBU, EMMANUEL O.; ULUYOL, ONDER
Application granted granted Critical
Publication of US6804600B1 publication Critical patent/US6804600B1/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current


Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 9/00: Safety arrangements
    • G05B 9/02: Safety arrangements electric

Definitions

  • This invention generally relates to diagnostic systems, and more specifically relates to sensor fault detection in turbine engines.
  • fault detection systems are designed to monitor the various systems of the aircraft in an effort to detect potential faults, so that those faults can be addressed before they lead to serious system failure and possible in-flight shutdowns, take-off aborts, and delays or cancellations.
  • Engines are, of course, a particularly critical part of the aircraft. As such, fault detection for aircraft engines is an important part of an aircraft's fault detection system. Engine sensors play a critical role in fault detection. Typical modern turbine engine systems include several sets of engine sensors, such as engine speed sensors, fuel sensors, pressure sensors and temperature sensors. These sensors provide critical data to the operation of the turbine engine, and provide the core data used in fault detection.
  • Sensor validation generally includes fault detection and isolation, the determination of when one of the sensors is faulty and the isolation of which particular sensor in a group of sensors is at fault.
  • Current practice in sensor fault detection generally relies upon range and rate checks for individual sensors. This technique, while acceptable for some circumstances, may not be able to consistently detect engine sensor faults that can occur, especially in-range sensor faults.
  • the current practice of fault isolation relies upon qualitative cause-effect relationships based on design documents and expressed in terms of fault trees. These qualitative cause-effect relationships are typically based on a baseline of a new engine, and thus can become obsolete as sensor aging, engine wear, deterioration and other variations cause inconsistencies in sensor readings. Thus, the current systems used for sensor fault detection and isolation have had limited effectiveness.
  • the present invention provides a sensor error compensation system and method that provides improved sensor anomaly detection and compensation.
  • the sensor error compensation system includes an expected value generator and a sensor fault detector.
  • the expected value generator receives sensor data from a plurality of sensors in a turbine engine. From the sensor data, the expected value generator generates an expected sensor value for a sensor under test. The expected sensor value is passed to the sensor fault detector.
  • the sensor fault detector compares the expected sensor value to a received sensor value to determine if a sensor error has occurred in the sensor under test. If an error has occurred, the error can be compensated for by generating a replacement sensor value to substitute for the erroneous received sensor value.
  • the expected value generator comprises an auto-associative model configured to generate expected sensor values from the plurality of sensor values received from the engine sensors. In another embodiment, the expected value generator comprises a hetero-associative model that is configured to generate an expected sensor value from a plurality of sensor values from other sensors.
  • FIG. 1 is a schematic view of a sensor error compensation system
  • FIG. 2 is a schematic view of an exemplary sensor error compensation system that includes an auto-associative model
  • FIG. 3 is a schematic view of an auto-associative neural network
  • FIG. 4 is a flow diagram of a neural network training method
  • FIG. 5 is a schematic view of an exemplary sensor error compensation system that includes a hetero-associative model
  • FIG. 6 is a schematic view of a computer system that includes a sensor error compensation program.
  • the present invention provides a sensor error compensation system and method that provides improved sensor error detection and compensation.
  • the sensor error compensation system receives sensor values and generates an expected sensor value for a sensor under test.
  • the expected sensor value is compared to a received sensor value to determine if a sensor error has occurred in the sensor under test.
  • the sensor error compensation system 100 includes an expected value generator and a sensor fault detector.
  • the expected value generator receives sensor data from a plurality of sensors in a turbine engine. From the sensor data, the expected value generator generates an expected sensor value for a sensor under test. The expected sensor value is passed to the sensor fault detector.
  • the sensor fault detector compares the expected sensor value to a received sensor value to determine if a sensor error has occurred in the sensor under test. If an error has occurred, the error can be compensated for by generating a replacement sensor value to substitute for the faulty sensor value.
  • the sensor error compensation system 100 can compensate for engine sensor errors.
  • the sensor error compensation system 100 can compensate for errors in a wide variety of engine sensors. Typical modern turbine engine systems include several sets of engine sensors, such as engine speed sensors, fuel sensors, pressure sensors and temperature sensors. These sensors provide critical data to the operation of the turbine engine, and provide the core data used in fault detection. The sensor error compensation system 100 can detect and correct errors in these and other types of sensors typically used in turbine engines.
  • the expected value generator comprises an auto-associative model configured and trained to generate expected sensor values from a received plurality of sensor values.
  • the expected value generator comprises a hetero-associative model that is configured to generate an expected sensor value from a plurality of sensor values from other sensors.
  • the sensor error compensation system uses the expected value generator to detect and correct sensor errors.
  • the expected value generator comprises an auto-associative model and the sensor fault detector comprises a residual generator and anomaly detector. Additionally, the sensor error compensation system 200 includes a fault isolator to isolate which sensors are faulty.
  • the auto-associative model generates expected sensor values.
  • the expected value generator receives sensor values from each of a plurality of turbine engine sensors and generates a plurality of expected sensor values from the associative model, with each of the expected sensor values corresponding to one of the plurality of received sensor values.
  • the model auto-associates the received sensor values with themselves to create a corresponding plurality of expected values.
  • an auto-associative model is a many-to-many framework where many sensors are used as input to validate all or some of the same input sensors.
  • the expected sensor values are passed to the residual generator in the sensor fault detector.
  • the residual generator compares expected values with the actual received values, and generates residual values indicating the difference between the expected values and the received values. These residuals are passed to the anomaly detector.
  • the anomaly detector evaluates the residuals to determine if any are indicative of a likely fault in one or more of the sensors. For example, each residual is compared to a corresponding threshold value, with residuals above the corresponding threshold deemed to be indicative of a potential fault in a sensor. These detected anomalies are then passed to the fault isolator.
  • the sensor fault isolator determines which, if any, sensors in the system are at fault. In many cases, sensors with the largest residual values will be at fault. This will not be true in all cases, however.
  • the received sensor value for a potentially faulty sensor is replaced at the input to the expected value generator with the previously generated corresponding expected value.
  • the expected value generator then generates a new expected value for the potentially faulty sensor based on the received sensor values and the one replaced sensor value.
  • the new expected value can then be analyzed to determine if the potentially faulty sensor was in fact the faulty sensor. If replacing the potentially faulty value with the expected value reduces the residuals, then the replaced value was in fact faulty. If instead, the residuals rise or do not change, then the potentially faulty sensor was not the faulty sensor. In this case, additional sensor values would be replaced at the input of the expected value generator with their corresponding expected values, and new expected values generated until the faulty sensor is isolated.
  • the faulty sensor values can be replaced in the system with a corrected value.
  • the corrected value is derived from the expected values created by the expected value generator.
  • a replacement value can be created using iterative associative model estimation feedback. Specifically, an expected value is fed back into the generator and a new expected value created until it converges to the non-faulty nominal value. This iterative process generates very accurate recovered values, and these recovered values can then replace the faulty sensor values in the system.
  • the system provides the replacement values to the engine control logic for use in controlling engine operation and for sensor fault annunciation.
  • FIG. 3 illustrates an auto-associative neural network 300 used to implement an auto-associative model in an expected value generator.
  • the exemplary auto-associative neural network 300 comprises a multi-layered neural network that includes a mapping layer, a bottle-neck layer and a demapping layer.
  • neural networks are data processing systems that are not explicitly programmed. Instead, neural networks are trained through exposure to real-time or historical data. Neural networks are characterized by powerful pattern matching and predictive capabilities in which input variables interact heavily. Through training, neural networks learn the underlying relationships among the input and output variables, and form generalizations that are capable of representing any nonlinear function. As such, neural networks are a powerful technology for nonlinear, complex classification problems.
  • the auto-associative neural network 300 comprises a mapping layer, a bottleneck layer and a demapping layer.
  • the bottleneck layer is a small dimensional layer between the mapping layer and demapping layer.
  • the bottleneck layer can be used to ensure that the neural network obtains good generalization and to prevent the network from forming a look-up table.
  • the mapping and demapping layers would have approximately 1.25 times the number of nodes present at the input, and the bottleneck layer would be one half the size of the input layer.
  • the auto-associative neural network can be set up to have 7 nodes at its input and output, 3 nodes at the bottleneck layer, and 9 nodes at the mapping and demapping layers.
  • the auto-associative neural network 300 is trained to receive a plurality of sensor values and generate a corresponding plurality of expected sensor values.
  • the goal in training the neural network is to reproduce the input sensor values at the output as closely as possible when the input sensor values do not contain any faulty sensor values, while also having the network output non-faulty sensor values even when the sensor is faulty.
  • the first part of this goal is met by having the neural network configured to be very sensitive to changes in input sensor values while the latter part is met by having the network discard faulty sensor values and make up for the faulty sensor values based on the other sensor values.
  • the auto-associative neural network can be trained to meet both goals with a two phase training scheme.
  • the neural network is trained with non-faulty data to learn to associate the expected value output with the good sensor values it is presented with at input.
  • This first phase of training results in the neural network configured to fulfill the first goal.
  • the second phase of training is preferably accomplished by freezing the weights between the bottleneck and the de-mapping layer, and the weights between the demapping layer and the output layer.
  • This training procedure exploits the fact that the main features embedded in the input data are extracted by the neural network's bottleneck layer and that the final output is based on the values that the neurons in the bottleneck layer take.
  • By training the mapping part of the neural network to produce the main features of good sensor data from faulty and noisy data, the neural network can be trained to satisfy both goals.
  • the second phase of training is preferably done with white noise added to the input sensor values to achieve good generalization in a larger operational envelope.
  • the trained neural network will not only output expected sensor values that are very close to the input sensor values when all the inputs are non-faulty, it will also develop a convex mapping of sensor inputs with one faulty sensor value to all non-faulty sensor values.
  • One type of training method that can be used is a Levenberg-Marquardt backpropagation algorithm used in batch mode.
  • Other types of training methods that can be used include the variations of gradient descent algorithm by including heuristics or by optimization approaches.
  • the training method 400 trains the auto-associative neural network to generate expected sensor values when presented with a plurality of inputted sensor values.
  • the first step 402 is to group the sensors based on a correlation analysis.
  • the correlation analysis is performed in order to categorize sensors into related groups such that sensors belonging to the same group are related. These related sensors in the group will then constitute the basis for generating expected values for the sensors in the group that are to be validated.
  • a detailed example of one type of correlation analysis technique that can be used will be discussed below.
  • the next step 404 is to extract the data used to train and validate the neural network.
  • the data is preferably split into training, validation and test data sets.
  • the training data set is used to train for association between inputs and outputs using a suitable method.
  • the validation data set is used to determine an optimum number of iterations over the training data to achieve good generalization.
  • the test data sets are then used to assess the prediction accuracy of the trained network.
  • the next step 406 is to train the neural network for association between inputs and outputs.
  • This can be done using any suitable training method.
  • One exemplary suitable training method is the Levenberg-Marquardt backpropagation algorithm.
  • the training is preferably continued until the input is within a predetermined association threshold of the output when tested with the validation data set. This threshold is preferably determined based on the inherent noise measured during a noise analysis.
  • the input and output association should be within the nominal range of variation of each sensor during a normal operation.
  • the next step 408 is to freeze the weights in the bottleneck and output layers. Freezing the bottleneck and output layers ensures that the accommodation training only updates the weights in the first two layers. Having already trained for association, the bottleneck layer represents the features that describe the nominal data. By preserving the mapping from the features to nominal values, the association is maintained for generating expected values from possibly faulty input data.
  • the next step 410 is to train for accommodation of the faulty sensors.
  • the training data is expanded to include faulty sensor data, or white noise can be added to the nominal data to simulate faults.
  • the weights in the bottleneck and output layers were first frozen to ensure that the accommodation training only updates the weights in the first two layers.
  • the training for accommodation is then preferably continued until an error condition threshold is satisfied.
  • the error condition threshold is generally determined based on the nominal range of variation in sensor values.
  • the error condition threshold is generally slightly larger than the association threshold because the neural network is now used for mapping for association as well as for accommodation.
  • the last step 412 is to save the neural network model in a form that can be used to implement the expected value generator.
  • an auto-associative model can be provided to implement an effective expected value generator.
  • FIGS. 3 and 4 thus illustrate an auto-associative neural network and training method that can be used as an expected value generator, configured and trained to generate expected sensor values from the plurality of sensor values received from the engine sensors.
  • the expected value generator comprises a hetero-associative model and the sensor fault detector comprises a residual generator and anomaly detector.
  • the hetero-associative model generates expected sensor values.
  • the expected value generator receives sensor values from a plurality of turbine engine sensors and from that model generates an expected value for another sensor in the system.
  • the hetero-associative model receives data for some sensors and uses that data to generate expected values for a different sensor.
  • the hetero-associative model is a many-to-one approach where many sensors are used to validate one other sensor.
  • the expected value generator would include a plurality of hetero-associative models, one for each sensor for which expected values are to be generated.
  • an expected value generator could include a hetero-associative model for all engine turbine sensors, or for all sensors of specific types, or for particular subsets of sensors. Each hetero-associative model can then generate expected values for its corresponding sensor.
  • the expected sensor values generated by the expected value generator are passed to the residual generator in the sensor fault detector.
  • the residual generator compares expected values with the actual received values, and generates residual values indicating the difference between the expected values and the received values. These residuals are passed to the anomaly detector.
  • the anomaly detector evaluates the residuals to determine if any are indicative of a likely fault in one or more of the sensors. For example, each residual is compared to a corresponding threshold value, with residuals above the corresponding threshold deemed to be indicative of a potential fault in a sensor.
  • the faulty sensor values can be replaced in the system with a corrected value.
  • the corrected value is derived from the expected values created by the expected value generator.
  • the hetero-associative model can be implemented with a neural network.
  • the hetero-associative neural network can comprise a three-layer feed-forward neural network that provides a many-to-one mapping.
  • a hetero-associative neural network implementation may include 6-8 input sensors, 10 hidden layer nodes and one output node.
  • the hidden layer nodes can be implemented to have log-sigmoid transfer functions while the output nodes are implemented to have linear transfer functions.
  • the implementation can include a delay line for each input sensor to facilitate storage of sensor values for computation of later sensor values.
  • the hetero-associative neural network would preferably be trained using techniques similar to the auto-associative neural network. Specifically, the hetero-associative neural network can be trained with existing data sets to receive a plurality of sensor values and generate expected sensor values for its corresponding engine sensor.
  • a hetero-associative neural network is particularly suitable to applications where relationships between sensor values have strong time dependencies. That is, where different sensor types have relationships that are stronger with some amount of time difference between measurements.
  • gas turbine engines are typically monitored with a variety of different sensor types, including pressure, temperature, flow rate and position sensors. These different types of sensors are used to monitor different modalities. Because of the difference in propagation of changes in the engine, the correlation among various sensors is higher when some sensor readings are delayed. In these cases, if only a snapshot of sensor values is used (i.e., only sensor values from the same time instant) the validation of those sensors that have longer time constants, such as many temperature sensors, would not be as reliable as needed in some cases. This issue is of particular concern during transient conditions such as startup, climb, descent, etc.
  • the hetero-associative model can be implemented to store sensor values until they are used with other, later sensor values. This can be done by employing a large input layer (e.g. a shift register) at the input to the hetero-associative neural network.
  • the expected sensor value can be generated with those sensor values that show the strongest correlation to the corresponding sensor.
  • the shift register is added to provide a delay line of 5 equally spaced units going back to 2 seconds in time for each input sensor sampled at 100 ms intervals. This provides the short-term memory for providing the temporal context for the current sensor data.
  • such a model would be developed by identifying the sensors and the time differentials between sensors values that have the highest correlation.
  • the training data can be split into training and test sets according to the correlation time difference. Because the hetero-associative neural network can use delayed values at its input, special attention should be paid to the splitting of data.
  • the output sensor data are split into training and test sets. Then the corresponding time samples of the input sensors along with the proper delayed values from those time samples should be extracted from the input sensor data set.
  • the hetero-associative neural network can then be trained using any suitable technique, such as using backpropagation or faster converging variations such as Levenberg-Marquardt training algorithm. This training is preferably continued until the output error falls to within a predetermined threshold.
  • One advantage of a hetero-associative model is that fault isolation is generally not required. This is because a typical implementation uses one hetero-associative model, with one output, for each engine sensor for which an expected value is being generated. Thus, the anomaly detector need only compare the estimated value with the actual received value to determine if the difference exceeds a certain threshold. When the difference exceeds the threshold a fault is detected and the expected value can be used as a replacement for the faulty sensor value. Again, this threshold is preferably chosen using the inherent variation of the sensor readings at various flight regimes as computed during a noise analysis, with the threshold being increased until the desired balance is achieved between fault detection and false alarm rates.
  • the expected values generator can be implemented with a variety of models, including auto-associative and hetero-associative models.
  • auto-associative models would be desirable when there is a sufficiently large number of sensors that are highly correlated spatially at any given instant.
  • hetero-associative models would be desirable in those cases where one sensor correlates well with delayed values of several other sensors.
  • the auto-associative model could include snapshot data from the engine sensors as well as sensor values that were already validated through a hetero-associative model.
  • the input to the hetero-associative model could include spatial as well as temporal engine data.
  • the delayed sensor values could be validated through the auto-associative model.
  • both the auto-associative and hetero-associative models are preferably implemented with a data correlation analysis and noise analysis serving as the basis for the models.
  • a correlation analysis is performed in order to categorize sensors into groups such that sensors belonging to the same group are related. These related sensors in the group will then constitute the basis for generating expected values for the sensors in the group that are to be validated.
  • noise analysis is preferably performed that involves computation of several statistical measures in order to reveal levels of noise inherently present in each of the sensors. These noise measures are then used for setting threshold values and defining the nominal range of the variation of sensor values.
  • the correlation analysis is preferably performed as part of engine development.
  • a large number of sensors are typically installed for the extensive testing necessary to develop a complex turbine engine.
  • a typical production engine includes only a small subset of these sensors. This is typically a result of the drive to reduce cost and eliminate excessive weight in the engine.
  • the large number of development sensors is used to categorize sensors into groups during model development. Specifically, sensors are grouped together with other sensors that can be used to predict the value of the other sensors. Stated another way, sensors are preferably grouped together into related groups that have high values of cross-correlation and cross-covariance.
  • a cross-correlation sequence is a statistical quantity that, for input sequences x_n and y_n with n = 0, . . . , N−1, can be estimated as R_xy(m) = Σ_{n=0}^{N−m−1} x_{n+m} y_n.
  • the cross-covariance sequence can be calculated by first removing the mean and then estimating the cross-correlation sequence: c_xy(m) = Σ_{n=0}^{N−m−1} (x_{n+m} − μ_x)(y_n − μ_y).
  • calculations to estimate the cross-covariance can be done with a Fast Fourier Transform (FFT) based algorithm that evaluates the sum for input sequences x_n and y_n of length N.
  • Such an FFT can be performed with the xcov function provided by MATLAB or an equivalent tool.
  • the estimated cross-covariances can then be normalized to be 1 at the zero-lag by dividing the output by norm(x) and norm(y). In other words, the auto-correlation at the zero lag point is set to 1.
  • the correlations can then be investigated with a maximum lag envelope of [−50, 50].
  • for K channels, a cross-covariance matrix of size (50+50+1) by K² is generated.
  • this method can determine the magnitude of the maximum correlation and the lag value at which the maximum is reached. This is useful in determining how far back in time one needs to go to use a highly correlated value as a basis for validating a sensor reading.
  • the speed and pressure sensors react to any change in engine dynamics much quicker than temperature sensors since the propagation of heat takes longer. Hence, high correlation with temperature sensors occurs only after some delay.
  • the cross-covariance matrix can then be used to develop the model used to estimate sensor values.
  • the list of related sensors is formed by analyzing the normalized cross-covariance matrix. Several slices of this three-dimensional matrix are taken at various correlation levels such as 0.9, 0.8, and 0.7 by subtracting these numbers from the absolute values of the cross-covariance matrix. Those sensors whose correlation factors remain positive after each slice form a group. The slicing starts at 1 and continues down until each sensor is included in a group of at least 5 sensors. A similar process is also applied with various lag times to seek groups of sensors that are correlated in time. Once the groups are formed, the ones that are correlated spatially are inputted into auto-associative models, and the ones that are correlated temporally are inputted into hetero-associative models. A simplified sketch of this correlation analysis and grouping appears after this list.
  • a noise analysis is preferably performed to determine the level of noise inherent in each of the sensors. These noise measures are then used for setting threshold values and defining the nominal range of the variation of sensor values.
  • the sensor data is made up of steady state and transient regions through several flight modes. The steady-state regions correspond to those with settled sensor values after certain load is applied or removed. The statistical characteristics of sensor data besides its mean are different in each mode. To determine an appropriate way to preprocess the raw data, and set expectations for sensor validation and fault detection so that the results can be assessed realistically, it is desirable to analyze the noise characteristics of the original sensor readings.
  • Examples of the types of calculations that can be used to analyze the noise characteristics include: the mean, standard deviation, Z-score (which is a measure of the deviation from the mean normalized by the standard deviation), signal to noise ratio (which can be computed as the mean divided by the standard deviation), and percent deviation (which can be computed as the Z-score divided by the signal to noise ratio). A sketch of these statistics also appears after this list.
  • a sliding window can be used to calculate the moving mean and other features.
  • a sliding window width of 100 time frames can be used for data sampled at 100 Hz, 10 frames for data sampled at 10 Hz, and 5 frames for data sampled at 1 Hz.
  • the sensor error compensation system and method can be implemented in a wide variety of platforms.
  • In FIG. 6, an exemplary computer system 50 is illustrated.
  • Computer system 50 illustrates the general features of a computer system that can be used to implement the invention. Of course, these features are merely exemplary, and it should be understood that the invention can be implemented using different types of hardware that can include more or different features. It should be noted that the computer system can be implemented in many different environments, such as onboard an aircraft to provide onboard diagnostics, or on the ground to provide remote diagnostics.
  • the exemplary computer system 50 includes a processor 110, an interface 130, a storage device 190, a bus 170 and a memory 180.
  • the memory 180 of the computer system 50 includes a sensor error compensation program.
  • the processor 110 performs the computation and control functions of the system 50.
  • the processor 110 may comprise any type of processor, including single integrated circuits such as a microprocessor, or may comprise any suitable number of integrated circuit devices and/or circuit boards working in cooperation to accomplish the functions of a processing unit.
  • processor 110 may comprise multiple processors implemented on separate systems.
  • the processor 110 may be part of an overall vehicle control, navigation, avionics, communication or diagnostic system. During operation, the processor 110 executes the programs contained within memory 180 and as such, controls the general operation of the computer system 50 .
  • Memory 180 can be any type of suitable memory. This would include the various types of dynamic random access memory (DRAM) such as SDRAM, the various types of static RAM (SRAM), and the various types of non-volatile memory (PROM, EPROM, and flash). It should be understood that memory 180 may be a single type of memory component, or it may be composed of many different types of memory components. In addition, the memory 180 and the processor 110 may be distributed across several different computers that collectively comprise system 50 . For example, a portion of memory 180 may reside on the vehicle system computer, and another portion may reside on a ground based diagnostic computer.
  • the bus 170 serves to transmit programs, data, status and other information or signals between the various components of the computer system 50.
  • the bus 170 can be any suitable physical or logical means of connecting computer systems and components. This includes, but is not limited to, direct hard-wired connections, fiber optics, infrared and wireless bus technologies.
  • the interface 130 allows communication to the system 50, and can be implemented using any suitable method and apparatus. It can include network interfaces to communicate with other systems, terminal interfaces to communicate with technicians, and storage interfaces to connect to storage apparatuses such as storage device 190.
  • Storage device 190 can be any suitable type of storage apparatus, including direct access storage devices such as hard disk drives, flash systems, floppy disk drives and optical disk drives. As shown in FIG. 6, storage device 190 can comprise a disc drive device that uses discs 195 to store data.
  • the computer system 50 includes the sensor error compensation program. Specifically during operation, the sensor error compensation program is stored in memory 180 and executed by processor 110 . When being executed by the processor 110 , the sensor error compensation program validates sensor outputs and provides replacement sensor values.
  • signal bearing media include: recordable media such as floppy disks, hard drives, memory cards and optical disks (e.g., disk 195 ), and transmission media such as digital and analog communication links, including wireless communication links.
  • the present invention thus provides a sensor error compensation system and method that provides improved sensor error detection and compensation.
  • the sensor error compensation system includes an expected value generator and a sensor fault detector.
  • the expected value generator receives sensor data from a plurality of sensors in a turbine engine. From the sensor data, the expected value generator generates an expected sensor value for a sensor under test. The expected sensor value is passed to the sensor fault detector.
  • the sensor fault detector compares the expected sensor value to a received sensor value to determine if a sensor error has occurred in the sensor under test. If an error has occurred, the error can be compensated for by generating a replacement sensor value to substitute for the erroneous received sensor value.
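
As a supplement to the correlation analysis and noise analysis items above (see the forward references in the list), the following is a simplified NumPy sketch of the ideas: normalized cross-covariance over a ±50-lag window, grouping of sensors whose peak correlation stays above successively lower levels, and sliding-window noise statistics. The function names, grouping heuristic, and window handling are illustrative assumptions, not the patent's exact algorithm (which the text notes can also use MATLAB's xcov).

```python
import numpy as np

def xcov_norm(x, y, max_lag=50):
    """Normalized cross-covariance over lags [-max_lag, max_lag]: remove the
    means, correlate, and scale so the zero-lag auto-correlation equals 1.
    Assumes the series are longer than max_lag samples."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    full = np.correlate(x, y, mode="full")          # lags -(N-1) .. (N-1)
    mid = len(x) - 1                                # index of the zero lag
    return full[mid - max_lag: mid + max_lag + 1] / (np.linalg.norm(x) * np.linalg.norm(y))

def group_sensors(data, levels=(0.9, 0.8, 0.7), min_group=5):
    """Group sensors whose peak |cross-covariance| stays above a level, stepping
    the level down until every sensor belongs to a group of at least min_group."""
    k = data.shape[1]
    peak = np.array([[np.max(np.abs(xcov_norm(data[:, i], data[:, j])))
                      for j in range(k)] for i in range(k)])
    groups = []
    for level in levels:
        for i in range(k):
            members = tuple(j for j in range(k) if peak[i, j] >= level)
            if len(members) >= min_group and members not in groups:
                groups.append(members)
        if all(any(i in g for g in groups) for i in range(k)):
            break                                   # every sensor is covered
    return groups

def noise_stats(x, window=100):
    """Sliding-window noise measures named in the text: moving mean, standard
    deviation, Z-score, signal-to-noise ratio (mean / std), and percent
    deviation (Z-score / SNR)."""
    x = np.asarray(x, float)
    mean = np.array([x[max(0, i - window + 1): i + 1].mean() for i in range(len(x))])
    std = np.array([x[max(0, i - window + 1): i + 1].std() for i in range(len(x))])
    safe_std = np.where(std == 0, 1.0, std)
    z = (x - mean) / safe_std
    snr = mean / safe_std
    pct = z / np.where(snr == 0, 1.0, snr)
    return mean, std, z, snr, pct
```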

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Testing And Monitoring For Control Systems (AREA)

Abstract

A sensor error compensation system and method is provided that facilitates improved sensor error detection and compensation. The sensor error compensation system includes an expected value generator and a sensor fault detector. The expected value generator receives sensor data from a plurality of sensors in a turbine engine. From the sensor data, the expected value generator generates an expected sensor value for a sensor under test. The expected sensor value is passed to the sensor fault detector. The sensor fault detector compares the expected sensor value to a received sensor value to determine if a sensor error has occurred in the sensor under test. If an error has occurred, the error can be compensated for by generating a replacement sensor value to substitute for the erroneous received sensor value.

Description

FIELD OF THE INVENTION
This invention generally relates to diagnostic systems, and more specifically relates to sensor fault detection in turbine engines.
BACKGROUND OF THE INVENTION
Modern aircraft are increasingly complex. The complexities of these aircraft have led to an increasing need for fault detection systems. These fault detection systems are designed to monitor the various systems of the aircraft in an effort to detect potential faults, so that those faults can be addressed before they lead to serious system failure and possible in-flight shutdowns, take-off aborts, and delays or cancellations.
Engines are, of course, a particularly critical part of the aircraft. As such, fault detection for aircraft engines is an important part of an aircraft's fault detection system. Engine sensors play a critical role in fault detection. Typical modern turbine engine systems include several sets of engine sensors, such as engine speed sensors, fuel sensors, pressure sensors and temperature sensors. These sensors provide critical data to the operation of the turbine engine, and provide the core data used in fault detection.
Because of the critical importance of turbine engine sensors, there is a strong need for sensor performance validation and compensation. Sensor validation generally includes fault detection and isolation, the determination of when one of the sensors is faulty and the isolation of which particular sensor in a group of sensors is at fault. Current practice in sensor fault detection generally relies upon range and rate checks for individual sensors. This technique, while acceptable for some circumstances, may not be able to consistently detect engine sensor faults that can occur, especially in-range sensor faults. Likewise, the current practice of fault isolation relies upon qualitative cause-effect relationships based on design documents and expressed in terms of fault trees. These qualitative cause-effect relationships are typically based on a baseline of a new engine, and thus can become obsolete as sensor aging, engine wear, deterioration and other variations cause inconsistencies in sensor readings. Thus, the current systems used for sensor fault detection and isolation have had limited effectiveness.
Thus, what is needed is an improved system and method for detecting and isolating sensor faults in turbine engines.
BRIEF SUMMARY OF THE INVENTION
The present invention provides a sensor error compensation system and method that provides improved sensor anomaly detection and compensation. The sensor error compensation system includes an expected value generator and a sensor fault detector. The expected value generator receives sensor data from a plurality of sensors in a turbine engine. From the sensor data, the expected value generator generates an expected sensor value for a sensor under test. The expected sensor value is passed to the sensor fault detector. The sensor fault detector compares the expected sensor value to a received sensor value to determine if a sensor error has occurred in the sensor under test. If an error has occurred, the error can be compensated for by generating a replacement sensor value to substitute for the erroneous received sensor value.
In one embodiment, the expected value generator comprises an auto-associative model configured to generate expected sensor values from the plurality of sensor values received from the engine sensors. In another embodiment, the expected value generator comprises a hetero-associative model that is configured to generate an expected sensor value from a plurality of sensor values from other sensors.
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of a preferred embodiment of the invention, as illustrated in the accompanying drawings.
BRIEF DESCRIPTION OF DRAWINGS
The preferred exemplary embodiment of the present invention will hereinafter be described in conjunction with the appended drawings, where like designations denote like elements, and:
FIG. 1 is a schematic view of a sensor error compensation system;
FIG. 2 is a schematic view of an exemplary sensor error compensation system that includes an auto-associative model;
FIG. 3 is a schematic view of an auto-associative neural network;
FIG. 4 is a flow diagram of a neural network training method;
FIG. 5 is a schematic view of an exemplary sensor error compensation system that includes a hetero-associative model; and
FIG. 6 is a schematic view of a computer system that includes a sensor error compensation program.
DETAILED DESCRIPTION OF THE INVENTION
The present invention provides a sensor error compensation system and method that provides improved sensor error detection and compensation. The sensor error compensation system receives sensor values and generates an expected sensor value for a sensor under test. The expected sensor value is compared to a received sensor value to determine if a sensor error has occurred in the sensor under test.
Turning now to FIG. 1, a sensor error compensation system 100 for engine systems is illustrated. The sensor error compensation system 100 includes an expected value generator and a sensor fault detector. The expected value generator receives sensor data from a plurality of sensors in a turbine engine. From the sensor data, the expected value generator generates an expected sensor value for a sensor under test. The expected sensor value is passed to the sensor fault detector. The sensor fault detector compares the expected sensor value to a received sensor value to determine if a sensor error has occurred in the sensor under test. If an error has occurred, the error can be compensated for by generating a replacement sensor value to substitute for the faulty sensor value. Thus, the sensor error compensation system 100 can compensate for engine sensor errors.
The sensor error compensation system 100 can compensate for errors in a wide variety of engine sensors. Typical modern turbine engine systems include several sets of engine sensors, such as engine speed sensors, fuel sensors, pressure sensors and temperature sensors. These sensors provide critical data to the operation of the turbine engine, and provide the core data used in fault detection. The sensor error compensation system 100 can detect and correct errors in these and other types of sensors typically used in turbine engines.
In one embodiment, the expected value generator comprises an auto-associative model configured and trained to generate expected sensor values from a received plurality of sensor values. In another embodiment, the expected value generator comprises a hetero-associative model that is configured to generate an expected sensor value from a plurality of sensor values from other sensors. In each of these embodiments, the sensor error compensation system uses the expected value generator to detect and correct sensor errors.
Turning now to FIG. 2, a sensor error compensation system 200 that utilizes an auto-associative model is illustrated. In this embodiment, the expected value generator comprises an auto-associative model and the sensor fault detector comprises a residual generator and anomaly detector. Additionally, the sensor error compensation system 200 includes a fault isolator to isolate which sensors are faulty.
In this embodiment, the auto-associative model generates expected sensor values. Specifically, the expected value generator receives sensor values from each of a plurality of turbine engine sensors and generates a plurality of expected sensor values from the associative model, with each of the expected sensor values corresponding to one of the plurality of received sensor values. Thus, the model auto-associates the received sensor values with themselves to create a corresponding plurality of expected values. Stated another way, an auto-associative model is a many-to-many framework where many sensors are used as input to validate all or some of the same input sensors.
The expected sensor values are passed to the residual generator in the sensor fault detector. The residual generator compares expected values with the actual received values, and generates residual values indicating the difference between the expected values and the received values. These residuals are passed to the anomaly detector. The anomaly detector evaluates the residuals to determine if any are indicative of a likely fault in one or more of the sensors. For example, each residual is compared to a corresponding threshold value, with residuals above the corresponding threshold deemed to be indicative of a potential fault in a sensor. These detected anomalies are then passed to the fault isolator.
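As a concrete illustration of the residual generator and anomaly detector just described, the short Python sketch below computes residuals between received and expected sensor values and flags those that exceed per-sensor thresholds. The names and threshold values are hypothetical; in practice the thresholds would come from a noise analysis such as the one described later.

```python
import numpy as np

def detect_anomalies(received, expected, thresholds):
    """Residual generator plus anomaly detector sketch: the residual is the
    difference between received and expected values; a residual above its
    per-sensor threshold marks that sensor as potentially faulty."""
    residuals = np.abs(received - expected)
    suspects = residuals > thresholds
    return residuals, suspects

# Hypothetical 7-sensor snapshot in which the sensor at index 2 has drifted.
received = np.array([0.98, 1.02, 1.60, 0.99, 1.01, 1.00, 0.97])
expected = np.array([1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00])
thresholds = np.full(7, 0.15)
residuals, suspects = detect_anomalies(received, expected, thresholds)
# suspects -> [False, False, True, False, False, False, False]
```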
The sensor fault isolator determines which, if any, sensors in the system are at fault. In many cases, sensors with the largest residual values will be at fault. This will not be true in all cases, however. To further isolate the fault, the received sensor value for a potentially faulty sensor is replaced at the input to the expected value generator with the previously generated corresponding expected value. The expected value generator then generates a new expected value for the potentially faulty sensor based on the received sensor values and the one replaced sensor value. The new expected value can then be analyzed to determine if the potentially faulty sensor was in fact the faulty sensor. If replacing the potentially faulty value with the expected value reduces the residuals, then the replaced value was in fact faulty. If, instead, the residuals rise or do not change, then the potentially faulty sensor was not the faulty sensor. In this case, additional sensor values would be replaced at the input of the expected value generator with their corresponding expected values, and new expected values generated until the faulty sensor is isolated.
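The isolation-by-substitution loop might look roughly like the following sketch, assuming `model` is a callable expected value generator that maps a vector of sensor readings to a vector of expected values. All names are illustrative, and the "residuals fell" test is one simplified reading of the procedure described above.

```python
import numpy as np

def isolate_fault(model, received):
    """Isolation-by-substitution sketch: replace each suspect reading with its
    expected value, re-run the expected value generator, and return the sensor
    index whose substitution drives the overall residual down."""
    expected = model(received)
    baseline = np.sum(np.abs(received - expected))
    order = np.argsort(-np.abs(received - expected))   # largest residual first
    for idx in order:
        trial = np.array(received, dtype=float)
        trial[idx] = expected[idx]                     # substitute one expected value
        if np.sum(np.abs(trial - model(trial))) < baseline:
            return int(idx)                            # residuals fell: sensor idx was faulty
    return None                                        # no single faulty sensor isolated
```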
With the faulty sensor isolated, the faulty sensor values can be replaced in the system with a corrected value. Generally, the corrected value is derived from the expected values created by the expected value generator. By repeatedly feeding back expected values for the faulty sensor into the expected value generator, a replacement value can be created using iterative associative model estimation feedback. Specifically, an expected value is fed back into the generator and a new expected value created until it converges to the non-faulty nominal value. This iterative process generates very accurate recovered values, and these recovered values can then replace the faulty sensor values in the system.
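One possible reading of this iterative feedback recovery is sketched below: the faulty channel's value is repeatedly replaced by the model's expected value until the estimate stops changing. The tolerance, iteration limit, and names are illustrative assumptions; `model` is the same stand-in expected value generator as in the previous sketch.

```python
import numpy as np

def recover_value(model, received, faulty_idx, tol=1e-4, max_iter=50):
    """Iterative associative-model estimation feedback sketch: feed the expected
    value for the faulty sensor back into the generator until it converges."""
    values = np.array(received, dtype=float)
    for _ in range(max_iter):
        estimate = model(values)[faulty_idx]
        if abs(estimate - values[faulty_idx]) < tol:   # converged to a nominal value
            break
        values[faulty_idx] = estimate                  # feed the estimate back in
    return float(values[faulty_idx])
```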
With the replacement values generated, the system provides the replacement values to the engine control logic for use in controlling engine operation and for sensor fault annunciation.
Turning now to FIG. 3, an exemplary auto-associative model is illustrated. Specifically, FIG. 3 illustrates an auto-associative neural network 300 used to implement an auto-associative model in an expected value generator. The exemplary auto-associative neural network 300 comprises a multi-layered neural network that includes a mapping layer, a bottle-neck layer and a demapping layer.
In general, neural networks are data processing systems that are not explicitly programmed. Instead, neural networks are trained through exposure to real-time or historical data. Neural networks are characterized by powerful pattern matching and predictive capabilities in which input variables interact heavily. Through training, neural networks learn the underlying relationships among the input and output variables, and form generalizations that are capable of representing any nonlinear function. As such, neural networks are a powerful technology for nonlinear, complex classification problems.
In the illustrated embodiment, the auto-associative neural network 300 comprises a mapping layer, a bottleneck layer and a demapping layer. The bottleneck layer is a small dimensional layer between the mapping layer and demapping layer. The bottleneck layer can be used to ensure that the neural network obtains good generalization and to prevent the network from forming a look-up table. In one exemplary implementation, the mapping and demapping layers would have approximately 1.25 times the number of nodes present at the input, and the bottleneck layer would be one half the size of the input layer. For example, for a 7 sensor group, the auto-associative neural network can be set up to have 7 nodes at its input and output, 3 nodes at the bottleneck layer, and 9 nodes at the mapping and demapping layers.
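The 7-9-3-9-7 layout can be pictured with the minimal NumPy sketch below. The weights here are random placeholders (in practice they would come from the two-phase training described next), and the tanh hidden activations with a linear output are an assumption, since the text does not name the transfer functions for this network.

```python
import numpy as np

class AutoAssociativeNet:
    """Mapping / bottleneck / demapping structure with 7 inputs and outputs,
    9 mapping and demapping nodes, and a 3-node bottleneck (sizes from the text)."""
    def __init__(self, n_io=7, n_map=9, n_bottleneck=3, seed=0):
        rng = np.random.default_rng(seed)
        sizes = [n_io, n_map, n_bottleneck, n_map, n_io]
        self.weights = [rng.normal(0.0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
        self.biases = [np.zeros(b) for b in sizes[1:]]

    def forward(self, x):
        h = np.asarray(x, dtype=float)
        for i, (w, b) in enumerate(zip(self.weights, self.biases)):
            h = h @ w + b
            if i < len(self.weights) - 1:     # hidden layers nonlinear, output linear
                h = np.tanh(h)
        return h

net = AutoAssociativeNet()
expected = net.forward(np.ones(7))            # expected values for one 7-sensor snapshot
```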
The auto-associative neural network 300 is trained to receive a plurality of sensor values and generate a corresponding plurality of expected sensor values. In general, the goal in training the neural network is to reproduce the input sensor values at the output as closely as possible when the input sensor values do not contain any faulty sensor values, while also having the network output non-faulty sensor values even when the sensor is faulty. The first part of this goal is met by having the neural network configured to be very sensitive to changes in input sensor values while the latter part is met by having the network discard faulty sensor values and make up for the faulty sensor values based on the other sensor values.
The auto-associative neural network can be trained to meet both goals with a two phase training scheme. In the first phase, the neural network is trained with non-faulty data to learn to associate the expected value output with the good sensor values it is presented with at input. This first phase of training results in the neural network configured to fulfill the first goal.
The second phase of training is preferably accomplished by freezing the weights between the bottleneck and the de-mapping layer, and the weights between the demapping layer and the output layer. Thus, only the weights in the first two layers of the neural network are adapted in the second phase of training. This training procedure exploits the fact that the main features embedded in the input data are extracted by the neural network's bottleneck layer and that the final output is based on the values that the neurons in the bottleneck layer take. Hence, by training the mapping part of the neural network to produce the main features of good sensor data from faulty and noisy data, the neural network can be trained to satisfy both goals.
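One way to realize this freezing, assuming a gradient-based trainer that produces one gradient per weight matrix of the network sketched above, is to apply updates only to the first two layers (input-to-mapping and mapping-to-bottleneck) and leave the later layers untouched. This is only a sketch of the idea, not the patent's training code, and the function name and learning rate are invented for illustration.

```python
def phase_two_update(net, weight_grads, learning_rate=1e-3, trainable=(0, 1)):
    """Accommodation-phase update sketch: only the mapping-side weight matrices
    (layer indices 0 and 1) are adapted; the bottleneck-to-demapping and
    demapping-to-output weights stay frozen, preserving the phase-one mapping."""
    for i, grad in enumerate(weight_grads):
        if i in trainable:
            net.weights[i] -= learning_rate * grad   # update mapping-side layer
        # layers 2 and 3 (demapping side) are intentionally left unchanged
```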
The second phase of training is preferably done with white noise added to the input sensor values to achieve good generalization in a larger operational envelope. Thus trained, the trained neural network will not only output expected sensor values that are very close to the input sensor values when all the inputs are non-faulty, it will also develop a convex mapping of sensor inputs with one faulty sensor value to all non-faulty sensor values.
One type of training method that can be used is a Levenberg-Marquardt backpropagation algorithm used in batch mode. Other types of training methods that can be used include the variations of gradient descent algorithm by including heuristics or by optimization approaches.
Turning now to FIG. 4, a method 400 for training an auto-associative neural network is illustrated. The training method 400 trains the auto-associative neural network to generate expected sensor values when presented with a plurality of inputted sensor values.
The first step 402 is to group the sensors based on a correlation analysis. The correlation analysis is performed in order to categorize sensors into related groups such that sensors belonging to the same group are related. These related sensors in the group will then constitute the basis for generating expected values for the sensors in the group that are to be validated. A detailed example of one type of correlation analysis technique that can be used will be discussed below.
The next step 404 is to extract the data used to train and validate the neural network. The data is preferably split into training, validation and test data sets. The training data set is used to train for association between inputs and outputs using a suitable method. The validation data set is used to determine an optimum number of iterations over the training data to achieve good generalization. The test data sets are then used to assess the prediction accuracy of the trained network.
The next step 406 is to train the neural network for association between inputs and outputs. This can be done using any suitable training method. One exemplary suitable training method is the Levenberg-Marquardt backpropagation algorithm. The training is preferably continued until the input is within a predetermined association threshold of the output when tested with the validation data set. This threshold is preferably determined based on the inherent noise measured during a noise analysis. The input and output association should be within the nominal range of variation of each sensor during a normal operation. A more detailed explanation of a specific noise analysis technique that can be used will be discussed below.
The next step 408 is to freeze the weights in the bottleneck and output layers. Freezing the bottleneck and output layers ensures that the accommodation training only updates the weights in the first two layers. Having already trained for association, the bottleneck layer represents the features that describe the nominal data. By preserving the mapping from the features to nominal values, the association is maintained for generating expected values from possibly faulty input data.
The next step 410 is to train for accommodation of the faulty sensors. In this step the training data is expanded to include faulty sensor data, or white noise can be added to the nominal data to simulate faults. Generally, it is desirable for the maximum offset of the faulty data to be between 5 and 10 standard deviations. Again, the weights in the bottleneck and output layers have already been frozen to ensure that the accommodation training only updates the weights in the first two layers. The training for accommodation is then preferably continued until an error condition threshold is satisfied. The error condition threshold is generally determined based on the nominal range of variation in sensor values. The error condition threshold is generally slightly larger than the association threshold because the neural network is now used for mapping for association as well as for accommodation.
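Expanding the training data with simulated faults might be done along the lines of the sketch below, which adds white noise to a nominal snapshot and offsets one randomly chosen sensor by 5 to 10 standard deviations, one simplified reading of the maximum-offset guidance above. The function name and random choices are illustrative.

```python
import numpy as np

def inject_fault(snapshot, noise_std, rng=None):
    """Simulate one faulty input vector for accommodation training: white noise
    on every channel plus a large offset (5-10 standard deviations here) on a
    single randomly chosen sensor; returns the faulty vector and the fault index."""
    rng = rng or np.random.default_rng()
    noise_std = np.asarray(noise_std, dtype=float)
    faulty = np.asarray(snapshot, dtype=float) + rng.normal(0.0, noise_std)
    idx = int(rng.integers(len(faulty)))
    faulty[idx] += rng.uniform(5.0, 10.0) * noise_std[idx] * rng.choice([-1.0, 1.0])
    return faulty, idx
```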
The last step 412 is to save the neural network model in a form that can be used to implement the expected value generator. Thus, by training the neural network, an auto-associative model can be provided to implement an effective expected value generator. FIGS. 3 and 4 thus illustrate an auto-associative neural network and training method that can be used as an expected value generator, configured and trained to generate expected sensor values from the plurality of sensor values received from the engine sensors.
Turning now to FIG. 5, a sensor error compensation system 500 that utilizes a hetero-associative model is illustrated. In this embodiment, the expected value generator comprises a hetero-associative model and the sensor fault detector comprises a residual generator and anomaly detector.
In this embodiment, the hetero-associative model generates expected sensor values. Specifically, the expected value generator receives sensor values from a plurality of turbine engine sensors and from that data generates an expected value for another sensor in the system. Thus, unlike the auto-associative model, the hetero-associative model receives data from some sensors and uses that data to generate expected values for a different sensor. Stated another way, the hetero-associative model is a many-to-one approach in which many sensors are used to validate one other sensor. In a typical configuration, the expected value generator would include a plurality of hetero-associative models, one for each sensor for which expected values are to be generated. For example, an expected value generator could include a hetero-associative model for all turbine engine sensors, or for all sensors of specific types, or for particular subsets of sensors. Each hetero-associative model can then generate expected values for its corresponding sensor.
Like system 300, the expected sensor values generated by the expected value generator are passed to the residual generator in the sensor fault detector. The residual generator compares expected values with the actual received values, and generates residual values indicating the difference between the expected values and the received values. These residuals are passed to the anomaly detector. The anomaly detector evaluates the residuals to determine if any are indicative of a likely fault in one or more of the sensors. For example, each residual is compared to a corresponding threshold value, with residuals above the corresponding threshold deemed to be indicative of a potential fault in a sensor.
With the faulty sensor detected, the faulty sensor values can be replaced in the system with a corrected value. Generally, the corrected value is derived from the expected values created by the expected value generator.
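As a minimal sketch of the residual-and-threshold logic just described (all values and names below are hypothetical), the detection and replacement could be expressed as:

    import numpy as np

    def detect_and_replace(received, expected, thresholds):
        """Compare received and expected sensor values; flag residuals that
        exceed their per-sensor thresholds and substitute the expected value."""
        residuals = received - expected
        faulty = np.abs(residuals) > thresholds           # anomaly detection
        corrected = np.where(faulty, expected, received)  # accommodation
        return corrected, faulty

    # Hypothetical example: three sensors, the second reading is off-range.
    received   = np.array([101.2, 389.0, 7.4])
    expected   = np.array([100.9, 352.5, 7.5])
    thresholds = np.array([2.0, 10.0, 0.5])               # from the noise analysis
    print(detect_and_replace(received, expected, thresholds))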
Like the auto-associative model, the hetero-associative model can be implemented with a neural network. In this embodiment, the hetero-associative neural network can comprise a three-layer feed-forward neural network that provides a many-to-one mapping. In a typical engine sensor application, a hetero-associative neural network implementation may include 6-8 input sensors, 10 hidden layer nodes and one output node. The hidden layer nodes can be implemented to have log-sigmoid transfer functions while the output nodes are implemented to have linear transfer functions. Additionally, as will be discussed in more detail below, the implementation can include a delay line for each input sensor to facilitate storage of sensor values for computation of later sensor values.
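A hetero-associative estimator of the form just described might be sketched as follows in Python with PyTorch; the choice of 7 input sensors is an assumed instance of the 6-8 input configuration, and the delay line discussed below would multiply the input width accordingly.

    import torch.nn as nn

    # Hypothetical three-layer, many-to-one estimator: 7 input sensors,
    # 10 log-sigmoid hidden nodes, and one linear output node that
    # predicts the sensor under test.
    hetero_net = nn.Sequential(
        nn.Linear(7, 10),    # input sensors -> hidden layer
        nn.Sigmoid(),        # log-sigmoid transfer functions
        nn.Linear(10, 1),    # hidden layer -> single linear output
    )

With the 5-tap delay line described below, the first layer would instead take 7 x 5 = 35 inputs.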
The hetero-associative neural network would preferably be trained using techniques similar to the auto-associative neural network. Specifically, the hetero-associative neural network can be trained with existing data sets to receive a plurality of sensor values and generate expected sensor values for its corresponding engine sensor.
One advantage to a hetero-associative neural network implementation is that a hetero-associative neural network is particularly suitable to applications where relationships between sensor values have strong time dependencies. That is, where different sensor types have relationships that are stronger with some amount of time difference between measurements.
As an example of these time dependencies, gas turbine engines are typically monitored with a variety of different sensor types, including pressure, temperature, flow rate and position sensors. These different types of sensors are used to monitor different modalities. Because of differences in how changes propagate through the engine, the correlation among various sensors is higher when some sensor readings are delayed. In these cases, if only a snapshot of sensor values is used (i.e., only sensor values from the same time instant), the validation of those sensors that have longer time constants, such as many temperature sensors, would not be as reliable as needed in some cases. This issue is of particular concern during transient conditions such as startup, climb, descent, etc.
To account for this, the hetero-associative model can be implemented to store sensor values until they are used with other, later sensor values. This can be done by employing a large input layer (e.g., a shift register) at the input to the hetero-associative neural network. Thus, the expected sensor value can be generated with those sensor values that show the strongest correlation to the corresponding sensor. As one specific implementation example, the shift register provides a delay line of 5 equally spaced units going back 2 seconds in time for each input sensor sampled at 100 ms intervals. This provides the short-term memory that gives temporal context to the current sensor data.
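Purely as an illustration of the shift-register idea (the tap spacing is one plausible reading of "5 equally spaced units going back 2 seconds" at a 100 ms sample period), the delayed input vector could be assembled as:

    import numpy as np

    def tapped_delay_input(history, lags=(0, 5, 10, 15, 20)):
        """Build the shift-register input for one time step.

        history: array of shape (time, n_sensors) sampled every 100 ms, with
        the most recent sample last.  Lags of 0, 5, 10, 15 and 20 samples
        (0.5 s spacing) are an assumed reading of the 5-tap, 2-second line.
        """
        taps = [history[-1 - lag] for lag in lags]   # one row of sensors per lag
        return np.concatenate(taps)                  # (n_sensors * n_lags,) vector

    # Hypothetical usage: 7 input sensors, 3 seconds of history at 100 ms.
    history = np.random.randn(30, 7)
    x = tapped_delay_input(history)                  # 35-element network input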
In general, such a model would be developed by identifying the sensors and the time differentials between sensor values that have the highest correlation. In training the hetero-associative neural network, the training data can be split into training and test sets according to the correlation time difference. Because the hetero-associative neural network can use delayed values at its input, special attention should be paid to the splitting of data. First, the output sensor data are split into training and test sets. Then the corresponding time samples of the input sensors, along with the proper delayed values from those time samples, should be extracted from the input sensor data set. The hetero-associative neural network can then be trained using any suitable technique, such as backpropagation or faster-converging variations such as the Levenberg-Marquardt training algorithm. This training is preferably continued until the output error falls within a predetermined threshold.
One advantage to a hetero-associative model is that fault isolation is not generally required. This is because a typical implementation uses one hetero-associative model, with one output, for each engine sensor for which an expected value is being generated. Thus, the anomaly detector need only compare the estimated value with the actual received value to determine if the difference exceeds a certain threshold. When the difference exceeds the threshold, a fault is detected and the expected value can be used as a replacement for the faulty sensor value. Again, this threshold is preferably chosen using the inherent variation of the sensor readings at various flight regimes as computed during a noise analysis, with the threshold being increased until the desired balance is achieved between fault detection and false alarm rates.
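A rough sketch of that threshold-tuning step, assuming residual samples from nominal and simulated-fault data are available (all numbers below are hypothetical), might be:

    import numpy as np

    def sweep_threshold(nominal_residuals, faulty_residuals, candidates):
        """For each candidate threshold, report the detection rate on
        known-faulty residuals and the false-alarm rate on nominal ones."""
        results = []
        for th in candidates:
            detection = np.mean(np.abs(faulty_residuals) > th)
            false_alarm = np.mean(np.abs(nominal_residuals) > th)
            results.append((th, detection, false_alarm))
        return results

    # Hypothetical residuals: nominal noise plus an 8-standard-deviation offset fault.
    nominal = 0.5 * np.random.randn(5000)
    faulty = nominal + 4.0
    for th, det, fa in sweep_threshold(nominal, faulty, np.linspace(0.5, 3.0, 6)):
        print(f"threshold={th:.2f}  detection={det:.1%}  false alarms={fa:.1%}")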
Thus, the expected value generator can be implemented with a variety of models, including auto-associative and hetero-associative models. Generally, auto-associative models would be desirable when there is a sufficiently large number of sensors that are highly correlated spatially at any given instant. Likewise, hetero-associative models would be desirable in those cases where one sensor correlates well with delayed values of several other sensors.
Furthermore, in some applications it may be desirable to include both auto-associative and hetero-associative models in the expected value generator. This would be the case if there are, for example, many pressure and flow sensors, but only a few temperature sensors. The hetero-associative models could then be employed for the temperature sensors while the remaining sensors are validated through the auto-associative models. As another example, inputs to the auto-associative model could include snapshot data from the engine sensors as well as sensor values that were already validated through a hetero-associative model. Likewise, the input to the hetero-associative model could include spatial as well as temporal engine data. Furthermore, the delayed sensor values could be validated through the auto-associative model. Thus, the outputs of the auto-associative and hetero-associative models would combine to validate and recover the sensors.
As discussed above, both the auto-associative and hetero-associative models are preferably implemented with a data correlation analysis and a noise analysis serving as the basis for the models. As one example, a correlation analysis is performed in order to categorize sensors into groups such that sensors belonging to the same group are related. These related sensors in the group will then constitute the basis for generating expected values for the sensors in the group that are to be validated.
Similarly, noise analysis is preferably performed that involves computation of several statistical measures in order to reveal levels of noise inherently present in each of the sensors. These noise measures are then used for setting threshold values and defining the nominal range of the variation of sensor values.
The correlation analysis is preferably performed as part of engine development. During development, a large number of sensors are typically installed for the extensive testing necessary to develop a complex turbine engine. In contrast, a typical production engine includes only a small subset of these sensors. This is typically a result of the drive to reduce cost and eliminate excessive weight in the engine. As one implementation of a correlation analysis, the large number of development sensors is used to categorize sensors into groups during model development. Specifically, each sensor is grouped together with other sensors whose values can be used to predict its value. Stated another way, sensors are preferably grouped together into related groups that have high values of cross-correlation and cross-covariance.
As one example, a cross-correlation sequence can be a statistical quantity defined as:
$\gamma_{xy}(m) = E\{x_n\, y^{*}_{n+m}\}$  (1)
where $x_n$ and $y_n$ are stationary random processes, $-\infty < n < \infty$, and $E\{\cdot\}$ is the expected value operator. The cross-covariance sequence can be calculated by first removing the mean and then estimating the cross-correlation sequence:
$C_{xy}(m) = E\{(x_n - \mu_x)(y^{*}_{n+m} - \mu^{*}_y)\}$  (2)
However, since typically only a finite-length record of the random processes is available in real-life applications, the following relation can be used to estimate the deterministic cross-correlation sequence (also called the time-ambiguity function):

$$R_{xy}(m) = \begin{cases} \sum_{n=0}^{N-m-1} x_n\, y^{*}_{n+m}, & m \ge 0 \\ R^{*}_{xy}(-m), & m < 0 \end{cases} \qquad (3)$$

where n = 0, . . . , N−1.
As a further example, calculations to estimate the cross-covariance can be done with a Fast Fourier Transform (FFT) based algorithm to evaluate the sum given inputs $x_n$ and $y_n$ of length N. Such an FFT can be performed with the xcov function provided by MATLAB or an equivalent tool. The estimated cross-covariances can then be normalized to be 1 at the zero lag by dividing the output by norm(x) and norm(y). In other words, the auto-correlation at the zero lag point is set to 1. The correlations can then be investigated with a maximum lag envelope of [−50, 50]. As an example, for K channels, a cross-covariance matrix of size (50+50+1) by K² is generated.
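As an illustrative Python equivalent of that computation (numpy here stands in for the MATLAB xcov function; the function and variable names are assumptions), the normalized cross-covariance over a [−50, 50] lag window for one pair of channels could be estimated as:

    import numpy as np

    def normalized_xcov(x, y, max_lag=50):
        """Estimate the cross-covariance of two equal-length records and
        normalize it so that the zero-lag auto-correlation would be 1."""
        x = x - x.mean()                        # remove the means first
        y = y - y.mean()
        full = np.correlate(x, y, mode="full")  # deterministic cross-correlation, eq. (3)
        mid = len(x) - 1                        # index of the zero-lag term
        c = full[mid - max_lag: mid + max_lag + 1]
        return c / (np.linalg.norm(x) * np.linalg.norm(y))

    # Hypothetical use: find the lag with the largest correlation magnitude.
    x = np.random.randn(2000)
    y = np.roll(x, 12) + 0.1 * np.random.randn(2000)  # y is a delayed, noisy copy of x
    c = normalized_xcov(x, y)
    best_lag = np.argmax(np.abs(c)) - 50              # lag of maximum |correlation|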
When such a correlation analysis is applied to sensor data, those sensors that have high correlation are grouped together. In addition, this method can determine the magnitude of the maximum correlation and the lag value at which the maximum is reached. This is useful in determining how far back in time one needs to go to use a highly correlated value as a basis for validating a sensor reading.
For example, in a typical turbine engine, the speed and pressure sensors react to any change in engine dynamics much quicker than temperature sensors since the propagation of heat takes longer. Hence, high correlation with temperature sensors occurs only after some delay.
The cross-covariance matrix can then be used to develop the model used to estimate sensor values. As one example, the list of related sensors is formed by analyzing the normalized cross-covariance matrix. Several slices of this three-dimensional matrix are taken at various correlation levels, such as 0.9, 0.8, and 0.7, by subtracting these numbers from the absolute values of the cross-covariance matrix. Those sensors whose correlation factors remain positive after each slice form a group. The slicing starts at 1 and continues down until each sensor is included in a group of at least 5 sensors. A similar process is also applied with various lag times to seek groups of sensors that are correlated in time. Once the groups are formed, the ones that are correlated spatially are input to auto-associative models, and the ones that are correlated temporally are input to hetero-associative models.
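The slicing procedure might be sketched as follows, assuming peak_corr holds the maximum absolute normalized cross-covariance between each pair of sensors over the lag window (the data and group-size settings are hypothetical):

    import numpy as np

    def group_sensors(peak_corr, levels=(0.9, 0.8, 0.7, 0.6, 0.5), min_size=5):
        """Group each sensor with the others whose peak correlation to it
        stays positive after subtracting successively lower slicing levels."""
        k = peak_corr.shape[0]
        groups = {}
        for s in range(k):
            for level in levels:                       # slice from 1 downward
                members = np.where(peak_corr[s] - level > 0)[0]
                if len(members) >= min_size:           # stop once the group is large enough
                    groups[s] = members
                    break
        return groups

    # Hypothetical peak-correlation matrix for 10 related sensor channels.
    rng = np.random.default_rng(0)
    base = rng.standard_normal(500)
    data = base + 0.4 * rng.standard_normal((10, 500))
    peak_corr = np.abs(np.corrcoef(data))
    print(group_sensors(peak_corr))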
As stated above, a noise analysis is preferably performed to determine the level of noise inherent in each of the sensors. These noise measures are then used for setting threshold values and defining the nominal range of the variation of sensor values. Typically, the sensor data is made up of steady-state and transient regions through several flight modes. The steady-state regions correspond to those with settled sensor values after a certain load is applied or removed. The statistical characteristics of the sensor data, besides its mean, are different in each mode. To determine an appropriate way to preprocess the raw data, and to set expectations for sensor validation and fault detection so that the results can be assessed realistically, it is desirable to analyze the noise characteristics of the original sensor readings. Examples of the types of calculations that can be used to analyze the noise characteristics include: the mean, standard deviation, Z-score (which is a measure of the deviation from the mean normalized by the standard deviation), signal-to-noise ratio (which can be computed as the mean divided by the standard deviation), and percent deviation (which can be computed as the Z-score divided by the signal-to-noise ratio).
These statistical features can be calculated for load and no-load state regions for all modes of operation. For transient modes, a sliding window can be used to calculate the moving mean and other features. As an example, a sliding window width of 100 time frames can be used for data sampled at 100 Hz, 10 frames for data sampled at 10 Hz, and 5 frames for data sampled at 1 Hz.
These statistical features can then be used to determine anomaly detection thresholds. As one example implementation, once the statistical measures are computed for each time instant, one representative number is obtained for each sensor through the following procedure. First, the mode (transient or steady-state) that shows the larger overall standard deviation is selected; this is the transient mode in most cases. Second, the standard deviations and percent deviations for that mode are ranked for each sensor. Third, the values at the 10th percentile are selected as the representative number. This number is then a conservative (smaller) estimate of the nominal variation drawn from a liberal (higher) variation region, and is a good candidate for an anomaly detection threshold.
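As a rough illustration of that procedure for a single sensor channel (the window width follows the 100 Hz example above; the data and the per-window aggregation of the Z-score are assumptions), the representative threshold candidate could be computed as:

    import numpy as np

    def noise_statistics(signal, window=100):
        """Sliding-window noise measures for one sensor channel."""
        stds, pct_devs = [], []
        for start in range(0, len(signal) - window, window):
            w = signal[start:start + window]
            mean, std = w.mean(), w.std()
            z = np.abs(w - mean) / std                # Z-score of each sample
            snr = mean / std                          # signal-to-noise ratio
            pct_devs.append(np.mean(z) / abs(snr))    # percent deviation (window average)
            stds.append(std)
        return np.array(stds), np.array(pct_devs)

    def representative_threshold(signal, window=100):
        """Pick the 10th-percentile standard deviation as a conservative
        estimate of nominal variation, a candidate anomaly threshold."""
        stds, _ = noise_statistics(signal, window)
        return np.percentile(stds, 10)

    # Hypothetical transient-mode sensor record sampled at 100 Hz.
    t = np.linspace(0, 60, 6000)
    signal = 500 + 20 * np.sin(0.2 * t) + 2.0 * np.random.randn(t.size)
    print(representative_threshold(signal))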
The sensor error compensation system and method can be implemented in a wide variety of platforms. Turning now to FIG. 6, an exemplary computer system 50 is illustrated. Computer system 50 illustrates the general features of a computer system that can be used to implement the invention. Of course, these features are merely exemplary, and it should be understood that the invention can be implemented using different types of hardware that can include more or different features. It should be noted that the computer system can be implemented in many different environments, such as onboard an aircraft to provide onboard diagnostics, or on the ground to provide remote diagnostics. The exemplary computer system 50 includes a processor 110, an interface 130, a storage device 190, a bus 170 and a memory 180. In accordance with the preferred embodiments of the invention, the memory 180 includes a sensor error compensation program.
The processor 110 performs the computation and control functions of the system 50. The processor 110 may comprise any type of processor, including single integrated circuits such as a microprocessor, or may comprise any suitable number of integrated circuit devices and/or circuit boards working in cooperation to accomplish the functions of a processing unit. In addition, processor 110 may comprise multiple processors implemented on separate systems. In addition, the processor 110 may be part of an overall vehicle control, navigation, avionics, communication or diagnostic system. During operation, the processor 110 executes the programs contained within memory 180 and, as such, controls the general operation of the computer system 50.
Memory 180 can be any type of suitable memory. This would include the various types of dynamic random access memory (DRAM) such as SDRAM, the various types of static RAM (SRAM), and the various types of non-volatile memory (PROM, EPROM, and flash). It should be understood that memory 180 may be a single type of memory component, or it may be composed of many different types of memory components. In addition, the memory 180 and the processor 110 may be distributed across several different computers that collectively comprise system 50. For example, a portion of memory 180 may reside on the vehicle system computer, and another portion may reside on a ground based diagnostic computer.
The bus 170 serves to transmit programs, data, status and other information or signals between the various components of system 50. The bus 170 can be any suitable physical or logical means of connecting computer systems and components. This includes, but is not limited to, direct hard-wired connections, fiber optics, infrared and wireless bus technologies.
The interface 130 allows communication to the system 50, and can be implemented using any suitable method and apparatus. It can include network interfaces to communicate with other systems, terminal interfaces to communicate with technicians, and storage interfaces to connect to storage apparatuses such as storage device 190. Storage device 190 can be any suitable type of storage apparatus, including direct access storage devices such as hard disk drives, flash systems, floppy disk drives and optical disk drives. As shown in FIG. 6, storage device 190 can comprise a disc drive device that uses discs 195 to store data.
In accordance with the preferred embodiments of the invention, the computer system 50 includes the sensor error compensation program. Specifically, during operation, the sensor error compensation program is stored in memory 180 and executed by processor 110. When being executed by the processor 110, the sensor error compensation program validates sensor outputs and provides replacement sensor values.
It should be understood that while the present invention is described here in the context of a fully functioning computer system, those skilled in the art will recognize that the mechanisms of the present invention are capable of being distributed as a program product in a variety of forms, and that the present invention applies equally regardless of the particular type of signal bearing media used to carry out the distribution. Examples of signal bearing media include: recordable media such as floppy disks, hard drives, memory cards and optical disks (e.g., disk 195), and transmission media such as digital and analog communication links, including wireless communication links.
The present invention thus provides a sensor error compensation system and method that provides improved sensor error detection and compensation. The sensor error compensation system includes an expected value generator and a sensor fault detector. The expected value generator receives sensor data from a plurality of sensors in a turbine engine. From the sensor data, the expected value generator generates an expected sensor value for a sensor under test. The expected sensor value is passed to the sensor fault detector. The sensor fault detector compares the expected sensor value to a received sensor value to determine if a sensor error has occurred in the sensor under test. If an error has occurred, the error can be compensated for by generating a replacement sensor value to substitute for the erroneous received sensor value.
The embodiments and examples set forth herein were presented in order to best explain the present invention and its particular application and to thereby enable those skilled in the art to make and use the invention. However, those skilled in the art will recognize that the foregoing description and examples have been presented for the purposes of illustration and example only. The description as set forth is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching without departing from the spirit of the forthcoming claims.

Claims (40)

What is claimed is:
1. A sensor error compensation system for compensating sensor errors in a turbine engine, the sensor error compensation system comprising:
an expected value generator, the expected value generator receiving sensor data from a plurality of turbine engine sensors and generating an expected sensor value for a sensor under test; and
a sensor fault detector, the sensor fault detector comparing the sensor expected value to a received sensor value for the sensor under test to determine if a sensor error has occurred in the sensor under test.
2. The system of claim 1 wherein the sensor error compensation system further generates a replacement sensor value to substitute for the received sensor value.
3. The system of claim 2 wherein the replacement sensor value is generated from the expected sensor value.
4. The system of claim 3 wherein the replacement sensor value is generated from the expected sensor value by the expected value generator generating a new expected sensor value based on the expected sensor value as an input.
5. The system of claim 1 wherein the expected value generator comprises an auto-associative neural network, and wherein the sensor under test comprises one of the plurality of turbine engine sensors.
6. The system of claim 1 wherein the expected value generator comprises hetero-associative neural network, and wherein the sensor under test comprises an additional turbine engine sensor.
7. The system of claim 1 wherein the expected value generator generates a new expected value using the expected value as an input to isolate the sensor under test from the plurality of turbine engine sensors.
8. The system of claim 1 wherein the expected value generator comprises an auto-associative neural network, and wherein the auto-associative neural network includes a mapping layer, a bottleneck layer and a demapping layer.
9. The system of claim 8 wherein the auto-associative neural network is first trained for association using nominal sensor values, the demapping layer is then fixed and the neural network is trained with faulty sensor values.
10. The system of claim 1 wherein the expected value generator includes a model based on groupings of the plurality of turbine engine sensors into related groups.
11. The system of claim 10 wherein the groupings of the plurality of turbine engine sensors is based upon a time dependent correlation.
12. A method of compensating for sensor errors in a turbine engine, the method comprising the steps of:
a) receiving a plurality of sensor values from a plurality of sensors;
b) generating an expected sensor value from the plurality of sensor values;
c) comparing the expected sensor value to a received sensor value from a sensor under test; and
d) replacing the received sensor value with a replacement sensor value if a sensor error in the sensor under test has occurred.
13. The method of claim 12 wherein the step of generating an expected sensor value from a plurality of sensor values comprises generating the expected sensor value with an auto-associative neural network, and wherein the sensor under test comprises one of the plurality of turbine engine sensors.
14. The method of claim 13 wherein the step of generating an expected sensor value from a plurality of sensor values comprises inputting the plurality of sensor values into the auto-associative neural network and processing the plurality of sensor values through a mapping layer, bottleneck layer and demapping layer in the auto-associative neural network.
15. The method of claim 12 wherein the step of generating an expected sensor value from a plurality of sensor values comprises generating the expected sensor value with a hetero-associative neural network.
16. The method of claim 12 further comprising the step of isolating the sensor under test from the plurality of turbine engine sensors.
17. The method of claim 16 wherein the step of isolating the sensor under test from the plurality of turbine engine sensors comprises generating a new expected value using the expected value as an input.
18. The method of claim 12 wherein the step of comparing the expected sensor value to a received sensor value from a sensor under test comprises determining a residual difference between the expected sensor value and the received sensor value.
19. The method of claim 12 wherein the step of replacing the received sensor value with a replacement sensor value comprises the step of generating the replacement sensor value from the expected sensor value.
20. The method of claim 19 wherein the step of generating the replacement sensor value from the expected sensor value comprises generating a new expected sensor value using the expected sensor value as an input.
21. An apparatus comprising:
a) a processor;
b) a memory coupled to the processor;
c) a sensor error compensation program residing in the memory and being executed by the processor, the sensor error compensation program including:
an expected value generator, the expected value generator receiving sensor data from a plurality of turbine engine sensors and generating an expected sensor value for a sensor under test; and
a sensor fault detector, the sensor fault detector comparing the sensor expected value to a received sensor value for the sensor under test to determine if a sensor error has occurred in the sensor under test.
22. The apparatus of claim 20 wherein the sensor error compensation program further generates a replacement sensor value to substitute for the received sensor value.
23. The apparatus of claim 22 wherein the replacement sensor value is generated from the expected sensor value.
24. The apparatus of claim 23 wherein the replacement sensor value is generated from the expected sensor value by the expected value generator generating a new expected sensor value based on the expected sensor value as an input.
25. The apparatus of claim 20 wherein the expected value generator comprises an auto-associative neural network, and wherein the sensor under test comprises one of the plurality of turbine engine sensors.
26. The apparatus of claim 20 wherein the expected value generator comprises hetero-associative neural network, and wherein the sensor under test comprises an additional turbine engine sensor.
27. The apparatus of claim 20 wherein the expected value generator generates a new expected value using the expected value as an input to isolate the sensor under test from the plurality of turbine engine sensors.
28. The apparatus of claim 20 wherein the expected value generator comprises an auto-associative neural network, and wherein the auto-associative neural network includes a mapping layer, a bottleneck layer and a demapping layer.
29. The apparatus of claim 28 wherein the auto-associative neural network is first trained for association using nominal sensor values, the demapping layer is then fixed and the neural network is trained with faulty sensor values.
30. A program product comprising:
a) a sensor error compensation program, the sensor error compensation program including:
an expected value generator, the expected value generator receiving sensor data from a plurality of turbine engine sensors and generating an expected sensor value for a sensor under test; and
a sensor fault detector, the sensor fault detector comparing the sensor expected value to a received sensor value for the sensor under test to determine if a sensor error has occurred in the sensor under test; and
b) signal bearing media bearing said program.
31. The program product of claim 30 wherein the signal bearing media comprises recordable media.
32. The program product of claim 30 wherein the signal bearing media comprises transmission media.
33. The program product of claim 30 wherein the sensor error compensation program further generates a replacement sensor value to substitute for the received sensor value.
34. The program product of claim 33 wherein the replacement sensor value is generated from the expected sensor value.
35. The program product of claim 34 wherein the replacement sensor value is generated from the expected sensor value by the expected value generator generating a new expected sensor value based on the expected sensor value as an input.
36. The program product of claim 30 wherein the expected value generator comprises an auto-associative neural network, and wherein the sensor under test comprises one of the plurality of turbine engine sensors.
37. The program product of claim 30 wherein the expected value generator comprises hetero-associative neural network, and wherein the sensor under test comprises an additional turbine engine sensor.
38. The program product of claim 30 wherein the expected value generator generates a new expected value using the expected value as an input to isolate the sensor under test from the plurality of turbine engine sensors.
39. The program product of claim 30 wherein the expected value generator comprises an auto-associative neural network, and wherein the auto-associative neural network includes a mapping layer, a bottleneck layer and a demapping layer.
40. The program product of claim 39 wherein the auto-associative neural network is first trained for association using nominal sensor values, the demapping layer is then fixed and the neural network is trained with faulty sensor values.
US10/655,674 2003-09-05 2003-09-05 Sensor error detection and compensation system and method Expired - Lifetime US6804600B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/655,674 US6804600B1 (en) 2003-09-05 2003-09-05 Sensor error detection and compensation system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/655,674 US6804600B1 (en) 2003-09-05 2003-09-05 Sensor error detection and compensation system and method

Publications (1)

Publication Number Publication Date
US6804600B1 true US6804600B1 (en) 2004-10-12

Family

ID=33098463

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/655,674 Expired - Lifetime US6804600B1 (en) 2003-09-05 2003-09-05 Sensor error detection and compensation system and method

Country Status (1)

Country Link
US (1) US6804600B1 (en)

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3902315A (en) * 1974-06-12 1975-09-02 United Aircraft Corp Starting fuel control system for gas turbine engines
SU569873A1 (en) * 1974-12-09 1977-08-25 Предприятие П/Я А-1902 Device for measuring temperature of rotor blades of gas turbine engine
US4058975A (en) 1975-12-08 1977-11-22 General Electric Company Gas turbine temperature sensor validation apparatus and method
US4161101A (en) 1976-09-09 1979-07-17 General Electric Company Control system for and method of zero error automatic calibration of gas turbine temperature control parameters
US4212161A (en) 1978-05-01 1980-07-15 United Technologies Corporation Simulated parameter control for gas turbine engine
US4228650A (en) 1978-05-01 1980-10-21 United Technologies Corporation Simulated parameter control for gas turbine engine
US5570300A (en) 1992-04-22 1996-10-29 The Foxboro Company Self-validating sensors
US5497751A (en) * 1994-03-04 1996-03-12 Toyota Jidosha Kabushiki Kaisha Safety control apparatus for reciprocating engine
US5680409A (en) 1995-08-11 1997-10-21 Fisher-Rosemount Systems, Inc. Method and apparatus for detecting and identifying faulty sensors in a process
JPH09257034A (en) * 1996-03-22 1997-09-30 Ebara Corp Fluid machine having magnetic bearing and hydrostatic bearing
US6067032A (en) * 1997-12-23 2000-05-23 United Technologies Corporation Method of detecting stalls in a gas turbine engine
US6098011A (en) * 1998-05-18 2000-08-01 Alliedsignal, Inc. Efficient fuzzy logic fault accommodation algorithm
US6347289B1 (en) 1998-07-28 2002-02-12 United Technologies Corporation Method and apparatus for determining an in-range failure of a speed sensor
US6393355B1 (en) 1999-10-05 2002-05-21 Honda Giken Kogyo Kabushiki Kaisha Gas turbine aeroengine control system
US6298718B1 (en) 2000-03-08 2001-10-09 Cummins Engine Company, Inc. Turbocharger compressor diagnostic system
US20040030417A1 (en) * 2000-12-06 2004-02-12 Gribble Jeremy John Tracking systems for detecting sensor errors

Cited By (110)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050021212A1 (en) * 2003-07-24 2005-01-27 Gayme Dennice F. Fault detection system and method using augmented data and fuzzy logic
US7734400B2 (en) * 2003-07-24 2010-06-08 Honeywell International Inc. Fault detection system and method using augmented data and fuzzy logic
US20050160323A1 (en) * 2003-12-26 2005-07-21 Yi-Chang Wu Error-examining method for monitor circuit
US7222266B2 (en) * 2003-12-26 2007-05-22 Wistron Corporation Error-examining method for monitor circuit
US7552005B2 (en) * 2004-03-16 2009-06-23 Honeywell International Inc. Method for fault diagnosis of a turbine engine
US20050209767A1 (en) * 2004-03-16 2005-09-22 Honeywell International Inc. Method for fault diagnosis of a turbine engine
US20050251364A1 (en) * 2004-05-06 2005-11-10 Pengju Kang Sensor fault diagnostics and prognostics using component model and time scale orthogonal expansions
WO2005111806A2 (en) 2004-05-06 2005-11-24 Carrier Corporation Sensor fault diagnostics and prognostics using component model and time scale orthogonal expansions
WO2005111806A3 (en) * 2004-05-06 2006-03-16 Carrier Corp Sensor fault diagnostics and prognostics using component model and time scale orthogonal expansions
US7200524B2 (en) * 2004-05-06 2007-04-03 Carrier Corporation Sensor fault diagnostics and prognostics using component model and time scale orthogonal expansions
FR2882141A1 (en) * 2005-02-14 2006-08-18 Airbus France Sas METHOD AND DEVICE FOR DETECTING IN THE GROUND THE OBSTRUCTION OF A PRESSURE SOCKET OF A STATIC PRESSURE SENSOR OF AN AIRCRAFT
US20080150763A1 (en) * 2005-02-14 2008-06-26 Airbus France Method and Device for Detecting, on the Ground, the Obstruction of a Pressure Tap of a Static Pressure Sensor of an Aircraft
US7675434B2 (en) 2005-02-14 2010-03-09 Airbus France Method and device for detecting, on the ground, the obstruction of a pressure tap of a static pressure sensor of an aircraft
WO2006087440A1 (en) * 2005-02-14 2006-08-24 Airbus France Method and device for detecting, on the ground, the obstruction of a pressure tap of a static pressure sensor of an aircraft
WO2006089330A1 (en) * 2005-02-24 2006-08-31 Arc Seibersdorf Research Gmbh Methods and arrangement for identifying the deviation from determined values
US7196884B2 (en) * 2005-03-02 2007-03-27 Schweitzer Engineering Laboratories, Inc. Apparatus and method for detecting the loss of a current transformer connection coupling a current differential relay to an element of a power system
US20060198065A1 (en) * 2005-03-02 2006-09-07 Schweitzer Engineering Laboratories, Inc. Apparatus and method for detecting the loss of a current transformer connection coupling a current differential relay to an element of a power system
US7565333B2 (en) 2005-04-08 2009-07-21 Caterpillar Inc. Control system and method
WO2006110248A1 (en) * 2005-04-08 2006-10-19 Caterpillar Inc. Control system and method
US10120374B2 (en) 2005-07-11 2018-11-06 Brooks Automation, Inc. Intelligent condition monitoring and fault diagnostic system for preventative maintenance
US10845793B2 (en) 2005-07-11 2020-11-24 Brooks Automation, Inc. Intelligent condition monitoring and fault diagnostic system for preventative maintenance
US8356207B2 (en) 2005-07-11 2013-01-15 Brooks Automation, Inc. Intelligent condition monitoring and fault diagnostic system for preventative maintenance
US11650581B2 (en) 2005-07-11 2023-05-16 Brooks Automation Us, Llc Intelligent condition monitoring and fault diagnostic system for preventative maintenance
US20110173496A1 (en) * 2005-07-11 2011-07-14 Brooks Automation, Inc. Intelligent condition monitoring and fault diagnostic system for preventative maintenance
US7882394B2 (en) 2005-07-11 2011-02-01 Brooks Automation, Inc. Intelligent condition-monitoring and fault diagnostic system for predictive maintenance
US9104650B2 (en) 2005-07-11 2015-08-11 Brooks Automation, Inc. Intelligent condition monitoring and fault diagnostic system for preventative maintenance
US7345863B2 (en) 2005-07-14 2008-03-18 Schweitzer Engineering Laboratories, Inc. Apparatus and method for identifying a loss of a current transformer signal in a power system
US20070014062A1 (en) * 2005-07-14 2007-01-18 Schweitzer Engineering Laboratories, Inc. Apparatus and method for identifying a loss of a current transformer signal in a power system
US20110205082A1 (en) * 2005-08-12 2011-08-25 Cohen Alexander J Sensor emulation using mote networks
US8743698B2 (en) * 2005-08-12 2014-06-03 The Invention Science Fund I, Llc Sensor emulation using mote networks
US7830805B2 (en) * 2005-08-12 2010-11-09 The Invention Science Fund I, Llc Sensor emulation using mote networks
US20070035410A1 (en) * 2005-08-12 2007-02-15 Cohen Alexander J Sensor emulation using mote networks
US7840287B2 (en) 2006-04-13 2010-11-23 Fisher-Rosemount Systems, Inc. Robust process model identification in model based control techniques
GB2437099B (en) * 2006-04-13 2011-04-06 Fisher Rosemount Systems Inc Robust process model identification in model based control techniques
GB2437099A (en) * 2006-04-13 2007-10-17 Fisher Rosemount Systems Inc Adding noise to data for model generation
DE102007017039B4 (en) 2006-04-13 2022-08-11 Fisher-Rosemount Systems, Inc. Robust process model identification with model-based control techniques
US20070244575A1 (en) * 2006-04-13 2007-10-18 Fisher-Rosemount Systems, Inc. Robust process model identification in model based control techniques
US20080082470A1 (en) * 2006-09-29 2008-04-03 Ehsan Sobhani Tehrani Infrastructure health monitoring and analysis
US7822697B2 (en) * 2006-09-29 2010-10-26 Globvision Inc. Method and apparatus for infrastructure health monitoring and analysis wherein anomalies are detected by comparing measured outputs to estimated/modeled outputs by using a delay
US7481100B2 (en) 2006-12-05 2009-01-27 General Electric Company Method and apparatus for sensor fault detection and compensation
US20080133149A1 (en) * 2006-12-05 2008-06-05 Robert Louis Ponziani Sensor fault detection and compensation
US7421351B2 (en) * 2006-12-21 2008-09-02 Honeywell International Inc. Monitoring and fault detection in dynamic systems
US20080154544A1 (en) * 2006-12-21 2008-06-26 Honeywell International Inc. Monitoring and fault detection in dynamic systems
US8285513B2 (en) 2007-02-27 2012-10-09 Exxonmobil Research And Engineering Company Method and system of using inferential measurements for abnormal event detection in continuous industrial processes
US20080281557A1 (en) * 2007-02-27 2008-11-13 Emigholz Kenneth F Method and system of using inferential measurements for abnormal event detection in continuous industrial processes
US20080228338A1 (en) * 2007-03-15 2008-09-18 Honeywell International, Inc. Automated engine data diagnostic analysis
US20080270003A1 (en) * 2007-04-24 2008-10-30 Honeywell International, Inc. Feedback control system and method that selectively utilizes observer estimates
US7630820B2 (en) 2007-04-24 2009-12-08 Honeywell International Inc. Feedback control system and method that selectively utilizes observer estimates
US20090164050A1 (en) * 2007-12-21 2009-06-25 Rosemount, Inc. Diagnostics for mass flow control
US7693606B2 (en) * 2007-12-21 2010-04-06 Rosemount Inc. Diagnostics for mass flow control
US8285514B2 (en) * 2008-03-21 2012-10-09 Rochester Institute Of Technology Sensor fault detection systems and methods thereof
US20090240467A1 (en) * 2008-03-21 2009-09-24 Rochester Institute Of Technology Sensor fault detection systems and methods thereof
US8352216B2 (en) * 2008-05-29 2013-01-08 General Electric Company System and method for advanced condition monitoring of an asset system
US20090299695A1 (en) * 2008-05-29 2009-12-03 General Electric Company System and method for advanced condition monitoring of an asset system
US7827008B2 (en) * 2008-10-21 2010-11-02 General Electric Company System including phase signal saving during anomaly and related method
US20100100350A1 (en) * 2008-10-21 2010-04-22 General Electric Company System including phase signal saving during anomaly and related method
US20100324799A1 (en) * 2009-06-18 2010-12-23 Ronald Stuart Davison Turbine engine speed and vibration sensing system
US9014944B2 (en) 2009-06-18 2015-04-21 United Technologies Corporation Turbine engine speed and vibration sensing system
US20140012481A1 (en) * 2010-07-30 2014-01-09 Pratt & Whitney Canada Corp. Aircraft engine control during icing of temperature probe
US9114885B2 (en) * 2010-07-30 2015-08-25 Pratt & Whitney Canada Corp. Aircraft engine control during icing of temperature probe
US20120296605A1 (en) * 2011-05-17 2012-11-22 International Business Machines Corporation Method, computer program, and system for performing interpolation on sensor data for high system availability
US20120296606A1 (en) * 2011-05-17 2012-11-22 International Business Machines Corporation Method, computer program, and system for performing interpolation on sensor data for high system availability
US9423526B2 (en) 2011-12-31 2016-08-23 Saudi Arabian Oil Company Methods for estimating missing real-time data for intelligent fields
US9429678B2 (en) 2011-12-31 2016-08-30 Saudi Arabian Oil Company Apparatus, computer readable media, and computer programs for estimating missing real-time data for intelligent fields
US9671524B2 (en) 2011-12-31 2017-06-06 Saudi Arabian Oil Company Real-time dynamic data validation methods for intelligent fields
WO2014039512A1 (en) * 2012-09-07 2014-03-13 Saudi Arabian Oil Company Methods, apparatus, computer readable media, and computer programs for estimating missing real-time data for intelligent fields
WO2014105232A2 (en) 2012-09-28 2014-07-03 United Technologies Corporation Model based engine inlet condition estimation
EP2904242A4 (en) * 2012-09-28 2016-11-23 United Technologies Corp Model based engine inlet condition estimation
US9464999B2 (en) 2013-11-04 2016-10-11 Honeywell International Inc. Detecting temperature sensor anomalies
US9500612B2 (en) 2013-11-04 2016-11-22 Honeywell International Inc. Detecting temperature sensor anomalies
US12073313B2 (en) 2014-01-07 2024-08-27 Stephen L. Thaler Electro-optical devices and methods for identifying and inducing topological states formed among interconnecting neural modules
US11727251B2 (en) * 2014-01-07 2023-08-15 Stephen L. Thaler Electro-optical devices and methods for identifying and inducing topological states formed among interconnecting neural modules
US20150379394A1 (en) * 2014-01-07 2015-12-31 Stephen L. Thaler Device and method for the autonomous bootstrapping of unified sentience
US10423875B2 (en) * 2014-01-07 2019-09-24 Stephen L. Thaler Electro-optical device and method for identifying and inducing topological states formed among interconnecting neural modules
US20190362225A1 (en) * 2014-01-07 2019-11-28 Stephen L. Thaler Electro-optical devices and methods for identifying and inducing topological states formed among interconnecting neural modules
DK179327B1 (en) * 2014-03-21 2018-05-07 Gen Electric System and method for controlling an electronic component of a wind turbine using contingent communication
US9759560B2 (en) 2014-06-19 2017-09-12 Rosemount Aerospace Inc. Fault isolating altitude splits
EP2957967A1 (en) * 2014-06-19 2015-12-23 Rosemount Aerospace Inc. Fault isolating altitude splits
US10895872B2 (en) 2016-02-09 2021-01-19 Siemens Aktiengesellschaft Detection of temperature sensor failure in turbine systems
US11379766B2 (en) * 2017-02-21 2022-07-05 International Business Machines Corporation Sensor deployment
CN108691678A (en) * 2017-04-05 2018-10-23 通用汽车环球科技运作有限责任公司 Detection and the method and system for alleviating sensor degradation
US20180291832A1 (en) * 2017-04-05 2018-10-11 GM Global Technology Operations LLC Method and system to detect and mitigate sensor degradation
US10526992B2 (en) * 2017-04-05 2020-01-07 GM Global Technology Operations LLC Method and system to detect and mitigate sensor degradation
CN108691678B (en) * 2017-04-05 2021-08-03 通用汽车环球科技运作有限责任公司 Method and system for detecting and mitigating sensor degradation
DE102018107746B4 (en) 2017-04-05 2023-07-06 GM Global Technology Operations LLC METHOD OF DETECTING AND MITIGATING SENSOR DEGRADATION
CN108691679A (en) * 2017-04-12 2018-10-23 通用汽车环球科技运作有限责任公司 Method and system for controlling the propulsion system degenerated with sensor or actuator
DE102018108115B4 (en) 2017-04-12 2023-08-03 GM Global Technology Operations LLC METHOD OF CONTROLLING DRIVE SYSTEMS WITH SENSOR OR ACTUATOR DEGRADATION
US20180298839A1 (en) * 2017-04-12 2018-10-18 GM Global Technology Operations LLC Method and system to control propulsion systems having sensor or actuator degradation
US10883436B2 (en) * 2017-04-12 2021-01-05 GM Global Technology Operations LLC Method and system to control propulsion systems having sensor or actuator degradation
CN108691679B (en) * 2017-04-12 2021-08-03 通用汽车环球科技运作有限责任公司 Method and system for controlling a propulsion system with sensor or actuator degradation
US10495680B2 (en) 2017-06-14 2019-12-03 Schweitzer Engineering Laboratories, Inc. Systems and methods for detecting current transformer ultrasaturation to enhance relay security and dependability
US11151449B2 (en) * 2018-01-24 2021-10-19 International Business Machines Corporation Adaptation of a trained neural network
US11067553B2 (en) * 2018-02-01 2021-07-20 Nova Fitness Co., Ltd. Method for determination and isolation of abnormal sub-sensors in a multi-core sensor
US20190272466A1 (en) * 2018-03-02 2019-09-05 University Of Southern California Expert-driven, technology-facilitated intervention system for improving interpersonal relationships
US11773721B2 (en) 2018-05-09 2023-10-03 Abb Schweiz Ag Turbine diagnostics
US11814964B2 (en) 2018-05-09 2023-11-14 Abb Schweiz Ag Valve position control
US11898449B2 (en) 2018-05-09 2024-02-13 Abb Schweiz Ag Turbine control system
WO2019217636A1 (en) * 2018-05-09 2019-11-14 Abb Schweiz Ag Turbine diagnostics
US20210261003A1 (en) * 2018-06-29 2021-08-26 Robert Bosch Gmbh Monitoring and Identifying Sensor Failure in an Electric Drive System
US11714388B1 (en) 2018-08-10 2023-08-01 Apple Inc. Conditional error models
US11491650B2 (en) 2018-12-19 2022-11-08 Abb Schweiz Ag Distributed inference multi-models for industrial applications
US11216742B2 (en) 2019-03-04 2022-01-04 Iocurrents, Inc. Data compression and communication using machine learning
US11468355B2 (en) 2019-03-04 2022-10-11 Iocurrents, Inc. Data compression and communication using machine learning
WO2020180424A1 (en) * 2019-03-04 2020-09-10 Iocurrents, Inc. Data compression and communication using machine learning
US11550291B2 (en) 2019-12-20 2023-01-10 Hexagon Technology Center Gmbh Advanced thermal compensation of mechanical processes
EP3839414A1 (en) * 2019-12-20 2021-06-23 Hexagon Technology Center GmbH Advanced thermal compensation of mechanical processes
CN112906855A (en) * 2020-12-30 2021-06-04 西北工业大学 Dynamic threshold variable cycle engine multiple fault diagnosis device
CN112818461A (en) * 2020-12-30 2021-05-18 西北工业大学 Variable-cycle engine multiple fault diagnosis device based on self-association neural network
CN112749789A (en) * 2020-12-30 2021-05-04 西北工业大学 Aero-engine multiple fault diagnosis device based on self-association neural network
CN115326400A (en) * 2022-10-13 2022-11-11 中国航发四川燃气涡轮研究院 Fault diagnosis method of aircraft engine surge detection system and electronic equipment

Similar Documents

Publication Title
US6804600B1 (en) Sensor error detection and compensation system and method
US7280941B2 (en) Method and apparatus for in-situ detection and isolation of aircraft engine faults
US6898554B2 (en) Fault detection in a physical system
US8744813B2 (en) Detection of anomalies in an aircraft engine
US7734400B2 (en) Fault detection system and method using augmented data and fuzzy logic
EP1416348B1 (en) Methodology for temporal fault event isolation and identification
US7580812B2 (en) Trending system and method using window filtering
US6868325B2 (en) Transient fault detection system and method using Hidden Markov Models
US7660774B2 (en) Nonlinear neural network fault detection system and method
JP2006105142A (en) Fault detection method for device and fault detection/separation system
US7254491B2 (en) Clustering system and method for blade erosion detection
US11807388B2 (en) Apparatus, method and computer program for monitoring an aircraft engine
Mathioudakis et al. Probabilistic neural networks for validation of on-board jet engine data
CN112801267A (en) Multiple fault diagnosis device for aircraft engine with dynamic threshold value
Alozie et al. An adaptive model-based framework for prognostics of gas path faults in aircraft gas turbine engines
Aggab et al. Remaining Useful Life prediction method using an observer and statistical inference estimation methods
Hare et al. System-level fault diagnosis with application to the environmental control system of an aircraft
Alozie et al. An Integrated Principal Component Analysis, Artificial Neural Network and Gas Path Analysis Approach for Multi-Component Fault Diagnostics of Gas Turbine Engines
Scordamaglia et al. A data-driven algorithm for detecting anomalies in underwater sensor-based wave height measurements
Zarate et al. Computation and monitoring of the deviations of gas turbine unmeasured parameters
Baïkeche et al. On parametric and nonparametric fault detection in linear closed-loop systems
Roemer Testing of a real-time health monitoring and diagnostic system for gas turbine engines
de Pater et al. Constructing health indicators for systems with few failure instances using unsupervised learning
KR102654326B1 (en) Fault diagnosis device and method of oil purifier
US20230259113A1 (en) Subsystem-level model-based diagnosis

Legal Events

Code Title Description
AS Assignment

Owner name: HONEYWELL INTERNATIONAL, INC., NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ULUYOL, ONDER;NWADIOGBU, EMMANUEL O;REEL/FRAME:014470/0827;SIGNING DATES FROM 20030827 TO 20030903

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12