US20190286506A1 - Topology-inspired neural network autoencoding for electronic system fault detection - Google Patents
- Publication number
- US20190286506A1 (application US16/245,734)
- Authority
- US
- United States
- Prior art keywords
- sensor data
- anomalies
- anomaly
- fault detection
- recited
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
- G06F11/0706—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
- G06F11/0709—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
- G06F11/0793—Remedial or corrective actions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G06K9/6267—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G06N5/045—Explanation of inference; Explainable artificial intelligence [XAI]; Interpretable artificial intelligence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
- H04L63/1425—Traffic logging, e.g. anomaly detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
- H04L67/125—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks involving control of end-device applications over a network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W12/00—Security arrangements; Authentication; Protecting privacy or anonymity
- H04W12/12—Detection or prevention of fraud
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/30—Services specially adapted for particular environments, situations or purposes
- H04W4/38—Services specially adapted for particular environments, situations or purposes for collecting sensor information
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S40/00—Systems for electrical power generation, transmission, distribution or end-user application management characterised by the use of communication or information technologies, or communication or information technology specific aspects supporting them
- Y04S40/18—Network protocols supporting networked applications, e.g. including control of end-device applications over a network
Definitions
- the present invention relates to fault detection in electronic systems and, more particularly, to topology-inspired neural network autoencoding for fault detection in electronic systems.
- a variety of electronic systems such as, e.g., store registers, retail store showcases, power plants, and heating, ventilation and air conditioning (HVAC) systems, among other electronically controlled systems, can monitor both physical and electronic states of the electronic system using a variety of sensor techniques. To determine when the electronic system experiences a failure or fault related to the physical and electronic states, analysis of sensor behavior can be used. Accordingly, a system can be equipped to conduct surveillance of the electronic system to analyze behaviors and diagnose faults and failures. Quickly and accurately discovering a failure can result in reduced downtime as well as reduced hazards, among other issues related to a fault, thus decreasing costs associated with faults and increasing safety.
- a method for fault detection in a sensor network.
- the method includes receiving sensor data from sensors in the sensor network with a communication device.
- the sensor data is analyzed with a fault detection model to determine if the sensor data is indicative of a fault, the fault detection model including: predicting the sensor data with an autoencoder by encoding the sensor data and decoding the encoded sensor data, autoregressively modelling the sensor data with an autoregressor, combining the modeled sensor data and the predicted sensor data with a combiner to produce reconstructed sensor data, and comparing the reconstructed sensor data to the sensor data with an anomaly evaluator to determine anomalies.
- An anomaly classification is produced by comparing the anomalies to historical anomalies with an anomaly classifier. Faults in the sensor network are automatically mitigated with a processing device based on the anomaly classification.
- a method for fault detection in a sensor network.
- the method includes receiving sensor data from sensors in the sensor network with a communication device.
- the sensor data is logged in an event log to form a time series of sensor data.
- the sensor data is analyzed with a fault detection model to determine if the sensor data is indicative of a fault, the fault detection model including: predicting the sensor data with an autoencoder by encoding the sensor data and decoding the encoded sensor data, autoregressively modelling the sensor data with an autoregressor, combining the modeled sensor data and the predicted sensor data with a combiner to produce reconstructed sensor data, comparing the reconstructed sensor data to the sensor data with an anomaly evaluator to determine anomalies, and ranking the anomalies according to a difference between the reconstructed sensor data and the sensor data.
- An anomaly classification is produced by comparing the anomalies to historical anomalies with an anomaly classifier. Faults in the sensor network are automatically mitigated with a processing device based on the anomaly classification.
- a system for fault detection in a sensor network with a fault detection system to detect faults.
- the system includes a communication device to receive sensor data from sensors in the sensor network.
- a fault detection model analyzes the sensor data to determine if the sensor data is indicative of a fault, the fault detection model including: an autoencoder that encodes the sensor data and decodes the encoded sensor data to predict the sensor data, an autoregressor that autoregressively models the sensor data, a combiner that combines the modeled sensor data and the predicted sensor data to produce reconstructed sensor data, and an anomaly evaluator that compares the reconstructed sensor data to the sensor data to determine anomalies.
- An anomaly classifier compares the anomalies to historical anomalies and produces an anomaly classification.
- a processing device automatically mitigates faults in the sensor network based on the anomaly classification.
- FIG. 1 is a generalized diagram of a neural network, in accordance with the present invention.
- FIG. 2 is a block/flow diagram illustrating an artificial neural network (ANN) architecture, in accordance with the present invention.
- FIG. 3 is a diagram illustrating a network monitored by a topology-inspired neural network for fault detection, in accordance with the present invention.
- FIG. 4 is a block/flow diagram illustrating a fault detection system with topology-inspired neural network autoencoding for fault detection, in accordance with the present invention.
- FIG. 5 is a block/flow diagram illustrating a fault detection model for a fault detection system with topology-inspired neural network autoencoding, in accordance with the present invention.
- FIG. 6 is a block/flow diagram of an anomaly classifier for classifying anomalies detected by a fault detection model, in accordance with the present invention.
- FIG. 7 is a flow diagram illustrating a system/method for topology-inspired neural network autoencoding for fault detection, in accordance with the present invention.
- systems and methods are provided for automatic fault detection with topology-inspired neural network autoencoding.
- a fault detection system is implemented in communication with a system or network.
- the system or network can include, for example, a power grid; however, the fault detection system can be implemented in any system that monitors physical systems using electronic sensors.
- the fault detection system facilitates real-time analysis of sensor data to determine if or when a fault in the system occurs, such as, e.g., the power grid.
- the fault detection system operates through the use of an autoencoder trained to recognize normal sensor data of the monitored system. Because such data is time-varying, highly multivariate and often asynchronous, the autoencoder includes an autoregressive model and long short-term memory. By combining an autoencoder to capture time-varying data relationships, and an autoregressive model to compensate for asynchronous data, the fault detection system can better operate in a real-world system to analyze sensor data in real-time. Thus, the fault detection system can more accurately and more efficiently recognize behavior that is outside of the normal operating behavior on which the fault detection system is trained.
- the fault detection system monitors the system behaviors to detect and recognize anomalous behaviors that may correspond to faults.
- the suspected faults are fingerprinted according to behavior and recorded.
- the suspected faults can then be compared to past confirmed faults according to, e.g., similarities in fingerprints. Where the fingerprints of the suspected faults match past faults, the suspected faults are verified as faults having a type and a method of response corresponding to the matched past fault.
- the fault detection system can then automatically perform fault mitigation according to the method of response.
- the fault detection system can, e.g., automatically notify an administrator via a display or speaker, shut down or reset a particular portion of the system, issue a general alert to users or customers, redistribute resources, or perform any other appropriate action.
- the faults can be identified and addressed more quickly, efficiently and accurately, with less need for human intervention. Because of the reduced human oversight, faults can be addressed more quickly and with reduced costs.
- Embodiments described herein may be entirely hardware, entirely software or including both hardware and software elements.
- the present invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
- Embodiments may include a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.
- a computer-usable or computer readable medium may include any apparatus that stores, communicates, propagates, or transports the program for use by or in connection with the instruction execution system, apparatus, or device.
- the medium can be magnetic, optical, electronic, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium.
- the medium may include a computer-readable storage medium such as a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk, etc.
- Each computer program may be tangibly stored in a machine-readable storage media or device (e.g., program memory or magnetic disk) readable by a general or special purpose programmable computer, for configuring and controlling operation of a computer when the storage media or device is read by the computer to perform the procedures described herein.
- the inventive system may also be considered to be embodied in a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
- a data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus.
- the memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code to reduce the number of times code is retrieved from bulk storage during execution.
- I/O devices including but not limited to keyboards, displays, pointing devices, etc. may be coupled to the system either directly or through intervening I/O controllers.
- Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks.
- Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
- An artificial neural network is an information processing system that is inspired by biological nervous systems, such as the brain.
- the key element of ANNs is the structure of the information processing system, which includes a large number of highly interconnected processing elements (called “neurons”) working in parallel to solve specific problems.
- ANNs are furthermore trained in-use, with learning that involves adjustments to weights that exist between the neurons.
- An ANN is configured for a specific application, such as pattern recognition or data classification, through such a learning process.
- ANNs demonstrate an ability to derive meaning from complicated or imprecise data and can be used to extract patterns and detect trends that are too complex to be detected by humans or other computer-based systems.
- the structure of a neural network generally has input neurons 102 that provide information to one or more "hidden" neurons 104. Connections 108 between the input neurons 102 and hidden neurons 104 are weighted, and these weighted inputs are then processed by the hidden neurons 104 according to some function in the hidden neurons 104, with weighted connections 108 between the layers. There may be any number of layers of hidden neurons 104, as well as neurons that perform different functions. Different neural network structures also exist, such as a convolutional neural network, a maxout network, etc. Finally, a set of output neurons 106 accepts and processes weighted input from the last set of hidden neurons 104.
- the output is compared to a desired output available from training data.
- the error relative to the training data is then processed in “feed-back” computation, where the hidden neurons 104 and input neurons 102 receive information regarding the error propagating backward from the output neurons 106 .
- weight updates are performed, with the weighted connections 108 being updated to account for the received error.
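The feed-forward and feed-back passes described above can be sketched numerically. This is a generic illustration only; the two-layer network, sigmoid activation, learning rate, and random data below are illustrative assumptions, not the patent's architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))            # 4 samples for 3 input neurons
y = rng.normal(size=(4, 2))            # desired outputs from training data
W1 = 0.1 * rng.normal(size=(3, 5))     # weighted connections: input -> hidden
W2 = 0.1 * rng.normal(size=(5, 2))     # weighted connections: hidden -> output
lr = 0.2

def loss():
    return float(np.mean((sigmoid(x @ W1) @ W2 - y) ** 2))

loss_before = loss()
for _ in range(500):
    h = sigmoid(x @ W1)                # feed-forward through hidden neurons
    out = h @ W2                       # output neurons
    err = out - y                      # error relative to training data
    # feed-back: propagate the error backward, then update the weights
    grad_W2 = h.T @ err / len(x)
    grad_W1 = x.T @ ((err @ W2.T) * h * (1.0 - h)) / len(x)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
loss_after = loss()
```

After the loop, the training error should have decreased, reflecting the weight updates accounting for the received error.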
- an artificial neural network (ANN) architecture 200 is shown. It should be understood that the present architecture is purely exemplary and that other architectures or types of neural network may be used instead.
- the ANN embodiment described herein is included with the intent of illustrating general principles of neural network computation at a high level of generality and should not be construed as limiting in any way.
- layers of neurons described below and the weights connecting them are described in a general manner and can be replaced by any type of neural network layers with any appropriate degree or type of interconnectivity.
- layers can include convolutional layers, pooling layers, fully connected layers, softmax layers, or any other appropriate type of neural network layer.
- layers can be added or removed as needed and the weights can be omitted for more complicated forms of interconnection.
- a set of input neurons 202 each provide an input signal in parallel to a respective row of weights 204 .
- the weights 204 each have a respective settable value, such that a weight output passes from the weight 204 to a respective hidden neuron 206 to represent the weighted input to the hidden neuron 206 .
- the weights 204 may simply be represented as coefficient values that are multiplied against the relevant signals. The signals from each weight add column-wise and flow to a hidden neuron 206.
- any number of these stages may be implemented, by interposing additional layers of arrays and hidden neurons 206 . It should also be noted that some neurons may be constant neurons 209 , which provide a constant output to the array. The constant neurons 209 can be present among the input neurons 202 and/or hidden neurons 206 and are used during feed-forward operation.
- the output neurons 208 provide a signal back across the array of weights 204 .
- the output layer compares the generated network response to training data and computes an error.
- the error signal can be made proportional to the error value.
- a row of weights 204 receives a signal from a respective output neuron 208 in parallel and produces an output which adds column-wise to provide an input to hidden neurons 206 .
- the hidden neurons 206 combine the weighted feedback signal with a derivative of their feed-forward calculation and store an error value before outputting a feedback signal to their respective column of weights 204. This back propagation travels through the entire network 200 until all hidden neurons 206 and the input neurons 202 have stored an error value.
- Referring now to FIG. 3, a system/method for a network monitored by a topology-inspired neural network for fault detection is illustratively depicted in accordance with an embodiment of the present invention.
- a fault detection system 300 is in communication with a network 301 , such as, e.g., a cloud network, the Internet, an intranet, or other network. Via the network 301 , the fault detection system 300 can monitor systems such as, e.g., a building 304 including a heating, ventilation and air conditioning (HVAC) system, a power grid 306 and other sensor network 305 .
- the sensor network 305 can be any network of sensors 308 a and 308 b including, e.g., another HVAC system of a building or a power grid.
- the fault detection system 300 retrieves a data stream from each of the monitored systems across the network 301 .
- the sensor network 305 can provide sensor data from each of the sensors 308 a and 308 b across the network 301 to the fault detection system 300 .
- the fault detection system 300 can log the sensor data, e.g., in a memory or storage device, or in a database 307 via the network 301 .
- the fault detection system 300 can maintain a record of sensor data from the sensor network 305 .
- the fault detection system 300 can also maintain a record of sensor data from the building 304 utilities, such as, e.g., HVAC, as well as power grid 306 behavior.
- the fault detection system 300 analyzes the sensor data to determine the presence of faults or other anomalies in the sensor network 305 .
- the fault detection system 300 can, e.g., learn from the record of sensor data in the database 307 to recognize normal operating behavior of sensors and other devices in the sensor network 305 .
- the fault detection system 300 can determine a suspected fault where received sensor data does not match the learned normal behavior.
- other methods of fault detection are contemplated.
- the fault detection system 300 classifies suspected faults according to types of faults.
- the type of a fault can relate to, e.g., effective responses for similar faults, particular variations from normal behavior, or other form of classification.
- the fault detection system 300 determines a fault classification by, e.g., storing the record of sensor data as a fingerprint according to a format of the record and comparing the fingerprint against past detected faults.
- the fault detection system 300 can, alternatively, classify faults according to pre-defined classifications of behavior variations, such as, e.g., with a human annotated training set.
- the fault can be associated with one or more actions to address the fault of the corresponding category.
- the fault detection system 300 can automatically address the fault according to the associated actions, such as, e.g., notifying an administrator via the display and/or speaker of a computer 302 or mobile device 303 in communication with the network 301 .
- the fault detection system 300 can address faults by, e.g., shutting down systems and equipment that are malfunctioning, as indicated by the fault, shutting down or resetting devices to prevent hazardous situations caused by or associated with the fault, dispatching maintenance teams, issuing public alerts via the internet, email, short message service (SMS) or other communication medium, or any other appropriate response to the fault.
- the suspected fault can then be communicated to the database 307 to be recorded as a fault.
- the suspected fault can be recorded according to a new classification. Otherwise, the suspected fault is recorded according to the classification determined as described above.
- the suspected fault is added to the historical record of faults for improved training of the fault detection system for identifying anomalies and associated faults.
- Received sensor data can be stored in the event log 402 , which can be, e.g., a storage or memory device such as, e.g., a hard drive, a solid state drive, flash storage, a cloud database, random access memory (RAM), or other storage device.
- the event log 402 can maintain a record over time of sensor data.
- the event log 402 can maintain the record for a given period, such as, e.g., for one day, multiple days, a week or a month, or for another desirable period, before deleting the data to make room for new sensor data.
- the event log 402 can maintain a rolling log of sensor data where the oldest data is deleted upon receipt of new data or in anticipation of new data.
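A rolling log of this kind can be sketched with a bounded buffer. The capacity and the record format below are illustrative assumptions, not details from the patent.

```python
from collections import deque

class EventLog:
    """Rolling log: the oldest sensor records are dropped as new ones arrive."""

    def __init__(self, capacity):
        self._records = deque(maxlen=capacity)

    def append(self, timestamp, sensor_id, value):
        self._records.append((timestamp, sensor_id, value))

    def records(self):
        return list(self._records)

log = EventLog(capacity=3)
for t in range(5):
    log.append(t, "sensor-a", 20.0 + t)
# only the 3 newest records remain; records at t=0 and t=1 were evicted
```

`deque(maxlen=...)` handles the eviction automatically, which keeps the log at a fixed memory footprint without explicit deletion logic.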
- the fault detection system 400 can analyze sensor data to determine anomalous behavior.
- the fault detection system 400 includes a fault detection model 420 that can determine behavior that does not match normal operating behavior.
- sensor data recorded in the event log 402 may include a spike in power draw on the power grid that is above normal for a corresponding time of day.
- the temperature detected on a floor of a building may be below normal, thus requiring increased heat supplied by the HVAC system to that floor.
- the fault detection model 420 can include an autoencoder 430 trained with normal sensor data.
- the autoencoder 430 can encode the set of data into a feature vector and decode the feature vector according to learned parameters. To do so, the autoencoder 430 can include, e.g., a neural network, such as, e.g., a long short-term memory network, a recurrent neural network, a convolutional neural network, or another machine learning technique for encoding and decoding the sensor data.
- the autoencoder 430 reconstructs the data according to normal expected behaviors in sensor data. Accordingly, the reconstructed data and the original set of data can be compared to determine a difference. Where the difference is high, for example, above a threshold error level, the set of data is deemed anomalous, and thus corresponding to a suspected fault.
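As a minimal sketch of this encode/decode/compare step, a linear autoencoder can be fit to normal data with PCA. The patent's LSTM-based autoencoder is replaced here by a truncated SVD, and the data dimensions and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# "normal" training data: sensor readings lying near a 2-D subspace of R^6
latent = rng.normal(size=(200, 2))
basis = rng.normal(size=(2, 6))
normal = latent @ basis + 0.01 * rng.normal(size=(200, 6))

# encoder/decoder learned from normal data (PCA as a linear autoencoder)
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]                        # encoding matrix

def reconstruct(x):
    code = (x - mean) @ components.T       # encode into a feature vector
    return code @ components + mean        # decode back into sensor space

def reconstruction_error(x):
    return float(np.sum((x - reconstruct(x)) ** 2))

def is_anomalous(x, threshold=1.0):        # illustrative threshold
    return reconstruction_error(x) > threshold

normal_sample = latent[:1] @ basis                       # matches training behavior
faulty_sample = normal_sample + np.array([[0.0, 5.0, 0.0, 0.0, 0.0, 0.0]])
```

Data resembling the training behavior reconstructs with a small error, while the perturbed sample falls outside the learned subspace and reconstructs poorly, flagging a suspected fault.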
- the sensor data can be time varying and include asynchronous properties.
- the fault detection model 420 can include an autoregressor 440 .
- the autoregressor 440 analyzes the sensor data to determine local linear correlations of data points in the time varying sensor data.
- the seasonal patterns determined from the autoencoder 430 can be augmented with the local linear correlations from the autoregressor 440 to reliably and efficiently reconstruct the sensor data, even with asynchronous time-varying sensor data. As a result, a deviation from normal behavior can be more accurately and efficiently assessed.
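The local linear correlations can be sketched with an ordinary autoregressive fit. The AR order, the least-squares fit, and the fixed blending weight below are illustrative assumptions, not the patent's trained combiner.

```python
import numpy as np

rng = np.random.default_rng(2)
# time-varying sensor signal: a seasonal pattern plus measurement noise
t = np.arange(300)
series = np.sin(2 * np.pi * t / 24) + 0.05 * rng.normal(size=t.size)

# fit AR(p) coefficients by least squares on lagged windows
p = 4
X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
y = series[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def ar_predict(window):
    """One-step autoregressive prediction from the last p observations."""
    return float(np.dot(window[-p:], coef))

def combine(ar_value, autoencoder_value, alpha=0.5):
    """Combiner: blend the autoregressive and autoencoder reconstructions."""
    return alpha * ar_value + (1.0 - alpha) * autoencoder_value

pred = ar_predict(series[:100])
truth = series[100]
```

The AR model captures the short-range linear structure of the signal, so its one-step prediction lands close to the true next value even in the presence of noise.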
- Both the autoencoder 430 and the autoregressor 440 can be trained to reproduce data according to normal sensor data patterns.
- the fault detection model 420 can include an optimization function for training the autoencoder 430 and the autoregressor 440 with normal sensor data, such as, e.g., a training set of curated normal sensor data.
- the fault detection model 420 can be trained according to, e.g., the optimization function of equation 1 below:
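The text of equation 1 is not reproduced above; a reconstruction-error objective of the following general form is consistent with the surrounding description, where the symbols (encoder f, decoder g, autoregressor h, combination weight alpha) are illustrative assumptions, not the patent's notation:

```latex
\min_{\theta} \; \mathcal{L}(\theta) = \sum_{t} \left\lVert x_t - \hat{x}_t \right\rVert_2^2,
\qquad
\hat{x}_t = \alpha \, h(x_{t-p}, \ldots, x_{t-1}) + (1 - \alpha) \, g\bigl(f(x_t)\bigr)
```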
- equation 1 determines a reconstruction error of the fault detection model 420. Therefore, the autoregressor 440 and the autoencoder 430 can each be trained via backpropagation of the reconstruction error when normal operating behavior is provided as sensor data. Thus, the fault detection model 420 can be efficiently trained to recognize normal operating behavior. A fault can then be determined according to a deviation from the normal operating behavior.
- the deviation from the normal behavior can be determined by an anomaly evaluator 406 .
- the anomaly evaluator 406 compares the reconstructed data from the fault detection model 420 with the original sensor data. Where the original sensor data and reconstructed sensor data deviate from each other by greater than a threshold amount, the data can be determined to indicate anomalous behavior, and thus a fault.
- the anomaly evaluator 406 can determine an anomaly score that quantifies the discrepancy between the sensor data and the reconstructed data. For example, an anomaly score can be determined according to equation 2 below:
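The text of equation 2 is not reproduced above; one common anomaly score consistent with this description is the squared reconstruction residual (the notation is an illustrative assumption):

```latex
a_t = \left\lVert x_t - \hat{x}_t \right\rVert_2^2
```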
- the anomaly evaluator 406 quantifies and scores reconstructed data. Where the score rises above a threshold level, the data is considered anomalous.
- the threshold can be user adjustable, learned according to an optimization function, or predetermined for each system or type of sensor.
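A minimal sketch of this scoring-and-thresholding step is shown below. A per-time-step Euclidean discrepancy is assumed for the anomaly score, since equation 2 itself is not reproduced in this excerpt, and the threshold value is illustrative:

```python
import numpy as np

def anomaly_scores(original, reconstructed):
    """Per-time-step discrepancy between sensor data and its reconstruction.

    A Euclidean norm across sensor channels is assumed here; equation 2
    may define the score differently.
    """
    return np.linalg.norm(original - reconstructed, axis=1)

def flag_anomalies(original, reconstructed, threshold):
    """Mark time steps whose score rises above the threshold as anomalous."""
    return anomaly_scores(original, reconstructed) > threshold

# The reconstruction tracks the data everywhere except time step 2.
original = np.array([[1.0, 2.0], [1.1, 2.1], [9.0, -5.0], [1.0, 2.0]])
reconstructed = np.array([[1.0, 2.0], [1.0, 2.0], [1.0, 2.0], [1.0, 2.0]])
flags = flag_anomalies(original, reconstructed, threshold=1.0)
```

Only the time step where the data departs sharply from the reconstruction is flagged.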
- the anomaly evaluator 406 can include, e.g., a software module stored in a memory device, such as, e.g., a hard drive, a solid state drive, a cache, a buffer, flash storage, random access memory, or other memory device, and executed by a processing device, such as the processing device 414 or other processing device.
- the fault detection system 400 can prioritize faults to determine an order of addressability.
- the fault detection system 400 can rank anomalous data to determine the most severe faults for various components providing sensor data.
- the ranking can be, e.g., based on score, ordered from greatest to least.
- a rank can be determined according to equation 3 below:
- r is the rank of the sensor data.
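The greatest-to-least ordering described above can be sketched as follows; equation 3 is not reproduced in this excerpt, so a plain sort on the anomaly score is assumed, and the sensor names are hypothetical:

```python
def rank_anomalies(scores):
    """Rank sensors by anomaly score, greatest to least (rank 1 = most severe).

    `scores` maps a sensor name to its anomaly score; a descending sort
    stands in for equation 3, which is not reproduced here.
    """
    ordered = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return {name: rank for rank, (name, _) in enumerate(ordered, start=1)}

ranks = rank_anomalies({"pump": 0.9, "fan": 0.2, "valve": 0.5})
```

The top-ranked entry then corresponds to the most severe fault to address first.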
- sensor data can be verified as faults.
- the anomaly can be classified by a type or similarity to historical faults stored in, e.g., the event log 402 or an external database, with an anomaly classifier 410 .
- the fault detection system 400 can utilize past responses to determine an appropriate response to the fault.
- the anomaly classifier 410 compares a fingerprint corresponding to the sensor data exhibiting anomalous behavior of a fault to fingerprints of the historical faults.
- the fingerprint can include top ranked anomalous behavior, according to equation 3 above, and corresponding time-varying data.
- the top ranked anomalous behavior can correspond to the most damaged or faulty part of the system.
- the ranking of anomalous behavior can be used to classify the fault as indicative of the origin of the fault.
- the anomaly classifier 410 can determine the appropriate response according to the historical responses to similar past faults.
- the response can include, e.g., shutting down or restarting malfunctioning hardware, alerting an administrator or other users by an audible or visual notification using, e.g., a display 412 or network attached devices via the communication device 404 , or any other appropriate response.
- the components of the fault detection system 400 can include, e.g., a memory or storage to store software to perform the above described tasks. Additionally, each component can include a dedicated processing device, such as, e.g., a central processing unit (CPU), a graphical processing unit (GPU), resistive processing unit (RPU), field programmable gate array (FPGA), or other processing device. Alternatively, a processing device 414 can be in communication with one or more of the components to execute the component functions.
- Referring now to FIG. 5, a block/flow diagram illustrating a fault detection model for a fault detection system with topology-inspired neural network autoencoding is illustratively depicted in accordance with an embodiment of the present invention.
- a fault detection model 420 can include an autoencoder 430 with an encoder 510 and decoder 520 , and an autoregressor 440 .
- Each of the autoencoder 430 and the autoregressor 440 can analyze multi-variate time varying sensor data 502 .
- the results from each of the autoencoder 430 and the autoregressor 440 can be combined in a combiner 504 to generate reproduced sensor data.
- the combiner 504 can be, e.g., a software module that retrieves the autoencoder 430 output and the autoregressor 440 output and combines the outputs via, e.g., vector addition, concatenation, or other combination scheme.
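The combiner step can be sketched as below. Vector addition and concatenation are the two schemes named above; which one a given deployment uses is not specified, so both are shown:

```python
import numpy as np

def combine(ae_output, ar_output, mode="add"):
    """Merge the autoencoder and autoregressor outputs into one reconstruction.

    Element-wise vector addition and concatenation are the two combination
    schemes named in the description; other schemes are also contemplated.
    """
    if mode == "add":
        return ae_output + ar_output                    # element-wise addition
    return np.concatenate([ae_output, ar_output])       # concatenation alternative

reconstructed = combine(np.array([1.0, 2.0]), np.array([0.5, -0.5]))
```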
- the sensor data 502 can include multi-variate data that is time-varying. Accordingly, the sensor data 502 can include, e.g., multiple data time-series 502 a-d. To evaluate the behavior of the system, such as, e.g., a building monitoring system including HVAC, a power grid, or other sensor network, the fault detection model 420 analyzes each time-series 502 a-d jointly. Thus, each of the time-series 502 a-d in the sensor data 502 is provided to the encoder 510.
- the encoder 510 encodes the sensor data 502 into a feature vector according to learned parameters. Therefore, the encoder 510 can include, e.g., a neural network, such as, e.g., a convolutional neural network (CNN), a recurrent neural network (RNN) or other machine learning technique. According to aspects of the present embodiment, to capture the dynamic, multi-variate nature of the sensor data 502, the encoder 510 includes an RNN. Therefore, the encoder 510 is constructed with one or more neurons such as, e.g., long short-term memory (LSTM) units or gated recurrent units, among others. For example, the encoder 510 can include a first layer of LSTM units 512 a-d and a second layer of LSTM units 514 a-d. However, more layers are contemplated.
- each time-series 502 a - d is communicated to a corresponding LSTM unit 512 a - d .
- the LSTM units 512 a-d output respective hidden states according to learned parameters, which are supplied to each LSTM unit 512 a-d of the first layer as well as passed on to the LSTM units 514 a-d of the second layer.
- the LSTM units 514 a - d similarly output hidden states according to learned parameters.
- the hidden states are used to form a feature vector that represents the encoded sensor data 502 .
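The encoding step can be sketched with a bare numpy LSTM cell, as below. This is a deliberate simplification: a single layer with one cell processing the whole multi-variate series, rather than the patent's two-layer arrangement of per-series units with shared hidden states, and all weight values are random placeholders:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_encode(series, Wx, Wh, b, d_hidden):
    """Run one LSTM layer over a (T, d_in) series; return the final hidden state.

    A minimal single-cell sketch of LSTM encoding, not the exact two-layer
    architecture described above.
    """
    h = np.zeros(d_hidden)
    c = np.zeros(d_hidden)
    for x_t in series:
        z = Wx @ x_t + Wh @ h + b                       # four gate pre-activations at once
        i, f, o, g = np.split(z, 4)                     # input, forget, output, candidate
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)    # cell-state update
        h = sigmoid(o) * np.tanh(c)                     # hidden-state update
    return h                                            # feature vector for the series

rng = np.random.default_rng(1)
T, d_in, d_hidden = 20, 4, 6                            # 4 sensor channels, hidden size 6
series = rng.normal(size=(T, d_in))
Wx = rng.normal(scale=0.1, size=(4 * d_hidden, d_in))
Wh = rng.normal(scale=0.1, size=(4 * d_hidden, d_hidden))
b = np.zeros(4 * d_hidden)
feature = lstm_encode(series, Wx, Wh, b, d_hidden)
```

The final hidden state plays the role of the feature vector that the decoder 520 would then expand back into reconstructed time-series.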
- the decoder 520 includes, e.g., a neural network.
- the decoder 520 includes an RNN with LSTM units 522 a-d, 524 a-d of corresponding number to the encoder 510 LSTM units 512 a-d and 514 a-d.
- the feature vector is provided to a first LSTM unit 522 a , which generates a hidden state from learned parameters. The hidden state is passed to each other LSTM unit 522 a - d , 524 a - d.
- the second layer of LSTM units 524 a - d generates vectors corresponding to each of the LSTM units 524 a - d .
- the decoder 520 generates vectors that correspond to each of the time series 502 a - d of the sensor data 502 .
- the vectors represent reconstructed sensor data according to the learned parameters of the autoencoder 430 .
- the learned parameters for each of the LSTM units 512 a - d , 514 a - d , 522 a - d and 524 a - d are jointly trained using training data including normal operating behavior of the monitored system.
- the autoencoder 430 is trained to reconstruct sensor data 502 according to normal operating behavior. Therefore, if the sensor data 502 includes anomalous operating behavior, the generated vectors from the decoder 520 will differ from the sensor data 502. However, if the sensor data 502 includes normal operating behavior, then the generated vectors and the sensor data 502 will match within a degree of acceptable error.
- the degree of acceptable error can include a threshold that is, e.g., learned, predetermined, or user selectable.
- the error between the generated vectors and the sensor data 502 can be evaluated according to an error function, such as, e.g., equation 1 described above.
- the autoencoder 430 may not accurately account for a lack of synchronicity between the time-series 502 a-d.
- the reconstructed sensor data can be augmented with the autoregressor 440 .
- the autoregressor 440 can determine a regression to model each time-series 502 a - d using the time-series 502 a - d as input.
- the autoregressor 440 can model the time-series 502 a-d using, e.g., equation 4 below:

Y_t = c + Σ_{i=1}^{p} φ_i D_{t−i} + ε_t

- where Y is reconstructed data according to autoregression
- c is a constant
- i indexes a data point within a time-series of the sensor data
- p is the total number of data points
- φ is a learned parameter
- D is a vector of sensor data
- ε is a white noise vector
- t is time.
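The learned parameters φ and constant c of an AR(p) model of the form Y_t = c + Σ φ_i D_{t−i} + ε_t can be fit in many ways; the sketch below uses ordinary least squares, which is a standard choice but an assumption here, since the description does not specify the fitting procedure:

```python
import numpy as np

def fit_ar(series, p):
    """Fit AR(p) coefficients [c, phi_1..phi_p] by ordinary least squares.

    Least squares is one standard way to learn the phi parameters; the
    description does not specify the fitting procedure.
    """
    # Each row of the design matrix holds the p previous values D_{t-1..t-p}.
    rows = [series[t - p:t][::-1] for t in range(p, len(series))]
    A = np.hstack([np.ones((len(rows), 1)), np.array(rows)])
    coef, *_ = np.linalg.lstsq(A, series[p:], rcond=None)
    return coef

# Synthetic series following Y_t = 0.5 + 0.8*Y_{t-1} + small noise.
rng = np.random.default_rng(2)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.5 + 0.8 * y[t - 1] + 0.05 * rng.normal()

coef = fit_ar(y, p=2)   # phi_1 should land near 0.8, phi_2 near 0
```

Predictions from the fitted model provide the locally linear reconstruction that augments the autoencoder output.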
- the modeled time-series from the autoregressor 440 can be used to augment the reconstructed sensor data generated by the autoencoder 430 , such as, e.g., by adding the modeled time-series to the reconstructed sensor data.
- the fault detection model 420 generates reconstructed sensor data that takes into account multi-variate time-series with asynchronous behaviors.
- the reconstructed behavior can be generated more efficiently and accurately.
- the reconstructed sensor data is compared against the original sensor data 502 to quickly determine whether anomalous behavior exists. This process is efficient for finding faults because storage space and processing resources are not needed to match sensor data 502 to particular fault behaviors.
- Referring now to FIG. 6, a block/flow diagram of an anomaly classifier for classifying anomalies detected by a fault detection model is illustratively depicted in accordance with an embodiment of the present invention.
- detected anomalies can be ranked as described with reference to FIG. 4 above.
- the ranked anomalies 601 can be provided to an anomaly classifier 410 , such as the anomaly classifier 410 of FIG. 4 described above, along with historical anomalies 602 provided by, e.g., an event log or a database or other storage device.
- the anomaly classifier 410 examines the ranked anomalies 601 and corresponding behavior fingerprints to produce a fault classification 603 .
- a fingerprint extractor 610 extracts a fingerprint from each of the ranked anomalies 601 .
- the fingerprint can include, e.g., one or more time-series across the multi-variate sensor data exhibiting anomalous behavior, however other methods of fingerprinting are contemplated.
- the fingerprint extractor 610 formats the ranked anomalies 601 according to the anomalous behavior and chronology associated with the anomalous behavior.
- the fingerprint can be represented as a fingerprint matrix of the time-series data representing the top ranked anomalies of the ranked anomalies 601 .
- the fingerprint matrix can be converted into a feature vector using a feature vector generator 612 .
- the feature vector generator 612 can, e.g., form the feature vector by separating either each row or each column of the fingerprint matrix, and appending the rows or columns into a single vector.
- other conversion techniques for converting a matrix to vector are also contemplated.
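The row-wise or column-wise flattening described above can be sketched as follows; the small fingerprint matrix is a hypothetical example:

```python
import numpy as np

def fingerprint_to_vector(fingerprint, by="row"):
    """Flatten a (sensors x time) fingerprint matrix into one feature vector.

    Rows or columns are separated and appended end to end, as described
    above; other matrix-to-vector conversions would work equally well.
    """
    if by == "row":
        return fingerprint.reshape(-1)       # rows appended in order
    return fingerprint.T.reshape(-1)         # columns appended in order

fp = np.array([[1, 2, 3],
               [4, 5, 6]])
row_vec = fingerprint_to_vector(fp, by="row")      # rows end to end
col_vec = fingerprint_to_vector(fp, by="column")   # columns end to end
```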
- a feature selector 614 selects the most informative features of the anomalies, such as, e.g., features indicating broken dependencies between system components.
- the feature selector 614 can include any feature selection technique, including, e.g., a greedy algorithm, LASSO method, wrapper method, filter, ranked correlation, a Markov blanket, minimum-redundancy-maximum-relevance (mRMR), or other suitable feature selection technique.
- the feature selector 614 employs a Chi Square feature selection technique.
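A common form of the chi-square feature-selection statistic is sketched below: for each nonnegative feature, per-class observed sums are compared with the sums expected from class frequencies alone, and higher scores mean the feature is more informative. The description does not spell out its exact variant, so this standard formulation is an assumption, and the toy data are hypothetical:

```python
import numpy as np

def chi2_scores(X, y):
    """Chi-square score of each nonnegative feature against class labels."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    feature_totals = X.sum(axis=0)                      # total mass per feature
    scores = np.zeros(X.shape[1])
    for c in np.unique(y):
        mask = (y == c)
        observed = X[mask].sum(axis=0)                  # per-class observed sums
        expected = feature_totals * mask.mean()         # class prior * feature mass
        nz = expected > 0
        scores[nz] += (observed[nz] - expected[nz]) ** 2 / expected[nz]
    return scores

# Feature 0 tracks the class; feature 1 is constant and uninformative.
X = np.array([[1.0, 1.0], [1.0, 1.0], [0.0, 1.0], [0.0, 1.0]])
y = np.array([0, 0, 1, 1])
scores = chi2_scores(X, y)
```

Selecting the top-scoring features keeps the anomaly dimensions that best separate fault classes.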
- a diagnosis unit 616 uses the selected features of the feature vector to diagnose the anomalies. Because the anomalies are determined based on deviation from normal operating behaviors, the anomalies can be diagnosed by comparing the deviation using the selected features with historical anomalies 602 .
- the historical anomalies 602 can include previously detected and addressed anomalies in the system. As such, each of the historical anomalies 602 can include a fingerprint, as well as failure reasons and annotations for possible actions to remedy or mitigate the respective failure.
- the features of the historical anomalies 602 can be compared to the selected features of the ranked anomalies 601 .
- the diagnosis unit 616 determines a degree of similarity between the ranked anomalies 601 and the historical anomalies 602 using, e.g., Jaccard distance.
- the diagnosis unit 616 produces a similarity score corresponding to the Jaccard distance between each of the ranked anomalies 601 and the historical anomalies 602 .
- a learned or a user selectable similarity threshold can be employed to determine whether the ranked anomalies 601 are similar to any of the historical anomalies 602 according to the Jaccard distance.
- where the similarity score is within the similarity threshold, the ranked anomalies 601 and the corresponding historical anomaly are deemed similar.
- the ranked anomalies 601 are diagnosed as the failure of the similar historical anomaly.
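The Jaccard-distance matching described above can be sketched as below. Representing each fingerprint as a set of anomalous-sensor identifiers, the dictionary keys, and the 0.5 default threshold are illustrative assumptions:

```python
def jaccard_distance(a, b):
    """1 - |A∩B| / |A∪B| over sets of anomalous-sensor identifiers."""
    a, b = set(a), set(b)
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def diagnose(fingerprint, historical, threshold=0.5):
    """Return the failure reason of the most similar historical anomaly,
    provided it lies within the similarity threshold; otherwise None,
    signaling that the anomaly should be treated as a new event."""
    best = min(historical, key=lambda h: jaccard_distance(fingerprint, h["sensors"]))
    if jaccard_distance(fingerprint, best["sensors"]) <= threshold:
        return best["reason"]
    return None

history = [
    {"sensors": {"pump", "valve"}, "reason": "coolant leak"},
    {"sensors": {"fan"}, "reason": "fan bearing wear"},
]
label = diagnose({"pump", "valve", "gauge"}, history)
```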
- the diagnosis unit 616 thus produces a fault classification 603 for the ranked anomalies 601 that includes, e.g., the failure reasons and the possible action annotations of the similar historical anomaly.
- the fault classification 603 can then be analyzed for automatic remediation according to the possible action annotations.
- the fault classification 603 can be provided to a user or administrator with the failure reasons and possible action annotations displayed on a display.
- where the ranked anomalies 601 are not similar to any particular historical anomaly 602, a fault classification 603 is generated as a new event and assigned to a category associated with the most similar historical anomaly 602. Because the ranked anomalies 601 are not similar to a particular historical anomaly 602, the fault classification 603 can exclude any failure reasons or possible actions. However, the fault classification 603 can, alternatively, include the failure reasons and possible action annotations of each of the historical anomalies in the assigned category. The fault classification 603 can then be provided to a user or administrator, e.g., via a display.
- Referring now to FIG. 7, a flow diagram illustrating a system/method for topology-inspired neural network autoencoding for fault detection is illustratively depicted in accordance with an embodiment of the present invention.
- sensor data is received from sensors in the sensor network with a communication device.
- the sensor data is logged in an event log to form time-series of sensor data.
- the sensor data is analyzed with a fault detection model to determine if the sensor data is indicative of a fault, the fault detection model including: at block 711, predicting the sensor data with an autoencoder by encoding the sensor data and decoding the encoded sensor data; at block 712, autoregressively modelling the sensor data with an autoregressor; at block 713, combining the modeled sensor data and the predicted sensor data with a combiner to produce reconstructed sensor data; at block 714, comparing the reconstructed sensor data to the sensor data with an anomaly evaluator to determine anomalies; and, at block 715, ranking the anomalies according to a difference between the reconstructed sensor data and the sensor data.
- an anomaly classification is produced by comparing the anomalies to historical anomalies with an anomaly classifier.
- faults in the sensor network are automatically mitigated with a processing device based on the anomaly classification.
- Testing And Monitoring For Control Systems (AREA)
Abstract
Description
- This application claims priority to U.S. Provisional Application 62/642,165, filed on Mar. 13, 2018, incorporated herein by reference in its entirety.
- The present invention relates to fault detection in electronic systems and more particularly to topology-inspired neural network autoencoding for fault detection in electronic systems.
- A variety of electronic systems, such as, e.g., store registers, retail store showcases, power plants, heating, ventilation and air condition (HVAC) systems, among other electronically controlled systems, can monitor both physical and electronic states of the electronic system using a variety of sensor techniques. To determine when the electronic system experiences a failure or fault related to the physical and electronic states, analysis of sensor behavior can be used. Accordingly, a system can be equipped to conduct surveillance of the electronic system to analyze behaviors and diagnose faults and failures. Quickly and accurately discovering a failure can result in reduced downtime as well as reduced hazards, among other issues related to a fault, thus decreasing costs associated with faults and increasing safety.
- According to an aspect of the present invention, a method is provided for fault detection in a sensor network. The method includes receiving sensor data from sensors in the sensor network with a communication device. The sensor data is analyzed with a fault detection model to determine if the sensor data is indicative of a fault, the fault detection model including: predicting the sensor data with an autoencoder by encoding the sensor data and decoding the encoded sensor data, autoregressively modelling the sensor data with an autoregressor, combining the modeled sensor data and the predicted sensor data with a combiner to produce reconstructed sensor data, and comparing the reconstructed sensor data to the sensor data with an anomaly evaluator to determine anomalies. An anomaly classification is produced by comparing the anomalies to historical anomalies with an anomaly classifier. Faults in the sensor network are automatically mitigated with a processing device based on the anomaly classification.
- According to another aspect of the present invention, a method is provided for fault detection in a sensor network. The method includes receiving sensor data from sensors in the sensor network with a communication device. The sensor data is logged in an event log to form time-series of sensor data. The sensor data is analyzed with a fault detection model to determine if the sensor data is indicative of a fault, the fault detection model including: predicting the sensor data with an autoencoder by encoding the sensor data and decoding the encoded sensor data, autoregressively modelling the sensor data with an autoregressor, combining the modeled sensor data and the predicted sensor data with a combiner to produce reconstructed sensor data, comparing the reconstructed sensor data to the sensor data with an anomaly evaluator to determine anomalies, and ranking the anomalies according to a difference between the reconstructed sensor data and the sensor data. An anomaly classification is produced by comparing the anomalies to historical anomalies with an anomaly classifier. Faults in the sensor network are automatically mitigated with a processing device based on the anomaly classification.
- According to another aspect of the present invention, a system is provided for fault detection in a sensor network with a fault detection system to detect faults. The system includes a communication device to receive sensor data from sensors in the sensor network. A fault detection model analyzes the sensor data to determine if the sensor data is indicative of a fault, the fault detection model including: an autoencoder that encodes the sensor data and decodes the encoded sensor data to predict the sensor data, an autoregressor that autoregressively models the sensor data, a combiner that combines the modeled sensor data and the predicted sensor data to produce reconstructed sensor data, and an anomaly evaluator that compares the reconstructed sensor data to the sensor data to determine anomalies. An anomaly classifier compares the anomalies to historical anomalies and produces an anomaly classification. A processing device automatically mitigates faults in the sensor network based on the anomaly classification.
- These and other features and advantages will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings.
- The disclosure will provide details in the following description of preferred embodiments with reference to the following figures wherein:
- FIG. 1 is a generalized diagram of a neural network, in accordance with the present invention;
- FIG. 2 is a block/flow diagram illustrating an artificial neural network (ANN) architecture, in accordance with the present invention;
- FIG. 3 is a diagram illustrating a network monitored by a topology-inspired neural network for fault detection, in accordance with the present invention;
- FIG. 4 is a block/flow diagram illustrating a fault detection system with topology-inspired neural network autoencoding for fault detection, in accordance with the present invention;
- FIG. 5 is a block/flow diagram illustrating a fault detection model for a fault detection system with topology-inspired neural network autoencoding, in accordance with the present invention;
- FIG. 6 is a block/flow diagram of an anomaly classifier for classifying anomalies detected by a fault detection model, in accordance with the present invention; and
- FIG. 7 is a flow diagram illustrating a system/method for topology-inspired neural network autoencoding for fault detection, in accordance with the present invention.
- In accordance with the present invention, systems and methods are provided for automatic fault detection with topology-inspired neural network autoencoding.
- In one embodiment, a fault detection system is implemented in communication with a system or network. The system or network includes, for example, a power grid; however, the fault detection system can be implemented in any system that monitors physical systems using electronic sensors. Thus, the fault detection system facilitates real-time analysis of sensor data to determine if or when a fault occurs in the monitored system, such as, e.g., the power grid.
- The fault detection system operates through the use of an autoencoder trained to recognize normal sensor data of the monitored system. Because such data is time varying, highly multi-variate and often asynchronous, the autoencoder includes an autoregressive model and long short-term memory. By combining an autoencoder to capture time-varying data relationships, and an autoregressive model to compensate for asynchronous data, the fault detection system can better operate in a real-world system to analyze sensor data in real-time. Thus, the fault detection system can more accurately and more efficiently recognize behavior that is outside of normal operating behavior on which the fault detection system is trained.
- Thus, the fault detection system monitors the system behaviors to detect and recognize anomalous behaviors that may correspond to faults. The suspected faults are fingerprinted according to behavior and recorded. The suspected faults can then be compared to past confirmed faults according to, e.g., similarities in fingerprints. Where the fingerprints of the suspected faults match past faults, the suspected faults are verified as faults having a type and a method of response corresponding to the matched past fault. The fault detection system can then automatically perform fault mitigation according to the method of response. For example, the fault detection system can, e.g., automatically notify an administrator via a display or speaker, shut down or reset a particular portion of the system, issue a general alert to users or customers, redistribute resources, or perform any other appropriate action. Thus, the faults can be identified and addressed more quickly, efficiently and accurately, with less need for human intervention. Because of the reduced human oversight, faults can be addressed more quickly and with reduced costs.
- Embodiments described herein may be entirely hardware, entirely software or including both hardware and software elements. In a preferred embodiment, the present invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
- Embodiments may include a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. A computer-usable or computer readable medium may include any apparatus that stores, communicates, propagates, or transports the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be magnetic, optical, electronic, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. The medium may include a computer-readable storage medium such as a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk, etc.
- Each computer program may be tangibly stored in a machine-readable storage media or device (e.g., program memory or magnetic disk) readable by a general or special purpose programmable computer, for configuring and controlling operation of a computer when the storage media or device is read by the computer to perform the procedures described herein. The inventive system may also be considered to be embodied in a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
- A data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code to reduce the number of times code is retrieved from bulk storage during execution. Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) may be coupled to the system either directly or through intervening I/O controllers.
- Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
- An artificial neural network (ANN) is an information processing system that is inspired by biological nervous systems, such as the brain. The key element of ANNs is the structure of the information processing system, which includes a large number of highly interconnected processing elements (called “neurons”) working in parallel to solve specific problems. ANNs are furthermore trained in-use, with learning that involves adjustments to weights that exist between the neurons. An ANN is configured for a specific application, such as pattern recognition or data classification, through such a learning process.
- Referring now to
FIG. 1, a generalized diagram of a neural network is shown. ANNs demonstrate an ability to derive meaning from complicated or imprecise data and can be used to extract patterns and detect trends that are too complex to be detected by humans or other computer-based systems. The structure of a neural network is known generally to have input neurons 102 that provide information to one or more "hidden" neurons 104. Connections 108 between the input neurons 102 and hidden neurons 104 are weighted, and these weighted inputs are then processed by the hidden neurons 104 according to some function in the hidden neurons 104, with weighted connections 108 between the layers. There may be any number of layers of hidden neurons 104, as well as neurons that perform different functions. There exist different neural network structures as well, such as a convolutional neural network, a maxout network, etc. Finally, a set of output neurons 106 accepts and processes weighted input from the last set of hidden neurons 104. - This represents a "feed-forward" computation, where information propagates from input neurons 102 to the output neurons 106. Upon completion of a feed-forward computation, the output is compared to a desired output available from training data. The error relative to the training data is then processed in "feed-back" computation, where the hidden neurons 104 and input neurons 102 receive information regarding the error propagating backward from the output neurons 106. Once the backward error propagation has been completed, weight updates are performed, with the weighted connections 108 being updated to account for the received error. This represents just one variety of ANN. - Referring now to the drawings in which like numerals represent the same or similar elements and initially to
FIG. 2 , an artificial neural network (ANN)architecture 200 is shown. It should be understood that the present architecture is purely exemplary and that other architectures or types of neural network may be used instead. The ANN embodiment described herein is included with the intent of illustrating general principles of neural network computation at a high level of generality and should not be construed as limiting in any way. - Furthermore, the layers of neurons described below and the weights connecting them are described in a general manner and can be replaced by any type of neural network layers with any appropriate degree or type of interconnectivity. For example, layers can include convolutional layers, pooling layers, fully connected layers, stopmax layers, or any other appropriate type of neural network layer. Furthermore, layers can be added or removed as needed and the weights can be omitted for more complicated forms of interconnection.
- During feed-forward operation, a set of
input neurons 202 each provide an input signal in parallel to a respective row ofweights 204. In the hardware embodiment described herein, theweights 204 each have a respective settable value, such that a weight output passes from theweight 204 to a respectivehidden neuron 206 to represent the weighted input to the hiddenneuron 206. In software embodiments, theweights 204 may simply be represented as coefficient values that are multiplied against the relevant signals. The signals from each weight adds column-wise and flows to ahidden neuron 206. - The
hidden neurons 206 use the signals from the array ofweights 204 to perform some calculation. Thehidden neurons 206 then output a signal of their own to another array ofweights 204. This array performs in the same way, with a column ofweights 204 receiving a signal from their respective hiddenneuron 206 to produce a weighted signal output that adds row-wise and is provided to theoutput neuron 208. - It should be understood that any number of these stages may be implemented, by interposing additional layers of arrays and
hidden neurons 206. It should also be noted that some neurons may beconstant neurons 209, which provide a constant output to the array. Theconstant neurons 209 can be present among theinput neurons 202 and/or hiddenneurons 206 and are used during feed-forward operation. - During back propagation, the
output neurons 208 provide a signal back across the array of weights 204. The output layer compares the generated network response to training data and computes an error. The error signal can be made proportional to the error value. In this example, a row of weights 204 receives a signal from a respective output neuron 208 in parallel and produces an output which adds column-wise to provide an input to the hidden neurons 206. The hidden neurons 206 combine the weighted feedback signal with a derivative of their feed-forward calculations and store an error value before outputting a feedback signal to their respective columns of weights 204. This back propagation travels through the entire network 200 until all hidden neurons 206 and the input neurons 202 have stored an error value. - During weight updates, the stored error values are used to update the settable values of the
weights 204. In this manner the weights 204 can be trained to adapt the neural network 200 to errors in its processing. It should be noted that the three modes of operation, feed forward, back propagation, and weight update, do not overlap with one another. - Referring now to
FIG. 3 , a system/method for a network monitored by a topology-inspired neural network for fault detection is illustratively depicted in accordance with an embodiment of the present invention. - According to aspects of the present invention, a
fault detection system 300 is in communication with a network 301, such as, e.g., a cloud network, the Internet, an intranet, or other network. Via the network 301, the fault detection system 300 can monitor systems such as, e.g., a building 304 including a heating, ventilation and air conditioning (HVAC) system, a power grid 306 and another sensor network 305. The sensor network 305 can be any network of sensors. - The
fault detection system 300 retrieves a data stream from each of the monitored systems across the network 301. For example, the sensor network 305 can provide sensor data from each of the sensors across the network 301 to the fault detection system 300. The fault detection system 300 can log the sensor data, e.g., in a memory or storage device, or in a database 307 via the network 301. Thus, the fault detection system 300 can maintain a record of sensor data from the sensor network 305. Similarly, the fault detection system 300 can also maintain a record of sensor data from the building 304 utilities, such as, e.g., HVAC, as well as power grid 306 behavior. - The
fault detection system 300 analyzes the sensor data to determine the presence of faults or other anomalies in the sensor network 305. For example, the fault detection system 300 can, e.g., learn from the record of sensor data in the database 307 to recognize normal operating behavior of sensors and other devices in the sensor network 305. Thus, for example, the fault detection system 300 can determine a suspected fault where received sensor data does not match the learned normal behavior. However, other methods of fault detection are contemplated. - The
fault detection system 300 classifies suspected faults according to types of faults. The type of a fault can relate to, e.g., effective responses for similar faults, particular variations from normal behavior, or other forms of classification. According to one possible embodiment, the fault detection system 300 determines a fault classification by, e.g., storing the record of sensor data as a fingerprint according to a format of the record and comparing the fingerprint against past detected faults. However, the fault detection system 300 can, alternatively, classify faults according to pre-defined classifications of behavior variations, such as, e.g., with a human-annotated training set. - Classifying a suspected fault at once verifies and categorizes the fault. As such, the fault can be associated with one or more actions to address faults of the corresponding category. Accordingly, the
fault detection system 300 can automatically address the fault according to the associated actions, such as, e.g., notifying an administrator via the display and/or speaker of a computer 302 or mobile device 303 in communication with the network 301. Additionally, the fault detection system 300 can address faults by, e.g., shutting down systems and equipment that are malfunctioning, as indicated by the fault, shutting down or resetting devices to prevent hazardous situations caused by or associated with the fault, dispatching maintenance teams, issuing public alerts via the internet, email, short message service (SMS) or other communication medium, or any other appropriate response to the fault. - The suspected fault can then be communicated to the
database 307 to be recorded as a fault. In the event that past faults do not match the suspected fault, the suspected fault can be recorded according to a new classification. Otherwise, the suspected fault is recorded according to the classification determined as described above. Thus, the suspected fault is added to the historical record of faults for improved training of the fault detection system for identifying anomalies and associated faults. - Referring now to
FIG. 4 , a block/flow diagram illustrating a fault detection system with topology-inspired neural network autoencoding for fault detection is illustratively depicted in accordance with an embodiment of the present invention. - According to aspects of an embodiment of the present invention, a
fault detection system 400 can include an event log 402 and a communication device 404. The communication device 404 can communicate with electronic systems to receive sensor data from, e.g., a sensor network including, e.g., a power grid, a retail store showcase, security systems such as building and home security, or other sensor networks. Thus, the communication device 404 can receive sensor data from the sensor network on, e.g., a continual basis, or, alternatively, periodically, such as, e.g., on a minute basis, hourly, daily, weekly, or monthly, or combinations thereof. - Received sensor data can be stored in the
event log 402, which can be, e.g., a storage or memory device such as, e.g., a hard drive, a solid state drive, flash storage, a cloud database, random access memory (RAM), or other storage device. Thus, the event log 402 can maintain a record over time of sensor data. The event log 402 can maintain the record for a given period, such as, e.g., for one day, multiple days, a week or a month, or for another desirable period, before deleting the data to make room for new sensor data. Alternatively, the event log 402 can maintain a rolling log of sensor data where the oldest data is deleted upon receipt of new data or in anticipation of new data. - Using the
event log 402, the fault detection system 400 can analyze sensor data to determine anomalous behavior. For such analysis, the fault detection system 400 includes a fault detection model 420 that can determine behavior that does not match normal operating behavior. For example, sensor data recorded in the event log 402 may include a spike in power draw on the power grid that is above normal for a corresponding time of day. As another example, the temperature detected on a floor of a building may be below normal, thus requiring increased heat supplied by the HVAC system to that floor. - To determine that a set of data is anomalous, the
fault detection model 420 can include an autoencoder 430 trained with normal sensor data. The autoencoder 430 can encode the set of data into a feature vector and decode the feature vector according to learned parameters. Therefore, the autoencoder 430 can include, e.g., a neural network, such as, e.g., long short-term memory, a recurrent neural network, a convolutional neural network, or other machine learning technique for encoding and decoding the sensor data. Thus, the autoencoder 430 reconstructs the data according to normal expected behaviors in sensor data. Accordingly, the reconstructed data and the original set of data can be compared to determine a difference. Where the difference is high, for example, above a threshold error level, the set of data is deemed anomalous, and thus corresponding to a suspected fault. - However, the sensor data can be time varying and include asynchronous properties. Thus, to compensate for the asynchronous properties, the
fault detection model 420 can include an autoregressor 440. The autoregressor 440 analyzes the sensor data to determine local linear correlations of data points in the time varying sensor data. Thus, the seasonal patterns determined from the autoencoder 430 can be augmented with the local linear correlations from the autoregressor 440 to reliably and efficiently reconstruct the sensor data, even with asynchronous time varying sensor data. As a result, a deviation from normal behavior can be more accurately and efficiently assessed. - Both the
autoencoder 430 and the autoregressor 440 can be trained to reproduce data according to normal sensor data patterns. As such, the fault detection model 420 can include an optimization function for training the autoencoder 430 and the autoregressor 440 with normal sensor data, such as, e.g., a training set of curated normal sensor data. The fault detection model 420 can be trained according to, e.g., the optimization function of equation 1 below: -
min_Θ Σ_{t=1}^{T} ∥W∘(D_t−D̂_t)∥_F^2 + λ∥Θ∥^2, Equation 1: - where t is time, T is the period of time corresponding to the sensor data, W is a weight matrix corresponding to the sensor topology, ∥·∥_F denotes the Frobenius norm, D_t is a data point at time t of the sensor data, D̂_t is the reconstructed data point at time t, λ is a bias coefficient and Θ is a vector of learned parameters. Accordingly,
equation 1 determines a reconstruction error of the fault detection model 420. Therefore, the autoregressor 440 and the autoencoder 430 can each be trained via backpropagation of the reconstruction error when normal operating behavior is provided as sensor data. Thus, the fault detection model 420 can be efficiently trained to recognize normal operating behavior. A fault, therefore, can easily be determined according to a deviation from the normal operating behavior. - The deviation from the normal behavior can be determined by an
anomaly evaluator 406. The anomaly evaluator 406 compares the reconstructed data from the fault detection model 420 with the original sensor data. Where the original sensor data and reconstructed sensor data deviate from each other by greater than a threshold amount, the data can be determined to indicate anomalous behavior, and thus a fault. Thus, the anomaly evaluator 406 can determine an anomaly score that quantifies the discrepancy between the sensor data and the reconstructed data. For example, an anomaly score can be determined according to equation 2 below: -
score = Σ_{t=1}^{T} ∥W∘(D_t−D̂_t)∥_F^2. Equation 2: - Thus, the
anomaly evaluator 406 quantifies and scores reconstructed data. Where the score rises above a threshold level, the data is considered anomalous. The threshold can be user adjustable, learned according to an optimization function, or predetermined for each system or type of sensor. Accordingly, the anomaly evaluator 406 can include, e.g., a software module stored in a memory device, such as, e.g., a hard drive, a solid state drive, a cache, a buffer, flash storage, random access memory, or other memory device, and executed by a processing device, such as the processing device 414 or other processing device. - Additionally, the
fault detection system 400 can prioritize faults to determine an order of addressability. As such, the fault detection system 400 can rank anomalous data to determine the most severe faults for various components providing sensor data. The ranking can be, e.g., based on score, ordered from greatest to least. However, according to aspects of an embodiment, a rank can be determined according to equation 3 below: -
r = W∘(D_{1:t}−D̂_{1:t}), Equation 3: - where r is the rank of the sensor data.
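As an illustrative sketch of equations 2 and 3 above, the anomaly score and a per-sensor ranking can be computed as follows. The aggregation of the residual matrix r into a greatest-to-least per-sensor ordering (summing absolute deviations over time) is an assumption of this example.

```python
import numpy as np

def anomaly_score(D, D_hat, W):
    # Equation 2: sum over t of the squared Frobenius norm of the
    # topology-weighted residual W o (D_t - D_hat_t).
    residual = W * (D - D_hat)        # elementwise (Hadamard) product
    return float(np.sum(residual ** 2))

def rank_anomalies(D, D_hat, W):
    # Equation 3: r = W o (D_{1:t} - D_hat_{1:t}); sensors are then
    # ordered from greatest to least aggregate deviation.
    r = W * (D - D_hat)
    severity = np.abs(r).sum(axis=0)  # aggregate deviation per sensor
    return np.argsort(-severity)      # most anomalous sensor index first

# D and D_hat: (T, n_sensors) original and reconstructed sensor data;
# W: per-sensor weights reflecting the sensor topology (all ones here).
D = np.array([[1.0, 5.0], [1.0, 5.0]])
D_hat = np.array([[1.0, 0.0], [1.0, 0.0]])
W = np.ones(2)
```

Here sensor 1 deviates from its reconstruction while sensor 0 matches, so sensor 1 ranks first; a score above the chosen threshold would mark the data anomalous.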
- According to the anomaly rank and/or anomaly score, sensor data can be verified as faults. However, to address the faults, the anomaly can be classified by a type or similarity to historical faults stored in, e.g., the event log 402 or an external database, with an
anomaly classifier 410. By classifying the faults, the fault detection system 400 can utilize past responses to determine an appropriate response to the fault. Thus, the anomaly classifier 410 compares a fingerprint corresponding to the sensor data exhibiting anomalous behavior of a fault to fingerprints of the historical faults. The fingerprint can include top ranked anomalous behavior, according to equation 3 above, and corresponding time-varying data. The top ranked anomalous behavior can correspond to the most damaged or faulty part of the system. Thus, the ranking of anomalous behavior can be used to classify the fault as indicative of the origin of the fault. - Upon classifying the fault, the
anomaly classifier 410 can determine the appropriate response according to the historical responses to similar past faults. The response can include, e.g., shutting down or restarting malfunctioning hardware, alerting an administrator or other users by an audible or visual notification using, e.g., a display 412 or network attached devices via the communication device 404, or any other appropriate response. - The components of the
fault detection system 400 can include, e.g., a memory or storage to store software to perform the above described tasks. Additionally, each component can include a dedicated processing device, such as, e.g., a central processing unit (CPU), a graphics processing unit (GPU), a resistive processing unit (RPU), a field programmable gate array (FPGA), or other processing device. Alternatively, a processing device 414 can be in communication with one or more of the components to execute the component functions. - Referring now to
FIG. 5 , a block/flow diagram illustrating a fault detection model for a fault detection system with topology-inspired neural network autoencoding is illustratively depicted in accordance with an embodiment of the present invention. - According to an embodiment of the present invention, a
fault detection model 420, such as the fault detection model 420 described above, can include an autoencoder 430 with an encoder 510 and decoder 520, and an autoregressor 440. Each of the autoencoder 430 and the autoregressor 440 can analyze multi-variate time varying sensor data 502. The results from each of the autoencoder 430 and the autoregressor 440 can be combined in a combiner 504 to generate reproduced sensor data. Thus, the combiner 504 can be, e.g., a software module that retrieves the autoencoder 430 output and the autoregressor 440 output and combines the outputs via, e.g., vector addition, concatenation, or other combination scheme. - The
sensor data 502 can include multi-variate data that is time-varying. Accordingly, the sensor data 502 can include, e.g., multiple data time-series 502 a-d. To evaluate the behavior of the system, such as, e.g., a building monitoring system including HVAC, a power grid, or other sensor network, the fault detection model 420 analyzes each time-series 502 a-d jointly. Thus, each of the time-series 502 a-d in the sensor data 502 is provided to the encoder 510. - The
encoder 510 encodes the sensor data 502 into a feature vector according to learned parameters. Therefore, the encoder 510 can include, e.g., a neural network, such as, e.g., a convolutional neural network (CNN), a recurrent neural network (RNN) or other machine learning technique. According to aspects of the present embodiment, to capture the dynamic, multi-variate nature of the sensor data 502, the encoder 510 includes an RNN. Therefore, the encoder 510 is constructed with one or more neurons such as, e.g., long short-term memory (LSTM) units or gated recurrent units, among others. For example, the encoder 510 can include a first layer of LSTM units 512 a-d and a second layer of LSTM units 514 a-d. However, more layers are contemplated. - Thus, each time-
series 502 a-d is communicated to a corresponding LSTM unit 512 a-d. The LSTM units 512 a-d output respective hidden states according to learned parameters that are supplied to each LSTM unit 512 a-d of the first layer as well as passed on to the LSTM units 514 a-d of the second layer. The LSTM units 514 a-d similarly output hidden states according to learned parameters. The hidden states are used to form a feature vector that represents the encoded sensor data 502. - The feature vector is then provided to the
decoder 520. Similar to the encoder 510, the decoder 520 includes, e.g., a neural network. According to aspects of the present embodiment, the decoder 520 includes an RNN with LSTM units 522 a-d, 524 a-d of corresponding number to the encoder 510 LSTM units 512 a-d and 514 a-d. The feature vector is provided to a first LSTM unit 522 a, which generates a hidden state from learned parameters. The hidden state is passed to each other LSTM unit 522 a-d, 524 a-d. - The second layer of LSTM units 524 a-d generates vectors corresponding to each of the LSTM units 524 a-d. As a result, the
decoder 520 generates vectors that correspond to each of the time series 502 a-d of the sensor data 502. The vectors represent reconstructed sensor data according to the learned parameters of the autoencoder 430. - The learned parameters for each of the LSTM units 512 a-d, 514 a-d, 522 a-d and 524 a-d are jointly trained using training data including normal operating behavior of the monitored system. As a result, the
autoencoder 430 is trained to reconstruct sensor data 502 according to normal operating behavior. Therefore, if the sensor data 502 includes anomalous operating behavior, the generated vectors from the decoder 520 will differ from the sensor data 502. However, if the sensor data 502 includes normal operating behavior, then the generated vectors and the sensor data 502 will match within a degree of acceptable error. - The degree of acceptable error can include a threshold that is, e.g., learned, predetermined, or user selectable. Moreover, the error between the generated vectors and the
sensor data 502 can be evaluated according to an error function, such as, e.g., equation 1 described above. - However, the
autoencoder 430 may not accurately account for a lack of synchronicity between the time-series 502 a-d. Accordingly, the reconstructed sensor data can be augmented with the autoregressor 440. The autoregressor 440 can determine a regression to model each time-series 502 a-d using the time-series 502 a-d as input. Thus, the autoregressor 440 can model the time-series 502 a-d using, e.g., equation 4 below: -
Y_t = c + Σ_{i=1}^{p} ρ_i D_{t−i} + ϵ_t, Equation 4: -
- The modeled time-series from the
autoregressor 440 can be used to augment the reconstructed sensor data generated by the autoencoder 430, such as, e.g., by adding the modeled time-series to the reconstructed sensor data. As a result, the fault detection model 420 generates reconstructed sensor data that takes into account multi-variate time-series with asynchronous behaviors. Thus, the reconstructed behavior can be generated more efficiently and accurately. Furthermore, the reconstructed sensor data is compared against the original sensor data 502 to quickly determine whether anomalous behavior exists. This process is efficient for finding faults because storage space and processing resources are not needed to match sensor data 502 to particular fault behaviors. - Referring now to
FIG. 6 , a block/flow diagram of an anomaly classifier for classifying anomalies detected by a fault detection model is illustratively depicted in accordance with an embodiment of the present invention. - According to aspects of the present invention, detected anomalies can be ranked as described with reference to
FIG. 4 above. The ranked anomalies 601 can be provided to an anomaly classifier 410, such as the anomaly classifier 410 of FIG. 4 described above, along with historical anomalies 602 provided by, e.g., an event log or a database or other storage device. The anomaly classifier 410 examines the ranked anomalies 601 and corresponding behavior fingerprints to produce a fault classification 603. - To produce the
fault classification 603, a fingerprint extractor 610 extracts a fingerprint from each of the ranked anomalies 601. The fingerprint can include, e.g., one or more time-series across the multi-variate sensor data exhibiting anomalous behavior; however, other methods of fingerprinting are contemplated. Thus, the fingerprint extractor 610 formats the ranked anomalies 601 according to the anomalous behavior and chronology associated with the anomalous behavior. The fingerprint can be represented as a fingerprint matrix of the time-series data representing the top ranked anomalies of the ranked anomalies 601.
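The fingerprint-to-feature-vector conversion and the Jaccard-distance diagnosis described in this section can be sketched as follows. The set representation of selected features, the failure-reason labels, and the 0.5 similarity threshold are assumptions of this example.

```python
import numpy as np

def fingerprint_to_feature_vector(fingerprint):
    # Flatten the fingerprint matrix (rows = top ranked anomalous
    # time-series) by appending its rows into a single vector.
    return np.asarray(fingerprint).flatten()

def jaccard_similarity(a, b):
    # 1 minus the Jaccard distance between two sets of selected features.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 1.0

def diagnose(features, historical, threshold=0.5):
    # historical: hypothetical list of (failure_reason, feature_set) pairs.
    reason, hist_features = max(
        historical, key=lambda h: jaccard_similarity(features, h[1]))
    if jaccard_similarity(features, hist_features) >= threshold:
        return reason                 # similar enough to a known failure
    return "new event: " + reason     # assigned to the most similar category

history = [("hvac failure", {"temp_drift", "fan_current"}),
           ("grid overload", {"power_spike", "voltage_sag"})]
label = diagnose({"temp_drift", "fan_current", "humidity"}, history)
```

In this sketch the detected features overlap the stored "hvac failure" fingerprint with a Jaccard similarity of 2/3, exceeding the threshold, so that historical failure reason is returned; an anomaly unlike any stored fingerprint is reported as a new event in the nearest category.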
- Using the feature vector, a
feature selector 614 selects the most informative anomalies, such as, e.g., broken dependencies between system components. The feature selector 614 can include any feature selection technique, including, e.g., a greedy algorithm, the LASSO method, a wrapper method, a filter, ranked correlation, a Markov blanket, minimum-redundancy-maximum-relevance (mRMR), or other suitable feature selection technique. However, according to aspects of the present invention, the feature selector 614 employs a chi-square feature selection technique. - A
diagnosis unit 616 uses the selected features of the feature vector to diagnose the anomalies. Because the anomalies are determined based on deviation from normal operating behaviors, the anomalies can be diagnosed by comparing the deviation using the selected features with historical anomalies 602. The historical anomalies 602 can include previously detected and addressed anomalies in the system. As such, each of the historical anomalies 602 can include a fingerprint, as well as failure reasons and annotations for possible actions to remedy or mitigate the respective failure. Thus, the features of the historical anomalies 602 can be compared to the selected features of the ranked anomalies 601, and the diagnosis unit 616 determines a degree of similarity between the ranked anomalies 601 and the historical anomalies 602 using, e.g., Jaccard distance. - Accordingly, the
diagnosis unit 616 produces a similarity score corresponding to the Jaccard distance between each of the ranked anomalies 601 and the historical anomalies 602. A learned or a user selectable similarity threshold can be employed to determine whether the ranked anomalies 601 are similar to any of the historical anomalies 602 according to the Jaccard distance. - Where the similarity score exceeds a threshold value, the ranked
anomalies 601 and the corresponding historical anomaly are deemed similar. Thus, the ranked anomalies 601 are diagnosed as the failure of the similar historical anomaly. The diagnosis unit 616 thus produces a fault classification 603 for the ranked anomalies 601 that includes, e.g., the failure reasons and the possible action annotations of the similar historical anomaly. The fault classification 603 can then be analyzed for automatic remediation according to the possible action annotations. Alternatively, or in addition, the fault classification 603 can be provided to a user or administrator with the failure reasons and possible action annotations displayed on a display. - However, where the ranked
anomalies 601 do not have a similarity score with any historical anomaly 602 that exceeds the threshold, then a fault classification 603 is generated as a new event and assigned to a category associated with the most similar historical anomaly 602. Because the ranked anomalies 601 are not similar to a particular historical anomaly 602, the fault classification 603 can exclude any failure reasons or possible actions. However, the fault classification 603 can, alternatively, include the failure reasons and possible action annotations of each of the historical anomalies in the assigned category. The fault classification 603 can then be provided to a user or administrator, e.g., via a display. - Referring now to
FIG. 7 , a flow diagram illustrating a system/method for topology-inspired neural network autoencoding for fault detection is illustratively depicted in accordance with an embodiment of the present invention. - At
block 701, sensor data is received from sensors in the sensor network with a communication device. - At
block 702, the sensor data is logged in an event log to form time-series of sensor data. - At
block 710, the sensor data is analyzed to determine if the sensor data is indicative of a fault with a fault detection model, the fault detection model including: at block 711, predicting the sensor data with an autoencoder by encoding the sensor data and decoding the encoded sensor data, at block 712, autoregressively modelling the sensor data with an autoregressor, at block 713, combining the modeled sensor data and the predicted sensor data with a combiner to produce reconstructed sensor data, at block 714, comparing the reconstructed sensor data to the sensor data with an anomaly evaluator to determine anomalies, and, at block 715, ranking the anomalies according to a difference between the reconstructed sensor data and the sensor data. - At
block 703, an anomaly classification is produced by comparing the anomalies to historical anomalies with an anomaly classifier. - At
block 704, faults in the sensor network are automatically mitigated with a processing device based on the anomaly classification. - The foregoing is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the present invention and that those skilled in the art may implement various modifications without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention. Having thus described aspects of the invention, with the details and particularity required by the patent laws, what is claimed and desired protected by Letters Patent is set forth in the appended claims.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/245,734 US11379284B2 (en) | 2018-03-13 | 2019-01-11 | Topology-inspired neural network autoencoding for electronic system fault detection |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201862642165P | 2018-03-13 | 2018-03-13 | |
US16/245,734 US11379284B2 (en) | 2018-03-13 | 2019-01-11 | Topology-inspired neural network autoencoding for electronic system fault detection |
Publications (2)
Publication Number | Publication Date |
---|---|
US20190286506A1 true US20190286506A1 (en) | 2019-09-19 |
US11379284B2 US11379284B2 (en) | 2022-07-05 |
Family
ID=67905575
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/245,734 Active US11379284B2 (en) | 2018-03-13 | 2019-01-11 | Topology-inspired neural network autoencoding for electronic system fault detection |
Country Status (1)
Country | Link |
---|---|
US (1) | US11379284B2 (en) |
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190312898A1 (en) * | 2018-04-10 | 2019-10-10 | Cisco Technology, Inc. | SPATIO-TEMPORAL ANOMALY DETECTION IN COMPUTER NETWORKS USING GRAPH CONVOLUTIONAL RECURRENT NEURAL NETWORKS (GCRNNs) |
CN111027680A (en) * | 2019-12-06 | 2020-04-17 | 北京瑞莱智慧科技有限公司 | Monitoring quantity uncertainty prediction method and system based on variational self-encoder |
CN111340107A (en) * | 2020-02-25 | 2020-06-26 | 山东大学 | Fault diagnosis method and system based on convolutional neural network cost sensitive learning |
CN111537207A (en) * | 2020-04-29 | 2020-08-14 | 西安交通大学 | Data enhancement method for intelligent diagnosis of mechanical fault under small sample |
CN111814976A (en) * | 2020-07-14 | 2020-10-23 | 西安建筑科技大学 | Air conditioning system sensor fault error relearning method and system |
CN111860569A (en) * | 2020-06-01 | 2020-10-30 | 国网浙江省电力有限公司宁波供电公司 | Power equipment abnormity detection system and method based on artificial intelligence |
CN112381180A (en) * | 2020-12-09 | 2021-02-19 | 杭州拓深科技有限公司 | Power equipment fault monitoring method based on mutual reconstruction single-class self-encoder |
CN112393860A (en) * | 2020-12-09 | 2021-02-23 | 杭州拓深科技有限公司 | Fire-fighting pipe network water leakage monitoring method based on in-class distance constraint self-encoder |
EP3798778A1 (en) * | 2019-09-30 | 2021-03-31 | Siemens Energy Global GmbH & Co. KG | Method and system for detecting an anomaly of an equipment in an industrial environment |
WO2021111252A1 (en) * | 2019-12-02 | 2021-06-10 | International Business Machines Corporation | Deep contour-correlated forecasting |
EP3839440A1 (en) * | 2019-12-20 | 2021-06-23 | Pratt & Whitney Canada Corp. | Sensor fault management tool |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7248103B2 (en) * | 2019-03-26 | 2023-03-29 | NEC Corporation | Anomaly detection method, anomaly detection device, program |
US20200380391A1 (en) * | 2019-05-29 | 2020-12-03 | Caci, Inc. - Federal | Methods and systems for predicting electromechanical device failure |
US20240070130A1 (en) * | 2022-08-30 | 2024-02-29 | Charter Communications Operating, Llc | Methods And Systems For Identifying And Correcting Anomalies In A Data Environment |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8185781B2 (en) | 2009-04-09 | 2012-05-22 | Nec Laboratories America, Inc. | Invariants-based learning method and system for failure diagnosis in large scale computing systems |
US10594712B2 (en) * | 2016-12-06 | 2020-03-17 | General Electric Company | Systems and methods for cyber-attack detection at sample speed |
US11204602B2 (en) * | 2018-06-25 | 2021-12-21 | Nec Corporation | Early anomaly prediction on multi-variate time series data |
- 2019
- 2019-01-11 US US16/245,734 patent/US11379284B2/en active Active
Cited By (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10771488B2 (en) * | 2018-04-10 | 2020-09-08 | Cisco Technology, Inc. | Spatio-temporal anomaly detection in computer networks using graph convolutional recurrent neural networks (GCRNNs) |
US20190312898A1 (en) * | 2018-04-10 | 2019-10-10 | Cisco Technology, Inc. | SPATIO-TEMPORAL ANOMALY DETECTION IN COMPUTER NETWORKS USING GRAPH CONVOLUTIONAL RECURRENT NEURAL NETWORKS (GCRNNs) |
US11280816B2 (en) * | 2018-04-20 | 2022-03-22 | Nec Corporation | Detecting anomalies in a plurality of showcases |
US20210263954A1 (en) * | 2018-06-22 | 2021-08-26 | Nippon Telegraph And Telephone Corporation | Apparatus for functioning as sensor node and data center, sensor network, communication method and program |
US11822579B2 (en) * | 2018-06-22 | 2023-11-21 | Nippon Telegraph And Telephone Corporation | Apparatus for functioning as sensor node and data center, sensor network, communication method and program |
US11178170B2 (en) * | 2018-12-14 | 2021-11-16 | Ca, Inc. | Systems and methods for detecting anomalous behavior within computing sessions |
US11580005B2 (en) * | 2019-03-05 | 2023-02-14 | Ellexi Co., Ltd. | Anomaly pattern detection system and method |
US11410048B2 (en) * | 2019-05-17 | 2022-08-09 | Honda Motor Co., Ltd. | Systems and methods for anomalous event detection |
EP3798778A1 (en) * | 2019-09-30 | 2021-03-31 | Siemens Energy Global GmbH & Co. KG | Method and system for detecting an anomaly of an equipment in an industrial environment |
GB2605719A (en) * | 2019-12-02 | 2022-10-12 | Ibm | Deep contour-correlated forecasting |
WO2021111252A1 (en) * | 2019-12-02 | 2021-06-10 | International Business Machines Corporation | Deep contour-correlated forecasting |
US11586705B2 (en) | 2019-12-02 | 2023-02-21 | International Business Machines Corporation | Deep contour-correlated forecasting |
GB2605719B (en) * | 2019-12-02 | 2024-01-24 | Ibm | Deep contour-correlated forecasting |
CN111027680A (en) * | 2019-12-06 | 2020-04-17 | 北京瑞莱智慧科技有限公司 | Monitoring quantity uncertainty prediction method and system based on variational self-encoder |
US11443194B2 (en) | 2019-12-17 | 2022-09-13 | SparkCognition, Inc. | Anomaly detection using a dimensional-reduction model |
US11371911B2 (en) | 2019-12-20 | 2022-06-28 | Pratt & Whitney Canada Corp. | System and method for sensor fault management |
EP3839440A1 (en) * | 2019-12-20 | 2021-06-23 | Pratt & Whitney Canada Corp. | Sensor fault management tool |
US20210201460A1 (en) * | 2019-12-30 | 2021-07-01 | Micron Technology, Inc. | Apparatuses and methods for determining wafer defects |
US11922613B2 (en) * | 2019-12-30 | 2024-03-05 | Micron Technology, Inc. | Apparatuses and methods for determining wafer defects |
US11727109B2 (en) | 2020-01-24 | 2023-08-15 | International Business Machines Corporation | Identifying adversarial attacks with advanced subset scanning |
CN111340107A (en) * | 2020-02-25 | 2020-06-26 | 山东大学 | Fault diagnosis method and system based on convolutional neural network cost sensitive learning |
US11892938B2 (en) | 2020-03-16 | 2024-02-06 | International Business Machines Corporation | Correlation and root cause analysis of trace data using an unsupervised autoencoder |
CN111537207A (en) * | 2020-04-29 | 2020-08-14 | 西安交通大学 | Data enhancement method for intelligent diagnosis of mechanical fault under small sample |
CN111860569A (en) * | 2020-06-01 | 2020-10-30 | 国网浙江省电力有限公司宁波供电公司 | Power equipment abnormity detection system and method based on artificial intelligence |
CN111814976A (en) * | 2020-07-14 | 2020-10-23 | 西安建筑科技大学 | Air conditioning system sensor fault error relearning method and system |
US11662718B2 (en) | 2020-11-30 | 2023-05-30 | BISTelligence, Inc. | Method for setting model threshold of facility monitoring system |
CN112393860A (en) * | 2020-12-09 | 2021-02-23 | 杭州拓深科技有限公司 | Fire-fighting pipe network water leakage monitoring method based on in-class distance constraint self-encoder |
CN112381180A (en) * | 2020-12-09 | 2021-02-19 | 杭州拓深科技有限公司 | Power equipment fault monitoring method based on mutual reconstruction single-class self-encoder |
WO2022195976A1 (en) * | 2021-03-16 | 2022-09-22 | Mitsubishi Electric Corporation | Apparatus and method for anomaly detection |
US20220303288A1 (en) * | 2021-03-16 | 2022-09-22 | Mitsubishi Electric Research Laboratories, Inc. | Apparatus and Method for Anomaly Detection |
US11843623B2 (en) * | 2021-03-16 | 2023-12-12 | Mitsubishi Electric Research Laboratories, Inc. | Apparatus and method for anomaly detection |
US20220382622A1 (en) * | 2021-05-25 | 2022-12-01 | Google Llc | Point Anomaly Detection |
US11928017B2 (en) * | 2021-05-25 | 2024-03-12 | Google Llc | Point anomaly detection |
US20220382614A1 (en) * | 2021-05-26 | 2022-12-01 | Nec Laboratories America, Inc. | Hierarchical neural network-based root cause analysis for distributed computing systems |
CN113486291A (en) * | 2021-06-18 | 2021-10-08 | 电子科技大学 | Petroleum drilling machine micro-grid fault prediction method based on deep learning |
CN113468751A (en) * | 2021-07-05 | 2021-10-01 | 河南中烟工业有限责任公司 | Recursion Lasso-based flowmeter anomaly online monitoring method and system and storage medium |
CN113537352A (en) * | 2021-07-15 | 2021-10-22 | 杭州鲁尔物联科技有限公司 | Sensor abnormal value monitoring method and device, computer equipment and storage medium |
CN113726559A (en) * | 2021-08-09 | 2021-11-30 | 国网福建省电力有限公司 | Artificial intelligence network-based security analysis early warning model |
CN113428167A (en) * | 2021-08-25 | 2021-09-24 | 长沙德壹科技有限公司 | ECU (electronic control Unit) abnormality recognition method |
US11656927B1 (en) * | 2021-12-03 | 2023-05-23 | International Business Machines Corporation | Localizing faults in multi-variate time series data |
WO2023099063A1 (en) * | 2021-12-03 | 2023-06-08 | International Business Machines Corporation | Localizing faults in multi-variate time series data |
US20230176939A1 (en) * | 2021-12-03 | 2023-06-08 | International Business Machines Corporation | Localizing faults in multi-variate time series data |
CN114371002A (en) * | 2021-12-30 | 2022-04-19 | 天津理工大学 | Planetary gearbox fault diagnosis method based on DAE-CNN |
WO2023148843A1 (en) * | 2022-02-02 | 2023-08-10 | NEC Corporation | Time-series data processing method |
EP4270196A1 (en) * | 2022-04-26 | 2023-11-01 | Hamilton Sundstrand Corporation | Apparatus and method for diagnosing no fault failure found in electronic systems |
Also Published As
Publication number | Publication date |
---|---|
US11379284B2 (en) | 2022-07-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11379284B2 (en) | Topology-inspired neural network autoencoding for electronic system fault detection | |
US11169514B2 (en) | Unsupervised anomaly detection, diagnosis, and correction in multivariate time series data | |
US20200371491A1 (en) | Determining Operating State from Complex Sensor Data | |
US11204602B2 (en) | Early anomaly prediction on multi-variate time series data | |
US10686806B2 (en) | Multi-class decision system for categorizing industrial asset attack and fault types | |
KR102118670B1 (en) | System and method for management of ICT infrastructure | |
US11763198B2 (en) | Sensor contribution ranking | |
JP7217761B2 (en) | Abnormal device detection from communication data | |
US11336672B2 (en) | Detecting behavioral anomaly in machine learned rule sets | |
US11520981B2 (en) | Complex system anomaly detection based on discrete event sequences | |
US11496493B2 (en) | Dynamic transaction graph analysis | |
US11323465B2 (en) | Temporal behavior analysis of network traffic | |
US20230085991A1 (en) | Anomaly detection and filtering of time-series data | |
US11221617B2 (en) | Graph-based predictive maintenance | |
JP2022176136A (en) | Time series-based abnormality detection method and system | |
CN114091930A (en) | Service index early warning method and device, electronic equipment and storage medium | |
Wu et al. | An autonomic reliability improvement system for cyber-physical systems | |
JP7062505B2 (en) | Equipment management support system | |
US20200133253A1 (en) | Industrial asset temporal anomaly detection with fault variable ranking | |
JP7113092B2 (en) | Performance prediction from communication data | |
US20210302042A1 (en) | Pipeline for continuous improvement of an hvac health monitoring system combining rules and anomaly detection | |
Markovic et al. | Time-series Anomaly Detection and Classification with Long Short-Term Memory Network on Industrial Manufacturing Systems | |
US20240134736A1 (en) | Anomaly detection using metric time series and event sequences for medical decision making | |
US11892829B2 (en) | Monitoring apparatus, method, and program | |
Azirah et al. | A data-driven prognostic model for industrial equipment using time series prediction methods | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: NEC LABORATORIES AMERICA, INC., NEW JERSEY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHENG, WEI;CHENG, HAIFENG;NATSUMEDA, MASANAO;SIGNING DATES FROM 20190107 TO 20190111;REEL/FRAME:047967/0885 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
STCC | Information on status: application revival |
Free format text: WITHDRAWN ABANDONMENT, AWAITING EXAMINER ACTION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
AS | Assignment |
Owner name: NEC CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NEC LABORATORIES AMERICA, INC.;REEL/FRAME:060034/0515 Effective date: 20220527 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |