US20140200952A1 - Scalable rule logicalization for asset health prediction - Google Patents

Scalable rule logicalization for asset health prediction

Info

Publication number
US20140200952A1
US20140200952A1 (application US13/962,203; US201313962203A)
Authority
US
United States
Prior art keywords
data
detector
prediction
features
data sources
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/962,203
Inventor
Arun Hampapur
Hongfei Li
Dhaivat P. Parikh
Buyue Qian
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US13/962,203 priority Critical patent/US20140200952A1/en
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION reassignment INTERNATIONAL BUSINESS MACHINES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HAMPAPUR, ARUN, LI, HONGFEI, QIAN, BUYUE, PARIKH, Dhaivat P.
Publication of US20140200952A1 publication Critical patent/US20140200952A1/en
Abandoned legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B61RAILWAYS
    • B61KAUXILIARY EQUIPMENT SPECIALLY ADAPTED FOR RAILWAYS, NOT OTHERWISE PROVIDED FOR
    • B61K9/00Railway vehicle profile gauges; Detecting or indicating overheating of components; Apparatus on locomotives or cars to indicate bad track sections; General design of track recording vehicles
    • B61K9/08Measuring installations for surveying permanent way
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B61RAILWAYS
    • B61LGUIDING RAILWAY TRAFFIC; ENSURING THE SAFETY OF RAILWAY TRAFFIC
    • B61L23/00Control, warning or like safety means along the route or between vehicles or trains
    • B61L23/04Control, warning or like safety means along the route or between vehicles or trains for monitoring the mechanical state of the route
    • B61L23/042Track changes detection
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B61RAILWAYS
    • B61LGUIDING RAILWAY TRAFFIC; ENSURING THE SAFETY OF RAILWAY TRAFFIC
    • B61L23/00Control, warning or like safety means along the route or between vehicles or trains
    • B61L23/04Control, warning or like safety means along the route or between vehicles or trains for monitoring the mechanical state of the route
    • B61L23/042Track changes detection
    • B61L23/044Broken rails
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B61RAILWAYS
    • B61LGUIDING RAILWAY TRAFFIC; ENSURING THE SAFETY OF RAILWAY TRAFFIC
    • B61L23/00Control, warning or like safety means along the route or between vehicles or trains
    • B61L23/04Control, warning or like safety means along the route or between vehicles or trains for monitoring the mechanical state of the route
    • B61L23/042Track changes detection
    • B61L23/045Rail wear
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B61RAILWAYS
    • B61LGUIDING RAILWAY TRAFFIC; ENSURING THE SAFETY OF RAILWAY TRAFFIC
    • B61L23/00Control, warning or like safety means along the route or between vehicles or trains
    • B61L23/04Control, warning or like safety means along the route or between vehicles or trains for monitoring the mechanical state of the route
    • B61L23/042Track changes detection
    • B61L23/047Track or rail movements
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B61RAILWAYS
    • B61LGUIDING RAILWAY TRAFFIC; ENSURING THE SAFETY OF RAILWAY TRAFFIC
    • B61L27/00Central railway traffic control systems; Trackside control; Communication systems specially adapted therefor
    • B61L27/50Trackside diagnosis or maintenance, e.g. software upgrades
    • B61L27/53Trackside diagnosis or maintenance, e.g. software upgrades for trackside elements or systems, e.g. trackside supervision of trackside control system conditions
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B61RAILWAYS
    • B61LGUIDING RAILWAY TRAFFIC; ENSURING THE SAFETY OF RAILWAY TRAFFIC
    • B61L27/00Central railway traffic control systems; Trackside control; Communication systems specially adapted therefor
    • B61L27/60Testing or simulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0635Risk analysis of enterprise or organisation activities

Definitions

  • the present invention relates to data processing and, more specifically, to scalable rule logicalization for asset health prediction where the logicalization includes creating human interpretable rules.
  • a method includes aggregating data, via a computer processing device, from data sources, extracting a set of features from the data, projecting the features to a lower dimensional space, generating a prediction based on the projecting, logicalizing a decision boundary for the prediction, and estimating a confidence level of the prediction based on the decision boundary.
  • a system includes a computer processing system communicatively coupled to data sources, and logic executable by the computer processing system.
  • the logic is configured to implement a method.
  • the method includes aggregating data from data sources, extracting a set of features from the data, projecting the features to a lower dimensional space, generating a prediction based on the projecting, logicalizing a decision boundary for the prediction, and estimating a confidence level of the prediction based on the decision boundary.
  • a computer program product includes a storage medium embodied with machine-readable program instructions, which when executed by a computer causes the computer to implement a method.
  • the method includes aggregating data from data sources, extracting a set of features from the data, projecting the features to a lower dimensional space, generating a prediction based on the projecting, logicalizing a decision boundary for the prediction, and estimating a confidence level of the prediction based on the decision boundary.
  • FIG. 1 depicts a block diagram of a system upon which predictive modeling for asset management may be implemented according to an embodiment of the present invention
  • FIG. 2 depicts a flow diagram describing a process for implementing predictive modeling for asset management according to an embodiment of the present invention
  • FIG. 3 depicts a failure rate control chart with sample data for identifying changes in failure rate of an asset according to an embodiment of the present invention
  • FIG. 4 depicts components and functions for online learning and information fusion according to an embodiment of the present invention
  • FIG. 5 depicts records of data that may be merged by information fusion techniques according to an embodiment of the present invention
  • FIG. 6 depicts a flow diagram of a process for implementing alarm prediction processes according to an embodiment of the present invention.
  • FIGS. 7A and 7B each depicts a two-dimensional chart of sampled data according to an embodiment of the present invention.
  • Exemplary embodiments provide predictive modeling using several analytical approaches including, e.g., correlation analysis, causal analysis, time series analysis, survival modeling, and machine learning techniques to automatically learn rules and build failure prediction models based on exploration of historical multi-detector measurements, equipment failure records, maintenance records, environmental conditions, etc. Additionally, the analytics and models can also be used for detecting root-causes of several failure modes of components, which can be proactively used by a maintenance organization to optimize trade-offs related to maintenance schedules, costs, and shop capacity.
  • according to an exemplary embodiment, predictive modeling for asset management (also referred to herein as "predictive modeling") is provided.
  • the predictive modeling provides the ability to analyze and interpret large amounts of complex and variable data concerning an asset or group of assets, as well as conditions surrounding the assets.
  • the predictive modeling provides the ability to perform large-scale, multi-detector predictive modeling and related tasks to predict when one or more of the assets might fail.
  • the predictive modeling incorporates statistical learning to predict asset failures based on large-scale, multi-dimensional sparse time series data.
  • the predictive modeling develops the concept of composite detectors and integrates large-scale information collected from multiple detectors to predict undesired conditions of equipment and unexpected events, such as alarms that cause service interruptions.
  • the exemplary predictive modeling techniques described herein may be implemented for any industry that collects and processes large amounts of data from detectors in order to determine and maintain the health of one or more assets.
  • the predictive modeling processes may be implemented by the railroad industry, airline industry, or other transportation industry.
  • the exemplary predictive modeling processes also have applications in the area of manufacturing.
  • the system 100 of FIG. 1 includes a host system 102 in communication with data sources 104 A- 104 n (referred to collectively as data sources 104 ) over one or more networks 110 .
  • the host system 102 may be implemented as a high-speed computer processing device (e.g., a mainframe computer) that is capable of handling a large volume of data received from the data sources 104 .
  • the host system 102 may be implemented by any entity that collects and processes a large amount of data from a multitude of data sources 104 to manage, or may be offered as a service to such entity by, e.g., an application service provider (ASP).
  • the data sources 104 may include devices configured to capture raw data from aspects of the asset, as well as any conditions surrounding the asset.
  • assets may be railroad tracks, as well as cars that travel along the tracks (and their constituent parts).
  • the assets may include airplanes and corresponding parts that are inspected, as well as runway conditions.
  • the data sources 104 may include detectors, such as probes, sensors, and other instrumentation that are configured to measure qualitative aspects of the assets or surrounding conditions, such as temperature, weight or load, strain, dimensions (e.g., indications of wear), sound, and images, to name a few.
  • the measurements may be taken with regard to railroad track components and vehicle wheels.
  • detectors that may be used as sources of data include machine vision detectors (MVDs), wheel impact load detectors (WILDs), optical geometry detectors (OGDs), truck performance detectors (TPDs), acoustic bay detectors (ABDs), hot box detectors, warm bearing detectors, and hot wheel/cold wheel detectors.
  • the data sources 104 may capture time, physical location, object location, and other information regarding the subject of measurement, as will be described herein.
  • the data sources 104 reflect multi-dimensional detection devices, as they are configured to collect a wide variety of different types of information.
  • the data sources 104 A- 104 n may include (or may be coupled to) corresponding communication components 116 A- 116 n (referred to collectively as communication components 116 ) for transmitting captured data over one or more networks.
  • the communication components 116 may include, e.g., transceivers, antennae, and/or network cards for receiving and conveying data using wireless and/or wireline transmission technologies including radio frequency (RF), WiFi, Bluetooth, cellular, satellite, copper wiring, co-axial cabling, etc.
  • a probe on one of the data sources 104 collects data from a location (e.g., a location on a railroad track) and transfers the data to the corresponding communication component 116 for transmission over networks 110 to the host system 102 .
  • the networks 110 may include one or more reader devices 108 for receiving the data from the data sources 104 .
  • the reader devices 108 may be RF readers positioned at defined locations (e.g., at fixed-length intervals) along the railroad track.
  • the RF readers 108 read data from corresponding data sources 104 (via the communication components 116 , which may be RF antennae) as the data sources 104 (embedded in the vehicles) pass within communicative range of the reader devices 108 .
  • the data captured by the data sources 104 may be transmitted as raw data to the host system 102 or may be processed prior to transmission.
  • the data sources 104 A- 104 n may also include corresponding computer processors 118 A- 118 n (collectively referred to as computer processors 118 ) for processing the raw data and/or formatting the data for transmission over the networks 110 .
  • the captured data may be transmitted via the communication components 116 to a computer processor configured for receiving the data.
  • some of the data sources 104 may alternatively include other information sources, such as cameras or portable communication devices (e.g., cellular telephones, smart phones, or other portable devices) operated by users who are in direct observation of the asset or surrounding conditions who have observed an event that may have an impact on safety.
  • the data collected by the host system 102 from these portable devices may include texts, images, messages, or other information provided by a user of a communication device. For example, an observer near a railroad track may witness a previously unreported defect or anomaly, record an image of the defect, and transmit the image with date/time information, and alternatively a text description, to the host system 102 or another entity which forwards the information to the host system 102 .
  • the networks 110 may include any type of networks, such as local area networks, wide area networks, virtual private networks, and the Internet.
  • the networks 110 may be configured to support wireless communications, e.g., via radio frequency (RF) communications, cellular networks, satellite networks, and global positioning (GPS) systems.
  • the host system 102 executes logic 112 for implementing the exemplary predictive modeling, as well as other processes, as described herein.
  • the logic 112 includes a user interface component for enabling authorized users to set preferences used in configuring data sources 104 employed in the processes described herein, as well as generating and executing predictive models, performing analysis on the histories of previously implemented models, and facilitating the generation of new models, or evolvement of existing models, to increase the ability for the managing entity to ensure reliable operation.
  • the preferences may include designating a frequency of data collection by the data sources 104 .
  • the logic 112 may also be configured to utilize the information acquired from execution of the models to analyze and adopt maintenance and repair plans for components of the asset.
  • the host system 102 is communicatively coupled to a storage device 114 that stores various data used in implementing the predictive modeling.
  • the storage device 114 may store models, performance histories (e.g., alarm histories, repair histories, etc.), and other information desired.
  • the storage device 114 may be directly in communication with the host system 102 (e.g., via cabling) or may be logically addressable by the host system 102 , e.g., as a consolidated data source over one or more networks 110 .
  • Predictive models are generated from history data collected from the data sources 104 . Patterns of data from the measurements and resulting repair work or maintenance schedules can be used in a predictive manner for estimating when maintenance should be performed. In addition, as new data is received, the predictive models can be updated to reflect any changes discovered. A user interface of the logic 112 may be used to present organized history data, as well as alert information. The created model may be stored as one of several stored models, e.g., in the storage device 114 of FIG. 1 . As new data is received from the data sources 104 , it can be applied to the predictive models and may be used to update the models in order to ascertain future maintenance needs or critical issues that require immediate attention.
  • the services may provide a web-based user interface for receiving information from a user in creating and implementing a model.
  • the user interface via the logic 112 , prompts a user through the process.
  • the process assumes that history data has been collected over a period of time.
  • the history data may include detector data, alarm information, and maintenance data. It will be understood that the data collected may be sparse time series data. For example, in the railroad industry, the detectors may not be evenly distributed across the network, thus the number of readings may vary dramatically across different locations in the railroad system.
  • the time series of readings may be sparse due to, e.g., infrequent use of the asset in which the detector readings are taken, as compared to other assets.
  • the predictive modeling is configured to handle the sparsity in the detector data.
  • the logic 112 generates a factor matrix for each univariate time series data in a set of sparse time series data collected from a group of detectors over time.
  • this may be implemented using supervised matrix factorization (SMF) techniques.
  • let X denote the multi-dimensional time series from different types of detectors.
  • Some time series could be sparse (e.g., they may be sparse as a result of being sparsely sampled over time, or they may represent incomplete or noisy data).
  • let Y be the label vector for the asset failures (e.g., 1 indicates a failure, and 0 indicates good condition).
  • H_i is the latent representation of the time series, where each row defines the latent features of the original time series in X_i.
  • V_i is the latent representation of the time points.
  • SMF is used to find optimal latent representation matrices in order to best approximate the matrices X and Y via a loss function optimization technique.
  • the latent representation matrix H is a good estimate of observed time series, and useful features may be extracted from H, such as trend and diversification.
  • L_CA is the supervised classification accuracy loss term, which constrains the latent time series representation H such that a set of all-versus-one logistic regression weights W can maximize the classification accuracy on the data set.
  • a subset of the time series data is identified through feature selection, which is determined based on a loss function. For each predictor X_i (the ith predictor), calculate the minimum loss, then rank the predictors in the order of the optimized loss. The predictor importance indicates the relative importance of each predictor in predicting the bad trucks (i.e., the best predictors to approximate the label vector Y in terms of the loss).
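  • A minimal numpy sketch of one plausible form of this objective is shown below, assuming a squared reconstruction loss on the observed entries of X plus a logistic all-versus-one classification loss on H; the function name smf_loss and the weights lam and mu are illustrative assumptions, not terms from this disclosure. Minimizing such a loss separately for each predictor X_i and sorting by the optimized value gives the predictor ranking described above.

```python
import numpy as np

def smf_loss(X, mask, Y, H, V, W, lam=1.0, mu=0.1):
    """Illustrative supervised matrix factorization (SMF) objective.

    X    : (n_series, n_timepoints) sparse time series (zeros where unobserved)
    mask : (n_series, n_timepoints) 1 where X is observed, 0 elsewhere
    Y    : (n_series,) failure labels (1 = failure, 0 = good condition)
    H    : (n_series, k) latent representation of the time series
    V    : (n_timepoints, k) latent representation of the time points
    W    : (k,) logistic-regression weights on the latent features
    """
    # Approximate the observed entries of X with the low-rank product H V^T.
    recon = np.sum(mask * (X - H @ V.T) ** 2)

    # L_CA: logistic classification-accuracy loss of Y regressed on H.
    p = 1.0 / (1.0 + np.exp(-(H @ W)))
    l_ca = -np.sum(Y * np.log(p + 1e-12) + (1 - Y) * np.log(1.0 - p + 1e-12))

    # Frobenius-norm regularization keeps the latent factors small.
    reg = mu * (np.sum(H ** 2) + np.sum(V ** 2) + np.sum(W ** 2))
    return recon + lam * l_ca + reg
```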
  • the logic 112 generates a predictive model from the subset of time series data.
  • the predictive model is configured to predict a failure using, e.g., data mining, machine learning, and/or statistical modeling. Different predictive models may be generated for different asset failures. For example, in the railroad industry, data from multiple detectors (e.g., WILD, MV, and OGD) may be used in the analysis. Suppose a decision tree method is used to predict the occurrence of truck failures within three months from sparse time series data, and the prediction accuracy of the model is high for both training and testing data. The model may then correctly classify most bad truck records as failed in both the training and test datasets.
  • other predictive methods can be used as well, such as neural networks, Support Vector Machines (SVMs), and statistical models (e.g., the Cox Proportional Hazards model or the Andersen-Gill model).
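  • As a hedged illustration of the decision tree approach, the sketch below trains a scikit-learn classifier on synthetic stand-in features; the feature columns and their relationship to failure are invented for the example, whereas a real model would use aggregated WILD, MV, and OGD readings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 1000
# Stand-in features, e.g., peak impact load (WILD), wear indication (MV),
# truck hunting index (OGD); the data here is synthetic.
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Decision tree predicting truck failure within the next three months.
clf = DecisionTreeClassifier(max_depth=4, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```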
  • the predictive model may then be used to render decisions regarding inspection and repair of the asset.
  • the logic 112 receives new data from the detectors and compares the model predictions to actual data that is newly received.
  • the new data may be real-time or near real-time data streamed from one or more of the data sources 104 to the host system 102 over the networks 110 .
  • a failure rate chart 300 for an asset is generated based on the one-sample weighted rank test.
  • a non-parametric one-sample weighted rank test may be represented as:
  • $n \geq \dfrac{(Z_{\alpha}+Z_{\beta})^{2}\int_{0}^{\tau} w^{2}(s)\,h_{0}(s)/y(s)\,ds}{\left[\int_{0}^{\tau} w(s)\,[h(s)-h_{0}(s)]\,ds\right]^{2}}$
  • the statistic follows the standard normal distribution for large samples under H_0; τ is selected as the largest failure time in the monitoring subgroup.
  • the logic 112 determines a change in a failure rate based on a one-sample weighted rank test.
  • the threshold value may be defined by an authorized user of the logic 112 . If the change does not exceed the threshold value, the process returns to step 208 .
  • the logic 112 updates the predictive model to reflect the change at step 214 .
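  • The sketch below illustrates this monitoring step with a simplified, unweighted one-sample log-rank check (w(s) = 1 and a constant baseline hazard); the patent's weighted form generalizes this, and the threshold and numbers here are invented for illustration.

```python
import numpy as np

def one_sample_logrank_z(times, events, h0):
    """times  : follow-up time for each asset in the subgroup
       events : 1 if the asset failed at that time, 0 if censored
       h0     : baseline (historical) hazard rate under the null hypothesis"""
    observed = np.sum(events)          # O: failures actually observed
    expected = h0 * np.sum(times)      # E: failures expected under H0
    return (observed - expected) / np.sqrt(expected)

times = np.array([120.0, 90.0, 200.0, 60.0, 150.0])   # days in service
events = np.array([1, 0, 1, 1, 0])                    # failure indicators
z = one_sample_logrank_z(times, events, h0=0.004)

if abs(z) > 3.0:    # threshold chosen by an authorized user of the logic
    print("failure-rate change detected: update the predictive model")
else:
    print("no significant change: continue monitoring")
```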
  • "Subgroup" is a term used in control charts for Statistical Process Control (SPC).
  • the subgroup is a sample with a fixed sample size (i.e., the number of observations is fixed). In this embodiment, the subgroup represents a sample containing a fixed number of failures.
  • the model can be updated by using, e.g., the Bayesian inference method as shown below (where Θ denotes the predictive model parameters which need to be updated):
  • $P(\Theta \mid D, M) \propto P(D \mid M, \Theta)\, P(\Theta \mid M)$
  • P(D | M, Θ) represents the data likelihood function based on the performance model, and P(Θ | M) represents the prior probability density function selected for the model.
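  • A conjugate Gamma-Poisson sketch of this Bayesian update is shown below, assuming the parameter being updated is a failure rate; the prior values and observation counts are invented for illustration.

```python
# Posterior ∝ likelihood × prior; with a Gamma prior on the failure rate and
# Poisson-distributed failure counts, the update has a closed form.
alpha_prior, beta_prior = 2.0, 500.0   # prior belief: ~2 failures per 500 asset-days

new_failures = 3                        # failures observed in the latest subgroup
exposure_days = 400.0                   # total operating time covered by the subgroup

alpha_post = alpha_prior + new_failures     # failure counts update the shape
beta_post = beta_prior + exposure_days      # exposure updates the rate

posterior_mean = alpha_post / beta_post
print(f"updated failure-rate estimate: {posterior_mean:.5f} failures per day")
```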
  • history data has been collected over a period of time and may include, e.g., detector data, alarm information, and maintenance data.
  • the data collected may be sparse time series data.
  • An online learning system may be employed using information fusion techniques to integrate the history data that is received from multiple types of disparate detection devices.
  • the online learning system and process is configured to integrate information collected from spatially- and temporally-incompatible detection devices to enable predictive maintenance for asset management.
  • the online learning system and process utilizes historical detector data along with failure data to determine patterns of detector readings that may be subcritical, thereby leading to failures across multiple detectors with sparse sampling.
  • the fusion techniques provide the ability to study assets that move across the detector network and enable information from these assets to be integrated across time and space. By fusing this information collected from multiple detectors, an integrated insight into equipment conditions can be gleaned.
  • the online learning system and process combines offline and online learning engines to generate failure alerts for equipment predictive maintenance.
  • turning to FIG. 4, a system 400, and functional components thereof, through which the online learning and fusion processes may be implemented will now be described in an embodiment.
  • the system 400 includes an integrated data model 402 that is generated from a variety of data 404 collected offline.
  • the data 404 includes wayside detector data 404 a, traffic and network data 404 b, track inspection data 404 c , weather data 404 d, set-out and failure data 404 e, and tear down/repair data 404 f .
  • the data 404 may be collected by the data sources 104 of FIG. 1 and transmitted to the host system 102 for processing by the logic 112 .
  • the logic 112 generates the integrated data model 402 as described herein.
  • the integrated data model 402 is generated in part by merging disparate data from multi-dimensional detection devices (e.g., data sources 104 of FIG. 1 ).
  • the data sources 104 collect information, which data is stored as history data in one or more storage locations (e.g., storage device 114 ).
  • the data may be stored in various tables.
  • FIG. 5 illustrates tables of sample data to be merged in a railroad industry environment.
  • a first table 502 provides data regarding a vehicle wheel (HOT_WHEEL) collected from January 1 st through December 31 st of a given year.
  • the detector used in the collection may be a hot wheel detector (HWD) that is attached to a railroad track at a specified, fixed location, and includes a temperature sensor to measure temperature of the wheel as it passes the location on the track.
  • the information stored in the table 502 may include message identification information, equipment identification information, and temperature measured, to name a few.
  • a second table 504 provides data regarding a vehicle axle (HBD_AXLE) collected from January 15 th through October 30 th of the same given year.
  • the detector used in the collection may be a hot box detector (HBD) for an axle that is attached to a railroad track at a specified, fixed location, and includes a temperature sensor to measure temperature of the axle as it passes the location on the track.
  • a third table 506 provides data regarding impact load for a wheel (WILD_WHL) collected from January 1 st through December 31 st of two years covering the same given year.
  • the detector used in the collection may be a wheel impact load detector (WILD) that is attached to a railroad track at a specified, fixed location, and includes a sensor that determines an amount of load or force on the track (e.g., measured in KIPS).
  • a fourth table 507 provides data regarding equipment or railcar that was collected from the same two years as the table above.
  • the detector used in the collection may be a wheel impact load detector (WILD) that is attached to a railroad track at a specified, fixed location, and includes a sensor that determines the measurements at equipment level, such as equipment speed.
  • a fifth table 510 provides data regarding a noise signature emitted by bearings collected from the same two years.
  • the detector used in the collection may be an acoustic bearing detector (ABD) that is attached to a railroad track at a specified, fixed location, and includes a sensor that captures anomalies in the noise signature emitted by bearings in motion; the detector processes this information internally and issues alarms when an anomalous acoustic signature is detected.
  • the logic 112 is configured to merge the data in these tables where shared fields are known.
  • the data in table 502 can be merged with data in table 504 through common fields HBD_MSG_ID, EQP_INIT, EQP_NBR, EQP_AXLE_NBR, and AXLE_SIDE, which occur in both tables.
  • tables 502 and 504 may be merged with table 506 through common fields EQP_AXLE_NBR and AXLE_SIDE.
  • table 506 can be merged with table 507 through common fields EDR_MSG_ID and EQP_SEQ_NBR, as well as with table 510 through common fields EDR_MSG_ID, EQP_AXLE_NBR, and AXLE_SIDE.
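  • A pandas sketch of these merges is shown below; the join keys follow the common fields named above, while the CSV file names are hypothetical stand-ins for the stored tables of FIG. 5.

```python
import pandas as pd

hot_wheel = pd.read_csv("hot_wheel.csv")   # table 502 (HOT_WHEEL)
hbd_axle  = pd.read_csv("hbd_axle.csv")    # table 504 (HBD_AXLE)
wild_whl  = pd.read_csv("wild_whl.csv")    # table 506 (WILD_WHL)
equipment = pd.read_csv("equipment.csv")   # table 507 (equipment/railcar)
abd       = pd.read_csv("abd.csv")         # table 510 (acoustic bearing)

# Tables 502 and 504 share message, equipment, axle, and side identifiers.
merged = hot_wheel.merge(
    hbd_axle, on=["HBD_MSG_ID", "EQP_INIT", "EQP_NBR", "EQP_AXLE_NBR", "AXLE_SIDE"])

# Tables 502/504 join table 506 on the axle number and side.
merged = merged.merge(wild_whl, on=["EQP_AXLE_NBR", "AXLE_SIDE"])

# Table 506 joins table 507 on the message and equipment-sequence identifiers,
# and table 510 on the message, axle, and side identifiers.
merged = merged.merge(equipment, on=["EDR_MSG_ID", "EQP_SEQ_NBR"])
merged = merged.merge(abd, on=["EDR_MSG_ID", "EQP_AXLE_NBR", "AXLE_SIDE"])
```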
  • the logic 112 utilizes the integrated data model 402 to perform failure pattern analysis 406 and failure causal analysis 410 .
  • Insights from the perspectives of preventative maintenance, procurement decision, and railway operations can be obtained by discovering correlations in the historical data that associates failures with equipment physical parameters (e.g., weight, flange thickness, flange height, rim thickness, etc.), equipment operation parameters (e.g., speed, dynamic load, bearing temperature, etc.), and external parameters (e.g., weather, usage history).
  • Traffic & network data 404 b includes data that measures the traffic volumes or number of railcars passing through the rail segments.
  • Track inspection data 404 c provides inspection records which may indicate the condition of the tracks.
  • Weather data 404 d includes any weather-related information that may have an impact on railway operating conditions (e.g., those that might result in derailment).
  • the set-out & failure data 404 e and the tear down/repair data 404 f provide maintenance records including equipment failures and repair information.
  • Failure pattern analysis 406, subject matter expert (SME)-rendered decisions 408, failure causal analyses 410, learning failure prediction models 412, and failure causal map 414 are associated with an offline learning engine based on the large volume of data collected for these elements.
  • the failure pattern analysis component 406 of the offline learning engine is an analytics engine configured to discover failure patterns such as seasonality patterns of failures.
  • the failure causal analyses component 410 of the offline learning engine identifies the factors that drive the patterns while leveraging SME knowledge in 408 .
  • the failure causal map 414 provides a tool to visualize the causal factors and failure patterns.
  • the learning failure prediction models 412 develop the failure prediction engine based on the failure patterns and causal factors.
  • the offline learning engine is used in an online fashion with real-time data (e.g., live sensor data 418 ).
  • as the detector data 418 is received, it is fed to the analytical models 420.
  • Prediction outputs and decision recommendations resulting from the models 420 are displayed in a predicted failure/optimized preventative maintenance program 416.
  • one goal is to maximize the occurrences of crew set-outs and reduce the inspection and maintenance costs.
  • components of analytics involved in this goal relate to alarm prediction, bad truck prediction, and bad wheel prediction (as described above with respect to FIGS. 1-3 ).
  • for alarm prediction, multiple detectors (HBD, ABD, and WILD) can aid in predicting the most severe alarm related to hot bearings within a meaningful amount of advance time (e.g., 7 days in advance of actual alarm/incident occurrence) to reduce immediate train stops.
  • wheels and trucks are replaced when they create high impact or wear out.
  • wheel dimension and wheel impact load data may be used to detect truck performance issues earlier using multiple detectors, such as MV, OGD, TPD, and WILD.
  • movement errors and wheel impact load that predict the wheel defects earlier may be determined from data received from multiple detectors, such as MV, OGD, and WILD.
  • an L1 alarm is issued when the detector readings reach the most severe category, and thus immediate train stoppage is generally required. Predicting an L1 alarm in advance is desirable so that operators have sufficient time to respond.
  • One goal in developing an alarm prediction model is to keep false alarm rates low due to constraints in corresponding resources.
  • Another goal is to provide human interpretable rules to facilitate decision processes by operators.
  • alarm prediction processes are provided.
  • the alarm prediction processes are configured to accomplish the above-referenced goals.
  • Alarm prediction may be summarized as a classification problem where one class relates to detector readings history with alarms, and the other class relates to detector readings history without alarms.
  • the exemplary alarm prediction processes utilize Support Vector Machine (SVM) techniques.
  • In the alarm prediction, two sets of parameters are provided and may be customized. One set is the prediction time window (i.e., how many days in advance the alarm prediction is generated). Based on the trade-offs between operational constraints and accuracy of prediction, the process offers predictions 3 or 7 days in advance, which in turn may provide enough buffering time to prepare for inspections based on operating conditions.
  • the second set of parameters is the historical detector reading time window that indicates how many days of past detector readings may be used to provide a forecast.
  • the process may include two options. For purposes of illustration the options are 7 days and 14 days.
  • the first number of each setting indicates the reading time window
  • the second number in each setting reflects the prediction time window. For example, 7-3 means that, using the past 7 days of readings, an alarm prediction 3 days into the future can be provided.
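  • A sketch of constructing labeled samples under, e.g., the 7-3 setting is shown below; the column names (asset_id, date, value) and the daily anchoring scheme are assumptions for illustration, not details given in this disclosure.

```python
import pandas as pd

READ_WINDOW, PRED_WINDOW = 7, 3   # the "7-3" setting: 7 days of history, 3-day horizon

def build_samples(readings: pd.DataFrame, alarms: pd.DataFrame) -> list:
    """readings: columns [asset_id, date, value]; alarms: columns [asset_id, date]."""
    samples = []
    for asset_id, grp in readings.groupby("asset_id"):
        grp = grp.sort_values("date")
        asset_alarms = alarms.loc[alarms["asset_id"] == asset_id, "date"]
        for anchor in grp["date"].unique():
            # The past READ_WINDOW days of readings form the history for this sample.
            window = grp[(grp["date"] > anchor - pd.Timedelta(days=READ_WINDOW))
                         & (grp["date"] <= anchor)]
            # Label = 1 if an alarm occurs within the next PRED_WINDOW days.
            label = int(((asset_alarms > anchor)
                         & (asset_alarms <= anchor + pd.Timedelta(days=PRED_WINDOW))).any())
            samples.append({"asset_id": asset_id, "anchor": anchor,
                            "history": window["value"].to_numpy(), "label": label})
    return samples
```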
  • at step 602, data is aggregated from detectors (e.g., the data sources 104 of FIG. 1, including the data elements 404 in FIG. 4), and features are extracted (e.g., using quantiles) for each numeric value variable.
  • the features may each be a vector of equal length.
  • historical multi-detector readings (e.g., ABD, HBD, and WILD) and extracted features are combined and aggregated using quantiles for each numeric value variable.
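  • A sketch of quantile-based feature extraction is shown below: each numeric detector variable within the reading window is summarized by a fixed set of quantiles, so every sample yields a feature vector of equal length regardless of how many raw readings it contains. The quantile levels and variable names are illustrative.

```python
import numpy as np

QUANTILES = [0.1, 0.25, 0.5, 0.75, 0.9]   # fixed summary of each variable

def quantile_features(readings_by_variable: dict) -> np.ndarray:
    """Map each numeric variable's readings in the window to its quantiles."""
    feats = []
    for name in sorted(readings_by_variable):          # fixed variable order
        values = np.asarray(readings_by_variable[name], dtype=float)
        feats.extend(np.quantile(values, QUANTILES))
    return np.array(feats)

x = quantile_features({"hbd_bearing_temp": [41.0, 43.5, 39.8, 50.2],
                       "wild_impact_kips": [62.0, 70.5, 58.1]})
print(x.shape)   # (10,) -- 2 variables x 5 quantiles, always the same length
```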
  • sample features are linearly projected to a lower dimensional space while maintaining comparable learning performance (e.g., preserving the learned non-linear decision boundary), which in turn may reduce both the time and memory complexity required by the learning model.
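  • One possible realization of this projection step is sketched below using PCA; the disclosure does not name a specific projection technique, so this choice and the dimensions are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 60))        # aggregated quantile features (synthetic)

projector = PCA(n_components=10)              # linear map to a lower dimensional space
low_dim = projector.fit_transform(features)   # shape (5000, 10); cheaper to learn on
```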
  • a prediction is generated for each sample based on its location in the feature space relative to the support vectors (i.e., the key samples that lie in the border area between positives and negatives).
  • a decision boundary is logicalized. Human interpretable rules are extracted through grid searching given the complex SVM classification results. As shown in FIG. 7A for example, a grid 700 A illustrates feature space in which all blocks constitute the feasible feature space, and each block is a sample. Based on learning decisions, positive samples are darkened ( 704 a ). A curve 706 a represents a separating or decision boundary. The feature space of the grid 700 A illustrates a coarse logical rule search. Using the same two-dimensional learning problem, a grid 700 B in FIG. 7B illustrates a feature space comprising smaller (finer grid search) blocks 702 b. In comparison to the grid 700 A, the decision boundary 706 b is more precise. The rule logicalization is, thus, scalable to a desired granular level.
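  • A sketch of the grid-search rule logicalization is shown below for a two-dimensional feature space: the trained classifier is evaluated at the center of each grid cell, and every cell predicted positive becomes a human interpretable interval rule ("feature 1 in [a, b) and feature 2 in [c, d) implies alarm"). Refining the grid sharpens the logicalized boundary, as in FIGS. 7A and 7B. The data, grid bounds, and cell counts are synthetic assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(int)   # a non-linear decision boundary
clf = SVC(kernel="rbf", gamma=1.0).fit(X, y)

def logicalize(clf, bounds, cells):
    """Return one ((x_lo, x_hi), (y_lo, y_hi)) interval rule per positive grid cell."""
    (x_lo, x_hi), (y_lo, y_hi) = bounds
    xs = np.linspace(x_lo, x_hi, cells + 1)
    ys = np.linspace(y_lo, y_hi, cells + 1)
    rules = []
    for i in range(cells):
        for j in range(cells):
            center = [(xs[i] + xs[i + 1]) / 2.0, (ys[j] + ys[j + 1]) / 2.0]
            if clf.predict([center])[0] == 1:
                rules.append(((xs[i], xs[i + 1]), (ys[j], ys[j + 1])))
    return rules

coarse = logicalize(clf, bounds=[(-3, 3), (-3, 3)], cells=6)    # coarse search (FIG. 7A)
fine = logicalize(clf, bounds=[(-3, 3), (-3, 3)], cells=24)     # finer search (FIG. 7B)
print(len(coarse), "coarse rules;", len(fine), "finer rules")
```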
  • the logic 112 may calculate the probability or risk. For example, the logic 112 may predict whether a bearing will issue an L1 alarm within a defined future time period based on its location (in feature space) relative to the support vectors (i.e., the key samples that lie in the border area between positives and negatives). In addition to predicting whether an alarm will be issued or not, the corresponding confidence is estimated based on the relative position to the support vectors at step 610.
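  • A sketch of such a confidence estimate is shown below: the signed margin returned by the classifier's decision function reflects the sample's position relative to the support vectors and is squashed to a (0.5, 1.0) confidence score; the logistic mapping and its scale are assumptions, not the disclosed calibration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))                    # synthetic projected features
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # synthetic alarm labels
clf = SVC(kernel="rbf", gamma=0.5).fit(X, y)

def predict_with_confidence(clf, sample, scale=1.0):
    margin = clf.decision_function([sample])[0]  # signed, distance-like margin
    will_alarm = bool(margin > 0)
    confidence = 1.0 / (1.0 + np.exp(-scale * abs(margin)))   # larger margin -> higher confidence
    return will_alarm, confidence

print(predict_with_confidence(clf, [0.4, 0.3]))
```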
  • aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • General Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Evolutionary Computation (AREA)
  • Computer Hardware Design (AREA)
  • Train Traffic Observation, Control, And Security (AREA)
  • Testing And Monitoring For Control Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)

Abstract

An aspect of scalable rule logicalization for asset health management includes aggregating data, via a computer processing device, from data sources, extracting a set of features from the data, projecting the features to a lower dimensional space, generating a prediction based on the projecting, logicalizing a decision boundary for the prediction, and estimating a confidence level of the prediction based on the decision boundary.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • The present application is a Continuation Application of U.S. patent application Ser. No. 13/873,829, filed on Apr. 30, 2013, which claims the benefit of U.S. Patent Application Ser. No. 61/751,704 filed on Jan. 11, 2013, which are hereby incorporated by reference herein in their entirety.
  • BACKGROUND
  • The present invention relates to data processing and, more specifically, to scalable rule logicalization for asset health prediction where the logicalization includes creating human interpretable rules.
  • The ability to make sense of large amounts of data, or “big data” as it is often referred to, is a challenging task. With the ever-increasing numbers of available data sources and rapid, ongoing enhancements made in the computing power of data generation devices, as well as the wide variety of types of data (e.g., both structured and unstructured) that can be collected today, managing big data can require advanced techniques and technologies. Clearly, the ability to analyze and interpret these large amounts of complex and variable data has the potential to be of great value to an entity or entities responsible for or having an interest in the data. For example, in many industries that monitor the health of equipment or other assets, accurate analyses of this data can be used to predict and, thus, take measures to prevent equipment or asset failures.
  • SUMMARY
  • According to one embodiment of the present invention, a method is provided. The method includes aggregating data, via a computer processing device, from data sources, extracting a set of features from the data, projecting the features to a lower dimensional space, generating a prediction based on the projecting, logicalizing a decision boundary for the prediction, and estimating a confidence level of the prediction based on the decision boundary.
  • According to another embodiment of the present invention, a system is provided. The system includes a computer processing system communicatively coupled to data sources, and logic executable by the computer processing system. The logic is configured to implement a method. The method includes aggregating data from data sources, extracting a set of features from the data, projecting the features to a lower dimensional space, generating a prediction based on the projecting, logicalizing a decision boundary for the prediction, and estimating a confidence level of the prediction based on the decision boundary.
  • According to a further embodiment of the present invention, a computer program product is provided. The computer program product includes a storage medium embodied with machine-readable program instructions, which when executed by a computer causes the computer to implement a method. The method includes aggregating data from data sources, extracting a set of features from the data, projecting the features to a lower dimensional space, generating a prediction based on the projecting, logicalizing a decision boundary for the prediction, and estimating a confidence level of the prediction based on the decision boundary.
  • Additional features and advantages are realized through the techniques of the present invention. Other embodiments and aspects of the invention are described in detail herein and are considered a part of the claimed invention. For a better understanding of the invention with the advantages and the features, refer to the description and to the drawings.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
  • The subject matter which is regarded as the invention is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other features, and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
  • FIG. 1 depicts a block diagram of a system upon which predictive modeling for asset management may be implemented according to an embodiment of the present invention;
  • FIG. 2 depicts a flow diagram describing a process for implementing predictive modeling for asset management according to an embodiment of the present invention;
  • FIG. 3 depicts a failure rate control chart with sample data for identifying changes in failure rate of an asset according to an embodiment of the present invention;
  • FIG. 4 depicts components and functions for online learning and information fusion according to an embodiment of the present invention;
  • FIG. 5 depicts records of data that may be merged by information fusion techniques according to an embodiment of the present invention;
  • FIG. 6 depicts a flow diagram of a process for implementing alarm prediction processes according to an embodiment of the present invention; and
  • FIGS. 7A and 7B each depicts a two-dimensional chart of sampled data according to an embodiment of the present invention.
  • DETAILED DESCRIPTION
  • Exemplary embodiments provide predictive modeling using several analytical approaches including, e.g., correlation analysis, causal analysis, time series analysis, survival modeling, and machine learning techniques to automatically learn rules and build failure prediction models based on exploration of historical multi-detector measurements, equipment failure records, maintenance records, environmental conditions, etc. Additionally, the analytics and models can also be used for detecting root-causes of several failure modes of components, which can be proactively used by a maintenance organization to optimize trade-offs related to maintenance schedules, costs, and shop capacity.
  • According to an exemplary embodiment, predictive modeling for asset management (also referred to herein as “predictive modeling”) is provided. The predictive modeling provides the ability to analyze and interpret large amounts of complex and variable data concerning an asset or group of assets, as well as conditions surrounding the assets. In particular, the predictive modeling provides the ability to perform large-scale, multi-detector predictive modeling and related tasks to predict when one or more of the assets might fail. Further, the predictive modeling incorporates statistical learning to predict asset failures based on large-scale, multi-dimensional sparse time series data.
  • Effective utilization of data provides valuable tools for operational sustainability. The predictive modeling develops the concept of composite detectors and integrates large-scale information collected from multiple detectors to predict undesired conditions of equipment and unexpected events, such as alarms that cause service interruptions.
  • The exemplary predictive modeling techniques described herein may be implemented for any industry that collects and processes large amounts of data from detectors in order to determine and maintain the health of one or more assets. For example, the predictive modeling processes may be implemented by the railroad industry, airline industry, or other transportation industry. The exemplary predictive modeling processes also have applications in the area of manufacturing.
  • Turning now to FIG. 1, a system 100 upon which the predictive modeling processes may be implemented will now be described in an exemplary embodiment. The system 100 of FIG. 1 includes a host system 102 in communication with data sources 104A-104 n (referred to collectively as data sources 104) over one or more networks 110.
  • The host system 102 may be implemented as a high-speed computer processing device (e.g., a mainframe computer) that is capable of handling a large volume of data received from the data sources 104. The host system 102 may be implemented by any entity that collects and processes a large amount of data from a multitude of data sources 104 to manage, or may be offered as a service to such entity by, e.g., an application service provider (ASP).
  • The data sources 104 may include devices configured to capture raw data from aspects of the asset, as well as any conditions surrounding the asset. In the railroad industry, for example, assets may be railroad tracks, as well as cars that travel along the tracks (and their constituent parts). In the airline industry, the assets may include airplanes and corresponding parts that are inspected, as well as runway conditions. The data sources 104 may include detectors, such as probes, sensors, and other instrumentation that are configured to measure qualitative aspects of the assets or surrounding conditions, such as temperature, weight or load, strain, dimensions (e.g., indications of wear), sound, and images, to name a few. In the railroad industry, the measurements may be taken with regard to railroad track components and vehicle wheels. In this embodiment, detectors that may be used as sources of data include machine vision detectors (MVDs), wheel impact load detectors (WILDs), optical geometry detectors (OGDs), truck performance detectors (TPDs), acoustic bay detectors (ABDs), hot box detectors, warm bearing detectors, and hot wheel/cold wheel detectors. In addition to the qualitative aspects, the data sources 104 may capture time, physical location, object location, and other information regarding the subject of measurement, as will be described herein. In this regard, the data sources 104 reflect multi-dimensional detection devices, as they are configured to collect a wide variety of different types of information.
  • The data sources 104A-104 n may include (or may be coupled to) corresponding communication components 116A-116 n (referred to collectively as communication components 116) for transmitting captured data over one or more networks. In an embodiment, the communication components 116 may include, e.g., transceivers, antennae, and/or network cards for receiving and conveying data using wireless and/or wireline transmission technologies including radio frequency (RF), WiFi, Bluetooth, cellular, satellite, copper wiring, co-axial cabling, etc. For example, a probe on one of the data sources 104 collects data from a location (e.g., a location on a railroad track) and transfers the data to the corresponding communication component 116 for transmission over networks 110 to the host system 102. In an embodiment, as shown in FIG. 1, the networks 110 may include one or more reader devices 108 for receiving the data from the data sources 104. In the railroad industry, e.g., the reader devices 108 may be RF readers positioned at defined locations (e.g., at fixed-length intervals) along the railroad track. The RF readers 108 read data from corresponding data sources 104 (via the communication components 116, which may be RF antennae) as the data sources 104 (embedded in the vehicles) pass within communicative range of the reader devices 108.
  • In an embodiment, the data captured by the data sources 104 may be transmitted as raw data to the host system 102 or may be processed prior to transmission. The data sources 104A-104 n may also include corresponding computer processors 118A-118 n (collectively referred to as computer processors 118) for processing the raw data and/or formatting the data for transmission over the networks 110. Alternatively, if the data sources 104 do not include a computer processor, the captured data may be transmitted via the communication components 116 to a computer processor configured for receiving the data.
  • In another embodiment, some of the data sources 104 may alternatively include other information sources, such as cameras or portable communication devices (e.g., cellular telephones, smart phones, or other portable devices) operated by users who are in direct observation of the asset or surrounding conditions who have observed an event that may have an impact on safety. The data collected by the host system 102 from these portable devices may include texts, images, messages, or other information provided by a user of a communication device. For example, an observer near a railroad track may witness a previously unreported defect or anomaly, record an image of the defect, and transmit the image with date/time information, and alternatively a text description, to the host system 102 or another entity which forwards the information to the host system 102.
  • The networks 110 may include any type of networks, such as local area networks, wide area networks, virtual private networks, and the Internet. In addition, the networks 110 may be configured to support wireless communications, e.g., via radio frequency (RF) communications, cellular networks, satellite networks, and global positioning (GPS) systems.
  • The host system 102 executes logic 112 for implementing the exemplary predictive modeling, as well as other processes, as described herein. The logic 112 includes a user interface component for enabling authorized users to set preferences used in configuring data sources 104 employed in the processes described herein, as well as generating and executing predictive models, performing analysis on the histories of previously implemented models, and facilitating the generation of new models, or evolvement of existing models, to increase the ability for the managing entity to ensure reliable operation. The preferences may include designating a frequency of data collection by the data sources 104. The logic 112 may also be configured to utilize the information acquired from execution of the models to analyze and adopt maintenance and repair plans for components of the asset. These, and other features of the predictive modeling, will be described further herein.
  • The host system 102 is communicatively coupled to a storage device 114 that stores various data used in implementing the predictive modeling. For example, the storage device 114 may store models, performance histories (e.g., alarm histories, repair histories, etc.), and other information desired. The storage device 114 may be directly in communication with the host system 102 (e.g., via cabling) or may be logically addressable by the host system 102, e.g., as a consolidated data source over one or more networks 110.
  • Predictive models are generated from history data collected from the data sources 104. Patterns of data from the measurements and resulting repair work or maintenance schedules can be used in a predictive manner for estimating when maintenance should be performed. In addition, as new data is received, the predictive models can be updated to reflect any changes discovered. A user interface of the logic 112 may be used to present organized history data, as well as alert information. The created model may be stored as one of several stored models, e.g., in the storage device 114 of FIG. 1. As new data is received from the data sources 104, it can be applied to the predictive models and may be used to update the models in order to ascertain future maintenance needs or critical issues that require immediate attention.
  • Turning now to FIG. 2, a flow diagram describing a process for implementing the predictive modeling will now be described in an exemplary embodiment. In one embodiment, the services may provide a web-based user interface for receiving information from a user in creating and implementing a model. Once accessed, the user interface, via the logic 112, prompts a user through the process. The process assumes that history data has been collected over a period of time. The history data may include detector data, alarm information, and maintenance data. It will be understood that the data collected may be sparse time series data. For example, in the railroad industry, the detectors may not be evenly distributed across the network, thus the number of readings may vary dramatically across different locations in the railroad system. Also, for some types of detectors, the time series of readings may be sparse due to, e.g., infrequent use of the asset in which the detector readings are taken, as compared to other assets. The predictive modeling is configured to handle the sparsity in the detector data.
  • At step 202, the logic 112 generates a factor matrix for each univariate time series data in a set of sparse time series data collected from a group of detectors over time. In an embodiment, this may be implemented using supervised matrix factorization (SMF) techniques. In this example, let X denote the multi-dimensional time series from different types of detectors. These detectors generate p univariate time series (e.g., p influential factors) denoted as X=(X1, X2, . . . , Xp). Some time series could be sparse (e.g., they may be sparse as a result of being sparsely sampled over time, or they may represent incomplete or noisy data). Let Y be the label vector for the asset failures (e.g., 1 indicates a failure, and 0 indicates good condition). The SMF for the ith, i=1, 2, . . . p, univariate time series may be represented as:
  • $X_i \approx H_i V_i^{T}, \qquad Y \approx \mathrm{logit}(H_i W_i^{T})$

    $(H_i^{*}, V_i^{*}, W_i^{*}) = \underset{H_i, V_i, W_i}{\arg\min}\;\; \mu\, L_R(X_i, H_i, V_i^{T}) + (1 - \mu)\, L_{CA}\big(Y, \mathrm{logit}(H_i W_i^{T})\big) + \mathrm{Reg}(H_i, V_i, W_i)$
  • H_i is the latent representation of the time series, where each row defines the latent features of the original time series in X_i. Similarly, V_i is the latent representation of the time points. SMF is used to find optimal latent representation matrices that best approximate the matrices X and Y via a loss function optimization technique. The latent representation matrix H is a good estimate of the observed time series, and useful features, such as trend and diversification, may be extracted from H (a minimal numerical sketch follows the list of terms below). In the expression above:
      • 'T' denotes the transpose of a matrix;
      • W is a set of linear logistic regression weights;
      • logit is the cell-wise logistic function;
      • L_R is the reconstruction loss, which ensures that the latent feature matrices H and V can reconstruct X;
      • L_CA is the supervised classification accuracy loss term, which constrains the latent time series representation H such that a set of all-versus-one logistic regression weights W can maximize the classification accuracy on the data set;
      • Reg is the regularization term, which ensures that the latent matrices do not overfit; and
      • μ is the weight of L_R, with μ ∈ (0,1).
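  By way of illustration only, the supervised factorization of step 202 can be sketched with simple gradient descent, using a squared reconstruction loss for L_R and a logistic loss for L_CA; the library (NumPy), matrix sizes, learning rate, and the masking of missing readings are assumptions made for the sketch and are not taken from the disclosure.

```python
import numpy as np

def smf(X, y, k=5, mu=0.7, lam=0.1, lr=0.01, iters=2000, seed=0):
    """Minimal supervised matrix factorization sketch for one sparse series.

    X   : (n x t) observation matrix (rows = assets, cols = time points);
          NaN marks missing readings.
    y   : binary failure labels (1 = failure, 0 = good condition), one per row.
    mu  : weight between reconstruction loss L_R and classification loss L_CA.
    lam : weight of the regularization term Reg.
    """
    rng = np.random.default_rng(seed)
    n, t = X.shape
    H = rng.normal(scale=0.1, size=(n, k))    # latent representation of time series
    V = rng.normal(scale=0.1, size=(t, k))    # latent representation of time points
    W = rng.normal(scale=0.1, size=(1, k))    # logistic regression weights
    mask = ~np.isnan(X)                       # only observed cells enter L_R
    Xf = np.nan_to_num(X)
    y = y.reshape(-1, 1)

    for _ in range(iters):
        R = mask * (H @ V.T - Xf)             # reconstruction residual on observed cells
        P = 1.0 / (1.0 + np.exp(-(H @ W.T)))  # logit(H W^T)
        C = P - y                             # gradient of the logistic loss
        gH = mu * (R @ V) + (1 - mu) * (C @ W) + lam * H
        gV = mu * (R.T @ H) + lam * V
        gW = (1 - mu) * (C.T @ H) + lam * W
        H -= lr * gH
        V -= lr * gV
        W -= lr * gW

    # recompute the three loss terms at the optimum
    R = mask * (H @ V.T - Xf)
    P = 1.0 / (1.0 + np.exp(-(H @ W.T)))
    loss = (mu * np.sum(R ** 2)
            - (1 - mu) * np.sum(y * np.log(P + 1e-9) + (1 - y) * np.log(1 - P + 1e-9))
            + lam * (np.sum(H ** 2) + np.sum(V ** 2) + np.sum(W ** 2)))
    return H, V, W, loss
```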
  • At step 204, a subset of the time series data is identified through feature selection, which is determined based on a loss function. For each predictor X_i (the ith predictor), the minimum loss is calculated, and the predictors are then ranked in order of the optimized loss. The predictor importance indicates the relative importance of each predictor in predicting failures (e.g., the bad trucks in the railroad example), i.e., the best predictors to approximate the label vector Y in terms of the loss.
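  Continuing the sketch above, ranking predictors by their optimized loss is then a short post-processing step (series_list and y are assumed inputs; smf() is the illustrative function defined earlier, not part of the disclosure):

```python
# series_list: the p univariate time series X_1..X_p, each an (n x t) matrix with NaNs
losses = [smf(X_i, y)[3] for X_i in series_list]              # minimized SMF loss per predictor
ranking = sorted(range(len(losses)), key=losses.__getitem__)  # best predictors first
```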
  • At step 206, the logic 112 generates a predictive model from the subset of time series data. The predictive model is configured to predict a failure using, e.g., data mining, machine learning, and/or statistical modeling. Different predictive models may be generated for different asset failures. For example, in the railroad industry, data from multiple detectors (e.g., WILD, MV, and OGD) may be used in the analysis. Suppose a decision tree method is used to predict the occurrence of truck failures in three months with sparse time series data, and the prediction accuracy of the model is high for both training and testing data. The model may then correctly classify most bad truck records as failed in both the training and test datasets. Other predictive methods can be used as well, such as neural networks, Support Vector Machines (SVM), and statistical models (e.g., the Cox Proportional Hazards model or the Andersen-Gill model). The predictive model may then be used to render decisions regarding inspection and repair of the asset.
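  As one possible realization of step 206 (a sketch only; scikit-learn, the variable names, and the tree depth are assumptions, not part of the disclosure), a decision tree could be fit on features derived from the top-ranked latent representations:

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# H_selected: features extracted from the top-ranked factor matrices (assumed prepared)
X_train, X_test, y_train, y_test = train_test_split(
    H_selected, y, test_size=0.3, stratify=y, random_state=0)

tree = DecisionTreeClassifier(max_depth=5, class_weight="balanced", random_state=0)
tree.fit(X_train, y_train)                 # predicts failure within the chosen horizon
print(classification_report(y_test, tree.predict(X_test)))
```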
  • At step 208, the logic 112 receives new data from the detectors and compares the model predictions to actual data that is newly received. The new data may be real-time or near real-time data streamed from one or more of the data sources 104 to the host system 102 over the networks 110. As shown, for example, in FIG. 3, a failure rate chart 300 for an asset is generated based on the one-sample weighted rank test.
  • A non-parametric one-sample weighted rank test may be represented as:
  • $H_0: h_0(t) = h(t)$ for $t \le \tau$; $\quad H_1: h_0(t) < h(t)$ for $t \le \tau$

    $Z(\tau) = \dfrac{\sum_{i=1}^{D} W(t_i)\, d_i / Y(t_i) \;-\; \int_0^{\tau} W(s)\, h_0(s)\, ds}{\sqrt{\int_0^{\tau} W^{2}(s)\, h_0(s) / Y(s)\, ds}}$
  • An operational characteristics function is expressed as:
  • $n = \dfrac{(Z_{\alpha} + Z_{\beta})^{2} \cdot \int_0^{\tau} w^{2}(s)\, h_0(s) / y(s)\, ds}{\left\{ \int_0^{\tau} w(s)\, [h(s) - h_0(s)]\, ds \right\}^{2}}$
  • The statistic follows the standard normal distribution for large samples under H0. τ is selected as the largest failure time in the monitoring subgroup.
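  A direct numerical transcription of the statistic might look as follows (a sketch assuming the failure times t_i, failure counts d_i, at-risk counts Y(t_i), a weight function W, and a baseline hazard h_0 are available; the rectangle-rule integration and the unit-weight default are illustrative choices):

```python
import numpy as np

def one_sample_weighted_rank_z(times, d, Y, h0, W=lambda s: np.ones_like(s),
                               tau=None, grid=500):
    """Z(tau) for the one-sample weighted rank test described above.

    times : observed failure times t_i, assumed sorted ascending
    d     : number of failures d_i at each t_i
    Y     : number at risk Y(t_i) just before each t_i
    h0    : baseline hazard function h0(s)
    W     : weight function W(s); W = 1 gives the one-sample log-rank test
    """
    times, d, Y = map(np.asarray, (times, d, Y))
    tau = times.max() if tau is None else tau        # largest failure time in the subgroup
    keep = times <= tau
    observed = np.sum(W(times[keep]) * d[keep] / Y[keep])

    s = np.linspace(1e-9, tau, grid)                 # integration grid on (0, tau]
    ds = s[1] - s[0]
    Y_s = np.interp(s, times, Y)                     # interpolated number at risk Y(s)
    expected = np.sum(W(s) * h0(s)) * ds             # integral of W(s) h0(s) ds
    variance = np.sum(W(s) ** 2 * h0(s) / Y_s) * ds  # integral of W^2(s) h0(s) / Y(s) ds
    return (observed - expected) / np.sqrt(variance)

# a change is flagged when Z exceeds the upper control limit, e.g. UCL = 2.326
```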
  • At step 210, the logic 112 determines a change in a failure rate based on a one-sample weighted rank test. At step 212, it is determined if the change exceeds a threshold value (referred to herein as ‘upper control limit’). The threshold value may be defined by an authorized user of the logic 112. If the change does not exceed the threshold value, the process returns to step 208.
  • If, however, the change exceeds the threshold value, the logic 112 updates the predictive model to reflect the change at step 214. As shown in FIG. 3, the failure rate of the asset worsens over time, as the Z value gradually increases, and at subgroup 63 it exceeds the upper control limit (UCL = 2.326) for the first time. 'Subgroup' is a term used in control charts for Statistical Process Control (SPC); a subgroup is a sample with a fixed sample size (i.e., the number of observations is fixed). In this embodiment, the subgroup represents a sample containing a fixed number of failures.
  • The model can be updated by using, e.g., the Bayesian inference method as shown below (where Φ denotes the predictive model parameters which need to be updated):
      • p(Φ|D, M)∝p(D|M, Φ)p(Φ|M)
      • D represents the data, and M represents the model.
      • p(Φ|D, M) represents the updated joint probability density function.
  • In addition, p(D|M, Φ) represents the data likelihood function based on the performance model, and the function p(Φ|M) represents the prior probability density function selected for the model.
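  One hedged way to carry out this update is a grid approximation of the posterior: evaluate the likelihood of the newly received data over a grid of candidate parameter values, multiply by the prior, and renormalize. The exponential failure-rate likelihood below is an illustrative stand-in for the performance model p(D|M, Φ), not the parameterization used in the disclosure.

```python
import numpy as np

def grid_posterior_update(failure_times, prior_pdf, phi_grid):
    """p(phi | D, M) proportional to p(D | M, phi) * p(phi | M), on a parameter grid."""
    t = np.asarray(failure_times, dtype=float)
    # exponential likelihood as an illustrative performance model p(D | M, phi)
    log_lik = len(t) * np.log(phi_grid) - phi_grid * t.sum()
    post = np.exp(log_lik - log_lik.max()) * prior_pdf   # unnormalized posterior
    dphi = phi_grid[1] - phi_grid[0]
    return post / (post.sum() * dphi)                    # normalized posterior density

phi = np.linspace(1e-4, 1.0, 1000)        # candidate failure rates (the parameter phi)
prior = np.ones_like(phi)                 # flat prior p(phi | M)
posterior = grid_posterior_update([2.3, 5.1, 1.7], prior, phi)
```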
  • As indicated above, the predictive modeling process of FIG. 2 assumes that history data has been collected over a period of time and may include, e.g., detector data, alarm information, and maintenance data. In addition, the data collected may be sparse time series data. An online learning system may be employed using information fusion techniques to integrate the history data that is received from multiple types of disparate detection devices. The online learning system and process is configured to integrate information collected from spatially- and temporally-incompatible detection devices to enable predictive maintenance for asset management. The online learning system and process utilizes historical detector data along with failure data to determine patterns of detector readings that may be subcritical but that lead to failures, across multiple detectors with sparse sampling. The fusion techniques provide the ability to study assets that move across the detector network and enable information from these assets to be integrated across time and space. By fusing this information collected from multiple detectors, an integrated insight into equipment conditions can be gleaned. In addition, the online learning system and process combines offline and online learning engines to generate failure alerts for equipment predictive maintenance.
  • The online learning system and process is described herein with respect to the railroad industry. However, it will be understood that the online learning system may be adapted for other industries as well. Thus, the embodiments described herein are for illustrative purposes and are not intended to limit the scope thereof.
  • Turning now to FIG. 4, a system 400, and functional components thereof, through which the online learning and fusion processes may be implemented will now be described in an embodiment.
  • The system 400 includes an integrated data model 402 that is generated from a variety of data 404 collected offline. As shown in FIG. 4, the data 404 includes wayside detector data 404 a, traffic and network data 404 b, track inspection data 404 c, weather data 404 d, set-out and failure data 404 e, and tear down/repair data 404 f. It will be understood that additional (or fewer) data elements may be employed to realize the advantages of the embodiments described herein. The data 404 may be collected by the data sources 104 of FIG. 1 and transmitted to the host system 102 for processing by the logic 112. The logic 112 generates the integrated data model 402 as described herein.
  • The integrated data model 402 is generated in part by merging disparate data from multi-dimensional detection devices (e.g., data sources 104 of FIG. 1). The data sources 104 collect information, which data is stored as history data in one or more storage locations (e.g., storage device 114). The data may be stored in various tables. FIG. 5 illustrates tables of sample data to be merged in a railroad industry environment. A first table 502 provides data regarding a vehicle wheel (HOT_WHEEL) collected from January 1st through December 31st of a given year. The detector used in the collection may be a hot wheel detector (HWD) that is attached to a railroad track at a specified, fixed location, and includes a temperature sensor to measure temperature of the wheel as it passes the location on the track. The information stored in the table 502 may include message identification information, equipment identification information, and temperature measured, to name a few.
  • A second table 504 provides data regarding a vehicle axle (HBD_AXLE) collected from January 15th through October 30th of the same given year. The detector used in the collection may be a hot box detector (HBD) for an axle that is attached to a railroad track at a specified, fixed location, and includes a temperature sensor to measure temperature of the axle as it passes the location on the track.
  • A third table 506 provides data regarding impact load for a wheel (WILD_WHL) collected from January 1st through December 31st of two years covering the same given year. The detector used in the collection may be a wheel impact load detector (WILD) that is attached to a railroad track at a specified, fixed location, and includes a sensor that determines an amount of load or force on the track (e.g., measured in KIPS).
  • A fourth table 507 provides data regarding the equipment or railcar, collected over the same two years as the preceding table. The detector used in the collection may be a wheel impact load detector (WILD) that is attached to a railroad track at a specified, fixed location, and includes a sensor that determines measurements at the equipment level, such as equipment speed.
  • A fifth table 510 provides data regarding noise signatures emitted by bearings, collected over the same two years. The detector used in the collection may be an acoustic bearing detector (ABD) that is attached to a railroad track at a specified, fixed location, and includes a sensor that captures any anomalies in the noise signature emitted by a bearing in motion; the detector processes this information internally and issues an alarm when an anomalous acoustic signature is detected.
  • As indicated above, the detectors associated with these tables acquire very different information (e.g., temperature readings and load bearing information). The logic 112 is configured to merge the data in these tables where shared fields are known. For example, the data in table 502 can be merged with data in table 504 through common fields HBD_MSG_ID, EQP_INIT, EQP_NBR, EQP_AXLE_NBR, and AXLE_SIDE, which occur in both tables. Likewise, tables 502 and 504 may be merged with table 506 through common fields EQP_AXLE_NBR and AXLE_SIDE. In addition, table 506 can be merged with table 507 through common fields EDR_MSG_ID and EQP_SEQ_NBR, as well as with table 510 through common fields EDR_MSG_ID, EQP_AXLE_NBR, and AXLE_SIDE. By way of example, using the vehicle identifier, monthly data regarding each vehicle can be aggregated for each detector.
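  The merge logic described above maps naturally onto relational joins. A pandas sketch is shown below; the column names follow the common fields of FIG. 5, while the DataFrame names, the READ_DT timestamp column, and the monthly mean aggregation are assumptions made for illustration.

```python
import pandas as pd

# hot_wheel (502), hbd_axle (504), wild_whl (506), wild_eqp (507), abd (510):
# DataFrames assumed loaded from the detector history store
wheel_axle = hot_wheel.merge(
    hbd_axle, on=["HBD_MSG_ID", "EQP_INIT", "EQP_NBR", "EQP_AXLE_NBR", "AXLE_SIDE"])

merged = wheel_axle.merge(wild_whl, on=["EQP_AXLE_NBR", "AXLE_SIDE"])
merged = merged.merge(wild_eqp, on=["EDR_MSG_ID", "EQP_SEQ_NBR"])
merged = merged.merge(abd, on=["EDR_MSG_ID", "EQP_AXLE_NBR", "AXLE_SIDE"])

# aggregate monthly per vehicle for each detector (READ_DT is an assumed timestamp column)
monthly = (merged
           .assign(month=pd.to_datetime(merged["READ_DT"]).dt.to_period("M"))
           .groupby(["EQP_INIT", "EQP_NBR", "month"])
           .mean(numeric_only=True))
```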
  • The logic 112 utilizes the integrated data model 402 to perform failure pattern analysis 406 and failure causal analysis 410. Insights from the perspectives of preventative maintenance, procurement decision, and railway operations can be obtained by discovering correlations in the historical data that associates failures with equipment physical parameters (e.g., weight, flange thickness, flange height, rim thickness, etc.), equipment operation parameters (e.g., speed, dynamic load, bearing temperature, etc.), and external parameters (e.g., weather, usage history). Traffic & network data 404 b includes data that measures the traffic volumes or number of railcars passing through the rail segments. Track inspection data 404 c provides inspection records which may indicate the condition of the tracks. Weather data 404 d includes any weather-related information that may have an impact on railway operating conditions (e.g., those that might result in derailment). The set-out & failure data 404 e and the tear down/repair data 404 f provide maintenance records including equipment failures and repair information.
  • Failure pattern analysis 406, subject matter expert (SME)-rendered decisions 408, failure causal analyses 410, learning failure prediction models 412, and the failure causal map 414 are associated with an offline learning engine based on the large volume of data collected for these elements. The failure pattern analysis component 406 of the offline learning engine is an analytics engine configured to discover failure patterns, such as seasonality patterns of failures. The failure causal analyses component 410 of the offline learning engine identifies the factors that drive the patterns while leveraging the SME knowledge 408. The failure causal map 414 provides a tool to visualize the causal factors and failure patterns. The learning failure prediction models 412 develop the failure prediction engine based on the failure patterns and causal factors. Once the offline learning engine is developed, it is used in an online fashion with real-time data (e.g., live sensor data 418). When the detector data 418 is received, it is fed to the analytical models 420. Prediction outputs and decision recommendations resulting from the models 420 are displayed in a predicted failure/optimized preventative maintenance program 416.
  • In order to reduce immediate service interruptions and to provide better prediction of asset failures, one goal is to maximize the occurrence of crew set-outs while reducing inspection and maintenance costs. With reference to the railroad industry, e.g., the components of analytics involved in this goal relate to alarm prediction, bad truck prediction, and bad wheel prediction (as described above with respect to FIGS. 1-3). In alarm prediction, multiple detectors (HBD, ABD, and WILD) can aid in predicting the most severe alarm related to hot bearings within a meaningful amount of time in advance (e.g., 7 days in advance of the actual alarm/incident occurrence) to reduce immediate train stops. In bad truck prediction, wheels and trucks are replaced when they create high impact or wear out; wheel movement error, wheel dimension, and wheel impact load data from multiple detectors, such as MV, OGD, TPD, and WILD, may be used to detect truck performance issues earlier. In bad wheel prediction, patterns in wheel dimensions, movement errors, and wheel impact load that predict wheel defects earlier may be determined from data received from multiple detectors, such as MV, OGD, and WILD.
  • Railroads issue Level 1 (L1) alarms when the detector readings reach the most severe category and thus immediate train stoppage is generally required. Predicting an L1 alarm in advance is desirable so that operators have sufficient time to respond. One goal in developing an alarm prediction model is to keep false alarm rates low due to constraints in corresponding resources. Another goal is to provide human interpretable rules to facilitate decision processes by operators.
  • In an embodiment, alarm prediction processes are provided. The alarm prediction processes are configured to accomplish the above-referenced goals. Alarm prediction may be summarized as a classification problem where one class relates to detector readings history with alarms, and the other class relates to detector readings history without alarms. The exemplary alarm prediction processes utilize Support Vector Machine (SVM) techniques. In the alarm prediction, two sets of parameters are provided and may be customized. One set is the prediction time window (i.e., how many days in advance the alarm prediction is generated). Based on the trade-offs between operational constraints versus accuracy of prediction, the process offers predictions for 3 or 7 days in advance, which in turn may provide enough buffering time to prepare for inspections based on operation conditions. The second set of parameters is the historical detector reading time window that indicates how many days of past detector readings may be used to provide a forecast. Based on the trade-offs between availability of historic data in detector data storage systems versus the accuracy of prediction, the process may include two options. For purposes of illustration the options are 7 days and 14 days. By combining the two sets of parameters, there are now four settings, i.e., 7-7, 7-3, 14-7, and 14-3. The first number of each setting indicates the reading time window, and the second number in each setting reflects the prediction time window. For example, 7-3 means using the past 7 days of readings, an alarm prediction of 3 days in the future can be provided.
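  The two parameter sets reduce to a small configuration; a sketch with illustrative names only:

```python
# (reading_window_days, prediction_window_days) for the four settings described above
SETTINGS = {"7-7": (7, 7), "7-3": (7, 3), "14-7": (14, 7), "14-3": (14, 3)}

reading_days, horizon_days = SETTINGS["7-3"]   # past 7 days of readings, predict 3 days ahead
```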
  • Turning now to FIG. 6, the alarm prediction processes will now be described in an embodiment. At step 602, data is aggregated from detectors (e.g., the data sources 104 of FIG. 1, including the data elements 404 in FIG. 4), and features are extracted (e.g., using quantiles) for each numeric value variable. The features may each be a vector of equal length. In the feature extraction, historical multi-detector readings, e.g., from ABD, HBD, and WILD detectors, are combined, and features are extracted and aggregated using quantiles for each numeric value variable. At step 604, sample features are linearly projected to a lower dimensional space while maintaining a comparable learning performance (e.g., preserving the learned non-linear decision boundary), which in turn may reduce both the time and memory complexities required by the learning model.
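  Steps 602 and 604 might be sketched as follows, with quantiles as the fixed-length features and PCA standing in for the linear projection (scikit-learn, the quantile levels, the number of components, and the all_windows/y inputs are assumptions made for illustration; the disclosure does not prescribe a particular projection):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def quantile_features(window, qs=(0.1, 0.25, 0.5, 0.75, 0.9)):
    """Fixed-length feature vector (quantiles) for one variable's reading window."""
    return np.quantile(window, qs)

def build_sample(detector_windows):
    """Concatenate quantile features across detector readings (e.g., ABD, HBD, WILD)."""
    return np.concatenate([quantile_features(w) for w in detector_windows])

# all_windows: per-sample lists of reading windows; y: 1 if an L1 alarm followed in the horizon
X_raw = np.vstack([build_sample(w) for w in all_windows])
proj = PCA(n_components=10).fit(X_raw)        # linear projection to a lower dimensional space
X_low = proj.transform(X_raw)

svm = SVC(kernel="rbf", class_weight="balanced").fit(X_low, y)   # step 606 classifier
```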
  • At step 606, a prediction is generated based on the sample's location in the feature space relative to the support vectors (e.g., the key samples that lie in the border area between positives and negatives). At step 608, a decision boundary is logicalized: human interpretable rules are extracted through grid searching given the complex SVM classification results. As shown in FIG. 7A, for example, a grid 700A illustrates a feature space in which all blocks constitute the feasible feature space, and each block is a sample. Based on learning decisions, positive samples are darkened (704 a). A curve 706 a represents a separating or decision boundary. The feature space of the grid 700A illustrates a coarse logical rule search. Using the same two-dimensional learning problem, a grid 700B in FIG. 7B illustrates a feature space comprising smaller (finer grid search) blocks 702 b. In comparison to the grid 700A, the decision boundary 706 b is more precise. The rule logicalization is, thus, scalable to a desired granular level.
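  A sketch of the grid-based rule logicalization of step 608 is shown below, reusing the svm and X_low names from the previous sketch; scanning two feature dimensions while holding the others at their medians is an illustrative simplification, and increasing bins corresponds to moving from the coarse grid of FIG. 7A toward the finer grid of FIG. 7B.

```python
import numpy as np

def logicalize(svm, X_low, i=0, j=1, bins=10):
    """Label each cell of an i-j feature grid with the SVM decision and emit rules."""
    xi = np.linspace(X_low[:, i].min(), X_low[:, i].max(), bins + 1)
    xj = np.linspace(X_low[:, j].min(), X_low[:, j].max(), bins + 1)
    base = np.median(X_low, axis=0)           # hold the remaining features at their medians
    rules = []
    for a0, a1 in zip(xi[:-1], xi[1:]):
        for b0, b1 in zip(xj[:-1], xj[1:]):
            cell = base.copy()
            cell[i], cell[j] = (a0 + a1) / 2, (b0 + b1) / 2
            if svm.predict(cell.reshape(1, -1))[0] == 1:
                rules.append(f"IF {a0:.2f} <= f{i} < {a1:.2f} "
                             f"AND {b0:.2f} <= f{j} < {b1:.2f} THEN predict L1 alarm")
    return rules

coarse_rules = logicalize(svm, X_low, bins=5)    # FIG. 7A-style coarse search
fine_rules = logicalize(svm, X_low, bins=20)     # FIG. 7B-style finer search
```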
  • From execution of the rules, the logic 112 may calculate the probability or risk. For example, the logic 112 may predict whether a bearing will issue an L1 alarm within a defined future time period based on its location in the feature space relative to the support vectors (i.e., the key samples that lie in the border area between positives and negatives). In addition to predicting whether an alarm will be issued or not, the corresponding confidence is estimated based on the relative position to the support vectors at step 610.
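  The confidence estimate of step 610 can be sketched from the signed distance to the separating boundary (the decision_function of the fitted SVM), which reflects a sample's position relative to the support vectors; the monotone mapping to (0.5, 1] below and the X_new_raw input are illustrative assumptions, not the calibration used in the disclosure.

```python
import numpy as np

# signed distance of each new sample to the decision boundary; a larger |margin| means
# the sample lies farther from the support vectors, hence a higher-confidence prediction
margin = svm.decision_function(proj.transform(X_new_raw))
prediction = (margin > 0).astype(int)                 # 1 = L1 alarm predicted
confidence = 1.0 / (1.0 + np.exp(-np.abs(margin)))    # simple monotone mapping to (0.5, 1]
```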
  • As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
  • The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
  • The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
  • The flow diagrams depicted herein are just one example. There may be many variations to this diagram or the steps (or operations) described therein without departing from the spirit of the invention. For instance, the steps may be performed in a differing order or steps may be added, deleted or modified. All of these variations are considered a part of the claimed invention.
  • While the preferred embodiment of the invention has been described, it will be understood that those skilled in the art, both now and in the future, may make various improvements and enhancements which fall within the scope of the claims which follow. These claims should be construed to maintain the proper protection for the invention first described.

Claims (12)

What is claimed is:
1. A method, comprising:
aggregating data, via a computer processing device, from data sources;
extracting a set of features from the data;
projecting the features to a lower dimensional space;
generating a prediction based on the projecting;
logicalizing a decision boundary for the prediction; and
estimating a confidence level of the prediction based on the decision boundary.
2. The method of claim 1, wherein data sources reside within a railroad environment.
3. The method of claim 2, wherein the data sources include detectors operating within the railroad environment, the detectors comprising at least one of:
a machine vision detector;
a wheel impact load detector;
an optical geometry detector;
a truck performance detector;
an acoustic bay detector;
a hot box detector;
a warm bearing detector;
a hot wheel detector; and
a cold wheel detector.
4. The method of claim 1, wherein the prediction is for a failure, the prediction generated by machine learning techniques, performing a grid search and extracting human interpretable rules using Support Vector Machine classification results.
5. The method of claim 4, wherein the confidence is estimated based on a relative position to support vectors associated with the prediction.
6. The method of claim 1, wherein the features are extracted using quantiles.
7. A computer program product comprising a storage medium embodied with machine-readable program instructions, which when executed by a computer, causes the computer to implement a method, the method comprising:
aggregating data from data sources;
extracting a set of features from the data;
projecting the features to a lower dimensional space;
generating a prediction based on the projecting;
logicalizing a decision boundary for the prediction; and
estimating a confidence level of the prediction based on the decision boundary.
8. The computer program product of claim 7, wherein data sources reside within a railroad environment.
9. The computer program product of claim 8, wherein the data sources include detectors operating within the railroad environment, the detectors comprising at least one of:
a machine vision detector;
a wheel impact load detector;
an optical geometry detector;
a truck performance detector;
an acoustic bay detector;
a hot box detector;
a warm bearing detector;
a hot wheel detector; and
a cold wheel detector.
10. The computer program product of claim 7, wherein the prediction is for a failure, the prediction generated by machine learning techniques, performing a grid search and extracting human interpretable rules using Support Vector Machine classification results.
11. The computer program product of claim 10, wherein the confidence is estimated based on a relative position to support vectors associated with the prediction.
12. The computer program product of claim 7, wherein the features are extracted using quantiles.
US13/962,203 2013-01-11 2013-08-08 Scalable rule logicalization for asset health prediction Abandoned US20140200952A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/962,203 US20140200952A1 (en) 2013-01-11 2013-08-08 Scalable rule logicalization for asset health prediction

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201361751704P 2013-01-11 2013-01-11
US13/873,829 US20140200951A1 (en) 2013-01-11 2013-04-30 Scalable rule logicalization for asset health prediction
US13/962,203 US20140200952A1 (en) 2013-01-11 2013-08-08 Scalable rule logicalization for asset health prediction

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/873,829 Continuation US20140200951A1 (en) 2013-01-11 2013-04-30 Scalable rule logicalization for asset health prediction

Publications (1)

Publication Number Publication Date
US20140200952A1 true US20140200952A1 (en) 2014-07-17

Family

ID=51165801

Family Applications (10)

Application Number Title Priority Date Filing Date
US13/873,859 Expired - Fee Related US9561810B2 (en) 2013-01-11 2013-04-30 Large-scale multi-detector predictive modeling
US13/873,829 Abandoned US20140200951A1 (en) 2013-01-11 2013-04-30 Scalable rule logicalization for asset health prediction
US13/873,851 Expired - Fee Related US9187104B2 (en) 2013-01-11 2013-04-30 Online learning using information fusion for equipment predictive maintenance in railway operations
US13/906,883 Expired - Fee Related US9744978B2 (en) 2013-01-11 2013-05-31 Railway track geometry defect modeling for predicting deterioration, derailment risk, and optimal repair
US13/907,056 Abandoned US20140200828A1 (en) 2013-01-11 2013-05-31 Asset failure prediction with location uncertainty
US13/962,310 Expired - Fee Related US9764746B2 (en) 2013-01-11 2013-08-08 Railway track geometry defect modeling for predicting deterioration, derailment risk, and optimal repair
US13/962,203 Abandoned US20140200952A1 (en) 2013-01-11 2013-08-08 Scalable rule logicalization for asset health prediction
US13/962,252 Expired - Fee Related US9463815B2 (en) 2013-01-11 2013-08-08 Large-scale multi-detector predictive modeling
US13/962,229 Abandoned US20140200873A1 (en) 2013-01-11 2013-08-08 Online learning using information fusion for equipment predictive maintenance in railway operations
US13/962,287 Expired - Fee Related US10414416B2 (en) 2013-01-11 2013-08-08 Asset failure prediction with location uncertainty

Family Applications Before (6)

Application Number Title Priority Date Filing Date
US13/873,859 Expired - Fee Related US9561810B2 (en) 2013-01-11 2013-04-30 Large-scale multi-detector predictive modeling
US13/873,829 Abandoned US20140200951A1 (en) 2013-01-11 2013-04-30 Scalable rule logicalization for asset health prediction
US13/873,851 Expired - Fee Related US9187104B2 (en) 2013-01-11 2013-04-30 Online learning using information fusion for equipment predictive maintenance in railway operations
US13/906,883 Expired - Fee Related US9744978B2 (en) 2013-01-11 2013-05-31 Railway track geometry defect modeling for predicting deterioration, derailment risk, and optimal repair
US13/907,056 Abandoned US20140200828A1 (en) 2013-01-11 2013-05-31 Asset failure prediction with location uncertainty
US13/962,310 Expired - Fee Related US9764746B2 (en) 2013-01-11 2013-08-08 Railway track geometry defect modeling for predicting deterioration, derailment risk, and optimal repair

Family Applications After (3)

Application Number Title Priority Date Filing Date
US13/962,252 Expired - Fee Related US9463815B2 (en) 2013-01-11 2013-08-08 Large-scale multi-detector predictive modeling
US13/962,229 Abandoned US20140200873A1 (en) 2013-01-11 2013-08-08 Online learning using information fusion for equipment predictive maintenance in railway operations
US13/962,287 Expired - Fee Related US10414416B2 (en) 2013-01-11 2013-08-08 Asset failure prediction with location uncertainty

Country Status (2)

Country Link
US (10) US9561810B2 (en)
WO (1) WO2014110099A2 (en)

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11055450B2 (en) * 2013-06-10 2021-07-06 ABB Power Grids Switzerland AG Industrial asset health model update
US10534361B2 (en) * 2013-06-10 2020-01-14 ABB Schweiz AG Industrial asset health model update
US20150066449A1 (en) * 2013-08-29 2015-03-05 General Electric Company Solar farm and method for forecasting solar farm performance
US20160121912A1 (en) * 2013-11-27 2016-05-05 Solfice Research, Inc. Real time machine vision system for train control and protection
US11308250B2 (en) * 2013-11-27 2022-04-19 Falkonry Inc. Learning expected operational behavior of machines from generic definitions and past behavior
US10086857B2 (en) * 2013-11-27 2018-10-02 Shanmukha Sravan Puttagunta Real time machine vision system for train control and protection
US20180370552A1 (en) * 2013-11-27 2018-12-27 Solfice Research, Inc. Real time machine vision system for vehicle control and protection
US20150170090A1 (en) * 2013-12-17 2015-06-18 Intellisense.Io Ltd Optimizing efficiency of an asset and an overall system in a facility
US10379008B2 (en) * 2014-12-15 2019-08-13 Nippon Steel Corporation Railway vehicle condition monitoring apparatus
US11399172B2 (en) 2015-02-20 2022-07-26 Tetra Tech, Inc. 3D track assessment apparatus and method
US11259007B2 (en) 2015-02-20 2022-02-22 Tetra Tech, Inc. 3D track assessment method
US11196981B2 (en) 2015-02-20 2021-12-07 Tetra Tech, Inc. 3D track assessment apparatus and method
US10878385B2 (en) * 2015-06-19 2020-12-29 Uptake Technologies, Inc. Computer system and method for distributing execution of a predictive model
US10196078B2 (en) 2015-11-30 2019-02-05 Progress Rail Locomotive Inc. Diagnostic system for a rail vehicle
US11537587B2 (en) * 2015-12-14 2022-12-27 Amazon Technologies, Inc. Techniques and systems for storage and processing of operational data
US10642813B1 (en) * 2015-12-14 2020-05-05 Amazon Technologies, Inc. Techniques and systems for storage and processing of operational data
US9718486B1 (en) 2016-02-01 2017-08-01 Electro-Motive Diesel, Inc. System for analyzing health of train
US10606254B2 (en) * 2016-09-14 2020-03-31 Emerson Process Management Power & Water Solutions, Inc. Method for improving process/equipment fault diagnosis
US20180074482A1 (en) * 2016-09-14 2018-03-15 Emerson Process Management Power & Water Solutions, Inc. Method for Improving Process/Equipment Fault Diagnosis
US10338982B2 (en) * 2017-01-03 2019-07-02 International Business Machines Corporation Hybrid and hierarchical outlier detection system and method for large scale data protection
US11016834B2 (en) 2017-01-03 2021-05-25 International Business Machines Corporation Hybrid and hierarchical outlier detection system and method for large scale data protection
US10749881B2 (en) 2017-06-29 2020-08-18 Sap Se Comparing unsupervised algorithms for anomaly detection
US10580228B2 (en) * 2017-07-07 2020-03-03 The Boeing Company Fault detection system and method for vehicle system prognosis
US11113905B2 (en) * 2017-07-07 2021-09-07 The Boeing Company Fault detection system and method for vehicle system prognosis
WO2019185873A1 (en) * 2018-03-29 2019-10-03 Konux Gmbh System and method for detecting and associating railway related data
US11919551B2 (en) 2018-06-01 2024-03-05 Tetra Tech, Inc. Apparatus and method for gathering data from sensors oriented at an oblique angle relative to a railway track
US11560165B2 (en) 2018-06-01 2023-01-24 Tetra Tech, Inc. Apparatus and method for gathering data from sensors oriented at an oblique angle relative to a railway track
US11305799B2 (en) 2018-06-01 2022-04-19 Tetra Tech, Inc. Debris deflection and removal method for an apparatus and method for gathering data from sensors oriented at an oblique angle relative to a railway track
US11377130B2 (en) 2018-06-01 2022-07-05 Tetra Tech, Inc. Autonomous track assessment system
US20190378349A1 (en) * 2018-06-07 2019-12-12 GM Global Technology Operations LLC Vehicle remaining useful life prediction
US11507716B2 (en) 2018-07-09 2022-11-22 International Business Machines Corporation Predicting life expectancy of machine part
US11300481B2 (en) 2019-01-25 2022-04-12 Wipro Limited Method and system for predicting failures in diverse set of asset types in an enterprise
US11500366B2 (en) * 2019-03-26 2022-11-15 GE Aviation Systems Limited Method and system for fusing disparate industrial asset event information
US11782160B2 (en) 2019-05-16 2023-10-10 Tetra Tech, Inc. System and method for generating and interpreting point clouds of a rail corridor along a survey path
US11169269B2 (en) 2019-05-16 2021-11-09 Tetra Tech, Inc. System and method for generating and interpreting point clouds of a rail corridor along a survey path
CN110930258A (en) * 2019-11-15 2020-03-27 Anhui Haihui Financial Investment Group Co., Ltd. Accounts receivable financing cash scale prediction method and system

Also Published As

Publication number Publication date
US9463815B2 (en) 2016-10-11
US20140200829A1 (en) 2014-07-17
US20140200830A1 (en) 2014-07-17
US20140200869A1 (en) 2014-07-17
US20140200951A1 (en) 2014-07-17
US20140200873A1 (en) 2014-07-17
US9744978B2 (en) 2017-08-29
US9561810B2 (en) 2017-02-07
US9187104B2 (en) 2015-11-17
US20140200872A1 (en) 2014-07-17
WO2014110099A3 (en) 2014-10-16
US20140200870A1 (en) 2014-07-17
WO2014110099A2 (en) 2014-07-17
US10414416B2 (en) 2019-09-17
US20140200827A1 (en) 2014-07-17
US9764746B2 (en) 2017-09-19
US20140200828A1 (en) 2014-07-17

Similar Documents

Publication Publication Date Title
US9463815B2 (en) Large-scale multi-detector predictive modeling
Ghofrani et al. Recent applications of big data analytics in railway transportation systems: A survey
Yan et al. Emerging approaches applied to maritime transport research: Past and future
US11144378B2 (en) Computer system and method for recommending an operating mode of an asset
Tiddens et al. Exploring predictive maintenance applications in industry
Bemment et al. Improving the reliability and availability of railway track switching by analysing historical failure data and introducing functionally redundant subsystems
US20180060703A1 (en) Detection of Anomalies in Multivariate Data
US10592870B2 (en) System and method to analyze and detect anomalies in vehicle service procedures
Márquez et al. Designing CBM plans, based on predictive analytics and big data tools, for train wheel bearings
US10579961B2 (en) Method and system of identifying environment features for use in analyzing asset operation
US20190122138A1 (en) Computer System &amp; Method for Detecting Anomalies in Multivariate Data
Brahimi et al. Development of a prognostics and health management system for the railway infrastructure—Review and methodology
Shetty Predictive maintenance in the IoT era
Gopalakrishnan et al. IIoT Framework Based ML Model to Improve Automobile Industry Product
Yin et al. A new Wasserstein distance- and cumulative sum-dependent health indicator and its application in prediction of remaining useful life of bearing
US11810061B2 (en) Pre-trip inspection prediction and PTI reduction systems, processes and methods of use
Ren et al. Rail gage-based risk detection using iPhone 12 Pro
Villarejo et al. Context-driven decisions for railway maintenance
Sun et al. A data-driven framework for tunnel infrastructure maintenance
Hwang et al. Building an Analytical Platform of Big Data for Quality Inspection in the Dairy Industry: A Machine Learning Approach
Dirnfeld et al. Integrating AI and DTs: challenges and opportunities in railway maintenance application and beyond
Lourenço et al. Time series data mining for railway wheel and track monitoring: a survey
Enshaei et al. A comprehensive review on advanced maintenance strategies for smart railways
Chellaswamy et al. Optimized railway track condition monitoring and derailment prevention system supported by cloud technology
McMahon Development of Missing Data Imputation Models for Railway Asset Management Systems

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HAMPAPUR, ARUN;LI, HONGFEI;PARIKH, DHAIVAT P.;AND OTHERS;SIGNING DATES FROM 20130503 TO 20130602;REEL/FRAME:030969/0458

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION