WO2024063787A1 - Asset structure behavior learning and inference management system - Google Patents

Asset structure behavior learning and inference management system

Info

Publication number
WO2024063787A1
Authority
WO
WIPO (PCT)
Prior art keywords
diagnostic
asset
predictive
prescriptive
model
Prior art date
Application number
PCT/US2022/044602
Other languages
French (fr)
Inventor
Mauro Arduino DAMO
Mohan WANG
Wei Lin
Original Assignee
Hitachi Vantara Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Vantara Llc filed Critical Hitachi Vantara Llc
Priority to PCT/US2022/044602 priority Critical patent/WO2024063787A1/en
Publication of WO2024063787A1 publication Critical patent/WO2024063787A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/01Probabilistic graphical models, e.g. probabilistic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/20Administration of product repair or maintenance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00Details relating to CAD techniques
    • G06F2111/08Probabilistic or stochastic CAD
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00Details relating to CAD techniques
    • G06F2111/10Numerical modelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2119/00Details relating to the type or aim of the analysis or the optimisation
    • G06F2119/02Reliability analysis or reliability optimisation; Failure analysis, e.g. worst case scenario performance, failure mode and effects analysis [FMEA]

Definitions

  • the present disclosure is generally directed to asset management systems for factory and production lines, and more specifically, to an asset structure behavior learning and inference management system.
  • Example implementations described herein are directed to solving the whole failure mode problem.
  • The Asset Structure Behavior Learning and Inference Management System (ASBLIMS) was designed to address rare failure events on complex systems that are difficult to detect, diagnose, and resolve.
  • Example implementations described herein can provide a gateway to store the technical information required to solve industrial problems and to encapsulate the knowledge of an aging Subject Matter Expert (SME) workforce, capturing not just the reasoning but also the underlying knowledge.
  • Example implementations described herein can also solve the challenge of aggregating information from different domains.
  • SMEs from different fields work together to make sense of all the information for detection, diagnosis, and solutioning, in order to make an actionable decision.
  • the example implementations described herein involve a system that can provide decision makers with the best sequence of decisions to solve problems in any field.
  • the example implementations described herein have a comprehensive stack of tools that will be useful for any decision maker. Examples of the tools and functionalities are as follows.
  • Asset Hierarchy: Maps out asset-component-sensor relationships and records the asset hierarchy to provide as input to the knowledge graph database.
  • Semantic Feature Store: Contains the historical information of the signals and the feature engineering rules used to create analytical features based on that historical information. Contains metadata information about the use case, such as the kind of use case and geo-positions. Contains data catalog information and categorizes datasets based on metadata information.
  • Solution Curation: Curates the best solution proposal considering different persona profiles, and decides the optimal objective function and algorithm combination based on persona-based Key Performance Indicators (KPIs).
  • Model Training: Models can be trained with any model in the model zoo that is applicable to the input dataset. Models can be trained from scratch or use transfer learning to warm start the process. Transfer learning candidate models will come from the model recommendation process. After training, generic model files will be provided for deployment.
  • Model Store/Zoo: The repository of all models, such as supervised, unsupervised, and semi-supervised learning models, that a user can use. Trained models with the best performance for each input dataset will be stored in the model zoo, with corresponding metadata information for model recommendation usage. All previous experiments are recorded in this metadata database, which is accessible to the user.
  • Model Recommendation: Model selection is performed by statistical inference, comparing the metadata and feature store information of the dataset with the metadata of the model.
  • Model Deployment: The current-status diagnosis model is a classification model to classify asset status and defect type, if any.
  • The prediction model predicts the future breakdown time and deterioration speed, for example using pixel changes (e.g., changes between images) over time.
  • Bayesian network reasoning takes in the diagnosis model result and searches for matched root causes and solutions by utilizing the existing root cause and solution distributions.
  • Sensor signal traverse pattern diagnosis, anomaly detection, and signal triage are used to detect whether all sensors are working properly.
  • Graph databases are used to store symptom, root cause, and solution information, and map them to corresponding assets and sensor types. Chatbot - Knowledge Extraction is used to continually update dialogue-related information in the graph database.
  • Enablement System Dashboard: Example implementations described herein have Complex Event Processing (CEP) integrated with three different systems (Safety System, Information System, and System Control) and a unified dashboard that provides the visualization of the three systems. Below is a brief explanation of each system.
  • Safety system: CEP triggers an event to the safety system.
  • Information system: CEP triggers an event to any external system (e.g., Slack, mobile companies).
  • System control: CEP triggers an event to system control.
  • Aspects of the present disclosure can involve a method for facilitating a computer-implemented autonomous learning application with machine information ingestion, assessable, interoperable, catalogable, reusable construct, and human interface human knowledge extraction associated with a plurality of assets in an asset hierarchy, the method including, for receipt of a request of a diagnosis on an asset from the plurality of assets in a computer-implemented diagnostic, predictive and prescriptive human interface, executing a diagnostic, predictive and prescriptive model configured to classify asset status and defect for the asset; and processing the classified asset status and the defect from the execution of the diagnostic, predictive and prescriptive model through a Bayesian network, the Bayesian network including a plurality of interconnected nodes, each of the plurality of interconnected nodes representative of a possible diagnostic for the request and a probability, wherein sequences of the plurality of interconnected nodes are representative of a conditional probability of the diagnostic, the processing resulting in an output diagnosis having a highest probability as provided from the Bayesian network; wherein the diagnostic, predictive and
  • Aspects of the present disclosure can involve a system for facilitating a computer-implemented autonomous learning application with machine information ingestion, assessable, interoperable, catalogable, reusable construct, and human interface human knowledge extraction associated with a plurality of assets in an asset hierarchy, the system including, for receipt of a request of a diagnosis on an asset from the plurality of assets in a computer-implemented diagnostic, predictive and prescriptive human interface, means for executing a diagnostic, predictive and prescriptive model configured to classify asset status and defect for the asset; and means for processing the classified asset status and the defect from the execution of the diagnostic, predictive and prescriptive model through a Bayesian network, the Bayesian network including a plurality of interconnected nodes, each of the plurality of interconnected nodes representative of a possible diagnostic for the request and a probability, wherein sequences of the plurality of interconnected nodes are representative of a conditional probability of the diagnostic, the processing resulting in an output diagnosis having a highest probability as provided from the Bayesian network; wherein the
  • Aspects of the present disclosure can involve a computer program for facilitating a computer-implemented autonomous learning application with machine information ingestion, assessable, interoperable, catalogable, reusable construct, and human interface human knowledge extraction associated with a plurality of assets in an asset hierarchy, the computer program including instructions involving, for receipt of a request of a diagnosis on an asset from the plurality of assets in a computer-implemented diagnostic, predictive and prescriptive human interface, executing a diagnostic, predictive and prescriptive model configured to classify asset status and defect for the asset; and processing the classified asset status and the defect from the execution of the diagnostic, predictive and prescriptive model through a Bayesian network, the Bayesian network including a plurality of interconnected nodes, each of the plurality of interconnected nodes representative of a possible diagnostic for the request and a probability, wherein sequences of the plurality of interconnected nodes are representative of a conditional probability of the diagnostic, the processing resulting in an output diagnosis having a highest probability as provided from the Bayesian
  • the computer program and instructions can be stored on a non-transitory computer readable medium and executed by one or more processors.
  • Aspects of the present disclosure can involve an apparatus for facilitating a computer-implemented autonomous learning application with machine information ingestion, assessable, interoperable, catalogable, reusable construct, and human interface human knowledge extraction associated with a plurality of assets in an asset hierarchy, the apparatus including, for receipt of a request of a diagnosis on an asset from the plurality of assets in a computer-implemented diagnostic, predictive and prescriptive human interface, a processor configured to execute a method or instructions involving executing a diagnostic, predictive and prescriptive model configured to classify asset status and defect for the asset; and processing the classified asset status and the defect from the execution of the diagnostic, predictive and prescriptive model through a Bayesian network, the Bayesian network including a plurality of interconnected nodes, each of the plurality of interconnected nodes representative of a possible diagnostic for the request and a probability, wherein sequences of the plurality of
  • FIG. 1 illustrates an example of an entity relationship diagram that shows how to structure an asset hierarchy (AH) in a database, in accordance with an example implementation.
  • FIG. 2 illustrates the process for the semantic feature store, in accordance with an example implementation.
  • FIG. 3 illustrates solution curation, in accordance with an example implementation.
  • FIG. 4 illustrates an example of the model store/zoo, in accordance with an example implementation.
  • FIG. 5 illustrates an example flow of the model recommendation, in accordance with an example implementation.
  • FIG. 6 illustrates an example of the intelligent self-examination and self-validation, in accordance with an example implementation.
  • FIG. 7 illustrates an example management system for the Bayesian network, in accordance with an example implementation.
  • FIG. 8 gives an illustrative example of the entity relationships between different types of entities (e.g., asset, component, sensor, failure events, warning messages, symptom, root cause, solution), and the analytics problem and corresponding algorithms used in the solution, in accordance with an example implementation.
  • FIG. 9 illustrates an example of sequential solutioning with live state update by using a car example, in accordance with an example implementation.
  • FIG. 10 illustrates an example of parallel solutioning with the car example, in accordance with an example implementation.
  • FIG. 11 illustrates an example of the enablement system dashboard, in accordance with an example implementation.
  • FIG. 12 illustrates a system involving a plurality of physical systems networked to a management apparatus, in accordance with an example implementation.
  • FIG. 13 illustrates an example computing environment with an example computer device suitable for use in some example implementations.
  • FIG. 1 illustrates an example of an entity relationship diagram that shows how to structure an asset hierarchy (AH) in a database, in accordance with an example implementation.
  • The asset hierarchy is of key importance to the proposed solution because it helps to associate different time series sensors with the same asset or group of assets.
  • The asset hierarchy also contributes information about the data structure, data type, and data latency.
  • The AH definition is a logical or physical definition of the relationships and the authority between assets. Assets in the AH could have a cause-effect relationship.
  • AH can involve different hierarchy definitions built using different objects and the relationships between them.
  • The object table describes all objects in the system independently of their hierarchy. The relationship table stores the relationships between related objects. The attributes table stores the attributes of each object. This information is used in the GraphDB to map the relationships between objects.
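  • By way of a non-limiting illustration, the three tables described above could be laid out in a relational database as sketched below. The table names, column names, and example rows (a pump asset, a bearing component, and a vibration sensor) are assumptions for illustration only, not the schema of any particular implementation.

```python
# Illustrative sketch (assumed names) of the object, relationship, and
# attributes tables that back the asset hierarchy.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE object (
    object_id   INTEGER PRIMARY KEY,
    object_name TEXT NOT NULL,
    object_type TEXT NOT NULL           -- e.g. asset, component, sensor
);
CREATE TABLE relationship (
    parent_id INTEGER REFERENCES object(object_id),
    child_id  INTEGER REFERENCES object(object_id),
    PRIMARY KEY (parent_id, child_id)
);
CREATE TABLE attribute (
    object_id  INTEGER REFERENCES object(object_id),
    attr_name  TEXT,
    attr_value TEXT
);
""")

# Example rows: a pump asset that contains a bearing, which carries a sensor.
conn.executemany("INSERT INTO object VALUES (?, ?, ?)",
                 [(1, "pump-01", "asset"),
                  (2, "bearing", "component"),
                  (3, "vib-sensor", "sensor")])
conn.executemany("INSERT INTO relationship VALUES (?, ?)", [(1, 2), (2, 3)])
conn.executemany("INSERT INTO attribute VALUES (?, ?, ?)",
                 [(3, "unit", "mm/s"), (3, "sampling_rate_hz", "1000")])

# The relationship rows are what would later be exported to the GraphDB
# to map object-to-object relations.
print(conn.execute("SELECT * FROM relationship").fetchall())
```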
  • FIG. 2 illustrates the process for the semantic feature store 200, in accordance with an example implementation.
  • Semantic Feature Store 200 has three critical components, which include feature store 210 of historical information, metadata store 220 for information such as use cases, asset hierarchy, geo-positioning, and so on, and data catalog 230 configured to conduct auto-mapping and cataloging of new and existing data by leveraging information from metadata store 220 and feature store 210.
  • The semantic feature store 200 serves as a data hub to clean, sort, and catalog raw sensor data, and to extract valuable information from it so that the data is ready for the later modeling process; it also provides the capability to catalog future data against similar historical datasets based on various metrics and algorithms.
  • Feature store 210 will save sensor data by asset hierarchy and save the corresponding feature engineering rules for each type of sensor data, for future reuse.
  • Metadata store 220 will store the metadata information about the data source for model post-processing.
  • Data catalog 230 will create an index of similar data sources for easier cross-reference.
  • Feature store 210 stores raw data, and automatically engineers features accordingly based on asset hierarchy categories.
  • Feature store 210 can involve trigger-based or schedule-based automatic synthetic features (engineered features) and updates based on input data injection.
  • Feature store 210 can involve automatic intelligent data checking through embedded automatic feature selection criteria to comply with the downstream modeling pipeline, with selection based on data quality, prediction power, data newness, data relevancy, and so on.
  • Feature store 210 communicates with metadata store 220 and data catalog 230 to keep data update in sync.
  • Metadata Store 220 tracks and logs metadata information (use case information, asset hierarchy, geo-positions, and so on), as well as time series information (data timestamp, device timestamp) of the same sensors or same device. Metadata store 220 also communicates with feature store 210 and data catalog 230 to keep data update in sync. Metadata store 220 also auto-communicates with knowledge graph to help with traverse pattern diagnosis and root cause triage process, as well as to compare and store data from new source, or to enrich existing data nodes.
  • Data classification and cataloging 230 involves an unsupervised learning-based auto-mapping process to identify the relationship between new data and existing data for the data curation process in the model recommendation module.
  • The data classification and cataloging 230 can involve dimensionality reduction-based algorithms to determine the contribution to different principal components of a new dataset. Cluster-based algorithms can also be used to determine the cluster assignment for a new dataset.
  • Data classification and cataloging can also involve automatic mapping of new input data to the existing data catalog based on sensor type, asset type, and metadata information. Further, data classification and cataloging 230 can store information at the dataset level (a superset of any arbitrary sensors/assets) as input to the later algorithm and configuration recommendation system.
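  • As a non-limiting sketch of the unsupervised auto-mapping idea described above, the snippet below projects per-dataset summary vectors with PCA and assigns a new dataset to the nearest existing cluster. The summary features, component count, and cluster count are illustrative assumptions, and scikit-learn is used only as one possible library choice.

```python
# Hedged sketch: PCA contributions plus cluster assignment for cataloging a
# new dataset against historical datasets.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Each row summarizes one historical dataset (e.g., per-sensor statistics).
historical_summaries = rng.random((20, 8))

pca = PCA(n_components=3).fit(historical_summaries)
reduced = pca.transform(historical_summaries)
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit(reduced)

# Catalog a new dataset by its principal-component contributions and cluster.
new_summary = rng.random((1, 8))
new_reduced = pca.transform(new_summary)
print("principal-component contributions:", new_reduced[0])
print("assigned catalog cluster:", clusters.predict(new_reduced)[0])
```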
  • FIG. 3 illustrates solution curation 300, in accordance with an example implementation.
  • Defining the objective function at 310 is one of the most critical steps in model building life cycle.
  • This persona-based solution curation enables this application to provide more tailored solutions for a business problem, and to ensure that the ‘business impact’ of a model is directly related to model performance.
  • The quantified business impact makes the model results easier to communicate to other stakeholders for faster business decision making.
  • The solution curation 300 takes into consideration the operator/user personal profile to look for business metrics that take higher priority for them in predefined persona-metrics databases, if any. Based on the defined persona, the solution curation 300 executes solution formulation 320 and recommends top metrics to be used as the objective function in later model building. After user input is obtained to confirm the objective function, the solution curation 300 automatically matches the suitable model family for the algorithm candidate proposals. If no user input is received, then the system-recommended algorithms will be used by default.
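  • A minimal, non-limiting sketch of the persona-based metric lookup is shown below; the persona names, metric names, and default value are assumptions for illustration only.

```python
# Hedged sketch: pick the objective function from a predefined
# persona-metrics mapping, with user confirmation taking precedence and a
# system default used when no persona match or user input exists.
PERSONA_METRICS = {
    "plant_manager":        ["downtime_cost", "throughput"],
    "reliability_engineer": ["mean_time_between_failures", "false_alarm_rate"],
    "data_scientist":       ["f1_score", "auc"],
}
DEFAULT_METRICS = ["f1_score"]


def recommend_objective(persona, user_choice=None):
    """Return the confirmed user choice if given, otherwise the top
    persona-based metric, otherwise the system default."""
    if user_choice:
        return user_choice
    return PERSONA_METRICS.get(persona, DEFAULT_METRICS)[0]


print(recommend_objective("plant_manager"))       # -> downtime_cost
print(recommend_objective("operator", "recall"))  # -> recall (user confirmed)
```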
  • The present disclosure introduces a process to warm start the model training process by transfer learning based on previous model training weights, or by adoption of a model hyperparameter setting from similar datasets identified in the data cataloging process. If there are no similar datasets, or any previous weights, then the model will train from scratch.
  • Model training breaks down into two scenarios: 1) with prior model adoption for training; 2) without prior model adoption for training.
  • the prior model adoption feature can be enabled by the user, or by an automatic model zoo search if there are any model configurations saved for similar datasets.
  • Without prior model adoption, the model will train from scratch using metadata information and engineered features from the feature store. Based on the input dataset type, after going through the data cataloging and solution curation steps, the proposed objective function with suitable algorithms (e.g., regression, classification, detection, clustering, dimensionality reduction algorithms, and so on) from the model zoo will be used to train the model with default hyperparameter values.
  • With prior model adoption, the previously saved metadata fingerprint and device characteristics (from the Asset Hierarchy) will be used together to estimate the similarity between the historical dataset and the new dataset in the data catalog module (e.g., using a measurement from one dimension, or separate measurements for both dimensions), and thereby decide which existing model configuration is to be used for model adaptation.
  • The adapted model configuration file can be used as a starting point for model training on the input dataset. Further, randomized permutations of algorithms (from the model recommendation module) and hyperparameter values will be performed, based on user preference, to avoid model performance being trapped in a local optimum. The newly created models will be saved in the model store for future model recommendations. Top models will be selected by weighting business metric performance as well as model metric performance. Generic model files of the best models will be saved and passed to model deployment and the model zoo.
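  • The snippet below is a non-limiting sketch of the warm-start idea: a prior configuration adopted from the model zoo is used as the starting point, and randomized permutations around it are evaluated so the search is not trapped in a local optimum. The algorithm choice (a random forest), the parameter ranges, and the scoring setup are assumptions for illustration.

```python
# Hedged sketch: warm start from a prior configuration, then randomize
# hyperparameters and keep the best-scoring candidate.
import random
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

prior_config = {"n_estimators": 200, "max_depth": 6}  # adopted from the model zoo


def candidate_configs(prior, trials=5):
    """Yield the prior configuration first, then randomized variants of it."""
    yield dict(prior)
    for _ in range(trials):
        yield {"n_estimators": random.choice([100, 200, 400]),
               "max_depth": random.choice([4, 6, 8, None])}


def score(config):
    model = RandomForestClassifier(**config, random_state=0)
    return cross_val_score(model, X, y, cv=3).mean()


best = max(candidate_configs(prior_config), key=score)
print("selected configuration:", best)
```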
  • FIG. 4 illustrates an example of the model store/zoo, in accordance with an example implementation.
  • The model zoo serves as a model repository in the backend to support the model training process, and contains algorithms that are available for use with both structured data 400 and unstructured data 410.
  • the model zoo contains both vanilla models and pretrained models from all datasets.
  • Vanilla models are for net-new datasets which do not have a similar data source from the semantic feature store. These net-new datasets will be trained with default weights and through hyper-parameter search to find the best model, which will be saved as reference for incoming new datasets.
  • Pretrained models are the best performing models and hyperparameter values saved during each model run for each input dataset.
  • Vanilla models: Different vanilla algorithms (such as supervised, unsupervised, semi-supervised learning, and dimensionality reduction models) with default hyperparameter value ranges are predefined in the model zoo. Vanilla models are further categorized by unstructured data and structured data to decide applicability. For net-new datasets without any identified ‘similar’ prior dataset, the vanilla models to be used for training will follow the output from the solution curation step. Newly emerged algorithms will be reviewed periodically for addition to the vanilla model zoo.
  • Pre-trained model archiving process: As new datasets are provided to the pipeline, the best model configuration of each algorithm family for the dataset will be identified during the training process, and its performance information and hyperparameter values will be saved as a model fingerprint, together with the metadata fingerprint, which will be used for the future model recommendation process. Models are saved and indexed in the model store, and categorized by data type, problem type, failure mode, and/or asset hierarchy for fast retrieval and relevance assessment. Model storage can be categorized by structured and unstructured data, since they require different algorithm families.
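  • One possible shape for a model zoo record combining the model fingerprint and metadata fingerprint is sketched below; the field names, example values, and the filter function are assumptions for illustration only.

```python
# Hedged sketch: a model-zoo entry indexed by data type, problem type,
# failure mode, and asset hierarchy for fast retrieval.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ModelZooEntry:
    model_id: str
    algorithm_family: str              # e.g. "random_forest"
    hyperparameters: dict              # model fingerprint
    performance: dict                  # e.g. {"f1": 0.91}
    data_type: str                     # "structured" or "unstructured"
    problem_type: str                  # e.g. "classification"
    failure_mode: Optional[str] = None
    asset_hierarchy_path: Optional[str] = None  # metadata fingerprint


def retrieve(zoo, data_type, problem_type):
    """Index-style filter applied before any finer similarity scoring."""
    return [e for e in zoo
            if e.data_type == data_type and e.problem_type == problem_type]


zoo = [ModelZooEntry("m-001", "random_forest", {"max_depth": 6}, {"f1": 0.91},
                     "structured", "classification",
                     "bearing_wear", "pump-01/bearing")]
print(retrieve(zoo, "structured", "classification"))
```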
  • FIG. 5 illustrates an example flow of the model recommendation 500, in accordance with an example implementation.
  • There are mainly four components in the model recommendation module 500, as follows. Step 1: Model Zoo Creation/Enrichment for Performance Benchmark 510.
  • Step 2: Data Curation Process 520.
  • Step 3: Model Curation Algorithms 530.
  • Step 4: Recommended Model Selection 540.
  • Model recommendation 500 will take information from the semantic feature store into the data curation process to identify how similar a new data source is to historical datasets, in order to narrow down the candidates for the model recommendation process. After that, one or more model curation algorithms can be chosen as a selection basis to decide on the ranking of algorithm and parameter recommendations. There are three different scenarios for the final recommendation selection, wherein the suitable method will be selected based on each of the scenarios listed below.
  • Model zoo creation/enrichment for performance benchmark 510: Model zoo creation occurs only during the first-time application initiation process to create a vanilla model zoo for both structured data and unstructured data. The model zoo enrichment process will be conducted periodically to update algorithm families with the best rule-of-thumb hyperparameter settings tailored to different problem and data types.
  • Data Curation Process 520 uses data cataloging information from the feature store to determine which existing dataset is similar to the new dataset (e.g., belongs to the first principal component, or the same cluster). Data Curation Process 520 utilizes information from the data catalog to search for historical datasets that share a similar asset type, sensor type, data type, problem type, and so on, to use the best models created from historical datasets as a reference for algorithm and hyperparameter setting recommendations. Dataset ‘similarity’ will be measured in both a qualitative and a quantitative way, with the option to take user/operator weight assignment as input to the similarity determination process.
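  • The following non-limiting sketch shows one way the qualitative and quantitative similarity dimensions could be combined into a single weighted score, with optional operator-supplied weights; the dimension names and values are assumptions for illustration.

```python
# Hedged sketch: weighted dataset-similarity score with optional
# user/operator weight assignment.
def dataset_similarity(scores, weights=None):
    """scores: per-dimension similarity in [0, 1],
    e.g. {"asset_type": 1.0, "sensor_type": 0.8, "value_distribution": 0.6}."""
    weights = weights or {key: 1.0 for key in scores}
    total = sum(weights.get(key, 0.0) for key in scores)
    return sum(scores[key] * weights.get(key, 0.0) for key in scores) / total


scores = {"asset_type": 1.0, "sensor_type": 0.8, "value_distribution": 0.6}
operator_weights = {"asset_type": 2.0, "sensor_type": 1.0, "value_distribution": 1.0}
print(dataset_similarity(scores))                    # unweighted average
print(dataset_similarity(scores, operator_weights))  # asset type weighted higher
```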
  • Model Curation Algorithms 530 involve three different algorithms that can be used for model curation:
  • One or more algorithms can be used for model curation. When multiple algorithm results are taken into consideration, the weights can be input by the user/operator.
  • Recommended Model Selection 540 can obtain the architecture ranking from the previous module, select the top number (e.g., three) of model hyperparameters/architectures, and have methods to pick the best model according to different scenarios. Such scenarios are as follows.
  • Scenario 1: Input data with the same asset hierarchy, together with the same value distribution of each type of sensor data (directly run inference on new data and compare inference results).
  • Scenario 2: Input data with a similar asset hierarchy, together with a similar value distribution of each type of sensor data (select the best architecture from the most similar data).
  • Scenario 3: Input data with a different asset hierarchy, with a different value distribution and different sensor types.
  • There is another step before model deployment, which is to self-examine and self-validate model results based on the asset hierarchy and sensor metadata information, such as the sensor physical location, the data quality of a sensor, sensor runtime stability, and so on. This step will further suppress false positive predictions (or false negatives in some cases) to improve overall prediction accuracy with post-processing steps. After the models pass the examination and validation stage, the models are ready to be put into production. Depending on the use case and KPI selected at the solution curation stage, there are two types of pipelines for model deployment: a classification model pipeline for current asset status classification and a prediction model pipeline for preventive maintenance.
  • FIG. 6 illustrates an example of the intelligent self-examination and self-validation, in accordance with an example implementation.
  • the multi-sensor results are cross checked for self-examination and self-validation as part of the post-processing and the sanity check process.
  • The ground truth used for comparison in this case is either the historical data value from the same sensor or the same asset (under the same condition), or the historical value pattern from a group of sensors that work together (downstream signal, upstream signal, and so on).
  • For generating prescriptive results, not just one sensor’s prediction will be used; instead, prescriptive results will only be provided if multiple sensor predictions fit known patterns (from the SME knowledge store), to avoid false positives.
  • FIG. 6 gives an example of how self-examination and self-validation works.
  • The validation can be done at different levels, such as between sensors of the same type within the same component, between sensors of different types within the same component, or between sensors of the same type across similar components, to check whether the prediction value is within a reasonable range.
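  • A minimal, non-limiting sketch of such a cross-sensor range check is shown below: a prediction is kept only if it falls within the range observed for peer sensors of the same type on the same component. The three-sigma threshold and the sample values are assumptions for illustration.

```python
# Hedged sketch: validate a single prediction against peer-sensor history.
import statistics


def validate_prediction(pred, peer_history, n_sigmas=3.0):
    """Return True if pred lies within n_sigmas of the peer-sensor mean."""
    mean = statistics.mean(peer_history)
    std = statistics.pstdev(peer_history) or 1e-9  # avoid a zero std
    return abs(pred - mean) <= n_sigmas * std


peer_temps = [61.2, 60.8, 62.0, 61.5, 60.9]   # same-type sensors, same component
print(validate_prediction(61.7, peer_temps))  # True: within the expected range
print(validate_prediction(95.0, peer_temps))  # False: flagged during post-processing
```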
  • Diagnosis model for status: A classification model to classify the asset's current status and defect type, if any.
  • Prediction model for preventive maintenance: Predicts future breakdown time, e.g., deterioration speed using pixel changes (changes between images) over time.
  • FIG. 7 illustrates an example management system for the Bayesian network 700, in accordance with an example implementation.
  • a Bayesian network is a representation of a joint probability distribution of a set of random variables with a possible mutual causal relationship.
  • the network involves nodes representing the random variables, edges between pairs of nodes representing the causal relationship of these nodes, and a conditional probability distribution in each of the nodes.
  • the main objective of the method is to model the posterior conditional probability distribution of outcome (often causal) variable(s) after observing new evidence.
  • Bayesian network 700 utilizes historical information saved in the graph database, combined with current information from the maintenance system, to conduct the Bayesian reasoning process and provide sufficient information for the diagnostic reasoning module with the goal of improving diagnostic results.
  • the nodes of the Bayesian network will be interconnected, and each node will be a possible diagnostic for the problem with an associated probability.
  • A sequence of nodes defines the conditional probability of the right diagnostic. This sequence has multiple layers of nodes, and the system will compute the final results based on the ranking of the top likelihoods of all possible diagnoses.
  • The top of the ranking is the most likely path for the right diagnosis.
  • The outcome of this module is the ranking of node paths with the highest likelihoods of the possible diagnoses.
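  • By way of a non-limiting illustration, the snippet below builds a two-node network (a root cause influencing a symptom) and queries the posterior over the root cause after observing the symptom; ranking such posteriors across candidate causes corresponds to the path ranking described above. The pgmpy library and all probability values are assumptions chosen only for illustration.

```python
# Hedged sketch: tiny Bayesian network and posterior query using pgmpy.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("RootCause", "Symptom")])

# P(RootCause): state 0 = absent, state 1 = present (illustrative prior).
cpd_cause = TabularCPD("RootCause", 2, [[0.9], [0.1]])
# P(Symptom | RootCause): columns follow the RootCause states.
cpd_symptom = TabularCPD("Symptom", 2,
                         [[0.95, 0.20],   # symptom absent
                          [0.05, 0.80]],  # symptom present
                         evidence=["RootCause"], evidence_card=[2])

model.add_cpds(cpd_cause, cpd_symptom)
assert model.check_model()

inference = VariableElimination(model)
posterior = inference.query(["RootCause"], evidence={"Symptom": 1})
print(posterior)  # posterior probability of the root cause given the symptom
```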
  • Graph Database has four different types of information stored.
  • Historical data and corresponding asset and sensor hierarchy: Mainly serves a storage purpose to categorize and organize raw historical data from the maintenance system (e.g., Maximo, Apex) into symptom, root cause, and solution information, and to define entities and relationships between different entities for later information retrieval. This type of information can define one-to-many or many-to-many relationships between assets, sensors, symptoms, root causes, and solutions.
  • The Diagnostic Reasoning Module can involve Failure Mode Detection, Root Cause Analysis, and/or Remediation Solution Recommendation. Multiple sources of information are used as the input of the diagnostic reasoning module, which can involve classification model scoring results, current information from the maintenance system, as well as historical information and entity relationships between different symptoms, root causes, and solutions from the graph database. Sensor signal traverse patterns from the asset hierarchy are used for anomaly detection diagnosis and signal triage to detect whether all sensors are working properly.
  • Markov Chain state detection of the system provides code diagnostics based on historical information from signals and the maintenance system. One or more sub-modules can be selected to acquire information as needed. A major failure or downtime can be caused by a series of failures, which requires one or more solutions for each of the failures. Solutions will be proposed based on severity and component importance. Operators will have the freedom to adopt or reject a proposed solution (e.g., if the top solutions are rejected, then the following solutions will fill in automatically). The adopted solution will be passed to the relevant control system. Diagnostic results will be sent to the enablement dashboard.
  • FIG. 8 gives an illustrative example of the entity relationships between different types of entities (e.g., asset, component, sensor, failure events, warning messages, symptom, root cause, solution), and the analytics problem and corresponding algorithms used in the solution, in accordance with an example implementation.
  • FIG. 9 illustrates an example of sequential solutioning with live state update by using a car example, in accordance with an example implementation.
  • When a problem is detected, the graph database will get an update to the corresponding ‘problem’ node’s value, find associated solutions for that problem, and send the solutions back to the asset to deploy.
  • The results of the proposed solution will circle back to the graph database to update the problem status, whether solved or remaining unsolved; if unsolved, further diagnosis will be required.
  • FIG. 10 illustrates an example of parallel solutioning with the car example, in accordance with an example implementation.
  • the dashed arrow indicates the entity relationship between the sensor and component, whereas the solid arrow indicates cross-sensor communication and the entity relationship between graph database and sensors.
  • Graph database is able to communicate and interact with multiple sensors and multiple components at the same time when an issue occurs, which usually involves more than one sensor.
  • The graph traverse path can be based on the asset hierarchy component importance ranking, or on the solution success rate from similar historical events.
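  • A small, non-limiting sketch of a success-rate-based traverse order is shown below; the problem node, candidate solutions, and rates are assumptions for illustration only.

```python
# Hedged sketch: order candidate solutions by historical success rate so that
# a rejected top solution is automatically followed by the next candidate.
problem_solutions = {
    "engine_overheat": [
        {"solution": "replace_coolant",    "success_rate": 0.72},
        {"solution": "flush_radiator",     "success_rate": 0.40},
        {"solution": "replace_thermostat", "success_rate": 0.55},
    ],
}


def traverse_order(problem):
    """Return candidate solutions for a problem node, best success rate first."""
    return sorted(problem_solutions.get(problem, []),
                  key=lambda s: s["success_rate"], reverse=True)


for step in traverse_order("engine_overheat"):
    print(step["solution"], step["success_rate"])
```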
  • The SME knowledge store is configured to store the analytics use case with SME inputs, business metrics (model evaluation metrics) selection, domain knowledge-based feature engineering, model best practices (algorithms and hyperparameter settings), problem-specific post-processing, and so on.
  • The example implementations can standardize the analytics use case with the goal of reuse in the future.
  • The analytics use case is indexed by asset, sensor, symptoms, root causes, and solutions for fast information retrieval and the solution recommendation process for any new problem.
  • Chatbot - Knowledge Extraction can involve Reactive Based Knowledge Extraction, involving interaction with a chatbot to log information such as question type, high frequency questions, accepted solution, and so on, from conversation dialogue.
  • Such an implementation can improve chatbot performance by recommending solution from historical dialogue data distribution.
  • FIG. 11 illustrates an example of the enablement system dashboard, in accordance with an example implementation.
  • In the example of FIG. 11, Complex Event Processing (CEP) 1100 is integrated with three different systems: a Safety System (not illustrated), an Information System 1101, and System Control 1102.
  • A unified dashboard provides the visualization of the three systems. Below is a brief explanation of each of them.
  • A unified dashboard compiles all results and prescriptive information to give the user easy access to actionable insights from analytics jobs, and provides connections to the systems below:
  • System control 1102: CEP 1100 triggers an event to system control 1102. Once a failure is detected and the severity level meets the trigger condition, system control will activate a contingency plan for related assets and sensors to minimize the impact of the failure, and to avoid a catastrophic accident or potential personal injury.
  • Information system 1101: CEP 1100 triggers an event to any external system (e.g., Slack, mobile companies). If a failure is detected and the severity level meets the trigger condition, the relevant personnel will be notified through an app or mobile notification. Upstream and downstream systems will be notified to minimize the impact of the detected failure or interruption from a sensor or asset.
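  • A minimal, non-limiting sketch of the trigger logic is shown below: when a detected failure meets the severity threshold, the sketch emits a notification action for the information system and a contingency action for system control. The severity scale, threshold, and action strings are assumptions for illustration only.

```python
# Hedged sketch: CEP-style trigger that fans out to the information system
# and system control once the severity condition is met.
SEVERITY_THRESHOLD = 3  # illustrative scale: 1 = informational ... 5 = critical


def on_failure_event(event):
    """Return the actions to dispatch for a detected failure event."""
    actions = []
    if event["severity"] >= SEVERITY_THRESHOLD:
        actions.append(f"notify:{event['asset']}:mobile")  # information system
        actions.append(f"contingency:{event['asset']}")    # system control
    return actions


print(on_failure_event({"asset": "pump-01", "severity": 4}))
print(on_failure_event({"asset": "pump-02", "severity": 1}))  # below threshold
```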
  • Through the example implementations described herein, it is possible to facilitate the reusability of time series models for new analytics solutions. Further, example implementations can leverage the existing domain expert knowledge of solutions and convert it into reusable analytical solution pipelines to solve new or similar problems.
  • The diagnostic reasoning and the solution constitute an expertise domain, and often this is tacit knowledge of the technical experts (more senior technicians).
  • The example implementations described herein can provide storage of solutions and maintenance diagnostic reasoning in a graph database and keep this knowledge persistent and available for any expert.
  • Example implementations can further facilitate the self-improvement of the model recommendation based on the data and model curation process using the model zoo performance benchmark.
  • Example implementations described herein, through the semantic feature store, by using the data structure, data catalog, and metadata, enrich data information to help in the curation of a new solution for a new problem with minimal human intervention.
  • The semantic feature store can include a feature store and a metadata store.
  • the feature store can store raw data, and automatically engineer features accordingly based on asset hierarchy categories.
  • The feature store can further facilitate trigger-based or schedule-based automatic synthetic features (engineered features) and updates based on input data injection.
  • Feature store can also facilitate automatic intelligent data checking by use of embedded automatic feature selection criteria to comply with the downstream modeling pipeline. Selection can be based on data quality, prediction power, data newness, data relevancy, and so on.
  • The metadata store can communicate with the feature store and data catalog to keep data updates in sync. The metadata store can also auto-communicate with the knowledge graph to help with the traverse pattern diagnosis and root cause triage process, as well as to compare and store data from a new source, or to enrich existing data nodes.
  • Data Classification and cataloging can involve an unsupervised learning-based auto-mapping process to identify relationships between new data and existing data for data curation process in the model recommendation module.
  • Such unsupervised learning-based auto-mapping processes can involve dimensionality reduction-based algorithms to find out contribution to different principal components of a new dataset, and cluster-based algorithms to find out the cluster assignment for a new dataset.
  • Data classification and cataloging can involve automatically mapping new input data to the existing data catalog based on sensor type, asset type, and metadata information, and can store information at the dataset level (a superset of any arbitrary sensors/assets) as input to the algorithm and configuration recommendation system.
  • Solution curation takes into consideration the operator/user personal profile to look for business metrics that take higher priority for them in a predefined persona-metrics database, if any. Based on the defined persona, the solution curation recommends top metrics to be used as the objective function in later model building. After user input is obtained to confirm the objective function, the solution curation automatically matches a suitable model family for algorithm candidate proposals. If no user input is received, then the system-recommended algorithms will be used by default.
  • FIG. 12 illustrates a system involving a plurality of physical systems networked to a management apparatus, in accordance with an example implementation.
  • One or more physical systems 1201 integrated with various sensors are communicatively coupled to a network 1200 (e.g., local area network (LAN), wide area network (WAN)) through the corresponding network interface of the sensor system installed in the physical systems 1201, which is connected to a management apparatus 1202.
  • the management apparatus 1202 manages a database 1203, which contains historical data collected from the sensor systems from each of the physical systems 1201.
  • the data from the sensor systems of the physical systems 1201 can be stored to a central repository or central database such as proprietary databases that intake data from the physical systems 1201, or systems such as enterprise resource planning systems, and the management apparatus 1202 can access or retrieve the data from the central repository or central database.
  • the sensor systems of the physical systems 1201 can include any type of sensors to facilitate the desired implementation, such as but not limited to gyroscopes, accelerometers, global positioning satellite (GPS), thermometers, humidity gauges, or any sensors that can measure one or more of temperature, humidity, gas levels (e.g., CO2 gas), and so on.
  • Examples of physical systems can include, but are not limited to, shipping containers, lathes, air compressors, and so on. Further, the physical systems can also be represented as virtual systems, such as in the form of a digital twin.
  • FIG. 13 illustrates an example computing environment with an example computer device suitable for use in some example implementations, such as a management apparatus 1202 as illustrated in FIG. 12.
  • Computer device 1305 in computing environment 1300 can include one or more processing units, cores, or processors 1310, memory 1315 (e.g., RAM, ROM, and/or the like), internal storage 1320 (e.g., magnetic, optical, solid state storage, and/or organic), and/or I/O interface 1325, any of which can be coupled on a communication mechanism or bus 1330 for communicating information or embedded in the computer device 1305.
  • I/O interface 1325 is also configured to receive images from cameras or provide images to projectors or displays, depending on the desired implementation.
  • Computer device 1305 can be communicatively coupled to input/user interface 1335 and output device/interface 1340. Either one or both of input/user interface 1335 and output device/interface 1340 can be a wired or wireless interface and can be detachable.
  • Input/user interface 1335 may include any device, component, sensor, or interface, physical or virtual, that can be used to provide input (e.g., buttons, touch-screen interface, keyboard, a pointing/cursor control, microphone, camera, braille, motion sensor, optical reader, and/or the like).
  • Output device/interface 1340 may include a display, television, monitor, printer, speaker, braille, or the like.
  • input/user interface 1335 and output device/interface 1340 can be embedded with or physically coupled to the computer device 1305.
  • other computer devices may function as or provide the functions of input/user interface 1335 and output device/interface 1340 for a computer device 1305.
  • Examples of computer device 1305 may include, but are not limited to, highly mobile devices (e.g., smartphones, devices in vehicles and other machines, devices carried by humans and animals, and the like), mobile devices (e.g., tablets, notebooks, laptops, personal computers, portable televisions, radios, and the like), and devices not designed for mobility (e.g., desktop computers, other computers, information kiosks, televisions with one or more processors embedded therein and/or coupled thereto, radios, and the like).
  • Computer device 1305 can be communicatively coupled (e.g., via I/O interface 1325) to external storage 1345 and network 1350 for communicating with any number of networked components, devices, and systems, including one or more computer devices of the same or different configuration.
  • Computer device 1305 or any connected computer device can be functioning as, providing services of, or referred to as a server, client, thin server, general machine, special-purpose machine, or another label.
  • I/O interface 1325 can include, but is not limited to, wired and/or wireless interfaces using any communication or I/O protocols or standards (e.g., Ethernet, 802.11x, Universal Serial Bus, WiMax, modem, a cellular network protocol, and the like) for communicating information to and/or from at least all the connected components, devices, and network in computing environment 1300.
  • Network 1350 can be any network or combination of networks (e.g., the Internet, local area network, wide area network, a telephonic network, a cellular network, satellite network, and the like).
  • Computer device 1305 can use and/or communicate using computer-usable or computer-readable media, including transitory media and non-transitory media.
  • Transitory media include transmission media (e.g., metal cables, fiber optics), signals, carrier waves, and the like.
  • Non-transitory media include magnetic media (e.g., disks and tapes), optical media (e.g., CD ROM, digital video disks, Blu-ray disks), solid state media (e.g., RAM, ROM, flash memory, solid-state storage), and other non-volatile storage or memory.
  • Computer device 1305 can be used to implement techniques, methods, applications, processes, or computer-executable instructions in some example computing environments.
  • Computer-executable instructions can be retrieved from transitory media, and stored on and retrieved from non-transitory media.
  • the executable instructions can originate from one or more of any programming, scripting, and machine languages (e.g., C, C++, C#, Java, Visual Basic, Python, Perl, JavaScript, and others).
  • Processor(s) 1310 can execute under any operating system (OS) (not shown), in a native or virtual environment.
  • One or more applications can be deployed that include logic unit 1360, application programming interface (API) unit 1365, input unit 1370, output unit 1375, and inter-unit communication mechanism 1395 for the different units to communicate with each other, with the OS, and with other applications (not shown).
  • Processor(s) 1310 can be in the form of hardware processors such as central processing units (CPUs) or in a combination of hardware and software units.
  • API unit 1365 when information or an execution instruction is received by API unit 1365, it may be communicated to one or more other units (e.g., logic unit 1360, input unit 1370, output unit 1375).
  • logic unit 1360 may be configured to control the information flow among the units and direct the services provided by API unit 1365, input unit 1370, output unit 1375, in some example implementations described above.
  • the flow of one or more processes or implementations may be controlled by logic unit 1360 alone or in conjunction with API unit 1365.
  • the input unit 1370 may be configured to obtain input for the calculations described in the example implementations
  • the output unit 1375 may be configured to provide output based on the calculations described in example implementations.
  • Processor(s) 1310 can be configured to execute methods or instructions for facilitating a computer-implemented autonomous learning application with machine information ingestion, assessable, interoperable, catalogable, reusable construct, and human interface human knowledge extraction associated with a plurality of assets in an asset hierarchy, which can involve, for receipt of a request of a diagnosis on an asset from the plurality of assets in a computer-implemented diagnostic, predictive and prescriptive human interface (e.g., a chatbot as described herein), executing a diagnostic, predictive and prescriptive model configured to classify asset status and defect for the asset; and processing the classified asset status and the defect from the execution of the diagnostic, predictive and prescriptive model through a Bayesian network, the Bayesian network involving a plurality of interconnected nodes, each of the plurality of interconnected nodes representative of a possible diagnostic for the request and a probability, wherein sequences of the plurality of interconnected nodes are representative of a conditional probability of the diagnostic, the processing resulting in an output diagnosis
  • Processor(s) 1310 can be configured to execute any of the methods or instructions as described herein, and further involve selecting the diagnostic, predictive and prescriptive model from a plurality of pre-trained diagnostic, predictive and prescriptive models for the execution based on asset hierarchy similarity, the plurality of pre-trained diagnostic, predictive and prescriptive models indexed by asset hierarchy.
  • Processor(s) 1310 can be configured to execute any of the methods or instructions as described herein, and further involve managing a semantic feature store, the managing the semantic feature store involving storing raw data from sensors associated with the plurality of assets; automatically engineering features from the raw data based on asset hierarchy categories; and generating synthetic features for updating the semantic feature store in response to input data.
  • Processor(s) 1310 can be configured to execute any of the methods or instructions as described herein, wherein the managing the semantic feature store can involve providing metadata to synchronize the semantic feature store and a data catalog.
  • Processor(s) 1310 can be configured to execute any of the methods or instructions as described herein, wherein the managing the semantic feature store further involves executing an unsupervised learning based auto-mapping process configured to identify relationships between the stored raw data and received new data based on one or more of sensor type, asset type, or metadata.
  • Processor(s) 1310 can be configured to execute any of the methods or instructions as described herein, further involving determining, from a user profile associated with the request, performance metrics to be used for the diagnostic, predictive and prescriptive model; selecting the diagnostic, predictive and prescriptive model from a plurality of pre-trained diagnostic models for the execution based on the performance metrics.
  • Processor(s) 1310 can be configured to execute any of the methods or instructions as described herein, and further involve training a plurality of diagnostic, predictive and prescriptive models, the training the plurality of diagnostic, predictive and prescriptive models involving training a diagnostic, predictive and prescriptive model from the plurality of diagnostic, predictive and prescriptive models for each proposed objective function from raw data received from sensors associated with the plurality of assets, the training utilizing default hyperparameter values, the each proposed objective function; and for receipt of new data from the sensors associated with the plurality of assets, estimating a similarity from the new data and the stored data based on the asset hierarchy and metadata; selecting ones of the trained plurality of diagnostic, predictive and prescriptive models for retraining based on the similarity; and randomizing the hyperparameter values and algorithms utilized for training the selected ones of the trained plurality of diagnostic, predictive and prescriptive models to retrain the selected ones of the trained plurality of diagnostic, predictive and prescriptive models.
  • Processor(s) 1310 can be configured to execute the methods or instructions as described herein, and further involve managing predictive and prescriptive functions including an inferencing engine storing predictive algorithms configured to use a plurality of predictive models to predict machine internal status, corresponding anomaly score, and user risk based on historical data and machine status.
  • Processor(s) 1310 can be configured to execute any of the methods or instructions as described herein, and further involve managing predictive and prescriptive functions involving an inference engine storing a plurality of optimization models trained against treatment for documented anomalies, the inference engine configured to determine an optimization output from the plurality of optimization models as a prescriptive recommendation based on predictive outcomes.
  • Processor(s) 1310 can be configured to execute any of the methods or instructions as described herein, and further involve managing predictive and prescriptive functions involving a proactive maintenance and sustainability function calculated and associated with indexed maintenance manuals and operation instructions, confluence pages, and associated events.
  • the human interface can include a chatbot.
  • The Bayesian network can be continuously updated through machine-ingested data, outcomes, ground truth, and persona usage.
  • Example implementations may also relate to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may include one or more general-purpose computers selectively activated or reconfigured by one or more computer programs.
  • Such computer programs may be stored in a computer readable medium, such as a computer-readable storage medium or a computer-readable signal medium.
  • a computer-readable storage medium may involve tangible mediums such as, but not limited to optical disks, magnetic disks, read-only memories, random access memories, solid state devices and drives, or any other types of tangible or non-transitory media suitable for storing electronic information.
  • a computer readable signal medium may include mediums such as carrier waves.
  • the algorithms and displays presented herein are not inherently related to any particular computer or other apparatus.
  • Computer programs can involve pure software implementations that involve instructions that perform the operations of the desired implementation.
  • the operations described above can be performed by hardware, software, or some combination of software and hardware.
  • Various aspects of the example implementations may be implemented using circuits and logic devices (hardware), while other aspects may be implemented using instructions stored on a machine-readable medium (software), which if executed by a processor, would cause the processor to perform a method to carry out implementations of the present application.
  • some example implementations of the present application may be performed solely in hardware, whereas other example implementations may be performed solely in software.
  • the various functions described can be performed in a single unit, or can be spread across a number of components in any number of ways.
  • the methods may be executed by a processor, such as a general purpose computer, based on instructions stored on a computer-readable medium. If desired, the instructions can be stored on the medium in a compressed and/or encrypted format.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Business, Economics & Management (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Resources & Organizations (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Computational Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Economics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Algebra (AREA)
  • General Business, Economics & Management (AREA)
  • Operations Research (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Example implementations described herein involve systems and methods for facilitating autonomous learning application with machine information ingestion, assessable, interoperable, catalogable, reusable construct, and human interface human knowledge extraction associated with a plurality of assets in an asset hierarchy, which can involve executing a diagnostic, predictive and prescriptive model configured to classify asset status and defect for the asset; and processing the classified asset status and the defect from the execution of the diagnostic, predictive and prescriptive model through a Bayesian network, the Bayesian network comprising a plurality of interconnected nodes, each of the plurality of interconnected nodes representative of a possible diagnostic for the request and a probability, wherein sequences of the plurality of interconnected nodes are representative of a conditional probability of the diagnostic, the processing resulting in an output diagnosis having a highest probability as provided from the Bayesian network.

Description

ASSET STRUCTURE BEHAVIOR LEARNING AND INFERENCE MANAGEMENT SYSTEM
BACKGROUND
Field
[0001] The present disclosure is generally directed to asset management systems for factory and production lines, and more specifically, to an asset structure behavior learning and inference management system.
Related Art
[0002] Industrial devices have high availability and reliability requirements in operations. In modern-day factory and production lines, automated or schedule-based machine operation accounts for the majority of the workload, replacing manual work conducted by factory workers. Such operation makes equipment maintenance a critical task for the operator in order to ensure consistent uptime. Operating the equipment with conventional systems or methods is subject to unique challenges during daily operation, in particular from a technical point of view.
[0003] To solve problems, such systems and methods need a multidisciplinary team that is composed of specialists in different areas, such as subject matter experts, data scientists, data engineers, and so on (human in the loop). Similar problems are solved with different methods, even for those problems that could be solved with the same method. The resolution path is not always consistent.
[0004] Further, the manual process required to discover the data structure and algorithms that should fit the dataset results in a time-consuming activity, reducing the Return on Investment (ROI) of the operations. The knowledge to conduct maintenance on equipment is mastered by the subject matter experts (SMEs) and, in general, is tacit knowledge (lack of SME). When SMEs leave the job (e.g., by resignation or retirement), they take this knowledge with them, and it is difficult to replace such expert and aged workers.
[0005] The diagnosis in the problem-solution cycle is done by inference and by an expert. The tacit knowledge is very important, but sometimes it has flaws because of the lack of data-driven decision making and the lack of decision optimization. In order to conduct remote maintenance/control, there is a gap between human SME knowledge and executable commands for a remote agent/machine to directly act on. There is a lack of standardized and generalized methods to translate from one to the other.
SUMMARY
[0006] Example implementations described herein are directed to the solution of the whole failure mode problem. Asset Structure Behavior Learning and Inference Management System (ASBLIMS) was designed to solve a difficult problem involving rare failure events on complex systems that are difficult in diagnosis detection and resolution.
[0007] Example implementations described herein can provide a gateway to store the technical information that is required to solve industrial problems and encapsulate the knowledge of an aging Subject Matter Expert (SME) workforce, not just the reasoning but also the knowledge.
[0008] Example implementations described herein can also solve the challenge of aggregating information from different domains. Through the example implementations described herein, SMEs from different fields work together to make sense of all the information for detection, diagnosis, and solutioning, to make an actionable decision.
[0009] The example implementations described herein address the issue regarding the lack of persistent knowledge store and knowledge inferencing for ongoing industrial facilities in different life cycles. In the related art, learned knowledge cannot be transferred in an easy and efficient way.
[0010] The example implementations described herein involve a system that can provide decision makers with the best sequence of decisions to solve problems in any field. The example implementations described herein have a comprehensive stack of tools that will be useful for any decision maker. Examples of the tools and functionalities are as follows.
[0011] Asset Hierarchy: Maps out asset-component-sensor relationships, and records the asset hierarchy to provide as input to the knowledge graph database.
[0012] Semantic Feature Store: Contains the historical information of the signals, and feature engineering rules to create analytical features based on historical information. Contains metadata information about the use case, such as the kind of use case and geo-positions. Contains data catalog information, and categorizes datasets based on metadata information.
[0013] Solution Curation: Curates the best solution proposal considering different persona profiles. Decides the optimal objective function and algorithm combination based on persona-based Key Performance Indicators (KPIs).
[0014] Model Training: Models can be trained with any model in the model zoo that is applicable to the input dataset. Models can train from scratch or use transfer learning to warm start the process. Transfer learning candidate models will come from the model recommendation process. After training, generic model files will be provided for deployment.
[0015] Model Store/Zoo: The repository of all models, such as supervised, unsupervised, and semi-supervised learning models, that a user can use. Trained models with the best performance on each input dataset will be stored in the model zoo, with corresponding metadata information for model recommendation usage. All previous experiments are recorded in this metadata database, and it is accessible to the user.
[0016] Model Recommendation: Model selection is performed using statistical inference to compare the metadata and feature store information from the dataset with the metadata of the model.
[0017] Model Deployment: The current status diagnosis model is a classification model to classify asset status and defect type, if any. The prediction model predicts the future breakdown time, deterioration speed using pixel changes (e.g., changes between images) over time.
[0018] Bayesian network: Bayesian network reasoning takes in the diagnosis model result and searches for matched root causes and solutions by utilizing the existing root cause and solution distribution. Sensor signal traverse pattern diagnosis, anomaly detection, and signal triage are used to detect whether all sensors are working properly. Graph databases are used to store symptom, root cause, and solution information, and map them to the corresponding assets and sensor types. Chatbot - Knowledge Extraction is used to continually update dialogue-related information in the graph database.
[0019] Enablement System Dashboard: Example implementations described herein have Complex Event Processing (CEP) that is integrated with three different systems: Safety System, Information System, and System Control, and a unified dashboard that provides the visualization of the three systems. Below is a brief explanation of each system.
[0020] Safety system: complex event processing (CEP) triggers an event to the safety system.
[0021] Information system: CEP triggers an event to any external system (e.g., Slack, mobile companies).
[0022] System control: CEP triggers an event to system control.
[0023] Aspects of the present disclosure can involve a method for facilitating a computer-implemented autonomous learning application with machine information ingestion, assessable, interoperable, catalogable, reusable construct, and human interface human knowledge extraction associated with a plurality of assets in an asset hierarchy, the method including, for receipt of a request of a diagnosis on an asset from the plurality of assets in a computer-implemented diagnostic, predictive and prescriptive human interface, executing a diagnostic, predictive and prescriptive model configured to classify asset status and defect for the asset; and processing the classified asset status and the defect from the execution of the diagnostic, predictive and prescriptive model through a Bayesian network, the Bayesian network including a plurality of interconnected nodes, each of the plurality of interconnected nodes representative of a possible diagnostic for the request and a probability, wherein sequences of the plurality of interconnected nodes are representative of a conditional probability of the diagnostic, the processing resulting in an output diagnosis having a highest probability as provided from the Bayesian network; wherein the diagnostic, predictive and prescriptive human interface is configured to provide a root cause of the defect and a solution for the asset based on referencing the output diagnosis to a graph database configured to associate diagnosis, root cause, and solution for each asset in an asset hierarchy, wherein the Bayesian network is updated from the graph database and feedback of current information from a maintenance system.
[0024] Aspects of the present disclosure can involve a system for facilitating a computer-implemented autonomous learning application with machine information ingestion, assessable, interoperable, catalogable, reusable construct, and human interface human knowledge extraction associated with a plurality of assets in an asset hierarchy, the system including, for receipt of a request of a diagnosis on an asset from the plurality of assets in a computer-implemented diagnostic, predictive and prescriptive human interface, means for executing a diagnostic, predictive and prescriptive model configured to classify asset status and defect for the asset; and means for processing the classified asset status and the defect from the execution of the diagnostic, predictive and prescriptive model through a Bayesian network, the Bayesian network including a plurality of interconnected nodes, each of the plurality of interconnected nodes representative of a possible diagnostic for the request and a probability, wherein sequences of the plurality of interconnected nodes are representative of a conditional probability of the diagnostic, the processing resulting in an output diagnosis having a highest probability as provided from the Bayesian network; wherein the diagnostic, predictive and prescriptive human interface is configured to provide a root cause of the defect and a solution for the asset based on referencing the output diagnosis to a graph database configured to associate diagnosis, root cause, and solution for each asset in an asset hierarchy, wherein the Bayesian network is updated from the graph database and feedback of current information from a maintenance system.
[0025] Aspects of the present disclosure can involve a computer program for facilitating a computer-implemented autonomous learning application with machine information ingestion, assessable, interoperable, catalogable, reusable construct, and human interface human knowledge extraction associated with a plurality of assets in an asset hierarchy, the computer program including instructions involving, for receipt of a request of a diagnosis on an asset from the plurality of assets in a computer-implemented diagnostic, predictive and prescriptive human interface, executing a diagnostic, predictive and prescriptive model configured to classify asset status and defect for the asset; and processing the classified asset status and the defect from the execution of the diagnostic, predictive and prescriptive model through a Bayesian network, the Bayesian network including a plurality of interconnected nodes, each of the plurality of interconnected nodes representative of a possible diagnostic for the request and a probability, wherein sequences of the plurality of interconnected nodes are representative of a conditional probability of the diagnostic, the processing resulting in an output diagnosis having a highest probability as provided from the Bayesian network; wherein the diagnostic, predictive and prescriptive human interface is configured to provide a root cause of the defect and a solution for the asset based on referencing the output diagnosis to a graph database configured to associate diagnosis, root cause, and solution for each asset in an asset hierarchy, wherein the Bayesian network is updated from the graph database and feedback of current information from a maintenance system. The computer program and instructions can be stored on a non-transitory computer readable medium and executed by one or more processors.
[0026] Aspects of the present disclosure can involve an apparatus for facilitating a computer-implemented autonomous learning application with machine information ingestion, assessable, interoperable, catalogable, reusable construct, and human interface human knowledge extraction associated with a plurality of assets in an asset hierarchy, the apparatus including, for receipt of a request of a diagnosis on an asset from the plurality of assets in a computer-implemented diagnostic, predictive and prescriptive human interface, a processor configured to execute a method or instructions involving executing a diagnostic, predictive and prescriptive model configured to classify asset status and defect for the asset; and processing the classified asset status and the defect from the execution of the diagnostic, predictive and prescriptive model through a Bayesian network, the Bayesian network including a plurality of interconnected nodes, each of the plurality of interconnected nodes representative of a possible diagnostic for the request and a probability, wherein sequences of the plurality of interconnected nodes are representative of a conditional probability of the diagnostic, the processing resulting in an output diagnosis having a highest probability as provided from the Bayesian network; wherein the diagnostic, predictive and prescriptive human interface is configured to provide a root cause of the defect and a solution for the asset based on referencing the output diagnosis to a graph database configured to associate diagnosis, root cause, and solution for each asset in an asset hierarchy, wherein the Bayesian network is updated from the graph database and feedback of current information from a maintenance system.
BRIEF DESCRIPTION OF DRAWINGS
[0027] FIG. 1 illustrates an example of entity relationship diagram that shows how to structure an asset hierarchy (AH) in a database, in accordance with an example implementation.
[0028] FIG. 2 illustrates the process for the semantic feature store, in accordance with an example implementation.
[0029] FIG. 3 illustrates solution curation, in accordance with an example implementation.
[0030] FIG. 4 illustrates an example of the model store/zoo, in accordance with an example implementation.
[0031] FIG. 5 illustrates an example flow of the model recommendation, in accordance with an example implementation.
[0032] FIG. 6 illustrates an example of the intelligent self-examination and self-validation, in accordance with an example implementation.
[0033] FIG. 7 illustrates an example management system for the Bayesian network, in accordance with an example implementation.
[0034] FIG. 8 gives an illustrative example of the entity relationships between different types of entities (e.g., asset, component, sensor, failure events, warning messages, symptom, root cause, solution), and the analytics problem and corresponding algorithms used in the solution, in accordance with an example implementation.
[0035] FIG. 9 illustrates an example of sequential solutioning with live state update by using a car example, in accordance with an example implementation.
[0036] FIG. 10 illustrates an example of parallel solutioning with the car example, in accordance with an example implementation.
[0037] FIG. 11 illustrates an example of the enablement system dashboard, in accordance with an example implementation.
[0038] FIG. 12 illustrates a system involving a plurality of physical systems networked to a management apparatus, in accordance with an example implementation.
[0039] FIG. 13 illustrates an example computing environment with an example computer device suitable for use in some example implementations.
DETAILED DESCRIPTION
[0040] The following detailed description provides details of the figures and example implementations of the present application. Reference numerals and descriptions of redundant elements between figures are omitted for clarity. Terms used throughout the description are provided as examples and are not intended to be limiting. For example, the use of the term “automatic” may involve fully automatic or semi-automatic implementations involving user or administrator control over certain aspects of the implementation, depending on the desired implementation of one of ordinary skill in the art practicing implementations of the present application. Selection can be conducted by a user through a user interface or other input means, or can be implemented through a desired algorithm. Example implementations as described herein can be utilized either singularly or in combination and the functionality of the example implementations can be implemented through any means according to the desired implementations.
[0041] FIG. 1 illustrates an example of an entity relationship diagram that shows how to structure an asset hierarchy (AH) in a database, in accordance with an example implementation. The asset hierarchy has key importance in the proposed solution because it helps to associate different time series sensors with the same asset or group of assets. The asset hierarchy also contributes information about the data structure, data type, and data latency. The AH definition is a logical or physical definition of the relation and the authority between assets. Those assets in the AH could have a cause-effect relationship. In example implementations described herein, the AH can involve different hierarchy definitions built using different objects and the relationships between them. The object table describes all objects in the system independently of their hierarchy. The relationship table stores the relationships between related objects. The attributes table stores the attributes of each object. This information is used in the GraphDB to map the relationships between objects.
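By way of a non-limiting illustration, the following sketch shows one possible way to lay out the object, relationship, and attributes tables described above in a relational database; the table names, column names, and example rows are assumptions introduced only for illustration and are not a prescribed schema.

    # Minimal sketch of the asset hierarchy storage described above, using SQLite.
    # Table and column names are illustrative assumptions, not a prescribed schema.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE object (
        object_id   INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        object_type TEXT NOT NULL          -- e.g., asset, component, sensor
    );
    CREATE TABLE relationship (
        parent_id   INTEGER REFERENCES object(object_id),
        child_id    INTEGER REFERENCES object(object_id),
        relation    TEXT                   -- e.g., 'contains', 'measures'
    );
    CREATE TABLE attribute (
        object_id   INTEGER REFERENCES object(object_id),
        attr_name   TEXT,
        attr_value  TEXT
    );
    """)

    # Example rows: a pump asset containing a motor component monitored by a vibration sensor.
    conn.executemany("INSERT INTO object VALUES (?, ?, ?)",
                     [(1, "pump-01", "asset"), (2, "motor", "component"), (3, "vib-sensor", "sensor")])
    conn.executemany("INSERT INTO relationship VALUES (?, ?, ?)",
                     [(1, 2, "contains"), (2, 3, "measures")])
    conn.executemany("INSERT INTO attribute VALUES (?, ?, ?)",
                     [(3, "sampling_rate_hz", "1000")])
    conn.commit()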
[0042] FIG. 2 illustrates the process for the semantic feature store 200, in accordance with an example implementation. Semantic feature store 200 has three critical components, which include feature store 210 of historical information, metadata store 220 for items such as use cases, asset hierarchy, geo-positioning, and so on, and data catalog 230 configured to conduct auto-mapping and cataloging of new and existing data by leveraging information from metadata store 220 and feature store 210. The semantic feature store 200 serves as a data hub to clean, sort, and catalog raw data from sensors and to extract valuable information from them to get the data ready for the later modeling process, as well as to provide the capability to catalog future data against similar historical datasets based on various metrics and algorithms. Feature store 210 will save sensor data by asset hierarchy and save the corresponding feature engineering rules for each type of sensor data for future reuse. Metadata store 220 will store the metadata information about the data source for model post-processing. Data catalog 230 will create an index for similar data sources for easier cross reference.
[0043] The process illustrated in FIG. 2 is as follows. Feature store 210 stores raw data, and automatically engineers features accordingly based on asset hierarchy categories. Feature store 210 can involve trigger-based or schedule-based automatic synthetic features (engineered features) and updates based on input data injection. Feature store 210 can involve automatic intelligent data checking, through embedded automatic feature selection criteria to comply with the downstream modeling pipeline, with selection based on data quality, prediction power, data newness, data relevancy, and so on. Feature store 210 communicates with metadata store 220 and data catalog 230 to keep data updates in sync.
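As a non-limiting illustration of the trigger-based synthetic feature generation described above, the following sketch derives a few engineered features from raw sensor readings grouped by asset hierarchy path; the specific feature rules (rolling statistics, deltas) and column names are assumptions for illustration.

    # Sketch of trigger-based synthetic feature generation in the feature store.
    # Feature rules and column names are assumptions for illustration only.
    import pandas as pd

    # Raw sensor readings, keyed by asset hierarchy path and timestamp.
    raw = pd.DataFrame({
        "asset_path": ["plant/pump-01/vib-sensor"] * 6,
        "timestamp": pd.date_range("2022-01-01", periods=6, freq="D"),
        "value": [0.11, 0.12, 0.35, 0.36, 0.80, 0.82],
    })

    def engineer_features(df: pd.DataFrame) -> pd.DataFrame:
        """Apply per-sensor feature engineering rules (rolling mean/std, delta)."""
        df = df.sort_values("timestamp").copy()
        df["rolling_mean_3"] = df["value"].rolling(3, min_periods=1).mean()
        df["rolling_std_3"] = df["value"].rolling(3, min_periods=1).std().fillna(0.0)
        df["delta"] = df["value"].diff().fillna(0.0)
        return df

    # Each new-data ingest acts as a trigger: the rules re-run and the store is refreshed.
    feature_store = {path: engineer_features(g) for path, g in raw.groupby("asset_path")}
    print(feature_store["plant/pump-01/vib-sensor"].tail(2))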
[0044] Metadata Store 220 tracks and logs metadata information (use case information, asset hierarchy, geo-positions, and so on), as well as time series information (data timestamp, device timestamp) of the same sensors or same device. Metadata store 220 also communicates with feature store 210 and data catalog 230 to keep data update in sync. Metadata store 220 also auto-communicates with knowledge graph to help with traverse pattern diagnosis and root cause triage process, as well as to compare and store data from new source, or to enrich existing data nodes.
[0045] Data classification and cataloging 230 involves an unsupervised learning-based auto-mapping process to identify the relationship between new data and existing data for the data curation process in the model recommendation module. The data classification and cataloging 230 can involve dimensionality reduction-based algorithms to determine the contribution to different principal components of a new dataset. Cluster-based algorithms can also be used to find the cluster assignment for a new dataset. Data classification and cataloging can also involve automatic mapping of new input data to the existing data catalog based on sensor type, asset type, and metadata information. Further, data classification and cataloging 230 can store information at the dataset level (a superset of any arbitrary sensors/assets) as input to the algorithm and configuration recommendation system.
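A non-limiting sketch of the unsupervised auto-mapping step described above is given below: a new dataset's summary statistics are projected onto principal components learned from cataloged datasets and assigned to a cluster. The use of simple summary statistics as the dataset "fingerprint" and the fingerprint values themselves are assumptions for illustration.

    # Sketch of the unsupervised auto-mapping step: project a new dataset fingerprint
    # onto principal components of cataloged datasets and assign a cluster.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # Each row is a dataset-level fingerprint (e.g., mean, std, skew of its sensors).
    catalog_fingerprints = np.array([
        [0.20, 0.05, 0.10],
        [0.80, 0.30, 0.50],
        [0.25, 0.07, 0.12],
        [0.78, 0.28, 0.45],
    ])

    pca = PCA(n_components=2).fit(catalog_fingerprints)
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(
        pca.transform(catalog_fingerprints))

    new_fingerprint = np.array([[0.22, 0.06, 0.11]])
    projected = pca.transform(new_fingerprint)
    print("principal component contributions:", projected)
    print("catalog cluster assignment:", kmeans.predict(projected))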
[0046] FIG. 3 illustrates solution curation 300, in accordance with an example implementation. Defining the objective function at 310 is one of the most critical steps in model building life cycle. This persona-based solution curation enables this application to provide more tailored solutions for a business problem, and to ensure that the ‘business impact’ of a model is directly related to model performance. At the same time, the quantified business impact makes the model results easier to be communicated with other stakeholders for faster business decision making.
[0047] The solution curation 300 takes into consideration the operator/user personal profile to look for the business metrics that take higher priority for them in predefined persona-metrics databases, if any.
[0048] Based on the defined persona, the solution curation 300 executes solution formulation 320 and recommends top metrics to be used as the objective function in later model building. After user input is obtained to confirm the objective function, the solution curation 300 automatically matches the suitable model family for the algorithm candidate proposals. If no user input is received, then the system-recommended algorithms will be used by default.
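The following non-limiting sketch illustrates the persona-based curation step above: priority metrics are looked up for a persona, the top metric is proposed as the objective function, and a system default is used when no user confirmation is received. The persona-metric table, metric names, and model family mapping are made-up examples, not values from the present disclosure.

    # Sketch of persona-based solution curation; personas, metrics, and the
    # metric-to-model-family mapping are illustrative assumptions.
    from typing import Optional

    PERSONA_METRICS = {
        "reliability_engineer": ["mean_time_between_failures", "recall"],
        "plant_manager": ["downtime_cost", "precision"],
    }

    METRIC_TO_MODEL_FAMILY = {
        "recall": "classification",
        "precision": "classification",
        "mean_time_between_failures": "survival_regression",
        "downtime_cost": "regression",
    }

    def curate_solution(persona: str, user_choice: Optional[str] = None) -> dict:
        candidates = PERSONA_METRICS.get(persona, ["recall"])
        # Use the confirmed objective if given; otherwise fall back to the top recommendation.
        objective = user_choice if user_choice in candidates else candidates[0]
        return {"objective_function": objective,
                "model_family": METRIC_TO_MODEL_FAMILY[objective]}

    print(curate_solution("reliability_engineer"))        # system recommendation
    print(curate_solution("plant_manager", "precision"))  # user-confirmed objective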
[0049] With regards to model training, the present disclosure introduces a process to warm start the model training process by transfer learning based on previous model training weights or adoption of a model hyperparameter setting from similar datasets identified in the data cataloging process. If there are no similar datasets, or any previous weights, then the model will train from scratch.
[0050] Model training breaks down to two scenarios: 1) With prior model adoption for training 2) Without prior model adoption for training. The prior model adoption feature can be enabled by the user, or by an automatic model zoo search if there are any model configurations saved for similar datasets.
[0051] Without prior model adoption: The model will train from scratch using metadata information and engineered features from the feature store. Based on the input dataset type, after going through the data cataloging and solution curation steps, the proposed objective function with suitable algorithms (e.g., regression, classification, detection, clustering, dimensionality reduction algorithms, and so on) from the model zoo will be used to train the model with default hyperparameter values.
[0052] With prior model adoption: The prior saved metadata fingerprint and device characteristics (from the asset hierarchy) will be used together to estimate the similarity between the historical dataset and the new dataset in the data catalog module (e.g., using a measurement from one dimension, or separate measurements for both dimensions) and thereby decide which existing model configuration is to be used for model adaptation.
[0053] In example implementations, the adapted model configuration file can be used as the starting point for model training of the input dataset. Further, randomized permutations on algorithms (from the model recommendation module) and hyperparameter values will be performed based on user preference, to avoid model performance being trapped in a local optimum. The newly created models will be saved in the model store for future model recommendations. Top models will be selected by weighting business metric performance as well as model metric performance. Generic model files of the best models will be saved and passed to model deployment and the model zoo.
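As a non-limiting illustration of the two training paths described above, the following sketch either starts from default hyperparameters or adopts a prior configuration found for a similar dataset, and then performs randomized permutations around that configuration to avoid a local optimum. The choice of a random forest classifier, the parameter ranges, and the synthetic dataset are assumptions made only to keep the sketch self-contained.

    # Sketch of warm-start training with randomized hyperparameter permutations.
    import random
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)

    def train_candidates(prior_config=None, n_trials=5, seed=0):
        rng = random.Random(seed)
        # Adopt the prior configuration when a similar dataset was found; otherwise use defaults.
        base = prior_config or {"n_estimators": 100, "max_depth": None}
        best_score, best_model = -1.0, None
        for _ in range(n_trials):
            # Randomized permutation around the (possibly adopted) configuration.
            params = {"n_estimators": rng.choice([base["n_estimators"], 50, 200]),
                      "max_depth": rng.choice([base["max_depth"], 5, 10])}
            model = RandomForestClassifier(random_state=0, **params)
            score = cross_val_score(model, X, y, cv=3).mean()
            if score > best_score:
                best_score, best_model = score, model
        return best_model, best_score

    model, score = train_candidates(prior_config={"n_estimators": 150, "max_depth": 8})
    print("best cross-validated accuracy:", round(score, 3))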
[0054] FIG. 4 illustrates an example of the model store/zoo, in accordance with an example implementation. Model zoo serves as a model repository in the backend to support the model training process, and contains algorithms that are available to use for both structured 400 and unstructured data 410. The model zoo contains both vanilla models and pretrained models from all datasets. Vanilla models are for net-new datasets which do not have a similar data source from the semantic feature store. These net-new datasets will be trained with default weights and through hyper-parameter search to find the best model, which will be saved as reference for incoming new datasets. Pretrained models are the best performing models and hyperparameter values saved during each model run for each input dataset.
[0055] Vanilla models: Different vanilla algorithms (such as supervised, unsupervised, semi-supervised learning, and dimensionality reduction models) with default hyperparameter value ranges are predefined in the model zoo. Vanilla models are further categorized by unstructured data and structured data to decide applicability. For net-new datasets without any identified 'similar' prior dataset, the vanilla models to be used for training will follow the output from the solution curation step. Newly emerged algorithms will be reviewed periodically to be added to the vanilla model zoo.
[0056] Pre-trained model archiving process: As new datasets are provided into the pipeline, the best model configuration of each algorithm family for the dataset will be identified during the training process, and its performance information and hyperparameter values will be saved as a model fingerprint, together with the metadata fingerprint, which will be used for the future model recommendation process. Models are saved and indexed in the model store, and categorized by data type, problem type, failure mode, and/or asset hierarchy for fast retrieval and relevance assessment. Model storage can be categorized by structured and unstructured data, since they require a different algorithm family.
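The following non-limiting sketch illustrates the archiving step above: each run's best configuration is stored alongside a metadata fingerprint so that it can be retrieved for future recommendations. The fingerprint fields, index key, and in-memory dictionary used in place of a real model store are assumptions for illustration.

    # Sketch of the pre-trained model archiving process (model fingerprint storage).
    import json, hashlib, time

    MODEL_ZOO = {}  # stands in for the model store database in this sketch

    def archive_model(dataset_meta: dict, algorithm: str, hyperparams: dict, metrics: dict):
        fingerprint = {
            "dataset_meta": dataset_meta,   # asset type, sensor types, problem type, ...
            "algorithm": algorithm,
            "hyperparams": hyperparams,
            "metrics": metrics,
            "saved_at": time.time(),
        }
        # Index by (dataset metadata, algorithm family) for fast retrieval.
        key = hashlib.sha1(json.dumps(
            {"meta": dataset_meta, "algo": algorithm}, sort_keys=True).encode()).hexdigest()
        # Keep only the best-performing configuration per index key.
        if key not in MODEL_ZOO or metrics["f1"] > MODEL_ZOO[key]["metrics"]["f1"]:
            MODEL_ZOO[key] = fingerprint
        return key

    archive_model({"asset_type": "pump", "problem_type": "failure_classification"},
                  "random_forest", {"n_estimators": 200}, {"f1": 0.87})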
[0057] FIG. 5 illustrates an example flow of the model recommendation 500, in accordance with an example implementation. There are mainly four components in the model recommendation module 500:
[0058] Step 1. Model Zoo Creation/Enrichment for Performance Benchmark 510.
[0059] Step 2. Data Curation Process 520.
[0060] Step 3. Model Curation Algorithms 530.
[0061] Step 4. Recommended Model Selection 540.
[0062] Model recommendation 500 will take information from semantic feature store to data curation process to identify how similar a new data source is compared to historical datasets to narrow down the candidates for the model recommendation process. After which, one or more model curation algorithms can be chosen as a selection base to decide on the ranking of algorithm and parameter recommendations. There are three different scenarios for the final recommendation selection, wherein the suitable method will be selected based on each of the scenarios listed below.
[0063] Model zoo creation/enrichment for performance benchmark 510: Model zoo creation is only for the first-time application initiation process to create a vanilla model zoo for both structured data and unstructured data. The model zoo enrichment process will be conducted periodically to update algorithm families with the best rule-of-thumb hyperparameter settings that are tailored to different problem and data types.
[0064] Data Curation Process 520: Data Curation Process 520 uses data cataloging information from the feature store to determine which existing dataset is similar to the new dataset (e.g., belongs to the same first principal component, or the same cluster). Data Curation Process 520 utilizes information from the data catalog to search for historical datasets that share a similar asset type, sensor type, data type, problem type, and so on, to use the best models created from historical datasets as a reference for algorithm and hyperparameter setting recommendation. Dataset 'similarity' will be measured in both a qualitative and quantitative way, with the option to take user/operator weight assignment as input to the similarity determination process.
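As a non-limiting illustration of the similarity measurement described above, the following sketch blends a qualitative match on catalog fields with a quantitative distance between dataset fingerprints, using operator-supplied weights. The field names, fingerprint values, and weighting scheme are assumptions for illustration only.

    # Sketch of weighted qualitative + quantitative dataset similarity.
    import numpy as np

    def dataset_similarity(new_meta, hist_meta, new_fp, hist_fp, weights=(0.5, 0.5)):
        qual_fields = ["asset_type", "sensor_type", "problem_type"]
        # Qualitative part: fraction of matching catalog fields.
        qualitative = np.mean([new_meta.get(f) == hist_meta.get(f) for f in qual_fields])
        # Quantitative part: inverse distance between numeric fingerprints, in (0, 1].
        quantitative = 1.0 / (1.0 + np.linalg.norm(np.asarray(new_fp) - np.asarray(hist_fp)))
        w_qual, w_quant = weights  # operator-assigned weights
        return w_qual * qualitative + w_quant * quantitative

    new_meta = {"asset_type": "pump", "sensor_type": "vibration", "problem_type": "anomaly"}
    hist_meta = {"asset_type": "pump", "sensor_type": "vibration", "problem_type": "classification"}
    print(round(dataset_similarity(new_meta, hist_meta, [0.20, 0.10], [0.25, 0.12]), 3))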
[0065] Model Curation Algorithms 530 involve three different algorithms that can be used for model curation:
[0066] 1. Neuron weight based
[0067] 2. Autoencoder (compare the data fingerprint against what is available in the feature store)
[0068] 3. Use a graph model to compare neuron node structure similarity
[0069] One or more algorithms can be used for model curation. When multiple algorithm results are taken into consideration, the weights can be input by the user/operator.
[0070] Recommended Model Selection 540 can obtain the architecture ranking from the previous module, select the top number (e.g., three) of model hyperparameters/architectures, and have methods to pick the best model according to different scenarios. Such scenarios are as follows.
[0071] Scenario 1: Input data with the same asset hierarchy, together with the same value distribution of each type of sensor data (directly run inference on the new data and compare inference results).
[0072] Scenario 2: Input data with the similar asset hierarchy, together with a similar value distribution of each type of sensor data (Select best architecture from most similar data).
[0073] Scenario 3: Input data with a different asset hierarchy, with a different value distribution and different sensor type.
[0074] In example implementations, there can be a need to retrain the model with all three recommended architectures and hyperparameters, and then compare these three scenarios. In this case, the performance comparison can be time consuming. An alternative is to use the asset hierarchy matching score as the sole metric (e.g., in the case of rare data for certain assets, any prior data related to that asset can be very insightful to use as a reference). When there are multiple factors considered for the model recommendation ranking, the weight assignment for the different factors can be fixed, or dynamically assigned based on asset/component type and the importance of each component to the entire asset. For model indexing and versioning, when a model trains on different data, a snapshot of the model is saved with the data used for training, to update the best model performance for each dataset.
[0075] Another step before model deployment is to self-examine and self-validate model results based on the asset hierarchy and sensor metadata information, such as the sensor physical location, the data quality of a sensor, sensor runtime stability, and so on. This step will further suppress false positive predictions (or false negatives in some cases) to improve overall prediction accuracy with post-processing steps.
[0076] After the models pass the examination and validation stage, the models are ready to be put into production. Depending on the use case and KPI selected at the solution curation stage, there are two types of pipelines for model deployment, a classification model pipeline for current asset status classification and a prediction model pipeline for preventive maintenance.
[0077] All deployment metadata details are stored in graph database for future reference and deployment. Model weights and configuration files are saved in model zoo.
[0078] FIG. 6 illustrates an example of the intelligent self-examination and self-validation, in accordance with an example implementation. After the training stage, the multi-sensor results are cross-checked for self-examination and self-validation as part of the post-processing and the sanity check process. The ground truth used for comparison in this case is either the historical data value from the same sensor or the same asset (under the same condition), or the historical value pattern from a group of sensors that work together (down-stream signal, up-stream signal, and so on).
[0079] Several sensor-level cross-checks of prediction results will be performed, for example (an illustrative sketch follows the list below):
[0080] 1. Check whether the same type of sensor of the same sensor family at a nearby location shares a similar value range; otherwise, it can be a false prediction.
[0081] 2. If one group of sensors provides higher quality data, prediction results from that sensor group will have higher priority, and can be used to suppress results from less reliable sensors.
[0082] 3. For generating prescriptive results, not just one sensor’s prediction will be used, but instead prescriptive results will only be provided if multiple sensor predictions fit with known patterns (from SME knowledge store), to avoid false positives.
[0083] 4. Use status diagnosis model with timestamp information to validate prediction model results for future prediction.
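By way of a non-limiting illustration of cross-checks 1 and 2 above, the following sketch compares a predicted value against readings from nearby sensors of the same family and flags the prediction as implausible when it falls outside their value range; the tolerance, the notion of "nearby", and the example values are assumptions for illustration.

    # Sketch of a sensor-level cross-check used for self-examination and self-validation.
    import statistics

    def validate_prediction(predicted_value, nearby_values, quality_weights=None, tol=3.0):
        """Return True if the prediction is plausible given nearby same-family sensors."""
        if quality_weights:  # prioritize higher-quality sensor groups (cross-check 2)
            pairs = sorted(zip(quality_weights, nearby_values), reverse=True)
            nearby_values = [v for _, v in pairs[: max(2, len(pairs) // 2)]]
        mean = statistics.mean(nearby_values)
        stdev = statistics.pstdev(nearby_values) or 1e-6
        # Cross-check 1: the prediction should share a similar value range with its neighbors.
        return abs(predicted_value - mean) <= tol * stdev

    print(validate_prediction(0.82, [0.80, 0.84, 0.81]))  # within range -> True
    print(validate_prediction(2.50, [0.80, 0.84, 0.81]))  # likely false positive -> False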
[0084] FIG. 6 gives an example of how self-examination and self-validation works. The validation can be done at different levels, such as between the same type of sensors within the same component, between different types of sensors within the same component, or between the same type of sensors across similar components, to check whether the prediction value is within a reasonable range. Depending on the scope of the model, there are two different pipelines to deploy the model.
[0085] Diagnosis model for status: Classification model to classify the current asset status and defect type, if any.
[0086] Prediction model for preventive maintenance: Predicts future breakdown time, e.g., deterioration speed using pixel changes (changes between images) over time.
[0087] There are mainly four components for both pipelines:
[0088] 1) Deployment module
[0089] 2) Prediction results generation
[0090] 3) Postprocessing of Multi-model predictions
[0091] 4) Explainable AI model for both structured and unstructured data
[0092] For both pipelines, a generic model file will be provided as input for model deployment.
[0093] FIG. 7 illustrates an example management system for the Bayesian network 700, in accordance with an example implementation. A Bayesian network is a representation of a joint probability distribution of a set of random variables with a possible mutual causal relationship. The network involves nodes representing the random variables, edges between pairs of nodes representing the causal relationship of these nodes, and a conditional probability distribution in each of the nodes.
[0094] The main objective of the method is to model the posterior conditional probability distribution of outcome (often causal) variable(s) after observing new evidence.
[0095] Bayesian network 700 utilizes historical information saved in the graph database, combined with current information from the maintenance system, to conduct the Bayesian reasoning process and provide sufficient information for the diagnostic reasoning module with the goal of improving diagnostic results. The nodes of the Bayesian network will be interconnected, and each node will be a possible diagnostic for the problem with an associated probability. A sequence of nodes defines the conditional probability of the right diagnostic. This sequence has multiple layers of nodes, and the system will compute the final results based on the ranking of the top likelihoods of all possible diagnoses. The top of the ranking is the most likely path for the right diagnosis. The outcome of this module is the ranking of the node paths with the highest likelihoods of the possible diagnoses.
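The following non-limiting sketch illustrates the posterior ranking described above on a deliberately tiny example: given observed evidence (the classified asset status and defect), candidate diagnoses are ranked by posterior probability using Bayes' rule. The diagnosis names, priors, and likelihoods are made-up illustrative values, not probabilities from the present disclosure.

    # Sketch of Bayesian ranking of candidate diagnoses given observed evidence.
    PRIOR = {"bearing_wear": 0.2, "misalignment": 0.1, "normal": 0.7}

    # P(evidence | diagnosis) for the observed evidence "high vibration + overheating".
    LIKELIHOOD = {"bearing_wear": 0.85, "misalignment": 0.6, "normal": 0.02}

    def rank_diagnoses(prior, likelihood):
        joint = {d: prior[d] * likelihood[d] for d in prior}        # P(d) * P(e | d)
        norm = sum(joint.values())
        posterior = {d: p / norm for d, p in joint.items()}          # Bayes' rule
        return sorted(posterior.items(), key=lambda kv: kv[1], reverse=True)

    for diagnosis, prob in rank_diagnoses(PRIOR, LIKELIHOOD):
        print(f"{diagnosis}: {prob:.2f}")
    # The top-ranked entry corresponds to the output diagnosis with the highest probability.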
[0096] Graph Database has four different types of information stored.
[0097] Historical data and corresponding asset and sensor hierarchy: Mainly serves a storage purpose to categorize and organize raw historical data from the maintenance system (e.g., Maximo, Apex) into symptom, root cause, and solution information, and to define entities and the relationships between different entities for later information retrieval. This type of information can define one-to-many or many-to-many relationships between assets, sensors, symptoms, root causes, and solutions.
[0098] Diagnostic Reasoning Module: Can involve Failure Mode Detection, Root Cause Analysis, and/or Remediation Solution Recommendation. Multiple sources of information are used as the input of the diagnostic reasoning module, which can involve classification model scoring results, current information from the maintenance system, as well as historical information and entity relationships between different symptoms, root causes, and solutions from the graph database. Sensor signal traverse patterns from the asset hierarchy are used for anomaly detection diagnosis and signal triage to detect whether all sensors are working properly.
[0099] Markov Chain: State detection of the system, and code diagnostics based on historical information from signals and the maintenance system. One or more sub-modules can be selected to acquire information as needed. A major failure or downtime can be caused by a series of failures, which requires one or more solutions for each of the failures. Solutions will be proposed based on severity and component importance. Operators will have the freedom to adopt or reject a proposed solution (e.g., if the top solutions are rejected, then the following solutions will fill in automatically). The adopted solution will be passed to the relevant control system. Diagnostic results will be sent to the enablement dashboard.
[0100] FIG. 8 gives an illustrative example of the entity relationships between different types of entities (e.g., asset, component, sensor, failure events, warning messages, symptom, root cause, solution), and the analytics problem and corresponding algorithms used in the solution, in accordance with an example implementation.
[0101] FIG. 9 illustrates an example of sequential solutioning with live state update by using a car example, in accordance with an example implementation. When a problem is detected, the graph database will update the corresponding 'problem' node's value, find associated solutions for that problem, and send the solutions back to the asset to deploy. The results of the proposed solution will circle back to the graph database to update the problem status, whether solved or unsolved, the latter of which will require further diagnosis.
[0102] FIG. 10 illustrates an example of parallel solutioning with the car example, in accordance with an example implementation. The dashed arrow indicates the entity relationship between the sensor and component, whereas the solid arrow indicates cross-sensor communication and the entity relationship between the graph database and sensors. The graph database is able to communicate and interact with multiple sensors and multiple components at the same time when an issue occurs, which usually involves more than one sensor. The graph traverse path can be based on the asset hierarchy component importance ranking, or based on the solution success rate from similar historical events.
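As a non-limiting illustration of the solutioning loop described above, the following sketch flags a problem node in a small graph, looks up its associated solution nodes ranked by historical success rate, and writes the outcome back to update the problem status. The node names, relation labels, and success rates are illustrative assumptions and are not taken from the figures.

    # Sketch of graph-based solutioning with a feedback loop to the problem node.
    import networkx as nx

    g = nx.DiGraph()
    g.add_edge("engine", "overheating", relation="has_problem")
    g.add_edge("overheating", "top_up_coolant", relation="has_solution", success_rate=0.8)
    g.add_edge("overheating", "replace_thermostat", relation="has_solution", success_rate=0.6)
    g.nodes["overheating"]["status"] = "detected"

    def propose_solutions(graph, problem):
        sols = [(v, d["success_rate"]) for _, v, d in graph.out_edges(problem, data=True)
                if d.get("relation") == "has_solution"]
        # Rank candidate solutions by historical success rate.
        return sorted(sols, key=lambda s: s[1], reverse=True)

    def report_outcome(graph, problem, solved: bool):
        # Feedback loop: the outcome updates the problem status in the graph.
        graph.nodes[problem]["status"] = "solved" if solved else "unsolved"

    top_solution, _ = propose_solutions(g, "overheating")[0]
    print("deploying:", top_solution)
    report_outcome(g, "overheating", solved=True)
    print("problem status:", g.nodes["overheating"]["status"])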
[0103] For the Knowledge Based Solution Pipeline Storage, SME knowledge store is configured to store the analytics use case with SME inputs, business metrics (model evaluation metrics) selection, domain knowledge-based feature engineering, model best practice (algorithms and hyperparameter settings), problem specific post processing, and so on. The example implementations can standardize the analytics use case with the goal of reusing in the future.
[0104] Analytics use cases are indexed by asset, sensor, symptoms, root causes, and solutions for fast information retrieval and for the solution recommendation process for any new problem.
[0105] Chatbot - Knowledge Extraction can involve reactive-based knowledge extraction, involving interaction with a chatbot to log information such as question type, high frequency questions, accepted solutions, and so on, from conversation dialogue. Such an implementation can improve chatbot performance by recommending solutions from the historical dialogue data distribution.
[0106] FIG. 11 illustrates an example of the enablement system dashboard, in accordance with an example implementation. In example implementations, there is Complex Event Processing (CEP) 1100 that is integrated with three different systems: Safety System (not illustrated), Information System 1101 and System Control 1102. A unified dashboard provides the visualization of the three systems. Below is a brief explanation about each of them.
[0107] A unified dashboard compiles all results and prescriptive information to give the user easy access to actionable insights from analytics jobs, and provides connection to the systems below:
[0108] System control 1102: CEP 1100 triggers an event to system control 1102. Once a failure is detected and the severity level meets the trigger condition, system control will activate a contingency plan for the related assets and sensors to minimize the impact of the failure, and to avoid a catastrophic accident or potential personal injury.
[0109] If a failure is detected and the severity level meets the trigger condition, the corresponding safety system will be activated to mitigate the impact of the detected failure. Further, the recommended remediation solution will be provided to the operator for consideration.
[0110] Information system 1101: CEP 1100 triggers an event to any external system (e.g., Slack, mobile companies). If a failure is detected and the severity level meets the trigger condition, the relevant personnel will be notified through an app or mobile notification. Upstream and downstream systems will be notified to minimize the impact of the detected failure or of an interruption from a sensor or asset.
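The following non-limiting sketch illustrates the CEP routing described above: when a detected failure's severity meets the trigger condition, events are dispatched to the safety system and system control, and the information system notifies relevant personnel. The severity scale, threshold, and handler names are assumptions for illustration.

    # Sketch of CEP event dispatching to the safety, information, and control systems.
    from dataclasses import dataclass

    @dataclass
    class FailureEvent:
        asset: str
        description: str
        severity: int  # e.g., 1 (minor) .. 5 (critical)

    def notify_information_system(event):   # app / mobile notification stub
        print(f"[info] notify personnel: {event.asset} - {event.description}")

    def activate_safety_system(event):      # safety mitigation stub
        print(f"[safety] mitigation activated for {event.asset}")

    def trigger_system_control(event):      # contingency plan stub
        print(f"[control] contingency plan engaged for {event.asset}")

    def cep_dispatch(event, severity_threshold=3):
        if event.severity >= severity_threshold:
            activate_safety_system(event)
            trigger_system_control(event)
        notify_information_system(event)    # relevant personnel are always informed

    cep_dispatch(FailureEvent("pump-01", "bearing temperature exceeds limit", severity=4))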
[0111] Through the example implementations described herein, it is possible to facilitate the reusability of time series models to new analytics solutions. Further, example implementations can leverage the existing domain expert knowledge of solutions, and convert to reusable analytical solution pipelines to solve new/similar problems.
[0112] In a maintenance process, the diagnostic reasoning and the solution are an expertise domain, and often it is the tacit knowledge of the technical experts (more senior technicians). The example implementations described herein can provide a solution that stores maintenance diagnostic reasoning in a graph database and keeps this knowledge persistent and available to any expert.
[0113] Example implementations can further facilitate the self-improvement of the model recommendation based on the data and model curation process using the model zoo performance benchmark.
[0114] Further, in the example implementations described herein, through the semantic feature store, by using the data structure, data catalog, and metadata, data information is enriched to help in the curation of a new solution for a new problem with minimal human intervention.
[0115] In the example implementations described herein, there is a semantic feature store which can include a feature store and a metadata store. The feature store can store raw data, and automatically engineer features accordingly based on asset hierarchy categories. The feature store can further facilitate trigger-based or schedule-based automatic synthetic features (engineered features) and updates based on input data injection. The feature store can also facilitate automatic intelligent data checking by use of embedded automatic feature selection criteria to comply with the downstream modeling pipeline. Selection can be based on data quality, prediction power, data newness, data relevancy, and so on.
[0116] The metadata store can communicate with the feature store and data catalog to keep data updates in sync. The metadata store can also auto-communicate with the knowledge graph to help with the traverse pattern diagnosis and root cause triage process, as well as to compare and store data from a new source, or to enrich existing data nodes.
[0117] Data classification and cataloging can involve an unsupervised learning-based auto-mapping process to identify relationships between new data and existing data for the data curation process in the model recommendation module. Such unsupervised learning-based auto-mapping processes can involve dimensionality reduction-based algorithms to find the contribution to different principal components of a new dataset, and cluster-based algorithms to find the cluster assignment for a new dataset. Data classification and cataloging can involve automatically mapping new input data to the existing data catalog based on sensor type, asset type, and metadata information, and can store information at the dataset level (a superset of any arbitrary sensors/assets) as input to the algorithm and configuration recommendation system.
[0118] Solution curation takes into consideration the operator/user personal profile to look for the business metrics that take higher priority for them in a predefined persona-metrics database, if any. Based on the defined persona, the solution curation recommends top metrics to be used as the objective function in later model building. After user input is obtained to confirm the objective function, the solution curation automatically matches a suitable model family for algorithm candidate proposals. If no user input is received, then the system-recommended algorithms will be used by default.
[0119] FIG. 12 illustrates a system involving a plurality of physical systems networked to a management apparatus, in accordance with an example implementation. One or more physical systems 1201 integrated with various sensors are communicatively coupled to a network 1200 (e.g., local area network (LAN), wide area network (WAN)) through the corresponding network interface of the sensor system installed in the physical systems 1201, which is connected to a management apparatus 1202. The management apparatus 1202 manages a database 1203, which contains historical data collected from the sensor systems from each of the physical systems 1201. In alternate example implementations, the data from the sensor systems of the physical systems 1201 can be stored to a central repository or central database such as proprietary databases that intake data from the physical systems 1201, or systems such as enterprise resource planning systems, and the management apparatus 1202 can access or retrieve the data from the central repository or central database. The sensor systems of the physical systems 1201 can include any type of sensors to facilitate the desired implementation, such as but not limited to gyroscopes, accelerometers, global positioning satellite (GPS), thermometers, humidity gauges, or any sensors that can measure one or more of temperature, humidity, gas levels (e.g., CO2 gas), and so on. Examples of physical systems can include, but are not limited to, shipping containers, lathes, air compressors, and so on. Further, the physical systems can also be represented as virtual systems, such as in the form of a digital twin.
[0120] FIG. 13 illustrates an example computing environment with an example computer device suitable for use in some example implementations, such as a management apparatus 1202 as illustrated in FIG. 12. Computer device 1305 in computing environment 1300 can include one or more processing units, cores, or processors 1310, memory 1315 (e.g., RAM, ROM, and/or the like), internal storage 1320 (e.g., magnetic, optical, solid state storage, and/or organic), and/or I/O interface 1325, any of which can be coupled on a communication mechanism or bus 1330 for communicating information or embedded in the computer device 1305. I/O interface 1325 is also configured to receive images from cameras or provide images to projectors or displays, depending on the desired implementation.
[0121] Computer device 1305 can be communicatively coupled to input/user interface 1335 and output device/interface 1340. Either one or both of input/user interface 1335 and output device/interface 1340 can be a wired or wireless interface and can be detachable. Input/user interface 1335 may include any device, component, sensor, or interface, physical or virtual, that can be used to provide input (e.g., buttons, touch-screen interface, keyboard, a pointing/cursor control, microphone, camera, braille, motion sensor, optical reader, and/or the like). Output device/interface 1340 may include a display, television, monitor, printer, speaker, braille, or the like. In some example implementations, input/user interface 1335 and output device/interface 1340 can be embedded with or physically coupled to the computer device 1305. In other example implementations, other computer devices may function as or provide the functions of input/user interface 1335 and output device/interface 1340 for a computer device 1305.
[0122] Examples of computer device 1305 may include, but are not limited to, highly mobile devices (e.g., smartphones, devices in vehicles and other machines, devices carried by humans and animals, and the like), mobile devices (e.g., tablets, notebooks, laptops, personal computers, portable televisions, radios, and the like), and devices not designed for mobility (e.g., desktop computers, other computers, information kiosks, televisions with one or more processors embedded therein and/or coupled thereto, radios, and the like).
[0123] Computer device 1305 can be communicatively coupled (e.g., via I/O interface 1325) to external storage 1345 and network 1350 for communicating with any number of networked components, devices, and systems, including one or more computer devices of the same or different configuration. Computer device 1305 or any connected computer device can be functioning as, providing services of, or referred to as a server, client, thin server, general machine, special-purpose machine, or another label.
[0124] I/O interface 1325 can include, but is not limited to, wired and/or wireless interfaces using any communication or I/O protocols or standards (e.g., Ethernet, 802.11x, Universal Serial Bus, WiMax, modem, a cellular network protocol, and the like) for communicating information to and/or from at least all the connected components, devices, and network in computing environment 1300. Network 1350 can be any network or combination of networks (e.g., the Internet, local area network, wide area network, a telephonic network, a cellular network, satellite network, and the like).
[0125] Computer device 1305 can use and/or communicate using computer-usable or computer-readable media, including transitory media and non-transitory media. Transitory media include transmission media (e.g., metal cables, fiber optics), signals, carrier waves, and the like. Non-transitory media include magnetic media (e.g., disks and tapes), optical media (e.g., CD ROM, digital video disks, Blu-ray disks), solid state media (e.g., RAM, ROM, flash memory, solid-state storage), and other non-volatile storage or memory.
[0126] Computer device 1305 can be used to implement techniques, methods, applications, processes, or computer-executable instructions in some example computing environments. Computer-executable instructions can be retrieved from transitory media, and stored on and retrieved from non-transitory media. The executable instructions can originate from one or more of any programming, scripting, and machine languages (e.g., C, C++, C#, Java, Visual Basic, Python, Perl, JavaScript, and others).
[0127] Processor(s) 1310 can execute under any operating system (OS) (not shown), in a native or virtual environment. One or more applications can be deployed that include logic unit 1360, application programming interface (API) unit 1365, input unit 1370, output unit 1375, and inter-unit communication mechanism 1395 for the different units to communicate with each other, with the OS, and with other applications (not shown). The described units and elements can be varied in design, function, configuration, or implementation and are not limited to the descriptions provided. Processor(s) 1310 can be in the form of hardware processors such as central processing units (CPUs) or in a combination of hardware and software units.
[0128] In some example implementations, when information or an execution instruction is received by API unit 1365, it may be communicated to one or more other units (e.g., logic unit 1360, input unit 1370, output unit 1375). In some instances, logic unit 1360 may be configured to control the information flow among the units and direct the services provided by API unit 1365, input unit 1370, output unit 1375, in some example implementations described above. For example, the flow of one or more processes or implementations may be controlled by logic unit 1360 alone or in conjunction with API unit 1365. The input unit 1370 may be configured to obtain input for the calculations described in the example implementations, and the output unit 1375 may be configured to provide output based on the calculations described in example implementations.
[0129] Processor(s) 1310 can be configured to execute methods or instructions for facilitating a computer-implemented autonomous learning application with machine information ingestion, assessable, interoperable, catalogable, reusable construct, and human interface human knowledge extraction associated with a plurality of assets in an asset hierarchy, which can involve for receipt of a request of a diagnosis on an asset from the plurality of assets in a computer-implemented diagnostic, predictive and prescriptive human interface (e.g., a chatbot as described herein), executing a diagnostic, predictive and prescriptive model configured to classify asset status and defect for the asset; and processing the classified asset status and the defect from the execution of the diagnostic, predictive and prescriptive model through a Bayesian network, the Bayesian network involving a plurality of interconnected nodes, each of the plurality of interconnected nodes representative of a possible diagnostic for the request and a probability, wherein sequences of the plurality of interconnected nodes are representative of a conditional probability of the diagnostic, the processing resulting in an output diagnosis having a highest probability as provided from the Bayesian network; wherein the diagnostic, predictive and prescriptive human interface is configured to provide a root cause of the defect and a solution for the asset based on referencing the output diagnosis to a graph database configured to associate diagnosis, root cause, and solution for each asset in an asset hierarchy, wherein the Bayesian network is updated from the graph database and feedback of current information from a maintenance system.
[0130] Processor(s) 1310 can be configured to execute any of the methods or instructions as described herein, and further involve selecting the diagnostic, predictive and prescriptive model from a plurality of pre-trained diagnostic, predictive and prescriptive models for the execution based on asset hierarchy similarity, the plurality of pre-trained diagnostic, predictive and prescriptive models indexed by asset hierarchy.
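As a non-limiting sketch of such a selection step, the registry keys, asset paths, and the path-overlap similarity below are assumptions used only to illustrate indexing pre-trained models by asset hierarchy:

# Hypothetical registry of pre-trained models indexed by asset-hierarchy path
model_registry = {
    ("plant_a", "line_1", "pump", "centrifugal"): "model_pump_centrifugal_v3",
    ("plant_a", "line_1", "motor", "induction"):  "model_motor_induction_v1",
}

def hierarchy_similarity(path_a, path_b):
    # Fraction of leading hierarchy levels shared by two asset paths
    matches = 0
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        matches += 1
    return matches / max(len(path_a), len(path_b))

def select_model(asset_path):
    # Choose the pre-trained model whose indexed hierarchy is most similar to the asset
    return max(model_registry.items(),
               key=lambda item: hierarchy_similarity(asset_path, item[0]))[1]

print(select_model(("plant_a", "line_1", "pump", "reciprocating")))  # model_pump_centrifugal_v3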
[0131] Processor(s) 1310 can be configured to execute any of the methods or instructions as described herein, and further involve managing a semantic feature store, the managing the semantic feature store involving storing raw data from sensors associated with the plurality of assets; automatically engineering features from the raw data based on asset hierarchy categories; and generating synthetic features for updating the semantic feature store in response to input data.
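One minimal, purely illustrative sketch of such a semantic feature store is given below; the asset identifiers, category names, and feature recipes are assumptions and do not limit the implementations described herein:

import statistics

class SemanticFeatureStore:
    def __init__(self):
        self.raw = {}        # (asset_id, sensor) -> list of raw readings
        self.features = {}   # (asset_id, feature_name) -> engineered value

    def store_raw(self, asset_id, sensor, readings):
        self.raw.setdefault((asset_id, sensor), []).extend(readings)

    def engineer(self, asset_id, category):
        # Apply an automatic feature recipe depending on the asset hierarchy category
        for (aid, sensor), readings in self.raw.items():
            if aid != asset_id or not readings:
                continue
            self.features[(aid, sensor + "_mean")] = statistics.fmean(readings)
            if category == "rotating_equipment":
                self.features[(aid, sensor + "_spread")] = statistics.pstdev(readings)

    def add_synthetic(self, asset_id, new_value):
        # Synthetic feature: deviation of newly received input from the stored mean
        mean = self.features.get((asset_id, "vibration_mean"), 0.0)
        self.features[(asset_id, "vibration_deviation")] = new_value - mean

store = SemanticFeatureStore()
store.store_raw("pump_01", "vibration", [0.12, 0.15, 0.11, 0.40])
store.engineer("pump_01", "rotating_equipment")
store.add_synthetic("pump_01", 0.55)
print(store.features)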
[0132] Processor(s) 1310 can be configured to execute any of the methods or instructions as described herein, wherein the managing the semantic feature store can involve providing metadata to synchronize the semantic feature store and a data catalog.

[0133] Processor(s) 1310 can be configured to execute any of the methods or instructions as described herein, wherein the managing the semantic feature store further involves executing an unsupervised learning based auto-mapping process configured to identify relationships between the stored raw data and received new data based on one or more of sensor type, asset type, or metadata.
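A simplified stand-in for such an auto-mapping step is sketched below; the metadata fields, distance weights, and stream identifiers are illustrative assumptions, and an actual implementation could instead apply clustering or other unsupervised learning over the same descriptors:

# Hypothetical metadata descriptors for already-stored streams and one incoming stream
stored_streams = [
    {"id": "s1", "sensor_type": "vibration",   "asset_type": "pump", "rate_hz": 100},
    {"id": "s2", "sensor_type": "temperature", "asset_type": "pump", "rate_hz": 1},
]
incoming = {"id": "new", "sensor_type": "vibration", "asset_type": "pump", "rate_hz": 120}

def metadata_distance(a, b):
    # Unit penalty per mismatched categorical field plus a scaled numeric difference
    distance = 0.0
    distance += 0.0 if a["sensor_type"] == b["sensor_type"] else 1.0
    distance += 0.0 if a["asset_type"] == b["asset_type"] else 1.0
    distance += abs(a["rate_hz"] - b["rate_hz"]) / 1000.0
    return distance

best_match = min(stored_streams, key=lambda s: metadata_distance(s, incoming))
print("map", incoming["id"], "->", best_match["id"])   # map new -> s1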
[0134] Processor(s) 1310 can be configured to execute any of the methods or instructions as described herein, further involving determining, from a user profile associated with the request, performance metrics to be used for the diagnostic, predictive and prescriptive model; and selecting the diagnostic, predictive and prescriptive model from a plurality of pre-trained diagnostic models for the execution based on the performance metrics.
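For example, and without limitation, the persona recorded in the user profile could be mapped to a ranking metric as in the short sketch below; the personas, metrics, and scores are fabricated for illustration only:

# Hypothetical mapping from persona to the metric used to rank pre-trained models
persona_metric = {"reliability_engineer": "recall", "plant_manager": "precision"}

candidate_models = [
    {"name": "m1", "precision": 0.91, "recall": 0.72},
    {"name": "m2", "precision": 0.84, "recall": 0.88},
]

def select_by_profile(profile):
    metric = persona_metric.get(profile["persona"], "recall")
    return max(candidate_models, key=lambda m: m[metric])["name"]

print(select_by_profile({"persona": "reliability_engineer"}))  # m2, highest recall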
[0135] Processor(s) 1310 can be configured to execute any of the methods or instructions as described herein, and further involve training a plurality of diagnostic, predictive and prescriptive models, the training the plurality of diagnostic, predictive and prescriptive models involving training a diagnostic, predictive and prescriptive model from the plurality of diagnostic, predictive and prescriptive models for each proposed objective function from raw data received from sensors associated with the plurality of assets, the training utilizing default hyperparameter values and the each proposed objective function; and for receipt of new data from the sensors associated with the plurality of assets, estimating a similarity from the new data and the stored data based on the asset hierarchy and metadata; selecting ones of the trained plurality of diagnostic, predictive and prescriptive models for retraining based on the similarity; and randomizing the hyperparameter values and algorithms utilized for training the selected ones of the trained plurality of diagnostic, predictive and prescriptive models to retrain the selected ones of the trained plurality of diagnostic, predictive and prescriptive models.
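A condensed, hypothetical sketch of this training and retraining flow is shown below; the objective names, the similarity threshold, and the hyperparameter ranges are assumptions, and the train() call is a placeholder rather than an actual training routine:

import random

objectives = ["minimize_downtime", "minimize_false_alarms"]
defaults = {"learning_rate": 0.1, "max_depth": 6}

def train(objective, data, hyperparams):
    # Placeholder for an actual model-training call
    return {"objective": objective, "hyperparams": dict(hyperparams), "n": len(data)}

# One model per proposed objective function, trained with default hyperparameters
models = [train(obj, ["raw_batch_1"], defaults) for obj in objectives]

def retrain_on_new_data(models, new_data, similarity_fn, threshold=0.5):
    retrained = []
    for m in models:
        if similarity_fn(m, new_data) < threshold:
            retrained.append(m)            # hierarchy too dissimilar: keep as-is
            continue
        randomized = {"learning_rate": random.choice([0.01, 0.05, 0.1]),
                      "max_depth": random.randint(3, 10)}
        retrained.append(train(m["objective"], new_data, randomized))
    return retrained

models = retrain_on_new_data(models, ["raw_batch_2"], lambda m, d: 0.8)
print(models[0]["hyperparams"])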
[0136] Processor(s) 1310 can be configured to execute the methods or instructions as described herein, and further involve managing predictive and prescriptive functions including an inferencing engine storing predictive algorithms configured to use a plurality of predictive models to predict machine internal status, corresponding anomaly score, and user risk based on historical data and machine status.
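As one non-limiting illustration of the predictive side, the simple baseline-deviation score below stands in for the plurality of predictive models; the thresholds and the risk mapping are invented for clarity:

def predict_status(history, latest):
    # Compare the latest reading against the historical baseline to score the anomaly
    baseline = sum(history) / len(history)
    anomaly_score = min(1.0, abs(latest - baseline) / (baseline + 1e-9))
    status = "degraded" if anomaly_score > 0.3 else "healthy"
    user_risk = "high" if status == "degraded" and latest > 1.5 * baseline else "low"
    return {"machine_status": status,
            "anomaly_score": round(anomaly_score, 2),
            "user_risk": user_risk}

print(predict_status([0.10, 0.12, 0.11], 0.20))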
[0137] Processor(s) 1310 can be configured to execute any of the methods or instructions as described herein, and further involve managing predictive and prescriptive functions involving an inference engine storing a plurality of optimization models trained against treatment for documented anomalies, the inference engine configured to determine an optimization output from the plurality of optimization models as a prescriptive recommendation based on predictive outcomes.
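The prescriptive determination can be illustrated, again without limitation, by the cost-weighted scoring of candidate treatments below; the treatments, costs, and scoring function are hypothetical:

# Hypothetical predictive outcome and candidate treatments for the documented anomaly
predicted = {"anomaly": "bearing_wear", "anomaly_score": 0.82, "days_to_failure": 14}

treatments = [
    {"action": "lubricate_and_monitor", "cost": 200,  "expected_risk_reduction": 0.25},
    {"action": "replace_bearing",       "cost": 1500, "expected_risk_reduction": 0.90},
]

def prescribe(prediction, options):
    # Net benefit: avoided risk weighted by the anomaly score, minus normalized cost
    def score(t):
        return prediction["anomaly_score"] * t["expected_risk_reduction"] - t["cost"] / 10000
    return max(options, key=score)

print(prescribe(predicted, treatments)["action"])  # replace_bearing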
[0138] Processor(s) 1310 can be configured to execute any of the methods or instructions as described herein, and further involve managing predictive and prescriptive functions involving a proactive maintenance and sustainability function calculated and associated with indexed maintenance manuals and operation instructions, Confluence pages, and associated events.
[0139] Depending on the desired implementation, the human interface can include a chatbot.
[0140] Depending on the desired implementation, the Bayesian network can be continuously updated through machine ingest data, outcome, ground truth and persona usage.
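One minimal sketch of folding such feedback back into the network is a count-based update with additive smoothing, as below; the evidence names, counts, and smoothing constant are assumptions made purely for illustration:

# Hypothetical co-occurrence counts of (observed evidence, confirmed diagnosis)
counts = {("high_vibration", "bearing_wear"): 8, ("high_vibration", "misalignment"): 2}

def record_outcome(evidence, confirmed_diagnosis):
    # Ground truth fed back from the maintenance system updates the counts
    key = (evidence, confirmed_diagnosis)
    counts[key] = counts.get(key, 0) + 1

def conditional_probability(evidence, diagnosis, smoothing=1.0, n_diagnoses=3):
    total = sum(c for (e, _), c in counts.items() if e == evidence)
    return (counts.get((evidence, diagnosis), 0) + smoothing) / (total + smoothing * n_diagnoses)

record_outcome("high_vibration", "bearing_wear")
print(round(conditional_probability("high_vibration", "bearing_wear"), 3))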
[0141] Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations within a computer. These algorithmic descriptions and symbolic representations are the means used by those skilled in the data processing arts to convey the essence of their innovations to others skilled in the art. An algorithm is a series of defined steps leading to a desired end state or result. In example implementations, the steps carried out require physical manipulations of tangible quantities for achieving a tangible result.
[0142] Unless specifically stated otherwise, as apparent from the discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” “displaying,” or the like, can include the actions and processes of a computer system or other information processing device that manipulates and transforms data represented as physical (electronic) quantities within the computer system’s registers and memories into other data similarly represented as physical quantities within the computer system’s memories or registers or other information storage, transmission or display devices.
[0143] Example implementations may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may include one or more general-purpose computers selectively activated or reconfigured by one or more computer programs. Such computer programs may be stored in a computer readable medium, such as a computer-readable storage medium or a computer-readable signal medium. A computer-readable storage medium may involve tangible mediums such as, but not limited to optical disks, magnetic disks, read-only memories, random access memories, solid state devices and drives, or any other types of tangible or non-transitory media suitable for storing electronic information. A computer readable signal medium may include mediums such as carrier waves. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Computer programs can involve pure software implementations that involve instructions that perform the operations of the desired implementation.
[0144] Various general-purpose systems may be used with programs and modules in accordance with the examples herein, or it may prove convenient to construct a more specialized apparatus to perform desired method steps. In addition, the example implementations are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the techniques of the example implementations as described herein. The instructions of the programming language(s) may be executed by one or more processing devices, e.g., central processing units (CPUs), processors, or controllers.
[0145] As is known in the art, the operations described above can be performed by hardware, software, or some combination of software and hardware. Various aspects of the example implementations may be implemented using circuits and logic devices (hardware), while other aspects may be implemented using instructions stored on a machine-readable medium (software), which if executed by a processor, would cause the processor to perform a method to carry out implementations of the present application. Further, some example implementations of the present application may be performed solely in hardware, whereas other example implementations may be performed solely in software. Moreover, the various functions described can be performed in a single unit, or can be spread across a number of components in any number of ways. When performed by software, the methods may be executed by a processor, such as a general purpose computer, based on instructions stored on a computer-readable medium. If desired, the instructions can be stored on the medium in a compressed and/or encrypted format.
[0146] Moreover, other implementations of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the techniques of the present application. Various aspects and/or components of the described example implementations may be used singly or in any combination. It is intended that the specification and example implementations be considered as examples only, with the true scope and spirit of the present application being indicated by the following claims.

Claims

What is claimed is:
1. A method for facilitating a computer-implemented autonomous learning application with machine information ingestion, assessable, interoperable, catalogable, reusable construct, and human interface human knowledge extraction associated with a plurality of assets in an asset hierarchy, the method comprising: for receipt of a request of a diagnosis on an asset from the plurality of assets in a computer-implemented diagnostic, predictive and prescriptive human interface: executing a diagnostic, predictive and prescriptive model configured to classify asset status and defect for the asset; and processing the classified asset status and the defect from the execution of the diagnostic, predictive and prescriptive model through a Bayesian network, the Bayesian network comprising a plurality of interconnected nodes, each of the plurality of interconnected nodes representative of a possible diagnostic for the request and a probability, wherein sequences of the plurality of interconnected nodes are representative of a conditional probability of the diagnostic, the processing resulting in an output diagnosis having a highest probability as provided from the Bayesian network; wherein the diagnostic, predictive and prescriptive human interface is configured to provide a root cause of the defect and a solution for the asset based on referencing the output diagnosis to a graph database configured to associate diagnosis, root cause, and solution for each asset in an asset hierarchy, wherein the Bayesian network is updated from the graph database and feedback of current information from a maintenance system.

2. The method of claim 1, further comprising selecting the diagnostic, predictive and prescriptive model from a plurality of pre-trained diagnostic, predictive and prescriptive models for the execution based on asset hierarchy similarity, the plurality of pre-trained diagnostic, predictive and prescriptive models indexed by asset hierarchy.

3. The method of claim 1, further comprising managing a semantic feature store, the managing the semantic feature store comprising: storing raw data from sensors associated with the plurality of assets; automatically engineering features from the raw data based on asset hierarchy categories; and generating synthetic features for updating the semantic feature store in response to input data.

4. The method of claim 3, wherein the managing the semantic feature store further comprises: providing metadata to synchronize the semantic feature store and a data catalog.

5. The method of claim 3, wherein the managing the semantic feature store further comprises executing an unsupervised learning based auto-mapping process configured to identify relationships between the stored raw data and received new data based on one or more of sensor type, asset type, or metadata.

6. The method of claim 1, further comprising: determining, from a user profile associated with the request, performance metrics to be used for the diagnostic, predictive and prescriptive model; and selecting the diagnostic, predictive and prescriptive model from a plurality of pre-trained diagnostic models for the execution based on the performance metrics.
7. The method of claim 1, further comprising training a plurality of diagnostic, predictive and prescriptive models, the training the plurality of diagnostic, predictive and prescriptive models comprising: training a diagnostic, predictive and prescriptive model from the plurality of diagnostic, predictive and prescriptive models for each proposed objective function from raw data received from sensors associated with the plurality of assets, the training utilizing default hyperparameter values and the each proposed objective function; and for receipt of new data from the sensors associated with the plurality of assets: estimating a similarity from the new data and the stored data based on the asset hierarchy and metadata; selecting ones of the trained plurality of diagnostic, predictive and prescriptive models for retraining based on the similarity; and randomizing the hyperparameter values and algorithms utilized for training the selected ones of the trained plurality of diagnostic, predictive and prescriptive models to retrain the selected ones of the trained plurality of diagnostic, predictive and prescriptive models.

8. The method of claim 1, further comprising managing predictive and prescriptive functions comprising an inferencing engine storing predictive algorithms configured to use a plurality of predictive models to predict machine internal status, corresponding anomaly score, and user risk based on historical data and machine status.

9. The method of claim 1, further comprising managing predictive and prescriptive functions comprising an inference engine storing a plurality of optimization models trained against treatment for documented anomalies, the inference engine configured to determine an optimization output from the plurality of optimization models as a prescriptive recommendation based on predictive outcomes.

10. The method of claim 1, further comprising managing predictive and prescriptive functions comprising a proactive maintenance and sustainability function calculated and associated with indexed maintenance manuals and operation instructions, Confluence pages, and associated events.

11. The method of claim 1, wherein the human interface includes a chatbot.

12. The method of claim 1, wherein the Bayesian network is continuously updated through machine ingest data, outcome, ground truth and persona usage.
13. An apparatus for facilitating a computer-implemented autonomous learning application with machine information ingestion, assessable, interoperable, catalogable, reusable construct, and human interface human knowledge extraction associated with a plurality of assets in an asset hierarchy, the apparatus comprising: a processor, configured to: for receipt of a request of a diagnosis on an asset from the plurality of assets in a computer-implemented diagnostic, predictive and prescriptive human interface: execute a diagnostic, predictive and prescriptive model configured to classify asset status and defect for the asset; and process the classified asset status and the defect from the execution of the diagnostic, predictive and prescriptive model through a Bayesian network, the Bayesian network comprising a plurality of interconnected nodes, each of the plurality of interconnected nodes representative of a possible diagnostic for the request and a probability, wherein sequences of the plurality of interconnected nodes are representative of a conditional probability of the diagnostic, the processing resulting in an output diagnosis having a highest probability as provided from the Bayesian network; wherein the diagnostic, predictive and prescriptive human interface is configured to provide a root cause of the defect and a solution for the asset based on referencing the output diagnosis to a graph database configured to associate diagnosis, root cause, and solution for each asset in an asset hierarchy, wherein the Bayesian network is updated from the graph database and feedback of current information from a maintenance system.

14. A computer program, storing instructions for facilitating a computer-implemented autonomous learning application with machine information ingestion, assessable, interoperable, catalogable, reusable construct, and human interface human knowledge extraction associated with a plurality of assets in an asset hierarchy, the instructions comprising: for receipt of a request of a diagnosis on an asset from the plurality of assets in a computer-implemented diagnostic, predictive and prescriptive human interface: executing a diagnostic, predictive and prescriptive model configured to classify asset status and defect for the asset; and processing the classified asset status and the defect from the execution of the diagnostic, predictive and prescriptive model through a Bayesian network, the Bayesian network comprising a plurality of interconnected nodes, each of the plurality of interconnected nodes representative of a possible diagnostic for the request and a probability, wherein sequences of the plurality of interconnected nodes are representative of a conditional probability of the diagnostic, the processing resulting in an output diagnosis having a highest probability as provided from the Bayesian network; wherein the diagnostic, predictive and prescriptive human interface is configured to provide a root cause of the defect and a solution for the asset based on referencing the output diagnosis to a graph database configured to associate diagnosis, root cause, and solution for each asset in an asset hierarchy, wherein the Bayesian network is updated from the graph database and feedback of current information from a maintenance system.