WO2023129164A1 - Digital twin sequential and temporal learning and explaining - Google Patents


Info

Publication number
WO2023129164A1
WO2023129164A1 (PCT/US2021/065717)
Authority
WO
WIPO (PCT)
Prior art keywords
stage
digital twin
physical process
output
KPIs
Prior art date
Application number
PCT/US2021/065717
Other languages
French (fr)
Inventor
Wei Lin
Yongqiang Zhang
Original Assignee
Hitachi Vantara Llc
Priority date
Filing date
Publication date
Application filed by Hitachi Vantara Llc filed Critical Hitachi Vantara Llc
Priority to PCT/US2021/065717 priority Critical patent/WO2023129164A1/en
Publication of WO2023129164A1 publication Critical patent/WO2023129164A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/042 Knowledge-based neural networks; Logical representations of neural networks
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/02 Knowledge representation; Symbolic representation
    • G06N 5/022 Knowledge engineering; Knowledge acquisition

Definitions

  • the present disclosure is generally related to Internet of Things (IoT), Operational Technology (OT), and Digital Twin (DT) systems, and more specifically, to facilitating a framework for digital twin sequential and temporal learning and explaining.
  • IoT Internet of Things
  • OT Operational Technology
  • DT Digital Twin
  • a digital twin is a virtual representation that serves as the real-time digital counterpart of a physical object, process and/or system.
  • the digital twin concept is an outcome of the effort toward continuous improvement in product creation, product design, product engineering, and ongoing operation activities. It has the capacity to connect all lifecycle stages of a product and its interactions into a cohesive smart construct.
  • the digital twin object contains digital threads.
  • the digital thread is used to describe the traceability of the digital twin back to the requirements, parts, and control systems that make up the physical asset.
  • Digital thread(s) also align both physical sensors and corresponding data to the digital system.
  • simulations are used to construct digital models that imitate the operations or processes within a system for execution of distributed ledger and other transactions (e.g., blockchain, peer-to-peer interaction models), and micro-transactions replacing or complementing traditional models that involve centralized authorities or intermediaries, with artificial intelligence/machine learning (AI/ML) enhanced controller circuits having self-adaptive autonomous execution of transactions in real time and forward markets.
  • AI/ML artificial intelligence/machine learning
  • the simulation is then run by introducing variables into the digital environment or interface.
  • a transaction-enabling system that includes a production facility having a core task that is a production task.
  • the system includes a controller having a facility description circuit that interprets a number of historical facility parameter values and a corresponding number of historical facility outcome values, and a facility prediction circuit that operates an adaptive learning system, where the adaptive learning system is configured to train a facility production predictor in response to the plurality of facility parameter values and the corresponding plurality of facility outcome values.
  • the facility description circuit further interprets a number of present state facility parameter values, and the facility prediction circuit further operates the adaptive learning system to predict a present state facility outcome value in response to the number of present state facility parameter values.
  • a first issue with the related art is that the analysis for the process pipeline is done at the process and asset level separately.
  • a problem in a process pipeline is recognized separately with its critical assets and their subprocess.
  • a quality issue not limited to failure
  • most of the analytics work starts with collecting data as variables from system sensors and proceeds with the analysis thereon.
  • process pipeline degradation and the properties of its connected assets (e.g., remaining useful life) and of materials/subcomponents (e.g., quality)
  • a second issue with the related art is that the process pipeline and its connected assets (e.g., robotic machines) involves constantly changing performance over time and requires continuous calibration and alignment.
  • Product quality and process pipeline performance is constantly fluctuating due to degradation from continuous operation, subcomponent replacement, quality of material to process, inadequate maintenance, tuning, core components life cycles and cascaded aggregation for its output quality from prior stages.
  • the absence of numeric data and direct measures makes data analysis for actionable decisions more susceptible to biased interpretation. Therefore, there is a need to adopt well established procedures and techniques to enrich and ensure high-quality analysis that is both valid and reliable. For example, delays in a single process cycle time often stall pieces behind it and create downtime and consume manufacturing space both farther up and down the assembly line. It is rare to have process queueing/throughput analysis conducted with a connected asset, cascading into its hierarchy, connected subprocess performance model, and/or behavior model, health model via outcome attributions to connected assets/connected processes involved activities at time steps.
  • a third issue in the related art is that the inter-relationships of the connected assets in process pipeline and procedures to perform (e.g., motion profiles) are usually not well considered.
  • the process pipeline can be decomposed into stages. Each stage could be processed by a connected asset(s).
  • the connected asset could also be decomposed into a sequence of actions from motion profiles and the material conditions used for the tasks.
  • A connected asset could be further decomposed into its own process pipeline and connected assets.
  • Those interrelationships could improve the asset model/solution for the process in question.
  • a fourth issue with the related art is that the inter-relationships of the material/module to be processed by connected assets in the pipeline and material/module quality to connected asset(s) are usually not well considered.
  • the material/module used in the pipeline from upstream to downstream processes are not considered holistically when building a product for the process in question.
  • the sub-optimal work-in-progress inventory from sub-optimal (not rejected) material/modules is still included in the inventory on the production floor (e.g., a half-assembled car or a partially completed truck).
  • the information of suppliers, contract manufacturing for material/module mostly resides in different databases for post analysis.
  • example implementations described herein involve a digital twin solution with forward prediction and backward attribution.
  • the digital twin facilitates optimized product process performance, predictive maintenance, and the extension of remaining useful life (RUL).
  • the digital twin incorporates operations of connected assets, and applies physics and machine learning in real time.
  • the digital twin in the example implementations allows for real-time digital twin representation of system conditions, physics model blending for virtual sensory implementation where measurements are insufficient, artificial intelligence/machine learning (AI/ML) model blending for fast and large-volume information extraction, consideration of materials and components in modeling, consideration of processes and procedures in modeling, connected process pipelines and connected assets in prediction and attribution, and temporal and sequential analytics for dependency analytics.
  • AI/ML artificial intelligence/machine learning
  • aspects of the present disclosure further involve a solution representation in the digital twin, which introduces an approach to represent and store the information in digital twin, and an expert data store for solution explaining.
  • aspects of the present disclosure involve a method for a digital twin of a physical process, the digital twin configured with forward prediction of the physical process, the method involving identifying key performance indicators (KPIs) from the data architecture of the digital twin; defining output KPIs indicative of quality or efficiency for the forward prediction from the vector input to the digital twin and the identified KPIs; generating a recurrent neural network (RNN) structure to match the physical process, each node of the RNN structure associated with a sub-process within the physical process; and computing each stage of the RNN structure from output of a prior stage and current input to the each stage to generate values for the output KPIs as the forward prediction.
  • KPIs key performance indicators
  • RNN recurrent neural network
  • aspects of the present disclosure involve a computer program storing instructions for a digital twin of a physical process, the digital twin configured with forward prediction of the physical process, the instructions involving identifying key performance indicators (KPIs) from the data architecture of the digital twin; defining output KPIs indicative of quality or efficiency for the forward prediction from the vector input to the digital twin and the identified KPIs; generating a recurrent neural network (RNN) structure to match the physical process, each node of the RNN structure associated with a sub-process within the physical process; and computing each stage of the RNN structure from output of a prior stage and current input to the each stage to generate values for the output KPIs as the forward prediction.
  • the instructions can be stored in a non-transitory computer readable medium and executed by one or more processors.
  • aspects of the present disclosure involve a system for a digital twin of a physical process, the digital twin configured with forward prediction of the physical process, the system involving means for identifying key performance indicators (KPIs) from the data architecture of the digital twin; means for defining output KPIs indicative of quality or efficiency for the forward prediction from the vector input to the digital twin and the identified KPIs; means for generating a recurrent neural network (RNN) structure to match the physical process, each node of the RNN structure associated with a sub-process within the physical process; and means for computing each stage of the RNN structure from output of a prior stage and current input to the each stage to generate values for the output KPIs as the forward prediction.
  • KPIs key performance indicators
  • RNN recurrent neural network
  • aspects of the present disclosure involve an apparatus to facilitate a digital twin of a physical process, the digital twin configured with forward prediction of the physical process, the apparatus involving a processor, configured to identify key performance indicators (KPIs) from the data architecture of the digital twin; define output KPIs indicative of quality or efficiency for the forward prediction from the vector input to the digital twin and the identified KPIs; generate a recurrent neural network (RNN) structure to match the physical process, each node of the RNN structure associated with a sub-process within the physical process; and compute each stage of the RNN structure from output of a prior stage and current input to the each stage to generate values for the output KPIs as the forward prediction.
  • KPIs key performance indicators
  • RNN recurrent neural network
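The claimed stage-wise computation can be pictured as a simple recurrent pass: each node of the RNN structure corresponds to one sub-process and consumes the prior stage's output together with its own current input. The sketch below is a minimal illustration under that reading, not the claimed implementation; all function and variable names are hypothetical.

```python
import numpy as np

def stage_cell(prev_output, current_input, W_h, W_x, b):
    # One RNN node: combine the prior stage's output with the
    # current stage's own input vector (sensor readings, KPIs).
    return np.tanh(W_h @ prev_output + W_x @ current_input + b)

def forward_predict(stage_inputs, stage_params, h0):
    # Walk the pipeline stage by stage; the final state is the
    # value vector for the output KPIs of the forward prediction.
    h = h0
    for x, (W_h, W_x, b) in zip(stage_inputs, stage_params):
        h = stage_cell(h, x, W_h, W_x, b)
    return h
```

Each `(W_h, W_x, b)` triple would be trained per sub-process; in the heterogeneous case each stage could instead carry its own model object.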
  • FIG. 1 illustrates an example digital twin upon which the example implementations can be applied.
  • FIG. 2 illustrates an example of defining the digital twin by digitalizing the physical system and entities in scope, in accordance with an example implementation.
  • FIG. 3 illustrates an example of defining the digital twin by constructing a virtual replica and connecting data threads to the designed software system (including models) for structure, behavior, and failure modes in scope, in accordance with an example implementation.
  • FIG. 4 illustrates an example integration of the AI/ML model and physics model, in accordance with an example implementation.
  • FIG. 5 illustrates an example of forward prediction in a digital twin, in accordance with an example implementation.
  • FIG. 6 illustrates an example of backward attribution in a digital twin, in accordance with an example implementation.
  • FIG. 7 illustrates a solution architecture for solution operation for digital twin orchestration, knowledge compilation, initialization, data sources (physical and simulated), computation, knowledge extraction and business actionable, in accordance with an example implementation.
  • FIG. 8 illustrates the system architecture on which the solutions are built and executed, in accordance with an example implementation.
  • FIG. 9 illustrates the conceptual flow of the system architecture, in accordance with an example implementation.
  • FIG. 10 illustrates an example for calculating backwards attribution, in accordance with an example implementation.
  • FIG. 11 illustrates an example of the multi-input structures for forward prediction, in accordance with an example implementation.
  • FIG. 12 illustrates an example for calculating backwards attribution for multi-input structures, in accordance with an example implementation.
  • FIG. 13 illustrates an example of forward prediction for subcomponent multi-inputs structure with multi-level recursion, in accordance with an example implementation.
  • FIG. 14 illustrates an example expanded view of the forward prediction for the subcomponent multi-inputs structure with multi-level recursion, in accordance with an example implementation.
  • FIG. 15 illustrates an expanded view of the backward attribution for the subcomponent multi-inputs structure with multi-level recursion, in accordance with an example implementation.
  • FIG. 16 illustrates an example calculation of the backward attribution for the subcomponent multi-inputs structure with multi-level recursion, in accordance with an example implementation.
  • FIG. 17 illustrates a flow for the expanded view of the recursive approach for backwards attribution, in accordance with an example implementation.
  • FIG. 18 illustrates a system involving a plurality of assets networked to a management apparatus, in accordance with an example implementation.
  • FIG. 19 illustrates an example computing environment with an example computer device suitable for use in some example implementations.
  • FIG. 1 illustrates an example digital twin upon which the example implementations can be applied.
  • the digital twin in the example implementations described herein focuses on process pipeline behaviors, and can be in the form of a physical system digital replica 100 or a logical (functional) replica 110, which includes the material, installation processes, asset configurations, and data required to perform the analysis efficiently.
  • the data threads mapped to specific sensors 101 of the underlying connected assets can be fed into switches 103 for processing into a management system 104 based on the corresponding system thread mapping.
  • the management system 104 can involve various interfaces 105 which can include, but are not limited to, system/business management interface 106, user interfaces 107, executing and operation interfaces 108, as well as having direct data threads for storage as operational records 109.
  • digital threads 111 that mimic the underlying system of connected assets are provided to a corresponding simulator 112, which can draw upon various connected system threads, such as, but not limited to, analysis and testing modules 113, what if execution and operation modules 114, and physics learning/ML models 115.
  • the process pipeline creates (logical) associations among the connected assets, connected processes and sub-processes. Hence, with the full system emulation in digital space, users can forward predict the system outcomes from existing pipeline stages and backward attribute outcomes to their prior stage process, connected asset, used material, installation procedures and connected sub-process.
  • the pipeline temporal sequential structure allows maintenance/production professionals to understand the relationships of assembly line yield/quality/throughputs and attributing them to processes, assets using “primary connected” logic.
  • the primary process outcomes can translate into connected asset KPIs such as quality, anomaly, failure, remaining useful life, and so on in accordance with the desired implementation, and further decompose to the next level of connected process and connected asset.
  • the proposed digital twins involve the digital representation of physical or nonphysical processes, systems, or objects.
  • the digital twin also integrates all data produced or associated with the process or system it mirrors. Thus, it enables the transfer of data within its digital ecosystem, mirroring the data transfer that occurs in the real world.
  • the data used in the proposed digital twins are collected from IoT devices, edge hardware, human machine interfaces (HMIs), sensors, and other embedded devices in accordance with the desired implementation.
  • the captured data represents high-level information that integrates the behavioral pattern of digitized assets in the digital twin.
  • a digital twin serves as a world of its own. Within this digital world, all types of simulations can be run. It can also be used as a planning and scheduling tool for training, facility management, and the implementation of new ideas. This highlights the fact that a digital twin is a virtual environment; thus it must involve either 2D or 3D assets or the data they produce or are expected to produce.
  • the digital twin disclosed herein is also extensible, using a physics model combined with AI/ML to extend the digital system's sensory coverage beyond the physical system installation or pure physics model emulation, in contrast to the related art.
  • the example implementations use a digital twin with calibration, having a Contextual Knowledge Center (CKC), Knowledge Graph (KG), and human Subject Matter Expert (SME) in the loop as a knowledge store to align the physical system and the digital system through continuous calibration. It is designed to either closely couple or loosely couple the physical system with the digital twin.
  • CKC Contextual Knowledge Center
  • KG Knowledge Graph
  • SME human Subject Matter Expert
  • the digital twin allows for explaining (by learning) pipeline temporal and sequential process relationships. Explaining the model and the results helps with the prescriptive actions for problems in the process and connected assets in their temporal/sequential relationships.
  • This explanation of problems can include, but is not limited to, KPIs such as cycle time variations, scheduled operation time, finished goods quality, supplier on-time delivery, percentage of management line scheduling visibility, and working time lost by employees.
  • the digital twin allows for the application of a machine learning technique to solve some of the problems in the assets. This includes, but is not limited to, failure detection/prediction/prevention, anomaly detection/prediction/prevention, remaining useful life prediction, and so on.
  • the temporal/sequential activities could include robotic motion profiles that are not easily tracked by the sensors. A motion profile and its corresponding performance and behavior could be enriched via the physics model.
  • the digital twin further allows for the application of a deep learning explainable scheme based on a recurrent neural network (RNN) in factor attribution.
  • RNN recurrent neural network
  • the steps to define the digital twin can involve the following. There can be a step to define the measurements indicative of the digital twin success criteria and business use cases. There can be a step to define the scope of the physical system, the processes, or the objects to be digitized. There can be a step to define physical system structures, behaviors, and failure modes. The step to define physical system structures can involve the alignment of the operation plan, content, capacity, schedule, and so on, with the contextual knowledge center for the physical system. There can be a step to define the physical system entities which are required (physical or logical). There can be a step to align entity relationships, pipeline stages, and technical specifications. There can be a step to define the data to be captured in the physical system.
  • the data that defines a system or process can be sourced from assets within a facility; these assets include equipment, floor layouts, workstations, and Internet of Things (IoT) devices.
  • IoT Internet of Things
  • RFIDs Radio Frequency Identifiers
  • human-machine interfaces and other technologies that drive data collection.
  • physical data capture could be done through sensors, actuators, controllers, and other smart edge devices installed (or to install) within the system.
  • LIDAR and/or 3D scanners can also be used to extract point clouds when digitizing small to medium-sized objects.
  • the key step is the successful capture of the data that a system or object produces, sufficient to define the system for creating a digital twin.
  • digital twin software and platforms can include, but are not limited to: software that handles the flow of data from the IoT devices, a facility, and/or other enterprise systems needed to understand and digitize the chosen process; software and software architecture that recreates physical objects or assets in its digital ecosystem to deliver information with a level of clarity/granularity suitable for business actionables; increased computing resources needed to create and manage a digital twin when digitizing and emulating complex systems with hundreds of variables that produce large data sets; and scalable computing power and resources as a key consideration for a digital twin platform or solution.
  • there can be a step to define and design the business functions that the digital twin is to perform and solve. For example, if it is to serve as a monitoring tool for facilities or for predictive maintenance, a limited digital twin software can be used, while for simulations and scheduling a more advanced technology will be required.
  • the step to select additional ML models and physics models can involve calibration of the ML models and physics models against the physical system using behavior KPIs and performance KPIs, calibration of the ML models with the physics models in the digital twin, alignment of the ML model system structure with the physics model system structure, and creation of the virtual sensory content in the physics model to facilitate the alignment.
  • the step to maintain continuous operation can include involving the entire product value chain and securing user buy-in, including data from multiple sources, creating/ensuring long access lifecycles (as asset lifecycles are longer than software lifecycles), and a further step of defining and evolving the measurements of the digital twin success criteria and the next best-fit use cases.
  • FIG. 2 illustrates an example of defining the digital twin by digitalizing the physical system and entities in scope, in accordance with an example implementation.
  • FIG. 3 illustrates an example of defining the digital twin by constructing a virtual replica and connecting data threads to the designed software system (including models) for structure, behavior, and failure modes in scope, in accordance with an example implementation.
  • a solution hierarchy for process
  • an asset hierarchy for assets
  • the digital assets can be mapped from the physical asset through a solution hierarchy/asset hierarchy input as is known in the art. From the hierarchy, the inferencing pipeline can be constructed, so that each stage can eventually map out to the corresponding physical process/asset.
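The hierarchy-to-pipeline mapping can be pictured with a small sketch: a depth-first, post-order walk of a solution/asset hierarchy yields an ordered list of inferencing stages in which every sub-process precedes the stage it feeds, and each stage maps back to its physical process/asset. The names and structure here are illustrative assumptions only.

```python
def build_pipeline(node, pipeline=None):
    # Post-order walk: sub-processes are emitted before their
    # parent stage, matching the order in which stages execute
    # and feed one another in the inferencing pipeline.
    if pipeline is None:
        pipeline = []
    for child in node.get("children", []):
        build_pipeline(child, pipeline)
    pipeline.append({"stage": node["name"], "assets": node.get("assets", [])})
    return pipeline

# Hypothetical hierarchy for a small assembly line.
line = {
    "name": "final-assembly",
    "assets": ["robot-3"],
    "children": [
        {"name": "welding", "assets": ["welder-A"]},
        {"name": "painting", "assets": ["sprayer-B"]},
    ],
}
```

Running `build_pipeline(line)` yields the stages in execution order, ending with the parent stage that aggregates its sub-processes.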
  • There are two types of base models in the digital twin, along with their integration. Firstly, there is the AI/ML model.
  • the AI/ML model focuses on the operation side once the physical system starts operating.
  • the AI/ML model can capture the system runtime information and use that to derive insights and help with remediation of issues and decision-making.
  • the AI/ML model replicates a decision process to enable automation and understanding in the digital twin.
  • AI/ML models are mathematical algorithms that are trained using data and subject matter expert (SME) input to reach the decision an expert would make when provided with the required information.
  • SME subject matter expert
  • the AI/ML training processes a large amount of data through the algorithm(s) using a fitness function to maximize likelihood or minimize cost and yield a trained model as result.
  • the model learns to detect the type of failure mode patterns and distinguish these from normal operation.
  • the AI/ML model also attempts to operate with fault tolerance when not all the data is trusted (e.g., sensor failure, connectivity failure), to reach the best-fit solution or replicate a specific decision process previously trained (e.g., using an alternative feature set that excludes the data in question from modeling).
  • the physics model mainly focuses on the design phase, and can output the expected behavior of the system.
  • the model will mainly target what could happen by design or under normal operation; otherwise, capturing the abnormal operations or conditions will be very costly and may not be accurate and reliable.
  • the physics model in the digital twin is the theory describing the known fundamental principles (e.g., electromagnetics, mechanics, thermodynamics, material science, and so on), as well as the motion model in terms of displacement, distance, velocity, acceleration, speed, rotation, and time.
  • the physics model is developed as a formulation of the physical system, sub-systems, motion, load, and controller, and is finalized in a time-series output via “virtual sensor” placement upon directed experimental confirmation.
  • the physics model can predict various properties of operating outcomes, system responses, and safety compliance with great accuracy.
  • the output from the physics-based model can complement and/or validate the data from the physical sensors and thus help improve the AI/ML model for operation.
  • virtual sensor data from the digital twin model can serve as a “surrogate” of the physical sensors.
  • the virtual sensor data can serve as the “expected” value while the values from physical sensors serve as the “observed” value; thus the variance or difference between them can be used as a signal to detect abnormal behaviors in the system.
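As a sketch of that expected-versus-observed comparison (the function name and threshold handling are illustrative assumptions):

```python
def residual_signal(expected, observed, threshold):
    # expected: virtual-sensor value from the digital twin model
    # observed: reading from the corresponding physical sensor
    # A residual above the threshold flags abnormal behavior.
    residual = abs(observed - expected)
    return residual, residual > threshold
```

In practice the threshold would be calibrated, for example from the residual distribution under known-normal operation.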
  • the AI/ML model can use the physics model to create a large amount of data to stand up an AI/ML model for training when data is not available, or when a system is in its infancy and has an insufficient amount of data for training. Further, the sparsity of the sensor data is an issue in identifying the telemetry of the failure and its impact range.
  • the physics model can devise “virtual sensors” per the distribution of theoretical outputs and provide coverage for physical sensor sparsity. The physics model is theoretically self-consistent and has demonstrated success in providing experimental predictions. The physics model may leave some behavior unexplained when complex system interactions are involved, and falls short of predicting system response when dormant variables are not in scope for the analysis.
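One way to picture the bootstrap role of the physics model is generating labeled training samples where field data is missing; the “physics” relation below is a made-up stand-in, not taken from the disclosure.

```python
import numpy as np

def physics_model(load, temp):
    # Hypothetical first-principles relation standing in for a
    # real physics model: predicted vibration amplitude.
    return 0.05 * load + 0.01 * temp ** 2

def synthesize_training_set(n, rng):
    # Sample operating conditions, label them with the physics
    # model's output plus sensor-like noise, and return a data
    # set an AI/ML model can be trained on before enough real
    # telemetry exists.
    load = rng.uniform(0.0, 100.0, n)
    temp = rng.uniform(20.0, 80.0, n)
    y = physics_model(load, temp) + rng.normal(0.0, 0.1, n)
    X = np.column_stack([load, temp])
    return X, y
```

The same `physics_model` evaluated on a dense grid of conditions would also serve as the “virtual sensor” coverage for locations where no physical sensor exists.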
  • FIG. 4 illustrates an example integration of the AI/ML model and physics model, in accordance with an example implementation.
  • an analytics model 430 in the digital twin contains both AI/ML models 431 and physics models 432.
  • the AI/ML models 431 and the physics models 432 will go through a calibration phase prior to integration.
  • the physics model output could be considered as content contribution from the “virtual sensors”.
  • the outcomes of analytics models will be stored in the content knowledge store 420 with corresponding simulation conditions to compare and optimize.
  • the physics-based modeling could be combined with physical sensory and external data as inputs to AI/ML model(s) applied to predict and/or attribute the outcomes. For instance, suppose there is a vehicle assembly line in a manufacturing plant; quality prediction can be applied to the production pipeline outcome by using each pipeline stage's quality as an input to compute the yield quality. Each stage and its sub-processes' anomaly detection, failure detection, and related risk scores can be used as inputs to predict the quality of a stage. This approach could be applied recursively to the entire plant, including additional inputs such as material quality, configuration setup, installation procedure version, attached asset conditions (risk scores and RUL), and so on.
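The cascaded yield computation described above can be sketched as follows; the scoring formulas are illustrative assumptions, not the disclosed models.

```python
def stage_quality(material_q, asset_risk, subprocess_qs):
    # Hypothetical per-stage score: material quality discounted
    # by the attached asset's risk score and by the quality of
    # each connected sub-process.
    q = material_q * (1.0 - asset_risk)
    for sq in subprocess_qs:
        q *= sq
    return q

def pipeline_yield(stages):
    # Yield quality of the line as the cascaded product of the
    # per-stage qualities, mirroring stage-to-stage aggregation.
    y = 1.0
    for s in stages:
        y *= stage_quality(**s)
    return y
```

A sub-process quality could itself be computed by `pipeline_yield` over the sub-process's own pipeline, which is the recursive application the paragraph above describes.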
  • in the related art, the solution is built per problem or failure mode, and the relationships among the assets and the problems are not considered and utilized. Further, the corresponding optimizations of related art implementations are locally constrained (e.g., no content for attribution of a low quality condition), which cannot be leveraged.
  • An important related task for the solution is to attribute the results and generate the prescriptive actions in order to remediate the problems in the pipeline via the digital twin. This includes, but is not limited to, performance optimization, root cause analysis, remediation recommendation, alert suppression, and so on. Conventionally, there is limited work on attributing solutions, and it is mostly done in isolation without full contextual knowledge of the operation. Like solution learning, the explaining/attribution effort could be applied recursively to the fully connected digital twin with contextual knowledge content.
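A minimal picture of backward attribution is the sensitivity of the final KPI to each stage input; the sketch below uses a finite-difference stand-in for the backprop-style attribution (a real implementation would propagate gradients through the RNN stages). Names are hypothetical.

```python
def attribute(kpi_fn, inputs, eps=1e-6):
    # Perturb each stage input and measure the change in the
    # final KPI; larger magnitude means a larger contribution of
    # that stage/factor to the outcome.
    base = kpi_fn(inputs)
    scores = []
    for i in range(len(inputs)):
        bumped = list(inputs)
        bumped[i] += eps
        scores.append((kpi_fn(bumped) - base) / eps)
    return scores
```

Ranking the returned scores by magnitude gives a candidate ordering of stages, assets, or materials for root cause analysis and remediation.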
  • aspects of the present disclosure involve facilitating forward prediction in a digital twin as illustrated in FIG. 5.
  • the example implementations at first create the temporal sequential logical structure of the process digital twin that reflects process, material, and installation with prediction, in conjunction with Sequence Prediction, Sequence Classification, Sequence Generation, and Sequence to Sequence Prediction including a recurrent neural network (RNN) approach.
  • t1-t7 indicate the points in time.
  • the example implementations identify sensors and key performance indicators (KPIs) (e.g., quality or qualities) that applied to each stage process, asset, material, and installation procedure.
  • Q represents “Quality” which is an aggregation of one or more of quality of material, quality of installation procedure, quality of the asset (e.g., robotic stations), and quality of the process line.
  • example implementations build a homogenous (or heterogeneous) model and/or solutions to each stage process, asset, material, and installation procedure.
  • the output of each model at a prior stage serves as input to the model at the next stage by following the digital twin process flow.
  • the prior stage's model outputs are used as derived features to predict the next stage, mimicking the sequential dependency of the process flow.
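The stage-to-stage chaining can be sketched as follows, assuming toy linear stage models (the real models would be the per-stage AI/ML solutions); each stage's output is fed forward as a derived feature of the next stage:

```python
# Sketch (assumed interfaces): each stage model maps (prior_output, stage_input)
# to this stage's output, mimicking the sequential dependency of the flow.

def make_stage_model(weight_prior, weight_input):
    def model(prior_output, stage_input):
        return weight_prior * prior_output + weight_input * stage_input
    return model

def run_pipeline(stage_models, stage_inputs, initial_output=1.0):
    """Feed each stage's output forward as a derived feature of the next."""
    h = initial_output
    outputs = []
    for model, q in zip(stage_models, stage_inputs):
        h = model(h, q)
        outputs.append(h)
    return outputs

models = [make_stage_model(0.5, 0.5), make_stage_model(0.5, 0.5)]
print(run_pipeline(models, [0.9, 0.8]))  # h1 = 0.95, h2 = 0.875
```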
  • the output KPIs of the stages in the forward prediction are identified/defined that are indicative of quality or efficiency for the forward prediction based on the vector input provided to the digital twin and the identified input KPIs. Further, defining the output KPIs is mostly done using use cases. Use cases are stored in the knowledge base or initialized by the operator as will be described, for example, in FIG. 7, or can be initialized by the user in accordance with the desired implementation.
  • the output KPIs are single value KPIs that are representative of quality and/or efficiency. If both quality and efficiency are needed, then two prediction pipelines (one using quality as KPI and one with efficiency as KPI) can be initialized, or new user cases can be initialized by the user in accordance with the desired implementation.
  • the vector input can involve the robot/asset quality of the underlying asset or robot, which can involve variables such as anomaly risk score, remaining useful life score, failure risk score, material quality score, configuration quality score, operator quality score, and so on in accordance with the desired implementation.
  • the recurrent structure involves a recurrent neural network (RNN) structure to match the physical process, and the structure is used to match the underlying production/assembly line structure such that each node of the RNN structure is associated with a sub-process/sub-station within the physical process/physical system.
  • most of the task(s) of the sub-processes can be performed by the sub-stations.
  • the sub-processes are outsourced to different external locations, but with connected IoT and blockchain, the primary manufacturing location can trace inbound parts along a supply chain with blockchain-created immutable documentation of quality checks and detailed production process data along with KPIs.
  • the sub-process/sub-station could be considered as physical or virtual entities that are connected to the database(s) to uniquely tag each product as well as to automatically inscribe every manufacturing transaction, procedure, modification, or quality score/check by blockchain.
  • the RNN can involve weights, which are parameters within the RNN that transform input data within the hidden layers of the network.
  • a neural network is a series of nodes, or neurons. Within each node is a set of inputs, weights, and a bias value.
  • As an input enters the node, it gets multiplied by a weight value, and the resulting output is either observed or passed to the next layer in the neural network. Often the weights of a neural network are contained within the hidden layers of the network. The weights can be computed from previous output to determine which of the stages incurred a change to the output KPI values over a threshold.
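A single node's computation can be illustrated as below; the sigmoid activation and the weight values are assumptions for illustration only:

```python
# Minimal sketch of one node: inputs times weights, plus bias, through an
# activation function (sigmoid here, chosen only for illustration).
import math

def node(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

out = node([1.0, 2.0], [0.5, -0.25], 0.0)  # z = 0.0 -> sigmoid(0) = 0.5
print(out)
```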
  • each stage of the RNN structure is computed from the output of the prior stage and the current input to that stage to generate values for the output KPIs as the forward prediction.
  • the RNN stage has two inputs: one is the output from the prior stage and the other is the new input(s) at the current stage.
  • the input/output is associated in time.
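The two-input recurrent stage can be sketched as follows, assuming a toy tanh cell with fixed weights (the actual cell and parameters would come from the trained model):

```python
# Hedged sketch of one recurrent stage: each stage A_t takes the prior stage
# output h_{t-1} and the current input Q_t and emits h_t. The tanh cell and
# the weight values below are illustrative assumptions only.
import math

def rnn_stage(h_prev, q_t, w_h=0.6, w_q=0.4, b=0.0):
    return math.tanh(w_h * h_prev + w_q * q_t + b)

h = 0.0
for q in [0.9, 0.95, 0.8]:   # Q at t1..t3 (e.g., per-stage quality scores)
    h = rnn_stage(h, q)
print(round(h, 4))           # final stage output h3
```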
  • FIG. 6 illustrates an example of backward attribution in a digital twin.
  • the quality (Output) is calculated for its attribution to prior stages of processes, material, installation procedures and asset which is used at corresponding timesteps. Attributions can involve attribution by individual stage process (prior stages), attribution by individual stage material, attribution by individual stage installation procedure, attribution by individual stage asset (robotic station) anomaly, failure, and RUL risk score, and attribution by combinations of process(es), material, installation procedures, and asset(s).
  • the example implementations initialize digital twin conditional simulations (via physics model simulated outputs, collected historical data, or a mix of both) to emulate/perturb inputs to obtain the target outputs explaining predictions (or an influence function) for all instances of its class (and/or failure modes).
  • the Influence Function(s) provides an efficient way to estimate the impact of upweighting samples on the model loss function for each simulation cycle.
  • the deep learning related algorithms in sequence prediction leverage a transparent AI approach (e.g., identify the largest gradient descent as features).
  • Example implementations can further translate the feature back to root cause KPIs.
  • FIG. 7 illustrates a solution architecture for solution operation for digital twin orchestration, knowledge compilation, initialization, data sources (physical and simulated), computation, knowledge extraction and business actionable, in accordance with an example implementation.
  • Digital twin orchestration engine 700 is the software system/artifact which automates, coordinates, and manages computer resources and services. Orchestration provides for deployment and execution of interdependent workflows on external resources.
  • the digital twin orchestration engine 700 can manage complex cross-domain workflows involving both extracting knowledge stores and initializing the digital twin actor construct. Once data 750 from sensors 751, external data 752, and simulated data 770 is received, digital twin orchestration engine 700 will execute digital twin actors in the designed computing environment having the objective function to optimize. The generated data/outcomes from the digital twin computation will be used in turn-key business decisions and to extract new/updated contextual knowledge content to integrate with the knowledge stores.
  • Contextual knowledge store involves interrelationships, object, structure, services, activities and/or content, and its contextual situation.
  • the Knowledge Store persists outputs from AI enrichment pipelines into storage in a knowledge graph format for independent analysis and/or downstream processing.
  • Knowledge store preserves the enriched content for inspection or for other knowledge mining scenarios.
  • the contextual knowledge store is a combination of knowledge content, searchable/reasonable format, and physical storage.
  • AI/ML pipeline's output from analytics creates content that has been further extracted, structured, aligned and analyzed using ML processes (e.g., page ranking).
  • the contextual knowledge store has the following characteristics: emphasizing a problem and its solving, recognizing that activities need to occur in multiple contexts, facilitating reasoning, monitoring, and becoming self-regulated entities, anchoring knowledge in the diverse personas' context of activities, encouraging knowledge translation, knowledge transferring, knowledge indexing, knowledge blending and knowledge cohort building, employing authentic and continuous assessment, and managing the knowledge lifecycle.
  • Physical Process, Configuration and Knowledge Store 710 is a contextual knowledge store that includes artifacts such as Operation Use Cases 711, Asset Model 712, Process Flow and Map 713, Process Pipeline and Connected Asset and Sensory 714, Operation Knowledge Graph 715, and Material/Sub-Module, Configuration, Installation Knowledge Base 716.
  • Twin Analytics Modeling & Behavior Knowledge Store 720 is a contextual knowledge store that includes artifacts such as Asset Digital Template 721, Physics mathematical model 722, ML algorithm 723, asset behavior 724, and other stores such as pipeline and failure mode and its transition states knowledge graph and operation scenarios, depending on the desired implementation.
  • a digital twin model is an instance of a custom-defined model or one of the existing models in the contextual knowledge store.
  • a digital twin model can be connected to other digital twins via directed acyclic graph (DAC) to form a twin graph and this twin graph is the representation of entire simulation of an environment (e.g., a factory).
  • the initialization procedure can include the following.
  • In a first step for creating a digital twin by a new custom-defined model, this new model needs to be uploaded to the service and can contain a set of properties, asset types, telemetry, and a directed acyclic graph that a particular twin can have, as well as the required information to maintain the knowledge store's knowledge graph.
  • the existing service can create an instance of the twin using the stored digital twin construct.
  • a new DAC is created to connect the twins.
  • multiple instances of the digital twin could be built, instantiated, and connected via DAC.
  • the first through fourth steps can be selected and combined in accordance with the desired implementation.
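The twin-graph idea can be sketched as below; the class and method names are assumptions, and a topological order over the DAC yields a valid execution order for the connected twin instances:

```python
# Sketch (names assumed): digital twin instances are nodes and directed edges
# form the acyclic graph (DAC) representing the simulated environment, e.g.,
# a factory line. Topological ordering gives a valid execution order.
from collections import defaultdict

class TwinGraph:
    def __init__(self):
        self.edges = defaultdict(list)
        self.nodes = set()

    def connect(self, upstream, downstream):
        self.nodes.update([upstream, downstream])
        self.edges[upstream].append(downstream)

    def topological_order(self):
        """Execution order for the twins, valid because the graph is acyclic."""
        indegree = {n: 0 for n in self.nodes}
        for u in list(self.edges):
            for v in self.edges[u]:
                indegree[v] += 1
        ready = sorted(n for n in self.nodes if indegree[n] == 0)
        order = []
        while ready:
            n = ready.pop(0)
            order.append(n)
            for v in self.edges[n]:
                indegree[v] -= 1
                if indegree[v] == 0:
                    ready.append(v)
        return order

g = TwinGraph()
g.connect("stamping_twin", "welding_twin")
g.connect("welding_twin", "paint_twin")
print(g.topological_order())  # ['stamping_twin', 'welding_twin', 'paint_twin']
```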
  • This Digital Twin initialization solution 730 can include artifacts as follows: Hypothesis, Objective and Measurement 731, Simulation Scenario (predictive model) 732, Action Optimization Scenario (what if computation) 733, Asset Life cycles Scenario (prognosis model) 734, Performance/Behavior Attributions 735, Digital Twin construct 741, Temporal and Sequential construct 742, Virtual Sensory 743, Physical Sensory 744, External Data Sources 745, Physics Model 746, and ML Model 747.
  • Sensory and External Data (feeds) 750 are data feeds from each physical system to its corresponding digital twin system. Two types of feeds can be provided: machine sensory 751 and external data 752.
  • Machine sensory 751 with its sensor data represents a system operating situation.
  • the collected data is either software instructions or directly from the sensor reading measurements through the attached hardware.
  • the external data 752 represents the data stores or data sources which contain services, activities or technician notes related to the physical system. These data are used to calibrate the digital twin with the physical system and further optimize the physical process.
  • Simulated Data (feeds) 770 are data feeds related to the physics model of the asset/process, treated/provided as virtual sensor output having data distributions 771 (e.g., temperature distribution, vibration distribution, and so on) in a predefined mesh telemetry on the asset. They augment simulations for low-density data (e.g., sparse sensor installations, missing data supplements).
  • the physical system operates under physics (and chemical) principles.
  • the first physics-based model is often used and presents descriptions of majority and minority physical system behaviors and characteristics such as the impurities fatigue zones, voltage, current, vibration, temperature variances, and their dependency (e.g., abrasion and temperature dependence in both majority and minority behaviors).
  • Ray Digital Twin Computation Environment 760 is a digital twin computation environment that is implemented in Ray, which provides a simple, universal application programming interface (API) for building a distributed digital twin. Once the digital twin construct and DAC are in place, Ray will wrap the ML algorithm in the Ray actor(s) to be executed in a parallel and distributed fashion. Data feeds (e.g., Sensor, external data and simulated data) will be streamed to enable computation.
  • API application programming interface
  • the Ray digital twin could use several native ML libraries (e.g., AutoML, Reinforcement Learning, Distributed Training Wrappers, Scalable and Programmable Serving, and Distributed Memory) based on the columnar memory format for flat and hierarchical data, organized for efficient analytic operations.
  • Operation Content Knowledge Graph Extraction 790 is the creation of knowledge from structured (relational databases table format, XML) and unstructured (text, documents, images) outcomes from analytics activities in digital twin simulation(s).
  • the simulation intakes contextual content and objectives (e.g., a hypothesis development canvas) and then provides results, which are translated into knowledge in a machine-readable and machine-interpretable format (e.g., resource description framework (RDF)) to represent knowledge and facilitate inferencing.
  • the processes include information extraction via natural language processing (NLP) and involve an additional Extract Transform Load (ETL) process.
  • the blending criteria are the extraction result of the creation of structured information, and/or the transformation into a relational schema aligned with existing formal knowledge via ontologies, and/or the generation of a schema based on the source data with reasoning.
  • the modules include, but are not limited to Content and Attributes Clustering 791, Taxonomy Ontology of Asset Model 792, Bayesian Network Probability calculation and graph assignment 793, Knowledge Graph 794, and Bayesian Reasoning for knowledge graph blending 795.
  • Knowledge graph 794 can involve knowledge types that are underpinned and tagged, such as contextual knowledge, attribution knowledge, unstructured knowledge, structured knowledge, process pipeline and connected asset, asset hierarchy, solution hierarchy, flow processes, and so on in accordance with the desired implementation.
  • Business Actionable 780 is a module that acts as a human and machine interface used by the analytics translator. This module will contain a user journey designed for the analytics translator and the clients, and can be implemented through any technique as known in the art in accordance with the desired implementation.
  • FIG. 8 illustrates the system architecture on which the solutions are built and executed, in accordance with an example implementation.
  • the system involves the components as illustrated therein.
  • a user orchestrator sends a request to the digital twin orchestration engine 700.
  • the digital twin orchestration engine 700 searches a database and identifies if the installation contains required services.
  • the Orchestration Engine can involve APIs, Business Process Modeling Languages (BPMLs), and a Message Adaptor which interface with the event bus to work with other microservices.
  • Event Bus 810 provides queueing and communication functions between orchestration and microservices.
  • Knowledge Store Microservice 820 is a service providing knowledge store of the physical system that will be represented by the digital twin.
  • Knowledge Store Microservice 830 is a service providing knowledge store of the digital system that will represent the physical system using AI/ML models.
  • Physical Data Microservice 840 is a service providing the data store from physical sensors. Depending on the scope of the physical system, data could be from on-prem or cloud sources. Simulated Data Microservice 850 is a service providing the data store from calculated theoretical behaviors/outputs of the system via its telemetry. This could be considered as virtual sensor data. External Data Microservice 860 is a service providing data from service, maintenance, activities, operation, and notes to connect system behaviors to its failure modes for labeling.
  • Solution Initialization Microservice 880 is a service that instantiates the software Ray actor construct with analytics content and data as a digital twin actor. Digital Twin Simulation Microservice 870 is a service that executes the digital twin Ray actor in this run time and provides outputs.
  • FIG. 9 illustrates the conceptual flow of the system architecture, in accordance with an example implementation.
  • the forward prediction and backward attribution are discussed in detail. Forward prediction and backward attribution schemes are described herein using the digital twin in process pipeline temporal and sequential learning.
  • the aspects as described herein can be implemented as APIs to facilitate the desired implementation.
  • the APIs can be implemented in the form of a container, which is a standard unit of software that packages up code and all its dependencies, so that the application runs reliably from one computing environment to another and could scale within the resources assigned in that environment which is necessary for the application to function correctly.
  • digital twin orchestration engine 905 can be implemented in a container to interact with the Physical Process, Configuration & Knowledge Store 900, physical assets 901, Twin Analytics Modeling & Behavior Knowledge Store 902, virtual assets 903, external data sources 904, and digital twin environment 911.
  • Digital Twin Initialization 906, Theoretical Behavior Models (Virtual) 907, AI/ML Behavior Models (Physical) 908, Product Flow Directed Acyclic Graph 909, and Digital Twin Software Actor (Ray Actor) Instantiate 910 can be instantiated as an API via a single container, depending on the desired implementation.
  • the RNN-like structure as illustrated in FIG. 5 is used to conduct forward prediction and backward attribution analysis.
  • the RNN-like pipeline structure represents a physical assembly line in the digital twin by connecting Ray actors (the rectangle boxes) sequentially and operating in temporal order.
  • this pipeline takes input (Q) at each stage from the stage's physical sensors/simulated data or pre-computed scores (e.g., quality scores), computes the internal state at stage (A), generates output (h), and then feeds (h) forward to the next stage as an input.
  • this input to the next stage from the prior stage represents the quality inherited from the prior pipeline stage, which is combined with the next stage's own inputs.
  • t represents the process time stamp (which is not necessarily uniform, e.g., tx is not necessarily equal to tx+1).
  • Time stamp t and stage x are labelled the same to simplify, e.g., Qx,t becomes Qt, and the prior stage Qx-1,t-1 will be labelled as Qt-1.
  • the stage x output hx,t will be labelled as ht.
  • the computation steps take the input (Qt) at stage t and the prior stage output (ht-1), and use the algorithm implemented in stage (At) to generate the next stage output (ht).
  • the hidden state is updated such that at ⊙ ht-1 is the partial evidence obtained by the RNN from the previous t-1 steps and brought to time step t, i.e., ht = at ⊙ ht-1 + (ht - at ⊙ ht-1). With this, knowing the hidden state vector ht and the updating parameter vector at will be sufficient to derive the decomposition.
  • in the above formulation (Eq. 10), the term within the brackets is the elementwise multiplication of two terms (Hadamard product).
  • the left term (ht - at ⊙ ht-1) denotes the updating evidence from time t-1 to t, i.e., the contribution to class q by the input at stage xt.
  • the evidence that an RNN-like construct has gathered at time step t gradually diminishes as time increases from t+1 to the final time step T.
  • Eq. 10 is used to calculate backward attribution in the following learning flow as follows.
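A scalar sketch of this decomposition follows (the actual formulation is vector-valued with Hadamard products; the values below are illustrative). Assuming the hidden-state update ht = at·ht-1 + ut, the updating evidence ut contributed at stage t reaches the final step scaled by the later update factors, and the per-stage contributions telescope exactly back to the final output:

```python
# Backward attribution sketch (scalar case): the evidence u_t = h_t - a_t*h_{t-1}
# contributed at stage t is scaled by the product of later update factors
# a_{t+1}..a_T, modeling the diminishing of earlier evidence toward step T.

def backward_attribution(h, a):
    """h[t], a[t] for t = 1..T; h[0] is the initial state (assumed 0)."""
    T = len(h) - 1
    contributions = []
    for t in range(1, T + 1):
        evidence = h[t] - a[t] * h[t - 1]     # updating evidence at stage t
        scale = 1.0
        for k in range(t + 1, T + 1):         # diminishes from t+1 to T
            scale *= a[k]
        contributions.append(scale * evidence)
    return contributions

h = [0.0, 0.40, 0.55, 0.90]   # hidden states h1..h3 (toy values)
a = [None, 0.5, 0.5, 0.5]     # update factors a1..a3 (toy values)
c = backward_attribution(h, a)
print(c, sum(c))              # per-stage contributions; their sum equals h[-1]
```

The exact telescoping (the contributions summing back to the final output) is what makes the attribution consistent: every unit of the final KPI value is assigned to exactly one stage.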
  • FIG. 10 illustrates an example for calculating backwards attribution, in accordance with an example implementation.
  • the input KPIs are identified at 1000 and the output KPIs are identified at 1001.
  • the flow establishes the recurrent structure to match the underlying (e.g., assembly) line structure.
  • equations 2 to 10 are used to get the inferred output to calculate the attribution (explanation).
  • the inference is used to perform prediction before the event has occurred.
  • the attribution is calculated in reverse to identify the root cause.
  • the operator could remediate the issue to prevent the unfavorable predicted outcome from actually occurring.
  • the flow computes the time step t-n to t outputs from inputs Q at t-n to Q at t using the equations described herein.
  • the flow computes and reconciles outputs h at t-n to h at t against the system outputs according to the equations herein.
  • the flow computes h1 to h6 from Q1 to Q6 and compares outputs h1 to h6 according to the equations herein.
  • the flow computes output h7 and attributes the h7 output to Q1 to Q6 according to the equations described herein.
  • the backward attribution is generated.
  • the example implementations described herein can use an RNN structure to match the physical process with each node of the RNN structure associated with a sub-process within the physical process.
  • backward attribution can be executed for each stage to identify the root cause through temporal steps. Because the backward attribution algorithm calculates per time step, the algorithm is executed across the entire length of the process pipeline stages, which are treated as equal time steps, to determine which of the stages has the most impact on the outcome (e.g., beyond a threshold, or the highest impacting stage).
  • the backward attribution can be combined with the forward prediction as described herein.
  • FIG. 11 illustrates an example of the multi-input structures for forward prediction, in accordance with an example implementation.
  • the multi-input structure for backwards attribution is similar to that of FIG. 6.
  • this structure can conduct multiple analyses at the same time.
  • the output could involve different targeted KPIs.
  • each input of the connected asset at a pipeline stage can be associated with its corresponding model outputs as a vector containing risk scores from its anomaly detection, failure detection, remaining useful life, failure prediction, and so on; or could be sensor measurements (e.g., vibration, temperature, pressure as a vector).
  • multi-input structures can be used for complicated systems such as robotic arms, in which a quality value alone may not be sufficient to explain what is occurring in the robotic machine.
  • the input can involve characteristic features of a robotic status, and those features can be used in the form of a vector.
  • the failure risk score of a robotic arm, which incorporates the operation history of the robotic arm, remaining useful life, and so on, can be used as an example of the input vector.
  • the multi-input structures allow for the creation of multiple parallel analyses, processing each of the different types of features in the input vector for a particular process.
  • the learning algorithm description is as follows. At first, the learning algorithm creates the logical structure of the process pipeline and hierarchy. Secondly, the algorithm identifies the sensors that apply to each stage. Then, the learning algorithm builds model(s)/solution(s) for each stage and connected asset. Then the output of each model at the prior stage serves as input to the model at the next stage by following the process pipeline and hierarchy. The model output can be deemed as derived features to predict the next stage. Then the sensor/KPI data can be input to each asset/node in the process pipeline and connected asset. The output is then calculated to attribute to prior stages. In the example of FIG. 11, the input/output pair V/h could be (anomaly risk score, remaining useful life score, failure risk score) as multi-inputs paired with a quality output.
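The multi-input flow above can be sketched as follows; the feature weights and score semantics are assumptions for illustration, with the input vector V collapsed into a derived feature that combines with the inherited output h from the prior stage:

```python
# Sketch (assumed weights/feature names): at each pipeline stage the input is
# a vector V = (anomaly risk score, RUL score, failure risk score) rather than
# a single quality value; a translation step collapses it before combining
# with the prior-stage output h_{t-1} into the stage output h_t.

def translate(v, feature_weights=(0.4, 0.3, 0.3)):
    """Collapse the multi-input vector into a single derived feature."""
    return sum(w * x for w, x in zip(feature_weights, v))

def multi_input_stage(h_prev, v, w_h=0.5, w_v=0.5):
    return w_h * h_prev + w_v * translate(v)

h = 1.0   # pristine inherited quality at the start of the line
for v in [(0.9, 0.8, 1.0), (0.7, 0.9, 0.6)]:   # input vectors V1, V2
    h = multi_input_stage(h, v)
print(round(h, 4))  # 0.84
```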
  • FIG. 12 illustrates an example for calculating backwards attribution for multi-input structures, in accordance with an example implementation.
  • the flow identifies the input KPI vector and the output KPI at 1201.
  • the flow establishes the recurrent structure to match the underlying (e.g., assembly/production) line structure as illustrated in FIG. 11.
  • the flow computes time step t-n to t outputs from inputs V at t-n to V at t.
  • the flow computes and reconciles outputs h at t-n to h at t against the system outputs.
  • the flow computes h1 to h6 from V1 to V6 and compares outputs h1 to h6.
  • the flow computes output h7 at 1206.
  • the flow attributes the h7 output to V1 to V6.
  • the flow generates attribution.
  • the computations can be conducted in accordance with the equations as described herein.
  • FIG. 13 illustrates an example of forward prediction for the subcomponent multi-inputs structure with multi-level recursion, in accordance with an example implementation.
  • This structure can conduct multiple levels of analysis. The output could be different targeted KPIs.
  • Each input of the connected asset at a pipeline stage can be associated with its corresponding subcomponent model outputs as a vector containing risk scores from its anomaly detection, failure detection, remaining useful life, failure prediction, and so on; or could be sensor measurements (e.g., vibration, temperature, pressure as a vector). This decomposing effort can be conducted recursively.
  • FIG. 14 illustrates an example expanded view of the forward prediction for the subcomponent multi-inputs structure with multi-level recursion, in accordance with an example implementation.
  • FIG. 15 illustrates an expanded view of the backward attribution for the subcomponent multi-inputs structure with multi-level recursion, in accordance with an example implementation.
  • the structures are similar to that of FIGS. 5 and 6, only reconfigured to facilitate multiple-input structure and multi-level recursion.
  • FIG. 16 illustrates an example calculation of the backward attribution for the subcomponent multi-inputs structure with multi-level recursion, in accordance with an example implementation.
  • the flow is similar to that of FIG. 10.
  • the input KPI vector is identified at 1600, and the output KPI(s) are identified at 1601.
  • the flow establishes the recurrent structure to match the underlying (e.g., assembly/production) line structure.
  • the flow computes time step t-n to t outputs from inputs Qt-n of f(A) at t-n to Qt of f(A) at t.
  • the training is initiated to compute and reconcile outputs ht-n at t-n to ht at t to the system outputs.
  • the flow computes h1 at t1 to h6 at t6 from Q1 of f(A1) at t1 to Q6 of f(A6) at t6 and compares outputs h1 to h6.
  • the flow computes output h7.
  • the flow attributes the h7 output to Q of f(A) at t1 to Q of f(A) at t6.
  • the flow generates the attribution.
  • FIG. 17 illustrates a flow for the expanded view of the recursive approach for backwards attribution, in accordance with an example implementation. The flow is the same as that of FIG. 16, but executed in recursive form as illustrated in FIG. 17.
  • the model output can be deemed as derived features to predict the next stage.
  • Examples of algorithms that could be used in this forward prediction effort are as follows. Algorithms such as RNN, LSTM, transformer, heuristic, exponential smoothing models, ARIMA/SARIMA, and linear regression can be used.
  • the input for the stages can be quality, and the output (t+1) can also be quality, with no translation layer needed.
  • the input for the stages can be a vector, and the output (t+1) can be quality, in which case a translation layer is needed, as follows:
  • RNN, LSTM, and transformer have an autoencoder for the translation layer
  • heuristic approaches use a heuristic for the translation layer
  • Exponential smoothing models use principal component analysis (PCA) as the translation layer
  • ARIMA/SARIMA uses independent component analysis (ICA) as the translation layer
  • linear regression uses a multi-dimension min/max scaler as the translation layer.
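As one hedged sketch of a translation-layer pairing from the list above (the data below is illustrative, and the smoothing constant is an assumption), PCA collapses the per-stage input vectors to a single component, which a simple exponential smoothing model then forecasts as the next-stage KPI:

```python
# Illustrative sketch: a PCA translation layer reduces the vector input to one
# scalar component per time step; simple exponential smoothing then produces
# a one-step-ahead forecast of that component.
import numpy as np

def pca_translate(X):
    """Project rows of X onto the first principal component."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[0]

def exp_smooth_forecast(series, alpha=0.5):
    """One-step-ahead forecast by simple exponential smoothing."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

X = np.array([[0.9, 0.8, 1.0],
              [0.8, 0.7, 0.9],
              [0.7, 0.9, 0.8],
              [0.6, 0.6, 0.7]])    # stage input vectors over time (toy data)
series = pca_translate(X)
print(round(float(exp_smooth_forecast(series)), 4))
```

Note that the sign of a principal component is arbitrary, so in practice the translated series would be anchored to the KPI's direction (e.g., higher means better quality) before smoothing.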
  • the example implementations described herein can confirm various advantages over the related art.
  • the example implementations utilize end to end learning schemes for process pipeline and hierarchy by utilizing the physical and/or logical relationships and sensors and/or simulated data to build a digital twin.
  • This digital twin can help to achieve better prediction performance, solutions of the given task(s) or KPIs for each process pipeline and connected asset.
  • example implementations provide the comprehensive outcome prediction and event attribution of the whole system in digital twin and can potentially prioritize the tasks to optimize accordingly.
  • the example implementations can also help fine tune the solutions for each connected asset.
  • the example implementations introduce an explaining approach for the solution and results based on the process pipeline and its connected assets at different levels.
  • Example implementations introduce three knowledge graph systems in solution architecture to represent and store process pipeline and connected asset/hierarchy and information needed to execute the solutions.
  • Example implementations help resolve the relationships among process pipeline stages, connected assets, and asset hierarchies accordingly. Further, the example implementations can help calibrate, refine, and optimize the forward prediction and backward attribution parameters via the Ray approach in the Digital Twin environment.
  • the example implementations can thereby generate knowledge content for optimizing operation and prognosis to continuously optimize based on the production pipeline and its subsystems recursively. It is not a failure-driven solution only; it continuously optimizes (which also includes failure cases).
  • FIG. 18 illustrates a system involving a plurality of assets networked to a management apparatus, in accordance with an example implementation.
  • One or more assets 1801 are communicatively coupled to a network 1800 (e.g., local area network (LAN), wide area network (WAN)) through the corresponding on-board computer or Internet of Things (IoT) device of the assets 1801, which is connected to a management apparatus 1802.
  • the management apparatus 1802 manages a database 1803, which contains historical data collected from the assets 1801 and also facilitates remote control to each of the assets 1801.
  • the data from the assets can be stored to a central repository or central database such as proprietary databases that intake data, or systems such as enterprise resource planning systems, and the management apparatus 1802 can access or retrieve the data from the central repository or central database.
  • Asset 1801 can involve any physical system for use in a physical process such as an assembly line or production line, such as but not limited to air compressors, lathes, robotic arms, and so on, in accordance with the desired implementation.
  • the data provided from the sensors of such assets 1801 can serve as the data flows as described herein upon which analytics can be conducted.
  • the system of FIG. 18 can involve the underlying physical system upon which the physical process can be implemented.
  • the physical system and the physical process can be represented by a solution/asset hierarchy as is known in the art and as described in FIG. 7.
  • the physical process can involve two parts; the assets 1801 along with their hierarchy, and the physical process to assemble the truck.
  • the forward prediction of the physical process can be to predict the outcome of the production line (e.g. the quality of the assembled truck or the efficiency of the production line itself).
  • FIG. 19 illustrates an example computing environment with an example computer device suitable for use in some example implementations, such as a management apparatus 1802 as illustrated in FIG. 18, or as an on-board computer of an asset 1801.
  • The computing environment can be used to facilitate implementation of the architectures illustrated in FIGS. 1 and 4 to 9.
  • Any of the example implementations described herein can be implemented based on the architectures, APIs, microservice systems, and so on as illustrated in FIGS. 1 and 4 to 9.
  • Computer device 1905 in computing environment 1900 can include one or more processing units, cores, or processors 1910, memory 1915 (e.g., RAM, ROM, and/or the like), internal storage 1920 (e.g., magnetic, optical, solid state storage, and/or organic), and/or I/O interface 1925, any of which can be coupled on a communication mechanism or bus 1930 for communicating information or embedded in the computer device 1905.
  • I/O interface 1925 is also configured to receive images from cameras or provide images to projectors or displays, depending on the desired implementation.
  • Computer device 1905 can be communicatively coupled to input/user interface 1935 and output device/interface 1940.
  • Either one or both of input/user interface 1935 and output device/interface 1940 can be a wired or wireless interface and can be detachable.
  • Input/user interface 1935 may include any device, component, sensor, or interface, physical or virtual, that can be used to provide input (e.g., buttons, touch-screen interface, keyboard, a pointing/cursor control, microphone, camera, braille, motion sensor, optical reader, and/or the like).
  • Output device/interface 1940 may include a display, television, monitor, printer, speaker, braille, or the like.
  • Input/user interface 1935 and output device/interface 1940 can be embedded with or physically coupled to the computer device 1905.
  • Other computer devices may function as or provide the functions of input/user interface 1935 and output device/interface 1940 for a computer device 1905.
  • Examples of computer device 1905 may include, but are not limited to, highly mobile devices (e.g., smartphones, devices in vehicles and other machines, devices carried by humans and animals, and the like), mobile devices (e.g., tablets, notebooks, laptops, personal computers, portable televisions, radios, and the like), and devices not designed for mobility (e.g., desktop computers, other computers, information kiosks, televisions with one or more processors embedded therein and/or coupled thereto, radios, and the like).
  • Computer device 1905 can be communicatively coupled (e.g., via I/O interface 1925) to external storage 1945 and network 1950 for communicating with any number of networked components, devices, and systems, including one or more computer devices of the same or different configuration.
  • Computer device 1905 or any connected computer device can be functioning as, providing services of, or referred to as a server, client, thin server, general machine, special-purpose machine, or another label.
  • I/O interface 1925 can include, but is not limited to, wired and/or wireless interfaces using any communication or I/O protocols or standards (e.g., Ethernet, 802.11x, Universal Serial Bus, WiMax, modem, a cellular network protocol, and the like) for communicating information to and/or from at least all the connected components, devices, and network in computing environment 1900.
  • Network 1950 can be any network or combination of networks (e.g., the Internet, local area network, wide area network, a telephonic network, a cellular network, satellite network, and the like).
  • Computer device 1905 can use and/or communicate using computer-usable or computer-readable media, including transitory media and non-transitory media.
  • Transitory media include transmission media (e.g., metal cables, fiber optics), signals, carrier waves, and the like.
  • Non-transitory media include magnetic media (e.g., disks and tapes), optical media (e.g., CD ROM, digital video disks, Blu-ray disks), solid state media (e.g., RAM, ROM, flash memory, solid-state storage), and other non-volatile storage or memory.
  • Computer device 1905 can be used to implement techniques, methods, applications, processes, or computer-executable instructions in some example computing environments.
  • Computer-executable instructions can be retrieved from transitory media, and stored on and retrieved from non-transitory media.
  • The executable instructions can originate from one or more of any programming, scripting, and machine languages (e.g., C, C++, C#, Java, Visual Basic, Python, Perl, JavaScript, and others).
  • Processor(s) 1910 can execute under any operating system (OS) (not shown), in a native or virtual environment.
  • One or more applications can be deployed that include logic unit 1960, application programming interface (API) unit 1965, input unit 1970, output unit 1975, and inter-unit communication mechanism 1995 for the different units to communicate with each other, with the OS, and with other applications (not shown).
  • Processor(s) 1910 can be in the form of hardware processors such as central processing units (CPUs) or in a combination of hardware and software units.
  • When information or an execution instruction is received by API unit 1965, it may be communicated to one or more other units (e.g., logic unit 1960, input unit 1970, output unit 1975).
  • Logic unit 1960 may be configured to control the information flow among the units and direct the services provided by API unit 1965, input unit 1970, and output unit 1975 in some example implementations described above.
  • The flow of one or more processes or implementations may be controlled by logic unit 1960 alone or in conjunction with API unit 1965.
  • The input unit 1970 may be configured to obtain input for the calculations described in the example implementations, and the output unit 1975 may be configured to provide output based on the calculations described in the example implementations.
  • Processor(s) 1910 can be configured to execute instructions for a method for a digital twin of a physical process, the digital twin configured with forward prediction of the physical process, the instructions involving identifying key performance indicators (KPIs) from the data architecture of the digital twin; defining output KPIs indicative of quality or efficiency for the forward prediction from the vector input to the digital twin and the identified KPIs; generating a recurrent neural network (RNN) structure to match the physical process, each node of the RNN structure associated with a sub-process within the physical process; and computing each stage of the RNN structure from output of a prior stage and current input to the each stage to generate values for the output KPIs as the forward prediction as described, for example, in FIGS. 5 to 10.
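The staged computation described above, in which each stage of the RNN structure is computed from the output of the prior stage and the current input to produce the output KPIs, can be sketched as follows. This is an illustrative sketch only: the tanh cell, random weights, and dimensions are assumptions for demonstration, not part of the disclosure.

```python
import numpy as np

def stage_forward(h_prev, x_t, W_h, W_x, b):
    """One RNN stage: combine prior-stage output with the current input vector."""
    return np.tanh(W_h @ h_prev + W_x @ x_t + b)

def forward_prediction(inputs, W_h, W_x, b, W_out):
    """Run the stages in sequence; a final projection yields the output KPI."""
    h = np.zeros(W_h.shape[0])
    for x_t in inputs:           # one input vector per pipeline stage
        h = stage_forward(h, x_t, W_h, W_x, b)
    return float(W_out @ h)      # single-value output KPI

rng = np.random.default_rng(0)
hidden, feat, stages = 4, 3, 5
W_h = rng.normal(size=(hidden, hidden)) * 0.1
W_x = rng.normal(size=(hidden, feat)) * 0.1
b = np.zeros(hidden)
W_out = rng.normal(size=hidden)
inputs = [rng.normal(size=feat) for _ in range(stages)]
kpi = forward_prediction(inputs, W_h, W_x, b, W_out)
```

In a production-line mapping, each `x_t` would carry the stage's vector input (e.g., robot quality, material quality) and `kpi` the predicted line outcome.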
  • Processor(s) 1910 can be configured to execute instructions for a method for a digital twin of a physical process, the method involving aligning the computation of the each stage with a data set of the physical process; and adjusting the RNN structure based on a difference between the computation and the data set as illustrated in FIGS. 7 and 8.
  • The data set can be simulated from the physics model of the physical process, or it can be the actual historical data set, in accordance with the desired implementation. As illustrated in FIG. 7, there can be two types of data sets: one is read from physical sensors and the other is from outputs of simulation. For example, in a truck manufacturing system, there may not be any sensors placed to detect the truck axle torque, so the mathematical expression of the wheel torque as a function of the engine torque can be used as a replacement. The torque distribution along the axle can be calculated, and once the force is understood, the metal fatigue can be calculated.
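The virtual-sensor substitution described above can be illustrated with a simplified drivetrain relation that estimates wheel torque from engine torque when no axle sensor exists. The gear ratio, final drive ratio, and efficiency values below are hypothetical placeholders, not values from the disclosure.

```python
def wheel_torque(engine_torque_nm, gear_ratio, final_drive_ratio, efficiency=0.9):
    """Estimate wheel/axle torque (N*m) from engine torque via drivetrain ratios."""
    return engine_torque_nm * gear_ratio * final_drive_ratio * efficiency

# Hypothetical truck drivetrain values, for illustration only.
t = wheel_torque(500.0, gear_ratio=3.5, final_drive_ratio=4.0, efficiency=0.9)
# t is approximately 6300.0 N*m
```

Such a computed value can stand in for the missing physical sensor reading in the digital twin's data flow.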
  • Processor(s) 1910 can be configured to execute instructions for a method for a digital twin of a physical process, the method involving generating the digital twin from a solution hierarchy or an asset hierarchy of the physical process and an associated artificial intelligence (Al) or physics model configured to model a corresponding one of the each sub process of the physical process; wherein the generating the RNN structure to match the physical process comprises mapping the each node to the each sub-process of the physical process as illustrated in FIGS. 4 and 10-15.
  • Processor(s) 1910 can be configured to execute instructions for a method for a digital twin of a physical process, the method involving executing the forward prediction to generate the value of the output KPIs; executing backward attribution for the each stage of the RNN structure based on the values of the output KPIs to determine ones of the each stage to be adjusted; and generating a recommendation to adjust the determined ones of the each stage as illustrated in FIGS. 7 to 17.
  • The recommendation can be based on the attribution result, which shows which variables in the prior stages are the leading factors for the output variables in the current or future stages.
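The attribution result described above can be approximated, for illustration, with a finite-difference sensitivity analysis: perturb each prior-stage input variable and observe the change in the output KPI. The toy predictor below is an assumption for demonstration; the disclosure's backward attribution (e.g., FIGS. 10 to 17) may be computed differently.

```python
import numpy as np

def attribute(predict, x, eps=1e-4):
    """Finite-difference sensitivity of the output KPI to each input variable."""
    base = predict(x)
    sens = np.zeros_like(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        sens[i] = (predict(xp) - base) / eps
    return sens

# Toy stage model: the KPI depends strongly on variable 0, weakly on variable 2.
predict = lambda x: 5.0 * x[0] + 0.1 * x[2]
scores = attribute(predict, np.array([1.0, 1.0, 1.0]))
leading = int(np.argmax(np.abs(scores)))   # variable 0 is the leading factor
```

The variable with the largest sensitivity magnitude would drive the adjustment recommendation for its stage.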
  • Processor(s) 1910 can be configured to execute instructions for a method for a digital twin of a physical process, the method involving computing a weight from previous output to determine ones of the each stage that incurred a change to the values of the output KPIs over a threshold, as illustrated in FIGS. 7 to 18.
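Determining the stages that incurred a change to the output KPIs over a threshold, as described above, can be sketched as a simple thresholding over per-stage contribution weights. The weight values and threshold below are hypothetical, for illustration only.

```python
def flag_stages(stage_weights, threshold):
    """Return indices of stages whose weighted KPI change exceeds the threshold."""
    return [i for i, w in enumerate(stage_weights) if abs(w) > threshold]

# Hypothetical per-stage KPI-change weights computed from previous outputs.
weights = [0.02, 0.35, 0.08, 0.51]
to_adjust = flag_stages(weights, threshold=0.3)   # flags stages 1 and 3
```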
  • Processor(s) 1910 can be configured to execute instructions for a method for a digital twin of a physical process, wherein the vector input can involve one or more of robot quality, material quality, configuration quality, or operator quality.
  • Robot quality can be a score set as known in the art to indicate how well a robot has conducted a task, and can be some function involving any or a combination of anomaly risk, remaining useful life, and failure risk as illustrated in FIGS. 10 to 17.
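One hypothetical form of such a robot quality score is sketched below. The linear weighting of anomaly risk, remaining useful life (expressed as a fraction of design life), and failure risk is an assumption for illustration, not the disclosed function.

```python
def robot_quality(anomaly_risk, rul_ratio, failure_risk,
                  w_anom=0.3, w_rul=0.4, w_fail=0.3):
    """Hypothetical quality score in [0, 1]; higher is better.

    anomaly_risk and failure_risk are in [0, 1]; rul_ratio is remaining
    useful life as a fraction of design life, in [0, 1].
    """
    return (w_anom * (1 - anomaly_risk)
            + w_rul * rul_ratio
            + w_fail * (1 - failure_risk))

q = robot_quality(anomaly_risk=0.1, rul_ratio=0.8, failure_risk=0.05)
# q is approximately 0.875
```

A score like this could serve as one element of the vector input to a stage.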
  • Processor(s) 1910 can be configured to execute instructions for a method for a digital twin of a physical process, wherein the method is instantiated in a solution architecture that can involve a digital twin configured with a machine learning model and a physics model of the physical process, the digital twin executing a simulation from the physics model of the physical process, the machine learning model constructed from a library of machine learning algorithms to generate the RNN structure and to facilitate backward attribution as described in the example algorithms herein and as illustrated in FIG. 4 and FIG. 7.
  • The library of machine learning algorithms can be used to create the machine learning model via selection, autoML, and so on in accordance with the desired implementation. As illustrated in FIG. 4 and FIG. 7, the solution architecture can include an ML model and a physics model, with data simulated from the physics model of the physical process.
  • The current input to the each stage can involve a vector composed of a plurality of different features; wherein the computing of the each stage of the RNN structure involves executing multiple parallel analyses for each of the plurality of different features; wherein the output of the each stage is a single value KPI as illustrated in FIGS. 6 to 17.
  • The different features can involve the variables in the vector input, such as the anomaly, RUL, and so on, as well as statistical features derived from the data set such as min, max, average, and so on, in accordance with the desired implementation.
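Deriving statistical features such as min, max, and average from a window of sensor readings, as described above, can be sketched as follows; the window contents are illustrative.

```python
def derive_features(window):
    """Derive simple statistical features from a window of sensor readings."""
    n = len(window)
    return {
        "min": min(window),
        "max": max(window),
        "avg": sum(window) / n,
    }

feats = derive_features([2.0, 4.0, 6.0, 8.0])
# feats == {"min": 2.0, "max": 8.0, "avg": 5.0}
```

Each derived feature can then be appended to the stage's vector input alongside the raw variables.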
  • Multiple parallel analyses can be conducted for each of the features by the Ray Digital Twin Computation Environment 760.
  • Input to one layer of the RNN structure can be different from input to another layer of the RNN structure as illustrated in FIGS. 15 and 16.
  • For example, one layer can involve one variable (e.g., quality of material), another layer can involve the material vendor, and so on, in accordance with the desired implementation.
  • The physical process can be a production/assembly line as illustrated in FIG. 18, wherein the production line structure is mapped to the RNN structure to facilitate both forward prediction and backward attribution as illustrated in FIGS. 10 to 17.
  • Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations within a computer. These algorithmic descriptions and symbolic representations are the means used by those skilled in the data processing arts to convey the essence of their innovations to others skilled in the art.
  • An algorithm is a series of defined steps leading to a desired end state or result. In example implementations, the steps carried out require physical manipulations of tangible quantities for achieving a tangible result.
  • Example implementations may also relate to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may include one or more general-purpose computers selectively activated or reconfigured by one or more computer programs.
  • Such computer programs may be stored in a computer readable medium, such as a computer-readable storage medium or a computer-readable signal medium.
  • A computer-readable storage medium may involve tangible mediums such as, but not limited to, optical disks, magnetic disks, read-only memories, random access memories, solid state devices and drives, or any other types of tangible or non-transitory media suitable for storing electronic information.
  • A computer readable signal medium may include mediums such as carrier waves.
  • The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus.
  • Computer programs can involve pure software implementations that involve instructions that perform the operations of the desired implementation.
  • The operations described above can be performed by hardware, software, or some combination of software and hardware.
  • Various aspects of the example implementations may be implemented using circuits and logic devices (hardware), while other aspects may be implemented using instructions stored on a machine-readable medium (software), which if executed by a processor, would cause the processor to perform a method to carry out implementations of the present application.
  • some example implementations of the present application may be performed solely in hardware, whereas other example implementations may be performed solely in software.
  • the various functions described can be performed in a single unit, or can be spread across a number of components in any number of ways.
  • The methods may be executed by a processor, such as a general purpose computer, based on instructions stored on a computer-readable medium. If desired, the instructions can be stored on the medium in a compressed and/or encrypted format.


Abstract

Example implementations described herein involve systems and methods for a digital twin of a physical process, the digital twin configured with forward prediction of the physical process, which can include identifying key performance indicators (KPIs) from the data architecture of the digital twin; defining output KPIs indicative of quality or efficiency for the forward prediction from the vector input to the digital twin and the identified KPIs; generating a recurrent neural network (RNN) structure to match the physical process, each node of the RNN structure associated with a sub-process within the physical process; and computing each stage of the RNN structure from output of a prior stage and current input to the each stage to generate values for the output KPIs as the forward prediction.

Description

DIGITAL TWIN SEQUENTIAL AND TEMPORAL LEARNING AND EXPLAINING
BACKGROUND
Field
[0001] The present disclosure is generally related to Internet of Things (IoT), Operational Technology (OT) and Digital Twin (DT) systems, and more specifically, to facilitate a framework for digital twin sequential and temporal learning and explaining.
Related Art
[0002] A digital twin is a virtual representation that serves as the real-time digital counterpart of a physical object, process and/or system. The digital twin concept is an outcome of the effort toward continuous improvement in the creation of product, product design, product engineering, and on-going operation activities. It has the capacity to connect all lifecycle stages of a product and its interactions into a cohesive smart construct.
[0003] The digital twin object contains digital threads. The digital thread is used to describe the traceability of the digital twin back to the requirements, parts, and control systems that make up the physical asset. Digital thread(s) also align both physical sensors and corresponding data to the digital system.
[0004] Related art implementations for solution learning and explaining for assets are usually done per asset per problem. In terms of solution learning on assets, the related art implementations are related to learning in process pipeline and connected assets.
[0005] In an example related art implementation in the domain of parallel and distributed systems, there is the use of smart contracts as connectors linking heterogeneous systems as an end-to-end simulation and solutioning. This related art implementation is more closely related to an explicit complex agent-based system design and implementation than the digital twin with process and attached assets for industrial operations. The defined physical system, logical system, their hierarchy, data structures and the problem to solve are very different from the digital twin space.
[0006] In the related art, simulations are used to construct digital models that imitate the operations or processes within a system for execution of distributed ledger and other transactions (e.g., blockchain, peer-to-peer interaction models), and micro-transactions replacing or complementing traditional models that involve centralized authorities or intermediaries, with artificial intelligence/machine learning (AI/ML) enhanced controller circuits having self-adaptive autonomous execution of transactions in real time and forward markets. For these simulations, there are levels of digitization, and this process may involve mathematical concepts. The simulation is then run by introducing variables into the digital environment or interface.
[0007] In another example related art implementation, there is a transaction-enabling system that includes a production facility having a core task that is a production task. The system includes a controller having a facility description circuit that interprets a number of historical facility parameter values and a corresponding number of historical facility outcome values, and a facility prediction circuit that operates an adaptive learning system, where the adaptive learning system is configured to train a facility production predictor in response to the plurality of facility parameter values and the corresponding plurality of facility outcome values. The facility description circuit further interprets a number of present state facility parameter values, and the facility prediction circuit further operates the adaptive learning system to predict a present state facility outcome value in response to the number of present state facility parameter values.
SUMMARY
[0008] Numerous related art implementations have tried to explain the relationship between production assembly system design and productivity, so that they can help to design factories that produce more products on time, waste fewer resources (people, material, space, connected assets and others), and generate high quality outcomes. Quality research has been the center of practitioner and researcher focus as it involves complicated relationships with time and resources. The measurement of quality in quality control and quality management has been used to emphasize the importance of quality, but what is missing among factory design, quality, and productivity is a cohesive model showing how they are interrelated, beyond arguments from anecdotal evidence or qualitative reasoning, in quality prediction and (poor) quality attribution in near real time.
[0009] The related art encounters several problems when trying to explain such a relationship. Process pipeline quality exploration intends to go broader to explore and discover problems, but does so periodically and inefficiently in continuous operation, whereas the problems are normally studied in isolation with limited criteria, scenarios, and data.
[0010] A first issue with the related art is that the analysis for the process pipeline is done at the process and asset levels separately. In the related art, a problem in a process pipeline is recognized separately from its critical assets and their subprocesses. When a quality issue (not limited to failure) occurs, most of the analytics work starts with collecting data as variables from system sensors and proceeds with the analysis thereon. The association of process pipeline degradation and the properties of its connected assets (e.g., remaining useful life, and material/subcomponent quality) used in the process is not closely studied as a whole. Thus, there is no holistic forward prediction nor proper backward attribution with the full system in view.
[0011] A second issue with the related art is that the process pipeline and its connected assets (e.g., robotic machines) involve constantly changing performance over time and require continuous calibration and alignment. Product quality and process pipeline performance are constantly fluctuating due to degradation from continuous operation, subcomponent replacement, quality of material to process, inadequate maintenance, tuning, core component life cycles, and cascaded aggregation of output quality from prior stages. The absence of numeric data and direct measures makes data analysis for actionable decisions more susceptible to biased interpretation. Therefore, there is a need to adopt well established procedures and techniques to enrich and ensure high-quality analysis that is both valid and reliable. For example, delays in a single process cycle time often stall pieces behind it, create downtime, and consume manufacturing space both farther up and down the assembly line. It is rare to have process queueing/throughput analysis conducted with a connected asset, cascading into its hierarchy, connected subprocess performance model, and/or behavior and health models via outcome attributions to the activities of connected assets/connected processes at time steps.
[0012] A third issue in the related art is that the inter-relationships of the connected assets in the process pipeline and the procedures to perform (e.g., motion profiles) are usually not well considered. The process pipeline can be decomposed into stages. Each stage could be processed by connected asset(s). The connected asset could also be decomposed into a sequence of actions of motion profiles and the material conditions used for the tasks. A connected asset could be further decomposed into its own process pipeline and connected assets. Those interrelationships could improve the asset model/solution for the process in question.
[0013] A fourth issue with the related art is that the inter-relationships of the material/module to be processed by connected assets in the pipeline, and of material/module quality to connected asset(s), are usually not well considered. The material/module used in the pipeline from upstream to downstream processes is not considered holistically when building a product for the process in question. Work-in-progress inventory built from sub-optimal (not rejected) material/modules is still included in the inventory on the production floor (e.g., a half-assembled car or a partially completed truck). The information on suppliers and contract manufacturing for material/modules mostly resides in different databases for post analysis.
[0014] To address the above issues, example implementations described herein involve a digital twin solution with forward prediction and backward attribution. The digital twin facilitates optimized product process performance, predictive maintenance, and the extension of remaining useful life (RUL). The digital twin incorporates operations of connected assets, and applies physics and machine learning in real time.
[0015] The digital twin in the example implementations allows for real-time digital twin representation of system conditions, physics model blending for virtual sensor implementation where measurements are insufficient, artificial intelligence/machine learning (AI/ML) model blending for fast and large volume information extraction, consideration of material and component in modeling, consideration of processes and procedures in modeling, connected process pipeline and connected asset in prediction and attribution, and temporal and sequential analytics for dependency analytics.
[0016] To address the issues of the related art, the example implementations described herein involve the following aspects.
[0017] Aspects of the present disclosure further involve a solution representation in the digital twin, which introduces an approach to represent and store the information in digital twin, and an expert data store for solution explaining.
[0018] Aspects of the present disclosure involve a method for a digital twin of a physical process, the digital twin configured with forward prediction of the physical process, the method involving identifying key performance indicators (KPIs) from the data architecture of the digital twin; defining output KPIs indicative of quality or efficiency for the forward prediction from the vector input to the digital twin and the identified KPIs; generating a recurrent neural network (RNN) structure to match the physical process, each node of the RNN structure associated with a sub-process within the physical process; and computing each stage of the RNN structure from output of a prior stage and current input to the each stage to generate values for the output KPIs as the forward prediction.
[0019] Aspects of the present disclosure involve a computer program storing instructions for a digital twin of a physical process, the digital twin configured with forward prediction of the physical process, the instructions involving identifying key performance indicators (KPIs) from the data architecture of the digital twin; defining output KPIs indicative of quality or efficiency for the forward prediction from the vector input to the digital twin and the identified KPIs; generating a recurrent neural network (RNN) structure to match the physical process, each node of the RNN structure associated with a sub-process within the physical process; and computing each stage of the RNN structure from output of a prior stage and current input to the each stage to generate values for the output KPIs as the forward prediction. The instructions can be stored in a non-transitory computer readable medium and executed by one or more processors.
[0020] Aspects of the present disclosure involve a system for a digital twin of a physical process, the digital twin configured with forward prediction of the physical process, the system involving means for identifying key performance indicators (KPIs) from the data architecture of the digital twin; means for defining output KPIs indicative of quality or efficiency for the forward prediction from the vector input to the digital twin and the identified KPIs; means for generating a recurrent neural network (RNN) structure to match the physical process, each node of the RNN structure associated with a sub-process within the physical process; and means for computing each stage of the RNN structure from output of a prior stage and current input to the each stage to generate values for the output KPIs as the forward prediction.
[0021] Aspects of the present disclosure involve an apparatus to facilitate a digital twin of a physical process, the digital twin configured with forward prediction of the physical process, the apparatus involving a processor, configured to identify key performance indicators (KPIs) from the data architecture of the digital twin; define output KPIs indicative of quality or efficiency for the forward prediction from the vector input to the digital twin and the identified KPIs; generate a recurrent neural network (RNN) structure to match the physical process, each node of the RNN structure associated with a sub-process within the physical process; and compute each stage of the RNN structure from output of a prior stage and current input to the each stage to generate values for the output KPIs as the forward prediction.
BRIEF DESCRIPTION OF DRAWINGS
[0022] FIG. 1 illustrates an example digital twin upon which the example implementations can be applied.
[0023] FIG. 2 illustrates an example of defining the digital twin by digitalizing the physical system and entities in scope, in accordance with an example implementation.
[0024] FIG. 3 illustrates an example of defining the digital twin by constructing a virtual replica and connecting data threads to the designed software system (including models) for structure, behavior, and failure modes in scope, in accordance with an example implementation.
[0025] FIG. 4 illustrates an example integration of the AI/ML model and physics model, in accordance with an example implementation.
[0026] FIG. 5 illustrates an example of forward prediction in a digital twin, in accordance with an example implementation.
[0027] FIG. 6 illustrates an example of backward attribution in a digital twin, in accordance with an example implementation.
[0028] FIG. 7 illustrates a solution architecture for solution operation for digital twin orchestration, knowledge compilation, initialization, data sources (physical and simulated), computation, knowledge extraction and business actionable, in accordance with an example implementation.
[0029] FIG. 8 illustrates the system architecture on which the solutions are built and executed, in accordance with an example implementation.
[0030] FIG. 9 illustrates the conceptual flow of the system architecture, in accordance with an example implementation.

[0031] FIG. 10 illustrates an example for calculating backwards attribution, in accordance with an example implementation.
[0032] FIG. 11 illustrates an example of the multi-input structures for forward prediction, in accordance with an example implementation.
[0033] FIG. 12 illustrates an example for calculating backwards attribution for multi-input structures, in accordance with an example implementation.
[0034] FIG. 13 illustrates an example of forward prediction for subcomponent multi-inputs structure with multi-level recursion, in accordance with an example implementation.
[0035] FIG. 14 illustrates an example expanded view of the forward prediction for the subcomponent multi-inputs structure with multi-level recursion, in accordance with an example implementation.
[0036] FIG. 15 illustrates an expanded view of the backward attribution for the subcomponent multi-inputs structure with multi-level recursion, in accordance with an example implementation.
[0037] FIG. 16 illustrates an example calculation of the backward attribution for the subcomponent multi-inputs structure with multi-level recursion, in accordance with an example implementation.
[0038] FIG. 17 illustrates a flow for the expanded view of the recursive approach for backwards attribution, in accordance with an example implementation.
[0039] FIG. 18 illustrates a system involving a plurality of assets networked to a management apparatus, in accordance with an example implementation.
[0040] FIG. 19 illustrates an example computing environment with an example computer device suitable for use in some example implementations.
DETAILED DESCRIPTION
[0041] The following detailed description provides details of the figures and example implementations of the present application. Reference numerals and descriptions of redundant elements between figures are omitted for clarity. Terms used throughout the description are provided as examples and are not intended to be limiting. For example, the use of the term “automatic” may involve fully automatic or semi-automatic implementations involving user or administrator control over certain aspects of the implementation, depending on the desired implementation of one of ordinary skill in the art practicing implementations of the present application. Selection can be conducted by a user through a user interface or other input means, or can be implemented through a desired algorithm. Example implementations as described herein can be utilized either singularly or in combination and the functionality of the example implementations can be implemented through any means according to the desired implementations.
[0042] FIG. 1 illustrates an example digital twin upon which the example implementations can be applied. The digital twin in the example implementations described herein focuses on process pipeline behaviors, and can be in the form of a physical system digital replica 100 or logical (functional) replica 110, which includes the material, installation processes, asset configuration, and data required to perform the analysis efficiently.
[0043] In an example of the physical system digital replica 100, the data threads mapped to specific sensors 101 of the underlying connected assets can be fed into switches 103 for processing into a management system 104 based on the corresponding system thread mapping. The management system 104 can involve various interfaces 105 which can include, but are not limited to, system/business management interface 106, user interfaces 107, executing and operation interfaces 108, as well as having direct data threads for storage as operational records 109.
[0044] In an example of the logical replica 110, digital threads 111 that mimic the underlying system of connected assets are provided to a corresponding simulator 112, which can draw upon various connected system threads, such as, but not limited to, analysis and testing modules 113, what if execution and operation modules 114, and physics learning/ML models 115.
[0045] The process pipeline creates (logical) associations among the connected assets, connected processes, and sub-processes. Hence, with the full system emulation in digital space, users can forward predict the system outcomes from existing pipeline stages and backward attribute outcomes to their prior stage process, connected asset, used material, installation procedures, and connected sub-process.

[0046] For example, suppose there is an assembly line involving multiple machine assets and their corresponding component feeds. The pipeline temporal sequential structure allows maintenance/production professionals to understand the relationships of assembly line yield/quality/throughput and to attribute them to processes and assets using “primary connected” logic. The primary process outcomes can translate into connected asset KPIs such as quality, anomaly, failure, remaining useful life, and so on in accordance with the desired implementation, and further decompose to the next level of connected process and connected asset.
[0047] The proposed digital twins involve the digital representation of physical or nonphysical processes, systems, or objects. The digital twin also integrates all data produced by or associated with the process or system it mirrors. Thus, it enables the transfer of data within its digital ecosystem, mirroring the data transfer that occurs in the real world. The data used in the proposed digital twins are collected from IoT devices, edge hardware, human machine interfaces (HMIs), sensors, and other embedded devices in accordance with the desired implementation. Thus, the captured data represents high-level information that integrates the behavioral pattern of the digitized assets in the digital twin.
[0048] In addition, the real-time digital representation a digital twin provides serves as a world of its own. Within this digital world, all types of simulations can be run. It can also be used as a planning and scheduling tool for training, facility management, and the implementation of new ideas. This highlights the fact that a digital twin is a virtual environment, thus it must involve either 2D or 3D assets or the data they produce or are expected to produce.
[0049] The digital twin disclosed herein is also extensible: by using a physics model combined with AI/ML, it extends the digital system's sensory coverage beyond the physical system installation or pure physics model emulation, in contrast to the related art.
[0050] In the proposed approach, the example implementations use a digital twin with calibration having a Contextual Knowledge Center (CKC), Knowledge Graph (KG), and human Subject Matter Expert (SME) in the loop as a knowledge store to align the physical system and the digital system through continuous calibration. It is designed to either closely couple or loosely couple the physical system with the digital twin.

[0051] There are various benefits to utilizing a digital twin. Once the assets of interest are organized into digital twin format, it facilitates the study of yield optimization, quality management and asset management, effective scheduling of preventive and predictive maintenance activities, the ability to charge costs to the lowest possible asset level, as well as allowing for better Failure Mode and Effects Analysis (FMEA).
[0052] Further, the digital twin allows for explaining (by learning) pipeline temporal and sequential process relationships. Explaining the model and the results helps with the prescriptive actions for the problems in processes and connected assets in their temporal/sequential relationships. This explanation of problems can include, but is not limited to, KPIs such as cycle time variations, scheduled operation time, finished good quality, supplier on-time delivery, percentage of management line scheduling visibility, and working time lost by the employee.
[0053] The digital twin allows for the application of machine learning techniques to solve some of the problems in the assets. This includes, but is not limited to, failure detection/prediction/prevention, anomaly detection/prediction/prevention, remaining useful life prediction, and so on. The temporal/sequential activities could include robotic motion profiles that are not easily tracked by the sensors. The motion profile and its corresponding performance and behavior could be enriched via the physics model.
[0054] The digital twin further allows for the application of a deep learning explainable scheme (RNN) in factor attribution.
[0055] Given a factory pipeline, its relationships need to be identified and then used to build the digital twin. Using digital threads and the genericity of the digital twin, different physical replica(s) or logical replica(s) of the digital twin can be defined.
[0056] The steps to define the digital twin can involve the following. There can be a step to define the measurements indicative of the digital twin success criteria and business use cases. There can be a step to define the scope of the physical system, the processes, or the objects to be digitized. There can be a step to define physical system structures, behaviors, and failure modes. The step to define physical system structures can involve the alignment of the operation plan, content, capacity, schedule, and so on, with the contextual knowledge center for the physical system. There can be a step to define the physical system entities which are required (physical or logical). There can be a step to align entity relationships, pipeline stages, and technical specifications.

[0057] There can be a step to define the data to be captured in the physical system. In an example, for manufacturing facilities, the data that defines a system or process can be sourced from assets within a facility; these assets include equipment, floor layouts, workstations, and Internet of Things (IoT) devices. There can be a step to define the scope and content of data from these assets to be captured using smart edge devices, Radio Frequency Identifiers (RFIDs), human-machine interfaces, and other technologies that drive data collection.
[0058] There can be a step to define and construct physical system digital threads by building traceability from data, system and requirements. There can be a step to define the digital system to be digitized using data threads and introduce into the digital space.
[0059] In example implementations, with physical objects (such as vehicles) and data threads defined, physical data capture can be done through sensors, actuators, controllers, and other smart edge devices installed (or to be installed) within the system. LIDAR and/or 3D scanners can also be used to extract point clouds when digitizing small to medium-sized objects. The key step is the successful capture of the data a system or object produces that could sufficiently define the system for creating a digital twin.
[0060] In example implementations, there can be a step to define and create (decentralized) identifiers which represent/verify the digital identity of a self-sovereign physical object or facility. For example, when developing a digital twin of a facility, the entire system will have its own unique identity and assets within the facility are verified with unique identities to ensure their actions are autonomous when executing simulations within the digital twin environment.
[0061] There can be a step to define the scope of (automated) processes and corresponding simulations that analyze how a physical system will operate under physics principles or designed constraints. There can be a step to define and design the digital twin interface technology which can achieve the goals of a digital twin.
[0062] There can be a step to define and design digital twin software and platforms. Examples can include, but are not limited to: software that handles the flow of data from the IoT devices, a facility, and/or other enterprise systems needed to understand and digitize the chosen process; software and software architecture that recreates physical objects or assets in its digital ecosystem to deliver information with a level of clarity/granularity suitable for business actionables; increased computing resources needed to create and manage a digital twin when digitizing and emulating complex systems with hundreds of variables that produce large data sets; and scalable computing power and resources as a key consideration for a digital twin platform or solution.
[0063] There can be a step to define metadata/content-aware solution requirements which understand the data produced across the lifecycle of an asset and integrate the asset management system with the digital twin.
[0064] There can be a step to define and design the business functions that the digital twin is to perform and solve. For example, if it is to serve as a monitoring tool for facilities or for predictive maintenance, limited digital twin software can be used, while for simulations and scheduling, more advanced technology will be required.
[0065] There can be a step to define and design digital twin solutioning and creation processes to simplify the process of creating digital representations of physical assets (e.g., implementing digital system software architecture for physical system structure, behavior, failure modes, replica, and so on).
[0066] There can be a step to define the procedure and approach to connect the physical system and digital system pair in the digital twin via physical sensory and software interfaces.
[0067] There can be a step to verify system connectivity, bring both systems online, and verify data quality, latency, and time synchronization.
[0068] There can be a step to calibrate and align the digital twin (both physical and digital system) and the system lifecycle.
[0069] There can be a step to select additional ML models and physics models to compute digital twin system performance, business actionables, and what-if scenarios. The step to select additional ML models and physics models can involve calibration of the ML models and physics models with the physical system using behavior KPIs and performance KPIs, calibration of the ML models with the physics models in the digital twin, alignment of the ML model system structure with the physics model system structure, and creation of the virtual sensory content in the physics model to facilitate the alignment.
[0070] There can be a step to maintain continuous operation, which can include involving the entire product value chain and user buy-in, including data from multiple sources, creating/ensuring long access lifecycles as asset lifecycles are longer than software lifecycles, and another step for defining and evolving the measurements of the digital twin success criteria and the next best-fit use cases.
[0071] FIG. 2 illustrates an example of defining the digital twin by digitalizing the physical system and entities in scope, in accordance with an example implementation. FIG. 3 illustrates an example of defining the digital twin by constructing a virtual replica and connecting data threads to the designed software system (including models) for structure, behavior, and failure modes in scope, in accordance with an example implementation. To facilitate the mapping between the digital twin and the underlying physical process/system, a solution hierarchy (for process) and/or an asset hierarchy (for assets) is constructed as represented by boxes 1, 2, 3, 4, 5, 6, and 7. The upper left portion of the figure is the underlying physical process/system, and the upper right portion is the flows that are mapped out based on the underlying physical process/system. Based on the physical assets, the digital assets can be mapped from the physical asset through a solution hierarchy/asset hierarchy input as is known in the art. From the hierarchy, the inferencing pipeline can be constructed, so that each stage can eventually map out to the corresponding physical process/asset.
[0072] There are two types of base models in the digital twin and their integration. Firstly, there is the AI/ML model. The AI/ML model focuses on the operation side once the physical system starts operating. The AI/ML model can capture the system runtime information and use that to derive insights and help with remediation of issues and decision-making. The AI/ML model replicates a decision process to enable automation and understanding in the digital twin. AI/ML models are mathematical algorithms that are trained using data and subject matter expert (SME) input to reach the decision an expert would make when provided the required information. The AI/ML model in the digital twin is designed to predict outcomes and reveal the rationale behind its predicted outcome to help interpret the decision process. The AI/ML training processes a large amount of data through the algorithm(s) using a fitness function to maximize likelihood or minimize cost, yielding a trained model as the result. By analyzing data from many system behaviors in different operating conditions (e.g., material, installation procedures, asset conditions, and process), the model learns to detect the types of failure mode patterns and distinguish these from normal operation. The AI/ML model also attempts to operate with fault tolerance when not all the data is trusted (e.g., sensor failure, connectivity failure) to reach the best-fit solution or replicate a specific decision process previously trained (e.g., using an alternative feature set that excludes the data in question for modeling).

[0073] Secondly, there is the physics model. The physics model mainly focuses on the design phase, and can output the expected behavior of the system. The model will mainly target what could happen by design or under normal operation; otherwise, capturing the abnormal operations or conditions will be very costly and may not be accurate and reliable.
The physics model in the digital twin is the theory describing the known fundamental principles (e.g., electromagnetics, mechanics, thermodynamics, material science, and so on), as well as the motion model in terms of displacement, distance, velocity, acceleration, speed, rotation, and time. The physics model is developed as a formulation of the physical system, sub-system, motion, load, and controller, and is finalized in a time series output via “virtual sensor” placement upon directed experimental confirmation. The physics model can predict various properties of operating outcomes, system responses, and safety compliance with great accuracy.
[0074] There are benefits to integrating the AI/ML model and the physics model together. The output from the physics-based model can complement and/or validate the data from the physical sensors and thus help improve the AI/ML model for operation. For the “complement” case, when the physical sensor data is not available or not sufficient, virtual sensor data from the digital twin model can serve as a “surrogate” for the physical sensors. For the “validate” case, assuming the physical sensors also collect the data corresponding to the outputs of the digital twin model, the virtual sensor data can serve as the “expected” value while the values from the physical sensors serve as the “observed” value; the variance or difference between them can then be used as a signal to detect abnormal behaviors in the system.
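The “validate” case above can be sketched as a simple residual check between physical (“observed”) and virtual (“expected”) sensor values; the tolerance and data below are illustrative assumptions, not part of the disclosed implementation:

```python
import numpy as np

def detect_anomaly(observed: np.ndarray, expected: np.ndarray,
                   tolerance: float = 0.5) -> np.ndarray:
    """Flag samples where the physical sensor ("observed") deviates from
    the physics-model virtual sensor ("expected") beyond a tolerance."""
    residual = np.abs(observed - expected)
    return residual > tolerance

# Virtual sensor predicts a steady value of 1.0; sample 3 is abnormal.
observed = np.array([1.0, 1.1, 0.9, 5.0, 1.0, 1.05, 0.95, 1.0])
expected = np.ones(8)
flags = detect_anomaly(observed, expected)
```

A production implementation would likely calibrate the tolerance per sensor from historical residual statistics rather than using a fixed constant.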
[0075] By combining the AI/ML model and the physics model in the digital twin, the AI/ML model can use the physics model to create a large amount of data to stand up the AI/ML model for training when data is not available, or when a system is in its infancy and has an insufficient amount of data for training. Further, the sparsity of the sensor data is an issue in identifying the telemetry of a failure and its impact range. The physics model can devise “virtual sensors” per the distribution of theoretical outputs and provide coverage for physical sensor sparsity. The physics model is theoretically self-consistent and has demonstrated success in providing experimental predictions. The physics model may leave some behavior unexplained when complex system interactions are involved, and falls short of predicting system response when dormant variables are not in scope for analysis. AI/ML models can absorb high-dimension data and manage non-linear transformations.

[0076] FIG. 4 illustrates an example integration of the AI/ML model and physics model, in accordance with an example implementation. The analytics model 430 in the digital twin contains both AI/ML models 431 and physics models 432. Once initial data 400 and content loading from the content knowledge store 420 and orchestration engine 410 are completed, the AI/ML models 431 and the physics models 432 will go through a calibration phase prior to integration. Once the physics model and AI/ML model align to the physical system, the physics model output can be considered as content contribution from the “virtual sensors”. The outcomes of the analytics models will be stored in the content knowledge store 420 with the corresponding simulation conditions to compare and optimize.
[0077] Given a digital twin and a problem of this digital twin, physics-based modeling can be combined with physical sensory and external data as inputs to AI/ML model(s) applied to predict and/or attribute the outcomes. For instance, suppose there is a vehicle assembly line in a manufacturing plant; quality prediction can be applied to the production pipeline outcome by using each pipeline stage's quality as an input to compute the yield quality. Each stage's and its sub-processes' anomaly detection, failure detection, and their related risk scores can be used as inputs to predict the quality of a stage. This approach could be applied recursively to the entire plant, including additional inputs such as material quality, configuration setup, installation procedure version, attached asset conditions (risk scores and RUL), and so on. Conventionally, the solution is built per problem or failure mode, and the relationships among the assets and the problems are not considered or utilized. Further, the corresponding optimizations of related art implementations are locally constrained (e.g., no content for attribution under a low quality condition), which cannot be leveraged.
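The recursive per-stage quality computation described above can be sketched as follows; the stage structure, score names, and the simple averaging rule are hypothetical illustrations:

```python
def stage_quality(stage: dict) -> float:
    """Aggregate a stage's quality from its own input scores (material,
    installation, asset condition, risk-derived scores) and, recursively,
    the quality of its connected sub-processes."""
    own = stage.get("scores", [])
    subs = [stage_quality(s) for s in stage.get("sub_processes", [])]
    values = own + subs
    # Plain average as the aggregation rule; a weighted or learned
    # aggregation could be substituted per the desired implementation.
    return sum(values) / len(values)

line = {
    "scores": [0.9],                # e.g., configuration setup quality
    "sub_processes": [
        {"scores": [0.8, 1.0]},     # material, installation quality
        {"scores": [0.7]},          # asset condition (risk scores, RUL)
    ],
}
yield_quality = stage_quality(line)  # recursive yield quality estimate
```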
[0078] An important related task for the solution is to attribute the results and generate the prescriptive actions in order to remediate the problems in the pipeline via the digital twin. This includes, but is not limited to, performance optimization, root cause analysis, remediation recommendation, alert suppression, and so on. Conventionally, there is limited work on attribution, and it is mostly done in isolation without full contextual knowledge in operation. Like solution learning, the explaining/attribution effort could be applied recursively to the fully connected digital twin with contextual knowledge content.
[0079] Aspects of the present disclosure involve facilitating forward prediction in a digital twin as illustrated in FIG. 5. To facilitate the forward prediction, the example implementations first create the temporal sequential logical structure of the process digital twin that reflects process, material, and installation with prediction, in conjunction with Sequence Prediction, Sequence Classification, Sequence Generation, and Sequence to Sequence Prediction, including a recurrent neural network (RNN) approach. In the example of FIG. 5, t1-t7 indicate points in time. Then, the example implementations identify the sensors and key performance indicators (KPIs) (e.g., quality or qualities) that apply to each stage's process, asset, material, and installation procedure. “Q” represents “Quality”, which is an aggregation of one or more of the quality of material, quality of installation procedure, quality of the asset (e.g., robotic stations), and quality of the process line. Then, example implementations build a homogenous (or heterogeneous) model and/or solutions for each stage's process, asset, material, and installation procedure. The output of each model at a prior stage serves as input to the model at the next stage by following the digital twin process flow. Thus, the prior stage's model outputs are used as derived features to predict the next stage, mimicking the sequential dependency of the process flow.
[0080] In the example implementations described herein, the output KPIs of the stages in the forward prediction are identified/defined to be indicative of quality or efficiency for the forward prediction based on the vector input provided to the digital twin and the identified input KPIs. Further, defining the output KPIs is mostly done using use cases. Use cases are stored in the knowledge base or initialized by the operator as will be described, for example, in FIG. 7, or can be initialized by the user in accordance with the desired implementation. The output KPIs are single-value KPIs that are representative of quality and/or efficiency. If both quality and efficiency are needed, then two prediction pipelines (one using quality as the KPI and one using efficiency as the KPI) can be initialized, or new use cases can be initialized by the user in accordance with the desired implementation. The vector input can involve the robot/asset quality of the underlying asset or robot, which can involve variables such as anomaly risk score, remaining useful life score, failure risk score, material quality score, configuration quality score, operator quality score, and so on in accordance with the desired implementation.
[0081] In the structure as illustrated in FIG. 5 and in other implementations of forward prediction as described herein, the recurrent structure involves a recurrent neural network (RNN) structure to match the physical process, and the structure is used to match the underlying production/assembly line structure such that each node of the RNN structure is associated with a sub-process/sub-station within the physical process/physical system. In the example implementations described herein, most of the tasks of the sub-processes can be performed by the sub-stations. In an example implementation in contract manufacturing, the sub-processes are outsourced to different external locations, but with connected IoT and blockchain, the primary manufacturing location can trace inbound parts along a supply chain with blockchain-created immutable documentation of quality checks and detailed production process data along with KPIs. Thus, the sub-process/sub-station can be considered physical or virtual, connected to the database(s), to uniquely tag each product as well as to automatically inscribe every manufacturer transaction, procedure, modification, or quality score/check by blockchain. The RNN can involve weights, which are parameters within the RNN that transform input data within the hidden layers of the network. A neural network is a series of nodes, or neurons. Within each node is a set of inputs, weights, and a bias value. As an input enters the node, it gets multiplied by a weight value, and the resulting output is either observed or passed to the next layer in the neural network. Often the weights of a neural network are contained within the hidden layers of the network. The weights can be computed from previous output to determine which of the stages incurred a change to the values of the output KPIs over a threshold.
[0082] In the example of FIG. 5, the computation of each stage of the RNN structure is conducted from the output of the prior stage and the current input to that stage, to generate values for the output KPIs as the forward prediction. Each RNN stage has two inputs: one is the output from the prior stage, and the other is the new input(s) at the current stage. The input/output is associated in time.
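A minimal sketch of this two-input stage computation, with randomly initialized weights standing in for trained parameters, might look like the following (dimensions and inputs are illustrative assumptions):

```python
import numpy as np

def rnn_stage(h_prev: np.ndarray, x: np.ndarray, W_h: np.ndarray,
              W_x: np.ndarray, b: np.ndarray) -> np.ndarray:
    """One RNN stage: combine the prior stage's output (h_prev) with the
    current stage's input vector (x) through weights and a nonlinearity."""
    return np.tanh(W_h @ h_prev + W_x @ x + b)

rng = np.random.default_rng(0)
dim = 3                                   # size of the KPI vector
W_h = rng.normal(size=(dim, dim))         # stand-ins for trained weights
W_x = rng.normal(size=(dim, dim))
b = np.zeros(dim)

# Unroll the pipeline: each stage consumes the prior stage's output and
# its own input vector (random stand-ins for per-stage KPI inputs).
h = np.zeros(dim)
for t in range(7):                        # t1..t7 as in FIG. 5
    x_t = rng.normal(size=dim)
    h = rnn_stage(h, x_t, W_h, W_x, b)
# `h` now carries the forward-predicted output KPI values.
```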
[0083] Aspects of the present disclosure further involve facilitating backward attribution in a digital twin. FIG. 6 illustrates an example of backward attribution in a digital twin.
[0084] In an example of backward attribution, the quality (output) is calculated for its attribution to prior stages of processes, material, installation procedures, and the asset used at the corresponding timesteps. Attributions can involve attribution by individual stage process (prior stages), attribution by individual stage material, attribution by individual stage installation procedure, attribution by individual stage asset (robotic station) anomaly, failure, and RUL risk scores, and attribution by combinations of process(es), material, installation procedures, and asset(s). Once the digital twin model is calibrated with a physical factory, the example implementations initialize digital twin conditional simulations (via physics model simulated outputs, collected historical data, or a mix of both) to emulate/perturb inputs and obtain the target outputs, explaining predictions (or an influence function) for all instances of its class (and/or failure modes). For a given instance prediction, the influence function(s) provide an efficient way to estimate the impact of upweighting samples on the model loss function for each simulation cycle. The deep learning related algorithms in sequence prediction leverage a transparent AI approach (e.g., identifying the largest gradient descent as features). Example implementations can further translate the features back to root cause KPIs.
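A numerical sketch of the attribution idea is shown below: each input factor is perturbed, and the resulting change in the predicted output KPI is used to rank factors by influence. The finite-difference scheme and the linear toy model are illustrative stand-ins for the gradient-based/influence-function machinery described above:

```python
def attribute(predict, inputs: dict, eps: float = 1e-5) -> dict:
    """Estimate each input factor's influence on the predicted output KPI
    via central finite differences, then rank by absolute influence."""
    grads = {}
    for name in inputs:
        hi = dict(inputs); hi[name] += eps
        lo = dict(inputs); lo[name] -= eps
        grads[name] = (predict(hi) - predict(lo)) / (2 * eps)
    return dict(sorted(grads.items(), key=lambda kv: -abs(kv[1])))

# Hypothetical yield-quality model over per-stage factor scores.
def yield_quality(f):
    return 0.5 * f["material"] + 0.3 * f["installation"] + 0.2 * f["asset"]

ranked = attribute(yield_quality,
                   {"material": 0.9, "installation": 0.8, "asset": 0.7})
# `ranked` orders factors by influence; e.g., material ranks first here.
```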
[0085] FIG. 7 illustrates a solution architecture for solution operation for digital twin orchestration, knowledge compilation, initialization, data sources (physical and simulated), computation, knowledge extraction and business actionable, in accordance with an example implementation.
[0086] Below is listed a brief description of each element in the solution architecture, with more details provided in the later sections.
[0087] The digital twin orchestration engine 700 is the software system/artifact which automates, coordinates, and manages computer resources and services. Orchestration provides for deployment and execution of interdependent workflows on external resources. The digital twin orchestration engine 700 can manage complex cross-domain workflows involving both extracting knowledge stores and initializing the digital twin actor construct. Once data 750 from sensors 751 and external data 752, along with simulated data 770, is received, the digital twin orchestration engine 700 will execute digital twin actors in the designed computing environment with an objective function to optimize. The generated data/outcomes from the digital twin computation will be used in turn-key business decisions and to extract new/updated contextual knowledge content to integrate with the knowledge stores.
[0088] A contextual knowledge store involves interrelationships, objects, structure, services, activities and/or content, and their contextual situation. The knowledge store persists outputs from AI enrichment pipelines into storage in knowledge graph format for independent analysis and/or downstream processing. The knowledge store preserves the enriched content for inspection or for other knowledge mining scenarios. The contextual knowledge store is a combination of knowledge content, a searchable/reasonable format, and physical storage. The AI/ML pipeline's output from analytics creates content that has been further extracted, structured, aligned, and analyzed using ML processes (e.g., page ranking).
[0089] The contextual knowledge store has the following characteristics: emphasizing the problem and its solving, recognizing that activities need to occur in multiple contexts, facilitating reasoning and monitoring to become self-regulated entities, anchoring knowledge in the diverse persona's context of activities, encouraging knowledge translation, knowledge transferring, knowledge indexing, knowledge blending, and knowledge cohort building, employing authentic and continuous assessment, and managing the knowledge lifecycle.
[0090] There are two contextual knowledge stores in scope, the Physical Process, Configuration and Knowledge Store 710 and Twin Analytics Modeling and Behavior Knowledge Store 720.
[0091] Physical Process, Configuration and Knowledge Store 710 is a contextual knowledge store that includes artifacts such as Operation Use Cases 711, Asset Model 712, Process Flow and Map 713, Process Pipeline and Connected Asset and Sensory 714, Operation Knowledge Graph 715, and Material/Sub-Module, Configuration, Installation Knowledge Base 716.
[0092] Twin Analytics Modeling & Behavior Knowledge Store 720 is a contextual knowledge store that includes artifacts such as Asset Digital Template 721, Physics mathematical model 722, ML algorithm 723, asset behavior 724, and other stores such as pipeline and failure mode and its transition states knowledge graph and operation scenarios, depending on the desired implementation.
[0093] In this Digital Twin initialization solution 730, the entities in the physical environment/physical process are represented by digital twins. A digital twin model is an instance of a custom-defined model or one of the existing models in the contextual knowledge store. A digital twin model can be connected to other digital twins via a directed acyclic graph (DAG) to form a twin graph, and this twin graph is the representation of the entire simulation of an environment (e.g., a factory). The initialization procedure can include the following.
[0094] In a first step, for creating a digital twin from a new custom-defined model, the new model needs to be uploaded to the service and can contain the set of properties, asset types, telemetry, and directed acyclic graph that a particular twin can have, as well as the information required to maintain the knowledge store's knowledge graph. In a second step, for an existing model and/or twin, the existing service can create an instance of the twin using the stored digital twin construct. In a third step, for the combination of new and existing digital twins, after the first and second steps, a new DAG is created to connect them. In a fourth step, for duplicate and parallel production lines, multiple instances of the digital twin can be built, instantiated, and connected via the DAG. In a fifth step, for any other combinations of twin and twin graph creation, the first through fourth steps can be selected and combined in accordance with the desired implementation.
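The twin-graph construction described in the steps above can be sketched in plain Python. This is a minimal illustration, not the source's implementation: the stage names, the edge list, and the helper functions are all hypothetical, and the twin graph is reduced to an adjacency map from which a valid execution order over the directed acyclic graph is derived.

```python
from collections import deque

def build_twin_graph(edges):
    """Build an adjacency map for a twin graph from (upstream, downstream) pairs."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
        graph.setdefault(dst, [])
    return graph

def topological_order(graph):
    """Return twins in dependency order; raise if the graph has a cycle (not acyclic)."""
    indegree = {node: 0 for node in graph}
    for node in graph:
        for nxt in graph[node]:
            indegree[nxt] += 1
    queue = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(graph):
        raise ValueError("twin graph contains a cycle; a directed acyclic graph is required")
    return order

# Hypothetical example: two parallel production lines feeding a shared packaging stage,
# as in the "duplicate and parallel production lines" step above.
edges = [("press_A", "weld_A"), ("press_B", "weld_B"),
         ("weld_A", "packaging"), ("weld_B", "packaging")]
graph = build_twin_graph(edges)
print(topological_order(graph))  # packaging is always last
```

The derived order is the sequence in which the per-stage twins would be instantiated and executed.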
[0095] This Digital Twin initialization solution 730 can include artifacts as follows: Hypothesis, Objective and Measurement 731, Simulation Scenario (predictive model) 732, Action Optimization Scenario (what if computation) 733, Asset Life cycles Scenario (prognosis model) 734, Performance/Behavior Attributions 735, Digital Twin construct 741, Temporal and Sequential construct 742, Virtual Sensory 743, Physical Sensory 744, External Data Sources 745, Physics Model 746, and ML Model 747.
[0096] Sensory and External Data (feeds) 750 are data feeds from each physical system to its corresponding digital twin system. Two types of feeds can be provided: machine sensory 751 and external data 752.
[0097] Machine sensory 751 with its sensor data represents a system operating situation. The collected data is either software instructions or direct sensor reading measurements through the attached hardware. The external data 752 represents the data stores or data sources which contain services, activities, or technician notes related to the physical system. These data are used to calibrate the digital twin against the physical system and further optimize the physical process.
[0098] Simulated Data (feeds) 770 are data feeds related to the physics model of the asset/process, treated/provided as virtual sensor output having data distributions 771 (e.g., temperature distribution, vibration distribution, and so on) over a predefined mesh telemetry on the asset. It augments simulations where data density is low (e.g., sparse sensor installations, missing data supplements). The physical system operates under physics (and chemical) principles. A first-principles physics-based model is often used and presents descriptions of majority and minority physical system behaviors and characteristics, such as impurity fatigue zones, voltage, current, vibration, temperature variances, and their dependencies (e.g., abrasion and temperature dependence in both majority and minority behaviors).
[0099] Ray Digital Twin Computation Environment 760 is a digital twin computation environment that is implemented in Ray, which provides a simple, universal application programming interface (API) for building a distributed digital twin. Once the digital twin construct and DAG are in place, Ray will wrap the ML algorithm in Ray actor(s) to be executed in a parallel and distributed fashion. Data feeds (e.g., sensor, external data, and simulated data) will be streamed to enable the computation.
[0100] Choosing Ray as digital twin computation environment is done to leverage Ray native characteristics (e.g., simple for building and running distributed applications, flexible computation requirements, simple code changes, and large applications, libraries) and tools on top of the core Ray to enable complex applications.
[0101] The Ray digital twin could use several native ML libraries (e.g., AutoML, Reinforcement Learning, Distributed Training Wrappers, Scalable and Programmable Serving, and Distributed Memory) based on the columnar memory format for flat and hierarchical data, organized for efficient analytic operations.
[0102] Operation Content Knowledge Graph Extraction 790 is the creation of knowledge from structured (relational database table format, XML) and unstructured (text, documents, images) outcomes of analytics activities in digital twin simulation(s). The simulation intakes contextual content and objectives (e.g., a hypothesis development canvas), provides results, and then translates them into knowledge in a machine-readable and machine-interpretable format (e.g., resource description framework (RDF)) which represents knowledge to facilitate inferencing. The processes include information extraction with natural language processing (NLP) and involve an additional Extract, Transform, Load (ETL) process. The blending criteria are then the extraction result of the creation of structured information, and/or the transformation into a relational schema aligned with existing formal knowledge via ontologies, and/or the generation of a schema based on the source data with reasoning. The modules include, but are not limited to, Content and Attributes Clustering 791, Taxonomy Ontology of Asset Model 792, Bayesian Network Probability calculation and graph assignment 793, Knowledge Graph 794, and Bayesian Reasoning for knowledge graph blending 795.
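The extraction of structured records into RDF-style knowledge can be sketched as follows. This is a minimal illustration under stated assumptions: the record fields and identifiers (pump_7, line_2, failureMode, and so on) are hypothetical, and an in-memory list of (subject, predicate, object) triples stands in for a real triple store.

```python
def records_to_triples(records):
    """Flatten structured records (e.g., rows from a relational source) into
    RDF-style (subject, predicate, object) triples."""
    triples = []
    for rec in records:
        subject = rec["id"]
        for key, value in rec.items():
            if key != "id":
                triples.append((subject, key, value))
    return triples

def query(triples, subject=None, predicate=None):
    """Minimal pattern match over the triple store (None acts as a wildcard)."""
    return [(s, p, o) for (s, p, o) in triples
            if (subject is None or s == subject)
            and (predicate is None or p == predicate)]

# Hypothetical operation content: an asset record and a process-pipeline record.
records = [
    {"id": "pump_7", "type": "Asset", "locatedIn": "line_2", "failureMode": "bearing_wear"},
    {"id": "line_2", "type": "ProcessPipeline", "produces": "truck_axle"},
]
triples = records_to_triples(records)
print(query(triples, subject="pump_7", predicate="failureMode"))
# → [('pump_7', 'failureMode', 'bearing_wear')]
```

Queries over such triples are the kind of pattern matching that downstream reasoning (e.g., the Bayesian blending modules above) would build upon.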
[0103] Knowledge graph 794 can involve knowledge types that are underpinned and tagged, such as contextual knowledge, attribution knowledge, unstructured knowledge, structured knowledge, process pipeline and connected asset, asset hierarchy, solution hierarchy, flow processes, and so on in accordance with the desired implementation.
[0104] Business Actionable 780 is a module that acts as a human and machine interface used by the analytics translator. This module will contain a user journey designed for the analytics translator and the clients, and can be implemented through any technique known in the art in accordance with the desired implementation.
[0105] FIG. 8 illustrates the system architecture on which the solutions are built and executed, in accordance with an example implementation. The system involves the components as illustrated therein. In this example, a user (orchestrator) sends a request to the digital twin orchestration engine 700. The digital twin orchestration engine 700 searches a database and identifies whether the installation contains the required services. The orchestration engine can involve APIs, Business Process Modeling Languages (BPMLs), and a message adaptor, which interface with the event bus to work with other microservices. Event Bus 810 provides queueing and communication functions between the orchestration engine and the microservices. Knowledge Store Microservice 820 is a service providing the knowledge store of the physical system that will be represented by the digital twin. Knowledge Store Microservice 830 is a service providing the knowledge store of the digital system that will represent the physical system using AI/ML models.
[0106] Physical Data Microservice 840 is a service providing the data store from physical sensors. Depending on the scope of the physical system, data could be from on-premise or cloud sources. Simulated Data Microservice 850 is a service providing the data store of calculated theoretical behaviors/outputs of the system via its telemetry. This could be considered virtual sensor data. External Data Microservice 860 is a service providing data from service, maintenance, activities, operation, and notes to connect system behaviors to failure modes for labeling. Solution Initialization Microservice 880 is a service that instantiates the software Ray actor construct with analytics content and data as a digital twin actor. Digital Twin Simulation Microservice 870 is a service that executes the digital twin Ray actor at run time and provides outputs.
[0107] FIG. 9 illustrates the conceptual flow of the system architecture, in accordance with an example implementation. In the following, forward prediction and backward attribution are discussed in detail. The forward prediction and backward attribution schemes are described herein using the digital twin in process pipeline temporal and sequential learning.
[0108] As illustrated in FIG. 9, the aspects as described herein can be implemented as APIs to facilitate the desired implementation. The APIs can be implemented in the form of a container, which is a standard unit of software that packages up code and all its dependencies so that the application runs reliably from one computing environment to another and can scale within the resources assigned to it in that environment, which is necessary for the application to function correctly. As illustrated in FIG. 9, digital twin orchestration engine 905 can be implemented in a container to interact with the Physical Process, Configuration & Knowledge Store 900, physical assets 901, Twin Analytics Modeling & Behavior Knowledge Store 902, virtual assets 903, external data sources 904, and digital twin environment 911. Further, the functions for the Digital Twin Initialization 906, Theoretical Behavior models (Virtual) 907, AI/ML Behavior models (Physical) 908, Product Flow Directed Acyclic Graph 909, and Digital Twin software Actor (Ray Actor) Instantiate 910 can be instantiated as an API via a single container, depending on the desired implementation.
[0109] For the forward prediction analysis, the RNN-like structure as illustrated in FIG. 5 is used to conduct forward prediction and backward attribution analysis. In this structure, the RNN-like pipeline structure represents a physical assembly line in the digital twin by connecting Ray actors (the rectangle boxes) sequentially and operating in temporal order. For a given task, this pipeline takes input (Q) from each stage's physical sensors/simulated data or pre-computed scores (e.g., quality scores), computes the internal state at stage (A), generates output (h), and then feeds (h) forward to the next stage as an input. This input from the prior stage represents the quality inherited from the prior pipeline stage by the next pipeline stage as part of the combined inputs. t represents the process time stamp (which is not necessarily uniform, e.g., t_x is not necessarily equal to t_{x+1}). The time stamp t and stage x are labeled the same to simplify notation, e.g., Q_{x,t} is written as Q_t, and the prior stage's Q_{x-1,t-1} is labeled as Q_{t-1}. The stage x output h_{x,t} is labeled as h_t.
[0110] The computation step takes the input (Q_t) at stage t and the prior stage output (h_{t-1}) and, using the algorithm implemented in the stage (A_t), generates the stage output (h_t).
Output at stage t: h_t = f_{A_t}(Q_t, h_{t-1})    (Eq. 1)
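The stage-by-stage recurrence of Eq. 1 can be sketched in Python. This is a minimal illustration under stated assumptions: the blending stage model and its weights below are hypothetical stand-ins for whatever per-stage algorithm A_t a real pipeline would use; only the feed-forward chaining h_t = f(Q_t, h_{t-1}) reflects the source.

```python
def run_pipeline(stage_fns, inputs, h0=0.0):
    """Feed-forward pass over a sequential process pipeline (Eq. 1):
    h_t = f_{A_t}(Q_t, h_{t-1}), so each stage inherits the prior stage's output."""
    h = h0
    outputs = []
    for f, q in zip(stage_fns, inputs):
        h = f(q, h)
        outputs.append(h)
    return outputs

# Hypothetical per-stage quality model: blend this stage's sensor-derived score
# with the inherited quality. The carry weight 0.4 is illustrative only.
def stage(q, h_prev, carry=0.4):
    return carry * h_prev + (1.0 - carry) * q

stage_fns = [stage] * 7                      # a seven-stage line, as in FIG. 5
q_scores = [0.9, 0.8, 0.95, 0.7, 0.85, 0.9, 0.8]
h = run_pipeline(stage_fns, q_scores)
print(round(h[-1], 3))                       # h7: predicted end-of-line quality score
```

The final element h7 plays the role of the pipeline-level KPI prediction described below; intermediate elements are the inherited per-stage outputs h1 through h6.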
[0111] FIG. 5 illustrates a forward prediction diagram for a given product which requires seven stages. The output h_7 at t_7 in this example is the product quality score (h could be a different KPI, e.g., on-time risk score, product anomaly risk score, failure risk score, overall RUL score, and/or composited scores such as waste elimination, cost reduction, and cycle time reduction). [0112] For backward attribution as illustrated in FIG. 6, the response of the RNN-like model to outcome Quality S3@t3 is denoted as X_3 (generalized to X_t), and thereby the information gained at step t is as follows:
c_t^q = z_q(X_{1:t}) - z_q(X_{1:t-1})    (Eq. 2)
[0113] And the total information gain across the total time T is:
z_q(X_{1:T}) = Σ_{t=1}^{T} c_t^q    (Eq. 3)
[0114] For target class q, the probability could be represented by a sequence of factors:
p(y = q | X_{1:T}) = e^{z_q} / Σ_c e^{z_c}    (Eq. 4)

e^{z_q} = ∏_{t=1}^{T} e^{c_t^q}    (Eq. 5)
[0115] Thus, the contribution of X_t towards the target logit z_q can be calculated as:
c_t^q = W_q Δh_t, where h_T = Σ_{t=1}^{T} Δh_t    (Eq. 6)

and Δh_t is the additive share of the final hidden state h_T attributable to stage input X_t (made explicit in Eq. 10).
[0116] Also, at each time step t, the hidden state is updated using the following equation
h_t = a_t ⊙ h_{t-1} + (h_t - a_t ⊙ h_{t-1})    (Eq. 7)
where a_t ⊙ h_{t-1} is the partial evidence obtained by the RNN from the previous t - 1 steps that is carried to time step t, and (h_t - a_t ⊙ h_{t-1}) is the evidence newly acquired at step t.
Unrolling Eq. 7 from t = 1 (with h_0 = 0) gives:

h_T = Σ_{t=1}^{T} (∏_{k=t+1}^{T} a_k) ⊙ (h_t - a_t ⊙ h_{t-1})    (Eq. 8)

z_q = W_q h_T    (Eq. 9)
[0117] With this, knowing the hidden state vector ht and the updating parameter vector at will be sufficient to derive the decomposition.
[0118] For the contribution of each stage X_t towards the logit z_q of the probability for target class q, the attribution equation is listed below.
c_t^q = W_q [(h_t - a_t ⊙ h_{t-1}) ⊙ ∏_{k=t+1}^{T} a_k]    (Eq. 10)
[0119] The above formulation in Eq. 10 within the brackets is the elementwise multiplication of two terms (Hadamard product). The left term (h_t - a_t ⊙ h_{t-1}) denotes the updating evidence from time t - 1 to t, i.e., the contribution to class q by the stage input X_t. The right term (∏_{k=t+1}^{T} a_k) represents the forgetting mechanism of the RNN: the evidence that an RNN-like construct has gathered at time step t gradually diminishes as time increases from t + 1 to the final time step T.
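Under the stated assumptions (h_0 = 0, a known updating vector a_t per step, and a linear output layer W_q), Eq. 10 can be checked numerically. The sketch below uses random synthetic vectors as stand-ins for a trained RNN's internals; a useful sanity property of the decomposition is that the per-stage contributions sum exactly to the logit z_q = W_q h_T.

```python
import numpy as np

def backward_attribution(h, a, w_q):
    """Per-stage contributions c_t to logit z_q (Eq. 10):
    c_t = w_q . [(h_t - a_t ⊙ h_{t-1}) ⊙ prod_{k=t+1..T} a_k].
    h: hidden-state vectors h_1..h_T (h_0 assumed zero);
    a: updating vectors a_1..a_T; w_q: output weight vector for class q."""
    T, dim = len(h), len(w_q)
    contributions = []
    h_prev = np.zeros(dim)
    for t in range(T):
        update = h[t] - a[t] * h_prev           # evidence added at step t
        forget = np.ones(dim)
        for k in range(t + 1, T):               # evidence decays through later steps
            forget *= a[k]
        contributions.append(float(w_q @ (update * forget)))
        h_prev = h[t]
    return contributions

rng = np.random.default_rng(0)
T, dim = 5, 3
h = [rng.normal(size=dim) for _ in range(T)]
a = [rng.uniform(0.5, 0.9, size=dim) for _ in range(T)]
w_q = rng.normal(size=dim)
c = backward_attribution(h, a, w_q)
z_q = float(w_q @ h[-1])
assert abs(sum(c) - z_q) < 1e-9                 # contributions decompose the logit exactly
print(max(range(T), key=lambda t: abs(c[t])))   # stage with the largest attributed impact
```

The stage with the largest |c_t| is the candidate root cause; in the pipeline setting each time step t corresponds to a pipeline stage, as described in the learning flow below.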
[0120] Eq. 10 is used to calculate backward attribution in the following learning flow as follows.
[0121] FIG. 10 illustrates an example for calculating backward attribution, in accordance with an example implementation. At first, the input KPIs are identified at 1000 and the output KPIs are identified at 1001. At 1002, the flow establishes the recurrent structure to match the underlying (e.g., assembly) line structure. In example implementations, Eqs. 2 to 10 are used to get the inferred output and to calculate the attribution (explanation). Inference is used to perform the prediction before the event has occurred. Using the prediction, the attribution is calculated in reverse to identify the root cause. Thus, the operator can remediate the issue to prevent the unfavorable predicted outcome from actually occurring.
[0122] At 1003, the flow computes the time step t-n to t outputs from inputs Q at t-n to Q at t using the equations described herein. At 1004, the flow computes and reconciles outputs h at t-n to h at t against the system outputs according to the equations herein. At 1005, the flow computes h1 to h6 from Q1 to Q6 and compares outputs h1 to h6 according to the equations herein. At 1006, the flow computes output h7 and attributes the h7 output to Q1 through Q6 according to the equations described herein. At 1008, the backward attribution is generated.
[0123] To execute the learning algorithm for the backward attribution, at first the logical structure of the process pipeline and hierarchy is created. Secondly, the sensors/KPIs that apply to each stage are identified. Next, the algorithm builds a model/solution for each stage and connected asset. Then, the output of each model at the prior stage serves as input to the model at the next stage by following the process pipeline and hierarchy. The model output can be considered as derived features and a prediction for the next stage. The sensor/KPI data can be input to each asset/node in the process pipeline. The output is then calculated backward to attribute to prior stages. The input/output pair Q/h could be Quality/Quality, Quality/Cycle time, or Quality/Remaining useful life, as multi-output, in accordance with the desired implementation.
[0124] Accordingly, the example implementations described herein can use an RNN structure to match the physical process, with each node of the RNN structure associated with a sub-process within the physical process. For values computed at each stage of the RNN structure from the output of the prior stage and the current input to the stage, backward attribution can be executed for each stage to identify the root cause through temporal steps. Because the backward attribution algorithm calculates per time step, the algorithm is executed across the entire length of the process pipeline stages, which are treated as equal time steps, to determine which of the stages has the most impact on the outcome (e.g., beyond a threshold, or the highest-impacting stage). Depending on the desired implementation, the backward attribution can be combined with the forward prediction as described herein.
[0125] FIG. 11 illustrates an example of the multi-input structure for forward prediction, in accordance with an example implementation. The multi-input structure for backward attribution is similar to that of FIG. 6. The multi-input structure can conduct multiple analyses at the same time. The output could involve different targeted KPIs. Each input of the connected asset at a pipeline stage can be associated with its corresponding model outputs as a vector containing risk scores from its anomaly detection, failure detection, remaining useful life, failure prediction, and so on; or could be sensor measurements (e.g., vibration, temperature, pressure as a vector).
[0126] Multi-input structures can be used for complicated systems such as robotic arms, in which a quality value alone may not be sufficient to explain what is occurring in the robotic machine. In an example, the input can involve characteristic features of the robotic status, and those features can be used in the form of a vector. In an example, the failure risk score of a robotic arm, which includes the operation history of the robotic arm, remaining useful life, and so on, can be used as an example of the input vector. Further, the multi-input structures allow for the creation of multiple parallel analyses, processing each of the different types of features in the input vector for a particular process.
[0127] The learning algorithm description is as follows. At first, the learning algorithm creates the logical structure of the process pipeline and hierarchy. Secondly, the algorithm identifies the sensors that apply to each stage. Then, the learning algorithm builds model(s)/solution(s) for each stage and connected asset. Then the output of each model at the prior stage serves as input to the model at the next stage by following the process pipeline and hierarchy. The model output can be deemed as derived features and a prediction for the next stage. Then the sensor/KPI data can be input to each asset/node in the process pipeline and connected asset. The output is then calculated to attribute to prior stages. In the example of FIG. 11, the input/output pair V/h could be (Anomaly risk score, Remaining useful life score, Failure risk score) as multi-inputs paired with a Quality output.
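The multi-input V/h pairing can be sketched as follows. This is an illustrative assumption, not the source's model: each stage input V_t is a vector of per-asset risk scores, and a hypothetical linear stage model collapses it to a scalar quality-style output h_t while blending in the inherited output h_{t-1}; the weights are arbitrary.

```python
def stage_model(v, h_prev, weights, carry=0.5):
    """Hypothetical multi-input stage model: collapse the risk-score vector V_t
    to a scalar and blend it with the inherited output h_{t-1}."""
    score = sum(w * x for w, x in zip(weights, v))
    return carry * h_prev + (1.0 - carry) * score

# V_t = (anomaly risk, RUL score, failure risk) for the connected asset at each stage.
weights = (0.5, 0.2, 0.3)
V = [(0.1, 0.9, 0.05), (0.3, 0.8, 0.2), (0.2, 0.85, 0.1)]
h = 0.0
for v in V:
    h = stage_model(v, h, weights)
print(round(h, 3))  # quality-style output after the last stage
```

The same recurrence as Eq. 1 applies; only the per-stage input changed from a scalar Q_t to a vector V_t.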
[0128] FIG. 12 illustrates an example for calculating backward attribution for multi-input structures, in accordance with an example implementation. The flow identifies the input KPI vector at 1200 and the output KPI at 1201. At 1202, the flow establishes the recurrent structure to match the underlying (e.g., assembly/production) line structure as illustrated in FIG. 11. At 1203, the flow computes the time step t-n to t outputs from inputs V at t-n to V at t. At 1204, the flow computes and reconciles outputs h at t-n to h at t against the system outputs. At 1205, the flow computes h1 to h6 from V1 to V6 and compares outputs h1 to h6. At 1206, the flow computes output h7. At 1207, the flow attributes the h7 output to V1 to V6. At 1208, the flow generates the attribution. The computations can be conducted in accordance with the equations described herein.
[0129] FIG. 13 illustrates an example of forward prediction for the subcomponent multi-inputs structure with multi-level recursion, in accordance with an example implementation. This structure can conduct multiple levels of analysis. The output could be different targeted KPIs. Each input of the connected asset at a pipeline stage can be associated with its corresponding subcomponent model outputs as a vector containing risk scores from its anomaly detection, failure detection, remaining useful life, failure prediction, and so on; or could be sensor measurements (e.g., vibration, temperature, pressure as a vector). This decomposing effort can be conducted recursively. [0130] FIG. 14 illustrates an example expanded view of the forward prediction for the subcomponent multi-inputs structure with multi-level recursion, in accordance with an example implementation. The backward attribution for the subcomponent multi-inputs structure with multi-level recursion is the same as that illustrated in FIG. 6. FIG. 15 illustrates an expanded view of the backward attribution for the subcomponent multi-inputs structure with multi-level recursion, in accordance with an example implementation. The structures are similar to those of FIGS. 5 and 6, only reconfigured to facilitate the multi-input structure and multi-level recursion.
[0131] FIG. 16 illustrates an example calculation of the backward attribution for the subcomponent multi-inputs structure with multi-level recursion, in accordance with an example implementation. The flow is similar to that of FIG. 10. At first, the input KPI vector is identified at 1600, and the output KPI(s) are identified at 1601. At 1602, the flow establishes the recurrent structure to match the underlying (e.g., assembly/production) line structure. At 1603, the flow computes the time step t-n to t outputs from inputs Q_{t-n} of f(A) at t-n to Q_t of f(A) at t. At 1604, the training is initiated to compute and reconcile outputs h_{t-n} at t-n to h_t at t against the system outputs. At 1605, the flow computes h1 at t1 to h6 at t6 from Q1 of f(A1) at t1 to Q6 of f(A6) at t6 and compares outputs h1 to h6. At 1606, the flow computes output h7. At 1607, the flow attributes the h7 output to Q of f(A) at t1 to Q of f(A) at t6. At 1608, the flow generates the attribution.
[0132] FIG. 17 illustrates a flow for the expanded view of the recursive approach for backwards attribution, in accordance with an example implementation. The flow is the same of FIG. 16, but executed in recursive form as illustrated in FIG. 17.
[0133] The learning algorithm can operate as follows. At first, the learning algorithm creates the logical structure of the process pipeline and hierarchy. Secondly, the learning algorithm identifies the sensors that are applied to each stage. Then, the learning algorithm builds model(s)/solution(s) for each stage and connected asset. Next, the learning algorithm computes the function to match the subcomponent's anomaly risk score to the quality of the assembly line stage, Q = f(A_sub). Subsequently, the learning algorithm computes the subcomponent's anomaly risk score via heuristic solutions.
[0134] The output of each model at the prior stage and the current stage components Q = f(A) serves as input to the model at the next stage by following the process pipeline and connected asset. The model output can be deemed as derived features and a prediction for the next stage. Sensor/KPI data can also be input to each asset/node in the process pipeline and connected asset in a vector format. From the examples of FIGS. 12 to 16, the h7 output is then calculated to attribute to the prior stages Q = f(A) at prior time steps. The input/output pair Q = f(A_sub)/h could be the assembly line stage's subcomponent (anomaly risk score) as input to the assembly line quality output.
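The recursive Q = f(A_sub) roll-up can be sketched as follows. The component tree, its anomaly-risk values, and the choice of max as the aggregation function are all illustrative assumptions (the source does not prescribe how f aggregates subcomponent scores); only the recursive decomposition over the asset hierarchy reflects the described structure.

```python
def component_score(node):
    """Recursively roll subcomponent anomaly risk up to its parent (Q = f(A_sub)).
    A leaf carries its own anomaly risk; a parent here takes the worst child risk
    (max is an illustrative aggregation, not prescribed by the source)."""
    if "children" not in node:
        return node["anomaly_risk"]
    return max(component_score(child) for child in node["children"])

# Hypothetical robotic-arm asset hierarchy with nested subcomponents.
robot_arm = {
    "name": "robot_arm_3",
    "children": [
        {"name": "gripper", "anomaly_risk": 0.1},
        {"name": "joint_2", "children": [
            {"name": "motor", "anomaly_risk": 0.7},
            {"name": "encoder", "anomaly_risk": 0.2},
        ]},
    ],
}
q_input = component_score(robot_arm)
print(q_input)  # 0.7: the motor's risk dominates the stage-level input
```

The rolled-up value is what the stage's pipeline model would consume as its Q input, and the recursion can descend as many levels as the asset hierarchy requires.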
[0135] Examples of algorithms that could be used in this forward prediction effort are as follows. Algorithms such as RNN, LSTM, transformer, heuristic models, exponential smoothing models, ARIMA/SARIMA, and linear regression can be used. For single-level multi-output structures, the input for the stages can be quality, and the output (t+1) can also be quality, with no translation layer needed. For single-level multi-input structures, the input for the stages can be a vector, and the output (t+1) can also be quality, with no translation layer needed. For the multi-level subcomponent multi-inputs structure with multi-level recursion, RNN, LSTM, and transformer use an autoencoder for the translation layer, heuristic models use heuristics for the translation layer, exponential smoothing models use principal component analysis (PCA) as the translation layer, ARIMA/SARIMA uses independent component analysis (ICA) as the translation layer, and linear regression uses a multi-dimension min/max scaler as the translation layer. The input for the stages can be a vector, and the output (t+1) can also be quality.
[0136] The algorithms that could be used in the backward attribution effort are the same as for the forward prediction, with the following caveats. RNN, LSTM, and transformer use back propagation to facilitate the attribution, whereas heuristic models, exponential smoothing models, ARIMA/SARIMA, and linear regression use variable importance.
[0137] The example implementations described herein can provide various advantages over the related art. For example, the example implementations utilize end-to-end learning schemes for the process pipeline and hierarchy by utilizing the physical and/or logical relationships and sensor and/or simulated data to build a digital twin. This digital twin can help to achieve better prediction performance and solutions for the given task(s) or KPIs for each process pipeline and connected asset. Further, example implementations provide comprehensive outcome prediction and event attribution of the whole system in the digital twin and can prioritize the tasks to optimize accordingly. The example implementations can also help fine-tune the solutions for each connected asset. Additionally, by using attribution, the example implementations introduce an explaining approach for the solution and results based on the process pipeline and its connected assets at different levels. Example implementations introduce three knowledge graph systems in the solution architecture to represent and store the process pipeline, connected asset/hierarchy, and the information needed to execute the solutions. Example implementations help resolve the relationships among process pipeline stages, connected assets, and asset hierarchies accordingly. Further, the example implementations can help calibrate, refine, and optimize the forward prediction and backward attribution parameters via the Ray-based approach in the digital twin environment.
[0138] The example implementations can thereby generate knowledge content for optimizing operation and prognosis, continuously optimizing based on the production pipeline and its subsystems recursively. It is not a failure-driven solution only; it continuously optimizes (which also includes failure cases).
[0139] FIG. 18 illustrates a system involving a plurality of assets networked to a management apparatus, in accordance with an example implementation. One or more assets 1801 are communicatively coupled to a network 1800 (e.g., local area network (LAN), wide area network (WAN)) through the corresponding on-board computer or Internet of Things (IoT) device of the assets 1801, which is connected to a management apparatus 1802. The management apparatus 1802 manages a database 1803, which contains historical data collected from the assets 1801, and also facilitates remote control of each of the assets 1801. In alternate example implementations, the data from the assets can be stored to a central repository or central database, such as proprietary databases that intake data, or systems such as enterprise resource planning systems, and the management apparatus 1802 can access or retrieve the data from the central repository or central database. Asset 1801 can involve any physical system for use in a physical process such as an assembly line or production line, such as but not limited to air compressors, lathes, robotic arms, and so on, in accordance with the desired implementation. The data provided from the sensors of such assets 1801 can serve as the data flows described herein upon which analytics can be conducted.
[0140] The system of FIG. 18 can involve the underlying physical system upon which the physical process can be implemented. Depending on the desired implementation, the physical system and the physical process can be represented by a solution/asset hierarchy as is known in the art and as described in FIG. 7. In an example implementation of a production line of a truck that can be used as the subject of the system of FIG. 18 and the represented digital twin, the physical process can involve two parts: the assets 1801 along with their hierarchy, and the physical process to assemble the truck. In such an example implementation, the forward prediction of the physical process can be to predict the outcome of the production line (e.g., the quality of the assembled truck or the efficiency of the production line itself).
[0141] FIG. 19 illustrates an example computing environment with an example computer device suitable for use in some example implementations, such as a management apparatus 1802 as illustrated in FIG. 18, or as an on-board computer of an asset 1801. The computing environment can be used to facilitate implementation of the architectures illustrated in FIGS. 1 and 4 to 9. Further, any of the example implementations described herein can be implemented based on the architectures, APIs, microservice systems, and so on as illustrated in FIGS. 1 and 4 to 9. Computer device 1905 in computing environment 1900 can include one or more processing units, cores, or processors 1910, memory 1915 (e.g., RAM, ROM, and/or the like), internal storage 1920 (e.g., magnetic, optical, solid state storage, and/or organic), and/or I/O interface 1925, any of which can be coupled on a communication mechanism or bus 1930 for communicating information or embedded in the computer device 1905. I/O interface 1925 is also configured to receive images from cameras or provide images to projectors or displays, depending on the desired implementation.
[0142] Computer device 1905 can be communicatively coupled to input/user interface 1935 and output device/interface 1940. Either one or both of input/user interface 1935 and output device/interface 1940 can be a wired or wireless interface and can be detachable. Input/user interface 1935 may include any device, component, sensor, or interface, physical or virtual, that can be used to provide input (e.g., buttons, touch-screen interface, keyboard, a pointing/cursor control, microphone, camera, braille, motion sensor, optical reader, and/or the like). Output device/interface 1940 may include a display, television, monitor, printer, speaker, braille, or the like. In some example implementations, input/user interface 1935 and output device/interface 1940 can be embedded with or physically coupled to the computer device 1905. In other example implementations, other computer devices may function as or provide the functions of input/user interface 1935 and output device/interface 1940 for a computer device 1905.
[0143] Examples of computer device 1905 may include, but are not limited to, highly mobile devices (e.g., smartphones, devices in vehicles and other machines, devices carried by humans and animals, and the like), mobile devices (e.g., tablets, notebooks, laptops, personal computers, portable televisions, radios, and the like), and devices not designed for mobility (e.g., desktop computers, other computers, information kiosks, televisions with one or more processors embedded therein and/or coupled thereto, radios, and the like).
[0144] Computer device 1905 can be communicatively coupled (e.g., via I/O interface 1925) to external storage 1945 and network 1950 for communicating with any number of networked components, devices, and systems, including one or more computer devices of the same or different configuration. Computer device 1905 or any connected computer device can be functioning as, providing services of, or referred to as a server, client, thin server, general machine, special-purpose machine, or another label.
[0145] I/O interface 1925 can include, but is not limited to, wired and/or wireless interfaces using any communication or I/O protocols or standards (e.g., Ethernet, 802.11x, Universal Serial Bus, WiMax, modem, a cellular network protocol, and the like) for communicating information to and/or from at least all the connected components, devices, and network in computing environment 1900. Network 1950 can be any network or combination of networks (e.g., the Internet, local area network, wide area network, a telephonic network, a cellular network, satellite network, and the like).
[0146] Computer device 1905 can use and/or communicate using computer-usable or computer-readable media, including transitory media and non-transitory media. Transitory media include transmission media (e.g., metal cables, fiber optics), signals, carrier waves, and the like. Non-transitory media include magnetic media (e.g., disks and tapes), optical media (e.g., CD ROM, digital video disks, Blu-ray disks), solid state media (e.g., RAM, ROM, flash memory, solid-state storage), and other non-volatile storage or memory.
[0147] Computer device 1905 can be used to implement techniques, methods, applications, processes, or computer-executable instructions in some example computing environments. Computer-executable instructions can be retrieved from transitory media, and stored on and retrieved from non-transitory media. The executable instructions can originate from one or more of any programming, scripting, and machine languages (e.g., C, C++, C#, Java, Visual Basic, Python, Perl, JavaScript, and others).
[0148] Processor(s) 1910 can execute under any operating system (OS) (not shown), in a native or virtual environment. One or more applications can be deployed that include logic unit 1960, application programming interface (API) unit 1965, input unit 1970, output unit 1975, and inter-unit communication mechanism 1995 for the different units to communicate with each other, with the OS, and with other applications (not shown). The described units and elements can be varied in design, function, configuration, or implementation and are not limited to the descriptions provided. Processor(s) 1910 can be in the form of hardware processors such as central processing units (CPUs) or in a combination of hardware and software units.
[0149] In some example implementations, when information or an execution instruction is received by API unit 1965, it may be communicated to one or more other units (e.g., logic unit 1960, input unit 1970, output unit 1975). In some instances, logic unit 1960 may be configured to control the information flow among the units and direct the services provided by API unit 1965, input unit 1970, output unit 1975, in some example implementations described above. For example, the flow of one or more processes or implementations may be controlled by logic unit 1960 alone or in conjunction with API unit 1965. The input unit 1970 may be configured to obtain input for the calculations described in the example implementations, and the output unit 1975 may be configured to provide output based on the calculations described in example implementations.
[0150] Processor(s) 1910 can be configured to execute instructions for a method for a digital twin of a physical process, the digital twin configured with forward prediction of the physical process, the instructions involving identifying key performance indicators (KPIs) from the data architecture of the digital twin; defining output KPIs indicative of quality or efficiency for the forward prediction from the vector input to the digital twin and the identified KPIs; generating a recurrent neural network (RNN) structure to match the physical process, each node of the RNN structure associated with a sub-process within the physical process; and computing each stage of the RNN structure from output of a prior stage and current input to the each stage to generate values for the output KPIs as the forward prediction as described, for example, in FIGS. 5 to 10.
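As a non-limiting illustration of the stage computation described above, the following sketch computes each stage of an RNN structure from the prior stage's output and the current stage's vector input, emitting one output KPI value per stage. The weight matrices `W_in` and `W_rec`, the tanh activation, and the mean-based KPI readout are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def forward_predict(stage_inputs, W_in, W_rec, b):
    """Compute each RNN stage from the prior stage's output and the
    current stage's vector input, producing one output KPI per stage."""
    h = np.zeros(W_rec.shape[0])   # prior-stage output; zero before the first stage
    kpis = []
    for x in stage_inputs:         # one vector input per sub-process/stage
        h = np.tanh(W_in @ x + W_rec @ h + b)  # current stage state
        kpis.append(float(h.mean()))           # illustrative scalar KPI readout
    return kpis
```

Because each stage consumes the previous stage's output, the sequence of KPI values mirrors the sequential flow of the physical process.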
[0151] Processor(s) 1910 can be configured to execute instructions for a method for a digital twin of a physical process, the method involving aligning the computation of the each stage with a data set of the physical process; and adjusting the RNN structure based on a difference between the computation and the data set as illustrated in FIGS. 7 and 8. The data set can be simulated from the physics model of the physical process, or it can be the actual historical data set in accordance with the desired implementation. As illustrated in FIG. 7, there can be two types of data sets; one is read from physical sensors and the other is from outputs of simulation. For example, in a truck manufacturing system, there may not be any sensors placed to detect the truck axle torque, so the mathematical expression of the wheel torque as a function of the engine torque can be used as a replacement. The torque distribution along the axle can be calculated, and once the force is understood, the metal fatigue can be calculated.
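The alignment and adjustment step can be sketched, in a non-limiting way, as computing per-stage residuals between the digital twin's computation and the data set (sensor readings or simulated outputs), then correcting per-stage parameters in proportion to those residuals. The proportional update and learning rate are illustrative assumptions; a full implementation would retrain the RNN structure instead.

```python
def alignment_residuals(predicted, observed):
    """Per-stage difference between the digital twin's stage computation
    and the data set (physical sensors or simulation outputs)."""
    return [p - o for p, o in zip(predicted, observed)]

def adjust_stage_weights(weights, residuals, lr=0.1):
    """Illustrative proportional adjustment of per-stage weights based on
    each stage's residual (a stand-in for adjusting the RNN structure)."""
    return [w - lr * r for w, r in zip(weights, residuals)]
```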
[0152] Processor(s) 1910 can be configured to execute instructions for a method for a digital twin of a physical process, the method involving generating the digital twin from a solution hierarchy or an asset hierarchy of the physical process and an associated artificial intelligence (AI) or physics model configured to model a corresponding one of the each sub-process of the physical process; wherein the generating the RNN structure to match the physical process comprises mapping the each node to the each sub-process of the physical process as illustrated in FIGS. 4 and 10-15.
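The mapping of hierarchy to structure can be illustrated, without limitation, as enumerating the ordered sub-processes of the hierarchy into one RNN node each, carrying along the associated AI or physics model. The dictionary-based node representation here is an assumption for illustration only.

```python
def build_rnn_structure(hierarchy):
    """Map each sub-process in a solution/asset hierarchy to one RNN node,
    preserving the sequential order of the physical process. `hierarchy` is
    an ordered list of (sub_process_name, associated_model) pairs."""
    return [{"node": i, "sub_process": name, "model": model}
            for i, (name, model) in enumerate(hierarchy)]
```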
[0153] Processor(s) 1910 can be configured to execute instructions for a method for a digital twin of a physical process, the method involving executing the forward prediction to generate the value of the output KPIs; executing backward attribution for the each stage of the RNN structure based on the values of the output KPIs to determine ones of the each stage to be adjusted; and generating a recommendation to adjust the determined ones of the each stage as illustrated in FIGS. 7 to 17. Depending on the desired implementation, the recommendation can be based on the attribution result which shows which variables in the prior stages are the leading factors to the output variables in the current or future stages.
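As a non-limiting sketch of backward attribution, the contribution of each stage to the final output KPI can be estimated by perturbing that stage's inputs and observing the change in the KPI; the stage with the strongest attribution is then the candidate for the adjustment recommendation. Finite-difference perturbation is used here only for illustration; a gradient-based attribution could be used instead.

```python
def backward_attribution(kpi_fn, stage_inputs, eps=1e-4):
    """Estimate each stage's contribution to the final output KPI by
    finite-difference perturbation of that stage's inputs."""
    base = kpi_fn(stage_inputs)
    scores = []
    for i in range(len(stage_inputs)):
        perturbed = [list(s) for s in stage_inputs]
        perturbed[i] = [v + eps for v in perturbed[i]]
        scores.append((kpi_fn(perturbed) - base) / eps)
    return scores  # larger |score| => stronger leading factor

def recommend_stage(scores):
    """Recommend adjusting the stage with the strongest attribution."""
    return max(range(len(scores)), key=lambda i: abs(scores[i]))
```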
[0154] Processor(s) 1910 can be configured to execute instructions for a method for a digital twin of a physical process, the method involving computing a weight from previous output to determine ones of the each stage that incurred a change to the values of the output KPIs over a threshold, as illustrated in FIGS. 7 to 18.
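The threshold check above can be sketched, without limitation, as scanning the sequence of stage KPIs and flagging every stage whose output changed from the previous output by more than the threshold:

```python
def stages_over_threshold(kpis, threshold):
    """Flag the stages whose output KPI changed from the previous
    stage's output by more than the given threshold."""
    flagged = []
    prev = kpis[0]
    for i, k in enumerate(kpis[1:], start=1):
        if abs(k - prev) > threshold:
            flagged.append(i)
        prev = k
    return flagged
```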
[0155] Processor(s) 1910 can be configured to execute instructions for a method for a digital twin of a physical process, wherein the vector input can involve one or more of robot quality, material quality, configuration quality, or operator quality. Robot quality can be a score, set as known in the art, that indicates how well a robot has conducted a task, and can be some function involving any or a combination of anomaly risk, remaining useful life, and failure risk as illustrated in FIGS. 10 to 17.
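One non-limiting way to express such a score is a weighted combination of the three risk factors; the particular weights and the linear form below are assumptions for illustration, not a prescribed scoring function.

```python
def robot_quality(anomaly_risk, rul_fraction, failure_risk,
                  weights=(0.4, 0.3, 0.3)):
    """Combine anomaly risk, remaining-useful-life fraction, and failure
    risk into a single quality score in [0, 1]; all inputs are in [0, 1],
    where higher risk lowers the score and a longer remaining life raises it."""
    wa, wr, wf = weights
    return wa * (1 - anomaly_risk) + wr * rul_fraction + wf * (1 - failure_risk)
```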
[0156] Processor(s) 1910 can be configured to execute instructions for a method for a digital twin of a physical process, wherein the method is instantiated in a solution architecture that can involve a digital twin configured with a machine learning model and a physics model of the physical process, the digital twin executing a simulation from the physics model of the physical process, the machine learning model constructed from a library of machine learning algorithms to generate the RNN structure and to facilitate backward attribution as described in the example algorithms herein and as illustrated in FIG. 4 and FIG. 7. The library of machine learning algorithms can be used to create the machine learning model via selection, autoML, and so on in accordance with the desired implementation. As illustrated in FIG. 4 and FIG. 7, the solution architecture can include an ML model and a physics model, with the simulation executed from the physics model of the physical process. In an example, there can be multiple levels, wherein data and data simulation are the first several levels, the next level being the digital twin simulation, and the next level being the analytics model, which models the output of the digital twin simulation into anomaly detection, failure prediction, and so on.
[0157] In any of the example implementations described herein, the current input to the each stage can involve a vector composed of a plurality of different features; wherein the computing the each stage of the RNN structure involves executing multiple parallel analyses for each of the plurality of different features; wherein the output of the each stage is a single value KPI as illustrated in FIGS. 6 to 17. The different features can involve the variables in the vector input, such as the anomaly, RUL, and so on, as well as statistical features derived from the data set such as min, max, average, and so on in accordance with the desired implementation. As illustrated in FIG. 7, multiple parallel analyses can be conducted for each of the features by Ray Digital Twin Computation Environment 760.
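A non-limiting sketch of the per-feature parallel analysis is shown below: one analysis function runs per feature in parallel, and the results are reduced to the stage's single-value KPI. The thread-pool execution and the mean-based reduction are illustrative assumptions standing in for a distributed computation environment.

```python
from concurrent.futures import ThreadPoolExecutor

def stage_kpi(feature_vector, analyses):
    """Run one analysis per feature in parallel, then reduce the results
    to a single-value KPI for the stage (illustrative reduction: mean).
    `analyses` is one callable per feature in `feature_vector`."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda fa: fa[1](fa[0]),
                                zip(feature_vector, analyses)))
    return sum(results) / len(results)
```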
[0158] In any of the example implementations described herein, input to one layer of the RNN structure can be different from input to another layer of the RNN structure as illustrated in FIGS. 15 and 16. For example, one layer can involve one variable (e.g., quality of material), another layer can involve the material vendor, and so on in accordance with the desired implementation.
[0159] In any of the example implementations described herein, the physical process can be a production / assembly line as illustrated in FIG. 18, wherein the RNN structure is mapped to the structure of the production line to facilitate both forward prediction and backward attribution as illustrated in FIGS. 10 to 17.

[0160] Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations within a computer. These algorithmic descriptions and symbolic representations are the means used by those skilled in the data processing arts to convey the essence of their innovations to others skilled in the art. An algorithm is a series of defined steps leading to a desired end state or result. In example implementations, the steps carried out require physical manipulations of tangible quantities for achieving a tangible result.
[0161] Unless specifically stated otherwise, as apparent from the discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” “displaying,” or the like, can include the actions and processes of a computer system or other information processing device that manipulates and transforms data represented as physical (electronic) quantities within the computer system’s registers and memories into other data similarly represented as physical quantities within the computer system’s memories or registers or other information storage, transmission or display devices.
[0162] Example implementations may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may include one or more general-purpose computers selectively activated or reconfigured by one or more computer programs. Such computer programs may be stored in a computer readable medium, such as a computer-readable storage medium or a computer-readable signal medium. A computer-readable storage medium may involve tangible mediums such as, but not limited to, optical disks, magnetic disks, read-only memories, random access memories, solid state devices and drives, or any other types of tangible or non-transitory media suitable for storing electronic information. A computer readable signal medium may include mediums such as carrier waves. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Computer programs can involve pure software implementations that involve instructions that perform the operations of the desired implementation.
[0163] Various general-purpose systems may be used with programs and modules in accordance with the examples herein, or it may prove convenient to construct a more specialized apparatus to perform desired method steps. In addition, the example implementations are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the techniques of the example implementations as described herein. The instructions of the programming language(s) may be executed by one or more processing devices, e.g., central processing units (CPUs), processors, or controllers.
[0164] As is known in the art, the operations described above can be performed by hardware, software, or some combination of software and hardware. Various aspects of the example implementations may be implemented using circuits and logic devices (hardware), while other aspects may be implemented using instructions stored on a machine-readable medium (software), which if executed by a processor, would cause the processor to perform a method to carry out implementations of the present application. Further, some example implementations of the present application may be performed solely in hardware, whereas other example implementations may be performed solely in software. Moreover, the various functions described can be performed in a single unit, or can be spread across a number of components in any number of ways. When performed by software, the methods may be executed by a processor, such as a general purpose computer, based on instructions stored on a computer-readable medium. If desired, the instructions can be stored on the medium in a compressed and/or encrypted format.
[0165] Moreover, other implementations of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the techniques of the present application. Various aspects and/or components of the described example implementations may be used singly or in any combination. It is intended that the specification and example implementations be considered as examples only, with the true scope and spirit of the present application being indicated by the following claims.

Claims

CLAIMS

What is claimed is:
1. A method for a digital twin of a physical process, the digital twin configured with forward prediction of the physical process, the method comprising: identifying key performance indicators (KPIs) from a data architecture of the digital twin; defining output KPIs indicative of quality or efficiency for the forward prediction from vector input to the digital twin and the identified KPIs; generating a recurrent neural network (RNN) structure to match the physical process, each node of the RNN structure associated with a sub-process within the physical process; and computing each stage of the RNN structure from output of a prior stage and current input to the each stage to generate values for the output KPIs as the forward prediction.
2. The method of claim 1, further comprising: executing the forward prediction to generate the value of the output KPIs; executing backward attribution for the each stage of the RNN structure based on the values of the output KPIs to determine ones of the each stage to be adjusted; and generating a recommendation to adjust the determined ones of the each stage.
3. The method of claim 1, further comprising:
aligning the computation of the each stage with a data set of the physical process; and adjusting the RNN structure based on a difference between the computation and the data set.
4. The method of claim 1, further comprising: generating the digital twin from a solution hierarchy or an asset hierarchy of the physical process and an associated artificial intelligence (AI) or physics model configured to model a corresponding one of each sub-process of the physical process; wherein the generating the RNN structure to match the physical process comprises mapping the each node to the each sub-process of the physical process.
5. The method of claim 1, further comprising: computing a weight from previous output to determine ones of the each stage that incurred a change to the values of the output KPIs over a threshold.
6. The method of claim 1, wherein the vector input comprises one or more of robot quality, material quality, configuration quality, or operator quality.
7. The method of claim 1, wherein the method is instantiated in a solution architecture comprising a digital twin configured with a machine learning model and a physics model of
the physical process, the digital twin executing a simulation from the physical model of the physical process, the machine learning model constructed from a library of machine learning algorithms to generate the RNN structure and to facilitate backward attribution.
8. The method of claim 1, wherein the current input to the each stage comprises a vector composed of a plurality of different features; wherein the computing the each stage of the RNN structure comprises executing multiple parallel analysis for each of the plurality of different features; wherein the output of the each stage is a single value KPI.
9. The method of claim 1, wherein input to one layer of the RNN structure is different from input to another layer of the RNN structure.
10. The method of claim 1, wherein the physical process is a production line, wherein the RNN structure is mapped to a structure of the production line to facilitate both forward prediction and backward attribution.
11. A computer program, storing instructions for a digital twin of a physical process, the digital twin configured with forward prediction of the physical process, the instructions comprising:
identifying key performance indicators (KPIs) from a data architecture of the digital twin; defining output KPIs indicative of quality or efficiency for the forward prediction from vector input to the digital twin and the identified KPIs; generating a recurrent neural network (RNN) structure to match the physical process, each node of the RNN structure associated with a sub-process within the physical process; and computing each stage of the RNN structure from output of a prior stage and current input to the each stage to generate values for the output KPIs as the forward prediction.
12. The computer program of claim 11, the instructions further comprising: executing the forward prediction to generate the value of the output KPIs; executing backward attribution for the each stage of the RNN structure based on the values of the output KPIs to determine ones of the each stage to be adjusted; and generating a recommendation to adjust the determined ones of the each stage.
13. The computer program of claim 11, the instructions further comprising: aligning the computation of the each stage with a data set of the physical process; and adjusting the RNN structure based on a difference between the computation and the data set.
14. The computer program of claim 11, the instructions further comprising: generating the digital twin from a solution hierarchy or an asset hierarchy of the physical process and an associated artificial intelligence (AI) or physics model configured to model a corresponding one of each sub-process of the physical process; wherein the generating the RNN structure to match the physical process comprises mapping the each node to the each sub-process of the physical process.
15. A method for a digital twin of a physical process, the method comprising: generating a recurrent neural network (RNN) structure to match the physical process, each node of the RNN structure associated with a sub-process within the physical process; and computing each stage of the RNN structure from output of a prior stage and current input to the each stage to generate values; executing backward attribution for the each stage of the RNN structure based on the values to determine ones of the each stage to be adjusted; and generating a recommendation to adjust the determined ones of the each stage.
PCT/US2021/065717 2021-12-30 2021-12-30 Digital twin sequential and temporal learning and explaining WO2023129164A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2021/065717 WO2023129164A1 (en) 2021-12-30 2021-12-30 Digital twin sequential and temporal learning and explaining

Publications (1)

Publication Number Publication Date
WO2023129164A1 true WO2023129164A1 (en) 2023-07-06

Family

ID=87000015

Country Status (1)

Country Link
WO (1) WO2023129164A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021108680A1 (en) * 2019-11-25 2021-06-03 Strong Force Iot Portfolio 2016, Llc Intelligent vibration digital twin systems and methods for industrial environments
WO2021245442A1 (en) * 2020-06-02 2021-12-09 Telefonaktiebolaget Lm Ericsson (Publ) Bler target selection for wireless communication session

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117318033A (en) * 2023-09-27 2023-12-29 国网江苏省电力有限公司南通供电分公司 Power grid data management method and system combining data twinning
CN117318033B (en) * 2023-09-27 2024-05-24 国网江苏省电力有限公司南通供电分公司 Power grid data management method and system combining data twinning
CN117237574A (en) * 2023-10-11 2023-12-15 西南交通大学 Task-driven geographical digital twin scene enhancement visualization method and system
CN117237574B (en) * 2023-10-11 2024-03-26 西南交通大学 Task-driven geographical digital twin scene enhancement visualization method and system

Similar Documents

Publication Publication Date Title
Lade et al. Manufacturing analytics and industrial internet of things
US20230336021A1 (en) Intelligent Orchestration Systems for Delivery of Heterogeneous Energy and Power Resources
Saldivar et al. Self-organizing tool for smart design with predictive customer needs and wants to realize Industry 4.0
CN102016730B (en) Autonomous adaptive semiconductor manufacturing
WO2023129164A1 (en) Digital twin sequential and temporal learning and explaining
Vještica et al. Multi-level production process modeling language
Biller et al. Simulation: The critical technology in digital twin development
Jain et al. Digital twin–enabled machine learning for smart manufacturing
CA3211789A1 (en) Computer-implemented methods referring to an industrial process for manufacturing a product and system for performing said methods
Ivanov et al. Multi-disciplinary analysis of interfaces “supply chain event management–RFID–Control theory”
Listl et al. Decision Support on the Shop Floor Using Digital Twins: Architecture and Functional Components for Simulation-Based Assistance
Alexopoulos et al. Machine learning agents augmented by digital twinning for smart production scheduling
Agostinho et al. Explainability as the key ingredient for AI adoption in Industry 5.0 settings
Li et al. Challenges in developing a computational platform to integrate data analytics with simulation-based optimization
Sadeghi et al. Artificial Intelligence and Its Application in Optimization under Uncertainty
US20230289623A1 (en) Systems and methods for an automated data science process
Dui et al. A data-driven construction method of aggregated value chain in three phases for manufacturing enterprises
Rubio et al. Smart manufacturing in a SoSE perspective
Kumari et al. MetaAnalyser-a concept and toolkit for enablement of digital twin
Arévalo et al. Production assessment using a knowledge transfer framework and evidence theory
Biller et al. Simulation-Driven Digital Twins: the Dna of Resilient Supply Chains
Stjepandic et al. Generation and Update of a Digital Twin in a Process Plant
Biller et al. A Practitioner’s Guide to Digital Twin Development
Villamizar et al. Self-Optimizing Smart Control Engineering Enabled by Digital Twins
Silvestri Novel techniques for harnessing symbolic and structured information into machine learning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21970183

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE