US20220075361A1 - Systems and method for management and allocation of network assets - Google Patents

Systems and method for management and allocation of network assets

Info

Publication number
US20220075361A1
US20220075361A1
Authority
US
United States
Prior art keywords
model
operational state
level
collected
observable data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/529,326
Inventor
Rudolph Mappus
Mark Morris
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AT&T Intellectual Property I LP
Original Assignee
AT&T Intellectual Property I LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AT&T Intellectual Property I LP filed Critical AT&T Intellectual Property I LP
Priority to US17/529,326
Assigned to AT&T INTELLECTUAL PROPERTY I, L.P. (Assignors: MAPPUS, RUDOLPH; MORRIS, MARK)
Publication of US20220075361A1
Legal status: Abandoned

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B23/00 - Testing or monitoring of control systems or parts thereof
    • G05B23/02 - Electric testing or monitoring
    • G05B23/0205 - Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
    • G05B23/0218 - Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults
    • G05B23/0224 - Process history based detection method, e.g. whereby history implies the availability of large amounts of data
    • G05B23/024 - Quantitative history assessment, e.g. mathematical relationships between available data; Functions therefor; Principal component analysis [PCA]; Partial least square [PLS]; Statistical classifiers, e.g. Bayesian networks, linear regression or correlation analysis; Neural networks
    • G05B23/0243 - Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults model based detection method, e.g. first-principles knowledge model
    • G05B23/0254 - Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults model based detection method, e.g. first-principles knowledge model based on a quantitative model, e.g. mathematical relationships between inputs and outputs; functions: observer, Kalman filter, residual calculation, Neural Networks
    • G05B23/0259 - Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterized by the response to fault detection
    • G05B23/0283 - Predictive maintenance, e.g. involving the monitoring of a system and, based on the monitoring results, taking decisions on the maintenance schedule of the monitored system; Estimating remaining useful life [RUL]
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 - Complex mathematical operations
    • G06F17/16 - Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06F17/18 - Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis

Definitions

  • the Telcordia TR-332/SR-332 Electronic Reliability Prediction Standard represents a standard practice for estimating mean time between failures (MTBF) for equipment in the telecommunications industry.
  • the Telcordia standard uses component level failure rates to estimate circuit and equipment level failure rates.
  • the standard maintains a list of failure rates for components, and the specification aggregates the component failure rates to estimate MTBF for a circuit or piece of equipment.
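The component-level aggregation described above can be illustrated with a brief sketch. The failure rates below are hypothetical placeholders, not Telcordia SR-332 values; for components treated as a series system, failure rates add, and the equipment MTBF is the reciprocal of the aggregate rate:

```python
# Illustrative sketch only: hypothetical failure rates, not Telcordia values.
# For a series system, component failure rates (failures/hour) add, and the
# equipment-level MTBF is the reciprocal of the aggregate rate.
component_failure_rates = [2e-6, 5e-7, 1.5e-6]  # failures per hour

equipment_rate = sum(component_failure_rates)   # aggregate failure rate
mtbf_hours = 1.0 / equipment_rate               # mean time between failures
print(round(mtbf_hours))  # 250000
```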
  • transition probabilities can become convoluted, affecting model interpretability and accuracy.
  • the ability to improve MTBF predictions enables intelligent asset allocation strategies.
  • the method may include: collecting historical observable data from one or more pieces of equipment of a same type, wherein the historical observable data is collected at different hierarchical levels of the one or more pieces of equipment.
  • the different hierarchical levels may be a component level, a circuit level, and a logical path level.
  • the method may further include collecting operational state indications of the pieces of equipment corresponding to the collected historical observable data; generating, from the collected historical observable data, a set of operational state models, wherein each operational state model corresponds to one of the different hierarchical levels; and generating, from outputs of the set of operational state models, a top-level operational model for the piece of equipment.
  • the top-level operational model may be operable to determine maintenance and replacement timing for the piece of equipment.
  • the operational state indications may include an operational state indication, a degraded state indication, and a failed state indication.
  • the method may further include collecting the historical observable data asynchronously between the different hierarchical levels.
  • the method may include generating a first covariance matrix between outputs of a first hierarchical level operational state model and outputs of a second hierarchical level operational state model; generating a second covariance matrix between the outputs of the second hierarchical level operational state model and outputs of a third hierarchical level operational state model; and generating the top-level operational model using the first covariance matrix and the second covariance matrix as input.
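As a rough sketch of the covariance step in this claim (array names and the use of `numpy.cov` are assumptions for illustration; each first-stage model is taken to emit one state probability estimate per observation window):

```python
import numpy as np

# Hypothetical per-window state probability estimates from the three
# first-stage models (random placeholders for illustration).
rng = np.random.default_rng(0)
m0_out = rng.random(100)  # first hierarchical level (component) outputs
m1_out = rng.random(100)  # second hierarchical level (circuit) outputs
m2_out = rng.random(100)  # third hierarchical level (logical path) outputs

# First covariance matrix: level-1 vs. level-2 outputs.
cov_01 = np.cov(m0_out, m1_out)
# Second covariance matrix: level-2 vs. level-3 outputs.
cov_12 = np.cov(m1_out, m2_out)

# Both matrices could then be flattened and fed to the top-level model.
top_level_input = np.concatenate([cov_01.ravel(), cov_12.ravel()])
print(top_level_input.shape)  # (8,)
```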
  • the method may further include temporally aligning the asynchronously collected historical observable data between the different hierarchical levels.
  • Each of the first hierarchical level operational state model, the second hierarchical level operational state model, and the third hierarchical level operational state model may output a single state probability estimate for a sequence of input observable data.
  • the operational state indications may be correlated to the asynchronously collected historical observable data for one of the different hierarchical levels.
  • the method may further include generating a top-level model output based on a product of a highest probability estimation state from each of the first covariance matrix and the second covariance matrix.
  • the top-level model output may be a probability estimate of a next operational state or a mean time between failure (MTBF) for the piece of equipment.
  • the method may further include collecting the historical observable data synchronously between the different hierarchical levels.
  • the method may further include: generating the top-level operational model using a first hierarchical level operational state model, a second hierarchical level operational state model, and a third hierarchical level operational state model.
  • the method may further include generating a top-level model output based on a product of probability estimation states from outputs of each of the first hierarchical level operational state model, the second hierarchical level operational state model, and the third hierarchical level operational state model.
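A minimal sketch of the synchronous case described above, assuming each layer model reports a probability per candidate state (the state names and numbers are invented for illustration); the top-level estimate is formed from the product of the per-layer estimates:

```python
# Hypothetical per-layer probability estimates (illustrative values only).
states = ["operational", "degraded", "failed"]
m0_probs = {"operational": 0.90, "degraded": 0.08, "failed": 0.02}
m1_probs = {"operational": 0.85, "degraded": 0.10, "failed": 0.05}
m2_probs = {"operational": 0.95, "degraded": 0.04, "failed": 0.01}

# Product of the probability estimation states from the three layer models.
joint = {s: m0_probs[s] * m1_probs[s] * m2_probs[s] for s in states}
total = sum(joint.values())
top_level = {s: p / total for s, p in joint.items()}  # normalized estimate

next_state = max(top_level, key=top_level.get)
print(next_state)  # operational
```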
  • Each operational state model of the set of operational state models may be a machine learning model trained with the historical observable data collected from corresponding hierarchical levels.
  • the top-level operational model may be a machine learning model trained with outputs of the set of operational state models and corresponding operational state indications.
  • the computer-implemented method may include: collecting observable data from the piece of equipment, wherein the observable data is collected at different hierarchical levels of the piece of equipment.
  • the different hierarchical levels may be a component level, a circuit level, and a logical path level.
  • the computer-implemented method may further include determining that the observable data is collected asynchronously between the different hierarchical levels. In response to determining that the observable data is collected asynchronously the computer-implemented method may further include: generating a first covariance matrix between outputs of a first hierarchical level operational state model and outputs of a second hierarchical level operational state model; generating a second covariance matrix between the outputs of the second hierarchical level operational state model and outputs of a third hierarchical level operational state model; and generating the output of the top-level operational model based on a product of a highest probability estimation state from each of the first covariance matrix and the second covariance matrix, wherein the output of the top-level operational model is a probability estimate of a next operational state or a mean time between failure (MTBF) for the piece of equipment.
  • the computer-implemented method may further include determining that the observable data is collected synchronously between the different hierarchical levels. In response to determining that the observable data is collected synchronously, the computer-implemented method may further include generating the output from the top-level operational model based on a product of probability estimation states from outputs of each of a first hierarchical level operational state model, a second hierarchical level operational state model, and a third hierarchical level operational state model.
  • Each operational state model of the set of operational state models may be a machine learning model trained with the historical observable data collected from corresponding hierarchical levels.
  • the top-level operational model may be a machine learning model trained with outputs of the set of operational state models and corresponding operational state indications.
  • the apparatus may include: a memory configured to store program instructions and data and a processor configured to communicate with the memory.
  • the processor may be further configured to execute instructions read from the memory.
  • the instructions may be operable to cause the processor to perform operations including: collecting observable data from a piece of equipment, wherein the observable data is collected at different hierarchical levels of the piece of equipment.
  • the different hierarchical levels may be a component level, a circuit level, and a logical path level.
  • the operations may further include inputting the collected observable data to a predictive model at a set of operational state models corresponding to the different hierarchical levels; generating an output from each operational state model of the set of operational state models, the output being a state probability estimate for each of the different hierarchical levels; and generating, from a top-level operational model, an output based on the outputs of the set of operational state models, wherein the output from the top-level operational model is a probability estimate of a next operational state or a mean time between failure (MTBF) for the piece of equipment.
  • the next operational state may include an operational state indication, a degraded state indication, and a failed state indication.
  • the instructions may be further operable to cause the processor to perform operations including determining that the observable data is collected asynchronously between the different hierarchical levels.
  • the instructions may be further operable to cause the processor to perform operations including: generating a first covariance matrix between outputs of a first hierarchical level operational state model and outputs of a second hierarchical level operational state model; generating a second covariance matrix between the outputs of the second hierarchical level operational state model and outputs of a third hierarchical level operational state model; and generating the output of the top-level operational model based on a product of a highest probability estimation state from each of the first covariance matrix and the second covariance matrix, wherein the output of the top-level operational model is a probability estimate of a next operational state or a mean time between failure (MTBF) for the piece of equipment.
  • the instructions may be further operable to cause the processor to perform operations including determining that the observable data is collected synchronously between the different hierarchical levels.
  • the instructions may be further operable to cause the processor to perform operations including generating the output from the top-level operational model based on a product of probability estimation states from outputs of each of a first hierarchical level operational state model, a second hierarchical level operational state model, and a third hierarchical level operational state model.
  • Each operational state model of the set of operational state models may be a machine learning model trained with the historical observable data collected from corresponding hierarchical levels.
  • the top-level operational model may be a machine learning model trained with outputs of the set of operational state models and corresponding operational state indications.
  • the proposed solution is scalable and enhances the state of the art in estimating MTBF for equipment.
  • the solution also provides a hierarchical latent state model that is interpretable for its estimates.
  • FIG. 1 illustrates a general example of a visual model for a neural network according to various aspects of the present disclosure.
  • FIG. 2 is a block diagram illustrating the hierarchy of layers according to various aspects of the present disclosure.
  • FIG. 3 is a diagram illustrating an example of aligning observations and states between models for different layers when data is collected asynchronously according to various aspects of the present disclosure.
  • FIG. 4 is a flowchart illustrating a method for estimating a next operational state and/or MTBF for a new piece of equipment according to various aspects of the present disclosure.
  • FIG. 5 is a block diagram of an example computing environment with an example computing device according to various aspects of the present disclosure.
  • a MTBF prediction system can improve next operational state and/or MTBF predictions, enable intelligent asset allocation strategies, and improve decommissioning strategies. For example, in network hardware inventory management (e.g., purchasing, allocation, etc.), both model interpretability and accuracy are important for purchasing and positioning replacement equipment to anticipate equipment failures. In addition, the ability to explain why the MTBF prediction system estimates failure states or non-failure states enables improved situational awareness for managers and operations teams for executing intelligent inventory management.
  • the MTBF prediction system models modes of operation in terms of the physical states of components, the circuit level usage states of operation, and the logical and usage states of components. Modeling at these levels improves the characterization of failure estimates and the estimation of failures in general. With sufficient data at each level to model transitions between states at each level, the overall MTBF prediction system can capture details about operations that previous models were not capable of detecting.
  • the MTBF prediction system collects and registers observations (i.e., data from variables that can be measured) from three layers that represent different functions/operations: a component layer, a circuit layer, and a logical path layer.
  • the MTBF prediction system may build latent state (i.e., operational states that cannot be directly measured) models from these observations.
  • latent state models can be used as input to build a top-level latent state model that estimates the next operational state and/or MTBF of the piece of equipment.
  • the top-level latent state model may be referred to herein as a top-level latent state operational model, a top-level model, a top-level operational model, or a top-level hierarchical latent state model depending on context.
  • the MTBF prediction system uses historical data collected from similar pieces of active equipment to inform the models.
  • the MTBF prediction system can use both synchronously and asynchronously collected data for the three first level model layers (i.e., component, circuit, logical path).
  • a top-level latent state model produces MTBF estimates for activity sequences of pieces of equipment and continues training as additional data arrive.
  • FIG. 1 illustrates a visual model 100 for a general example of a neural network according to various aspects of the present disclosure.
  • a neural network may execute a neural network model.
  • a neural network model may also be referred to herein as a machine learning model.
  • the model 100 includes an input layer 104 , a middle layer (i.e., a “hidden” layer) 106 , and an output layer 108 .
  • a neural network implementation can include multiple hidden layers.
  • Each layer includes some number of nodes 102 .
  • the nodes 102 of the input layer 104 may be connected to each node 102 of the hidden layer 106 .
  • the connections may be referred to as weights 110 .
  • Each node 102 of the hidden layer 106 may have a connection or weight 110 with each node 102 of the output layer.
  • the input layer 104 can receive inputs and can propagate the inputs to the hidden layer 106 . Weighted sums computed by the hidden layer 106 (or multiple hidden layers) are propagated to the output layer 108 , which can present final outputs to a user.
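The propagation just described can be sketched in a few lines. The layer sizes, random weights, and tanh activation below are assumptions for illustration, not taken from the disclosure:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(4)          # input layer: 4 nodes
W1 = rng.random((4, 3))    # weights between input and hidden layer (3 nodes)
W2 = rng.random((3, 2))    # weights between hidden and output layer (2 nodes)

hidden = np.tanh(x @ W1)   # weighted sums at the hidden layer, plus activation
output = hidden @ W2       # weighted sums propagated to the output layer
print(output.shape)  # (2,)
```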
  • the neural network illustrated in FIG. 1 is merely exemplary; different and/or additional neural networks, for example, but not limited to, Long Short Term Memory (LSTM) neural networks, feedforward neural networks, radial basis function neural networks, or other types of neural networks, may be used without departing from the scope of the present disclosure.
  • Data may be collected synchronously or asynchronously for the MTBF prediction system.
  • Historical data for training the model layers for a piece of equipment may be obtained from measurements of various parameters collected over time from similar pieces of equipment operating in the field.
  • the data may include operational data used to model physical characteristics, for example, voltages, currents, operating temperature, radio frequency (RF) characteristics, etc., obtained by instrumenting the pieces of equipment with sensors, as well as environmental data, for example, ambient temperature, humidity, vibration, etc.
  • components may have built-in instrumentation for collecting data.
  • Data from instrumented equipment (e.g., smart meters) may be collected along with ambient environment (i.e., “off-equipment”) data, for example, cabinet or room temperature data, input current data, ambient or room RF data, etc.
  • the operational states (e.g., operational, degraded, or failed) of the pieces of equipment corresponding to the operational and environmental data may be collected (e.g., from an equipment status monitor).
  • data may be collected synchronously at the different model layers for a piece of equipment, meaning that the observations in each layer are all collected at the same time.
  • the data may be collected, for example, at a rate of 60 samples per second or another rate.
  • the corresponding operational states of the equipment may be collected.
  • data may be collected asynchronously at the different model layers, meaning that the observations at one model layer are collected at a different time than the observations collected at the other model layers. Collection of the asynchronously collected data should occur in the same time range (i.e., over the same temporal extent) for the model layers of a piece of equipment.
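One simple way to temporally align asynchronously collected observations onto a common grid over the shared time range is to carry the last observation forward; the function below is an illustrative sketch, not a method specified in the disclosure:

```python
from bisect import bisect_right

def align(timestamps, values, grid):
    """For each grid time, return the latest observation at or before it."""
    aligned = []
    for t in grid:
        i = bisect_right(timestamps, t) - 1  # index of last sample <= t
        aligned.append(values[i] if i >= 0 else None)
    return aligned

# Illustrative component-layer samples and a common time grid (seconds).
layer_ts = [0.0, 2.5, 5.1]
layer_vals = ["s0", "s1", "s2"]
grid = [1.0, 3.0, 5.0, 6.0]
print(align(layer_ts, layer_vals, grid))  # ['s0', 's1', 's1', 's2']
```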
  • a piece of equipment may be represented as a hierarchy of three first stage model layers (i.e., component level, circuit level, and logical path level) for modeling.
  • Sensor arrays attached to equipment, built-in equipment instrumentation, and/or ambient instrumentation may be used to collect information about equipment performance at each of the three layers to construct a corresponding top-level hierarchical latent state model.
  • the top-level hierarchical latent state model may be used to determine maintenance and replacement timing (e.g., by estimating a next operational state or MTBF) for the piece of equipment.
  • a latent state model relates a set of observable (i.e., directly measured) variables, for example, voltage or current at a circuit test point, to a set of variables that are not directly observed (i.e., latent variables), for example, an internal operational state of an integrated circuit or a piece of equipment.
  • the latent variables are inferred from the observable variables.
  • the nature of the hierarchical latent state model enables limiting the computations performed by the MTBF prediction system; thus, the resulting inference tool is scalable. Modes of operation in terms of the physical states of components, circuit level usage states of operation, and logical level usage states of operation may be modeled. Modeling at these three layers represents an improvement in the characterization of failure estimates and the estimation of failures in general.
  • FIG. 2 is a block diagram illustrating the hierarchy of model layers according to various aspects of the present disclosure.
  • a piece of equipment for which the operational state and/or MTBF is to be calculated may be represented as three layers: a component layer (m0) 210 , a circuit layer (m1) 220 , and a logical path layer (m2) 230 .
  • the component layer 210 may also be referred to herein as the physical characteristics layer.
  • Observations, i.e., measurements made by various sensors, may be made at each of the three model layers, and, based on the observations, latent state models for each layer may be constructed. Data can be generated from the models of the component level, circuit level, and logical level in conjunction with observable features. With sufficient data at each level to model the transitions between operational states at each level, the MTBF prediction system can capture details about equipment operations that previous models were not capable of detecting while providing the ability to generate explanations of predictions.
  • a latent state model that captures working operational states and failure states may be determined from the collected historical data.
  • the latent state model for a given layer may be based on the observations (i.e., sensor measurement data) from that layer.
  • based on subsequent observations (i.e., new sensor measurement data), the operational state for the equipment may be estimated using a single top-level latent state operational model for the equipment.
  • the single top-level latent state operational model may be generated using the estimates of the current latent states of the hierarchical model layers and the observations from each of the hierarchical model layers.
  • At the first level of the hierarchical model (m0), the component layer 210, physical characteristics of hardware components, for example, integrated circuits, capacitors, resistors, etc., may be modeled.
  • the modeled physical characteristics may include characteristics such as temperature, vibration, friction, etc., that can affect MTBF for hardware components.
  • a first level (m0) model may be developed for each identified component.
  • At the second level of the hierarchical model (m1), the circuit process level 220, input and output characteristics, for example, input voltage and/or current signals, of the hardware may be modeled. Similar data types as collected at the m0 layer may be collected together with the additional inputs and outputs of the circuit.
  • Logical paths as used herein refer to operational circuit paths.
  • a transmitter circuit may operate in a low power mode when communicating with one receiver and may operate in a high power mode when communicating with another receiver.
  • Different logical signal and power paths through the same transmitter circuit may be used for the low and high power modes.
  • Similar data types as collected at the m0 and m1 layers may also be collected as well as the operational hours.
  • logical paths may have different effects on hardware lifetimes. Operational hours of logical paths may be modeled based on the instrumented input and output data. In cases where instrumented data on logical path operational hours is unavailable, logical paths between pairs of components may be modeled using the data collected from m0 (i.e., the component layer 210 ) to estimate operational hours/utilization of groups of components.
  • the hierarchical levels may be represented at different functional/operational levels.
  • the component layer, rather than representing an individual component such as a resistor or capacitor, may represent a printed circuit board (PCB) assembly
  • the circuit layer, rather than representing a single circuit, may represent a module containing several PCBs forming a larger circuit
  • the logical path layer may represent various combinations of functions provided by different PCBs representing a functional path
  • the PCB or module may correspond to a serviceable or replaceable part. Many variations and alternatives may be recognized for defining the various layers.
  • Machine learning models for each of the three hierarchical first stage model layers may be trained using the historical data (e.g., observations and operational states) collected from the other similar pieces of equipment. Using the historical data, the machine learning models may be trained to estimate the state of a component, circuit, or logical path. In the case of the component level model (m0), a separate machine learning model may be trained for each identified component.
  • the MTBF prediction system may collect and register observations from the three first stage model layers: the component layer m0, the circuit layer m1, and the logical path layer m2. The MTBF prediction system then uses these observations to train a top-level latent state model to learn a state model and transitions between states for the equipment level (i.e., top-level) operational model 240. Once trained, the top-level model 240 can produce MTBF estimates for activity sequences of a piece of equipment and continues training as additional new data arrive. When used in conjunction with hardware and logical operational hours data, these features create a representation of an operational state that improves the state of the art in estimating time to failure as well as transitions between states (e.g., operational, degraded, failed).
  • Each of the trained machine learning models can be used to estimate a probability of a next state (e.g., operational or failure) of a corresponding component (m0), circuit (m1), or logical path (m2) for a new piece of equipment using new data (e.g., observations) generated by the new equipment as input to the models. That is, when new data is input for each component at the m0 level, the machine learning models for the m0 level may estimate the probability of a next state for each component. Similarly, when new data is input for each circuit at the m1 level, the machine learning models for the m1 level may estimate the probability of a next state for each circuit.
  • the machine learning models for the m2 level may estimate the probability of a next state for each logical path.
  • the estimated output state probabilities for all the models (e.g., m0, m1, and m2) may be used as input to train the single top-level model 240 to generate the next state probability for the piece of equipment.
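A sketch of how the first-stage outputs might be assembled into training data for the single top-level model 240. The array shapes and names are assumptions, with three candidate states per layer used for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 50                       # number of time steps in the historical record

# Per-time-step state probability estimates from the three layer models.
m0 = rng.random((T, 3))      # component-level estimates
m1 = rng.random((T, 3))      # circuit-level estimates
m2 = rng.random((T, 3))      # logical-path-level estimates

# Stack the estimates into feature vectors; pair with recorded equipment
# states (0=operational, 1=degraded, 2=failed) as training labels.
X = np.hstack([m0, m1, m2])            # top-level training inputs
y = rng.integers(0, 3, size=T)         # corresponding operational states
print(X.shape, y.shape)  # (50, 9) (50,)
```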
  • the MTBF prediction system builds computational models to estimate the latent (i.e., unobservable) operational states at each model layer.
  • the models are trained with the historical data to learn the relationships between the observations (e.g., the instrumented data collected for each layer) and the latent states.
  • Observation vector elements e.g., accelerometer and temperature data collected within a layer
  • the number of latent states to be associated is a parameter supplied to each layer's models when it is trained.
  • The observations may be collected over various time windows (e.g., 10 seconds, 10 minutes, or 10 days).
  • the output of each of the three first stage model layers is a sequence of most likely latent states.
  • the desired output is the operational state with the highest probability given the history of observations.
  • the trained first stage model layers output estimates of the most likely states given a sequence of observations.
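If a first-stage layer is realized as a hidden Markov model, "the most likely states given a sequence of observations" is the classic Viterbi decode. The following minimal sketch is one way such a layer could work; the states, observation symbols, and all probabilities are invented for illustration.

```python
import numpy as np

def viterbi(obs_seq, start_p, trans_p, emit_p):
    """Return the most likely latent state sequence for obs_seq."""
    T = len(obs_seq)
    n_states = len(start_p)
    log_delta = np.log(start_p) + np.log(emit_p[:, obs_seq[0]])
    backptr = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = log_delta[:, None] + np.log(trans_p)  # prev -> next
        backptr[t] = scores.argmax(axis=0)
        log_delta = scores.max(axis=0) + np.log(emit_p[:, obs_seq[t]])
    path = [int(log_delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t][path[-1]]))
    return path[::-1]

# States: 0 = operational, 1 = degraded; observations: 0 = normal
# reading, 1 = anomalous reading (e.g., a temperature excursion).
start = np.array([0.9, 0.1])
trans = np.array([[0.95, 0.05],
                  [0.20, 0.80]])
emit = np.array([[0.9, 0.1],
                 [0.3, 0.7]])
states = viterbi([0, 0, 1, 1, 1], start, trans, emit)
print(states)  # → [0, 0, 1, 1, 1]
```

A run of anomalous readings pulls the decode into the degraded state, which is exactly the kind of latent state sequence the first-stage layers hand upward to the top-level model.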
  • the MTBF prediction system uses the outputs of the first stage model layers as inputs to train the top-level model for state estimation.
  • relations may be established for the observations between the first stage model layers. The relations may be established depending on how the observation data are collected for each first stage model layer in relation to the other first stage model layers.
  • the data may be collected synchronously, meaning that observations for each first stage model layer are all collected at the same time, or the data may be collected asynchronously, meaning that observations in one first stage model layer are collected at a different time than the other first stage model layers.
  • the MTBF prediction system may estimate the equipment level operational state using the outputs of the three first stage model layers.
  • the equipment level model uses different methods of processing the input data to produce operational state estimates.
  • the MTBF prediction system builds the top-level latent state model in the same manner as the first stage model layers.
  • the top-level (i.e., equipment level) latent state model uses the outputs of the first stage model layers as input in conjunction with the operational state.
  • the time range of the data that was used to train the first stage model layers is used to train the top-level model.
  • each first stage model layer outputs a state estimate.
  • As observations arrive at each first stage model, each first stage model layer generates a state estimate for each observation.
  • the state estimates are the input to the top-level model.
  • the top-level model produces a single state estimate for the equipment operation, one state estimate for each observation.
  • the top-level model is trained using historical data similar to, but not the same as, the historical data used to train the first stage models.
  • the training data for the top-level model may additionally include an equipment state label (e.g., normal, impaired, fault). During historical data collection, these states are recorded alongside the collected observation data, for example, from an equipment status monitor.
  • equipment state label e.g., normal, impaired, fault
  • the MTBF prediction system may use a different method to estimate operational states than is used when the data is collected synchronously since there will not be a temporal dependence between the first stage model layers.
  • the model outputs a single state estimate for a sequence of input observations.
  • the first stage models each produce state estimates, but the first stage model outputs are used to build the pair of covariance matrices.
  • the covariance matrices are then the input to the top-level model.
  • the top-level model is also trained using historical data that has the equipment state label.
  • equipment level states are synchronized with the logical level observation data.
  • FIG. 3 is a diagram illustrating an example of aligning observations and states between models for different layers when data is collected asynchronously according to various aspects of the present disclosure.
  • The first level 305 includes measured data (i.e., observations 310 ) and operational states 320 for the first level during a given time period.
  • The second level 325 includes measured data (i.e., observations 330 ) and operational states 340 for the second level 325 .
  • the asynchronously collected observations 330 should span the same temporal extent as the observations 310 collected for the first level 305 .
  • several observations 310 at the first level 305 may be correlated with an observation 330 at the second level 325 .
  • several operational states 320 at the first level 305 may be correlated with an operational state 340 at the second level 325 .
  • an operational state at the first level may be correlated with more than one operational state at the second level.
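The FIG. 3 alignment idea can be sketched concretely: observations at a faster-sampled lower level are grouped under the slower-sampled upper-level observation whose collection window covers them. The timestamps and window sizes below are invented for illustration.

```python
def align(lower_times, upper_windows):
    """Map each upper-level window (start, end) to the indices of the
    lower-level observations falling inside it."""
    mapping = {}
    for w, (start, end) in enumerate(upper_windows):
        mapping[w] = [i for i, t in enumerate(lower_times)
                      if start <= t < end]
    return mapping

# Component-level samples every second; circuit-level windows of 3 s.
lower = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
upper = [(0.0, 3.0), (3.0, 6.0)]
print(align(lower, upper))  # → {0: [0, 1, 2], 1: [3, 4, 5]}
```

This is the sense in which several observations (and states) at the first level correlate with a single observation (or state) at the second level.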
  • a pair of covariance matrices may be generated using the output state sequences from the first stage model layers: a first covariance matrix between the component level (m0) and circuit level (m1) outputs (i.e., the estimates of the most likely states given a sequence of observations) and a second covariance matrix between the circuit level (m1) and logical path level (m2) outputs.
  • the hierarchy of layers may be used to make estimating the relationship between the first stage model layers computationally easier than finding the joint probabilities of observations. As a result of the hierarchical approach, computing all pairs of covariances may be avoided.
  • the outputs from each model layer of the hierarchy may be used to generate the correlation matrices to find the highest correlation states between the pairs of models (i.e., component/circuit and circuit/logical).
  • the MTBF prediction system can compute the two covariance matrices using the outputs of the three first stage models: one between the component and circuit layers and one between the circuit and logical path layers. These matrices can become the input observations for the top-level latent state model.
  • the top-level latent state model can be trained based on the relationships between all three layers given the pairwise state inputs from the two covariance matrices.
  • the highest correlation states between the pairs of states given in the covariance matrices are input into the top-level latent state model, and the outputs of the top-level latent state model are the operational state estimates for the piece of equipment.
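The pairwise covariance step can be sketched as follows. Given aligned output state sequences from the component (m0), circuit (m1), and logical path (m2) models, the sketch builds the two matrices that feed the top-level model; the one-hot encoding of state indices is an assumption on our part, and the state sequences are invented.

```python
import numpy as np

def one_hot(seq, n_states=3):
    return np.eye(n_states)[np.asarray(seq)]

def state_covariance(seq_a, seq_b, n_states=3):
    """Covariance between one-hot state indicators of two layers,
    computed over time. Rows: layer-a states; columns: layer-b states."""
    a = one_hot(seq_a, n_states)   # shape (T, n_states)
    b = one_hot(seq_b, n_states)
    a_c = a - a.mean(axis=0)
    b_c = b - b.mean(axis=0)
    return a_c.T @ b_c / (len(seq_a) - 1)

m0_states = [0, 0, 1, 1, 2, 2]   # component-level output sequence
m1_states = [0, 0, 0, 1, 1, 2]   # circuit-level output sequence
m2_states = [0, 0, 0, 0, 1, 2]   # logical-path-level output sequence

cov_m0_m1 = state_covariance(m0_states, m1_states)  # component/circuit
cov_m1_m2 = state_covariance(m1_states, m2_states)  # circuit/logical
print(cov_m0_m1.shape, cov_m1_m2.shape)  # → (3, 3) (3, 3)
```

The largest entry of each matrix identifies the highest-correlation state pair between the adjacent layers, which is the quantity the text says is input to the top-level latent state model.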
  • the correlation between pairs of observables (e.g., (O_m0, O_m1)) is first determined.
  • the hierarchy may be used to organize observations and states from the three models to determine transitions between states in the larger, hierarchical state/action space.
  • the objective in asynchronous data collection is to generate a top level model that predicts the relationships between the three first stage models without having to compute the full joint probabilities of all three first stage models.
  • the MTBF prediction system can use the covariance matrices between pairs of first stage models (component & circuit and circuit & logical) as the input for the top-level latent state model.
  • the hierarchy of the first stage model layers limits the combinatorics of the state/observation space in that not all combinations of state/observation co-occurrences are considered.
  • the top-level latent state model may compute the probability of an observation in model m0 (i.e., O_m0) given an observation in model m1 (i.e., O_m1), multiplied by the probability of an observation in model m1 given an observation in model m2 (i.e., O_m2), as shown in equation 4, rather than the full joint probability p(O_mTL | O_m0, O_m1, O_m2).
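Equation 4 is referenced but not reproduced in this excerpt. A plausible rendering consistent with the surrounding description, writing O_{m_i} for an observation at layer i (notation ours), is:

```latex
p(O_{m_0} \mid O_{m_1}) \cdot p(O_{m_1} \mid O_{m_2})
\quad \text{in place of the full joint} \quad
p(O_{m_{TL}} \mid O_{m_0}, O_{m_1}, O_{m_2})
```

That is, the hierarchy licenses a chain of pairwise conditionals between adjacent layers instead of a joint distribution over all three layers at once.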
  • the following example is presented to further explain the operation of the MTBF prediction system for estimating a next operational state and/or MTBF for a new piece of equipment according to aspects of the present disclosure.
  • New observations (i.e., new data) from the new piece of equipment are input to the trained first stage model layers.
  • Each of the first stage model layers outputs sequences of most likely latent states (i.e., the operational states with the highest probability given the observations).
  • the outputs of the first stage model layers are used as input to the top-level model to predict future operational states, for example, operational, degraded, or failed, and/or to predict MTBF.
  • a new piece of equipment may be identified to monitor and estimate likely time to failure.
  • a collection of historical data from other pieces of equipment of the same type may be obtained to observe the transitions to failure states over time.
  • the historical data from the other pieces of equipment may have been previously generated by instrumenting the other pieces of equipment as described above with temperature sensors to collect component level operating temperatures of specified components, accelerometers to capture vibration and movement data, as well as voltage and current sensors to capture voltage and current values at specified locations and operating hours.
  • the data may have been collected synchronously or asynchronously.
  • the historical data collected from the other pieces of equipment may include working states, for example operational or failure states, corresponding to the collected data.
  • Data, for example, but not limited to, voltage, temperature, operational hours, and accelerometer data, may be collected, for example at a rate of 60 samples per second or another rate, for several pieces of equipment, all of which are of the target equipment type.
  • the corresponding operational states of the equipment may be collected, for example, from an equipment status monitor.
  • latent state models that associate observables (e.g., sensor measurements) to the latent operational states may be generated for each component for which data is collected (e.g., for a 10-component circuit, 10 latent state models may be generated) based on the collected data and the operational state labels.
  • the component level models may output the sequences of most likely latent states.
  • the circuit level and logical path level models may output the sequences of most likely latent states based on the circuit level and logical path level observables, respectively.
  • A model, for example, but not limited to, a hidden Markov model (HMM) or a recurrent neural network (RNN), may be generated for each layer in the hierarchy (i.e., the component, circuit, and logical layers).
  • Each of the three models may be trained to learn the association between the observables (e.g., temperature, power, movement, hours, etc.) and the operational states of the model (e.g., operational, degraded, and failed).
  • the outputs of the three models form the input to the top-level latent state model that produces a single operational state prediction.
  • the data from the three layers may be aligned to determine the current state of the equipment.
  • Correlation matrices may be generated from the output sequences of most likely latent states between the component/circuit and circuit/logical layers.
  • the operational latent state model learns the latent operational states from each hierarchical layer of the model (e.g., component, circuit, logical path) using the failure states from each layer as failure states in the operational state model.
  • the operational latent state model enables determination of the probability of transition to failure state in the latent state model given the present state and observation sequence as shown by the argmax function of equation 5.
  • Equation 5 indicates that the MTBF prediction system can use the top level transition probabilities to estimate the probability of the earliest opportunity of the equipment to transition to a failure state.
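Equation 5 is likewise referenced but not shown in this excerpt. One plausible reading of its argmax, consistent with the surrounding text (s_t is the latent operational state at step t, O_{1:t} the observation history; notation ours), is:

```latex
s^{*}_{t+1} = \operatorname*{arg\,max}_{s}\; p\bigl(s_{t+1} = s \mid s_{t},\, O_{1:t}\bigr)
```

Iterating this one-step prediction forward, as the following passages describe, yields a most likely future state trajectory, and the first step at which the maximizer is a failure state gives the earliest most likely transition to failure.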
  • the top-level latent state model estimates the transition probabilities between the operational (e.g., latent) states of the equipment. These state transition probabilities are used to estimate the most likely operational states the equipment will experience in the future.
  • the MTBF prediction system does this by computing the next most likely state of the top-level latent state model given the current state estimates of the first stage models obtained using the current observations. Using this next most likely state, the MTBF prediction system can compute a subsequent next most likely state.
  • the MTBF prediction system can compute the most likely future states in this fashion and can also estimate the earliest most likely transition to particular states in the top-level latent state model (e.g., the earliest, most likely transitions to failure states).
  • the trained top-level model may use the outputs of the first stage model layers to form the single operational state estimate for the new piece of equipment for each observation.
  • the top-level model may output the operational state estimates as a vector.
  • the operational state estimates output by the top-level model can enable computation of the MTBF of a piece of equipment. For example, by estimating the probabilities of transitions to failure states in the models, the expected time between failures (i.e., MTBF) may be computed.
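One standard way to turn estimated state-transition probabilities into an expected time to failure, which we offer only as an illustrative reading of this step: for a Markov chain with an absorbing failure state, the expected number of steps to absorption from each transient state solves (I - Q)h = 1, where Q is the transient-to-transient block of the transition matrix. The transition numbers below are invented.

```python
import numpy as np

P = np.array([
    [0.97, 0.02, 0.01],   # operational -> (operational, degraded, failed)
    [0.10, 0.85, 0.05],   # degraded    -> (operational, degraded, failed)
    [0.00, 0.00, 1.00],   # failed is absorbing
])

Q = P[:2, :2]                          # transitions among non-failed states
h = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print(np.round(h, 1))                  # → [68. 52.]
```

With these numbers the equipment is expected to run 68 steps to failure from the operational state and 52 from the degraded state; multiplying by the sampling interval converts steps into time, i.e., an MTBF-style estimate.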
  • FIG. 4 is a flowchart illustrating a method 400 for predicting MTBF according to various aspects of the present disclosure.
  • observable data may be collected from sensors configured to sense operating characteristics of a piece of equipment at a plurality of hierarchical layers (e.g., component, circuit, logical path) of the equipment.
  • data for a piece of equipment may be collected over a period of time from sensor arrays attached to the piece of equipment, built-in equipment instrumentation, and/or ambient instrumentation that collect information about equipment performance.
  • the data may be collected, for example at a rate of 60 samples per second or another rate, for several pieces of equipment, all of which are of the target equipment type.
  • the collected data will be historical data.
  • operational state indications of the piece of equipment may be collected.
  • The operational states of the equipment (e.g., operational, degraded, failed) may be correlated with the observable data collected for each of the hierarchical layers.
  • the historical data and associated operational states collected at blocks 410 and 420 may be used to generate a set of hierarchical latent state models associated with each layer of the equipment (e.g., the first stage model layers). As explained above, generation of the first stage model layers may differ based on whether data collection is synchronous or asynchronous.
  • the historical data, for example, but not limited to, voltage, temperature, operational hours, and accelerometer data, may be used to train machine learning models that can predict latent states.
  • models may be developed for each selected component (e.g., each component for which data is collected).
  • the component level models may output sequences of most likely latent states based on observables (i.e., input data).
  • the circuit level and logical path level models may output the sequences of most likely latent states based on the circuit level and logical path level observables, respectively.
  • the top-level model may be generated.
  • the top-level model may use the outputs of the first stage model layers to form the single operational state values for the overall equipment. As explained above, generation of the top-level model may differ based on whether data collection is synchronous or asynchronous.
  • new data for a piece of equipment under investigation may be fed into the models at each layer of the hierarchy.
  • the new data may be collected synchronously or asynchronously and may be processed accordingly by the model as explained above.
  • Each of the first stage model layers may output an operational state estimate for each input observation.
  • each first stage model outputs a state estimate for each observation and the state estimates are the input to the top-level model.
  • the first stage models each produce state estimates, but the first stage model outputs are used to build the pair of covariance matrices.
  • the covariance matrices are the input to the top-level model.
  • the top-level model may output a next operational state estimate for the piece of equipment based on the estimates provided by the outputs of each of the first stage model layers. Thus, execution of the model may predict a next operational state and/or an MTBF prediction for the equipment under investigation.
  • FIG. 4 provides a particular method for providing training data for a neural network model according to an embodiment. Other sequences of steps may also be performed according to alternative embodiments. For example, alternative embodiments may perform the steps outlined above in a different order. Moreover, the individual steps illustrated in FIG. 4 may include multiple sub-steps that may be performed in various sequences as appropriate to the individual step. Furthermore, additional steps may be added or removed depending on the particular applications.
  • One of ordinary skill in the art would recognize many variations, modifications, and alternatives.
  • the method 400 may be embodied on a non-transitory computer readable medium, for example, but not limited to, a memory or other non-transitory computer readable medium known to those of skill in the art, having stored therein a program including computer executable instructions for making a processor, computer, or other programmable device execute the operations of the methods.
  • FIG. 5 is a block diagram of an example computing environment 500 with an example computing device in accordance with various aspects of the present disclosure.
  • the example computing environment 500 may be suitable for use in some example implementations for collecting training data and executing a neural network model.
  • the computing device 505 in the example computing environment 500 may include one or more processing units, cores, or processors 510 , memory 515 (e.g., RAM, ROM, and/or the like), internal storage 520 (e.g., magnetic, optical, solid state storage, and/or organic), and/or I/O interface 525 , any of which may be coupled on a communication mechanism or a bus 530 for communicating information or embedded in the computing device 505 .
  • the computing device 505 may be communicatively coupled to an input/user interface 535 and an output device/interface 540 .
  • Either one or both of the input/user interface 535 and the output device/interface 540 may be a wired or wireless interface and may be detachable.
  • the input/user interface 535 may include any device, component, sensor, or interface, physical or virtual, that can be used to provide input (e.g., buttons, touch-screen interface, keyboard, a pointing/cursor control, microphone, camera, braille, motion sensor, optical reader, and/or the like).
  • the output device/interface 540 may include a display, television, monitor, printer, speaker, braille, or the like.
  • the input/user interface 535 and the output device/interface 540 may be embedded with or physically coupled to the computing device 505 .
  • other computing devices may function as or provide the functions of the input/user interface 535 and the output device/interface 540 for the computing device 505 .
  • Examples of the computing device 505 may include, but are not limited to, mobile devices (e.g., tablets, notebooks, laptops, personal computers, portable televisions, radios, and the like), and devices not designed for mobility (e.g., desktop computers, other computers, and the like).
  • the computing device 505 may be communicatively coupled (e.g., via the I/O interface 525 ) to an external storage device 545 and a network 550 for communicating with any number of networked components, devices, and systems, including one or more computing devices of the same or different configuration.
  • the computing device 505 or any connected computing device may be functioning as, providing services of, or referred to as a server, client, thin server, general machine, special-purpose machine, or another label.
  • the I/O interface 525 may include, but is not limited to, wired and/or wireless interfaces using any communication or I/O protocols or standards (e.g., Ethernet, 802.11x, Universal Serial Bus, WiMax, modem, a cellular network protocol, and the like) for communicating information to and/or from at least all the connected components, devices, and network in the computing environment 500 .
  • the network 550 may be any network or combination of networks (e.g., the Internet, local area network, wide area network, a telephonic network, a cellular network, satellite network, and the like).
  • the computing device 505 can use and/or communicate using computer-usable or computer-readable media, including transitory media and non-transitory media.
  • Transitory media include transmission media (e.g., metal cables, fiber optics), signals, carrier waves, and the like.
  • Non-transitory media include magnetic media (e.g., disks and tapes), optical media (e.g., CD ROM, digital video disks, Blu-ray disks), solid state media (e.g., RAM, ROM, flash memory, solid-state storage), and other non-volatile storage or memory.
  • the computing device 505 can be used to implement techniques, methods, applications, processes, or computer-executable instructions in some example computing environments.
  • Computer-executable instructions can be retrieved from transitory media, and stored on and retrieved from non-transitory media.
  • the executable instructions may originate from one or more of any programming, scripting, and machine languages (e.g., C, C++, C#, Java, Visual Basic, Python, Perl, JavaScript, and others).
  • the processor(s) 510 may execute under any operating system (OS) (not shown), in a native or virtual environment.
  • One or more applications may be deployed that include a logic unit 560 , an application programming interface (API) unit 565 , an input unit 570 , an output unit 575 , and an inter-unit communication mechanism 595 for the different units to communicate with each other, with the OS, and with other applications (not shown).
  • the processor(s) 510 may further include a neural network processor 580 .
  • the neural network processor 580 may include multiple processors operating in parallel.
  • the neural network processor 580 may implement neural networks, for example, but not limited to, Long Short Term Memory (LSTM) neural networks, feedforward neural network, radial basis function neural network, or other types of neural networks.
  • the neural network processor 580 may be used in an implementation of one or more processes described and/or shown in FIG. 3 .
  • the described units and elements can be varied in design, function, configuration, or implementation and are not limited to the descriptions provided.
  • the logic unit 560 may be configured to control information flow among the units and direct the services provided by the API unit 565 , the input unit 570 , the output unit 575 , and the neural network processor 580 in some example implementations.
  • the flow of one or more processes or implementations may be controlled by the logic unit 560 alone or in conjunction with the API unit 565 .
  • the example computing environment 500 may be or may include a cloud computing platform.

Abstract

A method for generating a multi-layer predictive model includes collecting historical observable data from one or more pieces of equipment of a same type, wherein the historical observable data is collected at different hierarchical levels of the one or more pieces of equipment; collecting operational state indications of the pieces of equipment corresponding to the collected historical observable data; generating, from the collected historical observable data, a set of operational state models, wherein each operational state model corresponds to one of the different hierarchical levels; and generating, from outputs of the set of operational state models, a top-level operational model for the piece of equipment. The top-level operational model is operable to determine maintenance and replacement timing for the piece of equipment.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 16/418,183, filed May 21, 2019. All sections of the aforementioned application are incorporated herein by reference in their entirety.
  • BACKGROUND
  • Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
  • Accurate predictions of when and where equipment will fail or require maintenance can enable a company to plan inventory purchases and staging of spare equipment to minimize costs. The Telcordia TR-332/SR-332 Electronic Reliability Prediction Standard represents a standard practice for estimating mean time between failures (MTBF) for equipment in the telecommunications industry. The Telcordia standard uses component level failure rates to estimate circuit and equipment level failure rates. The standard maintains a list of failure rates for components, and the specification aggregates the component failure rates to estimate MTBF for a circuit or piece of equipment.
  • Other approaches to estimating MTBF that build on the Telcordia standard are state-based and consider latent operational states of devices or direct modeling of operational states. These approaches estimate the probability of transitioning between operational states; the estimated probabilities can then be used to estimate the time to transitioning to a failure state. Latent state models have been successful due in part to their ability to model transitions and activity without having to have a generative or a priori model of activity.
  • However, in cases of multiple causes of failures and multiple failure states, transition probabilities can become convoluted, affecting model interpretability and accuracy. The ability to improve MTBF predictions enables intelligent asset allocation strategies.
  • SUMMARY
  • Systems and methods for cost based optimization of network asset allocation are provided.
  • According to various aspects there is provided a method for generating a multi-layer predictive model. In some aspects, the method may include: collecting historical observable data from one or more pieces of equipment of a same type, wherein the historical observable data is collected at different hierarchical levels of the one or more pieces of equipment. The different hierarchical levels may be a component level, a circuit level, and a logical path level.
  • The method may further include collecting operational state indications of the pieces of equipment corresponding to the collected historical observable data; generating, from the collected historical observable data, a set of operational state models, wherein each operational state model corresponds to one of the different hierarchical levels; and generating, from outputs of the set of operational state models, a top-level operational model for the piece of equipment. The top-level operational model may be operable to determine maintenance and replacement timing for the piece of equipment. The operational state indications may include an operational state indication, a degraded state indication, and a failed state indication.
  • The method may further include collecting the historical observable data asynchronously between the different hierarchical levels. In response to collecting the historical observable data asynchronously, the method may include generating a first covariance matrix between outputs of a first hierarchical level operational state model and outputs of a second hierarchical level operational state model; generating a second covariance matrix between the outputs of the second hierarchical level operational state model and outputs of a third hierarchical level operational state model; and generating the top-level operational model using the first covariance matrix and the second covariance matrix as input.
  • The method may further include temporally aligning the asynchronously collected historical observable data between the different hierarchical levels. Each of the first hierarchical level operational state model, the second hierarchical level operational state model, and the third hierarchical level operational state model, may output a single state probability estimate for a sequence of input observable data. The operational state indications may be correlated to the asynchronously collected historical observable data for one of the different hierarchical levels.
  • The method may further include generating a top-level model output based on a product of a highest probability estimation state from each of the first covariance matrix and the second covariance matrix. The top-level model output may be a probability estimate of a next operational state or a mean time between failure (MTBF) for the piece of equipment.
  • The method may further include collecting the historical observable data synchronously between the different hierarchical levels. In response to collecting the historical observable data synchronously, the method may further include: generating the top-level operational model using a first hierarchical level operational state model, a second hierarchical level operational state model, and a third hierarchical level operational state model. The method may further include generating a top-level model output based on a product of probability estimation states from outputs of each of the first hierarchical level operational state model, the second hierarchical level operational state model, and the third hierarchical level operational state model.
  • Each operational state model of the set of operational state models may be a machine learning model trained with the historical observable data collected from corresponding hierarchical levels. The top-level operational model may be a machine learning model trained with outputs of the set of operational state models and corresponding operational state indications.
  • According to various aspects there is provided a computer-implemented method for estimating a next operational state of a piece of equipment. In some aspects, the computer-implemented method may include: collecting observable data from the piece of equipment, wherein the observable data is collected at different hierarchical levels of the piece of equipment. The different hierarchical levels may be a component level, a circuit level, and a logical path level.
  • The computer-implemented method may further include determining that the observable data is collected asynchronously between the different hierarchical levels. In response to determining that the observable data is collected asynchronously the computer-implemented method may further include: generating a first covariance matrix between outputs of a first hierarchical level operational state model and outputs of a second hierarchical level operational state model; generating a second covariance matrix between the outputs of the second hierarchical level operational state model and outputs of a third hierarchical level operational state model; and generating the output of the top-level operational model based on a product of a highest probability estimation state from each of the first covariance matrix and the second covariance matrix, wherein the output of the top-level operational model is a probability estimate of a next operational state or a mean time between failure (MTBF) for the piece of equipment.
  • The computer-implemented method may further include determining that the observable data is collected synchronously between the different hierarchical levels. In response to determining that the observable data is collected synchronously, the computer-implemented method may further include generating the output from the top-level operational model based on a product of probability estimation states from outputs of each of a first hierarchical level operational state model, a second hierarchical level operational state model, and a third hierarchical level operational state model.
  • Each operational state model of the set of operational state models may be a machine learning model trained with the historical observable data collected from corresponding hierarchical levels. The top-level operational model may be a machine learning model trained with outputs of the set of operational state models and corresponding operational state indications.
  • According to various aspects there is provided an apparatus. In some aspects, the apparatus may include: a memory configured to store program instructions and data and a processor configured to communicate with the memory. The processor may be further configured to execute instructions read from the memory. The instructions may be operable to cause the processor to perform operations including: collecting observable data from a piece of equipment, wherein the observable data is collected at different hierarchical levels of the piece of equipment. The different hierarchical levels may be a component level, a circuit level, and a logical path level.
  • The operations may further include inputting the collected observable data to a predictive model at a set of operational state models corresponding to the different hierarchical levels; generating an output from each operational state model of the set of operational state models, the output being a state probability estimate for each of the different hierarchical levels; and generating, from a top-level operational model, an output based on the outputs of the set of operational state models, wherein the output from the top-level operational model is a probability estimate of a next operational state or a mean time between failure (MTBF) for the piece of equipment. The next operational state may include an operational state indication, a degraded state indication, and a failed state indication.
  • The instructions may be further operable to cause the processor to perform operations including determining that the observable data is collected asynchronously between the different hierarchical levels. In response to determining that the observable data is collected asynchronously, the instructions may be further operable to cause the processor to perform operations including: generating a first covariance matrix between outputs of a first hierarchical level operational state model and outputs of a second hierarchical level operational state model; generating a second covariance matrix between the outputs of the second hierarchical level operational state model and outputs of a third hierarchical level operational state model; and generating the output of the top-level operational model based on a product of a highest probability estimation state from each of the first covariance matrix and the second covariance matrix, wherein the output of the top-level operational model is a probability estimate of a next operational state or a mean time between failure (MTBF) for the piece of equipment.
  • The instructions may be further operable to cause the processor to perform operations including determining that the observable data is collected synchronously between the different hierarchical levels. In response to determining that the observable data is collected synchronously, the instructions may be further operable to cause the processor to perform operations including generating the output from the top-level operational model based on a product of probability estimation states from outputs of each of a first hierarchical level operational state model, a second hierarchical level operational state model, and a third hierarchical level operational state model.
  • Each operational state model of the set of operational state models may be a machine learning model trained with the historical observable data collected from corresponding hierarchical levels. The top-level operational model may be a machine learning model trained with outputs of the set of operational state models and corresponding operational state indications.
  • Numerous benefits are achieved by way of the various embodiments over conventional techniques. The proposed solution is scalable and enhances the state of the art in estimating MTBF for equipment. The solution also provides a hierarchical latent state model whose estimates are interpretable. These and other embodiments, along with many of their advantages and features, are described in more detail in conjunction with the text below and the attached figures.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Aspects and features of the various embodiments will be more apparent by describing examples with reference to the accompanying drawings, in which:
  • FIG. 1 illustrates a general example of visual model for a neural network according to various aspects of the present disclosure;
  • FIG. 2 is a block diagram illustrating the hierarchy of layers according to various aspects of the present disclosure;
  • FIG. 3 is a diagram illustrating an example of aligning observations and states between models for different layers when data is collected asynchronously according to various aspects of the present disclosure;
  • FIG. 4 is a flowchart illustrating a method for estimating a next operational state and/or MTBF for a new piece of equipment according to various aspects of the present disclosure; and
  • FIG. 5 is a block diagram of an example computing environment with an example computing device according to various aspects of the present disclosure.
  • DETAILED DESCRIPTION
  • While certain embodiments are described, these embodiments are presented by way of example only, and are not intended to limit the scope of protection. The apparatuses, methods, and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions, and changes in the form of the example methods and systems described herein may be made without departing from the scope of protection.
  • According to aspects of the present disclosure, a MTBF prediction system can improve next operational state and/or MTBF predictions, enable intelligent asset allocation strategies, and improve decommissioning strategies. For example, in network hardware inventory management (e.g., purchasing, allocation, etc.), both model interpretability and accuracy are important for purchasing and positioning replacement equipment to anticipate equipment failures. In addition, the ability to explain why the MTBF prediction system estimates failure states or non-failure states enables improved situational awareness for managers and operations teams for executing intelligent inventory management.
  • The MTBF prediction system models modes of operation in terms of the physical states of components, the circuit level usage states of operation, and the logical and usage states of components. Modeling at these levels improves the characterization of failure estimates and the estimation of failures in general. With sufficient data at each level to model transitions between states at each level, the overall MTBF prediction system can capture details about operations that previous models were not capable of detecting.
  • To estimate MTBF for a piece of equipment, the MTBF prediction system collects and registers observations (i.e., data from variables that can be measured) from three layers that represent different functions/operations: a component layer, a circuit layer, and a logical path layer. The MTBF prediction system may build latent state (i.e., operational states that cannot be directly measured) models from these observations. The output of those latent state models can be used as input to build a top-level latent state model that estimates the next operational state and/or MTBF of the piece of equipment. The top-level latent state model may be referred to herein as a top-level latent state operational model, a top-level model, a top-level operational model, or a top-level hierarchical latent state model depending on context.
  • The MTBF prediction system uses historical data collected from similar pieces of active equipment to inform the models. The MTBF prediction system can use both synchronously and asynchronously collected data for the three first level model layers (i.e., component, circuit, logical path). Once trained, a top-level latent state model produces MTBF estimates for activity sequences of pieces of equipment and continues training as additional data arrive.
  • The MTBF prediction system may be implemented by a machine learning model. Machine learning technology has applicability for companies seeking to accurately monitor equipment state to minimize operational disruptions. FIG. 1 illustrates a visual model 100 for a general example of a neural network according to various aspects of the present disclosure. A neural network may execute a neural network model. A neural network model may also be referred to herein as a machine learning model. Referring to FIG. 1, the model 100 includes an input layer 104, a middle layer (i.e., a “hidden” layer) 106, and an output layer 108. In general, a neural network implementation can include multiple hidden layers.
  • Each layer includes some number of nodes 102. The nodes 102 of the input layer 104 may be connected to each node 102 of the hidden layer 106. The connections may be referred to as weights 110. Each node 102 of the hidden layer 106 may have a connection or weight 110 with each node 102 of the output layer. The input layer 104 can receive inputs and can propagate the inputs to the hidden layer 106. Weighted sums computed by the hidden layer 106 (or multiple hidden layers) are propagated to the output layer 108, which can present final outputs to a user.
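  • The forward pass described above (inputs propagated through weighted connections to a hidden layer, and weighted sums propagated on to the output layer) can be sketched in a few lines of NumPy. This is a generic illustration, not code from the disclosure; the layer sizes and the sigmoid activation are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(inputs, w_hidden, w_output):
    """Propagate inputs through the hidden-layer weights, then the output weights."""
    hidden = sigmoid(inputs @ w_hidden)   # weighted sums at the hidden layer
    return sigmoid(hidden @ w_output)     # final outputs presented to a user

rng = np.random.default_rng(0)
w_hidden = rng.normal(size=(4, 3))        # 4 input nodes -> 3 hidden nodes
w_output = rng.normal(size=(3, 2))        # 3 hidden nodes -> 2 output nodes
out = forward(rng.normal(size=(1, 4)), w_hidden, w_output)
print(out.shape)  # (1, 2)
```

In practice the weights would be learned from training data rather than drawn at random.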
  • One of ordinary skill in the art will appreciate that the neural network illustrated in FIG. 1 is merely exemplary and that different and/or additional neural networks, for example, but not limited to, Long Short Term Memory (LSTM) neural networks, feedforward neural networks, radial basis function neural networks, or other types of neural networks, may be used without departing from the scope of the present disclosure.
  • Data (i.e., observations) may be collected synchronously or asynchronously for the MTBF prediction system. Historical data for training the model layers for a piece of equipment (e.g., a network asset) may be obtained from measurements of various parameters collected over time from similar pieces of equipment operating in the field. The data may include operational data used to model physical characteristics, for example, voltages, currents, operating temperature, amps, radio frequency (RF) characteristics, etc., obtained by instrumenting the pieces of equipment with sensors, as well as environmental data, for example, ambient temperature, humidity, vibration, etc.
  • In some cases, components, for example, but not limited to, integrated circuits, may have built-in instrumentation for collecting data. Data from instrumented equipment (e.g., smart meters) may also be collected. Where only ambient environment data (e.g., “off-equipment” data), for example, cabinet or room temperature data, input current data, ambient or room RF data, etc., is available, the condition of each component may be estimated from those data using statistical distribution models on the hardware. In addition, operational states (e.g., operational, degraded, or failed) of the pieces of equipment corresponding to the operational and environmental data may be collected (e.g., from an equipment status monitor).
  • In some implementations, data may be collected synchronously at the different model layers for a piece of equipment, meaning that the observations in each layer are all collected at the same time. The data may be collected, for example, at a rate of 60 samples per second or another rate. In addition, the corresponding operational states of the equipment may be collected. In other cases, data may be collected asynchronously at the different model layers, meaning that the observations at one model layer are collected at a different time than the observations collected at the other model layers. Collection of the asynchronously collected data should occur in the same time range (i.e., over the same temporal extent) for the model layers of a piece of equipment.
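  • The synchronous/asynchronous distinction above can be made concrete with a small check on timestamp sequences. This is a hedged sketch, not part of the disclosure: one plausible test is whether every layer's timestamps match the first layer's within a tolerance, where the tolerance parameter is an illustrative assumption.

```python
def collected_synchronously(timestamps_by_layer, tol=0.0):
    """Return True if all model layers share the same observation timestamps."""
    base = timestamps_by_layer[0]
    return all(
        len(ts) == len(base) and all(abs(a - b) <= tol for a, b in zip(base, ts))
        for ts in timestamps_by_layer[1:]
    )

# Three layers sampled at the same instants -> synchronous.
sync_layers = [[0.0, 1.0, 2.0], [0.0, 1.0, 2.0], [0.0, 1.0, 2.0]]
# Layers sampled at different instants over the same time range -> asynchronous.
async_layers = [[0.0, 1.0, 2.0], [0.5, 1.5], [0.2, 1.2, 2.2]]
print(collected_synchronously(sync_layers), collected_synchronously(async_layers))  # True False
```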
  • According to various aspects of the present disclosure, a piece of equipment may be represented as a hierarchy of three first stage model layers (i.e., component level, circuit level, and logical path level) for modelling. Sensor arrays attached to equipment, built-in equipment instrumentation, and/or ambient instrumentation may be used to collect information about equipment performance at each of the three layers to construct a corresponding top-level hierarchical latent state model. The top-level hierarchical latent state model may be used to determine maintenance and replacement timing (e.g., by estimating a next operational state or MTBF) for the piece of equipment.
  • A latent state model relates a set of observable (i.e., directly measured) variables, for example, voltage or current at a circuit test point, to a set of variables that are not directly observed (i.e., latent variables), for example, an internal operational state of an integrated circuit or a piece of equipment. The latent variables are inferred from the observable variables. The nature of the hierarchical latent state model enables limiting the computations performed by the MTBF prediction system; thus, the resulting inference tool is scalable. Modes of operation in terms of the physical states of components, circuit level usage states of operation, and logical level usage states of operation may be modeled. Modeling at these three layers represents an improvement in the characterization of failure estimates and the estimation of failures in general.
  • FIG. 2 is a block diagram illustrating the hierarchy of model layers according to various aspects of the present disclosure. Referring to FIG. 2, a piece of equipment for which the operational state and/or MTBF is to be calculated may be represented as three layers: a component layer (m0) 210, a circuit layer (m1) 220, and a logical path layer (m2) 230. The component layer 210 may also be referred to herein as the physical characteristics layer.
  • Observations, i.e., measurements made by various sensors, may be made at each of the three model layers and, based on the observations, latent state models for each layer may be constructed. Data can be generated from the models of the component level, circuit level, and logical level in conjunction with observable features. With sufficient data at each level to model the transitions between operational states at each level, the MTBF prediction system can capture details about equipment operations that previous models were not capable of detecting while providing the ability to generate explanations of predictions.
  • For each layer, a latent state model that captures working operational states and failure states may be determined from the collected historical data. The latent state model for a given layer may be based on the observations (i.e., sensor measurement data) from that layer. Once the latent state models are developed, subsequent observations (i.e., new sensor measurement data) may be used to estimate a current latent state for each layer, for example, for a new piece of equipment. Given the estimated current latent state from each model, the operational state for the equipment may be estimated based on a single top-level latent state operational model for the equipment. The single top-level latent state operational model may be generated using the estimates of the current latent states of the hierarchical model layers and the observations from each of the hierarchical model layers.
  • At the first level of the hierarchical model (m0), the component layer 210, physical characteristics of hardware components, for example, integrated circuits, capacitors, resistors, etc., may be modeled. The modelled physical characteristics may include characteristics such as temperature, vibration, friction, etc., that can affect MTBF for hardware components. A first level (m0) model may be developed for each identified component.
  • At the second level of the hierarchical model (m1), the circuit process level 220, input and output characteristics, for example, input voltage and/or current signals, of the hardware may be modeled. Similar data types as collected at the m0 layer may be collected together with the additional inputs and outputs of the circuit.
  • At the third level of the hierarchical model (m2), the logical path level 230, operational hours of logical paths may be modeled. Logical paths as used herein refer to operational circuit paths. For example, a transmitter circuit may operate in a low power mode when communicating with one receiver and may operate in a high power mode when communicating with another receiver. Different logical signal and power paths through the same transmitter circuit may be used for the low and high power modes. Similar data types as collected at the m0 and m1 layers may also be collected, as well as the operational hours.
  • For circuits and components, different logical paths may have different effects on hardware lifetimes. Operational hours of logical paths may be modeled based on the instrumented input and output data. In cases where instrumented data on logical path operational hours is unavailable, logical paths between pairs of components may be modeled using the data collected from m0 (i.e., the component layer 210) to estimate operational hours/utilization of groups of components.
  • In some cases, the hierarchical levels may be represented at different functional/operational levels. For example, the component layer, rather than representing an individual component such as a resistor or capacitor, may represent a printed circuit board (PCB) assembly; the circuit layer, rather than representing a single circuit, may represent a module containing several PCBs forming a larger circuit; and the logical path layer may represent various combinations of functions provided by different PCBs forming a functional path. In some cases, the PCB or module may correspond to a serviceable or replaceable part. Many variations and alternatives may be recognized for defining the various layers.
  • Machine learning models for each of the three hierarchical first stage model layers (component (m0), circuit (m1), and logical path (m2)) may be trained using the historical data (e.g., observations and operational states) collected from the other similar pieces of equipment. Using the historical data, the machine learning models may be trained to estimate the state of a component, circuit, or logical path. In the case of the component level model (m0), a separate machine learning model may be trained for each identified component.
  • In accordance with aspects of the present disclosure, to estimate next operational states and/or MTBF for a piece of equipment, the MTBF prediction system may collect and register observations from the three first stage model layers: the component layer (m0), the circuit layer (m1), and the logical path layer (m2). The MTBF prediction system then uses these observations to train a top-level latent state model to learn a state model and transitions between states for the equipment level (i.e., top-level) operational model 240. Once trained, the top-level model 240 can produce MTBF estimates for activity sequences of a piece of equipment and continues training as additional new data arrive. When used in conjunction with hardware and logical operational hours data, these features create a representation of an operational state that improves the state of the art in estimating time to failure as well as transitions between states (e.g., operational, degraded, failed).
  • Each of the trained machine learning models can be used to estimate a probability of a next state (e.g., operational or failure) of a corresponding component (m0), circuit (m1), or logical path (m2) for a new piece of equipment using new data (e.g., observations) generated by the new equipment as input to the models. That is, when new data is input for each component at the m0 level, the machine learning models for the m0 level may estimate the probability of a next state for each component. Similarly, when new data is input for each circuit at the m1 level, the machine learning models for the m1 level may estimate the probability of a next state for each circuit. Finally, when new data is input for each logical path at the m2 level, the machine learning models for the m2 level may estimate the probability of a next state for each logical path. The estimated output state probabilities for all the models (e.g., m0, m1, and m2) may be used as input to train the single top-level model 240 to generate the next state probability for the piece of equipment.
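  • A first stage model at any level can be thought of as a function from an observation vector to a probability distribution over next states. The minimal stub below illustrates that input/output contract only; the linear-score-plus-softmax form and the weight values are assumptions, not the disclosure's trained models.

```python
import numpy as np

STATES = ("operational", "degraded", "failed")

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def make_state_model(weights):
    """Illustrative stand-in for a trained first stage model:
    a linear score per state followed by a softmax."""
    def model(observation):
        return softmax(weights @ observation)  # one probability per next state
    return model

# Assumed m0 (component level) model over a 2-element observation vector.
m0_model = make_state_model(np.array([[0.2, 0.1],
                                      [0.0, 0.3],
                                      [-0.1, 0.5]]))
probs = m0_model(np.array([1.0, 2.0]))  # e.g., [temperature, vibration] readings
print(float(probs.sum()))  # 1.0 -> a valid distribution over the three states
```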
  • The MTBF prediction system builds computational models to estimate the latent (i.e., unobservable) operational states at each model layer. The models are trained with the historical data to learn the relationships between the observations (e.g., the instrumented data collected for each layer) and the latent states. Within each of the three first stage model layers (i.e., component, circuit, logical path), the most likely operational state S may be estimated at each time step given the observations ω, where the observations are collected over time as ω = ω_t0 . . . ω_tn and each ω_tn is a vector of observations. Observation vector elements (e.g., accelerometer and temperature data collected within a layer) are collected at the same time.
  • The number of latent states to be associated is a parameter supplied to each layer's models when it is trained. The MTBF prediction system may additionally compute the sample entropy (n = −log(ω_1/ω_2)) between features at each data point for a collection of time windows (e.g., 10 seconds, 10 minutes, 10 days) to augment the input data for building these models. Once trained, the models can be capable of outputting the most likely latent states given a sequence of simulated or actual observations. The models can also estimate the next most likely state given the current state (i.e., given only the current observation).
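  • As a point of reference only, the conventional sample entropy SampEn(m, r) = −log(A/B) (the negative log ratio of matching template counts of lengths m+1 and m) can be sketched as follows. The disclosure does not specify its exact formula or parameters, so the definition, the template length m, and the tolerance r used here are assumptions.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Conventional SampEn(m, r): -log(A/B), where B counts template matches of
    length m and A counts matches of length m+1, using Chebyshev distance with
    tolerance r scaled by the series' standard deviation."""
    x = np.asarray(x, float)
    tol = r * x.std()

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        total = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            total += int(np.sum(dist <= tol)) - 1  # exclude the self-match
        return total

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

# A highly regular series yields a small sample entropy value.
val = sample_entropy([1, 2] * 10)
```

Low values indicate a regular, predictable signal; irregular signals score higher, which is why such a feature can usefully augment the model inputs over multiple time windows.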
  • The output of each of the three first stage model layers is a sequence of most likely latent states. The desired output is the operational state with the highest probability given the history of observations ω. The likelihood of a sequence of states S = S_0 . . . S_t is:
  • ℒ(S | ω) = ∏_t p(S_t | ω_t)  (1)
  • and the likelihood of the most likely sequence is:
  • ℒ(S | ω) = ∏_t argmax_S (p(S_t | ω_t))  (2)
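  • Equations (1) and (2) can be checked with a short NumPy sketch. The p_matrix[t, s] layout for the per-step state probabilities p(S_t = s | ω_t) is a representational assumption for illustration.

```python
import numpy as np

def sequence_likelihood(p_matrix, states):
    """Equation (1): product over time of p(S_t | omega_t) for a given sequence."""
    return float(np.prod([p_matrix[t, s] for t, s in enumerate(states)]))

def most_likely_sequence(p_matrix):
    """Equation (2): take the argmax state at each step and multiply the maxima."""
    states = p_matrix.argmax(axis=1)
    return states, float(np.prod(p_matrix.max(axis=1)))

# Two time steps, three candidate states per step.
p = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1]])
states, likelihood = most_likely_sequence(p)
print(list(states), round(likelihood, 2))  # [0, 1] 0.56
```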
  • The trained first stage model layers output estimates of the most likely states given a sequence of observations. The MTBF prediction system uses the outputs of the first stage model layers as inputs to train the top-level model for state estimation. To use the estimates in the top-level model, relations may be established for the observations between the first stage model layers. The relations may be established depending on how the observation data are collected for each first stage model layer in relation to the other first stage model layers. The data may be collected synchronously, meaning that observations for each first stage model layer are all collected at the same time, or the data may be collected asynchronously, meaning that observations in one first stage model layer are collected at a different time than the other first stage model layers.
  • The MTBF prediction system may estimate the equipment level operational state using the outputs of the three first stage model layers. Depending on the data collection type (i.e., synchronous or asynchronous), the equipment level model uses different methods of processing the input data to produce operational state estimates.
  • When the data for each first stage model layer are collected synchronously (i.e., at the same time), the MTBF prediction system builds the top-level latent state model in the same manner as the first stage model layers. The top-level (i.e., equipment level) latent state model uses the outputs of the first stage model layers as input in conjunction with the operational state. The time range of the data that was used to train the first stage model layers is used to train the top-level model.
  • In the synchronous case, each first stage model layer outputs a state estimate. As observations arrive at each first stage model layer, that layer generates a state estimate for each observation. The state estimates are the input to the top-level model. The top-level model produces a single state estimate for the equipment operation, one state estimate for each observation. The top-level model is trained using historical data similar to, but not the same as, the historical data used to train the first stage models. The training data for the top-level model may additionally include an equipment state label (e.g., normal, impaired, fault). During historical data collection, these states are recorded alongside the collected observation data, for example, from an equipment status monitor.
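  • In the synchronous case, the summary above describes the top-level output as a product of the probability estimation states from the three first stage models. One hedged way to sketch that combination, assuming each model emits a distribution over the same state set for the same timestep, is a normalized elementwise product:

```python
import numpy as np

def combine_synchronous(p_m0, p_m1, p_m2):
    """Elementwise product of the three layers' state distributions, renormalized.
    An illustrative combination rule, not the disclosure's trained top-level model."""
    joint = p_m0 * p_m1 * p_m2
    return joint / joint.sum()

# Assumed distributions over (operational, degraded, failed) from m0, m1, m2.
top = combine_synchronous(np.array([0.6, 0.3, 0.1]),
                          np.array([0.5, 0.4, 0.1]),
                          np.array([0.7, 0.2, 0.1]))
print(int(top.argmax()))  # 0 -> "operational" has the highest combined probability
```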
  • When the data for each first stage model layer are collected asynchronously (i.e., not collected at the same time), the MTBF prediction system may use a different method to estimate operational states than is used when the data is collected synchronously since there will not be a temporal dependence between the first stage model layers.
  • In the asynchronous case, the model outputs a single state estimate for a sequence of input observations. As in the synchronous case, the first stage models each produce state estimates, but the first stage model outputs are used to build the pair of covariance matrices. The covariance matrices are then the input to the top-level model. The top-level model is also trained using historical data that has the equipment state label. In the asynchronous collection case, equipment level states are synchronized with the logical level observation data.
  • FIG. 3 is a diagram illustrating an example of aligning observations and states between models for different layers when data is collected asynchronously according to various aspects of the present disclosure. Referring to FIG. 3, at a first (lower) level 305 of the hierarchy, measured data, i.e., observations 310, may be collected and associated with operational states 320 for the first level during a given time period. At a second (higher) level 325 of the hierarchy, measured data, i.e., observations 330, may be collected and associated with operational states 340 for the second level 325. While not synchronous, the asynchronously collected data should occur over the same temporal extent as the time over which the observations 310 were collected for the first level 305.
  • As illustrated in FIG. 3, several observations 310 at the first level 305 may be correlated with an observation 330 at the second level 325. Similarly, several operational states 320 at the first level 305 may be correlated with an operational state 340 at the second level 325. In some cases, an operational state at the first level may be correlated with more than one operational state at the second level.
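  • The alignment illustrated in FIG. 3 can be sketched as mapping each lower-level observation time to the most recent higher-level observation at or before it, so that several lower-level samples correlate with one higher-level sample. The mapping rule below is an assumption for illustration; the disclosure does not prescribe a specific alignment algorithm.

```python
import bisect

def align(lower_times, higher_times):
    """For each lower-level timestamp, return the index of the most recent
    higher-level timestamp that does not come after it."""
    return [max(bisect.bisect_right(higher_times, t) - 1, 0)
            for t in lower_times]

# Five lower-level observations aligned against three higher-level observations
# collected over the same temporal extent.
mapping = align([0.0, 0.4, 0.9, 1.5, 2.2], [0.0, 1.0, 2.0])
print(mapping)  # [0, 0, 0, 1, 2]
```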
  • In the asynchronous data collection case, a pair of covariance matrices may be generated using the output state sequences from the first stage model layers: a first covariance matrix between the component level (m0) and circuit level (m1) outputs (i.e., the estimates of the most likely states given a sequence of observations) and a second covariance matrix between the circuit level (m1) and logical path level (m2) outputs. For example, the covariance between two sets of latent state estimates S_m0 and S_m1 is
  • E[(S_m0 − E[S_m0])(S_m1 − E[S_m1])]  (3)
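  • Equation (3) can be computed directly from its definition for a pair of temporally aligned state-estimate sequences. The sketch below uses encoded state indices as the sequence values, which is a representational assumption.

```python
import numpy as np

def state_covariance(s_m0, s_m1):
    """Equation (3): E[(S_m0 - E[S_m0]) (S_m1 - E[S_m1])] over aligned sequences."""
    s_m0 = np.asarray(s_m0, float)
    s_m1 = np.asarray(s_m1, float)
    return float(np.mean((s_m0 - s_m0.mean()) * (s_m1 - s_m1.mean())))

# Two perfectly co-varying state sequences give a positive covariance.
cov = state_covariance([0, 1, 2, 3], [0, 1, 2, 3])
print(cov)  # 1.25
```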
  • The hierarchy of layers may be used to make estimating the relationship between the first stage model layers computationally easier than finding the joint probabilities of observations. As a result of the hierarchical approach, computing all pairs of covariances may be avoided.
  • The outputs from each model layer of the hierarchy may be used to generate the correlation matrices to find the highest correlation states between the pairs of models (i.e., component/circuit and circuit/logical). The MTBF prediction system can compute the two covariance matrices using the outputs of the three first stage models: one between the component and circuit layers and one between the circuit and logical path layers. These matrices can become the input observations for the top-level latent state model. The top-level latent state model can be trained based on the relationships between all three layers given the pairwise state inputs from the two covariance matrices. The highest correlation states between the pairs of states given in the covariance matrices are input into the top-level latent state model, and the outputs of the top-level latent state model are the operational state estimates for the piece of equipment.
  • To determine transitions between observations and states between the three first stage model layers (e.g., component, circuit, logical path), the correlation between pairs of observables (e.g., (O_m0, O_m1)) is first determined. The hierarchy may be used to organize observations and states from the three models to determine transitions between states in the larger, hierarchical state/action space.
  • The objective in asynchronous data collection is to generate a top level model that predicts the relationships between the three first stage models without having to compute the full joint probabilities of all three first stage models. The MTBF prediction system can use the covariance matrices between pairs of first stage models (component & circuit and circuit & logical) as the input for the top-level latent state model. The hierarchy of the first stage model layers limits the combinatorics of the state/observation space in that not all combinations of state/observation co-occurrences are considered. For example, at any point in a time series of observations, to compute the probability of an observation in the top-level latent state model (i.e., p(O_mTL)) at time tn, the top-level latent state model may compute the probability of an observation in model m0 (i.e., O_m0) given an observation in model m1 (i.e., O_m1) multiplied by the probability of an observation in model m1 (i.e., O_m1) given an observation in model m2 (i.e., O_m2), as shown in equation 4, rather than the full joint probability p(O_mTL|O_m0, O_m1, O_m2).

  • p(O_mTL) = p(O_m0|O_m1) * p(O_m1|O_m2)  (4)
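As a minimal numeric sketch of the Equation 4 factorization (the count matrices, state spaces, and variable names below are hypothetical illustrations, not values from the disclosure), the two conditional distributions can be estimated from co-occurrence counts and multiplied for one concrete observation triple:

```python
import numpy as np

def conditional_prob(counts):
    """Convert a co-occurrence count matrix C[i, j] = #(A=i, B=j)
    into the conditional distribution p(A=i | B=j) by normalizing columns."""
    col_totals = counts.sum(axis=0, keepdims=True)
    return counts / np.where(col_totals == 0, 1, col_totals)

# Hypothetical co-occurrence counts of observation symbols between layers.
c01 = np.array([[8.0, 2.0], [2.0, 8.0]])  # component (m0) vs. circuit (m1)
c12 = np.array([[9.0, 1.0], [3.0, 7.0]])  # circuit (m1) vs. logical (m2)

p_m0_given_m1 = conditional_prob(c01)  # p(O_m0 | O_m1)
p_m1_given_m2 = conditional_prob(c12)  # p(O_m1 | O_m2)

# Equation 4: p(O_mTL) = p(O_m0 | O_m1) * p(O_m1 | O_m2),
# evaluated here for one concrete triple of observations (i, j, k).
i, j, k = 0, 0, 0
p_top = p_m0_given_m1[i, j] * p_m1_given_m2[j, k]
print(round(p_top, 3))  # 0.8 * 0.75 = 0.6
```

The point of the factorization is visible in the shapes: two small conditional tables replace a full three-way joint table, which is what keeps the combinatorics of the state/observation space manageable.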
  • The following example is presented to further explain the operation of the MTBF prediction system for estimating a next operational state and/or MTBF for a new piece of equipment according to aspects of the present disclosure. To estimate future operational states of a new piece of equipment, new observations (i.e., new data) collected from a piece of equipment are input to each of the first stage models. Each of the first stage model layers outputs sequences of most likely latent states (i.e., the operational states with the highest probability given the observations). The outputs of the first stage model layers are used as input to the top-level model to predict future operational states, for example, operational, degraded, or failed, and/or to predict MTBF.
  • A new piece of equipment may be identified to monitor and estimate likely time to failure. A collection of historical data from other pieces of equipment of the same type may be obtained to observe the transitions to failure states over time. The historical data from the other pieces of equipment may have been previously generated by instrumenting the other pieces of equipment as described above with temperature sensors to collect component level operating temperatures of specified components, accelerometers to capture vibration and movement data, as well as voltage and current sensors to capture voltage and current values at specified locations and operating hours. In addition, corresponding operational states (e.g., operational, degraded, failed) may be collected, for example, from equipment status monitors that monitor the states of the pieces of equipment. The data may have been collected synchronously or asynchronously. One of ordinary skill in the art will appreciate that other sensors may be used to collect data on other characteristics of the equipment without departing from the scope of the present disclosure. The historical data collected from the other pieces of equipment may include working states, for example operational or failure states, corresponding to the collected data.
  • At each layer of the model (e.g., component, circuit, logical path), data, for example, but not limited to, voltage, temperature, operational hours, and accelerometer data, may be collected for several months. The data may be collected, for example at a rate of 60 samples per second or another rate, for several pieces of equipment, all of which are of the target equipment type. In addition, the corresponding operational states of the equipment may be collected, for example, from an equipment status monitor.
  • At the component level, latent state models that associate observables (e.g., sensor measurements) to the latent operational states may be generated for each component for which data is collected (e.g., for a 10-component circuit, 10 latent state models may be generated) based on the collected data and the operational state labels. The component level models may output the sequences of most likely latent states. Similarly, the circuit level and logical path level models may output the sequences of most likely latent states based on the circuit level and logical path level observables, respectively.
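When a component-level latent state model is realized as an HMM, the "sequence of most likely latent states" is typically obtained by Viterbi decoding. The following self-contained sketch illustrates that step; all probabilities, state labels, and observation encodings are hypothetical stand-ins (in practice they would be learned from the collected sensor data and operational state labels):

```python
import numpy as np

def viterbi(log_start, log_trans, log_emit, obs):
    """Most likely latent-state sequence for a discrete-observation HMM.
    log_start[s], log_trans[s, s'], and log_emit[s, o] are log-probabilities."""
    n_states = log_start.shape[0]
    T = len(obs)
    score = np.zeros((T, n_states))
    back = np.zeros((T, n_states), dtype=int)
    score[0] = log_start + log_emit[:, obs[0]]
    for t in range(1, T):
        cand = score[t - 1][:, None] + log_trans      # cand[prev, cur]
        back[t] = np.argmax(cand, axis=0)             # best predecessor
        score[t] = cand[back[t], np.arange(n_states)] + log_emit[:, obs[t]]
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):                     # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Hypothetical 2-state component model: 0 = operational, 1 = degraded.
start = np.log([0.9, 0.1])
trans = np.log([[0.8, 0.2], [0.1, 0.9]])   # degradation tends to persist
emit = np.log([[0.9, 0.1], [0.2, 0.8]])    # observation 1 = abnormal reading
print(viterbi(start, trans, emit, [0, 0, 1, 1]))  # [0, 0, 1, 1]
```

For a 10-component circuit, ten such models would be fit and decoded independently, each producing its own most-likely-state sequence.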
  • Using these data, a model, for example, but not limited to, a hidden Markov model (HMM) or recurrent neural network (RNN), for each layer in the hierarchy (i.e., component, circuit, and logical layers) may be generated. Each of the three models may be trained to learn the association between the observables (e.g., temperature, power, movement, hours, etc.) and the operational states of the model (e.g., operational, degraded, and failed). The outputs of the three models form the input to the top-level latent state model that produces a single operational state prediction. In the case of asynchronously collected data, to make a single top-level operational state prediction, the data from the three layers may be aligned to determine the current state of the equipment. Correlation matrices may be generated from the output sequences of most likely latent states between the component/circuit and circuit/logical layers.
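One way to form the correlation matrices between adjacent layers from their output state sequences is to tabulate how often each pair of most-likely states co-occurs at the same time step. This sketch assumes synchronously aligned sequences; the sequences, state counts, and helper name are hypothetical illustrations:

```python
import numpy as np

def state_cooccurrence(seq_a, seq_b, n_a, n_b):
    """Count how often state i in layer A co-occurs with state j in layer B
    at the same time step; normalize to a joint-frequency matrix."""
    m = np.zeros((n_a, n_b))
    for a, b in zip(seq_a, seq_b):
        m[a, b] += 1
    return m / m.sum()

# Hypothetical most-likely-state sequences from two first-stage layers
# (0 = operational, 1 = degraded, 2 = failed).
component_states = [0, 0, 1, 1, 2, 2]
circuit_states   = [0, 0, 0, 1, 2, 2]

m = state_cooccurrence(component_states, circuit_states, 3, 3)
# Highest-frequency state pair, i.e., the highest correlation state pair
# that would be fed to the top-level latent state model.
i, j = np.unravel_index(np.argmax(m), m.shape)
print(int(i), int(j))
```

The same tabulation between the circuit and logical path layers yields the second matrix, giving the pair of inputs the top-level model expects.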
  • The operational latent state model learns the latent operational states from each hierarchical layer of the model (e.g., component, circuit, logical path) using the failure states from each layer as failure states in the operational state model. The operational latent state model enables determination of the probability of transition to failure state in the latent state model given the present state and observation sequence as shown by the argmax function of equation 5.

  • argmax_P(failure) M(S_m0,tn, S_m1,tn, S_m2,tn, O_m0,tn, O_m1,tn, O_m2,tn)  (5)
  • Equation 5 indicates that the MTBF prediction system can use the top level transition probabilities to estimate the probability of the earliest opportunity of the equipment to transition to a failure state.
  • An objective of the MTBF prediction system, for both synchronous and asynchronous data collection, is to estimate the time to failure or the next operational state for a piece of equipment given the collected observations. The top-level latent state model estimates the transition probabilities between the operational (e.g., latent) states of the equipment. These state transition probabilities are used to estimate the most likely operational states the equipment will experience in the future. The MTBF prediction system does this by computing the next most likely state of the top-level latent state model given the current state estimates of the first stage models obtained using the current observations. Using this next most likely state, the MTBF prediction system can compute a subsequent next most likely state. The MTBF prediction system can compute the most likely future states in this fashion and can also estimate the earliest most likely transition to particular states in the top-level latent state model (e.g., the earliest, most likely transitions to failure states).
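The iterative next-most-likely-state computation described above can be sketched as a greedy walk over the top-level transition matrix; the matrix values and function names below are hypothetical, chosen only to make the projection and the earliest-failure estimate concrete:

```python
import numpy as np

def most_likely_path(trans, state, steps):
    """Greedily follow the highest-probability transition from the current
    top-level state to project the most likely future states."""
    path = []
    for _ in range(steps):
        state = int(np.argmax(trans[state]))
        path.append(state)
    return path

def earliest_likely_failure(trans, state, failure, max_steps=100):
    """First step at which the greedy most-likely path reaches the failure state,
    or None if it is not reached within max_steps."""
    for step, s in enumerate(most_likely_path(trans, state, max_steps), start=1):
        if s == failure:
            return step
    return None

# Hypothetical top-level transition matrix: 0=operational, 1=degraded, 2=failed.
trans = np.array([[0.40, 0.60, 0.00],
                  [0.00, 0.45, 0.55],
                  [0.00, 0.00, 1.00]])
print(most_likely_path(trans, 0, 4))          # [1, 2, 2, 2]
print(earliest_likely_failure(trans, 0, 2))   # 2
```

The greedy walk is a simplification: a fuller treatment would propagate the whole state distribution forward, but the greedy version suffices to show how "earliest most likely transition to a failure state" falls out of the transition probabilities.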
  • When new data (e.g., observations) from a new piece of equipment are input to the trained first stage model layers, the trained top-level model may use the outputs of the first stage model layers to form the single operational state estimate for the new piece of equipment for each observation. The top-level model may output the operational state estimates as a vector.
  • The operational state estimates output by the top-level model can enable computation of the MTBF of a piece of equipment. For example, by estimating the probabilities of transitions to failure states in the models, the expected time between failures (i.e., MTBF) may be computed.
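The disclosure does not fix a specific formula for this computation, but one standard way to turn estimated transition probabilities into an expected time between failures is the expected hitting time of an absorbing failure state: restrict the transition matrix to the transient states (Q) and solve (I - Q)t = 1. The matrix below is a hypothetical illustration:

```python
import numpy as np

# Hypothetical top-level transition matrix per observation interval:
# states 0 = operational, 1 = degraded, 2 = failed (absorbing).
P = np.array([[0.90, 0.09, 0.01],
              [0.20, 0.70, 0.10],
              [0.00, 0.00, 1.00]])

# Q restricts P to the transient (non-failed) states; the expected number
# of steps to absorption from each transient state solves (I - Q) t = 1.
Q = P[:2, :2]
t = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print(np.round(t, 2))  # expected steps to failure from states 0 and 1
```

Multiplying the resulting step counts by the observation interval converts them to an MTBF estimate in time units.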
  • FIG. 4 is a flowchart illustrating a method 400 for predicting MTBF according to various aspects of the present disclosure. Referring to FIG. 4, at block 410, observable data may be collected from sensors configured to sense operating characteristics of a piece of equipment at a plurality of hierarchical layers (e.g., component, circuit, logical path) of the equipment. For example, data for a piece of equipment may be collected over a period of time from sensor arrays attached to the piece of equipment, built-in equipment instrumentation, and/or ambient instrumentation that collect information about equipment performance. The data may be collected, for example at a rate of 60 samples per second or another rate, for several pieces of equipment, all of which are of the target equipment type. Thus, the collected data will be historical data.
  • At block 420, operational state indications of the piece of equipment may be collected. The operational states of the equipment (e.g., operational, degraded, failed) may be collected, for example, from an equipment status monitor. The operational states may be correlated with the collected observable data collected for each of the hierarchical layers.
  • At block 430, the historical data and associated operational states collected at blocks 410 and 420 may be used to generate a set of hierarchical latent state models associated with each layer of the equipment (e.g., the first stage model layers). As explained above, generation of the first stage model layers may differ based on whether data collection is synchronous or asynchronous. At each model layer (e.g., component, circuit, logical path), the historical data, for example, but not limited to, voltage, temperature, operational hours, and accelerometer data, may be used to train machine learning models that can predict latent states. At the component level, models may be developed for each selected component (e.g., each component for which data is collected).
  • The component level models may output sequences of most likely latent states based on observables (i.e., input data). Similarly, the circuit level and logical path level models may output the sequences of most likely latent states based on the circuit level and logical path level observables, respectively.
  • At block 440, the top-level model may be generated. The top-level model may use the outputs of the first stage model layers to form the single operational state values for the overall equipment. As explained above, generation of the top-level model may differ based on whether data collection is synchronous or asynchronous.
  • At block 450, after the models are constructed, new data for a piece of equipment under investigation may be fed into the models at each layer of the hierarchy. The new data may be collected synchronously or asynchronously and may be processed accordingly by the model as explained above. Each of the first stage model layers may output an operational state estimate for each input observation. When the data is collected synchronously, each first stage model outputs a state estimate for each observation and the state estimates are the input to the top-level model. When the data is collected asynchronously, the first stage models each produce state estimates, but the first stage model outputs are used to build the pair of covariance matrices. The covariance matrices are the input to the top-level model. The top-level model may output a next operational state estimate for the piece of equipment based on the estimates provided by the outputs of each of the first stage model layers. Thus, execution of the model may predict a next operational state and/or an MTBF prediction for the equipment under investigation.
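The synchronous/asynchronous routing at block 450 can be sketched as a small dispatcher; the function and helper names here are hypothetical, not from the disclosure:

```python
def top_level_input(first_stage_outputs, synchronous, build_cov_matrices):
    """Route first-stage model outputs to the top-level model input.

    Synchronous collection: the per-observation state estimates feed the
    top-level model directly.  Asynchronous collection: the outputs are
    first reduced to the component/circuit and circuit/logical covariance
    matrices (via the hypothetical helper `build_cov_matrices`).
    """
    if synchronous:
        return first_stage_outputs          # aligned state estimates
    return build_cov_matrices(first_stage_outputs)

# Minimal usage with a stand-in covariance builder.
outputs = {"component": [0, 1], "circuit": [0, 1], "logical": [0, 0]}
sync_input = top_level_input(outputs, True, None)
async_input = top_level_input(outputs, False, lambda o: ("cov01", "cov12"))
print(sync_input is outputs, async_input)
```

Either branch ends at the same place: a single top-level input from which the model produces the next operational state estimate and/or MTBF prediction.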
  • It should be appreciated that the specific steps illustrated in FIG. 4 provide a particular method for predicting MTBF according to an embodiment. Other sequences of steps may also be performed according to alternative embodiments. For example, alternative embodiments may perform the steps outlined above in a different order. Moreover, the individual steps illustrated in FIG. 4 may include multiple sub-steps that may be performed in various sequences as appropriate to the individual step. Furthermore, additional steps may be added or removed depending on the particular applications. One of ordinary skill in the art would recognize many variations, modifications, and alternatives.
  • The method 400 may be embodied on a non-transitory computer readable medium, for example, but not limited to, a memory or other non-transitory computer readable medium known to those of skill in the art, having stored therein a program including computer executable instructions for making a processor, computer, or other programmable device execute the operations of the methods.
  • FIG. 5 is a block diagram of an example computing environment 500 with an example computing device in accordance with various aspects of the present disclosure. The example computing environment 500 may be suitable for use in some example implementations for collecting training data and executing a neural network model. Referring to FIG. 5, the computing device 505 in the example computing environment 500 may include one or more processing units, cores, or processors 510, memory 515 (e.g., RAM, ROM, and/or the like), internal storage 520 (e.g., magnetic, optical, solid state storage, and/or organic), and/or I/O interface 525, any of which may be coupled on a communication mechanism or a bus 530 for communicating information or embedded in the computing device 505.
  • The computing device 505 may be communicatively coupled to an input/user interface 535 and an output device/interface 540. Either one or both of the input/user interface 535 and the output device/interface 540 may be a wired or wireless interface and may be detachable. The input/user interface 535 may include any device, component, sensor, or interface, physical or virtual, that can be used to provide input (e.g., buttons, touch-screen interface, keyboard, a pointing/cursor control, microphone, camera, braille, motion sensor, optical reader, and/or the like). The output device/interface 540 may include a display, television, monitor, printer, speaker, braille, or the like. In some example implementations, the input/user interface 535 and the output device/interface 540 may be embedded with or physically coupled to the computing device 505. In other example implementations, other computing devices may function as or provide the functions of the input/user interface 535 and the output device/interface 540 for the computing device 505.
  • Examples of the computing device 505 may include, but are not limited to, mobile devices (e.g., tablets, notebooks, laptops, personal computers, portable televisions, radios, and the like), and devices not designed for mobility (e.g., desktop computers, other computers, and the like). The computing device 505 may be communicatively coupled (e.g., via the I/O interface 525) to an external storage device 545 and a network 550 for communicating with any number of networked components, devices, and systems, including one or more computing devices of the same or different configuration. The computing device 505 or any connected computing device may be functioning as, providing services of, or referred to as a server, client, thin server, general machine, special-purpose machine, or another label.
  • The I/O interface 525 may include, but is not limited to, wired and/or wireless interfaces using any communication or I/O protocols or standards (e.g., Ethernet, 802.11x, Universal Serial Bus, WiMax, modem, a cellular network protocol, and the like) for communicating information to and/or from at least all the connected components, devices, and network in the computing environment 500. The network 550 may be any network or combination of networks (e.g., the Internet, local area network, wide area network, a telephonic network, a cellular network, satellite network, and the like).
  • The computing device 505 can use and/or communicate using computer-usable or computer-readable media, including transitory media and non-transitory media. Transitory media include transmission media (e.g., metal cables, fiber optics), signals, carrier waves, and the like. Non-transitory media include magnetic media (e.g., disks and tapes), optical media (e.g., CD ROM, digital video disks, Blu-ray disks), solid state media (e.g., RAM, ROM, flash memory, solid-state storage), and other non-volatile storage or memory.
  • The computing device 505 can be used to implement techniques, methods, applications, processes, or computer-executable instructions in some example computing environments. Computer-executable instructions can be retrieved from transitory media, and stored on and retrieved from non-transitory media. The executable instructions may originate from one or more of any programming, scripting, and machine languages (e.g., C, C++, C#, Java, Visual Basic, Python, Perl, JavaScript, and others).
  • The processor(s) 510 may execute under any operating system (OS) (not shown), in a native or virtual environment. One or more applications may be deployed that include a logic unit 560, an application programming interface (API) unit 565, an input unit 570, an output unit 575, and an inter-unit communication mechanism 595 for the different units to communicate with each other, with the OS, and with other applications (not shown). The processor(s) 510 may further include a neural network processor 580. The neural network processor 580 may include multiple processors operating in parallel. The neural network processor 580 may implement neural networks, for example, but not limited to, Long Short Term Memory (LSTM) neural networks, feedforward neural network, radial basis function neural network, or other types of neural networks. For example, the neural network processor 580 may be used in an implementation of one or more processes described and/or shown in FIG. 3. The described units and elements can be varied in design, function, configuration, or implementation and are not limited to the descriptions provided.
  • In some instances, the logic unit 560 may be configured to control information flow among the units and direct the services provided by the API unit 565, the input unit 570, the output unit 575, and the neural network processor 580 in some example implementations. For example, the flow of one or more processes or implementations may be controlled by the logic unit 560 alone or in conjunction with the API unit 565.
  • In some implementations, the example computing environment 500 may be or may include a cloud computing platform.
  • The examples and embodiments described herein are for illustrative purposes only. Various modifications or changes in light thereof will be apparent to persons skilled in the art. These are to be included within the spirit and purview of this application, and the scope of the appended claims, which follow.

Claims (20)

What is claimed is:
1. A method comprising:
collecting, by a processor, historical observable data from one or more pieces of equipment of a same type, wherein the historical observable data is collected at different hierarchical levels of the one or more pieces of equipment, wherein the different hierarchical levels comprise a component level, and wherein first historical observable data of the historical observable data pertaining to the component level includes data pertaining to an integrated circuit, a capacitor, and a resistor of the one or more pieces of equipment;
collecting, by the processor, operational state indications of the one or more pieces of equipment corresponding to the collected historical observable data;
generating, by the processor and from the collected historical observable data and the collected operational state indications, a set of operational state models; and
generating, by the processor and from outputs of the set of operational state models, a top-level operational model operable to determine maintenance and replacement timing for the one or more pieces of equipment.
2. The method of claim 1, wherein the operational state indications comprise an operational state indication, a degraded state indication, and a failed state indication.
3. The method of claim 1, further comprising:
collecting, by the processor, the historical observable data asynchronously between the different hierarchical levels; and
in response to collecting the historical observable data asynchronously:
generating, by the processor, a first covariance matrix between outputs of a first hierarchical level operational state model and outputs of a second hierarchical level operational state model;
generating, by the processor, a second covariance matrix between the outputs of the second hierarchical level operational state model and outputs of a third hierarchical level operational state model; and
generating, by the processor, the top-level operational model using the first covariance matrix and the second covariance matrix as input.
4. The method of claim 3, further comprising temporally aligning, by the processor, the asynchronously collected historical observable data between the different hierarchical levels.
5. The method of claim 3, wherein each of the first hierarchical level operational state model, the second hierarchical level operational state model, and the third hierarchical level operational state model, outputs a single state probability estimate for a sequence of input observable data.
6. The method of claim 3, wherein the operational state indications are correlated to the asynchronously collected historical observable data for one of the different hierarchical levels.
7. The method of claim 3, further comprising:
generating, by the processor, a top-level model output based on a product of a highest probability estimation state from each of the first covariance matrix and the second covariance matrix,
wherein the top-level model output is a probability estimate of a next operational state or a mean time between failure (MTBF) for the one or more pieces of equipment.
8. The method of claim 1, further comprising:
collecting, by the processor, the historical observable data synchronously between the different hierarchical levels; and
in response to collecting the historical observable data synchronously:
generating, by the processor, the top-level operational model using a first hierarchical level operational state model, a second hierarchical level operational state model, and a third hierarchical level operational state model.
9. The method of claim 8, further comprising generating, by the processor, a top-level model output based on a product of probability estimation states from outputs of each of the first hierarchical level operational state model, the second hierarchical level operational state model, and the third hierarchical level operational state model.
10. The method of claim 1, wherein:
each operational state model of the set of operational state models is a machine learning model trained with the historical observable data collected from corresponding hierarchical levels; and
the top-level operational model is a machine learning model trained with outputs of the set of operational state models and corresponding operational state indications.
11. A non-transitory computer readable medium comprising instructions that, when executed by a processor, cause the processor to perform operations, the operations comprising:
collecting observable data from a piece of equipment, wherein the observable data is collected at different hierarchical levels of the piece of equipment, wherein the different hierarchical levels comprise a component level, and wherein first observable data of the observable data pertains to an integrated circuit, a capacitor, and a resistor of the piece of equipment;
inputting the collected observable data to a predictive model at a set of operational state models corresponding to the different hierarchical levels;
generating an output from each operational state model of the set of operational state models, the output being a state probability estimate for each of the different hierarchical levels; and
generating, from a top-level operational model, an output based on the outputs of the set of operational state models, wherein the output from the top-level operational model is a probability estimate of a next operational state or a mean time between failure (MTBF) for the piece of equipment.
12. The non-transitory computer readable medium of claim 11, wherein the next operational state comprises an operational state, a degraded state, or a failed state.
13. The non-transitory computer readable medium of claim 11, wherein the operations further comprise:
determining that the observable data is collected asynchronously between the different hierarchical levels; and
in response to determining that the observable data is collected asynchronously:
generating a first covariance matrix between outputs of a first hierarchical level operational state model and outputs of a second hierarchical level operational state model;
generating a second covariance matrix between the outputs of the second hierarchical level operational state model and outputs of a third hierarchical level operational state model; and
generating the output of the top-level operational model based on a product of a highest probability estimation state from each of the first covariance matrix and the second covariance matrix.
14. The non-transitory computer readable medium of claim 11, wherein the operations further comprise:
determining that the observable data is collected synchronously between the different hierarchical levels; and
in response to determining that the observable data is collected synchronously, generating the output from the top-level operational model based on a product of probability estimation states from outputs of each of a first hierarchical level operational state model, a second hierarchical level operational state model, and a third hierarchical level operational state model.
15. The non-transitory computer readable medium of claim 11, wherein:
each operational state model of the set of operational state models is a machine learning model trained with historical observable data collected from corresponding hierarchical levels; and
the top-level operational model is a machine learning model trained with outputs of the set of operational state models and corresponding operational state indications.
16. An apparatus comprising:
a memory configured to store instructions; and
a processor configured to communicate with the memory, the processor further configured to execute the instructions read from the memory, the instructions operable to cause the processor to perform operations including:
collecting observable data from a piece of equipment, wherein the observable data is collected at different hierarchical levels of the piece of equipment, wherein the different hierarchical levels comprise a component level, a circuit level, and a logical path level, and wherein first observable data of the observable data pertains to an integrated circuit, a capacitor, and a resistor of the piece of equipment;
inputting the collected observable data to a predictive model at a set of operational state models corresponding to the different hierarchical levels;
generating an output from each operational state model of the set of operational state models, the output being a state probability estimate for each of the different hierarchical levels; and
generating, from a top-level operational model, an output based on the outputs of the set of operational state models, wherein the output from the top-level operational model is a probability estimate of a next operational state or a mean time between failure (MTBF) for the piece of equipment.
17. The apparatus of claim 16, wherein the next operational state comprises an operational state, a degraded state, or a failed state.
18. The apparatus of claim 16, wherein the operations further comprise:
determining that the observable data is collected asynchronously between the different hierarchical levels; and
in response to determining that the observable data is collected asynchronously:
generating a first covariance matrix between outputs of a first hierarchical level operational state model and outputs of a second hierarchical level operational state model;
generating a second covariance matrix between the outputs of the second hierarchical level operational state model and outputs of a third hierarchical level operational state model; and
generating the output from the top-level operational model based on a product of a highest probability estimation state from each of the first covariance matrix and the second covariance matrix.
19. The apparatus of claim 16, wherein the operations further comprise:
determining that the observable data is collected synchronously between the different hierarchical levels; and
in response to determining that the observable data is collected synchronously, generating the output from the top-level operational model based on a product of probability estimation states from outputs of each of a first hierarchical level operational state model, a second hierarchical level operational state model, and a third hierarchical level operational state model.
20. The apparatus of claim 16, wherein second observable data of the observable data pertains to the logical path level, wherein the second observable data includes data pertaining to a transmitter circuit of the piece of equipment, wherein the data pertaining to the transmitter circuit includes data pertaining to the transmitter circuit operating in a first power mode and data pertaining to the transmitter circuit operating in a second power mode, and wherein the second power mode is a higher power mode relative to the first power mode.
US17/529,326 2019-05-21 2021-11-18 Systems and method for management and allocation of network assets Abandoned US20220075361A1 (en)


Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16/418,183 US11209808B2 (en) 2019-05-21 2019-05-21 Systems and method for management and allocation of network assets
US17/529,326 US20220075361A1 (en) 2019-05-21 2021-11-18 Systems and method for management and allocation of network assets

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/418,183 Continuation US11209808B2 (en) 2019-05-21 2019-05-21 Systems and method for management and allocation of network assets

Publications (1)

Publication Number Publication Date
US20220075361A1 (en) 2022-03-10

Family

ID=73456612


Country Status (1)

Country Link
US (2) US11209808B2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11562297B2 (en) * 2020-01-17 2023-01-24 Apple Inc. Automated input-data monitoring to dynamically adapt machine-learning techniques

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020091972A1 (en) * 2001-01-05 2002-07-11 Harris David P. Method for predicting machine or process faults and automated system for implementing same
US7206646B2 (en) * 1999-02-22 2007-04-17 Fisher-Rosemount Systems, Inc. Method and apparatus for performing a function in a plant using process performance monitoring with process equipment monitoring and control
US8489360B2 (en) * 2006-09-29 2013-07-16 Fisher-Rosemount Systems, Inc. Multivariate monitoring and diagnostics of process variable data
US20160291552A1 (en) * 2014-11-18 2016-10-06 Prophecy Sensors, Llc System for rule management, predictive maintenance and quality assurance of a process and machine using reconfigurable sensor networks and big data machine learning
US20180013376A1 (en) * 2015-02-27 2018-01-11 Mitsubishi Electric Corporation Electric motor control device
US20180031256A1 (en) * 2016-07-27 2018-02-01 Johnson Controls Technology Company Systems and methods for interactive hvac maintenance interface

Family Cites Families (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5311562A (en) * 1992-12-01 1994-05-10 Westinghouse Electric Corp. Plant maintenance with predictive diagnostics
KR100212608B1 (en) 1996-01-12 1999-08-02 Kaneko Hisashi Cmos integrated circuit failure diagnosis apparatus and diagnostic method
US6820215B2 (en) 2000-12-28 2004-11-16 International Business Machines Corporation System and method for performing automatic rejuvenation at the optimal time based on work load history in a distributed data processing environment
AU2002235516A1 (en) 2001-01-08 2002-07-16 Vextec Corporation Method and apparatus for predicting failure in a system
US8266066B1 (en) * 2001-09-04 2012-09-11 Accenture Global Services Limited Maintenance, repair and overhaul management
JP4333331B2 (en) 2002-12-20 2009-09-16 Seiko Epson Corp. Failure prediction system, failure prediction program, and failure prediction method
US7457725B1 (en) 2003-06-24 2008-11-25 Cisco Technology Inc. Electronic component reliability determination system and method
DE10331207A1 (en) 2003-07-10 2005-01-27 Daimlerchrysler Ag Method and apparatus for predicting failure frequency
JP4500063B2 (en) 2004-02-06 2010-07-14 Fujitsu Ltd. Electronic device, prediction method, and prediction program
JP4720295B2 (en) 2005-06-02 2011-07-13 NEC Corp. Abnormality detection system and maintenance system
CN1967573A (en) 2005-11-17 2007-05-23 Hongfujin Precision Industry (Shenzhen) Co., Ltd. System and method for reliability analysis of electron products
US7496796B2 (en) 2006-01-23 2009-02-24 International Business Machines Corporation Apparatus, system, and method for predicting storage device failure
US7539907B1 (en) 2006-05-05 2009-05-26 Sun Microsystems, Inc. Method and apparatus for determining a predicted failure rate
US8103463B2 (en) 2006-09-21 2012-01-24 Impact Technologies, Llc Systems and methods for predicting failure of electronic systems and assessing level of degradation and remaining useful life
US7912669B2 (en) 2007-03-27 2011-03-22 Honeywell International Inc. Prognosis of faults in electronic circuits
US8949671B2 (en) 2008-01-30 2015-02-03 International Business Machines Corporation Fault detection, diagnosis, and prevention for complex computing systems
US8140914B2 (en) 2009-06-15 2012-03-20 Microsoft Corporation Failure-model-driven repair and backup
WO2012024692A2 (en) * 2010-08-20 2012-02-23 Federspiel Clifford C Energy-optimal control decisions for hvac systems
GB2504081B (en) 2012-07-16 2019-09-18 Bae Systems Plc Assessing performance of a vehicle system
US20140188405A1 (en) 2012-12-28 2014-07-03 International Business Machines Corporation Predicting a time of failure of a device
GB2516840A (en) 2013-07-31 2015-02-11 Bqr Reliability Engineering Ltd Failure rate estimation from multiple failure mechanisms
US9500705B2 (en) 2013-08-28 2016-11-22 Wisconsin Alumni Research Foundation Integrated circuit providing fault prediction
US10223230B2 (en) 2013-09-11 2019-03-05 Dell Products, Lp Method and system for predicting storage device failures
US9400731B1 (en) 2014-04-23 2016-07-26 Amazon Technologies, Inc. Forecasting server behavior
US9348710B2 (en) 2014-07-29 2016-05-24 Saudi Arabian Oil Company Proactive failure recovery model for distributed computing using a checkpoint frequency determined by a MTBF threshold
US9542296B1 (en) 2014-12-01 2017-01-10 Amazon Technologies, Inc. Disk replacement using a predictive statistical model
KR101651883B1 (en) 2014-12-31 2016-08-29 Hyosung Corp. Method for monitoring state of capacitor in modular converter
US20160292652A1 (en) 2015-04-03 2016-10-06 Chevron Pipe Line Company Predictive analytic reliability tool set for detecting equipment failures
US10048996B1 (en) 2015-09-29 2018-08-14 Amazon Technologies, Inc. Predicting infrastructure failures in a data center for hosted service mitigation actions

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Espacenet, Machine translation, Kang Deok Hun et al., "Method and apparatus for prediction maintenance" KR101713985B1, 03/09/2017 (Year: 2017) *

Also Published As

Publication number Publication date
US11209808B2 (en) 2021-12-28
US20200371510A1 (en) 2020-11-26

Similar Documents

Publication Publication Date Title
US20210034449A1 (en) Integrated model for failure diagnosis and prognosis
EP3407267A1 (en) Deep learning network architecture optimization for uncertainty estimation in regression
Li et al. Particle filtering based likelihood ratio approach to fault diagnosis in nonlinear stochastic systems
US20170235626A1 (en) Anomaly Fusion on Temporal Casualty Graphs
Fan et al. A sequential Bayesian approach for remaining useful life prediction of dependent competing failure processes
Si et al. Prognostics for linear stochastic degrading systems with survival measurements
US20220004182A1 (en) Approach to determining a remaining useful life of a system
Friederich et al. Towards data-driven reliability modeling for cyber-physical production systems
CN111708876A (en) Method and device for generating information
JP5193533B2 (en) Remote monitoring system and remote monitoring method
Pavlov et al. A note on the "mean value" software reliability model
US20230133541A1 (en) Alert correlating using sequence model with topology reinforcement systems and methods
US20220075361A1 (en) Systems and method for management and allocation of network assets
Zin et al. Reliability and availability measures for Internet of Things consumer world perspectives
KR20190078850A (en) Method for estimation on online multivariate time series using ensemble dynamic transfer models and system thereof
Qiao et al. An empirical study on software aging indicators prediction in Android mobile
Sheppard et al. Bayesian diagnosis and prognosis using instrument uncertainty
KR102107689B1 (en) Apparatus and method for analyzing cause of network failure
Kayode et al. Lirul: A lightweight lstm based model for remaining useful life estimation at the edge
Natsumeda et al. RULENet: end-to-end learning with the dual-estimator for remaining useful life estimation
Dick et al. Detecting changes and avoiding catastrophic forgetting in dynamic partially observable environments
Ding et al. Backward inference in bayesian networks for distributed systems management
Brosinsky et al. Machine learning and digital twins: monitoring and control for dynamic security in power systems
Ma et al. A Predictive Online Transient Stability Assessment with Hierarchical Generative Adversarial Networks
EP4099116B1 (en) System and method for contextually-informed fault diagnostics using structural-temporal analysis of fault propagation graphs

Legal Events

Date Code Title Description
AS Assignment

Owner name: AT&T INTELLECTUAL PROPERTY I, L.P., GEORGIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MAPPUS, RUDOLPH;MORRIS, MARK;REEL/FRAME:058229/0509

Effective date: 20190521

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION