US11227209B2 - Systems and methods for predicting information handling resource failures using deep recurrent neural network with a modified gated recurrent unit having missing data imputation - Google Patents

Info

Publication number
US11227209B2
US11227209B2
Authority
US
United States
Prior art keywords
information handling
failure
processor
gated recurrent
recurrent unit
Prior art date
Legal status
Active
Application number
US16/528,081
Other versions
US20210034949A1 (en)
Inventor
Ashutosh Singh
Landon Martin CHAMBERS
Current Assignee
Dell Products LP
Original Assignee
Dell Products LP
Priority date
Filing date
Publication date
Assigned to DELL PRODUCTS L.P. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHAMBERS, LANDON MARTIN; SINGH, ASHUTOSH
Priority to US16/528,081
Application filed by Dell Products LP
Assigned to CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH. SECURITY AGREEMENT. Assignors: DELL PRODUCTS L.P., EMC CORPORATION, EMC IP Holding Company LLC
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT. PATENT SECURITY AGREEMENT (NOTES). Assignors: DELL PRODUCTS L.P., EMC CORPORATION, EMC IP Holding Company LLC
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. SECURITY AGREEMENT. Assignors: CREDANT TECHNOLOGIES INC., DELL INTERNATIONAL L.L.C., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL USA L.P., EMC CORPORATION, EMC IP Holding Company LLC, FORCE10 NETWORKS, INC., WYSE TECHNOLOGY L.L.C.
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT. SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DELL PRODUCTS L.P., EMC CORPORATION, EMC IP Holding Company LLC
Publication of US20210034949A1
Assigned to DELL PRODUCTS L.P., EMC CORPORATION, EMC IP Holding Company LLC. RELEASE OF SECURITY INTEREST AT REEL 050406 FRAME 421. Assignors: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH
Publication of US11227209B2
Application granted
Assigned to DELL PRODUCTS L.P., EMC CORPORATION, EMC IP Holding Company LLC. RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME 050724/0571. Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT
Assigned to EMC IP Holding Company LLC, DELL PRODUCTS L.P., EMC CORPORATION. RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME 053311/0169. Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06N3/0445
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/008Reliability or availability analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/042Knowledge-based neural networks; Logical representations of neural networks

Definitions

  • the modified GRU may take in a data set with missing values, masking vectors, and the modified inputs (as described below) to make predictions.
  • the disclosed approach may modify the inputs and the hidden states for a GRU using decay (which may be calculated using the time interval and masking vector), and such modified inputs, modified hidden state, and masking vector may then be fed to the modified GRU.
  • the use of the modified GRU for prediction may have advantages over LSTM and other known approaches for data imputation.
  • the modified GRU imputation approach described herein may be capable of exploiting the time-series nature of the training data, using the last observation, the time since the last observation, and the distribution of a predictor to make more accurate estimates for missing values of the training data.
  • the use of the modified GRU imputation approach described herein may assume no correlation and may only require a single prediction step.
  • the modified GRU imputation approach described herein may enable combination of imputation and training into a single step, eliminating the need for storing imputed datasets.
  • the additional computation cost associated with imputation in the modified GRU imputation approach described herein may be at least partly offset by the low computation expense associated with GRUs when compared to LSTMs.
  • references in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. Accordingly, modifications, additions, or omissions may be made to the systems, apparatuses, and methods described herein without departing from the scope of the disclosure. For example, the components of the systems and apparatuses may be integrated or separated.
  • as used herein, "each" refers to each member of a set or each member of a subset of a set.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A method may include receiving telemetry data associated with one or more information handling resources, receiving failure statistics associated with the one or more information handling resources, merging the telemetry data and the failure statistics to create training data, and implementing a gated recurrent unit to: (i) impute missing values from the training data and (ii) train a pattern recognition engine configured to predict a failure status of an information handling resource from operational data associated with the information handling resource.

Description

TECHNICAL FIELD
The present disclosure relates in general to information handling systems, and more particularly to methods and systems for predicting information handling resource failures using a deep recurrent neural network having a modified gated recurrent unit capable of imputing missing training data, and performing imputation for training, test, and prediction steps.
BACKGROUND
As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option available to users is information handling systems. An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes thereby allowing users to take advantage of the value of the information. Because technology and information handling needs and requirements vary between different users or applications, information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated. The variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications. In addition, information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
Many information handling resources, in particular hard disk drives and batteries, may suffer from faults or failures that require replacement. However, replacement of such devices after failure or fault may be undesirable, as it leads to system downtime. Accordingly, systems and methods for predicting component failure, in order to enable pre-failure replacement of information handling resources, are desired.
One approach to predicting component failure is given by U.S. patent application Ser. No. 15/861,039, which uses a long short-term memory (LSTM) recurrent neural network. However, the approach disclosed in U.S. patent application Ser. No. 15/861,039 may have disadvantages. For example, telemetry data may be collected at irregular frequencies, wherein the time between collections may be inconsistent. In addition, recorded telemetry data may have fields missing at random. In addition, data imputation methods employed by the LSTM approach (e.g., discrete cosine transformation) disclosed in U.S. patent application Ser. No. 15/861,039 may not be scalable to large data sets, as the cosine transform approach may require many elements within the telemetry data in order to represent the signals. In addition, the LSTM approach requires a step for imputation followed by a step for training.
SUMMARY
In accordance with the teachings of the present disclosure, the disadvantages and problems associated with addressing failures of information handling resources in an information handling system may be reduced or eliminated.
In accordance with embodiments of the present disclosure, an information handling system may include a processor and a non-transitory computer-readable medium having stored thereon a program of instructions executable by the processor. The program of instructions may be configured to, when read and executed by the processor, receive telemetry data associated with one or more information handling resources, receive failure statistics associated with the one or more information handling resources, merge the telemetry data and the failure statistics to create training data, and implement a gated recurrent unit to: (i) impute missing values from the training data and (ii) train a pattern recognition engine configured to predict a failure status of an information handling resource from operational data associated with the information handling resource.
In accordance with these and other embodiments of the present disclosure, a method may include receiving telemetry data associated with one or more information handling resources, receiving failure statistics associated with the one or more information handling resources, merging the telemetry data and the failure statistics to create training data, and implementing a gated recurrent unit to: (i) impute missing values from the training data and (ii) train a pattern recognition engine configured to predict a failure status of an information handling resource from operational data associated with the information handling resource.
In accordance with these and other embodiments of the present disclosure, an article of manufacture may include a non-transitory computer-readable medium and computer-executable instructions carried on the computer readable medium, the instructions readable by a processor. The instructions, when read and executed, may cause the processor to receive telemetry data associated with one or more information handling resources, receive failure statistics associated with the one or more information handling resources, merge the telemetry data and the failure statistics to create training data, and implement a gated recurrent unit to: (i) impute missing values from the training data; and (ii) train a pattern recognition engine configured to predict a failure status of an information handling resource from operational data associated with the information handling resource.
Technical advantages of the present disclosure may be readily apparent to one skilled in the art from the figures, description and claims included herein. The objects and advantages of the embodiments will be realized and achieved at least by the elements, features, and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are examples and explanatory and are not restrictive of the claims set forth in this disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete understanding of the present embodiments and advantages thereof may be acquired by referring to the following description taken in conjunction with the accompanying drawings, in which like reference numbers indicate like features, and wherein:
FIG. 1 illustrates a block diagram of an example client information handling system, in accordance with embodiments of the present disclosure;
FIG. 2 illustrates a block diagram of an example system for predicting information handling resource failures, in accordance with embodiments of the present disclosure;
FIG. 3 illustrates a functional block diagram of the central support engine depicted in FIG. 2, in accordance with embodiments of the present disclosure; and
FIG. 4 illustrates a functional block diagram of a gated recurrent unit, in accordance with embodiments of the present disclosure.
DETAILED DESCRIPTION
Preferred embodiments and their advantages are best understood by reference to FIGS. 1 through 4, wherein like numbers are used to indicate like and corresponding parts.
For the purposes of this disclosure, an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, classify, process, transmit, receive, retrieve, originate, switch, store, display, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, entertainment, or other purposes. For example, an information handling system may be a personal computer, a personal digital assistant (PDA), a consumer electronic device, a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price. The information handling system may include memory, one or more processing resources such as a central processing unit (“CPU”) or hardware or software control logic. Additional components of the information handling system may include one or more storage devices, one or more communications ports for communicating with external devices as well as various input/output (“I/O”) devices, such as a keyboard, a mouse, and a video display. The information handling system may also include one or more buses operable to transmit communication between the various hardware components.
For the purposes of this disclosure, computer-readable media may include any instrumentality or aggregation of instrumentalities that may retain data and/or instructions for a period of time. Computer-readable media may include, without limitation, storage media such as a direct access storage device (e.g., a hard disk drive or floppy disk), a sequential access storage device (e.g., a tape disk drive), compact disk, CD-ROM, DVD, random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and/or flash memory; as well as communications media such as wires, optical fibers, microwaves, radio waves, and other electromagnetic and/or optical carriers; and/or any combination of the foregoing.
For the purposes of this disclosure, information handling resources may broadly refer to any component system, device or apparatus of an information handling system, including without limitation processors, service processors, basic input/output systems (BIOSs), buses, memories, I/O devices and/or interfaces, storage resources, network interfaces, motherboards, and/or any other components and/or elements of an information handling system.
FIG. 1 illustrates a block diagram of an example client information handling system 102, in accordance with embodiments of the present disclosure. In some embodiments, client information handling system 102 may comprise a server. In other embodiments, client information handling system 102 may be a personal computer (e.g., a desktop computer, a laptop, notebook, tablet, handheld, smart phone, personal digital assistant, etc.). As depicted in FIG. 1, client information handling system 102 may include a processor 103, a memory 104 communicatively coupled to processor 103, a storage medium 106 communicatively coupled to processor 103, a basic input/output system (BIOS) 105 communicatively coupled to processor 103, a network interface 108 communicatively coupled to processor 103, and one or more other information handling resources 120 communicatively coupled to processor 103.
Processor 103 may include any system, device, or apparatus configured to interpret and/or execute program instructions and/or process data, and may include, without limitation, a microprocessor, microcontroller, digital signal processor (DSP), application specific integrated circuit (ASIC), or any other digital or analog circuitry configured to interpret and/or execute program instructions and/or process data. In some embodiments, processor 103 may interpret and/or execute program instructions and/or process data stored in memory 104, storage medium 106, BIOS 105, and/or another component of client information handling system 102.
Memory 104 may be communicatively coupled to processor 103 and may include any system, device, or apparatus configured to retain program instructions and/or data for a period of time (e.g., computer-readable media). Memory 104 may include RAM, EEPROM, a PCMCIA card, flash memory, magnetic storage, opto-magnetic storage, or any suitable selection and/or array of volatile or non-volatile memory that retains data after power to client information handling system 102 is turned off.
Storage medium 106 may be communicatively coupled to processor 103 and may include any system, device, or apparatus operable to store information processed by processor 103. Storage medium 106 may include, for example, network attached storage, one or more direct access storage devices (e.g., hard disk drives), and/or one or more sequential access storage devices (e.g., tape drives). As shown in FIG. 1, storage medium 106 may have stored thereon an operating system (OS) 114, and a client support engine 116.
OS 114 may be any program of executable instructions, or aggregation of programs of executable instructions, configured to manage and/or control the allocation and usage of hardware resources such as memory, CPU time, disk space, and input and output devices, and provide an interface between such hardware resources and application programs hosted by OS 114. Active portions of OS 114 may be transferred to memory 104 for execution by processor 103.
Client support engine 116 may comprise a program of instructions configured to, when loaded into memory 104 and executed by processor 103, perform one or more tasks related to collection and communication (e.g., via network interface 108) of telemetry information associated with information handling resources of client information handling system 102 (including, without limitation, storage medium 106 and information handling resources 120), as is described in greater detail elsewhere in this disclosure.
BIOS 105 may be communicatively coupled to processor 103 and may include any system, device, or apparatus configured to identify, test, and/or initialize information handling resources of client information handling system 102. “BIOS” may broadly refer to any system, device, or apparatus configured to perform such functionality, including without limitation, a Unified Extensible Firmware Interface (UEFI). In some embodiments, BIOS 105 may be implemented as a program of instructions that may be read by and executed on processor 103 to carry out the functionality of BIOS 105. In these and other embodiments, BIOS 105 may comprise boot firmware configured to be the first code executed by processor 103 when client information handling system 102 is booted and/or powered on. As part of its initialization functionality, code for BIOS 105 may be configured to set components of client information handling system 102 into a known state, so that one or more applications (e.g., operating system 114 or other application programs) stored on compatible media (e.g., memory 104, storage medium 106) may be executed by processor 103 and given control of client information handling system 102.
Network interface 108 may include any suitable system, apparatus, or device operable to serve as an interface between client information handling system 102 and a network external to client information handling system 102 (e.g., network 210 depicted in FIG. 2). Network interface 108 may allow client information handling system 102 to communicate via an external network using any suitable transmission protocol and/or standard.
Generally speaking, information handling resources 120 may include any component system, device or apparatus of client information handling system 102, including without limitation processors, buses, computer-readable media, input-output devices and/or interfaces, storage resources, network interfaces, motherboards, electro-mechanical devices (e.g., fans), displays, batteries, and/or power supplies.
FIG. 2 illustrates a block diagram of an example system 200 for predicting information handling resource failures, in accordance with embodiments of the present disclosure. As shown in FIG. 2, system 200 may include a plurality of client information handling systems 102 (such as those depicted in FIG. 1), a central information handling system 202, and a network 210 communicatively coupled to client information handling systems 102 and central information handling system 202.
In some embodiments, central information handling system 202 may comprise a server. In other embodiments, central information handling system 202 may be a personal computer (e.g., a desktop computer, a laptop, notebook, tablet, handheld, smart phone, personal digital assistant, etc.). As depicted in FIG. 2, central information handling system 202 may include a processor 203, a memory 204 communicatively coupled to processor 203, a storage medium 206 communicatively coupled to processor 203, and a network interface 208 communicatively coupled to processor 203.
Processor 203 may include any system, device, or apparatus configured to interpret and/or execute program instructions and/or process data, and may include, without limitation, a microprocessor, microcontroller, digital signal processor (DSP), application specific integrated circuit (ASIC), or any other digital or analog circuitry configured to interpret and/or execute program instructions and/or process data. In some embodiments, processor 203 may interpret and/or execute program instructions and/or process data stored in memory 204, storage medium 206, and/or another component of central information handling system 202.
Memory 204 may be communicatively coupled to processor 203 and may include any system, device, or apparatus configured to retain program instructions and/or data for a period of time (e.g., computer-readable media). Memory 204 may include RAM, EEPROM, a PCMCIA card, flash memory, magnetic storage, opto-magnetic storage, or any suitable selection and/or array of volatile or non-volatile memory that retains data after power to central information handling system 202 is turned off.
Storage medium 206 may be communicatively coupled to processor 203 and may include any system, device, or apparatus operable to store information processed by processor 203. Storage medium 206 may include, for example, network attached storage, one or more direct access storage devices (e.g., hard disk drives), and/or one or more sequential access storage devices (e.g., tape drives). As shown in FIG. 2, storage medium 206 may have stored thereon an operating system (OS) 214, and a central support engine 216.
OS 214 may be any program of executable instructions, or aggregation of programs of executable instructions, configured to manage and/or control the allocation and usage of hardware resources such as memory, CPU time, disk space, and input and output devices, and provide an interface between such hardware resources and application programs hosted by OS 214. Active portions of OS 214 may be transferred to memory 204 for execution by processor 203.
Central support engine 216 may comprise a program of instructions configured to, when loaded into memory 204 and executed by processor 203, perform one or more tasks related to receipt of telemetry information from client information handling systems 102, receipt of data regarding actual failure of information handling resources, and correlation of such telemetry information and failure information to predict the occurrence of failures of information handling resources of client information handling systems 102, as is described in greater detail elsewhere in this disclosure.
Network interface 208 may include any suitable system, apparatus, or device operable to serve as an interface between central information handling system 202 and network 210. Network interface 208 may allow central information handling system 202 to communicate via an external network using any suitable transmission protocol and/or standard.
In addition to or in lieu of one or more of processor 203, memory 204, storage medium 206, and network interface 208, central information handling system 202 may comprise one or more other information handling resources.
Network 210 may comprise a network and/or fabric configured to couple information handling systems of system 200 (e.g., client information handling systems 102 and central information handling system 202) to one another. Thus, central information handling system 202 may be able to access, via network 210, telemetry data collected and communicated by client support engines 116 executing on client information handling systems 102.
FIG. 3 illustrates a functional block diagram of central support engine 216 depicted in FIG. 2, in accordance with embodiments of the present disclosure. As shown in FIG. 3, central support engine 216 may implement an input processing unit 302, a recurrent neural network with modified gated recurrent unit (RNN/GRU) 304 having missing data imputation, and a rule-based decision engine 306.
Input processing unit 302 may receive telemetry data from client information handling systems 102 and may also receive failure statistics regarding client information handling systems 102. Such telemetry data may include any operational data associated with an information handling resource of a client information handling system 102. For example, telemetry data may include information regarding performance of an information handling resource, environmental conditions associated with an information handling resource, or any other suitable operational data regarding an information handling resource. As a specific example, telemetry data for a hard disk drive may include information regarding cyclic redundancy check errors, volume of read input/output, volume of write input/output, operating temperature, rotation rate of rotational media, number of power cycles, amount of time the hard disk drive is powered on, and/or other parameters. Failure statistics may include, for each information handling resource from which telemetry data is received, an indication of a failure status of the information handling resource (e.g., failed, about to fail, healthy). In some embodiments, failure statistics may be received from a repair and/or servicing facility that may manually or automatically inspect information handling resources for their health status.
Input processing unit 302 may merge telemetry data and the failure statistics to create one or more labeled time series patterns, which it may output to RNN/GRU 304 as training data. Input processing unit 302 may generate the time series patterns to have any suitable length and may sample telemetry data and failure statistics at any appropriate sampling frequency.
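For illustration only, a minimal sketch of such a merge is given below, assuming pandas is available and assuming hypothetical column names such as device_id, timestamp, and failure_status; the disclosure does not prescribe a particular schema or implementation:

    import pandas as pd

    def build_training_sequences(telemetry: pd.DataFrame,
                                 failures: pd.DataFrame,
                                 seq_len: int = 30):
        """Merge raw telemetry with failure statistics into labeled
        time-series patterns (one sequence of seq_len samples per device)."""
        # Attach the reported failure status (e.g., healthy / about to fail /
        # failed) to every telemetry record for the same device.
        merged = telemetry.merge(failures[["device_id", "failure_status"]],
                                 on="device_id", how="inner")
        sequences, labels = [], []
        for device_id, group in merged.groupby("device_id"):
            group = group.sort_values("timestamp")
            features = group.drop(columns=["device_id", "timestamp",
                                           "failure_status"])
            # Keep the most recent seq_len observations; missing values are
            # left as NaN so the modified GRU can impute them later.
            sequences.append(features.tail(seq_len).to_numpy())
            labels.append(group["failure_status"].iloc[-1])
        return sequences, labels

The resulting labeled sequences correspond to the time series patterns that input processing unit 302 may pass to RNN/GRU 304 as training data.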
RNN/GRU 304 may receive the time series data as training data, such that RNN/GRU 304 may perform as a pattern recognition engine. Thus, in operation, once trained, RNN/GRU 304 may monitor telemetry data from information handling resources of client information handling systems 102 and predict a failure status (e.g., failed, about to fail, healthy) based on pattern analysis of the telemetry data. Accordingly, RNN/GRU 304 may predict a failure of an information handling resource before it actually occurs. As explained in greater detail below, an unmodified recurrent neural network may be unable to handle uneven time gaps in the samples or the time series of its training data; RNN/GRU 304 therefore imputes missing data from the training data in order to perform training and prediction.
Based on the predicted failure status, rules-based decision engine 306 may generate a decision for one or more information handling resources. Rules applied by rules-based decision engine 306 may consider warranty status of an information handling resource, criticality of the information handling resource, service/support level of the information handling resource, and/or any other suitable factor. For information handling resources predicted to have a status of failed or about to fail, the decision generated by rules-based decision engine 306 may comprise any remedial action to be taken in response to the status, including dispatch of a replacement information handling resource, dispatch of a technician to repair or replace the information handling resource, and/or communication of an alert regarding the information handling resource.
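A minimal sketch of such rule evaluation follows; the specific statuses, factors, and actions are illustrative assumptions rather than rules taken from the disclosure:

    def decide_remedial_action(predicted_status: str,
                               under_warranty: bool,
                               critical: bool) -> str:
        """Toy rules-based decision: map a predicted failure status and a few
        resource attributes to a remedial action."""
        if predicted_status == "failed":
            return "dispatch_replacement" if under_warranty else "alert_customer"
        if predicted_status == "about_to_fail":
            if critical:
                return "dispatch_technician"
            return "dispatch_replacement" if under_warranty else "alert_customer"
        return "no_action"  # healthy

In practice, rules-based decision engine 306 would evaluate rules of this kind against the failure status predicted by RNN/GRU 304 for each monitored resource.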
FIG. 4 illustrates a functional block diagram of a gated recurrent unit 400, in accordance with embodiments of the present disclosure. A gated recurrent unit (GRU) may perform functions similar to an LSTM, but with fewer steps. GRUs may be computationally less expensive than LSTMs and may be fine-tuned to achieve similar levels of accuracy. GRU 400 may comprise a cell, a reset gate, and an update gate. GRU 400, unlike an LSTM, may not have a forget gate and need not store a cell state. Accordingly, compared to an LSTM, GRU 400 may have lower computational requirements, as it may eliminate the processing required to calculate the forget gate and the storage required to maintain the cell state. GRU 400 may calculate the future state based on the last output and the current input.
As background, a multivariate time series with D variables of length T may be denoted as:
$$X = (x_1, x_2, \ldots, x_T)^T \in \mathbb{R}^{T \times D},$$
where, for each $t \in \{1, 2, \ldots, T\}$, $x_t \in \mathbb{R}^D$ represents the $t$-th observation of all variables and $x_t^d$ denotes the $d$-th variable of $x_t$. $s_t \in \mathbb{R}$ denotes the time stamp for the $t$-th observation, with $s_1 = 0$ for all variables. To keep track of missing values, a masking vector $m_t \in \{0, 1\}^D$ may be used, which is 0 for missing values and 1 otherwise. Another vector, $\delta_t^d \in \mathbb{R}$, may be used to maintain the time interval since the last observation. Mathematically, such vectors may be written as:
$$m_t^d = \begin{cases} 1 & \text{if } x_t^d \text{ is observed} \\ 0 & \text{otherwise} \end{cases} \qquad \delta_t^d = \begin{cases} s_t - s_{t-1} + \delta_{t-1}^d, & t > 1,\ m_{t-1}^d = 0 \\ s_t - s_{t-1}, & t > 1,\ m_{t-1}^d = 1 \\ 0, & t = 1 \end{cases}$$
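For illustration, the masking vector and time-interval vector defined above might be computed as in the following sketch, assuming missing observations are encoded as NaN and time stamps are supplied as a one-dimensional array (both assumptions of this example, not requirements of the disclosure).

```python
import numpy as np

def mask_and_intervals(x: np.ndarray, s: np.ndarray):
    """x: (T, D) observations with NaN for missing values; s: (T,) time stamps."""
    T, D = x.shape
    m = (~np.isnan(x)).astype(float)   # m[t, d] = 1 if x[t, d] is observed, else 0
    delta = np.zeros((T, D))           # delta[0, :] = 0 by definition
    for t in range(1, T):
        for d in range(D):
            if m[t - 1, d] == 0:
                # Previous value missing: accumulate the elapsed interval.
                delta[t, d] = s[t] - s[t - 1] + delta[t - 1, d]
            else:
                delta[t, d] = s[t] - s[t - 1]
    return m, delta
```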
A GRU such as that shown in FIG. 4 may be mathematically written as,
$$R_t = \sigma(W_r[X_t, h_{t-1}] + B_r)$$
$$Z_t = \sigma(W_z[X_t, h_{t-1}] + B_z)$$
$$h'_t = \tanh(W_h[X_t, R_t \odot h_{t-1}] + B_h)$$
$$h_t = (1 - Z_t) \odot h_{t-1} + Z_t \odot h'_t$$
wherein: (i) $R_t$ and $Z_t$ are the reset and update gates for the $t$-th time period, respectively; (ii) $h'_t$ and $h_t$ are the input and output for the $t$-th time period and comprise the information added to the cell using the update gate; (iii) $W$ and $B$ are weight and bias matrices, with subscripts $r$ and $z$ pertaining to the reset and update gates, respectively; and (iv) $\sigma$ and $\tanh$ are the sigmoid and hyperbolic tangent activation functions. In operation, $h_t$ may be passed to a fully-connected output layer to calculate the output for the $t$-th time period. The output from the output layer may be the estimate of the response variable for the $t$-th time period and may be used to calculate the loss and initiate the gradient for back-propagation.
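A minimal, framework-free sketch of one such GRU step (forward pass only, with illustrative shapes and no training loop) might look as follows; it simply restates the four equations above in NumPy and is not intended as the disclosed implementation.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, Wr, Wz, Wh, Br, Bz, Bh):
    concat = np.concatenate([x_t, h_prev])           # [X_t, h_{t-1}]
    r_t = sigmoid(Wr @ concat + Br)                  # reset gate R_t
    z_t = sigmoid(Wz @ concat + Bz)                  # update gate Z_t
    concat_r = np.concatenate([x_t, r_t * h_prev])   # [X_t, R_t ⊙ h_{t-1}]
    h_cand = np.tanh(Wh @ concat_r + Bh)             # candidate state h'_t
    return (1.0 - z_t) * h_prev + z_t * h_cand       # new hidden state h_t
```

In practice, the returned hidden state would be passed to a fully-connected output layer, as described above, to produce the estimate used for the loss and back-propagation.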
GRU 400 may be further modified to learn the distributions of predictor variables by adding weight matrices to the GRU equations and modifying the input variables. For example, a decay rate may be used to modify the inputs and the hidden state. Such a decay rate may be given by:
$$\gamma_t = \exp[-\max(0, W_\gamma \delta_t + b_\gamma)],$$
where $W_\gamma$ and $b_\gamma$ may be trained jointly with all other parameters of GRU 400. In some embodiments, two versions of the decay function given above may be used. The first decay function may be used to modify inputs to GRU 400 and may be given by:
$$\hat{x}_t^d = m_t^d x_t^d + (1 - m_t^d)\left(\gamma_{x_t}^d x_{t'}^d + (1 - \gamma_{x_t}^d)\,\tilde{x}^d\right),$$
wherein $\gamma_{x_t}^d$ is the decay for the input value $x_t^d$, $x_{t'}^d$ is the last observation of the $d$-th variable, and $\tilde{x}^d$ is the empirical mean of the $d$-th variable. $W_{\gamma_x}$ may be constrained to be diagonal, effectively making the decay rate independent for each predictor. The other decay function may be used to decay the hidden state $h_{t-1}$ according to:
$$\hat{h}_{t-1} = \gamma_{h_t} \odot h_{t-1},$$
where the weight $W_{\gamma_h}$ corresponding to the hidden-state decay is not constrained to be diagonal. In addition to the above modifications to GRU 400, a masking vector may be added to the GRU equations using additional weight matrices, such that the inputs to GRU 400 may be given as:
$$R_t = \sigma(W_r[\hat{X}_t, \hat{h}_{t-1}] + V_r m_t + B_r)$$
$$Z_t = \sigma(W_z[\hat{X}_t, \hat{h}_{t-1}] + V_z m_t + B_z)$$
$$h'_t = \tanh(W_h[\hat{X}_t, R_t \odot \hat{h}_{t-1}] + V_h m_t + B_h)$$
$$h_t = (1 - Z_t) \odot \hat{h}_{t-1} + Z_t \odot h'_t$$
Accordingly, the modified GRU may take in a data set with missing values, masking vectors, and the modified inputs (as described above) to make predictions. In other words, the foregoing approach may modify the inputs and the hidden states for a GRU using decay (which may be calculated using time interval and masking vector) and then such modified inputs, modified hidden state, and the masking vector may be fed to the modified GRU.
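The following sketch restates one step of the modified GRU described above in NumPy, with the input decay, hidden-state decay, and masking-vector terms made explicit; the parameter names, shapes, and dictionary-based packaging of weights are assumptions of this example rather than details of the disclosure.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def decay(delta_t, W_gamma, b_gamma):
    # gamma_t = exp(-max(0, W_gamma @ delta_t + b_gamma))
    return np.exp(-np.maximum(0.0, W_gamma @ delta_t + b_gamma))

def modified_gru_step(x_t, m_t, delta_t, x_last, x_mean, h_prev, p):
    """x_t: raw inputs (NaN where missing); m_t: mask; delta_t: time intervals;
    x_last: last observed value per variable; x_mean: empirical means;
    h_prev: previous hidden state; p: dict of trainable parameters (assumed)."""
    x_obs = np.where(m_t > 0, x_t, 0.0)            # zero out missing entries
    # Input decay: W_gamma_x kept diagonal so each predictor decays independently.
    gamma_x = decay(delta_t, np.diag(p["w_gamma_x"]), p["b_gamma_x"])
    x_hat = m_t * x_obs + (1 - m_t) * (gamma_x * x_last + (1 - gamma_x) * x_mean)
    # Hidden-state decay: W_gamma_h is not constrained to be diagonal.
    gamma_h = decay(delta_t, p["W_gamma_h"], p["b_gamma_h"])
    h_hat = gamma_h * h_prev
    concat = np.concatenate([x_hat, h_hat])
    r_t = sigmoid(p["Wr"] @ concat + p["Vr"] @ m_t + p["Br"])
    z_t = sigmoid(p["Wz"] @ concat + p["Vz"] @ m_t + p["Bz"])
    concat_r = np.concatenate([x_hat, r_t * h_hat])
    h_cand = np.tanh(p["Wh"] @ concat_r + p["Vh"] @ m_t + p["Bh"])
    return (1 - z_t) * h_hat + z_t * h_cand
```

Because the imputation terms are part of the cell itself in this formulation, imputed values never need to be materialized as a separate dataset; they are recomputed from the mask, time intervals, last observations, and learned decays at each step.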
The use of the modified GRU for prediction may have advantages over LSTM and other known approaches for data imputation. For example, the modified GRU imputation approach described herein may be capable of exploiting the time-series nature of the training data, using the last observation, the time since the last observation, and the distribution of a predictor to make more accurate estimates for missing values of the training data. The modified GRU imputation approach described herein may assume no correlation and may require only a single prediction step. In addition, the modified GRU imputation approach described herein may enable combining imputation and training into a single step, eliminating the need for storing imputed datasets. Further, the additional computation cost associated with imputation in the modified GRU imputation approach described herein may be at least partly offset by the low computational expense of GRUs when compared to LSTMs.
As used herein, when two or more elements are referred to as “coupled” to one another, such term indicates that such two or more elements are in electronic communication or mechanical communication, as applicable, whether connected indirectly or directly, with or without intervening elements.
This disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend. Similarly, where appropriate, the appended claims encompass all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend. Moreover, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. Accordingly, modifications, additions, or omissions may be made to the systems, apparatuses, and methods described herein without departing from the scope of the disclosure. For example, the components of the systems and apparatuses may be integrated or separated. Moreover, the operations of the systems and apparatuses disclosed herein may be performed by more, fewer, or other components and the methods described may include more, fewer, or other steps. Additionally, steps may be performed in any suitable order. As used in this document, “each” refers to each member of a set or each member of a subset of a set.
Although exemplary embodiments are illustrated in the figures and described below, the principles of the present disclosure may be implemented using any number of techniques, whether currently known or not. The present disclosure should in no way be limited to the exemplary implementations and techniques illustrated in the drawings and described above.
Unless otherwise specifically noted, articles depicted in the drawings are not necessarily drawn to scale.
All examples and conditional language recited herein are intended for pedagogical objects to aid the reader in understanding the disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Although embodiments of the present disclosure have been described in detail, it should be understood that various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the disclosure.
Although specific advantages have been enumerated above, various embodiments may include some, none, or all of the enumerated advantages. Additionally, other technical advantages may become readily apparent to one of ordinary skill in the art after review of the foregoing figures and description.
To aid the Patent Office and any readers of any patent issued on this application in interpreting the claims appended hereto, applicants wish to note that they do not intend any of the appended claims or claim elements to invoke 35 U.S.C. § 112(f) unless the words “means for” or “step for” are explicitly used in the particular claim.

Claims (14)

What is claimed is:
1. An information handling system comprising:
a processor; and
a non-transitory computer-readable medium having stored thereon a program of instructions executable by the processor, the program of instructions configured to, when read and executed by the processor:
receive telemetry data associated with one or more information handling resources;
receive failure statistics associated with the one or more information handling resources, wherein the failure statistics include, for each information handling resource from which telemetry data is received, a failure status of the information handling resource;
merge the telemetry data and the failure statistics to create training data;
provide the training data to a gated recurrent unit;
impute, by the gated recurrent unit, missing values from the training data; and
train the gated recurrent unit, in accordance with the training data, to predict a future failure status of an information handling resource from operational data associated with the information handling resource, wherein the failure status is selected from a group of failure states comprising: failed, about to fail, and healthy;
wherein the gated recurrent unit is configured to impute the missing values using a last observation, a time since the last observation, and a distribution of a predictor.
2. The information handling system of claim 1, wherein the training data comprises time series data generated from the telemetry data and the failure statistics.
3. The information handling system of claim 1, wherein the program of instructions is further configured to, when read and executed by the processor, implement a pattern recognition engine as a recurrent neural network with the gated recurrent unit.
4. The information handling system of claim 1, wherein the program of instructions is further configured to, when read and executed by the processor, apply a rules-based decision engine to the failure status to determine a remedial action for the information handling resource.
5. The information handling system of claim 1, wherein the program of instructions is configured to impute the missing values and train the gated recurrent unit in a single step without storing datasets of imputed values.
6. A method comprising:
receiving telemetry data associated with one or more information handling resources;
receiving failure statistics associated with the one or more information handling resources, wherein the failure statistics include, for each information handling resource from which telemetry data is received, a failure status of the information handling resource;
merging the telemetry data and the failure statistics to create training data;
providing the training data to a gated recurrent unit;
imputing, by the gated recurrent unit, missing values from the training data; and
training the gated recurrent unit, in accordance with the training data, to predict a future failure status of an information handling resource from operational data associated with the information handling resource, wherein the failure status is selected from a group of failure states comprising: failed, about to fail, and healthy;
wherein the gated recurrent unit is configured to impute the missing values using a last observation, a time since the last observation, and a distribution of a predictor.
7. The method of claim 6, wherein the training data comprises time series data generated from the telemetry data and the failure statistics.
8. The method of claim 6, further comprising implementing a pattern recognition engine as a recurrent neural network with the gated recurrent unit.
9. The method of claim 6, further comprising applying a rules-based decision engine to the failure status to determine a remedial action for the information handling resource.
10. The method of claim 6, wherein the missing values are imputed and the gated recurrent unit is trained in a single step without storing datasets of imputed values.
11. An article of manufacture comprising:
a non-transitory computer-readable medium; and
computer-executable instructions carried on the computer readable medium, the instructions readable by a processor, the instructions, when read and executed, for causing the processor to:
receive telemetry data associated with one or more information handling resources;
receive failure statistics associated with the one or more information handling resources, wherein the failure statistics include, for each information handling resource from which telemetry data is received, a failure status of the information handling resource;
merge the telemetry data and the failure statistics to create training data;
provide the training data to a gated recurrent unit;
impute, by the gated recurrent unit, missing values from the training data; and
train the gated recurrent unit, in accordance with the training data, to predict a future failure status of an information handling resource from operational data associated with the information handling resource, wherein the failure status is selected from a group of failure states comprising: failed, about to fail, and healthy;
wherein the gated recurrent unit is configured to impute the missing values using a last observation, a time since the last observation, and a distribution of a predictor.
12. The article of claim 11, wherein the training data comprises time series data generated from the telemetry data and the failure statistics.
13. The article of claim 11, the instructions for further causing the processor to, when read and executed by the processor, implement a pattern recognition engine as a recurrent neural network with the gated recurrent unit.
14. The article of claim 11, the instructions for further causing the processor to, when read and executed by the processor, apply a rules-based decision engine to the failure status to determine a remedial action for the information handling resource.
US16/528,081 2019-07-31 2019-07-31 Systems and methods for predicting information handling resource failures using deep recurrent neural network with a modified gated recurrent unit having missing data imputation Active US11227209B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/528,081 US11227209B2 (en) 2019-07-31 2019-07-31 Systems and methods for predicting information handling resource failures using deep recurrent neural network with a modified gated recurrent unit having missing data imputation

Publications (2)

Publication Number Publication Date
US20210034949A1 US20210034949A1 (en) 2021-02-04
US11227209B2 true US11227209B2 (en) 2022-01-18

Family

ID=74258651

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/528,081 Active US11227209B2 (en) 2019-07-31 2019-07-31 Systems and methods for predicting information handling resource failures using deep recurrent neural network with a modified gated recurrent unit having missing data imputation

Country Status (1)

Country Link
US (1) US11227209B2 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11341026B2 (en) * 2020-01-06 2022-05-24 EMC IP Holding Company LLC Facilitating detection of anomalies in data center telemetry
US11416506B2 (en) * 2020-04-29 2022-08-16 EMC IP Holding Company LLC Facilitating temporal data management for anomalous state detection in data centers
US20220253690A1 (en) * 2021-02-09 2022-08-11 Adobe Inc. Machine-learning systems for simulating collaborative behavior by interacting users within a group
CN112948743B (en) * 2021-03-26 2022-05-03 重庆邮电大学 Coal mine gas concentration deficiency value filling method based on space-time fusion
CN113951845B (en) * 2021-12-01 2022-08-05 中国人民解放军总医院第一医学中心 Method and system for predicting severe blood loss and injury condition of wound
CN114205251B (en) * 2021-12-09 2022-12-02 西安电子科技大学 Switch link resource prediction method based on space-time characteristics
CN114509283A (en) * 2022-01-05 2022-05-17 中车唐山机车车辆有限公司 System fault monitoring method and device, electronic equipment and storage medium
EP4246376A1 (en) * 2022-03-16 2023-09-20 Tata Consultancy Services Limited Methods and systems for time-series prediction under missing data using joint impute and learn technique
CN116861347B (en) * 2023-05-22 2024-06-11 青岛海洋地质研究所 Magnetic force abnormal data calculation method based on deep learning model
CN116992295A (en) * 2023-09-26 2023-11-03 北京宝隆泓瑞科技有限公司 Reconstruction method and device for machine pump equipment monitoring missing data for machine learning
CN118519043B (en) * 2024-07-23 2024-10-18 杭州神驹科技有限公司 New energy mine card battery fault prediction method based on RNN (RNN-based network) circulating neural network

Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5465321A (en) * 1993-04-07 1995-11-07 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Hidden markov models for fault detection in dynamic systems
US5463768A (en) * 1994-03-17 1995-10-31 General Electric Company Method and system for analyzing error logs for diagnostics
US6105149A (en) * 1998-03-30 2000-08-15 General Electric Company System and method for diagnosing and validating a machine using waveform data
US6609217B1 (en) * 1998-03-30 2003-08-19 General Electric Company System and method for diagnosing and validating a machine over a network using waveform data
US6442542B1 (en) * 1999-10-08 2002-08-27 General Electric Company Diagnostic system with learning capabilities
US6609212B1 (en) * 2000-03-09 2003-08-19 International Business Machines Corporation Apparatus and method for sharing predictive failure information on a computer network
US20140201571A1 (en) * 2005-07-11 2014-07-17 Brooks Automation, Inc. Intelligent condition monitoring and fault diagnostic system for preventative maintenance
US20070260566A1 (en) * 2006-04-11 2007-11-08 Urmanov Aleksey M Reducing the size of a training set for classification
US20100332189A1 (en) * 2009-06-30 2010-12-30 Sun Microsystems, Inc. Embedded microcontrollers classifying signatures of components for predictive maintenance in computer servers
US20160034809A1 (en) * 2014-06-10 2016-02-04 Sightline Innovation Inc. System and method for network based application development and implementation
US20160350194A1 (en) * 2015-05-27 2016-12-01 Tata Consultancy Services Limited Artificial intelligence based health management of host system
US10089203B2 (en) * 2015-05-27 2018-10-02 Tata Consultancy Services Limited Artificial intelligence based health management of host system
US20170212799A1 (en) * 2016-01-26 2017-07-27 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Adjusting failure response criteria based on external failure data
US20170262758A1 (en) * 2016-03-10 2017-09-14 Dell Products, Lp System and method to assess anomalous behavior on an information handling system using indirect identifiers
US20180336494A1 (en) * 2017-05-17 2018-11-22 Bsquare Corp. Translating sensor input into expertise
US20190155712A1 (en) * 2017-11-22 2019-05-23 International Business Machines Corporation System to manage economics and operational dynamics of it systems and infrastructure in a multi-vendor service environment
US20190205232A1 (en) 2018-01-03 2019-07-04 Dell Products L.P. Systems and methods for predicting information handling resource failures using deep recurrent neural networks
US20200103894A1 (en) * 2018-05-07 2020-04-02 Strong Force Iot Portfolio 2016, Llc Methods and systems for data collection, learning, and streaming of machine signals for computerized maintenance management system using the industrial internet of things
US20210004682A1 (en) * 2018-06-27 2021-01-07 Google Llc Adapting a sequence model for use in predicting future device interactions with a computing system
US20200117565A1 (en) * 2018-10-15 2020-04-16 Nvidia Corporation Enhanced in-system test coverage based on detecting component degradation
US20190138423A1 (en) * 2018-12-28 2019-05-09 Intel Corporation Methods and apparatus to detect anomalies of a monitored system
US20210042180A1 (en) * 2019-08-06 2021-02-11 Oracle International Corporation Predictive system remediation
US20210142122A1 (en) * 2019-10-14 2021-05-13 Pdf Solutions, Inc. Collaborative Learning Model for Semiconductor Applications
US11099928B1 (en) * 2020-02-26 2021-08-24 EMC IP Holding Company LLC Utilizing machine learning to predict success of troubleshooting actions for repairing assets
US20200364107A1 (en) * 2020-06-27 2020-11-19 Intel Corporation Self-supervised learning system for anomaly detection with natural language processing and automatic remediation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Colah, "Understanding LSTM Networks", http://colah.github.io/, Aug. 27, 2015, pp. 1-16 (Year: 2015). *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220197977A1 (en) * 2020-12-22 2022-06-23 International Business Machines Corporation Predicting multivariate time series with systematic and random missing values
US12039002B2 (en) * 2020-12-22 2024-07-16 International Business Machines Corporation Predicting multivariate time series with systematic and random missing values

Also Published As

Publication number Publication date
US20210034949A1 (en) 2021-02-04

Similar Documents

Publication Publication Date Title
US11227209B2 (en) Systems and methods for predicting information handling resource failures using deep recurrent neural network with a modified gated recurrent unit having missing data imputation
EP3857377B1 (en) Disk drive failure prediction with neural networks
US11423327B2 (en) Out of band server utilization estimation and server workload characterization for datacenter resource optimization and forecasting
US11003561B2 (en) Systems and methods for predicting information handling resource failures using deep recurrent neural networks
US20200097810A1 (en) Automated window based feature generation for time-series forecasting and anomaly detection
US10614361B2 (en) Cost-sensitive classification with deep learning using cost-aware pre-training
CN114008641A (en) Improving accuracy of automatic machine learning model selection using hyper-parametric predictors
US11275664B2 (en) Encoding and decoding troubleshooting actions with machine learning to predict repair solutions
JP7335352B2 (en) Enhanced diversity and learning of ensemble models
US11669324B2 (en) Safe window for creating a firmware update package
US11568319B2 (en) Techniques for dynamic machine learning integration
US11875190B2 (en) Methods and systems for AI-based load balancing of processing resources in distributed environments
US20220036220A1 (en) Machine learning data cleaning
CN114692883B (en) Quantum data loading method, device and equipment and readable storage medium
Singh et al. A feature extraction and time warping based neural expansion architecture for cloud resource usage forecasting
Qiu et al. On the promise and challenges of foundation models for learning-based cloud systems management
US20230229735A1 (en) Training and implementing machine-learning models utilizing model container workflows
US20220036233A1 (en) Machine learning orchestrator
US11216269B2 (en) Systems and methods for update of storage resource firmware
Zdunek et al. Distributed geometric nonnegative matrix factorization and hierarchical alternating least squares–based nonnegative tensor factorization with the MapReduce paradigm
US20220043697A1 (en) Systems and methods for enabling internal accelerator subsystem for data analytics via management controller telemetry data
US20240103991A1 (en) Hci performance capability evaluation
US20230342661A1 (en) Machine learning based monitoring focus engine
US20230176887A1 (en) Knowledge base for predicting success of cluster scaling
US11681438B2 (en) Minimizing cost of disk fulfillment

Legal Events

Date Code Title Description
AS Assignment

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SINGH, ASHUTOSH;CHAMBERS, LANDON MARTIN;REEL/FRAME:049921/0672

Effective date: 20190731

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, NORTH CAROLINA

Free format text: SECURITY AGREEMENT;ASSIGNORS:DELL PRODUCTS L.P.;EMC CORPORATION;EMC IP HOLDING COMPANY LLC;REEL/FRAME:050406/0421

Effective date: 20190917

AS Assignment

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT, TEXAS

Free format text: PATENT SECURITY AGREEMENT (NOTES);ASSIGNORS:DELL PRODUCTS L.P.;EMC CORPORATION;EMC IP HOLDING COMPANY LLC;REEL/FRAME:050724/0571

Effective date: 20191010

AS Assignment

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:053546/0001

Effective date: 20200409

AS Assignment

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT, TEXAS

Free format text: SECURITY INTEREST;ASSIGNORS:DELL PRODUCTS L.P.;EMC CORPORATION;EMC IP HOLDING COMPANY LLC;REEL/FRAME:053311/0169

Effective date: 20200603

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

AS Assignment

Owner name: EMC IP HOLDING COMPANY LLC, TEXAS

Free format text: RELEASE OF SECURITY INTEREST AT REEL 050406 FRAME 421;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058213/0825

Effective date: 20211101

Owner name: EMC CORPORATION, MASSACHUSETTS

Free format text: RELEASE OF SECURITY INTEREST AT REEL 050406 FRAME 421;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058213/0825

Effective date: 20211101

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST AT REEL 050406 FRAME 421;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058213/0825

Effective date: 20211101

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: EMC IP HOLDING COMPANY LLC, TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053311/0169);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0742

Effective date: 20220329

Owner name: EMC CORPORATION, MASSACHUSETTS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053311/0169);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0742

Effective date: 20220329

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053311/0169);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0742

Effective date: 20220329

Owner name: EMC IP HOLDING COMPANY LLC, TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (050724/0571);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060436/0088

Effective date: 20220329

Owner name: EMC CORPORATION, MASSACHUSETTS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (050724/0571);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060436/0088

Effective date: 20220329

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (050724/0571);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060436/0088

Effective date: 20220329