GB2612362A - Fault prediction for machines - Google Patents

Fault prediction for machines

Info

Publication number
GB2612362A
Authority
GB
United Kingdom
Prior art keywords
error code
machine
data
diagnostic error
code data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2115659.1A
Other versions
GB202115659D0 (en)
Inventor
Hornung Juergen
Hammouchene Rachid
Ansell Keith
Basit Hafeez Abdul
Riaz Atif
Alonso Eduardo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
City University of London
Bosch Automotive Service Solutions Ltd
Original Assignee
City University of London
Bosch Automotive Service Solutions Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by City University of London and Bosch Automotive Service Solutions Ltd
Priority to GB2115659.1A
Publication of GB202115659D0
Publication of GB2612362A
Legal status: Pending

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B23/00 Testing or monitoring of control systems or parts thereof
    • G05B23/02 Electric testing or monitoring
    • G05B23/0205 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
    • G05B23/0218 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults
    • G05B23/0224 Process history based detection method, e.g. whereby history implies the availability of large amounts of data
    • G05B23/024 Quantitative history assessment, e.g. mathematical relationships between available data; Functions therefor; Principal component analysis [PCA]; Partial least square [PLS]; Statistical classifiers, e.g. Bayesian networks, linear regression or correlation analysis; Neural networks
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B23/00 Testing or monitoring of control systems or parts thereof
    • G05B23/02 Electric testing or monitoring
    • G05B23/0205 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
    • G05B23/0218 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults
    • G05B23/0243 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by a model based detection method, e.g. first-principles knowledge model
    • G05B23/0254 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by a model based detection method based on a quantitative model, e.g. mathematical relationships between inputs and outputs; functions: observer, Kalman filter, residual calculation, Neural Networks
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B23/00 Testing or monitoring of control systems or parts thereof
    • G05B23/02 Electric testing or monitoring
    • G05B23/0205 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
    • G05B23/0259 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterized by the response to fault detection
    • G05B23/0267 Fault communication, e.g. human machine interface [HMI]
    • G05B23/0272 Presentation of monitored results, e.g. selection of status reports to be displayed; Filtering information to the user
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/20 Pc systems
    • G05B2219/26 Pc applications
    • G05B2219/2637 Vehicle, car, auto, wheelchair

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A method of predicting diagnostic error code data for a machine such as a vehicle. The method includes predicting diagnostic error code data based on data attribute weights learnt using a recurrent neural network (RNN). The predicted diagnostic error code data and data attributes for which the weights are learnt comprise (a) data indicative of a system of the machine in which a fault is detected; (b) a diagnostic error code indicative of a fault type; and (c) data providing additional information on the fault type. The diagnostic error code data may be predicted using learnt dependencies between diagnostic error code data and the machine’s past-generated diagnostic error code data. An entity embedding technique may be used to provide a compact representation of the diagnostic error code data. The diagnostic error code data may correspond to data of previous error sequences for one or more machines. The attributes may correspond to an electronic control unit (ECU) identifier, a Base-DTC, and a fault byte.

Description

Intellectual Property Office Application No GB2115659.1 RTM Date: 28 April 2022 The following terms are registered trade marks and should be read as such wherever they occur in this document: Apache Spark, Amazon Web Services. Intellectual Property Office is an operating name of the Patent Office www.gov.uk/ipo
FAULT PREDICTION FOR MACHINES
DESCRIPTION
FIELD
[0001] The present invention relates to fault prediction, particularly to a method for predicting a fault and diagnostic error code in a machine.
BACKGROUND
[0002] A machine's diagnostics system may detect a malfunction in the machine and generate a warning to alert a user of the malfunction, or fault, via a display unit, warning light or other indicator. Any time a problem is detected, the problem may be recorded as a code, or diagnostic error code. Diagnostic error codes provide valuable information about problems or potential problems, and may serve as a guide to underlying issues, helping to diagnose the root cause of defective or malfunctioning components.
[0003] A diagnostics system may generate an error code and display this to a user or send the error code to a technician. A diagnostic error code is able to inform a user of the fault location and fault type so that corrective action can be taken. Consequently, a machine's diagnostic system and error codes can improve machine safety and help to keep machines running efficiently and smoothly. However, a diagnostic error code is typically only activated after an issue requiring attention has arisen. Accordingly, and depending on the detected fault, the resulting machine downtime and repairs can be inconvenient and costly.
[0004] Thus, there is a need for a method of predicting faults within a machine that may reduce the inconvenience associated with machine downtime and machine repair.
SUMMARY OF THE INVENTION
[0005] The object of the invention is to provide a method for predicting faults in a machine. According to the method of the present invention, it is possible to predict a fault in advance of malfunction such that corrective action can be taken, where appropriate.
[0006] In a first aspect of the invention there is provided a method of predicting diagnostic error code data for a machine. The method comprises predicting diagnostic error code data based on data attribute weights learnt using a recurrent neural network (RNN), wherein the predicted diagnostic error code data and data attributes for which the weights are learnt comprise: (a) data indicative of a system of the machine in which a fault is detected; (b) a diagnostic error code indicative of a fault type; and (c) data providing additional information on the fault type.
[0007] Second and third aspects of the invention provide a computer storage medium and system for predicting diagnostic error data, respectively.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] FIG. 1 shows example attributes of DTC events.
[0009] FIG. 2 shows an example method for predicting faults in a machine.
[0010] FIG. 3 shows example DTC sequences.
[0011] FIG. 4 shows an example method for processing DTC data for predicting faults in a machine.
[0012] FIG. 5 shows an example of a controller for predicting faults in a machine.
[0013] FIG. 6 shows an example of a computer readable medium comprising instructions for predicting faults in a machine.
DETAILED DESCRIPTION
[0014] A diagnostic error code is one of a set of diagnostic trouble codes used by a machine's diagnostics system to alert a user when a machine experiences a malfunction. Different error codes represent specific problems in the machine. A machine may be any complex system that consists of structural elements, mechanisms and control components. Examples include a wide range of vehicles, such as cars (automobiles), boats, and airplanes; appliances in the home and office, including computers; farm machinery; machine tools; factory automation systems; robots; or any type of heavy equipment. Diagnostic error codes may be logged, or generated, by an Electronic Control Unit (ECU) when an on-board diagnostics routine fails an on-board test. The error code is provided using a set of codes that are able to indicate, for example, fault occurrence, the nature of the fault, and the location of the fault. A machine may include a number of different ECUs, each of which is configured to log faults for the features which it controls, e.g. a system of the machine. The ECU is a controller that controls features of the machine and may enable or disable functions of that machine, and this may depend upon an error code that is logged. Each ECU may be assigned a unique label or identifier so that it can be individually identified. The ECU label may be in the form of a number, for example.
[0015] In the context of a vehicle such as an automobile, a diagnostic error code is often referred to as a diagnostic trouble code (DTC). A DTC in the context of an automobile may comprise a base-DTC as well as a fault byte. The base-DTC for an automobile is typically five characters long and capable of indicating a fault type, where each character, or value, of the base-DTC provides a different piece of information about a machine's problem, or fault. An example of a typical DTC generated in an automobile is P0307.
[0016] The first character in the example DTC is a letter which indicates which system, control system, or unit in the machine has a problem. Examples of first characters in the example DTC are: P = Powertrain, B = Body, C = Chassis, U = Network.
[0017] The second character in the example DTC is a digit, and typically a 0 or 1. This character may denote whether or not the code is standardized. For example, 0 may indicate that the code is a generic code which is adopted by all e.g. cars that follow the on-board diagnostics 2 system (OBD-2). 1, on the other hand, may indicate that the code is manufacturer-specific. Such codes are unique to a manufacturer or model and are typically less common. This character may not be used at all because it may not be necessary to specify whether a code is standardized.
[0018] The third character in the example DTC is also a digit, and ranges from e.g. 0 to 7 or 1 to 8. The third character of the example DTC indicates a subsystem or component part of the machine in which a fault is detected. For example, 0 may refer to fuel and air metering and auxiliary emission controls, 1 may refer to fuel or air metering, 2 may refer to fuel and air metering (injector circuit), 3 may refer to ignition systems or misfires, 4 may refer to auxiliary emission controls, 5 may refer to vehicle speed control, idle control systems and auxiliary inputs, 6 may refer to a computer and output circuit, and 7 may refer to a transmission.
[0019] The fourth and fifth characters in the example DTC are typically read as a two-digit number that helps to further define the problem, and can range between 0 and 99, for example. The fourth and fifth characters in this example DTC are indicative of a fault detected within the machine and therefore help to more specifically identify the issue with the machine, such as a particular problem that a component part may be experiencing.
[0020] Accordingly, the example DTC provided above (P0307) may be used to indicate that a problem has occurred in the powertrain (P) concerning ignition systems or misfire (3) and it is cylinder 7 (07) for which a misfire has been detected.
[0021] In addition to the five-character example provided above, an additional two characters known as a fault-byte may also be provided. The fault-byte can be a number in the range of 0 to 255 and can be used to provide further detail about the fault identified by the DTC.
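By way of illustration only, the following Python sketch decodes a base-DTC of the form described above into its constituent attributes. The function name and lookup tables are assumptions made for the example and do not form part of any standard or of the claimed method.

```python
# Illustrative sketch: decode a five-character base-DTC plus optional fault byte
# into the attributes described above. Lookup tables are assumed examples.
SYSTEMS = {"P": "Powertrain", "B": "Body", "C": "Chassis", "U": "Network"}
SUBSYSTEMS_P = {
    "0": "Fuel and air metering and auxiliary emission controls",
    "3": "Ignition system or misfire",
    "5": "Vehicle speed control, idle control systems and auxiliary inputs",
}

def decode_base_dtc(code, fault_byte=None):
    """Split a base-DTC such as 'P0307' into its attribute fields."""
    return {
        "system": SYSTEMS.get(code[0], "Unknown"),
        "standardised": code[1] == "0",   # 0 = generic OBD-2 code, 1 = manufacturer-specific
        "subsystem": SUBSYSTEMS_P.get(code[2], code[2]),
        "fault": code[3:5],               # two-digit fault identifier, e.g. cylinder 07
        "fault_byte": fault_byte,         # optional extra detail in the range 0-255
    }

print(decode_base_dtc("P0307", fault_byte=45))
```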
[0022] The above-described DTC is merely an example of a DTC for an automobile. Diagnostic error code formats may vary by machine and the number of characters/digits forming a diagnostic error code as well as the range of characters/digits to represent an attribute of the code can vary, depending on e.g. the type, size and complexity of the machine. It will be understood that an error code for e.g. an airplane may comprise more characters than the example DTC and require a larger range of characters/digits to represent systems, units, sub-systems, sub-units and faults etc. compared to an automobile.
[0023] A diagnostic error code generated by a machine may be an independent event or it may be an event that is dependent on another event and therefore triggered by a previous fault in the machine.
[0024] Returning to the example of a diagnostic error code in an automobile, generated DTCs can be stored on a storage medium and form part of DTC data, or a DTC event. A storage medium may receive many instances of DTC data from a number of machines. Each entry of DTC data (or error code data) received and stored on the storage medium may comprise the DTC code, DTC-related information, and machine status information for the time at which a fault is detected. For example, such additional diagnostic error code information may include session time (time when DTCs were transferred to the storage medium), vehicle mileage, time of DTC generation, the ECU that logged/generated the DTC, the Base-DTC and the fault byte. Examples of DTC events containing DTC attributes are illustrated in FIGS. 1A and 1B. Any information that may be deemed insightful or useful for fault determination, fault prediction or determining fault relationships may be captured and stored on the storage medium as diagnostic error code data. FIG. 1A illustrates a DTC event comprising only three attributes: ECU, Base-DTC, and fault-byte. The three attributes illustrated in FIG. 1A can be used to identify a fault at the granularity level of an individual chip within a machine. However, and as illustrated in FIG. 1B, a DTC event may include any additional attributes or information that provide further detail about the machine and/or fault. It will be understood that additional attributes and information may be used to improve prediction of data relating to a next error code to be generated for the machine.
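A stored DTC event might be represented by a simple record such as the following sketch. The field names are assumptions based on the attributes listed above, not a prescribed schema.

```python
# Sketch of a stored DTC event; field names are assumptions based on the
# attributes described in paragraph [0024].
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class DTCEvent:
    vehicle_id: str                           # machine the event belongs to
    ecu: int                                  # identifier of the ECU that logged the code
    base_dtc: str                             # e.g. "P0307"
    fault_byte: int                           # additional fault detail (0-255)
    logged_at: datetime                       # time of DTC generation
    session_time: Optional[datetime] = None   # time the DTC was transferred to storage
    mileage_km: Optional[int] = None          # vehicle mileage at the time of detection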
[0025] FIG. 2 illustrates an example method for predicting faults in a machine, such as a vehicle, for example. At block 201 of FIG. 2, DTC data on a storage medium can be accessed and processed by a data pre-processing module, or unit. The data pre-processing module is capable of performing processing on large volumes of data, or "big data", relating to DTC events of a number of machines. Large volumes of data may be reshaped by changing the way the data is structured, or organised. Reshaping or re-organising data simply rearranges the form of the data without changing the content of the dataset so that analysis or data processing may be performed. For example, the data may be reshaped into a sequential common form for each machine and further arranged by each DTC event in order of occurrence of the event. FIG. 3 illustrates sequences of events arranged according to the described example. To predict the next most probable DTC for a vehicle, the proposed machine learning system requires a sequence containing previous DTC events detected in a machine. DTC-sequence 301 is an example of such a sequence belonging to an arbitrary vehicle (vehicle 1, for example), where all previously generated DTC events, having at least three different attributes indicated by the shaded boxes, are arranged chronologically. DTC-sequence 302 may be a chronologically-arranged sequence belonging to e.g. vehicle 2. DTC-sequence 303 may be a chronologically-arranged sequence belonging to e.g. vehicle 3. DTC events are pre-processed and grouped for all vehicles in a sequential format such as that illustrated in FIG. 3. As is evident from FIG. 3, having access to previous DTC events that have been appropriately organised according to an attribute, for example, enables a learning algorithm to process and analyse the DTC events and learn from them so that a prediction of the attributes of the next probable DTC event can be made. This is further described below. Sequences of DTC events may be prepared or organised using user, or expert, knowledge/preferences relating to any DTC attribute, machine-type, fault-type etc., or any combination thereof, so that data can be filtered and/or reshaped in a particular way to meet requirements suitable for further data processing. Block 201 may form part of the example method of FIG. 2, but it might alternatively be performed separately from the method illustrated in FIG. 2. For example, data reshaping may take place prior to the method of FIG. 2 and be stored as reshaped data on the storage medium so that it can later be obtained, or accessed, for processing as part of a method of fault prediction. The data pre-processing module may be implemented using any analytics engine or data processing framework suitable for big data workloads. An example of an analytics engine that may be used to develop the data pre-processing module of block 201 is Apache Spark™ on Amazon Web Services (AWS)™. Any data processing framework, or analytics engine, that can quickly perform processing tasks such as the reshaping of data on very large sets of data may be suitable.
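As one possible, non-authoritative sketch of the reshaping step of block 201, the snippet below groups raw DTC rows into one chronologically ordered sequence per vehicle using PySpark. The column names and storage paths are assumptions for illustration only.

```python
# Minimal PySpark sketch of block 201: reshape raw DTC rows into one
# chronologically ordered event sequence per vehicle.
# Column names and the storage path are assumptions for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dtc-reshaping").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/dtc_events/")  # hypothetical source

sequences = (
    raw
    # Bundle the attributes of each event; the timestamp is placed first so
    # that sorting the collected structs orders the events chronologically.
    .withColumn("event", F.struct("logged_at", "ecu", "base_dtc", "fault_byte"))
    .groupBy("vehicle_id")
    .agg(F.sort_array(F.collect_list("event")).alias("dtc_sequence"))
)

sequences.write.mode("overwrite").parquet("s3://example-bucket/dtc_sequences/")
```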
[0026] The reshaped data undergoes further processing before it is provided to a machine learning algorithm. Most machine learning algorithms require all input variables to be numeric. The DTC data is therefore prepared so that all data is converted to numeric values before it is fitted to a machine learning model. The entity embedding technique and one-hot encoding described below are example processes by which a compact representation of DTC events can be provided, in order to compactly represent an otherwise large and unwieldy set of possible combinations of attributes. Attributes may be converted into a compact numeric representation that can be provided to a machine learning algorithm and therefore improve predictions.
[0027] Compact representations for each DTC event may be provided at block 202 of FIG. 2. The DTC events of block 201 represent a large and complex data set, which may comprise many data points in different data formats such as numeric and non-numeric (e.g. letters). Further, DTC data may be a complex textual representation of multiple attributes, which relate to a large variety of faults. To represent a non-numeric attribute of a DTC (e.g. Base-DTC with N unique values) a simple numeric representation may be provided using a technique such as one-hot encoding to perform "binarization" of the attributes. Alternatively, entity embedding is able to provide a numeric representation that is continuous (i.e. an array that contains floats instead of a binary representation containing zeros and ones). Each entity, or attribute, can be represented by an array of size M, where M is much less than the actual count of total unique values (N). If N is 1000, instead of encoding every occurrence of an attribute in an array of 1000 values (e.g. 1000 Base-DTC values), a machine learning algorithm can be used to reduce the number of array values and learn a smaller set of numbers, for example M = 32. Since the encoding provided by entity embedding is dense and continuous, a machine learning algorithm is capable of learning to represent each attribute such that similar values of the attributes have similar encodings. In other words, similar values close to each other in the embedding space may be identified and mapped to provide a more compact representation. Providing compact representations using entity embedding techniques and machine learning to identify similar values may reduce memory usage and speed up recurrent neural networks (RNNs).
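As a sketch of the entity-embedding idea, and not of the patented implementation itself, the snippet below maps N = 1000 unique Base-DTC values to dense M = 32 dimensional vectors with PyTorch. The sizes simply mirror the example figures given above.

```python
# Entity-embedding sketch: 1000 unique Base-DTC values -> dense 32-dim vectors.
# The embedding values are learnt during training so that similar codes end up
# with similar encodings, unlike a 1000-wide one-hot vector.
import torch
import torch.nn as nn

N_BASE_DTC = 1000   # number of unique Base-DTC values (example figure)
M = 32              # embedding size, much smaller than N

base_dtc_embedding = nn.Embedding(num_embeddings=N_BASE_DTC, embedding_dim=M)

# Each Base-DTC is first mapped to an integer index (e.g. via a lookup table),
# then embedded into a continuous array of floats.
base_dtc_indices = torch.tensor([17, 402, 17])   # three example events
dense = base_dtc_embedding(base_dtc_indices)     # shape: (3, 32)
print(dense.shape)
```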
[0028] The compact representations generated at block 202 of FIG. 2 may, alternatively, be previously generated and simply obtained, or accessed, from a storage medium, for example.
[0029] At block 203 of FIG. 2, the compact representations for DTC events are fed to an RNN so that deep learning methods can be applied to the compact representations and a prediction of a future DTC for a machine can be made. RNNs are dynamic systems that have an internal state at each classification time step due to feedback connections. These characteristics enable RNNs to propagate data from earlier events to current processing steps to build a memory of time series events. RNNs are capable of remembering important information about the inputs they receive because of their internal memory. Recurrent neural networks can form a much deeper understanding of data sequences and data context compared to other algorithms, which allows them to be very precise in predicting future events. A long short-term memory (LSTM) network is a type of RNN capable of learning order dependence in sequence prediction problems. LSTMs are capable of processing long sequences because they comprise gates that can regulate the flow of information, and the gates can learn which data in a sequence is important to keep or dismiss. By doing that, an LSTM can pass relevant information down a long chain of sequences (cycling information through loops to feed back into the network) to make predictions. A Gated Recurrent Unit (GRU) is another type of RNN which might alternatively be used instead of an LSTM.
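The following sketch shows, under assumed tensor shapes, how an LSTM (or, interchangeably, a GRU) could process a sequence of embedded DTC events and carry information forward through its internal state. It is illustrative only.

```python
# Sketch: run an LSTM over a sequence of embedded DTC events.
# Shapes are illustrative assumptions; a GRU could be substituted via nn.GRU.
import torch
import torch.nn as nn

seq_len, event_dim, hidden_dim = 20, 96, 128   # assumed sizes

lstm = nn.LSTM(input_size=event_dim, hidden_size=hidden_dim, batch_first=True)

# One machine's sequence of 20 encoded DTC events (batch of 1).
sequence = torch.randn(1, seq_len, event_dim)
outputs, (h_n, c_n) = lstm(sequence)

# h_n summarises the whole sequence and can be used to predict the next event.
print(h_n.shape)   # (1, 1, 128)
```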
[0030] As will be understood, RNNs are capable of tracking long-term dependencies in input sequences. RNNs adapted to learn, or capture, long-term dependencies may train their network components on large amounts of data. Any suitable combination of sequences/data may be used to train the RNN to identify dependencies that may be useful in predicting the DTC that may be the next generated DTC for a given machine. For example, the RNN may use only DTC events generated by one machine to identify long-term dependencies in DTC events useful for predicting the next possible DTC event for that machine. However, the RNN may also use DTC events from more than one machine to identify long-term dependencies and use these learnt dependencies to weight attributes. Any suitable combination of sequences/data/machines may be used as an input to the RNN so that the RNN is able to learn fault dependencies that may be useful for predicting the next possible fault for a machine. By applying the RNN to the compact representations of a large volume of machine DTC events, it is possible to identify and capture long-term DTC dependencies which might otherwise be difficult to capture. Because certain machine faults may represent a trigger or root cause for subsequent machine faults, complex fault dependencies can be learnt using sequences of machine DTC events. These dependencies may be used to predict the next DTC event for a machine at the granularity level of an individual chip in the machine, e.g. attributes capable of identifying a machine system, subsystem/component part, and a particular fault within the subsystem/component part. Based on the learnt dependencies and a machine's own event sequence, the next DTC event for that machine may be predicted.
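One way, assumed purely for illustration, to turn the per-machine sequences into training examples for such a network is a sliding window: every prefix of earlier events becomes an input and the event that follows it becomes the target.

```python
# Sketch: build (history, next-event) training pairs from per-machine sequences.
# A window of the most recent events is used to predict the event that follows.
def make_training_pairs(sequences, window=10):
    """sequences: list of per-machine DTC event lists, oldest event first."""
    pairs = []
    for events in sequences:
        for i in range(1, len(events)):
            history = events[max(0, i - window):i]   # earlier events only
            target = events[i]                        # the next DTC event
            pairs.append((history, target))
    return pairs

# Toy example with events represented as (ecu, base_dtc, fault_byte) tuples.
toy = [[(61, "P0307", 45), (61, "P0500", 12), (12, "B1318", 3)]]
print(make_training_pairs(toy, window=2))
```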
[0031] FIG. 4 further illustrates an example method of predicting a DTC event for a machine.
[0032] Block 401 of FIG. 4 represents the non-numeric sequence data described in relation to FIG. 3 above. At block 402 of FIG. 4 the non-numeric sequence data has been separated according to different attributes, e.g. ECU, Base-DTC, and Fault-byte. The example in FIG. 4 illustrates the separation of just three attributes of a DTC event (more attributes may be included), where block 4021 represents an embedding module that may receive ECU data separated from each DTC event in the sequence data, block 4022 represents an embedding module that may receive Base-DTC data separated from each DTC event in the sequence data, and block 4023 represents an embedding module that may receive Fault-byte data separated from each DTC event in the sequence data. Each embedding module 4021 to 4023 returns numeric encodings using the techniques described above in relation to block 202 of FIG. 2. Each attribute of each DTC event occurrence is represented by a numeric encoding in an array of size M (where M = 32, for example).
[0033] At block 403, encoded arrays of all attributes are merged into one array of size N. For instance, in the case of the three example attributes, if each has an encoded array of size M = 5, the merged array will have a size N = 15. A single merged array denotes a single DTC event encoded with all its attributes. A merged array of size N may simply be a concatenation of the compact representation of each attribute of each DTC event, for example. In one example, compact representations of each attribute may be arranged in arrays as follows: ECU [12, 200, 399, 25, 49], Base-DTC [202, 389, 400, 321, 34], and Fault-Byte [56, 287, 102, 10, 56], and the merged, or concatenated, array of representations may be provided as [12, 200, 399, 25, 49, 202, 389, 400, 321, 34, 56, 287, 102, 10, 56]. Any appropriate method of merging the data may be used. The merged arrays of encoded DTC attributes are fed to the RNN illustrated as block 404 in FIG. 4. The RNN is a dynamic technique that processes the numeric data by propagating data from earlier events to current processing steps to build a memory of time series events. The data input to block 404 is used to train the RNN so that it learns rules (or weights) for establishing a relationship between DTC events in the sequence, even if there is great separation in the time series of these events. In other words, the RNN is able to learn relationships or dependencies between events to capture long-term dependencies. These rules or weights incorporate complex relationships or dependencies, which may not be possible to detect by a user looking at the sequence. RNNs are able to identify and capture the impact of DTC events in the form of rules, which can further be used to predict the next DTC.
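A minimal sketch of blocks 402 to 404, assuming PyTorch and example vocabulary sizes: each attribute is embedded separately, the embeddings are concatenated into one event vector, and the sequence of event vectors is fed to the recurrent layer. The dimensions and class name are assumptions for illustration.

```python
# Sketch of blocks 402-404: per-attribute embeddings, concatenation, RNN.
# Vocabulary sizes and dimensions are assumptions for illustration.
import torch
import torch.nn as nn

class DTCEncoder(nn.Module):
    def __init__(self, n_ecu=200, n_base_dtc=1000, n_fault_byte=256,
                 embed_dim=32, hidden_dim=128):
        super().__init__()
        self.ecu_emb = nn.Embedding(n_ecu, embed_dim)          # block 4021
        self.dtc_emb = nn.Embedding(n_base_dtc, embed_dim)     # block 4022
        self.byte_emb = nn.Embedding(n_fault_byte, embed_dim)  # block 4023
        self.rnn = nn.LSTM(3 * embed_dim, hidden_dim, batch_first=True)  # block 404

    def forward(self, ecu, base_dtc, fault_byte):
        # Each input: (batch, seq_len) tensor of integer attribute indices.
        merged = torch.cat(                     # block 403: one array per event
            [self.ecu_emb(ecu), self.dtc_emb(base_dtc), self.byte_emb(fault_byte)],
            dim=-1,
        )
        _, (h_n, _) = self.rnn(merged)
        return h_n[-1]                          # summary of the event sequence
```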
[0034] Respective rules, or weights, provided by the RNN are passed to blocks 405 to 407, where each block corresponds to one attribute. For example, block 405 corresponds to the ECU attribute, block 406 corresponds to the Base-DTC attribute and block 407 corresponds to the fault-byte attribute. Blocks 405 to 407 convert these rules, or weights, into probability values. Blocks 405 to 407 may be neural network layers that are also capable of learning specialised rules based on the weights provided to them by the RNN. The probability values calculated and stored for each of blocks 405 to 407 indicate the probability of each attribute option forming part of the next DTC to be generated for a machine. Attribute-specific rules, or weights, may in one example be converted to a range of 0 to 1 to represent a probability distribution. This conversion may be performed using a mathematical operation, or function, called softmax. Softmax may scale each value to a probability range, where attribute values that have high weights are converted to high probability values and low weights are mapped to low probability values. Together, the probabilities of all the unique values sum to 1. Any method for modelling categorical probability distributions in applications of deep learning may be applied.
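Blocks 405 to 407 might be realised as attribute-specific output layers followed by a softmax, as in this sketch; the dimensions are assumptions. Each head yields a probability distribution over that attribute's possible values, summing to 1.

```python
# Sketch of blocks 405-407: one softmax head per attribute.
# Sizes are illustrative assumptions matching the earlier sketches.
import torch
import torch.nn as nn

hidden_dim = 128                            # size of the RNN summary vector
ecu_head = nn.Linear(hidden_dim, 200)       # block 405: ECU probabilities
dtc_head = nn.Linear(hidden_dim, 1000)      # block 406: Base-DTC probabilities
byte_head = nn.Linear(hidden_dim, 256)      # block 407: fault-byte probabilities

summary = torch.randn(1, hidden_dim)        # stand-in for the RNN output
ecu_probs = torch.softmax(ecu_head(summary), dim=-1)
dtc_probs = torch.softmax(dtc_head(summary), dim=-1)
byte_probs = torch.softmax(byte_head(summary), dim=-1)

print(ecu_probs.sum())                      # each head's probabilities sum to 1
```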
[0035] In order to predict the next DTC to be generated by a machine, rules, or weights, provided by the RNN may be used in combination with all previous events for a particular machine in order to determine the DTC that is most likely to be generated next for that machine. In other words, data from more than one vehicle may be used to identify complex long-term dependencies and these are, in turn, used alongside the event sequence of a single machine to predict the DTC that is most likely to be generated next for that machine. Event sequences can be used to identify fault dependencies and these can be applied to a particular machine based on that machine's past-generated DTC events. For example, block 405 may determine, based on the RNN learning, that it is most likely that the next DTC will be generated by e.g. ECU 61. Block 406 may, for example, determine, based on the RNN learning, that it is most likely that the next Base-DTC to be generated by the machine will be P0500 (Vehicle Speed Sensor (VSS) Circuit). Block 407 may, for example, determine, based on the RNN learning, that it is most likely that the next DTC to be generated by the machine will have a fault-byte "45" associated with it, indicating a short circuit, for example.
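Putting these pieces together, one way a prediction for a particular machine could be read off is by taking the most probable value from each attribute head, as in the following hedged sketch. The dummy probability vectors and index-to-value mappings are assumptions for illustration only.

```python
# Sketch: pick the most probable value per attribute as the predicted next DTC.
import torch

# Dummy probability vectors standing in for the outputs of blocks 405-407.
ecu_probs = torch.softmax(torch.randn(1, 200), dim=-1)
dtc_probs = torch.softmax(torch.randn(1, 1000), dim=-1)
byte_probs = torch.softmax(torch.randn(1, 256), dim=-1)

predicted = {
    "ecu": int(torch.argmax(ecu_probs, dim=-1)),
    "base_dtc": int(torch.argmax(dtc_probs, dim=-1)),
    "fault_byte": int(torch.argmax(byte_probs, dim=-1)),
}
print(predicted)  # integer indices; a lookup table would map them back to values
                  # such as ECU 61, Base-DTC P0500 and fault byte 45 in the example
```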
[0036] The predicted DTC data is provided as an output of the method and presented to a user, so that the user is informed of the attributes that are expected to form the next-to-be-generated diagnostic error code data for the machine.
[0037] With knowledge of a potential fault in advance of malfunction, corrective action may be taken, where appropriate, and a user may be able to take action to avoid the fault occurring. Accordingly, the inconvenience associated with machine downtime and machine repair may be avoided.
[0038] FIG. 5 shows an example of a controller 500 for predicting faults in a machine. The controller 500 comprises a processor 501 and a memory 502. Stored within the memory 502 are instructions 503 for carrying out a method that may be used to predict faults in a machine, in accordance with any of the examples described above. In one example, the controller 500 may be part of a computer running the instructions 503.
[0039] FIG. 6 shows a memory 602, which is an example of a computer readable medium storing instructions 611 to 613. Instructions stored on memory 602, when executed by processor 601, may cause the processor 601 to obtain compact representations of DTC data from medium 603 and to predict a DTC to be generated by the machine by feeding the compact representations to an RNN that outputs predicted DTC data for the predicted DTC. The DTC data obtainable from storage medium 603 includes at least: a character indicative of a system of the machine in which a fault is detected, a character indicative of a sub-system of the machine in which a fault is detected, and a character indicative of a fault detected within the machine. The computer readable medium 602 may be part of a system including processor 601 and/or medium 603. The computer readable medium may be any form of storage device capable of storing executable instructions, such as a non-transient computer readable medium, for example Random Access Memory (RAM), Electrically-Erasable Programmable Read-Only Memory (EEPROM), a storage drive, an optical disc, or the like.

Claims (10)

  1. A method of predicting diagnostic error code data for a machine, the method comprising: predicting diagnostic error code data based on data attribute weights learnt using a recurrent neural network (RNN), wherein the predicted diagnostic error code data and data attributes for which the weights are learnt, comprises: (a) data indicative of a system of the machine in which a fault is detected; (b) a diagnostic error code indicative of a fault type; and (c) data providing additional information on the fault type.
  2. The method of claim 1, wherein the predicted diagnostic error code data indicates a next possible fault in the machine at the granularity of a particular fault within a system and sub-system of the machine.
  3. The method of claim 1, further comprising the RNN that is adapted to learn complex dependencies between diagnostic error code data, and the learnt dependencies are used to predict the diagnostic error code data that is to be generated by the machine.
  4. The method of claim 3, wherein the predicted diagnostic error code data that is to be generated by the machine is predicted using the learnt dependencies and the machine's past-generated diagnostic error code data.
  5. The method of claim 3, wherein compact representations of diagnostic error code data are fed to one of a long short-term memory network or gated recurrent unit to process the representations and learn weights for each attribute of the diagnostic error code data.
  6. The method of claim 1, wherein an entity embedding technique is used to provide a compact representation of the diagnostic error code data.
  7. The method of claim 1, wherein the diagnostic error code data corresponds to data of previous error sequences for one or more machines.
  8. The method of claim 1, wherein attributes (a) to (c) of the diagnostic error code data correspond to an electronic control unit, ECU, identifier, Base-DTC, and fault byte.
  9. A computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out the method of any of claims 1 to 8.
  10. A system for predicting diagnostic error code data for a machine, the system comprising: a processor; and a computer-readable storage medium comprising instructions which, when executed by the processor, cause the processor to carry out the method of any of claims 1 to 8.
GB2115659.1A 2021-11-01 2021-11-01 Fault prediction for machines Pending GB2612362A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2115659.1A GB2612362A (en) 2021-11-01 2021-11-01 Fault prediction for machines

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB2115659.1A GB2612362A (en) 2021-11-01 2021-11-01 Fault prediction for machines

Publications (2)

Publication Number Publication Date
GB202115659D0 GB202115659D0 (en) 2021-12-15
GB2612362A true GB2612362A (en) 2023-05-03

Family

ID=78828481

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2115659.1A Pending GB2612362A (en) 2021-11-01 2021-11-01 Fault prediction for machines

Country Status (1)

Country Link
GB (1) GB2612362A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020091972A1 (en) * 2001-01-05 2002-07-11 Harris David P. Method for predicting machine or process faults and automated system for implementing same
US20060288260A1 (en) * 2005-06-17 2006-12-21 Guoxian Xiao System and method for production system performance prediction
US20110118932A1 (en) * 2009-11-17 2011-05-19 Gm Global Technology Operations, Inc. Fault diagnosis and prognosis using diagnostic trouble code markov chains
US20160035150A1 (en) * 2014-07-30 2016-02-04 Verizon Patent And Licensing Inc. Analysis of vehicle data to predict component failure

Also Published As

Publication number Publication date
GB202115659D0 (en) 2021-12-15

Similar Documents

Publication Publication Date Title
US20210016786A1 (en) Predictive Vehicle Diagnostics Method
US6650949B1 (en) Method and system for sorting incident log data from a plurality of machines
CN103163877A (en) Method and system for root cause analysis and quality monitoring of system-level faults
SE531898C2 (en) Method and control system for controlling a motor in a working machine
WO2006110246A2 (en) Diagnostic and prognostic method and system
CN107423205B (en) System fault early warning method and system for data leakage prevention system
WO2021130486A1 (en) Sensor fault prediction method and apparatus
CN109934362A (en) A kind of method, apparatus and terminal device of vehicle detection
CN109270921A (en) A kind of method for diagnosing faults and device
CN116610092A (en) Method and system for vehicle analysis
CN110097219B (en) Electric vehicle operation and maintenance optimization method based on safety tree model
Singh et al. Decision forest for root cause analysis of intermittent faults
Kharazian et al. SCANIA component X dataset: a real-world multivariate time series dataset for predictive maintenance
GB2612362A (en) Fault prediction for machines
EP4167040A1 (en) Fault model editor and diagnostic tool
CN114104329A (en) Automated prediction of fixes based on sensor data
KR20220125689A (en) Method of determining the operational condition of vehicle components
CN110084919B (en) Electric vehicle and safety tree construction method thereof
CN115221599A (en) Chassis abnormal sound diagnosis method and system and automobile
Shivakarthik et al. Maintenance of automobiles by predicting system fault severity using machine learning
CN117540894B (en) Method, apparatus and storage medium for generating inspection plan
De Freitas et al. Data-Driven Methodology for Predictive Maintenance of Commercial Vehicle Turbochargers
EP4280172A1 (en) Vehicle fault diagnosis method, device, and computer-readable storage medium
Rau et al. Electrical fault classification strategies for maintenance models using machine learning algorithms
CN117171885A (en) Method for constructing abnormal fuel consumption model of automobile engine and abnormal fuel consumption prediction method