WO2024025539A1 - Hardware behavior analysis - Google Patents
- Publication number
- WO2024025539A1 (PCT/US2022/038714)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- layer
- models
- anomaly
- predictive
- ensemble
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
- G06F11/0751—Error or fault detection not based on redundancy
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3003—Monitoring arrangements specially adapted to the computing system or computing system component being monitored
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3452—Performance evaluation by statistical analysis
Definitions
- the method includes executing a second layer in the ensemble based on outputs of the first layer.
- the second layer includes a predictive model that determines whether an anomaly is detected in the device based on the output data of the first layer.
- a model may be referred to as a supervisor or metamodel.
- the supervisor model receives outputs from the first layer models and combines the outputs to determine whether an anomalous device behavior has been identified.
- the supervisor model reduces the error or false positive rate in comparison to each individual first layer model because the output of the supervisor is based on a combination of results rather than a single result.
- the supervisor model combines the results of the first layer by determining whether a threshold number of the models have identified anomalous behavior.
- the threshold may include a simple majority.
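The effect of a majority threshold on the false-positive rate can be illustrated with a back-of-envelope calculation, under the simplifying (and purely illustrative) assumption that the first-layer models produce false positives independently of one another:

```python
# Illustrative only: assumes independent first-layer models, which real
# detectors trained on the same data generally are not.
from math import comb

def majority_false_positive_rate(p, n):
    """Probability that a simple majority of n models all fire falsely,
    when each model has an individual false-positive rate p."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

single = 0.05  # hypothetical per-model false-positive rate
print(round(majority_false_positive_rate(single, 5), 6))  # 0.001158
```

Under these assumptions, requiring a majority of five models reduces the false-positive rate from 5% to roughly 0.1%, which is the intuition behind the supervisor combining results rather than trusting a single model.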
- Figure 3 shows a block diagram 300 of an ensemble of predictive models for anomaly detection in a device.
- the ensemble may be used in conjunction with the methods and systems described herein.
- the method 200 may be implemented in conjunction with the ensemble shown in Figure 3.
- the ensemble includes a first layer of predictive models 310 for anomaly detection.
- four predictive models 311, 312, 313, 314 are shown. More or fewer models may be implemented in the first layer 310.
- Each of models 311, 312, 313, 314 receives time series data. Some models may receive time series data from multiple sources over different time periods.
- the model 311 receives time series data 321, and the model 312 receives time series data 322, 323.
- the model 313 receives time series data 323.
- the model 314 receives time series data 323, 324.
- the ensemble includes a second layer including a single model 330.
- the model 330 receives the outputs of the models from the first layer 310, and determines whether an anomaly is detected, based on the outputs of the first layer 310. The model 330 may decide if the anomaly should be reported or ignored and report the anomaly 340 to the system implementing the ensemble, accordingly.
- additional layers may be implemented between the first layer and second layer.
- a single supervisor may be implemented as a collection of smaller supervisors. In that case, a first layer may include the anomaly detection models, a second layer may include the smaller supervisor models and a third layer may include a supervisor of the supervisor models.
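One way to picture such a hierarchy is the sketch below; the grouping of first-layer outputs into two smaller supervisors, and the vote thresholds, are purely illustrative assumptions:

```python
# Illustrative three-layer arrangement: smaller supervisors each combine
# a subset of first-layer outputs, and a top supervisor combines the
# smaller supervisors. Groupings and thresholds are hypothetical.
def vote(flags, needed):
    """Report an anomaly when at least `needed` inputs flagged one."""
    return sum(flags) >= needed

first_layer_flags = [True, True, False, True, False, False]

# Second layer: two smaller supervisors over halves of the first layer.
sub_a = vote(first_layer_flags[:3], needed=2)   # True  (2 of 3 agree)
sub_b = vote(first_layer_flags[3:], needed=2)   # False (1 of 3 agree)

# Third layer: a supervisor of the supervisor models.
anomaly = vote([sub_a, sub_b], needed=1)
print(anomaly)  # True
```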
- the machine-readable instructions may, for example, be executed by a general-purpose computer, a special purpose computer, an embedded processor, or processors of other programmable data processing devices to realize the functions described in the description and diagrams.
- a processor or processing apparatus may execute the machine-readable instructions.
- modules of apparatus may be implemented by a processor executing machine-readable instructions stored in a memory, or a processor operating in accordance with instructions embedded in logic circuitry.
- the term 'processor' is to be interpreted broadly to include a CPU, processing unit, ASIC, logic unit, or programmable gate set etc.
- the methods and modules may all be performed by a single processor or divided amongst several processors.
- Such machine-readable instructions may also be stored in a computer readable storage that can guide the computer or other programmable data processing devices to operate in a specific mode.
- Figure 4 shows an example 400 of a processor 410 associated with a memory 420.
- the memory 420 includes computer readable instructions 430 which are executable by the processor 410.
- the instructions 430 cause the processor 410 to receive time series datasets from a monitoring system in a device over a time period, execute a first layer in an ensemble of predictive models based on the time series datasets, the first layer including one or more predictive models for anomaly detection, and execute a second layer in the ensemble based on outputs of the first layer, the second layer including a predictive model that determines whether an anomaly is detected based on the output data of the first layer.
- Such machine-readable instructions may also be loaded onto a computer or other programmable data processing devices, so that the computer or other programmable data processing devices perform a series of operations to produce computer-implemented processing, thus the instructions executed on the computer or other programmable devices provide an operation for realizing functions specified by flow(s) in the flow charts and/or block(s) in the block diagrams.
- the methods herein may be implemented in the form of a computer software product, the computer software product being stored in a storage medium and including a plurality of instructions for making a computer device implement the methods recited in the examples of the present disclosure.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Quality & Reliability (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Debugging And Monitoring (AREA)
Abstract
A method for anomaly detection in a device includes receiving time series datasets from a monitoring system in the device over a time period. A first layer in an ensemble of predictive models is executed based on the time series datasets, the first layer including one or more predictive models for anomaly detection. A second layer in the ensemble is executed based on outputs of the first layer, the second layer including a predictive model that determines whether an anomaly is detected in the device based on output data of the first layer.
Description
HARDWARE BEHAVIOR ANALYSIS
TECHNICAL FIELD
[0001] The present disclosure relates to methods and systems for hardware behavior analysis. In particular, the present disclosure describes a method and system for anomaly detection in a device.
BACKGROUND
[0002] Hardware behavior analysis is the evaluation of the behavior of a device or system. One form of hardware behavior analysis is anomaly detection. Anomaly detection is the identification of behavior on a device or chip that differs significantly from the normal behavior of the device. Anomalies may occur in the form of anomalous data points, collections of data points, observations, or patterns of behavior which show significant deviation. Anomaly detection is widely deployed with many useful applications in different fields of technology.
[0003] Anomaly detection inference includes running live data points into an anomaly detection process. Anomaly detection inference may utilize statistical models and methods to infer whether a data point is anomalous. Such statistical models may initially be trained on a training dataset including ‘clean’ data representative of normal device behavior.
SUMMARY
[0004] It is an object of the disclosure to provide a method for anomaly detection in a device.
[0005] The foregoing and other objects are achieved by the features of the independent claims. Further implementation forms are apparent from the dependent claims, the description, and the figures.
[0006] According to a first aspect, a method for anomaly detection in a device is provided. The method includes receiving time series datasets from a monitoring system in the device over a time period. The method further includes executing a first layer in an ensemble of predictive models based on the time series datasets. The first layer includes one or more predictive models for anomaly detection. The method further includes executing a second layer in the ensemble based on outputs of the first layer. The second
layer includes a predictive model that determines whether an anomaly is detected in the device based on output data of the first layer.
[0007] The method according to the first aspect provides anomaly detection in a device based on an ensemble of predictive models. The use of multiple layers reduces the false-positive rate compared to using a single anomaly detection model.
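The two-layer arrangement can be sketched as follows; the threshold and z-score detectors below are hypothetical stand-ins for the first-layer predictive models named in the disclosure, and the majority-vote supervisor is one possible second-layer model:

```python
# Minimal sketch of a two-layer anomaly detection ensemble.
# Detector implementations are illustrative stand-ins, not the
# disclosure's actual models.
from statistics import mean, stdev

def threshold_detector(limit):
    """First-layer model: flag any sample exceeding a fixed limit."""
    return lambda series: any(x > limit for x in series)

def zscore_detector(baseline, z_limit=3.0):
    """First-layer model: flag samples deviating from a 'clean' baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return lambda series: any(abs(x - mu) / sigma > z_limit for x in series)

def supervisor(flags, votes_needed):
    """Second-layer model: report an anomaly only when enough
    first-layer models agree, reducing the false-positive rate."""
    return sum(flags) >= votes_needed

# 'Clean' training data and a live window containing a spike.
baseline = [10, 11, 9, 10, 12, 10, 11, 9, 10, 11]
live = [10, 11, 250, 10]

first_layer = [threshold_detector(limit=100), zscore_detector(baseline)]
flags = [model(live) for model in first_layer]     # each model votes independently
anomaly = supervisor(flags, votes_needed=2)        # simple majority of 2 models
print(anomaly)  # True: both models flag the spike at 250
```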
[0008] According to a second aspect, a data processing apparatus for performing anomaly detection in a device is provided. The data processing apparatus includes a processor configured to receive time series datasets from a monitoring system in the device over a time period. The processor is configured to execute a first layer in an ensemble of predictive models based on the time series datasets. The first layer includes one or more predictive models for anomaly detection. The processor is configured to execute a second layer in the ensemble based on outputs of the first layer. The second layer includes a predictive model that determines whether an anomaly is detected in the device based on output data of the first layer.
[0009] According to a third aspect, a non-transitory computer readable storage medium including program code is provided. The program code, when executed by a processor, causes the processor to receive time series datasets from a monitoring system in a device over a time period. The program code further causes the processor to execute a first layer in an ensemble of predictive models based on the time series datasets. The first layer includes one or more predictive models for anomaly detection. The program code further causes the processor to execute a second layer in the ensemble based on outputs of the first layer. The second layer includes a predictive model that determines whether an anomaly is detected in the device based on output data of the first layer.
[0010] In a first implementation form, the method according to the first aspect includes notifying the anomaly detection system that an anomaly has been detected.
[0011] In a second implementation form, the predictive model of the second layer determines whether an anomaly is detected based on a combination of the output data of the first layer.
[0012] In a third implementation form, the ensemble includes one or more further layers of predictive models between the first layer and second layer.
[0013] In a fourth implementation form, the method includes executing the one or more further layers based on outputs of predictive models in the previous layers.
[0014] In a fifth implementation form, the predictive models in the first layer include one or more of: one class support-vector machines, support-vector regression models, support-vector classification models, isolation forests, decision trees, K-mean models, Gaussian mixture models, kernel density estimations, local outlier factor models, threshold models, or linear regression models.
[0015] In a sixth implementation form, the predictive model in the second layer includes a voting classifier, a threshold model, logistic regression, or a decision tree.
[0016] These and other aspects of the disclosure are described in the embodiment(s) below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] For a more complete understanding of the present disclosure, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
[0018] Figure 1 shows a schematic diagram of an apparatus for anomaly detection, according to an example.
[0019] Figure 2 shows a flow diagram of a method for anomaly detection in a device, according to an example.
[0020] Figure 3 is a block diagram of an ensemble of predictive models, according to an example.
[0021] Figure 4 is a block diagram of a computing system, according to an example.
DETAILED DESCRIPTION
[0022] Example embodiments are described below in sufficient detail to enable those of ordinary skill in the art to embody and implement the systems and processes herein described. It is important to understand that embodiments can be provided in many alternate forms and should not be construed as limited to the examples set forth herein.
[0023] Accordingly, while embodiments can be modified in various ways and take on various alternative forms, specific embodiments thereof are shown in the drawings and described in detail below as examples. There is no intent to limit to the particular forms disclosed. On the contrary, all modifications, equivalents, and alternatives falling within
the scope of the appended claims should be included. Elements of the example embodiments are consistently denoted by the same reference numerals throughout the drawings and detailed description where appropriate.
[0024] The terminology used herein to describe embodiments is not intended to limit the scope. The articles “a,” “an,” and “the” are singular in that they have a single referent, however the use of the singular form in the present document should not preclude the presence of more than one referent. In other words, elements referred to in the singular can number one or more, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, items, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, items, steps, operations, elements, components, and/or groups thereof.
[0025] Unless otherwise defined, all terms (including technical and scientific terms) used herein are to be interpreted as is customary in the art. It will be further understood that terms in common usage should also be interpreted as is customary in the relevant art and not in an idealized or overly formal sense unless expressly so defined herein.
[0026] Figure 1 is a simplified schematic diagram showing an apparatus 100, according to an example. The apparatus 100 shown in Figure 1 may be used in conjunction with the other methods and systems described herein. The apparatus 100 includes a computing device 105. The device 105 may be an embedded system forming part of a larger system. The device 105 may include various components including one or more central processing unit (CPU) cores, memory devices, one or more input/output devices and secondary storage devices, graphical processing units (GPUs), bus interfaces, custom logic and any other circuitry or types of electronic components. The device 105 may be a System on a Chip (SoC) device which integrates one or more of the aforementioned components into a single substrate or microchip design.
[0027] In the example shown in Figure 1, the device 105 includes a component 110. The component 110 may be any of the previously mentioned components. The component 110 is connected via an interconnect 115 to a bus 120 which facilitates communication between the component 110 and one or more other components of the device 105. The component 110 is also communicatively coupled to a first monitoring
device 125. The first monitoring device 125 is configured to perform on-chip monitoring of the component 110. A second monitoring device 130 is communicatively coupled to the interconnect 115. The second monitoring device 130 is configured to monitor data which is communicated between the component 110 and bus 120.
[0028] The device 105 shown in Figure 1 is an example of a device including one component and two monitoring devices. In other examples, one or more further monitoring devices may be present together with additional components. In some cases, a single monitoring device may be present. Further monitoring devices may perform monitoring of the further components and interconnects between the components and the bus 120 or monitoring of the bus 120 itself.
[0029] The monitoring devices 125, 130 are hardware blocks that monitor hardware behavior autonomously without the involvement of software. According to examples, devices 125, 130 may be configured at boot time. In some cases, the monitoring devices 125, 130 may also be reconfigurable after booting.
[0030] The monitoring devices 125, 130 are coupled to a communication module 135. The monitoring devices 125, 130 are configured to communicate time series data to the communication module 135. The time series data includes a stream of time-stamped messages. The messages may be, for example, counter values indicative of hardware events. In some examples, messages may be communicated periodically. In this case, the messages provide a count of hardware events over a predetermined time period. In other examples, the monitoring devices 125, 130 provide direct information on hardware events based on a trace. In this example, messages may be irregularly spaced where the timing of messages follows hardware events as they happen.
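A counter-based message stream might be modeled as below; the field names and values are assumptions for illustration, not the actual on-chip message format:

```python
# Illustrative structure for counter-based monitoring messages.
# Field names and values are hypothetical; the real message format
# is implementation-specific.
from collections import namedtuple

Message = namedtuple("Message", ["timestamp_us", "counter_value"])

# Periodic messages: each carries the count of hardware events
# (e.g. bus transactions) observed in the preceding period.
stream = [
    Message(timestamp_us=1000, counter_value=42),
    Message(timestamp_us=2000, counter_value=40),
    Message(timestamp_us=3000, counter_value=975),  # unusual burst
]

# A time series dataset for the first-layer models is simply the
# ordered counter values.
series = [m.counter_value for m in stream]
print(series)  # [42, 40, 975]
```

In the trace-based case the timestamps would instead be irregularly spaced, following the hardware events as they happen.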
[0031] The communication module 135 may provide means of connecting a further device to the device 105 such as a universal serial bus port, network port, or any other suitable means of connection. The device 105 may be connected to a further device 140. The further device 140 may be a computing system or any other remote electronic device capable of receiving data from the device 105.
[0032] The device 105 is configured to implement an application 145. The application 145 is an analytics application which receives the time series data from the monitoring devices 125, 130 and performs data analytics on the time series data, according to
examples described herein. In other examples, an analytics application 145 may be implemented on the device 140. According to examples, hardware events, either recorded by a counter or directly as a trace, may be filtered to focus attention on a subset of all hardware events that the monitoring device records.
[0033] Anomaly detection is the identification of system or chip behavior differing from normal or expected behavior. The methods and systems described herein perform anomaly detection based on analysis of time series data received from monitoring devices such as those shown in Figure 1. The methods are based on the use of an ensemble of models which infer whether device behavior is anomalous based on time series data.
[0034] Anomaly detection may be used to improve the functioning of a device. In response to identifying anomalous device behavior, the cause of the anomaly may be identified, and mitigating actions may be taken to address any problems. Mitigating actions may include shutting down or rebooting the device, modifying, upgrading, or deleting software or firmware on the device.
[0035] Figure 2 shows a flow diagram of a method 200 for anomaly detection in a device. The method 200 is implemented on the apparatus 100 shown in Figure 1. In particular, the method 200 is implemented in the analytics application 145.
[0036] At block 210, the method 200 includes receiving time series data from a monitoring system in the device over a time period. In the example shown in Figure 1, the monitoring system may include the monitoring devices 125, 130 and the communication module 135 which facilitates communication of the messages from the monitoring devices 125, 130 to the analytics application 145. The time series data may be counter-based or trace-based data including time-stamped messages such as those generated by the monitoring devices 125, 130.
[0037] At block 220, the method includes executing a first layer in an ensemble of predictive models based on the time series datasets. The first layer includes one or more predictive models for anomaly detection. The first layer models may be efficient models which independently generate outputs indicating whether an anomaly has been detected.
[0038] According to examples, the predictive models in the first layer of the ensemble may include one-class support-vector machines, support-vector regression models, support-vector classification models, isolation forests, decision trees, K-means models, Gaussian mixture models, kernel density estimations, local outlier factor models, threshold models, and linear regression models.
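For illustration, two of the simpler first-layer model types named above, a threshold model and a statistical outlier model, can be sketched in Python. The function names, the example data, and the k=3 cutoff are assumptions made for this sketch, not part of the disclosure:

```python
import statistics

def threshold_model(series, limit):
    """Flag an anomaly if the most recent value exceeds a fixed limit."""
    return series[-1] > limit

def zscore_model(series, k=3.0):
    """Flag an anomaly if the most recent value lies more than k
    standard deviations from the mean of the earlier values."""
    history = series[:-1]
    mu = statistics.mean(history)
    sigma = statistics.pstdev(history)
    if sigma == 0:
        return series[-1] != mu
    return abs(series[-1] - mu) / sigma > k

# Each first-layer model independently inspects the same counter series
series = [10, 11, 9, 10, 12, 10, 11, 50]  # last sample is a spike
first_layer_outputs = [threshold_model(series, 30), zscore_model(series)]
```

Each model produces an independent anomaly flag; the collection of flags forms the input to the second layer described at block 230.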
[0039] At block 230, the method includes executing a second layer in the ensemble based on outputs of the first layer. The second layer includes a predictive model that determines whether an anomaly is detected in the device based on the output data of the first layer. Such a model may be referred to as a supervisor or metamodel. The supervisor model receives outputs from the first layer models and combines the outputs to determine whether an anomalous device behavior has been identified.
[0040] The supervisor model reduces the error or false positive rate in comparison to each individual first layer model because the output of the supervisor is based on a combination of results rather than a single result. In some examples, the supervisor model combines the results of the first layer by determining whether a threshold number of the models have identified anomalous behavior. In some cases, the threshold may include a simple majority.
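A minimal sketch of such a threshold-combining supervisor, assuming the first-layer outputs are simple Boolean anomaly flags and defaulting to the simple majority described above (the function name and default are assumptions for illustration):

```python
def supervisor(first_layer_outputs, threshold=None):
    """Report an anomaly only when at least `threshold` of the
    first-layer models agree; default to a simple majority."""
    if threshold is None:
        threshold = len(first_layer_outputs) // 2 + 1
    # Booleans sum as 0/1, giving the number of models voting "anomaly"
    return sum(first_layer_outputs) >= threshold
```

Because a single noisy first-layer model cannot by itself trigger a report under a majority rule, the combined decision has a lower false positive rate than any individual model.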
[0041] Figure 3 shows a block diagram 300 of an ensemble of predictive models for anomaly detection in a device. The ensemble may be used in conjunction with the methods and systems described herein. In particular, the method 200 may be implemented in conjunction with the ensemble shown in Figure 3.
[0042] The ensemble includes a first layer of predictive models 310 for anomaly detection. In Figure 3, four predictive models 311, 312, 313, 314 are shown. More or fewer models may be implemented in the first layer 310. Each of models 311, 312, 313, 314 receives time series data. Some models may receive time series data from multiple sources over different time periods. The model 311 receives time series data 321; the model 312 receives time series data 322, 323. The model 313 receives time series data 323. The model 314 receives time series data 323, 324.
[0043] The ensemble includes a second layer comprising a single model 330. The model 330 receives the outputs of the models from the first layer 310 and determines whether an anomaly is detected based on the outputs of the first layer 310. The model 330 may decide whether the anomaly should be reported or ignored and report the anomaly 340 to the system implementing the ensemble accordingly.
[0044] In some cases, additional layers may be implemented between the first layer and second layer. For example, in some cases, a single supervisor may be implemented as a collection of smaller supervisors. In that case, a first layer may include the anomaly detection models, a second layer may include the smaller supervisor models and a third layer may include a supervisor of the supervisor models.
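The hierarchical arrangement described above, in which smaller supervisors each combine a subset of first-layer outputs and a top-level supervisor combines their decisions, can be sketched as follows; the fixed group size and majority rule are assumptions made for illustration:

```python
def majority(votes):
    """True when more than half of the votes are True."""
    return sum(votes) > len(votes) / 2

def hierarchical_supervisor(first_layer_outputs, group_size=3):
    """Split first-layer outputs into groups, let a small supervisor
    vote on each group, then let a top-level supervisor vote on the
    group decisions (a supervisor of the supervisor models)."""
    groups = [first_layer_outputs[i:i + group_size]
              for i in range(0, len(first_layer_outputs), group_size)]
    mid_layer = [majority(g) for g in groups]   # smaller supervisors
    return majority(mid_layer)                  # supervisor of supervisors
```

In practice the groups might correspond to related hardware components, so each smaller supervisor specializes in one region of the device.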
[0045] The present disclosure is described with reference to flow charts and/or block diagrams of the method, devices, and systems according to examples of the present disclosure. Although the flow diagrams described above show a specific order of execution, the order of execution may differ from that which is depicted. Blocks described in relation to one flow chart may be combined with those of another flow chart. In some examples, some blocks of the flow diagrams may not be necessary and/or additional blocks may be added. Each flow and/or block in the flow charts and/or block diagrams, as well as combinations of the flows and/or diagrams in the flow charts and/or block diagrams, can be realized by machine readable instructions.
[0046] The machine-readable instructions may, for example, be executed by a general-purpose computer, a special purpose computer, an embedded processor, or processors of other programmable data processing devices to realize the functions described in the description and diagrams. In particular, a processor or processing apparatus may execute the machine-readable instructions. Thus, modules of apparatus may be implemented by a processor executing machine-readable instructions stored in a memory, or a processor operating in accordance with instructions embedded in logic circuitry. The term 'processor' is to be interpreted broadly to include a CPU, processing unit, ASIC, logic unit, or programmable gate set etc. The methods and modules may all be performed by a single processor or divided amongst several processors.
[0047] Such machine-readable instructions may also be stored in a computer readable storage that can guide the computer or other programmable data processing devices to operate in a specific mode. Figure 4 shows an example 400 of a processor 410 associated with a memory 420. The memory 420 includes computer readable instructions 430 which are executable by the processor 410.
[0048] The instructions 430 cause the processor 410 to receive time series datasets from a monitoring system in a device over a time period, execute a first layer in an
ensemble of predictive models based on the time series datasets, the first layer including one or more predictive models for anomaly detection, and execute a second layer in the ensemble based on outputs of the first layer, the second layer including a predictive model that determines whether an anomaly is detected based on the output data of the first layer.
[0049] Such machine-readable instructions may also be loaded onto a computer or other programmable data processing devices, so that the computer or other programmable data processing devices perform a series of operations to produce computer-implemented processing, thus the instructions executed on the computer or other programmable devices provide an operation for realizing functions specified by flow(s) in the flow charts and/or block(s) in the block diagrams.
[0050] Further, the methods herein may be implemented in the form of a computer software product, the computer software product being stored in a storage medium and including a plurality of instructions for making a computer device implement the methods recited in the examples of the present disclosure.
[0051] The present disclosure may be embodied in other specific apparatus and/or methods. The described embodiments are to be considered in all respects as illustrative and not restrictive. In particular, the scope of the disclosure is indicated by the appended claims rather than by the description and figures herein. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Claims
1. A method for anomaly detection in a device, the method comprising: receiving time series datasets from a monitoring system in the device over a time period; executing a first layer in an ensemble of predictive models based on the time series datasets, the first layer comprising one or more predictive models for anomaly detection; and executing a second layer in the ensemble based on outputs of the first layer, the second layer comprising a predictive model that determines whether an anomaly is detected in the device based on output data of the first layer.
2. The method of claim 1 further comprising: notifying an anomaly detection system that an anomaly has been detected.
3. The method of claim 1, wherein the predictive model of the second layer determines whether an anomaly is detected based on a combination of the output data of the first layer.
4. The method of claim 1, wherein the ensemble comprises one or more further layers of predictive models between the first layer and second layer.
5. The method of claim 4, further comprising: executing the one or more further layers based on outputs of predictive models in the previous layers.
6. The method of claim 1, wherein the predictive models in the first layer comprise one or more of: one-class support-vector machines, support-vector regression models, support-vector classification models, isolation forests, decision trees, K-means models, Gaussian mixture models, kernel density estimations, local outlier factor models, threshold models, or linear regression models.
7. The method of claim 1, wherein the predictive model in the second layer comprises a voting classifier, a threshold model, logistic regression, or a decision tree.
8. A data processing apparatus for performing anomaly detection in a device, the data processing apparatus comprising: a processor configured to: receive time series datasets from a monitoring system in the device over a time period; execute a first layer in an ensemble of predictive models based on the time series datasets, the first layer comprising one or more predictive models for anomaly detection; and execute a second layer in the ensemble based on outputs of the first layer, the second layer comprising a predictive model that is configured to determine whether an anomaly is detected in the device based on output data of the first layer.
9. The data processing apparatus of claim 8, wherein the processor is further configured to notify an anomaly detection system that an anomaly has been detected.
10. The data processing apparatus of claim 9, wherein the predictive model of the second layer is configured to determine whether an anomaly is detected based on a combination of the output data of the first layer.
11. The data processing apparatus of claim 10, wherein the ensemble comprises one or more further layers of predictive models between the first layer and second layer.
12. The data processing apparatus of claim 11, wherein the processor is further configured to execute the one or more further layers based on outputs of predictive models in the previous layers.
13. A non-transitory computer-readable medium comprising instructions which, when executed by a processor, cause the processor to: receive time series datasets from a monitoring system in a device over a time period; execute a first layer in an ensemble of predictive models based on the time series datasets, the first layer comprising one or more predictive models for anomaly detection; and execute a second layer in the ensemble based on outputs of the first layer, the second layer comprising a predictive model that determines whether an anomaly is detected based on output data of the first layer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2022/038714 WO2024025539A1 (en) | 2022-07-28 | 2022-07-28 | Hardware behavior analysis |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024025539A1 (en) | 2024-02-01 |
Family
ID=83232838
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2022/038714 WO2024025539A1 (en) | 2022-07-28 | 2022-07-28 | Hardware behavior analysis |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024025539A1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200379868A1 (en) * | 2019-05-31 | 2020-12-03 | Gurucul Solutions, Llc | Anomaly detection using deep learning models |
WO2022040360A1 (en) * | 2020-08-20 | 2022-02-24 | Red Bend Ltd. | Detecting vehicle malfunctions and cyber attacks using machine learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22765964 Country of ref document: EP Kind code of ref document: A1 |