WO2018174908A1 - Method to enhance reliability of monitoring data for edge-core distributed analytics systems

Info

Publication number: WO2018174908A1
Application number: PCT/US2017/024170
Authority: WIPO (PCT)
Other languages: French (fr)
Inventors: Yusuke Shomura, Takeshi Shibata
Original assignee: Hitachi, Ltd.
Priority date: 2017-03-24
Filing date: 2017-03-24

Classifications

    • H04L 67/12: Network-specific arrangements or communication protocols supporting networked applications adapted for proprietary or special purpose networking environments, e.g. medical networks, sensor networks, networks in a car or remote metering networks
    • H04L 67/125: Network-specific arrangements or communication protocols supporting networked applications adapted for proprietary or special purpose networking environments, involving the control of end-device applications over a network
    • H04L 67/2823: Network-specific arrangements or communication protocols supporting networked applications for the provision of proxy services, e.g. intermediate processing or storage in the network, for conversion or adaptation of application content or format
    • H04L 67/2828: Network-specific arrangements or communication protocols supporting networked applications for the provision of proxy services, for reducing the amount or size of exchanged application data
    • H04W 84/18: Self-organising networks, e.g. ad-hoc networks or sensor networks
    • H04W 24/04: Supervisory, monitoring or testing arrangements for maintaining operational condition

Abstract

Example implementations described herein are directed to systems and methods for improving the reliability of monitoring data for distributed analytics systems. Example implementations can involve confidence evaluation functions on both the edge and the core. The invention enables data to be evaluated from multiple perspectives without transmitting a large volume of data via a wide area network (WAN).

Description

METHOD TO ENHANCE RELIABILITY OF MONITORING DATA FOR EDGE-CORE DISTRIBUTED ANALYTICS SYSTEMS

BACKGROUND

Field

[0001] The present application is generally directed to Internet of Things (IoT), and more specifically, to enhancing reliability of monitoring data for distributed analytics systems.

Related Art

[0002] In related art IoT systems, there are systems and methods for the verification of monitored data to determine if such data can be trusted. Untrusted or uncertain data can lead to wrong decisions or operations which may result in damage to operational technology (OT) systems. Thus, related art IoT systems provide implementations for verifying monitored data.

[0003] In a related art implementation, there is a terminal authentication system utilized for each sensor node. However, such related art implementations tend to consume excessive resources for sensor nodes. Such resources can include computing resources and memory to store the authentication keys, and online update functions to maintain the authentication method of the sensor nodes and the secure connections with the sensor nodes. As sensor nodes are often required to work for long periods of time (e.g. a few decades), the resource consumption can be excessive in comparison with the desired implementation.

[0004] In another related art implementation, there is a data cleansing system to avoid anomalous data. In such related art implementations, the range of values and fluctuations are estimated, and the sensed data is judged as either anomalous or not through comparison with the estimation. However, such related art implementations can have limitations. For example, such related art implementations may only apply to data that can be estimated to a sufficient confidence level, and may risk filtering out normal data that indicates an actual incident when such data values fall outside the estimation.

[0005] In related art implementations, there has been a need to reduce the volume of data transmitted via Wide Area Network (WAN) for storage in a data lake. To facilitate such needs, related art implementations have involved distributed analytics systems. In distributed analytics systems, rich data is processed at the edge side for extraction of desired feature values, wherein the desired feature values are transmitted to the core side. Such related art implementations also have a need to ensure the reliability of the monitored data.

SUMMARY

[0006] Aspects of the present disclosure include a system, which can involve a first apparatus configured to manage a plurality of sensors. The first apparatus can include a memory, configured to store a plurality of evaluation categories for data received from the plurality of sensors, each of the plurality of evaluation categories associated with an anomaly tag, impact information associated with each of the plurality of evaluation categories, and a data range; and a processor, configured to, for data received from a sensor of the plurality of sensors, determine one or more applicable evaluation categories from the plurality of evaluation categories for the data received from the sensor of the plurality of sensors, based on a comparison of the data received from the sensor of the plurality of sensors to the data range associated with the one or more applicable evaluation categories from the plurality of evaluation categories; calculate a confidence score for the data received from the sensor based on each of the one or more applicable evaluation categories from the plurality of evaluation categories and the impact information associated with the each of the one or more applicable evaluation categories from the plurality of evaluation categories; and transmit the confidence score and the anomaly tag associated with each of the one or more applicable evaluation categories to a second apparatus.

[0007] Aspects of the present disclosure include a method for managing a system involving a first apparatus configured to manage a plurality of sensors. The method can include managing a plurality of evaluation categories for data received from the plurality of sensors, each of the plurality of evaluation categories associated with an anomaly tag, impact information associated with each of the plurality of evaluation categories, and a data range; and for data received from a sensor of the plurality of sensors, determining one or more applicable evaluation categories from the plurality of evaluation categories for the data received from the sensor of the plurality of sensors, based on a comparison of the data received from the sensor of the plurality of sensors to the data range associated with the one or more applicable evaluation categories from the plurality of evaluation categories; calculating a confidence score for the data received from the sensor based on each of the one or more applicable evaluation categories from the plurality of evaluation categories and the impact information associated with the each of the one or more applicable evaluation categories from the plurality of evaluation categories; and transmitting the confidence score and the anomaly tag associated with each of the one or more applicable evaluation categories to a second apparatus.

[0008] Aspects of the present disclosure can include a non-transitory computer readable medium storing instructions for managing a system involving a first apparatus configured to manage a plurality of sensors, the instructions can include managing a plurality of evaluation categories for data received from the plurality of sensors, each of the plurality of evaluation categories associated with an anomaly tag, impact information associated with each of the plurality of evaluation categories, and a data range; and for data received from a sensor of the plurality of sensors, determining one or more applicable evaluation categories from the plurality of evaluation categories for the data received from the sensor of the plurality of sensors, based on a comparison of the data received from the sensor of the plurality of sensors to the data range associated with the one or more applicable evaluation categories from the plurality of evaluation categories; calculating a confidence score for the data received from the sensor based on each of the one or more applicable evaluation categories from the plurality of evaluation categories and the impact information associated with the each of the one or more applicable evaluation categories from the plurality of evaluation categories; and transmitting the confidence score and the anomaly tag associated with each of the one or more applicable evaluation categories to a second apparatus.

[0009] Aspects of the present disclosure include a system involving a first apparatus configured to manage a plurality of sensors. The system can include means for managing a plurality of evaluation categories for data received from the plurality of sensors, each of the plurality of evaluation categories associated with an anomaly tag, impact information associated with each of the plurality of evaluation categories, and a data range; and for data received from a sensor of the plurality of sensors, means for determining one or more applicable evaluation categories from the plurality of evaluation categories for the data received from the sensor of the plurality of sensors, based on a comparison of the data received from the sensor of the plurality of sensors to the data range associated with the one or more applicable evaluation categories from the plurality of evaluation categories; means for calculating a confidence score for the data received from the sensor based on each of the one or more applicable evaluation categories from the plurality of evaluation categories and the impact information associated with the each of the one or more applicable evaluation categories from the plurality of evaluation categories; and means for transmitting the confidence score and the anomaly tag associated with each of the one or more applicable evaluation categories to a second apparatus.

BRIEF DESCRIPTION OF DRAWINGS

[0010] FIG. 1 illustrates an example of a core-edge distributed analytics system, in accordance with an example implementation.

[0011] FIG. 2 illustrates an example architecture of the IoT Gateway (GW), in accordance with an example implementation.

[0012] FIG. 3 illustrates an example of an evaluation method control table, in accordance with an example implementation.

[0013] FIG. 4 illustrates an example of a confidence judgment table which stores the normal value or range for each sensor node, in accordance with an example implementation.

[0014] FIG. 5 illustrates an example flow chart of the confidence evaluation program, in accordance with an example implementation.

[0015] FIG. 6 illustrates an example of the historical data for confidence evaluation, which is utilized to generate the normal value for each method in accordance with an example implementation.

[0016] FIG. 7 illustrates an example of the data lake, in accordance with an example implementation.

[0017] FIG. 8 illustrates an example architecture of the core server, in accordance with an example implementation.

[0018] FIG. 9 illustrates an example of the operation history, in accordance with an example implementation.

[0019] FIG. 10 illustrates an example of the incident factor table, in accordance with an example implementation.

[0020] FIG. 11 illustrates a flow chart of the confidence reevaluation program, in accordance with an example implementation.

[0021] FIG. 12 illustrates an example flow diagram of the notification program, in accordance with an example implementation.

[0022] FIG. 13 illustrates an example of procedure and message format, in accordance with an example implementation.

[0023] FIG. 14 illustrates an example flow for IoT GW for processing messages from the core server, in accordance with an example implementation.

DETAILED DESCRIPTION

[0024] The following detailed description provides further details of the figures and example implementations of the present application. Reference numerals and descriptions of redundant elements between figures are omitted for clarity. Terms used throughout the description are provided as examples and are not intended to be limiting. For example, the use of the term "automatic" may involve fully automatic or semi-automatic implementations involving user or administrator control over certain aspects of the implementation, depending on the desired implementation of one of ordinary skill in the art practicing implementations of the present application. Selection can be conducted by a user through a user interface or other input means, or can be implemented through a desired algorithm. Example implementations as described herein can be utilized either singularly or in combination and the functionality of the example implementations can be implemented through any means according to the desired implementations.

[0025] Example implementations described herein are generally directed to data collection systems. In the example implementations described herein, reliability of the monitored data may be improved for the distributed analytics systems. In particular, example implementations can involve a confidence evaluation function both on the edge and the core, which are configured to communicate with each other via the WAN. The functions can be configured to use location-specific data to evaluate the data. For example, the location-specific data of the edge is related to the data source, routing and node behavior. The location-specific data of the core is information related to the operation history of Operational Technology (OT) and IoT systems. Example implementations described herein are configured to evaluate data with multiple perspectives while reducing the volume of transmission via WAN, which can enhance the reliability of the monitored data.

[0026] In an example implementation described below, the system is configured to manage reliability of monitoring data to avoid incorrect conclusions or bad decisions from anomalous data.

[0027] FIG. 1 illustrates an example of a core-edge distributed analytics system, in accordance with an example implementation. In this example, there are sensor nodes 101 and IoT GWs 102 in an edge site associated with an edge server 103. Additionally, there is a core server 106 and a data lake 107 implemented in a cloud as a core. The edge and the core are connected via a wide area network (WAN) 104. The sensor nodes 101 connect to the IoT GW 102 and send monitoring data 105 to the core server 106 through the IoT GW 102. The IoT GW 102 has a fundamental gateway function: it receives data from the sensor nodes 101 and sends the data 105 to the core server 106. The IoT GW 102 can also apply transmission control functions, such as aggregation, summarization, and compression. The core server 106 receives the data and stores the data in the data lake 107. Example implementations of the present disclosure deploy a confidence evaluation function on the edge. In this example, the function is implemented on the IoT GW 102. It may also be implemented on an edge server 103 located beside the IoT GW 102, depending on the desired implementation.

[0028] FIG. 2 illustrates an example architecture of the IoT GW 102, in accordance with an example implementation. The IoT GW 102 can include a Memory 200, Central Processing Unit (CPU) 210, Input/Output (I/O) interface 220, Network Interface (I/F) 230, and Internal Storage 240. Memory 200 is configured to manage confidence evaluation program 201, evaluation method control table 202, confidence judgment table 203, and historical data 204. The confidence evaluation program 201 is executed each time the IoT GW 102 receives monitoring data 105 from sensor nodes 101 and adds a confidence score to the data based on the evaluation method control table 202 and confidence judgment table 203. The historical data 204 contains the history of the data transmitted from the edge to the core, and is used to create confidence judgment table 203. IoT GW 102 can be configured to manage the sensors 101 at the edge site through the use of edge server 103.

[0029] Memory 200 can be configured to store a plurality of evaluation categories for data received from the plurality of sensors, wherein each of the plurality of evaluation categories is associated with an anomaly tag, impact information associated with each of the plurality of evaluation categories, and a data range as illustrated in FIG. 3 and FIG. 4.

[0030] CPU 210 can be in the form of physical processors or a combination of software processes and hardware to execute instructions for the confidence evaluation program 201, as illustrated in the flow of FIG. 5 and FIG. 14. For example, for data received from a given sensor of the sensors managed by IoT GW 102 (e.g. through receipt on internal storage 240), CPU 210 can be configured to execute the flow of FIG. 5 to determine one or more applicable evaluation categories from the plurality of evaluation categories for the data received from the sensor of the plurality of sensors, based on a comparison of the data received from the sensor of the plurality of sensors to the data range associated with the one or more applicable evaluation categories from the plurality of evaluation categories; calculate a confidence score for the data received from the sensor based on each of the one or more applicable evaluation categories from the plurality of evaluation categories and the impact information associated with the each of the one or more applicable evaluation categories from the plurality of evaluation categories; and transmit the confidence score and the anomaly tag associated with each of the one or more applicable evaluation categories to a second apparatus such as a core server 106.

[0031] CPU 210 is also configured to update ones of the plurality of evaluation categories in the memory associated with one or more exceptions received from the second apparatus such as a core server 106. When exceptions are received from a notification as illustrated in FIG. 13, the exceptions can be recorded into the confidence judgment table 203 as illustrated in FIG. 4. In this manner, when the confidence score is calculated in accordance with the flow of FIG. 5, exceptions received from the core server 106 can be applied in the calculation for the one or more applicable evaluation categories, which can thereby facilitate real time determination as to whether anomalous data falls under an exception and should not be filtered. To facilitate such functionality, CPU 210 may be configured to execute the flow diagram illustrated in FIG. 14.

[0032] I/O interface 220 is configured to provide an interface for administrators of the IoT GW 102. Network I/F 230 facilitates connections and data transmission over the WAN 104 between the IoT GW 102 and the core server 106. Internal Storage 240 intakes data from sensor nodes 101 in real time for real time processing through the use of confidence evaluation program 201, thereby facilitating functionality for determining the filtering of data based on a confidence threshold in real time.

[0033] FIG. 3 illustrates an example of an evaluation method control table 202, in accordance with an example implementation. Each row of the table indicates a method to evaluate the confidence of the data. The table can include category 300, tag 301, weight 302, and active flag 303. The category 300 is used to classify methods with the type of data sources. For example, "Src-Physics" indicates that the IoT GW 102 is to evaluate the data with the physical layer information related to the transmission from sensor node to IoT GW. Other categories can include classification based on the evaluation of the data coming from a new node, from the data link, or from the network, or other methods according to the desired implementation. Such evaluation categories are determined to be applicable or not for the evaluation method based on the source of the data as determined from the associated node, and whether the flag is set to active or not as illustrated in FIG. 5. The tag 301 is used to identify the method to evaluate the data. The weight 302 is impact information indicative of the influence or impact degree for the confidence score, when the evaluation method indicates that the data is anomalous. For example, when the data comes from a new device and the evaluation method tagged "Src-New" determines that the data is anomalous, then the confidence score is reduced by 50% in accordance with the weight. Depending on the desired implementation, for the calculation of the confidence score, the weights applied can be stacked. For example, if other anomalies are detected in the data from the new device example, first the confidence score is reduced by 50% (e.g. 100 to 50), and then the subsequent anomalous weights are applied on the reduced score (e.g. for a detected anomaly with a weight of 20%, the score is reduced from 50 to 40). This can be applied by the confidence evaluation programs on both the IoT GW 102 and the core server 106. The active flag 303 indicates whether the evaluation method is to be executed (Y) or not (N), and can be set by an administrator in accordance with a desired implementation.

[0034] FIG. 4 illustrates an example of a confidence judgment table 203 which stores the normal value or range for each sensor node, in accordance with an example implementation. The table includes node 400, category 401, tag 402, normal range 403, and exception 404. The category 401 and tag 402 are similar to the category 300 and tag 301 as illustrated in the evaluation method control table 202 of FIG. 3. The node column 400 stores the sensor node ID and the normal range column 403 stores the normal range of values for the sensor node corresponding to the sensor node ID. The exception 404 is used to store information from the core server 106, such as instructions to consider every value as normal, to add a normal value temporarily, to reset the normal value, to recreate the normal value, and so on in accordance with the desired implementation. For example, the second row (#2) indicates that the IoT GW 102 always receives the data of Sensor 1 from the network device "wlan0", and then the normal value is set to "wlan0". Hence the confidence evaluation program 201 determines that an anomaly occurs if the IoT GW 102 receives the data of Sensor 1 from another network device. Exception 404 can be received from core server 106 to provide an exception for an anomalous reading.
For example, the third row (#3) indicates that the IoT GW 102 is to create an exception for the normal range 403 and not tag them as anomalous until a period of time T3 is reached. Such an exception could be created due to the updating of equipment, changing of a location, or so on according to the desired implementation. Similarly, the fifth row (#5) indicates that the IoT GW 102 is to create an exception and not tag the data link indicated as [yy-yy-yy-yy] as anomalous, which can be due, for example, to an upgrade to the equipment.
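
As a concrete illustration of the tables of FIG. 3 and FIG. 4 and of the stacked application of weights described above, the following Python sketch models the two control tables and the score reduction. The field names, table entries, and numeric values are assumptions made for this example and are not taken from the drawings.

```python
# Illustrative sketch of the evaluation method control table 202 (FIG. 3)
# and the confidence judgment table 203 (FIG. 4). Field names and entries
# are assumptions for this example, not values taken from the drawings.
evaluation_method_control_table = [
    {"category": "Src-New",     "tag": "Src-New",    "weight": 0.50, "active": True},
    {"category": "Src-Physics", "tag": "Src-RSSI",   "weight": 0.20, "active": True},
    {"category": "Src-Physics", "tag": "Src-SNR",    "weight": 0.10, "active": True},
    {"category": "Src-Device",  "tag": "Src-Device", "weight": 0.30, "active": False},
]

confidence_judgment_table = [
    {"node": "Sensor 1", "category": "Src-Device",  "tag": "Src-Device",
     "normal_range": {"wlan0"}, "exception": None},
    {"node": "Sensor 1", "category": "Src-Physics", "tag": "Src-RSSI",
     "normal_range": (-70, -40), "exception": "treat as normal until T3"},
]

def apply_weights(initial_score, triggered_weights):
    """Stack anomaly weights on the confidence score as described above:
    each triggered method reduces the current (already reduced) score."""
    score = initial_score
    for weight in triggered_weights:
        score *= (1.0 - weight)
    return score

# Matches the worked example: 100 -> 50 (50% weight) -> 40 (20% weight).
assert abs(apply_weights(100, [0.50, 0.20]) - 40.0) < 1e-9
```

Stacking the weights multiplicatively reproduces the 100 to 50 to 40 example given for the evaluation method control table 202.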

[0035] FIG. 5 illustrates an example flow chart of the confidence evaluation program 201, in accordance with an example implementation. The program is executed whenever the IoT GW 102 receives monitoring data from sensor nodes as shown with the flow at 501, and can be executed as real time processing for determining a confidence score for data and filtering data based on data not meeting a confidence score. At 502, the confidence evaluation program 201 gathers logs related to the monitoring data, such as input device (Src-Device) and Signal to Noise Ratio (SNR) of the wireless section (Src-SNR), and statistics related to the node behavior, such as the frequency of the data generation (Gen-Freq). The confidence evaluation program 201 can identify the categories provided in the evaluation method control table 202 as illustrated in FIG. 3 that pertain to the given data, which can be implemented with any desired comparison method in accordance with the desired implementation. At 503, the confidence evaluation program 201 retrieves the associated methods from the evaluation method control table 202 based on the identified categories.

[0036] Based on the methods retrieved from the evaluation method control table 202, the confidence evaluation program 201 determines the confidence score through utilizing each of the retrieved methods as described from 504 to 508. That is, for each row i retrieved from evaluation method control table 202, the flow executes an analysis as described from the flow at 504 to 508. At 504, the confidence evaluation program 201 checks the active column 303 of the evaluation method control table 202 for each row i, and skips the rows determined to be non-active. If the row is indicated as active (Yes) then the flow proceeds to 505, otherwise (No) the flow proceeds to the next retrieved row i and reverts back to 504.

[0037] At 505, the confidence evaluation program 201 fetches a row with the sensor node ID 400, category 401, and tag 402 from the confidence judgment table 203 as illustrated in FIG. 4 having the same parameters as the row i extracted from evaluation method control table 202. At 506, the confidence evaluation program 201 selects a log to be utilized by the evaluation method based on logs as determined from the flow at 502, and then determines whether the log is within the normal range 403 as specified by the confidence judgment table 203. If the data is out of the normal range (Yes), then the flow proceeds to 507, otherwise (No), the flow proceeds to the next retrieved row i and reverts back to 504.

[0038] At 507, the confidence evaluation program 201 checks the exception field 404 of the confidence judgment table 203 to determine if the exceptions apply to the data. If the monitoring data is still determined to be anomalous without any exceptions (Yes), then the flow proceeds to 508, otherwise (No), it proceeds to the next retrieved row i.

[0039] At 508, the confidence evaluation program 201 assigns the tag specified in the tag field 301 of the evaluation method control table 202 to the monitoring data 105. The confidence evaluation program 201 reduces the confidence score of the monitoring data by the portion specified in the weight field 302 of the evaluation method control table 202. After all of the methods corresponding to retrieved rows i from the evaluation method control table 202 are evaluated, the confidence evaluation program 201 sends the monitoring data 105 with a confidence score and corresponding tags to the core server 106 as illustrated in FIG. 13. If no anomaly is detected, the confidence score and tags can be omitted to reduce the transmission volume, depending on the desired implementation.

[0040] In an example implementation, suppose the confidence evaluation program 201 receives data regarding Sensor 1, wherein the SNR and RSSI data fall outside of the expected normal range. At 503, the confidence evaluation program 201 thereby selects the corresponding rows Src-Physics #3 and Src-Physics #4 from evaluation method control table 202. At 504, the confidence evaluation program 201 determines that the selected categories are active based on the active flag 303 from evaluation method control table 202. At 505, the confidence evaluation program 201 determines the normal range 403 of the data from the confidence judgment table 203, and determines that both are out of normal range at 506. At 507, the confidence evaluation program 201 will determine that there is an exception made for the Src-RSSI tag based on row #3 of the confidence judgment table 203 and discards the anomaly tag. Confidence evaluation program 201 determines that the Src-SNR tag has no exception and proceeds to 508 to assign the Src-SNR tag to the data. At 509, the confidence evaluation program 201 decreases the confidence score based on row #4 from evaluation method control table 202, which decreases the score by 10% (e.g. from 100 to 90). After the data evaluation, the confidence evaluation program 201 transmits the data with the confidence score of 90 and the tag of Src-SNR to the core server 106, as shown at 510.
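
A condensed, non-authoritative Python sketch of the FIG. 5 flow is shown below, using the same assumed table shapes as the earlier sketch; the helper functions and field names are illustrative placeholders rather than elements of the specification.

```python
def in_normal_range(value, normal_range):
    # Placeholder check: a set of allowed values or a (low, high) interval.
    if isinstance(normal_range, set):
        return value in normal_range
    low, high = normal_range
    return low <= value <= high

def exception_applies(exception, value):
    # Placeholder: exceptions come from the core server, e.g. "treat every
    # value as normal until time T3" (see FIG. 4 and FIG. 13).
    return exception is not None

def evaluate_confidence(node_id, data, logs, methods, judgments, score=100):
    """Condensed sketch of the confidence evaluation program 201 (FIG. 5)."""
    tags = []
    for method in methods:                                # rows retrieved at 503
        if not method["active"]:                          # step 504
            continue
        row = next((j for j in judgments if j["node"] == node_id
                    and j["tag"] == method["tag"]), None)  # step 505
        if row is None:
            continue
        value = logs.get(method["tag"])                   # log gathered at 502
        if value is None or in_normal_range(value, row["normal_range"]):
            continue                                      # step 506: within normal range
        if exception_applies(row["exception"], value):    # step 507
            continue
        tags.append(method["tag"])                        # step 508: tag the data
        score *= (1.0 - method["weight"])                 # reduce by the weight
    return data, score, tags                              # sent to the core (FIG. 13)
```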

[0041] FIG. 6 illustrates an example of the historical data 204 for confidence evaluation, which is utilized to generate the normal value for each method in accordance with an example implementation. The historical data 204 can include timestamp 600, node 601, tag 602, and value 603. The timestamp column 600 stores the time that the IoT GW 102 receives the data 105 from the sensor node 101. The node column 601 stores the sensor node ID and the tag column 602 stores the tag ID indicating the evaluation method. The value column 603 stores information used for the evaluation method. For example, the first row (#1) indicates that IoT GW 102 received data from Sensor 1 from network device "wlan0" at time T1.

[0042] FIG. 7 illustrates an example of the data lake 107, in accordance with an example implementation. Data lake 107 stores the data received from the IoT GW 102. The information in the data lake can include timestamp 700, node 701, key 702, value 703, confidence 704, and tag 705.

[0043] The timestamp 700, node 701, key 702 and value 703 columns are the same as described with respect to the historical data 204 in FIG. 6. In example implementations, there is a confidence score 704 and tag 705. These columns are provided for the confidence evaluation program 201. In example implementations where data size needs to be reduced, these columns can be left blank if the confidence score is higher than a threshold. The confidence column 704 stores the confidence score of the data. The tag column 705 stores the tags of the confidence judgment table 203. The tags 705 indicate what kind of anomaly was detected by the IoT GW 102.
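
As a minimal sketch of the size-reduction option just described, a data lake row could omit the confidence and tag columns whenever the score is above a threshold. The field names and the threshold value are assumptions for illustration.

```python
def to_data_lake_record(timestamp, node, key, value, confidence, tags, threshold=90):
    """Build a data lake row (FIG. 7); confidence 704 and tag 705 are left
    blank when the confidence score is higher than the threshold."""
    record = {"timestamp": timestamp, "node": node, "key": key, "value": value}
    if confidence <= threshold:
        record["confidence"] = confidence
        record["tag"] = tags
    return record
```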

[0044] FIG. 8 illustrates an example architecture of the core server, in accordance with an example implementation. The core server 106 can include a memory 800 configured to manage confidence reevaluation program 801, data analysis program 802, notification program 803, operation history 804, and incident factor table 805. The core server 106 can further include a CPU 810, I/O 811, Network I/F 812, and internal storage 813. The confidence reevaluation program 801 is executed periodically, or can be executed when the data analysis program 802 demands the data from the data lake 107, depending on the desired implementation. The data analysis program 802 retrieves the data having a confidence score exceeding a threshold, and then analyzes the retrieved data. Through this example implementation, the precision of the analysis can thereby be improved.

[0045] The memory 800 can be configured as a second memory configured to manage the operation history of the IoT GW 102 and the plurality of sensors 101. CPU 810 is configured to execute the confidence reevaluation program 801 as illustrated in the flow of FIG. 11, wherein for the confidence score of the data associated with the each of the one or more applicable evaluation categories being below a threshold, CPU 810 is configured to conduct a comparison of an operation type associated with the anomaly tag with the operation history of the IoT GW 102 and the plurality of sensors 101 and, for the comparison indicative of the operation history of the IoT GW 102 and the plurality of sensors 101 being applicable to the data, update the confidence score based on the operation history as illustrated in the flow from 1102 to 1107 of FIG. 11. Such thresholds can be set at any level in accordance with the desired implementation, and can be generated by any means or set by the administrator.

[0046] CPU 810 can be configured to, for the comparison indicative of the operation history of the IoT GW 102 and the plurality of sensors 101 being applicable to the data, send a notification to the IoT GW 102, the notification comprising a command associated with the data, one or more affected sensors from the plurality of sensors 101, and an exception for the anomaly tag through the execution of a notification program 803 as illustrated in FIG. 12 and FIG. 13. To facilitate the generation of the notification, CPU 810 can be configured to conduct the comparison of the operation type associated with the anomaly tag with the operation history of the IoT GW 102 and the plurality of sensors 101 through retrieval of logs from the operation history associated with the operation type, and buffer the retrieved logs for use in generating the notification to be sent to the IoT GW 102 as illustrated at the flow of FIG. 11.

[0047] CPU 810 can also be configured to conduct the comparison of the operation type associated with the anomaly tag with the operation history of the IoT GW 102 through a determination of ones of the plurality of sensors that are affected by an anomaly indicated in the anomaly tag.

[0048] FIG. 9 illustrates an example of the operation history 804, in accordance with an example implementation. The operation history 804 is configured to store an operation log which includes not only the IoT system change logs, but also the facility change log and IT system log. Operation history 804 can include entries for timestamp 901, site 902, node 903, operation type 904, and influence 905. The influence column 905 stores the node IDs that are influenced by the operation. The operation type 904 indicates the type of operation conducted at timestamp 901 for a corresponding site 902 and node 903. The operation type 904 can be utilized to determine whether the operation had an impact on corresponding data tagged as anomalous, based on a comparison of the operation type to the operation type and corresponding anomaly tag in the incident factor table 805 as described in FIG. 10.

[0049] FIG. 10 illustrates an example of the incident factor table 805, in accordance with an example implementation. The incident factor table 805 indicates the relationship between operation type 1001 and anomaly tag 1002. For example, the second through fourth rows indicate that the operation which changes the location of a sensor node in the edge site affects the evaluation methods tagged as "Src-RSSI", "Src-SNR" and "Src-Path". The table is used by the confidence reevaluation program 801 for determining if an operation recorded in the operation history 804 caused data to be tagged as anomalous that should not be considered anomalous and that should have a corresponding exception.
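
The relationship between the operation history 804 (FIG. 9) and the incident factor table 805 (FIG. 10) can be pictured with the following small Python sketch; the concrete entries are assumptions based on the examples in the text.

```python
# Illustrative entries only; real tables would be populated from the
# IoT, facility, and IT system change logs described for FIG. 9.
incident_factor_table = [
    {"operation_type": "change location", "anomaly_tag": "Src-RSSI"},
    {"operation_type": "change location", "anomaly_tag": "Src-SNR"},
    {"operation_type": "change location", "anomaly_tag": "Src-Path"},
]

operation_history = [
    {"timestamp": "T3", "site": "Edge site 1", "node": "Sensor 1",
     "operation_type": "change location", "influence": ["Sensor 1"]},
]

def operations_explaining(anomaly_tag):
    """Operation types whose side effects can explain the given anomaly tag."""
    return {row["operation_type"] for row in incident_factor_table
            if row["anomaly_tag"] == anomaly_tag}
```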

[0050] FIG. 11 illustrates a flow chart of the confidence reevaluation program 801, in accordance with an example implementation. The confidence reevaluation program 801 is executed periodically, or can be executed when data analysis program 802 demands the data from data lake 107. The confidence reevaluation program 801 can be executed for batch time processing to reevaluate the data the IoT GW 102 has filtered due to having a confidence score that falls below a threshold, so as to determine if the data truly should be filtered by the IoT GW 102. The confidence reevaluation program 801 fetches the data received from the IoT GW 102 from data lake 107 and reevaluates the confidence score with the incident factor table 805 and the operation history 804, whereupon the confidence reevaluation program 801 updates the confidence score stored in the data lake 107 if there is an exception to the anomaly tag applied to the data. If such an exception exists, then the data lake 107 can also be configured to remove the anomaly tags from the data.

[0051] At first, the confidence reevaluation program 801 fetches rows from data lake 107 that have not been processed yet at 1101. Then, the confidence reevaluation program 801 reevaluates the confidence score of each row from the flow from 1102 to 1108.

[0052] At 1102, the confidence reevaluation program 801 compares the confidence score with a threshold. If the comparison indicates that the confidence score is less than the threshold (Yes), then the flow continues the process for the row at 1103, otherwise (No), the flow proceeds back to 1102 to process the next row i. At 1103, the confidence reevaluation program 801 retrieves the operation types 1001 related to the anomaly tag 1002, or searches for entries with the anomaly tag 1002 of the row from the incident factor table 805 as illustrated in FIG. 10. At 1104, the confidence reevaluation program 801 searches logs with the operation type 904, node ID 903 and timestamp 901 of the row from the operation history 804 as illustrated in FIG. 9 to determine if any operations were conducted that might have caused the anomalous data. At 1105, if the search does find related logs (Yes), then the confidence reevaluation program 801 executes the update confidence process at 1106, otherwise (No), the flow proceeds to execute the flow at 1102 for the next row i. The update confidence process indicates that the confidence score is to be returned to the value before the reduction of the tagged method. Further details are provided with respect to FIG. 14. At 1107, the confidence reevaluation program 801 stores the logs retrieved from the flow at 1104 for sending the modified notification to the IoT GW 102.

[0053] After all rows i are processed, the confidence reevaluation program 801 proceeds to 1108 and executes the notification program 803 with the logs stored at the flow of 1107.

[0054] In the example as described for FIG. 5, suppose the confidence reevaluation program 801 receives the data transmitted regarding the Src-SNR anomaly tag. At 1102, suppose that the confidence score falls below the threshold for the SNR data. At 1103, the confidence reevaluation program 801 retrieves operations associated with the tag based on incident factor table 805 as illustrated in FIG. 10. In this example, the anomaly tag Src-SNR is associated with the change location operation as illustrated in row #3 of FIG. 10. At 1104, the confidence reevaluation program 801 then searches the operation history 804 as illustrated in FIG. 9, and determines that a change location operation has occurred at time T3 for Sensor 1, as illustrated in row #3 of FIG. 9. Because the sensor and operation type match the parameters of the data received, the confidence reevaluation program 801 determines that the change location operation at T3 is applicable to the received data at 1105, and proceeds to 1106 to remove the impact of the Src-SNR tag from the confidence score (e.g. reverting the score from 90 back to 100), and then proceeds to 1107 to store the retrieved operation history (row #3 of FIG. 9) in a send buffer. The confidence reevaluation program 801 then executes the notification program 803 at 1108 with the retrieved operation history from 1107.
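
The reevaluation walk-through above can be summarized in the following Python sketch of the FIG. 11 flow. The threshold value, field names, and the simplification of omitting timestamp matching are assumptions for illustration.

```python
def reevaluate(rows, methods, incident_factors, history, threshold=95):
    """Condensed sketch of the confidence reevaluation program 801 (FIG. 11).
    `rows` are unprocessed data lake rows (FIG. 7)."""
    send_buffer = []
    for row in rows:
        if row.get("confidence", 100) >= threshold:        # step 1102
            continue
        for tag in row.get("tag", []):
            op_types = {f["operation_type"] for f in incident_factors
                        if f["anomaly_tag"] == tag}         # step 1103
            related = [h for h in history                   # step 1104 (timestamp
                       if h["operation_type"] in op_types   # matching omitted here)
                       and row["node"] in h["influence"]]
            if not related:                                  # step 1105
                continue
            weight = next((m["weight"] for m in methods if m["tag"] == tag), 0.0)
            row["confidence"] /= (1.0 - weight)              # step 1106: undo reduction
            send_buffer.extend(related)                      # step 1107
    return send_buffer  # handed to the notification program at 1108
```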

[0055] FIG. 12 illustrates an example flow diagram of the notification program 803, in accordance with an example implementation. The notification program 803 can be executed periodically or invoked by the confidence reevaluation program 801, depending on the desired implementation. The notification program 803 takes the logs stored at 1107 of FIG. 11 to prepare a notification to inform the IoT GW 102 regarding non-anomalous data, whereupon the IoT GW 102 can cease discarding such data and tagging such data as anomalous after receiving the notification. At 1201, the notification program 803 fetches the operation history logs which are received from the confidence reevaluation program 801 or which are retrieved from the operation history 804. At 1202, the notification program 803 retrieves anomaly tags 1002 from incident factor table 805 with the operation type 1001 of the row utilized to determine the selection of rows. At 1203, the notification program 803 adds a message that includes the node ID, anomaly tag, and exception to the send buffer. In example implementations, the node IDs are retrieved from the influence column 905 associated with the operation type 1001 corresponding to the data from the operation history table 804.

[0056] In example implementations, exceptions are included into the message to be transmitted based on the operation type 1001 and exceptions associated with the operation type. For example, in reference to the incident factor table 805 of FIG. 10, each of the operation types may be associated with an exception for a particular sensor or device. For example, if the operation type is change location, then an exception may be provided that the anomaly tag is ignored until a particular time in which the location change is completed (e.g. T3). Exceptions can be generated and implemented in any form according to the desired implementation.

[0057] After all rows i are processed, the notification program 803 sends the messages in the send buffer to the IoT GW 102 at 1204. Upon receipt, the IoT GW 102 is configured to discard the categories based on the received exceptions and the associated operation types. In an example implementation, confidence judgment table 203 is updated based on the received exceptions for the designated categories, as illustrated in FIG. 14.

[0058] In the example as provided in FIG. 5 and FIG. 11, the operation history (row #3 of FIG. 9) is retrieved at 1201, whereupon the anomaly tags Src-SNR, Src-RSSI, and Src-Path are retrieved from the incident factor table 805 of FIG. 10 at the flow of 1202, due to their association with the change location operation. At 1203, a message which includes the affected node ID (Sensor 1), the anomaly tags (Src-SNR, Src-RSSI, and Src-Path), and the associated exceptions is constructed and placed in the send buffer. At 1204, a message containing the information is transmitted to the IoT GW 102.

[0059] FIG. 13 illustrates an example of procedure and message format, in accordance with an example implementation. The message from the IoT GW 102 to the core server 106 includes sensing data 1300, confidence score 1301 and anomaly tags 1302. To reduce the transmission volume, the confidence score 1301 and anomaly tags 1302 can be omitted when the confidence score is higher than a threshold. The message from the core server 106 to the IoT GW 102 includes command 1303, node ID 1304, anomaly tags 1305, and exceptions 1306. Commands associated with the data 1303 can include commands to set or reset the data, to re-evaluate the data, to discard based on anomaly tags, and so on, depending on the desired implementation. The message from the core server 106 is mainly used to inform which anomaly tags are authorized by the core server 106 and do not need to be reported any more, or to provide exceptions for particular data points.
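
The two message formats of FIG. 13 might be serialized as shown in the sketch below; the field names, command string, and values are assumptions chosen to match the running example.

```python
# Edge -> core (FIG. 13): confidence 1301 and anomaly tags 1302 can be
# omitted when the confidence score is higher than the threshold.
edge_to_core = {
    "sensing_data": {"node": "Sensor 1", "key": "temperature", "value": 21.5},
    "confidence": 90,
    "anomaly_tags": ["Src-SNR"],
}

# Core -> edge (FIG. 13): authorizes anomaly tags and delivers exceptions.
core_to_edge = {
    "command": "set-exception",          # 1303: e.g. set/reset, re-evaluate, discard
    "node_id": "Sensor 1",               # 1304
    "anomaly_tags": ["Src-SNR", "Src-RSSI", "Src-Path"],  # 1305
    "exceptions": ["treat as normal until T3"],           # 1306
}
```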

[0060] FIG. 14 illustrates an example flow for IoT GW 102 for processing messages from the core server, in accordance with an example implementation. The flow begins at 1401, when a message from the core server 106 is received by IoT GW 102 based on the flow at 1204 of FIG. 12. At 1402, the IoT GW 102 updates the confidence judgment table 203 with the exceptions indicated in the received message. Such example implementations can include updating the exception column 404 in the confidence judgment table 203 based on the exceptions received. At 1403, the IoT GW 102 discards the categories that are indicated by the message from the corresponding confidence score evaluation associated with the message as executed from FIG. 5, and then at 1404, the confidence score for the original transmission is updated. For example, suppose that the original transmission included a detection of anomalies based on a new device, but is now considered to be an exception. Originally, such an anomaly would reduce the confidence score by 50% (e.g. from 100 to 50) in accordance with the evaluation method control table as illustrated in FIG. 3. As the weighting is stacked, the subsequent anomalous weights are applied on the reduced score (e.g. for a detected anomaly with a weight of 20%, the score is reduced from 50 to 40). Should the new device be considered as an exception, then the category is discarded, which would restore the 50% loss from the original confidence score (e.g. revert back to 100), whereupon subsequent anomalous weights that did not have an exception are stacked based on the discarded categories (e.g. for a detected anomaly with a weight of 20%, the score is thereby reduced from 100 to 80).

[0061] In the example provided in FIGS. 5, 11 and 12, at 1401, the IoT GW 102 receives the message regarding the node ID affected (Sensor 1), and the anomaly tags (Src-SNR, Src-RSSI, and Src-Path). At 1402, the IoT GW 102 thereby updates the confidence judgment table 203 of FIG. 4 based on the exceptions received from the core server 106.
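
A minimal Python sketch of the FIG. 14 flow, under the same assumed message and table shapes as the earlier sketches, could record the exceptions, discard the excepted categories, and re-stack the remaining weights as described in paragraph [0060].

```python
def handle_core_message(message, judgment_table, original_tags, methods,
                        initial_score=100):
    """Sketch of the FIG. 14 flow on the IoT GW 102 (assumed shapes)."""
    # Step 1402: record the received exceptions in the confidence judgment table 203.
    for row in judgment_table:
        if row["node"] == message["node_id"] and row["tag"] in message["anomaly_tags"]:
            row["exception"] = message["exceptions"]
    # Step 1403: discard the excepted categories from the original evaluation.
    remaining = [t for t in original_tags if t not in message["anomaly_tags"]]
    # Step 1404: re-stack the remaining weights on the original score,
    # e.g. dropping a 50% reduction while keeping a 20% one gives 100 -> 80.
    score = initial_score
    for tag in remaining:
        weight = next((m["weight"] for m in methods if m["tag"] == tag), 0.0)
        score *= (1.0 - weight)
    return score, remaining
```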

[0062] Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations within a computer. These algorithmic descriptions and symbolic representations are the means used by those skilled in the data processing arts to convey the essence of their innovations to others skilled in the art. An algorithm is a series of defined steps leading to a desired end state or result. In example implementations, the steps carried out require physical manipulations of tangible quantities for achieving a tangible result.

[0063] Unless specifically stated otherwise, as apparent from the discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing," "computing," "calculating," "determining," "displaying," or the like, can include the actions and processes of a computer system or other information processing device that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other information storage, transmission or display devices.

[0064] Example implementations may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may include one or more general-purpose computers selectively activated or reconfigured by one or more computer programs. Such computer programs may be stored in a computer readable medium, such as a computer-readable storage medium or a computer-readable signal medium. A computer-readable storage medium may involve tangible mediums such as, but not limited to optical disks, magnetic disks, read-only memories, random access memories, solid state devices and drives, or any other types of tangible or non-transitory media suitable for storing electronic information. A computer readable signal medium may include mediums such as carrier waves. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Computer programs can involve pure software implementations that involve instructions that perform the operations of the desired implementation.

[0065] Various general-purpose systems may be used with programs and modules in accordance with the examples herein, or it may prove convenient to construct a more specialized apparatus to perform desired method steps. In addition, the example implementations are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the example implementations as described herein. The instructions of the programming language(s) may be executed by one or more processing devices, e.g., central processing units (CPUs), processors, or controllers.

[0066] As is known in the art, the operations described above can be performed by hardware, software, or some combination of software and hardware. Various aspects of the example implementations may be implemented using circuits and logic devices (hardware), while other aspects may be implemented using instructions stored on a machine-readable medium (software), which if executed by a processor, would cause the processor to perform a method to carry out implementations of the present application. Further, some example implementations of the present application may be performed solely in hardware, whereas other example implementations may be performed solely in software. Moreover, the various functions described can be performed in a single unit, or can be spread across a number of components in any number of ways. When performed by software, the methods may be executed by a processor, such as a general purpose computer, based on instructions stored on a computer-readable medium. If desired, the instructions can be stored on the medium in a compressed and/or encrypted format.

[0067] Moreover, other implementations of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the teachings of the present application. Various aspects and/or components of the described example implementations may be used singly or in any combination. It is intended that the specification and example implementations be considered as examples only, with the true scope and spirit of the present application being indicated by the following claims.

Claims

CLAIMS

What is claimed is:
1. A system, comprising:
a first apparatus configured to manage a plurality of sensors, the first apparatus comprising:
a memory, configured to store a plurality of evaluation categories for data received from the plurality of sensors, each of the plurality of evaluation categories associated with an anomaly tag, impact information associated with each of the plurality of evaluation categories, and a data range; and
a processor, configured to:
for data received from a sensor of the plurality of sensors:
determine one or more applicable evaluation categories from the plurality of evaluation categories for the data received from the sensor of the plurality of sensors, based on a comparison of the data received from the sensor of the plurality of sensors to the data range associated with the one or more applicable evaluation categories from the plurality of evaluation categories;
calculate a confidence score for the data received from the sensor based on each of the one or more applicable evaluation categories from the plurality of evaluation categories and the impact information associated with the each of the one or more applicable evaluation categories from the plurality of evaluation categories; and
transmit the confidence score and the anomaly tag associated with each of the one or more applicable evaluation categories to a second apparatus.
2. The system of claim 1, further comprising the second apparatus, the second apparatus comprising:
a second memory configured to manage operation history of the first apparatus and the plurality of sensors; and
a second processor configured to, for the confidence score of the data associated with the each of the one or more applicable evaluation categories being below a threshold:
conduct a comparison of an operation type associated with the anomaly tag with the operation history of the first apparatus and the plurality of sensors;
for the comparison indicative of the operation history of the first apparatus and the plurality of sensors being applicable to the data, update the confidence score based on the operation history.
3. The system of claim 2, wherein the second processor of the second apparatus is configured to, for the comparison indicative of the operation history of the first apparatus and the plurality of sensors being applicable to the data, send a notification to the first apparatus, the notification comprising a command associated with the data, one or more affected sensors from the plurality of sensors, and an exception for the anomaly tag.
4. The system of claim 3, wherein the second processor of the second apparatus is configured to conduct the comparison of the operation type associated with the anomaly tag with the operation history of the first apparatus and the plurality of sensors through retrieval of logs from the operation history associated with the operation type, and buffer the retrieved logs for use in generating the notification to be sent to the first apparatus.
5. The system of claim 2, wherein the processor of the first apparatus is configured to update ones of the plurality of evaluation categories in the memory associated with one or more exceptions received from the second apparatus.
6. The system of claim 5, wherein the processor of the first apparatus is configured to calculate the confidence score based on the one or more exceptions associated with the one or more applicable evaluation categories.
7. The system of claim 2, wherein the second processor of the second apparatus is configured to conduct the comparison of the operation type associated with the anomaly tag with the operation history of the first apparatus through a determination of ones of the plurality of sensors that are affected by an anomaly indicated in the anomaly tag.
8. A method for managing a system involving a first apparatus configured to manage a plurality of sensors, the method comprising:
managing a plurality of evaluation categories for data received from the plurality of sensors, each of the plurality of evaluation categories associated with an anomaly tag, impact information associated with each of the plurality of evaluation categories, and a data range; and
for data received from a sensor of the plurality of sensors:
determining one or more applicable evaluation categories from the plurality of evaluation categories for the data received from the sensor of the plurality of sensors, based on a comparison of the data received from the sensor of the plurality of sensors to the data range associated with the one or more applicable evaluation categories from the plurality of evaluation categories;
calculating a confidence score for the data received from the sensor based on each of the one or more applicable evaluation categories from the plurality of evaluation categories and the impact information associated with the each of the one or more applicable evaluation categories from the plurality of evaluation categories; and
transmitting the confidence score and the anomaly tag associated with each of the one or more applicable evaluation categories to a second apparatus.
9. The method of claim 8, further comprising:
managing operation history of the first apparatus and the plurality of sensors; and
for the confidence score of the data associated with the each of the one or more applicable evaluation categories being below a threshold:
conducting a comparison of an operation type associated with the anomaly tag with the operation history of the first apparatus and the plurality of sensors;
for the comparison indicative of the operation history of the first apparatus and the plurality of sensors being applicable to the data, updating the confidence score based on the operation history.
10. The method of claim 9, further comprising, for the comparison indicative of the operation history of the first apparatus and the plurality of sensors being applicable to the data, sending a notification to the first apparatus, the notification comprising a command associated with the data, one or more affected sensors from the plurality of sensors, and an exception for the anomaly tag.
11. The method of claim 10, further comprising conducting the comparison of the operation type associated with the anomaly tag with the operation history of the first apparatus and the plurality of sensors through retrieval of logs from the operation history associated with the operation type, and buffering the retrieved logs for use in generating the notification to be sent to the first apparatus.
12. The method of claim 10, further comprising updating ones of the plurality of evaluation categories in the memory associated with one or more exceptions received from the second apparatus.
13. The method of claim 12, wherein the calculating the confidence score is based on the one or more exceptions associated with the one or more applicable evaluation categories.
14. The method of claim 9, wherein the conducting the comparison of the operation type associated with the anomaly tag with the operation history of the first apparatus is through a determination of ones of the plurality of sensors that are affected by an anomaly indicated in the anomaly tag.
15. A non-transitory computer readable medium storing instructions for managing a system involving a first apparatus configured to manage a plurality of sensors, the instructions comprising:
managing a plurality of evaluation categories for data received from the plurality of sensors, each of the plurality of evaluation categories associated with an anomaly tag, impact information associated with each of the plurality of evaluation categories, and a data range; and
for data received from a sensor of the plurality of sensors:
determining one or more applicable evaluation categories from the plurality of evaluation categories for the data received from the sensor of the plurality of sensors, based on a comparison of the data received from the sensor of the plurality of sensors to the data range associated with the one or more applicable evaluation categories from the plurality of evaluation categories;
calculating a confidence score for the data received from the sensor based on each of the one or more applicable evaluation categories from the plurality of evaluation categories and the impact information associated with the each of the one or more applicable evaluation categories from the plurality of evaluation categories; and
transmitting the confidence score and the anomaly tag associated with each of the one or more applicable evaluation categories to a second apparatus.
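Finally, to show how the edge-side scoring of claims 8-15 and the core-side cross-check of claims 2-4 and 9-11 could interact, here is a compact, hypothetical round trip: the edge scores a reading, the core consults its operation history when the score falls below a threshold, and the returned exception prevents the same anomaly from being penalized on the next reading. For brevity the cross-check matches on the anomaly tag directly rather than on a separate operation type; all category, history, and sensor values are invented.

```python
# Hypothetical round trip between the first (edge) and second (core) apparatus.
EDGE_CATEGORIES = [
    {"anomaly_tag": "temp_spike", "impact": 0.6, "data_range": (80.0, 200.0)},
]
CORE_HISTORY = [
    {"operation_type": "furnace_test", "anomaly_tag": "temp_spike",
     "affected_sensors": ["tmp-01"]},
]

def edge_score(sensor_id, value, exceptions=frozenset()):
    """Edge side: match categories, penalize anomalies that are not excepted."""
    hits = [c for c in EDGE_CATEGORIES
            if c["data_range"][0] <= value <= c["data_range"][1]]
    penalty = sum(c["impact"] for c in hits if c["anomaly_tag"] not in exceptions)
    return max(0.0, 1.0 - penalty), [c["anomaly_tag"] for c in hits]

def core_cross_check(sensor_id, confidence, tags, threshold=0.5):
    """Core side: consult the operation history only for low-confidence data."""
    if confidence >= threshold:
        return confidence, []
    exceptions = [e["anomaly_tag"] for e in CORE_HISTORY
                  if e["anomaly_tag"] in tags and sensor_id in e["affected_sensors"]]
    return (min(1.0, confidence + 0.4) if exceptions else confidence), exceptions

confidence, tags = edge_score("tmp-01", 95.0)                            # 0.4, ["temp_spike"]
confidence, exceptions = core_cross_check("tmp-01", confidence, tags)    # 0.8, ["temp_spike"]
confidence_next, _ = edge_score("tmp-01", 95.0, frozenset(exceptions))   # 1.0
```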
PCT/US2017/024170 2017-03-24 2017-03-24 Method to enhance reliability of monitoring data for edge-core distributed analytics systems WO2018174908A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2017/024170 WO2018174908A1 (en) 2017-03-24 2017-03-24 Method to enhance reliability of monitoring data for edge-core distributed analytics systems

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2017/024170 WO2018174908A1 (en) 2017-03-24 2017-03-24 Method to enhance reliability of monitoring data for edge-core distributed analytics systems

Publications (1)

Publication Number Publication Date
WO2018174908A1 true WO2018174908A1 (en) 2018-09-27

Family

ID=63584665

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/024170 WO2018174908A1 (en) 2017-03-24 2017-03-24 Method to enhance reliability of monitoring data for edge-core distributed analytics systems

Country Status (1)

Country Link
WO (1) WO2018174908A1 (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5944839A (en) * 1997-03-19 1999-08-31 Symantec Corporation System and method for automatically maintaining a computer system
US9311676B2 (en) * 2003-09-04 2016-04-12 Hartford Fire Insurance Company Systems and methods for analyzing sensor data
US20160253100A1 (en) * 2005-12-19 2016-09-01 Commvault Systems, Inc. Systems and methods for performing data replication
US20090030753A1 (en) * 2007-07-27 2009-01-29 General Electric Company Anomaly Aggregation method
US8996446B2 (en) * 2007-09-26 2015-03-31 Nike, Inc. Sensory testing data analysis by categories
US8285514B2 (en) * 2008-03-21 2012-10-09 Rochester Institute Of Technology Sensor fault detection systems and methods thereof
US8755469B1 (en) * 2008-04-15 2014-06-17 The United States Of America, As Represented By The Secretary Of The Army Method of spectrum mapping and exploitation using distributed sensors
US20160231830A1 (en) * 2010-08-20 2016-08-11 Knowles Electronics, Llc Personalized Operation of a Mobile Device Using Sensor Signatures
US9092616B2 (en) * 2012-05-01 2015-07-28 Taasera, Inc. Systems and methods for threat identification and remediation
US9426139B1 (en) * 2015-03-30 2016-08-23 Amazon Technologies, Inc. Triggering a request for an authentication

Similar Documents

Publication Publication Date Title
US9070121B2 (en) Approach for prioritizing network alerts
US9495180B2 (en) Optimized resource allocation for virtual machines within a malware content detection system
US20070289013A1 (en) Method and system for anomaly detection using a collective set of unsupervised machine-learning algorithms
Aljawarneh et al. Anomaly-based intrusion detection system through feature selection analysis and building hybrid efficient model
US10033748B1 (en) System and method employing structured intelligence to verify and contain threats at endpoints
US20110185422A1 (en) Method and system for adaptive anomaly-based intrusion detection
AU2017254815B2 (en) Anomaly detection to identify coordinated group attacks in computer networks
US9672085B2 (en) Adaptive fault diagnosis
WO2003005200A1 (en) Method and system for correlating and determining root causes of system and enterprise events
US20160359872A1 (en) System for monitoring and managing datacenters
US9462009B1 (en) Detecting risky domains
CN104618343B (en) A method for threat detection based on real-time log of websites and systems
US20120026890A1 (en) Reporting Statistics on the Health of a Sensor Node in a Sensor Network
Farid et al. Anomaly Network Intrusion Detection Based on Improved Self Adaptive Bayesian Algorithm.
US7783744B2 (en) Facilitating root cause analysis for abnormal behavior of systems in a networked environment
US9832214B2 (en) Method and apparatus for classifying and combining computer attack information
US20140230053A1 (en) Automatic Detection of Fraudulent Ratings/Comments Related to an Application Store
US7778715B2 (en) Methods and systems for a prediction model
US8990778B1 (en) Shadow test replay service
US20150095892A1 (en) Systems and methods for evaluating a change pertaining to a service or machine
EP2725512B1 (en) System and method for malware detection using multi-dimensional feature clustering
CA2933426C (en) Event anomaly analysis and prediction
US9386041B2 (en) Method and system for automated incident response
Yoon et al. Communication pattern monitoring: Improving the utility of anomaly detection for industrial control systems
US8751874B2 (en) Managing apparatus, managing method

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 17901733

Country of ref document: EP

Kind code of ref document: A1