GB2616506A - Malware detection by distributed telemetry data analysis - Google Patents
Info
- Publication number
- GB2616506A (application GB2300649.7A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- neural network
- processors
- trained neural
- federated
- network system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
- G06F21/56—Computer malware detection or handling, e.g. anti-virus arrangements
- G06F21/566—Dynamic detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/52—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow
- G06F21/53—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow by executing in a restricted environment, e.g. sandbox or secure virtual machine
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/57—Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
- G06F21/577—Assessing vulnerabilities and evaluating computer system security
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/098—Distributed learning, e.g. federated learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1441—Countermeasures against malicious traffic
- H04L63/145—Countermeasures against malicious traffic the attack involving the propagation of malware through the network, e.g. viruses, trojans or worms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2221/00—Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/03—Indexing scheme relating to G06F21/50, monitoring users, programs or devices to maintain the integrity of platforms
- G06F2221/033—Test or assess software
Abstract
A method, computer program product, and system for detecting a malicious process by a selected instance of an anti-malware system are provided. The method includes one or more processors examining a process for indicators of compromise to the process. The method further includes one or more processors determining a categorization of the process based upon a result of the examination. In response to determining that the categorization of the process does not correspond to a known benevolent process and a known malicious process, the method further includes one or more processors executing the process in a secure enclave. The method further includes one or more processors collecting telemetry data from executing the process in the secure enclave. The method further includes one or more processors passing the collected telemetry data to a locally trained neural network system.
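The abstract describes a staged pipeline: triage a process against indicators of compromise, execute anything unknown inside a secure enclave, collect telemetry there, and score the telemetry with a locally trained neural network by comparing its current loss against the loss reached at the end of training. The following is a minimal Python sketch of that flow under stated assumptions; every identifier (`Process`, `run_in_enclave`, the lambda standing in for the trained model) is an illustrative stand-in, not an API defined by the patent.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable, List

class Category(Enum):
    BENEVOLENT = auto()  # known benevolent process
    MALICIOUS = auto()   # known malicious process
    UNKNOWN = auto()     # neither: run it in the secure enclave

@dataclass
class Process:
    sha256: str
    behaviour: List[float] = field(default_factory=list)  # stand-in telemetry

ALLOW_LIST = {"a1b2c3"}        # hashes of known-good binaries (illustrative)
IOC_SIGNATURES = {"d3adb33f"}  # known indicators of compromise (illustrative)

def categorize(proc: Process) -> Category:
    """Examine the process for indicators of compromise and categorize it."""
    if proc.sha256 in ALLOW_LIST:
        return Category.BENEVOLENT
    if proc.sha256 in IOC_SIGNATURES:
        return Category.MALICIOUS
    return Category.UNKNOWN

def run_in_enclave(proc: Process) -> List[float]:
    """Execute the process in a secure enclave and collect telemetry.
    This stub simply returns the canned behaviour vector."""
    return proc.behaviour

def is_anomalous(telemetry: List[float],
                 model: Callable[[List[float]], float],
                 training_loss: float, factor: float = 1.5) -> bool:
    """Compare the local model's loss on fresh telemetry with the loss it
    reached at the end of training; a large gap flags an anomaly."""
    return model(telemetry) > factor * training_loss

# Usage: an unknown process is sandboxed, observed, and scored.
proc = Process(sha256="unknown01", behaviour=[0.2, 0.9, 0.4])
if categorize(proc) is Category.UNKNOWN:
    telemetry = run_in_enclave(proc)
    flagged = is_anomalous(telemetry, model=lambda t: sum(t) / len(t),
                           training_loss=0.3)
    print("anomalous" if flagged else "regular")
```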
Claims (20)
1. A method comprising: examining, by one or more processors, a process for indicators of compromise to the process; determining, by one or more processors, a categorization of the process based upon a result of the examination; in response to determining that the categorization of the process does not correspond to a known benevolent process and a known malicious process, executing, by one or more processors, the process in a secure enclave; collecting, by one or more processors, telemetry data from executing the process in the secure enclave; passing, by one or more processors, the collected telemetry data to a locally trained neural network system, wherein training data of the locally trained neural network system comprises telemetry data from processes being executed on a host system underlying the locally trained neural network system; determining, by one or more processors, a result of a first loss function for the locally trained neural network system; and comparing, by one or more processors, the result with a result of said loss function at an end of a training of said locally trained neural network system.
2. The method of claim 1, further comprising: passing, by one or more processors, the collected telemetry data to a federated trained neural network system, wherein the federated trained neural network system is adapted to receive a federated trained neural network model; determining, by one or more processors, a result of a second loss function for the federated trained neural network system; comparing, by one or more processors, the result with a result of the loss function of the received federated trained neural network model of the federated trained neural network system; aggregating, by one or more processors, results of the first loss function and the second loss function; and determining, by one or more processors, whether the process is anomalous based on the aggregated results.
3. The method of claim 1, further comprising: collecting, by one or more processors, telemetry data of non-malicious processes being executed on a selected instance of an anti-malware system; and retraining, by one or more processors, the locally trained neural network system with the collected telemetry data to build an updated local neural network model on a regular basis.
4. The method of claim 2, further comprising: receiving, by one or more processors, an updated federated neural network model for the federated neural network system, wherein the federated neural network model is built using locally trained neural network models of a plurality of selected instances as input.
5. The method of claim 2, wherein at least one of the locally trained neural network systems and the federated trained neural network system is an auto-encoder system.
6. The method of claim 1, further comprising: in response to determining that the categorization of the process corresponds to an anomalous process, discarding, by one or more processors, the process.
7. The method of claim 1, further comprising: in response to determining that the process is a regular process, based on execution in the secure enclave, moving, by one or more processors, the process out of the secure enclave; and executing, by one or more processors, the process as a regular process.
8. The method of claim 2, wherein the federated trained neural network system is trained with telemetry data from a plurality of hosts.
9. The method of claim 8, wherein training of the federated neural network system further comprises: processing, by one or more processors, by each of a plurality of received locally trained neural network models, a set of representative telemetry data and storing respective results; and training, by one or more processors, the federated neural network using input/output pairs of telemetry data used and generated during processing of the telemetry data as input data for a training of the federated neural network model.
10. The method of claim 9: wherein the input/output pairs of telemetry data used for the training of the federated neural network model are weighted depending on a geographical vicinity between geographical host locations of the local received neural network models and a geographical host location for which a new federated neural network model is trained, and wherein the input/output pairs of telemetry data used for the training of said federated neural network model are weighted depending on a predefined metric.
11. The method of claim 2, wherein aggregating results of the first loss function and the second loss function further comprises: building, by one or more processors, a weighted average of the first loss function and the second loss function.
12. The method of claim 1, wherein the determined categorization is selected from the group consisting of: a known benevolent process, a known malicious process, and an unknown process.
13. A computer system comprising: one or more computer processors; one or more computer readable storage media; and program instructions stored on the computer readable storage media for execution by at least one of the one or more processors, the program instructions comprising: program instructions to examine a process for indicators of compromise to the process; program instructions to determine a categorization of the process based upon a result of the examination; in response to determining that the categorization of the process does not correspond to a known benevolent process and a known malicious process, program instructions to execute the process in a secure enclave; program instructions to collect telemetry data from executing the process in the secure enclave; program instructions to pass the collected telemetry data to a locally trained neural network system, wherein training data of the locally trained neural network system comprises telemetry data from processes being executed on a host system underlying the locally trained neural network system; program instructions to determine a result of a first loss function for the locally trained neural network system; and program instructions to compare the result with a result of said loss function at an end of a training of said locally trained neural network system.
14. The computer system of claim 13, further comprising program instructions, stored on the computer readable storage media for execution by at least one of the one or more processors, to: pass the collected telemetry data to a federated trained neural network system, wherein the federated trained neural network system is adapted to receive a federated trained neural network model; determine a result of a second loss function for the federated trained neural network system; compare the result with a result of the loss function of the received federated trained neural network model of the federated trained neural network system; aggregate results of the first loss function and the second loss function; and determine whether the process is anomalous based on the aggregated results.
15. The computer system of claim 13, further comprising program instructions, stored on the computer readable storage media for execution by at least one of the one or more processors, to: collect telemetry data of non-malicious processes being executed on a selected instance of an anti-malware system; and retrain the locally trained neural network system with the collected telemetry data to build an updated local neural network model on a regular basis.
16. The computer system of claim 13, further comprising program instructions, stored on the computer readable storage media for execution by at least one of the one or more processors, to: in response to determining that the categorization of the process corresponds to an anomalous process, discard the process.
17. The computer system of claim 13, further comprising program instructions, stored on the computer readable storage media for execution by at least one of the one or more processors, to: in response to determining that the process is a regular process, based on execution in the secure enclave, move the process out of the secure enclave; and execute the process as a regular process.
18. The computer system of claim 14, wherein the federated trained neural network system is trained with telemetry data from a plurality of hosts.
19. A computer program product comprising: one or more computer readable storage media and program instructions stored on the one or more computer readable storage media, the program instructions comprising: program instructions to examine a process for indicators of compromise to the process; program instructions to determine a categorization of the process based upon a result of the examination; in response to determining that the categorization of the process does not correspond to a known benevolent process and a known malicious process, program instructions to execute the process in a secure enclave; program instructions to collect telemetry data from executing the process in the secure enclave; program instructions to pass the collected telemetry data to a locally trained neural network system, wherein training data of the locally trained neural network system comprises telemetry data from processes being executed on a host system underlying the locally trained neural network system; program instructions to determine a result of a first loss function for the locally trained neural network system; and program instructions to compare the result with a result of said loss function at an end of a training of said locally trained neural network system.
20. The computer program product of claim 19, further comprising program instructions, stored on the one or more computer readable storage media, to: pass the collected telemetry data to a federated trained neural network system, wherein the federated trained neural network system is adapted to receive a federated trained neural network model; determine a result of a second loss function for the federated trained neural network system; compare the result with a result of the loss function of the received federated trained neural network model of the federated trained neural network system; aggregate results of the first loss function and the second loss function; and determine whether the process is anomalous based on the aggregated results.
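Claims 2, 5, and 11 pair the local model with a federated one: both score the same telemetry (per claim 5, auto-encoders are one option), each loss is compared against its end-of-training reference, and the two results are combined as a weighted average before the anomaly decision. The sketch below uses a toy linear auto-encoder with invented weights and thresholds; it illustrates the aggregation logic only and is not the patented implementation.

```python
import numpy as np

class TinyAutoencoder:
    """Toy linear auto-encoder; reconstruction error plays the role of the
    loss function. A real deployment would use a trained deep model."""
    def __init__(self, weights: np.ndarray):
        self.weights = weights  # encoder matrix; decoder is its transpose

    def loss(self, x: np.ndarray) -> float:
        recon = self.weights.T @ (self.weights @ x)
        return float(np.mean((x - recon) ** 2))

def aggregate(first_loss: float, second_loss: float,
              w_local: float = 0.6) -> float:
    """Claim 11: a weighted average of the first and second loss functions."""
    return w_local * first_loss + (1.0 - w_local) * second_loss

def anomaly_decision(x: np.ndarray,
                     local: TinyAutoencoder, local_ref: float,
                     federated: TinyAutoencoder, fed_ref: float,
                     threshold: float = 2.0) -> bool:
    """Claim 2: score the same telemetry with both models, relate each loss
    to its end-of-training reference, then decide on the aggregate."""
    first = local.loss(x) / local_ref       # first loss vs. local reference
    second = federated.loss(x) / fed_ref    # second loss vs. federated reference
    return aggregate(first, second) > threshold

# Usage with random stand-in weights and telemetry:
rng = np.random.default_rng(0)
W = 0.3 * rng.normal(size=(2, 8))
local = TinyAutoencoder(W)
federated = TinyAutoencoder(W + 0.05 * rng.normal(size=W.shape))
telemetry = rng.normal(size=8)
print(anomaly_decision(telemetry, local, local_ref=0.5,
                       federated=federated, fed_ref=0.5))
```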
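A second sketch covers claims 9 and 10, which describe how the federated model is trained: representative telemetry is run through each received locally trained model, and the resulting input/output pairs are weighted by the geographical vicinity between the contributing host and the host the new federated model is built for. The haversine distance and exponential decay here are assumed stand-ins for the claims' unspecified "predefined metric".

```python
import math
from typing import Dict, List, Tuple

LatLon = Tuple[float, float]

def haversine_km(a: LatLon, b: LatLon) -> float:
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def vicinity_weight(host: LatLon, target: LatLon,
                    scale_km: float = 1000.0) -> float:
    """Claim 10: weight a local model's contribution by geographical vicinity
    to the host the new federated model is trained for. The exponential
    decay is one illustrative choice of metric."""
    return math.exp(-haversine_km(host, target) / scale_km)

def build_training_pairs(local_models: List[Dict],
                         representative_telemetry: List[List[float]],
                         target: LatLon):
    """Claim 9: run representative telemetry through each received locally
    trained model and keep the weighted input/output pairs as training
    data for the federated model."""
    pairs = []
    for m in local_models:
        w = vicinity_weight(m["location"], target)
        for x in representative_telemetry:
            pairs.append((x, m["predict"](x), w))
    return pairs

# Usage with two stand-in local models (identity and scaling "predictors"):
models = [
    {"location": (52.5, 13.4), "predict": lambda x: x},
    {"location": (37.8, -122.4), "predict": lambda x: [0.9 * v for v in x]},
]
pairs = build_training_pairs(models, [[0.1, 0.7], [0.4, 0.2]],
                             target=(48.1, 11.6))
for x, y, w in pairs:
    print(x, y, round(w, 3))
```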
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/069,189 (US11886587B2) | 2020-10-13 | 2020-10-13 | Malware detection by distributed telemetry data analysis |
PCT/CN2021/120874 (WO2022078196A1) | 2020-10-13 | 2021-09-27 | Malware detection by distributed telemetry data analysis |
Publications (2)
Publication Number | Publication Date |
---|---|
GB202300649D0 (en) | 2023-03-01 |
GB2616506A (en) | 2023-09-13 |
Family
ID=81077750
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB2300649.7A (GB2616506A, pending) | Malware detection by distributed telemetry data analysis | 2020-10-13 | 2021-09-27 |
Country Status (5)
Country | Link |
---|---|
US (1) | US11886587B2 (en) |
JP (1) | JP2023549284A (en) |
DE (1) | DE112021004808T5 (en) |
GB (1) | GB2616506A (en) |
WO (1) | WO2022078196A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220405573A1 (en) * | 2021-06-18 | 2022-12-22 | Ford Global Technologies, Llc | Calibration for a distributed system |
US11693965B1 (en) * | 2022-06-17 | 2023-07-04 | Uab 360 It | Malware detection using federated learning |
EP4336294A1 (en) * | 2022-09-09 | 2024-03-13 | AO Kaspersky Lab | System and method for detecting anomalies in a cyber-physical system |
US12002055B1 (en) * | 2023-09-13 | 2024-06-04 | Progressive Casualty Insurance Company | Adaptable processing framework |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070294768A1 (en) * | 2006-01-31 | 2007-12-20 | Deutsche Telekom Ag | Method and system for detecting malicious behavioral patterns in a computer, using machine learning |
CN109635563A (en) * | 2018-11-30 | 2019-04-16 | 北京奇虎科技有限公司 | The method, apparatus of malicious application, equipment and storage medium for identification |
CN111260053A (en) * | 2020-01-13 | 2020-06-09 | 支付宝(杭州)信息技术有限公司 | Method and apparatus for neural network model training using trusted execution environments |
CN111368297A (en) * | 2020-02-02 | 2020-07-03 | 西安电子科技大学 | Privacy protection mobile malicious software detection method, system, storage medium and application |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7331062B2 (en) * | 2002-08-30 | 2008-02-12 | Symantec Corporation | Method, computer software, and system for providing end to end security protection of an online transaction |
US8881282B1 (en) | 2004-04-01 | 2014-11-04 | Fireeye, Inc. | Systems and methods for malware attack detection and identification |
US8745361B2 (en) | 2008-12-02 | 2014-06-03 | Microsoft Corporation | Sandboxed execution of plug-ins |
US8306931B1 (en) | 2009-08-06 | 2012-11-06 | Data Fusion & Neural Networks, LLC | Detecting, classifying, and tracking abnormal data in a data stream |
US9134996B2 (en) * | 2011-04-28 | 2015-09-15 | F-Secure Corporation | Updating anti-virus software |
CN105247532B (en) | 2013-03-18 | 2019-05-31 | 纽约市哥伦比亚大学理事会 | Use the unsupervised detection to abnormal process of hardware characteristics |
US9195833B2 (en) | 2013-11-19 | 2015-11-24 | Veracode, Inc. | System and method for implementing application policies among development environments |
US9262635B2 (en) * | 2014-02-05 | 2016-02-16 | Fireeye, Inc. | Detection efficacy of virtual machine-based analysis with application specific events |
US9853997B2 (en) | 2014-04-14 | 2017-12-26 | Drexel University | Multi-channel change-point malware detection |
US9959405B2 (en) | 2014-05-28 | 2018-05-01 | Apple Inc. | Sandboxing third party components |
US10419452B2 (en) * | 2015-07-28 | 2019-09-17 | Sap Se | Contextual monitoring and tracking of SSH sessions |
AU2017200941B2 (en) | 2016-02-10 | 2018-03-15 | Accenture Global Solutions Limited | Telemetry Analysis System for Physical Process Anomaly Detection |
US10375090B2 (en) | 2017-03-27 | 2019-08-06 | Cisco Technology, Inc. | Machine learning-based traffic classification using compressed network telemetry data |
CN108092962B (en) * | 2017-12-08 | 2020-11-06 | 奇安信科技集团股份有限公司 | Malicious URL detection method and device |
WO2019229728A1 (en) * | 2018-06-01 | 2019-12-05 | Thales Canada Inc. | System for and method of data encoding and/or decoding using neural networks |
US11699080B2 (en) * | 2018-09-14 | 2023-07-11 | Cisco Technology, Inc. | Communication efficient machine learning of data across multiple sites |
US11706499B2 (en) * | 2018-10-31 | 2023-07-18 | Sony Interactive Entertainment Inc. | Watermarking synchronized inputs for machine learning |
US11258813B2 (en) | 2019-06-27 | 2022-02-22 | Intel Corporation | Systems and methods to fingerprint and classify application behaviors using telemetry |
US20210365841A1 (en) * | 2020-05-22 | 2021-11-25 | Kiarash SHALOUDEGI | Methods and apparatuses for federated learning |
US20220076133A1 (en) * | 2020-09-04 | 2022-03-10 | Nvidia Corporation | Global federated training for neural networks |
- 2020
- 2020-10-13 US US17/069,189 patent/US11886587B2/en active Active
- 2021
- 2021-09-27 JP JP2023535644A patent/JP2023549284A/en active Pending
- 2021-09-27 DE DE112021004808.2T patent/DE112021004808T5/en active Pending
- 2021-09-27 GB GB2300649.7A patent/GB2616506A/en active Pending
- 2021-09-27 WO PCT/CN2021/120874 patent/WO2022078196A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
US20220114260A1 (en) | 2022-04-14 |
WO2022078196A1 (en) | 2022-04-21 |
US11886587B2 (en) | 2024-01-30 |
JP2023549284A (en) | 2023-11-22 |
GB202300649D0 (en) | 2023-03-01 |
DE112021004808T5 (en) | 2023-07-20 |
Similar Documents
Publication | Title |
---|---|
GB2616506A (en) | Malware detection by distributed telemetry data analysis | |
CN110572297B (en) | Network performance evaluation method, server and storage medium | |
WO2017118133A1 (en) | Anomaly detection method for internal virtual machine of cloud system | |
KR102120214B1 (en) | Cyber targeted attack detect system and method using ensemble learning | |
CN114499979B (en) | SDN abnormal flow cooperative detection method based on federal learning | |
CN112217650B (en) | Network blocking attack effect evaluation method, device and storage medium | |
JP2021056927A (en) | Abnormality detection system, abnormality detection method, and abnormality detection program | |
CN109040113B (en) | Distributed denial of service attack detection method and device based on multi-core learning | |
US20180330226A1 (en) | Question recommendation method and device | |
CN113434859A (en) | Intrusion detection method, device, equipment and storage medium | |
CN113660273B (en) | Intrusion detection method and device based on deep learning under super fusion architecture | |
JP2018148350A (en) | Threshold determination device, threshold level determination method and program | |
Al-Yaseen et al. | Real-time intrusion detection system using multi-agent system | |
WO2020158398A1 (en) | Sound generation device, data generation device, abnormality degree calculation device, index value calculation device, and program | |
RU2020111006A (en) | VERIFICATION DEVICE, INFORMATION PROCESSING METHOD AND PROGRAM | |
CN108600270A (en) | A kind of abnormal user detection method and system based on network log | |
CN114363212B (en) | Equipment detection method, device, equipment and storage medium | |
CN104008038A (en) | Method and device for detecting and evaluating software | |
Mughaid et al. | Utilizing machine learning algorithms for effectively detection iot ddos attacks | |
CN108989083B (en) | Fault detection performance optimization method based on hybrid strategy in cloud environment | |
CN108121912B (en) | Malicious cloud tenant identification method and device based on neural network | |
KR102354094B1 (en) | Method and Apparatus for Security Management Based on Machine Learning | |
CN109768995B (en) | Network flow abnormity detection method based on cyclic prediction and learning | |
EP4221081A1 (en) | Detecting behavioral change of iot devices using novelty detection based behavior traffic modeling | |
CN111277427B (en) | Data center network equipment inspection method and system |