CN113176977A - Interleaved log analysis method for constructing Internet of Things workflows - Google Patents

Interleaved log analysis method for constructing Internet of Things workflows

Info

Publication number
CN113176977A
CN113176977A (application CN202110464020.4A)
Authority
CN
China
Prior art keywords
log
dependency
value
log entry
entry
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110464020.4A
Other languages
Chinese (zh)
Inventor
靳宗明
卢冶
谢学说
简兆龙
李涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nankai University
CERNET Corp
Original Assignee
Nankai University
CERNET Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nankai University, CERNET Corp filed Critical Nankai University
Priority to CN202110464020.4A
Publication of CN113176977A
Legal status: Pending

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/18 File system types
    • G06F16/1805 Append-only file systems, e.g. using logs or journals to store data
    • G06F16/1815 Journaling file systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/3065 Monitoring arrangements determined by the means or processing involved in reporting the monitored data

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention provides an interleaved log analysis method for constructing Internet of Things workflows, which comprises the following steps: in the log file, a window of length N is placed after each log entry, covering the N subsequent log entries immediately following it, with the value of N set manually; a dependency value is calculated between each preceding log entry and every subsequent log entry in its window; and a filtering threshold is set such that, if the joint dependency value exceeds the threshold, the corresponding subsequent log entries are the true successors of the preceding entry. The invention avoids using identifier information in the logs when building a workflow model from interleaved logs, is widely applicable, and solves the context-loss and noise problems caused by log interleaving.

Description

Interleaved log analysis method for constructing Internet of Things workflows
Technical Field
The invention belongs to the field of computers, and particularly relates to an interleaved log analysis method for constructing Internet of Things workflows.
Background
Currently, intelligent Internet of Things system architectures are becoming increasingly complex, comprising thousands of device components and serving many users simultaneously. For administrators, acquiring operation information from logs to understand the system's running state is an important means of maintaining the system. However, as the size and complexity of Internet of Things systems grow, the volume of logs generated in a single day has reached the TB level, and manual processing of the logs has become impossible. A workflow model mined from system logs can support system maintenance, helping administrators understand the system's operation and monitor that it runs correctly. However, due to imperfect system documentation and specifications, a complete workflow of the system is usually unavailable. In addition, because the operating mechanisms of Internet of Things systems are changeable, it is difficult to establish an effective, complete, and adaptive workflow model.
Current systems write operational information into logs, which help administrators understand the system's running state and analyze the causes of faults. The following example shows a typical raw log entry, consisting of a timestamp, a level, and the raw log content.
2020-02-10 20:38:31,518 INFO dfs.DataNode$DataXceiver: Receiving block blk_-6853481264720481267 src: /10.210.11.53:48251 dest: /10.210.11.53:50754
The raw log content can be parsed by a log parser into a log template and parameters; for this example, the template is "Receiving block <*> src: <*> dest: <*>", where parameters are replaced by the wildcard "<*>". Since templates represent specific states of the system at runtime, they are used as the nodes of the workflow model. Edges connecting the nodes represent state transitions, as in the workflow model example of FIG. 1.
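To make the template idea concrete, here is a minimal Python sketch that reduces the raw entry above to a template and parameters. The regex rules are hypothetical illustrations; the invention itself relies on a Drain-style parser rather than hand-written patterns.

```python
import re

def to_template(raw_log: str):
    """Reduce a raw log message to a template plus extracted parameters.

    Hypothetical sketch: real log parsers such as Drain use a
    fixed-depth tree, not hand-written regexes.
    """
    params = []

    def grab(match):
        # Record the concrete value and substitute the wildcard.
        params.append(match.group(0))
        return "<*>"

    # Replace block IDs, IP:port pairs, and bare numbers with "<*>".
    pattern = r"blk_-?\d+|/?\d+\.\d+\.\d+\.\d+:\d+|\b\d+\b"
    template = re.sub(pattern, grab, raw_log)
    return template, params

msg = ("Receiving block blk_-6853481264720481267 "
       "src: /10.210.11.53:48251 dest: /10.210.11.53:50754")
template, params = to_template(msg)
# template -> "Receiving block <*> src: <*> dest: <*>"
```

All entries that share the same template then map to the same workflow node, regardless of their concrete parameter values.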
The concurrency and asynchrony of such systems lead to log interleaving; simply put, the log sequence in the system's log file is disordered. Interleaving causes context loss and noise, making it difficult to distinguish logs belonging to different tasks. In principle, however, constructing a correct workflow model requires separating the logs of different tasks and then extracting the program execution paths to form the model. We take FIG. 1 as an example to illustrate the difficulty of separating interleaved logs and building the corresponding workflow model. There are two log sequences whose execution paths are interleaved, i.e., 1 → 3 → 5 → 8 → 9 → 11 and 2 → 4 → 6 → 7 → 10 → 12. Some existing methods assume that each log entry carries an identifier that uniquely marks the execution it belongs to. Other approaches assume that even without a unique identifier, logs can be jointly identified by information inside them. However, in some systems, such as OpenStack, more than one third of the log entries carry no identifier at all, so this information cannot be relied on to separate logs and build workflow models. For these systems, distinguishing the logs of different tasks by identifier information is ineffective.
In addition, variability and polymorphism in the system cause model aging, so a workflow model built offline quickly loses effectiveness as the system runs. Model aging is usually irreversible, and a hot update (application restart) does little to recover from or mitigate it. Thus, due to model aging, a static workflow model built offline loses effectiveness if it is not updated over time. Reconstructing the workflow model consumes many resources, so reconstructing it in real time is not an effective solution. How to update an offline-built workflow model online is therefore also an urgent problem.
Disclosure of Invention
Aiming at the above technical problems in the prior art, the invention provides an interleaved log analysis method for constructing Internet of Things workflows, which avoids using identifier information in the logs when building a workflow model from interleaved logs, is widely applicable, and solves the context-loss and noise problems caused by log interleaving. In addition, the model is updated by a micro-iteration update algorithm, which solves the model aging problem.
The technical scheme adopted by the invention is as follows: a method for analyzing interleaved logs for constructing an Internet of Things workflow, comprising the following steps:
Step 1: in the log file, a window of length N is placed after each log entry; the window covers the N subsequent log entries immediately following that entry. The value of N is set manually.
Step 2: a dependency value is calculated between each preceding log entry and every subsequent log entry in its window.
Step 3: a filtering threshold is set; if the joint dependency value exceeds the threshold, the corresponding subsequent log entries are the true successors of the preceding entry. The remaining entries are noise successors.
Preferably, the dependency value is the probability that the subsequent log entry occurs in all windows of the same preceding log entry.
Preferably, to handle the case where a preceding log entry has several true successor entries, a weighted greedy algorithm is adopted to filter noise. Specifically, in step 3 the sum of the first m dependency values, taken in descending order of the dependency values of the subsequent log entries, is computed repeatedly until the sum exceeds the filtering threshold; all the corresponding subsequent log entries are then the true successors of the preceding entry, where m is a natural number incremented by 1 starting from 1. All remaining log entries are noise successors, and their dependency values are set to zero. Because m starts at 1, this covers the cases of one or several true successor entries.
Preferably, a preceding log entry and its subsequent log entries form log entry dependency pairs; a dependency relationship exists between a preceding log entry and its true successor entries; no dependency exists between a preceding log entry and its noise successor entries.
Preferably, the dependency value of a noise successor entry is set to zero, while the dependency value of a true successor entry keeps its original value.
Preferably, micro-iteration adjustment: log file segments of length L are continuously acquired from the log file; if a log entry dependency pair appears in the real-time segment, its dependency value is increased by μ, otherwise it is decreased by μ;
and if a dependency value falls below the lower limit, the dependency relationship of that pair is deleted.
Preferably, reconstruction: the workflow model is rebuilt from the interleaved logs of a recent period, avoiding a large number of micro-iteration adjustments when the system's operating mode changes substantially.
Compared with the prior art, the invention has the beneficial effects that:
1. The invention designs a window such that the true successors of a log entry are covered within the interleaved log, with adjustable window size; dependency computation based on a variable window solves the context-loss problem; identifier information in the logs is not required, so the method is widely applicable.
2. The invention designs a weighted greedy algorithm to filter noise; the algorithm selects true successors from the candidates in the window and then deletes noise branches.
3. The invention provides a micro-iteration adjustment algorithm that finely adjusts dependency values, changing the dependency relationships to adapt to system changes. The algorithm produces a real-time workflow model while avoiding the heavy resource consumption of repeated workflow reconstruction.
Drawings
FIG. 1 is a diagram of the interleaved log sequence and workflow model for OpenStack VM startup;
FIG. 2 is a schematic flow chart of an embodiment of the present invention;
FIG. 3 is a diagram illustrating probability densities associated with interleaved logs according to an embodiment of the present invention;
FIG. 4 is a Precision/Recall comparison between the embodiment of the present invention and LogSed;
FIG. 5 is a graph of Precision/Recall results for different window lengths and filtering thresholds according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
An embodiment of the present invention provides a method for analyzing interleaved logs for constructing an Internet of Things workflow, as shown in FIG. 2, comprising the following steps:
Step 1: in the log file, a window of length N = 3 is set after each log entry, covering the 3 subsequent log entries immediately following it;
when a window is used to find the true successor entries to the current template, the remaining entries in the window are independent of the current template. The remaining log entries are noise to the previous log entries, although they may be other execution paths.
FIG. 3 illustrates the probability densities of the dependency values of true successor entries and noise successor entries within a window in the interleaved log.
In FIG. 3(a), as the window grows, the true dependency values become smaller and their distribution more concentrated. In FIG. 3(b), by contrast, as the window grows, the noise dependency values become larger and their distribution more dispersed. Thus the larger the window, the stronger the ability to discover true successors, but the more noise is introduced. An appropriate window length preserves successor-discovery ability while minimizing the noise introduced. Computing the optimal window length is an NP problem, so a good window length must be chosen instead. In practice, a suitable window length ranges from the number of concurrent executions up to the length of the shortest task.
Step 2: calculating a dependency value between a preceding log entry and each subsequent log entry in its subsequent window;
the dependency values are used to measure the connections between log templates. As shown in the dependency computation portion of fig. 2, a count vector (C _ V) is first computed whose value is the template occurrence count in the window following the current template in the log sequence. Here, we introduce the C _ V calculation of T2. If the next 3 entries of T2 are checked, then a number of occurrences dictionary { T0: 1, T3: 4, T4: 1, T5: 3, T6: 2, T8: 1, naturally converted to C _ V according to the subscript of the template. Intuitively, T3 is a successor to T2, since T3 has the largest count in C _ V of T2. But it is uncertain whether T5 is also a successor to T2. Furthermore, the value of C _ V may be greatly affected by the number of occurrences of the reference log entry. Although the system logs are interleaved, the true neighbors of the template entries will appear nearby. The true successor of the log entry will necessarily occur after a period of time, while the noise log entry will occur randomly. Therefore, the probability that the true successor entry of the reference template appears in the window is higher, and this probability can represent the dependency of the two templates. The dependency vector (D _ V) may be calculated by equation 1.
D_V_i[j] = C_V_i[j] / sum(C_V_i)    (1)
where D_V_i and C_V_i denote the dependency vector and count vector of template T_i, respectively, and D_V_i[j] is the dependency value between T_j and T_i.
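Equation (1) can be sketched directly in Python. The toy sequence, template names, and window length below are illustrative, not data from the patent:

```python
from collections import defaultdict

def dependency_vectors(sequence, window=3):
    """Compute the dependency vector D_V for each template.

    counts[i][j] is C_V: how often template j appears in the window of
    length `window` after an occurrence of template i. D_V normalizes
    each row so that D_V_i[j] = C_V_i[j] / sum(C_V_i), as in eq. (1).
    """
    counts = defaultdict(lambda: defaultdict(int))
    for pos, tmpl in enumerate(sequence):
        for succ in sequence[pos + 1 : pos + 1 + window]:
            counts[tmpl][succ] += 1
    d_v = {}
    for tmpl, row in counts.items():
        total = sum(row.values())
        d_v[tmpl] = {succ: c / total for succ, c in row.items()}
    return d_v

# Two interleaved executions, A->B->C and X->Y->Z (illustrative):
seq = ["A", "X", "B", "Y", "C", "Z"]
dv = dependency_vectors(seq, window=3)
# dv["A"] holds the dependency values of A's candidate successors.
```

Each row of `dv` sums to 1, so the values behave like the occurrence probabilities the text describes.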
Step 3: noise filtering adopts a weighted greedy algorithm, which combines several dependencies into a new feature and filters noise jointly. Specifically: the sum of the first m dependency values, taken in descending order, is computed repeatedly, trying combinations of 1, 2, 3, ... entries, until the combined feature meets the dependency requirement (the sum of dependency values exceeds the filtering threshold θ); all corresponding log entries are then the true successors of the preceding entry. The remaining log entries are noise successors, and their dependency values are set to zero.
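The weighted greedy filtering step can be sketched as follows; the threshold and the example dependency row are illustrative values, not taken from the patent's experiments:

```python
def greedy_filter(d_v_row, theta=0.5):
    """Keep the smallest set of top-ranked successors whose dependency
    values sum past the threshold theta; zero out the rest as noise."""
    ranked = sorted(d_v_row.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for tmpl, val in ranked:
        kept.append(tmpl)
        total += val
        if total > theta:
            break  # combined feature now meets the dependency requirement
    return {t: (v if t in kept else 0.0) for t, v in d_v_row.items()}

# Illustrative dependency row for one preceding template:
row = {"T3": 0.4, "T5": 0.3, "T0": 0.1, "T4": 0.1, "T6": 0.1}
filtered = greedy_filter(row, theta=0.5)
# T3 alone (0.4) does not pass theta, so T5 joins it (0.4 + 0.3 > 0.5);
# both are kept as true successors, all others are zeroed as noise.
```

Because the greedy sum starts from the single largest value, the case of exactly one true successor is handled by the same loop.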
Over time, the state transitions of system mechanisms can change, even shifting completely to a new operating mode that never occurred before. The offline-built workflow model can be updated to accommodate these changes by capturing changes in the system log. There are two cases of workflow update: real-time micro-adjustment of the workflow model and periodic model reconstruction. Fine-tuning uses the real-time log stream to adjust the weights of edges in the offline workflow model and dynamically adds or deletes state-transition edges as they change. Periodic checking and reconstruction are also employed to avoid extensive fine-tuning, since the system mechanism may shift entirely to a new operating mode.
The offline workflow model is built from D_V after noise filtering; a non-zero dependency value in D_V indicates that a transition edge exists between two templates. In the online fine-tuning stage, the dependency values (edge weights) in the workflow model are continuously adjusted by the micro-iteration adjustment algorithm. If a dependency value reaches the upper or lower limit, the online update method adds or deletes the corresponding edge. The specific steps are as follows:
the online model is initialized to a static offline workflow model and all dependent values are obtained simultaneously. Continuously acquiring a log file segment with the length of L from a log file, if a log entry dependency pair exists, increasing mu of a dependency value, and decreasing mu of an nonexistent dependency value, if the dependency value exceeds an upper limit value, increasing the dependency relationship of the log entry dependency pair, and if the dependency value is lower than a lower limit value, deleting the dependency relationship of the log entry dependency pair.
The micro-iteration adjustment algorithm suits continuous, slight changes; complete shifts of the system mechanism are handled well by periodic checking and reconstruction.
Periodic model reconstruction: the dependency relationships and dependency values of the log entry dependency pairs, together with the upper and lower limits, are reset. Periodic checking and model reconstruction avoid a large number of fine-tuning iterations when the system changes abruptly.
The combination of iterative fine-tuning and periodic reconstruction can be applied in most cases.
This embodiment uses the Nova computing service in OpenStack as the data source for the interleaving experiments. The degree of interleaving is controlled by the number of concurrently executing virtual machines (VMs). To obtain the ground truth of the OpenStack VM run cycle, we manually examined the source code and submitted single requests to the system to obtain the true workflow.
We collected five sets of experimental data to evaluate the workflow-construction performance of Sysnif (this embodiment). These experiments tested the main factor affecting accuracy and efficiency: the number of tasks executed simultaneously. The more tasks run concurrently, the higher the interleaving complexity of the logs. The number of concurrent tasks is set from 2 to 6. Each task represents an execution that automatically performs virtual machine management; a virtual machine cycle starts at instance start and ends at instance stop, and each task in our experiments runs for 20 cycles in turn.
In this section, the comparison metric is standard Precision/Recall based on the clustering of LogSed (T. Jia, L. Yang, P. Chen, Y. Li, F. Meng, and J. Xu, "LogSed: Anomaly diagnosis through mining time-weighted control flow graphs in logs," in 2017 IEEE 10th International Conference on Cloud Computing (CLOUD). IEEE, 2017, pp. 447-455.). Precision is the proportion of edges in the mined workflow model that are true, and Recall is the proportion of edges in the true workflow model that are mined.
The results of the Precision and Recall comparison experiments are shown in FIG. 4. LogSed's Precision is high, but its Recall is unsatisfactory as the number of concurrent tasks increases. Branches in the OpenStack VM life-cycle model scatter the dependencies, and LogSed filters out some real dependencies as noise. Sysnif uses the weighted greedy algorithm to filter noise, making up for this deficiency, and performs better. Its Precision decreases slightly as the degree of interleaving increases, but the Precision and Recall of Sysnif reach at least 92.2% and 93.1%, respectively.
Accuracy is strongly affected by the window size in the dependency computation and the filtering threshold in the weighted greedy noise filter. FIG. 5 shows the Precision and Recall of the workflow model that Sysnif constructs from the dataset with 5 concurrent tasks. It can be concluded that, for this dataset, setting the window size to 5 and the filtering threshold to 0.3 lets Sysnif achieve better performance. This also verifies that the larger the window, the more noise is introduced and the harder it is to distinguish true successors from noise entries.
We also ran a comparison between directly constructing a workflow from 3k logs and constructing from 2k logs plus fine-tuning on 1k logs, with Precision/Recall results of 0.948/0.935 and 0.953/0.929, respectively. In this experiment the window length was set to 5 and the micro-iteration step size μ to 0.008; the lower and upper limits were set to 0.1 and 0.9, respectively. It can be concluded that construction plus micro-iteration reaches the accuracy of direct construction. Micro-iteration works online and outputs a real-time workflow model while the system runs, avoiding the heavy resource consumption of workflow reconstruction.
The main time consumption of Sysnif divides into two parts: raw log preprocessing and workflow construction from the structured sequence. Preprocessing parses unstructured raw log data into a sequence of structured log templates; our tool employs the recent log parsing algorithm Drain (P. He, J. Zhu, Z. Zheng, and M. R. Lyu, "Drain: An online log parsing approach with fixed depth tree," in 2017 IEEE International Conference on Web Services (ICWS). IEEE, 2017, pp. 33-40.). In the workflow construction phase, Sysnif relies on statistical computation with few iterations. The time complexity of the D_V calculation is O(N), and that of the sorting step in the weighted greedy noise filtering algorithm is O(n·log(n)), where N is the number of logs and n is the number of templates, with n much smaller than N. On an i5-8300H CPU with 8 GB of memory, Sysnif processes 10k logs in 3.95 s. It can be concluded that Sysnif processes logs at the level of seconds per 10k entries.
The present invention has been described in detail with reference to the embodiments, but the description is only illustrative and should not be construed as limiting the scope of the invention, which is defined by the claims. Equivalent changes and modifications made by those skilled in the art, based on the teaching of the technical solutions of the present invention and within its scope, are also within the scope of the present invention.

Claims (7)

1. A method for analyzing interleaved logs for constructing an Internet of Things workflow, characterized by comprising the following steps:
Step 1: in the log file, a window of length N is placed after each log entry; the window covers the N subsequent log entries immediately following that entry; the value of N is set manually;
Step 2: a dependency value is calculated between each preceding log entry and every subsequent log entry in its window;
Step 3: a filtering threshold is set; if the joint dependency value exceeds the threshold, the corresponding subsequent log entries are the true successors of the preceding entry.
2. The method of claim 1, characterized in that: the dependency value is the probability that a subsequent log entry occurs in all windows of the same preceding log entry.
3. The method of claim 1, characterized in that: in step 3, the sum of the first m dependency values, taken in descending order of the dependency values of the subsequent log entries, is computed repeatedly until the sum exceeds the filtering threshold; all corresponding subsequent log entries are then the true successors of the preceding log entry; m is a natural number incremented by 1 starting from 1.
4. The method of claim 3, characterized in that: a preceding log entry and a subsequent log entry form a log entry dependency pair; a dependency relationship exists between a preceding log entry and its true successor entries; no dependency exists between a preceding log entry and its noise successor entries.
5. The method of claim 3, characterized in that: the dependency value of a noise successor entry is set to zero, while the dependency value of a true successor entry keeps its original value.
6. The method of claim 1, characterized in that: micro-iteration adjustment: log file segments of length L are continuously acquired from the log file; if a log entry dependency pair appears in the real-time segment, its dependency value is increased by μ, otherwise it is decreased by μ;
and if a dependency value falls below the lower limit, the dependency relationship of that pair is deleted.
7. The method of claim 6, characterized in that: reconstruction: the workflow model is rebuilt based on the interleaved logs over a period of time.
CN202110464020.4A 2021-04-27 2021-04-27 Interleaved log analysis method for constructing Internet of Things workflows Pending CN113176977A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110464020.4A CN113176977A (en) 2021-04-27 2021-04-27 Interleaved log analysis method for constructing Internet of Things workflows


Publications (1)

Publication Number Publication Date
CN113176977A true CN113176977A (en) 2021-07-27

Family

ID=76926866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110464020.4A Pending CN113176977A (en) 2021-04-27 2021-04-27 Interleaved log analysis method for constructing Internet of Things workflows

Country Status (1)

Country Link
CN (1) CN113176977A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036034A (en) * 2014-06-30 2014-09-10 百度在线网络技术(北京)有限公司 Log analysis method and device for data warehouse
CN108427720A (en) * 2018-02-08 2018-08-21 中国科学院计算技术研究所 System log sorting technique
US20180288074A1 (en) * 2017-03-31 2018-10-04 Mcafee, Inc. Identifying malware-suspect end points through entropy changes in consolidated logs
CN110018948A (en) * 2018-01-02 2019-07-16 开利公司 For analyzing the system and method with the mistake in response log file
CN110032494A (en) * 2019-03-21 2019-07-19 杭州电子科技大学 A kind of double grains degree noise log filter method based on incidence relation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
潘建梁 (Pan Jianliang), "Workflow repetitive-task identification and noise-log detection methods supporting process modeling" (in Chinese), China Master's Theses Full-text Database, Information Science and Technology, no. 1, 15 January 2019, pages 2-4 *

Similar Documents

Publication Publication Date Title
CN111949633B (en) ICT system operation log analysis method based on parallel stream processing
Vianna et al. Analytical performance models for MapReduce workloads
JP6205066B2 (en) Stream data processing method, stream data processing apparatus, and storage medium
CN113011602A (en) Method and device for training federated model, electronic equipment and storage medium
US20050278273A1 (en) System and method for using root cause analysis to generate a representation of resource dependencies
Zhang et al. Accelerate large-scale iterative computation through asynchronous accumulative updates
Inacio et al. A survey into performance and energy efficiency in HPC, cloud and big data environments
Srinivasan et al. Elastic time
CN105630797B (en) Data processing method and system
CN112070416A (en) AI-based RPA process generation method, apparatus, device and medium
Xie et al. Dynamic interaction graphs with probabilistic edge decay
Ghaffari et al. A massively parallel algorithm for minimum weight vertex cover
Xu et al. Dynamic backup workers for parallel machine learning
Liu et al. Fluxinfer: Automatic diagnosis of performance anomaly for online database system
Bei et al. MEST: A model-driven efficient searching approach for MapReduce self-tuning
CN112905370A (en) Topological graph generation method, anomaly detection method, device, equipment and storage medium
Martyshkin et al. Queueing Theory to Describe Adaptive Mathematical Models of Computational Systems with Resource Virtualization and Model Verification by Similarly Configured Virtual Server
CN112532625B (en) Network situation awareness evaluation data updating method and device and readable storage medium
US10324845B1 (en) Automatic placement of cache operations for complex in-memory dataflows
US11157267B1 (en) Evaluation of dynamic relationships between application components
Guo et al. Correlation-based performance analysis for full-system MapReduce optimization
CN113176977A (en) Interleaved log analysis method for constructing Internet of Things workflows
Rehab et al. Scalable massively parallel learning of multiple linear regression algorithm with MapReduce
Bailey et al. Efficient incremental mining of contrast patterns in changing data
US20220107817A1 (en) Dynamic System Parameter for Robotics Automation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination