CN116028315A - Operation early warning method, device, medium and electronic equipment - Google Patents


Info

Publication number
CN116028315A
CN116028315A (application CN202211678021.XA)
Authority
CN
China
Prior art keywords
window
data
predicted
time
residual
Prior art date
Legal status
Pending
Application number
CN202211678021.XA
Other languages
Chinese (zh)
Inventor
刘晓玲
李相宜
Current Assignee
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN202211678021.XA priority Critical patent/CN116028315A/en
Publication of CN116028315A publication Critical patent/CN116028315A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application belongs to the technical field of artificial intelligence, and relates to a job operation early warning method and apparatus, a computer-readable medium, an electronic device, and a computer program product. The method comprises the following steps: parsing the job running log to obtain monitoring data associated with timestamps; grouping the monitoring data by a plurality of time windows of different window scales to obtain groups of window data corresponding to those time windows; performing feature mapping on the groups of window data to obtain prediction data corresponding to a time window to be predicted; and, when the number of prediction data falling outside a threshold interval exceeds a count threshold, triggering early-warning information for abnormal job operation according to the prediction data. Embodiments of the application can accurately give early warning of job running conditions on a big-data job platform.

Description

Operation early warning method, device, medium and electronic equipment
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to a job operation early warning method, a job operation early warning device, a computer readable medium, electronic equipment and a computer program product.
Background
As the information age advances, technology across the internet industry progresses as well; data in every field grow accordingly, and the demand for mass data processing has surged.
Monitoring the operating condition of a big-data platform (cluster storage, node status, CPU, memory, virtual resources, the resource occupation of running business jobs, and the usage of various jobs), predicting faults, and preparing countermeasures and safeguards before faults occur are key to improving platform stability and an important safeguard in data-governance work. How to accurately give early warning of the operating condition of a big-data platform is therefore a pressing problem in this field.
Disclosure of Invention
The application provides a job operation early warning method, a job operation early warning device, a computer readable medium, electronic equipment and a computer program product, and aims to accurately early warn the job operation condition of a big data job platform.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned in part by the practice of the application.
According to an aspect of the embodiments of the present application, there is provided a job operation early warning method, including:
analyzing the operation log to obtain monitoring data associated with the time stamp;
grouping the monitoring data according to a plurality of time windows with different window scales to obtain a plurality of groups of window data corresponding to the time windows;
performing feature mapping processing on the plurality of groups of window data to obtain prediction data corresponding to a time window to be predicted;
when the number of the predicted data exceeding the threshold interval exceeds a number threshold, triggering early warning information of abnormal operation of the job according to the predicted data.
According to an aspect of the embodiments of the present application, there is provided a job operation early warning apparatus, including:
the analysis module is configured to analyze the operation log to obtain monitoring data associated with the time stamp;
the grouping module is configured to perform grouping processing on the monitoring data according to a plurality of time windows with different window scales to obtain a plurality of groups of window data corresponding to the plurality of time windows;
the mapping module is configured to perform feature mapping processing on the plurality of groups of window data to obtain prediction data corresponding to a time window to be predicted;
And the early warning module is configured to trigger early warning information of abnormal operation of the job according to the predicted data when the number of the predicted data exceeding a threshold interval exceeds a number threshold.
According to an aspect of the embodiments of the present application, there is provided a computer-readable medium having stored thereon a computer program which, when executed by a processor, implements a job running pre-warning method as in the above technical solution.
According to an aspect of the embodiments of the present application, there is provided an electronic device including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to execute the executable instructions to implement the job execution pre-warning method as in the above technical solution.
According to an aspect of the embodiments of the present application, there is provided a computer program product, including a computer program, which when executed by a processor implements a job running pre-warning method as in the above technical solution.
According to the technical solution provided by the embodiments of the application, the job running log is parsed to obtain monitoring data associated with timestamps; the data are then grouped by different window scales to obtain window data corresponding to several time windows; feature mapping is performed on these groups of window data to obtain prediction data. Predicting the job running state from data under multiple time windows can markedly improve the precision of early warning for abnormal job operation.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It is apparent that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 shows a flow chart of steps of a job run pre-warning method in one embodiment of the present application.
FIG. 2 is a flowchart illustrating steps of a method for performing job execution pre-warning based on data preprocessing in one embodiment of the present application.
FIG. 3 is a flowchart illustrating steps of a method for performing job execution early warning based on long and short term memory network and residual network in one embodiment of the present application.
FIG. 4 shows the model structure of the ResNet-LSTM fusion model in one embodiment of the present application.
Fig. 5 shows a residual block structure in one embodiment of the present application.
Fig. 6 schematically shows a block diagram of a job operation early warning device provided in an embodiment of the present application.
Fig. 7 schematically illustrates a block diagram of a computer system suitable for use in implementing embodiments of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present application. One skilled in the relevant art will recognize, however, that the aspects of the application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the application.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
Against the background of big data and data-mining technology, standalone single-machine processing can no longer meet current data-analysis demands, and parallel and distributed computing has grown rapidly across many fields. A distributed framework can not only solve the storage of mass data but also integrate multi-source, multi-level data; however, it also faces the dynamism and complexity of data governance, which makes a big-data platform harder to monitor.
Currently, many job scheduling systems on the market, for example Airflow and oflow, raise alarms by monitoring task attributes; that is, an alarm is triggered only after a job fails or times out. Alarms based on job attributes have the following problems:
1) No ability to predict the outcome of job scheduling. For example, companies typically run scheduled jobs in the early morning, when CPU load is low and processes are idle. Tasks start according to their period, execution time, and upstream dependencies; if an execution node fails, operations staff learn of it only through an SMS notification, and if they do not see and resolve it immediately, the whole business process is easily delayed.
2) A single alarm channel with a large and frequent alarm volume. For example, when jobs fail, operations staff can only be notified by SMS, so large numbers of alarm messages are sent frequently; staff must still check the job log for the cause of the failure and trace the processing logic before the alarm can be cleared by forcing the task to pass. In short, as the business keeps expanding, alarm content grows, and the flood of alarm messages from many different services can cause critical alarms to be missed or ignored.
3) Job logs are scattered across different platforms, making data collection difficult. For example, business scheduling logs are generally stored on different devices; as business types keep expanding, the daily accumulated logs grow substantially, and consulting them by logging into each machine in turn is cumbersome and inefficient.
In related art, a prediction model for fault early warning may be trained using artificial intelligence; however, these prediction-model-based methods still have the following drawbacks.
1) The training set reflects abnormal events that have already occurred; data must be collected and screened manually, and feature extraction takes considerable time. In practice, abnormal events are far rarer than normal data, and even a manually labeled dataset inevitably contains mislabeled or missed samples, so the model misses the best opportunity to learn those events.
2) A given business-operation monitoring model with fixed parameter values will always show mismatches between predicted and actual values. Current job scheduling is affected by cluster storage, node status, CPU, memory, virtual resources, job execution states, and other factors; expecting one conventional model to fit every job-scheduling monitoring scenario leads to poor, hard-to-apply predictions.
3) Only anomalies within one fixed time span can be predicted; effective prediction across different time windows is difficult. Prediction methods in the related art forecast only a single window, and it is hard to cover minute-level, hour-level, day-level, and month-level faults at the same time. Some proposed methods predict short-horizon, highly time-sensitive data well and can effectively assess how an index will change over the next few minutes or hours, but they struggle both with indexes that mutate within a short time and with indexes that drift slowly over several days.
In addition, early-warning methods based on prediction models in the related art judge whether monitoring data are abnormal by clustering, threshold setting, and similar means. Such methods need domain experts to define rules, which leads to high maintenance cost and poor prediction results.
To address the above problems and shortcomings in the related art, embodiments of the present application provide a method suited to monitoring job scheduling in big-data-platform operations, which makes predictions by analyzing factors such as cluster storage, node status, CPU, memory, virtual resources, and job execution states.
In particular, in some application scenarios, embodiments of the application provide a big-data-platform job early-warning system and method based on a hierarchical residual-connected LSTM. The hierarchical residual-connected LSTM model provided by this embodiment fits different job-monitoring indexes well. Combined with the multi-window prediction method provided by the embodiments, short-, medium-, and long-term monitoring-index prediction tasks can all be covered, the likelihood of an anomaly can be assessed, and fluctuations in abnormal job scheduling can be effectively identified and alarmed.
The following describes in detail the technical schemes such as the job operation early warning method, the job operation early warning device, the computer readable medium, the electronic device, the computer program product and the like provided in the present application with reference to the specific embodiments.
FIG. 1 shows a flow chart of steps of a job run pre-warning method in one embodiment of the present application. As shown in fig. 1, the job execution pre-warning method may include the following steps S110 to S140.
S110: and analyzing the job running log to obtain monitoring data associated with the time stamp.
Job running logs are operational data collected in real time over a period from the cluster of job nodes in a big-data platform, typically as semi-structured stream logs. Parsing the job running log yields timestamps in a uniform format and the monitoring data associated with them.
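As a concrete illustration, the parsing step can be sketched as below; the log layout, field names, and separator are assumptions, since the patent does not fix a log schema.

```python
import re
from datetime import datetime, timezone

# Hypothetical log shape: "2022-12-25 03:05:00 | cpu=0.72 mem=0.65 jobs=14".
LOG_LINE = re.compile(r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s*\|\s*(?P<kv>.*)")

def parse_log_line(line):
    """Parse one semi-structured log line into (unix timestamp, metrics dict)."""
    m = LOG_LINE.match(line)
    if m is None:
        return None  # skip lines that do not match the assumed shape
    ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S")
    ts = ts.replace(tzinfo=timezone.utc).timestamp()  # uniform float-seconds format
    metrics = {}
    for pair in m.group("kv").split():
        key, _, value = pair.partition("=")
        metrics[key] = float(value)
    return ts, metrics
```

Lines that do not match the assumed shape are dropped rather than raising, since stream logs routinely mix formats.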
S120: and grouping the monitoring data according to a plurality of time windows with different window scales to obtain a plurality of groups of window data corresponding to the plurality of time windows.
Grouping the monitoring data by a plurality of time windows having different window sizes can result in a set of window data corresponding to each time window.
In one embodiment of the application, a window sequence comprising a plurality of time windows with different window dimensions is acquired, and the window dimensions of each time window in the window sequence are sequentially increased; and respectively carrying out grouping processing on the monitoring data according to each time window in the window sequence to obtain a plurality of groups of window data corresponding to a plurality of time windows.
For example, in the embodiment of the present application, three time windows with different window scales may be used to group the monitoring data respectively, so as to obtain three sets of window data; the window scale of the first set of window data is 5 minutes, wherein the time stamps between every two adjacent pieces of monitoring data are 5 minutes apart from each other; the window scale of the second set of window data is 10 minutes, wherein the time stamps between every two adjacent pieces of monitoring data are spaced apart from each other by 10 minutes; the window size of the third set of window data is 30 minutes, with the time stamps between every two adjacent monitored data being spaced apart from each other by 30 minutes.
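The 5/10/30-minute grouping described above can be sketched as follows; the sampling interval and values are illustrative, and `group_by_window` is a hypothetical helper name.

```python
def group_by_window(samples, window_seconds):
    """Keep one (timestamp, value) sample per window, so adjacent kept
    samples are spaced window_seconds apart (step S120)."""
    grouped, next_ts = [], None
    for ts, value in sorted(samples):
        if next_ts is None or ts >= next_ts:
            grouped.append((ts, value))
            next_ts = ts + window_seconds
    return grouped

# One hour of base data sampled every 5 minutes (300 s); values illustrative.
base = [(300.0 * i, float(i)) for i in range(12)]
# Three window scales: 5, 10 and 30 minutes.
windows = {scale: group_by_window(base, scale) for scale in (300, 600, 1800)}
```

Each resulting series then has adjacent timestamps spaced exactly one window scale apart, matching the three sets of window data described above.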
S130: and performing feature mapping processing on the plurality of groups of window data to obtain prediction data corresponding to the time window to be predicted.
The time window to be predicted is one or more of the time windows used to group the monitoring data. Feature mapping on the groups of window data may proceed by pre-training a prediction model for each time window to be predicted, then feeding each group of window data into the prediction model with the matching window scale to obtain that model's prediction data. For example, when the time windows to be predicted cover the three scales of 5, 10, and 30 minutes: the first group of window data (5-minute scale) is input into the first prediction model to obtain its prediction for the next 5 minutes; the second group (10-minute scale) into the second prediction model for the next 10 minutes; and the third group (30-minute scale) into the third prediction model for the next 30 minutes.
S140: when the number of the predicted data exceeding the threshold interval exceeds a number threshold, the early warning information of abnormal operation of the job is triggered according to the predicted data.
The threshold interval is a preset numerical interval indicating that the job is running in a normal state: when the value of a prediction falls inside the interval, the predicted running state is normal. When predictions fall outside the threshold interval, their number is recorded; if that number also exceeds the count threshold, early-warning information for abnormal job operation is triggered according to the prediction data. For example, if step S130 yields three predictions A, B, and C and the count threshold is set to two, then whenever any two of the three fall outside the threshold interval the job is judged at risk of abnormality, and the early warning is triggered according to the out-of-interval predictions.
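Steps S130 and S140 together can be sketched as routing each window series to its model and counting out-of-interval predictions; the stub models, interval bounds, and count threshold are illustrative, and reaching the count threshold triggers the alarm, matching the two-of-three example.

```python
def predict_all(window_data, models):
    """Route each group of window data to the model trained for its scale
    and collect one prediction per window (step S130)."""
    return [models[scale](series) for scale, series in window_data.items()]

def should_alarm(predictions, interval, count_threshold):
    """Trigger when at least count_threshold predictions fall outside the
    normal-state threshold interval (step S140)."""
    low, high = interval
    outliers = [p for p in predictions if not (low <= p <= high)]
    return len(outliers) >= count_threshold, outliers

# Stub models standing in for the three pre-trained predictors.
models = {300: lambda s: 0.5, 600: lambda s: 1.2, 1800: lambda s: 1.3}
preds = predict_all({300: [], 600: [], 1800: []}, models)
fired, outliers = should_alarm(preds, interval=(0.0, 1.0), count_threshold=2)
```

Here two of the three stub predictions exceed the interval, so the alarm fires and the out-of-interval values are returned for the warning message.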
In the job running early-warning method provided by the embodiments of the application, the job running log is parsed to obtain monitoring data associated with timestamps; the data are then grouped by different window scales to obtain window data corresponding to several time windows; feature mapping is performed on these groups of window data to obtain prediction data. Predicting the job running state from data under multiple time windows can markedly improve the precision of early warning for abnormal job operation.
FIG. 2 is a flowchart illustrating steps of a method for performing job execution pre-warning based on data preprocessing in one embodiment of the present application. As shown in fig. 2, the job execution pre-warning method may include the following steps S210 to S260.
S210: and carrying out structural analysis on the job running log to obtain a time stamp and structural data associated with the time stamp.
The semi-structured stream log is cleaned, merged, and structured; contents including the timestamp and the monitoring indexes are extracted as data objects, and the corresponding structured data are generated.
S220: and converting the format of the time stamp to obtain a normalized time stamp.
Dates and times of different types are format-converted and unified into timestamps in floating-point format, in seconds, and the unified time is normalized with the following calculation formula:

t = (t' - t_s) / (t_e - t_s)

where t is the normalized timestamp, t' is the timestamp after format conversion, and t_s and t_e are respectively the start and end timestamps of the log collection window.
The collection period of the monitoring log is denoted Δt, which this application uses to characterize the window scale, i.e., the size of the time window.
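The normalization formula above translates directly into code; `normalize_timestamp` is a hypothetical helper name.

```python
def normalize_timestamp(t_prime, t_start, t_end):
    """t = (t' - t_s) / (t_e - t_s): map a format-converted timestamp into
    [0, 1] relative to the log collection window."""
    if t_end <= t_start:
        raise ValueError("collection window must have positive length")
    return (t_prime - t_start) / (t_end - t_start)
```

The window endpoints map to 0 and 1, and interior timestamps to their relative position in the window.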
S230: and carrying out normalization processing on the structured data to obtain monitoring data associated with the normalized time stamp.
The parsed structured data are normalized: percentage indexes such as memory utilization and process occupancy are converted to decimals, while indexes such as network traffic and the number of running nodes are normalized by the maximum value in their technical specification.
The processed monitoring data conform to the following form:

<t, d>_n  (n = 1, 2, ..., N_max)

where t is the timestamp processed in step S220, d is the normalized data, n indexes the monitoring data within the time window, and at most N_max records can be stored per window.
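A sketch of the index normalization in step S230; the metric names, the `_pct` suffix convention, and the table of rated maxima are assumptions, since the patent names the indexes only by example.

```python
def normalize_metrics(record, max_values):
    """Percentage indexes (assumed to end in '_pct') become decimals in
    [0, 1]; traffic/count indexes are scaled by their rated maximum."""
    out = {}
    for key, value in record.items():
        if key.endswith("_pct"):          # percentage index -> decimal
            out[key] = value / 100.0
        elif key in max_values:           # scale by technical-spec maximum
            out[key] = value / max_values[key]
        else:
            out[key] = value              # already dimensionless
    return out
```

After this step every index lies on a comparable [0, 1] scale, which is what the window grouping and prediction models downstream assume.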
S240: and grouping the monitoring data according to a plurality of time windows with different window scales to obtain a plurality of groups of window data corresponding to the plurality of time windows.
S250: and performing feature mapping processing on the plurality of groups of window data to obtain prediction data corresponding to the time window to be predicted.
S260: when the number of the predicted data exceeding the threshold interval exceeds a number threshold, the early warning information of abnormal operation of the job is triggered according to the predicted data.
The implementation details of steps S240 to S260 may refer to steps S120 to S140 in the above embodiments, and will not be described here again.
In the job operation early warning method provided by the embodiment of the application, the semi-structured stream log can be converted into the structured data with the same format and the same time scale by carrying out structural analysis, timestamp conversion and data normalization on the original log data, so that early warning efficiency and early warning accuracy can be improved.
FIG. 3 is a flowchart illustrating steps of a method for performing job execution early warning based on long and short term memory network and residual network in one embodiment of the present application. As shown in fig. 3, the job execution pre-warning method may include the following steps S310 to S370.
S310: and analyzing the job running log to obtain monitoring data associated with the time stamp.
S320: and grouping the monitoring data according to a plurality of time windows with different window scales to obtain a plurality of groups of window data corresponding to the plurality of time windows.
The implementation details of steps S310 to S320 may refer to S110 to S120 in the above embodiment, or may refer to S210 to S240 in the above embodiment, which will not be described herein.
S330: and acquiring a long-period memory network with the same window scale as the time window to be predicted.
S340: and extracting a data sequence with the same window scale as the time window to be predicted from the plurality of sets of window data.
S350: and performing feature mapping processing on the data sequence through the long-short-period memory network to obtain a hidden layer vector output by the long-short-period memory network.
The time windows to be predicted are Δt_1, Δt_2, ..., Δt_n (n ≥ 1). There is at least one window, and the window scale increases with the index n. Each time window to be predicted corresponds to a pre-trained long short-term memory network.
A long short-term memory network (LSTM), as a variant of the recurrent neural network (RNN), overcomes the RNN's inability to handle long sequences. The input sequence is fed into the LSTM network, and the hidden-layer vector output from the cell state at time t is h_t. The calculation formulas are:

z_t = σ(W_z · [h_{t-1}, x_t] + b_z)
r_t = σ(W_r · [h_{t-1}, x_t] + b_r)
h̃_t = tanh(W_h · [r_t ⊙ h_{t-1}, x_t] + b_h)
h_t = (1 - z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t

where z_t, as the input (update) gate, controls how much past information is retained and how much new information is added, and r_t, as the forget (reset) gate in the GRU structure, controls how much of the previous state is written into the current candidate set; if r_t is 0, all previous states are forgotten.
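The gate equations can be traced with a minimal scalar sketch; a real implementation would use an LSTM/GRU layer from a deep-learning framework, with vector states and weight matrices rather than the scalar toy parameters below.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h_prev, x_t, W_z, W_r, W_h, b_z, b_r, b_h):
    """One GRU-style update on scalar state and input, mirroring the gate
    equations above: z_t (update/input gate), r_t (reset/forget gate)."""
    concat = [h_prev, x_t]
    z = sigmoid(sum(w * v for w, v in zip(W_z, concat)) + b_z)
    r = sigmoid(sum(w * v for w, v in zip(W_r, concat)) + b_r)
    # Candidate state uses the reset-gated previous state.
    h_cand = math.tanh(W_h[0] * (r * h_prev) + W_h[1] * x_t + b_h)
    # Blend previous state and candidate through the update gate.
    return (1.0 - z) * h_prev + z * h_cand
```

With zero input and zero state the update is a fixed point (the state stays 0), and any bounded input keeps the state within (-1, 1) thanks to the tanh candidate.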
In one embodiment of the present application, a method for acquiring a long-short term memory network having the same window scale as a time window to be predicted may include: acquiring sample data with the same window scale as a time window to be predicted; acquiring the ratio of a time window to be predicted to a minimum time window, and acquiring regularization adjustment parameters positively related to the ratio; training the initial network according to regularization adjustment parameters and sample data to obtain a long-term and short-term memory network with the same window scale as the time window to be predicted. Regularization adjustment parameters are parameters in the loss function used to train the long and short term memory network.
For example, denoting the window ratio by r_i, the calculation formula is:

r_i = Δt_i / Δt_1

where i is the window index and Δt_1 is the smallest window.

From the ratio of each window, a regularization adjustment parameter λ is constructed such that λ_i grows with r_i (for example, λ_i proportional to r_i). Letting λ increase with the window not only improves the prediction accuracy of the model but also strengthens the ability of the medium- and long-term windows to suppress frequent short-term noise fluctuations that merely resemble anomalies.
S360: and carrying out nonlinear transformation on the hidden layer vector through a residual network formed by a plurality of residual blocks to obtain prediction data corresponding to a time window to be predicted.
In one embodiment of the present application, a method for non-linearly transforming a hidden layer vector through a residual network consisting of a plurality of residual blocks may include: respectively carrying out nonlinear transformation on input data through each residual block in a residual network to obtain a residual value output by the residual block; wherein, the input data of the first residual block is a hidden layer vector, and the input data of the latter residual block is a residual value output by the former residual block; and determining prediction data corresponding to the time window to be predicted according to the residual values output by the residual blocks.
For example, the hidden-layer vector h_t is input into a residual network for nonlinear transformation; the residual network reduces the dimension of the hidden vector, and the depth of the residual connections is denoted D_i (i = 1, 2, ..., n). The residual network is formed by connecting several residual blocks, each expressed as follows:

y = F(x, {W_i}) + x

where x is the input vector, y is the output of the residual block, and F(x, {W_i}) is the residual mapping, i.e., the nonlinear transformation with weights W_i and its activation.
For the output h t of the LSTM network at time t, a new output state is obtained after residual calculation, with the formula:

h′ t = f ω(h t ) (h t ) + h t
the data output by the residual error network is the prediction data corresponding to the time window to be predicted.
In one embodiment of the present application, a method for determining prediction data corresponding to a time window to be predicted according to residual values output by respective residual blocks may include: and carrying out weighting processing on residual values output by each residual block according to the attention weight obtained by pre-training, and obtaining the prediction data corresponding to the time window to be predicted.
By introducing an attention mechanism, the depth of the residual error network can be effectively regulated, the advantages and disadvantages of different depths on characteristic fitting can be better judged, and the phenomenon of over fitting is avoided.
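A minimal sketch of the attention weighting over residual-block outputs, assuming the pretrained attention weights are supplied as scores that are softmax-normalized (the names are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def attention_combine(residual_outputs, attn_scores):
    """Weight each residual block's output by a (pretrained) attention weight."""
    w = softmax(np.asarray(attn_scores, dtype=float))
    stacked = np.stack(residual_outputs)           # (num_blocks, dim)
    return (w[:, None] * stacked).sum(axis=0)      # weighted sum -> prediction vector

outs = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
pred = attention_combine(outs, [0.0, 0.0, 0.0])    # equal scores -> plain mean
```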
S370: when the number of predicted data points exceeding the threshold interval exceeds a count threshold, early warning information of abnormal job operation is triggered according to the predicted data.
Based on the above embodiments, the technical solutions in some embodiments of the present application have the following features:
1) The model algorithm is improved, raising model quality: traditional machine learning requires different feature extraction methods for different types of job logs and requires domain experts to formulate feature templates. By comparison, the deep learning anomaly-detection method here enhances the predictive capability of the monitoring indexes, achieves a better fitting effect, and, with suitable parameters, completes analysis and detection of the job log with almost no manual feature extraction.
2) Flexibly controlling the prediction aging: the time windows of different sizes are applied to each monitoring index and run in parallel with the training task.
3) The method has the capability of resisting overfitting, and improves the accuracy of mid-long term window prediction: and adding a regularization term loss function positively correlated with a time window into the network model, reducing the influence of short-term noise fluctuation abnormal frequency on a medium-term window, improving the accuracy of the model, and preventing the model from being over-fitted.
The method for performing operation early warning on the job in the application scene can comprise the following processes.
S1: and (5) monitoring log collection.
Pull the resource monitoring logs, process monitoring logs, node monitoring logs and job monitoring logs of each platform (HDFS, Yarn, HBase, ZooKeeper), and send the data to the S2 module for data preprocessing.
S2: and (5) preprocessing data.
Preprocessing the operation log data, including structural analysis, time stamp conversion and data normalization processing on the original log data, and filtering the data which does not accord with the specification.
The data preprocessing comprises the following sub-steps:
s21: log structured parsing.
Perform structuring on factors such as cluster storage, node running conditions, CPU, memory, virtual resources, service-operation execution and resource occupation; convert them into JSON format; and extract contents including timestamps and monitoring indexes as data objects.
S22: timestamp conversion.
Taking the time "2022-06-20 17:15:57" as an example, it converts to "1655716557" in seconds. The unified time is then normalized, with the calculation formula:

t = (t′ − t s ) / (t e − t s )

where t is the normalized timestamp, t′ is the timestamp after format conversion, and t s and t e are respectively the start timestamp and the end timestamp of the log collection window.

The acquisition period of the monitoring log is denoted by Δt, which is used in the embodiments of the present application to characterize the time window size.

For example, let Δt = 4800 s and the end timestamp t e = 1655716557; with the monitoring-data timestamp t′ = 1655716557, the start timestamp is t s = 1655711757, and t = 1 after normalization.
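The timestamp conversion and normalization above can be sketched as follows; the UTC+8 offset is inferred from the example value 1655716557 and is an assumption:

```python
from datetime import datetime, timezone, timedelta

def to_epoch_seconds(s, utc_offset_hours=8, fmt="%Y-%m-%d %H:%M:%S"):
    """Convert a log timestamp string to epoch seconds. The patent's example
    value 1655716557 implies a UTC+8 local time, hence the default offset."""
    tz = timezone(timedelta(hours=utc_offset_hours))
    return int(datetime.strptime(s, fmt).replace(tzinfo=tz).timestamp())

def normalize_timestamp(t_prime, t_s, t_e):
    """t = (t' - t_s) / (t_e - t_s): map the collection window onto [0, 1]."""
    return (t_prime - t_s) / (t_e - t_s)

t_prime = to_epoch_seconds("2022-06-20 17:15:57")   # -> 1655716557
t_e = t_prime                                       # end of the collection window
t_s = t_e - 4800                                    # dt = 4800 s
t = normalize_timestamp(t_prime, t_s, t_e)          # -> 1.0 for the last sample
```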
S23: and (5) data normalization processing.
Perform data normalization on the parsed structured data, converting percentage indexes such as memory utilization and process occupancy into decimals. For example, a current process occupancy of 91% becomes 0.91 after conversion.
The processed monitoring data should conform to the following form:
<t, d> n (n = 1, 2, …, N max )

where t is the timestamp after sub-step S22 processing, d is the data after sub-step S23 normalization, n indexes the monitoring data within the time window, and at most N max entries are stored.
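A small sketch of sub-step S23 and the <t, d> record form; `normalize_metric` and `build_window_records` are illustrative names, not from the original:

```python
def normalize_metric(value):
    """Convert a percentage metric (e.g. '91%' or 91) to a decimal in [0, 1]."""
    if isinstance(value, str) and value.endswith("%"):
        value = float(value[:-1])
    return float(value) / 100.0

def build_window_records(samples, n_max):
    """Assemble <t, d> pairs, keeping at most n_max entries per time window."""
    records = [(t, normalize_metric(d)) for t, d in samples]
    return records[:n_max]

recs = build_window_records([(0.0, "91%"), (0.5, "45%"), (1.0, "12%")], n_max=2)
```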
S3: and (5) model training.
And in different time windows, constructing a hierarchical residual connection LSTM model, inputting the monitoring log data processed in the step S2 into the model for iterative training, and obtaining fitting models adapting to different window sizes for detecting the abnormality in the step S4.
The construction of the hierarchical residual connection LSTM model comprises the following substeps:
S31: a training time window size is determined.
Take the windows as Δt 1 = 4800 s, Δt 2 = 57600 s, Δt 3 = 115200 s.
S32: resNet-LSTM fusion models were constructed under different time windows.
Fig. 4 shows the model structure of the res net-LSTM fusion model in one embodiment of the present application, as shown in fig. 4,
the network includes the following features:
(1) A plurality of residual block structures are included, each residual block including at least 2 softmax fully connected layers.
(2) Comprises a long short term memory network (LSTM).
(3) A Dynamic ReLU activation function is used.
(4) The attention-introducing mechanism module adjusts the weights.
S321: under different time windows, a model structure of a hierarchical residual connection LSTM is constructed, and a residual network (ResNet) is used as a backbone network for extracting high-level features (specific steps are S3211 and S3212).
S3211: building an LSTM network: LSTM network as variation of cyclic neural network (RNN) can solve the problem that RNN can not process long sequence information, and input sequence x= (x) 1 ,x 2 ,…,x n ) Inputting the cell state output hidden layer vector of the LSTM at the moment of t into a hierarchical residual connection LSTM network, wherein the cell state output hidden layer vector of the LSTM at the moment of t is h t . The calculation formula is as follows:
zt=σ(Wz[t-1,xt]+b z )
rt=σ(Wr[t-1,xt]+b r )
Figure BDA0004017893440000131
Figure BDA0004017893440000132
wherein z is t Controlling the amount of information left in the past and the amount of information newly added as an input gate; r is (r) t As forget gate in GRU structure to control how much information is written to the current candidate set in the previous state if r t If the value of (2) is 0, then all previous states are forgotten.
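The gate equations above follow the GRU form; a minimal numpy sketch of one such step (weight shapes and initialization are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, h_prev, Wz, Wr, Wh, bz, br, bh):
    """One GRU-style step matching the gate equations above."""
    concat = np.concatenate([h_prev, x_t])
    z_t = sigmoid(Wz @ concat + bz)            # update ("input") gate
    r_t = sigmoid(Wr @ concat + br)            # reset ("forget") gate
    h_cand = np.tanh(Wh @ np.concatenate([r_t * h_prev, x_t]) + bh)
    h_t = (1.0 - z_t) * h_prev + z_t * h_cand  # blend old state and candidate
    return h_t

dim_x, dim_h = 3, 4
rng = np.random.default_rng(1)
Wz = rng.standard_normal((dim_h, dim_h + dim_x)); bz = np.zeros(dim_h)
Wr = rng.standard_normal((dim_h, dim_h + dim_x)); br = np.zeros(dim_h)
Wh = rng.standard_normal((dim_h, dim_h + dim_x)); bh = np.zeros(dim_h)
h = gru_cell(rng.standard_normal(dim_x), np.zeros(dim_h), Wz, Wr, Wh, bz, br, bh)
```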
S3212: constructing a ResNet network: will h t The hidden layer vector is input into a residual error network for nonlinear transformation, the residual error network carries out dimension reduction processing on the hidden vector, and the depth of residual error connection is D i (i=1, 2,3,4, 5), the residual network is composed of a plurality of residual blocks connected.
Fig. 5 shows a residual block structure in one embodiment of the present application, as shown in fig. 5, each residual block is represented as follows:
y = F(x, {W i }) + x

wherein: x is the input vector, y is the output of the residual block (also the output of the last layer), and F(x, {W i }) denotes the residual mapping (the stacked layers with their activation function).
S322: performing global coding by using a dynamic ReLU activation function, obtaining a proper activation function by a dynamic selection mode, performing dimension increasing treatment on the dimension-reducing vector obtained in the step S3212, and restoring the dimension to the original input dimension, wherein the specific calculation steps are as follows:
s3221: and (3) dimension reduction treatment: the pooling layer is used for achieving the effects of reducing parameters and reducing network complexity by compressing feature vectors, and the formula is as follows:
x′ = (x 1 , x 2 , …, x m ), m = 9
s3222: normalization: the feature vector after the dimension reduction processing is input into a ReLU activation function g (x) and normalized, unimportant feature information is filtered, the vector value is ensured to be between [ -1,1], and the formula is as follows:
Figure BDA0004017893440000133
S3223: and (3) dimension increasing treatment: adding the vector value normalized in the step S3222 to the original input dimension, wherein the calculation formula is as follows:
Figure BDA0004017893440000141
Figure BDA0004017893440000142
/>
Figure BDA0004017893440000143
Figure BDA0004017893440000144
h′ t =f(Wh t +Wx t +b)
in the formula, the super parameter alpha i And beta i Respectively is
Figure BDA0004017893440000145
And->
Figure BDA0004017893440000146
For determining a vector coefficient matrix, lambda a And lambda is b As residual range control scalar, superscalarParameter->
Figure BDA0004017893440000147
From a coefficient vector matrix->
Figure BDA0004017893440000148
Calculating to select the optimal value as the coefficient of the activation function to determine the activation function, wherein c is the number of channels, i is the number of functions, +.>
Figure BDA0004017893440000149
For the definition of the dynamic activation function, the vector +.>
Figure BDA00040178934400001410
Can determine the hyper-parameters +.>
Figure BDA00040178934400001411
Coefficients.
At this time, for the output h t of the LSTM network at time t, a new output state is obtained after residual calculation, with the formula:

h′ t = f ω(h t ) (h t ) + h t
s323: defining an attention mechanism module: by introducing an attention mechanism, the depth of the residual error network can be effectively regulated, the fitting quality of different depths to the features can be better judged, and the phenomenon of over fitting is avoided. Output vector H ' =h ' of LSTM cells at different depths ' kt(k>=0) Input into the attention mechanism, the calculation steps are as follows:
Figure BDA00040178934400001412
wherein softmax (·) is used for normalization to obtain the weight parameter Z k Selecting the depth with the largest weight as the connection depth of the residual layer, and taking the output of the current maximum depth as the vector h of the LSTM cell output layer t
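A sketch of the depth-selection step: softmax-normalize a score per depth and pick the depth with the largest weight. Scoring each depth output by its vector norm is an assumption; the original scoring formula is not preserved here:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def select_connection_depth(depth_outputs):
    """Score the LSTM-cell output at each residual depth, normalize with
    softmax, and return the depth with the largest weight plus its output."""
    scores = np.array([np.linalg.norm(h) for h in depth_outputs])
    z = softmax(scores)          # weight parameter Z_k per depth
    k = int(np.argmax(z))        # connection depth of the residual layer
    return k, depth_outputs[k]

outs = [np.array([0.1, 0.1]), np.array([1.0, 1.0]), np.array([0.5, 0.2])]
k, h_t = select_connection_depth(outs)
```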
S33: Calculate the window ratio, denoted r i , with the formula:

r i = Δt i / Δt 1

where i denotes the window index and Δt 1 denotes the smallest window.

S331: According to the ratio of each window obtained in S33, construct the regularization adjustment parameter λ; λ and r should satisfy:

[formula: λ as an increasing function of r i ]

In this formula, λ increases as the window grows, which on one hand improves the prediction accuracy of the model and on the other hand strengthens the medium- and long-term windows' ability to suppress short-term noise-fluctuation anomalies.
S34: and (5) model training.
Train the hierarchical residual-connection LSTM models under the time windows of different sizes: input the timestamps and data obtained from the data preprocessing of step S2 into the model as the training set; when training stops, the prediction model under the i-th window is obtained for the anomaly prediction of step S4.
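The regularization term enters training through the loss function. A minimal sketch, assuming an MSE loss with an L2 penalty scaled by the window-dependent λ (the exact form of the regularization term is not given in this text):

```python
import numpy as np

def loss_with_window_regularization(y_pred, y_true, params, lam):
    """MSE plus an L2 penalty scaled by the window-dependent lambda.
    The L2 form is an assumption; the document states only that the
    regularization term is positively correlated with the time window."""
    mse = np.mean((y_pred - y_true) ** 2)
    l2 = sum(np.sum(p ** 2) for p in params)
    return mse + lam * l2

params = [np.array([0.5, -0.5])]                       # toy model parameters
small = loss_with_window_regularization(np.zeros(3), np.zeros(3), params, lam=0.01)
large = loss_with_window_regularization(np.zeros(3), np.zeros(3), params, lam=0.24)
```

Longer windows (larger λ) penalize large weights more strongly, damping the model's response to short-term noise.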
S4: and (5) abnormality prediction.
And (3) putting the predicted data of different windows into a model trained in the step (S3), calculating a predicted value at the moment, storing a predicted result into a distributed database, and storing the predicted result by taking a timestamp as a main key, wherein the predicted result can be used as a data source to be sent to the step (S5) for alarm judgment.
The calculation substeps of the predicted value are:
and S41, determining the size of a prediction time window.
Define the data to be predicted as D i and the window size as Δt i ′. The first j points of the data to be predicted are input into the prediction model as one time window; taking the time window with i = 5, j = 4, it satisfies the following formula:

Δt 5 ′ = (D 1 , D 2 , D 3 , D 4 )
s42, real-time prediction data processing.
The predicted data fluctuates up and down, so it is smoothed with a one-time moving average, taking m groups of data for the calculation:

M t = M t−1 + (D t − D t−m ) / m

where each new predicted value is a correction of the previous moving-average predicted value.
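The recursive one-time moving average can be sketched as follows, where each new value corrects the previous average by the difference between the entering and leaving points:

```python
def moving_average_smooth(data, m):
    """Recursive one-time (simple) moving average over the last m points:
    M_t = M_{t-1} + (D_t - D_{t-m}) / m."""
    if len(data) < m:
        raise ValueError("need at least m points")
    M = sum(data[:m]) / m          # initial average over the first m points
    smoothed = [M]
    for t in range(m, len(data)):
        M = M + (data[t] - data[t - m]) / m   # correct the previous average
        smoothed.append(M)
    return smoothed

vals = moving_average_smooth([1, 2, 3, 4, 5, 6], m=3)
```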
S5: and (5) alarming and judging.
A threshold interval is determined by setting a maximum threshold and a minimum threshold. If the predicted value D i falls within the threshold interval, no alarm is triggered and the process returns to step S1. If the predicted value falls outside the interval and the number of exceedances within a certain period exceeds the threshold frequency f, step S6 is triggered.
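The alarm judgment of S5 can be sketched as a simple count of threshold-interval violations (`alarm_decision` and its parameters are illustrative names):

```python
def alarm_decision(predictions, lower, upper, f):
    """Count predictions outside the [lower, upper] threshold interval within
    the evaluation period; trigger an alarm when the count exceeds f."""
    violations = [p for p in predictions if p < lower or p > upper]
    return len(violations) > f, len(violations)

# Example: occupancy predictions with a 0.9 upper threshold and frequency f = 2.
triggered, n = alarm_decision([0.2, 0.95, 0.97, 0.3, 0.99],
                              lower=0.0, upper=0.9, f=2)
```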
S6: and sending a service alarm.
After the alarm is triggered, the early warning log is converted into characters according to a preset template, and operation and maintenance personnel are notified through communication groups, short messages, telephones and the like of instant messaging software, so that effective processing is performed at the first time.
Based on the description of the application scenario, compared with the current threshold-based alarm software and similar related technologies, the embodiment of the application has the following advantages:
1) And the model algorithm is improved, and the model quality is improved.
In the related art, the business operation monitoring model and its individual parameter values frequently produce predicted values that do not match actual values. The embodiments of the present application instead use the hierarchical residual-connection LSTM model to detect anomalies, strengthening nonlinear feature-fitting capability by stacking LSTM network depth while avoiding gradient vanishing or gradient explosion. Introducing the dynamic activation function reduces the number of parameters during operation, obtains the optimal activation function, and further enhances the model fitting effect.
2) Flexibly controlling the prediction aging.
The related technology can only predict abnormal operation within a certain time period, and is difficult to effectively predict monitoring logs with different time windows. The embodiment of the application applies time windows with different sizes to each monitoring index and runs in parallel with the training task.
3) The method has the capability of resisting overfitting and improves the accuracy of mid-long term window prediction.
As network depth increases, a traditional neural network tends to overfit, performing poorly in validation. The embodiments of the present application add an attention mechanism positively correlated with the time window to the network model, which dynamically adjusts the residual-network depth, reduces the influence of frequent short-term noise fluctuations on the medium-term window, and prevents model overfitting.
It should be noted that although the steps of the methods in the present application are depicted in the accompanying drawings in a particular order, this does not require or imply that the steps must be performed in that particular order, or that all illustrated steps be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
The following describes an embodiment of the apparatus of the present application, which may be used to execute the job running early warning method in the foregoing embodiment of the present application. Fig. 6 schematically shows a block diagram of a job operation early warning device provided in an embodiment of the present application. As shown in fig. 6, the job execution pre-warning apparatus 600 may include:
the parsing module 610 is configured to parse the job running log to obtain monitoring data associated with the timestamp;
a grouping module 620 configured to perform grouping processing on the monitoring data according to a plurality of time windows with different window scales, so as to obtain a plurality of groups of window data corresponding to the plurality of time windows;
the mapping module 630 is configured to perform feature mapping processing on the multiple sets of window data to obtain prediction data corresponding to a time window to be predicted;
And the early warning module 640 is configured to trigger early warning information of abnormal operation of the job according to the predicted data when the number of the predicted data exceeding a threshold interval exceeds a number threshold.
In one embodiment of the present application, based on the above embodiment, the parsing module 610 may be further configured to: carrying out structural analysis on the operation log to obtain a time stamp and structural data associated with the time stamp; performing format conversion on the timestamp to obtain a normalized timestamp; and carrying out normalization processing on the structured data to obtain monitoring data associated with the normalized time stamp.
In one embodiment of the present application, based on the above embodiment, the grouping module 620 may be further configured to: acquiring a window sequence comprising a plurality of time windows with different window scales, wherein the window scales of all the time windows in the window sequence are sequentially increased; and respectively carrying out grouping processing on the monitoring data according to each time window in the window sequence to obtain a plurality of groups of window data corresponding to the plurality of time windows.
In one embodiment of the present application, based on the above embodiment, the mapping module 630 may further include:
The network acquisition module is configured to acquire a long short-term memory network with the same window scale as the time window to be predicted;
the sequence extraction module is configured to extract a data sequence with the same window scale as the time window to be predicted from the plurality of groups of window data;
the feature mapping module is configured to perform feature mapping processing on the data sequence through the long-period memory network to obtain a hidden layer vector output by the long-period memory network;
and the nonlinear transformation module is configured to perform nonlinear transformation on the hidden layer vector through a residual network formed by a plurality of residual blocks to obtain prediction data corresponding to the time window to be predicted.
In one embodiment of the present application, based on the above embodiment, the nonlinear transformation module may be further configured to: respectively carrying out nonlinear transformation on input data through each residual block in a residual network to obtain a residual value output by the residual block; wherein, the input data of the first residual block is the hidden layer vector, and the input data of the latter residual block is the residual value output by the former residual block; and determining prediction data corresponding to the time window to be predicted according to residual values output by the residual blocks.
In one embodiment of the present application, based on the above embodiment, the nonlinear transformation module may be further configured to: and carrying out weighting processing on residual values output by each residual block according to the attention weight obtained by pre-training to obtain prediction data corresponding to the time window to be predicted.
In one embodiment of the present application, based on the above embodiment, the network acquisition module may be further configured to: acquiring sample data with the same window scale as the time window to be predicted; acquiring the ratio of the time window to be predicted to the minimum time window, and acquiring regularization adjustment parameters positively related to the ratio; training the initial network according to the regularization adjustment parameters and the sample data to obtain a long-term and short-term memory network with the same window scale as the time window to be predicted.
Specific details of the operation early warning device provided in each embodiment of the present application have been described in detail in the corresponding method embodiments, and are not described herein again.
Fig. 7 schematically shows a block diagram of a computer system for implementing an electronic device according to an embodiment of the present application.
It should be noted that, the computer system 700 of the electronic device shown in fig. 7 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
As shown in fig. 7, the computer system 700 includes a central processing unit 701 (Central Processing Unit, CPU) which can execute various appropriate actions and processes according to a program stored in a Read-Only Memory 702 (ROM) or a program loaded from a storage section 708 into a random access Memory 703 (Random Access Memory, RAM). In the random access memory 703, various programs and data necessary for the system operation are also stored. The central processing unit 701, the read only memory 702, and the random access memory 703 are connected to each other via a bus 704. An Input/Output interface 705 (i.e., an I/O interface) is also connected to bus 704.
The following components are connected to the input/output interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a Cathode Ray Tube (CRT), a liquid crystal display (Liquid Crystal Display, LCD), and the like, a speaker, and the like; a storage section 708 including a hard disk or the like; and a communication section 709 including a network interface card such as a local area network card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. The drive 710 is also connected to the input/output interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read therefrom is mounted into the storage section 708 as necessary.
In particular, according to embodiments of the present application, the processes described in the various method flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 709, and/or installed from the removable medium 711. The computer programs, when executed by the central processor 701, perform the various functions defined in the system of the present application.
It should be noted that, the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a data signal that propagates in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, in accordance with embodiments of the present application. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a usb disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a touch terminal, or a network device, etc.) to perform the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. The operation early warning method is characterized by comprising the following steps of:
analyzing the operation log to obtain monitoring data associated with the time stamp;
grouping the monitoring data according to a plurality of time windows with different window scales to obtain a plurality of groups of window data corresponding to the time windows;
performing feature mapping processing on the plurality of groups of window data to obtain prediction data corresponding to a time window to be predicted;
when the number of the predicted data exceeding the threshold interval exceeds a number threshold, triggering early warning information of abnormal operation of the job according to the predicted data.
2. The job execution pre-warning method according to claim 1, wherein the analyzing the job execution log to obtain the monitoring data associated with the time stamp includes:
carrying out structural analysis on the operation log to obtain a time stamp and structural data associated with the time stamp;
performing format conversion on the timestamp to obtain a normalized timestamp;
and carrying out normalization processing on the structured data to obtain monitoring data associated with the normalized time stamp.
3. The job execution pre-warning method according to claim 1, wherein the grouping processing of the monitoring data according to a plurality of time windows having different window scales to obtain a plurality of sets of window data corresponding to the plurality of time windows, comprises:
acquiring a window sequence comprising a plurality of time windows with different window scales, wherein the window scales of all the time windows in the window sequence are sequentially increased;
and respectively carrying out grouping processing on the monitoring data according to each time window in the window sequence to obtain a plurality of groups of window data corresponding to the plurality of time windows.
4. The job execution pre-warning method according to claim 1, wherein performing feature mapping processing on the plurality of sets of window data to obtain prediction data corresponding to a time window to be predicted, comprises:
acquiring a long short-term memory network with the same window scale as the time window to be predicted;
extracting a data sequence with the same window scale as the time window to be predicted from the plurality of sets of window data;
performing feature mapping processing on the data sequence through the long-period memory network to obtain a hidden layer vector output by the long-period memory network;
And carrying out nonlinear transformation on the hidden layer vector through a residual network formed by a plurality of residual blocks to obtain prediction data corresponding to the time window to be predicted.
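The residual-network stage of claim 4 can be sketched with NumPy. The LSTM itself is omitted here (its hidden vector is assumed already computed), and the block form `x + tanh(Wx + b)`, the 4-dimensional hidden vector, and the toy weights are illustrative assumptions; the patent does not specify dimensions or block internals. Each block's output feeds the next, as claim 5 details.

```python
import numpy as np

def residual_block(x, w, b):
    # One residual block: a nonlinear transform plus a skip connection.
    return x + np.tanh(w @ x + b)

def residual_network(hidden, params):
    # hidden: the hidden-layer vector output by the LSTM.
    # The first block takes the hidden vector; each subsequent block
    # takes the previous block's residual value as input.
    outputs = []
    x = hidden
    for w, b in params:
        x = residual_block(x, w, b)
        outputs.append(x)
    return outputs  # the residual value output by each block

h = np.ones(4)  # toy hidden vector
params = [(0.1 * np.eye(4), np.zeros(4)) for _ in range(3)]
residual_values = residual_network(h, params)
```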
5. The job running early warning method according to claim 4, wherein performing nonlinear transformation on the hidden layer vector through the residual network composed of the plurality of residual blocks to obtain the prediction data corresponding to the time window to be predicted comprises:
performing nonlinear transformation on input data through each residual block in the residual network to obtain a residual value output by that residual block, wherein the input data of the first residual block is the hidden layer vector, and the input data of each subsequent residual block is the residual value output by the preceding residual block;
and determining the prediction data corresponding to the time window to be predicted according to the residual values output by the residual blocks.
6. The job running early warning method according to claim 5, wherein determining the prediction data corresponding to the time window to be predicted according to the residual values output by the residual blocks comprises:
weighting the residual values output by the residual blocks according to attention weights obtained by pre-training, to obtain the prediction data corresponding to the time window to be predicted.
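The attention weighting of claim 6 can be sketched as a softmax-weighted sum over the per-block residual values. The softmax parameterization is an assumption (the patent only says the weights are obtained by pre-training), and the logits here are toy values.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))  # shift for numerical stability
    return e / e.sum()

def attention_combine(residual_values, attention_logits):
    # Weight each block's residual value by its pre-trained attention
    # weight and sum, yielding the prediction for the window to be predicted.
    w = softmax(np.asarray(attention_logits, dtype=float))
    stacked = np.stack(residual_values)        # shape: (num_blocks, dim)
    return (w[:, None] * stacked).sum(axis=0)  # shape: (dim,)

# Equal logits give each of the two blocks weight 0.5.
pred = attention_combine([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [0.0, 0.0])
```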
7. The job running early warning method according to claim 4, wherein acquiring the long short-term memory network with the same window scale as the time window to be predicted comprises:
acquiring sample data with the same window scale as the time window to be predicted;
acquiring the ratio of the window scale of the time window to be predicted to that of the minimum time window, and acquiring a regularization adjustment parameter positively correlated with the ratio;
and training an initial network according to the regularization adjustment parameter and the sample data to obtain the long short-term memory network with the same window scale as the time window to be predicted.
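Claim 7 only requires that the regularization adjustment parameter be positively correlated with the window-scale ratio. One simple choice is a linear mapping; the linear form and the base value `1e-4` below are illustrative assumptions, not values from the patent.

```python
def regularization_strength(window_scale, min_window_scale, base=1e-4):
    # Larger windows yield fewer training samples per log, so the
    # regularization penalty grows with the ratio of the window scale
    # to the minimum window scale in the window sequence.
    return base * (window_scale / min_window_scale)
```

A 900 s window would then be trained with a 15x stronger penalty than a 60 s window, discouraging overfitting on the sparser long-window data.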
8. A job running early warning device, comprising:
a parsing module configured to parse a job running log to obtain monitoring data associated with a timestamp;
a grouping module configured to group the monitoring data according to a plurality of time windows with different window scales to obtain a plurality of groups of window data corresponding to the plurality of time windows;
a mapping module configured to perform feature mapping processing on the plurality of groups of window data to obtain prediction data corresponding to a time window to be predicted;
and an early warning module configured to trigger, when the number of prediction data items falling outside a threshold interval exceeds a count threshold, early warning information of abnormal job running according to the prediction data.
9. A computer-readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the job running early warning method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to execute the executable instructions to implement the job running early warning method according to any one of claims 1 to 7.
CN202211678021.XA 2022-12-26 2022-12-26 Operation early warning method, device, medium and electronic equipment Pending CN116028315A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211678021.XA CN116028315A (en) 2022-12-26 2022-12-26 Operation early warning method, device, medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN116028315A true CN116028315A (en) 2023-04-28

Family

ID=86069932

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211678021.XA Pending CN116028315A (en) 2022-12-26 2022-12-26 Operation early warning method, device, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116028315A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117010690A (en) * 2023-08-04 2023-11-07 Luoyang Refining & Chemical Hongda Industry Co., Ltd. Production safety early warning method based on artificial intelligence
CN116910682A (en) * 2023-09-14 2023-10-20 China Mobile (Suzhou) Software Technology Co., Ltd. Event detection method and device, electronic equipment and storage medium
CN116910682B (en) * 2023-09-14 2023-12-05 China Mobile (Suzhou) Software Technology Co., Ltd. Event detection method and device, electronic equipment and storage medium
CN117376030A (en) * 2023-12-06 2024-01-09 Shenzhen Yishi Huolala Technology Co., Ltd. Flow anomaly detection method, device, computer equipment and readable storage medium
CN117376030B (en) * 2023-12-06 2024-03-26 Shenzhen Yishi Huolala Technology Co., Ltd. Flow anomaly detection method, device, computer equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN116028315A (en) Operation early warning method, device, medium and electronic equipment
CN112118143B (en) Traffic prediction model training method, traffic prediction method, device, equipment and medium
CN108388969A (en) Insider threat person risk prediction method based on temporal characteristics of personal behavior
CN115168443A (en) Anomaly detection method and system based on GCN-LSTM and attention mechanism
CN117041017A (en) Intelligent operation and maintenance management method and system for data center
CN112488142A (en) Radar fault prediction method and device and storage medium
CN116402352A (en) Enterprise risk prediction method and device, electronic equipment and medium
CN116885699A (en) Power load prediction method based on dual-attention mechanism
CN115617614A (en) Log sequence anomaly detection method based on time interval perception self-attention mechanism
CN115545169A (en) GRU-AE network-based multi-view service flow anomaly detection method, system and equipment
CN114118570A (en) Service data prediction method and device, electronic equipment and storage medium
CN115883424B (en) Method and system for predicting flow data between high-speed backbone networks
Dang et al. seq2graph: discovering dynamic dependencies from multivariate time series with multi-level attention
CN116596662A (en) Risk early warning method and device based on enterprise public opinion information, electronic equipment and medium
CN115545339A (en) Transformer substation safety operation situation assessment method and device
CN115794548A (en) Method and device for detecting log abnormity
US20230080654A1 (en) Causality detection for outlier events in telemetry metric data
Kotenko et al. Formation of Indicators for Assessing Technical Reliability of Information Security Systems
CN113934862A (en) Community security risk prediction method, device, electronic equipment and medium
CN109685308A (en) A kind of complication system critical path appraisal procedure and system
US20230401851A1 (en) Temporal event detection with evidential neural networks
CN117667495B (en) Association rule and deep learning integrated application system fault prediction method
CN118520026A (en) Industrial process multivariable time sequence prediction method driven by data and domain knowledge
CN118172206B (en) Method and system for simulating and predicting judgment result of case of personnel dispute
CN114091732A (en) Time series processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination