CN113743607B - Training method of anomaly detection model, anomaly detection method and device - Google Patents

Training method of anomaly detection model, anomaly detection method and device

Info

Publication number
CN113743607B
CN113743607B (application CN202111083531.8A)
Authority
CN
China
Prior art keywords
data
anomaly detection
detection model
image data
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111083531.8A
Other languages
Chinese (zh)
Other versions
CN113743607A (en)
Inventor
张静
李泽州
张宪波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Information Technology Co Ltd
Original Assignee
Jingdong Technology Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Information Technology Co Ltd filed Critical Jingdong Technology Information Technology Co Ltd
Priority to CN202111083531.8A
Publication of CN113743607A
Application granted
Publication of CN113743607B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a training method of an anomaly detection model, the method comprising: acquiring a multi-frame time sequence image, wherein the multi-frame time sequence image comprises a target image and a historical image, the target image comprises a data point to be detected, the historical image comprises a historical data point before the data point to be detected, the data point to be detected has tag information, and the tag information represents an abnormal value of the data point to be detected; inputting a multi-frame time sequence image into an anomaly detection model to be trained so that the anomaly detection model to be trained detects anomalies of data points to be detected in a target image according to a historical image and outputs a prediction result, wherein the prediction result represents predicted anomalies of the data points to be detected; and iteratively adjusting network parameters of the anomaly detection model to be trained according to the prediction result and the label information to generate a trained anomaly detection model. The present disclosure also provides an anomaly detection method, apparatus, computer system, and computer-readable storage medium.

Description

Training method of anomaly detection model, anomaly detection method and device
Technical Field
The present disclosure relates to the field of artificial intelligence, and more particularly, to a training method of an anomaly detection model, an anomaly detection method, an anomaly detection device, a computer system, and a computer-readable storage medium.
Background
Abnormal segments in a time-series monitoring index include not only amplitude anomalies but also several other patterns, such as contextual anomalies and interval anomalies. Therefore, when analyzing a time-series index, its temporal attributes and spatial attributes need to be analyzed simultaneously to obtain a comprehensive detection result.
In the process of realizing the concept of the present disclosure, the inventors found that the methods in the related art for detecting anomalies in time-series monitoring indexes by simultaneously analyzing temporal and spatial attributes suffer from high computational load and complexity and from low accuracy of the detection results.
Disclosure of Invention
In view of this, the present disclosure provides a training method of an anomaly detection model, an anomaly detection method, an anomaly detection device, a computer system, and a computer-readable storage medium for improving detection accuracy.
One aspect of the present disclosure provides a training method of an anomaly detection model, including:
obtaining a plurality of frames of time sequence images, wherein the plurality of frames of time sequence images are generated by screenshot of a time sequence data display interface according to a preset frequency, the plurality of frames of time sequence images comprise a target image and a historical image, the target image comprises a data point to be detected, the historical image comprises a historical data point in front of the data point to be detected, the data point to be detected is provided with label information, and the label information represents an abnormal value of the data point to be detected;
Inputting a plurality of frames of time sequence images into an anomaly detection model to be trained so that the anomaly detection model to be trained detects anomalies of data points to be detected in the target image according to the historical images and outputs a prediction result, wherein the prediction result represents predicted anomalies of the data points to be detected; and
iteratively adjusting network parameters of the anomaly detection model to be trained according to the prediction result and the label information to generate a trained anomaly detection model.
According to an embodiment of the present disclosure, the anomaly detection model to be trained includes a first feature extraction network, an attention network, and a second feature extraction network;
inputting the plurality of frames of the time sequence images into an anomaly detection model to be trained, so that the anomaly detection model to be trained detects anomalies of data points to be detected in the target image according to the historical images, and outputting a prediction result comprises:
inputting a plurality of frames of the time sequence images into the first feature extraction network, and outputting a plurality of frames of first image data, wherein the plurality of frames of first image data comprise first target image data corresponding to the target image and first historical image data corresponding to the historical image;
Inputting a plurality of frames of the first image data into the attention network, so that the attention network configures weight parameters for the first historical image data according to the correlation between the first historical image data and the first target image data, and outputs the first target image data and the second historical image data;
and inputting the first target image data and the second history image data into the second feature extraction network, and outputting the prediction result.
According to an embodiment of the present disclosure, the inputting the plurality of frames of the first image data into the attention network so that the attention network configures weight parameters for the first history image data according to correlation of the first history image data and the first target image data, and outputting the first target image data and the second history image data includes:
performing similarity calculation on the first historical image data and the first target image data to generate a similarity result;
generating a first weight parameter according to the similarity result;
and generating the second historical image data according to the first weight parameter and the first historical image data.
According to an embodiment of the present disclosure, iteratively adjusting the network parameters of the anomaly detection model to be trained according to the prediction result and the tag information to generate the trained anomaly detection model includes:
inputting the predicted result and the tag information into a loss function, and outputting a loss result;
and iteratively adjusting network parameters of the anomaly detection model to be trained according to the loss result to generate the trained anomaly detection model.
According to an embodiment of the present disclosure, the above-described time-series data display interface is generated by:
acquiring an initial time sequence data display interface, wherein the initial time sequence data display interface comprises a target time point and a historical time point before the target time point;
and dynamically normalizing the data displayed on the initial time sequence data display interface according to the data value of the target time point and the maximum value and the minimum value of the data point to be detected in a preset time period to generate the time sequence data display interface.
According to an embodiment of the present disclosure, the preset time period is associated with the preset frequency.
According to an embodiment of the present disclosure, a multi-frame time-series image generated by capturing a screen of a time-series data display interface according to a preset frequency includes each data point of the time-series data display interface.
Another aspect of the present disclosure provides an abnormality detection method including:
acquiring an image to be detected, wherein the image to be detected comprises time sequence data;
inputting the image to be detected into an anomaly detection model, and outputting a detection result, wherein the detection result represents an anomaly value of a time point to be detected in the image to be detected, and the anomaly detection model is trained by the training method of the anomaly detection model.
Another aspect of the present disclosure provides a training apparatus of an anomaly detection model, including:
the first acquisition module is used for acquiring a plurality of frames of time sequence images, wherein the plurality of frames of time sequence images are generated by screenshot of a time sequence data display interface according to preset frequency, the plurality of frames of time sequence images comprise a target image and a historical image, the target image comprises a data point to be detected, the historical image comprises a historical data point in front of the data point to be detected, the data point to be detected is provided with label information, and the label information represents an abnormal value of the data point to be detected;
The prediction module is used for inputting a plurality of frames of time sequence images into an anomaly detection model to be trained so that the anomaly detection model to be trained detects anomalies of data points to be detected in the target image according to the historical images and outputs a prediction result, wherein the prediction result represents predicted anomalies of the data points to be detected; and
the training module is used for iteratively adjusting the network parameters of the anomaly detection model to be trained according to the prediction result and the label information to generate a trained anomaly detection model.
Another aspect of the present disclosure provides an abnormality detection apparatus including:
the second acquisition module is used for acquiring an image to be detected, wherein the image to be detected comprises time sequence data;
the detection module is used for inputting the image to be detected into an abnormal detection model and outputting a detection result, wherein the detection result represents an abnormal value of a time point to be detected in the image to be detected, and the abnormal detection model is trained by the training method of the abnormal detection model.
Another aspect of the present disclosure provides a computer system comprising: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the methods described in embodiments of the present disclosure.
Another aspect of the present disclosure provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to implement the method described in the embodiments of the present disclosure.
Another aspect of the present disclosure provides a computer program product comprising computer executable instructions which, when executed, are adapted to implement a method as described above.
In the embodiments of the present disclosure, a time-series data display interface is captured as screenshots at a preset frequency to generate multiple frames of time-series images. Because the multi-frame time-series images include the data points to be detected and the historical data points preceding them, they carry both the spatial attributes and the temporal attributes of the time-series data. Processing the multi-frame time-series images with the anomaly detection model therefore yields detection results that are generated from both the spatial features and the temporal features of the time-series data and can be used to characterize outliers of the time-series data. Thus, at least the technical problem in the related art that the temporal attribute and the spatial attribute of a time-series index cannot be analyzed simultaneously is solved, and the accuracy of anomaly detection is improved. Meanwhile, since the preprocessing of the time-series images is omitted, the technical problems of high computational load and complexity of anomaly detection methods in the related art can be at least partially overcome, and the efficiency of time-series data anomaly detection is improved.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments thereof with reference to the accompanying drawings in which:
FIG. 1 schematically illustrates an exemplary system architecture to which an anomaly detection model training method, an anomaly detection model training device, an anomaly detection method, an anomaly detection device may be applied, according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a training method of an anomaly detection model in accordance with an embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow chart of generating a time series data display interface according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flowchart of inputting a plurality of frames of the time-series images into an anomaly detection model to be trained so that the anomaly detection model to be trained performs anomaly detection on data points to be detected in the target image according to the historical images, and outputs a prediction result, according to an embodiment of the present disclosure;
FIG. 5 schematically illustrates a model structure schematic of a first feature extraction network in accordance with an embodiment of the disclosure;
FIG. 6 schematically illustrates a schematic diagram of inputting a plurality of frames of the time-series images into an anomaly detection model to be trained so that the anomaly detection model to be trained performs anomaly detection on data points to be detected in the target image according to the historical images, and outputs a prediction result according to an embodiment of the present disclosure;
FIG. 7 schematically illustrates a flowchart for iteratively adjusting network parameters of the anomaly detection model to be trained based on the prediction results and the tag information, generating a trained anomaly detection model, according to an embodiment of the present disclosure;
FIG. 8 schematically illustrates a flow chart of an anomaly detection method according to an embodiment of the present disclosure;
FIG. 9 schematically illustrates a block diagram of a training apparatus of an anomaly detection model, according to an embodiment of the present disclosure;
fig. 10 schematically illustrates a block diagram of an anomaly detection device according to an embodiment of the present disclosure; and
fig. 11 schematically illustrates a block diagram of a computer system suitable for implementing the above-described methods, according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where expressions like "at least one of A, B and C" are used, they should generally be interpreted in accordance with the meaning commonly understood by those skilled in the art (e.g., "a system having at least one of A, B and C" shall include, but not be limited to, a system having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
In the process of realizing the concept of the present disclosure, the inventors found that the methods in the related art for detecting anomalies in time-series monitoring indexes by simultaneously analyzing temporal and spatial attributes suffer from high computational load and complexity and from low accuracy of the detection results.
To at least partially solve the above technical problems, the present disclosure provides a training method of an anomaly detection model, an anomaly detection method, an anomaly detection device, a computer system, and a computer-readable storage medium. The training method of the anomaly detection model comprises the following steps: acquiring a multi-frame time sequence image, wherein the multi-frame time sequence image is generated by screenshot of a time sequence data display interface according to a preset frequency, the multi-frame time sequence image comprises a target image and a historical image, the target image comprises a data point to be detected, the historical image comprises a historical data point before the data point to be detected, the data point to be detected is provided with label information, and the label information represents abnormal values of the data point to be detected; inputting a multi-frame time sequence image into an anomaly detection model to be trained so that the anomaly detection model to be trained detects anomalies of data points to be detected in a target image according to a historical image and outputs a prediction result, wherein the prediction result represents predicted anomalies of the data points to be detected; and iteratively adjusting network parameters of the anomaly detection model to be trained according to the prediction result and the label information to generate a trained anomaly detection model. The abnormality detection method includes: acquiring an image to be detected, wherein the image to be detected comprises time sequence data; inputting an image to be detected into an anomaly detection model, and outputting a detection result, wherein the detection result represents an anomaly value of a time point to be detected in the image to be detected, and the anomaly detection model is obtained by training the training method of the anomaly detection model provided by the embodiment of the disclosure.
In the above methods, the time-series data display interface is captured as screenshots at a preset frequency to generate multiple frames of time-series images. Because the multi-frame time-series images include the data points to be detected and the historical data points preceding them, they carry both the spatial attributes and the temporal attributes of the time-series data. Processing the multi-frame time-series images with the anomaly detection model therefore yields detection results that are generated from both the spatial features and the temporal features of the time-series data and can be used to characterize outliers of the time-series data. Thus, at least the technical problem in the related art that the temporal attribute and the spatial attribute of a time-series index cannot be analyzed simultaneously is solved, and the accuracy of anomaly detection is improved. Meanwhile, since the preprocessing of the time-series images is omitted, the technical problems of high computational load and complexity of anomaly detection methods in the related art can be at least partially overcome, and the efficiency of time-series data anomaly detection is improved.
Fig. 1 schematically illustrates an exemplary system architecture 100 in which an anomaly detection model training method, an anomaly detection model training device, an anomaly detection method, an anomaly detection device may be applied, according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired and/or wireless communication links, and the like.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients and/or social platform software, to name a few.
The terminal devices 101, 102, 103 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (by way of example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and process the received data such as the user request, and feed back the processing result (e.g., the web page, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that, the training method and the anomaly detection method of the anomaly detection model provided in the embodiments of the present disclosure may be generally executed by the server 105. Accordingly, the training device and the abnormality detection device of the abnormality detection model provided in the embodiments of the present disclosure may be generally provided in the server 105. The training method of the anomaly detection model, the anomaly detection method provided by the embodiments of the present disclosure may also be performed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the training apparatus and the anomaly detection apparatus of the anomaly detection model provided in the embodiments of the present disclosure may also be provided in a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Alternatively, the training method and the anomaly detection method of the anomaly detection model provided by the embodiments of the present disclosure may be performed by the terminal device 101, 102, or 103, or may be performed by other terminal devices different from the terminal device 101, 102, or 103. Accordingly, the training apparatus and the abnormality detection apparatus of the abnormality detection model provided in the embodiments of the present disclosure may also be provided in the terminal device 101, 102, or 103, or in another terminal device different from the terminal device 101, 102, or 103.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
FIG. 2 schematically illustrates a flowchart of a training method of an anomaly detection model, according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S210 to S230.
In operation S210, a multi-frame time-series image is acquired, wherein the multi-frame time-series image is generated by capturing a screen of a time-series data display interface according to a preset frequency, the multi-frame time-series image includes a target image and a history image, the target image includes a data point to be measured, the history image includes a history data point preceding the data point to be measured, the data point to be measured has tag information, and the tag information characterizes an outlier of the data point to be measured.
According to embodiments of the present disclosure, capturing high-quality pictures with rich information from the time-series data is of great significance for the subsequent training of the anomaly detection model. Thus, the window length and the screen-capture frequency of the time-series images are two key parameters.
A longer window length means that rich information can be obtained in a single captured time-series image. However, if the window length is too long, more temporal information is compressed into a limited length, so that some key information is not displayed sufficiently, finely, and specifically, which affects subsequent feature extraction.
A shorter window length is equivalent to displaying less data information within the same length, so that the data can be displayed clearly in the time-series image, which facilitates subsequent feature extraction. However, if the screen-capture window is too short, each frame of the time-series image contains only a small part of the information, so that the information is reflected incompletely and the relationship between data in adjacent time periods cannot be effectively established.
In addition, if the window length is too short, more screenshots are needed to sample all the data of the same time series, which increases the number of time-series images and therefore the amount of training.
Thus, an appropriate window length is determined based on the characteristics and trend of the time-series data, so that a single screenshot contains relatively complete historical data, reflects the data changes within a time range of a certain length, and at the same time displays the change details of key parts more finely, which facilitates subsequent feature extraction.
Given a determined screen-capture window length, the screen-capture frequency should ensure that each data point in the time-series data is captured at least once. That is, each data point is used at least once for feature extraction and analysis, and no relevant information is missed. In order to use each data point of the time series as fully as possible, the screen-capture frequency may be appropriately increased, that is, each data point is used more than once. In addition, when each data point is used multiple times, the processing power of the hardware should be taken into account so that the number of time-series images to be processed matches the existing hardware.
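As a rough numeric illustration (the 30-minute window and 10-minute interval are only the example values used later in this description, not fixed choices of the present disclosure), the number of frames in which a typical data point appears is simply the window length divided by the capture interval:

```python
# Illustrative only: how often a typical data point is captured for a given
# window length and capture interval (the values are assumptions, not requirements).
def captures_per_point(window_minutes: int, interval_minutes: int) -> int:
    return window_minutes // interval_minutes

print(captures_per_point(30, 10))  # -> 3: each point away from the ends appears in 3 frames
```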
In operation S220, the multi-frame time sequence image is input into the anomaly detection model to be trained, so that the anomaly detection model to be trained performs anomaly detection on the data points to be detected in the target image according to the history image, and a prediction result is output, wherein the prediction result represents a predicted anomaly value of the data points to be detected.
According to the embodiment of the disclosure, the anomaly detection model to be trained can be a neural network model constructed based on deep learning, so that the anomaly detection model to be trained can perform anomaly detection on data points to be detected in a target image according to time information and space information contained in a historical image.
According to the embodiment of the disclosure, the time sequence image is generated by directly taking the screenshot of the time sequence data display interface without carrying out data processing on the data in the time sequence data display interface, so that the time sequence image comprises time sequence information within a certain fixed time period. The input data in the picture format not only can reflect the historical information of the time series data, but also can intuitively reflect the distribution and expression of the time series data from the view of human eyes. In addition, by directly using the screen shot image of the input data for anomaly detection, the steps of digital conversion, preprocessing, and the like for analyzing the digital format data can be omitted.
According to the embodiment of the disclosure, the screen capturing image of the time series data is directly input as the anomaly detection model to be trained, so that the time characteristic and the space characteristic of the time series data can be well reserved, the feature extraction of the anomaly detection model on the time series data is facilitated, and effective information is provided for subsequent time series analysis.
According to embodiments of the present disclosure, the neural network model may include, for example, any one or a combination of a convolutional neural network (Convolutional Neural Networks, CNN), a recurrent neural network (Recurrent Neural Network, RNN), a Long Short-Term Memory (LSTM), but is not limited thereto, and may be other network models, such as a fully connected neural network (DNN), and a person skilled in the art may design a network structure of the anomaly detection model according to actual requirements.
In operation S230, the network parameters of the anomaly detection model to be trained are iteratively adjusted according to the prediction result and the tag information, and the trained anomaly detection model is generated.
In the embodiments of the present disclosure, a time-series data display interface is captured as screenshots at a preset frequency to generate multiple frames of time-series images. Because the multi-frame time-series images include the data points to be detected and the historical data points preceding them, they carry both the spatial attributes and the temporal attributes of the time-series data. Processing the multi-frame time-series images with the anomaly detection model therefore yields detection results that are generated from both the spatial features and the temporal features of the time-series data and can be used to characterize outliers of the time-series data. Thus, at least the technical problem in the related art that the temporal attribute and the spatial attribute of a time-series index cannot be analyzed simultaneously is solved, and the accuracy of anomaly detection is improved. Meanwhile, since the preprocessing of the time-series images is omitted, the technical problems of high computational load and complexity of anomaly detection methods in the related art can be at least partially overcome, and the efficiency of time-series data anomaly detection is improved.
According to an embodiment of the disclosure, a multi-frame time-series image generated by screenshot of a time-series data display interface according to a preset frequency includes each data point of the time-series data display interface.
According to the embodiment of the disclosure, the multi-frame time sequence image comprises each data point of the time sequence data display interface, so that the abnormality detection model can detect each data point of the time sequence data display interface at least once, and the situation that time sequence characteristics in the time sequence data display interface cannot be extracted due to missing data points can be avoided.
For example, in the multi-frame time-series images, the time period covered by each frame may be set to 30 minutes and the preset frequency may be set to once every 10 minutes, i.e., a one-frame time-series image covering a 30-minute period is captured every 10 minutes. Thus, except for the data points in the starting and ending frames, each data point is acquired 3 times.
It should be understood that the above example only illustrates one way of setting the preset frequency; those skilled in the art can set the preset frequency and the time period length of each frame of the multi-frame time-series images according to actual requirements.
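For illustration, the following minimal sketch (an assumption about how the screenshots could be prepared, not the implementation of the present disclosure) slices a one-dimensional series into overlapping windows at a fixed stride; each window would then be rendered by the display interface and captured as one frame:

```python
# A minimal sketch of turning a 1-D time series into overlapping "screenshot"
# windows: a 30-sample window is taken every 10 samples.
import numpy as np

def slice_windows(series: np.ndarray, window: int, stride: int) -> list[np.ndarray]:
    """Return overlapping windows of `series`; `window` and `stride` are in samples."""
    return [series[start:start + window]
            for start in range(0, len(series) - window + 1, stride)]

series = np.random.rand(180)          # e.g. one sample per minute, 3 hours
frames = slice_windows(series, window=30, stride=10)
print(len(frames), frames[0].shape)   # each point away from the ends falls in 3 frames
```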
The method shown in fig. 2 is further described below with reference to fig. 3-7 in conjunction with the exemplary embodiment.
Fig. 3 schematically illustrates a flowchart of generating a time-series data display interface according to an embodiment of the present disclosure.
As shown in fig. 3, the time-series data display interface is generated through operations S310 to S320.
In operation S310, an initial time-series data display interface is acquired, wherein the initial time-series data display interface includes a target point in time and a historical point in time prior to the target point in time.
In operation S320, the data displayed on the initial time-series data display interface is dynamically normalized according to the data value of the target time point and the maximum value and the minimum value of the data point to be measured in the preset time period, so as to generate the time-series data display interface.
According to an embodiment of the present disclosure, the preset time period is associated with a preset frequency.
According to an embodiment of the present disclosure, the preset frequency of screenshot of the time series data display interface may be determined according to a preset time period. For example, the window length of each time-series image generated by capturing the time-series data display interface at the preset frequency may be equal to the preset time period.
According to the embodiments of the present disclosure, before the screenshots of the time-series data display interface are captured, the data displayed on the initial time-series data display interface may be processed so that the current coordinate axes adapt to the data in the current window; in this way, the main features can be expressed in detail in the captured picture while some unimportant details are ignored.
In particular, the data changes dynamically, and because of fluctuations the data peaks differ between stages. If fixed coordinate axes are used to represent a time series from beginning to end, a new data value may exceed the coordinate range and fail to be displayed in full, or the data magnitude in a certain period may be far smaller than the coordinate range so that details cannot be displayed. Therefore, dynamic coordinate scales are required to accommodate the dynamically changing data, so that the data of different stages can be displayed in detail and appropriately.
The normalization operation of the data can not only avoid the scale problem, but also reduce the influence of the data dimension on the result. Therefore, the initial time series data display interface can be dynamically normalized to generate the time series data display interface, and then the screenshot is carried out on the time series data display interface.
In embodiments of the present disclosure, the data displayed by the initial time-series data display interface may be dynamically normalized. That is, the window data of the past n historical images {x(t-n), ..., x(t-2), x(t-1)} (where x(i) denotes the window data at time i), together with the data x(t) at the current time, may be used to normalize the data. The dynamic normalization process of the embodiments of the present disclosure may be represented, for example, by the following min-max formula (1):

x'(t) = (x(t) - min) / (max - min), where min and max are the minimum and maximum values over {x(t-n), ..., x(t)}    (1)
According to the embodiments of the present disclosure, if the value of n is too large, the resource requirements for data storage and computation increase; if the value of n is too small, the normalization effect may be unsatisfactory.
According to embodiments of the present disclosure, n may be chosen according to the window length and the screen-capture frequency. For example, if the time-series data is collected in seconds, the screen-capture window length is 30 minutes, and the screen-capture frequency is once every 10 minutes, then n may be set to 10, i.e., the 10 most recent historical windows are used in the normalization calculation.
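A minimal sketch of the dynamic normalization follows, assuming the min-max form suggested by operation S320 (the exact formula (1) of the disclosure may differ):

```python
# Hedged sketch: the current window is scaled using the minimum and maximum
# observed over the past n window slices plus the current window itself.
import numpy as np

def dynamic_normalize(history: list[np.ndarray], current: np.ndarray) -> np.ndarray:
    """Normalize `current` using min/max over the last n history windows and itself."""
    pooled = np.concatenate(history + [current])
    lo, hi = pooled.min(), pooled.max()
    if hi == lo:                       # constant signal: avoid division by zero
        return np.zeros_like(current)
    return (current - lo) / (hi - lo)

history = [np.random.rand(30) for _ in range(10)]   # n = 10 past windows
normalized = dynamic_normalize(history, np.random.rand(30))
```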
Fig. 4 schematically illustrates a flowchart of inputting a plurality of frames of time-series images into an anomaly detection model to be trained so that the anomaly detection model to be trained performs anomaly detection on data points to be detected in a target image according to a historical image, and outputs a prediction result according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, an anomaly detection model to be trained includes a first feature extraction network, an attention network, and a second feature extraction network.
As shown in fig. 4, the method includes operations S410 to S430.
In operation S410, a plurality of frames of time-series images are input into a first feature extraction network, and a plurality of frames of first image data are output, wherein the plurality of frames of first image data include first target image data corresponding to target images and first history image data corresponding to history images.
In operation S420, a plurality of frames of first image data are input to the attention network so that the attention network configures weight parameters for the first history image data according to the correlation of the first history image data and the first target image data, and outputs the first target image data and the second history image data.
In operation S430, the first target image data and the second history image data are input to the second feature extraction network, and a prediction result is output.
Fig. 5 schematically illustrates a model structure diagram of a first feature extraction network according to an embodiment of the disclosure.
As shown in fig. 5, the first feature extraction network 510 includes a first feature extraction layer 502, a second feature extraction layer 503, a third feature extraction layer 504, a fourth feature extraction layer 505, a tiling layer (Flatten) 506, and a fully connected layer (Full Connection) 507, which are sequentially cascaded. The first feature extraction layer 502 may include a first convolution layer (Convolution) and a first batch normalization layer (Batch Normalization, BN); the second feature extraction layer 503 may include a first pooling layer (Pooling) and a first dropout (DP) layer; the third feature extraction layer 504 may include a second convolution layer (Convolution) and a second batch normalization layer (Batch Normalization, BN); and the fourth feature extraction layer 505 may include a second pooling layer (Pooling) and a second dropout (DP) layer.
The time-series image 501 may represent any frame of the multi-frame time-series images. As shown in fig. 5, after the time-series image 501 is input into the first feature extraction network 510, the first feature extraction network 510 performs feature extraction on the time-series image 501 by using the first feature extraction layer 502, the second feature extraction layer 503, the third feature extraction layer 504, the fourth feature extraction layer 505, the tiling layer (Flatten) 506, and the fully connected layer (Full Connection) 507, and finally outputs the first image data.
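For illustration, a PyTorch sketch of the structure in Fig. 5 follows; the channel counts, kernel sizes, activations, and output dimension are assumptions, since they are not specified in the description above:

```python
# Illustrative first feature extraction network:
# conv+BN -> pool+dropout -> conv+BN -> pool+dropout -> flatten -> fully connected.
import torch
from torch import nn

class FirstFeatureExtractor(nn.Module):
    def __init__(self, feature_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2), nn.Dropout(0.2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2), nn.Dropout(0.2),
            nn.Flatten(),
            nn.LazyLinear(feature_dim),            # fully connected layer 507
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)                         # one feature vector per frame

features = FirstFeatureExtractor()(torch.randn(8, 1, 64, 64))  # 8 frames -> (8, 128)
```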
Fig. 6 schematically illustrates a schematic diagram of inputting a plurality of frames of time-series images into an anomaly detection model to be trained so that the anomaly detection model to be trained performs anomaly detection on data points to be detected in a target image according to a historical image and outputs a prediction result according to an embodiment of the present disclosure.
In fig. 6, a time series image 602, a time series image 603, and a time series image 604 may be time series images generated by taking a screenshot of the time series data display interface 601, wherein the time series image 604 may be a target image, and the time series image 602 and the time series image 603 may be history images.
It should be noted that the number of time-series images shown in fig. 6 may be flexibly set by those skilled in the art according to actual needs, and is not limited to the number shown in fig. 6.
As shown in fig. 6, the time-series image 602, the time-series image 603, and the time-series image 604 may be input to a first feature extraction network, the first feature extraction network outputs a plurality of frames of first image data, then the plurality of frames of first image data may be input to an attention network 605, the attention network 605 may configure weight parameters for the first history image data according to the correlation of the first history image data and the first target image data, and output the first target image data and the second history image data. Finally, the first target image data and the second historical image data may be input into a second feature extraction network, outputting a prediction result.
According to the embodiments of the present disclosure, by introducing the attention network between the first feature extraction network and the second feature extraction network, weights are distributed over the multiple items of first historical image data within a certain period of time output by the first feature extraction network, so that first historical image data that contributes more to the input at the next moment obtains a larger weight, while first historical image data that is weakly correlated with the input at the next moment and contributes less to anomaly detection is assigned a smaller weight. Through the attention mechanism, effective information can be extracted and less important data can be ignored, thereby improving efficiency.
According to an embodiment of the present disclosure, inputting a plurality of frames of first image data into an attention network so that the attention network configures weight parameters for the first historical image data according to correlation of the first historical image data with the first target image data, and outputting the first target image data and the second historical image data includes the operations of:
performing similarity calculation on the first historical image data and the first target image data to generate a similarity result;
generating a first weight parameter according to the similarity result;
and generating second historical image data according to the first weight parameter and the first historical image data.
For the n items of first image data F = (f1, f2, ..., fn) output by the first feature extraction network, the correlation Ci between each item and the data features of the first target image data may be calculated. The correlation may be calculated with a distance measure such as the Euclidean distance or the Manhattan distance, e.g., C1 = c(f1, fn) and C2 = c(f2, fn), where c denotes the correlation calculation function, such as the Euclidean or Manhattan distance mentioned above. According to the correlation results, i.e., the judged importance with respect to the data at the current time, corresponding weights are assigned to the n features. The weight allocation method may be a linear allocation method, a softmax method, or the like. For example, with a linear allocation each weight may be taken proportional to its correlation, e.g., Wi = Ci / (C1 + C2 + ... + Cn). The element-wise product of the original feature data F and the weight parameters W is the input of the second feature extraction network under the action of the attention mechanism.
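A hedged sketch of this weighting step follows; the mapping from distance to correlation and the linear weight allocation are illustrative assumptions:

```python
# Sketch: historical frame features are weighted by their similarity to the
# target-frame feature before being passed to the second feature extraction network.
import numpy as np

def attention_weights(history_feats: np.ndarray, target_feat: np.ndarray) -> np.ndarray:
    """history_feats: (n, d); target_feat: (d,). Returns weights summing to 1."""
    dists = np.linalg.norm(history_feats - target_feat, axis=1)   # Euclidean distance
    corr = 1.0 / (1.0 + dists)                                    # closer -> more correlated
    return corr / corr.sum()                                      # linear allocation

history_feats = np.random.rand(5, 128)
target_feat = np.random.rand(128)
w = attention_weights(history_feats, target_feat)
weighted_history = w[:, None] * history_feats     # "second historical image data"
```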
Due to the nature of anomaly discrimination for time-series data, the anomaly judgment at the current time is closely and inseparably related to the historical data. Meanwhile, different historical data contribute very differently to the anomaly judgment at the current moment. The attention mechanism can focus attention on the core inputs, so that more effective information is extracted.
Fig. 7 schematically illustrates a flowchart for iteratively adjusting network parameters of an anomaly detection model to be trained based on prediction results and tag information to generate a trained anomaly detection model, according to an embodiment of the present disclosure.
As shown in fig. 7, the method includes operations S710 to S720.
In operation S710, the prediction result and the tag information are input into the loss function, and the loss result is output.
In operation S720, the network parameters of the anomaly detection model to be trained are iteratively adjusted according to the loss result, and the trained anomaly detection model is generated.
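A minimal sketch of operations S710 to S720 is given below; the specific loss function and optimizer are assumptions, since they are not named in the description above:

```python
# Hedged training loop: predictions on labeled frames are compared against the
# tag information and the network parameters are updated by back-propagation.
import torch
from torch import nn

def train(model: nn.Module, loader, epochs: int = 10, lr: float = 1e-3) -> nn.Module:
    criterion = nn.BCEWithLogitsLoss()                 # assumed loss for anomaly scores
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for frames, labels in loader:                  # frames: history + target images
            prediction = model(frames)                 # predicted outlier of the target point
            loss = criterion(prediction, labels)       # compare with tag information
            optimizer.zero_grad()
            loss.backward()                            # iteratively adjust network parameters
            optimizer.step()
    return model
```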
According to an embodiment of the present disclosure, the training process of the anomaly detection model may be represented by the following Table (1).
Table (1)
According to an embodiment of the present disclosure, the first feature extraction network may be constructed based on a convolutional neural network. The convolutional neural network CNN may perform feature extraction on format data such as pictures. Therefore, through the CNN network, the effective spatial features in the time-series images can be extracted and analyzed, and the subsequent analysis and processing are facilitated.
Specifically, the shared convolution kernel in the CNN network performs nonlinear operations such as convolution with pixel information in the picture in a point-by-point scanning manner, so that spatial morphological characteristics of data in an original time sequence can be effectively extracted. The parameters of the convolution kernel may be updated by back-propagation.
According to an embodiment of the present disclosure, the second feature extraction network may be constructed based on a long short-term memory (LSTM) network. The LSTM network can effectively analyze and process time-series data and performs well on historically dependent data with strong front-to-back correlation. In the anomaly classification of time-series data, contextual anomalies and interval anomalies are determined based on the characteristics of the historical data, i.e., the anomaly determination at the next time depends on the behavior of the historical data. Therefore, by means of the LSTM's extraction of historical information, the representation of the historical data can be effectively combined to determine whether the data to be detected is abnormal.
The information obtained through the CNN network and the attention network represents the feature extraction of the historical data that is most relevant to the data point to be detected. This information from multiple data points is combined with the data point to be detected, and the output result is obtained through the selective input and forgetting mechanisms of the LSTM. The output result represents the feature extraction of the data point to be detected based on the analysis of the historical information.
Anomaly discrimination for time-series data is strongly related to the historical data, i.e., the temporal dependence is obvious. The LSTM is used to extract, filter, memorize, and transmit the historical information, and with the help of the attention network's selection of the image data, information useful for detecting anomalies at the current moment can be extracted well. The parameters of the logic gates in the LSTM network may be updated by back-propagation of gradients that minimize the objective function.
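For illustration, a PyTorch sketch of a possible second feature extraction network follows; the hidden size and the final scoring head are assumptions:

```python
# Illustrative second feature extraction network: an LSTM consumes the
# attention-weighted per-frame features and predicts the outlier of the point
# under test from its last hidden state.
import torch
from torch import nn

class SecondFeatureExtractor(nn.Module):
    def __init__(self, feature_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)           # outlier score of the target point

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, num_frames, feature_dim) -- weighted history + target
        _, (h_n, _) = self.lstm(frame_feats)
        return self.head(h_n[-1]).squeeze(-1)

scores = SecondFeatureExtractor()(torch.randn(4, 6, 128))   # (4,) anomaly scores
```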
Fig. 8 schematically illustrates a flowchart of an anomaly detection method according to an embodiment of the present disclosure.
As shown in fig. 8, the abnormality detection method includes operations S810 to S820.
In operation S810, an image to be measured is acquired, wherein the image to be measured includes time-series data.
In operation S820, the image to be detected is input into an anomaly detection model, and a detection result is output, wherein the detection result represents an anomaly value of a time point to be detected in the image to be detected, and the anomaly detection model is trained by the training method of the anomaly detection model provided by the embodiment of the present disclosure.
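Putting the pieces together, a hedged end-to-end sketch of operations S810 to S820 is shown below; all component names and signatures are hypothetical, not from the present disclosure:

```python
# Illustrative inference flow: per-frame CNN features -> attention re-weighting
# of the history features -> LSTM scoring of the point to be detected.
import torch

def detect(first_net, attention_fn, second_net, frames: torch.Tensor) -> torch.Tensor:
    """frames: (num_frames, 1, H, W), history frames followed by the target frame."""
    feats = first_net(frames)                            # (num_frames, feature_dim)
    weighted = attention_fn(feats[:-1], feats[-1])       # re-weighted history features
    sequence = torch.cat([weighted, feats[-1:]], dim=0)  # history + target
    return second_net(sequence.unsqueeze(0))             # anomaly value of the point to be detected
```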
Fig. 9 schematically illustrates a block diagram of a training apparatus of an anomaly detection model according to an embodiment of the present disclosure.
As shown in fig. 9, the training apparatus 900 of the anomaly detection model may include a first acquisition module 910, a prediction module 920, and a training module 930.
The first obtaining module 910 is configured to obtain a plurality of frames of time-series images, where the plurality of frames of time-series images are generated by capturing a screen of a time-series data display interface according to a preset frequency, the plurality of frames of time-series images include a target image and a history image, the target image includes a data point to be measured, the history image includes a history data point before the data point to be measured, the data point to be measured has tag information, and the tag information characterizes an outlier of the data point to be measured.
The prediction module 920 is configured to input the multi-frame time-sequence image into an anomaly detection model to be trained, so that the anomaly detection model to be trained performs anomaly detection on a data point to be detected in the target image according to the historical image, and output a prediction result, where the prediction result represents a predicted anomaly value of the data point to be detected.
The training module 930 is configured to iteratively adjust network parameters of the anomaly detection model to be trained according to the prediction result and the tag information, and generate a trained anomaly detection model.
According to an embodiment of the present disclosure, wherein the anomaly detection model to be trained comprises a first feature extraction network, an attention network, and a second feature extraction network.
According to an embodiment of the present disclosure, training module 930 includes a first input sub-module, a second input sub-module, and a third input sub-module.
The first input sub-module is used for inputting a plurality of frames of time sequence images into the first feature extraction network and outputting a plurality of frames of first image data, wherein the plurality of frames of first image data comprise first target image data corresponding to target images and first historical image data corresponding to historical images.
And the second input sub-module is used for inputting a plurality of frames of first image data into the attention network so that the attention network configures weight parameters for the first historical image data according to the correlation between the first historical image data and the first target image data and outputs the first target image data and the second historical image data.
And the third input sub-module is used for inputting the first target image data and the second historical image data into the second feature extraction network and outputting a prediction result.
According to an embodiment of the present disclosure, the second input sub-module includes a similarity calculation unit, a first generation unit, and a second generation unit.
And the similarity calculation unit is used for calculating the similarity of the first historical image data and the first target image data and generating a similarity result.
The first generation unit is used for generating a first weight parameter according to the similarity result.
And the second generation unit is used for generating second historical image data according to the first weight parameter and the first historical image data.
According to an embodiment of the present disclosure, training module 930 includes a first input module and a first generation module.
The first input module is configured to input the prediction result and the label information into a loss function and output a loss result.
The first generation module is configured to iteratively adjust the network parameters of the anomaly detection model to be trained according to the loss result, to generate the trained anomaly detection model.
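A minimal training loop consistent with this description might look as follows. The mean-squared-error loss, the Adam optimizer, and the stand-in model are assumptions introduced for this example; the disclosure does not prescribe a particular loss function or optimizer.

```python
import torch
import torch.nn as nn

# Stand-in model: any module mapping (batch, T, 3, H, W) frames to one value per
# sample would do; the architecture sketch given earlier is one possibility.
model = nn.Sequential(nn.Flatten(), nn.Linear(8 * 3 * 32 * 32, 1), nn.Sigmoid())

loss_fn = nn.MSELoss()  # assumed loss function; the disclosure does not fix one
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch: 16 samples of 8 frames each, with anomaly-value labels in [0, 1].
frames = torch.randn(16, 8, 3, 32, 32)
labels = torch.rand(16)

for epoch in range(10):
    prediction = model(frames).squeeze(-1)  # predicted anomaly values
    loss = loss_fn(prediction, labels)      # compare prediction with label information
    optimizer.zero_grad()
    loss.backward()                         # gradients for the network parameters
    optimizer.step()                        # iterative adjustment of the parameters
```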
According to an embodiment of the disclosure, the time series data display interface is generated by a second acquisition module and a dynamic normalization module.
The second acquisition module is configured to acquire an initial time-series data display interface, where the initial time-series data display interface includes a target time point and historical time points before the target time point.
The dynamic normalization module is configured to dynamically normalize the data displayed on the initial time-series data display interface according to the data value at the target time point and the maximum and minimum values of the data points within a preset time period, to generate the time-series data display interface.
According to an embodiment of the present disclosure, the preset time period is associated with a preset frequency.
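By way of illustration only, the following sketch applies a windowed min-max normalization in which the window length (the preset time period) would be chosen in relation to the screenshot frequency. Min-max scaling over a sliding window is an assumed concrete form of the dynamic normalization described above.

```python
import numpy as np


def dynamic_normalize(values: np.ndarray, window: int) -> np.ndarray:
    """Normalize each point against the extrema of the preceding `window` points
    (the preset time period), so the rendered curve stays in a comparable range
    regardless of the absolute scale of the data."""
    normalized = np.empty_like(values, dtype=float)
    for i, value in enumerate(values):
        start = max(0, i - window + 1)
        segment = values[start:i + 1]
        vmin, vmax = segment.min(), segment.max()
        normalized[i] = 0.5 if vmax == vmin else (value - vmin) / (vmax - vmin)
    return normalized


# Example: a series with a level shift still maps into [0, 1] within each window;
# a window of 4 points would correspond to 4 screenshots at the preset frequency.
series = np.array([10, 12, 11, 13, 100, 102, 101, 103], dtype=float)
print(dynamic_normalize(series, window=4))
```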
According to an embodiment of the disclosure, the multiple frames of time-series images generated by taking screenshots of the time-series data display interface at the preset frequency collectively include every data point of the time-series data display interface.
Fig. 10 schematically shows a block diagram of an abnormality detection apparatus according to an embodiment of the present disclosure.
As shown in fig. 10, the abnormality detection apparatus 1000 may include a second acquisition module 1010 and a detection module 1020.
The second acquisition module 1010 is configured to acquire an image to be detected, where the image to be detected includes time-series data.
The detection module 1020 is configured to input the image to be detected into an anomaly detection model and output a detection result, where the detection result represents an anomaly value of a time point to be detected in the image to be detected, and the anomaly detection model is obtained by training with the training method of the anomaly detection model provided by the embodiments of the present disclosure.
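By way of illustration only, a minimal detection-time sketch might look as follows. The stand-in model, the input size, and the synthesized image stand in for a model trained as described above and for a real screenshot containing the time-series data; all of these are assumptions introduced for this example.

```python
import torch
import torch.nn as nn
from PIL import Image
import torchvision.transforms.functional as TF

# Stand-in trained model: any module mapping a (batch, 3, H, W) screenshot of the
# time-series display to a scalar anomaly value would do; in practice the model
# produced by the training method above would be loaded here instead.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1), nn.Sigmoid())
model.eval()

# Image to be detected: assumed to be a screenshot such as "to_detect.png"; a blank
# image is synthesized here so the sketch runs without that file.
image = Image.new("RGB", (320, 240), color="white")
tensor = TF.to_tensor(TF.resize(image, [32, 32]))  # shape (3, 32, 32)

with torch.no_grad():
    detection_result = model(tensor.unsqueeze(0))  # add the batch dimension
print(f"anomaly value of the time point to be detected: {float(detection_result):.3f}")
```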
It should be noted that, the embodiments of the apparatus portion of the present disclosure are the same as or similar to the embodiments of the method portion of the present disclosure, and are not described herein.
According to embodiments of the present disclosure, any number of the modules, sub-modules, and units, or at least part of the functionality of any number of them, may be implemented in a single module. Any one or more of the modules, sub-modules, and units according to embodiments of the present disclosure may be split into multiple modules and implemented separately. Any one or more of the modules, sub-modules, and units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system in a package, or an Application Specific Integrated Circuit (ASIC), or in hardware or firmware in any other reasonable manner of integrating or packaging circuits, or in any one of, or a suitable combination of, software, hardware, and firmware. Alternatively, one or more of the modules, sub-modules, and units according to embodiments of the present disclosure may be at least partially implemented as computer program modules which, when executed, may perform the corresponding functions.
For example, any number of the first acquisition module 910, the prediction module 920, the training module 930, the second acquisition module 1010, and the detection module 1020 may be combined into one module/unit/sub-unit, or any one of them may be split into multiple modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/units/sub-units may be combined with at least part of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to embodiments of the present disclosure, at least one of the first acquisition module 910, the prediction module 920, the training module 930, the second acquisition module 1010, and the detection module 1020 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system in a package, or an Application Specific Integrated Circuit (ASIC), or in hardware or firmware in any other reasonable manner of integrating or packaging circuits, or in any one of, or a suitable combination of, software, hardware, and firmware. Alternatively, at least one of the first acquisition module 910, the prediction module 920, the training module 930, the second acquisition module 1010, and the detection module 1020 may be at least partially implemented as a computer program module which, when executed, may perform the corresponding functions.
Fig. 11 schematically illustrates a block diagram of a computer system suitable for implementing the above-described methods, according to an embodiment of the present disclosure. The computer system illustrated in fig. 11 is merely an example, and should not be construed as limiting the functionality and scope of use of the embodiments of the present disclosure.
As shown in fig. 11, a computer system 1100 according to an embodiment of the present disclosure includes a processor 1101 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1102 or a program loaded from a storage section 1108 into a Random Access Memory (RAM) 1103. The processor 1101 may comprise, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor 1101 may also include on-board memory for caching purposes. The processor 1101 may comprise a single processing unit or a plurality of processing units for performing the different actions of the method flow according to embodiments of the present disclosure.
In the RAM 1103, various programs and data required for the operation of the computer system 1100 are stored. The processor 1101, ROM 1102, and RAM 1103 are connected to each other by a bus 1104. The processor 1101 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 1102 and/or the RAM 1103. Note that the program can also be stored in one or more memories other than the ROM 1102 and the RAM 1103. The processor 1101 may also perform various operations of the method flow according to embodiments of the present disclosure by executing programs stored in one or more memories.
According to an embodiment of the present disclosure, the computer system 1100 may also include an input/output (I/O) interface 1105, which is also connected to the bus 1104. The computer system 1100 may also include one or more of the following components connected to the I/O interface 1105: an input section 1106 including a keyboard, a mouse, and the like; an output section 1107 including a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD) display, a speaker, and the like; a storage section 1108 including a hard disk and the like; and a communication section 1109 including a network interface card such as a LAN card, a modem, and the like. The communication section 1109 performs communication processing via a network such as the Internet. A drive 1110 is also connected to the I/O interface 1105 as needed. A removable medium 1111, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1110 as needed, so that a computer program read therefrom is installed into the storage section 1108 as needed.
According to embodiments of the present disclosure, the method flow according to embodiments of the present disclosure may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program can be downloaded and installed from a network via the communication portion 1109, and/or installed from the removable media 1111. The above-described functions defined in the system of the embodiments of the present disclosure are performed when the computer program is executed by the processor 1101. The systems, devices, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 1102 and/or RAM 1103 described above and/or one or more memories other than ROM 1102 and RAM 1103.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the present disclosure and/or in the claims may be combined in various ways, even if such combinations are not explicitly recited in the present disclosure. In particular, the features recited in the various embodiments and/or claims of the present disclosure may be combined in various ways without departing from the spirit and teachings of the present disclosure. All such combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.

Claims (11)

1. A training method of an anomaly detection model, comprising:
acquiring a plurality of frames of time sequence images, wherein the plurality of frames of time sequence images are generated by screenshot of a time sequence data display interface according to preset frequency, the plurality of frames of time sequence images comprise a target image and a historical image, the target image comprises a data point to be tested, the historical image comprises a historical data point before the data point to be tested, the data point to be tested is provided with label information, and the label information represents an abnormal value of the data point to be tested;
inputting a plurality of frames of time sequence images into an anomaly detection model to be trained so that the anomaly detection model to be trained detects anomalies of data points to be detected in the target image according to the historical images and outputs a prediction result, wherein the prediction result represents predicted anomaly values of the data points to be detected; and
Iteratively adjusting network parameters of the anomaly detection model to be trained according to the prediction result and the label information to generate a trained anomaly detection model;
the anomaly detection model to be trained comprises a first feature extraction network, an attention network and a second feature extraction network;
inputting a plurality of frames of time sequence images into an anomaly detection model to be trained so that the anomaly detection model to be trained detects anomalies of data points to be detected in the target image according to the historical images, and outputting a prediction result comprises:
inputting a plurality of frames of the time sequence images into the first feature extraction network, and outputting a plurality of frames of first image data, wherein the plurality of frames of first image data comprise first target image data corresponding to the target image and first historical image data corresponding to the historical image;
inputting a plurality of frames of the first image data into the attention network, so that the attention network configures weight parameters for the first historical image data according to the correlation between the first historical image data and the first target image data, and outputs the first target image data and the second historical image data;
And inputting the first target image data and the second historical image data into the second feature extraction network, and outputting the prediction result.
2. The method of claim 1, the inputting a plurality of frames of the first image data into the attention network such that the attention network configures weight parameters for the first historical image data based on a correlation of the first historical image data with the first target image data, outputting the first target image data and the second historical image data comprising:
performing similarity calculation on the first historical image data and the first target image data to generate a similarity result;
generating a first weight parameter according to the similarity result;
and generating the second historical image data according to the first weight parameter and the first historical image data.
3. The method of claim 1, wherein,
iteratively adjusting network parameters of the anomaly detection model to be trained according to the prediction result and the label information to generate the trained anomaly detection model comprises:
inputting the prediction result and the label information into a loss function, and outputting a loss result;
And iteratively adjusting network parameters of the anomaly detection model to be trained according to the loss result to generate the trained anomaly detection model.
4. The method of claim 1, the time series data display interface generated by:
acquiring an initial time sequence data display interface, wherein the initial time sequence data display interface comprises a target time point and a historical time point before the target time point;
and dynamically normalizing the data displayed on the initial time sequence data display interface according to the data value of the target time point and the maximum value and the minimum value of the data point to be detected in a preset time period to generate the time sequence data display interface.
5. The method of claim 4, wherein the preset time period is associated with the preset frequency.
6. The method of claim 1, wherein the multi-frame time series image generated by screenshot of the time series data display interface at a preset frequency comprises each data point of the time series data display interface.
7. An anomaly detection method, comprising:
acquiring an image to be detected, wherein the image to be detected comprises time sequence data;
Inputting the image to be detected into an anomaly detection model, and outputting a detection result, wherein the detection result represents an anomaly value of a time point to be detected in the image to be detected, and the anomaly detection model is trained by the training method of the anomaly detection model according to any one of claims 1 to 6.
8. A training apparatus for an anomaly detection model, comprising:
the first acquisition module is used for acquiring a plurality of frames of time sequence images, wherein the plurality of frames of time sequence images are generated by screenshot of a time sequence data display interface according to preset frequency, the plurality of frames of time sequence images comprise a target image and a historical image, the target image comprises a data point to be detected, the historical image comprises a historical data point in front of the data point to be detected, the data point to be detected is provided with label information, and the label information represents an abnormal value of the data point to be detected;
the prediction module is used for inputting a plurality of frames of time sequence images into an anomaly detection model to be trained so that the anomaly detection model to be trained detects anomalies of data points to be detected in the target image according to the historical images and outputs a prediction result, wherein the prediction result represents predicted anomaly values of the data points to be detected; and
The training module is used for iteratively adjusting network parameters of the anomaly detection model to be trained according to the prediction result and the label information to generate a trained anomaly detection model;
the anomaly detection model to be trained comprises a first feature extraction network, an attention network and a second feature extraction network;
the training module comprises:
a first input sub-module, configured to input a plurality of frames of the time-series images into the first feature extraction network, and output a plurality of frames of first image data, where the plurality of frames of first image data include first target image data corresponding to the target image and first history image data corresponding to the history image;
a second input sub-module, configured to input a plurality of frames of the first image data into the attention network, so that the attention network configures weight parameters for the first historical image data according to the correlation between the first historical image data and the first target image data, and outputs the first target image data and the second historical image data;
and the third input sub-module is used for inputting the first target image data and the second historical image data into the second feature extraction network and outputting the prediction result.
9. An abnormality detection apparatus comprising:
the second acquisition module is used for acquiring an image to be detected, wherein the image to be detected comprises time sequence data;
the detection module is used for inputting the image to be detected into an anomaly detection model and outputting a detection result, wherein the detection result represents an anomaly value of a time point to be detected in the image to be detected, and the anomaly detection model is trained by the training method of the anomaly detection model according to any one of claims 1 to 6.
10. A computer system, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-6 or the method of claim 7.
11. A computer readable storage medium having stored thereon executable instructions which when executed by a processor cause the processor to implement the method of any one of claims 1 to 6 or to implement the method of claim 7.