WO2021139236A1 - Autoencoder-based anomaly detection method, apparatus, device and storage medium

Autoencoder-based anomaly detection method, apparatus, device and storage medium

Info

Publication number
WO2021139236A1
WO2021139236A1 (PCT/CN2020/118224; CN2020118224W)
Authority
WO
WIPO (PCT)
Prior art keywords
sample
label
reconstruction
positive
data
Prior art date
Application number
PCT/CN2020/118224
Other languages
English (en)
French (fr)
Inventor
邓悦
郑立颖
徐亮
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021139236A1 (patent/WO2021139236A1/zh)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Definitions

  • This application relates to the field of artificial intelligence, and in particular to an abnormality detection method, device, equipment and storage medium based on an autoencoder.
  • anomaly detection identifies data that does not conform to the expected normal pattern. Such data may come from new categories or from meaningless noise; anomalies have no clear definition, so they are difficult to collect or verify.
  • positive samples can be well characterized by the training data, but traditional methods either build a model of the positive samples and then flag violating examples as outliers, or cleanly separate outliers using statistical or geometric metrics of abnormality. These are usually linear models with limited capacity; although kernel functions can improve performance, they remain unsuitable for high-dimensional mass data.
  • the main purpose of this application is to solve the current technical problems that, when anomaly detection is performed by building a model, the preset threshold is difficult to determine and overfitting occurs.
  • the first aspect of the present application provides an autoencoder-based abnormality detection method, including: inputting unlabeled samples into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and randomly assigning a first label to the unlabeled sample features; inputting the unlabeled sample features with the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction, to obtain first reconstructed data and second reconstructed data; calculating the reconstruction error of the unlabeled sample according to the first reconstructed data and the second reconstructed data; determining the second label of the unlabeled sample according to the reconstruction error; judging whether the second label is the same as the first label; if they are the same, determining the abnormality of the unlabeled sample according to the second label; if they are not the same, updating the content of the first label to the content of the second label, and returning to the step of inputting the unlabeled sample features with the first label into the positive sample decoder and the negative sample decoder for data reconstruction.
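As a rough illustration of the loop described in the first aspect, the sketch below runs the reassignment with toy linear stand-ins for the trained encoder and the two decoders. All weights here are hypothetical placeholders, not the trained two-hidden-layer networks of the application:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the trained networks (hypothetical: the application
# uses two-hidden-layer fully connected encoders/decoders).
D, d = 8, 2
W_enc = rng.normal(size=(D, d))
W_pos = rng.normal(size=(d, D))  # positive sample decoder weights
W_neg = rng.normal(size=(d, D))  # negative sample decoder weights

def encode(X):
    return X @ W_enc     # dimensionality reduction D -> d

def decode_pos(Z):
    return Z @ W_pos     # first reconstructed data

def decode_neg(Z):
    return Z @ W_neg     # second reconstructed data

def iterative_relabel(X_u, max_iters=20):
    """Assign random first labels, then reassign by comparing the two
    reconstruction errors until the labels stop changing."""
    labels = rng.integers(0, 2, size=len(X_u))  # random first label
    for _ in range(max_iters):
        Z = encode(X_u)
        d_in = np.linalg.norm(X_u - decode_pos(Z), axis=1)   # positive error
        d_out = np.linalg.norm(X_u - decode_neg(Z), axis=1)  # negative error
        second = (d_in < d_out).astype(int)  # 1 = normal, 0 = abnormal
        if np.array_equal(second, labels):   # second label == first label
            break
        labels = second                      # update label, reconstruct again
    return labels

X_u = rng.normal(size=(16, D))
labels = iterative_relabel(X_u)
```

In the actual method the decoders are retrained between iterations, so the errors shift as labels shift; with fixed toy weights the loop simply settles after one reassignment.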
  • the second aspect of the present application provides an autoencoder-based abnormality detection apparatus, including: a dimensionality reduction module for inputting unlabeled samples into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and randomly assigning a first label to the unlabeled sample features; a reconstruction module for inputting the unlabeled sample features with the first label into the positive sample decoder and the negative sample decoder for data reconstruction, to obtain first reconstructed data and second reconstructed data; a calculation module for calculating the reconstruction error of the unlabeled sample according to the first reconstructed data and the second reconstructed data; a judgment module for determining the second label of the unlabeled sample according to the reconstruction error and judging whether the second label is the same as the first label; a determination module for determining, when the second label is the same as the first label, the abnormal condition of the unlabeled sample according to the second label; and a circulation module for, when the second label differs from the first label, updating the content of the first label to the content of the second label and returning to the step of inputting the unlabeled sample features with the first label into the positive sample decoder and the negative sample decoder for data reconstruction.
  • a third aspect of the present application provides an autoencoder-based abnormality detection equipment, including a memory and at least one processor, the memory storing instructions and being interconnected with the at least one processor by wires; the at least one processor calls the instructions in the memory, so that the equipment executes the steps of the autoencoder-based abnormality detection method as follows: input unlabeled samples into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and randomly assign a first label to the unlabeled sample features; input the unlabeled sample features with the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction, to obtain first reconstructed data and second reconstructed data; calculate the reconstruction error of the unlabeled sample according to the first reconstructed data and the second reconstructed data; determine the second label of the unlabeled sample according to the reconstruction error; judge whether the second label is the same as the first label; if they are the same, determine the abnormality of the unlabeled sample according to the second label; if they are not the same, update the content of the first label to the content of the second label and return to the data reconstruction step.
  • the fourth aspect of the present application provides a computer-readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the steps of the following autoencoder-based anomaly detection method: input unlabeled samples into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and randomly assign a first label to the unlabeled sample features; input the unlabeled sample features with the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction, to obtain first reconstructed data and second reconstructed data; calculate the reconstruction error of the unlabeled sample according to the first reconstructed data and the second reconstructed data; determine the second label of the unlabeled sample according to the reconstruction error; judge whether the second label is the same as the first label; if they are the same, determine the abnormality of the unlabeled sample according to the second label; if they are not the same, update the content of the first label to the content of the second label and return to the data reconstruction step.
  • the unlabeled sample is input into the encoder for dimensionality reduction processing to obtain the unlabeled sample features of the unlabeled sample, and a first label is randomly assigned to the unlabeled sample features;
  • the unlabeled sample features with the first label are respectively input into the positive sample decoder and the negative sample decoder for data reconstruction, to obtain first reconstructed data and second reconstructed data; the reconstruction error of the unlabeled sample is calculated according to the first reconstructed data and the second reconstructed data;
  • This application performs anomaly detection through iterative reconstruction by the autoencoder instead of building a model, and proposes a new criterion for defining anomalies, which avoids the difficulty of determining a preset threshold; at the same time, anomaly detection is performed in a discriminative manner, which avoids overfitting. The learning process of the autoencoder converges, the model is reliable, and robustness to the outlier ratio is higher, which saves computing resources.
  • FIG. 1 is a schematic diagram of a first embodiment of the autoencoder-based abnormality detection method in an embodiment of the application;
  • FIG. 2 is a schematic diagram of a second embodiment of the autoencoder-based abnormality detection method in an embodiment of the application;
  • FIG. 3 is a schematic diagram of a third embodiment of the autoencoder-based abnormality detection method in an embodiment of the application;
  • FIG. 4 is a schematic diagram of a fourth embodiment of the autoencoder-based abnormality detection method in an embodiment of the application;
  • FIG. 5 is a schematic diagram of a fifth embodiment of the autoencoder-based abnormality detection method in an embodiment of the application;
  • Fig. 6 is a schematic diagram of an embodiment of the autoencoder-based abnormality detection device in an embodiment of the application;
  • Fig. 7 is a schematic diagram of another embodiment of the autoencoder-based abnormality detection device in an embodiment of the application;
  • Fig. 8 is a schematic diagram of an embodiment of the autoencoder-based abnormality detection equipment in an embodiment of the application.
  • the embodiments of this application provide an abnormality detection method, device, equipment and storage medium based on an autoencoder.
  • unlabeled samples are input into the encoder for dimensionality reduction processing to obtain the unlabeled sample features of the unlabeled samples, and a first label is randomly assigned to the unlabeled sample features; the unlabeled sample features with the first label are respectively input into the positive sample decoder and the negative sample decoder for data reconstruction, to obtain first reconstructed data and second reconstructed data; the reconstruction error of the unlabeled sample is calculated according to the first reconstructed data and the second reconstructed data;
  • This application performs anomaly detection through iterative reconstruction by the autoencoder instead of building a model, and proposes a new criterion for defining anomalies, which avoids the difficulty of determining a preset threshold; at the same time, anomaly detection is performed in a discriminative manner, which avoids overfitting. The learning process of the autoencoder converges, the model is reliable, and robustness to the outlier ratio is higher, which saves computing resources.
  • the first embodiment of the abnormality detection method based on the self-encoder in the embodiment of the present application includes:
  • the execution subject of this application may be an autoencoder-based abnormality detection apparatus, or a terminal or a server, which is not specifically limited here.
  • the embodiment of the present application takes the server as the execution subject as an example for description.
  • the unlabeled samples and the samples after detection can be stored in a node of a blockchain.
  • anomaly detection identifies data that does not conform to the expected normal pattern, so data obtained in advance that is known to conform to the normal pattern can be used as positive samples, and data not yet known to conform to the expected normal pattern is treated as unlabeled samples; the unlabeled samples contain data that does and does not conform to the expected normal pattern. Through the method of this application, it is possible to identify which unlabeled samples conform to the expected normal pattern and which do not, achieving the purpose of anomaly detection.
  • the data sets used for anomaly detection are the MNIST data set and the KDD Cup 1999 network intrusion data set (KDD).
  • the sample set is divided into positive sample data and negative sample data according to its class label.
  • the labeled positive samples consist of 80% of the normal data, and the unlabeled samples consist of the remaining 20% of the normal data plus all of the abnormal data. Therefore, the model uses only normal data to train the positive sample decoder, and uses both normal and abnormal data for testing.
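The 80/20 split described above can be sketched as follows, with synthetic Gaussian data standing in for MNIST/KDD (the generators are illustrative assumptions only):

```python
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(100, 4))   # data matching the pattern
abnormal = rng.normal(5.0, 1.0, size=(20, 4))  # data violating the pattern

# 80% of the normal data forms the labeled positive set; the remaining
# 20% plus all abnormal data forms the unlabeled set.
split = int(0.8 * len(normal))
positive = normal[:split]
unlabeled = np.vstack([normal[split:], abnormal])

print(positive.shape, unlabeled.shape)  # (80, 4) (40, 4)
```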
  • the neural network type selected for the encoder can be a fully connected network, a convolutional neural network, or a recurrent neural network, mainly determined by the attributes of the data; choosing a network type suited to the attributes of the sample data reduces the amount of computation and improves efficiency.
  • a fully connected network is selected as the neural network type of the encoder and decoder.
  • each encoder and decoder is composed of two hidden layers, and the structure is symmetric.
  • high-dimensional sample data can be encoded into low-dimensional sample data through the multilayer neural network selected by the encoder.
  • a regularization term can be added to separate the positive and negative sample data to a certain extent; the low-dimensional data is then decoded by the decoder back into sample data of the same dimensionality as before, completing the entire reconstruction process.
  • the samples can be preprocessed before being input into the autoencoder, for example by data normalization.
  • data normalization scales the data into a small, specific interval.
  • the significance of data standardization is to eliminate errors caused by differing units, large variation within a feature, or large differences in magnitude.
  • Data standardization methods include min-max standardization, z-score standardization, atan arctangent function standardization, and log function standardization.
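Two of the standardization methods listed above, sketched on a toy vector:

```python
import numpy as np

x = np.array([10.0, 20.0, 30.0, 40.0])

# min-max standardization: scale into [0, 1]
mn, mx = x.min(), x.max()
minmax = (x - mn) / (mx - mn)

# z-score standardization: zero mean, unit variance
z = (x - x.mean()) / x.std()

print(minmax)  # [0.         0.33333333 0.66666667 1.        ]
```

The atan and log variants listed in the text follow the same pattern: apply the function element-wise, then rescale into the target interval.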
  • the decoder maps the samples in the low-dimensional subspace back to the original input space through the same transformation as the encoder.
  • the data obtained after reconstruction is the reconstructed data.
  • the positive sample decoder is obtained by training with the labeled positive samples as the training set, and the labeled positive samples consist of 80% of the normal data.
  • the difference between the samples input to the encoder and the samples output from the decoder is the reconstruction error.
  • by comparing the reconstruction error calculated from the first reconstructed data output by the positive sample decoder with the reconstruction error calculated from the second reconstructed data output by the negative sample decoder, the unlabeled sample can be reassigned a label, because the relative size of the two reconstruction errors indicates whether the unlabeled sample leans toward being a positive sample or a negative sample, that is, an abnormal sample.
  • if the reconstruction error calculated from the first reconstructed data output by the positive sample decoder is smaller, the unlabeled sample leans toward being a positive sample; if the reconstruction error calculated from the second reconstructed data output by the negative sample decoder is smaller, the unlabeled sample leans toward being an abnormal sample, and a label is reassigned to it accordingly.
  • the sample set includes a plurality of unlabeled samples.
  • when the labels no longer change, the label reassignment process is stopped, and whether each unlabeled sample is an abnormal sample is determined.
  • step 107: if they are not the same, update the content of the first label to the content of the second label, and return to step 102.
  • if the second label assigned to the unlabeled sample through reconstruction differs from the first label assigned before reconstruction, the label assigned before reconstruction was not correct; labels must be assigned again and the sample reconstructed to check whether the newly assigned label is correct. After multiple reconstructions, the label of the unlabeled sample is finally determined, and the label content is used to determine whether the unlabeled sample is an abnormal sample.
  • the unlabeled sample is input into the encoder for dimensionality reduction processing to obtain the unlabeled sample features of the unlabeled sample, and a first label is randomly assigned to the unlabeled sample features;
  • the unlabeled sample features with the first label are respectively input into the positive sample decoder and the negative sample decoder for data reconstruction, to obtain first reconstructed data and second reconstructed data; the reconstruction error of the unlabeled sample is calculated according to the first reconstructed data and the second reconstructed data;
  • This application performs anomaly detection through iterative reconstruction by the autoencoder instead of building a model, and proposes a new criterion for defining anomalies, which avoids the difficulty of determining a preset threshold; at the same time, anomaly detection is performed in a discriminative manner, which avoids overfitting. The learning process of the autoencoder converges, the model is reliable, and robustness to the outlier ratio is higher, which saves computing resources.
  • the second embodiment of the abnormality detection method based on the self-encoder in the embodiment of the present application includes:
  • the reconstruction error of the unlabeled sample can be divided into a positive reconstruction error and a negative reconstruction error: the positive reconstruction error is the error between the original unlabeled sample and the reconstructed data obtained by encoding the sample with the encoder and decoding it with the positive sample decoder, and the negative reconstruction error is the error between the original unlabeled sample and the reconstructed data obtained by encoding the sample with the encoder and decoding it with the negative sample decoder. Both can be obtained by calculating the second norm of the difference.
  • the calculation formulas are as follows:
  • D_in = ||X_u - R_in(X_u)||_2,  D_out = ||X_u - R_out(X_u)||_2
  • where D_in is the positive reconstruction error, D_out is the negative reconstruction error, X_u is the unlabeled sample, R_in(X) is the first reconstructed data, and R_out(X) is the second reconstructed data.
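A minimal numeric sketch of the two second-norm reconstruction errors, with made-up vectors standing in for X_u and the two decoder outputs:

```python
import numpy as np

X_u = np.array([1.0, 2.0, 3.0])    # unlabeled sample
R_in = np.array([1.1, 2.0, 2.9])   # first reconstructed data (positive decoder)
R_out = np.array([0.0, 0.5, 1.0])  # second reconstructed data (negative decoder)

D_in = np.linalg.norm(X_u - R_in)   # positive reconstruction error
D_out = np.linalg.norm(X_u - R_out) # negative reconstruction error

print(round(D_in, 3), round(D_out, 3))  # 0.141 2.693
```

Here D_in < D_out, so this sample would lean toward the positive (normal) side.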
  • the errors may also be measured with the Minkowski distance: when p = 1, the Minkowski distance is the Manhattan distance; when p = 2, the Minkowski distance is the Euclidean distance (the second norm used here).
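A small sketch of the Minkowski distance showing the p = 1 and p = 2 special cases:

```python
def minkowski(u, v, p):
    """Minkowski distance; p = 1 gives Manhattan, p = 2 gives Euclidean."""
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1.0 / p)

u, v = (0.0, 0.0), (3.0, 4.0)
print(minkowski(u, v, 1))  # 7.0 (Manhattan)
print(minkowski(u, v, 2))  # 5.0 (Euclidean)
```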
  • step 207 If they are not the same, update the content of the first label to the content of the second label, and return to step 202;
  • Steps 204-207 in this embodiment are similar to steps 104-107 in the first embodiment, and will not be repeated here.
  • this embodiment adds the process of calculating the reconstruction error, by separately calculating the positive reconstruction error of the sample against the positive sample decoder and the negative reconstruction error against the negative sample decoder during reconstruction.
  • for a normal sample, the positive reconstruction error tends to become smaller while the negative reconstruction error tends to become larger; for an abnormal sample, the positive reconstruction error tends to become larger.
  • the label of the unlabeled sample is assigned according to the relative size of the positive and negative reconstruction errors, and finally whether the sample is abnormal can be determined through the label.
  • the third embodiment of the abnormality detection method based on the self-encoder in the embodiment of the present application includes:
  • Steps 301-306 in this embodiment are similar to steps 101-106 in the first embodiment, and will not be repeated here.
  • the calculation formula of the first loss function is:
  • n is the number of unlabeled samples
  • X p is the positive sample
  • E(X) represents the low-dimensional subspace feature of sample X
  • W is the regularization term
  • the positive samples and unlabeled samples are mapped into the same low-dimensional space, and a regularization process is added during the mapping.
  • the regularization process constrains similarly labeled positive samples to adjacent regions of the space by computing a block symmetric affinity matrix as the regularization term; the purpose is to strengthen the data reconstruction ability of the positive sample decoder and to better preserve the structural characteristics of the positive sample data in the low-dimensional subspace, so that normal values and abnormal points can be better distinguished and the accuracy of the model improved.
  • the block symmetric affinity matrix W is used as the regularization term.
  • the calculation formula of the regularization term is:
  • D(X_i, X_j) is the distance metric between data points X_i and X_j
  • N_i is the neighborhood of the i-th data point
  • N_j is the neighborhood of the j-th data point
  • σ > 0 is a constant parameter
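The exact affinity formula is not reproduced here (the formula image did not survive extraction), so the sketch below assumes a common Gaussian-kernel form restricted to nearest neighbours, W_ij = exp(-D(X_i, X_j)^2 / σ) when j is in the neighborhood of i, then symmetrised. It illustrates the idea of a symmetric affinity matrix, not the patented construction:

```python
import numpy as np

def affinity_matrix(X, sigma=1.0, k=2):
    """Symmetric affinity matrix sketch (assumed Gaussian-kernel form)."""
    n = len(X)
    # Pairwise distance metric D(X_i, X_j) (Euclidean here)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d[i])[1:k + 1]  # k nearest neighbours N_i (skip self)
        W[i, nbrs] = np.exp(-d[i, nbrs] ** 2 / sigma)
    return np.maximum(W, W.T)             # enforce symmetry

X = np.random.default_rng(1).normal(size=(6, 3))
W = affinity_matrix(X)
```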
  • this embodiment adds the process of calculating the loss function of the encoder.
  • the loss function is used to adjust the network parameters of the autoencoder, optimizing the autoencoder and improving its reconstruction accuracy.
  • a regularization term is added during the calculation to constrain similarly labeled positive samples to adjacent regions, thereby enhancing the data reconstruction capability of the positive sample decoder.
  • step 308 of inputting positive samples into the encoder for dimensionality reduction can be performed simultaneously with step 301 of inputting unlabeled samples into the encoder for dimensionality reduction; that is, the labeled positive samples and the unlabeled samples are input into the encoder at the same time.
  • likewise, step 310 of inputting the positive sample features into the positive sample decoder for data reconstruction may be performed synchronously with step 302 of inputting the unlabeled sample features with the first label into the positive sample decoder and the negative sample decoder for data reconstruction.
  • the fourth embodiment of the abnormality detection method based on the autoencoder in the embodiment of the present application includes:
  • Steps 401-406 in this embodiment are similar to steps 101-106 in the first embodiment, and will not be repeated here.
  • the average competitive reconstruction error over all samples is:
  • m is the number of positive samples
  • n is the number of unlabeled samples
  • X p is a positive sample
  • y j represents the predicted label for the j-th unlabeled data
  • X u is the unlabeled sample
  • R_in(X) is the reconstructed data output by the positive sample decoder, including the third reconstructed data
  • R_out(X) is the reconstructed data output by the negative sample decoder, including the second reconstructed data
  • the final loss function of the autoencoder over the entire reconstruction process can be obtained from the first loss function and the average competitive reconstruction error of all samples; the calculation formula of the final loss function is:
  • in the formula, the left-hand side is the final loss function, λ > 0 is a constant weight, and the remaining term is the first loss function of the encoder. To optimize the final loss function, a method similar to stochastic gradient descent can be used to train the model.
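As a toy illustration of gradient-descent-style training on a reconstruction loss (a tied-weight linear autoencoder with a numerical gradient; the real model has two hidden layers per coder and includes the competitive term and regularization, both omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))

# Tied-weight linear autoencoder: encode with W, decode with W.T
# (a hypothetical stand-in for the trained networks).
W = rng.normal(size=(4, 2)) * 0.1

def loss(W):
    R = X @ W @ W.T               # encode then decode
    return np.mean((X - R) ** 2)  # mean reconstruction error

loss0 = loss(W)
lr, eps = 0.05, 1e-5
for _ in range(150):              # gradient-descent-style updates
    g = np.zeros_like(W)
    for i in range(4):
        for j in range(2):
            Wp = W.copy()
            Wp[i, j] += eps
            g[i, j] = (loss(Wp) - loss(W)) / eps  # numerical gradient
    W -= lr * g
```

In practice the gradients would be computed by backpropagation and the updates taken over mini-batches, which is what "similar to stochastic gradient descent" refers to.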
  • step 414: adjust the autoencoder according to the network parameters, and return to step 402.
  • this embodiment describes in detail the process of updating the labels of unlabeled samples.
  • labels are assigned to the unlabeled samples in each iteration, until the labels of all samples no longer change.
  • for normal samples, the reconstruction error in the positive sample decoder becomes smaller and smaller, while for abnormal samples it becomes larger and larger.
  • the positive and negative samples in the unlabeled sample can be determined through the label.
  • the fifth embodiment of the abnormality detection method based on the autoencoder in the embodiment of the present application includes:
  • Steps 501-503 in this embodiment are similar to steps 101-103 in the first embodiment, and will not be repeated here.
  • for the same unlabeled sample, the reconstruction errors output by the two decoders are compared to determine whether the label assigned to it should be 0 or 1: when the reconstruction error of the positive sample decoder is smaller, the unlabeled sample is more likely to be a normal sample; when the reconstruction error of the negative sample decoder is smaller, the unlabeled sample is more likely to be an abnormal sample.
  • this embodiment describes in detail the process of determining the second label of an unmarked sample.
  • the label of an unlabeled sample can be 0 or 1, where 0 indicates that the sample is an abnormal sample and 1 indicates a normal sample.
  • by comparing the reconstruction error calculated from the first reconstructed data output by the positive sample decoder with the reconstruction error calculated from the second reconstructed data output by the negative sample decoder, labels can be redistributed to the unlabeled samples, because the relative size of the two reconstruction errors indicates whether an unlabeled sample leans toward being a positive sample or a negative sample, that is, an abnormal sample. On this basis, the labels of the unlabeled samples can be quickly redistributed.
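The 0/1 decision rule described above can be written down directly (d_in and d_out are the two reconstruction errors; the sample values are made up):

```python
def second_label(d_in, d_out):
    """Return 1 (normal) when the positive sample decoder reconstructs the
    sample better, otherwise 0 (abnormal)."""
    return 1 if d_in < d_out else 0

print(second_label(0.14, 2.69))  # 1: leans toward the positive samples
print(second_label(3.10, 0.40))  # 0: leans toward the abnormal samples
```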
  • An embodiment of the abnormality detection device includes:
  • the dimensionality reduction module 601 is configured to input unlabeled samples into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and randomly assign a first label to the unlabeled sample features;
  • the reconstruction module 602 is configured to input the unlabeled sample features with the first label into the positive sample decoder and the negative sample decoder to perform data reconstruction, to obtain first reconstructed data and second reconstructed data;
  • the calculation module 603 is configured to calculate the reconstruction error of the unmarked sample according to the first reconstruction data and the second reconstruction data;
  • the determining module 604 is configured to determine the second label of the unmarked sample according to the reconstruction error, and determine whether the second label is the same as the first label;
  • the determining module 605 is configured to determine the abnormal situation of the unmarked sample according to the second label when the second label is the same as the first label;
  • the circulation module 606 is configured to, when the second label is not the same as the first label, update the content of the first label to the content of the second label, and return to the step of inputting the unlabeled sample features with the first label into the positive sample decoder and the negative sample decoder for data reconstruction.
  • the above positive samples and negative samples can be stored in nodes of a blockchain.
  • the autoencoder-based abnormality detection device runs the autoencoder-based abnormality detection method: unlabeled samples are input into the encoder for dimensionality reduction processing to obtain the unlabeled sample features of the unlabeled samples, and a first label is randomly assigned to the unlabeled sample features; the unlabeled sample features with the first label are respectively input into the positive sample decoder and the negative sample decoder for data reconstruction to obtain first reconstructed data and second reconstructed data; the reconstruction error of the unlabeled sample is calculated according to the first reconstructed data and the second reconstructed data;
  • This application performs anomaly detection through iterative reconstruction by the autoencoder instead of building a model, and proposes a new criterion for defining anomalies, which avoids the difficulty of determining a preset threshold; at the same time, anomaly detection is performed in a discriminative manner, which avoids overfitting. The learning process of the autoencoder converges, the model is reliable, and robustness to the outlier ratio is higher, which saves computing resources.
  • an abnormality detection device based on a self-encoder in the embodiment of the present application includes:
  • the dimensionality reduction module 601 is configured to input unlabeled samples into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and randomly assign a first label to the unlabeled sample features;
  • the reconstruction module 602 is configured to input the unlabeled sample features with the first label into the positive sample decoder and the negative sample decoder to perform data reconstruction, to obtain first reconstructed data and second reconstructed data;
  • the calculation module 603 is configured to calculate the reconstruction error of the unmarked sample according to the first reconstruction data and the second reconstruction data;
  • the determining module 604 is configured to determine the second label of the unmarked sample according to the reconstruction error, and determine whether the second label is the same as the first label;
  • the determining module 605 is configured to determine the abnormal situation of the unmarked sample according to the second label when the second label is the same as the first label;
  • the circulation module 606 is configured to, when the second label is not the same as the first label, update the content of the first label to the content of the second label, and return to the step of inputting the unlabeled sample features with the first label into the positive sample decoder and the negative sample decoder for data reconstruction.
  • the calculation module 603 is specifically configured to: calculate the square of the norm of the difference between the unlabeled sample and the first reconstruction data to obtain the positive reconstruction error; and calculate the square of the norm of the difference between the unlabeled sample and the second reconstruction data to obtain the negative reconstruction error;
  • the abnormality detection device based on the self-encoder further includes a parameter adjustment module 607, and the parameter adjustment module 607 includes:
  • the positive sample dimensionality reduction unit 6071 is configured to input a positive sample into the encoder for dimensionality reduction processing to obtain the positive sample feature of the positive sample;
  • a positive sample reconstruction unit 6072, configured to input the positive sample features into the positive sample decoder to perform data reconstruction to obtain a third reconstruction error;
  • an adjusting unit 6073, configured to calculate the final loss function of the autoencoder and adjust the network parameters of the autoencoder according to the final loss function.
  • the parameter adjustment module 607 further includes a first loss calculation unit 6074, and the first loss calculation unit 6074 is specifically configured to: calculate the first loss function of the encoder according to the positive samples, the unlabeled samples, the positive sample features, and the unlabeled sample features;
  • the parameter adjustment module 607 further includes a competitive error unit 6075, and the competitive error unit 6075 is specifically configured to: calculate the average competitive reconstruction error over all samples according to the positive samples, the unlabeled samples, the third reconstruction data, the positive reconstruction error, and the negative reconstruction error;
  • the adjusting unit 6073 is specifically configured to: calculate the final loss function of the autoencoder according to the first loss function and the average competitive reconstruction error, update the network parameters of the autoencoder by back propagation according to the final loss function, and adjust the autoencoder based on the network parameters;
  • the judgment module 604 is specifically configured to: judge whether the positive reconstruction error is smaller than the negative reconstruction error;
  • if it is smaller, determine that the second label of the unlabeled sample is a label representing a normal sample;
  • if it is not smaller, determine that the second label of the unlabeled sample is a label representing an abnormal sample.
  • on the basis of the previous embodiment, this embodiment describes the specific functions of each module in detail and adds multiple module functions.
  • the first loss calculation unit and the competitive error unit are used to calculate the final loss function of the autoencoder during the reconstruction process; through back propagation of the final loss function, the parameters of the autoencoder's neural network are adjusted so that the performance of the autoencoder becomes better and better.
  • the above figures 6 and 7 describe in detail the anomaly detection device based on the autoencoder in the embodiment of the present application from the perspective of a modular functional entity.
  • the autoencoder-based anomaly detection device in the embodiments of the present application is described in detail below from the perspective of hardware processing.
  • FIG. 8 is a schematic structural diagram of an abnormality detection device based on an autoencoder provided by an embodiment of the present application.
  • the autoencoder-based anomaly detection device 800 may differ considerably depending on configuration or performance, and may include one or more processors (central processing units, CPU) 810 (for example, one or more processors), a memory 820, and one or more storage media 830 (for example, one or more mass storage devices) storing application programs 833 or data 832.
  • the memory 820 and the storage medium 830 may be short-term storage or persistent storage.
  • the program stored in the storage medium 830 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the abnormality detection device 800 based on the autoencoder. Further, the processor 810 may be configured to communicate with the storage medium 830, and execute a series of instruction operations in the storage medium 830 on the abnormality detection device 800 based on the self-encoder.
  • the autoencoder-based anomaly detection device 800 may also include one or more power supplies 840, one or more wired or wireless network interfaces 850, one or more input/output interfaces 860, and/or one or more operating systems 831, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and so on.
  • those skilled in the art can understand that the structure shown in FIG. 8 does not constitute a limitation on the autoencoder-based anomaly detection device provided in the present application, which may include more or fewer components than shown in the figure, combine certain components, or have a different component arrangement.
  • the blockchain referred to in this application is a new application mode of computer technology such as distributed data storage, point-to-point transmission, consensus mechanism, and encryption algorithm.
  • a blockchain is essentially a decentralized database: a chain of data blocks generated in association with each other using cryptographic methods. Each data block contains a batch of network transaction information, which is used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium, and the computer-readable storage medium may also be a volatile computer-readable storage medium.
  • the computer-readable storage medium stores instructions, and when the instructions run on a computer, the computer executes the steps of the abnormality detection method based on the self-encoder.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

This application relates to the field of artificial intelligence, and provides an autoencoder-based anomaly detection method, apparatus, device, and storage medium. The method includes: inputting unlabeled samples into an encoder for encoding, randomly assigning labels to the resulting unlabeled sample features and inputting them into a positive sample decoder and a negative sample decoder respectively for decoding, computing the reconstruction error of the unlabeled samples, revising the labels of the unlabeled samples according to the reconstruction error, modifying the network parameters of the encoder and decoders, feeding the samples back into the encoder for reconstruction, and iterating until the labels of the unlabeled samples no longer change, then determining the abnormal samples according to the labels of the unlabeled samples. This application performs anomaly detection through iterative reconstruction by the autoencoder rather than by building a model, which avoids the problems that a preset threshold is difficult to determine and that the model overfits, and achieves high anomaly detection accuracy and strong applicability. In addition, this application relates to blockchain technology: the detected samples can be stored in a blockchain.

Description

Autoencoder-based anomaly detection method, apparatus, device, and storage medium
This application claims priority to the Chinese patent application filed with the China Patent Office on June 30, 2020, with application number 202010611195.9 and invention title "Autoencoder-based anomaly detection method, apparatus, device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of artificial intelligence, and in particular to an autoencoder-based anomaly detection method, apparatus, device, and storage medium.
Background
In the field of artificial intelligence, anomaly detection aims to identify data that does not conform to the expected normal pattern. Such data may come from new categories or be meaningless noisy data; it has no precise definition and is therefore difficult to collect or verify.
Positive samples can be well characterized by training data, but because of the laziness of classifiers, traditional methods either build a model for the positive samples and then flag violating examples as outliers, or explicitly isolate outliers according to statistical or geometric measures of abnormality. They usually use linear models, whose capacity is limited. Although kernel functions can be used to improve performance, they are still unsuitable for high-dimensional massive data.
In recent years deep learning has gradually risen and succeeded in many areas. However, the inventors realized that, without negative samples, it is difficult to directly train a supervised deep neural network for a one-class classifier. Even if the one-class classifiers attempted so far can build a discriminative model for anomaly detection, detection still has to be completed by choosing a predefined threshold. Since outliers are unpredictable, it is hard to determine a threshold that suits all situations. Meanwhile, because the model is trained only on the samples, overfitting arises, and the generalization performance of the model is low.
Summary of the Invention
The main purpose of this application is to solve the technical problems that current anomaly detection by building a model makes the preset threshold difficult to determine and causes overfitting.
A first aspect of this application provides an autoencoder-based anomaly detection method, including: inputting unlabeled samples into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and randomly assigning a first label to the unlabeled sample features; inputting the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction, to obtain first reconstruction data and second reconstruction data; calculating a reconstruction error of the unlabeled samples according to the first reconstruction data and the second reconstruction data; determining a second label of the unlabeled samples according to the reconstruction error; judging whether the second label is the same as the first label; if they are the same, determining the abnormality of the unlabeled samples according to the second label; if they are not the same, updating the content of the first label to the content of the second label, and returning to the step of inputting the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction.
A second aspect of this application provides an autoencoder-based anomaly detection apparatus, including: a dimensionality reduction module, configured to input unlabeled samples into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and randomly assign a first label to the unlabeled sample features; a reconstruction module, configured to input the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction, to obtain first reconstruction data and second reconstruction data; a calculation module, configured to calculate the reconstruction error of the unlabeled samples according to the first reconstruction data and the second reconstruction data; a judgment module, configured to determine a second label of the unlabeled samples according to the reconstruction error, and judge whether the second label is the same as the first label; a determination module, configured to determine, when the second label is the same as the first label, the abnormality of the unlabeled samples according to the second label; and a loop module, configured to update, when the second label is not the same as the first label, the content of the first label to the content of the second label, and return to the step of inputting the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction.
A third aspect of this application provides an autoencoder-based anomaly detection device, including a memory and at least one processor, the memory storing instructions, the memory and the at least one processor being interconnected through a line; the at least one processor calls the instructions in the memory so that the autoencoder-based anomaly detection device executes the steps of the autoencoder-based anomaly detection method described in the first aspect above, from inputting the unlabeled samples into the encoder for dimensionality reduction through iterating the reconstruction until the second label is the same as the first label and the abnormality of the unlabeled samples is determined.
A fourth aspect of this application provides a computer-readable storage medium storing instructions that, when run on a computer, cause the computer to execute the steps of the autoencoder-based anomaly detection method described in the first aspect above.
In the technical solution of this application, unlabeled samples are input into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and a first label is randomly assigned to the unlabeled sample features; the unlabeled sample features carrying the first label are input into the positive sample decoder and the negative sample decoder respectively for data reconstruction, to obtain first reconstruction data and second reconstruction data; the reconstruction error of the unlabeled samples is calculated according to the first reconstruction data and the second reconstruction data;
a second label of the unlabeled samples is determined according to the reconstruction error; whether the second label is the same as the first label is judged; if they are the same, the abnormality of the unlabeled samples is determined according to the second label; if they are not the same, the content of the first label is updated to the content of the second label, and the step of inputting the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction is repeated. This application performs anomaly detection through iterative reconstruction by the autoencoder instead of building a model, proposes a new criterion for defining anomalies, and avoids the problem that a preset threshold is difficult to determine; at the same time, anomaly detection is performed in a discriminative way, avoiding the problem of overfitting; the learning process of the autoencoder converges, the model is reliable, the robustness to the outlier ratio is higher, and computing resources are saved.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a first embodiment of the autoencoder-based anomaly detection method in an embodiment of this application;
FIG. 2 is a schematic diagram of a second embodiment of the autoencoder-based anomaly detection method in an embodiment of this application;
FIG. 3 is a schematic diagram of a third embodiment of the autoencoder-based anomaly detection method in an embodiment of this application;
FIG. 4 is a schematic diagram of a fourth embodiment of the autoencoder-based anomaly detection method in an embodiment of this application;
FIG. 5 is a schematic diagram of a fifth embodiment of the autoencoder-based anomaly detection method in an embodiment of this application;
FIG. 6 is a schematic diagram of one embodiment of the autoencoder-based anomaly detection apparatus in an embodiment of this application;
FIG. 7 is a schematic diagram of another embodiment of the autoencoder-based anomaly detection apparatus in an embodiment of this application;
FIG. 8 is a schematic diagram of one embodiment of the autoencoder-based anomaly detection device in an embodiment of this application.
Detailed Description of the Embodiments
The embodiments of this application provide an autoencoder-based anomaly detection method, apparatus, device, and storage medium. In the technical solution of this application, unlabeled samples are input into the encoder for dimensionality reduction processing to obtain unlabeled sample features, and a first label is randomly assigned to them; the unlabeled sample features carrying the first label are input into the positive sample decoder and the negative sample decoder respectively for data reconstruction, to obtain first reconstruction data and second reconstruction data; the reconstruction error of the unlabeled samples is calculated from the first and second reconstruction data;
a second label of the unlabeled samples is determined according to the reconstruction error and compared with the first label; if they are the same, the abnormality of the unlabeled samples is determined according to the second label; if they are not the same, the content of the first label is updated to the content of the second label and the reconstruction step is repeated. This application performs anomaly detection through iterative reconstruction by the autoencoder instead of building a model, proposes a new criterion for defining anomalies, avoids the problem that a preset threshold is difficult to determine, performs anomaly detection in a discriminative way, and avoids overfitting; the learning process of the autoencoder converges, the model is reliable, the robustness to the outlier ratio is higher, and computing resources are saved.
The terms "first", "second", "third", "fourth", etc. (if any) in the specification, claims, and above drawings of this application are used to distinguish similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable under appropriate circumstances, so that the embodiments described here can be implemented in orders other than those illustrated or described here. In addition, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to the process, method, product, or device.
For ease of understanding, the specific flow of an embodiment of this application is described below. Referring to FIG. 1, a first embodiment of the autoencoder-based anomaly detection method in an embodiment of this application includes:
101. Input unlabeled samples into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and randomly assign a first label to the unlabeled sample features.
It can be understood that the execution subject of this application may be the autoencoder-based anomaly detection apparatus, or a terminal or a server, which is not specifically limited here. The embodiments of this application are described with the server as the execution subject.
It should be emphasized that, to ensure the privacy and security of the unlabeled samples and the detected samples, the unlabeled samples and the detected samples can be stored in nodes of a blockchain.
In this embodiment, anomaly detection aims to identify data that does not conform to the expected normal pattern. Therefore, previously obtained data known to conform to the normal pattern can be taken as positive samples, and data not yet known to conform to the expected normal pattern can be taken as unlabeled samples. The unlabeled samples contain both data that conforms to the expected normal pattern and data that does not; the method of this application can identify which unlabeled samples conform to the expected normal pattern and which do not, thereby achieving the purpose of anomaly detection.
In this embodiment, the data sets used for anomaly detection are the MNIST data set and the KDD Cup 1999 network intrusion data set (KDD). The sample set is divided into positive sample data and negative sample data according to its class labels. To apply semi-supervised learning, the labeled positive samples consist of 80% of the normal data, and the unlabeled samples consist of the remaining 20% of the normal data plus all abnormal data. The model therefore trains the positive sample decoder using only normal data, and is tested using both normal and abnormal data.
In practical applications, the encoder and decoders need to be constructed first. The neural network type selected for the encoder may include a fully connected network, a convolutional neural network, or a recurrent neural network, determined mainly by the properties of the data; selecting the network type according to the properties of the sample data reduces the amount of computation and improves efficiency. In this embodiment, a fully connected network is selected as the neural network type of the encoder and the decoders; each encoder and decoder consists of two hidden layers, and the structure is symmetric.
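The symmetric fully connected structure described above can be sketched as follows. This is a minimal NumPy forward pass with hypothetical layer sizes (784-dimensional input, a 16-dimensional code, tanh nonlinearities), not the application's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layers(sizes):
    # One (W, b) pair per layer; e.g. sizes [784, 256, 64, 16] gives
    # two hidden layers between the input and the low-dimensional code.
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    # Affine map followed by a nonlinearity at every layer.
    for w, b in layers:
        x = np.tanh(x @ w + b)
    return x

enc_sizes = [784, 256, 64, 16]   # encoder: two hidden layers down to a 16-d code
dec_sizes = enc_sizes[::-1]      # decoder mirrors the encoder (symmetric structure)

encoder = init_layers(enc_sizes)
decoder = init_layers(dec_sizes)

x = rng.standard_normal((5, 784))   # five fake high-dimensional samples
code = forward(encoder, x)          # dimensionality reduction
recon = forward(decoder, code)      # reconstruction back to the input space
```

In the method itself there are two such decoders (positive and negative) sharing the one encoder.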
In this embodiment, high-dimensional sample data can be encoded into low-dimensional sample data through the multilayer neural network selected for the encoder. In this process, a regularization term can be added to separate the positive and negative sample data to a certain extent; the low-dimensional data is then decoded by the decoder back into high-dimensional sample data of the same dimensionality as before, completing the whole reconstruction process.
In practical applications, samples can be preprocessed before being input into the autoencoder, for example by data normalization. Normalization scales the data proportionally so that it falls into a small specific interval; its purpose is to remove errors caused by differing dimensional units, intrinsic variation, or large numerical differences. Normalization methods include min-max normalization, z-score normalization, atan (arctangent) normalization, and log-function normalization.
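The four normalization options named here can be sketched as column-wise NumPy functions (an illustration of the standard formulas, not code from the application; the log variant assumes strictly positive data):

```python
import numpy as np

def min_max(x):
    # Scale each column into [0, 1].
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / (hi - lo)

def z_score(x):
    # Zero mean, unit variance per column.
    return (x - x.mean(axis=0)) / x.std(axis=0)

def atan_norm(x):
    # Arctangent normalization into (-1, 1).
    return np.arctan(x) * 2.0 / np.pi

def log_norm(x):
    # Log normalization for positive data, scaled by each column's maximum.
    return np.log10(x) / np.log10(x.max(axis=0))

data = np.array([[1.0, 10.0], [5.0, 100.0], [9.0, 1000.0]])
print(min_max(data)[0])  # first row holds the column minima -> [0. 0.]
```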
In this embodiment, labels need to be assigned to the unlabeled samples after encoding, where the labels are 0 and 1: 0 indicates that the unlabeled sample is an abnormal sample, and 1 indicates that it is a normal sample. After the reconstruction process through the encoder and decoders, during the iterative loop the unlabeled samples no longer need labels assigned at random; instead, their labels are reassigned by computing the reconstruction errors of the reconstruction process.
102. Input the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction, to obtain first reconstruction data and second reconstruction data.
In this embodiment, after the encoder maps the input samples into a low-dimensional subspace through an affine map followed by a nonlinearity, the decoder maps the samples in the low-dimensional subspace back to the original input space as a reconstruction through the same kind of transformation as the encoder; the data obtained after reconstruction is the reconstruction data. The positive sample decoder is obtained by training with the labeled positive samples as the training set, and the labeled positive samples consist of 80% of the normal data.
103. Calculate the reconstruction error of the unlabeled samples according to the first reconstruction data and the second reconstruction data.
In this embodiment, after the encoder maps the input samples into the low-dimensional subspace and the decoder maps them back to the original input space as a reconstruction, the difference between the sample input to the encoder and the sample output by the decoder is the reconstruction error.
104. Determine a second label of the unlabeled samples according to the reconstruction error.
In this embodiment, by comparing the reconstruction error computed from the first reconstruction data output by the positive sample decoder with the reconstruction error computed from the second reconstruction data output by the negative sample decoder, the unlabeled sample can be relabeled, because the relative sizes of the two reconstruction errors indicate whether the unlabeled sample leans toward being a positive sample or a negative sample, that is, an abnormal sample. When the reconstruction error computed from the first reconstruction data output by the positive sample decoder is smaller, the unlabeled sample leans toward being a positive sample; when the reconstruction error computed from the second reconstruction data output by the negative sample decoder is smaller, the unlabeled sample leans toward being an abnormal sample; the unlabeled sample is relabeled accordingly.
105. Judge whether the second label is the same as the first label.
106. If they are the same, determine the abnormality of the unlabeled sample according to the second label.
In this embodiment, the sample set includes multiple unlabeled samples. When the second labels assigned to all unlabeled samples are the same as their first labels, the process of reconstructing and assigning labels stops, and whether each unlabeled sample is abnormal is determined according to the content of its label.
107. If they are not the same, update the content of the first label to the content of the second label, and return to step 102.
In this embodiment, when the second label assigned to an unlabeled sample through reconstruction differs from the first label assigned before reconstruction, the label assigned before reconstruction was not the correct one, so the label needs to be reassigned and reconstruction repeated to check whether the reassigned label is correct. Through multiple reconstructions, the label of the unlabeled sample is finally determined, and whether the unlabeled sample is an abnormal sample is determined from the label content.
In this embodiment, unlabeled samples are input into the encoder for dimensionality reduction processing to obtain unlabeled sample features, a first label is randomly assigned to them, the features carrying the first label are input into the positive sample decoder and the negative sample decoder respectively for data reconstruction to obtain first and second reconstruction data, and the reconstruction error of the unlabeled samples is calculated from these data;
a second label is determined from the reconstruction error and compared with the first label; if they are the same, the abnormality of the unlabeled samples is determined from the second label; if not, the first label is updated to the second label and the reconstruction step is repeated. This application performs anomaly detection through iterative reconstruction by the autoencoder instead of building a model, proposes a new criterion for defining anomalies, avoids the problem that a preset threshold is difficult to determine, and performs anomaly detection in a discriminative way, avoiding overfitting; the learning process of the autoencoder converges, the model is reliable, the robustness to the outlier ratio is higher, and computing resources are saved.
Referring to FIG. 2, a second embodiment of the autoencoder-based anomaly detection method in an embodiment of this application includes:
201. Input unlabeled samples into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and randomly assign a first label to the unlabeled sample features.
202. Input the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction, to obtain first reconstruction data and second reconstruction data.
Steps 201-202 above are similar to steps 101-102 in the first embodiment and are not repeated here.
203. Calculate the squares of the norms of the differences between the unlabeled samples and the first reconstruction data and between the unlabeled samples and the second reconstruction data respectively, to obtain a positive reconstruction error and a negative reconstruction error.
In this embodiment, the reconstruction error of the unlabeled samples can be divided into a positive reconstruction error and a negative reconstruction error. The positive reconstruction error is the error obtained by encoding the unlabeled sample, reconstructing it through the positive sample decoder, and computing the difference between the reconstruction data and the original unlabeled sample; the negative reconstruction error is the error obtained in the same way through the negative sample decoder. They can be obtained by computing the squared 2-norm, with the calculation formulas:

$D_{in} = \left\| X_u^j - R_{in}(X_u^j) \right\|_2^2$

$D_{out} = \left\| X_u^j - R_{out}(X_u^j) \right\|_2^2$

where $D_{in}$ is the positive reconstruction error, $D_{out}$ is the negative reconstruction error, $X_u$ denotes the unlabeled samples, $X_u^j$ is the j-th unlabeled sample, $R_{in}(X)$ is the first reconstruction data, and $R_{out}(X)$ is the second reconstruction data.
In practical applications, the most commonly used norm-based metric is the Minkowski distance: when the order of the norm is 1, the Minkowski distance is the Manhattan distance, and when the order of the norm is 2, the Minkowski distance is the Euclidean distance.
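Under the definitions above, the two reconstruction errors are squared Euclidean (order-2 Minkowski) distances between each sample and its two reconstructions. A minimal NumPy sketch, with made-up arrays standing in for the decoder outputs:

```python
import numpy as np

def reconstruction_errors(x_u, r_in, r_out):
    # D_in[j]  = ||x_u[j] - R_in(x_u[j])||_2^2   (positive reconstruction error)
    # D_out[j] = ||x_u[j] - R_out(x_u[j])||_2^2  (negative reconstruction error)
    d_in = np.sum((x_u - r_in) ** 2, axis=1)
    d_out = np.sum((x_u - r_out) ** 2, axis=1)
    return d_in, d_out

x_u = np.array([[0.0, 0.0], [1.0, 1.0]])
r_in = np.array([[0.1, 0.0], [1.0, 1.0]])    # stand-in positive-decoder output
r_out = np.array([[1.0, 1.0], [1.0, 0.0]])   # stand-in negative-decoder output

d_in, d_out = reconstruction_errors(x_u, r_in, r_out)
print(d_in)   # [0.01 0.  ]
print(d_out)  # [2. 1.]
```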
204. Determine a second label of the unlabeled samples according to the positive reconstruction error and the negative reconstruction error.
205. Judge whether the second label is the same as the first label.
206. If they are the same, determine the abnormality of the unlabeled sample according to the second label.
207. If they are not the same, update the content of the first label to the content of the second label, and return to step 202.
Steps 204-207 in this embodiment are similar to steps 104-107 in the first embodiment and are not repeated here.
On the basis of the previous embodiment, this embodiment adds the process of computing the reconstruction errors. By computing the positive reconstruction error and the negative reconstruction error of the samples during reconstruction, the second label of the unlabeled samples is determined. As reconstruction is repeated, the positive reconstruction error tends to become smaller while the negative reconstruction error tends to become larger; in this process, labels are assigned to the unlabeled samples by comparing the sizes of the positive and negative reconstruction errors, and in the end whether a sample is abnormal can be determined from its label.
Referring to FIG. 3, a third embodiment of the autoencoder-based anomaly detection method in an embodiment of this application includes:
301. Input unlabeled samples into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and randomly assign a first label to the unlabeled sample features.
302. Input the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction, to obtain first reconstruction data and second reconstruction data.
303. Calculate the reconstruction error of the unlabeled samples according to the first reconstruction data and the second reconstruction data.
304. Determine a second label of the unlabeled samples according to the reconstruction error.
305. Judge whether the second label is the same as the first label.
306. If they are the same, determine the abnormality of the unlabeled sample according to the second label.
Steps 301-306 in this embodiment are similar to steps 101-106 in the first embodiment and are not repeated here.
307. If they are not the same, update the content of the first label to the content of the second label.
308. Input positive samples into the encoder for dimensionality reduction processing to obtain positive sample features of the positive samples.
309. Calculate a first loss function of the encoder according to the positive samples, the unlabeled samples, the positive sample features, and the unlabeled sample features.
In this embodiment, the first loss function penalizes embedded samples that the affinity matrix marks as similar but that lie far apart in the low-dimensional subspace:

$\mathcal{L}_E = \sum_{i} \sum_{j} W_{ij} \left\| E(X^i) - E(X^j) \right\|_2^2$

where the indices i and j run over the m positive samples and the n unlabeled samples, m is the number of positive samples, n is the number of unlabeled samples, $X_p$ denotes the positive samples, $X_p^i$ is the i-th positive sample, $E(X)$ denotes the low-dimensional subspace feature of sample X, and W is the regularization term.
In this embodiment, in the process of inputting the positive samples and the unlabeled samples into the encoder for dimensionality reduction, the positive samples and the unlabeled samples are mapped into the same low-dimensional space. In the mapping process, regularization is added: a block-symmetric affinity matrix is computed as the regularization term to constrain similar labeled positive samples to lie in neighboring regions of the space. The purpose is to strengthen the data reconstruction capability of the positive sample decoder and to better preserve the structural features of the positive sample data retained in the low-dimensional subspace, so that normal values and abnormal points can be better distinguished and the model accuracy is improved. The block-symmetric affinity matrix W is used as the regularization term, and its calculation formula is:

$W_{ij} = \begin{cases} \exp\!\left(-\dfrac{D(X_i, X_j)}{\in}\right), & X_j \in N_i \ \text{or} \ X_i \in N_j \\ 0, & \text{otherwise} \end{cases}$

where $D(X_i, X_j)$ is the distance metric of the data, $N_i$ is the neighborhood of the i-th data point, $N_j$ is the neighborhood of the j-th data point, and ∈ > 0 is a constant parameter. Through the first loss function, the network parameters of the autoencoder can be updated by back propagation so as to minimize the loss contributed by the regularization term to the greatest extent.
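One illustrative reading of this regularizer is sketched below: k-nearest-neighbour neighbourhoods, Euclidean distance, and weights computed among the labeled positive samples only. The neighbourhood size, the distance metric, and the toy data are all assumptions, not details fixed by the application:

```python
import numpy as np

def affinity_matrix(x_pos, eps=1.0, k=2):
    # W[i, j] = exp(-D(x_i, x_j) / eps) if x_j lies in the k-NN neighbourhood
    # of x_i or vice versa, else 0; symmetric by construction.
    m = len(x_pos)
    d = np.linalg.norm(x_pos[:, None, :] - x_pos[None, :, :], axis=-1)
    w = np.zeros((m, m))
    for i in range(m):
        nbrs = np.argsort(d[i])[1:k + 1]     # skip the point itself
        for j in nbrs:
            w[i, j] = w[j, i] = np.exp(-d[i, j] / eps)
    return w

def laplacian_penalty(w, z):
    # sum_ij W_ij * ||z_i - z_j||^2 over the embedded samples z = E(x).
    diff = z[:, None, :] - z[None, :, :]
    return float(np.sum(w * np.sum(diff ** 2, axis=-1)))

x_pos = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
w = affinity_matrix(x_pos)
assert np.allclose(w, w.T)   # block-symmetric affinity
```

Minimizing the penalty pulls similar positive samples together in the low-dimensional subspace, which is the stated purpose of the regularization term.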
310. Input the positive sample features into the positive sample decoder for data reconstruction to obtain a third reconstruction error.
311. Calculate a final loss function of the autoencoder according to the first loss function, adjust the network parameters of the autoencoder according to the final loss function, and return to step 302.
On the basis of the previous embodiments, this embodiment adds the process of computing the loss function of the encoder. By computing the loss function over the encoding of the positive samples and the unlabeled samples and adjusting the network parameters of the autoencoder through that loss function, the autoencoder is optimized and its reconstruction accuracy is improved. At the same time, a regularization term is added in the computation to constrain similar labeled positive samples to lie in neighboring regions of the space, thereby strengthening the data reconstruction capability of the positive sample decoder.
It can be understood that step 308 of inputting positive samples into the encoder for dimensionality reduction in this embodiment can be performed synchronously with step 301 of inputting unlabeled samples into the encoder for dimensionality reduction, that is, the labeled positive samples and the unlabeled samples are input into the encoder at the same time for dimensionality reduction. Further, step 310 of inputting the positive sample features into the positive sample decoder for data reconstruction can be performed synchronously with step 302 of inputting the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction.
Referring to FIG. 4, a fourth embodiment of the autoencoder-based anomaly detection method in an embodiment of this application includes:
401. Input unlabeled samples into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and randomly assign a first label to the unlabeled sample features.
402. Input the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction, to obtain first reconstruction data and second reconstruction data.
403. Calculate the reconstruction error of the unlabeled samples according to the first reconstruction data and the second reconstruction data.
404. Determine a second label of the unlabeled samples according to the reconstruction error.
405. Judge whether the second label is the same as the first label.
406. If they are the same, determine the abnormality of the unlabeled sample according to the second label.
Steps 401-406 in this embodiment are similar to steps 101-106 in the first embodiment and are not repeated here.
407. If they are not the same, update the content of the first label to the content of the second label.
408. Input positive samples into the encoder for dimensionality reduction processing to obtain positive sample features of the positive samples.
409. Calculate a first loss function of the encoder according to the positive samples, the unlabeled samples, the positive sample features, and the unlabeled sample features.
410. Input the positive sample features into the positive sample decoder for data reconstruction to obtain a third reconstruction error.
411. Calculate the average competitive reconstruction error over all samples (the unlabeled samples and the positive samples) according to the positive samples, the unlabeled samples, the third reconstruction data, the positive reconstruction error, and the negative reconstruction error.
In this embodiment, the average competitive reconstruction error over all samples is:

$\mathcal{L}_R = \frac{1}{m+n} \left[ \sum_{i=1}^{m} \left\| X_p^i - R_{in}(X_p^i) \right\|_2^2 + \sum_{j=1}^{n} \left( y_j \left\| X_u^j - R_{in}(X_u^j) \right\|_2^2 + (1 - y_j) \left\| X_u^j - R_{out}(X_u^j) \right\|_2^2 \right) \right]$

where m is the number of positive samples, n is the number of unlabeled samples, $X_p$ denotes the positive samples, $X_p^i$ is the i-th positive sample, $y_j$ denotes the predicted label of the j-th unlabeled sample, $X_u$ denotes the unlabeled samples, $X_u^j$ is the j-th unlabeled sample, $R_{in}(X)$ is the reconstruction data output by the positive sample decoder, including the third reconstruction data $R_{in}(X_p^i)$ and the first reconstruction data $R_{in}(X_u^j)$, and $R_{out}(X)$ is the reconstruction data output by the negative sample decoder, that is, the second reconstruction data $R_{out}(X_u^j)$.
The smaller the average competitive reconstruction error over all samples, the better the model; adjusting the network parameters of the autoencoder according to this error makes the autoencoder more accurate.
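Written out in code, the average competitive reconstruction error combines the positive samples' errors with a label-gated choice of decoder for each unlabeled sample. The sketch below is a direct NumPy transcription of the formula, with toy arrays in place of real decoder outputs:

```python
import numpy as np

def competitive_error(xp, rin_p, xu, rin_u, rout_u, y):
    # (1 / (m + n)) * [ sum_i ||xp_i - R_in(xp_i)||^2
    #   + sum_j ( y_j * ||xu_j - R_in(xu_j)||^2
    #           + (1 - y_j) * ||xu_j - R_out(xu_j)||^2 ) ]
    m, n = len(xp), len(xu)
    pos = np.sum((xp - rin_p) ** 2)
    gated = y * np.sum((xu - rin_u) ** 2, axis=1) \
        + (1 - y) * np.sum((xu - rout_u) ** 2, axis=1)
    return (pos + gated.sum()) / (m + n)

xp = np.zeros((2, 3)); rin_p = np.zeros((2, 3))   # perfectly reconstructed positives
xu = np.ones((2, 3))
rin_u = np.ones((2, 3)); rout_u = np.zeros((2, 3))
y = np.array([1.0, 0.0])                          # one predicted normal, one abnormal

print(competitive_error(xp, rin_p, xu, rin_u, rout_u, y))  # 0.75
```

Each unlabeled sample contributes only the error of the decoder its current label assigns it to, which is what makes the two decoders "compete" for the samples.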
412. Calculate the final loss function of the autoencoder according to the first loss function and the average competitive reconstruction error.
In this embodiment, through the first loss function and the average competitive reconstruction error over all samples, the final loss function of the autoencoder over the whole reconstruction process can be obtained. The calculation formula of the final loss function is:

$\mathcal{L} = \mathcal{L}_R + \lambda \, \mathcal{L}_E$

where $\mathcal{L}$ is the final loss function, λ > 0 is a constant parameter that controls the relative importance of the regularization term, and $\mathcal{L}_E$ is the first loss function of the encoder. To optimize the final loss function, a method similar to stochastic gradient descent can be used to train the model.
413. Update the network parameters of the autoencoder by back propagation according to the final loss function.
414. Adjust the autoencoder based on the network parameters, and return to step 402.
On the basis of the previous embodiment, this embodiment describes in detail the label update process of the unlabeled samples. Through repeated reconstruction iterations, labels are assigned to the unlabeled samples in each iteration until the labels of all samples no longer change. Because the positive sample decoder is trained on positive samples, after each reconstruction the reconstruction error of normal samples in the positive sample decoder becomes smaller and smaller, while that of abnormal samples becomes larger and larger; thus, when the sample labels finally remain unchanged, the positive and negative samples among the unlabeled samples can be determined from the labels.
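The overall iteration — reconstruct, relabel by comparing the two decoders' errors, stop when no label changes — can be sketched as the following skeleton. The `recon_pos`/`recon_neg` functions are crude stand-ins for the trained decoders (the positive one reconstructs small points well, the negative one reconstructs far-away points well); they are illustrative assumptions, not the application's models:

```python
import numpy as np

def recon_pos(x):
    # Stand-in positive decoder: reconstructs points near the origin exactly.
    return np.clip(x, -1.0, 1.0)

def recon_neg(x):
    # Stand-in negative decoder: reconstructs far-away points well.
    return x * 0.5 + 4.0

def detect(x_u, max_iters=100):
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=len(x_u))   # random first labels
    for _ in range(max_iters):
        d_in = np.sum((x_u - recon_pos(x_u)) ** 2, axis=1)
        d_out = np.sum((x_u - recon_neg(x_u)) ** 2, axis=1)
        new_labels = (d_in < d_out).astype(int)  # 1 = normal, 0 = abnormal
        if np.array_equal(new_labels, labels):   # labels stable: stop iterating
            break
        labels = new_labels
        # (in the full method the autoencoder's parameters are also re-trained
        #  here, via the final loss function, before the next reconstruction)
    return labels

x_u = np.array([[0.1, 0.2], [8.0, 9.0], [0.0, 0.3]])
print(detect(x_u))  # [1 0 1]: the far-away sample is flagged abnormal
```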
Referring to FIG. 5, a fifth embodiment of the autoencoder-based anomaly detection method in an embodiment of this application includes:
501. Input unlabeled samples into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and randomly assign a first label to the unlabeled sample features.
502. Input the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction, to obtain first reconstruction data and second reconstruction data.
503. Calculate the reconstruction error of the unlabeled samples according to the first reconstruction data and the second reconstruction data.
Steps 501-503 in this embodiment are similar to steps 101-103 in the first embodiment and are not repeated here.
504. Judge whether the positive reconstruction error in the reconstruction error is smaller than the negative reconstruction error.
505. If it is smaller, determine that the second label of the unlabeled sample is a label representing a normal sample.
506. If it is not smaller, determine that the second label of the unlabeled sample is a label representing an abnormal sample.
In this embodiment, when samples are input into the autoencoder for reconstruction for the first time, labels are randomly assigned to the unlabeled samples, while the positive samples already carry labels. The labels are 0 and 1, where 0 means the sample is an abnormal sample and 1 means it is a normal sample. Since at the first reconstruction it is not yet known which of the unlabeled samples are normal and which are abnormal, labels must first be assigned at random and are then reassigned through continual iterative updates. The label update formula is:

$y_j = \begin{cases} 1, & D_{in}^j < D_{out}^j \\ 0, & D_{in}^j \ge D_{out}^j \end{cases}$

where $D_{in}^j$ is the reconstruction error obtained after the unlabeled sample is input into the positive sample decoder, and $D_{out}^j$ is the reconstruction error obtained after the unlabeled sample is input into the negative sample decoder. For the same unlabeled sample, comparing the sizes of the reconstruction errors output by the two decoders determines whether the label to be assigned to it is 0 or 1: when $D_{in}^j < D_{out}^j$, the reconstruction error of the positive sample decoder is smaller, that is, the unlabeled sample tends to be a normal sample; when $D_{in}^j \ge D_{out}^j$, the unlabeled sample tends to be an abnormal sample.
507. Judge whether the second label is the same as the first label.
508. If they are the same, determine the abnormality of the unlabeled sample according to the second label.
509. If they are not the same, update the content of the first label to the content of the second label, and return to step 502.
On the basis of the previous embodiment, this embodiment describes in detail the process of determining the second label of the unlabeled samples. The label of an unlabeled sample can be 0 or 1, where 0 means the sample is abnormal and 1 means it is normal. By comparing the reconstruction error computed from the first reconstruction data output by the positive sample decoder with the reconstruction error computed from the second reconstruction data output by the negative sample decoder, the unlabeled sample can be relabeled, because the relative sizes of the two reconstruction errors indicate whether the unlabeled sample leans toward being a positive sample or a negative (abnormal) sample; on this basis, the labels of the unlabeled samples can be quickly reassigned.
The autoencoder-based anomaly detection method in the embodiments of this application has been described above; the autoencoder-based anomaly detection apparatus in the embodiments of this application is described below. Referring to FIG. 6, one embodiment of the autoencoder-based anomaly detection apparatus in an embodiment of this application includes:
a dimensionality reduction module 601, configured to input unlabeled samples into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and randomly assign a first label to the unlabeled sample features;
a reconstruction module 602, configured to input the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction, to obtain first reconstruction data and second reconstruction data;
a calculation module 603, configured to calculate the reconstruction error of the unlabeled samples according to the first reconstruction data and the second reconstruction data;
a judgment module 604, configured to determine a second label of the unlabeled samples according to the reconstruction error, and judge whether the second label is the same as the first label;
a determination module 605, configured to determine, when the second label is the same as the first label, the abnormality of the unlabeled samples according to the second label;
a loop module 606, configured to update, when the second label is not the same as the first label, the content of the first label to the content of the second label, and return to the step of inputting the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction.
It should be emphasized that, to ensure the privacy and security of the positive samples and negative samples, the positive samples and negative samples can be stored in nodes of a blockchain.
In the embodiments of this application, by running the autoencoder-based anomaly detection method, the autoencoder-based anomaly detection apparatus inputs unlabeled samples into the encoder for dimensionality reduction to obtain unlabeled sample features, randomly assigns them a first label, reconstructs the features through the positive and negative sample decoders to obtain first and second reconstruction data, and calculates the reconstruction error of the unlabeled samples;
a second label is determined from the reconstruction error and compared with the first label; if they are the same, the abnormality of the unlabeled samples is determined from the second label; if not, the first label is updated to the second label and the reconstruction step is repeated. This application performs anomaly detection through iterative reconstruction by the autoencoder instead of building a model, proposes a new criterion for defining anomalies, avoids the problem that a preset threshold is difficult to determine, and performs anomaly detection in a discriminative way, avoiding overfitting; the learning process of the autoencoder converges, the model is reliable, the robustness to the outlier ratio is higher, and computing resources are saved.
Referring to FIG. 7, another embodiment of the autoencoder-based anomaly detection apparatus in an embodiment of this application includes:
a dimensionality reduction module 601, configured to input unlabeled samples into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and randomly assign a first label to the unlabeled sample features;
a reconstruction module 602, configured to input the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction, to obtain first reconstruction data and second reconstruction data;
a calculation module 603, configured to calculate the reconstruction error of the unlabeled samples according to the first reconstruction data and the second reconstruction data;
a judgment module 604, configured to determine a second label of the unlabeled samples according to the reconstruction error, and judge whether the second label is the same as the first label;
a determination module 605, configured to determine, when the second label is the same as the first label, the abnormality of the unlabeled samples according to the second label;
a loop module 606, configured to update, when the second label is not the same as the first label, the content of the first label to the content of the second label, and return to the step of inputting the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction.
Optionally, the calculation module 603 is specifically configured to:
calculate the square of the norm of the difference between the unlabeled samples and the first reconstruction data to obtain the positive reconstruction error;
calculate the square of the norm of the difference between the unlabeled samples and the second reconstruction data to obtain the negative reconstruction error.
The autoencoder-based anomaly detection apparatus further includes a parameter adjustment module 607, and the parameter adjustment module 607 includes:
a positive sample dimensionality reduction unit 6071, configured to input positive samples into the encoder for dimensionality reduction processing to obtain positive sample features of the positive samples;
a positive sample reconstruction unit 6072, configured to input the positive sample features into the positive sample decoder for data reconstruction to obtain a third reconstruction error;
an adjustment unit 6073, configured to calculate the final loss function of the autoencoder and adjust the network parameters of the autoencoder according to the final loss function.
The parameter adjustment module 607 further includes a first loss calculation unit 6074, and the first loss calculation unit 6074 is specifically configured to:
calculate the first loss function of the encoder according to the positive samples, the unlabeled samples, the positive sample features, and the unlabeled sample features.
The parameter adjustment module 607 further includes a competitive error unit 6075, and the competitive error unit 6075 is specifically configured to:
calculate the average competitive reconstruction error over all samples (the unlabeled samples and the positive samples) according to the positive samples, the unlabeled samples, the third reconstruction data, the positive reconstruction error, and the negative reconstruction error.
Optionally, the adjustment unit 6073 is specifically configured to:
calculate the final loss function of the autoencoder according to the first loss function and the average competitive reconstruction error;
update the network parameters of the autoencoder by back propagation according to the final loss function;
adjust the autoencoder based on the network parameters.
Optionally, the judgment module 604 is specifically configured to:
judge whether the positive reconstruction error is smaller than the negative reconstruction error;
if it is smaller, determine that the second label of the unlabeled sample is a label representing a normal sample;
if it is not smaller, determine that the second label of the unlabeled sample is a label representing an abnormal sample.
On the basis of the previous embodiment, this embodiment describes the specific functions of each module in detail and adds multiple module functions: the first loss calculation unit and the competitive error unit are used to calculate the final loss function of the autoencoder during the reconstruction process, and through back propagation of the final loss function the parameters of the autoencoder's neural network are adjusted so that the performance of the autoencoder becomes better and better.
FIGS. 6 and 7 above describe in detail the autoencoder-based anomaly detection apparatus in the embodiments of this application from the perspective of modular functional entities; the autoencoder-based anomaly detection device in the embodiments of this application is described in detail below from the perspective of hardware processing.
FIG. 8 is a schematic structural diagram of an autoencoder-based anomaly detection device provided by an embodiment of this application. The autoencoder-based anomaly detection device 800 may differ considerably depending on configuration or performance, and may include one or more processors (central processing units, CPU) 810 (for example, one or more processors), a memory 820, and one or more storage media 830 (for example, one or more mass storage devices) storing application programs 833 or data 832. The memory 820 and the storage medium 830 may be short-term storage or persistent storage. The program stored in the storage medium 830 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the autoencoder-based anomaly detection device 800. Further, the processor 810 may be configured to communicate with the storage medium 830 and execute, on the autoencoder-based anomaly detection device 800, the series of instruction operations in the storage medium 830.
The autoencoder-based anomaly detection device 800 may also include one or more power supplies 840, one or more wired or wireless network interfaces 850, one or more input/output interfaces 860, and/or one or more operating systems 831, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and so on. Those skilled in the art can understand that the device structure shown in FIG. 8 does not constitute a limitation on the autoencoder-based anomaly detection device provided by this application, which may include more or fewer components than shown, combine certain components, or have a different component arrangement.
The blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in association with each other using cryptographic methods, each data block containing a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block. A blockchain can include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
This application also provides a computer-readable storage medium, which may be a non-volatile computer-readable storage medium or a volatile computer-readable storage medium, storing instructions that, when run on a computer, cause the computer to execute the steps of the autoencoder-based anomaly detection method.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of this application, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used to illustrate the technical solution of this application, not to limit it. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments, or substitute equivalents for some of the technical features; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application.

Claims (20)

  1. An autoencoder-based anomaly detection method, wherein the autoencoder includes an encoder, a positive sample decoder, and a negative sample decoder, and the anomaly detection method includes:
    inputting unlabeled samples into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and randomly assigning a first label to the unlabeled sample features;
    inputting the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction, to obtain first reconstruction data and second reconstruction data;
    calculating a reconstruction error of the unlabeled samples according to the first reconstruction data and the second reconstruction data;
    determining a second label of the unlabeled samples according to the reconstruction error;
    judging whether the second label is the same as the first label;
    if they are the same, determining the abnormality of the unlabeled samples according to the second label;
    if they are not the same, updating the content of the first label to the content of the second label, and returning to the step of inputting the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction.
  2. The autoencoder-based anomaly detection method according to claim 1, wherein the reconstruction error includes a positive reconstruction error and a negative reconstruction error, and the calculating the reconstruction error of the unlabeled samples according to the first reconstruction data and the second reconstruction data includes:
    calculating the square of the norm of the difference between the unlabeled samples and the first reconstruction data to obtain the positive reconstruction error;
    calculating the square of the norm of the difference between the unlabeled samples and the second reconstruction data to obtain the negative reconstruction error.
  3. The autoencoder-based anomaly detection method according to claim 2, wherein after the updating the content of the first label to the content of the second label, the method further includes:
    inputting positive samples into the encoder for dimensionality reduction processing to obtain positive sample features of the positive samples;
    inputting the positive sample features into the positive sample decoder for data reconstruction to obtain a third reconstruction error;
    calculating a final loss function of the autoencoder, and adjusting network parameters of the autoencoder according to the final loss function.
  4. The autoencoder-based anomaly detection method according to claim 3, wherein after the inputting positive samples into the encoder for dimensionality reduction processing to obtain the positive sample features of the positive samples, the method further includes:
    calculating a first loss function of the encoder according to the positive samples, the unlabeled samples, the positive sample features, and the unlabeled sample features.
  5. The autoencoder-based anomaly detection method according to any one of claims 2 to 4, wherein after the inputting the positive sample features into the positive sample decoder for data reconstruction to obtain the third reconstruction error, the method further includes:
    calculating an average competitive reconstruction error over all samples, namely the unlabeled samples and the positive samples, according to the positive samples, the unlabeled samples, the third reconstruction data, the positive reconstruction error, and the negative reconstruction error.
  6. The autoencoder-based anomaly detection method according to claim 5, wherein the calculating the final loss function of the autoencoder and adjusting the network parameters of the autoencoder according to the final loss function includes:
    calculating the final loss function of the autoencoder according to the first loss function and the average competitive reconstruction error;
    updating the network parameters of the autoencoder by back propagation according to the final loss function;
    adjusting the autoencoder based on the network parameters.
  7. The autoencoder-based anomaly detection method according to claim 2, wherein the determining the second label of the unlabeled samples according to the reconstruction error includes:
    judging whether the positive reconstruction error is smaller than the negative reconstruction error;
    if it is smaller, determining that the second label of the unlabeled sample is a label representing a normal sample;
    if it is not smaller, determining that the second label of the unlabeled sample is a label representing an abnormal sample.
  8. An autoencoder-based anomaly detection apparatus, wherein the autoencoder-based anomaly detection apparatus includes:
    a dimensionality reduction module, configured to input unlabeled samples into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and randomly assign a first label to the unlabeled sample features;
    a reconstruction module, configured to input the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction, to obtain first reconstruction data and second reconstruction data;
    a calculation module, configured to calculate a reconstruction error of the unlabeled samples according to the first reconstruction data and the second reconstruction data;
    a judgment module, configured to determine a second label of the unlabeled samples according to the reconstruction error, and judge whether the second label is the same as the first label;
    a determination module, configured to determine, when the second label is the same as the first label, the abnormality of the unlabeled samples according to the second label;
    a loop module, configured to update, when the second label is not the same as the first label, the content of the first label to the content of the second label, and return to the step of inputting the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction.
  9. An autoencoder-based anomaly detection device, wherein the autoencoder-based anomaly detection device includes a memory and at least one processor, the memory storing instructions, the memory and the at least one processor being interconnected through a line;
    the at least one processor calls the instructions in the memory so that the autoencoder-based anomaly detection device executes the following steps of the autoencoder-based anomaly detection method:
    inputting unlabeled samples into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and randomly assigning a first label to the unlabeled sample features;
    inputting the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction, to obtain first reconstruction data and second reconstruction data;
    calculating a reconstruction error of the unlabeled samples according to the first reconstruction data and the second reconstruction data;
    determining a second label of the unlabeled samples according to the reconstruction error;
    judging whether the second label is the same as the first label;
    if they are the same, determining the abnormality of the unlabeled samples according to the second label;
    if they are not the same, updating the content of the first label to the content of the second label, and returning to the step of inputting the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction.
  10. The autoencoder-based anomaly detection device according to claim 9, wherein the reconstruction error includes a positive reconstruction error and a negative reconstruction error, and when the autoencoder-based anomaly detection device executes the step of calculating the reconstruction error of the unlabeled samples according to the first reconstruction data and the second reconstruction data, the step includes:
    calculating the square of the norm of the difference between the unlabeled samples and the first reconstruction data to obtain the positive reconstruction error;
    calculating the square of the norm of the difference between the unlabeled samples and the second reconstruction data to obtain the negative reconstruction error.
  11. The autoencoder-based anomaly detection device according to claim 10, wherein after the autoencoder-based anomaly detection device executes the step of updating the content of the first label to the content of the second label, the following steps are further included:
    inputting positive samples into the encoder for dimensionality reduction processing to obtain positive sample features of the positive samples;
    inputting the positive sample features into the positive sample decoder for data reconstruction to obtain a third reconstruction error;
    calculating a final loss function of the autoencoder, and adjusting the network parameters of the autoencoder according to the final loss function.
  12. The autoencoder-based anomaly detection device according to claim 11, wherein after the autoencoder-based anomaly detection device executes the step of inputting positive samples into the encoder for dimensionality reduction processing to obtain the positive sample features of the positive samples, the following step is further included:
    calculating a first loss function of the encoder according to the positive samples, the unlabeled samples, the positive sample features, and the unlabeled sample features.
  13. The autoencoder-based anomaly detection device according to any one of claims 10 to 12, wherein after the autoencoder-based anomaly detection device executes the step of inputting the positive sample features into the positive sample decoder for data reconstruction to obtain the third reconstruction error, the following step is further included:
    calculating an average competitive reconstruction error over all samples, namely the unlabeled samples and the positive samples, according to the positive samples, the unlabeled samples, the third reconstruction data, the positive reconstruction error, and the negative reconstruction error.
  14. The autoencoder-based anomaly detection device according to claim 13, wherein when the autoencoder-based anomaly detection device executes the step of calculating the final loss function of the autoencoder and adjusting the network parameters of the autoencoder according to the final loss function, the step includes:
    calculating the final loss function of the autoencoder according to the first loss function and the average competitive reconstruction error;
    updating the network parameters of the autoencoder by back propagation according to the final loss function;
    adjusting the autoencoder based on the network parameters.
  15. The autoencoder-based anomaly detection device according to claim 10, wherein when the autoencoder-based anomaly detection device executes the step of determining the second label of the unlabeled samples according to the reconstruction error, the step includes:
    judging whether the positive reconstruction error is smaller than the negative reconstruction error;
    if it is smaller, determining that the second label of the unlabeled sample is a label representing a normal sample;
    if it is not smaller, determining that the second label of the unlabeled sample is a label representing an abnormal sample.
  16. A computer-readable storage medium on which a computer program is stored, wherein, when the computer program is executed by a processor, the following steps of the autoencoder-based anomaly detection method are implemented:
    inputting unlabeled samples into the encoder for dimensionality reduction processing to obtain unlabeled sample features of the unlabeled samples, and randomly assigning a first label to the unlabeled sample features;
    inputting the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction, to obtain first reconstruction data and second reconstruction data;
    calculating a reconstruction error of the unlabeled samples according to the first reconstruction data and the second reconstruction data;
    determining a second label of the unlabeled samples according to the reconstruction error;
    judging whether the second label is the same as the first label;
    if they are the same, determining the abnormality of the unlabeled samples according to the second label;
    if they are not the same, updating the content of the first label to the content of the second label, and returning to the step of inputting the unlabeled sample features carrying the first label into the positive sample decoder and the negative sample decoder respectively for data reconstruction.
  17. The computer-readable storage medium according to claim 16, wherein the reconstruction error includes a positive reconstruction error and a negative reconstruction error, and when the computer program is executed by the processor to implement the step of calculating the reconstruction error of the unlabeled samples according to the first reconstruction data and the second reconstruction data, the step includes:
    calculating the square of the norm of the difference between the unlabeled samples and the first reconstruction data to obtain the positive reconstruction error;
    calculating the square of the norm of the difference between the unlabeled samples and the second reconstruction data to obtain the negative reconstruction error.
  18. The computer-readable storage medium according to claim 17, wherein after the computer program is executed by the processor to implement the step of updating the content of the first label to the content of the second label, the following steps are further included:
    inputting positive samples into the encoder for dimensionality reduction processing to obtain positive sample features of the positive samples;
    inputting the positive sample features into the positive sample decoder for data reconstruction to obtain a third reconstruction error;
    calculating a final loss function of the autoencoder, and adjusting the network parameters of the autoencoder according to the final loss function.
  19. The computer-readable storage medium according to claim 18, wherein after the computer program is executed by the processor to implement the step of inputting positive samples into the encoder for dimensionality reduction processing to obtain the positive sample features of the positive samples, the following step is further included:
    calculating a first loss function of the encoder according to the positive samples, the unlabeled samples, the positive sample features, and the unlabeled sample features.
  20. The computer-readable storage medium according to any one of claims 16 to 18, wherein after the computer program is executed by the processor to implement the step of inputting the positive sample features into the positive sample decoder for data reconstruction to obtain the third reconstruction error, the following step is further included:
    calculating an average competitive reconstruction error over all samples, namely the unlabeled samples and the positive samples, according to the positive samples, the unlabeled samples, the third reconstruction data, the positive reconstruction error, and the negative reconstruction error.
PCT/CN2020/118224 2020-06-30 2020-09-28 Autoencoder-based anomaly detection method, apparatus, device and storage medium WO2021139236A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010611195.9 2020-06-30
CN202010611195.9A CN111709491B (zh) 2020-06-30 Autoencoder-based anomaly detection method, apparatus, device and storage medium

Publications (1)

Publication Number Publication Date
WO2021139236A1 true WO2021139236A1 (zh) 2021-07-15

Family

ID=72543754

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/118224 WO2021139236A1 (zh) 2020-06-30 2020-09-28 Autoencoder-based anomaly detection method, apparatus, device and storage medium

Country Status (2)

Country Link
CN (1) CN111709491B (zh)
WO (1) WO2021139236A1 (zh)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709491B (zh) * 2020-06-30 2024-05-14 Ping An Technology (Shenzhen) Co., Ltd. Autoencoder-based anomaly detection method, apparatus, device and storage medium
US11570046B2 (en) * 2020-12-17 2023-01-31 Nokia Solutions And Networks Oy Method and apparatus for anomaly detection in a network
CN113067754B (zh) * 2021-04-13 2022-04-26 Nanjing University of Aeronautics and Astronautics Semi-supervised time-series anomaly detection method and system
CN113360694B (zh) * 2021-06-03 2022-09-27 Anhui University of Science and Technology Autoencoder-based detection and filtering method for malicious image query samples
CN113535452A (zh) * 2021-07-12 2021-10-22 Zhejiang iFLYTEK Intelligent Technology Co., Ltd. Data detection method and apparatus, electronic device and storage medium
CN114386067B (zh) * 2022-01-06 2022-08-23 Chengde Petroleum College Artificial-intelligence-based secure transmission method and system for equipment production data

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108898218A (zh) * 2018-05-24 2018-11-27 Alibaba Group Holding Limited Training method and apparatus for a neural network model, and computer device
CN109543727A (zh) * 2018-11-07 2019-03-29 Fudan University Semi-supervised anomaly detection method based on competitive reconstruction learning
WO2020017285A1 (ja) * 2018-07-20 2020-01-23 Nippon Telegraph and Telephone Corporation Anomaly detection device, anomaly detection method, and program
CN110895705A (zh) * 2018-09-13 2020-03-20 Fujitsu Limited Abnormal sample detection apparatus, and training apparatus and training method therefor
CN111709491A (zh) * 2020-06-30 2020-09-25 Ping An Technology (Shenzhen) Co., Ltd. Autoencoder-based anomaly detection method, apparatus, device and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10999247B2 (en) * 2017-10-24 2021-05-04 Nec Corporation Density estimation network for unsupervised anomaly detection
CN108881196B (zh) * 2018-06-07 2020-11-24 Civil Aviation University of China Semi-supervised intrusion detection method based on a deep generative model
SG11202105016PA (en) * 2018-11-15 2021-06-29 Uveye Ltd Method of anomaly detection and system thereof


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113657516A (zh) * 2021-08-20 2021-11-16 Taikang Insurance Group Co., Ltd. Method and apparatus for processing medical transaction data, electronic device and storage medium
CN113780387A (zh) * 2021-08-30 2021-12-10 Guilin University of Electronic Technology Time-series anomaly detection method based on a shared autoencoder
CN114330440A (zh) * 2021-12-28 2022-04-12 Marketing Service Center (Metering Center) of State Grid Shandong Electric Power Company Distributed power supply load anomaly identification method and system based on imitation-learning discrimination
CN114330440B (zh) * 2021-12-28 2024-04-05 Marketing Service Center (Metering Center) of State Grid Shandong Electric Power Company Distributed power supply load anomaly identification method and system based on imitation-learning discrimination
CN114493250A (zh) * 2022-01-17 2022-05-13 Beijing Qierbulaite Technology Co., Ltd. Abnormal behavior detection method, computing device and readable storage medium
CN114494772A (zh) * 2022-01-17 2022-05-13 FiberHome Telecommunication Technologies Co., Ltd. Imbalanced sample classification method and apparatus
CN114494772B (zh) * 2022-01-17 2024-05-14 FiberHome Telecommunication Technologies Co., Ltd. Imbalanced sample classification method and apparatus
CN114722061A (зh) * 2022-04-08 2022-07-08 China Telecom Corporation Limited Data processing method and apparatus, device, and computer-readable storage medium
CN114722061B (zh) * 2022-04-08 2023-11-14 China Telecom Corporation Limited Data processing method and apparatus, device, and computer-readable storage medium
CN114978613A (zh) * 2022-04-29 2022-08-30 Nanjing University of Information Science and Technology Network intrusion detection method based on data augmentation and self-supervised feature enhancement
CN114978613B (zh) * 2022-04-29 2023-06-02 Nanjing University of Information Science and Technology Network intrusion detection method based on data augmentation and self-supervised feature enhancement
CN115714731A (zh) * 2022-09-27 2023-02-24 Unit 63921 of the Chinese People's Liberation Army Deep-space TT&C link anomaly detection method based on a deep-learning autoencoder

Also Published As

Publication number Publication date
CN111709491A (zh) 2020-09-25
CN111709491B (zh) 2024-05-14

Similar Documents

Publication Publication Date Title
WO2021139236A1 (zh) Autoencoder-based anomaly detection method, apparatus, device and storage medium
US10713597B2 (en) Systems and methods for preparing data for use by machine learning algorithms
Ou et al. Asymmetric transitivity preserving graph embedding
US10885379B2 (en) Multi-view image clustering techniques using binary compression
US10504005B1 (en) Techniques to embed a data object into a multidimensional frame
CN110929840A (zh) 使用滚动窗口的连续学习神经网络系统
CN113961759B (zh) Anomaly detection method based on attribute graph representation learning
US8484253B2 (en) Variational mode seeking
CN108805193B (zh) Electric power missing data filling method based on a hybrid strategy
CN112231592B (zh) Graph-based network community discovery method, apparatus, device and storage medium
US10678888B2 (en) Methods and systems to predict parameters in a database of information technology equipment
CN111612041A (zh) Abnormal user identification method and apparatus, storage medium, and electronic device
Labroche New incremental fuzzy c medoids clustering algorithms
Ma et al. Parallel auto-encoder for efficient outlier detection
US10248625B2 (en) Assurance-enabled Linde Buzo Gray (ALBG) data clustering
CN114925767A (zh) Scene generation method and apparatus based on a variational autoencoder
CN114118416A (zh) Variational graph autoencoder method based on multi-task learning
Deng et al. Network Intrusion Detection Based on Sparse Autoencoder and IGA‐BP Network
Kumagai et al. Combinatorial clustering based on an externally-defined one-hot constraint
Hajewski et al. An evolutionary approach to variational autoencoders
CN111401412B (zh) Distributed soft clustering method in an Internet-of-Things environment based on an average consensus algorithm
CN117592595A (zh) Distribution network load forecasting model establishment and forecasting method and apparatus
CN115102868A (zh) Web service QoS prediction method based on SOM clustering and a deep autoencoder
Snášel et al. Bars problem solving-new neural network method and comparison
Chen et al. KeyBin2: Distributed Clustering for Scalable and In-Situ Analysis

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20911938

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20911938

Country of ref document: EP

Kind code of ref document: A1