CN113360694A - Malicious image query sample detection and filtration method based on self-encoder - Google Patents

Malicious image query sample detection and filtration method based on self-encoder

Info

Publication number
CN113360694A
CN113360694A
Authority
CN
China
Prior art keywords
model
data
training
loss
encoder
Prior art date
Legal status
Granted
Application number
CN202110621344.4A
Other languages
Chinese (zh)
Other versions
CN113360694B (en)
Inventor
杨高明
常昊乾
方贤进
李明炜
Current Assignee
Anhui University of Science and Technology
Original Assignee
Anhui University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Anhui University of Science and Technology filed Critical Anhui University of Science and Technology
Priority to CN202110621344.4A priority Critical patent/CN113360694B/en
Publication of CN113360694A publication Critical patent/CN113360694A/en
Application granted granted Critical
Publication of CN113360694B publication Critical patent/CN113360694B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/55 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/243 Classification techniques relating to the number of classes
    • G06F 18/2431 Multiple classes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6227 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database where protection concerns the structure of data, e.g. records, types, queries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Bioethics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a malicious image query sample detection and filtering method based on an autoencoder, which relates to the field of machine learning and comprises the following: a method of obtaining a synthetic data set, which can generate a large amount of data whose classification results under the same classifier belong to the same category and which can serve as a query data set for model stealing attacks or for other data enhancement methods; a method of model training, in which the training data is split and classified, a separate autoencoder is trained on each single category, and the model architecture, parameter selection and overfitting are debugged; and a method of calculating a threshold, in which the reconstruction loss is saved and compressed with the compression rule kept consistent with the recovery rule, the reconstruction loss is saved as a grayscale image with a specified saving format and parameters, a threshold is calculated on the grayscale image, and the threshold is restored to obtain the reconstruction threshold loss.

Description

Malicious image query sample detection and filtration method based on self-encoder
Technical Field
The invention relates to the field of detection, in particular to a malicious image query sample detection and filtering method based on an autoencoder.
Background
Deep learning models have a wide range of applications and are gradually being commercialized: deep learning cloud models are developed and deployed by internet giants such as Google, Amazon, Baidu and Alibaba, and such a cloud model can be used directly, without the user having to spend time collecting and organizing data or training and tuning a model, which is undoubtedly another great boon for the application of deep learning.
Current attacks against deep learning models can be roughly divided into four types: model stealing attacks, evasion attacks, membership inference attacks, and poisoning attacks. Among them, the model stealing attack (see the attached figures) is an efficient, simple and hard-to-defend query attack: by constructing a synthetic dataset, an attacker can repeatedly query the model and obtain its output labels, combine the output labels with the substitute data to form a new dataset, and train a substitute model on that new dataset. Experiments have shown that the accuracy of the substitute model differs from that of the original target model by no more than about 2 percent, so model stealing attacks are increasingly becoming a major obstacle to the development of deep learning. At the same time, because a querying attacker looks like any ordinary user, existing methods can hardly detect the attacker's presence. Existing defense methods include withholding output labels, outputting obfuscated information, input-layer perturbation, and the like; although these methods work, it is easy for an attacker to identify such defenses and change the query strategy to achieve the goal. Therefore, an efficient and secure method for defending against model stealing attacks is necessary.
A deep learning model, especially a commercial model, is trained in advance; this process requires a large amount of data, a framework, and complicated operations such as debugging in order to guarantee the model's accuracy, so its commercial value is self-evident. Once training is finished, different services (such as face recognition, traffic flow detection, pedestrian detection, and body temperature detection) can be realized according to the model's function (such as prediction or classification). When users use such a model, they only need to pay a small fee (billed per query, in queries/yuan) to skip the training steps and feed their own data directly into the model. This efficient approach quickly became popular; at present, internet giants in China such as Baidu AI, Jingdong AI and Aliyun AI provide various services such as face recognition, character recognition, video understanding, and natural language processing, and the field touches every aspect of our lives. However, concern about the security of deep learning models continues to rise in China.
Disclosure of Invention
In order to overcome the above-mentioned drawbacks in the background art, the present invention provides a malicious image query sample detection and filtering method based on an autoencoder.
The purpose of the invention can be realized by the following technical scheme:
a malicious image query sample detection and filtering method based on an autoencoder comprises the following steps:
S1, generation of a synthetic data set
Training an autoencoder on a certain class of data from a data set, and generating data using the other classes as a test set, the generated data serving as a query data set for model stealing attacks or for other data enhancement methods;
S2, training the autoencoders
Training autoencoders with different parameters and the same structure on the single-class data sets obtained in S1, and adjusting the loss function and learning rate until the models converge;
S3, calculating a threshold value
Calculating the reconstruction loss of each single model trained in S2, saving the reconstruction losses as black-and-white pixel images, calculating a threshold value using the maximum between-class variance method, and mapping the threshold value back to the reconstruction-loss domain to obtain the reconstruction threshold loss.
Further, S1 specifically includes:
S1.1, splitting an initial data set into single categories, generating synthetic data for each category with a pre-trained autoencoder, forming the synthetic data into single-category data sets, extracting a synthetic data set in equal proportion from the single-category data sets, and generating labels in the same proportion;
S1.2, randomly extracting part of the initial data set, and multiplying the amount of data through data enhancement such as rotation, translation and flipping, to obtain a synthetic data set;
S1.3, inputting the synthetic data sets obtained in the two ways above into the target model to obtain labels, forming the synthetic data set of the substitute model.
Further, S2 specifically includes:
S2.1, data collection: collecting normal training data, with input data X = {x1, x2, …, xn} and labels Y = {y1, y2, …, yn}, combined into a data set D = {(xi, yi) | i = 1, …, n};
S2.2, training a target model, wherein the model adopts a mainstream and efficient model as an attacked model as an experimental model;
s2.3, training the experimental model by using the collected data in advance, and training for multiple times until the accuracy reaches more than 90 percent to obtain a qualified experimental model;
S2.4, constructing an autoencoder model, wherein the specific structure is as follows:
encoder←(ReLU(Conv),MaxPool,ReLU(Conv),MaxPool)
decoder←(ReLU(ConvTranspose),ReLU(ConvTranspose),Tanh(ConvTranspose));
training the autoencoder models separately on the data sets from step S2.1, and adjusting the loss function and learning rate until the models converge;
S2.5, inputting the synthetic data sets into the autoencoder models respectively, with the reconstruction computed as
x′_j ← DeepAE(x_j)
and the reconstruction loss calculated and stored as
reconstructionError_j ← (x_j - x′_j)^2,
saving the reconstruction losses as a file in CSV format, reading the file when calculating the threshold and extracting the contents into a matrix of size [m1, m2, …, mn].
Further, S3 specifically includes:
S3.1, taking out the stored reconstruction losses, compressing them correspondingly into the range 0-255, and storing the compression rule;
S3.2, arranging the reconstruction losses into a matrix to generate an 8-bit black-and-white JPG image, the image being stored as an H × L black-and-white image;
S3.3, calculating a maximum between-class variance threshold on each generated black-and-white image, γ_i ← OTSU([error]_i), obtaining a plurality of threshold values;
S3.4, decompressing each threshold value according to the stored compression rule to obtain the reconstruction threshold loss.
The invention has the beneficial effects that:
the defense method for distinguishing malicious attackers based on detection of malicious samples uses an effective reconstruction loss threshold defense model to steal the attack process, so that an attacker cannot query and steal a target model by using a synthetic data set; in order to improve the maximum discrimination of reconstruction threshold loss, a data set is divided into a plurality of categories to be trained respectively, and the threshold accuracy can be maximized by adopting the method.
Drawings
The invention will be further described with reference to the accompanying drawings.
FIG. 1 is a basic flow diagram of the present invention;
FIG. 2 is a basic framework diagram of the present invention;
FIG. 3 is a general model stealing attack flow diagram of the present invention;
FIG. 4 is the deep autoencoder architecture of the present invention;
FIG. 5 is a block diagram of the core concept of the present invention;
FIG. 6 is a general architecture diagram and implementation diagram of the present invention;
FIG. 7 is a graph of the results of reconstructing the loss threshold of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A malicious image query sample detection and filtering method based on an autoencoder, as shown in FIGS. 1-7, performs detection and filtering through three component methods:
a method of acquiring a composite data set, comprising the steps of:
Step 1, data synthesis, which comprises generating synthetic data, mainly used for generating query data at scale, enhancing the data, and producing samples with high accuracy;
Step 1.1, splitting the data set into single classes D_dataset = {d_0, d_1, …, d_n}; in the process, the data set is converted into JPG form and then divided. Synthetic data D_gener = {d′_1, d′_2, …, d′_n} are generated separately with pre-trained autoencoders, a synthetic training set is extracted in equal proportion from the generated single-class data sets, and labels are generated in the same proportion; an illustrative sketch of this step is given below.
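By way of illustration only, the following is a minimal PyTorch sketch of this per-class generation and equal-proportion extraction, assuming one pre-trained autoencoder per class; the function name generate_synthetic_set and the sampling details are hypothetical assumptions, not prescribed by the specification.

import torch

def generate_synthetic_set(autoencoders, class_datasets, per_class):
    # Hypothetical sketch: run each class's pre-trained autoencoder over
    # that class's data, then draw the same number of samples per class.
    xs, ys = [], []
    for label, (ae, data) in enumerate(zip(autoencoders, class_datasets)):
        ae.eval()
        with torch.no_grad():
            synth = ae(data)                              # d'_i <- DeepAE(d_i)
        idx = torch.randperm(synth.size(0))[:per_class]   # equal proportion
        xs.append(synth[idx])
        ys.append(torch.full((per_class,), label, dtype=torch.long))
    return torch.cat(xs), torch.cat(ys)                   # labels in the same proportion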
Step 1.2, randomly extracting a small portion of the initial data set, and multiplying the amount of data through data enhancement such as rotation, translation and flipping, to obtain a synthetic data set; a sketch follows.
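A minimal augmentation sketch, assuming torchvision tensor transforms (version 0.8 or later); the multiplication factor and transform parameters are illustrative assumptions.

import torch
from torchvision import transforms

# Illustrative enhancement pipeline: rotation, translation, horizontal flip.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),
    transforms.RandomHorizontalFlip(p=0.5),
])

def enlarge(images, factor=4):
    # Return the original images plus factor-times augmented copies.
    out = [images]
    for _ in range(factor):
        out.append(torch.stack([augment(img) for img in images]))
    return torch.cat(out)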
Step 2, inputting the synthetic data sets obtained in the two ways of Step 1 into the target model to obtain labels, forming the substitute model's synthetic data set (training set) D_syn = {x′_n, y′_n}; a query-labeling sketch is given below.
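The following sketch labels synthetic inputs by querying the target model; this is the attacker-side procedure that the defense is aimed at, and the batch size and names are assumptions.

import torch

def label_by_query(target_model, synth_x, batch=128):
    # Query the target model on synthetic inputs and keep its output labels.
    target_model.eval()
    labels = []
    with torch.no_grad():
        for i in range(0, synth_x.size(0), batch):
            logits = target_model(synth_x[i:i + batch])
            labels.append(logits.argmax(dim=1))   # y' <- f(x')
    return synth_x, torch.cat(labels)             # D_syn = {x'_n, y'_n}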
A method for model training comprises the following steps:
step 1, target model training, including data collection and model training;
Step 1.1, data collection: normal training data are collected, where X = {x1, x2, …, xn} is the input data and Y = {y1, y2, …, yn} are the labels, combined into the data set D = {(xi, yi) | i = 1, …, n}.
The data set is used for model training, debugging and model accuracy detection, can be a classical machine learning data set such as MNIST, CIFAR10/100 and the like, and can also be a self-made data set such as photos or video clips shot by a camera and an unmanned aerial vehicle.
Step 1.2, training the target model: a current mainstream, high-efficiency deep learning model, such as LeNet or VGG, is adopted as the attacked target model (attacked model); such classical models have high accuracy and serve as the experimental model.
Step 1.3, training the target model in advance with the collected data, training repeatedly until the accuracy exceeds 90 percent, to obtain a qualified target model; a minimal training sketch is given below.
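A minimal PyTorch training sketch for the target model, assuming a classifier such as LeNet and standard cross-entropy training; the stopping test mirrors the 90 percent criterion, and all names and hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn

def train_target(model, loader, epochs=20, lr=1e-3, target_acc=0.90):
    # Train until accuracy exceeds the 90 percent qualification bar.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        correct = total = 0
        for x, y in loader:
            opt.zero_grad()
            logits = model(x)
            loss_fn(logits, y).backward()
            opt.step()
            correct += (logits.argmax(1) == y).sum().item()
            total += y.size(0)
        if correct / total > target_acc:   # qualified target model
            break
    return model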
Step 2, training the defense model: a plurality of autoencoders are trained separately to obtain reconstruction losses.
Step 2.1, constructing an autoencoder model, wherein the specific structure is as follows:
encoder←(ReLU(Conv),MaxPool,ReLU(Conv),MaxPool),
decoder ← (ReLU(ConvTranspose), ReLU(ConvTranspose), Tanh(ConvTranspose)). In particular, the model architecture may differ; this specification gives the implementation process and purpose of the method. The results obtained with different architectures may differ, but the overall ranking of models trained on the same data set remains approximately unchanged, and the larger the data volume, the more accurate the models' results. A sketch of this architecture is given below.
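A minimal PyTorch sketch matching the stated encoder/decoder layout, assuming single-channel 28 × 28 inputs; the channel counts and kernel sizes are illustrative assumptions, since the specification only fixes the layer types.

import torch.nn as nn

class DeepAE(nn.Module):
    # encoder: (ReLU(Conv), MaxPool, ReLU(Conv), MaxPool)
    # decoder: (ReLU(ConvTranspose), ReLU(ConvTranspose), Tanh(ConvTranspose))
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 8, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 2, stride=2), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))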
Step 2.2, training the autoencoder models separately on the data sets from Step 1.1, and adjusting the loss function, learning rate and other parameters until the models converge. The parameters used in this specification are given here: EPOCHS = 100, BATCH = 128, LEARNING_RATE = 1e-2, and learning rate decay = 1e-5. A training-loop sketch follows.
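A training-loop sketch using the stated hyperparameters, with Adam optimization and L2 (MSE) reconstruction loss as described later in this embodiment; reading the specification's learning decay as Adam's weight_decay is an assumption.

import torch
import torch.nn as nn

def train_autoencoder(ae, loader, epochs=100, lr=1e-2, decay=1e-5):
    # Parameters from the specification: EPOCHS=100, BATCH=128 (set on the
    # DataLoader), LEARNING_RATE=1e-2, decay=1e-5 (assumed to be weight decay).
    opt = torch.optim.Adam(ae.parameters(), lr=lr, weight_decay=decay)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for x, _ in loader:            # labels unused: reconstruction only
            opt.zero_grad()
            loss = mse(ae(x), x)       # L2 reconstruction loss
            loss.backward()
            opt.step()
    return ae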
Step 3, inputting the synthetic data sets into the autoencoder models respectively, with the reconstruction computed as x′_j ← DeepAE(x_j) and the reconstruction loss calculated and stored as reconstructionError_j ← (x_j - x′_j)^2, saving the reconstruction losses as a file in CSV format, reading the file when calculating the threshold and extracting the contents into a matrix of size [m1, m2, …, mn]; a sketch follows.
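A sketch of computing and persisting per-sample reconstruction losses to CSV, assuming NumPy for the file handling; the text requires strict sample-to-loss correspondence, so input order is preserved.

import numpy as np
import torch

def save_reconstruction_errors(ae, x, path="recon_errors.csv"):
    # Per-sample L2 reconstruction loss, kept in input order.
    ae.eval()
    with torch.no_grad():
        err = ((x - ae(x)) ** 2).flatten(1).sum(dim=1)   # (x_j - x'_j)^2
    np.savetxt(path, err.numpy(), delimiter=",")
    return err

def load_errors(path="recon_errors.csv"):
    return np.loadtxt(path, delimiter=",")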
A method of calculating a threshold, comprising the steps of:
Step 1, taking out the stored reconstruction losses, compressing them correspondingly into the range 0-255, and storing the compression rule; many compression rules are possible, and the invention does not prescribe a particular one.
Step 2, arranging the reconstruction losses into a matrix to generate an 8-bit black-and-white JPG image stored at size H × L; a sketch of Steps 1-2 is given below.
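A sketch of Steps 1-2 that uses a linear min-max mapping over a NumPy array of losses as the stored, invertible compression rule; the choice of rule is an assumption, since the invention leaves it open, and Pillow is assumed for writing the 8-bit image.

import numpy as np
from PIL import Image

def compress_to_image(errors, h, w, path="loss_gray.jpg"):
    # Map losses into [0, 255] with a stored linear rule and save them as
    # an 8-bit grayscale image of size h x w.
    lo, hi = float(errors.min()), float(errors.max())
    gray = np.round((errors - lo) / (hi - lo) * 255).astype(np.uint8)
    Image.fromarray(gray.reshape(h, w), mode="L").save(path)
    return lo, hi          # keep the rule so the threshold can be decompressed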
Step 3, calculating a maximum between-class variance threshold γ_i ← OTSU([error]_i) on each generated black-and-white image, obtaining a plurality of threshold values.
Step 4, decompressing each threshold according to the stored compression rule to obtain the reconstruction threshold loss; a sketch of Steps 3-4 follows.
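A sketch of Steps 3-4, assuming OpenCV's Otsu implementation and inverting the linear rule from the previous sketch.

import cv2

def threshold_from_image(path, lo, hi):
    # OTSU threshold on the grayscale loss image, then invert the stored
    # compression rule to return to the reconstruction-loss domain.
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    gamma, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return lo + gamma / 255.0 * (hi - lo)    # reconstruction threshold loss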
Further, for the three methods above, the following application scenario is set. Pre-trained autoencoders are arranged behind the target model. Based on the calculated reconstruction threshold loss, a new query first passes through the target model; the result is obtained and retained, and the query then enters the autoencoder of its category, which computes the reconstruction loss. If the loss value is greater than the reconstruction threshold loss, no output is returned, or other defensive measures are taken. This specification thus provides a way to detect malicious samples: by this method, if the reconstruction loss is greater than the previously obtained reconstruction threshold loss and the number of such queries is excessively large, the queries are determined to be malicious query samples.
Assume that a classification model has been trained and mounted in a cloud server for users, with billing per query (queries/dollars). An attacker now wants to steal the model; during the attack, the model owner can take the corresponding defensive measures according to the content of this specification. The experimental environment of the embodiment was a server with an Intel Core i5-7300 3.50 GHz CPU, an NVIDIA GeForce GTX 1050 Ti GPU, and 16 GB DDR4 RAM.
First consider the data sets. The method of this specification is not limited to a particular type of data set; the MNIST and CIFAR data sets are used here, with pixel sizes of 28 × 28 × 1 and 32 × 32 × 3 respectively, covering both grayscale and RGB formats. Taking MNIST as an example, the data set is first split into n classes, where the total number of classes is determined by the number of classes the data set contains; classes can be added or removed when constructing the data set, but the formats must be uniform. This is the first part of the process in FIG. 1, and a splitting sketch is given below.
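A sketch of the per-class split on MNIST, assuming torchvision's dataset loader; the JPG conversion mentioned in Step 1.1 is omitted for brevity.

import torch
from torchvision import datasets, transforms

def split_by_class(n_classes=10):
    # D_dataset = {d_0, d_1, ..., d_n}: one tensor of images per class.
    mnist = datasets.MNIST(root="data", train=True, download=True,
                           transform=transforms.ToTensor())
    imgs = torch.stack([img for img, _ in mnist])
    labels = mnist.targets
    return [imgs[labels == c] for c in range(n_classes)]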
Then the deep autoencoder models (the second part of FIG. 1, and FIG. 4) are trained separately with the split data sets. In the training process a network structure 10 layers deep is adopted, with the Encoder and Decoder each connected in a fully-connected manner; the activation function is the ReLU function, the optimizer is Adam, and the other parameters are EPOCHS = 100, BATCH = 128, LEARNING_RATE = 1e-2, and learning rate decay = 1e-5. After training of the autoencoder models is complete, each model is tested with the test set of its data set, each picture is reconstructed, and the reconstruction losses are stored in CSV or another text format, the reconstruction loss being the L2 loss. To guarantee the effectiveness of the method, the reconstructed images must correspond strictly to the original images: neither the batches nor the sample positions within a batch may be shuffled.
After the reconstruction losses of each category are obtained, the losses are compressed into h × l grayscale images; this is the third part of FIG. 1 and the "Threshold Selection" part of FIG. 5. The process uses the maximum between-class variance method (OTSU) to threshold the compressed images. To ensure the effectiveness of the method, the reconstruction losses must first be remapped: they are compressed into the range [0, 255] according to a fixed rule and stored as h × l grayscale images, the threshold is calculated on those images, and after the threshold is obtained the same compression rule is inverted to map it back to the reconstruction losses; only then is the threshold valid.
After the autoencoder models are trained and the thresholds are obtained, the autoencoder models are mounted on the classification model; this is the last part of FIG. 1, the general form of the mounting is shown in FIG. 2, and FIG. 6 is a more detailed structural diagram. Taking FIG. 2 as an example: because it is unknown whether the current user of the model is a normal user or an attacker, after receiving the incoming data the classification model first performs classification. After the classification result is obtained, the label and the original input are passed into the trained autoencoder model, and the reconstruction and reconstruction-loss comparison of FIG. 5 are performed again. Once the sample's reconstruction loss is obtained it is compared with the threshold; if the loss is greater than the threshold, the sample can be judged to be a malicious query sample, and the user's behavior is watched and tracked to further determine whether the user is an attacker. A sketch of this defended query path is given below.
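An end-to-end sketch of the defended query path combining the pieces above (per-class autoencoders, per-class thresholds); withholding the output is one of the defensive measures the text permits, and all names are illustrative assumptions.

import torch

def defended_query(classifier, autoencoders, thresholds, x):
    # Classify, then gate the answer on the per-class reconstruction loss.
    classifier.eval()
    with torch.no_grad():
        label = classifier(x.unsqueeze(0)).argmax(1).item()
        ae = autoencoders[label]                  # route by predicted class
        ae.eval()
        loss = ((x - ae(x.unsqueeze(0))[0]) ** 2).sum().item()
    if loss > thresholds[label]:                  # above reconstruction threshold
        return None                               # withhold output / flag the user
    return label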
In the description herein, references to the description of "one embodiment," "an example," "a specific example" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above; the embodiments described in the specification and the drawings only illustrate the principle of the present invention, and various changes and modifications may be made without departing from the spirit and scope of the present invention, all of which fall within the scope of the invention as claimed.

Claims (4)

1. A malicious image query sample detection and filtering method based on an autoencoder is characterized by comprising the following steps:
S1, generation of a synthetic data set
Training an autoencoder on a certain class of data from a data set, and generating data using the other classes as a test set, the generated data serving as a query data set for model stealing attacks or for other data enhancement methods;
S2, training the autoencoders
Training autoencoders with different parameters and the same structure on the single-class data sets obtained in S1, and adjusting the loss function and learning rate until the models converge;
S3, calculating a threshold value
Calculating the reconstruction loss of each single model trained in S2, saving the reconstruction losses as black-and-white pixel images, calculating a threshold value using the maximum between-class variance method, and mapping the threshold value back to the reconstruction-loss domain to obtain the reconstruction threshold loss.
2. The method as claimed in claim 1, wherein S1 specifically comprises:
S1.1, splitting an initial data set into single categories, generating synthetic data for each category with a pre-trained autoencoder, forming the synthetic data into single-category data sets, extracting a synthetic data set in equal proportion from the single-category data sets, and generating labels in the same proportion;
S1.2, randomly extracting part of the initial data set, and multiplying the amount of data through data enhancement such as rotation, translation and flipping, to obtain a synthetic data set;
S1.3, inputting the synthetic data sets obtained in the two ways above into the target model to obtain labels, forming the synthetic data set of the substitute model.
3. The method as claimed in claim 1, wherein S2 specifically comprises:
S2.1, data collection: collecting normal training data, with input data X = {x1, x2, …, xn} and labels Y = {y1, y2, …, yn}, combined into a data set D = {(xi, yi) | i = 1, …, n};
S2.2, training a target model, wherein the model adopts a mainstream and efficient model as an attacked model as an experimental model;
s2.3, training the experimental model by using the collected data in advance, and training for multiple times until the accuracy reaches more than 90 percent to obtain a qualified experimental model;
S2.4, constructing an autoencoder model, wherein the specific structure is as follows:
encoder←(ReLU(Conv),MaxPool,ReLU(Conv),MaxPool)
decoder←(ReLU(ConvTranspose),ReLU(ConvTranspose),Tanh(ConvTranspose));
training the autoencoder models separately on the data sets from step S2.1, and adjusting the loss function and learning rate until the models converge;
S2.5, inputting the synthetic data sets into the autoencoder models respectively, with the reconstruction computed as
x′_j ← DeepAE(x_j)
and the reconstruction loss calculated and stored as
reconstructionError_j ← (x_j - x′_j)^2,
saving the reconstruction losses as a file in CSV format, reading the file when calculating the threshold and extracting the contents into a matrix of size [m1, m2, …, mn].
4. The method as claimed in claim 1, wherein S3 specifically comprises:
S3.1, taking out the stored reconstruction losses, compressing them correspondingly into the range 0-255, and storing the compression rule;
S3.2, arranging the reconstruction losses into a matrix to generate an 8-bit black-and-white JPG image, the image being stored as an H × L black-and-white image;
S3.3, calculating a maximum between-class variance threshold on each generated black-and-white image, γ_i ← OTSU([error]_i), obtaining a plurality of threshold values;
S3.4, decompressing each threshold value according to the stored compression rule to obtain the reconstruction threshold loss.
CN202110621344.4A 2021-06-03 2021-06-03 Malicious image query sample detection and filtering method based on self-encoder Active CN113360694B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110621344.4A CN113360694B (en) 2021-06-03 2021-06-03 Malicious image query sample detection and filtering method based on self-encoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110621344.4A CN113360694B (en) 2021-06-03 2021-06-03 Malicious image query sample detection and filtering method based on self-encoder

Publications (2)

Publication Number Publication Date
CN113360694A (en) 2021-09-07
CN113360694B CN113360694B (en) 2022-09-27

Family

ID=77531874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110621344.4A Active CN113360694B (en) 2021-06-03 2021-06-03 Malicious image query sample detection and filtering method based on self-encoder

Country Status (1)

Country Link
CN (1) CN113360694B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109948117A (en) * 2019-03-13 2019-06-28 南京航空航天大学 A kind of satellite method for detecting abnormality fighting network self-encoding encoder
CN110826059A (en) * 2019-09-19 2020-02-21 浙江工业大学 Method and device for defending black box attack facing malicious software image format detection model
US20210099474A1 (en) * 2019-09-30 2021-04-01 Mcafee, Llc Methods and apparatus to perform malware detection using a generative adversarial network
CN111160095A (en) * 2019-11-26 2020-05-15 华东师范大学 Unbiased face feature extraction and classification method and system based on depth self-encoder network
CN111709491A (en) * 2020-06-30 2020-09-25 平安科技(深圳)有限公司 Anomaly detection method, device and equipment based on self-encoder and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHIFENG HAO ET AL.: "Prediction of Synthetic Lethal Interactions in Human Cancers Using Multi-View Graph Auto-Encoder", IEEE Journal of Biomedical and Health Informatics *
张生顺: "Malicious Domain Name Detection Based on Stacked Autoencoders", Network Security and Informatization (《网络安全与信息化》) *

Also Published As

Publication number Publication date
CN113360694B (en) 2022-09-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant