CN113205030A - Pedestrian re-identification method for defending against adversarial attacks - Google Patents

Pedestrian re-identification method for defending against adversarial attacks

Info

Publication number
CN113205030A
CN113205030A
Authority
CN
China
Prior art keywords
picture
pedestrian
network
sampling
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110457404.3A
Other languages
Chinese (zh)
Inventor
梁超
周建力
张梦萱
陈军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202110457404.3A priority Critical patent/CN113205030A/en
Publication of CN113205030A publication Critical patent/CN113205030A/en
Withdrawn legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 - Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 - Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/047 - Probabilistic or stochastic networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Abstract

The invention relates to a pedestrian re-identification method for defending against adversarial attacks. The method first downsamples adversarial pictures and trains a generative adversarial network model on the downsampled pictures; it then downsamples the pictures in the query set and regenerates them with the variational auto-encoder of the generative adversarial network; finally, the regenerated pictures are input into a pedestrian re-identification model for retrieval and matched against the corresponding pictures to obtain the re-identification result. By training the generative adversarial network model on adversarial pictures and regenerating each picture through its variational auto-encoder, the method weakens the noise in adversarial sample pictures while preserving the original picture features as far as possible, strengthens the defense of the pedestrian re-identification model against adversarial samples, effectively addresses the vulnerability of deep neural networks to attack, and effectively improves the matching accuracy of the pedestrian re-identification network when the input picture is attacked.

Description

Pedestrian re-identification method for defending against adversarial attacks
Technical Field
The invention relates to the technical field of computer vision, in particular to a pedestrian re-identification method for defending against adversarial attacks.
Background
Pedestrian re-identification is a picture-retrieval problem that aims to match target persons across multiple non-overlapping camera views. In recent years it has been used increasingly widely in video surveillance and security. Inspired by the success of deep learning in various visual tasks, pedestrian re-identification models based on deep neural networks have become a research hotspot and achieve high accuracy in picture retrieval.
Recent studies have found that deep neural networks are vulnerable: an input picture can be elaborately modified with visually imperceptible perturbations to form an adversarial picture, which interferes with the normal operation of the deep neural network and degrades the recognition accuracy of deep-network-based pedestrian re-identification models on such pictures. Since pedestrian re-identification frameworks are widely deployed in security-related systems, strengthening the defense of pedestrian re-identification models against adversarial pictures is important.
The paper "Adversarial Metric Attack and Defense for Person Re-identification" uses adversarial metric training to perform defensive training on adversarial samples. The defensive training effectively improves the recognition accuracy of the pedestrian re-identification model, but it trains only against attacks from one specific method, transfers poorly, and struggles to resist adversarial samples generated by other attack methods.
The paper "Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks" improves on the above method, proposing guided online adversarial training (GOAT), which computes guiding examples over multiple iterations to modify the distance metric. Compared with the previous method, its accuracy is higher and the robustness of the model is strengthened, but it adjusts the internal parameters of the pedestrian re-identification network and thus changes the original network.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a pedestrian re-identification method for defending against adversarial attacks: first an adversarial picture is downsampled and a generative adversarial network model is trained on the downsampled pictures; then the pictures in the query set are downsampled and regenerated by the variational auto-encoder of the generative adversarial network; finally the regenerated pictures are input into a pedestrian re-identification model for retrieval and matched with the corresponding pictures to obtain the pedestrian re-identification result.
To this end, the technical scheme provided by the invention is a pedestrian re-identification method for defending against adversarial attacks, comprising the following steps:
step 1, training a generative adversarial network model, specifically comprising the following substeps:
step 1.1, downsampling the adversarial picture;
step 1.2, generating a hidden variable by sampling from the distribution output by the encoder;
step 1.3, generating a new picture by sampling from the distribution output by the decoder for the hidden variable;
step 1.4, judging, via the generative adversarial network, whether the new picture resembles a real picture;
step 1.5, calculating the final loss function;
step 1.6, training the generative adversarial network according to steps 1.1-1.5 until the whole training set has been learned;
step 2, regenerating the pictures in the query set, specifically comprising the following substeps:
step 2.1, downsampling the query-set pictures;
step 2.2, generating hidden variables by sampling from the distribution output by the encoder of the generator of the generative adversarial network trained in step 1;
step 2.3, generating new pictures by sampling from the distribution output by the decoder of the generator of the generative adversarial network trained in step 1;
step 2.4, calculating the loss function;
step 2.5, from step 2.4 selecting the pictures whose loss function is smaller than the critical value τ = 0.03, calculating the Euclidean distance between each such picture and the original picture, and outputting the picture with the smallest Euclidean distance;
step 3, inputting the pictures regenerated in step 2 into the pedestrian re-identification network, matching them with the corresponding pictures, and outputting the result.
In step 1.2, the picture x downsampled in step 1.1 is input into the encoder, features of the input picture are extracted through six convolutional layers, the posterior distribution q(z|x) of the hidden variable z is calculated, the hidden variable z is obtained by sampling from this posterior distribution, and the KL divergence D_KL(q(z|x) ‖ p(z)) between the posterior q(z|x) and the prior p(z) is calculated, wherein p(z) is preset to obey N(0, I).
In step 1.3, the hidden variable z from step 1.2 is converted by a mapping network f consisting of four fully connected layers into a set of intermediate hidden vectors W; after an AdaIN operation, W is input into each convolutional layer of the decoder. The decoder extracts the features of W through multiple convolutional layers, upsampling step by step from the second convolution, finally obtaining the conditional probability distribution p(x'|z) of the generated picture x', from which the generated picture x' is sampled.
In step 1.4, the picture x' generated in step 1.3 is input into the discriminator, which extracts features through multiple convolutional layers, downsampling after every two convolutions, and finally outputs a judgment through a fully connected layer. Taking the variational auto-encoder of steps 1.2 and 1.3 as the generator, the loss function L_GAN of the generative adversarial network is computed together with the discriminator output:

L_GAN = E[log D(x)] + E[log(1 - D(x'))]    (1)

where the expectation of the log terms is the cross-entropy loss, D(x) denotes the probability that picture x is judged to be the real picture before downsampling, and D(x') denotes the probability that the generated picture x' is judged to be the real picture before downsampling.
The final loss function L_loss in step 1.5 is the sum of the KL divergence computed after the encoder output and the loss function of the generative adversarial network, namely:

L_loss = D_KL(q(z|x') ‖ p(z)) + L_GAN    (2)

where the KL divergence D_KL(q(z|x') ‖ p(z)) is calculated as in step 1.2.
In step 2.2, the picture x1 obtained by downsampling in step 2.1 is input into the trained encoder, features of the input picture are extracted through six convolutional layers, and the posterior distribution q(z1|x1) of the hidden variable z1 is calculated. The hidden variable z1 is obtained by sampling from this posterior distribution, and the KL divergence D_KL(q(z1|x1) ‖ p(z1)) between q(z1|x1) and the prior p(z1) is calculated, wherein p(z1) is preset to obey N(0, I).
Furthermore, in step 2.3, the hidden variable z1 from step 2.2 is converted by the mapping network f of four fully connected layers into a set of intermediate hidden vectors W, which after an AdaIN operation is input into each convolutional layer of the trained decoder. The decoder extracts the features of W through multiple convolutional layers, upsampling step by step from the second convolution, finally obtaining the conditional probability distribution p(x1'|z1) of the generated picture x1', from which the generated picture x1' is sampled.
In step 2.4, the loss function L is the KL divergence computed after the encoder output, namely:

L = D_KL(q(z1|x1') ‖ p(z1))    (3)

where the KL divergence D_KL(q(z1|x1') ‖ p(z1)) is calculated as in step 2.2.
In addition, the pedestrian re-identification network in step 3 may adopt ResNet50, trained with cross-entropy loss as the loss function; during testing and deployment, features are extracted by the neural network and sorted in ascending order of the metric between features so as to match the corresponding pictures.
Compared with the prior art, the invention has the following advantages: whereas prior methods compute and match the input picture directly, the present method trains a generative adversarial network model on adversarial pictures and then regenerates each picture through the variational auto-encoder of the generative adversarial network. This weakens the noise in adversarial sample pictures while preserving the original picture features as far as possible, strengthens the defense of the pedestrian re-identification model against adversarial samples, effectively addresses the vulnerability of deep neural networks to attack, and effectively improves the matching accuracy of the pedestrian re-identification network when the input picture is attacked.
Drawings
Fig. 1 is a training flow chart of the generative adversarial network according to an embodiment of the present invention.
Fig. 2 is a technical flow chart of an embodiment of the present invention.
Fig. 3 is a network structure diagram of the generative adversarial network according to an embodiment of the present invention.
Detailed Description
The invention provides a pedestrian re-identification method for defending against adversarial attacks, addressing the difficulty existing deep neural networks have in defending against adversarial samples in the pedestrian re-identification task.
The technical solution of the present invention is further explained with reference to the drawings and the embodiments.
As shown in fig. 1, the process of the embodiment of the present invention includes the following steps:
Step 1, training the generative adversarial network model, specifically comprising the following substeps:
and 1.1, performing downsampling processing on the resistibility picture.
Step 1.2, generating a hidden variable by sampling from the distribution output by the encoder. The downsampled picture x is input into the encoder, features of the input picture are extracted through six convolutional layers, and the posterior distribution q(z|x) of the hidden variable z is calculated. The hidden variable z is obtained by sampling from this posterior distribution, and the KL divergence D_KL(q(z|x) ‖ p(z)) between the posterior q(z|x) and the prior p(z) is calculated, wherein p(z) is preset to obey N(0, I).
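The six-convolution feature extractor is abstracted away in the sketch below; assuming the encoder has already produced a mean `mu` and log-variance `logvar` parameterizing a diagonal Gaussian q(z|x), sampling z and computing the KL divergence to the preset prior N(0, I) can be written in NumPy (function names are illustrative):

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    # Sample z = mu + sigma * eps with eps ~ N(0, I) (the usual VAE trick).
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def kl_to_standard_normal(mu, logvar):
    # Closed form for D_KL( N(mu, diag(sigma^2)) || N(0, I) ):
    #   1/2 * sum( mu^2 + sigma^2 - log(sigma^2) - 1 )
    return 0.5 * np.sum(mu ** 2 + np.exp(logvar) - logvar - 1.0)
```

The KL term is zero exactly when the posterior equals the prior and positive otherwise, which is why it serves as the regeneration loss in step 2.4.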
Step 1.3, generating a new picture by sampling from the distribution output by the decoder. The hidden variable z from step 1.2 is converted by a mapping network f consisting of four fully connected layers into a set of intermediate hidden vectors W; after an AdaIN operation, W is input into each convolutional layer of the decoder. The decoder extracts the features of W through multiple convolutional layers, upsampling step by step from the second convolution, finally obtaining the conditional probability distribution p(x'|z) of the generated picture x', from which the generated picture x' is sampled.
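A minimal NumPy sketch of the mapping network f and the AdaIN operation follows; the leaky-ReLU slope and the affine maps `A_scale`/`A_shift` that turn w into per-channel styles are assumptions not specified by the patent:

```python
import numpy as np

def mapping_network(z, layers):
    """Mapping network f: fully connected layers (four in the patent)
    turning the hidden variable z into an intermediate hidden vector w."""
    h = z
    for W, b in layers:
        h = h @ W + b
        h = np.where(h > 0, h, 0.2 * h)  # leaky ReLU; slope is an assumption
    return h

def adain(feat, w, A_scale, A_shift):
    """Adaptive instance normalisation of feature maps feat (C, H, W):
    each channel is normalised, then scaled/shifted by a style computed
    from w via hypothetical learned affine maps A_scale / A_shift."""
    mu = feat.mean(axis=(1, 2), keepdims=True)
    sd = feat.std(axis=(1, 2), keepdims=True) + 1e-8
    ys = (w @ A_scale)[:, None, None]    # per-channel scale
    yb = (w @ A_shift)[:, None, None]    # per-channel shift
    return ys * (feat - mu) / sd + yb
```

After AdaIN, each channel's statistics are controlled entirely by the style derived from w, which is how the intermediate vector steers every convolutional layer of the decoder.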
Step 1.4, judging, via the generative adversarial network, whether the new picture resembles a real picture. The picture x' generated in step 1.3 is input into the discriminator, which extracts features through multiple convolutional layers, downsampling after every two convolutions, and finally outputs a judgment through a fully connected layer. Taking the variational auto-encoder of steps 1.2 and 1.3 as the generator, the loss function L_GAN of the generative adversarial network is computed together with the discriminator output:

L_GAN = E[log D(x)] + E[log(1 - D(x'))]    (1)

where the expectation of the log terms is the cross-entropy loss, D(x) denotes the probability that picture x is judged to be the real picture before downsampling, and D(x') denotes the probability that the generated picture x' is judged to be the real picture before downsampling.
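Equation (1) can be evaluated directly from the discriminator's probability outputs; the sketch below shows the computation (the small `eps` clamp is an assumption added for numerical safety):

```python
import numpy as np

def gan_loss(d_real, d_fake, eps=1e-12):
    """L_GAN = E[log D(x)] + E[log(1 - D(x'))], with d_real = D(x) on the
    real (pre-downsampling) pictures and d_fake = D(x') on regenerated ones."""
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))
```

A perfect discriminator (D(x) = 1, D(x') = 0) drives the loss toward 0, its maximum; the generator is trained to push D(x') up and thus the loss down.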
Step 1.5, calculating the final loss function. The final loss function L_loss is the sum of the KL divergence computed after the encoder output and the loss function of the generative adversarial network, namely:

L_loss = D_KL(q(z|x') ‖ p(z)) + L_GAN    (2)

where the KL divergence D_KL(q(z|x') ‖ p(z)) is calculated as in step 1.2.
Step 1.6, training the generative adversarial network according to steps 1.1-1.5 until the whole training set has been learned. The variational auto-encoder of the trained generative adversarial network can then regenerate pictures that weaken the noise in adversarial sample pictures while preserving the original image features as far as possible.
Step 2, regenerating the pictures in the query set, specifically comprising the following substeps:
and 2.1, carrying out downsampling processing on the inquiry set picture.
Step 2.2, generating a hidden variable by sampling from the distribution output by the encoder of the generator of the generative adversarial network trained in step 1. The picture x1 downsampled in step 2.1 is input into the trained encoder, features of the input picture are extracted through six convolutional layers, and the posterior distribution q(z1|x1) of the hidden variable z1 is calculated. The hidden variable z1 is obtained by sampling from this posterior distribution, and the KL divergence D_KL(q(z1|x1) ‖ p(z1)) between q(z1|x1) and the prior p(z1) is calculated, wherein p(z1) is preset to obey N(0, I).
Step 2.3, generating a new picture by sampling from the distribution output by the decoder of the generator trained in step 1. The hidden variable z1 from step 2.2 is converted by the mapping network f of four fully connected layers into a set of intermediate hidden vectors W, which after an AdaIN operation is input into each convolutional layer of the trained decoder. The decoder extracts the features of W through multiple convolutional layers, upsampling step by step from the second convolution, finally obtaining the conditional probability distribution p(x1'|z1) of the generated picture x1', from which the generated picture x1' is sampled.
Step 2.4, calculating the loss function. The loss function L is the KL divergence computed after the encoder output, namely:

L = D_KL(q(z1|x1') ‖ p(z1))    (3)

where the KL divergence D_KL(q(z1|x1') ‖ p(z1)) is calculated as in step 2.2.
Step 2.5, from step 2.4 selecting the pictures whose loss function is smaller than the critical value τ = 0.03, calculating the Euclidean distance between each such picture and the original picture, and outputting the picture with the smallest Euclidean distance.
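Step 2.5 can be sketched as follows; `select_regenerated` is an illustrative name, and the fallback to the lowest-loss candidate when none passes τ is an assumption the patent does not address:

```python
import numpy as np

def select_regenerated(candidates, losses, original, tau=0.03):
    """Keep regenerated candidates whose loss is below tau, then return the
    index of the one closest to the original picture in Euclidean distance."""
    keep = [i for i, l in enumerate(losses) if l < tau]
    if not keep:                         # assumption: fall back to lowest loss
        keep = [int(np.argmin(losses))]
    dists = [np.linalg.norm(candidates[i] - original) for i in keep]
    return keep[int(np.argmin(dists))]
```

The τ threshold filters out regenerations that drifted from the learned distribution, while the distance term keeps the survivor most faithful to the query.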
Step 3, inputting the pictures regenerated in step 2 into the pedestrian re-identification network, matching them with the corresponding pictures, and outputting the result. The pedestrian re-identification network adopts ResNet50 trained with cross-entropy loss as the loss function; during testing and deployment, features are extracted by the neural network and sorted in ascending order of the metric between features so as to match the corresponding pictures.
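The final matching step reduces to sorting gallery features by ascending Euclidean distance to the query feature; a minimal NumPy sketch (ResNet50 feature extraction is omitted and assumed to have produced the feature vectors):

```python
import numpy as np

def rank_gallery(query_feat, gallery_feats):
    """Return gallery indices sorted by ascending Euclidean distance to the
    query feature, i.e. best match first."""
    d = np.linalg.norm(np.asarray(gallery_feats) - query_feat, axis=1)
    return np.argsort(d)
```

The first index of the returned ranking is the re-identification match reported for the query.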
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description relates to preferred embodiments, is for illustrative purposes only, and is not intended to limit the scope of protection of the invention.

Claims (9)

1. A pedestrian re-identification method for defending against adversarial attacks, characterized by comprising the following steps:
step 1, training a generative adversarial network model, specifically comprising the following substeps:
step 1.1, downsampling the adversarial picture to obtain x;
step 1.2, generating a hidden variable z by sampling from the distribution output by the encoder;
step 1.3, generating a new picture x' by sampling from the distribution output by the decoder for the hidden variable;
step 1.4, judging, via the generative adversarial network, whether the new picture resembles a real picture;
step 1.5, calculating the final loss function;
step 1.6, training the generative adversarial network according to steps 1.1-1.5 until the whole training set has been learned;
step 2, regenerating the pictures in the query set, specifically comprising the following substeps:
step 2.1, downsampling the query-set picture to obtain x1;
step 2.2, generating a hidden variable z1 by sampling from the distribution output by the encoder of the generator of the generative adversarial network trained in step 1;
step 2.3, generating a new picture x1' by sampling from the distribution output by the decoder of the generator trained in step 1;
step 2.4, calculating the loss function;
step 2.5, from step 2.4 selecting the pictures whose loss function is smaller than a critical value τ, τ being a constant, calculating the Euclidean distance between each such picture and the original picture, and outputting the picture with the smallest Euclidean distance;
step 3, inputting the pictures regenerated in step 2 into the pedestrian re-identification network, matching them with the corresponding pictures, and outputting the result.
2. A method of pedestrian re-identification against adversarial attacks as claimed in claim 1, characterized in that: in step 1.2, the picture x downsampled in step 1.1 is input into the encoder, features of the input picture are extracted through six convolutional layers, the posterior distribution q(z|x) of the hidden variable z is calculated, the hidden variable z is obtained by sampling from this posterior distribution, and the KL divergence D_KL(q(z|x) ‖ p(z)) between q(z|x) and the prior p(z) is calculated, wherein p(z) is preset to obey N(0, I).
3. A method of pedestrian re-identification against adversarial attacks as claimed in claim 2, characterized in that: in step 1.3, the hidden variable z from step 1.2 is converted by a mapping network f consisting of four fully connected layers into a set of intermediate hidden vectors W; after an AdaIN operation, W is input into each convolutional layer of the decoder; the decoder extracts the features of W through multiple convolutional layers, upsampling step by step from the second convolution, finally obtaining the conditional probability distribution p(x'|z) of the generated picture x', from which the generated picture x' is sampled.
4. A method of pedestrian re-identification against adversarial attacks as claimed in claim 3, characterized in that: in step 1.4, the picture x' generated in step 1.3 is input into the discriminator, which extracts features through multiple convolutional layers, downsampling after every two convolutions, and finally outputs a judgment through a fully connected layer; taking the variational auto-encoder of steps 1.2 and 1.3 as the generator, the loss function L_GAN of the generative adversarial network is computed from the generator together with the discriminator output:

L_GAN = E[log D(x)] + E[log(1 - D(x'))]    (1)

where the expectation of the log terms is the cross-entropy loss, D(x) denotes the probability that picture x is judged to be the real picture before downsampling, and D(x') denotes the probability that the generated picture x' is judged to be the real picture before downsampling.
5. A method of pedestrian re-identification against adversarial attacks as claimed in claim 4, characterized in that: the final loss function L_loss in step 1.5 is the sum of the KL divergence computed after the encoder output and the loss function of the generative adversarial network, namely:

L_loss = D_KL(q(z|x') ‖ p(z)) + L_GAN    (2)

where the KL divergence D_KL(q(z|x') ‖ p(z)) is calculated as in step 1.2.
6. A method of pedestrian re-identification against adversarial attacks as claimed in claim 1, characterized in that: in step 2.2, the picture x1 downsampled in step 2.1 is input into the trained encoder, features of the input picture are extracted through six convolutional layers, and the posterior distribution q(z1|x1) of the hidden variable z1 is calculated; the hidden variable z1 is obtained by sampling from this posterior distribution, and the KL divergence D_KL(q(z1|x1) ‖ p(z1)) between q(z1|x1) and the prior p(z1) is calculated, wherein p(z1) is preset to obey N(0, I).
7. A method of pedestrian re-identification against adversarial attacks as claimed in claim 6, characterized in that: in step 2.3, the hidden variable z1 from step 2.2 is converted by the mapping network f of four fully connected layers into a set of intermediate hidden vectors W, which after an AdaIN operation is input into each convolutional layer of the trained decoder; the decoder extracts the features of W through multiple convolutional layers, upsampling step by step from the second convolution, finally obtaining the conditional probability distribution p(x1'|z1) of the generated picture x1', from which the generated picture x1' is sampled.
8. A method of pedestrian re-identification against adversarial attacks as claimed in claim 7, characterized in that: in step 2.4, the loss function L is the KL divergence computed after the encoder output, namely:

L = D_KL(q(z1|x1') ‖ p(z1))    (3)

where the KL divergence D_KL(q(z1|x1') ‖ p(z1)) is calculated as in step 2.2.
9. A method of pedestrian re-identification against adversarial attacks as claimed in claim 1, characterized in that: the pedestrian re-identification network in step 3 may adopt ResNet50, trained with cross-entropy loss as the loss function; during testing and deployment, features are extracted by the neural network and sorted in ascending order of the metric between features so as to match the corresponding pictures.
CN202110457404.3A 2021-04-27 2021-04-27 Pedestrian re-identification method for defending against adversarial attacks Withdrawn CN113205030A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110457404.3A CN113205030A (en) 2021-04-27 2021-04-27 Pedestrian re-identification method for defending against adversarial attacks


Publications (1)

Publication Number Publication Date
CN113205030A true CN113205030A (en) 2021-08-03

Family

ID=77028805

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110457404.3A Withdrawn CN113205030A (en) 2021-04-27 2021-04-27 Pedestrian re-identification method for defending antagonistic attack

Country Status (1)

Country Link
CN (1) CN113205030A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113688781A (en) * 2021-09-08 2021-11-23 北京邮电大学 Occlusion-resilient adversarial attack method for pedestrian re-identification

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110826059A (en) * 2019-09-19 2020-02-21 浙江工业大学 Method and device for defending black box attack facing malicious software image format detection model
CN112288627A (en) * 2020-10-23 2021-01-29 武汉大学 Recognition-oriented low-resolution face image super-resolution method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jianli Zhou et al.: "Manifold Projection for Adversarial Defense on Face Recognition", SpringerLink *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113688781A (en) * 2021-09-08 2021-11-23 北京邮电大学 Occlusion-resilient adversarial attack method for pedestrian re-identification
CN113688781B (en) * 2021-09-08 2023-09-15 北京邮电大学 Occlusion-resilient adversarial attack method for pedestrian re-identification


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210803