CN112085055B - Black box attack method based on transfer model Jacobian matrix eigenvector perturbation


Info

Publication number
CN112085055B
CN112085055B
Authority
CN
China
Prior art keywords
black box
sample
disturbance
model
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010775599.1A
Other languages
Chinese (zh)
Other versions
CN112085055A (en)
Inventor
崔鹏 (Cui Peng)
周琳钧 (Zhou Linjun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202010775599.1A
Publication of CN112085055A
Application granted
Publication of CN112085055B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a black box attack method based on perturbation along the eigenvectors of a transfer model's Jacobian matrix, and belongs to the technical field of machine learning system security and black box attacks. The method first determines the black box model to be attacked and a transfer pre-trained model, and obtains an original sample to be attacked together with its label; it then repeatedly applies a perturbation to the original sample, iteratively updating the perturbation using the singular value decomposition of the Jacobian matrix computed on the transfer pre-trained model, until the perturbed sample is no longer classified by the black box model under its correct label. The method requires only a single transferable pre-trained network and no training samples, and can greatly improve attack efficiency compared with traditional black box attack methods.

Description

Black box attack method based on transfer model Jacobian matrix eigenvector perturbation
Technical Field
The invention belongs to the technical field of machine learning system security and black box attacks, and in particular provides a black box attack method based on transfer model Jacobian matrix eigenvector perturbation.
Background
With the development of deep learning, the security of deep learning systems has gradually drawn the attention of the machine learning community. Since providers of deep learning systems generally do not disclose the internal implementation of their systems, black box attacks are often an effective means of attacking them. Specifically, a black box attack iteratively constructs a sequence of input samples, each differing only slightly from the sample to be attacked, gradually lowers the deep learning system's confidence in recognizing these samples, and after a certain number of queries produces an input that the system classifies completely incorrectly. In robust learning, the input sample obtained at the end of this process is called an adversarial sample. Black box attacks are commonly evaluated by their attack efficiency, including the average number of queries to the black box model needed to attack each sample, the average perturbation distance of the adversarial sample from the original sample to be attacked, and the overall attack success rate.
Black box attacks have rich applications in practical scenarios. In computer vision, for example, a black box attack can slightly perturb a specific image so that a neural network that originally classified the image correctly makes a wrong classification on the perturbed image, while the human visual system usually cannot detect the difference between the images before and after the perturbation. Research on black box attacks against images also pushes the machine learning community toward deeper exploration of robust learning, so as to prevent misjudgments by deep learning systems in practical computer vision applications such as intelligent driving systems and face recognition systems.
Conventional black box attack techniques include black box attacks based on white-box network transfer and black box attacks based on zeroth-order gradient optimization. In the former, a white-box network is usually trained on a set of training samples, and the known white-box network parameters then guide each iteration of the black box attack; this requires a large number of pre-training samples, which preferably should be close to the classification task of the black box network. In the latter, following the idea of zeroth-order gradient optimization, the gradient of the black box network at a given input point is estimated by sampling, and gradient descent is then used to search iteratively for an adversarial sample (a minimal sketch of this sampling-based gradient estimate is given below).
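For concreteness, the following is a minimal sketch (in PyTorch) of the sampling-based gradient estimate that zeroth-order black box attacks rely on. The function name, the two-point finite-difference form, and the parameter values are illustrative assumptions rather than the formulation of any specific prior method: the black box loss is queried at pairs of points around the input, and the averaged directional differences approximate the gradient.

    import torch

    def zeroth_order_gradient(loss_at, x, num_samples=50, mu=1e-3):
        """Estimate the gradient of a scalar-valued black box loss at x by
        finite differences along random directions (two queries per direction)."""
        grad = torch.zeros_like(x)
        for _ in range(num_samples):
            u = torch.randn_like(x)                           # random probing direction
            diff = loss_at(x + mu * u) - loss_at(x - mu * u)  # two black box queries
            grad += (diff / (2.0 * mu)) * u                   # directional-derivative estimate
        return grad / num_samples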
Disclosure of Invention
The invention aims to overcome the shortcomings of the prior art and provides a black box attack method based on transfer model Jacobian matrix eigenvector perturbation. The method requires only one transferable pre-trained network and no training samples, and can greatly improve attack efficiency over traditional black box attacks.
The invention provides a black box attack method based on transfer model Jacobian matrix eigenvector perturbation, characterized by comprising the following steps (a code sketch of these steps is given after the list):
1) Determining a black box model F to be attacked;
determining a transfer pre-trained model, written as the composition g∘h of two functions,
wherein the function h maps the input layer of the transfer pre-trained model to its representation layer, and the function g maps the representation layer to the output layer;
selecting an input sample to be attacked and its corresponding label as (x, y), wherein x denotes the input sample to be attacked and y is the label corresponding to x; taking the input sample to be attacked as the original sample; setting a perturbation step size α and the number K of eigenvectors selected per round;
2) Inputting the original sample x into the black box model F, and computing the corresponding output probability vector p = p(·|x) of the original sample under the black box model F;
let δ = 0, where δ denotes the perturbation applied to the original sample, generating the sample x + δ;
3) Inputting the sample x + δ into the transfer pre-trained model, and computing the Jacobian matrix J = J_h(x+δ) of the function h at that sample;
4) Performing singular value decomposition on the Jacobian matrix J obtained in step 3) to obtain the first K normalized right singular vectors V_1, ..., V_K; let i = 1;
5) Checking the value of i: if i ≤ K, go to step 6); otherwise, return to step 3);
6) Iteratively computing the perturbation δ so that, eventually, the sample x + δ is no longer classified by the black box model under the correct label y, wherein the specific steps are as follows:
6-1) moving x + δ along the negative direction of the vector V_i by one perturbation step α, and computing the corresponding negative output probability vector of the black box model F: p_neg = p(·|x + δ - αV_i);
6-2) checking whether p_neg,y < p_y holds, wherein p_y denotes the output probability for label y in the vector p and p_neg,y denotes the output probability for label y in the vector p_neg; if it holds, the negative perturbation is effective and the method proceeds to step 6-3), otherwise the method proceeds to step 6-4);
6-3) updating the probability vector p = p_neg and the perturbation δ = δ - αV_i, letting i = i + 1, and proceeding to step 6-8);
6-4) moving x + δ along the positive direction of the vector V_i by one perturbation step α, and computing the corresponding positive output probability vector of the black box model: p_pos = p(·|x + δ + αV_i);
6-5) checking whether p_pos,y < p_y holds, wherein p_pos,y denotes the output probability for label y in the vector p_pos; if it holds, the positive perturbation is effective and the method proceeds to step 6-6); otherwise the method proceeds to step 6-7);
6-6) updating the probability vector p = p_pos and the perturbation δ = δ + αV_i, letting i = i + 1, and proceeding to step 6-8);
6-7) letting i = i + 1, keeping the probability vector p and the perturbation δ unchanged, and proceeding to step 6-8);
6-8) checking whether y ≠ argmax_y′ p_y′ holds: if it holds, the label corresponding to the largest probability component of p is no longer y, the black box attack has succeeded, and the method proceeds to step 7); if not, the black box attack has not yet succeeded, and the method returns to step 5);
7) Returning the perturbation δ as an effective perturbation that makes the black box model F misclassify the original sample x; the sample x + δ at this point is an adversarial sample of the black box model F, and the method ends.
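Purely as an illustration of how steps 1)-7) above fit together, the following is a minimal sketch in PyTorch. It is not the authoritative implementation of the invention: the callables h (the input-to-representation part of the transfer pre-trained model) and black_box_probs (query access to the black box model F, returning the probability vector p(·|x) as a 1-D tensor), as well as the round budget max_rounds, are assumptions supplied by the reader; α and K are the quantities defined in step 1).

    import torch

    def jacobian_eigvec_attack(x, y, h, black_box_probs, alpha, K=100, max_rounds=50):
        """x: original sample tensor; y: its correct label (int).
        h: differentiable map from an input tensor to a 1-D representation vector.
        black_box_probs: function returning the black box's 1-D probability vector."""
        delta = torch.zeros_like(x)                        # step 2): delta = 0
        p = black_box_probs(x)                             # step 2): p = p(.|x)
        for _ in range(max_rounds):                        # one pass = steps 3)-6); bounded here
            xd = (x + delta).detach()
            J = torch.autograd.functional.jacobian(lambda t: h(t).flatten(), xd)  # step 3): J_h(x+delta)
            J = J.reshape(J.shape[0], -1)                  # (repr_dim, input_dim)
            _, _, Vh = torch.linalg.svd(J, full_matrices=False)
            V = Vh[:K]                                     # step 4): first K normalized right singular vectors
            for i in range(V.shape[0]):                    # steps 5)-6)
                v = V[i].reshape(x.shape)
                p_neg = black_box_probs(x + delta - alpha * v)         # step 6-1)
                if p_neg[y] < p[y]:                                    # step 6-2): negative step helps
                    p, delta = p_neg, delta - alpha * v                # step 6-3)
                else:
                    p_pos = black_box_probs(x + delta + alpha * v)     # step 6-4)
                    if p_pos[y] < p[y]:                                # step 6-5): positive step helps
                        p, delta = p_pos, delta + alpha * v            # step 6-6)
                if int(p.argmax()) != y:                   # step 6-8): attack succeeded
                    return delta                           # step 7): x + delta is the adversarial sample
        return None                                        # no adversarial sample found within the budget

Each singular vector costs at most two black box queries, while the Jacobian and its singular value decomposition are computed only on the white-box transfer model, which is where the query efficiency of the method comes from.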
The characteristics and beneficial effects of the invention are as follows:
the invention provides a novel black box attack method. By using the invention, an attacker only needs one pre-trained model network structure and network parameters, thereby achieving higher attack efficiency and lower attack cost. The invention does not need any pre-training sample to further adjust the network, thereby saving the time and cost for acquiring the training sample and training. A more efficient black box attack is guided by this pre-trained model. Experiments show that the attack efficiency of the black box attack based on the zero-order gradient optimization can be better than that of the black box attack based on the zero-order gradient optimization through the information of the pre-training model, and the black box attack technology is simpler and more convenient than the black box attack technology based on the white box network migration and needing to be trained in practical application due to the fact that any pre-training sample does not need to be collected.
A major application scenario of the invention is attacking images in computer vision: a synthesized tiny perturbation is added to a target image so that a neural network misclassifies the perturbed image, while the human visual system can hardly notice the change before and after the perturbation because the perturbation is small enough.
The conditions for using the method are as follows: in a black box attack scenario, the information available to the user is a pre-trained white-box model whose task is closely related to that of the black box model, while training data related to this task is difficult to collect. For example, consider attacking a black-box image classification system that uses a convolutional neural network architecture. Pre-trained image models are easy to obtain, and many models pre-trained on ImageNet exist; but fine-tuning them or modifying their structure would require collecting a large number of image samples, and the training process is time-consuming, which is very unfavorable for an application scenario that needs to attack the black-box image classification system quickly. The present method is well suited to exactly this scenario.
Drawings
FIG. 1 is an overall flow diagram of the process of the present invention.
Detailed Description
The black box attack method based on transfer model Jacobian matrix eigenvector perturbation provided by the invention is described in detail below with reference to the accompanying drawing and an embodiment:
the invention provides a black box attack method based on transfer model Jacobian matrix eigenvector perturbation, which is suitable for any universal black box attack model. In this embodiment, resNet-50 is used to perform black box attack on ImageNet image samples, and pretrained ResNet-18 is used as a migration pretrained model (where the migration pretrained model and the black box model belong to the same class model; for example, if the black box model to be attacked is an image classification model, the migration pretrained model is also an image classification model, and when the task correlation of the two models is stronger, the performance of the method of the present invention is better).
The invention provides a black box attack method based on transfer model Jacobian matrix eigenvector perturbation; the overall flow is shown in FIG. 1, and the method comprises the following steps (an embodiment-level setup sketch is given after the steps):
1) Determining the black box model F to be attacked; in this embodiment the black box is a ResNet-50, and, by the nature of a black box, the structure and parameters of the model are unknown to the attacker;
determining
Figure BDA0002618246210000041
This example refers to ResNet-18 of a white box, where the function h is from the input layer to the characterization layer of the migration pre-training model (this example is 512-dimensional characterization layer after ResNet-18 is subjected to continuous convolutional layer and average pooling), and the function g is from the characterization layer to the output layer;
selecting an input sample to be attacked and its corresponding label as (x, y), wherein x denotes the input sample to be attacked and y is its label; taking the input sample to be attacked as the original sample, here an image sample from the ImageNet validation set together with its label; setting a perturbation step size α and the number K of eigenvectors selected per round (the value of K is related to the dimension of x: the higher the dimension of x, the larger K should be; K is 100 in this embodiment).
2) Inputting the original sample x into the black box model F, and computing the corresponding output probability vector p = p(·|x) of the original sample under the black box model F;
let δ = 0, where δ denotes the perturbation applied to the original sample, which is updated continuously as the algorithm iterates, generating the sample x + δ.
3) Inputting the sample x + δ into the transfer pre-trained model, and computing the Jacobian matrix J = J_h(x+δ) of the function h at that sample.
4) Performing singular value decomposition on the Jacobian matrix J obtained in step 3) to obtain the first K normalized right singular vectors V_1, ..., V_K; let i = 1;
5) Checking the value of i: if i ≤ K, go to step 6); otherwise, return to step 3); at this point the perturbation update iterations of the current round are finished, and the Jacobian matrix needs to be recomputed at the position of the updated sample x + δ before the next round of perturbation updates begins;
6) Iteratively computing the perturbation δ so that, eventually, the sample x + δ is no longer classified by the black box model under the correct label y, wherein the specific steps are as follows:
6-1) moving x + δ along the negative direction of the vector V_i by one perturbation step α, and computing the corresponding negative output probability vector of the black box model F: p_neg = p(·|x + δ - αV_i);
6-2) checking whether p_neg,y < p_y holds, wherein p_y denotes the output probability for label y in the vector p and p_neg,y denotes the output probability for label y in the vector p_neg; if it holds, the negative perturbation can lower the black box model's probability for the true label of the perturbed sample, i.e. the negative perturbation is effective, and the method proceeds to step 6-3); otherwise the method proceeds to step 6-4);
6-3) updating the probability vector p = p_neg and the perturbation δ = δ - αV_i, letting i = i + 1, and proceeding to step 6-8);
6-4) moving x + δ along the positive direction of the vector V_i by one perturbation step α, and computing the corresponding positive output probability vector of the black box model: p_pos = p(·|x + δ + αV_i);
6-5) checking whether p_pos,y < p_y holds, wherein p_pos,y denotes the output probability for label y in the vector p_pos; if it holds, the positive perturbation can lower the black box model's probability for the true label of the perturbed sample, i.e. the positive perturbation is effective, and the method proceeds to step 6-6); otherwise the method proceeds to step 6-7);
6-6) updating the probability vector p = p_pos and the perturbation δ = δ + αV_i, letting i = i + 1, and proceeding to step 6-8);
6-7) at this point neither the positive nor the negative perturbation can lower the black box model's probability for the true label of the perturbed sample, so only let i = i + 1, keep the probability vector p and the perturbation δ unchanged, and proceed to step 6-8);
6-8) checking whether y ≠ argmax_y′ p_y′ holds; the right-hand side is the label corresponding to the largest probability component of p, so this check determines whether that label is no longer the label y of the original sample x: if it holds, the black box attack has succeeded and the method proceeds to step 7); if not, the black box attack has not yet succeeded, and the method returns to step 5);
7) At this point the perturbation applied to the original sample already causes the black box model F to produce a wrong classification; the perturbation δ is returned as an effective perturbation that makes the black box model F misclassify the original sample x, and the sample x + δ at this point is an adversarial sample of the black box model F (in this embodiment, an image that the black-box ResNet-50 misclassifies); the method ends.
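To make the embodiment concrete, the following sketch shows one way the models above could be set up with torchvision; it is an assumption-laden illustration rather than the exact configuration used for the experiments. The split of ResNet-18 into h (convolutional layers plus average pooling, giving the 512-dimensional representation) and g (the final fully connected layer) follows the description above, a pretrained ResNet-50 stands in for the black box F, the value of α shown is illustrative since the text does not fix it, and K = 100 follows the embodiment.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Transfer pre-trained model: ResNet-18 split into h (input -> 512-d representation) and g.
    resnet18 = models.resnet18(pretrained=True).eval()
    h = nn.Sequential(*list(resnet18.children())[:-1], nn.Flatten())  # conv layers + avgpool -> (N, 512)
    g = resnet18.fc   # 512-d representation -> 1000 logits; the attack itself only needs h

    # Black box model F: a pretrained ResNet-50, queried only through its output probabilities.
    resnet50 = models.resnet50(pretrained=True).eval()
    def black_box_probs(x):
        with torch.no_grad():
            return torch.softmax(resnet50(x.unsqueeze(0)), dim=1)[0]  # p(.|x) as a 1-D vector

    # Usage with the attack sketch given after step 7) of the summary, on an ImageNet
    # validation image x (3x224x224 tensor) with integer label y; alpha is illustrative.
    # delta = jacobian_eigvec_attack(x, y, lambda t: h(t.unsqueeze(0))[0],
    #                                black_box_probs, alpha=0.05, K=100)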

Claims (1)

1. A black box attack method based on transfer model Jacobian matrix eigenvector perturbation, characterized by comprising the following steps:
1) Determining a black box model F to be attacked;
determining a transfer pre-trained model, written as the composition g∘h of two functions,
wherein the function h maps the input layer of the transfer pre-trained model to its representation layer, and the function g maps the representation layer to the output layer;
selecting an input sample to be attacked and its corresponding label as (x, y), wherein x denotes the input sample to be attacked and y is the label corresponding to x; taking the input sample to be attacked as the original sample; setting a perturbation step size α and the number K of eigenvectors selected per round;
2) Inputting the original sample x into the black box model F, and computing the corresponding output probability vector p = p(·|x) of the original sample under the black box model F;
let δ = 0, where δ denotes the perturbation applied to the original sample, generating the sample x + δ;
3) Inputting the sample x + δ into the transfer pre-trained model, and computing the Jacobian matrix J = J_h(x+δ) of the function h at that sample;
4) Performing singular value decomposition on the Jacobian matrix J obtained in step 3) to obtain the first K normalized right singular vectors V_1, ..., V_K; let i = 1;
5) Checking the value of i: if i ≤ K, go to step 6); otherwise, return to step 3);
6) Iteratively computing the perturbation δ so that, eventually, the sample x + δ is no longer classified by the black box model under the correct label y, wherein the specific steps are as follows:
6-1) moving x + δ along the negative direction of the vector V_i by one perturbation step α, and computing the corresponding negative output probability vector of the black box model F: p_neg = p(·|x + δ - αV_i);
6-2) checking whether p_neg,y < p_y holds, wherein p_y denotes the output probability for label y in the vector p and p_neg,y denotes the output probability for label y in the vector p_neg; if it holds, the negative perturbation is effective and the method proceeds to step 6-3), otherwise the method proceeds to step 6-4);
6-3) updating the probability vector p = p_neg and the perturbation δ = δ - αV_i, letting i = i + 1, and proceeding to step 6-8);
6-4) moving x + δ along the positive direction of the vector V_i by one perturbation step α, and computing the corresponding positive output probability vector of the black box model: p_pos = p(·|x + δ + αV_i);
6-5) checking whether p_pos,y < p_y holds, wherein p_pos,y denotes the output probability for label y in the vector p_pos; if it holds, the positive perturbation is effective and the method proceeds to step 6-6); otherwise the method proceeds to step 6-7);
6-6) updating the probability vector p = p_pos and the perturbation δ = δ + αV_i, letting i = i + 1, and proceeding to step 6-8);
6-7) letting i = i + 1, keeping the probability vector p and the perturbation δ unchanged, and proceeding to step 6-8);
6-8) checking whether y ≠ argmax_y′ p_y′ holds: if it holds, the label corresponding to the largest probability component of p is no longer y, the black box attack has succeeded, and the method proceeds to step 7); if not, the black box attack has not succeeded, and the method returns to step 5);
7) Returning the perturbation δ as an effective perturbation that makes the black box model F misclassify the original sample x, the sample x + δ at this point being an adversarial sample of the black box model F, and ending the method.
CN202010775599.1A 2020-08-05 2020-08-05 Black box attack method based on transfer model Jacobian matrix eigenvector perturbation Active CN112085055B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010775599.1A CN112085055B (en) 2020-08-05 2020-08-05 Black box attack method based on transfer model Jacobian matrix eigenvector perturbation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010775599.1A CN112085055B (en) 2020-08-05 2020-08-05 Black box attack method based on transfer model Jacobian matrix eigenvector perturbation

Publications (2)

Publication Number Publication Date
CN112085055A (en) 2020-12-15
CN112085055B (en) 2022-12-13

Family

ID=73735579

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010775599.1A Active CN112085055B (en) 2020-08-05 2020-08-05 Black box attack method based on transfer model Jacobian matrix eigenvector perturbation

Country Status (1)

Country Link
CN (1) CN112085055B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113380255B (en) * 2021-05-19 2022-12-20 浙江工业大学 Voiceprint recognition poisoning sample generation method based on transfer training
CN113469330B (en) * 2021-06-25 2022-12-02 中国人民解放军陆军工程大学 Method for enhancing sample mobility resistance by bipolar network corrosion
CN113298238B (en) * 2021-06-28 2023-06-20 上海观安信息技术股份有限公司 Method, apparatus, processing device, and storage medium for exploring black box neural network using directed attack
CN114693732B (en) * 2022-03-07 2022-11-25 四川大学华西医院 Weak and small target detection and tracking method
CN115115905B (en) * 2022-06-13 2023-06-27 苏州大学 High-mobility image countermeasure sample generation method based on generation model
CN116523032B (en) * 2023-03-13 2023-09-29 之江实验室 Image text double-end migration attack method, device and medium
CN116504069B (en) * 2023-06-26 2023-09-05 中国市政工程西南设计研究总院有限公司 Urban road network capacity optimization method, device and equipment and readable storage medium


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110837637A (en) * 2019-10-16 2020-02-25 华中科技大学 Black box attack method for brain-computer interface system
CN111027060A (en) * 2019-12-17 2020-04-17 电子科技大学 Knowledge distillation-based neural network black box attack type defense method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Adversarial sample attack analysis for low-dimensional industrial control network datasets (面向低维工控网数据集的对抗样本攻击分析); Zhou Wen et al.; Journal of Computer Research and Development (计算机研究与发展); 2020-04-13 (No. 04); full text *

Also Published As

Publication number Publication date
CN112085055A (en) 2020-12-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant