CN112085055A - Black box attack method based on transfer model Jacobian matrix eigenvector perturbation - Google Patents

Black box attack method based on transfer model Jacobian matrix eigenvector perturbation

Info

Publication number
CN112085055A
CN112085055A
Authority
CN
China
Prior art keywords
black box
disturbance
sample
model
vector
Prior art date
Legal status
Granted
Application number
CN202010775599.1A
Other languages
Chinese (zh)
Other versions
CN112085055B (en)
Inventor
崔鹏
周琳钧
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202010775599.1A
Publication of CN112085055A
Application granted
Publication of CN112085055B
Status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The invention provides a black box attack method based on perturbation along the eigenvectors of a transfer model's Jacobian matrix, belonging to the technical field of machine learning system security and black box attacks. The method first determines the black box model to be attacked and a transferable pre-trained model, and obtains an original sample to be attacked together with its label. It then repeatedly applies perturbations to the original sample, iteratively updating the perturbation using the singular value decomposition of the Jacobian matrix computed from the pre-trained model, until the perturbed sample is no longer classified under its correct label by the black box model. The method requires only one transferable pre-trained network and no training samples, and can greatly improve the attack efficiency of traditional black box attacks.

Description

Black box attack method based on transfer model Jacobian matrix eigenvector perturbation
Technical Field
The invention belongs to the technical field of machine learning system security and black box attacks, and specifically provides a black box attack method based on transfer model Jacobian matrix eigenvector perturbation.
Background
With the development of deep learning, the security problems of deep learning systems have gradually drawn the attention of the machine learning community. Since providers of deep learning systems generally do not disclose the internal implementation of their systems, black box attacks are often an effective means of attacking them. Specifically, a black box attack iteratively constructs a series of system input samples; while keeping each input sample close to the sample under attack, it gradually reduces the system's recognition of the sample until, after some number of queries, the classification output is completely wrong. In robust learning, the input sample at the end of this process is called an adversarial sample. Black box attacks are commonly evaluated by attack efficiency, which includes the average number of queries to the black box model needed to attack each sample, the average perturbation distance of the adversarial sample from the original sample under attack, and the overall attack success rate.
Black box attacks have rich practical applications. In computer vision, for example, a black box attack can slightly perturb a specific image so that a neural network that correctly classifies the original image makes a wrong classification judgment on the perturbed image, while the human visual system usually cannot detect the difference between the images before and after perturbation. Research into black box attacks on images can drive deeper exploration of robust learning within machine learning and help prevent misjudgments by deep learning systems in practical computer vision applications such as intelligent driving and face recognition systems.
Conventional black box attack techniques include attacks based on white-box network transfer and attacks based on zeroth-order gradient optimization. In the former, a white-box network is first trained on some training samples, and the known white-box network parameters then guide each iteration of the black box attack; this requires a large number of pre-training samples, which should preferably be close to the classification task of the black box network. In the latter, zeroth-order optimization is used to estimate the gradient of the black box network at a given input point by sampling, and adversarial samples are then sought by iterative gradient descent.
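As a point of reference, the zeroth-order baseline described above can be made concrete with a short sketch. The following is an illustration of the general technique, not part of the invention; the helper name `loss_fn`, the number of sampled directions, and the smoothing radius `mu` are assumptions chosen for readability:

```python
import torch

def zeroth_order_gradient(loss_fn, x, num_samples=50, mu=1e-3):
    """Two-point finite-difference estimate of the gradient of loss_fn at x.

    loss_fn queries the black box and returns a scalar tensor; every sampled
    direction costs two queries, which is why query counts dominate the cost
    of attacks in this family.
    """
    grad = torch.zeros_like(x)
    for _ in range(num_samples):
        u = torch.randn_like(x)
        u = u / u.norm()                       # random unit direction
        diff = loss_fn(x + mu * u) - loss_fn(x - mu * u)
        grad += (diff / (2 * mu)) * u          # directional slope times direction
    return grad / num_samples
```

Every sampled direction costs two queries to the black box, so the query count grows quickly with the input dimension; the method of the invention avoids this sampling by using a transfer model's Jacobian instead.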
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a black box attack method based on transfer model Jacobian matrix eigenvector perturbation. The method requires only a single transferable pre-trained network and no training samples, and can greatly improve the attack efficiency of traditional black box attacks.
The black box attack method based on transfer model Jacobian matrix eigenvector perturbation provided by the invention is characterized by comprising the following steps:
1) determining a black box model F to be attacked;
determining a transferable pre-trained model F̃ = g∘h,
wherein the function h maps the input layer of the pre-trained model to its characterization layer, and the function g maps the characterization layer to the output layer;
selecting an input sample to be attacked and its corresponding label (x, y), wherein x denotes the input sample to be attacked and y is the label corresponding to x; taking the input sample to be attacked as the original sample; setting the perturbation step size α and the number K of singular vectors selected per round;
2) inputting the original sample x into the black box model F, and computing the corresponding output probability vector p = p(·|x) of the original sample through the black box model F;
initializing the perturbation δ applied to the original sample to 0, generating the sample x⁺ = x + δ;
3) inputting the sample x⁺ into the pre-trained model, and computing the Jacobian matrix J = J_h(x⁺) of the function h at this sample;
4) performing singular value decomposition on the Jacobian matrix J obtained in step 3) to obtain the corresponding first K normalized right singular vectors V_1, ..., V_K; letting i = 1;
5) checking the value of i: if i ≤ K, proceeding to step 6); otherwise, returning to step 3);
6) iteratively computing the perturbation so that the sample x⁺ finally no longer corresponds to the correct label y under black box model classification, with the following specific steps:
6-1) moving x⁺ along the negative direction of the vector V_i by one perturbation step size α, and computing the corresponding negative output probability vector of the black box model F: p_neg = p(·|x⁺ - αV_i);
6-2) judging whether p_neg,y < p_y holds, wherein p_y denotes the output probability corresponding to label y in the vector p, and p_neg,y denotes the output probability corresponding to label y in the vector p_neg; if so, the negative perturbation is effective, proceeding to step 6-3); otherwise proceeding to step 6-4);
6-3) updating the probability vector p = p_neg, letting i = i + 1, updating the perturbation as δ ← δ - αV_i, and proceeding to step 6-8);
6-4) moving x⁺ along the positive direction of the vector V_i by one perturbation step size α, and computing the corresponding positive output probability vector of the black box model: p_pos = p(·|x⁺ + αV_i);
6-5) judging whether p_pos,y < p_y holds, wherein p_pos,y denotes the output probability corresponding to label y in the vector p_pos; if so, the positive perturbation is effective, proceeding to step 6-6); otherwise proceeding to step 6-7);
6-6) updating the probability vector p = p_pos, letting i = i + 1, updating the perturbation as δ ← δ + αV_i, and proceeding to step 6-8);
6-7) letting i = i + 1 while keeping the probability vector p and the perturbation unchanged, and proceeding to step 6-8);
6-8) judging whether y ≠ argmax_y′ p_y′ holds: if so, the label corresponding to the largest probability component in the vector p is no longer y, the black box attack has succeeded, and the method proceeds to step 7); if not, the black box attack has not yet succeeded, and the method returns to step 5);
7) returning the perturbation as an effective perturbation that causes the black box model F to misclassify the original sample x; the sample x⁺ at this point is an adversarial sample for the black box model F, and the method ends (a code sketch of steps 2) to 7) follows below).
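The steps above can be condensed into a short PyTorch sketch. This is a minimal illustration under stated assumptions, not a definitive implementation of the patented method: `black_box_probs` (one query to the attacked model F, returning its 1-D output probability vector) and `h` (the pre-trained model's feature map) are hypothetical helpers, and `max_rounds` is an illustrative query budget that the steps above leave open.

```python
import torch

def jacobian_svd_attack(black_box_probs, h, x, y, alpha, K, max_rounds=20):
    """Sketch of steps 2)-7).

    Assumes x has shape (1, C, H, W), h maps it to a (1, feature_dim) tensor,
    and black_box_probs performs one query to F and returns a 1-D probability
    vector. max_rounds is an illustrative budget, not part of the patent.
    """
    delta = torch.zeros_like(x)                  # step 2): perturbation starts at 0
    p = black_box_probs(x + delta)               # p = p(.|x)
    for _ in range(max_rounds):                  # one pass = one round of steps 3)-6)
        # step 3): Jacobian of the feature map h at the current sample x+
        J = torch.autograd.functional.jacobian(h, x + delta)
        J = J.reshape(J.shape[1], -1)            # (feature_dim, input_dim)
        # step 4): first K normalized right singular vectors of J
        Vh = torch.linalg.svd(J, full_matrices=False).Vh[:K]
        for i in range(K):                       # steps 5) and 6)
            V = Vh[i].reshape_as(x)
            p_neg = black_box_probs(x + delta - alpha * V)        # step 6-1)
            if p_neg[y] < p[y]:                  # step 6-2): negative step helps
                p, delta = p_neg, delta - alpha * V               # step 6-3)
            else:
                p_pos = black_box_probs(x + delta + alpha * V)    # step 6-4)
                if p_pos[y] < p[y]:              # step 6-5): positive step helps
                    p, delta = p_pos, delta + alpha * V           # step 6-6)
                # step 6-7): otherwise keep p and delta unchanged
            if p.argmax().item() != y:           # step 6-8): y no longer the argmax
                return x + delta, delta, True    # step 7): adversarial sample found
    return x + delta, delta, False               # budget exhausted without success
```

Note that the function g is never queried in this sketch: only the feature map h enters the Jacobian, and each acceptance test in steps 6-2) and 6-5) costs exactly one query to the black box.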
Features and beneficial effects of the invention:
The invention provides a novel black box attack method. With it, an attacker needs only the network structure and parameters of a single pre-trained model to achieve higher attack efficiency at lower attack cost. The invention does not require any pre-training samples to further tune the network, saving the time and cost of collecting training samples and of training; the pre-trained model alone guides a more efficient black box attack. Experiments show that the information in the pre-trained model yields better attack efficiency than black box attacks based on zeroth-order gradient optimization, and because no pre-training samples need to be collected, the method is simpler in practical application than black box attack techniques based on white-box network transfer that require training.
A major application scenario of the invention is attacking images in computer vision. Specifically, a synthesized tiny perturbation is added to a target image so that a neural network misclassifies the perturbed image, while the human visual system can hardly distinguish the image before and after perturbation because the perturbation is small enough.
The conditions for using the method are as follows: in a black box attack scenario, the information available to the user is a pre-trained white-box model whose task is closely related to that of the black box model, while training data related to the task is difficult to collect. For example, consider attacking a black box image classification system built on a convolutional neural network architecture. Image pre-trained models are easy to obtain, and many exist that were trained on ImageNet; fine-tuning or structurally modifying such a model, however, would require collecting a large number of image samples, and the training process is time-consuming, which is highly unfavorable when the attack on the black box image classification system must be realized quickly. Our method fits exactly this scenario.
Drawings
FIG. 1 is an overall flow diagram of the method of the present invention.
Detailed Description
The black box attack method based on transfer model Jacobian matrix eigenvector perturbation provided by the invention is described in detail below with reference to the drawings and an embodiment:
the invention provides a black box attack method based on migration model Jacobian array feature vector disturbance, which is suitable for any universal black box attack model. In this embodiment, ResNet-50 is used to perform black box attack on ImageNet image samples, and pretrained ResNet-18 is used as a migration pretrained model (where the migration pretrained model and the black box model belong to the same class model; for example, if the black box model to be attacked is an image classification model, the migration pretrained model is also an image classification model, and when the task correlation of the two models is stronger, the performance of the method of the present invention is better).
The overall flow of the black box attack method based on transfer model Jacobian matrix eigenvector perturbation provided by the invention is shown in FIG. 1; the method comprises the following steps:
1) determining the black box model F to be attacked; in this embodiment F is the black-box ResNet-50, whose structure and parameters are unknown owing to its black box nature;
determining a transferable pre-trained model F̃ = g∘h; this embodiment uses a white-box ResNet-18, wherein the function h maps the input layer to the characterization layer of the pre-trained model (the 512-dimensional characterization layer obtained after ResNet-18's successive convolutional layers and average pooling), and the function g maps the characterization layer to the output layer;
selecting an input sample to be attacked and its corresponding label (x, y), wherein x denotes the input sample to be attacked and y is the label corresponding to x; taking the input sample to be attacked as the original sample; in this embodiment an image sample from the ImageNet validation set is selected as the input sample to be attacked and its corresponding label is obtained; setting the perturbation step size α and the number K of singular vectors selected per round (the value of K is related to the dimension of x: the higher the dimension of x, the larger K should be; K = 100 in this embodiment).
2) inputting the original sample x into the black box model F, and computing the corresponding output probability vector p = p(·|x) of the original sample through the black box model F;
initializing the perturbation δ applied to the original sample to 0 (it is updated as the algorithm iterates), generating the sample x⁺ = x + δ.
3) inputting the sample x⁺ into the pre-trained model, and computing the Jacobian matrix J = J_h(x⁺) of the function h at this sample.
4) performing singular value decomposition on the Jacobian matrix J obtained in step 3) to obtain the corresponding first K normalized right singular vectors V_1, ..., V_K; letting i = 1;
5) checking the value of i: if i ≤ K, proceeding to step 6); otherwise returning to step 3): at this point the current round of perturbation update iterations is complete, so the Jacobian matrix must be recomputed at the updated sample x⁺ and the next round of perturbation updates begins;
6) iteratively computing the perturbation so that the sample x⁺ finally no longer corresponds to the correct label y under black box model classification, with the following specific steps:
6-1) moving x⁺ along the negative direction of the vector V_i by one perturbation step size α, and computing the corresponding negative output probability vector of the black box model F: p_neg = p(·|x⁺ - αV_i);
6-2) judging whether p_neg,y < p_y holds, wherein p_y denotes the output probability corresponding to label y in the vector p, and p_neg,y denotes the output probability corresponding to label y in the vector p_neg; if so, the negative perturbation reduces the black box model's judged probability for the true label of the perturbed sample, i.e. the negative perturbation is effective, and the method proceeds to step 6-3); otherwise it proceeds to step 6-4);
6-3) updating the probability vector p = p_neg, letting i = i + 1, updating the perturbation as δ ← δ - αV_i, and proceeding to step 6-8);
6-4) moving x⁺ along the positive direction of the vector V_i by one perturbation step size α, and computing the corresponding positive output probability vector of the black box model: p_pos = p(·|x⁺ + αV_i);
6-5) judging whether p_pos,y < p_y holds, wherein p_pos,y denotes the output probability corresponding to label y in the vector p_pos; if so, the positive perturbation reduces the black box model's judged probability for the true label of the perturbed sample, i.e. the positive perturbation is effective, and the method proceeds to step 6-6); otherwise it proceeds to step 6-7);
6-6) updating the probability vector p = p_pos, letting i = i + 1, updating the perturbation as δ ← δ + αV_i, and proceeding to step 6-8);
6-7) at this point neither the positive nor the negative perturbation reduces the black box model's judged probability for the true label of the perturbed sample, so only i = i + 1 is set, the probability vector p and the perturbation are kept unchanged, and the method proceeds to step 6-8);
6-8) judging whether y ≠ argmax_y′ p_y′ holds; the right-hand side denotes the label corresponding to the largest probability component of the vector p, so this test determines whether that label is no longer the original label y of the original sample x: if so, the black box attack has succeeded and the method proceeds to step 7); if not, the black box attack has not yet succeeded and the method returns to step 5);
7) at this point the perturbation applied to the original sample causes the black box model F to make a wrong classification judgment; the perturbation is returned as an effective perturbation that makes the black box model F misclassify the original sample x, and the sample x⁺ at this point is an adversarial sample for the black box model F (in this embodiment, an image that the black-box ResNet-50 misclassifies); the method ends (a hypothetical end-to-end usage sketch follows these steps).
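A hypothetical end-to-end run of this embodiment is sketched below. `jacobian_svd_attack` and `h` are the illustrative helpers defined earlier, the image path, label, and step size are placeholders, and a pretrained ResNet-50 stands in locally for the black box, since a real attacked service would expose only its output probabilities:

```python
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
black_box = resnet50(weights=weights).eval()   # stands in for the attacked service
preprocess = weights.transforms()

def black_box_probs(x):                        # one query to the black box F
    with torch.no_grad():
        return torch.softmax(black_box(x), dim=1).squeeze(0)

img = Image.open("val_sample.JPEG")            # placeholder path to a validation image
x = preprocess(img).unsqueeze(0)
y = 207                                        # placeholder ImageNet label for img

x_adv, delta, success = jacobian_svd_attack(black_box_probs, h, x, y, alpha=0.5, K=100)
print("attack succeeded:", success, "| perturbation L2 norm:", delta.norm().item())
```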

Claims (1)

1. A black box attack method based on transfer model Jacobian matrix eigenvector perturbation, characterized by comprising the following steps:
1) determining a black box model F to be attacked;
determining a transferable pre-trained model F̃ = g∘h,
wherein the function h maps the input layer of the pre-trained model to its characterization layer, and the function g maps the characterization layer to the output layer;
selecting an input sample to be attacked and its corresponding label (x, y), wherein x denotes the input sample to be attacked and y is the label corresponding to x; taking the input sample to be attacked as the original sample; setting the perturbation step size α and the number K of singular vectors selected per round;
2) inputting the original sample x into the black box model F, and computing the corresponding output probability vector p = p(·|x) of the original sample through the black box model F;
initializing the perturbation δ applied to the original sample to 0, generating the sample x⁺ = x + δ;
3) inputting the sample x⁺ into the pre-trained model, and computing the Jacobian matrix J = J_h(x⁺) of the function h at this sample;
4) performing singular value decomposition on the Jacobian matrix J obtained in step 3) to obtain the corresponding first K normalized right singular vectors V_1, ..., V_K; letting i = 1;
5) checking the value of i: if i ≤ K, proceeding to step 6); otherwise, returning to step 3);
6) iteratively computing the perturbation so that the sample x⁺ finally no longer corresponds to the correct label y under black box model classification, with the following specific steps:
6-1) moving x⁺ along the negative direction of the vector V_i by one perturbation step size α, and computing the corresponding negative output probability vector of the black box model F: p_neg = p(·|x⁺ - αV_i);
6-2) judging whether p_neg,y < p_y holds, wherein p_y denotes the output probability corresponding to label y in the vector p, and p_neg,y denotes the output probability corresponding to label y in the vector p_neg; if so, the negative perturbation is effective, proceeding to step 6-3); otherwise proceeding to step 6-4);
6-3) updating the probability vector p = p_neg, letting i = i + 1, updating the perturbation as δ ← δ - αV_i, and proceeding to step 6-8);
6-4) moving x⁺ along the positive direction of the vector V_i by one perturbation step size α, and computing the corresponding positive output probability vector of the black box model: p_pos = p(·|x⁺ + αV_i);
6-5) judging whether p_pos,y < p_y holds, wherein p_pos,y denotes the output probability corresponding to label y in the vector p_pos; if so, the positive perturbation is effective, proceeding to step 6-6); otherwise proceeding to step 6-7);
6-6) updating the probability vector p = p_pos, letting i = i + 1, updating the perturbation as δ ← δ + αV_i, and proceeding to step 6-8);
6-7) letting i = i + 1 while keeping the probability vector p and the perturbation unchanged, and proceeding to step 6-8);
6-8) judging whether y ≠ argmax_y′ p_y′ holds: if so, the label corresponding to the largest probability component in the vector p is no longer y, the black box attack has succeeded, and the method proceeds to step 7); if not, the black box attack has not yet succeeded, and the method returns to step 5);
7) returning the perturbation as an effective perturbation that causes the black box model F to misclassify the original sample x; the sample x⁺ at this point is an adversarial sample for the black box model F, and the method ends.
CN202010775599.1A 2020-08-05 2020-08-05 Black box attack method based on transfer model Jacobian matrix eigenvector perturbation Active CN112085055B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010775599.1A CN112085055B (en) 2020-08-05 2020-08-05 Black box attack method based on transfer model Jacobian matrix eigenvector perturbation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010775599.1A CN112085055B (en) 2020-08-05 2020-08-05 Black box attack method based on transfer model Jacobian matrix eigenvector perturbation

Publications (2)

Publication Number Publication Date
CN112085055A (en) 2020-12-15
CN112085055B CN112085055B (en) 2022-12-13

Family

ID=73735579

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010775599.1A Active CN112085055B (en) Black box attack method based on transfer model Jacobian matrix eigenvector perturbation

Country Status (1)

Country Link
CN (1) CN112085055B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298238A (en) * 2021-06-28 2021-08-24 上海观安信息技术股份有限公司 Method, apparatus, processing device, storage medium for exploring black-box neural networks using directed attacks
CN113380255A (en) * 2021-05-19 2021-09-10 浙江工业大学 Voiceprint recognition poisoning sample generation method based on transfer training
CN113469330A (en) * 2021-06-25 2021-10-01 中国人民解放军陆军工程大学 Method for enhancing sample mobility resistance by bipolar network corrosion
CN114693732A (en) * 2022-03-07 2022-07-01 四川大学华西医院 Weak and small target detection and tracking method
CN115115905A (en) * 2022-06-13 2022-09-27 苏州大学 High-mobility image countermeasure sample generation method based on generation model
CN116504069A (en) * 2023-06-26 2023-07-28 中国市政工程西南设计研究总院有限公司 Urban road network capacity optimization method, device and equipment and readable storage medium
CN116523032A (en) * 2023-03-13 2023-08-01 之江实验室 Image text double-end migration attack method, device and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110837637A (en) * 2019-10-16 2020-02-25 华中科技大学 Black box attack method for brain-computer interface system
CN111027060A (en) * 2019-12-17 2020-04-17 电子科技大学 Knowledge distillation-based neural network black box attack type defense method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110837637A (en) * 2019-10-16 2020-02-25 华中科技大学 Black box attack method for brain-computer interface system
CN111027060A (en) * 2019-12-17 2020-04-17 电子科技大学 Knowledge distillation-based neural network black box attack type defense method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHOU Wen et al.: "Analysis of Adversarial Example Attacks for Low-Dimensional Industrial Control Network Datasets", Journal of Computer Research and Development *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113380255A (en) * 2021-05-19 2021-09-10 浙江工业大学 Voiceprint recognition poisoning sample generation method based on transfer training
CN113380255B (en) * 2021-05-19 2022-12-20 浙江工业大学 Voiceprint recognition poisoning sample generation method based on transfer training
CN113469330A (en) * 2021-06-25 2021-10-01 中国人民解放军陆军工程大学 Method for enhancing sample mobility resistance by bipolar network corrosion
CN113298238A (en) * 2021-06-28 2021-08-24 上海观安信息技术股份有限公司 Method, apparatus, processing device, storage medium for exploring black-box neural networks using directed attacks
CN113298238B (en) * 2021-06-28 2023-06-20 上海观安信息技术股份有限公司 Method, apparatus, processing device, and storage medium for exploring black box neural network using directed attack
CN114693732A (en) * 2022-03-07 2022-07-01 四川大学华西医院 Weak and small target detection and tracking method
CN114693732B (en) * 2022-03-07 2022-11-25 四川大学华西医院 Weak and small target detection and tracking method
CN115115905A (en) * 2022-06-13 2022-09-27 苏州大学 High-mobility image countermeasure sample generation method based on generation model
CN116523032A (en) * 2023-03-13 2023-08-01 之江实验室 Image text double-end migration attack method, device and medium
CN116523032B (en) * 2023-03-13 2023-09-29 之江实验室 Image text double-end migration attack method, device and medium
CN116504069A (en) * 2023-06-26 2023-07-28 中国市政工程西南设计研究总院有限公司 Urban road network capacity optimization method, device and equipment and readable storage medium
CN116504069B (en) * 2023-06-26 2023-09-05 中国市政工程西南设计研究总院有限公司 Urban road network capacity optimization method, device and equipment and readable storage medium

Also Published As

Publication number Publication date
CN112085055B (en) 2022-12-13

Similar Documents

Publication Publication Date Title
CN112085055B (en) Black box attack method based on transfer model Jacobian matrix eigenvector perturbation
CN107529650B (en) Closed loop detection method and device and computer equipment
CN111583263B (en) Point cloud segmentation method based on joint dynamic graph convolution
CN108647583B (en) Face recognition algorithm training method based on multi-target learning
WO2020228525A1 (en) Place recognition method and apparatus, model training method and apparatus for place recognition, and electronic device
CN113326731B (en) Cross-domain pedestrian re-identification method based on momentum network guidance
CN111709435B (en) Discrete wavelet transform-based countermeasure sample generation method
CN113076994B (en) Open-set domain self-adaptive image classification method and system
CN110120064B (en) Depth-related target tracking algorithm based on mutual reinforcement and multi-attention mechanism learning
CN114492574A (en) Pseudo label loss unsupervised countermeasure domain adaptive picture classification method based on Gaussian uniform mixing model
CN111639564B (en) Video pedestrian re-identification method based on multi-attention heterogeneous network
CN107945210B (en) Target tracking method based on deep learning and environment self-adaption
WO2022160772A1 (en) Person re-identification method based on view angle guidance multi-adversarial attention
CN107862680B (en) Target tracking optimization method based on correlation filter
CN113158955B (en) Pedestrian re-recognition method based on clustering guidance and paired measurement triplet loss
WO2022218396A1 (en) Image processing method and apparatus, and computer readable storage medium
CN112232395B (en) Semi-supervised image classification method for generating countermeasure network based on joint training
CN112084895B (en) Pedestrian re-identification method based on deep learning
CN113920472A (en) Unsupervised target re-identification method and system based on attention mechanism
CN110135435B (en) Saliency detection method and device based on breadth learning system
CN113807214B (en) Small target face recognition method based on deit affiliated network knowledge distillation
CN113378620B (en) Cross-camera pedestrian re-identification method in surveillance video noise environment
CN114267060A (en) Face age identification method and system based on uncertain suppression network model
CN114417975A (en) Data classification method and system based on deep PU learning and class prior estimation
JP6600288B2 (en) Integrated apparatus and program

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant