CN114580492B - Cross-domain pedestrian re-recognition method based on mutual learning - Google Patents
Cross-domain pedestrian re-recognition method based on mutual learning
- Publication number
- CN114580492B CN202111468957.5A CN202111468957A CN114580492B
- Authority
- CN
- China
- Prior art keywords
- training
- model
- sample
- samples
- neighbor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 28
- 238000012549 training Methods 0.000 claims abstract description 65
- 238000005065 mining Methods 0.000 claims abstract description 27
- 238000002955 isolation Methods 0.000 claims abstract description 13
- 239000011159 matrix material Substances 0.000 claims abstract description 8
- 238000013528 artificial neural network Methods 0.000 claims abstract description 3
- 238000012216 screening Methods 0.000 claims description 3
- 238000009412 basement excavation Methods 0.000 claims description 2
- 238000004364 calculation method Methods 0.000 claims description 2
- 230000002194 synthesizing effect Effects 0.000 claims description 2
- 239000000523 sample Substances 0.000 description 73
- 230000006870 function Effects 0.000 description 8
- 238000012544 monitoring process Methods 0.000 description 6
- 238000010586 diagram Methods 0.000 description 4
- 230000015556 catabolic process Effects 0.000 description 2
- 238000006731 degradation reaction Methods 0.000 description 2
- 239000013598 vector Substances 0.000 description 2
- 238000009825 accumulation Methods 0.000 description 1
- 230000015572 biosynthetic process Effects 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 238000002372 labelling Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000003786 synthesis reaction Methods 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24147—Distances to closest patterns, e.g. nearest neighbour classification
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a cross-domain pedestrian re-identification method based on mutual learning, which comprises two parts: target domain information mining based on mutual learning, and a training strategy based on mutual learning. The target domain information mining using mutual learning proceeds as follows: (1) train on the source domain and the target domain to obtain two pre-trained models; (2) extract features with the two models and mine neighbor samples of pedestrians in the target domain; (3) generate pseudo labels from the Jaccard distance. The training strategy based on mutual learning proceeds as follows: (1) each pre-trained model selects samples for its peer model to train on; (2) define the isolation of samples using the KL divergence and select samples accordingly; (3) compute the rank matrix of the selected samples using the KL divergence; (4) update each model with the triplets constructed by its peer model; (5) mine the target domain information again with the two updated models, update the pseudo labels, and retrain the neural network to complete pedestrian re-identification.
Description
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a cross-domain pedestrian re-identification method based on mutual learning, which can be used for scenes such as security monitoring and video analysis.
Background
In recent years, as surveillance cameras have spread to every corner of the world, the coverage of monitoring networks has expanded comprehensively, and the rapidly growing mass of video data further exposes the shortcomings of traditional manual security analysis in both timeliness and accuracy. Traditional security analysis relies mainly on people watching surveillance footage to find problems: on the one hand, large volumes of video cannot be reviewed quickly enough, so the golden response window is missed; on the other hand, accuracy is low. Since deep learning was introduced into pedestrian re-identification in 2014, accuracy has improved greatly thanks to large annotated datasets and open-source network architectures. However, when a model trained on an existing annotated dataset is applied to a new scene without manual annotation, its performance degrades substantially, mainly because of inconsistent data distributions caused by differences in illumination, background, camera angles, and so on between scenes. To alleviate this problem, the main task of cross-domain pedestrian re-identification is to transfer the knowledge and feature expression capability learned from annotated source-domain data to unlabeled target-domain data. Cross-domain pedestrian re-identification brings convenience to practical applications of pedestrian re-identification and has therefore become a major research direction in both industry and academia.
Existing unsupervised cross-domain pedestrian re-identification methods are affected by factors such as noisy pseudo labels, low-quality samples, and differences in the distance distributions of positive and negative samples. Their accuracy still lags far behind that of supervised pedestrian re-identification, they are rarely applied in real scenes, and they do not fully exploit spatio-temporal information.
Disclosure of Invention
The technical problem solved by the invention is as follows: to provide a cross-domain pedestrian re-recognition method based on mutual learning that can be applied effectively to new scenes without pedestrian identity annotations, reduces the dependence on labels when training the model, and can be used in scenarios such as security monitoring and video analysis.
The technical solution of the invention is as follows: to address the low quality of the pseudo labels generated by prior methods, the target domain information is mined based on mutual learning, and higher-quality pseudo labels are obtained by combining the feature expression capabilities of the two models. First, each model performs neighbor mining according to the k-reciprocal nearest neighbor strategy, so that two neighbor sets are obtained for each sample; the two sets are then intersected to obtain a higher-confidence neighbor set. Positive-sample neighbors may still exist in the set that remains after the difference of the two sets is taken, so neighbor mining is performed on each sample in that set with the k-reciprocal nearest neighbor strategy using the feature expression capability of the peer model, and whether the sample is added to the final neighbor set is decided by the degree of overlap between its neighbor set and the intersection set.
To address the fact that pseudo labels used as supervision inevitably contain noise, two models are trained simultaneously, and each model selects the samples used to train its peer model; this effectively alleviates error accumulation and prevents model degradation caused by noise. For the classification loss, the criterion for sample selection is the magnitude of the cross-entropy loss; for the triplet loss, the criterion is isolation. Isolation refers to whether a sample with the same identity can be found in the mini-batch with high probability: if the probability is high, the isolation is low; otherwise, the isolation is high. The KL divergence is used as the measure of similarity between the identity probability distributions of samples, and samples with high isolation generally have low similarity to all other samples.
The invention discloses a cross-domain pedestrian re-identification method based on mutual learning, which comprises the following: a target domain information mining method based on mutual learning and a training strategy based on mutual learning;
The target domain information mining method based on mutual learning comprises the following steps:
(d1) Training by using a labeled source domain data set to obtain a source domain pre-training model, extracting target domain data features by using the source domain pre-training model, generating pseudo tags by using a DBSCAN clustering algorithm for training, and obtaining a target domain pre-training model, the two models being referred to as each other's peer model;
(d2) Calculating to obtain a neighbor set of each pedestrian in the target domain by using the two pre-training models obtained in the step (d 1);
(d3) Converting the neighbor set in the step (d 2) into Jaccard distance, and generating a pseudo tag through a DBSCAN clustering algorithm;
the training strategy based on mutual learning comprises the following steps:
(t 1) each pre-training model in step (d 1) selecting, for its peer model, the 80% of samples with the smallest cross-entropy loss, and using this portion of samples to update the parameters of the peer model through the classification loss;
(t 2) calculating the KL divergence between samples by using the identity probabilities generated by each model updated in step (t 1), the KL divergence representing the identity similarity between samples;
(t 3) defining the isolation of each sample based on the KL divergence, each model in step (t 1) selecting, for its peer model, the 80% of samples with the lowest isolation;
(t 4) determining the positive and negative sample pairs of the triplets by using the KL-divergence rank matrix of the samples selected in step (t 3);
(t 5) updating the parameters of each model in step (t 1) with the triplet loss, through the triplets constructed by its peer model;
(t 6) re-mining the target domain information according to the parameters updated in step (t 5), generating updated pseudo tags, and retraining the neural network with the updated pseudo tags;
(t 7) obtaining a feature model after training is completed, and carrying out pedestrian retrieval.
Said step (d 1) comprises the steps of:
(d 1.1) training by using marked source domain data to obtain a source domain pre-training model;
(d 1.2) extracting the features of each sample of the target domain by using the source domain pre-training model as the feature model, and generating a pseudo tag by using a DBSCAN clustering algorithm;
(d 1.3) performing preliminary training on the target domain data set by using the pseudo tag generated in the step (d 1.2) to obtain a target domain pre-training model.
Said step (d 2) comprises the steps of:
(d 2.1) extracting the features of all pedestrians in the target domain dataset by using the two pre-training models obtained in step (d 1) and generating two feature matrices, and searching for neighbor samples for each pedestrian according to the two feature matrices and the k-reciprocal nearest neighbor strategy;
(d 2.2) performing neighbor mining based on the consistency of the two pre-training models to obtain a higher-confidence neighbor sample set, and discarding the remaining samples after screening;
(d 2.3) each pre-training model further mining the sample set discarded in step (d 2.2) by utilizing the feature expression capability of its peer model to obtain effective neighbor samples;
(d 2.4) merging the neighbor samples mined in steps (d 2.2) and (d 2.3) to obtain the final neighbor set.
Said step (d 3) comprises the steps of:
(d 3.1) converting the neighbor sample set of each sample obtained in the step (d 2.3) into Jaccard distances to obtain a distance matrix;
(d 3.2) assigning a pseudo tag to each pedestrian sample by using a DBSCAN clustering algorithm according to the distance matrix obtained in step (d 3.1), during which a portion of the samples is classified as noise samples;
(d 3.3) assigning a pseudo tag to the noise samples in step (d 3.2) using a KNN strategy;
(d 3.4) combining the pseudo tags in steps (d 3.2) and (d 3.3) to obtain the final pseudo tags of the target domain dataset.
Compared with the prior art, the invention has the advantages that:
(1) The invention addresses the problem of cross-domain pedestrian re-recognition and provides a target domain information mining method based on mutual learning, so that higher-quality supervision information is obtained. At the same time, each model selects samples for its peer model for network training, which alleviates the influence of noise in the pseudo labels on model training. The aim of the invention is to obtain a model with good feature discrimination capability, under the condition that the target domain has no manually annotated pedestrian identities, by using target domain information mining to produce pseudo labels as supervision information. This effectively solves the problem that a model trained on a manually annotated source domain dataset cannot transfer its feature discrimination capability to a target domain dataset without manual annotation.
(2) On the basis of obtaining a higher-quality neighbor set, the invention can fully utilize the feature expression capability of the two models to obtain higher-quality pseudo labels.
(3) Network training is performed through mutual learning, with each model selecting the samples used to train its peer model; this effectively mitigates the influence of noise in the pseudo labels and alleviates the model degradation caused by noise in the supervision information during training.
Drawings
FIG. 1 is a flow chart of the implementation of the method of the present invention;
FIG. 2 is a schematic diagram of a pre-training process in the present invention;
FIG. 3 is a schematic diagram of neighbor sample mining in the present invention;
FIG. 4 is a schematic diagram of sample-based selection of mutual learning in the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings.
As shown in fig. 1, the cross-domain pedestrian re-identification method based on mutual learning of the invention comprises the following steps:
Step one: target domain information mining method based on mutual learning
1.1) Two pre-trained models are first needed to generate a feature vector for each pedestrian image. The pre-training process of the two models is shown in fig. 2: using ResNet as the backbone network, the two models are pre-trained on the source domain data and on the target domain dataset, respectively. First, an initial model is trained on the source domain dataset; this initial model is then used to obtain pseudo labels for the target domain data, and the pseudo labels are used to preliminarily train the initial model on the target domain. After this step, two pre-trained models M_s and M_t are obtained, where M_s denotes the model trained with source domain data and, correspondingly, M_t denotes the model trained with target domain data.
1.2) Neighbor sample mining is performed on the basis of the two pre-trained models; a schematic diagram of neighbor sample mining is shown in fig. 3. Two neighbor sample sets are first generated for each sample based on the feature expression capabilities of the two models and the k-reciprocal nearest neighbor strategy. The formula is as follows:
N(p,k) = {g_1, g_2, ..., g_k}, |N(p,k)| = k (1)
wherein in formula (1), p represents the probe sample, g represents a gallery (candidate) sample, N(p,k) represents the set of the k nearest neighbors of sample p ordered so that d(p,g_i) < d(p,g_{i+1}), d(p,g) represents the Euclidean distance between any two pedestrian samples p and g, and |·| represents the cardinality of a set.
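As an illustration of this step, the following is a minimal Python sketch of the k-reciprocal nearest-neighbor mining described by formula (1), assuming the features extracted by one model are stacked into an n×d NumPy array; the value of k and the exclusion of a sample from its own neighbor list are illustrative choices, not fixed by the patent.

```python
import numpy as np

def k_reciprocal_neighbors(features: np.ndarray, k: int = 20) -> list:
    """For each sample p, return R(p, k): g is kept only if g is among the
    k nearest neighbors of p AND p is among the k nearest neighbors of g."""
    # Pairwise Euclidean distances (dense, for clarity rather than efficiency).
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    # N(p, k): indices of the k nearest neighbors of each sample, excluding itself.
    knn = np.argsort(dists, axis=1)[:, 1:k + 1]
    neighbor_sets = [set(row.tolist()) for row in knn]
    return [{g for g in neighbor_sets[p] if p in neighbor_sets[g]}
            for p in range(len(features))]
```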
Then, neighbor mining is performed based on the consistency of the two models, with the formula:
R_co(p,k) = R_s(p,k) ∩ R_t(p,k) (2)
wherein in formula (2), R_s(p,k) denotes the neighbor set obtained by neighbor mining with the k-reciprocal nearest neighbor strategy using the feature model trained on source domain data, R_t(p,k) likewise denotes the neighbor set obtained using the feature model trained on target domain data, and R_co(p,k) is the neighbor set obtained as their intersection.
Taking the difference of R_s(p,k) and R_t(p,k) yields a remaining sample set; neighbor mining is then performed on each sample q in this set using the feature expression capability of the peer model, and the degree of overlap between q's neighbor set and the R_co(p,k) neighbor set is examined; if the overlap is high enough, q is regarded as a neighbor sample of p. The formula is as follows:
In formula (3), R_T is the result of further mining the remaining neighbor set based on the feature model M_t, and correspondingly R_S is the result of further mining the remaining neighbor set based on the feature model M_s.
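The consistency-based refinement of steps (d 2.2)-(d 2.4) might be sketched as follows; here `recip_s` and `recip_t` are the k-reciprocal neighbor sets computed for models M_s and M_t (for example with the sketch above), and `overlap_thresh` is a hypothetical stand-in for the overlap criterion of formula (3), whose exact form is not reproduced in this text.

```python
def refine_neighbors(p: int, recip_s: list, recip_t: list,
                     overlap_thresh: float = 0.5) -> set:
    """Mutual-learning neighbor refinement for one probe sample p."""
    r_co = recip_s[p] & recip_t[p]        # consistent, high-confidence neighbors (eq. 2)
    recovered = set()
    # Candidates proposed by only one model are re-examined with the peer model:
    for q in recip_s[p] - r_co:           # proposed only by M_s -> check with M_t
        if r_co and len(recip_t[q] & r_co) / len(r_co) >= overlap_thresh:
            recovered.add(q)
    for q in recip_t[p] - r_co:           # proposed only by M_t -> check with M_s
        if r_co and len(recip_s[q] & r_co) / len(r_co) >= overlap_thresh:
            recovered.add(q)
    return r_co | recovered               # final neighbor set of p (step d 2.4)
```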
1.3) After the neighbor set of each sample is obtained, it is converted into the Jaccard distance, with the formula:
In formula (4), d(p,g_i) represents the Euclidean distance between two samples, |·|_1 represents the L1 norm, and d_J(p,g_i) represents the final Jaccard distance.
1.4) After the Jaccard distance matrix is obtained, clustering is performed with the DBSCAN algorithm; DBSCAN classifies a portion of the samples as noise samples, and a KNN algorithm is used to re-assign pseudo labels to those samples. At this point a pseudo label has been obtained for every pedestrian sample.
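One possible implementation of steps 1.3 and 1.4 is sketched below using scikit-learn's DBSCAN on a precomputed distance matrix; the set-based Jaccard distance used here and the `eps`, `min_samples` and `knn` values are assumptions made for illustration (the patent's formula (4) uses an L1-norm formulation and does not fix these parameters).

```python
import numpy as np
from sklearn.cluster import DBSCAN

def pseudo_labels_from_neighbors(neighbor_sets, eps=0.6, min_samples=4, knn=5):
    """Convert neighbor sets into Jaccard distances, cluster with DBSCAN, and
    re-assign DBSCAN noise points (label -1) by a simple KNN vote."""
    n = len(neighbor_sets)
    dist = np.ones((n, n), dtype=np.float32)
    for i in range(n):
        for j in range(n):
            union = neighbor_sets[i] | neighbor_sets[j]
            if union:
                dist[i, j] = 1.0 - len(neighbor_sets[i] & neighbor_sets[j]) / len(union)
    np.fill_diagonal(dist, 0.0)

    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="precomputed").fit_predict(dist)

    # KNN re-labelling of noise samples using their nearest clustered samples.
    clustered = np.where(labels != -1)[0]
    for i in np.where(labels == -1)[0]:
        if len(clustered) == 0:
            break
        nearest = clustered[np.argsort(dist[i, clustered])[:knn]]
        labels[i] = np.bincount(labels[nearest]).argmax()
    return labels
```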
Step two: network training strategy based on mutual learning
2.1) After the pseudo labels are obtained, network training is performed using them as supervision information; first, the criteria for sample selection under mutual learning are formulated. Two sample selection criteria are used: one selects samples based on the magnitude of the cross-entropy loss, and the other based on the magnitude of the isolation. Isolation refers to whether a sample with the same identity can be found in the mini-batch with high probability: if several pedestrian samples with the same identity as a given pedestrian can be found in the mini-batch, the isolation is low; conversely, if few or even no samples with the same identity can be found, the isolation is high. The KL divergence is used as the index measuring the similarity of identity probability distributions between samples, with the formula:
wherein in formula (5), q_i represents the samples in the mini-batch other than p.
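The sketch below shows one way to turn pairwise KL divergences over identity probabilities into a per-sample isolation score; the aggregation used here (the divergence to the most similar other sample in the mini-batch) is an assumption, since the text only states that isolation is derived from the KL divergence.

```python
import torch
import torch.nn.functional as F

def isolation_scores(logits: torch.Tensor) -> torch.Tensor:
    """Per-sample isolation within a mini-batch from pairwise KL divergences.

    logits: (B, C) identity logits produced by one model for the mini-batch.
    Returns a (B,) tensor; a large value means the sample has no similar-identity
    counterpart in the batch (high isolation)."""
    # Isolation is used only to select samples, so gradients are not needed.
    log_p = F.log_softmax(logits.detach(), dim=1)              # (B, C)
    p = log_p.exp()
    # KL(p_i || p_j) = sum_c p_{i,c} * (log p_{i,c} - log p_{j,c})
    kl = (p.unsqueeze(1) * (log_p.unsqueeze(1) - log_p.unsqueeze(0))).sum(-1)  # (B, B)
    kl.fill_diagonal_(float("inf"))                            # ignore self-comparison
    # Assumed aggregation: isolation = divergence to the closest other sample.
    return kl.min(dim=1).values
```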
2.2) Sample selection is performed according to the sample selection criteria; this portion of samples is used for the classification loss. The sample set obtained according to the cross-entropy loss is:
D_ce = argmin_{D': |D'| = α|D|} L_ce(N, D') (6)
In formula (6), N represents the network, D' represents a candidate sample set, and L_ce(N, D') represents the sum of the cross-entropy losses over that sample set. The sample set obtained according to isolation is:
D_iso = argmin_{D': |D'| = α|D|} L_iso(N, D') (7)
In formula (7), L_iso(N, D') represents the total isolation of the current sample set.
The two sample selection criteria are combined to obtain the final sample set:
D_inter = {p | p ∈ D_ce} ∩ {p | p ∈ D_iso} (8)
In formula (8), D_inter is the intersection of the samples selected by the two criteria, giving a lower-noise set of samples that can be used for training.
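A small sketch of the combined selection of formulas (6)-(8), assuming the per-sample cross-entropy losses and isolation scores have already been computed by one model; `alpha = 0.8` matches the 80% ratio in the claims, and keeping the lowest-isolation samples follows the argmin of formula (7).

```python
import torch

def select_reliable_samples(ce_losses: torch.Tensor,
                            isolation: torch.Tensor,
                            alpha: float = 0.8) -> torch.Tensor:
    """Indices of samples kept for training the peer model: the alpha fraction
    with the smallest cross-entropy loss (eq. 6) intersected with the alpha
    fraction with the lowest isolation (eq. 7), i.e. D_inter of eq. (8)."""
    keep = max(1, int(alpha * ce_losses.numel()))
    d_ce = set(torch.argsort(ce_losses)[:keep].tolist())
    d_iso = set(torch.argsort(isolation)[:keep].tolist())
    return torch.tensor(sorted(d_ce & d_iso), dtype=torch.long)
```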
The classifier is defined as C: F → {c_1, c_2, ..., c_n}, and the final classification loss is as follows:
In formula (9), one term denotes the true probability that sample i belongs to class j, and the other denotes the probability predicted by the network that sample i belongs to class j; the loss computed on the samples selected by M_t is used to update model M_s, and correspondingly the loss computed on the samples selected by M_s is used to update model M_t. Each model is thus trained and updated with the samples chosen by its peer model.
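The cross-updating described above might look like the following sketch for two independent networks `model_s` and `model_t`; this is a simplified illustration that uses only the cross-entropy criterion (the isolation-based selection for the triplet branch is handled separately) and assumes the pseudo labels of the mini-batch are already available.

```python
import torch
import torch.nn.functional as F

def mutual_classification_step(model_s, model_t, opt_s, opt_t,
                               images, pseudo_labels, alpha: float = 0.8):
    """One mutual-learning classification update: each model is trained only on
    the samples its peer considers reliable (smallest cross-entropy loss)."""
    ce_s = F.cross_entropy(model_s(images), pseudo_labels, reduction="none")
    ce_t = F.cross_entropy(model_t(images), pseudo_labels, reduction="none")

    keep = max(1, int(alpha * len(pseudo_labels)))
    sel_by_s = torch.argsort(ce_s.detach())[:keep]   # samples M_s trusts -> update M_t
    sel_by_t = torch.argsort(ce_t.detach())[:keep]   # samples M_t trusts -> update M_s

    opt_s.zero_grad(); ce_s[sel_by_t].mean().backward(); opt_s.step()
    opt_t.zero_grad(); ce_t[sel_by_s].mean().backward(); opt_t.step()
```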
2.3) Sample selection is performed according to the isolation criterion, and triplet pairs are constructed for the triplet loss function. The final triplet loss function is as follows:
In formula (10), a represents the current sample, i.e. the anchor sample of the constructed triplet; d_{a,p} represents the feature distance between the anchor sample and a positive sample, a positive sample being a sample with the same identity as the anchor; d_{a,n} represents the feature distance between the anchor sample and a negative sample, a negative sample being a sample whose identity differs from the anchor's. Among all positive sample pairs, the hardest positive pair, i.e. the pair with the largest feature distance, is selected; among all negative sample pairs, the hardest negative pair, i.e. the pair with the smallest feature distance, is selected; these finally form the triplet. The sample set selected by M_t according to the isolation criterion enters the loss function used to update M_s, and correspondingly the sample set selected by M_s enters the loss function used to update M_t.
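For reference, a standard batch-hard formulation of the triplet loss, selecting the hardest positive and hardest negative per anchor by feature distance, is sketched below; note that in the method described above the positive and negative pairs are determined through the KL-divergence rank matrix of the peer-selected samples, which is abstracted here into the `labels` argument, and the margin value is an assumption.

```python
import torch

def batch_hard_triplet_loss(features: torch.Tensor, labels: torch.Tensor,
                            margin: float = 0.3) -> torch.Tensor:
    """Triplet loss using, for each anchor, the farthest same-identity sample
    (hardest positive) and the closest different-identity sample (hardest negative)."""
    dist = torch.cdist(features, features)             # pairwise feature distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # same-identity mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)

    # Hardest positive: largest distance among same-identity samples (self excluded).
    d_ap = dist.masked_fill(~same | eye, -float("inf")).max(dim=1).values
    # Hardest negative: smallest distance among different-identity samples.
    d_an = dist.masked_fill(same, float("inf")).min(dim=1).values

    return torch.relu(d_ap + margin - d_an).mean()
```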
2.4) After the corresponding loss functions are obtained, they are used to train the models and update the model parameters; the idea of mutual learning based on sample selection is shown in fig. 4, and after training for a period of time the pseudo labels are updated once according to the latest models. Fig. 4 mainly shows the mutual-learning training process for the classification loss and the triplet loss. Because of their different pre-training strategies, model A and model B naturally have different feature discrimination capabilities; each model picks samples for training its peer model, thereby alleviating the negative impact of training with samples that carry false labels.
2.5) After training is completed, two feature models with essentially consistent feature discrimination capability are obtained, and either model can be selected for pedestrian re-identification.
Claims (3)
1. A cross-domain pedestrian re-identification method based on mutual learning, characterized by comprising the following: a target domain information mining method based on mutual learning and a training strategy based on mutual learning;
The target domain information mining method based on mutual learning comprises the following steps:
(d1) Training by using a labeled source domain data set to obtain a source domain pre-training model, extracting target domain data features by using the source domain pre-training model, generating pseudo tags by using a DBSCAN clustering algorithm for training, and obtaining a target domain pre-training model, the two models being referred to as each other's peer model;
(d2) Calculating to obtain a neighbor set of each pedestrian in the target domain by using the two pre-training models obtained in the step (d 1);
(d3) Converting the neighbor set in the step (d 2) into Jaccard distance, and generating a pseudo tag through a DBSCAN clustering algorithm;
the training strategy based on mutual learning comprises the following steps:
(t 1) each pre-training model in step (d 1) selecting, for its peer model, the 80% of samples with the smallest cross-entropy loss, and using this portion of samples to update the parameters of the peer model through the classification loss;
(t 2) calculating the KL divergence between samples by using the identity probabilities generated by each model updated in step (t 1), the KL divergence representing the identity similarity between samples;
(t 3) defining the isolation of each sample based on the KL divergence, each model in step (t 1) selecting, for its peer model, the 80% of samples with the lowest isolation;
(t 4) determining the positive and negative sample pairs of the triplets by using the KL-divergence rank matrix of the samples selected in step (t 3);
(t 5) updating the parameters of each model in step (t 1) with the triplet loss, through the triplets constructed by its peer model;
(t 6) re-mining the target domain information according to the parameters updated in step (t 5), generating updated pseudo tags, and retraining the neural network with the updated pseudo tags;
(t 7) obtaining a feature model after training is completed, and carrying out pedestrian retrieval;
said step (d 2) comprises the steps of:
(d 2.1) extracting the features of all pedestrians in the target domain dataset by using the two pre-training models obtained in step (d 1) and generating two feature matrices, and searching for neighbor samples for each pedestrian according to the two feature matrices and the k-reciprocal nearest neighbor strategy;
(d 2.2) performing neighbor mining based on the consistency of the two pre-training models, the two pre-training models jointly screening out a higher-confidence neighbor sample set, and discarding the remaining samples after screening;
(d 2.3) each pre-training model further mining the sample set discarded in step (d 2.2) by utilizing the feature expression capability of its peer model to obtain effective neighbor samples;
(d 2.4) merging the neighbor samples mined in steps (d 2.2) and (d 2.3) to obtain the final neighbor set.
2. The cross-domain pedestrian re-recognition method based on mutual learning according to claim 1, characterized in that said step (d 1) comprises the steps of:
(d 1.1) training by using marked source domain data to obtain a source domain pre-training model;
(d 1.2) extracting the features of each sample of the target domain by using the source domain pre-training model as the feature model, and generating a pseudo tag by using a DBSCAN clustering algorithm;
(d 1.3) performing preliminary training on the target domain data set by using the pseudo tag generated in the step (d 1.2) to obtain a target domain pre-training model.
3. The cross-domain pedestrian re-recognition method based on mutual learning according to claim 1, characterized in that said step (d 3) comprises the steps of:
(d 3.1) converting the neighbor sample set of each sample obtained in the step (d 2.3) into Jaccard distances to obtain a distance matrix;
(d 3.2) assigning a pseudo tag to each pedestrian sample by using a DBSCAN clustering algorithm according to the distance matrix obtained in step (d 3.1), during which a portion of the samples is classified as noise samples;
(d 3.3) assigning a pseudo tag to the noise samples in step (d 3.2) using a KNN strategy;
(d 3.4) combining the pseudo tags in steps (d 3.2) and (d 3.3) to obtain the final pseudo tags of the target domain dataset.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111468957.5A CN114580492B (en) | 2021-12-03 | 2021-12-03 | Cross-domain pedestrian re-recognition method based on mutual learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111468957.5A CN114580492B (en) | 2021-12-03 | 2021-12-03 | Cross-domain pedestrian re-recognition method based on mutual learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114580492A CN114580492A (en) | 2022-06-03 |
CN114580492B true CN114580492B (en) | 2024-06-21 |
Family
ID=81772136
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111468957.5A Active CN114580492B (en) | 2021-12-03 | 2021-12-03 | Cross-domain pedestrian re-recognition method based on mutual learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114580492B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116246349B (en) * | 2023-05-06 | 2023-08-15 | 山东科技大学 | Single-source domain generalization gait recognition method based on progressive subdomain mining |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110008842A (en) * | 2019-03-09 | 2019-07-12 | 同济大学 | A kind of pedestrian's recognition methods again for more losing Fusion Model based on depth |
CN110866536B (en) * | 2019-09-25 | 2022-06-07 | 西安交通大学 | Cross-regional enterprise tax evasion identification method based on PU learning |
CN111967294B (en) * | 2020-06-23 | 2022-05-20 | 南昌大学 | Unsupervised domain self-adaptive pedestrian re-identification method |
CN111860678B (en) * | 2020-07-29 | 2024-02-27 | 中国矿业大学 | Unsupervised cross-domain pedestrian re-identification method based on clustering |
-
2021
- 2021-12-03 CN CN202111468957.5A patent/CN114580492B/en active Active
Non-Patent Citations (2)
Title |
---|
Cross-domain pedestrian re-identification based on graph convolutional neural network; Pan Shaoming; Wang Yujie; Chong Yanwen; Journal of Huazhong University of Science and Technology (Natural Science Edition); 2020-12-31 (No. 09); full text *
A survey of pedestrian re-identification in weakly supervised scenarios; Qi Lei; Yu Peize; Gao Yang; Journal of Software; 2020-09-15 (No. 09); full text *
Also Published As
Publication number | Publication date |
---|---|
CN114580492A (en) | 2022-06-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111860678B (en) | Unsupervised cross-domain pedestrian re-identification method based on clustering | |
Wu et al. | Exploit the unknown gradually: One-shot video-based person re-identification by stepwise learning | |
CN109961051B (en) | Pedestrian re-identification method based on clustering and block feature extraction | |
CN113516012B (en) | Pedestrian re-identification method and system based on multi-level feature fusion | |
CN112036322B (en) | Method, system and device for constructing cross-domain pedestrian re-identification model of multi-task network | |
CN108960080B (en) | Face recognition method based on active defense image anti-attack | |
CN111832514B (en) | Unsupervised pedestrian re-identification method and unsupervised pedestrian re-identification device based on soft multiple labels | |
CN110717411A (en) | Pedestrian re-identification method based on deep layer feature fusion | |
CN110210335B (en) | Training method, system and device for pedestrian re-recognition learning model | |
CN112819065B (en) | Unsupervised pedestrian sample mining method and unsupervised pedestrian sample mining system based on multi-clustering information | |
CN112906606B (en) | Domain self-adaptive pedestrian re-identification method based on mutual divergence learning | |
CN114092964A (en) | Cross-domain pedestrian re-identification method based on attention guidance and multi-scale label generation | |
CN110765880B (en) | Light-weight video pedestrian heavy identification method | |
Wanyan et al. | Active exploration of multimodal complementarity for few-shot action recognition | |
WO2022160772A1 (en) | Person re-identification method based on view angle guidance multi-adversarial attention | |
CN113239801B (en) | Cross-domain action recognition method based on multi-scale feature learning and multi-level domain alignment | |
CN111242064A (en) | Pedestrian re-identification method and system based on camera style migration and single marking | |
CN112906605B (en) | Cross-mode pedestrian re-identification method with high accuracy | |
CN115984901A (en) | Multi-mode-based graph convolution neural network pedestrian re-identification method | |
CN116721458A (en) | Cross-modal time sequence contrast learning-based self-supervision action recognition method | |
CN114580492B (en) | Cross-domain pedestrian re-recognition method based on mutual learning | |
CN115311605B (en) | Semi-supervised video classification method and system based on neighbor consistency and contrast learning | |
Yu et al. | Camera-tracklet-aware contrastive learning for unsupervised vehicle re-identification | |
CN112801019A (en) | Method and system for eliminating re-identification deviation of unsupervised vehicle based on synthetic data | |
Casagrande et al. | Abnormal motion analysis for tracking-based approaches using region-based method with mobile grid |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |