CN111428675A - Pedestrian re-recognition method integrated with pedestrian posture features - Google Patents
Pedestrian re-recognition method integrated with pedestrian posture features
- Publication number
- CN111428675A CN111428675A CN202010252780.4A CN202010252780A CN111428675A CN 111428675 A CN111428675 A CN 111428675A CN 202010252780 A CN202010252780 A CN 202010252780A CN 111428675 A CN111428675 A CN 111428675A
- Authority
- CN
- China
- Prior art keywords
- pedestrian
- model
- posture
- feature
- retrieval
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 14
- 239000013598 vector Substances 0.000 claims abstract description 29
- 238000012549 training Methods 0.000 claims abstract description 20
- 239000011159 matrix material Substances 0.000 claims abstract description 18
- 238000012360 testing method Methods 0.000 claims abstract description 17
- 238000001514 detection method Methods 0.000 claims abstract description 9
- 238000000605 extraction Methods 0.000 claims abstract description 6
- 238000007781 pre-processing Methods 0.000 claims abstract description 4
- 238000012545 processing Methods 0.000 claims abstract description 4
- 230000000644 propagated effect Effects 0.000 claims abstract description 4
- 238000012795 verification Methods 0.000 claims abstract description 4
- 230000004927 fusion Effects 0.000 claims description 6
- 238000010606 normalization Methods 0.000 claims description 4
- 238000012216 screening Methods 0.000 claims description 3
- 230000010354 integration Effects 0.000 abstract description 2
- 238000010586 diagram Methods 0.000 description 5
- 230000000694 effects Effects 0.000 description 4
- 238000013459 approach Methods 0.000 description 3
- 230000006870 function Effects 0.000 description 3
- 230000006399 behavior Effects 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 2
- 238000013527 convolutional neural network Methods 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 238000005286 illumination Methods 0.000 description 2
- 238000013508 migration Methods 0.000 description 2
- 230000005012 migration Effects 0.000 description 2
- 230000002087 whitening effect Effects 0.000 description 2
- 230000009471 action Effects 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000037237 body shape Effects 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 230000004069 differentiation Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The invention discloses a pedestrian re-identification method that incorporates pedestrian posture features, comprising the following steps: preprocess a pedestrian re-identification data set and divide it into a training set and a verification set; train a pedestrian skeleton detection model on a public data set; extract high-dimensional pedestrian posture matrix features with that model; fuse them into a pedestrian retrieval model for training; propagate forward to obtain the feature vectors extracted by the feature extraction layer; compute the losses in the forward pass; calculate the gradient information of each sample in the data set; back-propagate the gradient of the loss layer and update the weight parameters of the feature processing module of the pedestrian retrieval model; repeat these steps if the model has not converged and the maximum number of iterations has not been reached; after the network training is finished, complete the pedestrian retrieval task of the query set on the test set. The invention guides the model to learn and screen pedestrian posture features, and the fused posture information improves the ability of the pedestrian retrieval model to retrieve the same pedestrian.
Description
Technical Field
The invention belongs to the technical field of neural networks, and particularly relates to a pedestrian re-identification method that incorporates pedestrian posture features.
Background
Pedestrian screening, i.e. pedestrian re-identification, is a research hotspot in intelligent video analysis and has received wide attention from academia. Pedestrian re-identification determines whether the persons photographed by different cameras are the same person; in a broad sense, it belongs to the field of image retrieval. Searching for the same person in videos shot by different cameras is challenging: first, because illumination, viewing angle, distance and posture differ between cameras, images of the same person captured by different cameras may have low similarity; second, different pedestrians wearing clothes of the same color, or with similar body shapes, are easily identified as the same person. Pedestrian re-identification essentially performs image retrieval using the exterior of the human body, which has both flexible and rigid characteristics, and it is easily affected by factors such as clothing color, illumination and angle, which makes it a very challenging subject. Limited by public data sets and hardware, pedestrian re-identification is at the present stage mainly applied to cross-camera retrieval. The cross-camera pedestrian retrieval task aims to complete pedestrian retrieval across multiple cameras with the designed algorithm, so as to obtain a pedestrian's location information within a certain time period. Different cameras capture the same pedestrian with different shooting angles, shooting styles and image pixel values, which places higher demands on the robustness of the algorithm.
The development of deep-learning convolutional neural network (CNN) models in recent years has greatly advanced the field of pedestrian re-identification. The main remaining difficulty is that pedestrian re-identification models rely too heavily on clothing features when searching for pedestrians: when a pedestrian's clothing changes even slightly, the retrieval performance of the model drops sharply. In view of this problem, no effective solution has yet been proposed.
Disclosure of Invention
Aiming at the above technical problems in the prior art, the invention provides a pedestrian re-identification method that incorporates pedestrian posture features: during training, a deep convolutional network is guided to learn from the samples a high-dimensional pedestrian posture feature matrix beneficial to re-identification, thereby improving the retrieval capability of the model when clothing differs or samples are hard to distinguish.
The technical scheme adopted by the invention is as follows: a pedestrian re-identification method incorporating pedestrian posture features comprises the following steps:
step 1: preprocess a pedestrian re-identification data set (Market1501, Duke-MTMC, CUHK03, etc.); after whitening, divide the data into a training set and a verification set, and augment the training set with random erasing, random cropping, color transfer and random noise for data expansion;
step 2: training a pedestrian skeleton detection model using a public data set (COCO, etc.);
step 3: extract high-dimensional pedestrian posture matrix features (feature maps) with the pedestrian skeleton detection model; specifically, input a sample into the backbone network of the pedestrian skeleton detection model and take the high-dimensional posture matrix features output by the last layer of the backbone network;
step 4: fuse the extracted high-dimensional pedestrian posture matrix into a pedestrian retrieval model for training. The feature fusion may directly sum the high-dimensional posture matrix features with the feature map extracted by the pedestrian retrieval model; or splice the two feature matrices and then compress them with a convolution layer; or first screen the high-dimensional posture matrix features with an SE module, then splice them with the feature map extracted by the pedestrian retrieval model and compress the result;
step 5: propagate forward and obtain the feature vectors extracted by the feature extraction layer;
step 6: compute the losses in the forward pass: a softmax (cross-entropy) loss and a ternary (triplet) loss, where m represents the distance margin between same-identity and different-identity pairs, the hardest positive is the farthest same-identity sample within the whole batch, and the hardest negative is the nearest different-identity sample within the whole batch;
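The loss formulas of step 6 are rendered as images in the original publication and were lost in extraction; consistent with the margin m and the hardest positive/negative described above, their standard forms are presumably:

```latex
% Softmax (cross-entropy) loss over N samples and C identities,
% where f_i is the feature vector of sample i and W, b are the
% weights and biases of the final fully connected layer:
L_{softmax} = -\frac{1}{N}\sum_{i=1}^{N}
  \log\frac{e^{W_{y_i}^{T} f_i + b_{y_i}}}{\sum_{j=1}^{C} e^{W_j^{T} f_i + b_j}}

% Batch-hard triplet loss with margin m: for each anchor a,
% p ranges over same-identity samples and n over different-identity
% samples within the batch, d(.,.) is the Euclidean distance:
L_{triplet} = \frac{1}{N}\sum_{a=1}^{N}
  \left[\, m + \max_{p} d(f_a, f_p) - \min_{n} d(f_a, f_n) \,\right]_{+}
```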
step 7: calculate the gradient information of each sample in the data set;
step 8: back-propagate the gradient of the loss layer and update the weight parameters of the feature processing module of the pedestrian retrieval model;
step 9: if the model has not converged and the maximum number of iterations has not been reached, repeat steps 3-8;
step 10: after the network training is finished, complete the pedestrian retrieval task of the query set on the test set, and calculate Rank1, Rank5, Rank10 and mAP.
Preferably, in step 6, an L2 regularization operation is carried out on the output feature vectors; the unit feature vectors after L2 regularization are spliced into triplets, and the ternary loss is calculated from the triplet feature vectors; the unit feature vectors pass through a Batch Normalization layer, whose bias parameter is set constantly equal to 0, to obtain a Test Vector used in the later pedestrian retrieval test stage; the Test Vector is input into the fully connected layer behind the network model, and the cross-entropy loss is calculated.
Compared with the prior art, the invention has the following beneficial effects: it guides the model to learn and screen pedestrian posture features, and the fused posture information improves the ability of the pedestrian retrieval model to retrieve the same pedestrian.
Drawings
FIG. 1 is a flow chart of pedestrian re-identification model training and testing of the present invention;
FIG. 2 is a flow chart of a pedestrian re-identification model pedestrian retrieval of the present invention;
FIG. 3 is a schematic diagram of a model location of interest after the guided model learns features of the present invention;
FIG. 4 is a schematic diagram of the effect of the pedestrian-oriented attitude model of the present invention;
FIG. 5 is a schematic diagram of a model of the present invention incorporating the posture features of a pedestrian;
FIG. 6 is a schematic diagram of a feature fusion approach of the present invention;
FIG. 7 is a schematic diagram of yet another feature fusion approach of the present invention;
FIG. 8 is a schematic view of yet another feature fusion approach of the present invention;
FIG. 9 is a comparison of the pedestrian retrieval results of the present invention and a common model.
Detailed Description
In order to make the technical solutions of the present invention better understood, the present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The embodiment of the invention discloses a pedestrian re-identification method integrated with pedestrian posture features, which comprises the following steps, as shown in FIG. 1:
step 1: preprocess a pedestrian re-identification data set (Market1501, Duke-MTMC, CUHK03, etc.); after whitening, divide the data into a training set and a verification set, and augment the training set with random erasing, random cropping, color transfer and random noise for data expansion;
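As an illustration of the random-erasing augmentation in step 1, here is a minimal numpy sketch; the probability and area-range defaults, and the noise fill, are illustrative assumptions rather than values from the patent:

```python
import numpy as np

def random_erase(img, erase_prob=0.5, area_frac=(0.02, 0.2), rng=None):
    """Randomly blank out a rectangular patch of an HxWxC image.

    Illustrative sketch: defaults are assumptions, not patent values.
    """
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() > erase_prob:
        return img
    h, w = img.shape[:2]
    # Pick a patch covering a random fraction of the image area.
    frac = rng.uniform(*area_frac)
    ph = max(1, int(h * np.sqrt(frac)))
    pw = max(1, int(w * np.sqrt(frac)))
    y = rng.integers(0, h - ph + 1)
    x = rng.integers(0, w - pw + 1)
    out = img.copy()
    # Fill the patch with random noise, forcing the model to rely on
    # the remaining cues (e.g. posture) rather than local texture.
    out[y:y + ph, x:x + pw] = rng.integers(0, 256, (ph, pw) + img.shape[2:])
    return out
```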
step 2: training a pedestrian skeleton detection model using a public data set (COCO, etc.);
step 3: input a sample into the backbone network of the pedestrian skeleton detection model to obtain the high-dimensional pedestrian posture matrix features output by the last layer of the backbone network;
step 4: fuse the extracted high-dimensional pedestrian posture matrix into a pedestrian retrieval model for training; in this embodiment, the feature fusion first screens the high-dimensional posture matrix features with an SE module, then splices them with the feature map extracted by the pedestrian retrieval model and compresses the result;
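The SE-module fusion of step 4 can be sketched as follows. This is a simplified numpy illustration under assumed shapes and weight names, not the patent's exact implementation: squeeze by global average pooling, excite through a bottleneck to per-channel weights, reweight the posture features, splice with the retrieval feature map, and compress with a 1x1 convolution:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_fuse(pose_feat, reid_feat, w1, w2, w_compress):
    """SE-style screening and fusion sketch; feature maps are (C, H, W),
    weight names are illustrative assumptions."""
    # Squeeze: global average pooling over the spatial dimensions.
    z = pose_feat.mean(axis=(1, 2))                        # (C,)
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid yields a
    # per-channel importance weight in (0, 1).
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))              # (C,)
    screened = pose_feat * s[:, None, None]
    # Splice along channels, then compress back to C channels with a
    # 1x1 convolution (a matrix multiply over the channel axis).
    cat = np.concatenate([screened, reid_feat], axis=0)    # (2C, H, W)
    fused = np.einsum('oc,chw->ohw', w_compress, cat)      # (C, H, W)
    return fused
```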
step 5: propagate forward and obtain the feature vectors extracted by the feature extraction layer;
step 6: compute the loss in the forward pass.
First apply an L2 regularization (normalization) operation to the output feature vector:
L2(feature_vector) = feature_vector / |feature_vector|
After L2 regularization, the Euclidean distance and the cosine distance between two unit feature vectors are positively correlated:
d(A, B) = sqrt(2 - 2cos(A, B))
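The normalization and the Euclidean/cosine relation above can be checked numerically; a minimal numpy sketch (function names are illustrative):

```python
import numpy as np

def l2_normalize(v):
    """L2-normalize a feature vector: v / |v| (the formula above)."""
    return v / np.linalg.norm(v)

# After normalization, the Euclidean distance between two unit vectors
# is a monotone function of their cosine similarity:
#   d(A, B) = sqrt(2 - 2 cos(A, B))
rng = np.random.default_rng(0)
a = l2_normalize(rng.standard_normal(128))
b = l2_normalize(rng.standard_normal(128))
euclid = np.linalg.norm(a - b)
cosine = float(a @ b)
assert np.isclose(euclid, np.sqrt(2.0 - 2.0 * cosine))
```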
The unit feature vectors after L2 normalization are spliced into triplets, and the ternary (triplet) loss is calculated from the triplet feature vectors, where m represents the distance margin between same-identity and different-identity pairs, the hardest positive is the farthest same-identity sample within the whole batch, and the hardest negative is the nearest different-identity sample within the whole batch.
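A minimal numpy sketch of this batch-hard ternary (triplet) loss; the margin value is an illustrative default:

```python
import numpy as np

def batch_hard_triplet_loss(feats, labels, margin=0.3):
    """For each anchor, take the farthest same-identity sample (hardest
    positive) and the nearest different-identity sample (hardest
    negative) within the batch. Sketch only; margin is illustrative."""
    # Pairwise Euclidean distances between all feature vectors.
    diff = feats[:, None, :] - feats[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)
    same = labels[:, None] == labels[None, :]
    losses = []
    for a in range(len(feats)):
        hardest_pos = dist[a][same[a]].max()    # farthest positive
        hardest_neg = dist[a][~same[a]].min()   # nearest negative
        losses.append(max(0.0, margin + hardest_pos - hardest_neg))
    return float(np.mean(losses))
```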
the unit feature Vector after L2 regularization is used for obtaining a Test Vector Testvector through a Bach Normalization layer, the Test Vector is used for a later pedestrian retrieval Test stage, and a bias parameter bias of a Batch Normalization layer is set to be constantly equal to 0. the calculation formula of the L BNNeck is obtained as follows:
the Test Vector is input to the fully connected layer (FC layer) after the network model and the loss is calculated, the softmax loss function is as follows:
step 7: calculate the gradient information of each sample in the data set;
step 8: back-propagate the gradient of the loss layer and update the weight parameters of the feature processing module of the pedestrian retrieval model;
step 9: if the model has not converged and the maximum number of iterations has not been reached, return to step 3;
step 10: after the network training is finished, complete the pedestrian retrieval task of the query set on the test set, and calculate Rank1, Rank5, Rank10 and mAP.
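The evaluation metrics of step 10 can be sketched as follows; this simplified numpy version computes CMC Rank-k hit rates and mean average precision from a query-by-gallery distance matrix, and omits the same-camera filtering used by standard re-ID benchmarks:

```python
import numpy as np

def rank_k_and_map(dist, query_ids, gallery_ids, ks=(1, 5, 10)):
    """CMC Rank-k and mAP from a (num_query, num_gallery) distance
    matrix; simplified sketch without camera-based filtering."""
    hits = {k: 0 for k in ks}
    aps = []
    for q in range(len(query_ids)):
        order = np.argsort(dist[q])               # gallery by distance
        matches = gallery_ids[order] == query_ids[q]
        for k in ks:
            if matches[:k].any():
                hits[k] += 1
        # Average precision: precision at each true-match position.
        pos = np.where(matches)[0]
        precisions = [(i + 1) / (p + 1) for i, p in enumerate(pos)]
        aps.append(np.mean(precisions) if precisions else 0.0)
    n = len(query_ids)
    return {f"Rank{k}": hits[k] / n for k in ks}, float(np.mean(aps))
```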
As shown in FIG. 3, GL is the attention area of a common model, PA is posture-feature extraction branch I of the model, EPA is posture-feature extraction branch II, and EAP-Net (the invention) is the joint attention area of the three branches GL + PA + EPA.
As shown in FIG. 4, fusing the high-dimensional posture feature matrix with the global feature matrix yields a joint attention effect, which provides a better retrieval basis and allows more sample information to be attended to.
As shown in FIG. 9, Query represents a query sample for pedestrian retrieval, and Top-10 results represents the 10 gallery samples closest to the query sample. Odd-numbered rows show the query results of the base model without fused posture features; even-numbered rows show the query results of EAP-Net with fused posture features. Black boxes mark wrongly retrieved samples. EAP-Net has greater pedestrian retrieval capability than the base model without posture features. When different pedestrians wear similar clothes and samples are hard to distinguish, EAP-Net can judge identity from pedestrian posture information, which improves the robustness of the model. When solving Rank1, i.e. finding the most similar sample to the query sample, EAP-Net can preferentially retrieve samples with the same action as the query based on pedestrian posture features, improving the first-hit rate of the model in the identity discrimination stage.
The present invention has been described in detail with reference to the embodiments, but the description is only illustrative and should not be construed as limiting the scope of the invention, which is defined by the claims. Equivalent changes and modifications made by those skilled in the art on the basis of the technical solutions of the present invention also fall within the scope of the present invention.
Claims (5)
1. A pedestrian re-recognition method integrated with pedestrian posture features, characterized by comprising the following steps:
step 1: preprocessing a pedestrian re-identification data set, and dividing the whitened data into a training set and a verification set;
step 2: training a pedestrian skeleton detection model by using a public data set;
step 3: extracting high-dimensional pedestrian posture matrix features with the pedestrian skeleton detection model;
step 4: fusing the extracted high-dimensional pedestrian posture matrix into a pedestrian retrieval model for training;
step 5: propagating forward and obtaining the feature vectors extracted by the feature extraction layer;
step 6: forward computing losses, wherein the losses comprise ternary losses and cross entropy losses;
step 7: calculating the gradient information of each sample in the data set;
step 8: back-propagating the gradient of the loss layer and updating the weight parameters of the feature processing module of the pedestrian retrieval model;
step 9: if the model has not converged and the maximum number of iterations has not been reached, repeating steps 3-8;
step 10: after the network training is finished, completing the pedestrian retrieval task of the query set on the test set, and calculating Rank1, Rank5, Rank10 and mAP.
2. The pedestrian re-recognition method integrated with pedestrian posture features as claimed in claim 1, characterized in that: in step 3, the sample is input into the backbone network of the pedestrian skeleton detection model, and the high-dimensional pedestrian posture matrix features output by the last layer of the backbone network are acquired.
3. The pedestrian re-recognition method integrated with pedestrian posture features as claimed in claim 1, characterized in that: in step 4, the feature fusion mode for fusing the high-dimensional pedestrian posture matrix into the pedestrian retrieval model may directly sum the high-dimensional posture matrix features with the feature map extracted by the pedestrian retrieval model; or splice the two feature matrices and then compress them with a convolution layer; or first screen the high-dimensional posture matrix features with an SE module, then splice them with the feature map extracted by the pedestrian retrieval model and compress the result.
4. The pedestrian re-recognition method integrated with pedestrian posture features as claimed in claim 1, characterized in that: in step 6, an L2 regularization operation is carried out on the output feature vectors; the unit feature vectors after L2 regularization are spliced into triplets, and the ternary loss is calculated from the triplet feature vectors; the unit feature vectors pass through a Batch Normalization layer to obtain a Test Vector, which is used in the later pedestrian retrieval test stage; the Test Vector is input into the fully connected layer behind the network model, and the cross-entropy loss is calculated.
5. The pedestrian re-recognition method integrated with pedestrian posture features as claimed in claim 3, characterized in that: the bias parameter of the Batch Normalization layer is set constantly equal to 0 in step 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010252780.4A CN111428675A (en) | 2020-04-02 | 2020-04-02 | Pedestrian re-recognition method integrated with pedestrian posture features |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010252780.4A CN111428675A (en) | 2020-04-02 | 2020-04-02 | Pedestrian re-recognition method integrated with pedestrian posture features |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111428675A true CN111428675A (en) | 2020-07-17 |
Family
ID=71550533
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010252780.4A Pending CN111428675A (en) | 2020-04-02 | 2020-04-02 | Pedestrian re-recognition method integrated with pedestrian posture features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111428675A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116524602A (en) * | 2023-07-03 | 2023-08-01 | 华东交通大学 | Method and system for re-identifying clothing changing pedestrians based on gait characteristics |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140176551A1 (en) * | 2012-12-21 | 2014-06-26 | Honda Motor Co., Ltd. | 3D Human Models Applied to Pedestrian Pose Classification |
WO2018107760A1 (en) * | 2016-12-16 | 2018-06-21 | 北京大学深圳研究生院 | Collaborative deep network model method for pedestrian detection |
CN108647639A (en) * | 2018-05-10 | 2018-10-12 | 电子科技大学 | Real-time body's skeletal joint point detecting method |
CN110163110A (en) * | 2019-04-23 | 2019-08-23 | 中电科大数据研究院有限公司 | A kind of pedestrian's recognition methods again merged based on transfer learning and depth characteristic |
CN110659589A (en) * | 2019-09-06 | 2020-01-07 | 中国科学院自动化研究所 | Pedestrian re-identification method, system and device based on attitude and attention mechanism |
CN110781771A (en) * | 2019-10-08 | 2020-02-11 | 北京邮电大学 | Abnormal behavior real-time monitoring method based on deep learning |
CN110796026A (en) * | 2019-10-10 | 2020-02-14 | 湖北工业大学 | Pedestrian re-identification method based on global feature stitching |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140176551A1 (en) * | 2012-12-21 | 2014-06-26 | Honda Motor Co., Ltd. | 3D Human Models Applied to Pedestrian Pose Classification |
WO2018107760A1 (en) * | 2016-12-16 | 2018-06-21 | 北京大学深圳研究生院 | Collaborative deep network model method for pedestrian detection |
CN108647639A (en) * | 2018-05-10 | 2018-10-12 | 电子科技大学 | Real-time body's skeletal joint point detecting method |
CN110163110A (en) * | 2019-04-23 | 2019-08-23 | 中电科大数据研究院有限公司 | A kind of pedestrian's recognition methods again merged based on transfer learning and depth characteristic |
CN110659589A (en) * | 2019-09-06 | 2020-01-07 | 中国科学院自动化研究所 | Pedestrian re-identification method, system and device based on attitude and attention mechanism |
CN110781771A (en) * | 2019-10-08 | 2020-02-11 | 北京邮电大学 | Abnormal behavior real-time monitoring method based on deep learning |
CN110796026A (en) * | 2019-10-10 | 2020-02-14 | 湖北工业大学 | Pedestrian re-identification method based on global feature stitching |
Non-Patent Citations (2)
Title |
---|
BI XIAOJUN: "Pedestrian re-identification based on view information embedding", Acta Optica Sinica (《光学学报》), vol. 39, no. 6, 30 June 2019 (2019-06-30), pages 262 - 271 *
GAO MINGDA: "Dual-branch network model for accurate human body parsing with joint pose priors", Journal of Software (《软件学报》), vol. 31, no. 7, 14 January 2020 (2020-01-14), pages 1959 - 1968 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116524602A (en) * | 2023-07-03 | 2023-08-01 | 华东交通大学 | Method and system for re-identifying clothing changing pedestrians based on gait characteristics |
CN116524602B (en) * | 2023-07-03 | 2023-09-19 | 华东交通大学 | Method and system for re-identifying clothing changing pedestrians based on gait characteristics |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108960140B (en) | Pedestrian re-identification method based on multi-region feature extraction and fusion | |
US11836224B2 (en) | Cross-modality person re-identification method based on local information learning | |
CN111460914B (en) | Pedestrian re-identification method based on global and local fine granularity characteristics | |
CN113065402B (en) | Face detection method based on deformation attention mechanism | |
CN111898736B (en) | Efficient pedestrian re-identification method based on attribute perception | |
CN112784728B (en) | Multi-granularity clothes changing pedestrian re-identification method based on clothing desensitization network | |
CN112150493A (en) | Semantic guidance-based screen area detection method in natural scene | |
CN110598543A (en) | Model training method based on attribute mining and reasoning and pedestrian re-identification method | |
CN108416295A (en) | A kind of recognition methods again of the pedestrian based on locally embedding depth characteristic | |
CN114299542A (en) | Video pedestrian re-identification method based on multi-scale feature fusion | |
CN112163498A (en) | Foreground guiding and texture focusing pedestrian re-identification model establishing method and application thereof | |
CN112560604A (en) | Pedestrian re-identification method based on local feature relationship fusion | |
CN114782977A (en) | Method for guiding pedestrian re-identification based on topological information and affinity information | |
CN115171165A (en) | Pedestrian re-identification method and device with global features and step-type local features fused | |
CN114882537B (en) | Finger new visual angle image generation method based on nerve radiation field | |
CN113269099B (en) | Vehicle re-identification method under heterogeneous unmanned system based on graph matching | |
CN113822145A (en) | Face recognition operation method based on deep learning | |
CN111428675A (en) | Pedestrian re-recognition method integrated with pedestrian posture features | |
CN117333908A (en) | Cross-modal pedestrian re-recognition method based on attitude feature alignment | |
CN112446305A (en) | Pedestrian re-identification method based on classification weight equidistant distribution loss model | |
CN115830643A (en) | Light-weight pedestrian re-identification method for posture-guided alignment | |
CN113537032B (en) | Diversity multi-branch pedestrian re-identification method based on picture block discarding | |
CN114821632A (en) | Method for re-identifying blocked pedestrians | |
CN115376159A (en) | Cross-appearance pedestrian re-recognition method based on multi-mode information | |
CN113869151A (en) | Cross-view gait recognition method and system based on feature fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20200717 |