CN107330355B - Deep pedestrian re-identification method based on positive sample balance constraint - Google Patents

Deep pedestrian re-identification method based on positive sample balance constraint Download PDF

Info

Publication number
CN107330355B
CN107330355B CN201710330206.4A CN201710330206A CN107330355B CN 107330355 B CN107330355 B CN 107330355B CN 201710330206 A CN201710330206 A CN 201710330206A CN 107330355 B CN107330355 B CN 107330355B
Authority
CN
China
Prior art keywords
network
training
positive sample
sample
deep
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710330206.4A
Other languages
Chinese (zh)
Other versions
CN107330355A (en
Inventor
黄俊艺
任传贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Sun Yat Sen University
Original Assignee
National Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Sun Yat Sen University filed Critical National Sun Yat Sen University
Priority to CN201710330206.4A priority Critical patent/CN107330355B/en
Publication of CN107330355A publication Critical patent/CN107330355A/en
Application granted granted Critical
Publication of CN107330355B publication Critical patent/CN107330355B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2148Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147Distances to closest patterns, e.g. nearest neighbour classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a deep pedestrian re-identification method based on positive sample balance constraint, a residual error network structure used by the method is simple and widely applied, a sufficiently deep network structure enhances the feature expression capability, and the network structure does not need to be specially designed; the method finds that the accuracy of pedestrian re-identification can be higher than that of most of well-designed methods by using a residual error net classifier to extract image features; compared with a binary group loss and triple group loss method, the method has the advantages that similar effects can be achieved by improving the structure loss without specially generating effective samples, and the learned gradient direction is more stable and effective by utilizing the integral distribution information; on the basis of improving the structure loss, the positive sample balance constraint is added, the distance of the positive sample pair can be controlled, and the gradients of the positive sample pair distance and the negative sample pair distance can be balanced, so that the algorithm is easier to train and the algorithm performance is improved.

Description

Deep pedestrian re-identification method based on positive sample balance constraint
Technical Field
The invention relates to the field of deep learning and pedestrian re-identification, in particular to a deep pedestrian re-identification method based on positive sample balance constraint.
Background
Over the years, impressive advances have been made in the areas of pattern recognition, machine learning, and computer vision research. These advances have attracted the attention of the video surveillance, legal security industry, and the need for such intelligent algorithms and intelligent systems is increasing. Under the continuously developing action of the security industry, intelligent monitoring tools for human face monitoring, fingerprint monitoring, other biological feature monitoring and human and urban environments are widely applied. These tools collect a large amount of data, usually in the form of images or videos, and bring new research subjects to the field of machine learning, while in recent years, a subject of great academic interest is pedestrian re-identification.
In general, there are two types of methods for dealing with the pedestrian re-identification problem, which are the traditional method and the deep learning method. The traditional method generally needs to design or learn out robust and discriminant features, most of which are shallow models, and the feature expression capability is limited; the deep learning network can automatically learn which effective features need to be observed through learning the weight without manually designing the features like the traditional method, more and more researchers use the deep learning method to solve the problem of pedestrian re-identification in the last two or three years, and the deep learning network takes good progress. However, most of the existing deep learning methods utilize information of local data distribution, and a few hidden layers are used, so that the network is relatively not deep, and therefore, the algorithm performance is obviously improved.
Disclosure of Invention
The invention provides a deep pedestrian re-identification method based on positive sample balance constraint, which improves the feature expression capability.
In order to achieve the technical effects, the technical scheme of the invention is as follows:
a deep pedestrian re-identification method based on positive sample balance constraint comprises the following steps:
s1: input data training dataset
Figure BDA0001292289740000011
Wherein the content of the first and second substances,
Figure BDA0001292289740000012
Figure BDA0001292289740000013
n is the number of samples, d is the image pixel, c is the number of different pedestrians in the training set, xiIs a d-dimensional column vector, yi=[yi1,yi2,yi3,…,yic]TIs a c-dimensional column vector in which the elements are equal to 1 or 0, and
Figure BDA0001292289740000021
X=[x1,x2,x3,…,xN]x is a matrix of d rows and N columns;
s2: pre-training the network by using a softmax classification model;
s3: training the network using a lifting structure loss based on a positive sample balance constraint;
s4: carrying out feature extraction on the test sample image;
s5: and carrying out nearest neighbor KNN classification on the test sample by using the obtained characteristics so as to obtain a re-identification result.
Further, the specific process of step S2 is:
setting the training learning rate eta and the maximum epoch times T of the network, and using a softmax classification model to pre-train the network parameters W, wherein the training method is a back propagation algorithm and comprises the following specific steps:
firstly, initializing a network parameter W; if the current epoch times are less than T, generating mini batch data
Figure BDA0001292289740000022
Then hold
Figure BDA0001292289740000023
Inputting the data into the network for forward propagation calculation to obtain the loss function value of the iteration
Figure BDA0001292289740000024
Then according to
Figure BDA0001292289740000025
Backward propagation calculation is carried out to calculate gradient
Figure BDA0001292289740000026
Finally, updating network parameters
Figure BDA0001292289740000027
Figure BDA0001292289740000028
The network parameters are continuously updated according to the rule until the epoch times are equal to T.
Further, the specific process of step S3 is:
the loss function of the deep network is changed from cross entropy to lifting structure loss, and for the training data set, the loss function is
Figure BDA0001292289740000029
Definition of
Figure BDA00012922897400000210
Represents a set of positive sample pairs in the training sample, and
Figure BDA00012922897400000211
then represents a set of negative sample pairs, with a positive sample pair distance of Di,jAnd the negative sample pair distance is Di,kAnd Di,lIs a constant parameter that controls the negative sample versus distance;
the method is characterized in that a loss function of the deep network is converted from cross entropy to lifting structure loss, and the specific formula is as follows:
Figure BDA00012922897400000212
wherein:
Figure BDA00012922897400000213
Di,j=||Ψ(xi)-Ψ(xj)||2
because the loss function is not smooth, a local extreme point with poor performance is easy to fall into in the training process, in addition, the gradient of the function is inconvenient to solve, the original function is indirectly optimized by optimizing a smooth upper bound of the function, the structural loss function refers to the upper bound, two constant parameters beta and lambda are added, the former controls the distance of the positive sample pair, and the latter balances the gradient:
Figure BDA00012922897400000214
Figure BDA00012922897400000215
and then setting the training learning rate eta and the maximum epoch times T of the network and three constant parameters alpha, beta and lambda, and training the deep network by using a back propagation algorithm to finally obtain an optimized network parameter W.
Further, the specific process of step S4 is:
performing feature extraction on a test sample image, wherein the test sample image is composed of a query set
Figure BDA0001292289740000031
Figure BDA0001292289740000032
And test set
Figure BDA0001292289740000033
The purpose of pedestrian re-identification is to give a query set sample xiqIn the test set
Figure BDA0001292289740000034
Searching the image of the same pedestrian, and extracting a depth feature psi (x) from the query set and the test set by setting psi (x) to represent a depth convolution networkiq) And Ψ (x)it)。
Further, the specific process of step S5 is:
using the resulting Ψ (x)iq) And Ψ (x)it) To the query set
Figure BDA0001292289740000035
Each sample in the test set
Figure BDA0001292289740000036
The retrieval is carried out, and the returned retrieval list comprising a plurality of images is the identification result.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the used residual error network structure is simple and widely applied, the sufficiently deep network structure enhances the feature expression capability, and the network structure does not need to be specially designed; the method finds that the accuracy of pedestrian re-identification can be higher than that of most of well-designed methods by using a residual error net classifier to extract image features; compared with a binary group loss method and a triple group loss method, the method has the advantages that the structure loss is improved, a similar effect can be achieved without specially generating effective samples, and the learned gradient direction is more stable and effective by utilizing the integral distribution information.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of a residual network structure with 50 layers;
FIG. 3(a) is a diagram illustrating the case of binary challenge loss;
FIG. 3(b) is a schematic diagram of the case of a ternary countermeasure loss;
FIG. 3(c) is a schematic diagram of lifting structure loss.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
As shown in fig. 1, a deep pedestrian re-identification method based on positive sample balance constraint includes the following steps:
s1: input data training dataset
Figure BDA0001292289740000041
Wherein the content of the first and second substances,
Figure BDA0001292289740000042
Figure BDA0001292289740000043
n is the number of samples, d is the image pixel, c is the number of different pedestrians in the training set, xiIs a d-dimensional column vector, yi=[yi1,yi2,yi3,…,yic]TIs a c-dimensional column vector in which the elements are equal to 1 or 0, and
Figure BDA0001292289740000044
X=[x1,x2,x3,…,xN]x is a matrix of d rows and N columns;
s2: pre-training the network by using a softmax classification model;
s3: training the network using the lifting structure loss;
s4: carrying out feature extraction on the test sample image;
s5: and carrying out nearest neighbor KNN classification on the test sample by using the obtained characteristics so as to obtain a re-identification result.
The specific process of step S2 is:
setting a training learning rate eta and an epoch maximum time T of the network, and using a softmax classification model to pre-train a network parameter W, wherein the training method is a back propagation algorithm, and the specific steps are as follows (as shown in FIG. 2):
firstly, initializing a network parameter W; if the current epoch times are less than T, generating mini batch data
Figure BDA0001292289740000045
Then hold
Figure BDA0001292289740000046
Inputting the data into the network for forward propagation calculation to obtain the loss function value of the iteration
Figure BDA0001292289740000047
Then according to
Figure BDA0001292289740000048
Backward propagation calculation is carried out to calculate gradient
Figure BDA0001292289740000049
Finally, updating network parameters
Figure BDA00012922897400000410
Figure BDA00012922897400000411
The network parameters are continuously updated according to the rule until the epoch times are equal to T.
Firstly, initializing a network parameter W; if the current epoch times are less than T, generating mini batch data
Figure BDA00012922897400000412
Then hold
Figure BDA00012922897400000413
Inputting the data into the network for forward propagation calculation to obtain the loss function value of the iteration
Figure BDA00012922897400000414
Then according to
Figure BDA00012922897400000415
Backward propagation calculation is carried out to calculate gradient
Figure BDA00012922897400000416
Finally, updating network parameters
Figure BDA00012922897400000417
Figure BDA00012922897400000418
The network parameters are continuously updated according to the rule until the epoch times are equal to T.
The specific process of step S3 is:
as shown in FIGS. 3(a) - (c), the loss function of the deep network is transformed from cross-entropy to lifting structure loss, for the training data set
Figure BDA00012922897400000419
Definition of
Figure BDA00012922897400000420
Represents a set of positive sample pairs in the training sample, and
Figure BDA00012922897400000421
then represents a set of negative sample pairs, with a positive sample pair distance of Di,jAnd the negative sample pair distance is Di,kAnd Di,lIs a constant parameter that controls the negative sample versus distance;
the method is characterized in that a loss function of the deep network is converted from cross entropy to lifting structure loss, and the specific formula is as follows:
Figure BDA00012922897400000422
wherein:
Figure BDA00012922897400000423
Di,j=||Ψ(xi)-Ψ(xj)||2
because the loss function is not smooth, a local extreme point with poor performance is easy to fall into in the training process, in addition, the gradient of the function is inconvenient to solve, the original function is indirectly optimized by optimizing a smooth upper bound of the function, the structural loss function refers to the upper bound, two constant parameters beta and lambda are added, the former controls the distance of the positive sample pair, and the latter balances the gradient:
Figure BDA0001292289740000051
Figure BDA0001292289740000052
and then setting the training learning rate eta and the maximum epoch times T of the network and three constant parameters alpha, beta and lambda, and training the deep network by using a back propagation algorithm to finally obtain an optimized network parameter W. Fig. 3(a) shows the case of binary countermeasure loss, where the rectangular and triangular samples are negative sample pairs, the rectangular samples are pushed out of the dotted circles at the time of learning, and the direction is likely to be toward the circular samples, which is not in accordance with the desired effect. While FIG. 3(b) shows a ternary penalty, it is equally likely that a rectangular sample will be pushed towards a circular sample. Therefore, learning algorithms based on these two losses need to generate training sample sets carefully to reduce the above. For the lifting loss function, because the lifting loss function considers each negative sample and each positive sample of the samples at the same time, and utilizes the information of the whole structure, as shown in fig. 3(c), the rectangular samples can be well pushed to the same type of samples, so that the intra-group variation is better reduced, and the inter-group variation is increased.
The specific process of step S4 is:
performing feature extraction on a test sample image, wherein the test sample image is composed of a query set
Figure BDA0001292289740000053
Figure BDA0001292289740000054
And test set
Figure BDA0001292289740000055
The purpose of pedestrian re-identification is to give a query set sample xiqIn the test set
Figure BDA0001292289740000056
Searching the image of the same pedestrian, and extracting a depth feature psi (x) from the query set and the test set by setting psi (x) to represent a depth convolution networkiq) And Ψ (x)it)。
The specific process of step S5 is:
using the resulting Ψ (x)iq) And Ψ (x)it) To the query set
Figure BDA0001292289740000057
Each sample in the test set
Figure BDA0001292289740000058
The retrieval is carried out, and the returned retrieval list comprising a plurality of images is the identification result.
The same or similar reference numerals correspond to the same or similar parts;
the positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (3)

1. A deep pedestrian re-identification method based on positive sample balance constraint is characterized by comprising the following steps:
s1: input data training dataset
Figure FDA0002662255870000011
Wherein the content of the first and second substances,
Figure FDA0002662255870000012
Figure FDA0002662255870000013
n is the number of samples, d is the image pixel, c is the number of different pedestrians in the training set, xiIs a d-dimensional column vector, yi=[yi1,yi2,yi3,...,yic]TIs a c-dimensional column vector in which the elements are equal to 1 or 0, and
Figure FDA0002662255870000014
X=[x1,x2,x3,...,xN]and X is a matrix of d rows and N columns;
S2: pre-training the network by using a softmax classification model;
s3: training the network using a lifting structure loss based on a positive sample balance constraint;
s4: carrying out feature extraction on the test sample image;
s5: performing nearest neighbor KNN classification on the test sample by using the obtained characteristics to obtain a re-identification result;
the specific process of step S2 is:
setting the training learning rate eta and the maximum epoch times T of the network, and using a softmax classification model to pre-train the network parameters W, wherein the training method is a back propagation algorithm and comprises the following specific steps:
firstly, initializing a network parameter W; if the current epoch times are less than T, generating mini batch data
Figure FDA0002662255870000015
Then, the user can put the hand-held device in the hand-held device,
Figure FDA0002662255870000016
inputting the data into the network for forward propagation calculation to obtain the loss function value of the iteration
Figure FDA0002662255870000017
Then according to
Figure FDA0002662255870000018
Backward propagation calculation is carried out to calculate gradient
Figure FDA0002662255870000019
Finally, updating network parameters
Figure FDA00026622558700000110
Figure FDA00026622558700000111
The network parameters are carried out according to the ruleContinuously updating until the epoch times are equal to T;
the specific process of step S3 is:
the loss function of the deep network is changed from cross entropy to lifting structure loss, and for the training data set, the loss function is
Figure FDA00026622558700000112
Definition of
Figure FDA00026622558700000113
Represents a set of positive sample pairs in the training sample, and
Figure FDA00026622558700000114
then represents a set of negative sample pairs, with a positive sample pair distance of Di,jAnd the negative sample pair distance is Di,kAnd Di,lIs a constant parameter that controls the negative sample versus distance;
the method is characterized in that a loss function of the deep network is converted from cross entropy to lifting structure loss, and the specific formula is as follows:
Figure FDA00026622558700000115
wherein:
Figure FDA00026622558700000116
Di,j=||Ψ(xi)-Ψ(xj)||2
two constant parameters β and λ are added, the former controlling the positive sample pair distance and the latter balancing the gradient:
Figure FDA00026622558700000117
Figure FDA00026622558700000118
and then setting the training learning rate eta and the maximum epoch times T of the network and three constant parameters alpha, beta and lambda, and training the deep network by using a back propagation algorithm to finally obtain an optimized network parameter W.
2. The method for deep pedestrian re-identification based on positive sample balance constraint according to claim 1, wherein the specific process of the step S4 is as follows:
performing feature extraction on a test sample image, wherein the test sample image is composed of a query set
Figure FDA0002662255870000025
Figure FDA0002662255870000026
And test set
Figure FDA0002662255870000021
The purpose of pedestrian re-identification is to give a query set sample xiqIn the test set
Figure FDA0002662255870000022
Searching the image of the same pedestrian, and extracting a depth feature psi (x) from the query set and the test set by setting psi (x) to represent a depth convolution networkiq) And Ψ (x)it)。
3. The method for deep pedestrian re-identification based on positive sample balance constraint according to claim 2, wherein the specific process of the step S5 is as follows:
using the resulting Ψ (x)iq) And Ψ (x)it) To the query set
Figure FDA0002662255870000023
Each sample in the test set
Figure FDA0002662255870000024
The returned retrieval list comprising a plurality of images is the re-identification result.
CN201710330206.4A 2017-05-11 2017-05-11 Deep pedestrian re-identification method based on positive sample balance constraint Expired - Fee Related CN107330355B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710330206.4A CN107330355B (en) 2017-05-11 2017-05-11 Deep pedestrian re-identification method based on positive sample balance constraint

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710330206.4A CN107330355B (en) 2017-05-11 2017-05-11 Deep pedestrian re-identification method based on positive sample balance constraint

Publications (2)

Publication Number Publication Date
CN107330355A CN107330355A (en) 2017-11-07
CN107330355B true CN107330355B (en) 2021-01-26

Family

ID=60193737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710330206.4A Expired - Fee Related CN107330355B (en) 2017-05-11 2017-05-11 Deep pedestrian re-identification method based on positive sample balance constraint

Country Status (1)

Country Link
CN (1) CN107330355B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229435B (en) * 2018-02-01 2021-03-30 北方工业大学 Method for pedestrian recognition
CN108564030A (en) * 2018-04-12 2018-09-21 广州飒特红外股份有限公司 Classifier training method and apparatus towards vehicle-mounted thermal imaging pedestrian detection
CN108960342B (en) * 2018-08-01 2021-09-14 中国计量大学 Image similarity calculation method based on improved Soft-Max loss function
CN109271852A (en) * 2018-08-07 2019-01-25 重庆大学 A kind of processing method that the pedestrian detection based on deep neural network identifies again
CN109117891B (en) * 2018-08-28 2022-04-08 电子科技大学 Cross-social media account matching method fusing social relations and naming features
CN109598191A (en) * 2018-10-23 2019-04-09 北京市商汤科技开发有限公司 Pedestrian identifies residual error network training method and device again
CN111382793B (en) * 2020-03-09 2023-02-28 腾讯音乐娱乐科技(深圳)有限公司 Feature extraction method and device and storage medium
CN113887561B (en) * 2021-09-03 2022-08-09 广东履安实业有限公司 Face recognition method, device, medium and product based on data analysis
CN113569111B (en) * 2021-09-24 2021-12-21 腾讯科技(深圳)有限公司 Object attribute identification method and device, storage medium and computer equipment
CN114764942B (en) * 2022-05-20 2022-12-09 清华大学深圳国际研究生院 Difficult positive and negative sample online mining method and face recognition method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537356A (en) * 2015-01-12 2015-04-22 北京大学 Pedestrian re-identification method and device for carrying out gait recognition through integral scheduling
CN104915643A (en) * 2015-05-26 2015-09-16 中山大学 Deep-learning-based pedestrian re-identification method
CN105956606A (en) * 2016-04-22 2016-09-21 中山大学 Method for re-identifying pedestrians on the basis of asymmetric transformation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9396412B2 (en) * 2012-06-21 2016-07-19 Siemens Aktiengesellschaft Machine-learnt person re-identification

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537356A (en) * 2015-01-12 2015-04-22 北京大学 Pedestrian re-identification method and device for carrying out gait recognition through integral scheduling
CN104915643A (en) * 2015-05-26 2015-09-16 中山大学 Deep-learning-based pedestrian re-identification method
CN105956606A (en) * 2016-04-22 2016-09-21 中山大学 Method for re-identifying pedestrians on the basis of asymmetric transformation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
People Re-identifification using Deep Convolutional Neural Network;Guanwen Zhang et al.;《2014 International Conference on Computer Vision Theory and Application》;20140108;摘要,第2节 *
融合异构特征的子空间迁移学习算法;张景祥等;《自动化学报》;20140228;第1节 *

Also Published As

Publication number Publication date
CN107330355A (en) 2017-11-07

Similar Documents

Publication Publication Date Title
CN107330355B (en) Deep pedestrian re-identification method based on positive sample balance constraint
CN102314614B (en) Image semantics classification method based on class-shared multiple kernel learning (MKL)
Shao et al. Feature learning for image classification via multiobjective genetic programming
Yang et al. Supervised translation-invariant sparse coding
CN107203787B (en) Unsupervised regularization matrix decomposition feature selection method
CN108427921A (en) A kind of face identification method based on convolutional neural networks
CN110297931B (en) Image retrieval method
CN109241995B (en) Image identification method based on improved ArcFace loss function
CN104462494B (en) A kind of remote sensing image retrieval method and system based on unsupervised feature learning
CN109325443A (en) A kind of face character recognition methods based on the study of more example multi-tag depth migrations
Bui et al. Scalable sketch-based image retrieval using color gradient features
CN103942571B (en) Graphic image sorting method based on genetic programming algorithm
CN111985581A (en) Sample-level attention network-based few-sample learning method
CN103617609B (en) Based on k-means non-linearity manifold cluster and the representative point choosing method of graph theory
CN110598022B (en) Image retrieval system and method based on robust deep hash network
CN106203628A (en) A kind of optimization method strengthening degree of depth learning algorithm robustness and system
CN109325513A (en) A kind of image classification network training method based on magnanimity list class single image
CN114299362A (en) Small sample image classification method based on k-means clustering
CN106339665A (en) Fast face detection method
CN113779283B (en) Fine-grained cross-media retrieval method with deep supervision and feature fusion
CN103336974B (en) A kind of flowers classification discrimination method based on local restriction sparse representation
CN110852304B (en) Hyperspectral data processing method based on deep learning method
CN105184320B (en) The image classification method of non-negative sparse coding based on structural similarity
CN105718858A (en) Pedestrian recognition method based on positive-negative generalized max-pooling
CN110135253A (en) A kind of finger vena identification method based on long-term recursive convolution neural network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210126

Termination date: 20210511