CN110309810B - Pedestrian re-identification method based on batch center similarity - Google Patents

Pedestrian re-identification method based on batch center similarity

Info

Publication number: CN110309810B
Application number: CN201910617855.1A
Authority: CN (China)
Legal status: Active (an assumption, not a legal conclusion)
Original language: Chinese (zh)
Other versions: CN110309810A
Inventors: 王兴刚, 徐继伟, 王建辉, 刘文予
Applicant and assignee: Huazhong University of Science and Technology

Classifications

    • G06F18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06V40/10: Recognition of human or animal bodies (e.g. pedestrians) in image or video data


Abstract

The invention discloses a pedestrian re-identification method based on batch center similarity, comprising the following steps: (1) starting from a classical backbone network, remove the final fully-connected layer and add an additional convolutional layer, thereby establishing a fully-convolutional neural network model; (2) randomly select P pedestrians from the original training data set, and randomly select K images for each pedestrian; (3) feed the P x K images obtained in step (2) into the network for training to obtain P x K feature vectors; (4) compute a center vector from the K feature vectors of each pedestrian, yielding P center vectors; (5) form a triplet from each of the P x K feature vectors, its corresponding class center, and a non-class center, and perform regression optimization. The method is simple to implement, widely applicable, and effectively alleviates misalignment, occlusion, and similar problems in the pedestrian re-identification task.

Description

Pedestrian re-identification method based on batch center similarity
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a pedestrian re-identification method based on batch center similarity.
Background
Pedestrian re-identification has many applications in daily life, such as video surveillance and person localization and tracking. Pedestrian re-identification (Person Re-identification) refers to matching pedestrians across a multi-camera network with non-overlapping fields of view, i.e., matching a person of interest across multiple scenes; this is the problem the method of the invention addresses. With the explosive growth of video data, finding and locating a target pedestrian by manual inspection is increasingly costly, and such inefficient manual supervision can easily miss the critical window of a criminal investigation, causing great loss of life and property. Locating pedestrians accurately and efficiently by machine is therefore an urgent need.
Research currently focuses on the most critical issues, chief among them metric learning. Mainstream metric-learning methods are based on the cross-entropy loss function, the triplet loss function, the sphere loss function, and the like; their common weaknesses are that training samples are difficult to select and that the inter-class and intra-class distances cannot be optimized simultaneously, so the results are often unsatisfactory.
Disclosure of Invention
In view of the problems in the prior art, what is needed is a loss function for which training samples are easy to select and which can optimize the inter-class and intra-class distances simultaneously, so as to obtain a better training effect. Addressing the shortcomings of existing metric methods, the invention provides an efficient metric-learning method. The object of the invention is to provide a pedestrian re-identification method based on batch center similarity that is simple to implement, makes training samples easy to select, optimizes the inter-class and intra-class sample distances simultaneously, and achieves an efficient metric effect.
To achieve the above object, according to one aspect of the present invention, there is provided a pedestrian re-identification method based on similarity between batch centers, including the steps of:
(1) sample selection:
(1.1) the training samples are pictures of multiple pedestrians at different viewing angles, acquired by multiple cameras; the same person can appear markedly different under different cameras, and each image contains only one pedestrian. P pedestrians are randomly selected from the training samples, and K samples are randomly selected for each pedestrian, forming a batch of P x K samples, where P and K are preset values;
(1.2) since the number of samples per pedestrian varies, and some pedestrians have fewer than K samples, sampling with replacement is adopted: when a pedestrian has fewer than K samples, the selection is repeated until the preset K value is reached;
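The P x K batch construction of steps (1.1)-(1.2) can be sketched as follows; the function and parameter names are illustrative rather than taken from the patent, and identities with fewer than K images fall back to sampling with replacement as described above.

```python
import random
from collections import defaultdict

def sample_pk_batch(labels, P, K, seed=None):
    """Sample a P*K batch: P distinct identities, K images each.

    `labels` maps an image index to its pedestrian identity.
    Identities with fewer than K images are sampled *with*
    replacement, as in step (1.2).  Names are illustrative.
    """
    rng = random.Random(seed)
    by_id = defaultdict(list)
    for idx, pid in enumerate(labels):
        by_id[pid].append(idx)
    pids = rng.sample(sorted(by_id), P)           # P distinct pedestrians
    batch = []
    for pid in pids:
        pool = by_id[pid]
        if len(pool) >= K:
            batch.extend(rng.sample(pool, K))     # without replacement
        else:
            batch.extend(rng.choices(pool, k=K))  # with replacement (step 1.2)
    return batch
```

The returned indices can then be used to load the corresponding images for one training batch.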
(2) data enhancement:
(2.1) let area denote the area of the original image, and take s ∈ [A*area, B*area] as the size of a randomly selected region; the image region of area s is then occluded, where 0 < A < B < 1; in the embodiment of the invention, A is 0.02 and B is 0.4;
(2.2) the image obtained after the occlusion in (2.1) is first scaled to a first fixed size, and an image of a second fixed size is then randomly cropped from it, where the first and second fixed sizes are preset values and the second is smaller than the first; for example, the image is first scaled to size [288, 144], and an image of size [256, 128] is then randomly cropped as the training image;
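A minimal numpy sketch of the augmentation pipeline of steps (2.1)-(2.2). The aspect-ratio range of the erased rectangle and the nearest-neighbour resizing are assumptions, since the patent does not specify them; all names are illustrative.

```python
import numpy as np

def random_erase_resize_crop(img, rng, a=0.02, b=0.4,
                             resize_hw=(288, 144), crop_hw=(256, 128)):
    """Random-erasing occlusion (2.1) followed by resize and random crop (2.2).

    A sketch: nearest-neighbour index mapping stands in for the
    unspecified interpolation; the aspect-ratio range is assumed.
    """
    h, w = img.shape[:2]
    area = h * w
    s = rng.uniform(a * area, b * area)            # erased area in [A*area, B*area]
    aspect = rng.uniform(0.3, 3.3)                 # assumed aspect-ratio range
    eh = min(h, max(1, int(round(np.sqrt(s * aspect)))))
    ew = min(w, max(1, int(round(np.sqrt(s / aspect)))))
    y = rng.integers(0, h - eh + 1)
    x = rng.integers(0, w - ew + 1)
    out = img.copy()
    out[y:y + eh, x:x + ew] = rng.integers(0, 256, size=(eh, ew) + img.shape[2:])
    # scale to the first fixed size via nearest-neighbour index mapping
    rh, rw = resize_hw
    ys = (np.arange(rh) * h / rh).astype(int)
    xs = (np.arange(rw) * w / rw).astype(int)
    out = out[ys][:, xs]
    # randomly crop the second fixed size
    ch, cw = crop_hw
    cy = rng.integers(0, rh - ch + 1)
    cx = rng.integers(0, rw - cw + 1)
    return out[cy:cy + ch, cx:cx + cw]
```

In practice this would be implemented with an image library; the sketch only shows the geometry of the two steps.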
(3) feature extraction:
(3.1) ResNet50 is selected as the backbone network, in which each part uses the ReLU function as the activation function, defined as
ReLU(x) = max(0, x);
The network has been pre-trained on the ImageNet dataset;
(3.2) in the fourth block of the network selected in (3.1), the convolution stride is reduced from the original 2 to 1, which raises the resolution of the resulting features; the output feature vector is 2048-dimensional at this point. After this layer, the pooling layer of the original ResNet50, average_pool, is replaced with a max_pool layer in order to learn more discriminative features; a fully-connected layer is then added to reduce the output dimension from 2048 to 1024, and the output of this layer is the feature vector characterizing the image, labeled F1. After this layer, one more fully-connected layer is added whose output dimension is the number of classes of the data set (for example, it is set to 751 for the Market1501 data set); the resulting vector is labeled F2;
(4) a backbone optimization objective:
(4.1) according to step (3.2), two sets of P x K feature vectors are obtained, of dimensions 1024 and labels (the number of sample classes) respectively, namely F1 and F2. For F1, the mean of the K feature vectors of each person is computed and labeled C_i, i ∈ [1, P]; P center vectors are thus obtained;
(4.2) the calculation formula of the finally designed batch-center-based triplet loss function is:

L_TBCL = Σ_{i=1}^{P·K} [ m - S(f_i, c_{y_i}) + max_{j ≠ y_i} S(f_i, c_j) ]_+

where TBCL abbreviates Triplet Batch Center Loss, f_i is the feature vector of sample i, c_{y_i} is the center vector of its class, the non-class center is the one most similar to f_i, and S denotes the similarity formula, here the cosine similarity

S(f, c) = (f · c) / (‖f‖ · ‖c‖);

m denotes a set threshold (margin). To simplify the computation and at the same time ease the back-propagation of the gradient, all feature vectors are unitized, namely

f̂_i = f_i / ‖f_i‖₂ and ĉ_i = c_i / ‖c_i‖₂,

after which S(f̂, ĉ) = f̂ · ĉ. The purpose of the formula is to make each sample ever more similar to its own class center and ever less similar to the non-class centers, so that the intra-class distance keeps shrinking while the inter-class distance keeps growing;
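The batch-center triplet objective of step (4) can be sketched numerically as follows. The margin value, and the choice of the most-similar non-class center as the negative, are assumptions consistent with the claim language; on unitized vectors the cosine similarity reduces to a dot product.

```python
import numpy as np

def tbcl_loss(features, pids, m=0.3):
    """Sketch of the batch-center triplet loss (TBCL).

    `features`: (P*K, d) array of feature vectors; `pids`: identity of
    each row.  Vectors and per-identity centers are L2-normalised, so
    cosine similarity is a dot product.  The margin m=0.3 is an
    assumed value, not taken from the patent.
    """
    pids = np.asarray(pids)
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    ids = np.unique(pids)
    centers = np.stack([f[pids == p].mean(axis=0) for p in ids])
    centers /= np.linalg.norm(centers, axis=1, keepdims=True)
    sims = f @ centers.T                               # (P*K, P) similarities
    own = (pids[:, None] == ids[None, :])              # each sample's own class
    s_pos = sims[own]                                  # similarity to own center
    s_neg = np.where(own, -np.inf, sims).max(axis=1)   # most-similar non-class center
    return np.maximum(m - s_pos + s_neg, 0.0).mean()
```

With well-separated classes the hinge is inactive and the loss is zero; overlapping classes incur a positive penalty.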
(5) auxiliary optimization objective:
(5.1) it has been demonstrated in many tasks that methods based on multi-task learning tend to achieve better results than single-task ones. According to (4.2), the feature vector F2 of dimension [P x K, labels] is obtained, and regression optimization is performed on it using the cross-entropy function:

L_CE = - (1 / (P·K)) Σ_i Σ_j y_{ij} · log p_{ij},

where y_{ij} is 1 if j is the true label of sample i and 0 otherwise, and p_{ij} is the softmax probability

p_{ij} = exp(F2_{ij}) / Σ_k exp(F2_{ik}).

This further enhances the effect of metric learning.
Through the technical scheme, compared with the prior art, the invention has the following technical effects:
(1) the method is simple to implement: compared with conventional metric-learning methods, computing the batch centers keeps the idea clear, simple, and effective;
(2) training samples are easy to select: the traditional triplet loss function requires careful sample selection, e.g. the positive sample farthest from, and the negative sample closest to, the target sample, whereas this method need not consider the relation between the target sample and the positive samples when selecting samples, so selection is comparatively easy;
(3) strong robustness: some traditional metric-learning methods, such as the center loss function and the sphere loss function, optimize only one of the inter-class and intra-class distances well, whereas the metric function realized by this method optimizes both simultaneously, so the performance is more robust;
drawings
FIG. 1 is a flow chart of a pedestrian re-identification method based on batch center similarity according to the present invention;
FIG. 2 is a schematic diagram of a network structure of a convolutional neural network model in an embodiment of the present invention;
FIG. 3 is a schematic diagram of center vector generation according to an embodiment of the present invention;
FIG. 4 is a graphical comparison of the metric effect of the embodiment of the invention with other methods on test samples.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The technical terms of the present invention are explained and explained first:
Market1501 data set: the Market-1501 data set was collected on the Tsinghua University campus, photographed in summer, and constructed and published in 2015. It includes 1501 pedestrians and 32668 detected pedestrian bounding boxes shot by 6 cameras (5 high-definition and 1 low-definition). Each pedestrian is captured by at least 2 cameras and may have multiple images under one camera. The training set contains 751 people with 12936 images, on average 17.2 training images per person; the test set contains 750 people with 19732 images, on average 26.3 test images per person. The pedestrian bounding boxes of the 3368 query images are drawn manually, while those in the gallery are detected with a Deformable Part Model (DPM) detector. The fixed training and test splits provided by the data set can be used in either a single-shot or multi-shot test setting.
CUHK03 data set: CUHK03 is the first large-scale pedestrian re-identification data set sufficient for deep learning; its images were collected on the Chinese University of Hong Kong (CUHK) campus. It contains 1467 different people, collected by 5 pairs of cameras. The data set provides both machine-detected and hand-labeled bounding boxes; the detected set contains some detection errors and is closer to real-world conditions, with on average 9.6 training images per person.
DukeMTMC-reID: the DukeMTMC data set is a large-scale labeled multi-target multi-camera pedestrian tracking data set. It provides a new large high-definition video data set recorded by 8 synchronized cameras, with more than 7000 single-camera trajectories and over 2700 distinct people. DukeMTMC-reID is the pedestrian re-identification subset of the DukeMTMC data set and provides manually labeled bounding boxes.
First matching rate rank-1 and mAP (mean Average Precision). The first matching rate is the most intuitive and key index of re-identification model performance:

rank-1 = (1/m) Σ_{i=1}^{m} S_i,

where m is the total number of images in the query library and S_i indicates whether the i-th query image matches successfully at rank 1: S_i = 1 on success, otherwise S_i = 0. Generally, when the relative merits of two person re-identification models need to be evaluated, this index is compared first, and the model with the higher rank-1 tends to perform better. The mAP index considers recall and precision at the same time and evaluates model performance more comprehensively and objectively. In the pedestrian re-identification task, the retrieval precision of a query sample is defined as the proportion of correct matches among the top-k matched images:

P(k) = k_c / k,

where k_c is the number of correctly matched images. The retrieval recall of the query sample is defined as the proportion of correctly matched images among the top-k candidates relative to all images of that pedestrian in the candidate library:

R(k) = k_c / k_all,

where k_all, the number of images in the candidate library showing the same pedestrian as the query image, is a fixed value. As k varies, both P and R change accordingly. Gradually increasing k and plotting the corresponding P and R values gives a PR curve; the area under the PR curve of a query sample is its AP value, and the mean of the AP values over all query samples is the mAP.
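The rank-1 and mAP definitions above can be sketched as follows. This simplified version ranks a gallery by similarity for each query and omits the same-camera/junk-image filtering used by the actual re-ID benchmarks; it assumes every query has at least one correct gallery match.

```python
import numpy as np

def rank1_and_map(sim, query_ids, gallery_ids):
    """rank-1 and mAP over a query-gallery similarity matrix.

    `sim`: (num_queries, num_gallery) similarities; higher is better.
    rank-1 averages top-1 hits; AP is the area under each query's
    precision-recall curve, averaged over queries to give mAP.
    """
    hits, aps = [], []
    for q in range(sim.shape[0]):
        order = np.argsort(-sim[q])                     # best match first
        match = (gallery_ids[order] == query_ids[q])    # binary relevance
        hits.append(float(match[0]))
        k_all = match.sum()                             # all correct matches
        ranks = np.where(match)[0] + 1                  # 1-based ranks of hits
        precisions = np.arange(1, k_all + 1) / ranks    # P(k) at each hit
        aps.append(precisions.mean())
    return float(np.mean(hits)), float(np.mean(aps))
```

For example, a query whose two correct gallery images land at ranks 1 and 3 contributes AP = (1/1 + 2/3) / 2 = 5/6.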
As shown in fig. 1, the pedestrian re-identification method based on the similarity of the batch centers of the invention comprises the following steps:
(1) sample selection:
(1.1) randomly selecting P pedestrians from the training samples, and randomly selecting K samples from each pedestrian, so that a P x K batch sample can be formed;
(1.2) considering that the number of samples of each pedestrian is different, and the number of samples of some pedestrians is less than the selected K value, a method of sampling in a return mode is adopted: when the number of samples of each pedestrian is less than K, the selection can be repeated until the set K value is reached;
(2) data enhancement:
(2.1) let area denote the area of the original image, take s ∈ [0.02·area, 0.4·area] as the size of a randomly selected region, and occlude the image region of area s;
(2.2) the image obtained after random occlusion in (2.1) is scaled to size [288, 144], and an image of size [256, 128] is then randomly cropped from it as the training image;
(3) feature extraction:
(3.1) as shown in FIG. 2, ResNet50 is selected as the backbone network, in which each part uses the ReLU function as the activation function, defined as
ReLU(x) = max(0, x);
The network has been pre-trained on the ImageNet dataset;
(3.2) as shown in FIG. 2, in the fourth block of the network selected in (3.1), the convolution stride is reduced from the original 2 to 1, which raises the resolution of the resulting features; the output feature vector is 2048-dimensional at this point. After this layer, the pooling layer of the original ResNet, average_pool, is replaced with a max_pool layer in order to learn more discriminative features; a fully-connected layer is then added to reduce the output dimension from 2048 to 1024, and the output of this layer is the feature vector characterizing the image, labeled F1. After this layer, one more fully-connected layer is added whose output dimension is the number of classes of the data set (for example, it is set to 751 for the Market1501 data set); the resulting vector is labeled F2;
(4) a backbone optimization objective:
(4.1) according to step (3.2), two sets of P x K feature vectors are obtained, of dimensions 1024 and labels (the number of sample classes) respectively, namely F1 and F2. As shown in FIG. 3, for F1 the mean of the K feature vectors of each person is computed and labeled C_i, i ∈ [1, P]; P center vectors are thus obtained;
(4.2) the calculation formula of the finally designed batch-center-based triplet loss function is:

L_TBCL = Σ_{i=1}^{P·K} [ m - S(f_i, c_{y_i}) + max_{j ≠ y_i} S(f_i, c_j) ]_+

where TBCL abbreviates Triplet Batch Center Loss, f_i is the feature vector of sample i, c_{y_i} is the center vector of its class, the non-class center is the one most similar to f_i, and S denotes the similarity formula, here the cosine similarity

S(f, c) = (f · c) / (‖f‖ · ‖c‖);

m denotes a set threshold (margin). To simplify the computation and at the same time ease the back-propagation of the gradient, all feature vectors are unitized, namely

f̂_i = f_i / ‖f_i‖₂ and ĉ_i = c_i / ‖c_i‖₂,

after which S(f̂, ĉ) = f̂ · ĉ. The purpose of the formula is to make each sample ever more similar to its own class center and ever less similar to the non-class centers, so that the intra-class distance keeps shrinking while the inter-class distance keeps growing;
(5) auxiliary optimization objective:
(5.1) it has been demonstrated in many tasks that methods based on multi-task learning tend to achieve better results than single-task ones. According to (4.2), the feature vector F2 of dimension [P x K, labels] is obtained, and the cross-entropy function is applied:

L_CE = - (1 / (P·K)) Σ_i Σ_j y_{ij} · log p_{ij},

where y_{ij} is 1 if j is the true label of sample i and 0 otherwise, and p_{ij} is the softmax probability

p_{ij} = exp(F2_{ij}) / Σ_k exp(F2_{ik}).

This further enhances the effect of metric learning.
The following experiments demonstrate the effectiveness of the invention; the results show that it improves the recognition accuracy of pedestrian re-identification.
The invention is compared with 15 existing representative pedestrian re-identification methods on the Market1501, CUHK03 and DukeMTMC-reID data sets. Table 1 shows the mAP and rank-1 indices of the method of the invention and the 15 comparison methods on the three public data sets; the larger the value, the better the performance, and the improvement of the method of the invention (denoted TBCL in Table 1) is very significant.
Table 1: rank-1 and mAP indices of different methods on the Market1501, CUHK03 and DukeMTMC-reID data sets; markers in the original table denote missing data and reproduced data respectively.
[Table 1 appears as an image in the original publication and is not reproduced here.]
10 pedestrians are selected, with 10 pictures each, 100 pictures in total; the 100 feature vectors obtained by each method are clustered. In FIG. 4, a circle represents samples of the same class: the larger the circle, the worse the clustering effect, and vice versa. As can be seen from FIG. 4, the clustering effect of the invention is the best.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. A pedestrian re-identification method based on batch center similarity is characterized by comprising the following steps:
(1) sample selection: for training samples, randomly selecting P pedestrians, and for each pedestrian, randomly selecting K samples to form a batch, wherein the batch comprises P × K samples, P and K are preset values, and the number of P is smaller than the total number of pedestrians in the samples;
(2) data enhancement:
(2.1) for the original image in the sample, adopting a random shielding method to shield a part with random size;
(2.2) the image obtained after the occlusion in step (2.1) is first scaled to a first fixed size, and an image of a second fixed size is then randomly cropped, the first and second fixed sizes being preset values with the second fixed size smaller than the first;
(3) feature extraction:
(3.1) based on the basic network structure, removing the last full connection layer of the network, and adding an additional layer to establish a convolutional neural network model;
(3.2) sending each batch of the images randomly intercepted in the step (2.2) to the network set in the step (3.1) to extract a feature vector;
(4) a backbone optimization objective:
(4.1) dividing the P x K feature vectors obtained in step (3.2) according to different people, selecting the K vectors of the same person, and taking their mean vector as the center vector, so as to obtain P center points; specifically: according to step (3.2), feature vectors F1 and F2 are obtained, and for F1 the mean vector of the K feature vectors of each person is computed and labeled C_i, i ∈ [1, P], thereby obtaining P center vectors;
(4.2) selecting a central point of the same class and a central point of different classes for each feature vector in P x K, wherein the central points of different classes are selected as the central points closest to the feature vectors to form a triple, and performing regression optimization by using a triple loss function; the method specifically comprises the following steps:
the calculation formula of the finally designed batch-center-based triplet loss function is:

L_TBCL = Σ_{i=1}^{P·K} [ m - S(f_i, c_i) + max_{c_j ≠ c_i} S(f_i, c_j) ]_+

where S denotes the similarity formula,

S(f, c) = (f · c) / (‖f‖ · ‖c‖),

m denotes a set threshold, f_i denotes the feature vector obtained from the network for sample x_i, and c_i denotes the center vector of the class of x_i;
(5) auxiliary optimization objective:
(5.1) from the P x K feature vectors obtained in step (3.2), obtaining, through a newly added network layer, feature vectors whose dimension equals the number of labels;
and (5.2) carrying out regression optimization on the feature vectors obtained in the step (5.1) and the corresponding labels thereof by using a cross entropy function.
2. The pedestrian re-identification method based on batch center similarity of claim 1, wherein in step (3): ResNet50 is selected as the backbone network, in which each part uses the ReLU function as the activation function, and in the fourth block of the ResNet50 network the convolution stride is reduced from the original 2 to 1, thereby raising the resolution of the obtained features; after the fourth block of the ResNet50 network, the pooling layer average_pool of the original ResNet50 is replaced with a max_pool layer, and a fully-connected layer is added, the output of which is the feature vector representing the image, labeled F1; after this added fully-connected layer, one more fully-connected layer is added whose output dimension is the number of classes of the data set, and the resulting vector is labeled F2.
3. The pedestrian re-identification method based on batch center similarity of claim 1, wherein, in order to simplify the computation and at the same time ease the back-propagation of the gradient, all feature vectors are unitized, namely:

f̂_i = f_i / ‖f_i‖₂,

and likewise

ĉ_i = c_i / ‖c_i‖₂.
4. The pedestrian re-identification method based on batch center similarity of claim 1 or 2, wherein step (5.2) specifically is: regression optimization is performed with the cross-entropy function on the feature vector F2 obtained in step (3):

L_CE = - (1 / (P·K)) Σ_i Σ_j y_{ij} · log p_{ij},

where y_{ij} is 1 if j is the true label of sample i and 0 otherwise, and p_{ij} is the softmax probability of class j for sample i.
5. the pedestrian re-identification method based on the similarity of the center of the batch as claimed in claim 1 or 2, wherein the step (2.1) is specifically as follows: and setting area to represent the area of the original image, taking s ∈ [ A × area, B × area ] to represent the size of the randomly selected area, and then shielding the randomly selected image part with the area s, wherein 0< A < B < 1.
6. The pedestrian re-identification method based on batch center similarity of claim 1 or 2, wherein step (2.2) specifically is: the image obtained after random occlusion in step (2.1) is scaled to size [288, 144], and an image of size [256, 128] is then randomly cropped.
7. The pedestrian re-identification method based on batch center similarity of claim 1 or 2, wherein, when selecting the K samples, if the number of samples of a certain pedestrian is less than the selected K value, that pedestrian is sampled with replacement, that is, the selection is repeated until the set K value is reached.
8. The pedestrian re-identification method based on the similarity between the centers of the batches as claimed in claim 5, wherein A is 0.02 and B is 0.4.
CN201910617855.1A 2019-07-10 2019-07-10 Pedestrian re-identification method based on batch center similarity Active CN110309810B (en)


Publications (2)

CN110309810A, published 2019-10-08
CN110309810B, granted 2021-08-17





Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant