CN110008861A - Pedestrian re-identification method based on global and local feature learning - Google Patents

Pedestrian re-identification method based on global and local feature learning

Info

Publication number
CN110008861A
CN110008861A (Application number CN201910219450.2A)
Authority
CN
China
Prior art keywords
pedestrian
feature
training
indicate
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910219450.2A
Other languages
Chinese (zh)
Inventor
晋建秀
王鹏
邢晓芬
青春美
徐向民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201910219450.2A priority Critical patent/CN110008861A/en
Publication of CN110008861A publication Critical patent/CN110008861A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a pedestrian re-identification method based on global and local feature learning, comprising the following steps: S1, obtain a training dataset and perform preprocessing and data augmentation on the training data; S2, build a deep convolutional neural network; S3, train the network with the training data; S4, obtain a test dataset, preprocess it, and then extract the feature of each test-set image with the trained network; S5, compute the similarity score between the feature of each Query image and the features in the Gallery dataset; S6, sort all similarity scores, take the highest-scoring Gallery pedestrian image to be the same pedestrian as the corresponding Query pedestrian, and thereby obtain the result for the image to be recognized. The proposed network is not only simple but also requires no additional pedestrian information, and it achieves higher accuracy than other classical methods.

Description

Pedestrian re-identification method based on global and local feature learning
Technical field
The present invention relates to the field of pedestrian re-identification, and in particular to a pedestrian re-identification method based on global and local feature learning.
Background technique
With the gradual development of society, the economy, and science and technology, intelligent surveillance technology has attracted more and more attention. Schools, hospitals, railway stations, airports and other public places with large flows of people are equipped with large numbers of cameras, and studying and analyzing these massive video data is of great significance in fields such as public safety and criminal investigation.
Pedestrian re-identification refers to recognizing a pedestrian who has already appeared in one camera when that pedestrian appears again in another camera. It differs from face recognition: in face recognition the background of the facial image is relatively simple and the face is comparatively clear and easy to distinguish, whereas in pedestrian re-identification the pedestrian images have low resolution, the facial information is blurred, and the background is complex, which makes correct matching difficult; moreover, shooting angles differ greatly between cameras, and the posture or physical appearance of a pedestrian is likely to change each time the pedestrian appears. These characteristics make both the analysis of the images and the extraction of pedestrian features extremely difficult. Current techniques in the field of pedestrian re-identification fall roughly into two classes. The first studies feature representations of the pedestrian and extracts more robust, discriminative features to represent the pedestrian. The second uses distance metric learning: by learning a discriminative distance metric function, the distance between images of the same person becomes smaller than the distance between images of different pedestrians. In recent years, with the development of deep learning, more methods have focused on pedestrian feature representation, where three mainstream techniques exist. The first uses global features, which capture global information such as the pedestrian's gender, body shape, and clothing color; however, global features tend to lose details and are sensitive to errors in pedestrian detection. The second uses local features: many methods directly divide the whole pedestrian picture into several fixed parts and feed them into a neural network for training, but this ignores the effect of pose changes and occlusion on the cropped parts. The third combines global features and local information, directly fusing the global and local features into the final pedestrian descriptor; its drawback is that it often incurs a larger computational cost and additional memory. It follows that none of the above three classes of methods can fully exploit the global and local features of pedestrians.
Summary of the invention
The purpose of the present invention is to propose a pedestrian re-identification method based on global and local feature learning, so as to solve the technical problem that existing deep-learning methods cannot fully exploit the global and local features of pedestrians.
The purpose of the present invention is achieved at least through the following technical solution.
A pedestrian re-identification method based on global and local feature learning comprises the following steps:
Step S1, obtain a training dataset, and perform preprocessing and data augmentation on the training data;
Step S2, build a deep convolutional neural network;
Step S3, train the deep convolutional neural network with the prepared training data;
Step S4, obtain a test dataset, preprocess it, and then extract the feature of each test-set image with the trained network;
Step S5, compute the similarity score between the feature of each query-set (Query) image and the features in the candidate-set (Gallery) dataset;
Step S6, sort all similarity scores; the highest-scoring Gallery pedestrian image can be considered to be the same pedestrian as the corresponding Query pedestrian, and the result for the image to be recognized is thereby obtained.
Further, the test dataset comprises a Query dataset and a Gallery dataset.
Further, the preprocessing of step S1 resizes the RGB image of each pedestrian to 256*144 and performs mean normalization on it; the data augmentation includes random cropping of the image to 256*128 and horizontal flipping.
Further, the construction of the deep convolutional neural network in step S2 comprises the following steps:
Step S21, take all network layers of ResNet-50 up to and including the last convolutional layer (Conv5) and initialize them with parameters pre-trained on the ImageNet dataset; the parameters include the weight vectors θ1, θ2, …, θm, … θn and the biases;
Step S22, in practice the vertical direction of a pedestrian image can intuitively be divided into different parts, for example the head, chest, legs and feet. After the Conv5 layer, local average pooling (Local Average Pooling) is applied to the output X of Conv5: the output is cut into k parts (Part), and each of these k parts is pooled separately. The receptive field of the pooling is (H/k)*W, where H and W are respectively the height and width of the Conv5 output and k is the number of cut parts. Each element of each Part is expressed as:
Here, X_{c,i,j} denotes each element of the output of the convolutional layer Conv5, i and j are respectively the indices along the height and width directions, c denotes the c-th channel, and Δ = H/k.
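(The pooling formula itself appears only as an image in the original publication. A plausible reconstruction from the surrounding definitions, writing P_{c,k} for the pooled value of the k-th part in channel c, is:

    P_{c,k} = \frac{1}{\Delta \cdot W} \sum_{i=(k-1)\Delta + 1}^{k\Delta} \sum_{j=1}^{W} X_{c,i,j}

The symbol P_{c,k} is our notation, not the patent's.)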
Step S23, mapping (Mapping) learning is applied to each Part obtained by the cutting, and the result after the mapping is:
where σ1 and σ2 are the rectified linear unit (ReLU) and the Sigmoid function respectively, and the remaining parameters are the convolution kernel parameters.
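(The mapping formula is likewise omitted as an image. Since the embodiment describes each mapping as a cascade of convolution, ReLU, convolution and Sigmoid with unshared parameters, a plausible reconstruction, with W^1_k and W^2_k denoting the two convolution kernels of the k-th part, is:

    S_{c,k} = \sigma_2\big( W^2_k * \sigma_1( W^1_k * P_{c,k} ) \big)

The kernel symbols are our notation.)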
Step S24, considering that the information of neighbouring regions in a pedestrian image is similar, each Part obtained by the mapping learning is first replicated (Repeat) and then the results are concatenated (Cat) along the height (H) dimension;
Step S25, the concatenated tensor (Tensor) is multiplied point by point with X to realize the selection of local features; the result of the selection is expressed as:
where X_{c,i,j} denotes each element of the output of the convolutional layer Conv5, S_{c,i,j} denotes the result obtained by the mapping learning, and the remaining symbol denotes the point-by-point multiplication operation.
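(A plausible reconstruction of the omitted selection formula, consistent with the point-by-point product just described and writing Y for the selected feature (our notation), is:

    Y_{c,i,j} = X_{c,i,j} \odot S_{c,i,j}

)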
Step S26, global feature fusion is applied to the result of step S25, i.e. a global average pooling (Global Average Pooling) operation.
Further, the training of step S3 comprises the following steps:
Step S31, organize the training data as needed and input it into the network described in step S2.
Step S32, set the loss function of the deep convolutional neural network:
where λ1, λ2, λ3, λ4 and u are the coefficients of the corresponding loss functions and are set to 0.1, 0.1, 0.1, 0.1 and 0.6 respectively; p1, p2, p3 and p4 correspond respectively to the individual parts among the k parts, and G corresponds to the whole before the cutting; the associated loss terms denote the loss functions of the corresponding parts;
Step S33, obtain the forward-propagation loss value from the loss function and the parameters of the deep convolutional neural network; the parameters include the weight vectors θ1, θ2, …, θm, … θn;
Step S34, obtain the training error by back propagation.
Further, the method uses the Softmax classification function for both the local features and the global features. For the Softmax loss function, the probability that an image sample x^{(z)} belongs to each class is first computed. Assume that all samples are divided into n classes; for an input sample x^{(z)} (z denotes the z-th sample), the probability value of belonging to class m is:
where θ1, θ2, …, θm, … θn are the parameters of the deep convolutional neural network; from the formula for S_m, the Softmax loss function is obtained:
where y is a 1*n vector and y_m is 1 when the corresponding position of the sample is the true class.
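(Both formulas are rendered as images in the original. The standard Softmax expressions consistent with the definitions above, which the text presumably intends, are:

    S_m = \frac{e^{\theta_m^{T} x^{(z)}}}{\sum_{l=1}^{n} e^{\theta_l^{T} x^{(z)}}}, \qquad L_{softmax} = -\sum_{m=1}^{n} y_m \log S_m

)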
Further, the preprocessing of the test dataset in step S4 resizes each image to 256*144 and performs mean normalization on it.
Further, the similarity measure of step S5 uses the Euclidean distance, whose formula is:
where x_u and x_v denote respectively the feature of the u-th pedestrian in the Query dataset and the feature of the v-th pedestrian in the Gallery.
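(The distance formula is omitted as an image; the standard Euclidean distance between the two feature vectors is:

    d(x_u, x_v) = \lVert x_u - x_v \rVert_2 = \sqrt{\textstyle\sum_{i} (x_{u,i} - x_{v,i})^2}

)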
Compared with the prior art, the present invention has the following advantages: aiming at the problem that existing deep-learning techniques cannot fully exploit the global and local features of pedestrians, the present invention proposes a new network structure. The structure automatically carries out the selection and learning of local and global information, and constrains both the local features and the global features with the Softmax loss function, yielding a highly robust feature descriptor and thereby improving the matching accuracy of pedestrian re-identification.
Detailed description of the invention
Fig. 1 is the basic flow of pedestrian re-identification;
Fig. 2 is the deep network structure of the present invention based on global and local feature selection for pedestrians.
Specific embodiment
In order to make the technical solution and advantages of the present invention clearer, the present invention is described in more detail below with reference to specific embodiments and the accompanying drawings.
A pedestrian re-identification method based on global and local feature learning, as shown in Fig. 1, comprises the following steps:
Step S1, obtain a training dataset, and perform preprocessing and data augmentation on the training data.
The present invention uses three public pedestrian re-identification databases: Market-1501, DukeMTMC-reID and CUHK03. Since the pictures in the raw datasets differ in size and cannot directly satisfy the input requirements of the neural network, every RGB picture is resized to 256*144 and mean-normalized. Then, in order to improve the robustness of the deep network's learning and prevent over-fitting, data augmentation is applied to the resulting data, including random cropping (to a size of 256*128) and horizontal flipping.
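For illustration only, the preprocessing and augmentation described in this step could be set up with torchvision transforms roughly as in the sketch below; the normalization statistics (ImageNet means and stds) and the exact transform order are assumptions, not details taken from the patent.

    from torchvision import transforms

    # Training-time pipeline: resize to 256*144, random crop to 256*128,
    # random horizontal flip, then mean normalization (ImageNet statistics assumed).
    train_transform = transforms.Compose([
        transforms.Resize((256, 144)),
        transforms.RandomCrop((256, 128)),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Test-time pipeline (step S4): resize to 256*144 and normalize only.
    test_transform = transforms.Compose([
        transforms.Resize((256, 144)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])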
Step S2, build the deep convolutional neural network.
As shown in Fig. 2, step S2 comprises the following steps:
Step S21, take all network layers of ResNet-50 (deep residual network-50) up to and including the last convolutional layer Conv5 as the base network (Base Network), and initialize them with parameters pre-trained on ImageNet; the newly added network structure is initialized with a Gaussian initialization.
Step S22, after the Conv5 layer, local average pooling (Local Average Pooling, LAP) is applied to the output X of Conv5. The receptive field of the pooling is (H/k)*W, where H and W are respectively the height and width of the Conv5 output and k is the number of cut parts; if the input picture size is 256*128, H and W are 8 and 4 respectively. In practice, considering that the vertical direction of a pedestrian can intuitively be divided into different parts, such as the head, chest, legs and feet, k can be set to 1, 2, 3, 4, etc.; in this embodiment the value of k is 4.
Each element in each part can be expressed as:
Here, X_{c,i,j} denotes each element of the output of the convolutional layer Conv5, i and j are respectively the indices along the height and width directions, c denotes the c-th channel, and Δ = H/k.
Step S23, mapping learning is applied to each part obtained by the cutting. Each mapping is a cascade of a convolutional layer, a ReLU function, a convolutional layer and a Sigmoid function, and the parameters of this part are not shared between the different parts.
The result after the mapping is:
where σ1 and σ2 are the ReLU function and the Sigmoid function respectively, and the remaining parameters are the convolution kernel parameters.
Step S24, considering that the information of neighbouring regions in a pedestrian image is similar, each Part obtained by the mapping (Mapping) learning is first replicated (Repeat) and then the results are concatenated (Cat) along the height (H) dimension;
Step S25, the concatenated Tensor is multiplied point by point with X to realize the selection of local features. The result of the selection is:
where X_{c,i,j} denotes each element of the output of the convolutional layer Conv5, S_{c,i,j} denotes the result obtained by the mapping learning, and the remaining symbol denotes the point-by-point multiplication operation.
Step S26, global feature fusion is applied to the result of step S25, i.e. a global average pooling (Global Average Pooling, GAP) operation.
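Purely as an illustrative sketch, and not as the patent's reference implementation, steps S21-S26 could be written in PyTorch roughly as follows. The class and variable names are ours; the 1*1 kernel size of the two mapping convolutions and the interpretation of the Repeat step as broadcasting each part's score back over its spatial extent are assumptions.

    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    class LocalGlobalNet(nn.Module):
        """Sketch of the global/local feature-selection network (steps S21-S26)."""
        def __init__(self, num_parts=4, channels=2048):
            super().__init__()
            backbone = resnet50(pretrained=True)                 # S21: ImageNet-pretrained ResNet-50
            self.base = nn.Sequential(*list(backbone.children())[:-2])  # keep layers up to Conv5
            self.k = num_parts
            # S23: one independent (non-shared) mapping per part: conv -> ReLU -> conv -> Sigmoid
            self.mappings = nn.ModuleList([
                nn.Sequential(
                    nn.Conv2d(channels, channels, kernel_size=1),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(channels, channels, kernel_size=1),
                    nn.Sigmoid(),
                ) for _ in range(num_parts)
            ])
            self.gap = nn.AdaptiveAvgPool2d(1)                   # S26: global average pooling

        def forward(self, img):
            x = self.base(img)                                   # Conv5 output X, shape (B, C, H, W)
            parts = torch.split(x, x.size(2) // self.k, dim=2)   # S22: cut into k vertical parts
            scores = []
            for part, mapping in zip(parts, self.mappings):
                p = part.mean(dim=(2, 3), keepdim=True)          # S22: local average pooling over (H/k)*W
                s = mapping(p)                                   # S23: mapping learning
                s = s.expand(-1, -1, part.size(2), part.size(3)) # S24: repeat over the part's area
                scores.append(s)
            s_full = torch.cat(scores, dim=2)                    # S24: concatenate along the height dimension
            selected = x * s_full                                # S25: point-by-point multiplication with X
            return self.gap(selected).flatten(1)                 # S26: fused global descriptor

The five classification heads of step S32 (one Softmax classifier per part plus one on the global descriptor) would be attached on top of this module; where exactly the part-level classifiers take their input is not fully specified in the text, so they are omitted from the sketch.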
Step S3, train the deep convolutional neural network with the prepared training data.
Step S3 comprises the following steps:
Step S31, organize the training data as needed and input it into the network described in step S2;
In this implementation, the pedestrian images of each class (identity) are grouped together.
Step S32, set the loss function of the deep convolutional neural network. The method uses the Softmax classification function for both the local features and the global features, computing the probability that a sample x^{(z)} belongs to each class; the computation is as follows.
Assume that all samples are divided into n classes; for an input sample x^{(z)} (z denotes the z-th sample), the probability value of belonging to class m is:
where θ1, θ2, …, θm, … θn are the parameters of the deep convolutional neural network; from the formula for S_m, the Softmax loss function is obtained:
where y is a 1*n vector and y_m is 1 when the corresponding position of the sample is the true class.
As shown in Fig. 2, the loss function used in this embodiment consists of 5 classification losses (Loss), which constrain the local features and the global feature respectively. The final loss function of the deep convolutional neural network is:
where λ1, λ2, λ3, λ4 and u are the coefficients of the corresponding loss functions and are set to 0.1, 0.1, 0.1, 0.1 and 0.6 respectively; p1, p2, p3 and p4 correspond respectively to the individual parts among the k parts, and G corresponds to the whole before the cutting; the associated loss terms denote the loss functions of the corresponding parts.
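(The formula for the final loss is shown only as an image in the original; from the coefficients and part names given above it presumably has the form

    L_{loss} = \lambda_1 L_{p_1} + \lambda_2 L_{p_2} + \lambda_3 L_{p_3} + \lambda_4 L_{p_4} + u\, L_{G}

with each term a Softmax classification loss as defined in step S32. The symbols L_{p_i} and L_{G} are our notation.)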
Step S33, obtain the forward-propagation loss value from the loss function and the parameters of the deep convolutional neural network (including the weights and biases).
The deep-learning optimization algorithm used in the present invention is stochastic gradient descent. A total of 60 epochs are trained, where the learning rate of the first 40 epochs is set to 0.1 and the learning rate of the last 20 epochs is set to 0.01. The batch size (Batchsize) is set to 32.
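For illustration only, the schedule just described (SGD, 60 epochs, learning rate 0.1 for the first 40 epochs and 0.01 afterwards, batch size 32) could be configured as in the sketch below; the momentum and weight-decay values, and the helper names train_loader and compute_total_loss, are assumptions and placeholders rather than details from the patent.

    import torch

    model = LocalGlobalNet(num_parts=4)                           # sketch module from step S2 above
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                                momentum=0.9, weight_decay=5e-4)  # momentum/decay assumed
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[40], gamma=0.1)

    for epoch in range(60):                                       # 60 epochs in total
        for images, labels in train_loader:                       # loader with batch size 32 (placeholder)
            optimizer.zero_grad()
            loss = compute_total_loss(model(images), labels)      # weighted classification losses of step S32 (placeholder)
            loss.backward()                                        # step S34: back-propagate the training error
            optimizer.step()
        scheduler.step()                                           # learning rate drops from 0.1 to 0.01 after epoch 40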
Step S34, obtain the training error by back propagation.
Using the loss L_loss described in step S32 as the criterion, the training error is propagated backwards.
Step S4, obtain the test dataset (comprising the Query dataset and the Gallery dataset), preprocess it, and then extract the features of all images in the test set with the trained network.
The test dataset in step S4 needs to be resized: the images are uniformly resized to 256*144 and mean-normalized.
Step S5, compute the similarity score between the feature of each Query image and the features in the Gallery dataset; the features here are those extracted in step S4.
The similarity measure in step S5 uses the Euclidean distance:
where x_u and x_v denote respectively the feature of the u-th pedestrian in the Query dataset and the feature of the v-th pedestrian in the Gallery.
Step S6, sort all similarity scores; the highest-scoring Gallery pedestrian image can be considered to be the same pedestrian as the corresponding Query pedestrian, and the result for the image to be recognized is thereby obtained.
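As a minimal sketch of steps S5 and S6 under the Euclidean-distance measure (function and variable names are ours):

    import torch

    def rank_gallery(query_feats, gallery_feats):
        """query_feats: (Nq, D); gallery_feats: (Ng, D). Rank Gallery entries for each Query."""
        dists = torch.cdist(query_feats, gallery_feats, p=2)  # pairwise Euclidean distances (step S5)
        ranking = dists.argsort(dim=1)                         # smaller distance = higher similarity
        best_match = ranking[:, 0]                             # top-1 Gallery index per Query (step S6)
        return ranking, best_match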
The Rank-1 index and the mAP index are used as the evaluation criteria for the results in the present invention.
The above is merely an illustrative embodiment of the invention and is not intended to limit the scope of protection of the invention; any modification, equivalent substitution or improvement made within the spirit and principles of the invention shall be included within the scope of protection of the invention.

Claims (8)

1. A pedestrian re-identification method based on global and local feature learning, characterized in that the method comprises the following steps:
Step S1, obtain a training dataset, and perform preprocessing and data augmentation on the training data;
Step S2, build a deep convolutional neural network;
Step S3, train the deep convolutional neural network with the prepared training data;
Step S4, obtain a test dataset, preprocess it, and then extract the features of all images in the test dataset with the trained deep convolutional neural network;
Step S5, compute the similarity score between the feature of each query-set (Query) image and the features in the candidate-set (Gallery) dataset, the features being those of step S4;
Step S6, sort all similarity scores; the highest-scoring Gallery pedestrian image is then considered to be the same pedestrian as the corresponding Query pedestrian, and the result for the image to be recognized is thereby obtained.
2. The pedestrian re-identification method according to claim 1, characterized in that the test dataset comprises a Query dataset and a Gallery dataset.
3. The pedestrian re-identification method according to claim 1, characterized in that the preprocessing of step S1 resizes the RGB image of each pedestrian to 256*144 and performs mean normalization on it; the data augmentation includes random cropping, i.e. cropping the image to 256*128, and horizontal flipping.
4. The pedestrian re-identification method according to claim 1, characterized in that the construction of the deep convolutional neural network in step S2 comprises the following steps:
Step S21, take all network layers of ResNet-50 up to and including the last convolutional layer Conv5, and initialize them with parameters pre-trained on the ImageNet dataset; the parameters include the weight vectors θ1, θ2, …, θm, … θn;
Step S22, apply local average pooling (Local Average Pooling) to the output X of Conv5: the output is cut into k parts (Part), and each of these k parts is pooled separately; the receptive field of the pooling is (H/k)*W, where H and W are respectively the height and width of the Conv5 output and k is the number of cut parts, and each element of each Part is expressed as:
here, X_{c,i,j} denotes each element of the output of the convolutional layer Conv5, i and j are respectively the indices along the height and width directions, c denotes the c-th channel, and Δ = H/k;
Step S23, apply mapping (Mapping) learning to each Part obtained by the cutting, the result after the mapping being:
where V_{c,k} denotes the result obtained by the mapping learning, σ1 and σ2 are respectively the rectified linear unit (ReLU) and the Sigmoid function, and the remaining parameters are the convolution kernel parameters;
Step S24, considering that the information of neighbouring regions in a pedestrian image is similar, first replicate (Repeat) each Part obtained by the mapping learning and then concatenate (Cat) the results along the height (H) dimension;
Step S25, multiply the concatenated tensor (Tensor) point by point with X to realize the selection of local features, the result of the selection being expressed as:
where X_{c,i,j} denotes each element of the output of the convolutional layer Conv5, S_{c,i,j} denotes the result obtained by the mapping learning, and the remaining symbol denotes the point-by-point multiplication operation;
Step S26, apply global feature fusion to the result of step S25, i.e. a global average pooling (Global Average Pooling) operation.
5. The method according to claim 4, characterized in that the training of step S3 comprises the following steps:
Step S31, input the training data into the deep convolutional neural network described in step S2;
Step S32, set the loss function of the deep convolutional neural network:
where λ1, λ2, λ3, λ4 and u are the coefficients of the corresponding loss functions and are set to 0.1, 0.1, 0.1, 0.1 and 0.6 respectively; p1, p2, p3 and p4 correspond respectively to the individual parts among the k parts, and G corresponds to the whole before the cutting; the associated loss terms denote the loss functions of the corresponding parts;
Step S33, obtain the forward-propagation loss value from the loss function and the parameters of the deep convolutional neural network;
Step S34, obtain the training error by back propagation.
6. The method according to claim 1, characterized in that the Softmax classification function is used for both the local features and the global features to compute the probability that an image sample x^{(z)} belongs to each class; the computation is as follows:
assume that all samples are divided into n classes; for an input sample x^{(z)}, where z denotes the z-th sample, the probability value of belonging to class m is:
where θ1, θ2, …, θm, … θn are the parameters of the deep convolutional neural network; from the formula for S_m, the Softmax loss function is obtained:
where y is a 1*n vector and y_m is 1 when the corresponding position of the sample is the true class.
7. The method according to claim 1, characterized in that the preprocessing described in step S4 resizes the images to 256*144 and performs mean normalization on them.
8. The method according to claim 1, characterized in that the similarity measure of step S5 uses the Euclidean distance, whose formula is:
where x_u and x_v denote respectively the feature of the u-th pedestrian in the Query dataset and the feature of the v-th pedestrian in the Gallery.
CN201910219450.2A 2019-03-21 2019-03-21 Pedestrian re-identification method based on global and local feature learning Pending CN110008861A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910219450.2A CN110008861A (en) 2019-03-21 2019-03-21 Pedestrian re-identification method based on global and local feature learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910219450.2A CN110008861A (en) 2019-03-21 2019-03-21 Pedestrian re-identification method based on global and local feature learning

Publications (1)

Publication Number Publication Date
CN110008861A true CN110008861A (en) 2019-07-12

Family

ID=67167747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910219450.2A Pending CN110008861A (en) 2019-03-21 2019-03-21 Pedestrian re-identification method based on global and local feature learning

Country Status (1)

Country Link
CN (1) CN110008861A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506703A (en) * 2017-08-09 2017-12-22 中国科学院大学 A kind of pedestrian's recognition methods again for learning and reordering based on unsupervised Local Metric
CN108062756A (en) * 2018-01-29 2018-05-22 重庆理工大学 Image, semantic dividing method based on the full convolutional network of depth and condition random field
CN109034044A (en) * 2018-06-14 2018-12-18 天津师范大学 A kind of pedestrian's recognition methods again based on fusion convolutional neural networks
CN108960140A (en) * 2018-07-04 2018-12-07 国家新闻出版广电总局广播科学研究院 The pedestrian's recognition methods again extracted and merged based on multi-region feature
CN108875696A (en) * 2018-07-05 2018-11-23 五邑大学 The Off-line Handwritten Chinese Recognition method of convolutional neural networks is separated based on depth
CN109271926A (en) * 2018-09-14 2019-01-25 西安电子科技大学 Intelligent Radiation source discrimination based on GRU depth convolutional network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Li Jiao et al.: "Person re-identification algorithm with multi-confidence re-ranking", Pattern Recognition and Artificial Intelligence *
Wang Peng et al.: "Local-Global Extraction Unit for Person Re-identification", International Conference on Brain Inspired Cognitive Systems *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110688976A (en) * 2019-10-09 2020-01-14 创新奇智(北京)科技有限公司 Store comparison method based on image identification
CN110781817A (en) * 2019-10-25 2020-02-11 南京大学 Pedestrian re-identification method for solving component misalignment
CN111275712A (en) * 2020-01-15 2020-06-12 浙江工业大学 Residual semantic network training method oriented to large-scale image data
CN112149517A (en) * 2020-08-31 2020-12-29 三盟科技股份有限公司 Face attendance checking method and system, computer equipment and storage medium
CN112200093A (en) * 2020-10-13 2021-01-08 北京邮电大学 Pedestrian re-identification method based on uncertainty estimation
CN113269070A (en) * 2021-05-18 2021-08-17 重庆邮电大学 Pedestrian re-identification method fusing global and local features, memory and processor

Similar Documents

Publication Publication Date Title
CN110008861A (en) Pedestrian re-identification method based on global and local feature learning
CN105512680B (en) A kind of more view SAR image target recognition methods based on deep neural network
CN108460403A (en) The object detection method and system of multi-scale feature fusion in a kind of image
CN111666843B (en) Pedestrian re-recognition method based on global feature and local feature splicing
CN104166841B (en) The quick detection recognition methods of pedestrian or vehicle is specified in a kind of video surveillance network
CN108537136A (en) The pedestrian's recognition methods again generated based on posture normalized image
CN109934176A (en) Pedestrian's identifying system, recognition methods and computer readable storage medium
CN109101865A (en) A kind of recognition methods again of the pedestrian based on deep learning
CN108710868A (en) A kind of human body critical point detection system and method based under complex scene
CN107330396A (en) A kind of pedestrian's recognition methods again based on many attributes and many strategy fusion study
CN110070010A (en) A kind of face character correlating method identified again based on pedestrian
CN109447115A (en) Zero sample classification method of fine granularity based on multilayer semanteme supervised attention model
CN108229444A (en) A kind of pedestrian's recognition methods again based on whole and local depth characteristic fusion
CN107463920A (en) A kind of face identification method for eliminating partial occlusion thing and influenceing
CN108960184A (en) A kind of recognition methods again of the pedestrian based on heterogeneous components deep neural network
CN106529499A (en) Fourier descriptor and gait energy image fusion feature-based gait identification method
CN110008913A (en) The pedestrian's recognition methods again merged based on Attitude estimation with viewpoint mechanism
Liu et al. A contrario comparison of local descriptors for change detection in very high spatial resolution satellite images of urban areas
CN107133569A (en) The many granularity mask methods of monitor video based on extensive Multi-label learning
CN109190475A (en) A kind of recognition of face network and pedestrian identify network cooperating training method again
CN108416295A (en) A kind of recognition methods again of the pedestrian based on locally embedding depth characteristic
CN109741240A (en) A kind of more flat image joining methods based on hierarchical clustering
CN108564012A (en) A kind of pedestrian's analytic method based on characteristics of human body's distribution
CN109784130A (en) Pedestrian recognition methods and its device and equipment again
CN104298974A (en) Human body behavior recognition method based on depth video sequence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20190712