CN107330355B - Deep pedestrian re-identification method based on positive sample balance constraint - Google Patents
- Publication number
- CN107330355B (application CN201710330206.4A)
- Authority
- CN
- China
- Prior art keywords
- network
- training
- positive sample
- sample
- deep
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- G06V40/20 — Recognition of biometric, human-related or animal-related patterns in image or video data; movements or behaviour, e.g. gesture recognition
- G06F18/2148 — Generating training patterns; bootstrap methods, e.g. bagging or boosting, characterised by the process organisation or structure, e.g. boosting cascade
- G06F18/24147 — Classification techniques based on distances to training or reference patterns; distances to closest patterns, e.g. nearest neighbour classification
- G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
Abstract
The invention provides a deep pedestrian re-identification method based on a positive sample balance constraint. The residual network structure it uses is simple and widely applied; a sufficiently deep network enhances feature expression capability, and no specially designed architecture is required. The method shows that extracting image features with a residual network classifier can yield higher re-identification accuracy than most carefully designed methods. Compared with pairwise (contrastive) loss and triplet loss methods, the lifted structure loss achieves a similar effect without specially generated effective samples, and using the overall distribution information makes the learned gradient direction more stable and effective. On top of the lifted structure loss, the positive sample balance constraint is added: it controls the positive pair distance and balances the gradients of the positive and negative pair distances, making the algorithm easier to train and improving its performance.
Description
Technical Field
The invention relates to the field of deep learning and pedestrian re-identification, in particular to a deep pedestrian re-identification method based on positive sample balance constraint.
Background
Over the years, impressive advances have been made in pattern recognition, machine learning, and computer vision research. These advances have attracted the attention of the video surveillance and public security industries, whose need for such intelligent algorithms and systems keeps growing. Driven by the continued development of the security industry, intelligent monitoring tools based on faces, fingerprints, other biometric features, and human and urban environments are widely applied. These tools collect large amounts of data, usually images or videos, and bring new research topics to machine learning; in recent years, one topic of great academic interest is pedestrian re-identification.
In general, there are two types of methods for the pedestrian re-identification problem: traditional methods and deep learning methods. Traditional methods generally need to design or learn robust and discriminative features; most are shallow models with limited feature expression capability. A deep network, by learning its weights, can automatically learn which features are effective without hand-crafting them as traditional methods do. In the last two or three years, more and more researchers have used deep learning for pedestrian re-identification, and good progress has been made. However, most existing deep learning methods use only local data distribution information and few hidden layers, so the networks are relatively shallow and there is still clear room for performance improvement.
Disclosure of Invention
The invention provides a deep pedestrian re-identification method based on positive sample balance constraint, which improves the feature expression capability.
In order to achieve the technical effects, the technical scheme of the invention is as follows:
A deep pedestrian re-identification method based on the positive sample balance constraint comprises the following steps:
S1: input a training data set {(x_i, y_i)}_{i=1}^N, where N is the number of samples, d is the number of image pixels, and c is the number of different pedestrians in the training set; x_i is a d-dimensional column vector; y_i = [y_i1, y_i2, y_i3, …, y_ic]^T is a c-dimensional column vector whose elements are 0 or 1, with exactly one element equal to 1; and X = [x_1, x_2, x_3, …, x_N] is a matrix of d rows and N columns;
S2: pre-train the network using a softmax classification model;
S3: train the network using the lifted structure loss based on the positive sample balance constraint;
S4: perform feature extraction on the test sample images;
S5: perform nearest neighbour (KNN) classification on the test samples using the obtained features to obtain the re-identification result.
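The data layout of step S1 can be sketched as follows; this is a minimal NumPy illustration, assuming already-flattened images, and the helper name `build_dataset` is ours rather than the patent's:

```python
import numpy as np

def build_dataset(images, labels, c):
    """Arrange N flattened images as the d-by-N matrix X of step S1 and
    encode each pedestrian identity as a c-dimensional one-hot column y_i."""
    X = np.stack([img.ravel() for img in images], axis=1)  # shape (d, N)
    Y = np.zeros((c, X.shape[1]))
    for i, lab in enumerate(labels):
        Y[lab, i] = 1.0  # exactly one element of each y_i equals 1
    return X, Y

# toy example: 4 images of 6 pixels each, 3 distinct pedestrians
rng = np.random.default_rng(0)
X, Y = build_dataset([rng.random(6) for _ in range(4)], [0, 2, 1, 2], c=3)
```

Each column of X is one sample x_i, and the matching column of Y is its one-hot label y_i.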
Further, the specific process of step S2 is:
Set the training learning rate η and the maximum number of epochs T, and pre-train the network parameters W using a softmax classification model; the training method is the back-propagation algorithm, with the following specific steps:
First, initialize the network parameters W. While the current epoch count is less than T, generate a mini-batch X_B of training data, input X_B into the network, and run forward propagation to obtain the loss value J(W) for this iteration; then run back-propagation to compute the gradient ∂J(W)/∂W; finally, update the network parameters as W ← W − η·∂J(W)/∂W. The network parameters are updated by this rule until the epoch count equals T.
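The pre-training loop of step S2 can be sketched as below. This is a hedged illustration only: a single linear softmax layer stands in for the patent's residual network, and the function name `pretrain_softmax` and the toy dimensions are assumptions of ours.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=0, keepdims=True)  # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum(axis=0, keepdims=True)

def pretrain_softmax(X, Y, eta=0.1, T=50, batch=2, seed=0):
    """Mini-batch SGD sketch of step S2: forward pass, cross-entropy loss,
    back-propagated gradient, update W <- W - eta * dJ/dW, until epoch T."""
    rng = np.random.default_rng(seed)
    d, N = X.shape
    c = Y.shape[0]
    W = 0.01 * rng.standard_normal((c, d))   # initialize the parameters W
    for epoch in range(T):                   # stop when the epoch count reaches T
        order = rng.permutation(N)
        for s in range(0, N, batch):         # generate mini-batch data
            idx = order[s:s + batch]
            Xb, Yb = X[:, idx], Y[:, idx]
            P = softmax(W @ Xb)              # forward propagation
            dW = (P - Yb) @ Xb.T / len(idx)  # gradient of the cross-entropy loss
            W -= eta * dW                    # update rule W <- W - eta * dW
    return W
```

The same loop structure applies unchanged when W is the full set of deep network weights and the gradient comes from automatic differentiation.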
Further, the specific process of step S3 is:
The loss function of the deep network is changed from cross entropy to the lifted structure loss. For the training data set, define 𝒫 = {(i, j) : y_i = y_j} as the set of positive sample pairs in the training samples and 𝒩 = {(i, k) : y_i ≠ y_k} as the set of negative sample pairs; the positive pair distance is D_i,j, the negative pair distances are D_i,k and D_i,l, and α is a constant parameter that controls the negative pair distance;
The loss function of the deep network is converted from cross entropy to the lifted structure loss; the specific formulas are:

D_i,j = ||Ψ(x_i) − Ψ(x_j)||_2

J = (1 / (2|𝒫|)) · Σ_{(i,j)∈𝒫} max(0, J_i,j)^2

J_i,j = max( max_{(i,k)∈𝒩} (α − D_i,k), max_{(j,l)∈𝒩} (α − D_j,l) ) + D_i,j
Because this loss function is not smooth, training easily falls into local extrema with poor performance, and the gradient of the function is inconvenient to compute. The original function is therefore optimized indirectly by optimizing a smooth upper bound of it, obtained by replacing the max operations with log-sum-exp:

J̃_i,j = log( Σ_{(i,k)∈𝒩} exp(α − D_i,k) + Σ_{(j,l)∈𝒩} exp(α − D_j,l) ) + D_i,j

The structure loss refers to this upper bound, to which two constant parameters β and λ are added: the former controls the positive sample pair distance, and the latter balances the gradients of the positive and negative pair terms.
Then set the training learning rate η, the maximum number of epochs T, and the three constant parameters α, β and λ, and train the deep network with the back-propagation algorithm to finally obtain the optimized network parameters W.
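The smooth lifted structure loss of step S3 can be sketched over a mini-batch as below. The base formula follows the published lifted structured loss; since the patent's modified formula is only given as an image, the exact placement of β (offsetting the positive pair distance) and λ (weighting the positive pair term) shown here is an assumption that merely matches the stated roles of the two constants.

```python
import numpy as np

def lifted_structure_loss(F, labels, alpha=1.0, beta=0.0, lam=1.0):
    """Smooth upper bound of the lifted structure loss over one mini-batch.
    F: (n, dim) feature matrix; labels: (n,) pedestrian identities.
    With beta=0 and lam=1 this reduces to the standard lifted loss; their
    placement here is an assumed reading of the patent's beta/lambda terms."""
    n = F.shape[0]
    D = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=2)  # D_ij = ||f_i - f_j||_2
    same = labels[:, None] == labels[None, :]
    loss, num_pos = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            if not same[i, j]:
                continue                                  # only positive pairs (i, j)
            neg_i = np.exp(alpha - D[i, ~same[i]])        # negatives of sample i
            neg_j = np.exp(alpha - D[j, ~same[j]])        # negatives of sample j
            J_ij = np.log(neg_i.sum() + neg_j.sum()) + lam * max(D[i, j] - beta, 0.0)
            loss += max(J_ij, 0.0) ** 2
            num_pos += 1
    return loss / (2 * max(num_pos, 1))
```

Well-separated identity clusters drive the loss toward zero, while positive pairs that sit far apart, or negatives inside the α margin, are penalized.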
Further, the specific process of step S4 is:
Perform feature extraction on the test sample images, which consist of a query set {x_i^q} and a test (gallery) set {x_i^t}. The purpose of pedestrian re-identification is, given a query sample x_i^q, to search the test set for images of the same pedestrian. Letting Ψ(x) denote the deep convolutional network, extract the deep features Ψ(x_i^q) and Ψ(x_i^t) from the query set and the test set respectively.
Further, the specific process of step S5 is:
Use the obtained features Ψ(x_i^q) and Ψ(x_i^t) to retrieve, for each sample in the query set, the nearest samples in the test set; the returned retrieval list of images is the re-identification result.
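Steps S4 and S5 reduce to ranking gallery features by Euclidean distance to each query feature; a minimal sketch follows (the helper name `rank_gallery` is ours):

```python
import numpy as np

def rank_gallery(query_feat, gallery_feats, k=5):
    """Step S5 sketch: rank the test (gallery) features by Euclidean distance
    to one query feature; the k nearest indices form the retrieval list."""
    d = np.linalg.norm(gallery_feats - query_feat[None, :], axis=1)
    return np.argsort(d)[:k]
```

Applying this to every Ψ(x_i^q) against all Ψ(x_i^t) yields the per-query retrieval lists that constitute the re-identification result.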
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
The residual network structure used is simple and widely applied; a sufficiently deep network enhances feature expression capability without a specially designed architecture. Extracting image features with a residual network classifier can yield higher re-identification accuracy than most carefully designed methods. Compared with pairwise (contrastive) loss and triplet loss methods, the lifted structure loss achieves a similar effect without specially generated effective samples, and using the overall distribution information makes the learned gradient direction more stable and effective.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of a residual network structure with 50 layers;
FIG. 3(a) is a schematic diagram of the pairwise (contrastive) loss case;
FIG. 3(b) is a schematic diagram of the triplet loss case;
FIG. 3(c) is a schematic diagram of the lifted structure loss.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
As shown in FIG. 1, a deep pedestrian re-identification method based on the positive sample balance constraint comprises the following steps:
S1: input a training data set {(x_i, y_i)}_{i=1}^N, where N is the number of samples, d is the number of image pixels, and c is the number of different pedestrians in the training set; x_i is a d-dimensional column vector; y_i = [y_i1, y_i2, y_i3, …, y_ic]^T is a c-dimensional column vector whose elements are 0 or 1, with exactly one element equal to 1; and X = [x_1, x_2, x_3, …, x_N] is a matrix of d rows and N columns;
S2: pre-train the network using a softmax classification model;
S3: train the network using the lifted structure loss based on the positive sample balance constraint;
S4: perform feature extraction on the test sample images;
S5: perform nearest neighbour (KNN) classification on the test samples using the obtained features to obtain the re-identification result.
The specific process of step S2 is:
Set the training learning rate η and the maximum number of epochs T, and pre-train the network parameters W using a softmax classification model; the training method is the back-propagation algorithm, with the following specific steps (the network structure is shown in FIG. 2):
First, initialize the network parameters W. While the current epoch count is less than T, generate a mini-batch X_B of training data, input X_B into the network, and run forward propagation to obtain the loss value J(W) for this iteration; then run back-propagation to compute the gradient ∂J(W)/∂W; finally, update the network parameters as W ← W − η·∂J(W)/∂W. The network parameters are updated by this rule until the epoch count equals T.
Firstly, initializing a network parameter W; if the current epoch times are less than T, generating mini batch dataThen holdInputting the data into the network for forward propagation calculation to obtain the loss function value of the iterationThen according toBackward propagation calculation is carried out to calculate gradientFinally, updating network parameters The network parameters are continuously updated according to the rule until the epoch times are equal to T.
The specific process of step S3 is:
As shown in FIGS. 3(a)-(c), the loss function of the deep network is changed from cross entropy to the lifted structure loss. For the training data set, define 𝒫 = {(i, j) : y_i = y_j} as the set of positive sample pairs in the training samples and 𝒩 = {(i, k) : y_i ≠ y_k} as the set of negative sample pairs; the positive pair distance is D_i,j, the negative pair distances are D_i,k and D_i,l, and α is a constant parameter that controls the negative pair distance;
The loss function of the deep network is converted from cross entropy to the lifted structure loss; the specific formulas are:

D_i,j = ||Ψ(x_i) − Ψ(x_j)||_2

J = (1 / (2|𝒫|)) · Σ_{(i,j)∈𝒫} max(0, J_i,j)^2

J_i,j = max( max_{(i,k)∈𝒩} (α − D_i,k), max_{(j,l)∈𝒩} (α − D_j,l) ) + D_i,j
Because this loss function is not smooth, training easily falls into local extrema with poor performance, and the gradient of the function is inconvenient to compute. The original function is therefore optimized indirectly by optimizing a smooth upper bound of it, obtained by replacing the max operations with log-sum-exp:

J̃_i,j = log( Σ_{(i,k)∈𝒩} exp(α − D_i,k) + Σ_{(j,l)∈𝒩} exp(α − D_j,l) ) + D_i,j

The structure loss refers to this upper bound, to which two constant parameters β and λ are added: the former controls the positive sample pair distance, and the latter balances the gradients of the positive and negative pair terms.
Then set the training learning rate η, the maximum number of epochs T, and the three constant parameters α, β and λ, and train the deep network with the back-propagation algorithm to finally obtain the optimized network parameters W. FIG. 3(a) shows the pairwise (contrastive) loss case: the rectangular and triangular samples form negative pairs, and during learning the rectangular samples are pushed out of the dotted circle, but the direction of that push is likely to be toward the circular samples, which is not the desired effect. FIG. 3(b) shows the triplet loss case, where a rectangular sample is equally likely to be pushed toward a circular sample. Learning algorithms based on these two losses therefore need to generate their training sample sets carefully to reduce these adverse effects. The lifted structure loss, by contrast, considers every negative and every positive pair of each sample at the same time and uses the information of the whole structure; as shown in FIG. 3(c), the rectangular samples are pushed toward samples of the same class, which better reduces intra-class variation while increasing inter-class variation.
The specific process of step S4 is:
Perform feature extraction on the test sample images, which consist of a query set {x_i^q} and a test (gallery) set {x_i^t}. The purpose of pedestrian re-identification is, given a query sample x_i^q, to search the test set for images of the same pedestrian. Letting Ψ(x) denote the deep convolutional network, extract the deep features Ψ(x_i^q) and Ψ(x_i^t) from the query set and the test set respectively.
The specific process of step S5 is:
Use the obtained features Ψ(x_i^q) and Ψ(x_i^t) to retrieve, for each sample in the query set, the nearest samples in the test set; the returned retrieval list of images is the re-identification result.
The same or similar reference numerals correspond to the same or similar parts;
the positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.
Claims (3)
1. A deep pedestrian re-identification method based on a positive sample balance constraint, characterized by comprising the following steps:
S1: input a training data set {(x_i, y_i)}_{i=1}^N, where N is the number of samples, d is the number of image pixels, and c is the number of different pedestrians in the training set; x_i is a d-dimensional column vector; y_i = [y_i1, y_i2, y_i3, …, y_ic]^T is a c-dimensional column vector whose elements are 0 or 1, with exactly one element equal to 1; and X = [x_1, x_2, x_3, …, x_N] is a matrix of d rows and N columns;
S2: pre-train the network using a softmax classification model;
S3: train the network using the lifted structure loss based on the positive sample balance constraint;
S4: perform feature extraction on the test sample images;
S5: perform nearest neighbour (KNN) classification on the test samples using the obtained features to obtain the re-identification result;
the specific process of step S2 is:
setting the training learning rate η and the maximum number of epochs T, and pre-training the network parameters W using a softmax classification model; the training method is the back-propagation algorithm, with the following specific steps:
first, initialize the network parameters W; while the current epoch count is less than T, generate a mini-batch X_B of training data, input X_B into the network, and run forward propagation to obtain the loss value J(W) for this iteration; then run back-propagation to compute the gradient ∂J(W)/∂W; finally, update the network parameters as W ← W − η·∂J(W)/∂W; the network parameters are updated by this rule until the epoch count equals T;
the specific process of step S3 is:
the loss function of the deep network is changed from cross entropy to the lifted structure loss; for the training data set, define 𝒫 = {(i, j) : y_i = y_j} as the set of positive sample pairs in the training samples and 𝒩 = {(i, k) : y_i ≠ y_k} as the set of negative sample pairs; the positive pair distance is D_i,j, the negative pair distances are D_i,k and D_i,l, and α is a constant parameter that controls the negative pair distance;
the loss function of the deep network is converted from cross entropy to the lifted structure loss; the specific formulas are:

D_i,j = ||Ψ(x_i) − Ψ(x_j)||_2

J = (1 / (2|𝒫|)) · Σ_{(i,j)∈𝒫} max(0, J_i,j)^2

J_i,j = max( max_{(i,k)∈𝒩} (α − D_i,k), max_{(j,l)∈𝒩} (α − D_j,l) ) + D_i,j
on a smooth upper bound of this loss, two constant parameters β and λ are added, the former controlling the positive sample pair distance and the latter balancing the gradient;
then set the training learning rate η, the maximum number of epochs T, and the three constant parameters α, β and λ, and train the deep network with the back-propagation algorithm to finally obtain the optimized network parameters W.
2. The method for deep pedestrian re-identification based on positive sample balance constraint according to claim 1, wherein the specific process of the step S4 is as follows:
performing feature extraction on the test sample images, which consist of a query set {x_i^q} and a test (gallery) set {x_i^t}; the purpose of pedestrian re-identification is, given a query sample x_i^q, to search the test set for images of the same pedestrian; letting Ψ(x) denote the deep convolutional network, the deep features Ψ(x_i^q) and Ψ(x_i^t) are extracted from the query set and the test set respectively.
3. The method for deep pedestrian re-identification based on positive sample balance constraint according to claim 2, wherein the specific process of the step S5 is as follows: using the obtained features Ψ(x_i^q) and Ψ(x_i^t), retrieve, for each sample in the query set, the nearest samples in the test set; the returned retrieval list of images is the re-identification result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710330206.4A CN107330355B (en) | 2017-05-11 | 2017-05-11 | Deep pedestrian re-identification method based on positive sample balance constraint |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710330206.4A CN107330355B (en) | 2017-05-11 | 2017-05-11 | Deep pedestrian re-identification method based on positive sample balance constraint |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107330355A CN107330355A (en) | 2017-11-07 |
CN107330355B true CN107330355B (en) | 2021-01-26 |
Family
ID=60193737
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710330206.4A Expired - Fee Related CN107330355B (en) | 2017-05-11 | 2017-05-11 | Deep pedestrian re-identification method based on positive sample balance constraint |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107330355B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108229435B (en) * | 2018-02-01 | 2021-03-30 | 北方工业大学 | Method for pedestrian recognition |
CN108564030A (en) * | 2018-04-12 | 2018-09-21 | 广州飒特红外股份有限公司 | Classifier training method and apparatus towards vehicle-mounted thermal imaging pedestrian detection |
CN108960342B (en) * | 2018-08-01 | 2021-09-14 | 中国计量大学 | Image similarity calculation method based on improved Soft-Max loss function |
CN109271852A (en) * | 2018-08-07 | 2019-01-25 | 重庆大学 | A kind of processing method that the pedestrian detection based on deep neural network identifies again |
CN109117891B (en) * | 2018-08-28 | 2022-04-08 | 电子科技大学 | Cross-social media account matching method fusing social relations and naming features |
CN109598191A (en) * | 2018-10-23 | 2019-04-09 | 北京市商汤科技开发有限公司 | Pedestrian identifies residual error network training method and device again |
CN111382793B (en) * | 2020-03-09 | 2023-02-28 | 腾讯音乐娱乐科技(深圳)有限公司 | Feature extraction method and device and storage medium |
CN113887561B (en) * | 2021-09-03 | 2022-08-09 | 广东履安实业有限公司 | Face recognition method, device, medium and product based on data analysis |
CN113569111B (en) * | 2021-09-24 | 2021-12-21 | 腾讯科技(深圳)有限公司 | Object attribute identification method and device, storage medium and computer equipment |
CN114764942B (en) * | 2022-05-20 | 2022-12-09 | 清华大学深圳国际研究生院 | Difficult positive and negative sample online mining method and face recognition method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104537356A (en) * | 2015-01-12 | 2015-04-22 | 北京大学 | Pedestrian re-identification method and device for carrying out gait recognition through integral scheduling |
CN104915643A (en) * | 2015-05-26 | 2015-09-16 | 中山大学 | Deep-learning-based pedestrian re-identification method |
CN105956606A (en) * | 2016-04-22 | 2016-09-21 | 中山大学 | Method for re-identifying pedestrians on the basis of asymmetric transformation |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9396412B2 (en) * | 2012-06-21 | 2016-07-19 | Siemens Aktiengesellschaft | Machine-learnt person re-identification |
-
2017
- 2017-05-11 CN CN201710330206.4A patent/CN107330355B/en not_active Expired - Fee Related
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104537356A (en) * | 2015-01-12 | 2015-04-22 | 北京大学 | Pedestrian re-identification method and device for carrying out gait recognition through integral scheduling |
CN104915643A (en) * | 2015-05-26 | 2015-09-16 | 中山大学 | Deep-learning-based pedestrian re-identification method |
CN105956606A (en) * | 2016-04-22 | 2016-09-21 | 中山大学 | Method for re-identifying pedestrians on the basis of asymmetric transformation |
Non-Patent Citations (2)
Title |
---|
People Re-identification using Deep Convolutional Neural Network; Guanwen Zhang et al.; 2014 International Conference on Computer Vision Theory and Applications; 2014-01-08; Abstract, Section 2 *
Subspace transfer learning algorithm fusing heterogeneous features; Zhang Jingxiang et al.; Acta Automatica Sinica; February 2014; Section 1 *
Also Published As
Publication number | Publication date |
---|---|
CN107330355A (en) | 2017-11-07 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20210126 Termination date: 20210511 |