CN113051962B - Pedestrian re-identification method based on a twin Margin-Softmax network with a joint attention mechanism - Google Patents

Pedestrian re-identification method based on a twin Margin-Softmax network with a joint attention mechanism

Info

Publication number
CN113051962B
CN113051962B (application CN201911366078.4A)
Authority
CN
China
Prior art keywords
softmax
loss
margin
network
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911366078.4A
Other languages
Chinese (zh)
Other versions
CN113051962A (en)
Inventor
何小海
苏婕
卿粼波
吴小强
许盛宇
吴晓红
滕奇志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University
Priority to CN201911366078.4A
Publication of CN113051962A
Application granted
Publication of CN113051962B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Abstract

The invention discloses a pedestrian re-identification method based on a twin Margin-Softmax network with a joint attention mechanism. First, an AAM-Softmax recognition model with a spatial-channel joint attention mechanism is proposed; it combines local features with global features and integrates metric learning into classification, which helps improve the discriminative power of the features. Then, a twin Margin-Softmax network combining identification loss and verification loss is proposed: the trained AAM-Softmax recognition model serves as the feature encoder of two weight-sharing sub-networks to extract features from image pairs, and the network is trained under the joint supervision of identification loss and verification loss, so that discriminative features can be learned effectively for pedestrian re-identification. Compared with mainstream pedestrian re-identification methods, the disclosed method achieves better performance. The invention can be applied effectively in fields such as intelligent surveillance and public safety.

Description

Pedestrian re-identification method based on a twin Margin-Softmax network with a joint attention mechanism
Technical Field
The invention relates to pedestrian re-identification technology, in particular to a pedestrian re-identification method based on a twin Margin-Softmax network with a joint attention mechanism, and belongs to the field of computer vision.
Background
Pedestrian re-identification (Person re-ID) is an important research topic in computer vision and is now widely used in video surveillance and public safety. It is a technique that uses computer vision to retrieve, across camera devices, whether a particular pedestrian appears in an image or video sequence. Research has focused primarily on two aspects: Feature Representation and Metric Learning. With the development of deep learning, pedestrian re-identification has made great breakthroughs. Deep metric learning aims to make a network learn the similarity between deeply embedded features (different images of the same pedestrian should be more similar than images of different pedestrians) by reducing intra-class feature distances while increasing inter-class distances. However, metric learning depends heavily on the selected sample pairs and considers only part of the samples rather than the global structure of the feature space. For deep feature representation, the feature representation of an image is extracted automatically by a Convolutional Neural Network (CNN). Pedestrian re-identification is generally treated either as a classification/identification problem or as a verification problem. The classification/identification task trains a classification network using the pedestrian's ID or attributes as labels, while the verification task determines whether an input image pair belongs to the same person. The identification model has rich ID labels, but its training objective does not consider a similarity measure; the verification model performs a similarity measure during training, but its training labels are weak (only same/different labels) and do not consider the relationship between the image pair and other images in the dataset. Combining the advantages of the two models therefore allows discriminative embedding features to be trained effectively.
Disclosure of Invention
To address these problems, the invention provides a pedestrian re-identification method based on a twin Margin-Softmax network with a joint attention mechanism. The network has two weight-sharing sub-networks, which identify the IDs of a pair of input images; at the same time, the distance between the embedded features extracted by the two sub-networks is computed to verify whether the input image pair belongs to the same person. The network is trained under the joint supervision of an identification loss and a verification loss (both Margin-Softmax losses), so that discriminative embedded features can be learned effectively for pedestrian re-identification.
The invention achieves the above purpose through the following technical scheme:
1. The invention provides an AAM-Softmax recognition model with a spatial-channel joint attention mechanism, comprising the following steps and requirements:
(1) An AAM-Softmax recognition model is proposed. Its Feature Embedding Module consists of an ImageNet-pretrained Resnet_50, a Global Average Pooling (GAP) layer, a Batch Normalization (BN) layer, and a spatial-channel joint attention mechanism, and is used to extract features from the original input image. Additive Angular Margin Softmax (AAM-Softmax) is adopted as the classifier; in the training stage, each pedestrian's ID is used as the label. By applying an additive angular margin penalty to Softmax, AAM-Softmax simultaneously improves intra-class feature compactness and inter-class feature separation on the hypersphere, so that discriminative features can be learned. The Softmax classifier is widely used in deep learning; the Softmax loss is given in formula (1):
$$L_{softmax} = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{W_{y_i}^{T}x_i+b_{y_i}}}{\sum_{j=1}^{n}e^{W_{j}^{T}x_i+b_{j}}} \quad (1)$$

In the softmax loss, $W_{y_i}^{T}x_i+b_{y_i}$ is called the target logit. AAM-Softmax sets the bias $b_j$ of the standard softmax loss to the constant 0, so the target logit $W_{y_i}^{T}x_i$ can be expressed as

$$W_{y_i}^{T}x_i = \|W_{y_i}\|\,\|x_i\|\cos\theta_{y_i}$$

where $\theta_j$ is the angle between $W_j$ and $x_i$. $W_j$ and $x_i$ are each $l_2$-normalized ($\|W_j\|=1$, $\|x_i\|=1$), i.e., mapped onto the Hypersphere Manifold, so the target logit reduces to the cosine distance $\cos\theta_{y_i}$. An additive angular margin constraint $m$ is then applied to $\theta_{y_i}$ in the angle space to improve the discriminative power of the features, and all logits are rescaled by the feature scaling factor $s$. The Softmax loss thereby becomes the AAM-Softmax loss of formula (2):

$$L_{AAM} = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos(\theta_{y_i}+m)}}{e^{s\cos(\theta_{y_i}+m)}+\sum_{j=1,\,j\neq y_i}^{n}e^{s\cos\theta_j}} \quad (2)$$
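For concreteness, a minimal PyTorch sketch of this AAM-Softmax loss follows. The class name and the default margin m = 0.5 are illustrative assumptions, not from the patent; only s = 30 is reported in the detailed description below (which also states m = 0.005).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AAMSoftmaxLoss(nn.Module):
    """AAM-Softmax (additive angular margin) loss of formula (2)."""
    def __init__(self, feat_dim, num_classes, s=30.0, m=0.5):
        super().__init__()
        self.s, self.m = s, m
        # Class weight matrix W; the biases b_j are fixed to 0 as in the text.
        self.weight = nn.Parameter(torch.empty(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, x, labels):
        # l2-normalize features and class weights so logits become cos(theta_j).
        cosine = F.linear(F.normalize(x), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        # Apply the additive angular margin m to the target logit only.
        target = torch.cos(theta + self.m)
        one_hot = F.one_hot(labels, cosine.size(1)).bool()
        logits = torch.where(one_hot, target, cosine)
        # Rescale all logits by s, then take the standard cross-entropy.
        return F.cross_entropy(self.s * logits, labels)
```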
(2) The spatial-channel joint attention mechanism focuses on local features carrying important information in both space and channels without adding any additional learning parameters. First, a feature map $F \in \mathbb{R}^{C \times H \times W}$ is extracted from Resnet_50. In the spatial attention branch, the values at each spatial position are summed over the channels to obtain a spatial summation matrix $A \in \mathbb{R}^{H \times W}$, as in formula (3):

$$A(i,j) = \sum_{c=1}^{C} F_c(i,j) \quad (3)$$

$A$ is reshaped into a vector $a$, and the softmax activation yields the weight of each spatial position, as in formula (4):

$$w_k = \frac{e^{a_k}}{\sum_{l=1}^{HW} e^{a_l}}, \quad k = 1, \dots, HW \quad (4)$$

The spatial position weights are multiplied element-wise with the feature map, and the result is added to the original feature map to obtain the spatial attention feature map $F_s \in \mathbb{R}^{C \times H \times W}$, as in formula (5):

$$F_s = F \otimes W_s + F \quad (5)$$

where $W_s$ denotes the spatial weight map broadcast over the channels. Through the GAP layer, the spatial attention feature map $F_s$ is pooled into a vector $v \in \mathbb{R}^{C}$, which is then fed into the channel attention branch to obtain the final embedded feature vector $f \in \mathbb{R}^{C}$. The channel attention mechanism follows the same principle as the spatial attention mechanism, as in formulas (6)-(7):

$$w_c = \frac{e^{v_c}}{\sum_{c'=1}^{C} e^{v_{c'}}} \quad (6)$$

$$f = v \otimes w + v \quad (7)$$
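The following is a minimal PyTorch sketch of this parameter-free joint attention, following formulas (3) through (7); the function name and the (B, C, H, W) tensor layout are assumptions, not from the patent.

```python
import torch
import torch.nn.functional as F

def joint_attention(feat):
    """Parameter-free spatial-channel joint attention, formulas (3)-(7).
    feat: feature map of shape (B, C, H, W) from Resnet_50."""
    B, C, H, W = feat.shape
    # (3): sum over channels at each position -> spatial matrix A, (B, H, W).
    A = feat.sum(dim=1)
    # (4): reshape to a vector and softmax -> one weight per spatial position.
    w_s = F.softmax(A.view(B, -1), dim=1).view(B, 1, H, W)
    # (5): reweight the map and add the original feature map back (residual).
    feat_s = feat * w_s + feat
    # GAP pools the spatial attention map into a C-dimensional vector.
    v = feat_s.mean(dim=(2, 3))
    # (6)-(7): channel attention, same form as the spatial branch.
    w_c = F.softmax(v, dim=1)
    return v * w_c + v  # embedded feature vector, (B, C)
```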
2. The invention provides a twin Margin-Softmax classification network that combines identification loss and verification loss for pedestrian re-identification; during training, the network considers both the pairwise relations between samples and the similarity measure between features. The method comprises the following steps and requirements:
(1) A twin network is proposed whose two sub-networks share weights. The recognition model pretrained with the AAM-Softmax loss is used as the Feature Encoder of each sub-network to extract depth features $f_a, f_b \in \mathbb{R}^{2048}$ from the input image pair. $f_a$ and $f_b$ are then passed through the Distance Measurement module, defined as $f_s = (f_a - f_b)^2$, to obtain the distance feature $f_s \in \mathbb{R}^{2048}$.
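A sketch of this twin structure, under stated assumptions: the encoder argument stands for the AAM-Softmax-pretrained feature embedding module, plain linear heads stand in for the Margin-Softmax classifiers described in step (2) below, and num_ids = 751 (the training identity count of Market-1501) is illustrative.

```python
import torch
import torch.nn as nn

class TwinMarginSoftmaxNet(nn.Module):
    def __init__(self, encoder, feat_dim=2048, num_ids=751):
        super().__init__()
        self.encoder = encoder                       # one instance = shared weights
        self.id_head = nn.Linear(feat_dim, num_ids)  # identification branch
        self.ver_head = nn.Linear(feat_dim, 2)       # same/different verification

    def forward(self, img_a, img_b):
        f_a = self.encoder(img_a)                    # depth feature of image a
        f_b = self.encoder(img_b)                    # depth feature of image b
        f_s = (f_a - f_b) ** 2                       # Distance Measurement module
        return self.id_head(f_a), self.id_head(f_b), self.ver_head(f_s)
```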
(2) The network is trained under the joint supervision of identification loss and verification loss. On the one hand, the IDs of the depth features $f_a$ and $f_b$ are identified separately; on the other hand, the distance feature $f_s$ is fed into a binary (same/different) classification network to verify whether the input image pair belongs to the same person. Both the verification loss and the identification losses adopt the Combined Margin-Softmax, whose expression is given in formula (8):

$$L_{CM} = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s(\cos(m_1\theta_{y_i}+m_2)-m_3)}}{e^{s(\cos(m_1\theta_{y_i}+m_2)-m_3)}+\sum_{j=1,\,j\neq y_i}^{n}e^{s\cos\theta_j}} \quad (8)$$
where $m_1$, $m_2$ and $m_3$ are the multiplicative angular margin, the additive angular margin and the cosine margin, respectively. The total loss function of the network is given in formula (9):

$$L_{total} = \alpha L_{iden1} + \beta L_{iden2} + \gamma L_{ver} \quad (9)$$

where $\alpha$, $\beta$ and $\gamma$ are the weights of identification losses 1 and 2 and of the verification loss, respectively.
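A sketch of the combined-margin logit of formula (8) and the total loss of formula (9) might look as follows in PyTorch. The margin values m1, m2, m3 are placeholders, since the actual settings (Tables 2 and 3 below) are images in the original; α = β = 0.5 and γ = 1 follow the detailed description.

```python
import torch
import torch.nn.functional as F

def combined_margin_logits(cosine, labels, s=30.0, m1=1.0, m2=0.3, m3=0.2):
    """Logits of formula (8): s * (cos(m1*theta_yi + m2) - m3) on the target."""
    theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
    target = torch.cos(m1 * theta + m2) - m3
    one_hot = F.one_hot(labels, cosine.size(1)).bool()
    return s * torch.where(one_hot, target, cosine)

def total_loss(logits_a, logits_b, logits_ver, ids_a, ids_b, same,
               alpha=0.5, beta=0.5, gamma=1.0):
    # Formula (9): L_total = alpha*L_iden1 + beta*L_iden2 + gamma*L_ver.
    return (alpha * F.cross_entropy(logits_a, ids_a)
            + beta * F.cross_entropy(logits_b, ids_b)
            + gamma * F.cross_entropy(logits_ver, same))
```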
Drawings
FIG. 1 is the AAM-Softmax recognition model with the spatial-channel joint attention mechanism proposed by the invention
FIG. 2 is a schematic diagram of the spatial-channel joint attention mechanism proposed by the invention
FIG. 3 shows the twin Margin-Softmax network pedestrian re-identification method combining identification loss and verification loss
Detailed Description
The invention will be further described with reference to the accompanying drawings in which:
As shown in FIG. 1, the pedestrian re-identification method with the AAM-Softmax recognition model and the spatial-channel joint attention mechanism can be divided into the following three steps:
(1) Constructing an AAM-Softmax identification model with a space-channel combined attention mechanism;
(2) Training the model constructed in the step (1) by using the training image;
(3) The model is tested to evaluate the performance of the model.
Specifically, in step (1), the AAM-Softmax recognition model with the spatial-channel joint attention mechanism is constructed as shown in FIG. 1. The model consists of a feature embedding module and a fully connected layer with an AAM-Softmax classifier. The feature embedding module includes an ImageNet-pretrained Resnet_50, a global average pooling layer, a batch normalization layer, and the spatial-channel joint attention mechanism, and extracts 2048-dimensional features from the original input image.
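Under the assumptions of the sketches above, the feature embedding module could be assembled as follows; joint_attention is the earlier sketch, and the exact placement of the BN layer relative to the attention branches is an assumption.

```python
import torch.nn as nn
from torchvision import models

class FeatureEmbedding(nn.Module):
    """Feature embedding module: Resnet_50 backbone + joint attention + BN."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights="IMAGENET1K_V1")
        # Drop the final GAP and fc layers to keep the conv feature maps.
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        self.bn = nn.BatchNorm1d(2048)

    def forward(self, x):                  # x: (B, 3, 256, 128)
        feat_map = self.backbone(x)        # (B, 2048, 8, 4)
        v = joint_attention(feat_map)      # spatial/channel attention + GAP
        return self.bn(v)                  # 2048-dimensional embedded feature
```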
In step (2), the training-set images are first resized to 256 × 128 and fed into the model constructed in step (1), and the model is trained using the ID of each input image as its label. The loss function is the AAM-Softmax loss (in the invention, s = 30, m = 0.005), and model training is completed through parameter iteration;
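A minimal training-loop sketch for this step, reusing the hypothetical FeatureEmbedding and AAMSoftmaxLoss sketches above; train_loader, the learning rate and the momentum are placeholders not specified in the patent.

```python
import torch

model = FeatureEmbedding()
criterion = AAMSoftmaxLoss(feat_dim=2048, num_classes=751, s=30.0)
optimizer = torch.optim.SGD(
    list(model.parameters()) + list(criterion.parameters()),
    lr=0.01, momentum=0.9)

for images, ids in train_loader:   # placeholder loader of (256x128 crops, ID labels)
    loss = criterion(model(images), ids)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```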
in the step (3), the model performance trained in the step (2) is tested, the features of the query image set and the image set of the gallery are respectively extracted, the similarity between the query image set and the image set of the gallery is calculated one by one, and the ranking is carried out according to a nearest neighbor searching method. And (3) adopting a Rank-1 score and a mean Average Precision (mAP) as evaluation indexes to evaluate the performance of the model.
To verify the effectiveness of the method, experiments were performed on three pedestrian re-identification datasets: Market-1501, DukeMTMC-reID and CUHK03-NP. Three comparative experiments were run. Comparative experiment (1): images resized to 224 × 224, model without the joint attention mechanism, trained with the Softmax loss. Comparative experiment (2): images resized to 224 × 224, model with the joint attention mechanism, trained with the AAM-Softmax loss. Comparative experiment (3): images resized to 256 × 128, model without the joint attention mechanism, trained with the AAM-Softmax loss. Table 1 evaluates the four experimental settings on the three datasets.
Table 1

[Table 1 is an image in the original: Rank-1 and mAP of the four experimental settings on Market-1501, DukeMTMC-reID and CUHK03-NP.]
As can be seen from Table 1, the proposed AAM-Softmax recognition model with the spatial-channel joint attention mechanism achieves the highest values on both evaluation indices, with a clear improvement. Analyzing the results, AAM-Softmax outperforms standard Softmax because the additive angular margin constraint reduces intra-class feature distances while expanding inter-class distances on the hypersphere, thereby integrating metric learning into classification and enhancing the discriminative power of the features. In addition, the model trained on 256 × 128 images outperforms the one trained on 224 × 224 images, since the former aspect ratio is more natural for pedestrian images. Finally, the model with the joint spatial-channel attention mechanism performs better because the mechanism combines local features with global features to learn more discriminative representations.
FIG. 2 is a schematic diagram of the spatial-channel joint attention mechanism, which comprises a spatial attention part and a channel attention part. Without adding any learning parameters, the joint attention mechanism guides the model to attend to important local information in both space and channels, which helps it learn more discriminative features.
As shown in fig. 3, the pedestrian re-identification method of the twin Margin-Softmax network combining the identification loss and the verification loss can be specifically divided into the following three steps:
(1) Constructing a twin Margin-Softmax network combining the recognition loss and the verification loss;
(2) Training the network constructed in the step (1) by using a training image pair;
(3) The network was tested to evaluate the performance of the method of the present invention.
Specifically, in step (1), the twin Margin-Softmax network combining identification loss and verification loss is constructed as shown in FIG. 3. The constructed twin network has two weight-sharing sub-networks. The recognition model pretrained with the AAM-Softmax loss of FIG. 1 serves as the feature encoder of each sub-network, extracting 2048-dimensional depth features f_a, f_b from the input image pair. On the one hand, the IDs of f_a and f_b are identified separately; on the other hand, the 2048-dimensional distance feature f_s computed from f_a and f_b is fed into the binary (same/different) classification network to verify whether the input image pair belongs to the same person.
In step (2), the training set is fed to the network as image pairs (resized to 256 × 128). The total loss function consists of identification losses 1 and 2 and the verification loss, each using Combined Margin-Softmax (in the invention, α = 0.5, β = 0.5, γ = 1). Tables 2 and 3 give the Combined Margin-Softmax parameter settings for the identification and verification losses, respectively; model training is completed through parameter iteration.
Table 2

[Table 2 is an image in the original: Combined Margin-Softmax parameter settings for the identification losses.]

Table 3

[Table 3 is an image in the original: Combined Margin-Softmax parameter settings for the verification loss.]
In step (3), the network trained in step (2) is tested, with the Rank-1 score and mAP as evaluation indices of model performance.
To verify the effectiveness of the method, experiments were carried out on the Market-1501, DukeMTMC-reID and CUHK03-NP datasets. Table 4 compares the experimental results with other mainstream pedestrian re-identification methods on the three datasets.
Table 4

[Table 4 is an image in the original: comparison with mainstream pedestrian re-identification methods on the three datasets.]
As can be seen from Table 4, compared with several mainstream pedestrian re-identification methods, the disclosed method achieves higher performance on Market-1501, DukeMTMC-reID and CUHK03-NP, which demonstrates its effectiveness.

Claims (4)

1. A pedestrian re-identification method based on a twin Margin-Softmax network with a joint attention mechanism, characterized by comprising the following steps:
step one: construct an AAM-Softmax recognition model with a spatial-channel joint attention mechanism; the constructed model consists of a feature embedding module and a fully connected layer with an AAM-Softmax classifier, wherein the feature embedding module comprises an ImageNet-pretrained Resnet_50, a global average pooling layer, a batch normalization layer and the spatial-channel joint attention mechanism, and is used to extract 2048-dimensional features from the original input image;
step two: train the model constructed in step one with training images: first resize the training-set images to 256 × 128, feed them into the model constructed in step one, and train using the ID of each input image as its label; the loss function adopts the AAM-Softmax loss, and model training is completed through parameter iteration;
step three: use the model trained in step two to construct a twin Margin-Softmax network combining identification loss and verification loss; the two sub-networks of the network share weights, and the recognition model pretrained with the AAM-Softmax loss function serves as the Feature Encoder of each sub-network, extracting depth features $f_a, f_b \in \mathbb{R}^{2048}$ from the input image pair; $f_a$ and $f_b$ are passed through the Distance Measurement module, defined as $f_s = (f_a - f_b)^2$, to obtain the distance feature $f_s \in \mathbb{R}^{2048}$;
Step four: training the network under the joint supervision of recognition loss and verification loss; on the one hand, the depth features f are respectively identified a ,f b To the ID, and on the other hand, to the distance feature f s Inputting the same or different two-classification networks, and verifying whether the input image pair belongs to the ID of the same person; the verification loss and the recognition loss both adopt a Combined Margin Softmax (Combined Margin-Softmax), and the expression of the Combined Margin Softmax is shown as (1):
Figure FDA0003851123760000014
wherein m is 1 ,m 2 And m 3 Respectively representing a multiplicative angle margin, an additional angle margin and a cosine margin, and the total loss function of the network is shown as formula (2):
L total =αL iden1 +βL iden2 +γL ver (2)
wherein α, β, γ represent weights of the verification loss 1, 2 and the recognition loss, respectively;
step five: test the network trained in step four, and evaluate its performance using the Rank-1 score and mAP as evaluation indices.
2. The pedestrian re-identification method based on a twin Margin-Softmax network with a joint attention mechanism according to claim 1, wherein the recognition model constructed in step one incorporates a spatial-channel joint attention mechanism that attends to important local features in space and channels without adding any additional learning parameters.
3. The pedestrian re-identification method based on a twin Margin-Softmax network with a joint attention mechanism according to claim 1, wherein in step one the model is trained with the AAM-Softmax loss, which integrates metric learning into the classifier and thereby improves the discriminative power of the features.
4. The pedestrian re-identification method based on a twin Margin-Softmax network with a joint attention mechanism according to claim 1, wherein the twin network constructed in step three combines identification loss and verification loss (both Combined Margin-Softmax losses) and is trained under the joint supervision of the two losses.
CN201911366078.4A 2019-12-26 2019-12-26 Pedestrian re-identification method based on a twin Margin-Softmax network with a joint attention mechanism Active CN113051962B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911366078.4A CN113051962B (en) 2019-12-26 2019-12-26 Pedestrian re-identification method based on a twin Margin-Softmax network with a joint attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911366078.4A CN113051962B (en) 2019-12-26 2019-12-26 Pedestrian re-identification method based on a twin Margin-Softmax network with a joint attention mechanism

Publications (2)

Publication Number Publication Date
CN113051962A CN113051962A (en) 2021-06-29
CN113051962B true CN113051962B (en) 2022-11-04

Family

ID=76505369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911366078.4A Active CN113051962B (en) 2019-12-26 2019-12-26 Pedestrian re-identification method based on a twin Margin-Softmax network with a joint attention mechanism

Country Status (1)

Country Link
CN (1) CN113051962B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114005078B (en) * 2021-12-31 2022-03-29 山东交通学院 Vehicle re-identification method based on a dual-relation attention mechanism

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109492528A (en) * 2018-09-29 2019-03-19 天津卡达克数据有限公司 Pedestrian re-identification method based on Gaussian and depth features
CN109711232A (en) * 2017-10-26 2019-05-03 北京航天长峰科技工业集团有限公司 Deep-learning pedestrian re-identification method based on multiple objective functions
CN109919073A (en) * 2019-03-01 2019-06-21 中山大学 Pedestrian re-identification method with illumination robustness
CN110009052A (en) * 2019-04-11 2019-07-12 腾讯科技(深圳)有限公司 Image recognition method, and image recognition model training method and device
CN110059616A (en) * 2019-04-17 2019-07-26 南京邮电大学 Pedestrian re-identification model optimization method based on a fused loss function
CN110222792A (en) * 2019-06-20 2019-09-10 杭州电子科技大学 Label defect detection algorithm based on a twin network

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3041649A1 (en) * 2016-10-25 2018-05-03 Deep North, Inc. Point to set similarity comparison and deep feature learning for visual recognition
CN110580460A (en) * 2019-08-28 2019-12-17 西北工业大学 Pedestrian re-identification method based on combined identification and verification of pedestrian identity and attribute characteristics


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Jie Su et al.; "A New Discriminative Feature Learning for Person Re-Identification Using Additive Angular Margin Softmax Loss"; 2019 UK/China Emerging Technologies (UCET); 2019-10-24; pp. 1-4 *
Jie Su et al.; "An enhanced siamese angular softmax network with dual joint-attention for person re-identification"; Applied Intelligence; 2021-02-03; vol. 51, no. 8; pp. 6148-6166 *
Liu Kangning et al.; "Pedestrian re-identification feature representation method based on multi-task learning"; Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition); 2020-08-15; vol. 32, no. 4; pp. 519-527 *
Song Zongtao; "Pedestrian identification method based on multi-block triplet loss"; Video Engineering; 2017-12-31; vol. 41, no. Z4; pp. 203-206 *
Luo Hao et al.; "Research progress on person re-identification based on deep learning"; Acta Automatica Sinica; 2019-01-22; vol. 45, no. 11; pp. 2032-2049 *
Yao Lewei; "Research on deep-learning-based person re-identification algorithms"; China Masters' Theses Full-text Database, Information Science and Technology; 2019-01-15; no. 1; I138-3023 *

Also Published As

Publication number Publication date
CN113051962A (en) 2021-06-29

Similar Documents

Publication Publication Date Title
CN113378632B (en) Pseudo-label optimization-based unsupervised domain adaptive pedestrian re-identification method
CN106326886B (en) Finger vein image quality appraisal procedure based on convolutional neural networks
CN103262118B (en) Attribute value estimation device and property value method of estimation
CN102737633B (en) Method and device for recognizing speaker based on tensor subspace analysis
CN103942568B (en) A kind of sorting technique based on unsupervised feature selection
CN102663447B (en) Cross-media searching method based on discrimination correlation analysis
CN103810252B (en) Image retrieval method based on group sparse feature selection
CN111582178B (en) Vehicle weight recognition method and system based on multi-azimuth information and multi-branch neural network
CN105389326A (en) Image annotation method based on weak matching probability canonical correlation model
CN110135459A (en) A kind of zero sample classification method based on double triple depth measure learning networks
CN106250925B (en) A kind of zero Sample video classification method based on improved canonical correlation analysis
CN106203483A (en) A kind of zero sample image sorting technique of multi-modal mapping method of being correlated with based on semanteme
CN103839033A (en) Face identification method based on fuzzy rule
CN108960142B (en) Pedestrian re-identification method based on global feature loss function
CN106203296B (en) The video actions recognition methods of one attribute auxiliary
CN113361636B (en) Image classification method, system, medium and electronic device
CN111881716A (en) Pedestrian re-identification method based on multi-view-angle generation countermeasure network
CN104616005A (en) Domain-self-adaptive facial expression analysis method
CN114299542A (en) Video pedestrian re-identification method based on multi-scale feature fusion
CN107220598A (en) Iris Texture Classification based on deep learning feature and Fisher Vector encoding models
Li et al. Feature pyramid attention model and multi-label focal loss for pedestrian attribute recognition
CN103714340A (en) Self-adaptation feature extracting method based on image partitioning
CN113051962B (en) Pedestrian re-identification method based on a twin Margin-Softmax network with a joint attention mechanism
CN114972904A (en) Zero sample knowledge distillation method and system based on triple loss resistance
CN103942572A (en) Method and device for extracting facial expression features based on bidirectional compressed data space dimension reduction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant