CN115147774A - Pedestrian re-identification method in degradation environment based on feature alignment - Google Patents
Pedestrian re-identification method in degradation environment based on feature alignment Download PDFInfo
- Publication number
- CN115147774A (application number CN202210792619.5A)
- Authority
- CN
- China
- Prior art keywords
- pedestrian
- module
- network
- feature alignment
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06V20/53 — Recognition of crowd images, e.g. recognition of crowd congestion
- G06N3/084 — Backpropagation, e.g. using gradient descent
- G06N3/088 — Non-supervised learning, e.g. competitive learning
- G06V10/40 — Extraction of image or video features
- G06V10/765 — Classification using rules for classification or partitioning the feature space
- G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V10/82 — Image or video recognition or understanding using neural networks
- G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
Abstract
The invention discloses a pedestrian re-identification method for degraded environments based on feature alignment, comprising the following steps: 1. construct a new neural network model for pedestrian re-identification in degraded environments; 2. process and compute the input data with the model; 3. compute each loss function of the model and combine them into a total loss function; 4. iteratively optimize the model according to the total loss function. The proposed feature alignment module is plug-and-play: it can be combined with an existing pedestrian re-identification model to improve its performance in degraded environments without sacrificing its performance in clean environments, so the model runs efficiently in both normal and degraded environments and achieves high-accuracy pedestrian re-identification.
Description
Technical Field
The invention belongs to the technical field of image processing, and in particular relates to pedestrian re-identification in degraded environments; it provides a pedestrian re-identification algorithm for degraded environments based on a plug-and-play feature alignment module.
Background
Pedestrian re-identification aims at open-set pedestrian retrieval across a network of non-overlapping cameras. In practical applications, however, pedestrian images may be degraded to different degrees by illumination, resolution and weather. For example, surveillance-camera images (i.e. the query set in pedestrian re-identification) usually have low resolution because of device limitations, while the gallery-set images they are matched against usually have higher resolution. As a result, pedestrian re-identification models trained on clean images perform poorly in the degraded environments that are widespread in reality. Moreover, because collecting large-scale labeled degraded images for the many degradation scenarios found in practice is extremely difficult, retraining a supervised pedestrian re-identification model for every degraded environment is not feasible.
Currently there are two main approaches to this dilemma, each with its own drawbacks. (1) Methods based on unsupervised domain adaptation. The premise of this strategy is that a deep neural network can align the marginal distributions of low-quality and high-quality images in the learned feature space; once the gap between those distributions is reduced, the re-identification network will also perform well on low-quality images. Although such methods improve performance in degraded environments, they also change the mapping rules for clean images, thereby harming re-identification performance on clean images, which is undesirable for real-world applications. (2) Pre-processing degraded images with off-the-shelf image restoration or enhancement methods, which leaves performance on clean images untouched and can suppress the negative impact of the degraded environment; for example, low-light image enhancement can improve the visual quality of pedestrian images taken at night. This image pre-processing solution, also called the two-stage approach, adapts to various degradation scenarios by plugging in different restoration modules. However, restoration and enhancement methods aim at a subjectively pleasing visual result and pay little attention to re-identification performance, so the improvement the two-stage approach yields on degraded images is limited.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art and provides a feature-alignment-based pedestrian re-identification method for degraded environments, which improves the model's performance on degraded images as much as possible without sacrificing its re-identification performance on clean images, thereby achieving high-accuracy pedestrian re-identification.
In order to solve the technical problems, the invention adopts the following technical scheme:
The invention relates to a feature-alignment-based pedestrian re-identification method for degraded environments, characterized by comprising the following steps:

Step 1: acquire a pedestrian image data set (X_1, X_2, …, X_i, …, X_N) taken in a normal environment, where X_i denotes the i-th normal pedestrian image and N the total number of images, and a pedestrian image data set (Y_1, Y_2, …, Y_j, …, Y_M) taken in a degraded environment, where Y_j denotes the j-th degraded pedestrian image and M the total number of images.

Step 2: construct a feature-alignment-based deep learning model for pedestrian re-identification in degraded environments, consisting of a pedestrian re-identification model F, two feature alignment modules G_c2d and G_d2c, and two discriminator networks D_c and D_d.

Step 2.1: the pedestrian re-identification model F consists of a backbone network and a classification network, the backbone network being based on ResNet-50. Pre-train F on a pedestrian image data set taken in a normal environment to obtain the pre-trained model F*, and freeze the pre-trained weights.

Step 2.2: the feature alignment modules G_c2d and G_d2c each consist of m residual convolution modules. Each residual convolution module consists, in order, of a convolution layer with kernel size k × k and stride j, a batch normalization layer, and a ReLU activation; the module's input is concatenated with its output to form the final output of the module.

Step 2.3: the discriminator networks D_c and D_d each consist of a feature extraction module and a classification module. The feature extraction module has the same structure as the backbone network and is initialized with the pre-trained weights; the classification module consists, in order, of a global average pooling layer, two fully connected layers, a batch normalization layer, and a Leaky ReLU activation.
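The residual convolution module of step 2.2 concatenates its input with its transformed output rather than adding them, so each module doubles the channel count. A minimal numerical sketch, assuming a channel-mixing linear map as a stand-in for the k × k convolution (all function and variable names here are illustrative, not from the patent):

```python
import numpy as np

def residual_block(x, weight, eps=1e-5):
    """One residual convolution module (sketch).

    Stand-ins: the k x k convolution is replaced by a channel-mixing
    linear map `weight`, batch normalization by per-channel
    standardization.  Per the text, the block's input is *concatenated*
    with (not added to) its output along the channel axis.
    x: (n, c) feature matrix, weight: (c, c).
    """
    h = x @ weight                                            # "convolution"
    h = (h - h.mean(axis=0)) / np.sqrt(h.var(axis=0) + eps)   # batch norm
    h = np.maximum(h, 0.0)                                    # ReLU
    return np.concatenate([x, h], axis=1)                     # splice input with output

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))          # 4 samples, 8 channels
w = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w)
print(y.shape)                           # channel count doubles: (4, 16)
```

Because the skip path is a concatenation, the raw input features survive every module unchanged, which fits the goal of preserving the pedestrian's content information while the transformed path models the degradation.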
Step 3: train the feature-alignment-based deep learning model for pedestrian re-identification in degraded environments.

Step 3.1: input the i-th normal pedestrian image X_i and the j-th degraded pedestrian image Y_j into the backbone network of the pre-trained model F* for feature extraction, obtaining the pedestrian features f_i^c and f_j^d respectively.

Step 3.2: input f_i^c into the alignment module G_c2d to obtain the aligned feature f_i^{c2d}; input f_j^d into G_d2c to obtain the aligned feature f_j^{d2c}.

Input f_i^c and f_j^{d2c} into the discriminator D_c, obtaining the probabilities p_i^c and p_j^{d2c} that each feature comes from a normal environment; input f_j^d and f_i^{c2d} into D_d, obtaining the probabilities p_j^d and p_i^{c2d} that each feature comes from a degraded environment.

Construct the adversarial losses L_adv^X and L_adv^Y of the pedestrian images X_i and Y_j with equations (1) and (2):

L_adv^X = E[log D_c(f_i^c)] + E[log(1 − D_c(G_d2c(f_j^d)))]   (1)
L_adv^Y = E[log D_d(f_j^d)] + E[log(1 − D_d(G_c2d(f_i^c)))]   (2)

where E denotes expectation.

Step 3.3: input the aligned feature f_i^{c2d} into G_d2c to obtain the reconstructed feature of X_i; input the aligned feature f_j^{d2c} into G_c2d to obtain the reconstructed feature of Y_j.

Construct the cycle-consistency losses L_cyc^X and L_cyc^Y of X_i and Y_j with equations (3) and (4):

L_cyc^X = E[||G_d2c(G_c2d(f_i^c)) − f_i^c||_1]   (3)
L_cyc^Y = E[||G_c2d(G_d2c(f_j^d)) − f_j^d||_1]   (4)

Step 3.4: input f_i^c into G_d2c to obtain its individual-retention feature; input f_j^d into G_c2d to obtain its individual-retention feature.

Construct the individual-retention losses L_ide^X and L_ide^Y of X_i and Y_j with equations (5) and (6):

L_ide^X = E[||G_d2c(f_i^c) − f_i^c||_1]   (5)
L_ide^Y = E[||G_c2d(f_j^d) − f_j^d||_1]   (6)

Step 3.5: construct the degradation-residual consistency loss L_res of X_i and Y_j with equation (7):

L_res = E[||(f_j^d − G_d2c(f_j^d)) − (G_c2d(f_i^c) − f_i^c)||_1]   (7)

Step 3.6: build the global loss function L_total with equation (8):

L_total = λ_1(L_adv^X + L_adv^Y) + λ_2(L_cyc^X + L_cyc^Y) + λ_3(L_ide^X + L_ide^Y) + λ_4·L_res   (8)

where λ_1, λ_2, λ_3 and λ_4 are the four hyper-parameters of the global loss function.
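A numerical sketch of how the global loss of step 3.6 can be assembled from the individual terms. The adversarial, cycle-consistency, individual-retention and residual terms below follow the standard CycleGAN-style forms implied by the text; all function names, the stand-in generators/discriminators, and the exact loss expressions are illustrative assumptions, not the patent's literal formulas:

```python
import numpy as np

def l1(a, b):
    """Mean absolute (L1) distance between two feature arrays."""
    return float(np.mean(np.abs(a - b)))

def global_loss(f_c, f_d, G_c2d, G_d2c, D_c, D_d, lam=(1.0, 5.0, 10.0, 1.0)):
    """Assemble L_total from the step-3 loss terms (assumed sketch).

    f_c / f_d: clean and degraded pedestrian features; G_* and D_* are
    callables standing in for the alignment modules and discriminators.
    """
    fc2d, fd2c = G_c2d(f_c), G_d2c(f_d)
    eps = 1e-8  # numerical guard for log
    # adversarial terms: D_c judges real-clean vs aligned-to-clean,
    # D_d judges real-degraded vs aligned-to-degraded
    l_adv = -float(np.mean(np.log(D_c(f_c) + eps) + np.log(1 - D_c(fd2c) + eps)
                           + np.log(D_d(f_d) + eps) + np.log(1 - D_d(fc2d) + eps)))
    l_cyc = l1(G_d2c(fc2d), f_c) + l1(G_c2d(fd2c), f_d)   # reconstruct originals
    l_ide = l1(G_d2c(f_c), f_c) + l1(G_c2d(f_d), f_d)     # individual retention
    l_res = l1(f_d - fd2c, fc2d - f_c)                    # residual consistency
    l1w, l2w, l3w, l4w = lam
    return l1w * l_adv + l2w * l_cyc + l3w * l_ide + l4w * l_res

# toy check: identity generators and a constant 0.5 discriminator leave
# only the adversarial term (4 * ln 2)
f = np.ones((2, 4))
ident = lambda x: x
half = lambda x: np.full(len(x), 0.5)
print(round(global_loss(f, f, ident, ident, half, half), 3))  # ≈ 2.773
```

With identity generators the cycle, retention and residual terms all vanish, which illustrates why those losses act as regularizers pulling the alignment modules toward content-preserving maps.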
Step 3.7: optimize the two feature alignment modules G_c2d and G_d2c and the two discriminator networks D_c and D_d by stochastic gradient descent: compute the global loss function L_total, then back-propagate its gradients until L_total converges, obtaining the trained alignment modules G*_c2d and G*_d2c and the trained discriminators D*_c and D*_d.

Step 4: connect the trained alignment module G*_d2c to the pre-trained pedestrian re-identification model F* to obtain the final pedestrian re-identification model for identifying pedestrian images in degraded environments.
The invention also relates to an electronic device comprising a memory and a processor, characterized in that the memory stores a program supporting the processor in executing the above feature-alignment-based pedestrian re-identification method for degraded environments, and the processor is configured to execute the program stored in the memory.
The invention also relates to a computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, performs the steps of the above feature-alignment-based pedestrian re-identification method for degraded environments.
Compared with the prior art, the invention has the beneficial effects that:
1. The invention provides a new structural paradigm for the pedestrian re-identification task in degraded environments, in which the feature alignment module is plug-and-play. This structure is highly flexible: the module can be combined with any existing pedestrian re-identification model, has few network parameters, and keeps the re-identification network running efficiently after it is inserted.
2. According to the method, a feature alignment module is inserted into the network, and an unsupervised confrontation training strategy is used to learn feature alignment between the degraded image and the clean image under the guidance of the pedestrian re-identification model trained by the clean image.
Drawings
FIG. 1 is a flow chart of a pedestrian re-identification algorithm in a degraded environment based on feature alignment of the present invention;
FIG. 2 is a graph comparing the pedestrian re-identification performance of the invention with several best-performing two-stage methods in a fog degradation environment;

FIG. 3 is a graph comparing the pedestrian re-identification performance of the invention with several best-performing two-stage methods in a low-light degradation environment;

FIG. 4 is a graph comparing the pedestrian re-identification performance of the invention with several best-performing two-stage methods in a mixed degradation environment.
Detailed Description
In this embodiment, the idea of image pre-processing is applied in feature space: degraded features are aligned to clean features in an unsupervised and self-supervised manner to suppress the influence of image degradation, and a plug-and-play feature alignment module is provided to improve pedestrian re-identification performance in degraded environments. Specifically, as shown in FIG. 1, the method comprises the following steps:
Step 2: construct a feature-alignment-based deep learning model for pedestrian re-identification in degraded environments, consisting of a pedestrian re-identification model F, two feature alignment modules G_c2d and G_d2c, and two discriminator networks D_c and D_d.

Step 2.1: the pedestrian re-identification model F consists of a backbone network and a classification network, the backbone network being based on ResNet-50. Pre-train F on a pedestrian image data set taken in a normal environment to obtain the pre-trained model F*, and freeze the pre-trained weights.

Step 2.2: the feature alignment modules G_c2d and G_d2c each consist of m residual convolution modules. Each residual convolution module consists, in order, of a convolution layer with kernel size k × k and stride j, a batch normalization layer, and a ReLU activation; the module's input is concatenated with its output to form the final output of the module.

Step 2.3: the discriminator networks D_c and D_d each consist of a feature extraction module and a classification module. The feature extraction module has the same structure as the backbone network and is initialized with the pre-trained weights; because the discriminator shares the structure and trained parameters of the backbone, it assesses features from the perspective of pedestrian re-identification and thus pays more attention to the re-identification task. The classification module consists, in order, of a global average pooling layer, two fully connected layers, a batch normalization layer, and a Leaky ReLU activation.
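The classification module of each discriminator described in step 2.3 can be sketched numerically as follows. The layer ordering after the first fully connected layer, the Leaky ReLU slope, and the final sigmoid readout are assumptions for illustration (the patent lists the components but not every detail):

```python
import numpy as np

def leaky_relu(x, slope=0.1):
    """Leaky ReLU activation (slope is an assumed value)."""
    return np.where(x > 0, x, slope * x)

def discriminator_head(fmap, w1, w2, eps=1e-5):
    """Classification head of D_c / D_d (sketch, assumed ordering).

    fmap: (n, c, h, w) feature maps from the frozen ResNet-50 trunk.
    Global average pooling -> FC -> batch norm -> Leaky ReLU -> FC ->
    sigmoid probability that a feature comes from the clean
    (resp. degraded) domain.  Weights w1: (c, m), w2: (m, 1).
    """
    v = fmap.mean(axis=(2, 3))                                # GAP: (n, c)
    h = v @ w1                                                # first FC
    h = (h - h.mean(axis=0)) / np.sqrt(h.var(axis=0) + eps)   # batch norm
    h = leaky_relu(h)
    logit = h @ w2                                            # second FC
    return 1.0 / (1.0 + np.exp(-logit.ravel()))               # probability

rng = np.random.default_rng(1)
fmap = rng.standard_normal((4, 16, 3, 3))
p = discriminator_head(fmap, rng.standard_normal((16, 8)), rng.standard_normal((8, 1)))
print(p.shape)   # one probability per input feature map: (4,)
```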
Step 3: train the feature-alignment-based deep learning model for pedestrian re-identification in degraded environments.

Step 3.1: input the i-th normal pedestrian image X_i and the j-th degraded pedestrian image Y_j into the backbone network of the pre-trained model F* for feature extraction, obtaining the pedestrian features f_i^c and f_j^d respectively.

Step 3.2: input f_i^c into the alignment module G_c2d to obtain the aligned feature f_i^{c2d}; input f_j^d into G_d2c to obtain the aligned feature f_j^{d2c}.

Input f_i^c and f_j^{d2c} into the discriminator D_c, obtaining the probabilities p_i^c and p_j^{d2c} that these features were extracted from pedestrian images taken in a normal environment; input f_j^d and f_i^{c2d} into D_d, obtaining the probabilities p_j^d and p_i^{c2d} that these features were extracted from pedestrian images taken in a degraded environment.

Construct the adversarial losses L_adv^X and L_adv^Y of X_i and Y_j with equations (1) and (2):

L_adv^X = E[log D_c(f_i^c)] + E[log(1 − D_c(G_d2c(f_j^d)))]   (1)
L_adv^Y = E[log D_d(f_j^d)] + E[log(1 − D_d(G_c2d(f_i^c)))]   (2)

where E denotes expectation. Through adversarial training, these losses drive the alignment of clean features to degraded features and of degraded features to clean features, making the aligned features more similar to real features.

Step 3.3: input the aligned feature f_i^{c2d} into G_d2c to obtain the reconstructed feature of X_i; input the aligned feature f_j^{d2c} into G_c2d to obtain the reconstructed feature of Y_j.

Construct the cycle-consistency losses L_cyc^X and L_cyc^Y of X_i and Y_j with equations (3) and (4):

L_cyc^X = E[||G_d2c(G_c2d(f_i^c)) − f_i^c||_1]   (3)
L_cyc^Y = E[||G_c2d(G_d2c(f_j^d)) − f_j^d||_1]   (4)

The cycle-consistency losses require that an aligned feature, passed back through the other alignment module, reconstructs the original pedestrian feature, thereby ensuring the consistency of the content information.

Step 3.4: input f_i^c into G_d2c to obtain its individual-retention feature; input f_j^d into G_c2d to obtain its individual-retention feature.

Construct the individual-retention losses L_ide^X and L_ide^Y of X_i and Y_j with equations (5) and (6):

L_ide^X = E[||G_d2c(f_i^c) − f_i^c||_1]   (5)
L_ide^Y = E[||G_c2d(f_j^d) − f_j^d||_1]   (6)

The individual-retention losses encourage the alignment modules G_d2c and G_c2d to focus on the degradation information in the features, further protecting the content information they carry.

Step 3.5: construct the degradation-residual consistency loss L_res of X_i and Y_j with equation (7):

L_res = E[||(f_j^d − G_d2c(f_j^d)) − (G_c2d(f_i^c) − f_i^c)||_1]   (7)

Because the network is trained in an unsupervised manner, the degradation-residual consistency loss imposes a stronger constraint on the network to ensure the stability of training.

Step 3.6: build the global loss function L_total with equation (8):

L_total = λ_1(L_adv^X + L_adv^Y) + λ_2(L_cyc^X + L_cyc^Y) + λ_3(L_ide^X + L_ide^Y) + λ_4·L_res   (8)

where λ_1, λ_2, λ_3 and λ_4 are the hyper-parameters of the global loss function; in this embodiment they are fixed to λ_1 = 1, λ_2 = 5, λ_3 = 10 and λ_4 = 1.
Step 3.7: optimize the two feature alignment modules G_c2d and G_d2c and the two discriminator networks D_c and D_d by stochastic gradient descent: compute the global loss function L_total, then back-propagate its gradients until L_total converges, obtaining the trained alignment modules G*_c2d and G*_d2c and the trained discriminators D*_c and D*_d.
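The optimization of step 3.7 updates only the alignment modules and discriminators while the pre-trained backbone stays frozen. A toy sketch of the gradient-descent loop, with numerical gradients standing in for backpropagation and a simple quadratic standing in for L_total (both are illustrative assumptions):

```python
import numpy as np

def sgd_minimize(loss, theta, lr=0.1, steps=200, h=1e-5):
    """Minimal gradient-descent loop of the kind used in step 3.7
    (sketch): only the alignment/discriminator parameters `theta` are
    updated; the pre-trained re-id backbone is frozen and untouched.
    Gradients are estimated by central differences instead of backprop.
    """
    theta = np.asarray(theta, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(theta)
        for k in range(theta.size):
            e = np.zeros_like(theta)
            e[k] = h
            grad[k] = (loss(theta + e) - loss(theta - e)) / (2 * h)
        theta = theta - lr * grad        # descend along the gradient
    return theta

# toy surrogate for L_total with minimizer at (1, -2)
loss = lambda t: (t[0] - 1.0) ** 2 + (t[1] + 2.0) ** 2
theta = sgd_minimize(loss, [0.0, 0.0])
print(np.round(theta, 3))   # converges to approximately [ 1. -2.]
```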
Step 4: connect the trained alignment module G*_d2c to the pre-trained pedestrian re-identification model F* to obtain the final pedestrian re-identification model for identifying pedestrian images in degraded environments.
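At inference time, the trained alignment module sits after the frozen backbone so a degraded query feature is mapped into the clean feature space before matching. A sketch with toy features, a hypothetical degradation offset, and Euclidean-distance matching (the matching metric and all names are illustrative assumptions):

```python
import numpy as np

def reid_match(query_feat, gallery_feats, G_d2c):
    """Inference with the final model (sketch): the trained alignment
    module G_d2c maps the degraded query feature into the clean feature
    space, then the nearest gallery feature is returned as the rank-1
    match.  `G_d2c` is any callable.
    """
    q = G_d2c(query_feat)                            # align degraded -> clean
    d = np.linalg.norm(gallery_feats - q, axis=1)    # distances to gallery
    return int(np.argmin(d))                         # rank-1 match index

# toy example: "degradation" adds a constant offset the module removes
gallery = np.eye(3)                     # three clean identity features
degrade = lambda f: f + 0.5
align = lambda f: f - 0.5               # hypothetical trained G_d2c
query = degrade(gallery[2])
print(reid_match(query, gallery, align))   # retrieves identity 2
```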
In this embodiment, an electronic device includes a memory for storing a program for supporting a processor to execute a pedestrian re-recognition method in a degraded environment based on feature alignment, and a processor configured to execute the program stored in the memory.
In this embodiment, a computer-readable storage medium has stored thereon a computer program which, when executed by a processor, executes the steps of the pedestrian re-identification method in a degraded environment based on feature alignment.
In order to quantitatively evaluate the effect of the invention and verify its effectiveness, the method is compared with several best-performing two-stage methods in fog degradation, low-light degradation and mixed degradation environments; three performance indices are selected for evaluation: CMC-k (cumulative matching characteristics, a.k.a. rank-k matching accuracy), mAP and mINP.
FIG. 2 compares the pedestrian re-identification performance of the invention with several best-performing two-stage methods in a fog degradation environment; FIG. 3 makes the same comparison in a low-light degradation environment; and FIG. 4 in a mixed degradation environment. The results clearly show that the method achieves the best pedestrian re-identification performance in all three degradation environments, with a large improvement over the two-stage methods.
Claims (3)
1. A feature-alignment-based pedestrian re-identification method for degraded environments, characterized by comprising the following steps:

Step 1: acquire a pedestrian image data set (X_1, X_2, …, X_i, …, X_N) taken in a normal environment, where X_i denotes the i-th normal pedestrian image and N the total number of images; acquire a pedestrian image data set (Y_1, Y_2, …, Y_j, …, Y_M) taken in a degraded environment, where Y_j denotes the j-th degraded pedestrian image and M the total number of images;

Step 2: construct a feature-alignment-based deep learning model for pedestrian re-identification in degraded environments, consisting of a pedestrian re-identification model F, two feature alignment modules G_c2d and G_d2c, and two discriminator networks D_c and D_d;

Step 2.1: the pedestrian re-identification model F consists of a backbone network and a classification network, the backbone network being based on ResNet-50; pre-train F on a pedestrian image data set taken in a normal environment to obtain the pre-trained model F*, and freeze the pre-trained weights;

Step 2.2: the feature alignment modules G_c2d and G_d2c each consist of m residual convolution modules; each residual convolution module consists, in order, of a convolution layer with kernel size k × k and stride j, a batch normalization layer, and a ReLU activation, and the module's input is concatenated with its output to form the final output of the module;

Step 2.3: the discriminator networks D_c and D_d each consist of a feature extraction module and a classification module; the feature extraction module has the same structure as the backbone network and is initialized with the pre-trained weights, and the classification module consists, in order, of a global average pooling layer, two fully connected layers, a batch normalization layer, and a Leaky ReLU activation;
Step 3: train the feature-alignment-based deep learning model for pedestrian re-identification in degraded environments;

Step 3.1: input the i-th normal pedestrian image X_i and the j-th degraded pedestrian image Y_j into the backbone network of the pre-trained model F* for feature extraction, obtaining the pedestrian features f_i^c and f_j^d respectively;

Step 3.2: input f_i^c into the alignment module G_c2d to obtain the aligned feature f_i^{c2d}; input f_j^d into G_d2c to obtain the aligned feature f_j^{d2c};

input f_i^c and f_j^{d2c} into the discriminator D_c, obtaining the probabilities p_i^c and p_j^{d2c} that each feature comes from a normal environment; input f_j^d and f_i^{c2d} into D_d, obtaining the probabilities p_j^d and p_i^{c2d} that each feature comes from a degraded environment;

construct the adversarial losses L_adv^X and L_adv^Y of X_i and Y_j with equations (1) and (2):

L_adv^X = E[log D_c(f_i^c)] + E[log(1 − D_c(G_d2c(f_j^d)))]   (1)
L_adv^Y = E[log D_d(f_j^d)] + E[log(1 − D_d(G_c2d(f_i^c)))]   (2)

where E denotes expectation;

Step 3.3: input the aligned feature f_i^{c2d} into G_d2c to obtain the reconstructed feature of X_i; input the aligned feature f_j^{d2c} into G_c2d to obtain the reconstructed feature of Y_j;

construct the cycle-consistency losses L_cyc^X and L_cyc^Y of X_i and Y_j with equations (3) and (4):

L_cyc^X = E[||G_d2c(G_c2d(f_i^c)) − f_i^c||_1]   (3)
L_cyc^Y = E[||G_c2d(G_d2c(f_j^d)) − f_j^d||_1]   (4)

Step 3.4: input f_i^c into G_d2c to obtain its individual-retention feature; input f_j^d into G_c2d to obtain its individual-retention feature;

construct the individual-retention losses L_ide^X and L_ide^Y of X_i and Y_j with equations (5) and (6):

L_ide^X = E[||G_d2c(f_i^c) − f_i^c||_1]   (5)
L_ide^Y = E[||G_c2d(f_j^d) − f_j^d||_1]   (6)

Step 3.5: construct the degradation-residual consistency loss L_res of X_i and Y_j with equation (7):

L_res = E[||(f_j^d − G_d2c(f_j^d)) − (G_c2d(f_i^c) − f_i^c)||_1]   (7)

Step 3.6: build the global loss function L_total with equation (8):

L_total = λ_1(L_adv^X + L_adv^Y) + λ_2(L_cyc^X + L_cyc^Y) + λ_3(L_ide^X + L_ide^Y) + λ_4·L_res   (8)

where λ_1, λ_2, λ_3 and λ_4 are the four hyper-parameters of the global loss function;

Step 3.7: optimize the two feature alignment modules G_c2d and G_d2c and the two discriminator networks D_c and D_d by stochastic gradient descent: compute the global loss function L_total, then back-propagate its gradients until L_total converges, obtaining the trained alignment modules G*_c2d and G*_d2c and the trained discriminators D*_c and D*_d.
2. An electronic device comprising a memory and a processor, wherein the memory is configured to store a program that enables the processor to perform the method of claim 1, and wherein the processor is configured to execute the program stored in the memory.
3. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as claimed in claim 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210792619.5A CN115147774B (en) | 2022-07-05 | 2022-07-05 | Pedestrian re-identification method based on characteristic alignment in degradation environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115147774A | 2022-10-04 |
CN115147774B | 2024-04-02 |
Family
ID=83413157
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115147774B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111126360A (en) * | 2019-11-15 | 2020-05-08 | 西安电子科技大学 | Cross-domain pedestrian re-identification method based on unsupervised combined multi-loss model |
CN111783736A (en) * | 2020-07-23 | 2020-10-16 | 上海高重信息科技有限公司 | Pedestrian re-identification method, device and system based on human body semantic alignment |
CN113408492A (en) * | 2021-07-23 | 2021-09-17 | 四川大学 | Pedestrian re-identification method based on global-local feature dynamic alignment |
WO2021203801A1 (en) * | 2020-04-08 | 2021-10-14 | 苏州浪潮智能科技有限公司 | Person re-identification method and apparatus, electronic device, and storage medium |
CN114627496A (en) * | 2022-03-01 | 2022-06-14 | 中国科学技术大学 | Robust pedestrian re-identification method based on depolarization batch normalization of Gaussian process |
Non-Patent Citations (1)
Title |
---|
XIONG Wei; XIONG Zijie; YANG Dichun; TONG Lei; LIU Min; ZENG Chunyan: "Pedestrian re-identification method based on deep feature fusion", Computer Engineering and Science, no. 02, 15 February 2020 (2020-02-15) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112861720B (en) | Remote sensing image small sample target detection method based on prototype convolutional neural network | |
CN110163110B (en) | Pedestrian re-recognition method based on transfer learning and depth feature fusion | |
CN108230278B (en) | Image raindrop removing method based on generation countermeasure network | |
Thai et al. | Image classification using support vector machine and artificial neural network | |
CN110516095B (en) | Semantic migration-based weak supervision deep hash social image retrieval method and system | |
CN111460980B (en) | Multi-scale detection method for small-target pedestrian based on multi-semantic feature fusion | |
CN112215119B (en) | Small target identification method, device and medium based on super-resolution reconstruction | |
CN110837846A (en) | Image recognition model construction method, image recognition method and device | |
CN112862690B (en) | Transformers-based low-resolution image super-resolution method and system | |
CN113269224B (en) | Scene image classification method, system and storage medium | |
CN109146944A (en) | Depth estimation method based on deep convolutional neural network | |
CN110555461A (en) | scene classification method and system based on multi-structure convolutional neural network feature fusion | |
CN113065516B (en) | Sample separation-based unsupervised pedestrian re-identification system and method | |
CN112634171A (en) | Image defogging method based on Bayes convolutional neural network and storage medium | |
CN111126155B (en) | Pedestrian re-identification method for generating countermeasure network based on semantic constraint | |
CN113205103A (en) | Lightweight tattoo detection method | |
Zhou et al. | MSAR‐DefogNet: Lightweight cloud removal network for high resolution remote sensing images based on multi scale convolution | |
CN113283320B (en) | Pedestrian re-identification method based on channel feature aggregation | |
CN111191704A (en) | Foundation cloud classification method based on task graph convolutional network | |
CN117197451A (en) | Remote sensing image semantic segmentation method and device based on domain self-adaption | |
CN115147774A (en) | Pedestrian re-identification method in degradation environment based on feature alignment | |
CN116503896A (en) | Fish image classification method, device and equipment | |
CN115861997A (en) | License plate detection and identification method for guiding knowledge distillation by key foreground features | |
CN115830401A (en) | Small sample image classification method | |
CN113673629A (en) | Open set domain adaptive remote sensing image small sample classification method based on multi-graph convolution network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||