CN113723232A - Vehicle re-identification method based on channel cooperative attention - Google Patents

Vehicle re-identification method based on channel cooperative attention

Info

Publication number
CN113723232A
CN113723232A
Authority
CN
China
Prior art keywords
channel, identification, weight, image, calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110940766.8A
Other languages
Chinese (zh)
Inventor
Wang Yuefeng (王越峰)
Wei Ying (魏颖)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaoxing Beida Information Technology Innovation Center
Original Assignee
Shaoxing Beida Information Technology Innovation Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaoxing Beida Information Technology Innovation Center
Priority to CN202110940766.8A
Publication of CN113723232A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a vehicle re-identification method based on channel cooperative attention, which comprises the following steps: constructing channel joint features, calculating channel covariance vectors, constructing a discriminative region detector, establishing a deep learning feature extraction model regularized by the weight density of discriminative regions, inputting a picture into the model to extract re-identification feature vectors, calculating the cosine distance between the re-identification feature vector of the query image and those of the candidate images, and ranking the feature distances to obtain the final re-identification matching result. The method effectively alleviates the mismatching caused by low intra-class similarity and high inter-class similarity in vehicle re-identification, requires no extra local annotation, is easier to deploy and apply, and has positive significance for smart city construction and intelligent security management.

Description

Vehicle re-identification method based on channel cooperative attention
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a vehicle re-identification method based on channel cooperative attention.
Background
Vehicle re-identification is a challenging and meaningful problem in intelligent transportation tasks. Most current vehicle re-identification systems are built on convolutional neural network technology; a basic convolutional neural network extracts overall vehicle appearance features well but cannot reach high re-identification accuracy. The main causes of recognition errors are as follows. First, too many vehicles have similar appearances, so overall features alone cannot distinguish them effectively; the discriminative detail features are at a much smaller scale than the overall appearance, and the network hardly gives them enough attention. Second, changes in vehicle pose alter the appearance drastically, so for pictures of the same vehicle at different angles the network cannot produce a high similarity score, and the result is often misled by pictures of different vehicles at the same angle.
Faced with these problems, researchers analyze the subtle similarities and differences between vehicles by detecting local detail regions of the vehicles. An accurate and stable local region detection algorithm helps re-identification locate the local details of a vehicle and supports further appearance feature extraction and analysis. Usually a separate detection model is added to detect the target local regions; this process requires a large amount of local region annotation and adds the detection model to the original model, which increases the parameter scale and computation cost of the model.
Summary of the invention:
The invention aims to solve the technical problem that existing detection models have a large parameter scale and computation cost.
The invention provides a vehicle re-identification method based on channel cooperative attention, which comprises the following steps:
s1, constructing a learning sample according to the vehicle model, inputting the images in the learning sample into a pre-trained neural network model, expanding the convolution characteristics of the channels in each image into characteristic vectors to generate channel characteristics, and then connecting the channel characteristics in series in the channel direction to obtain channel joint characteristics;
s2, calculating covariance vectors between the channel joint characteristics of the current channel and the channel joint characteristics of other channels in the convolution layer;
s3, clustering the channel covariance vectors, grouping the channels according to the clustering result, and performing mask generation and weight calculation on each group to respectively obtain an identification area weight map and an identification area detector;
s4, defining a rectangular subregion in the discrimination region weight map, calculating the weight density of the rectangular subregion according to the discrimination region detector, and obtaining the rectangular region with the strongest discrimination according to the weight density;
s5, establishing a deep learning feature extraction model, dividing the image in S1 into m images, applying a rectangular region with the strongest identification force in S4 to each image to obtain m identification pictures, coding the image in S1 and the m identification pictures through the model to obtain m +1 feature vectors, and connecting the feature vectors in series to obtain candidate image re-identification feature vectors, wherein m is more than or equal to 2;
and S6, inputting the image to be detected into the deep learning feature extraction model to obtain a query image re-identification feature vector, calculating the distance between the query image re-identification feature vector and the candidate image re-identification feature vector, and sequencing the distances to obtain a re-identification matching result.
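Step S6 is a nearest-neighbour ranking under a distance between feature vectors; the embodiment below uses the cosine distance. A minimal sketch of that ranking step (the function names are illustrative, not from the patent):

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two feature vectors (1 - cosine similarity)."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def rank_candidates(query_feat, gallery_feats):
    """Return gallery indices sorted by ascending cosine distance to the query."""
    dists = [cosine_distance(query_feat, g) for g in gallery_feats]
    return np.argsort(dists)

# toy example: the first gallery vector points the same way as the query
query = np.array([1.0, 0.0, 1.0])
gallery = [np.array([2.0, 0.0, 2.0]),   # identical direction, distance near 0
           np.array([0.0, 1.0, 0.0])]   # orthogonal, distance 1
order = rank_candidates(query, gallery)
print(order)  # [0 1]
```

The best match is simply the candidate at `order[0]`; the full sorted list is the re-identification matching result.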
Preferably, in step S1, the pre-trained neural network model is a RESNET50 model pre-trained on the Veri776 database.
Preferably, in step S2, the covariance vector is calculated as:
COV_unionf_i = {COV_unionf_ij}, j ∈ {1, 2, 3, ..., 512};
where COV_unionf_ij = E(unionf_i · unionf_j) - E(unionf_i)E(unionf_j), E(·) is the expectation function, and unionf_i and unionf_j are channel joint features.
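As an illustration, the covariance vector of channel i against all channels can be computed at once with NumPy (toy sizes here; the patent uses 512 channels of 47,040 dimensions):

```python
import numpy as np

def channel_covariance_vector(union_feats, i):
    """Covariance of channel i's joint feature with every channel's joint feature.

    union_feats: (C, D) array -- C channel joint features of dimension D.
    Returns a length-C vector with cov_j = E[f_i * f_j] - E[f_i] * E[f_j].
    """
    f_i = union_feats[i]
    # E[f_i * f_j] for every j at once, then subtract the product of means
    return (union_feats * f_i).mean(axis=1) - union_feats.mean(axis=1) * f_i.mean()

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 100))   # 8 channels, 100-dim joint features (toy scale)
cov = channel_covariance_vector(feats, 0)
print(cov.shape)                        # (8,)
# the i-th entry is just the (biased) variance of channel i itself
```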
Preferably, the method of step S3 is:
A, calculating the pairwise similarity between the channel joint features to form a similarity matrix W = {s_ij}, i = 1...n, j = 1...n [the similarity formula is given as an image in the original and is not reproduced];
B, calculating the sum of each row of the similarity matrix W and constructing a diagonal matrix D with these sums on the diagonal;
C, computing the Laplacian matrix L = D - W, computing the eigenvalues of L, sorting them in ascending order, computing the eigenvectors of the first m eigenvalues, stacking the m column vectors into a matrix U, and taking the i-th row of U as the sample vector y_i;
D, clustering the new sample points into C classes with the k-means algorithm to obtain clusters, the cluster of y_i giving the class of the corresponding channel;
and E, obtaining the channel groups: given a single input image I, obtaining the convolution F_conv by forward propagation, grouping F_conv according to ChannelGroup_i, generating a discriminative region mask for each group to obtain a discriminative weight map, and performing weight calculation on all the groups in ChannelGroup_i to obtain a set of masks, which serve as the discriminative region detector.
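Steps A through D amount to standard spectral clustering. A hedged sketch follows; the Gaussian similarity kernel and the plain Lloyd's k-means are assumptions, since the patent gives the similarity formula only as an image:

```python
import numpy as np

def spectral_channel_groups(cov_vectors, m_eig, n_clusters):
    """Spectral clustering of channel covariance vectors (steps A-D, sketched)."""
    # A: pairwise similarity matrix W (Gaussian kernel -- an assumption)
    diff = cov_vectors[:, None, :] - cov_vectors[None, :, :]
    w = np.exp(-np.sum(diff ** 2, axis=2) / 2.0)
    # B: diagonal degree matrix D from the row sums of W
    d = np.diag(w.sum(axis=1))
    # C: Laplacian L = D - W; eigh returns eigenvalues in ascending order
    _, vecs = np.linalg.eigh(d - w)
    u = vecs[:, :m_eig]                    # rows y_i are the new sample points
    # D: plain Lloyd's k-means on the rows of U (deterministic init)
    centers = u[np.linspace(0, len(u) - 1, n_clusters, dtype=int)].copy()
    for _ in range(20):
        labels = ((u[:, None, :] - centers[None]) ** 2).sum(axis=2).argmin(axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = u[labels == k].mean(axis=0)
    return labels

# toy data: two well-separated blobs of "channel covariance vectors"
rng = np.random.default_rng(1)
x = np.vstack([rng.normal(0, 0.1, (6, 4)), rng.normal(5, 0.1, (6, 4))])
labels = spectral_channel_groups(x, m_eig=2, n_clusters=2)
print(labels[:6], labels[6:])   # each blob lands in its own group
```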
Preferably, in step S4, the rectangular sub-region area is at most one half of the area of the discriminative region weight map; the weight density is calculated as:

ρ = Σ_{(x,y)∈R} mask(x, y) / N(x, y);

where N(x, y) represents the number of spatial positions within the rectangular sub-region R (the area of the local region) and mask represents the discriminative region detector.
Preferably, in step S4, the most discriminative rectangular region is computed by: initializing all parameters to 0, iterating over all rectangular sub-regions while calculating the weight density within each, and keeping the region with the maximum weight density as the most discriminative rectangular region.
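A brute-force sketch of this search under the stated area constraint; the integral image that makes each region sum O(1) is my optimization, not the patent's:

```python
import numpy as np

def strongest_rectangle(weight_map, max_area_frac=0.5):
    """Exhaustively search rectangular sub-regions and keep the one with the
    highest weight density rho = sum(mask in region) / area, subject to the
    region area being at most half the map (a sketch of step S4)."""
    h, w = weight_map.shape
    best_rho, best_box = 0.0, None
    # integral image: integral[y, x] = sum of weight_map[:y, :x]
    integral = np.pad(weight_map.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    for y0 in range(h):
        for x0 in range(w):
            for y1 in range(y0 + 1, h + 1):
                for x1 in range(x0 + 1, w + 1):
                    area = (y1 - y0) * (x1 - x0)
                    if area > max_area_frac * h * w:
                        continue
                    s = (integral[y1, x1] - integral[y0, x1]
                         - integral[y1, x0] + integral[y0, x0])
                    rho = s / area
                    if rho > best_rho:
                        best_rho, best_box = rho, (y0, x0, y1, x1)
    return best_rho, best_box

wm = np.zeros((8, 8))
wm[2:4, 5:7] = 1.0                 # a small high-weight patch
rho, box = strongest_rectangle(wm)
print(rho, box)                    # the best density is 1.0, inside the patch
```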
Preferably, in step S5, m = 2, the image is divided into two equal parts, and the discriminative pictures comprise an upper discriminative partial picture and a lower discriminative partial picture.
Preferably, in step S5, the deep learning feature extraction model comprises a main network, an upper-half feature extraction network, and a lower-half feature extraction network; the main network encodes the image from S1, the upper-half feature extraction network encodes the upper discriminative partial picture, and the lower-half feature extraction network encodes the lower discriminative partial picture.
Preferably, the main network loss function is a cross-entropy loss function, while the upper-half and lower-half feature extraction networks use a weighted joint loss function composed of a cross-entropy loss function and a hard-sample loss function.
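A rough illustration of such a weighted joint loss; the batch-hard triplet form of the hard-sample loss and the weights alpha and beta are assumptions, since the patent does not specify the exact form:

```python
import numpy as np

def cross_entropy(logits, label):
    """Softmax cross entropy for a single sample (numerically stabilized)."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def batch_hard_triplet(feats, labels, margin=0.3):
    """Hard-sample (batch-hard) triplet loss: for each anchor, take the
    farthest positive and the nearest negative in the batch."""
    d = np.linalg.norm(feats[:, None] - feats[None], axis=2)
    losses = []
    for i, y in enumerate(labels):
        pos = d[i][(labels == y) & (np.arange(len(labels)) != i)]
        neg = d[i][labels != y]
        losses.append(max(0.0, margin + pos.max() - neg.min()))
    return float(np.mean(losses))

def weighted_joint_loss(logits, feats, labels, alpha=1.0, beta=1.0):
    ce = np.mean([cross_entropy(l, y) for l, y in zip(logits, labels)])
    return alpha * ce + beta * batch_hard_triplet(feats, labels)

# toy batch: two identities, well separated in feature space
labels = np.array([0, 0, 1, 1])
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.1, 0.0]])
logits = np.array([[4.0, 0.0], [4.0, 0.0], [0.0, 4.0], [0.0, 4.0]])
print(weighted_joint_loss(logits, feats, labels))  # small: triplet term is 0 here
```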
Compared with the prior art, the invention has the following advantages and effects:
1. The invention uses the correlation of a single channel's response with all other channels' responses as that channel's attention attribute; when the attention attributes of all channels are clustered, channels belonging to the same cluster tend to attend to the same type of local information.
2. The invention regularizes local sub-regions with an area constraint and applies the detector to obtain the local discriminative regions of the upper and lower parts of the vehicle; the two local regions and the global image are fed into three network branches for joint training, yielding vehicle re-identification features that are more discriminative on local information.
3. The method effectively alleviates the mismatching caused by low intra-class similarity and high inter-class similarity in vehicle re-identification, requires no extra local annotation, is easier to deploy and apply, and has positive significance for smart city construction and intelligent security management.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention.
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flow chart of the generation of the discriminative region detector according to the present invention;
FIG. 3 is a flow chart of generating the most discriminative rectangular region according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1: the invention discloses a vehicle re-identification method based on channel cooperative attention, which comprises two stages: a model training stage and a task inference stage.
A model training stage:
(1) constructing a channel joint characteristic;
1.1, Construction of the discriminative-region learning sample set: first, a salient learning sample set is constructed by sampling four vehicle types (car, truck, bus, and SUV); for each type, 5 images are sampled at each of three angles (front, rear, and side), giving 60 images in total. Meanwhile, a RESNET50 model pre-trained on the Veri776 database serves as the pre-trained model for constructing the discriminative region detector.
1.2, Generating channel features: the images in the sample set are resampled to a uniform size of 128 x 256, and the 60 sample images are input into the pre-trained neural network model together; after forward propagation to the 6th convolutional layer of the network, a feature map of size 60 x 28 x 28 x 512 is obtained. The convolution features are split along the channel dimension, and each channel's 28 x 28 convolution feature in each image is flattened into a 1 x 784 feature vector, so the layer-6 feature map of each image yields 512 channel features of 784 dimensions.
1.3, Generating channel joint features: channel features are generated for all images in the sample set, giving 60 sets of channel features; the 60 channel features of each channel are then concatenated along the channel direction, yielding 512 channel joint features unionf_i, i ∈ {1, 2, 3, ..., 512}, each of 47,040 dimensions.
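The bookkeeping in 1.2-1.3 is a reshape plus a concatenation. A sketch with the channel count cut from 512 to 8 to keep the toy example small (the 47,040-dimension figure is 60 images x 784 positions):

```python
import numpy as np

def channel_joint_features(feature_maps):
    """Build channel joint features from per-image conv features.

    feature_maps: (n_images, C, H, W). Each channel's H*W response is
    flattened, then the n_images vectors of a given channel are
    concatenated, giving one (n_images * H * W)-dim joint feature per channel.
    """
    n, c, h, w = feature_maps.shape
    per_image = feature_maps.reshape(n, c, h * w)            # (n, C, H*W)
    return per_image.transpose(1, 0, 2).reshape(c, n * h * w)  # (C, n*H*W)

# 60 images, 28x28 maps as in the patent; 8 channels instead of 512 for speed
maps = np.zeros((60, 8, 28, 28), dtype=np.float32)
joint = channel_joint_features(maps)
print(joint.shape)  # (8, 47040) -- with 512 channels the first dim would be 512
```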
(2) Calculating a channel covariance vector;
calculating unionfiThe formula is calculated as follows for the covariance vectors of other channel features.
COVunionfi={COVunionfij},j∈{1,2,3,…,512}。
Wherein COVunionfij=E(unionfiunionfj)-E(unionfi)E(unionfj)。
(3) As shown in fig. 2, the channel covariance vectors are clustered and the discriminative region detector is constructed;
For COV_unionf_i (i = 1, ..., 512), the pairwise similarities are calculated to form a similarity matrix W = {s_ij}, i = 1...512, j = 1...512 [the similarity formula is given as an image in the original and is not reproduced].
The sum d_i (i = 1...512) of each row of the similarity matrix W is calculated and the diagonal matrix D = diag(d_1, ..., d_512) is constructed.
The Laplacian matrix is computed as
L = D - W.
The eigenvalues of L are computed and sorted in ascending order; the first 256 eigenvalues are taken and their eigenvectors u_1, u_2, ..., u_256 are computed. These 256 column vectors form the matrix U = {u_1, u_2, ..., u_256}, U ∈ R^(512x256). Let y_i ∈ R^256 be the i-th row of U, i = 1, 2, ..., 512. Using the k-means algorithm, the new sample points Y = {y_1, y_2, ..., y_512} are clustered around the cluster centers C_1, C_2, ..., C_10, giving clusters A_i = {y_j | y_j ∈ C_i}, where the class of y_j is the class of the corresponding COV_unionf_j. This yields 10 channel groups ChannelGroup_i = {COV_unionf_j | COV_unionf_j ∈ C_i}, i = 1, ..., 10. Given a single input image I, forward propagation yields the layer-6 convolution F_conv; F_conv is divided into 10 groups according to ChannelGroup_i, a discriminative region mask is generated for each group to obtain a 28 x 28 discriminative weight map, and weight calculation over all the groups in ChannelGroup_i yields a set of masks mask_i, i = 1, ..., n, which serve as the discriminative region detector.
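Mask generation from the channel groups can be sketched as follows; averaging the grouped responses and min-max normalizing them is an assumption, as the patent does not spell out the mask formula:

```python
import numpy as np

def discriminative_region_detector(f_conv, channel_groups):
    """Turn grouped channel responses into per-group weight maps (masks).

    f_conv: (C, H, W) layer-6 convolution of one image.
    channel_groups: list of channel-index arrays, one per cluster.
    """
    masks = []
    for idx in channel_groups:
        wmap = f_conv[idx].mean(axis=0)          # (H, W) group weight map
        spread = wmap.max() - wmap.min()
        if spread > 0:
            wmap = (wmap - wmap.min()) / spread  # scale to [0, 1]
        masks.append(wmap)
    return masks

# toy example: 16 channels in 2 groups of 8, 28x28 maps as in the patent
f_conv = np.random.default_rng(0).random((16, 28, 28))
groups = [np.arange(0, 8), np.arange(8, 16)]
masks = discriminative_region_detector(f_conv, groups)
print(len(masks), masks[0].shape)  # 2 (28, 28)
```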
(4) Regularization of the discriminative region based on the weight density;
The weight density ρ over a local region of a feature map is expressed as:

ρ = Σ_{(x,y)∈R} mask(x, y) / N(x, y);

where N(x, y) represents the number of spatial positions in the local region R, i.e. its area, so ρ is the discriminative strength per unit area; ρ is used as the criterion for finding the most discriminative rectangular region. The flow of computing this rectangular region is shown in fig. 3: first all parameters are initialized to 0, then all possible rectangular regions are traversed iteratively and the weight density within each is calculated, and finally the region with the maximum weight density is kept as the most discriminative rectangular region.
(5) Establishing a deep learning feature extraction model;
A network with m+1 branches is constructed: the image from S1 is divided into m parts, the most discriminative rectangular region from S4 is applied to each part to obtain m discriminative pictures, the S1 image and the m discriminative pictures are encoded by the model to obtain m+1 feature vectors, and the feature vectors are concatenated to obtain the candidate image re-identification feature vector. Preferably m = 2: a main network, an upper-half feature extraction network, and a lower-half feature extraction network are constructed, and the image from S1 is divided into an upper and a lower image. The main network is responsible for extracting the global features of the image: a picture enters the main network, which outputs a classification vector together with the bounding boxes of the upper and lower local discriminative regions; the S1 image is cropped by these bounding boxes into an upper discriminative partial picture and a lower discriminative partial picture, which are input into the upper-half and lower-half feature extraction networks respectively. The three networks are trained with independent loss functions: the main network uses a cross-entropy loss, while the upper-half and lower-half feature extraction networks use a weighted joint loss consisting of two parts, a cross-entropy loss and a hard-sample loss.
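The three-branch encoding and concatenation of step (5) can be sketched with placeholder encoders; the encoder callables and the 2-dim toy features are illustrative stand-ins, not the patent's networks:

```python
import numpy as np

def reid_feature(image, upper_box, lower_box,
                 encode_global, encode_upper, encode_lower):
    """Concatenate the global and the two local branch encodings into one
    re-identification feature vector (hypothetical encoder callables)."""
    y0, x0, y1, x1 = upper_box
    upper = image[y0:y1, x0:x1]          # crop the upper discriminative region
    y0, x0, y1, x1 = lower_box
    lower = image[y0:y1, x0:x1]          # crop the lower discriminative region
    return np.concatenate([encode_global(image),
                           encode_upper(upper),
                           encode_lower(lower)])

img = np.random.default_rng(0).random((256, 128))
dummy = lambda x: np.array([x.mean(), x.std()])   # stand-in 2-dim "encoder"
feat = reid_feature(img, (0, 0, 128, 128), (128, 0, 256, 128),
                    dummy, dummy, dummy)
print(feat.shape)  # (6,)
```

With real branch networks the three encodings would each be high-dimensional; the concatenation step is the same.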
A task inference stage:
(6) After network training converges, a picture is input into the model to extract its re-identification feature vector.
(7) The cosine distance between the query image re-identification feature vector and each candidate image re-identification feature vector is calculated.
(8) The feature distances are ranked to obtain the final re-identification matching result.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A vehicle re-identification method based on channel cooperative attention, characterized by comprising the following steps:
s1, constructing a learning sample according to the vehicle model, inputting the images in the learning sample into a pre-trained neural network model, expanding the convolution characteristics of the channels in each image into characteristic vectors to generate channel characteristics, and then connecting the channel characteristics in series in the channel direction to obtain channel joint characteristics;
s2, calculating covariance vectors between the channel joint characteristics of the current channel and the channel joint characteristics of other channels in the convolution layer;
s3, clustering the channel covariance vectors, grouping the channels according to the clustering result, and performing mask generation and weight calculation on each group to respectively obtain an identification area weight map and an identification area detector;
s4, defining a rectangular subregion in the discrimination region weight map, calculating the weight density of the rectangular subregion according to the discrimination region detector, and obtaining the rectangular region with the strongest discrimination according to the weight density;
s5, establishing a deep learning feature extraction model, dividing the image in S1 into m images, applying a rectangular region with the strongest identification force in S4 to each image to obtain m identification pictures, coding the image in S1 and the m identification pictures through the model to obtain m +1 feature vectors, and connecting the feature vectors in series to obtain candidate image re-identification feature vectors, wherein m is more than or equal to 2;
and S6, inputting the image to be detected into the deep learning feature extraction model to obtain a query image re-identification feature vector, calculating the distance between the query image re-identification feature vector and the candidate image re-identification feature vector, and sequencing the distances to obtain a re-identification matching result.
2. The vehicle re-identification method based on channel cooperative attention according to claim 1, wherein in step S1 the pre-trained neural network model is a RESNET50 model pre-trained on the Veri776 database.
3. The vehicle re-identification method based on channel cooperative attention according to claim 1, wherein in step S2 the covariance vector is calculated as:
COV_unionf_i = {COV_unionf_ij}, j ∈ {1, 2, 3, ..., 512};
COV_unionf_ij = E(unionf_i · unionf_j) - E(unionf_i)E(unionf_j);
where E(·) is the expectation function, and unionf_i and unionf_j are channel joint features.
4. The vehicle re-identification method based on channel cooperative attention according to claim 1, wherein the channel covariance vectors are clustered in step S3 by:
A, calculating the pairwise similarity between the channel joint features to form a similarity matrix W = {s_ij}, i = 1...n, j = 1...n [the similarity formula is given as an image in the original];
B, calculating the sum of each row of the similarity matrix W and constructing a diagonal matrix D with these sums on the diagonal;
C, computing the Laplacian matrix L = D - W, computing the eigenvalues of L, sorting them in ascending order, computing the eigenvectors u_1, u_2, ..., u_m of the first m eigenvalues, stacking the m column vectors into a matrix U = {u_1, u_2, ..., u_m}, U ∈ R^(n x m), and letting y_i ∈ R^m be the i-th row of U;
D, using the k-means algorithm to cluster the new sample points Y = {y_1, y_2, ..., y_n} into C classes, obtaining clusters A_i = {y_j | y_j ∈ C_i}, where the class of y_j represents the class of the corresponding COV_unionf_j.
5. The vehicle re-identification method based on channel cooperative attention according to claim 4, wherein the discriminative region weight map and the discriminative region detector are obtained in step S3 by:
obtaining the channel groups ChannelGroup_i = {COV_unionf_j | COV_unionf_j ∈ C_i}; given a single input image I, obtaining the convolution F_conv by forward propagation, grouping F_conv according to ChannelGroup_i, generating a discriminative region mask for each group to obtain a discriminative weight map, and performing weight calculation on all the groups in ChannelGroup_i to obtain a set of masks, which serve as the discriminative region detector.
6. The vehicle re-identification method based on channel cooperative attention according to claim 1, wherein in step S4 the rectangular sub-region area is at most one half of the area of the discriminative region weight map, and the weight density is calculated as:
ρ = Σ_{(x,y)∈R} mask(x, y) / N(x, y);
where N(x, y) represents the number of spatial positions within the rectangular sub-region R (the area of the local region) and mask represents the discriminative region detector.
7. The vehicle re-identification method based on channel cooperative attention according to claim 1, wherein the most discriminative rectangular region in step S4 is computed by: initializing all parameters to 0, iterating over all rectangular sub-regions while calculating the weight density within each, and taking the region with the maximum weight density as the most discriminative rectangular region.
8. The vehicle re-identification method based on channel cooperative attention according to claim 1, wherein in step S5 m = 2, the image is divided into two equal parts, and the discriminative pictures comprise an upper discriminative partial picture and a lower discriminative partial picture.
9. The vehicle re-identification method based on channel cooperative attention according to claim 8, wherein in step S5 the deep learning feature extraction model comprises a main network, an upper-half feature extraction network, and a lower-half feature extraction network; the image from S1 is encoded by the main network, the upper discriminative partial picture is encoded by the upper-half feature extraction network, and the lower discriminative partial picture is encoded by the lower-half feature extraction network.
10. The vehicle re-identification method based on channel cooperative attention according to claim 9, wherein the main network loss function is a cross-entropy loss function, the upper-half and lower-half feature extraction networks use a weighted joint loss function, and the weighted joint loss function is composed of a cross-entropy loss function and a hard-sample loss function.
CN202110940766.8A 2021-08-16 2021-08-16 Vehicle re-identification method based on channel cooperative attention Pending CN113723232A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110940766.8A CN113723232A (en) 2021-08-16 2021-08-16 Vehicle re-identification method based on channel cooperative attention

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110940766.8A CN113723232A (en) 2021-08-16 2021-08-16 Vehicle re-identification method based on channel cooperative attention

Publications (1)

Publication Number Publication Date
CN113723232A true CN113723232A (en) 2021-11-30

Family

ID=78675997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110940766.8A Pending CN113723232A (en) Vehicle re-identification method based on channel cooperative attention

Country Status (1)

Country Link
CN (1) CN113723232A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105975929A (en) * 2016-05-04 2016-09-28 北京大学深圳研究生院 Fast pedestrian detection method based on aggregated channel features
CN107729818A (en) * 2017-09-21 2018-02-23 北京航空航天大学 A kind of multiple features fusion vehicle recognition methods again based on deep learning
CN109190513A (en) * 2018-08-14 2019-01-11 中山大学 In conjunction with the vehicle of saliency detection and neural network again recognition methods and system
CN110110642A (en) * 2019-04-29 2019-08-09 华南理工大学 A kind of pedestrian's recognition methods again based on multichannel attention feature
CN111242102A (en) * 2019-12-17 2020-06-05 大连理工大学 Fine-grained image recognition algorithm of Gaussian mixture model based on discriminant feature guide
US20210073563A1 (en) * 2019-09-10 2021-03-11 Microsoft Technology Licensing, Llc Depth-based object re-identification
CN112836646A (en) * 2021-02-05 2021-05-25 华南理工大学 Video pedestrian re-identification method based on channel attention mechanism and application


Non-Patent Citations (6)

Title
HE YANGUANG et al.: "Combination of Appearance and License Plate Features for Vehicle Re-Identification", 2019 IEEE International Conference on Image Processing (ICIP), vol. 1, pages 3108-3112, XP033647278, DOI: 10.1109/ICIP.2019.8803323 *
WANG G et al.: "Learning discriminative features with multiple granularities for person re-identification", Proceedings of the 26th ACM International Conference on Multimedia, pages 274-282 *
WANG YUEFENG et al.: "Vehicle-identification Based on Complementarity Feature", 2020 Chinese Control and Decision Conference (CCDC), pages 3207-3211 *
YUEFENG WANG et al.: "Vehicle re-identification based on unsupervised local area detection and view discrimination", Image and Vision Computing, vol. 104, pages 3-5 *
LIAO HUANIAN et al.: "Cross-resolution person re-identification based on attention mechanism", Journal of Beijing University of Aeronautics and Astronautics, vol. 47, no. 03, pages 605-612 (in Chinese) *
MA XING'AN: "Fine-grained vehicle classification and re-identification based on deep learning", China Masters' Theses Full-text Database (Engineering Science and Technology II), no. 02, pages 034-413 (in Chinese) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination