WO2022147977A1 - Vehicle re-identification method and system based on depth feature and sparse metric projection - Google Patents
Vehicle re-identification method and system based on depth feature and sparse metric projection
- Publication number
- WO2022147977A1 (PCT/CN2021/103200)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- depth feature
- sparse
- feature
- target vehicle
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
Definitions
- the present application relates to the technical field of computer vision, and in particular, to a method and system for vehicle re-identification based on depth feature and sparse metric projection.
- vehicle re-identification based on vehicle appearance information in surveillance video, which underlies driving-vehicle recognition technology, has attracted the attention of many researchers due to its important practical value.
- the task of vehicle re-identification is, given an image of a target vehicle captured by one camera, to find images of the same target vehicle captured by other cameras, so as to realize relay tracking across cameras.
- Existing supervised vehicle re-identification methods can be divided into feature learning-based methods and metric learning-based methods.
- methods based on feature learning represent vehicle images by designing effective features to improve the matching accuracy of vehicle appearance features. Such methods have strong interpretability, but their recognition rate is low because vehicle appearance varies with illumination changes, viewing-angle changes, and occlusions in the actual traffic monitoring environment.
- the method based on metric learning focuses on using the metric loss function to learn the similarity between vehicle images, and reduces the feature differences caused by illumination changes, viewing angle changes and occlusions through feature projection.
- the vehicle re-identification method based on metric learning mainly learns a specific feature projection matrix, so that the transformed features can eliminate the problems of intra-class differences and inter-class similarities caused by changes in perspective.
- in the paper "Improving triplet-wise training of convolutional neural network for vehicle re-identification", Bai et al. design a group-sensitive triplet embedding method to perform metric learning in an end-to-end manner.
- the Deep Relative Distance Learning (DRDL) approach proposed by Liu et al. in "Deep Relative Distance Learning: Tell the Difference between Similar Vehicles" has received a lot of attention from later work; it integrates the features learned by different branch tasks through a fully connected layer to obtain the final mapped feature.
- that paper also proposes constructing positive and negative sample sets and using a coupled clusters loss (Coupled Clusters Loss) in place of the triplet loss as the metric-learning objective, so that vehicles of the same identity become more aggregated and vehicles of different identities become more dispersed.
- the re-identification model is very sensitive to the position of the image in the feature space.
- metric-learning-based vehicle re-identification methods therefore face the discrepancy between the feature spaces of the training set and the test set, and the problem of generalizing the re-id model to other cameras.
- the present application provides a vehicle re-identification method and system based on depth feature and sparse metric projection, which fully consider the influence of factors such as lighting conditions, camera parameters, viewing angle and occlusion on vehicle appearance characteristics; through an overcomplete dictionary and meta-projection matrices learned in the data space,
- a feature sparse projection matrix is constructed adaptively for each vehicle image feature, which overcomes the diversity of vehicle image feature data distributions, improves the accuracy of vehicle re-identification, and enhances the generalization ability of the re-identification method.
- the present application provides a vehicle re-identification method based on depth feature and sparse metric projection
- the vehicle re-identification method based on depth feature and sparse metric projection includes:
- the distance between the depth feature of the target vehicle image and the depth feature of each image in the set of images to be re-identified is calculated;
- the present application provides a vehicle re-identification system based on depth feature and sparse metric projection
- the vehicle re-identification system based on depth feature and sparse metric projection includes:
- an acquisition module which is configured to: acquire an image of the target vehicle; acquire a set of images to be re-identified;
- a feature extraction module which is configured to: perform depth feature extraction on each image in the target vehicle image and the image set to be re-identified to obtain the depth feature of each image;
- the projection matrix calculation module is configured to: calculate the adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image; at the same time, calculate the adaptive sparse projection matrix corresponding to the depth feature of each image in the set of images to be re-identified;
- a distance calculation module which is configured to: based on the depth feature and the adaptive sparse projection matrix corresponding to the depth feature, calculate the distance between the depth feature of the target vehicle image and the depth feature of each image in the set of images to be re-identified;
- the output module is configured to: repeat the steps of the distance calculation module until the distances between the depth feature of the target image and the depth features of all images in the set of images to be re-identified have been calculated; and select the image corresponding to the minimum distance as the re-identified image of the target vehicle.
- the present application also provides an electronic device, comprising: one or more processors, one or more memories, and one or more computer programs; wherein the processor is connected to the memory, the one or more computer programs are stored in the memory, and, when the electronic device runs, the processor executes the one or more computer programs stored in the memory, so that the electronic device performs the method described in the first aspect above.
- the present application further provides a computer-readable storage medium for storing computer instructions, and when the computer instructions are executed by a processor, the method described in the first aspect is completed.
- the present application also provides a computer program (product), including a computer program, which when run on one or more processors, is used to implement the method of any one of the foregoing first aspects.
- the vehicle image imaging process is easily affected by the shooting environment (including lighting conditions, camera parameters, shooting angle and external occlusion and many other factors), and the corresponding features of each vehicle image have a unique data distribution.
- Vehicle re-identification methods based on traditional metric learning cannot cope with the uniqueness of this feature distribution, resulting in low accuracy of feature distance calculation and vehicle re-identification.
- the present invention proposes a vehicle re-identification method based on depth feature and sparse metric projection, which introduces an adaptive strategy into the traditional metric projection matrix learning process and, by constructing a data space overcomplete dictionary and meta-projection matrices, learns an adaptive sparse projection matrix for each image feature.
- the adaptive sparse projection matrix ensures that all image features lie in the same data space after projection.
- on the one hand, the model maintains good nearest-neighbor calculation performance under various data distributions; on the other hand, the distance metric can be better adapted to different practical application scenarios, improving the generalization ability of the system.
- the experimental results on the vehicle re-identification task confirm the effectiveness of the method proposed in the present invention.
- FIG. 2 is a flowchart of a data space adaptive sparse metric projection learning algorithm according to an embodiment of the present application
- This embodiment provides a vehicle re-identification method based on depth feature and sparse metric projection
- the vehicle re-identification method based on depth feature and sparse metric projection includes:
- S101 Obtain an image of a target vehicle; obtain a set of images to be re-identified;
- S102 Perform depth feature extraction on the target vehicle image and each image in the set of images to be re-identified to obtain the depth feature of each image;
- S103 Calculate the adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image; at the same time, calculate the adaptive sparse projection matrix corresponding to the depth feature of each image in the set of images to be re-identified;
- S104 Based on the depth feature and the adaptive sparse projection matrix corresponding to the depth feature, calculate the distance between the depth feature of the target vehicle image and the depth feature of each image in the image set to be re-identified;
- S105 Repeat S104 until the distance between the depth feature of the target image and the depth features of all images in the set of images to be re-identified is calculated; the image corresponding to the minimum distance is selected as the re-identified image of the target vehicle.
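- The following is a minimal Python/NumPy sketch of the S101-S105 loop. The helpers `extract_depth_feature` (the truncated VGG-19 plus PCA described later), the learned overcomplete dictionary `D`, the meta-projection matrices `metas`, and `adaptive_projection` (sketched later in this description) are assumptions standing in for components defined elsewhere in the text; only the control flow is taken from S101-S105.

```python
import numpy as np

def re_identify(target_img, gallery_imgs, D, metas,
                extract_depth_feature, adaptive_projection):
    """S101-S105: return the gallery image with the minimum projected-feature distance."""
    # S102: depth feature extraction for the target image and every image to be re-identified.
    f_t = extract_depth_feature(target_img)
    gallery_feats = [extract_depth_feature(img) for img in gallery_imgs]

    # S103: adaptive sparse projection matrix for each depth feature.
    P_t = adaptive_projection(f_t, D, metas)

    # S104-S105: distance between projected features; keep the minimum.
    dists = []
    for f_g in gallery_feats:
        P_g = adaptive_projection(f_g, D, metas)
        dists.append(np.linalg.norm(P_t @ f_t - P_g @ f_g))
    best = int(np.argmin(dists))
    return gallery_imgs[best], dists[best]
```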
- in S102, depth feature extraction is performed on the target vehicle image and on each image in the image set to be re-identified to obtain the depth feature of each image; this specifically includes:
- the improved VGG-19 network is used to extract the depth feature, and the depth feature of each image is obtained;
- the improved VGG-19 network is pre-trained with the ImageNet dataset.
- in S103, the steps for calculating the adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image are the same as the steps for calculating the adaptive sparse projection matrix corresponding to the depth feature of each image in the image set to be re-identified.
- in S103, calculating the adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image specifically includes:
- calculating, according to the overcomplete dictionary, the sparse coefficients corresponding to the depth feature of the target vehicle image;
- taking the sparse coefficients corresponding to the depth feature of the target vehicle image as weights and performing a weighted summation of the meta-projection matrices to obtain the adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image.
- the overcomplete dictionary is obtained using training data.
- the obtaining step includes:
- S10311 Initialize the overcomplete dictionary D as the K cluster centers of the training data space; initialize each element of the sparse coefficient matrix; the training data include known target vehicle images and re-identified images of known target vehicles;
- S10312 Calculate the feature sparse coding loss function;
- S10313 According to the feature sparse coding loss function, adopt an iterative training strategy, first fix the overcomplete dictionary D, use the gradient descent method to update the sparse coefficient matrix, then fix the sparse coefficient matrix, and use the gradient descent method to update the overcomplete dictionary D.
- F is the characteristic matrix of the training data set
- α is the sparse coefficient matrix
- λ is the balance coefficient
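- A sketch of S10311-S10313 in Python/NumPy. The exact feature sparse coding loss is not reproduced in this excerpt, so the standard l1-regularised reconstruction loss ||F - Dα||_F^2 + λ||α||_1 is assumed here; the dictionary size K, the learning rate and the iteration count are illustrative values.

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_dictionary(F, K=64, lam=0.1, lr=1e-3, iters=200):
    """F: d x n matrix of training depth features (one column per image)."""
    d, n = F.shape
    # S10311: initialise D with the K cluster centres of the training data space,
    # and the sparse coefficient matrix alpha with zeros.
    D = KMeans(n_clusters=K, n_init=10).fit(F.T).cluster_centers_.T   # d x K dictionary
    alpha = np.zeros((K, n))                                          # K x n sparse coefficients

    for _ in range(iters):
        # S10313(a): fix D, gradient step on alpha for the assumed loss
        # ||F - D @ alpha||_F^2 + lam * ||alpha||_1 (subgradient for the l1 term).
        resid = F - D @ alpha
        alpha -= lr * (-2.0 * D.T @ resid + lam * np.sign(alpha))
        # S10313(b): fix alpha, gradient step on D, then renormalise the dictionary atoms.
        resid = F - D @ alpha
        D -= lr * (-2.0 * resid @ alpha.T)
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    return D, alpha
```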
- the meta-projection matrices are also obtained using training data.
- the step of obtaining the meta-projection matrices includes:
- adopting a joint training strategy and splicing the meta-projection matrices to construct a composite projection matrix;
- calculating the loss function of the composite projection matrix;
- according to the loss function of the composite projection matrix, using a gradient descent strategy to calculate the gradient of the composite projection matrix and update it, thereby obtaining each meta-projection matrix.
- the method includes the following steps:
- η is the step size of the iterative update.
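- A sketch of the joint training of the meta-projection matrices. The composite-projection loss function and update formulas referenced above are not reproduced in this excerpt, so a pairwise hinge loss (pulling same-vehicle pairs together and pushing different-vehicle pairs apart, in line with the goal stated later in this description) is assumed as a stand-in; the projected dimension `p`, `margin` and the iteration count are illustrative. `alpha` is the sparse coefficient matrix from the dictionary-learning sketch above.

```python
import numpy as np

def weighted_projection(alpha_x, metas):
    """P(x) = sum_k alpha_k(x) * W_k for meta-projection matrices metas (K x p x d)."""
    return np.tensordot(alpha_x, metas, axes=1)           # p x d matrix

def train_meta_projections(F, alpha, labels, K, p, eta=1e-3, margin=1.0, iters=100):
    """Jointly train the K meta-projection matrices through one spliced composite
    matrix W of shape (K*p) x d; the pairwise hinge loss below is an assumed
    stand-in for the composite-projection loss of the patent."""
    d, n = F.shape
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(K * p, d))           # spliced composite projection matrix
    for _ in range(iters):
        metas = W.reshape(K, p, d)
        grad = np.zeros_like(metas)
        for i in range(n):
            for j in range(i + 1, n):
                diff = (weighted_projection(alpha[:, i], metas) @ F[:, i]
                        - weighted_projection(alpha[:, j], metas) @ F[:, j])
                dist, same = diff @ diff, labels[i] == labels[j]
                if not same and dist >= margin:           # inactive negative pair
                    continue
                sign = 1.0 if same else -1.0              # pull same pairs, push different ones
                for k in range(K):
                    grad[k] += sign * 2.0 * (np.outer(diff, alpha[k, i] * F[:, i])
                                             - np.outer(diff, alpha[k, j] * F[:, j]))
        W -= eta * grad.reshape(K * p, d)                 # gradient step with step size eta
    return W.reshape(K, p, d)                             # each slice is one meta-projection matrix
```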
- in S104, based on the depth features and the adaptive sparse projection matrices corresponding to the depth features, the distance between the depth feature of the target vehicle image and the depth feature of each image in the image set to be re-identified is calculated; the specific steps include:
- multiplying the depth feature of the target vehicle image by the adaptive sparse projection matrix corresponding to that depth feature to obtain a first product;
- multiplying the depth feature of an image in the image set to be re-identified by the adaptive sparse projection matrix corresponding to that depth feature to obtain a second product;
- calculating the distance between the first product and the second product; this distance is the distance between the depth feature of the target vehicle image and the depth feature of that image in the image set to be re-identified.
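- A sketch of the first-product/second-product distance of S104; the Euclidean norm is assumed here, since the excerpt does not name the specific distance.

```python
import numpy as np

def projected_distance(f_target, P_target, f_gallery, P_gallery):
    """Distance used in S104 between one target feature and one gallery feature."""
    first_product = P_target @ f_target      # target depth feature times its adaptive sparse projection matrix
    second_product = P_gallery @ f_gallery   # gallery depth feature times its adaptive sparse projection matrix
    return np.linalg.norm(first_product - second_product)
```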
- in addition to S101-S105, the method includes a training phase and a testing phase; the details are as follows: images of the same vehicle are collected under different cameras, the vehicle images are divided into a training image set and a test image set, and feature extraction is performed on the images to form a training data set and a test data set respectively; on the training data set, the calculation method of the data space adaptive sparse metric projection matrix is learned, and on the test data set, the data space adaptive sparse metric projection matrix is used to transform the image features, with distance calculation performed on the transformed features to complete vehicle re-identification.
- the training phase and the testing phase include the following steps:
- Step 1) Collect images of the same vehicle under different cameras
- Step 2) Divide the vehicle image into a training image set and a test image set, perform feature extraction on the image, and form a training data set and a test data set respectively;
- Step 3) On the training data set, learn the calculation method of the data space adaptive sparse metric projection matrix;
- Step 4) On the test data set, use the data space adaptive sparse metric projection matrix to perform image feature transformation, and perform distance calculation based on the transformed image features to complete vehicle re-identification.
- In step 1), for M vehicles, images of each vehicle are collected under camera A and camera B to form image sets X and Y respectively.
- In step 2), N vehicles are randomly selected from the M vehicles; the images belonging to these N vehicles in image sets X and Y form the training image set, and the images belonging to the remaining M-N vehicles form the test image set.
- Specifically, the N randomly selected vehicles contribute a total of 2*N training images, and the images of the remaining M-N vehicles form the test image set with a total of 2*(M-N) images.
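- A sketch of the step-2) split, assuming one image per vehicle under each of camera A and camera B (which is what the 2*N and 2*(M-N) counts imply); the random seed is illustrative.

```python
import random

def split_train_test(X, Y, N, seed=0):
    """X[v], Y[v]: image of vehicle v under camera A and camera B; len(X) == len(Y) == M.
    Returns a training set of 2*N images and a test set of 2*(M-N) images,
    grouped per vehicle as (camera-A image, camera-B image) pairs."""
    M = len(X)
    vehicle_ids = list(range(M))
    random.Random(seed).shuffle(vehicle_ids)
    train_ids, test_ids = vehicle_ids[:N], vehicle_ids[N:]
    train = [(X[v], Y[v]) for v in train_ids]     # 2*N images in total
    test = [(X[v], Y[v]) for v in test_ids]       # 2*(M-N) images in total
    return train, test
```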
- Step 3) is carried out on the training data set, and includes:
- the projection matrix for a feature sample x is defined as:
- Step 4) is carried out on the test data set, and includes:
- repeating step 4.2) until the distance calculation between x_test and all image features to be re-identified in the test data set is completed; the image corresponding to the minimum distance is considered to belong to the same vehicle as x_test.
- Step 1) comprises:
- the image obtained in step 1.1) is sent to the VGG-19 network for feature extraction to obtain a 4096-dimensional feature vector;
- the 16 convolutional layers and the first fully connected layer of the VGG-19 network are used as the feature extraction part, and the last two fully connected layers of the VGG-19 are removed.
- PCA is further used to perform a dimensionality reduction operation on the feature vector, and a 127-dimensional feature vector is finally obtained under the condition of retaining 80% of the eigenvalues.
- the VGG-19 network is pre-trained on the ImageNet dataset; the last two fully connected layers of the VGG-19 network are removed, and the 16 convolutional layers and the first fully connected layer of the VGG-19 network are retained as the deep feature extraction network.
- the image is sent to the deep feature extraction network for feature extraction, and a 4096-dimensional feature vector is obtained;
- this method uses PCA to perform dimensionality reduction operations on the original features, and finally obtains a 127-dimensional feature vector while retaining 80% of the eigenvalues.
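- A sketch of the depth feature extractor using PyTorch/torchvision and scikit-learn (neither library is named in the patent, so this is one possible realisation): the ImageNet-pretrained VGG-19 is truncated after its first fully connected layer to give 4096-dimensional features, and PCA reduces them to 127 dimensions; the 80%-eigenvalue criterion could equivalently be expressed as PCA(n_components=0.80).

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.decomposition import PCA

# Keep the 16 convolutional layers and the first fully connected layer of an
# ImageNet-pretrained VGG-19; drop the last two fully connected layers.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)  # torchvision >= 0.13 weight enum
vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:1])  # first FC layer only
vgg.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def vgg_feature(pil_image):
    """4096-dimensional depth feature from the truncated VGG-19."""
    return vgg(preprocess(pil_image).unsqueeze(0)).squeeze(0).numpy()

def fit_pca(train_features):
    """train_features: rows of 4096-dim training features; 127 components as in the text."""
    return PCA(n_components=127).fit(train_features)

# usage: depth_feature = pca.transform(vgg_feature(img)[None, :])[0]   # 127-dim vector
```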
- Step 3) On the training data set, learn the data space adaptive sparse projection matrix calculation method.
- the data space adaptation refers to learning an adaptive projection matrix for the image feature vectors, so that all image feature vectors are projected in the same data space, thereby ensuring the effectiveness of the nearest neighbor comparison.
- the approximate learning method based on sparse coding constructs an overcomplete dictionary and meta-projection matrices in the data space, uses the overcomplete dictionary to sparsely encode the feature data, and combines the coding coefficients with the meta-projection matrices,
- so that a data space adaptive sparse projection matrix is constructed.
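- A sketch of the construction of the data space adaptive sparse projection matrix for one depth feature: the feature is sparsely coded on the overcomplete dictionary D, and the coding coefficients weight the meta-projection matrices. Lasso-based coding via scikit-learn is an assumed concrete choice of sparse coder, and `lam` is illustrative.

```python
import numpy as np
from sklearn.decomposition import sparse_encode

def adaptive_projection(f, D, metas, lam=0.1):
    """f: d-dim depth feature; D: d x K overcomplete dictionary (atoms as columns);
    metas: K x p x d array of meta-projection matrices.
    Returns the p x d adaptive sparse projection matrix for f."""
    # Sparse coefficients of f on the dictionary D (assumed lasso coding).
    alpha = sparse_encode(f[None, :], D.T, algorithm="lasso_lars", alpha=lam)[0]   # K weights
    # Weighted sum of the meta-projection matrices with the sparse coefficients as weights.
    return np.tensordot(alpha, metas, axes=1)
```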
- Step 4) On the basis of the data space adaptive sparse projection matrix calculation method learned in step 3), vehicle re-identification is performed on the test data set; the specific implementation method is as follows:
- the M-N distance calculation results obtained in step 4.1) are sorted from small to large, and the camera-B image corresponding to the smallest distance is taken by this method as the image that belongs to the same vehicle as the given camera-A image;
- the present application provides a vehicle re-identification method based on depth feature and sparse metric projection.
- the method collects images of the same vehicle under different cameras, divides the vehicle images into a training image set and a test image set, and performs feature extraction on the images to form a training data set and a test data set respectively.
- on the training data set, the calculation method of the data space adaptive sparse metric projection matrix is learned; on the test data set, the data space adaptive sparse metric projection matrix is used to transform the image features, and distances are calculated based on the transformed image features to complete vehicle re-identification.
- This application considers that in the traffic monitoring network, the shooting environment (including factors such as illumination, camera angle, camera parameters, occlusion, etc.) of each vehicle image is different, resulting in unique data distribution of corresponding features.
- by constructing an overcomplete dictionary and meta-projection matrices in the data space, an adaptive sparse projection matrix is learned for each image feature, so that the projected feature samples all lie in the same feature space, thus ensuring the effectiveness of the nearest-neighbor comparison.
- the present application belongs to a method based on metric learning, and the goal is to project all feature vectors into a unified feature space, so that the features of the same car are closer, and the features of different cars are farther away.
- This embodiment provides a vehicle re-identification system based on depth feature and sparse metric projection
- the vehicle re-identification system based on depth feature and sparse metric projection includes:
- an acquisition module which is configured to: acquire an image of the target vehicle; acquire a set of images to be re-identified;
- a feature extraction module which is configured to: perform depth feature extraction on each image in the target vehicle image and the image set to be re-identified to obtain the depth feature of each image;
- the projection matrix calculation module is configured to: calculate the adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image; at the same time, calculate the adaptive sparse projection matrix corresponding to the depth feature of each image in the set of images to be re-identified;
- a distance calculation module which is configured to: based on the depth feature and the adaptive sparse projection matrix corresponding to the depth feature, calculate the distance between the depth feature of the target vehicle image and the depth feature of each image in the set of images to be re-identified;
- the output module is configured to: repeat the steps of the distance calculation module until the distances between the depth feature of the target image and the depth features of all images in the set of images to be re-identified have been calculated; and select the image corresponding to the minimum distance as the re-identified image of the target vehicle.
- the above-mentioned acquisition module, feature extraction module, projection matrix calculation module, distance calculation module and output module correspond to steps S101 to S105 of the first embodiment; the examples and application scenarios realized by these modules are the same as those of the corresponding steps, but are not limited to the content disclosed in the first embodiment. It should be noted that the above modules, as part of the system, can be executed in a computer system such as a set of computer-executable instructions.
- the proposed system can be implemented in other ways.
- the system embodiments described above are only illustrative.
- the division of the above modules is only a logical function division.
- multiple modules may be combined or integrated into another system, or some features may be ignored or not implemented.
- This embodiment also provides an electronic device, including: one or more processors, one or more memories, and one or more computer programs; wherein the processor is connected to the memory, the one or more computer programs are stored in the memory, and, when the electronic device runs, the processor executes the one or more computer programs stored in the memory, so that the electronic device executes the method described in the first embodiment.
- the processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
- a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
- the memory may include read-only memory and random access memory and provide instructions and data to the processor, and a portion of the memory may also include non-volatile random access memory.
- the memory may also store device type information.
- each step of the above-mentioned method can be completed by a hardware integrated logic circuit in a processor or an instruction in the form of software.
- the method in the first embodiment may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
- the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
- the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware. To avoid repetition, detailed description is omitted here.
- This embodiment also provides a computer-readable storage medium for storing computer instructions, and when the computer instructions are executed by a processor, the method described in the first embodiment is completed.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
Abstract
Description
Claims (10)
- 1. A vehicle re-identification method based on depth feature and sparse metric projection, characterized by comprising: acquiring an image of a target vehicle; acquiring a set of images to be re-identified; performing depth feature extraction on the target vehicle image and on each image in the set of images to be re-identified to obtain the depth feature of each image; calculating the adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image and, at the same time, calculating the adaptive sparse projection matrix corresponding to the depth feature of each image in the set of images to be re-identified; based on the depth features and the adaptive sparse projection matrices corresponding to the depth features, calculating the distance between the depth feature of the target vehicle image and the depth feature of each image in the set of images to be re-identified; and repeating the previous step until the distances between the depth feature of the target image and the depth features of all images in the set of images to be re-identified have been calculated, and selecting the image corresponding to the minimum distance as the re-identified image of the target vehicle.
- 2. The vehicle re-identification method based on depth feature and sparse metric projection according to claim 1, characterized in that performing depth feature extraction on the target vehicle image and on each image in the set of images to be re-identified to obtain the depth feature of each image specifically comprises: for the target vehicle image and each image in the set of images to be re-identified, using an improved VGG-19 network to perform depth feature extraction to obtain the depth feature of each image; wherein, in the improved VGG-19 network, the last two fully connected layers of the VGG-19 network are removed and only the first 16 convolutional layers and the first fully connected layer are retained.
- 3. The vehicle re-identification method based on depth feature and sparse metric projection according to claim 1, characterized in that calculating the adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image specifically comprises: calculating, according to an overcomplete dictionary, the sparse coefficients corresponding to the depth feature of the target vehicle image; and taking the sparse coefficients corresponding to the depth feature of the target vehicle image as weights and performing a weighted summation of the meta-projection matrices to obtain the adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image.
- 4. The vehicle re-identification method based on depth feature and sparse metric projection according to claim 3, characterized in that the step of obtaining the overcomplete dictionary comprises: initializing the overcomplete dictionary D as the K cluster centers of the training data space and initializing each element of the sparse coefficient matrix, wherein the training data comprise known target vehicle images and re-identified images of known target vehicles; calculating the feature sparse coding loss function; and, according to the feature sparse coding loss function, adopting an iterative training strategy in which the overcomplete dictionary D is first fixed and the sparse coefficient matrix is updated by gradient descent, and then the sparse coefficient matrix is fixed and the overcomplete dictionary D is updated by gradient descent.
- 5. The vehicle re-identification method based on depth feature and sparse metric projection according to claim 3, characterized in that the step of obtaining the meta-projection matrices comprises: adopting a joint training strategy and splicing the meta-projection matrices to construct a composite projection matrix; calculating the loss function of the composite projection matrix; and, according to the loss function of the composite projection matrix, adopting a gradient descent strategy to calculate the gradient of the composite projection matrix and update it, thereby obtaining each meta-projection matrix.
- 6. The vehicle re-identification method based on depth feature and sparse metric projection according to claim 1, characterized in that calculating, based on the depth features and the adaptive sparse projection matrices corresponding to the depth features, the distance between the depth feature of the target vehicle image and the depth feature of each image in the set of images to be re-identified specifically comprises: multiplying the depth feature of the target vehicle image by the adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image to obtain a first product; multiplying the depth feature of an image in the set of images to be re-identified by the adaptive sparse projection matrix corresponding to the depth feature of that image to obtain a second product; and calculating the distance between the first product and the second product, the distance between the first product and the second product being the distance between the depth feature of the target vehicle image and the depth feature of that image in the set of images to be re-identified.
- 7. The vehicle re-identification method based on depth feature and sparse metric projection according to claim 2, characterized in that the improved VGG-19 network is pre-trained on the ImageNet dataset.
- 8. A vehicle re-identification system based on depth feature and sparse metric projection, characterized by comprising: an acquisition module configured to acquire an image of a target vehicle and acquire a set of images to be re-identified; a feature extraction module configured to perform depth feature extraction on the target vehicle image and on each image in the set of images to be re-identified to obtain the depth feature of each image; a projection matrix calculation module configured to calculate the adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image and, at the same time, calculate the adaptive sparse projection matrix corresponding to the depth feature of each image in the set of images to be re-identified; a distance calculation module configured to calculate, based on the depth features and the adaptive sparse projection matrices corresponding to the depth features, the distance between the depth feature of the target vehicle image and the depth feature of each image in the set of images to be re-identified; and an output module configured to repeat the steps of the distance calculation module until the distances between the depth feature of the target image and the depth features of all images in the set of images to be re-identified have been calculated, and to select the image corresponding to the minimum distance as the re-identified image of the target vehicle.
- 9. An electronic device, characterized by comprising: one or more processors, one or more memories, and one or more computer programs; wherein the processor is connected to the memory, the one or more computer programs are stored in the memory, and, when the electronic device runs, the processor executes the one or more computer programs stored in the memory, so that the electronic device performs the method according to any one of claims 1-7.
- 10. A computer-readable storage medium, characterized in that it is used for storing computer instructions which, when executed by a processor, complete the method according to any one of claims 1-7.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110014228.6 | 2021-01-05 | ||
CN202110014228.6A CN112699829B (en) | 2021-01-05 | 2021-01-05 | Vehicle weight identification method and system based on depth feature and sparse measurement projection |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022147977A1 true WO2022147977A1 (en) | 2022-07-14 |
Family
ID=75514949
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/103200 WO2022147977A1 (en) | 2021-01-05 | 2021-06-29 | Vehicle re-identification method and system based on depth feature and sparse metric projection |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112699829B (en) |
WO (1) | WO2022147977A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112699829B (en) * | 2021-01-05 | 2022-08-30 | 山东交通学院 | Vehicle weight identification method and system based on depth feature and sparse measurement projection |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107085206A (en) * | 2017-03-22 | 2017-08-22 | 南京航空航天大学 | A kind of one-dimensional range profile recognition methods for keeping projecting based on adaptive sparse |
CN108875445A (en) * | 2017-05-08 | 2018-11-23 | 上海荆虹电子科技有限公司 | A kind of pedestrian recognition methods and device again |
CN109145777A (en) * | 2018-08-01 | 2019-01-04 | 北京旷视科技有限公司 | Vehicle recognition methods, apparatus and system again |
CN109635728A (en) * | 2018-12-12 | 2019-04-16 | 中山大学 | A kind of isomery pedestrian recognition methods again based on asymmetric metric learning |
CN110765960A (en) * | 2019-10-29 | 2020-02-07 | 黄山学院 | Pedestrian re-identification method for adaptive multi-task deep learning |
CN112699829A (en) * | 2021-01-05 | 2021-04-23 | 山东交通学院 | Vehicle weight identification method and system based on depth feature and sparse measurement projection |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106056141B (en) * | 2016-05-27 | 2019-04-19 | 哈尔滨工程大学 | A kind of target identification of use space sparse coding and angle rough estimate calculating method |
CN106682087A (en) * | 2016-11-28 | 2017-05-17 | 东南大学 | Method for retrieving vehicles on basis of sparse codes of features of vehicular ornaments |
CN108509854B (en) * | 2018-03-05 | 2020-11-17 | 昆明理工大学 | Pedestrian re-identification method based on projection matrix constraint and discriminative dictionary learning |
CN109241981B (en) * | 2018-09-03 | 2022-07-12 | 哈尔滨工业大学 | Feature detection method based on sparse coding |
CN109492610B (en) * | 2018-11-27 | 2022-05-10 | 广东工业大学 | Pedestrian re-identification method and device and readable storage medium |
EP3722998A1 (en) * | 2019-04-11 | 2020-10-14 | Teraki GmbH | Data analytics on pre-processed signals |
- 2021
- 2021-01-05 CN CN202110014228.6A patent/CN112699829B/en active Active
- 2021-06-29 WO PCT/CN2021/103200 patent/WO2022147977A1/en active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107085206A (en) * | 2017-03-22 | 2017-08-22 | 南京航空航天大学 | A kind of one-dimensional range profile recognition methods for keeping projecting based on adaptive sparse |
CN108875445A (en) * | 2017-05-08 | 2018-11-23 | 上海荆虹电子科技有限公司 | A kind of pedestrian recognition methods and device again |
CN109145777A (en) * | 2018-08-01 | 2019-01-04 | 北京旷视科技有限公司 | Vehicle recognition methods, apparatus and system again |
CN109635728A (en) * | 2018-12-12 | 2019-04-16 | 中山大学 | A kind of isomery pedestrian recognition methods again based on asymmetric metric learning |
CN110765960A (en) * | 2019-10-29 | 2020-02-07 | 黄山学院 | Pedestrian re-identification method for adaptive multi-task deep learning |
CN112699829A (en) * | 2021-01-05 | 2021-04-23 | 山东交通学院 | Vehicle weight identification method and system based on depth feature and sparse measurement projection |
Also Published As
Publication number | Publication date |
---|---|
CN112699829B (en) | 2022-08-30 |
CN112699829A (en) | 2021-04-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Garg et al. | Don't look back: Robustifying place categorization for viewpoint-and condition-invariant place recognition | |
CN113222041B (en) | High-order association discovery fine-grained image identification method and device of graph structure representation | |
WO2020228525A1 (en) | Place recognition method and apparatus, model training method and apparatus for place recognition, and electronic device | |
CN109800692B (en) | Visual SLAM loop detection method based on pre-training convolutional neural network | |
CN108681746B (en) | Image identification method and device, electronic equipment and computer readable medium | |
CN110543581B (en) | Multi-view three-dimensional model retrieval method based on non-local graph convolution network | |
CN108229347B (en) | Method and apparatus for deep replacement of quasi-Gibbs structure sampling for human recognition | |
EP3847580A1 (en) | Multi-view image clustering techniques using binary compression | |
US11270425B2 (en) | Coordinate estimation on n-spheres with spherical regression | |
CN114830131A (en) | Equal-surface polyhedron spherical gauge convolution neural network | |
WO2022147977A1 (en) | Vehicle re-identification method and system based on depth feature and sparse metric projection | |
CN104036296A (en) | Method and device for representing and processing image | |
CN114612698A (en) | Infrared and visible light image registration method and system based on hierarchical matching | |
Pultar | Improving the hardnet descriptor | |
CN114202694A (en) | Small sample remote sensing scene image classification method based on manifold mixed interpolation and contrast learning | |
CN112860936A (en) | Visual pedestrian re-identification method based on sparse graph similarity migration | |
CN111027609B (en) | Image data weighted classification method and system | |
CN110929801B (en) | Improved Euclid distance KNN classification method and system | |
CN110674689B (en) | Vehicle re-identification method and system based on feature embedding space geometric constraint | |
Wang et al. | Image splicing tamper detection based on deep learning and attention mechanism | |
CN116704187A (en) | Real-time semantic segmentation method, system and storage medium for semantic alignment | |
CN114663861A (en) | Vehicle re-identification method based on dimension decoupling and non-local relation | |
CN116012744A (en) | Closed loop detection method, device, equipment and storage medium | |
CN112287995A (en) | Low-resolution image identification method based on multilayer coupling mapping | |
CN115929495B (en) | Engine valve fault diagnosis method based on Markov transition field and improved Gaussian prototype network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21917023 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21917023 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.10.2023) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21917023 Country of ref document: EP Kind code of ref document: A1 |