CN112699829B - Vehicle re-identification method and system based on depth features and sparse metric projection - Google Patents

Vehicle re-identification method and system based on depth features and sparse metric projection

Info

Publication number
CN112699829B
CN112699829B
Authority
CN
China
Prior art keywords
image
depth feature
depth
target vehicle
projection matrix
Prior art date
Legal status
Active
Application number
CN202110014228.6A
Other languages
Chinese (zh)
Other versions
CN112699829A (en)
Inventor
刘凯 (Liu Kai)
Current Assignee
Shandong Jiaotong University
Original Assignee
Shandong Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shandong Jiaotong University filed Critical Shandong Jiaotong University
Priority to CN202110014228.6A priority Critical patent/CN112699829B/en
Publication of CN112699829A publication Critical patent/CN112699829A/en
Priority to PCT/CN2021/103200 priority patent/WO2022147977A1/en
Application granted granted Critical
Publication of CN112699829B publication Critical patent/CN112699829B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a vehicle re-identification method and system based on depth features and sparse metric projection. A target vehicle image is obtained, and an image set to be re-identified is acquired. Depth feature extraction is carried out on the target vehicle image and on each image in the image set to be re-identified to obtain the depth feature of each image. A self-adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image is calculated, as is a self-adaptive sparse projection matrix corresponding to the depth feature of each image in the image set to be re-identified. The distance between the depth feature of the target vehicle image and the depth feature of each image in the image set to be re-identified is then calculated based on the depth features and their corresponding self-adaptive sparse projection matrices; this distance calculation is repeated until the distances between the depth feature of the target image and the depth features of all images in the image set to be re-identified have been computed. Finally, the image corresponding to the minimum distance is selected as the re-identification image of the target vehicle.

Description

Vehicle re-identification method and system based on depth features and sparse metric projection
Technical Field
The application relates to the technical field of computer vision, in particular to a vehicle re-identification method and system based on depth features and sparse metric projection.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
At present, surveillance cameras are widely installed in cities, suburbs and on expressways, and a large number of vehicle surveillance images are collected and stored in real time. Cross-camera retrieval and continuous tracking of target vehicles appearing in different areas are real requirements. The conventional method mainly adopts license plate recognition technology to achieve these functions; however, in a real traffic environment, license plates may be occluded, cloned, counterfeited or removed, in which case retrieval by license plate information cannot accurately locate the target vehicle.
In recent years, with the development of computer vision and multimedia technology, vehicle re-identification based on vehicle appearance information in surveillance videos has received attention from many researchers due to its important practical value. The task of vehicle re-identification is to find, under other cameras, the images of a target vehicle that appears in a certain camera, in order to realize relay tracking across cameras.
However, because the cameras are installed at different positions, illumination changes, viewing angle changes and resolution differences arise; in addition, in complex monitoring scenes, vehicles occlude one another to different degrees. This causes intra-class differences (the same vehicle looks different at different viewing angles) and inter-class similarities (different vehicles of the same model look alike), which makes the vehicle re-identification problem even more difficult.
Existing supervised vehicle re-identification methods can be classified into feature-learning-based methods and metric-learning-based methods. Feature-learning-based methods represent the vehicle image by designing effective features, so as to improve the matching accuracy of vehicle appearance features. Such methods have strong interpretability, but in an actual traffic monitoring environment the appearance of a vehicle varies with illumination changes, viewing angle changes, occlusion and the like, so the recognition rate is low. Metric-learning-based methods emphasize using a metric loss function to learn the similarity between vehicle images, and reduce the feature differences caused by illumination changes, viewing angle changes, occlusion and the like through feature projection.
At present, vehicle re-identification methods based on metric learning mainly learn a specific feature projection matrix so that, for the transformed features, the problems of intra-class difference and inter-class similarity caused by viewing angle changes can be alleviated. Bai et al., in "Group-Sensitive Triplet Embedding for Vehicle Reidentification", designed a group-sensitive triplet embedding method and performed metric learning in an end-to-end manner. Liu et al., in "Deep Relative Distance Learning: Tell the Difference between Similar Vehicles", proposed Deep Relative Distance Learning (DRDL), in which the features learned by different branch tasks are finally integrated through a fully connected layer to obtain the final mapped features; noting the instability of training with the triplet loss function, they proposed constructing positive and negative sample sets and using a coupled clusters loss function (Coupled Clusters Loss) in place of the triplet loss for metric learning, which makes vehicles of the same class more aggregated and vehicles of different classes more separated. However, in the metric learning process the re-identification model is very sensitive to the position of an image in the feature space, and existing vehicle re-identification metric learning methods have not deeply studied the difference between the feature spaces of the training set and the test set, or the generalization ability of the re-identification model under other cameras.
Disclosure of Invention
In order to overcome the deficiencies of the prior art, the application provides a vehicle re-identification method and system based on depth features and sparse metric projection. The influence of factors such as illumination conditions, camera parameters, viewing angles and occlusion on the appearance features of vehicles is fully considered, and an overcomplete dictionary and a set of meta-projection matrices are constructed in the data space. A sparse feature projection matrix is adaptively constructed for each vehicle image feature, which overcomes the diversity of vehicle image feature data distributions, improves the accuracy of vehicle re-identification, and at the same time enhances the generalization ability of the re-identification method.
In a first aspect, the application provides a vehicle re-identification method based on depth features and sparse metric projection;
the vehicle re-identification method based on depth features and sparse metric projection comprises the following steps:
acquiring a target vehicle image; acquiring an image set to be re-identified;
carrying out depth feature extraction on each image in the target vehicle image and the image set to be re-identified to obtain the depth feature of each image;
calculating a self-adaptive sparse projection matrix corresponding to the depth features of the target vehicle image; meanwhile, calculating a self-adaptive sparse projection matrix corresponding to the depth feature of each image in the image set to be re-identified;
calculating the distance between the depth feature of the target vehicle image and the depth feature of each image in the image set to be re-identified based on the depth feature and the self-adaptive sparse projection matrix corresponding to the depth feature;
repeating the previous step until the distances between the depth feature of the target vehicle image and the depth features of all images in the image set to be re-identified are calculated; and selecting the image corresponding to the minimum distance as the re-identification image of the target vehicle.
In a second aspect, the present application provides a vehicle re-identification system based on depth features and sparse metric projection;
the vehicle re-identification system based on depth features and sparse metric projection comprises:
an acquisition module configured to: acquiring a target vehicle image; acquiring an image set to be re-identified;
a feature extraction module configured to: carrying out depth feature extraction on each image in the target vehicle image and the image set to be re-identified to obtain the depth feature of each image;
a projection matrix calculation module configured to: calculating a self-adaptive sparse projection matrix corresponding to the depth features of the target vehicle image; meanwhile, calculating a self-adaptive sparse projection matrix corresponding to the depth feature of each image in the image set to be re-identified;
a distance calculation module configured to: calculating the distance between the depth feature of the target vehicle image and the depth feature of each image in the image set to be re-identified based on the depth feature and the adaptive sparse projection matrix corresponding to the depth feature;
an output module configured to: repeating the step of the distance calculation module until the distances between the depth feature of the target vehicle image and the depth features of all images in the image set to be re-identified are calculated; and selecting the image corresponding to the minimum distance as the re-identification image of the target vehicle.
In a third aspect, the present application further provides an electronic device, including: one or more processors, one or more memories, and one or more computer programs; wherein a processor is connected to the memory, the one or more computer programs being stored in the memory, and when the electronic device is running, the processor executes the one or more computer programs stored in the memory, so as to make the electronic device execute the method according to the first aspect.
In a fourth aspect, the present application also provides a computer-readable storage medium for storing computer instructions which, when executed by a processor, perform the method of the first aspect.
In a fifth aspect, the present application also provides a computer program (product) comprising a computer program for implementing the method of any of the preceding first aspects when run on one or more processors.
Compared with the prior art, the beneficial effects of this application are:
in an actual traffic monitoring scene, the imaging process of a vehicle image is easily influenced by the shooting environment (including many factors such as illumination conditions, camera parameters, shooting angle and external occlusion), and the feature corresponding to each vehicle image has a unique data distribution. Vehicle re-identification methods based on traditional metric learning cannot cope with this uniqueness of the feature distribution, so the accuracy of feature distance calculation and vehicle re-identification is not high. Based on this, a self-adaptive strategy is introduced into the traditional metric projection matrix learning process: a data space overcomplete dictionary and meta-projection matrices are constructed, and a self-adaptive sparse projection matrix is learned for each image feature, ensuring that all image features lie in the same data space after projection. On the one hand, the model maintains good nearest-neighbour calculation performance under various data distributions; on the other hand, the distance metric can better adapt to different types of practical application scenarios, improving the generalization ability of the system. Experimental results on the vehicle re-identification task demonstrate the effectiveness of the proposed method.
Advantages of additional aspects of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
FIG. 1 is a flow chart of an embodiment of the present application;
FIG. 2 is a flowchart of a data space adaptive sparse metric projection learning algorithm according to an embodiment of the present application;
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular is intended to include the plural unless the context clearly dictates otherwise, and furthermore, it should be understood that the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Example one
The embodiment provides a vehicle re-identification method based on depth features and sparse metric projection;
the vehicle re-identification method based on depth features and sparse metric projection comprises the following steps:
S101: acquiring a target vehicle image; acquiring an image set to be re-identified;
S102: carrying out depth feature extraction on the target vehicle image and each image in the image set to be re-identified to obtain the depth feature of each image;
S103: calculating a self-adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image;
meanwhile, calculating a self-adaptive sparse projection matrix corresponding to the depth feature of each image in the image set to be re-identified;
S104: calculating the distance between the depth feature of the target vehicle image and the depth feature of each image in the image set to be re-identified based on the depth features and their corresponding self-adaptive sparse projection matrices;
S105: repeating S104 until the distances between the depth feature of the target vehicle image and the depth features of all images in the image set to be re-identified are calculated; and selecting the image corresponding to the minimum distance as the re-identification image of the target vehicle.
As one or more embodiments, the S102: carrying out depth feature extraction on each image in the target vehicle image and the image set to be re-identified to obtain the depth feature of each image; the method specifically comprises the following steps:
adopting an improved VGG-19 network to carry out depth feature extraction on the target vehicle image and each image in the image set to be re-identified, to obtain the depth feature of each image;
the improved VGG-19 network removes the last two fully connected layers of the VGG-19 network, retaining only the first 16 convolutional layers and the first fully connected layer.
The improved VGG-19 network is pre-trained by adopting an ImageNet data set.
As one or more embodiments, the S103: the step of calculating the adaptive sparse projection matrix corresponding to the depth features of the target vehicle image is consistent with the step of calculating the adaptive sparse projection matrix corresponding to the depth features of each image in the image set to be re-identified.
As one or more embodiments, the S103: calculating a self-adaptive sparse projection matrix corresponding to the depth features of the target vehicle image; the method specifically comprises the following steps:
S1031: calculating the sparse coefficients corresponding to the depth feature of the target vehicle image according to the overcomplete dictionary;
S1032: taking the sparse coefficients corresponding to the depth feature of the target vehicle image as weights, and performing a weighted summation of the meta-projection matrices to obtain the self-adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image.
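The two steps above can be sketched in code. The sketch below is an illustration rather than the patent's exact solver: the least-squares-plus-soft-threshold coding stands in for the gradient-descent coding of the patent, and all names are hypothetical.

```python
import numpy as np

def adaptive_projection(x, D, metas, lam=0.0):
    """Sketch of S1031-S1032: code x over dictionary D, then combine the
    meta-projection matrices with the sparse coefficients as weights."""
    # one-step l1-style coding: least squares followed by soft-thresholding
    a, *_ = np.linalg.lstsq(D, x, rcond=None)
    a = np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)
    # self-adaptive sparse projection matrix: weighted sum of meta matrices
    W = sum(ak * Wk for ak, Wk in zip(a, metas))
    return W, a
```

For example, with D the identity and meta matrices I and 2I, the feature (1, 0) codes to weights (1, 0), so the adaptive matrix reduces to the first meta matrix.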
Further, the overcomplete dictionary is obtained using training data.
Further, the obtaining step of the overcomplete dictionary comprises:
s10311: initializing an overcomplete dictionary D as a K clustering center of a training data space; initializing each element of a sparse coefficient matrix; the training data includes: an image of a known target vehicle, and a re-identification image of the known target vehicle;
s10312: calculating a characteristic sparse coding loss function;
s10313: according to the characteristic sparse coding loss function, an iterative training strategy is adopted, firstly, an overcomplete dictionary D is fixed, a sparse coefficient matrix is updated by using a gradient descent method, then, the sparse coefficient matrix is fixed, and the overcomplete dictionary D is updated by using the gradient descent method.
Illustratively, the specific step of calculating the overcomplete dictionary and the sparse coefficients includes:
(11) Initialize the overcomplete dictionary D as the K clustering centers of the training data space, and initialize each element of the sparse coefficient matrix to 1/K.
(12) Compute the feature sparse coding loss function, given in equation (1):
Ω(D, α) = ||F − Dα||_F² + λ||α||₁   (1)
wherein: F is the training data set feature matrix, α is the sparse coefficient matrix, and λ is a balance coefficient.
(13) According to the formula (1), an iterative training strategy is adopted, firstly, the overcomplete dictionary D is fixed, the sparse coefficient matrix alpha is updated by using a gradient descent method, then, the sparse coefficient matrix alpha is fixed, and the overcomplete dictionary D is updated by using the gradient descent method.
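Steps (11)-(13) amount to alternating gradient descent on the objective of equation (1). The following is an illustrative miniature under stated assumptions (random atom initialisation in place of K-means clustering, a subgradient for the l1 term, small fixed step sizes; all names are hypothetical), not the patent's implementation:

```python
import numpy as np

def learn_dictionary(F, K=4, lam=0.1, lr=1e-3, iters=300, seed=0):
    """Alternating updates of dictionary D and coefficients A for
    ||F - D A||_F^2 + lam * ||A||_1 (cf. equation (1))."""
    rng = np.random.default_rng(seed)
    d, n = F.shape
    D = F[:, rng.choice(n, size=K, replace=False)].copy()  # init atoms from data
    A = np.full((K, n), 1.0 / K)                           # init coefficients 1/K

    def loss(D, A):
        return np.sum((F - D @ A) ** 2) + lam * np.sum(np.abs(A))

    loss0 = loss(D, A)
    for _ in range(iters):
        R = F - D @ A
        A -= lr * (-2.0 * D.T @ R + lam * np.sign(A))  # fix D, update A
        R = F - D @ A
        D -= lr * (-2.0 * R @ A.T)                     # fix A, update D
    return D, A, loss0, loss(D, A)
```

With a sufficiently small step size the objective decreases over the iterations, mirroring the iterative training strategy of step (13).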
Further, the meta-projection matrix is also obtained using training data.
Further, the meta-projection matrix obtaining step comprises:
S10321: adopting a joint training strategy, constructing a composite projection matrix by concatenating the meta-projection matrices;
S10322: calculating the loss function of the composite projection matrix;
S10323: calculating the gradient of the composite projection matrix with a gradient descent strategy according to the loss function of the composite projection matrix, and updating the composite projection matrix to obtain each meta-projection matrix.
Illustratively, the set of meta-projection matrices is computed as follows.
Using a joint training strategy, define the composite projection matrix W̃ = [W₁, W₂, …, W_K] and the composite feature vectors x̃_i = α_i ⊗ x_i and ỹ_j = α_j ⊗ y_j, where ⊗ denotes the Kronecker product, so that W_{x_i} x_i = W̃ x̃_i.
(21) Calculate the composite projection matrix loss function, given in equation (2):
Ψ(W̃) = Σ_{i,j,l} η_ij (1 − s_il) σ_β(1 + ||W̃x̃_i − W̃ỹ_j||₂² − ||W̃x̃_i − W̃ỹ_l||₂²)   (2)
wherein: if sample x̃_i and sample ỹ_l belong to the same vehicle, then s_il = 1, otherwise s_il = 0; if ỹ_j is one of the k nearest neighbours of x̃_i, and x̃_i and ỹ_j belong to the same vehicle, then η_ij = 1, otherwise η_ij = 0.
(22) Using a gradient descent strategy, compute ∂Ψ/∂W̃ from equation (2), obtaining equation (3):
∂Ψ/∂W̃ = 2W̃ Σ_{i,j,l} η_ij (1 − s_il) σ_β′(·) [(x̃_i − ỹ_j)(x̃_i − ỹ_j)ᵀ − (x̃_i − ỹ_l)(x̃_i − ỹ_l)ᵀ]   (3)
wherein: σ_β(x) = (1 + e^{−βx})^{−1}, and η_ij and s_il correspond to the parameters of equation (2).
(23) Update W̃ on the basis of equation (3), with the update rule given in equation (4):
W̃ ← W̃ − λ ∂Ψ/∂W̃   (4)
wherein: λ is the step size of the iterative update.
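The indicator quantities s_il and η_ij used by the composite-projection loss can be built directly from vehicle labels and feature-space nearest neighbours. A minimal sketch follows (illustrative only; the function and variable names are hypothetical, and plain Euclidean distance is assumed for the neighbour search):

```python
import numpy as np

def build_indicators(X, Y, labels_x, labels_y, k=3):
    """S[i, l] = 1 iff x_i and y_l are the same vehicle; eta[i, j] = 1 iff
    y_j is among the k nearest neighbours of x_i AND the same vehicle."""
    S = (labels_x[:, None] == labels_y[None, :]).astype(int)
    # pairwise Euclidean distances between camera-A and camera-B features
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    eta = np.zeros_like(S)
    for i in range(len(labels_x)):
        for j in np.argsort(d[i])[:k]:        # k nearest camera-B features
            if labels_x[i] == labels_y[j]:    # keep only same-vehicle pairs
                eta[i, j] = 1
    return S, eta
```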
As one or more embodiments, the S104: calculating the distance between the depth feature of the target vehicle image and the depth feature of each image in the image set to be re-identified based on the depth feature and the adaptive sparse projection matrix corresponding to the depth feature; the method comprises the following specific steps:
multiplying the depth feature of the target vehicle image by the self-adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image to obtain a first product;
multiplying the depth feature of an image in the image set to be re-identified by the self-adaptive sparse projection matrix corresponding to that depth feature to obtain a second product;
calculating the distance between the first product and the second product;
the distance between the first product and the second product is the distance between the depth feature of the target vehicle image and the depth feature of that image in the image set to be re-identified.
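As a sketch (names hypothetical), the steps above amount to:

```python
import numpy as np

def projected_distance(x, W_x, y, W_y):
    """Distance between depth features after each is projected by its own
    self-adaptive sparse projection matrix (first product vs second product)."""
    diff = W_x @ x - W_y @ y           # first product minus second product
    return float(diff @ diff)          # squared Euclidean distance
```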
Since the overcomplete dictionary and the meta-projection matrices are obtained from a training data set, a training stage and a testing stage precede S101-S105. The specific steps are as follows: collect images of the same vehicles under different cameras; divide the vehicle images into a training image set and a test image set, and extract features from the images to form a training data set and a test data set respectively; learn the calculation method of the data space self-adaptive sparse metric projection matrix on the training data set; then, on the test data set, perform image feature transformation using the data space self-adaptive sparse metric projection matrix, and perform distance calculation based on the transformed image features to complete vehicle re-identification.
As shown in fig. 1, the training phase and the testing phase include the following steps:
step 1): collecting images of the same vehicle under different cameras;
step 2): dividing the vehicle image into a training image set and a test image set, and performing feature extraction on the images to respectively form a training data set and a test data set;
step 3): learning the calculation method of the data space self-adaptive sparse metric projection matrix on the training data set;
step 4): on the test data set, performing image feature transformation using the data space self-adaptive sparse metric projection matrix, and performing distance calculation based on the transformed image features to complete vehicle re-identification.
The step 1): for M vehicles, images of each vehicle under camera a and camera B are collected, forming sets of images X and Y, respectively.
The step 2): and randomly selecting N vehicles from the M vehicles, extracting the images belonging to the N vehicles from the image sets X and Y to form a training image set, and forming a test image set by the images belonging to the rest M-N vehicles.
In step 2), N vehicles are randomly selected from the M vehicles, and the images belonging to these N vehicles are extracted from the image sets X and Y to form the training image set, 2×N images in total; the images of the remaining M−N vehicles form the test image set, 2×(M−N) images in total. Depth feature extraction is performed on all images in the training image set to form the training data set, and on all images in the test image set to form the test data set.
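The split in step 2) can be sketched as follows (illustrative; names hypothetical). Each vehicle contributes one image per camera, so N training vehicles yield 2×N training images:

```python
import random

def split_vehicles(M, N, seed=0):
    """Randomly pick N of M vehicle IDs for training; the rest are test."""
    ids = list(range(M))
    random.Random(seed).shuffle(ids)
    train, test = set(ids[:N]), set(ids[N:])
    # with cameras A and B, image counts are 2*N and 2*(M - N)
    return train, test, 2 * N, 2 * (M - N)
```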
The step 3) is performed on a training data set, and comprises the following steps:
defining the projection matrix of a feature sample x as equation (5):
W_x = Σ_{k=1..K} α_k W_k   (5)
wherein {W_k}_{k=1..K} is the set of meta-projection matrices corresponding to the sample space overcomplete dictionary D = [d₁, …, d_K], and α = (α₁, …, α_K)ᵀ are the sparse coefficients of x over D.
The step 4) is performed on the test data set, and comprises the following steps:
4.1) for any image feature x_test in the test data set, compute its self-adaptive sparse projection matrix W_{x_test} as follows:
4.1.1) fix the overcomplete dictionary D and calculate the sparse coefficients of x_test according to steps (11), (12) and (13);
4.1.2) calculate the adaptive projection matrix W_{x_test} of x_test according to equation (5);
4.2) calculate the distance between the target image feature x_test and the first image feature to be re-identified y₁, as shown in equation (6):
d(x_test, y₁) = ||W_{x_test} x_test − W_{y₁} y₁||₂²   (6)
4.3) repeat step 4.2) until x_test has been compared against all image features to be re-identified in the test data set; the image corresponding to the minimum distance is considered to belong to the same vehicle as x_test.
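Steps 4.1)-4.3) in miniature (an illustrative sketch; names hypothetical), using the projected squared-Euclidean distance of step 4.2):

```python
import numpy as np

def re_identify(x, W_x, gallery, gallery_W):
    """Return the index of the gallery feature closest to the query after
    each feature is transformed by its own adaptive projection matrix."""
    dists = [float(np.sum((W_x @ x - W @ y) ** 2))
             for y, W in zip(gallery, gallery_W)]
    return int(np.argmin(dists)), dists
```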
The step 1) comprises the following steps:
1.1) given a vehicle image, resize it to 224×224 pixels;
1.2) feed the image obtained in step 1.1) into the VGG-19 network for feature extraction to obtain a 4096-dimensional feature vector;
the 16 convolutional layers and the 1st fully connected layer of the VGG-19 network serve as the feature extraction part, with the last 2 fully connected layers of the VGG-19 network removed.
After step 1.2), the feature vector is further reduced in dimensionality using PCA; with 80% of the eigenvalue energy retained, a 127-dimensional feature vector is finally obtained.
In this embodiment, the specific implementation of the feature extraction method is described as follows:
constructing a VGG-19 network based on ImageNet data set pre-training, removing the last 2 full connection layers of the VGG-19 network, and reserving 16 convolution layers and the 1 st full connection layer of the VGG-19 network as a deep feature extraction network.
Normalizing the vehicle image size to 224 x 224 pixels;
sending the image into a depth feature extraction network for feature extraction to obtain 4096-dimensional feature vectors;
in order to reduce the number of model parameters and improve the generalization ability of the model, the method uses PCA to perform a dimensionality reduction operation on the original features; with 80% of the eigenvalue energy retained, a 127-dimensional feature vector is finally obtained.
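The PCA reduction can be sketched as below (illustrative; names hypothetical). The number of retained dimensions is data-dependent: the 127-dimension figure in the text is what 80% retained variance happened to give on the authors' features.

```python
import numpy as np

def pca_reduce(X, ratio=0.8):
    """Project rows of X onto the fewest principal components whose
    cumulative variance reaches `ratio` of the total."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data: squared singular values are component variances
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), ratio)) + 1
    return Xc @ Vt[:k].T, k
```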
Step 3): learning a data space self-adaptive sparse projection matrix calculation method on the training data set.
Data space self-adaptation means that a self-adaptive projection matrix is learned for each image feature vector, so that all image feature vectors lie in the same data space after projection, guaranteeing the effectiveness of nearest-neighbour comparison. In actual operation, in order to improve algorithm efficiency, an overcomplete dictionary and meta-projection matrices are constructed in the data space based on a sparse coding approximate learning method; the overcomplete dictionary is used to sparsely encode the feature data, and the coding coefficients are combined with the meta-projection matrices to construct the data space self-adaptive sparse projection matrix.
As shown in fig. 2, in the flowchart of the learning data space adaptive sparse projection matrix calculation method provided in the embodiment, a specific learning process is as follows:
3.1) initialize the overcomplete dictionary D and the sparse coefficient matrix α;
3.2) calculate the feature sparse-coding loss function using equation (1);
3.3) iteratively update the overcomplete dictionary D and the sparse coefficient matrix α with an iterative gradient optimization strategy, completing their optimization;
3.4) calculate the loss function according to equation (1) using the updated D and α; if Δω > ε1, return to step 3.3); otherwise determine that the process has converged and output the corresponding D and α;
3.5) calculate the composite projection matrix loss function using equation (2);
3.6) update the composite projection matrix with a gradient optimization strategy according to equations (3) and (4);
3.7) calculate the loss function according to equation (2) using the updated composite projection matrix; if ΔΨ > ε2, return to step 3.6); otherwise determine that the process has converged and output the corresponding composite projection matrix (i.e., the meta-projection matrices).
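Steps 3.1)–3.4) above describe an alternating ("fix one, update the other") gradient optimization with a convergence test on the change in the loss. The control flow can be sketched as follows; the loss here is a generic sparse-coding objective, 0.5·||X − DA||²_F + λ||A||₁, standing in for the patent's equation (1), whose exact form is not reproduced in this text:

```python
import numpy as np

def dictionary_learning(X, K, lam=0.1, step=1e-3, eps=1e-6, max_outer=500):
    """Alternating gradient optimization of an overcomplete dictionary D and
    sparse codes A for the stand-in loss 0.5*||X - D A||_F^2 + lam*||A||_1."""
    d, n = X.shape
    rng = np.random.default_rng(0)
    D = rng.normal(size=(d, K))
    D /= np.linalg.norm(D, axis=0)                    # 3.1) initialize dictionary D
    A = np.zeros((K, n))                              # 3.1) initialize coefficient matrix

    def loss(D, A):                                   # 3.2) sparse-coding loss
        return 0.5 * np.sum((X - D @ A) ** 2) + lam * np.abs(A).sum()

    prev = loss(D, A)
    for _ in range(max_outer):                        # 3.3) iterative updates
        A -= step * (D.T @ (D @ A - X))               # fix D, gradient step on A
        A = np.sign(A) * np.maximum(np.abs(A) - step * lam, 0.0)  # l1 shrinkage
        D -= step * ((D @ A - X) @ A.T)               # fix A, gradient step on D
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # keep atoms unit-norm
        cur = loss(D, A)
        if abs(prev - cur) <= eps:                    # 3.4) converged: loss change below threshold
            break
        prev = cur
    return D, A
```

The same "compute loss, gradient step, test the change against a threshold" pattern covers steps 3.5)–3.7) for the composite projection matrix.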
Step 4): based on the data-space adaptive sparse projection matrix calculation method learned in step 3), perform vehicle re-identification on the test data set. The specific implementation is as follows:
4.1) in the test data set, calculate the distance between the image features of the first vehicle under camera A and the features of all vehicles under camera B (M-N vehicles in total) according to equation (6), obtaining M-N distance results;
4.2) sort the M-N distance results obtained in step 4.1) in ascending order; the camera-B image corresponding to the smallest distance is the image that the method judges to belong to the same vehicle as the camera-A image;
4.3) repeat steps 4.1) and 4.2) to complete the distance calculation and vehicle-consistency judgment for all image features under camera A against all image features under camera B.
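Steps 4.1)–4.3) amount to a nearest-neighbor search under the learned projected distance. A minimal sketch, assuming the distance of equation (6) reduces to the Euclidean distance between projected features (an assumption made here for illustration):

```python
import numpy as np

def reidentify(feats_a, feats_b, mats_a, mats_b):
    """For each camera-A feature, rank all camera-B features by projected distance
    and return the index of the best match (smallest distance)."""
    matches = []
    for xa, Wa in zip(feats_a, mats_a):
        pa = Wa @ xa                                  # project the camera-A feature
        # 4.1) distance to every camera-B feature in the projected space
        dists = [np.linalg.norm(pa - Wb @ xb) for xb, Wb in zip(feats_b, mats_b)]
        order = np.argsort(dists)                     # 4.2) sort ascending
        matches.append(int(order[0]))                 # smallest distance -> same vehicle
    return matches                                    # 4.3) repeated for all camera-A features
```

Each entry of the returned list is the camera-B index judged to show the same vehicle as the corresponding camera-A image.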
In summary, the method collects images of the same vehicle under different cameras and divides them into a training image set and a test image set; extracts features from the images to form a training data set and a test data set; learns the calculation method of the data-space adaptive sparse metric projection matrix on the training data set; transforms the image features of the test data set with this adaptive sparse metric projection matrix; and performs distance calculation on the transformed image features to complete vehicle re-identification.
In a traffic monitoring network, the shooting environment of each vehicle image (including factors such as illumination, camera angle, camera parameters, and occlusion) differs, so the corresponding features follow a unique data distribution.
The method belongs to the family of metric-learning approaches; its goal is to project all feature vectors into a unified feature space, so that features of the same vehicle are closer together and features of different vehicles are farther apart.
Example two
This embodiment provides a vehicle re-identification system based on depth features and sparse metric projection.
The vehicle re-identification system based on depth features and sparse metric projection comprises:
an acquisition module configured to: acquire a target vehicle image and an image set to be re-identified;
a feature extraction module configured to: perform depth feature extraction on the target vehicle image and on each image in the image set to be re-identified, to obtain the depth feature of each image;
a projection matrix calculation module configured to: calculate the adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image, and calculate the adaptive sparse projection matrix corresponding to the depth feature of each image in the image set to be re-identified;
a distance calculation module configured to: calculate the distance between the depth feature of the target vehicle image and the depth feature of each image in the image set to be re-identified, based on the depth features and their corresponding adaptive sparse projection matrices;
an output module configured to: repeat the distance calculation until the distances between the depth feature of the target vehicle image and the depth features of all images in the image set to be re-identified have been calculated, and select the image corresponding to the minimum distance as the re-identification image of the target vehicle.
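Read together, the five modules form a single pipeline. The following schematic composition is a hypothetical wiring, with the internals of each module injected as callables rather than the patented implementations:

```python
import numpy as np

class ReIdSystem:
    """Schematic pipeline: acquisition is done by the caller, the other modules are injected callables."""
    def __init__(self, extract, project, distance):
        self.extract = extract        # feature extraction module
        self.project = project        # projection matrix calculation module
        self.distance = distance      # distance calculation module

    def run(self, target_image, gallery_images):
        ft = self.extract(target_image)
        fg = [self.extract(im) for im in gallery_images]
        Wt = self.project(ft)                                  # adaptive matrix for the target
        Wg = [self.project(f) for f in fg]                     # adaptive matrix per gallery image
        dists = [self.distance(Wt @ ft, W @ f) for W, f in zip(Wg, fg)]
        return min(range(len(dists)), key=dists.__getitem__)   # output module: index of min distance

# Hypothetical wiring: identity extractor, identity projection, Euclidean distance.
system = ReIdSystem(
    extract=lambda im: np.asarray(im, dtype=float),
    project=lambda f: np.eye(f.size),
    distance=lambda a, b: float(np.linalg.norm(a - b)),
)
print(system.run([1.0, 0.0], [[0.0, 1.0], [1.0, 0.0]]))  # 1
```

Swapping in a real CNN extractor and the learned projection matrices leaves the control flow unchanged.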
It should be noted that the acquisition module, feature extraction module, projection matrix calculation module, distance calculation module, and output module correspond to steps S101 to S105 of the first embodiment; the modules share the same examples and application scenarios as the corresponding steps, but are not limited to the contents disclosed in the first embodiment. Note also that the above modules, as parts of a system, may be implemented in a computer system, for example as a set of computer-executable instructions.
In the foregoing embodiments, the description of each embodiment has an emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions in other embodiments.
The proposed system may be implemented in other ways. The system embodiments described above are merely illustrative: the division into modules is only a logical division, and an actual implementation may divide them differently; for example, multiple modules may be combined or integrated into another system, or some features may be omitted or not executed.
Example three
This embodiment provides an electronic device, comprising: one or more processors, one or more memories, and one or more computer programs; wherein the processor is connected to the memory, the one or more computer programs are stored in the memory, and when the electronic device runs, the processor executes the one or more computer programs stored in the memory, so as to cause the electronic device to perform the method according to the first embodiment.
It should be understood that in this embodiment the processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory may include both read-only memory and random access memory, and may provide instructions and data to the processor, and a portion of the memory may also include non-volatile random access memory. For example, the memory may also store device type information.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software.
The method of the first embodiment may be implemented directly by a hardware processor, or by a combination of hardware and software modules in the processor. The software modules may reside in RAM, flash memory, ROM, PROM or EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the method in combination with its hardware. To avoid repetition, details are not described here.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Example four
The present embodiments also provide a computer-readable storage medium for storing computer instructions, which when executed by a processor, perform the method of the first embodiment.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (7)

1. A vehicle re-identification method based on depth features and sparse metric projection, characterized by comprising the following steps:
acquiring a target vehicle image, and acquiring an image set to be re-identified;
performing depth feature extraction on the target vehicle image and on each image in the image set to be re-identified, to obtain the depth feature of each image;
calculating an adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image, and calculating an adaptive sparse projection matrix corresponding to the depth feature of each image in the image set to be re-identified;
wherein calculating the adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image specifically comprises:
calculating, according to an overcomplete dictionary, the sparse coefficients corresponding to the depth feature of the target vehicle image;
using the sparse coefficients corresponding to the depth feature of the target vehicle image as weights, performing a weighted summation of the meta-projection matrices to obtain the adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image;
wherein the overcomplete dictionary is obtained by the following steps:
initializing the overcomplete dictionary D as the K cluster centers of the training data space, and initializing each element of the sparse coefficient matrix; the training data comprising an image of a known target vehicle and a re-identification image of the known target vehicle;
calculating a feature sparse-coding loss function;
according to the feature sparse-coding loss function, adopting an iterative training strategy: first fixing the overcomplete dictionary D and updating the sparse coefficient matrix with a gradient descent method, then fixing the sparse coefficient matrix and updating the overcomplete dictionary D with the gradient descent method;
wherein the meta-projection matrices are obtained by the following steps:
adopting a joint training strategy, constructing a composite projection matrix by concatenating the meta-projection matrices;
calculating a loss function of the composite projection matrix;
according to the loss function of the composite projection matrix, calculating the gradient of the composite projection matrix with a gradient descent strategy and updating it, to obtain each meta-projection matrix;
calculating the distance between the depth feature of the target vehicle image and the depth feature of each image in the image set to be re-identified, based on the depth features and their corresponding adaptive sparse projection matrices;
repeating the previous step until the distances between the depth feature of the target vehicle image and the depth features of all images in the image set to be re-identified have been calculated; and selecting the image corresponding to the minimum distance as the re-identification image of the target vehicle.
2. The vehicle re-identification method based on depth features and sparse metric projection according to claim 1, wherein performing depth feature extraction on the target vehicle image and on each image in the image set to be re-identified to obtain the depth feature of each image specifically comprises:
performing depth feature extraction on the target vehicle image and on each image in the image set to be re-identified with an improved VGG-19 network, to obtain the depth feature of each image;
wherein the improved VGG-19 network removes the last two fully connected layers of the VGG-19 network, retaining only the first 16 convolutional layers and the first fully connected layer.
3. The vehicle re-identification method based on depth features and sparse metric projection according to claim 1, wherein calculating the distance between the depth feature of the target vehicle image and the depth feature of each image in the image set to be re-identified, based on the depth features and their corresponding adaptive sparse projection matrices, specifically comprises:
multiplying the depth feature of the target vehicle image by the adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image, to obtain a first product;
multiplying the depth feature of an image in the image set to be re-identified by the adaptive sparse projection matrix corresponding to that depth feature, to obtain a second product;
calculating the distance between the first product and the second product;
wherein the distance between the first product and the second product is the distance between the depth feature of the target vehicle image and the depth feature of that image in the image set to be re-identified.
4. The method according to claim 2, wherein the improved VGG-19 network is pre-trained using the ImageNet data set.
5. A vehicle re-identification system based on depth features and sparse metric projection, using the vehicle re-identification method based on depth features and sparse metric projection according to claim 1, comprising:
an acquisition module configured to: acquire a target vehicle image and an image set to be re-identified;
a feature extraction module configured to: perform depth feature extraction on the target vehicle image and on each image in the image set to be re-identified, to obtain the depth feature of each image;
a projection matrix calculation module configured to: calculate the adaptive sparse projection matrix corresponding to the depth feature of the target vehicle image, and calculate the adaptive sparse projection matrix corresponding to the depth feature of each image in the image set to be re-identified;
a distance calculation module configured to: calculate the distance between the depth feature of the target vehicle image and the depth feature of each image in the image set to be re-identified, based on the depth features and their corresponding adaptive sparse projection matrices;
an output module configured to: repeat the distance calculation until the distances between the depth feature of the target vehicle image and the depth features of all images in the image set to be re-identified have been calculated, and select the image corresponding to the minimum distance as the re-identification image of the target vehicle.
6. An electronic device, comprising: one or more processors, one or more memories, and one or more computer programs; wherein the processor is connected to the memory, the one or more computer programs are stored in the memory, and when the electronic device runs, the processor executes the one or more computer programs stored in the memory, so as to cause the electronic device to perform the method according to any one of claims 1-4.
7. A computer-readable storage medium storing computer instructions which, when executed by a processor, perform the method of any one of claims 1 to 4.
CN202110014228.6A 2021-01-05 2021-01-05 Vehicle weight identification method and system based on depth feature and sparse measurement projection Active CN112699829B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110014228.6A CN112699829B (en) 2021-01-05 2021-01-05 Vehicle weight identification method and system based on depth feature and sparse measurement projection
PCT/CN2021/103200 WO2022147977A1 (en) 2021-01-05 2021-06-29 Vehicle re-identification method and system based on depth feature and sparse metric projection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110014228.6A CN112699829B (en) 2021-01-05 2021-01-05 Vehicle weight identification method and system based on depth feature and sparse measurement projection

Publications (2)

Publication Number Publication Date
CN112699829A CN112699829A (en) 2021-04-23
CN112699829B true CN112699829B (en) 2022-08-30

Family

ID=75514949

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110014228.6A Active CN112699829B (en) 2021-01-05 2021-01-05 Vehicle weight identification method and system based on depth feature and sparse measurement projection

Country Status (2)

Country Link
CN (1) CN112699829B (en)
WO (1) WO2022147977A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699829B (en) * 2021-01-05 2022-08-30 山东交通学院 Vehicle weight identification method and system based on depth feature and sparse measurement projection

Citations (5)

Publication number Priority date Publication date Assignee Title
CN106056141A (en) * 2016-05-27 2016-10-26 哈尔滨工程大学 Target recognition and angle coarse estimation algorithm using space sparse coding
CN106682087A (en) * 2016-11-28 2017-05-17 东南大学 Method for retrieving vehicles on basis of sparse codes of features of vehicular ornaments
CN108509854A (en) * 2018-03-05 2018-09-07 昆明理工大学 A kind of constrained based on projection matrix combines the pedestrian's recognition methods again for differentiating dictionary learning
CN109241981A (en) * 2018-09-03 2019-01-18 哈尔滨工业大学 A kind of characteristic detection method based on sparse coding
CN109492610A (en) * 2018-11-27 2019-03-19 广东工业大学 A kind of pedestrian recognition methods, device and readable storage medium storing program for executing again

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN107085206B (en) * 2017-03-22 2020-02-28 南京航空航天大学 One-dimensional range profile identification method based on adaptive sparse preserving projection
CN108875445B (en) * 2017-05-08 2020-08-25 深圳荆虹科技有限公司 Pedestrian re-identification method and device
CN109145777A (en) * 2018-08-01 2019-01-04 北京旷视科技有限公司 Vehicle recognition methods, apparatus and system again
CN109635728B (en) * 2018-12-12 2020-10-13 中山大学 Heterogeneous pedestrian re-identification method based on asymmetric metric learning
EP3722998A1 (en) * 2019-04-11 2020-10-14 Teraki GmbH Data analytics on pre-processed signals
CN110765960B (en) * 2019-10-29 2022-03-04 黄山学院 Pedestrian re-identification method for adaptive multi-task deep learning
CN112699829B (en) * 2021-01-05 2022-08-30 山东交通学院 Vehicle weight identification method and system based on depth feature and sparse measurement projection

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN106056141A (en) * 2016-05-27 2016-10-26 哈尔滨工程大学 Target recognition and angle coarse estimation algorithm using space sparse coding
CN106682087A (en) * 2016-11-28 2017-05-17 东南大学 Method for retrieving vehicles on basis of sparse codes of features of vehicular ornaments
CN108509854A (en) * 2018-03-05 2018-09-07 昆明理工大学 A kind of constrained based on projection matrix combines the pedestrian's recognition methods again for differentiating dictionary learning
CN109241981A (en) * 2018-09-03 2019-01-18 哈尔滨工业大学 A kind of characteristic detection method based on sparse coding
CN109492610A (en) * 2018-11-27 2019-03-19 广东工业大学 A kind of pedestrian recognition methods, device and readable storage medium storing program for executing again

Non-Patent Citations (4)

Title
"K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation"; M. Aharon et al.; IEEE Transactions on Signal Processing; 30 Nov. 2006; vol. 54, no. 11, pp. 4311-4322 *
"Learning to align multi-camera domains using part-aware clustering for unsupervised video person re-identification"; Youngeun Kim et al.; arXiv; 13 May 2020; pp. 1-11 *
"Research on Vehicle Re-identification Technology Based on Dictionary Learning"; Wang Panpan; China Masters' Theses Full-text Database, Information Science and Technology; 15 Jan. 2019 (no. 01); full text *
"Research on Pedestrian Re-identification Technology Based on Metric Learning and Sparse Representation"; Qiu Yuhui; China Masters' Theses Full-text Database, Information Science and Technology; 15 Dec. 2015 (no. 12); full text *

Also Published As

Publication number Publication date
CN112699829A (en) 2021-04-23
WO2022147977A1 (en) 2022-07-14

Similar Documents

Publication Publication Date Title
CN109117858B (en) Method and device for monitoring icing of wind driven generator blade
US10332028B2 (en) Method for improving performance of a trained machine learning model
CA2993011C (en) Enforced sparsity for classification
EP3251058A1 (en) Hyper-parameter selection for deep convolutional networks
CN110543581B (en) Multi-view three-dimensional model retrieval method based on non-local graph convolution network
KR20170140214A (en) Filter specificity as training criterion for neural networks
CN114359851A (en) Unmanned target detection method, device, equipment and medium
CN113312983A (en) Semantic segmentation method, system, device and medium based on multi-modal data fusion
US11270425B2 (en) Coordinate estimation on n-spheres with spherical regression
CN114830131A (en) Equal-surface polyhedron spherical gauge convolution neural network
CN116188999B (en) Small target detection method based on visible light and infrared image data fusion
CN114255403A (en) Optical remote sensing image data processing method and system based on deep learning
CN110689578A (en) Unmanned aerial vehicle obstacle identification method based on monocular vision
CN112699829B (en) Vehicle weight identification method and system based on depth feature and sparse measurement projection
CN111626120A (en) Target detection method based on improved YOLO-6D algorithm in industrial environment
CN115170859A (en) Point cloud shape analysis method based on space geometric perception convolutional neural network
CN110348299B (en) Method for recognizing three-dimensional object
CN113850783B (en) Sea surface ship detection method and system
CN114998610A (en) Target detection method, device, equipment and storage medium
CN114612698A (en) Infrared and visible light image registration method and system based on hierarchical matching
CN116229406B (en) Lane line detection method, system, electronic equipment and storage medium
CN112668662A (en) Outdoor mountain forest environment target detection method based on improved YOLOv3 network
Li et al. Detection of Imaged Objects with Estimated Scales.
EP4032028A1 (en) Efficient inferencing with fast pointwise convolution
CN116630816B (en) SAR target recognition method, device, equipment and medium based on prototype comparison learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant