CN112418168B - Vehicle identification method, device, system, electronic equipment and storage medium


Info

Publication number
CN112418168B
CN112418168B (application CN202011435225.1A)
Authority
CN
China
Prior art keywords
vehicle
training
features
splitting
learner
Prior art date
Legal status
Active
Application number
CN202011435225.1A
Other languages
Chinese (zh)
Other versions
CN112418168A (en)
Inventor
吴天舒
Current Assignee
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN202011435225.1A
Publication of CN112418168A
Application granted
Publication of CN112418168B


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 — Scenes; Scene-specific elements
    • G06V 20/50 — Context or environment of the image
    • G06V 20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/22 — Matching criteria, e.g. proximity measures
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks
    • G06N 3/08 — Learning methods
    • G06N 3/084 — Backpropagation, e.g. using gradient descent
    • G06V 2201/00 — Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/08 — Detecting or categorising vehicles

Abstract

The embodiment of the invention provides a vehicle identification method, device and electronic equipment. The method includes the following steps: extracting target vehicle features from a vehicle image to be identified; splitting the target vehicle features to obtain a preset number of vehicle split features; inputting the vehicle split features into preset learners respectively and identifying them to obtain a preset number of first recognition results; and post-processing the first recognition results to obtain a second recognition result, which is output as the vehicle recognition result. By splitting the fully connected layer into a preset number of learners that identify the vehicle split features, the original single recognition result is jointly represented across the dimensions of the several first recognition results. This makes the recognition result more robust, makes full use of the expressive power of the existing vehicle features, and improves recognition accuracy without adding the computational cost of an extra feature extraction network.

Description

Vehicle identification method, device, system, electronic equipment and storage medium
Technical Field
The present invention relates to the field of artificial intelligence, and in particular, to a vehicle identification method, apparatus, system, electronic device, and storage medium.
Background
With the rapid development of artificial intelligence, image recognition technology has been deployed in many application scenarios, such as face recognition in access control systems and vehicle recognition in traffic systems. In vehicle recognition applications, for example, a vehicle that has appeared in one monitoring device may be retrieved by extracting its feature information, the same target vehicle may be retrieved across other monitoring devices, or a vehicle may be tracked by its extracted features. Vehicle recognition is generally implemented by extracting features with a deep convolutional neural network and finally classifying the extracted vehicle information with a fully connected layer to obtain a recognition result. However, the quality of images captured by monitoring devices is often poor: vehicle motion blur, vehicle reflections, object occlusion and the like all affect the recognition result. Such noise is extracted along with the features, so the feature representation is inaccurate, the classification performance of the fully connected layer degrades, and recognition accuracy is low. Although more abstract and discriminative vehicle feature information can be obtained by deepening the feature extraction layers, this clearly increases the computational cost of feature extraction, which is disadvantageous for miniaturization and front-end deployment. The conventional vehicle recognition method therefore suffers from low robustness.
Disclosure of Invention
The embodiment of the invention provides a vehicle identification method that can improve the robustness of vehicle identification without adding the computational cost of extra feature extraction.
In a first aspect, an embodiment of the present invention provides a vehicle identification method, including:
extracting target vehicle characteristics in a vehicle image to be identified;
splitting the target vehicle features to obtain a preset number of vehicle splitting features;
respectively inputting the vehicle split features into preset learners, and identifying the vehicle split features to obtain a preset number of first recognition results, wherein the learners are obtained by splitting a fully connected layer, and the number of learners is the same as the number of vehicle split features;
and carrying out post-processing on the first recognition result to obtain a second recognition result, and outputting the second recognition result as a vehicle recognition result.
Optionally, the splitting the target vehicle feature to obtain a preset number of vehicle splitting features includes:
splitting the target vehicle features according to the dimension information of the target vehicle features to obtain a preset number of vehicle splitting features.
Optionally, the post-processing the first recognition result to obtain a second recognition result, and outputting the second recognition result as a vehicle recognition result includes:
splicing the first recognition results according to the sequence of the learner to obtain a second recognition result;
and outputting the second recognition result as a vehicle recognition result.
Optionally, the method further comprises:
acquiring a vehicle image training set, wherein the vehicle image training set comprises vehicle images;
training the deep neural network to be trained based on the vehicle image training set, wherein the deep neural network comprises a plurality of learners, and after the deep neural network to be trained completes training, the trained learners serve as the preset learners.
Optionally, the deep neural network includes a vehicle feature extractor, and the training the deep neural network to be trained based on the vehicle image training set includes:
extracting training features of vehicle images in a vehicle image training set through the vehicle feature extractor;
splitting the training features to obtain a preset number of training split features;
and training a corresponding learner in the deep neural network to be trained based on the training split characteristics.
Optionally, the training the corresponding learner in the deep neural network to be trained based on the training splitting feature includes:
acquiring the learning weight of the previous learner;
calculating the current loss of the current learner according to the current training splitting characteristics;
and calculating the learning weight of the current learner according to the learning weight of the previous learner and the current loss of the current learner.
Optionally, the vehicle image training set includes a plurality of sets of vehicle images, each set of vehicle images includes at least a first vehicle image and a second vehicle image, and training the deep neural network to be trained based on the vehicle image training set includes:
extracting first training features in the first vehicle image and extracting second training features in the second vehicle image;
splitting the first training features and the second training features respectively to obtain a preset number of first training splitting features and the same number of second training splitting features, wherein the splitting mode of the first training features is the same as that of the second training features;
based on the first training split feature and the second training split feature, training a corresponding learner in the deep neural network to be trained.
Optionally, the training the corresponding learner in the deep neural network to be trained based on the first training split feature and the second training split feature includes:
acquiring a first hyperparameter and a second hyperparameter;
calculating the current similarity between the current first training split feature and the current second training split feature;
calculating the loss of the current learner according to the first hyperparameter, the second hyperparameter and the current similarity;
training the current learner based on the loss of the current learner.
Optionally, before the calculating the loss of the current learner according to the first hyperparameter, the second hyperparameter and the current similarity, the method further includes:
determining a first parameter and a second parameter according to the current similarity;
the calculating the loss of the current learner according to the first hyperparameter, the second hyperparameter and the current similarity includes:
calculating the loss of the current learner according to the first hyperparameter, the second hyperparameter, the first parameter, the second parameter and the current similarity.
Optionally, the determining the first parameter and the second parameter according to the current similarity includes:
judging whether the current similarity is larger than a preset similarity threshold value or not;
if the current similarity is greater than the similarity threshold, determining the first parameter as a first preset value and determining the second parameter as a third preset value;
if the current similarity is smaller than the similarity threshold, determining the first parameter as a second preset value and determining the second parameter as a fourth preset value.
In a second aspect, an embodiment of the present invention further provides a vehicle identification apparatus, including:
the extraction module is used for extracting target vehicle characteristics in the vehicle image to be identified;
the splitting module is used for splitting the target vehicle characteristics to obtain a preset number of vehicle splitting characteristics;
the first processing module is used for respectively inputting the vehicle split features into preset learners and identifying the vehicle split features to obtain a preset number of first recognition results, wherein the learners are obtained by splitting a fully connected layer, and the number of learners is the same as the number of vehicle split features;
And the second processing module is used for carrying out post-processing on the first recognition result to obtain a second recognition result, and outputting the second recognition result as a vehicle recognition result.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the vehicle identification method according to any one of claims 1 to 10 when the computer program is executed.
In a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the vehicle identification method as claimed in any one of claims 1 to 10.
In the embodiment of the invention, the target vehicle features in the vehicle image to be identified are extracted; the target vehicle features are split to obtain a preset number of vehicle split features; the vehicle split features are respectively input into preset learners and identified to obtain a preset number of first recognition results, where the learners are obtained by splitting a fully connected layer and the number of learners equals the number of vehicle split features; and the first recognition results are post-processed to obtain a second recognition result, which is output as the vehicle recognition result. By splitting the fully connected layer into a preset number of learners that identify the vehicle split features, the original single recognition result is jointly represented across the dimensions of the several first recognition results. This makes the recognition result more robust, makes full use of the expressive power of the existing vehicle features, and improves recognition accuracy without adding the computational cost of an extra feature extraction network.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for identifying a vehicle according to an embodiment of the present invention;
FIG. 2 is a flowchart of a deep neural network training method according to an embodiment of the present invention;
FIG. 3 is a flow chart of another method for training a deep neural network according to an embodiment of the present invention;
FIG. 4 is a flow chart of another method for training a deep neural network according to an embodiment of the present invention;
FIG. 5 is a flowchart of a loss value calculation method according to an embodiment of the present invention;
fig. 6 is a schematic structural view of a vehicle identification device according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a second processing module according to an embodiment of the present invention;
fig. 8 is a schematic structural view of another vehicle identification apparatus according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a training module according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a first training sub-module according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of another training module according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of a second training sub-module according to an embodiment of the present invention;
FIG. 13 is a schematic structural view of another second training sub-module according to an embodiment of the present invention;
fig. 14 is a schematic structural view of a determining unit according to an embodiment of the present invention;
fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, fig. 1 is a flowchart of a vehicle identification method according to an embodiment of the present invention, and as shown in fig. 1, the vehicle identification method includes the following steps:
101. Extract the target vehicle features from the vehicle image to be identified.
In the embodiment of the present invention, the vehicle image to be identified may be a vehicle image collected by a monitoring device disposed at a traffic intersection, for example, a vehicle image collected by the monitoring device in a timing manner or a vehicle image collected in real time. The vehicle image to be identified may also be a vehicle image uploaded by the user.
The target vehicle features may be obtained by feature extraction through a preset vehicle feature extractor, which may be trained in advance. It should be appreciated that the vehicle feature extractor may also be referred to as a vehicle feature extraction network. In the image recognition process, image features are extracted by a feature extractor and then classified by a classifier, thereby recognizing the image. In other words, an image recognition model comprises a feature extractor and a feature classifier. In a deep convolutional neural network, the feature extractor may consist of multiple convolution layers, and the feature classifier may consist of a fully connected layer containing a category matrix, which computes the similarity and confidence between the image features and each category feature: the higher the similarity between the image features and a category feature, the more likely the image belongs to that category, and the higher the confidence, the more reliable the classification result.
The target vehicle features may be a feature vector with multiple dimensions; the vehicle feature extractor extracts the vehicle image to be identified into a feature vector with a predetermined number of dimensions, for example 1024, 512 or 256 dimensions.
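For concreteness, the conventional pipeline described above can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the patent's actual network: the class name, backbone layers, 512-dimensional feature size and class count are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class BaselineVehicleRecognizer(nn.Module):
    """Hypothetical baseline: convolutional extractor + single FC classifier."""
    def __init__(self, feature_dim: int = 512, num_classes: int = 1000):
        super().__init__()
        # Stand-in for a deep convolutional feature extractor.
        self.extractor = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, feature_dim),
        )
        # Single fully connected layer acting as the category matrix.
        self.classifier = nn.Linear(feature_dim, num_classes)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        features = self.extractor(image)   # (N, feature_dim)
        return self.classifier(features)   # (N, num_classes) logits
```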
102. Split the target vehicle features to obtain a preset number of vehicle split features.
In the embodiment of the present invention, the target vehicle features are a feature vector of the target vehicle, and splitting them means splitting that feature vector, that is, splitting one high-dimensional feature vector into a plurality of low-dimensional feature vectors (the vehicle split features). Further, the embodiment of the invention may split the target vehicle features according to their dimension information to obtain a preset number of vehicle split features.
The preset number can be set according to specific conditions, and can also be a default set by a developer. The preset number is an integer value equal to or greater than 2.
In one possible embodiment, the preset number may be a plurality of preset values, and the user may select different preset values in the interactive interface to split the target vehicle feature. Such as: the preset values can be set to be 2, 3, 4 and 5, and when the user selects the preset value to be 2 on the interactive interface, the target vehicle characteristic is split into 2 vehicle split characteristics; and when the user selects the preset value to be 3 on the interactive interface, splitting the target vehicle characteristic into 3 vehicle splitting characteristics.
The specific splitting manner may be to split the target vehicle features into a plurality of low-dimensional vehicle split features along their dimensions, where the dimensions of all the low-dimensional vehicle split features sum to the dimension of the target vehicle features. Taking a 512-dimensional target feature vector as an example: if it is split into 2 vehicle split features, each may be 256-dimensional; if it is split into 3 vehicle split features, they may be 128, 128 and 256 dimensions.
Further, the splitting may proceed sequentially along the dimensions of the target vehicle features. For example, for a 512-dimensional target feature vector split into 3 vehicle split features, dimensions 1 to 128 may form the first vehicle split feature, dimensions 128 to 256 the second, and dimensions 256 to 512 the third. The vehicle split features can thus be spliced in order to recover the original target vehicle features.
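The dimension-wise split described above can be shown in a short sketch. The 512-dimensional feature and the 128/128/256 split sizes follow the example in the text; the use of torch.split is an implementation assumption.

```python
import torch

target_feature = torch.randn(1, 512)            # (N, 512) feature vector
split_sizes = (128, 128, 256)                   # must sum to 512
vehicle_split_features = torch.split(target_feature, split_sizes, dim=1)

# torch.split preserves order, so splicing the chunks back together
# recovers the original target vehicle feature exactly.
restored = torch.cat(vehicle_split_features, dim=1)
assert torch.equal(restored, target_feature)
```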
103. Input the vehicle split features into the preset learners respectively, and identify them to obtain a preset number of first recognition results.
In the embodiment of the invention, the learners are obtained by splitting a fully connected layer, and the number of learners equals the number of vehicle split features. Specifically, the fully connected layer may be split into a corresponding number of learners according to the number of vehicle split features, and the combination of these learners replaces the original fully connected layer; the final result is then derived from the recognition results of the individual learners. After the vehicle features are split, each split feature is identified by its corresponding learner, and the final recognition result is obtained from the learners' individual results. This fully mines the local expressive power of the vehicle features and thereby improves the robustness of vehicle identification.
The number of first recognition results is the same as the preset number of vehicle split features.
For example, for a 512-dimensional target feature vector split into 3 vehicle split features, the first vehicle split feature may cover dimensions 1 to 128, the second dimensions 128 to 256, and the third dimensions 256 to 512. The number of learners is 3: the first vehicle split feature is input into the first learner to obtain the first of the first recognition results, the second into the second learner to obtain the second, and the third into the third learner to obtain the third.
In the embodiment of the invention, the learners are obtained by splitting the fully connected layer, and each learner contains a corresponding category matrix for classifying and identifying the vehicle split features.
During forward inference, the learners can run in parallel: after the target vehicle features are split, each vehicle split feature is input into its corresponding learner.
During training, the learners can be arranged in series, so that the spatial order relation among the vehicle split features strengthens each learner's dependence on the overall structure of the target vehicle features, improving each learner's recognition accuracy.
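A minimal sketch of the split-learner head described above, in PyTorch. The patent does not fix the learner's internal form or output size, so plain linear layers obtained by partitioning the fully connected layer's input dimensions, and the per-learner output dimension, are assumptions here.

```python
import torch
import torch.nn as nn

class SplitLearnerHead(nn.Module):
    """Replaces one nn.Linear(512, out_dim) with several smaller learners,
    one per vehicle split feature (hypothetical sizes)."""
    def __init__(self, split_sizes=(128, 128, 256), out_dim: int = 256):
        super().__init__()
        self.split_sizes = split_sizes
        self.learners = nn.ModuleList(
            [nn.Linear(size, out_dim) for size in split_sizes]
        )

    def forward(self, features: torch.Tensor):
        chunks = torch.split(features, self.split_sizes, dim=1)
        # Forward inference: each learner processes its own chunk in parallel.
        return [learner(chunk) for learner, chunk in zip(self.learners, chunks)]
```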
104. Post-process the first recognition results to obtain a second recognition result, and output the second recognition result as the vehicle recognition result.
In the embodiment of the present invention, the number of the first recognition results is the same as the number of the vehicle splitting features, that is, a preset number of first recognition results, where the preset number is greater than or equal to 2.
The post-processing refers to integrating the plurality of first recognition results to obtain a second recognition result as a final vehicle recognition result.
The post-processing may be splicing the first recognition results, in the splitting order of the vehicle split features, to form the second recognition result. Further, after splicing, logistic regression may be performed on the spliced first recognition results, that is, the spliced result undergoes one more fully connected computation, to obtain the second recognition result. For example, the first, second and third vehicle split features are input into the first, second and third learners to obtain a first, second and third recognition vector respectively as the first recognition results. The three recognition vectors are spliced into a result recognition vector, which is input into a fully connected layer for classification and recognition; the final output is the second recognition result.
In some possible embodiments, the post-processing may instead be averaging the first recognition results, or voting over the first recognition results.
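Both post-processing variants described above can be sketched briefly; the recognition-vector size and class count below are illustrative assumptions.

```python
import torch
import torch.nn as nn

first_results = [torch.randn(1, 256) for _ in range(3)]  # outputs of 3 learners

# Variant 1: splice in learner order, then one more fully connected pass
# (the "logistic regression" step); 1000 classes is a hypothetical count.
final_fc = nn.Linear(3 * 256, 1000)
second_result = final_fc(torch.cat(first_results, dim=1))

# Variant 2: element-wise average of the first recognition results.
second_result_avg = torch.stack(first_results, dim=0).mean(dim=0)
```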
The second recognition result takes different forms in different application scenarios. For vehicle recognition, attributes such as the color and model of the vehicle may be output as the vehicle recognition result. For vehicle re-identification (ReID), a predicted ID of the corresponding vehicle may be obtained to indicate the vehicle identity.
In the embodiment of the invention, the target vehicle features in the vehicle image to be identified are extracted; the target vehicle features are split to obtain a preset number of vehicle split features; the vehicle split features are respectively input into preset learners and identified to obtain a preset number of first recognition results, where the learners are obtained by splitting a fully connected layer and the number of learners equals the number of vehicle split features; and the first recognition results are post-processed to obtain a second recognition result, which is output as the vehicle recognition result. By splitting the fully connected layer into a preset number of learners that identify the vehicle split features, the original single recognition result is jointly represented across the dimensions of the several first recognition results. This makes the recognition result more robust, makes full use of the expressive power of the existing vehicle features, and improves recognition accuracy without adding the computational cost of an extra feature extraction network.
It should be noted that the vehicle identification method provided by the embodiment of the invention can be applied to devices capable of vehicle identification, such as mobile phones, monitors, computers, servers and computer peripherals.
Referring to fig. 2, fig. 2 is a flowchart of a deep neural network training method according to an embodiment of the present invention, where the deep neural network training method is used in the vehicle identification method, and as shown in fig. 2, the deep neural network training method includes the following steps:
201. Obtain a vehicle image training set.
The vehicle image training set includes a plurality of vehicle images as training samples, and the vehicle images in the vehicle image training set may be RGB images or gray images, which are not limited herein.
202. Train the deep neural network to be trained based on the vehicle image training set.
The deep neural network comprises a vehicle feature extractor and a plurality of learners; after the deep neural network to be trained completes training, the trained learners serve as the preset learners.
The vehicle feature extractor described above may be composed of multiple convolution computation layers for extracting abstract feature representations of images.
The learners can be obtained by splitting a fully connected layer in the deep neural network and are used to classify and identify the vehicle split features.
Specifically, referring to fig. 3, fig. 3 is a flowchart of another deep neural network training method according to an embodiment of the present invention, as shown in fig. 3, the step 202 specifically includes:
301. Extract training features of the vehicle images in the vehicle image training set through the vehicle feature extractor.
In the embodiment of the invention, the vehicle feature extractor can consist of multiple convolution layers, and abstract training features are extracted by convolving the vehicle images in the training set with the convolution kernels of each convolution layer. The training features may also be referred to as training feature vectors. The extracted training features may be feature vectors of a predetermined number of dimensions, such as 1024, 512 or 256 dimensions.
302. Split the training features to obtain a preset number of training split features.
In the embodiment of the present invention, the training feature refers to a feature vector extracted from a vehicle image in a vehicle image training set, and the splitting of the training feature may be the splitting of a feature vector, that is, splitting a high-dimensional feature vector into a plurality of low-dimensional feature vectors (that is, the training splitting feature). Further, the embodiment of the invention can split the training features according to the dimension information of the training features to obtain a preset number of training split features.
The preset number can be set according to specific conditions, and can also be a default set by a developer. The preset number is an integer value equal to or greater than 2.
In one possible embodiment, the preset number may be a plurality of preset values, and the user may select different preset values in the interactive interface to split the training feature. Such as: the preset values can be set to be 2, 3, 4 and 5, and when the user selects the preset value to be 2 on the interactive interface, the training features are split into 2 training split features; when the user selects the preset value to be 3 on the interactive interface, the training features are split into 3 training split features.
The specific splitting mode can be to split the training features into a plurality of low-dimensional training splitting features according to the dimensions of the training features, and the sum of the dimensions of all the low-dimensional training splitting features is the dimension of the training features. For the training feature as 512-dimensional feature vector, if the training feature is split into 2 training split features, the training feature can be split into 2 training split features with 256 dimensions; if the training features are split into 3 training split features, the training features can be split into 3 training split features of 128 dimensions, 128 dimensions and 256 dimensions.
Further, the training features may be split sequentially according to the dimensions of the training features, for example, for a 512-dimensional feature vector, if the training features are split into 3 training split features, 128 dimensions may be the first training split feature before the splitting, 128 dimensions to 256 dimensions may be the second training split feature, and 256 dimensions to 512 dimensions may be the third training split feature. Thus, the training split features can be spliced in sequence, and the original training features can be obtained.
303. Train the corresponding learners in the deep neural network to be trained based on the training split features.
In the embodiment of the invention, the learners to be trained are obtained by splitting the fully connected layer according to the number of training split features, so the number of learners equals the number of training split features. Specifically, the fully connected layer may be split into a corresponding number of learners, and the combination of these learners replaces the original fully connected layer; the final result is then derived from the learners' recognition results. After training is completed, each split feature is identified by its corresponding learner, and the final recognition result is obtained from the individual results, fully mining the local expressive power of the feature vector and thereby improving the robustness of vehicle identification.
For example, taking the training features as a 512-dimensional feature vector split into 3 training split features: dimensions 1 to 128 may form the first training split feature, dimensions 128 to 256 the second, and dimensions 256 to 512 the third. The number of learners is 3: each training split feature is input into its corresponding learner, yielding the first, second and third of the first training results.
In one training mode, the initial weights of the learners are initialized, the error rate of each learner is calculated from its first training result, and the learners' weights are adjusted in proportion to their error rates: learners with high error rates receive larger weights so that they are trained with emphasis. Weights are re-assigned in proportion to the error rates in every training round, and training stops once each learner's error rate meets the preset error rate, yielding trained learners and thus a trained deep neural network.
In the embodiment of the invention, the loss of each learner is calculated from its first training result, the partial derivatives are computed by back-propagation from each learner's loss, and the learners' weights are updated accordingly.
During training, the learners can be arranged in series, so that the spatial order relation among the vehicle split features strengthens each learner's dependence on the overall structure of the target vehicle features, improving each learner's recognition accuracy.
Further, the learning weight of the previous learner may be obtained; the current loss of the current learner is calculated from the current training split feature; and the learning weight of the current learner is calculated from the learning weight of the previous learner and the current loss of the current learner. When the current learner is the first learner, the "previous" learning weight may be an initial value, or the learning weight of the last learner from the previous training round.
For example, the fully connected layer is split into 3 learners A1, B1 and C1, and correspondingly the extracted training features are split into 3 training split features A, B and C, where the full training feature ABC can be understood as one continuous whole and the split features A, B and C as three independent sub-features. Split feature A is input into learner A1, the corresponding loss a is calculated, and the learning weight of A1 is updated by back-propagating the partial derivatives of loss a, yielding learning weight W1. Split feature B is input into learner B1, the corresponding loss b is calculated, the partial derivatives are back-propagated from the product W1×b of loss b and the previous learner's weight W1, and the learning weight of B1 is updated to obtain W2. Split feature C is input into learner C1, the corresponding loss c is calculated, and from the product W2×c of loss c and the previous learner's weight W2 the learning weight of C1 is updated to obtain W3. The total loss of the deep neural network can be computed from losses a, b and c, and back-propagation is performed on the network with this total loss to train the deep neural network.
Here, loss refers to the difference, during training, between the value produced by a learner and the label value prepared in advance in the training set. For example, for vehicle image A2, suppose the label value prepared in the training set is Y = 1 and the training result produced by the learner for this image is a2; with a preset loss function L = f(Y, a2), the corresponding loss value can be calculated.
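One possible reading of the serial weight-chaining scheme above can be sketched as follows. The patent describes scaling each learner's loss by the learning weight carried over from the previous learner before computing gradients; the concrete rule for carrying the weight forward is not given, so the choice below (carrying the detached scaled loss) is purely illustrative.

```python
import torch

def train_step(learners, split_features, target, loss_fn, optimizer,
               initial_weight: float = 1.0):
    """Hedged sketch: chain each learner's loss through the previous
    learner's learning weight, then back-propagate the total loss."""
    prev_weight = initial_weight
    total_loss = torch.zeros(())
    for learner, feature in zip(learners, split_features):
        loss = loss_fn(learner(feature), target)
        scaled = prev_weight * loss       # e.g. W1 * b for learner B1
        total_loss = total_loss + scaled
        # Carry a learning weight to the next learner, derived here from
        # the current scaled loss (illustrative assumption).
        prev_weight = scaled.detach().item()
    optimizer.zero_grad()
    total_loss.backward()                 # joint back-propagation
    optimizer.step()
    return total_loss
```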
Optionally, referring to fig. 4, fig. 4 is a flowchart of another deep neural network training method provided in an embodiment of the present invention, unlike the embodiment of fig. 3, in which a training set of vehicle images includes multiple sets of vehicle images, each set of vehicle images includes at least a first vehicle image and a second vehicle image, and as shown in fig. 4, step 202 specifically includes:
401. Extract first training features from the first vehicle image and second training features from the second vehicle image.
In the embodiment of the invention, the vehicle feature extractor can consist of multiple convolution layers, and abstract training features are extracted by convolving the vehicle images in the training set with the convolution kernels of each convolution layer. The training features may also be referred to as training feature vectors. The extracted training features may be feature vectors of a predetermined number of dimensions, such as 1024, 512 or 256 dimensions.
The first training feature and the second training feature are extracted by the same vehicle feature extractor, so that the dimensions of the obtained training features are the same.
402. Split the first training features and the second training features respectively to obtain a preset number of first training split features and the same number of second training split features.
In the embodiment of the present invention, the first training features are the feature vector extracted from a first vehicle image in the vehicle image training set, and the second training features are split in the same manner as the first training features. Splitting the first training features means splitting the feature vector, that is, splitting one high-dimensional feature vector into a plurality of low-dimensional feature vectors (the training split features). Further, the first training features are split according to their dimension information to obtain a preset number of first training split features.
The preset number can be set according to specific conditions, and can also be a default set by a developer. The preset number is an integer value equal to or greater than 2.
In one possible embodiment, the preset number may be a plurality of preset values, and the user may select different preset values in the interactive interface to split the first training feature. Such as: the preset values can be set to be 2, 3, 4 and 5, and when the user selects the preset value to be 2 on the interactive interface, the first training features are split into 2 first training split features; when the user selects the preset value to be 3 on the interactive interface, the training features are split into 3 first training split features.
The specific splitting mode may be to split the first training feature into a plurality of low-dimensional first training splitting features according to the dimensions of the first training feature, where the sum of the dimensions of all the low-dimensional first training splitting features is the dimension of the first training feature. For the training feature as 512-dimensional feature vector, if the first training feature is split into 2 first training split features, the first training split features can be split into 2 first training split features with 256 dimensions; if the first training features are split into 3 first training split features, the first training features can be split into 3 first training split features of 128 dimensions, 128 dimensions and 256 dimensions.
Further, the first training features may be split sequentially according to the dimensions of the first training features, for example, for a 512-dimensional feature vector of the first training features, if the first training features are split into 3 first training split features, the first training split features may be 128 dimensions before the splitting, the second first training split features may be split from 128 dimensions to 256 dimensions, and the third first training split features may be split from 256 dimensions to 512 dimensions. Thus, the first training split features can be spliced in sequence, and the original first training features can be obtained.
Similarly, the second training features are the feature vector extracted from the second vehicle image in the vehicle image training set, and may be split in the same manner as the first training features, which is not repeated here.
403. Train the corresponding learners in the deep neural network to be trained based on the first training split features and the second training split features.
In the embodiment of the invention, the number of first training split features equals the number of second training split features, and the learners to be trained are obtained by splitting the fully connected layer according to that number. Specifically, the fully connected layer may be split into a corresponding number of learners according to the number of first (or second) training split features, the combination of learners replaces the original fully connected layer, and the final result is derived from the learners' recognition results. After training is completed, the first and second training features are split and identified by the corresponding learners; the final recognition result is obtained from the individual results, fully mining the local expressive power of the feature vectors and thereby improving the robustness of vehicle identification.
For example, take the first training features A1B1C1 and the second training features A2B2C2, each a 512-dimensional feature vector split into 3 training split features. For the first training features, dimensions 1 to 128 form the first split feature A1, dimensions 128 to 256 the second split feature B1, and dimensions 256 to 512 the third split feature C1; similarly, the second training features split into A2 (dimensions 1 to 128), B2 (128 to 256) and C2 (256 to 512). The number of learners is 3: A1 and A2 are input into the first learner to obtain the first of the second training results, B1 and B2 into the second learner to obtain the second, and C1 and C2 into the third learner to obtain the third.
The second training result includes a loss value corresponding to each learner, and the loss value is calculated through a preset loss function.
Optionally, referring to fig. 5, fig. 5 is a flowchart of a loss value calculation method, where the loss value calculation method may be used for each learner in the embodiment of fig. 4, and as shown in fig. 5, the loss value calculation method includes:
501. Obtain a first hyperparameter α and a second hyperparameter β.
The first hyperparameter α and the second hyperparameter β are preset, and a user can change and adjust them through the interactive interface.
502. Calculate the current similarity s between the current first training split feature and the current second training split feature.
For example, the first training split features are A1, B1 and C1, and the second training split features are A2, B2 and C2. If the current first training split feature is A1, the current second training split feature is A2, and the similarity s between A1 and A2 is calculated.
503. Calculate the loss L of the current learner from the first hyperparameter α, the second hyperparameter β and the current similarity s.
Specifically, the loss L of the current learner may be calculated by the following equation:
L = log(1 + e^{-α(s-β)})
The formula involves the first hyperparameter α, the second hyperparameter β, and the current similarity s. More specifically, the first hyperparameter α may be a fraction between 0 and 1, and the second hyperparameter β any integer greater than or equal to 1. In the embodiment of the present invention, the first hyperparameter α is preferably 0.5 and the second hyperparameter β is preferably 2.0.
In one possible embodiment, the loss L of the current learner is calculated from the first hyperparameter α, the second hyperparameter β, a first parameter y, a second parameter c and the current similarity s. Specifically, the first parameter y and the second parameter c may be determined from the current similarity s, and the loss L of the current learner may be calculated by the following formula:
L = log(1 + e^{-(2y-1)α(s-β)c})
The formula involves the first hyperparameter α, the second hyperparameter β, the first parameter y, the second parameter c, and the current similarity s. As before, α may be a fraction between 0 and 1 and β any integer greater than or equal to 1; preferably α = 0.5 and β = 2.0. Introducing the first parameter y and the second parameter c can prevent the learner from overfitting.
The first parameter y and the second parameter c may be determined by determining whether the current similarity s is greater than a preset similarity threshold, and if the current similarity s is greater than the similarity threshold, determining that the first parameter y is a first preset value, and determining that the second parameter c is a third preset value.
If the current similarity is smaller than the similarity threshold, determining that the first parameter y is a second preset value, and determining that the second parameter c is a fourth preset value.
The first parameter y may be 0 or 1, that is, the first preset value is 0 or 1, and the second preset value is 1 or 0. In the embodiment of the present invention, the first preset value is preferably 1, and the second preset value is preferably 0.
In the embodiment of the present invention, the third preset value is preferably 1, and the fourth preset value is preferably 25.
Specifically, if the current similarity s is greater than the similarity threshold, the first parameter y is 1, and the second parameter c is 1; if the current similarity s is smaller than the similarity threshold, the first parameter y is 0, and the second parameter c is 25.
The current learner is then trained based on the loss of the current learner.
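The loss above can be written out directly. Cosine similarity and the threshold value 0.5 below are assumptions (the patent leaves the similarity measure and the threshold unspecified), while α = 0.5, β = 2.0 and c ∈ {1, 25} follow the preferred values given.

```python
import torch
import torch.nn.functional as F

def learner_loss(feat1: torch.Tensor, feat2: torch.Tensor,
                 alpha: float = 0.5, beta: float = 2.0,
                 sim_threshold: float = 0.5) -> torch.Tensor:
    """Sketch of L = log(1 + exp(-(2y - 1) * alpha * (s - beta) * c))."""
    s = F.cosine_similarity(feat1, feat2, dim=-1)  # current similarity (assumed cosine)
    y = (s > sim_threshold).float()                # first parameter: 1 above threshold, else 0
    c = torch.where(s > sim_threshold,
                    torch.ones_like(s),            # third preset value: 1
                    torch.full_like(s, 25.0))      # fourth preset value: 25
    return torch.log1p(torch.exp(-(2 * y - 1) * alpha * (s - beta) * c))
```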
In the embodiment of the invention, the vehicle split features are identified by splitting the fully connected layer into a preset number of learners, and the original single recognition result is jointly represented across the dimensions of the several first recognition results. This makes the recognition result more robust, makes full use of the expressive power of the existing vehicle features, and improves recognition accuracy without adding the computational cost of an extra feature extraction network.
The invention was tested on a vehicle re-identification data set containing vehicle sample images and their corresponding vehicle IDs: a vehicle sample image is identified, and the output recognition result is a vehicle ID. The data set comes in three sizes: a large set with 2400 IDs, a medium set with 1600 IDs, and a small set with 800 IDs. The comparison baseline is the deep neural network Inception v1, which performs classification and recognition with a fully connected layer; the invention splits the fully connected layer of Inception v1 into a plurality of learners and splits the extracted vehicle features, trained as in the embodiment of FIG. 4.
The comparison results are shown in Table 1:
                               2400 IDs   1600 IDs   800 IDs
Top-1 recall, this scheme       0.760      0.793      0.826
Top-5 recall, this scheme       0.906      0.883      0.864
Top-1 recall, Inception v1      0.679      0.730      0.780
Top-5 recall, Inception v1      0.875      0.847      0.824

TABLE 1
Here, the top-1 recall rate refers to the rate at which the correct ID is retrieved as the first-ranked ID, and the top-5 recall rate refers to the rate at which the correct ID is retrieved within the top 5 ranked IDs. It can be seen that replacing the fully connected layer with a plurality of learners improves the accuracy of vehicle re-identification, without adding the computational cost of an extra feature extraction network.
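For reference, the top-k recall metric used in Table 1 can be sketched as follows; the query/gallery similarity-matrix formulation is an assumption about the evaluation setup, not taken from the patent.

```python
import torch

def top_k_recall(similarity: torch.Tensor, true_idx: torch.Tensor,
                 k: int) -> float:
    """Fraction of queries whose correct ID is among the top-k retrieved IDs.
    similarity: (num_queries, num_gallery); true_idx: (num_queries,)."""
    topk = similarity.topk(k, dim=1).indices            # (num_queries, k)
    hits = (topk == true_idx.unsqueeze(1)).any(dim=1)
    return hits.float().mean().item()
```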
Referring to fig. 6, fig. 6 is a schematic structural diagram of a vehicle identification device according to an embodiment of the present invention, as shown in fig. 6, the device includes:
An extracting module 601, configured to extract a target vehicle feature in a vehicle image to be identified;
the splitting module 602 is configured to split the target vehicle feature to obtain a preset number of vehicle splitting features;
the first processing module 603 is configured to input the vehicle split features into preset learners respectively and identify the vehicle split features to obtain a preset number of first recognition results, where the learners are obtained by splitting a fully connected layer, and the number of learners is the same as the number of vehicle split features;
and the second processing module 604 is configured to perform post-processing on the first recognition result to obtain a second recognition result, and output the second recognition result as a vehicle recognition result.
Optionally, the splitting module 602 is further configured to split the target vehicle feature according to the dimension information of the target vehicle feature, to obtain a preset number of vehicle splitting features.
Optionally, as shown in fig. 7, the second processing module 604 includes:
a splicing submodule 6041, configured to splice the first recognition results according to the order of the learner, so as to obtain a second recognition result;
and an output submodule 6042, configured to output the second identification result as a vehicle identification result.
Optionally, as shown in fig. 8, the apparatus further includes:
an obtaining module 605, configured to obtain a training set of vehicle images, where the training set of vehicle images includes vehicle images;
the training module 606 is configured to train the deep neural network to be trained based on the vehicle image training set, where the deep neural network includes a plurality of learners, and after the deep neural network to be trained completes training, the trained learners serve as the preset learners.
Optionally, as shown in fig. 9, the deep neural network includes a vehicle feature extractor, and the training module 606 includes:
a first extraction submodule 6061 for extracting training features of the vehicle images in the vehicle image training set by the vehicle feature extractor;
a first splitting submodule 6062, configured to split the training features to obtain a preset number of training split features;
and a first training submodule 6063, configured to train a learner corresponding to the deep neural network to be trained based on the training split feature.
Optionally, as shown in fig. 10, the first training submodule 6063 includes:
a first acquisition unit 60631 for acquiring the learning weight of the previous learner;
A first calculating unit 60632, configured to calculate a current loss of the current learner according to the current training split feature;
and a second calculating unit 60633, configured to calculate a learning weight of the current learner according to the learning weight of the previous learner and the current loss of the current learner.
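The description above does not disclose the concrete weight-update formula, so the following is only an assumed, boosting-flavored placeholder showing how a current learner's weight could be derived from the previous learner's weight and the current loss during serial training:

    import math

    def update_learner_weight(prev_weight: float, current_loss: float,
                              beta: float = 0.5) -> float:
        # Assumed multiplicative rule (placeholder): a higher current loss
        # shrinks the current learner's weight relative to the previous
        # learner's. The actual formula is not specified in this text.
        return prev_weight * math.exp(-beta * current_loss)

Under this assumed rule, learners later in the serial chain are weighted according to how well they fit their own training split features, conditioned on the weight carried over from the preceding learner.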
Optionally, as shown in fig. 11, the training set of vehicle images includes multiple sets of vehicle images, where each set of vehicle images includes at least a first vehicle image and a second vehicle image, and the training module 606 includes:
a second extraction sub-module 6064 for extracting first training features in the first vehicle image and extracting second training features in the second vehicle image;
the second splitting submodule 6065 is configured to split the first training feature and the second training feature respectively to obtain a preset number of first training splitting features and the same number of second training splitting features, where the splitting manner of the first training features is the same as that of the second training features;
and a second training submodule 6066, configured to train a corresponding learner in the deep neural network to be trained based on the first training split feature and the second training split feature.
Optionally, as shown in fig. 12, the second training submodule 6066 includes:
a second acquiring unit 60661, configured to acquire a first superparameter and a second superparameter;
a third calculation unit 60662 for calculating a current similarity between the current first training split feature and the current second training split feature;
a fourth calculation unit 60663, configured to calculate a loss of the current learner according to the first superparameter, the second superparameter, and the current similarity;
and a training unit 60664, configured to train the current learner based on the loss of the current learner.
Optionally, as shown in fig. 13, the second training submodule 6066 further includes:
a determining unit 60665, configured to determine a first parameter and a second parameter according to the current similarity;
the fourth computing unit 60663 is further configured to calculate a loss of the current learner according to the first superparameter, the second superparameter, the first parameter, the second parameter, and the current similarity.
Alternatively, as shown in fig. 14, the determining unit 60665 includes:
a judging subunit 606651, configured to judge whether the current similarity is greater than a preset similarity threshold;
A first determining subunit 606652, configured to determine the first parameter as a first preset value and determine the second parameter as a third preset value if the current similarity is greater than the similarity threshold;
and a second determining subunit 606653, configured to determine the first parameter as a second preset value and determine the second parameter as a fourth preset value if the current similarity is less than the similarity threshold.
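Only the ingredients of this loss are disclosed: two hyperparameters, two threshold-dependent parameters, and the current similarity. The binomial-deviance-style formula below is therefore an assumption chosen because it combines exactly those five quantities; the preset values are likewise illustrative.

    import torch
    import torch.nn.functional as F

    def pairwise_split_loss(f1, f2, alpha=2.0, margin=0.5, threshold=0.5):
        # alpha and margin stand in for the first and second hyperparameters.
        sim = F.cosine_similarity(f1, f2, dim=0)   # current similarity
        if sim > threshold:                        # treated as a matching pair
            p1, p2 = 1.0, 1.0      # first and third preset values (assumed)
        else:                                      # treated as a non-matching pair
            p1, p2 = -1.0, 10.0    # second and fourth preset values (assumed)
        # assumed binomial-deviance-style combination of the five quantities
        return p2 * torch.log(1 + torch.exp(-alpha * p1 * (sim - margin)))

For example, pairwise_split_loss(torch.randn(256), torch.randn(256)) returns a scalar tensor that can be back-propagated through the current learner.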
It should be noted that the vehicle identification device provided by the embodiment of the invention can be applied to devices such as a mobile phone, a monitor, a computer, a server, and a computer external device, which can perform vehicle identification.
The vehicle identification device provided by the embodiment of the invention can implement each process of the vehicle identification method in the method embodiment and achieve the same beneficial effects. To avoid repetition, the description is not repeated here.
Referring to fig. 15, fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, as shown in fig. 15, including: a memory 1502, a processor 1501 and a computer program stored on the memory 1502 and executable on the processor 1501, wherein:
the processor 1501 is configured to call a computer program stored in the memory 1502, and execute the following steps:
Extracting target vehicle characteristics in a vehicle image to be identified;
splitting the target vehicle features to obtain a preset number of vehicle splitting features;
respectively inputting the vehicle splitting features into preset learners, and identifying the vehicle splitting features to obtain a preset number of first identification results, wherein the learners are obtained by splitting a full-connection layer, and the number of the learners is the same as that of the vehicle splitting features;
and carrying out post-processing on the first recognition result to obtain a second recognition result, and outputting the second recognition result as a vehicle recognition result.
Optionally, the splitting the target vehicle feature performed by the processor 1501 obtains a preset number of vehicle split features, including:
splitting the target vehicle features according to the dimension information of the target vehicle features to obtain a preset number of vehicle splitting features.
Optionally, the post-processing of the first recognition result by the processor 1501 to obtain a second recognition result, and outputting the second recognition result as a vehicle recognition result includes:
splicing the first recognition results according to the sequence of the learner to obtain a second recognition result;
And outputting the second recognition result as a vehicle recognition result.
Optionally, the processor 1501 further executes instructions comprising:
acquiring a vehicle image training set, wherein the vehicle image training set comprises vehicle images;
training the deep neural network to be trained based on the vehicle image training set, wherein the deep neural network comprises a plurality of learners, and the preset plurality of learners are obtained after the deep neural network to be trained completes training.
Optionally, the deep neural network includes a vehicle feature extractor, and the training, performed by the processor 1501, based on the vehicle image training set, of the deep neural network to be trained includes:
extracting training features of vehicle images in a vehicle image training set through the vehicle feature extractor;
splitting the training features to obtain a preset number of training split features;
and training a corresponding learner in the deep neural network to be trained based on the training split characteristics.
Optionally, the training, performed by the processor 1501, based on the training split feature, trains a corresponding learner in the deep neural network to be trained, including:
Acquiring the learning weight of the previous learner;
calculating the current loss of the current learner according to the current training splitting characteristics;
and calculating the learning weight of the current learner according to the learning weight of the previous learner and the current loss of the current learner.
Optionally, the vehicle image training set includes a plurality of sets of vehicle images, each set of vehicle images includes at least a first vehicle image and a second vehicle image, and the training, performed by the processor 1501, based on the vehicle image training set, of the deep neural network to be trained includes:
extracting first training features in the first vehicle image and extracting second training features in the second vehicle image;
splitting the first training features and the second training features respectively to obtain a preset number of first training splitting features and the same number of second training splitting features, wherein the splitting mode of the first training features is the same as that of the second training features;
based on the first training split feature and the second training split feature, training a corresponding learner in the deep neural network to be trained.
Optionally, the training, by the processor 1501, based on the first training split feature and the second training split feature, the corresponding learner in the deep neural network to be trained includes:
acquiring a first super parameter and a second super parameter;
calculating the current similarity between the current first training split feature and the current second training split feature;
calculating the loss of the current learner according to the first super parameter, the second super parameter and the current similarity;
training the current learner based on the loss of the current learner.
Optionally, before said calculating the loss of the current learner based on the first hyper-parameter, the second hyper-parameter, and the current similarity, the processor 1501 further performs steps including:
determining a first parameter and a second parameter according to the current similarity;
the calculating, by the processor 1501, a loss of the current learner according to the first super parameter, the second super parameter, and the current similarity includes:
and calculating the loss of the current learner according to the first super parameter, the second super parameter, the first parameter, the second parameter and the current similarity.
Optionally, the determining, by the processor 1501, the first parameter and the second parameter according to the current similarity includes:
judging whether the current similarity is larger than a preset similarity threshold value or not;
if the current similarity is greater than the similarity threshold, determining the first parameter as a first preset value and determining the second parameter as a third preset value;
and if the current similarity is smaller than the similarity threshold, determining the first parameter as a second preset value and determining the second parameter as a fourth preset value.
The electronic device may be a mobile phone, a monitor, a computer, a server, a computer external device, or the like, which can be used for vehicle identification.
The electronic device provided by the embodiment of the invention can implement each process of the vehicle identification method in the method embodiment and achieve the same beneficial effects; to avoid repetition, the description is not repeated here.
An embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements each process of the vehicle identification method provided by the embodiments of the invention and achieves the same technical effects. To avoid repetition, no further description is provided here.
Those skilled in the art will appreciate that all or part of the methods in the above embodiments may be implemented by a computer program stored on a computer-readable storage medium; when executed, the program may include the processes of the method embodiments described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
The foregoing disclosure is illustrative of the present invention and is not to be construed as limiting the scope of the invention, which is defined by the appended claims.

Claims (12)

1. A vehicle identification method characterized by comprising the steps of:
extracting target vehicle characteristics in a vehicle image to be identified;
sequentially splitting the target vehicle features according to their dimensions into a plurality of low-dimensional vehicle splitting features, wherein the sum of the dimensions of all the low-dimensional vehicle splitting features equals the dimension of the target vehicle features;
respectively inputting the vehicle splitting features into preset learners, and identifying the vehicle splitting features to obtain a preset number of first identification results, wherein the learners are obtained by splitting a full-connection layer, the number of the learners is the same as that of the vehicle splitting features, each learner comprises a corresponding category matrix for classifying and identifying the vehicle splitting features, the plurality of learners are in a parallel relationship during forward inference, and the plurality of learners are in a serial relationship during training;
And carrying out post-processing on the first recognition result to obtain a second recognition result, and outputting the second recognition result as a vehicle recognition result.
2. The method of claim 1, wherein the post-processing the first recognition result to obtain a second recognition result, and outputting the second recognition result as a vehicle recognition result, comprises:
splicing the first recognition results according to the sequence of the learner to obtain a second recognition result;
and outputting the second recognition result as a vehicle recognition result.
3. The method of claim 1, wherein the method further comprises:
acquiring a vehicle image training set, wherein the vehicle image training set comprises vehicle images;
training the deep neural network to be trained based on the vehicle image training set, wherein the deep neural network comprises a plurality of learners, and the preset plurality of learners are obtained after the deep neural network to be trained completes training.
4. The method of claim 3, wherein the deep neural network includes a vehicle feature extractor therein, the training the deep neural network to be trained based on the vehicle image training set, comprising:
Extracting training features of vehicle images in a vehicle image training set through the vehicle feature extractor;
splitting the training features to obtain a preset number of training split features;
and training a corresponding learner in the deep neural network to be trained based on the training split characteristics.
5. The method of claim 4, wherein the training the corresponding learner in the deep neural network to be trained based on the training split feature comprises:
acquiring the learning weight of the previous learner;
calculating the current loss of the current learner according to the current training splitting characteristics;
and calculating the learning weight of the current learner according to the learning weight of the previous learner and the current loss of the current learner.
6. The method of any of claims 3-5, wherein the training set of vehicle images includes a plurality of sets of vehicle images, each set of vehicle images including at least a first vehicle image and a second vehicle image, the training the deep neural network to be trained based on the training set of vehicle images comprising:
extracting first training features in the first vehicle image and extracting second training features in the second vehicle image;
Splitting the first training features and the second training features respectively to obtain a preset number of first training splitting features and the same number of second training splitting features, wherein the splitting mode of the first training features is the same as that of the second training features;
based on the first training split feature and the second training split feature, training a corresponding learner in the deep neural network to be trained.
7. The method of claim 6, wherein the training a corresponding learner in a deep neural network to be trained based on the first training split feature and the second training split feature comprises:
acquiring a first super parameter and a second super parameter;
calculating the current similarity between the current first training split feature and the current second training split feature;
calculating the loss of the current learner according to the first super parameter, the second super parameter and the current similarity;
training the current learner based on the loss of the current learner.
8. The method of claim 7, wherein prior to said calculating a loss of a current learner based on said first hyper-parameter, said second hyper-parameter, and said current similarity, said method further comprises:
Determining a first parameter and a second parameter according to the current similarity;
the calculating the loss of the current learner according to the first super parameter, the second super parameter and the current similarity comprises the following steps:
and calculating the loss of the current learner according to the first super parameter, the second super parameter, the first parameter, the second parameter and the current similarity.
9. The method of claim 8, wherein determining the first parameter and the second parameter based on the current similarity comprises:
judging whether the current similarity is larger than a preset similarity threshold value or not;
if the current similarity is greater than the similarity threshold, determining the first parameter as a first preset value and determining the second parameter as a third preset value;
and if the current similarity is smaller than the similarity threshold, determining the first parameter as a second preset value and determining the second parameter as a fourth preset value.
10. A vehicle identification apparatus, characterized in that the apparatus comprises:
the extraction module is used for extracting target vehicle characteristics in the vehicle image to be identified;
the splitting module is used for sequentially splitting the target vehicle features according to their dimensions into a plurality of low-dimensional vehicle splitting features, wherein the sum of the dimensions of all the low-dimensional vehicle splitting features equals the dimension of the target vehicle features;
the first processing module is used for respectively inputting the vehicle splitting features into preset learners and identifying the vehicle splitting features to obtain a preset number of first identification results, wherein the learners are obtained by splitting a full-connection layer, the number of the learners is the same as that of the vehicle splitting features, each learner comprises a corresponding category matrix for classifying and identifying the vehicle splitting features, the plurality of learners are in a parallel relationship during forward inference, and the plurality of learners are in a serial relationship during training;
and the second processing module is used for carrying out post-processing on the first recognition result to obtain a second recognition result, and outputting the second recognition result as a vehicle recognition result.
11. An electronic device, comprising: memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the vehicle identification method according to any one of claims 1 to 9 when the computer program is executed.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps in the vehicle identification method according to any one of claims 1 to 9.
CN202011435225.1A 2020-12-10 2020-12-10 Vehicle identification method, device, system, electronic equipment and storage medium Active CN112418168B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011435225.1A CN112418168B (en) 2020-12-10 2020-12-10 Vehicle identification method, device, system, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112418168A CN112418168A (en) 2021-02-26
CN112418168B true CN112418168B (en) 2024-04-02

Family

ID=74776706


Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014067174A (en) * 2012-09-25 2014-04-17 Nippon Telegr & Teleph Corp <Ntt> Image classification device, image identification device and program
CN104463135A (en) * 2014-12-19 2015-03-25 深圳市捷顺科技实业股份有限公司 Vehicle logo recognition method and system
CN105956560A (en) * 2016-05-06 2016-09-21 电子科技大学 Vehicle model identification method based on pooling multi-scale depth convolution characteristics
CN106295707A (en) * 2016-08-17 2017-01-04 北京小米移动软件有限公司 Image-recognizing method and device
CN106778745A (en) * 2016-12-23 2017-05-31 深圳先进技术研究院 A kind of licence plate recognition method and device, user equipment
CN106991474A (en) * 2017-03-28 2017-07-28 华中科技大学 The parallel full articulamentum method for interchanging data of deep neural network model and system
CN107679462A (en) * 2017-09-13 2018-02-09 哈尔滨工业大学深圳研究生院 A kind of depth multiple features fusion sorting technique based on small echo
CN107885764A (en) * 2017-09-21 2018-04-06 银江股份有限公司 Based on the quick Hash vehicle retrieval method of multitask deep learning
CN108319907A (en) * 2018-01-26 2018-07-24 腾讯科技(深圳)有限公司 A kind of vehicle identification method, device and storage medium
CN108388888A (en) * 2018-03-23 2018-08-10 腾讯科技(深圳)有限公司 A kind of vehicle identification method, device and storage medium
CN109214441A (en) * 2018-08-23 2019-01-15 桂林电子科技大学 A kind of fine granularity model recognition system and method
CN110598709A (en) * 2019-08-12 2019-12-20 北京智芯原动科技有限公司 Convolutional neural network training method and license plate recognition method and device
CN111062396A (en) * 2019-11-29 2020-04-24 深圳云天励飞技术有限公司 License plate number recognition method and device, electronic equipment and storage medium
CN111652293A (en) * 2020-05-20 2020-09-11 西安交通大学苏州研究院 Vehicle weight recognition method for multi-task joint discrimination learning

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10346726B2 (en) * 2014-12-15 2019-07-09 Samsung Electronics Co., Ltd. Image recognition method and apparatus, image verification method and apparatus, learning method and apparatus to recognize image, and learning method and apparatus to verify image
US10984289B2 (en) * 2016-12-23 2021-04-20 Shenzhen Institute Of Advanced Technology License plate recognition method, device thereof, and user equipment
CN110135437B (en) * 2019-05-06 2022-04-05 北京百度网讯科技有限公司 Loss assessment method and device for vehicle, electronic equipment and computer storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Vehicle Detection Method for Dense Regions in Remote Sensing Images Based on Deformable Convolutional Neural Networks"; Gao Xin et al.; Journal of Electronics & Information Technology; Vol. 40, No. 12; pp. 2812-2819 *

Similar Documents

Publication Publication Date Title
Rahmon et al. Motion U-Net: Multi-cue encoder-decoder network for motion segmentation
CN111814902A (en) Target detection model training method, target identification method, device and medium
CN109117781B (en) Multi-attribute identification model establishing method and device and multi-attribute identification method
CN109919252B (en) Method for generating classifier by using few labeled images
CN110837846A (en) Image recognition model construction method, image recognition method and device
CN111126514A (en) Image multi-label classification method, device, equipment and medium
CN110135505B (en) Image classification method and device, computer equipment and computer readable storage medium
CN112381763A (en) Surface defect detection method
CN112927209B (en) CNN-based significance detection system and method
CN107247952B (en) Deep supervision-based visual saliency detection method for cyclic convolution neural network
CN111523421A (en) Multi-user behavior detection method and system based on deep learning and fusion of various interaction information
CN114245910A (en) Automatic machine learning (AutoML) system, method and equipment
CN110968725A (en) Image content description information generation method, electronic device, and storage medium
CN113591978A (en) Image classification method, device and storage medium based on confidence penalty regularization self-knowledge distillation
CN112418032A (en) Human behavior recognition method and device, electronic equipment and storage medium
CN112200772A (en) Pox check out test set
CN112418168B (en) Vehicle identification method, device, system, electronic equipment and storage medium
CN114170484B (en) Picture attribute prediction method and device, electronic equipment and storage medium
CN114155388B (en) Image recognition method and device, computer equipment and storage medium
US20220366242A1 (en) Information processing apparatus, information processing method, and storage medium
CN115905613A (en) Audio and video multitask learning and evaluation method, computer equipment and medium
CN115661618A (en) Training method of image quality evaluation model, image quality evaluation method and device
CN115620083A (en) Model training method, face image quality evaluation method, device and medium
CN115063858A (en) Video facial expression recognition model training method, device, equipment and storage medium
CN114022698A (en) Multi-tag behavior identification method and device based on binary tree structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant