CN111582178A - Vehicle re-identification method and system based on multi-azimuth information and multi-branch neural network - Google Patents

Vehicle re-identification method and system based on multi-azimuth information and multi-branch neural network

Info

Publication number
CN111582178A
CN111582178A (application number CN202010387486.4A)
Authority
CN
China
Prior art keywords
vehicle, picture, pictures, view group, shared view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010387486.4A
Other languages
Chinese (zh)
Other versions
CN111582178B (en)
Inventor
聂秀山
尹义龙
孙自若
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Jianzhu University
Original Assignee
Shandong Jianzhu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Jianzhu University
Priority claimed from application CN202010387486.4A
Publication of CN111582178A
Application granted
Publication of CN111582178B
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; scene-specific elements
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/214 Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a vehicle re-identification method and system based on vehicle direction information and a multi-branch neural network. The method comprises the following steps: acquiring a plurality of vehicle pictures to be identified and retrieving a plurality of vehicle comparison pictures from a data set; acquiring direction information of the pictures to be identified and of the vehicle comparison pictures; pairing the pictures to be identified with the vehicle comparison pictures to form a plurality of picture groups, and dividing the picture groups into a shared view group and a non-shared view group according to the direction information; inputting each picture group into a training model to obtain shared view group features or non-shared view group features; and calculating the Euclidean distance between each picture to be identified and each vehicle comparison picture according to the shared view group features or non-shared view group features, ranking by distance, and retrieving the vehicles most similar to the vehicle to be identified. Two different features are learned depending on whether a shared field of view exists, so that highly discriminative features can be learned, retrieval and ranking performance is enhanced, and the accuracy of vehicle re-identification is improved.

Description

Vehicle re-identification method and system based on multi-azimuth information and multi-branch neural network
Technical Field
The invention relates to a vehicle re-identification method and system based on multi-azimuth information and a multi-branch neural network, and belongs to the technical fields of computer vision and artificial intelligence.
Background
With socio-economic development and the ever-increasing number of vehicles, vehicle management is becoming more difficult. Vehicle re-identification refers to the process of matching vehicle pictures captured by different surveillance cameras, without relying on license plate information, to find a target vehicle in videos shot by non-overlapping cameras at different times. Vehicle re-identification has important real-life applications, such as criminal investigation, urban computing, public management, and intelligent transportation.
Initially, vehicles were identified mainly by sensors such as magnetic sensors, induction coil sensors, and global positioning systems. The cost of deploying these sensors is very high, and the information they provide is limited. With the deployment of large-scale urban monitoring systems and the wide application of computer vision technology in intelligent transportation, vision-based vehicle re-identification has become an important research field. Vision-based methods are divided into methods based on hand-crafted features and methods based on deep features; with the great success of deep convolutional neural networks across many areas of computer vision, deep-feature-based methods have become mainstream owing to their excellent performance.
Most existing deep-feature-based methods learn features from the most salient parts of the whole picture and ignore local detail information, yet this local detail often contains the key cues for distinguishing visually similar vehicles. Furthermore, even methods that do exploit such local details ignore the influence of vehicle direction on feature extraction. Because cameras shoot from different angles and vehicles travel under different conditions, vehicle directions differ greatly across captured pictures.
Disclosure of Invention
To solve this technical problem, the invention provides a vehicle re-identification method and system based on direction information and a multi-branch neural network. According to whether the vehicles in a pair of pictures have a shared field of view, the network learns two different feature representations, each comprising global macroscopic information and local detail information, thereby improving the accuracy of vehicle re-identification.
The technical scheme adopted by the invention is as follows:
In a first aspect, the present invention provides a vehicle re-identification method based on vehicle direction information and a multi-branch neural network, comprising the steps of:
acquiring a plurality of vehicle pictures to be identified and retrieving a plurality of vehicle comparison pictures from a data set;
acquiring direction information of the pictures to be identified and of the vehicle comparison pictures;
pairing the pictures to be identified with the vehicle comparison pictures to form a plurality of picture groups, and dividing the picture groups into a shared view group and a non-shared view group according to the direction information; inputting each picture group into a training model to obtain shared view group features or non-shared view group features;
and calculating the Euclidean distance between each picture to be identified and each vehicle comparison picture according to the shared view group features or non-shared view group features, ranking by distance, and retrieving the vehicles most similar to the vehicle to be identified.
In a second aspect, the present invention further provides a vehicle re-identification system based on vehicle direction information and a multi-branch neural network, including:
a data acquisition module configured to: acquire a plurality of vehicle pictures to be identified and retrieve a plurality of vehicle comparison pictures from a data set;
a direction information acquisition module configured to: acquire direction information of the pictures to be identified and of the vehicle comparison pictures;
a training module configured to: pair the pictures to be identified with the vehicle comparison pictures to form a plurality of picture groups, divide the picture groups into a shared view group and a non-shared view group according to the direction information, and input each picture group into a training model to obtain shared view group features or non-shared view group features;
a vehicle retrieval module configured to: calculate the Euclidean distance between each picture to be identified and each vehicle comparison picture according to the shared view group features or non-shared view group features, rank by distance, and retrieve the vehicles most similar to the vehicle to be identified.
In a third aspect, the present invention also provides a computer-readable storage medium storing computer instructions which, when executed by a processor, perform the vehicle re-identification method according to the first aspect.
In a fourth aspect, the present invention also provides an electronic device comprising a memory, a processor, and computer instructions stored in the memory and executed on the processor; when the computer instructions are executed by the processor, the vehicle re-identification method according to the first aspect is performed.
Compared with the prior art, the invention has the following beneficial effects:
1. By acquiring the direction information of the picture to be identified and of the vehicle comparison picture, the invention learns two different features according to whether a shared field of view exists, each containing global macroscopic information and local detail information, which improves the accuracy of vehicle re-identification.
2. The method judges whether two vehicle pictures have a shared field of view, computes their distance with the corresponding features, and then retrieves vehicles according to the distances between pictures; learning two different features in this way, each containing global macroscopic and local detail information, improves the accuracy of vehicle re-identification.
3. The training model of the invention adopts a four-branch deep convolutional neural network with a multi-task design: a cross-entropy function serves as the loss function for the classification task, and the triplet loss serves as the network loss function for the metric-learning task. Through these two tasks, two different, highly discriminative features are effectively learned according to whether a shared view exists, which enhances retrieval and ranking performance.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
Fig. 1 is the main network diagram of the vehicle re-identification method based on direction information and a multi-branch neural network according to the present invention.
Detailed Description
the invention is further described with reference to the following figures and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for describing particular embodiments only and is not intended to limit example embodiments according to the present application. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, and the terms "comprises" and/or "comprising", when used in this specification, specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
Example 1
As shown in fig. 1, a vehicle re-identification method based on direction information and a multi-branch neural network is proposed. According to whether the vehicles in a pair of pictures have a shared field of view, the network learns two different feature representations, each comprising global macroscopic information and local detail information, thereby improving the accuracy of vehicle re-identification.
The technical scheme adopted by the invention is as follows:
a vehicle weight recognition method based on vehicle direction information and a multi-branch neural network comprises the following steps:
acquiring a plurality of pictures to be identified of vehicles and retrieving a plurality of vehicle comparison pictures on a data set;
acquiring direction information of a picture to be identified of a vehicle and a vehicle comparison picture;
matching the pictures to be identified of the vehicle with the vehicle contrast pictures to form a plurality of picture groups, and dividing the plurality of picture groups into a shared view group and a non-shared view group according to the direction information; inputting the picture group into a training model to obtain shared view group characteristics or non-shared view group characteristics;
and calculating the Euclidean distance between the picture to be identified of the vehicle and the comparison picture of the vehicle according to the shared view group characteristics or the unshared view group characteristics, sequencing, and searching a plurality of vehicles with the similarity close to the vehicle to be identified.
Further, the step of obtaining the direction information of the picture to be identified and of the vehicle comparison picture comprises:
collecting a plurality of vehicle pictures as a training set and labeling their direction information; inputting the labeled vehicle pictures into a deep convolutional network model for training to obtain a direction classification model;
and inputting the picture to be identified and the vehicle comparison picture into the direction classification model to obtain their direction information.
Further, the direction information covers eight directions, namely: front, rear, left, right, front-left, front-right, rear-left, and rear-right.
Further, the training model is a deep convolutional network model with four branches, namely the GS, GD, LS, and LD branches, which respectively extract the shared view group global information, non-shared view group global information, shared view group local information, and non-shared view group local information.
Further, the shared view group global information is learned through classification and metric learning; the classification comprises color classification, vehicle type classification, and vehicle ID classification, with a cross-entropy function as the classification loss. Metric learning adopts the triplet loss as the network loss function, and the metric-learning loss comprises an intra-space loss and a cross-space loss.
Further, the intra-space loss is calculated by forming triplets from the vehicle samples within each group, i.e., the shared view group or the non-shared view group.
Furthermore, there are two cross-space losses, corresponding to the local branches and the global branches respectively; the local branches comprise the LS and LD branches, and the global branches comprise the GS and GD branches. In the cross-space loss, a triplet is formed from an anchor image, a positive example image that forms a non-shared view (D) pair with the anchor, and a negative example image that forms a shared view (S) pair with the anchor.
Furthermore, only the vehicle ID classification task is performed on the local branches, i.e., the LS and LD branches; the global branches, i.e., the GS and GD branches, perform three classification tasks: color classification, vehicle type classification, and vehicle ID classification.
The intra-space loss is calculated from the triplet loss in the m-th branch (m ∈ {GS, GD, LS, LD}, corresponding to the GS, GD, LS, and LD branches respectively), the mapping function of the m-th branch that maps pictures to features, the number of vehicle classes in a batch, the number of vehicle pictures in each class, the minimum margin between positive and negative examples, and the anchor, positive example, and negative example images in the triplet loss.
Further, the training of the deep convolutional network model proceeds as follows: each picture group passes through a shared-parameter convolutional layer consisting of the first three stages of ResNet-50 and is then split into a shared view branch (S branch) and a non-shared view branch (D branch); the S branch learns the shared view group features and the D branch learns the non-shared view group features.
Furthermore, within each of the S branch and the D branch, after the convolutional layer formed by the fourth stage of ResNet-50, the network again splits into two branches: one extracts global macroscopic information and the other extracts local detail information. The resulting GS, GD, LS, and LD branches extract the shared view group global information, non-shared view group global information, shared view group local information, and non-shared view group local information, respectively.
Further, the local features are extracted as follows: the feature map obtained after a batch of pictures passes through the above convolutional layers is randomly masked in the spatial domain. A randomly generated 0/1 mask matrix of the same size as the feature map is multiplied element-wise with the feature map, discarding the regions where the mask is 0, which forces the network to learn from the remaining regions and capture more local detail information.
Further, the shared view group features include a shared view global feature and a shared view local feature; the non-shared view group features include a non-shared view global feature and a non-shared view local feature.
Further, the shared view global features are color, vehicle type, and vehicle ID features; the shared view local feature is a vehicle ID feature; the non-shared view global features are color, vehicle type, and vehicle ID features; and the non-shared view local feature is a vehicle ID feature.
Further, the step of pairing the pictures to be identified with the vehicle comparison pictures to form a plurality of picture groups comprises: grouping one picture to be identified and one vehicle comparison picture into each group.
Further, the step of dividing the picture groups into a shared view group and a non-shared view group according to the direction information comprises: assigning each picture group to the shared view group or the non-shared view group according to whether the two pictures in the group have a shared field of view. Two pictures have a shared field of view if they possess the same view characteristic; for example, a picture with direction "front" and a picture with direction "front-left" both have a "front" view characteristic and therefore have a shared field of view.
By judging whether two vehicle pictures have a shared field of view, the distance is calculated with the corresponding features, and the vehicle is then retrieved according to the distances between vehicle pictures. Two different features are learned according to whether a shared field of view exists, each containing global macroscopic information and local detail information, which improves the accuracy of vehicle re-identification.
Example 2
A vehicle re-identification method based on vehicle direction information and a multi-branch neural network is characterized by comprising the following steps:
the method for processing the direction information specifically comprises the following steps:
firstly, labeling direction information of some pictures to train a deep convolutional network classifier for judging the direction of the unlabeled pictures. The directions of the vehicle pictures are divided into 8 types.
And judging whether the pictures in the two directions have the shared view. So that different features are extracted at the feature extraction stage depending on whether there is a shared view or not.
The specific method for extracting the features comprises the following steps:
the invention designs a four-branch deep convolutional neural network for feature extraction, and obtains more robust feature representation by adopting a multi-task design through a classification task (a loss function is cross entropy loss) and a metric learning task (a loss function is triple loss). Four different feature representations are obtained in the four branches through different processing modes of the feature maps and different triple composition modes selected according to whether the feature maps have the shared view, wherein the four different feature representations comprise global macro features with the same view, global macro features without the same view, local detail features with the same view and local detail features without the same view. Finally, the present invention fuses global and local features as the final feature representation.
The vehicle retrieval specifically comprises: judging whether two vehicle pictures have a shared view, calculating their distance with the corresponding features, and then retrieving the vehicle according to the distances between vehicle pictures.
The method of the present invention comprises the following specific steps.
Direction information processing
First, the direction information of a number of vehicle pictures is labeled; the invention labels eight directions, namely front, rear, left, right, front-left, front-right, rear-left, and rear-right. A direction classifier is then trained with the labeled pictures; the classifier is a deep convolutional neural network fine-tuned from ResNet-50, with cross-entropy loss as the loss function. Through this classifier the invention obtains the direction information of unlabeled vehicles. Since direction prediction is a simple task, high accuracy can be achieved, and the classifier is independent of the main network of the invention and can be trained separately.
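For illustration, the softmax cross-entropy objective that such an 8-way direction classifier minimizes can be computed as follows (pure-Python sketch; the function names are ours, and a real implementation would use a deep-learning framework on the logits of the fine-tuned ResNet-50):

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution (numerically stable)."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, label):
    """Negative log-probability of the true direction class."""
    return -math.log(softmax(logits)[label])
```

With all eight logits equal, the loss is log(8); as the logit of the true class grows, the loss approaches zero.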
According to the direction information, the invention judges whether two pictures have a shared field of view by checking whether they contain the same view. For example, two pictures with directions "front" and "front-left" are regarded as having a shared field of view and are marked as an S pair, while "front-left" and "rear-right" are regarded as having no shared field of view and are marked as a D pair. The specific decisions are shown in the table below, where S means that a picture pair with those two directions has a shared view and D means it does not.
TABLE 1 Determination of whether a shared field of view exists

|             | Front | Rear | Left | Front-left | Rear-left | Right | Front-right | Rear-right |
|-------------|-------|------|------|------------|-----------|-------|-------------|------------|
| Front       | S     | D    | D    | S          | D         | D     | S           | D          |
| Rear        | D     | S    | D    | D          | S         | D     | D           | S          |
| Left        | D     | D    | S    | S          | S         | S     | S           | S          |
| Front-left  | S     | D    | S    | S          | D         | S     | S           | D          |
| Rear-left   | D     | S    | S    | D          | S         | S     | D           | S          |
| Right       | D     | D    | S    | S          | S         | S     | S           | S          |
| Front-right | S     | D    | S    | S          | D         | S     | S           | D          |
| Rear-right  | D     | S    | S    | D          | S         | S     | D           | S          |
Deep feature extraction
Features are obtained with a deep convolutional network based on ResNet-50, which the invention extends into four branches, each extracting a particular feature. The network structure is shown in fig. 1. An input picture first passes through a shared-parameter convolutional layer composed of the first three stages of ResNet-50 and is then split into two branches: one learns the features of S pairs and the other the features of D pairs. Within each of these two branches, after the convolutional layer formed by the fourth stage of ResNet-50, the network again splits into two branches, one extracting global macroscopic information and the other extracting local detail information. Local detail information is extracted by randomly masking the feature map of a batch of pictures in the spatial domain: a randomly generated 0/1 mask matrix of the same size as the feature map is multiplied element-wise with the feature map, discarding the regions where the mask is 0, which forces the network to learn from the remaining regions and capture more local detail information. The network is thus extended into four branches, denoted GS, GD, LS, and LD.
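The random spatial masking step can be sketched in pure Python as follows (the list-based H x W x C layout and the drop probability are illustrative assumptions; in practice this operates on feature-map tensors inside the network):

```python
import random

def random_spatial_mask(feature_map, drop_prob=0.3, rng=None):
    """Zero out random spatial positions of an H x W x C feature map.

    The same 0/1 decision is applied across all channels at a position,
    so whole spatial regions are discarded, forcing later layers to rely
    on the surviving regions for local detail.
    """
    rng = rng or random.Random()
    h, w = len(feature_map), len(feature_map[0])
    # 0/1 mask of the same spatial size as the feature map
    mask = [[0 if rng.random() < drop_prob else 1 for _ in range(w)]
            for _ in range(h)]
    # element-wise (point) multiplication with the mask
    masked = [[[v * mask[i][j] for v in feature_map[i][j]]
               for j in range(w)] for i in range(h)]
    return masked, mask
```

Positions where the mask is 0 come out as all-zero channel vectors; the original feature map is left untouched.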
The classification task and the metric-learning task are performed on all four branches. Classification helps learn discriminative features, while metric learning helps enhance retrieval and ranking performance. On the global branches GS and GD, the invention performs three classification tasks: color classification, vehicle type classification, and vehicle ID classification; on the local branches LS and LD, only the vehicle ID classification task is performed. In the classification tasks the invention uses the cross-entropy function as the loss function. For the metric-learning task, the invention uses the triplet loss as the network loss function and divides it into two categories. The first is the intra-space loss: each branch has one, and on the S-pair branches GS and LS the invention forms triplets from S-pair vehicle samples, while on the D-pair branches GD and LD it forms triplets from D-pair vehicle samples. The second is the cross-space loss: there are two in total, corresponding to the global branches GS/GD and the local branches LS/LD; in this loss the invention forms a triplet from an anchor image, a positive example image forming a D pair with the anchor, and a negative example image forming an S pair with the anchor.
The two types of triplet loss are formulated as follows. The intra-space loss in the m-th branch is

$$
T_m=\sum_{i=1}^{P}\sum_{a=1}^{K}\left[\alpha+\max_{p=1,\dots,K}\left\|f_m\left(x_a^i\right)-f_m\left(x_p^i\right)\right\|_2-\min_{\substack{j=1,\dots,P\\ n=1,\dots,K\\ j\neq i}}\left\|f_m\left(x_a^i\right)-f_m\left(x_n^j\right)\right\|_2\right]_+
$$

where the triplets on the S-pair branches (GS and LS) are formed from S-pair samples and those on the D-pair branches (GD and LD) from D-pair samples. Here $T_m$ is the triplet loss in the m-th branch ($m\in\{GS,GD,LS,LD\}$, corresponding to the GS, GD, LS, and LD branches respectively), $[\cdot]_+$ is the hinge loss, $f_m(\cdot)$ is the mapping function of the m-th branch that maps a picture to a feature, $P$ and $K$ are the number of vehicle classes (number of vehicle IDs) in a batch and the number of vehicle pictures in each class, $\alpha$ is the minimum margin between positive and negative examples, and $x_a^i$, $x_p^i$, and $x_n^j$ denote the anchor, positive example, and negative example images in the triplet loss.

The cross-space loss is

$$
T_{g/l}^{cross}=\sum_{i=1}^{P}\sum_{a=1}^{K}\left[\alpha+\left\|f_{gd/ld}\left(x_a^i\right)-f_{gd/ld}\left(x_p^i\right)\right\|_2-\left\|f_{gs/ls}\left(x_a^i\right)-f_{gs/ls}\left(x_n^j\right)\right\|_2\right]_+
$$

where $T_{g/l}^{cross}$ denotes the cross-space triplet loss between the GS and GD branches (global) or between the LS and LD branches (local); the positive example forms a D pair with the anchor and the negative example forms an S pair with the anchor. Likewise, $f_{gd/ld}(\cdot)$ denotes the mapping function on the GD or LD branch, and $f_{gs/ls}(\cdot)$ denotes the mapping function on the GS or LS branch.
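Assuming the common batch-hard reading of the intra-space triplet loss (hardest positive and hardest negative per anchor; this reading is our assumption, since the patent gives only the symbol definitions), a minimal pure-Python sketch is:

```python
def intra_space_triplet_loss(features, labels, alpha=0.3):
    """Batch-hard triplet loss over one branch's features.

    features: list of feature vectors (plain lists of floats)
    labels:   vehicle IDs, one per feature
    alpha:    minimum margin between positive and negative examples

    For each anchor, take the farthest sample with the same ID (hardest
    positive) and the nearest sample with a different ID (hardest
    negative), then apply the hinge [.]+.
    """
    def dist(u, v):
        # Euclidean distance between two feature vectors
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    loss = 0.0
    for a, (fa, la) in enumerate(zip(features, labels)):
        hardest_pos = max(dist(fa, features[p])
                          for p in range(len(features))
                          if labels[p] == la and p != a)
        hardest_neg = min(dist(fa, features[n])
                          for n in range(len(features))
                          if labels[n] != la)
        loss += max(0.0, alpha + hardest_pos - hardest_neg)
    return loss
```

The P-classes-by-K-pictures batch sampling in the equation guarantees every ID has at least two samples, which this sketch also requires; in practice the computation runs on framework tensors.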
Finally, the four branches respectively yield the S-pair global macroscopic feature, the S-pair local detail feature, the D-pair global macroscopic feature, and the D-pair local detail feature. Thus, for each vehicle picture, the network can learn both its features as part of an S pair and its features as part of a D pair.
Vehicle retrieval
The invention retrieves vehicle pictures using the feature representation learned by the network designed above. First, the Euclidean distances between the query picture and all pictures in the search data set are calculated: when the two vehicle pictures form an S pair, the distance is computed with their S-pair features; conversely, when they form a D pair, the D-pair features are used. The vehicle pictures are then sorted by Euclidean distance, thereby retrieving the vehicles with the same ID as the query vehicle.
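The retrieval step can be sketched as follows (the dictionary layout holding a picture's two feature vectors, and the helper names, are illustrative assumptions, not taken from the patent):

```python
def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def rank_gallery(query, gallery, shared_view_of):
    """Rank gallery entries by distance to the query picture.

    Each entry carries both an S-pair and a D-pair feature
    ({'id', 'dir', 's_feat', 'd_feat'}); the pair type of (query, entry),
    decided by shared_view_of(d1, d2) -> bool, selects which feature is
    compared, as in the retrieval rule above.
    """
    scored = []
    for g in gallery:
        if shared_view_of(query["dir"], g["dir"]):
            d = euclidean(query["s_feat"], g["s_feat"])
        else:
            d = euclidean(query["d_feat"], g["d_feat"])
        scored.append((d, g["id"]))
    scored.sort()  # ascending distance = most similar first
    return [gid for _, gid in scored]
```

The returned list is the retrieval ranking; the top entries are the candidate vehicles with the same ID as the query.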
Table 2 shows a simulation experiment of the method of the present invention, measured by mAP (mean average precision) and the Rank-1 and Rank-5 indexes of the CMC (cumulative matching characteristic) curve, with experiments performed on the VeRi-776 database. The data in Table 2 compare the performance of the present invention (OMNet) with other algorithms, where RK denotes the use of a re-ranking strategy.
TABLE 2 mAP, Rank-1 and Rank-5 performance of the invention and other algorithms
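The evaluation indexes used above can be made concrete with a short sketch (illustrative only; the function names are assumptions). CMC Rank-k scores 1 when a correct match appears in a query's top-k results, and mAP is the mean over all queries of the per-query average precision:

```python
def rank_k(ranked_ids, query_id, k):
    """CMC Rank-k for one query: 1 if a correct match is in the top k."""
    return int(query_id in ranked_ids[:k])

def average_precision(ranked_ids, query_id):
    """AP for one query; mAP is the mean of this value over all queries.

    ranked_ids: gallery vehicle IDs sorted by ascending distance.
    """
    hits, precisions = 0, []
    for i, gid in enumerate(ranked_ids, start=1):
        if gid == query_id:
            hits += 1
            precisions.append(hits / i)  # precision at each correct hit
    return sum(precisions) / hits if hits else 0.0
```

For example, with ranking [2, 1, 1] and query ID 1, the correct matches sit at positions 2 and 3, so AP = (1/2 + 2/3) / 2 = 7/12, Rank-1 = 0 and Rank-2 = 1.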
In other embodiments of the present invention, there are also provided:
a vehicle re-identification system based on vehicle direction information and a multi-branch neural network, comprising:
a data acquisition module configured to: acquire a plurality of vehicle pictures to be identified and a plurality of vehicle comparison pictures from a retrieval data set;
a direction information acquisition module configured to: acquire direction information of the vehicle pictures to be identified and the vehicle comparison pictures;
a training module configured to: pair the vehicle pictures to be identified with the vehicle comparison pictures to form a plurality of picture groups, and divide the plurality of picture groups into shared view groups and non-shared view groups according to the direction information; and input the picture groups into a training model to obtain shared view group features or non-shared view group features;
a vehicle retrieval module configured to: calculate and sort the Euclidean distances between the vehicle pictures to be identified and the vehicle comparison pictures according to the shared view group features or the non-shared view group features, and retrieve a plurality of vehicles most similar to the vehicle to be identified.
Further, the specific configurations of the data acquisition module, the direction information acquisition module, the training module and the vehicle retrieval module correspond respectively to the steps of the vehicle re-identification method in the above embodiment; the detailed process can be found in embodiment 1.
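To make the S-pair/D-pair grouping performed by the training module concrete, a minimal sketch follows. The direction labels and the direction-to-view mapping here are hypothetical illustrations; the description only requires that a picture group be shared-view when the two directions expose a common view:

```python
# Hypothetical mapping from annotated direction labels to the vehicle
# views each direction exposes (labels are illustrative, not the patent's).
VIEWS = {
    "front": {"front"},
    "rear": {"rear"},
    "side": {"side"},
    "front-side": {"front", "side"},
    "rear-side": {"rear", "side"},
}

def is_s_pair(direction_a: str, direction_b: str) -> bool:
    """True if two pictures form an S pair (shared view group), i.e.
    their directions expose at least one common view; False for a D pair."""
    return bool(VIEWS[direction_a] & VIEWS[direction_b])
```

For instance, a front picture and a front-side picture share the front view and would be grouped as an S pair, while a front picture and a rear picture share no view and would form a D pair.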
A computer-readable storage medium storing computer instructions which, when executed by a processor, perform the vehicle re-identification method of embodiment 1 or embodiment 2.
An electronic device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, the computer instructions, when executed by the processor, performing the vehicle re-identification method of embodiment 1 or embodiment 2.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, they do not limit the scope of the present invention; those skilled in the art should understand that various modifications and variations made on the basis of the technical solution of the present invention without inventive effort still fall within its scope of protection.

Claims (10)

1. A vehicle re-identification method based on vehicle direction information and a multi-branch neural network, characterized by comprising the following steps:
acquiring a plurality of vehicle pictures to be identified and a plurality of vehicle comparison pictures from a retrieval data set;
acquiring direction information of the vehicle pictures to be identified and the vehicle comparison pictures;
pairing the vehicle pictures to be identified with the vehicle comparison pictures to form a plurality of picture groups, and dividing the plurality of picture groups into shared view groups and non-shared view groups according to the direction information; inputting the picture groups into a training model to obtain shared view group features or non-shared view group features;
and calculating and sorting the Euclidean distances between the vehicle pictures to be identified and the vehicle comparison pictures according to the shared view group features or the non-shared view group features, and retrieving a plurality of vehicles most similar to the vehicle to be identified.
2. The vehicle re-identification method according to claim 1, wherein acquiring the direction information of the vehicle pictures to be identified and the vehicle comparison pictures comprises:
collecting a plurality of vehicle pictures as training set pictures, and labeling the direction information of the training set pictures; inputting the labeled vehicle pictures into a deep convolutional network model for training to obtain a direction classification training model;
and inputting the vehicle pictures to be identified and the vehicle comparison pictures into the direction classification training model to obtain their direction information.
3. The vehicle re-identification method according to claim 1, wherein the training model is a deep convolutional network training model comprising four branches, namely a GS branch, a GD branch, an LS branch and an LD branch, for extracting shared view group global information, non-shared view group global information, shared view group local information and non-shared view group local information, respectively.
4. The vehicle re-identification method according to claim 1, wherein the shared view group global information is extracted through information classification and metric learning; the information classification comprises color classification, vehicle type classification and vehicle ID classification, with a cross-entropy function used as the loss function in the information classification; the metric learning adopts the triplet loss as the network loss function, and the network loss of the metric learning comprises an intra-space loss and a cross-space loss.
5. The vehicle re-identification method according to claim 1, wherein the intra-space loss is calculated by composing triplets from the vehicle samples corresponding to each shared view group or non-shared view group.
6. The vehicle re-identification method according to claim 1, wherein the shared view group features comprise shared view global features and shared view local features; the non-shared view group features comprise non-shared view global features and non-shared view local features.
7. The vehicle re-identification method according to claim 1, wherein dividing the plurality of picture groups into shared view groups and non-shared view groups according to the direction information comprises: dividing a picture group into a shared view group or a non-shared view group according to whether the direction information of the two pictures in the picture group indicates a shared view, a shared view meaning that the two vehicle pictures have the same view features.
8. A vehicle re-identification system based on vehicle direction information and a multi-branch neural network, characterized by comprising:
a data acquisition module configured to: acquire a plurality of vehicle pictures to be identified and a plurality of vehicle comparison pictures from a retrieval data set;
a direction information acquisition module configured to: acquire direction information of the vehicle pictures to be identified and the vehicle comparison pictures;
a training module configured to: pair the vehicle pictures to be identified with the vehicle comparison pictures to form a plurality of picture groups, and divide the plurality of picture groups into shared view groups and non-shared view groups according to the direction information; and input the picture groups into a training model to obtain shared view group features or non-shared view group features;
a vehicle retrieval module configured to: calculate and sort the Euclidean distances between the vehicle pictures to be identified and the vehicle comparison pictures according to the shared view group features or the non-shared view group features, and retrieve a plurality of vehicles most similar to the vehicle to be identified.
9. A computer-readable storage medium storing computer instructions which, when executed by a processor, perform the vehicle re-identification method of any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, the computer instructions, when executed by the processor, performing the vehicle re-identification method of any one of claims 1 to 7.
CN202010387486.4A 2020-05-09 2020-05-09 Vehicle weight recognition method and system based on multi-azimuth information and multi-branch neural network Active CN111582178B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010387486.4A CN111582178B (en) 2020-05-09 2020-05-09 Vehicle weight recognition method and system based on multi-azimuth information and multi-branch neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010387486.4A CN111582178B (en) 2020-05-09 2020-05-09 Vehicle weight recognition method and system based on multi-azimuth information and multi-branch neural network

Publications (2)

Publication Number Publication Date
CN111582178A true CN111582178A (en) 2020-08-25
CN111582178B CN111582178B (en) 2021-06-18

Family

ID=72110749

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010387486.4A Active CN111582178B (en) 2020-05-09 2020-05-09 Vehicle weight recognition method and system based on multi-azimuth information and multi-branch neural network

Country Status (1)

Country Link
CN (1) CN111582178B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112214631A (en) * 2020-10-19 2021-01-12 山东建筑大学 Vehicle weight identification retrieval reordering method and system guided by direction information
CN112766353A (en) * 2021-01-13 2021-05-07 南京信息工程大学 Double-branch vehicle re-identification method for enhancing local attention
CN112818837A (en) * 2021-01-29 2021-05-18 山东大学 Aerial photography vehicle weight recognition method based on attitude correction and difficult sample perception
CN114067293A (en) * 2022-01-17 2022-02-18 武汉珞信科技有限公司 Vehicle weight identification rearrangement method and system based on dual attributes and electronic equipment
CN115408580A (en) * 2022-08-31 2022-11-29 广东数鼎科技有限公司 Vehicle source model identification method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084139A (en) * 2019-04-04 2019-08-02 长沙千视通智能科技有限公司 A kind of recognition methods again of the vehicle based on multiple-limb deep learning
CN110765954A (en) * 2019-10-24 2020-02-07 浙江大华技术股份有限公司 Vehicle weight recognition method, equipment and storage device
US20200097742A1 (en) * 2018-09-20 2020-03-26 Nvidia Corporation Training neural networks for vehicle re-identification


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
RUIHANG CHU et al.: "Vehicle Re-identification with Viewpoint-aware Metric Learning", 《2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV)》 *
YI ZHOU et al.: "Viewpoint-aware Attentive Multi-view Inference for Vehicle Re-identification", 《2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
ZHENG TANG et al.: "PAMTRI: Pose-Aware Multi-Task Learning for Vehicle Re-Identification Using Highly Randomized Synthetic Data", 《2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV)》 *
ZUOZHUO DAI et al.: "Batch DropBlock Network for Person Re-identification and Beyond", 《2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV)》 *
LIU Kai et al.: "Survey of vehicle re-identification technology", 《Chinese Journal of Intelligent Science and Technology》 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112214631A (en) * 2020-10-19 2021-01-12 山东建筑大学 Vehicle weight identification retrieval reordering method and system guided by direction information
CN112214631B (en) * 2020-10-19 2024-02-27 山东建筑大学 Method and system for re-identifying, retrieving and reordering vehicles guided by direction information
CN112766353A (en) * 2021-01-13 2021-05-07 南京信息工程大学 Double-branch vehicle re-identification method for enhancing local attention
CN112766353B (en) * 2021-01-13 2023-07-21 南京信息工程大学 Double-branch vehicle re-identification method for strengthening local attention
CN112818837A (en) * 2021-01-29 2021-05-18 山东大学 Aerial photography vehicle weight recognition method based on attitude correction and difficult sample perception
CN114067293A (en) * 2022-01-17 2022-02-18 武汉珞信科技有限公司 Vehicle weight identification rearrangement method and system based on dual attributes and electronic equipment
CN114067293B (en) * 2022-01-17 2022-04-22 武汉珞信科技有限公司 Vehicle weight identification rearrangement method and system based on dual attributes and electronic equipment
CN115408580A (en) * 2022-08-31 2022-11-29 广东数鼎科技有限公司 Vehicle source model identification method and device

Also Published As

Publication number Publication date
CN111582178B (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN111582178B (en) Vehicle weight recognition method and system based on multi-azimuth information and multi-branch neural network
CN110728263B (en) Pedestrian re-recognition method based on strong discrimination feature learning of distance selection
Shen et al. Learning deep neural networks for vehicle re-id with visual-spatio-temporal path proposals
CN112101150B (en) Multi-feature fusion pedestrian re-identification method based on orientation constraint
CN108875608B (en) Motor vehicle traffic signal identification method based on deep learning
CN111259786B (en) Pedestrian re-identification method based on synchronous enhancement of appearance and motion information of video
Cao et al. Landmark recognition with sparse representation classification and extreme learning machine
CN111639564B (en) Video pedestrian re-identification method based on multi-attention heterogeneous network
CN110598543B (en) Model training method based on attribute mining and reasoning and pedestrian re-identification method
CN110175615B (en) Model training method, domain-adaptive visual position identification method and device
CN109492583A (en) A kind of recognition methods again of the vehicle based on deep learning
CN108830254B (en) Fine-grained vehicle type detection and identification method based on data balance strategy and intensive attention network
CN108764096B (en) Pedestrian re-identification system and method
CN113034545A (en) Vehicle tracking method based on CenterNet multi-target tracking algorithm
CN109034035A (en) Pedestrian's recognition methods again based on conspicuousness detection and Fusion Features
CN109165658B (en) Strong negative sample underwater target detection method based on fast-RCNN
CN104281572A (en) Target matching method and system based on mutual information
CN111881716A (en) Pedestrian re-identification method based on multi-view-angle generation countermeasure network
WO2021243947A1 (en) Object re-identification method and apparatus, and terminal and storage medium
Li et al. VRID-1: A basic vehicle re-identification dataset for similar vehicles
Barodi et al. An enhanced artificial intelligence-based approach applied to vehicular traffic signs detection and road safety enhancement
Chen et al. Part alignment network for vehicle re-identification
CN112668662B (en) Outdoor mountain forest environment target detection method based on improved YOLOv3 network
CN106650814B (en) Outdoor road self-adaptive classifier generation method based on vehicle-mounted monocular vision
CN115830643A (en) Light-weight pedestrian re-identification method for posture-guided alignment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant