CN109190513A - Vehicle re-identification method and system combining image saliency detection and neural network - Google Patents

Vehicle re-identification method and system combining image saliency detection and neural network

Info

Publication number
CN109190513A
Authority
CN
China
Prior art keywords
vehicle, image, neural network, identification, result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810921051.6A
Other languages
Chinese (zh)
Inventor
李熙莹
李国鸣
江倩殷
李鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN201810921051.6A priority Critical patent/CN109190513A/en
Publication of CN109190513A publication Critical patent/CN109190513A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects, of traffic, e.g. cars on the road, trains or boats
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 - Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a vehicle re-identification method and system combining image saliency detection and a neural network. The method includes: performing saliency detection on an original image of the vehicle using the SIM algorithm to obtain a salient appearance feature image of the vehicle; inputting the salient appearance feature image together with the original image into a neural network for training; and, according to the training result of the neural network, extracting vehicle features and re-identifying the vehicle. The system includes a saliency detection module, a training module and a re-identification module; the system further includes a memory and a processor. The invention can re-identify a vehicle from its salient appearance feature image, which strengthens the robustness of the extracted features; in addition, the invention enables the neural network to focus its feature learning on the salient regions of the vehicle image, giving higher efficiency. The invention can be widely applied in the technical field of image recognition.

Description

Vehicle re-identification method and system combining image saliency detection and neural network
Technical Field
The invention relates to the technical field of image recognition, and in particular to a vehicle re-identification method and system combining image saliency detection and a neural network.
Background
High-definition vehicle images can be obtained from video surveillance equipment such as public security checkpoints and electronic police gantries, and many details of a vehicle can be read from these images. Associating vehicle information across the checkpoint images of multiple monitoring points in a road network helps analyze vehicle trajectories and mine travel patterns, and, in security applications, helps quickly locate and track vehicles involved in incidents; vehicle re-identification across multi-point checkpoint images has therefore become a research hotspot. The main challenges in current vehicle re-identification research are: (1) vehicle pictures taken by different surveillance cameras differ in resolution, illumination, and vehicle angle and pose, all of which affect recognition; (2) vehicle brands, models and model years are numerous, and the differences between vehicles of different models can be subtle; (3) vehicles of the same brand and model have almost identical appearance and are hard to distinguish.
Vehicle re-identification belongs to the field of object re-identification, and existing research methods can be divided into two lines: metric learning and feature learning. Metric-learning methods train a distance metric from samples, increasing the similarity between samples of the same class and reducing the similarity between samples of different classes; that is, metric learning finds a reasonable feature-space mapping under which the feature distribution of the samples becomes more reasonable. Examples include Mahalanobis-distance metric learning and the ranking support vector machine (RankSVM). These methods usually require hand-crafted features, their recognition performance depends to some extent on the feature extraction, and their generalization ability is poor. Feature-learning methods combine several feature types, such as color features and scale-invariant features, to obtain better re-identification results. Because re-identification places high demands on the robustness of the extracted features, especially when no license plate information is available and vehicle appearance differences are subtle, traditional feature-learning methods struggle to achieve good recognition results. Many current methods therefore train and extract features with convolutional neural networks, for example networks using the triplet loss as the loss function, which trains the distances between positive and negative samples and an anchor sample so as to maximize inter-class variance and minimize intra-class variance.
Vehicle re-identification can usually be achieved through license plate recognition. However, when the plate is altered or forged, the vehicle carries no plate, or plate recognition fails, correct re-identification must rely on distinctive appearance features of the vehicle. In road-intersection vehicle images, vehicles of the same model and color from many brands are everywhere, and the appearance differences between vehicles of the same brand, model and year are subtle, so re-identifying vehicles from color, scale-invariant features and the like with traditional metric-learning or feature-learning methods is difficult without relying on license plate information. Among current neural-network methods, training a network with the triplet loss function is difficult and convergence is slow; moreover, feature learning driven purely by optimizing the network's loss function does not study the important features in a targeted way, so the computation is heavy and the efficiency is low.
Disclosure of Invention
To solve the above technical problems, the invention aims to provide a robust and efficient vehicle re-identification method and system combining image saliency detection and a neural network.
The first technical solution adopted by the invention is as follows:
a vehicle re-identification method combining image saliency detection and a neural network, comprising the following steps:
performing saliency detection on an original image of the vehicle using the SIM algorithm to obtain a salient appearance feature image of the vehicle;
inputting the salient appearance feature image of the vehicle together with the original image into a neural network for training;
and, according to the training result of the neural network, extracting vehicle features and re-identifying the vehicle.
Further, the step of performing saliency detection on the original image of the vehicle using the SIM algorithm to obtain a salient appearance feature image of the vehicle comprises the following steps:
performing a wavelet transform on each of the three color channels of the original image;
according to the wavelet transform results of the three color channels, computing a center-point activity coefficient and a neighborhood activity coefficient of each channel's wavelet transform result with binary filters;
computing the active contrast between the center point and the neighborhood of each color channel from the center-point and neighborhood activity coefficients;
adjusting the weight of the active contrast between the center point and the neighborhood of each color channel with an extended contrast sensitivity function;
and performing an inverse wavelet transform on the weight-adjusted result to obtain the salient appearance feature image of the vehicle.
Further, the step of performing an inverse wavelet transform on the weight-adjusted result to obtain the salient appearance feature image of the vehicle comprises the following steps:
performing an inverse wavelet transform on the weight-adjusted result of each color channel to obtain a saliency detection result for each color channel;
normalizing the saliency detection results of the color channels to obtain a saliency grayscale image;
and multiplying the saliency grayscale image pixel-wise with the original image of the vehicle to obtain the salient appearance feature image of the vehicle.
Further, the step of inputting the salient appearance feature image of the vehicle together with the original image into a neural network for training comprises the following steps:
classifying the vehicles according to their license plate information and assigning a vehicle ID to each class;
acquiring the salient appearance feature image and the original image of a vehicle and generating from them a tensor with six channels;
converting the salient appearance feature image of the vehicle into a semantic feature image of the vehicle through the convolutional layers, which apply a nonlinear mapping;
down-sampling the semantic feature image of the vehicle through the max pooling layers so that the semantic feature image remains invariant to geometric transformation and translation;
and performing feature extraction and feature combination through the fully connected layers to obtain the feature vector of the vehicle.
Further, the step of extracting vehicle features and re-identifying the vehicle according to the training result of the neural network comprises the following steps:
taking the trained neural network and removing its Softmax layer;
inputting the images in the query set and the candidate set into the neural network to obtain the feature vector of the target vehicle and the feature vectors of the vehicles to be identified, wherein the query set stores original images of target vehicles and the candidate set stores original images of vehicles to be identified;
and matching the feature vector of the target vehicle against the feature vectors of the vehicles to be identified to realize re-identification of the vehicle.
Further, the feature vector is a 1024-dimensional vector.
Further, the step of matching the feature vector of the target vehicle against the feature vectors of the vehicles to be identified to realize re-identification of the vehicle comprises the following steps:
computing the Euclidean distance between the feature vector of the target vehicle and each feature vector of a vehicle to be identified;
sorting the computed Euclidean distances in ascending order;
and matching the vehicle IDs of the corresponding vehicles to be identified with the target vehicle according to the ascending order to obtain the vehicle re-identification result.
The second technical solution adopted by the invention is as follows:
a vehicle re-identification system combining image saliency detection and a neural network, comprising:
a saliency detection module for performing saliency detection on an original image of the vehicle using the SIM algorithm to obtain a salient appearance feature image of the vehicle;
a training module for inputting the salient appearance feature image of the vehicle together with the original image into a neural network for training;
and a re-identification module for extracting vehicle features and re-identifying the vehicle according to the training result of the neural network.
Further, the saliency detection module comprises:
a wavelet transform unit for performing a wavelet transform on each of the three color channels of the original image;
an activity coefficient calculation unit for computing, with binary filters, a center-point activity coefficient and a neighborhood activity coefficient of each channel's wavelet transform result according to the wavelet transform results of the three color channels;
an active contrast calculation unit for computing the active contrast between the center point and the neighborhood of each color channel from the center-point and neighborhood activity coefficients;
a weight adjustment unit for adjusting the weight of the active contrast between the center point and the neighborhood of each color channel with the extended contrast sensitivity function;
and an inverse wavelet transform unit for performing an inverse wavelet transform on the weight-adjusted result to obtain the salient appearance feature image of the vehicle.
The third technical solution adopted by the invention is as follows:
a vehicle re-identification system combining image saliency detection and a neural network, comprising:
a memory for storing a program;
and a processor for loading the program to execute the vehicle re-identification method combining image saliency detection and a neural network described in the first technical solution.
The invention has the following beneficial effects: the method realizes saliency detection of the vehicle based on the SIM algorithm and can re-identify a vehicle from its salient appearance feature image even when vehicle appearance differences are small, which strengthens the robustness of the extracted features; in addition, the invention inputs the salient appearance feature image of the vehicle together with the original image into the neural network for training, so that the network focuses its feature learning on the salient regions of the vehicle image, giving higher efficiency.
Drawings
FIG. 1 is a flow chart of the steps of the vehicle re-identification method combining image saliency detection and a neural network of the present invention;
FIG. 2 is a flowchart of the saliency detection steps based on the SIM algorithm in an embodiment of the present invention;
FIG. 3 is a schematic diagram of the horizontal neighborhood binary filter in an embodiment of the present invention;
FIG. 4 is a schematic diagram of the vertical neighborhood binary filter in an embodiment of the present invention;
FIG. 5 is a schematic diagram of the diagonal neighborhood binary filter in an embodiment of the present invention;
FIG. 6 is a schematic diagram of the horizontal center-point binary filter in an embodiment of the present invention;
FIG. 7 is a schematic diagram of the vertical center-point binary filter in an embodiment of the present invention;
FIG. 8 is a schematic diagram of the diagonal center-point binary filter in an embodiment of the present invention;
FIG. 9 is a schematic diagram of the pixel multiplication of the original image and the saliency map in an embodiment of the present invention;
fig. 10 is a schematic diagram of the neural network structure in an embodiment of the present invention.
Detailed Description
The invention is further explained and illustrated below with reference to the drawings and the embodiments in the description. The step numbers in the embodiments are set for convenience of presentation only; the order between the steps is not limited in any way, and the execution order of the steps in the embodiments can be adjusted adaptively according to the understanding of those skilled in the art.
Referring to fig. 1, the vehicle re-identification method combining image saliency detection and a neural network of the invention comprises the steps of:
performing saliency detection on an original image of the vehicle using the SIM algorithm to obtain a salient appearance feature image of the vehicle;
inputting the salient appearance feature image of the vehicle together with the original image into a neural network for training;
and, according to the training result of the neural network, extracting vehicle features and re-identifying the vehicle.
Further as a preferred embodiment, the step of performing saliency detection on the original image of the vehicle using the SIM algorithm to obtain a salient appearance feature image of the vehicle comprises the following steps:
performing a wavelet transform on each of the three color channels of the original image;
according to the wavelet transform results of the three color channels, computing a center-point activity coefficient and a neighborhood activity coefficient of each channel's wavelet transform result with binary filters;
computing the active contrast between the center point and the neighborhood of each color channel from the center-point and neighborhood activity coefficients;
adjusting the weight of the active contrast between the center point and the neighborhood of each color channel with an extended contrast sensitivity function;
and performing an inverse wavelet transform on the weight-adjusted result to obtain the salient appearance feature image of the vehicle.
Further as a preferred embodiment, the step of performing an inverse wavelet transform on the weight-adjusted result to obtain the salient appearance feature image of the vehicle comprises the following steps:
performing an inverse wavelet transform on the weight-adjusted result of each color channel to obtain a saliency detection result for each color channel;
normalizing the saliency detection results of the color channels to obtain a saliency grayscale image;
and multiplying the saliency grayscale image pixel-wise with the original image of the vehicle to obtain the salient appearance feature image of the vehicle.
Further as a preferred embodiment, the step of inputting the salient appearance feature image of the vehicle together with the original image into the neural network for training comprises the following steps:
classifying the vehicles according to their license plate information and assigning a vehicle ID to each class;
acquiring the salient appearance feature image and the original image of a vehicle and generating from them a tensor with six channels;
converting the salient appearance feature image of the vehicle into a semantic feature image of the vehicle through the convolutional layers, which apply a nonlinear mapping;
down-sampling the semantic feature image of the vehicle through the max pooling layers so that the semantic feature image remains invariant to geometric transformation and translation;
and performing feature extraction and feature combination through the fully connected layers to obtain the feature vector of the vehicle.
Further as a preferred embodiment, the step of extracting vehicle features and re-identifying the vehicle according to the training result of the neural network comprises the following steps:
taking the trained neural network and removing its Softmax layer;
inputting the images in the query set and the candidate set into the neural network to obtain the feature vector of the target vehicle and the feature vectors of the vehicles to be identified, wherein the query set stores original images of target vehicles and the candidate set stores original images of vehicles to be identified;
and matching the feature vector of the target vehicle against the feature vectors of the vehicles to be identified to realize re-identification of the vehicle.
Further preferably, the feature vector is a 1024-dimensional vector.
Further as a preferred embodiment, the step of matching the feature vector of the target vehicle against the feature vectors of the vehicles to be identified to realize re-identification of the vehicle comprises the following steps:
computing the Euclidean distance between the feature vector of the target vehicle and each feature vector of a vehicle to be identified;
sorting the computed Euclidean distances in ascending order;
and matching the vehicle IDs of the corresponding vehicles to be identified with the target vehicle according to the ascending order to obtain the vehicle re-identification result.
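The distance-based matching above can be sketched as follows; the 3-dimensional vectors, vehicle IDs and the `re_identify` helper are illustrative stand-ins for the 1024-dimensional features described in the text, not part of the patent:

```python
import math

def euclidean(a, b):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def re_identify(query_vec, candidates):
    """Rank candidate vehicles by ascending Euclidean distance to the query.

    candidates: list of (vehicle_id, feature_vector) pairs.
    Returns vehicle IDs sorted from best to worst match.
    """
    ranked = sorted(candidates, key=lambda c: euclidean(query_vec, c[1]))
    return [vid for vid, _ in ranked]

# Toy 3-D vectors stand in for the 1024-D features of the trained network.
query = [1.0, 0.0, 0.0]
gallery = [("car_A", [0.9, 0.1, 0.0]),
           ("car_B", [0.0, 1.0, 0.0]),
           ("car_C", [1.0, 0.05, 0.0])]
print(re_identify(query, gallery))  # ['car_C', 'car_A', 'car_B']
```

The first ID in the ranked list is the re-identification result; keeping the full ascending ordering also supports rank-k evaluation.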
Corresponding to the method of fig. 1, the invention provides a vehicle re-identification system combining image saliency detection and a neural network, comprising:
a saliency detection module for performing saliency detection on an original image of the vehicle using the SIM algorithm to obtain a salient appearance feature image of the vehicle;
a training module for inputting the salient appearance feature image of the vehicle together with the original image into a neural network for training;
and a re-identification module for extracting vehicle features and re-identifying the vehicle according to the training result of the neural network.
Further as a preferred embodiment, the saliency detection module comprises:
a wavelet transform unit for performing a wavelet transform on each of the three color channels of the original image;
an activity coefficient calculation unit for computing, with binary filters, a center-point activity coefficient and a neighborhood activity coefficient of each channel's wavelet transform result according to the wavelet transform results of the three color channels;
an active contrast calculation unit for computing the active contrast between the center point and the neighborhood of each color channel from the center-point and neighborhood activity coefficients;
a weight adjustment unit for adjusting the weight of the active contrast between the center point and the neighborhood of each color channel with the extended contrast sensitivity function;
and an inverse wavelet transform unit for performing an inverse wavelet transform on the weight-adjusted result to obtain the salient appearance feature image of the vehicle.
Corresponding to the method of fig. 1, the invention also provides a vehicle re-identification system combining image saliency detection and a neural network, comprising:
a memory for storing a program;
and a processor for loading the program to execute the vehicle re-identification method combining image saliency detection and a neural network.
The specific implementation of the vehicle re-identification method combining image saliency detection and a neural network is described in detail below, taking vehicle images captured by a public security checkpoint camera as an example:
S1: capture passing vehicles with a camera installed at the public security checkpoint to obtain original images of the vehicles;
S2: perform saliency detection on the original image of the vehicle using the SIM algorithm to obtain the salient appearance feature image of the vehicle.
as shown in fig. 2, step S2 specifically includes the following steps:
S21: perform a wavelet transform on each of the three color channels of the original image. Specifically, each input color vehicle image contains three color channels (RGB); this embodiment denotes each channel as c_i (i = 1, 2, 3) and applies a wavelet transform to each channel separately, computed as:
{w_{s,o}} = WT(c), s = 1, 2, ..., n; o = h, v, d,
where c is any color channel of the image; WT(·) denotes the wavelet transform; w_{s,o} is the wavelet transform result in direction o at decomposition level s; h, v and d denote the horizontal, vertical and diagonal directions; and s is the decomposition level, i.e. the scale of the wavelet transform. The total number of decomposition levels is n = log2 min(W, H), where W x H is the resolution of the image.
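The per-channel decomposition of step S21 can be sketched with a one-level transform; the patent does not name the wavelet family it uses, so the Haar wavelet and the `haar2d` helper below are illustrative assumptions:

```python
import numpy as np

def haar2d(channel):
    """One level of a 2-D Haar wavelet transform of a single color channel.

    Returns the (approximation, horizontal, vertical, diagonal) subbands,
    each half the input size; the three detail subbands correspond to the
    directions o = h, v, d in the text.
    """
    c = channel.astype(float)
    tl = c[0::2, 0::2]; tr = c[0::2, 1::2]  # top-left / top-right of each 2x2 block
    bl = c[1::2, 0::2]; br = c[1::2, 1::2]  # bottom-left / bottom-right
    approx = (tl + tr + bl + br) / 4.0      # low-pass approximation
    horiz  = (tl + tr - bl - br) / 4.0      # horizontal detail (o = h)
    vert   = (tl - tr + bl - br) / 4.0      # vertical detail (o = v)
    diag   = (tl - tr - bl + br) / 4.0      # diagonal detail (o = d)
    return approx, horiz, vert, diag

channel = np.arange(64, dtype=float).reshape(8, 8)  # toy single channel
print([s.shape for s in haar2d(channel)])           # four 4x4 subbands
```

Iterating the same step on the approximation subband yields the multi-level decomposition up to n = log2 min(W, H).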
S22: according to the wavelet transform results of the three color channels, compute the center-point activity coefficient and the neighborhood activity coefficient of each channel's wavelet transform result with binary filters:
a^c_{s,o} = |h^c_o * w_{s,o}|, a^n_{s,o} = |h^n_o * w_{s,o}|,
where a^c_{s,o} and a^n_{s,o} are the center-point and neighborhood activity coefficients respectively; h^c_o and h^n_o are the center-point and neighborhood binary filters for direction o; and * denotes convolution. The six binary filters, namely the horizontal, vertical and diagonal neighborhood binary filters and the horizontal, vertical and diagonal center-point binary filters, are shown in fig. 3, fig. 4, fig. 5, fig. 6, fig. 7 and fig. 8 respectively.
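Step S22 can be sketched as follows; the actual binary masks are given in Figs. 3-8 and are not reproduced in this text, so the `h_center`/`h_neigh` masks below are hypothetical placeholders for the horizontal orientation:

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-size 2-D convolution (sufficient for tiny binary masks)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    kf = k[::-1, ::-1]  # flip the kernel for true convolution
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kf)
    return out

# Hypothetical horizontal masks; the patent's actual masks are in Figs. 3-8.
h_center = np.array([[0, 1, 0]])  # picks out the center coefficient
h_neigh  = np.array([[1, 0, 1]])  # sums the two horizontal neighbors

def activity_coefficients(w):
    """Center-point and neighborhood activity from a wavelet subband w."""
    a_c = np.abs(conv2d_same(w, h_center))
    a_n = np.abs(conv2d_same(w, h_neigh))
    return a_c, a_n
```

The same pattern applies per orientation (h, v, d) with the corresponding filter pair.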
S23: compute the active contrast between the center point and the neighborhood of each color channel from the center-point and neighborhood activity coefficients:
z_{s,o} = r_{s,o} / (1 + r_{s,o}), with r_{s,o} = (a^c_{s,o} / a^n_{s,o})^2,
where z_{s,o} is the center-neighborhood active contrast in direction o at scale s (i.e. the active contrast between the center point and the neighborhood region), which reflects the relationship between a region of the image and its surroundings: the larger the value of z_{s,o}, the higher the activity of the central region relative to the surrounding regions, i.e. the more salient the central region can be considered; r_{s,o} is the squared ratio of the center-point activity coefficient to the neighborhood activity coefficient.
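A minimal sketch of step S23, reconstructed from the description of r_{s,o} as a squared ratio; the exact saturating form used by the SIM algorithm may differ from this one, which keeps z in [0, 1):

```python
def active_contrast(a_c, a_n, eps=1e-12):
    """Center-neighborhood active contrast z_{s,o}.

    r is the squared ratio of center-point activity a_c to neighborhood
    activity a_n; z = r / (1 + r) saturates toward 1 as the center
    dominates its surroundings and toward 0 as the surroundings dominate.
    eps avoids division by zero in flat regions.
    """
    r = (a_c / (a_n + eps)) ** 2
    return r / (1.0 + r)
```

Applied element-wise to the coefficient maps of S22, this yields one contrast map per scale and orientation.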
S24: adjust the weight of the active contrast between the center point and the neighborhood of each color channel with the extended contrast sensitivity function. Specifically, after z_{s,o} is computed, it is further adjusted by a weight function: the SIM algorithm adjusts the center-neighborhood active contrast with the Extended Contrast Sensitivity Function (ECSF). The ECSF is a simple linear function whose coefficients vary with the level of the wavelet transform:
ECSF(z_{s,o}, s) = z_{s,o} · g(s) + k(s),
where g(s) and k(s) are the coefficients of the ECSF, both of which decay as the scale s changes. In this embodiment, the weight-adjusted result α_{s,o} is α_{s,o} = ECSF(z_{s,o}, s).
S25: perform an inverse wavelet transform on the weight-adjusted result to obtain the salient appearance feature image of the vehicle. After the weight-adjusted result α_{s,o} is obtained, the inverse wavelet transform is applied to α_{s,o}:
S_c = WT^{-1}{α_{s,o}}, s = 1, 2, ..., n; o = h, v, d,
where S_c is the saliency detection result of channel c and WT^{-1} denotes the inverse wavelet transform.
The invention performs the same saliency detection operation on all channels (i.e., the three RGB channels) to obtain the final saliency grayscale image, whose calculation formula is: S_map = normalization(Σ_c S_c), wherein S_map is the final saliency grayscale image and normalization(x) represents a normalization operation, such that the resulting saliency grayscale image is a grayscale map whose pixel values lie in the interval [0, 1].
As shown in fig. 9, after obtaining the saliency grayscale map, the invention performs pixel-wise multiplication between the saliency grayscale image and the original image of the vehicle to obtain the salient appearance feature image of the vehicle: I_sal = I ⊗ S_map, wherein I represents the original image of the vehicle; S_map represents the saliency grayscale image of the image; ⊗ represents pixel-wise multiplication between image matrices at corresponding positions; and I_sal represents the result of the multiplication (i.e., the salient appearance feature image of the vehicle). Through the salient appearance feature image of the vehicle, the salient regions of the original image can be obtained; that is, each pixel of the original image is multiplied by a specific weight, which is large if the pixel belongs to a salient region and small otherwise.
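The channel fusion and pixel-wise weighting above can be sketched as follows; it is assumed here (not stated explicitly in the text) that the per-channel results S_c are summed before normalization, and the function names are illustrative.

```python
import numpy as np

def saliency_map(channel_maps):
    """Fuse per-channel saliency results S_c into S_map, normalized to [0, 1].

    channel_maps: array of shape (3, H, W), one map per RGB channel.
    Assumes the channel results are summed before min-max normalization.
    """
    s = np.sum(channel_maps, axis=0)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def salient_appearance_image(image, s_map):
    """I_sal = I (x) S_map: weight every pixel of the original image by its
    saliency, so salient regions keep their intensity and others are damped."""
    return image * s_map[..., np.newaxis]  # broadcast over the color channels
```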
As shown in fig. 10, S3, inputting the salient appearance feature image of the vehicle together with the original image into a neural network for training. The method uses a convolutional neural network to extract image features; the base network of the invention is VGG16, whose structure is shown in Table 1. The convolutional neural network of Table 1 comprises convolutional layers (convolution), max pooling layers (max pooling) and fully connected layers (fully connected).
Table 1. VGG16 convolutional neural network architecture
The greatest difference between the convolutional neural network architecture of the invention and a conventional convolutional neural network is that the proposed network model has two input parts: the first is the original image of the vehicle, and the second is the salient appearance feature image corresponding to the original image. The salient appearance feature image is concatenated to the original image as auxiliary information to form a 6-channel tensor, which is input into the subsequent network layers; the input thus simultaneously contains the salient feature information of the image. This salient feature information acts as feature enhancement, making it more likely that the neural network extracts robust features.
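The 6-channel input construction can be sketched as a simple channel-axis concatenation; the function name and the channel-last layout are illustrative choices.

```python
import numpy as np

def six_channel_input(original, salient):
    """Concatenate the original RGB image and its salient appearance feature
    image along the channel axis, giving the 6-channel network input tensor."""
    assert original.shape == salient.shape and original.shape[-1] == 3
    return np.concatenate([original, salient], axis=-1)  # shape (H, W, 6)
```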
In addition, the number of neurons in the fully connected layer of the neural network is changed to 1024 for training the classification task; the network obtained after training has the ability to extract features, so the output of the fully connected layer is the feature of the image extracted by the neural network. In the original VGG16 neural network, the fully connected layer has 4096 neurons, a very high-dimensional vector; such a high dimension not only makes the extracted features abnormally sparse but also reduces the efficiency of subsequent feature matching. Therefore 1024 is chosen as the dimension for neural network training, which improves the robustness of feature extraction.
Specifically, step S3 includes the steps of:
S31, classifying the vehicles according to the license plate information of the vehicles, and setting a vehicle ID for each category;
the step S31 specifically includes: according to the method, a 6-channel tensor composed of an original image and a remarkable appearance characteristic image obtained by a saliency detection module is used as the input of a neural network for training, vehicles of each license plate are respectively marked as different IDs (identities), the IDs of the vehicles are regarded as one type, and the problem of vehicle re-identification is integrated into a classification task to train the network.
S32, acquiring a significant appearance characteristic image and an original image of the vehicle, and generating a tensor according to the significant appearance characteristic image and the original image of the vehicle, wherein the tensor has six channels;
S33, converting the salient appearance characteristic image of the vehicle into a semantic characteristic image of the vehicle through a convolution layer by adopting a nonlinear mapping method;
the step S33 specifically includes: the role of the convolutional layer is to convert low-level image features into high-level semantic features using a non-linear mapping. The input to the convolutional layer of the present invention is a three-dimensional matrix X of size s1 xs 2 xs 3, where s3 is the number of two-dimensional feature maps input and s1 xs 2 is the two-dimensional feature map XiThe size of (2). The output of the convolutional layer is a three-dimensional matrix Y of size t1 × t2 × t3, where t3 is the number of two-dimensional feature maps output and t1 × t2 is the Y of the two-dimensional feature maps outputjSize, the yjThe calculation formula of (2) is as follows:
wherein xiIs the two-dimensional characteristic diagram of the ith input of the convolutional layer; y isjIs the two-dimensional characteristic diagram of the jth output of the convolution layer;represents a convolution operation; k is a radical ofijRepresenting a two-dimensional convolution kernel of the jth output two-dimensional feature map corresponding to the ith input two-dimensional feature map, wherein parameters of the two-dimensional convolution kernel are obtained by network training; (x) is an activation function defined by the formula: f (x) max (0, x).
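The convolutional-layer computation can be sketched naively as below (a didactic loop, not an efficient implementation); the function names and 'valid' padding choice are illustrative.

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 2-D 'valid' convolution of one feature map with one kernel."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    k_flipped = k[::-1, ::-1]  # true convolution flips the kernel
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k_flipped)
    return out

def conv_layer(X, K):
    """y_j = f(sum_i x_i conv k_ij) with f(x) = max(0, x) (ReLU).

    X: input maps, shape (s3, s1, s2); K: kernels, shape (t3, s3, kh, kw)."""
    Y = []
    for j in range(K.shape[0]):
        acc = sum(conv2d_valid(X[i], K[j, i]) for i in range(X.shape[0]))
        Y.append(np.maximum(acc, 0.0))  # ReLU activation
    return np.stack(Y)
```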
S34, processing the semantic feature image of the vehicle through a maximum pooling layer by adopting a down-sampling method to ensure that the semantic feature image of the vehicle keeps geometric and translational invariance;
wherein, the step S34 specifically includes: the function of the max pooling layer is to give the features geometric and translational invariance by means of downsampling; the calculation formula of the max pooling layer is as follows:
y_{i,j,k} = max(b_{i-p,j-q,k}, b_{i-p+1,j-q+1,k}, ..., b_{i+p,j+q,k}),
wherein y_{i,j,k} represents the pixel value at coordinates (i, j) in the kth output feature map, and b_{i+p,j+q,k} represents the pixel value at coordinates (i+p, j+q) in the kth input feature map.
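Max pooling over one feature map can be sketched as follows; the non-overlapping 2×2 window is an assumed (and typical) configuration, not one fixed by the text.

```python
import numpy as np

def max_pool(b, size=2, stride=2):
    """Max pooling: each output pixel is the maximum of a size x size
    neighborhood of the input feature map, giving translation invariance."""
    h = (b.shape[0] - size) // stride + 1
    w = (b.shape[1] - size) // stride + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = b[i * stride:i * stride + size,
                          j * stride:j * stride + size].max()
    return out
```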
S35, performing feature extraction and feature combination through the fully connected layer to obtain the feature vector of the vehicle.
The step S35 specifically includes: after the alternating processing of several convolutional layers and pooling layers, the neural network of the invention can be provided with one or more fully connected layers, according to actual needs, to combine the features and output the finally extracted feature. Each neuron in the fully connected layer is connected with all neurons in the input layer, i.e., each neuron performs a weighted statistic over all input features; the calculation formula of the weighted statistic is as follows:
y_j^l = f(Σ_i w_{ij}^l · x_i^{l-1} + b_j^l),
wherein layer l denotes a fully connected layer; y_j^l is the jth neuron of layer l; w_{ij}^l is the parameter connecting the jth neuron of layer l with all the neurons in the ith input feature map of layer l-1; and b_j^l is the bias term.
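The weighted statistic of a fully connected layer can be sketched as a matrix-vector product over the flattened input; the function name and ReLU choice for f follow the earlier definition f(x) = max(0, x).

```python
import numpy as np

def fully_connected(x, W, b):
    """y_j = f(sum_i w_ij * x_i + b_j): every output neuron takes a weighted
    sum of *all* flattened input features, plus a bias, through ReLU."""
    x = np.asarray(x).ravel()        # connect to all input neurons
    return np.maximum(W @ x + b, 0.0)
```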
S4, extracting vehicle features and re-identifying the vehicle according to the training result of the neural network.
Wherein the step S4 includes the steps of:
S41, obtaining the training result of the neural network and removing the Softmax layer: after the training of the neural network is completed, the Softmax layer is removed, and the output of the last fully connected layer is used as the extracted feature, which is a 1024-dimensional vector.
S42, inputting the images in the query set and the candidate set into a neural network to obtain the characteristic vector of the target vehicle and the characteristic vector of the vehicle to be identified; wherein the query set stores original images of target vehicles, and the candidate set stores original images of vehicles to be identified;
S43, querying and matching the feature vector of the target vehicle with the feature vector of the vehicle to be identified to realize re-identification of the vehicle;
wherein, the step S43 specifically includes the following steps:
S431, calculating the Euclidean distance between the feature vector of the target vehicle and the feature vector of the vehicle to be identified; the calculation formula of the Euclidean distance is as follows:
dist = ||feature_query - feature_gallery||,
wherein dist represents the distance between the feature vectors; feature_query represents the feature vector of the target vehicle image; feature_gallery represents the feature vector of the image of the vehicle to be identified; and || · || represents the modulus of the vector.
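The distance computation is a one-liner with numpy; the function name is illustrative.

```python
import numpy as np

def euclidean_distance(feature_query, feature_gallery):
    """dist = ||feature_query - feature_gallery|| between two feature vectors
    (1024-dimensional in the method described above)."""
    return np.linalg.norm(np.asarray(feature_query) - np.asarray(feature_gallery))
```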
S432, sorting the calculated Euclidean distances in ascending order; the Euclidean distance between the feature vector of a target vehicle and the feature vector of a vehicle to be identified that share the same vehicle ID is smaller.
S433, matching the vehicle IDs of the corresponding vehicles to be identified and the target vehicle according to the ascending sorting result to obtain the vehicle re-identification result.
After the Euclidean distances between the feature vectors are calculated, they are sorted in ascending order. According to the ascending sorting result, it is checked whether the vehicles to be recognized ranked at the front (i.e., with smaller Euclidean distance) have the same vehicle ID as the corresponding target vehicle, so that the vehicle recognition rate can be counted and the re-identification result of the vehicle obtained.
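Steps S431-S433 together can be sketched as the rank-1 evaluation below; the function name and the rank-1 accuracy metric are illustrative assumptions (the text only says the recognition rate is counted from the ascending ordering).

```python
import numpy as np

def re_identify(query_feats, gallery_feats, query_ids, gallery_ids):
    """Rank gallery vehicles by ascending Euclidean distance for each query
    and report the top-1 matched IDs plus the rank-1 match rate."""
    # Pairwise Euclidean distances, shape (n_query, n_gallery)
    d = np.linalg.norm(query_feats[:, None, :] - gallery_feats[None, :, :], axis=-1)
    order = np.argsort(d, axis=1)            # ascending: closest first
    top1 = gallery_ids[order[:, 0]]          # best-matching gallery ID per query
    rank1_rate = float(np.mean(top1 == query_ids))
    return top1, rank1_rate
```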
In summary, the vehicle re-identification method and system combining image saliency detection and a neural network of the present invention have the following advantages:
1) the vehicle saliency detection is realized based on the SIM algorithm, and the vehicle can be re-identified according to the salient appearance feature image of the vehicle under the condition of small vehicle appearance difference, so that the robustness of feature extraction is enhanced;
2) the method inputs the significant appearance characteristic image of the vehicle and the original image into the neural network for training, so that the neural network can carry out characteristic learning on the significant characteristic region of the vehicle image in a targeted manner, and the efficiency is higher;
3) the method adopts a method based on the combination of significance detection and a convolutional neural network to carry out vehicle re-identification, and can accurately carry out re-identification on the vehicle image at the bayonet;
4) the vehicle re-identification method can realize re-identification of the vehicle according to unique appearance characteristics under the condition that the appearance difference of the vehicle is small without depending on the license plate information;
5) by combining the neural network method, the invention does not require manually designed features and is more efficient.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A vehicle re-identification method combining image saliency detection and a neural network, characterized in that the method comprises the following steps:
carrying out significance detection on an original image of the vehicle by adopting an SIM algorithm to obtain a significant appearance characteristic image of the vehicle;
inputting the significant appearance characteristic image of the vehicle and the original image into a neural network together for training;
and according to the training result of the neural network, extracting the vehicle characteristics and carrying out re-identification on the vehicle.
2. The method of vehicle re-identification in combination with image saliency detection and neural networks of claim 1, characterized by: the step of adopting the SIM algorithm to carry out significance detection on the original image of the vehicle to obtain the significant appearance characteristic image of the vehicle comprises the following steps:
respectively carrying out wavelet transformation on three color channels of the original image;
respectively calculating a central point active coefficient and a neighborhood region active coefficient of the wavelet transform result of each color channel by adopting a binary filter according to the wavelet transform results of the three color channels;
respectively calculating the active contrast ratio between the central point of each color channel and the neighborhood region according to the central point active coefficient and the neighborhood region active coefficient;
performing weight adjustment on the active contrast between the central point of each color channel and the neighborhood region through an extended contrast sensitivity function;
and performing inverse wavelet transformation on the weight adjustment result to obtain a remarkable appearance characteristic image of the vehicle.
3. The method of vehicle re-identification in combination with image saliency detection and neural networks of claim 2, characterized by: the step of performing inverse wavelet transform on the weight adjustment result to obtain the prominent appearance characteristic image of the vehicle comprises the following steps:
performing inverse wavelet transformation on the weight adjustment result of each color channel to obtain a significance detection result of each color channel;
carrying out normalization operation on the significance detection result of each color channel to obtain a significance gray image;
and carrying out pixel multiplication calculation on the significant gray level image and the original image of the vehicle to obtain a significant appearance characteristic image of the vehicle.
4. The method of vehicle re-identification in combination with image saliency detection and neural networks of claim 1, characterized by: the step of inputting the image with the obvious appearance characteristics of the vehicle and the original image into a neural network for training comprises the following steps:
classifying the vehicles according to the license plate information of the vehicles, and setting a vehicle ID for each category;
acquiring a significant appearance characteristic image and an original image of a vehicle, and generating a tensor according to the significant appearance characteristic image and the original image of the vehicle, wherein the tensor has six channels;
converting the obvious appearance characteristic image of the vehicle into a semantic characteristic image of the vehicle through a convolutional layer by adopting a nonlinear mapping method;
processing the semantic feature image of the vehicle by a maximum pooling layer by adopting a down-sampling method so as to ensure that the semantic feature image of the vehicle keeps geometric and translational invariance;
and performing feature extraction and feature combination through the full connection layer to obtain a feature vector of the vehicle.
5. The method of vehicle re-identification in combination with image saliency detection and neural networks of claim 4, wherein: the step of extracting vehicle features and re-identifying the vehicle according to the training result of the neural network comprises the following steps:
acquiring a training result of the neural network and removing the Softmax layer;
inputting the images in the query set and the candidate set into a neural network to obtain a characteristic vector of the target vehicle and a characteristic vector of the vehicle to be identified; wherein the query set stores original images of target vehicles, and the candidate set stores original images of vehicles to be identified;
and inquiring and matching the characteristic vector of the target vehicle and the characteristic vector of the vehicle to be identified to realize the re-identification of the vehicle.
6. The method of vehicle re-identification in combination with image saliency detection and neural networks of claim 5, wherein: the feature vector is a 1024-dimensional vector.
7. The method of vehicle re-identification in combination with image saliency detection and neural networks of claim 5, wherein: the step of inquiring and matching the characteristic vector of the target vehicle and the characteristic vector of the vehicle to be identified to realize the re-identification of the vehicle comprises the following steps:
calculating the Euclidean distance between the characteristic vector of the target vehicle and the characteristic vector of the vehicle to be identified;
performing ascending sequencing on the calculated Euclidean distances;
and matching the vehicle IDs of the corresponding vehicles to be identified and the target vehicle according to the ascending sorting result to obtain a vehicle re-identification result.
8. A vehicle re-identification system combining image saliency detection and a neural network, characterized in that the system comprises:
the saliency detection module is used for carrying out saliency detection on an original image of the vehicle by adopting an SIM algorithm to obtain a saliency appearance characteristic image of the vehicle;
the training module is used for inputting the significant appearance characteristic image of the vehicle and the original image into a neural network together for training;
and the re-recognition module is used for extracting the vehicle characteristics and re-recognizing the vehicle according to the training result of the neural network.
9. The vehicle weight recognition system in combination with image saliency detection and neural networks of claim 8, wherein: the significance detection module comprises:
the wavelet transformation unit is used for respectively performing wavelet transformation on three color channels of the original image;
the active coefficient calculation unit is used for calculating a central active coefficient and a neighborhood region active coefficient of the wavelet transform result of each color channel by adopting a binary filter according to the wavelet transform results of the three color channels;
the active contrast calculation unit is used for calculating the active contrast between the central point of each color channel and the neighborhood region according to the central point active coefficient and the neighborhood region active coefficient;
the weight adjusting unit is used for performing weight adjustment on the active contrast between the central point of each color channel and the neighborhood region through expanding the contrast sensitivity function;
and the inverse wavelet transform unit is used for performing inverse wavelet transform on the weight adjustment result to obtain a remarkable appearance characteristic image of the vehicle.
10. A vehicle re-identification system combining image saliency detection and a neural network, characterized in that the system comprises:
a memory for storing a program;
a processor for loading a program to perform the method of vehicle re-identification in combination with image saliency detection and neural networks of any one of claims 1 to 7.
CN201810921051.6A 2018-08-14 2018-08-14 In conjunction with the vehicle of saliency detection and neural network again recognition methods and system Pending CN109190513A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810921051.6A CN109190513A (en) 2018-08-14 2018-08-14 In conjunction with the vehicle of saliency detection and neural network again recognition methods and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810921051.6A CN109190513A (en) 2018-08-14 2018-08-14 In conjunction with the vehicle of saliency detection and neural network again recognition methods and system

Publications (1)

Publication Number Publication Date
CN109190513A true CN109190513A (en) 2019-01-11

Family

ID=64921408

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810921051.6A Pending CN109190513A (en) 2018-08-14 2018-08-14 In conjunction with the vehicle of saliency detection and neural network again recognition methods and system

Country Status (1)

Country Link
CN (1) CN109190513A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110321949A (en) * 2019-06-29 2019-10-11 天津大学 A kind of distributed car tracing method and system based on observed terminals network
CN110619280A (en) * 2019-08-23 2019-12-27 长沙千视通智能科技有限公司 Vehicle heavy identification method and device based on deep joint discrimination learning
CN110909785A (en) * 2019-11-18 2020-03-24 西北工业大学 Multitask Triplet loss function learning method based on semantic hierarchy
CN111292530A (en) * 2020-02-04 2020-06-16 浙江大华技术股份有限公司 Method, device, server and storage medium for processing violation pictures
CN111428688A (en) * 2020-04-16 2020-07-17 成都旸谷信息技术有限公司 Intelligent vehicle driving lane identification method and system based on mask matrix
CN111429484A (en) * 2020-03-31 2020-07-17 电子科技大学 Multi-target vehicle track real-time construction method based on traffic monitoring video
CN111540217A (en) * 2020-04-16 2020-08-14 成都旸谷信息技术有限公司 Mask matrix-based intelligent average vehicle speed monitoring method and system
CN111738362A (en) * 2020-08-03 2020-10-02 成都睿沿科技有限公司 Object recognition method and device, storage medium and electronic equipment
CN111738048A (en) * 2020-03-10 2020-10-02 重庆大学 Pedestrian re-identification method
CN111881922A (en) * 2020-07-28 2020-11-03 成都工业学院 Insulator image identification method and system based on significance characteristics
CN113723232A (en) * 2021-08-16 2021-11-30 绍兴市北大信息技术科创中心 Vehicle weight recognition method based on channel cooperative attention
CN116503914A (en) * 2023-06-27 2023-07-28 华东交通大学 Pedestrian re-recognition method, system, readable storage medium and computer equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105023008A (en) * 2015-08-10 2015-11-04 河海大学常州校区 Visual saliency and multiple characteristics-based pedestrian re-recognition method
CN106529578A (en) * 2016-10-20 2017-03-22 中山大学 Vehicle brand model fine identification method and system based on depth learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105023008A (en) * 2015-08-10 2015-11-04 河海大学常州校区 Visual saliency and multiple characteristics-based pedestrian re-recognition method
CN106529578A (en) * 2016-10-20 2017-03-22 中山大学 Vehicle brand model fine identification method and system based on depth learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
NAILA MURRAY ET AL: "Low-Level Spatiochromatic Grouping for Saliency Estimation", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 *
NAILA MURRAY ET AL: "Saliency Estimation Using a Non-Parametric Low-Level Vision Model", 《CVPR 2011》 *
NIKI MARTINEL,ET AL: "Kernelized Saliency-Based Person Re-Identification Through Multiple Metric Learning", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 *
XIYING LI ET AL: "VRID-1: A Basic Vehicle Re-identification Dataset for Similar Vehicles", 《2017 ITSC》 *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110321949A (en) * 2019-06-29 2019-10-11 天津大学 A kind of distributed car tracing method and system based on observed terminals network
CN110619280A (en) * 2019-08-23 2019-12-27 长沙千视通智能科技有限公司 Vehicle heavy identification method and device based on deep joint discrimination learning
CN110619280B (en) * 2019-08-23 2022-05-24 长沙千视通智能科技有限公司 Vehicle re-identification method and device based on deep joint discrimination learning
CN110909785B (en) * 2019-11-18 2021-09-14 西北工业大学 Multitask Triplet loss function learning method based on semantic hierarchy
CN110909785A (en) * 2019-11-18 2020-03-24 西北工业大学 Multitask Triplet loss function learning method based on semantic hierarchy
CN111292530A (en) * 2020-02-04 2020-06-16 浙江大华技术股份有限公司 Method, device, server and storage medium for processing violation pictures
CN111738048B (en) * 2020-03-10 2023-08-22 重庆大学 Pedestrian re-identification method
CN111738048A (en) * 2020-03-10 2020-10-02 重庆大学 Pedestrian re-identification method
CN111429484A (en) * 2020-03-31 2020-07-17 电子科技大学 Multi-target vehicle track real-time construction method based on traffic monitoring video
CN111429484B (en) * 2020-03-31 2022-03-15 电子科技大学 Multi-target vehicle track real-time construction method based on traffic monitoring video
CN111428688B (en) * 2020-04-16 2022-07-26 成都旸谷信息技术有限公司 Intelligent vehicle driving lane identification method and system based on mask matrix
CN111540217A (en) * 2020-04-16 2020-08-14 成都旸谷信息技术有限公司 Mask matrix-based intelligent average vehicle speed monitoring method and system
CN111428688A (en) * 2020-04-16 2020-07-17 成都旸谷信息技术有限公司 Intelligent vehicle driving lane identification method and system based on mask matrix
CN111881922A (en) * 2020-07-28 2020-11-03 成都工业学院 Insulator image identification method and system based on significance characteristics
CN111881922B (en) * 2020-07-28 2023-12-15 成都工业学院 Insulator image recognition method and system based on salient features
CN111738362B (en) * 2020-08-03 2020-12-01 成都睿沿科技有限公司 Object recognition method and device, storage medium and electronic equipment
CN111738362A (en) * 2020-08-03 2020-10-02 成都睿沿科技有限公司 Object recognition method and device, storage medium and electronic equipment
CN113723232A (en) * 2021-08-16 2021-11-30 绍兴市北大信息技术科创中心 Vehicle weight recognition method based on channel cooperative attention
CN116503914A (en) * 2023-06-27 2023-07-28 华东交通大学 Pedestrian re-recognition method, system, readable storage medium and computer equipment
CN116503914B (en) * 2023-06-27 2023-09-01 华东交通大学 Pedestrian re-recognition method, system, readable storage medium and computer equipment

Similar Documents

Publication Publication Date Title
CN109190513A (en) In conjunction with the vehicle of saliency detection and neural network again recognition methods and system
CN109902590B (en) Pedestrian re-identification method for deep multi-view characteristic distance learning
Dong et al. Vehicle type classification using a semisupervised convolutional neural network
Tsai et al. Vehicle detection using normalized color and edge map
Hu et al. Vehicle color recognition with spatial pyramid deep learning
CN109583482B (en) Infrared human body target image identification method based on multi-feature fusion and multi-kernel transfer learning
CN101350069B (en) Computer implemented method for constructing classifier from training data and detecting moving objects in test data using classifier
Li et al. Multimodal bilinear fusion network with second-order attention-based channel selection for land cover classification
US10445602B2 (en) Apparatus and method for recognizing traffic signs
CN111738143B (en) Pedestrian re-identification method based on expectation maximization
CN110175615B (en) Model training method, domain-adaptive visual position identification method and device
CN111709313B (en) Pedestrian re-identification method based on local and channel combination characteristics
CN113947814B (en) Cross-view gait recognition method based on space-time information enhancement and multi-scale saliency feature extraction
CN108898138A (en) Scene text recognition methods based on deep learning
Molina-Moreno et al. Efficient scale-adaptive license plate detection system
CN113822246B (en) Vehicle weight identification method based on global reference attention mechanism
Shujuan et al. Real-time vehicle detection using Haar-SURF mixed features and gentle AdaBoost classifier
CN115830637B (en) Method for re-identifying blocked pedestrians based on attitude estimation and background suppression
CN112686242B (en) Fine-grained image classification method based on multilayer focusing attention network
CN112396036A (en) Method for re-identifying blocked pedestrians by combining space transformation network and multi-scale feature extraction
Hu et al. Vehicle color recognition based on smooth modulation neural network with multi-scale feature fusion
CN113269099B (en) Vehicle re-identification method under heterogeneous unmanned system based on graph matching
CN112418262A (en) Vehicle re-identification method, client and system
CN108647679B (en) Car logo identification method based on car window coarse positioning
Kaufhold et al. Recognition and segmentation of scene content using region-based classification

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20190111