CN106845341B - Unlicensed vehicle identification method based on virtual number plate - Google Patents
Unlicensed vehicle identification method based on virtual number plate
- Publication number
- CN106845341B (application CN201611156309.5A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- image
- virtual number
- number plate
- features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method for identifying an unlicensed vehicle based on a virtual number plate, comprising the following steps. Step 1 is a virtual number plate construction step: performing vehicle detection on videos or images, obtaining a vehicle close-up image, extracting features from the vehicle image, generating a virtual number plate, and constructing a virtual number plate library. Step 2 is a virtual number plate identification step: detecting a target vehicle in videos or images, obtaining a close-up image of the target vehicle, extracting features from the target vehicle image, generating a virtual number plate, and comparing the result against the virtual number plate library. The method generates a unique virtual license plate for each vehicle on the basis of image recognition, so that target vehicles can be searched for and retrieved effectively without license plate recognition. This has practical value for traffic applications such as checking expressway toll evasion, catching unlicensed vehicles that run red lights, and quickly locating hit-and-run vehicles.
Description
Technical Field
The invention relates to the field of computer vision, and in particular to a virtual-number-plate-based method for identifying unlicensed vehicles, an important application in the traffic field.
Background
With the rapid development of China's transportation industry and the large-scale growth in civilian vehicles, classifying and managing vehicles in traffic through images has become very important, especially the identification and management of unlicensed vehicles. At present there is no mature solution for unlicensed vehicle identification: information such as the position and size of the vehicle, its brand, annual inspection mark and pendant must be acquired accurately in order to uniquely determine the same vehicle. Only then can unlicensed vehicles be retrieved and managed effectively, but previous methods have performed poorly in practice because their feature descriptions are incomplete.
Disclosure of Invention
In order to search for and retrieve target vehicles effectively, the invention constructs a virtual number plate library through image processing, thereby improving the identification and management of unlicensed vehicles. The purpose of the invention is realized by the following technical scheme.
A method for identifying an unlicensed vehicle based on a virtual number plate, the method comprising: step 1, a virtual number plate construction step, which comprises performing vehicle detection on videos or images to obtain vehicle images, extracting features from the vehicle images to generate virtual number plates, and constructing a virtual number plate library; and step 2, a virtual number plate identification step, which comprises detecting a target vehicle in videos or images to obtain a target vehicle image, extracting features from the target vehicle image to generate a virtual number plate, and comparing it with the virtual number plate library.
Preferably, the step 1 comprises:
step 1-1, detecting a vehicle based on a video or an image, and extracting a vehicle image;
step 1-2, carrying out feature extraction on the vehicle image, wherein the feature extraction is based on image features described by deep learning, and the image features comprise a plurality of dimensional global features describing the whole vehicle image and/or a plurality of dimensional local features describing a vehicle local area;
step 1-3, quantizing and storing the obtained global and/or local features, generating a virtual number plate, and constructing a virtual number plate library.
Preferably, the step 2 includes:
step 2-1, detecting a target vehicle based on the video or the image, and extracting a target vehicle image;
step 2-2, extracting features from the target vehicle image, where the feature extraction is based on image features described by deep learning; the image features comprise multi-dimensional global features describing the whole vehicle image and/or multi-dimensional local features describing local vehicle regions;
step 2-3, quantizing and storing the obtained global features and/or local features of the target vehicle to generate a virtual number plate of the target vehicle;
step 2-4, comparing the virtual number plate of the target vehicle with the virtual number plates in the virtual number plate library one by one to obtain an identification result.
Preferably, in step 1-2 or step 2-2, the deep learning includes using a fast region deep convolutional neural network object detection algorithm.
Preferably, in step 1-2 or step 2-2, instead of the deep learning method, a cascaded-feature target detection method is adopted.
Preferably, the fast regional deep convolutional neural network target detection algorithm comprises: after obtaining an image of the vehicle, using it as the input of a deep convolutional neural network; computing multi-dimensional global features of the vehicle image through the feed-forward network; determining local regions according to geometric position relations; and obtaining the corresponding multi-dimensional local features on that basis.
Preferably, steps 1-3 include forming a complete feature vector from the obtained global and local features and storing it losslessly in a feature library, thereby constructing the vehicle's virtual number plate library; or, based on training with a large amount of sample data, quantizing the features into hash codes according to a trained threshold and forming two-dimensional codes with a corresponding hash characterization algorithm, thereby constructing the vehicle's virtual number plate library.
Preferably, steps 2-4 include matching the virtual number plate of the target vehicle against the virtual number plates in the library by distance similarity, using reciprocal-ratio distance and cosine distance in a distributed parallel computing manner, so as to perform the vehicle comparison analysis.
Preferably, the step 2-4 includes determining that the recognition is successful when the similarity is higher than a certain threshold, and outputting the matched virtual number plate result.
The advantages of the invention are: on the basis of image recognition, the global and local features of the vehicle are fully utilized, including but not limited to image features based on deep learning or cascaded object description. A virtual license plate is then generated and compared; overall and local information is considered comprehensively, and on this basis a unique virtual license plate can be generated for each vehicle, so that target vehicles can be searched for and retrieved effectively without license plate recognition. This has practical value for traffic applications such as checking expressway toll evasion, catching unlicensed vehicles that run red lights, and quickly locating hit-and-run vehicles.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 shows a flow diagram of a virtual number plate based unlicensed vehicle identification method according to an embodiment of the present invention;
FIG. 2 illustrates a block diagram of a model of a multi-layer convolutional neural network, in accordance with an embodiment of the present invention;
FIG. 3 shows a block diagram of an improved multi-layer convolutional neural network model, according to an embodiment of the present invention.
FIG. 4 illustrates a flow chart of a method for DPM + depth target detection for vehicle identification, according to an embodiment of the present invention.
FIG. 5 illustrates a deep recognition neural network framework diagram according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
According to the embodiment of the invention, a method for identifying an unlicensed vehicle based on a virtual number plate is disclosed, the method comprising the following steps:
step 1-1, detecting a vehicle based on a video or an image, and extracting a vehicle image;
step 1-2, performing feature extraction on the vehicle image, where the feature extraction is based on image features described by deep learning (for example, a fast regional deep convolutional neural network target detection algorithm); the image features comprise multi-dimensional (for example, 4096-dimensional) global features describing the whole vehicle image and multi-dimensional (for example, 4096-dimensional) local features describing local vehicle regions such as the vehicle window, annual inspection mark, exhaust grid and pendant;
step 1-3, quantizing and storing the obtained global features and/or local features, generating virtual license plates, and constructing a virtual license plate library;
step 2-1, detecting a target vehicle based on the video or the image, and extracting a target vehicle image;
step 2-2, performing feature extraction on the target vehicle image, where the feature extraction is based on image features described by deep learning (for example, a fast regional deep convolutional neural network target detection algorithm); the image features comprise multi-dimensional (for example, 4096-dimensional) global features describing the whole vehicle image and/or multi-dimensional (for example, 4096-dimensional) local features describing local vehicle regions such as the vehicle window, annual inspection mark, exhaust grid and pendant;
step 2-3, quantizing and storing the obtained global features and/or local features of the target vehicle to generate a virtual number plate of the target vehicle;
and 2-4, comparing the virtual number plate of the target vehicle with the virtual number plates in the virtual number plate library one by one to obtain an identification result.
According to an embodiment of the invention, the vehicle image is a natural image: the statistical characteristics of one part of the image are the same as those of other parts, so a feature learned in one part can also be used in another, and the same learned features can be applied at all positions of the image.
Preferably, in step 1-2, the fast regional deep convolutional neural network target detection algorithm includes that after an image of a vehicle is obtained, the image is used as an input of a deep convolutional neural network by using a deep convolutional method, a plurality of dimensional global features of the image of the vehicle are obtained through calculation of a feed-forward neural network, meanwhile, a local region is determined according to a geometric position relationship, and a corresponding plurality of dimensional local features are obtained on the basis. Specifically, obtaining a plurality of dimensional global features or a plurality of dimensional local features based on a convolutional neural network comprises the following steps:
For the collected vehicle image, a small block, for example 3 × 3, is randomly selected from the vehicle close-up global image (or from local-region images of the vehicle window, annual inspection mark, exhaust grid, pendant, etc.) as a sample; some features are learned from this small sample, and the features learned from the 3 × 3 sample can then act as a detector applied anywhere on the vehicle image. Preferably, the features learned from the 3 × 3 sample are convolved with the original vehicle image (the close-up global image or a local-region image) to obtain an activation value for each feature at every position of the map. According to the embodiment of the invention, features of 3 × 3 samples of a 96 × 96 vehicle image are first learned, assuming sparse self-coding with 100 hidden units is used. Each 3 × 3 block of the 96 × 96 image is then convolved to obtain the convolution features: a 3 × 3 region is extracted at each start coordinate (1, 1), (1, 2), ... up to (94, 94), and the trained sparse self-coding is run on each extracted region to obtain the feature activation values. In this example, 100 sets are obtained, each containing 94 × 94 convolution features. According to the embodiment of the invention, the convolution process is as follows: let an r × c vehicle image be given, defined as Xlarge. First, sparse self-coding is trained on small a × b image samples Xsmall extracted from the vehicle image to obtain k features (k is the number of hidden-layer neurons); then, for each a × b block of Xlarge, the activation values fs are computed and the convolution is performed on fs. This yields (r − a + 1) × (c − b + 1) × k convolved feature matrices.
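The patch-convolution step above can be sketched in plain Python. This is a minimal illustration only: the patent trains sparse self-coding to obtain the k patch features, which are assumed here to be already given as plain weight matrices, and the function names are hypothetical.

```python
def convolve_feature(image, kernel):
    """Valid 2D convolution (cross-correlation form, as in CNN usage)
    of one a x b feature over an r x c image; output (r-a+1) x (c-b+1)."""
    r, c = len(image), len(image[0])
    a, b = len(kernel), len(kernel[0])
    out = []
    for i in range(r - a + 1):
        row = []
        for j in range(c - b + 1):
            s = 0.0
            for di in range(a):
                for dj in range(b):
                    s += image[i + di][j + dj] * kernel[di][dj]
            row.append(s)
        out.append(row)
    return out

def convolve_features(image, kernels):
    """Returns k feature maps, one per learned patch feature."""
    return [convolve_feature(image, k) for k in kernels]
```

For a 96 × 96 image and 3 × 3 kernels, each resulting map is 94 × 94, matching (96 − 3 + 1) × (96 − 3 + 1).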
After the features of the vehicle image are obtained by convolution, they are used for classification. The prior art feeds all the extracted features to a classifier, such as a softmax classifier, which is computationally very expensive. For example, for a 96 × 96 pixel image with 400 features learned from 3 × 3 inputs, each convolution yields a result set of (96 − 3 + 1) × (96 − 3 + 1) = 8836 values; with 400 features, the result set for each sample reaches millions of features. Learning a classifier with over a million feature inputs is highly susceptible to overfitting. To solve this problem when describing a large image, in an embodiment of the present invention aggregation statistics are computed over features at different positions of the vehicle image; preferably, the average value (or the maximum value) of a given feature over a region of the image is calculated. These summary statistics not only have much lower dimensionality than using all extracted features, but also improve the results (they are less prone to overfitting). These regional average (or maximum) features are the pooled convolution features used for classification.
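The aggregation statistics described above can be sketched as a mean-pooling pass over a feature map. The non-overlapping region layout and function name are illustrative assumptions; the patent only specifies averaging (or taking the maximum) over regions.

```python
def mean_pool(feature_map, pool):
    """Average-pool a feature map over non-overlapping pool x pool
    regions, producing the low-dimensional summary statistics."""
    r, c = len(feature_map), len(feature_map[0])
    out = []
    for i in range(0, r - r % pool, pool):
        row = []
        for j in range(0, c - c % pool, pool):
            vals = [feature_map[i + di][j + dj]
                    for di in range(pool) for dj in range(pool)]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out
```

Pooling a 94 × 94 map with pool = 2 would shrink it to 47 × 47, a 4× reduction in feature count per map.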
Generally, a color image with 3 channels is used for the vehicle image. Each channel is convolved and pooled independently, and each value of a hidden layer corresponds to the 3 channels of one image: the results of convolving the 3 channels are added together, so that they correspond exactly to one hidden-layer neuron, i.e., one feature.
Further, in a specific embodiment, a multilayer convolution is used, and then training is performed by using a fully connected layer, wherein the purpose of the multilayer convolution is that the learned features of one layer of the convolution are often local, and the higher the number of layers is, the more global the learned features are. The vehicle image may be convolved using the model shown in fig. 2. The model adopts a 2-GPU parallel structure, namely the 1 st, 2 nd, 4 th and 5 th convolution layers divide model parameters into 2 parts for training. Here, further, the parallel structure is divided into data parallel and model parallel. The data parallel means that the model structures are the same on different GPUs, but training data are segmented, different models are obtained through respective training, and then the models are fused. And the model parallel is to divide the model parameters of a plurality of layers, train the same data on different GPUs, and directly connect the obtained results as the input of the next layer.
The basic parameters of the model shown in fig. 2 are:
Input: 224 × 224 vehicle image, 3 channels.
First layer convolution: 96 convolution kernels of 5 × 5, 48 per GPU.
First layer max-pooling: 2 × 2 kernels.
Second layer convolution: 256 convolution kernels of 3 × 3, 128 per GPU.
Second layer max-pooling: 2 × 2 kernels.
Third layer convolution: fully connected to the previous layer; 384 convolution kernels of 3 × 3, 192 per GPU.
Fourth layer convolution: 384 convolution kernels of 3 × 3, 192 per GPU. This layer connects to the previous layer without an intervening pooling layer.
Fifth layer convolution: 256 convolution kernels of 3 × 3, 128 per GPU.
Fifth layer max-pooling: 2 × 2 kernels.
First fully connected layer: 4096 dimensions; the outputs of the fifth-layer max-pooling are concatenated into a one-dimensional vector as the input to this layer.
Second fully connected layer: 4096 dimensions.
Softmax layer: 1000-way output; each dimension is the probability that the vehicle image belongs to that category.
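Assuming 'valid' convolutions with stride 1 and non-overlapping 2 × 2 pooling (the patent does not state padding or stride, so these are assumptions), the spatial sizes flowing through the layer stack above can be traced with a short sketch:

```python
def conv_out(size, kernel):
    """'Valid' convolution output size, stride 1 (assumed)."""
    return size - kernel + 1

def pool_out(size, pool):
    """Non-overlapping pooling output size (assumed)."""
    return size // pool

def trace_model(size=224):
    """Trace spatial sizes through the convolutional stack described
    above (kernel sizes taken from the model parameters)."""
    layers = [("conv1 5x5", conv_out, 5), ("pool1 2x2", pool_out, 2),
              ("conv2 3x3", conv_out, 3), ("pool2 2x2", pool_out, 2),
              ("conv3 3x3", conv_out, 3),
              ("conv4 3x3", conv_out, 3),
              ("conv5 3x3", conv_out, 3), ("pool5 2x2", pool_out, 2)]
    sizes = []
    for name, fn, p in layers:
        size = fn(size, p)
        sizes.append((name, size))
    return sizes
```

Under these assumptions the map shrinks 224 → 220 → 110 → 108 → 54 → 52 → 50 → 48 → 24 before the 4096-dimensional fully connected layers.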
Further, the above model can be improved with the structure of FIG. 3. In FIG. 3 there is only one fully connected layer at the end, followed by the softmax layer, with the fully connected layer serving as the representation of the image. The outputs of the fourth-layer convolution and the third-layer max-pooling are both used as inputs to the fully connected layer, so that local and global characteristics of the vehicle image are learned.
In steps 1-3, the obtained global and local features form complete feature vectors and are stored losslessly in a feature library, thereby constructing the vehicle's virtual number plate library; or, based on training with a large amount of sample data, the features are quantized into hash codes according to a trained threshold and formed into two-dimensional codes with a corresponding hash characterization algorithm, thereby constructing the vehicle's virtual number plate library.
Preferably, the step of constructing the virtual number plate library of the vehicle may adopt one of three algorithms:
the first algorithm comprises the following steps:
1. Uniformly scale the picture to 8 × 8, giving 64 pixels.
2. Convert to a grayscale image: convert the scaled picture into a 256-level grayscale image.
The grayscale conversion algorithms are as follows (R = red, G = green, B = blue):
1) Floating-point method: Gray = R × 0.3 + G × 0.59 + B × 0.11
2) Integer method: Gray = (R × 30 + G × 59 + B × 11) / 100
3) Bit-shift method: Gray = (R × 76 + G × 151 + B × 28) >> 8
4) Average method: Gray = (R + G + B) / 3
5) Green only: Gray = G
3. Calculate the average value: compute the average of all pixel points of the grayscale-processed image.
4. Compare pixel gray values: traverse each pixel of the grayscale picture; record 1 if it is greater than the average, otherwise record 0.
5. Obtain the information fingerprint: combine the 64 bits; the bit order is arbitrary but must be kept consistent.
6. Compare by distance: the larger the distance, the more different the pictures; conversely, the smaller the distance, the more similar they are; a distance of 0 means the pictures are identical.
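The first algorithm (an average-hash scheme) can be sketched in a few lines of Python. Scaling and grayscale conversion are assumed to have been done already, and the function names are illustrative:

```python
def average_hash(gray8x8):
    """64-bit average hash of an 8x8 grayscale image (8 rows of 8
    values), following steps 3-5 above: threshold against the mean."""
    pixels = [v for row in gray8x8 for v in row]
    avg = sum(pixels) / 64.0
    return ''.join('1' if v > avg else '0' for v in pixels)

def hamming(h1, h2):
    """Step 6: count differing bits; distance 0 means identical."""
    return sum(a != b for a, b in zip(h1, h2))
```

Two fingerprints are then compared with `hamming`; a small distance indicates similar pictures.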
The second algorithm comprises the following steps:
1. Uniformly scale the picture to 32 × 32 to facilitate the DCT computation.
2. Convert to a grayscale image: convert the scaled picture into a 256-level grayscale image (see step 2 of the first algorithm).
3. Compute the DCT: the DCT decomposes the picture into a set of frequency components.
4. Reduce the DCT: the DCT result is 32 × 32; keep only the 8 × 8 block in the upper-left corner, which represents the lowest frequencies of the picture.
5. Calculate the average value: compute the average of all values after the DCT reduction.
6. Further reduce the DCT: values greater than the average are recorded as 1, and values not greater than the average are recorded as 0.
7. Obtain the information fingerprint: combine the 64 information bits; the bit order is arbitrary but must be kept consistent.
8. Compare by distance: the larger the distance, the more different the pictures; conversely, the smaller the distance, the more similar they are; a distance of 0 means the pictures are identical.
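The second algorithm (a DCT-based perceptual hash) might look as follows. This sketch uses an unnormalized type-II DCT, which is sufficient for thresholding against the mean, and assumes the 32 × 32 grayscale image is already prepared; the function names are illustrative.

```python
import math

def dct_1d(vec):
    """Unnormalized type-II DCT of a 1D sequence."""
    n = len(vec)
    return [sum(vec[x] * math.cos(math.pi * (x + 0.5) * u / n)
                for x in range(n)) for u in range(n)]

def dct_2d(img):
    """Separable 2D DCT: rows first, then columns."""
    rows = [dct_1d(r) for r in img]
    cols = [dct_1d([rows[y][x] for y in range(len(rows))])
            for x in range(len(rows[0]))]
    return [[cols[x][y] for x in range(len(cols))]
            for y in range(len(cols[0]))]

def phash(gray32):
    """Steps 3-7: DCT of a 32x32 grayscale image, keep the
    low-frequency 8x8 corner, threshold against its mean."""
    d = dct_2d(gray32)
    low = [d[y][x] for y in range(8) for x in range(8)]
    avg = sum(low) / 64.0
    return ''.join('1' if v > avg else '0' for v in low)
```

As with the first algorithm, two fingerprints are compared by Hamming distance in step 8.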
The third algorithm comprises the following steps:
1. Uniformly scale the picture to 9 × 8, giving 72 pixels.
2. Convert to a grayscale image: convert the scaled picture into a 256-level grayscale image (see step 2 of the first algorithm).
3. Calculate differences: the third algorithm works on adjacent pixels; the 9 pixels in each row yield 8 differences, and with 8 rows this gives 64 difference values.
4. Obtain the bits: if the pixel on the left is brighter than the pixel on the right, record 1; otherwise record 0.
5. Obtain the information fingerprint: combine the 64 information bits; the bit order is arbitrary but must be kept consistent.
6. Compare by distance: the larger the distance, the more different the pictures; conversely, the smaller the distance, the more similar they are; a distance of 0 means the pictures are identical.
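The third algorithm (a difference hash) is the simplest to sketch. The input is assumed to be 8 rows of 9 grayscale values, as produced by steps 1-2; the function name is illustrative.

```python
def difference_hash(gray9x8):
    """Steps 3-5: 8 rows of 9 grayscale values -> 64 left-vs-right
    comparison bits (1 when the left pixel is brighter)."""
    bits = []
    for row in gray9x8:            # 8 rows
        for j in range(8):         # 9 pixels -> 8 differences per row
            bits.append('1' if row[j] > row[j + 1] else '0')
    return ''.join(bits)
```

The resulting 64-bit fingerprint is again compared by Hamming distance in step 6.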
In step 2-2, a plurality of dimensional global features or a plurality of dimensional local features of the target vehicle are obtained based on the convolutional neural network by adopting a method similar to the step 1-2.
In step 2-3, similar to step 1-3, the global features and/or local features of the target vehicle obtained in step 2-2 are quantified and stored to generate a virtual number plate of the target vehicle.
In steps 2-4, a distributed parallel computing scheme is adopted: the virtual number plate of the target vehicle is matched against the virtual number plates in the database by distance similarity, using reciprocal-ratio distance and cosine distance, so as to perform the vehicle comparison analysis. When the similarity exceeds a certain threshold, the identification is judged successful and the matched virtual number plate result is output.
In a specific embodiment, distance similarity matching uses the cosine of the angle between two vectors in the vector space as a measure of the difference between two individuals. A vector is a directed line segment in a multidimensional space; two vectors are close if their directions coincide, i.e., the angle between them is close to zero. To determine whether the directions of two vectors are consistent, the law of cosines is used to compute the angle. Let the three sides of a triangle be a, b and c, with corresponding angles A, B and C; then the cosine of angle A is:

cos A = (b² + c² − a²) / (2bc)

If the two sides b and c of the triangle are regarded as two vectors, the above equation is equivalent to:

cos A = <b, c> / (|b| · |c|)

where the denominator is the product of the lengths of the two vectors b and c, and the numerator is their inner product.
In the embodiment of the invention, the feature vector X in the virtual number plate library obtained in steps 1-3 and the feature vector Y of the target vehicle's virtual number plate obtained in steps 2-3 are respectively:

X = (x1, x2, ..., x6400) and Y = (y1, y2, ..., y6400)

The cosine distance between them can be represented by the cosine of their angle:

cos θ = (Σᵢ xᵢyᵢ) / (√(Σᵢ xᵢ²) · √(Σᵢ yᵢ²))

As the cosine of the vector angle approaches 1, the feature vectors approach a perfect match. When the cosine of the vector angle exceeds a certain threshold, the identification is judged successful, and the matched virtual number plate in the library is output.
In another preferred embodiment of the invention, in step 1-2 or step 2-2, a single deep learning method is not adopted; instead, a cascaded-feature target detection method is used. Specifically, a method combining DPM (deformable part model) with deep target detection is adopted, with DPM as the target detection algorithm. The steps of the algorithm are briefly:
1. Template cascaded-shape feature extraction;
2. Global-image cascaded-shape feature extraction;
3. SVM target detection and position fitting based on fast sliding-window matching.
Deep target detection is a cascaded target detection workflow designed to exploit the strong feature-expression capability of deep learning while balancing hardware cost against accuracy. Its steps are briefly:
1. Construct a multi-resolution image pyramid;
2. Perform hierarchical deep feed-forward computation to obtain multi-level feature maps;
3. Detect targets using softmax-based fast sliding-window matching;
4. Fit region positions with NMS under a hierarchical voting mechanism.
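The NMS region fitting of step 4 might be sketched as a greedy suppression over scored boxes. This is a standard formulation under stated assumptions: the hierarchical voting weights are not specified in the patent and are omitted here, and the 0.5 overlap threshold is illustrative.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box
    in each overlapping group, returning kept indices."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```

The boxes here would be candidate vehicle regions from the sliding-window softmax of step 3, scored by their class probability.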
In the preferred embodiment of the present invention, the algorithmic processing combines DPM with deep target detection. Deep target detection algorithms generally show excellent detection rates, but they do not consider the inherent geometric structure of an object, particularly for objects with a relatively clear rigid structure such as vehicles, bicycles and pedestrians. The invention therefore introduces the spring deformation model of the DPM algorithm into the deep learning model. A flow chart of the DPM + deep target detection method for vehicle identification is shown in FIG. 4.
The deep convolutional network structure of the invention is shown in FIG. 5. The network fully exploits local and global information in the image, including texture, color, gradient and other deep image characteristics, achieving a comprehensive, accurate description from bottom to top and from abstract to concrete. In the convolution operations, each layer depends not only on the immediately preceding convolutional layer but also on earlier layers, so that regional consistency is fully considered; this approximately redundant description makes the algorithm highly robust.
The invention does not adopt the hundreds or even thousands of neural network layers of the prior art; instead it proposes a special deep convolutional neural network structure suited to the actual task. The network structure is clear and simple, with very high operability and extensibility.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (6)
1. A method for identifying an unlicensed vehicle based on a virtual number plate, the method comprising:
step 1 is a virtual number plate construction step, which comprises the steps of carrying out vehicle detection based on videos or images to obtain vehicle images, carrying out feature extraction on the vehicle images to generate virtual number plates and constructing a virtual number plate library;
step 2, a step of identifying a virtual number plate library, which comprises the steps of detecting a target vehicle based on a video or an image, obtaining a target vehicle image, extracting the characteristics of the target vehicle image, generating a virtual number plate and comparing the virtual number plate with the virtual number plate library;
the step 2 comprises the following steps:
step 2-1, detecting a target vehicle based on the video or the image, and extracting a target vehicle image;
step 2-2, extracting features from the target vehicle image, wherein the feature extraction is based on image features described by deep learning, the image features comprising multi-dimensional global features describing the whole vehicle image and/or multi-dimensional local features describing local vehicle regions; the deep learning adopts a fast regional deep convolutional neural network target detection algorithm, in which, after the vehicle image is obtained, the image is taken as the input of a deep convolutional neural network, the multi-dimensional global features of the vehicle image are computed through a feedforward pass, local regions are simultaneously determined according to geometric position relations, and the corresponding multi-dimensional local features are obtained on the basis of those local regions;
step 2-3, quantizing and storing the obtained global features and/or local features of the target vehicle to generate the virtual number plate of the target vehicle;
step 2-4, comparing the virtual number plate of the target vehicle with the virtual number plates in the virtual number plate library one by one to obtain an identification result.
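Outside the claim language, steps 2-1 to 2-4 can be illustrated with a toy sketch. The network below is a stand-in with random weights, not a trained detector: it maps a vehicle image to a global feature vector and pools fixed local regions (selected by geometric position, as in step 2-2) into local features, then concatenates them into a virtual number plate. All names, region choices, and dimensions are illustrative assumptions, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "deep convolutional" extractor: a single random projection
# plays the role of the feedforward network described in step 2-2.
W_GLOBAL = rng.standard_normal((64 * 64, 128))   # whole image -> 128-d global feature
W_LOCAL = rng.standard_normal((32 * 32, 32))     # local region -> 32-d local feature

def global_feature(img):
    """Flatten the whole vehicle image and project it (toy forward pass)."""
    v = img.reshape(-1) @ W_GLOBAL
    return v / (np.linalg.norm(v) + 1e-9)

def local_features(img):
    """Crop fixed regions by geometric position and project each one."""
    regions = {
        "upper_left": img[:32, :32],    # e.g. headlight area (assumed)
        "lower_mid": img[32:, 16:48],   # e.g. grille area (assumed)
    }
    feats = {}
    for name, region in regions.items():
        v = region.reshape(-1) @ W_LOCAL
        feats[name] = v / (np.linalg.norm(v) + 1e-9)
    return feats

def virtual_plate(img):
    """Step 2-3: concatenate the quantized global and local features."""
    parts = [global_feature(img)] + list(local_features(img).values())
    return np.concatenate(parts).astype(np.float32)

img = rng.random((64, 64))   # stands in for a detected vehicle image (step 2-1)
plate = virtual_plate(img)   # 128 global + 2 x 32 local = 192-d virtual plate
```

The same `virtual_plate` function would be applied both when building the library (step 1) and when probing with a target vehicle (step 2), so the two sides are directly comparable.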
2. The virtual number plate-based unlicensed vehicle identification method according to claim 1, wherein the step 1 comprises:
step 1-1, detecting a vehicle based on a video or an image, and extracting a vehicle image;
step 1-2, performing feature extraction on the vehicle image, wherein the feature extraction is based on image features described by deep learning, the image features comprising multi-dimensional global features describing the whole vehicle image and/or multi-dimensional local features describing local vehicle regions;
step 1-3, quantizing and storing the obtained global features and/or local features, generating a virtual number plate, and constructing the virtual number plate library.
3. The virtual number plate-based unlicensed vehicle identification method according to claim 2, characterized in that: in step 1-2 or step 2-2, a cascaded-feature target detection method is adopted instead of the deep learning method.
4. The virtual number plate-based unlicensed vehicle identification method according to claim 2, wherein step 1-3 comprises either forming a complete feature vector from the obtained global and local features and storing it losslessly in a feature library so as to construct the virtual number plate library of the vehicle, or forming a hash code from the features according to a threshold trained on a large amount of sample data and forming a two-dimensional code according to a corresponding hash representation algorithm so as to construct the virtual number plate library of the vehicle.
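As an illustration of the hash branch of step 1-3, the sketch below learns one binarization threshold per feature dimension from sample data (the per-dimension median stands in for the trained threshold) and packs the resulting bits into a compact code; rendering that code as a two-dimensional barcode is omitted. This is an assumed toy construction, not the patented hash representation algorithm.

```python
import numpy as np

def train_thresholds(sample_features):
    """Learn one binarization threshold per dimension from sample data.
    The per-dimension median is a simple stand-in for the trained threshold."""
    return np.median(sample_features, axis=0)

def hash_code(feature, thresholds):
    """Binarize a feature vector against the thresholds and pack the bits
    into bytes - a compact, lossy 'virtual number plate' representation."""
    bits = (feature > thresholds).astype(np.uint8)
    return np.packbits(bits).tobytes().hex()

rng = np.random.default_rng(1)
samples = rng.standard_normal((1000, 64))   # stand-in training feature vectors
thr = train_thresholds(samples)

plate_hash = hash_code(rng.standard_normal(64), thr)  # 64 bits -> 16 hex chars
```

Compared with the lossless feature-vector branch, the hash branch trades retrieval precision for a fixed-size code that is cheap to store and compare across a large library.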
5. The method for identifying an unlicensed vehicle based on a virtual number plate according to claim 1, wherein step 2-4 comprises performing distance similarity matching between the virtual number plate of the target vehicle and the virtual number plates in the database, using reciprocal ratio distance and cosine distance in a distributed parallel computing manner, so as to perform vehicle comparison analysis.
6. The method according to claim 5, wherein step 2-4 comprises determining that recognition is successful when the similarity is higher than a set threshold, and outputting the matched virtual number plate result.
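The matching of step 2-4 and the threshold test of claim 6 can be sketched with plain cosine similarity; the reciprocal ratio distance and the distributed parallel computation of claim 5 are omitted, and the 0.9 threshold, the brute-force loop, and the plate identifiers are illustrative assumptions.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def match_plate(target, library, threshold=0.9):
    """Compare the target virtual plate with every plate in the library
    one by one (step 2-4); recognition succeeds only when the best
    similarity clears the threshold (claim 6)."""
    best_id, best_sim = None, -1.0
    for plate_id, feat in library.items():
        s = cosine_sim(target, feat)
        if s > best_sim:
            best_id, best_sim = plate_id, s
    return (best_id, best_sim) if best_sim >= threshold else (None, best_sim)

rng = np.random.default_rng(2)
library = {f"VP{i:04d}": rng.standard_normal(192) for i in range(100)}
probe = library["VP0042"] + 0.01 * rng.standard_normal(192)  # noisy re-capture
result = match_plate(probe, library)
```

In a production setting the linear scan would be replaced by the distributed parallel comparison the claim describes (or an approximate nearest-neighbor index), but the accept/reject logic is the same.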
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611156309.5A CN106845341B (en) | 2016-12-15 | 2016-12-15 | Unlicensed vehicle identification method based on virtual number plate |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611156309.5A CN106845341B (en) | 2016-12-15 | 2016-12-15 | Unlicensed vehicle identification method based on virtual number plate |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106845341A CN106845341A (en) | 2017-06-13 |
CN106845341B true CN106845341B (en) | 2020-04-10 |
Family
ID=59139259
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611156309.5A Active CN106845341B (en) | 2016-12-15 | 2016-12-15 | Unlicensed vehicle identification method based on virtual number plate |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106845341B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107748895B (en) * | 2017-10-29 | 2021-06-25 | 北京工业大学 | Unmanned aerial vehicle landing landform image classification method based on DCT-CNN model |
CN110555125A (en) * | 2018-05-14 | 2019-12-10 | 桂林远望智能通信科技有限公司 | Vehicle retrieval method based on local features |
CN108960412B (en) * | 2018-06-29 | 2022-09-30 | 北京京东尚科信息技术有限公司 | Image recognition method, device and computer readable storage medium |
CN109190649B (en) * | 2018-07-02 | 2021-10-01 | 北京陌上花科技有限公司 | Optimization method and device for deep learning network model server |
CN109241349B (en) * | 2018-08-14 | 2022-03-25 | 中国电子科技集团公司第三十八研究所 | Monitoring video multi-target classification retrieval method and system based on deep learning |
CN109992690B (en) * | 2019-03-11 | 2021-04-13 | 中国华戎科技集团有限公司 | Image retrieval method and system |
CN110490272B (en) * | 2019-09-05 | 2022-10-21 | 腾讯音乐娱乐科技(深圳)有限公司 | Image content similarity analysis method and device and storage medium |
CN111007761A (en) * | 2019-11-28 | 2020-04-14 | 上海蓝色帛缔智能工程有限公司 | Automatic monitoring and management system of data center |
CN113157641B (en) * | 2021-02-07 | 2023-07-04 | 北京卓视智通科技有限责任公司 | Method, device, system, equipment and storage medium for archiving and inquiring non-license vehicle |
CN112990136B (en) * | 2021-04-29 | 2021-08-03 | 成都深蓝思维信息技术有限公司 | Target detection method and device |
CN113706390A (en) * | 2021-10-29 | 2021-11-26 | 苏州浪潮智能科技有限公司 | Image conversion model training method, image conversion method, device and medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102902957A (en) * | 2012-09-05 | 2013-01-30 | 佳都新太科技股份有限公司 | Video-stream-based automatic license plate recognition method |
CN103000028A (en) * | 2011-09-14 | 2013-03-27 | 上海宝康电子控制工程有限公司 | Vehicle registration plate recognition system and method |
CN103258213A (en) * | 2013-04-22 | 2013-08-21 | 中国石油大学(华东) | Vehicle model dynamic identification method used in intelligent transportation system |
CN103530640A (en) * | 2013-11-07 | 2014-01-22 | 沈阳聚德视频技术有限公司 | Unlicensed vehicle detection method based on AdaBoost and SVM (support vector machine) |
CN104951784A (en) * | 2015-06-03 | 2015-09-30 | 杨英仓 | Method of detecting absence and coverage of license plate in real time |
CN105354533A (en) * | 2015-09-28 | 2016-02-24 | 江南大学 | Bag-of-word model based vehicle type identification method for unlicensed vehicle at gate |
CN105488099A (en) * | 2015-11-03 | 2016-04-13 | 杭州全实鹰科技有限公司 | Vehicle retrieval method based on similarity learning |
CN105512662A (en) * | 2015-06-12 | 2016-04-20 | 北京卓视智通科技有限责任公司 | Detection method and apparatus for unlicensed vehicle |
- 2016-12-15 CN CN201611156309.5A patent/CN106845341B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN106845341A (en) | 2017-06-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106845341B (en) | Unlicensed vehicle identification method based on virtual number plate | |
CN109840556B (en) | Image classification and identification method based on twin network | |
CN110929607B (en) | Remote sensing identification method and system for urban building construction progress | |
CN107506740B (en) | Human body behavior identification method based on three-dimensional convolutional neural network and transfer learning model | |
WO2019169816A1 (en) | Deep neural network for fine recognition of vehicle attributes, and training method thereof | |
CN108108751B (en) | Scene recognition method based on convolution multi-feature and deep random forest | |
Ma et al. | Crowd density analysis using co-occurrence texture features | |
CN110929080B (en) | Optical remote sensing image retrieval method based on attention and generation countermeasure network | |
CN103714148B (en) | SAR image search method based on sparse coding classification | |
CN102663391A (en) | Image multifeature extraction and fusion method and system | |
Tabia et al. | Compact vectors of locally aggregated tensors for 3D shape retrieval | |
CN102662949A (en) | Method and system for retrieving specified object based on multi-feature fusion | |
CN107886067A (en) | A kind of pedestrian detection method of the multiple features fusion based on HIKSVM graders | |
CN104715266B (en) | The image characteristic extracting method being combined based on SRC DP with LDA | |
Wang et al. | An image similarity descriptor for classification tasks | |
CN114419406A (en) | Image change detection method, training method, device and computer equipment | |
Ahmad et al. | 3D capsule networks for object classification from 3D model data | |
CN115937540A (en) | Image Matching Method Based on Transformer Encoder | |
CN115131580A (en) | Space target small sample identification method based on attention mechanism | |
Wang et al. | A multi-label hyperspectral image classification method with deep learning features | |
Zhang et al. | Robust semantic segmentation for automatic crack detection within pavement images using multi-mixing of global context and local image features | |
CN116129280B (en) | Method for detecting snow in remote sensing image | |
Xia et al. | Abnormal event detection method in surveillance video based on temporal CNN and sparse optical flow | |
Patil et al. | Improving the efficiency of image and video forgery detection using hybrid convolutional neural networks | |
Zou et al. | Texture classification by matching co-occurrence matrices on statistical manifolds |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||