CN111914892A - Vehicle type and vehicle logo identification method based on tire detection - Google Patents
- Publication number: CN111914892A (application CN202010582541.5A)
- Authority: CN (China)
- Legal status: Pending (assumed; not a legal conclusion)
Classifications
- G06F18/2411—Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
- G06N20/10—Machine learning using kernel methods, e.g. support vector machines [SVM]
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V10/507—Summing image-intensity values; Histogram projection analysis
- G06V10/56—Extraction of image or video features relating to colour
- G06V2201/08—Detecting or categorising vehicles
- G06V2201/09—Recognition of logos
Abstract
The invention discloses a vehicle type and vehicle logo identification method based on tire detection, which specifically comprises the following steps: S1, detecting the wheel circle center and the tire radius Image-R in the image of the vehicle to be detected; S2, identifying the vehicle type of the vehicle to be detected based on the distance between the two wheels on the same side, the vehicle type being a small, medium, or large vehicle; and S3, locating and identifying the vehicle logo on the tire of the vehicle to be detected. The vehicle type and the vehicle logo are thus recognized simultaneously from the wheels.
Description
Technical Field
The invention belongs to the technical field of vehicle identification, and provides a vehicle type and vehicle logo identification method based on tire detection.
Background
In recent years, the rapid increase of social vehicles brings many new challenges to traffic management, and the research and development of intelligent traffic systems are increasingly urgent.
With the rapid development of ITS, more and more application environments require not only license plate character recognition but also recognition of vehicle types and brands. As the division of labor in society becomes more detailed, the requirements that intelligent transportation systems place on vehicle type recognition differ across occasions. In highway toll collection, it is only necessary to judge whether a vehicle is large, medium, or small, whereas in criminal investigation the more detailed the vehicle type identification the better, ideally identifying a specific brand such as BMW, Mercedes-Benz, or Cadillac. Vehicle type recognition technology therefore plays a different role in different occasions, is expected to touch many aspects of life in the near future, and is mainly applied to vehicle theft prevention, fake-plate vehicle identification, and the like.
Existing vehicle type recognition is mostly based on IC-card wireless communication rather than on images: when a vehicle enters or exits a checkpoint, it transmits its information through a wireless device to a receiving system, which compares the information with a database; if they match, recognition succeeds. However, the required investment is large: a wireless device must be installed on every vehicle and at every checkpoint, and the national traffic system must be networked with data updated in real time. Meanwhile, most existing vehicle logo identification extracts the logo at the front or rear of the vehicle, so it is difficult to identify the vehicle type and the vehicle logo simultaneously.
Disclosure of Invention
The invention provides a vehicle type and vehicle logo recognition method based on tire detection, which is used for recognizing a vehicle type and a vehicle logo simultaneously based on wheels.
The invention is realized in this way, a vehicle type and vehicle logo recognition method based on tire detection, the method specifically comprises the following steps:
S1, detecting the wheel circle center and the tire radius Image-R in the image of the vehicle to be detected;
s2, identifying the vehicle type of the vehicle to be detected based on the distance between two wheels on the same side, wherein the vehicle type is a small vehicle, a medium vehicle or a large vehicle;
and S3, positioning and identifying the vehicle mark on the tire of the vehicle to be detected, namely finishing vehicle identification.
Further, the method for positioning the vehicle logo on the wheel specifically comprises the following steps:
S31, the vehicle tire and the vehicle logo are concentric circles, and the vehicle logo radius Image-r is calculated based on the tire-to-logo ratio scale of the corresponding vehicle type, where Image-r = Image-R/scale;
s32, taking the circle center of the wheel as the circle center and the circle with the vehicle logo radius Image-r as the radius as the vehicle logo position;
the tire-to-logo ratio scale is the ratio of the actual tire radius Real-R to the actual radius Real-r of the logo inside the tire, namely scale = Real-R/Real-r.
Further, the identification method of the car logo specifically comprises the following steps:
s33, extracting feature points of the car logo position image based on SIFT, and putting the extracted feature points into a feature point set;
s34, deleting redundant feature points in the feature point set, namely finishing the purification of the feature points;
and S35, identifying the car logo on the wheel based on the support vector machine.
Further, the purification process of the characteristic points is specifically as follows:
s341, calculating the Euclidean distance between any two feature points in the feature point set;
and S342, if the Euclidean distance between two feature points is smaller than the set distance threshold, the two feature points are judged to be similar, and one of them is a redundant feature point to be deleted.
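A minimal sketch of this purification step in Python, with plain coordinate tuples standing in for SIFT keypoints and an illustrative distance threshold:

```python
import math

def purify_features(points, dist_threshold):
    """Remove redundant feature points: if two points lie closer than
    dist_threshold (Euclidean distance), keep only the first of them.
    `points` is a list of (x, y) tuples standing in for SIFT keypoints."""
    kept = []
    for p in points:
        # p is redundant if it lies too close to an already-kept point
        if all(math.dist(p, q) >= dist_threshold for q in kept):
            kept.append(p)
    return kept

# Example: (10, 10) and (11, 10) are closer than the threshold 5,
# so one of them is dropped as redundant.
features = [(10, 10), (11, 10), (50, 50), (100, 20)]
print(purify_features(features, 5))  # [(10, 10), (50, 50), (100, 20)]
```

Keeping the first of each close pair is one reasonable tie-breaking choice; the patent only requires that one of the two similar points be removed.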
Further, a distance interval between two wheels on the same side corresponding to each type of vehicle is defined as follows:
the small-sized vehicle: the distance interval between the two wheels on the same side is as follows: 260-310 cm;
the medium-sized vehicle: the distance interval between the two wheels on the same side is as follows: 310 cm-365 cm;
the large-scale vehicle: the distance interval between the two wheels on the same side is as follows: 365 cm-420 cm.
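The intervals above map directly to a small classifier; the function name, the boundary handling, and treating the distance as a plain number (the text quotes the intervals in cm here and, for 600 x 300 images, in pixels later) are illustrative assumptions:

```python
def classify_vehicle(wheel_distance):
    """Classify vehicle size from the distance between the two
    same-side wheels, using the intervals given in the text."""
    if 260 <= wheel_distance < 310:
        return "small"
    if 310 <= wheel_distance < 365:
        return "medium"
    if 365 <= wheel_distance <= 420:
        return "large"
    return "unknown"  # outside the defined intervals

print(classify_vehicle(280))  # small
print(classify_vehicle(326))  # medium
print(classify_vehicle(395))  # large
```

The shared endpoints (310, 365) are assigned to the larger class here; the source does not specify which class owns them.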
Further, the wheel circle center and the tire radius Image-R in the image of the vehicle to be detected are detected by Hough transform circle detection.
According to the method, geometric features such as the tire radius, circle center, and distance between front and rear tires are extracted from the vehicle image through Hough circle detection, so that the vehicle is judged to be large, medium, or small. Using the fact that the tire and the logo inside it are concentric circles, the logo is accurately located from the tire radius combined with the ratio between the actual tire and the actual logo. SIFT features are then extracted, and the SIFT feature points are trained and recognized by an SVM to identify the logo. In addition, the SIFT feature points are purified with a Euclidean distance algorithm, reducing the number of logo features; although the feature points are reduced, the logo can still be recognized by SVM training and recognition, and the reduced feature count shortens the running time, giving the method real-time performance.
Drawings
FIG. 1 is a flow chart of vehicle type identification according to an embodiment of the present invention;
fig. 2 is a schematic diagram of polar HOUGH transform provided in the embodiment of the present invention;
fig. 3 is a schematic diagram of circle detection of the conventional HOUGH transform, in which (a) is an image containing a circle, (b) is an accumulator space, and (c) is a three-dimensional accumulator space;
FIG. 4 is a schematic view of tire identification of a vehicle according to an embodiment of the present invention, wherein (a) is a small vehicle, (b) is a medium vehicle, and (c) is a large vehicle;
FIG. 5 is a flow chart illustrating the identification of a vehicle logo according to an embodiment of the present invention;
fig. 6 is a SIFT feature extraction diagram provided in the embodiment of the present invention, where (a) shows the SIFT features of the original image, (b) the SIFT features of the original image scaled by a factor of 2, and (c) the SIFT features after a 90-degree rotation;
FIG. 7 is a SIFT feature map of different car logos provided by the embodiment of the present invention, wherein (a) is Mazda and (b) is Honda;
fig. 8 is a euclidean distance refined SIFT feature map provided in the embodiment of the present invention, where (a) is the euclidean distance refined SIFT feature map of fig. 7(a), and (b) is the euclidean distance refined SIFT feature map of fig. 7 (b);
fig. 9 is a schematic diagram of SVM recognition of SIFT features provided in the embodiment of the present invention;
fig. 10 is a schematic diagram of SVM recognition after Euclidean distance purification of the SIFT features, provided in the embodiment of the present invention.
Detailed Description
The following detailed description of the embodiments of the present invention will be given in order to provide those skilled in the art with a more complete, accurate and thorough understanding of the inventive concept and technical solutions of the present invention.
In order to identify the size and brand of the vehicle, features must be selected and extracted from the preprocessed image. Herein, the wheels are detected by the Hough transform and the vehicle logo is located; the logo features are then extracted with the SIFT algorithm; finally, an SVM is introduced to classify and recognize the logo. Vehicle identification in the present invention refers to recognition of both the vehicle type and the vehicle logo.
Vehicle type size recognition
The invention identifies the vehicle type from the distance between the tires on one side. Because the tires show an obvious gray-level difference from other parts of the image and present a clean circular contour after binarization, the Hough transform, which is relatively fast and little affected by noise and illumination, is used to extract the tire centers and radii, from which the distance between the two tires is obtained to distinguish small, medium, and large vehicles. On this basis, the logo at the tire center is located using prior knowledge, and local features are extracted with SIFT, so that both the external form of the vehicle and its brand mark can be identified. The vehicle size identification flow is shown in Figure 1.
HOUGH transform straight line detection
The most classical application of the Hough transform is straight-line detection in an image: using a coordinate-space transformation, a straight line is mapped to a point (a peak) in another parameter space, thereby converting the problem of detecting an arbitrary shape into the problem of finding a statistical peak.
If the expression of the straight line M in the plane rectangular coordinate system is as follows:
y=a*x+b (1)
where a is the slope and b is the intercept. According to equation (1), different points (x, y) on the line M are transformed in parameter space into a cluster of lines intersecting at a point P. Therefore, if a local maximum in the parameter space can be found, the detection of the straight line is achieved.
However, the slope of a straight line in a rectangular coordinate system may be infinite, so a straight line is usually expressed by its polar equation; that is, the image Cartesian coordinate system is converted into the polar Hough parameter space (p, θ) using a spatial coordinate mapping. For any point (x, y) in image space, the function expression is:
p=x*cosθ+y*sinθ (2)
wherein p is the vertical distance from the origin to the straight line, and theta is the angle between the vertical distance from the origin to the straight line and the X-axis direction, as shown in FIG. 2.
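Equation (2)'s voting scheme can be sketched with a tiny accumulator; the 1-degree θ step and integer ρ bins are illustrative discretization choices:

```python
import math
from collections import Counter

def hough_lines(points, theta_steps=180):
    """Vote in (rho, theta) space: each point (x, y) contributes one
    vote per theta via rho = x*cos(theta) + y*sin(theta), equation (2)."""
    acc = Counter()
    for x, y in points:
        for t in range(theta_steps):       # theta in degrees, 0..179
            theta = math.radians(t)
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(round(rho), t)] += 1
    return acc

# Three points on the horizontal line y = 2: the peak lands at
# rho = 2, theta = 90 degrees, as equation (2) predicts.
acc = hough_lines([(0, 2), (10, 2), (30, 2)])
rho, theta = max(acc, key=acc.get)
print(rho, theta, acc[(rho, theta)])  # 2 90 3
```

All three collinear points vote into the same (ρ, θ) bin only at the line's true parameters, which is exactly the local maximum the text describes.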
In practical image processing, only the contour part of an image needs to be detected, so the Hough transform is generally applied after edge detection or binarization.
The invention extends Hough line detection to Hough circle detection. The basic parameters of a circle are its center and radius; Hough circle detection is analogous to Hough line detection in that parameters are accumulated in parameter space to find a peak, except that the two-dimensional accumulator becomes three-dimensional. The invention uses Hough circle detection to extract geometric features such as the tire center and radius in order to identify whether the vehicle is large, medium, or small, and then locates the brand logo inside the tire from the radius, achieving the purpose of vehicle type identification.
Circle detection for traditional HOUGH transform
Any curve in image space can be expressed as:
f((a_1, ..., a_n), (x, y)) = 0 (3)
where a_1, ..., a_n are the parameters of the curve f((a_1, ..., a_n), (x, y)) = 0. If the variables and the parameters in equation (3) are interchanged, equation (3) can be equivalently written as:
g((x, y), (a_1, ..., a_n)) = 0 (4)
Points on the same curve in image space are then mapped through equation (4) into parameter space, where they intersect at the point determined by the parameters a_1, ..., a_n. Thus, if enough data points in image space belong to the same curve, the curve expression can be determined from the accumulated values at the parameter points in parameter space.
Using the curve equation form of equation (4), the Hough transform can be defined for the equation of a circle:
(x - x_0)^2 + (y - y_0)^2 = r^2 (5)
where (x_0, y_0) is the center of the circle and r is the radius; equation (5) describes the locus of the points (x, y) on the circle.
Fig. 3(a) is an image containing a circle, and fig. 3(b) is the accumulator space. Each edge point in fig. 3(a) defines a set of votes in the accumulator space: for every possible radius, the candidate circle centers lie on a circle around the edge point. Fig. 3(b) shows the votes of 3 edge points for one given radius; each edge point also votes for circles of every other radius, so the accumulator space is three-dimensional, with 3 parameters of interest, and each edge point maps to a voting cone in the accumulator. After the evidence of all edge points has been collected, the maximum in the accumulator can be read off, and it corresponds to the parameters of the circle in the original image; the three-dimensional accumulator is shown in fig. 3(c).
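A minimal pure-Python sketch of this three-dimensional accumulator, using equation (5) in its center-voting form; the synthetic edge points and the coarse radius/angle grids are illustrative assumptions:

```python
import math
from collections import Counter

def hough_circles(edge_points, radii, angle_step=10):
    """Classic Hough circle detection: a 3-D accumulator over
    (x0, y0, r). Each edge point (x, y) votes for the candidate
    centers x0 = x - r*cos(t), y0 = y - r*sin(t) at every radius."""
    acc = Counter()
    for x, y in edge_points:
        for r in radii:
            for t in range(0, 360, angle_step):
                theta = math.radians(t)
                x0 = round(x - r * math.cos(theta))
                y0 = round(y - r * math.sin(theta))
                acc[(x0, y0, r)] += 1
    return acc

# Integer points on the circle of radius 5 centered at (10, 10).
edges = [(15, 10), (5, 10), (10, 15), (10, 5),
         (13, 14), (13, 6), (7, 14), (7, 6),
         (14, 13), (14, 7), (6, 13), (6, 7)]
acc = hough_circles(edges, radii=range(3, 8))
print(max(acc, key=acc.get))  # (10, 10, 5)
```

The accumulator maximum recovers the true center and radius, since every edge point's voting cone passes through that one parameter cell.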
Three-dimensional accumulator space
The process of evidence collection is the same as for Hough line detection, but each vote forms a cone according to equation (5). The parametric form of equation (5) is:
x = x_0 + r*cosθ, y = y_0 + r*sinθ (6)
The advantage of this expression is that it avoids the parameter problems of the implicit form, and the final Hough mapping can be defined as:
x_0 = x - r*cosθ, y_0 = y - r*sinθ (7)
where the radius r determines the accumulator-space point; θ is not a free parameter but traces the locus of candidate centers. Hough transform circle detection recognizes circular objects well.
Random Hough transform detection circle
Randomly selecting three non-collinear points in image space and mapping them into one point in parameter space [23] forms a many-to-one mapping, namely the random Hough transform. This method avoids the large number of squaring operations generated by the one-to-many mapping of the conventional Hough transform, effectively reduces the computational load and memory consumption, and improves real-time processing performance in practical applications.
The random Hough transform only requires randomly selecting three distinct points on the edge of the image target region, computing the circle parameters from them, generating the parameter-unit accumulator, and checking whether the count in the accumulator reaches a specified threshold.
When the count reaches the threshold, the circle is considered a candidate circle; otherwise it is considered absent. The number of points of the candidate circle in image space is then determined: if it is larger than the set minimum number of points, the candidate circle is confirmed as a real circle; otherwise it is regarded as a false circle.
The basic steps of the random HOUGH transformation algorithm are as follows:
(1) create a set B of random edge points, set the parameter unit set S to empty, and set the loop count K = 0;
(2) randomly select 3 points p1, p2 and p3 from B;
(3) compute the circle parameters c determined by p1, p2 and p3; if they can be solved, go to (4), otherwise go to (7);
(4) search the unit set S for a datum Pc satisfying |c - Pc| < f2, where f2 is the tolerance; if such a datum is found, jump to (6), otherwise jump to (5);
(5) insert c into S with its count set to 1, and jump to (7);
(6) add 1 to the count of Pc; if the count does not exceed the specified threshold, jump to (7), otherwise jump to (8);
(7) set K = K + 1; if K > Kmax, the algorithm ends, otherwise go to (2);
(8) Pc is the data of a candidate circle; if the number M of image points on the corresponding circle satisfies M > Mmin, go to (9); otherwise it is considered a pseudo-circle, Pc is deleted from the unit set S, and the algorithm jumps to (2);
(9) once Pc is confirmed as a real circle, judge whether the detected circles have reached the set number; if so, the algorithm ends; if not, remove the points on the circle corresponding to Pc from the edge set B, reset S to empty and K to 0, and jump to (2) to detect again.
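Step (3) of the algorithm, computing the circle determined by the three sampled points, reduces to solving for the circumcenter; a standalone sketch (names assumed for illustration), where collinear points correspond to the "cannot be solved" branch:

```python
def circle_from_3_points(p1, p2, p3):
    """Return (x0, y0, r) of the circle through three points,
    or None if the points are (nearly) collinear."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Twice the signed area of the triangle; zero means collinear.
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        return None
    s1 = x1 * x1 + y1 * y1
    s2 = x2 * x2 + y2 * y2
    s3 = x3 * x3 + y3 * y3
    x0 = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    y0 = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    r = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return x0, y0, r

print(circle_from_3_points((0, 5), (-5, 0), (5, 0)))  # (0.0, 0.0, 5.0)
print(circle_from_3_points((0, 0), (1, 1), (2, 2)))   # None (collinear)
```

The returned triple is exactly the parameter point that the random Hough transform accumulates in its unit set S.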
From the steps above, the main data structure of random Hough circle detection is a dynamic linked list; random edge points are mapped many-to-one into single parameter points, and the parameters are accumulated in allocated units, which reduces the memory requirement and improves computing efficiency. Although random Hough circle detection can in principle raise the parameter accuracy arbitrarily, the random, irregular sampling of the image inevitably produces a large number of invalid units, and with them many invalid judgments and accumulations [28]. Therefore, when processing relatively complex images, random Hough transform circle detection may suffer significantly reduced performance and wasted memory. For example, if an image contains N circles of q points each and n points not on any circle, the probability that three points obtained by random Hough transform sampling fall on the same circle is:
P = N*C(q, 3)/C(N*q + n, 3) (8)
if there are no points on the circle outside the circle, then equation (8) can be simplified as:
as can be seen from equation (9), if there are circles of the same size on one image, the smaller the number of circles, the greater the probability that three random points fall on the same circle, and the smaller the probability of invalid accumulation. Thus, the random Hough transform will be efficient and fast when only a small number of circles are present in the image, whereas in this paper the far numbers are relatively small and can be done quite efficiently.
In this work we assume by default that the image contains only 2 circles, i.e. 2 tires. By equation (9), the probability that three randomly sampled points fall on the same circle is 25%, so the probability of an invalid sample is 75%, which is still high. However, analysis of the acquired images shows that the image size is constant and the two tires each lie in one half of the image, so one half of the image can be sampled first and then the other half; in theory the invalid accumulation probability is then almost zero and efficiency is improved. The results are shown in fig. 4(a), fig. 4(b) and fig. 4(c), where the gray straight line divides the image into 2 parts for processing and the 2 gray circles are the 2 circles found by the Hough transform. The centers of the two tires of the large vehicle are (83,235) and (478,230) with radius 21, giving a distance of approximately 395 between its 2 tires. The centers of the two tires of the medium vehicle are (105,229) and (431,229) with radius 20, giving a distance of 326. The centers of the two tires of the small vehicle are (99,219) and (379,219) with radius 23, giving a distance of approximately 280. From the distance between the 2 tires, the vehicle can be judged to be a large, medium, or small vehicle. The logo position inside the tire can then be located from prior knowledge and the radius, and SIFT local features are extracted to identify the brand of the vehicle.
Finally, according to the traditional classical HOUGH transformation circle detection and the random HOUGH transformation circle detection geometric feature extraction, table 1, table 2 and table 3 can be obtained;
TABLE 1 HOUGH transform circle detection minicar features
TABLE 2 HOUGH transform circle detection medium-sized vehicle characteristics
TABLE 3 HOUGH transform circle detection Large vehicle characteristics
As can be seen from tables 1, 2 and 3, both conventional Hough transform circle detection and random Hough transform circle detection determine the tire centers and radii accurately, but in terms of time and efficiency the random variant takes far less time than the conventional one. Once the centers of the front and rear tires are known, the distance between the two tires is obtained, so the vehicle can be identified as small, medium, or large; finally the distance is classified by an SVM to produce the vehicle type classification result.
Criteria for vehicle type recognition and classification
In this work, the selected vehicle images are all of the 600 x 300 size. According to the relevant data, regardless of the tire radius, the distances between the front and rear tires of large, medium, and small vehicles differ considerably. In simulation experiments on a large number of 600 x 300 images, the tire distance of small vehicles lies between 260 and 310 pixels, that of medium vehicles between 310 and 365 pixels, and that of large vehicles between 365 and 420 pixels.
the distances between the front and rear tires of large-sized vehicles, medium-sized vehicles and small-sized vehicles can be summarized into a table 4;
TABLE 4 Vehicle type classification criteria

Vehicle model | Distance between front and rear tires (pixels)
Small vehicle | 260 < distance < 310
Medium vehicle | 310 < distance < 365
Large vehicle | 365 < distance < 420
Car logo positioning
The circle centers and radii are extracted by the Hough transform, and the logo inside the tire can then be located simply and quickly from prior knowledge. The specific positioning method is to compute the ratio scale by dividing the real tire radius Real-R by the real radius Real-r of the logo inside it, namely:
scale=Real-R/Real-r (10)
the radius Image-R of the tire in the Image can be obtained through HOUGH circle transformation detection, and then the radius Image-R of the emblem in the Image can be calculated to be Image-R/scale, namely the expression is as follows:
Image-r=Image-R/scale (11)
Moreover, because the tire and the logo inside it are concentric circles, the logo radius in the image can be computed accurately and the logo located. Since vehicles fall into 3 size classes (large, medium and small), there are correspondingly 3 values of scale.
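Equations (10) and (11) amount to one line of arithmetic; a minimal Python sketch with illustrative names:

```python
def logo_radius_in_image(image_R, real_R, real_r):
    """Equations (10)-(11): scale = Real-R / Real-r, then
    Image-r = Image-R / scale. Tire and logo are assumed concentric,
    so the logo circle shares the detected tire centre."""
    scale = real_R / real_r   # (10) real tire-to-logo ratio
    return image_R / scale    # (11) logo radius in the image
```

For example, a detected tire radius of 50 px with a real tire radius of 30 cm and a real logo radius of 6 cm gives a 10 px logo circle around the tire centre.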
Car logo feature extraction and recognition
Once the car logo is located, its features must be extracted; the logo identification process is shown in Fig. 5:
vehicle logo identification
After HOUGH circle detection finds the tire, its center and radius are known, so the logo inside the tire can be located quickly and simply and its features extracted. The Scale-Invariant Feature Transform (SIFT) is widely used in image processing and machine vision: its features are independent of the scale at which an object appears (i.e., of how large the object looks to the camera as its distance changes) and are robust to rotation, illumination changes, partial occlusion and noise, which makes it a natural choice for local feature extraction. A local feature is a region that differs strongly from its surroundings, so that target and background are highly distinguishable; the quality of local feature extraction directly affects later recognition and classification.
The SIFT operator is invariant to rotation, scale, illumination and other changes and separates objects well, which has made it a focus of computer vision research. The main steps of the SIFT transform are: (1) generation of the scale space; (2) detection of scale-space extreme points; (3) accurate localization of extreme points; (4) assignment of direction parameters to key points; (5) generation of key point descriptors.
Generation of scale space
The scale space of a two-dimensional image may be defined as:

L(x, y, σ) = G(x, y, σ) * I(x, y) (12)

where G(x, y, σ) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)) is a scale-variable Gaussian function, (x, y) are the spatial coordinates and σ is the scale factor. The scale factor σ expresses the smoothness of the image: a small scale corresponds to image detail, a large scale to image contour.
Detecting extreme point in scale space
Each sampling point is compared one by one with its neighbours in both the image domain and the scale domain. A detection point in the interior of the image is first compared with its 8 neighbours at the same scale, and then with the 9 corresponding points at each of the adjacent scales above and below (18 = 9 × 2 points), i.e. 26 (8 + 9 × 2) points in total. If the point is a minimum or maximum over these 26 neighbours in the Difference of Gaussians (DOG) scale space, it is taken as a feature point.
The specific steps of the extreme value detection in the scale space are as follows:
a. establishing a Gaussian pyramid
In the first step, image noise is attenuated and removed by Gaussian convolution; at the same time the image is doubled in size and the first layer is interpolated to obtain more feature points. In the second step, down-sampling generates the hierarchical structure of the image pyramid. The Gaussian pyramid layers are obtained by convolving G(x, y, σ) with different scale factors σ with each layer image I(x, y).
b. Building a Difference (DOG) pyramid
Typically the DOG pyramid is obtained by subtracting adjacent layers of the Gaussian pyramid. Following Lowe, extreme value detection uses the difference-of-Gaussian operator:

D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ) (13)

where k is the constant factor separating adjacent scale layers within the same octave.
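For illustration only, a pure-Python sketch of one DOG layer on a 1-D signal (the method itself works on 2-D images; the kernel radius of 3σ and the factor k = 1.6 are conventional assumptions, not values fixed by the source):

```python
import math

def gaussian_kernel(sigma):
    """Sampled, normalised 1-D Gaussian G(x, sigma), truncated at 3*sigma."""
    radius = max(1, int(3 * sigma))
    k = [math.exp(-(x * x) / (2.0 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, sigma):
    """L(x, sigma) = G(sigma) * I(x), with edge clamping."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    n = len(signal)
    return [sum(k[j + r] * signal[min(max(i + j, 0), n - 1)]
                for j in range(-r, r + 1)) for i in range(n)]

def dog(signal, sigma, k=1.6):
    """One DOG layer: D(x, sigma) = L(x, k*sigma) - L(x, sigma)."""
    return [a - b for a, b in zip(smooth(signal, k * sigma),
                                  smooth(signal, sigma))]
```

An impulse produces a negative DOG response at its centre (the wider Gaussian spreads the peak more), while the layer sums to roughly zero because both kernels are normalised.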
c. DOG spatial extreme point detection
The local extreme points of the DOG space form the key points. Each pixel in the scale space is compared with the gray values of its neighbours at its own scale and at the scales above and below; if it is a maximum or a minimum, an extreme point has been detected in both scale space and image space.
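The 26-neighbour comparison can be sketched as follows, with each DOG layer given as a 2-D list (a simplification of the real pyramid structure):

```python
def is_scale_space_extremum(below, current, above, i, j):
    """True if the middle-layer DOG value at (i, j) is a strict
    maximum or strict minimum over its 26 neighbours: 8 on the same
    layer plus 3 x 3 = 9 on each adjacent scale layer."""
    v = current[i][j]
    neighbours = []
    for layer in (below, current, above):
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if layer is current and di == 0 and dj == 0:
                    continue  # skip the candidate point itself
                neighbours.append(layer[i + di][j + dj])
    return all(v > n for n in neighbours) or all(v < n for n in neighbours)
```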
Locating extreme points
When local extrema of the DOG space are detected, points of low contrast and points on edges may be mistaken for extreme points, so the candidates must be screened again before being confirmed as key points; specifically, low-contrast points and edge points are reduced or removed. First, the scale-space function (12) is expanded by the Taylor formula:

D(X) = D + (∂D/∂X)ᵀ X + ½ Xᵀ (∂²D/∂X²) X (14)

Taking the derivative, setting it to zero and solving gives the precise extremum position X̂:

X̂ = −(∂²D/∂X²)⁻¹ (∂D/∂X) (15)
After the feature points are detected, the unstable edge-response points and low-contrast points must be removed. For low contrast, substituting (15) into (14) and keeping only the first two terms gives

D(X̂) = D + ½ (∂D/∂X)ᵀ X̂ (16)

If |D(X̂)| ≥ 0.03 the feature point is retained; otherwise it is removed directly.
To remove the unstable edge-response feature points, the behaviour of the Gaussian difference operator at an edge is exploited: there the extremum has a large principal curvature across the edge and a small one along it. A 2 × 2 Hessian matrix H is typically chosen to find the principal curvatures:

H = [Dxx Dxy; Dxy Dyy]
Let α be the larger and β the smaller of the two eigenvalues of the matrix; then:
Tr(H)=Dxx+Dyy=α+β (17)
Det(H)=Dxx*Dyy-(Dxy)2=α*β (18)
where Tr(H), the sum of the diagonal entries of the matrix, is its trace, Det(H) is its determinant, and α and β are the eigenvalues of H. First, points whose determinant is negative are removed; then points with an overly large principal curvature ratio are removed using Lowe's criterion. Let α = rβ; then

Tr(H)²/Det(H) = (α + β)²/(αβ) = (r + 1)²/r (19)

which grows with r: the larger r is, the more likely the point lies on an edge. To eliminate edge-response points it therefore suffices to check, against a chosen threshold r, that:

Tr(H)²/Det(H) < (r + 1)²/r
If this inequality holds, the key point is retained; otherwise it is eliminated.
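The edge-response check built from equations (17)-(19) fits in a few lines. A minimal Python sketch (r = 10 is Lowe's recommended threshold, used here as an assumption since the text does not fix r):

```python
def passes_edge_test(dxx, dyy, dxy, r=10.0):
    """Edge-response check on the 2x2 Hessian, eqs (17)-(19):
    keep a point only if Det(H) > 0 and
    Tr(H)^2 / Det(H) < (r + 1)^2 / r."""
    tr = dxx + dyy                # (17)  alpha + beta
    det = dxx * dyy - dxy * dxy   # (18)  alpha * beta
    if det <= 0:
        return False  # eigenvalues of opposite sign: discard
    return tr * tr / det < (r + 1.0) ** 2 / r
```

An isotropic blob (dxx ≈ dyy) passes, while an elongated edge response (dxx ≫ dyy) fails.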
Key point designation direction
To obtain rotation invariance, the descriptor must assign a reference direction to each key point; the modulus and direction at the key point are found from the image gradient as follows:

m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²) (20)

θ(x, y) = arctan((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))) (21)
where L is the value at the key point's scale in scale space. After the gradient of the key point is computed, a histogram of the gradient directions and magnitudes of the pixels in its neighbourhood is accumulated. The 360° direction range is divided into 8 columns of 45° each; the direction of the histogram peak is the main direction of the key point. If there are several peaks, auxiliary directions are added for peaks reaching 80% of the maximum.
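A sketch of the gradient computation and the 8-column histogram described above (pure Python; names and the row/column indexing convention are illustrative):

```python
import math

def gradient(L, i, j):
    """Gradient magnitude m and orientation theta (degrees, 0-360)
    at (i, j) from finite differences of the smoothed image L, as in
    eqs (20)-(21). Rows are treated as y and columns as x."""
    dx = L[i][j + 1] - L[i][j - 1]
    dy = L[i + 1][j] - L[i - 1][j]
    m = math.hypot(dx, dy)
    theta = math.degrees(math.atan2(dy, dx)) % 360.0
    return m, theta

def orientation_histogram(samples, bins=8):
    """Accumulate (magnitude, theta) pairs into `bins` columns of
    360/bins degrees each; the peak bin gives the main direction."""
    hist = [0.0] * bins
    width = 360.0 / bins
    for m, theta in samples:
        hist[int(theta // width) % bins] += m
    return hist
```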
Key point descriptor generation
Through the above steps, each key point carries three pieces of information: position, scale and direction. A descriptor is then built for each key point from a set of vectors, so that it remains unchanged under viewing angle, illumination and other variations, is highly distinctive, and raises the probability of correct feature point matching.
SIFT feature extraction was chosen above because the features do not change under scaling, rotation and similar transformations of the image; this is verified by the simulation experiments shown in Fig. 6(a), 6(b) and 6(c).
Each black circle marks a feature point: the center is its position and the line segment drawn from the center encodes its scale and direction. The results show that the feature points are unchanged whether the logo is enlarged or rotated, confirming that SIFT is invariant to scaling and rotation.
SIFT feature vector extraction of car logo
SIFT extracts all feature points of an image. The logo feature extraction experiments show that the number of extracted feature points differs even between logo samples of the same brand and size, as in the SIFT feature point extraction results for logos of two brands shown in Fig. 7(a) and 7(b).
After the key points are determined and quantized, many similar feature points remain, so feature vector similarity must be judged; a Euclidean distance algorithm is therefore introduced to eliminate redundant feature points and points whose features are not distinctive.
European distance purification SIFT feature point
The SIFT transform not only has good distinctiveness but also carries rich feature information, is invariant to rotation, translation and scale, and resists noise; in particular, it combines well with other features, which makes it indispensable in recognition. In this paper, for example, it is combined with the tire center, radius and tire distance extracted by the HOUGH transform to recognize both the vehicle type (large, medium or small) and the brand. SIFT also has drawbacks: the number of key points in an image can be large, which slows recognition; nevertheless its advantages make it irreplaceable.
To improve the efficiency of SIFT-based recognition and reduce the number of feature points, the Euclidean distance algorithm is introduced. Similarity between extracted SIFT feature vectors is judged by their Euclidean distance: the closer two vectors are, the more similar they are. When the distance between two SIFT feature vectors falls below a preset threshold, they are considered similar and one of them is deleted. The feature point purification effect is shown in Fig. 8(a) and 8(b).
Comparing Fig. 7(a) with Fig. 8(a) and Fig. 7(b) with Fig. 8(b) shows that Euclidean distance screening clearly reduces the number of SIFT feature points without reducing the distinctiveness of the logo itself. Training the remaining points with the SVM then saves a large amount of time and correspondingly improves recognition and classification efficiency, finally yielding brand recognition and classification.
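The purification step can be sketched as a greedy filter over the descriptor list. The exact pairing strategy (which of two close descriptors is deleted) is not specified in the text; this keep-first variant is one reasonable reading:

```python
import math

def purify(descriptors, threshold):
    """Euclidean-distance purification: keep a descriptor only if it
    is at least `threshold` away from every descriptor already kept.
    Of any pair closer than `threshold`, the later one is deleted."""
    kept = []
    for d in descriptors:
        if all(math.dist(d, k) >= threshold for k in kept):
            kept.append(d)
    return kept
```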
Support vector machine image classification
The difficulty of image classification lies mainly in two aspects: feature selection and extraction, and model selection and learning. Here the tire radius and front-to-rear tire distance obtained by the HOUGH transform, together with the position, scale and direction of the SIFT features, serve as features, and a support vector machine is adopted for model selection and learning.
Support vector machine
The Support Vector Machine (SVM), which emerged in the mid-1990s, is a machine learning method that minimizes empirical risk and the confidence interval; it is a supervised learning model grounded in statistical learning theory and is widely used for classification, pattern recognition and related tasks.
The greatest advantage of the support vector machine is that a linearly inseparable problem can be converted into a linearly separable one through a kernel function: the low-dimensional space is mapped into a high-dimensional space, which effectively improves the nonlinear processing capability. The kernel function computes inner products of high-dimensional vectors without performing any complex operation in the high-dimensional space itself, so the computation is simplified and the specific nonlinear mapping need not even be known. This makes the SVM an important method for the classification and recognition in this paper.
Optimal classification surface
The learning objective is to find the classification hyperplane, which for a linear classifier in an N-dimensional data space can be expressed as:

Wᵀx + b = 0 (23)

This formula defines the linear classifier: a sample with Wᵀx + b > 0 is labeled +1 and one with Wᵀx + b < 0 is labeled −1, where W is a vector perpendicular to the classification hyperplane, x is a data point, and b is an offset that gives the hyperplane flexibility.
Consider two classes of points on a two-dimensional plane, represented by diamonds and pentagons, i.e. the two states of a class of two-dimensional training samples. A line F separates the diamonds and pentagons into different regions; F1 and F2 are parallel to F and pass through the samples of each region closest to the optimal plane.
The optimal classification line is the straight line that correctly separates the two classes of samples represented by the diamonds and pentagons while maximizing the distance between F1 and F2. In real applications the samples are usually multi-dimensional, and F is then an optimal classification hyperplane rather than a line.
Let the sample set be {Nᵢ, Mᵢ}, i = 1, …, n, Nᵢ ∈ Rᵈ, where Mᵢ ∈ {−1, +1} is the category label of sample Nᵢ. The linear discriminant function is then M(x) = w·N + b: if M(x) > 0 the sample is judged positive and labeled +1; if M(x) < 0 it is judged negative and labeled −1; if M(x) = 0 the sample lies on the boundary and may be assigned to either class or left unclassified.
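The decision rule of the discriminant function can be sketched directly (a minimal sketch; the boundary case is assigned to the negative class here as an assumption):

```python
def svm_sign(w, x, b):
    """Decision rule for the hyperplane W^T x + b = 0: label +1 on
    the positive side, -1 otherwise (boundary assigned to -1 here)."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else -1
```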
A commonly encountered problem is that many samples are linearly inseparable. A way is therefore needed to make linearly inseparable data separable; the kernel function does this by mapping the data into a high-dimensional space in which it becomes linearly separable.
Kernel function
The kernel function maps the training data samples to a set more amenable to linear classification: it takes vectors in the low-dimensional space and returns the inner product of their images in a high-dimensional space. The following kernel functions are commonly chosen:
(1) linear kernel function
K(x,x1)=(x·x1) (24)
(2) Polynomial kernel function
K(x, x1) = [v(x · x1) + 1]^q (25)
(3) Radial basis kernel function (Gaussian kernel function)

K(x, x1) = exp(−‖x − x1‖² / (2σ²)) (26)
(4) Sigmoid kernel function
K(x, x1) = tanh(v(x · x1) + c) (27)
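The four kernels (24)-(27) in a minimal Python sketch. Since equation (26) is missing from the text, the Gaussian kernel is written in its usual form exp(−‖x − x1‖²/(2σ²)) as an assumption; parameter defaults are illustrative:

```python
import math

def linear_kernel(x, x1):
    """(24) K(x, x1) = x . x1"""
    return sum(a * b for a, b in zip(x, x1))

def polynomial_kernel(x, x1, v=1.0, q=2):
    """(25) K(x, x1) = [v (x . x1) + 1]^q"""
    return (v * linear_kernel(x, x1) + 1.0) ** q

def rbf_kernel(x, x1, sigma=1.0):
    """(26) K(x, x1) = exp(-||x - x1||^2 / (2 sigma^2)) -- the usual
    Gaussian form, assumed here because eq. (26) is absent."""
    sq = sum((a - b) ** 2 for a, b in zip(x, x1))
    return math.exp(-sq / (2.0 * sigma * sigma))

def sigmoid_kernel(x, x1, v=1.0, c=0.0):
    """(27) K(x, x1) = tanh(v (x . x1) + c)"""
    return math.tanh(v * linear_kernel(x, x1) + c)
```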
Construction SVM multi-class classifier
Although the kernel function turns linear inseparability into separability, real applications usually present multi-class problems, and a support vector machine cannot be applied to them directly. An SVM multi-class classifier must therefore be constructed; there are 2 approaches, the direct method and the indirect method.
The direct method modifies the objective function so that the parameters of several classifiers are solved in a single optimization problem, realizing multi-class classification in one solution step.
The indirect method builds the multi-classifier from combinations of several binary SVM classifiers, using the one-to-one or one-to-many scheme. This paper mainly uses the indirect method to realize the multi-class construction.
One-to-many method
The samples of one category are taken in turn as one class and all remaining samples as the other, so m categories yield m SVM classifiers for training. At classification time each classifier produces a result; the results are compared, and the classifier with the maximum value gives the final class of the unknown sample. The specific method is as follows:
Suppose the car logos are classified into 3 brands, Volkswagen, Toyota and Honda, labeled C1, C2 and C3. When the training sets are built:
(1) the data corresponding to C1 is regarded as positive samples, and the data corresponding to C2 and C3 are regarded as negative samples;
(2) the data corresponding to C2 is regarded as positive samples, and the data corresponding to C1 and C3 are regarded as negative samples;
(3) the data corresponding to C3 is regarded as positive samples, and the data corresponding to C1 and C2 are regarded as negative samples;
Thus 3 training sets are generated for training, producing three training result files. At prediction time the sample data are tested one by one with the 3 training result files, giving 3 results result1, result2 and result3; the maximum value is the classification result.
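The one-to-many decision reduces to taking the classifier with the maximum output. A sketch with the classifier outputs supplied as a dict (an illustrative interface, not the source's implementation):

```python
def one_vs_rest_predict(decision_values):
    """One-to-many prediction: each of the m binary SVMs scores the
    sample against its own class; the class whose classifier returns
    the largest value wins. `decision_values` maps class label to
    classifier output (ties resolve to the first maximum)."""
    return max(decision_values, key=decision_values.get)
```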
One-to-one method
One-to-one generates an SVM classifier for every pair of classes, so N classes of data samples yield N(N−1)/2 SVM classifiers. To classify an unknown data sample, the class that receives the most votes across the classifiers is chosen. The specific method is as follows:
taking the states of the vehicle logos of the public, Toyota and Honda as examples, the public, Toyota and Honda are marked as A, B and C respectively. Selecting A and B during training; a, C; b, C; three classes of training will result in three training result files. During prediction, only the feature vectors need to be ensured to be tested in three training results respectively, and statistics is carried out in a ticket counting mode[54]And obtaining a set of test results.
Assume the votes are initialized as A = B = C = 0:
(A, B)-classifier: if A wins, A = A + 1; otherwise B = B + 1;
(A, C)-classifier: if A wins, A = A + 1; otherwise C = C + 1;
(B, C)-classifier: if B wins, B = B + 1; otherwise C = C + 1;
The final classification is the class with the maximum of (A, B, C), i.e. Max(A, B, C) gives the classification result.
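The one-to-one voting scheme above, sketched with the trained pairwise SVMs abstracted as a callable (an illustrative interface; in practice each pair would invoke its own trained classifier):

```python
from itertools import combinations

def one_vs_one_predict(classes, pairwise_winner):
    """One-to-one voting over the N(N-1)/2 pairwise classifiers.
    `pairwise_winner(a, b)` stands in for the trained (a, b) SVM and
    returns whichever of a, b it prefers; the most-voted class wins
    (ties resolve to the first class in `classes`)."""
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        votes[pairwise_winner(a, b)] += 1
    return max(votes, key=votes.get)
```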
General SVM types
On the basis of the multi-class construction, the type of SVM must also be selected. Common SVM types include the v-support vector classifier (v-SVC), the C-support vector classifier (C-SVC), the ε-support vector regression machine (ε-SVR), the v-support vector regression machine (v-SVR) and one-class SVM distribution estimation, among others; each type behaves differently in different applications.
SVM classification identification process
Features are extracted from the acquired vehicle images, the distinctive sample features are used to train the support vector machine, and the feature vectors of the vehicle images to be classified and recognized are generated; the results are stored in a file.
Initialization parameters matter greatly in training, since the chosen parameters strongly affect the generalization ability of the SVM. For C-SVC with a radial basis kernel, for instance, the setting of the penalty coefficient becomes very important; generally C > 100 gives good generalization ability. SVM training is therefore an involved process, and suitable parameters are found only by testing through continual parameter selection. As Figs. 9 and 10 show, although the SIFT feature points become fewer after Euclidean distance purification, the car logo is still recognized after SVM training and recognition, while the SVM training and recognition time is correspondingly reduced.
In this method, HOUGH circle detection extracts geometric features from the vehicle image, such as the tire radius, the circle center and the distance between the front and rear tires, from which the vehicle is judged to be a large, medium or small vehicle. Because the tire and the logo inside it are concentric circles, combining the tire radius with the real tire-to-logo ratio locates the logo accurately; SIFT features are then extracted and the SIFT feature points are trained and recognized by an SVM. This already recognizes the logo, but many feature points go unused, wasting space and time. The SIFT feature points are therefore purified with the Euclidean distance algorithm, reducing the number of logo features; although fewer points remain, the logo is still recognized after SVM training and recognition, and the reduced feature count shortens the time, making recognition closer to real time.
The invention has been described above with reference to the accompanying drawings. The invention is obviously not limited to the specific implementation described; applying the inventive concept and technical solution to other situations without substantial modification remains within the scope of protection of the invention.
Claims (6)
1. A vehicle type and vehicle logo identification method based on tire detection is characterized by comprising the following steps:
s1, detecting the wheel center and the wheel tire radius Image-R of the vehicle to be detected in the Image;
s2, identifying the vehicle type of the vehicle to be detected based on the distance between two wheels on the same side, wherein the vehicle type is a small vehicle, a medium vehicle or a large vehicle;
and S3, positioning and identifying the vehicle mark on the tire of the vehicle to be detected, namely, finishing the identification of the vehicle.
2. The method for identifying a vehicle type and a vehicle logo based on tire detection as claimed in claim 1, wherein the method for locating the vehicle logo on the wheel is as follows:
s31, the vehicle tire and the vehicle logo are concentric circles, and the vehicle logo radius Image-r is calculated based on the vehicle tire-vehicle logo proportion scale of the corresponding vehicle type, wherein Image-r = Image-R/scale;
s32, taking the circle center of the wheel as the circle center and the circle with the vehicle logo radius Image-r as the radius as the vehicle logo position;
the vehicle tire-logo proportion scale is the ratio of the actual vehicle tire radius Real-R to the actual radius Real-r of the logo in the vehicle, namely scale = Real-R/Real-r.
3. The method as claimed in claim 1, wherein the wheel center and the wheel tire radius Image-R of the vehicle to be detected in the image are detected by HOUGH transform circle detection.
4. The method for identifying a vehicle type and a vehicle logo based on tire detection as claimed in claim 2, wherein the method for identifying a vehicle logo specifically comprises the steps of:
s33, extracting feature points of the car logo position image based on SIFT, and putting the extracted feature points into a feature point set;
s34, deleting redundant feature points in the feature point set, namely finishing the purification of the feature points;
and S35, identifying the car logo on the wheel based on the support vector machine.
5. The method for identifying a vehicle type and a vehicle logo based on tire detection as claimed in claim 4, wherein the purification process of the feature points is as follows:
s341, calculating the Euclidean distance between any two feature points in the feature point set;
and S342, if the Euclidean distance between the two feature points is smaller than a set distance threshold value, determining that the two feature points are similar, wherein one feature point is a redundant feature point.
6. The method for recognizing vehicle types and vehicle logos based on tire detection as claimed in claim 1, wherein the distance interval between the two wheels on the same side corresponding to each type of vehicle is defined as follows:
the small-sized vehicle: the distance interval between the two wheels on the same side is as follows: 260-310 cm;
the medium-sized vehicle: the distance interval between the two wheels on the same side is as follows: 310 cm-365 cm;
the large-scale vehicle: the distance interval between the two wheels on the same side is as follows: 365 cm-420 cm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010582541.5A CN111914892A (en) | 2020-06-23 | 2020-06-23 | Vehicle type and vehicle logo identification method based on tire detection |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111914892A true CN111914892A (en) | 2020-11-10 |
Family
ID=73226512
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010582541.5A Pending CN111914892A (en) | 2020-06-23 | 2020-06-23 | Vehicle type and vehicle logo identification method based on tire detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111914892A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113762213A (en) * | 2021-09-28 | 2021-12-07 | 杭州鸿泉物联网技术股份有限公司 | Dangerous driving behavior detection method, electronic equipment and system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002228423A (en) * | 2001-01-31 | 2002-08-14 | Matsushita Electric Ind Co Ltd | Tire detecting method and device |
CN106127107A (en) * | 2016-06-14 | 2016-11-16 | 宁波熵联信息技术有限公司 | The model recognizing method that multi-channel video information based on license board information and vehicle's contour merges |
US20180114337A1 (en) * | 2016-10-20 | 2018-04-26 | Sun Yat-Sen University | Method and system of detecting and recognizing a vehicle logo based on selective search |
CN110569921A (en) * | 2019-09-17 | 2019-12-13 | 中控智慧科技股份有限公司 | Vehicle logo identification method, system, device and computer readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||