CN105608441B - Vehicle type recognition method and system - Google Patents

Vehicle type recognition method and system

Info

Publication number
CN105608441B
CN105608441B (application CN201610019285.2A)
Authority
CN
China
Prior art keywords
region
vehicle type
picture
information
classifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610019285.2A
Other languages
Chinese (zh)
Other versions
CN105608441A (en)
Inventor
苏志杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan Boguan Intelligent Technology Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN201610019285.2A priority Critical patent/CN105608441B/en
Publication of CN105608441A publication Critical patent/CN105608441A/en
Application granted granted Critical
Publication of CN105608441B publication Critical patent/CN105608441B/en
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Abstract

The invention discloses a vehicle type recognition method and system. The method comprises a process of generating classifiers through machine training and a process of discriminating a picture to be detected. In the classifier-generation process, the required image range in a training-set picture is determined based on the license plate, the determined image range is divided into regions, feature information is selected in each region, and all of the feature information selected in each region is fed into machine training to generate one classifier per region. The picture to be detected is then discriminated region by region with the generated classifiers, and the vehicle type recognition result is obtained from the single-region results by multi-region confidence fusion. The invention effectively increases the accuracy of vehicle type recognition, can further identify detail information such as the vehicle manufacturer, is particularly suitable for intelligent traffic systems, and can provide strong evidence for the handling of traffic events.

Description

Vehicle type recognition method and system
Technical Field
The invention relates to the technical field of vehicle type recognition in video image processing.
Background
Vehicle type recognition is an important component of an intelligent traffic system: it can provide strong evidence for handling traffic events and further supports functions such as vehicle tracking. In the prior art, vehicle type recognition falls mainly into two categories, template-based recognition and classifier-based recognition.
Template-based recognition is suited to specific scenes. The Chinese patent with application number 201410014474.1 discloses a vehicle type recognition method and device for an ETC lane, i.e. for an electronic toll collection system in which vehicles do not stop. Vehicle type recognition in that scene only requires that the recognized vehicle type can be matched to a template stored in the system; the environment of the recognized vehicle is relatively stable and the distance to the camera is relatively fixed. The method therefore cannot handle vehicle type recognition in complex environments.
To address this, the Chinese patent with application number 201210049730.1 discloses a vehicle type recognition method for complex scenes based on the classifier approach mentioned above, which mainly involves selecting features and combining them with a specific machine learning discrimination method for classification. The selection of features and the generation of the machine-learned classifier are the key factors that determine the efficiency and accuracy of vehicle type recognition.
Disclosure of Invention
An object of the present invention is to provide a vehicle type recognition method and apparatus capable of recognizing the precise type of a vehicle, down to details such as the manufacturer.
In order to solve the above technical problems, the invention adopts the following technical scheme: a vehicle type recognition method comprising a process of generating classifiers through machine training and a process of discriminating a picture to be detected. In the classifier-generation process, the image range required in a training-set picture is determined based on the license plate, the determined image range is divided into regions, feature information is selected in each region, and all of the feature information selected in each region is fed into machine training to generate one classifier per region. The generated classifiers are used to perform single-region discrimination on the picture to be detected, and the vehicle type recognition result is obtained by multi-region confidence fusion of the single-region discrimination results.
In principle, the accuracy of vehicle type recognition depends on the extraction and use of image feature information. The required image range is sized with the license plate as the reference, since license plates are manufactured to a unified standard. Within the determined range, the image is trained and discriminated region by region: an independent classifier is trained for each region, and the vehicle type recognition result is obtained by multi-region confidence fusion of the single-region discrimination results.
The invention also discloses a vehicle type recognition system comprising a classifier generating device and a device for discriminating the picture to be detected. The classifier generating device determines the image range required in a training-set picture based on the license plate, divides the determined image range into regions, selects feature information in each region, and feeds all of the feature information selected in each region into machine training to generate one classifier per region. The discriminating device performs single-region discrimination on the picture to be detected with the generated classifiers and obtains the vehicle type recognition result by multi-region confidence fusion of the single-region results.
With the above technical scheme the invention has the following advantages: the image is divided into regions, an independent classifier is trained for each region, and the recognition result is obtained by multi-region confidence fusion of the single-region discrimination results. This effectively improves the accuracy of vehicle type recognition, allows detailed information such as the vehicle manufacturer to be recognized, makes the method particularly suitable for intelligent traffic systems, and provides strong evidence for the handling of traffic incidents.
Drawings
The following further describes embodiments of the present invention with reference to the accompanying drawings:
FIG. 1 is a flow chart of an embodiment of a vehicle type recognition method of the present invention;
FIG. 2 is an exemplary view of a vehicle image in a video image;
FIG. 3 is an exemplary diagram of image partition according to an embodiment of the vehicle type recognition method of the present invention;
FIG. 4 is a diagram illustrating an example of extracting feature information according to an embodiment of the vehicle type recognition method of the present invention.
Detailed Description
The present invention involves a number of terms from image recognition technology, including corner points, HOG features, SIFT point features, random forests and the associated algorithms; these terms have well-established meanings in the art and are used here with those meanings.
Referring to FIG. 1, the flowchart of a preferred embodiment of the invention is divided into two main processes: generating classifiers by machine training and discriminating the picture to be detected. Generating the classifiers involves processing the training-set pictures, while discrimination involves processing the picture to be detected. Both processes begin by determining the size of the image range, because real objects appear at different scales in different pictures and the pictures must be handled uniformly; since vehicle license plates are manufactured to a uniform standard, the license plate can serve as the reference for determining the image range. Locating the license plate is conventional technology and is not described in detail here.
Referring to FIG. 2, in this embodiment the range of the vehicle front-face image is determined based on the license plate position information; the range of the vehicle rear can of course be determined in the same way. In practice the total width of the image is 5 times the width of the license plate and the total height is 10 times the height of the license plate. Because the license plate sits relatively low on the vehicle front, the image width is centered on the midpoint of the license plate, while the image height extends 7.5 plate heights above that midpoint and 2.5 plate heights below it. The image whose range has been determined is then scaled uniformly for later processing, preferably to 400 × 200 pixels in this embodiment. As can be seen from FIG. 1, the range size of the training-set pictures and of the picture to be detected is determined in the same way, which is also a precondition for the vehicle type discrimination described below.
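The range determination described above can be illustrated with a short Python/OpenCV sketch. It assumes the license plate has already been located and is given as a pixel box (x, y, w, h); the function name and the clamping behaviour at the image border are illustrative assumptions, not part of the patent.

    import cv2

    def crop_front_face(image, plate_box):
        """Crop the vehicle front face using the license plate as the size reference.

        plate_box: (x, y, w, h) of the detected plate in pixels.
        The crop is 5 plate widths wide, centered on the plate midpoint, and
        10 plate heights tall: 7.5 heights above the midpoint, 2.5 below it.
        """
        x, y, w, h = plate_box
        cx, cy = x + w / 2.0, y + h / 2.0                    # plate midpoint
        left, right = int(cx - 2.5 * w), int(cx + 2.5 * w)
        top, bottom = int(cy - 7.5 * h), int(cy + 2.5 * h)
        H, W = image.shape[:2]                               # clamp to the frame
        crop = image[max(top, 0):min(bottom, H), max(left, 0):min(right, W)]
        # Scale uniformly to the working resolution used in this embodiment.
        return cv2.resize(crop, (400, 200), interpolation=cv2.INTER_LINEAR)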
After the range size of the image is determined, the image is divided into regions. As shown in FIG. 3, the image of determined range is preferably divided into 12 regions in this embodiment. The region occupied by the license plate is excluded: it carries no information that improves discrimination of a specific vehicle type, and the corner points on the license plate would instead introduce unnecessary interference, so this region is isolated separately. Referring again to FIG. 3, the areas immediately to either side of the license plate are excluded for the same reason.
It should be emphasized that the divided regions may be the same for the training-set pictures and the picture to be detected. However, because abrupt changes in the information of adjacent regions caused by boundary division may affect the subsequent discrimination result, the preferred embodiment divides the regions of the training-set pictures differently from those of the picture to be detected: as shown in FIG. 3, the regions of a training-set picture partially overlap one another, whereas the regions of the picture to be detected do not. For example, the two solid lines a and B plus the two outer edges bound region 1 as divided on the picture to be detected, while the two dotted lines a and B plus the two outer edges bound region 1 as divided on a training-set picture; the solid lines a, b and D plus the uppermost outer edge bound region 2 on the picture to be detected, and the dotted lines B, C and D plus the uppermost edge bound region 2 on a training-set picture. It can be seen that regions 1 and 2 as divided on the training-set picture overlap, and each is larger than and completely contains the corresponding region of the picture to be detected; preferably the training region extends beyond the test region by 20 to 25 pixels in the width direction (e.g. the pixel difference between A and a) and by 15 to 20 pixels in the height direction (e.g. the pixel difference between B and b).
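One way to derive the overlapping training-set regions from the non-overlapping test regions is simply to pad each test region box outward, as in the hedged sketch below; the pad sizes follow the 20-25 / 15-20 pixel figures above, and the box representation is an assumption.

    def expand_region(box, pad_w=22, pad_h=18, image_size=(400, 200)):
        """Grow a test-picture region box into the corresponding training region.

        box: (x0, y0, x1, y1) on the 400x200 working image. The training region
        overlaps its neighbours by roughly 20-25 pixels in width and 15-20 in
        height and completely contains the test region.
        """
        x0, y0, x1, y1 = box
        W, H = image_size
        return (max(x0 - pad_w, 0), max(y0 - pad_h, 0),
                min(x1 + pad_w, W), min(y1 + pad_h, H))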
After the image range determination and region division are complete, feature information is selected and extracted for each divided region. It should first be made clear that both the region-division step and the specific division method exist to make vehicle type identification more accurate and effective. For FIG. 3, the division into 12 regions is based on how concentrated the information useful for identifying the vehicle type is, so each region can be classified as a feature-sparse region, a feature-dense region or an ordinary region, and the size and position of each region is determined by that concentration of useful information. The 12-region division is of course only a relatively preferred scheme: in theory, finer division benefits the subsequent vehicle type identification, but an overly fine division also hurts identification efficiency. As shown in FIG. 3, regions 1, 3 and 11 are feature-sparse regions, for which detecting 8 corner points is preferred; regions 5 and 8 are feature-dense regions, for which detecting 12 corner points is preferred; the remaining regions are ordinary regions, for which detecting 10 corner points is preferred. The differences in feature information caused by the different numbers of corners are described below; in the subsequent vehicle type identification the feature-dense regions naturally play a larger role, which increases the accuracy of identification. The FAST corner detection algorithm may be used as the specific corner detection method.
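Per-region corner detection can be sketched with OpenCV's FAST detector; keeping only the strongest responses is one plausible way to obtain the fixed corner counts named above (the threshold value is an assumption).

    import cv2

    def detect_corners(region_img, n_corners):
        """Detect FAST corners in one region and keep the n strongest.

        n_corners is 8 for feature-sparse regions, 10 for ordinary regions and
        12 for feature-dense regions in the preferred embodiment.
        """
        fast = cv2.FastFeatureDetector_create(threshold=10, nonmaxSuppression=True)
        keypoints = fast.detect(region_img, None)
        keypoints = sorted(keypoints, key=lambda kp: kp.response, reverse=True)
        return [kp.pt for kp in keypoints[:n_corners]]   # list of (x, y) tuples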
Next, feature information is selected and extracted for each region. The feature information described in this embodiment is a fusion of three parts: position information, block feature information and point feature information. For efficiency, corner grouping is performed first: the corner points in each region are combined arbitrarily, with several corners forming one group. Referring to FIG. 4, as a preferred embodiment each group contains three corners. Taking the 8 corner points of the first region as an example, any three of them form a group, giving 8 × 7 × 6 / (1 × 2 × 3) = 56 combinations in total, and feature information is selected and extracted for each three-point combination as shown in FIG. 4.
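The grouping itself is just an enumeration of all three-corner combinations; corner_points below stands for the list returned by the per-region corner detector and is an illustrative name.

    from itertools import combinations

    # All groups of three corners in one region; with 8 corners this yields
    # 8 * 7 * 6 / (1 * 2 * 3) = 56 triples, each producing one feature vector.
    corner_triples = list(combinations(corner_points, 3))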
Referring to FIG. 4, again taking the first region as an example, M denotes the center point of the license plate and A, B, C denote three detected points of the region. The distances of the three points from M are compared first and the points are ordered by that distance: AM is the largest, BM the second largest and CM the smallest, so the order of the three points is A, B, C. For the absolute position information, M is taken as the origin of a two-dimensional rectangular coordinate system with the first quadrant at the upper right, the second at the upper left, the third at the lower left and the fourth at the lower right. The x and y coordinates of A, B and C and their distances from the origin are then extracted in turn; for point A these are denoted AMx, AMy and |AM|, giving 9 values in total. For the relative position information, point C is used as the reference point and the coordinate displacements of A and B with respect to C (covering both the x and y axes, 4 variables in total) are extracted, computed as (CMx - AMx), (CMy - AMy), (CMx - BMx), (CMy - BMy). Two further scale parameters are needed, computed as |AB|/|BC| and |AC|/|BC|; these two ratios are more stable than the other position information and resist scaling and rotation better. This completes the extraction of the relative position information: its 6 variables are added to the 9 variables of the absolute position information, so the overall position information comprises 15 variables.
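A sketch of the 15-dimensional position feature follows: 9 absolute values (x, y and distance from M for each of A, B, C, ordered by decreasing distance) plus the 4 displacements of A and B relative to C and the 2 length ratios. The function and variable names are illustrative.

    import math

    def position_features(points, plate_center):
        """15-dimensional position feature for one group of three corners."""
        mx, my = plate_center
        # Order the points as A, B, C by decreasing distance from M.
        pts = sorted(points, key=lambda p: math.hypot(p[0] - mx, p[1] - my),
                     reverse=True)
        (ax, ay), (bx, by), (cx, cy) = pts

        def rel(px, py):
            return px - mx, py - my, math.hypot(px - mx, py - my)

        AMx, AMy, AM = rel(ax, ay)
        BMx, BMy, BM = rel(bx, by)
        CMx, CMy, CM = rel(cx, cy)
        # Displacements of A and B with respect to the reference point C.
        rel_pos = [CMx - AMx, CMy - AMy, CMx - BMx, CMy - BMy]
        # Scale-invariant ratios |AB|/|BC| and |AC|/|BC|.
        AB = math.hypot(ax - bx, ay - by)
        BC = math.hypot(bx - cx, by - cy)
        AC = math.hypot(ax - cx, ay - cy)
        ratios = [AB / BC, AC / BC] if BC > 0 else [0.0, 0.0]
        return [AMx, AMy, AM, BMx, BMy, BM, CMx, CMy, CM] + rel_pos + ratios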
Next the block feature information is extracted. The most critical point in matching block features is alignment, i.e. two blocks must point to approximately the same region; here the positions of the reference points are used to constrain the block position, which guarantees the alignment of block matching. The specific method is as follows: compute the maximum and minimum of the coordinates of points A, B, C in the X and Y directions, denoted Hx, Lx (maximum and minimum in X) and Hy, Ly (maximum and minimum in Y); these 4 values fully determine a rectangular frame, which is scaled uniformly to 24 × 24 pixels, after which the HOG feature is extracted (a 64-dimensional feature following the standard HOG feature extraction method).
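The block feature can be sketched as below: the bounding box of the triple (Hx/Lx, Hy/Ly) is cropped, resized to 24 × 24 and described with HOG. The patent only states that the descriptor is 64-dimensional, so the HOG cell/block/bin settings used here (one 24 × 24 block of four 12 × 12 cells with 16 orientation bins, giving 4 × 16 = 64 values) are an assumption.

    import cv2

    def block_hog_feature(gray, triple):
        """64-dimensional HOG descriptor of the rectangle spanned by a corner triple.

        gray: the 8-bit grayscale region image; triple: three (x, y) corners.
        """
        xs = [int(p[0]) for p in triple]
        ys = [int(p[1]) for p in triple]
        lx, hx, ly, hy = min(xs), max(xs), min(ys), max(ys)
        patch = cv2.resize(gray[ly:hy + 1, lx:hx + 1], (24, 24))
        # winSize, blockSize, blockStride, cellSize, nbins (assumed parameters).
        hog = cv2.HOGDescriptor((24, 24), (24, 24), (24, 24), (12, 12), 16)
        return hog.compute(patch).ravel()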
Finally, the SIFT point feature information of the three points A, B, C is extracted in order (128 dimensions per point, likewise following the standard SIFT point feature extraction method).
The position information, the block feature information and the point feature information are combined to form the complete feature information, whose total dimension is 463 (15 + 64 + 128 × 3 = 463). Because the SIFT point features account for a large proportion, they can be reduced from 384 to 256 dimensions by PCA (principal component analysis), so the total feature dimension becomes 335 (15 + 64 + 256 = 335). Repeated experiments show that reducing the dimension of the SIFT features has little effect on overall accuracy but improves training efficiency. Feature information is selected and extracted for the other regions in the same way as for the first region, only the number of corner points may differ, and the extraction method is the same for the training-set pictures and the picture to be detected. With the license plate center as the reference, the triangular structural position information, the HOG feature of the rectangular block and the SIFT point features constrained by the triangle points are fused, so each kind of feature information is fully exploited and the subsequent vehicle type recognition accuracy is effectively improved.
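A sketch of assembling the full feature vector and of the PCA reduction follows (helper names are hypothetical); the PCA that maps the stacked 384 SIFT dimensions to 256 would be fitted once on the training set.

    import numpy as np
    from sklearn.decomposition import PCA

    def assemble_feature(pos15, hog64, sift_descs, pca=None):
        """Concatenate position, block and point features for one corner triple.

        sift_descs: three 128-dim SIFT descriptors in A, B, C order (384 dims).
        With a fitted PCA (384 -> 256) the result is 15 + 64 + 256 = 335 dims,
        otherwise 15 + 64 + 384 = 463 dims.
        """
        sift = np.concatenate(sift_descs)
        if pca is not None:
            sift = pca.transform(sift.reshape(1, -1)).ravel()
        return np.concatenate([np.asarray(pos15), np.asarray(hog64), sift])

    # Fitted once on the training set's stacked SIFT features (shape (N, 384)):
    # pca = PCA(n_components=256).fit(training_sift_matrix)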
After the selection and extraction of feature information is complete, the step of training and generating the classifiers begins. In this embodiment the feature information is preferably fed into a random forest for training: a random forest can handle high-dimensional features, the introduction of randomness makes it less prone to overfitting, it can process discrete and continuous variables at the same time, and its training is fast. It should be emphasized that this embodiment discriminates per region, with 12 regions in total; the feature information of each region is fed into its own random forest, so 12 random forest classifiers are finally generated. Because there are many vehicle types, 200 decision trees are preferably used in this embodiment; the number of trees can of course be chosen between 200 and 400 according to actual requirements, but beyond 400 trees the recognition rate is hard to improve further and only the efficiency drops. The decision trees use the C4.5 algorithm to select the best attribute and a pessimistic pruning strategy to prevent overfitting. Each tree uses 25 randomly selected feature dimensions, and the maximum depth of a single tree is 30 levels.
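A hedged scikit-learn sketch of the per-region training follows. Note that RandomForestClassifier grows CART-style trees and samples max_features at every split, whereas the embodiment describes C4.5 attribute selection with pessimistic pruning and 25 features per tree, so this is an approximation of the described training, not the patented implementation.

    from sklearn.ensemble import RandomForestClassifier

    def train_region_classifiers(region_features, region_labels, n_regions=12):
        """Train one random forest per region (12 classifiers in the embodiment).

        region_features[k]: (N_k, D) array of weighted feature vectors of region k;
        region_labels[k]: the N_k vehicle-type labels for those vectors.
        """
        classifiers = []
        for k in range(n_regions):
            forest = RandomForestClassifier(n_estimators=200,   # 200 decision trees
                                            max_features=25,    # 25 random feature dims
                                            max_depth=30,       # single-tree depth limit
                                            n_jobs=-1, random_state=0)
            forest.fit(region_features[k], region_labels[k])
            classifiers.append(forest)
        return classifiers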
As described above, the feature information fed into the random forest consists of three parts, and each part has a different importance for vehicle type identification. The position information has the most stable properties, with resistance to scaling and rotation, so its importance is higher than that of the block and point feature information; in actual tests the discriminative power of the block feature information is slightly higher than that of the point feature information, so block features matter more than point features. Since the feature selection of a random forest is completely random, this embodiment weights the randomly selected features so as to bias selection toward the more important feature information without destroying the randomness of the forest: the position information, block feature information and point feature information are weighted in a fixed proportion, with the weights decreasing in that order. Regarding the weight ratio, the position information is preferably at least 2 times the block feature information and the block feature information at least 1.5 times the point feature information; more preferably, if the position information counts as 4 to 9 parts, the block feature information counts as 1.5 to 4 parts and the point features as 1 to 1.5 parts. In this embodiment the ratio of the position, block and point feature weights is 5 : 2 : 1. The 15-dimensional position feature is multiplied by 5 to give a 75-dimensional feature in which dimensions 0-4 of the new feature correspond to original dimension 0, dimensions 5-9 correspond to original dimension 1, and so on; the block feature is multiplied by 2 to give a 128-dimensional feature in which dimensions 0-1 correspond to original dimension 0, dimensions 2-3 to original dimension 1, and so on; the 256 dimensions of the point feature information are kept unchanged. When the random forest selects features, the search range therefore changes from the original 335 dimensions (0-334) to 459 dimensions (0-458) while the search algorithm still uses a uniform random number, so the randomness of feature selection is preserved, the probability of selecting more important features is raised, and the vehicle type discrimination capability is effectively improved.
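The weighting by duplication described above is straightforward to sketch; np.repeat reproduces exactly the dimension layout given (new dims 0-4 copy original dim 0 of the position feature, and so on). The helper name is illustrative.

    import numpy as np

    def weight_features(pos15, hog64, sift256, w_pos=5, w_block=2, w_point=1):
        """Bias random feature selection by duplicating dimensions (ratio 5:2:1).

        75 + 128 + 256 = 459 dimensions in total, so a uniform random draw over
        0..458 favours position and block features without breaking randomness.
        """
        return np.concatenate([np.repeat(np.asarray(pos15), w_pos),
                               np.repeat(np.asarray(hog64), w_block),
                               np.repeat(np.asarray(sift256), w_point)])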
After the 12 random forest classifiers have been generated and the feature information of the picture to be detected has been selected and extracted, each region of the picture to be detected is discriminated independently: the features extracted from each region enter the classifier of the corresponding region. Normally a random forest outputs only one result per discrimination, namely the vehicle type voted for by the most decision trees. In this embodiment local features are used, and several vehicle type classes may in theory share similar local features; if only the result with the most decision trees were taken, the other results would be ignored and useful information would be lost. The random forest therefore outputs the 3 to 5 results with the highest comprehensive matching counts, preferably 5 in this embodiment. Finally the multi-region confidence fusion is performed: based on the matching results of the regions obtained above, the confidences of the regions are fused, using the discrimination formula
[Multi-region confidence fusion formula, reproduced only as an image (BDA0000905470150000081) in the original publication.]
where Kr (ranging from 1 to 5) is the rank of the comprehensive matching degree of a vehicle type class C in region K, Ks is the number of corner points acquired in region K, n is the total number of divided regions (12 in this embodiment) and m is the number of corner points in one group (3 in this embodiment); the vehicle type class with the highest confidence ratio is the vehicle type identification result.
The denominator in the above formula is effectively a normalization constant: it ensures that, within one discrimination, the confidences of all candidate classes that appear sum to 1. The class with the highest confidence computed by the formula is taken as the vehicle type discrimination result; considering that discrimination may still be wrong, the five vehicle types with the highest confidences can also be presented to the user. The method can accurately identify detailed vehicle type information such as the vehicle manufacturer and the year of production.
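Because the fusion formula itself appears only as an image in the original publication, the per-class score below is a hypothetical stand-in built from the stated ingredients (rank Kr of 1-5, corner count Ks, n regions, m corners per group) and the normalization property that all candidate confidences sum to 1; it illustrates the flow of the fusion step rather than reproducing the patented formula.

    def fuse_regions(per_region_topk, corner_counts, m=3, topk=5):
        """Hypothetical multi-region confidence fusion.

        per_region_topk[k]: the top-k vehicle-type classes from region k's
        classifier, best match first (rank Kr = 1..topk).
        corner_counts[k]: the number of corners Ks used in region k, so that
        feature-dense regions carry more weight.
        """
        scores = {}
        for k, candidates in enumerate(per_region_topk):
            for kr, cls in enumerate(candidates, start=1):
                vote = (topk + 1 - kr) * corner_counts[k] / m   # assumed weighting
                scores[cls] = scores.get(cls, 0.0) + vote
        total = sum(scores.values())
        confidences = {cls: s / total for cls, s in scores.items()}
        best = max(confidences, key=confidences.get)
        return best, confidences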
Corresponding to the vehicle type recognition method, this embodiment further discloses a vehicle type recognition system. Its classifier generating device includes a region divider that excludes the license plate region when dividing the determined image range; the divided regions include feature-sparse regions, feature-dense regions and ordinary regions, where a feature-dense region has more selected feature information than an ordinary region and an ordinary region has more than a feature-sparse region.
The classifier generating device also includes a feature information selector. Corner points are selected in each region, with more corners selected in a feature-dense region than in an ordinary region and more in an ordinary region than in a feature-sparse region; the corners of each region are combined arbitrarily, several corners forming one group. The feature information of each region comprises position information, block feature information and point feature information: the position information comprises the relative position information among the corners of a group and the absolute position information of those corners with respect to the license plate center point, the block feature information comprises the HOG feature extracted from the rectangular frame enclosed by the corners of a group, and the point feature information comprises the SIFT point features ordered according to the positional relationship of the corners of a group. The discriminating device includes a single-region discriminator that divides the picture to be detected into regions in a manner corresponding to the region division of the training-set pictures, selects feature information in each region of the divided picture in a manner corresponding to the feature selection of the training-set pictures, and performs single-region discrimination on the picture to be detected with the classifiers.
The discriminating device includes a multi-region confidence discriminator that performs multi-region confidence discrimination according to the matching results of the regions, using the discrimination formula
[Multi-region confidence fusion formula, reproduced only as an image (BDA0000905470150000101) in the original publication.]
where Kr is the rank of the comprehensive matching degree of a vehicle type class C in region K, Ks is the number of corner points acquired in region K, n is the total number of divided regions and m is the number of corner points in one group; the vehicle type class with the highest confidence ratio is the vehicle type identification result.
Other embodiments of the present invention than the preferred embodiments described above will be apparent to those skilled in the art, and various changes and modifications can be made without departing from the spirit of the invention as defined in the appended claims.

Claims (8)

1. A vehicle type recognition method comprises a process of generating a classifier through machine training and a process of distinguishing a picture to be detected, and is characterized in that: in the process of generating a classifier, determining an image range required in a training set picture based on a license plate, dividing the determined image range into regions, selecting characteristic information in each region, respectively putting all the selected characteristic information in each region into a machine for training to generate classifiers corresponding to each region one by one, carrying out single-region discrimination on a picture to be detected through the generated classifiers, and carrying out multi-region confidence fusion judgment according to a single-region discrimination result to obtain a vehicle type recognition result;
the license plate region is excluded when the determined image range is divided into regions, and the divided regions comprise a characteristic sparse region, a characteristic dense region and a common region, wherein the characteristic information selected by the characteristic dense region is more than that of the common region, and the characteristic information selected by the common region is more than that of the characteristic sparse region;
selecting angular points in each area, wherein the number of the selected angular points in the dense feature area is greater than that in the common area, the number of the selected angular points in the common area is greater than that in the sparse feature area, arbitrarily pairing the angular points in each area, the paired angular points form a group, the feature information in each area comprises position information, block feature information and point feature information, the position information comprises relative position information between the angular points in the same group and absolute position information of the angular points relative to the center point of the license plate, the block feature information comprises hog features extracted from a rectangular frame surrounded by the angular points in the same group, and the point feature information is a sift point feature ordered according to the position relationship of the angular points in the same group.
2. The vehicle type recognition method according to claim 1, characterized in that: and carrying out region division on the picture to be detected, wherein the division mode corresponds to a region division mode of the training set picture, the picture to be detected after the region division is finished selects the characteristic information in each region, the characteristic information selection mode corresponds to a characteristic information selection mode of the training set picture, and the classifier is used for carrying out single region judgment on the picture to be detected.
3. The vehicle type recognition method according to claim 2, characterized in that: and putting the characteristic information into a random forest for training to generate a classifier, and adopting a weighted random characteristic information selection method, wherein the position information, the block characteristic information and the point characteristic information form complete characteristic information after being adjusted according to a preset weight proportion, and the weights are reduced in sequence.
4. The vehicle type recognition method according to claim 3, characterized in that: when the single-region discrimination is carried out on the picture to be measured, the first 3 to 5 results of the comprehensive matching degree of each region are output, the multi-region confidence fusion discrimination is carried out according to the matching result of each region, and the discrimination formula is
[Confidence fusion formula, reproduced only as an image (FDA0002269255920000021) in the original publication.]
where Kr is the rank of the comprehensive matching degree of a vehicle type class C in region K, Ks is the number of corner points acquired in region K, n is the total number of divided regions and m is the number of corner points in one group; the vehicle type class with the highest confidence ratio is the vehicle type identification result.
5. The vehicle type recognition method according to claim 3, characterized in that: and the part of the area divided by the pictures of the training set is larger than the corresponding part of the area divided by the pictures to be detected.
6. A vehicle type recognition system comprising a classifier generating device and a device for discriminating a picture to be detected, characterized in that:
the classifier generating device determines the image range required in the training set picture based on the license plate, divides the defined image range into regions, selects the characteristic information in each region, and respectively puts all the characteristic information in each selected region into a machine to train and generate a classifier corresponding to each region one by one;
the judging device judges a single region of the picture to be detected through the generated classifier, and obtains a vehicle type recognition result through multi-region confidence fusion judgment according to a single region judgment result;
the classifier generating device comprises a region divider, wherein the region divider excludes a license plate region when dividing a determined image range into regions, and the divided regions comprise a characteristic sparse region, a characteristic dense region and a common region, wherein the characteristic information selected by the characteristic dense region is more than that of the common region, and the characteristic information selected by the common region is more than that of the characteristic sparse region;
the classifier generating device comprises a feature information selector, wherein corner points are selected in each region, the number of the selected corner points in a dense feature region is more than that in a common region, the number of the selected corner points in the common region is more than that in a sparse feature region, the corner points in each region are randomly paired, a plurality of paired corner points form a group, the feature information in each region comprises position information, block feature information and point feature information, the position information comprises relative position information among the corner points in the same group and absolute position information of the corner points relative to the center point of a license plate, the block feature information comprises hog features extracted from a rectangular frame surrounded by the corner points in the same group, and the point feature information is a sift point feature sorted according to the position relationship of the corner points in the same group.
7. The vehicle type recognition system according to claim 6, characterized in that: the discrimination device comprises a single-region discriminator, the single-region discriminator performs region division on the picture to be measured, the division mode corresponds to the region division mode of the training set picture, the picture to be measured after the region division is completed selects the characteristic information in each region, the characteristic information selection mode corresponds to the characteristic information selection mode of the training set picture, and the classifier performs single-region discrimination on the picture to be measured.
8. The vehicle type recognition system according to claim 7, characterized in that: the judging device comprises a multi-region confidence coefficient judger for judging the multi-region confidence coefficient according to the matching result of each region, and the judging formula is
[Confidence fusion formula, reproduced only as an image (FDA0002269255920000031) in the original publication.]
where Kr is the rank of the comprehensive matching degree of a vehicle type class C in region K, Ks is the number of corner points acquired in region K, n is the total number of divided regions and m is the number of corner points in one group; the vehicle type class with the highest confidence ratio is the vehicle type identification result.
CN201610019285.2A 2016-01-13 2016-01-13 Vehicle type recognition method and system Active CN105608441B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610019285.2A CN105608441B (en) 2016-01-13 2016-01-13 Vehicle type recognition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610019285.2A CN105608441B (en) 2016-01-13 2016-01-13 Vehicle type recognition method and system

Publications (2)

Publication Number Publication Date
CN105608441A CN105608441A (en) 2016-05-25
CN105608441B 2020-04-10

Family

ID=55988367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610019285.2A Active CN105608441B (en) 2016-01-13 2016-01-13 Vehicle type recognition method and system

Country Status (1)

Country Link
CN (1) CN105608441B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107450505A (en) * 2016-05-31 2017-12-08 优信拍(北京)信息科技有限公司 A kind of rapid detection system of vehicle, method
CN106250852A (en) * 2016-08-01 2016-12-21 乐视控股(北京)有限公司 Virtual reality terminal and hand-type recognition methods and device
CN106339445B (en) * 2016-08-23 2019-06-18 东方网力科技股份有限公司 Vehicle retrieval method and device based on big data
CN106504540B (en) * 2016-12-12 2020-10-20 浙江宇视科技有限公司 Vehicle information analysis method and device
CN108319952B (en) * 2017-01-16 2021-02-02 浙江宇视科技有限公司 Vehicle feature extraction method and device
US10275687B2 (en) * 2017-02-16 2019-04-30 International Business Machines Corporation Image recognition with filtering of image classification output distribution
CN107122583A (en) * 2017-03-10 2017-09-01 深圳大学 A kind of method of syndrome differentiation and device of Syndrome in TCM element
CN107784309A (en) * 2017-11-01 2018-03-09 深圳汇生通科技股份有限公司 A kind of realization method and system to vehicle cab recognition
CN109703569B (en) * 2019-02-21 2021-07-27 百度在线网络技术(北京)有限公司 Information processing method, device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318225A (en) * 2014-11-19 2015-01-28 深圳市捷顺科技实业股份有限公司 License plate detection method and device
CN105160299A (en) * 2015-07-31 2015-12-16 华南理工大学 Human face emotion identifying method based on Bayes fusion sparse representation classifier
CN105205486A (en) * 2015-09-15 2015-12-30 浙江宇视科技有限公司 Vehicle logo recognition method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8180112B2 (en) * 2008-01-21 2012-05-15 Eastman Kodak Company Enabling persistent recognition of individuals in images

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318225A (en) * 2014-11-19 2015-01-28 深圳市捷顺科技实业股份有限公司 License plate detection method and device
CN105160299A (en) * 2015-07-31 2015-12-16 华南理工大学 Human face emotion identifying method based on Bayes fusion sparse representation classifier
CN105205486A (en) * 2015-09-15 2015-12-30 浙江宇视科技有限公司 Vehicle logo recognition method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Bo-Yuan Feng et al.; "Automatic recognition of serial numbers in bank notes"; Pattern Recognition; Aug. 2014; pp. 2621-2634 *
Bo-Yuan Feng et al.; "Extraction of Serial Numbers on Bank Notes"; International Conference on Document Analysis & Recognition; Aug. 28, 2013; pp. 698-702 *
Bo-Yuan Feng et al.; "Part-Based High Accuracy Recognition of Serial Numbers in Bank Notes"; Springer International Publishing; Oct. 2014; pp. 204-215 *

Also Published As

Publication number Publication date
CN105608441A (en) 2016-05-25

Similar Documents

Publication Publication Date Title
CN105608441B (en) Vehicle type recognition method and system
CN109829398B (en) Target detection method in video based on three-dimensional convolution network
US10025998B1 (en) Object detection using candidate object alignment
Zhao et al. Learning mid-level filters for person re-identification
CN103207898B (en) A kind of similar face method for quickly retrieving based on local sensitivity Hash
US8577151B2 (en) Method, apparatus, and program for detecting object
JP5538967B2 (en) Information processing apparatus, information processing method, and program
CN101339609B (en) Image processing apparatus and image processing method
CN111723721A (en) Three-dimensional target detection method, system and device based on RGB-D
CN104036284A (en) Adaboost algorithm based multi-scale pedestrian detection method
US8908921B2 (en) Object detection method and object detector using the method
CN103218610B (en) The forming method of dog face detector and dog face detecting method
US20140270479A1 (en) Systems and methods for parameter estimation of images
CN104464079A (en) Multi-currency-type and face value recognition method based on template feature points and topological structures of template feature points
CN111652292B (en) Similar object real-time detection method and system based on NCS and MS
CN104077594A (en) Image recognition method and device
CN112825192B (en) Object identification system and method based on machine learning
US20200311488A1 (en) Subject recognizing method and apparatus
CN108846831A (en) The steel strip surface defect classification method combined based on statistical nature and characteristics of image
US11507780B2 (en) Image analysis device, image analysis method, and image analysis program
CN104281851A (en) Extraction method and device of car logo information
CN112712066B (en) Image recognition method and device, computer equipment and storage medium
CN106650773A (en) SVM-AdaBoost algorithm-based pedestrian detection method
JP5791751B2 (en) Image recognition method and image recognition apparatus
KR101733288B1 (en) Object Detecter Generation Method Using Direction Information, Object Detection Method and Apparatus using the same

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200601

Address after: 250001 floor 17, building 3, Aosheng building, 1166 Xinluo street, Jinan City, Shandong Province

Patentee after: Jinan Boguan Intelligent Technology Co., Ltd.

Address before: Hangzhou City, Zhejiang province 310051 Binjiang District West Street Jiangling Road No. 88 building 10 South Block 1-11

Patentee before: ZHEJIANG UNIVIEW TECHNOLOGIES Co.,Ltd.

TR01 Transfer of patent right