CN105205486A - Vehicle logo recognition method and device

Info

Publication number
CN105205486A
Authority
CN
China
Prior art keywords
region
vehicle logo
vehicle
candidate region
confidence
Prior art date
Legal status: Granted
Application number
CN201510586228.8A
Other languages
Chinese (zh)
Other versions
CN105205486B (en)
Inventor
陈羽飞
陈鑫嘉
Current Assignee
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd
Priority to CN201510586228.8A
Publication of CN105205486A
Application granted
Publication of CN105205486B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/24: Aligning, centring, orientation detection or correction of the image
    • G06V 10/245: Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/25: Fusion techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a vehicle logo recognition method and device. The method includes: training a vehicle logo classifier; acquiring the license plate position to determine a vehicle logo preliminary selection area; detecting the preliminary selection area with the vehicle logo classifier to obtain first logo candidate regions; calculating a first logo confidence for each first logo candidate region; calculating a logo position confidence from the position of each first logo candidate region; screening out, according to the logo position confidence, the first logo candidate regions close to the central axis as second logo candidate regions; recognizing the second logo candidate regions to obtain second logo confidences; fusing the second logo candidate regions to generate fusion candidate regions; calculating, for each fusion candidate region, a fusion confidence from the first logo confidences, the logo position confidences, and the second logo confidences; and selecting the fusion candidate region with the highest fusion confidence as the recognized vehicle logo. The method and device improve the vehicle logo recognition rate in complex scenes.

Description

Vehicle logo recognition method and device
Technical field
The present application relates to the technical field of video surveillance, and in particular to a vehicle logo recognition method and device.
Background art
The vehicle logo is an important piece of vehicle information and a distinctive visual feature of a vehicle, and vehicle logo recognition can provide strong informational support for vehicle monitoring and tracking. However, because vehicle logos are relatively small, highly similar to one another, and affected by illumination, background, deformation, and other factors, they are difficult to recognize accurately.
Current vehicle logo recognition techniques depend on precise localization of the logo, and logo localization usually relies on image processing and pattern recognition techniques. Such localization is extremely sensitive to the texture around the logo, rotation, tilt, and so on; it is therefore difficult to localize the logo accurately in complex scenes, which in turn leads to a low logo recognition rate.
Summary of the invention
In view of this, the present application provides a vehicle logo recognition method and device.
Specifically, the present application is implemented through the following technical solutions:
The present application provides a vehicle logo recognition method, the method comprising:
training a vehicle logo classifier using a logo detection algorithm;
obtaining license plate position information in an image to be detected;
determining a vehicle logo preliminary selection area according to the license plate position information;
using the trained vehicle logo classifier to perform logo detection on the preliminary selection area to obtain several first logo candidate regions;
calculating a first logo confidence for each first logo candidate region;
calculating a corresponding logo position confidence according to the position of each first logo candidate region;
screening out, from the first logo candidate regions and according to the logo position confidence, those first logo candidate regions closer to the central axis of the preliminary selection area as second logo candidate regions;
recognizing the second logo candidate regions with a machine learning algorithm to obtain a second logo confidence for each second logo candidate region;
performing region fusion on the second logo candidate regions to generate several fusion candidate regions;
calculating, for each fusion candidate region, a fusion confidence from the first logo confidences, logo position confidences, and second logo confidences of the second logo candidate regions that generate that fusion candidate region;
selecting the fusion candidate region with the highest fusion confidence as the recognized vehicle logo.
The present application also provides a vehicle logo recognition device, the device comprising:
a training unit, configured to train a vehicle logo classifier using a logo detection algorithm;
an acquiring unit, configured to obtain license plate position information in an image to be detected;
a determining unit, configured to determine a vehicle logo preliminary selection area according to the license plate position information;
a detecting unit, configured to use the trained vehicle logo classifier to perform logo detection on the preliminary selection area to obtain several first logo candidate regions;
a first computing unit, configured to calculate a first logo confidence for each first logo candidate region;
a second computing unit, configured to calculate a corresponding logo position confidence according to the position of each first logo candidate region;
a screening unit, configured to screen out, from the first logo candidate regions and according to the logo position confidence, those first logo candidate regions closer to the central axis of the preliminary selection area as second logo candidate regions;
a recognition unit, configured to recognize the second logo candidate regions with a machine learning algorithm to obtain a second logo confidence for each second logo candidate region;
a fusion unit, configured to perform region fusion on the second logo candidate regions to generate several fusion candidate regions;
a third computing unit, configured to calculate, for each fusion candidate region, a fusion confidence from the first logo confidences, logo position confidences, and second logo confidences of the second logo candidate regions that generate that fusion candidate region;
a selection unit, configured to select the fusion candidate region with the highest fusion confidence as the recognized vehicle logo.
As can be seen from the above, the present application does not rely on precise localization of the vehicle logo; instead, based on a deep learning algorithm, it performs logo recognition by weighted fusion of multiple confidences, which improves the logo recognition rate in complex scenes.
Brief description of the drawings
Fig. 1 is a flowchart of a vehicle logo recognition method according to an exemplary embodiment of the present application;
Fig. 2 shows examples of positive and negative samples according to an exemplary embodiment of the present application;
Fig. 3 is a schematic diagram of the vehicle logo preliminary selection area according to an exemplary embodiment of the present application;
Fig. 4 is a schematic diagram of the underlying hardware structure of the equipment on which a vehicle logo recognition device resides, according to an exemplary embodiment of the present application;
Fig. 5 is a schematic structural diagram of a vehicle logo recognition device according to an exemplary embodiment of the present application.
Detailed description of the embodiments
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; on the contrary, they are merely examples of apparatuses and methods consistent with some aspects of the present application as detailed in the appended claims.
The terms used in the present application are for the purpose of describing particular embodiments only and are not intended to limit the present application. The singular forms "a", "the", and "said" used in the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the present application to describe various pieces of information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
Current vehicle logo recognition techniques mainly use image processing and pattern recognition. On the basis of license plate recognition, the relative position of the license plate and the logo is used to roughly locate the logo; an Adaboost algorithm based on Haar features is then used to detect the logo and obtain several suspected logo regions; next, an SVM (Support Vector Machine) based on HOG (Histogram of Oriented Gradients) features screens the suspected logo regions and selects the region with the highest confidence as the localized logo region; finally, logo recognition is performed on the localized region.
In the above recognition process, multiple logo classifiers usually have to be trained for different logo aspect ratios, and the differing aspect ratios force detection with different scales, step sizes, and sliding windows, which consumes a large amount of system resources. Secondly, due to the inherent shortcomings of the HOG algorithm, occluded or incomplete logos are difficult to handle, and the algorithm is quite sensitive to image noise. In addition, simply choosing the region with the highest confidence as the localized logo region is a crude decision rule that easily leads to misjudgment.
To address the above problems, the embodiments of the present application propose a vehicle logo recognition method that does not rely on precise localization of the logo; instead, based on a deep learning algorithm, it performs logo recognition by weighted fusion of multiple confidences.
Referring to Fig. 1, a flowchart of an embodiment of the vehicle logo recognition method of the present application, this embodiment describes the logo recognition process.
Step 101: train a vehicle logo classifier using a logo detection algorithm.
The embodiments of the present application use a logo classifier during the recognition process, so the classifier must be trained in advance. The training of the classifier (step 101) only needs to be completed before logo detection (step 104). Detection algorithms such as the Adaboost algorithm, the HOG algorithm, or the DPM (Deformable Parts Model) algorithm can be used to train the logo classifier.
In a preferred embodiment, the Adaboost algorithm is used to train the logo classifier, specifically as follows:
Obtain positive and negative samples of logo patterns: a positive sample is a sample that contains a logo pattern, and a negative sample is one that does not. In the embodiments of the present application, all positive samples are annotated with a 1:1 aspect ratio, as shown in Fig. 2(a). For logos whose aspect ratio is not 1:1, a distinctive part of the logo region can be cropped as the positive sample; for example, for the Audi logo, one of the four rings can be chosen as the positive sample, as shown in Fig. 2(b).
In addition, the negative samples in this embodiment are divided into two parts. One part consists of negative samples that contain only the logo preliminary selection area (the area directly above the license plate, described in detail later), hereinafter referred to as first negative samples, as shown in Fig. 2(c). These first negative samples are used because the embodiments of the present application perform logo recognition within the preliminary selection area in the subsequent recognition process; taking the preliminary selection area as the source of negative samples therefore effectively excludes non-logo regions inside the preliminary selection area and improves the detection efficiency of the logo classifier. The other part consists of negative samples that contain the whole vehicle front, as shown in Fig. 2(d), hereinafter referred to as second negative samples. Although the first negative samples have strong discriminative power, the preliminary selection area is small and its texture features are limited; as the number of logo types grows, the difference between logo regions and non-logo regions within the preliminary selection area becomes very small, and a classifier generated solely from the first negative samples performs poorly. The embodiments of the present application therefore add the second negative samples: they cover the whole vehicle front, have rich texture features, increase the difference between logo regions and non-logo regions, and benefit logo recognition, so adding the second negative samples improves the convergence of the logo classifier. Of course, neither the first nor the second negative samples may contain the logo pattern.
After the positive and negative samples required for training are obtained, the embodiments of the present application first train the front N stages of strong classifiers from the positive samples and the first negative samples, then train the rear M stages of strong classifiers from the positive samples and the second negative samples, and cascade the front N stages and the rear M stages to generate the logo classifier. As described above, the front N stages trained with the first negative samples effectively exclude non-logo regions within the preliminary selection area, and the rear M stages trained with the second negative samples improve the convergence of the logo classifier; this is not repeated here.
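The two-part training can be pictured with the following sketch, which assumes feature vectors have already been extracted for every sample and uses scikit-learn's AdaBoostClassifier as a stand-in for the boosting step; the stage counts and the omission of hard-negative mining between stages are simplifications, not the patent's exact procedure.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def train_logo_cascade(pos_feats, neg1_feats, neg2_feats, n_front=3, m_rear=3):
    """Front stages: positives vs. first negatives (preliminary-area crops).
    Rear stages: positives vs. second negatives (whole vehicle fronts)."""
    stages = []
    for negatives, n_stages in ((neg1_feats, n_front), (neg2_feats, m_rear)):
        X = np.vstack([pos_feats, negatives])
        y = np.concatenate([np.ones(len(pos_feats)), np.zeros(len(negatives))])
        for _ in range(n_stages):
            stage = AdaBoostClassifier(n_estimators=50)
            stage.fit(X, y)
            stages.append(stage)
    return stages

def cascade_accepts(stages, feat):
    """A window survives only if every cascaded stage classifies it as a logo."""
    return all(stage.predict(feat.reshape(1, -1))[0] == 1 for stage in stages)
```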
Step 102: obtain the license plate position information in the image to be detected.
An existing, relatively mature license plate recognition technique can be used to recognize the plate and obtain its position in the image to be detected; this is not repeated here.
Step 103: determine the vehicle logo preliminary selection area according to the license plate position information.
Extensive observation shows that the vehicle logo is usually located directly above the license plate, but its height varies: on some small passenger cars the logo usually sits in the area immediately above the plate, while on some large vehicles (such as trucks and medium trucks) it usually sits in a higher area above the plate. Statistics over a large number of logo positions show that the logo height above the plate usually does not exceed twice the plate width. The embodiments of the present application therefore determine, from these statistics, a logo preliminary selection area located above the license plate: its lower edge coincides with the upper edge of the plate, and its height is twice the plate width, as shown in Fig. 3. This preliminary selection area covers the vast majority of logo positions, and because it depends only on the width, height, and position of the plate, it changes little relative to the whole vehicle front under different environments and viewing angles, providing a relatively stable and small detection area for subsequent logo recognition.
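A minimal sketch of deriving the preliminary selection area from a detected plate rectangle. The height of twice the plate width and the lower edge resting on the plate's upper edge follow the text; the horizontal extent (widened by one plate width on each side here) is an assumption, since the description does not fix it.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int   # left
    y: int   # top (image y grows downward)
    w: int   # width
    h: int   # height

def logo_preliminary_area(plate: Rect, img_w: int, side_factor: float = 1.0) -> Rect:
    """Area directly above the plate: its lower edge sits on the plate's upper edge
    and its height is twice the plate width; side_factor widens it horizontally
    (an assumed choice, not fixed by the text)."""
    height = 2 * plate.w
    extra = int(side_factor * plate.w)
    x = max(0, plate.x - extra)
    y = max(0, plate.y - height)
    w = min(img_w, plate.x + plate.w + extra) - x
    h = plate.y - y
    return Rect(x, y, w, h)
```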
Step 104: use the trained vehicle logo classifier to perform logo detection on the preliminary selection area and obtain several first logo candidate regions.
With the logo classifier trained in step 101, logo detection is performed on the preliminary selection area of the image to be detected: non-logo regions are excluded and multiple suspected logo regions are obtained, hereinafter referred to as first logo candidate regions. The first logo candidate regions produced by the classifier cluster noticeably around the true logo or around logo-like structures, which lays the foundation for the subsequent region fusion.
Step 105: calculate the first logo confidence of each first logo candidate region.
After the first logo candidate regions are obtained, the confidence of each one, hereinafter referred to as the first logo confidence, is calculated as follows:
F_FstWeight = K/P   Formula (1)
where K is the number of weak classifiers in the logo classifier that accept the first logo candidate region; P is the total number of weak classifiers in the logo classifier; and F_FstWeight is the first logo confidence of the first logo candidate region.
As described above, the logo classifier consists of multiple cascaded strong classifiers, and each strong classifier consists of multiple weak classifiers generated during training. Formula (1) uses exactly these weak classifiers as the basis for computing the first logo confidence. During training, each weak classifier is assigned a threshold; for an input region to be detected, each weak classifier produces an output value, which is compared with its threshold to decide whether the region passes that weak classifier. Each time the region passes a weak classifier, a counter is incremented; after the region has been checked by all P weak classifiers, the final count K is obtained. Once the region is confirmed to pass the whole logo classifier, it becomes a first logo candidate region and its first logo confidence is calculated according to formula (1). The larger the first logo confidence, the more likely the corresponding first logo candidate region is a logo.
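A minimal sketch of formula (1), assuming the weak classifiers are exposed as (score function, threshold) pairs; how they are actually stored inside the cascade is an implementation detail the text does not fix.

```python
def first_logo_confidence(region_feat, weak_classifiers):
    """weak_classifiers: list of (score_fn, threshold) pairs; a region passes a weak
    classifier when its score reaches the threshold. Returns K / P from formula (1)."""
    passed = sum(1 for score_fn, thr in weak_classifiers if score_fn(region_feat) >= thr)
    return passed / len(weak_classifiers)
```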
Step 106: calculate the corresponding logo position confidence according to the position of each first logo candidate region.
Because the embodiments of the present application do not localize the logo region precisely, when the preset object detection algorithm performs logo detection on the preliminary selection area it may also detect textured areas whose structure resembles a logo, and a large number of first logo candidate regions may gather around such areas.
To exclude such logo-like textured areas, the embodiments of the present application further examine the logo position. Although logos differ in size and height, the horizontal position of a logo usually lies on the central axis of the vehicle, so the embodiments of the present application use this positional property to calculate the logo position confidence of each first logo candidate region.
The logo position confidence is calculated as follows:
F_LocWeight = 1 - D/W   Formula (2)
where D is the distance from the centre of the first logo candidate region to the central axis of the preliminary selection area; W is the width of the first logo candidate region; and F_LocWeight is the logo position confidence of the first logo candidate region.
One supplementary note: for vehicles whose logo is not on the central axis, formula (2) can still be used to calculate the logo position confidence, provided that when the logo classifier is trained, a distinctive characteristic region lying on the central axis within the preliminary selection area is chosen as the positive sample in place of the true logo; logo-like regions with a large positional deviation can then be excluded in the same way.
Step 107: screen out, from the first logo candidate regions and according to the logo position confidence, those first logo candidate regions closer to the central axis of the preliminary selection area as second logo candidate regions.
Specifically, the embodiments of the present application use the logo position confidence calculated by formula (2) to decide whether the current first logo candidate region can serve as a second logo candidate region for subsequent logo recognition.
In a preferred embodiment, when the calculated logo position confidence F_LocWeight lies between 0 and 1, the corresponding first logo candidate region is considered to lie near the central axis and can be recognized further as a second logo candidate region; when F_LocWeight is less than 0, the corresponding first logo candidate region is far from the central axis and is rejected. The second logo candidate regions obtained by this screening all lie near the central axis and cluster densely at various heights, which narrows the scope of subsequent logo recognition.
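A minimal sketch of formula (2) and of this screening rule, reusing the Rect type from the earlier preliminary-area sketch; taking the axis of the preliminary selection area as the vertical line through its horizontal centre is an assumption consistent with the text.

```python
def position_confidence(region: Rect, prelim: Rect) -> float:
    """Formula (2): 1 - D / W, where D is the distance from the region centre
    to the central axis of the preliminary selection area."""
    axis_x = prelim.x + prelim.w / 2.0
    centre_x = region.x + region.w / 2.0
    d = abs(centre_x - axis_x)
    return 1.0 - d / region.w

def screen_candidates(regions, prelim):
    """Keep first logo candidate regions whose position confidence is in [0, 1]."""
    return [r for r in regions if 0.0 <= position_confidence(r, prelim) <= 1.0]
```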
Step 108: recognize the second logo candidate regions with a machine learning algorithm and obtain the second logo confidence of each second logo candidate region.
Many machine learning algorithms exist, for example the CNN (Convolutional Neural Network) algorithm, the SVM algorithm, and the BOW (Bag of Words) algorithm. The embodiments of the present application take the CNN algorithm as an example to recognize the second logo candidate regions further.
When training the CNN classifier, partially occluded, rotated, and translated logo samples can be added to the training set, in addition to the samples used for the preceding detection algorithm, to strengthen the robustness of the CNN classifier in recognizing logos.
In this step, the second logo confidence F_SndWeight of each second logo candidate region is obtained through recognition by the CNN classifier; how this second logo confidence is obtained is prior art and is not repeated here.
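A minimal sketch of such a CNN recognizer in PyTorch, taking the softmax probability of the top class as F_SndWeight; the network architecture and the 32x32 grayscale input are illustrative choices, not the patent's.

```python
import torch
import torch.nn as nn

class LogoCNN(nn.Module):
    """Small CNN over 32x32 grayscale logo crops; num_classes is the number of logo types."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def second_logo_confidence(model: LogoCNN, crop: torch.Tensor):
    """Returns (predicted logo class, F_SndWeight) for one 1x32x32 crop,
    using the softmax probability of the top class as the confidence."""
    with torch.no_grad():
        probs = torch.softmax(model(crop.unsqueeze(0)), dim=1)[0]
    conf, cls = probs.max(dim=0)
    return int(cls), float(conf)
```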
Step 109: perform region fusion on the second logo candidate regions to generate several fusion candidate regions.
Because all of the above recognition algorithms carry errors, and the identified second logo candidate regions generally cluster densely around the true logo or around logo-like structures, the embodiments of the present application fuse second logo candidate regions whose positions are close, whose sizes are close, and whose machine learning results are consistent. After S rounds of fusion over the second logo candidate regions, S fusion candidate regions are generated. The fusion process (a code sketch of which follows the fusion conditions below) is as follows:
Starting a new round of fusion: select, as the initial fusion region of the new round, a second logo candidate region that has not been fused with any other candidate region and has not previously been chosen as an initial fusion region; this initial fusion region is the first intermediate fusion region of the current round.
Performing the current round of fusion: select a second logo candidate region that has not yet taken part in the current round as the region to be fused; obtain the position, width, height, and machine recognition result (the result of the machine learning recognition) of the intermediate fusion region and of the region to be fused respectively; calculate the current fusion threshold; judge, from the fusion threshold and from the positions, widths, heights, and machine recognition results of the intermediate fusion region and the region to be fused, whether they satisfy the fusion conditions; when the intermediate fusion region and the region to be fused satisfy the fusion conditions, fuse them into a new intermediate fusion region.
Judge whether every second logo candidate region has taken part in the current round of fusion; if not, return to performing the current round of fusion; if so, judge whether there are still second logo candidate regions that have not been chosen as an initial fusion region; if there are none, the currently existing intermediate fusion regions are the fusion candidate regions; if there are, return to starting a new round of fusion.
The fusion threshold is calculated as follows:
dDelta = θ × MIN(iRectWdt1, iRectWdt2)   Formula (3)
where θ is a threshold adjustment coefficient, for example θ = 0.3; iRectWdt1 is the width of the intermediate fusion region; iRectWdt2 is the width of the region to be fused; MIN(iRectWdt1, iRectWdt2) takes the smaller of the two widths; and dDelta is the fusion threshold.
Whether the intermediate fusion region and the region to be fused satisfy the fusion conditions is judged as follows:
iType1 = iType2   Formula (4)
|iRectX1 - iRectX2| ≤ dDelta   Formula (5)
|iRectY1 - iRectY2| ≤ dDelta   Formula (6)
|iRectX1 + iRectWdt1 - iRectX2 - iRectWdt2| ≤ dDelta   Formula (7)
|iRectY1 + iRectHgt1 - iRectY2 - iRectHgt2| ≤ dDelta   Formula (8)
where iType1 is the machine recognition result of the intermediate fusion region; iType2 is the machine recognition result of the region to be fused; (iRectX1, iRectY1) is the position of the intermediate fusion region; (iRectX2, iRectY2) is the position of the region to be fused; iRectWdt1 and iRectWdt2 are their widths; iRectHgt1 and iRectHgt2 are their heights; and dDelta is the fusion threshold.
As can be seen from these formulas, formula (4) requires the machine recognition results of the intermediate fusion region and the region to be fused to be identical (for example, both recognized as the 'Benz' logo), formulas (5) and (6) require their positions to be close, and formulas (7) and (8) require their sizes to be close.
Only when the intermediate fusion region and the region to be fused simultaneously have identical machine recognition results, close positions, and close sizes do they satisfy the fusion conditions.
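A minimal sketch of formulas (3) through (8) and of the round-based fusion loop, reusing the Rect type from the earlier sketch; the rule for merging two regions into a new intermediate region (here, their bounding box) is an assumption, since the text does not specify it.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    rect: Rect        # position and size of the second logo candidate region
    logo_type: int    # machine recognition result (iType)

def can_fuse(a: Candidate, b: Candidate, theta: float = 0.3) -> bool:
    """Formulas (3)-(8): same recognition result, close positions, close sizes."""
    d_delta = theta * min(a.rect.w, b.rect.w)                                    # formula (3)
    return (a.logo_type == b.logo_type                                           # formula (4)
            and abs(a.rect.x - b.rect.x) <= d_delta                              # formula (5)
            and abs(a.rect.y - b.rect.y) <= d_delta                              # formula (6)
            and abs((a.rect.x + a.rect.w) - (b.rect.x + b.rect.w)) <= d_delta    # formula (7)
            and abs((a.rect.y + a.rect.h) - (b.rect.y + b.rect.h)) <= d_delta)   # formula (8)

def merge(a: Candidate, b: Candidate) -> Candidate:
    """Assumed merge rule: bounding box of the two regions, same logo type."""
    x = min(a.rect.x, b.rect.x)
    y = min(a.rect.y, b.rect.y)
    w = max(a.rect.x + a.rect.w, b.rect.x + b.rect.w) - x
    h = max(a.rect.y + a.rect.h, b.rect.y + b.rect.h) - y
    return Candidate(Rect(x, y, w, h), a.logo_type)

def fuse_candidates(candidates):
    """Round-based fusion: each round starts from an unused candidate and absorbs
    every remaining candidate that satisfies the fusion conditions."""
    remaining = list(candidates)
    fused, members = [], []   # fused[i] is built from the candidates in members[i]
    while remaining:
        current = remaining.pop(0)          # initial fusion region of this round
        group = [current]
        rest = []
        for cand in remaining:              # one pass over the current round
            if can_fuse(current, cand):
                current = merge(current, cand)
                group.append(cand)
            else:
                rest.append(cand)
        remaining = rest
        fused.append(current)
        members.append(group)
    return fused, members
```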
Step 110: calculate, for each fusion candidate region, the fusion confidence from the first logo confidences, logo position confidences, and second logo confidences of the second logo candidate regions that generate that fusion candidate region.
After the fusion of step 109 there are still multiple fusion candidate regions, and further screening is needed to determine the final logo.
Each recognition algorithm used in the logo recognition process has its own strengths and weaknesses. For example, the CNN algorithm is highly invariant to translation, scaling, tilting, and other forms of deformation, while the Adaboost algorithm gives higher recognition confidence to accurately placed candidate regions, slightly lower confidence to candidate regions with a positional offset, and low confidence to falsely detected candidate regions. Of course, it cannot be ruled out that some candidate regions are highly similar in structure and form to samples in the training library. Therefore, to improve recognition accuracy, the embodiments of the present application combine the multiple confidences obtained above to calculate the fusion confidence of each fusion candidate region.
The fusion confidence is calculated as:
F_Weight = Σ_{i=0}^{t} F_SndWeight(i) × (α × F_FstWeight(i) + β × F_LocWeight(i))   Formula (9)
where t is the number of second logo candidate regions forming the current fusion candidate region; F_SndWeight(i) is the second logo confidence of the i-th second logo candidate region; F_FstWeight(i) is the first logo confidence of the i-th second logo candidate region; F_LocWeight(i) is the logo position confidence of the i-th second logo candidate region; α and β are the weight coefficients of the first logo confidence and the logo position confidence, with α + β = 1; and F_Weight is the fusion confidence of the current fusion candidate region.
As can be seen from formula (9), the fusion confidence of a fusion candidate region is the sum of the confidences of the second logo candidate regions that form it, where the confidence of each second logo candidate region is a weighted combination of its first logo confidence (which expresses how similar it is to a true logo) and its logo position confidence. In a preferred embodiment, α = 0.35 and β = 0.65 give good recognition results.
Step 111: select the fusion candidate region with the highest fusion confidence as the recognized vehicle logo.
The higher the fusion confidence of a fusion candidate region, the more likely it is the true vehicle logo.
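A minimal sketch of formula (9) and of the final selection in step 111, assuming the three confidences of each second logo candidate region have been collected per fusion candidate region; α = 0.35 and β = 0.65 follow the preferred embodiment above.

```python
def fusion_confidence(f_snd, f_fst, f_loc, alpha=0.35, beta=0.65):
    """Formula (9): sum over the second logo candidate regions of one fusion candidate."""
    return sum(s * (alpha * f + beta * l) for s, f, l in zip(f_snd, f_fst, f_loc))

def select_logo(fused_regions, confidences_per_region):
    """Step 111: pick the fusion candidate region with the highest fusion confidence.
    confidences_per_region[i] is a list of (f_snd, f_fst, f_loc) triples for region i."""
    scores = [fusion_confidence(*zip(*triples)) for triples in confidences_per_region]
    best = max(range(len(fused_regions)), key=scores.__getitem__)
    return fused_regions[best], scores[best]
```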
As can be seen from the above description, the present application does not need to localize the logo precisely; instead, the multiple confidences obtained through deep recognition are weighted and fused to produce the final recognition result. The present application is suitable for logo recognition in complex scenes and achieves a high recognition rate.
Corresponding to the embodiments of the above vehicle logo recognition method, the present application also provides embodiments of a vehicle logo recognition device.
The embodiments of the vehicle logo recognition device of the present application can be applied to an electronic device. The device embodiments can be implemented by software, or by hardware, or by a combination of hardware and software. Taking software implementation as an example, a device in the logical sense is formed by the processor of the equipment on which it resides reading the corresponding computer program instructions from memory and running them. From the hardware level, as shown in Fig. 4, which is a hardware structure diagram of the equipment on which the vehicle logo recognition device of the present application resides, besides the processor, network interface, and memory shown in Fig. 4, the equipment may also include other hardware according to its actual functions, which is not described further here.
Referring to Fig. 5, a schematic structural diagram of the vehicle logo recognition device in an embodiment of the present application. The device comprises a training unit 501, an acquiring unit 502, a determining unit 503, a detecting unit 504, a first computing unit 505, a second computing unit 506, a screening unit 507, a recognition unit 508, a fusion unit 509, a third computing unit 510, and a selection unit 511, wherein:
the training unit 501 is configured to train a vehicle logo classifier using a logo detection algorithm;
the acquiring unit 502 is configured to obtain license plate position information in an image to be detected;
the determining unit 503 is configured to determine a vehicle logo preliminary selection area according to the license plate position information;
the detecting unit 504 is configured to use the trained vehicle logo classifier to perform logo detection on the preliminary selection area to obtain several first logo candidate regions;
the first computing unit 505 is configured to calculate a first logo confidence for each first logo candidate region;
the second computing unit 506 is configured to calculate a corresponding logo position confidence according to the position of each first logo candidate region;
the screening unit 507 is configured to screen out, from the first logo candidate regions and according to the logo position confidence, those first logo candidate regions closer to the central axis of the preliminary selection area as second logo candidate regions;
the recognition unit 508 is configured to recognize the second logo candidate regions with a machine learning algorithm to obtain a second logo confidence for each second logo candidate region;
the fusion unit 509 is configured to perform region fusion on the second logo candidate regions to generate several fusion candidate regions;
the third computing unit 510 is configured to calculate, for each fusion candidate region, a fusion confidence from the first logo confidences, logo position confidences, and second logo confidences of the second logo candidate regions that generate that fusion candidate region;
the selection unit 511 is configured to select the fusion candidate region with the highest fusion confidence as the recognized vehicle logo.
Further, the first computing unit 505 is specifically configured to compute:
F_FstWeight = K/P
where
K is the number of weak classifiers in the logo classifier that accept the first logo candidate region;
P is the total number of weak classifiers in the logo classifier;
F_FstWeight is the first logo confidence of the first logo candidate region.
Further, the second computing unit 506 is specifically configured to compute:
F_LocWeight = 1 - D/W
where
D is the distance from the centre of the first logo candidate region to the central axis of the preliminary selection area;
W is the width of the first logo candidate region;
F_LocWeight is the logo position confidence of the first logo candidate region.
Further, the fusion unit 509 comprises:
an initial fusion region selection module, configured to select, as the initial fusion region of a new round of fusion, a second logo candidate region that has not been fused with any other candidate region and has not been chosen as an initial fusion region, the initial fusion region being the first intermediate fusion region of the current round of fusion;
a region-to-be-fused selection module, configured to select a second logo candidate region that has not taken part in the current round of fusion as the region to be fused;
an information obtaining module, configured to obtain the position, width, height, and machine recognition result of the intermediate fusion region and of the region to be fused respectively;
a threshold calculation module, configured to calculate the current fusion threshold;
a fusion judging module, configured to judge, according to the fusion threshold and to the positions, widths, heights, and machine recognition results of the intermediate fusion region and the region to be fused, whether they satisfy the fusion conditions;
a region fusion module, configured to fuse the intermediate fusion region and the region to be fused into a new intermediate fusion region when they satisfy the fusion conditions;
a result judging module, configured to judge whether every second logo candidate region has taken part in the current round of fusion; if not, the region-to-be-fused selection module is invoked; if so, it is judged whether there are still second logo candidate regions that have not been chosen as an initial fusion region; if there are none, the currently existing intermediate fusion regions are the fusion candidate regions; if there are, the initial fusion region selection module is invoked.
Further, the threshold calculation module is specifically configured to compute:
dDelta = θ × MIN(iRectWdt1, iRectWdt2)
where
θ is the threshold adjustment coefficient;
iRectWdt1 is the width of the intermediate fusion region;
iRectWdt2 is the width of the region to be fused;
MIN(iRectWdt1, iRectWdt2) takes the minimum of the width of the intermediate fusion region and the width of the region to be fused;
dDelta is the fusion threshold.
Further, the fusion conditions are:
iType1=iType2
|iRectX1-iRectX2|≤dDelta
|iRectY1-iRectY2|≤dDelta
|iRectX1+iRectWdt1-iRectX2-iRectWdt2|≤dDelta
|iRectY1+iRectHgt1-iRectY2-iRectHgt2|≤dDelta
where
iType1 is the machine recognition result of the intermediate fusion region;
iType2 is the machine recognition result of the region to be fused;
(iRectX1, iRectY1) is the position of the intermediate fusion region;
(iRectX2, iRectY2) is the position of the region to be fused;
iRectWdt1 is the width of the intermediate fusion region;
iRectWdt2 is the width of the region to be fused;
iRectHgt1 is the height of the intermediate fusion region;
iRectHgt2 is the height of the region to be fused;
dDelta is the fusion threshold.
Further, the third computing unit 510 is specifically configured to compute:
F_Weight = Σ_{i=0}^{t} F_SndWeight(i) × (α × F_FstWeight(i) + β × F_LocWeight(i))
where
t is the number of second logo candidate regions forming the current fusion candidate region;
F_SndWeight(i) is the second logo confidence of the i-th second logo candidate region;
F_FstWeight(i) is the first logo confidence of the i-th second logo candidate region;
F_LocWeight(i) is the logo position confidence of the i-th second logo candidate region;
α and β are the weight coefficients of the first logo confidence and the logo position confidence, with α + β = 1;
F_Weight is the fusion confidence of the current fusion candidate region.
The implementation of the functions and effects of the units in the above device is detailed in the implementation of the corresponding steps of the above method and is not repeated here.
Since the device embodiments substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant parts. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solution of the present application; a person of ordinary skill in the art can understand and implement this without creative effort.
The above are only preferred embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall fall within the scope of protection of the present application.

Claims (14)

1. A vehicle logo recognition method, characterized in that the method comprises:
training a vehicle logo classifier using a logo detection algorithm;
obtaining license plate position information in an image to be detected;
determining a vehicle logo preliminary selection area according to the license plate position information;
using the trained vehicle logo classifier to perform logo detection on the preliminary selection area to obtain several first logo candidate regions;
calculating a first logo confidence for each first logo candidate region;
calculating a corresponding logo position confidence according to the position of each first logo candidate region;
screening out, from the first logo candidate regions and according to the logo position confidence, those first logo candidate regions closer to the central axis of the preliminary selection area as second logo candidate regions;
recognizing the second logo candidate regions with a machine learning algorithm to obtain a second logo confidence for each second logo candidate region;
performing region fusion on the second logo candidate regions to generate several fusion candidate regions;
calculating, for each fusion candidate region, a fusion confidence from the first logo confidences, logo position confidences, and second logo confidences of the second logo candidate regions that generate that fusion candidate region;
selecting the fusion candidate region with the highest fusion confidence as the recognized vehicle logo.
2. The method of claim 1, characterized in that calculating the first logo confidence of each first logo candidate region comprises:
calculating the first logo confidence of every first logo candidate region in the same way, specifically:
F_FstWeight = K/P
where
K is the number of weak classifiers in the logo classifier that accept the first logo candidate region;
P is the total number of weak classifiers in the logo classifier;
F_FstWeight is the first logo confidence of the first logo candidate region.
3. The method of claim 1, characterized in that calculating the corresponding logo position confidence according to the position of the first logo candidate region comprises:
F_LocWeight = 1 - D/W
where
D is the distance from the centre of the first logo candidate region to the central axis of the preliminary selection area;
W is the width of the first logo candidate region;
F_LocWeight is the logo position confidence of the first logo candidate region.
4. The method of claim 1, characterized in that performing region fusion on the second logo candidate regions to generate several fusion candidate regions comprises:
starting a new round of fusion: selecting, as the initial fusion region of the new round of fusion, a second logo candidate region that has not been fused with any other candidate region and has not been chosen as an initial fusion region, the initial fusion region being the first intermediate fusion region of the current round of fusion;
performing the current round of fusion: selecting a second logo candidate region that has not taken part in the current round of fusion as the region to be fused; obtaining the position, width, height, and machine recognition result of the intermediate fusion region and of the region to be fused respectively; calculating the current fusion threshold; judging, according to the fusion threshold and to the positions, widths, heights, and machine recognition results of the intermediate fusion region and the region to be fused, whether they satisfy the fusion conditions; and, when the intermediate fusion region and the region to be fused satisfy the fusion conditions, fusing them into a new intermediate fusion region;
judging whether every second logo candidate region has taken part in the current round of fusion; if not, returning to performing the current round of fusion; if so, judging whether there are still second logo candidate regions that have not been chosen as an initial fusion region; if there are none, the currently existing intermediate fusion regions being the fusion candidate regions; if there are, returning to starting a new round of fusion.
5. The method of claim 4, characterized in that calculating the current fusion threshold comprises:
dDelta = θ × MIN(iRectWdt1, iRectWdt2)
where
θ is the threshold adjustment coefficient;
iRectWdt1 is the width of the intermediate fusion region;
iRectWdt2 is the width of the region to be fused;
MIN(iRectWdt1, iRectWdt2) takes the minimum of the width of the intermediate fusion region and the width of the region to be fused;
dDelta is the fusion threshold.
6. The method of claim 4, characterized in that the fusion conditions are:
iType1=iType2
|iRectX1-iRectX2|≤dDelta
|iRectY1-iRectY2|≤dDelta
|iRectX1+iRectWdt1-iRectX2-iRectWdt2|≤dDelta
|iRectY1+iRectHgt1-iRectY2-iRectHgt2|≤dDelta
where
iType1 is the machine recognition result of the intermediate fusion region;
iType2 is the machine recognition result of the region to be fused;
(iRectX1, iRectY1) is the position of the intermediate fusion region;
(iRectX2, iRectY2) is the position of the region to be fused;
iRectWdt1 is the width of the intermediate fusion region;
iRectWdt2 is the width of the region to be fused;
iRectHgt1 is the height of the intermediate fusion region;
iRectHgt2 is the height of the region to be fused;
dDelta is the fusion threshold.
7. The method of claim 1, characterized in that calculating, for each fusion candidate region, the fusion confidence from the first logo confidences, logo position confidences, and second logo confidences of the second logo candidate regions that generate that fusion candidate region comprises:
F_Weight = Σ_{i=0}^{t} F_SndWeight(i) × (α × F_FstWeight(i) + β × F_LocWeight(i))
where
t is the number of second logo candidate regions forming the current fusion candidate region;
F_SndWeight(i) is the second logo confidence of the i-th second logo candidate region;
F_FstWeight(i) is the first logo confidence of the i-th second logo candidate region;
F_LocWeight(i) is the logo position confidence of the i-th second logo candidate region;
α and β are the weight coefficients of the first logo confidence and the logo position confidence, with α + β = 1;
F_Weight is the fusion confidence of the current fusion candidate region.
8. A vehicle logo recognition device, characterized in that the device comprises:
a training unit, configured to train a vehicle logo classifier using a logo detection algorithm;
an acquiring unit, configured to obtain license plate position information in an image to be detected;
a determining unit, configured to determine a vehicle logo preliminary selection area according to the license plate position information;
a detecting unit, configured to use the trained vehicle logo classifier to perform logo detection on the preliminary selection area to obtain several first logo candidate regions;
a first computing unit, configured to calculate a first logo confidence for each first logo candidate region;
a second computing unit, configured to calculate a corresponding logo position confidence according to the position of each first logo candidate region;
a screening unit, configured to screen out, from the first logo candidate regions and according to the logo position confidence, those first logo candidate regions closer to the central axis of the preliminary selection area as second logo candidate regions;
a recognition unit, configured to recognize the second logo candidate regions with a machine learning algorithm to obtain a second logo confidence for each second logo candidate region;
a fusion unit, configured to perform region fusion on the second logo candidate regions to generate several fusion candidate regions;
a third computing unit, configured to calculate, for each fusion candidate region, a fusion confidence from the first logo confidences, logo position confidences, and second logo confidences of the second logo candidate regions that generate that fusion candidate region;
a selection unit, configured to select the fusion candidate region with the highest fusion confidence as the recognized vehicle logo.
9. The device of claim 8, characterized in that the first computing unit is specifically configured to compute:
F_FstWeight = K/P
where
K is the number of weak classifiers in the logo classifier that accept the first logo candidate region;
P is the total number of weak classifiers in the logo classifier;
F_FstWeight is the first logo confidence of the first logo candidate region.
10. The device of claim 8, characterized in that the second computing unit is specifically configured to compute:
F_LocWeight = 1 - D/W
where
D is the distance from the centre of the first logo candidate region to the central axis of the preliminary selection area;
W is the width of the first logo candidate region;
F_LocWeight is the logo position confidence of the first logo candidate region.
11. The device of claim 8, characterized in that the fusion unit comprises:
an initial fusion region selection module, configured to select, as the initial fusion region of a new round of fusion, a second logo candidate region that has not been fused with any other candidate region and has not been chosen as an initial fusion region, the initial fusion region being the first intermediate fusion region of the current round of fusion;
a region-to-be-fused selection module, configured to select a second logo candidate region that has not taken part in the current round of fusion as the region to be fused;
an information obtaining module, configured to obtain the position, width, height, and machine recognition result of the intermediate fusion region and of the region to be fused respectively;
a threshold calculation module, configured to calculate the current fusion threshold;
a fusion judging module, configured to judge, according to the fusion threshold and to the positions, widths, heights, and machine recognition results of the intermediate fusion region and the region to be fused, whether they satisfy the fusion conditions;
a region fusion module, configured to fuse the intermediate fusion region and the region to be fused into a new intermediate fusion region when they satisfy the fusion conditions;
a result judging module, configured to judge whether every second logo candidate region has taken part in the current round of fusion; if not, the region-to-be-fused selection module is invoked; if so, it is judged whether there are still second logo candidate regions that have not been chosen as an initial fusion region; if there are none, the currently existing intermediate fusion regions are the fusion candidate regions; if there are, the initial fusion region selection module is invoked.
12. The device of claim 11, characterized in that the threshold calculation module is specifically configured to compute:
dDelta = θ × MIN(iRectWdt1, iRectWdt2)
where
θ is the threshold adjustment coefficient;
iRectWdt1 is the width of the intermediate fusion region;
iRectWdt2 is the width of the region to be fused;
MIN(iRectWdt1, iRectWdt2) takes the minimum of the width of the intermediate fusion region and the width of the region to be fused;
dDelta is the fusion threshold.
13. The device of claim 11, characterized in that the fusion conditions are:
iType1=iType2
|iRectX1-iRectX2|≤dDelta
|iRectY1-iRectY2|≤dDelta
|iRectX1+iRectWdt1-iRectX2-iRectWdt2|≤dDelta
|iRectY1+iRectHgt1-iRectY2-iRectHgt2|≤dDelta
Wherein,
IType1 is the machine recognition result of middle integration region;
IType2 is the machine recognition result in region to be fused;
(iRectX1, iRectY1) is the position of middle integration region;
(iRectX2, iRectY2) is the position in region to be fused;
IRectWdt1 is the width of middle integration region;
IRectWdt2 is the width in region to be fused;
IRectHgt1 is the height of middle integration region;
IRectHgt2 is the height in region to be fused;
DDelta is for merging threshold value.
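These inequalities translate directly into the fusion test assumed in the sketch after claim 11; the dict keys and the default θ = 0.5 below are assumptions made only for the example.

```python
def meets_fusion_conditions(a, b, theta=0.5):
    """Sketch of claims 12-13: a is the intermediate fusion region, b the
    region to be fused; each is a dict with 'type' (machine recognition
    result), 'x', 'y' (top-left position), 'w' (width) and 'h' (height)."""
    d_delta = theta * min(a['w'], b['w'])                           # claim 12: fusion threshold
    return (a['type'] == b['type']                                  # same recognition result
            and abs(a['x'] - b['x']) <= d_delta                     # left edges close
            and abs(a['y'] - b['y']) <= d_delta                     # top edges close
            and abs(a['x'] + a['w'] - b['x'] - b['w']) <= d_delta   # right edges close
            and abs(a['y'] + a['h'] - b['y'] - b['h']) <= d_delta)  # bottom edges close
```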
14. The device as claimed in claim 8, characterized in that the third computing unit is specifically configured to compute:
F_Weight = Σ_{i=0}^{t} F_SndWeight(i) × (α × F_FstWeight(i) + β × F_LocWeight(i))
wherein:
t is the number of second vehicle logo candidate regions that form the current fused candidate region;
F_SndWeight(i) is the second vehicle logo confidence of the i-th second vehicle logo candidate region;
F_FstWeight(i) is the first vehicle logo confidence of the i-th second vehicle logo candidate region;
F_LocWeight(i) is the vehicle logo position confidence of the i-th second vehicle logo candidate region;
α and β are respectively the weight coefficients of the first vehicle logo confidence and the vehicle logo position confidence, with α + β = 1;
F_Weight is the fusion confidence of the current fused candidate region.
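Again only as an illustration, the fusion confidence of claim 14 is a weighted sum over the second candidate regions merged into a fused candidate region; the list-of-dicts input and the example weights α = β = 0.5 are assumptions.

```python
def fusion_confidence(members, alpha=0.5, beta=0.5):
    """Sketch of claim 14: sum over the merged second candidate regions of
    F_SndWeight(i) * (alpha * F_FstWeight(i) + beta * F_LocWeight(i)),
    with alpha + beta = 1.  Each member carries its first confidence ('fst'),
    position confidence ('loc') and second confidence ('snd')."""
    assert abs(alpha + beta - 1.0) < 1e-9, "weight coefficients must sum to 1"
    return sum(m['snd'] * (alpha * m['fst'] + beta * m['loc']) for m in members)

# Example: two merged regions contribute 0.9*(0.5*0.8 + 0.5*0.7) and 0.6*(0.5*0.6 + 0.5*0.9).
print(fusion_confidence([{'snd': 0.9, 'fst': 0.8, 'loc': 0.7},
                         {'snd': 0.6, 'fst': 0.6, 'loc': 0.9}]))
```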
CN201510586228.8A 2015-09-15 2015-09-15 Vehicle logo recognition method and device Active CN105205486B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510586228.8A CN105205486B (en) 2015-09-15 2015-09-15 Vehicle logo recognition method and device

Publications (2)

Publication Number Publication Date
CN105205486A true CN105205486A (en) 2015-12-30
CN105205486B CN105205486B (en) 2018-12-07

Family

ID=54953158

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510586228.8A Active CN105205486B (en) Vehicle logo recognition method and device

Country Status (1)

Country Link
CN (1) CN105205486B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040141645A1 (en) * 2001-02-02 2004-07-22 Lee Shih-Jong J. Robust method for automatic reading of skewed, rotated or partially obscured characters
CN101630361A (en) * 2008-12-30 2010-01-20 北京邮电大学 Plate number, body color and mark identification-based equipment and plate number, body color and mark identification-based method for identifying fake plate vehicles
CN102968646A (en) * 2012-10-25 2013-03-13 华中科技大学 Plate number detecting method based on machine learning
CN103077384A (en) * 2013-01-10 2013-05-01 北京万集科技股份有限公司 Method and system for positioning and recognizing vehicle logo
CN103310231A (en) * 2013-06-24 2013-09-18 武汉烽火众智数字技术有限责任公司 Auto logo locating and identifying method
CN104268596A (en) * 2014-09-25 2015-01-07 深圳市捷顺科技实业股份有限公司 License plate recognizer and license plate detection method and system thereof
CN104281851A (en) * 2014-10-28 2015-01-14 浙江宇视科技有限公司 Extraction method and device of car logo information
CN104331691A (en) * 2014-11-28 2015-02-04 深圳市捷顺科技实业股份有限公司 Vehicle logo classifier training method, vehicle logo recognition method and device

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105608441A (en) * 2016-01-13 2016-05-25 浙江宇视科技有限公司 Vehicle type identification method and system
CN105608441B (en) * 2016-01-13 2020-04-10 浙江宇视科技有限公司 Vehicle type recognition method and system
CN105957071A (en) * 2016-04-26 2016-09-21 浙江宇视科技有限公司 Lamp group positioning method and device
CN105957071B (en) * 2016-04-26 2019-04-12 浙江宇视科技有限公司 A kind of lamp group localization method and device
CN106339445A (en) * 2016-08-23 2017-01-18 东方网力科技股份有限公司 Vehicle retrieval method and device based on large data
CN106339445B (en) * 2016-08-23 2019-06-18 东方网力科技股份有限公司 Vehicle retrieval method and device based on big data
WO2018072233A1 (en) * 2016-10-20 2018-04-26 中山大学 Method and system for vehicle tag detection and recognition based on selective search algorithm
CN106503710A (en) * 2016-10-26 2017-03-15 北京邮电大学 A kind of automobile logo identification method and device
CN106529460A (en) * 2016-11-03 2017-03-22 贺江涛 Object classification identification system and identification method based on robot side
CN108256404A (en) * 2016-12-29 2018-07-06 北京旷视科技有限公司 Pedestrian detection method and device
CN108256404B (en) * 2016-12-29 2021-12-10 北京旷视科技有限公司 Pedestrian detection method and device
CN107590492B (en) * 2017-08-28 2019-11-19 浙江工业大学 A kind of vehicle-logo location and recognition methods based on convolutional neural networks
CN107590492A (en) * 2017-08-28 2018-01-16 浙江工业大学 A kind of vehicle-logo location and recognition methods based on convolutional neural networks
CN108229308A (en) * 2017-11-23 2018-06-29 北京市商汤科技开发有限公司 Recongnition of objects method, apparatus, storage medium and electronic equipment
US11182592B2 (en) 2017-11-23 2021-11-23 Beijing Sensetime Technology Development Co., Ltd. Target object recognition method and apparatus, storage medium, and electronic device
CN108171274B (en) * 2018-01-17 2019-08-09 百度在线网络技术(北京)有限公司 The method and apparatus of animal for identification
CN108171274A (en) * 2018-01-17 2018-06-15 百度在线网络技术(北京)有限公司 For identifying the method and apparatus of animal
CN109409159A (en) * 2018-10-11 2019-03-01 上海亿保健康管理有限公司 A kind of fuzzy two-dimensional code detection method and device
CN109919154A (en) * 2019-02-28 2019-06-21 北京科技大学 A kind of character intelligent identification Method and identification device
CN109919154B (en) * 2019-02-28 2020-10-13 北京科技大学 Intelligent character recognition method and device
CN109697719A (en) * 2019-03-05 2019-04-30 百度在线网络技术(北京)有限公司 A kind of image quality measure method, apparatus and computer readable storage medium
CN112069862A (en) * 2019-06-10 2020-12-11 华为技术有限公司 Target detection method and device
CN110852252A (en) * 2019-11-07 2020-02-28 厦门市美亚柏科信息股份有限公司 Vehicle weight removing method and device based on minimum distance and maximum length-width ratio
CN110852252B (en) * 2019-11-07 2022-12-02 厦门市美亚柏科信息股份有限公司 Vehicle weight-removing method and device based on minimum distance and maximum length-width ratio
CN113470347A (en) * 2021-05-20 2021-10-01 上海天壤智能科技有限公司 Congestion identification method and system combining bayonet vehicle passing record and floating vehicle GPS data
CN113470347B (en) * 2021-05-20 2022-07-26 上海天壤智能科技有限公司 Congestion identification method and system combining bayonet vehicle passing record and floating vehicle GPS data

Also Published As

Publication number Publication date
CN105205486B (en) 2018-12-07

Similar Documents

Publication Publication Date Title
CN105205486A (en) Vehicle logo recognition method and device
US10452999B2 (en) Method and a device for generating a confidence measure for an estimation derived from images captured by a camera mounted on a vehicle
Marzougui et al. A lane tracking method based on progressive probabilistic Hough transform
Zhang et al. A robust, real-time ellipse detector
Shi et al. Fast and robust vanishing point detection for unstructured road following
CN103699905B (en) Method and device for positioning license plate
Mallikarjuna et al. Traffic data collection under mixed traffic conditions using video image processing
CN109711437A (en) A kind of transformer part recognition methods based on YOLO network model
US20070058856A1 (en) Character recoginition in video data
CN102799888B (en) Eye detection method and eye detection equipment
CN106682586A (en) Method for real-time lane line detection based on vision under complex lighting conditions
CN105261020A (en) Method for detecting fast lane line
CN108492298B (en) Multispectral image change detection method based on generation countermeasure network
CN106780557A (en) A kind of motion target tracking method based on optical flow method and crucial point feature
CN110119726A (en) A kind of vehicle brand multi-angle recognition methods based on YOLOv3 model
CN109508731A (en) A kind of vehicle based on fusion feature recognition methods, system and device again
CN101159018A (en) Image characteristic points positioning method and device
Arróspide et al. HOG-like gradient-based descriptor for visual vehicle detection
CN105427333A (en) Real-time registration method of video sequence image, system and shooting terminal
KR20160128930A (en) Apparatus and method for detecting bar-type traffic sign in traffic sign recognition system
Han et al. A novel loop closure detection method with the combination of points and lines based on information entropy
Yang et al. Vehicle detection from low quality aerial LIDAR data
Yang et al. Fast and accurate vanishing point detection in complex scenes
JP2018124963A (en) Image processing device, image recognition device, image processing program, and image recognition program
CN107170004A (en) To the image matching method of matching matrix in a kind of unmanned vehicle monocular vision positioning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant