CN102930242B - Bus type identifying method - Google Patents


Info

Publication number
CN102930242B
Authority
CN
China
Prior art keywords
bus
vehicle
model
judge
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210337115.0A
Other languages
Chinese (zh)
Other versions
CN102930242A (en)
Inventor
杨华
马文琪
董莉莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201210337115.0A
Publication of CN102930242A
Application granted
Publication of CN102930242B
Legal status: Active


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a bus type recognition method in the technical field of computer video processing. The method comprises the following steps: performing mixture-of-Gaussians modeling on a surveillance video and preliminarily identifying whether a vehicle is a bus; establishing a corresponding 3D model from the position information of the vehicle and the shape characteristics of a bus; meanwhile, extracting feature line segments with the LSD (Line Segment Detector) line-segment extraction algorithm; and finally matching the vehicle's 3D model against the segment features with a combined algorithm of template matching and shortest-distance matching. Because the 3D model is built from the vehicle position information and bus shape characteristics, no vehicle-model database needs to be prepared; because the vehicle under test is first screened by bus shape characteristics, 3D modeling and matching need not be performed on every vehicle in the scene, reducing the amount of computation; and the combined matching algorithm improves accuracy while further reducing computation.

Description

A bus type recognition method
Technical field
The invention belongs to the field of computer video processing, and specifically provides a bus type recognition method, in particular one suitable for public-security checkpoint monitoring applications.
Background technology
At present, vehicle type recognition technology plays an increasingly important role in public-security monitoring, and the bus, as an important urban means of transport, is a key object of such monitoring.
Feature-point matching and 3D model matching are two common vehicle detection approaches. Feature-point matching usually needs to extract all 2D features (edge line segments, edge pixels, etc.) and match them against the 2D features of a model (see: Grimson, W., "The combinatorics of heuristic search termination for object recognition in cluttered environments," IEEE Trans. PAMI, vol. 13, no. 9, pp. 920-935, 1991), so its computational cost is large and its real-time performance poor; it cannot be applied directly in a real-time checkpoint monitoring system. The accuracy of traditional 3D matching (see: Tan, T.N., Sullivan, G.D., Baker, K.D., "Model-based localization and recognition of road vehicles," Int. J. Comput. Vis., vol. 27, no. 1, pp. 5-25, 1998) depends on the completeness of the 3D model bank, but most deployed equipment lacks a complete 3D vehicle-model database, and the method's complexity grows linearly with the number of models, so it too is difficult to apply directly on checkpoint monitors.
A search of the prior art finds Chinese invention patent publication No. 101783076A, which discloses a fast vehicle type recognition method under video monitoring. It proceeds as follows: road monitoring apparatus is installed, and vehicles are divided into cars, taxis marked with a particular color, minibuses, medium vehicles, buses and heavy trucks. Step 1: initialization, training the video monitoring apparatus. Step 2: the area of the vehicle target region and the length and width of its bounding rectangle are extracted to construct features, and vehicle targets are roughly classified as small, medium or large. Step 3: for small vehicles, dominant body-color features are extracted to identify taxis, then window-position features further distinguish minibuses from cars. Step 4: roof brightness and roof texture parameters determine whether a large vehicle is a bus. Because that patent uses roof brightness and roof texture as the principal features when judging buses, it is sensitive to illumination changes. The present invention instead uses the bus shape and LSD feature segments as principal features, so illumination changes affect the method less, giving better robustness and helping improve recognition accuracy.
Summary of the invention
The object of the invention is to overcome the above deficiencies of the prior art by proposing a new bus type recognition method. Based on 3D model matching and LSD feature-segment extraction, the method avoids preparing a vehicle-model database and avoids running 3D modeling and matching on every vehicle in the scene, reducing the amount of computation while improving accuracy.
To achieve the above object, the technical solution adopted by the invention is: first perform Gaussian-mixture modeling on the surveillance video to obtain the vehicle foreground image to be processed; then perform preliminary bus identification on each vehicle; next, establish the corresponding 3D model from the position information of the vehicle and the shape characteristics of a bus, while extracting the vehicle's feature segments with the LSD line-segment extraction algorithm; and finally match the vehicle's 3D model against the segment features with a combined algorithm of template matching and shortest-distance matching. Experiments confirm that the invention attains high accuracy in bus recognition, with real-time performance meeting checkpoint-monitor requirements.
The method of the invention specifically comprises the following steps:
The first step: perform mixture-of-Gaussians background modeling on the surveillance video to obtain the vehicle foreground image to be processed.
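The mixture-of-Gaussians background model itself is standard; as a rough illustration of the idea only (not the patent's exact formulation), the sketch below maintains a single running Gaussian per pixel and flags pixels deviating by more than k standard deviations as foreground. The class name and all parameter values are illustrative assumptions.

```python
class RunningGaussianBackground:
    # Simplified single-Gaussian stand-in for a mixture-of-Gaussians
    # background model: per-pixel running mean/variance; a pixel is
    # foreground when it deviates from the mean by more than k sigma.
    def __init__(self, alpha=0.05, k=2.5):
        self.alpha, self.k = alpha, k
        self.mean, self.var = None, None

    def apply(self, frame):
        # frame: list of rows of gray values; returns a 0/1 foreground mask.
        if self.mean is None:  # first frame initializes the model
            self.mean = [row[:] for row in frame]
            self.var = [[25.0] * len(row) for row in frame]
            return [[0] * len(row) for row in frame]
        fg = []
        for i, row in enumerate(frame):
            out = []
            for j, v in enumerate(row):
                d = v - self.mean[i][j]
                out.append(1 if d * d > (self.k ** 2) * self.var[i][j] else 0)
                # Update the running statistics toward the new observation.
                self.mean[i][j] += self.alpha * d
                self.var[i][j] += self.alpha * (d * d - self.var[i][j])
            fg.append(out)
        return fg
```

A real deployment would keep several Gaussians per pixel and match each observation against them, but the update-and-threshold loop is the same shape.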
The second step: in the world coordinate system, perform preliminary bus identification on each detected vehicle.
Concrete steps are:
1. Perform contour extraction on the vehicle foreground image obtained in the first step to obtain N connected regions Ω_k, k = 1, 2, …, N; for each connected region Ω_k, obtain the minimum-area rectangular region R_k that contains Ω_k, k = 1, 2, …, N.
2. In the world coordinate system, construct the judging point p_m from the two bottom points p_1, p_2 of rectangle R_k, specifically:

p_m.x = (p_1.x + p_2.x) / 2
p_m.y = (p_1.y + p_2.y) / 2 − l
p_m.z = p_1.z

where l is the bus body length, R_k is the minimum rectangular frame containing the whole contour of the vehicle to be identified, and (x, y, z) are the three-dimensional coordinates of a point in the world coordinate system.
3. In the image coordinate system, preliminarily judge from the positional relationship between p_m and R_k whether the vehicle is a bus: if p_m ∈ R_k, preliminarily judge it to be a bus and proceed to the third step; if p_m ∉ R_k, judge it to be a non-bus and end the judgment.
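The judging-point construction in the second step can be sketched as follows; `judging_point` and `is_preliminary_bus` are hypothetical helper names, and the default l = 10 is taken from the embodiment described later.

```python
def judging_point(p1, p2, l=10.0):
    # Build p_m from the two bottom points (x, y, z) of rectangle R_k.
    return (
        (p1[0] + p2[0]) / 2.0,      # p_m.x = (p1.x + p2.x) / 2
        (p1[1] + p2[1]) / 2.0 - l,  # p_m.y = (p1.y + p2.y) / 2 - l
        p1[2],                      # p_m.z = p1.z
    )

def is_preliminary_bus(pm_img, rect):
    # Membership test p_m in R_k, in image coordinates; rect is the
    # bounding box (x0, y0, x1, y1) of the detected vehicle.
    x, y = pm_img
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1
```

The intuition is that p_m lies one bus-length behind the midpoint of the front edge, so it falls back inside R_k only when the detected region is long enough to be a bus.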
The third step: in the world coordinate system, build the 3D model of the current vehicle from bus shape characteristics and the position information of R_k.
Concrete steps are:
1. Take the two bottom points p_1, p_2 of R_k as the two bottom-front points of the 3D model.
2. Construct the two top-front points p_3, p_4 of the bus 3D model from p_1, p_2, specifically:

p_3.x = p_1.x, p_3.y = p_1.y, p_3.z = p_1.z + h
p_4.x = p_2.x, p_4.y = p_2.y, p_4.z = p_2.z + h

where h is the bus height and (x, y, z) are the three-dimensional coordinates of a point in the world coordinate system.
3. From the projection of the bus onto the xOy plane, construct the two bottom-rear points p_5, p_6 of the 3D model, specifically:

θ = arctan((p_1.y − p_2.y) / (p_2.x − p_1.x))
p_j.x = p_i.x − l · sin θ
p_j.y = p_i.y − l · cos θ

where (i, j) ∈ {(1, 5), (2, 6)}, θ is the angle between segment (p_1, p_2) and the y-axis in the projection, l is the bus body length, and (x, y, z) are the three-dimensional coordinates of a point in the world coordinate system.
4. Construct the two top-rear points p_7, p_8 of the 3D model from p_5, p_6, specifically:

p_7.x = p_5.x, p_7.y = p_5.y, p_7.z = p_5.z + h
p_8.x = p_6.x, p_8.y = p_6.y, p_8.z = p_6.z + h

where h is the bus height and (x, y, z) are the three-dimensional coordinates of a point in the world coordinate system.
5. The 8 endpoints of the 3D model define a set of 12 line segments, whose visibility can be judged from the checkpoint camera's viewing angle and the endpoint positions: segments (p_1, p_2), (p_1, p_3), (p_2, p_4), (p_3, p_4), (p_3, p_7), (p_4, p_8) are visible to the camera; segment (p_5, p_6) is invisible; segments (p_2, p_6), (p_6, p_8) must be judged from the position of p_6, specifically: in the image coordinate system, if p_6 ∉ R(p_1, p_2, p_3, p_4) then (p_2, p_6) and (p_6, p_8) are camera-visible segments, and if p_6 ∈ R(p_1, p_2, p_3, p_4) they are invisible segments, where R(p_1, p_2, p_3, p_4) is the rectangle formed by points p_1, p_2, p_3, p_4. Segments (p_1, p_5), (p_5, p_7) are judged in the same way. The camera-visible segments are selected as the line-segment set of the 3D model. This completes the 3D model of the vehicle.
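Under the stated cuboid approximation, the eight endpoints follow directly from p_1, p_2, the bus length l and height h; the sketch below assumes points are (x, y, z) tuples in world coordinates and uses the embodiment's l = 10, h = 2.5 as defaults. The function name is an illustrative assumption.

```python
import math

def build_3d_box(p1, p2, l=10.0, h=2.5):
    # Front top corners p3, p4: raise the front bottom corners by height h.
    p3 = (p1[0], p1[1], p1[2] + h)
    p4 = (p2[0], p2[1], p2[2] + h)
    # theta: angle of segment (p1, p2) with the y-axis in the xOy projection.
    theta = math.atan2(p1[1] - p2[1], p2[0] - p1[0])
    # Rear bottom corners p5, p6: translate the front corners back by l.
    p5 = (p1[0] - l * math.sin(theta), p1[1] - l * math.cos(theta), p1[2])
    p6 = (p2[0] - l * math.sin(theta), p2[1] - l * math.cos(theta), p2[2])
    # Rear top corners p7, p8.
    p7 = (p5[0], p5[1], p5[2] + h)
    p8 = (p6[0], p6[1], p6[2] + h)
    return [p1, p2, p3, p4, p5, p6, p7, p8]
```

Projecting these eight points into the image and keeping only the camera-visible edges then yields the model's line-segment set.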
The fourth step: apply the LSD line-segment extraction algorithm to the vehicle obtained in the second step to extract its feature segments.
The fifth step: match the 3D model obtained in the third step against the feature segments obtained in the fourth step using a combination of the template matching method and the nearest-neighbor (shortest-distance) method, yielding the recognition result.
Concrete steps are:
1. Use the template matching method to compute the matching coefficient η_1. The computation involves: the overlap region between the 3D-model gray map and the feature-segment gray map; ψ, the 3D-model gray map, whose pixel values are 0 or 1; the LSD feature-segment gray map; a morphological dilation applied to the image; ΣI, the pixel sum over gray image I; and Threshold_v(I), a thresholding of gray map I, specifically:

Threshold_v(I_{x,y}) = v if I_{x,y} ≥ v, and 0 otherwise
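One plausible form of such an overlap coefficient — an assumption for illustration, not necessarily the patent's exact formula for η_1 — is the fraction of 3D-model pixels covered by the dilated feature-segment map, computed here on small binary grids:

```python
def dilate(img):
    # 3x3 morphological dilation on a binary grid (list of rows of 0/1).
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = max(
                img[a][b]
                for a in range(max(0, i - 1), min(h, i + 2))
                for b in range(max(0, j - 1), min(w, j + 2))
            )
    return out

def eta1(model, feat):
    # Overlap ratio: model pixels hit by the dilated feature map,
    # normalized by the total number of model pixels.
    d = dilate(feat)
    overlap = sum(m & f for mr, fr in zip(model, d) for m, f in zip(mr, fr))
    return overlap / sum(map(sum, model))
```

The dilation gives the thin LSD segments a tolerance band, so a feature segment lying one pixel off the model edge still counts as overlapping.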
2. Judge from η_1 whether the current vehicle is a bus, specifically: if η_1 > TH_H, where TH_H is a set threshold with TH_H > TH_L, the vehicle is judged to be a bus and the judgment ends; if η_1 > TH_L, where TH_L is a set threshold, the feature segments overlap the 3D model in large proportion, the vehicle may be a bus, and the next step applies the nearest-neighbor method for further judgment; if η_1 ≤ TH_L, the vehicle is judged to be a non-bus and the judgment ends.
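The two-threshold decision in this step can be sketched as follows, using the embodiment's TH_L = 0.7 and TH_H = 0.8 as defaults; the function name and the returned labels are illustrative assumptions.

```python
def template_match_decision(eta1, th_l=0.7, th_h=0.8):
    # Three-way decision on the template-matching coefficient eta1.
    if eta1 > th_h:
        return "bus"        # confident match, judgment ends
    if eta1 > th_l:
        return "refine"     # ambiguous: fall through to shortest-distance stage
    return "non-bus"        # poor overlap, judgment ends
```

The high threshold lets clear buses skip the more expensive shortest-distance stage, which is the stated reason the combined matcher reduces computation.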
3. Apply the shortest-distance algorithm to vehicles requiring further judgment in the previous step. Let ψ_h denote the 3D-model map with the overlap region removed, and consider likewise the feature-segment map with the overlap region removed; let I(α) denote the gray value of pixel α and p(α) the position of pixel α in the image coordinate system; initialize the counter sum = 0. The shortest-distance method then proceeds as follows:
(1) For each pixel α ∈ ψ_h with I(α) ≠ 0, establish a square search window of set side length centered at p(α).
(2) Compute the shortest distance d(α) from pixel α to the feature-segment map within the window, attained at feature pixel β_0.
(3) If d(α) ≤ d_TH, where d_TH is a set threshold, set sum = sum + 1 and I(β_0) = 0.
(4) Repeat from step (1) until all pixels in ψ_h have been traversed.
4. Compute the matching coefficient η_2, specifically:

η_2 = sum / Σψ_h
Judge from η_2 whether the current vehicle is a bus, specifically: if η_2 > TH_D, where TH_D is a set threshold, the feature segments overlap the 3D model in large proportion and the vehicle is judged to be a bus; otherwise the vehicle is judged to be a non-bus, and the judgment ends.
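The shortest-distance stage can be sketched on explicit pixel-coordinate sets. For brevity this sketch searches all remaining feature pixels rather than a fixed search window; the function name and the default d_TH = 1.5 (the embodiment's value) are illustrative assumptions.

```python
import math

def shortest_distance_match(model_pixels, feature_pixels, d_th=1.5):
    # Count model pixels whose nearest feature pixel lies within d_th.
    # Each matched feature pixel is consumed, mirroring I(beta_0) = 0.
    remaining = list(feature_pixels)
    matched = 0
    for a in model_pixels:
        if not remaining:
            break
        beta0 = min(remaining, key=lambda b: math.dist(a, b))  # nearest feature pixel
        if math.dist(a, beta0) <= d_th:
            matched += 1
            remaining.remove(beta0)
    # eta_2 = sum / |psi_h|: fraction of model pixels with a close feature pixel.
    return matched / len(model_pixels) if model_pixels else 0.0
```

Consuming each matched feature pixel prevents one stray feature segment from satisfying many model pixels at once.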
Compared with the prior art, the main contributions and features of the invention are: 1) the 3D model is built from the vehicle position information and bus shape characteristics, so no complete model database is needed; 2) bus shape characteristics are used to screen the vehicle under test, avoiding 3D modeling and matching on every vehicle in the scene and reducing computation; 3) a combined matching algorithm is adopted: the template matching method first performs a coarse match between the 3D model and the feature segments, and the nearest-neighbor method then only needs to match the remaining 3D-model pixels against the remaining feature-segment pixels, improving accuracy while reducing computation. The invention is particularly suited to bus recognition analysis in public-security checkpoint applications.
Accompanying drawing explanation
Other features, objects and advantages of the invention will become more apparent from the following detailed description of non-limiting embodiments, read with reference to the accompanying drawings:
Fig. 1 is the overall flow chart of the bus type recognition method of the invention.
Fig. 2 is a schematic diagram of the embodiment, where (a) illustrates the preliminary bus identification and (b), (c) show the projection of the bus onto the xOy plane.
Fig. 3 illustrates the establishment of the 8 endpoints of the 3D model in the embodiment.
Fig. 4 is a schematic diagram of visible segments in the embodiment, where (a) shows the bus segments visible from the camera viewpoint and (b) shows the invisible and possibly-visible segments.
Fig. 5 (a), (b) illustrate LSD feature-segment extraction.
Fig. 6 illustrates the recognition process in the embodiment: (a) the vehicle to be judged; (b) the feature-segment gray map; (c) the 3D-model gray map; (d) the superposition of the 3D-model gray map and the feature-segment gray map in the template matching method; (e) the overlap region of the 3D-model map and the feature-segment map; (f) the 3D-model map with the overlap region removed.
Fig. 7 shows the checkpoint monitoring scene of the embodiment; the camera's angle with the horizontal plane is less than 45°.
Detailed description of embodiments
The invention is described in detail below with reference to a specific embodiment. The following embodiment will help those skilled in the art to further understand the invention, but does not limit it in any form. It should be noted that persons skilled in the art can make several variations and improvements without departing from the concept of the invention, all of which fall within the scope of protection of the invention.
Embodiment
The video sequence adopted in this embodiment is a public-security checkpoint monitoring sequence.
The bus type recognition method of this embodiment comprises the following specific steps:
The first step: perform mixture-of-Gaussians background modeling on the video sequence to obtain the vehicle foreground image to be processed.
The second step: in the world coordinate system, perform preliminary bus identification on each detected vehicle.
Concrete steps are:
1. Perform contour extraction on the vehicle foreground image to obtain N connected regions Ω_k, k = 1, 2, …, N; for each connected region Ω_k, obtain the minimum-area rectangular region R_k that contains Ω_k, k = 1, 2, …, N.
2. In the world coordinate system, construct the judging point p_m from the two bottom points p_1, p_2 of rectangle R_k, specifically:

p_m.x = (p_1.x + p_2.x) / 2
p_m.y = (p_1.y + p_2.y) / 2 − l
p_m.z = p_1.z

where l is the bus body length (l = 10 in this embodiment) and (x, y, z) are the three-dimensional coordinates of a point in the world coordinate system.
3. In the image coordinate system, preliminarily judge the vehicle type from the positional relationship between p_m and R_k: if p_m ∈ R_k, preliminarily judge the vehicle to be a bus and proceed to the third step; if p_m ∉ R_k, judge it to be a non-bus and end the judgment. As shown in Fig. 2(a), the rectangular frame in the figure is R_k, and the judging point p_m is constructed from its two bottom points p_1, p_2. Because a bus body is long, after conversion to the image coordinate system the judging point satisfies p_m ∈ R_k.
The third step: in the world coordinate system, approximate the bus as a cuboid and build the 3D cuboid model of the current vehicle from the position information of R_k.
Concrete steps are:
1. Take the two bottom points p_1, p_2 of R_k as the two bottom-front points of the 3D model.
2. Approximating the bus front as rectangular, the two top-front points p_3, p_4 of the bus 3D model can be constructed from p_1, p_2, specifically:

p_3.x = p_1.x, p_3.y = p_1.y, p_3.z = p_1.z + h
p_4.x = p_2.x, p_4.y = p_2.y, p_4.z = p_2.z + h

where h is the bus height (h = 2.5 in this embodiment) and (x, y, z) are the three-dimensional coordinates of a point in the world coordinate system.
3. From the projection of the bus onto the xOy plane, as shown in Fig. 2(b), (c), the two bottom-rear points p_5, p_6 of the 3D model can be constructed by geometric relationships, specifically:

θ = arctan((p_1.y − p_2.y) / (p_2.x − p_1.x))
p_j.x = p_i.x − l · sin θ
p_j.y = p_i.y − l · cos θ

where (i, j) ∈ {(1, 5), (2, 6)}; θ is the angle between segment (p_1, p_2) and the y-axis in the projection, as shown in Fig. 2(c); l is the bus body length (l = 10 in this embodiment); and (x, y, z) are the three-dimensional coordinates of a point in the world coordinate system.
4. Construct the two top-rear points p_7, p_8 of the 3D model from p_5, p_6, specifically:

p_7.x = p_5.x, p_7.y = p_5.y, p_7.z = p_5.z + h
p_8.x = p_6.x, p_8.y = p_6.y, p_8.z = p_6.z + h

where h is the bus height (h = 2.5 in this embodiment) and (x, y, z) are the three-dimensional coordinates of a point in the world coordinate system.
Fig. 3 illustrates the construction of the above 8 endpoints.
5. The 8 endpoints of the 3D model yield a set of 12 line segments, but from the camera's perspective only some of them are visible, so the camera-visible segments must be selected as the model's segment set. The selection is as follows: as shown in Fig. 4(a), segments (p_1, p_2), (p_1, p_3), (p_2, p_4), (p_3, p_4), (p_3, p_7), (p_4, p_8) are visible to the camera; as shown in Fig. 4(b), segment (p_5, p_6) is invisible, while segments (p_2, p_6), (p_6, p_8) must be judged from the position of p_6, specifically: in the image coordinate system, if p_6 ∉ R(p_1, p_2, p_3, p_4) then (p_2, p_6) and (p_6, p_8) are camera-visible segments, and if p_6 ∈ R(p_1, p_2, p_3, p_4) they are invisible segments, where R(p_1, p_2, p_3, p_4) is the rectangle formed by points p_1, p_2, p_3, p_4. Segments (p_1, p_5), (p_5, p_7) are judged in the same way. The camera-visible segments are selected as the line-segment set of the 3D model. This completes the 3D model of the vehicle.
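The p_6 visibility test can be sketched as a point-in-rectangle check in image coordinates. Treating R(p_1, p_2, p_3, p_4) as the axis-aligned bounding box of the four projected points is an assumption for illustration, as is the function name.

```python
def p6_occluded(p6_img, rect_pts):
    # rect_pts: image coordinates of p1..p4; their axis-aligned bounding
    # box stands in for the rectangle R(p1, p2, p3, p4).
    xs = [p[0] for p in rect_pts]
    ys = [p[1] for p in rect_pts]
    x, y = p6_img
    # p6 inside the rectangle => segments (p2, p6) and (p6, p8) are invisible.
    return min(xs) <= x <= max(xs) and min(ys) <= y <= max(ys)
```

The same check, applied to the projection of p_5, decides the visibility of segments (p_1, p_5) and (p_5, p_7).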
The fourth step: apply the LSD line-segment extraction algorithm to the vehicle obtained in the second step to extract its feature segments; Fig. 5 illustrates the effect of the LSD extraction algorithm.
The fifth step: match the 3D model obtained in the third step against the feature segments obtained in the fourth step, specifically:
1. Use the template matching method to compute the matching coefficient η_1. The computation involves: the overlap region between the 3D-model gray map and the feature-segment gray map, as shown in Fig. 6(e); ψ, the 3D-model gray map, whose pixel values are 0 or 1, as shown in Fig. 6(c); the LSD feature-segment gray map, as shown in Fig. 6(b); a morphological dilation applied to the image, as shown in Fig. 6(d); ΣI, the pixel sum over gray image I; and Threshold_v(I), a thresholding of gray map I, specifically:

Threshold_v(I_{x,y}) = v if I_{x,y} ≥ v, and 0 otherwise
2. Judge from η_1 whether the current vehicle is a bus, specifically: if η_1 > TH_H, where TH_H is a set threshold, the vehicle is judged to be a bus and the judgment ends; if η_1 > TH_L, where TH_L is a set threshold, the feature segments overlap the 3D model in large proportion, the vehicle may be a bus, and the nearest-neighbor method is applied for further judgment in the next step; if η_1 ≤ TH_L, the vehicle is judged to be a non-bus and the judgment ends. In this embodiment TH_L = 0.7 and TH_H = 0.8.
3. Apply the shortest-distance algorithm to vehicles requiring further judgment in the previous step. Let ψ_h denote the 3D-model map with the overlap region removed, as shown in Fig. 6(f), and consider likewise the feature-segment map with the overlap region removed; let I(α) denote the gray value of pixel α and p(α) the position of pixel α in the image coordinate system; initialize the counter sum = 0. The shortest-distance method then proceeds as follows:
(1) For each pixel α ∈ ψ_h with I(α) ≠ 0, establish a square search window of size 3 × 3 centered at p(α).
(2) Compute the shortest distance d(α) from pixel α to the feature-segment map within the window, attained at feature pixel β_0.
(3) If d(α) ≤ d_TH, set sum = sum + 1 and I(β_0) = 0; in this embodiment the threshold d_TH = 1.5.
(4) Repeat from step (1) until all pixels in ψ_h have been traversed.
4. Compute the matching coefficient η_2, specifically:

η_2 = sum / Σψ_h
Judge from η_2 whether the current vehicle is a bus, specifically: if η_2 > TH_D, where TH_D is a set threshold, the feature segments overlap the 3D model in large proportion and the vehicle is judged to be a bus; otherwise the vehicle is judged to be a non-bus, and the judgment ends. In this embodiment TH_D = 0.9.
This embodiment was tested on video sequences containing 100 buses and 150 vehicles of other types; the test results are shown in Table 1.
Table 1. Bus type recognition results
Specific embodiments of the invention have been described above. It should be understood that the invention is not limited to the particular embodiment above; those skilled in the art can make various variations or modifications within the scope of the claims, without affecting the substance of the invention.

Claims (3)

1. A bus type recognition method, characterized by comprising the following steps:
The first step: perform mixture-of-Gaussians background modeling on the surveillance video to obtain the vehicle foreground image to be processed;
The second step: in the world coordinate system, perform preliminary bus identification on each detected vehicle; the concrete steps are:
1. perform contour extraction on the vehicle foreground image obtained in the first step to obtain N connected regions Ω_k, k = 1, 2, …, N; for each connected region Ω_k, obtain the minimum-area rectangular region R_k containing Ω_k, k = 1, 2, …, N;
2. in the world coordinate system, construct the judging point p_m from the two bottom points p_1, p_2 of rectangle R_k:
p_m.x = (p_1.x + p_2.x) / 2
p_m.y = (p_1.y + p_2.y) / 2 − l
p_m.z = p_1.z
where l is the bus body length and (x, y, z) are the three-dimensional coordinates of a point in the world coordinate system;
3. in the image coordinate system, preliminarily judge from the positional relationship between p_m and R_k whether the vehicle is a bus: if p_m ∈ R_k, preliminarily judge it to be a bus and proceed to the third step; if p_m ∉ R_k, judge it to be a non-bus and end the judgment;
The third step: in the world coordinate system, build the 3D model of the current vehicle from bus shape characteristics and the position information of R_k;
The fourth step: apply the LSD line-segment extraction algorithm to the vehicle obtained in the second step to extract its feature segments;
The fifth step: match the 3D model obtained in the third step against the feature segments obtained in the fourth step using a combination of the template matching method and the nearest-neighbor method, yielding the recognition result;
the endpoints of the 3D model in the third step are established as follows: take the two bottom points p_1, p_2 of R_k as the two bottom-front points of the 3D model; then construct the two top-front points p_3, p_4 of the bus 3D model from p_1, p_2:
p_3.x = p_1.x, p_3.y = p_1.y, p_3.z = p_1.z + h
p_4.x = p_2.x, p_4.y = p_2.y, p_4.z = p_2.z + h
where h is the bus height and (x, y, z) are the three-dimensional coordinates of a point in the world coordinate system;
then, from the projection of the bus onto the xOy plane, construct the two bottom-rear points p_5, p_6 of the 3D model:
θ = arctan((p_1.y − p_2.y) / (p_2.x − p_1.x))
p_j.x = p_i.x − l · sin θ
p_j.y = p_i.y − l · cos θ
where (i, j) ∈ {(1, 5), (2, 6)}, θ is the angle between segment (p_1, p_2) and the y-axis in the projection, l is the bus body length, and (x, y, z) are the three-dimensional coordinates of a point in the world coordinate system;
finally construct the two top-rear points p_7, p_8 of the 3D model from p_5, p_6:
p_7.x = p_5.x, p_7.y = p_5.y, p_7.z = p_5.z + h
p_8.x = p_6.x, p_8.y = p_6.y, p_8.z = p_6.z + h
where h is the bus height and (x, y, z) are the three-dimensional coordinates of a point in the world coordinate system;
the concrete steps of the fifth step are:
1. use the template matching method to compute the matching coefficient η_1, where the computation involves: the overlap region between the 3D-model gray map and the feature-segment gray map; ψ, the 3D-model gray map, whose pixel values are 0 or 1; the LSD feature-segment gray map; a morphological dilation applied to the image; ΣI, the pixel sum over gray image I; and Threshold_v(I), a thresholding of gray map I:
Threshold_v(I_{x,y}) = v if I_{x,y} ≥ v, and 0 otherwise;
2. judge from η_1 whether the current vehicle is a bus: if η_1 > TH_H, where TH_H is a set threshold with TH_H > TH_L, the vehicle is judged to be a bus and the judgment ends; if η_1 > TH_L, where TH_L is a set threshold, the feature segments overlap the 3D model in large proportion, the vehicle may be a bus, and the nearest-neighbor method is applied for further judgment in the next step; if η_1 ≤ TH_L, the vehicle is judged to be a non-bus and the judgment ends;
3. apply the shortest-distance algorithm to vehicles requiring further judgment in the previous step: let ψ_h denote the 3D-model map with the overlap region removed, and consider likewise the feature-segment map with the overlap region removed; let I(α) denote the gray value of pixel α and p(α) the position of pixel α in the image coordinate system; initialize the counter sum = 0; the shortest-distance method then proceeds as follows:
(1) for each pixel α ∈ ψ_h with I(α) ≠ 0, establish a square search window of set side length centered at p(α);
(2) compute the shortest distance d(α) from pixel α to the feature-segment map within the window, attained at feature pixel β_0;
(3) if d(α) ≤ d_TH, where d_TH is a set threshold, set sum = sum + 1 and I(β_0) = 0;
(4) repeat from step (1) until all pixels in ψ_h have been traversed;
4. compute the matching coefficient η_2 = sum / Σψ_h;
judge from η_2 whether the current vehicle is a bus: if η_2 > TH_D, where TH_D is a set threshold, the feature segments overlap the 3D model in large proportion and the vehicle is judged to be a bus; otherwise the vehicle is judged to be a non-bus, and the judgment ends.
2. The bus type recognition method according to claim 1, characterized in that in the third step, after the construction of the 3D model endpoints is completed, segments (p_1, p_2), (p_1, p_3), (p_2, p_4), (p_3, p_4), (p_3, p_7), (p_4, p_8) are taken as camera-visible segments and segment (p_5, p_6) as a camera-invisible segment, while segments (p_2, p_6), (p_6, p_8) are judged from the position of p_6: in the image coordinate system, if p_6 ∉ R(p_1, p_2, p_3, p_4) then (p_2, p_6) and (p_6, p_8) are camera-visible segments, and if p_6 ∈ R(p_1, p_2, p_3, p_4) they are invisible segments, where R(p_1, p_2, p_3, p_4) is the rectangle formed by points p_1, p_2, p_3, p_4; segments (p_1, p_5), (p_5, p_7) are judged in the same way; the camera-visible segments are selected as the line-segment set of the 3D model.
3. The bus type recognition method according to claim 1, characterized in that in the fifth step, the template matching method is first applied to the 3D-model gray map and the feature-segment gray map, the nearest-neighbor method is then applied for further judgment to vehicles meeting the identification condition, and after the template matching the nearest-neighbor method only needs to match the remaining 3D-model pixels against the remaining feature-segment pixels.
CN201210337115.0A 2012-09-12 2012-09-12 Bus type identifying method Active CN102930242B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210337115.0A CN102930242B (en) 2012-09-12 2012-09-12 Bus type identifying method


Publications (2)

Publication Number Publication Date
CN102930242A CN102930242A (en) 2013-02-13
CN102930242B true CN102930242B (en) 2015-07-08

Family

ID=47645039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210337115.0A Active CN102930242B (en) 2012-09-12 2012-09-12 Bus type identifying method

Country Status (1)

Country Link
CN (1) CN102930242B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104301735B (en) * 2014-10-31 2017-09-29 武汉大学 The overall situation coding method of urban transportation monitor video and system
CN108932857B (en) * 2017-05-27 2021-07-27 西门子(中国)有限公司 Method and device for controlling traffic signal lamp
CN110307809B (en) * 2018-03-20 2021-08-06 中移(苏州)软件技术有限公司 Vehicle type recognition method and device
CN109614950A (en) * 2018-12-25 2019-04-12 黄梅萌萌 Remotely-sensed data on-line checking mechanism, method and storage medium
CN111340888B (en) * 2019-12-23 2020-10-23 首都师范大学 Light field camera calibration method and system without white image
CN113435224A (en) * 2020-03-06 2021-09-24 华为技术有限公司 Method and device for acquiring 3D information of vehicle

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101783076A (en) * 2010-02-04 2010-07-21 西安理工大学 Method for quick vehicle type recognition under video monitoring mode
CN101976341A (en) * 2010-08-27 2011-02-16 中国科学院自动化研究所 Method for detecting position, posture, and three-dimensional profile of vehicle from traffic images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8212812B2 (en) * 2007-05-21 2012-07-03 Siemens Corporation Active shape model for vehicle modeling and re-identification

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Model-Based Object Tracking in Monocular Image Sequences of Road Traffic Scenes; D. Koller et al.; International Journal of Computer Vision; 1993-06-30; Vol. 10, No. 3; pp. 257-281 *
Research on Moving Object Detection Method Based on Gaussian Mixture Model; Wei Xiaohui et al.; Applied Optics; 2010-07-31; Vol. 31, No. 4; pp. 574-578 *

Also Published As

Publication number Publication date
CN102930242A (en) 2013-02-13

Similar Documents

Publication Publication Date Title
CN102930242B (en) Bus type identifying method
CN105825203B (en) Based on point to matching and the matched ground arrow mark detection of geometry and recognition methods
CN109190444B (en) Method for realizing video-based toll lane vehicle feature recognition system
CN102880877B (en) Target identification method based on contour features
CN111563412B (en) Rapid lane line detection method based on parameter space voting and Bessel fitting
CN103971128A (en) Traffic sign recognition method for driverless car
CN103886760B (en) Real-time vehicle detecting system based on traffic video
CN106156752B (en) A kind of model recognizing method based on inverse projection three-view diagram
CN105335710A (en) Fine vehicle model identification method based on multi-stage classifier
CN106127107A (en) The model recognizing method that multi-channel video information based on license board information and vehicle's contour merges
WO2020000253A1 (en) Traffic sign recognizing method in rain and snow
CN105243381B (en) Failure automatic identification detection system and method based on 3D information
CN102592114A (en) Method for extracting and recognizing lane line features of complex road conditions
CN108090429A (en) Face bayonet model recognizing method before a kind of classification
CN103593981B (en) A kind of model recognizing method based on video
CN103745224A (en) Image-based railway contact net bird-nest abnormal condition detection method
CN107256633B (en) Vehicle type classification method based on monocular camera three-dimensional estimation
CN102799859A (en) Method for identifying traffic sign
CN102902957A (en) Video-stream-based automatic license plate recognition method
CN103544489A (en) Device and method for locating automobile logo
CN103413145A (en) Articulation point positioning method based on depth image
CN105740844A (en) Insulator cracking fault detection method based on image identification technology
CN104102909A (en) Vehicle characteristic positioning and matching method based on multiple-visual information
CN103646241A (en) Real-time taxi identification method based on embedded system
CN105426863A (en) Method and device for detecting lane line

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant