CN107315998A - Vehicle class division method and system based on lane line - Google Patents

Vehicle class division method and system based on lane line

Info

Publication number
CN107315998A
Authority
CN
China
Prior art keywords
target
vehicle
lane line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710396082.XA
Other languages
Chinese (zh)
Other versions
CN107315998B (en)
Inventor
高尚兵
陈涛
周君
张正伟
张海艳
于永涛
姜海林
曹苏群
覃方哲
何桂炼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaiyin Institute of Technology filed Critical Huaiyin Institute of Technology
Priority to CN201710396082.XA priority Critical patent/CN107315998B/en
Publication of CN107315998A publication Critical patent/CN107315998A/en
Application granted granted Critical
Publication of CN107315998B publication Critical patent/CN107315998B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/254 Analysis of motion involving subtraction of images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/48 Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30236 Traffic on road, railway or crossing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a vehicle type classification method and system based on lane lines. The method comprises: extracting the left and right lane lines on the two sides of the road from a road surveillance video image based on the Hough transform; obtaining the foreground targets in the video image with the ViBe background subtraction algorithm and applying hole removal and smoothing to the foreground targets; computing the actual width of each target using the lane lines as a reference frame, and feeding the actual width into a vehicle classifier that uses vehicle width as the classification criterion to obtain the vehicle type of the target; and correcting errors in the classification results by excluding the results of targets whose driving trajectories are not straight lines. Compared with the prior art, the method of the invention requires no complex mathematical computation, relies only on plane geometry, and offers good real-time performance and accuracy, so it can be conveniently applied in video-based traffic research.

Description

Vehicle class division method and system based on lane line
Technical field
The present invention relates to a vehicle type classification method and system based on lane lines, and belongs to the technical field of image processing.
Background technology
With the rapid development of urban economies and the emergence of lightweight mobility tools such as electric bicycles, the number of electric bicycles in cities has grown rapidly; they are difficult for the relevant authorities to manage and create serious traffic safety hazards. As the number and coverage of urban video devices grow rapidly and the level of computer image processing improves, applying video image processing technology to automatic computer detection in intelligent urban traffic systems has become a trend. Research usually focuses on one class of objects at a time, so a reasonable classification of vehicle types is particularly important for intelligent traffic detection systems, and a variety of classification algorithms have been proposed.
At present, almost all vehicle classification algorithms are realized by training a classifier on samples. For example, Shen et al. proposed a vehicle classification algorithm based on active contours and fuzzy support vector machines, which uses an active contour model and classifies vehicles with a fuzzy support vector machine; another work proposed an effective vehicle classification algorithm based on sensor networks and a BP neural network; yet another proposed a vehicle image recognition algorithm based on Haar-like features and an improved AdaBoost classifier. Sun Rui et al., addressing the two-class supervised classification problem in vehicle recognition, proposed a classification method combining kernel K-SVD dictionary training with sparse representation; to deal with the excessive vehicle recognition error caused by low image definition in practical applications, a vehicle detection algorithm based on sparse scale-invariant feature transform features was proposed; Chen Xiangjun et al. proposed a sparse feature representation of vehicle images, implemented a support vector machine linear classifier for vehicle images based on sparse features, and constructed a surveillance vehicle classification and recognition framework based on sparse features and background modeling. Wang Hai et al., addressing the shortcomings of existing classifiers in structure type and training method, constructed a weakly supervised hierarchical deep learning vehicle recognition algorithm with a two-dimensional deep belief network as its framework. Dong Enzeng et al., targeting the large amount of data to be processed, the high dimensionality of the extracted features and the poor real-time performance of recognition in vehicle recognition, designed a vehicle detection algorithm incorporating PCA-LBP feature dimensionality reduction. When the above existing classifier-based algorithms are used for vehicle type classification, they are slow and their recognition efficiency depends on the number of training samples; when applied in video-based traffic research systems, their real-time performance is poor and their classification efficiency is low.
Summary of the invention
Object of the invention: aiming at the problems in the prior art, the present invention aims to provide a vehicle type classification method and system based on lane lines, which solves the problem that electric bicycles are difficult to distinguish from other targets and improves the accuracy and real-time performance of vehicle classification.
Technical solution: to achieve the above object, the present invention provides a vehicle type classification method based on lane lines, comprising the following steps:
(1) extracting the lane lines on the left and right sides of the road from a road surveillance video image based on the Hough transform;
(2) obtaining the foreground targets in the video image with the ViBe background subtraction algorithm and applying hole removal and smoothing to the foreground targets;
(3) computing the actual width of a target using the lane lines as a reference frame, and feeding the actual width into a vehicle classifier that uses vehicle width as the classification criterion to obtain the vehicle type of the target;
(4) correcting errors in the classification results by excluding the results of targets whose driving trajectories are not straight lines.
Preferably, the error correction of step (4) further comprises classifying the vehicle type based on the average of the actual widths of the same target obtained from multiple frames of the video.
Preferably, in step (1), the straight lines obtained by the Hough transform from the first frame of the video are divided into two groups, one with slope greater than 0 and one with slope less than 0, and the longest line is selected from each group respectively as the left lane line and the right lane line.
Preferably, in step (3) the actual width of the target is ws1 = (ws2/w1)·w2, where ws2 is the pixel width of the bounding rectangle of the target in the video image, w1 is the distance between the two points at which the horizontal line through the center of the bounding rectangle of the target intersects the left and right lane lines, and w2 is the actual width of the road.
Preferably, in step (3) the vehicle classifier is expressed as:
category(s) = { a1, w(1)min < ws < w(1)max
                a2, w(2)min < ws < w(2)max
                …
                an, w(n)min < ws < w(n)max
where ws is the actual width of the vehicle, a1, a2, …, an respectively denote the n vehicle types, and (w(1)min, w(1)max), (w(2)min, w(2)max), …, (w(n)min, w(n)max) respectively denote the width ranges of the corresponding vehicle types; the width range of each vehicle type is disjoint from the width ranges of the other types.
Preferably, in step (4) the method for judging whether the driving trajectory of a target is a straight line is: first, several points are chosen from the driving trajectory of the target as an analysis sample; then two points are randomly chosen from the sample to form a straight line, and the distances of the remaining points to this line are computed; if the points whose distance lies within the threshold range account for more than 95% of the total number of points, the driving trajectory of the target is considered a straight line, otherwise it is not.
The present invention also provides a vehicle type classification system based on lane lines that uses the above vehicle type classification method based on lane lines, comprising: a lane line detection module for extracting the lane lines on the left and right sides of the road from a road surveillance video image based on the Hough transform; a foreground processing module for obtaining the foreground targets in the video image with the ViBe background subtraction algorithm and applying hole removal and smoothing to the foreground targets; a target width computing module for computing the actual width of a target using the lane lines as a reference frame; a vehicle classifier module for classifying the vehicle type of a target according to the input actual width of the target; and an error correction module for correcting errors in the classification results and excluding the results of targets whose driving trajectories are not straight lines.
Beneficial effects: in view of the fact that electric bicycle targets are numerous in urban road surveillance video and are difficult to distinguish from other targets, the vehicle type classification method based on lane lines proposed by the present invention first detects the lane lines in the video, then extracts the foreground targets with the ViBe algorithm and processes the foreground, and then, using the lane lines as a reference, classifies targets according to their size, thereby distinguishing cars, electric bicycles and other vehicle types. Extensive experimental results show that the method can effectively classify vehicles with very low error and can be conveniently applied in various traffic incident detection systems, with good real-time performance and robustness.
Brief description of the drawings
Fig. 1 is the method flow diagram of the embodiment of the present invention.
Fig. 2 is a schematic diagram of the set of straight lines detected by the Hough transform.
Fig. 3 is a schematic diagram of the slope constraint range of the lane lines.
Fig. 4 is a schematic diagram of pixel classification in the 2-D Euclidean space in the ViBe algorithm.
Fig. 5 shows the foreground image obtained with the ViBe algorithm.
Fig. 6 shows the foreground image after hole removal.
Fig. 7 shows the foreground image after smoothing.
Fig. 8 is a schematic diagram of the width difference between vehicle types.
Fig. 9 is a schematic diagram of the vehicle and the lane line reference frame.
Fig. 10 is a schematic diagram of the pixel width of a target.
Fig. 11 shows the classification results, where (a) and (b) are classification results for cars and (c) and (d) are classification results for electric bicycles.
Fig. 12 is a schematic diagram of the structure of the system of the embodiment of the present invention.
Embodiment
The present invention is further elucidated below with reference to specific embodiments. It should be understood that these embodiments are only intended to illustrate the present invention rather than to limit its scope; after reading the present invention, modifications of various equivalent forms made by those skilled in the art all fall within the scope defined by the claims appended to this application.
The embodiment of the present invention discloses a vehicle type classification method based on lane lines. A lane line detection method is first proposed for extracting the lane lines; next, the ViBe background subtraction algorithm, currently the fastest in recognition speed for traffic tracking, is employed to obtain the foreground targets and process them; then, using the lane lines as the reference frame, the actual width of each target is computed and the type of the target is obtained by classifying on this width. The whole algorithm requires no complex mathematical computation and exploits the fact that the width of each vehicle type lies within a certain range, which improves the accuracy and real-time performance of vehicle classification. The flow chart of the algorithm is shown in Fig. 1. The detailed steps are as follows:
Step (1), lane line detection: the lane lines on the left and right sides of the road are extracted from the road surveillance video image based on the Hough transform.
The set of straight lines extracted by the Hough transform contains a large number of interfering lines, while the geometric properties of lane lines in traffic surveillance video are quite distinctive, so the present invention identifies the lane lines in the video picture using these geometric properties. The procedure includes:
(1.1) Extracting straight lines with the Hough transform
The Hough transform is one of the basic methods in image processing for recognizing geometric shapes in an image; it is very widely used and has many improved variants. It is mainly used to isolate geometric shapes with certain common features (such as straight lines and circles) from an image. The most basic Hough transform detects straight lines in a black-and-white image.
Lane lines appear as straight lines in the video picture, and the set of straight lines can be obtained from the first frame of the video with the Hough transform:
Z = {z1, z2, …, zn}  (1)
where Z is the set of straight lines extracted by the Hough transform and z1 to zn are its elements, among which the lane lines are to be extracted, as shown in Fig. 2.
(1.2) Screening the straight lines
Under normal circumstances the two lane lines are distributed on the left and right sides of the video image, and their inclination lies within a certain range.
Let the slopes of the two lane lines be k1 and k2. By analyzing a large number of image samples, the slope constraint range of the left and right lane lines is obtained: kmin < |k1| < kmax and kmin < |k2| < kmax, where kmin is the lower bound and kmax the upper bound of the range, as shown in Fig. 3.
(1.3) Extracting the lane lines
The two lane lines (the solid boundaries of the road) run through the whole video picture and are usually the two longest straight lines in it. Let the left and right lane lines be l1 and l2; the two lane lines are extracted by:
l1 = max(Z+), l2 = max(Z−)  (2)
where max is the function that returns the longest line in a set of lines, Z+ is the subset of Z whose slope is greater than 0, and Z− is the subset of Z whose slope is less than 0.
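For illustration only, and without limiting the invention, steps (1.1)-(1.3) can be sketched in Python with OpenCV as follows: line segments are detected with the probabilistic Hough transform, screened by the slope constraint, and the longest positive-slope and negative-slope lines are kept as the two lane lines. The function name, the slope bounds kmin and kmax, and the Hough parameters are illustrative assumptions, not values fixed by the present invention.

    import cv2
    import numpy as np

    def detect_lane_lines(first_frame, k_min=0.3, k_max=3.0):
        """Sketch of steps (1.1)-(1.3): Hough detection, slope screening,
        and selection of the longest line in each slope group."""
        gray = cv2.cvtColor(first_frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        # Probabilistic Hough transform; thresholds are illustrative only.
        segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                                   minLineLength=60, maxLineGap=10)
        if segments is None:
            return None, None
        best = {'pos': None, 'neg': None}      # longest line per slope sign
        best_len = {'pos': 0.0, 'neg': 0.0}
        for x1, y1, x2, y2 in segments[:, 0]:
            if x1 == x2:                       # vertical segment, slope undefined
                continue
            k = (y2 - y1) / (x2 - x1)
            if not (k_min < abs(k) < k_max):   # slope constraint kmin < |k| < kmax
                continue
            length = np.hypot(x2 - x1, y2 - y1)
            side = 'pos' if k > 0 else 'neg'
            if length > best_len[side]:
                best_len[side] = length
                best[side] = (x1, y1, x2, y2)
        # One group corresponds to the left lane line, the other to the right.
        return best['pos'], best['neg']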
Step (2), foreground processing: the foreground targets in the video image are obtained with the ViBe background subtraction algorithm, and hole removal and smoothing are applied to the foreground targets.
ViBe is a pixel-level video background modeling and foreground detection algorithm whose performance is better than several known algorithms and which consumes little hardware memory. The main distinguishing feature of the algorithm is its background model update strategy: the sample to be replaced is chosen at random, and a neighboring pixel is chosen at random for updating. When the model of pixel change cannot be determined, this random update strategy can to some extent simulate the uncertainty of pixel changes.
Definition of the background model: each pixel in the background model consists of n background samples (here n = 20). Let v(x) denote the pixel value of the image at x in a given Euclidean color space, and vi the background sample value with index i. The background model M is defined as shown in formula (3):
M(x) = {v1, v2, …, vn-1, vn}  (3)
Background initialization: initializing the background is also the process of choosing v(x). The ViBe algorithm randomly selects 20 sample values from the 8-neighborhood NG(x) of x to initialize the background model, as shown in formula (4):
M0(x) = {v0(y | y ∈ NG(x))}  (4)
where M0(x) is the initialized background model.
Pixel classification: the ViBe algorithm classifies pixels using the Euclidean distance in a 2-D space. Let SR(v(x)) denote the 2-D Euclidean region centered at v(x) with radius R (see Fig. 4). If the intersection of SR(v(x)) and M(x) satisfies a certain cardinality (H{·} denotes the size of the intersection, which must be no less than 2), v(x) is considered a background pixel, as shown in formula (5):
H{SR(v(x)) ∩ {v1, v2, …, vn}}  (5)
Background model update strategy: if pixel p(x) is a background pixel, a value is randomly chosen from M(x) and replaced with p(x). This uniformly distributed random selection ensures that the lifetime of each sample in the sample set decays exponentially, and prevents pixels from remaining in the background model for a long time and degrading the accuracy of the model. The resulting foreground image is shown in Fig. 5.
Information propagation mechanism: to maintain consistency in the pixel neighborhood, while updating the samples of v(x) the ViBe algorithm uses the same update method to update the samples of the pixels in the neighborhood NG(x); for example, v(x) is used to replace a sample in the sample model M(x) and, at the same time, to update a sample of some pixel in NG(v(x)).
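As a rough, non-authoritative illustration of the ViBe model just described (n = 20 samples per pixel, a distance radius R, at least 2 matches, random sample replacement and a subsampling factor), the following NumPy sketch implements a simplified grayscale version; the neighborhood sampling at initialization is approximated by random shifts of the first frame, the neighbor-propagation step is omitted, and all parameter values are assumptions for illustration.

    import numpy as np

    class ViBeSketch:
        """Simplified per-pixel ViBe background model for grayscale frames."""
        def __init__(self, n_samples=20, radius=20, min_matches=2, subsample=16):
            self.n, self.R = n_samples, radius
            self.min_matches, self.phi = min_matches, subsample
            self.samples = None

        def initialize(self, frame):
            # Fill each pixel's sample set from randomly shifted copies of the
            # first frame (an approximation of sampling the 8-neighborhood).
            h, w = frame.shape
            pad = np.pad(frame, 1, mode='edge')
            self.samples = np.empty((self.n, h, w), dtype=np.uint8)
            for i in range(self.n):
                dy, dx = np.random.randint(-1, 2, size=2)
                self.samples[i] = pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]

        def apply(self, frame):
            if self.samples is None:
                self.initialize(frame)
            # A pixel is background if at least min_matches samples lie within R.
            diff = np.abs(self.samples.astype(np.int16) - frame.astype(np.int16))
            matches = (diff < self.R).sum(axis=0)
            foreground = (matches < self.min_matches).astype(np.uint8) * 255
            # Conservative update: a background pixel replaces one random sample
            # with probability 1/phi, so sample lifetimes decay exponentially.
            update = (foreground == 0) & (np.random.randint(0, self.phi, frame.shape) == 0)
            self.samples[np.random.randint(self.n)][update] = frame[update]
            return foreground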
The foreground image obtained by the ViBe algorithm is irregular, so hole removal and smoothing must be applied to it. Hole removal traverses the connected components of the foreground image and inverts the pixel values of all pixels belonging to connected components of small area; smoothing applies neighborhood-average smoothing to the foreground image followed by binarization. The foreground images after hole removal and after smoothing are shown in Fig. 6 and Fig. 7 respectively.
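The hole removal and smoothing just described can be sketched with OpenCV as below; this interpretation suppresses small foreground components (filling small holes inside blobs can be done symmetrically on the inverted mask), and the area threshold, averaging kernel size and binarization threshold are assumed values, not ones specified in the present invention.

    import cv2
    import numpy as np

    def clean_foreground(fg_mask, min_area=200, kernel_size=5):
        """Remove small connected components and smooth the binary mask."""
        # Hole removal: traverse the connected components of the foreground and
        # invert (suppress) every component whose area is below the threshold.
        num, labels, stats, _ = cv2.connectedComponentsWithStats(fg_mask, connectivity=8)
        cleaned = np.zeros_like(fg_mask)
        for label in range(1, num):                    # label 0 is the background
            if stats[label, cv2.CC_STAT_AREA] >= min_area:
                cleaned[labels == label] = 255
        # Smoothing: neighborhood averaging followed by binarization.
        blurred = cv2.blur(cleaned, (kernel_size, kernel_size))
        _, smoothed = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)
        return smoothed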
As can be seen from the core description of the ViBe algorithm, the definition, initialization and updating of the background model and the classification of pixels are all fairly simple, and there is no complex computation in the whole algorithm, which guarantees its real-time performance.
Step (3), target classification: the vehicle type is determined based on the actual width of the target.
In video-based traffic event research, one often wants to study a single class of targets, and a reasonable classification algorithm makes such research much easier. Traditional classification algorithms train an SVM classifier on samples; this approach is inefficient, time-consuming, poor in real-time performance and dependent on sample size. In traffic research, however, the main difference between one class of vehicle type and the others is the size of the vehicle, so the present invention classifies targets by their size and thereby successfully classifies the detected targets.
● Building the vehicle classifier
Different vehicle types differ in width because of their different production specifications; the width of an electric bicycle is even less than half that of a car (as shown in Fig. 8, the width of the car entity e1 at the same horizontal position is 3.5 times that of the electric bicycle entity e2). According to the production dimension standards of various vehicles, the width of each vehicle type is limited to a certain range, for example the car production specifications in Table 1, so this property can be used to build a vehicle classifier.
Table 1. Car dimension specifications
According to the data, the width range of a car can be determined to lie within 1.5-2.0. Suppose there are n vehicle types and the width range of every type is disjoint from the width ranges of the other types. Let the n vehicle types be a1, a2, …, an, let the width range of a1 be (w(1)min, w(1)max), the width range of a2 be (w(2)min, w(2)max), …, and the width range of an be (w(n)min, w(n)max). The vehicle classifier category(s) for the n types is built according to the following formula:
category(s) = { a1, w(1)min < ws < w(1)max
                a2, w(2)min < ws < w(2)max
                …
                an, w(n)min < ws < w(n)max  (6)
where ws is the width of vehicle s.
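A minimal sketch of the classifier category(s) of formula (6) is given below; apart from the car range of 1.5-2.0 quoted above, the example types and width ranges (assumed to be in metres) are illustrative assumptions only.

    # Width ranges per vehicle type (assumed to be in metres); the ranges must be
    # pairwise disjoint. Only the car range 1.5-2.0 comes from the description;
    # the other ranges are illustrative assumptions.
    WIDTH_RANGES = {
        'electric_bicycle': (0.3, 0.9),
        'car':              (1.5, 2.0),
        'large_vehicle':    (2.2, 2.8),
    }

    def category(actual_width):
        """Return the vehicle type a_i such that w(i)min < ws < w(i)max."""
        for vehicle_type, (w_min, w_max) in WIDTH_RANGES.items():
            if w_min < actual_width < w_max:
                return vehicle_type
        return None  # width outside every range: not a classified vehicle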
● Selecting the reference frame
In traffic video, all targets in the picture are subject to perspective (nearer objects appear larger): an object of the same size occupies a small scale in the picture when it is far from the camera and a large scale when it is close to the camera. The size of an object in the video is therefore not fixed, which is very unfavorable for studying the object's behavior, so a reference frame must be established to conveniently obtain the actual width of the target under study.
Choosing a suitable reference frame facilitates the study of the targets; something whose actual size does not change and whose track in the picture is static is chosen as the reference frame.
Let the pixel width of the road surface at the position of target s be w1 and the actual width of the road be w2, as shown in Fig. 9.
● Computing the actual width
The actual width of a target cannot be obtained directly from the video picture; it is derived indirectly using the reference frame.
Let the actual width of target s be ws1 and the pixel width of the bounding rectangle of s in the video be ws2, as shown in Fig. 10.
Let p = ws2/w1, where p is the ratio of the target width to the road width and w1 is the distance between the two points at which the horizontal line through the center of the bounding rectangle of target s intersects the left and right lane lines. The product of this ratio and the actual road width is the actual width of the target, computed as:
ws1=pw2=(ws2/w1)w2 (7)
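The computation of formula (7) can be sketched as follows: the horizontal line through the center of the target's bounding rectangle is intersected with the two lane lines, the distance w1 between the two intersections is the pixel width of the road at that position, and the actual width follows as ws1 = (ws2/w1)·w2. The lane lines are assumed to be given as endpoint 4-tuples, as returned by the lane detection sketch above, and the example numbers are made up for illustration.

    def x_at_y(line, y):
        """x-coordinate at height y of the (non-horizontal) segment (x1, y1, x2, y2)."""
        x1, y1, x2, y2 = line
        t = (y - y1) / (y2 - y1)
        return x1 + t * (x2 - x1)

    def actual_width(bbox, left_lane, right_lane, road_width):
        """Formula (7): ws1 = (ws2 / w1) * w2 for a bounding box (x, y, w, h)."""
        x, y, w_s2, h = bbox
        cy = y + h / 2.0                       # height of the box center
        # w1: distance between the lane-line intersections on that horizontal line
        w_1 = abs(x_at_y(right_lane, cy) - x_at_y(left_lane, cy))
        return (w_s2 / w_1) * road_width       # ws1

    # Example use (all numbers are illustrative):
    # left_lane, right_lane = (100, 400, 250, 100), (500, 400, 350, 100)
    # actual_width((260, 220, 80, 60), left_lane, right_lane, road_width=7.0)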
● Type classification
The obtained width is fed into the classifier for comparison, and the vehicle type is obtained:
a = category(s)  (8)
where a is the type of target s.
It can be seen that the whole classification process requires no complex mathematical computation, has good real-time performance and a fast computation speed, and does not place a heavy burden on the video system.
Step (4), error correction: the target classification results are corrected by performing statistics and analysis over multiple frames.
Under normal circumstances a road does not contain only vehicles; there are also non-vehicle moving objects such as pedestrians, debris and animals, and the result obtained by classifying a single frame is prone to error, so a reasonable decision method is needed to eliminate these errors. A target entering the picture should be tracked throughout: the distance between the target in two consecutive frames is used for the judgment, and if the distance between the centers of the two targets in consecutive frames is less than a threshold dt, they are considered to be the same target. The error correction of the present invention mainly involves the following two aspects:
(1) Trajectory analysis. The driving trajectory of a normally travelling vehicle should be close to a straight line. From the driving trajectory of the vehicle (the set of the center points of the bounding rectangles of the same target), np points are chosen as the analysis sample. Let the sample set be:
P = {p1, p2, …, pn}  (9)
1. Two points are randomly chosen from P to form a straight line lp and are removed from P.
2. An unmarked point p is chosen from P, and the distance from p to lp is computed to judge whether it lies within the threshold range.
3. The operation in step 2 is performed for every point in P, and the number c of points whose distance to lp lies within the threshold range is counted; if c/np > 0.95, the trajectory is considered a straight line (a sketch of this test is given after aspect (2) below).
Targets whose trajectories are not straight lines are considered to be non-vehicle moving objects; the corresponding error is corrected by removing them from the classification list.
(2) Multi-frame statistics. The vehicle widths obtained over multiple frames are averaged to avoid the situation where the vehicle foreground extracted in a particular frame is inaccurate. Because the foreground obtained by background subtraction is not absolutely accurate, the foreground target extracted in a certain frame may differ greatly in size from the actual target; averaging largely avoids this situation (see the sketches below).
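For illustration only, the straight-line test of steps 1-3 above can be sketched as follows; the pixel distance threshold is an assumed value, the 95% ratio follows the description, and the trajectory points are assumed to be the tracked center points of one target.

    import math
    import random

    def is_straight_trajectory(points, dist_threshold=5.0, ratio=0.95):
        """Steps 1-3: fit a line through two random trajectory points and check
        that at least `ratio` of the points lie within `dist_threshold` of it."""
        if len(points) < 3:
            return False
        (x1, y1), (x2, y2) = random.sample(points, 2)
        # Line through the two chosen points in ax + by + c = 0 form.
        a, b = y2 - y1, x1 - x2
        c = -(a * x1 + b * y1)
        norm = math.hypot(a, b)
        if norm == 0:                 # the two sampled points coincide
            return False
        close = sum(1 for (px, py) in points
                    if abs(a * px + b * py + c) / norm <= dist_threshold)
        return close / len(points) > ratio

Likewise, a minimal sketch of the multi-frame statistics: the per-frame actual widths collected for one tracked target are averaged before the width classifier (the category sketch above) is applied, so that a single badly segmented frame does not decide the type; the frame-to-frame association of detections (centers closer than the threshold dt) is assumed to be available.

    def classify_tracked_target(width_samples):
        """Average the per-frame actual widths of one target, then classify once."""
        if not width_samples:
            return None
        mean_width = sum(width_samples) / len(width_samples)
        return category(mean_width)   # width-range classifier sketched earlier

    # e.g. classify_tracked_target([1.72, 1.80, 1.65, 1.78]) -> 'car'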
As shown in Fig. 12, the vehicle type classification system based on lane lines disclosed in the embodiment of the present invention comprises: a lane line detection module for extracting the lane lines on the left and right sides of the road from a road surveillance video image based on the Hough transform; a foreground processing module for obtaining the foreground targets in the video image with the ViBe background subtraction algorithm and applying hole removal and smoothing to the foreground targets; a target width computing module for computing the actual width of a target using the lane lines as a reference frame; a vehicle classifier module for classifying the vehicle type of a target according to the input actual width of the target; and an error correction module for correcting errors in the classification results, classifying based on the average target width, and excluding the results of targets whose driving trajectories are not straight lines.
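Purely as a non-authoritative illustration of how the modules cooperate, the sketches above can be wired into a single loop as follows; OpenCV 4.x is assumed, the per-frame contour index stands in for a real track identifier (the present invention associates boxes whose centers are closer than dt), and the straight-line test is omitted for brevity.

    import cv2

    def classify_video(video_path, road_width):
        """End-to-end sketch combining the module sketches above."""
        cap = cv2.VideoCapture(video_path)
        ok, frame = cap.read()
        if not ok:
            return {}
        left_lane, right_lane = detect_lane_lines(frame)   # lane line detection module
        if left_lane is None or right_lane is None:
            return {}
        vibe = ViBeSketch()
        widths = {}                                        # track id -> per-frame widths
        while ok:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            mask = clean_foreground(vibe.apply(gray))      # foreground processing module
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            for i, contour in enumerate(contours):
                bbox = cv2.boundingRect(contour)           # (x, y, w, h)
                w = actual_width(bbox, left_lane, right_lane, road_width)
                widths.setdefault(i, []).append(w)         # i stands in for a track id
            ok, frame = cap.read()
        # Error correction module: average each target's widths, then classify.
        return {tid: classify_tracked_target(ws) for tid, ws in widths.items()}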
The validity and advantages of the method of the present invention are verified by the following experiments.
The experimental environment of the present invention is a PC with an Intel(R) Core(TM) i5-2410M CPU and 8 GB of memory. The experimental data are urban traffic road videos downloaded from the network or shot by the inventors, with an image size of 600 × 400. Videos in which electric bicycles, cars and large vehicles are present on the road were used, and tests were performed under a normal lighting environment, a low-light environment, and an environment with complex road targets.
The present invention evaluates the test results with two parameters, classification accuracy and recognition speed, and compares the algorithm of the present invention with the sparse-feature-based support vector machine linear classifier (SVM) for vehicle images of Chen Xiangjun et al. The test results are shown in the following tables:
Table 2. Classification results of the algorithm of the present invention (normal environment)
Table 3. SVM classification results (normal environment)
Table 4. Classification results of the algorithm of the present invention (low-light environment)
Table 5. SVM classification results (low-light environment)
Table 6. Classification results of the algorithm of the present invention (complex environment)
Table 7. SVM classification results (complex environment)
As can be seen from Tables 2-7, because the present invention obtains the vehicle width from the lane lines and classifies types by actual width, it requires no complex mathematical operations, whereas the SVM classifier depends on sample size; the efficiency under this experimental environment is therefore higher than that of the SVM classifier, the speed is faster, and the real-time performance is good. It can also be seen from the tables that under the low-light and complex environments the classification accuracy drops, which is caused by the insufficient definition of the test videos and insufficient light.
The method of the present invention first extracts the lane lines with the Hough transform and the constraint conditions, then extracts the foreground image with ViBe background subtraction and optimizes it, then builds the vehicle classifier according to the vehicle width ranges, then uses the lane lines as a reference to obtain the actual vehicle width and feeds it into the vehicle classifier to obtain the vehicle type, and finally performs error analysis on the vehicles through vehicle trajectory analysis and multi-frame statistics to obtain the final classification. The whole algorithm requires no complex mathematical computation, relies only on plane geometry, has good real-time performance and accuracy, and can be conveniently applied in video-based traffic research.

Claims (7)

1. A vehicle type classification method based on lane lines, characterized by comprising the following steps:
(1) extracting the lane lines on the left and right sides of the road from a road surveillance video image based on the Hough transform;
(2) obtaining the foreground targets in the video image with the ViBe background subtraction algorithm and applying hole removal and smoothing to the foreground targets;
(3) computing the actual width of a target using the lane lines as a reference frame, and feeding the actual width into a vehicle classifier that uses vehicle width as the classification criterion to obtain the vehicle type of the target;
(4) correcting errors in the classification results by excluding the results of targets whose driving trajectories are not straight lines.
2. The vehicle type classification method based on lane lines according to claim 1, characterized in that the error correction of step (4) further comprises classifying the vehicle type based on the average of the actual widths of the same target obtained from multiple frames of the video.
3. The vehicle type classification method based on lane lines according to claim 1, characterized in that in step (1) the straight lines obtained by the Hough transform from the first frame of the video are divided into two groups, one with slope greater than 0 and one with slope less than 0, and the longest line is selected from each group respectively as the left lane line and the right lane line.
4. The vehicle type classification method based on lane lines according to claim 1, characterized in that in step (3) the actual width of the target is ws1 = (ws2/w1)·w2, where ws2 is the pixel width of the bounding rectangle of the target in the video image, w1 is the distance between the two points at which the horizontal line through the center of the bounding rectangle of the target intersects the left and right lane lines, and w2 is the actual width of the road.
5. The vehicle type classification method based on lane lines according to claim 1, characterized in that in step (3) the vehicle classifier is expressed as:
category(s) = { a1, w(1)min < ws < w(1)max
                a2, w(2)min < ws < w(2)max
                …
                an, w(n)min < ws < w(n)max
where ws is the actual width of the vehicle, a1, a2, …, an respectively denote the n vehicle types, and (w(1)min, w(1)max), (w(2)min, w(2)max), …, (w(n)min, w(n)max) respectively denote the width ranges of the corresponding vehicle types; the width range of each vehicle type is disjoint from the width ranges of the other types.
6. The vehicle type classification method based on lane lines according to claim 1, characterized in that in step (4) the method for judging whether the driving trajectory of a target is a straight line is: first, several points are chosen from the driving trajectory of the target as an analysis sample; then two points are randomly chosen from the sample to form a straight line, and the distances of the remaining points to this line are computed; if the points whose distance lies within the threshold range account for more than 95% of the total number of points, the driving trajectory of the target is considered a straight line, otherwise it is not.
7. A vehicle type classification system based on lane lines, using the vehicle type classification method based on lane lines according to any one of claims 1-6, characterized by comprising:
a lane line detection module for extracting the lane lines on the left and right sides of the road from a road surveillance video image based on the Hough transform;
a foreground processing module for obtaining the foreground targets in the video image with the ViBe background subtraction algorithm and applying hole removal and smoothing to the foreground targets;
a target width computing module for computing the actual width of a target using the lane lines as a reference frame;
a vehicle classifier module for classifying the vehicle type of a target according to the input actual width of the target;
and an error correction module for correcting errors in the classification results and excluding the results of targets whose driving trajectories are not straight lines.
CN201710396082.XA 2017-05-31 2017-05-31 Vehicle class division method and system based on lane line Active CN107315998B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710396082.XA CN107315998B (en) 2017-05-31 2017-05-31 Vehicle class division method and system based on lane line

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710396082.XA CN107315998B (en) 2017-05-31 2017-05-31 Vehicle class division method and system based on lane line

Publications (2)

Publication Number Publication Date
CN107315998A true CN107315998A (en) 2017-11-03
CN107315998B CN107315998B (en) 2019-08-06

Family

ID=60183500

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710396082.XA Active CN107315998B (en) 2017-05-31 2017-05-31 Vehicle class division method and system based on lane line

Country Status (1)

Country Link
CN (1) CN107315998B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107886766A (en) * 2017-12-14 2018-04-06 大连理创科技有限公司 A kind of parking lot vehicle guidance method based on parking stall width
CN107886767A (en) * 2017-12-14 2018-04-06 大连理创科技有限公司 A kind of parking lot vehicle guidance system based on parking stall width
CN108012098A (en) * 2017-11-26 2018-05-08 合肥赛为智能有限公司 A kind of unmanned plane traffic inspection method
CN108242183A (en) * 2018-02-06 2018-07-03 淮阴工学院 Traffic conflict detection method and device based on moving target indicia framing width characteristic
CN108734105A (en) * 2018-04-20 2018-11-02 东软集团股份有限公司 Method for detecting lane lines, device, storage medium and electronic equipment
CN109726699A (en) * 2019-01-07 2019-05-07 殷鹏 Electric bicycle based on artificial intelligence occupies car lane recognition methods
CN111540010A (en) * 2020-05-15 2020-08-14 百度在线网络技术(北京)有限公司 Road monitoring method and device, electronic equipment and storage medium
CN114937253A (en) * 2022-06-15 2022-08-23 北京百度网讯科技有限公司 Vehicle type information processing method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101608924A (en) * 2009-05-20 2009-12-23 电子科技大学 A kind of method for detecting lane lines based on gray scale estimation and cascade Hough transform
CN101837780A (en) * 2009-03-18 2010-09-22 现代自动车株式会社 A lane departure warning system using a virtual lane and a system according to the same
EP3070491A1 (en) * 2015-03-06 2016-09-21 Q-Free ASA Vehicle detection

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101837780A (en) * 2009-03-18 2010-09-22 现代自动车株式会社 A lane departure warning system using a virtual lane and a system according to the same
CN101608924A (en) * 2009-05-20 2009-12-23 电子科技大学 A kind of method for detecting lane lines based on gray scale estimation and cascade Hough transform
EP3070491A1 (en) * 2015-03-06 2016-09-21 Q-Free ASA Vehicle detection

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JIN-WOO LEE 等: "A study on recognition of road lane and movement of vehicles using vision system", 《SICE 2001. PROCEEDINGS OF THE 40TH SICE ANNUAL CONFERENCE. INTERNATIONAL SESSION PAPERS (IEEE CAT. NO.01TH8603)》 *
刘富强 等: "A vision-based lane line detection and tracking algorithm", 《Journal of Tongji University (Natural Science)》 *
周勇: "Research on several key technologies of intelligent vehicles", 《China Doctoral Dissertations Full-text Database, Engineering Science and Technology II》 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108012098A (en) * 2017-11-26 2018-05-08 合肥赛为智能有限公司 A kind of unmanned plane traffic inspection method
CN107886766A (en) * 2017-12-14 2018-04-06 大连理创科技有限公司 A kind of parking lot vehicle guidance method based on parking stall width
CN107886767A (en) * 2017-12-14 2018-04-06 大连理创科技有限公司 A kind of parking lot vehicle guidance system based on parking stall width
CN108242183A (en) * 2018-02-06 2018-07-03 淮阴工学院 Traffic conflict detection method and device based on moving target indicia framing width characteristic
CN108242183B (en) * 2018-02-06 2019-12-10 淮阴工学院 traffic conflict detection method and device based on width characteristic of moving target mark frame
CN108734105A (en) * 2018-04-20 2018-11-02 东软集团股份有限公司 Method for detecting lane lines, device, storage medium and electronic equipment
CN109726699A (en) * 2019-01-07 2019-05-07 殷鹏 Electric bicycle based on artificial intelligence occupies car lane recognition methods
CN111540010A (en) * 2020-05-15 2020-08-14 百度在线网络技术(北京)有限公司 Road monitoring method and device, electronic equipment and storage medium
CN111540010B (en) * 2020-05-15 2023-09-19 阿波罗智联(北京)科技有限公司 Road monitoring method and device, electronic equipment and storage medium
CN114937253A (en) * 2022-06-15 2022-08-23 北京百度网讯科技有限公司 Vehicle type information processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN107315998B (en) 2019-08-06

Similar Documents

Publication Publication Date Title
CN107315998B (en) Vehicle class division method and system based on lane line
CN109919072B (en) Fine vehicle type recognition and flow statistics method based on deep learning and trajectory tracking
Johnson-Roberson et al. Driving in the matrix: Can virtual worlds replace human-generated annotations for real world tasks?
CN107133974B (en) Gaussian Background models the vehicle type classification method combined with Recognition with Recurrent Neural Network
Changzhen et al. A traffic sign detection algorithm based on deep convolutional neural network
CN109190444B (en) Method for realizing video-based toll lane vehicle feature recognition system
CN108171112A (en) Vehicle identification and tracking based on convolutional neural networks
CN105354568A (en) Convolutional neural network based vehicle logo identification method
Derpanis et al. Classification of traffic video based on a spatiotemporal orientation analysis
Zhang et al. Study on traffic sign recognition by optimized Lenet-5 algorithm
CN108304798A (en) The event video detecting method of order in the street based on deep learning and Movement consistency
CN105404857A (en) Infrared-based night intelligent vehicle front pedestrian detection method
JP2016062610A (en) Feature model creation method and feature model creation device
CN104978567A (en) Vehicle detection method based on scenario classification
Kavitha et al. Pothole and object detection for an autonomous vehicle using yolo
CN113516853B (en) Multi-lane traffic flow detection method for complex monitoring scene
Sheng et al. Vehicle detection and classification using convolutional neural networks
CN108549901A (en) A kind of iteratively faster object detection method based on deep learning
Tourani et al. A robust vehicle detection approach based on faster R-CNN algorithm
Liu et al. Multi-type road marking recognition using adaboost detection and extreme learning machine classification
CN114049572A (en) Detection method for identifying small target
Chen et al. Robust vehicle detection and viewpoint estimation with soft discriminative mixture model
Cai et al. Vehicle Detection Based on Deep Dual‐Vehicle Deformable Part Models
CN117456482B (en) Abnormal event identification method and system for traffic monitoring scene
Ghasemi et al. A real-time multiple vehicle classification and tracking system with occlusion handling

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20171103

Assignee: Jiangsu Huai deep blue Information Technology Co., Ltd.

Assignor: Huaiyin Institute of Technology

Contract record no.: X2019980001193

Denomination of invention: Vehicle type division method and system based on lanes

Granted publication date: 20190806

License type: Common License

Record date: 20191220