CN109272536A - A lane line vanishing point tracking method based on Kalman filtering - Google Patents
A lane line vanishing point tracking method based on Kalman filtering
- Publication number: CN109272536A (application CN201811110435.6A)
- Authority
- CN
- China
- Prior art keywords
- lane line
- point
- vanishing point
- sample
- line segment
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/168—Segmentation; Edge detection involving transform domain methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20061—Hough transform
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30256—Lane; Road marking
Abstract
The invention discloses a lane line vanishing point tracking method based on Kalman filtering, belonging to the field of image processing. The method takes the image sequence acquired by a vehicle-mounted camera as input, detects straight-line segments in each image, and uses a machine learning method to exclude segments that contain no or few lane-marking blocks, so that the lane line vanishing point is determined from genuine lane edge lines. Compared with the prior art, the invention reduces the influence of the various interfering objects appearing in the field of view on the accuracy of the vanishing point estimate.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a lane line vanishing point tracking method based on Kalman filtering.
Background art
Advanced driver assistance systems (ADAS) use various on-board sensors to sense the surrounding environment while the car is driving, collect data, and perform recognition, detection and tracking of static and dynamic objects, allowing the driver to perceive possible danger in advance and thus effectively improving the comfort and safety of driving. In the images acquired by the camera of an ADAS system, parallel lane lines on the road surface converge to a single point in the image, i.e. the vanishing point. For applications such as lane detection, lane departure warning and forward vehicle detection, the position of the vanishing point in the image is essential input information.
Chinese patent 201610492617.9 discloses a vanishing point calibration method based on horizon search, which needs to define multiple horizon situation templates and determines the horizon position by verification. Chinese patent 201710651702.X discloses a method that takes video as input, builds row-stacked images, searches for maxima in the stacked image, determines the lane lines from the maxima positions, and determines the vanishing point from the intersection of the lane lines. Besides lane markings, the field of view of the ADAS camera frequently contains objects such as vehicles, shadows cast by vehicles, and median guardrails. The intersections of the straight lines determined by the edges of these objects usually deviate substantially from the lane vanishing point, and estimating the vanishing point from such intersections seriously degrades its accuracy.
Summary of the invention
The present invention provides a lane line vanishing point tracking method based on Kalman filtering. The method takes the image sequence acquired by a vehicle-mounted camera as input, detects straight-line segments in each image, and uses a machine learning method to exclude segments that contain no or few lane-marking blocks, so that the vanishing point is determined from genuine lane edge lines. The vanishing point coordinates are modeled as the time-varying state of a discrete dynamic system, and the vanishing point is tracked with the Kalman filter algorithm.
The specific technical solution adopted by the present invention is as follows.
A lane line vanishing point tracking method based on Kalman filtering comprises the following steps:
Step 1: detect edge pixels in the input image, and detect line segments from the edge pixels with the Hough transform algorithm;
Step 2: for each detected line segment, take points on the segment as anchor points, extract image blocks at multiple scales and offsets, and identify with a pre-trained classifier whether each extracted image block is a lane-marking block;
Step 3: count, on each line segment, the number of points whose image blocks are identified as lane-marking blocks; if this number exceeds a preset threshold, add the segment to the candidate segment set, and set the segment's weight to the number of such points divided by the segment length;
Step 4: extend each segment in the candidate set into a straight line; for non-parallel lines, compute the pairwise intersections, take the two-dimensional coordinates of the intersections as samples, and compute the weighted average and covariance matrix of all samples;
Step 5: let the current frame be frame t; from the weighted average and covariance of the intersection samples of the current frame obtained in steps 1 to 4, and the lane vanishing point continuously tracked from frame 0 to frame t−1, estimate the lane vanishing point with the Kalman filter algorithm, and output the estimated vanishing point as the tracking result for frame t.
Each step in the above technical solution can be implemented in the following specific manner.
Taking the points on a line segment as anchor points in step 2, extracting image blocks at multiple scales and offsets, and identifying with a pre-trained classifier whether each extracted block is a lane-marking block, comprises:
If (X, Y) is a point on a line segment, the image block extracted with this point as anchor is I(X − δ_X, Y − δ_Y, W/s, H/s), denoting the rectangular image region with top-left corner (X − δ_X, Y − δ_Y), width W/s and height H/s, where W and H are the preset reference window width and height, δ_X and δ_Y are the horizontal and vertical offsets, and s is a preset scale coefficient.
The classifier that identifies whether an extracted image block is a lane-marking block is a cascade classifier, each stage of which is a strong classifier composed of several weak classifiers.
Each weak classifier corresponds to one feature and is computed as
h(x, f, p, θ) = +1 if p·f(x) < p·θ, and −1 otherwise
where x is the image block to be detected, p = ±1 controls the direction of the inequality, θ is a threshold, and f is the feature value computation function.
During training, the weighted misclassification loss of each candidate weak classifier is computed as
ε_t = min_{f,p,θ} Σ_i w_i |h(x_i, f, p, θ) − y_i|
where x_i and y_i are a sample and its label respectively; y_i = +1 if x_i is a positive sample, otherwise y_i = −1; h(·) is the weak classifier above; a mismatch between the sign of the weak classifier output and the label means a misclassification. The weak classifier with the smallest misclassification loss is selected as the optimal weak classifier, and the selected weak classifiers form the strong classifier.
The feature corresponding to each weak classifier may be a Haar-like feature, computed as follows: first take a rectangular region of the image block x to be detected and divide it into 2, 3 or 4 equally sized subregions. With 2 subregions (arranged left/right or top/bottom), the feature value is the difference between the pixel sum of one subregion and the pixel sum of the other. With 3 subregions (arranged left/middle/right or top/middle/bottom), the feature value is the pixel sum of the two outer subregions minus twice the pixel sum of the middle subregion. With 4 subregions (split in half both horizontally and vertically), the feature value is the pixel sum of the top-left and bottom-right subregions minus the pixel sum of the top-right and bottom-left subregions.
Computing, for the non-parallel straight lines in step 4, the pairwise intersections, taking the two-dimensional intersection coordinates as samples, and computing the weighted average and covariance matrix of all samples, comprises:
For any two segments L_i and L_j with weights η_i and η_j, if the straight lines obtained by extending L_i and L_j intersect, the intersection is assigned the weight η_i + η_j.
Let the intersection sample set be {(X_k, Y_k)}, k = 1, …, N, where (X_k, Y_k) is the coordinate of the k-th intersection, and let the corresponding weight set be {η_k}, where η_k is the weight of the k-th intersection and N is the total number of intersection samples. The weighted mean of the samples is then
u = Σ_k η′_k (X_k, Y_k)ᵀ, where η′_k = η_k / Σ_j η_j
and the covariance matrix of the samples is
Σ = Σ_k η′_k ((X_k, Y_k)ᵀ − u) ((X_k, Y_k)ᵀ − u)ᵀ
Estimating the lane vanishing point in step 5 with the Kalman filter algorithm, from the mean and covariance of the intersection samples of the current frame computed in steps 1 to 4 and the vanishing point continuously tracked from frame 0 to frame t−1, comprises:
First, model the vanishing point coordinates as the time-varying state of a discrete dynamic system. Denoting the vanishing point of frame t as V_t, its relation to the vanishing point V_{t−1} of the previous moment is
V_t = V_{t−1} + z
where z is the process noise of the system, following a normal distribution with mean 0 and covariance matrix Q.
Second, predict the vanishing point at moment t from the vanishing point of the previous moment,
V_t⁻ = V_{t−1}
and predict the state error covariance matrix P_t⁻ at moment t from the previous state error covariance matrix P_{t−1} and the matrix Q:
P_t⁻ = P_{t−1} + Q
where V_t⁻ denotes the predicted vanishing point at moment t and Q is the covariance matrix of the process noise.
Then compute the Kalman gain:
K_t = P_t⁻ (P_t⁻ + Σ)⁻¹
where Σ is the sample covariance matrix described in step 4.
Next, update the vanishing point:
V̂_t = V_t⁻ + K_t (u − V_t⁻)
where u is the weighted sample average described in step 4 and V̂_t is the updated vanishing point at moment t.
Finally, update the state error covariance matrix at moment t:
P_t = (D − K_t) P_t⁻
where D is the 2 × 2 identity matrix and P_t is the updated state error covariance matrix.
The lane line vanishing point tracking method based on Kalman filtering of the present invention takes the image sequence acquired by a vehicle-mounted camera as input, detects straight-line segments in each image, and uses a machine learning method to exclude segments containing no or few lane-marking blocks, so that the vanishing point is determined from genuine lane edge lines. Compared with the prior art, the present invention reduces the influence of the various interfering objects appearing in the field of view on the accuracy of the vanishing point estimate.
Brief description of the drawings
Fig. 1 is a flow diagram of the lane line vanishing point tracking method based on Kalman filtering according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of Haar-like feature computation;
Fig. 3 is a schematic diagram of lane marking annotation and positive sample extraction;
Fig. 4 is a flow diagram of training the classifier with the Adaboost algorithm.
Specific embodiment
The present invention provides a lane line vanishing point tracking method based on Kalman filtering. The method takes the image sequence acquired by a vehicle-mounted camera as input, detects straight-line segments in each image, and uses a machine learning method to exclude segments containing no or few lane-marking blocks, so that the vanishing point is determined from genuine lane edge lines; the vanishing point coordinates are modeled as the time-varying state of a discrete dynamic system, and the vanishing point is tracked with the Kalman filter algorithm.
As shown in Fig. 1, the lane line vanishing point tracking process based on Kalman filtering according to the present invention may comprise the following steps 101 to 105:
Step 101: detect edge pixels in the input image, and detect line segments from the edge pixels with the Hough transform algorithm;
Step 102: for each detected line segment, take points on the segment as anchor points, extract image blocks at multiple scales and offsets, and identify with a pre-trained classifier whether each extracted image block is a lane-marking block;
Step 103: count, on each line segment, the number of points whose image blocks are identified as lane-marking blocks; if this number exceeds a preset threshold, add the segment to the candidate segment set, and set the segment's weight to the number of such points divided by the segment length;
Step 104: extend each segment in the candidate set into a straight line; for non-parallel lines, compute the pairwise intersections, take the two-dimensional coordinates of the intersections as samples, and compute the weighted average and covariance matrix of all samples;
Step 105: let the current frame be frame t; from the weighted average and covariance of the intersection samples of frame t obtained in steps 101 to 104, and the lane vanishing point continuously tracked from frame 0 to frame t−1, estimate the lane vanishing point with the Kalman filter algorithm, and output the estimated vanishing point as the result for frame t.
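Step 101 (edge pixels voting in a Hough accumulator) can be sketched in a few lines. The following is an illustrative NumPy implementation of the Hough voting itself, not the patent's code; the function name, the 1-degree angle step and the 1-pixel ρ quantization are assumptions:

```python
import numpy as np

def hough_accumulator(edge_points, rho_max=20):
    """Vote each edge pixel (x, y) into an accumulator over (rho, theta),
    where rho = x*cos(theta) + y*sin(theta); peaks correspond to lines."""
    thetas = np.deg2rad(np.arange(0, 180))            # 1-degree steps
    acc = np.zeros((2 * rho_max + 1, len(thetas)), dtype=int)
    for x, y in edge_points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + rho_max, np.arange(len(thetas))] += 1   # one vote per theta
    return acc

# Edge pixels on the vertical line x = 5: all ten vote the cell (rho=5, theta=0)
pts = [(5, y) for y in range(10)]
acc = hough_accumulator(pts)
```

In practice the segments themselves would come from a probabilistic Hough variant that also returns endpoints, but the voting principle is the same.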
The specific practice of some of the above steps in this embodiment is described below with reference to the drawings.
For each line segment detected in step 102, points on the segment are taken as anchor points, image blocks are extracted at multiple scales and offsets, and a pre-trained classifier identifies whether each extracted block is a lane-marking block. Specifically, if (X, Y) is a point on the segment, the image block extracted with this point as anchor is I(X − δ_X, Y − δ_Y, W/s, H/s), denoting the rectangular image region with top-left corner (X − δ_X, Y − δ_Y), width W/s and height H/s, where W and H are the preset reference window width and height, δ_X and δ_Y are the horizontal and vertical offsets, and s is a preset scale coefficient. One embodiment of the present invention takes W = 24, H = 10, δ_X, δ_Y ∈ {−8, −4, 0, +4, +8}, and s ∈ {0.8, 0.9, 1.0, 1.1, 1.25}.
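As an illustration of the patch extraction just described, the sketch below enumerates the crops I(X − δ_X, Y − δ_Y, W/s, H/s) for the embodiment's parameter values; the function name and the rounding of W/s and H/s to integer pixel sizes are assumptions, not taken from the patent:

```python
import numpy as np

W, H = 24, 10                         # base window width/height (embodiment values)
OFFSETS = (-8, -4, 0, 4, 8)           # delta_X, delta_Y candidates
SCALES = (0.8, 0.9, 1.0, 1.1, 1.25)   # scale coefficients s

def extract_patches(img, X, Y):
    """Collect every in-bounds crop with top-left (X-dx, Y-dy), size (W/s, H/s)."""
    patches = []
    for s in SCALES:
        w, h = int(round(W / s)), int(round(H / s))
        for dx in OFFSETS:
            for dy in OFFSETS:
                x0, y0 = X - dx, Y - dy
                if 0 <= x0 and 0 <= y0 and x0 + w <= img.shape[1] and y0 + h <= img.shape[0]:
                    patches.append(img[y0:y0 + h, x0:x0 + w])
    return patches

img = np.zeros((100, 200), dtype=np.uint8)
patches = extract_patches(img, 100, 50)   # anchor well inside the image
# 5 scales x 5 x 5 offsets = 125 candidate crops, all in-bounds here
```

Each returned crop would then be rescaled to the W x H reference window before being fed to the cascade classifier.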
The classifier in step 102 that identifies whether an extracted image block is a lane-marking block is a cascade classifier, each stage of which is a strong classifier composed of several weak classifiers. Each weak classifier corresponds to one feature and is computed as
h(x, f, p, θ) = +1 if p·f(x) < p·θ, and −1 otherwise   (1)
where x is the image block to be detected, p = ±1 controls the direction of the inequality, θ is a threshold, and f is the feature value computation function. The embodiment of the present invention uses Haar-like features; referring to Fig. 2, a feature is computed as follows: first take a rectangular region of x and divide it into 2, 3 or 4 equally sized subregions. With 2 subregions (arranged left/right or top/bottom), the feature value is the pixel sum of the white subregion minus the pixel sum of the black subregion. With 3 subregions (arranged left/middle/right or top/middle/bottom), the feature value is the pixel sum of the two outer white subregions minus twice the pixel sum of the middle black subregion. With 4 subregions (split in half both horizontally and vertically), the feature value is the pixel sum of the top-left and bottom-right white subregions minus the pixel sum of the top-right and bottom-left black subregions.
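The rectangle sums above can be computed in constant time per feature with an integral image (summed-area table), the standard trick for Haar-like features; the patent does not spell this out, so the sketch below is an assumed implementation detail, shown for the two-rectangle left/right case:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended, so that
    ii[y, x] is the sum of all pixels above and left of (x, y)."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w x h rectangle with top-left corner (x, y)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_2h(img, x, y, w, h):
    """Two-rectangle Haar-like feature: left-half sum minus right-half sum."""
    ii = integral_image(img.astype(np.int64))   # avoid uint8 overflow
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)

img = np.zeros((6, 8), dtype=np.uint8)
img[:, :4] = 10                      # bright left half, dark right half
value = haar_2h(img, 0, 0, 8, 6)     # 6*4*10 - 0 = 240
```

The three- and four-rectangle variants combine the same `rect_sum` calls with the signs described in the text.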
The sample set used in this embodiment to train the classifier includes positive and negative samples; see Fig. 3. A positive sample is a rectangular image block with a fixed width-to-height ratio; in the figure, L is a horizontal line intersecting the lane marking edges at points A and B, the length of segment AB is w, and the annotated lane-marking region approximately occupies the middle of the block. A negative sample is a road-surface image region containing no lane marking. Both positive and negative samples are scaled to a preset size.
The strong classifiers in this embodiment are trained with the Adaboost algorithm. Referring to Fig. 4, training may include the following steps:
Step 401: initialize the weight of each sample; each positive sample has weight 1/(2N_p) and each negative sample has weight 1/(2N_f), where N_p is the number of positive samples and N_f the number of negative samples. Initialize the strong classifier to contain 0 weak classifiers.
Step 402: iterate t from 1 to T, where T is the preset maximum number of weak classifiers allowed in a strong classifier; each iteration selects one weak classifier.
Step 403: select the optimal weak classifier. First compute the weighted misclassification loss of each candidate weak classifier,
ε_t = min_{f,p,θ} Σ_i w_i |h(x_i, f, p, θ) − y_i|   (2)
where x_i and y_i are a sample and its label (y_i = +1 for a positive sample, −1 otherwise) and h(·) is the weak classifier of formula (1); a mismatch between the sign of the weak classifier output and the label means a misclassification. Then select the weak classifier with the smallest misclassification loss as the optimal weak classifier, denoted h_t.
Step 404: if ε_t > 0.5, terminate the iteration; otherwise go to step 405.
Step 405: update the weight of each sample as
w_{t+1,i} = w_{t,i} β_t^{1−e_i}, with β_t = ε_t / (1 − ε_t)   (3)
where e_i = 0 if sample x_i is classified correctly and e_i = 1 otherwise.
Step 406: compute the weight of the current weak classifier in the strong classifier,
α_t = log(1 / β_t)   (4)
and combine the weak classifiers into the strong classifier by their weights:
F(x) = sign(Σ_t α_t h_t(x))   (5)
Step 407: classify the test samples with the current strong classifier; if the classification result reaches the target, terminate the iteration and output the strong classifier of formula (5); otherwise go to step 402.
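Steps 401 to 407 can be sketched as follows, with one-dimensional threshold stumps standing in for the Haar-feature weak classifiers; the function names and the toy data are illustrative assumptions, not the patent's training set or code:

```python
import numpy as np

def train_adaboost(feats, labels, T=5):
    """Adaboost loop over threshold stumps h(x,f,p,theta) = +1 if p*f(x) < p*theta.
    feats: (n_samples, n_features) feature values; labels in {+1, -1}."""
    n, m = feats.shape
    w = np.where(labels > 0, 1.0 / (2 * (labels > 0).sum()),   # step 401
                             1.0 / (2 * (labels < 0).sum()))
    stumps, alphas = [], []
    for _ in range(T):                                          # step 402
        w = w / w.sum()
        best = None
        for j in range(m):                                      # step 403: search f, p, theta
            for theta in np.unique(feats[:, j]):
                for p in (+1, -1):
                    pred = np.where(p * feats[:, j] < p * theta, 1, -1)
                    err = w[pred != labels].sum()
                    if best is None or err < best[0]:
                        best = (err, j, p, theta, pred)
        err, j, p, theta, pred = best
        if err > 0.5:                                           # step 404
            break
        beta = err / (1 - err) if err > 0 else 1e-10
        w = w * beta ** (pred == labels)                        # step 405: shrink correct samples
        stumps.append((j, p, theta))
        alphas.append(np.log(1 / beta))                         # step 406
    return stumps, alphas

def strong_classify(stumps, alphas, feats):
    """F(x) = sign(sum_t alpha_t * h_t(x)), formula (5)."""
    score = sum(a * np.where(p * feats[:, j] < p * t, 1, -1)
                for (j, p, t), a in zip(stumps, alphas))
    return np.sign(score)

# Toy linearly separable data: feature 0 alone separates the classes
X = np.array([[0.1], [0.2], [0.8], [0.9]])
y = np.array([1, 1, -1, -1])
stumps, alphas = train_adaboost(X, y, T=3)
```

A real cascade would train one such strong classifier per stage, tightening the stage target of step 407 each time.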
Extending each segment in the candidate set into a straight line in step 104, computing the pairwise intersections of the non-parallel lines, taking the two-dimensional intersection coordinates as samples, and computing the weighted average and covariance matrix of all samples, may specifically include:
For any two segments L_i and L_j with weights η_i and η_j, if the straight lines obtained by extending L_i and L_j intersect, the intersection is assigned the weight η_i + η_j.
Let the intersection sample set be {(X_k, Y_k)}, k = 1, …, N, where (X_k, Y_k) is the coordinate of the k-th intersection, and let the corresponding weight set be {η_k}, where η_k is the weight of the k-th intersection and N is the total number of intersection samples. The weighted mean of the samples is then
u = Σ_k η′_k (X_k, Y_k)ᵀ, where η′_k = η_k / Σ_j η_j
and the covariance matrix of the samples is
Σ = Σ_k η′_k ((X_k, Y_k)ᵀ − u) ((X_k, Y_k)ᵀ − u)ᵀ
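As an illustration of the step 104 computation, the sketch below intersects extended segments pairwise and accumulates the weighted mean u and covariance Σ; the segment coordinates and weights are made-up examples, and the function names are assumptions:

```python
import numpy as np

def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines through segments (p1,p2) and (p3,p4);
    returns None for (near-)parallel lines."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def weighted_mean_cov(points, weights):
    """Weighted mean u and covariance Sigma of 2-D intersection samples."""
    pts = np.asarray(points, float)
    w = np.asarray(weights, float)
    w = w / w.sum()                       # normalized weights eta'_k
    u = w @ pts
    d = pts - u
    return u, (w[:, None] * d).T @ d

# Three weighted segments whose extensions all cross at (1, 1)
segs = [((0, 0), (2, 2)), ((0, 2), (2, 0)), ((1, 0), (1, 2))]
eta = [0.5, 0.3, 0.2]
pts, w = [], []
for i in range(len(segs)):
    for j in range(i + 1, len(segs)):
        p = line_intersection(*segs[i], *segs[j])
        if p is not None:
            pts.append(p)
            w.append(eta[i] + eta[j])     # intersection weight eta_i + eta_j
u, cov = weighted_mean_cov(pts, w)
# all three lines meet at (1, 1), so u = (1, 1) and Sigma = 0
```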
Estimating the lane vanishing point in step 105 with the Kalman filter algorithm, from the mean and covariance of the intersection samples of frame t computed in steps 101 to 104 and the vanishing point continuously tracked from frame 0 to frame t−1, may specifically include:
First, model the vanishing point coordinates as the time-varying state of a discrete dynamic system. Denoting the vanishing point of frame t as V_t, its relation to the vanishing point V_{t−1} of the previous moment is
V_t = V_{t−1} + z   (6)
where z is the process noise of the system, following a normal distribution with mean 0 and covariance matrix Q.
Second, predict the vanishing point at moment t from the vanishing point of the previous moment,
V_t⁻ = V_{t−1}   (7)
and predict the state error covariance matrix P_t⁻ at moment t from the previous state error covariance matrix P_{t−1} and the matrix Q:
P_t⁻ = P_{t−1} + Q   (8)
where V_t⁻ denotes the predicted vanishing point at moment t and Q is the covariance matrix of the process noise.
Then compute the Kalman gain:
K_t = P_t⁻ (P_t⁻ + Σ)⁻¹   (9)
where Σ is the sample covariance matrix of step 104.
Next, update the vanishing point:
V̂_t = V_t⁻ + K_t (u − V_t⁻)   (10)
where u is the weighted sample average of step 104 and V̂_t is the updated vanishing point at moment t.
Finally, update the state error covariance matrix at moment t:
P_t = (D − K_t) P_t⁻   (11)
where D is the 2 × 2 identity matrix.
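One predict/update cycle of equations (6) to (11) can be sketched numerically as follows; the numbers are made-up and only illustrate the algebra:

```python
import numpy as np

def kalman_vp_update(V_prev, P_prev, Q, u, Sigma):
    """One cycle of the vanishing-point Kalman filter: state is the 2-D
    vanishing point, measurement is the weighted intersection mean u."""
    V_pred = V_prev                              # (7): identity motion model
    P_pred = P_prev + Q                          # (8): predicted error covariance
    K = P_pred @ np.linalg.inv(P_pred + Sigma)   # (9): Kalman gain
    V = V_pred + K @ (u - V_pred)                # (10): updated vanishing point
    P = (np.eye(2) - K) @ P_pred                 # (11): updated error covariance
    return V, P

V0 = np.array([320.0, 240.0])        # vanishing point tracked up to frame t-1
P0 = np.eye(2) * 4.0                 # previous state error covariance
Q = np.eye(2) * 1.0                  # process noise covariance
u = np.array([330.0, 240.0])         # weighted mean of frame-t intersections
Sigma = np.eye(2) * 5.0              # covariance of the intersection samples
V1, P1 = kalman_vp_update(V0, P0, Q, u, Sigma)
# gain = 5/10 = 0.5, so V1 = (325, 240) and P1 = 2.5 * I
```

With these values the filter moves the estimate halfway toward the new measurement; a noisier frame (larger Σ) would shrink the gain and the correction accordingly.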
Through the above processing flow, the lane line vanishing point can be accurately determined from genuine lane edge lines.
The foregoing is merely a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto; any modification or substitution made within the spirit and principle of the invention shall be covered by the scope of protection of the present invention.
Claims (5)
1. A lane line vanishing point tracking method based on Kalman filtering, characterized in that the method comprises the following steps:
Step 1: detecting edge pixels in the input image, and detecting line segments from the edge pixels with the Hough transform algorithm;
Step 2: for each detected line segment, taking points on the segment as anchor points, extracting image blocks at multiple scales and offsets, and identifying with a pre-trained classifier whether each extracted image block is a lane-marking block;
Step 3: counting, on each line segment, the number of points whose image blocks are identified as lane-marking blocks; if this number exceeds a preset threshold, adding the segment to the candidate segment set, and setting the segment's weight to the number of such points divided by the segment length;
Step 4: extending each segment in the candidate set into a straight line; for non-parallel lines, computing the pairwise intersections, taking the two-dimensional coordinates of the intersections as samples, and computing the weighted average and covariance matrix of all samples;
Step 5: letting the current frame be frame t; from the weighted average and covariance of the intersection samples of the current frame obtained in steps 1 to 4, and the lane vanishing point continuously tracked from frame 0 to frame t−1, estimating the lane vanishing point with the Kalman filter algorithm, and outputting the estimated vanishing point as the tracking result for frame t.
2. The lane line vanishing point tracking method based on Kalman filtering according to claim 1, characterized in that taking points on a line segment as anchor points in step 2, extracting image blocks at multiple scales and offsets, and identifying with a pre-trained classifier whether each extracted image block is a lane-marking block, comprises:
if (X, Y) is a point on a line segment, the image block extracted with this point as anchor is I(X − δ_X, Y − δ_Y, W/s, H/s), denoting the rectangular image region with top-left corner (X − δ_X, Y − δ_Y), width W/s and height H/s, where W and H are the preset reference window width and height, δ_X and δ_Y are the horizontal and vertical offsets, and s is a preset scale coefficient;
the classifier identifying whether an extracted image block is a lane-marking block is a cascade classifier, each stage of which is a strong classifier composed of several weak classifiers;
each weak classifier corresponds to one feature and is computed as
h(x, f, p, θ) = +1 if p·f(x) < p·θ, and −1 otherwise
where x is the image block to be detected, p = ±1 controls the direction of the inequality, θ is a threshold, and f is the feature value computation function;
during training, the weighted misclassification loss of each candidate weak classifier is computed as
ε_t = min_{f,p,θ} Σ_i w_i |h(x_i, f, p, θ) − y_i|
where x_i and y_i are a sample and its label, with y_i = +1 if x_i is a positive sample and y_i = −1 otherwise; the weak classifier with the smallest misclassification loss is selected as the optimal weak classifier, and the selected weak classifiers form the strong classifier.
3. The lane line vanishing point tracking method based on Kalman filtering according to claim 2, characterized in that the feature corresponding to each weak classifier is a Haar-like feature, computed as follows: first take a rectangular region of the image block x to be detected and divide it into 2, 3 or 4 equally sized subregions; with 2 subregions (arranged left/right or top/bottom), the feature value is the difference between the pixel sum of one subregion and the pixel sum of the other; with 3 subregions (arranged left/middle/right or top/middle/bottom), the feature value is the pixel sum of the two outer subregions minus twice the pixel sum of the middle subregion; with 4 subregions (split in half both horizontally and vertically), the feature value is the pixel sum of the top-left and bottom-right subregions minus the pixel sum of the top-right and bottom-left subregions.
4. The lane line vanishing point tracking method based on Kalman filtering according to claim 1, characterized in that computing, for the non-parallel straight lines in step 4, the pairwise intersections, taking the two-dimensional intersection coordinates as samples, and computing the weighted average and covariance matrix of all samples, comprises:
for any two segments L_i and L_j with weights η_i and η_j, if the straight lines obtained by extending L_i and L_j intersect, assigning the intersection the weight η_i + η_j;
letting the intersection sample set be {(X_k, Y_k)}, k = 1, …, N, where (X_k, Y_k) is the coordinate of the k-th intersection, and letting the corresponding weight set be {η_k}, where η_k is the weight of the k-th intersection and N is the total number of intersection samples, the weighted mean of the samples is
u = Σ_k η′_k (X_k, Y_k)ᵀ, where η′_k = η_k / Σ_j η_j
and the covariance matrix of the samples is
Σ = Σ_k η′_k ((X_k, Y_k)ᵀ − u) ((X_k, Y_k)ᵀ − u)ᵀ
5. The lane line vanishing point tracking method based on Kalman filtering according to claim 1, wherein in step 5, estimating the lane line vanishing point with the Kalman filter algorithm, from the mean and covariance of the current frame's intersection-coordinate samples computed in steps 1 to 4 and the lane line vanishing point continuously tracked from frame 0 to frame t−1, comprises:
firstly, modeling the vanishing point coordinates as the state of a discrete dynamic system that changes over time; denoting the vanishing point of frame t as Vt, its relation to the vanishing point Vt−1 of the previous moment is:
Vt = Vt−1 + z
wherein z represents the process noise of the system and follows a normal distribution with mean 0 and covariance matrix Q;
secondly, predicting the vanishing point at time t from the lane line vanishing point Vt−1 of the previous moment, and predicting the state-error covariance matrix Pt− at time t from the state-error covariance matrix Pt−1 of the previous moment and the matrix Q:
Vt− = Vt−1
Pt− = Pt−1 + Q
wherein Vt− denotes the estimated lane line vanishing point at time t, and Q represents the covariance matrix of the system process noise;
then computing the Kalman gain as:
Kt = Pt−(Pt− + Σ)^(−1)
wherein Σ is the sample covariance matrix described in step 4;
next, updating the lane line vanishing point as:
Vt = Vt− + Kt(u − Vt−)
wherein u is the weighted sample mean described in step 4, and Vt is the updated lane line vanishing point at time t;
finally, updating the state-error covariance matrix at time t as:
Pt = (D − Kt)Pt−
wherein D is the 2 × 2 identity matrix, and Pt is the updated state-error covariance matrix at time t.
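The predict/update cycle of claim 5 can be sketched as follows, assuming numpy arrays and illustrative names; u and Sigma are the per-frame sample statistics of step 4, and the observation matrix is the identity since the weighted intersection mean directly measures the vanishing point:

```python
import numpy as np

def kalman_vanishing_point(V_prev, P_prev, u, Sigma, Q):
    """One predict/update cycle of the vanishing-point tracker.

    V_prev : previous vanishing point estimate, shape (2,)
    P_prev : previous state-error covariance, shape (2, 2)
    u      : weighted mean of current-frame intersections (measurement)
    Sigma  : sample covariance of the intersections (measurement noise)
    Q      : process-noise covariance matrix
    """
    # Predict step: state model is V_t = V_{t-1} + z, so the prediction
    # keeps the previous estimate and inflates its covariance by Q.
    V_pred = V_prev
    P_pred = P_prev + Q
    # Kalman gain with identity observation matrix.
    K = P_pred @ np.linalg.inv(P_pred + Sigma)
    # Update step: blend prediction and measurement by the gain.
    V = V_pred + K @ (u - V_pred)
    P = (np.eye(2) - K) @ P_pred
    return V, P
```

When the intersection samples are tightly clustered (small Sigma), the gain approaches the identity and the estimate follows the measurement; when they are noisy, the tracker leans on the prediction.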
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811110435.6A CN109272536B (en) | 2018-09-21 | 2018-09-21 | Lane line vanishing point tracking method based on Kalman filtering |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109272536A true CN109272536A (en) | 2019-01-25 |
CN109272536B CN109272536B (en) | 2021-11-09 |
Family
ID=65198756
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811110435.6A Active CN109272536B (en) | 2018-09-21 | 2018-09-21 | Lane line vanishing point tracking method based on Kalman filtering |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109272536B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111968038A (en) * | 2020-10-23 | 2020-11-20 | 网御安全技术(深圳)有限公司 | Method and system for rapidly searching vanishing points in image |
US11373063B2 (en) * | 2018-12-10 | 2022-06-28 | International Business Machines Corporation | System and method for staged ensemble classification |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103366156A (en) * | 2012-04-09 | 2013-10-23 | 通用汽车环球科技运作有限责任公司 | Road structure detection and tracking |
CN103839264A (en) * | 2014-02-25 | 2014-06-04 | 中国科学院自动化研究所 | Detection method of lane line |
CN104318258A (en) * | 2014-09-29 | 2015-01-28 | 南京邮电大学 | Time domain fuzzy and kalman filter-based lane detection method |
CN106228125A (en) * | 2016-07-15 | 2016-12-14 | 浙江工商大学 | Method for detecting lane lines based on integrated study cascade classifier |
CN106529415A (en) * | 2016-10-16 | 2017-03-22 | 北海益生源农贸有限责任公司 | Characteristic and model combined road detection method |
CN106529443A (en) * | 2016-11-03 | 2017-03-22 | 温州大学 | Method for improving detection of lane based on Hough transform |
CN106682586A (en) * | 2016-12-03 | 2017-05-17 | 北京联合大学 | Method for real-time lane line detection based on vision under complex lighting conditions |
CN107316331A (en) * | 2017-08-02 | 2017-11-03 | 浙江工商大学 | For the vanishing point automatic calibration method of road image |
US20180060669A1 (en) * | 2016-08-30 | 2018-03-01 | Canon Kabushiki Kaisha | Method, system and apparatus for processing an image |
CN107796373A (en) * | 2017-10-09 | 2018-03-13 | 长安大学 | A kind of distance-finding method of the front vehicles monocular vision based on track plane geometry model-driven |
Non-Patent Citations (6)
Title |
---|
JINJIN SHI ET AL.: "Fast and Robust Vanishing Point Detection for Unstructured Road Following", IEEE Transactions on Intelligent Transportation Systems * |
QING XU ET AL.: "Real-time Rear of Vehicle Detection from a Moving Camera", 2014 CCDC * |
付永春: "Research on monocular-vision lane line detection and tracking for structured roads", China Master's Theses Full-text Database, Information Science and Technology * |
李佳旺: "Research on front vehicle detection and ranging based on computer vision", China Master's Theses Full-text Database, Engineering Science and Technology II * |
陈茜: "Implementation of lane line detection technology based on the Android platform", China Master's Theses Full-text Database, Information Science and Technology * |
黄惠迪: "Research and implementation of a driving-safety early-warning system based on machine vision", China Master's Theses Full-text Database, Information Science and Technology * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Keller et al. | The benefits of dense stereo for pedestrian detection | |
Li et al. | Springrobot: A prototype autonomous vehicle and its algorithms for lane detection | |
CN107463890B (en) | Front vehicle detection and tracking method based on a monocular forward-looking camera | |
CN106682586A (en) | Method for real-time lane line detection based on vision under complex lighting conditions | |
CN106682641A (en) | Pedestrian identification method based on image with FHOG- LBPH feature | |
CN106203398A (en) | Method, apparatus and device for detecting lane boundaries | |
CN104318258A (en) | Time domain fuzzy and kalman filter-based lane detection method | |
CN108364466A (en) | A kind of statistical method of traffic flow based on unmanned plane traffic video | |
CN108052904B (en) | Method and device for acquiring lane line | |
CN105046255A (en) | Vehicle tail character recognition based vehicle type identification method and system | |
CN103164711A (en) | Regional people stream density estimation method based on pixels and support vector machine (SVM) | |
CN108416258A (en) | A kind of multi-human body tracking method based on human body model | |
CN103413308A (en) | Obstacle detection method and device | |
Enzweiler et al. | Towards multi-cue urban curb recognition | |
CN111626275B (en) | Abnormal parking detection method based on intelligent video analysis | |
CN104573685A (en) | Natural scene text detecting method based on extraction of linear structures | |
CN104599286A (en) | Optical flow based feature tracking method and device | |
CN107480585A (en) | Object detection method based on DPM algorithms | |
CN104143197A (en) | Detection method for moving vehicles in aerial photography scene | |
CN102194102A (en) | Method and device for classifying a traffic sign | |
CN109190483A (en) | Vision-based lane line detection method | |
CN105160292A (en) | Vehicle identification recognition method and system | |
CN109272536A (en) | Lane line vanishing point tracking method based on Kalman filtering | |
CN107133600A (en) | A kind of real-time lane line detection method based on intra-frame trunk | |
WO2013026205A1 (en) | System and method for detecting and recognizing rectangular traffic signs |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||