CN109300139A - Method for detecting lane lines and device - Google Patents
Method for detecting lane lines and device Download PDFInfo
- Publication number
- CN109300139A CN109300139A CN201811159602.6A CN201811159602A CN109300139A CN 109300139 A CN109300139 A CN 109300139A CN 201811159602 A CN201811159602 A CN 201811159602A CN 109300139 A CN109300139 A CN 109300139A
- Authority
- CN
- China
- Prior art keywords
- edge
- parameter
- lane
- imaging model
- video frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 103
- 238000003384 imaging method Methods 0.000 claims abstract description 184
- 230000004044 response Effects 0.000 claims abstract description 38
- 238000001514 detection method Methods 0.000 claims description 18
- 238000003708 edge detection Methods 0.000 claims description 9
- 238000004590 computer program Methods 0.000 claims description 6
- 238000004364 calculation method Methods 0.000 claims description 4
- 238000010586 diagram Methods 0.000 description 10
- 230000006870 function Effects 0.000 description 10
- 230000008569 process Effects 0.000 description 8
- 230000006854 communication Effects 0.000 description 6
- 238000001914 filtration Methods 0.000 description 6
- 230000002776 aggregation Effects 0.000 description 5
- 238000004220 aggregation Methods 0.000 description 5
- 238000004891 communication Methods 0.000 description 5
- 238000005516 engineering process Methods 0.000 description 4
- 238000012545 processing Methods 0.000 description 4
- 230000003287 optical effect Effects 0.000 description 3
- 238000012360 testing method Methods 0.000 description 3
- 230000009466 transformation Effects 0.000 description 3
- 238000004422 calculation algorithm Methods 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 230000006835 compression Effects 0.000 description 2
- 238000007906 compression Methods 0.000 description 2
- 238000009434 installation Methods 0.000 description 2
- 230000005291 magnetic effect Effects 0.000 description 2
- 239000004065 semiconductor Substances 0.000 description 2
- 238000007476 Maximum Likelihood Methods 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000005611 electricity Effects 0.000 description 1
- 239000000835 fiber Substances 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 238000012067 mathematical method Methods 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 239000013307 optical fiber Substances 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/181—Segmentation; Edge detection involving edge growing; involving edge linking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of the present application disclose a lane line detection method and device. One specific embodiment of the lane line detection method includes: detecting edges in a current video frame; determining a candidate edge set based on the detected edges; fitting each edge in the candidate edge set using a lane imaging model whose parameters have been determined; for each edge in the candidate edge set, calculating the error between the fitting result of the edge and the edge; selecting the edges whose calculated error is less than or equal to a preset error; and, in response to the number of selected edges being greater than or equal to 4, determining lane lines based on the fitting results of the selected edges. This embodiment obtains fitting results based on the edges in the candidate edge set, can effectively use information from multiple edges for fitting, and increases the stability of the lane lines determined from the fitting results.
Description
Technical field
The present application relates to the field of computer technology, in particular to the field of electronic map technology, and more particularly to a lane line detection method and device.
Background art
In lane line detection applications, the detected lane lines need to be fitted in order to obtain the driving parameters of the current road.
Currently, straight lines or polynomials are usually used to fit multiple lane lines separately, algorithms such as RANSAC are used in the fitting process, and the camera's extrinsic calibration parameters are needed for filtering.
However, current lane line fitting methods have the following problems: (1) each lane line is fitted individually, so information from other lane lines cannot be used effectively to increase fitting stability; (2) the camera's extrinsic calibration parameters are needed for filtering, which limits use in scenarios without extrinsic calibration; (3) inter-frame parameter tracking cannot be carried out effectively; (4) algorithms such as RANSAC consume substantial hardware resources.
Summary of the invention
The embodiments of the present application provide a lane line detection method and device.
In a first aspect, an embodiment of the present application provides a lane line detection method, comprising: detecting edges in a current video frame; determining a candidate edge set based on the detected edges; fitting each edge in the candidate edge set using a lane imaging model whose parameters have been determined; for each edge in the candidate edge set, calculating the error between the fitting result of the edge and the edge; selecting the edges whose calculated error is less than or equal to a preset error; and, in response to the number of selected edges being greater than or equal to 4, determining lane lines based on the fitting results of the selected edges.
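The control flow of the first aspect can be sketched as follows. This is a minimal illustration, not the patented implementation: `fit_edge` is a hypothetical stand-in for fitting one edge with the lane imaging model and returning the fitting result together with its error.

```python
def detect_lane_lines(edges, fit_edge, max_error, min_edges=4):
    """Sketch of the claimed method: fit every candidate edge with a lane
    imaging model whose parameters are already determined, keep the edges
    whose fitting error is at most the preset error, and only report lane
    lines when at least `min_edges` edges survive the error test.

    `fit_edge(edge)` returns (fitting_result, error) for one edge."""
    selected = []
    for edge in edges:
        result, error = fit_edge(edge)
        if error <= max_error:
            selected.append(result)
    if len(selected) >= min_edges:
        return selected   # lane lines determined from these fitting results
    return None           # too few reliable edges; move on to the next frame

# Toy usage with a hypothetical per-edge fitter: each fake edge carries
# its own error value.
edges = [("e1", 0.2), ("e2", 0.1), ("e3", 0.05), ("e4", 0.3), ("e5", 2.0)]
fit = lambda e: (e[0] + "_fit", e[1])
lanes = detect_lane_lines(edges, fit, max_error=0.5)
```

With `max_error=0.5`, four of the five toy edges pass the error test, so lane lines are reported; with a much stricter threshold the function returns `None`, corresponding to advancing to the next video frame.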
In some embodiments, the parameters of the lane imaging model are determined based on the following steps: in response to obtaining the parameters of the lane imaging model of the previous video frame from a database, determining the parameters of the lane imaging model of the previous video frame as the parameters of the lane imaging model of the current video frame; in response to failing to obtain the parameters of the lane imaging model of the previous video frame from the database, fitting each edge in the candidate edge set in a data-fitting parameter-determination step, and determining the parameters of the lane imaging model of the current video frame.
In some embodiments, fitting each edge in the candidate edge set in the data-fitting parameter-determination step and determining the parameters of the lane imaging model of the current video frame includes: for every combination of two edges in the candidate edge set, determining one group of lane imaging model parameters using a data fitting method; based on the lane imaging model determined by each group of parameters, determining the number of edge lines in the candidate edge set for which the error between the fitting result of the edge line and the edge line is less than the preset error; and determining, as the parameters of the lane imaging model of the current video frame, the group of parameters for which the determined line count is maximal and greater than 4.
In some embodiments, fitting each edge in the candidate edge set in the data-fitting parameter-determination step and determining the parameters of the lane imaging model of the current video frame further includes: if no group of lane imaging model parameters exists for which the line count is maximal and greater than 4, taking the next video frame as the current video frame, and, for the new current video frame, performing the steps of determining the candidate edge set based on the detected edges, fitting each edge in the candidate edge set, and determining the parameters of the lane imaging model of the current video frame.
In some embodiments, determining one group of lane imaging model parameters using a data fitting method includes: determining the group of parameters using at least one of the following data fitting methods: least squares, the Hough transform, and maximum a posteriori estimation.
In some embodiments, in response to failing to obtain the parameters of the lane imaging model of the previous video frame from the database, fitting each edge in the candidate edge set in the data-fitting parameter-determination step and determining the parameters of the lane imaging model of the current video frame includes: in response to failing to obtain the parameters of the lane imaging model of the previous video frame from the database, determining the vanishing point parameters of the lane imaging model of the current video frame based on the calibrated extrinsic parameters of the camera that shot the video frame; and, when fitting each edge in the candidate edge set in the data-fitting parameter-determination step, determining the parameters of the lane imaging model of the current video frame using the vanishing point parameters.
In some embodiments, determining the candidate edge set based on the detected edges includes: determining the candidate edge set based on the number of pixels included in each detected edge; or determining the candidate edge set based on the number of pixels included in each detected edge and the blank area adjacent to each detected edge.
In some embodiments, determining the candidate edge set based on the number of pixels included in each detected edge and the blank area adjacent to each detected edge includes: sorting the edges by length, from longest to shortest, according to the number of pixels each detected edge includes, to obtain the edges sorted by length; selecting a predetermined number of edges according to the length-sorted order and adding them to the candidate edge set; sorting the edges by adjacent blank area, from largest to smallest, to obtain the edges sorted by adjacent blank area; and selecting a preset number of edges according to the blank-area-sorted order and adding them to the candidate edge set.
In some embodiments, the lane imaging model includes: u − u0 = A(v − v0) + B/(v − v0), where (u0, v0) is the position of the image vanishing point (v = v0 is the horizon), (u, v) is a coordinate point of an edge in the current video frame, and A and B are model coefficients; within the same frame, only the value of A differs between different lane lines.
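With the vanishing point (u0, v0) fixed, the model u − u0 = A(v − v0) + B/(v − v0) is linear in the coefficients A and B, so the points of an edge can be fitted by ordinary least squares (one of the fitting methods the application names). A minimal NumPy sketch, assuming the vanishing point is already known:

```python
import numpy as np

def fit_hyperbolic(u, v, u0, v0):
    """Least-squares fit of u - u0 = A*(v - v0) + B/(v - v0).
    With (u0, v0) fixed the model is linear in (A, B): the design
    matrix has one column (v - v0) and one column 1/(v - v0)."""
    dv = np.asarray(v, dtype=float) - v0
    X = np.column_stack([dv, 1.0 / dv])
    y = np.asarray(u, dtype=float) - u0
    (A, B), *_ = np.linalg.lstsq(X, y, rcond=None)
    return A, B

# Synthetic edge points generated from known coefficients; the fit
# should recover them.
u0, v0, A_true, B_true = 320.0, 200.0, 0.8, 500.0
v = np.arange(220, 400, 10, dtype=float)
u = u0 + A_true * (v - v0) + B_true / (v - v0)
A, B = fit_hyperbolic(u, v, u0, v0)
```

Because only A differs between lane lines in the same frame, a joint fit over several edges would share B and the vanishing point while giving each edge its own A; the single-edge version above is the simplest case.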
In some embodiments, the lane imaging model includes: u − u0 = Σ aᵢ(v − v0)^i, where (u0, v0) is the position of the image vanishing point (v = v0 is the horizon), (u, v) is a coordinate point of an edge in the current video frame, and aᵢ is the i-th coefficient of the Taylor series expansion of the hyperbolic model.
In some embodiments, the method further includes: in response to the number of edges whose calculated error is less than or equal to the preset error being less than 4, taking the next video frame as the current video frame, and performing the lane line detection method on the new current video frame.
In a second aspect, an embodiment of the present application provides a lane line detection device, comprising: an edge detection unit configured to detect edges in a current video frame; a set determination unit configured to determine a candidate edge set based on the detected edges; an edge fitting unit configured to fit each edge in the candidate edge set using a lane imaging model whose parameters have been determined; an error calculation unit configured to calculate, for each edge in the candidate edge set, the error between the fitting result of the edge and the edge; an edge selection unit configured to select the edges whose calculated error is less than or equal to a preset error; and a lane line determination unit configured to determine lane lines based on the fitting results of the selected edges in response to the number of selected edges being greater than or equal to 4.
In some embodiments, the parameters of the lane imaging model in the edge fitting unit are determined based on the following determination steps: in response to obtaining the parameters of the lane imaging model of the previous video frame from a database, determining the parameters of the lane imaging model of the previous video frame as the parameters of the lane imaging model of the current video frame; in response to failing to obtain the parameters of the lane imaging model of the previous video frame from the database, fitting each edge in the candidate edge set in a data-fitting parameter-determination step, and determining the parameters of the lane imaging model of the current video frame.
In some embodiments, the determination steps on which the parameters of the lane imaging model in the edge fitting unit are based further include: for every combination of two edges in the candidate edge set, determining one group of lane imaging model parameters using a data fitting method; based on the lane imaging model determined by each group of parameters, determining the number of edge lines in the candidate edge set for which the error between the fitting result of the edge line and the edge line is less than the preset error; and determining, as the parameters of the lane imaging model of the current video frame, the group of parameters for which the determined line count is maximal and greater than 4.
In some embodiments, the determination steps on which the parameters of the lane imaging model in the edge fitting unit are based further include: if no group of lane imaging model parameters exists for which the line count is maximal and greater than 4, taking the next video frame as the current video frame, and, for the new current video frame, performing the steps of determining the candidate edge set based on the detected edges, fitting each edge in the candidate edge set, and determining the parameters of the lane imaging model of the current video frame.
In some embodiments, the determination steps on which the parameters of the lane imaging model in the edge fitting unit are based further include: determining one group of lane imaging model parameters using at least one of the following data fitting methods: least squares, the Hough transform, and maximum a posteriori estimation.
In some embodiments, the determination steps on which the parameters of the lane imaging model in the edge fitting unit are based further include: in response to failing to obtain the parameters of the lane imaging model of the previous video frame from the database, determining the vanishing point parameters of the lane imaging model of the current video frame based on the calibrated extrinsic parameters of the camera that shot the video frame; and, when fitting each edge in the candidate edge set in the data-fitting parameter-determination step, determining the parameters of the lane imaging model of the current video frame using the vanishing point parameters.
In some embodiments, the set determination unit is further configured to: determine the candidate edge set based on the number of pixels included in each detected edge; or determine the candidate edge set based on the number of pixels included in each detected edge and the blank area adjacent to each detected edge.
In some embodiments, the set determination unit is further configured to: sort the edges by length, from longest to shortest, according to the number of pixels each detected edge includes, to obtain the edges sorted by length; select a predetermined number of edges according to the length-sorted order and add them to the candidate edge set; sort the edges by adjacent blank area, from largest to smallest, to obtain the edges sorted by adjacent blank area; and select a preset number of edges according to the blank-area-sorted order and add them to the candidate edge set.
In some embodiments, the lane imaging model in the edge fitting unit includes: u − u0 = A(v − v0) + B/(v − v0), where (u0, v0) is the position of the image vanishing point (v = v0 is the horizon), (u, v) is a coordinate point of an edge in the current video frame, and A and B are model coefficients; within the same frame, only the value of A differs between different lane lines.
In some embodiments, the lane imaging model in the edge fitting unit includes: u − u0 = Σ aᵢ(v − v0)^i, where (u0, v0) is the position of the image vanishing point (v = v0 is the horizon), (u, v) is a coordinate point of an edge in the current video frame, and aᵢ is the i-th coefficient of the Taylor series expansion of the hyperbolic model.
In some embodiments, the device further includes: a video frame updating unit configured to, in response to the number of edges whose calculated error is less than or equal to the preset error being less than 4, take the next video frame as the current video frame and perform the lane line detection method on the new current video frame.
In a third aspect, an embodiment of the present application provides a device, comprising: one or more processors; and a storage apparatus for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement any of the methods above.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored; when the program is executed by a processor, any of the methods above is implemented.
With the lane line detection method and device provided by the embodiments of the present application, the current video frame is first acquired; a candidate edge set is obtained based on the edges in the current video frame; then each edge in the candidate edge set is fitted using a lane imaging model whose parameters have been determined; then, for each edge in the candidate edge set, the error between the fitting result of the edge and the edge is calculated; finally, in response to the calculated errors being less than the preset error, lane lines are determined based on the fitting results of the edges. In this process, fitting results can be obtained based on the edges in the candidate edge set, multiple pieces of edge information can be used effectively for fitting, and the stability of the lane lines determined from the fitting results is increased.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent by reading the detailed description of non-limiting embodiments made with reference to the following drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application can be applied;
Fig. 2 is a flow diagram of one embodiment of the lane line detection method according to an embodiment of the present application;
Fig. 3a to Fig. 3f are schematic diagrams of an application scenario according to an embodiment of the present application;
Fig. 4 is a flow diagram of one embodiment of the method for determining the parameters of the lane imaging model of the current video frame according to an embodiment of the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the lane line detection device of the present application;
Fig. 6 is a structural schematic diagram of a computer system adapted to implement the server of an embodiment of the present application.
Detailed description of embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It can be understood that the specific embodiments described here are used only to explain the relevant invention, rather than to limit it. It should also be noted that, for convenience of description, only the parts relevant to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments can be combined with one another. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the lane line detection method or lane line detection device of the present application can be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and servers 105, 106. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the servers 105, 106. The network 104 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user 110 may use the terminal devices 101, 102, 103 to interact with the servers 105, 106 through the network 104 to receive or send messages and the like. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as electronic map applications, search engine applications, shopping applications, instant messaging tools, email clients, social platform software, and video playback applications.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices with a display screen and support for web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, and desktop computers. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above. They may be implemented as multiple software programs or software modules (for example, to provide distributed services), or as a single software program or software module. No specific limitation is made here.
The servers 105, 106 may be servers providing various services, such as background servers providing support for the terminal devices 101, 102, 103. A background server may process, for example analyze, store, or compute, the data submitted by the terminals, and push the analysis, storage, or computation results to the terminal devices.
It should be noted that, in practice, the lane line detection method provided by the embodiments of the present application may be executed by the terminal devices 101, 102, 103 or the servers 105, 106, and the lane line detection device may likewise be disposed in the terminal devices 101, 102, 103 or the servers 105, 106.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers according to implementation needs.
With continued reference to Fig. 2, Fig. 2 shows a process 200 of one embodiment of the lane line detection method according to the present application. The lane line detection method comprises the following steps:
Step 201: detect the edges in the current video frame.
In the present embodiment, the executing body on which the lane line detection method runs (such as the terminal or server shown in Fig. 1) can read the video shot by a camera from the camera of a local or remote electronic device, and take, as the current video frame, the video frame in the pulled video that currently needs to be processed to determine lane lines.
Then, the executing body can detect the edges in the current video frame. The purpose of detecting the edges in the current video frame is to mark the points in the digital image where the brightness changes sharply. Significant changes in image attributes usually reflect important events and changes in those attributes. These include discontinuities in depth, discontinuities in surface direction, changes in material properties, and changes in scene illumination.
The method for detecting the edges in the current video frame may be any method for detecting edges in video frames in the prior art or developed in the future; the present application does not limit this. For example, edges may be detected using search-based and zero-crossing-based edge detection methods.
A search-based edge detection method first computes the edge strength, usually expressed by a first derivative such as the gradient magnitude; it then estimates the local direction of the edge, usually the direction of the gradient, and uses this direction to find the local maxima of the gradient magnitude.
A zero-crossing-based method locates edges by finding the zero crossings of a second derivative obtained from the image, usually the zero crossings of the Laplacian operator or of a nonlinear differential expression.
Filtering is usually necessary as preprocessing for edge detection; Gaussian filtering is generally used.
Edge detection methods compute a measure of boundary strength, which is essentially different from smoothing filtering. Since many edge detection methods depend on the computation of image gradients, they estimate the gradients in the x-direction and the y-direction with different types of filters.
It should be appreciated that, for the sake of the detection results, the camera shooting the video usually needs to meet installation requirements. For example, the pitch angle and yaw angle of the camera should be within a certain range, so that the vanishing point (the intersection point in the image of the extension lines of the sides of a solid figure) is as close as possible to the image center, and the roll angle of the camera should not exceed 5 degrees.
Step 202: determine the candidate edge set based on the detected edges.
In the present embodiment, based on the edges detected in step 201, the detected edges may be used directly as the candidate edge set, or the detected edges may be screened and the screened edges used as the candidate edge set.
In an optional implementation of the present embodiment, determining the candidate edge set based on the detected edges may include: determining the candidate edge set based on the number of pixels included in each detected edge; or determining the candidate edge set based on the number of pixels included in each detected edge and the blank area adjacent to each detected edge.
In this implementation, the number of pixels included in each edge determines the length of each edge; then, according to the length of each edge, the edges that may be lane line edges can be determined, and these determined edges are added to the candidate edge set.
Considering that, in practical application scenarios, the blank area adjacent to a lane line is usually larger than the blank area adjacent to a non-lane line, on top of determining possible lane line edges by length, the edges that may be lane line edges can be further determined based on the size of the blank area adjacent to each edge, and the edges determined in the two passes are both added to the candidate edge set.
In some optional implementations of the present embodiment, determining the candidate edge set based on the number of pixels included in each detected edge and the blank area adjacent to each detected edge includes: sorting the edges by length, from longest to shortest, according to the number of pixels each detected edge includes, to obtain the edges sorted by length; selecting a predetermined number of edges according to the length-sorted order and adding them to the candidate edge set; sorting the edges by adjacent blank area, from largest to smallest, to obtain the edges sorted by adjacent blank area; and selecting a preset number of edges according to the blank-area-sorted order and adding them to the candidate edge set.
In this implementation, part of the candidate edges added to the candidate edge set can be determined from the ordering of the edges by length, and part from the ordering of the edges by adjacent blank area. The edges in the candidate edge set are the candidate lane line edges. The predetermined number and the preset number here can each be set empirically or manually.
In a specific embodiment, the 8 longest edges may be set as candidate edges, and the 8 edges with the largest adjacent blank areas may also be used as candidate edges, thereby obtaining the candidate edge set. It should be appreciated that the 8 longest edges may overlap with the 8 edges with the largest adjacent blank areas; therefore, the number of lines included in the candidate edge set is at least 8 and at most 16.
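The specific embodiment above (the 8 longest edges united with the 8 edges having the largest adjacent blank area, deduplicated) can be sketched as follows. Edges are represented here as simple records with hypothetical `length` and `blank_area` fields standing in for the pixel count and adjacent-blank-area measurements:

```python
def candidate_edges(edges, k_length=8, k_blank=8):
    """Union of the k longest edges and the k edges with the largest
    adjacent blank area; an edge ranking highly on both criteria is
    kept only once, so the result holds between k and 2k edges."""
    by_length = sorted(edges, key=lambda e: e["length"], reverse=True)[:k_length]
    by_blank = sorted(edges, key=lambda e: e["blank_area"], reverse=True)[:k_blank]
    seen, candidates = set(), []
    for e in by_length + by_blank:
        if e["id"] not in seen:
            seen.add(e["id"])
            candidates.append(e)
    return candidates

# 20 toy edges: edge i has length i and blank area (20 - i), so the two
# rankings pick opposite ends of the list and the union has 16 members.
edges = [{"id": i, "length": i, "blank_area": 20 - i} for i in range(20)]
cands = candidate_edges(edges)
```

When length and adjacent blank area are correlated (as the text suggests they often are for real lane lines), the two top-8 lists overlap and the set shrinks toward 8.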
Step 203: fit each edge in the candidate edge set using the lane imaging model whose parameters have been determined.
In the present embodiment, the executing body can fit each edge in the candidate edge set using the lane imaging model whose parameters have been determined. The lane imaging model can usually be realized using a function that simulates the road line, for example a straight-line equation or a polynomial.
Here, the parameters of the lane imaging model can be the parameters of the lane imaging model of a similar image frame, or parameters of the lane imaging model determined based on the data fitting result of the current image frame.
In some optional implementations of the present embodiment, the parameters of the lane imaging model can be determined based on the following steps: in response to obtaining the parameters of the lane imaging model of the previous video frame from a database, determining the parameters of the lane imaging model of the previous video frame as the parameters of the lane imaging model of the current video frame; in response to failing to obtain the parameters of the lane imaging model of the previous video frame from the database, fitting each edge in the candidate edge set in a data-fitting parameter-determination step, and determining the parameters of the lane imaging model of the current video frame.
In this implementation, due to the fitting obtained in the application according to each edge in the candidate edge set
As a result with reference to a plurality of edge, wide adaptability, and there is continuity, therefore can be on the side of current video frame between video frame
The parameter of the lane imaging model of a video frame is continued to use in edge fit procedure.
In some optional implementations of the present embodiment, fitting each edge in the candidate edge set in the data-fitting parameter-determination step to determine the parameters of the lane imaging model of the current video frame, in response to failing to obtain the parameters of the lane imaging model of the previous video frame from the database, includes: in response to failing to obtain the parameters of the lane imaging model of the previous video frame from the database, determining the vanishing-point parameter of the lane imaging model of the current video frame based on the calibrated external parameters of the camera that captured the video frame; and, when fitting each edge in the candidate edge set in the data-fitting parameter-determination step, determining the parameters of the lane imaging model of the current video frame using the vanishing-point parameter.
In this implementation, if calibrated external parameters of the camera that captured the video frame exist, the vanishing-point parameter of the lane imaging model can be determined from the external parameters, which reduces the amount of computation needed to determine the parameters of the lane imaging model and improves the efficiency of parameter determination.
In some optional implementations of the present embodiment, the lane imaging model includes: u-u0 = A(v-v0) + B/(v-v0), where (u0, v0) is the image vanishing-point position (v=v0 is the horizon), (u, v) is a coordinate point of an edge in the current video frame, and A and B are model coefficients; within the same frame, only the value of A differs between different lane lines.
In this implementation, the lane imaging model can simultaneously model straight segments and curves in the image frame, which improves the accuracy of the lane imaging model, widens its adaptability, and facilitates inter-frame tracking. In a specific example, the candidate edge set includes k groups of lane-line feature-point sets; with v0 known and the other parameter variances known, fitting the lane lines can be converted into a weighted least-squares fit, and the least-squares method is an efficient data-fitting method.
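With the vanishing point (u0, v0) known, the hyperbolic model is linear in A and B, so the weighted least-squares fit mentioned above reduces to one linear solve. The following is an illustrative sketch (not part of the disclosure; the function name and weighting convention are assumptions):

```python
import numpy as np

def fit_hyperbolic_model(u, v, u0, v0, w=None):
    """Least-squares fit of u - u0 = A*(v - v0) + B/(v - v0) for one edge.
    (u0, v0) is the known vanishing point; w holds optional per-point weights.
    The model is linear in (A, B), so an ordinary or weighted linear
    least-squares solve suffices."""
    dv = v - v0
    X = np.column_stack([dv, 1.0 / dv])   # design matrix for (A, B)
    y = u - u0
    if w is not None:                      # weighted least squares
        sw = np.sqrt(w)
        X, y = X * sw[:, None], y * sw
    (A, B), *_ = np.linalg.lstsq(X, y, rcond=None)
    return A, B
```

Rows at v = v0 (the horizon) must be excluded before calling, since 1/(v - v0) is undefined there.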
In some optional implementations of the present embodiment, the lane imaging model includes: u-u0 = ∑ ai(v-v0)^i, where (u0, v0) is the image vanishing-point position (v=v0 is the horizon), (u, v) is a coordinate point of an edge in the current video frame, and ai is the i-th coefficient of the Taylor series expansion of the hyperbolic model.
In this implementation, the hyperbolic model is u-u0 = A(v-v0) + B/(v-v0). By using the Taylor expansion of the hyperbolic model as the lane imaging model, the first-order parameter is eliminated; the resulting model can likewise model straight segments and curves in the image frame simultaneously, which also improves the accuracy of the lane imaging model, widens its adaptability, and facilitates inter-frame tracking.
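The Taylor-expanded form above is an ordinary polynomial in (v - v0), so it can be fitted with a plain polynomial least-squares fit. A minimal sketch under that assumption (function names illustrative, not from the disclosure):

```python
import numpy as np

def fit_taylor_model(u, v, u0, v0, degree=3):
    """Fit u - u0 = sum_i a_i * (v - v0)**i, the polynomial (Taylor-series)
    surrogate for the hyperbolic lane model, by least squares."""
    coeffs = np.polyfit(v - v0, u - u0, degree)   # highest degree first
    return coeffs[::-1]                            # return a_0, a_1, ..., a_degree

def eval_taylor_model(a, v, u0, v0):
    """Evaluate the fitted model at image rows v."""
    dv = v - v0
    return u0 + sum(ai * dv**i for i, ai in enumerate(a))
```

The degree is a free choice; a low degree (2 or 3) already captures both straight segments and gentle curves.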
Step 204: for each edge in the candidate edge set, calculate the error between the fitting result for the edge and the edge.
In the present embodiment, based on the fit of the lane imaging model with determined parameters to each edge in the candidate edge set, the above-mentioned executing subject may calculate, for each edge, the error between the fitting result for that edge and the edge. Here, the error may be a residual sum, a sum of absolute residuals, or a residual sum of squares.
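The three error forms named above can be computed from the same residual vector; a small sketch (the function name and the `kind` labels are illustrative assumptions):

```python
import numpy as np

def edge_fit_error(u_observed, u_fitted, kind="sse"):
    """Error between an edge and its fit, in one of the three forms the
    text mentions: residual sum ("sum"), sum of absolute residuals ("abs"),
    or residual sum of squares ("sse")."""
    r = np.asarray(u_observed, dtype=float) - np.asarray(u_fitted, dtype=float)
    if kind == "sum":
        return r.sum()
    if kind == "abs":
        return np.abs(r).sum()
    return (r ** 2).sum()   # residual sum of squares
```

Note that the plain residual sum can cancel positive and negative residuals, so the absolute or squared forms are the more discriminative thresholding choices.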
Step 205: select the edges whose calculated error is less than or equal to a predetermined error.
In the present embodiment, a calculated error within the predetermined error indicates that the fitting result matches an actual lane line; the fitting result for such an edge can therefore be taken as an estimated lane-line edge, from which the lane line is determined.
Step 206: in response to the number of selected edges being greater than or equal to 4, determine the lane line based on the fitting results of the selected edges.
In the present embodiment, considering that one lane has two lane lines and each lane line includes two edges, when the number of selected edges is greater than or equal to 4, the current image frame contains at least one lane. At this point, the lane line can be determined from the fitting results of the selected edges. When determining the lane line from the lane-line edges, the lane width, the position of the lane center, and the like may be considered to accept or reject the selected edges and determine the final lane line.
In some optional implementations of the present embodiment, the lane line detection method further includes: in response to the number of edges whose calculated error is within the predetermined error being less than 4, taking the next video frame as the current video frame and executing the lane line detection method on the new current video frame.
In this implementation, if fewer than 4 edges have a calculated error within the predetermined error, the edges detected in the current video frame do not cover a complete lane (for example, an image acquired while the vehicle is merging); the next video frame can therefore be taken as the current video frame, and the lane line detection method described above is executed on the new current video frame to determine the lane line.
An exemplary application scene of the lane line detection method of the present application is described below with reference to Fig. 3a to Fig. 3e, which show a schematic flow of one application scene of the lane line detection method according to the present application.
As shown in Fig. 3a, the lane line detection method 300 runs in an electronic device 310 and may include:
first, detecting the edges 302 in the current video frame 301, obtaining the edges in the current video frame as shown in Fig. 3a;
then, selecting, in descending order of the number of pixels included in each detected edge 302, a predetermined quantity of edges 303 and adding them to the candidate edge set 305, obtaining the candidate edge set as shown in Fig. 3b;
then, selecting, in descending order of the adjacent blank area of each detected edge 302, a preset quantity of edges 304 and adding them to the candidate edge set 305, obtaining the candidate edge set as shown in Fig. 3c;
then, fitting each edge in the candidate edge set 305 using the lane imaging model 306 with determined parameters;
then, for each edge in the candidate edge set 305, calculating the error 307 between the fitting result for the edge and the edge;
then, selecting the edges whose calculated error 307 is less than or equal to the predetermined error 308, obtaining the selected edges 309;
then, in response to the number of selected edges 309 being greater than or equal to 4, determining the lane line 311 based on the fitting results 310 of the selected edges (the fitting results of the selected edges as shown in Fig. 3d), obtaining the lane line as shown in Fig. 3e.
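The fit-threshold-decide portion of this scene can be summarized in a short, self-contained sketch. The edge representation and the model callables here are simplified stand-ins, not the disclosed implementation:

```python
def detect_lane_line(edges, err_thresh, model_fit, model_err, min_edges=4):
    """Minimal flow of Fig. 3a-3e: fit every candidate edge with the lane
    imaging model, keep the edges whose fitting error is within err_thresh,
    and report a lane line only when at least min_edges edges survive.
    `model_fit(edge)` returns a fitting result; `model_err(edge, fit)`
    returns the fitting error; both are assumed callables."""
    fits = [model_fit(e) for e in edges]
    kept = [(e, f) for e, f in zip(edges, fits) if model_err(e, f) <= err_thresh]
    if len(kept) >= min_edges:
        return [f for _, f in kept]    # fitting results of the selected edges
    return None                        # no complete lane: try the next frame
```

Returning `None` corresponds to the fallback in the text: advance to the next video frame and re-run detection.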
It should be appreciated that the application scene of the lane line detection method shown in Fig. 3 above is only an exemplary description of the lane line detection method and does not limit the method. For example, each step shown in Fig. 3 may be implemented in further detail.
The lane line detection method of the above embodiments of the present application may first detect the edges in the current video frame; then determine the candidate edge set based on the detected edges; then fit each edge in the candidate edge set using the lane imaging model with determined parameters; then, for each edge in the candidate edge set, calculate the error between the fitting result for the edge and the edge; then select the edges whose calculated error is less than or equal to the predetermined error; and finally, in response to the number of selected edges being greater than or equal to 4, determine the lane line based on the fitting results of the selected edges. In this process, because a plurality of edges in the candidate edge set are used for fitting, the stability of the fitting result is increased, the accuracy of the lane imaging model is improved, the adaptability of the lane imaging model is widened, and inter-frame tracking is facilitated. Moreover, the camera extrinsic calibration parameters need not be considered in filtering, so the occasions of use are unrestricted.
Referring to Fig. 4, it illustrates a flow chart of one embodiment of a method for determining the parameters of the lane imaging model of the current video frame according to the present application.
As shown in Fig. 4, the flow 400 of the method for determining the parameters of the lane imaging model of the current video frame of the present embodiment may include the following steps:
Step 401: combine every two edges in the candidate edge set, and determine a group of lane imaging model parameters using a data fitting method.
In this implementation, every two edges in the candidate edge set are combined and substituted into the lane imaging model with unknown parameters; solving for the unknown parameters yields a group of lane imaging model parameters.
Data fitting here, also called curve fitting, is a mathematical method of substituting existing data into a numerical expression. Scientific and engineering problems can acquire a number of discrete data points through methods such as sampling and experiments; from these data, one often wishes to obtain a continuous function (that is, a curve), or a denser set of discrete equations, that matches the given data. This process is called fitting.
In some optional implementations of the present embodiment, determining a group of lane imaging model parameters using a data fitting method includes: determining the group of lane imaging model parameters using at least one of the following data fitting methods: the least-squares method, the Hough transform, and maximum a posteriori estimation.
In this implementation, when fitting data with a linear model, the amount of data is generally larger than the number of unknowns of the equation system, yielding an overdetermined system whose coefficients may be incompatible between individual equations, leaving the system without an exact solution. The least-squares method finds the optimal solution of the overdetermined system under the constraint of minimizing the squared error; the singular-variance problem of the least-squares method can be addressed by weighting. The process of solving model parameters by maximum likelihood searches the parameter space for the parameter point that maximizes the probability of the feature-point set occurring.
Unlike the Hough transform, which votes from feature points into parameter space, maximum a posteriori estimation is a matching process from parameter space to the feature-point set. Illustratively, the data fitting method may include maximum a posteriori estimation realized on the basis of the least-squares method.
Step 402: based on the lane imaging model determined by each group of lane imaging model parameters, determine the number of edge lines in the candidate edge set whose error between the fitting result and the edge line is less than the predetermined error.
In the present embodiment, because each group of lane imaging model parameters is determined from a combination of two edges, the number of candidate edges to which each group of parameters applies differs. To determine the optimal lane imaging model parameters, the number of edge lines to which each group of parameters applies must be determined; from the number of edge lines, the group of lane imaging model parameters with the wider applicability can then be determined.
Step 403: determine the group of lane imaging model parameters with the maximum number of edge lines, that number being greater than 4, as the parameters of the lane imaging model of the current video frame.
In the present embodiment, the maximum determined number of edge lines ensures that the applicability of the lane imaging model is the widest, and a determined number of edge lines greater than 4 ensures that the lane imaging model parameters fit at least the 4 edges included in one lane.
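Steps 401 to 403 amount to a consensus search over pairwise fits, which can be sketched as follows. This is an illustrative sketch under stated assumptions: `fit_pair` and `fit_error` stand in for whichever lane imaging model and data fitting method is chosen, and are not part of the disclosure:

```python
import itertools

def select_model_params(edges, fit_pair, fit_error, err_thresh, min_edges=4):
    """Fit a parameter group from every pair of edges (step 401), count the
    edges each group explains within err_thresh (step 402), and keep the
    group with the most explained edges, provided that count exceeds
    min_edges (step 403). Returns (params, count), or (None, 0) when no
    group qualifies."""
    best_params, best_count = None, 0
    for e1, e2 in itertools.combinations(edges, 2):
        params = fit_pair(e1, e2)                          # step 401
        count = sum(1 for e in edges
                    if fit_error(params, e) < err_thresh)  # step 402
        if count > best_count:
            best_params, best_count = params, count
    if best_count > min_edges:                             # step 403
        return best_params, best_count
    return None, 0
```

The structure is essentially a deterministic, exhaustive variant of RANSAC: every two-edge hypothesis is scored by its inlier count, and the best-supported hypothesis wins.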
In some optional implementations of the present embodiment, fitting each edge in the candidate edge set in the data-fitting parameter-determination step to determine the parameters of the lane imaging model of the current video frame further includes: if no group of lane imaging model parameters has the maximum number of edge lines with that number greater than 4, taking the next video frame as the current video frame, and executing, on the current video frame, the determination of the candidate edge set based on the detected edges and the determination of the parameters of the lane imaging model of the current video frame based on fitting each edge in the candidate edge set.
In this implementation, if no group of lane imaging model parameters has a maximum number of edge lines greater than 4, the number of edge lines cannot cover the four edges included in one lane, and no complete lane exists in the current video frame. Therefore, the lane line can be determined based on the next video frame.
In the method for determining the parameters of the lane imaging model of the current video frame of the above embodiments of the present application, every two edges in the candidate edge set are combined and a group of model parameters is determined using a data fitting method; based on the lane imaging model determined by each group of parameters, the number of edge lines in the candidate edge set whose error between the fitting result and the edge line is less than the predetermined error is determined; and the group of lane imaging model parameters with the maximum number of edge lines, that number being greater than 4, is determined as the parameters of the lane imaging model of the current video frame. In this process, the lane imaging model parameters that fit the most edges are filtered out, improving the applicability of the determined lane imaging model parameters.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, an embodiment of the present application provides one embodiment of a lane line detection apparatus. The apparatus embodiment corresponds to the method embodiments shown in Fig. 2 to Fig. 4, and the apparatus may specifically be applied in various electronic devices.
As shown in Fig. 5, the lane line detection apparatus 500 of the present embodiment may include: an edge detection unit 510 configured to detect the edges in the current video frame; a set determination unit 520 configured to determine the candidate edge set based on the detected edges; an edge fitting unit 530 configured to fit each edge in the candidate edge set using the lane imaging model with determined parameters; an error calculation unit 540 configured to calculate, for each edge in the candidate edge set, the error between the fitting result for the edge and the edge; an edge selection unit 550 configured to select the edges whose calculated error is less than or equal to the predetermined error; and a lane line determination unit 560 configured to determine the lane line based on the fitting results of the selected edges in response to the number of selected edges being greater than or equal to 4.
In some embodiments, the parameters of the lane imaging model in the edge fitting unit 530 are determined based on the following steps: in response to obtaining the parameters of the lane imaging model of the previous video frame from the database, the parameters of the lane imaging model of the previous video frame are determined as the parameters of the lane imaging model of the current video frame; in response to failing to obtain the parameters of the lane imaging model of the previous video frame from the database, each edge in the candidate edge set is fitted in the data-fitting parameter-determination step to determine the parameters of the lane imaging model of the current video frame.
In some embodiments, the determination step on which the parameters of the lane imaging model in the edge fitting unit 530 are based further includes: combining every two edges in the candidate edge set and determining a group of lane imaging model parameters using a data fitting method; based on the lane imaging model determined by each group of parameters, determining the number of edge lines in the candidate edge set whose error between the fitting result and the edge line is less than the predetermined error; and determining the group of lane imaging model parameters with the maximum number of edge lines, that number being greater than 4, as the parameters of the lane imaging model of the current video frame.
In some embodiments, the determination step on which the parameters of the lane imaging model in the edge fitting unit 530 are based further includes: if no group of lane imaging model parameters has the maximum number of edge lines with that number greater than 4, taking the next video frame as the current video frame, and executing, on the current video frame, the determination of the candidate edge set based on the detected edges and the determination of the parameters of the lane imaging model of the current video frame based on fitting each edge in the candidate edge set.
In some embodiments, the determination step on which the parameters of the lane imaging model in the edge fitting unit 530 are based further includes: determining a group of lane imaging model parameters using at least one of the following data fitting methods: the least-squares method, the Hough transform, and maximum a posteriori estimation.
In some embodiments, the determination step on which the parameters of the lane imaging model in the edge fitting unit 530 are based further includes: in response to failing to obtain the parameters of the lane imaging model of the previous video frame from the database, determining the vanishing-point parameter of the lane imaging model of the current video frame based on the calibrated external parameters of the camera that captured the video frame; and, when fitting each edge in the candidate edge set in the data-fitting parameter-determination step, determining the parameters of the lane imaging model of the current video frame using the vanishing-point parameter.
In some embodiments, the set determination unit 520 is further configured to: determine the candidate edge set based on the number of pixels included in each detected edge; or determine the candidate edge set based on the number of pixels included in each detected edge and the adjacent blank area of each detected edge.
In some embodiments, the set determination unit 520 is further configured to: sort the detected edges by length, in descending order of the number of pixels included in each edge, to obtain the length-sorted edges; select a predetermined quantity of edges according to the length-sorted order and add them to the candidate edge set; sort the detected edges in descending order of the adjacent blank area of each edge to obtain the edges sorted by adjacent blank area; and select a preset quantity of edges based on the adjacent-blank-area-sorted order and add them to the candidate edge set.
In some embodiments, the lane imaging model in the edge fitting unit 530 includes: u-u0 = A(v-v0) + B/(v-v0), where (u0, v0) is the image vanishing-point position (v=v0 is the horizon), (u, v) is a coordinate point of an edge in the current video frame, and A and B are model coefficients; within the same frame, only the value of A differs between different lane lines.
In some embodiments, the lane imaging model in the edge fitting unit 530 includes: u-u0 = ∑ ai(v-v0)^i, where (u0, v0) is the image vanishing-point position (v=v0 is the horizon), (u, v) is a coordinate point of an edge in the current video frame, and ai is the i-th coefficient of the Taylor series expansion of the hyperbolic model.
In some embodiments, the apparatus further includes: a video frame updating unit 570 configured to, in response to the number of edges whose calculated error is within the predetermined error being less than 4, take the next video frame as the current video frame and execute the lane line detection method on the new current video frame.
It should be appreciated that the units recorded in the apparatus 500 correspond to the steps in the methods described with reference to Fig. 2 to Fig. 4. Accordingly, the operations and features described above for the methods are equally applicable to the apparatus 500 and the units included therein, and are not repeated here.
Referring to Fig. 6, it illustrates a structural schematic diagram of a computer system 600 of a server suitable for implementing the embodiments of the present application. The terminal device or server shown in Fig. 6 is only an example and should not impose any restriction on the functions or scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded into a random access memory (RAM) 603 from a storage portion 608. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, ROM 602, and RAM 603 are connected to each other through a bus 604; an input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a loudspeaker, and the like; a storage portion 608 including a hard disk and the like; and a communications portion 609 including a network interface card such as a LAN card or a modem. The communications portion 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory, is mounted on the driver 610 as needed, so that a computer program read therefrom is installed into the storage portion 608 as needed.
In particular, in accordance with embodiments of the present disclosure, the process described above with reference to the flow chart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communications portion 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-mentioned functions defined in the method of the present application are executed. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In this application, a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in connection with an instruction execution system, apparatus, or device. In this application, a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, in which computer-readable program code is carried. The propagated data signal can take various forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a computer-readable medium can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to wireless, electric wire, optical cable, RF, or any appropriate combination of the above.
The flow charts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to the various embodiments of the present application. In this regard, each box in the flow chart or block diagram may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logic function. It should also be noted that in some alternative implementations, the functions marked in the boxes may occur in a different order than that indicated in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the opposite order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flow charts, and combinations of boxes in the block diagrams and/or flow charts, can be realized by a dedicated hardware-based system that executes the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application can be realized by way of software or by way of hardware. The described units can also be set in a processor; for example, it can be described as: a processor including an edge detection unit, a set determination unit, an edge fitting unit, an error calculation unit, an edge selection unit, and a lane line determination unit. The titles of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the edge detection unit may also be described as "a unit for detecting the edges in the current video frame".
As another aspect, the present application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The above computer-readable medium carries one or more programs, and when the one or more programs are executed by the apparatus, the apparatus is caused to: detect the edges in the current video frame; determine the candidate edge set based on the detected edges; fit each edge in the candidate edge set using the lane imaging model with determined parameters; calculate, for each edge in the candidate edge set, the error between the fitting result for the edge and the edge; select the edges whose calculated error is less than or equal to the predetermined error; and, in response to the number of selected edges being greater than or equal to 4, determine the lane line based on the fitting results of the selected edges.
The above description is only a preferred embodiment of the present application and an explanation of the applied technical principles. Those skilled in the art should appreciate that the scope of the invention involved in the present application is not limited to technical schemes formed by the specific combination of the above technical features; it should also cover, without departing from the above inventive concept, other technical schemes formed by any combination of the above technical features or their equivalent features, for example, technical schemes formed by mutually replacing the above features with (but not limited to) technical features with similar functions disclosed herein.
Claims (24)
1. A lane line detection method, comprising:
detecting the edges in the current video frame;
determining a candidate edge set based on the detected edges;
fitting each edge in the candidate edge set using a lane imaging model with determined parameters;
for each edge in the candidate edge set, calculating the error between the fitting result for the edge and the edge;
selecting the edges whose calculated error is less than or equal to a predetermined error;
in response to the number of selected edges being greater than or equal to 4, determining a lane line based on the fitting results of the selected edges.
2. The method according to claim 1, wherein the parameters of the lane imaging model are determined based on the following steps:
in response to obtaining the parameters of the lane imaging model of the previous video frame from a database, determining the parameters of the lane imaging model of the previous video frame as the parameters of the lane imaging model of the current video frame;
in response to failing to obtain the parameters of the lane imaging model of the previous video frame from the database, fitting each edge in the candidate edge set in a data-fitting parameter-determination step, and determining the parameters of the lane imaging model of the current video frame.
3. The method according to claim 2, wherein fitting each edge in the candidate edge set in the data-fitting parameter-determination step and determining the parameters of the lane imaging model of the current video frame comprises:
combining every two edges in the candidate edge set, and determining a group of lane imaging model parameters using a data fitting method;
based on the lane imaging model determined by each group of lane imaging model parameters, determining the number of edge lines in the candidate edge set whose error between the fitting result and the edge line is less than the predetermined error;
determining the group of lane imaging model parameters with the maximum number of edge lines, that number being greater than 4, as the parameters of the lane imaging model of the current video frame.
4. The method according to claim 3, wherein fitting each edge in the candidate edge set in the data-fitting parameter-determination step and determining the parameters of the lane imaging model of the current video frame further comprises:
if no group of lane imaging model parameters has the maximum number of edge lines with that number greater than 4, taking the next video frame as the current video frame, and executing, on the current video frame, the determining of the candidate edge set based on the detected edges and the determining of the parameters of the lane imaging model of the current video frame based on fitting each edge in the candidate edge set.
5. The method according to claim 3, wherein determining one group of parameters of the lane imaging model using a data fitting method comprises:
determining the group of parameters of the lane imaging model using at least one of the following data fitting methods: least squares, Hough transform, and maximum a posteriori estimation.
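Because the hyperbolic model of claim 9, u - u0 = A(v - v0) + B/(v - v0), is linear in A and B, the least-squares option of claim 5 reduces to solving 2x2 normal equations. A sketch under that assumption, with (u0, v0) taken as already known:

```python
def fit_hyperbolic_ls(points, u0, v0):
    """Least-squares fit of A, B in  u - u0 = A*(v - v0) + B/(v - v0).
    Design columns: x1 = (v - v0), x2 = 1/(v - v0); target y = u - u0."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for u, v in points:
        x1, x2, y = v - v0, 1.0 / (v - v0), u - u0
        s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
        b1 += x1 * y;   b2 += x2 * y
    # solve [s11 s12; s12 s22] [A; B] = [b1; b2] by Cramer's rule
    det = s11 * s22 - s12 * s12
    A = (b1 * s22 - s12 * b2) / det
    B = (s11 * b2 - s12 * b1) / det
    return A, B
```

Points with v close to v0 (near the horizon) make 1/(v - v0) blow up, so in practice such rows would likely be excluded or down-weighted.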
6. The method according to claim 3, wherein, in response to failing to obtain the parameters of the lane imaging model of the previous video frame from the database, the data-fitting parameter determination step of fitting each edge in the candidate edge set to determine the parameters of the lane imaging model of the current video frame comprises:
in response to failing to obtain the parameters of the lane imaging model of the previous video frame from the database, determining the vanishing-point parameter of the lane imaging model of the current video frame based on the calibrated external parameters of the camera that captured the video frame;
when fitting each edge in the candidate edge set in the data-fitting parameter determination step, determining the parameters of the lane imaging model of the current video frame using the vanishing-point parameter.
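One plausible reading of claim 6 is that the vanishing point is the image of the lane direction under the calibrated camera rotation. A pinhole-model sketch; the lane direction, matrix layout, and function name are assumptions, not taken from the patent:

```python
def vanishing_point(K, R, direction=(0.0, 0.0, 1.0)):
    """Project a world-frame lane direction through the extrinsic rotation R
    and the intrinsic matrix K to get the image vanishing point (u0, v0).
    Translation is irrelevant: vanishing points depend only on direction."""
    # d_cam = R @ direction
    d = [sum(R[i][j] * direction[j] for j in range(3)) for i in range(3)]
    # homogeneous image point p = K @ d_cam
    p = [sum(K[i][j] * d[j] for j in range(3)) for i in range(3)]
    return p[0] / p[2], p[1] / p[2]
```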
7. The method according to claim 1, wherein determining the candidate edge set based on the detected edges comprises:
determining the candidate edge set based on the number of pixels included in each of the detected edges; or
determining the candidate edge set based on the number of pixels included in each of the detected edges and the adjacent blank region of each of the detected edges.
8. The method according to claim 7, wherein determining the candidate edge set based on the number of pixels included in each of the detected edges and the adjacent blank region of each of the detected edges comprises:
sorting the detected edges by length, from the largest number of included pixels to the smallest, to obtain the length-sorted edges;
selecting a predetermined number of edges in the length-sorted order and adding them to the candidate edge set;
sorting the detected edges by adjacent blank region, from largest to smallest, to obtain the edges sorted by adjacent blank region;
selecting a preset number of edges in the blank-region-sorted order and adding them to the candidate edge set.
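The two-pass selection of claim 8 might look like the following sketch, where edges are sequences of pixels and `blank_of` is a hypothetical callable returning an edge's adjacent blank region:

```python
def build_candidate_set(edges, blank_of, k_len, k_blank):
    """Claim 8: take the top-k_len edges by pixel count and the top-k_blank
    edges by adjacent blank region, merged into one candidate set."""
    by_length = sorted(edges, key=len, reverse=True)[:k_len]
    by_blank = sorted(edges, key=blank_of, reverse=True)[:k_blank]
    candidates = []
    for e in by_length + by_blank:  # preserve order, drop duplicates
        if e not in candidates:
            candidates.append(e)
    return candidates
```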
9. The method according to claim 1, wherein the lane imaging model comprises:
u - u0 = A(v - v0) + B/(v - v0), where (u0, v0) is the image vanishing point position (v = v0 is the horizon), (u, v) is a coordinate point of an edge in the current video frame, and A and B are model coefficients; within the same frame, different lane lines differ only in the value of A.
10. The method according to claim 1, wherein the lane imaging model comprises: u - u0 = Σ ai(v - v0)^i, where (u0, v0) is the image vanishing point position (v = v0 is the horizon), (u, v) is a coordinate point of an edge in the current video frame, and ai is the i-th coefficient of the Taylor series expansion of the hyperbolic model.
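The polynomial model of claim 10 can be evaluated efficiently with Horner's scheme. A small sketch; the coefficient ordering (`coeffs[i]` = ai) is an assumption:

```python
def lane_u(v, u0, v0, coeffs):
    """Evaluate claim 10's model  u = u0 + sum_i a_i * (v - v0)**i
    via Horner's scheme, with coeffs = [a0, a1, a2, ...]."""
    x = v - v0
    acc = 0.0
    for a in reversed(coeffs):
        acc = acc * x + a
    return u0 + acc
```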
11. The method according to claim 1, wherein the method further comprises:
in response to the number of edges whose calculated error does not exceed the preset error being less than 4, taking the next video frame as the current video frame and performing the lane line detection method on the new current video frame.
12. A lane line detection device, comprising:
an edge detection unit configured to detect edges in a current video frame;
a set determination unit configured to determine a candidate edge set based on the detected edges;
an edge fitting unit configured to fit each edge in the candidate edge set using a lane imaging model whose parameters have been determined;
an error calculation unit configured to calculate, for each edge in the candidate edge set, the error between the fitting result of the edge and the edge;
an edge selection unit configured to select the edges whose calculated error is less than or equal to a preset error;
a lane line determination unit configured to determine, in response to the quantity of the selected edges being greater than or equal to 4, lane lines based on the fitting results of the selected edges.
13. The device according to claim 12, wherein the parameters of the lane imaging model in the edge fitting unit are determined based on the following steps:
in response to obtaining the parameters of the lane imaging model of a previous video frame from a database, determining the parameters of the lane imaging model of the previous video frame as the parameters of the lane imaging model of the current video frame;
in response to failing to obtain the parameters of the lane imaging model of the previous video frame from the database, performing a data-fitting parameter determination step: fitting each edge in the candidate edge set to determine the parameters of the lane imaging model of the current video frame.
14. The device according to claim 13, wherein the steps based on which the parameters of the lane imaging model in the edge fitting unit are determined further comprise:
for each combination of two edges in the candidate edge set, determining one group of parameters of the lane imaging model using a data fitting method;
for the lane imaging model determined by each group of parameters, determining the fitting result of each edge line in the candidate edge set and counting the number of edge lines whose fitting error is less than a preset error;
determining the group of parameters whose line count is the largest and greater than 4 as the parameters of the lane imaging model of the current video frame.
15. The device according to claim 14, wherein the steps based on which the parameters of the lane imaging model in the edge fitting unit are determined further comprise:
if no group of parameters whose line count is the largest and greater than 4 exists, taking the next video frame as the current video frame, and performing on it the step of determining the candidate edge set based on the detected edges and the step of fitting each edge in the candidate edge set to determine the parameters of the lane imaging model of the current video frame.
16. The device according to claim 14, wherein the steps based on which the parameters of the lane imaging model in the edge fitting unit are determined further comprise:
determining one group of parameters of the lane imaging model using at least one of the following data fitting methods: least squares, Hough transform, and maximum a posteriori estimation.
17. The device according to claim 14, wherein the steps based on which the parameters of the lane imaging model in the edge fitting unit are determined further comprise:
in response to failing to obtain the parameters of the lane imaging model of the previous video frame from the database, determining the vanishing-point parameter of the lane imaging model of the current video frame based on the calibrated external parameters of the camera that captured the video frame;
when fitting each edge in the candidate edge set in the data-fitting parameter determination step, determining the parameters of the lane imaging model of the current video frame using the vanishing-point parameter.
18. The device according to claim 12, wherein the set determination unit is further configured to:
determine the candidate edge set based on the number of pixels included in each of the detected edges; or
determine the candidate edge set based on the number of pixels included in each of the detected edges and the adjacent blank region of each of the detected edges.
19. The device according to claim 18, wherein the set determination unit is further configured to:
sort the detected edges by length, from the largest number of included pixels to the smallest, to obtain the length-sorted edges;
select a predetermined number of edges in the length-sorted order and add them to the candidate edge set;
sort the detected edges by adjacent blank region, from largest to smallest, to obtain the edges sorted by adjacent blank region;
select a preset number of edges in the blank-region-sorted order and add them to the candidate edge set.
20. The device according to claim 12, wherein the lane imaging model in the edge fitting unit comprises:
u - u0 = A(v - v0) + B/(v - v0), where (u0, v0) is the image vanishing point position (v = v0 is the horizon), (u, v) is a coordinate point of an edge in the current video frame, and A and B are model coefficients; within the same frame, different lane lines differ only in the value of A.
21. The device according to claim 12, wherein the lane imaging model in the edge fitting unit comprises: u - u0 = Σ ai(v - v0)^i, where (u0, v0) is the image vanishing point position (v = v0 is the horizon), (u, v) is a coordinate point of an edge in the current video frame, and ai is the i-th coefficient of the Taylor series expansion of the hyperbolic model.
22. The device according to claim 12, wherein the device further comprises:
a video frame updating unit configured to, in response to the number of edges whose calculated error does not exceed the preset error being less than 4, take the next video frame as the current video frame and perform the lane line detection method on the new current video frame.
23. A server, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-11.
24. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-11.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111106274.5A CN113792690B (en) | 2018-09-30 | 2018-09-30 | Lane line detection method and device |
CN201811159602.6A CN109300139B (en) | 2018-09-30 | 2018-09-30 | Lane line detection method and device |
CN202111105791.0A CN113793356B (en) | 2018-09-30 | 2018-09-30 | Lane line detection method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811159602.6A CN109300139B (en) | 2018-09-30 | 2018-09-30 | Lane line detection method and device |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111105791.0A Division CN113793356B (en) | 2018-09-30 | 2018-09-30 | Lane line detection method and device |
CN202111106274.5A Division CN113792690B (en) | 2018-09-30 | 2018-09-30 | Lane line detection method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109300139A true CN109300139A (en) | 2019-02-01 |
CN109300139B CN109300139B (en) | 2021-10-15 |
Family
ID=65161420
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111106274.5A Active CN113792690B (en) | 2018-09-30 | 2018-09-30 | Lane line detection method and device |
CN202111105791.0A Active CN113793356B (en) | 2018-09-30 | 2018-09-30 | Lane line detection method and device |
CN201811159602.6A Active CN109300139B (en) | 2018-09-30 | 2018-09-30 | Lane line detection method and device |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111106274.5A Active CN113792690B (en) | 2018-09-30 | 2018-09-30 | Lane line detection method and device |
CN202111105791.0A Active CN113793356B (en) | 2018-09-30 | 2018-09-30 | Lane line detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (3) | CN113792690B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109934169A (en) * | 2019-03-13 | 2019-06-25 | 东软睿驰汽车技术(沈阳)有限公司 | A kind of Lane detection method and device |
CN112050821A (en) * | 2020-09-11 | 2020-12-08 | 湖北亿咖通科技有限公司 | Lane line aggregation method |
CN112560680A (en) * | 2020-12-16 | 2021-03-26 | 北京百度网讯科技有限公司 | Lane line processing method and device, electronic device and storage medium |
CN113793356A (en) * | 2018-09-30 | 2021-12-14 | 百度在线网络技术(北京)有限公司 | Lane line detection method and device |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08329383A (en) * | 1995-06-05 | 1996-12-13 | Nec Corp | Means and method for detecting lane change |
US20120148094A1 (en) * | 2010-12-09 | 2012-06-14 | Chung-Hsien Huang | Image based detecting system and method for traffic parameters and computer program product thereof |
KR20130076108A (en) * | 2011-12-28 | 2013-07-08 | 전자부품연구원 | Lane departure warning system |
CN104008387A (en) * | 2014-05-19 | 2014-08-27 | 山东科技大学 | Lane line detection method based on feature point piecewise linear fitting |
CN105069415A (en) * | 2015-07-24 | 2015-11-18 | 深圳市佳信捷技术股份有限公司 | Lane line detection method and device |
CN105760812A (en) * | 2016-01-15 | 2016-07-13 | 北京工业大学 | Hough transform-based lane line detection method |
CN106326822A (en) * | 2015-07-07 | 2017-01-11 | 北京易车互联信息技术有限公司 | Method and device for detecting lane line |
CN106384085A (en) * | 2016-08-31 | 2017-02-08 | 浙江众泰汽车制造有限公司 | Calculation method for yaw angle of unmanned vehicle |
CN106774328A (en) * | 2016-12-26 | 2017-05-31 | 广州大学 | A kind of automated driving system and method based on road Identification |
CN107832732A (en) * | 2017-11-24 | 2018-03-23 | 河南理工大学 | Method for detecting lane lines based on ternary tree traversal |
CN107909007A (en) * | 2017-10-27 | 2018-04-13 | 上海识加电子科技有限公司 | Method for detecting lane lines and device |
CN108009524A (en) * | 2017-12-25 | 2018-05-08 | 西北工业大学 | A kind of method for detecting lane lines based on full convolutional network |
CN108519605A (en) * | 2018-04-09 | 2018-09-11 | 重庆邮电大学 | Curb detection method based on laser radar and video camera |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6819779B1 (en) * | 2000-11-22 | 2004-11-16 | Cognex Corporation | Lane detection system and apparatus |
US7409092B2 (en) * | 2002-06-20 | 2008-08-05 | Hrl Laboratories, Llc | Method and apparatus for the surveillance of objects in images |
CN101470801B (en) * | 2007-12-24 | 2011-06-01 | 财团法人车辆研究测试中心 | Vehicle shift inspection method |
CN102208019B (en) * | 2011-06-03 | 2013-01-09 | 东南大学 | Method for detecting lane change of vehicle based on vehicle-mounted camera |
CN102314599A (en) * | 2011-10-11 | 2012-01-11 | 东华大学 | Identification and deviation-detection method for lane |
CN102663744B (en) * | 2012-03-22 | 2015-07-08 | 杭州电子科技大学 | Complex road detection method under gradient point pair constraint |
CN104008645B (en) * | 2014-06-12 | 2015-12-09 | 湖南大学 | One is applicable to the prediction of urban road lane line and method for early warning |
CN104268860B (en) * | 2014-09-17 | 2017-10-17 | 电子科技大学 | A kind of method for detecting lane lines |
CN105320927B (en) * | 2015-03-25 | 2018-11-23 | 中科院微电子研究所昆山分所 | Method for detecting lane lines and system |
CN105741559B (en) * | 2016-02-03 | 2018-08-31 | 安徽清新互联信息科技有限公司 | A kind of illegal occupancy Emergency Vehicle Lane detection method based on track line model |
CN108052880B (en) * | 2017-11-29 | 2021-09-28 | 南京大学 | Virtual and real lane line detection method for traffic monitoring scene |
CN108280450B (en) * | 2017-12-29 | 2020-12-29 | 安徽农业大学 | Expressway pavement detection method based on lane lines |
CN113792690B (en) * | 2018-09-30 | 2023-06-23 | 百度在线网络技术(北京)有限公司 | Lane line detection method and device |
2018
- 2018-09-30 CN CN202111106274.5A patent/CN113792690B/en active Active
- 2018-09-30 CN CN202111105791.0A patent/CN113793356B/en active Active
- 2018-09-30 CN CN201811159602.6A patent/CN109300139B/en active Active
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08329383A (en) * | 1995-06-05 | 1996-12-13 | Nec Corp | Means and method for detecting lane change |
US20120148094A1 (en) * | 2010-12-09 | 2012-06-14 | Chung-Hsien Huang | Image based detecting system and method for traffic parameters and computer program product thereof |
KR20130076108A (en) * | 2011-12-28 | 2013-07-08 | 전자부품연구원 | Lane departure warning system |
CN104008387A (en) * | 2014-05-19 | 2014-08-27 | 山东科技大学 | Lane line detection method based on feature point piecewise linear fitting |
CN106326822A (en) * | 2015-07-07 | 2017-01-11 | 北京易车互联信息技术有限公司 | Method and device for detecting lane line |
CN105069415A (en) * | 2015-07-24 | 2015-11-18 | 深圳市佳信捷技术股份有限公司 | Lane line detection method and device |
CN105760812A (en) * | 2016-01-15 | 2016-07-13 | 北京工业大学 | Hough transform-based lane line detection method |
CN106384085A (en) * | 2016-08-31 | 2017-02-08 | 浙江众泰汽车制造有限公司 | Calculation method for yaw angle of unmanned vehicle |
CN106774328A (en) * | 2016-12-26 | 2017-05-31 | 广州大学 | A kind of automated driving system and method based on road Identification |
CN107909007A (en) * | 2017-10-27 | 2018-04-13 | 上海识加电子科技有限公司 | Method for detecting lane lines and device |
CN107832732A (en) * | 2017-11-24 | 2018-03-23 | 河南理工大学 | Method for detecting lane lines based on ternary tree traversal |
CN108009524A (en) * | 2017-12-25 | 2018-05-08 | 西北工业大学 | A kind of method for detecting lane lines based on full convolutional network |
CN108519605A (en) * | 2018-04-09 | 2018-09-11 | 重庆邮电大学 | Curb detection method based on laser radar and video camera |
Non-Patent Citations (4)
Title |
---|
JONGIN SON等: "Real-time illumination invariant lane detection for lane departure warning system", 《EXPERT SYSTEMS WITH APPLICATIONS》 * |
MINGFA LI等: "Lane Detection Based on Connection of Various Feature Extraction Methods", 《ADVANCES IN MULTIMEDIA》 * |
LI Chao et al.: "A real-time lane line detection algorithm based on inter-frame association", Computer Science * 
WANG Xiaojin et al.: "Lane line recognition based on vanishing point detection and piecewise linear model", Mechatronics *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113793356A (en) * | 2018-09-30 | 2021-12-14 | 百度在线网络技术(北京)有限公司 | Lane line detection method and device |
CN113792690A (en) * | 2018-09-30 | 2021-12-14 | 百度在线网络技术(北京)有限公司 | Lane line detection method and device |
CN109934169A (en) * | 2019-03-13 | 2019-06-25 | 东软睿驰汽车技术(沈阳)有限公司 | A kind of Lane detection method and device |
CN112050821A (en) * | 2020-09-11 | 2020-12-08 | 湖北亿咖通科技有限公司 | Lane line polymerization method |
CN112050821B (en) * | 2020-09-11 | 2021-08-20 | 湖北亿咖通科技有限公司 | Lane line polymerization method |
CN112560680A (en) * | 2020-12-16 | 2021-03-26 | 北京百度网讯科技有限公司 | Lane line processing method and device, electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113792690B (en) | 2023-06-23 |
CN113793356B (en) | 2023-06-23 |
CN113793356A (en) | 2021-12-14 |
CN113792690A (en) | 2021-12-14 |
CN109300139B (en) | 2021-10-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108898086B (en) | Video image processing method and device, computer readable medium and electronic equipment | |
CN109300139A (en) | Method for detecting lane lines and device | |
CN108280477B (en) | Method and apparatus for clustering images | |
CN108090916B (en) | Method and apparatus for tracking the targeted graphical in video | |
CN110400363A (en) | Map constructing method and device based on laser point cloud | |
CN108491816A (en) | The method and apparatus for carrying out target following in video | |
CN109614935A (en) | Car damage identification method and device, storage medium and electronic equipment | |
CN113607185B (en) | Lane line information display method, lane line information display device, electronic device, and computer-readable medium | |
CN109740588A (en) | The X-ray picture contraband localization method reassigned based on the response of Weakly supervised and depth | |
CN110378175A (en) | The recognition methods of road edge and device | |
CN109118456A (en) | Image processing method and device | |
CN110390706A (en) | A kind of method and apparatus of object detection | |
CN111784774A (en) | Target detection method and device, computer readable medium and electronic equipment | |
CN113850838A (en) | Ship voyage intention acquisition method and device, computer equipment and storage medium | |
CN109901988A (en) | A kind of page elements localization method and device for automatic test | |
CN109949414A (en) | The construction method and device of indoor map | |
CN108171167B (en) | Method and apparatus for exporting image | |
CN111291715B (en) | Vehicle type identification method based on multi-scale convolutional neural network, electronic device and storage medium | |
CN110110696B (en) | Method and apparatus for processing information | |
CN110657760B (en) | Method and device for measuring space area based on artificial intelligence and storage medium | |
CN108492284A (en) | Method and apparatus for the perspective shape for determining image | |
CN110321854B (en) | Method and apparatus for detecting target object | |
CN111340015A (en) | Positioning method and device | |
CN113255819B (en) | Method and device for identifying information | |
CN110634155A (en) | Target detection method and device based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20211011 Address after: 100176 101, floor 1, building 1, yard 7, Ruihe West 2nd Road, economic and Technological Development Zone, Daxing District, Beijing Applicant after: Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Address before: 100085 third floor, baidu building, No. 10, Shangdi 10th Street, Haidian District, Beijing Applicant before: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) Co.,Ltd. |
|
TA01 | Transfer of patent application right |