CN109426800A - Lane line detection method and device - Google Patents
Lane line detection method and device
- Publication number
- CN109426800A (application number CN201810688772.7A)
- Authority
- CN
- China
- Legal status
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
Abstract
The present invention discloses a lane line detection method and device, to solve the inaccurate positioning of prior-art lane line detection schemes. The method comprises: a lane line detection apparatus obtains current perception data of the driving environment of a vehicle, the current perception data including current frame image data and current positioning data; obtains lane line template data, the lane line template data being the lane line detection result data produced by the previous lane line detection process; extracts current lane line image data from the perception data; and determines current lane line detection result data from the lane line image data and the lane line template data, the current lane line detection result data including data expressing the relative position of the vehicle and the lane lines.
Description
Technical field
The present invention relates to the field of computer vision, and in particular to a lane line detection method and device.
Background technique
At present, one of the main research goals of Advanced Driver Assistance Systems (ADAS) is to improve the safety of the vehicle itself and of vehicle driving, and to reduce road accidents. Intelligent vehicles and driverless vehicles are expected to address road safety, traffic problems, and passenger comfort. Among the tasks of an intelligent or autonomous vehicle, lane line detection is a complex and challenging one. Lane lines, as a main component of the road, provide a reference for the autonomous vehicle and guide safe driving. Lane line detection covers locating the road, the relative position between the vehicle and the lanes, and the vehicle's direction of travel.
In current technical schemes, lane line detection is usually realized from images obtained by a camera and positioning signals provided by a GPS device. However, the positions of the lane lines, and the relative position between the lane lines and the vehicle, determined by such schemes are of low accuracy and cannot satisfy the driving requirements of an autonomous vehicle. That is, existing lane line detection techniques suffer from low positioning accuracy.
Summary of the invention
In view of this, embodiments of the present invention provide a lane line detection method and device, to solve the inaccurate positioning present in existing lane line detection techniques.
In one aspect, an embodiment of the present application provides a lane line detection method, comprising:
a lane line detection apparatus obtaining current perception data of the driving environment of a vehicle, the current perception data including current frame image data and current positioning data;
obtaining lane line template data, the lane line template data being the lane line detection result data produced by the previous lane line detection process;
extracting current lane line image data from the perception data; and
determining current lane line detection result data from the lane line image data and the lane line template data, the current lane line detection result data including data expressing the relative position of the vehicle and the lane lines.
In another aspect, an embodiment of the present application provides a lane line detection device, comprising:
an acquiring unit, for obtaining current perception data of the driving environment of the vehicle, the current perception data including current frame image data and positioning data, and for obtaining lane line template data, the lane line template data being the lane line detection result data produced by the previous lane line detection process;
an extraction unit, for extracting current lane line image data from the perception data; and
a determination unit, for determining current lane line detection result data from the lane line image data and the lane line template data, the current lane line detection result data including data expressing the relative position of the vehicle and the lane lines.
In another aspect, an embodiment of the present application provides a lane line detection device comprising a processor and at least one memory, the at least one memory storing at least one machine-executable instruction, the processor executing the at least one machine-executable instruction to:
obtain current perception data of the driving environment of the vehicle, the current perception data including current frame image data and current positioning data;
obtain lane line template data, the lane line template data being the lane line detection result data produced by the previous lane line detection process;
extract current lane line image data from the perception data; and
determine current lane line detection result data from the lane line image data and the lane line template data, the current lane line detection result data including data expressing the relative position of the vehicle and the lane lines.
According to the technical solutions provided by embodiments of the present invention, the lane line detection apparatus obtains current perception data of the driving environment, extracts lane line image data from the current perception data, receives the lane line detection result data produced by the previous detection process (namely the lane line template data), and determines current lane line detection result data from the current lane line image data and the previous lane line detection result data. Because the previous lane line detection result data contains accurate lane line position information, it can supply positioning reference information for the current detection process. Compared with the prior art, which relies solely on the currently obtained perception data, more accurate lane line detection can be performed and more accurate position information obtained, thereby solving the inaccurate positioning of prior-art lane line detection schemes.
Brief description of the drawings
The accompanying drawings are provided for further understanding of the present invention and constitute part of the specification; together with the embodiments of the invention they serve to explain the invention, and are not to be construed as limiting it.
Fig. 1 is a process flow diagram of the lane line detection method provided by embodiments of the present application;
Fig. 2a is an example of lane line image data;
Fig. 2b is another process flow diagram of the lane line detection method provided by embodiments of the present application;
Fig. 3a is a process flow diagram of step 104 in Fig. 1 or Fig. 2;
Fig. 3b is another process flow diagram of the lane line detection method provided by embodiments of the present application;
Fig. 4 is another process flow diagram of the lane line detection method provided by embodiments of the present application;
Fig. 5 is a process flow diagram of step 105 in Fig. 4;
Fig. 6a is a process flow diagram of step 106 in Fig. 4;
Fig. 6b is another process flow diagram of step 106 in Fig. 4;
Fig. 7 is an example image;
Fig. 8 is an example diagram of the lane lines after the extension of step 1061 in Fig. 6a;
Fig. 9 is a schematic diagram of the extended lane lines of Fig. 8 after adjustment;
Fig. 10 is a structural block diagram of the lane line detection device provided by embodiments of the present application;
Fig. 11 is another structural block diagram of the lane line detection device provided by embodiments of the present application;
Fig. 12 is another structural block diagram of the lane line detection device provided by embodiments of the present application;
Fig. 13 is another structural block diagram of the lane line detection device provided by embodiments of the present application.
Detailed description of the embodiments
To enable those skilled in the art to better understand the technical solutions of the present invention, the technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the present invention without creative work shall fall within the protection scope of the present invention.
Aiming at the inaccurate positioning present in prior-art lane line detection techniques, embodiments of the present application provide a lane line detection method and device to solve this problem.
In the lane line detection scheme provided by embodiments of the present application, the lane line detection apparatus obtains current perception data of the driving environment, extracts lane line image data from the current perception data, receives the lane line detection result data produced by the previous detection process (namely the lane line template data), and determines current lane line detection result data from the current lane line image data and the previous lane line detection result data. Because the previous lane line detection result data contains accurate lane line position information, it can supply positioning reference information for the current detection process. Compared with the prior art, which relies solely on the currently obtained perception data, more accurate lane line detection can be performed and more accurate position information obtained, thereby solving the inaccurate positioning of prior-art lane line detection schemes.
The above is the core idea of the present invention. To enable those skilled in the art to better understand the technical solutions in the embodiments of the invention, and to make the above objects, features and advantages of the embodiments more apparent and comprehensible, the technical solutions in the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Fig. 1 shows the process flow of the lane line detection method provided by embodiments of the present application. The method includes the following steps:
Step 101: the lane line detection apparatus obtains current perception data of the driving environment of the vehicle; the current perception data includes current frame image data and current positioning data.
Step 102: obtain lane line template data; the lane line template data is the lane line detection result data produced by the previous lane line detection process.
Step 103: extract current lane line image data from the perception data.
Step 104: determine current lane line detection result data from the lane line image data and the lane line template data; the current lane line detection result data includes data expressing the relative position of the vehicle and the lane lines.
Steps 102 and 103 may be executed in either order.
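The steps above form a per-frame loop in which each detection result becomes the next frame's template. A minimal sketch of that data flow follows; every function and field name here is illustrative only and not taken from the patent, and the extractor and fitter are placeholders for the real processing of steps 103 and 104.

```python
def detect_lane_lines(perception, template, extract, fit):
    """One iteration of steps 101-104 (names illustrative).

    perception: dict with 'image' and 'position' (step 101)
    template:   previous detection result, or None on the first frame (step 102)
    extract:    callable implementing step 103 (e.g. semantic segmentation)
    fit:        callable implementing step 104 (template-to-image fitting)
    """
    lane_image = extract(perception)    # step 103
    result = fit(lane_image, template)  # step 104
    return result                       # becomes the next frame's template

# Minimal stand-ins to show how the result feeds forward:
frames = [{"image": f"img{i}", "position": (i, 0.0)} for i in range(3)]
template = None
for frame in frames:
    template = detect_lane_lines(
        frame, template,
        extract=lambda p: p["image"],                     # placeholder extractor
        fit=lambda img, tpl: {"from": img, "prev": tpl},  # placeholder fitter
    )
```

The point of the sketch is the feedback loop: the fitter always receives the previous result, which is what supplies the positioning reference information the text describes.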
Each step of the above process is described in detail below.
In step 101, the current perception data of the driving environment of the vehicle can be obtained through on-board perception equipment. For example, at least one frame of current image data is obtained by at least one on-board camera, and the current positioning data is obtained by a positioning device; the positioning device includes a Global Positioning System (GPS) receiver and/or an Inertial Measurement Unit (IMU). The perception data may further include map data of the current driving environment and LIDAR data. The map data may be real map data obtained in advance, or map data provided by the vehicle's Simultaneous Localization and Mapping (SLAM) unit.
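The sensor sources just listed can be grouped into a single container. The following sketch is one plausible layout, assuming Python dataclasses; the field names are invented for illustration, since the patent only names the data sources themselves (camera frames, GPS/IMU positioning, optional map and LIDAR data).

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class PerceptionData:
    """Current perception data of the driving environment (step 101).

    Field names are illustrative, not from the patent.
    """
    frame_images: list                 # one or more current camera frames
    position: tuple                    # GPS and/or IMU (inertial) fix
    map_data: Optional[Any] = None     # pre-built map or SLAM-provided map
    lidar_points: Optional[Any] = None # optional LIDAR data

sample = PerceptionData(frame_images=["front_cam_frame"], position=(31.23, 121.47))
```

Making the map and LIDAR fields optional mirrors the text's "may further include" wording: the method must still run from camera and positioning data alone.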
In step 102, the lane line template data is the lane line detection result data produced by the previous lane line detection process; it includes the position information of the lane lines and data on the relative position of the vehicle and the lane lines. The lane line template data (namely the lane line detection result data) can be expressed as 3D spatial data from a top-down perspective, for example with the vehicle's driving direction as the Y axis of the coordinate system and the direction perpendicular to the driving direction as the X axis.
After the previous lane line detection process produces the lane line template data, the template data can be stored in a storage device, which may be the local memory of the lane line detection apparatus, another memory in the vehicle, or a remote memory.
In the current lane line detection process, the lane line detection apparatus can read the lane line template data from the storage device, or receive it according to a predetermined processing cycle.
In the embodiments of the present application, the lane line detection result data produced by the previous detection process includes the position information of the lane lines and the relative position of the vehicle and the lane lines. Between adjacent frames, the positions of the lane lines change little and are stable, and the relative position of the vehicle and the lane lines is relatively stable and changes continuously; therefore, performing the current detection in combination with the previous detection result can provide comparatively accurate positioning reference information.
In step 103, the extraction of the current lane line image data from the perception data can be realized in various ways.
In a first mode, for the current frame image data obtained by each of the at least one camera, a semantic segmentation method may be used: an algorithm or model obtained by prior training classifies and labels the pixels of the current frame image data, and the current lane line image data is extracted from the labeled pixels.
The algorithm or model obtained by prior training may be obtained by iteratively training a neural network on ground truth data of the driving environment and image data obtained by a camera.
In a second mode, the current lane line image data may be extracted from the current frame image data and the current positioning data by way of object recognition.
Fig. 2a shows an example of lane line image data.
Only these two methods of extracting lane line image data are listed in the present application; the current lane line image data may also be obtained by other processing methods, on which the application imposes no strict restriction.
In some embodiments of the present application, because the result of the previous lane line detection process does not necessarily conform fully to prior knowledge or convention, the lane line template data needs to be further adjusted before step 104 is executed, as shown in Fig. 2b:
Step 104S: adjust the lane lines in the lane line template data according to the current perception data, prior knowledge, and/or predetermined constraints; the prior knowledge or predetermined constraints include physical metric parameters or data expressions about the road structure.
For example, the prior knowledge or constraints may include: (1) the lane lines on a road are parallel to each other; (2) a lane line with a curved shape is an arc; (3) the length of a curved lane line is less than 300 meters; (4) the distance between adjacent lane lines is between 3 and 4 meters, for example about 3.75 meters; (5) the color of a lane line differs from the color of the rest of the road. According to the needs of the concrete application scene, the prior knowledge or constraints may also include other contents or data; the embodiments of the present application impose no strict limitation.
Through such adjustment, the lane line template data can be made to conform better to prior knowledge or convention, providing more accurate positioning reference information for determining the current lane line detection result data.
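As one concrete illustration of step 104S, constraint (4) — adjacent lane lines lie 3 to 4 meters apart, typically about 3.75 m — can be enforced on a set of parallel lane lines represented by their lateral offsets. The snap-to-nominal rule below is an assumption for the sketch; the patent does not prescribe a specific adjustment formula.

```python
def adjust_lane_spacing(lane_offsets, nominal=3.75, lo=3.0, hi=4.0):
    """Apply constraint (4): adjacent lane lines lie 3-4 m apart (about
    3.75 m). Offsets are lateral positions of parallel lane lines in
    metres; any spacing outside [lo, hi] is snapped to `nominal`.
    A sketch only -- this exact rule is assumed, not from the patent.
    """
    adjusted = [lane_offsets[0]]
    for nxt in lane_offsets[1:]:
        gap = nxt - adjusted[-1]
        if lo <= gap <= hi:
            adjusted.append(nxt)          # spacing already plausible
        else:
            adjusted.append(adjusted[-1] + nominal)  # snap to nominal width
    return adjusted

# Third line is 5.4 m from its neighbour, outside [3, 4]: it gets snapped.
adjusted = adjust_lane_spacing([0.0, 3.6, 9.0])
```

The other constraints (parallelism, arc shape, length bound, color) would be enforced analogously, each as its own check-and-correct pass over the template.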
The processing of step 104 may specifically map the lane line template data into the lane line image data and fit the mapping result to obtain the current lane line detection result data; this can be accomplished in several ways. For example, coordinate conversion is performed on the lane line template data and the lane line image data, the converted template data is projected into the converted lane line image data, and the current lane line detection result data is fitted from the projection result according to a predetermined formula or algorithm. The processing of step 104 can also be realized in other ways.
Beyond the above implementations, an embodiment of the present application provides an implementation based on machine learning, as shown in Fig. 3a:
Step 1041: input the lane line image data and the lane line template data into a predetermined loss function, which outputs a cost value. The loss function expresses the positional relationship between the lane lines in the template data and the lane lines in the image data; the cost value is the distance between the lane lines in the lane line template data and the lane lines in the lane line image data.
Step 1042: while the difference between two adjacent cost values is greater than a predetermined threshold, iteratively modify the positions of the lane lines in the lane line template data; when the difference between two adjacent cost values is less than or equal to the predetermined threshold, terminate the iteration and obtain the current lane line detection result data.
The iterative modification of the positions of the lane lines in the template data can be realized by a gradient descent algorithm.
Further, in some embodiments of the present application, the loss function may be optimized according to the continuously accumulated lane line detection result data, so as to enhance its accuracy, stability and robustness.
Through the processing shown in Fig. 3a, the distance between the lane lines in the template data and those in the image data is continuously measured by the loss function, and the lane lines of the template data are continuously fitted onto the lane lines of the image data by gradient descent, so that an accurate current lane line detection result can be obtained. The result can be 3D spatial data from a top-down perspective, including data expressing the relative position of the vehicle and the lane lines, and possibly data expressing the positions of the lane lines.
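Steps 1041 and 1042 can be made concrete in one dimension: take the lateral offsets of the template lane lines, define the cost as the mean squared distance to the observed lane lines, and refine a uniform shift of the template by gradient descent until successive cost values differ by at most a threshold. This parameterisation (a single shift, squared-distance loss, the specific learning rate) is assumed for the sketch; the patent describes only the general loss-plus-gradient-descent idea.

```python
def fit_template(template_xs, observed_xs, lr=0.2, eps=1e-6, max_iter=1000):
    """Steps 1041-1042 as a 1-D sketch: the loss is the mean squared
    lateral distance between template and observed lane lines; a uniform
    shift of the template is refined by gradient descent until adjacent
    cost values differ by at most eps (the predetermined threshold)."""
    shift = 0.0
    prev_cost = None
    for _ in range(max_iter):
        residuals = [(t + shift) - o for t, o in zip(template_xs, observed_xs)]
        cost = sum(r * r for r in residuals) / len(residuals)   # step 1041
        if prev_cost is not None and abs(prev_cost - cost) <= eps:
            break                                # step 1042: iteration ends
        grad = 2.0 * sum(residuals) / len(residuals)
        shift -= lr * grad                       # gradient descent update
        prev_cost = cost
    return [t + shift for t in template_xs], cost

# Template lanes at 0 and 3.75 m; the image shows them shifted by 0.5 m.
fitted, cost = fit_template([0.0, 3.75], [0.5, 4.25])
```

A real implementation would optimise per-line curve parameters in the top-down 3D space rather than a single scalar shift, but the stopping rule on adjacent cost values and the gradient step are exactly the structure of steps 1041-1042.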
Through the lane line detection process shown in Fig. 1, comparatively accurate positioning reference information for the lane lines and the vehicle can be obtained by using the result data of the previous detection; and by projecting that result data into the current lane line image data and fitting the current lane line detection result data, accurate current position information of the lane lines and the vehicle can be obtained, thereby solving the inability of the prior art to perform accurate lane line detection.
Moreover, in embodiments of the present application the perception data further includes various data, such as map data, which can further provide accurate position information for the lane line detection process and so yield lane line detection result data of higher accuracy.
Further, in some embodiments, as shown in step 104t of Fig. 3b, the current lane line detection result data obtained by the above processing is determined as the lane line template data for the next lane line detection process.
Alternatively, in other embodiments, the lane line detection result data obtained in step 104 can be further verified and optimized, ensuring that the next lane line detection process is provided with lane line template data carrying more accurate position information.
Fig. 4 shows the lane line verification and optimization processing that follows the method of Fig. 1:
Step 105: verify the current lane line detection result data.
Step 106: if the verification succeeds, optimize and adjust the current lane line detection result data to obtain the lane line template data for the next lane line detection process; if the verification fails, discard the current lane line detection result data.
As shown in Fig. 5, step 105 includes the following processing:
Step 1051: determine the confidence of the current lane line detection result data according to a confidence model obtained by prior training.
Specifically, the current lane line detection result data can be fed as input to the confidence model, which outputs the confidence corresponding to that result data.
The confidence model is obtained in advance by training a deep neural network on historical lane line detection result data and ground truth data of the lane lines; it expresses the correspondence between lane line detection result data and confidence.
In the process of training the deep neural network on the historical lane line detection result data and the ground truth of the lane lines: first the historical detection result data is compared with the ground truth; then the historical detection results are classified or labeled according to the comparison, for example result data a, c, d are labeled as successful detections and result data b, e as failed detections; the neural network is then trained on the labeled historical result data and ground truth to obtain the confidence model. The trained confidence model can reflect the success probability or failure probability (namely the confidence) of lane line detection result data.
Step 1052: if the determined confidence meets a predetermined verification condition, the verification succeeds; if it does not meet the condition, the verification fails.
For example, the verification condition may be: if the confidence is greater than or equal to a success probability of X%, the verification succeeds; otherwise it fails.
Further, in some embodiments of the present application, the confidence model may be further trained and optimized according to the continuously accumulated lane line detection result data and ground truth of the lane lines; the optimization training is similar to the processing that produced the confidence model and is not repeated here.
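The shape of steps 1051-1052 is a score-then-threshold check. In the sketch below, `confidence_model` stands in for the trained deep network described above (here it is any callable returning a value in [0, 1]), and both the toy model and the 0.9 threshold are invented for illustration.

```python
def verify_detection(result, confidence_model, threshold=0.9):
    """Steps 1051-1052 (sketch): score the detection result with a
    pre-trained confidence model (step 1051) and pass the verification
    when the confidence reaches the threshold (step 1052)."""
    confidence = confidence_model(result)   # step 1051
    return confidence >= threshold          # step 1052

# Toy model: confidence falls as lane spacing drifts from a nominal 3.75 m.
toy_model = lambda lanes: max(0.0, 1.0 - abs((lanes[1] - lanes[0]) - 3.75))
ok = verify_detection([0.0, 3.7], toy_model)   # spacing 3.7 m: high score
bad = verify_detection([0.0, 6.0], toy_model)  # spacing 6.0 m: low score
```

The key design point is that the verdict is binary even though the model output is continuous: step 106 only needs to know whether to optimize the result or discard it.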
As shown in Fig. 6a, step 106 includes the following processing:
Step 1061: if the verification succeeds, extend the lane lines in the current lane line detection result data.
Specifically, extending the lane lines may include:
Step S1: according to the lane line structure in the detection result data, copy and translate the lane lines at the edges;
Step S2: if the copied and translated lane line can be included in the detection result data, retain it and save the new detection result data;
Step S3: if the copied and translated lane line cannot be included in the detection result data, discard it.
For example, in the example shown in Fig. 7, the current lane line detection result data includes two lane lines, CL1 and CL2; after extending the lane lines, new lane lines EL1 and EL2 are obtained. Fig. 8 shows the extended lane lines displayed in the lane line image data.
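Steps S1-S3 can be sketched with lane lines represented by lateral offsets: copy each edge line outward by the detected spacing (S1), keep a copy only if it passes an inclusion test (S2), otherwise drop it (S3). The patent does not say what "can be included in the result data" means concretely, so the containment test against a drivable half-width is an assumed stand-in.

```python
def extend_lanes(lane_offsets, road_half_width):
    """Steps S1-S3 (sketch): copy the edge lane lines outward by the
    detected lane spacing (S1); keep a copy only if it still falls inside
    the drivable area (S2), otherwise discard it (S3). The containment
    test is an assumption, not the patent's criterion."""
    lanes = sorted(lane_offsets)
    spacing = lanes[1] - lanes[0]       # lane structure from the result data
    extended = list(lanes)
    for candidate in (lanes[0] - spacing, lanes[-1] + spacing):
        if abs(candidate) <= road_half_width:
            extended.append(candidate)  # S2: retain the translated copy
        # else S3: discard the translated copy
    return sorted(extended)

# Two detected lines (like CL1 and CL2 in Fig. 7); with a 5 m half-width
# only the left-hand copy fits, so one new line survives.
extended = extend_lanes([0.0, 3.75], road_half_width=5.0)
```

This mirrors the Fig. 7/Fig. 8 example: existing lines are never moved, and extension only adds copies where the road structure leaves room for them.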
Step 1062: adjust the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraints, obtaining the lane line template data for the next lane line detection process. This adjustment can refer to the processing of step 104S above.
Fig. 9 shows an example: among the extended lane lines of Fig. 8, lane line EL2 is adjusted to obtain lane line EL2'; compared with EL2 before adjustment, the adjusted EL2' is closer to a straight line.
In some embodiments of the present application, step 104S and step 1062 can both be provided; in other embodiments, only one of the two may be provided.
Step 1063: if the verification fails, discard the current lane line detection result data.
Further, as shown in step 1064 of Fig. 6b, after the current lane line detection result data is discarded, a preset lane line template data is determined as the lane line template data for the next lane line detection process. This lane line template data can be a general lane line template, a template corresponding to a category of driving environment, or a template for a specific driving environment: for example, a lane line template applicable to all environments, a template for a highway environment, a template for urban roads, or a template for the specific road where the vehicle is located. The preset lane line template data can be set according to the needs of the concrete application scene.
The preset lane line template data may be pre-stored locally in the lane line detection apparatus, pre-stored in the vehicle's autonomous driving processing unit, or stored on a remote server. When the lane line detection apparatus needs the preset template data, it can obtain it by reading it locally or by remote request and reception.
Through the optimization and adjustment shown in Fig. 4, the embodiments of the present application can obtain lane line template data containing more accurate position information; compared with the template data obtained by the method of Fig. 1, the template data obtained by the method of Fig. 4 gives the lane line detection method provided by the embodiments higher stability and robustness.
Based on the same inventive concept, embodiments of the present application also provide a lane line detection device.
Fig. 10 shows the structural block diagram of the lane line detection device provided by embodiments of the present application, comprising:
an acquiring unit 11, for obtaining current perception data of the driving environment of the vehicle, the current perception data including current frame image data and positioning data, and for obtaining lane line template data, the lane line template data being the lane line detection result data produced by the previous lane line detection process;
the perception data may further include at least one of map data of the current driving environment and LIDAR data; the positioning data includes GPS positioning data and/or inertial navigation positioning data;
an extraction unit 12, for extracting current lane line image data from the perception data;
a determination unit 13, for determining current lane line detection result data from the lane line image data and the lane line template data, the current lane line detection result data including data expressing the relative position of the vehicle and the lane lines.
The lane line template data and the lane line detection result data are 3D spatial data from a top-down perspective.
In some embodiments, the extraction unit 12 extracts the lane line image data from the current frame image data by a method of object recognition or semantic segmentation.
The determination unit 13 determines the current lane line detection result data from the lane line image data and the lane line template data by mapping the template data into the image data and fitting the mapping result. Further, the determination unit 13 inputs the lane line image data and the lane line template data into a predetermined loss function, which outputs a cost value; the loss function expresses the positional relationship between the lane lines in the template data and those in the image data, and the cost value is the distance between the lane lines in the template data and the lane lines in the image data. While the difference between two adjacent cost values is greater than a predetermined threshold, the determination unit 13 iteratively modifies the positions of the lane lines in the template data; when the difference is less than or equal to the threshold, it terminates the iteration and obtains the current lane line detection result data. In some application scenes, the determination unit 13 modifies the positions of the lane lines in the template data iteratively using a gradient descent algorithm.
Before determining the current lane line detection result data from the lane line image data and the lane line template data, the determination unit 13 also adjusts the lane line template data: it adjusts the lane lines in the template data according to the current perception data, prior knowledge and/or predetermined constraints, the prior knowledge or predetermined constraints including physical metric parameters or data expressions about the road structure.
Further, the determination unit 13 is also used to determine the current lane line detection result data as the lane line template data for the next lane line detection process.
In further embodiments, as shown in Figure 11, the lane line detection device may further comprise:
a verification unit 14, configured to verify the current lane line detection result data;
an optimization unit 15, configured to, when the verification by verification unit 14 succeeds, optimize the current lane line detection result data to obtain the lane line template data for the next lane line detection process, and, when the verification fails, discard the current lane line detection result data.
Verification unit 14 verifies the current lane line detection result data by: determining a confidence value of the current lane line detection result data according to a pre-trained confidence model; when the determined confidence value meets a predetermined verification condition, the verification succeeds; when the determined confidence value does not meet the predetermined verification condition, the verification fails.
Further, as shown in Figure 12, the lane line detection device provided by the embodiments of the present application may further comprise: a pre-training unit 16, configured to train a deep neural network in advance according to historical lane line detection result data and ground-truth lane line data to obtain the confidence model; the confidence model expresses the correspondence between lane line detection result data and confidence values.
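A minimal stand-in for such a confidence model might look like the following. The embodiments train a deep neural network; for brevity this sketch uses a single-layer logistic model over hypothetical summary features of a detection result (e.g. fit residual, line count, curvature), trained with the binary cross-entropy gradient against synthetic ground-truth labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: feature vectors summarizing a detection result,
# labeled 1 when the detection matched the real lane lines and 0 otherwise.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # synthetic labels

w = np.zeros(3)
b = 0.0
for _ in range(500):                               # logistic-regression training loop
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))         # predicted confidence in [0, 1]
    grad_w = X.T @ (p - y) / len(y)                # gradient of binary cross-entropy
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

def confidence(features, threshold=0.5):
    """Verification: the detection passes when confidence meets the threshold."""
    c = 1.0 / (1.0 + np.exp(-(features @ w + b)))
    return c, c >= threshold
```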
Optimization unit 15 optimizes the current lane line detection result data by: extending the lane lines in the current lane line detection result data; and adjusting the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraints, to obtain the lane line template data for the next lane line detection process; wherein the prior knowledge or predetermined constraints include physical measurement parameters or data expressions of the road structure.
Optimization unit 15 extends the lane lines in the current lane line detection result data by: copying and translating an edge lane line in the lane line detection result data according to the lane line structure in the lane line detection result data; when the lane line detection result data can accommodate the copied-and-translated lane line, retaining it and saving the new lane line detection result data; when the lane line detection result data cannot accommodate the copied-and-translated lane line, discarding it.
Further, after the current lane line detection result data is discarded, optimization unit 15 is also configured to determine a preset lane line template data as the lane line template data for the next lane line detection process.
With the lane line detection device provided by the embodiments of the present application, the result data of the previous lane line detection supplies accurate positioning reference information of the lane lines and the vehicle; this result data is projected into the current lane line image data and fitted to obtain the current lane line detection result data, yielding more accurate position information of the current lane lines and the vehicle, thereby solving the problem in the prior art that accurate lane line detection cannot be performed.
Based on the same inventive concept, the embodiments of the present application further provide a lane line detection device.
As shown in Figure 13, the lane line detection device provided by the embodiments of the present application includes a processor 131 and at least one memory 132; at least one machine-executable instruction is stored in the at least one memory, and the processor executes the at least one machine-executable instruction to:
obtain current perception data of the driving environment of a vehicle, wherein the current perception data includes current-frame image data and current positioning data;
obtain lane line template data, wherein the lane line template data is the lane line detection result data obtained from the previous lane line detection process;
extract current lane line image data according to the perception data; and
determine current lane line detection result data according to the lane line image data and the lane line template data, wherein the current lane line detection result data includes data expressing the relative positional relationship between the vehicle and the lane lines.
The lane line template data and the lane line detection result data are 3D spatial data from a top-down view. The perception data further includes at least one of: map data of the current driving environment and laser radar (LIDAR) data; the positioning data includes GPS positioning data and/or inertial navigation positioning data.
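As one illustration of top-down 3D data, image-plane lane points can be projected onto the road plane with a homography. The matrix values below are placeholders; in practice such a homography would come from camera calibration, which the embodiments do not specify.

```python
import numpy as np

# Hypothetical homography mapping image pixels (u, v) to top-down
# road-plane coordinates in meters; placeholder values for illustration.
H = np.array([[0.02,  0.0,  -12.8],
              [0.0,  -0.05,  36.0],
              [0.0,   0.001,  1.0]])

def to_top_down(pixels):
    """Project (u, v) image points onto the top-down plane with homography H."""
    pts = np.column_stack([pixels, np.ones(len(pixels))])   # homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                   # dehomogenize
```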
In some embodiments, processor 131 executes the at least one machine-executable instruction to extract the lane line image data from the current-frame image data, comprising: extracting the lane line image data from the current-frame image data using an object recognition method or a semantic segmentation method.
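A sketch of the segmentation-based extraction step, assuming the network outputs a per-pixel class mask and a hypothetical lane class id (neither is specified by the embodiments):

```python
import numpy as np

def extract_lane_points(mask, lane_class=1):
    """Collect lane-marking points from a per-pixel class mask.

    `mask` is an (H, W) integer array as produced by a semantic-segmentation
    network (one class id per pixel); for each image row containing lane
    pixels, the mean column of those pixels is kept as one lane point (x, y).
    """
    points = []
    for v in range(mask.shape[0]):
        cols = np.flatnonzero(mask[v] == lane_class)
        if cols.size:
            points.append((float(cols.mean()), float(v)))
    return np.array(points)
```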
Processor 131 executes the at least one machine-executable instruction to determine the current lane line detection result data according to the lane line image data and the lane line template data, comprising: mapping the lane line template data into the lane line image data, and fitting according to the mapping result to obtain the current lane line detection result data. This processing may specifically include: inputting the lane line image data and the lane line template data into a predetermined loss function, which outputs a cost value; the loss function expresses the positional relationship between the lane lines in the lane line template data and the lane lines in the lane line image data, and the cost value is the distance between the lane lines in the lane line template data and the lane lines in the lane line image data. When the difference between two adjacent cost values is greater than a predetermined threshold, the positions of the lane lines in the lane line template data are iteratively modified; when the difference between two adjacent cost values is less than or equal to the predetermined threshold, the iteration is terminated and the current lane line detection result data is obtained. In some application scenarios, processor 131 may execute the at least one machine-executable instruction to iteratively modify the positions of the lane lines in the lane line template data using a gradient descent algorithm.
Before determining the current lane line detection result data according to the lane line image data and the lane line template data, processor 131 executes the at least one machine-executable instruction to further: adjust the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraints; wherein the prior knowledge or predetermined constraints include physical measurement parameters or data expressions of the road structure.
The processor executes the at least one machine-executable instruction to further: determine the current lane line detection result data as the lane line template data for the next lane line detection process.
In further embodiments, processor 131 executes the at least one machine-executable instruction to further: verify the current lane line detection result data; when the verification succeeds, optimize the current lane line detection result data to obtain the lane line template data for the next lane line detection process; when the verification fails, discard the current lane line detection result data.
Processor 131 executes the at least one machine-executable instruction to verify the current lane line detection result data by: determining a confidence value of the current lane line detection result data according to a pre-trained confidence model; when the determined confidence value meets a predetermined verification condition, the verification succeeds; when the determined confidence value does not meet the predetermined verification condition, the verification fails.
Processor 131 executes the at least one machine-executable instruction to further train the confidence model in advance, comprising: training a deep neural network in advance according to historical lane line detection result data and ground-truth lane line data to obtain the confidence model; the confidence model expresses the correspondence between lane line detection result data and confidence values.
Processor 131 executes the at least one machine-executable instruction to optimize the current lane line detection result data by: extending the lane lines in the current lane line detection result data; and adjusting the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraints, to obtain the lane line template data for the next lane line detection process; wherein the prior knowledge or predetermined constraints include physical measurement parameters or data expressions of the road structure.
Processor 131 executes the at least one machine-executable instruction to extend the lane lines in the current lane line detection result data by: copying and translating an edge lane line in the lane line detection result data according to the lane line structure in the lane line detection result data; when the lane line detection result data can accommodate the copied-and-translated lane line, retaining it and saving the new lane line detection result data; when the lane line detection result data cannot accommodate the copied-and-translated lane line, discarding it.
After the current lane line detection result data is discarded, processor 131 executes the at least one machine-executable instruction to further: determine a preset lane line template data as the lane line template data for the next lane line detection process.
With the lane line detection device provided by the embodiments of the present application, the result data of the previous lane line detection supplies accurate positioning reference information of the lane lines and the vehicle; this result data is projected into the current lane line image data and fitted to obtain the current lane line detection result data, yielding more accurate position information of the current lane lines and the vehicle, thereby solving the problem in the prior art that accurate lane line detection cannot be performed.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.
Claims (45)
1. A lane line detection method, characterized in that it comprises:
obtaining, by a lane line detection device, current perception data of a driving environment of a vehicle, wherein the current perception data comprises current-frame image data and current positioning data;
obtaining lane line template data, wherein the lane line template data is lane line detection result data obtained from a previous lane line detection process;
extracting current lane line image data according to the perception data; and
determining current lane line detection result data according to the lane line image data and the lane line template data, wherein the current lane line detection result data comprises data expressing a relative positional relationship between the vehicle and the lane lines.
2. The method according to claim 1, characterized in that determining the current lane line detection result data according to the lane line image data and the lane line template data comprises:
mapping the lane line template data into the lane line image data, and fitting according to the mapping result to obtain the current lane line detection result data.
3. The method according to claim 2, characterized in that mapping the lane line template data into the lane line image data and fitting according to the mapping result to obtain the current lane line detection result data comprises:
inputting the lane line image data and the lane line template data into a predetermined loss function, the loss function outputting a cost value, wherein the loss function expresses the positional relationship between the lane lines in the lane line template data and the lane lines in the lane line image data, and the cost value is the distance between the lane lines in the lane line template data and the lane lines in the lane line image data;
when the difference between two adjacent cost values is greater than a predetermined threshold, iteratively modifying the positions of the lane lines in the lane line template data; and when the difference between two adjacent cost values is less than or equal to the predetermined threshold, terminating the iteration and obtaining the current lane line detection result data.
4. The method according to claim 3, characterized in that iteratively modifying the positions of the lane lines in the lane line template data comprises:
iteratively modifying the positions of the lane lines in the lane line template data using a gradient descent algorithm.
5. The method according to claim 1, characterized in that, before determining the current lane line detection result data according to the lane line image data and the lane line template data, the method further comprises:
adjusting the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraints;
wherein the prior knowledge or predetermined constraints comprise physical measurement parameters or data expressions of the road structure.
6. The method according to claim 1, characterized in that it further comprises:
verifying the current lane line detection result data;
when the verification succeeds, optimizing the current lane line detection result data to obtain lane line template data for the next lane line detection process; and when the verification fails, discarding the current lane line detection result data.
7. The method according to claim 6, characterized in that verifying the current lane line detection result data comprises:
determining a confidence value of the current lane line detection result data according to a pre-trained confidence model;
when the determined confidence value meets a predetermined verification condition, the verification succeeds; when the determined confidence value does not meet the predetermined verification condition, the verification fails.
8. The method according to claim 7, characterized in that the method further comprises training the confidence model in advance, comprising:
training a deep neural network in advance according to historical lane line detection result data and ground-truth lane line data to obtain the confidence model, wherein the confidence model expresses the correspondence between lane line detection result data and confidence values.
9. The method according to claim 6, characterized in that optimizing the current lane line detection result data comprises:
extending the lane lines in the current lane line detection result data; and
adjusting the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraints, to obtain the lane line template data for the next lane line detection process, wherein the prior knowledge or predetermined constraints comprise physical measurement parameters or data expressions of the road structure.
10. The method according to claim 9, characterized in that extending the lane lines in the current lane line detection result data comprises:
copying and translating an edge lane line in the lane line detection result data according to the lane line structure in the lane line detection result data;
when the lane line detection result data can accommodate the copied-and-translated lane line, retaining the copied-and-translated lane line and saving the new lane line detection result data; and
when the lane line detection result data cannot accommodate the copied-and-translated lane line, discarding the copied-and-translated lane line.
11. The method according to claim 6, characterized in that, after the current lane line detection result data is discarded, a preset lane line template data is determined as the lane line template data for the next lane line detection process.
12. The method according to claim 1, characterized in that the method further comprises:
determining the current lane line detection result data as the lane line template data for the next lane line detection process.
13. The method according to claim 1, characterized in that the lane line template data and the lane line detection result data are 3D spatial data from a top-down view.
14. The method according to claim 1, characterized in that extracting the lane line image data from the current-frame image data comprises:
extracting the lane line image data from the current-frame image data using an object recognition method or a semantic segmentation method.
15. The method according to claim 1, characterized in that the perception data further comprises at least one of: map data of the current driving environment and laser radar (LIDAR) data; and
the positioning data comprises GPS positioning data and/or inertial navigation positioning data.
16. A lane line detection device, characterized in that it comprises:
an acquiring unit, configured to obtain current perception data of a driving environment of a vehicle, wherein the current perception data comprises current-frame image data and positioning data, and to obtain lane line template data, wherein the lane line template data is lane line detection result data obtained from a previous lane line detection process;
an extraction unit, configured to extract current lane line image data according to the perception data; and
a determination unit, configured to determine current lane line detection result data according to the lane line image data and the lane line template data, wherein the current lane line detection result data comprises data expressing a relative positional relationship between the vehicle and the lane lines.
17. The device according to claim 16, characterized in that the determination unit determines the current lane line detection result data according to the lane line image data and the lane line template data by:
mapping the lane line template data into the lane line image data, and fitting according to the mapping result to obtain the current lane line detection result data.
18. The device according to claim 17, characterized in that the determination unit maps the lane line template data into the lane line image data and fits according to the mapping result to obtain the current lane line detection result data by:
inputting the lane line image data and the lane line template data into a predetermined loss function, the loss function outputting a cost value, wherein the loss function expresses the positional relationship between the lane lines in the lane line template data and the lane lines in the lane line image data, and the cost value is the distance between the lane lines in the lane line template data and the lane lines in the lane line image data;
when the difference between two adjacent cost values is greater than a predetermined threshold, iteratively modifying the positions of the lane lines in the lane line template data; and when the difference between two adjacent cost values is less than or equal to the predetermined threshold, terminating the iteration and obtaining the current lane line detection result data.
19. The device according to claim 18, characterized in that the determination unit iteratively modifies the positions of the lane lines in the lane line template data by:
iteratively modifying the positions of the lane lines in the lane line template data using a gradient descent algorithm.
20. The device according to claim 16, characterized in that, before determining the current lane line detection result data according to the lane line image data and the lane line template data, the determination unit is further configured to:
adjust the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraints;
wherein the prior knowledge or predetermined constraints comprise physical measurement parameters or data expressions of the road structure.
21. The device according to claim 16, characterized in that the device further comprises:
a verification unit, configured to verify the current lane line detection result data; and
an optimization unit, configured to, when the verification by the verification unit succeeds, optimize the current lane line detection result data to obtain the lane line template data for the next lane line detection process, and, when the verification fails, discard the current lane line detection result data.
22. The device according to claim 21, characterized in that the verification unit verifies the current lane line detection result data by:
determining a confidence value of the current lane line detection result data according to a pre-trained confidence model;
when the determined confidence value meets a predetermined verification condition, the verification succeeds; when the determined confidence value does not meet the predetermined verification condition, the verification fails.
23. The device according to claim 22, characterized in that the device further comprises:
a pre-training unit, configured to train a deep neural network in advance according to historical lane line detection result data and ground-truth lane line data to obtain the confidence model, wherein the confidence model expresses the correspondence between lane line detection result data and confidence values.
24. The device according to claim 21, characterized in that the optimization unit optimizes the current lane line detection result data by:
extending the lane lines in the current lane line detection result data; and
adjusting the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraints, to obtain the lane line template data for the next lane line detection process, wherein the prior knowledge or predetermined constraints comprise physical measurement parameters or data expressions of the road structure.
25. The device according to claim 24, characterized in that the optimization unit extends the lane lines in the current lane line detection result data by:
copying and translating an edge lane line in the lane line detection result data according to the lane line structure in the lane line detection result data;
when the lane line detection result data can accommodate the copied-and-translated lane line, retaining the copied-and-translated lane line and saving the new lane line detection result data; and
when the lane line detection result data cannot accommodate the copied-and-translated lane line, discarding the copied-and-translated lane line.
26. The device according to claim 21, characterized in that, after the current lane line detection result data is discarded, the optimization unit is further configured to determine a preset lane line template data as the lane line template data for the next lane line detection process.
27. The device according to claim 16, characterized in that the determination unit is further configured to determine the current lane line detection result data as the lane line template data for the next lane line detection process.
28. The device according to claim 16, characterized in that the extraction unit extracts the lane line image data from the current-frame image data by:
extracting the lane line image data from the current-frame image data using an object recognition method or a semantic segmentation method.
29. The device according to claim 16, characterized in that the lane line template data and the lane line detection result data are 3D spatial data from a top-down view.
30. The device according to claim 16, characterized in that the perception data further comprises at least one of: map data of the current driving environment and laser radar (LIDAR) data; and
the positioning data comprises GPS positioning data and/or inertial navigation positioning data.
31. a kind of lane detection device, which is characterized in that including a processor and at least one processor, at least one
At least one machine-executable instruction is stored in memory, processor executes at least one machine-executable instruction to execute:
Obtain the current perception data of the driving environment of vehicle;Wherein, current perception data includes current frame image data and works as
Prelocalization data;
Obtain lane line template data;Wherein, lane line template data is the lane line that last lane detection is handled
Testing result data;
It is extracted to obtain current lane line image data according to perception data;
According to lane line image data and lane line template data, determination obtains current lane detection result data;Wherein,
It include the data of the relative positional relationship of expression vehicle and lane line in current lane detection result data.
32. device according to claim 31, which is characterized in that processor executes at least one machine-executable instruction and holds
Row obtains current lane detection result data according to lane line image data and lane line template data, determination, comprising:
Lane line template data is mapped in lane line image data, is fitted to obtain current lane line inspection according to mapping result
Survey result data.
33. device according to claim 32, which is characterized in that processor executes at least one machine-executable instruction and holds
It is about to lane line template data to be mapped in lane line image data, is fitted to obtain current lane detection according to mapping result
Result data, comprising:
Lane line image data and lane line template data are input in a scheduled loss function, loss function output
One cost value;Wherein, which is the lane between an expression lane line template data and lane line image data
The function of the positional relationship of line, the cost value are the lane line in lane line template data and the lane in lane line image data
The distance between line;
In the case where the difference of adjacent cost value twice is greater than a predetermined threshold, in iterative modifications lane line template data
The position of lane line;In the case where the difference of adjacent cost value twice is less than or equal to the predetermined threshold, terminate iterative processing,
And obtain current lane detection result data.
34. device according to claim 33, which is characterized in that processor executes at least one machine-executable instruction and holds
The position of the lane line in lane line template data is modified in row iteration, comprising:
Using the position of the lane line in gradient descent algorithm iterative modifications lane line template data.
35. device according to claim 33, which is characterized in that processor executes at least one machine-executable instruction and holds
It goes according to lane line image data and lane line template data, determines before obtaining current lane detection result data,
Also execute:
According to current perception data, priori knowledge and/or scheduled constraint condition to the lane line in lane line template data into
Row adjustment;
Wherein, priori knowledge or scheduled constraint condition include the Physimetric parameter or tables of data about road structure
It reaches.
36. device according to claim 31, which is characterized in that processor executes at least one machine-executable instruction also
It executes:
It tests to current lane detection result data;
In the case where upchecking, adjustment is optimized to current lane detection result data, is obtained for next time
The lane line template data of lane detection processing;In the case where examining failure, current lane detection number of results is abandoned
According to.
37. device according to claim 36, which is characterized in that processor executes at least one machine-executable instruction and holds
Row tests to current lane detection result data, comprising:
According to the confidence level model that preparatory training obtains, determination obtains the confidence level of current lane detection result data;
In the case where determining that obtained confidence level meets scheduled test condition, examine successfully;Determining obtained confidence level
In the case where not meeting scheduled test condition, failure is examined.
38. the device according to claim 37, which is characterized in that processor executes at least one machine-executable instruction also
It executes training in advance and obtains confidence level model, comprising:
Previously according to the lane detection result data of history and the truthful data of lane line, training deep neural network is set
Credit model;Confidence level model is used to indicate the corresponding relationship between lane detection result data and confidence level.
39. device according to claim 36, which is characterized in that processor executes at least one machine-executable instruction and holds
Row optimizes adjustment to current lane detection result data, comprising:
Lane line in current lane detection result data is extended;
According to current perception data, priori knowledge and/or scheduled constraint condition to the lane line in lane line template data into
Row adjustment, obtains the lane line template data for the processing of lane detection next time;Wherein, priori knowledge or it is scheduled about
Beam condition includes the Physimetric parameter or data representation about road structure.
40. device according to claim 39, which is characterized in that processor executes at least one machine-executable instruction and holds
Row is extended the lane line in current lane detection result data, comprising:
According to the lane cable architecture in lane detection result data, to the edge lane line in lane detection result data into
Row duplication translation;
In the case where can including the lane line that duplication translates in lane detection result data, the lane of conservative replication translation
Line, and save new lane detection result data;
In the case where can not including the lane line that duplication translates in lane detection result data, the lane of duplication translation be abandoned
Line.
41. device according to claim 36, which is characterized in that processor executes at least one machine-executable instruction and holds
Row also executes after abandoning current lane detection result data and is determined as using by a preset lane line template data
In the lane line template data of the processing of lane detection next time.
42. device according to claim 31, which is characterized in that processor executes at least one machine-executable instruction also
It executes:
Current lane detection result data is determined as being used for the lane line template data of lane detection processing next time.
43. device according to claim 31, which is characterized in that lane line template data and lane detection result data
For the 3d space data of depression angle.
44. The device according to claim 31, wherein the processor executes the at least one machine-executable instruction to extract the lane line image data from the current frame image data by:
extracting the lane line image data from the current frame image data using an object recognition method or a semantic segmentation method.
45. The device according to claim 31, wherein the perception data further includes at least one of: map data of the current driving environment, and laser radar (LIDAR) data;
and the positioning data includes GPS positioning data and/or inertial navigation positioning data.
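Claim 45 names two positioning sources that a system would combine. As a toy illustration only, a fixed-weight blend stands in for a real GPS/INS filter (e.g. a Kalman filter); the function name and the weight `alpha` are assumptions.

```python
# Toy illustration of combining the two positioning sources named in
# claim 45: blend a (drift-free but noisy) GPS fix with a dead-reckoned
# inertial estimate. A production system would use a proper filter.

def fuse_position(gps_xy, ins_xy, alpha=0.9):
    """Blend a GPS fix with an inertial-navigation position estimate,
    weighting the GPS fix by alpha."""
    return tuple(alpha * g + (1.0 - alpha) * i for g, i in zip(gps_xy, ins_xy))
```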
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/683,463 US10373003B2 (en) | 2017-08-22 | 2017-08-22 | Deep module and fitting module system and method for motion-based lane detection with multiple sensors |
US15/683,494 US10482769B2 (en) | 2017-08-22 | 2017-08-22 | Post-processing module system and method for motioned-based lane detection with multiple sensors |
US 15/683,463 | 2017-08-22 | ||
US 15/683,494 | 2017-08-22 | ||
Publications (2)
Publication Number | Publication Date |
---|---|
CN109426800A (en) | 2019-03-05 |
CN109426800B (en) | 2021-08-13 |
Family
ID=65514491
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810688772.7A (CN109426800B, Active) | Lane line detection method and device | 2017-08-22 | 2018-06-28 |
Country Status (1)
Country | Link |
---|---|
CN | CN109426800B (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130231824A1 (en) * | 2012-03-05 | 2013-09-05 | Florida A&M University | Artificial Intelligence Valet Systems and Methods |
CN103440649A (en) * | 2013-08-23 | 2013-12-11 | 安科智慧城市技术(中国)有限公司 | Detection method and device for lane boundary line |
US20150112765A1 (en) * | 2013-10-22 | 2015-04-23 | LinkedIn Corporation | Systems and methods for determining recruiting intent |
CN104700072A (en) * | 2015-02-06 | 2015-06-10 | 中国科学院合肥物质科学研究院 | Lane line historical frame recognition method |
WO2016130719A2 (en) * | 2015-02-10 | 2016-08-18 | Amnon Shashua | Sparse map for autonomous vehicle navigation |
US9286524B1 (en) * | 2015-04-15 | 2016-03-15 | Toyota Motor Engineering & Manufacturing North America, Inc. | Multi-task deep convolutional neural networks for efficient and robust traffic lane detection |
US9443320B1 (en) * | 2015-05-18 | 2016-09-13 | Xerox Corporation | Multi-object tracking with generic object proposals |
CN105046235A (en) * | 2015-08-03 | 2015-11-11 | 百度在线网络技术(北京)有限公司 | Lane line recognition modeling method and apparatus and recognition method and apparatus |
CN106611147A (en) * | 2015-10-15 | 2017-05-03 | 腾讯科技(深圳)有限公司 | Vehicle tracking method and device |
CN106845385A (en) * | 2017-01-17 | 2017-06-13 | 腾讯科技(上海)有限公司 | The method and apparatus of video frequency object tracking |
Non-Patent Citations (3)
Title |
---|
JUN LI ET AL.: "Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene", 《IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS》 * |
YUNCHAO WEI ET AL.: "STC: A Simple to Complex Framework for Weakly-supervised Semantic Segmentation", 《ARXIV》 * |
CHAO LI ET AL.: "A Real-Time Lane Line Detection Algorithm Based on Inter-Frame Association", 《COMPUTER SCIENCE》 * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111295666A (en) * | 2019-04-29 | 2020-06-16 | 深圳市大疆创新科技有限公司 | Lane line detection method, device, control equipment and storage medium |
WO2020220182A1 (en) * | 2019-04-29 | 2020-11-05 | 深圳市大疆创新科技有限公司 | Lane line detection method and apparatus, control device, and storage medium |
CN110595490A (en) * | 2019-09-24 | 2019-12-20 | 百度在线网络技术(北京)有限公司 | Preprocessing method, device, equipment and medium for lane line perception data |
CN110595490B (en) * | 2019-09-24 | 2021-12-14 | 百度在线网络技术(北京)有限公司 | Preprocessing method, device, equipment and medium for lane line perception data |
CN111439259A (en) * | 2020-03-23 | 2020-07-24 | 成都睿芯行科技有限公司 | Agricultural garden scene lane deviation early warning control method and system based on end-to-end convolutional neural network |
CN111898540A (en) * | 2020-07-30 | 2020-11-06 | 平安科技(深圳)有限公司 | Lane line detection method, lane line detection device, computer equipment and computer-readable storage medium |
CN112180923A (en) * | 2020-09-23 | 2021-01-05 | 深圳裹动智驾科技有限公司 | Automatic driving method, intelligent control equipment and automatic driving vehicle |
CN112699747A (en) * | 2020-12-21 | 2021-04-23 | 北京百度网讯科技有限公司 | Method and device for determining vehicle state, road side equipment and cloud control platform |
CN113167885A (en) * | 2021-03-03 | 2021-07-23 | 华为技术有限公司 | Lane line detection method and lane line detection device |
CN113175937A (en) * | 2021-06-29 | 2021-07-27 | 天津天瞳威势电子科技有限公司 | Method and device for evaluating lane line sensing result |
Also Published As
Publication number | Publication date |
---|---|
CN109426800B (en) | 2021-08-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109426800A (en) | Lane line detection method and device | |
CN107235044B (en) | Method for reconstructing road traffic scenes and driver driving behavior based on multi-sensor data | |
KR102125958B1 (en) | Method and apparatus for fusing point cloud data |
CN111239790B (en) | Vehicle navigation system based on 5G network machine vision |
CN109583415B (en) | Traffic light detection and identification method based on fusion of laser radar and camera |
CN106530794B (en) | Automatic identification and calibration method and system for vehicle lanes |
CN109074085B (en) | Autonomous localization and mapping method and apparatus, and robot |
CN111912416B (en) | Method, apparatus and device for positioning a device |
CN109031304A (en) | In-tunnel vehicle positioning method based on vision and millimeter-wave radar map features |
CN108388641B (en) | Traffic facility map generation method and system based on deep learning |
CN108764187A (en) | Method, apparatus, device, storage medium and acquisition entity for extracting lane lines |
CN109949594A (en) | Real-time traffic light recognition method |
CN109931939A (en) | Vehicle localization method, apparatus, device and computer-readable storage medium |
CN108303103A (en) | Method and apparatus for determining a target lane |
CN105930819A (en) | System for real-time recognition of urban traffic lights based on monocular vision and integrated GPS navigation |
CN105676253A (en) | Longitudinal positioning system and method for autonomous driving based on urban road marking maps |
WO2021082745A1 (en) | Information completion method, lane line recognition method, intelligent driving method and related product |
CN110599853B (en) | Intelligent driving-school teaching system and method |
CN103424112A (en) | Laser-plane-assisted visual navigation method for mobile carriers |
CN110525342A (en) | Deep-learning-based AR-HUD vehicle-mounted driving assistance method and system |
CN109515439A (en) | Autonomous driving control method, apparatus, system and storage medium |
CN108573611 (en) | Speed limit sign fusion method and speed limit sign fusion system |
CN112904395A (en) | Mining vehicle positioning system and method |
CN110378210A (en) | Vehicle and license plate detection and ranging method based on lightweight YOLOv3 and long/short-focus camera fusion |
KR102480972B1 (en) | Apparatus and method for generating High Definition Map |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||