CN109740469A - Method for detecting lane lines, device, computer equipment and storage medium - Google Patents
- Publication number
- CN109740469A (application number CN201811581791.6A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- lane
- offset
- reference zone
- lane line
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The application proposes a lane line detection method, apparatus, computer device, and storage medium. The method includes: performing recognition processing on an acquired road image using a pre-trained neural network model to obtain annotation information for each pixel in the road image, where the annotation information for each pixel includes the type label of the pixel and a first offset corresponding to the pixel; determining, according to the type label of each pixel, each lane reference zone contained in the road image; and determining the lane line positions of the lane to which each lane reference zone belongs, according to the position of each pixel in each lane reference zone and the first offset corresponding to each pixel. The method greatly reduces post-processing difficulty, achieves high detection accuracy, adapts well to different scenes, does not require a large number of rule-based judgments, and therefore has good scalability and robustness.
Description
Technical field
This application relates to the field of intelligent automobile automatic driving and driver assistance technology, and in particular to a lane line detection method, apparatus, computer device, and storage medium.
Background art
With the development of intelligent driving technology, lane line detection has become one of its key technologies. A variety of lane line detection methods currently exist, for example, methods based on analytical models, and methods using binary images and edge detection.
However, these lane line detection methods require complex post-processing, adapt poorly to different scenes, and introduce a large number of rule-based judgments, resulting in poor scalability and robustness.
Summary of the invention
The application proposes a lane line detection method, apparatus, computer device, and storage medium, to address the poor scene adaptability, scalability, and robustness of lane line detection methods in the related art.
An embodiment of one aspect of the application proposes a lane line detection method, comprising:
performing recognition processing on an acquired road image using a pre-trained neural network model to obtain annotation information for each pixel in the road image, where the annotation information for each pixel includes the type label of the pixel and a first offset corresponding to the pixel, the first offset characterizing the distances between the pixel and the nearest pixels on each of its two sides whose color differs from its own;
determining, according to the type label of each pixel, each lane reference zone contained in the road image, where the pixels within each lane reference zone share the same type label; and
determining the lane line positions of the lane to which each lane reference zone belongs, according to the position of each pixel in each lane reference zone and the first offset corresponding to each pixel.
In the lane line detection method of the embodiment of the application, a pre-trained neural network model produces the type label and corresponding first offset of each pixel in the road image; the pixels are classified according to their type labels to determine each lane reference zone in the road image; and the lane line positions of the lane to which each lane reference zone belongs are determined from the positions and first offsets of the pixels in that zone. This greatly reduces post-processing difficulty, achieves high detection accuracy, adapts well to different scenes, does not require a large number of rule-based judgments, and has good scalability and robustness.
An embodiment of another aspect of the application proposes a lane line detection apparatus, comprising:
a recognition module, configured to perform recognition processing on an acquired road image using a pre-trained neural network model to obtain annotation information for each pixel in the road image, where the annotation information for each pixel includes the type label of the pixel and a first offset corresponding to the pixel, the first offset characterizing the distances between the pixel and the nearest pixels on each of its two sides whose color differs from its own;
a first determining module, configured to determine, according to the type label of each pixel, each lane reference zone contained in the road image, where the pixels within each lane reference zone share the same type label; and
a second determining module, configured to determine the lane line positions of the lane to which each lane reference zone belongs, according to the position of each pixel in each lane reference zone and the first offset corresponding to each pixel.
In the lane line detection apparatus of the embodiment of the application, a pre-trained neural network model produces the type label and corresponding first offset of each pixel in the road image; the pixels are classified according to their type labels to determine each lane reference zone in the road image; and the lane line positions of the lane to which each lane reference zone belongs are determined from the positions and first offsets of the pixels in that zone. This greatly reduces post-processing difficulty, achieves high detection accuracy, adapts well to different scenes, does not require a large number of rule-based judgments, and has good scalability and robustness.
An embodiment of another aspect of the application proposes a computer device, including a processor and a memory, wherein the processor runs a program corresponding to executable program code stored in the memory by reading the executable program code, so as to implement the lane line detection method described in the above aspect.
An embodiment of another aspect of the application proposes a non-transitory computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the lane line detection method described in the above aspect.
Additional aspects and advantages of the application will be set forth in part in the following description, will in part become apparent from that description, or may be learned through practice of the application.
Brief description of the drawings
The above and additional aspects and advantages of the application will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow diagram of a lane line detection method provided by an embodiment of the application;
Fig. 2 is a schematic diagram of lane reference zones provided by an embodiment of the application;
Fig. 3 is a flow diagram of another lane line detection method provided by an embodiment of the application;
Fig. 4 is a flow diagram of another lane line detection method provided by an embodiment of the application;
Fig. 5 is a flow diagram of another lane line detection method provided by an embodiment of the application;
Fig. 6 is a structural schematic diagram of a lane line detection apparatus provided by an embodiment of the application;
Fig. 7 is a block diagram of an exemplary computer device suitable for implementing embodiments of the application.
Detailed description of the embodiments
Embodiments of the application are described in detail below, examples of which are shown in the accompanying drawings, wherein the same or similar reference numbers throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, intended to explain the application, and are not to be construed as limiting the application.
The lane line detection method, apparatus, computer device, and storage medium of embodiments of the application are described below with reference to the accompanying drawings.
Embodiments of the application propose a lane line detection method to address the problems of related-art methods, such as those based on analytical models or using binary images and edge detection: complex post-processing, poor scene adaptability, and the introduction of a large number of rule-based judgments, resulting in poor scalability and robustness.
In the lane line detection method of the embodiment of the application, a pre-trained neural network model produces the type label and corresponding first offset of each pixel in the road image; the pixels are classified according to their type labels to determine each lane reference zone in the road image; and the lane line positions of the lane to which each lane reference zone belongs are determined from the positions and first offsets of the pixels in that zone. This greatly reduces post-processing difficulty, achieves high detection accuracy, adapts well to different scenes, and does not require a large number of rule-based judgments, thereby providing good scalability and robustness.
Fig. 1 is a flow diagram of a lane line detection method provided by an embodiment of the application.
The lane line detection method of the embodiment of the application may be performed by the lane line detection apparatus provided by an embodiment of the application, which is configured in a computer device, so that the lane line positions of the lane to which each lane reference zone belongs are determined according to the position of each pixel in the zone and its distances to the nearest pixels on each of its two sides whose color differs from its own.
As shown in Fig. 1, the lane line detection method includes:
Step 101: perform recognition processing on an acquired road image using a pre-trained neural network model to obtain annotation information for each pixel in the road image, where the annotation information for each pixel includes the type label of the pixel and a first offset corresponding to the pixel.
While the vehicle is traveling, a camera mounted on the vehicle captures a road image of the area in front of the vehicle, and the captured road image is input into the pre-trained neural network model. The neural network model extracts features from the road image, processes the extracted features, and outputs the annotation information of each pixel in the road image.
Here, the annotation information includes the type label of the pixel and the first offset corresponding to the pixel. The first offset characterizes the distances between the pixel and the nearest pixels on each of its two sides whose color differs from its own, and the type label is used to classify pixels.
As an example, the type labels may be denoted 0, 1, 2, and 3, and the first offset of a pixel whose type label is 0 may be defined as 0 or empty. If the neural network model determines that a pixel's type label is 0, it sets the pixel's first offset to 0; otherwise, it computes the pixel's color value, searches on each of the pixel's two sides for pixels whose color value differs from it, and, according to the positions of those pixels, computes the distance to the nearest differently colored pixel on each side.
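The per-pixel offset computation described above can be sketched for a single image row. This is a minimal illustration, not the patent's actual implementation: `first_offset` is a hypothetical helper that, assuming grayscale color values, scans left and right from a pixel for the nearest pixel whose color differs by more than a tolerance, returning the two distances (0 for a side with no such pixel, matching the label-0 convention).

```python
def first_offset(row_colors, x, tol=0):
    """For the pixel at column x in one image row, return (dl, dr): the
    distances to the nearest pixel on each side whose color differs from
    row_colors[x] by more than tol. A side with no such pixel yields 0."""
    c = row_colors[x]
    dl = dr = 0
    for i in range(x - 1, -1, -1):          # scan leftward
        if abs(row_colors[i] - c) > tol:
            dl = x - i
            break
    for i in range(x + 1, len(row_colors)):  # scan rightward
        if abs(row_colors[i] - c) > tol:
            dr = i - x
            break
    return dl, dr

# Bright lane-line pixels (255) at the ends, road-surface pixels (80) between:
row = [255, 80, 80, 80, 255]
```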
In practice, lane lines may be discontinuous. If a lane line is dashed or worn, the neural network model can determine the first offsets of only some pixels from color differences; for a pixel adjacent to the broken part of a lane line, the first offset can be predicted from the positions and directions of other pixels within a certain distance of that part.
For example, in Fig. 2, pixel C can be found from the color difference, yielding the first offset of pixel B, and pixel K can be found, yielding the first offset of pixel N. Then, from the positions of pixels C and K, the direction of the line joining C to its neighboring pixels of the same color, and the direction of the line joining K to its neighboring pixels of the same color, the position of pixel L can be determined. In turn, the distance between pixels L and M is computed to obtain the first offset of pixel M.
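The gap-filling step above (predicting a point such as L from known points C and K on either side of a break) can be approximated by linear interpolation, under the assumption that the lane line is locally straight between the two recovered points. The function name and signature are illustrative.

```python
def interpolate_gap_point(p_known_a, p_known_b, y):
    """Estimate the lane-line point at row y on the straight segment
    joining two points recovered on either side of a gap
    (e.g. pixels C and K in Fig. 2). Points are (x, y) tuples."""
    (xa, ya), (xb, yb) = p_known_a, p_known_b
    t = (y - ya) / (yb - ya)        # fraction of the way from a to b
    return (xa + t * (xb - xa), y)

# A point halfway between rows 0 and 10 on a line drifting from x=10 to x=20:
p_l = interpolate_gap_point((10, 0), (20, 10), 5)
```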
Step 102: determine, according to the type label of each pixel, each lane reference zone contained in the road image.
In this embodiment, the pixels are classified according to their type labels to determine the reference zone of each lane contained in the road image. That is, each lane has one reference zone, and the pixels within the same reference zone share the same type label.
As an example, suppose there are four type labels, denoted 0, 1, 2, and 3, and the first offset of a pixel whose type label is 0 is 0 or empty. Then the pixels with type label 1 form the reference zone of one lane, the pixels with type label 2 form the reference zone of another lane, and the pixels with type label 3 form the reference zone of a third lane.
That is, the number of lanes contained in the road image is the number of distinct type labels output by the model minus one, and each lane has its own reference zone.
As shown in Fig. 2, the road image contains three reference zones, each lane reference zone being the central region of the lane to which it belongs.
It should be noted that denoting the type labels 0, 1, 2, and 3 above is only an example; type labels may also be denoted by other symbols, and this embodiment places no limitation on this.
In this embodiment, classifying the pixels according to their type labels yields the reference zone of each lane.
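Step 102 amounts to grouping pixel coordinates by their non-zero type label. A minimal sketch, assuming `labels` is a 2-D grid of per-pixel type labels as in the 0/1/2/3 example above; names are illustrative.

```python
from collections import defaultdict

def lane_reference_zones(labels):
    """Group pixel coordinates by non-zero type label; each group is one
    lane reference zone (label 0 marks non-reference pixels)."""
    zones = defaultdict(list)
    for y, row in enumerate(labels):
        for x, t in enumerate(row):
            if t != 0:
                zones[t].append((x, y))
    return dict(zones)

# Three one-pixel-wide zones with labels 1, 2, 3; 0 elsewhere:
labels = [[0, 1, 0, 2, 0, 3, 0],
          [0, 1, 0, 2, 0, 3, 0]]
zones = lane_reference_zones(labels)
# number of lanes = number of distinct non-zero labels
```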
Step 103: determine the lane line positions of the lane to which each lane reference zone belongs, according to the position of each pixel in each lane reference zone and the first offset corresponding to each pixel.
Since each lane has one reference zone, and the first offset of a reference-zone pixel gives its distances to the nearest differently colored pixel on each of its two sides — that is, the first offset comprises two distances — the positions of the two nearest differently colored pixels on the two sides of each pixel can be determined from the pixel's position and its corresponding first offset.
In practice, different lanes are separated by lane lines whose color differs from that of the road surface. In a road image, therefore, the pixels within a single lane may be considered to have the same color value, while the pixels of the lane lines on the two sides of the lane have a different color value. The nearest differently colored pixels on the two sides of a reference-zone pixel may thus be considered pixels on the lane lines, so the positions of the lane lines on the left and right of the lane to which each reference zone belongs can be determined from the pixels in that zone.
In the lane line detection method of the embodiment of the application, the pre-trained model produces the annotation information of each pixel in the road image; each lane reference zone is then determined from the type labels in the annotation information; and the lane line positions of each lane reference zone are determined from the positions and first offsets of the pixels in each zone. Post-processing is simple, accuracy is high, scene adaptability is strong, and no large number of rule-based judgments is needed, giving good scalability and robustness.
In one embodiment of the application, step 103 above may be implemented as follows. Fig. 3 is a flow diagram of another lane line detection method provided by an embodiment of the application.
As shown in Fig. 3, step 103 includes:
Step 301: determine the position of each pixel located on each lane line, according to the position of each pixel in each lane reference zone and the first offset corresponding to each pixel.
Since the nearest differently colored pixels on the two sides of a reference-zone pixel may be considered pixels on the lane lines, the positions of the two corresponding pixels on the left and right lane lines can be determined from each reference-zone pixel's position and first offset.
Suppose a pixel in a lane reference zone has pixel coordinates (x0, y0) and first offset (xl, xr), where xl is the distance between the pixel and the nearest differently colored pixel on its left, and xr is the distance between the pixel and the nearest differently colored pixel on its right. Then from (x0, y0) and (xl, xr), the position of one pixel on the lane's left lane line and the position of one pixel on its right lane line can be computed.
As a result, according to the position of each pixel in each lane reference zone and corresponding first offset of each pixel,
It can determine the position of each pixel in each lane line.
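Step 301 can be sketched as follows, assuming the same-row form of the first offset described later in this document (distances measured within the pixel's own row, so the left lane-line point is (x0 − xl, y0) and the right one is (x0 + xr, y0)); names are illustrative.

```python
def lane_line_points(zone_pixels, offsets):
    """For each reference-zone pixel (x0, y0) with first offset (xl, xr),
    recover one point on the left lane line and one on the right,
    assuming offsets are measured within the pixel's own row."""
    left, right = [], []
    for (x0, y0), (xl, xr) in zip(zone_pixels, offsets):
        left.append((x0 - xl, y0))
        right.append((x0 + xr, y0))
    return left, right

pixels = [(50, 0), (50, 1)]     # two reference-zone pixels
offs   = [(10, 12), (11, 12)]   # their (xl, xr) first offsets
left, right = lane_line_points(pixels, offs)
```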
Step 302: determine the lane line positions according to the positions of the pixels located on each lane line.
In this embodiment, clustering the positions of the pixels located on each lane line determines the positions of the lane lines on the left and right of each lane in the road image.
In practice, the road surface may have stains and the like whose color also differs from that of the lane lines, so it cannot be guaranteed that every nearest differently colored pixel on the two sides of a reference-zone pixel is actually a pixel on a lane line; a position determined from a reference-zone pixel and its first offset is therefore not necessarily the position of a lane line pixel. Hence, before the positions of the pixels located on each lane line are clustered, the number of pixels at each identical position on each lane line is counted, the pixels at positions whose count is below a preset number are screened out, and the remaining pixels are then clustered to obtain the lane line positions. Wrongly and inaccurately positioned pixels are thereby screened out, substantially improving the detection accuracy of the lane line positions.
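The screening described above (dropping candidate positions whose occurrence count falls below a preset number, before clustering) can be sketched with a simple counter; `min_count` stands in for the preset number, and the names are illustrative.

```python
from collections import Counter

def filter_outlier_points(points, min_count=2):
    """Drop candidate lane-line positions whose occurrence count is below
    min_count; the surviving points are what would then be clustered."""
    counts = Counter(points)
    return [p for p in points if counts[p] >= min_count]

# (7, 0) appears once (e.g. caused by a road stain) and is screened out:
pts = [(40, 0), (40, 0), (40, 0), (7, 0)]
kept = filter_outlier_points(pts)
```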
In practice, a road usually contains multiple lanes, and some adjacent lanes share a common lane line, i.e., the lane line between them is a single line. In one embodiment of the application, if the acquired road image contains left-right adjacent first and second lanes, and the right lane line of the first lane coincides with the left lane line of the second lane, the lane line position may be determined by the method shown in Fig. 4. Fig. 4 is a flow diagram of another lane line detection method provided by an embodiment of the application.
As shown in Fig. 4, determining the lane line positions of the lane to which each lane reference zone belongs includes:
Step 401: determine the position of each first pixel located on the right lane line of the first lane, according to the position of each pixel in the first lane's reference zone and the first offset corresponding to each pixel.
For the first lane, the positions of the pixels on its left lane line and the positions of the first pixels on its right lane line can be computed from the positions and first offsets of the pixels in the first lane's reference zone.
Step 402: determine the position of each second pixel located on the left lane line of the second lane, according to the position of each pixel in the second lane's reference zone and the first offset corresponding to each pixel.
For the second lane, the positions of the second pixels on its left lane line and the positions of the pixels on its right lane line can be computed from the positions and first offsets of the pixels in the second lane's reference zone.
Step 403: determine the position of the right lane line of the first lane according to the positions of the first pixels and the positions of the second pixels.
Since the first and second lanes are adjacent and the right lane line of the first lane coincides with the left lane line of the second lane — that is, the lane line between them is a single line — the first pixels on the first lane's right lane line and the second pixels on the second lane's left lane line are pixels on the same lane line. The position of the first lane's right lane line, which is also the position of the second lane's left lane line, can therefore be determined from the positions of the first pixels and the positions of the second pixels.
It will be understood that if there is an adjacent lane on the left of the first lane whose right lane line coincides with the first lane's left lane line, the position of the first lane's left lane line can likewise be determined by the method shown in Fig. 4. Similarly, if there is an adjacent lane on the right of the second lane whose left lane line coincides with the second lane's right lane line, the position of the second lane's right lane line can be determined by the method shown in Fig. 4.
In this embodiment, different type labels may be specified to represent the pixels in the lane reference zones from left to right (or from right to left) in the road image, so that the type label of the pixels in a reference zone indicates whether the lane it belongs to is a boundary lane of the road image. If the type labels indicate that the first lane or the second lane is a boundary lane of the road image, the position of the first lane's left lane line can be determined from the positions of the pixels determined to lie on that line, or the position of the second lane's right lane line can be determined from the positions of the pixels determined to lie on that line. For example, in Fig. 2, the lane on the left is the left boundary lane of the road image, so the position of its left lane line can be determined from the positions of the pixels determined to lie on that line.
In the lane line detection method of the embodiment of the application, if the lane lines between adjacent lanes coincide, the positions of the pixels on the common lane line can be determined from the pixels in the two lane reference zones and their corresponding first offsets, and the position of the common lane line can then be determined. The position of a lane line is thus determined in combination with the structural information of the road, improving the detection accuracy of lane lines.
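Steps 401 to 403 can be sketched as merging the two point sets that sample the shared line. The patent leaves the exact combination open, so averaging the x coordinates per row is only one illustrative choice; the names are hypothetical.

```python
def shared_lane_line(first_right_pts, second_left_pts):
    """When two adjacent lanes share a single painted line, the right-line
    points of the first lane and the left-line points of the second lane
    sample the same physical line; merge them and average x per row."""
    by_row = {}
    for x, y in first_right_pts + second_left_pts:
        by_row.setdefault(y, []).append(x)
    return sorted((sum(xs) / len(xs), y) for y, xs in by_row.items())

# Both zones place the shared line near x = 101:
line = shared_lane_line([(100, 0), (101, 1)], [(102, 0), (101, 1)])
```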
In practice, lane lines come in multiple types, such as solid and dashed lines. In this embodiment, the annotation information of each pixel further includes a line-type label of the lane to which the pixel belongs, where the line-type label indicates the type of the lane line.
After the lane line positions of the lane to which each lane reference zone belongs are determined, the lane lines can be constructed according to the line-type labels of the lanes to which the pixels belong and the lane line positions. Specifically, a lane line can be constructed from its line-type label and its position.
For example, if the line-type label of a lane line is "dashed", a dashed lane line can be constructed according to the position of the lane line.
In practice, owing to the shooting angle, the lane lines in a captured road image are neither horizontal nor vertical; for example, in Fig. 2, the left lane line of the left lane and the right lane line of the right lane are neither horizontal nor vertical. For ease of computation, the first offset may therefore characterize the distances between the pixel and the nearest differently colored pixels in the same row on each of its two sides. That is, the first offset corresponding to a reference-zone pixel is the pair of distances, within the pixel's own row, to the nearest pixels on each side whose color differs from its own.
As shown in Fig. 2, the first offset corresponding to pixel B in the left lane's reference zone is the distance between B and pixel A together with the distance between B and pixel C.
In this embodiment, since the first offset characterizes the distances to the nearest differently colored pixels in the same row on the two sides of the pixel, the computation only needs to search the pixel's own row for the two nearest pixels whose color differs from it. By contrast, if the first offset characterized the distances to the nearest differently colored pixels anywhere on the two sides, it would be necessary to find all differently colored pixels on both sides of the pixel, select the nearest ones according to the pixel's position, and then compute the distances. The same-row form thus reduces the position computation.
If the first offset characterizes the distances between the pixel and the nearest differently colored pixels in the same row on its two sides, then when the position of each pixel located on each lane line is determined from the positions and first offsets of the pixels in each lane reference zone, the positions of the pixels on the lane lines on the two sides of each lane can be obtained row by row, by adding and subtracting each reference-zone pixel's first offset from its position.
As shown in Fig. 2, pixel D is a pixel in the middle lane's reference zone. Suppose the pixel coordinates of D are (X0, Y0) and its first offset is (Xl, Xr), where Xl and Xr are the distances from D to pixels E and F, the nearest differently colored pixels in the same row on the two sides of D. Then the pixel coordinates of E are (X0 - Xl, Y0) and those of F are (X0 + Xr, Y0); that is, from the position of D and its corresponding offset, the position of pixel E on the middle lane's left lane line and the position of pixel F on its right lane line are obtained.
Similarly, in Fig. 2, pixel H is a pixel in the right lane's reference zone. Suppose its pixel coordinates are (X1, Y1) and its first offset is (X1l, X1r), where X1l and X1r are the distances from H to pixels I and J, the nearest differently colored pixels in the same row on the two sides of H. Then from (X1, Y1) and (X1l, X1r), the pixel coordinates of I are (X1 - X1l, Y1) and those of J are (X1 + X1r, Y1); that is, the position of pixel I on the right lane's left lane line and the position of pixel J on its right lane line are determined.
In the embodiment of the application, the position of each pixel located on each lane line is determined from the positions and first offsets of the pixels in each lane reference zone, and the lane line positions are then determined from the positions of the pixels located on each lane line, thereby achieving pixel-level lane line detection and improving the accuracy of lane line detection.
To keep the vehicle traveling in the central region of its lane and improve its safety, in this embodiment the annotation information of each pixel may further include a second offset, where the second offset characterizes the distance between the pixel and the center point of the lane it is in and is less than a preset value.
Here, the center point is determined as follows: from the pixel's first offset, the positions of the two nearest differently colored pixels on its two sides are determined, and the center point is then determined from the positions of those two pixels. For example, in Fig. 2, the positions of pixels E and F can be determined from the first offset of pixel D; the position of the center point G of pixels E and F is then determined, and G is a pixel on the lane's center line.
In this embodiment, since the first offset of a pixel outside any lane reference zone is 0 or empty, its second offset is also 0 or empty. For a pixel in a lane reference zone, the first offset gives its distances to the pixels on the lane lines on its two sides, while the second offset is its distance to the center point of its lane; since the reference zone is the central region of the lane, the second offset is smaller than the first offset.
The range of the second offset can be determined from the preset width of the lane reference zone, i.e., the second offset is less than or equal to the preset width of the reference zone.
In this embodiment, the position of each pixel on the center line of the lane to which each lane reference zone belongs can be determined from the position of each pixel in the zone and the second offset corresponding to that pixel, and the position of each lane's center line can then be determined from the positions of those pixels. The vehicle can thus travel along the center line of its current lane, improving its safety.
Taking Fig. 2 as an example, for pixel D in the middle lane reference region, the center point of its lane is pixel G, and the second offset corresponding to pixel D is the distance cx between pixel D and center point pixel G. Then, according to the pixel coordinates (X0, Y0) of pixel D and its second offset cx, the pixel coordinates of pixel G can be determined as (X0+cx, Y0), i.e., the position of pixel G on the lane center line of the middle lane is determined.
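The geometry of the two offsets can be sketched as follows (a minimal illustration assuming, as in the example, that offsets are measured along the image row; the function names and the sample numbers are hypothetical, not from the patent):

```python
def center_from_first_offsets(x0, y0, left_off, right_off):
    """Center point G of the two lane-line pixels E and F found on either
    side of pixel D via its first offsets (measured along the row)."""
    xe = x0 - left_off   # pixel E on the left lane line
    xf = x0 + right_off  # pixel F on the right lane line
    return ((xe + xf) / 2.0, y0)

def center_from_second_offset(x0, y0, cx):
    """Center point G directly from pixel D's second offset cx."""
    return (x0 + cx, y0)

# With pixel D at (X0, Y0) = (100, 50) and first offsets 30 (left), 40 (right):
g1 = center_from_first_offsets(100, 50, 30, 40)   # -> (105.0, 50)
# The same point via the second offset cx = (40 - 30) / 2 = 5:
g2 = center_from_second_offset(100, 50, 5)        # -> (105, 50)
```

Note that the second offset lets the center line be read off in one step, without first locating pixels E and F, which is consistent with its role as a separate markup field.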
In one embodiment of the application, before the pre-trained neural network model is used to perform identification processing on the acquired road image, the neural network model can first be obtained through training. Fig. 5 is a flow diagram of another lane line detection method provided by an embodiment of the present application.
As shown in Fig. 5, before performing identification processing on the acquired road image using the pre-trained neural network model, the lane line detection method further includes:
Step 501, a sample image set is acquired.
In the present embodiment, a large number of road images can be acquired to constitute the sample image set. The lanes included in the road images of the sample image set may be the same or different.
Step 502, annotation processing is performed on each sample image in the sample image set to determine the target markup information of each pixel in each sample image.
For each sample image, each pixel is annotated. Specifically, the central area of each lane, of a predetermined width, can be chosen as the lane reference region; pixels in the same lane reference region are annotated with the same type label, and for each pixel in a lane reference region the perpendicular distances to the lane lines on both sides of its lane, i.e., the first offset, are calculated. Pixels outside the lane reference regions are also annotated with a type label, and the first offset corresponding to such a pixel can be labeled as 0 or as null.
In the present embodiment, different type labels are used to annotate pixels outside the reference regions and pixels in different lane reference regions, and the width of a lane reference region can be a preset multiple of the width of its lane, such as 0.3 times or 0.2 times, which can be set according to actual needs.
Taking Fig. 2 as an example, the road image contains 3 lanes in total: the type label of pixels in the left lane reference region can be labeled 1, the type label of pixels in the middle lane reference region can be labeled 2, the type label of pixels in the right lane reference region can be labeled 3, and the type label of pixels outside the lane reference regions can be labeled 0.
Thereby, according to the type labels of the pixels in the lane reference regions, the positional relationship between the lane reference regions can be determined; e.g., the lane containing the lane reference region formed by pixels with type label 1 is adjacent to the lane containing the reference region formed by pixels with type label 2. In addition, the distances from the pixels in a reference region to the left and right lane lines are marked.
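The annotation scheme of step 502 can be sketched for a single image row as follows (a hedged illustration only: representing each lane by its per-row (left_x, right_x) lane-line coordinates, and the function name, are assumptions; the 0.3 width factor follows the example in the text):

```python
def annotate_row(width, lanes, ref_fraction=0.3):
    """Annotate one image row. `lanes` is a list of (left_x, right_x)
    lane-line x-coordinates. Returns per-pixel (type_label, first_offset)
    pairs: label 0 / offset None outside reference regions, labels 1..n
    inside, with the first offset as distances to the same-row lane-line
    pixels on both sides."""
    labels = [(0, None)] * width
    for lane_id, (lx, rx) in enumerate(lanes, start=1):
        center = (lx + rx) / 2.0
        half = (rx - lx) * ref_fraction / 2.0  # central strip of the lane
        for x in range(width):
            if center - half <= x <= center + half:
                labels[x] = (lane_id, (x - lx, rx - x))
    return labels

row = annotate_row(20, [(2, 8), (8, 14)])  # two adjacent lanes
assert row[5] == (1, (3, 3))   # pixel in the lane-1 reference region
assert row[0] == (0, None)     # pixel outside any reference region
```

As in the text, pixels of different reference regions receive different type labels (1, 2, ...), and everything outside a reference region is label 0 with a null first offset.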
It can be seen that, by training the road structure into the network when training the neural network model, the embodiment of the present application greatly improves lane line detection accuracy.
It should be noted that, when performing the annotation processing, the perpendicular distances from a pixel in a lane reference region to the left-hand lane line and the right-hand lane line of its lane can be taken as the first offset; alternatively, the distances between the pixel and the same-row pixels on the lane lines on both sides of its lane can be taken as the first offset. Taking Fig. 2 as an example, the distances between pixel H and the same-row pixels on the left-hand lane line and the right-hand lane line of the right lane, such as pixel J, can be taken as the first offset of pixel H.
Step 503, each sample image is input into the initial neural network model to obtain the prediction markup information of each pixel output by the initial neural network model.
In the embodiment of the present application, each sample image is input into the initial neural network model to obtain the prediction markup information of each pixel in each sample image output by the model.
Step 504, the initial neural network model is corrected according to the difference between the prediction markup information and the target markup information, to generate the neural network model.
For each sample image, the difference between the target markup information and the prediction markup information of each pixel in the image is determined from the two. The difference includes whether the type label in the prediction markup information of a pixel is the same as the type label in its target markup information, and the difference between the first offset in the prediction markup information and the first offset in the target markup information.
Then, using the differences between the target markup information and the prediction markup information of each pixel in each sample image, the parameters of the initial neural network model are corrected through multiple iterations to obtain the optimal parameters of the neural network model, thereby finally generating the neural network model.
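The per-pixel difference used in steps 503-504 can be sketched as a two-part loss, classification on the type label plus regression on the first offset (a hedged illustration: the patent does not specify loss functions, so the cross-entropy/L1 choice and all names here are assumptions):

```python
import math

def per_pixel_loss(pred_label_probs, pred_offset, target_label, target_offset):
    """Difference between the prediction and target markup information of one
    pixel: cross-entropy on the type label, plus an L1 term on the first
    offset for pixels inside a lane reference region (target label != 0)."""
    cls_loss = -math.log(pred_label_probs[target_label] + 1e-12)
    if target_label == 0:  # outside the reference regions: no offset term
        return cls_loss
    reg_loss = (abs(pred_offset[0] - target_offset[0])
                + abs(pred_offset[1] - target_offset[1]))
    return cls_loss + reg_loss

# Predicted distribution over type labels 0..3 and predicted first offset:
loss = per_pixel_loss([0.1, 0.7, 0.1, 0.1], (29.0, 41.0), 1, (30.0, 40.0))
# cross-entropy on label 1, plus |29 - 30| + |41 - 40| on the offsets
```

Summing this quantity over all pixels of all sample images and minimizing it iteratively corresponds to the repeated parameter correction described above.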
In the lane line detection method of the embodiment of the present application, the neural network model is generated by training on per-pixel markup information, which can substantially reduce post-processing difficulty. Moreover, the distances between the pixels in the reference regions and the left-hand and right-hand lane lines are used, i.e., the neural network model is trained with road structure information, so that even when a lane line is severely worn or partially occluded, the model can still stably determine the position of the lane line and thus has good robustness.
In order to realize the above embodiments, the embodiment of the present application also proposes a lane line detection device. Fig. 6 is a structural schematic diagram of a lane line detection device provided by an embodiment of the present application.
As shown in Fig. 6, the lane line detection device includes: an identification module 610, a first determining module 620 and a second determining module 630.
The identification module 610 is configured to perform identification processing on the acquired road image using the pre-trained neural network model, to obtain the markup information of each pixel in the road image, wherein the markup information of each pixel includes the type label to which the pixel belongs and the first offset corresponding to the pixel, and the first offset is used to characterize the distances between the pixel and the nearest pixels located on both sides of it that have a color difference from it;
the first determining module 620 is configured to determine, according to the type label to which each pixel belongs, each lane reference region included in the road image, wherein the type labels of the pixels in each lane reference region are identical;
the second determining module 630 is configured to determine, according to the position of each pixel in each lane reference region and the first offset corresponding to each pixel, the lane line position of the lane to which each lane reference region belongs.
In a possible implementation of the embodiment of the present application, the above first determining module 620 is specifically configured to:
determine, according to the position of each pixel in each lane reference region and the first offset corresponding to each pixel, the position of each pixel located on each lane line; and
determine the lane line position according to the positions of the pixels located on each lane line.
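This two-step determination can be sketched as follows, under the same assumption as before that the first offset gives distances to the same-row lane-line pixels (all names illustrative, not from the patent):

```python
def lane_line_pixels(ref_pixels):
    """From reference-region pixels [(x, y, (left_off, right_off)), ...],
    compute the pixels lying on the left and right lane lines of the lane."""
    left, right = [], []
    for x, y, (lo, ro) in ref_pixels:
        left.append((x - lo, y))    # same-row pixel on the left lane line
        right.append((x + ro, y))   # same-row pixel on the right lane line
    return left, right

# Two reference-region pixels of one lane, on consecutive rows:
ref = [(10, 0, (4, 5)), (11, 1, (5, 4))]
left, right = lane_line_pixels(ref)
assert left == [(6, 0), (6, 1)] and right == [(15, 0), (15, 1)]
```

The lane line position would then be determined from each collected pixel list, e.g., by fitting a curve through it.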
In a possible implementation of the embodiment of the present application, the acquired road image includes a first lane and a second lane adjacent to each other left and right, and the right-hand lane line of the first lane coincides with the left-hand lane line of the second lane;
the above first determining module 620 is also configured to: determine, according to the position of each pixel in the first lane reference region and the first offset corresponding to each pixel, the position of each first pixel located on the right-hand lane line of the first lane;
determine, according to the position of each pixel in the second lane reference region and the first offset corresponding to each pixel, the position of each second pixel located on the left-hand lane line of the second lane; and
determine the position of the right-hand lane line of the first lane according to the positions of the first pixels and the positions of the second pixels.
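One simple way to combine the two pixel sets for the shared line can be sketched as follows (the patent does not specify the combination rule, so the per-row averaging and the names here are assumptions):

```python
def merge_shared_line(first_pixels, second_pixels):
    """Merge per-row estimates of the shared lane line: pixels from the
    first lane's right line and the second lane's left line, averaged
    row by row."""
    by_row = {}
    for x, y in first_pixels + second_pixels:
        by_row.setdefault(y, []).append(x)
    return sorted((sum(xs) / len(xs), y) for y, xs in by_row.items())

# Slightly disagreeing estimates of the same line from the two lanes:
merged = merge_shared_line([(15, 0), (16, 1)], [(17, 0), (16, 1)])
assert merged == [(16.0, 0), (16.0, 1)]
```

Using both reference regions in this way gives two independent estimates of the coinciding line, which is one reason the shared boundary can be located even when the paint on one side is degraded.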
In a possible implementation of the embodiment of the present application, the markup information of each pixel further includes the linear label of the lane to which the pixel belongs; and the device further includes:
a construction module, configured to construct the lane line according to the linear label of the lane to which each pixel belongs and the lane line position.
In a possible implementation of the embodiment of the present application, the first offset characterizes the distances between the pixel and the nearest same-row pixels located on both sides of it that have a color difference from it.
In a possible implementation of the embodiment of the present application, the markup information of each pixel further includes a second offset; the second offset is used to characterize the distance between the pixel and the center point of the lane where the pixel is located, and the second offset is less than the first offset and less than a preset value.
In a possible implementation of the embodiment of the present application, the device may also include:
a first acquisition module, configured to acquire a sample image set;
a third determining module, configured to perform annotation processing on each sample image in the sample image set to determine the target markup information of each pixel in each sample image;
a second acquisition module, configured to input each sample image into the initial neural network model to obtain the prediction markup information of each pixel output by the initial neural network model; and
a generation module, configured to correct the initial neural network model according to the difference between the prediction markup information and the target markup information, to generate the neural network model.
It should be noted that the foregoing explanation of the method embodiments also applies to the lane line detection device of this embodiment, and details are not repeated here.
The lane line detection device of the embodiment of the present application obtains, through the pre-trained neural network model, the type label and the corresponding first offset of each pixel in the road image, classifies the pixels according to their type labels to determine each lane reference region in the road image, and, using the positions of the pixels in each lane reference region and the first offsets corresponding to those pixels, can determine the lane line position of the lane to which each lane reference region belongs. This substantially reduces post-processing difficulty, achieves high detection accuracy and strong adaptability to different scenes, and does not require introducing a large number of rule-based judgments, and thus has good scalability and robustness.
In order to realize the above embodiments, the embodiment of the present application also proposes a computer device, including a processor and a memory;
wherein the processor runs a program corresponding to executable program code by reading the executable program code stored in the memory, so as to realize the lane line detection method described in the above embodiments.
Fig. 7 shows a block diagram of an exemplary computer device suitable for realizing the embodiments of the application. The computer equipment 12 shown in Fig. 7 is only an example and should not bring any restriction on the function and scope of use of the embodiments of the present application.
As shown in Fig. 7, the computer equipment 12 takes the form of a general-purpose computing device. The components of the computer equipment 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 connecting different system components (including the system memory 28 and the processing unit 16).
The bus 18 represents one or more of several classes of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures. For example, these architectures include, but are not limited to, the Industry Standard Architecture (hereinafter referred to as: ISA) bus, the Micro Channel Architecture (hereinafter referred to as: MAC) bus, the enhanced ISA bus, the Video Electronics Standards Association (hereinafter referred to as: VESA) local bus, and the Peripheral Component Interconnection (hereinafter referred to as: PCI) bus.
The computer equipment 12 typically comprises a variety of computer-system-readable media. These media can be any usable media that can be accessed by the computer equipment 12, including volatile and non-volatile media and removable and non-removable media.
The memory 28 may include computer-system-readable media in the form of volatile memory, such as a random access memory (hereinafter referred to as: RAM) 30 and/or a cache memory 32. The computer equipment 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media. Only as an example, the storage system 34 can be used for reading and writing immovable, non-volatile magnetic media (not shown in Fig. 7, commonly referred to as a "hard drive"). Although not shown in Fig. 7, a disk drive for reading and writing a removable non-volatile magnetic disk (such as a "floppy disk") and an optical disc drive for reading and writing a removable non-volatile optical disc (such as a compact disc read-only memory (hereinafter referred to as: CD-ROM), a digital versatile disc read-only memory (hereinafter referred to as: DVD-ROM), or other optical media) can be provided. In these cases, each drive can be connected with the bus 18 through one or more data media interfaces. The memory 28 may include at least one program product having a set of (for example, at least one) program modules configured to perform the functions of the embodiments of the application.
A program/utility 40 having a set of (at least one) program modules 42 can be stored, for example, in the memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination of them, may include the realization of a network environment. The program modules 42 usually execute the functions and/or methods in the embodiments described herein.
The computer equipment 12 can also communicate with one or more external devices 14 (such as a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the computer equipment 12, and/or with any device (such as a network card, a modem, etc.) that enables the computer equipment 12 to communicate with one or more other computing devices. This communication can be carried out through an input/output (I/O) interface 22. Moreover, the computer equipment 12 can also communicate through the network adapter 20 with one or more networks, such as a local area network (hereinafter referred to as: LAN), a wide area network (hereinafter referred to as: WAN), and/or a public network, for example, the Internet. As shown in the figure, the network adapter 20 communicates with the other modules of the computer equipment 12 through the bus 18. It should be understood that, although not shown in the figure, other hardware and/or software modules can be used in conjunction with the computer equipment 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, etc.
The processing unit 16, by running the programs stored in the system memory 28, executes various functional applications and data processing, for example realizing the method mentioned in the foregoing embodiments.
In order to realize the above embodiments, the embodiment of the present application also proposes a non-transitory computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the lane line detection method described in the above embodiments is realized.
In the description of this specification, the terms "first" and "second" are used for description purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the quantity of the indicated technical features.
Any process or method description in a flow chart, or described otherwise herein, can be understood as representing a module, segment, or portion of code comprising one or more executable instructions for realizing custom logic functions or steps of the process, and the scope of the preferred embodiments of the application includes other realizations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those of ordinary skill in the art to which the embodiments of the application belong.
The logic and/or steps represented in the flow charts, or described otherwise herein, can be considered, for example, an ordered list of executable instructions for realizing logic functions, and may be embodied in any computer-readable medium for use by, or in conjunction with, an instruction execution system, device, or equipment (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, device, or equipment and execute them). For the purpose of this specification, a "computer-readable medium" can be any device that may include, store, communicate, propagate, or transmit a program for use by, or in conjunction with, an instruction execution system, device, or equipment. More specific examples (a non-exhaustive list) of computer-readable media include the following: an electrical connection part with one or more wirings (an electronic device), a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium can even be paper, or another suitable medium, on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or, when necessary, processing it in another suitable manner, and then stored in a computer memory.
It should be appreciated that each part of the application can be realized with hardware, software, firmware, or a combination of them. In the above embodiments, multiple steps or methods can be realized with software or firmware that is stored in a memory and executed by a suitable instruction execution system. For example, if realized with hardware, as in another embodiment, any one of the following techniques well known in the art, or a combination of them, can be used: a discrete logic circuit with logic gate circuits for realizing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), etc.
Those skilled in the art can understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing the relevant hardware through a program; the program can be stored in a computer-readable storage medium, and the program, when executed, includes one of the steps of the method embodiments or a combination of them.
In addition, the functional units in the embodiments of the application can be integrated in one processing module, or each unit can physically exist alone, or two or more units can be integrated in one module. The above integrated module can be realized either in the form of hardware or in the form of a software function module. If the integrated module is realized in the form of a software function module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
The storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc, etc. Although the embodiments of the application have been shown and described above, it can be understood that the above embodiments are exemplary and should not be understood as restricting the application; those skilled in the art can change, modify, replace, and vary the above embodiments within the scope of the application.
Claims (10)
1. A lane line detection method, characterized by comprising:
performing identification processing on an acquired road image using a pre-trained neural network model, to obtain markup information of each pixel in the road image, wherein the markup information of each pixel includes a type label to which the pixel belongs and a first offset corresponding to the pixel, and the first offset is used to characterize the distances between the pixel and the nearest pixels located on both sides of it that have a color difference from it;
determining, according to the type label to which each pixel belongs, each lane reference region included in the road image, wherein the type labels of the pixels in each lane reference region are identical; and
determining, according to the position of each pixel in each lane reference region and the first offset corresponding to each pixel, the lane line position of the lane to which each lane reference region belongs.
2. The method as described in claim 1, characterized in that the determining, according to the position of each pixel in each lane reference region and the first offset corresponding to each pixel, the lane line position of the lane to which each lane reference region belongs comprises:
determining, according to the position of each pixel in each lane reference region and the first offset corresponding to each pixel, the position of each pixel located on each lane line; and
determining the lane line position according to the positions of the pixels located on each lane line.
3. The method according to claim 2, characterized in that the acquired road image includes a first lane and a second lane adjacent to each other left and right, and the right-hand lane line of the first lane coincides with the left-hand lane line of the second lane;
the determining the lane line position of the lane to which each lane reference region belongs comprises:
determining, according to the position of each pixel in the first lane reference region and the first offset corresponding to each pixel, the position of each first pixel located on the right-hand lane line of the first lane;
determining, according to the position of each pixel in the second lane reference region and the first offset corresponding to each pixel, the position of each second pixel located on the left-hand lane line of the second lane; and
determining the position of the right-hand lane line of the first lane according to the positions of the first pixels and the positions of the second pixels.
4. The method as described in claim 1, characterized in that the markup information of each pixel further includes the linear label of the lane to which the pixel belongs;
and after the determining the lane line position of the lane to which each lane reference region belongs, the method further comprises:
constructing the lane line according to the linear label of the lane to which each pixel belongs and the lane line position.
5. The method as described in any one of claims 1-4, characterized in that the first offset characterizes the distances between the pixel and the nearest same-row pixels located on both sides of it that have a color difference from it.
6. The method as claimed in claim 5, characterized in that the markup information of each pixel further includes a second offset; the second offset is used to characterize the distance between the pixel and the center point of the lane where the pixel is located, and the second offset is less than the first offset and less than a preset value.
7. The method as described in any one of claims 1-4, characterized in that, before the performing identification processing on the acquired road image using the pre-trained neural network model, the method further comprises:
acquiring a sample image set;
performing annotation processing on each sample image in the sample image set to determine the target markup information of each pixel in each sample image;
inputting each sample image into an initial neural network model to obtain the prediction markup information of each pixel output by the initial neural network model; and
correcting the initial neural network model according to the difference between the prediction markup information and the target markup information, to generate the neural network model.
8. A lane line detection device, characterized by comprising:
an identification module, configured to perform identification processing on an acquired road image using a pre-trained neural network model, to obtain markup information of each pixel in the road image, wherein the markup information of each pixel includes a type label to which the pixel belongs and a first offset corresponding to the pixel, and the first offset is used to characterize the distances between the pixel and the nearest pixels located on both sides of it that have a color difference from it;
a first determining module, configured to determine, according to the type label to which each pixel belongs, each lane reference region included in the road image, wherein the type labels of the pixels in each lane reference region are identical; and
a second determining module, configured to determine, according to the position of each pixel in each lane reference region and the first offset corresponding to each pixel, the lane line position of the lane to which each lane reference region belongs.
9. A computer device, characterized by comprising a processor and a memory;
wherein the processor runs a program corresponding to executable program code by reading the executable program code stored in the memory, so as to realize the lane line detection method as described in any one of claims 1-7.
10. A non-transitory computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the lane line detection method as described in any one of claims 1-7 is realized.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811581791.6A CN109740469B (en) | 2018-12-24 | 2018-12-24 | Lane line detection method, lane line detection device, computer device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811581791.6A CN109740469B (en) | 2018-12-24 | 2018-12-24 | Lane line detection method, lane line detection device, computer device, and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109740469A true CN109740469A (en) | 2019-05-10 |
CN109740469B CN109740469B (en) | 2021-01-22 |
Family
ID=66361078
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811581791.6A Active CN109740469B (en) | 2018-12-24 | 2018-12-24 | Lane line detection method, lane line detection device, computer device, and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109740469B (en) |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110232368A (en) * | 2019-06-20 | 2019-09-13 | 百度在线网络技术(北京)有限公司 | Method for detecting lane lines, device, electronic equipment and storage medium |
CN110263713A (en) * | 2019-06-20 | 2019-09-20 | 百度在线网络技术(北京)有限公司 | Method for detecting lane lines, device, electronic equipment and storage medium |
CN110263714A (en) * | 2019-06-20 | 2019-09-20 | 百度在线网络技术(北京)有限公司 | Method for detecting lane lines, device, electronic equipment and storage medium |
CN110276293A (en) * | 2019-06-20 | 2019-09-24 | 百度在线网络技术(北京)有限公司 | Method for detecting lane lines, device, electronic equipment and storage medium |
CN110363182A (en) * | 2019-07-24 | 2019-10-22 | 北京信息科技大学 | Method for detecting lane lines based on deep learning |
CN111347831A (en) * | 2020-03-13 | 2020-06-30 | 北京百度网讯科技有限公司 | Vehicle running stability control method, device, equipment and storage medium |
CN111368804A (en) * | 2020-03-31 | 2020-07-03 | 河北科技大学 | Lane line detection method, system and terminal equipment |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09223218A (en) * | 1996-02-15 | 1997-08-26 | Toyota Motor Corp | Method and device for detecting traveling route |
CN102208019A (en) * | 2011-06-03 | 2011-10-05 | 东南大学 | Method for detecting lane change of vehicle based on vehicle-mounted camera |
CN102592114A (en) * | 2011-12-26 | 2012-07-18 | 河南工业大学 | Method for extracting and recognizing lane line features of complex road conditions |
CN102646274A (en) * | 2011-02-16 | 2012-08-22 | 日产自动车株式会社 | Lane boundary detecting device and lane boundary detecting method |
CN103714538A (en) * | 2013-12-20 | 2014-04-09 | 中联重科股份有限公司 | Road edge detection method, device and vehicle |
CN105260699A (en) * | 2015-09-10 | 2016-01-20 | 百度在线网络技术(北京)有限公司 | Lane line data processing method and lane line data processing device |
CN107066986A (en) * | 2017-04-21 | 2017-08-18 | 哈尔滨工业大学 | A kind of lane line based on monocular vision and preceding object object detecting method |
CN107203738A (en) * | 2016-03-17 | 2017-09-26 | 福特全球技术公司 | Vehicle lane boundary alignment |
CN107944388A (en) * | 2017-11-24 | 2018-04-20 | 海信集团有限公司 | A kind of method for detecting lane lines, device and terminal |
CN108009524A (en) * | 2017-12-25 | 2018-05-08 | 西北工业大学 | A kind of method for detecting lane lines based on full convolutional network |
US20180129887A1 (en) * | 2016-11-07 | 2018-05-10 | Samsung Electronics Co., Ltd. | Method and apparatus for indicating lane |
CN108921089A (en) * | 2018-06-29 | 2018-11-30 | 驭势科技(北京)有限公司 | Method for detecting lane lines, device and system and storage medium |
- 2018-12-24: CN application CN201811581791.6A filed; granted as patent CN109740469B (status: Active)
Non-Patent Citations (4)
Title |
---|
JINJU ZANG, WEI ZHOU, GUANWEN ZHANG, ZHEMIN DUAN: "Traffic Lane Detection using Fully Convolutional Neural Network", APSIPA Annual Summit and Conference * |
PING-RONG CHEN, SHAO-YUAN LO, HSUEH-MING HANG: "Efficient Road Lane Marking Detection with Deep Learning", arXiv:1809.03994 * |
LI SONGZE: "Design and Implementation of a Lane Line Detection System Based on Deep Learning", China Masters' Theses Full-text Database, Information Science and Technology * |
JU QIANAO: "Research on Fast Lane Line Recognition Based on Machine Vision", China Masters' Theses Full-text Database, Information Science and Technology * |
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110263713B (en) * | 2019-06-20 | 2021-08-10 | 百度在线网络技术(北京)有限公司 | Lane line detection method, lane line detection device, electronic device, and storage medium |
CN110263713A (en) * | 2019-06-20 | 2019-09-20 | 百度在线网络技术(北京)有限公司 | Method for detecting lane lines, device, electronic equipment and storage medium |
CN110263714A (en) * | 2019-06-20 | 2019-09-20 | 百度在线网络技术(北京)有限公司 | Method for detecting lane lines, device, electronic equipment and storage medium |
CN110276293A (en) * | 2019-06-20 | 2019-09-24 | 百度在线网络技术(北京)有限公司 | Method for detecting lane lines, device, electronic equipment and storage medium |
CN110232368B (en) * | 2019-06-20 | 2021-08-24 | 百度在线网络技术(北京)有限公司 | Lane line detection method, lane line detection device, electronic device, and storage medium |
CN110263714B (en) * | 2019-06-20 | 2021-08-20 | 百度在线网络技术(北京)有限公司 | Lane line detection method, lane line detection device, electronic device, and storage medium |
CN110232368A (en) * | 2019-06-20 | 2019-09-13 | 百度在线网络技术(北京)有限公司 | Method for detecting lane lines, device, electronic equipment and storage medium |
CN110276293B (en) * | 2019-06-20 | 2021-07-27 | 百度在线网络技术(北京)有限公司 | Lane line detection method, lane line detection device, electronic device, and storage medium |
CN112131914B (en) * | 2019-06-25 | 2022-10-21 | 北京市商汤科技开发有限公司 | Lane line attribute detection method and device, electronic equipment and intelligent equipment |
CN112131914A (en) * | 2019-06-25 | 2020-12-25 | 北京市商汤科技开发有限公司 | Lane line attribute detection method and device, electronic equipment and intelligent equipment |
CN110363182A (en) * | 2019-07-24 | 2019-10-22 | 北京信息科技大学 | Method for detecting lane lines based on deep learning |
CN110363182B (en) * | 2019-07-24 | 2021-06-18 | 北京信息科技大学 | Deep learning-based lane line detection method |
CN111347831A (en) * | 2020-03-13 | 2020-06-30 | 北京百度网讯科技有限公司 | Vehicle running stability control method, device, equipment and storage medium |
CN113392680A (en) * | 2020-03-13 | 2021-09-14 | 富士通株式会社 | Road recognition device and method and electronic equipment |
CN113392680B (en) * | 2020-03-13 | 2024-03-05 | 富士通株式会社 | Road identification device and method and electronic equipment |
CN114148136A (en) * | 2020-03-13 | 2022-03-08 | 北京百度网讯科技有限公司 | Vehicle running stability control method, device, equipment and storage medium |
CN111347831B (en) * | 2020-03-13 | 2022-04-12 | 北京百度网讯科技有限公司 | Vehicle running stability control method, device, equipment and storage medium |
CN111368804A (en) * | 2020-03-31 | 2020-07-03 | 河北科技大学 | Lane line detection method, system and terminal equipment |
CN111739043A (en) * | 2020-04-13 | 2020-10-02 | 北京京东叁佰陆拾度电子商务有限公司 | Parking space drawing method, device, equipment and storage medium |
CN111739043B (en) * | 2020-04-13 | 2023-08-08 | 北京京东叁佰陆拾度电子商务有限公司 | Parking space drawing method, device, equipment and storage medium |
CN113688653A (en) * | 2020-05-18 | 2021-11-23 | 富士通株式会社 | Road center line recognition device and method and electronic equipment |
CN111898540A (en) * | 2020-07-30 | 2020-11-06 | 平安科技(深圳)有限公司 | Lane line detection method, lane line detection device, computer equipment and computer-readable storage medium |
WO2022051951A1 (en) * | 2020-09-09 | 2022-03-17 | 华为技术有限公司 | Lane line detection method, related device, and computer readable storage medium |
WO2023280135A1 (en) * | 2021-07-09 | 2023-01-12 | 华为技术有限公司 | Communication method and apparatus, and storage medium and program |
WO2023287906A1 (en) * | 2021-07-13 | 2023-01-19 | Canoo Technologies Inc. | System and method in the prediction of target vehicle behavior based on image frame and normalization |
US11840147B2 (en) | 2021-07-13 | 2023-12-12 | Canoo Technologies Inc. | System and method in data-driven vehicle dynamic modeling for path-planning and control |
US11845428B2 (en) | 2021-07-13 | 2023-12-19 | Canoo Technologies Inc. | System and method for lane departure warning with ego motion and vision |
US11891059B2 (en) | 2021-07-13 | 2024-02-06 | Canoo Technologies Inc. | System and methods of integrating vehicle kinematics and dynamics for lateral control feature at autonomous driving |
US11891060B2 (en) | 2021-07-13 | 2024-02-06 | Canoo Technologies Inc. | System and method in lane departure warning with full nonlinear kinematics and curvature |
US11908200B2 (en) | 2021-07-13 | 2024-02-20 | Canoo Technologies Inc. | System and method in the prediction of target vehicle behavior based on image frame and normalization |
CN113705436A (en) * | 2021-08-27 | 2021-11-26 | 一汽解放青岛汽车有限公司 | Lane information determination method and device, electronic equipment and medium |
CN114724119A (en) * | 2022-06-09 | 2022-07-08 | 天津所托瑞安汽车科技有限公司 | Lane line extraction method, lane line detection apparatus, and storage medium |
CN116543363A (en) * | 2023-04-14 | 2023-08-04 | 小米汽车科技有限公司 | Sample image acquisition method and device, electronic equipment and vehicle |
CN116543363B (en) * | 2023-04-14 | 2024-01-30 | 小米汽车科技有限公司 | Sample image acquisition method and device, electronic equipment and vehicle |
Also Published As
Publication number | Publication date |
---|---|
CN109740469B (en) | 2021-01-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109740469A (en) | Method for detecting lane lines, device, computer equipment and storage medium | |
Bai et al. | Exploiting semantic information and deep matching for optical flow | |
Pizzati et al. | Lane detection and classification using cascaded CNNs | |
Engelmann et al. | Joint object pose estimation and shape reconstruction in urban street scenes using 3D shape priors | |
Huang et al. | Robust inter-vehicle distance estimation method based on monocular vision | |
CN109085837A (en) | Control method for vehicle, device, computer equipment and storage medium | |
CN110163176A (en) | The recognition methods of lane line change location, device, equipment and medium | |
CN115717894B (en) | Vehicle high-precision positioning method based on GPS and common navigation map | |
CN109919144B (en) | Drivable region detection method, device, computer storage medium and drive test visual apparatus | |
CN109977823A (en) | Pedestrian's recognition and tracking method, apparatus, computer equipment and storage medium | |
CN109902658A (en) | Pedestrian's characteristic recognition method, device, computer equipment and storage medium | |
CN110263732A (en) | Multiscale target detection method and device | |
CN109242831A (en) | Picture quality detection method, device, computer equipment and storage medium | |
JP6595375B2 (en) | Traffic condition analysis device, traffic condition analysis method, and traffic condition analysis program | |
CN109766793A (en) | Data processing method and device | |
CN110276756A (en) | Road surface crack detection method, device and equipment | |
CN109738884A (en) | Method for checking object, device and computer equipment | |
CN108765315A (en) | Image completion method, apparatus, computer equipment and storage medium | |
CN110084230A (en) | Vehicle body direction detection method and device based on image | |
CN109703465A (en) | The control method and device of vehicle-mounted imaging sensor | |
CN111091038A (en) | Training method, computer readable medium, and method and apparatus for detecting vanishing points | |
Li et al. | A robust lane detection method based on hyperbolic model | |
CN109740609A (en) | A kind of gauge detection method and device | |
Cao et al. | Detection method for auto guide vehicle’s walking deviation based on image thinning and Hough transform | |
CN109948515A (en) | The classification recognition methods of object and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 2021-10-13. Patentee after: Apollo Intelligent Technology (Beijing) Co., Ltd., 105/F, Building 1, No. 10 Shangdi 10th Street, Haidian District, Beijing 100085. Patentee before: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) Co., Ltd., Baidu Building, No. 10 Shangdi 10th Street, Haidian District, Beijing 100085 |