CN109740469B - Lane line detection method, lane line detection device, computer device, and storage medium - Google Patents

Lane line detection method, lane line detection device, computer device, and storage medium

Info

Publication number
CN109740469B
Authority
CN
China
Prior art keywords
lane
pixel point
pixel
lane line
reference area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811581791.6A
Other languages
Chinese (zh)
Other versions
CN109740469A (en)
Inventor
翟玉强
谢术富
夏添
马彧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Intelligent Technology Beijing Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201811581791.6A priority Critical patent/CN109740469B/en
Publication of CN109740469A publication Critical patent/CN109740469A/en
Application granted granted Critical
Publication of CN109740469B publication Critical patent/CN109740469B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The application provides a lane line detection method, a lane line detection device, a computer device, and a storage medium. The method includes: recognizing an acquired road image with a neural network model generated by pre-training, so as to obtain labeling information for each pixel point in the road image, the labeling information of each pixel point including the type label to which the pixel point belongs and a first offset corresponding to the pixel point; determining each lane reference area contained in the road image according to the type labels of the pixel points; and determining the lane line position of the lane to which each lane reference area belongs according to the position of each pixel point in each lane reference area and the first offset corresponding to each pixel point. The method greatly reduces the difficulty of post-processing, achieves high detection accuracy, adapts well to different scenes, requires no large body of hand-written rules, and has good extensibility and robustness.

Description

Lane line detection method, lane line detection device, computer device, and storage medium
Technical Field
The application relates to the technical field of automatic driving and assisted driving of intelligent vehicles, and in particular to a lane line detection method, a lane line detection device, a computer device, and a storage medium.
Background
With the development of intelligent driving technology, lane line detection is one of the key technologies for intelligent driving. Currently, various lane line detection methods have appeared, for example, a lane line detection method based on a matching model, a method using a binary image and edge detection, and the like.
However, these lane line detection methods involve complex post-processing, adapt poorly to varied scenes, and rely on a large number of hand-written rules, resulting in poor extensibility and robustness.
Disclosure of Invention
The application provides a lane line detection method, a lane line detection device, a computer device, and a storage medium, so as to solve the problems of poor scene adaptability, extensibility, and robustness of lane line detection methods in the related art.
An embodiment of one aspect of the present application provides a lane line detection method, including:
identifying and processing the acquired road image by utilizing a neural network model generated by pre-training, so as to obtain labeling information of each pixel point in the road image, wherein the labeling information of each pixel point comprises the type label to which the pixel point belongs and a first offset corresponding to the pixel point, and the first offset represents the distances between the pixel point and the nearest pixel points that lie on the two sides of the pixel point and differ from it in color;
determining each lane reference area contained in the road image according to the type label of each pixel point, wherein the type labels of the pixel points in each lane reference area are the same;
and determining the lane line position of the lane to which each lane reference area belongs according to the position of each pixel point in each lane reference area and the first offset corresponding to each pixel point.
According to the lane line detection method, the type label and the corresponding first offset of each pixel in the road image are obtained through the neural network model generated by pre-training, the pixel points are classified according to their type labels to determine the lane reference areas in the road image, and the lane line position of the lane to which each lane reference area belongs is determined from the positions of the pixel points in each lane reference area and their corresponding first offsets. The post-processing difficulty is thus greatly reduced, the detection accuracy is high, the adaptability to scenes is strong, no large amount of rule-based judgment needs to be introduced, and the extensibility and robustness are good.
An embodiment of another aspect of the present application provides a lane line detection device, including:
the identification module is used for identifying and processing the acquired road image by utilizing a neural network model generated by pre-training, so as to obtain labeling information of each pixel point in the road image, wherein the labeling information of each pixel point comprises the type label to which the pixel point belongs and a first offset corresponding to the pixel point, and the first offset represents the distances between the pixel point and the nearest pixel points that lie on the two sides of the pixel point and differ from it in color;
the first determining module is used for determining each lane reference area contained in the road image according to the type label of each pixel point, wherein the type labels of the pixel points in each lane reference area are the same;
and the second determining module is used for determining the lane line position of the lane to which each lane reference area belongs according to the position of each pixel point in each lane reference area and the first offset corresponding to each pixel point.
The lane line detection device of the embodiment of the application obtains the type label and the corresponding first offset of each pixel in the road image through the neural network model generated by pre-training, classifies the pixel points according to their type labels to determine each lane reference area in the road image, and determines the lane line position of the lane to which each lane reference area belongs using the positions of the pixel points in each lane reference area and their corresponding first offsets. The post-processing difficulty is thus greatly reduced, the detection accuracy is high, the adaptability to scenes is strong, no large amount of rule-based judgment needs to be introduced, and the extensibility and robustness are good.
Another embodiment of the present application provides a computer device, including a processor and a memory;
the processor reads the executable program code stored in the memory to run a program corresponding to the executable program code, so as to implement the lane line detection method according to the embodiment of the above aspect.
Another embodiment of the present application provides a non-transitory computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the lane line detection method according to the above-described embodiment of the present application.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flow chart of a lane line detection method according to an embodiment of the present disclosure;
FIG. 2 is a schematic view of a lane reference area provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of another lane line detection method according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of another lane line detection method according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of another lane line detection method according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a lane line detection apparatus according to an embodiment of the present disclosure;
FIG. 7 illustrates a block diagram of an exemplary computer device suitable for use in implementing embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
A lane line detection method, apparatus, computer device, and storage medium according to embodiments of the present application are described below with reference to the accompanying drawings.
In the related art, lane line detection methods such as those based on a matching model or on binary images and edge detection involve complex post-processing, adapt poorly to varied scenes, and introduce a large number of hand-written rules, which leads to poor extensibility and robustness. The embodiments of the present application provide a lane line detection method to address these problems.
According to the lane line detection method, the type label and the corresponding first offset of each pixel in the road image are obtained through the neural network model generated by pre-training, the pixel points are classified according to their type labels to determine the lane reference areas in the road image, and the lane line position of the lane to which each lane reference area belongs is determined from the positions of the pixel points in each lane reference area and their corresponding first offsets. The post-processing difficulty is thus greatly reduced, the detection accuracy is high, the adaptability to scenes is strong, no large amount of rule-based judgment needs to be introduced, and the extensibility and robustness are good.
Fig. 1 is a schematic flow chart of a lane line detection method according to an embodiment of the present disclosure.
The lane line detection method provided by the embodiments of the application may be executed by the lane line detection device provided by the embodiments of the application. The device may be configured in a computer device to determine the lane line position of the lane to which each lane reference area belongs, according to the positions of the pixel points in each lane reference area and the distances between those pixel points and the nearest pixel points on their two sides that differ from them in color.
As shown in fig. 1, the lane line detection method includes:
step 101, identifying and processing the acquired road image by using a neural network model generated by pre-training to obtain labeling information of each pixel point in the road image, wherein the labeling information of each pixel point comprises a type label to which the pixel point belongs and a first offset corresponding to the pixel point.
During the running of the vehicle, a camera device mounted on the vehicle can be used to collect road images in front of the vehicle, and the collected road images are input into the neural network model generated by pre-training. The neural network model extracts features from the road image, processes the extracted features, and outputs the labeling information of each pixel point in the road image.
The labeling information comprises the type label to which the pixel point belongs and a first offset corresponding to the pixel point. The first offset represents the distances between the pixel point and the nearest pixel points that lie on its two sides and differ from it in color, and the type label is used for classifying the pixel points.
As an example, the type labels can be represented by 0, 1, 2, and 3, and the first offset corresponding to a pixel point with type label 0 can be defined as 0 or null. If the neural network model determines that the type label of a pixel point is 0, the first offset corresponding to the pixel point is set to 0. If the type label is not 0, the color value of the pixel point is calculated, the pixel points on its two sides with different color values are searched for, and the distances to the nearest such pixel points on each side are calculated from their positions.
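To make the definition concrete, the following is a minimal sketch of the quantity the model is trained to predict for a non-background pixel, computed directly from one image row. The function name, the exact-equality color test, and the same-row search (which the embodiment described further below also adopts) are assumptions of this sketch, not part of the patent.

```python
import numpy as np

def first_offset_for_pixel(row_colors: np.ndarray, x: int) -> tuple:
    # Distances from pixel x to the nearest pixels on its left and right in
    # the same row whose color differs from pixel x; 0 if no such pixel exists.
    color = row_colors[x]
    left = next((x - i for i in range(x - 1, -1, -1)
                 if not np.array_equal(row_colors[i], color)), 0)
    right = next((i - x for i in range(x + 1, len(row_colors))
                  if not np.array_equal(row_colors[i], color)), 0)
    return left, right

# Example: a row where pixels 3..5 share the lane color and the rest differ.
row = np.array([0, 0, 1, 2, 2, 2, 1, 0, 0])
print(first_offset_for_pixel(row, 4))  # -> (2, 2)
```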
When the neural network model processes images in practice, lane lines may be interrupted, for example where a lane line is dashed or worn away. In such cases the model can only determine the first offset of some pixel points directly from color differences; for the broken portions of the lane line, the offsets can be predicted from the positions and directions of other pixel points within a certain range of the pixel points in those portions.
For example, in fig. 2, based on the color difference, the pixel point C may be found to obtain the first offset of the pixel point B, and the pixel point K may likewise be found to obtain the first offset of the pixel point N. Then, the position of the pixel point L is determined according to the positions of the pixel point C and the pixel point K, the direction of the line connecting the pixel point C with its adjacent same-colored pixel points, and the direction of the line connecting the pixel point K with its adjacent same-colored pixel points. The distance between the pixel point L and the pixel point M is then calculated, thereby obtaining the first offset of the pixel point M.
And 102, determining each lane reference area contained in the road image according to the type label of each pixel point.
In this embodiment, the pixel points are classified according to the type label of each pixel point, and a reference area of each lane included in the road image is determined. That is to say, each lane has a reference region, and the type labels of the pixel points in the same reference region are the same.
As an example, if four types of type labels are respectively represented by 0, 1, 2, and 3, and it is specified that the first offset corresponding to the pixel point with the type label of 0 is 0 or null, the pixel point with the type label of 1 constitutes a reference area of a lane, the pixel point with the type label of 2 constitutes a reference area of a lane, and the pixel point with the type label of 3 constitutes a reference area of a lane.
That is, the number of distinct type labels output by the model, minus 1 (for the non-lane label), is the number of lanes included in the road image, and each lane has one reference area.
As shown in fig. 2, the road image includes 3 reference regions, and each lane reference region is a central region of a lane to which the lane reference region belongs.
Note that the above-mentioned type labels represented by 0, 1, 2, and 3 are only examples, and the type labels may be represented by other symbols, which is not limited in this embodiment.
In this embodiment, each lane reference region can be obtained by classifying the pixel points according to the type labels of the pixel points.
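As an illustrative sketch (the function name and the numpy representation of the label map are assumptions, not from the patent), the reference areas can be recovered from the per-pixel type labels as follows:

```python
import numpy as np

def lane_reference_areas(type_label: np.ndarray) -> dict:
    # type_label: (H, W) array of per-pixel type labels, 0 = non-lane area.
    # Returns {label: (N, 2) array of (row, col) pixel coordinates} for each
    # lane reference area, i.e. the classification described in step 102.
    areas = {}
    for label in np.unique(type_label):
        if label == 0:
            continue
        rows, cols = np.nonzero(type_label == label)
        areas[int(label)] = np.stack([rows, cols], axis=1)
    return areas
```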
And 103, determining the lane line position of the lane to which each lane reference area belongs according to the position of each pixel point in each lane reference area and the first offset corresponding to each pixel point.
Because each lane has one reference area, and the first offset gives the distances to the nearest pixel points on the two sides of a reference-area pixel point that differ from it in color, the first offset comprises two distances. The positions of these two nearest color-differing pixel points can therefore be determined from the position of each pixel point in each lane reference area and its corresponding first offset.
In practical applications, lane lines whose color differs from the road surface are used to separate lanes. In a road image, the color values of the pixel points within the same lane can therefore be considered the same, while the color values of the lane line pixels on the two sides of the lane differ from those within the lane. The nearest color-differing pixel points on the two sides of a pixel point in a lane reference area can thus be regarded as pixel points in the lane lines, so the positions of the lane lines on the left and right sides of the lane to which each lane reference area belongs can be determined from the pixel points in that reference area.
According to the lane line detection method, the model generated by pre-training is utilized to obtain the marking information of each pixel point in the road image, each lane reference area is determined according to the type label in the marking information, and the lane line position of each lane reference area can be determined according to the position of each pixel point in each lane reference area and the first offset corresponding to each pixel point.
In one embodiment of the present application, step 103 described above can be implemented as follows. Fig. 3 is a schematic flow chart of another lane line detection method according to an embodiment of the present disclosure.
As shown in fig. 3, the step 103 includes:
step 301, determining the position of each pixel point located in each lane line according to the position of each pixel point in each lane reference area and the first offset corresponding to each pixel point.
Since the nearest pixel points which are located on the two sides of the pixel points in the reference region and have color difference with the pixel points can be considered as the pixel points in the lane lines, the positions of the two pixel points which are located on the lane lines on the left side and the right side and correspond to the pixel points can be determined according to the position of each pixel point in each lane reference region and the corresponding first offset.
Suppose the pixel coordinate of a pixel point in the reference area of a certain lane is (x0, y0), and the first offset corresponding to the pixel point is (xl, xr), where xl is the distance between the pixel point and the nearest pixel point on its left side that differs from it in color, and xr is the distance between the pixel point and the nearest pixel point on its right side that differs from it in color. From (x0, y0) and (xl, xr), the position of a pixel point in the lane line on the left side of the lane and the position of a pixel point in the lane line on the right side of the lane can be calculated.
Therefore, the positions of the pixel points in each lane line can be determined according to the positions of the pixel points in each lane reference area and the first offsets corresponding to the pixel points.
Step 302, determining the lane line position according to the position of each pixel point in each lane line.
In this embodiment, the positions of the pixel points in each lane line are clustered, and the positions of the lane lines on the left and right sides of each lane in the road image can be determined.
In practical applications, stains and the like whose color differs from both the road surface and the lane lines may exist on the road surface, so a position determined from a reference-area pixel point and its first offset is not necessarily the position of a pixel point in a lane line. Therefore, before clustering the positions of the pixel points of each lane line, the number of pixel points sharing each position in the lane line is counted, positions whose count is smaller than a preset number are screened out, and the remaining pixel points are clustered to obtain the lane line position. Pixel points with abnormal or inaccurate positions are thereby screened out, which greatly improves the detection accuracy of the lane line position.
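A minimal sketch of this filtering step follows; the vote grid, the threshold value, and the closing straight-line fit (standing in for the unspecified clustering step) are assumptions of the sketch rather than values given by the patent:

```python
from collections import Counter
import numpy as np

def filter_and_fit(line_points, min_count=2, grid=4):
    # line_points: iterable of (x, y) lane-line positions computed from
    # reference-area pixels and their first offsets. Positions are snapped
    # to a coarse grid so near-identical votes accumulate; positions with
    # fewer than min_count votes are treated as outliers (e.g. road stains).
    snapped = [(int(round(x / grid)) * grid, int(round(y / grid)) * grid)
               for x, y in line_points]
    votes = Counter(snapped)
    kept = np.array([p for p in snapped if votes[p] >= min_count], dtype=float)
    if len(kept) < 2:
        return None
    # Stand-in for clustering: fit x = a*y + b through the surviving points,
    # giving one lane-line estimate per point set.
    a, b = np.polyfit(kept[:, 1], kept[:, 0], deg=1)
    return a, b
```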
In practical applications, a road usually includes a plurality of lanes, and some adjacent lanes have a common lane line, i.e. the lane line is a single line. In an embodiment of the present application, if the acquired road image includes a first lane and a second lane adjacent to each other left and right, and a right lane line of the first lane coincides with a left lane line of the second lane, the lane line position may be determined according to the method shown in fig. 4. Fig. 4 is a schematic flowchart of another lane line detection method according to an embodiment of the present disclosure.
As shown in fig. 4, the determining the lane line position of the lane to which each lane reference region belongs includes:
step 401, determining the position of each first pixel point located in the lane line on the right side of the first lane according to the position of each pixel point in the first lane reference area and the first offset corresponding to each pixel point.
For the first lane, the positions of the pixel points in the lane line on the left side of the first lane and the positions of the first pixel points in the lane line on the right side of the first lane can be calculated according to the positions of the pixel points in the reference area of the first lane and the first offsets corresponding to the pixel points.
And 402, determining the position of each second pixel point in the lane line on the left side of the second lane according to the position of each pixel point in the reference area of the second lane and the first offset corresponding to each pixel point.
For the second lane, the positions of the second pixel points in the lane line on the left side of the second lane and the positions of the pixel points in the lane line on the right side of the second lane can be calculated according to the positions of the pixel points in the reference area of the second lane and the first offset corresponding to the pixel points.
Step 403, determining the position of the right lane line of the first lane according to the position of each first pixel point and the position of each second pixel point.
Because the first lane is adjacent to the second lane and the right lane line of the first lane coincides with the left lane line of the second lane, the lane line between the first lane and the second lane is a single line. The first pixel points in the lane line on the right side of the first lane and the second pixel points in the lane line on the left side of the second lane are therefore all pixel points in the same lane line, and the position of the right lane line of the first lane, which is also the position of the left lane line of the second lane, can be determined from the positions of the first pixel points and the second pixel points, as sketched below.
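The sketch assumes each point set is an (N, 2) numpy array of (x, y) positions; the straight-line fit is an illustrative stand-in for whatever line model an implementation actually uses:

```python
import numpy as np

def shared_lane_line(first_lane_right_pts, second_lane_left_pts):
    # Both point sets vote for the same physical line, so pool them before
    # fitting x = a*y + b (step 403): the result is simultaneously the right
    # lane line of the first lane and the left lane line of the second lane.
    pts = np.vstack([first_lane_right_pts, second_lane_left_pts])
    a, b = np.polyfit(pts[:, 1], pts[:, 0], deg=1)
    return a, b
```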
It will be appreciated that if there is an adjacent lane to the left of the first lane and the right lane line of that lane coincides with the left lane line of the first lane, then the position of the left lane line of the first lane may be determined according to the method shown in fig. 4. Similarly, if there is an adjacent lane to the right of the second lane and the left lane line of that lane coincides with the right lane line of the second lane, then the position of the right lane line of the second lane can be determined according to the method shown in fig. 4.
In this embodiment, the type labels may be assigned so as to indicate, from left to right or from right to left in the road image, which lane reference area a pixel point belongs to. According to the type label of the pixel points in a lane reference area, it can then be determined whether the lane to which that reference area belongs is a boundary lane in the road image. If the first lane or the second lane is determined from its type label to be a boundary lane in the road image, the position of the lane line on the left side of the first lane can be determined from the determined positions of the pixel points in the lane line on the left side of the first lane, or the position of the lane line on the right side of the second lane can be determined from the positions of the pixel points in the lane line on the right side of the second lane. For example, the left lane in fig. 2 is the lane at the left boundary of the road image, so the position of its left lane line can be determined from the determined positions of the pixel points in the lane line on its left side.
According to the lane line detection method, if the lane lines between adjacent lanes coincide, the positions of the pixel points on the common lane line can be determined from the pixel points in the two lane reference areas and their corresponding first offsets, and the position of the common lane line can then be determined. The structural information of the road is thereby incorporated into determining the lane line positions, which improves the detection accuracy of the lane lines.
In practical applications, lane lines come in multiple types, such as solid lines and dashed lines. In this embodiment, the labeling information of each pixel point therefore further includes a line-type label of the lane to which the pixel point belongs, where the line-type label indicates the type of the lane line.
After the lane line position of the lane to which each lane reference area belongs is determined, the lane line can be constructed according to the line-type label of the lane to which each pixel point belongs and the lane line position. Specifically, the lane line is constructed according to the line-type label of the lane line and the position of the lane line.
For example, if the line-type label of a lane line indicates a dashed line, a dashed lane line is constructed at the position of the lane line.
In practical applications, due to the angle of view, the lane lines in the captured road image are not perpendicular to the lateral direction, for example, in fig. 2, the left lane line of the left lane and the right lane line of the right lane are not perpendicular to the lateral direction. Therefore, for the convenience of calculation, the first offset may represent the distance between the pixel point and the nearest pixels in the same row located at both sides of the pixel point and having a color difference with the pixel point. That is to say, the first offset corresponding to the pixel point in the lane reference region is the distance between the nearest pixel points which are located on both sides of the pixel point in the lane reference region and have a color difference with the pixel point in the same row.
As shown in fig. 2, the first offset corresponding to the pixel B in the left lane reference region is the distance between B and the pixel a, and the distance between B and the pixel C.
In this embodiment, the first offset represents the distances between the pixel point and the nearest same-row pixel points on its two sides that differ from it in color, so during calculation the two nearest color-differing pixel points are sought only among the pixel points of the same row, based on the position of the pixel point. Compared with a first offset representing the distances to the nearest color-differing pixel points in any direction, which would require finding all color-differing pixel points on both sides and then selecting the nearest ones by position before computing the distances, this reduces the amount of position calculation.
If the first offset represents the distances between the pixel point and the nearest same-row pixel points on its two sides that differ from it in color, then when determining the positions of the pixel points in each lane line from the positions of the pixel points in each lane reference area and their corresponding first offsets, the positions of the lane-line pixel points can be obtained row by row through simple addition and subtraction of each pixel point's position and its first offset.
As shown in fig. 2, the pixel point D is a pixel point in the reference area of the middle lane. Suppose the pixel coordinate of the pixel point D is (X0, Y0), and the first offset corresponding to the pixel point D is (Xl, Xr), where Xl and Xr are the distances from the pixel point D to the pixel point E and the pixel point F respectively, these being the nearest same-row pixel points on the two sides of D that differ from it in color. Then the pixel coordinate of the pixel point E is (X0 - Xl, Y0) and the pixel coordinate of the pixel point F is (X0 + Xr, Y0); that is, the position of the pixel point E in the lane line on the left side of the middle lane and the position of the pixel point F in the lane line on its right side are obtained from the position of the pixel point D and its first offset.
Similarly, in fig. 2, the pixel point H is a pixel point in the right lane reference area. Suppose the pixel coordinate of the pixel point H is (X1, Y1), and its corresponding first offset is (X1l, X1r), where X1l and X1r are the distances from the pixel point H to the pixel point I and the pixel point J respectively, these being the nearest same-row pixel points on the two sides of H that differ from it in color. Then, from the pixel coordinate (X1, Y1) and the first offset (X1l, X1r), the pixel coordinate of the pixel point I is obtained as (X1 - X1l, Y1) and the pixel coordinate of the pixel point J as (X1 + X1r, Y1), which determines the position of the pixel point I in the lane line on the left side of the right lane and the position of the pixel point J in the lane line on its right side.
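The same arithmetic, vectorized over all pixel points of a reference area, might look as follows (a sketch; the array layout is an assumption, and the numbers in the example are hypothetical, chosen only to mirror the E/F construction above):

```python
import numpy as np

def lane_line_points(coords, offsets):
    # coords:  (N, 2) array of (x, y) reference-area pixel coordinates.
    # offsets: (N, 2) array of same-row first offsets (x_l, x_r).
    # Returns the corresponding left and right lane-line pixel positions,
    # per the worked example: E = (X0 - Xl, Y0), F = (X0 + Xr, Y0).
    left = coords.astype(float)
    right = coords.astype(float)
    left[:, 0] -= offsets[:, 0]
    right[:, 0] += offsets[:, 1]
    return left, right

coords = np.array([[320.0, 400.0]])       # hypothetical pixel point D
offsets = np.array([[35.0, 42.0]])        # hypothetical (Xl, Xr)
E, F = lane_line_points(coords, offsets)  # E = (285, 400), F = (362, 400)
```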
In the embodiment of the application, the positions of the pixel points in each lane line are determined according to the positions of the pixel points in each lane reference area and the first offsets corresponding to the pixel points, and then the positions of the lane lines are determined according to the positions of the pixel points in each lane line, so that the point-level detection of the lane lines is realized, and the detection precision of the lane lines is improved.
In order to ensure that the vehicle runs in the middle area of the lane and improve the safety of the vehicle, in this embodiment, the labeling information of each pixel point may further include a second offset. The second offset is used for representing the distance between the pixel point and the central point of the lane where the pixel point is located, and is smaller than a preset value.
The central point is the midpoint between the two nearest pixel points on the two sides of the pixel point that differ from it in color, whose positions are determined from the pixel point's first offset. For example, in fig. 2, the positions of the pixel point E and the pixel point F can be determined from the first offset of the pixel point D, and the position of their center point G is then determined; the pixel point G is a pixel point on the center line of the lane.
In this embodiment, since the first offset of a pixel point outside the lane reference areas is 0 or null, its second offset is likewise 0 or null. For a pixel point in a lane reference area, the first offset gives the distances to the pixel points on the lane lines on the two sides, while the second offset gives the distance to the central point of the lane where the pixel point is located. Since the lane reference area is the central region of the lane, the second offset is smaller than the first offset.
The range of the second offset amount may be determined according to the width of the preset lane reference region, that is, the second offset amount is smaller than or equal to the width of the preset reference region.
In this embodiment, the position of each pixel point on the lane center line of the lane to which each lane reference area belongs can be determined from the position of each pixel point in that reference area and its corresponding second offset, and the position of the lane center line is then determined from the positions of those pixel points. The vehicle can then travel along the lane center line of the current lane, which improves the safety of the vehicle.
Taking fig. 2 as an example, the center point corresponding to the pixel point D in the middle lane reference area is the pixel point G, and the second offset corresponding to the pixel point D is the distance cx between the pixel point D and the center point G. Then, from the pixel coordinates (X0, Y0) of the pixel point D and the second offset cx, the pixel coordinate of the pixel point G can be determined as (X0 + cx, Y0), that is, the position of the pixel point G on the lane center line of the middle lane is determined.
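A matching sketch for the center line, under the same assumed array layout; treating the second offset as signed (positive when the center lies to the right, as in G = (X0 + cx, Y0)) is an assumption of the sketch, since the patent defines only the distance:

```python
import numpy as np

def center_line_points(coords, second_offset):
    # coords:        (N, 2) array of (x, y) reference-area pixel coordinates.
    # second_offset: (N,) signed same-row distance to the lane center point.
    center = coords.astype(float)
    center[:, 0] += second_offset
    return center

D = np.array([[320.0, 400.0]])                 # hypothetical pixel point D
print(center_line_points(D, np.array([6.0])))  # G = (326, 400)
```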
In an embodiment of the present application, before the neural network model generated by pre-training is used to perform recognition processing on the acquired road image, the neural network model may be obtained through training. Fig. 5 is a schematic flowchart of another lane line detection method according to an embodiment of the present disclosure.
As shown in fig. 5, before the recognition processing is performed on the acquired road image by using the neural network model generated by the pre-training, the lane line detection method further includes:
step 501, a sample image set is obtained.
In this embodiment, a large number of road images may be acquired to form a sample image set. The lanes included in each road image in the sample image set may be the same or different.
Step 502, performing annotation processing on each sample image in the sample image set to determine target annotation information of each pixel point in each sample image.
Each pixel point in each sample image is labeled. Specifically, a central area of preset width in each lane is selected as the lane reference area; the pixels within the same lane reference area are labeled with the same type label; the vertical distances from each pixel in the lane reference area to the lane lines on the two sides of the lane, i.e., the first offset, are calculated; and the pixels in non-lane reference areas are given their own type label, with the corresponding first offset labeled as 0 or as null.
In this embodiment, different type labels are used to label the pixels in non-reference areas and the pixels in the different lane reference areas. The width of a lane reference area may be a preset multiple of the width of the lane where it is located, such as 0.3 or 0.2 times, and may be set according to actual needs.
Taking fig. 2 as an example, there are 3 lanes in the road image, the pixel point type label in the left lane reference region may be labeled as 1, the pixel point type label in the middle lane reference region may be labeled as 2, the pixel point type label in the right lane reference region may be labeled as 3, and the pixel point type label in the non-lane reference region may be labeled as 0.
In this way, the positional relationship of the lane reference areas can be determined from the type labels of their pixel points; for example, the lane containing the reference area composed of pixel points with type label 1 is adjacent to the lane containing the reference area composed of pixel points with type label 2. The distances from the pixel points in each reference area to the lane lines on the left and right sides are also labeled.
Therefore, when the neural network model is trained, the road structure is trained in the network, and the accuracy of lane line detection is greatly improved.
It should be noted that, during labeling, either the vertical distances from a pixel point in the lane reference area to the lane lines on the left and right sides of the lane, or the distances from the pixel point to the same-row pixel points in those lane lines, may be used as the first offset. Taking fig. 2 as an example, the distances between the pixel point H and the same-row pixel points I and J in the left and right lane lines of the right lane may be used as the first offset of the pixel point H.
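A sketch of how one image row might be annotated under this scheme follows; the function name, the per-row interface, and the choice of same-row distances as the first offset are assumptions of the sketch (the 0.3 ratio is the example value from the text):

```python
import numpy as np

def label_row(lane_bounds, width, ratio=0.3):
    # lane_bounds: list of (x_left, x_right) lane-line x-positions for each
    # lane in this row, ordered left to right. Pixels in the central
    # ratio-fraction of each lane get that lane's type label (1, 2, ...)
    # and, as their first offset, the same-row distances to the two lane
    # lines; everything else stays label 0 with offset (0, 0).
    labels = np.zeros(width, dtype=np.int32)
    offsets = np.zeros((width, 2), dtype=np.float32)
    for lane_id, (xl, xr) in enumerate(lane_bounds, start=1):
        cx = (xl + xr) / 2.0
        half = (xr - xl) * ratio / 2.0
        lo, hi = int(np.ceil(cx - half)), int(np.floor(cx + half))
        for x in range(max(lo, 0), min(hi, width - 1) + 1):
            labels[x] = lane_id
            offsets[x] = (x - xl, xr - x)
    return labels, offsets
```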
Step 503, inputting each sample image into the initial neural network model to obtain the prediction labeling information of each pixel point output by the initial neural network model.
In the embodiment of the application, each sample image is input into the initial neural network model to obtain the prediction marking information of each pixel point in each sample image output by the initial neural network.
Step 504, according to the difference between the prediction labeling information and the target labeling information, the initial neural network model is corrected to generate a neural network model.
For each sample image, the difference between the target labeling information and the predicted labeling information of each pixel point is determined from the two sets of labeling information. The difference comprises the difference between the type label in the predicted labeling information of the pixel point and the type label in the target labeling information, and the difference between the first offset corresponding to the pixel point in the predicted labeling information and the first offset corresponding to the pixel point in the target labeling information.
Then, the parameters of the initial neural network model are corrected over multiple iterations using the differences between the target labeling information and the predicted labeling information of the pixel points in each sample image, until the optimal parameters of the neural network model are obtained, finally generating the neural network model. One plausible form of this correction signal is sketched below.
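The sketch assumes a PyTorch implementation; the patent only requires that the model be corrected according to the difference between prediction and target, so the specific cross-entropy plus masked L1 combination here is an assumption:

```python
import torch
import torch.nn.functional as F

def training_loss(pred_logits, pred_offsets, target_labels, target_offsets):
    # pred_logits:    (B, K, H, W) per-pixel type-label scores.
    # pred_offsets:   (B, 2, H, W) predicted first offsets.
    # target_labels:  (B, H, W) long tensor of annotated type labels, 0 = background.
    # target_offsets: (B, 2, H, W) annotated first offsets.
    label_loss = F.cross_entropy(pred_logits, target_labels)
    # Regress offsets only where a lane reference area exists (label != 0),
    # since non-reference pixels carry a 0/null first offset by convention.
    mask = (target_labels != 0).unsqueeze(1).float()
    offset_err = F.l1_loss(pred_offsets, target_offsets, reduction="none")
    offset_loss = (offset_err * mask).sum() / mask.sum().clamp(min=1.0)
    return label_loss + offset_loss
```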
According to the lane line detection method, the neural network model is generated by training on pixel-level labeling information, which can greatly reduce the complexity of post-processing. Because the distances from the pixel points in the reference areas to the left and right lane lines, i.e., the structural information of the road, are used to train the model, the positions of the lane lines can still be determined stably even when the lane lines are severely worn or partially occluded, so the method has good robustness.
In order to implement the above embodiments, an apparatus for detecting lane lines is also provided in the embodiments of the present application. Fig. 6 is a schematic structural diagram of a lane line detection device according to an embodiment of the present application.
As shown in fig. 6, the lane line detecting apparatus includes: an identification module 610, a first determination module 620, and a second determination module 630.
The identification module 610 is configured to identify and process the acquired road image using a neural network model generated by pre-training, so as to obtain labeling information of each pixel point in the road image, where the labeling information of each pixel point includes the type label to which the pixel point belongs and a first offset corresponding to the pixel point, and the first offset represents the distances between the pixel point and the nearest pixel points that lie on its two sides and differ from it in color;
the first determining module 620 is configured to determine, according to the type tag to which each pixel belongs, each lane reference region included in the road image, where the type tags of the pixels in each lane reference region are the same;
the second determining module 630 is configured to determine the position of the lane line of the lane to which each lane reference region belongs, according to the position of each pixel point in each lane reference region and the first offset corresponding to each pixel point.
In a possible implementation manner of the embodiment of the present application, the first determining module 620 is specifically configured to:
determining the position of each pixel point in each lane line according to the position of each pixel point in each lane reference area and the first offset corresponding to each pixel point;
and determining the positions of the lane lines according to the positions of the pixel points in each lane line.
In a possible implementation manner of the embodiment of the application, the acquired road image comprises a first lane and a second lane which are adjacent left and right, and a right lane line of the first lane is overlapped with a left lane line of the second lane;
the first determining module 620 is further configured to: determining the position of each first pixel point in a lane line on the right side of the first lane according to the position of each pixel point in the first lane reference area and the first offset corresponding to each pixel point;
determining the position of each second pixel point in the lane line on the left side of the second lane according to the position of each pixel point in the reference area of the second lane and the first offset corresponding to each pixel point;
and determining the position of the right lane line of the first lane according to the position of each first pixel point and the position of each second pixel point.
In a possible implementation manner of the embodiment of the application, the labeling information of each pixel point further includes a line-type label of the lane to which the pixel point belongs; the device also includes:
a construction module, configured to construct the lane line according to the line-type label of the lane to which each pixel point belongs and the position of the lane line.
In a possible implementation manner of the embodiment of the application, the first offset represents a distance between the pixel point and a closest pixel point in the same row located on both sides of the pixel point and having a color difference with the pixel point.
In a possible implementation manner of the embodiment of the application, the labeling information of each pixel further includes a second offset, the second offset is used to represent a distance between the pixel and a center point of a lane where the pixel is located, and the second offset is smaller than the first offset and smaller than a preset value.
In a possible implementation manner of the embodiment of the present application, the apparatus may further include:
a first obtaining module, configured to obtain a sample image set;
the third determining module is used for performing labeling processing on each sample image in the sample image set so as to determine target labeling information of each pixel point in each sample image;
the second acquisition module is used for inputting each sample image into the initial neural network model so as to acquire the prediction marking information of each pixel point output by the initial neural network model;
and the generating module is used for correcting the initial neural network model according to the difference between the prediction labeling information and the target labeling information so as to generate the neural network model.
It should be noted that the explanation of the embodiment of the lane line detection method is also applicable to the lane line detection apparatus of this embodiment, and therefore, the explanation is not repeated herein.
The lane line detection device of the embodiment of the application obtains the type label and the corresponding first offset of each pixel in the road image through the neural network model generated by pre-training, classifies the pixel points according to their type labels to determine each lane reference area in the road image, and determines the lane line position of the lane to which each lane reference area belongs using the positions of the pixel points in each lane reference area and their corresponding first offsets. The post-processing difficulty is thus greatly reduced, the detection accuracy is high, the adaptability to scenes is strong, no large amount of rule-based judgment needs to be introduced, and the extensibility and robustness are good.
In order to implement the foregoing embodiments, an embodiment of the present application further provides a computer device, including a processor and a memory;
the processor reads the executable program code stored in the memory to run a program corresponding to the executable program code, so as to implement the lane line detection method according to the above embodiment.
FIG. 7 illustrates a block diagram of an exemplary computer device suitable for use in implementing embodiments of the present application. The computer device 12 shown in fig. 7 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in FIG. 7, computer device 12 is in the form of a general purpose computing device. The components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. These architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile Memory, such as Random Access Memory (RAM) 30 and/or cache Memory 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 7, and commonly referred to as a "hard drive"). Although not shown in FIG. 7, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact disk Read Only Memory (CD-ROM), a Digital versatile disk Read Only Memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described herein.
Computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with computer device 12, and/or with any devices (e.g., network card, modem, etc.) that enable computer device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Moreover, computer device 12 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public Network such as the Internet) via Network adapter 20. As shown, network adapter 20 communicates with the other modules of computer device 12 via bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing, for example, implementing the methods mentioned in the foregoing embodiments, by executing programs stored in the system memory 28.
In order to implement the above embodiments, the present application also proposes a non-transitory computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the lane line detection method as described in the above embodiments.
In the description of the present specification, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.

Although embodiments of the present application have been shown and described above, it should be understood that these embodiments are exemplary and are not to be construed as limiting the present application; those of ordinary skill in the art may make variations, modifications, substitutions, and alterations to the above embodiments within the scope of the present application.

Claims (10)

1. A lane line detection method, characterized by comprising the following steps:
recognizing the acquired road image by using a pre-trained neural network model to obtain labeling information for each pixel point in the road image, wherein the labeling information of each pixel point comprises the type label to which the pixel point belongs and a first offset corresponding to the pixel point, the first offset representing the distance between the pixel point and the nearest pixel point on either side of it that differs from it in color;
determining each lane reference area contained in the road image according to the type label to which each pixel point belongs, wherein the pixel points within a lane reference area all share the same type label, and each lane reference area is a central area of preset width within its lane; and
determining the lane line position of the lane to which each lane reference area belongs according to the position of each pixel point in the lane reference area and the first offset corresponding to that pixel point.
2. The method of claim 1, wherein determining the lane line position of the lane to which each lane reference area belongs according to the position of each pixel point in the lane reference area and the corresponding first offset comprises:
determining the position of each pixel point in each lane line according to the position of each pixel point in the lane reference area and the corresponding first offset; and
determining the position of each lane line according to the positions of the pixel points in that lane line.
3. The method of claim 2, wherein the acquired road image comprises a first lane and a second lane that are laterally adjacent, the right lane line of the first lane coinciding with the left lane line of the second lane;
and wherein determining the lane line position of the lane to which each lane reference area belongs comprises:
determining the position of each first pixel point in the right lane line of the first lane according to the position of each pixel point in the first lane's reference area and the corresponding first offset;
determining the position of each second pixel point in the left lane line of the second lane according to the position of each pixel point in the second lane's reference area and the corresponding first offset; and
determining the position of the right lane line of the first lane according to the positions of the first pixel points and the positions of the second pixel points.
4. The method according to claim 1, wherein the labeling information of each pixel point further comprises a line-type label of the lane to which the pixel point belongs;
and wherein, after determining the lane line position of the lane to which each lane reference area belongs, the method further comprises:
constructing the lane line according to the line-type label of the lane to which each pixel point belongs and the lane line position.
5. The method of any one of claims 1 to 4, wherein the first offset represents the distance between the pixel point and the nearest pixel point in the same row, on either side of it, that differs from it in color.
6. The method of claim 5, wherein the labeling information of each pixel point further comprises a second offset, the second offset representing the distance between the pixel point and the center point of the lane in which it is located, the second offset being smaller than the first offset and smaller than a predetermined value.
7. The method according to any one of claims 1 to 4, wherein, before recognizing the acquired road image by using the pre-trained neural network model, the method further comprises:
acquiring a sample image set;
labeling each sample image in the sample image set to determine target labeling information for each pixel point in each sample image;
inputting each sample image into an initial neural network model to obtain the predicted labeling information of each pixel point output by the initial neural network model; and
correcting the initial neural network model according to the difference between the predicted labeling information and the target labeling information, so as to generate the neural network model.
8. A lane line detection apparatus, characterized by comprising:
a recognition module configured to recognize the acquired road image by using a pre-trained neural network model to obtain labeling information for each pixel point in the road image, wherein the labeling information of each pixel point comprises the type label to which the pixel point belongs and a first offset corresponding to the pixel point, the first offset representing the distance between the pixel point and the nearest pixel point on either side of it that differs from it in color;
a first determining module configured to determine each lane reference area contained in the road image according to the type label to which each pixel point belongs, wherein the pixel points within a lane reference area all share the same type label, and each lane reference area is a central area of preset width within its lane; and
a second determining module configured to determine the lane line position of the lane to which each lane reference area belongs according to the position of each pixel point in the lane reference area and the first offset corresponding to that pixel point.
9. A computer device, comprising a processor and a memory;
wherein the processor, by reading executable program code stored in the memory, runs a program corresponding to the executable program code so as to implement the lane line detection method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the program, when executed by a processor, implements the lane line detection method according to any one of claims 1 to 7.
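The decoding step of claims 1 and 2 is simple per-pixel arithmetic: every reference-area pixel is shifted sideways by its first offset to land on the adjacent lane lines. Below is a minimal NumPy sketch of that step, assuming the pre-trained network outputs `labels` (an H x W array of lane type labels, with 0 for background) and `offsets` (an H x W x 2 array of horizontal distances from each pixel to the nearest color-differing pixel on its left and right). The function name, the two-channel offset layout, and the background convention are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def decode_lane_lines(labels: np.ndarray, offsets: np.ndarray) -> dict:
    """For each lane reference area, recover the (row, col) positions of the
    pixels on its left and right lane lines from the first offsets."""
    lane_lines = {}
    for lane_id in np.unique(labels):
        if lane_id == 0:
            continue  # background pixels belong to no lane reference area
        rows, cols = np.nonzero(labels == lane_id)
        # Shift each reference-area pixel by its offsets to reach the lines.
        left = np.stack([rows, cols - offsets[rows, cols, 0]], axis=1)
        right = np.stack([rows, cols + offsets[rows, cols, 1]], axis=1)
        lane_lines[int(lane_id)] = {"left": left, "right": right}
    return lane_lines
```

Because every reference-area pixel votes independently, an error in one offset perturbs a single recovered point rather than the whole line.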
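Claim 3 handles the case where the right line of one lane and the left line of the next are the same physical marking, so two pixel sets estimate one line. The claim only requires that both sets be used; the per-row averaging below is one plausible fusion rule, shown as a hedged sketch (`first_pixels` and `second_pixels` are hypothetical (N, 2) arrays of (row, col) positions, as produced above).

```python
import numpy as np

def merge_shared_line(first_pixels: np.ndarray, second_pixels: np.ndarray) -> np.ndarray:
    """Fuse two estimates of a shared lane line by averaging columns per row."""
    pooled = np.concatenate([first_pixels, second_pixels], axis=0)
    merged = [(row, pooled[pooled[:, 0] == row, 1].mean())
              for row in np.unique(pooled[:, 0])]
    return np.asarray(merged)
```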
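Claim 4 attaches a line-type label (the translation's "linear label"; plausibly solid versus dashed, or straight versus curved) and constructs the lane line from that label plus the recovered positions. The sketch below fits the geometry with a quadratic and carries the majority label along; both choices are assumptions, since the claim does not fix a fitting rule.

```python
from collections import Counter

import numpy as np

def build_lane_line(points: np.ndarray, pixel_line_labels: list) -> dict:
    """Fit a lane line through (row, col) points and tag it with its type."""
    rows, cols = points[:, 0], points[:, 1]
    coeffs = np.polyfit(rows, cols, deg=2)  # model column as a quadratic in row
    line_type = Counter(pixel_line_labels).most_common(1)[0][0]
    return {"coeffs": coeffs, "line_type": line_type}
```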
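The training procedure of claim 7 is the standard supervised loop: label the samples, run the initial model, and correct it by the gap between the predicted and target labeling information. A schematic in PyTorch, an assumed framework (the patent names none); the two-headed model, the loss choices, and their equal weighting are placeholders.

```python
import torch
from torch import nn

def train(model: nn.Module, loader, epochs: int = 10) -> nn.Module:
    """Correct an initial model from (image, type-label, offset) samples."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    label_loss = nn.CrossEntropyLoss()  # difference on the type-label branch
    offset_loss = nn.SmoothL1Loss()     # difference on the offset branch
    for _ in range(epochs):
        for image, target_label, target_offset in loader:
            pred_label, pred_offset = model(image)
            loss = label_loss(pred_label, target_label) + \
                   offset_loss(pred_offset, target_offset)
            optimizer.zero_grad()
            loss.backward()   # propagate the prediction/target difference
            optimizer.step()  # correct the model parameters
    return model
```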
CN201811581791.6A 2018-12-24 2018-12-24 Lane line detection method, lane line detection device, computer device, and storage medium Active CN109740469B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811581791.6A CN109740469B (en) 2018-12-24 2018-12-24 Lane line detection method, lane line detection device, computer device, and storage medium

Publications (2)

Publication Number Publication Date
CN109740469A CN109740469A (en) 2019-05-10
CN109740469B true CN109740469B (en) 2021-01-22

Family

ID=66361078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811581791.6A Active CN109740469B (en) 2018-12-24 2018-12-24 Lane line detection method, lane line detection device, computer device, and storage medium

Country Status (1)

Country Link
CN (1) CN109740469B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230015357A1 (en) * 2021-07-13 2023-01-19 Canoo Technologies Inc. System and method in the prediction of target vehicle behavior based on image frame and normalization
US11845428B2 (en) 2021-07-13 2023-12-19 Canoo Technologies Inc. System and method for lane departure warning with ego motion and vision

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276293B (en) * 2019-06-20 2021-07-27 百度在线网络技术(北京)有限公司 Lane line detection method, lane line detection device, electronic device, and storage medium
CN110263713B (en) * 2019-06-20 2021-08-10 百度在线网络技术(北京)有限公司 Lane line detection method, lane line detection device, electronic device, and storage medium
CN110263714B (en) * 2019-06-20 2021-08-20 百度在线网络技术(北京)有限公司 Lane line detection method, lane line detection device, electronic device, and storage medium
CN110232368B (en) * 2019-06-20 2021-08-24 百度在线网络技术(北京)有限公司 Lane line detection method, lane line detection device, electronic device, and storage medium
CN112131914B (en) * 2019-06-25 2022-10-21 北京市商汤科技开发有限公司 Lane line attribute detection method and device, electronic equipment and intelligent equipment
CN110363182B (en) * 2019-07-24 2021-06-18 北京信息科技大学 Deep learning-based lane line detection method
CN111347831B (en) * 2020-03-13 2022-04-12 北京百度网讯科技有限公司 Vehicle running stability control method, device, equipment and storage medium
CN113392680B (en) * 2020-03-13 2024-03-05 富士通株式会社 Road identification device and method and electronic equipment
CN111368804A (en) * 2020-03-31 2020-07-03 河北科技大学 Lane line detection method, system and terminal equipment
CN111739043B (en) * 2020-04-13 2023-08-08 北京京东叁佰陆拾度电子商务有限公司 Parking space drawing method, device, equipment and storage medium
CN113688653B (en) * 2020-05-18 2024-06-28 富士通株式会社 Recognition device and method for road center line and electronic equipment
CN111898540B (en) * 2020-07-30 2024-07-09 平安科技(深圳)有限公司 Lane line detection method, lane line detection device, computer equipment and computer readable storage medium
WO2022051951A1 (en) * 2020-09-09 2022-03-17 华为技术有限公司 Lane line detection method, related device, and computer readable storage medium
CN114930126A (en) * 2020-11-12 2022-08-19 深圳元戎启行科技有限公司 Vehicle positioning method, device, computer equipment and storage medium
CN115604651A (en) * 2021-07-09 2023-01-13 华为技术有限公司(Cn) Communication method, communication apparatus, storage medium, and program
US12017661B2 (en) 2021-07-13 2024-06-25 Canoo Technologies Inc. System and method in vehicle path prediction based on full nonlinear kinematics
US11891059B2 (en) 2021-07-13 2024-02-06 Canoo Technologies Inc. System and methods of integrating vehicle kinematics and dynamics for lateral control feature at autonomous driving
US11891060B2 (en) 2021-07-13 2024-02-06 Canoo Technologies Inc. System and method in lane departure warning with full nonlinear kinematics and curvature
US11840147B2 (en) 2021-07-13 2023-12-12 Canoo Technologies Inc. System and method in data-driven vehicle dynamic modeling for path-planning and control
CN113705436A (en) * 2021-08-27 2021-11-26 一汽解放青岛汽车有限公司 Lane information determination method and device, electronic equipment and medium
CN113869249B (en) * 2021-09-30 2024-05-07 广州文远知行科技有限公司 Lane marking method, device, equipment and readable storage medium
CN114724119B (en) * 2022-06-09 2022-09-06 天津所托瑞安汽车科技有限公司 Lane line extraction method, lane line detection device, and storage medium
CN116543363B (en) * 2023-04-14 2024-01-30 小米汽车科技有限公司 Sample image acquisition method and device, electronic equipment and vehicle

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09223218A (en) * 1996-02-15 1997-08-26 Toyota Motor Corp Method and device for detecting traveling route
JP5716443B2 (en) * 2011-02-16 2015-05-13 日産自動車株式会社 Lane boundary detection device and lane boundary detection method
CN102208019B (en) * 2011-06-03 2013-01-09 东南大学 Method for detecting lane change of vehicle based on vehicle-mounted camera
CN102592114B (en) * 2011-12-26 2013-07-31 河南工业大学 Method for extracting and recognizing lane line features of complex road conditions
CN103714538B (en) * 2013-12-20 2016-12-28 中联重科股份有限公司 Road edge detection method and device and vehicle
CN105260699B (en) * 2015-09-10 2018-06-26 百度在线网络技术(北京)有限公司 A kind of processing method and processing device of lane line data
US10124730B2 (en) * 2016-03-17 2018-11-13 Ford Global Technologies, Llc Vehicle lane boundary position
KR102628654B1 (en) * 2016-11-07 2024-01-24 삼성전자주식회사 Method and apparatus of indicating lane
CN107066986A (en) * 2017-04-21 2017-08-18 哈尔滨工业大学 A kind of lane line based on monocular vision and preceding object object detecting method
CN107944388A (en) * 2017-11-24 2018-04-20 海信集团有限公司 A kind of method for detecting lane lines, device and terminal
CN108009524B (en) * 2017-12-25 2021-07-09 西北工业大学 Lane line detection method based on full convolution network
CN108921089A (en) * 2018-06-29 2018-11-30 驭势科技(北京)有限公司 Method for detecting lane lines, device and system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
Effective date of registration: 20211013
Address after: 105/F, Building 1, No. 10 Shangdi 10th Street, Haidian District, Beijing 100085
Patentee after: Apollo Intelligent Technology (Beijing) Co., Ltd.
Address before: 100085 Baidu Building, 10 Shangdi Tenth Street, Haidian District, Beijing
Patentee before: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) Co., Ltd.