CN111291601A - Lane line identification method and device and electronic equipment - Google Patents

Lane line identification method and device and electronic equipment

Info

Publication number
CN111291601A
CN111291601A (application number CN201811496189.2A)
Authority
CN
China
Prior art keywords
lane line
road image
points
area
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811496189.2A
Other languages
Chinese (zh)
Other versions
CN111291601B (en)
Inventor
李程
孙艺
时代奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201811496189.2A
Publication of CN111291601A
Application granted
Publication of CN111291601B
Legal status: Active (granted)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

An embodiment of the invention provides a lane line identification method, a lane line identification apparatus, and an electronic device. The method comprises the following steps: acquiring at least one road image; obtaining a lane line region and a vanishing point region in the road image according to a pre-trained deep learning segmentation model; and obtaining the lane line in the road image from the lane line region and the vanishing point region. Because the data in the road image is analyzed with a deep learning segmentation model that yields both the lane line region and the vanishing point region, and because the vanishing point is crucial for determining the direction of the lane, this scheme improves the accuracy of lane line identification compared with schemes based on edge detection.

Description

Lane line identification method and device and electronic equipment
Technical Field
The present invention relates to the field of image recognition technologies, and in particular, to a lane line recognition method and apparatus, and an electronic device.
Background
With the continuous development of map navigation applications, the demands these scenarios place on image recognition technology keep growing. Taking lane line recognition from images as an example, the inventors found while researching the prior art that lane lines are generally identified with edge detection techniques; however, edge detection is easily affected by factors such as lane line wear, lane line color, and illumination, so its recognition accuracy is not high.
Disclosure of Invention
Embodiments of the invention provide a lane line identification method, apparatus, and electronic device, aiming to overcome the defect that lane lines identified in the prior art are not accurate enough.
In order to achieve the above object, an embodiment of the present invention provides a lane line identification method, including:
acquiring at least one road image;
acquiring a lane line area and a vanishing point area in the road image according to a pre-trained deep learning segmentation model;
and acquiring the lane line in the road image according to the lane line area and the vanishing point area in the road image.
An embodiment of the present invention further provides a lane line identification apparatus, including:
the image acquisition module is used for acquiring at least one road image;
the area acquisition module is used for acquiring a lane line area and a vanishing point area in the road image according to a pre-trained deep learning segmentation model;
and the lane line acquisition module is used for acquiring a lane line in the road image according to the lane line area and the vanishing point area in the road image.
An embodiment of the present invention further provides an electronic device, including:
a memory for storing a program;
and the processor is used for operating the program stored in the memory, and the program executes the lane line identification method provided by the embodiment of the invention when running.
The lane line recognition method, apparatus, and electronic device provided by the embodiments of the invention analyze the data in the road image with a deep learning segmentation model to obtain the lane line region and the vanishing point region, and then derive the lane line in the road image from these two regions. Because the embodiments identify the lane line by analyzing the image data with a deep learning segmentation model rather than with edge detection, the false detections caused by edge detection are avoided. Moreover, when the embodiments identify the lane line with the segmentation model, not only the lane line region but also the vanishing point region of the lane line is detected, and the vanishing point is crucial for determining the direction of the lane. Compared with a scheme using edge detection, this scheme therefore also improves the accuracy of lane line identification.
The foregoing is only an overview of the technical solutions of the invention. Specific embodiments are described below so that the technical means of the invention can be understood more clearly, and so that the above and other objects, features, and advantages become more readily apparent.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a system block diagram of a service system according to an embodiment of the present invention;
FIG. 2a is a flowchart of an embodiment of a lane marking identification method according to the present invention;
fig. 2b is a first region schematic diagram in an embodiment of the lane line identification method provided by the present invention;
fig. 2c is a second region schematic diagram in an embodiment of the lane line identification method provided by the present invention;
FIG. 3a is a flowchart of another embodiment of a lane marking identification method according to the present invention;
fig. 3b is a schematic diagram of backbone points in an embodiment of the lane line identification method provided in the present invention;
FIG. 4 is a flowchart of another embodiment of a lane marking identification method provided by the present invention;
fig. 5 is a schematic structural diagram of an embodiment of a lane line recognition device provided in the present invention;
fig. 6 is a schematic structural diagram of another embodiment of the lane line identification device provided in the present invention;
fig. 7 is a schematic structural diagram of an embodiment of an electronic device provided in the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
To address the defect that lane lines identified with traditional image processing in the prior art are not accurate enough, this application provides a lane line identification scheme. Its main principle is as follows: obtain the lane line region and the vanishing point region in a road image according to a pre-trained deep learning segmentation model, then identify the lane line in the road image from these two regions. Analyzing the road image data with a deep learning segmentation model to obtain the lane line region and the vanishing point region, and then deriving the lane line from them, avoids the false detections caused by edge detection and improves the accuracy of lane line identification.
The method provided by the embodiments of the invention can be applied to any service system with an image processing function. Fig. 1 is a system block diagram of a service system provided in an embodiment of the present invention; the structure shown in fig. 1 is only one example of a service system to which the technical solution of the invention can be applied. As shown in fig. 1, the service system includes a lane line recognition apparatus comprising an image acquisition module, a region acquisition module, and a lane line acquisition module, which may be configured to execute the processing flows shown in figs. 2a, 3a, and 4 below. In the service system, a road image is first acquired from a road image acquisition device, such as a vehicle-mounted camera or dashcam; a lane line region and a vanishing point region in the road image are then obtained according to a pre-trained deep learning segmentation model; and the lane line in the road image is obtained from the acquired lane line region and vanishing point region.
The image data in the road image is analyzed by the deep learning segmentation model to obtain the lane line region and the vanishing point region, from which the lane line in the road image is then obtained. Because the embodiments identify the lane line by analyzing the image data with a deep learning segmentation model rather than with edge detection, the false detections caused by edge detection are avoided. Moreover, when identifying the lane line with the segmentation model, not only the lane line region but also the vanishing point region of the lane line is detected, and the vanishing point is important for determining the direction of the lane. Compared with a scheme using edge detection, this scheme therefore also improves the accuracy of lane line recognition.
The above embodiments are illustrations of technical principles and exemplary application frameworks of the embodiments of the present invention, and specific technical solutions of the embodiments of the present invention are further described in detail below through a plurality of embodiments.
Example one
Fig. 2a is a flowchart of an embodiment of the lane line identification method provided by the present invention, where an execution subject of the method may be the service system, or may be various terminal devices with an image processing function, such as a vehicle-mounted Augmented Reality (AR) navigation terminal, a smart phone, and the like, or may be devices or chips integrated on these terminal devices. As shown in fig. 2a, the lane line identification method includes the following steps:
s201, at least one road image is obtained.
In the embodiment of the present invention, a road image is first acquired from a road image acquisition device, such as a vehicle-mounted camera or dashcam. The road image can be extracted from a video shot by the camera, or obtained from continuous still shots. For video, the service system may process every frame, or may select video frames at a certain time interval to reduce the computational load of the system; in that case one road image corresponds to one frame of the video. For continuously shot still images, each shot corresponds to one road image.
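As an illustration of the frame-selection step above, the sketch below (a hypothetical helper, not part of the patent) picks frame indices from a video at a fixed time interval:

```python
def sample_frame_indices(total_frames, fps, interval_s):
    """Return indices of video frames sampled every `interval_s` seconds.

    Sampling at an interval (rather than processing every frame) reduces
    the computational load, as the embodiment describes.
    """
    step = max(1, int(round(fps * interval_s)))
    return list(range(0, total_frames, step))

# e.g. a 30 fps video of 150 frames, sampled once per second,
# yields frames 0, 30, 60, 90, 120
```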
S202, obtaining a lane line area and a vanishing point area in the road image according to the pre-trained deep learning segmentation model.
In the embodiment of the invention, the service system inputs the acquired road image into the deep learning segmentation model for analysis; the model outputs a classification for each pixel of the road image, and the lane line region and the vanishing point region are then obtained with a connected-component detection algorithm. In the segmented region image, different regions are assigned different values. Fig. 2b is a first region schematic diagram in the embodiment of the lane line identification method provided by the present invention. As shown in fig. 2b, a pixel in the background region is assigned "0", a pixel in the vanishing point region is assigned "1", a pixel in the left lane line region is assigned "2", and a pixel in the right lane line region is assigned "3". Fig. 2c is a second region schematic diagram in the embodiment. In fig. 2c, a pixel in the background region is assigned "0", a pixel in the vanishing point region is assigned "1", and a pixel in the lane line region is assigned "2". That is, in the case shown in fig. 2c, the result output by the deep learning segmentation model does not distinguish the left and right lane line regions, and a subsequent step is required to distinguish the left and right lane lines. Figs. 2b and 2c are only examples given to illustrate the embodiments more clearly and should not be construed as limiting the invention.
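The per-pixel class assignment described above can be sketched as follows. The class codes follow the Fig. 2b coding (0 background, 1 vanishing point, 2 left lane line, 3 right lane line); the helper, which simply groups pixel coordinates by class value, is an illustrative assumption rather than the patent's implementation:

```python
def group_pixels_by_class(label_map):
    """Group pixel coordinates (x, y) by the class value the segmentation
    model assigned to each pixel (Fig. 2b coding: 0 = background,
    1 = vanishing point, 2 = left lane line, 3 = right lane line)."""
    regions = {}
    for y, row in enumerate(label_map):
        for x, cls in enumerate(row):
            regions.setdefault(cls, []).append((x, y))
    return regions

label_map = [
    [0, 1, 0],
    [2, 0, 3],
]
regions = group_pixels_by_class(label_map)
# regions[1] == [(1, 0)]; regions[2] == [(0, 1)]; regions[3] == [(2, 1)]
```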
And S203, acquiring the lane line in the road image according to the lane line area and the vanishing point area in the road image.
In the embodiment of the invention, the lane lines in the road image are obtained according to the more accurate lane line region and the more accurate vanishing point region obtained by the deep learning segmentation model.
The lane line identification method provided by the embodiment of the invention analyzes the data in the road image with a deep learning segmentation model to obtain the lane line region and the vanishing point region, and then derives the lane line in the road image from these two regions. Because the embodiment identifies the lane line by analyzing the image data with a deep learning segmentation model rather than with edge detection, the false detections caused by edge detection are avoided. Moreover, when identifying the lane line with the segmentation model, not only the lane line region but also the vanishing point region of the lane line is detected, and the vanishing point is important for determining the direction of the lane. Compared with a scheme using edge detection, this scheme therefore also improves the accuracy of lane line identification.
Example two
Fig. 3a is a flowchart of another embodiment of the lane line identification method according to the present invention. As shown in fig. 3a, on the basis of the embodiment shown in fig. 2a, the lane line identification method provided in this embodiment may further include the following steps:
s301, at least one road image is obtained.
S302, according to the pre-trained deep learning segmentation model, a lane line region and a vanishing point region in the road image are obtained.
In the embodiment of the present invention, the step of acquiring the road image and the step of acquiring the lane line area and the vanishing point area in the road image are similar to those in the embodiment shown in fig. 2a, and are not repeated herein. Specifically, in the embodiment of the present invention, the left lane line region and the right lane line region are distinguished according to the lane line region obtained by the deep learning segmentation model, that is, a region schematic diagram similar to that shown in fig. 2b is obtained.
And S303, respectively obtaining a left lane line skeleton point and a right lane line skeleton point from a left lane line area and a right lane line area in the road image.
In the embodiment of the invention, the boundary points of the lane line region are collected on each image row, and backbone points are derived from their number. Fig. 3b is a schematic diagram of backbone points in the embodiment of the lane line identification method provided by the present invention. As shown in fig. 3b, when there are two boundary points in a row, their midpoint is taken as a backbone point; when there are three, the middle boundary point is discarded and the midpoint of the two outer boundary points is taken as a backbone point; when there are four, the midpoint of the two boundary points on the left and the midpoint of the two boundary points on the right are each taken as backbone points, so that the row yields two backbone points. In the image, pixels in the same row are pixels with the same vertical coordinate, and pixels in the same column are pixels with the same abscissa. Taking the screen of the road image acquisition device as an example, the upper left corner of the screen may be taken as the origin of the screen pixel coordinate system, with the horizontal direction of the screen as the horizontal axis and the vertical direction as the vertical axis. The pixel coordinate system of the image may be the same as the screen pixel coordinate system.
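The per-row backbone-point rules above translate directly into code. This is a minimal sketch that assumes the boundary points of one row are given as a list of x-coordinates:

```python
def backbone_points_for_row(y, boundary_xs):
    """Apply the embodiment's per-row rules:
    2 boundary points -> their midpoint;
    3 -> discard the middle one, take the midpoint of the outer two;
    4 -> the midpoint of the left pair and the midpoint of the right pair."""
    xs = sorted(boundary_xs)
    if len(xs) == 2:
        return [((xs[0] + xs[1]) / 2, y)]
    if len(xs) == 3:
        return [((xs[0] + xs[2]) / 2, y)]
    if len(xs) == 4:
        return [((xs[0] + xs[1]) / 2, y), ((xs[2] + xs[3]) / 2, y)]
    return []  # other counts are not specified by the embodiment

# Two boundary points at x = 4 and x = 8 on row 10 give one backbone point (6.0, 10)
```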
S304, performing straight line fitting on the skeleton points of the left lane line to obtain a left candidate straight line; and performing straight line fitting on the skeleton points of the right lane line to obtain a right candidate straight line.
In the embodiment of the invention, straight line fitting can be performed on the left and right lane line backbone points, for example with the Hough line detection method, to obtain a left candidate straight line and a right candidate straight line.
S305, a lane line vanishing point is obtained from the vanishing point area in the road image.
In the embodiment of the present invention, each pixel in the vanishing point region output by the deep learning segmentation model carries its own confidence. A confidence-weighted coordinate can therefore be computed for each pixel, for example by multiplying the pixel's coordinate value by the confidence the model assigned to it; the pixel corresponding to the mean of these confidence-weighted coordinates over all pixels of the vanishing point region is then determined as the lane line vanishing point.
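One way to read the confidence-weighted computation above is as a weighted centroid of the vanishing point region. The sketch below assumes each pixel is given as a tuple (x, y, confidence):

```python
def vanishing_point(pixels):
    """Confidence-weighted mean of the pixel coordinates in the
    vanishing point region; each pixel is (x, y, confidence)."""
    total = sum(c for _, _, c in pixels)
    vx = sum(x * c for x, _, c in pixels) / total
    vy = sum(y * c for _, y, c in pixels) / total
    return (vx, vy)

# Two pixels with equal confidence average to their midpoint:
# vanishing_point([(0, 0, 0.5), (4, 2, 0.5)]) -> (2.0, 1.0)
```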
In addition, the execution order of steps S303 to S304 and step S305 in the embodiment of the present invention is not fixed: they may be executed simultaneously, in the order S303, S304, S305, or in the order S305, S303, S304.
S306, from the left lane line backbone points, acquire those whose distance to the left candidate straight line is smaller than a preset distance threshold as left target lane line backbone points; and from the right lane line backbone points, acquire those whose distance to the right candidate straight line is smaller than the preset distance threshold as right target lane line backbone points.
In the embodiment of the invention, all the left lane line skeleton points can be traversed to obtain all the left lane line skeleton points which are closer to the left candidate straight line, and then curve fitting is carried out on the left lane line skeleton points which are closer to the left candidate straight line and the lane line vanishing points together to obtain the left lane line; similarly, the right lane line can be obtained in the same manner according to the right lane line skeleton point and the right candidate straight line.
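The distance test in S306 can be sketched with the standard point-to-line distance. Representing the candidate line as ax + by + c = 0 is an assumption made here for illustration; the patent does not fix a representation:

```python
import math

def points_near_line(points, a, b, c, thresh):
    """Keep the backbone points whose perpendicular distance to the
    candidate straight line ax + by + c = 0 is below `thresh`."""
    norm = math.hypot(a, b)
    return [(x, y) for (x, y) in points
            if abs(a * x + b * y + c) / norm < thresh]

# For the vertical line x = 5 (a=1, b=0, c=-5) and threshold 2,
# the point (4, 0) is kept and (9, 0) is rejected.
```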
S307, performing curve fitting on the skeleton points and the vanishing points of the left target lane line to obtain a left lane line in the road image; and performing curve fitting on the skeleton points and the vanishing points of the right target lane line to obtain a right lane line in the road image.
In the embodiment of the present invention, in the steps S306 and S307, the left and right lane lines in the road image are obtained according to the left and right candidate straight lines, the left and right lane line skeleton points, and the lane line vanishing point.
In the embodiment of the invention, each pixel in the lane line region output by the deep learning segmentation model carries its own confidence, so the backbone points in the lane line region also carry confidences, and the function expression of the lane line can be calculated by weighted fitting. Specifically, the backbone points contained in the left candidate straight line and their confidences, together with the vanishing point and its confidence, are weight-fitted to calculate the function expression of the left lane line; the function expression of the right lane line is calculated in the same way from the right candidate straight line.
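As an illustration of the weighted fitting above, the sketch below performs a confidence-weighted least-squares fit of a straight line y = kx + b. The patent does not fix the curve model, so the linear model here is an assumption for illustration:

```python
def weighted_line_fit(samples):
    """Weighted least squares for y = k*x + b.
    Each sample is (x, y, w), where w is the model's confidence."""
    sw = sum(w for _, _, w in samples)
    mx = sum(x * w for x, _, w in samples) / sw    # weighted mean of x
    my = sum(y * w for _, y, w in samples) / sw    # weighted mean of y
    num = sum(w * (x - mx) * (y - my) for x, y, w in samples)
    den = sum(w * (x - mx) ** 2 for x, _, w in samples)
    k = num / den
    return k, my - k * mx

# Points lying exactly on y = 2x + 1 recover k = 2, b = 1 for any weights.
```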
The lane line identification method provided by this embodiment analyzes the data in a road image with a deep learning segmentation model to obtain the left lane line region, the right lane line region, and the vanishing point region. Backbone points are extracted from the lane line regions while the vanishing point is obtained from the vanishing point region, and the left and right lane lines are then fitted from the left and right backbone points together with the vanishing point. Because the lane lines are identified by analyzing the image data with a deep learning segmentation model rather than with edge detection, the false detections caused by edge detection are avoided.
EXAMPLE III
Fig. 4 is a flowchart of a lane line identification method according to another embodiment of the present invention. As shown in fig. 4, on the basis of the embodiment shown in fig. 2a, the lane line identification method provided in the embodiment of the present invention may further include the following steps:
s401, at least one road image is obtained.
S402, obtaining a lane line area and a vanishing point area in the road image according to the pre-trained deep learning segmentation model.
In the embodiment of the present invention, the step of acquiring the road image and the step of acquiring the lane line area and the vanishing point area in the road image are similar to those in the embodiment shown in fig. 2a, and are not repeated herein. Specifically, in the embodiment of the present invention, the lane line regions obtained according to the deep learning segmentation model do not distinguish between the left lane line region and the right lane line region, that is, a region schematic diagram similar to that shown in fig. 2c is obtained.
S403, acquiring a lane line skeleton point from the total lane line area of the road image.
In the embodiment of the present invention, the lane line region comprises at least one connected region. For the lane line region, the boundary points are collected on each image row and backbone points are derived from their number: when there are two boundary points in a row, their midpoint is taken as a backbone point; when there are three, the middle boundary point is discarded and the midpoint of the two outer points is taken as a backbone point; when there are four, the midpoint of the left pair and the midpoint of the right pair are each taken as backbone points, so that the row yields two backbone points.
And S404, performing straight line fitting on the backbone points of the lane line to obtain candidate straight lines.
In the embodiment of the invention, straight line fitting can be performed on the backbone points of the lane line, for example, a hough straight line detection method can be utilized to obtain candidate straight lines.
S405, the candidate straight lines are distinguished into left candidate straight lines and right candidate straight lines according to the slopes of the candidate straight lines.
In the embodiment of the present invention, whether a candidate straight line is a left or right candidate can be determined from its position and slope. In a road image, the left candidate straight line generally lies in the left half of the image and the right candidate in the right half; and because lane lines converge from near to far in the image, a candidate straight line with positive slope may be classified as a left candidate and one with negative slope as a right candidate.
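The slope rule of S405 can be sketched as follows; the sign convention (positive slope is left, negative is right) is taken from the patent's statement as given:

```python
def classify_candidate(slope):
    """Classify a candidate straight line as the left or right lane line
    by the sign of its slope, following S405 (positive -> left,
    negative -> right, since lane lines converge toward the vanishing point)."""
    if slope > 0:
        return "left"
    if slope < 0:
        return "right"
    return "undetermined"  # a horizontal line is not a valid lane candidate
```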
S406, acquiring a lane line vanishing point from the vanishing point area in the road image.
In addition, the execution order of steps S403 to S405 and step S406 in the embodiment of the present invention is not fixed: they may be executed simultaneously, in the order S403, S404, S405, S406, or in the order S406, S403, S404, S405.
S407, obtaining the confidence degree given by the deep learning segmentation model to the backbone points of the lane line and the confidence degree given to the vanishing points of the lane line.
S408, carrying out weighted fitting on the backbone points of the lane lines and the confidence degrees thereof, and the vanishing points of the lane lines and the confidence degrees thereof contained in the candidate straight lines to obtain the curve expression of the lane lines in the road image.
And S409, acquiring a pixel point set of the lane line according to the curve expression.
In the embodiment of the present invention, the coordinate values of the left and right lane line pixels on designated rows can be obtained from the curve expressions of the left and right lane lines, yielding the pixel point sets of the two lane lines, left_points and right_points; the lane lines displayed to the user are composed of these pixel point sets.
In addition, when two or more road images are acquired and the road images are acquired in time sequence, inter-frame filtering may be performed on the pixel point sets of the lane lines in each road image in time sequence.
In the embodiment of the invention, filtering can be performed between the pixel point sets left_points{i} and right_points{i} of the i-th frame and the pixel point sets left_points{i-1} and right_points{i-1} of the (i-1)-th frame, so as to correct the lane line and stabilize the identification result.
Specifically, the pixel point set of the i-th frame is compared with that of the (i-1)-th frame. If the error between them is within an allowable range, the i-th frame is kept and the newly generated (i+1)-th frame is compared with the i-th frame; if the error exceeds the allowable range, the i-th frame is discarded, the newly generated (i+1)-th frame is compared with the (i-1)-th frame, and so on. When the data of a certain frame is found to have served as the comparison reference for longer than a preset threshold, the lane line detection of that frame is considered wrong, and the preset default pixel point set data is used instead as the reference against which subsequently generated pixel point sets are compared.
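The reference-and-fallback logic above can be sketched as follows. The error metric (mean absolute difference of corresponding point values) and all names are assumptions for illustration, not taken from the patent:

```python
def filter_lane_frames(frames, default, tol, max_stale):
    """Inter-frame filtering per the embodiment: each new frame's point
    set is compared against the current reference; if the error is within
    `tol` it becomes the new reference, otherwise it is discarded.  When
    the reference has been reused more than `max_stale` times in a row,
    its detection is assumed wrong and it is replaced by the preset
    default point set."""
    def error(a, b):
        # Assumed metric: mean absolute difference of corresponding values.
        return sum(abs(p - q) for p, q in zip(a, b)) / len(a)

    ref, stale, accepted = None, 0, []
    for f in frames:
        if ref is None or error(f, ref) <= tol:
            ref, stale = f, 0          # accept frame as the new reference
        else:
            stale += 1                  # keep comparing against the old reference
            if stale > max_stale:
                ref, stale = default, 0  # fall back to the preset default set
        accepted.append(ref)
    return accepted
```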
With regard to the preset default pixel point set data: before the vehicle sets off, the user may adjust the camera angle according to the actual road surface, select a specific lane line on the screen, and store the corresponding pixel point set as the default pixel point set data.
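The comparison-and-fallback logic described above might be sketched as follows; the error metric (maximum per-coordinate deviation), the tolerance, and the hold count are all illustrative assumptions, since the patent only states that an "allowable range" and a "preset threshold" exist.

```python
def interframe_filter(frames, tolerance, max_holds, default_set):
    """Accept each new pixel-point set if its maximum per-coordinate
    deviation from the current reference set is within tolerance;
    otherwise keep the reference.  After more than max_holds consecutive
    rejections the reference falls back to a preset default set."""
    reference = frames[0]
    held = 0                      # consecutive frames rejected so far
    accepted = [reference]
    for current in frames[1:]:
        error = max(abs(a - b) for a, b in zip(reference, current))
        if error <= tolerance:
            reference = current
            held = 0
        else:
            held += 1
            if held > max_holds:  # reference held too long: detection error
                reference = default_set
                held = 0
        accepted.append(reference)
    return accepted

# Toy frames: each "pixel point set" reduced to two column coordinates.
frames = [[10, 20], [11, 21], [90, 95], [80, 85], [12, 22]]
out = interframe_filter(frames, tolerance=5, max_holds=1,
                        default_set=[10, 20])
```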
In addition, in the embodiment of the invention, for a lane line in a single road image, the pixel point set of the lane line can be input into a Kalman filter for filtering; the filtered pixel point set output by the Kalman filter is then used as the input of a map rendering engine.
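A minimal one-dimensional Kalman filter with a constant-position model could smooth each lane-line pixel coordinate across frames, for example; the noise parameters q and r below are illustrative, and a production filter would likely track position and velocity jointly.

```python
class ScalarKalman:
    """Minimal 1-D Kalman filter (constant-position model) for smoothing
    one lane-line pixel coordinate across frames; q and r are illustrative."""
    def __init__(self, q=1e-3, r=0.5):
        self.q, self.r = q, r        # process / measurement noise
        self.x, self.p = None, 1.0   # state estimate and its covariance
    def update(self, z):
        if self.x is None:           # initialise on the first measurement
            self.x = z
            return self.x
        self.p += self.q             # predict: covariance grows
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)   # correct towards the measurement
        self.p *= (1.0 - k)
        return self.x

f = ScalarKalman()
smoothed = [f.update(z) for z in [100.0, 102.0, 98.0, 101.0]]
```

The smoothed coordinate stays close to the measurements while damping frame-to-frame jitter.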
According to the lane line identification method provided by the embodiment of the invention, the data in a road image are analyzed by a deep learning segmentation model to obtain the lane line area and the vanishing point area in the road image; skeleton points are extracted from the lane line area, and the vanishing point is obtained from the vanishing point area; the lane line is then fitted from the lane line skeleton points and the lane line vanishing point. Because the image data are analyzed by the deep learning segmentation model rather than by edge detection to identify the lane line, false detections caused by edge detection are avoided.
Example four
Fig. 5 is a schematic structural diagram of an embodiment of the lane line identification apparatus provided in the present invention, which can be used to execute the method steps shown in fig. 2a. As shown in fig. 5, the apparatus may include: an image acquisition module 51, an area acquisition module 52, and a lane line acquisition module 53.
The image acquisition module 51 is configured to acquire at least one road image; the area acquisition module 52 is configured to acquire a lane line area and a vanishing point area in the road image according to the pre-trained deep learning segmentation model; the lane line acquisition module 53 is configured to acquire the lane line in the road image according to the lane line area and the vanishing point area in the road image.
In the embodiment of the present invention, first, the image acquisition module 51 acquires a road image from a road image acquisition device of the vehicle, such as a camera or a video recorder. The road image can be taken from a video shot by the camera, or obtained by continuous shooting. For a video, the image acquisition module 51 may process every frame, or may select video frames at a certain time interval to reduce the amount of computation of the system; one road image corresponds to one frame of the video. For continuously shot images, each shot corresponds to one road image. Then, the area acquisition module 52 inputs the road image acquired by the image acquisition module 51 into the deep learning segmentation model for analysis; the model outputs the classification of each pixel point of the road image, and the lane line area and the vanishing point area in the road image are then obtained through a connected domain detection algorithm. In the segmented area image, different values are assigned to different areas. Finally, the lane line acquisition module 53 acquires the lane lines in the road image from the lane line area and vanishing point area obtained by the area acquisition module 52.
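The frame selection at a fixed interval mentioned above can be sketched as follows; the interval of 3 is arbitrary:

```python
def sample_frames(frames, interval):
    """Select every `interval`-th frame of a video as a road image to
    reduce the system's computation; the interval is a tunable choice."""
    return [frame for i, frame in enumerate(frames) if i % interval == 0]

selected = sample_frames(list(range(10)), interval=3)  # frame indices 0..9
```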
The lane line identification apparatus provided by the embodiment of the invention analyzes the data in a road image through a deep learning segmentation model to obtain the lane line area and the vanishing point area in the road image, and then obtains the lane line in the road image from the lane line area and the vanishing point area. Because the embodiment analyzes the image data through the deep learning segmentation model to identify the lane line, rather than identifying the lane line through edge detection, false detections caused by edge detection are avoided. Moreover, when identifying the lane line through the deep learning segmentation model, the embodiment detects not only the lane line area but also the vanishing point area of the lane line; since the vanishing point is important for determining the lane direction, the scheme also improves the accuracy of lane line identification compared with schemes using edge detection.
EXAMPLE five
Fig. 6 is a schematic structural diagram of another embodiment of the lane marking recognition apparatus provided in the present invention, which can be used to execute the method steps shown in fig. 3a and fig. 4. As shown in fig. 6, on the basis of the embodiment shown in fig. 5, in the lane line identification apparatus provided in the embodiment of the present invention, the lane line obtaining module 53 may include: a first obtaining unit 531, a second obtaining unit 532, a third obtaining unit 533, and a fourth obtaining unit 534.
The first obtaining unit 531 is configured to obtain lane line skeleton points from the lane line area in the road image; the second obtaining unit 532 is configured to perform straight line fitting on the lane line skeleton points to obtain a candidate straight line; the third obtaining unit 533 is configured to obtain the lane line vanishing point from the vanishing point area in the road image; the fourth obtaining unit 534 is configured to obtain the lane line in the road image according to the candidate straight line, the lane line skeleton points, and the lane line vanishing point.
In this embodiment of the present invention, the lane line area includes at least one connected area. The first obtaining unit 531 obtains the boundary points of the lane line area obtained by the area acquisition module 52 in the same row and determines their number. When there are two boundary points in the same row, the midpoint of the two boundary points is taken as a skeleton point; when there are three, the middle boundary point is discarded and the midpoint of the two outer boundary points is taken as a skeleton point; when there are four, the midpoint of the two boundary points on the left side and the midpoint of the two boundary points on the right side are each taken as skeleton points, that is, two skeleton points are obtained from that row.
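The per-row midpoint rules described here can be expressed directly; the function below is a sketch that takes the boundary column indices of one row and returns the skeleton-point column(s):

```python
def skeleton_points_in_row(boundary_cols):
    """Return the skeleton-point column(s) for one image row, given the
    column indices of the lane-line area boundaries in that row."""
    cols = sorted(boundary_cols)
    n = len(cols)
    if n == 2:                       # one lane line crosses the row
        return [(cols[0] + cols[1]) // 2]
    if n == 3:                       # discard the middle boundary point
        return [(cols[0] + cols[2]) // 2]
    if n == 4:                       # two lane lines: midpoint of each pair
        return [(cols[0] + cols[1]) // 2, (cols[2] + cols[3]) // 2]
    return []                        # other counts: no skeleton point
```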
Then, the second obtaining unit 532 performs straight line fitting on the lane line skeleton points obtained by the first obtaining unit 531 to obtain a candidate straight line, for example by the Hough straight line detection method.
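A toy Hough transform over the skeleton points, sketched below, illustrates the straight-line detection step; a real implementation would more likely use OpenCV's cv2.HoughLines on a rasterized mask, and the resolution parameters here are illustrative:

```python
import numpy as np

def hough_candidate_line(points, n_theta=180, rho_res=1.0):
    """Vote each point into a (theta, rho) accumulator and return the
    most-voted line in normal form: x*cos(theta) + y*sin(theta) = rho."""
    pts = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.hypot(pts[:, 0].max(), pts[:, 1].max()) + 1.0
    n_rho = int(2.0 * rho_max / rho_res) + 1
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in pts:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = ((rhos + rho_max) / rho_res).astype(int)
        acc[np.arange(n_theta), idx] += 1   # one vote per (theta, rho) cell
    t, r = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t], r * rho_res - rho_max

# Skeleton points lying on the vertical line col = 50.
theta, rho = hough_candidate_line([(50, y) for y in range(0, 100, 10)])
```

For collinear points the accumulator peaks at the line's (theta, rho), up to the bin resolution.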
On the other hand, each pixel point in the vanishing point area obtained by the deep learning segmentation model has its own confidence. The third obtaining unit 533 can therefore obtain a confidence coordinate value for each pixel point according to the coordinate value of the pixel point in the vanishing point area obtained by the area acquisition module 52 and the confidence given to the pixel point by the deep learning segmentation model (for example, the confidence of the pixel point can be multiplied by its coordinate value to obtain the confidence coordinate value); the pixel point corresponding to the mean of the confidence coordinate values of all the pixel points in the vanishing point area is then determined as the lane line vanishing point.
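One plausible reading of the "confidence coordinate value" computation is a confidence-weighted mean, sketched here; the normalisation by the confidence sum is an assumption, since the text speaks only of multiplying coordinates by confidences and averaging:

```python
import numpy as np

def vanishing_point(coords, confidences):
    """Scale each candidate pixel's coordinates by its segmentation
    confidence and take the normalised mean as the vanishing point."""
    coords = np.asarray(coords, dtype=float)
    w = np.asarray(confidences, dtype=float)
    vx, vy = (coords * w[:, None]).sum(axis=0) / w.sum()
    return (vx, vy)

# Two candidate pixels; the first is three times as confident.
vp = vanishing_point([(100.0, 50.0), (102.0, 52.0)], [3.0, 1.0])
```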
Finally, the fourth obtaining unit 534 acquires, from the lane line skeleton points, those whose distance to the candidate straight line is smaller than a preset distance threshold as target lane line skeleton points, and performs curve fitting on the target lane line skeleton points and the lane line vanishing point to obtain the lane line in the road image.
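The distance filtering and curve fitting performed by the fourth obtaining unit might look like the following sketch, where the candidate line is given in Hough normal form (theta, rho) and the distance threshold and curve degree are illustrative:

```python
import numpy as np

def fit_lane(skeleton_pts, line, vanish_pt, dist_thresh=5.0, degree=2):
    """Keep skeleton points closer than dist_thresh to the candidate line
    (given in Hough normal form (theta, rho)), append the vanishing point,
    and fit a curve col = f(row) through them."""
    theta, rho = line
    pts = np.asarray(skeleton_pts, dtype=float)
    dist = np.abs(pts[:, 0] * np.cos(theta)
                  + pts[:, 1] * np.sin(theta) - rho)
    target = pts[dist < dist_thresh]           # target skeleton points
    all_pts = np.vstack([target, np.asarray(vanish_pt, dtype=float)])
    coeffs = np.polyfit(all_pts[:, 1], all_pts[:, 0], degree)
    return coeffs, len(target)

# Three points on col = 50 plus one outlier; vanishing point at (50, 0).
skeleton = [(50.0, 60.0), (50.0, 70.0), (50.0, 80.0), (200.0, 40.0)]
coeffs, kept = fit_lane(skeleton, line=(0.0, 50.0), vanish_pt=(50.0, 0.0))
pred = float(np.polyval(coeffs, 30.0))         # column at row 30
```

The outlier is rejected by the distance test, so the fitted curve stays on the true line.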
Further, the third obtaining unit 533 may be configured to obtain a confidence coordinate value of each pixel point according to the coordinate value of each pixel point in the vanishing point region and the confidence given to the pixel point by the deep learning segmentation model; and determining pixel points corresponding to the mean value of the confidence coordinate values of all the pixel points in the vanishing point area as the lane line vanishing points.
Further, the fourth obtaining unit 534 may also be configured to obtain the confidence given by the deep learning segmentation model to the lane line skeleton points and the confidence given to the lane line vanishing point, and to perform weighted fitting on the lane line skeleton points contained in the candidate straight line together with their confidences and on the lane line vanishing point together with its confidence, so as to obtain the curve expression of the lane line in the road image. A pixel point set of the lane line is then acquired according to the curve expression.
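The weighted fitting of skeleton points and the vanishing point with their confidences could be realised with NumPy's weighted polynomial fit, for example; treating the segmentation confidences directly as least-squares weights is an assumption:

```python
import numpy as np

def weighted_lane_fit(points, confidences, degree=2):
    """Weighted least-squares fit of col = f(row); np.polyfit multiplies
    each residual by its weight, so confidences act as per-point weights."""
    pts = np.asarray(points, dtype=float)
    w = np.asarray(confidences, dtype=float)
    return np.polyfit(pts[:, 1], pts[:, 0], degree, w=w)

# (col, row) samples lying exactly on col = 0.01 * row**2.
pts = [(1.0, 10.0), (4.0, 20.0), (9.0, 30.0), (16.0, 40.0)]
coeffs = weighted_lane_fit(pts, [1.0, 0.9, 0.8, 1.0])
```

With exact data the weights do not change the recovered curve; with noisy points, higher-confidence points pull the fit more strongly.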
In this embodiment of the present invention, the fourth obtaining unit 534 may obtain, according to the curve expressions of the left lane line and the right lane line, the coordinate values of the pixel points of the left lane line and the right lane line in the designated rows, so as to obtain the pixel point sets of the left lane line and the right lane line: left_points and right_points. The lane lines displayed to the user are composed of these pixel point sets.
In addition, the lane line recognition apparatus provided in the embodiment of the present invention may further include: a filtering module 61. When two or more road images are acquired and the road images are acquired in time sequence, the filtering module 61 may be configured to perform inter-frame filtering on the pixel point sets of the lane lines in each road image in time sequence.
Specifically, the filtering module 61 compares the pixel point set of the ith frame with that of the (i-1)th frame. If the error between the two is within the allowable range, the newly generated pixel point set of the (i+1)th frame is then compared with the ith frame; if the error exceeds the allowable range, the ith frame is discarded, the newly generated (i+1)th frame is compared with the (i-1)th frame, and so on. When the data of a certain frame has served as the reference for comparison with subsequent data for longer than a preset threshold, the lane line detection of that frame is considered erroneous; at that point, preset default pixel point set data is used as the reference for comparison with subsequently generated pixel point sets.
With regard to the preset default pixel point set data: before the vehicle sets off, the user may adjust the camera angle according to the actual road surface, select a specific lane line on the screen, and store the corresponding pixel point set as the default pixel point set data.
In addition, in the embodiment of the present invention, for a lane line in a single road image, the filtering module 61 may also input the pixel point set of the lane line into a Kalman filter for filtering; the filtered pixel point set output by the Kalman filter is then used as the input of a map rendering engine.
The lane line identification apparatus provided by the embodiment of the invention analyzes the data in a road image through a deep learning segmentation model to obtain the lane line area and the vanishing point area in the road image; skeleton points are extracted from the lane line area, and the vanishing point is obtained from the vanishing point area; the lane line is then fitted from the lane line skeleton points and the lane line vanishing point. Because the image data are analyzed by the deep learning segmentation model rather than by edge detection to identify the lane line, false detections caused by edge detection are avoided.
EXAMPLE six
The internal functions and structure of the lane line identification apparatus are described above; the apparatus may be implemented as an electronic device. Fig. 7 is a schematic structural diagram of an embodiment of an electronic device provided in the present invention. As shown in fig. 7, the electronic device includes a memory 71 and a processor 72.
The memory 71 stores programs. In addition to the above-described programs, the memory 71 may also be configured to store other various data to support operations on the electronic device. Examples of such data include instructions for any application or method operating on the electronic device, contact data, phonebook data, messages, pictures, videos, and so forth.
The memory 71 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The processor 72, coupled to the memory 71, executes the program stored in the memory 71; when executed, the program performs any of the lane line identification methods described above.
Further, as shown in fig. 7, the electronic device may further include: a communication component 73, a power component 74, an audio component 75, a display 76, and the like. Only some of the components are schematically shown in fig. 7; this does not mean that the electronic device includes only these components.
The communication component 73 is configured to facilitate wired or wireless communication between the electronic device and other devices. The electronic device may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 73 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 73 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
A power supply component 74 provides power to the various components of the electronic device. The power components 74 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for an electronic device.
The audio component 75 is configured to output and/or input audio signals. For example, the audio component 75 includes a Microphone (MIC) configured to receive external audio signals when the electronic device is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory 71 or transmitted via a communication component 73. In some embodiments, audio assembly 75 also includes a speaker for outputting audio signals.
The display 76 includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by hardware controlled by program instructions. The program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A lane line identification method is characterized by comprising the following steps:
acquiring at least one road image;
acquiring a lane line area and a vanishing point area in the road image according to a pre-trained deep learning segmentation model;
and acquiring the lane line in the road image according to the lane line area and the vanishing point area in the road image.
2. The method according to claim 1, wherein the obtaining a lane line in the road image according to a lane line area and a vanishing point area in the road image includes:
acquiring a lane line skeleton point from a lane line area in the road image;
performing straight line fitting on the backbone points of the lane line to obtain candidate straight lines;
acquiring a lane line vanishing point from the vanishing point area in the road image;
and acquiring the lane line in the road image according to the candidate straight line, the lane line skeleton point and the lane line vanishing point.
3. The method according to claim 2, wherein the obtaining of the lane line in the road image according to the candidate straight line, the lane line skeleton point, and the lane line vanishing point specifically includes:
acquiring, from the lane line skeleton points, the lane line skeleton points whose distance to the candidate straight line is smaller than a preset distance threshold as target lane line skeleton points;
and performing curve fitting on the skeleton points of the target lane line and the vanishing points of the lane line to obtain the lane line in the road image.
4. The lane line identification method according to any one of claims 2 to 3, wherein the acquiring of the lane line skeleton points from the lane line area in the road image includes:
acquiring boundary points of the lane line area in the same row, and judging the number of the boundary points;
when the number of the boundary points in the same row is two, taking the midpoint of the two boundary points as the skeleton point;
when the number of the boundary points in the same row is three, taking the midpoint of the two boundary points on the two sides as the skeleton point;
and when the number of the boundary points in the same row is four, respectively taking the midpoint of the two boundary points on the left side and the midpoint of the two boundary points on the right side as the skeleton points.
5. The method according to any one of claims 2 to 3, wherein the obtaining of the lane line vanishing point from the vanishing point region in the road image includes:
obtaining a confidence coordinate value of each pixel point according to the coordinate value of each pixel point in the vanishing point region and the confidence degree given to the pixel point by the deep learning segmentation model;
and determining pixel points corresponding to the mean value of the confidence coordinate values of all the pixel points in the vanishing point area as the lane line vanishing points.
6. The method according to claim 2, wherein the obtaining of the lane line in the road image according to the candidate straight line, the lane line skeleton point, and the lane line vanishing point includes:
obtaining a confidence degree given to the lane line skeleton point by the deep learning segmentation model and a confidence degree given to the lane line vanishing point;
and performing weighted fitting on the backbone points of the lane lines and the confidence degrees thereof, and the vanishing points of the lane lines and the confidence degrees thereof contained in the candidate straight lines to obtain the curve expression of the lane lines in the road image.
7. The lane line identification method according to claim 6,
the method further comprises the following steps:
acquiring a pixel point set of the lane line according to the curve expression;
when there are two or more acquired road images and the road images are acquired in time sequence, the method further comprises:
and performing inter-frame filtering on the pixel point set of the lane line in each road image according to the time sequence.
8. The lane line identification method according to claim 7, further comprising:
aiming at a lane line in one road image, inputting a pixel point set of the lane line into a Kalman filter for filtering;
and taking the filtered pixel point set output by the Kalman filter as the input of a map rendering engine.
9. A lane line identification apparatus, comprising:
the image acquisition module is used for acquiring at least one road image;
the area acquisition module is used for acquiring a lane line area and a vanishing point area in the road image according to a pre-trained deep learning segmentation model;
and the lane line acquisition module is used for acquiring a lane line in the road image according to the lane line area and the vanishing point area in the road image.
10. An electronic device, comprising:
a memory for storing a program;
a processor for executing the program stored in the memory, the program executing the lane line identification method according to any one of claims 1 to 8 when executed.
CN201811496189.2A 2018-12-07 2018-12-07 Lane line identification method and device and electronic equipment Active CN111291601B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811496189.2A CN111291601B (en) 2018-12-07 2018-12-07 Lane line identification method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811496189.2A CN111291601B (en) 2018-12-07 2018-12-07 Lane line identification method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111291601A true CN111291601A (en) 2020-06-16
CN111291601B CN111291601B (en) 2023-05-02

Family

ID=71029314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811496189.2A Active CN111291601B (en) 2018-12-07 2018-12-07 Lane line identification method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111291601B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112150560A (en) * 2020-09-27 2020-12-29 上海高德威智能交通系统有限公司 Method and device for determining vanishing point and computer storage medium
CN112215213A (en) * 2020-12-11 2021-01-12 智道网联科技(北京)有限公司 Lane line detection method, lane line detection device, electronic device, and storage medium
CN112364869A (en) * 2021-01-14 2021-02-12 北京经纬恒润科技股份有限公司 Lane line identification method and device
CN112597846A (en) * 2020-12-14 2021-04-02 合肥英睿系统技术有限公司 Lane line detection method, lane line detection device, computer device, and storage medium
CN113807333A (en) * 2021-11-19 2021-12-17 智道网联科技(北京)有限公司 Data processing method and storage medium for detecting lane line
CN114092919A (en) * 2022-01-18 2022-02-25 深圳佑驾创新科技有限公司 Vehicle deviation warning method, equipment and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102324017A (en) * 2011-06-09 2012-01-18 中国人民解放军国防科学技术大学 FPGA (Field Programmable Gate Array)-based lane line detection method
CN105260699A (en) * 2015-09-10 2016-01-20 百度在线网络技术(北京)有限公司 Lane line data processing method and lane line data processing device
CN106991407A (en) * 2017-04-10 2017-07-28 吉林大学 The method and device of a kind of lane detection
CN107025432A (en) * 2017-02-28 2017-08-08 合肥工业大学 A kind of efficient lane detection tracking and system
CN108090456A (en) * 2017-12-27 2018-05-29 北京初速度科技有限公司 A kind of Lane detection method and device
US20180204073A1 (en) * 2017-01-16 2018-07-19 Denso Corporation Lane detection apparatus
CN108921089A (en) * 2018-06-29 2018-11-30 驭势科技(北京)有限公司 Method for detecting lane lines, device and system and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102324017A (en) * 2011-06-09 2012-01-18 中国人民解放军国防科学技术大学 FPGA (Field Programmable Gate Array)-based lane line detection method
CN105260699A (en) * 2015-09-10 2016-01-20 百度在线网络技术(北京)有限公司 Lane line data processing method and lane line data processing device
US20180181817A1 (en) * 2015-09-10 2018-06-28 Baidu Online Network Technology (Beijing) Co., Ltd. Vehicular lane line data processing method, apparatus, storage medium, and device
US20180204073A1 (en) * 2017-01-16 2018-07-19 Denso Corporation Lane detection apparatus
CN107025432A (en) * 2017-02-28 2017-08-08 合肥工业大学 A kind of efficient lane detection tracking and system
CN106991407A (en) * 2017-04-10 2017-07-28 吉林大学 The method and device of a kind of lane detection
CN108090456A (en) * 2017-12-27 2018-05-29 北京初速度科技有限公司 A kind of Lane detection method and device
CN108921089A (en) * 2018-06-29 2018-11-30 驭势科技(北京)有限公司 Method for detecting lane lines, device and system and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DAVY NEVEN等: "《Towards End-to-End Lane Detection: an Instance Segmentation Approach》" *
王春阳: "《车道检测方法综述》" *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112150560A (en) * 2020-09-27 2020-12-29 上海高德威智能交通系统有限公司 Method and device for determining vanishing point and computer storage medium
CN112150560B (en) * 2020-09-27 2024-02-02 上海高德威智能交通系统有限公司 Method, device and computer storage medium for determining vanishing point
CN112215213A (en) * 2020-12-11 2021-01-12 智道网联科技(北京)有限公司 Lane line detection method, lane line detection device, electronic device, and storage medium
CN112597846A (en) * 2020-12-14 2021-04-02 合肥英睿系统技术有限公司 Lane line detection method, lane line detection device, computer device, and storage medium
CN112597846B (en) * 2020-12-14 2022-11-11 合肥英睿系统技术有限公司 Lane line detection method, lane line detection device, computer device, and storage medium
CN112364869A (en) * 2021-01-14 2021-02-12 北京经纬恒润科技股份有限公司 Lane line identification method and device
CN113807333A (en) * 2021-11-19 2021-12-17 智道网联科技(北京)有限公司 Data processing method and storage medium for detecting lane line
CN113807333B (en) * 2021-11-19 2022-03-18 智道网联科技(北京)有限公司 Data processing method and storage medium for detecting lane line
CN114092919A (en) * 2022-01-18 2022-02-25 深圳佑驾创新科技有限公司 Vehicle deviation warning method, equipment and medium

Also Published As

Publication number Publication date
CN111291601B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN111291601B (en) Lane line identification method and device and electronic equipment
JP6392468B2 (en) Region recognition method and apparatus
KR101864759B1 (en) Method and device for identifying region
EP3163504B1 (en) Method, device and computer-readable medium for region extraction
KR101763891B1 (en) Method for region extraction, method for model training, and devices thereof
US10007841B2 (en) Human face recognition method, apparatus and terminal
JP6392467B2 (en) Region identification method and apparatus
CN109664820A (en) Driving reminding method, device, equipment and storage medium based on automobile data recorder
CN106845385A (en) The method and apparatus of video frequency object tracking
CN107480665B (en) Character detection method and device and computer readable storage medium
CN106250831A (en) Image detecting method, device and the device for image detection
CN106355573A (en) Target object positioning method and device in pictures
KR20170061627A (en) Method and apparatus for area identification
CN106228556B (en) image quality analysis method and device
CN106557759B (en) Signpost information acquisition method and device
CN108171225B (en) Lane detection method, device, terminal and storage medium
CN111476057B (en) Lane line acquisition method and device, and vehicle driving method and device
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN114627561B (en) Dynamic gesture recognition method and device, readable storage medium and electronic equipment
CN115660945A (en) Coordinate conversion method and device, electronic equipment and storage medium
CN113627277A (en) Method and device for identifying parking space
CN114693702B (en) Image processing method, image processing device, electronic equipment and storage medium
CN111507202B (en) Image processing method, device and storage medium
CN113012029B (en) Curved surface image correction method and device and electronic equipment
CN117351556A (en) Gesture recognition method, device, vehicle and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant