WO2020087322A1 - Lane line recognition method and device, and vehicle - Google Patents

Lane line recognition method and device, and vehicle (车道线识别方法和装置、车辆)

Info

Publication number
WO2020087322A1
WO2020087322A1 (PCT/CN2018/112894, CN2018112894W)
Authority
WO
WIPO (PCT)
Prior art keywords
line segment
contribution
Application number
PCT/CN2018/112894
Other languages
English (en)
French (fr)
Inventor
崔健
Original Assignee
深圳市大疆创新科技有限公司
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2018/112894 priority Critical patent/WO2020087322A1/zh
Priority to CN201880039256.XA priority patent/CN110770741B/zh
Publication of WO2020087322A1 publication Critical patent/WO2020087322A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Definitions

  • the invention relates to the technical field of image processing, and in particular to a lane line recognition method and device, and a vehicle.
  • in the related art, lane lines in a road image are detected by feature extraction and straight-line or curve detection methods.
  • however, in actual scenes many objects that are not lane lines also have a line-like shape.
  • as a result, many non-lane-line objects in the road image, such as guardrails on both sides of the road, linear markings on road signs (such as arrows and text), and the edges of vehicles or pedestrians on the road surface, are detected as line-like shapes and mistakenly identified as lane lines. The error rate of lane lines identified by the above algorithms is therefore relatively high.
  • the invention provides a lane line recognition method and device, and a vehicle.
  • a lane line recognition method comprising:
  • identifying all line segments in the road image;
  • for each line segment, determining the associated line segments of the line segment from the other line segments, and calculating the contribution of each associated line segment to the line segment, the contribution being used to characterize the degree of influence of the associated line segment on the line segment being a lane line;
  • calculating the score of the line segment according to the length of each line segment and the contributions of the line segment's associated line segments, the score being used to characterize the likelihood that the line segment is a lane line;
  • determining the line segment with the highest score, and the associated line segments of the line segment with the highest score, as lane lines.
  • a lane line recognition device including:
  • a storage device for storing program instructions;
  • a processor that invokes the program instructions stored in the storage device and, when the program instructions are executed, is configured to:
  • identify all line segments in the road image;
  • for each line segment, determine the associated line segments of the line segment from the other line segments, and calculate the contribution of each associated line segment to the line segment, the contribution being used to characterize the degree of influence of the associated line segment on the line segment being a lane line;
  • calculate the score of the line segment according to the length of each line segment and the contributions of the line segment's associated line segments, the score being used to characterize the likelihood that the line segment is a lane line;
  • determine the line segment with the highest score, and the associated line segments of the line segment with the highest score, as lane lines.
  • a vehicle, including:
  • a vehicle body;
  • a photographing device fixed on the vehicle body; and
  • a processor, the processor being electrically connected to the photographing device;
  • the photographing device is used to photograph a road image in front of the vehicle and send it to the processor, and the processor is configured to:
  • identify all line segments in the road image;
  • for each line segment, determine the associated line segments of the line segment from the other line segments, and calculate the contribution of each associated line segment to the line segment, the contribution being used to characterize the degree of influence of the associated line segment on the line segment being a lane line;
  • calculate the score of the line segment according to the length of each line segment and the contributions of the line segment's associated line segments, the score being used to characterize the likelihood that the line segment is a lane line;
  • determine the line segment with the highest score, and the associated line segments of the line segment with the highest score, as lane lines.
  • a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the following steps:
  • identify all line segments in the road image;
  • for each line segment, determine the associated line segments of the line segment from the other line segments, and calculate the contribution of each associated line segment to the line segment, the contribution being used to characterize the degree of influence of the associated line segment on the line segment being a lane line;
  • calculate the score of the line segment according to the length of each line segment and the contributions of the line segment's associated line segments, the score being used to characterize the likelihood that the line segment is a lane line;
  • determine the line segment with the highest score, and the associated line segments of the line segment with the highest score, as lane lines.
  • the embodiments of the present invention comprehensively consider the length of each line segment in the road image and the contributions of the line segment's associated line segments when judging the likelihood that the line segment is a lane line; through combinatorial optimization, the detected lane lines match the actual road lines as closely as possible, improving the robustness of lane line detection.
  • FIG. 1 is a method flowchart of a lane line recognition method in an embodiment of the invention
  • FIG. 2 is a flowchart of a specific implementation manner of the lane line recognition method shown in FIG. 1 in an embodiment of the present invention
  • FIG. 3 is a flowchart of another specific implementation manner of the lane line recognition method shown in FIG. 1 in an embodiment of the present invention
  • FIG. 4 is a flowchart of a specific implementation manner of the lane line recognition method shown in FIG. 3 in an embodiment of the present invention
  • FIG. 5 is a structural block diagram of a lane line recognition device in an embodiment of the invention.
  • FIG. 6 is a structural block diagram of a vehicle in an embodiment of the invention.
  • FIG. 1 is a method flowchart of a lane line recognition method according to Embodiment 1 of the present invention.
  • the lane line recognition method may include the following steps:
  • Step S101 Identify all line segments in the road image
  • the lane line may include straight lines and curves, so both straight line segments and curved line segments in the road image will be treated as suspected lane lines.
  • all straight line segments and / or curved line segments in the road image may be identified, that is, the line segments in this embodiment may include straight line segments and / or curved line segments.
  • different methods may be selected to identify all line segments in the road image.
  • all line segment regions in the road image are first segmented, and then all line segments are identified by a line segment detection algorithm.
  • the way of segmenting the line segment regions in the road image can be chosen as needed; for example, in some examples, all line segment regions in the road image are segmented with a CNN (Convolutional Neural Network).
  • optionally, all line segment regions in the road image are segmented by CNN-based semantic segmentation.
  • optionally, a deep learning algorithm is used to train on a large number of road image samples to obtain a lane line model; the current road image is then input to the lane line model to obtain all line segment regions (including straight line segment regions and/or curved line segment regions).
  • in other examples, all line segment regions in the road image are segmented based on an edge detection algorithm. Specifically, the edges of all line segments in the road image are detected by the edge detection algorithm, and the line segment regions are thereby segmented.
  • the line segment detection algorithm in this embodiment may be a Hough transform algorithm or another line segment detection algorithm; the type of line segment detection algorithm may be selected as needed. In this embodiment, the line segment detection algorithm can identify parameter information for all line segments in the road image, for example, the lengths of all line segments and the positional relationships between them (such as the angles between line segments and/or the spacing between line segments).
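As a concrete illustration of Hough-transform line detection, the following is a minimal sketch (the point set, quantization, and parameters are illustrative assumptions, not values from the patent): each edge point votes for every quantized (rho, theta) line passing through it, and the bin with the most votes identifies the dominant line.

```python
import math
from collections import Counter

def hough_lines(points, theta_steps=180, rho_res=1.0):
    """Accumulate Hough votes: each point votes for every quantized
    (rho, theta) with rho = x*cos(theta) + y*sin(theta)."""
    votes = Counter()
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            votes[(round(rho / rho_res), t)] += 1
    return votes

# Edge points sampled along the vertical line x = 5 (a lane-line candidate).
pts = [(5, y) for y in range(0, 200, 10)]
votes = hough_lines(pts)
(rho_bin, theta_bin), count = votes.most_common(1)[0]
# The strongest bin is rho = 5, theta-bin = 0, with one vote per point.
```

In a real pipeline the votes would come from the segmented line-segment regions, and several strong bins (not just the single strongest) would be kept as candidate segments.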
  • the lane line recognition method of this embodiment may be applied to vehicles, especially unmanned vehicles, and the road image may be a road image in front of the vehicle captured by a camera on the vehicle.
  • the road image is generally a front view.
  • road markers such as road surface arrows and lane lines may be distorted.
  • the shape of the distortion is related to the position of the vehicle, and line segments in the front view that are farther from the vehicle are harder to identify.
  • lane markers therefore have poor consistency and are difficult to identify accurately.
  • this embodiment performs image correction on the road image before identifying all line segments in the road image.
  • the image correction method can be selected according to needs.
  • in this embodiment, the road image is projected to the corresponding top view.
  • in the top view, road surface markers such as lane lines and arrows are restored to their true scale and shape and are easier to identify; moreover, the position of each road-surface pixel in the top view directly corresponds to its real position, so the positional relationship between a pixel and the vehicle can be obtained directly from the top view, meeting the needs of basic ADAS functions and autonomous driving functions.
  • projecting the road image to the corresponding top view may include the following steps:
  • among the intrinsic parameters of the photographing device, f_x and f_y represent the focal length of the photographing device;
  • c_x and c_y represent the position where the optical axis of the lens of the photographing device passes through the imaging sensor.
  • the calibration of the intrinsic parameters of the camera can use an existing calibration algorithm, which will not be detailed here.
  • the external reference of the shooting device to the ground includes a rotation matrix R and a translation vector T, which are the rotation and translation of the shooting device relative to the object plane.
  • the object plane is the plane where the lane line is located.
  • T can be obtained from the camera's height above the ground.
  • the calibration of R is achieved by indirectly calibrating the pitch angle of the camera relative to the ground (the ground at the time the camera captures the current road image), the roll angle of the camera relative to the ground, and the yaw angle of the camera relative to the front of the vehicle. Pitch, roll, and yaw are the rotation angles of the camera about its own coordinate axes x, y, and z, respectively; from these three angles the rotation matrices corresponding to the three axes, R_x(pitch), R_y(roll), and R_z(yaw), can be calculated, and R is then calculated from the rotation matrices corresponding to the three axes.
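The composition of R from the three per-axis rotations can be sketched as follows; this is a minimal illustration, and the composition order R = R_x(pitch) · R_y(roll) · R_z(yaw) is an assumption, since the text does not fix the multiplication order:

```python
import math

def rot_x(a):  # rotation about the camera x-axis (pitch)
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):  # rotation about the y-axis (roll)
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):  # rotation about the z-axis (yaw)
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# R composed from the three per-axis rotations (angles are illustrative).
pitch, roll, yaw = 0.1, 0.0, 0.05
R = matmul(rot_x(pitch), matmul(rot_y(roll), rot_z(yaw)))
```

Whatever the chosen order, the resulting R is orthonormal, which is what the inverse perspective mapping below relies on.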
  • mapping a point in the object plane coordinate system to the image coordinate system can be expressed as:

    s * [u, v, 1]^T = M * [r_1 r_2 r_3 t] * [X, Y, 0, 1]^T

  • u, v are the pixel coordinates in the road image coordinate system;
  • s is the normalization coefficient;
  • M is the intrinsic parameter matrix of the photographing device;
  • [r_1 r_2 r_3 t] is the extrinsic parameter of the photographing device relative to the object plane, that is, the positional relationship;
  • r_1, r_2 and r_3 are 3-by-1 column vectors, and r_1, r_2 and r_3 form the rotation matrix R;
  • t is a 3-by-1 column vector representing the translation of the camera relative to the object plane;
  • X and Y represent the coordinates on the object plane; since Z = 0 on the object plane, the mapping reduces to s * [u, v, 1]^T = M * [r_1 r_2 t] * [X, Y, 1]^T.
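The object-plane-to-image mapping can be sketched as follows; the intrinsics, pose, and ground point are illustrative assumptions. Because every road point has Z = 0, the 3-by-4 extrinsic matrix collapses to the 3-by-3 plane homography M * [r_1 r_2 t]:

```python
def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def project_ground_point(M, R, t, X, Y):
    """Map a ground-plane point (X, Y, Z=0) to a pixel (u, v) via
    s * [u, v, 1]^T = M @ [r1 r2 t] @ [X, Y, 1]^T."""
    P = [[R[i][0], R[i][1], t[i]] for i in range(3)]  # drop r3 (Z = 0)
    H = matmul(M, P)                 # 3x3 plane homography
    x = matvec(H, [X, Y, 1.0])
    s = x[2]                         # normalization coefficient
    return x[0] / s, x[1] / s        # pixel coordinates (u, v)

# Assumed intrinsics (fx = fy = 800, cx = 640, cy = 360) and a camera
# looking straight down (R = identity) at height 10 above the plane.
M = [[800, 0, 640], [0, 800, 360], [0, 0, 1]]
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0, 0, 10]
u, v = project_ground_point(M, R, t, 1.0, 2.0)
print(u, v)  # → 720.0 520.0
```

Inverting H maps image pixels back onto the ground plane, which is exactly the top-view (inverse perspective) projection the text describes.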
  • the above inverse perspective transformation can project points on the object plane accurately, but points that are not on the object plane are projected with error. For example, in the true perspective view the rail of a guardrail may appear very close to the lane line at the edge of the road, yet because points on the guardrail are not on the object plane, their projection onto the top view lies at a certain distance from the roadside lane line, and the guardrail risks being falsely detected as a lane line.
  • in this embodiment, the road image is projected onto the top view using the above inverse perspective transformation, and lane line detection is then performed on the top view.
  • the subsequent lane line detection method, which uses prior conditions, makes it easier to distinguish true lane lines in the top view from falsely detected ones.
  • all line segments in the top view are identified. Specifically, all line segment regions in the top view are first segmented, and then all line segments are identified based on the line segment detection algorithm.
  • Step S102: For each line segment, determine the associated line segments of the line segment from the other line segments, and calculate the contribution of each associated line segment to the line segment;
  • the associated line segment of the line segment is determined from the other line segments.
  • the positional relationship between the line segment and the other line segments includes the angle between the line segment and the other line segments, and/or the distance between the line segment and the other line segments; of course, the positional relationship can also be defined according to other positional relationships that hold between real lane lines.
  • the positional relationship between the line segment and other line segments includes the angle between the line segment and other line segments.
  • the positional relationship between the line segment and other line segments includes the distance between the line segment and other line segments.
  • the positional relationship between the line segment and other line segments includes the angle between the line segment and other line segments, and the distance between the line segment and other line segments.
  • the positional relationship between the line segment and other line segments includes the angle between the line segment and other line segments, and the distance between the line segment and other line segments as an example for further description.
  • the lane lines on the actual road are as parallel as possible, and the spacing between adjacent lane lines is approximately in the range of 2.5 meters to 4.2 meters.
  • the preset prior condition of the lane line in this embodiment includes that the angle between lane lines is within a preset angle range, where the preset angle range can be set according to the condition that lane lines are as parallel as possible.
  • the preset prior condition of the lane line also includes that the distance between the lane lines is an integer multiple of the preset distance.
  • the preset spacing is a value or range of values obtained by proportionally scaling the spacing between adjacent lane lines. After the preset spacing is determined, the closer the ratio of the spacing between two line segments in the road image to the preset spacing is to an integer, the more likely the two line segments are to be lane lines.
  • the size of the preset spacing can be set according to the condition that the spacing between adjacent lane lines is approximately in the range of 2.5 meters to 4.2 meters.
  • the identified line segments are preliminarily screened to filter out line segments that are obviously not lane lines, for example, arrows, vehicle edges, or guardrails whose spacing from a normal lane line does not match the real lane spacing, and line segments that are not parallel to a normal lane line.
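The preliminary screening by prior conditions can be sketched as follows; the preset spacing (3.5 m, inside the 2.5 m to 4.2 m range mentioned above), the angle threshold, and the ratio tolerance are illustrative assumptions, not values from the text:

```python
def is_associated(angle_deg, spacing_m, preset_spacing=3.5,
                  max_angle_deg=5.0, max_ratio_dev=0.2):
    """Return True if a candidate segment satisfies the lane-line priors:
    nearly parallel, and spaced by roughly an integer multiple of the
    preset spacing."""
    if angle_deg > max_angle_deg:        # not close enough to parallel
        return False
    ratio = spacing_m / preset_spacing
    # Keep it only if the spacing ratio is near a positive integer.
    return abs(ratio - round(ratio)) <= max_ratio_dev and round(ratio) >= 1

print(is_associated(2.0, 3.4))   # parallel, ~one lane width apart -> True
print(is_associated(2.0, 7.1))   # ~two lane widths apart -> True
print(is_associated(30.0, 3.5))  # arrow edge at a large angle -> False
print(is_associated(1.0, 1.2))   # guardrail too close to the lane -> False
```

Running this filter over all segment pairs yields, for each segment, its set of associated segments used in the contribution calculation below.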
  • if the current line segment is a straight line segment, the determined associated line segments of the current line segment are straight line segments; if the current line segment is a curve segment, the determined associated line segments of the current line segment are curve segments.
  • the related straight line segment of the straight line segment can be determined from other straight line segments directly according to the positional relationship between the straight line segment and other straight line segments and preset prior conditions of the lane line.
  • a curve segment is usually divided into multiple sub-segments, each of which approximates a straight segment; the associated segments of each curve sub-segment are then determined, in a process similar to that for straight segments, which will not be repeated here.
  • the straight line segment can also be divided into multiple segments for processing.
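Splitting a curve into near-straight pieces, as described above, can be sketched as follows (the sampled parabola is an illustrative stand-in for a curved lane line):

```python
def split_curve(points, n_pieces):
    """Approximate a sampled curve by n_pieces chords; each chord can then
    be handled like a straight segment when finding associated segments."""
    step = max(1, (len(points) - 1) // n_pieces)
    return [(points[k], points[min(k + step, len(points) - 1)])
            for k in range(0, len(points) - 1, step)]

# A curved lane line sampled as points on a parabola.
curve = [(x, 0.05 * x * x) for x in range(9)]
chords = split_curve(curve, 4)   # four near-straight pieces
```

Each chord then participates in the angle/spacing checks exactly like a straight segment, so the same scoring machinery covers both straight and curved lane lines.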
  • the positional relationship between the line segment and the associated line segment of the line segment includes the angle between the line segment and the associated line segment of the line segment, and the distance between the line segment and the associated line segment.
  • Step S401: For each associated line segment of each line segment, calculate the score of the associated line segment according to the length of the associated line segment and the contributions of the other associated line segments of the line segment to that associated line segment;
  • Step S402: For each line segment, calculate the contribution of each associated line segment to the line segment according to the score of the associated line segment, the angle between the associated line segment and the line segment, the ratio of the distance between the associated line segment and the line segment to the preset spacing, and the length of the line segment.
  • step S401 when step S401 is executed, the left-associated line segment on the left side of the line segment and the right-associated line segment on the right side of the line segment are determined according to the positional relationship between the line segment and the associated line segment of the line segment ; Then, for each left-associated line segment of each line segment, the score of the left-associated line segment is calculated according to the length of the left-associated line segment and the contribution of other left-associated line segments of the left-associated line segment to the left-associated line segment, and For each right-associated line segment of each line segment, the score of the right-associated line segment is calculated according to the length of the right-associated line segment and the contribution of other right-associated line segments of the right-associated line segment to the right-associated line segment. Wherein, calculating the score of the left associated line segment and calculating the score of the right associated line segment may be performed at the same time, or may be performed in order.
  • the other left-associated line segments of a left-associated line segment refer to the associated line segments located on the left side of that left-associated line segment;
  • the other right-associated line segments of a right-associated line segment refer to the associated line segments located on the right side of that right-associated line segment.
  • for example, taking line segment 3: the left-associated line segments of line segment 3 include line segment 1 and line segment 2, and its right-associated line segments include line segment 4 and line segment 5; line segment 1 has no left-associated line segment, the left-associated line segment of line segment 2 includes line segment 1, the right-associated line segment of line segment 4 includes line segment 5, and line segment 5 has no right-associated line segment.
  • when executing step S402, for each line segment, the left contribution of each left-associated line segment to the line segment is calculated according to the score of the left-associated line segment, the angle between the left-associated line segment and the line segment, the ratio of the distance between the left-associated line segment and the line segment to the preset spacing, and the length of the line segment; likewise, for each line segment, the right contribution of each right-associated line segment to the line segment is calculated according to the score of the right-associated line segment, the angle between the right-associated line segment and the line segment, the ratio of the distance between the right-associated line segment and the line segment to the preset spacing, and the length of the line segment.
  • i, j are positive integers, i ∈ (1, n), j ∈ (1, n), and n is the number of line segments;
  • L_i is the length of the i-th line segment;
  • k_1 is the first preset coefficient, and k_1 > 0; in this embodiment, the size of k_1 can be set as needed, and the larger k_1 is set, the more the contribution is affected by the angle θ;
  • θ is the angle between the i-th line segment and the j-th left-associated line segment or the j-th right-associated line segment;
  • Δ is the ratio of the distance between the i-th line segment and the j-th left-associated line segment or the j-th right-associated line segment to the preset spacing;
  • k_2 is the second preset coefficient, and 0 < k_2 < 1; in this embodiment, the size of k_2 can be set as needed, and the smaller k_2 is set, the more the contribution is affected by the spacing ratio Δ;
  • the calculation method is not limited to the calculation formula of equation (6) listed in this embodiment, nor is it limited to the calculation formula of equation (7) listed in this embodiment.
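The exact contribution formulas (equations (6) and (7)) are not reproduced in this text, so the sketch below is only one plausible form consistent with the stated behavior: the contribution grows with the associated segment's score, decays with the angle θ (faster for larger k_1), and decays as the spacing ratio Δ moves away from an integer (faster for smaller k_2):

```python
import math

def contribution(score_j, theta, delta, k1=1.0, k2=0.5):
    """One plausible form of the contribution CS (NOT the patent's exact
    formula): the associated segment's score, damped by the angle theta
    (more damping for larger k1) and by the distance of the spacing ratio
    delta from the nearest integer (more damping for smaller k2)."""
    angle_term = math.exp(-k1 * theta)              # parallel -> 1.0
    spacing_term = k2 ** abs(delta - round(delta))  # integer ratio -> 1.0
    return score_j * angle_term * spacing_term

# A nearly parallel, properly spaced neighbor contributes far more
# than a tilted, oddly spaced one.
c_good = contribution(score_j=10.0, theta=0.02, delta=1.01)
c_bad = contribution(score_j=10.0, theta=0.40, delta=1.45)
```

Any function with these monotonic dependencies on θ, Δ, k_1, and k_2 fits the qualitative description given in the embodiment.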
  • Step S103 Calculate the score of the line segment according to the length of each line segment and the contribution of the associated line segment of the line segment to the line segment.
  • the score is used to characterize the likelihood that the line segment is a lane line;
  • the score of the i-th line segment is denoted S_i, and S_i is calculated as follows:
  • L_i is the length of the i-th line segment;
  • CS_{i,N} is the contribution of the N-th associated line segment of the i-th line segment to the i-th line segment.
  • when step S103 is executed, specifically, for each line segment, the score of the line segment is calculated according to the length of the line segment, the maximum value among the left contributions of the line segment's left-associated line segments, and the maximum value among the right contributions of its right-associated line segments; the two maximum values can also be processed with different weighting coefficients before being used to calculate the score of the line segment.
  • the specific weighting coefficients are determined by the specific application scenario. In this solution, both weighting coefficients are 1, which means the two maxima contribute equally to the score of the line segment.
  • that is, the score of each line segment is the sum of the length of the line segment, the maximum value among the left contributions to the line segment, and the maximum value among the right contributions to the line segment.
  • accordingly, the score of the i-th line segment can be calculated by formula (9): S_i = L_i + max_j CS_left(i, j) + max_j CS_right(i, j).
  • for example, if the associated line segments of line segment 3 are determined to be line segment 1, line segment 2, line segment 4, and line segment 5, where line segment 1 and line segment 2 are located on the left side of line segment 3 and line segment 4 and line segment 5 are located on the right side of line segment 3, then when calculating the score of line segment 3 it is necessary to calculate the contributions of line segment 1 and line segment 2 to line segment 3 and the contributions of line segment 4 and line segment 5 to line segment 3, from which the score of line segment 3 is calculated.
  • Step S104 Determine the line segment with the highest score and the associated line segment with the highest score as the lane line.
  • after performing step S103, the scores of all line segments in the road image are obtained; then the line segment with the highest score among all line segments is determined by formula (10): i* = argmax_i S_i.
  • the line segment with the highest score determined by formula (10), together with the associated line segments of that line segment, constitutes the lane lines.
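The scoring and selection of steps S103 and S104 can be sketched as follows; the segment lengths and per-side contribution values are illustrative assumptions:

```python
def segment_score(length_i, left_cs, right_cs):
    """S_i = L_i + max(left contributions) + max(right contributions),
    using 0 when a side has no associated segment (both weighting
    coefficients are 1, as in the text)."""
    return length_i + max(left_cs, default=0.0) + max(right_cs, default=0.0)

# Toy example: three segments with assumed lengths and contributions.
segments = {
    1: dict(length=10.0, left=[], right=[4.0, 3.0]),
    2: dict(length=12.0, left=[5.0], right=[6.0]),
    3: dict(length=8.0, left=[1.0], right=[]),
}
scores = {i: segment_score(s["length"], s["left"], s["right"])
          for i, s in segments.items()}
best = max(scores, key=scores.get)   # formula (10): argmax over S_i
print(scores, best)  # segment 2 wins: 12 + 5 + 6 = 23
```

The winning segment and its associated segments are then reported together as the detected lane lines, which is what makes the result a mutually consistent combination rather than isolated detections.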
  • the lane line recognition method of the embodiments of the present invention comprehensively considers the length of each line segment in the road image and the contributions of its associated line segments when judging the likelihood that the line segment is a lane line, and optimizes the combination so that the detected lane lines match the actual road lines as closely as possible, thereby improving the robustness of lane line detection.
  • in this way, line segments that would otherwise be falsely detected as lane lines, such as arrows, sidewalks, pavement text, guardrails, and vehicle edges, can be filtered out, and an optimal combination is selected as the lane lines, reducing the lane line false detection rate.
  • the lane line detection device includes: a storage device 110 and a first processor 120.
  • the storage device 110 is used to store program instructions.
  • the first processor 120 invokes the program instructions stored in the storage device 110 and, when the program instructions are executed, is configured to: identify all line segments in the road image; for each line segment, determine the associated line segments of the line segment from the other line segments, and calculate the contribution of each associated line segment to the line segment, where the contribution characterizes the degree of influence of the associated line segment on the line segment being a lane line; calculate the score of the line segment according to the length of each line segment and the contributions of its associated line segments, where the score characterizes the likelihood that the line segment is a lane line; and determine the line segment with the highest score, together with its associated line segments, as lane lines.
  • the first processor 120 can implement the corresponding method as shown in the embodiments of FIG. 1 to FIG. 4 of the present invention.
  • the storage device 110 may include volatile memory, such as random-access memory (RAM); the storage device 110 may also include non-volatile memory, such as flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the storage device 110 may also include a combination of the aforementioned types of memory.
  • the first processor 120 may be a central processing unit (central processing unit, CPU).
  • the processor may further include a hardware chip.
  • the hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
  • the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
  • the vehicle may include a vehicle body (not shown), a camera 210 fixed on the vehicle body, and a second processor 220, where the camera 210 is electrically connected to the second processor 220.
  • the photographing device 210 of this embodiment is used to photograph the road image in front of the vehicle and send it to the second processor 220.
  • the second processor 220 is configured to: identify all line segments in the road image; for each line segment, determine the associated line segments of the line segment from the other line segments, and calculate the contribution of each associated line segment to the line segment, where the contribution characterizes the degree of influence of the associated line segment on the line segment being a lane line; calculate the score of the line segment according to the length of each line segment and the contributions of its associated line segments, where the score characterizes the likelihood that the line segment is a lane line; and determine the line segment with the highest score, together with its associated line segments, as lane lines.
  • the second processor 220 can implement the corresponding method as shown in the embodiments of FIG. 1 to FIG. 4 of the present invention.
  • the second processor 220 in this embodiment may be the vehicle's main controller, or may be another controller provided on the vehicle. Taking the second processor 220 as the main controller as an example: after the second processor 220 of this embodiment determines the lane lines in the above manner, it can control vehicle operation according to the determined lane lines to meet the needs of basic ADAS functions and autonomous driving functions.
  • the photographing device 210 may be a camera or an image sensor. Specifically, the type of the photographing device 210 may be selected according to needs.
  • an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the steps of the lane line recognition method of the foregoing embodiment.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM) or a random access memory (Random Access Memory, RAM), etc.

Abstract

A lane line recognition method and device, and a vehicle. The lane line recognition method includes: identifying all line segments in a road image; for each line segment, determining the associated line segments of the line segment from the other line segments, and calculating the contribution of each associated line segment to the line segment, where the contribution characterizes the degree of influence of the associated line segment on the line segment being a lane line; calculating the score of the line segment according to the length of each line segment and the contributions of its associated line segments, where the score characterizes the likelihood that the line segment is a lane line; and determining the line segment with the highest score, together with its associated line segments, as lane lines. The invention comprehensively considers the length of each line segment in the road image and the contributions of its associated line segments when judging the likelihood that the line segment is a lane line; through combinatorial optimization, the detected lane lines match the actual road lines as closely as possible, improving the robustness of lane line detection.

Description

车道线识别方法和装置、车辆 技术领域
本发明涉及图像处理技术领域,尤其涉及一种车道线识别方法和装置、车辆。
背景技术
相关技术中,通过特征提取、直线或曲线检测方法检测道路图像中的车道线。然而,在实际场景中,很多非车道线的东西也呈类似线的形状,通过上述算法会将道路图像中很多非车道线的东西,如道路两侧的护栏、路面标志物上的线状标志物(如箭头文字等)、路面上的车辆或行人边缘等,识别成类似线的形状,而将这些非车道线的东西识别成车道线,可见,基于上述算法识别到的车道线错误率较高。
发明内容
本发明提供一种车道线识别方法和装置、车辆。
具体地,本发明是通过如下技术方案实现的:
According to a first aspect of the present invention, a lane line recognition method is provided, the method comprising:
identifying all line segments in a road image;
for each segment, determining the segment's associated segments from among the other segments, and computing the contribution of the segment's associated segments to the segment, the contribution characterizing how strongly an associated segment influences the segment's status as a lane line;
computing each segment's score from the segment's length and the contributions of the segment's associated segments to it, the score characterizing how likely the segment is to be a lane line; and
determining the highest-scoring segment and the associated segments of the highest-scoring segment to be the lane lines.
According to a second aspect of the present invention, a lane line recognition device is provided, comprising:
a storage device for storing program instructions; and
a processor that calls the program instructions stored in the storage device and, when the program instructions are executed, is configured to:
identify all line segments in a road image;
for each segment, determine the segment's associated segments from among the other segments, and compute the contribution of the segment's associated segments to the segment, the contribution characterizing how strongly an associated segment influences the segment's status as a lane line;
compute each segment's score from the segment's length and the contributions of the segment's associated segments to it, the score characterizing how likely the segment is to be a lane line; and
determine the highest-scoring segment and the associated segments of the highest-scoring segment to be the lane lines.
According to a third aspect of the present invention, a vehicle is provided, comprising:
a body;
a photographing device fixed on the body; and
a processor electrically connected to the photographing device;
wherein the photographing device is configured to capture a road image ahead of the vehicle and send it to the processor, and the processor is configured to:
identify all line segments in the road image;
for each segment, determine the segment's associated segments from among the other segments, and compute the contribution of the segment's associated segments to the segment, the contribution characterizing how strongly an associated segment influences the segment's status as a lane line;
compute each segment's score from the segment's length and the contributions of the segment's associated segments to it, the score characterizing how likely the segment is to be a lane line; and
determine the highest-scoring segment and the associated segments of the highest-scoring segment to be the lane lines.
According to a fourth aspect of the present invention, a computer-readable storage medium is provided, on which a computer program is stored; when the program is executed by a processor, the following steps are implemented:
identifying all line segments in a road image;
for each segment, determining the segment's associated segments from among the other segments, and computing the contribution of the segment's associated segments to the segment, the contribution characterizing how strongly an associated segment influences the segment's status as a lane line;
computing each segment's score from the segment's length and the contributions of the segment's associated segments to it, the score characterizing how likely the segment is to be a lane line; and
determining the highest-scoring segment and the associated segments of the highest-scoring segment to be the lane lines.
As can be seen from the technical solutions provided by the above embodiments, the embodiments of the present invention weigh both the length of each segment in the road image and the contributions of that segment's associated segments when judging how likely the segment is to be a lane line, and through combinatorial optimization make the detected lane lines match the actual road markings as closely as possible, thereby improving the robustness of lane line detection.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a lane line recognition method in an embodiment of the present invention;
Fig. 2 is a flowchart of one specific implementation of the lane line recognition method shown in Fig. 1;
Fig. 3 is a flowchart of another specific implementation of the lane line recognition method shown in Fig. 1;
Fig. 4 is a flowchart of a specific implementation of the lane line recognition method shown in Fig. 3;
Fig. 5 is a structural block diagram of a lane line recognition device in an embodiment of the present invention;
Fig. 6 is a structural block diagram of a vehicle in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The lane line recognition method and device and the vehicle of the present invention are described in detail below with reference to the drawings. Where there is no conflict, the features of the following embodiments and implementations can be combined with one another.
Embodiment One
Fig. 1 is a flowchart of a lane line recognition method provided by Embodiment One of the present invention. Referring to Fig. 1, the lane line recognition method may include the following steps:
Step S101: identify all line segments in the road image.
In practice, lane lines may include both straight lines and curves, so both the straight segments and the curved segments in the road image are treated as suspected lane lines. By performing step S101, this embodiment can identify all straight segments and/or curved segments in the road image; that is, a segment in this embodiment may be a straight segment and/or a curved segment.
Different ways of identifying all segments in the road image can be chosen. In this embodiment, all segment regions in the road image are first segmented out, and all segments are then identified on the basis of a segment detection algorithm. The way of segmenting out the segment regions can be chosen as needed. For example, in some examples, all segment regions in the road image are segmented out on the basis of a CNN (Convolutional Neural Network). Optionally, all segment regions in the road image are obtained by CNN-based semantic segmentation. Optionally, a deep learning algorithm is used to train on a large number of road image samples to obtain a lane line model; the current road image is input into this lane line model to obtain all segment regions (straight-segment regions and/or curved-segment regions) in the current road image.
In other examples, all segment regions in the road image are segmented out on the basis of an edge detection algorithm. Specifically, the edges of all segments in the road image are detected by the edge detection algorithm, thereby segmenting out all segment regions.
The segment detection algorithm of this embodiment may be the Hough transform, or another segment detection algorithm; the type of segment detection algorithm can be chosen as needed. In this embodiment, on the basis of the segment detection algorithm, some parameter information of all segments in the road image can be identified, for example the lengths of all segments and the positional relationships among all segments (such as the angles and/or the spacings between them).
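As a rough sketch of the voting idea behind the Hough transform mentioned above (illustrative only; a production system would typically call a library routine such as OpenCV's HoughLinesP rather than the toy accumulator below):

```python
import math

def hough_peak(points, thetas_deg=range(0, 180), rho_res=1.0):
    """Vote every point into (rho, theta) bins, with rho = x*cos(theta) +
    y*sin(theta), and return the most-voted line."""
    acc = {}
    for x, y in points:
        for t in thetas_deg:
            th = math.radians(t)
            rho = x * math.cos(th) + y * math.sin(th)
            key = (round(rho / rho_res), t)
            acc[key] = acc.get(key, 0) + 1
    (rho_bin, theta), votes = max(acc.items(), key=lambda kv: kv[1])
    return rho_bin * rho_res, theta, votes

# Edge pixels lying on the line y = x all vote for rho = 0 at theta = 135 deg.
pts = [(i, i) for i in range(100)]
rho, theta, votes = hough_peak(pts)
```

On real images the `points` would come from the segmented line regions, and multiple accumulator peaks would be thresholded to yield the segments together with their lengths, mutual angles and spacings.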
The lane line recognition method of this embodiment can be applied to vehicles, in particular driverless vehicles. The road image may be an image of the road ahead of the vehicle captured by a photographing device on the vehicle, generally a front view. In a front view, lane markings such as road-surface arrows and lane lines may appear distorted, the distortion depending on the vehicle's position; the farther a segment is from the vehicle in the front view, the harder it is to identify; and the same kind of lane marking is rendered inconsistently across the front view, making accurate recognition difficult. To improve the accuracy of lane line recognition, this embodiment performs image rectification on the road image before identifying all segments in it. The rectification method can be chosen as needed; in one embodiment, the road image is projected onto the corresponding top view based on inverse perspective mapping. Projecting the road image onto a top view restores road markings such as lane lines and arrows to their true scale and properties, so they are easier to identify in the top view; moreover, the position of each road-surface pixel in the top view corresponds directly to a real-world position, so the positional relationship between any pixel and the vehicle can be obtained directly from the top view, which satisfies the requirements of basic ADAS functions and automated driving functions.
Specifically, projecting the road image onto the corresponding top view based on inverse perspective mapping may comprise the following steps:
(1) Calibrate the intrinsic parameters of the photographing device and its extrinsic parameters relative to the ground.
The intrinsic parameter matrix of the photographing device is
M = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]]
where f_x and f_y characterize the focal length of the photographing device, and c_x, c_y characterize where the optical axis of the lens passes through the imaging sensor. The intrinsic parameters can be calibrated with existing calibration algorithms, which are not detailed here.
The ground extrinsic parameters of the photographing device comprise a rotation matrix R and a translation vector T, namely the rotation and translation of the photographing device relative to the object plane; in this embodiment, the object plane is the plane in which the lane lines lie. T can be converted from the height of the photographing device above the ground. R is calibrated indirectly through the pitch angle of the photographing device relative to the ground (the ground at the moment the current road image was captured), its roll angle, and its yaw angle relative to the direction straight ahead of the vehicle. Pitch, roll and yaw are the rotation angles of the photographing device about its own x, y and z axes, denoted ψ, θ and φ respectively. From the three angles, the rotation matrices corresponding to the three axes, R_x(ψ), R_y(θ) and R_z(φ), can be computed, and R is then computed from these three per-axis rotation matrices.
In this embodiment,
R_x(ψ) = [[1, 0, 0], [0, cos ψ, -sin ψ], [0, sin ψ, cos ψ]]
R_y(θ) = [[cos θ, 0, sin θ], [0, 1, 0], [-sin θ, 0, cos θ]]
R_z(φ) = [[cos φ, -sin φ, 0], [sin φ, cos φ, 0], [0, 0, 1]]
R = R_z(φ) R_y(θ) R_x(ψ)
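The per-axis rotations of step (1) can be sketched as follows. The composition order R = Rz·Ry·Rx is one common convention, assumed here because the patent gives the composition formula only as an image:

```python
import math

def rot_x(psi):    # pitch: rotation about the x axis
    c, s = math.cos(psi), math.sin(psi)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(theta):  # rotation about the y axis
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(phi):    # yaw: rotation about the z axis
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation(psi, theta, phi):
    # Assumed composition: R = Rz(phi) @ Ry(theta) @ Rx(psi)
    return matmul(rot_z(phi), matmul(rot_y(theta), rot_x(psi)))

R = rotation(0.0, 0.0, math.pi / 2)  # a pure 90-degree yaw
```

With zero pitch and roll, R reduces to the plain yaw matrix, which is an easy sanity check on the composition.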
(2) Compute the projection matrix H that maps pixels of the road image onto the top view.
Mapping a point in the object-plane coordinate system onto the image coordinate system can be expressed as:
s [u, v, 1]^T = M [r_1 r_2 r_3 t] [X, Y, Z, 1]^T   (1)
where u, v are pixel coordinates in the road-image coordinate system;
s is a normalization coefficient;
M is the intrinsic parameter matrix of the photographing device;
[r_1 r_2 r_3 t] are the extrinsic parameters of the photographing device relative to the object plane, i.e. their positional relationship;
r_1, r_2, r_3 are 3-by-1 column vectors, and together they form the rotation matrix R;
t is a 3-by-1 column vector expressing the translation from the photographing device to the object plane;
X, Y denote coordinates in the object plane.
Assuming the object lies in a plane, Z is zero, and formula (1) can be expressed as:
s [u, v, 1]^T = M [r_1 r_2 t] [X, Y, 1]^T   (2)
H = s M [r_1 r_2 t]   (3)
(3) Project the road image onto the top view according to the projection matrix H.
Substituting formula (3) into formula (2) gives:
[u, v, 1]^T = H [X, Y, 1]^T   (4)
If the object-plane coordinates X, Y are taken as ground coordinates, formula (4) becomes the mapping from the ground to the road image; multiplying both sides by the inverse of H gives:
[X, Y, 1]^T = H^(-1) [u, v, 1]^T   (5)
Substituting the coordinates of each pixel of the road image into formula (5) yields the coordinates of that pixel in the top view.
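A minimal sketch of step (3), applying formula (5) to map a pixel back to ground coordinates. The homography values below are illustrative placeholders, not a calibrated H:

```python
def inv3(H):
    """Invert a 3x3 matrix via the adjugate (sufficient for a homography)."""
    (a, b, c), (d, e, f), (g, h, i) = H
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]

def pixel_to_ground(H, u, v):
    """Formula (5): [X, Y, 1]^T ~ H^-1 [u, v, 1]^T, then normalize by w."""
    Hi = inv3(H)
    x = Hi[0][0] * u + Hi[0][1] * v + Hi[0][2]
    y = Hi[1][0] * u + Hi[1][1] * v + Hi[1][2]
    w = Hi[2][0] * u + Hi[2][1] * v + Hi[2][2]
    return x / w, y / w

# Toy homography: ground -> image scales by 2 and shifts by (10, 20).
H = [[2.0, 0.0, 10.0],
     [0.0, 2.0, 20.0],
     [0.0, 0.0, 1.0]]
X, Y = pixel_to_ground(H, 14.0, 26.0)  # the image point of ground point (2, 3)
```

In practice every pixel (or every detected segment endpoint) would be pushed through the same mapping to build the top view.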
Projecting object-plane points to the top view with the above inverse perspective mapping handles only points lying in the object plane with good accuracy; points off the plane acquire projection errors. For example, the bars of a guardrail are close to the roadside lane line in a true top-down view, but because off-plane guardrail points are projected inaccurately by the above mapping, they land at some distance from the roadside lane line in the top view, which can cause the guardrail to be falsely detected as a lane line. This embodiment first projects the road image onto the top view with the above inverse perspective mapping and then performs lane line detection on the top view; the subsequent prior-condition-based lane line detection can then more easily distinguish the true lane lines in the top view from false detections.
In this embodiment, all segments in the top view are identified; specifically, all segment regions in the top view are first segmented out, and all segments are then identified on the basis of the segment detection algorithm.
Step S102: for each segment, determine the segment's associated segments from among the other segments, and compute the contribution of the segment's associated segments to the segment; the contribution characterizes how strongly an associated segment influences the segment's status as a lane line.
Specifically, referring to Fig. 2, the segment's associated segments are determined from the other segments according to the positional relationship between the segment and the other segments and preset lane line prior conditions. Optionally, the positional relationship between the segment and the other segments includes the angle between them, or/and the spacing between them; of course, the positional relationship can also be set according to other positional relationships between real lane lines. In one embodiment, the positional relationship includes the angle between the segment and the other segments. In another embodiment, it includes the spacing between the segment and the other segments. In yet another embodiment, it includes both the angle and the spacing.
This embodiment is further described taking the case where the positional relationship between the segment and the other segments includes both the angle between them and the spacing between them.
Lane lines on an actual road are as parallel as possible, and the spacing between adjacent lane lines lies roughly in the range of 2.5 m to 4.2 m. Because of the shooting angle and other factors, however, there is usually a small angle between lane lines in the road image: the smaller the angle between two segments in the image, the closer the two segments are to being parallel on the actual road, and the more likely the two segments are lane lines. The preset lane line prior conditions of this embodiment therefore include that the angle between lane lines lies within a preset angle range, the size of which can be set according to the condition that lane lines should be as parallel as possible.
Further, the preset lane line prior conditions also include that the spacing between lane lines is an integer multiple of a preset spacing. The preset spacing is usually a value or value range obtained by proportionally scaling down the spacing between adjacent lane lines. Once the preset spacing is determined, the closer the ratio of the spacing between two image segments to the preset spacing is to an integer, the more likely the two segments are lane lines. In this embodiment, the preset spacing can be set according to the condition that the spacing between adjacent lane lines lies roughly in the range of 2.5 m to 4.2 m.
Through the preset lane line prior conditions, the identified segments are screened preliminarily, filtering out segments that are clearly not lane lines, for example arrows, vehicle edges or guardrail segments whose spacing from normal lane lines does not match the true spacing, and segments that are not parallel to normal lane lines.
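The two prior conditions, near-parallelism and spacing close to an integer multiple of the preset spacing, can be sketched as a pair-screening predicate. The thresholds `max_angle_deg` and `int_tol` are illustrative assumptions, not values from the patent:

```python
def is_associated(angle_deg, spacing, preset_spacing,
                  max_angle_deg=5.0, int_tol=0.2):
    """Keep a candidate pair only if (a) the angle between the two segments
    lies within the preset angle range and (b) their spacing is close to an
    integer multiple (>= 1) of the preset spacing."""
    if abs(angle_deg) > max_angle_deg:
        return False
    ratio = spacing / preset_spacing
    return abs(ratio - round(ratio)) <= int_tol and round(ratio) >= 1
```

For instance, a near-parallel segment exactly one lane-width away passes the screen, while a guardrail-like segment at an off-grid spacing, or an arrow edge at a steep angle, is filtered out before any score is computed.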
In this step, if the current segment is a straight segment, the determined associated segments of the current segment are straight segments; if the current segment is a curved segment, the determined associated segments are curved segments.
During processing, for a straight segment, the associated straight segments can be determined directly from the other straight segments according to the positional relationships between the straight segment and the other straight segments and the preset lane line prior conditions. A curved segment is usually split into multiple pieces for processing, each piece approximating a straight segment; the associated curved segments of each piece are then determined, the procedure being similar to that for straight segments and not repeated here. A straight segment may likewise be split into multiple pieces for processing.
Referring to Fig. 3, after each segment's associated segments have been determined, for each segment, the contribution of the segment's associated segments to the segment is computed according to the positional relationship between the segment and its associated segments, the lengths of the associated segments, and the length of the segment. Here, the positional relationship between the segment and its associated segments includes the angle between them and the spacing between them.
The specific steps of computing, for each segment, the contribution of the segment's associated segments to the segment according to the positional relationship between the segment and its associated segments, the lengths of the associated segments, and the length of the segment can be seen in Fig. 4:
Step S401: for each associated segment of each segment, compute the associated segment's score according to the associated segment's length and the contributions of the segment's other associated segments to the associated segment.
Step S402: for each segment, compute each associated segment's contribution to the segment according to the associated segment's score, the angle between the associated segment and the segment, the ratio of their spacing to the preset spacing, and the segment's length.
In one feasible implementation, when performing step S401, the left associated segments located on the left of the segment and the right associated segments located on the right of the segment are first determined according to the positional relationship between the segment and its associated segments. Then, for each left associated segment of each segment, the left associated segment's score is computed according to the left associated segment's length and the contributions of the left associated segment's other left associated segments to it; and, for each right associated segment of each segment, the right associated segment's score is computed according to the right associated segment's length and the contributions of the right associated segment's other right associated segments to it. The two score computations may run concurrently or in sequence.
It should be noted that, in the embodiments of the present invention, the other left associated segments of a left associated segment are the associated segments located to its left, and the other right associated segments of a right associated segment are the associated segments located to its right. For example, for the associated segments identified in the road image, segment 1 through segment 5 arranged from left to right, take segment 3: its left associated segments include segments 1 and 2, and its right associated segments include segments 4 and 5; segment 1 has no left associated segment, segment 2's left associated segment includes segment 1, segment 4's right associated segment includes segment 5, and segment 5 has no right associated segment. When performing step S402, for each segment, each left associated segment's left contribution to the segment is computed according to that left associated segment's score, its angle with the segment, the ratio of its spacing from the segment to the preset spacing, and the segment's length; and each right associated segment's right contribution to the segment is computed according to that right associated segment's score, its angle with the segment, the ratio of its spacing from the segment to the preset spacing, and the segment's length.
Specifically, for the i-th segment, the left contribution CS^l_{i,j} of the j-th left associated segment to the i-th segment is computed by formula (6), and the right contribution CS^r_{i,j} of the j-th right associated segment to the i-th segment is computed by formula (7). [Formulas (6) and (7) are given as images in the original and cannot be recovered here.]
In these formulas, i and j are positive integers, i ∈ (1, n), j ∈ (1, n), n being the number of segments;
L_i is the length of the i-th segment;
k_1 is a first preset coefficient, with k_1 > 0; in this embodiment the value of k_1 can be set as needed, and the larger k_1 is set, the more the contribution is affected by α;
α is the angle between the i-th segment and the j-th left segment or j-th right segment;
δ is the ratio of the spacing between the i-th segment and the j-th left segment or j-th right segment to the preset spacing;
k_2 is a second preset coefficient, with 0 < k_2 < 1; in this embodiment the value of k_2 can be set as needed, and the smaller k_2 is set, the more the contribution is affected by δ;
S^l_j is the score of the j-th left associated segment;
S^r_j is the score of the j-th right associated segment.
It should be noted that when no associated segment lies to the left of the j-th left associated segment, S^l_j is determined by the segment's length alone; when no associated segment lies to the right of the j-th right segment, S^r_j is determined by the segment's length alone.
It is understood that the way of computing CS^l_{i,j} is not limited to formula (6) listed in this embodiment, and the way of computing CS^r_{i,j} is not limited to formula (7) listed in this embodiment.
Step S103: compute each segment's score from the segment's length and the contributions of the segment's associated segments to it; the score characterizes how likely the segment is to be a lane line.
In this embodiment, the score of the i-th segment is S_i, computed as:
S_i = L_i + CS_{i,N}   (8)
In formula (8), L_i is the length of the i-th segment;
CS_{i,N} is the contribution of the N-th associated segment of the i-th segment to the i-th segment.
It is understood that the way of computing the score S_i of the i-th segment is not limited to the above formula (8).
To simplify the computation, step S103 is performed, for each segment, by computing the segment's score from the segment's length, the maximum max_j CS^l_{i,j} among the left contributions of the segment's left associated segments to the segment, and the maximum max_j CS^r_{i,j} among the right contributions of the segment's right associated segments to the segment. The two maxima may also be processed with different weighting coefficients before being used to compute the score, the specific weights depending on the application scenario; in this solution both weights are 1, indicating that the two contribute equally to the segment. Optionally, each segment's score is the sum of the segment's length, the maximum among the left contributions of its left associated segments, and the maximum among the right contributions of its right associated segments.
The above formula (8) then simplifies to:
S_i = L_i + max_j CS^l_{i,j} + max_j CS^r_{i,j}   (9)
where the computation of CS^l_{i,j} and CS^r_{i,j} is described in step S102 above and is not repeated here.
The score of the i-th segment can then be computed with formula (9).
For example, for segment 3 identified in the road image, the associated segments of segment 3 are determined to be segments 1, 2, 4 and 5, with segments 1 and 2 located to the left of segment 3 and segments 4 and 5 located to its right. To compute segment 3's score, the contributions of segments 1 and 2 to segment 3 and of segments 4 and 5 to segment 3 must each be computed.
When computing segment 1's contribution to segment 3: since no associated segment lies to the left of segment 1, segment 1's score is determined from its length alone; segment 1's contribution to segment 3 is then determined from segment 1's score, the positional relationship between segments 1 and 3 (the angle between them and the ratio of their spacing to the preset spacing), and segment 3's length.
When computing segment 2's contribution to segment 3: segment 2's score is first determined from segment 2's length and segment 1's contribution to segment 2; segment 2's contribution to segment 3 is then determined from segment 2's score, the positional relationship between segments 2 and 3 (the angle between them and the ratio of their spacing to the preset spacing), and segment 3's length.
When computing segment 4's contribution to segment 3: segment 4's score is first determined from segment 4's length and segment 5's contribution to segment 4; segment 4's contribution to segment 3 is then determined from segment 4's score, the positional relationship between segments 4 and 3 (the angle between them and the ratio of their spacing to the preset spacing), and segment 3's length.
When computing segment 5's contribution to segment 3: since no associated segment lies to the right of segment 5, segment 5's score is determined from its length alone; segment 5's contribution to segment 3 is then determined from segment 5's score, the positional relationship between segments 5 and 3 (the angle between them and the ratio of their spacing to the preset spacing), and segment 3's length.
After the contributions of segments 1, 2, 4 and 5 to segment 3 have been computed, segment 3's score is computed from segment 3's length, the larger of segment 1's and segment 2's contributions to segment 3, and the larger of segment 4's and segment 5's contributions to segment 3.
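The worked example above can be sketched end to end. The contribution formula used below (exponential decay in the angle, k2 raised to the distance of the spacing ratio from an integer) is an illustrative stand-in consistent with the stated roles of k1, k2, α and δ, since formulas (6) and (7) are given only as images:

```python
import math

# Illustrative stand-in for formulas (6)/(7): a neighbour's score, discounted
# by the angle between the segments (k1 > 0) and by how far the spacing ratio
# is from an integer (0 < k2 < 1), matching the patent's parameter constraints.
K1, K2, PRESET = 0.5, 0.5, 3.5

def contribution(neighbour_score, angle, spacing):
    ratio = spacing / PRESET
    return neighbour_score * math.exp(-K1 * angle) * K2 ** abs(ratio - round(ratio))

def score(segs, i):
    """Score of segment i = its length + best left contribution + best right
    contribution. segs is ordered left to right as (lateral offset, length);
    all segments are taken as parallel (angle 0) for simplicity."""
    pos, length = segs[i]

    def side_score(j, step):
        # Score of segment j built from neighbours further to one side only,
        # mirroring how left/right associated-segment scores accumulate.
        p, l = segs[j]
        best = 0.0
        k = j + step
        while 0 <= k < len(segs):
            best = max(best, contribution(side_score(k, step), 0.0,
                                          abs(segs[k][0] - p)))
            k += step
        return l + best

    left = max((contribution(side_score(j, -1), 0.0, abs(segs[j][0] - pos))
                for j in range(i)), default=0.0)
    right = max((contribution(side_score(j, +1), 0.0, abs(segs[j][0] - pos))
                 for j in range(i + 1, len(segs))), default=0.0)
    return length + left + right

# Five parallel segments spaced exactly one preset spacing apart.
segs = [(0.0, 10.0), (3.5, 8.0), (7.0, 12.0), (10.5, 9.0), (14.0, 7.0)]
scores = [score(segs, i) for i in range(len(segs))]
```

On this perfectly regular toy grid every segment accumulates the whole chain, so all five scores come out equal to the total length of the family; on real data the angle and spacing penalties break such ties and single out the true lane line combination.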
Step S104: determine the highest-scoring segment and the associated segments of the highest-scoring segment to be the lane lines.
After step S103 is completed, the scores of all segments in the road image are available; the highest-scoring segment among all segments is then determined:
i* = argmax_i S_i   (10)
The highest-scoring segment determined by formula (10), together with the associated segments of that highest-scoring segment, constitutes the lane lines.
The lane line recognition method of the embodiments of the present invention weighs both the length of each segment in the road image and the contributions of the segment's associated segments when judging how likely the segment is to be a lane line, and through combinatorial optimization makes the detected lane lines match the actual road markings as closely as possible, thereby improving the robustness of lane line detection.
On the basis of segment length, the angles between segments, and the ratio of inter-segment spacing to the preset spacing, segments falsely detected as lane lines, such as arrows, pedestrian crossings, road-surface text, guardrails and vehicle edges, can be filtered out and an optimal combination selected as the lane lines, reducing the false-detection rate.
Embodiment Two
Fig. 5 is a structural block diagram of a lane line recognition device provided by Embodiment Two of the present invention. Referring to Fig. 5, the lane line recognition device includes a storage device 110 and a first processor 120.
The storage device 110 stores program instructions. The first processor 120 calls the program instructions stored in the storage device 110 and, when the program instructions are executed, is configured to: identify all line segments in the road image; for each segment, determine the segment's associated segments from among the other segments, and compute the contribution of the segment's associated segments to the segment, the contribution characterizing how strongly an associated segment influences the segment's status as a lane line; compute each segment's score from the segment's length and the contributions of the segment's associated segments to it, the score characterizing how likely the segment is to be a lane line; and determine the highest-scoring segment and the associated segments of the highest-scoring segment to be the lane lines.
The first processor 120 can implement the corresponding methods shown in Figs. 1 to 4 of the present invention; the lane line recognition device of this embodiment is described with reference to the lane line recognition method of Embodiment One above and is not repeated here.
In this embodiment, the storage device 110 may include volatile memory, for example random-access memory (RAM); it may also include non-volatile memory, for example flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); it may also include a combination of the above kinds of memory.
The first processor 120 may be a central processing unit (CPU). The processor may further include a hardware chip, which may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD) or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL) or any combination thereof.
Embodiment Three
Fig. 6 is a structural block diagram of a vehicle provided by Embodiment Three of the present invention. Referring to Fig. 6, the vehicle may include a body (not shown), a photographing device 210 fixed on the body, and a second processor 220, the photographing device 210 being electrically connected to the second processor 220.
The photographing device 210 of this embodiment captures road images ahead of the vehicle and sends them to the second processor 220. The second processor 220 is configured to: identify all line segments in the road image; for each segment, determine the segment's associated segments from among the other segments, and compute the contribution of the segment's associated segments to the segment, the contribution characterizing how strongly an associated segment influences the segment's status as a lane line; compute each segment's score from the segment's length and the contributions of the segment's associated segments to it, the score characterizing how likely the segment is to be a lane line; and determine the highest-scoring segment and the associated segments of the highest-scoring segment to be the lane lines.
The second processor 220 can implement the corresponding methods shown in Figs. 1 to 4 of the present invention; this embodiment is described with reference to the lane line recognition method of Embodiment One above and is not repeated here.
The second processor 220 of this embodiment may be the vehicle's main controller, or may be another controller provided on the vehicle. Taking the second processor 220 as the main controller as an example: after the second processor 220 of this embodiment determines the lane lines in the above manner, it can control the vehicle's operation according to the determined lane lines, meeting the requirements of basic ADAS functions and automated driving functions.
The photographing device 210 may be a camera or an image sensor; the type of the photographing device 210 can be selected as needed.
Embodiment Four
In addition, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the lane line recognition method of the above embodiments are implemented.
Those of ordinary skill in the art can understand that all or part of the flows of the above method embodiments can be completed by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, may include the flows of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
What is disclosed above covers only some embodiments of the present invention and certainly cannot be used to limit the scope of its rights; equivalent changes made according to the claims of the present invention therefore still fall within the scope covered by the present invention.

Claims (58)

  1. A lane line recognition method, characterized in that the method comprises:
    identifying all line segments in a road image;
    for each segment, determining the segment's associated segments from among the other segments, and computing the contribution of the segment's associated segments to the segment, the contribution characterizing how strongly an associated segment influences the segment's status as a lane line;
    computing each segment's score from the segment's length and the contributions of the segment's associated segments to it, the score characterizing how likely the segment is to be a lane line; and
    determining the highest-scoring segment and the associated segments of the highest-scoring segment to be the lane lines.
  2. The method according to claim 1, characterized in that the segments comprise:
    straight segments and/or curved segments.
  3. The method according to claim 1, characterized in that determining, for each segment, the segment's associated segments from among the other segments comprises:
    determining the segment's associated segments from the other segments according to the positional relationship between the segment and the other segments and preset lane line prior conditions.
  4. The method according to claim 3, characterized in that the positional relationship between the segment and the other segments comprises:
    the angle between the segment and the other segments, and/or the spacing between the segment and the other segments.
  5. The method according to claim 4, characterized in that the preset lane line prior conditions comprise:
    the angle between lane lines lying within a preset angle range, and/or the spacing between lane lines being an integer multiple of a preset spacing.
  6. The method according to claim 1, characterized in that computing, for each segment, the contribution of the segment's associated segments to the segment comprises:
    for each segment, computing the contribution of the segment's associated segments to the segment according to the positional relationship between the segment and its associated segments, the lengths of the associated segments, and the length of the segment.
  7. The method according to claim 6, characterized in that the positional relationship between the segment and the other segments comprises:
    the angle between the segment and the other segments, and/or the spacing between the segment and the other segments.
  8. The method according to claim 6, characterized in that computing, for each segment, the contribution of the segment's associated segments to the segment according to the positional relationship between the segment and its associated segments, the lengths of the associated segments, and the length of the segment comprises:
    for each associated segment of each segment, computing the associated segment's score according to the associated segment's length and the contributions of the segment's other associated segments to the associated segment; and
    for each segment, computing each associated segment's contribution to the segment according to the associated segment's score, the positional relationship between the associated segment and the segment, and the segment's length.
  9. The method according to claim 8, characterized in that computing, for each associated segment of each segment, the associated segment's score according to the associated segment's length and the contributions of the segment's other associated segments to the associated segment comprises:
    determining, according to the positional relationship between the segment and its associated segments, the left associated segments located on the left of the segment and the right associated segments located on the right of the segment;
    for each left associated segment of each segment, computing the left associated segment's score according to the left associated segment's length and the contributions of the left associated segment's other left associated segments to it; and
    for each right associated segment of each segment, computing the right associated segment's score according to the right associated segment's length and the contributions of the right associated segment's other right associated segments to it.
  10. The method according to claim 9, characterized in that computing, for each segment, each associated segment's contribution to the segment according to the associated segment's score, the positional relationship between the associated segment and the segment, and the segment's length comprises:
    for each segment, computing each left associated segment's left contribution to the segment according to the left associated segment's score, the positional relationship between the left associated segment and the segment, and the segment's length; and
    for each segment, computing each right associated segment's right contribution to the segment according to the right associated segment's score, the positional relationship between the right associated segment and the segment, and the segment's length.
  11. The method according to claim 10, characterized in that, for the i-th segment, the left contribution CS^l_{i,j} of the j-th left associated segment to the i-th segment is computed by formula (6), and the right contribution CS^r_{i,j} of the j-th right associated segment to the i-th segment is computed by formula (7), the two formulas being given as images in the original and not recoverable here;
    where i and j are positive integers;
    L_i is the length of the i-th segment;
    k_1 is a first preset coefficient, with k_1 > 0;
    α is the angle between the i-th segment and the j-th left segment or j-th right segment;
    δ is the ratio of the spacing between the i-th segment and the j-th left segment or j-th right segment to the preset spacing;
    k_2 is a second preset coefficient, with 0 < k_2 < 1;
    S^l_j is the score of the j-th left associated segment;
    S^r_j is the score of the j-th right associated segment.
  12. The method according to claim 10, characterized in that computing each segment's score from the segment's length and the contributions of the segment's associated segments to it comprises:
    for each segment, computing the segment's score according to the segment's length, the maximum among the left contributions of the segment's left associated segments to the segment, and the maximum among the right contributions of the segment's right associated segments to the segment.
  13. The method according to claim 12, characterized in that each segment's score is the sum of the segment's length, the maximum among the left contributions of the segment's left associated segments to the segment, and the maximum among the right contributions of the segment's right associated segments to the segment.
  14. The method according to claim 1, characterized in that identifying all line segments in the road image comprises:
    segmenting out all segment regions in the road image; and
    identifying all segments based on a segment detection algorithm.
  15. The method according to claim 14, characterized in that segmenting out all segment regions in the road image comprises:
    segmenting out all segment regions in the road image based on a CNN; or
    segmenting out all segment regions in the road image based on an edge detection algorithm.
  16. The method according to claim 14, characterized in that identifying all segments based on the segment detection algorithm comprises:
    identifying, based on the segment detection algorithm, the lengths of all segments, the angles between all segments and/or the spacings between all segments.
  17. The method according to claim 14 or 16, characterized in that the segment detection algorithm is the Hough transform.
  18. The method according to claim 1 or 14, characterized in that, before identifying all line segments in the road image, the method further comprises:
    performing image rectification on the road image.
  19. The method according to claim 18, characterized in that performing image rectification on the road image comprises:
    projecting the road image onto the corresponding top view based on inverse perspective mapping.
  20. A lane line recognition device, characterized by comprising:
    a storage device for storing program instructions; and
    a processor that calls the program instructions stored in the storage device and, when the program instructions are executed, is configured to:
    identify all line segments in a road image;
    for each segment, determine the segment's associated segments from among the other segments, and compute the contribution of the segment's associated segments to the segment, the contribution characterizing how strongly an associated segment influences the segment's status as a lane line;
    compute each segment's score from the segment's length and the contributions of the segment's associated segments to it, the score characterizing how likely the segment is to be a lane line; and
    determine the highest-scoring segment and the associated segments of the highest-scoring segment to be the lane lines.
  21. The device according to claim 20, characterized in that the segments comprise:
    straight segments and/or curved segments.
  22. The device according to claim 20, characterized in that the processor is specifically configured to:
    determine the segment's associated segments from the other segments according to the positional relationship between the segment and the other segments and preset lane line prior conditions.
  23. The device according to claim 22, characterized in that the positional relationship between the segment and the other segments comprises:
    the angle between the segment and the other segments, and/or the spacing between the segment and the other segments.
  24. The device according to claim 23, characterized in that the preset lane line prior conditions comprise:
    the angle between lane lines lying within a preset angle range, and/or the spacing between lane lines being an integer multiple of a preset spacing.
  25. The device according to claim 20, characterized in that the processor is specifically configured to:
    for each segment, compute the contribution of the segment's associated segments to the segment according to the positional relationship between the segment and its associated segments, the lengths of the associated segments, and the length of the segment.
  26. The device according to claim 25, characterized in that the positional relationship between the segment and the other segments comprises:
    the angle between the segment and the other segments, and/or the spacing between the segment and the other segments.
  27. The device according to claim 25, characterized in that the processor is specifically configured to:
    for each associated segment of each segment, compute the associated segment's score according to the associated segment's length and the contributions of the segment's other associated segments to the associated segment; and
    for each segment, compute each associated segment's contribution to the segment according to the associated segment's score, the positional relationship between the associated segment and the segment, and the segment's length.
  28. The device according to claim 27, characterized in that the processor is specifically configured to:
    determine, according to the positional relationship between the segment and its associated segments, the left associated segments located on the left of the segment and the right associated segments located on the right of the segment;
    for each left associated segment of each segment, compute the left associated segment's score according to the left associated segment's length and the contributions of the left associated segment's other left associated segments to it; and
    for each right associated segment of each segment, compute the right associated segment's score according to the right associated segment's length and the contributions of the right associated segment's other right associated segments to it.
  29. The device according to claim 28, characterized in that the processor is specifically configured to:
    for each segment, compute each left associated segment's left contribution to the segment according to the left associated segment's score, the positional relationship between the left associated segment and the segment, and the segment's length; and
    for each segment, compute each right associated segment's right contribution to the segment according to the right associated segment's score, the positional relationship between the right associated segment and the segment, and the segment's length.
  30. The device according to claim 29, characterized in that, for the i-th segment, the left contribution CS^l_{i,j} of the j-th left associated segment to the i-th segment is computed by formula (6), and the right contribution CS^r_{i,j} of the j-th right associated segment to the i-th segment is computed by formula (7), the two formulas being given as images in the original and not recoverable here;
    where i and j are positive integers;
    L_i is the length of the i-th segment;
    k_1 is a first preset coefficient, with k_1 > 0;
    α is the angle between the i-th segment and the j-th left segment or j-th right segment;
    δ is the ratio of the spacing between the i-th segment and the j-th left segment or j-th right segment to the preset spacing;
    k_2 is a second preset coefficient, with 0 < k_2 < 1;
    S^l_j is the score of the j-th left associated segment;
    S^r_j is the score of the j-th right associated segment.
  31. The device according to claim 29, characterized in that the processor is specifically configured to:
    for each segment, compute the segment's score according to the segment's length, the maximum among the left contributions of the segment's left associated segments to the segment, and the maximum among the right contributions of the segment's right associated segments to the segment.
  32. The device according to claim 31, characterized in that each segment's score is the sum of the segment's length, the maximum among the left contributions of the segment's left associated segments to the segment, and the maximum among the right contributions of the segment's right associated segments to the segment.
  33. The device according to claim 20, characterized in that the processor is specifically configured to:
    segment out all segment regions in the road image; and
    identify all segments based on a segment detection algorithm.
  34. The device according to claim 33, characterized in that the processor is specifically configured to:
    segment out all segment regions in the road image based on a CNN; or
    segment out all segment regions in the road image based on an edge detection algorithm.
  35. The device according to claim 33, characterized in that the processor is specifically configured to:
    identify, based on the segment detection algorithm, the lengths of all segments, the angles between all segments and/or the spacings between all segments.
  36. The device according to claim 33 or 35, characterized in that the segment detection algorithm is the Hough transform.
  37. The device according to claim 20 or 33, characterized in that, before identifying all line segments in the road image, the processor is further configured to:
    perform image rectification on the road image.
  38. The device according to claim 37, characterized in that the processor is specifically configured to:
    project the road image onto the corresponding top view based on inverse perspective mapping.
  39. A vehicle, characterized by comprising:
    a body;
    a photographing device fixed on the body; and
    a processor electrically connected to the photographing device;
    wherein the photographing device is configured to capture a road image ahead of the vehicle and send it to the processor, and the processor is configured to:
    identify all line segments in the road image;
    for each segment, determine the segment's associated segments from among the other segments, and compute the contribution of the segment's associated segments to the segment, the contribution characterizing how strongly an associated segment influences the segment's status as a lane line;
    compute each segment's score from the segment's length and the contributions of the segment's associated segments to it, the score characterizing how likely the segment is to be a lane line; and
    determine the highest-scoring segment and the associated segments of the highest-scoring segment to be the lane lines.
  40. The vehicle according to claim 39, characterized in that the segments comprise:
    straight segments and/or curved segments.
  41. The vehicle according to claim 39, characterized in that the processor is specifically configured to:
    determine the segment's associated segments from the other segments according to the positional relationship between the segment and the other segments and preset lane line prior conditions.
  42. The vehicle according to claim 41, characterized in that the positional relationship between the segment and the other segments comprises:
    the angle between the segment and the other segments, and/or the spacing between the segment and the other segments.
  43. The vehicle according to claim 42, characterized in that the preset lane line prior conditions comprise:
    the angle between lane lines lying within a preset angle range, and/or the spacing between lane lines being an integer multiple of a preset spacing.
  44. The vehicle according to claim 39, characterized in that the processor is specifically configured to:
    for each segment, compute the contribution of the segment's associated segments to the segment according to the positional relationship between the segment and its associated segments, the lengths of the associated segments, and the length of the segment.
  45. The vehicle according to claim 44, characterized in that the positional relationship between the segment and the other segments comprises:
    the angle between the segment and the other segments, and/or the spacing between the segment and the other segments.
  46. The vehicle according to claim 44, characterized in that the processor is specifically configured to:
    for each associated segment of each segment, compute the associated segment's score according to the associated segment's length and the contributions of the segment's other associated segments to the associated segment; and
    for each segment, compute each associated segment's contribution to the segment according to the associated segment's score, the positional relationship between the associated segment and the segment, and the segment's length.
  47. The vehicle according to claim 46, characterized in that the processor is specifically configured to:
    determine, according to the positional relationship between the segment and its associated segments, the left associated segments located on the left of the segment and the right associated segments located on the right of the segment;
    for each left associated segment of each segment, compute the left associated segment's score according to the left associated segment's length and the contributions of the left associated segment's other left associated segments to it; and
    for each right associated segment of each segment, compute the right associated segment's score according to the right associated segment's length and the contributions of the right associated segment's other right associated segments to it.
  48. The vehicle according to claim 47, characterized in that the processor is specifically configured to:
    for each segment, compute each left associated segment's left contribution to the segment according to the left associated segment's score, the positional relationship between the left associated segment and the segment, and the segment's length; and
    for each segment, compute each right associated segment's right contribution to the segment according to the right associated segment's score, the positional relationship between the right associated segment and the segment, and the segment's length.
  49. The vehicle according to claim 48, characterized in that, for the i-th segment, the left contribution CS^l_{i,j} of the j-th left associated segment to the i-th segment is computed by formula (6), and the right contribution CS^r_{i,j} of the j-th right associated segment to the i-th segment is computed by formula (7), the two formulas being given as images in the original and not recoverable here;
    where i and j are positive integers;
    L_i is the length of the i-th segment;
    k_1 is a first preset coefficient, with k_1 > 0;
    α is the angle between the i-th segment and the j-th left segment or j-th right segment;
    δ is the ratio of the spacing between the i-th segment and the j-th left segment or j-th right segment to the preset spacing;
    k_2 is a second preset coefficient, with 0 < k_2 < 1;
    S^l_j is the score of the j-th left associated segment;
    S^r_j is the score of the j-th right associated segment.
  50. The vehicle according to claim 48, characterized in that the processor is specifically configured to:
    for each segment, compute the segment's score according to the segment's length, the maximum among the left contributions of the segment's left associated segments to the segment, and the maximum among the right contributions of the segment's right associated segments to the segment.
  51. The vehicle according to claim 50, characterized in that each segment's score is the sum of the segment's length, the maximum among the left contributions of the segment's left associated segments to the segment, and the maximum among the right contributions of the segment's right associated segments to the segment.
  52. The vehicle according to claim 39, characterized in that the processor is specifically configured to:
    segment out all segment regions in the road image; and
    identify all segments based on a segment detection algorithm.
  53. The vehicle according to claim 52, characterized in that the processor is specifically configured to:
    segment out all segment regions in the road image based on a CNN; or
    segment out all segment regions in the road image based on an edge detection algorithm.
  54. The vehicle according to claim 52, characterized in that the processor is specifically configured to:
    identify, based on the segment detection algorithm, the lengths of all segments, the angles between all segments and/or the spacings between all segments.
  55. The vehicle according to claim 52 or 54, characterized in that the segment detection algorithm is the Hough transform.
  56. The vehicle according to claim 39 or 52, characterized in that, before identifying all line segments in the road image, the processor is further configured to:
    perform image rectification on the road image.
  57. The vehicle according to claim 56, characterized in that the processor is specifically configured to:
    project the road image onto the corresponding top view based on inverse perspective mapping.
  58. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the steps of the lane line recognition method according to any one of claims 1 to 19 are implemented.
PCT/CN2018/112894 2018-10-31 2018-10-31 车道线识别方法和装置、车辆 WO2020087322A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/112894 WO2020087322A1 (zh) 2018-10-31 2018-10-31 车道线识别方法和装置、车辆
CN201880039256.XA CN110770741B (zh) 2018-10-31 2018-10-31 一种车道线识别方法和装置、车辆

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/112894 WO2020087322A1 (zh) 2018-10-31 2018-10-31 车道线识别方法和装置、车辆

Publications (1)

Publication Number Publication Date
WO2020087322A1 true WO2020087322A1 (zh) 2020-05-07

Family

ID=69328785

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/112894 WO2020087322A1 (zh) 2018-10-31 2018-10-31 车道线识别方法和装置、车辆

Country Status (2)

Country Link
CN (1) CN110770741B (zh)
WO (1) WO2020087322A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220258769A1 (en) * 2021-02-18 2022-08-18 Honda Motor Co., Ltd. Vehicle control device, vehicle control method, and storage medium

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN111311675B (zh) * 2020-02-11 2022-09-16 腾讯科技(深圳)有限公司 车辆定位方法、装置、设备和存储介质
CN112498342A (zh) * 2020-11-26 2021-03-16 潍柴动力股份有限公司 一种行人碰撞预测方法及系统
CN112347983B (zh) * 2020-11-27 2021-12-14 腾讯科技(深圳)有限公司 车道线检测处理方法、装置、计算机设备和存储介质

Citations (6)

Publication number Priority date Publication date Assignee Title
US20100054538A1 (en) * 2007-01-23 2010-03-04 Valeo Schalter Und Sensoren Gmbh Method and system for universal lane boundary detection
CN102663356A (zh) * 2012-03-28 2012-09-12 柳州博实唯汽车科技有限公司 车道线提取及偏离预警方法
CN103440785A (zh) * 2013-08-08 2013-12-11 华南师范大学 一种快速的车道偏移警示方法
CN103940434A (zh) * 2014-04-01 2014-07-23 西安交通大学 基于单目视觉和惯性导航单元的实时车道线检测系统
CN104063877A (zh) * 2014-07-16 2014-09-24 中电海康集团有限公司 一种候选车道线混合判断识别方法
CN104700072A (zh) * 2015-02-06 2015-06-10 中国科学院合肥物质科学研究院 基于车道线历史帧的识别方法

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP6278791B2 (ja) * 2014-03-31 2018-02-14 株式会社デンソーアイティーラボラトリ 車両位置検出装置、車両位置検出方法及び車両位置検出用コンピュータプログラムならびに車両位置検出システム
CN105718870B (zh) * 2016-01-15 2019-06-14 武汉光庭科技有限公司 自动驾驶中基于前向摄像头的道路标线提取方法
CN107229908B (zh) * 2017-05-16 2019-11-29 浙江理工大学 一种车道线检测方法

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
US20100054538A1 (en) * 2007-01-23 2010-03-04 Valeo Schalter Und Sensoren Gmbh Method and system for universal lane boundary detection
CN102663356A (zh) * 2012-03-28 2012-09-12 柳州博实唯汽车科技有限公司 车道线提取及偏离预警方法
CN103440785A (zh) * 2013-08-08 2013-12-11 华南师范大学 一种快速的车道偏移警示方法
CN103940434A (zh) * 2014-04-01 2014-07-23 西安交通大学 基于单目视觉和惯性导航单元的实时车道线检测系统
CN104063877A (zh) * 2014-07-16 2014-09-24 中电海康集团有限公司 一种候选车道线混合判断识别方法
CN104700072A (zh) * 2015-02-06 2015-06-10 中国科学院合肥物质科学研究院 基于车道线历史帧的识别方法

Cited By (2)

Publication number Priority date Publication date Assignee Title
US20220258769A1 (en) * 2021-02-18 2022-08-18 Honda Motor Co., Ltd. Vehicle control device, vehicle control method, and storage medium
US11932283B2 (en) * 2021-02-18 2024-03-19 Honda Motor Co., Ltd. Vehicle control device, vehicle control method, and storage medium

Also Published As

Publication number Publication date
CN110770741A (zh) 2020-02-07
CN110770741B (zh) 2024-05-03

Similar Documents

Publication Publication Date Title
WO2020087322A1 (zh) 车道线识别方法和装置、车辆
WO2018219054A1 (zh) 一种车牌识别方法、装置及系统
US10025997B2 (en) Device and method for recognizing obstacle and parking slot to support unmanned autonomous parking function
WO2020048152A1 (zh) 高精度地图制作中地下车库停车位提取方法及系统
CN110598512B (zh) 一种车位检测方法及装置
WO2016119532A1 (zh) 车辆违章停车的取证方法及其装置
CN107895375B (zh) 基于视觉多特征的复杂道路线提取方法
CN112257692B (zh) 一种行人目标的检测方法、电子设备及存储介质
CN110211185B (zh) 在一组候选点内识别校准图案的特征点的方法
WO2014032496A1 (zh) 一种人脸特征点定位方法、装置及存储介质
Youjin et al. A robust lane detection method based on vanishing point estimation
CN109543493B (zh) 一种车道线的检测方法、装置及电子设备
LU502288B1 (en) Method and system for detecting position relation between vehicle and lane line, and storage medium
WO2020146980A1 (zh) 车道线识别方法、车道线识别装置以及非易失性存储介质
JP6466038B1 (ja) 画像処理装置および画像処理方法
KR101461108B1 (ko) 인식기, 차량모델인식장치 및 방법
WO2023184868A1 (zh) 障碍物朝向的确定方法、装置、系统、设备、介质及产品
WO2020133488A1 (zh) 车辆检测方法和设备
CN111046845A (zh) 活体检测方法、装置及系统
CN109753886B (zh) 一种人脸图像的评价方法、装置及设备
CN114543819A (zh) 车辆定位方法、装置、电子设备及存储介质
Jin et al. Road curvature estimation using a new lane detection method
KR102629639B1 (ko) 차량용 듀얼 카메라 장착 위치 결정 장치 및 방법
CN115507815A (zh) 一种目标测距方法、装置及车辆
CN112634141B (zh) 一种车牌矫正方法、装置、设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18938481

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18938481

Country of ref document: EP

Kind code of ref document: A1