WO2022078074A1 - Method, system and storage medium for detecting the positional relationship between a vehicle and a lane line - Google Patents

Method, system and storage medium for detecting the positional relationship between a vehicle and a lane line

Info

Publication number
WO2022078074A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
line
lane line
line segment
coordinates
Prior art date
Application number
PCT/CN2021/114250
Other languages
English (en)
French (fr)
Inventor
曹忠
李伟杰
尚文利
赵文静
浣沙
揭海
Original Assignee
广州大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州大学
Publication of WO2022078074A1
Priority to US18/300,737 (published as US20230252677A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/48 Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20048 Transform domain processing
    • G06T 2207/20061 Hough transform
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30244 Camera pose
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30256 Lane; Road marking

Definitions

  • the invention relates to the field of intelligent driving, in particular to a method, system and storage medium for detecting the positional relationship between a vehicle and a lane line.
  • one of the reasons a driver can drive normally is that he or she judges whether the vehicle is travelling properly from the position of the lane lines on the road relative to the vehicle. For example, on a straight road the driver sees, through the front window, where the lane lines on both sides of the vehicle fall on that window, and thereby judges whether the vehicle is driving in the center of the lane; when reversing into a garage, the driver may also need to observe the angle between the vehicle and the parking space line through the left and right side mirrors, so as to decide at what angle the parking maneuver should be completed. If manned driving is replaced by unmanned driving, then judging the positional relationship between the vehicle and the lane line is the key to judging whether the vehicle can drive normally.
  • the common method for judging the relationship between the vehicle and the lane line is to use GPS or the BeiDou system, installing a positioning information receiver on the vehicle together with vector-assisted positioning equipment such as a gyroscope, an accelerometer and an electronic compass.
  • it is also necessary to measure precise data in advance, such as the coordinates of the lane lines of a given site, and to establish a site model, so that the vehicle can be accurately positioned at that site and the positional relationship between the vehicle and the lane line determined.
  • moreover, a positioning system requires base stations to be laid out on a large scale to guarantee positioning accuracy, and the site model must be rebuilt whenever the site's lane lines change, which increases the construction cost.
  • the present invention proposes a method, system and storage medium for detecting the positional relationship between a vehicle and a lane line, which can determine the positional relationship between the lane line and the vehicle without using a positioning system, thereby reducing the construction cost of intelligent driving.
  • an embodiment of the present invention provides a method for detecting a positional relationship between a vehicle and a lane line, including the following steps:
  • acquiring a vehicle model, the vehicle model being represented by a plurality of first coordinates in the world coordinate system;
  • the positional relationship between the lane line and the vehicle is determined according to the positional relationship between the first line segment and the plurality of first coordinates in the world coordinate system.
  • the vehicle model is obtained by:
  • the vehicle model is obtained by mapping the length and width information into the world coordinate system.
  • the determining the positional relationship between the lane line and the vehicle according to the positional relationship between the first line segment and a plurality of the first coordinates in the world coordinate system includes the following steps:
  • the positional relationship between the lane line and the vehicle is determined to be that the vehicle is pressing the line.
  • the determining according to the lane line picture and the calibration parameter that the lane line is mapped to the first line segment in the world coordinate system includes the following steps:
  • the first line segment is determined according to the two third coordinates.
  • the determining two third coordinates of the lane line in the world coordinate system according to the lane line picture and the calibration parameter includes the following steps:
  • the image pixel coordinates are mapped to the world coordinate system according to the calibration parameters to obtain two third coordinates of the lane line in the world coordinate system.
  • the determining the recognition area according to the lane line picture includes the following steps:
  • before the step of determining the positional relationship between the lane line and the vehicle according to the positional relationship between the first line segment and the plurality of first coordinates in the world coordinate system, when a plurality of first line segments are recognized, the following steps are also included:
  • an embodiment of the present invention also provides a system for detecting the positional relationship between a vehicle and a lane line, including:
  • the processing component is used for acquiring the lane line picture, acquiring the calibration parameters of the camera, and acquiring a vehicle model, wherein the vehicle model is represented by a plurality of first coordinates in the world coordinate system; the processing component determines, according to the lane line picture and the calibration parameters, the first line segment to which the lane line is mapped in the world coordinate system, and determines the positional relationship between the lane line and the vehicle according to the positional relationship between the first line segment and the plurality of first coordinates in the world coordinate system.
  • the positional relationship between the lane line and the vehicle is judged from the line segment and the vehicle model in the world coordinate system; the judgment can be made without the aid of a positioning system, using only the vehicle's own equipment, which reduces the construction cost of intelligent driving.
  • FIG. 1 is a flowchart of a method for detecting a positional relationship between a vehicle and a lane line provided according to an embodiment of the present invention
  • FIG. 2 is a flowchart of a method for detecting a positional relationship between a vehicle and a lane line provided according to another embodiment of the present invention
  • FIG. 3 is a flowchart of a method for detecting a positional relationship between a vehicle and a lane line provided according to another embodiment of the present invention
  • FIG. 4 is a flowchart of a method for detecting a positional relationship between a vehicle and a lane line provided according to another embodiment of the present invention
  • FIG. 5 is a flowchart of a method for detecting a positional relationship between a vehicle and a lane line provided according to another embodiment of the present invention.
  • FIG. 6 is a flowchart of a method for detecting a positional relationship between a vehicle and a lane line provided according to another embodiment of the present invention.
  • FIG. 7 is a flowchart of a method for detecting a positional relationship between a vehicle and a lane line provided according to another embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a vehicle model in a world coordinate system provided according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of the positional relationship between the vehicle model and the first line segment in the world coordinate system provided according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of fitting a first line segment in a world coordinate system according to an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of a positional relationship between a vehicle model and a first line segment in a world coordinate system provided according to an embodiment of the present invention.
  • Embodiments of the present invention provide a method for detecting the positional relationship between a vehicle and a lane line, which can determine the positional relationship between a vehicle and a lane line without using a positioning system, thereby reducing the construction cost of intelligent driving.
  • the method of the embodiment of the present invention includes, but is not limited to, step S110 , step S120 , step S130 , and step S140 .
  • Step S110 acquiring a vehicle model, where the vehicle model is represented by a plurality of first coordinates in the world coordinate system.
  • S1 and S2 are auxiliary lines.
  • the auxiliary line S1 marks the position where the body width, going back from the front of the vehicle, has just reached its maximum, and the auxiliary line S2 marks the position where the body width just begins to decrease from that maximum. Three body widths W1, W2 and W3 are measured, where W1 is the front width, W2 is the body width and W3 is the rear width. Three body lengths H1, H2 and H3 are then measured.
  • H1 is the distance from the front of the vehicle to the auxiliary line S1, H2 is the distance between the auxiliary lines S1 and S2, and H3 is the distance from the auxiliary line S2 to the rear of the vehicle.
  • the vehicle model is obtained by mapping the length and width information into the world coordinate system.
  • the coordinate values of the first coordinates A, B, C, D, E, F, G and H can be determined from the length and width information.
  • since a straight line is determined by two points, a line set can be obtained.
  • the line set includes the straight lines L1, L2, L3, L4, L5, L6, L7 and L8; these straight lines constitute the vehicle model representing the vehicle. It should be noted that if the vehicle model needs to reproduce the vehicle more faithfully, or if the body shape is more complex, more straight lines can be used to build the vehicle model.
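The construction described above can be sketched in code. This is an illustrative sketch only: the corner ordering, the placement of the origin at the middle of the vehicle front, and the function name `vehicle_model` are our assumptions for illustration, not values fixed by the patent.

```python
def vehicle_model(w1, w2, w3, h1, h2, h3):
    """Return the eight first coordinates A..H of the vehicle outline.

    Assumed convention: origin at the middle of the front bumper, vehicle
    pointing along +Y, so the body extends toward negative Y.
    """
    corners = {
        "A": (-w1 / 2, 0),              # front-left
        "B": ( w1 / 2, 0),              # front-right
        "C": ( w2 / 2, -h1),            # right side, at auxiliary line S1
        "D": ( w2 / 2, -h1 - h2),       # right side, at auxiliary line S2
        "E": ( w3 / 2, -h1 - h2 - h3),  # rear-right
        "F": (-w3 / 2, -h1 - h2 - h3),  # rear-left
        "G": (-w2 / 2, -h1 - h2),       # left side, at auxiliary line S2
        "H": (-w2 / 2, -h1),            # left side, at auxiliary line S1
    }
    # Straight lines L1..L8: each consecutive pair of corners is one edge.
    names = list(corners)
    edges = [(corners[names[i]], corners[names[(i + 1) % 8]]) for i in range(8)]
    return corners, edges

# Example measurements in meters (made up for illustration).
corners, edges = vehicle_model(1.6, 1.8, 1.7, 0.9, 2.6, 1.0)
```

With eight corners the perimeter always closes into exactly eight edges; a more detailed body shape would simply add corner/edge pairs.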
  • Step S120 acquiring a lane line image, wherein the lane line image is captured by a camera installed on the vehicle.
  • when the number of cameras is four and the cameras are arranged at the front, rear, left and right of the vehicle body, the road conditions around the body can be captured with the fewest cameras, and lane line pictures of the surroundings obtained.
  • if the shooting field of view is insufficient, the number of cameras can be increased.
  • Step S130 acquiring calibration parameters of the camera.
  • the rotation matrix and displacement matrix of the camera can be calculated using the five-point method.
  • the five-point method means that the world coordinates and image pixel coordinates of five points are known; four of the points are used as pose estimation parameters, and the remaining point is used as a reference point for checking the accuracy of the pose estimation.
  • the rotation matrix and displacement matrix of the corresponding camera can be derived using the camera's intrinsic parameter matrix, the distortion matrix and the four points.
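As a simplified stand-in for the pose estimation idea above, the sketch below estimates a planar rotation and translation from point correspondences and keeps a spare point to check the fit, mirroring the "four points for estimation, one for verification" scheme. This is a 2D ground-plane illustration, not the actual camera pose solver (which would also involve the intrinsic and distortion matrices); all names are ours.

```python
import math

def fit_rigid_2d(src, dst):
    """Least-squares rotation + translation mapping src points onto dst
    (closed-form 2D Procrustes solution on centered coordinates)."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cx_s; ys -= cy_s; xd -= cx_d; yd -= cy_d
        sxx += xs * xd + ys * yd        # cosine component
        sxy += xs * yd - ys * xd        # sine component
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, (tx, ty)

def apply(theta, t, p):
    """Apply the estimated rotation and translation to a point."""
    c, s = math.cos(theta), math.sin(theta)
    return c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1]
```

After fitting on four correspondences, applying the recovered transform to the fifth point and comparing it with its known position plays the role of the accuracy reference point.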
  • since the lane line has a certain width, recognizing the lane line image yields two line segments along the lane line's edges; the segment on the side closer to the vehicle model can be selected as the first line segment in the world coordinate system.
  • Step S150 Determine the positional relationship between the lane line and the vehicle according to the positional relationship between the first line segment and the plurality of first coordinates in the world coordinate system.
  • Step S210 determining the second coordinates of each first coordinate projected onto the first line segment.
  • Step S230 when all the first coordinates are located on one side of the first line segment, it is determined that the positional relationship between the lane line and the vehicle is that the vehicle does not press the line.
  • the ordinate of the second coordinate G′ can be obtained as Y_G′ = k_J1·X_G + b_J1, where Y_G′ is the ordinate of the second coordinate G′, k_J1 is the slope of the first line segment J1, b_J1 is the intercept of the first line segment J1, and X_G is the abscissa of the first coordinate G.
  • when Y_G′ > Y_G, the second coordinate G′ is located above the first coordinate G, that is, the first coordinate G is located below the first line segment J1;
  • when Y_G′ < Y_G, the second coordinate G′ is located below the first coordinate G, that is, the first coordinate G is located above the first line segment J1;
  • the case where some of the first coordinates are located on one side of the first line segment can be further divided: when some of the first coordinates are on one side of the first line segment and some are on the other side, it is judged that the lane line passes through the vehicle; when some of the first coordinates are on one side of the first line segment and the remaining first coordinates are on the first line segment, it is judged that the vehicle sits exactly on the lane line.
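The projection-and-compare test above can be written as a minimal sketch: project each first coordinate onto the line y = k·x + b and compare ordinates. Function and label names are illustrative, not from the patent.

```python
def side_of_line(point, k, b):
    """Return 'above', 'below' or 'on' for a point relative to y = k*x + b."""
    x, y = point
    y_proj = k * x + b          # ordinate of the second coordinate (projection)
    if y_proj > y:
        return "below"          # projection above the point: point below line
    if y_proj < y:
        return "above"
    return "on"

def pressing_state(first_coords, k, b):
    """Classify the vehicle/lane-line relation from the first coordinates."""
    sides = {side_of_line(p, k, b) for p in first_coords}
    if sides <= {"above"} or sides <= {"below"}:
        return "not pressing"               # all coordinates strictly one side
    if "above" in sides and "below" in sides:
        return "lane line passes through vehicle"
    return "vehicle exactly on lane line"   # one side plus points on the line
```

A production version would add the slope-threshold variant mentioned later (comparing abscissas when |k| exceeds a threshold); this sketch keeps only the ordinate comparison.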
  • when reversing into a garage, an angle formed by the intersection of two lane lines will appear, which is mapped into the world coordinate system.
  • the intersection of the first line segment J2 and the first line segment J3 forms an angle.
  • the coordinates of the intersection point I can be obtained from the functional expressions of the first line segment J2 and the first line segment J3; whether the vehicle presses the corner can be judged by determining, from the intersection point I and the line set representing the vehicle model, whether the point I lies inside the vehicle model.
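An illustrative sketch of this corner check: intersect the two first line segments given as y = k·x + b, then test whether the intersection lies inside the vehicle model. The patent only says the line set is used for the inside test; the standard ray-casting point-in-polygon test used here is our assumption.

```python
def intersect(k2, b2, k3, b3):
    """Intersection point I of y = k2*x + b2 and y = k3*x + b3 (k2 != k3)."""
    x = (b3 - b2) / (k2 - k3)
    return x, k2 * x + b2

def inside(point, polygon):
    """Ray-casting test: is the point inside the polygon (list of corners)?"""
    x, y = point
    hit = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending to the right.
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            hit = not hit
    return hit
```

Feeding `intersect(...)` into `inside(..., vehicle_corners)` reproduces the "does the corner point I fall inside the vehicle model" decision.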
  • FIG. 3 is a schematic diagram of an embodiment of the refinement process of step S140 in FIG. 1 .
  • step S140 includes, but is not limited to, step S310 and step S320.
  • Step S320 determining the first line segment according to the two third coordinates.
  • the image pixel coordinate system of the lane line picture can be converted into the world coordinate system using the calibration parameters. Since the lane line in reality is a straight line, the lane line captured by the camera has a certain length; its two endpoints are mapped into the world coordinate system to obtain two third coordinates, and the first line segment representing the lane line is obtained by connecting these two third coordinates. Exploiting the fact that the lane line is straight, only two third coordinates need to be determined in the world coordinate system to map the lane line into it, which reduces the amount of computation and improves the operation speed.
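Connecting the two third coordinates amounts to recovering a slope and intercept, which is all the later position tests need. A minimal sketch (names are ours):

```python
def line_through(p1, p2):
    """Slope k and intercept b of the line through two world coordinates."""
    (x1, y1), (x2, y2) = p1, p2
    k = (y2 - y1) / (x2 - x1)   # assumes the lane line is not vertical
    b = y1 - k * x1
    return k, b
```

A vertical lane line would need the x = c form instead; the patent's slope-threshold remark later in the text covers that near-vertical regime.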
  • FIG. 4 is a schematic diagram of an embodiment of the refinement process of step S310 in FIG. 3 .
  • step S310 includes, but is not limited to, step S410, step S420, step S430 and step S440.
  • Step S410 Determine the recognition area according to the lane line picture.
  • Step S420 using the Hough transform to determine the second line segment under the recognition area.
  • Step S430 Determine image pixel coordinates of two endpoints of the second line segment according to the second line segment.
  • Step S440 Map the pixel coordinates of the image to the world coordinate system according to the calibration parameters to obtain two third coordinates of the lane line in the world coordinate system.
  • the recognition area can be determined after image processing is performed on the acquired lane line picture, and the coordinates of the upper left corner of the recognition area are calculated by using the following formula:
  • X_0 is the abscissa of the center point of the recognition area
  • Y_0 is the ordinate of the center point of the recognition area
  • W is the width of the recognition area
  • H is the height of the recognition area
  • X_u is the abscissa of the upper left corner of the recognition area
  • Y_u is the ordinate of the upper left corner of the recognition area.
  • the Hough transform can be used to find the straight line in the recognition area, that is, the second line segment; the two endpoints of the second line segment within the recognition area are then converted into image pixel coordinates.
  • the conversion process can be as follows:
  • X_ul is the abscissa of an endpoint of the second line segment in the recognition area
  • Y_ul is the ordinate of an endpoint of the second line segment in the recognition area
  • X_1 is the image pixel abscissa of that endpoint of the second line segment
  • Y_1 is the image pixel ordinate of that endpoint of the second line segment
  • f is the focal length of the camera
  • dx and dy are the physical lengths, along the X and Y axes of the photosensitive-plate coordinate system, occupied by one pixel of the image pixel coordinate system.
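The text above names the quantities but omits the formulas themselves. The sketch below is a plausible reconstruction under two stated assumptions: the recognition model reports the ROI by its center point plus width and height (so X_u = X_0 − W/2, Y_u = Y_0 − H/2), and ROI-local endpoint coordinates convert to image pixel coordinates by adding the ROI's upper-left offset. Neither formula is confirmed by the source.

```python
def roi_upper_left(x0, y0, w, h):
    """Upper-left corner (X_u, Y_u) of a recognition area centered at
    (X_0, Y_0) with width W and height H. Reconstruction, not from source."""
    return x0 - w / 2, y0 - h / 2

def roi_to_image(x_ul, y_ul, x_u, y_u):
    """Convert an endpoint given in recognition-area coordinates
    (X_ul, Y_ul) to image pixel coordinates (X_1, Y_1) by adding the ROI's
    upper-left offset. Again an assumption; the source omits the formula."""
    return x_u + x_ul, y_u + y_ul
```

The focal length f and the per-pixel sizes dx, dy enter the subsequent pixel-to-world mapping (via the intrinsic matrix), which is not reproduced here.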
  • Step S510 grayscale the lane line picture to obtain a grayscale image.
  • after grayscale processing the lane line image shows intensity gradients more clearly, which is beneficial for subsequent image processing.
  • Step S520 filtering the grayscale image to obtain a noise reduction image.
  • filtering the image can filter out most of the Gaussian noise in the image, such as dust, raindrops, etc., and setting the Gaussian kernel to an appropriate size can adapt to different environmental conditions.
  • Step S530 performing distortion correction processing on the noise reduction image to obtain a correction image.
  • the actual lane line is a straight line, but the photo taken by the camera will be distorted to a certain extent, so distortion correction must be performed on the picture.
  • the corrected picture after distortion correction is free of distortion and can provide image pixel coordinates more accurately. Distortion correction requires the camera's intrinsic parameter matrix and distortion matrix.
  • Step S550 substituting the picture data into the recognition model to obtain the recognition area.
  • the recognition model can also identify the dashed and solid lines of the road.
  • to build the recognition model, picture samples can first be collected and labeled with labeling software to form a training file, and the training file is input into a deep neural network for training to obtain the recognition model. The deep learning network can be a target detection framework such as YOLOv3 or SSD, as long as it can recognize solid and dashed lines in the image and output the recognition type and recognition area.
  • the recognition model can also output the length and width of the recognition area and the coordinates of the recognition center point.
  • the parameters of the Hough transform need to be adjusted, because the dashed line and the solid line are distributed differently in image space.
  • the solid line appears as one large continuous patch in the image, while the dashed line appears as small patches. If both are recognized with the same parameters, more than two straight lines may be recognized for the solid line, or no straight line may be recognized for the dashed line.
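The parameter sensitivity can be made concrete with a toy Hough accumulator over a handful of edge pixels: with a high vote threshold (tuned for solid lines) only the long line survives, while a low threshold (tuned for dashes) also recovers the short segment. The grids and thresholds here are illustrative choices of ours, not the patent's parameters.

```python
import math
from collections import Counter

def hough_lines(points, vote_threshold, rho_step=1.0, theta_step_deg=1):
    """Vote in (rho, theta) space; return (rho, theta_deg) bins whose vote
    count reaches the threshold, using rho = x*cos(theta) + y*sin(theta)."""
    acc = Counter()
    for x, y in points:
        for t in range(0, 180, theta_step_deg):
            theta = math.radians(t)
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(round(rho / rho_step), t)] += 1
    return {(bin_ * rho_step, t) for (bin_, t), votes in acc.items()
            if votes >= vote_threshold}

solid = [(x, 5) for x in range(20)]   # long continuous (solid) line y = 5
dashed = [(x, 9) for x in range(5)]   # one short dash on the line y = 9
points = solid + dashed

strict = hough_lines(points, vote_threshold=15)  # finds only the solid line
loose = hough_lines(points, vote_threshold=4)    # also finds the dash
```

In an OpenCV-style pipeline the same tuning happens through the vote threshold and minimum line length, which is why dashed and solid lines need separate parameter sets.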
  • Step S610 Calculate the first distance between the plurality of first line segments and the fourth coordinate, where the fourth coordinate is any point in the world coordinate system.
  • the same lane line in a lane line image may be mapped to multiple first line segments.
  • the first line segment J4, the first line segment J5 and the first line segment J6 may actually be one straight line that was recognized as multiple lines due to the low resolution of the picture, the environment, parameter settings and so on; in that case the first line segments J4, J5 and J6 are fitted into a new first line segment J7.
  • different cameras may capture the same lane line, so the first line segments obtained by mapping the lane line pictures of multiple cameras into the world coordinate system also need to be fitted, that is, the first line segments representing the same lane line under different recognition areas are fitted together. Likewise, it is first necessary to judge whether these first line segments are the same first line segment.
  • the fourth coordinate is selected as the coordinate origin O.
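The merging rule described in the steps above can be sketched as follows: two first line segments are treated as the same lane line when their slopes differ by less than a first threshold and their first distances to the fourth coordinate (here taken as the origin O) differ by less than a second threshold; the simple averaging used as the "fit" and the threshold values are our illustrative choices.

```python
import math

def distance_to_origin(k, b):
    """First distance: perpendicular distance from O = (0, 0) to y = k*x + b."""
    return abs(b) / math.sqrt(1 + k * k)

def merge_segments(segments, slope_tol=0.05, dist_tol=0.1):
    """Greedily merge (k, b) pairs that represent the same lane line."""
    merged = []
    for k, b in segments:
        for i, (km, bm) in enumerate(merged):
            if (abs(k - km) < slope_tol and
                    abs(distance_to_origin(k, b)
                        - distance_to_origin(km, bm)) < dist_tol):
                merged[i] = ((k + km) / 2, (b + bm) / 2)  # fit: simple average
                break
        else:
            merged.append((k, b))
    return merged
```

A least-squares refit over the original endpoints would be a more faithful "fitting" step; averaging keeps the sketch short.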
  • FIG. 7 is a schematic diagram of another embodiment in FIG. 1 .
  • the method for detecting the positional relationship between the vehicle and the lane line provided by this embodiment includes, but is not limited to, step S710, step S720, step S730 and step S740.
  • Step S740 Determine the angle between the vehicle and the lane line according to the slope of the first line segment, the intercept of the first line segment, and the second distance.
  • b_1 is the intercept of the first line segment J8, and k_1 is the slope of the first line segment J8.
  • the deflection angle ⁇ between the vehicle model and the first line segment J8 can be solved according to the following formula:
  • the deflection angle between the vehicle and the lane line is calculated, and the direction of the vehicle can be adjusted in time according to the deflection angle for correct driving.
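The patent's own formula for the deflection angle (which also uses the intercept and the second distance) is not reproduced in this text. The sketch below keeps only the geometric core as our reconstruction: with the vehicle front at the origin and the heading along the +Y axis, the angle between the heading and a lane line of slope k_1 is atan(1/|k_1|).

```python
import math

def deflection_angle(k1):
    """Angle (radians) between the vehicle heading (+Y axis) and a lane line
    of slope k1. Reconstruction: theta = atan(1 / |k1|); the source's full
    formula with intercept and second distance is omitted there."""
    if k1 == 0:
        return math.pi / 2        # lane line perpendicular to the heading
    return math.atan(1 / abs(k1))
```

A lane line parallel to the heading has |k_1| → ∞ and a deflection near zero, matching the intuition that no steering correction is needed.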
  • An embodiment of the present invention also provides a system for detecting the positional relationship between a vehicle and a lane line, including several cameras and processing components.
  • the camera is set on the body, and the camera is used to take pictures of the lane lines.
  • the processing component is used to obtain a picture of the lane line captured by the camera, obtain the calibration parameters of the camera, and obtain a vehicle model, wherein the vehicle model is represented by a plurality of first coordinates in the world coordinate system; the processing component determines, according to the lane line picture and the calibration parameters, the first line segment to which the lane line is mapped in the world coordinate system, and determines the positional relationship between the lane line and the vehicle according to the positional relationship between the first line segment and the plurality of first coordinates in the world coordinate system.
  • an embodiment of the present invention also provides a computer-readable storage medium storing computer-executable instructions which, when executed by one or more control processors, perform the method described above.
  • computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cartridges, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and can include any information delivery media, as is well known to those of ordinary skill in the art.

Abstract

The present invention discloses a method for detecting the positional relationship between a vehicle and a lane line, relating to the field of intelligent driving. The method comprises the following steps: acquiring a vehicle model, the vehicle model being represented by a plurality of first coordinates in a world coordinate system; acquiring a lane line picture, wherein the lane line picture is captured by a camera mounted on the vehicle; acquiring calibration parameters of the camera; determining, according to the lane line picture and the calibration parameters, a first line segment to which the lane line is mapped in the world coordinate system; and determining the positional relationship between the lane line and the vehicle according to the positional relationship between the first line segment and the plurality of first coordinates in the world coordinate system. The detection method can determine the positional relationship between the lane line and the vehicle without using a positioning system, thereby reducing the construction cost of intelligent driving.

Description

Method, System and Storage Medium for Detecting the Positional Relationship Between a Vehicle and a Lane Line
Technical Field
The present invention relates to the field of intelligent driving, and in particular to a method, system and storage medium for detecting the positional relationship between a vehicle and a lane line.
Background
While driving, one of the reasons a driver can drive normally is that he or she judges whether the vehicle is travelling properly from the position of the lane lines on the road relative to the vehicle being driven. For example, on a straight road the driver sees, through the front window, where the lane lines on both sides of the vehicle fall on that window, and thereby judges whether the vehicle is driving in the center of the lane; when reversing into a garage, the driver may also need to observe the angle between the vehicle and the parking space line through the left and right side mirrors, so as to decide at what angle the parking maneuver should be completed. If manned driving is replaced by unmanned driving, then judging the positional relationship between the vehicle and the lane line is the key to judging whether the vehicle can drive normally.
The common method of judging the positional relationship between the vehicle and the lane line is to use GPS or the BeiDou system, installing a positioning information receiver on the vehicle together with vector-assisted positioning equipment such as a gyroscope, an accelerometer and an electronic compass. Precise data such as the coordinates of the lane lines of a given site must also be measured in advance and a site model established, so that the vehicle can be accurately positioned at that site and the positional relationship between the vehicle and the lane line judged.
However, a positioning system requires base stations to be laid out on a large scale to guarantee positioning accuracy, and the site model must be rebuilt whenever the site's lane lines change, which increases the construction cost.
Summary of the Invention
In view of this, in order to solve at least one of the above technical problems, the present invention proposes a method, system and storage medium for detecting the positional relationship between a vehicle and a lane line, which can judge the positional relationship between the lane line and the vehicle without using a positioning system, thereby reducing the construction cost of intelligent driving.
In a first aspect, an embodiment of the present invention provides a method for detecting the positional relationship between a vehicle and a lane line, comprising the following steps:
acquiring a vehicle model, the vehicle model being represented by a plurality of first coordinates in a world coordinate system;
acquiring a lane line picture, wherein the lane line picture is captured by a camera mounted on the vehicle;
acquiring calibration parameters of the camera;
determining, according to the lane line picture and the calibration parameters, a first line segment to which the lane line is mapped in the world coordinate system;
determining the positional relationship between the lane line and the vehicle according to the positional relationship between the first line segment and the plurality of first coordinates in the world coordinate system.
In some embodiments, the vehicle model is obtained through the following steps:
acquiring length and width information of the vehicle;
mapping the length and width information into the world coordinate system to obtain the vehicle model.
In some embodiments, determining the positional relationship between the lane line and the vehicle according to the positional relationship between the first line segment and the plurality of first coordinates in the world coordinate system comprises the following steps:
determining a second coordinate obtained by projecting each first coordinate onto the first line segment;
determining the positional relationship between each first coordinate and the first line segment, wherein the positional relationship between a first coordinate and the first line segment is determined from that first coordinate and its corresponding second coordinate;
when all the first coordinates are located on one side of the first line segment, determining that the positional relationship between the lane line and the vehicle is that the vehicle is not pressing the line;
when some of the first coordinates are located on one side of the first line segment, determining that the positional relationship between the lane line and the vehicle is that the vehicle is pressing the line.
In some embodiments, determining, according to the lane line picture and the calibration parameters, the first line segment to which the lane line is mapped in the world coordinate system comprises the following steps:
determining two third coordinates of the lane line in the world coordinate system according to the lane line picture and the calibration parameters;
determining the first line segment from the two third coordinates.
In some embodiments, determining the two third coordinates of the lane line in the world coordinate system according to the lane line picture and the calibration parameters comprises the following steps:
determining a recognition area from the lane line picture;
determining a second line segment within the recognition area using the Hough transform;
determining the image pixel coordinates of the two endpoints of the second line segment;
mapping the image pixel coordinates into the world coordinate system according to the calibration parameters to obtain the two third coordinates of the lane line in the world coordinate system.
In some embodiments, determining the recognition area from the lane line picture comprises the following steps:
converting the lane line picture to grayscale to obtain a grayscale image;
filtering the grayscale image to obtain a noise-reduced image;
performing distortion correction on the noise-reduced image to obtain a corrected image;
performing edge detection on the corrected image to obtain picture data;
feeding the picture data into a recognition model to obtain the recognition area.
In some embodiments, before the step of determining the positional relationship between the lane line and the vehicle according to the positional relationship between the first line segment and the plurality of first coordinates in the world coordinate system, when a plurality of first line segments are recognized, the method further comprises the following steps:
calculating a first distance between each of the plurality of first line segments and a fourth coordinate, the fourth coordinate being any point in the world coordinate system;
determining the slopes of the plurality of first line segments;
when the difference between the slopes of any two first line segments is smaller than a first threshold and the difference between the first distances of those two first line segments is smaller than a second threshold, merging the two first line segments into a new first line segment.
In some embodiments, the front of the vehicle model is located at the origin of the world coordinate system and the heading of the vehicle model coincides with the Y axis of the world coordinate system, and the method for detecting the positional relationship between the vehicle and the lane line further comprises the following steps:
determining the slope of the first line segment;
determining the intercept of the first line segment;
determining a second distance between a fifth coordinate and the first line segment, wherein the fifth coordinate is set at the front of the vehicle model;
determining the angle between the vehicle and the lane line according to the slope of the first line segment, the intercept of the first line segment, and the second distance.
In a second aspect, an embodiment of the present invention further provides a system for detecting the positional relationship between a vehicle and a lane line, comprising:
several cameras, the cameras being arranged on the vehicle body and used to capture lane line pictures;
a processing component, the processing component being used to acquire the lane line pictures, the calibration parameters of the cameras and a vehicle model, wherein the vehicle model is represented by a plurality of first coordinates in a world coordinate system; the processing component determines, according to a lane line picture and the calibration parameters, a first line segment to which the lane line is mapped in the world coordinate system, and determines the positional relationship between the lane line and the vehicle according to the positional relationship between the first line segment and the plurality of first coordinates in the world coordinate system.
In a third aspect, an embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the method for detecting the positional relationship between a vehicle and a lane line according to the first aspect.
The above technical solution of the present invention has at least one of the following advantages or beneficial effects: a plurality of first coordinates are drawn in a world coordinate system to form a vehicle model; when the position of the lane line relative to the vehicle needs to be judged, the vehicle model is acquired, together with the calibration parameters of the cameras arranged on the vehicle body and the lane line pictures captured by those cameras; the first line segment to which the lane line is mapped in the world coordinate system is determined from the calibration parameters and a lane line picture; and the positional relationship between the lane line and the vehicle is then determined from the positional relationship between the first line segment and the plurality of first coordinates in the world coordinate system. By mapping the real vehicle and the captured lane lines into one common world coordinate system and judging their relationship from the line segment and the vehicle model in that coordinate system, the judgment can be made without the aid of a positioning system, using only equipment on the vehicle itself, which reduces the construction cost of intelligent driving.
Brief Description of the Drawings
FIG. 1 is a flowchart of a method for detecting the positional relationship between a vehicle and a lane line according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for detecting the positional relationship between a vehicle and a lane line according to another embodiment of the present invention;
FIG. 3 is a flowchart of a method for detecting the positional relationship between a vehicle and a lane line according to another embodiment of the present invention;
FIG. 4 is a flowchart of a method for detecting the positional relationship between a vehicle and a lane line according to another embodiment of the present invention;
FIG. 5 is a flowchart of a method for detecting the positional relationship between a vehicle and a lane line according to another embodiment of the present invention;
FIG. 6 is a flowchart of a method for detecting the positional relationship between a vehicle and a lane line according to another embodiment of the present invention;
FIG. 7 is a flowchart of a method for detecting the positional relationship between a vehicle and a lane line according to another embodiment of the present invention;
FIG. 8 is a schematic diagram of a vehicle model in a world coordinate system according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of the positional relationship between a vehicle model and a first line segment in a world coordinate system according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of fitting a first line segment in a world coordinate system according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of the positional relationship between a vehicle model and a first line segment in a world coordinate system according to an embodiment of the present invention.
Detailed Description
The embodiments described in this application should not be regarded as limiting the present application; all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present application.
In the following description, reference to "some embodiments" describes a subset of all possible embodiments; it should be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and that they may be combined with one another where there is no conflict.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used herein are only for the purpose of describing the embodiments of the present application and are not intended to limit the present application.
本发明实施例提供一种车辆与车道线的位置关系检测方法,能够不采用定位系统判断车辆与车道线的位置关系,从而减少智能驾驶的建设成本。参照图1,本发明实施例的方法包括但不限于步骤S110、步骤S120、步骤S130、步骤S140。
步骤S110,获取车辆模型,车辆模型由世界坐标系中的多个第一坐标表示。
在一些实施例中,车辆模型的获取方法是:
获取车辆的长宽信息,如图8所示,S1和S2为辅助线,辅助线S1用于标识车身宽度从车头逐渐变大至刚好最大处的位置,辅助线S2用于标识车身宽度从最大处刚好逐渐变小的位置,测量W1、W2和W3这三个车身的宽度信息。其中W1为车头宽度,W2为车身宽度,W3为车尾宽度。再测量H1、H2和H3这三个车身的长度信息。其中H1为车头到辅助线S1的距离;H2为辅助线S1与S2之间的距离,H3为辅助线S2到车尾的距离。将世界坐标原点O设置在车头中间处,世界坐标系的方向与车辆方向垂直。
将长宽信息映射到世界坐标系中得到所述车辆模型。根据长宽信息可以确定第一坐标A、第一坐标B、第一坐标C、第一坐标D、第一坐标E、第一坐标F、第一坐标G和第一坐标H的坐标值。由两点确定一条直线则可得出直线集,直线集包括直线L1、直线L2、直线L3、直线L4、直线L5、直线L6、直线L7和直线L8,这些直线就构成了代表车辆的车辆模型。需要说明的是,如果车辆模型要更好地还原车辆或是车辆车身形状更为复杂,也可用更多的直线制作车辆模型。
步骤S120,获取车道线图片,其中,所述车道线图像由设置于车辆上的摄像头拍摄。
在一些实施例中,摄像头可以为一个,设置在车辆的车头位置,拍摄区域为车辆行驶的前方区域;也可以设置在摄像头一侧,拍摄区域为摄像头的侧方区域。
在一些实施例中,摄像头也可以为多个,沿车辆四周布设,则车辆拍摄区域为车辆四周区域。例如,当摄像头的数量为四个,将摄像头分别布设在车身的前后左右位置,能够在以最少的摄像头拍摄到车身四周的路面情况,获取到四周的车道线图片。当然,若拍摄视野不足可增加摄像头与的数量。
Step S130: acquire the calibration parameters of the camera.
In some embodiments, the calibration parameters include an internal parameter matrix, a rotation matrix and a translation matrix. The internal parameter matrix is a key parameter for computing the transformation between the image pixel coordinate system and the world coordinate system. The internal parameter matrix and the distortion matrix of the camera can be obtained by Zhang Zhengyou's calibration method, or directly from the camera's manual or manufacturer. The distortion matrix is the key parameter for rectifying a distorted image into an undistorted one. When the image pixel coordinate system of a captured picture is converted into a world coordinate system, that world coordinate system must be transformed to coincide with the world coordinate system in which the vehicle model lies before subsequent computation can proceed; the rotation matrix and translation matrix of the camera are obtained in the course of this transformation.
In some embodiments, the rotation matrix and translation matrix of the camera can be computed by the five-point method: with the world coordinates and image pixel coordinates of five points known, four of the points are used as pose-estimation parameters and the remaining point serves as a reference for checking whether the pose estimation is accurate. Using the camera's internal parameter matrix, the distortion matrix and the four points, the rotation matrix and translation matrix of the camera can be derived.
In some embodiments, referring to Fig. 8, the origin O of the world coordinate system may be placed at the front of the vehicle, with the X axis of the world coordinate system perpendicular to the vehicle body. It should be noted that the orientation of the world coordinate system and the position of the origin O may be chosen in whatever way is convenient for computation. Whenever the camera position changes, the rotation matrix and translation matrix of the camera must be re-determined.
Step S140: determine, according to the lane line picture and the calibration parameters, a first line segment onto which the lane line is mapped in the world coordinate system.
In some embodiments, after the camera captures a picture it is sent to the processing component; once the processing component recognizes it as a lane line picture, it processes the picture and, combining the camera's calibration parameters, maps the lane line into the world coordinate system to obtain the first line segment. In the world coordinate system, the line set formed by the plurality of first coordinates represents the vehicle model, and the first line segment represents the lane line.
It should be noted that, since a lane line has a certain width, two edge segments of the lane line will be obtained after recognition; when converting into the world coordinate system, the segment on the side closer to the vehicle model may be chosen as the first line segment.
Step S150: determine the position relation between the lane line and the vehicle according to the position relation between the first line segment and the plurality of first coordinates in the world coordinate system.
In some embodiments, the first line segment in the world coordinate system has a corresponding function expression; from this expression and the first coordinates, the position relation between the first line segment and the vehicle model can be judged, and hence the position relation between the lane and the vehicle, so as to judge whether the vehicle is driving properly, for example whether it has crossed a line.
Another embodiment of the present invention further provides a method for detecting the position relation between a vehicle and a lane line. Referring to Fig. 2, which is a schematic diagram of one embodiment of the detailed flow of step S150 in Fig. 1, step S150 includes, but is not limited to, steps S210, S220, S230 and S240.
Step S210: determine the second coordinate obtained by projecting each first coordinate onto the first line segment.
Step S220: determine the position relation between each first coordinate and the first line segment, wherein the position relation between a first coordinate and the first line segment is determined according to that first coordinate and its corresponding second coordinate.
Step S230: when all the first coordinates lie on one side of the first line segment, determine that the position relation between the lane line and the vehicle is that the vehicle has not crossed the line.
Step S240: when only some of the first coordinates lie on one side of the first line segment, determine that the position relation between the lane line and the vehicle is that the vehicle has crossed the line.
In some embodiments, referring to Fig. 9, the lane line mapped into the world coordinate system is represented by first line segment J1, and second coordinate G' is the projection of first coordinate G onto first line segment J1. The ordinate of second coordinate G' can be obtained by the following formula:
Y_G' = k_J1 · X_G + b_J1
where Y_G' is the ordinate of second coordinate G', k_J1 is the slope of first line segment J1, b_J1 is the intercept of first line segment J1, and X_G is the abscissa of first coordinate G. Further, from the relative magnitudes of the ordinate Y_G' of second coordinate G' and the ordinate Y_G of first coordinate G, the position of first coordinate G relative to second coordinate G', and hence relative to first line segment J1, can be judged. The position of first coordinate G relative to first line segment J1 falls into the following cases:
when Y_G' > Y_G, second coordinate G' lies above first coordinate G, i.e. first coordinate G lies below first line segment J1;
when Y_G' < Y_G, second coordinate G' lies below first coordinate G, i.e. first coordinate G lies above first line segment J1;
when Y_G' = Y_G, second coordinate G' coincides with first coordinate G, i.e. first coordinate G lies on first line segment J1.
After the position of every first coordinate relative to the first line segment has been judged, the position relation between the lane line and the vehicle can be judged, with the following cases:
when all first coordinates lie on one side of the first line segment, the first line segment does not pass through the vehicle model and the vehicle has not crossed the line;
when only some first coordinates lie on one side of the first line segment, the first line segment passes through the vehicle model and the vehicle has crossed the line.
Specifically, the case where only some first coordinates lie on one side of the first line segment can be further divided: when some first coordinates lie on one side and some lie on the other side, the lane line is judged to pass through the vehicle; when some first coordinates lie on one side and the remaining first coordinates lie on the first line segment, the vehicle is judged to be exactly on the lane line.
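The side test of steps S210 to S240 can be sketched as follows. This is a minimal illustration (function names and the example line are illustrative, not from the patent): each first coordinate is compared with the ordinate of its projection on the line y = k·x + b, and the vehicle is deemed to cross the line when points fall on both sides.

```python
def side_of_line(point, k, b):
    """Return +1 if the point is above the line y = k*x + b, -1 below, 0 on it."""
    x, y = point
    y_proj = k * x + b          # ordinate of the second coordinate (projection)
    if y > y_proj:
        return 1
    if y < y_proj:
        return -1
    return 0

def crosses_line(corners, k, b):
    sides = {side_of_line(p, k, b) for p in corners}
    sides.discard(0)            # points exactly on the line decide nothing
    return len(sides) > 1       # both sides present -> the segment crosses the model

corners = [(-0.8, 0.0), (0.8, 0.0), (0.8, -4.5), (-0.8, -4.5)]
on_line = crosses_line(corners, 0.1, -2.0)   # line cuts through the body
clear = crosses_line(corners, 0.1, 5.0)      # line entirely on one side
```

With the first call the corners straddle the line, so the vehicle has crossed it; with the second, all corners lie below the line and the vehicle has not.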
It should be noted that when the absolute value of the slope of the first line segment exceeds a slope threshold, for example a threshold of 1, the first line segment lies essentially to the left or right of the vehicle model; in that case the relation between the abscissa of a first coordinate and the abscissa of its projection (the second coordinate) on the first line segment may be judged instead, which likewise yields the position relation between the lane line and the vehicle.
In some embodiments, when reversing into a parking space, two lane lines intersect to form a corner. Mapped into the world coordinate system, as shown in Fig. 9, first line segment J2 and first line segment J3 intersect to form a corner. Whether the vehicle touches the corner is judged as follows: the coordinates of intersection point I are obtained from the function expressions of first line segments J2 and J3, and whether the vehicle touches the corner is determined by judging, from point I and the line set representing the vehicle model, whether point I lies inside the vehicle model.
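The corner check can be sketched as below. This is an illustrative sketch only: the two lines are given as slope/intercept pairs and, for simplicity, the vehicle model is approximated here by an axis-aligned rectangle rather than the full eight-line model.

```python
def intersection(k2, b2, k3, b3):
    """Intersection point I of y = k2*x + b2 and y = k3*x + b3, or None if parallel."""
    if k2 == k3:
        return None
    x = (b3 - b2) / (k2 - k3)
    return (x, k2 * x + b2)

def inside_rect(point, x_min, x_max, y_min, y_max):
    """Point-in-model test for a rectangular approximation of the vehicle."""
    x, y = point
    return x_min <= x <= x_max and y_min <= y <= y_max

i = intersection(1.0, 0.0, -1.0, 2.0)        # lines cross at (1.0, 1.0)
touching = inside_rect(i, -0.9, 0.9, -4.5, 0.0)
```

Here the corner point falls outside the body outline, so the vehicle is not touching the corner; a point-in-polygon test against the full line set would replace `inside_rect` for the non-rectangular model.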
Another embodiment of the present invention further provides a method for detecting the position relation between a vehicle and a lane line. Referring to Fig. 3, which is a schematic diagram of one embodiment of the detailed flow of step S140 in Fig. 1, step S140 includes, but is not limited to, steps S310 and S320.
Step S310: determine, according to the lane line picture and the calibration parameters, two third coordinates of the lane line in the world coordinate system.
Step S320: determine the first line segment according to the two third coordinates.
In some embodiments, the calibration parameters are used to convert the image pixel coordinate system of the lane line picture into the world coordinate system. Since lane lines are straight in reality and a captured lane line always has a certain length, the two endpoints of the lane line are mapped into the world coordinate system to obtain two third coordinates, and connecting these two third coordinates yields the first line segment representing the lane line. By exploiting the straightness of lane lines, only two third coordinates in the world coordinate system need to be determined to map the lane line into the world coordinate system, which reduces the amount of computation and thereby speeds up the operation.
Another embodiment of the present invention further provides a method for detecting the position relation between a vehicle and a lane line. Referring to Fig. 4, which is a schematic diagram of one embodiment of the detailed flow of step S310 in Fig. 3, step S310 includes, but is not limited to, steps S410, S420, S430 and S440.
Step S410: determine a recognition region according to the lane line picture.
Step S420: determine a second line segment within the recognition region by Hough transform.
Step S430: determine the image pixel coordinates of the two endpoints of the second line segment according to the second line segment.
Step S440: map the image pixel coordinates into the world coordinate system according to the calibration parameters to obtain the two third coordinates of the lane line in the world coordinate system.
In some embodiments, after image processing of the acquired lane line picture the recognition region can be determined, and the coordinates of the upper-left corner of the recognition region are computed by the following formulas:
X_u = X_0 - W/2;
Y_u = Y_0 - H/2;
where X_0 is the abscissa of the center point of the recognition region, Y_0 is the ordinate of the center point of the recognition region, W is the width of the recognition region, H is the height of the recognition region, and X_u and Y_u are the abscissa and ordinate of the upper-left corner of the recognition region.
Since lane lines are straight, the Hough transform can find the straight line in the recognition region, namely the second line segment. The two endpoints of the second line segment, expressed in recognition-region coordinates, are converted into image pixel coordinates as follows:
X_l = X_u + X_ul;
Y_l = Y_u + Y_ul;
where X_ul and Y_ul are the abscissa and ordinate of an endpoint of the second line segment within the recognition region, and X_l and Y_l are the image pixel abscissa and ordinate of that endpoint.
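The two coordinate offsets above can be sketched directly (names are illustrative): from a recognition region given by its center, width and height, recover the upper-left corner, then lift an endpoint from region coordinates back into image pixel coordinates.

```python
def region_top_left(x0, y0, w, h):
    """Upper-left corner (X_u, Y_u) of a recognition region from its center and size."""
    return (x0 - w / 2, y0 - h / 2)

def to_image_pixels(endpoint, top_left):
    """Convert an endpoint from recognition-region coordinates to image pixels."""
    xul, yul = endpoint
    xu, yu = top_left
    return (xu + xul, yu + yul)

top_left = region_top_left(320, 240, 100, 60)   # -> (270.0, 210.0)
pixel = to_image_pixels((12, 8), top_left)      # -> (282.0, 218.0)
```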
After the image pixel coordinates are obtained, they are converted into world coordinates according to the following conversion formula:
Z_c · [u, v, 1]^T = K · (R · [X_w, Y_w, Z_w]^T + T)
where K is the internal parameter matrix,
K = [[cmx, 0, u_0], [0, cmy, v_0], [0, 0, 1]]
(u_0 and v_0 denoting the principal point), R is the rotation matrix, T is the translation vector, Z_c is the scale factor, u and v are the image pixel coordinates, and X_w, Y_w, Z_w are the world coordinates. Since the lane line on the ground lies in a two-dimensional space, Z_w = 0, and from Z_w = 0 the scale factor Z_c can be solved.
In the internal parameter matrix, cmx and cmy are computed as:
cmx = f / dx
cmy = f / dy
where f is the focal length of the camera, and dx and dy are the physical lengths along the X and Y axes of the sensor-plate coordinate system occupied by one pixel of the image pixel coordinate system.
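The pixel-to-world back-projection with Z_w = 0 can be sketched as below, assuming the pinhole model Z_c·[u, v, 1]^T = K·(R·[X_w, Y_w, Z_w]^T + T); the matrices here are illustrative, not calibrated values. Writing M = R⁻¹K⁻¹[u, v, 1]^T and N = R⁻¹T, the ground-plane constraint gives Z_c = N_z / M_z.

```python
import numpy as np

def pixel_to_ground(u, v, K, R, T):
    """Map an image pixel (u, v) to ground-plane world coordinates (Z_w = 0)."""
    m = np.array([u, v, 1.0])
    M = np.linalg.inv(R) @ np.linalg.inv(K) @ m
    N = np.linalg.inv(R) @ T
    z_c = N[2] / M[2]              # scale factor solved from Z_w = 0
    world = z_c * M - N            # equals [X_w, Y_w, 0]
    return world[0], world[1]

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                      # camera axes aligned with the world axes
T = np.array([0.0, 0.0, 1.5])      # camera 1.5 units off the ground plane
xw, yw = pixel_to_ground(400.0, 300.0, K, R, T)
```

For this aligned camera the pixel (400, 300) maps to a ground point offset from the optical axis in proportion to the height and focal length.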
Another embodiment of the present invention further provides a method for detecting the position relation between a vehicle and a lane line. Referring to Fig. 5, which is a schematic diagram of one embodiment of the detailed flow of step S410 in Fig. 4, step S410 includes, but is not limited to, steps S510, S520, S530, S540 and S550.
Step S510: convert the lane line picture into a grayscale image.
In some embodiments, grayscale conversion of the lane line picture better brings out the color gradient, which benefits subsequent image processing.
Step S520: filter the grayscale image to obtain a noise-reduced image.
In some embodiments, filtering removes most of the Gaussian noise in the picture, such as dust and raindrops; setting the Gaussian kernel to a suitable size adapts the filter to different environmental conditions.
Step S530: apply distortion rectification to the noise-reduced image to obtain a rectified image.
In some embodiments, real lane lines are straight, but the pictures captured by the camera are deformed to some extent, so the pictures need distortion rectification. The rectified image is free of distortion and provides more accurate image pixel coordinates; rectification requires the camera's internal parameter matrix and distortion matrix.
Step S540: apply edge detection to the rectified image to obtain picture data.
In some embodiments, edge detection may use the Canny operator; the edge-detection map reflects the edges of objects well.
Step S550: feed the picture data into a recognition model to obtain the recognition region.
In some embodiments, the lane line picture yields picture data after the series of processing steps above, and feeding the picture data into the recognition model yields the recognition region.
It should be noted that the recognition model may also recognize whether road lines are solid or dashed. The recognition model can be built by first collecting picture samples, annotating them with labeling software to form training files, and feeding the training files into a deep neural network for training. The deep learning network may be a target-detection framework such as Yolov3 or SSD, as long as it can recognize solid and dashed lines in the image and output the recognition type and recognition region. The recognition model may of course also output the width and height of the recognition region and the coordinates of its center point.
It should be noted that the Hough transform parameters need to be adjusted for different recognition types, because dashed and solid lines are distributed differently in image space. A solid line appears as one long continuous stretch in the image, while a dashed line appears as a series of short stretches; if both were recognized with the same parameters, a solid line might be recognized as two or more straight lines, or no straight line might be recognized for a dashed line.
Another embodiment of the present invention further provides a method for detecting the position relation between a vehicle and a lane line. Referring to Fig. 6, which is a schematic diagram of another embodiment of Fig. 1: when one lane line maps to multiple first line segments in the world coordinate system, steps S610, S620 and S630 are executed, and they are executed before step S150.
Step S610: compute the first distances between the multiple first line segments and a fourth coordinate, wherein the fourth coordinate is an arbitrary point in the world coordinate system.
Step S620: determine the slopes of the multiple first line segments.
Step S630: when the difference between the slopes of any two first line segments is smaller than a first threshold and the difference between the first distances of those two first line segments is smaller than a second threshold, merge the two first line segments into one new first line segment.
In some embodiments, the same lane line in one lane line picture may map to multiple first line segments. Referring to Fig. 10, within the same recognition region 100, first line segments J4, J5 and J6 may in fact be one straight line that was recognized as several because of low picture resolution, the environment, parameter settings and so on; first line segments J4, J5 and J6 then need to be fitted into one new first line segment J7. It must first be judged whether these first line segments were originally the same one, which can be done from the slopes of the first line segments and their first distances to the fourth coordinate; in this embodiment the fourth coordinate is chosen as the origin O. When the slope differences of multiple first line segments are within the first threshold, these segments have the same direction; further, when the first-distance differences are within the second threshold, these segments are in fact one segment, and line fitting is performed on them. The specific fitting procedure is to take the averages of the slopes and of the first distances of the segments determined to be one segment, from which a new first line segment representing the lane line can be fitted.
In some embodiments, after first line segments have been fitted within one recognition region, different cameras may capture the same lane line; the first line segments obtained by mapping the lane line pictures from multiple cameras into the world coordinate system then need to be fitted as well, i.e. first line segments representing the same lane line in different recognition regions are fitted. The judgment and fitting proceed in the same way: segments are deemed one segment when their slope differences are within the first threshold and their first-distance differences to the fourth coordinate (again chosen as the origin O) are within the second threshold, and the new first line segment is fitted from the averaged slope and averaged first distance.
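Steps S610 to S630 can be sketched as follows. This is an illustrative sketch only: segments are represented by (slope, intercept) pairs, the fourth coordinate is the origin O, and the threshold values are arbitrary.

```python
import math

def origin_distance(k, b):
    """First distance: distance from the origin O to the line y = k*x + b."""
    return abs(b) / math.sqrt(k * k + 1.0)

def merge_segments(segments, slope_tol, dist_tol):
    """Group segments whose slopes and origin distances both agree within
    thresholds, and fit one new segment (averaged slope, averaged distance)
    per group."""
    merged, used = [], [False] * len(segments)
    for i, (ki, bi) in enumerate(segments):
        if used[i]:
            continue
        group = [(ki, bi)]
        used[i] = True
        for j in range(i + 1, len(segments)):
            kj, bj = segments[j]
            if (not used[j]
                    and abs(ki - kj) < slope_tol
                    and abs(origin_distance(ki, bi)
                            - origin_distance(kj, bj)) < dist_tol):
                group.append((kj, bj))
                used[j] = True
        k_avg = sum(k for k, _ in group) / len(group)
        d_avg = sum(origin_distance(k, b) for k, b in group) / len(group)
        merged.append((k_avg, d_avg))
    return merged

# two near-identical segments merge; the third stays separate
merged = merge_segments([(1.00, 2.0), (1.02, 2.1), (5.0, 0.5)], 0.1, 0.5)
```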
Another embodiment of the present invention further provides a method for detecting the position relation between a vehicle and a lane line. Referring to Fig. 7, which is a schematic diagram of another embodiment of Fig. 1, the method provided by this embodiment includes, but is not limited to, steps S710, S720, S730 and S740.
Step S710: determine the slope of the first line segment.
Step S720: determine the intercept of the first line segment.
Step S730: determine the second distance between a fifth coordinate and the first line segment, wherein the fifth coordinate is set at the front of the vehicle model.
Step S740: determine the angle between the vehicle and the lane line according to the slope of the first line segment, the intercept of the first line segment and the second distance.
In some embodiments, referring to Fig. 11, the slope and intercept of first line segment J8 can be determined from the two third coordinates. The fifth coordinate represents the coordinate of the vehicle front; the front of the vehicle model is set at origin O, so the fifth coordinate is the coordinate of origin O. The front of the vehicle model faces the same direction as the Y axis of the world coordinate system, i.e. the body of the vehicle model is perpendicular to the X axis of the world coordinate system. The distance d from the fifth coordinate to first line segment J8 is obtained by the point-to-line distance formula:
d = |b_1| / √(k_1² + 1)
where b_1 is the intercept of first line segment J8 and k_1 is the slope of first line segment J8.
The deflection angle α between the vehicle model and first line segment J8 can be solved according to the following formulas:
sin α = d / |b_1|;
α = arcsin(d / |b_1|).
With the deflection angle between the vehicle and the lane line computed, the vehicle direction can be adjusted in time according to this angle so as to drive correctly.
It should be noted that the deflection angle between the rear of the vehicle and the first line segment can also be computed in the above manner.
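The deflection-angle computation can be sketched as below, under the assumptions stated in this embodiment: the fifth coordinate is the origin O, the vehicle heading is the Y axis, and the angle is taken as α = arcsin(d / |b1|) for the line y = k1·x + b1 (the two formula placeholders in the published text are reconstructed here on that assumption).

```python
import math

def deflection_angle(k1, b1):
    """Distance d from the origin O to y = k1*x + b1, and deflection angle alpha."""
    d = abs(b1) / math.sqrt(k1 * k1 + 1.0)   # point-to-line distance from O
    alpha = math.asin(d / abs(b1))           # angle between heading (Y axis) and line
    return d, alpha

d, alpha = deflection_angle(1.0, 2.0)
```

For k1 = 1 the line makes 45 degrees with the X axis and hence 45 degrees with the Y-axis heading, which is what the formula returns.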
An embodiment of the present invention further provides a system for detecting the position relation between a vehicle and a lane line, including several cameras and a processing component. The cameras are mounted on the vehicle body and are used to capture lane line pictures. The processing component is configured to acquire the lane line pictures captured by the cameras, acquire the calibration parameters of the cameras, and acquire the vehicle model, wherein the vehicle model is represented by a plurality of first coordinates in a world coordinate system; the processing component determines, according to the lane line picture and the calibration parameters, a first line segment onto which the lane line is mapped in the world coordinate system, and determines the position relation between the lane line and the vehicle according to the position relation between the first line segment and the plurality of first coordinates in the world coordinate system.
An embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being executed by one or more control processors, for example to execute the method steps S110 to S150 in Fig. 1, steps S210 to S240 in Fig. 2, steps S310 to S320 in Fig. 3, steps S410 to S440 in Fig. 4, steps S510 to S550 in Fig. 5, steps S610 to S630 in Fig. 6, and steps S710 to S740 in Fig. 7 described above.
A person of ordinary skill in the art will understand that all or some of the steps and systems in the methods disclosed above may be implemented as software, firmware, hardware, or suitable combinations thereof. Some or all physical components may be implemented as software executed by a processor such as a central processing unit, digital signal processor or microprocessor, as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
The preferred embodiments of the present invention have been described in detail above, but the present invention is not limited to the above embodiments; those skilled in the art may make various equivalent modifications or substitutions without departing from the spirit of the present invention, and all such equivalent modifications or substitutions are included within the scope defined by the claims of the present invention.

Claims (10)

  1. A method for detecting the position relation between a vehicle and a lane line, characterized by comprising the following steps:
    acquiring a vehicle model, wherein the vehicle model is represented by a plurality of first coordinates in a world coordinate system;
    acquiring a lane line picture, wherein the lane line picture is captured by a camera mounted on a vehicle;
    acquiring calibration parameters of the camera;
    determining, according to the lane line picture and the calibration parameters, a first line segment onto which the lane line is mapped in the world coordinate system;
    determining the position relation between the lane line and the vehicle according to the position relation between the first line segment and the plurality of first coordinates in the world coordinate system.
  2. The method for detecting the position relation between a vehicle and a lane line according to claim 1, characterized in that the vehicle model is obtained by the following steps:
    acquiring length and width information of the vehicle;
    mapping the length and width information into the world coordinate system to obtain the vehicle model.
  3. The method for detecting the position relation between a vehicle and a lane line according to claim 1, characterized in that the determining the position relation between the lane line and the vehicle according to the position relation between the first line segment and the plurality of first coordinates in the world coordinate system comprises the following steps:
    determining a second coordinate obtained by projecting each first coordinate onto the first line segment;
    determining the position relation between each first coordinate and the first line segment, wherein the position relation between a first coordinate and the first line segment is determined according to that first coordinate and its corresponding second coordinate;
    when all the first coordinates lie on one side of the first line segment, determining that the position relation between the lane line and the vehicle is that the vehicle has not crossed the line;
    when only some of the first coordinates lie on one side of the first line segment, determining that the position relation between the lane line and the vehicle is that the vehicle has crossed the line.
  4. The method for detecting the position relation between a vehicle and a lane line according to claim 1, characterized in that the determining, according to the lane line picture and the calibration parameters, a first line segment onto which the lane line is mapped in the world coordinate system comprises the following steps:
    determining, according to the lane line picture and the calibration parameters, two third coordinates of the lane line in the world coordinate system;
    determining the first line segment according to the two third coordinates.
  5. The method for detecting the position relation between a vehicle and a lane line according to claim 4, characterized in that the determining, according to the lane line picture and the calibration parameters, two third coordinates of the lane line in the world coordinate system comprises the following steps:
    determining a recognition region according to the lane line picture;
    determining a second line segment within the recognition region by Hough transform;
    determining the image pixel coordinates of the two endpoints of the second line segment according to the second line segment;
    mapping the image pixel coordinates into the world coordinate system according to the calibration parameters to obtain the two third coordinates of the lane line in the world coordinate system.
  6. The method for detecting the position relation between a vehicle and a lane line according to claim 5, characterized in that the determining a recognition region according to the lane line picture comprises the following steps:
    converting the lane line picture into a grayscale image;
    filtering the grayscale image to obtain a noise-reduced image;
    applying distortion rectification to the noise-reduced image to obtain a rectified image;
    applying edge detection to the rectified image to obtain picture data;
    feeding the picture data into a recognition model to obtain the recognition region.
  7. The method for detecting the position relation between a vehicle and a lane line according to claim 1, characterized in that, before the step of determining the position relation between the lane line and the vehicle according to the position relation between the first line segment and the plurality of first coordinates in the world coordinate system, when a plurality of first line segments are recognized, the method further comprises the following steps:
    computing first distances between the plurality of first line segments and a fourth coordinate, wherein the fourth coordinate is an arbitrary point in the world coordinate system;
    determining the slopes of the plurality of first line segments;
    when the difference between the slopes of any two first line segments is smaller than a first threshold and the difference between the first distances of those two first line segments is smaller than a second threshold, merging the two first line segments into one new first line segment.
  8. The method for detecting the position relation between a vehicle and a lane line according to claim 1, characterized in that the front of the vehicle model is located at the origin of the world coordinate system, the front of the vehicle model faces the same direction as the Y axis of the world coordinate system, and the method further comprises the following steps:
    determining the slope of the first line segment;
    determining the intercept of the first line segment;
    determining a second distance between a fifth coordinate and the first line segment, wherein the fifth coordinate is set at the front of the vehicle model;
    determining the angle between the vehicle and the lane line according to the slope of the first line segment, the intercept of the first line segment and the second distance.
  9. A system for detecting the position relation between a vehicle and a lane line, characterized by comprising:
    several cameras, mounted on the vehicle body and used to capture lane line pictures;
    a processing component, configured to acquire the lane line picture, acquire calibration parameters of the cameras, and acquire a vehicle model, wherein the vehicle model is represented by a plurality of first coordinates in a world coordinate system; the processing component determines, according to the lane line picture and the calibration parameters, a first line segment onto which the lane line is mapped in the world coordinate system, and determines the position relation between the lane line and the vehicle according to the position relation between the first line segment and the plurality of first coordinates in the world coordinate system.
  10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer-executable instructions, the computer-executable instructions being used to cause a computer to execute the method for detecting the position relation between a vehicle and a lane line according to any one of claims 1 to 8.
PCT/CN2021/114250 2020-10-16 2021-08-24 Method, system and storage medium for detecting the position relation between a vehicle and lane lines WO2022078074A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/300,737 US20230252677A1 (en) 2020-10-16 2023-04-14 Method and system for detecting position relation between vehicle and lane line, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011107336.XA 2020-10-16 Method, system and storage medium for detecting the position relation between a vehicle and lane lines
CN202011107336.X 2020-10-16

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/300,737 Continuation US20230252677A1 (en) 2020-10-16 2023-04-14 Method and system for detecting position relation between vehicle and lane line, and storage medium

Publications (1)

Publication Number Publication Date
WO2022078074A1 true WO2022078074A1 (zh) 2022-04-21

Family

ID=74244215

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/114250 WO2022078074A1 (zh) 2020-10-16 2021-08-24 Method, system and storage medium for detecting the position relation between a vehicle and lane lines

Country Status (4)

Country Link
US (1) US20230252677A1 (zh)
CN (1) CN112257539A (zh)
LU (1) LU502288B1 (zh)
WO (1) WO2022078074A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112257539A (zh) * 2020-10-16 2021-01-22 广州大学 Method, system and storage medium for detecting the position relation between a vehicle and lane lines
CN113205687B (zh) * 2021-04-30 2022-07-05 广州大学 Drunk-driving vehicle trajectory recognition system based on video surveillance
CN113256665B (zh) * 2021-05-26 2023-08-08 长沙以人智能科技有限公司 Method for detecting the position relation between a motor vehicle and solid/dashed lines based on image processing
CN113378735B (zh) * 2021-06-18 2023-04-07 北京东土科技股份有限公司 Road marking line recognition method and apparatus, electronic device and storage medium
CN113674358A (zh) * 2021-08-09 2021-11-19 浙江大华技术股份有限公司 Calibration method and apparatus for a radar-video device, computing device and storage medium
CN114274948A (zh) * 2021-12-15 2022-04-05 武汉光庭信息技术股份有限公司 Automatic parking method and apparatus based on a 360-degree panorama

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463935A (zh) * 2014-11-11 2015-03-25 中国电子科技集团公司第二十九研究所 Lane reconstruction method and system for traffic accident reconstruction
CN104859563A (zh) * 2015-05-28 2015-08-26 北京汽车股份有限公司 Lane departure warning method and system
CN108052908A (zh) * 2017-12-15 2018-05-18 郑州日产汽车有限公司 Lane keeping method
CN110956081A (zh) * 2019-10-14 2020-04-03 广东星舆科技有限公司 Method, apparatus and storage medium for recognizing the position relation between a vehicle and traffic markings
CN112257539A (zh) * 2020-10-16 2021-01-22 广州大学 Method, system and storage medium for detecting the position relation between a vehicle and lane lines

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101894271B (zh) * 2010-07-28 2012-11-07 重庆大学 Visual computation and early-warning method for the angle and distance of a vehicle deviating from lane lines
CN106682646B (zh) * 2017-01-16 2020-12-22 北京新能源汽车股份有限公司 Lane line recognition method and apparatus
CN108332979B (zh) * 2018-02-08 2020-07-07 青岛平行智能产业管理有限公司 Vehicle line-crossing detection method
CN108776767B (zh) * 2018-04-18 2019-12-17 福州大学 System for effectively judging vehicle line-crossing and giving advance warning
CN109624976B (zh) * 2018-12-25 2020-10-16 广州小鹏汽车科技有限公司 Lane keeping control method and system for a vehicle


Also Published As

Publication number Publication date
LU502288B1 (en) 2022-08-16
CN112257539A (zh) 2021-01-22
US20230252677A1 (en) 2023-08-10


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21879120

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21879120

Country of ref document: EP

Kind code of ref document: A1