WO2021051344A1 - Method and device for determining lane lines in a high-precision map - Google Patents

Method and device for determining lane lines in a high-precision map

Info

Publication number
WO2021051344A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
lane line
target
dimensional point
points
Application number
PCT/CN2019/106648
Other languages
English (en)
French (fr)
Inventor
孙路
周游
朱振宇
Original Assignee
深圳市大疆创新科技有限公司
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201980033197.XA (publication CN112154445A)
Priority to PCT/CN2019/106648
Publication of WO2021051344A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/50 — Context or environment of the image
    • G06V20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 — Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Description

  • The present disclosure relates to the field of map processing, and in particular to a method for determining lane lines in a high-precision map, a device for determining lane lines in a high-precision map, an electronic device, and an autonomous driving vehicle.
  • In the field of autonomous driving, recognizing the lane lines in the road is very important. In the related art, there are two main ways to obtain lane lines: one is to detect lane lines in real time from the current environment image; the other is to obtain pre-marked lane lines from a high-precision map to determine the positions of the lane lines in the environment.
  • Since the lane lines in a high-precision map need to be marked in advance, the existing way of marking them is mainly manual. Manual marking in a map is relatively accurate for two-dimensional images, but three-dimensional images are generally generated based on lidar: such images generally carry no color information and are also affected by obstacles on the road surface, making it difficult for annotators to distinguish which positions on the road belong to a lane line. As a result, the accuracy of manually marking lane lines in three-dimensional images is low.
  • Since a high-precision map is a three-dimensional image, marking lane lines in a high-precision map can hardly reach the desired accuracy. Moreover, manually marking lane lines in a high-precision map requires a large number of repeated operations, so the marking speed is slow and the efficiency is low.
  • The present disclosure proposes a method for determining lane lines in a high-precision map, a device for determining lane lines in a high-precision map, an electronic device, and an autonomous driving vehicle, to solve the technical problems in the related art that manually marking lane lines in a high-precision map has low accuracy and low efficiency.
  • According to a first aspect of the embodiments of the present disclosure, a method for determining lane lines in a high-precision map is proposed, including:
  • acquiring an environment image and a three-dimensional point cloud of a vehicle driving environment;
  • determining a lane line area in the environment image;
  • projecting the points in the three-dimensional point cloud onto the environment image, and determining target points located in the lane line area;
  • fitting the target points to determine a lane line.
  • According to a second aspect of the embodiments of the present disclosure, a device for determining lane lines in a high-precision map is proposed, including one or more processors working individually or in cooperation, the processors being configured to execute:
  • acquiring an environment image and a three-dimensional point cloud of a vehicle driving environment;
  • determining a lane line area in the environment image;
  • projecting the points in the three-dimensional point cloud onto the environment image, and determining target points located in the lane line area;
  • fitting the target points to determine a lane line.
  • According to a third aspect of the embodiments of the present disclosure, an electronic device is proposed, which includes the device for determining lane lines in a high-precision map according to any of the foregoing embodiments.
  • According to a fourth aspect of the embodiments of the present disclosure, an autonomous driving vehicle is proposed, which includes the electronic device described in the foregoing embodiment.
  • According to the embodiments of the present disclosure, the environment image and the three-dimensional point cloud can be combined to determine the target points of the three-dimensional point cloud that are located in the lane line area, and the lane line is then determined by fitting the target points.
  • The three-dimensional point cloud can serve as a high-precision map, and the above process of determining lane lines largely requires no manual participation, which facilitates semi-automatic or even fully automatic determination of lane lines in high-precision maps: when a large number of lane lines must be determined repeatedly, the operation can be completed quickly and efficiently, and the accuracy of the determined lane lines can be improved.
  • Fig. 1 is a schematic flowchart of a method for determining lane lines in a high-precision map according to an embodiment of the present disclosure.
  • Fig. 2 is a schematic diagram of determining a lane line area in an environment image according to an embodiment of the present disclosure.
  • Fig. 3 is a schematic diagram of projecting the points in the three-dimensional point cloud onto the environment image according to an embodiment of the present disclosure.
  • Fig. 4 is a schematic diagram of fitting target points to determine a lane line according to an embodiment of the present disclosure.
  • Fig. 5 is a schematic flowchart of another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure.
  • Fig. 6 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure.
  • Fig. 7 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure.
  • Fig. 8 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure.
  • Fig. 9 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure.
  • Fig. 10 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure.
  • Fig. 11 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure.
  • Fig. 12 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure.
  • Fig. 13 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure.
  • Fig. 14 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure.
  • Fig. 15 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure.
  • Fig. 16 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure.
  • Fig. 17 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure.
  • Fig. 18 is a schematic diagram of a hardware structure of the equipment in which a device for determining lane lines in a high-precision map is located, according to an embodiment of the present disclosure.
  • Fig. 1 is a schematic flowchart of a method for determining lane lines in a high-precision map according to an embodiment of the present disclosure. The method described in the embodiments of the present disclosure may be applied to an image acquisition device capable of acquiring an environment image and a three-dimensional point cloud of the vehicle driving environment, and may also be applied to other electronic devices capable of analyzing and processing the environment image and the three-dimensional point cloud, such as terminals, servers, and in-vehicle devices.
  • As shown in Fig. 1, the method for determining lane lines in a high-precision map may include the following steps:
  • In step S1, an environment image and a three-dimensional point cloud of the vehicle driving environment are acquired.
  • In one embodiment, the environment image can be captured by an image acquisition device such as a camera, and the three-dimensional point cloud can be collected by a lidar.
  • In step S2, a lane line area is determined in the environment image.
  • In one embodiment, the lane line area can be determined in the acquired environment image.
  • Specifically, the lane line area can be determined in the environment image according to a predetermined image recognition model. For example, an image recognition model (for example, a designed neural network) can be obtained in advance through machine learning; the model can determine the lane line area in an image according to the input image, so the acquired environment image can be input into the image recognition model to determine the lane line area in the environment image.
  • In other implementations, the road surface area can be determined in the environment image first, and the lane line area is then determined within the road surface area. In this way it is not necessary to analyze all the information in the environment image, which narrows the information on which the determination of the lane line area is based and helps reduce misjudgments.
  • Fig. 2 is a schematic diagram of determining a lane line area in an environment image according to an embodiment of the present disclosure. As shown in Fig. 2, when the lane line area is determined in the environment image, it can be marked with a specific color, such as white, to distinguish the lane line area from the other areas; different objects in the other areas, such as static objects and dynamic objects, can also be marked with different colors for distinction (a rough sketch of this step follows).
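  • As an illustration only, the sketch below (Python with NumPy) shows the expected shape of this step. Here `segment_lane_lines` is a hypothetical stand-in for the pre-trained image recognition model, reduced to a simple brightness threshold purely so the sketch runs; the detected area is marked white, as in Fig. 2.

```python
import numpy as np

def segment_lane_lines(image):
    """Hypothetical stand-in for the pre-trained image recognition model.

    The disclosure assumes a machine-learned model (e.g., a neural network)
    mapping an input image to a lane-line mask; a brightness threshold is
    used here only so the sketch runs.
    """
    return image.mean(axis=2) > 200  # lane paint is typically bright

def visualize_lane_area(image, mask):
    """Mark the detected lane line area white, as in Fig. 2."""
    out = image.copy()
    out[mask] = (255, 255, 255)
    return out

env_image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
lane_mask = segment_lane_lines(env_image)
print("lane-line pixels:", int(lane_mask.sum()))
```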
  • In step S3, the points in the three-dimensional point cloud are projected onto the environment image, and the target points located in the lane line area are determined.
  • In one embodiment, the intrinsic parameters of the image acquisition device that captures the environment image can be obtained; then, based on the intrinsic parameters and on the rotation and translation relationships from the world coordinate system to the coordinate system of the image acquisition device, the first coordinates of the points of the three-dimensional point cloud in the world coordinate system are converted into second coordinates in the environment image.
  • For example, the first coordinate of a point of the three-dimensional point cloud in the world coordinate system is (x_w, y_w, z_w), and the corresponding second coordinate of that point in the environment image is (μ, ν, 1). The relationship between the first coordinate and the second coordinate is:

$$ z_c \begin{bmatrix} \mu \\ \nu \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & \gamma & \mu_0 \\ 0 & a_y & \nu_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} $$

  • The image acquisition device has five intrinsic parameters: a_x = f·m_x, a_y = f·m_y, γ, μ_0, and ν_0, where f is the focal length of the image acquisition device, m_x and m_y are the scale factors (pixels per unit distance) in the x and y directions, γ is the skew parameter between the x and y directions, and (μ_0, ν_0) is the principal point.
  • z_c is the scale factor of the homogeneous coordinates; the matrix R is the rotation matrix, representing the rotation from the world coordinate system to the coordinate system of the image acquisition device; the matrix T is the translation matrix, representing the corresponding displacement. The matrices R and T constitute the extrinsic matrix of the image acquisition device.
  • After the first coordinates of the points of the three-dimensional point cloud in the world coordinate system are converted into second coordinates in the environment image, the coordinate of each point of the point cloud in the environment image is known. Since the lane line area has already been determined, the target points located in the lane line area, that is, the points of the three-dimensional point cloud that fall within the lane line area, can be determined from the relationship between the second coordinates and the lane line area.
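  • The conversion and the selection of target points can be sketched as follows: a minimal NumPy implementation of the projection equation above, where the intrinsic matrix K and extrinsics R, T are assumed known from calibration, and `lane_mask` is the lane line area determined in step S2.

```python
import numpy as np

def project_points(points_w, K, R, T):
    """Project Nx3 world-coordinate points into the image.

    Implements z_c * [u, v, 1]^T = K [R | T] [x_w, y_w, z_w, 1]^T:
    world -> camera frame with the extrinsics (R, T), then camera
    frame -> pixels with the intrinsic matrix K.
    """
    pts_cam = points_w @ R.T + T   # apply extrinsics
    uvw = pts_cam @ K.T            # apply intrinsics (homogeneous pixels)
    z_c = uvw[:, 2]                # scale factor of the homogeneous coords
    uv = uvw[:, :2] / z_c[:, None]
    return uv, z_c

def select_target_points(points_w, uv, z_c, lane_mask):
    """Keep points that project, with positive depth, into the lane area."""
    h, w = lane_mask.shape
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    ok = (z_c > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    hit = np.zeros(len(points_w), dtype=bool)
    hit[ok] = lane_mask[v[ok], u[ok]]
    return points_w[hit]

# Toy usage with assumed calibration values.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R, T = np.eye(3), np.array([0.0, 0.0, 5.0])
pts = np.random.uniform(-2, 2, (1000, 3))
mask = np.zeros((480, 640), dtype=bool)
mask[200:280, :] = True
uv, z = project_points(pts, K, R, T)
print(len(select_target_points(pts, uv, z, mask)), "target points")
```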
  • In step S4, the target points are fitted to determine the lane line.
  • In one embodiment, after the target points located in the lane line area are determined, they may be fitted to determine the lane line; for example, the target points may be fitted with a Bezier curve. Since the target points are located in the lane line area, the curve obtained by fitting them can serve as the lane line.
  • According to the embodiments of the present disclosure, the environment image and the three-dimensional point cloud can thus be combined to determine the target points of the three-dimensional point cloud located in the lane line area, and the lane line is then determined by fitting the target points.
  • The three-dimensional point cloud can serve as a high-precision map, and the above process of determining lane lines largely requires no manual participation, which facilitates semi-automatic or even fully automatic determination of lane lines in high-precision maps: when a large number of lane lines must be determined repeatedly, the operation can be completed quickly and efficiently, and the accuracy of the determined lane lines can be improved.
  • Fig. 3 is a schematic diagram of projecting the points in the three-dimensional point cloud onto the environment image according to an embodiment of the present disclosure.
  • Fig. 4 is a schematic diagram of fitting target points to determine a lane line according to an embodiment of the present disclosure.
  • As shown in Fig. 3, the points in the three-dimensional point cloud can be projected onto the environment image, and the points located in the lane line area are determined as the target points.
  • The target points can then be fitted, for example with a Bezier curve; a three-dimensional bird's-eye view of the fitted lane line is shown in Fig. 4.
  • Taking fitting the target points with a second-order Bezier curve as an example, a starting point A, an end point C, and a point B can be selected from the multiple target points, with point B located between the starting point A and the end point C.
  • A control point D is determined on segment AB, and a control point E on segment BC, such that AD/AB = BE/BC = k, where k is a preset ratio; then a point F is determined on segment DE such that DF/DE = k.
  • While keeping AD/AB = BE/BC = k, the positions of the control points D and E are changed, and for each new position the corresponding point F with DF/DE = k is determined; all points F are determined in this way.
  • Finally, starting from point A and moving toward point C, all points F are connected in sequence; the resulting curve is the second-order Bezier curve fitted through points A, B, and C (a sketch follows below).
  • It should be noted that, in addition to second-order Bezier fitting, third-order or higher-order Bezier curves can also be used, which can be selected as required.
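  • The construction described above is the de Casteljau evaluation of a quadratic Bezier curve; a minimal sketch (the sample count is an arbitrary choice):

```python
import numpy as np

def quadratic_bezier(A, B, C, n_samples=50):
    """Trace the curve via the construction in the text: D on AB and E on
    BC with AD/AB = BE/BC = k, then F on DE with DF/DE = k; varying k in
    [0, 1] and connecting the points F yields the second-order Bezier
    curve (the de Casteljau construction).
    """
    A, B, C = (np.asarray(p, dtype=float) for p in (A, B, C))
    pts = []
    for k in np.linspace(0.0, 1.0, n_samples):
        D = A + k * (B - A)   # control point D on segment AB
        E = B + k * (C - B)   # control point E on segment BC
        F = D + k * (E - D)   # point F on segment DE
        pts.append(F)
    return np.stack(pts)

curve = quadratic_bezier((0, 0), (1, 2), (2, 0))
print(curve[0], curve[-1])  # the curve starts at A and ends at C
```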
  • Fig. 5 is a schematic flowchart of another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure. As shown in Fig. 5, determining the lane line area in the environment image includes:
  • In step S201, the lane line area is determined in the environment image according to a predetermined image recognition model.
  • In one embodiment, the environment image and the three-dimensional point cloud of the vehicle driving environment may be acquired first, and the lane line area may then be determined in the environment image according to a predetermined image recognition model.
  • For example, an image recognition model (for example, a designed neural network) can be obtained in advance through machine learning; the model can determine the lane line area in an image according to the input image, so the acquired environment image can be input into the image recognition model to determine the lane line area in the environment image.
  • The points in the three-dimensional point cloud may then be projected onto the environment image to determine the target points located in the lane line area, and finally the target points may be fitted to determine the lane line.
  • Fig. 6 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure. As shown in Fig. 6, determining the lane line area in the environment image includes:
  • In step S202, a road surface area is determined in the environment image;
  • In step S203, a lane line area is determined in the road surface area.
  • In one embodiment, the environment image and the three-dimensional point cloud of the vehicle driving environment may be acquired first, and the road surface area may then be determined in the acquired environment image.
  • For example, a road surface recognition model can be obtained in advance through machine learning; the model can determine the road surface area in an image according to the input image, so the acquired environment image can be input into the road surface recognition model to determine the road surface area in the environment image. The image of the determined road surface area is then input into the image recognition model obtained in advance through machine learning, which determines the lane line area in the input image, thereby determining the lane line area within the road surface area.
  • The points in the three-dimensional point cloud may then be projected onto the environment image to determine the target points located in the lane line area, and finally the target points may be fitted to determine the lane line.
  • Fig. 7 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure. As shown in Fig. 7, before the points in the three-dimensional point cloud are projected onto the environment image, the method further includes:
  • In step S5, obstacle points belonging to obstacles are determined in the three-dimensional point cloud;
  • In step S6, the obstacle points are removed from the three-dimensional point cloud;
  • wherein projecting the points in the three-dimensional point cloud onto the environment image includes:
  • In step S301, the points of the three-dimensional point cloud from which the obstacle points have been removed are projected onto the environment image.
  • In one embodiment, the environment image and the three-dimensional point cloud of the vehicle driving environment may be acquired first, and the road surface area may then be determined in the acquired environment image.
  • The points of the three-dimensional point cloud can then be projected onto the environment image, and before this projection, the obstacle points belonging to obstacles can first be removed from the point cloud.
  • For example, an obstacle recognition model can be obtained in advance through machine learning (for example, deep learning); the model can determine the obstacles in a three-dimensional point cloud according to the input point cloud, so the obstacles can be determined by inputting the three-dimensional point cloud into the model.
  • For example, the obstacle area corresponding to each obstacle in the three-dimensional point cloud can be determined, and the points in that area can be removed from the point cloud as obstacle points, so that no obstacle points remain among the points of the three-dimensional point cloud.
  • In the subsequent projection operation, the points of the point cloud with the obstacle points removed can be projected onto the environment image, the target points located in the lane line area are determined, and the target points are finally fitted to determine the lane line. This helps prevent obstacle points from being projected into the lane line area of the environment image and degrading the accuracy of the determined target points (a sketch of the removal step follows).
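  • A minimal sketch of the removal step, assuming the obstacle recognition model has already produced axis-aligned obstacle regions (the box representation is an assumption; the disclosure does not fix an output format):

```python
import numpy as np

def remove_obstacle_points(points, obstacle_boxes):
    """Remove points that fall inside any obstacle region.

    `obstacle_boxes` stands in for the output of the obstacle recognition
    model; each box is a (min_xyz, max_xyz) pair of length-3 arrays.
    """
    keep = np.ones(len(points), dtype=bool)
    for lo, hi in obstacle_boxes:
        inside = np.all((points >= lo) & (points <= hi), axis=1)
        keep &= ~inside
    return points[keep]

cloud = np.random.uniform(-10, 10, (5000, 3))
boxes = [(np.array([0, 0, 0]), np.array([2, 1, 2]))]  # e.g., one vehicle
print(len(remove_obstacle_points(cloud, boxes)), "points remain")
```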
  • Fig. 8 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure. As shown in Fig. 8, determining the obstacle points belonging to obstacles in the three-dimensional point cloud includes:
  • In step S501, obstacle points belonging to obstacles are determined in the three-dimensional point cloud according to a predetermined deep learning model.
  • In one embodiment, the environment image and the three-dimensional point cloud of the vehicle driving environment may be acquired first, and the road surface area may then be determined in the acquired environment image.
  • The points of the three-dimensional point cloud can then be projected onto the environment image, and before this projection, the obstacle points belonging to obstacles can first be removed from the point cloud.
  • Specifically, a deep learning model can be obtained in advance through deep learning; the model can take the three-dimensional point cloud as input and output information on the obstacle points belonging to obstacles, from which the obstacle points in the point cloud can be determined. Obstacles include, but are not limited to, vehicles, pedestrians, and traffic signs. The determined obstacle points can then be removed from the three-dimensional point cloud.
  • In the subsequent projection operation, the points of the point cloud with the obstacle points removed can be projected onto the environment image, the target points located in the lane line area are determined, and the target points are finally fitted to determine the lane line. This helps prevent obstacle points from being projected into the lane line area of the environment image and degrading the accuracy of the determined target points.
  • Fig. 9 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure. As shown in Fig. 9, the three-dimensional point cloud is the three-dimensional point cloud of the environment at a target time, and before the points in the three-dimensional point cloud are projected onto the environment image, the method further includes:
  • In step S7, for the three-dimensional point cloud of the vehicle driving environment at at least one other time before or after the target time, the predicted point cloud at the target time is determined;
  • In step S8, the predicted point cloud is stacked into the three-dimensional point cloud of the environment at the target time;
  • wherein projecting the points in the three-dimensional point cloud onto the environment image includes:
  • In step S302, the points of the stacked three-dimensional point cloud are projected onto the environment image.
  • In one embodiment, the environment image and the three-dimensional point cloud of the vehicle driving environment may be acquired first, and the road surface area may then be determined in the acquired environment image.
  • The points of the three-dimensional point cloud can then be projected onto the environment image. Before this projection, the operation of collecting the three-dimensional point cloud of the vehicle driving environment can be executed continuously (the number of collections and their times can be set as needed), so that three-dimensional point clouds of the environment at multiple times are collected.
  • If only the three-dimensional point cloud of the environment at the target time were projected onto the environment image, then, because the lidar used to collect the point cloud has a small number of scan lines, the density of the collected point cloud would be low, few target points would be projected into the lane line area of the environment image, and an accurate fitting result would be hard to obtain.
  • According to this embodiment, three-dimensional point clouds at multiple times can therefore be collected: in addition to the point cloud at the target time, three-dimensional point clouds of the vehicle driving environment at at least one other time before or after the target time can also be collected. For each point cloud collected at another time, the predicted point cloud at the target time can be determined, for example according to the differences in the vehicle's attitude and position between the other time and the target time (the prediction method can be selected as needed, for example implemented with a Kalman filter or with a prediction model obtained in advance through machine learning).
  • The predicted point clouds are then stacked into the three-dimensional point cloud of the environment at the target time, which increases the point cloud density. Projecting the points of the three-dimensional point cloud onto the environment image then specifically means projecting the points of the stacked point cloud, after which the target points located in the lane line area are determined. Since the stacked point cloud contains more points, more target points are projected into the lane line area of the environment image, so the fitting is performed on a larger number of target points, which is conducive to an accurate fitting result.
  • Fig. 10 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure. As shown in Fig. 10, determining, for the three-dimensional point cloud at at least one other time before or after the target time, the predicted point cloud at the target time includes:
  • In step S701, the attitude difference of the vehicle between the other time and the target time, and the position difference of the vehicle between the other time and the target time, are determined;
  • In step S702, the predicted point cloud at the target time of the three-dimensional point cloud of the vehicle driving environment at the other time is determined according to the attitude difference and the position difference.
  • In one embodiment, the environment image and the three-dimensional point cloud of the vehicle driving environment may be acquired first, and the road surface area may then be determined in the acquired environment image.
  • The points of the three-dimensional point cloud can then be projected onto the environment image. Before this projection, the operation of collecting the three-dimensional point cloud of the vehicle driving environment can be executed continuously (the number of collections and their times can be set as needed), so that three-dimensional point clouds of the environment at multiple times are collected. Specifically, in addition to the point cloud at the target time, three-dimensional point clouds of the vehicle driving environment at at least one other time before or after the target time can also be collected, and for each of them the predicted point cloud at the target time can be determined.
  • In one embodiment, differences in the vehicle's attitude and position between another time and the target time cause the three-dimensional point clouds of the environment at the two times to differ, so the predicted point cloud at the target time can be determined from the attitude difference and the position difference.
  • For example, let time t be the current time and time t+1 the next time. Suppose the position coordinates of the vehicle are (x1, y1) at time t and (x1+x0, y1+y0) at time t+1, and the attitude (for example, the direction of travel) does not change. Then, to determine the predicted point cloud at time t of the three-dimensional point cloud at time t+1, x0 can be subtracted in the x direction and y0 in the y direction from all points of the point cloud at time t+1.
  • It should be noted that the way of determining the predicted point cloud is not limited to the above and can be selected as needed: for example, the point cloud at another time can be predicted based on Kalman filtering, or based on a prediction model obtained in advance through machine learning, to determine the predicted point cloud (a worked sketch of the simple translation case appears below).
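  • A worked sketch of the pose-difference prediction and stacking, under the simplifying assumptions that the attitude difference reduces to a yaw angle and that translation is applied before rotation (the disclosure leaves the exact transform and prediction method open):

```python
import numpy as np

def predict_point_cloud(points_other, d_yaw, d_xy):
    """Predict, at the target time, a cloud collected at another time.

    d_xy is the vehicle's position difference and d_yaw its attitude
    difference (reduced to a yaw angle here). With d_yaw = 0 and
    displacement (x0, y0), this reduces to subtracting x0 and y0 from
    every point, exactly as in the example above.
    """
    c, s = np.cos(-d_yaw), np.sin(-d_yaw)
    Rz = np.array([[c, -s], [s, c]])
    out = points_other.copy()
    out[:, :2] = (out[:, :2] - d_xy) @ Rz.T  # translate, then rotate
    return out

def stack_clouds(cloud_target, predicted_clouds):
    """Stack predicted clouds into the target-time cloud to raise density."""
    return np.vstack([cloud_target] + list(predicted_clouds))

cloud_t1 = np.random.uniform(-10, 10, (2000, 3))
pred_t = predict_point_cloud(cloud_t1, d_yaw=0.0, d_xy=np.array([1.5, 0.2]))
cloud_t = np.random.uniform(-10, 10, (2000, 3))
print(stack_clouds(cloud_t, [pred_t]).shape)
```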
  • The predicted point clouds are then stacked into the three-dimensional point cloud of the environment at the target time, which increases the point cloud density. Projecting the points of the three-dimensional point cloud onto the environment image then specifically means projecting the points of the stacked point cloud, after which the target points located in the lane line area are determined. Since the stacked point cloud contains more points, more target points are projected into the lane line area of the environment image, so the fitting is performed on a larger number of target points, which is conducive to an accurate fitting result.
  • Fig. 11 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure. As shown in Fig. 11, projecting the points in the three-dimensional point cloud onto the environment image includes:
  • In step S303, according to the intrinsic parameters of the image acquisition device that captures the environment image and the rotation and translation relationships from the world coordinate system to the coordinate system of the image acquisition device, the first coordinates of the points of the three-dimensional point cloud in the world coordinate system are converted into second coordinates in the environment image.
  • In one embodiment, the environment image and the three-dimensional point cloud of the vehicle driving environment may be acquired first, the road surface area is then determined in the acquired environment image, and the points of the three-dimensional point cloud are then projected onto the environment image.
  • For example, the intrinsic parameters of the image acquisition device can be obtained, and then, based on the intrinsic parameters and on the rotation and translation relationships from the world coordinate system to the device's coordinate system, the first coordinate of each point in the world coordinate system is converted into the second coordinate in the environment image.
  • After the conversion, the coordinate of each point of the point cloud in the environment image is known; since the lane line area has already been determined, the target points located in the lane line area, that is, the points of the point cloud within the lane line area, can be determined from the relationship between the second coordinates and the lane line area.
  • Finally, the target points can be fitted to determine the lane line.
  • Fig. 12 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure. As shown in Fig. 12, the method further includes:
  • In step S9, the target points are marked;
  • In step S10, the target points with marks are displayed.
  • In one embodiment, the environment image and the three-dimensional point cloud of the vehicle driving environment may be acquired first, the road surface area may then be determined in the acquired environment image, the points of the three-dimensional point cloud are then projected onto the environment image to determine the target points located in the lane line area, and the target points are fitted to determine the lane line.
  • The target points can be marked, for example, with a specific color: the target points in the three-dimensional point cloud can be marked white while the other, non-target points are marked black. Alternatively, the target points can be marked with a specific identifier, for example by giving them an identifier different from that of the non-target points in the point cloud (a minimal sketch follows).
  • The marked target points can then be displayed, so that when viewing the three-dimensional point cloud the user can distinguish the target points from the non-target points according to the marks.
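  • A minimal sketch of the white/black marking scheme; appending per-point colors in a six-column x, y, z, r, g, b layout is just one possible representation, not mandated by the disclosure:

```python
import numpy as np

def annotate_points(points, target_mask):
    """Attach an RGB color to each point: targets white, others black,
    so a viewer of the displayed cloud can tell them apart."""
    colors = np.zeros((len(points), 3))   # non-target points: black
    colors[target_mask] = 255.0           # target points: white
    return np.hstack([points, colors])    # one possible x,y,z,r,g,b layout

cloud = np.random.uniform(-10, 10, (100, 3))
is_target = np.zeros(100, dtype=bool)
is_target[:20] = True
print(annotate_points(cloud, is_target).shape)  # (100, 6)
```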
  • Fig. 13 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure. As shown in Fig. 13, the method further includes:
  • In step S11, the lane line is corrected according to a received correction instruction;
  • In step S12, the corrected lane line is projected into the environment image to determine whether the projection of the corrected lane line in the environment image matches the lane line area;
  • In step S13, response information is generated according to the matching result between the projection of the corrected lane line in the environment image and the lane line area.
  • In one embodiment, the environment image and the three-dimensional point cloud of the vehicle driving environment may be acquired first, the road surface area may then be determined in the acquired environment image, the points of the three-dimensional point cloud are then projected onto the environment image to determine the target points located in the lane line area, and the target points are fitted to determine the lane line.
  • After the lane line is determined, a manually input correction instruction can be received to correct it. Since the manual correction may itself contain errors, the corrected lane line can be projected into the environment image to determine whether its projection matches the lane line area, and response information is then generated according to the matching result.
  • If the projection of the corrected lane line does not match the lane line area, for example if more than a preset ratio of the projection falls outside the lane line area, the generated response information can prompt the user that the correction result is unreasonable so that the user can correct it again; if the projection matches the lane line area, for example if less than the preset ratio of the projection falls outside the area, the generated response information can prompt the user that the correction result is reasonable (a sketch of this check follows).
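  • The matching check can be sketched as follows, with `max_outside_ratio` standing in for the preset ratio (an assumed value; the disclosure does not specify one):

```python
import numpy as np

def check_correction(projected_uv, lane_mask, max_outside_ratio=0.1):
    """Generate response information for a corrected lane line.

    Points of the corrected line projected outside the image are counted
    as outside the lane line area; if more than the preset ratio fall
    outside, the correction is reported as unreasonable.
    """
    h, w = lane_mask.shape
    u = np.round(projected_uv[:, 0]).astype(int)
    v = np.round(projected_uv[:, 1]).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    hits = np.zeros(len(projected_uv), dtype=bool)
    hits[ok] = lane_mask[v[ok], u[ok]]
    if 1.0 - hits.mean() > max_outside_ratio:
        return "correction unreasonable: please correct again"
    return "correction reasonable"
```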
  • It should be noted that, in addition to manual correction after the lane line is obtained by fitting, manual participation is also possible while the lane line area is determined and while the points of the three-dimensional point cloud are projected onto the environment image. For example, during lane line determination, manually input instructions can be received to modify, supplement, or delete the lane line area in the environment image; and during projection, the projected target points can be adjusted.
  • Fig. 14 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure. As shown in Fig. 14, before the target points are fitted to determine the lane line, the method further includes:
  • In step S14, candidate points whose distance to the target points is less than a preset distance are determined among the non-target points of the three-dimensional point cloud;
  • In step S15, extension points whose similarity to preset attribute information of the target points is greater than a preset similarity are determined among the candidate points;
  • In step S16, the extension points and the target points are taken as new target points;
  • wherein fitting the target points to determine the lane line includes:
  • In step S401, the new target points are fitted to determine the lane line.
  • In one embodiment, the environment image and the three-dimensional point cloud of the vehicle driving environment may be acquired first, the road surface area may then be determined in the acquired environment image, the points of the three-dimensional point cloud are then projected onto the environment image to determine the target points located in the lane line area, and the target points are fitted to determine the lane line.
  • However, projecting the points of the three-dimensional point cloud into the environment image always involves some deviation, for example because the extrinsic parameters of the image acquisition device are not accurate enough. As a result, some points of the three-dimensional point cloud that actually lie in the lane line area are not projected into the lane line area of the environment image, which may make the fitting result inaccurate, that is, the fitted lane line may differ from the actual lane line in the point cloud.
  • Since the deviation is generally small, the points that fail to project into the lane line area are generally close to the target points that do project into it. Therefore, before fitting, candidate points whose distance to the target points is less than a preset distance can be determined among the non-target points of the point cloud; these candidate points may be target points that were not projected into the lane line area.
  • Among the candidate points, extension points whose similarity to the preset attribute information of the target points is greater than a preset similarity can be determined, for example with a flood-fill algorithm. The preset attribute can be set as needed, for example the reflection intensity. Because these extension points are very close to the target points in the preset attribute information, they are very likely target points that were not projected into the lane line area. The extension points and the original target points are then taken as the new target points, and the new target points are fitted to determine the lane line (a sketch follows). This alleviates the inaccuracy of the fitting result caused by projection deviation.
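  • A brute-force sketch of the flood-fill expansion by reflection intensity; the preset distance and intensity tolerance are assumed values, and the quadratic-time neighbor search is for clarity only:

```python
import numpy as np

def expand_targets(points, intensity, target_idx, max_dist=0.3, max_diff=5.0):
    """Flood-fill expansion: starting from the target points, absorb
    nearby non-target points whose reflection intensity is close enough.
    """
    target = np.zeros(len(points), dtype=bool)
    target[target_idx] = True
    frontier = list(np.flatnonzero(target))
    while frontier:
        i = frontier.pop()
        close = np.linalg.norm(points - points[i], axis=1) < max_dist
        similar = np.abs(intensity - intensity[i]) < max_diff
        for j in np.flatnonzero(close & similar & ~target):
            target[j] = True
            frontier.append(j)
    return np.flatnonzero(target)  # indices of the new target points
```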
  • Fig. 15 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure. As shown in Fig. 15, fitting the target points to determine the lane line includes:
  • In step S402, the target points are fitted by a curve model to determine the lane line.
  • In one embodiment, the environment image and the three-dimensional point cloud of the vehicle driving environment may be acquired first, the road surface area may then be determined in the acquired environment image, the points of the three-dimensional point cloud are then projected onto the environment image to determine the target points located in the lane line area, and the target points are fitted to determine the lane line. A curve model can be selected as needed for the fitting; for example, a Bezier curve can be selected to fit the target points and determine the lane line.
  • Fig. 16 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure. As shown in Fig. 16, fitting the target points by a Bezier curve to determine the lane line includes:
  • In step S4021, the target points are fitted by a multi-segment third-order Bezier curve to determine the lane line.
  • In one embodiment, the environment image and the three-dimensional point cloud of the vehicle driving environment may be acquired first, the road surface area may then be determined in the acquired environment image, the points of the three-dimensional point cloud are then projected onto the environment image to determine the target points located in the lane line area, and the target points are fitted to determine the lane line. The target points can be fitted by a multi-segment third-order Bezier curve.
  • The equation of a third-order Bezier curve is: $P(t) = A(1-t)^3 + 3B(1-t)^2\,t + 3C(1-t)\,t^2 + D\,t^3$
  • where A, B, C, and D are the coordinates of the target points taken as control points; the selection of control points has been described above and is not repeated here.
  • A specific fitting procedure can be as follows: the two farthest target points are taken as the start and end points for fitting; then, for the curve obtained by fitting, it is determined whether any target point lies farther from the curve than a preset distance. If such a target point exists, the fitting does not yet meet the requirements: a perpendicular is drawn from that target point to the curve, the curve is divided into two parts at the intersection of the perpendicular and the curve, and the target points are fitted again for each part. If the further-fitted curves still have target points farther from the curve than the preset distance, perpendiculars are again drawn from those target points, the curves are further divided at the intersections, and fitting continues on each divided part, until, for the fitted curves, the distance from every target point to its curve is less than or equal to the preset distance (a sketch follows below).
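  • A sketch of the multi-segment fitting, assuming the target points are distinct and already ordered along the lane; as a simplification it splits at the worst-fitting point rather than at the foot of the perpendicular described above:

```python
import numpy as np

def cubic_bezier(P, t):
    """Evaluate P(t) = A(1-t)^3 + 3B(1-t)^2 t + 3C(1-t) t^2 + D t^3."""
    A, B, C, D = P
    t = t[:, None]
    return A*(1-t)**3 + 3*B*(1-t)**2*t + 3*C*(1-t)*t**2 + D*t**3

def fit_segment(pts):
    """Least-squares cubic Bezier, chord-length parameterized."""
    d = np.concatenate([[0.0],
                        np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))])
    t = d / d[-1]
    M = np.stack([(1-t)**3, 3*(1-t)**2*t, 3*(1-t)*t**2, t**3], axis=1)
    P, *_ = np.linalg.lstsq(M, pts, rcond=None)
    return P, t

def fit_multi_segment(pts, max_err=0.1):
    """Fit one segment; if some target point is farther from the curve
    than the preset distance, split there and refit each part, until all
    points are within the preset distance of their curve."""
    P, t = fit_segment(pts)
    err = np.linalg.norm(cubic_bezier(P, t) - pts, axis=1)
    worst = int(err.argmax())
    if err[worst] <= max_err or len(pts) <= 4 or worst in (0, len(pts) - 1):
        return [P]
    return (fit_multi_segment(pts[:worst + 1], max_err)
            + fit_multi_segment(pts[worst:], max_err))

xs = np.linspace(0, 10, 60)
lane_pts = np.stack([xs, np.sin(xs)], axis=1)  # ordered target points
print(len(fit_multi_segment(lane_pts, 0.05)), "Bezier segments")
```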
  • Fig. 17 is a schematic flowchart of yet another method for determining lane lines in a high-precision map according to an embodiment of the present disclosure. As shown in Fig. 17, the method further includes:
  • In step S17, a control instruction is generated based on the lane line, wherein the control instruction is used to control the driving of the vehicle.
  • In one embodiment, the environment image and the three-dimensional point cloud of the vehicle driving environment may be acquired first, the road surface area may then be determined in the acquired environment image, the points of the three-dimensional point cloud are then projected onto the environment image to determine the target points located in the lane line area, and the target points are fitted to determine the lane line.
  • The method for determining lane lines in a high-precision map can be applied in the field of autonomous driving. For example, when it is applied to an autonomous vehicle, a control instruction can be generated based on the lane line determined by fitting, so as to control the driving of the vehicle: for instance, a control instruction can be generated so that during automatic driving the vehicle keeps driving between two lane lines, avoiding interference or collisions with vehicles in other lanes and ensuring traffic safety during automatic driving.
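  • Purely as an illustration of how a fitted lane line could feed a control instruction — the proportional form, the gain, the vehicle-frame convention, and the command format below are all assumptions, not part of the disclosure:

```python
import numpy as np

def lane_keeping_command(left_line, right_line, vehicle_xy, gain=0.5):
    """Steer toward the centerline of the two fitted lane lines so the
    vehicle keeps driving between them. Assumes x forward / y left in
    the vehicle frame and same-length sampled polylines for both lines.
    """
    center = (np.asarray(left_line) + np.asarray(right_line)) / 2.0
    nearest = center[np.linalg.norm(center - vehicle_xy, axis=1).argmin()]
    lateral_error = nearest[1] - vehicle_xy[1]
    return {"steer": gain * lateral_error}  # hypothetical command format

left = np.stack([np.linspace(0, 20, 50), np.full(50, 1.8)], axis=1)
right = np.stack([np.linspace(0, 20, 50), np.full(50, -1.8)], axis=1)
print(lane_keeping_command(left, right, np.array([0.0, 0.4])))
```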
  • The embodiments of the device for determining lane lines in a high-precision map of the present disclosure can be applied to in-vehicle equipment. The device embodiments can be implemented by software, by hardware, or by a combination of software and hardware. Taking software implementation as an example, as a device in the logical sense, it is formed by the processor of the equipment in which it is located reading the corresponding computer program instructions from a non-volatile memory into memory and running them.
  • In terms of hardware, Fig. 18 is a schematic diagram of the hardware structure of the equipment in which the device for determining lane lines in a high-precision map of the present disclosure is located. In addition to the processor, network interface, memory, and non-volatile memory shown in Fig. 18, the equipment in the embodiment may generally also include other hardware, such as a forwarding chip responsible for processing packets; in terms of hardware structure, the equipment may also be a distributed device that includes multiple interface cards, so that packet processing can be extended at the hardware level.
  • The embodiments of the present disclosure also propose a device for determining lane lines in a high-precision map. The device described in the embodiments of the present disclosure may be applied to an image acquisition device capable of acquiring an environment image and a three-dimensional point cloud of the vehicle driving environment, and may also be applied to other electronic devices capable of analyzing and processing the environment image and the three-dimensional point cloud, such as terminals, servers, and in-vehicle devices.
  • In one embodiment, the device for determining lane lines includes one or more processors working individually or in cooperation, and the processors are configured to execute:
  • acquiring an environment image and a three-dimensional point cloud of the vehicle driving environment;
  • determining a lane line area in the environment image;
  • projecting the points in the three-dimensional point cloud onto the environment image, and determining target points located in the lane line area;
  • fitting the target points to determine a lane line.
  • In one embodiment, the processor is configured to execute: determining the lane line area in the environment image according to a predetermined image recognition model.
  • In one embodiment, the processor is configured to execute: determining a road surface area in the environment image; and determining the lane line area in the road surface area.
  • In one embodiment, the processor is further configured to execute: before the points in the three-dimensional point cloud are projected onto the environment image, determining obstacle points belonging to obstacles in the three-dimensional point cloud; and removing the obstacle points from the three-dimensional point cloud; wherein projecting the points in the three-dimensional point cloud onto the environment image includes: projecting the points of the three-dimensional point cloud from which the obstacle points have been removed onto the environment image.
  • In one embodiment, the processor is configured to execute: determining the obstacle points belonging to obstacles in the three-dimensional point cloud according to a predetermined deep learning model.
  • In one embodiment, the three-dimensional point cloud is the three-dimensional point cloud of the environment at a target time, and the processor is further configured to execute: before the points in the three-dimensional point cloud are projected onto the environment image, determining, for the three-dimensional point cloud of the vehicle driving environment at at least one other time before or after the target time, the predicted point cloud at the target time; and stacking the predicted point cloud into the three-dimensional point cloud of the environment at the target time; wherein projecting the points in the three-dimensional point cloud onto the environment image includes: projecting the points of the stacked three-dimensional point cloud onto the environment image.
  • In one embodiment, for determining the predicted point cloud of the three-dimensional point cloud at at least one other time before or after the target time, the processor is configured to execute: determining the attitude difference of the vehicle between the other time and the target time, and the position difference of the vehicle between the other time and the target time; and determining, according to the attitude difference and the position difference, the predicted point cloud at the target time of the three-dimensional point cloud of the vehicle driving environment at the other time.
  • In one embodiment, the processor is configured to execute: converting, according to the intrinsic parameters of the image acquisition device that captures the environment image and the rotation and translation relationships from the world coordinate system to the coordinate system of the image acquisition device, the first coordinates of the points of the three-dimensional point cloud in the world coordinate system into second coordinates in the environment image.
  • In one embodiment, the processor is further configured to execute: marking the target points; and displaying the target points with marks.
  • In one embodiment, the processor is further configured to execute: correcting the lane line according to a received correction instruction; projecting the corrected lane line into the environment image to determine whether the projection of the corrected lane line in the environment image matches the lane line area; and generating response information according to the matching result between the projection of the corrected lane line in the environment image and the lane line area.
  • In one embodiment, the processor is further configured to execute: before the target points are fitted to determine the lane line, determining, among the non-target points of the three-dimensional point cloud, candidate points whose distance to the target points is less than a preset distance; determining, among the candidate points, extension points whose similarity to preset attribute information of the target points is greater than a preset similarity; and taking the extension points and the target points as new target points; wherein fitting the target points to determine the lane line includes: fitting the new target points to determine the lane line.
  • In one embodiment, the processor is configured to execute: fitting the target points by a curve model to determine the lane line.
  • In one embodiment, the processor is configured to execute: fitting the target points by a multi-segment third-order Bezier curve to determine the lane line.
  • In one embodiment, the processor is further configured to execute: generating a control instruction based on the lane line, wherein the control instruction is used to control the driving of the vehicle.
  • The embodiments of the present disclosure also propose an electronic device, including the device for determining lane lines in a high-precision map according to any of the foregoing embodiments.
  • The embodiments of the present disclosure also propose an autonomous driving vehicle, including the electronic device described in the foregoing embodiment.
  • The systems, devices, modules, or units set forth in the above embodiments may be implemented by computer chips or entities, or by products having certain functions. For convenience of description, the above devices are described with their functions divided into various units. Of course, when implementing the present application, the functions of the units may be implemented in one or more pieces of software and/or hardware.
  • Those skilled in the art should understand that the embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Therefore, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.

Abstract

A method for determining lane lines in a high-precision map, including: acquiring an environment image and a three-dimensional point cloud of a vehicle driving environment (S1); determining a lane line area in the environment image (S2); projecting the points in the three-dimensional point cloud onto the environment image, and determining target points located in the lane line area (S3); and fitting the target points to determine a lane line (S4). In this way, the environment image and the three-dimensional point cloud can be combined to determine the target points of the three-dimensional point cloud located in the lane line area, and the lane line is then determined by fitting the target points. The three-dimensional point cloud can serve as a high-precision map, and the process of determining lane lines largely requires no manual participation, which facilitates semi-automatic or even fully automatic determination of lane lines in high-precision maps: when a large number of lane lines must be determined repeatedly, the operation can be completed quickly and efficiently, and the accuracy of the determined lane lines is improved.

Description

高精度地图中车道线的确定方法和装置 技术领域
本公开涉及地图处理领域,尤其涉及高精度地图中车道线的确定方法、高精度地图中车道线的确定装置、电子设备和自动驾驶车辆。
背景技术
在自动驾驶领域中,对于道路中车道线的识别是非常重要的。在相关技术中,车道线获取的方式主要有两种,其一是从当前环境图像中实时地检测出车道线,其二是在高精度地图中获取预先标注好的车道线,以确定环境中车道线的位置。
由于高精度地图中的车道线需要预先标注,现有的在高精度地图中标注车道线的方式主要由人工完成,而人工在地图中的标注操作,针对二维图像是相对准确的,可是三维图像一般是基于激光雷达生成的,激光雷达生成的图像一般无颜色信息,并且生成的图像还会受到路面上障碍物的影响,使得标注人员难以分辨路面上哪些位置属于车道线,从而导致人工在三维图像中标注车道线的精度较低。
而高精度地图就是三维图像,所以在高精度地图中标注车道线难以达到理想的精度。并且人工在高精度地图中标注车道线,需要大量重复操作,标注速度较慢,效率较低。
发明内容
本公开提出了高精度地图中车道线的确定方法、高精度地图中车道线的确定装置、电子设备和自动驾驶车辆,以解决相关技术中人工在高精度地图中标注车道线的精度较低,效率较低的技术问题。
根据本公开实施例的第一方面,提出一种高精度地图中车道线的确定方 法,包括:
获取车辆行驶环境的环境图像和三维点云;
在所述环境图像中确定车道线区域;
将所述三维点云中的点向所述环境图像投影,确定位于所述车道线区域内的目标点;
拟合所述目标点以确定车道线。
根据本公开实施例的第二方面,提出一种高精度地图中车道线的确定装置,所述确定装置包括包括单独或者协同工作的一个或者多个处理器,所述处理器用于执行:
获取车辆行驶环境的环境图像和三维点云;
在所述环境图像中确定车道线区域;
将所述三维点云中的点向所述环境图像投影,确定位于所述车道线区域内的目标点;
拟合所述目标点以确定车道线。
根据本公开实施例的第三方面,提出一种电子设备,包括上述任一实施例所述的高精度地图中车道线的确定装置。
根据本公开实施例的第四方面,提出一种自动驾驶车辆,包括上述实施例所述的电子设备。
根据本公开的实施例,可以结合环境图像和三维点云,确定三维点云中位于车道线区域中的目标点,进而通过拟合目标点确定车道线。而三维点云可以作为高精度地图,并且上述确定车道线的过程在很大程度上无需人工参与,因此有利于半自动甚至全自动地在高精度地图中确定车道线,在面对大量车道线的重复确定操作时,可以高速高效地完成,而且可以提高确定车道线的精度。
附图说明
为了更清楚地说明本公开实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本公开的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是根据本公开的实施例示出的一种高精度地图中车道线的确定方法的示意流程图。
图2是根据本公开的实施例示出的一种在环境图像中确定车道线区域的示意图。
图3是根据本公开的实施例示出的一种将所述三维点云中的点向所述环境图像投影的示意图。
图4是根据本公开的实施例示出的一种拟合目标点以确定车道线的示意图。
图5是根据本公开的实施例示出的另一种高精度地图中车道线的确定方法的示意流程图。
图6是根据本公开的实施例示出的又一种高精度地图中车道线的确定方法的示意流程图。
图7是根据本公开的实施例示出的又一种高精度地图中车道线的确定方法的示意流程图。
图8是根据本公开的实施例示出的又一种高精度地图中车道线的确定方法的示意流程图。
图9是根据本公开的实施例示出的又一种高精度地图中车道线的确定方法的示意流程图。
图10是根据本公开的实施例示出的又一种高精度地图中车道线的确定方 法的示意流程图。
图11是根据本公开的实施例示出的又一种高精度地图中车道线的确定方法的示意流程图。
图12是根据本公开的实施例示出的又一种高精度地图中车道线的确定方法的示意流程图。
图13是根据本公开的实施例示出的又一种高精度地图中车道线的确定方法的示意流程图。
图14是根据本公开的实施例示出的又一种高精度地图中车道线的确定方法的示意流程图。
图15是根据本公开的实施例示出的又一种高精度地图中车道线的确定方法的示意流程图。
图16是根据本公开的实施例示出的又一种高精度地图中车道线的确定方法的示意流程图。
图17是根据本公开的实施例示出的又一种高精度地图中车道线的确定方法的示意流程图。
图18是根据本公开的实施例示出高精度地图中车道线的确定装置所在设备的一种硬件结构示意图。
具体实施方式
下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本公开一部分实施例,而不是全部的实施例。基于本公开中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。另外,在不冲突的情况下,下述的实施例及实施例中的特征可以相互组合。
图1是根据本公开的实施例示出的一种高精度地图中车道线的确定方法 的示意流程图。本公开实施例所述的高精度地图中车道线的确定方法,可以适用于图像采集设备,图像采集设备可以车辆行驶环境的环境图像和三维点云,也可以适用于能够对环境图像和三维点云进行分析处理的其他电子设备,例如终端、服务器、车载设备等。
如图1所示,所述高精度地图中车道线的确定方法可以包括以下步骤:
在步骤S1中,获取车辆行驶环境的环境图像和三维点云;
在一个实施例中,可以获取车辆行驶环境的环境图像和三维点云,其中,环境图像可以通过相机等图像采集设备获取,三维点云可以通过激光雷达获取。
在步骤S2中,在所述环境图像中确定车道线区域;
在一个实施例中,在获取到的环境图像中,可以确定车道线区域。
其中,可以根据预先确定的图像识别模型在所述环境图像中确定车道线区域,例如可以预先通过机器学习得到图像识别模型(例如可以是设计好的神经网络),图像识别模型可以根据输入的图像确定图像中的车道线区域,那么可以将获取到的环境图像输入到图像识别模型中,即可确定出环境图像中的车道线区域。
在另一些实施方式中,还可以先在环境图像中确定出路面区域,然后在路面区域中确定车道线区域,从而不必分析环境图像中的所有信息,以便缩小确定车道线区域所依据的信息量,有利于减少误判。
图2是根据本公开的实施例示出的一种在环境图像中确定车道线区域的示意图。如图2所示,在环境图像中确定车道线区域,可以针对车道线区域标注特定的颜色,例如白色,以区分环境图像中的车道线区域和其他区域,并且针对其他区域中的不同物体,例如静态物和动态物,也可以标注不同的颜色,以便区分。
在步骤S3中,将所述三维点云中的点向所述环境图像投影,确定位于所述车道线区域内的目标点;
在一个实施例中,可以获取采集环境图像的图像采集设备的内参,然后根据该内参,以及世界坐标系到图像采集设备的坐标系的旋转关系和位移关系,将三维点云中的点在世界坐标系中的第一坐标,转换为环境图像中的第二坐标。
例如三维点云中的点在世界坐标系中的第一坐标为(x w,y w,z w),该点在环境图像中对应的第二坐标为(μ,ν,1),第一坐标和第二坐标之间的关系如下:
Figure PCTCN2019106648-appb-000001
其中,
Figure PCTCN2019106648-appb-000002
图像采集设备包含5个内参,分别是a x=fm x,a y=fm y,γ,μ 0和ν 0,其中,f为图像采集设备的焦距,m x为x方向上单位距离的像素数(scale factors),m y为y方向上单位距离的像素数,γ为x方向和y方向之间的畸变参数(skew parameters),μ 0和ν 0对应光心位置principal point。
z c为任意齐次坐标的比例因子,矩阵R为旋转矩阵(Rotation Matrix),用于表示世界坐标系到图像采集设备的坐标系的旋转关系,矩阵T为位移矩阵Translation Matrix,用于表示世界坐标系到图像采集设备的坐标系的位移关系,矩阵R和矩阵t属于图像采集设备的外参(Extrinsic Matrix)。
在将三维点云中的点在世界坐标系中的第一坐标,转换为环境图像中的第二坐标后,就确定了三维点云中的点在环境图像中对应的坐标,即第二坐标,而车道线区域已经确定,那么根据第二坐标与车道线区域的关系,就可以确定位于车道线区域内的目标点,也即三维点云中位于所述车道线区域内的点。
在步骤S4中,拟合所述目标点以确定车道线。
在一个实施例中,确定位于车道线区域内的目标点后,可以拟合所述目标点以确定车道线,例如可以通过贝塞尔曲线对目标点进行拟合。由于目标 点位于车道线区域内,那么对目标点拟合得到的曲线可以作为车道线。
根据本公开的实施例,可以结合环境图像和三维点云,确定三维点云中位于车道线区域中的目标点,进而通过拟合目标点确定车道线。而三维点云可以作为高精度地图,并且上述确定车道线的过程在很大程度上无需人工参与,因此有利于半自动甚至全自动地在高精度地图中确定车道线,在面对大量车道线的重复确定操作时,可以高速高效地完成,而且可以提高确定车道线的精度。
图3是根据本公开的实施例示出的一种将所述三维点云中的点向所述环境图像投影的示意图。图4是根据本公开的实施例示出的一种拟合目标点以确定车道线的示意图。
如图3所示,可以将所述三维点云中的点向环境图像投影,确定位于车道线区域内点为目标点。可以例如通过贝塞尔曲线对目标点进行拟合,拟合得到的车道线的三维鸟瞰图如图4所示。
以通过二阶贝塞尔曲线对目标点进行拟合为例,可以在多个目标点中选取起点A和终点C以及点B,点B位于起点A和终点C之间。
在线段AB上确定控制点D,以及在线段BC上确定和控制点E,其中,AD/AB=BE/BC=k,k为预设比值;
然后在线段DE上确定点F,其中,DF/DE=k;
在保证AD/AB=BE/BC=k的基础上,改变控制点D和控制点E的位置,并在DF/DE=k的情况下确定改变控制点D和控制点E后新的点F,并按照该方式确定所都的点F。
最后从起点A开始,朝着点C的方向,依次连接所有点F,最终构成的曲线即通过二阶贝塞尔曲线对点A、点B和点C拟合后的曲线。
需要说明的是,除了通过二阶贝塞尔曲线拟合,也可以通过三阶或更高阶的贝塞尔曲线进行拟合,具体可以根据需要选择。
图5是根据本公开的实施例示出的另一种高精度地图中车道线的确定方法的示意流程图。如图5所示,所述在所述环境图像中确定车道线区域包括:
在步骤S201中,根据预先确定的图像识别模型在所述环境图像中确定所述车道线区域。
在一个实施例中,可以先获取车辆行驶环境的环境图像和三维点云,然后根据预先确定的图像识别模型在所述环境图像中确定车道线区域。
例如可以预先通过机器学习得到图像识别模型(例如可以是设计好的神经网络),图像识别模型可以根据输入的图像确定图像中的车道线区域,那么可以将获取到的环境图像输入到图像识别模型中,即可确定出环境图像中的车道线区域。
进而可以将所述三维点云中的点向所述环境图像投影,确定位于所述车道线区域内的目标点,最后拟合所述目标点以确定车道线。
图6是根据本公开的实施例示出的又一种高精度地图中车道线的确定方法的示意流程图。如图6所示,所述在所述环境图像中确定车道线区域包括:
在步骤S202中,在所述环境图像中确定路面区域;
在步骤S203中,在所述路面区域中确定车道线区域。
在一个实施例中,可以先获取车辆行驶环境的环境图像和三维点云,然后在获取到的环境图像中确定路面区域。
例如可以预先通过机器学习得到路面识别模型,路面识别模型可以根据输入的图像确定图像中的路面区域,那么可以将获取到的环境图像输入到路面识别模型中,从而确定出环境图像中的路面区域;进而将确定出的路面区域的图像输入到预先通过机器学习得到的图像识别模型中,图像识别模型可以根据输入的图像确定图像中的车道线区域,从而根据输入的路面区域的图像,确定路面区域中的车道线区域。
进而可以将所述三维点云中的点向所述环境图像投影,确定位于所述车道线区域内的目标点,最后拟合所述目标点以确定车道线。
图7是根据本公开的实施例示出的又一种高精度地图中车道线的确定方法的示意流程图。如图7所示,在将所述三维点云中的点向所述环境图像投影之前,所述方法还包括:
在步骤S5中,在所述三维点云中确定属于障碍物的障碍点;
在步骤S6中,在所述三维点云中剔除所述障碍点;
其中,所述将所述三维点云中的点向所述环境图像投影包括:
在步骤S301中,将剔除所述障碍点的三维点云中的点向所述环境图像投影。
在一个实施例中,可以先获取车辆行驶环境的环境图像和三维点云,然后在获取到的环境图像中确定路面区域。
对于三维点云,可以将其中的点向环境图像投影,而在将三维点云中的点向所述环境图像投影之前,可以先将三维点云中属于障碍物的障碍点剔除。例如可以预先通过机器学习(例如可以是深度学习)得到障碍物识别模型,障碍物识别模型可以根据输入的三维点云确定三维点云中的障碍物,那么通过将三维点云输入到障碍物识别模型中,可以确定出三维点云中的障碍物,例如可以确定出障碍物在在三维点云中对应的障碍物区域,将障碍物区域内的点作为障碍点从三维点云中剔除,从而三维点云点云中剩余的点中没有障碍物点。
从而后续进行投影操作时,可以将剔除了障碍点的三维点云中的点向环境图像投影,确定位于所述车道线区域内的目标点,最后拟合所述目标点以确定车道线,据此,有利于避免属于障碍物的障碍点投影到环境图像的车道线区域,影响确定目标点的准确性。
图8是根据本公开的实施例示出的又一种高精度地图中车道线的确定方法的示意流程图。如图8所示,所述在所述三维点云中确定属于障碍物的障碍点包括:
在步骤S501中,根据预先确定的深度学习模型,在所述三维点云中确定属于障碍物的障碍点。
在一个实施例中,可以先获取车辆行驶环境的环境图像和三维点云,然后在获取到的环境图像中确定路面区域。
对于三维点云,可以将其中的点向环境图像投影,而在将三维点云中的 点向所述环境图像投影之前,可以先将三维点云中属于障碍物的障碍点剔除。
其中,可以预先通过深度学习得到深度学习模型,深度学习模型可以以三维点云作为输入,输出属于障碍物的障碍物点的信息,根据该信息,可以确定三维点云中属于障碍物的障碍点。其中,障碍物包括但不限于车辆、行人、交通指示牌等。对于确定出的障碍点,可以从三维点云中剔除。
从而后续进行投影操作时,可以将剔除了障碍点的三维点云中的点向环境图像投影,确定位于所述车道线区域内的目标点,最后拟合所述目标点以确定车道线,据此,有利于避免属于障碍物的障碍点投影到环境图像的车道线区域,影响确定目标点的准确性。
图9是根据本公开的实施例示出的又一种高精度地图中车道线的确定方法的示意流程图。如图9所示,所述三维点云为目标时刻下环境的三维点云,在将所述三维点云中的点向所述环境图像投影之前,所述方法还包括:
在步骤S7中,确定所述目标时刻之前或之后的至少一个其他时刻的车辆行驶环境的三维点云,在所述目标时刻下的预测点云;
在步骤S8中,将所述预测点云堆叠到所述目标时刻下环境的三维点云中;
其中,所述将所述三维点云中的点向所述环境图像投影包括:
在步骤S302中,将堆叠后的三维点云中的点向所述环境图像投影。
在一个实施例中,可以先获取车辆行驶环境的环境图像和三维点云,然后在获取到的环境图像中确定路面区域。
对于三维点云,可以将其中的点向环境图像投影,而在将三维点云中的点向所述环境图像投影之前,对车辆行驶环境采集的三维点云操作可以连续执行(执行的次数和执行的时刻可以根据需要进行设置),从而采集到多个时刻下环境的三维点云。
将三维点云中的点向环境图像投影,如果仅将目标时刻下环境的三维点云向环境图像投影,由于采集三维点云的激光雷达的线数较少,采集到的三维点云中点云密度较低,投影到环境图像中位于车道线区域的目标点较少,不利于得到准确的拟合结果。
根据本实施例,可以采集多个时刻下的三维点云,具体地,除了采集目标时刻下的三维点云,还可以采集目标时刻之前或之后至少一个其他时刻下车辆行驶环境的三维点云。对于采集到的其他时刻下的三维点云,可以确定在目标时刻下的预测点云,例如可以根据车辆在其他时刻与目标时刻的姿态差异和位置差异,对其他时刻的车辆行驶环境的三维点云进行预测(预测方式可以根据需要选择,例如可以通过卡尔曼滤波实现,也可以根据预先机器学习得到的预测模型实现),以确定其他时刻的车辆行驶环境的三维点云,在目标时刻下的预测点云。
然后将预测点云堆叠到所述目标时刻下环境的三维点云中,从而提高点云密度,那么将三维点云中的点向所述环境图像投影,具体可以是将堆叠后的三维点云中的点向环境图像投影,然后确定位于所述车道线区域内的目标点,由于堆叠后的三维点云中包含更多的点,因此可以提高投影到环境图像中位于车道线区域的目标点的数量,进而拟合目标点以确定车道线,就可以针对数量更多的目标点进行拟合,有利于得到准确的拟合结果。
图10是根据本公开的实施例示出的又一种高精度地图中车道线的确定方法的示意流程图。如图10所示,所述确定所述目标时刻之前或之后的至少一个其他时刻的三维点云,在所述目标时刻下的预测点云包括:
在步骤S701中,确定所述车辆在所述其他时刻与所述目标时刻的姿态差异,以及所述车辆在所述其他时刻与所述目标时刻的位置差异;
在步骤S702中,根据所述姿态差异和所述位置差异确定所述其他时刻的车辆行驶环境的三维点云,在所述目标时刻下的预测点云。
在一个实施例中,可以先获取车辆行驶环境的环境图像和三维点云,然后在获取到的环境图像中确定路面区域。
对于三维点云,可以将其中的点向环境图像投影,而在将三维点云中的点向所述环境图像投影之前,对车辆行驶环境采集的三维点云操作可以连续执行(执行的次数和执行的时刻可以根据需要进行设置),从而采集到多个时刻下环境的三维点云。具体地,除了采集目标时刻下的三维点云,还可以采 集目标时刻之前或之后至少一个其他时刻下车辆行驶环境的三维点云。对于采集到的其他时刻下的三维点云,可以确定在目标时刻下的预测点云。
在一个实施例中,车辆在其他时刻与目标时刻的姿态和位置上的差异,可以导致车辆在其他时刻与目标时刻下环境的三维点云有所不同,那么可以根据姿态差异和位置差异确定其他时刻的车辆行驶环境的三维点云,在目标时刻下的预测点云。
例如t时刻为当前时刻,t+1时刻为当前时刻的下一时刻,t时刻车辆的位置坐标为(x1,y1),t+1时刻车辆的位置坐标为(x1+x0,y1+y0),姿态(例如行驶方向)没有变化,那么对于t+1时刻的三维点云,确定其在t时刻的预测点云,可以将t+1时刻的三维点云中所有点在x方向上减去x0,在y方向减去y0。
需要说明的是,确定预测点云的方式并不限于上述方式,而是可以根据需要选择,例如可以基于卡尔曼滤波对其他时刻下的目标点云进行预测,从而确定预测点云,也可以基于预先机器学习得到的预测模型对其他时刻下的目标点云进行预测,从而确定预测点云。
然后将预测点云堆叠到所述目标时刻下环境的三维点云中,从而提高点云密度,那么将三维点云中的点向所述环境图像投影,具体可以是将堆叠后的三维点云中的点向环境图像投影,然后确定位于所述车道线区域内的目标点,由于堆叠后的三维点云中包含更多的点,因此可以提高投影到环境图像中位于车道线区域的目标点的数量,进而拟合目标点以确定车道线,就可以针对数量更多的目标点进行拟合,有利于得到准确的拟合结果。
图11是根据本公开的实施例示出的又一种高精度地图中车道线的确定方法的示意流程图。如图11所示,所述将所述三维点云中的点向所述环境图像投影包括:
在步骤S303中,根据采集所述环境图像的图像采集设备的内参,世界坐标系到所述图像采集设备的坐标系的旋转关系和位移关系,将所述三维点云中的点在世界坐标系中的第一坐标,转换为所述环境图像中的第二坐标。
在一个实施例中,可以先获取车辆行驶环境的环境图像和三维点云,然后在获取到的环境图像中确定路面区域,进而将所述三维点云中的点向所述环境图像投影。
例如可以获取采集环境图像的图像采集设备的内参,然后根据该内参,以及世界坐标系到图像采集设备的坐标系的旋转关系和位移关系,将三维点云中的点在世界坐标系中的第一坐标,转换为环境图像中的第二坐标。
在将三维点云中的点在世界坐标系中的第一坐标,转换为环境图像中的第二坐标后,就确定了三维点云中的点在环境图像中对应的坐标,即第二坐标,而车道线区域已经确定,那么根据第二坐标与车道线区域的关系,就可以确定位于车道线区域内的目标点,也即三维点云中位于所述车道线区域内的点。最后可以拟合所述目标点以确定车道线。
图12是根据本公开的实施例示出的又一种高精度地图中车道线的确定方法的示意流程图。如图12所示,所述方法还包括:
在步骤S9中,标注所述目标点;
在步骤S10中,显示带有标注的所述目标点。
在一个实施例中,可以先获取车辆行驶环境的环境图像和三维点云,然后在获取到的环境图像中确定路面区域,进而将所述三维点云中的点向所述环境图像投影,确定位于所述车道线区域内的目标点,然后拟合目标点以确定车道线。
而对于目标点,可以对其进行标注,例如可以通过特定的颜色对目标点进行标注,例如对三维点云中的目标点标记为白色,其他非目标点则可以标记为黑色,也可以通过特定的标识对目标点进行标注,例如在三维点云中为目标点赋予不同于非目标点的标识。
据此,可以显示带有标注的目标点,以便用户在查看三维点云时,能够根据标注区分三维点云中的目标点和非目标点。
图13是根据本公开的实施例示出的又一种高精度地图中车道线的确定方法的示意流程图。如图13所示,所述方法还包括:
在步骤S11中,根据接收到的修正指令修正所述车道线;
在步骤S12中,将修正后的车道线投影到所述环境图像中,以确定修正后的车道线在所述环境图像中的投影与所述车道线区域是否匹配;
在步骤S13中,根据修正后的车道线在所述环境图像中的投影与所述车道线区域的匹配结果生成响应信息。
在一个实施例中,可以先获取车辆行驶环境的环境图像和三维点云,然后在获取到的环境图像中确定路面区域,进而将所述三维点云中的点向所述环境图像投影,确定位于所述车道线区域内的目标点,然后拟合目标点以确定车道线。
在确定车道线之后,可以接收人工输入的修正指令对车道线进行修正,但是人工修正的结果也可能存在误差,因此可以将修正后的车道线投影到环境图像中,以确定修正后的车道线在环境图像中的投影与车道线区域是否匹配,然后根据修正后的车道线在环境图像中的投影与车道线区域的匹配结果生成响应信息。
若修正后的车道线在环境图像中的投影与车道线区域不匹配,例如修正后的车道线在环境图像中的投影超过预设比例落在车道线区域以外,那么生成的响应信息可以用于提示用户修正结果不合理,以便用户重新修正;若修正后的车道线在环境图像中的投影与车道线区域匹配,例如修正后的车道线在环境图像中的投影小于预设比例落在车道线区域以外,那么生成的响应信息可以用于提示用户修正结果合理。
需要说明的是,本实施例除了在拟合目标点得到车道线后由人工参与进行修正,也可以在确定车道线区域,将三维点云中的点向环境图像投影的过程中由人工参与进行修正,例如在确定车道线过程中,可以接收人工输入的指令修正、补充、删减环境图像中的车道线区域,例如在将三维点云中的点向环境图像投影的过程中,对投影后的目标进行调整。
FIG. 14 is a schematic flowchart of yet another method for determining a lane line in a high-precision map according to an embodiment of the present disclosure. As shown in FIG. 14, before the fitting of the target points to determine the lane line, the method further includes:
In step S14, determining, among the non-target points of the three-dimensional point cloud, candidate points whose distance to the target points is less than a preset distance;
In step S15, determining, among the candidate points, extension points whose similarity to the target points in preset attribute information is greater than a preset similarity;
In step S16, taking the extension points and the target points as new target points;
wherein the fitting of the target points to determine the lane line includes:
In step S401, fitting the new target points to determine the lane line.
In an embodiment, an environment image and a three-dimensional point cloud of the vehicle driving environment may be acquired first, a road surface region may then be determined in the acquired environment image, the points in the three-dimensional point cloud may then be projected onto the environment image to determine the target points located in the lane line region, and the target points may then be fitted to determine the lane line.
However, projecting the points of the three-dimensional point cloud into the environment image will more or less involve some deviation, caused for example by insufficiently accurate extrinsic parameters of the image acquisition device that acquires the environment image. As a result, some points that lie in the lane line region of the three-dimensional point cloud will not be projected into the lane line region of the environment image, which may make the fitting result inaccurate; that is, the lane line determined by fitting may differ from the actual lane line in the three-dimensional point cloud.
Since the deviation is generally not large, however, the points that are not projected into the lane line region are generally close to the target points that are projected into it. Therefore, before fitting the target points to determine the lane line, candidate points whose distance to the target points is less than a preset distance can be determined among the non-target points of the three-dimensional point cloud; these candidate points may be points that should have been projected into the lane line region.
Among the candidate points, extension points whose similarity to the target points in preset attribute information is greater than a preset similarity can then be determined, for example with a flood fill algorithm. The preset attribute can be set as needed, for example reflection intensity. Since the preset attribute information of these extension points is very close to that of the target points, they are very likely points that were not projected into the lane line region. The new target points can then be fitted to determine the lane line; specifically, the extension points and the original target points are taken as new target points for lane line fitting.
Accordingly, the problem of an inaccurate fitting result caused by deviations in projecting the points of the three-dimensional point cloud into the environment image can be alleviated.
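A minimal sketch of this flood-fill expansion over the point cloud follows (SciPy's cKDTree for neighbor queries, reflection intensity as the preset attribute, and an absolute intensity difference as the similarity measure are assumptions of this sketch):

import numpy as np
from scipy.spatial import cKDTree

def expand_target_points(points, intensity, target_idx,
                         max_dist=0.2, max_intensity_diff=5.0):
    """Grow the target set into nearby, similar-intensity non-target points.

    points: (N, 3) point cloud; intensity: (N,) reflection intensity per point.
    target_idx: indices of the initial target points.
    Returns indices of the new target set (original targets plus extension points).
    """
    tree = cKDTree(points)
    accepted = set(target_idx.tolist())
    frontier = list(target_idx)
    while frontier:                                   # breadth-first flood fill
        seed = frontier.pop()
        for j in tree.query_ball_point(points[seed], r=max_dist):
            if j in accepted:
                continue
            # candidate point: close enough to a target point; accept it as an
            # extension point only if its preset attribute (intensity) is similar
            if abs(intensity[j] - intensity[seed]) <= max_intensity_diff:
                accepted.add(j)
                frontier.append(j)
    return np.fromiter(accepted, dtype=int)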
FIG. 15 is a schematic flowchart of yet another method for determining a lane line in a high-precision map according to an embodiment of the present disclosure. As shown in FIG. 15, the fitting of the target points to determine the lane line includes:
In step S402, fitting the target points through a curve model to determine the lane line.
In an embodiment, an environment image and a three-dimensional point cloud of the vehicle driving environment may be acquired first, a road surface region may then be determined in the acquired environment image, the points in the three-dimensional point cloud may then be projected onto the environment image to determine the target points located in the lane line region, and the target points may then be fitted to determine the lane line. A curve model can be selected as needed for fitting the target points; for example, a Bézier curve can be selected to fit the target points to determine the lane line.
FIG. 16 is a schematic flowchart of yet another method for determining a lane line in a high-precision map according to an embodiment of the present disclosure. As shown in FIG. 16, the fitting of the target points through a Bézier curve to determine the lane line includes:
In step S4021, fitting the target points through multi-segment third-order Bézier curves to determine the lane line.
In an embodiment, an environment image and a three-dimensional point cloud of the vehicle driving environment may be acquired first, a road surface region may then be determined in the acquired environment image, the points in the three-dimensional point cloud may then be projected onto the environment image to determine the target points located in the lane line region, and the target points may then be fitted to determine the lane line. The target points can be fitted through multi-segment third-order Bézier curves.
The equation of a third-order Bézier curve is as follows:
P(t) = A·(1-t)^3 + 3B·(1-t)^2·t + 3C·(1-t)·t^2 + D·t^3
where A, B, C, and D are the coordinates of the points selected from the target points as control points; the selection of the control points has been described above and is not repeated here.
Specifically, the fitting may proceed as follows. The two farthest target points are determined as the start point and the end point for fitting. For the fitted curve, it is then determined whether there is a target point whose distance to the curve is greater than a preset distance. If such a target point exists, the fitting result does not meet the requirement; a perpendicular is drawn from that target point to the curve, and the curve is divided into two parts at the intersection of the perpendicular and the curve. The target points are then fitted again for each part of the curve. If, for a further fitted curve, there is still a target point whose distance to the curve is greater than the preset distance, a perpendicular is again drawn from that target point to the curve, the curve is further divided at the intersection of the perpendicular and the curve, and each divided part is fitted again, until, for the fitted curves, the distance from every target point to the curve is less than or equal to the preset distance.
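For illustration, a simplified sketch of such multi-segment third-order Bézier fitting is given below (least-squares solving of the two inner control points with fixed endpoints, chord-length parameterization, and splitting at the worst-fitting point rather than at the exact foot of the perpendicular are simplifying assumptions of this sketch):

import numpy as np

def _bezier(t, A, B, C, D):
    """Evaluate P(t) = A(1-t)^3 + 3B(1-t)^2 t + 3C(1-t)t^2 + D t^3 for (N,) t."""
    t = t[:, None]
    return (A * (1 - t) ** 3 + 3 * B * (1 - t) ** 2 * t
            + 3 * C * (1 - t) * t ** 2 + D * t ** 3)

def fit_cubic_bezier(points):
    """Least-squares fit of one third-order Bezier segment with fixed endpoints."""
    d = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(d)]) / d.sum()  # chord-length parameters
    A, D = points[0], points[-1]
    rhs = points - np.outer((1 - t) ** 3, A) - np.outer(t ** 3, D)
    M = np.stack([3 * (1 - t) ** 2 * t, 3 * (1 - t) * t ** 2], axis=1)
    ctrl, *_ = np.linalg.lstsq(M, rhs, rcond=None)       # solve for B and C
    B, C = ctrl[0], ctrl[1]
    err = np.linalg.norm(points - _bezier(t, A, B, C, D), axis=1)
    return (A, B, C, D), err

def fit_lane_line(points, max_err=0.05):
    """Fit ordered target points with multi-segment third-order Bezier curves,
    splitting until every point lies within max_err of its segment."""
    segment, err = fit_cubic_bezier(points)
    worst = int(np.argmax(err))
    if err[worst] <= max_err or len(points) <= 4 or worst in (0, len(points) - 1):
        return [segment]
    # split at the worst-fitting point (a stand-in for the perpendicular-foot
    # split described above) and refit each part recursively
    return (fit_lane_line(points[:worst + 1], max_err)
            + fit_lane_line(points[worst:], max_err))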
FIG. 17 is a schematic flowchart of yet another method for determining a lane line in a high-precision map according to an embodiment of the present disclosure. As shown in FIG. 17, the method further includes:
In step S17, generating a control instruction based on the lane line, wherein the control instruction is used to control the driving of a vehicle.
In an embodiment, an environment image and a three-dimensional point cloud of the vehicle driving environment may be acquired first, a road surface region may then be determined in the acquired environment image, the points in the three-dimensional point cloud may then be projected onto the environment image to determine the target points located in the lane line region, and the target points may then be fitted to determine the lane line.
The method for determining a lane line in a high-precision map can be applied in the field of autonomous driving technology. For example, when it is applied to an autonomous vehicle, a control instruction can be generated based on the lane line determined by fitting, so as to control the driving of the vehicle. For example, a control instruction can be generated so that, during autonomous driving, the vehicle keeps driving between two lane lines, so as to avoid interfering with or colliding with vehicles in other lanes, thereby ensuring traffic safety during autonomous driving.
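Purely as an illustration of generating a control instruction from the fitted lane lines (sampling both lane lines into points and applying a simple proportional law on the lateral offset from the lane center are assumptions of this sketch, not the control strategy of the present disclosure):

import numpy as np

def lane_keeping_command(left_line, right_line, vehicle_xy, gain=0.5):
    """Generate a steering value keeping the vehicle between two lane lines.

    left_line, right_line: (N, 2) points sampled along the two fitted lane lines.
    vehicle_xy: (2,) current vehicle position in the same frame.
    Returns a steering value proportional to the lateral offset from the lane
    center (positive steers left, by the convention of this sketch).
    """
    center = (left_line + right_line) / 2.0      # lane centerline samples
    nearest = center[np.argmin(np.linalg.norm(center - vehicle_xy, axis=1))]
    lateral_offset = vehicle_xy[0] - nearest[0]  # assumes x is the lateral axis
    return -gain * lateral_offset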
Embodiments of the apparatus for determining a lane line in a high-precision map of the present disclosure can be applied to an on-board device. The apparatus embodiments can be implemented in software, in hardware, or in a combination of software and hardware. Taking software implementation as an example, the apparatus, as an apparatus in the logical sense, is formed by the processor of the device in which it is located reading corresponding computer program instructions from a non-volatile memory into memory for execution. At the hardware level, FIG. 18 is a schematic diagram of a hardware structure of the device in which the apparatus for determining a lane line in a high-precision map of the present disclosure is located. Besides the processor, network interface, memory, and non-volatile memory shown in FIG. 18, the device in which the apparatus of the embodiment is located may generally further include other hardware, such as a forwarding chip responsible for processing packets; in terms of hardware structure, the device may also be a distributed device and may include multiple interface cards, so that packet processing can be extended at the hardware level.
An embodiment of the present disclosure further provides an apparatus for determining a lane line in a high-precision map. The apparatus for determining a lane line in a high-precision map according to the embodiments of the present disclosure can be applied to an image acquisition device, which can collect an environment image and a three-dimensional point cloud of the vehicle driving environment, and can also be applied to other electronic devices capable of analyzing and processing environment images and three-dimensional point clouds, such as terminals, servers, and on-board devices.
In an embodiment, the lane line determination apparatus includes one or more processors working individually or cooperatively, the processors being configured to execute:
acquiring an environment image and a three-dimensional point cloud of a vehicle driving environment;
determining a lane line region in the environment image;
projecting points in the three-dimensional point cloud onto the environment image, and determining target points located in the lane line region;
fitting the target points to determine a lane line.
In an embodiment, the processor is configured to execute:
determining the lane line region in the environment image according to a predetermined image recognition model.
In an embodiment, the processor is configured to execute:
determining a road surface region in the environment image;
determining the lane line region in the road surface region.
In an embodiment, the processor is further configured to execute:
before projecting the points in the three-dimensional point cloud onto the environment image, determining obstacle points belonging to obstacles in the three-dimensional point cloud;
removing the obstacle points from the three-dimensional point cloud;
wherein the projecting of the points in the three-dimensional point cloud onto the environment image includes:
projecting points in the three-dimensional point cloud from which the obstacle points have been removed onto the environment image.
In an embodiment, the processor is configured to execute:
determining the obstacle points belonging to obstacles in the three-dimensional point cloud according to a predetermined deep learning model.
In an embodiment, the three-dimensional point cloud is a three-dimensional point cloud of the environment at a target time, and the processor is further configured to execute:
before projecting the points in the three-dimensional point cloud onto the environment image, determining a predicted point cloud, at the target time, of a three-dimensional point cloud of the vehicle driving environment for at least one other time before or after the target time;
stacking the predicted point cloud into the three-dimensional point cloud of the environment at the target time;
wherein the projecting of the points in the three-dimensional point cloud onto the environment image includes:
projecting points in the stacked three-dimensional point cloud onto the environment image.
In an embodiment, for the determining of the predicted point cloud of the three-dimensional point cloud for the at least one other time before or after the target time, the processor is configured to execute:
determining a pose difference of the vehicle between the other time and the target time, and a position difference of the vehicle between the other time and the target time;
determining, according to the pose difference and the position difference, the predicted point cloud, at the target time, of the three-dimensional point cloud of the vehicle driving environment at the other time.
In an embodiment, the processor is configured to execute:
converting, according to intrinsic parameters of the image acquisition device that acquires the environment image, and a rotation relationship and a translation relationship from the world coordinate system to the coordinate system of the image acquisition device, first coordinates of the points in the three-dimensional point cloud in the world coordinate system into second coordinates in the environment image.
In an embodiment, the processor is further configured to execute:
labeling the target points;
displaying the labeled target points.
In an embodiment, the processor is further configured to execute:
correcting the lane line according to a received correction instruction;
projecting the corrected lane line into the environment image, so as to determine whether the projection of the corrected lane line in the environment image matches the lane line region;
generating response information according to a result of matching the projection of the corrected lane line in the environment image against the lane line region.
In an embodiment, the processor is further configured to execute:
before fitting the target points to determine the lane line, determining, among non-target points of the three-dimensional point cloud, candidate points whose distance to the target points is less than a preset distance;
determining, among the candidate points, extension points whose similarity to the target points in preset attribute information is greater than a preset similarity;
taking the extension points and the target points as new target points;
wherein the fitting of the target points to determine the lane line includes:
fitting the new target points to determine the lane line.
In an embodiment, the processor is configured to execute:
fitting the target points through a curve model to determine the lane line.
In an embodiment, the processor is configured to execute:
fitting the target points through multi-segment third-order Bézier curves to determine the lane line.
In an embodiment, the processor is further configured to execute:
generating a control instruction based on the lane line, wherein the control instruction is used to control the driving of a vehicle.
An embodiment of the present disclosure further provides an electronic device, including the apparatus for determining a lane line in a high-precision map according to any one of the above embodiments.
An embodiment of the present disclosure further provides an autonomous vehicle, including the electronic device according to the above embodiment.
The systems, apparatuses, modules, or units set forth in the above embodiments may be specifically implemented by a computer chip or an entity, or by a product having a certain function. For convenience of description, the above apparatuses are described with their functions divided into various units. Of course, when implementing the present application, the functions of the units may be implemented in one or more pieces of software and/or hardware. Those skilled in the art should understand that the embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Therefore, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively brief, and reference may be made to the relevant parts of the description of the method embodiments.
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. The terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The above are only embodiments of the present application and are not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.

Claims (30)

  1. A method for determining a lane line in a high-precision map, characterized by comprising:
    acquiring an environment image and a three-dimensional point cloud of a vehicle driving environment;
    determining a lane line region in the environment image;
    projecting points in the three-dimensional point cloud onto the environment image, and determining target points located in the lane line region;
    fitting the target points to determine a lane line.
  2. The method according to claim 1, characterized in that the determining a lane line region in the environment image comprises:
    determining the lane line region in the environment image according to a predetermined image recognition model.
  3. The method according to claim 1, characterized in that the determining a lane line region in the environment image comprises:
    determining a road surface region in the environment image;
    determining the lane line region in the road surface region.
  4. The method according to claim 1, characterized in that, before projecting the points in the three-dimensional point cloud onto the environment image, the method further comprises:
    determining obstacle points belonging to obstacles in the three-dimensional point cloud;
    removing the obstacle points from the three-dimensional point cloud;
    wherein the projecting points in the three-dimensional point cloud onto the environment image comprises:
    projecting points in the three-dimensional point cloud from which the obstacle points have been removed onto the environment image.
  5. The method according to claim 4, characterized in that the determining obstacle points belonging to obstacles in the three-dimensional point cloud comprises:
    determining the obstacle points belonging to obstacles in the three-dimensional point cloud according to a predetermined deep learning model.
  6. The method according to claim 1, characterized in that the three-dimensional point cloud is a three-dimensional point cloud of the environment at a target time, and before projecting the points in the three-dimensional point cloud onto the environment image, the method further comprises:
    determining a predicted point cloud, at the target time, of a three-dimensional point cloud of the vehicle driving environment for at least one other time before or after the target time;
    stacking the predicted point cloud into the three-dimensional point cloud of the environment at the target time;
    wherein the projecting points in the three-dimensional point cloud onto the environment image comprises:
    projecting points in the stacked three-dimensional point cloud onto the environment image.
  7. The method according to claim 6, characterized in that the determining a predicted point cloud, at the target time, of the three-dimensional point cloud for the at least one other time before or after the target time comprises:
    determining a pose difference of the vehicle between the other time and the target time, and a position difference of the vehicle between the other time and the target time;
    determining, according to the pose difference and the position difference, the predicted point cloud, at the target time, of the three-dimensional point cloud of the vehicle driving environment at the other time.
  8. The method according to any one of claims 1 to 7, characterized in that the projecting points in the three-dimensional point cloud onto the environment image comprises:
    converting, according to intrinsic parameters of an image acquisition device that acquires the environment image, and a rotation relationship and a translation relationship from a world coordinate system to a coordinate system of the image acquisition device, first coordinates of the points in the three-dimensional point cloud in the world coordinate system into second coordinates in the environment image.
  9. The method according to any one of claims 1 to 7, characterized in that the method further comprises:
    labeling the target points;
    displaying the labeled target points.
  10. The method according to any one of claims 1 to 7, characterized in that the method further comprises:
    correcting the lane line according to a received correction instruction;
    projecting the corrected lane line into the environment image, so as to determine whether a projection of the corrected lane line in the environment image matches the lane line region;
    generating response information according to a result of matching the projection of the corrected lane line in the environment image against the lane line region.
  11. The method according to any one of claims 1 to 7, characterized in that, before fitting the target points to determine the lane line, the method further comprises:
    determining, among non-target points of the three-dimensional point cloud, candidate points whose distance to the target points is less than a preset distance;
    determining, among the candidate points, extension points whose similarity to the target points in preset attribute information is greater than a preset similarity;
    taking the extension points and the target points as new target points;
    wherein the fitting the target points to determine a lane line comprises:
    fitting the new target points to determine the lane line.
  12. The method according to any one of claims 1 to 7, characterized in that the fitting the target points to determine a lane line comprises:
    fitting the target points through a curve model to determine the lane line.
  13. The method according to claim 12, characterized in that the fitting the target points through a Bézier curve to determine the lane line comprises:
    fitting the target points through multi-segment third-order Bézier curves to determine the lane line.
  14. The method according to any one of claims 1 to 7, characterized in that the method further comprises:
    generating a control instruction based on the lane line, wherein the control instruction is used to control driving of a vehicle.
  15. An apparatus for determining a lane line in a high-precision map, characterized in that the apparatus comprises one or more processors working individually or cooperatively, the processors being configured to execute:
    acquiring an environment image and a three-dimensional point cloud of a vehicle driving environment;
    determining a lane line region in the environment image;
    projecting points in the three-dimensional point cloud onto the environment image, and determining target points located in the lane line region;
    fitting the target points to determine a lane line.
  16. The apparatus according to claim 15, characterized in that the processor is configured to execute:
    determining the lane line region in the environment image according to a predetermined image recognition model.
  17. The apparatus according to claim 15, characterized in that the processor is configured to execute:
    determining a road surface region in the environment image;
    determining the lane line region in the road surface region.
  18. The apparatus according to claim 15, characterized in that the processor is further configured to execute:
    before projecting the points in the three-dimensional point cloud onto the environment image, determining obstacle points belonging to obstacles in the three-dimensional point cloud;
    removing the obstacle points from the three-dimensional point cloud;
    wherein the projecting points in the three-dimensional point cloud onto the environment image comprises:
    projecting points in the three-dimensional point cloud from which the obstacle points have been removed onto the environment image.
  19. The apparatus according to claim 18, characterized in that the processor is configured to execute:
    determining the obstacle points belonging to obstacles in the three-dimensional point cloud according to a predetermined deep learning model.
  20. The apparatus according to claim 15, characterized in that the three-dimensional point cloud is a three-dimensional point cloud of the environment at a target time, and the processor is further configured to execute:
    before projecting the points in the three-dimensional point cloud onto the environment image, determining a predicted point cloud, at the target time, of a three-dimensional point cloud of the vehicle driving environment for at least one other time before or after the target time;
    stacking the predicted point cloud into the three-dimensional point cloud of the environment at the target time;
    wherein the projecting points in the three-dimensional point cloud onto the environment image comprises:
    projecting points in the stacked three-dimensional point cloud onto the environment image.
  21. The apparatus according to claim 20, characterized in that, for the determining of the predicted point cloud of the three-dimensional point cloud for the at least one other time before or after the target time, the processor is configured to execute:
    determining a pose difference of the vehicle between the other time and the target time, and a position difference of the vehicle between the other time and the target time;
    determining, according to the pose difference and the position difference, the predicted point cloud, at the target time, of the three-dimensional point cloud of the vehicle driving environment at the other time.
  22. The apparatus according to any one of claims 15 to 21, characterized in that the processor is configured to execute:
    converting, according to intrinsic parameters of an image acquisition device that acquires the environment image, and a rotation relationship and a translation relationship from a world coordinate system to a coordinate system of the image acquisition device, first coordinates of the points in the three-dimensional point cloud in the world coordinate system into second coordinates in the environment image.
  23. The apparatus according to any one of claims 15 to 21, characterized in that the processor is further configured to execute:
    labeling the target points;
    displaying the labeled target points.
  24. The apparatus according to any one of claims 15 to 21, characterized in that the processor is further configured to execute:
    correcting the lane line according to a received correction instruction;
    projecting the corrected lane line into the environment image, so as to determine whether a projection of the corrected lane line in the environment image matches the lane line region;
    generating response information according to a result of matching the projection of the corrected lane line in the environment image against the lane line region.
  25. The apparatus according to any one of claims 15 to 21, characterized in that the processor is further configured to execute:
    before fitting the target points to determine the lane line, determining, among non-target points of the three-dimensional point cloud, candidate points whose distance to the target points is less than a preset distance;
    determining, among the candidate points, extension points whose similarity to the target points in preset attribute information is greater than a preset similarity;
    taking the extension points and the target points as new target points;
    wherein the fitting the target points to determine a lane line comprises:
    fitting the new target points to determine the lane line.
  26. The apparatus according to any one of claims 15 to 21, characterized in that the processor is configured to execute:
    fitting the target points through a curve model to determine the lane line.
  27. The apparatus according to claim 16, characterized in that the processor is configured to execute:
    fitting the target points through multi-segment third-order Bézier curves to determine the lane line.
  28. The apparatus according to any one of claims 15 to 21, characterized in that the processor is further configured to execute:
    generating a control instruction based on the lane line, wherein the control instruction is used to control driving of a vehicle.
  29. An electronic device, characterized by comprising the apparatus for determining a lane line in a high-precision map according to any one of claims 15 to 28.
  30. An autonomous vehicle, characterized by comprising the electronic device according to claim 29.
PCT/CN2019/106648 2019-09-19 2019-09-19 高精度地图中车道线的确定方法和装置 WO2021051344A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980033197.XA CN112154445A (zh) 2019-09-19 2019-09-19 高精度地图中车道线的确定方法和装置
PCT/CN2019/106648 WO2021051344A1 (zh) 2019-09-19 2019-09-19 高精度地图中车道线的确定方法和装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/106648 WO2021051344A1 (zh) 2019-09-19 2019-09-19 高精度地图中车道线的确定方法和装置

Publications (1)

Publication Number Publication Date
WO2021051344A1 true WO2021051344A1 (zh) 2021-03-25

Family

ID=73891923

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/106648 WO2021051344A1 (zh) 2019-09-19 2019-09-19 高精度地图中车道线的确定方法和装置

Country Status (2)

Country Link
CN (1) CN112154445A (zh)
WO (1) WO2021051344A1 (zh)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113536883B (zh) * 2021-03-23 2023-05-02 长沙智能驾驶研究院有限公司 障碍物检测方法、车辆、设备及计算机存储介质
CN113160355A (zh) * 2021-04-15 2021-07-23 的卢技术有限公司 园区车道线生成方法、系统及计算机可读存储介质
CN114863026B (zh) * 2022-05-18 2023-04-14 禾多科技(北京)有限公司 三维车道线信息生成方法、装置、设备和计算机可读介质
CN115330923B (zh) * 2022-08-10 2023-11-14 小米汽车科技有限公司 点云数据渲染方法、装置、车辆、可读存储介质及芯片


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170039436A1 (en) * 2015-08-03 2017-02-09 Nokia Technologies Oy Fusion of RGB Images and Lidar Data for Lane Classification
CN105678689A (zh) * 2015-12-31 2016-06-15 百度在线网络技术(北京)有限公司 高精地图数据配准关系确定方法及装置
CN107463918A (zh) * 2017-08-17 2017-12-12 武汉大学 基于激光点云与影像数据融合的车道线提取方法
CN108985230A (zh) * 2018-07-17 2018-12-11 深圳市易成自动驾驶技术有限公司 车道线检测方法、装置及计算机可读存储介质
CN110097620A (zh) * 2019-04-15 2019-08-06 西安交通大学 基于图像和三维激光的高精度地图创建系统
CN110136182A (zh) * 2019-05-28 2019-08-16 北京百度网讯科技有限公司 激光点云与2d影像的配准方法、装置、设备和介质

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362421A (zh) * 2021-06-30 2021-09-07 北京百度网讯科技有限公司 地图中导流区的绘制方法、装置和电子设备
CN113362421B (zh) * 2021-06-30 2023-11-28 北京百度网讯科技有限公司 地图中导流区的绘制方法、装置和电子设备
CN115131761A (zh) * 2022-08-31 2022-09-30 北京百度网讯科技有限公司 道路边界的识别方法、绘制方法、装置及高精地图
CN115407364A (zh) * 2022-09-06 2022-11-29 安徽蔚来智驾科技有限公司 点云地图处理方法、车道标注数据获取方法、设备及介质
CN115201817A (zh) * 2022-09-08 2022-10-18 南京慧尔视智能科技有限公司 一种车道生成方法、装置、设备及存储介质
CN115201817B (zh) * 2022-09-08 2022-12-30 南京慧尔视智能科技有限公司 一种车道生成方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN112154445A (zh) 2020-12-29

Similar Documents

Publication Publication Date Title
WO2021051344A1 (zh) 高精度地图中车道线的确定方法和装置
JP6862409B2 (ja) 地図生成及び移動主体の位置決めの方法及び装置
CN111340797A (zh) 一种激光雷达与双目相机数据融合检测方法及系统
JP2022509302A (ja) 地図生成方法、運転制御方法、装置、電子機器及びシステム
WO2020043081A1 (zh) 定位技术
JP2021119507A (ja) 車線の決定方法、車線測位精度の評価方法、車線の決定装置、車線測位精度の評価装置、電子デバイス、コンピュータ可読記憶媒体、及びプログラム
WO2018133727A1 (zh) 一种正射影像图的生成方法及装置
WO2021051346A1 (zh) 立体车道线确定方法、装置和电子设备
WO2021017211A1 (zh) 一种基于视觉的车辆定位方法、装置及车载终端
CN115376109B (zh) 障碍物检测方法、障碍物检测装置以及存储介质
CN115410167A (zh) 目标检测与语义分割方法、装置、设备及存储介质
CN115164918A (zh) 语义点云地图构建方法、装置及电子设备
CN113255578B (zh) 交通标识的识别方法及装置、电子设备和存储介质
CN114119682A (zh) 一种激光点云和图像配准方法及配准系统
CN112507891B (zh) 自动化识别高速路口并构建路口向量的方法及装置
CN117079238A (zh) 道路边沿检测方法、装置、设备及存储介质
CN110827340B (zh) 地图的更新方法、装置及存储介质
CN116978010A (zh) 图像标注方法和装置、存储介质和电子设备
CN116642490A (zh) 基于混合地图的视觉定位导航方法、机器人及存储介质
WO2022077660A1 (zh) 一种车辆定位的方法和装置
CN114898321A (zh) 道路可行驶区域检测方法、装置、设备、介质及系统
Lee et al. Semi-automatic framework for traffic landmark annotation
CN113870365B (zh) 相机标定方法、装置、设备以及存储介质
CN114612879A (zh) 一种地面交通标志检测方法、装置和电子设备
CN115272998A (zh) 感知元素相对关系检测方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19946006

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19946006

Country of ref document: EP

Kind code of ref document: A1