WO2021051346A1 - Three-dimensional lane line determination method, device and electronic device - Google Patents

Three-dimensional lane line determination method, device and electronic device


Publication number
WO2021051346A1
Authority
WO
WIPO (PCT)
Prior art keywords: lane line, point cloud, points, dimensional point, dimensional
Application number
PCT/CN2019/106656
Other languages: English (en), French (fr)
Inventors: 孙路, 周游, 朱振宇
Original Assignee: 深圳市大疆创新科技有限公司
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201980033269.0A (patent CN112154446B)
Priority to PCT/CN2019/106656
Publication of WO2021051346A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/23: Clustering techniques
    • G06F 18/232: Non-hierarchical techniques
    • G06F 18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213: Non-hierarchical techniques with fixed number of clusters, e.g. K-means clustering
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Definitions

  • The present disclosure relates to the field of map processing technology, and in particular to a method for determining a three-dimensional lane line, a device for determining a three-dimensional lane line, and an electronic device.
  • In the field of autonomous driving, recognizing the lane lines in the road is very important. In the related art, there are two main ways to obtain lane lines: one is to detect lane lines in real time from the current environment image, and the other is to obtain pre-marked lane lines from a high-precision map to determine their locations in the environment.
  • In existing approaches, marking the lane lines in a high-precision map is done mainly by hand.
  • Such a map is a three-dimensional image generated based on lidar.
  • However, an image generated by lidar has no color information and is also affected by obstacles on the road, so manually marking lane lines in it is error-prone and requires many repeated operations; the marking speed is slow and the efficiency is low.
  • In view of this, the present disclosure proposes a three-dimensional lane line determination method, a three-dimensional lane line determination device, and an electronic device to solve the technical problems in the related art.
  • a method for determining a three-dimensional lane line which includes:
  • a device for determining a three-dimensional lane line which includes one or more processors working individually or in cooperation, and the processors are configured to execute:
  • a three-dimensional lane line is generated.
  • an electronic device including the three-dimensional lane line determining device described in the above-mentioned embodiment.
  • The two-dimensional point cloud image formed by the projection points retains the height information of the three-dimensional point cloud, so the lane line points obtained by performing lane line fitting on the two-dimensional point cloud image also carry height information. Based on this height information, lane lines at different heights can be distinguished, and the lane line points can be integrated into a three-dimensional lane line.
  • Fig. 1 is a schematic flowchart of a method for determining a three-dimensional lane line according to an embodiment of the present disclosure.
  • Fig. 2 is a schematic flowchart of another method for determining a three-dimensional lane line according to an embodiment of the present disclosure.
  • Fig. 3 is a schematic diagram of blocks according to an embodiment of the present disclosure.
  • Fig. 4 is a schematic flowchart showing yet another method for determining a three-dimensional lane line according to an embodiment of the present disclosure.
  • Fig. 5 is a schematic flowchart showing another method for determining a three-dimensional lane line according to an embodiment of the present disclosure.
  • Fig. 6 is a schematic flowchart showing yet another method for determining a three-dimensional lane line according to an embodiment of the present disclosure.
  • Fig. 7 is a schematic flowchart showing yet another method for determining a three-dimensional lane line according to an embodiment of the present disclosure.
  • Fig. 8 is a schematic flowchart showing another method for determining a three-dimensional lane line according to an embodiment of the present disclosure.
  • Fig. 9 is a schematic flowchart showing yet another method for determining a three-dimensional lane line according to an embodiment of the present disclosure.
  • Fig. 10 is a schematic flow chart showing a method of performing lane line fitting according to the two-dimensional point cloud image to obtain lane line points according to an embodiment of the present disclosure.
  • Fig. 11 is another schematic flow chart of performing lane line fitting according to the two-dimensional point cloud image to obtain lane line points according to an embodiment of the present disclosure.
  • Fig. 12 is a schematic flow chart showing yet another method of performing lane line fitting according to the two-dimensional point cloud image to obtain lane line points according to an embodiment of the present disclosure.
  • Fig. 13 is a schematic flow chart showing yet another method of performing lane line fitting according to the two-dimensional point cloud image to obtain lane line points according to an embodiment of the present disclosure.
  • Fig. 14 is yet another schematic flow chart of performing lane line fitting according to the two-dimensional point cloud image to obtain lane line points according to an embodiment of the present disclosure.
  • Fig. 15 is a schematic flow chart showing yet another method for determining a three-dimensional lane line according to an embodiment of the present disclosure.
  • Fig. 16 is a schematic flowchart showing another method for determining a three-dimensional lane line according to an embodiment of the present disclosure.
  • Fig. 17 is a schematic flowchart showing another method for determining a three-dimensional lane line according to an embodiment of the present disclosure.
  • Fig. 18 is a schematic flow chart showing yet another method for determining a three-dimensional lane line according to an embodiment of the present disclosure.
  • Fig. 19 is a schematic diagram showing a hardware structure of a device where a device for determining a three-dimensional lane line is located according to an embodiment of the present disclosure.
  • Fig. 1 is a schematic flowchart of a method for determining a three-dimensional lane line according to an embodiment of the present disclosure.
  • the three-dimensional lane line determination method described in the embodiments of the present disclosure can be applied to image acquisition equipment, which can collect a three-dimensional point cloud of a vehicle driving environment, and can also be applied to other electronic devices that can analyze and process three-dimensional point clouds.
  • the method for determining a three-dimensional lane line includes the following steps:
  • In step S1, a three-dimensional point cloud of the target environment is acquired, and the three-dimensional point cloud is projected in the vertical direction to obtain a two-dimensional point cloud image composed of projection points, wherein the projection points retain the height information of the three-dimensional point cloud.
  • The three-dimensional point cloud of the target environment can be obtained by lidar.
  • The three-dimensional point cloud is projected in the vertical direction, which is similar to projecting it into a bird's-eye view.
  • A bird's-eye view generally includes only the two-dimensional coordinates parallel to the horizontal plane, such as the x-axis and y-axis coordinates.
  • In this embodiment, however, the two-dimensional point cloud image formed by projecting the three-dimensional point cloud in the vertical direction not only includes the x-axis and y-axis coordinates, but also retains the height information of the three-dimensional point cloud, that is, the coordinate perpendicular to the horizontal plane, such as the z-axis coordinate.
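The projection in step S1 can be sketched as follows; the dict-based point layout and the function name are illustrative assumptions, not taken from the disclosure:

```python
def project_to_birdview(points_3d):
    """Project a 3D point cloud vertically onto the horizontal plane.

    Each projection point keeps its (x, y) coordinates and retains the
    original z value as attached height information, as described in step S1.
    """
    projection = []
    for x, y, z in points_3d:
        projection.append({"x": x, "y": y, "height": z})
    return projection

# A tiny example cloud: three points at different heights (e.g. viaduct levels).
cloud = [(1.0, 2.0, 0.2), (1.5, 2.1, 3.3), (0.9, 2.2, 6.8)]
birdview = project_to_birdview(cloud)
```

Unlike an ordinary bird's-eye rasterization, nothing is discarded here: every projection point still carries the z value needed by the later steps.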
  • In step S2, lane line fitting is performed according to the two-dimensional point cloud image to obtain lane line points.
  • For example, the lane line area may first be determined in the two-dimensional point cloud image, and the projection points in the lane line area may then be determined; lane line fitting is then performed according to the two-dimensional coordinates of these projection points parallel to the horizontal plane, such as the x-axis and y-axis coordinates above, to obtain the lane line points.
  • In step S3, the height information of the lane line points is determined based on the height information of the projection points.
  • In step S4, a three-dimensional lane line is generated based on the height information of the lane line points.
  • Because the two-dimensional point cloud image formed by the projection points retains the height information of the three-dimensional point cloud, the lane line points obtained by fitting lane lines according to the two-dimensional point cloud image also carry height information. Based on this height information, lane lines at different heights can be distinguished, and the lane line points can be integrated into three-dimensional lane lines.
  • Fig. 2 is a schematic flow chart showing another method for determining a three-dimensional lane line according to an embodiment of the present disclosure.
  • The height information includes a height interval.
  • As shown in Fig. 2, determining the height information of the lane line points based on the height information of the projection points includes:
  • In step S301, the two-dimensional point cloud image is divided into multiple blocks;
  • in step S302, the height information of the projection points in each block is clustered to determine at least one height interval.
  • In this embodiment, the two-dimensional point cloud image can be divided into multiple blocks; for example, the two-dimensional point cloud image is rasterized and each grid cell is one block. Because the projection points in a block retain height information, the height information of the projection points in the block can be clustered to determine at least one height interval.
  • Applicable clustering algorithms include, but are not limited to, the k-means clustering algorithm and the AP (Affinity Propagation) clustering algorithm, which can be selected as needed.
  • For example, if the projection points in a block mainly lie in three height ranges, 0 to 0.3 meters, 3.1 to 3.4 meters, and 6.7 to 7.0 meters, then clustering the height information of the projection points in the block determines 3 height intervals, namely 0 to 0.3 meters, 3.1 to 3.4 meters, and 6.7 to 7.0 meters, indicating that lanes mainly exist in these three height ranges.
  • For example, the corresponding scene is a three-level viaduct, so there are lane lines located in three height intervals: the height information of the lane line points on the bottom lane is in the range 0 to 0.3 meters, that of the lane line points on the middle lane is in the range 3.1 to 3.4 meters, and that of the lane line points on the top lane is in the range 6.7 to 7.0 meters.
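As one hedged illustration of step S302, a simple 1-D k-means (one of the clustering algorithms the disclosure names) can group the heights in a block and report each cluster's range as a height interval; the initialization scheme and iteration count below are arbitrary choices:

```python
def kmeans_1d(values, k, iters=20):
    """Simple 1-D k-means used to group projection-point heights.

    Centers are seeded from evenly spaced order statistics; each value is
    assigned to its nearest center and centers are recomputed as group means.
    """
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return groups

# Heights in one block of the three-level viaduct example.
heights = [0.0, 0.1, 0.3, 3.1, 3.2, 3.4, 6.7, 6.9, 7.0]
clusters = kmeans_1d(heights, 3)
# Each cluster's min/max gives one height interval.
intervals = sorted((min(g), max(g)) for g in clusters if g)
```

On the example heights this recovers the three intervals of the viaduct scene: 0 to 0.3 m, 3.1 to 3.4 m, and 6.7 to 7.0 m.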
  • Fig. 3 is a schematic diagram showing a block according to an embodiment of the present disclosure.
  • As shown in Fig. 3, the two-dimensional point cloud image can be divided into 16 blocks, denoted A1B1, A1B2, A1B3, A1B4, A2B1, A2B2, A2B3, A2B4, A3B1, A3B2, A3B3, A3B4, A4B1, A4B2, A4B3 and A4B4 respectively.
  • the specific number of divided blocks and the shape of the blocks can be set as required, and are not limited to the situation shown in FIG. 3.
  • The image may include two intersecting lane lines, lane line a and lane line b, which intersect in block A2B2.
  • Fig. 4 is a schematic flowchart showing yet another method for determining a three-dimensional lane line according to an embodiment of the present disclosure. As shown in FIG. 4, the determining the height information of the lane line point based on the height information of the projection point further includes:
  • step S303 if a height interval is determined, the height information of the lane line point is determined according to the interval information of the height interval.
  • In this case, the height information of the lane line points in the block can be determined directly according to the interval information of the height interval.
  • For example, for block A2B4 the determined height interval is 3.1 to 3.4 meters, which means there is only one lane in the vertical direction in block A2B4, so the lane line in this block does not intersect any other lane line.
  • The interval information of a height interval may include the upper limit and the lower limit of the interval, such as 3.4 meters and 3.1 meters. The height information of the lane line points can then be determined from this interval information, for example by calculating the average of the upper and lower limits; the resulting height information of the lane line points is 3.25 meters (this value can also represent the height of the lane where the lane line points are located).
  • Fig. 5 is a schematic flowchart showing another method for determining a three-dimensional lane line according to an embodiment of the present disclosure. As shown in FIG. 5, the determining the height information of the lane line point based on the height information of the projection point further includes:
  • step S304 if multiple height intervals are determined, determine the lane line to which the target lane line point belongs, where the target lane line point is located in the first block of the multiple blocks;
  • In step S305, the height information of other lane line points on the lane line located in other blocks is determined, where the other blocks are blocks adjacent to the first block among the multiple blocks;
  • step S306 according to the height information of the other lane line points, determine the target height interval to which the target lane line point belongs in the multiple height intervals;
  • step S307 the height information of the target lane line point is determined according to the interval information of the target height interval.
  • Since the two lanes that cross in block A2B2 must be at different heights, part of the projection points in block A2B2 belong to one lane and the other part belong to the other lane, so they fall into two height intervals, such as the interval 0 to 0.3 meters and the interval 3.1 to 3.4 meters.
  • In this case, the lane line to which the target lane line point belongs is determined first.
  • In step S2, lane line fitting is performed according to the two-dimensional point cloud image, and at least one lane line can be obtained.
  • For example, in Fig. 3, lane line a and lane line b can be obtained. Because the projection points in each block belong to the two-dimensional point cloud image, the lane line to which each lane line point belongs can be determined; for example, it can be determined that some of the lane line points in block A2B2 belong to lane line a, and the other lane line points belong to lane line b.
  • When multiple height intervals are determined, the interval information has multiple upper and lower limits and spans too large a range, so the height information of the lane line points cannot be determined simply by calculating a mean value.
  • Therefore, this embodiment determines the height information of other lane line points on the same lane line located in other blocks, where the other blocks are blocks adjacent to the first block among the multiple blocks.
  • For example, if the first block is A2B2 in Fig. 3, then for lane line a the other blocks are A1B2 and A3B2, and for lane line b the other blocks are A2B1 and A2B3.
  • If only one height interval is determined in such an adjacent block, the height information of the lane line points in that block can be determined according to the information of that height interval,
  • that is, the height information of the lane line points can be determined according to the embodiment shown in Fig. 4.
  • For A1B2, the determined height interval is 0 to 0.3 meters, so the height information of its lane line points is 0.15 meters; for A3B2, the determined height interval is 0.5 to 0.6 meters, so the height information of its lane line points is 0.55 meters;
  • for A2B1, the determined height interval is 3.1 to 3.4 meters, so the height information of its lane line points is 3.25 meters; for A2B3, the determined height interval is 3.6 to 3.9 meters, so the height information of its lane line points is 3.75 meters.
  • Accordingly, the target height interval to which the target lane line point belongs among the multiple height intervals can be determined according to the height information of the other lane line points, and the height information of the target lane line point is then determined according to the interval information of the target height interval.
  • For example, the lane line points in A1B2 and A3B2 belong to lane line a; the height information of the lane line points in A1B2 is 0.15 meters, and that of the lane line points in A3B2 is 0.55 meters. Since the height of a lane line generally changes continuously, it can be determined that the height of the lane line points belonging to lane line a in A2B2 lies in the range 0.15 to 0.55 meters, and the average of 0.15 meters and 0.55 meters, that is 0.35 meters, can be calculated as the height information of the lane line points belonging to lane line a in A2B2.
  • Similarly, the lane line points in A2B1 and A2B3 belong to lane line b; the height information of the lane line points in A2B1 is 3.25 meters, and that of the lane line points in A2B3 is 3.75 meters. It can be determined that the height of the lane line points belonging to lane line b in A2B2 lies in the range 3.25 to 3.75 meters, and the average of 3.25 meters and 3.75 meters, that is 3.50 meters, can be calculated as the height information of the lane line points belonging to lane line b in A2B2.
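The selection in steps S304 to S307 can be sketched as below, reusing the numbers of the example above; `pick_target_interval` and its between-neighbors rule are an illustrative reading of the embodiment, not the disclosure's exact procedure:

```python
def pick_target_interval(intervals, neighbor_heights):
    """Choose, among a block's height intervals, the one lying between the
    heights of the same lane line in the two adjacent blocks (steps S304-S307).
    """
    lo, hi = min(neighbor_heights), max(neighbor_heights)
    for low, high in intervals:
        if lo <= low and high <= hi:
            return (low, high)
    return None

# Intersection block A2B2 from the example: two height intervals were found.
intervals = [(0.2, 0.4), (3.3, 3.6)]
# Lane line a passes through A1B2 (0.15 m) and A3B2 (0.55 m).
target = pick_target_interval(intervals, [0.15, 0.55])
height = sum(target) / 2  # midpoint used as the lane line point height
```

For lane line a this selects the interval 0.2 to 0.4 meters and yields a height of 0.3 meters, matching the worked example for Fig. 6 below.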
  • Fig. 6 is a schematic flowchart showing yet another method for determining a three-dimensional lane line according to an embodiment of the present disclosure.
  • As shown in Fig. 6, the other blocks include a second block and a third block; the height information of the other lane line points on the lane line located in the second block is second height information, and the height information of the other lane line points on the lane line located in the third block is third height information;
  • the determining, according to the height information of the other lane line points, the target height interval to which the target lane line point belongs among the multiple height intervals includes:
  • step S3061 an interval between the second height information and the third height information among the plurality of height intervals is determined as a target height interval.
  • For example, in Fig. 3, the other blocks include the second block A1B2 and the third block A3B2; the second height information, that of the lane line points in A1B2, is 0.15 meters, and the third height information, that of the lane line points in A3B2, is 0.55 meters.
  • The first block A2B2 includes the height intervals 0.2 to 0.4 meters and 3.3 to 3.6 meters, of which the interval 0.2 to 0.4 meters lies between 0.15 meters and 0.55 meters, so the interval 0.2 to 0.4 meters can be chosen as the target height interval of the lane line points belonging to lane line a in A2B2.
  • The height information of the lane line points can then be determined according to the target height interval, for example by calculating the average of the upper and lower limits of the target height interval, so that 0.3 meters is used as the height information of the lane line points belonging to lane line a in A2B2.
  • In addition, the height information determined for the lane line points in a block need not be a single value; it can instead be part of a continuously changing function, which can be determined based on the target height interval of the lane line points.
  • For example, if the function is a proportional (linear) function, the height information of the lane line points in the block changes continuously according to that function, so that it connects well to the lane line points of the adjacent blocks. This keeps the height information of the lane line points continuous across multiple blocks, so that continuous lane lines can be drawn based on the continuous height information.
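A minimal sketch of such a proportional (linear) height function is shown below; the parameterization by a normalized position t along the lane line within the block is an assumption:

```python
def block_height_function(h_enter, h_exit):
    """Return a linear height profile across a block, so that lane line point
    heights join continuously with the adjacent blocks. This is a sketch of
    the continuously changing (proportional) function mentioned above.
    """
    def height_at(t):
        # t in [0, 1]: normalized position along the lane line within the block
        return h_enter + (h_exit - h_enter) * t
    return height_at

# Lane line a enters A2B2 at 0.15 m (from A1B2) and leaves at 0.55 m (to A3B2).
h = block_height_function(0.15, 0.55)
```

At the block boundaries the function reproduces the neighbors' heights exactly, and at mid-block it gives 0.35 meters, consistent with the averaged value in the example above.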
  • Fig. 7 is a schematic flowchart showing yet another method for determining a three-dimensional lane line according to an embodiment of the present disclosure.
  • the clustering the height information of the projection points in the block to determine at least one height interval includes:
  • step S3021 cluster the height information of the projection points in the block to determine a plurality of extreme height values
  • In step S3022, the multiple extreme height values are used as the boundary values of height intervals to determine at least one height interval.
  • In this embodiment, the height information of the projection points in the block is clustered to determine multiple height extremes. For example, by clustering the height information of the projection points in block A2B2 shown in Fig. 3, 4 extreme height values can be determined, namely 0.2 meters, 0.4 meters, 3.3 meters and 3.6 meters. These extremes can then be sorted from small to large and divided into pairs, with the two extremes in each pair used as the boundary values of one height interval: for example, 0.2 meters and 0.4 meters form one pair, where 0.2 meters can be used as the lower limit of the height interval and 0.4 meters as its upper limit.
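The pairing in steps S3021 and S3022 can be sketched as follows (assuming, as in the example, an even number of extremes sorted in ascending order):

```python
def extremes_to_intervals(extremes):
    """Pair sorted height extremes two by two; each pair gives the lower
    and upper bound of one height interval (steps S3021-S3022).
    """
    ordered = sorted(extremes)
    return [(ordered[i], ordered[i + 1]) for i in range(0, len(ordered), 2)]

# The 4 extreme height values found for block A2B2 in the example.
intervals = extremes_to_intervals([3.6, 0.2, 3.3, 0.4])
```

On the example extremes this yields the two intervals 0.2 to 0.4 meters and 3.3 to 3.6 meters used throughout the A2B2 discussion.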
  • Fig. 8 is a schematic flowchart showing another method for determining a three-dimensional lane line according to an embodiment of the present disclosure. As shown in Figure 8, the method further includes:
  • step S5 mark the lane line point
  • step S6 the marked lane line points are displayed.
  • In this embodiment, the lane line points can be labeled, and the labeled lane line points can then be displayed.
  • For example, lane line points at different heights can be labeled with their different height information, and the labeled lane line points can then be displayed, which makes it convenient for users to determine the height of each lane line visually from its label.
  • The label includes at least one of the following: position information, category, etc. The category may include a dashed line, a solid line, a double solid line, a zebra crossing, etc., so that the user can intuitively determine the nature of the lane line based on the label.
  • Fig. 9 is a schematic flowchart showing yet another method for determining a three-dimensional lane line according to an embodiment of the present disclosure. As shown in Figure 9, the method further includes:
  • step S7 a three-dimensional lane line map is determined according to the height information of the lane line points.
  • a three-dimensional lane line map may be determined according to the height information of the lane line points, for example, a three-dimensional lane line map is automatically generated in a high-precision map.
  • Fig. 10 is a schematic flow chart showing a method of performing lane line fitting according to the two-dimensional point cloud image to obtain lane line points according to an embodiment of the present disclosure. As shown in FIG. 10, performing lane line fitting according to the two-dimensional point cloud image to obtain lane line points includes:
  • step S201 a lane line area is determined in the two-dimensional point cloud image
  • step S202 determine the projection point in the lane line area
  • step S203 according to the two-dimensional coordinates of the projection point parallel to the horizontal plane, the lane line fitting is performed on the two-dimensional point cloud image to obtain lane line points.
  • In this embodiment, the lane line area can be determined in the acquired image.
  • For example, the lane line area can be determined in the image according to a predetermined image recognition model,
  • such as a neural network;
  • the image recognition model can determine the lane line area in an image from the input image.
  • The image can therefore be input into the image recognition model to determine the lane line area in it.
  • The projection points in the lane line area can then be fitted to form the lane line.
  • For example, the projection points can be fitted with a Bézier curve; since the projection points are located in the lane line area, the curve obtained by fitting them can be used as the lane line.
  • In this way, the environment image and the three-dimensional point cloud can be combined to determine the projection points located in the lane line area of the three-dimensional point cloud, and the lane line is then determined by fitting these projection points.
  • Accordingly, the three-dimensional point cloud can be used as a high-precision map, and the above process of determining lane lines largely requires no manual participation, which is conducive to semi-automatic or even fully automatic determination of lane lines in high-precision maps. Repeated determination operations can be completed quickly and efficiently, and the accuracy of the determined lane lines can be improved.
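The disclosure does not fix the fitting details; as one illustration, once control points for a lane line have been estimated from the projection points, lane line points can be sampled from a quadratic Bézier curve by de Casteljau evaluation. The control points below are hypothetical:

```python
def bezier_point(controls, t):
    """Evaluate a Bézier curve at t in [0, 1] by de Casteljau's algorithm:
    repeatedly interpolate between consecutive control points until one
    point remains.
    """
    pts = list(controls)
    while len(pts) > 1:
        pts = [
            ((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(pts, pts[1:])
        ]
    return pts[0]

# Hypothetical control points for one lane line in the birdview plane.
controls = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
# Sample 11 lane line points along the fitted curve.
lane_points = [bezier_point(controls, i / 10) for i in range(11)]
```

The sampled points begin and end at the first and last control points, and each sample can later be assigned height information as described in steps S3 and S4.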
  • Fig. 11 is another schematic flow chart of performing lane line fitting according to the two-dimensional point cloud image to obtain lane line points according to an embodiment of the present disclosure.
  • the determining the lane line area in the two-dimensional point cloud image includes:
  • step S2011 the lane line area is determined in the two-dimensional point cloud image according to a predetermined image recognition model.
  • Fig. 12 is a schematic flow chart showing yet another method of performing lane line fitting according to the two-dimensional point cloud image to obtain lane line points according to an embodiment of the present disclosure.
  • the determining the lane line area in the two-dimensional point cloud image includes:
  • step S2012 a road surface area is determined in the two-dimensional point cloud image
  • step S2013 a lane line area is determined in the road surface area.
  • Fig. 13 is a schematic flow chart showing yet another method of performing lane line fitting according to the two-dimensional point cloud image to obtain lane line points according to an embodiment of the present disclosure.
  • As shown in Fig. 13, before the lane line fitting is performed according to the two-dimensional coordinates, parallel to the horizontal plane, of the projection points in the lane line area, performing lane line fitting according to the two-dimensional point cloud image to obtain lane line points further includes:
  • step S204 among the projection points outside the lane line area, determine candidate points whose distance to the lane line area is less than a preset distance;
  • In step S205, among the candidate points, extension points are determined whose similarity to preset attribute information of the projection points in the lane line area is greater than a preset similarity;
  • in step S206, the extension points and the projection points in the lane line area are used as new projection points.
  • said performing lane line fitting on the two-dimensional point cloud image according to the two-dimensional coordinates of the projection point parallel to the horizontal plane to obtain lane line points includes:
  • step S2031 according to the two-dimensional coordinates of the projection point parallel to the horizontal plane, a lane line fitting is performed on the new projection point in the two-dimensional point cloud image to obtain a lane line point.
  • When the points in the three-dimensional point cloud are projected into the environment image, there will be some deviation, caused for example by insufficient accuracy of the extrinsic parameters of the image acquisition device.
  • As a result, some projection points that should be located in the lane line area are not projected into the lane line area of the environment image, which may make the fitting result inaccurate; that is, the fitted lane line differs from the actual lane line in the three-dimensional point cloud.
  • Therefore, among the projection points outside the lane line area, candidate points whose distance to the lane line area is less than a preset distance can be determined.
  • These candidate points may be lane line points that were not projected into the lane line area.
  • Among the candidate points, extension points can then be determined whose similarity to preset attribute information of the projection points in the lane line area is greater than a preset similarity.
  • For example, a floodfill algorithm can be used to determine the extension points, where the preset attribute can be set as needed, such as reflection intensity. These extension points are most likely projection points that were not projected into the lane line area, so the extension points and the projection points in the original lane line area can be used together as new projection points for lane line fitting.
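A floodfill-style sketch of steps S204 to S206 is given below; the adjacency map, the mean-intensity similarity test, and the threshold value are all assumptions, since the disclosure only names reflection intensity as an example attribute:

```python
def floodfill_extend(region, candidates, intensity, threshold):
    """Grow the lane line region into nearby candidate points whose
    reflection intensity is similar to that of the points already in
    the region (steps S204-S206, floodfill style).
    """
    region_mean = sum(intensity[p] for p in region) / len(region)
    extended = set(region)
    frontier = list(region)
    while frontier:
        p = frontier.pop()
        # candidates[p]: points within the preset distance of point p
        for q in candidates.get(p, []):
            if q not in extended and abs(intensity[q] - region_mean) <= threshold:
                extended.add(q)
                frontier.append(q)
    return extended

# Point 0 is inside the lane line area; points 1-3 are nearby candidates.
region = {0}
candidates = {0: [1, 2], 1: [3]}
intensity = {0: 0.9, 1: 0.85, 2: 0.2, 3: 0.88}
extended = floodfill_extend(region, candidates, intensity, threshold=0.1)
```

Points 1 and 3 have intensities close to the region's and are absorbed as extension points, while point 2 (low intensity, e.g. bare asphalt) is rejected.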
  • Fig. 14 is yet another schematic flow chart of performing lane line fitting according to the two-dimensional point cloud image to obtain lane line points according to an embodiment of the present disclosure.
  • the performing lane line fitting according to the two-dimensional point cloud image to obtain lane line points further includes:
  • in step S207, the lane line is corrected according to a received correction instruction;
  • in step S208, the corrected lane line is projected into the two-dimensional point cloud image to determine whether the projection of the lane line in the two-dimensional point cloud image matches the lane line;
  • in step S209, response information is generated according to the matching result between the projection of the lane line in the two-dimensional point cloud image and the lane line.
  • A manually input correction instruction can be received to correct the lane line, but the result of manual correction may also contain errors. Therefore, the corrected lane line can be projected into the environment image to determine whether the projection of the corrected lane line in the environment image matches the lane line area, and response information can then be generated according to the matching result.
  • If the projection of the corrected lane line in the environment image does not match the lane line area, for example more than a preset proportion of the projection falls outside the lane line area, the generated response information can be used to prompt the user that the correction result is unreasonable so that the user can correct it again; if the projection matches the lane line area, for example less than the preset proportion of the projection falls outside the lane line area, the generated response information can be used to prompt the user that the correction result is reasonable.
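The match check described above can be sketched as a simple ratio test. The mask representation, the 10% threshold, and the message strings are assumptions for illustration only:

```python
def correction_matches(projected_points, lane_region_mask, max_outside_ratio=0.1):
    """Return True when the corrected lane line's projection matches the
    lane line area, i.e. less than `max_outside_ratio` (the "preset
    proportion") of its projected points fall outside the area."""
    pts = list(projected_points)
    if not pts:
        return False
    outside = sum(1 for r, c in pts if not lane_region_mask[r][c])
    return outside / len(pts) < max_outside_ratio

def make_response(matches):
    """Generate response information from the matching result."""
    return "correction reasonable" if matches else "correction unreasonable, please re-correct"
```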
  • Fig. 15 is a schematic flowchart showing another method for determining a three-dimensional lane line according to an embodiment of the present disclosure. As shown in FIG. 15, before projecting the three-dimensional point cloud in the vertical direction to obtain a two-dimensional point cloud image composed of projection points, the method further includes:
  • step S8 an obstacle point belonging to an obstacle is determined in the three-dimensional point cloud
  • step S9 remove the obstacle points from the three-dimensional point cloud
  • the projection of the three-dimensional point cloud in a vertical direction to obtain a two-dimensional point cloud image formed by projection points includes:
  • step S101 the points in the three-dimensional point cloud from which the obstacle points are eliminated are projected in the vertical direction to obtain a two-dimensional point cloud image formed by the projection points.
  • Before the three-dimensional point cloud is projected in the vertical direction to obtain the two-dimensional point cloud image, the obstacle points belonging to obstacles in the three-dimensional point cloud may be eliminated first. In the subsequent projection operation, the points of the three-dimensional point cloud from which the obstacle points have been eliminated are projected in the vertical direction, so that obstacle points belonging to obstacles are not projected into the two-dimensional point cloud image, which would affect the accuracy of the subsequently determined lane line points.
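A minimal sketch of this pre-filtering plus vertical projection, assuming points come as (x, y, z) tuples with a parallel list of obstacle flags from the detector; the grid-cell size and the dictionary-of-cells image layout are illustrative choices:

```python
def project_without_obstacles(points, obstacle_flags, cell=0.5):
    """Remove obstacle points, then project the rest vertically.

    Each surviving point is binned by its (x, y) grid cell while its z
    value is kept, so the resulting 2D point cloud image still carries
    the height information needed later to build the 3D lane line.
    """
    image = {}
    for (x, y, z), is_obstacle in zip(points, obstacle_flags):
        if is_obstacle:
            continue  # obstacle points must not reach the 2D image
        key = (int(x // cell), int(y // cell))
        image.setdefault(key, []).append(z)
    return image
```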
  • Fig. 16 is a schematic flowchart showing another method for determining a three-dimensional lane line according to an embodiment of the present disclosure. As shown in FIG. 16, determining obstacle points belonging to obstacles in the three-dimensional point cloud includes:
  • step S801 an obstacle point belonging to an obstacle is determined in the three-dimensional point cloud according to a predetermined deep learning model.
  • A deep learning model can be obtained in advance through deep learning.
  • The deep learning model can take a three-dimensional point cloud as input and output information about the points belonging to obstacles; based on this information, the obstacle points in the three-dimensional point cloud can be determined. Obstacles include, but are not limited to, vehicles, pedestrians, and traffic signs.
  • Fig. 17 is a schematic flowchart showing another method for determining a three-dimensional lane line according to an embodiment of the present disclosure. As shown in FIG. 17, performing lane line fitting according to the two-dimensional point cloud image to obtain lane line points includes:
  • step S308 a lane line fitting is performed on the two-dimensional point cloud image through a Bezier curve to obtain lane line points.
  • a curve model can be selected as needed to fit the projection points in the two-dimensional point cloud image to determine the lane line.
  • the Bezier curve can be selected to fit the projection points in the two-dimensional point cloud image to determine the lane line.
  • Fig. 18 is a schematic flow chart showing yet another method for determining a three-dimensional lane line according to an embodiment of the present disclosure. As shown in FIG. 18, performing lane line fitting according to the two-dimensional point cloud image through the Bezier curve to obtain lane line points includes:
  • step S3081 lane line fitting is performed on the two-dimensional point cloud image through a multi-segment third-order Bezier curve to obtain lane line points.
  • the projection points in the two-dimensional point cloud image may be fitted by a multi-segment third-order Bezier curve to determine the lane line.
  • the equation of the third-order Bezier curve is as follows:
  • P(t) = A(1-t)³ + B·3(1-t)²·t + C·3(1-t)·t² + D·t³;
  • where A, B, C, and D are the coordinates of the target points serving as control points.
  • A specific fitting method may be as follows: the two farthest projection points in the two-dimensional point cloud image are determined as endpoints for fitting; for the fitted curve, it is determined whether any projection point lies farther from the curve than a preset distance; if such a projection point exists, a perpendicular is drawn from it to the curve, the curve is divided into two parts at the intersection of the perpendicular and the curve, and the projection points of each part are fitted again; this is repeated until every projection point is within the preset distance of the fitted curve.
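The cubic Bezier evaluation and the distance check that drives the splitting can be sketched as follows. The sampling-based distance below is an approximation of the perpendicular-foot construction described above, and the sample count is an illustrative choice:

```python
def bezier3(a, b, c, d, t):
    """Evaluate P(t) = A(1-t)^3 + 3B(1-t)^2 t + 3C(1-t) t^2 + D t^3
    for 2D control points a, b, c, d."""
    s = 1.0 - t
    return tuple(s**3 * a[i] + 3 * s**2 * t * b[i]
                 + 3 * s * t**2 * c[i] + t**3 * d[i] for i in range(2))

def max_deviation(points, a, b, c, d, samples=50):
    """Largest distance from any projection point to the fitted curve,
    approximated by sampling the curve; if this exceeds the preset
    distance, the curve is split and each part is fitted again."""
    curve = [bezier3(a, b, c, d, k / samples) for k in range(samples + 1)]
    return max(min(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
                   for q in curve)
               for p in points)
```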
  • the embodiments of the device for determining a three-dimensional lane line of the present disclosure can be applied to electronic devices (such as terminals and servers).
  • the device embodiments can be implemented by software, or can be implemented by hardware or a combination of software and hardware.
  • Taking software implementation as an example, as a device in the logical sense, it is formed by the processor of the device where it is located reading the corresponding computer program instructions from non-volatile memory into memory and running them.
  • At the hardware level, FIG. 19 is a schematic diagram of the hardware structure of the device where the device for determining a three-dimensional lane line of this disclosure is located. Besides the processor, network interface, memory, and non-volatile memory shown in FIG. 19, the device in the embodiment may also include other hardware, such as a forwarding chip responsible for processing messages; in terms of hardware structure, the device may also be a distributed device, possibly including multiple interface cards, so that message processing can be extended at the hardware level.
  • The embodiments of the present disclosure also provide a device for determining a three-dimensional lane line, which can be applied to an image acquisition device capable of collecting a three-dimensional point cloud of a vehicle driving environment, and can also be applied to other electronic devices capable of analyzing and processing the three-dimensional point cloud, such as terminals, servers, and in-vehicle devices.
  • the device for determining a three-dimensional lane line includes one or more processors working individually or in cooperation, and the processors are configured to execute:
  • a three-dimensional lane line is generated.
  • the processor is configured to execute:
  • the processor is configured to execute:
  • the height information of the lane line point is determined according to the interval information of the height interval.
  • the processor is configured to execute:
  • the other blocks include a second block and a third block, the height information of other lane line points located in the second block on the lane line is second height information, and the height information of other lane line points located in the third block on the lane line is third height information;
  • the processor is used to execute:
  • the interval between the second height information and the third height information among the plurality of height intervals is determined as a target height interval.
  • the processor is configured to execute:
  • the multiple extreme height values are used as the boundary value of the height interval to determine at least one height interval.
  • the processor is further configured to execute:
  • the label includes at least one of the following:
  • the processor is further configured to execute:
  • a three-dimensional lane line map is determined.
  • the processor is configured to execute:
  • the lane line fitting is performed on the two-dimensional point cloud image to obtain the lane line point.
  • the processor is configured to execute:
  • the lane line area is determined in the two-dimensional point cloud image according to a predetermined image recognition model.
  • the processor is configured to execute:
  • the lane line area is determined in the road surface area.
  • the processor is further configured to execute:
  • the processor is used to execute:
  • a lane line fitting is performed on the new projection point in the two-dimensional point cloud image to obtain the lane line point.
  • the processor is further configured to execute:
  • the response information is generated according to the matching result of the projection of the lane line in the two-dimensional point cloud image and the lane line.
  • the processor is further configured to execute:
  • the projection of the three-dimensional point cloud in a vertical direction to obtain a two-dimensional point cloud image formed by projection points includes:
  • the processor is configured to execute:
  • an obstacle point belonging to an obstacle is determined in the three-dimensional point cloud.
  • the processor is configured to execute:
  • the lane line fitting is performed on the two-dimensional point cloud image through the Bezier curve to obtain lane line points.
  • the processor is configured to execute:
  • the lane line fitting is performed on the two-dimensional point cloud image through a multi-segment third-order Bezier curve to obtain lane line points.
  • An embodiment of the present disclosure also provides an electronic device, including the device for determining a three-dimensional lane line according to any one of the foregoing embodiments.
  • the electronic device may be a terminal (specifically, a mobile terminal such as a mobile phone, or a vehicle-mounted terminal), or a server.
  • The systems, devices, modules, or units described in the above embodiments may be implemented by computer chips or entities, or by products with certain functions.
  • For convenience of description, the above devices are described with their functions divided into various units.
  • Of course, when implementing the present disclosure, the functions of the units can be implemented in one or more pieces of software and/or hardware.
  • The embodiments of the present disclosure can be provided as a method, a system, or a computer program product. Therefore, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
  • the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.


Abstract

A method for determining a three-dimensional lane line, comprising: acquiring a three-dimensional point cloud of a target environment, and projecting the three-dimensional point cloud in the vertical direction to obtain a two-dimensional point cloud image composed of projection points, wherein the projection points retain the height information of the three-dimensional point cloud (S1); performing lane line fitting according to the two-dimensional point cloud image to obtain lane line points (S2); determining the height information of the lane line points based on the height information of the projection points (S3); and generating a three-dimensional lane line based on the height information of the lane line points (S4). Since the three-dimensional point cloud is projected in the vertical direction, the resulting two-dimensional point cloud image composed of projection points retains the height information of the three-dimensional point cloud, so the lane line points obtained by fitting according to the two-dimensional point cloud image also carry height information; based on the height information, lane lines at different heights can be distinguished, and the lane line points can be integrated into a three-dimensional lane line.

Description

立体车道线确定方法、装置和电子设备 技术领域
本公开涉及地图处理技术领域,尤其涉及立体车道线确定方法、立体车道线确定装置和电子设备。
背景技术
在自动驾驶领域中,对于道路中车道线的识别是非常重要的。在相关技术中,车道线获取的方式主要有两种,其一是从当前环境图像中实时地检测出车道线,其二是在高精度地图中获取预先标注好的车道线,以确定环境中车道线的位置。
由于高精度地图中的车道线预先标注,现有的在高精度地图中标注车道线的方式主要由人工完成。可是对于高精度地图而言,其属于三维图像,是基于激光雷达生成的,激光雷达生成的图像一般无颜色信息,并且生成的图像还会受到路面上障碍物的影响,使得人工在高精度地图中标注车道线容易出现错误,并且需要大量重复操作,标注速度较慢,效率较低。
发明内容
本公开提出了立体车道线确定方法、立体车道线确定装置以及电子设备,以解决相关技术中的技术问题。
根据本公开实施例的第一方面,提出一种立体车道线确定方法,包括:
获取目标环境的三维点云,并将所述三维点云向垂直方向上进行投影,得到投影点构成的二维点云图像,其中所述投影点保留所述三维点云的高度信息;
根据所述二维点云图像进行车道线拟合,获得车道线点;
基于所述投影点的高度信息,确定所述车道线点的高度信息;
基于所述车道线点的高度信息,生成立体车道线
根据本公开实施例的第二方面,提出一种立体车道线确定装置,包括单独或者协同工作的一个或者多个处理器,所述处理器用于执行:
获取目标环境的三维点云,并将所述三维点云向垂直方向上进行投影,得到投影点构成的二维点云图像,其中所述投影点保留所述三维点云的高度信息;
根据所述二维点云图像进行车道线拟合,获得车道线点;
基于所述投影点的高度信息,确定所述车道线点的高度信息;
基于所述车道线点的高度信息,生成立体车道线。
根据本公开实施例的第三方面,提出电子设备,包括上述实施例所述的立体车道线确定装置。
根据本公开的实施例,由于将三维点云向垂直方向上进行投影,得到投影点构成的二维点云图像保留了三维点云的高度信息,从而根据二维点云图像进行车道线拟合获得的车道线点,也具有高度信息,而基于高度信息则可以区分位于不同高度的车道线,进而将车道线点整合为立体车道线。
附图说明
为了更清楚地说明本公开实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本公开的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是根据本公开的实施例示出的一种立体车道线确定方法的示意流程图。
图2是根据本公开的实施例示出的另一种立体车道线确定方法的示意流程图。
图3是根据本公开的实施例示出的一种区块的示意图。
图4是根据本公开的实施例示出的又一种立体车道线确定方法的示意流程图。
图5是根据本公开的实施例示出的又一种立体车道线确定方法的示意流程图。
图6是根据本公开的实施例示出的又一种立体车道线确定方法的示意流程图。
图7是根据本公开的实施例示出的又一种立体车道线确定方法的示意流程图。
图8是根据本公开的实施例示出的又一种立体车道线确定方法的示意流程图。
图9是根据本公开的实施例示出的又一种立体车道线确定方法的示意流程图。
图10是根据本公开的实施例示出的一种根据所述二维点云图像进行车道线拟合,获得车道线点的示意流程图。
图11是根据本公开的实施例示出的另一种根据所述二维点云图像进行车道线拟合,获得车道线点的示意流程图。
图12是根据本公开的实施例示出的又一种根据所述二维点云图像进行车道线拟合,获得车道线点的示意流程图。
图13是根据本公开的实施例示出的又一种根据所述二维点云图像进行车道线拟合,获得车道线点的示意流程图。
图14是根据本公开的实施例示出的又一种根据所述二维点云图像进行车道线拟合,获得车道线点的示意流程图。
图15是根据本公开的实施例示出的又一种立体车道线确定方法的示意流 程图。
图16是根据本公开的实施例示出的又一种立体车道线确定方法的示意流程图。
图17是根据本公开的实施例示出的又一种立体车道线确定方法的示意流程图。
图18是根据本公开的实施例示出的又一种立体车道线确定方法的示意流程图。
图19是根据本公开的实施例示出的立体车道线确定装置所在设备的一种硬件结构示意图。
具体实施方式
下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本公开一部分实施例,而不是全部的实施例。基于本公开中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。另外,在不冲突的情况下,下述的实施例及实施例中的特征可以相互组合。
在相关技术中,为了克服人工标注高精度地图中车道线所存在的种种问题,提出了一些自动标注高精度地图中的车道线的方法,但是目前的方法仅适用于高度方向上存在一个车道线的情况,针对高度方向上存在多个车道线的情况,例如立交桥、高架桥的道路在高度方向上存在重叠,那么道路中的车道线在高度方向上也存在重叠,也即车道线是立体的,并非局限在平面上的,那么基于相关技术中的方式无法区分开重叠的车道线。
图1是根据本公开的实施例示出的一种立体车道线确定方法的示意流程图。本公开实施例所述的立体车道线确定方法,可以适用于图像采集设备,图像采集设备可以采集车辆行驶环境的三维点云,也可以适用于能够对三维 点云进行分析处理的其他电子设备,例如终端、服务器、车载设备等。
如图1所示,所述立体车道线确定方法包括以下步骤:
在步骤S1中,获取目标环境的三维点云,并将所述三维点云向垂直方向上进行投影,得到投影点构成的二维点云图像,其中所述投影点保留所述三维点云的高度信息;
在一个实施例中,目标环境的三维点云可以通过激光雷达获取。
在一个实施例中,将所述三维点云向垂直方向上进行投影,类似于向鸟瞰(birdview)图投影,例如鸟瞰图可以包括平行于水平面的两维坐标,例如x轴坐标和y轴坐标,但是本实施例中的三维点云向垂直方向上投影,得到的投影点构成的二维点云图像,除了包括x轴坐标和y轴坐标,还保留三维点云的高度信息,也即垂直于水平面的坐标,例如z轴坐标。
在步骤S2中,根据所述二维点云图像进行车道线拟合,获得车道线点;
在一个实施例中,可以先在二维点云图像中确定车道线区域,然后确定处于所述车道线区域中的投影点;进而根据所述车道线点平行于水平面的两维坐标,例如上述x轴坐标和y轴坐标,根据所述二维点云图像进行车道线拟合,获得车道线点。
在步骤S3中,基于所述投影点的高度信息,确定所述车道线点的高度信息;
在步骤S4中,基于所述车道线点的高度信息,生成立体车道线。
在一个实施例中,由于将三维点云向垂直方向上进行投影,得到投影点构成的二维点云图像保留了三维点云的高度信息,从而根据二维点云图像进行车道线拟合获得的车道线点,也具有高度信息,而基于高度信息则可以区分位于不同高度的车道线,进而将车道线点整合为立体车道线。
图2是根据本公开的实施例示出的另一种立体车道线确定方法的示意流 程图。如图2所示,所述区块高度信息包括高度区间,所述基于所述投影点的高度信息,确定所述车道线点的高度信息包括:
在步骤S301中,将所述二维点云图像划分为多个区块;
在步骤S302中,对所述区块中的投影点的高度信息进行聚类,以确定至少一个高度区间。
在一个实施例中,可以将二维点云图像划分为多个区块,例如将二维点云图像栅格化,每个栅格(grid)即一个区块,由于区块中的投影点保留有高度信息,那么可以对区块中的投影点的高度信息进行聚类,从而确定至少一个高度区间。
其中,聚类算法包括但不限于k-means聚类算法,AP(Affinity Propagation)聚类算法等,具体可以根据需要选择。
例如对于某个区块而言,其中的投影点主要位于三个高度范围内,分别为0至0.3米,3.1米至3.4米,6.7米至7.0米,那么对该区块中投影点的高度信息进行聚类,可以确定3个高度区间,也即0至0.3米区间,3.1米至3.4米区间,6.7米至7.0米区间,说明该区块中主要有位于三个高度区间的车道,一般对应的场景为三层结构的高架桥,那么也就有位于三个高度区间的车道线,底层的车道上的车道线点高度信息处于0至0.3米区间,中层的车道上的车道线点高度信息处于3.1米至3.4米区间,顶层的车道上的车道线点高度信息处于6.7米至7.0米区间。
通过确定高度区间,便于后续确定区块中车道线的高度信息。
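As an illustration of clustering a block's projection-point heights into intervals (the text mentions k-means or AP clustering; the gap-based 1-D clustering below is a simplified stand-in, and the 0.5 m gap threshold is an assumption):

```python
def height_intervals(heights, gap=0.5):
    """Cluster the heights of a block's projection points into intervals.

    Sorts the heights and starts a new interval whenever the jump to the
    next height exceeds `gap`; returns (low, high) bounds per interval.
    """
    hs = sorted(heights)
    if not hs:
        return []
    intervals = [[hs[0], hs[0]]]
    for h in hs[1:]:
        if h - intervals[-1][1] > gap:
            intervals.append([h, h])  # large jump: a new height layer
        else:
            intervals[-1][1] = h
    return [tuple(iv) for iv in intervals]
```

With the three-layer overpass example above, heights around 0–0.3 m, 3.1–3.4 m, and 6.7–7.0 m yield three intervals.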
图3是根据本公开的实施例示出的一种区块的示意图。
如图3所示,例如可以将二维点云图像划分为16个区块,分别为A1B1、A1B2、A1B3、A1B4、A2B1、A2B2、A2B3、A2B4、A3B1、A3B2、A3B3、A3B4、A4B1、A4B2、A4B3、A4B4。具体划分的区块数和区块的形状可以根据需要设置,并不限于图3所示的情况。
其中可以包括两条相交的车道线,车道线α和车道线β,具体在区块A2B2中相交。
图4是根据本公开的实施例示出的又一种立体车道线确定方法的示意流程图。如图4所示,所述基于所述投影点的高度信息,确定所述车道线点的高度信息还包括:
在步骤S303中,若确定了一个高度区间,根据所述高度区间的区间信息确定所述车道线点的高度信息。
在一个实施例中,若只确定了一个高度区间,说明高区块中只有位于一个高度区间的车道,也即该区块中的车道并不存在多层结构,而仅仅是一层结构,那么可以直接根据高度区间的区间信息确定车道中车道线线点的高度信息。
例如图3所示,对于区块A2B4而言,确定的高度区间为3.1米至3.4米,那么说明在区块A2B4中垂直方向上只有一条车道,那么车道中的车道线并不与其他车道线相交。例如高度区间的高度信息可以包括高度区间的上限值和下限值,例如3.1米和3.4米,那么根据高度区间的区间信息确定车道线点的高度信息的方式,可以是对上限值和下限值计算均值,得到的车道线点的高度信息为3.25米(该高度信息也可以表征车道线点所在车道的高度信息)。
图5是根据本公开的实施例示出的又一种立体车道线确定方法的示意流程图。如图5所示,所述基于所述投影点的高度信息,确定所述车道线点的高度信息还包括:
在步骤S304中,若确定了多个高度区间,确定目标车道线点所属的车道线,其中,所述目标车道线点位于所述多个区块中的第一区块;
在步骤S305中,确定所述车道线上位于其他区块中的其他车道线点的高度信息,其中,所述其他区块为所述多个区块中与所述第一区块相邻的区块;
在步骤S306中,根据所述其他车道线点的高度信息,确定所述目标车道 线点在所述多个高度区间中所属的目标高度区间;
在步骤S307中,根据所述目标高度区间的区间信息确定所述目标车道线点的高度信息。
在一个实施例中,若确定了多个高度区间,例如对于图3中的区块A2B2,在高度方向上存在两个相交车道,也就存在两个相交的车道线,而两个车道由于相交,高度必然不等,所以区块A2B2中的投影点一部分属于一个车道,另一部分属于另一个车道,因此属于两个高度区间,例如0至0.3米区间和3.1米至3.4米区间。
在这种情况下,可以确定目标车道线点所属的车道线,例如在步骤S2中,根据二维点云图像进行车道线拟合,可以得到至少一条车道线,例如图3所示,可以得到车道线α和车道线β,由于区块中的投影点都属于二维点云图像,因此可以确定车道线点所属的车道线,例如确定区块A2B2中部分车道线点属于车道线α,另一部分车道线点属于车道线β。
但是由于高度区间的区间信息存在多个上限值和下限值,跨度过大,不能简单的通过计算均值的方式来确定其中车道线点的高度信息。
针对这种情况,本实施例可以确定车道线上位于其他区块中的其他车道线点的高度信息,其中,所述其他区块为多个区块中与第一区块相邻的区块,例如第一区块为图3的A2B2,那么对于车道线α而言,其他区块为A1B2和A3B2,对于车道线β而言,其他区块为A2B1和A2B3。
需要说明的是,如果其他区块也存在多个高度区间,那么还需要进一步确定其他区块的相邻区块,直至确定的相邻区块只有一个高度区间,那么针对该相邻区块,可以根据这一个高度区间的高度信息来确定区块中车道线点的高度信息。
本实施例假设区块A2B1、A2B3、A1B2和A3B2只有一个高度区间,那么针对区块A2B1、A2B3、A1B2和A3B2,可以按照图4所示实施例的方式 确定出其中车道线的高度信息。
例如对于A1B2,确定的高度区间为0至0.3米区间,那么其中车道线的高度信息为0.15米;对于和A3B2,确定的高度区间为0.5至0.8米,那么其中车道线的高度信息为0.55米;对于A2B1,确定的高度区间为3.1至3.4米区间,那么其中车道线的高度信息为3.25米;对于A2B3,确定的高度区间为3.6至3.9米区间,那么其中车道线的高度信息为3.75米。
在得到其他车道线点的高度信息后,可以根据其他车道线点的高度信息,确定目标车道线点在多个高度区间中所属的目标高度区间,进而根据目标高度区间的区间信息确定所述目标车道线点的高度信息。
例如确定了A1B2和A3B2中车道线点属于车道线α,且A1B2中车道线点的高度信息为0.15米,A3B2中车道线点的高度信息为0.55米,由于车道线一般高度是连续变化的,因此可以确定属于A2B2中属于车道线α的车道线点的高度区间为0.15米至0.55米,进而可以计算0.15米和0.55米的均值作为A2B2中属于车道线α的车道线点的高度信息,也即0.35米。
同理,确定了A2B2和A2B3中车道线点属于车道线β,且A2B2中车道线点的高度信息为3.25米,A2B3中车道线点的高度信息为3.75米,可以确定属于A2B2中属于车道线β的车道线点的高度区间为3.25米至3.75米,进而可以计算3.25米和3.75米的均值作为A2B2中属于车道线β的车道线的高度信息,也即3.50米。
图6是根据本公开的实施例示出的又一种立体车道线确定方法的示意流程图。如图6所示,所述其他区块包括第二区块和第三区块,所述车道线上位于所述第二区块中的其他车道线点的高度信息为第二高度信息,所述车道线上位于所述第三区块中的其他车道线点的高度信息为第三高度信息;
其中,所述根据所述其他车道线点的高度信息,确定所述目标车道线点在所述多个高度区间中所属的目标高度区间包括:
在步骤S3061中,在所述多个高度区间中确定介于所述第二高度信息和所述第三高度信息之间的区间作为目标高度区间。
在一个实施例中,在图5所示实施例的基础上,例如对于车道线α而言,其他区块包括第二区块A1B2和第三区块A3B2,A1B2中车道线点的第一高度信息为0.15米,A3B2中车道线点的第二高度信息为0.55米,例如第一区块A2B2包含高度区间为0.2至0.4米和3.3至3.6米,其中0.2米至0.4米区间介于0.15米和0.55米之间,所以可以选择0.2米至0.4米区间作为A2B2中属于车道线α的车道线点的目标高度区间。
进而可以根据该目标高度区间确定车道线点的高度信息,例如计算目标高度区间上限值和下限值的均值,也即0.3米作为A2B2中属于车道线α的车道线点的高度信息。
另外,为了保证车道线的连续性,对于区块中车道线确定的高度信息可以并不是一个单一的值,而是一个连续变化的函数的一部分,该函数可以根据车道线点的目标高度区间来确定,例如该函数为正比例函数,那么可以计算目标高度区间的上限值和下限值之差,并将该差值除以区块(例如区块为正方形)的边长,将得到的值作为函数的比例系数。据此,得到的该区块中车道线点的高度信息,就是按照该正比例函数连续变化的,从而可以良好地衔接于相邻区块的车道线点,确保多个区块中车道线点的高度信息连续,以便根据连续的高度信息绘制连续的车道线。
图7是根据本公开的实施例示出的又一种立体车道线确定方法的示意流程图。如图7所示,所述对所述区块中的投影点的高度信息进行聚类,以确定至少一个高度区间包括:
在步骤S3021中,对所述区块中的投影点的高度信息进行聚类,以确定多个高度极值;
在步骤S3022中,以所述多个高度极值作为高度区间的边界值,以确定 至少一个高度区间。
在一个实施例中,对区块中的投影点的高度信息进行聚类,以确定至少一个高度区间,具体可以是对区块中的投影点的高度信息进行聚类,以确定多个高度极值,例如对图3所示的区块A2B2中的投影点的高度信息进行聚类,可以确定4个高度极值,分别为0.2米、0.4米、3.3米和3.6米,那么可以将极值从小到大每两个划分为一组,一组中的两个极值分别作为高度区间的边界值,例如0.2米和0.4米属于一组,其中0.2米可以作为高度区间的下限值,0.4米可以作为高度区间的上限值。
图8是根据本公开的实施例示出的又一种立体车道线确定方法的示意流程图。如图8所示,所述方法还包括:
在步骤S5中,标注所述车道线点;
在步骤S6中,显示带有标注的所述车道线点。
在一个实施例中,可以对车道线进行标注,进而显示带有标注的车道线,例如针对位于不同高度的车道线点可以标注不同的高度信息,进而显示带有标注的车道线点,便于用户根据标注直观地确定车道线的高度。
可选地,所述标注包括以下至少之一:
高度信息、位置信息、类别。
在一个实施例中,除了对车道线标注高度信息,还可以标注位置信息、类别等,其中,类别可以包括虚线、实线、双实线、斑马线等,以便用户根据标注直观地确定车道线所处的位置,车道线所属的类型等信息。
图9是根据本公开的实施例示出的又一种立体车道线确定方法的示意流程图。如图9所示,所述方法还包括:
在步骤S7中,根据所述车道线点的高度信息,确定立体车道线地图。
在一个实施例中,可以根据车道线点的高度信息,确定立体车道线地图, 例如在高精度地图中自动生成立体车道线地图。
图10是根据本公开的实施例示出的一种根据所述二维点云图像进行车道线拟合,获得车道线点的示意流程图。如图10所示,所述根据所述二维点云图像进行车道线拟合,获得车道线点包括:
在步骤S201中,在所述二维点云图像中确定车道线区域;
在步骤S202中,确定处于所述车道线区域中的投影点;
在步骤S203中,根据所述投影点平行于水平面的两维坐标,对所述二维点云图像进行车道线拟合,获得车道线点。
在一个实施例中,在获取到的环境图像中,可以确定车道线区域。
其中,可以根据预先确定的图像识别模型在所述环境图像中确定车道线区域,例如可以预先通过机器学习训练图像识别模型(例如可以是神经网络),图像识别模型可以根据输入的图像确定图像中的车道线区域,那么可以将获取到的环境图像输入到图像识别模型中,即可确定出环境图像中的车道线区域。
另外,还可以根据相关技术中的算法,先在环境图像中确定出路面区域,然后在路面区域中确定车道线区域,从而不必分析环境图像中的所有信息,以便缩小确定车道线区域所依据的信息量,有利于减少误判。
然后可以根据投影点平行于水平面的两维坐标,例如x轴坐标和y轴坐标,拟合投影点作为车道线。例如可以通过贝塞尔曲线对投影点进行拟合。由于投影点位于车道线区域内,那么对投影点拟合得到的曲线可以作为车道线。
据此,可以结合环境图像和三维点云,确定三维点云中位于车道线区域中的投影点,进而通过拟合投影点确定车道线。而三维点云可以作为高精度地图,并且上述确定车道线的过程在很大程度上无需人工参与,因此有利于半自动甚至全自动地在高精度地图中确定车道线,在面对大量车道线的重复 确定操作时,可以高速高效地完成,而且可以提高确定车道线的精度。
图11是根据本公开的实施例示出的另一种根据所述二维点云图像进行车道线拟合,获得车道线点的示意流程图。如图11所示,所述在所述二维点云图像中确定车道线区域包括:
在步骤S2011中,根据预先确定的图像识别模型在所述二维点云图像中确定所述车道线区域。
图12是根据本公开的实施例示出的又一种根据所述二维点云图像进行车道线拟合,获得车道线点的示意流程图。如图12所示,所述在所述二维点云图像中确定车道线区域包括:
在步骤S2012中,在所述二维点云图像中确定路面区域;
在步骤S2013中,在所述路面区域中确定车道线区域。
图13是根据本公开的实施例示出的又一种根据所述二维点云图像进行车道线拟合,获得车道线点的示意流程图。如图13所示,在根据处于所述车道线区域中的投影点平行于水平面的两维坐标进行车道线拟合之前,所述根据所述二维点云图像进行车道线拟合,获得车道线点还包括:
在步骤S204中,在处于所述车道线区域外的投影点中,确定到所述车道线区域的距离小于预设距离的候选点;
在步骤S205中,在所述候选点中确定与处于所述车道线区域中的投影点的预设属性信息的相似度大于预设相似度的扩展点;
在步骤S206中,将所述扩展点和处于所述车道线区域外的投影点作为新的投影点;
其中,所述根据所述投影点平行于水平面的两维坐标,对所述二维点云图像进行车道线拟合,获得车道线点包括:
在步骤S2031中,根据所述投影点平行于水平面的两维坐标,对所述二 维点云图像中新的投影点进行车道线拟合,获得车道线点。
在一个实施例中,由于将三维点云中的点向环境图像中投影,或多或少会存在一些偏差,例如可能是图像采集设备的外参不够准确导致的,那么将导致在三维点云中部分本应位于车道线区域的投影点,并不会投影到环境图像的车道线区域内,这就可能导致拟合结果并不准确,也即拟合确定的车道线与三维点云中实际的车道线存在差异。
但是由于偏差一般不会很大,所以这些未投影到车道线区域的投影点距离投影到区域的投影点距离较近。因此,可以在车道线区域外的投影点中,确定到车道线点的距离小于预设距离的候选点,这些候选点就有可能是未投影到车道线区域的车道线点,针对候选点可以确定确定与车道线区域内的投影点的预设属性信息的相似度大于预设相似度的扩展点,例如可以采用floodfill(泛洪填充)算法来确定扩展点,其中,预设属性可以可以根据需要进行设置,例如可以是反射亮度(intensity),这些扩展点极有可能是未投影到车道线区域的投影点,从而可以将扩展点和原来车道线区域内的投影点作为新的投影点进行车道线拟合。
据此,可以缓解将三维点云中的点向环境图像中投影存在偏差而导致拟合结果不准确的问题。
图14是根据本公开的实施例示出的又一种根据所述二维点云图像进行车道线拟合,获得车道线点的示意流程图。如图14所示,所述根据所述二维点云图像进行车道线拟合,获得车道线点还包括:
在步骤S207中,根据接收到的修正指令修正所述车道线;
在步骤S208中,将修正后的车道线投影到所述二维点云图像中,以确定车道线在所述二维点云图像中的投影与所述车道线是否匹配;
在步骤S209中,根据车道线在所述二维点云图像中的投影与所述车道线的匹配结果生成响应信息。
在一个实施例中,可以接收人工输入的修正指令对车道线进行修正,但是人工修正的结果也可能存在误差,因此可以将修正后的车道线投影到环境图像中,以确定修正后的车道线在环境图像中的投影与车道线区域是否匹配,然后根据修正后的车道线在环境图像中的投影与车道线区域的匹配结果生成响应信息。
若修正后的车道线在环境图像中的投影与车道线区域不匹配,例如修正后的车道线在环境图像中的投影超过预设比例落在车道线区域以外,那么生成的响应信息可以用于提示用户修正结果不合理,以便用户重新修正;若修正后的车道线在环境图像中的投影与车道线区域匹配,例如修正后的车道线在环境图像中的投影小于预设比例落在车道线区域以外,那么生成的响应信息可以用于提示用户修正结果合理。
需要说明的是,本实施例除了在拟合投影点得到车道线后由人工参与进行修正,也可以在确定车道线区域,将三维点云中的点向环境图像投影的过程中由人工参与进行修正,例如在确定车道线过程中,可以接收人工输入的指令修正、补充、删减环境图像中的车道线区域,例如在将三维点云中的点向环境图像投影的过程中,对投影后的目标进行调整。
图15是根据本公开的实施例示出的又一种立体车道线确定方法的示意流程图。如图15所示,在将所述三维点云向垂直方向上进行投影,得到投影点构成的二维点云图像之前,所述方法还包括:
在步骤S8中,在所述三维点云中确定属于障碍物的障碍点;
在步骤S9中,在所述三维点云中剔除所述障碍点;
其中,所述将所述三维点云向垂直方向上进行投影,得到投影点构成的二维点云图像包括:
在步骤S101中,将剔除所述障碍点的三维点云中的点向垂直方向上进行投影,得到投影点构成的二维点云图像。
在一个实施例中,在将所述三维点云向垂直方向上进行投影,得到投影点构成的二维点云图像之前,可以先将三维点云中属于障碍物的障碍点剔除,从而后续进行投影操作时,可以将剔除障碍点的三维点云中的点向垂直方向上投影,以免属于障碍物的障碍点投影到二维点云图像中,影响后续确定车道线点的准确性。
图16是根据本公开的实施例示出的又一种立体车道线确定方法的示意流程图。如图16所示,所述在所述三维点云中确定属于障碍物的障碍点包括:
在步骤S801中,根据预先确定的深度学习模型,在所述三维点云中确定属于障碍物的障碍点。
在一个实施例中,可以预先通过深度学习得到深度学习模型,深度学习模型可以以三维点云作为输入,输出属于障碍物的障碍物点的信息,根据该信息,可以确定三维点云中属于障碍物的障碍点。其中,障碍物包括但不限于车辆、行人、交通指示牌等。
图17是根据本公开的实施例示出的又一种立体车道线确定方法的示意流程图。如图17所示,所述根据所述二维点云图像进行车道线拟合,获得车道线点包括:
在步骤S308中,通过贝塞尔曲线对所述二维点云图像进行车道线拟合,获得车道线点。
在一个实施例中,可以根据需要选择曲线模型来拟合二维点云图像中的投影点以确定车道线。例如可以选择贝塞尔曲线拟合二维点云图像中的投影点以确定车道线。
图18是根据本公开的实施例示出的又一种立体车道线确定方法的示意流程图。如图18所示,所述通过贝塞尔曲线根据所述二维点云图像进行车道线拟合,获得车道线点包括:
在步骤S3081中,通过多段三阶贝塞尔曲线对所述二维点云图像进行车 道线拟合,获得车道线点。
在一个实施例中,可以通过多段三阶贝塞尔曲线拟合二维点云图像中的投影点以确定车道线。其中,三阶贝塞尔曲线的方程如下:
P(t)=A(1-t)³+B·3(1-t)²·t+C·3(1-t)·t²+D·t³;
A、B、C和D为目标点中作为控制点的坐标。
具体拟合方式可以是确定二维点云图像中的投影点中最远的两点作为端点进行拟合,然后对于拟合得到的曲线,确定是否存在投影点距离该曲线距离大于预设距离,若存在这种投影点,则以该投影点向曲线做垂线,然后从垂线和曲线的交点将曲线划分为两部分,针对每部分曲线对投影点继续进行拟合,若对于进一步拟合后的曲线,仍存在到曲线的距离大于预设距离的投影点,那么继续以投影点向曲线做垂线,然后再从垂线和曲线的交点将曲线进一步划分,再对划分后的每部分曲线对目标点继续进行拟合,直至对于拟合后的曲线,所有投影点到曲线的距离小于或等于预设距离。
本公开立体车道线确定装置的实施例可以应用在电子设备(例如终端、服务器)上。装置实施例可以通过软件实现,也可以通过硬件或者软硬件结合的方式实现。以软件实现为例,作为一个逻辑意义上的装置,是通过其所在设备的处理器将非易失性存储器中对应的计算机程序指令读取到内存中运行形成的。从硬件层面而言,如图19所示,为本公开立体车道线确定装置所在设备的一种硬件结构示意图,除了图19所示的处理器、网络接口、内存以及非易失性存储器之外,实施例中装置所在的设备通常还可以包括其他硬件,如负责处理报文的转发芯片等等;从硬件结构上来讲该设备还可能是分布式的设备,可能包括多个接口卡,以便在硬件层面进行报文处理的扩展。
本公开的实施例还提出一种立体车道线确定装置,所述装置可以适用于图像采集设备,图像采集设备可以采集车辆行驶环境的三维点云,也可以适用于能够对三维点云进行分析处理的其他电子设备,例如终端、服务器、车载设备等。
所述立体车道线确定装置包括单独或者协同工作的一个或者多个处理器,所述处理器用于执行:
获取目标环境的三维点云,并将所述三维点云向垂直方向上进行投影,得到投影点构成的二维点云图像,其中所述投影点保留所述三维点云的高度信息;
根据所述二维点云图像进行车道线拟合,获得车道线点;
基于所述投影点的高度信息,确定所述车道线点的高度信息;
基于所述车道线点的高度信息,生成立体车道线。
在一个实施例中,所述处理器用于执行:
将所述二维点云图像划分为多个区块;
对所述区块中的投影点的高度信息进行聚类,以确定至少一个高度区间。
在一个实施例中,所述处理器用于执行:
若确定了一个高度区间,根据所述高度区间的区间信息确定所述车道线点的高度信息。
在一个实施例中,所述处理器用于执行:
若确定了多个高度区间,确定目标车道线点所属的车道线,其中,所述目标车道线点位于所述多个区块中的第一区块;
确定所述车道线上位于其他区块中的其他车道线点的高度信息,其中,所述其他区块为所述多个区块中与所述第一区块相邻的区块;
根据所述其他车道线点的高度信息,确定所述目标车道线点在所述多个高度区间中所属的目标高度区间;
根据所述目标高度区间的区间信息确定所述目标车道线点的高度信息。
在一个实施例中,所述其他区块包括第二区块和第三区块,所述车道线上位于所述第二区块中的其他车道线点的高度信息为第二高度信息,所述车道线上位于所述第三区块中的其他车道线点的高度信息为第二高度信息;
其中,所述处理器用于执行:
在所述多个高度区间中确定介于所述第二高度信息和所述第三高度信息 之间的区间作为目标高度区间。
在一个实施例中,所述处理器用于执行:
对所述区块中的投影点的高度信息进行聚类,以确定多个高度极值;
以所述多个高度极值作为高度区间的边界值,以确定至少一个高度区间。
在一个实施例中,所述处理器还用于执行:
标注所述车道线点;
显示带有标注的所述车道线点。
在一个实施例中,所述标注包括以下至少之一:
高度信息、位置信息、类别。
在一个实施例中,所述处理器还用于执行:
根据所述车道线点的高度信息,确定立体车道线地图。
在一个实施例中,所述处理器用于执行:
在所述二维点云图像中确定车道线区域;
确定处于所述车道线区域中的投影点;
根据所述车道线点平行于水平面的两维坐标,对所述二维点云图像进行车道线拟合,获得车道线点。
在一个实施例中,所述处理器用于执行:
根据预先确定的图像识别模型在所述二维点云图像中确定所述车道线区域。
在一个实施例中,所述处理器用于执行:
在所述二维点云图像中确定路面区域;
在所述路面区域中确定车道线区域。
在一个实施例中,所述处理器还用于执行:
在根据所述车道线点平行于水平面的两维坐标,根据所述二维点云图像进行车道线拟合,获得车道线点之前,在处于所述车道线区域外的投影点中,确定到所述车道线区域的距离小于预设距离的候选点;
在所述候选点中确定与处于所述车道线区域中的投影点的预设属性信息 的相似度大于预设相似度的扩展点;
将所述扩展点和处于所述车道线区域外的投影点作为新的投影点;
其中,所述处理器用于执行:
根据所述车道线点平行于水平面的两维坐标,对所述二维点云图像中新的投影点进行车道线拟合,获得车道线点。
在一个实施例中,所述处理器还用于执行:
根据接收到的修正指令修正所述车道线;
将修正后的车道线投影到所述二维点云图像中,以确定车道线在所述二维点云图像中的投影与所述车道线是否匹配;
根据车道线在所述二维点云图像中的投影与所述车道线的匹配结果生成响应信息。
在一个实施例中,所述处理器还用于执行:
在将所述三维点云向垂直方向上进行投影,得到投影点构成的二维点云图像之前,在所述三维点云中确定属于障碍物的障碍点;
在所述三维点云中剔除所述障碍点;
其中,所述将所述三维点云向垂直方向上进行投影,得到投影点构成的二维点云图像包括:
将剔除所述障碍点的三维点云中的点向垂直方向上进行投影,得到投影点构成的二维点云图像。
在一个实施例中,所述处理器用于执行:
根据预先确定的深度学习模型,在所述三维点云中确定属于障碍物的障碍点。
在一个实施例中,所述处理器用于执行:
通过贝塞尔曲线对所述二维点云图像进行车道线拟合,获得车道线点。
在一个实施例中,所述处理器用于执行:
通过多段三阶贝塞尔曲线对所述二维点云图像进行车道线拟合,获得车道线点。
本公开的实施例还提出一种电子设备,包括上述任一实施例所述的立体车道线确定装置。所述电子设备可以是终端(具体可以是手机等移动终端,也可以是车载终端),也可以是服务器。
上述实施例阐明的系统、装置、模块或单元,具体可以由计算机芯片或实体实现,或者由具有某种功能的产品来实现。为了描述的方便,描述以上装置时以功能分为各种单元分别描述。当然,在实施本公开时可以把各单元的功能在同一个或多个软件和/或硬件中实现。本领域内的技术人员应明白,本公开的实施例可提供为方法、系统、或计算机程序产品。因此,本公开可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本公开可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于系统实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
以上所述仅为本公开的实施例而已,并不用于限制本公开。对于本领域技术人员来说,本公开可以有各种更改和变化。凡在本公开的精神和原理之 内所作的任何修改、等同替换、改进等,均应包含在本公开的权利要求范围之内。

Claims (37)

  1. 一种立体车道线确定方法,其特征在于,包括:
    获取目标环境的三维点云,并将所述三维点云向垂直方向上进行投影,得到投影点构成的二维点云图像,其中所述投影点保留所述三维点云的高度信息;
    根据所述二维点云图像进行车道线拟合,获得车道线点;
    基于所述投影点的高度信息,确定所述车道线点的高度信息;
    基于所述车道线点的高度信息,生成立体车道线。
  2. 根据权利要求1所述的方法,其特征在于,所述区块高度信息包括高度区间,所述基于所述投影点的高度信息,确定所述车道线点的高度信息包括:
    将所述二维点云图像划分为多个区块;
    对所述区块中的投影点的高度信息进行聚类,以确定至少一个高度区间。
  3. 根据权利要求2所述的方法,其特征在于,所述基于所述投影点的高度信息,确定所述车道线点的高度信息还包括:
    若确定了一个高度区间,根据所述高度区间的区间信息确定所述车道线点的高度信息。
  4. 根据权利要求2所述的方法,其特征在于,所述基于所述投影点的高度信息,确定所述车道线点的高度信息还包括:
    若确定了多个高度区间,确定目标车道线点所属的车道线,其中,所述目标车道线点位于所述多个区块中的第一区块;
    确定所述车道线上位于其他区块中的其他车道线点的高度信息,其中,所述其他区块为所述多个区块中与所述第一区块相邻的区块;
    根据所述其他车道线点的高度信息,确定所述目标车道线点在所述多个高度区间中所属的目标高度区间;
    根据所述目标高度区间的区间信息确定所述目标车道线点的高度信息。
  5. 根据权利要求4所述的方法,其特征在于,所述其他区块包括第二区 块和第三区块,所述车道线上位于所述第二区块中的其他车道线点的高度信息为第二高度信息,所述车道线上位于所述第三区块中的其他车道线点的高度信息为第二高度信息;
    其中,所述根据所述其他车道线点的高度信息,确定所述目标车道线点在所述多个高度区间中所属的目标高度区间包括:
    在所述多个高度区间中确定介于所述第二高度信息和所述第三高度信息之间的区间作为目标高度区间。
  6. 根据权利要求2所述的方法,其特征在于,所述对所述区块中的投影点的高度信息进行聚类,以确定至少一个高度区间包括:
    对所述区块中的投影点的高度信息进行聚类,以确定多个高度极值;
    以所述多个高度极值作为高度区间的边界值,以确定至少一个高度区间。
  7. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    标注所述车道线点;
    显示带有标注的所述车道线点。
  8. 根据权利要求7所述的方法,其特征在于,所述标注包括以下至少之一:
    高度信息、位置信息、类别。
  9. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    根据所述车道线点的高度信息,确定立体车道线地图。
  10. 根据权利要求1至9中任一项所述的方法,其特征在于,所述根据所述二维点云图像进行车道线拟合,获得车道线点包括:
    在所述二维点云图像中确定车道线区域;
    确定处于所述车道线区域中的投影点;
    根据所述车道线点平行于水平面的两维坐标,对所述二维点云图像进行车道线拟合,获得车道线点。
  11. 根据权利要求10所述的方法,其特征在于,所述在所述二维点云图像中确定车道线区域包括:
    根据预先确定的图像识别模型在所述二维点云图像中确定所述车道线区域。
  12. 根据权利要求10所述的方法,其特征在于,所述在所述二维点云图像中确定车道线区域包括:
    在所述二维点云图像中确定路面区域;
    在所述路面区域中确定车道线区域。
  13. 根据权利要求10所述的方法,其特征在于,在根据所述车道线点平行于水平面的两维坐标,根据所述二维点云图像进行车道线拟合,获得车道线点之前,所述根据所述二维点云图像进行车道线拟合,获得车道线点还包括:
    在处于所述车道线区域外的投影点中,确定到所述车道线区域的距离小于预设距离的候选点;
    在所述候选点中确定与处于所述车道线区域中的投影点的预设属性信息的相似度大于预设相似度的扩展点;
    将所述扩展点和处于所述车道线区域外的投影点作为新的投影点;
    其中,所述根据所述投影点平行于水平面的两维坐标,对所述二维点云图像进行车道线拟合,获得车道线点包括:
    根据所述车道线点平行于水平面的两维坐标,对所述二维点云图像中新的投影点进行车道线拟合,获得车道线点。
  14. 根据权利要求10所述的方法,其特征在于,所述根据所述二维点云图像进行车道线拟合,获得车道线点还包括:
    根据接收到的修正指令修正所述车道线;
    将修正后的车道线投影到所述二维点云图像中,以确定车道线在所述二维点云图像中的投影与所述车道线是否匹配;
    根据车道线在所述二维点云图像中的投影与所述车道线的匹配结果生成响应信息。
  15. 根据权利要求1至9中任一项所述的方法,其特征在于,在将所述 三维点云向垂直方向上进行投影,得到投影点构成的二维点云图像之前,所述方法还包括:
    在所述三维点云中确定属于障碍物的障碍点;
    在所述三维点云中剔除所述障碍点;
    其中,所述将所述三维点云向垂直方向上进行投影,得到投影点构成的二维点云图像包括:
    将剔除所述障碍点的三维点云中的点向垂直方向上进行投影,得到投影点构成的二维点云图像。
  16. 根据权利要求15所述的方法,其特征在于,所述在所述三维点云中确定属于障碍物的障碍点包括:
    根据预先确定的深度学习模型,在所述三维点云中确定属于障碍物的障碍点。
  17. 根据权利要求1至9中任一项所述的方法,其特征在于,所述根据所述二维点云图像进行车道线拟合,获得车道线点包括:
    通过贝塞尔曲线对所述二维点云图像进行车道线拟合,获得车道线点。
  18. 根据权利要求17所述的方法,其特征在于,所述通过贝塞尔曲线根据所述二维点云图像进行车道线拟合,获得车道线点包括:
    通过多段三阶贝塞尔曲线对所述二维点云图像进行车道线拟合,获得车道线点。
  19. 一种立体车道线确定装置,其特征在于,包括单独或者协同工作的一个或者多个处理器,所述处理器用于执行:
    获取目标环境的三维点云,并将所述三维点云向垂直方向上进行投影,得到投影点构成的二维点云图像,其中所述投影点保留所述三维点云的高度信息;
    根据所述二维点云图像进行车道线拟合,获得车道线点;
    基于所述投影点的高度信息,确定所述车道线点的高度信息;
    基于所述车道线点的高度信息,生成立体车道线。
  20. 根据权利要求19所述的装置,其特征在于,所述处理器用于执行:
    将所述二维点云图像划分为多个区块;
    对所述区块中的投影点的高度信息进行聚类,以确定至少一个高度区间。
  21. 根据权利要求20所述的装置,其特征在于,所述处理器用于执行:
    若确定了一个高度区间,根据所述高度区间的区间信息确定所述车道线点的高度信息。
  22. 根据权利要求20所述的装置,其特征在于,所述处理器用于执行:
    若确定了多个高度区间,确定目标车道线点所属的车道线,其中,所述目标车道线点位于所述多个区块中的第一区块;
    确定所述车道线上位于其他区块中的其他车道线点的高度信息,其中,所述其他区块为所述多个区块中与所述第一区块相邻的区块;
    根据所述其他车道线点的高度信息,确定所述目标车道线点在所述多个高度区间中所属的目标高度区间;
    根据所述目标高度区间的区间信息确定所述目标车道线点的高度信息。
  23. 根据权利要求22所述的装置,其特征在于,所述其他区块包括第二区块和第三区块,所述车道线上位于所述第二区块中的其他车道线点的高度信息为第二高度信息,所述车道线上位于所述第三区块中的其他车道线点的高度信息为第二高度信息;
    其中,所述处理器用于执行:
    在所述多个高度区间中确定介于所述第二高度信息和所述第三高度信息之间的区间作为目标高度区间。
  24. 根据权利要求20所述的装置,其特征在于,所述处理器用于执行:
    对所述区块中的投影点的高度信息进行聚类,以确定多个高度极值;
    以所述多个高度极值作为高度区间的边界值,以确定至少一个高度区间。
  25. 根据权利要求19所述的装置,其特征在于,所述处理器还用于执行:
    标注所述车道线点;
    显示带有标注的所述车道线点。
  26. 根据权利要求25所述的装置,其特征在于,所述标注包括以下至少之一:
    高度信息、位置信息、类别。
  27. 根据权利要求19所述的装置,其特征在于,所述处理器还用于执行:
    根据所述车道线点的高度信息,确定立体车道线地图。
  28. 根据权利要求19至27中任一项所述的装置,其特征在于,所述处理器用于执行:
    在所述二维点云图像中确定车道线区域;
    确定处于所述车道线区域中的投影点;
    根据所述车道线点平行于水平面的两维坐标,对所述二维点云图像进行车道线拟合,获得车道线点。
  29. 根据权利要求28所述的装置,其特征在于,所述处理器用于执行:
    根据预先确定的图像识别模型在所述二维点云图像中确定所述车道线区域。
  30. 根据权利要求28所述的装置,其特征在于,所述处理器用于执行:
    在所述二维点云图像中确定路面区域;
    在所述路面区域中确定车道线区域。
  31. 根据权利要求28所述的装置,其特征在于,所述处理器还用于执行:
    在根据所述车道线点平行于水平面的两维坐标,根据所述二维点云图像进行车道线拟合,获得车道线点之前,在处于所述车道线区域外的投影点中,确定到所述车道线区域的距离小于预设距离的候选点;
    在所述候选点中确定与处于所述车道线区域中的投影点的预设属性信息的相似度大于预设相似度的扩展点;
    将所述扩展点和处于所述车道线区域外的投影点作为新的投影点;
    其中,所述处理器用于执行:
    根据所述车道线点平行于水平面的两维坐标,对所述二维点云图像中新的投影点进行车道线拟合,获得车道线点。
  32. 根据权利要求28所述的装置,其特征在于,所述处理器还用于执行:
    根据接收到的修正指令修正所述车道线;
    将修正后的车道线投影到所述二维点云图像中,以确定车道线在所述二维点云图像中的投影与所述车道线是否匹配;
    根据车道线在所述二维点云图像中的投影与所述车道线的匹配结果生成响应信息。
  33. 根据权利要求19至27中任一项所述的装置,其特征在于,所述处理器还用于执行:
    在将所述三维点云向垂直方向上进行投影,得到投影点构成的二维点云图像之前,在所述三维点云中确定属于障碍物的障碍点;
    在所述三维点云中剔除所述障碍点;
    其中,所述将所述三维点云向垂直方向上进行投影,得到投影点构成的二维点云图像包括:
    将剔除所述障碍点的三维点云中的点向垂直方向上进行投影,得到投影点构成的二维点云图像。
  34. The apparatus according to claim 33, wherein the processor is configured to:
    determine the obstacle points belonging to obstacles in the three-dimensional point cloud according to a predetermined deep learning model.
  35. The apparatus according to any one of claims 19 to 27, wherein the processor is configured to:
    perform lane line fitting on the two-dimensional point cloud image by means of Bézier curves, to obtain the lane line points.
  36. The apparatus according to claim 35, wherein the processor is configured to:
    perform lane line fitting on the two-dimensional point cloud image by means of multi-segment third-order Bézier curves, to obtain the lane line points.
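A multi-segment third-order (cubic) Bézier lane line can be illustrated by evaluating such a curve; the fitting itself (choosing control points to minimize the distance to the projected points) is omitted, and the control points below are arbitrary example values:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate one cubic (third-order) Bezier segment at t in [0, 1]."""
    u = 1.0 - t
    return tuple(u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def sample_polybezier(segments, samples_per_seg=10):
    """Sample a multi-segment cubic Bezier lane line. Consecutive segments
    are assumed to share endpoints so the sampled curve is continuous."""
    pts = []
    for p0, p1, p2, p3 in segments:
        for i in range(samples_per_seg + 1):
            pts.append(cubic_bezier(p0, p1, p2, p3, i / samples_per_seg))
    return pts

seg = ((0.0, 0.0), (1.0, 1.0), (2.0, 1.0), (3.0, 0.0))  # one gentle arc
pts = sample_polybezier([seg])
```

Chaining several such segments keeps each fit low-order and local, which is why multi-segment cubics are a common choice for long, curving lane geometry.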
  37. An electronic device, comprising the three-dimensional lane line determination apparatus according to any one of claims 19 to 36.
PCT/CN2019/106656 2019-09-19 2019-09-19 Three-dimensional lane line determination method and apparatus, and electronic device WO2021051346A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980033269.0A CN112154446B (zh) 2019-09-19 2019-09-19 Three-dimensional lane line determination method and apparatus, and electronic device
PCT/CN2019/106656 WO2021051346A1 (zh) 2019-09-19 2019-09-19 Three-dimensional lane line determination method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/106656 WO2021051346A1 (zh) 2019-09-19 2019-09-19 Three-dimensional lane line determination method and apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2021051346A1 true WO2021051346A1 (zh) 2021-03-25

Family

ID=73891478

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/106656 WO2021051346A1 (zh) 2019-09-19 2019-09-19 Three-dimensional lane line determination method and apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN112154446B (zh)
WO (1) WO2021051346A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114029953A (zh) * 2021-11-18 2022-02-11 上海擎朗智能科技有限公司 Method for determining a ground plane based on a depth sensor, robot, and robot system
CN114677454A (zh) * 2022-03-25 2022-06-28 杭州睿影科技有限公司 Image generation method and apparatus

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN112802126A (zh) * 2021-02-26 2021-05-14 上海商汤临港智能科技有限公司 Calibration method and apparatus, computer device, and storage medium
CN113199479B (zh) * 2021-05-11 2023-02-10 梅卡曼德(北京)机器人科技有限公司 Trajectory generation method and apparatus, electronic device, storage medium, and 3D camera
CN113205447A (zh) * 2021-05-11 2021-08-03 北京车和家信息技术有限公司 Road image annotation method and apparatus for lane line recognition
CN114708576B (zh) * 2022-06-06 2022-10-25 天津所托瑞安汽车科技有限公司 Lane line determination method, apparatus, device, and storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
US20100299109A1 (en) * 2009-05-22 2010-11-25 Fuji Jukogyo Kabushiki Kaisha Road shape recognition device
CN104766058A (zh) * 2015-03-31 2015-07-08 百度在线网络技术(北京)有限公司 Method and apparatus for acquiring lane lines
CN108764187A (zh) * 2018-06-01 2018-11-06 百度在线网络技术(北京)有限公司 Method, apparatus, device, storage medium, and collection entity for extracting lane lines
CN109766878A (zh) * 2019-04-11 2019-05-17 深兰人工智能芯片研究院(江苏)有限公司 Lane line detection method and device
CN109858460A (zh) * 2019-02-20 2019-06-07 重庆邮电大学 Lane line detection method based on three-dimensional lidar
CN110097620A (zh) * 2019-04-15 2019-08-06 西安交通大学 High-precision map creation system based on images and three-dimensional laser


Cited By (4)

Publication number Priority date Publication date Assignee Title
CN114029953A (zh) * 2021-11-18 2022-02-11 上海擎朗智能科技有限公司 Method for determining a ground plane based on a depth sensor, robot, and robot system
CN114029953B (zh) * 2021-11-18 2022-12-20 上海擎朗智能科技有限公司 Method for determining a ground plane based on a depth sensor, robot, and robot system
CN114677454A (zh) * 2022-03-25 2022-06-28 杭州睿影科技有限公司 Image generation method and apparatus
CN114677454B (zh) * 2022-03-25 2022-10-04 杭州睿影科技有限公司 Image generation method and apparatus

Also Published As

Publication number Publication date
CN112154446A (zh) 2020-12-29
CN112154446B (zh) 2024-03-19

Similar Documents

Publication Publication Date Title
WO2021051346A1 (zh) Three-dimensional lane line determination method and apparatus, and electronic device
WO2021051344A1 (zh) Method and apparatus for determining lane lines in a high-precision map
WO2017020466A1 (zh) Laser-point-cloud-based urban road recognition method, apparatus, storage medium, and device
CN110796714B (zh) Map construction method, apparatus, terminal, and computer-readable storage medium
CN110715671B (zh) Three-dimensional map generation method and apparatus, vehicle navigation device, and unmanned vehicle
CN108280886A (zh) Laser point cloud annotation method, apparatus, and readable storage medium
CN106920278B (zh) Three-dimensional overpass modeling method based on Reeb graphs
JP7290240B2 (ja) Object recognition device
CN105869211B (zh) Viewshed analysis method and apparatus
CN113009506A (zh) Virtual-real-combined real-time lidar data generation method, system, and device
CN104422451A (zh) Road recognition method and apparatus
CN110969592A (zh) Image fusion method, autonomous driving control method, apparatus, and device
CN113971723B (zh) Method, apparatus, device, and storage medium for constructing a three-dimensional map in a high-precision map
CN113255578A (zh) Traffic sign recognition method and apparatus, electronic device, and storage medium
WO2022077949A1 (zh) Data processing method and apparatus
CN103162664A (zh) Elevation data acquisition method and apparatus, and navigation device
CN118037983B (zh) Quality evaluation and rapid three-dimensional modeling method for massive urban street-view panoramic images
CN114018239A (zh) Three-dimensional lane map construction method, apparatus, device, and storage medium
CN113709006A (zh) Flow determination method and apparatus, storage medium, and electronic apparatus
CN110174115B (zh) Method and apparatus for automatically generating a high-precision positioning map based on perception data
CN117315024A (zh) Long-range target positioning method and apparatus, and electronic device
CN115830255B (zh) Simulation scenario generation method and apparatus, electronic device, and storage medium
CN116105717A (zh) Lane-level high-precision map construction method and system
CN111488411A (zh) Road facility construction method and apparatus, rendering method, medium, and terminal
CN117739950B (zh) Map generation method, apparatus, and device

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 19945957; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 19945957; Country of ref document: EP; Kind code of ref document: A1)