WO2020192105A1 - Method and device for correcting vehicle pose - Google Patents

Method and device for correcting vehicle pose

Info

Publication number
WO2020192105A1
Authority
WO
WIPO (PCT)
Prior art keywords
lane line
line
dashed
target
segment
Prior art date
Application number
PCT/CN2019/113483
Other languages
English (en)
French (fr)
Inventor
侯政华
杜志颖
管守奎
Original Assignee
魔门塔(苏州)科技有限公司
北京初速度科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 魔门塔(苏州)科技有限公司 and 北京初速度科技有限公司
Publication of WO2020192105A1

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/26 Navigation; Navigational instruments specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3446 Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes

Definitions

  • The invention relates to the technical field of automatic driving, and in particular to a method and device for correcting vehicle pose.
  • Existing vehicle positioning methods usually use a perception model obtained by deep learning to detect images captured by a vehicle-mounted camera and extract perception information about lane lines, street light poles and traffic signs in the images.
  • On closed road sections such as expressways, however, the road information contains few traffic signs and lamp posts.
  • Because lane line information can only constrain and correct the vehicle's position in the left-right direction, when the vehicle travels for a long time on a road section with only lane line information, the front-rear (longitudinal) error keeps growing and positioning accuracy is usually poor.
  • The embodiments of the present invention disclose a method and device for correcting vehicle pose, which solve the problem of poor vehicle positioning accuracy on closed road sections such as expressways where information such as traffic signs and street light poles is scarce.
  • An embodiment of the present invention discloses a method for correcting the pose of a vehicle, the method including:
  • identifying, from the lane line dashed endpoints constituting the lane line dashed segments of a perception image and the corresponding lane line dashed endpoints in a navigation map, target lane line dashed endpoints whose endpoint categories match, the target lane line dashed endpoints including a target upper endpoint and a target lower endpoint belonging to the same lane line dashed segment, where the distance between the target upper endpoint and the vehicle is greater than the distance between the target lower endpoint and the vehicle;
  • matching the target lane line dashed segments in the perception image with the target lane line dashed segments in the navigation map, and correcting the current pose of the vehicle in the navigation map according to the matching result.
  • Optionally, identifying the target lane line dashed endpoints whose endpoint categories match includes:
  • determining the category of each lane line dashed endpoint in the perception image, and determining the lane line dashed endpoints with the corresponding position and category in the navigation map as target lane line dashed endpoints.
  • Optionally, matching the target lane line dashed segments in the perception image with the target lane line dashed segments in the navigation map includes:
  • Optionally, correcting the current pose of the vehicle in the navigation map according to the matching result includes:
  • correcting the current pose of the vehicle in the navigation map according to the projection distance.
  • Optionally, determining the projection distance between each projected first target lane line dashed segment from the navigation map and each second target lane line dashed segment in the perception image includes:
  • selecting, from the sums of projection distances corresponding to the matching combinations, the sum with the smallest value as the projection distance between the projected first target lane line dashed segments and the second target lane line dashed segments.
  • Optionally, iterative correction is used to correct the pose of the vehicle, and the vehicle pose after each correction is used as the input for the next pose correction, so that the sum of the projection distances corresponding to all matching combinations reaches a set threshold.
  • An embodiment of the present invention also provides a device for correcting the pose of a vehicle, the device including:
  • a target lane line dashed endpoint recognition module, configured to identify, from the lane line dashed endpoints constituting the lane line dashed segments of the perception image and the corresponding lane line dashed endpoints in the navigation map, target lane line dashed endpoints whose endpoint categories match,
  • where the target lane line dashed endpoints include a target upper endpoint and a target lower endpoint belonging to the same lane line dashed segment, and the distance between the target upper endpoint and the vehicle is greater than the distance between the target lower endpoint and the vehicle;
  • a lane line information judging module, configured to determine whether the lane line information of the lane line to which the target lane line dashed endpoints belong in the perception image matches the lane line information of the lane line to which they belong in the navigation map, the lane line information including the lane line category;
  • a target lane line dashed segment matching module, configured to, if the lane line information matches, match, for each target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, the target lane line dashed segment in the perception image with the target lane line dashed segment in the navigation map;
  • a vehicle pose correction module, configured to correct the current pose of the vehicle in the navigation map according to the matching result.
  • Optionally, the target lane line dashed endpoint recognition module is specifically configured to:
  • determine the category of each lane line dashed endpoint in the perception image, and determine the lane line dashed endpoints with the corresponding position and category in the navigation map as target lane line dashed endpoints.
  • Optionally, the target lane line dashed segment matching module includes:
  • a projection unit, configured to, if the lane line information matches, project, for each target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, the target lane line dashed segment in the navigation map onto the plane of the perception image;
  • a projection distance determining unit, configured to determine the projection distance between each projected first target lane line dashed segment from the navigation map and each second target lane line dashed segment in the perception image.
  • Optionally, the vehicle pose correction module is specifically configured to:
  • correct the current pose of the vehicle in the navigation map according to the projection distance.
  • Optionally, the projection distance determining unit is specifically configured to:
  • select, from the sums of projection distances corresponding to the matching combinations, the sum with the smallest value as the projection distance between the projected first target lane line dashed segments and the second target lane line dashed segments.
  • Optionally, iterative correction is used to correct the pose of the vehicle, and the vehicle pose after each correction is used as the input for the next pose correction, so that the sum of the projection distances corresponding to all matching combinations reaches a set threshold.
  • An embodiment of the present invention also provides a vehicle-mounted terminal, including:
  • a memory storing executable program code;
  • a processor coupled with the memory;
  • where the processor calls the executable program code stored in the memory to execute part or all of the steps of the vehicle pose correction method provided by any embodiment of the present invention.
  • An embodiment of the present invention also provides a computer-readable storage medium storing a computer program, where the computer program includes instructions for executing part or all of the steps of the vehicle pose correction method provided by any embodiment of the present invention.
  • The embodiments of the present invention also provide a computer program product which, when run on a computer, causes the computer to execute part or all of the steps of the vehicle pose correction method provided by any embodiment of the present invention.
  • In the embodiments of the present invention, target lane line dashed endpoints whose endpoint categories match are determined from the navigation map and the corresponding lane line dashed endpoints in the perception image. For each target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, the target lane line dashed segment in the perception image is matched with the target lane line dashed segment in the navigation map, and the current pose of the vehicle in the navigation map is corrected according to the matching result. This solves the problem of large longitudinal positioning errors of unmanned vehicles on closed road sections with only lane line information.
  • The invention points of the present invention include:
  • using the projection distance of each matching combination composed of the first target lane line dashed segments and the second target lane line dashed segments as the matching result between the target lane line dashed segments in the perception image and those in the navigation map, and using the matching result to correct the current pose of the vehicle, thereby achieving accurate positioning of unmanned vehicles on expressways, closed road sections and other road sections with insufficient information, is one of the invention points of the present invention.
  • FIG. 1 is a schematic flowchart of a vehicle positioning method provided by an embodiment of the present invention;
  • FIG. 2a is a schematic flowchart of a vehicle positioning method provided by an embodiment of the present invention;
  • FIG. 2b is a schematic diagram of projecting the dashed lane lines in the navigation map onto the plane of the perception image, provided by an embodiment of the present invention;
  • FIG. 3 is a schematic structural diagram of a device for correcting the pose of a vehicle provided by an embodiment of the present invention;
  • FIG. 4 is a schematic structural diagram of a vehicle-mounted terminal provided by an embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of a vehicle positioning method according to an embodiment of the present invention. The method is mainly executed after the high-precision map has been initialized; at this point the vehicle's positioning position has already been corrected to the centimeter level, and the projection of each lane line of the navigation map into the perception image matches the corresponding lane line in the perception image.
  • The vehicle positioning method provided by the embodiments of the present invention is typically applied in scenarios where an unmanned vehicle drives on expressways and other closed roads lacking traffic signs and street light poles. It can be executed by the vehicle's positioning device.
  • The method provided in this embodiment specifically includes:
  • The perception image is obtained by using a preset perception model to recognize images containing road information collected by the camera.
  • The preset perception model can be trained in advance using a large number of road sample images annotated with image semantic features.
  • The image semantic features may include lane lines, lane line dashed endpoints, prismatic lines, and zebra crossings.
  • The preset perception model can be obtained in the following way:
  • a training sample set is constructed that includes multiple sets of training sample data, each set including road sample images and the corresponding road perception sample images annotated with image semantic features; an initial neural network is built based on the training sample set;
  • the preset perception model is obtained through training, such that the road sample images in each set of training sample data are associated with the corresponding road perception sample images annotated with image semantic features.
  • The output of the model can be called the perception image.
  • The categories of the lane line dashed endpoints constituting the lane line dashed segments in the perception image include general lane line dashed endpoints, intersection points of the lane line and the vehicle, cut-off lines of the lane line and the crosswalk, and so on. The categories of the lane line dashed endpoints constituting the lane line dashed segments in the navigation map mainly refer to general lane line dashed endpoints.
  • Lane line dashed endpoints with the same endpoint category are regarded as target lane line dashed endpoints.
  • A general lane line dashed endpoint means that the dashed segment where the endpoint is located neither intersects with the vehicle nor is the cut-off line of the lane line and the crosswalk.
  • The target lane line dashed endpoints include a target upper endpoint and a target lower endpoint.
  • The target upper endpoint and the target lower endpoint belong to the same lane line dashed segment, and the subsequent matching in this embodiment also uses each dashed segment of the dashed lane line as the matching unit.
  • The target upper endpoint and the target lower endpoint are defined mainly by their distance to the vehicle: for any dashed segment, the distance between the target upper endpoint and the vehicle is greater than the distance between the target lower endpoint and the vehicle.
  • Category matching means that the categories of the target upper endpoint and the target lower endpoint in the perception image match those of the target upper endpoint and the target lower endpoint at the corresponding positions in the navigation map.
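As a rough illustration (not the patent's actual implementation), the endpoint-category matching described above can be sketched as follows. The `Endpoint` type, the label `GENERAL_DASH`, and the pixel tolerance `tol` are hypothetical names introduced only for this sketch:

```python
from dataclasses import dataclass

GENERAL_DASH = "general_dash"  # hypothetical label for a general lane line dashed endpoint

@dataclass
class Endpoint:
    x: float          # image-plane coordinates (pixels)
    y: float
    category: str     # e.g. "general_dash", "vehicle_intersection", "crosswalk_cutoff"

def match_target_endpoints(perceived, projected_map, tol=5.0):
    """Pair each perceived dashed-line endpoint with a projected map endpoint
    of the same category at (roughly) the corresponding image position.
    Endpoints whose category is not a general dashed endpoint are filtered out."""
    pairs = []
    for p in perceived:
        if p.category != GENERAL_DASH:
            continue  # drop intersections with the vehicle, crosswalk cut-off lines, etc.
        for m in projected_map:
            if (m.category == p.category
                    and abs(m.x - p.x) <= tol and abs(m.y - p.y) <= tol):
                pairs.append((p, m))
                break
    return pairs
```

The tolerance-based position check stands in for the "corresponding position" criterion, which the text does not spell out numerically.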
  • Step 120: determine whether the lane line information of the lane line to which the target lane line dashed endpoints belong in the perception image matches the lane line information of the lane line to which they belong in the navigation map. If they match, perform step 130; otherwise, perform step 140.
  • The lane line information includes lane line categories, attributes, and so on.
  • The lane line categories include dashed lines, solid lines, and prismatic lines.
  • The target lane line dashed segments in the perception image are matched with the target lane line dashed segments in the navigation map, and the current pose of the vehicle in the navigation map is corrected according to the matching result.
  • Matching the target lane line dashed segments in the perception image with those in the navigation map can be achieved by projecting the target lane line dashed segments in the navigation map onto the plane of the perception image, and determining, on that plane, the projection distance between the projected target lane line dashed segments and the target lane line dashed segments in the perception image. Since the projection distance reflects, to a certain extent, the error between the vehicle's real pose and its current pose, the current pose of the vehicle can be corrected by minimizing the projection distance.
  • This arrangement solves the problem of large longitudinal positioning errors of unmanned vehicles on closed road sections with only lane line information.
  • Using lane line dashed segments for matching can effectively reduce mismatches between upper and lower endpoints.
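The projection-and-distance step described above can be sketched with a standard pinhole camera model. This is an illustrative assumption: the patent does not specify the camera model, and `K`, `R`, `t` (intrinsics and the current pose estimate) are hypothetical inputs for this sketch:

```python
import numpy as np

def project_points(K, R, t, pts_world):
    """Project 3-D map points into the image plane with a pinhole model:
    u ~ K (R X + t). K, R, t are assumed known from calibration and from
    the current (possibly erroneous) pose estimate."""
    cam = R @ pts_world.T + t.reshape(3, 1)   # world frame -> camera frame
    uv = K @ cam                              # homogeneous pixel coordinates
    return (uv[:2] / uv[2]).T                 # divide by depth

def segment_projection_distance(map_seg_uv, img_seg_uv):
    """Projection distance between one projected map dashed segment and one
    perceived dashed segment: the summed pixel distance between their upper
    and lower endpoints (each segment is a 2x2 array of endpoints)."""
    return float(np.linalg.norm(map_seg_uv - img_seg_uv, axis=1).sum())
```

Because the projection distance grows with the pose error, it can serve as the cost that the later correction step drives down.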
  • In the technical solution of this embodiment, target lane line dashed endpoints whose endpoint categories match are determined from the navigation map and the corresponding lane line dashed endpoints in the perception image. For each target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, the target lane line dashed segment in the perception image is matched with the target lane line dashed segment in the navigation map, and the current pose of the vehicle in the navigation map is corrected according to the matching result. This solves the problem of large positioning errors of unmanned vehicles on closed road sections with only lane line information and achieves precise positioning of the vehicle.
  • FIG. 2a is a schematic flowchart of a vehicle positioning method according to an embodiment of the present invention.
  • On the basis of the above embodiment, this embodiment optimizes the identification of the target lane line dashed endpoints whose endpoint categories match and the matching of the target lane line dashed segments in the perception image with those in the navigation map.
  • The method includes:
  • The category with the highest confidence is used as the category of the lane line dashed endpoint in the perception image.
  • The general lane line dashed endpoint is taken as the category of the lane line dashed endpoints used in the perception image.
  • Lane line dashed endpoints in the perception image whose categories are inconsistent with those of the lane line dashed endpoints in the navigation map are filtered out and not used as endpoints for the pose correction of the vehicle.
  • If the lane line to which the target lane line dashed endpoints belong in the perception image matches the category of the lane line to which they belong in the navigation map, then for each target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, the target lane line dashed segment in the navigation map is projected onto the plane of the perception image.
  • FIG. 2b is a schematic diagram of projecting a dashed lane line in the navigation map onto the plane of the perception image according to an embodiment of the present invention, in which:
  • 1 represents the upper endpoint of the dashed lane line in the perception image;
  • 2 represents the upper endpoint of the dashed lane line in the navigation map;
  • 3 represents the lower endpoint of the dashed lane line in the perception image;
  • 4 represents the lower endpoint of the projected dashed lane line in the navigation map;
  • 5 represents the dashed segment of the lane line in the perception image;
  • 6 represents the dashed segment of the lane line in the navigation map;
  • 7 represents the dashed lane line in the perception image;
  • 8 represents the dashed lane line in the navigation map.
  • The navigation map has been initialized and can provide a centimeter-level positioning position. Therefore, when the various lane lines in the navigation map are projected onto the perception image, they correspond to the lane lines at the corresponding positions in the perception image, and the projection distance between corresponding lane lines meets a preset distance requirement. For example, after projection, positions 5 and 6 in FIG. 2b correspond to each other.
  • Endpoints 1 and 2, and endpoints 3 and 4, are target lane line dashed endpoints with matching categories.
  • The category of the lane line to which 1 and 3 belong matches the category of the lane line to which 2 and 4 belong; both are dashed lane lines.
  • The lane line dashed segment formed by the upper and lower endpoints is used for matching, that is, the lane line dashed segment is the basic unit to be matched. For example, 5 and 6 in FIG. 2b are matched, and the pose of the vehicle is corrected by calculating the projection error of 5 and 6 on the plane of the perception image.
  • Since each dashed lane line is formed by combining multiple lane line dashed segments, the projection errors of all matched dashed segments can be comprehensively considered to correct the vehicle's pose.
  • Determining the projection distance between each projected first target lane line dashed segment from the navigation map and each second target lane line dashed segment in the perception image includes the following. Suppose the dashed lane line in the navigation map is composed of three first target lane line dashed segments A, B, and C, and the dashed lane line at the corresponding position in the perception image is composed of three second target lane line dashed segments 1, 2, and 3.
  • First, the target lane line dashed segments in the navigation map and the perception image are formed into matching combinations. Within a matching combination, there is a one-to-one matching relationship between the first target lane line dashed segments and the second target lane line dashed segments; for example, A-1, B-2, C-3 is one matching combination, and A-1, B-3, C-2 is another matching combination.
  • For each matching combination, the sum of the projection distances between each first target lane line dashed segment and the corresponding second target lane line dashed segment is calculated, and the sum with the smallest value is selected from the sums corresponding to all matching combinations as the projection distance between the projected first target lane line dashed segments and the second target lane line dashed segments, that is, as the matching result between the target lane line dashed segments in the perception image and those in the navigation map.
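The enumeration of matching combinations over segments A, B, C and 1, 2, 3 can be sketched as a brute-force search over permutations. This sketch assumes equal segment counts on both sides and represents each segment as a 2x2 array of its upper and lower endpoints; a production system might use an assignment solver instead:

```python
from itertools import permutations
import numpy as np

def best_matching_combination(map_segments, img_segments):
    """Enumerate every one-to-one matching combination between the projected
    first target dashed segments (e.g. A, B, C) and the perceived second
    target dashed segments (e.g. 1, 2, 3), and keep the combination whose
    sum of projection distances is smallest. Assumes equal segment counts."""
    best_sum, best_pairs = float("inf"), None
    for perm in permutations(range(len(img_segments))):
        total = sum(np.linalg.norm(map_segments[i] - img_segments[j])
                    for i, j in enumerate(perm))
        if total < best_sum:
            best_sum, best_pairs = total, list(enumerate(perm))
    return best_pairs, best_sum
```

The winning combination plays the role of "the matching result", and its distance sum is the cost that the subsequent iterative correction drives toward the set threshold.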
  • The projection distance between the upper or lower endpoints in the perception image and those in the navigation map, that is, the projection error, is caused by inaccurate estimation of the current vehicle pose; in other words, the projection error reflects the error in the vehicle's current pose.
  • The pose is optimized by an iterative method: the vehicle pose after each correction is used as the input for the next pose correction, so that the sum of the projection distances corresponding to all matching combinations reaches a set threshold, that is, so that the sum of the projection errors of all matched dashed segments reaches a minimum within empirical values.
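The iterative correction loop can be sketched as follows. The text does not specify the optimizer, so a naive fixed-step coordinate descent is used here purely as a stand-in; `cost_fn` would be the sum of projection distances of the best matching combination evaluated at a candidate pose:

```python
import numpy as np

def iterative_pose_correction(pose, cost_fn, step=0.05, tol=1e-3, max_iter=100):
    """Iteratively correct the pose: the pose after each correction is fed
    back as the input of the next round, until the total projection distance
    (cost_fn) falls below the set threshold tol."""
    pose = np.asarray(pose, dtype=float).copy()
    for _ in range(max_iter):
        cost = cost_fn(pose)
        if cost <= tol:
            break  # sum of projection errors has reached the set threshold
        for i in range(pose.size):
            for delta in (step, -step):
                trial = pose.copy()
                trial[i] += delta          # nudge one pose component
                trial_cost = cost_fn(trial)
                if trial_cost < cost:
                    pose, cost = trial, trial_cost
    return pose
```

In practice a nonlinear least-squares solver (e.g. Gauss-Newton over the reprojection residuals) would replace this loop, but the feedback structure, each corrected pose becoming the next round's input, is the same.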
  • This embodiment is optimized on the basis of the above embodiments.
  • Using the projection distance of each matching combination composed of the first target lane line dashed segments and the second target lane line dashed segments as the matching result between the target lane line dashed segments in the perception image and those in the navigation map, and using this matching result to correct the current pose of the vehicle, achieves accurate positioning of unmanned vehicles on expressways, closed road sections and other road sections with insufficient information.
  • FIG. 3 is a schematic structural diagram of a device for correcting vehicle pose provided by an embodiment of the present invention.
  • The device includes: a target lane line dashed endpoint recognition module 310, a lane line information judging module 320, a target lane line dashed segment matching module 330, and a vehicle pose correction module 340, where:
  • the target lane line dashed endpoint recognition module 310 is configured to identify, from the lane line dashed endpoints constituting the lane line dashed segments of the perception image and the corresponding lane line dashed endpoints in the navigation map, target lane line dashed endpoints whose endpoint categories match,
  • where the target lane line dashed endpoints include a target upper endpoint and a target lower endpoint belonging to the same lane line dashed segment, and the distance between the target upper endpoint and the vehicle is greater than the distance between the target lower endpoint and the vehicle;
  • the lane line information judging module 320 is configured to determine whether the lane line information of the lane line to which the target lane line dashed endpoints belong in the perception image matches the lane line information of the lane line to which they belong in the navigation map, the lane line information including the lane line category;
  • the target lane line dashed segment matching module 330 is configured to, if the lane line information matches, match, for each target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, the target lane line dashed segment in the perception image with the target lane line dashed segment in the navigation map;
  • the vehicle pose correction module 340 is configured to correct the current pose of the vehicle in the navigation map according to the matching result.
  • In the technical solution of this embodiment, target lane line dashed endpoints whose endpoint categories match are determined from the navigation map and the corresponding lane line dashed endpoints in the perception image. For each target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, the target lane line dashed segment in the perception image is matched with the target lane line dashed segment in the navigation map, and the current pose of the vehicle in the navigation map is corrected according to the matching result, which solves the problem of large positioning errors of unmanned vehicles on closed road sections with only lane line information.
  • Optionally, the target lane line dashed endpoint recognition module is specifically configured to:
  • determine the category of each lane line dashed endpoint in the perception image, and determine the lane line dashed endpoints with the corresponding position and category in the navigation map as target lane line dashed endpoints.
  • Optionally, the target lane line dashed segment matching module includes:
  • a projection unit, configured to, if the lane line information matches, project, for each target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, the target lane line dashed segment in the navigation map onto the plane of the perception image;
  • a projection distance determining unit, configured to determine the projection distance between each projected first target lane line dashed segment from the navigation map and each second target lane line dashed segment in the perception image.
  • Optionally, the vehicle pose correction module is specifically configured to:
  • correct the current pose of the vehicle in the navigation map according to the projection distance.
  • Optionally, the projection distance determining unit is specifically configured to:
  • select, from the sums of projection distances corresponding to the matching combinations, the sum with the smallest value as the projection distance between the projected first target lane line dashed segments and the second target lane line dashed segments.
  • Optionally, iterative correction is used to correct the pose of the vehicle, and the vehicle pose after each correction is used as the input for the next pose correction, so that the sum of the projection distances corresponding to all matching combinations reaches a set threshold.
  • The device for correcting vehicle pose provided by the embodiments of the present invention can execute the method for correcting vehicle pose provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
  • FIG. 4 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention.
  • the vehicle-mounted terminal may include:
  • a memory 701 storing executable program code;
  • a processor 702 coupled with the memory 701;
  • the processor 702 calls the executable program code stored in the memory 701 to execute the method for correcting the vehicle pose provided by any embodiment of the present invention.
  • the embodiment of the present invention discloses a computer-readable storage medium that stores a computer program, wherein the computer program causes a computer to execute the vehicle pose correction method provided by any embodiment of the present invention.
  • the embodiment of the present invention discloses a computer program product, wherein when the computer program product runs on a computer, the computer is caused to execute part or all of the steps of the vehicle pose correction method provided by any embodiment of the present invention.
  • B corresponding to A means that B is associated with A, and B can be determined according to A.
  • determining B according to A does not mean that B is determined only according to A, and B can also be determined according to A and/or other information.
  • the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the aforementioned integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-accessible memory.
  • based on this understanding, the essence of the technical solution of the present invention, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a memory and includes several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc., and specifically a processor in the computer device) to execute part or all of the steps of the methods in the embodiments of the present invention.
  • those of ordinary skill in the art can understand that all or part of the steps in the methods of the above embodiments can be completed by a program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium.
  • the storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically-Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disk storage, magnetic disk storage, tape storage, or any other computer-readable medium that can be used to carry or store data.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

A method and device for correcting a vehicle pose. The method includes: identifying, from the lane line dashed endpoints that form lane line dashed segments in a perception image and the lane line dashed endpoints at the corresponding position in a navigation map, target lane line dashed endpoints whose endpoint classes match (110); determining whether the lane line information of the lane line to which a target lane line dashed endpoint belongs in the perception image matches the lane line information of the lane line to which it belongs in the navigation map (120); and, if they match, then for the target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, matching the target lane line dashed segment in the perception image with the target lane line dashed segment in the navigation map, and correcting the current pose of the vehicle in the navigation map according to the matching result (130). This solves the problem of poor vehicle positioning accuracy on closed road sections such as expressways, where information such as traffic signs and street lamp poles is scarce.

Description

A method and device for correcting a vehicle pose. Technical Field
The present invention relates to the technical field of autonomous driving, and in particular to a method and device for correcting a vehicle pose.
Background
In the field of autonomous driving, navigation and positioning are crucial. In recent years, advances in technologies such as deep learning have greatly promoted the development of image semantic segmentation and image recognition, which provides a solid foundation for navigation maps and navigation positioning.
An existing vehicle positioning method typically uses a perception model obtained through deep learning to detect the images captured by a vehicle-mounted camera and extract perceptual information about the lane lines, street lamp poles, and traffic signs in the images. When an unmanned vehicle travels on a closed road section such as an expressway, the road contains little information such as traffic signs and street lamp poles. Since lane line information can only constrain and correct the vehicle's position in the lateral (left-right) direction, when the vehicle travels for a long time on a road section with only lane line information, the longitudinal (front-back) error keeps growing and the positioning accuracy is usually poor.
Summary
Embodiments of the present invention disclose a method and device for correcting a vehicle pose, which solve the problem of poor vehicle positioning accuracy on closed road sections such as expressways, where information such as traffic signs and street lamp poles is scarce.
In a first aspect, an embodiment of the present invention discloses a method for correcting a vehicle pose, the method including:
identifying, from the lane line dashed endpoints that form lane line dashed segments in a perception image and the lane line dashed endpoints at the corresponding position in a navigation map, target lane line dashed endpoints whose endpoint classes match, where the target lane line dashed endpoints include a target upper endpoint and a target lower endpoint belonging to the same lane line dashed segment, and the distance between the target upper endpoint and the vehicle is greater than the distance between the target lower endpoint and the vehicle;
determining whether the lane line information of the lane line to which a target lane line dashed endpoint belongs in the perception image matches the lane line information of the lane line to which it belongs in the navigation map, where the lane line information includes a lane line class;
if they match, then for the target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, matching the target lane line dashed segment in the perception image with the target lane line dashed segment in the navigation map, and correcting the current pose of the vehicle in the navigation map according to the matching result.
Optionally, identifying target lane line dashed endpoints whose endpoint classes match includes:
identifying the confidence values of the classes contained in a lane line dashed endpoint in the perception image;
selecting the class with the highest confidence value as the class of the lane line dashed endpoint in the perception image;
determining a lane line dashed endpoint in the perception image and the lane line dashed endpoint at the corresponding position in the navigation map whose classes are consistent as target lane line dashed endpoints.
Optionally, matching the target lane line dashed segment in the perception image with the target lane line dashed segment in the navigation map includes:
projecting the target lane line dashed segment in the navigation map onto the plane of the perception image;
determining the projection distances between each first target lane line dashed segment in the navigation map after projection and each second target lane line dashed segment in the perception image;
correspondingly, correcting the current pose of the vehicle in the navigation map according to the matching result includes:
correcting the current pose of the vehicle in the navigation map according to the projection distances.
Optionally, determining the projection distances between each first target lane line dashed segment in the navigation map after projection and each second target lane line dashed segment in the perception image includes:
matching the first target lane line dashed segments and the second target lane line dashed segments in one-to-one correspondence to form a plurality of matching combinations;
for any matching combination, calculating the sum of the projection distances between each first target lane line dashed segment and the corresponding second target lane line dashed segment in the matching combination;
selecting, from the sums of projection distances corresponding to the matching combinations, the smallest sum as the projection distance between the first target lane line dashed segments and the second target lane line dashed segments after projection.
Optionally, the vehicle pose is corrected iteratively, with the vehicle pose after each correction used as the input for the next pose correction, until the sums of projection distances corresponding to all the matching combinations reach a set threshold.
In a second aspect, an embodiment of the present invention further provides a device for correcting a vehicle pose, the device including:
a target lane line dashed endpoint identification module, configured to identify, from the lane line dashed endpoints that form lane line dashed segments in a perception image and the lane line dashed endpoints at the corresponding position in a navigation map, target lane line dashed endpoints whose endpoint classes match, where the target lane line dashed endpoints include a target upper endpoint and a target lower endpoint belonging to the same lane line dashed segment, and the distance between the target upper endpoint and the vehicle is greater than the distance between the target lower endpoint and the vehicle;
a lane line information judgment module, configured to determine whether the lane line information of the lane line to which a target lane line dashed endpoint belongs in the perception image matches the lane line information of the lane line to which it belongs in the navigation map, where the lane line information includes a lane line class;
a target lane line dashed segment matching module, configured to, if the lane line information matches, match, for the target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, the target lane line dashed segment in the perception image with the target lane line dashed segment in the navigation map;
a vehicle pose correction module, configured to correct the current pose of the vehicle in the navigation map according to the matching result.
Optionally, the target lane line dashed endpoint identification module is specifically configured to:
identify the confidence values of the classes contained in a lane line dashed endpoint in the perception image;
select the class with the highest confidence value as the class of the lane line dashed endpoint in the perception image;
determine a lane line dashed endpoint in the perception image and the lane line dashed endpoint at the corresponding position in the navigation map whose classes are consistent as target lane line dashed endpoints.
Optionally, the target lane line dashed segment matching module includes:
a projection unit, configured to, if the lane line information matches, project, for the target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, the target lane line dashed segment in the navigation map onto the plane of the perception image;
a projection distance determining unit, configured to determine the projection distances between each first target lane line dashed segment in the navigation map after projection and each second target lane line dashed segment in the perception image;
correspondingly, the vehicle pose correction module is specifically configured to:
correct the current pose of the vehicle in the navigation map according to the projection distances.
Optionally, the projection distance determining unit is specifically configured to:
match the first target lane line dashed segments and the second target lane line dashed segments in one-to-one correspondence to form a plurality of matching combinations;
for any matching combination, calculate the sum of the projection distances between each first target lane line dashed segment and the corresponding second target lane line dashed segment in the matching combination;
select, from the sums of projection distances corresponding to the matching combinations, the smallest sum as the projection distance between the first target lane line dashed segments and the second target lane line dashed segments after projection.
Optionally, the vehicle pose is corrected iteratively, with the vehicle pose after each correction used as the input for the next pose correction, until the sums of projection distances corresponding to all the matching combinations reach a set threshold.
In a third aspect, an embodiment of the present invention further provides a vehicle-mounted terminal, including:
a memory storing executable program code;
a processor coupled with the memory;
where the processor calls the executable program code stored in the memory to execute part or all of the steps of the method for correcting a vehicle pose provided by any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program, where the computer program includes instructions for executing part or all of the steps of the method for correcting a vehicle pose provided by any embodiment of the present invention.
In a fifth aspect, an embodiment of the present invention further provides a computer program product, where, when the computer program product runs on a computer, the computer is caused to execute part or all of the steps of the method for correcting a vehicle pose provided by any embodiment of the present invention.
In the technical solution provided by the embodiments of the present invention, target lane line dashed endpoints whose endpoint classes match are determined from the lane line dashed endpoints in the navigation map and the corresponding perception image. When the lane line information of the lane line to which a target lane line dashed endpoint belongs in the perception image matches that of the lane line to which it belongs in the navigation map, then for the target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, the target lane line dashed segment in the perception image is matched with the target lane line dashed segment in the navigation map, and the current pose of the vehicle in the navigation map is corrected according to the matching result. This solves the problem of large longitudinal (front-back) positioning errors of unmanned vehicles on closed road sections with only lane line information.
The inventive points of the present invention include:
1. Using lane line dashed endpoint information, accurate positioning of an unmanned vehicle is achieved on information-scarce road sections such as expressways and other closed road sections, which is one of the inventive points of the present invention.
2. By checking the class information of the lane line dashed endpoints and combining it with the information of the lane lines to which the dashed endpoints belong, the operating efficiency of the system can be improved and the rate of mismatches between the perception image and the navigation map can be reduced, which is one of the inventive points of the present invention.
3. The projection distance of each matching combination formed by the first target lane line dashed segments and the second target lane line dashed segments is used as the matching result between the target lane line dashed segments in the perception image and those in the navigation map, and this matching result can be used to correct the current position of the vehicle, so as to accurately position an unmanned vehicle on information-scarce road sections such as expressways and other closed road sections, which is one of the inventive points of the present invention.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative work.
FIG. 1 is a schematic flowchart of a vehicle positioning method according to an embodiment of the present invention;
FIG. 2a is a schematic flowchart of a vehicle positioning method according to an embodiment of the present invention;
FIG. 2b is a schematic diagram of projecting the dashed lane lines in a navigation map onto the plane of a perception image according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a device for correcting a vehicle pose according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present invention.
It should be noted that the terms "including" and "having" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product, or device.
Embodiment 1
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a vehicle positioning method according to an embodiment of the present invention. The method is mainly executed after the high-precision map has been initialized; at that point the vehicle's position has already been corrected to centimeter level, and the projections of the lane lines in the navigation map into the perception image match the lane lines in the perception image. In addition, the vehicle positioning method provided by the embodiments of the present invention is typically applied in scenarios where an unmanned vehicle travels on closed road sections, such as expressways, that lack traffic signs and street lamp poles. It can be executed by a vehicle positioning device, which can be implemented in software and/or hardware and can generally be integrated into a vehicle-mounted terminal such as an on-board computer or an industrial personal computer (IPC); the embodiments of the present invention are not limited in this respect. As shown in FIG. 1, the method provided by this embodiment specifically includes:
110. From the lane line dashed endpoints that form lane line dashed segments in the perception image and the lane line dashed endpoints at the corresponding position in the navigation map, identify target lane line dashed endpoints whose endpoint classes match.
The perception image is obtained by recognizing an image containing road information, captured by a camera, with a preset perception model. The preset perception model can be trained in advance on a large number of road sample images annotated with image semantic features. The image semantic features may include lane lines, lane line dashed endpoints, diamond markings, zebra crossings, and the like. By inputting a road image containing road information into the trained preset perception model, the image semantic features in the road image can be obtained based on the model's recognition result. The preset perception model can be obtained as follows:
a training sample set is constructed, including multiple groups of training sample data, each group including a road sample image and a corresponding road perception sample image annotated with image semantic features; an initial neural network is trained on the training sample set to obtain the preset perception model, which associates the road sample image in each group of training sample data with the corresponding road perception sample image annotated with image semantic features. The output of the model can be called the perception image.
In this embodiment, the classes of the lane line dashed endpoints that form lane line dashed segments in the perception image include general lane line dashed endpoints, intersection points between a lane line and the vehicle, lane line stop lines, zebra crossings, and so on, while the classes of the lane line dashed endpoints in the navigation map mainly refer to general lane line dashed endpoints. In this embodiment, among the lane line dashed endpoints that form lane line dashed segments in the perception image and those at the corresponding position in the navigation map, the endpoints with consistent classes are taken as the target lane line dashed endpoints. The target lane line dashed endpoints mainly refer to general lane line dashed endpoints; that is, the dashed segment to which such an endpoint belongs neither intersects the vehicle nor is a lane line stop line or a zebra crossing.
Specifically, the target lane line dashed endpoints include a target upper endpoint and a target lower endpoint. The target upper endpoint and the target lower endpoint belong to the same lane line dashed segment, and the matching performed later in this embodiment also takes each dashed segment of a dashed lane line as the matching unit. The target upper and lower endpoints are mainly defined by their distances from the vehicle: for any dashed segment, the distance between the target upper endpoint and the vehicle is greater than the distance between the target lower endpoint and the vehicle. In this embodiment, matching classes means that the classes of the target upper endpoint and the target lower endpoint in the perception image both match the classes of the target upper endpoint and the target lower endpoint at the corresponding position in the navigation map.
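The distance-based upper/lower endpoint convention described above can be sketched as a small helper. This is an illustrative sketch, not part of the patent: the function name and the 2D vehicle-frame coordinates are assumptions.

```python
import math

def order_endpoints(segment, vehicle_xy):
    """Return (lower, upper) for a dashed-segment's two endpoints:
    the upper endpoint is the one farther from the vehicle."""
    a, b = segment
    return (a, b) if math.dist(a, vehicle_xy) <= math.dist(b, vehicle_xy) else (b, a)

# a dashed segment ahead of the vehicle at the origin
lower, upper = order_endpoints(((0.0, 5.0), (0.0, 20.0)), (0.0, 0.0))
```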
120. Determine whether the lane line information of the lane line to which a target lane line dashed endpoint belongs in the perception image matches the lane line information of the lane line to which it belongs in the navigation map; if they match, execute step 130; otherwise, execute step 140.
The lane line information includes the lane line class, attributes, and so on. Lane line classes include dashed lines, solid lines, diamond markings, and the like. Confirming the lane line information avoids lane line misalignment when the vehicle drives into a tunnel or another situation where GPS positioning signals cannot be received or positioning is otherwise abnormal. This arrangement can improve the computational efficiency of the system and reduce the number of mismatched dashed endpoints.
130. For the target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, match the target lane line dashed segment in the perception image with the target lane line dashed segment in the navigation map, and correct the current pose of the vehicle in the navigation map according to the matching result.
Exemplarily, matching the target lane line dashed segment in the perception image with the target lane line dashed segment in the navigation map can be done by projecting the target lane line dashed segment in the navigation map onto the plane of the perception image and determining, on that plane, the projection distance between the projected target dashed segment and the target dashed segment in the perception image. Since the projection distance reflects, to a certain extent, the error between the vehicle's true pose and its current pose, the current pose of the vehicle can be corrected by correcting the projection distance. This arrangement solves the problem of large longitudinal positioning errors of unmanned vehicles on closed road sections with only lane line information. Moreover, matching with lane line dashed segments can effectively reduce the number of mismatched upper and lower endpoints.
140. Re-acquire the current pose of the vehicle in the navigation map.
In the technical solution provided by this embodiment, target lane line dashed endpoints whose endpoint classes match are determined from the lane line dashed endpoints in the navigation map and the corresponding perception image. When the class of the lane line to which a target lane line dashed endpoint belongs in the perception image matches that of the lane line to which it belongs in the navigation map, then for the target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, the target lane line dashed segment in the perception image is matched with the target lane line dashed segment in the navigation map, and the current pose of the vehicle in the navigation map is corrected according to the matching result. This solves the problem of large longitudinal positioning errors of unmanned vehicles on closed road sections with only lane line information and achieves accurate positioning of the vehicle.
Embodiment 2
Referring to FIG. 2a, FIG. 2a is a schematic flowchart of a vehicle positioning method according to an embodiment of the present invention. On the basis of the above embodiment, this embodiment optimizes the process of identifying target lane line dashed endpoints whose endpoint classes match, and the process of matching the target lane line dashed segment in the perception image with the target lane line dashed segment in the navigation map. As shown in FIG. 2a, the method includes:
210. From the lane line dashed endpoints that form lane line dashed segments in the perception image and the lane line dashed endpoints at the corresponding position in the navigation map, identify the confidence values of the classes contained in a lane line dashed endpoint in the perception image, and select the class with the highest confidence value as the class of that lane line dashed endpoint.
Exemplarily, if the confidence values of the classes contained in a lane line dashed endpoint in the perception image are determined to be 90% for a general lane line dashed endpoint, 20% for a zebra crossing, and 5% for a lane line stop line, then the general lane line dashed endpoint is taken as the class of that lane line dashed endpoint in the perception image.
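The confidence-based class selection in this step amounts to an argmax over the per-class confidence values; a minimal sketch (the class names are illustrative, not the patent's own labels):

```python
def classify_endpoint(confidences):
    """Select the endpoint class with the highest confidence value."""
    return max(confidences, key=confidences.get)

# confidence values from the example in the text (class names are illustrative)
scores = {"general_dashed_endpoint": 0.90, "zebra_crossing": 0.20, "stop_line": 0.05}
label = classify_endpoint(scores)
```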
220. Determine a lane line dashed endpoint in the perception image and the lane line dashed endpoint at the corresponding position in the navigation map whose classes are consistent as target lane line dashed endpoints.
Exemplarily, a lane line dashed endpoint in the perception image whose class is inconsistent with that of the lane line dashed endpoint in the navigation map is filtered out and is not used as an endpoint for correcting the vehicle pose.
230. If the class of the lane line to which a target lane line dashed endpoint belongs in the perception image matches the class of the lane line to which it belongs in the navigation map, then for the target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, project the target lane line dashed segment in the navigation map onto the plane of the perception image.
Specifically, FIG. 2b is a schematic diagram of projecting the dashed lane lines in the navigation map onto the plane of the perception image according to an embodiment of the present invention. As shown in FIG. 2b, 1 denotes an upper endpoint of a dashed lane line in the perception image; 2 denotes the projected upper endpoint of the dashed lane line from the navigation map; 3 denotes a lower endpoint of the dashed lane line in the perception image; 4 denotes the projected lower endpoint of the dashed lane line from the navigation map; 5 denotes a lane line dashed segment in the perception image; 6 denotes the projected lane line dashed segment from the navigation map; 7 denotes a dashed lane line in the perception image; and 8 denotes the projected dashed lane line from the navigation map. Since the navigation map has already been initialized during the vehicle positioning process of the embodiments of the present invention and can provide centimeter-level positions, the projection distance between each type of lane line projected from the navigation map into the perception image and the lane line at the corresponding position in the perception image meets a preset distance requirement; for example, after projection, 5 and 6 in FIG. 2b correspond in position. In addition, since 1 and 2 are respectively the upper endpoints of the corresponding lane line dashed segment in the perception image and the navigation map, and 3 and 4 are respectively the lower endpoints of that segment, in the endpoint matching process the upper endpoint 1 in the perception image is matched with the upper endpoint 2 in the navigation map, and the lower endpoint 3 in the perception image is matched with the lower endpoint 4 in the navigation map.
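The projection step can be sketched with a standard pinhole camera model; the intrinsics `K` and the map-to-camera rotation `R` and translation `t` used here are assumptions for illustration, since the patent does not specify the camera model:

```python
import numpy as np

def project_segment(segment_map, R, t, K):
    """Project a dashed-segment (two 3D endpoints in the map frame) onto the
    image plane: transform into the camera frame, apply the intrinsics K, and
    perform the perspective division."""
    out = []
    for p in segment_map:
        pc = R @ np.asarray(p, dtype=float) + t   # map frame -> camera frame
        uvw = K @ pc                              # homogeneous pixel coordinates
        out.append(uvw[:2] / uvw[2])              # perspective division
    return np.array(out)

# toy calibration: focal length 100 px, principal point at the image origin
K = np.array([[100.0, 0.0, 0.0], [0.0, 100.0, 0.0], [0.0, 0.0, 1.0]])
uv = project_segment([(0.0, 0.0, 2.0), (1.0, 0.0, 2.0)], np.eye(3), np.zeros(3), K)
```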
As shown in FIG. 2b, in this embodiment, 1 and 2, and 3 and 4, are target lane line dashed endpoints with matching classes, and the lane line to which 1 and 3 belong also matches in class the lane line to which 2 and 4 belong: both are dashed lane lines. When correcting the vehicle pose, if matching were performed on the individual upper and lower endpoints, their large number would not only make mismatches likely but also increase the amount of computation. To effectively reduce the number of mismatched upper and lower endpoints, this embodiment matches the lane line dashed segments formed by the upper and lower endpoints; that is, the lane line dashed segment is taken as the basic unit to be matched. For example, 5 and 6 in FIG. 2b are matched, and the vehicle pose is corrected by calculating the projection error between 5 and 6 in the plane of the perception image.
Further, since each dashed lane line is formed by combining multiple lane line dashed segments, the projection errors of all matched dashed segments can be considered comprehensively to correct the vehicle pose.
240. Determine the projection distances between each first target lane line dashed segment in the navigation map after projection and each second target lane line dashed segment in the perception image.
Exemplarily, in this embodiment, determining the projection distances between each first target lane line dashed segment in the navigation map after projection and each second target lane line dashed segment in the perception image includes:
matching the first target lane line dashed segments and the second target lane line dashed segments in one-to-one correspondence to form a plurality of matching combinations; for any matching combination, calculating the sum of the projection distances between each first target lane line dashed segment and the corresponding second target lane line dashed segment in the combination; and selecting, from the sums of projection distances corresponding to the matching combinations, the smallest sum as the projection distance between the first target lane line dashed segments and the second target lane line dashed segments after projection.
Specifically, if the dashed lane line in the navigation map consists of three first target lane line dashed segments A, B, and C, and the dashed lane line at the corresponding position in the perception image consists of three second target lane line dashed segments 1, 2, and 3, then when calculating the projection distance, the target lane line dashed segments in the navigation map and the perception image are first formed into matching combinations. In each matching combination there is a one-to-one correspondence between the first and second target lane line dashed segments; for example, A-1, B-2, C-3 is one matching combination, and A-1, B-3, C-2 is another. For each matching combination, the sum of the projection distances between each first target lane line dashed segment and the corresponding second target lane line dashed segment is calculated, and the smallest sum among the matching combinations is selected as the projection distance between the first and second target lane line dashed segments after projection, that is, as the matching result between the target lane line dashed segments in the perception image and those in the navigation map.
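The exhaustive one-to-one matching in the A/B/C example above can be sketched by enumerating permutations and keeping the combination with the smallest sum of projection distances; for the handful of dashed segments visible at once, brute force is adequate (the distance values below are illustrative):

```python
from itertools import permutations

def best_matching(dist):
    """dist[i][j]: projection distance between first-segment i and second-segment j.
    Enumerate every one-to-one matching combination; return the assignment with
    the smallest sum of projection distances, and that sum."""
    n = len(dist)
    total, assignment = min(
        (sum(dist[i][p[i]] for i in range(n)), p) for p in permutations(range(n))
    )
    return assignment, total

# three map segments (A, B, C) against three perceived segments (1, 2, 3)
d = [[1.0, 5.0, 9.0],
     [5.0, 1.0, 5.0],
     [9.0, 5.0, 2.0]]
assignment, total = best_matching(d)
```

For larger numbers of segments, the same minimum-sum assignment could be computed without enumeration (e.g. with a Hungarian-algorithm solver), but the patent's example only involves a few segments per dashed line.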
250. Correct the current pose of the vehicle in the navigation map according to the projection distances.
In the matching result of step 240, the projection distance between an upper or lower endpoint in the perception image and the corresponding endpoint in the navigation map, that is, the projection error, is caused by the inaccurate estimate of the current vehicle pose; in other words, the projection error reflects the magnitude of the current pose error. Therefore, the projection error can be used to correct the current pose. During correction, the pose is optimized iteratively: the vehicle pose after each correction is used as the input for the next pose correction, until the sums of projection distances corresponding to all the matching combinations reach a set threshold, that is, until the sum of the projection errors of all matched dashed segments reaches the empirical minimum.
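The iterative correction loop can be sketched generically. The patent does not name an optimizer, so a simple numerical-gradient descent over the pose parameters is assumed here, with `residual_sum` standing in for the summed projection distances of the matched dashed segments:

```python
def refine_pose(pose, residual_sum, step=0.1, iters=200, threshold=1e-3):
    """Iteratively correct the pose: each corrected pose is the input to the
    next iteration, until the summed projection error falls below the threshold."""
    pose = list(pose)
    for _ in range(iters):
        err = residual_sum(pose)
        if err < threshold:
            break
        eps = 1e-6
        grad = []
        for k in range(len(pose)):               # numerical gradient per parameter
            bumped = list(pose)
            bumped[k] += eps
            grad.append((residual_sum(bumped) - err) / eps)
        pose = [p - step * g for p, g in zip(pose, grad)]
    return pose

# toy residual with its minimum at pose (1, -2); the loop should drive it near zero
corrected = refine_pose([0.0, 0.0], lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2)
```

In practice a vehicle pose would carry at least (x, y, yaw) and the residual would be the reprojection error of the matched segments; the quadratic toy residual only demonstrates the feed-forward iteration structure.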
On the basis of the above embodiments, this embodiment is optimized in that the projection distances of the matching combinations formed by the first target lane line dashed segments and the second target lane line dashed segments are used as the matching result between the target lane line dashed segments in the perception image and those in the navigation map, and this matching result can be used to correct the vehicle's current position, so as to accurately position an unmanned vehicle on information-scarce road sections such as expressways and other closed road sections.
Embodiment 3
Referring to FIG. 3, FIG. 3 is a schematic structural diagram of a device for correcting a vehicle pose according to an embodiment of the present invention. As shown in FIG. 3, the device includes a target lane line dashed endpoint identification module 310, a lane line information judgment module 320, a target lane line dashed segment matching module 330, and a vehicle pose correction module 340, where:
the target lane line dashed endpoint identification module 310 is configured to identify, from the lane line dashed endpoints that form lane line dashed segments in a perception image and the lane line dashed endpoints at the corresponding position in a navigation map, target lane line dashed endpoints whose endpoint classes match, where the target lane line dashed endpoints include a target upper endpoint and a target lower endpoint belonging to the same lane line dashed segment, and the distance between the target upper endpoint and the vehicle is greater than the distance between the target lower endpoint and the vehicle;
the lane line information judgment module 320 is configured to determine whether the lane line information of the lane line to which a target lane line dashed endpoint belongs in the perception image matches the lane line information of the lane line to which it belongs in the navigation map, where the lane line information includes a lane line class;
the target lane line dashed segment matching module 330 is configured to, if the lane line information matches, match, for the target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, the target lane line dashed segment in the perception image with the target lane line dashed segment in the navigation map;
the vehicle pose correction module 340 is configured to correct the current pose of the vehicle in the navigation map according to the matching result.
In the technical solution provided by this embodiment, target lane line dashed endpoints whose endpoint classes match are determined from the lane line dashed endpoints in the navigation map and the corresponding perception image. When the class of the lane line to which a target lane line dashed endpoint belongs in the perception image matches that of the lane line to which it belongs in the navigation map, then for the target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, the target lane line dashed segment in the perception image is matched with the target lane line dashed segment in the navigation map, and the current pose of the vehicle in the navigation map is corrected according to the matching result. This solves the problem of large longitudinal positioning errors of unmanned vehicles on closed road sections with only lane line information.
Optionally, the target lane line dashed endpoint identification module is specifically configured to:
identify the confidence values of the classes contained in a lane line dashed endpoint in the perception image;
select the class with the highest confidence value as the class of the lane line dashed endpoint in the perception image;
determine a lane line dashed endpoint in the perception image and the lane line dashed endpoint at the corresponding position in the navigation map whose classes are consistent as target lane line dashed endpoints.
Optionally, the target lane line dashed segment matching module includes:
a projection unit, configured to, if the lane line information matches, project, for the target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, the target lane line dashed segment in the navigation map onto the plane of the perception image;
a projection distance determining unit, configured to determine the projection distances between each first target lane line dashed segment in the navigation map after projection and each second target lane line dashed segment in the perception image;
correspondingly, the vehicle pose correction module is specifically configured to:
correct the current pose of the vehicle in the navigation map according to the projection distances.
Optionally, the projection distance determining unit is specifically configured to:
match the first target lane line dashed segments and the second target lane line dashed segments in one-to-one correspondence to form a plurality of matching combinations;
for any matching combination, calculate the sum of the projection distances between each first target lane line dashed segment and the corresponding second target lane line dashed segment in the matching combination;
select, from the sums of projection distances corresponding to the matching combinations, the smallest sum as the projection distance between the first target lane line dashed segments and the second target lane line dashed segments after projection.
Optionally, the vehicle pose is corrected iteratively, with the vehicle pose after each correction used as the input for the next pose correction, until the sums of projection distances corresponding to all the matching combinations reach a set threshold.
The device for correcting a vehicle pose provided by the embodiments of the present invention can execute the method for correcting a vehicle pose provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method. For technical details not described exhaustively in the above embodiments, refer to the method for correcting a vehicle pose provided by any embodiment of the present invention.
Embodiment 4
Referring to FIG. 4, FIG. 4 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention. As shown in FIG. 4, the vehicle-mounted terminal may include:
a memory 701 storing executable program code;
a processor 702 coupled with the memory 701;
where the processor 702 calls the executable program code stored in the memory 701 to execute the method for correcting a vehicle pose provided by any embodiment of the present invention.
An embodiment of the present invention discloses a computer-readable storage medium storing a computer program, where the computer program causes a computer to execute the method for correcting a vehicle pose provided by any embodiment of the present invention.
An embodiment of the present invention discloses a computer program product, where, when the computer program product runs on a computer, the computer is caused to execute part or all of the steps of the method for correcting a vehicle pose provided by any embodiment of the present invention.
In the various embodiments of the present invention, it should be understood that the sequence numbers of the above processes do not imply a necessary order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
In the embodiments provided by the present invention, it should be understood that "B corresponding to A" means that B is associated with A and that B can be determined according to A. However, it should also be understood that determining B according to A does not mean determining B only according to A; B can also be determined according to A and/or other information.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
If the above integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-accessible memory. Based on this understanding, the essence of the technical solution of the present invention, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc., and specifically a processor in the computer device) to execute part or all of the steps of the above methods in the embodiments of the present invention.
Those of ordinary skill in the art can understand that all or part of the steps in the methods of the above embodiments can be completed by a program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium. The storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically-Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disk storage, magnetic disk storage, tape storage, or any other computer-readable medium that can be used to carry or store data.
The method and device for correcting a vehicle pose disclosed by the embodiments of the present invention have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

  1. A method for correcting a vehicle pose, characterized by including:
    identifying, from the lane line dashed endpoints that form lane line dashed segments in a perception image and the lane line dashed endpoints at the corresponding position in a navigation map, target lane line dashed endpoints whose endpoint classes match, the target lane line dashed endpoints including a target upper endpoint and a target lower endpoint belonging to the same lane line dashed segment, wherein the distance between the target upper endpoint and the vehicle is greater than the distance between the target lower endpoint and the vehicle;
    determining whether the lane line information of the lane line to which a target lane line dashed endpoint belongs in the perception image matches the lane line information of the lane line to which it belongs in the navigation map, the lane line information including a lane line class;
    if they match, then for the target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, matching the target lane line dashed segment in the perception image with the target lane line dashed segment in the navigation map, and correcting the current pose of the vehicle in the navigation map according to the matching result.
  2. The method according to claim 1, characterized in that identifying target lane line dashed endpoints whose endpoint classes match includes:
    identifying the confidence values of the classes contained in a lane line dashed endpoint in the perception image;
    selecting the class with the highest confidence value as the class of the lane line dashed endpoint in the perception image;
    determining a lane line dashed endpoint in the perception image and the lane line dashed endpoint at the corresponding position in the navigation map whose classes are consistent as target lane line dashed endpoints.
  3. The method according to claim 1 or 2, characterized in that matching the target lane line dashed segment in the perception image with the target lane line dashed segment in the navigation map includes:
    projecting the target lane line dashed segment in the navigation map onto the plane of the perception image;
    determining the projection distances between each first target lane line dashed segment in the navigation map after projection and each second target lane line dashed segment in the perception image;
    correspondingly, correcting the current pose of the vehicle in the navigation map according to the matching result includes:
    correcting the current pose of the vehicle in the navigation map according to the projection distances.
  4. The method according to any one of claims 1 to 3, characterized in that determining the projection distances between each first target lane line dashed segment in the navigation map after projection and each second target lane line dashed segment in the perception image includes:
    matching the first target lane line dashed segments and the second target lane line dashed segments in one-to-one correspondence to form a plurality of matching combinations;
    for any matching combination, calculating the sum of the projection distances between each first target lane line dashed segment and the corresponding second target lane line dashed segment in the matching combination;
    selecting, from the sums of projection distances corresponding to the matching combinations, the smallest sum as the projection distance between the first target lane line dashed segments and the second target lane line dashed segments after projection.
  5. The method according to claim 4, characterized in that
    the vehicle pose is corrected iteratively, with the vehicle pose after each correction used as the input for the next pose correction, until the sums of projection distances corresponding to all the matching combinations reach a set threshold.
  6. A device for correcting a vehicle pose, characterized by including:
    a target lane line dashed endpoint identification module, configured to identify, from the lane line dashed endpoints that form lane line dashed segments in a perception image and the lane line dashed endpoints at the corresponding position in a navigation map, target lane line dashed endpoints whose endpoint classes match, the target lane line dashed endpoints including a target upper endpoint and a target lower endpoint belonging to the same lane line dashed segment, wherein the distance between the target upper endpoint and the vehicle is greater than the distance between the target lower endpoint and the vehicle;
    a lane line information judgment module, configured to determine whether the lane line information of the lane line to which a target lane line dashed endpoint belongs in the perception image matches the lane line information of the lane line to which it belongs in the navigation map, the lane line information including a lane line class;
    a target lane line dashed segment matching module, configured to, if the lane line information matches, match, for the target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, the target lane line dashed segment in the perception image with the target lane line dashed segment in the navigation map;
    a vehicle pose correction module, configured to correct the current pose of the vehicle in the navigation map according to the matching result.
  7. The device according to claim 6, characterized in that the target lane line dashed endpoint identification module is specifically configured to:
    identify the confidence values of the classes contained in a lane line dashed endpoint in the perception image;
    select the class with the highest confidence value as the class of the lane line dashed endpoint in the perception image;
    determine a lane line dashed endpoint in the perception image and the lane line dashed endpoint at the corresponding position in the navigation map whose classes are consistent as target lane line dashed endpoints.
  8. The device according to claim 6 or 7, characterized in that the target lane line dashed segment matching module includes:
    a projection unit, configured to, if the lane line information matches, project, for the target lane line dashed segment formed by a target upper endpoint and the corresponding target lower endpoint, the target lane line dashed segment in the navigation map onto the plane of the perception image;
    a projection distance determining unit, configured to determine the projection distances between each first target lane line dashed segment in the navigation map after projection and each second target lane line dashed segment in the perception image;
    correspondingly, the vehicle pose correction module is specifically configured to:
    correct the current pose of the vehicle in the navigation map according to the projection distances.
  9. The device according to claim 8, characterized in that the projection distance determining unit is specifically configured to:
    match the first target lane line dashed segments and the second target lane line dashed segments in one-to-one correspondence to form a plurality of matching combinations;
    for any matching combination, calculate the sum of the projection distances between each first target lane line dashed segment and the corresponding second target lane line dashed segment in the matching combination;
    select, from the sums of projection distances corresponding to the matching combinations, the smallest sum as the projection distance between the first target lane line dashed segments and the second target lane line dashed segments after projection.
  10. The device according to claim 9, characterized in that
    the vehicle pose is corrected iteratively, with the vehicle pose after each correction used as the input for the next pose correction, until the sums of projection distances corresponding to all the matching combinations reach a set threshold.
PCT/CN2019/113483 2019-03-28 2019-10-26 Method and device for correcting vehicle pose WO2020192105A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910243966.0A CN111750878B (zh) 2019-03-28 2019-03-28 一种车辆位姿的修正方法和装置
CN201910243966.0 2019-03-28

Publications (1)

Publication Number Publication Date
WO2020192105A1 true WO2020192105A1 (zh) 2020-10-01

Family

ID=72608881

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/113483 WO2020192105A1 (zh) 2019-03-28 2019-10-26 一种车辆位姿的修正方法和装置

Country Status (2)

Country Link
CN (1) CN111750878B (zh)
WO (1) WO2020192105A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112379330A (zh) * 2020-11-27 2021-02-19 浙江同善人工智能技术有限公司 一种多机器人协同的3d声源识别定位方法
CN114543819A (zh) * 2021-09-16 2022-05-27 北京小米移动软件有限公司 车辆定位方法、装置、电子设备及存储介质
CN115098606A (zh) * 2022-05-30 2022-09-23 九识智行(北京)科技有限公司 无人驾驶车辆的红绿灯查询方法、装置、存储介质及设备
WO2023005384A1 (zh) * 2021-07-29 2023-02-02 北京旷视科技有限公司 可移动设备的重定位方法及装置
CN117490728A (zh) * 2023-12-28 2024-02-02 合众新能源汽车股份有限公司 车道线定位故障诊断方法和系统

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112902987B (zh) * 2021-02-02 2022-07-15 北京三快在线科技有限公司 一种位姿修正的方法及装置
CN113791435B (zh) * 2021-11-18 2022-04-05 智道网联科技(北京)有限公司 Gnss信号异常值的检测方法、装置及电子设备、存储介质
CN114034307B (zh) * 2021-11-19 2024-04-16 智道网联科技(北京)有限公司 基于车道线的车辆位姿校准方法、装置和电子设备
CN114136327B (zh) * 2021-11-22 2023-08-01 武汉中海庭数据技术有限公司 一种虚线段的查全率的自动化检查方法及系统
CN115203352B (zh) * 2022-09-13 2022-11-29 腾讯科技(深圳)有限公司 车道级定位方法、装置、计算机设备和存储介质
CN117330097B (zh) * 2023-12-01 2024-05-10 深圳元戎启行科技有限公司 车辆定位优化方法、装置、设备及存储介质
CN117723070A (zh) * 2024-02-06 2024-03-19 合众新能源汽车股份有限公司 地图匹配初值的确定方法及装置、电子设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663356A (zh) * 2012-03-28 2012-09-12 柳州博实唯汽车科技有限公司 车道线提取及偏离预警方法
CN103632140A (zh) * 2013-11-27 2014-03-12 智慧城市系统服务(中国)有限公司 一种车道线检测方法及装置
US20160012589A1 (en) * 2014-07-11 2016-01-14 Agt International Gmbh Automatic spatial calibration of camera network
CN107679520A (zh) * 2017-10-30 2018-02-09 湖南大学 一种适用于复杂条件下的车道线视觉检测方法
CN108413971A (zh) * 2017-12-29 2018-08-17 驭势科技(北京)有限公司 基于车道线的车辆定位技术及应用
CN108981741A (zh) * 2018-08-23 2018-12-11 武汉中海庭数据技术有限公司 基于高精度地图的路径规划装置及方法

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6061628A (en) * 1996-04-24 2000-05-09 Aisin Aw Co., Ltd. Navigation system for vehicles
US9341485B1 (en) * 2003-06-19 2016-05-17 Here Global B.V. Method and apparatus for representing road intersections
JP4377284B2 (ja) * 2004-06-02 2009-12-02 株式会社ザナヴィ・インフォマティクス 車載ナビゲーション装置
US7539574B2 (en) * 2005-03-22 2009-05-26 Denso Corporation Vehicular navigation system
KR20070091471A (ko) * 2006-03-06 2007-09-11 주식회사 현대오토넷 네비게이션 시스템의 교차점 인식방법
JP5305598B2 (ja) * 2007-02-13 2013-10-02 アイシン・エィ・ダブリュ株式会社 ナビゲーション装置
JP4886597B2 (ja) * 2007-05-25 2012-02-29 アイシン・エィ・ダブリュ株式会社 レーン判定装置及びレーン判定方法、並びにそれを用いたナビゲーション装置
JP4950858B2 (ja) * 2007-11-29 2012-06-13 アイシン・エィ・ダブリュ株式会社 画像認識装置及び画像認識プログラム
CN102150015B (zh) * 2008-10-17 2013-09-25 三菱电机株式会社 导航装置
CN102183259B (zh) * 2011-03-17 2014-07-23 武汉光庭信息技术有限公司 基于电子地图道路特征识别的导航方法
KR20130003308A (ko) * 2011-06-30 2013-01-09 충북대학교 산학협력단 차량의 차선 인식 방법
US9389088B2 (en) * 2011-12-12 2016-07-12 Google Inc. Method of pre-fetching map data for rendering and offline routing
JP5733195B2 (ja) * 2011-12-21 2015-06-10 アイシン・エィ・ダブリュ株式会社 レーン案内表示システム、方法およびプログラム
CN103954275B (zh) * 2014-04-01 2017-02-08 西安交通大学 基于车道线检测和gis地图信息开发的视觉导航方法
CN105021201B (zh) * 2015-08-17 2017-12-01 武汉光庭信息技术有限公司 利用交通标示牌的坐标反推汽车自身位置的系统及方法
CN105783936B (zh) * 2016-03-08 2019-09-24 武汉中海庭数据技术有限公司 用于自动驾驶中的道路标识制图及车辆定位方法及系统
CN105788274B (zh) * 2016-05-18 2018-03-27 武汉大学 基于时空轨迹大数据的城市交叉口车道级结构提取方法
CN106092121B (zh) * 2016-05-27 2017-11-24 百度在线网络技术(北京)有限公司 车辆导航方法和装置
CN107643086B (zh) * 2016-07-22 2021-04-13 北京四维图新科技股份有限公司 一种车辆定位方法、装置及系统
CN106525057A (zh) * 2016-10-26 2017-03-22 陈曦 高精度道路地图的生成系统
CN108303103B (zh) * 2017-02-07 2020-02-07 腾讯科技(深圳)有限公司 目标车道的确定方法和装置
CN108052880B (zh) * 2017-11-29 2021-09-28 南京大学 交通监控场景虚实车道线检测方法
CN108090456B (zh) * 2017-12-27 2020-06-19 北京初速度科技有限公司 识别车道线模型的训练方法、车道线识别方法及装置
CN108318043B (zh) * 2017-12-29 2020-07-31 百度在线网络技术(北京)有限公司 用于更新电子地图的方法、装置和计算机可读存储介质
CN108917778B (zh) * 2018-05-11 2020-11-03 广州海格星航信息科技有限公司 导航提示方法、导航设备及存储介质
CN108830159A (zh) * 2018-05-17 2018-11-16 武汉理工大学 一种前方车辆单目视觉测距系统及方法
CN108680177B (zh) * 2018-05-31 2021-11-09 安徽工程大学 基于啮齿类动物模型的同步定位与地图构建方法及装置
CN109165549B (zh) * 2018-07-09 2021-03-19 厦门大学 基于三维点云数据的道路标识获取方法、终端设备及装置
CN109059940A (zh) * 2018-09-11 2018-12-21 北京测科空间信息技术有限公司 一种用于无人驾驶车辆导航制导的方法及系统
CN109460739A (zh) * 2018-11-13 2019-03-12 广州小鹏汽车科技有限公司 车道线检测方法及装置


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112379330A (zh) * 2020-11-27 2021-02-19 浙江同善人工智能技术有限公司 一种多机器人协同的3d声源识别定位方法
WO2023005384A1 (zh) * 2021-07-29 2023-02-02 北京旷视科技有限公司 可移动设备的重定位方法及装置
CN114543819A (zh) * 2021-09-16 2022-05-27 北京小米移动软件有限公司 车辆定位方法、装置、电子设备及存储介质
CN114543819B (zh) * 2021-09-16 2024-03-26 北京小米移动软件有限公司 车辆定位方法、装置、电子设备及存储介质
CN115098606A (zh) * 2022-05-30 2022-09-23 九识智行(北京)科技有限公司 无人驾驶车辆的红绿灯查询方法、装置、存储介质及设备
CN115098606B (zh) * 2022-05-30 2023-06-16 九识智行(北京)科技有限公司 无人驾驶车辆的红绿灯查询方法、装置、存储介质及设备
CN117490728A (zh) * 2023-12-28 2024-02-02 合众新能源汽车股份有限公司 车道线定位故障诊断方法和系统
CN117490728B (zh) * 2023-12-28 2024-04-02 合众新能源汽车股份有限公司 车道线定位故障诊断方法和系统

Also Published As

Publication number Publication date
CN111750878A (zh) 2020-10-09
CN111750878B (zh) 2022-06-24

Similar Documents

Publication Publication Date Title
WO2020192105A1 (zh) 一种车辆位姿的修正方法和装置
CN110954113B (zh) 一种车辆位姿的修正方法和装置
WO2020199565A1 (zh) 一种基于路灯杆的车辆位姿的修正方法和装置
WO2020199564A1 (zh) 一种导航地图在初始化时车辆位姿的修正方法和装置
CN110163176B (zh) 车道线变化位置识别方法、装置、设备和介质
US9043138B2 (en) System and method for automated updating of map information
US11415993B2 (en) Method and apparatus for processing driving reference line, and vehicle
US20210333124A1 (en) Method and system for detecting changes in road-layout information
CN105225510A (zh) 用于验证地图的路网的方法和系统
CN113033029A (zh) 自动驾驶仿真方法、装置、电子设备及存储介质
WO2020220616A1 (zh) 一种车辆位姿的修正方法和装置
WO2021208110A1 (zh) 车道线识别异常事件确定方法、车道线识别装置及系统
CN110363735B (zh) 一种车联网图像数据融合方法及相关装置
CN113008260A (zh) 一种导航信息处理方法、装置、电子设备及存储介质
US20230018996A1 (en) Method, device, and computer program for providing driving guide by using vehicle position information and signal light information
CN114754778A (zh) 一种车辆定位方法以及装置、电子设备、存储介质
CN112923938B (zh) 一种地图优化方法、装置、存储介质及系统
CN111605481A (zh) 基于环视的拥堵跟车系统和终端
WO2024040499A1 (zh) 高精度导航路径的确定方法、装置、设备、介质及车辆
US20230154203A1 (en) Path planning method and system using the same
CN110539748A (zh) 基于环视的拥堵跟车系统和终端
CN114111817B (zh) 基于slam地图与高精度地图匹配的车辆定位方法及系统
CN112507857B (zh) 一种车道线更新方法、装置、设备及存储介质
CN113469045A (zh) 无人集卡的视觉定位方法、系统、电子设备和存储介质
CN115249407A (zh) 指示灯状态识别方法、装置、电子设备、存储介质及产品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19920870

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19920870

Country of ref document: EP

Kind code of ref document: A1