CN111750878A - Vehicle pose correction method and device

Info

Publication number
CN111750878A
Authority
CN
China
Prior art keywords
lane line
line
target
target lane
end point
Prior art date
Legal status
Granted
Application number
CN201910243966.0A
Other languages
Chinese (zh)
Other versions
CN111750878B
Inventor
侯政华
杜志颖
管守奎
Current Assignee
Beijing Momenta Technology Co ltd
Original Assignee
Beijing Chusudu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Chusudu Technology Co ltd
Priority to CN201910243966.0A
Priority to PCT/CN2019/113483 (published as WO2020192105A1)
Publication of CN111750878A
Application granted
Publication of CN111750878B
Legal status: Active (granted)

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation, specially adapted for navigation in a road network
    • G01C21/34: Route searching; Route guidance
    • G01C21/3446: Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes

Abstract

The embodiment of the invention discloses a method and a device for correcting the pose of a vehicle. The method comprises the following steps: identifying, from the lane line dotted line endpoints that form the lane line dotted line segments of the perception image and the lane line dotted line endpoints at the corresponding positions of the navigation map, target lane line dotted line endpoints whose endpoint categories match; judging whether the lane line information of the lane line to which a target lane line dotted line endpoint belongs in the perception image matches the lane line information of the lane line to which it belongs in the navigation map; and if the lane line information matches, matching, for each target lane line dotted line segment composed of a target upper endpoint and the corresponding target lower endpoint, the target dotted line segment in the perception image with the target dotted line segment in the navigation map, and correcting the current pose of the vehicle in the navigation map according to the matching result. The technical scheme solves the problem of poor vehicle positioning accuracy caused by the scarcity of traffic signs, light poles, and similar landmarks on closed road sections such as expressways.

Description

Vehicle pose correction method and device
Technical Field
The invention relates to the technical field of automatic driving, in particular to a method and a device for correcting a vehicle pose.
Background
In the field of automatic driving, navigation and positioning are of great importance. In recent years, advances in technologies such as deep learning have greatly promoted the fields of image semantic segmentation and image recognition, providing a solid foundation for navigation maps and navigation positioning.
Existing vehicle positioning methods generally detect images captured by a vehicle-mounted camera with a perception model obtained by deep learning, and extract perception information about lane lines, light poles, and traffic signs in the images. When an unmanned vehicle travels on a closed road section such as an expressway, the road contains little information such as traffic signs and light poles. Since lane line information can only constrain and correct the position of the vehicle in the left-right direction, when the vehicle travels for a long time on a road section with only lane line information, its error in the front-rear direction keeps growing, and positioning accuracy is generally poor.
Disclosure of Invention
The embodiment of the invention discloses a method and a device for correcting the pose of a vehicle, which solve the problem of poor vehicle positioning accuracy on closed road sections, such as expressways, that contain little information such as traffic signs and light poles.
In a first aspect, an embodiment of the invention discloses a method for correcting a pose of a vehicle, which includes:
identifying, from the lane line dotted line endpoints that form the lane line dotted line segments of the perception image and the lane line dotted line endpoints at the corresponding positions of the navigation map, target lane line dotted line endpoints whose endpoint categories match, wherein the target lane line dotted line endpoints comprise a target upper endpoint and a target lower endpoint belonging to the same lane line dotted line segment, and the distance between the target upper endpoint and the vehicle is greater than the distance between the target lower endpoint and the vehicle;
judging whether the lane line information of the lane line to which a target lane line dotted line endpoint belongs in the perception image matches the lane line information of the lane line to which it belongs in the navigation map, the lane line information comprising the lane line category;
and if the lane line information matches, matching, for each target lane line dotted line segment composed of a target upper endpoint and the corresponding target lower endpoint, the target dotted line segment in the perception image with the target dotted line segment in the navigation map, and correcting the current pose of the vehicle in the navigation map according to the matching result.
Optionally, the identifying the target lane line dotted end point with the end point category matching includes:
identifying confidence values of the categories contained in a lane line dotted line endpoint in the perception image;
selecting the category with the highest confidence value as the category of the lane line dotted line endpoint in the perception image;
and determining the lane line dotted line endpoint in the perception image and the lane line dotted line endpoint at the corresponding position in the navigation map that have the same category as the target lane line dotted line endpoints.
Optionally, the matching of the target lane line dotted line segment in the perception image with the target lane line dotted line segment in the navigation map includes:
projecting a target lane line dotted line segment in the navigation map to a plane where the perception image is located;
determining the projection distance between each first target lane line virtual line segment in the projected navigation map and each second target lane line virtual line segment in the perception image;
correspondingly, correcting the current pose of the vehicle in the navigation map according to the matching result, including:
and correcting the current pose of the vehicle in the navigation map according to the projection distance.
Optionally, the determining a projection distance between each first target lane line virtual line segment in the post-projection navigation map and each second target lane line virtual line segment in the perception image includes:
carrying out one-to-one corresponding matching on each first target lane line virtual line segment and each second target lane line virtual line segment to form a plurality of matching combinations;
for any one matching combination, calculating the sum of the projection distances between each first target lane line virtual line segment and the corresponding second target lane line virtual line segment in the matching combination;
and selecting the sum of the projection distances with the minimum value from the sum of the projection distances corresponding to the matching combinations as the projection distance between each first target lane line virtual line segment and each second target lane line virtual line segment after projection.
Optionally, the pose of the vehicle is corrected in an iterative correction manner, and the pose of the vehicle after each correction is used as the input of the next pose correction, so that the sum of the projection distances corresponding to all the matching combinations reaches a set threshold.
In a second aspect, an embodiment of the present invention further provides a device for correcting a pose of a vehicle, where the device includes:
a target lane line dotted line endpoint identification module, configured to identify target lane line dotted line endpoints whose endpoint categories match from the lane line dotted line endpoints forming the lane line dotted line segments of the perception image and the lane line dotted line endpoints at the corresponding positions of the navigation map, wherein the target lane line dotted line endpoints comprise a target upper endpoint and a target lower endpoint belonging to the same lane line dotted line segment, and the distance between the target upper endpoint and the vehicle is greater than the distance between the target lower endpoint and the vehicle;
a lane line information determination module configured to determine whether lane line information of a lane line to which a target lane line dotted-line end point belongs in the perception image matches lane line information of a lane line to which the target lane line dotted-line end point belongs in the navigation map, the lane line information including a lane line category;
a target lane line dotted line segment matching module configured to match a target lane line dotted line segment in the perception image with a target lane line dotted line segment in the navigation map for a target lane line dotted line segment composed of a target upper endpoint and a corresponding target lower endpoint if lane line information is matched;
and the vehicle pose correction module is configured to correct the current pose of the vehicle in the navigation map according to the matching result.
Optionally, the target lane line dotted line endpoint identification module is specifically configured to:
identifying confidence values of the categories contained in a lane line dotted line endpoint in the perception image;
selecting the category with the highest confidence value as the category of the lane line dotted line endpoint in the perception image;
and determining the lane line dotted line endpoint in the perception image and the lane line dotted line endpoint at the corresponding position in the navigation map that have the same category as the target lane line dotted line endpoints.
Optionally, the target lane line dashed line segment matching module includes:
the projection unit is configured to project a target lane line dotted line segment in the navigation map to a plane where the perception image is located, wherein the target lane line dotted line segment is composed of a target upper end point and a corresponding target lower end point if lane line information is matched;
a projection distance determination unit configured to determine a projection distance between each first target lane line virtual line segment in the projected navigation map and each second target lane line virtual line segment in the perception image;
correspondingly, the vehicle pose correction module is specifically configured to:
and correcting the current pose of the vehicle in the navigation map according to the projection distance.
Optionally, the projection distance determining unit is specifically configured to:
carrying out one-to-one corresponding matching on each first target lane line virtual line segment and each second target lane line virtual line segment to form a plurality of matching combinations;
for any one matching combination, calculating the sum of the projection distances between each first target lane line virtual line segment and the corresponding second target lane line virtual line segment in the matching combination;
and selecting the sum of the projection distances with the minimum value from the sum of the projection distances corresponding to the matching combinations as the projection distance between each first target lane line virtual line segment and each second target lane line virtual line segment after projection.
Optionally, the pose of the vehicle is corrected in an iterative correction manner, and the pose of the vehicle after each correction is used as the input of the next pose correction, so that the sum of the projection distances corresponding to all the matching combinations reaches a set threshold.
In a third aspect, an embodiment of the present invention further provides a vehicle-mounted terminal, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program codes stored in the memory to execute part or all of the steps of the vehicle pose correction method provided by any embodiment of the invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium storing a computer program including instructions for executing part or all of the steps of the method for correcting a vehicle pose provided by any embodiment of the present invention.
In a fifth aspect, the embodiment of the present invention further provides a computer program product, which when run on a computer, causes the computer to execute part or all of the steps of the method for correcting the vehicle pose provided by any embodiment of the present invention.
According to the technical scheme provided by the embodiment of the invention, target lane line dotted line endpoints with matching endpoint categories are determined among the lane line dotted line endpoints corresponding to the navigation map and the perception image. When the lane line information of the lane line to which a target lane line dotted line endpoint belongs in the perception image matches that of the corresponding lane line in the navigation map, the target lane line dotted line segment formed by a target upper endpoint and the corresponding target lower endpoint in the perception image is matched with the target lane line dotted line segment in the navigation map, and the current pose of the vehicle in the navigation map is corrected according to the matching result. This solves the problem of large front-rear positioning errors when an unmanned vehicle is on a closed road section with only lane line information.
The inventive points of the invention include:
1. Using the endpoint information of lane lines to accurately position an unmanned vehicle on expressways, closed road sections, and other road sections with insufficient information is one of the inventive points of the invention.
2. Checking the category information of the lane line dotted line endpoints and combining it with the information of the lane line where each endpoint lies, which improves the operating efficiency of the system and reduces the incidence of mismatches between the perception image and the navigation map, is one of the inventive points of the invention.
3. Taking the projection distance of each matching combination formed by the first target lane line virtual line segments and the second target lane line virtual line segments as the matching result between the target lane line virtual line segments in the perception image and those in the navigation map, and using this matching result to correct the current pose of the vehicle, so that the unmanned vehicle can be accurately positioned on expressways, closed road sections, and other road sections with insufficient information, is one of the inventive points of the invention.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a vehicle positioning method according to an embodiment of the present invention;
FIG. 2a is a schematic flow chart of a vehicle positioning method according to an embodiment of the present invention;
FIG. 2b is a schematic diagram of the projection of a lane line dotted line in a navigation map onto the plane where a perception image lies according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a vehicle pose correction device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating a vehicle positioning method according to an embodiment of the present invention. The method is mainly executed after the high-precision map has been initialized; at that point the positioning of the vehicle has already been corrected to centimeter level, and the projection of each lane line of the navigation map into the perception image matches the lane line in the perception image. In addition, the vehicle positioning method provided in the embodiment of the present invention is typically applied in scenarios where an unmanned vehicle travels on closed road sections, such as expressways, that lack traffic signs and light poles. It may be executed by a vehicle positioning device, which may be implemented in software and/or hardware and may generally be integrated in a vehicle-mounted terminal such as a vehicle-mounted computer or a vehicle-mounted industrial control computer (IPC); the embodiment of the present invention is not limited in this respect. As shown in fig. 1, the method provided in this embodiment specifically includes:
110. Identify target lane line dotted line endpoints whose endpoint categories match from the lane line dotted line endpoints forming the lane line dotted line segments of the perception image and the lane line dotted line endpoints at the corresponding positions of the navigation map.
The perception image is obtained by using a preset perception model to recognize an image, captured by a camera, that contains road information. The preset perception model can be obtained by training a perception model in advance on a large number of road sample images annotated with image semantic features. The image semantic features can include lane lines, lane line dotted line endpoints, prismatic lines, zebra crossings, and the like. A road image containing road information is input into the trained preset perception model, and the image semantic features in the road image can be obtained from the recognition result of the model. The preset perception model can be obtained in the following manner:
constructing a training sample set, wherein the training sample set comprises a plurality of groups of training sample data, and each group of training sample data comprises a road sample image and a corresponding road perception sample image marked with image semantic features; training the built initial neural network based on the training sample set to obtain a preset perception model, wherein the preset perception model enables the road sample images in each set of training sample data to be associated with the corresponding road perception sample images marked with image semantic features. The output of the model is called a perception image.
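As a rough illustration of the training procedure just described, the sketch below runs one gradient step of a per-pixel semantic segmentation model in PyTorch. The tiny network, the class list, and the dummy tensors are placeholders of ours, not details from the patent; a real perception model would be a much deeper network trained on annotated road sample images.

import torch
import torch.nn as nn

NUM_CLASSES = 4  # e.g. background, lane line, dotted line endpoint, zebra crossing

# Toy stand-in for the perception network; a real model would be far deeper.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, NUM_CLASSES, kernel_size=1),   # per-pixel class scores
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One gradient step on a dummy "road sample image" and its semantic annotation.
images = torch.randn(2, 3, 64, 64)                   # road sample images
labels = torch.randint(0, NUM_CLASSES, (2, 64, 64))  # per-pixel semantic labels

logits = model(images)           # road perception output
loss = loss_fn(logits, labels)   # compare with annotated semantic features
optimizer.zero_grad()
loss.backward()
optimizer.step()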
In this embodiment, the categories of the lane line dotted line endpoints that form the lane line dotted line segments in the perception image include general lane line dotted line endpoints, intersection points of a lane line with the vehicle, cut-off lines of a lane line, pedestrian crossings, and the like, while the lane line dotted line endpoints that form the lane line dotted line segments in the navigation map are mainly general lane line dotted line endpoints. In this embodiment, lane line dotted line endpoints with the same endpoint category are taken as the target lane line dotted line endpoints from among the lane line dotted line endpoints forming the lane line dotted line segments of the perception image and the lane line dotted line endpoints at the corresponding positions of the navigation map. The target lane line dotted line endpoints mainly refer to general lane line dotted line endpoints, that is, the dotted line segment where such an endpoint lies does not intersect the vehicle and is neither a cut-off line of a lane line nor a pedestrian crossing.
Specifically, the target lane line dotted line endpoints include a target upper endpoint and a target lower endpoint. The target upper endpoint and the target lower endpoint belong to the same lane line dotted line segment, and each segment of a lane line dotted line also serves as the matching unit in the subsequent matching of this embodiment. The target upper endpoint and the target lower endpoint are defined mainly by their distance from the vehicle: for any dotted line segment, the distance between the target upper endpoint and the vehicle is greater than the distance between the target lower endpoint and the vehicle. In this embodiment, category matching means that the categories of the target upper endpoint and the target lower endpoint in the perception image both match the categories of the target upper endpoint and the target lower endpoint at the corresponding positions in the navigation map. A schematic sketch of this endpoint representation is given below.
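The sketch below shows one way the endpoint bookkeeping could look in code; the dataclass fields and helper names are our assumptions for illustration, not structures defined by the patent.

from dataclasses import dataclass

@dataclass
class DashEndpoint:
    x: float                 # image or map coordinates
    y: float
    category: str            # e.g. "general", "cutoff_line", "crosswalk"
    dist_to_vehicle: float   # distance between this endpoint and the vehicle

def split_upper_lower(p, q):
    # For one dotted line segment, the endpoint farther from the vehicle is the
    # target upper endpoint, the nearer one the target lower endpoint.
    return (p, q) if p.dist_to_vehicle > q.dist_to_vehicle else (q, p)

def categories_match(img_segment, map_segment):
    # Endpoint categories match only when both the upper and the lower endpoint
    # categories agree between the perception image and the navigation map.
    img_up, img_lo = split_upper_lower(*img_segment)
    map_up, map_lo = split_upper_lower(*map_segment)
    return img_up.category == map_up.category and img_lo.category == map_lo.category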
120. Judge whether the lane line information of the lane line to which the target lane line dotted line endpoint belongs in the perception image matches the lane line information of the lane line to which it belongs in the navigation map; if it matches, execute step 130; otherwise, execute step 140.
The lane line information includes the lane line category, attributes, and the like. Lane line categories include dotted lines, solid lines, prismatic lines, and so on. Confirming the lane line information avoids mismatching lane lines when the vehicle cannot receive GPS positioning signals (for example, when driving through a tunnel) or when other positioning anomalies occur; it also improves the computational efficiency of the system and reduces the number of mismatched dotted line endpoints.
130. For each target lane line dotted line segment formed by a target upper endpoint and the corresponding target lower endpoint, match the target dotted line segment in the perception image with the target dotted line segment in the navigation map, and correct the current pose of the vehicle in the navigation map according to the matching result.
For example, matching the target lane line dotted line segment in the perception image with the target lane line dotted line segment in the navigation map may be done by projecting the target dotted line segment of the navigation map onto the plane where the perception image lies, and determining, on that plane, the projection distance between the projected target dotted line segment and the target dotted line segment in the perception image. Because the projection distance reflects, to a certain extent, the error between the real pose of the vehicle and the current pose estimate, the current pose of the vehicle can be corrected by reducing the projection distance. This solves the problem of large front-rear positioning errors of unmanned vehicles on closed road sections with only lane line information. Moreover, matching whole lane line dotted line segments effectively reduces the number of mismatches between upper and lower endpoints.
140. Re-acquire the current pose of the vehicle in the navigation map.
According to the technical scheme provided by this embodiment, target lane line dotted line endpoints with matching endpoint categories are determined among the lane line dotted line endpoints corresponding to the navigation map and the perception image. When the category of the lane line to which a target lane line dotted line endpoint belongs in the perception image matches that of the corresponding lane line in the navigation map, the target lane line dotted line segment formed by a target upper endpoint and the corresponding target lower endpoint in the perception image can be matched with the target lane line dotted line segment in the navigation map, and the current pose of the vehicle in the navigation map is corrected according to the matching result. This solves the problem of large front-rear positioning errors when an unmanned vehicle is on a closed road section with only lane line information, and achieves accurate positioning of the vehicle.
Example two
Referring to fig. 2a, fig. 2a is a schematic flow chart illustrating a vehicle positioning method according to an embodiment of the invention. On the basis of the above embodiment, the process of identifying the target lane line dotted line end point matched with the end point category and matching the target lane line dotted line segment in the perception image with the target lane line dotted line segment in the navigation map is optimized. As shown in fig. 2a, the method comprises:
210. From the lane line dotted line endpoints forming the lane line dotted line segments of the perception image and the lane line dotted line endpoints at the corresponding positions of the navigation map, identify the confidence values of the categories contained in each lane line dotted line endpoint in the perception image, and select the category with the highest confidence value as the category of that endpoint.
For example, if the confidence values of the categories contained in a lane line dotted line endpoint in the perception image are determined to be: general lane line dotted line endpoint 90%, pedestrian crossing 20%, and lane line cut-off line 5%, then "general lane line dotted line endpoint" is taken as the category of that lane line dotted line endpoint in the perception image.
220. Determine the lane line dotted line endpoint in the perception image and the lane line dotted line endpoint at the corresponding position in the navigation map that have the same category as the target lane line dotted line endpoints.
Illustratively, lane line dotted line endpoints in the perception image whose categories are inconsistent with those of the corresponding endpoints in the navigation map are filtered out and are not used as endpoints for pose correction of the vehicle. A minimal sketch of this selection and filtering step follows.
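The sketch below covers steps 210 and 220 under the assumption that the perception model emits a per-category confidence dictionary for each detected endpoint and that endpoints are already paired by position; the dictionary layout and helper names are illustrative, not from the patent.

def select_endpoint_category(confidences):
    # Pick the category with the highest confidence value, e.g.
    # {"general": 0.90, "crosswalk": 0.20, "cutoff_line": 0.05} -> "general".
    return max(confidences, key=confidences.get)

def filter_target_endpoints(image_endpoints, map_endpoints):
    # Keep only endpoints whose selected category equals the category of the
    # endpoint at the corresponding position in the navigation map; the rest
    # are filtered out and take no part in pose correction.
    targets = []
    for img_ep, map_ep in zip(image_endpoints, map_endpoints):
        if select_endpoint_category(img_ep["confidences"]) == map_ep["category"]:
            targets.append((img_ep, map_ep))
    return targets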
230. If the category of the lane line to which the target lane line dotted line endpoint belongs in the perception image matches the category of the corresponding lane line in the navigation map, then for each target lane line dotted line segment composed of a target upper endpoint and the corresponding target lower endpoint, project the target dotted line segment of the navigation map onto the plane where the perception image lies.
Specifically, fig. 2b is a schematic diagram of projecting a lane line dotted line in the navigation map onto the plane where the perception image lies, according to an embodiment of the present invention. As shown in fig. 2b, 1 denotes the upper endpoint of a lane line dotted line segment in the perception image; 2 denotes the upper endpoint of the projected lane line dotted line segment from the navigation map; 3 denotes the lower endpoint of the lane line dotted line segment in the perception image; 4 denotes the lower endpoint of the projected lane line dotted line segment from the navigation map; 5 denotes a lane line dotted line segment in the perception image; 6 denotes the projected lane line dotted line segment from the navigation map; 7 denotes a lane line dotted line in the perception image; and 8 denotes a projected lane line dotted line from the navigation map. In the vehicle positioning process of the embodiment of the present invention, the navigation map has been initialized and can provide a centimeter-level positioning position, so the projection distance between each type of lane line projected from the navigation map onto the perception image and the lane line at the corresponding position in the perception image meets a preset distance requirement; for example, after projection, 5 and 6 correspond to each other in fig. 2b. In addition, since 1 and 2 are the upper endpoints of corresponding lane line dotted line segments in the perception image and the navigation map, respectively, and 3 and 4 are the corresponding lower endpoints, in the endpoint matching process the upper endpoint 1 in the perception image is matched with the upper endpoint 2 from the navigation map, and the lower endpoint 3 in the perception image is matched with the lower endpoint 4 from the navigation map.
As shown in fig. 2b, in this embodiment, 1 and 2, and 3 and 4, are target lane line dotted line endpoints with matching categories. The lane lines to which 1 and 3 belong also match the lane lines to which 2 and 4 belong; all are lane line dotted lines. When correcting the vehicle pose, matching individual upper and lower endpoints is prone to mismatches and increases the amount of computation, because such endpoints are numerous. To effectively reduce the number of mismatches between upper and lower endpoints, this embodiment matches the lane line dotted line segments formed by an upper endpoint and a lower endpoint, i.e., the dotted line segment is the basic matching unit; for example, 5 and 6 in fig. 2b are matched, and the pose of the vehicle is corrected by calculating the projection error between 5 and 6 on the plane where the perception image lies.
Furthermore, each lane line dotted line is formed by combining a plurality of lane line dotted line segments, so the pose of the vehicle can be corrected by comprehensively considering the projection errors of all matched dotted line segments. A projection sketch under an assumed pinhole camera model is given below.
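The patent does not spell out the camera model, so the following sketch assumes a standard pinhole projection with known intrinsics K and a map-to-camera pose (R, t) derived from the current pose estimate; it shows how a map dotted line segment could be projected onto the image plane.

import numpy as np

def project_point(K, R, t, p_map):
    # Map frame -> camera frame, then pinhole projection to pixel coordinates.
    p_cam = R @ p_map + t
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

def project_segment(K, R, t, upper_3d, lower_3d):
    # A projected dotted line segment is just the pair of its projected
    # upper and lower endpoints.
    return project_point(K, R, t, upper_3d), project_point(K, R, t, lower_3d)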
240. Determine the projection distance between each first target lane line virtual line segment in the projected navigation map and each second target lane line virtual line segment in the perception image.
For example, in this embodiment, determining a projection distance between each first target lane line virtual line segment in the post-projection navigation map and each second target lane line virtual line segment in the perception image includes:
carrying out one-to-one corresponding matching on each first target lane line virtual line segment and each second target lane line virtual line segment to form a plurality of matching combinations; for any one matching combination, calculating the sum of the projection distances between each first target lane line virtual line segment and the corresponding second target lane line virtual line segment in the matching combination; and selecting the sum of the projection distances with the minimum value from the sum of the projection distances corresponding to the matching combinations as the projection distance between each first target lane line virtual line segment and each second target lane line virtual line segment after projection.
Specifically, suppose the lane line dotted line in the navigation map consists of three first target lane line dotted line segments A, B, and C, and the lane line dotted line at the corresponding position in the perception image consists of three second target lane line dotted line segments 1, 2, and 3. When calculating the projection distance, the dotted line segments of the navigation map and of the perception image are first formed into matching combinations; within each matching combination there is a one-to-one matching relationship between the first and second target lane line dotted line segments. For example, A-1, B-2, C-3 is one matching combination, and A-1, B-3, C-2 is another. For each matching combination, the sum of the projection distances between each first target lane line dotted line segment and the corresponding second target lane line dotted line segment is calculated; the smallest of these sums is then selected and taken as the projection distance between the projected first target lane line dotted line segments and the second target lane line dotted line segments, i.e., as the matching result between the target lane line dotted line segments in the perception image and those in the navigation map. A brute-force sketch of this search appears below.
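The search over matching combinations can be written directly as an enumeration of one-to-one assignments, which is cheap for the handful of dash segments visible at once. In this sketch, a segment is an (upper, lower) pair of 2D numpy points and segment_distance is an assumed helper; neither name comes from the patent.

from itertools import permutations
import numpy as np

def segment_distance(seg_a, seg_b):
    # Sum of pixel distances between corresponding (upper, lower) endpoints.
    return sum(float(np.linalg.norm(pa - pb)) for pa, pb in zip(seg_a, seg_b))

def best_matching(map_segments, image_segments):
    # Enumerate every one-to-one combination (e.g. A-1/B-2/C-3, A-1/B-3/C-2, ...)
    # and keep the one whose summed projection distance is smallest.
    best_total, best_pairs = float("inf"), None
    for perm in permutations(image_segments):
        total = sum(segment_distance(m, s) for m, s in zip(map_segments, perm))
        if total < best_total:
            best_total, best_pairs = total, list(zip(map_segments, perm))
    return best_total, best_pairs

For larger numbers of segments, a polynomial-time assignment solver such as the Hungarian algorithm could replace the factorial enumeration; the patent itself only requires that the combination with the minimal summed distance be selected.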
250. Correct the current pose of the vehicle in the navigation map according to the projection distance.
In the matching result of step 240, the projection distance between corresponding upper or lower endpoints of the perception image and the navigation map, i.e., the projection error, is caused by inaccuracy in the estimate of the current vehicle pose; in other words, the projection error reflects the magnitude of the error of the current vehicle pose. Therefore, the current pose can be corrected by using the projection error. During correction, the pose is optimized iteratively: the vehicle pose obtained from each correction is used as the input of the next pose correction, until the sum of the projection distances corresponding to all matching combinations reaches a set threshold, that is, until the sum of the projection errors of all matched dotted line segments reaches the minimum among the empirical values. A schematic iterative loop is sketched below.
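The patent leaves the optimizer unspecified, so the loop below uses a generic finite-difference gradient step on the pose parameters purely as a stand-in; what it illustrates is the iterative structure, where each corrected pose is fed back as the input of the next correction until the summed projection distance reaches the set threshold. The parameter names and defaults are assumptions.

import numpy as np

def correct_pose(pose, total_projection_error, step=1e-3, eps=1e-5,
                 threshold=2.0, max_iters=50):
    # pose: pose parameter vector (e.g. x, y, yaw) in the navigation map;
    # total_projection_error(pose): summed projection distance of the best
    # matching combination under that pose. Both are assumed interfaces.
    pose = np.asarray(pose, dtype=float)
    for _ in range(max_iters):
        err = total_projection_error(pose)
        if err <= threshold:              # set threshold reached: stop iterating
            break
        grad = np.zeros_like(pose)        # finite-difference gradient
        for i in range(pose.size):
            nudged = pose.copy()
            nudged[i] += eps
            grad[i] = (total_projection_error(nudged) - err) / eps
        pose = pose - step * grad         # corrected pose feeds the next iteration
    return pose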
This embodiment is optimized on the basis of the above embodiments: the projection distance of each matching combination composed of the first target lane line virtual line segments and the second target lane line virtual line segments is used as the matching result between the target lane line virtual line segments in the perception image and those in the navigation map, and the current pose of the vehicle can be corrected with this matching result, thereby accurately positioning the unmanned vehicle on expressways, closed road sections, and other information-poor road sections.
EXAMPLE III
Referring to fig. 3, fig. 3 is a schematic structural diagram of a device for correcting a pose of a vehicle according to an embodiment of the present invention. As shown in fig. 3, the device includes: a target lane line dotted line endpoint identification module 310, a lane line information judgment module 320, a target lane line dotted line segment matching module 330, and a vehicle pose correction module 340. Wherein:
a target lane line dotted line endpoint identification module 310, configured to identify a target lane line dotted line endpoint of which endpoint types are matched from lane line dotted line endpoints constituting a lane line dotted line segment of the perception image and lane line dotted line endpoints at corresponding positions of the navigation map, where the target lane line dotted line endpoint includes a target upper endpoint and a target lower endpoint belonging to the same lane line dotted line segment, and a distance between the target upper endpoint and the vehicle is greater than a distance between the target lower endpoint and the vehicle;
a lane line information determining module 320 configured to determine whether lane line information of a lane line to which a target lane line dotted end point belongs in the perception image matches lane line information of a lane line to which the target lane line dotted end point belongs in the navigation map, the lane line information including a lane line category;
a target lane line dotted line segment matching module 330 configured to, if lane line information is matched, match a target lane line dotted line segment in the perception image with a target lane line dotted line segment in the navigation map for a target lane line dotted line segment composed of a target upper endpoint and a corresponding target lower endpoint;
and the vehicle pose correction module 340 is configured to correct the current pose of the vehicle in the navigation map according to the matching result.
According to the technical scheme provided by this embodiment, target lane line dotted line endpoints with matching endpoint categories are determined among the lane line dotted line endpoints corresponding to the navigation map and the perception image. When the category of the lane line to which a target lane line dotted line endpoint belongs in the perception image matches that of the corresponding lane line in the navigation map, the target lane line dotted line segment formed by a target upper endpoint and the corresponding target lower endpoint in the perception image can be matched with the target lane line dotted line segment in the navigation map, and the current pose of the vehicle in the navigation map is corrected according to the matching result, which solves the problem of large front-rear positioning errors when an unmanned vehicle is on a closed road section with only lane line information.
Optionally, the target lane line dotted line endpoint identification module is specifically configured to:
identifying confidence values of the categories contained in a lane line dotted line endpoint in the perception image;
selecting the category with the highest confidence value as the category of the lane line dotted line endpoint in the perception image;
and determining the lane line dotted line endpoint in the perception image and the lane line dotted line endpoint at the corresponding position in the navigation map that have the same category as the target lane line dotted line endpoints.
Optionally, the target lane line dashed line segment matching module includes:
the projection unit is configured to project a target lane line dotted line segment in the navigation map to a plane where the perception image is located, wherein the target lane line dotted line segment is composed of a target upper end point and a corresponding target lower end point if lane line information is matched;
a projection distance determination unit configured to determine a projection distance between each first target lane line virtual line segment in the projected navigation map and each second target lane line virtual line segment in the perception image;
correspondingly, the vehicle pose correction module is specifically configured to:
and correcting the current pose of the vehicle in the navigation map according to the projection distance.
Optionally, the projection distance determining unit is specifically configured to:
carrying out one-to-one corresponding matching on each first target lane line virtual line segment and each second target lane line virtual line segment to form a plurality of matching combinations;
for any one matching combination, calculating the sum of the projection distances between each first target lane line virtual line segment and the corresponding second target lane line virtual line segment in the matching combination;
and selecting the sum of the projection distances with the minimum value from the sum of the projection distances corresponding to the matching combinations as the projection distance between each first target lane line virtual line segment and each second target lane line virtual line segment after projection.
Optionally, the pose of the vehicle is corrected in an iterative correction manner, and the pose of the vehicle after each correction is used as the input of the next pose correction, so that the sum of the projection distances corresponding to all the matching combinations reaches a set threshold.
The vehicle pose correction device provided by the embodiment of the invention can execute the vehicle pose correction method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method. Technical details that are not described in detail in the above embodiments may be referred to a method of correcting a vehicle pose provided in any embodiment of the present invention.
Example four
Referring to fig. 4, fig. 4 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention. As shown in fig. 4, the in-vehicle terminal may include:
a memory 701 in which executable program code is stored;
a processor 702 coupled to the memory 701;
the processor 702 calls the executable program code stored in the memory 701 to execute the method for correcting the vehicle pose according to any embodiment of the present invention.
The embodiment of the invention discloses a computer-readable storage medium which stores a computer program, wherein the computer program enables a computer to execute the method for correcting the vehicle pose provided by any embodiment of the invention.
The embodiment of the invention discloses a computer program product, wherein when the computer program product runs on a computer, the computer is caused to execute part or all of the steps of the method for correcting the vehicle pose provided by any embodiment of the invention.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not imply an inevitable order of execution, and the execution order of the processes should be determined by their functions and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In the embodiments provided herein, it should be understood that "B corresponding to A" means that B is associated with A from which B can be determined. It should also be understood, however, that determining B from a does not mean determining B from a alone, but may also be determined from a and/or other information.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory. Based on this understanding, the part of the technical solution of the present invention that in essence contributes over the prior art, or the whole or part of the technical solution, can be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute part or all of the steps of the above-described method of each embodiment of the present invention.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, magnetic disk memory, magnetic tape memory, or any other computer-readable medium that can be used to carry or store data.
The method and the device for correcting the vehicle pose disclosed by the embodiment of the invention are described in detail, the principle and the implementation mode of the invention are explained by applying specific examples, and the description of the embodiment is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A vehicle pose correction method is characterized by comprising the following steps:
identifying, from the lane line dotted line endpoints that form the lane line dotted line segments of the perception image and the lane line dotted line endpoints at the corresponding positions of the navigation map, target lane line dotted line endpoints whose endpoint categories match, wherein the target lane line dotted line endpoints comprise a target upper endpoint and a target lower endpoint belonging to the same lane line dotted line segment, and the distance between the target upper endpoint and the vehicle is greater than the distance between the target lower endpoint and the vehicle;
judging whether the lane line information of the lane line to which a target lane line dotted line endpoint belongs in the perception image matches the lane line information of the lane line to which it belongs in the navigation map, the lane line information comprising the lane line category;
and if the lane line information matches, matching, for each target lane line dotted line segment composed of a target upper endpoint and the corresponding target lower endpoint, the target dotted line segment in the perception image with the target dotted line segment in the navigation map, and correcting the current pose of the vehicle in the navigation map according to the matching result.
2. The method of claim 1, wherein the identifying the target lane line dashed end point that matches the end point class comprises:
identifying confidence values of the categories contained in a lane line dotted line endpoint in the perception image;
selecting the category with the highest confidence value as the category of the lane line dotted line endpoint in the perception image;
and determining the lane line dotted line endpoint in the perception image and the lane line dotted line endpoint at the corresponding position in the navigation map that have the same category as the target lane line dotted line endpoints.
3. The method according to claim 1 or 2, wherein the matching of the target lane line dashed segment in the perceptual image with the target lane line dashed segment in the navigation map comprises:
projecting a target lane line dotted line segment in the navigation map to a plane where the perception image is located;
determining the projection distance between each first target lane line virtual line segment in the projected navigation map and each second target lane line virtual line segment in the perception image;
correspondingly, correcting the current pose of the vehicle in the navigation map according to the matching result, including:
and correcting the current pose of the vehicle in the navigation map according to the projection distance.
4. The method of any one of claims 1-3, wherein determining the projected distance between each first target lane line virtual line segment in the projected navigation map and each second target lane line virtual line segment in the perceptual image comprises:
carrying out one-to-one corresponding matching on each first target lane line virtual line segment and each second target lane line virtual line segment to form a plurality of matching combinations;
for any one matching combination, calculating the sum of the projection distances between each first target lane line virtual line segment and the corresponding second target lane line virtual line segment in the matching combination;
and selecting the sum of the projection distances with the minimum value from the sum of the projection distances corresponding to the matching combinations as the projection distance between each first target lane line virtual line segment and each second target lane line virtual line segment after projection.
5. The method of claim 4,
and correcting the pose of the vehicle in an iterative correction mode, and taking the pose of the vehicle corrected each time as the input of next pose correction, so that the sum of the projection distances corresponding to all the matching combinations reaches a set threshold value.
6. A correction device of a vehicle posture, characterized by comprising:
a target lane line dotted line endpoint identification module, configured to identify target lane line dotted line endpoints whose endpoint categories match from the lane line dotted line endpoints forming the lane line dotted line segments of the perception image and the lane line dotted line endpoints at the corresponding positions of the navigation map, wherein the target lane line dotted line endpoints comprise a target upper endpoint and a target lower endpoint belonging to the same lane line dotted line segment, and the distance between the target upper endpoint and the vehicle is greater than the distance between the target lower endpoint and the vehicle;
a lane line information determination module configured to determine whether lane line information of a lane line to which a target lane line dotted-line end point belongs in the perception image matches lane line information of a lane line to which the target lane line dotted-line end point belongs in the navigation map, the lane line information including a lane line category;
a target lane line dotted line segment matching module configured to match a target lane line dotted line segment in the perception image with a target lane line dotted line segment in the navigation map for a target lane line dotted line segment composed of a target upper endpoint and a corresponding target lower endpoint if lane line information is matched;
and the vehicle pose correction module is configured to correct the current pose of the vehicle in the navigation map according to the matching result.
7. The apparatus of claim 6, wherein the target lane line dashed endpoint identification module is specifically configured to:
identifying confidence values of the categories contained in a lane line dotted line endpoint in the perception image;
selecting the category with the highest confidence value as the category of the lane line dotted line endpoint in the perception image;
and determining the lane line dotted line endpoint in the perception image and the lane line dotted line endpoint at the corresponding position in the navigation map that have the same category as the target lane line dotted line endpoints.
8. The apparatus of claim 6 or 7, wherein the target lane line dashed segment matching module comprises:
the projection unit is configured to project a target lane line dotted line segment in the navigation map to a plane where the perception image is located, wherein the target lane line dotted line segment is composed of a target upper end point and a corresponding target lower end point if lane line information is matched;
a projection distance determination unit configured to determine a projection distance between each first target lane line virtual line segment in the projected navigation map and each second target lane line virtual line segment in the perception image;
correspondingly, the vehicle pose correction module is specifically configured to:
and correcting the current pose of the vehicle in the navigation map according to the projection distance.
9. The apparatus of claim 8, wherein the projection distance determination unit is specifically configured to:
carrying out one-to-one corresponding matching on each first target lane line virtual line segment and each second target lane line virtual line segment to form a plurality of matching combinations;
for any one matching combination, calculating the sum of the projection distances between each first target lane line virtual line segment and the corresponding second target lane line virtual line segment in the matching combination;
and selecting the sum of the projection distances with the minimum value from the sum of the projection distances corresponding to the matching combinations as the projection distance between each first target lane line virtual line segment and each second target lane line virtual line segment after projection.
10. The apparatus of claim 9, wherein the pose of the vehicle is corrected in an iterative manner, the vehicle pose obtained from each correction serving as the input of the next correction, until the sum of the projection distances corresponding to the matching combinations reaches a set threshold.
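Claim 10's stopping rule amounts to a simple fixed-point loop. The optimizer that actually adjusts the pose is not described in these claims, so update_pose below is a deliberately unspecified placeholder, and the threshold and iteration cap are invented for illustration.

```python
def correct_pose_iteratively(pose, match_and_score, update_pose,
                             threshold=0.5, max_iterations=20):
    # match_and_score(pose) -> (summed projection distance, matches), per claim 9;
    # update_pose(pose, matches) -> corrected pose (method not specified here).
    for _ in range(max_iterations):
        total_distance, matches = match_and_score(pose)
        if total_distance <= threshold:    # claim 10: stop once the summed
            break                          # projection distance reaches the threshold
        pose = update_pose(pose, matches)  # each corrected pose feeds the next round
    return pose
```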
CN201910243966.0A 2019-03-28 2019-03-28 Vehicle pose correction method and device Active CN111750878B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910243966.0A CN111750878B (en) 2019-03-28 2019-03-28 Vehicle pose correction method and device
PCT/CN2019/113483 WO2020192105A1 (en) 2019-03-28 2019-10-26 Vehicle pose correction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910243966.0A CN111750878B (en) 2019-03-28 2019-03-28 Vehicle pose correction method and device

Publications (2)

Publication Number Publication Date
CN111750878A (en) 2020-10-09
CN111750878B (en) 2022-06-24

Family

ID=72608881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910243966.0A Active CN111750878B (en) 2019-03-28 2019-03-28 Vehicle pose correction method and device

Country Status (2)

Country Link
CN (1) CN111750878B (en)
WO (1) WO2020192105A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112379330B (en) * 2020-11-27 2023-03-10 浙江同善人工智能技术有限公司 Multi-robot cooperative 3D sound source identification and positioning method
CN113776533A (en) * 2021-07-29 2021-12-10 北京旷视科技有限公司 Repositioning method and device for movable equipment
CN114543819B (en) * 2021-09-16 2024-03-26 北京小米移动软件有限公司 Vehicle positioning method, device, electronic equipment and storage medium
CN115098606B (en) * 2022-05-30 2023-06-16 九识智行(北京)科技有限公司 Traffic light query method and device for unmanned vehicle, storage medium and equipment
CN117490728B (en) * 2023-12-28 2024-04-02 合众新能源汽车股份有限公司 Lane line positioning fault diagnosis method and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663356B (en) * 2012-03-28 2015-04-08 柳州博实唯汽车科技有限公司 Method for extraction and deviation warning of lane line
CN103632140B (en) * 2013-11-27 2017-01-04 智慧城市系统服务(中国)有限公司 A kind of method for detecting lane lines and device
US9928594B2 (en) * 2014-07-11 2018-03-27 Agt International Gmbh Automatic spatial calibration of camera network
CN108981741B (en) * 2018-08-23 2021-02-05 武汉中海庭数据技术有限公司 Path planning device and method based on high-precision map

Patent Citations (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6061628A (en) * 1996-04-24 2000-05-09 Aisin Aw Co., Ltd. Navigation system for vehicles
US20170045370A1 (en) * 2003-06-19 2017-02-16 Here Global B.V. Method of representing road lanes
CN1961197A (en) * 2004-06-02 2007-05-09 株式会社查纳位资讯情报 On-vehicle navigation device and vehicle position correction method
US20060217882A1 (en) * 2005-03-22 2006-09-28 Denso Corporation Vehicular navigation system
KR20070091471A (en) * 2006-03-06 2007-09-11 주식회사 현대오토넷 Method for recognition crossing of navigation system
EP1959235A2 (en) * 2007-02-13 2008-08-20 Aisin AW Co., Ltd. Road map data structure and navigation apparatus
CN101675442A (en) * 2007-05-25 2010-03-17 爱信艾达株式会社 Lane determining device, lane determining method and navigation apparatus using the same
CN101447019A (en) * 2007-11-29 2009-06-03 爱信艾达株式会社 Image recognition apparatuses, methods and programs
CN102150015A (en) * 2008-10-17 2011-08-10 三菱电机株式会社 Navigation device
CN102183259A (en) * 2011-03-17 2011-09-14 光庭导航数据(武汉)有限公司 Navigation method based on electronic map road characteristic recognition
KR20130003308A (en) * 2011-06-30 2013-01-09 충북대학교 산학협력단 Method of lane detection for vehicle
WO2013089836A1 (en) * 2011-12-12 2013-06-20 Google Inc. Method of pre-fetching map data for rendering and offline routing
JP2013130463A (en) * 2011-12-21 2013-07-04 Aisin Aw Co Ltd Lane guide display system, method and program
CN103954275A (en) * 2014-04-01 2014-07-30 西安交通大学 Lane line detection and GIS map information development-based vision navigation method
CN105021201A (en) * 2015-08-17 2015-11-04 武汉光庭信息技术有限公司 System and method for reversely educing position of automobile by means of coordinates of traffic signboards
CN105783936A (en) * 2016-03-08 2016-07-20 武汉光庭信息技术股份有限公司 Road sign drawing and vehicle positioning method and system for automatic drive
CN105788274A (en) * 2016-05-18 2016-07-20 武汉大学 Urban intersection lane-level structure extraction method based on time-space trajectory big data
CN106092121A (en) * 2016-05-27 2016-11-09 百度在线网络技术(北京)有限公司 Automobile navigation method and device
CN107643086A (en) * 2016-07-22 2018-01-30 北京四维图新科技股份有限公司 A kind of vehicle positioning method, apparatus and system
CN106525057A (en) * 2016-10-26 2017-03-22 陈曦 Generation system for high-precision road map
CN108303103A (en) * 2017-02-07 2018-07-20 腾讯科技(深圳)有限公司 The determination method and apparatus in target track
CN107679520A (en) * 2017-10-30 2018-02-09 湖南大学 A kind of lane line visible detection method suitable for complex condition
CN108052880A (en) * 2017-11-29 2018-05-18 南京大学 Traffic monitoring scene actual situation method for detecting lane lines
CN108090456A (en) * 2017-12-27 2018-05-29 北京初速度科技有限公司 A kind of Lane detection method and device
CN108318043A (en) * 2017-12-29 2018-07-24 百度在线网络技术(北京)有限公司 Method, apparatus for updating electronic map and computer readable storage medium
CN108413971A (en) * 2017-12-29 2018-08-17 驭势科技(北京)有限公司 Vehicle positioning technology based on lane line and application
CN108917778A (en) * 2018-05-11 2018-11-30 广州海格星航信息科技有限公司 Navigation hint method, navigation equipment and storage medium
CN108830159A (en) * 2018-05-17 2018-11-16 武汉理工大学 A kind of front vehicles monocular vision range-measurement system and method
CN108680177A (en) * 2018-05-31 2018-10-19 安徽工程大学 Synchronous superposition method and device based on rodent models
CN109165549A (en) * 2018-07-09 2019-01-08 厦门大学 Road markings acquisition methods, terminal device and device based on three dimensional point cloud
CN109059940A (en) * 2018-09-11 2018-12-21 北京测科空间信息技术有限公司 A kind of method and system for automatic driving vehicle navigational guidance
CN109460739A (en) * 2018-11-13 2019-03-12 广州小鹏汽车科技有限公司 Method for detecting lane lines and device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YANG ZHENG et al.: "Lane-Change Detection From Steering Signal Using Spectral Segmentation and Learning-Based Classification", IEEE Transactions on Intelligent Vehicles *
HOU Zhenghua: "Research on Semantic SLAM Methods Based on Uncertainty Models and Relocalization Techniques", China Master's Theses Full-text Database, Information Science and Technology *
JIA Xin: "Research on Lane Marking Recognition Methods in Visual Perception of Intelligent Vehicles", China Doctoral Dissertations Full-text Database, Information Science and Technology *
ZOU Bin et al.: "Research on Monocular Vision Distance Measurement Methods for Intelligent Transportation", Journal of Transportation Systems Engineering and Information Technology *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112902987A (en) * 2021-02-02 2021-06-04 北京三快在线科技有限公司 Pose correction method and device
CN112902987B (en) * 2021-02-02 2022-07-15 北京三快在线科技有限公司 Pose correction method and device
CN113791435A (en) * 2021-11-18 2021-12-14 智道网联科技(北京)有限公司 GNSS signal abnormal value detection method and device, electronic equipment and storage medium
CN113791435B (en) * 2021-11-18 2022-04-05 智道网联科技(北京)有限公司 GNSS signal abnormal value detection method and device, electronic equipment and storage medium
CN114034307A (en) * 2021-11-19 2022-02-11 智道网联科技(北京)有限公司 Lane line-based vehicle pose calibration method and device and electronic equipment
CN114034307B (en) * 2021-11-19 2024-04-16 智道网联科技(北京)有限公司 Vehicle pose calibration method and device based on lane lines and electronic equipment
CN114136327A (en) * 2021-11-22 2022-03-04 武汉中海庭数据技术有限公司 Automatic inspection method and system for recall ratio of dotted line segment
CN115203352A (en) * 2022-09-13 2022-10-18 腾讯科技(深圳)有限公司 Lane level positioning method and device, computer equipment and storage medium
CN117330097A (en) * 2023-12-01 2024-01-02 深圳元戎启行科技有限公司 Vehicle positioning optimization method, device, equipment and storage medium
CN117723070A (en) * 2024-02-06 2024-03-19 合众新能源汽车股份有限公司 Method and device for determining map matching initial value, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2020192105A1 (en) 2020-10-01
CN111750878B (en) 2022-06-24

Similar Documents

Publication Publication Date Title
CN111750878B (en) Vehicle pose correction method and device
CN110954113B (en) Vehicle pose correction method and device
CN111046709B (en) Vehicle lane level positioning method and system, vehicle and storage medium
CN111750882B (en) Method and device for correcting vehicle pose during initialization of navigation map
CN110954112A (en) Method and device for updating matching relation between navigation map and perception image
US20220011117A1 (en) Positioning technology
CN111750881A (en) Vehicle pose correction method and device based on light pole
CN111380539A (en) Vehicle positioning and navigation method and device and related system
CN113033029A (en) Automatic driving simulation method and device, electronic equipment and storage medium
CN110174110B (en) Map corresponding method and device, electronic equipment and computer readable medium
CN113554643B (en) Target detection method and device, electronic equipment and storage medium
CN111179162A (en) Positioning initialization method in special environment and vehicle-mounted terminal
WO2022083487A1 (en) Method and apparatus for generating high definition map and computer-readable storage medium
CN115507862A (en) Lane line positioning method and device, electronic device and storage medium
CN113609148A (en) Map updating method and device
CN111444810A (en) Traffic light information identification method, device, equipment and storage medium
CN113223064A (en) Method and device for estimating scale of visual inertial odometer
CN112507857B (en) Lane line updating method, device, equipment and storage medium
CN111351497A (en) Vehicle positioning method and device and map construction method and device
CN115112125A (en) Positioning method and device for automatic driving vehicle, electronic equipment and storage medium
CN113048988B (en) Method and device for detecting change elements of scene corresponding to navigation map
CN114359862A (en) Signal lamp identification method and device, electronic equipment and storage medium
CN115249407A (en) Indicating lamp state identification method and device, electronic equipment, storage medium and product
CN111044035B (en) Vehicle positioning method and device
CN111326006B (en) Reminding method, reminding system, storage medium and vehicle-mounted terminal for lane navigation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220303

Address after: 100083 unit 501, block AB, Dongsheng building, No. 8, Zhongguancun East Road, Haidian District, Beijing

Applicant after: BEIJING MOMENTA TECHNOLOGY Co.,Ltd.

Address before: Room 28, 4 / F, block a, Dongsheng building, No. 8, Zhongguancun East Road, Haidian District, Beijing 100089

Applicant before: BEIJING CHUSUDU TECHNOLOGY Co.,Ltd.

GR01 Patent grant