CN111220154A - Vehicle positioning method, device, equipment and medium - Google Patents

Vehicle positioning method, device, equipment and medium

Info

Publication number
CN111220154A
CN111220154A (Application CN202010074220.4A)
Authority
CN
China
Prior art keywords
vehicle
target
pose
image
predicted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010074220.4A
Other languages
Chinese (zh)
Inventor
张辉
张鹏
刘奇胜
常松涛
陈聪
罗成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010074220.4A priority Critical patent/CN111220154A/en
Publication of CN111220154A publication Critical patent/CN111220154A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30Map- or contour-matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

The embodiments of the present application disclose a vehicle positioning method, apparatus, device and medium, relating to the technical field of intelligent driving. The method includes: acquiring inertial measurement data of a vehicle at the current moment, the wheel speed of the vehicle, and a target image of the environment where the vehicle is located; calculating a predicted pose of the vehicle at the current moment by using the inertial measurement data and the wheel speed of the vehicle; calling a high-precision map including information on the vehicle's current environment based on the predicted position coordinates in the predicted pose; and correcting the predicted pose by using the target image and the high-precision map to obtain the target pose of the vehicle at the current moment. The embodiments of the present application enable low-cost sensor deployment, improve vehicle positioning accuracy, and are applicable to a wide range of positioning environments.

Description

Vehicle positioning method, device, equipment and medium
Technical Field
The embodiments of the present application relate to computer technology, in particular to intelligent driving technology, and more particularly to a vehicle positioning method, apparatus, device and medium.
Background
Currently, the solutions commonly adopted for vehicle positioning include: a positioning method based on a Global Positioning System (GPS), a positioning method based on a LiDAR (Light Detection and Ranging), a positioning method based on an Ultra Wide Band (UWB) base station, and a pure visual positioning method based on a camera.
However, the GPS receivers, lidar and UWB base stations involved in the above methods are relatively expensive to deploy as hardware, and although camera-based pure visual positioning is relatively inexpensive, its positioning accuracy cannot be guaranteed.
Disclosure of Invention
The embodiments of the present application disclose a vehicle positioning method, a vehicle positioning apparatus, a vehicle positioning device and a vehicle positioning medium, so as to improve vehicle positioning accuracy with a low-cost sensor deployment.
In a first aspect, an embodiment of the present application discloses a vehicle positioning method, including:
acquiring inertial measurement data of a vehicle at the current moment, the wheel speed of the vehicle, and a target image of the environment where the vehicle is located;
calculating the predicted pose of the vehicle at the current moment by using the inertial measurement data and the wheel speed of the vehicle;
calling a high-precision map comprising the current environment information of the vehicle based on the predicted position coordinates in the predicted pose;
and correcting the predicted pose by using the target image and the high-precision map to obtain the target pose of the vehicle at the current moment.
One embodiment in the above application has the following advantages or benefits: by fusing the inertial measurement data of the vehicle at the current moment, the wheel speed of the vehicle, a target image of the environment where the vehicle is located, and a high-precision map including information on that environment, the environmental information and the high-precision map are fully utilized in the vehicle positioning process, yielding a high-precision vehicle positioning result, while the sensors that the scheme depends on, apart from the high-precision map, are very inexpensive to deploy.
Optionally, the step of correcting the predicted pose by using the target image and the high-precision map to obtain the target pose of the vehicle at the current time includes:
identifying the target image and determining target elements;
determining the position coordinates of the image feature points of the target elements on the high-precision map;
constructing a geometric constraint relation by using the pixel coordinates of the image feature points on the target image and the position coordinates of the image feature points on the high-precision map;
and correcting the predicted pose based on the geometric constraint relation to obtain the target pose of the vehicle at the current moment.
Optionally, based on the geometric constraint relationship, modifying the predicted pose to obtain the target pose of the vehicle at the current time includes:
constructing a pose constraint relation between adjacent moments of the vehicle-mounted camera by utilizing the vehicle-mounted camera for acquiring the target image and the multi-frame images acquired at the current moment and the previous moment;
and correcting the predicted pose by using the geometric constraint relation and the pose constraint relation of the vehicle-mounted camera to obtain the target pose of the vehicle at the current moment.
Optionally, identifying the target image and determining the target element includes:
identifying the target image to obtain an identification result;
according to the visual field constraint of a vehicle-mounted camera for acquiring the target image, data elimination is carried out on the recognition result;
and determining the target element from the recognition result after the data elimination.
One embodiment in the above application has the following advantages or benefits: data elimination is performed on the image recognition result according to the field-of-view constraint of the vehicle-mounted camera, which avoids the influence of perception misrecognition on the vehicle positioning result and improves the robustness and reliability of the positioning scheme.
Optionally, determining the position coordinates of the image feature point of the target element on the high-precision map includes:
re-projecting the high-precision map to an imaging plane corresponding to the target image;
determining a map element corresponding to the target element on the high-precision map after the target element is re-projected;
and determining the position coordinates of the image feature points of the target elements on the high-precision map according to the corresponding relation of the feature points between the map elements and the target elements.
One embodiment in the above application has the following advantages or benefits: by determining the position coordinates of the feature points on the image on the high-precision map, the current pose of the vehicle can be corrected by utilizing the accurate position representation of the entity in the driving environment, so that a high-precision vehicle positioning result is obtained.
Optionally, calculating a predicted pose of the vehicle at the current time by using the inertial measurement data and the wheel speed of the vehicle, including:
and calculating the predicted pose of the vehicle at the current moment by using the inertial measurement data and the wheel speed of the vehicle based on a vehicle kinematic model.
Optionally, based on the geometric constraint relationship, the predicted pose is corrected to obtain the target pose of the vehicle at the current time, and the method further includes:
and extracting feature points and tracking the feature points of the multi-frame images acquired by the vehicle-mounted camera, constructing a pose constraint relation of the vehicle between adjacent moments, and correcting the predicted pose by using the constructed pose constraint relation of the vehicle-mounted camera and the vehicle and the geometric constraint relation.
Optionally, after obtaining the inertial measurement data of the vehicle at the current moment, the wheel speed of the vehicle, and the target image of the environment where the vehicle is located, the method further includes:
performing data detection on the inertial measurement data, the wheel speed of the vehicle and the perception elements on the target image by using a preset data detection algorithm, so as to remove abnormal data.
One embodiment in the above application has the following advantages or benefits: by removing abnormal data, the accuracy of the positioning result is ensured, and the robustness and the reliability of the positioning scheme are improved.
Optionally, the method further includes:
in the vehicle positioning process, determining whether a vehicle-mounted camera used for acquiring the target image is abnormal or not by using the deviation between the predicted pixel coordinates and the observed pixel coordinates of the target feature point on the target image;
wherein the predicted pixel coordinates of the target feature point are predicted based on the pixel coordinates of the target feature point in the previous frame of image, or based on the position coordinates in the high-precision map corresponding to the target feature point in the previous frame of image.
One embodiment in the above application has the following advantages or benefits: by means of the abnormal detection of the vehicle-mounted camera, the use of abnormal image data in the vehicle positioning process is avoided, and the robustness and the reliability of the positioning scheme are improved.
In a second aspect, an embodiment of the present application further discloses a vehicle positioning device, including:
the data acquisition module is used for acquiring inertial measurement data of the vehicle at the current moment, the wheel speed of the vehicle, and a target image of the environment where the vehicle is located;
the predicted pose determination module is used for calculating the predicted pose of the vehicle at the current moment by using the inertial measurement data and the wheel speed of the vehicle;
the high-precision map acquisition module is used for calling a high-precision map comprising the current environment information of the vehicle based on the predicted position coordinates in the predicted pose;
and the data fusion module is used for correcting the predicted pose by using the target image and the high-precision map to obtain the target pose of the vehicle at the current moment.
In a third aspect, an embodiment of the present application further discloses an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a vehicle localization method as described in any of the embodiments of the present application.
In a fourth aspect, embodiments of the present application further disclose a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the vehicle positioning method according to any of the embodiments of the present application.
According to the technical scheme of the embodiments of the present application, a high-precision vehicle positioning result is obtained by fusing the inertial measurement data of the vehicle at the current moment, the wheel speed of the vehicle, a target image of the environment where the vehicle is located, and a high-precision map including information on that environment.
Compared with existing positioning schemes, the scheme of the embodiments of the present application also has the following advantages: 1) because the scheme does not depend on a global positioning system, the positioning result is not affected by GPS signal quality, so the scheme applies to a wider range of positioning environments; even in parking scenarios with poor GPS signal quality, such as indoor or underground scenes, the scheme remains usable and its positioning accuracy is guaranteed; 2) because no lidar is needed, the carrier pose does not have to be tracked continuously, which avoids point-cloud mismatching caused by pose-tracking failure and the resulting vehicle positioning errors; 3) the scheme does not require positioning base stations deployed on site, so the positioning process is not limited by the site, i.e., the driving environment; 4) the scheme overcomes the drift error of existing pure visual positioning schemes and improves vehicle positioning accuracy, so that high-precision positioning can be achieved in any driving scenario, such as an open road scene. Other effects of the above alternatives will be described below with reference to specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a flow chart of a vehicle locating method disclosed in accordance with an embodiment of the present application;
FIG. 2 is a flow chart of another vehicle location method disclosed in accordance with an embodiment of the present application;
FIG. 3 is a flow chart of yet another vehicle locating method disclosed in accordance with an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a vehicle positioning device according to an embodiment of the present disclosure;
fig. 5 is a block diagram of an electronic device disclosed according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details should be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
Fig. 1 is a flowchart of a vehicle positioning method disclosed in an embodiment of the present application, where the embodiment may be applied to a scenario of vehicle autonomous positioning, such as automatic parking positioning of an autonomous vehicle. The method of the embodiment may be performed by a vehicle positioning apparatus, which may be implemented in software and/or hardware, and may be integrated on any electronic device with computing capability, such as an in-vehicle device.
As shown in fig. 1, the vehicle positioning method disclosed in the present embodiment may include:
s101, obtaining inertia measurement data of the vehicle at the current moment, the wheel speed of the vehicle and a target image of the environment where the vehicle is located.
In this embodiment, an Inertial Measurement Unit (IMU), an odometer or a wheel speed meter, and a camera are deployed on the vehicle, and are respectively used for acquiring the inertial measurement data, the wheel speed of the vehicle, and the target image of the environment where the vehicle is located at the current moment, where the target image is the image currently captured by the vehicle-mounted camera and used for the current vehicle positioning.
Optionally, after obtaining inertial measurement data of the vehicle at the current time, a wheel speed of the vehicle, and a target image of an environment where the vehicle is located, the method of this embodiment further includes:
and performing data detection on the inertia measurement data, the wheel speed of the vehicle and the sensing elements on the target image by using a preset data detection algorithm to remove abnormal data, so that the use of the abnormal data is avoided, and the final vehicle positioning result is inaccurate. The preset data detection algorithm comprises a chi-square detection algorithm, for example, in the parking process, when a vehicle slides down a slope by a forward gear, the data output by a wheel speed meter is positive, when the vehicle slides down the slope by a reverse gear, the data output by the wheel speed meter is negative, and whether abnormal wheel speed data exist is judged by detecting the positive signs and the negative signs of the data in different vehicle states; the image shot by the camera is compared with the high-precision map of the current position of the vehicle, whether the object displayed on the image, namely the perception element, is obviously deviated from the corresponding map element position on the high-precision map is judged, and therefore whether the image data is abnormal is determined. And if any sensor including the vehicle-mounted camera, the wheel speed meter and the inertia measuring unit continuously outputs abnormal data within a specific time, sending out a sensor abnormality prompt, isolating the abnormal sensor, and waiting for repairing the abnormal sensor.
And S102, calculating the predicted pose of the vehicle at the current moment by using the inertial measurement data and the wheel speed of the vehicle.
The inertial measurement data at the current moment include the three-axis attitude angles and the acceleration of the vehicle. Combined with the wheel speed of the vehicle and using vehicle kinematics, the predicted pose of the vehicle at the current moment, including its 6-degree-of-freedom pose, can be obtained in real time, and the vehicle speed can also be obtained. Optionally, the predicted pose at the current moment may be calculated from the inertial measurement data and the wheel speed based on a vehicle kinematic model. The vehicle kinematic model reflects the relationship between vehicle position, speed, acceleration, etc. and time; this embodiment does not limit the specific form of the model, which in practice can be set as required, for example by extending an existing bicycle model.
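As a hedged illustration of the dead-reckoning idea, a single step of a planar kinematic model might look like the sketch below. The patent predicts a full 6-DoF pose; this reduces it to 2-D position plus heading, and all names and the use of an IMU yaw rate are assumptions for clarity.

```python
import math

def predict_pose(x, y, yaw, wheel_speed, yaw_rate, dt):
    """One dead-reckoning step of a planar kinematic model.

    Hypothetical sketch: wheel_speed comes from the wheel speed meter,
    yaw_rate from the IMU gyroscope. Integrating over the time step dt
    advances position along the current heading and updates the heading.
    """
    x += wheel_speed * math.cos(yaw) * dt
    y += wheel_speed * math.sin(yaw) * dt
    yaw += yaw_rate * dt
    return x, y, yaw
```

Repeated over successive sensor epochs, this produces the predicted pose that the high-precision-map correction in S103/S104 then refines.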
And S103, calling a high-precision map comprising the current environment information of the vehicle based on the predicted position coordinates in the predicted pose.
Based on the predicted position coordinates, a high-precision map including information on the environment where the vehicle is currently located can be called from the local or network. The high-precision map comprises detailed indoor and outdoor environment information, has high data precision, can assist vehicles, particularly automatic driving vehicles, and realizes a high-precision positioning function. The high-precision map may include high-precision pose data of the environment entity, and information such as a geometric structure and a size of the environment entity, and the high-precision map includes, for example, a lane line, a shape of the lane line, a type and a color of the lane line, and a position coordinate of any point on the lane line.
And S104, correcting the predicted pose by using the target image and the high-precision map to obtain the target pose of the vehicle at the current moment.
For the vehicle's current environment, an association between the target image and the high-precision map can be established by any available data fusion method for vehicle positioning, such as a tightly coupled Kalman filter algorithm, and the predicted pose is corrected with this association, thereby fusing the image data, the high-precision map, the inertial measurement data and the wheel speed of the vehicle to obtain a high-precision vehicle positioning result.
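The patent names a tightly coupled Kalman filter but gives no formulas; as a minimal sketch of the correction idea only, a scalar Kalman measurement update is shown below. The scalar state and the function name are assumptions; the real filter operates on the full pose state with reprojection residuals as measurements.

```python
def kalman_update(x_pred, p_pred, z, r):
    """Scalar Kalman measurement update (illustrative only).

    x_pred: one component of the predicted pose, with variance p_pred.
    z: an observation of that component derived from the image /
       high-precision-map constraint, with noise variance r.
    """
    k = p_pred / (p_pred + r)          # Kalman gain
    x = x_pred + k * (z - x_pred)      # corrected estimate
    p = (1.0 - k) * p_pred             # reduced uncertainty
    return x, p
```

The corrected state weighs the dead-reckoned prediction against the map-based observation according to their uncertainties, which is the essence of the fusion described above.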
For each sensor deployed on the vehicle in a normal operating state, the deviation between the sensor's current observed value and the predicted value calculated from the vehicle's previous state satisfies a specific probability model, which measures the degree of deviation between prediction and observation. This embodiment does not specifically limit the concrete form of the probability model; for example, it may be a probability model determined by a covariance matrix, in which case the normalized deviation between the observed and predicted values follows a chi-square distribution. When a sensor observation clearly violates the probability model, the observation, and hence the corresponding sensor, can be judged abnormal. Accordingly, taking the vehicle-mounted camera as an example, the method of this embodiment further includes: in the vehicle positioning process, determining whether the vehicle-mounted camera used to acquire the target image is abnormal by using the deviation between the predicted pixel coordinates and the observed pixel coordinates of a target feature point on the target image. The predicted pixel coordinates of the target feature point are predicted from its pixel coordinates in the previous frame of image, or from the position coordinates in the high-precision map corresponding to the target feature point in the previous frame, combined with the change in the vehicle's motion state. The observed pixel coordinates are the actual pixel coordinates of the target feature point on the image captured at the current moment.
The target feature point may be any feature point on the target image, and the feature point refers to a point where the image gray value changes drastically or a point with a large curvature on the edge of the image. The target feature points have corresponding position coordinates on the high-precision map, and a numerical mapping relation can be established between the pixel coordinates and the position coordinates.
In the embodiment, whether the vehicle-mounted camera is abnormal or not is judged by comparing the predicted pixel coordinates and the observed pixel coordinates of the target feature points, so that a detection mechanism for detecting whether the camera is abnormal or not is added in the vehicle positioning process, the robustness and the reliability of a positioning scheme are improved, and the phenomenon that the vehicle positioning result is inaccurate due to the utilization of abnormal image data is avoided.
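The camera anomaly check above can be sketched as a chi-square gate on the normalized pixel deviation. This is one common realization consistent with the probability model the patent leaves open; the pixel noise sigma and the gate value (the 99% chi-square quantile with 2 degrees of freedom) are assumed thresholds, not values from the patent.

```python
def camera_anomalous(pred_px, obs_px, sigma_px=2.0, gate=9.21):
    """Flag the camera as anomalous when the squared normalized pixel
    deviation exceeds a chi-square gate.

    pred_px / obs_px are (u, v) pixel coordinates of the target feature
    point: predicted from the previous frame (or the map), and observed
    on the current frame. gate=9.21 is chi-square(2 DoF, 99%).
    """
    du = (obs_px[0] - pred_px[0]) / sigma_px
    dv = (obs_px[1] - pred_px[1]) / sigma_px
    return du * du + dv * dv > gate
```

A small deviation passes the gate and the frame is used; a gross deviation flags the camera, so its data can be excluded from the fusion until the sensor recovers.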
According to the technical scheme of this embodiment, a high-precision vehicle positioning result is obtained by fusing the inertial measurement data of the vehicle at the current moment, the wheel speed of the vehicle, the target image of the environment where the vehicle is located, and the high-precision map including information on that environment.
Compared with existing positioning schemes, the scheme of this embodiment also has the following advantages: 1) because the scheme does not depend on a global positioning system, the positioning result is not affected by GPS signal quality, so the scheme applies to a wider range of positioning environments; even in parking scenarios with poor GPS signal quality, such as indoor or underground scenes, the scheme remains usable and its positioning accuracy is guaranteed; 2) because no lidar is needed, the carrier pose does not have to be tracked continuously, which avoids point-cloud mismatching caused by pose-tracking failure and the resulting vehicle positioning errors; 3) the scheme does not require positioning base stations deployed on site, so the positioning process is not limited by the site, i.e., the driving environment; 4) the scheme overcomes the drift error of existing pure visual positioning schemes and improves vehicle positioning accuracy, so that high-precision positioning can be achieved in any driving scenario, such as an open road scene.
Fig. 2 is a flow chart of another vehicle positioning method disclosed in the embodiment of the present application, which is further optimized and expanded based on the above technical solution, and can be combined with the above various alternative embodiments. As shown in fig. 2, the method of this embodiment may include:
s201, obtaining inertia measurement data of the vehicle at the current moment, the wheel speed of the vehicle and a target image of the environment where the vehicle is located.
S202, calculating the predicted pose of the vehicle at the current moment by using the inertia measurement data and the wheel speed of the vehicle.
And S203, calling a high-precision map comprising the current environment information of the vehicle based on the predicted position coordinates in the predicted pose.
And S204, identifying the target image and determining target elements.
Specifically, any image recognition technology can be used to recognize the target image at the current moment, obtain visual semantic information, and determine the target elements. The target elements may be one or more of the perception elements on the target image, including artificial markers, such as two-dimensional code markers with a positioning function, and natural positioning elements including but not limited to: lane lines, parking space corner points, parking space lines, traffic signboards, speed bumps, road surface arrows and the like. In practical applications, the type of target element used may be predetermined, which this embodiment does not specifically limit. For two-dimensional code recognition, the positioning information in the code can be acquired by recognizing it and then used to correct the predicted pose of the vehicle.
Optionally, identifying the target image and determining the target element includes:
identifying the target image to obtain an identification result;
according to the visual field constraint of a vehicle-mounted camera for acquiring a target image, data elimination is carried out on the recognition result;
and determining a target element from the recognition result after the data elimination.
For example, perception data on the target image that lie outside the camera's field of view at the current moment are removed, as are perception data beyond the camera's sensing distance and perception data behind an obstacle within the field of view, thereby avoiding the influence of perception misrecognition on the vehicle positioning result.
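The field-of-view and sensing-distance checks above can be sketched in 2-D as follows. This is a simplified, hypothetical culling test: the FOV angle and range are placeholder values, and the occlusion check behind obstacles is omitted.

```python
import math

def in_camera_view(pt, cam_pos, cam_yaw, fov_deg=90.0, max_range=30.0):
    """Return True when a detection at world point pt is consistent with
    the camera's horizontal field of view and sensing range.

    pt / cam_pos are (x, y) in the world frame; cam_yaw is the camera
    heading in radians. Detections outside the FOV cone or beyond
    max_range are perception errors and should be eliminated.
    """
    dx, dy = pt[0] - cam_pos[0], pt[1] - cam_pos[1]
    if math.hypot(dx, dy) > max_range:
        return False  # beyond the camera's sensing distance
    bearing = math.atan2(dy, dx) - cam_yaw
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to (-pi, pi]
    return abs(bearing) <= math.radians(fov_deg) / 2.0
```

Detections failing this test are dropped from the recognition result before the target elements are selected.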
In addition, the operations S202 to S204 are not limited to a strict execution sequence, and the logic sequence shown in fig. 2 should not be understood as a specific limitation to the embodiment.
And S205, determining the position coordinates of the image feature points of the target elements on the high-precision map.
After the target elements are identified on the target image, map matching can be used to match corresponding map elements on the current high-precision map based on the category semantics, geometric features, texture features, spatial positions and other information of the target elements. That is, the same entity in the driving environment is associated between the target image and the high-precision map, and the position coordinates of the target feature points on the high-precision map can then be determined.
Optionally, determining the position coordinates of the image feature point of the target element on the high-precision map includes:
the high-precision map is re-projected to an imaging plane corresponding to the target image, namely the high-precision map is re-projected to an imaging plane of a vehicle-mounted camera for shooting the target image at the current moment based on the current pose of the vehicle;
determining a map element corresponding to the target element on the high-precision map after the re-projection;
and determining the position coordinates of the image feature points of the target elements on the high-precision map according to the corresponding relation of the feature points between the map elements and the target elements.
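The re-projection of a map point into the imaging plane can be sketched with a standard pinhole camera model; the intrinsic matrix `K` and the world-to-camera pose convention below are assumptions for illustration, not values from the disclosure.

```python
import numpy as np

def reproject_map_point(p_world, R_wc, t_wc, K):
    """Project a 3-D map point (world frame) into the image of a pinhole
    camera whose pose in the world frame is (R_wc, t_wc). Returns pixel
    coordinates (u, v), or None if the point is behind the camera."""
    # Transform the world point into the camera frame.
    p_cam = R_wc.T @ (p_world - t_wc)
    if p_cam[2] <= 0:                # behind the imaging plane
        return None
    uv = K @ (p_cam / p_cam[2])      # perspective division, then intrinsics
    return uv[:2]

# Example: camera at the origin looking along +z, map point 10 m ahead.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
uv = reproject_map_point(np.array([0.0, 0.0, 10.0]), np.eye(3), np.zeros(3), K)
```

Once map elements are projected this way, matching them to the target elements gives the feature-point correspondence used in the next step.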
S206, constructing a geometric constraint relation by using the pixel coordinates of the image feature points on the target image and the position coordinates of the image feature points on the high-precision map.
And S207, correcting the predicted pose based on the geometric constraint relation to obtain the target pose of the vehicle at the current moment.
The position coordinates in the high-precision map are absolute positions in the world coordinate system. By associating the target elements on the target image with the high-precision map, the accurate poses of those elements in the driving environment can be determined. The predicted pose of the vehicle is then corrected through the constructed geometric constraint relationship, that is, the known accurate poses of the target elements in the driving environment are used to correct the current predicted pose of the vehicle, so that a high-precision vehicle positioning result is obtained.
Specifically, in this embodiment, the perceptually identified elements on the target image may be associated and matched with the high-precision map data according to visual semantic information, element geometry, relative spatial relationships, and the principle of minimum reprojection error. Illustratively, perception data is eliminated according to the relative position relationships between the different identified natural positioning elements; for example, the natural positioning elements identified on the target image should obey the near-large, far-small imaging rule, and their left-right and top-bottom relative positions in the real world should remain unchanged, otherwise the perception data violating these requirements is eliminated. Iterative screening is also performed according to the reprojection error: when the reprojection error is large, the corresponding poor perception data is eliminated. Finally, the target elements successfully associated between the target image and the high-precision map are cached for later use.
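The iterative screening by reprojection error might look like the sketch below. The median-based threshold and the round count are illustrative choices; the patent does not specify the screening rule.

```python
import numpy as np

def screen_by_reprojection(observed_px, projected_px, factor=3.0, rounds=3):
    """Iteratively discard image/map correspondences whose reprojection
    error (pixel distance between the observed feature and the map element
    projected into the image) is far above the median error of the
    surviving correspondences."""
    err = np.linalg.norm(np.asarray(observed_px, float)
                         - np.asarray(projected_px, float), axis=1)
    keep = np.ones(len(err), dtype=bool)
    for _ in range(rounds):
        threshold = factor * np.median(err[keep])
        survivors = keep & (err <= threshold)
        if survivors.sum() == keep.sum():   # no change: converged
            break
        keep = survivors
    return keep
```

Correspondences that survive the screening are the ones cached and later fed into the geometric constraint.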
Further, based on the geometric constraint relationship, the predicted pose is corrected to obtain the target pose of the vehicle at the current moment, including:
constructing a pose constraint relationship of the vehicle-mounted camera between adjacent moments by using the vehicle-mounted camera that acquires the target image and the multi-frame images it acquired at the current moment and the previous moment, wherein the pose constraint relationship reflects the pose change of the vehicle-mounted camera between the adjacent moments;
and correcting the predicted pose by using the geometric constraint relation and the pose constraint relation of the vehicle-mounted camera to obtain the target pose of the vehicle at the current moment.
Further, correcting the predicted pose based on the geometric constraint relationship to obtain the target pose of the vehicle at the current moment further includes: performing feature point extraction and feature point tracking on the multi-frame images acquired by the vehicle-mounted camera, constructing a pose constraint relationship of the vehicle between adjacent moments, and correcting the predicted pose by using the constructed pose constraint relationships of the vehicle-mounted camera and the vehicle together with the geometric constraint relationship, so as to further ensure the accuracy of vehicle positioning. The extraction and tracking of image feature points may be implemented with any available image feature extraction and tracking technique, and this embodiment imposes no particular limitation.
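One common way to turn tracked feature points into an inter-frame pose constraint is the Procrustes/SVD (Kabsch) solution for the rigid motion between two point sets. The 2-D version below is a simplified sketch; a real system works with 3-D points, calibrated cameras, and robust estimation.

```python
import numpy as np

def relative_pose_2d(pts_prev, pts_curr):
    """Estimate the 2-D rigid motion (R, t) mapping tracked feature points
    from the previous frame onto the current frame via the SVD solution."""
    P = np.asarray(pts_prev, float)
    Q = np.asarray(pts_curr, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)        # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:         # reject reflections, keep a rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t
```

The recovered (R, t) between adjacent frames serves as the pose constraint that, together with the geometric constraint, corrects the predicted pose.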
FIG. 3 shows, as an example, a flow chart of yet another vehicle positioning method disclosed in this embodiment. As shown in fig. 3, the data input sources in this embodiment include an inertial measurement unit, a wheel speed meter, a high-precision map, and a vehicle-mounted camera. By recognizing the environment image, the perceptually identified natural positioning elements included in the image are obtained; feature point extraction and tracking are performed on the environment image, and inter-frame pose constraints of the vehicle between adjacent moments are constructed for subsequent correction of the predicted pose. The high-precision map data is associated with the identified natural positioning elements, and operations such as perception data elimination may be carried out during the association. During positioning, as the motion state of the vehicle changes, the predicted pose of the vehicle is recursively updated in real time using the inertial measurement data and the vehicle wheel speed. In the positioning processing itself, the perception data and position coordinates obtained after the image has been successfully associated with the high-precision map, the wheel speed data, the inertial measurement data, and the feature points extracted and tracked on the environment image are aligned in time. State estimation, i.e. multi-source data fusion, is then performed on the aligned multi-source data at the back end to determine the high-precision positioning result of the vehicle; the fusion method includes, but is not limited to, a tightly coupled Kalman filtering method. Before the high-precision positioning result is determined, anomaly detection and isolation may further be performed on the vehicle-mounted sensors, including the inertial measurement unit, the wheel speed meter, and the vehicle-mounted camera, to prevent abnormal data from affecting the positioning result. In addition, in fig. 3, the horizontal links may be executed in parallel, and the vertical links are executed in series.
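The predict-then-correct fusion loop can be sketched with a minimal extended Kalman filter over a planar pose state [x, y, yaw]: wheel speed and IMU yaw rate drive the prediction, and an absolute position fix from image/HD-map association drives the correction. The state layout and noise values are illustrative assumptions, not the patent's tightly coupled formulation.

```python
import numpy as np

class TinyFusionEKF:
    """Minimal EKF: predict the pose from wheel speed and yaw rate,
    then correct it with an absolute position fix from map matching."""

    def __init__(self):
        self.x = np.zeros(3)                       # state [x, y, yaw]
        self.P = np.eye(3)                         # state covariance

    def predict(self, v, yaw_rate, dt):
        th = self.x[2]
        self.x += np.array([v * np.cos(th) * dt,   # dead-reckoning step
                            v * np.sin(th) * dt,
                            yaw_rate * dt])
        F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                      [0.0, 1.0,  v * np.cos(th) * dt],
                      [0.0, 0.0,  1.0]])           # motion Jacobian
        Q = np.diag([0.05, 0.05, 0.01])            # process noise (assumed)
        self.P = F @ self.P @ F.T + Q

    def update_position(self, z, r=0.25):
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])            # we observe (x, y) only
        R = r * np.eye(2)                          # measurement noise (assumed)
        y = z - H @ self.x                         # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)        # Kalman gain
        self.x += K @ y
        self.P = (np.eye(3) - K @ H) @ self.P
```

After each update the corrected state is the target pose; the next prediction then recurses from it, matching the real-time recursive update described above.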
According to the technical scheme of this embodiment, the environment information and the high-precision map are fully utilized in the vehicle positioning process, and a high-precision vehicle positioning result is obtained, by fusing the inertial measurement data of the vehicle at the current moment, the vehicle wheel speed, the target image of the environment where the vehicle is located, and the high-precision map comprising the information of that environment. Because the vehicle-mounted sensing devices the whole positioning process relies on, such as the inertial measurement unit and the camera, are low-cost sensing devices relative to a global positioning system, a laser radar and the like, this embodiment achieves low-cost sensor deployment while improving the positioning precision of the vehicle. In addition, the scheme provides abnormal data detection and isolation, so the positioning scheme has higher robustness and reliability.
As shown in fig. 4, the vehicle localization apparatus 300 disclosed in this embodiment may include a data acquisition module 301, a predicted pose determination module 302, a high-precision map acquisition module 303, and a data fusion module 304, wherein:
the data acquisition module 301 is configured to acquire inertial measurement data of a vehicle at a current time, a wheel speed of the vehicle, and a target image of an environment where the vehicle is located;
the predicted pose determination module 302 is used for calculating the predicted pose of the vehicle at the current moment by using the inertia measurement data and the wheel speed of the vehicle;
the high-precision map acquisition module 303 is configured to call a high-precision map including information of the current environment of the vehicle based on the predicted position coordinates in the predicted pose;
and the data fusion module 304 is configured to correct the predicted pose by using the target image and the high-precision map, so as to obtain the target pose of the vehicle at the current moment.
Optionally, the data fusion module 304 includes:
the image recognition unit is used for recognizing the target image and determining target elements;
the image and map association unit is used for determining the position coordinates of the image feature points of the target elements on the high-precision map;
the geometric constraint relation construction unit is used for constructing a geometric constraint relation by utilizing the pixel coordinates of the image feature points on the target image and the position coordinates of the image feature points on the high-precision map;
and the target pose determining unit is used for correcting the predicted pose based on the geometric constraint relation to obtain the target pose of the vehicle at the current moment.
Optionally, the target pose determining unit includes:
the camera pose constraint relation construction subunit is used for constructing a pose constraint relation between adjacent moments of the vehicle-mounted camera by utilizing the vehicle-mounted camera for acquiring the target image and the multi-frame images acquired at the current moment and the previous moment;
and the pose correction subunit is used for correcting the predicted pose by utilizing the geometric constraint relation and the pose constraint relation of the vehicle-mounted camera to obtain the target pose of the vehicle at the current moment.
Optionally, the image recognition unit includes:
the image identification subunit is used for identifying the target image to obtain a recognition result;
the data removing subunit is used for removing data of the recognition result according to the visual field constraint of the vehicle-mounted camera for acquiring the target image;
and the target element determining subunit is used for determining the target element from the recognition result after the data elimination.
Optionally, the image and map associating unit includes:
the map re-projection subunit is used for re-projecting the high-precision map to an imaging plane corresponding to the target image;
the map element determining subunit is used for determining the map element corresponding to the target element on the high-precision map after the re-projection;
and the position coordinate determining subunit is used for determining the position coordinates of the image feature points of the target elements on the high-precision map according to the corresponding relationship of the feature points between the map elements and the target elements.
Optionally, the predicted pose determination module 302 is specifically configured to:
and calculating the predicted pose of the vehicle at the current moment by using the inertia measurement data and the wheel speed of the vehicle based on the vehicle kinematics model.
Optionally, the target pose determining unit further includes:
the vehicle pose constraint relation construction subunit is used for extracting and tracking the feature points of the multi-frame images acquired by the vehicle-mounted camera and constructing the pose constraint relation of the vehicle at adjacent moments;
correspondingly, the pose correction subunit is specifically configured to: and correcting the predicted pose by using the constructed pose constraint relation and the geometric constraint relation of the vehicle-mounted camera and the vehicle.
Optionally, the apparatus of this embodiment further includes:
the data detection module is configured to perform data detection on the inertia measurement data, the wheel speed of the vehicle, and a sensing element on the target image by using a preset data detection algorithm after the data acquisition module 301 performs an operation of acquiring the inertia measurement data, the wheel speed of the vehicle, and the target image of the environment where the vehicle is located at the current time, so as to remove abnormal data.
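The "preset data detection algorithm" is not specified further; one plausible minimal form is a range and jump check on consecutive sensor readings, sketched below with assumed thresholds.

```python
def detect_anomalies(samples, max_abs=60.0, max_jump=5.0):
    """Flag wheel-speed (or IMU) samples that are out of physical range or
    that jump implausibly from the last accepted reading. The thresholds
    are illustrative assumptions, not values from the disclosure."""
    flags = []
    prev = None
    for s in samples:
        bad = abs(s) > max_abs or (prev is not None and abs(s - prev) > max_jump)
        flags.append(bad)
        if not bad:
            prev = s     # only accepted samples anchor the jump check
    return flags
```

Flagged samples would be isolated before the fusion step, so a single faulty reading cannot corrupt the positioning result.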
Optionally, the apparatus of this embodiment further includes:
the camera detection module is used for determining, in the vehicle positioning process, whether the vehicle-mounted camera used for acquiring the target image is abnormal, by using the deviation between the predicted pixel coordinates and the observed pixel coordinates of the target feature point on the target image;
wherein the predicted pixel coordinates of the target feature point are predicted based on the pixel coordinates of the target feature point in the previous frame of image, or based on the position coordinates on the high-precision map corresponding to the target feature point in the previous frame of image.
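A minimal form of this camera check is a threshold on the mean pixel deviation between predicted and observed feature locations; the threshold value is an illustrative assumption.

```python
import numpy as np

def camera_abnormal(predicted_px, observed_px, max_mean_err=10.0):
    """Return True if the mean pixel distance between predicted feature
    coordinates (propagated from the previous frame, or re-projected from
    the HD map) and the observed ones exceeds an assumed threshold."""
    err = np.linalg.norm(np.asarray(predicted_px, float)
                         - np.asarray(observed_px, float), axis=1)
    return float(err.mean()) > max_mean_err
```

When the check fires, the camera's measurements would be isolated from the fusion until the deviation returns to normal.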
The vehicle positioning device 300 disclosed in the embodiment of the present application can execute the vehicle positioning method disclosed in the embodiment of the present application, and has functional modules and beneficial effects corresponding to the execution method. Reference may be made to the description of any method embodiment of the present application for details not explicitly described in this embodiment.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
As shown in fig. 5, fig. 5 is a block diagram of an electronic device for implementing a vehicle positioning method in an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the embodiments of the present application described and/or claimed herein. Typically, the electronic device disclosed in the embodiments of the present application may be an in-vehicle device.
As shown in fig. 5, the electronic apparatus includes: one or more processors 401, memory 402, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information for a Graphical User Interface (GUI) on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations, e.g., as a server array, a group of blade servers, or a multi-processor system. In fig. 5, one processor 401 is taken as an example.
The memory 402 is a non-transitory computer readable storage medium provided by the embodiments of the present application. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the vehicle positioning method provided by the embodiment of the application. The non-transitory computer-readable storage medium of the embodiments of the present application stores computer instructions for causing a computer to perform the vehicle positioning method provided by the embodiments of the present application.
The memory 402, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the vehicle positioning method in the embodiment of the present application, for example, the data acquisition module 301, the predicted pose determination module 302, the high-precision map acquisition module 303, and the data fusion module 304 shown in fig. 4. The processor 401 executes various functional applications of the server and data processing by executing non-transitory software programs, instructions and modules stored in the memory 402, so as to implement the vehicle positioning method in the above-described method embodiment.
The memory 402 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device of the vehicle positioning method, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 402 may optionally include a memory remotely located from the processor 401, and these remote memories may be connected via a network to an electronic device for implementing the vehicle localization method in this embodiment. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for implementing the vehicle positioning method in the embodiment may further include: an input device 403 and an output device 404. The processor 401, the memory 402, the input device 403 and the output device 404 may be connected by a bus or other means, and fig. 5 illustrates an example of a connection by a bus.
The input device 403 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus for implementing the vehicle localization method in the present embodiment, such as an input device of a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or the like. The output device 404 may include a display device, an auxiliary lighting device such as a Light Emitting Diode (LED), a tactile feedback device, and the like; the tactile feedback device is, for example, a vibration motor or the like. The Display device may include, but is not limited to, a Liquid Crystal Display (LCD), an LED Display, and a plasma Display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, Integrated circuitry, Application Specific Integrated Circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs, also known as programs, software applications, or code, include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or Device for providing machine instructions and/or data to a Programmable processor, such as a magnetic disk, optical disk, memory, Programmable Logic Device (PLD), including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device for displaying information to a user, for example, a Cathode Ray Tube (CRT) or an LCD monitor; and a keyboard and a pointing device, such as a mouse or a trackball, by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here, or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, a high-precision vehicle positioning result is obtained by fusing the inertial measurement data of the vehicle at the current moment, the wheel speed of the vehicle, the target image of the environment where the vehicle is located, and the high-precision map comprising the information of the environment where the vehicle is located.
Compared with existing positioning schemes, the scheme of the embodiment of the application further has the following advantages: 1) because the scheme does not depend on a global positioning system, the positioning result is not affected by GPS signal quality, and the method can be applied in a wider range of positioning environments; even in parking scenes with poor GPS signal quality, such as indoor or underground scenes, the scheme remains usable and can guarantee the positioning precision; 2) because no laser radar is needed, the carrier pose does not need to be continuously tracked, which avoids point cloud mismatching caused by pose tracking failure and the resulting vehicle positioning errors; 3) because no positioning base station needs to be deployed at the site, the positioning process is not limited by the site, i.e. the driving environment; 4) the scheme can solve the drift error of existing purely visual positioning schemes and improves the positioning precision of the vehicle, so that high-precision vehicle positioning can be achieved in any driving scene, such as an open road scene.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved, and no limitation is imposed herein.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (20)

1. A vehicle positioning method, characterized by comprising:
acquiring inertia measurement data of a vehicle at the current moment, the wheel speed of the vehicle and a target image of the environment where the vehicle is located;
calculating the predicted pose of the vehicle at the current moment by using the inertial measurement data and the wheel speed of the vehicle;
calling a high-precision map comprising the current environment information of the vehicle based on the predicted position coordinates in the predicted pose;
and correcting the predicted pose by using the target image and the high-precision map to obtain the target pose of the vehicle at the current moment.
2. The method of claim 1, wherein correcting the predicted pose using the target image and the high-precision map to obtain a target pose of the vehicle at the current time comprises:
identifying the target image and determining target elements;
determining the position coordinates of the image feature points of the target elements on the high-precision map;
constructing a geometric constraint relation by using the pixel coordinates of the image feature points on the target image and the position coordinates of the image feature points on the high-precision map;
and correcting the predicted pose based on the geometric constraint relation to obtain the target pose of the vehicle at the current moment.
3. The method of claim 2, wherein correcting the predicted pose based on the geometric constraint relationship to obtain the target pose of the vehicle at the current time comprises:
constructing a pose constraint relation between adjacent moments of the vehicle-mounted camera by utilizing the vehicle-mounted camera for acquiring the target image and the multi-frame images acquired at the current moment and the previous moment;
and correcting the predicted pose by using the geometric constraint relation and the pose constraint relation of the vehicle-mounted camera to obtain the target pose of the vehicle at the current moment.
4. The method of claim 2, wherein identifying the target image and determining the target element comprises:
identifying the target image to obtain a recognition result;
according to the visual field constraint of a vehicle-mounted camera for acquiring the target image, data elimination is carried out on the recognition result;
and determining the target element from the recognition result after the data elimination.
5. The method of claim 2, wherein determining the location coordinates of the image feature point of the target element on the high-precision map comprises:
re-projecting the high-precision map to an imaging plane corresponding to the target image;
determining a map element corresponding to the target element on the high-precision map after the re-projection;
and determining the position coordinates of the image feature points of the target elements on the high-precision map according to the corresponding relation of the feature points between the map elements and the target elements.
6. The method of claim 1, wherein using the inertial measurement data and the vehicle wheel speed to calculate a predicted pose of the vehicle at a current time comprises:
and calculating the predicted pose of the vehicle at the current moment by using the inertial measurement data and the wheel speed of the vehicle based on a vehicle kinematic model.
7. The method of claim 3, wherein the predicted pose is modified based on the geometric constraint relationship to obtain the target pose of the vehicle at the current time, further comprising:
and extracting feature points and tracking the feature points of the multi-frame images acquired by the vehicle-mounted camera, constructing a pose constraint relation of the vehicle between adjacent moments, and correcting the predicted pose by using the constructed pose constraint relation of the vehicle-mounted camera and the vehicle and the geometric constraint relation.
8. The method of claim 1, wherein after said obtaining inertial measurement data of the vehicle at the current time, the vehicle wheel speed, and the target image of the environment in which the vehicle is located, the method further comprises:
and performing data detection on the inertia measurement data, the wheel speed of the vehicle and the sensing elements on the target image by using a preset data detection algorithm to remove abnormal data.
9. The method of claim 1, further comprising:
in the vehicle positioning process, determining whether a vehicle-mounted camera used for acquiring the target image is abnormal or not by using the deviation between the predicted pixel coordinates and the observed pixel coordinates of the target feature point on the target image;
wherein the predicted pixel coordinates of the target feature point are predicted based on the pixel coordinates of the target feature point in the previous frame of image, or based on the position coordinates on the high-precision map corresponding to the target feature point in the previous frame of image.
10. A vehicle positioning device, comprising:
the data acquisition module is used for acquiring inertial measurement data of the vehicle at the current moment, the wheel speed of the vehicle and a target image of the environment where the vehicle is located;
the predicted pose determination module is used for calculating the predicted pose of the vehicle at the current moment by using the inertial measurement data and the wheel speed of the vehicle;
the high-precision map acquisition module is used for calling a high-precision map comprising the current environment information of the vehicle based on the predicted position coordinates in the predicted pose;
and the data fusion module is used for correcting the predicted pose by using the target image and the high-precision map to obtain the target pose of the vehicle at the current moment.
11. The apparatus of claim 10, wherein the data fusion module comprises:
the image recognition unit is used for recognizing the target image and determining a target element;
an image and map association unit for determining the position coordinates of the image feature points of the target elements on the high-precision map;
the geometric constraint relation construction unit is used for constructing a geometric constraint relation by utilizing the pixel coordinates of the image feature points on the target image and the position coordinates of the image feature points on the high-precision map;
and the target pose determining unit is used for correcting the predicted pose based on the geometric constraint relation to obtain the target pose of the vehicle at the current moment.
12. The apparatus according to claim 11, characterized in that the target pose determination unit includes:
the camera pose constraint relation construction subunit is used for constructing a pose constraint relation between adjacent moments of the vehicle-mounted camera by utilizing the vehicle-mounted camera for acquiring the target image and the multi-frame images acquired at the current moment and the previous moment;
and the pose correction subunit is used for correcting the predicted pose by using the geometric constraint relation and the pose constraint relation of the vehicle-mounted camera to obtain the target pose of the vehicle at the current moment.
13. The apparatus according to claim 11, wherein the image recognition unit comprises:
the image identification subunit is used for identifying the target image to obtain an identification result;
the data removing subunit is used for performing data elimination on the recognition result according to the field-of-view constraint of the vehicle-mounted camera used for acquiring the target image;
and the target element determining subunit is used for determining the target element from the recognition result subjected to the data elimination.
14. The apparatus of claim 11, wherein the image-to-map associating unit comprises:
the map reprojection subunit is used for reprojecting the high-precision map onto the imaging plane corresponding to the target image;
the map element determining subunit is used for determining the map element corresponding to the target element on the high-precision map after the re-projection;
and the position coordinate determining subunit is used for determining the position coordinates of the image feature points of the target elements on the high-precision map according to the corresponding relationship of the feature points between the map elements and the target elements.
15. The apparatus of claim 10, wherein the predicted pose determination module is specifically configured to:
and calculating the predicted pose of the vehicle at the current moment by using the inertial measurement data and the wheel speed of the vehicle based on a vehicle kinematic model.
16. The apparatus according to claim 12, characterized in that the target pose determination unit further comprises:
the vehicle pose constraint relation construction subunit is used for extracting and tracking the feature points of the multi-frame images acquired by the vehicle-mounted camera and constructing the pose constraint relation of the vehicle at adjacent moments;
correspondingly, the pose correction subunit is specifically configured to: correct the predicted pose by using the constructed pose constraint relations of the vehicle-mounted camera and of the vehicle, together with the geometric constraint relation.
17. The apparatus of claim 10, further comprising:
and the data detection module is used for performing, after the data acquisition module acquires the inertial measurement data, the wheel speed of the vehicle and the target image of the environment where the vehicle is located at the current moment, data detection on the inertial measurement data, the wheel speed of the vehicle and the perception elements of the target image by using a preset data detection algorithm, so as to eliminate abnormal data.
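Claim 17 leaves the "preset data detection algorithm" unspecified; a robust median/MAD gate over a short window of sensor readings is one simple possibility (illustrative only, not the patent's algorithm):

```python
def reject_outliers(samples, n_mads=3.0):
    """Flag and remove abnormal readings in a window of scalar sensor
    samples (e.g. successive wheel-speed values) using a median /
    median-absolute-deviation gate, which is robust to the outlier itself."""
    srt = sorted(samples)
    med = srt[len(srt) // 2]
    mad = sorted(abs(s - med) for s in samples)[len(samples) // 2]
    gate = n_mads * max(mad, 1e-9)   # avoid a zero gate on constant data
    return [s for s in samples if abs(s - med) <= gate]

speeds = [10.1, 10.0, 9.9, 10.2, 55.0]  # last reading is a glitch
print(reject_outliers(speeds))  # -> [10.1, 10.0, 9.9, 10.2]
```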
18. The apparatus of claim 10, further comprising:
the camera detection module is used for determining whether a vehicle-mounted camera used for acquiring the target image is abnormal or not by using the deviation between the predicted pixel coordinates and the observed pixel coordinates of the target feature point on the target image in the vehicle positioning process;
wherein the predicted pixel coordinates of the target feature point are predicted based on the pixel coordinates of the target feature point in the previous frame of image, or based on the position coordinates in the high-precision map corresponding to the target feature point in the previous frame of image.
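Claim 18's camera check compares predicted and observed pixel coordinates of tracked feature points; a minimal sketch of such a deviation test (the mean-deviation statistic and the pixel threshold are our assumptions):

```python
import numpy as np

def camera_abnormal(predicted_px, observed_px, threshold=10.0):
    """Flag the on-board camera as abnormal when the mean deviation
    between predicted and observed pixel coordinates of tracked
    feature points exceeds a threshold (in pixels)."""
    dev = np.linalg.norm(np.asarray(predicted_px) - np.asarray(observed_px),
                         axis=1)
    return float(dev.mean()) > threshold

pred = [[100.0, 100.0], [200.0, 150.0]]
obs = [[101.0, 100.0], [199.0, 151.0]]   # sub-2-pixel tracking error
print(camera_abnormal(pred, obs))  # -> False
```

In practice such a check might also require the deviation to persist over several frames before declaring the camera faulty.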
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the vehicle localization method of any of claims 1-9.
20. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the vehicle localization method of any one of claims 1-9.
CN202010074220.4A 2020-01-22 2020-01-22 Vehicle positioning method, device, equipment and medium Pending CN111220154A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010074220.4A CN111220154A (en) 2020-01-22 2020-01-22 Vehicle positioning method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN111220154A 2020-06-02

Family

ID=70806840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010074220.4A Pending CN111220154A (en) 2020-01-22 2020-01-22 Vehicle positioning method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN111220154A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109405824A (en) * 2018-09-05 2019-03-01 武汉契友科技股份有限公司 A kind of multi-source perceptual positioning system suitable for intelligent network connection automobile
CN109556615A (en) * 2018-10-10 2019-04-02 吉林大学 The driving map generation method of Multi-sensor Fusion cognition based on automatic Pilot
CN110136199A (en) * 2018-11-13 2019-08-16 北京初速度科技有限公司 A kind of vehicle location based on camera, the method and apparatus for building figure
CN110147705A (en) * 2018-08-28 2019-08-20 北京初速度科技有限公司 A kind of vehicle positioning method and electronic equipment of view-based access control model perception
US20190259170A1 (en) * 2018-02-21 2019-08-22 Beijing Jingdong Shangke Information Technology Co., Ltd. System and method for feature screening in slam

Cited By (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111721305A (en) * 2020-06-28 2020-09-29 北京百度网讯科技有限公司 Positioning method and apparatus, autonomous vehicle, electronic device, and storage medium
CN111721305B (en) * 2020-06-28 2022-07-22 北京百度网讯科技有限公司 Positioning method and apparatus, autonomous vehicle, electronic device, and storage medium
CN113932820A (en) * 2020-06-29 2022-01-14 杭州海康威视数字技术股份有限公司 Object detection method and device
CN111664860A (en) * 2020-07-01 2020-09-15 北京三快在线科技有限公司 Positioning method and device, intelligent equipment and storage medium
CN111833717A (en) * 2020-07-20 2020-10-27 北京百度网讯科技有限公司 Method, device, equipment and storage medium for positioning vehicle
US11828604B2 (en) 2020-07-20 2023-11-28 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Method and apparatus for positioning vehicle, electronic device, and storage medium
CN111854731A (en) * 2020-07-22 2020-10-30 中国第一汽车股份有限公司 Pose determination method and device, vehicle and storage medium
CN112083725A (en) * 2020-09-04 2020-12-15 湖南大学 Structure-shared multi-sensor fusion positioning system for automatic driving vehicle
WO2022052283A1 (en) * 2020-09-08 2022-03-17 广州小鹏自动驾驶科技有限公司 Vehicle positioning method and device
CN114248778A (en) * 2020-09-22 2022-03-29 华为技术有限公司 Positioning method and positioning device of mobile equipment
WO2022062480A1 (en) * 2020-09-22 2022-03-31 华为技术有限公司 Positioning method and positioning apparatus of mobile device
CN114248778B (en) * 2020-09-22 2024-04-12 华为技术有限公司 Positioning method and positioning device of mobile equipment
WO2022062355A1 (en) * 2020-09-23 2022-03-31 华人运通(上海)自动驾驶科技有限公司 Fusion positioning method and apparatus
CN112150550A (en) * 2020-09-23 2020-12-29 华人运通(上海)自动驾驶科技有限公司 Fusion positioning method and device
CN114323035A (en) * 2020-09-30 2022-04-12 华为技术有限公司 Positioning method, device and system
CN112466142B (en) * 2020-11-13 2022-06-21 浙江吉利控股集团有限公司 Vehicle scheduling method, device and system and storage medium
CN112466142A (en) * 2020-11-13 2021-03-09 浙江吉利控股集团有限公司 Vehicle scheduling method, device and system and storage medium
CN112665593B (en) * 2020-12-17 2024-01-26 北京经纬恒润科技股份有限公司 Vehicle positioning method and device
CN112665593A (en) * 2020-12-17 2021-04-16 北京经纬恒润科技股份有限公司 Vehicle positioning method and device
CN112835370B (en) * 2021-01-19 2023-07-14 北京小马智行科技有限公司 Positioning method and device for vehicle, computer readable storage medium and processor
CN112835370A (en) * 2021-01-19 2021-05-25 北京小马智行科技有限公司 Vehicle positioning method and device, computer readable storage medium and processor
CN112880687B (en) * 2021-01-21 2024-05-17 深圳市普渡科技有限公司 Indoor positioning method, device, equipment and computer readable storage medium
CN112880687A (en) * 2021-01-21 2021-06-01 深圳市普渡科技有限公司 Indoor positioning method, device, equipment and computer readable storage medium
CN112833880A (en) * 2021-02-02 2021-05-25 北京嘀嘀无限科技发展有限公司 Vehicle positioning method, positioning device, storage medium, and computer program product
CN114913491A (en) * 2021-02-08 2022-08-16 广州汽车集团股份有限公司 Vehicle positioning method and system and computer readable storage medium
CN115147805A (en) * 2021-03-31 2022-10-04 欧特明电子股份有限公司 Automatic parking mapping and positioning system and method
CN113358125A (en) * 2021-04-30 2021-09-07 西安交通大学 Navigation method and system based on environmental target detection and environmental target map
CN113313011A (en) * 2021-05-26 2021-08-27 上海商汤临港智能科技有限公司 Video frame processing method and device, computer equipment and storage medium
CN113516871A (en) * 2021-05-29 2021-10-19 上海追势科技有限公司 Navigation method for underground parking lot of vehicle-mounted machine
CN113516864A (en) * 2021-06-02 2021-10-19 上海追势科技有限公司 Navigation method for mobile phone underground parking lot
CN113701770A (en) * 2021-07-16 2021-11-26 西安电子科技大学 High-precision map generation method and system
CN113580134A (en) * 2021-08-03 2021-11-02 湖北亿咖通科技有限公司 Visual positioning method, device, robot, storage medium and program product
CN113580134B (en) * 2021-08-03 2022-11-04 亿咖通(湖北)技术有限公司 Visual positioning method, device, robot, storage medium and program product
CN113865602A (en) * 2021-08-18 2021-12-31 西人马帝言(北京)科技有限公司 Vehicle positioning method, device, equipment and storage medium
CN114549632A (en) * 2021-09-14 2022-05-27 北京小米移动软件有限公司 Vehicle positioning method and device
CN114001742B (en) * 2021-10-21 2024-06-04 广州小鹏自动驾驶科技有限公司 Vehicle positioning method, device, vehicle and readable storage medium
CN114001742A (en) * 2021-10-21 2022-02-01 广州小鹏自动驾驶科技有限公司 Vehicle positioning method and device, vehicle and readable storage medium
CN114018274A (en) * 2021-11-18 2022-02-08 阿波罗智能技术(北京)有限公司 Vehicle positioning method and device and electronic equipment
CN114018274B (en) * 2021-11-18 2024-03-26 阿波罗智能技术(北京)有限公司 Vehicle positioning method and device and electronic equipment
CN114413881A (en) * 2022-01-07 2022-04-29 中国第一汽车股份有限公司 Method and device for constructing high-precision vector map and storage medium
CN114413881B (en) * 2022-01-07 2023-09-01 中国第一汽车股份有限公司 Construction method, device and storage medium of high-precision vector map
CN114419098A (en) * 2022-01-18 2022-04-29 长沙慧联智能科技有限公司 Moving target trajectory prediction method and device based on visual transformation
CN114475581B (en) * 2022-02-25 2022-09-16 北京流马锐驰科技有限公司 Automatic parking positioning method based on wheel speed pulse and IMU Kalman filtering fusion
CN114475581A (en) * 2022-02-25 2022-05-13 北京流马锐驰科技有限公司 Automatic parking positioning method based on wheel speed pulse and IMU Kalman filtering fusion
CN114969231A (en) * 2022-05-19 2022-08-30 高德软件有限公司 Target traffic image determination method, device, electronic equipment and program product
CN115164912A (en) * 2022-06-24 2022-10-11 宁波均胜智能汽车技术研究院有限公司 Vehicle position positioning method and device and readable storage medium
CN115235493A (en) * 2022-07-19 2022-10-25 合众新能源汽车有限公司 Method and device for automatic driving positioning based on vector map
CN115235493B (en) * 2022-07-19 2024-06-18 合众新能源汽车股份有限公司 Method and device for automatic driving positioning based on vector map
CN115205828B (en) * 2022-09-16 2022-12-06 毫末智行科技有限公司 Vehicle positioning method and device, vehicle control unit and readable storage medium
CN115205828A (en) * 2022-09-16 2022-10-18 毫末智行科技有限公司 Vehicle positioning method and device, vehicle control unit and readable storage medium
CN116698051A (en) * 2023-05-30 2023-09-05 北京百度网讯科技有限公司 High-precision vehicle positioning, vectorization map construction and positioning model training method
CN118053052A (en) * 2024-04-16 2024-05-17 之江实验室 Unsupervised high-precision vector map element anomaly detection method

Similar Documents

Publication Publication Date Title
CN111220154A (en) Vehicle positioning method, device, equipment and medium
KR102382420B1 (en) Method and apparatus for positioning vehicle, electronic device and storage medium
CN110595494B (en) Map error determination method and device
CN111274974B (en) Positioning element detection method, device, equipment and medium
CN110806215B (en) Vehicle positioning method, device, equipment and storage medium
CN111220164A (en) Positioning method, device, equipment and storage medium
CN111959495B (en) Vehicle control method and device and vehicle
CN107167826B (en) Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving
US11227395B2 (en) Method and apparatus for determining motion vector field, device, storage medium and vehicle
CN110794844B (en) Automatic driving method, device, electronic equipment and readable storage medium
CN111401251B (en) Lane line extraction method, lane line extraction device, electronic equipment and computer readable storage medium
CN111721281B (en) Position identification method and device and electronic equipment
CN111784837A (en) High-precision map generation method and device
CN111784835A (en) Drawing method, drawing device, electronic equipment and readable storage medium
CN112147632A (en) Method, device, equipment and medium for testing vehicle-mounted laser radar perception algorithm
CN112101209A (en) Method and apparatus for determining a world coordinate point cloud for roadside computing devices
CN112288825A (en) Camera calibration method and device, electronic equipment, storage medium and road side equipment
CN111523471A (en) Method, device and equipment for determining lane where vehicle is located and storage medium
CN111721305B (en) Positioning method and apparatus, autonomous vehicle, electronic device, and storage medium
CN111783611B (en) Unmanned vehicle positioning method and device, unmanned vehicle and storage medium
CN111612851B (en) Method, apparatus, device and storage medium for calibrating camera
CN111260722B (en) Vehicle positioning method, device and storage medium
CN112577524A (en) Information correction method and device
CN113011298A (en) Truncated object sample generation method, target detection method, road side equipment and cloud control platform
CN115790621A (en) High-precision map updating method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination