CN114252081B - Positioning method, device, equipment and storage medium - Google Patents

Positioning method, device, equipment and storage medium

Info

Publication number
CN114252081B
CN114252081B (application CN202111407916.5A)
Authority
CN
China
Prior art keywords
pose
observation
feature
road
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111407916.5A
Other languages
Chinese (zh)
Other versions
CN114252081A (en)
Inventor
颜扬治
林宝尉
傅文标
袁维平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ecarx Hubei Tech Co Ltd
Original Assignee
Ecarx Hubei Tech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ecarx Hubei Tech Co Ltd filed Critical Ecarx Hubei Tech Co Ltd
Priority to CN202111407916.5A priority Critical patent/CN114252081B/en
Publication of CN114252081A publication Critical patent/CN114252081A/en
Application granted granted Critical
Publication of CN114252081B publication Critical patent/CN114252081B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30Map- or contour-matching
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position

Abstract

The application provides a positioning method, device, equipment, and storage medium. A road single-frame observation feature and a relative pose estimation parameter are acquired, where the relative pose estimation parameter is the relative pose from a first observation point to a second observation point, and the first and second observation points are any two consecutive observation points. A plurality of road single-frame observation features that meet a preset requirement are spliced into a road multi-frame observation feature according to the relative pose estimation parameter. Registration is then performed according to the relative pose estimation parameter, the latest historical pose, the road multi-frame observation feature, and a pre-established feature map to determine positioning information, where the positioning information includes the current pose of the carrier. This solves the technical problem in existing feature-positioning technology that, when observed features are matched against the feature map, noise or error interference prevents good positioning registration. The technical effects of effectively suppressing non-random noise and improving positioning accuracy and stability are achieved.

Description

Positioning method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of vehicle engineering technologies, and in particular to a positioning method, device, equipment, and storage medium.
Background
Positioning technology is one of the basic and core technologies of intelligent-machine applications such as automatic driving, providing position and attitude (i.e., pose) information for the intelligent machine or carrier. Existing positioning techniques can be categorized into geometric positioning, dead reckoning, and feature positioning.
For feature positioning, several observation features of the surrounding environment are generally obtained first, such as base-station identity information, Wi-Fi fingerprints, images, and point clouds acquired by a lidar. The observed features are then matched against a pre-established feature map to determine the position within that map, so absolute positioning information can be provided.
However, when the intelligent machine or carrier actually runs and noise or errors are present in the real-time observation features of the surrounding environment or in the feature map, registration cannot be performed well, which degrades the accuracy and stability of feature positioning. That is, existing feature-positioning technology suffers from the technical problem that, when observed features are matched against the feature map, noise or error interference prevents good positioning registration.
Disclosure of Invention
The application provides a positioning method, device, equipment, and storage medium to solve the technical problem in existing feature-positioning technology that, when observed features are matched against the feature map, noise or error interference prevents good positioning registration.
In a first aspect, the present application provides a positioning method, including:
obtaining a road single-frame observation feature and a relative pose estimation parameter, where the relative pose estimation parameter is the relative pose from a first observation point to a second observation point, and the first observation point and the second observation point are the observation points of any two consecutive road single-frame observation features;
splicing a plurality of road single-frame observation features that meet a preset requirement into a road multi-frame observation feature according to the relative pose estimation parameter; and
registering according to the relative pose estimation parameter, the latest historical pose, the road multi-frame observation feature, and a pre-established feature map to determine positioning information, where the positioning information includes the current pose, and the latest historical pose is the pose determined in the most recent positioning or the monitoring pose sent by the most recently received absolute positioning source.
In one possible design, registering according to the relative pose estimation parameter, the latest historical pose, the road multi-frame observation feature, and the pre-established feature map to determine positioning information includes:
segmenting the road multi-frame observation feature to determine a plurality of registration observation units;
registering each registration observation unit with the feature map according to the relative pose estimation parameter and the latest historical pose to determine a candidate pose corresponding to each registration observation unit and a confidence corresponding to that candidate pose, where the confidence characterizes the degree of match between the candidate pose and the feature map; and
determining the current pose according to each confidence and a preset screening mode.
In one possible design, determining the current pose according to each confidence and the preset screening mode includes:
selecting the maximum value among the confidences as the maximum confidence; and
determining the candidate pose corresponding to the maximum confidence as the current pose.
In one possible design, registering each registration observation unit with the feature map according to the relative pose estimation parameter and the latest historical pose to determine the candidate pose and confidence corresponding to each registration observation unit includes:
determining all observation elements in each registration observation unit as initial feature points;
determining an initial predicted pose according to the relative pose estimation parameter and the latest historical pose;
calculating the distance between each feature point and each feature element in the feature map according to the predicted pose, and determining the confidence according to the distances;
optimizing and adjusting the predicted pose according to the distances;
deleting the feature points that do not meet the preset requirement according to the adjusted predicted pose, and taking the remaining feature points as new feature points; and
adjusting the predicted pose again according to the distance corresponding to each new feature point until the confidence converges, and determining that predicted pose as the candidate pose.
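The iterative loop above (score by distance, adjust the predicted pose, prune far-off feature points, repeat until the confidence converges) can be sketched as follows. This is a minimal illustration under assumptions, not the patent's implementation: it works on 2-D points, refines translation only, and the helper names `register_unit` and `registration_confidence` are invented; the confidence is taken as the reciprocal of the mean nearest-map-point distance, as one design in this application proposes.

```python
import numpy as np

def registration_confidence(points, map_points):
    """Confidence = reciprocal of the mean nearest-map-point distance."""
    d = np.min(np.linalg.norm(points[:, None, :] - map_points[None, :, :], axis=2), axis=1)
    return 1.0 / max(d.mean(), 1e-9), d

def register_unit(points, map_points, init_xy, max_iters=10, prune_dist=1.0):
    """Iteratively refine a predicted (x, y) offset for one registration unit:
    score with the confidence, nudge the pose toward the matched map elements,
    prune feature points that stay far away, and stop once the confidence
    no longer improves (convergence)."""
    pose = np.asarray(init_xy, dtype=float)
    pts = np.asarray(points, dtype=float)
    best_conf = -np.inf
    for _ in range(max_iters):
        shifted = pts + pose
        conf, _ = registration_confidence(shifted, map_points)
        if conf <= best_conf:            # confidence converged
            break
        best_conf = conf
        # Match each feature point to its nearest map element; the mean
        # residual serves as a crude translation-only pose update.
        idx = np.argmin(np.linalg.norm(shifted[:, None, :] - map_points[None, :, :], axis=2), axis=1)
        residual = map_points[idx] - shifted
        pose = pose + residual.mean(axis=0)
        # Delete feature points whose residual exceeds the threshold
        # (the "does not meet the preset requirement" pruning step).
        keep = np.linalg.norm(residual, axis=1) < prune_dist
        if keep.any():
            pts = pts[keep]
    return pose, best_conf
```

The returned candidate pose is the converged predicted pose, and the returned confidence can then be compared across registration units.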
In one possible design, determining the confidence according to the distances includes: taking the reciprocal of the average of the distances as the confidence.
In one possible design, optimizing and adjusting the predicted pose according to the distances includes:
adjusting the value of the predicted pose to minimize the distances between the feature points and the feature elements in the feature map;
determining a new confidence according to the adjusted predicted pose; and
retaining the new confidence if it is greater than the original confidence, and otherwise retaining the original confidence.
In one possible design, screening out the feature points whose distances are smaller than a preset distance threshold as the new feature points includes:
selecting all feature points that meet the Gaussian white-noise requirement as the new feature points.
In one possible design, segmenting the road multi-frame observation feature according to a preset segmentation mode to determine the plurality of registration observation units includes:
cutting the road multi-frame observation feature around a preset center point at a preset segmentation interval angle to determine the plurality of registration observation units.
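As a concrete illustration of this angular cutting, the following sketch groups a merged 2-D observation into sectors around a preset center point. The helper name `split_by_angle` and the 2-D point representation are assumptions for illustration, not part of the patent:

```python
import numpy as np

def split_by_angle(points, center, sector_deg=90.0):
    """Cut a merged multi-frame observation into registration units by
    angular sectors around a preset center point."""
    pts = np.asarray(points, dtype=float)
    # Bearing of each point from the center, mapped into [0, 360) degrees.
    ang = np.degrees(np.arctan2(pts[:, 1] - center[1], pts[:, 0] - center[0])) % 360.0
    sector = (ang // sector_deg).astype(int)   # which angular bin each point falls in
    n = int(round(360.0 / sector_deg))
    return [pts[sector == k] for k in range(n)]
```

With a 90-degree interval this yields four registration observation units, each registered against the feature map independently.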
In one possible design, splicing a plurality of road single-frame observation features that meet the preset requirement into a road multi-frame observation feature according to the relative pose estimation parameter includes:
calculating a first relative pose between each road single-frame observation feature and the latest road single-frame observation feature;
converting each road single-frame observation feature into an observation feature to be combined according to its first relative pose, where an observation feature to be combined is a road single-frame observation feature described under the latest pose, and the latest pose is the pose corresponding to the latest road single-frame observation feature; and
superposing and combining all the observation features to be combined to determine the road multi-frame observation feature.
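The splicing steps above can be sketched with 2-D homogeneous poses. This is a minimal, assumed illustration (the helper names `se2` and `splice_frames` are not from the patent): each earlier frame is re-expressed in the latest frame's coordinates by chaining the inverse relative poses, then all points are merged:

```python
import numpy as np

def se2(x, y, yaw):
    """Homogeneous 2-D pose matrix (a planar stand-in for a 6-DoF pose)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def splice_frames(frames, rel_poses):
    """Merge single-frame observations into one multi-frame observation
    described in the latest frame's coordinates.

    frames:    list of (N_i, 2) point arrays, oldest first.
    rel_poses: rel_poses[i] is the pose of frame i+1 expressed in frame i.
    """
    # Pose of each frame i expressed in the latest frame, built back-to-front:
    # T_latest<-i = T_latest<-(i+1) @ inv(T_i<-(i+1)).
    transforms = [np.eye(3)]
    for T in reversed(rel_poses):
        transforms.insert(0, transforms[0] @ np.linalg.inv(T))
    merged = []
    for pts, T in zip(frames, transforms):
        h = np.c_[pts, np.ones(len(pts))]   # homogeneous coordinates
        merged.append((h @ T.T)[:, :2])     # re-express in the latest frame
    return np.vstack(merged)
```

For example, if the carrier moved 1 m forward between two frames, a point seen 1 m ahead in the old frame lands at the origin of the new frame after splicing.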
In one possible design, the positioning information includes initial positioning information for the first positioning, the corresponding latest historical pose is the monitoring pose sent by the absolute positioning source, and the road multi-frame observation feature is the road single-frame observation feature obtained for the first time.
Optionally, the road single-frame observation feature includes: a geometric shape or identity represented by points, lines, or planes; feature points containing abstract information; or points, lines, and planes containing semantic information.
In a second aspect, the present application provides a positioning device comprising:
an acquisition module, configured to obtain a road single-frame observation feature and a relative pose estimation parameter, where the relative pose estimation parameter is the relative pose from a first observation point to a second observation point, and the first observation point and the second observation point are the observation points of any two consecutive road single-frame observation features;
A processing module for:
splicing a plurality of road single-frame observation features that meet a preset requirement into a road multi-frame observation feature according to the relative pose estimation parameter; and
registering according to the relative pose estimation parameter, the latest historical pose, the road multi-frame observation feature, and a pre-established feature map to determine positioning information, where the positioning information includes the current pose, and the latest historical pose is the pose determined in the most recent positioning or the monitoring pose sent by the most recently received absolute positioning source.
In one possible design, the processing module is configured to:
segment the road multi-frame observation feature to determine a plurality of registration observation units;
register each registration observation unit with the feature map according to the relative pose estimation parameter and the latest historical pose to determine a candidate pose corresponding to each registration observation unit and a confidence corresponding to that candidate pose, where the confidence characterizes the degree of match between the candidate pose and the feature map; and
determine the current pose according to each confidence and a preset screening mode.
In one possible design, the processing module is configured to:
select the maximum value among the confidences as the maximum confidence; and
determine the candidate pose corresponding to the maximum confidence as the current pose.
In one possible design, the processing module is configured to:
determine all observation elements in each registration observation unit as initial feature points;
determine an initial predicted pose according to the relative pose estimation parameter and the latest historical pose;
calculate the distance between each feature point and each feature element in the feature map according to the predicted pose, and determine the confidence according to the distances;
optimize and adjust the predicted pose according to the distances;
delete the feature points that do not meet the preset requirement according to the adjusted predicted pose, and take the remaining feature points as new feature points; and
adjust the predicted pose again according to the distance corresponding to each new feature point until the confidence converges, and determine that predicted pose as the candidate pose.
In one possible design, the processing module is configured to take the reciprocal of the average of the distances as the confidence.
In one possible design, the processing module is configured to:
adjust the value of the predicted pose to minimize the distances between the feature points and the feature elements in the feature map;
determine a new confidence according to the adjusted predicted pose; and
retain the new confidence if it is greater than the original confidence, and otherwise retain the original confidence.
In one possible design, the processing module is configured to select all feature points that meet the gaussian white noise requirement as new feature points.
In one possible design, the processing module is configured to cut the road multi-frame observation feature around a preset center point at a preset segmentation interval angle to determine the plurality of registration observation units.
In one possible design, the processing module is configured to:
calculate a first relative pose between each road single-frame observation feature and the latest road single-frame observation feature;
convert each road single-frame observation feature into an observation feature to be combined according to its first relative pose, where an observation feature to be combined is a road single-frame observation feature described under the latest pose, and the latest pose is the pose corresponding to the latest road single-frame observation feature; and
superpose and combine all the observation features to be combined to determine the road multi-frame observation feature.
In one possible design, the positioning information includes initial positioning information for the first positioning, the corresponding latest historical pose is the monitoring pose sent by the absolute positioning source, and the road multi-frame observation feature is the road single-frame observation feature obtained for the first time.
Optionally, the road single-frame observation feature includes: a geometric shape or identity represented by points, lines, or planes; feature points containing abstract information; or points, lines, and planes containing semantic information.
In a third aspect, the present application provides an electronic device, comprising:
a memory for storing program instructions;
a processor for invoking and executing program instructions in memory to perform any one of the possible positioning methods provided in the first aspect.
In a fourth aspect, the present application provides a vehicle comprising the electronic device provided in the third aspect.
In a fifth aspect, the present application provides a storage medium having stored therein a computer program for performing any one of the possible positioning methods provided in the first aspect.
In a sixth aspect, the present application also provides a computer program product comprising a computer program which, when executed by a processor, implements any one of the possible positioning methods provided in the first aspect.
The application provides a positioning method, device, equipment, and storage medium. A road single-frame observation feature and a relative pose estimation parameter are acquired, where the relative pose estimation parameter is the relative pose from a first observation point to a second observation point, and the first observation point and the second observation point are the observation points of any two consecutive road single-frame observation features. A plurality of road single-frame observation features that meet a preset requirement are spliced into a road multi-frame observation feature according to the relative pose estimation parameter. Registration is then performed according to the relative pose estimation parameter, the latest historical pose, the road multi-frame observation feature, and a pre-established feature map to determine positioning information, where the positioning information includes the current pose, and the latest historical pose is the pose determined in the most recent positioning or the monitoring pose sent by the most recently received absolute positioning source. This solves the technical problem in existing feature-positioning technology that, when observed features are matched against the feature map, noise or error interference prevents good positioning registration, and achieves the technical effects of effectively suppressing non-random noise and improving positioning accuracy and stability.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic diagram of a positioning data flow according to an embodiment of the present application;
fig. 2 is a flow chart of a positioning method according to an embodiment of the present application;
FIG. 3 is a flow chart of another positioning method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a positioning device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device provided in the present application.
Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
For the purposes of making the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the drawings; evidently, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments, including but not limited to combinations of embodiments, obtained by a person of ordinary skill in the art without inventive effort on the basis of the embodiments herein fall within the scope of the present application.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims of this application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be capable of operation in sequences other than those illustrated or described herein, for example. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The following first explains the definition of terms referred to in this application:
Positioning technology: one of the basic and core technologies of intelligent-machine (or carrier) applications such as automatic driving; it provides position and attitude information, i.e., pose information, for the intelligent machine (or carrier). According to the positioning principle, positioning techniques can be classified into geometric positioning, dead reckoning, and feature positioning.
Geometric positioning: determines position through geometric calculation by measuring distances or angles to reference devices with known positions. It includes GNSS (Global Navigation Satellite System), UWB (Ultra-Wideband, a wireless carrier communication technology), Bluetooth, 5G, etc., and provides absolute positioning information. GNSS is the technology most widely used in smart-car applications. GNSS positioning is based on satellite positioning and is divided into single-point positioning, differential GPS (Global Positioning System) positioning, and RTK (Real-Time Kinematic) positioning; single-point positioning provides 3-10 m accuracy, differential GPS provides 0.5-2 m accuracy, and RTK provides centimeter-level accuracy. Its limitations are dependence on positioning facilities and susceptibility to signal shielding and reflection; it fails in scenes such as tunnels and under overpasses.
Dead reckoning: calculates the position at the next moment from the motion data of an IMU (Inertial Measurement Unit) and a wheel-speed meter, providing relative positioning information. Its limitation is that positioning error accumulates as the reckoned distance increases.
Feature positioning: first obtains several features of the surrounding environment, such as a base-station ID (identity number), Wi-Fi fingerprint, image, or lidar (light detection and ranging) point cloud, then matches the observed features against a pre-established feature map to determine the position within the map, so absolute positioning information can be provided. The direct factors that influence feature positioning are the number, quality, and distinctiveness of the features. Its limitation is that positioning accuracy and stability degrade when scene, environment, or other factors impair feature observation.
The inventors of the present application found that feature positioning is performed through registration of the observation features with the feature map, and that when noise or errors exist in the observation features or the feature map, registration cannot be performed well, which affects the accuracy and stability of feature positioning. Several typical cases follow:
(1) Map change: the feature map is established in advance, and by the time positioning is performed the environmental features have changed, so the real-time observation features are inconsistent with the feature map.
(2) Observation error: errors introduced by the sensor or by the data-processing pipeline (for example, a dynamic object incorporated into the observation features) make the real-time observation features inconsistent with the feature map.
(3) Observation noise: various random noise introduced by the sensor or during data processing, such as Gaussian white noise. The first two cases are non-random noise; the third is random noise.
The inventive concept of the present application is:
a feature-positioning method based on random sample consensus (RANSAC). The method combines random sample consensus with feature registration, thereby effectively suppressing non-random noise and improving positioning accuracy and stability.
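For readers unfamiliar with random sample consensus, the following generic sketch shows the core idea on a toy line-fitting task: repeatedly fit a minimal random sample and keep the model with the most inliers, so gross non-random outliers cannot dominate the fit. This illustrates RANSAC in general, not the patent's registration scheme; the function name and parameters are invented for illustration:

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Fit y = a*x + b by random sample consensus.

    Each iteration fits an exact line through two randomly chosen points
    and counts how many points lie within tol of it; the model with the
    most inliers wins, so outliers are simply outvoted."""
    rng = random.Random(seed)
    best = (0.0, 0.0)
    best_inliers = -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:          # degenerate sample, skip
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(abs(y - (a * x + b)) <= tol for x, y in points)
        if inliers > best_inliers:
            best_inliers = inliers
            best = (a, b)
    return best, best_inliers
```

The same consensus principle applied to registration units is what suppresses map changes and observation errors, since those corrupted points never gather a majority of inliers.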
The following describes the technical solutions of the present application and how the technical solutions of the present application solve the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a positioning data flow according to an embodiment of the present application. As shown in fig. 1, the IMU inertial measurement unit 101 and the wheel speed/vehicle speed sensor 102 transmit detected data to the DR dead reckoning system 103, and the DR dead reckoning system 103 calculates the relative pose of the vehicle.
Meanwhile, the environmental observation sensor 104 acquires the observation features 105 of the surrounding environment frame by frame, splices a plurality of observation features through relative pose to obtain multi-frame observation features 106, and registers the spliced observation features with the feature map 107 according to the predicted pose detected by the GNSS global navigation satellite system 108 to determine the current pose 109 of the final carrier.
A detailed description of how the positioning method provided in the present application is implemented is provided below.
It should be noted that, for convenience of describing the solutions provided in the embodiments of the present application, the coordinate definitions must first be made explicit. This application uses the following coordinate systems:
(1) A world coordinate system W is defined, which has a fixed relationship to the actual geographic location; for example, the Earth-centered, Earth-fixed (ECEF) geocentric coordinate system may be used.
(2) A carrier coordinate system B is defined, which for a vehicle or carrier may also be called the body coordinate system; it is attached to a fixed position on the carrier, such as the center of the vehicle's rear axle. The vehicle pose is the 6-DoF (degree-of-freedom) pose T_WB of the body coordinate system in the world coordinate system.
(3) A sensor coordinate system S, also called the observation coordinate system, is defined. The measurement data acquired by a sensor are all expressed in the sensor coordinate system. The sensor is usually fixed on the carrier and moves rigidly with it, so there is a fixed conversion relationship between the sensor coordinate system and the carrier coordinate system, namely the pose T_BS, also known as the extrinsic parameters.
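The chain of transforms can be illustrated with a 2-D (SE(2)) stand-in for the 6-DoF poses. The numbers below (sensor 1 m ahead of the rear axle, body at (10, 5) facing +y) are made up for illustration:

```python
import numpy as np

def se2(x, y, yaw):
    """Homogeneous 2-D pose matrix: a planar stand-in for the 6-DoF poses."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

# T_WB: pose of the body frame B in the world frame W (illustrative values).
# T_BS: pose of the sensor frame S on the body, i.e. the extrinsic parameters.
T_WB = se2(10.0, 5.0, np.pi / 2)   # body at (10, 5), heading +y
T_BS = se2(1.0, 0.0, 0.0)          # sensor 1 m ahead of the rear axle

# A point observed 2 m straight ahead of the sensor, lifted to world frame:
p_S = np.array([2.0, 0.0, 1.0])    # homogeneous sensor-frame point
p_W = T_WB @ T_BS @ p_S            # sensor -> body -> world; approx. (10, 8)
```

The point sits 3 m ahead of the rear axle along the body's +x axis; rotating by the 90-degree heading and translating gives the world position (10, 8).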
It should be noted that the feature map used in the following embodiments is established through the following procedure:
(1) The sensor collects road feature information, expressed as ^S P1, ^S P2, ^S P3, ..., ^S Pn; this feature information is in the sensor coordinate system, hence the leading superscript S. It is then converted into the vehicle coordinate system through the extrinsic parameters and expressed as ^B P1, ^B P2, ^B P3, ..., ^B Pn, where ^B Pi = T_BS × ^S Pi.
The road feature information is three-dimensional data, which facilitates direct projection in subsequent registration.
(2) A high-precision positioning device determines that the vehicle pose at this moment is T_WB. Through T_WB, the road feature information from step (1) is converted into the world coordinate system and expressed as ^W P1, ^W P2, ^W P3, ..., ^W Pn, where ^W Pi = T_WB × ^B Pi.
(3) The steps (1) and (2) are that the vehicle obtains road characteristic information in a world coordinate system at one position in the map. The vehicle traverses the map range (in actual operation, the vehicle walks all the roads in the map range, so that the corresponding environmental characteristics are collected through the sensors on the vehicle to form a characteristic map), and the road characteristic information of the vehicle in the world coordinate system at each map position can be obtained.
The scope of the feature map is determined by the requirements or task. For example, if an application supports only a certain campus, the map covers only that campus; if positioning on expressways within a preset area is supported, the feature map covers all expressways in the defined geographic area.
All the obtained road feature information is accumulated and averaged (i.e., summed and then averaged) to obtain the feature map. The feature map is, in effect, a map that stores map elements as "coordinates + feature information".
Although no uniform format or form is currently defined for feature maps, in this application the various feature elements in the feature map are denoted in the form of "coordinates + information".
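The transform chain used to build the map in steps (1)-(2) above (^B Pi = T_BS × ^S Pi, then ^W Pi = T_WB × ^B Pi) can be sketched with 4×4 homogeneous matrices as below. This is a minimal illustration: the helper `make_pose`, the pose values, and the point coordinates are all made up for the example, not taken from the patent.

```python
import numpy as np

def make_pose(yaw, tx, ty, tz):
    """Build a 4x4 homogeneous pose from a yaw angle and a translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = [tx, ty, tz]
    return T

def to_world(T_WB, T_BS, pts_S):
    """Map Nx3 sensor-frame points to the world frame: wP = T_WB * T_BS * sP."""
    pts_h = np.hstack([pts_S, np.ones((len(pts_S), 1))])  # homogeneous coords
    return (T_WB @ T_BS @ pts_h.T).T[:, :3]

T_BS = make_pose(0.0, 1.5, 0.0, 1.2)          # extrinsics: sensor mounted on the carrier
T_WB = make_pose(np.pi / 2, 10.0, 5.0, 0.0)   # vehicle pose from the positioning device
sP = np.array([[2.0, 0.0, 0.0]])              # one road feature in sensor coordinates
wP = to_world(T_WB, T_BS, sP)                 # the same feature in world coordinates
```

Accumulating such world-frame points over many traversals and averaging them, as described above, yields the stored map elements.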
Fig. 2 is a flow chart of a positioning method according to an embodiment of the present application. As shown in fig. 2, the specific steps of the positioning method include:
S201, acquiring a road single-frame observation feature and a relative pose estimation parameter.
In this step, the road single-frame observation features and the relative pose estimation parameter of the carrier are continuously obtained at a preset spatial interval. The relative pose estimation parameter is the relative pose from a first observation point to a second observation point, determined by a dead reckoning algorithm, where the first and second observation points are the observation points of any two consecutive road single-frame observation features.
The road single-frame observation features include: geometric shapes or identities represented by points, lines, and planes; feature points containing abstract information; and points, lines, and planes containing semantic information. For example, point features include traffic lights and corner points (e.g., corners of a building); line features include solid lane lines and roadside shafts (e.g., utility poles); plane features include road-surface arrows, overhead signs, billboards, and the like.
In the present embodiment, the carrier, such as a vehicle, is provided with sensors including at least one of a camera, LiDAR (Laser Detection and Ranging), or other sensors, or a combination of these sensors. Road observation features are acquired through the sensors; their categories are identical to the feature information in the feature map described above. The road observation features collected by the sensor are expressed as ^S P1, ^S P2, ^S P3, …, ^S Pm; these features are in the sensor coordinate system, hence the leading superscript S. They are then converted into the vehicle coordinate system through the extrinsic parameters, expressed as ^B P1, ^B P2, ^B P3, …, ^B Pm, where ^B Pi = T_BS × ^S Pi. The road observation feature obtained in this step is a single-frame observation and can therefore be denoted as a road single-frame observation feature F.
Note that the road single-frame observation feature F in this step is not an arbitrary, randomly collected frame; it refers to the observation frame most recently acquired during continuous real-time acquisition.
As for the relative pose estimation parameter: like the pose T_BS described above, the carrier, such as a vehicle, is equipped with sensors such as an IMU (Inertial Measurement Unit), a wheel speed meter, or a speedometer. The relative dead-reckoning parameters of the vehicle are acquired through DR (Dead Reckoning). The relative pose estimation parameter refers to the relative pose from point a to point b provided by DR. Specifically, in the DR coordinate system (which is defined by DR, generally taking the DR pose at the first observation frame as the origin), the pose of point a is T_a and the pose of point b is T_b; the relative pose between a and b is then T_ba = T_a-inverse × T_b.
Further, T_a-inverse in the relative pose estimation parameter denotes the inverse matrix of the pose T_a.
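The relation T_ba = T_a-inverse × T_b can be sketched with 4×4 homogeneous matrices as below. This is an illustrative planar example; the helper `make_pose` and the numeric poses are assumptions for the sketch, not part of the patent.

```python
import numpy as np

def make_pose(yaw, tx, ty):
    """4x4 homogeneous pose from a yaw angle and a planar translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:2, 3] = [tx, ty]
    return T

def relative_pose(T_a, T_b):
    """Relative pose from point a to point b in the DR frame: T_ba = inv(T_a) @ T_b."""
    return np.linalg.inv(T_a) @ T_b

# DR poses at two consecutive observation points
T_a = make_pose(0.0, 2.0, 1.0)
T_b = make_pose(0.0, 7.0, 1.0)   # 5 m further along x
T_ba = relative_pose(T_a, T_b)
```

Composing the start pose with the relative pose recovers the end pose, i.e., T_a @ T_ba equals T_b, which is exactly what the dead-reckoning increment between two observation points must satisfy.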
S202, splicing a plurality of road single-frame observation features meeting preset requirements into a road multi-frame observation feature according to the relative pose estimation parameters.
In this step, a plurality of road single-frame observation features are spliced into a road multi-frame observation feature using the relative pose estimation parameters from S201.
The preset requirements include the number of road single-frame observation features corresponding to one road multi-frame observation feature, i.e., the frame count, which must be weighed comprehensively against the computing capacity of the processor and the computational efficiency of the algorithm or processor. The greater the information density of each frame and the larger the frame count, the greater the overall required computing power, which eventually reduces computational efficiency. Therefore, the preset requirements must be determined in combination with the computing power, hardware parameters, algorithm parameters, etc. of the computing platform.
It should also be noted that the sampling time interval between two adjacent road single-frame observation features may be non-fixed, because the road single-frame observation features are sampled at spatial intervals, e.g., one frame every 5 to 10 m, and the time needed to cover the same spatial interval differs at different travel speeds of the vehicle. The spatial sampling interval itself must be determined in combination with the sensor characteristics. Taking LiDAR as an example, a multi-line laser scans the environment densely, so the multi-frame sampling interval can be reduced appropriately. Those skilled in the art may choose according to the actual situation; the application is not limited in this respect.
In one possible design, the method specifically includes:
(1) The acquired road single-frame observation features are F1, F2, …, Fn, and the corresponding poses provided by DR dead reckoning are T1, T2, …, Tn, where Fn is the latest road single-frame observation feature.
(2) The relative pose between each road single-frame observation feature and the latest one is calculated. For the i-th frame, the relative pose is T_ni = T_i-inverse × Tn.
(3) Through the relative poses, each road single-frame observation feature is converted so as to be described under the latest pose, i.e., the pose of the latest road single-frame observation feature. In other words, according to each first relative pose, each road single-frame observation feature is converted into an observation feature to be combined, which is the road single-frame observation feature described under the latest pose, the latest pose being the pose corresponding to the latest road single-frame observation feature. For the i-th frame, the observation after conversion to the latest frame is nFi = T_ni × Fi, where the n in nFi indicates that the latest road single-frame observation feature serves as the reference.
(4) All the converted road single-frame observation features are accumulated directly to obtain the spliced road multi-frame observation feature, i.e., nF1 + nF2 + … + nFn.
Compared with a road single-frame observation feature, the road multi-frame observation feature covers a larger observation range and contains richer road feature information.
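Steps (1)-(4) can be sketched as follows. This is a minimal 2D illustration under the convention that each DR pose T_i maps frame-i points into the DR frame, so the transform taking frame-i points into the latest frame n is inv(Tn) @ T_i; the helper names, frames, and poses are assumptions for the example.

```python
import numpy as np

def make_pose(yaw, tx, ty):
    """3x3 homogeneous 2D pose, for brevity of the sketch."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(3)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:2, 2] = [tx, ty]
    return T

def stitch(frames, poses):
    """Express every frame in the coordinates of the latest frame and concatenate."""
    T_n = poses[-1]
    merged = []
    for F_i, T_i in zip(frames, poses):
        T_ni = np.linalg.inv(T_n) @ T_i               # frame i -> latest frame n
        pts_h = np.hstack([F_i, np.ones((len(F_i), 1))])
        merged.append((T_ni @ pts_h.T).T[:, :2])
    return np.vstack(merged)                          # nF1 + nF2 + ... + nFn

# two frames 5 m apart along x, each observing one lane-marking point
frames = [np.array([[2.0, 1.0]]), np.array([[2.0, 1.0]])]
poses = [make_pose(0.0, 0.0, 0.0), make_pose(0.0, 5.0, 0.0)]
multi = stitch(frames, poses)   # the stitched multi-frame observation
```

The stitched result contains both observations in a single coordinate frame, which is what gives the multi-frame feature its wider observation range.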
And S203, registering according to the relative pose estimation parameters, the latest historical pose, the multi-frame road observation features and the pre-established feature map to determine positioning information.
In this step, a preset registration algorithm is used to determine the positioning information of the carrier according to the relative pose estimation parameter, the latest historical pose, the road multi-frame observation feature, and a pre-established feature map. The positioning information includes the current pose of the carrier; the latest historical pose is the pose determined when the carrier was last positioned, or the most recently received monitored pose of an absolute positioning source for the carrier.
It should be noted that the preset registration algorithm includes a registration algorithm based on random sample consensus (RANSAC).
In this embodiment, registration is performed according to the relative pose estimation parameter, the latest historical pose, the road multi-frame observation feature and the pre-established feature map, and determining positioning information includes:
dividing the multi-frame observation features of the road to determine a plurality of registration observation units;
registering each registration observation unit with the feature map according to the relative pose estimation parameters and the latest historical pose so as to determine the pose to be selected corresponding to each registration observation unit and the confidence coefficient corresponding to the pose to be selected, wherein the confidence coefficient is used for representing the matching degree of the pose to be selected and the feature map;
And determining the current pose according to each confidence degree and a preset screening mode.
In one possible design, determining the current pose according to the respective confidence levels and the preset screening mode includes:
selecting the maximum value in each confidence coefficient as the maximum confidence coefficient;
and determining the pose to be selected corresponding to the maximum confidence as the current pose.
Specifically, after the multi-frame observation feature of the road is continuously obtained, continuous positioning can be realized through the following steps:
(1) Initially, the initial predicted pose is provided by another absolute positioning source, such as GNSS (Global Navigation Satellite System). The vehicle pose is obtained through the random sample consensus based registration of the road multi-frame observation feature against the feature map, thereby completing positioning initialization.
(2) The predicted pose T_WB-predict of the next observation frame is obtained by combining the relative pose provided by DR with the vehicle pose obtained from the registration of the previous frame.
(3) Under the guidance of the predicted pose T_WB-predict, random sample consensus registration of the observation features against the feature map is performed to obtain the pose.
(4) Steps (2) and (3) are performed in a loop to achieve continuous positioning.
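The predict-then-register loop of steps (2)-(4) can be sketched as below. The `register` function here is a deliberate placeholder (it just returns the prediction) standing in for the RANSAC-based registration described in the text; the 5 m DR increment is likewise an assumption for the example.

```python
import numpy as np

def predict(T_prev, T_rel):
    """Step (2): propagate the last registered pose with the DR relative pose."""
    return T_prev @ T_rel

def register(T_predict, observation, feature_map):
    """Step (3) placeholder: a real system refines T_predict by registering the
    observation against the feature map; here we simply return the prediction
    to show the loop structure."""
    return T_predict

T = np.eye(4)                        # step (1): initial pose from an absolute source
T_rel = np.eye(4); T_rel[0, 3] = 5.0 # DR: 5 m forward between observation frames
for _ in range(3):                   # step (4): loop steps (2)-(3)
    T_pred = predict(T, T_rel)
    T = register(T_pred, observation=None, feature_map=None)
```

After three loop iterations the pose has advanced 15 m along x, showing how each registered pose seeds the next prediction.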
For the registration process based on random sample consensus in the positioning process, the specific implementation steps comprise:
Carrying out finite element segmentation on the multi-frame observation features of the road according to a preset segmentation mode so as to determine a plurality of registration observation units;
registering each registration observation unit with the feature map according to the relative pose estimation parameters and the latest historical pose by using a preset registration algorithm to determine a pose to be selected and a confidence coefficient corresponding to each registration observation unit, wherein the confidence coefficient value is used for representing the matching degree of the pose to be selected and the feature map;
and determining the current pose according to the confidence degree corresponding to each pose to be selected and a preset screening mode.
In this embodiment, the road multi-frame observation feature is subjected to finite element segmentation, and random sample consensus registration is performed based on the segmented registration observation units, so that non-random noise and random noise in the system are effectively suppressed.
In one possible design, using a preset registration algorithm, registering each registration observation unit with the feature map according to the relative pose estimation parameter and the latest historical pose, including:
determining all observation elements in each registration observation unit as initial local points;
determining an initial predicted pose according to the relative pose estimation parameters and the latest historical pose;
Calculating the re-projection error between each local point and each characteristic element in the characteristic map;
and repeatedly and alternately performing, according to the reprojection errors and using a preset optimization algorithm, value optimization of the predicted pose and quantity optimization of the local points until the confidence converges. The confidence is determined by the cost function corresponding to the reprojection errors, and the pose to be selected is the predicted pose at the moment the confidence converges. Value optimization drives the confidence to its maximum value (equivalently, the cost function to its minimum value); quantity optimization eliminates interference elements among the local points, an interference element being one whose reprojection error is greater than or equal to a preset threshold.
The embodiment provides a positioning method: a road single-frame observation feature and a relative pose estimation parameter are acquired, the relative pose estimation parameter being the relative pose from a first observation point to a second observation point, where the first and second observation points are the observation points of any two consecutive road single-frame observation features; a plurality of road single-frame observation features meeting preset requirements are spliced into a road multi-frame observation feature according to the relative pose estimation parameters; and registration is performed according to the relative pose estimation parameters, the latest historical pose, the road multi-frame observation feature, and the pre-established feature map to determine positioning information, where the positioning information includes the current pose and the latest historical pose is the pose determined in the latest positioning or the most recently received monitored pose sent by an absolute positioning source. This solves the technical problem in existing feature-positioning technology that noise or error interference prevents good positioning registration when matching observed features against the feature map, and achieves the technical effects of effectively suppressing non-random noise and improving positioning accuracy and stability.
For ease of understanding, one possible implementation in S203 will be described in more detail below with another example.
Fig. 3 is a flow chart of another positioning method according to the embodiment of the present application. As shown in fig. 3, the positioning method specifically includes the steps of:
S301, acquiring a road single-frame observation feature and a relative pose estimation parameter.
In this embodiment, the road single-frame observation feature and the relative pose estimation parameter of the carrier are continuously obtained according to the preset space interval.
S302, splicing a plurality of road single-frame observation features meeting preset requirements into a road multi-frame observation feature according to the relative pose estimation parameters.
In this step, specifically, the method includes:
calculating a first relative pose of each road single frame observation feature and the latest road single frame observation feature;
according to each first relative pose, converting each road single-frame observation feature into an observation feature to be combined, wherein the observation feature to be combined is a road single-frame observation feature described under the latest pose, and the latest pose is a pose corresponding to the latest road single-frame observation feature;
and superposing and combining all the observation features to be combined to determine multi-frame observation features of the road.
The explanation of the specific terms of S301 and S302 can refer to S201-S202, and will not be repeated here.
S303, segmenting the multi-frame road observation feature to determine a plurality of registration observation units.
In this step, finite element segmentation is performed on the road multi-frame observation feature according to a preset segmentation mode to determine a plurality of registration observation units. The preset segmentation mode includes: cutting with a specific point in the road multi-frame observation feature (such as the position of the carrier) as the center, at intervals of an angle d, including equal division with a constant d and unequal division with d varying according to a preset rule. The segmented registration observation units are then numbered D1, D2, D3, …, Dm.
That is, cutting is performed according to a preset center point and a segmentation interval angle within the road multi-frame observation feature to determine the plurality of registration observation units.
The smaller the value of the angle d, the finer the road multi-frame observation feature is cut and the stronger the noise suppression effect in subsequent registration against the feature map. However, d cannot be too small: an observation range that is too narrow after cutting also reduces registration accuracy. A value of d between 15 and 90 degrees is therefore recommended, e.g., 15, 30, 45, 60, or 90 degrees.
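The equal-division angular segmentation can be sketched as follows: points of the multi-frame observation are binned by azimuth around the carrier position at a constant interval d. The function name, point values, and the choice of d = 90 degrees are assumptions for the illustration.

```python
import numpy as np

def segment_by_angle(points, center, d_deg):
    """Cut a 2D multi-frame observation into registration units D1..Dm by
    azimuth around `center`, at equal angular intervals of d degrees."""
    rel = points - center
    az = np.degrees(np.arctan2(rel[:, 1], rel[:, 0])) % 360.0  # azimuth in [0, 360)
    idx = (az // d_deg).astype(int)                            # sector index per point
    m = int(round(360.0 / d_deg))
    return [points[idx == k] for k in range(m)]

# four observed points, one in each quadrant around the carrier
pts = np.array([[1.0, 0.5], [-0.5, 1.0], [-1.0, -0.5], [0.5, -1.0]])
units = segment_by_angle(pts, center=np.zeros(2), d_deg=90.0)
```

With d = 90 degrees the observation is cut into four registration units, one per quadrant; a smaller d would produce more, narrower units, matching the trade-off described above.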
S304, all observation elements in each registration observation unit are determined to be initial feature points.
In this step, all the registration observation units D1, D2, D3, …, Dm cut in the previous step are taken as the source of the initial feature points, also called local points (inliers); each registration observation unit is then registered one by one in a certain order, or one registration observation unit is randomly selected each time for registration, until all registration observation units have been registered.
S305, determining an initial predicted pose according to the relative pose estimation parameters and the latest historical pose.
In this step, when registration is first performed on any registration observation unit, the initial value of the vehicle pose prediction for this registration, i.e., the initial predicted pose, is estimated from the relative pose estimation parameters acquired in S301. The predicted pose is a pose that is continuously corrected during registration and is not necessarily the final positioned pose of the vehicle; when the optimization of the predicted pose is completed, its final value, i.e., the optimal pose, is the current pose of the vehicle for this positioning.
It should be noted that, when the vehicle is started or is first positioned, other absolute positioning sources, such as the pose provided by GNSS, are used as the initial predicted pose.
It should also be noted that the registration process can be understood as solving the pose T_WB-calculate that minimizes the distance between the current observation elements and their corresponding semantic observations (i.e., the feature elements) in the high-precision map, i.e., the feature map.
Starting from S306, the detailed process of registration is a circularly repeated iterative process.
S306, calculating the distance between each feature point and each feature element in the feature map according to the predicted pose.
In this step, the reprojection errors between the local points and the feature elements are determined according to the predicted pose. Suppose all the observation elements in any registration observation unit Di are ^B P1, ^B P2, ^B P3, …, ^B Pn, and the feature elements corresponding to each observation element in the feature map are ^W P1, ^W P2, ^W P3, …, ^W Pn. The reprojection error between any observation element ^B Pi and the feature element ^W Pi in the feature map can be expressed by equation (1):
DIST(T_WB × ^B Pi, ^W Pi)    (1)
where T_WB is the predicted pose.
S307, determining the confidence according to each distance.
In this step, the cost function and the confidence are determined according to a preset averaging algorithm and the reprojection errors. The direct distance between the observation elements and the feature elements is defined as the cost function, which can be expressed by equation (2):
F(T_WB) = MEAN(DIST(T_WB × ^B Pi, ^W Pi))    (2)
where DIST(·) denotes the reprojection error between an observation element ^B Pi and the feature element ^W Pi in the feature map, and MEAN(·) denotes averaging the reprojection errors over all observation elements in the registration observation unit Di.
The confidence may be initialized to zero at the start of registration; once the cost function has been computed, the confidence is updated with a value derived from it.
Alternatively, the reciprocal of the average of the distances is taken as the confidence.
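Equations (1)-(2) and the reciprocal-confidence choice can be sketched as below, with Euclidean distance standing in for DIST; the point sets and candidate poses are made-up values for the illustration.

```python
import numpy as np

def cost(T_WB, obs_B, map_W):
    """Equation (2): mean reprojection error over one registration unit Di."""
    obs_h = np.hstack([obs_B, np.ones((len(obs_B), 1))])
    proj = (T_WB @ obs_h.T).T[:, :3]              # T_WB * ^B Pi
    dist = np.linalg.norm(proj - map_W, axis=1)   # DIST(T_WB * ^B Pi, ^W Pi)
    return dist.mean()                            # MEAN(...)

def confidence(T_WB, obs_B, map_W):
    """This embodiment's confidence: reciprocal of the cost function."""
    return 1.0 / cost(T_WB, obs_B, map_W)

map_W = np.array([[5.0, 0.0, 0.0], [0.0, 5.0, 0.0]])  # feature elements in the map
T_true = np.eye(4); T_true[0, 3] = 1.0                # candidate pose, 1 m along x
obs_B = map_W - [1.0, 0.0, 0.0]                       # observations consistent with T_true
T_off = np.eye(4)                                      # candidate pose off by 1 m
```

Under the consistent pose the cost is zero; under the offset pose every element has a 1 m reprojection error, so the cost is 1 and the confidence is its reciprocal, 1.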
And S308, optimizing and adjusting the predicted pose according to the distance.
In this step, the predicted pose is optimized and adjusted by using a preset optimization algorithm in a repeated iteration manner, which specifically includes:
adjusting the value of the predicted pose to minimize the distance between the feature points and the feature elements in the feature map;
determining a new confidence level according to the adjusted predicted pose;
if the new confidence coefficient is larger than the original confidence coefficient, the new confidence coefficient is reserved, otherwise, the original confidence coefficient is reserved.
For ease of understanding, the registration process of the present embodiment can be understood as solving the pose T_WB-calculate that minimizes the distance between the current observation elements and their corresponding semantic observations (i.e., the feature elements) in the high-precision map, i.e., the feature map.
The optimization process can be expressed by equation (3):
T_WB-calculate = argmin(F(T_WB))    (3)
where argmin(·) denotes taking the optimal T_WB that minimizes the value of the cost function.
The smaller the value of the corresponding cost function F(T_WB-calculate), the higher the registration quality.
In this embodiment, the confidence is the reciprocal of the cost function; therefore, the greater the confidence, the higher the registration quality.
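Equation (3) can be illustrated with a crude one-dimensional search in which only the x-translation of T_WB is free; the points, the 2 m true offset, and the grid are all assumptions for the sketch (a real implementation would use a proper nonlinear optimizer over the full pose).

```python
import numpy as np

def cost_1d(tx, obs_B, map_W):
    """F(T_WB) when only the x-translation of the pose is unknown."""
    return np.mean(np.linalg.norm(obs_B + [tx, 0.0, 0.0] - map_W, axis=1))

map_W = np.array([[5.0, 0.0, 0.0], [6.0, 1.0, 0.0]])  # feature elements
obs_B = map_W - [2.0, 0.0, 0.0]                       # observations offset by 2 m in x
candidates = np.linspace(0.0, 4.0, 81)                # 0.05 m grid of candidate poses
costs = [cost_1d(t, obs_B, map_W) for t in candidates]
tx_best = candidates[int(np.argmin(costs))]           # argmin(F(T_WB)) of equation (3)
```

The minimizer lands on the true 2 m offset, where the cost function is (numerically) zero and the confidence is therefore maximal.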
S309, judging whether the confidence coefficient of the iteration is larger than the confidence coefficient obtained by the previous calculation.
In this step, if not, the cost function has reached its minimum value (the confidence its maximum value), i.e., the optimal solution has been found; if so, the optimal solution has not yet been found, and S306-S309 must be executed again.
It should be noted that S306-S309 form a loop-iterative optimization process for the predicted pose. After each optimization adjustment of the predicted pose, the cost function and the confidence are re-determined according to the adjusted predicted pose, until the cost function reaches its minimum value or the confidence reaches its maximum value.
After the optimization of the predicted pose is completed, the process of optimizing the number of the local points is started from S310:
and S310, determining the current value of the predicted pose as the optimal pose.
In this step, the current value T_WB-calculate of the predicted pose is taken as the optimal pose.
S311, re-calculating the re-projection errors between each observation element in the registration observation unit and the characteristic elements under the world coordinate system according to the optimal pose.
In this step, the pose T_WB-calculate in equation (1) is replaced by the optimal pose T_WB-best to recalculate the reprojection errors.
S312, determining the confidence coefficient of the iteration according to the recalculated reprojection error.
In this step, the latest value of the cost function is calculated according to equation (2) and the reprojection errors obtained in S311, and its reciprocal gives the confidence of the current iteration.
S313, judging whether the difference value between the confidence coefficient of the current iteration and the confidence coefficient of the last iteration is smaller than or equal to a convergence judgment threshold value.
In this step, if yes, it is determined that the confidence has converged, and S314 is executed; that is, the current selection of local points is proved to be the best selection, or the noise points and interference elements among the local points have been eliminated, or the interference effect of the remaining interference elements on registration accuracy is small enough that no further quantity optimization is required.
If not, S315 is executed, after which S306-S313 are executed again.
It should be noted that convergence here means the confidence of the current iteration shows no significant improvement over that of the previous iteration, e.g., the difference is less than 1e-3. This is an engineering tuning parameter.
In one possible design, the system gaussian white noise is selected as the convergence decision threshold.
And S314, outputting the optimal pose as the current pose of the carrier.
In this step, the output optimal pose serves as the latest historical pose referred to in S203 for the next positioning round.
And S315, deleting the characteristic points which do not meet the preset requirement according to the adjusted predicted pose, and taking the rest characteristic points as new characteristic points.
In this embodiment, all feature points satisfying the Gaussian white noise requirement are selected as new feature points.
In this step, the observation elements whose reprojection error is less than or equal to the preset threshold are taken as the new feature points, or new local points: if an observation element's reprojection error is greater than the convergence decision threshold, it is removed; the observation elements whose reprojection error is less than or equal to the preset threshold are retained as the new local points. Value optimization of the local points then restarts, i.e., the predicted pose is readjusted according to the distances corresponding to the new feature points until the confidence converges, whereupon the predicted pose is determined as the pose to be selected.
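The quantity-optimization step, pruning observation elements whose reprojection error under the optimal pose exceeds the threshold, can be sketched as follows; the function name, the 0.5 m threshold, and the point values are assumptions for the illustration.

```python
import numpy as np

def prune_outliers(T_best, obs_B, map_W, threshold):
    """Quantity optimization: keep only observation elements whose reprojection
    error under the optimal pose is <= threshold."""
    obs_h = np.hstack([obs_B, np.ones((len(obs_B), 1))])
    err = np.linalg.norm((T_best @ obs_h.T).T[:, :3] - map_W, axis=1)
    keep = err <= threshold
    return obs_B[keep], map_W[keep]

T_best = np.eye(4)  # optimal pose from the value-optimization phase (identity here)
obs_B = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [9.0, 9.0, 0.0]])
map_W = np.array([[1.0, 0.0, 0.0], [2.1, 0.0, 0.0], [2.0, 2.0, 0.0]])
inliers_B, inliers_W = prune_outliers(T_best, obs_B, map_W, threshold=0.5)
```

The third element, a gross mismatch, is removed; value optimization of the predicted pose then restarts on the two surviving local points.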
The embodiment provides a positioning method: the road single-frame observation features and the relative pose estimation parameters of the carrier are continuously acquired at a preset spatial interval; a plurality of road single-frame observation features meeting preset requirements are spliced into a road multi-frame observation feature according to the relative pose estimation parameters; and a preset registration algorithm determines the positioning information of the carrier, including its current pose, according to the relative pose estimation parameters, the latest historical pose, the road multi-frame observation feature, and the pre-established feature map. This solves the technical problem in existing feature-positioning technology that noise or error interference prevents good positioning registration when matching observed features against the feature map, and achieves the technical effects of effectively suppressing non-random noise and improving positioning accuracy and stability.
Fig. 4 is a schematic structural diagram of a positioning device according to an embodiment of the present application. The positioning device 400 may be implemented in software, hardware, or a combination of both.
As shown in fig. 4, the positioning device 400 includes:
the obtaining module 401 is configured to obtain a road single-frame observation feature and a relative pose estimation parameter, where the relative pose estimation parameter is the relative pose from a first observation point to a second observation point, and the first and second observation points are the observation points of any two consecutive road single-frame observation features;
A processing module 402, configured to:
splicing a plurality of road single-frame observation features meeting preset requirements into a road multi-frame observation feature according to the relative pose estimation parameters;
and registering according to the relative pose estimation parameters, the latest historical pose, the multi-frame road observation features and the pre-established feature map to determine positioning information, wherein the positioning information comprises the current pose, and the latest historical pose is the pose determined in the latest positioning or the monitoring pose sent by the latest received absolute positioning source.
In one possible design, the processing module 402 is configured to:
dividing the multi-frame observation features of the road to determine a plurality of registration observation units;
registering each registration observation unit with the feature map according to the relative pose estimation parameters and the latest historical pose so as to determine the pose to be selected corresponding to each registration observation unit and the confidence coefficient corresponding to the pose to be selected, wherein the confidence coefficient is used for representing the matching degree of the pose to be selected and the feature map;
and determining the current pose according to each confidence degree and a preset screening mode.
In one possible design, the processing module 402 is configured to:
selecting the maximum value in each confidence coefficient as the maximum confidence coefficient;
And determining the pose to be selected corresponding to the maximum confidence as the current pose.
In one possible design, the processing module 402 is configured to:
determining all observation elements in each registration observation unit as initial feature points;
determining an initial predicted pose according to the relative pose estimation parameters and the latest historical pose;
calculating the distance between each feature point and each feature element in the feature map according to the predicted pose, and determining the confidence according to the distances;
optimizing and adjusting the predicted pose according to the distances;
deleting the feature points which do not meet a preset requirement according to the adjusted predicted pose, and taking the remaining feature points as new feature points;
and adjusting the predicted pose again according to the distance corresponding to each new feature point until the confidence converges, and determining the predicted pose as the pose to be selected.
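The iterative loop described above (predict, match, score, prune, re-optimize until the confidence converges) is essentially an ICP-style registration. The following sketch illustrates it under several assumptions that the embodiment does not fix: observation elements and map feature elements are 2-D points, matching is brute-force nearest neighbour, the "preset requirement" for deleting feature points is a 3-sigma residual test, and the pose optimization is a closed-form rigid alignment.

```python
import numpy as np

def register_unit(points, map_points, init_pose, n_iters=30, tol=1e-4):
    """ICP-style registration of one observation unit against the feature map.

    points:     (N, 2) observation elements of the unit (vehicle frame)
    map_points: (M, 2) feature elements of the feature map (world frame)
    init_pose:  (x, y, yaw) initial predicted pose, obtained from the
                relative pose estimation parameters + latest historical pose
    Returns (pose_to_be_selected, confidence), where confidence is the
    reciprocal of the mean point-to-feature distance.
    """
    x, y, yaw = init_pose
    confidence, prev_mean = 0.0, np.inf
    for _ in range(n_iters):
        # 1. Transform the feature points into the world frame via the predicted pose.
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s], [s, c]])
        world = points @ R.T + np.array([x, y])
        # 2. Match each feature point to its nearest feature element in the map.
        d2 = ((world[:, None, :] - map_points[None, :, :]) ** 2).sum(-1)
        nearest = map_points[d2.argmin(axis=1)]
        dist = np.sqrt(d2.min(axis=1))
        # 3. Confidence = reciprocal of the mean distance (larger = better match).
        mean_d = dist.mean()
        confidence = max(confidence, 1.0 / (mean_d + 1e-12))
        if mean_d < tol or abs(prev_mean - mean_d) < tol:
            break  # confidence has converged
        prev_mean = mean_d
        # 4. Delete feature points that fail the (here: 3-sigma) requirement.
        keep = dist < mean_d + 3.0 * dist.std() + 1e-12
        src, dst = world[keep], nearest[keep]
        # 5. Optimize the predicted pose: closed-form 2-D rigid alignment
        #    of the kept points onto their matched map elements.
        mu_s, mu_d = src.mean(0), dst.mean(0)
        H = (src - mu_s).T @ (dst - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dt = mu_d - dR @ mu_s
        # 6. Compose the correction onto the predicted pose and iterate again.
        yaw += np.arctan2(dR[1, 0], dR[0, 0])
        x, y = dR @ np.array([x, y]) + dt
    return (x, y, yaw), confidence
```

In practice the brute-force distance matrix would be replaced by a spatial index (e.g. a k-d tree), and a 3-D pose would use the analogous SE(3) machinery; the structure of the loop is unchanged.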
In one possible design, the processing module 402 is configured to take the reciprocal of the average of the distances as the confidence.
In one possible design, the processing module 402 is configured to:
adjusting the value of the predicted pose so as to minimize the distances between the feature points and the feature elements in the feature map;
determining a new confidence according to the adjusted predicted pose;
and if the new confidence is greater than the original confidence, retaining the new confidence; otherwise, retaining the original confidence.
In one possible design, the processing module 402 is configured to select all feature points that meet a Gaussian white noise requirement as the new feature points.
In one possible design, the processing module 402 is configured to perform segmentation in the road multi-frame observation feature according to a preset center point and a slicing interval angle, so as to determine the plurality of registration observation units.
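One way to realize this angular segmentation is to bin each observation element by its polar angle about the preset center point. This is a sketch only: the 2-D point representation and the 90° interval are chosen purely for illustration.

```python
import numpy as np

def split_into_units(points, center=(0.0, 0.0), interval_deg=90.0):
    """Split observation elements into registration observation units by
    the polar angle of each element about a preset center point.

    points: (N, 2) observation elements of the road multi-frame feature.
    Returns a list of (Ni, 2) arrays, one unit per angular sector.
    """
    ang = np.degrees(np.arctan2(points[:, 1] - center[1],
                                points[:, 0] - center[0])) % 360.0
    n_units = int(round(360.0 / interval_deg))
    # Clamp to guard against the edge case ang == 360 after rounding.
    idx = np.minimum((ang // interval_deg).astype(int), n_units - 1)
    return [points[idx == k] for k in range(n_units)]
```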
In one possible design, the processing module 402 is configured to:
calculating a first relative pose between each road single-frame observation feature and the latest road single-frame observation feature;
converting, according to each first relative pose, each road single-frame observation feature into an observation feature to be combined, wherein the observation feature to be combined is the road single-frame observation feature described under the latest pose, and the latest pose is the pose corresponding to the latest road single-frame observation feature;
and superposing and combining all the observation features to be combined to determine the road multi-frame observation feature.
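The three steps above (compute each frame's relative pose to the latest frame, convert each single-frame feature into the latest pose, superpose) can be sketched as follows, assuming 2-D point features and `(dx, dy, dyaw)` relative poses; both representations are illustrative, not prescribed by the embodiment.

```python
import numpy as np

def stitch_frames(frames, rel_poses):
    """Stitch single-frame observation features into one multi-frame feature.

    frames:    list of (Ni, 2) point features, one per observation frame
               (the last entry being the latest frame).
    rel_poses: list of (dx, dy, dyaw) first relative poses; rel_poses[i]
               expresses frame i in the latest frame, i.e.
               p_latest = R(dyaw) @ p_i + (dx, dy). The latest frame's own
               relative pose is the identity (0, 0, 0).
    Returns an (N, 2) array: all features described under the latest pose.
    """
    merged = []
    for pts, (dx, dy, dyaw) in zip(frames, rel_poses):
        c, s = np.cos(dyaw), np.sin(dyaw)
        R = np.array([[c, -s], [s, c]])
        # Convert this frame's features into observation features to be combined.
        merged.append(pts @ R.T + np.array([dx, dy]))
    # Superpose and combine all converted features.
    return np.vstack(merged)
```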
In one possible design, the positioning information includes initialized positioning information for first-time positioning, the corresponding latest historical pose is the monitoring pose sent by an absolute positioning source, and the road multi-frame observation feature is the road single-frame observation feature obtained for the first time.
Optionally, the road single-frame observation feature includes: a point, line or plane representing a geometric shape or an identity, a feature point containing abstract information, or a point, line or plane containing semantic information.
It should be noted that, the apparatus provided in the embodiment shown in fig. 4 may perform the method provided in any of the above method embodiments, and the specific implementation principles, technical features, explanation of terms, and technical effects are similar, and are not repeated herein.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 5, the electronic device 500 may include: at least one processor 501 and a memory 502. In fig. 5, one processor is taken as an example.
A memory 502 for storing a program. In particular, the program may include program code including computer-operating instructions.
The memory 502 may comprise high-speed RAM memory, and may further comprise non-volatile memory, such as at least one disk memory.
The processor 501 is configured to execute computer-executable instructions stored in the memory 502 to implement the methods described in the method embodiments above.
The processor 501 may be a central processing unit (central processing unit, abbreviated as CPU), or an application specific integrated circuit (application specific integrated circuit, abbreviated as ASIC), or one or more integrated circuits configured to implement embodiments of the present application.
Alternatively, the memory 502 may be separate or integrated with the processor 501. When the memory 502 is a device separate from the processor 501, the electronic device 500 may further include:
a bus 503 for connecting the processor 501 and the memory 502. The bus may be an industry standard architecture (ISA) bus, a peripheral component interconnect (PCI) bus, or an extended industry standard architecture (EISA) bus, among others. The bus may be divided into an address bus, a data bus, a control bus, and so on, but this does not mean that there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the memory 502 and the processor 501 are integrated on a chip, the memory 502 and the processor 501 may complete communication through an internal interface.
In one possible design, the processor 501 and the memory 502 are integrated in any vehicle-mounted information interaction terminal such as a vehicle-mounted central computing platform architecture, a central supercomputer, a central computer, a central domain controller, an integrated ECU, a driving brain, an SPB, a vehicle, a DHU, an IHU, or an IVI (in-vehicle infotainment system).
The SPB (super brain) is a central domain controller defined as the brain of the automobile.
The IHU (Infotainment Head Unit), or infotainment host, refers to a vehicle-mounted integrated information processing device that adopts a vehicle-mounted dedicated central processing unit and is built on the vehicle body bus system and Internet services. It can realize a series of applications including three-dimensional navigation, real-time road conditions, IPTV, driving assistance, fault detection, vehicle information, vehicle body control, mobile office, wireless communication, online entertainment and TSP (Telematics Service Provider) services, thereby greatly improving the level of electronization, networking and intelligence of vehicles.
The DHU (Driver Head Unit) is an intelligent cabin controller, where DHU = IHU + DIM: the abbreviation combines IHU and DIM by taking the "D" of DIM to replace the "I" of IHU.
The DIM (Driver Information Module or Dash Integration Module) driver information module, also known as a "meter", is a display screen for displaying information related to the functions of the vehicle, typically placed behind the steering wheel in a position most easily visible to the driver.
The embodiment of the application also provides a vehicle, which comprises any one possible electronic device in the embodiment shown in fig. 5.
Embodiments of the present application also provide a computer-readable storage medium, which may include: various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk. Specifically, the computer-readable storage medium stores program instructions for the methods in the above method embodiments.
The present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the method of the above-described method embodiments.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can still be modified, or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. A positioning method, comprising:
obtaining a road single-frame observation feature and a relative pose estimation parameter, wherein the relative pose estimation parameter is a relative pose from a first observation point to a second observation point, and the first observation point and the second observation point are any two continuous observation points of the road single-frame observation feature;
splicing a plurality of road single-frame observation features meeting a preset requirement into a road multi-frame observation feature according to the relative pose estimation parameters;
dividing the road multi-frame observation feature to determine a plurality of registration observation units;
registering each registration observation unit with a feature map according to the relative pose estimation parameters and the latest historical pose to determine a pose to be selected corresponding to each registration observation unit and a confidence coefficient corresponding to the pose to be selected, wherein the confidence coefficient is used for representing the matching degree of the pose to be selected and the feature map;
determining a current pose according to each confidence and a preset screening mode, wherein the positioning information comprises the current pose, and the latest historical pose is the pose determined in the most recent positioning or the monitoring pose most recently received from an absolute positioning source;
the determining the current pose according to each confidence level and a preset screening mode comprises the following steps:
selecting the maximum value in each confidence coefficient as the maximum confidence coefficient;
and determining the pose to be selected corresponding to the maximum confidence as the current pose.
2. The positioning method according to claim 1, wherein registering each of the registration observation units with the feature map according to the relative pose estimation parameter and the latest historical pose to determine a candidate pose and a confidence level corresponding to each of the registration observation units includes:
determining all observation elements in each registration observation unit as initial feature points;
determining an initial predicted pose according to the relative pose estimation parameters and the latest historical pose;
calculating the distance between each characteristic point and each characteristic element in the characteristic map according to the predicted pose, and determining the confidence according to each distance;
optimizing and adjusting the predicted pose according to the distance;
deleting the characteristic points which do not meet the preset requirements according to the adjusted predicted pose, and taking the rest characteristic points as new characteristic points;
and adjusting the predicted pose again according to the distance corresponding to each new feature point until the confidence coefficient converges, and determining the predicted pose as the pose to be selected.
3. The positioning method of claim 2, wherein said determining said confidence level from each of said distances comprises: taking the reciprocal of the average value of each distance as the confidence.
4. The positioning method according to claim 2, wherein said optimally adjusting said predicted pose according to said distance comprises:
adjusting the value of the predicted pose to minimize the distance between the feature point and the feature element in the feature map;
determining the new confidence coefficient according to the adjusted predicted pose;
if the new confidence coefficient is larger than the original confidence coefficient, the new confidence coefficient is reserved, otherwise, the original confidence coefficient is reserved.
5. The positioning method according to claim 2, wherein the deleting the feature points that do not meet a preset requirement according to the adjusted predicted pose, and taking the remaining feature points as new feature points, includes:
and selecting all the characteristic points meeting the Gaussian white noise requirement as the new characteristic points.
6. The positioning method according to claim 1, wherein the segmenting the multi-frame road observation feature to determine a plurality of registration observation units includes:
cutting according to a preset center point and a cutting interval angle in the road multi-frame observation feature to determine a plurality of registration observation units.
7. The positioning method according to claim 1, wherein the splicing the plurality of road single-frame observation features satisfying a preset requirement into a road multi-frame observation feature according to the relative pose estimation parameter includes:
calculating a first relative pose between each road single-frame observation feature and the latest road single-frame observation feature;
according to the first relative pose, converting each road single-frame observation feature into an observation feature to be combined, wherein the observation feature to be combined is the road single-frame observation feature described under the latest pose, and the latest pose is the pose corresponding to the latest road single-frame observation feature;
and carrying out superposition combination on all the observation features to be combined so as to determine the multi-frame observation features of the road.
8. The positioning method according to any one of claims 2 to 5, wherein the positioning information includes initialized positioning information for performing positioning for the first time, the latest historical pose is the monitoring pose sent by the absolute positioning source, and the multi-frame road observation feature is the single-frame road observation feature acquired for the first time.
9. The positioning method according to claim 1, wherein the road single-frame observation feature includes: a point, line or plane representing a geometric shape or an identity, a feature point containing abstract information, or a point, line or plane containing semantic information.
10. A positioning device, comprising:
the acquisition module is used for acquiring road single-frame observation characteristics and relative pose estimation parameters, wherein the relative pose estimation parameters are relative poses from a first observation point to a second observation point, and the first observation point and the second observation point are any two continuous observation points of the road single-frame observation characteristics;
a processing module for:
splicing a plurality of road single-frame observation features meeting a preset requirement into a road multi-frame observation feature according to the relative pose estimation parameters;
registering according to the relative pose estimation parameters, the latest historical pose, the multi-frame road observation features and a pre-established feature map to determine positioning information, wherein the positioning information comprises current pose, and the latest historical pose is the pose determined during the latest positioning or the monitoring pose sent by the latest received absolute positioning source;
the processing module is specifically configured to segment the multi-frame observation feature of the road to determine a plurality of registration observation units;
registering each registration observation unit with the feature map according to the relative pose estimation parameters and the latest historical pose to determine a pose to be selected corresponding to each registration observation unit and a confidence coefficient corresponding to the pose to be selected, wherein the confidence coefficient is used for representing the matching degree of the pose to be selected and the feature map;
determining the current pose according to each confidence and a preset screening mode;
the processing module is specifically configured to select a maximum value of the confidence coefficients as a maximum confidence coefficient;
and determining the pose to be selected corresponding to the maximum confidence as the current pose.
11. An electronic device, comprising: a processor and a memory;
the memory is used for storing a computer program of the processor;
the processor is configured to perform the positioning method of any of claims 1 to 9 via execution of the computer program.
12. A vehicle comprising the electronic device of claim 11.
13. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the positioning method according to any of claims 1 to 9.
CN202111407916.5A 2021-11-24 2021-11-24 Positioning method, device, equipment and storage medium Active CN114252081B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111407916.5A CN114252081B (en) 2021-11-24 2021-11-24 Positioning method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN114252081A CN114252081A (en) 2022-03-29
CN114252081B true CN114252081B (en) 2024-03-08

Family

ID=80791113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111407916.5A Active CN114252081B (en) 2021-11-24 2021-11-24 Positioning method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114252081B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105241464A (en) * 2015-10-16 2016-01-13 江苏省电力公司苏州供电公司 Fast matching method of crane track map
US9285805B1 (en) * 2015-07-02 2016-03-15 Geodigital International Inc. Attributed roadway trajectories for self-driving vehicles
CN110068824A (en) * 2019-04-17 2019-07-30 北京地平线机器人技术研发有限公司 A kind of sensor pose determines method and apparatus
CN111965627A (en) * 2020-08-18 2020-11-20 湖北亿咖通科技有限公司 Multi-laser radar calibration method for vehicle
CN112923931A (en) * 2019-12-06 2021-06-08 北理慧动(常熟)科技有限公司 Feature map matching and GPS positioning information fusion method based on fixed route


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
前方车辆距离检测技术研究 (Research on distance detection technology for the vehicle ahead); 李占旗 et al.; 汽车实用技术 (Automobile Applied Technology), No. 21; 32-35 *

Also Published As

Publication number Publication date
CN114252081A (en) 2022-03-29

Similar Documents

Publication Publication Date Title
EP3506212B1 (en) Method and apparatus for generating raster map
US9933268B2 (en) Method and system for improving accuracy of digital map data utilized by a vehicle
US11410429B2 (en) Image collection system, image collection method, image collection device, recording medium, and vehicle communication device
CN111351502B (en) Method, apparatus and computer program product for generating a top view of an environment from a perspective view
US11507107B2 (en) Map information system
JP7147442B2 (en) map information system
US10977816B2 (en) Image processing apparatus, image processing program, and driving assistance system
CN114111775B (en) Multi-sensor fusion positioning method and device, storage medium and electronic equipment
US20210190537A1 (en) Method and system for generating and updating digital maps
US11680822B2 (en) Apparatus and methods for managing maps
CN112805762B (en) System and method for improving traffic condition visualization
CN111319560B (en) Information processing system, program, and information processing method
US11938945B2 (en) Information processing system, program, and information processing method
JP2019100924A (en) Vehicle trajectory correction device
CN110770540B (en) Method and device for constructing environment model
JP2019174191A (en) Data structure, information transmitting device, control method, program, and storage medium
CN114252081B (en) Positioning method, device, equipment and storage medium
CN114616158A (en) Automatic driving method, device and storage medium
WO2019188874A1 (en) Data structure, information processing device, and map data generation device
CN111060114A (en) Method and device for generating feature map of high-precision map
CN113312403B (en) Map acquisition method and device, electronic equipment and storage medium
JP2019148987A (en) On-vehicle device, image supply method, server device, image collection method, and image acquisition system
JP2019196941A (en) Own vehicle position estimating device
CN113128317B (en) Lane positioning system and lane positioning method
JP2023007914A (en) Estimation system, estimation device, estimation method, and estimation program

Legal Events

Date Code Title Description
PB01 Publication
TA01 Transfer of patent application right

Effective date of registration: 20220328

Address after: 430051 No. b1336, chuanggu startup area, taizihu cultural Digital Creative Industry Park, No. 18, Shenlong Avenue, Wuhan Economic and Technological Development Zone, Wuhan, Hubei Province

Applicant after: Yikatong (Hubei) Technology Co.,Ltd.

Address before: 430056 building B, building 7, Qidi Xiexin science and Innovation Park, South Taizi Lake innovation Valley, Wuhan Economic and Technological Development Zone, Wuhan City, Hubei Province (qdxx-f7b)

Applicant before: HUBEI ECARX TECHNOLOGY Co.,Ltd.

SE01 Entry into force of request for substantive examination
GR01 Patent grant