CN114565670A - Pose optimization method and device - Google Patents


Info

Publication number
CN114565670A
CN114565670A
Authority
CN
China
Prior art keywords
frame
point cloud
matching
geometric model
pose
Prior art date
Legal status
Pending
Application number
CN202210146496.8A
Other languages
Chinese (zh)
Inventor
余丽
任海兵
邱靖烨
陆亚
辛喆
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202210146496.8A priority Critical patent/CN114565670A/en
Publication of CN114565670A publication Critical patent/CN114565670A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds


Abstract

The specification discloses a pose optimization method and apparatus. Feature extraction is performed on each collected frame of point cloud in turn, according to the collection order, to determine each static target in the frame and its type. At least one frame is then determined from the frames already optimized before the current frame and, together with the current frame, taken as the matching point clouds. Matched static target pairs are determined at least according to the distance between static targets that belong to the same type but to different matching point clouds; an optimization objective function is constructed from the distance and angle between each pair, and the pose of at least part of the matching point clouds is adjusted with the goal of minimizing the objective function, yielding their optimized poses. The method does not depend on global positioning system data, and because it does not match every individual point, its amount of computation is small and its efficiency high.

Description

Pose optimization method and device
Technical Field
The specification relates to the technical field of unmanned driving, and in particular to a pose optimization method and apparatus.
Background
In the field of unmanned driving, map construction depends on the relative poses among the collected frames of point cloud, and the accuracy of these relative poses affects the accuracy of the constructed map. Pose optimization is therefore an important step in map construction.
Existing pose optimization methods generally match each frame of point cloud based on the Iterative Closest Point (ICP) algorithm. Such a method relies on Global Positioning System (GPS) data to determine the initial values of the relative poses between frames (i.e., the relative poses between different frames at the start of optimization), then optimizes these initial values and iteratively obtains the optimized relative poses between the frames.
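For context, the per-point nearest-neighbour association that makes ICP computationally expensive can be sketched as follows (a minimal illustration with hypothetical names, not the method of this specification):

```python
import numpy as np

def icp_step(source, target, pose):
    """One ICP iteration: transform source by pose, pair each point with
    its nearest neighbour in target (the O(N*M) cost referred to above),
    then re-estimate the rigid transform from the pairs."""
    R, t = pose
    moved = source @ R.T + t
    # Nearest-neighbour association, point by point.
    dists = np.linalg.norm(moved[:, None, :] - target[None, :, :], axis=2)
    pairs = target[np.argmin(dists, axis=1)]
    # Closed-form rigid alignment (Kabsch) between the paired sets.
    mu_s, mu_t = moved.mean(0), pairs.mean(0)
    H = (moved - mu_s).T @ (pairs - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R_new = Vt.T @ U.T
    if np.linalg.det(R_new) < 0:  # fix an improper rotation (reflection)
        Vt[-1] *= -1
        R_new = Vt.T @ U.T
    t_new = mu_t - R_new @ mu_s
    return R_new @ R, R_new @ t + t_new
```

The quality of the starting `pose` dominates whether such iterations converge, which is why a poor GPS-derived initial value harms the result.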
However, the result of this method is strongly affected by the GPS. When the GPS signal is absent or weak, for example because satellite signals are lost or interfered with, the initial values determined from GPS data contain large errors, which harms both the optimization process and its result. Moreover, the method involves a large amount of computation and its optimization efficiency is low.
Disclosure of Invention
The present specification provides a pose optimization method and apparatus to partially solve the above problems in the prior art.
The technical solution adopted by the specification is as follows:
the present specification provides a pose optimization method, including:
performing feature extraction on each frame of point cloud in turn, according to the collection order of the collected frames, and determining each static target in the frame of point cloud and the type of each static target;
determining, from the frames of point cloud optimized before the frame of point cloud, at least one frame which, together with the frame of point cloud, serves as the matching point clouds;
determining each matched static target pair at least according to the distance between static targets that belong to the same type but to different matching point clouds, and constructing an optimization objective function according to the distance and angle between each static target pair;
and adjusting the pose of at least part of the matching point clouds with the goal of minimizing the objective function, to obtain the optimized pose of the at least part of the matching point clouds.
Optionally, determining, from the frames of point cloud optimized before the frame of point cloud, at least one frame which, together with the frame of point cloud, serves as the matching point clouds specifically includes:
determining the previous frame of point cloud of the frame of point cloud from the frames before it;
and taking the previous frame of point cloud and the frame of point cloud as the matching point clouds.
Optionally, determining each matched static target pair at least according to the distance between static targets that belong to the same type but to different matching point clouds specifically includes:
determining a geometric model for each static target in each matching point cloud;
determining, for each geometric model in each matching point cloud, the centroid of the geometric model and a vector uniquely identifying its direction;
determining, for each geometric model in the frame of point cloud, the geometric models of the same type from the geometric models of the previous frame of point cloud, as the homogeneous models of the geometric model;
determining the distance and angle between each geometric model and each of its homogeneous models according to the initial value of the relative pose between the frame of point cloud and the previous frame, and the centroid and vector of each geometric model;
and determining each matched static target pair according to the determined distances and angles.
Optionally, determining, from the frames of point cloud optimized before the frame of point cloud, at least one frame which, together with the frame of point cloud, serves as the matching point clouds specifically includes:
determining, at a preset interval, several frames of point cloud from the frames optimized before the frame of point cloud which, together with the frame of point cloud, serve as the matching point clouds.
Optionally, determining each matched static target pair at least according to the distance between static targets that belong to the same type but to different matching point clouds specifically includes:
determining a geometric model for each static target in each matching point cloud;
determining, for each geometric model in each matching point cloud, the centroid of the geometric model and a vector uniquely identifying its direction;
when the frame of point cloud is determined to be a key frame according to at least one of its distance from the previous key frame and the difference of their heading angles, determining, for each geometric model in each matching point cloud, the geometric models of the same type from the geometric models of the other matching point clouds, as the homogeneous models of the geometric model;
determining the distance and angle between each geometric model of each matching point cloud and each of its homogeneous models, according to the initial values of the relative poses between every two matching point clouds and the centroid and vector of each geometric model;
and determining the static target pairs matched between the matching point clouds according to these distances and angles.
Optionally, the method further includes:
when the frame of point cloud is determined to be a non-key frame according to at least one of its distance from the previous key frame and the difference of their heading angles, determining, for each geometric model in the frame of point cloud, the geometric models of the same type from the geometric models of the other matching point clouds, as the homogeneous models of the geometric model;
determining the distance and angle between each geometric model of the frame of point cloud and each of its homogeneous models, according to the initial values of the relative poses between the frame of point cloud and the other matching point clouds and the centroid and vector of each geometric model;
and determining the static target pairs matched between the frame of point cloud and each other matching point cloud according to these distances and angles.
Optionally, constructing the optimization objective function according to the distance and angle between each static target pair specifically includes:
constructing the optimization objective function according to the distance between the centroids of the two matched geometric models in each static target pair and the angle between their vectors.
Optionally, before constructing the optimization objective function according to the distance and angle between each static target pair, the method further includes:
performing linear optimization on the initial value through a random sample consensus (RANSAC) algorithm, and re-determining the initial value of the relative pose between the frame of point cloud and the previous frame.
Optionally, adjusting the pose of at least part of the matching point clouds with the goal of minimizing the objective function, to obtain their optimized poses, specifically includes:
adjusting the pose of the frame of point cloud, starting from the initial value of the relative pose between the frame of point cloud and the previous frame, with the goal of minimizing the objective function, to obtain the optimized pose of the frame of point cloud.
Optionally, adjusting the pose of at least part of the matching point clouds with the goal of minimizing the objective function, to obtain their optimized poses, specifically includes:
when the frame of point cloud is a key frame, adjusting the poses of the matching point clouds, starting from the initial values of the relative poses between every two matching point clouds, with the goal of minimizing the objective function, to obtain the optimized pose of each matching point cloud;
and when the frame of point cloud is a non-key frame, adjusting the pose of the frame of point cloud, starting from the initial values of the relative poses between it and the other matching point clouds, with the goal of minimizing the objective function, to obtain its optimized pose.
This specification provides a pose optimization apparatus, including:
a feature extraction module, configured to perform feature extraction on each frame of point cloud in turn, according to the collection order of the frames, and determine each static target in the frame of point cloud and the type of each static target;
a determining module, configured to determine, from the frames of point cloud optimized before the frame of point cloud, at least one frame which, together with the frame of point cloud, serves as the matching point clouds;
a matching module, configured to determine each matched static target pair at least according to the distance between static targets that belong to the same type but to different matching point clouds, and construct an optimization objective function according to the distance and angle between each static target pair;
and an optimization module, configured to adjust the pose of at least part of the matching point clouds with the goal of minimizing the objective function, to obtain the optimized pose of the at least part of the matching point clouds.
The present specification provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above pose optimization method.
The present specification provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the pose optimization method when executing the program.
The technical solution adopted by this specification can achieve the following beneficial effects:
In the pose optimization method provided by this specification, feature extraction is performed on each collected frame of point cloud in turn, according to the collection order, to determine each static target in the frame and its type. At least one frame is determined from the frames optimized before the current frame and, together with the current frame, taken as the matching point clouds. Matched static target pairs are then determined at least according to the distance between static targets that belong to the same type but to different matching point clouds, and an optimization objective function is constructed from the distance and angle between each pair. Finally, the pose of at least part of the matching point clouds is adjusted with the goal of minimizing the objective function, to obtain their optimized poses.
The method does not depend on global positioning system data: even where the GPS signal is weak or absent, static target pairs can be obtained by matching the static targets in the point clouds, and the objective function built from the distance and angle between each pair is used to adjust the poses of at least part of the matching point clouds, yielding the poses that minimize it. Because matching is performed per static target rather than per point, the amount of computation is small and the efficiency high.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification and are incorporated in and constitute a part of it, illustrate embodiments of the specification and together with the description serve to explain, not to limit, the specification. In the drawings:
fig. 1 is a schematic flow chart of a pose optimization method in this specification;
FIG. 2 is a schematic diagram of a pose optimization process provided herein;
fig. 3 is a schematic diagram of a pose optimization apparatus provided in the present specification;
fig. 4 is a schematic structural diagram of an electronic device provided in this specification.
Detailed Description
At present, when a map is constructed, the relative pose of two frames of point cloud is generally determined frame by frame based on the Global Positioning System (GPS) and an Inertial Measurement Unit (IMU), to obtain the pose of each point cloud. Because the IMU drifts, the poses so determined contain accumulated error. In scenes such as underground passages, areas among high buildings, and tree-lined roads, satellite signals may be lost or interfered with, so the GPS data is missing or inaccurate. A map built from poses determined by GPS and IMU data therefore shows ghosting and large errors, and the pose of each frame of point cloud needs to be optimized.
In methods that optimize point cloud poses based on the Iterative Closest Point (ICP) algorithm, the initial values of the relative poses between frames are still determined from GPS data, so the result is still affected by the GPS: when the signal is absent or weak, the initial values contain large errors, harming both the optimization process and its result. In addition, such methods match every point in the point cloud against the closest point in other frames, which is computationally heavy and inefficient. They are also easily disturbed by the portion of the point cloud belonging to dynamic targets: a moving target occupies different positions in different frames, causing matching to be wrong or to fail.
The pose optimization method of this specification is independent of the GPS and the IMU, and can still obtain accurate point cloud poses in scenes where the GPS signal is weak or absent. The method identifies static targets (such as trunks, columns, and planes) in each frame of point cloud, fits a geometric model to the partial point cloud corresponding to each static target, determines matched static target pairs (equivalently, matched geometric model pairs), and constructs the optimization objective function from the distance and angle between the geometric models of each pair. The pose of each frame when the objective function is minimized is the optimized pose. A matched static target pair is a pair of static targets, in different frames, that correspond to the same target in the real environment; likewise for a matched geometric model pair. Because the geometric models are stable, matching between frames based on them is robust to noise, and the matching result is more accurate.
In addition, the method avoids the errors introduced by the GPS and the IMU, and optimizes the frame poses so that a more accurate map can be built, solving the map-ghosting problem. Moreover, instead of matching point clouds by computing distances between individual points, it matches at the level of static targets, computing distances between their geometric models to obtain matched pairs and then constructing the objective function, so the amount of computation is smaller and the efficiency higher.
It should be noted that the pose of a frame of point cloud in this specification is the pose, at collection time, of the radar that collected that frame.
In order to make the objects, technical solutions and advantages of the present disclosure more clear, the technical solutions of the present disclosure will be clearly and completely described below with reference to the specific embodiments of the present disclosure and the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present specification without any creative effort belong to the protection scope of the present specification.
The technical solutions provided by the embodiments of the present description are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a pose optimization method in this specification, which specifically includes the following steps:
s100: and according to the collection sequence of the collected frame point clouds, sequentially aiming at each frame point cloud, carrying out feature extraction on the frame point cloud, and determining each static target and the type of each static target in the frame point cloud.
In this specification, the pose optimization method may be performed by a server or by an unmanned device; below, the server is taken as the executing body for illustration.
In one or more embodiments of this specification, the method is used in the back-end optimization stage: for the frames of point cloud collected in advance for map construction, poses can be optimized frame by frame with this method.
As described above, point cloud matching in this specification is based on the static targets in the point clouds rather than on individual points. Moreover, a dynamic target occupies different positions in different frames, which would interfere with matching. Therefore only the static targets in each point cloud are determined, so that the static targets matched between different frames can be found. The static targets present may differ from frame to frame, but matched static targets must belong to the same type.
Accordingly, in one or more embodiments of this specification, the server may first take each collected frame of point cloud in collection order, perform feature extraction on it, and determine each static target in the frame and its type, so that in subsequent steps static targets can be matched by type to obtain the static target pairs.
In one or more embodiments of this specification, the method of feature extraction is not limited: for example, features may be extracted by semantic segmentation based on point curvature, determining each static target and its type. Other approaches may equally be used, and the specification is not limited here.
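As an illustration of curvature-based feature extraction, a LOAM-style smoothness measure can be computed per point along a scan line: low values suggest planar surfaces, high values suggest edges such as poles or trunks. This is a sketch under assumed conventions, not the extractor actually used by the specification:

```python
import numpy as np

def point_curvature(scan, k=5):
    """LOAM-style smoothness: compare each point with the sum of its k
    neighbours on either side along the scan line. Hypothetical helper;
    the window size k and the exact formula are illustrative choices."""
    scan = np.asarray(scan, dtype=float)
    n = len(scan)
    curv = np.zeros(n)
    for i in range(k, n - k):
        # 2k copies of the point minus its 2k neighbours; zero on a line.
        diff = 2 * k * scan[i] - scan[i - k:i].sum(0) - scan[i + 1:i + k + 1].sum(0)
        curv[i] = np.linalg.norm(diff)
    return curv
```

Points whose curvature exceeds a threshold could then be grouped and classified (pole, trunk, plane, and so on) by a segmentation step.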
S102: determining at least one frame point cloud and the frame point cloud as a matching point cloud from a plurality of frame point clouds optimized before the frame point cloud.
In one or more embodiments of this specification, since the pose optimization proceeds frame by frame, every frame before the current one has already been optimized and its pose is comparatively accurate. The pose of the current frame can therefore be optimized by matching it against at least part of the frames optimized before it.
Thus, in one or more embodiments, the server may determine at least one frame from the frames optimized before the current frame and take it, together with the current frame, as the matching point clouds. The matching point clouds are the frames whose static targets are matched against each other so that poses can be optimized from the matching result; not all matching point clouds are necessarily the point clouds whose poses are optimized.
It should be noted that, since steps S100 and S102 do not interfere with each other, their order is not limited. The same holds for the other mutually non-interfering steps in this specification.
S104: and determining each matched static target pair at least according to the distance between the static targets which belong to the same type and different frame matching point clouds, and constructing an optimized objective function according to the distance and the angle between each static target pair.
In one or more embodiments of this specification, the pose of a point cloud may be a 6-degree-of-freedom pose, including position (the x, y, z coordinates) and attitude (heading, roll, and pitch angles). The relative pose between point clouds may be represented by a transformation between them (e.g., a rotation matrix and a translation vector).
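A 6-degree-of-freedom pose as described above can be assembled into a homogeneous transformation matrix. The sketch below assumes a Z-Y-X (heading, pitch, roll) rotation order, which is one common convention rather than anything fixed by this specification:

```python
import numpy as np

def pose_to_matrix(x, y, z, heading, pitch, roll):
    """Assemble the 4x4 homogeneous transform for a 6-DoF pose.
    Angles in radians; the Z-Y-X order is an assumed convention."""
    ch, sh = np.cos(heading), np.sin(heading)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[ch, -sh, 0], [sh, ch, 0], [0, 0, 1]])   # heading (yaw)
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T
```

The relative pose between two frames is then the product of one frame's inverse transform with the other's.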
The more accurately the relative pose between two frames is determined, the smaller the distance and angle between matched static targets after one point cloud is rotated and translated according to that relative pose.
Therefore, in one or more embodiments of this specification, the server may determine each matched static target pair at least according to the distance between static targets that belong to the same type but to different matching point clouds, and construct the optimization objective function from the distance and angle between each pair. How the static target pairs are determined is described in detail later.
In one or more embodiments of this specification, for each static target pair a residual may be constructed from the distance and angle between its two static targets; the server may then construct the optimization objective function by summing the residuals of all pairs.
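The residual-and-sum construction described above can be sketched as follows, where each static target pair is represented by the centroids and direction vectors of its two geometric models; the squared-error form and the weights are hypothetical choices, not prescribed by the specification:

```python
import numpy as np

def pair_residual(c_a, v_a, c_b, v_b, R, t, w_dist=1.0, w_ang=1.0):
    """Residual of one matched static-target pair: centroid distance after
    applying the candidate relative pose (R, t), plus the angle between
    the direction vectors. Weights w_dist, w_ang are illustrative."""
    c = R @ c_a + t
    v = R @ v_a
    dist = np.linalg.norm(c - c_b)
    cos_ang = np.clip(v @ v_b / (np.linalg.norm(v) * np.linalg.norm(v_b)), -1, 1)
    return w_dist * dist ** 2 + w_ang * np.arccos(cos_ang) ** 2

def objective(pairs, R, t):
    """Sum of residuals over all matched pairs: the quantity minimized."""
    return sum(pair_residual(*p, R, t) for p in pairs)
```

A nonlinear least-squares solver could then adjust (R, t), starting from the initial relative pose, to minimize `objective`.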
S106: and taking the minimum optimization objective function as a target, and adjusting the pose of at least part of the matching point cloud to obtain the optimized pose of the at least part of the matching point cloud.
In one or more embodiments of this specification, after the objective function has been determined, the server may adjust the pose of at least part of the matching point clouds with the goal of minimizing it, thereby obtaining their optimized poses.
Based on the pose optimization method shown in fig. 1, feature extraction is performed on each collected frame of point cloud in turn, according to the collection order, to determine each static target in the frame and its type. At least one frame is determined from the frames optimized before the current frame and, together with the current frame, taken as the matching point clouds. Matched static target pairs are then determined at least according to the distance between static targets that belong to the same type but to different matching point clouds, and an optimization objective function is constructed from the distance and angle between each pair. Finally, the pose of at least part of the matching point clouds is adjusted with the goal of minimizing the objective function, to obtain their optimized poses.
The method does not depend on global positioning system data: even where the GPS signal is weak or absent, static target pairs can be obtained by matching the static targets in the point clouds, and the objective function built from the distance and angle between each pair is used to adjust the poses of at least part of the matching point clouds, yielding the poses that minimize it. Because matching is performed per static target rather than per point, the amount of computation is small and the efficiency high.
In addition, in one or more embodiments of this specification, matching between static targets may be frame-to-frame: the matched static target pairs are determined only from the current frame and the previous frame, and when poses are optimized from the matching result, only the pose of the later of the two frames may be optimized. In step S102, when determining the matching point clouds, the server may then determine the previous frame of the current frame from the frames before it and take the previous frame and the current frame as the matching point clouds. That is, frame-to-frame matching matches the static targets in the current frame against those in the previous frame.
In one or more embodiments of the present description, the server may determine a geometric model for each static target in each matching point cloud, and, for each geometric model, determine its mass point and a vector that uniquely identifies its orientation. Each geometric model can then be represented by its mass point and vector, so each static target likewise corresponds to one mass point and one vector. Determining the mass points and vectors facilitates the subsequent determination of the distances and angles between geometric models.
In one or more embodiments of the present description, the geometric model of a static target may be determined based on the partial point cloud corresponding to that target, and the mass point of the geometric model may be its center point.
In one or more embodiments of the present specification, when determining the matched static target pairs in step S104, the server may further determine, for each geometric model in the current frame of point cloud, the other geometric models of the same type in the previous frame of point cloud as the homogeneous models of that geometric model. The server may then determine the distance between each geometric model and each of its homogeneous models according to the initial value of the relative pose between the current frame and the previous frame and the mass points and vectors of the geometric models. Finally, the matched static target pairs are determined according to the determined distances.
Specifically, the server may determine, for each geometric model in the current frame of point cloud, the distance between that geometric model and each of its homogeneous models. It then judges, for each homogeneous model, whether that distance is smaller than a preset distance threshold; if so, the homogeneous model and the geometric model are a matched pair of geometric models, and the static targets they correspond to are a static target pair.
When determining the distance between each geometric model and each homogeneous model according to the initial value of the relative pose between the current frame and the previous frame and the mass points and vectors of the geometric models, the current frame of point cloud is first transformed according to that initial relative pose, and the distances are calculated after the pose transformation.
Wherein the pose transformation comprises at least one of translation and rotation.
In one or more embodiments of the present specification, when multiple homogeneous models have a distance to the geometric model smaller than the distance threshold, the homogeneous model with the smallest distance may be taken as the geometric model matched with it, and the static target corresponding to that homogeneous model together with the static target corresponding to the geometric model form a static target pair.
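As a concrete illustration of this selection rule, the following minimal Python sketch (function and variable names are hypothetical, not from the patent) keeps, among same-type candidates below the distance threshold, the one with the smallest distance:

```python
# Hypothetical sketch of the matching rule: among same-type candidates
# whose distance to the query geometric model is below the threshold,
# keep the one with the smallest distance; return None if none qualify.
def match_candidate(distances, threshold):
    """distances: dict mapping candidate id -> distance to the query model."""
    within = {cid: d for cid, d in distances.items() if d < threshold}
    if not within:
        return None  # no matched static target pair for this model
    return min(within, key=within.get)

# Three same-type candidates in the previous frame; "b" is the closest.
print(match_candidate({"a": 0.9, "b": 0.4, "c": 1.7}, threshold=1.0))  # b
```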
In one or more embodiments of the present description, the static targets determined in step S100 may include rod-shaped targets, planar targets, and the like. For example, a rod or column, such as a pillar of an underground garage or a tree trunk on the ground, can be considered a rod-shaped target. The types of static targets thus include at least rod-shaped and planar, and the determined geometric models may include rod-shaped geometric models and planar geometric models. The way the distance is calculated differs for the different types of geometric model.
In one or more embodiments of the present disclosure, the distance between rod-shaped geometric models may be determined by calculating a point-to-line distance (the distance from the mass point of one rod-shaped geometric model to the matched rod-shaped geometric model), and the distance between planar geometric models by calculating a point-to-plane distance (the distance from the mass point of one planar geometric model to the matched planar geometric model).
In one or more embodiments of the present disclosure, for a rod-shaped target, the acquired frame of point cloud typically contains only part of the points on its surface, not all of them. For the same rod-shaped target, the partial point clouds corresponding to it in different frames may therefore not be exactly the same, or may even be completely different. To increase the success rate of matching between geometric models, for a rod-shaped static target the geometric model corresponding to its central axis can be determined as the geometric model of that target, and the mass point of the geometric model may be the center point of the central axis. Whether or not the points collected on the surface of the same rod-shaped static target are the same in different frames, the corresponding central axis is always the same, which improves the matching success rate and accuracy.
Of course, considering that the central axis may be determined inaccurately, the geometric model corresponding to the static object may also be determined based on the partial point cloud to which the static object actually corresponds.
In one or more embodiments of the present description, the formula for determining the distance between rod-shaped geometric models may be specified as the sum of the two point-to-line distances:

D_1 = ‖(P_i − P_j) × v̂_j‖ + ‖(P_j − P_i) × v̂_i‖

wherein D_1 represents the distance between the rod-shaped geometric model in the i-th frame of matching point cloud and the rod-shaped geometric model in the j-th frame of matching point cloud; P_i and P_j represent the mass-point coordinates of the rod-shaped geometric models in the i-th and j-th frames of matching point cloud; and v̂_i and v̂_j represent the (unit) vectors of the rod-shaped geometric models in the i-th and j-th frames of matching point cloud. The term ‖(P_j − P_i) × v̂_i‖ expresses the distance from the rod-shaped geometric model in the j-th frame of point cloud to the rod-shaped geometric model in the i-th frame, and ‖(P_i − P_j) × v̂_j‖ the distance from the rod-shaped geometric model in the i-th frame of point cloud to the rod-shaped geometric model in the j-th frame. When frame-frame matching is performed, the i-th and j-th frames of matching point cloud are two adjacent frames of point cloud.
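The two point-to-line terms can be computed directly with cross products. The sketch below is a hedged reconstruction (the patent's original figure is unavailable, and the function name is hypothetical) that normalizes the direction vectors before use:

```python
import numpy as np

def rod_distance(p_i, v_i, p_j, v_j):
    """Sum of the two point-to-line distances between two rod models:
    mass point of model i to the axis of model j, and vice versa."""
    p_i, p_j = np.asarray(p_i, float), np.asarray(p_j, float)
    v_i = np.asarray(v_i, float) / np.linalg.norm(v_i)
    v_j = np.asarray(v_j, float) / np.linalg.norm(v_j)
    d_ij = np.linalg.norm(np.cross(p_i - p_j, v_j))  # mass point i to axis j
    d_ji = np.linalg.norm(np.cross(p_j - p_i, v_i))  # mass point j to axis i
    return d_ij + d_ji

# Two parallel vertical rods 1 m apart: each point-to-line distance is 1.
print(rod_distance([0, 0, 0], [0, 0, 1], [1, 0, 0], [0, 0, 1]))  # 2.0
```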
In one or more embodiments of the present description, the formula for determining the distance between planar geometric models may be specified as follows:

D_2 = |a_i·x_j + b_i·y_j + c_i·z_j + d_i| + |a_j·x_i + b_j·y_i + c_j·z_i + d_j|

wherein D_2 represents the distance between the planar geometric model in the i-th frame of matching point cloud and the planar geometric model in the j-th frame of matching point cloud; a_i, b_i, c_i, d_i represent the plane parameters of the planar geometric model in the i-th frame of matching point cloud, and a_j, b_j, c_j, d_j those of the j-th frame. x_j, y_j, z_j represent the mass-point coordinates of the planar geometric model in the j-th frame of matching point cloud, and x_i, y_i, z_i those of the i-th frame. When frame-frame matching is performed, the i-th and j-th frames of matching point cloud are two adjacent frames of point cloud.
Alternatively, in order to make the determined static target pairs more accurate, the matched static target pairs can also be determined according to the angle between the geometric models of the static targets.
In one or more embodiments of the present specification, when determining the matched static target pairs, the server may further screen the geometric models in the previous frame of point cloud for each geometric model in the current frame, determining the ones of the same type as its homogeneous models. The server may then determine the distance and angle between each geometric model and each of its homogeneous models according to the initial value of the relative pose between the current frame and the previous frame and the mass points and vectors of the geometric models, and finally determine the matched static target pairs according to the determined distances and angles.
In one or more embodiments of the present disclosure, the distance and the angle between the geometric models of the different frames of point clouds are calculated after performing pose transformation on one frame of point cloud according to an initial value of a relative pose between the different frames of point clouds. For example, when frame matching is performed, the frame point cloud may be rotated and/or translated according to the initial value of the relative pose between the frame point cloud and the previous frame point cloud, and then the distance and angle between the geometric models may be calculated.
In this specification, the initial value is the initially estimated relative pose between different frames of point cloud. During optimization based on the optimization objective function, the relative pose is adjusted starting from this initial value, until the optimized relative pose that minimizes the objective function is obtained. The more accurate the optimized relative pose, the more accurate the absolute pose of each frame of point cloud determined from it.
In one or more embodiments of the present description, the angle between the geometric models may be determined by the angle of the vector between the geometric models.
In one or more embodiments of the present specification, specifically, when determining the matched static target pairs according to the distance and angle between each geometric model and its homogeneous models, the server may, for each geometric model, determine an expression value of the angle from the vectors of the geometric model and each homogeneous model, and take as the matched geometric model the homogeneous model whose angle expression value is smaller than a preset expression-value threshold and whose distance to the geometric model is smaller than a preset distance threshold.
Of course, in one or more embodiments of the present disclosure, when several homogeneous models satisfy both conditions (expression value below the preset expression-value threshold and distance below the preset distance threshold), one of them may be chosen as the geometric model matched with the geometric model, for example the homogeneous model among them with the smallest angle expression value and the smallest distance to the geometric model.
In one or more embodiments of the present description, the formula for determining the expression value of the angle may be specified, for example, as the norm of the cross product of the two unit vectors, which is zero when the directions coincide:

Ag = ‖v̂_iq × v̂_js‖

wherein Ag represents the expression value, v̂_iq represents the vector of the q-th geometric model in the i-th frame of matching point cloud, and v̂_js represents the vector of the s-th geometric model in the j-th frame of matching point cloud. The q-th geometric model in the i-th frame of matching point cloud and the s-th geometric model in the j-th frame of matching point cloud form a pair of geometric models. When frame-frame matching is performed, the i-th and j-th frames of matching point cloud are two adjacent frames of point cloud.
Of course, when calculating the distance and angle between the similar geometric models between two non-adjacent frames of matching point clouds, the above formula for calculating the distance and angle can also be used.
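Under the cross-product form assumed above, the expression value can be computed as follows (a hedged sketch with hypothetical names; the exact form of Ag in the original figure is not available):

```python
import numpy as np

def angle_expression(v_q, v_s):
    """Expression value of the angle between two model vectors: the norm
    of the cross product of the unit vectors (0 for parallel directions,
    growing toward 1 as they approach perpendicular)."""
    v_q = np.asarray(v_q, float) / np.linalg.norm(v_q)
    v_s = np.asarray(v_s, float) / np.linalg.norm(v_s)
    return float(np.linalg.norm(np.cross(v_q, v_s)))

print(angle_expression([0, 0, 1], [0, 0, 2]))  # 0.0: same direction
print(angle_expression([0, 0, 1], [0, 1, 0]))  # 1.0: perpendicular
```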
In one or more embodiments of the present description, for a rod-shaped geometric model, the vector uniquely identifying its orientation may be its principal orientation vector, which may be determined by principal component analysis. For a planar geometric model, its normal vector can be determined as the vector that uniquely identifies its orientation.
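The principal orientation vector of a rod-like cluster can be obtained from the eigen-decomposition of the covariance of its points. The sketch below illustrates this with hypothetical names; a real pipeline would run it on the segmented partial point cloud of each target:

```python
import numpy as np

def rod_mass_point_and_direction(points):
    """Mass point (centroid) and principal orientation of a rod-like
    cluster via PCA: the eigenvector of the covariance matrix with the
    largest eigenvalue."""
    pts = np.asarray(points, float)
    mass_point = pts.mean(axis=0)
    cov = np.cov((pts - mass_point).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return mass_point, eigvecs[:, np.argmax(eigvals)]

# Points sampled along a vertical pole: the principal direction is the z axis.
pts = [[0.01, 0.0, z] for z in np.linspace(0.0, 2.0, 20)]
c, v = rod_mass_point_and_direction(pts)
print(np.round(np.abs(v), 2))  # [0. 0. 1.]
```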
In one or more embodiments of the present specification, the initial value of the relative pose between frames of point cloud may be set as needed. For example, the relative pose of the current frame with respect to its previous frame may be set equal to the relative pose of that previous frame with respect to the frame before it. Alternatively, the absolute pose of the current frame may be set equal to that of its previous frame. Because the radar acquires point clouds at a high frequency, the pose change between two adjacent frames is small, so the error of a relative pose determined in this way from the previous frame is small, and the initial values set for each frame are reasonable. Compared with pose optimization that relies on GPS, whose initial values are inaccurate in scenes with weak GPS signals, the initial values set here are more accurate.
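The first heuristic amounts to a constant-motion guess. A sketch with hypothetical names, using 4x4 homogeneous transforms: the initial relative pose for the new frame is simply the previous relative pose replayed.

```python
import numpy as np

def initial_pose(prev_abs, prev_prev_abs):
    """Constant-motion initial value: assume the relative pose from frame
    k-1 to frame k equals the one from frame k-2 to frame k-1."""
    delta = np.linalg.inv(prev_prev_abs) @ prev_abs  # relative pose k-2 -> k-1
    return prev_abs @ delta                          # initial absolute pose of frame k

def translation(x):
    T = np.eye(4)
    T[0, 3] = x
    return T

# A platform translating 0.1 m per frame along x: the next pose is at x = 0.2.
T0, T1 = translation(0.0), translation(0.1)
print(initial_pose(T1, T0)[0, 3])  # 0.2
```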
Based on the frame-by-frame optimization process of frame matching, the relative pose between each frame point cloud and the previous frame point cloud can be gradually optimized on the basis of the set initial value.
In one or more embodiments of the present disclosure, in step S102, when determining at least one frame of point cloud together with the current frame as the matching point clouds from the several frames optimized before it, the server may also determine several frames of matching point cloud, so that the current frame can subsequently be matched against a local map formed by the matching point clouds; that is, frame-map matching is performed instead of frame-frame matching.
Specifically, the server may determine, according to a preset interval, several frames of point cloud together with the current frame as the matching point clouds from the several frames of point cloud optimized before the current frame.
Wherein the interval can be set as required. For example, with the interval set to 10, the server may determine, from the several frames of point cloud optimized before the current frame, the 10 consecutive frames immediately preceding it, together with the current frame, as the matching point clouds.
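A minimal sketch of this windowing rule (names hypothetical): take the current frame plus up to `interval` consecutive previously optimized frames.

```python
def select_matching_clouds(optimized_ids, current_id, interval=10):
    """Current frame plus the `interval` consecutive optimized frames
    immediately before it (fewer if not enough frames exist yet)."""
    return optimized_ids[-interval:] + [current_id]

# Frames 0..14 are optimized; frame 15 is current, interval is 10.
print(select_matching_clouds(list(range(15)), 15))  # [5, 6, ..., 14, 15]
```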
Then, further in one or more embodiments of the present disclosure, in step S104, when determining the matched static target pairs, the server may judge whether the current frame of point cloud is a key frame according to at least one of the distance between the frame and the previous key frame and the difference between their heading angles.
In one or more embodiments of the present disclosure, when judging whether the frame of point cloud is a key frame, the previous key frame may first be determined, and it is then judged whether the distance between the frame and that key frame is greater than a preset update distance threshold; if so, the frame is determined to be a key frame, and if not, a non-key frame.
In one or more embodiments of the present disclosure, when judging whether the frame of point cloud is a key frame, the server may also determine the previous key frame and judge whether the difference between the heading angles of the frame and that key frame is greater than a preset angle-difference threshold; if so, the frame is determined to be a key frame, and if not, a non-key frame.
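The two key-frame criteria can be combined as follows (a sketch with hypothetical threshold values; the patent leaves the thresholds to be preset as needed):

```python
import math

def is_key_frame(pos, heading, key_pos, key_heading,
                 dist_thresh=1.0, angle_thresh=math.radians(10)):
    """A frame is a key frame when it has moved farther than the update
    distance threshold or turned more than the angle-difference threshold
    since the previous key frame."""
    moved = math.dist(pos, key_pos) > dist_thresh
    turned = abs(heading - key_heading) > angle_thresh
    return moved or turned

print(is_key_frame((2.0, 0.0), 0.0, (0.0, 0.0), 0.0))  # True: moved 2 m
print(is_key_frame((0.1, 0.0), 0.0, (0.0, 0.0), 0.0))  # False: small motion
```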
In one or more embodiments of the present disclosure, when the frame of point cloud is determined to be a key frame according to at least one of the distance to the previous key frame and the heading-angle difference, the server may determine, for each geometric model in each frame of matching point cloud, the geometric models of the same type in the other frames of matching point cloud as its homogeneous models. The distance between each geometric model of each matching point cloud and each of its homogeneous models is then determined according to the initial values of the relative poses between every two matching point clouds and the mass points and vectors of the geometric models, and the static target pairs matched between the matching point clouds are determined according to those distances.
In order to make the matching result more accurate, in one or more embodiments of the present specification, when the frame of point cloud is a key frame, the server may further determine, for each geometric model in each frame of matching point cloud, the geometric models of the same type in the other frames of matching point cloud as its homogeneous models, then determine the distance and angle between each geometric model of each matching point cloud and each of its homogeneous models according to the initial values of the relative poses between every two matching point clouds and the mass points and vectors of the geometric models, and determine the static target pairs matched between the matching point clouds according to those distances and angles.
In one or more embodiments of the present specification, when the frame of point cloud is a non-key frame, the server may determine, only for each geometric model in that frame, the geometric models of the same type in the other frames of matching point cloud as its homogeneous models, determine the distance between each geometric model of the frame and each of its homogeneous models according to the initial values of the relative poses between the frame and the other matching point clouds and the mass points and vectors of the geometric models, and determine the static target pairs matched between the frame and each other matching point cloud according to those distances.
Likewise, in order to make the matching result more accurate, in one or more embodiments of the present specification, when the frame of point cloud is a non-key frame, the server may determine, for each geometric model in that frame, the geometric models of the same type in the other frames of matching point cloud as its homogeneous models, determine the distance and angle between each geometric model of the frame and each of its homogeneous models according to the initial values of the relative poses between the frame and the other matching point clouds and the mass points and vectors of the geometric models, and determine the static target pairs matched between the frame and each other matching point cloud according to those distances and angles.
In one or more embodiments of the present disclosure, when constructing the optimization objective function in step S104, the server may construct it according to the distance between the mass point of each geometric model of each static target pair and the other geometric model of the pair, and the angle between the vectors of the two geometric models of each static target pair.
In one or more embodiments of the present description, the optimization objective function may be embodied as follows:
A = min Σ r(h(T_i, f_m), h(T_j, f_n))

wherein r(h(T_i, f_m), h(T_j, f_n)) represents the residual determined from the distances and angles between the geometric model pairs of the i-th and j-th frames of matching point cloud, i.e. of one matched point-cloud pair. Summing the residuals of all matched point-cloud pairs gives A, and solving for the minimum of the objective function, i.e. the minimum of A, yields the optimized pose of at least part of the matching point clouds. h(T_i, f_m) represents the structure corresponding to the i-th frame of matching point cloud, containing at least the data of each geometric model in that frame (such as its type, mass-point coordinates, vector, and, for a rod-shaped geometric model, the original radar points); f_m represents the types of the static targets corresponding to the geometric models in the i-th frame of point cloud; and T_i represents the relative pose between the i-th frame of matching point cloud and the first frame of point cloud used for pose optimization. h(T_j, f_n) represents the structure corresponding to the j-th frame of matching point cloud (containing, similarly, at least the data of each of its geometric models), f_n represents the types of the static targets corresponding to the geometric models in the j-th frame of matching point cloud, and T_j represents the relative pose between the j-th frame of matching point cloud and the first frame of point cloud used for pose optimization.
It should be noted that, when frame-frame matching is performed, the i-th and j-th frames of matching point cloud are two adjacent frames of point cloud, one of which is the frame currently being optimized as the frames are optimized one by one in sequence. When frame-map matching is performed, the i-th and j-th frames of matching point cloud can be two non-adjacent frames of point cloud.
In one or more embodiments of the present description, as a result of determining the geometric model pairs, several matched point-cloud pairs may be determined. Between the two frames of matching point cloud in one matched point-cloud pair there is at least one matched pair of geometric models, i.e. at least one geometric model pair.
Based on this, the objective function can also be expressed as:
A = min Σ H_m

H_m = Σ r(M_iq, M_js)

wherein H_m represents the residual corresponding to the m-th matched point-cloud pair, determined from the angles and distances of all the geometric model pairs between its two frames of matching point cloud. M_iq represents the q-th geometric model in the i-th frame of matching point cloud, and M_js represents the s-th geometric model in the j-th frame of matching point cloud; M_iq and M_js are one geometric model pair between the i-th and j-th frames of matching point cloud. r(M_iq, M_js) represents the residual corresponding to that geometric model pair, determined from the angle and distance between its two models.
When the pair of geometric models is a pair of rod-shaped geometric models:

r(t_iq, t_js) = w · ( ‖v̂_iq × v̂_js‖ + ‖(P_iq − P_js) × v̂_js‖ + ‖(P_js − P_iq) × v̂_iq‖ )

wherein w represents the weight of the pair formed by the q-th and s-th rod-shaped geometric models, which may be determined based on the numbers of points corresponding to the two models, for example by taking the average, maximum, or minimum of the two counts. P_iq represents the mass-point coordinates of the q-th rod-shaped geometric model in the i-th frame of matching point cloud, and P_js those of the s-th rod-shaped geometric model in the j-th frame of matching point cloud. v̂_iq represents the vector of the q-th rod-shaped geometric model in the i-th frame of matching point cloud, and v̂_js that of the s-th rod-shaped geometric model in the j-th frame. The term ‖v̂_iq × v̂_js‖ realizes the constraint on the angle between the rod-shaped geometric models and is the expression value of that angle; the terms ‖(P_iq − P_js) × v̂_js‖ and ‖(P_js − P_iq) × v̂_iq‖ realize the constraint on the distance between the rod-shaped geometric models and are the expression values of that distance.
When the pair of geometric models is a pair of planar geometric models:

r(f_iq, f_js) = S · ( ‖n̂_iq × n̂_js‖ + |a_iq·x_js + b_iq·y_js + c_iq·z_js + d_iq| + |a_js·x_iq + b_js·y_iq + c_js·z_iq + d_js| )

wherein S represents the weight corresponding to the pair of planar geometric models, which may be determined based on the areas corresponding to the q-th and s-th planar geometric models, for example by taking their average. a_iq, b_iq, c_iq, d_iq represent the plane parameters of the q-th planar geometric model in the i-th frame of matching point cloud, and a_js, b_js, c_js, d_js those of the s-th planar geometric model in the j-th frame. x_js, y_js, z_js represent the mass-point coordinates of the s-th planar geometric model in the j-th frame of matching point cloud, and x_iq, y_iq, z_iq those of the q-th planar geometric model in the i-th frame. n̂_iq represents the (normal) vector of the q-th planar geometric model in the i-th frame of matching point cloud, and n̂_js that of the s-th planar geometric model in the j-th frame. The term ‖n̂_iq × n̂_js‖ realizes the constraint on the angle between the planar geometric models and is the expression value of that angle; |a_iq·x_js + b_iq·y_js + c_iq·z_js + d_iq| and |a_js·x_iq + b_js·y_iq + c_js·z_iq + d_js| realize the constraint on the distance between the planar geometric models and are the expression values of that distance.
Since the geometric model pairs matched between a pair of matching point clouds may include both pairs of rod-shaped geometric models and pairs of planar geometric models, H_m can also be expressed as:

H_m = Σ r(t_iq, t_js) + Σ r(f_iq, f_js)

wherein t_iq represents the q-th rod-shaped geometric model in the i-th frame of matching point cloud and t_js represents the s-th rod-shaped geometric model in the j-th frame of matching point cloud; the two are one geometric model pair, and r(t_iq, t_js) represents the residual corresponding to that pair, determined from the angle and distance between its two models. Likewise, f_iq represents the q-th planar geometric model in the i-th frame of matching point cloud and f_js the s-th planar geometric model in the j-th frame. For the formulas for r(t_iq, t_js) and r(f_iq, f_js) and the interpretation of their symbols, refer to the rod-shaped and planar residual formulas and the interpretation of r(M_iq, M_js) above.
Of course, the residuals need not be computed per matched point-cloud pair and then summed. The geometric model pairs may instead be numbered directly, with each pair's residual computed from the distance and angle between its two models and all the residuals summed to obtain the value of A corresponding to the objective function.
Then, the objective function can also be expressed as:
A = min Σ r(W_e, O_e)

wherein r(W_e, O_e) represents the residual of the e-th geometric model pair among the geometric model pairs corresponding to all the matching point clouds, determined from the angle and distance between the geometric models W_e and O_e. Summing the residuals of all the geometric model pairs corresponding to all the matching point clouds gives the complete expression of the objective function. The formula for computing r(W_e, O_e) adapts to the type of each geometric model pair; see in particular the residual formulas for the rod-shaped and planar geometric models above.
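However the pairs are enumerated, the accumulation itself is a plain sum over residuals. A toy sketch (the residual functions are placeholders, and all names are hypothetical):

```python
def total_objective(pairs):
    """A = sum over all matched geometric model pairs e of r(W_e, O_e);
    each pair carries the residual function matching its model type."""
    return sum(residual(w, o) for residual, w, o in pairs)

# Toy stand-ins for the rod-shaped and planar residual formulas:
rod_r = lambda w, o: abs(w - o)
plane_r = lambda w, o: (w - o) ** 2
pairs = [(rod_r, 1.0, 0.5), (plane_r, 2.0, 1.0)]
print(total_objective(pairs))  # 0.5 + 1.0 = 1.5
```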
In one or more embodiments of the present disclosure, when frame-frame matching is performed, in step S106 the server may adjust the pose of at least part of the matching point clouds to obtain their optimized poses by specifically adjusting, starting from the initial value of the relative pose between the current frame and its previous frame, the pose of the current frame of point cloud to obtain its optimized pose. That is, only the pose of the current frame of point cloud is optimized.
In addition, in one or more embodiments of the present specification, when performing frame-frame matching, after obtaining the matched static target pairs the server may further apply a linear optimization algorithm to the initial values of the relative poses between the frames of matching point cloud, based on the static target pairs and those initial values, and re-determine the initial value of the relative pose between the current frame and the previous frame. The relative poses between the frames of matching point cloud are then optimized starting from these initial values, and the optimized relative poses are again taken as initial values.
In one or more embodiments of the present disclosure, the linear optimization algorithm may be a Random Sample Consensus (RANSAC) algorithm, but other linear optimization algorithms may also be used, and the present disclosure is not limited thereto.
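As an illustration of how a RANSAC-style step can refine an initial value from matched target pairs despite mismatches, here is a deliberately simplified sketch that estimates only a 2-D translation between matched mass points (a real system would fit a full rigid pose; all names are hypothetical):

```python
import random
import numpy as np

def ransac_translation(src, dst, iters=100, inlier_tol=0.1, seed=0):
    """Minimal RANSAC: repeatedly fit a translation from one sampled
    correspondence and keep the hypothesis with the most inliers."""
    rng = random.Random(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_t, best_inliers = None, -1
    for _ in range(iters):
        k = rng.randrange(len(src))
        t = dst[k] - src[k]                      # one-point translation model
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = int((err < inlier_tol).sum())
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t

src = [[0, 0], [1, 0], [2, 1]]
dst = [[1, 0], [2, 0], [9, 9]]   # last pair is a mismatch (outlier)
print(ransac_translation(src, dst))  # close to [1, 0]
```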
In one or more embodiments of the present specification, in step S106, the initial value used when adjusting the pose of the current frame of point cloud based on the initial relative pose to its previous frame, to obtain the optimized pose of the frame, may be the initial value refined by the random sample consensus algorithm.
In one or more embodiments of the present disclosure, when performing frame map matching, in step S106, when adjusting the pose of at least part of the matching point clouds with minimizing the optimization objective function as the target to obtain the optimized pose of at least part of the matching point clouds, the server may specifically, when the frame point cloud is a key frame, adjust the pose of each matching point cloud based on the initial values of the relative poses between the frames of matching point clouds, to obtain the optimized pose of each matching point cloud. That is, the poses of all the matching point clouds are optimized based on the initial values of the relative poses between every two frames of point clouds among all the matching point clouds.
In one or more embodiments of the present disclosure, when the frame point cloud is a non-key frame, the server may take minimizing the optimization objective function as the target and adjust the pose of the frame point cloud based on the initial values of the relative poses between the frame point cloud and the other matching point clouds, to obtain the optimized pose of the frame point cloud. That is, only the pose of the frame point cloud is optimized, based on the initial values of the relative poses between the frame point cloud and the other matching point clouds.
In addition, in one or more embodiments of the present specification, the server may further adjust the pose of each matched point cloud on the basis of the initial value of the relative pose between each frame of matched point cloud, with the minimum optimization objective function as a target when the frame of point cloud is a key frame, to obtain the optimized pose of each matched point cloud. And when the frame of point cloud is a non-key frame, not adjusting the pose of any matching point cloud determined based on the frame of point cloud, and continuing to execute the steps S100-S106 on the next frame of point cloud.
Further, in order to reduce the amount of calculation, in one or more embodiments of the present specification, the server may further optimize each collected frame point cloud frame by frame, when the frame is optimized, first determine whether the frame point cloud is a key frame, if the determination result is yes, then perform steps S100 to S106, and if the determination result is no, perform step S100, without performing steps S102 to S106, and may perform optimization on the next frame point cloud.
In one or more embodiments of the present specification, when performing frame map matching, the server may not determine whether the frame point cloud is a key frame, may directly use the frame point cloud as a key frame, and perform subsequent steps.
In one or more embodiments of the present disclosure, calculating the distance between geometric models based on their mass points is only an example; the distance between geometric models may also be calculated based on a plurality of points, i.e., the calculation is not limited to the mass points. For example, each geometric model may be sampled to obtain expression points of the geometric model, and the geometric model may then be represented by its expression points and its vector. The distance between geometric models can then be determined based on the expression points and the vector of each geometric model.
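A minimal sketch of computing the distance between two rod-shaped geometric models from sampled expression points, rather than from the mass points alone, might look as follows; the sampling span, the number of expression points, and the use of a mean point-to-line distance are assumptions for illustration:

```python
import numpy as np

def sample_expression_points(point, vector, n=5, span=1.0):
    """Sample n expression points along a rod-like geometric model defined
    by a reference point and a direction vector."""
    v = np.asarray(vector, float)
    v = v / np.linalg.norm(v)
    ts = np.linspace(-span, span, n)
    return np.asarray(point, float) + ts[:, None] * v

def point_to_line_distance(p, line_point, line_vec):
    """Perpendicular distance from point p to the line through line_point."""
    v = np.asarray(line_vec, float)
    v = v / np.linalg.norm(v)
    d = np.asarray(p, float) - np.asarray(line_point, float)
    return np.linalg.norm(d - np.dot(d, v) * v)

def model_distance(point_a, vec_a, point_b, vec_b, n=5):
    """Distance between two rod models based on several expression points
    of model A measured against model B, not the mass points alone."""
    pts = sample_expression_points(point_a, vec_a, n)
    return float(np.mean([point_to_line_distance(p, point_b, vec_b) for p in pts]))
```

For two parallel rods, every expression point contributes the same perpendicular offset, so the result reduces to the mass-point distance; for skewed rods the sampled points capture the divergence along the models.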
In addition, in one or more embodiments of this specification, when the pose optimization method is executed, frame matching and frame map matching may both be performed. During frame matching, the server can determine two frames of matching point clouds, obtain each static target pair of the two frames of point clouds, construct an optimization objective function, and optimize frame by frame to obtain the optimized poses of all the point clouds. Then, on the basis of the poses obtained by frame matching optimization, frame map matching is performed: the multi-frame matching point clouds are determined again, each static target pair of each frame of matching point cloud is obtained, an optimization objective function is constructed again, and frame-by-frame optimization yields the further optimized poses of all the point clouds.
Of course, when frame matching is performed, linear optimization can be carried out first, followed by nonlinear optimization based on the optimization objective function. The pose of each frame point cloud obtained by the nonlinear optimization is then used as the pose finally obtained by frame matching, after which frame map matching is carried out.
After the linear optimization, the optimized and updated initial values between the frame point clouds are used as the re-determined initial values, and nonlinear optimization is performed based on them to obtain the twice-optimized relative poses between the frame point clouds. Further, the twice-optimized relative poses between the frame point clouds can be used as the newly determined initial values for frame map matching, so that the optimization objective function constructed from the matching result of frame map matching yields the final optimization result for the pose of each frame point cloud.
In one or more embodiments of the present disclosure, after the initial values are re-determined, i.e., updated, the matched geometric model pairs may be re-determined based on the new initial values.
Fig. 2 is a schematic diagram of a pose optimization process provided in this specification. As shown, for each collected frame of point cloud in sequence, the server may perform feature extraction on the frame point cloud, determine each static target in the frame point cloud, and then match the static targets based on the distance, or the distance and angle, between them to determine each static target pair. Frame matching is then performed, and linear optimization is carried out based on each static target pair to obtain the linearly optimized pose of the frame point cloud. After the linearly optimized poses of all frame point clouds are obtained, nonlinear optimization is performed. After the nonlinearly optimized poses of all frame point clouds are obtained, frame map matching can be performed: for each frame point cloud in sequence, the server judges whether the frame point cloud is a key frame; if so, it minimizes an optimization objective function based on bundle adjustment with the goal of optimizing the poses of all the frames of matching point clouds (for the specific optimization process, refer to the description of key frames in this specification); if not, it minimizes the bundle-adjustment-based optimization objective function with the goal of optimizing only the frame point cloud (likewise, refer to the corresponding description for the specific process). After each frame of point cloud has been optimized, sufficiently accurate optimized poses of all frame point clouds are obtained.
In one or more embodiments of the present disclosure, specifically, the server may determine the previous frame point cloud of the frame point cloud from the several frame point clouds before it, and take the previous frame point cloud and the frame point cloud as matching point clouds. Then, based on the initial value of the relative pose between the two frames of matching point clouds, the server can determine at least the distance between static targets of the same type belonging to the two frames of matching point clouds, and determine each matched static target pair according to the distance between same-type static targets belonging to different frames of matching point clouds. The server can then perform linear optimization on the initial value between the matching point clouds through the random sample consensus algorithm, based on each static target pair and the initial value of the relative pose between the matching point clouds, and re-determine the initial value of the relative pose between the frame point cloud and the previous frame point cloud as the initial optimization value.
After the initial optimization values corresponding to all frame point clouds used for constructing the map are obtained through frame-by-frame optimization, the server can further perform nonlinear optimization frame by frame based on these values. Specifically, the server may re-determine each matched static target pair according to the obtained initial optimization value, and construct a first optimization objective function for nonlinear optimization according to the distance and angle between each static target pair. The pose of the frame point cloud is then adjusted with minimizing the optimization objective function as the target, to obtain the optimized pose of the frame point cloud. For the specific form of the optimization objective function, refer to the above description of this specification; it is not repeated here.
In one or more embodiments of the present disclosure, after obtaining optimized poses corresponding to all frame point clouds used for constructing a map through nonlinear optimization frame by frame, the server may use the relative pose between the frame point clouds as a secondary optimization initial value. And then, carrying out frame image matching based on the obtained secondary optimization initial value.
In one or more embodiments of the present specification, when frame map matching is performed, the server may continue pose optimization frame by frame. First, the server can re-determine the matching point clouds; specifically, according to a preset interval, several frame point clouds are selected from the frame point clouds optimized before the current frame point cloud and, together with the current frame point cloud, serve as the newly determined matching point clouds. Then, the server can judge whether the current frame point cloud is a key frame according to at least one of the distance between the frame point cloud and the previous key frame and the difference in heading angle.
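The key-frame decision described above can be sketched as follows; the specific thresholds and the reduced 2-D pose representation (x, y, heading angle) are illustrative assumptions:

```python
import math

def is_key_frame(pose, prev_key_pose, dist_thresh=2.0, yaw_thresh=math.radians(10)):
    """Decide whether a frame is a key frame from the distance travelled
    and the heading-angle difference relative to the previous key frame.
    pose = (x, y, yaw); thresholds are illustrative assumptions."""
    dx = pose[0] - prev_key_pose[0]
    dy = pose[1] - prev_key_pose[1]
    dist = math.hypot(dx, dy)
    # wrap the heading difference into (-pi, pi] before comparing
    dyaw = abs(math.atan2(math.sin(pose[2] - prev_key_pose[2]),
                          math.cos(pose[2] - prev_key_pose[2])))
    return dist > dist_thresh or dyaw > yaw_thresh
```

A frame that has moved far enough or turned sharply enough since the previous key frame is promoted to a key frame; the angle wrapping prevents a spurious trigger when the heading crosses the 0/2π boundary.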
In one or more embodiments of the present specification, when the frame point cloud is a key frame, the server may, for each geometric model in each frame of matching point cloud, determine from the geometric models of the other frames of matching point clouds those of the same type, as homogeneous models of the geometric model. The distance and angle between each geometric model of each matching point cloud and its respective homogeneous models are then determined according to the initial value of the relative pose between every two matching point clouds and the mass point and vector of each geometric model, and each matched static target pair between the matching point clouds is determined according to those distances and angles. When the frame point cloud is a non-key frame, the server may, for each geometric model in the frame point cloud, determine from the geometric models of the other frames of matching point clouds those of the same type, as homogeneous models of the geometric model. The distance and angle between each geometric model of the frame point cloud and its respective homogeneous models are determined according to the initial values of the relative poses between the frame point cloud and the other matching point clouds and the mass point and vector of each geometric model, and each static target pair matched between the frame point cloud and each other matching point cloud is determined according to those distances and angles.
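A sketch of matching same-type geometric models across frames by distance and angle might look like the following greedy nearest-neighbor scheme; the thresholds and the greedy strategy are assumptions, and each model is assumed to carry a type label, a mass point, and a unit direction vector:

```python
import numpy as np

def match_static_targets(models_a, models_b, dist_thresh=1.0, ang_thresh=0.2):
    """Greedy matching of same-type geometric models between two frames.
    Each model: (type, mass_point, unit_vector). For every model of frame A,
    pick the closest unused same-type model of frame B whose mass-point
    distance and direction-vector angle fall below the thresholds."""
    pairs = []
    used = set()
    for ia, (ta, pa, va) in enumerate(models_a):
        best, best_d = None, None
        for ib, (tb, pb, vb) in enumerate(models_b):
            if ib in used or ta != tb:
                continue
            d = np.linalg.norm(np.asarray(pa, float) - np.asarray(pb, float))
            ang = np.arccos(np.clip(abs(np.dot(va, vb)), 0.0, 1.0))
            if d < dist_thresh and ang < ang_thresh and (best_d is None or d < best_d):
                best, best_d = ib, d
        if best is not None:
            used.add(best)
            pairs.append((ia, best))
    return pairs
```

In practice the mass points of one frame would first be transformed by the initial relative pose before matching; that step is omitted here for brevity.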
Then, the server may construct an optimization objective function for nonlinear optimization based on Bundle Adjustment (BA) according to the distance and angle between each static target pair, as a second optimization objective function. Finally, with minimizing the second optimization objective function as the target, the pose of at least part of the matching point clouds is adjusted to obtain the finally optimized pose of at least part of the matching point clouds.
It should be noted that the first optimization objective function and the second optimization objective function are consistent in form with the optimization objective function described above; the only difference is the number of matching point clouds participating in the calculation. In one or more embodiments of the present disclosure, the nonlinear optimization may specifically employ nonlinear least squares, for example using Levenberg-Marquardt (LM), Gauss-Newton, and the like.
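As a toy stand-in for the Gauss-Newton/LM step mentioned above, the following refines a 2-D pose (translation and heading) that maps one point set onto another; the 2-D simplification and the residual definition are assumptions for illustration:

```python
import numpy as np

def gauss_newton_pose2d(src, dst, iters=20):
    """Gauss-Newton refinement of a 2-D pose x = (tx, ty, theta) minimizing
    the squared residuals (R(theta) @ p + t) - q over correspondences."""
    x = np.zeros(3)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    for _ in range(iters):
        c, s = np.cos(x[2]), np.sin(x[2])
        R = np.array([[c, -s], [s, c]])
        pred = src @ R.T + x[:2]
        r = (pred - dst).ravel()          # residual vector [x0, y0, x1, y1, ...]
        J = np.zeros((2 * len(src), 3))   # Jacobian w.r.t. (tx, ty, theta)
        J[0::2, 0] = 1.0
        J[1::2, 1] = 1.0
        dR = np.array([[-s, -c], [c, -s]])  # dR/dtheta
        dtheta = src @ dR.T
        J[0::2, 2] = dtheta[:, 0]
        J[1::2, 2] = dtheta[:, 1]
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]  # normal-equation step
        x += dx
        if np.linalg.norm(dx) < 1e-10:
            break
    return x
```

Levenberg-Marquardt would add a damping term to the same normal equations; the iteration structure is otherwise identical.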
In this specification, static targets can be identified based on the semantic features contained in the point cloud, so that static target pairs are determined by matching the static targets; matching based on static target pairs requires little computation while the matching accuracy can still be ensured. In addition, in this specification, the distance and angle between two static targets of the same type belonging to different frame point clouds are calculated by different methods for different types of static targets, which can constrain at least part of the six degrees of freedom of the poses of the two frame point clouds from different dimensions, making the optimized relative pose of each frame point cloud more accurate. When the relative pose between the two frames of point clouds containing a matched static target pair is accurate, after one frame of point cloud is rotated and/or translated according to the relative pose, the geometric models of the static targets matched between the two frames of point clouds coincide more closely. Further, constructing an objective function based on the matching of rod-shaped static targets constrains the x coordinate, the y coordinate, and the heading angle of the point cloud, while constructing an objective function based on the matching of planar static targets constrains the height (z coordinate), the pitch angle, and the roll angle of the point cloud. Based on these constraints, the finally optimized pose of each frame of point cloud can be more accurate.
In one or more embodiments of the present specification, the pose of each frame of point cloud finally obtained by the pose optimization method provided in this specification can be used to further optimize the global pose of each frame of point cloud. After the finally optimized pose of each frame of point cloud is obtained, the server can perform closed-loop detection between the point clouds to obtain closed-loop constraints. The server can then perform graph optimization, taking the pose of each frame of point cloud finally obtained by the pose optimization method as a first constraint and the closed-loop constraints obtained by closed-loop detection as a second constraint.
In one or more embodiments of the present disclosure, the graph optimization may further incorporate a third constraint, namely positions determined based on GPS data, and a fourth constraint, namely the relative poses between the frames of point clouds determined based on IMU data.
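The combination of these heterogeneous constraint types into a single graph-optimization objective can be sketched as follows in a toy 2-D form; the residual definitions and weights are illustrative assumptions:

```python
import numpy as np

def pose_graph_objective(poses, constraints):
    """Sum of weighted squared residuals over pose-graph constraints.
    poses: dict frame_id -> np.array([x, y, yaw]).
    Each constraint is (i, j, measurement, weight):
      - relative constraint (odometry, closed loop, IMU): j is a frame id,
        residual = (poses[j] - poses[i]) - measurement;
      - absolute constraint (e.g. GPS position): j is None,
        residual = poses[i] - measurement.
    A toy 2-D stand-in for the graph optimization described above."""
    total = 0.0
    for i, j, z, w in constraints:
        if j is None:
            r = poses[i] - z
        else:
            r = (poses[j] - poses[i]) - z
        total += w * float(np.dot(r, r))
    return total
```

A graph optimizer would minimize this sum over all poses simultaneously; consistent poses drive the objective to zero, and each weight reflects the confidence in its constraint source.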
Based on the same idea, one or more embodiments of the present specification further provide a pose optimization apparatus corresponding to the pose optimization method described above, as shown in fig. 3.
Fig. 3 is a schematic diagram of a pose optimization device provided in this specification, where the pose optimization device includes:
the feature extraction module 200 is configured to perform feature extraction on each frame of point cloud sequentially according to an acquisition sequence of each acquired frame of point cloud, and determine each static target and a type of each static target in the frame of point cloud;
a determining module 201, configured to determine at least one frame of point cloud and the frame of point cloud as a matching point cloud from a plurality of frame point clouds optimized before the frame of point cloud;
the matching module 202 is configured to determine each matched static target pair at least according to the distance between the static targets belonging to the same type and different frame matching point clouds, and construct an optimized objective function according to the distance and angle between each static target pair;
and the optimizing module 203 is configured to adjust the pose of at least part of the matching point cloud by taking the minimum optimizing objective function as a target, so as to obtain the optimized pose of the at least part of the matching point cloud.
Optionally, the determining module 201 is further configured to determine a previous frame point cloud of the frame point cloud from several frame point clouds before the frame point cloud, and use the previous frame point cloud and the frame point cloud as matching point clouds.
Optionally, the matching module 202 is further configured to determine a geometric model of each static target in each matching point cloud, determine, for each geometric model in each matching point cloud, a mass point of the geometric model and a vector uniquely identifying a direction of the geometric model, determine, for each geometric model in the frame point cloud, another geometric model that is the same type as the geometric model from the other geometric models of the previous frame point cloud, as a similar model of the geometric model, determine, according to an initial value of a relative pose between the frame point cloud and the previous frame point cloud, a mass point and a vector of each geometric model, a distance and an angle between each geometric model and each similar model, and determine, according to the determined distances and angles, each pair of matched static targets.
Optionally, the determining module 201 is further configured to determine, according to a preset interval, a plurality of frame point clouds and the frame point cloud from a plurality of frame point clouds optimized before the frame point cloud, as a matching point cloud.
Optionally, the matching module 202 is further configured to determine a geometric model of each static target in each matching point cloud, determine, for each geometric model in each matching point cloud, a mass point of the geometric model and a vector uniquely identifying the direction of the geometric model, determine, when the frame point cloud is determined to be a key frame according to at least one of the distance between the frame point cloud and the previous key frame and the difference in heading angle, for each geometric model in each matching point cloud, other geometric models of the same type as the geometric model from the geometric models of the other frames of matching point clouds as homogeneous models of the geometric model, determine, according to the initial value of the relative pose between every two matching point clouds and the mass point and vector of the geometric models in each matching point cloud, the distance and angle between each geometric model of each matching point cloud and its respective homogeneous models, and determine each static target pair matched between the matching point clouds according to the distance and angle between each geometric model of each matching point cloud and its respective homogeneous models.
Optionally, the matching module 202 is further configured to, when it is determined that the frame point cloud is a non-key frame according to at least one of a distance between the frame point cloud and a previous key frame and an angle difference between a heading angle and a previous key frame, determine, for each geometric model in the frame point cloud, other geometric models that are of the same type as the geometric model from other geometric models of other frame matching point clouds, as homogeneous models of the geometric models, determine, according to an initial value of a relative pose between the frame point cloud and the other matching point clouds, a mass point and a vector of the geometric model in each matching point cloud, a distance and an angle between each geometric model of the frame point cloud and each homogeneous model, and determine, according to a distance and an angle between each geometric model in the frame point cloud and each homogeneous model, each static target pair matched between the frame point cloud and each other matching point cloud.
Optionally, the matching module 202 is further configured to construct an optimization objective function according to a distance between a particle of each geometric model in each static target pair and the other matched geometric model, and an angle between vectors of each geometric model in each static target pair.
The apparatus further comprises:
and the updating module 204 is configured to perform linear optimization on the initial value through a random sampling consistency algorithm, and re-determine an initial value of the relative pose between the frame of point cloud and the previous frame of point cloud.
Optionally, the optimizing module 203 is further configured to adjust the pose of the frame point cloud based on the initial value of the relative pose between the frame point cloud and the previous frame point cloud by taking the minimum optimizing objective function as a target, so as to obtain the optimized pose of the frame point cloud.
Optionally, the optimizing module 203 is further configured to, when the frame point cloud is a key frame, adjust the pose of each matched point cloud based on the initial value of the relative pose between each frame of matched point clouds by using the minimum optimizing objective function as a target to obtain the optimized pose of each matched point cloud, and when the frame point cloud is a non-key frame, adjust the pose of the frame point cloud based on the initial values of the relative poses between the frame point cloud and other matched point clouds by using the minimum optimizing objective function as a target to obtain the optimized pose of the frame point cloud.
The present specification also provides a computer-readable storage medium storing a computer program, which can be used to execute the pose optimization method provided in fig. 1.
The present specification also provides a schematic structural diagram of the electronic device shown in fig. 4. As shown in fig. 4, at the hardware level, the electronic device includes a processor, an internal bus, a memory, and a non-volatile memory, but may also include hardware required for other services. The processor reads a corresponding computer program from the nonvolatile memory into the memory and then runs the computer program to implement the pose optimization method provided in fig. 1.
It should be noted that all actions of acquiring signals, information or data in this specification are performed under the premise of complying with the corresponding data protection regulation policy of the country of the location and obtaining the authorization given by the owner of the corresponding device.
Of course, besides the software implementation, the present specification does not exclude other implementations, such as logic devices or a combination of software and hardware, and the like, that is, the execution subject of the following processing flow is not limited to each logic unit, and may be hardware or logic devices.
In the 1990s, it could be clearly distinguished whether an improvement to a technology was an improvement in hardware (e.g., an improvement in a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement in a method flow). However, as technology advances, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Thus, it cannot be said that an improvement in a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without requiring a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually fabricating an integrated circuit chip, such programming is mostly implemented with "logic compiler" software, which is similar to a software compiler used in program development, and the original code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used at present.
It will also be apparent to those skilled in the art that hardware circuitry that implements the logical method flows can be readily obtained by merely slightly programming the method flows into an integrated circuit using the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor and a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer readable program code, the same functionality can be achieved by logically programming the method steps so that the controller takes the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be regarded as a hardware component, and the means included therein for performing the various functions may also be regarded as structures within the hardware component. Or even the means for performing the functions may be regarded both as software modules for performing the method and as structures within the hardware component.
The systems, apparatuses, modules or units described in the above embodiments may be specifically implemented by a computer chip or an entity, or implemented by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, respectively. Of course, the functions of the various elements may be implemented in the same one or more software and/or hardware implementations of the present description.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner; identical or similar parts among the embodiments may be cross-referenced, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiment is substantially similar to the method embodiment, its description is brief, and reference may be made to the corresponding parts of the description of the method embodiment.
The above description is only an example of the present specification, and is not intended to limit the present specification. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification.

Claims (13)

1. A pose optimization method, comprising:
performing feature extraction on each frame of point cloud in turn, according to the order in which the frames of point cloud were collected, and determining each static target in the frame of point cloud and the type of each static target;
determining, from a plurality of frames of point cloud optimized before the frame of point cloud, at least one frame of point cloud that, together with the frame of point cloud, serves as matching point clouds;
determining each matched static target pair at least according to distances between static targets that belong to the same type but to different frames of the matching point clouds, and constructing an optimization objective function according to the distance and angle between the targets of each static target pair;
and adjusting the pose of at least part of the matching point clouds with the goal of minimizing the optimization objective function, to obtain an optimized pose of the at least part of the matching point clouds.
2. The method of claim 1, wherein determining at least one frame of point cloud and the frame of point cloud as matching point clouds from a plurality of frame point clouds optimized before the frame of point cloud comprises:
determining the previous frame of point cloud of the frame of point cloud from the plurality of frames of point cloud before the frame of point cloud;
and taking the previous frame point cloud and the frame point cloud as matching point clouds.
3. The method of claim 2, wherein determining each matched static target pair at least according to distances between static targets that belong to the same type but to different frames of the matching point clouds comprises:
determining a geometric model of each static target in each matching point cloud;
determining, for each geometric model in each matching point cloud, the centroid of the geometric model and a vector that uniquely identifies the direction of the geometric model;
for each geometric model in the frame of point cloud, determining, from the geometric models of the previous frame of point cloud, other geometric models of the same type as the geometric model, as same-type models of the geometric model;
determining the distance and angle between each geometric model and each of its same-type models according to an initial value of the relative pose between the frame of point cloud and the previous frame of point cloud and the centroid and vector of each geometric model;
and determining each matched static target pair according to the determined distances and angles.
4. The method of claim 1, wherein determining at least one frame of point cloud and the frame of point cloud as matching point clouds from a plurality of frame point clouds optimized before the frame of point cloud comprises:
and determining, at a preset interval, a plurality of frames of point cloud from the plurality of frames of point cloud optimized before the frame of point cloud, which, together with the frame of point cloud, serve as matching point clouds.
5. The method of claim 4, wherein determining each matched static target pair at least according to distances between static targets that belong to the same type but to different frames of the matching point clouds comprises:
determining a geometric model of each static target in each matching point cloud;
determining, for each geometric model in each matching point cloud, the centroid of the geometric model and a vector that uniquely identifies the direction of the geometric model;
when the frame of point cloud is determined to be a key frame according to at least one of the distance between the frame of point cloud and the previous key frame and the difference between their heading angles, determining, for each geometric model in each frame of matching point cloud, from the geometric models of the other frames of matching point cloud, other geometric models of the same type as the geometric model, as same-type models of the geometric model;
determining the distance and angle between each geometric model of each matching point cloud and each of its same-type models according to the initial value of the relative pose between every two matching point clouds and the centroid and vector of the geometric models in each matching point cloud;
and determining each static target pair matched between the matching point clouds according to the distance and angle between each geometric model of each matching point cloud and each of its same-type models.
6. The method of claim 5, wherein the method further comprises:
when the frame of point cloud is determined to be a non-key frame according to at least one of the distance between the frame of point cloud and the previous key frame and the difference between their heading angles, determining, for each geometric model in the frame of point cloud, from the geometric models of the other frames of matching point cloud, other geometric models of the same type as the geometric model, as same-type models of the geometric model;
determining the distance and angle between each geometric model of the frame of point cloud and each of its same-type models according to the initial values of the relative pose between the frame of point cloud and each of the other matching point clouds and the centroid and vector of the geometric models in each matching point cloud;
and determining each static target pair matched between the frame of point cloud and each of the other matching point clouds according to the distance and angle between each geometric model in the frame of point cloud and each of its same-type models.
7. The method of claim 3 or 6, wherein constructing the optimization objective function according to the distance and angle between each static object pair comprises:
and constructing the optimization objective function according to the distance between the centroid of each geometric model in each static target pair and the centroid of the other, matched geometric model, and the angle between the vectors of the geometric models in each static target pair.
8. The method of claim 3, wherein before constructing the optimized objective function based on the distance and angle between each static pair of objectives, the method further comprises:
and performing a linear optimization on the initial value through a random sample consensus (RANSAC) algorithm, and re-determining the initial value of the relative pose between the frame of point cloud and the previous frame of point cloud.
9. The method according to claim 3 or 8, wherein adjusting the pose of at least part of the matching point clouds with the goal of minimizing the optimization objective function, to obtain the optimized pose of the at least part of the matching point clouds, comprises:
and adjusting the pose of the frame of point cloud, starting from the initial value of the relative pose between the frame of point cloud and the previous frame of point cloud, with the goal of minimizing the optimization objective function, to obtain the optimized pose of the frame of point cloud.
10. The method of claim 6, wherein adjusting the pose of at least part of the matching point clouds with the goal of minimizing the optimization objective function, to obtain the optimized pose of the at least part of the matching point clouds, comprises:
when the frame of point cloud is a key frame, adjusting the pose of each matching point cloud, starting from the initial values of the relative pose between each frame of matching point cloud, with the goal of minimizing the optimization objective function, to obtain the optimized pose of each matching point cloud;
and when the frame of point cloud is a non-key frame, adjusting the pose of the frame of point cloud, starting from the initial values of the relative pose between the frame of point cloud and each of the other matching point clouds, with the goal of minimizing the optimization objective function, to obtain the optimized pose of the frame of point cloud.
11. A pose optimization apparatus, comprising:
the feature extraction module is used for performing feature extraction on each frame of point cloud in turn, according to the order in which the frames of point cloud were collected, and determining each static target in the frame of point cloud and the type of each static target;
the determining module is used for determining, from a plurality of frames of point cloud optimized before the frame of point cloud, at least one frame of point cloud that, together with the frame of point cloud, serves as matching point clouds;
the matching module is used for determining each matched static target pair at least according to distances between static targets that belong to the same type but to different frames of the matching point clouds, and constructing an optimization objective function according to the distance and angle between the targets of each static target pair;
and the optimization module is used for adjusting the pose of at least part of the matching point clouds with the goal of minimizing the optimization objective function, to obtain the optimized pose of the at least part of the matching point clouds.
12. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1 to 10.
13. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of claims 1 to 10 when executing the program.
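For a single pair of consecutive frames, the matching-and-optimization loop of claim 1 reduces to estimating the relative pose that minimizes the distances between matched static-target centroids together with the angles between their direction vectors. Below is a minimal 2D sketch of that objective, assuming the static-target pairs have already been matched; the function name `optimize_pose` and the closed-form solution (circular-mean rotation from the direction vectors, mean-residual translation from the centroids) are illustrative choices, not taken from the patent, which describes an iterative minimization of a general objective function.

```python
import numpy as np

def optimize_pose(src_centroids, src_dirs, dst_centroids, dst_dirs):
    """Recover a 2D relative pose (theta, t) minimizing the sum of squared
    centroid distances plus squared direction-angle residuals over matched
    static-target pairs. Arrays are (N, 2); dirs are unit direction vectors."""
    # Rotation: the circular mean of the angular offsets between matched
    # direction vectors minimizes the sum of squared angle residuals.
    src_ang = np.arctan2(src_dirs[:, 1], src_dirs[:, 0])
    dst_ang = np.arctan2(dst_dirs[:, 1], dst_dirs[:, 0])
    dtheta = np.arctan2(np.mean(np.sin(dst_ang - src_ang)),
                        np.mean(np.cos(dst_ang - src_ang)))
    c, s = np.cos(dtheta), np.sin(dtheta)
    R = np.array([[c, -s], [s, c]])
    # Translation: with R fixed, the mean residual between destination
    # centroids and rotated source centroids minimizes the squared distances.
    t = np.mean(dst_centroids - src_centroids @ R.T, axis=0)
    return dtheta, t
```

In this simplified sketch rotation and translation decouple because only the direction vectors constrain the angle term; the patent's formulation instead jointly adjusts the poses of several matching point clouds against one combined objective.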
CN202210146496.8A 2022-02-17 2022-02-17 Pose optimization method and device Pending CN114565670A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210146496.8A CN114565670A (en) 2022-02-17 2022-02-17 Pose optimization method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210146496.8A CN114565670A (en) 2022-02-17 2022-02-17 Pose optimization method and device

Publications (1)

Publication Number Publication Date
CN114565670A (en) 2022-05-31

Family

ID=81714105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210146496.8A Pending CN114565670A (en) 2022-02-17 2022-02-17 Pose optimization method and device

Country Status (1)

Country Link
CN (1) CN114565670A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114812540A (en) * 2022-06-23 2022-07-29 深圳市普渡科技有限公司 Picture construction method and device and computer equipment
CN115390085A (en) * 2022-07-28 2022-11-25 广州小马智行科技有限公司 Positioning method and device based on laser radar, computer equipment and storage medium


Similar Documents

Publication Publication Date Title
CN110118554B (en) SLAM method, apparatus, storage medium and device based on visual inertia
CN108759833B (en) Intelligent vehicle positioning method based on prior map
US11313684B2 (en) Collaborative navigation and mapping
CN112240768A (en) Visual inertial navigation fusion SLAM method based on Runge-Kutta4 improved pre-integration
CN111402339B (en) Real-time positioning method, device, system and storage medium
CN109461208B (en) Three-dimensional map processing method, device, medium and computing equipment
CN114565670A (en) Pose optimization method and device
CN112639502A (en) Robot pose estimation
JP7131994B2 (en) Self-position estimation device, self-position estimation method, self-position estimation program, learning device, learning method and learning program
CN109154502A (en) System, method and apparatus for geo-location
WO2021115143A1 (en) Motion trajectory processing method, medium, apparatus, and device
CN112762965B (en) Magnetometer calibration method and device
CN111983636A (en) Pose fusion method, pose fusion system, terminal, medium and mobile robot
CN112991441A (en) Camera positioning method and device, electronic equipment and storage medium
CN111882494B (en) Pose graph processing method and device, computer equipment and storage medium
CN117053779A (en) Tightly coupled laser SLAM method and device based on redundant key frame removal
CN114690226A (en) Monocular vision distance measurement method and system based on carrier phase difference technology assistance
CN115711616A (en) Indoor and outdoor unmanned aerial vehicle penetrating smooth positioning method and device
CN114111769A (en) Visual inertial positioning method and device and automatic driving device
CN114943766A (en) Relocation method, relocation device, electronic equipment and computer-readable storage medium
CN112712561A (en) Picture construction method and device, storage medium and electronic equipment
Zehua et al. Indoor Integrated Navigation on PDR/Wi-Fi/barometer via Factor Graph with Local Attention
CN110781803A (en) Human body posture identification method based on extended Kalman filter
CN113899357B (en) Incremental mapping method and device for visual SLAM, robot and readable storage medium
CN117576218B (en) Self-adaptive visual inertial navigation odometer output method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination