CN117274331A - Positioning registration optimization method, system, device and storage medium - Google Patents

Positioning registration optimization method, system, device and storage medium

Info

Publication number
CN117274331A
Authority
CN
China
Prior art keywords
point cloud
cloud data
preset
characteristic point
round
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311206743.XA
Other languages
Chinese (zh)
Inventor
钟立扬
郭林栋
刘羿
何贝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sinian Zhijia Technology Co ltd
Original Assignee
Beijing Sinian Zhijia Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sinian Zhijia Technology Co ltd filed Critical Beijing Sinian Zhijia Technology Co ltd
Priority to CN202311206743.XA
Publication of CN117274331A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Abstract

The embodiment of the application discloses a positioning registration optimization method, a positioning registration optimization system, a positioning registration optimization device and a storage medium. The method comprises the following steps: acquiring characteristic point cloud data; performing at least one round of iterative processing on the characteristic point cloud data; the round of iterative processing includes: determining conversion characteristic points of each characteristic point of the characteristic point cloud data in a target coordinate system based on preset conversion parameters; in a target coordinate system, determining the association distance between the conversion characteristic point and the adjacent characteristic point; in response to the preset distance condition being met, eliminating adjacent characteristic points to obtain optimized characteristic point cloud data; in response to the iteration termination condition not being met, optimizing preset conversion parameters based on a preset optimization algorithm, and performing next iteration processing based on the optimized preset conversion parameters and the optimized characteristic point cloud data; in response to the iteration termination condition being met, a registration result is determined based on the conversion feature points determined by the round of iterative processing. The method and the device can reduce the computational resources of repeated iteration and improve the iteration convergence rate.

Description

Positioning registration optimization method, system, device and storage medium
Technical Field
The present disclosure relates to the field of laser positioning technologies, and in particular, to a positioning registration optimization method, system, device, and storage medium.
Background
With the wide application of positioning technology, laser radar positioning has become one of the important links in realizing fusion positioning. Laser radar positioning usually collects point cloud data online and then realizes positioning in specific scenes, such as ports and underground parking lots, by registering the data in real time against a point cloud map built offline.
For point cloud positioning registration, CN116295353A proposes a positioning method for an unmanned vehicle, in which acquired point cloud data are matched with template point cloud data through multiple iterations to position the unmanned vehicle. However, when the number of points is large, the repeated iterations consume a large amount of computing resources, which slows iteration convergence and affects positioning efficiency.
It is therefore desirable to provide a positioning registration optimization method, system, apparatus, and storage medium to reduce computing resources for point cloud registration and improve positioning efficiency.
Disclosure of Invention
One of the embodiments of the present disclosure provides a positioning registration optimization method. The method comprises the following steps: acquiring characteristic point cloud data; performing at least one round of iterative processing on the characteristic point cloud data, and determining a registration result of the characteristic point cloud data in a target coordinate system based on the iterative result; wherein, a round of iterative processing includes: based on preset conversion parameters, determining conversion characteristic points of each characteristic point of the characteristic point cloud data corresponding to the round of iterative processing in a target coordinate system; in a target coordinate system, determining the association distance between the conversion characteristic point and the adjacent characteristic point corresponding to the conversion characteristic point; in response to the correlation distance meeting a preset distance condition, eliminating adjacent feature points to obtain optimized feature point cloud data; in response to the iteration termination condition not being met, optimizing preset conversion parameters based on a preset optimization algorithm, and taking the optimized preset conversion parameters and the optimized characteristic point cloud data as the preset conversion parameters and the characteristic point cloud data of the next round of iteration processing respectively; and determining a registration result based on the conversion characteristic points of each characteristic point determined by the round of iterative processing in the target coordinate system in response to the iteration termination condition being satisfied.
One of the embodiments of the present specification provides a positioning registration optimization system. The system comprises: the acquisition module is used for acquiring the characteristic point cloud data; the iteration processing module is used for carrying out at least one round of iteration processing on the characteristic point cloud data and determining a registration result of the characteristic point cloud data in a target coordinate system based on an iteration result; wherein, a round of iterative processing includes: based on preset conversion parameters, determining conversion characteristic points of each characteristic point of the characteristic point cloud data corresponding to the round of iterative processing in a target coordinate system; in a target coordinate system, determining the association distance between the conversion characteristic point and the adjacent characteristic point corresponding to the conversion characteristic point; in response to the correlation distance meeting a preset distance condition, eliminating adjacent feature points to obtain optimized feature point cloud data; in response to the iteration termination condition not being met, optimizing preset conversion parameters based on a preset optimization algorithm, and taking the optimized preset conversion parameters and the optimized characteristic point cloud data as the preset conversion parameters and the characteristic point cloud data of the next round of iteration processing respectively; and determining a registration result based on the conversion characteristic points of each characteristic point determined by the round of iterative processing in the target coordinate system in response to the iteration termination condition being satisfied.
One of the embodiments of the present specification provides a positioning registration optimization apparatus, the apparatus comprising at least one processor and at least one memory; the at least one memory is configured to store computer instructions; the at least one processor is configured to execute at least some of the computer instructions to implement a positioning registration optimization method.
One of the embodiments of the present specification provides a computer-readable storage medium storing computer instructions that, when read by a computer, cause the computer to perform a positioning registration optimization method.
Drawings
The present specification will be further elucidated by way of example embodiments, which will be described in detail by means of the accompanying drawings. The embodiments are not limiting, in which like numerals represent like structures, wherein:
FIG. 1 is a schematic structural diagram of a localization registration optimization system shown in accordance with some embodiments of the present description;
FIG. 2 is an exemplary flow chart of a localization registration optimization method shown in accordance with some embodiments of the present description;
FIG. 3 is an exemplary flow chart of a round of iterative processing shown in accordance with some embodiments of the present description;
Fig. 4 is an exemplary flow chart of a round of downsampling operations according to some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present specification, the drawings that are required to be used in the description of the embodiments will be briefly described below. It is apparent that the drawings in the following description are only some examples or embodiments of the present specification, and it is possible for those of ordinary skill in the art to apply the present specification to other similar situations according to the drawings without inventive effort. Unless otherwise apparent from the context of the language or otherwise specified, like reference numerals in the figures refer to like structures or operations.
It will be appreciated that "system," "apparatus," "unit" and/or "module" as used herein is one method for distinguishing between different components, elements, parts, portions or assemblies at different levels. However, if other words can achieve the same purpose, the words can be replaced by other expressions.
As used in this specification and the claims, the terms "a," "an," and/or "the" are not specific to the singular and may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
A flowchart is used in this specification to describe the operations performed by the system according to embodiments of the present specification. It should be appreciated that the preceding or following operations are not necessarily performed in order precisely. Rather, the steps may be processed in reverse order or simultaneously. Also, other operations may be added to or removed from these processes.
For positioning registration optimization, an Iterative Closest Point (ICP) algorithm is currently adopted to transform initial point cloud data to target point cloud data through an initial rotation transformation matrix R and a translation transformation matrix T. When the amount of point cloud data is large, the processor consumes a large amount of computing resources due to repeated iteration, which reduces the convergence speed of the ICP algorithm and affects positioning efficiency.
Therefore, in the iterative processing process, some embodiments of the present disclosure reduce the number of feature point cloud data through fusion of multiple strategies, so as to reduce the computational resources consumed by repeated iteration, and improve the iterative convergence speed and the positioning efficiency.
Fig. 1 is a schematic structural diagram of a localization registration optimization system shown in accordance with some embodiments of the present description. In some embodiments, as shown in fig. 1, the localization registration optimization system 100 may include an acquisition module 110 and an iterative processing module 120.
In some embodiments, the acquisition module 110 may be configured to acquire feature point cloud data.
In some embodiments, the acquisition module 110 may also be configured to: acquiring initial point cloud data; responding to the initial point cloud data not meeting the first preset quantity condition, and performing at least one round of downsampling operation on the initial point cloud data to obtain characteristic point cloud data; and responding to the initial point cloud data meeting the first preset quantity condition, and taking the initial point cloud data as characteristic point cloud data.
In some embodiments, a round of downsampling operations performed by the acquisition module 110 may include: acquiring a voxel downsampling scale corresponding to the round of downsampling operation; performing voxel downsampling on the initial point cloud data based on a voxel downsampling scale to obtain downsampled point cloud data; responding to the downsampled point cloud data to meet a second preset quantity condition, and taking the downsampled point cloud data as characteristic point cloud data; and reducing the voxel downsampling scale in response to the downsampled point cloud data not meeting a second preset number condition, and taking the reduced voxel downsampling scale as the voxel downsampling scale corresponding to the next downsampling operation to re-perform the downsampling operation on the initial point cloud data.
In some embodiments, the iteration processing module 120 may be configured to perform at least one iteration process on the feature point cloud data, and determine a registration result of the feature point cloud data in the target coordinate system based on the iteration result.
In some embodiments, a round of iterative processing performed by the iterative processing module 120 may include: based on preset conversion parameters, determining conversion characteristic points of each characteristic point of the characteristic point cloud data corresponding to the round of iterative processing in a target coordinate system; in a target coordinate system, determining the association distance between the conversion characteristic point and the adjacent characteristic point corresponding to the conversion characteristic point; in response to the correlation distance meeting a preset distance condition, eliminating adjacent feature points to obtain optimized feature point cloud data; in response to the iteration termination condition not being met, optimizing preset conversion parameters based on a preset optimization algorithm, and taking the optimized preset conversion parameters and the optimized characteristic point cloud data as the preset conversion parameters and the characteristic point cloud data of the next round of iteration processing respectively; and determining a registration result based on the conversion characteristic points of each characteristic point determined by the round of iterative processing in the target coordinate system in response to the iteration termination condition being satisfied. For more details of the iterative process, reference may be made to FIGS. 2-4 and their associated description below.
In some embodiments, a round of iterative processing performed by the iterative processing module 120 may further include: and in response to the iteration termination condition not being met, optimizing the preset distance condition, and taking the optimized preset distance condition as the preset distance condition of the next round of iteration processing, wherein optimizing the preset distance condition comprises reducing a preset association threshold value.
In some embodiments, the localization registration optimization system 100 may further include a scale determination module that may be used to determine a type of feature point cloud data based on a height distribution of feature points in the feature point cloud data; and determining an iterative processing scale based on the type of the characteristic point cloud data.
It should be noted that the above description of the acquisition module, the iterative processing module, and other modules is for convenience only, and is not intended to limit the present disclosure to the scope of the illustrated embodiments. It will be appreciated by those skilled in the art that, given the principles of the system, various modules may be combined arbitrarily or a subsystem may be constructed in connection with other modules without departing from such principles. In some embodiments, the acquisition module and the iterative processing module disclosed in fig. 1 may be different modules in a system, or may be one module to implement the functions of two or more modules described above. For example, each module may share one memory module, or each module may have a respective memory module. Such variations are within the scope of the present description.
Fig. 2 is an exemplary flow chart of a positioning registration optimization method shown in accordance with some embodiments of the present description. In some embodiments, the processor may obtain the feature point cloud data 210, perform at least one round of iterative processing 220 on the feature point cloud data 210, and determine a registration result 240 of the feature point cloud data 210 in the target coordinate system based on the iteration result 230.
The characteristic point cloud data 210 is point cloud data representing environmental characteristics around the equipment to be positioned. For example, the characteristic point cloud data can represent environmental characteristics of preset positioning places, such as underground parking lots and ports, where devices to be positioned, such as autonomous vehicles and transportation equipment, operate. In some embodiments, the feature point cloud data 210 may carry coordinate information, which may be three-dimensional coordinates. For example, the coordinates of one feature point in the feature point cloud data 210 may be (x1, y1, z1).
In some embodiments, the processor may collect the feature point cloud data 210 in a variety of ways, for example through devices such as a laser scanner, a camera, or a three-dimensional scanner. In some embodiments, the processor may also pre-process the collected point cloud data to filter unnecessary point cloud data (e.g., noisy point cloud data) to obtain the feature point cloud data 210. The noise point cloud data may be interference data generated by interference factors such as severe weather (e.g., rain, heavy fog, etc.). For more details on the acquisition of feature point cloud data 210, reference may be made to FIG. 4 and its associated description below.
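The application does not prescribe a particular pre-processing method. A minimal sketch of one common option, statistical outlier removal based on the mean distance to the k nearest neighbors, is given below; the function name, the value of k, and the std-ratio rule are illustrative assumptions, and Python with numpy/scipy is used only for concreteness.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_noise_points(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbors is
    unusually large, a common proxy for rain/fog noise. The values of k
    and std_ratio are illustrative assumptions, not taken from this text."""
    tree = cKDTree(points)
    # k + 1 neighbors because the closest "neighbor" of a point is itself.
    dists, _ = tree.query(points, k=k + 1)
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]

raw = np.random.rand(1000, 3)             # stand-in for collected point cloud data
feature_points = remove_noise_points(raw)
```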
The iterative process 220 refers to a process in which the processor performs at least one round of repetitive processing on the point cloud data to screen the optimized feature point cloud data 210. In some embodiments, the iterative processing may include a variety of processing approaches such as a culling operation. The rejecting operation refers to rejecting outliers in the feature point cloud data.
In some embodiments, in each round of iterative processing 220, the processor may transform the feature point cloud data 210 to the target coordinate system using preset transformation parameters (e.g., a rotation transformation matrix R and a translation transformation matrix T), resulting in transformed feature points in the target coordinate system. If the preset distance condition is met, the processor can reject the corresponding adjacent feature points so as to optimize the feature point cloud data. If the iteration termination condition is not satisfied, the processor may optimize the preset conversion parameters based on the conversion feature points, so as to perform the next iteration process 220 and further reduce the number of feature point cloud data 210. If the iteration termination condition is met, the processor may determine an iteration result 230. For more details of the iterative process, reference may be made to FIG. 3 below and its associated description.
In some embodiments, the iteration result 230 may include the feature point cloud data determined by the iteration and the transformed feature point of each feature point of the feature point cloud data in the target coordinate system. The transformation feature points of the target coordinate system can be obtained by registering feature points based on preset transformation parameters.
Registration refers to a processing manner of transforming each feature point of the feature point cloud data 210 into a target coordinate system by using a preset transformation parameter. In some embodiments, the processor may perform ICP iterative processing on the feature point cloud data 210 to achieve registration.
Correspondingly, in some embodiments, the processor may terminate the iteration when the iteration termination condition is satisfied, and determine the transformed feature points of the feature points determined by the last iteration in the target coordinate system as the iteration result 230. For more details on the iteration termination conditions and the iteration process, reference is made to fig. 3 and its associated description below.
The target coordinate system may be a preset three-dimensional coordinate system. In some embodiments, the target coordinate system may be a three-dimensional coordinate system established based on a preset positioning location. For example, the target coordinate system may be a corresponding three-dimensional coordinate system in the navigation map.
In some embodiments, the processor may convert the feature points in the feature point cloud data 210 into converted feature points in the target coordinate system using preset conversion parameters to achieve registration of the point cloud data.
Registration result 240 refers to a positioning result obtained by transforming feature point cloud data 210 to the target coordinate system. Each conversion characteristic point of the target coordinate system corresponds to a characteristic point of the characteristic point cloud data, and based on the conversion characteristic point of the characteristic point cloud data in the target coordinate system, the corresponding range information of the corresponding characteristic point cloud data in the target coordinate system can be determined, and then the positioning information of an object corresponding to the characteristic point cloud data and the like can be determined. For example, the processor may locate the to-be-located device in the preset location, such as locating the transportation device in the port, according to the registration result 240 of the feature point cloud data corresponding to the to-be-located device.
FIG. 3 is an exemplary flow chart of a round of iterative processing shown in accordance with some embodiments of the present description. As shown in fig. 3, a round of iterative process 220 may include the following steps. In some embodiments, the iterative process may be performed by a processor.
Step 310, determining a conversion feature point of each feature point of the feature point cloud data corresponding to the round of iterative processing in the target coordinate system based on the preset conversion parameter.
The preset conversion parameters are parameters for guiding registration. In some embodiments, the preset conversion parameters may include a rotation transformation matrix R, a translation transformation matrix T. Wherein the rotational transformation matrix R may comprise at least one rotational transformation vector, which may be used to change the relative direction between the plurality of feature points, but not the distance between the plurality of feature points. The translation transformation matrix T may include at least one translation transformation vector that may be used to change the distance between the plurality of feature points, but not the relative direction between the plurality of feature points, thereby transforming the feature point cloud data from the initial coordinate system to the target coordinate system.
In some embodiments, the processor may obtain the preset conversion parameters in a variety of ways. For example, the processor may use a historical conversion parameter as the current preset conversion parameter, or may directly acquire a preset conversion parameter as the current one. For more details on preset conversion parameters, reference may be made to step 340A and its associated description below.
The converted feature points may be corresponding feature points in the target coordinate system after the feature points of the feature point cloud data are converted. In some embodiments, the transformed feature points may be used to reflect the registration result obtained after the feature points of the feature point cloud data are registered in the target coordinate system by using the rotation transformation matrix R and the translation transformation matrix T in the round of iterative process.
In some embodiments, the processor may determine the conversion feature point of the feature point cloud data in the target coordinate system by a preset conversion formula including the preset conversion parameter based on the preset conversion parameter. The preset conversion formula may include the following formula (1):
P(t)=R*P(s)+T, (1)
wherein P(t) is the conversion characteristic point corresponding to P(s), P(s) is one characteristic point of the characteristic point cloud data, R is the rotation transformation matrix, and T is the translation transformation matrix.
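A minimal sketch of equation (1), assuming the feature points are stored as rows of an N×3 array and the preset conversion parameters are a 3×3 rotation matrix R and a length-3 translation vector T; the names are illustrative.

```python
import numpy as np

def transform_points(points, R, T):
    """Apply equation (1), P(t) = R * P(s) + T, to every row of `points`."""
    return points @ R.T + T.reshape(1, 3)

# Toy preset conversion parameters: identity rotation, unit shift along x.
P_s = np.array([[1.0, 2.0, 3.0],
                [4.0, 5.0, 6.0]])
R = np.eye(3)
T = np.array([1.0, 0.0, 0.0])
P_t = transform_points(P_s, R, T)   # conversion feature points in the target frame
```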
In step 320, in the target coordinate system, the correlation distance between the conversion feature point and its corresponding neighboring feature point is determined.
The adjacent feature point refers to the other conversion feature point closest to a given conversion feature point. For example, for conversion feature point A, suppose conversion feature points B, C, and D lie in its periphery and the association distances from A to B, C, and D are 3, 4, and 5, respectively; conversion feature point B is then the adjacent feature point corresponding to conversion feature point A.
In some embodiments, the processor may determine the neighboring feature points in a variety of ways. For example, the processor may select one of the conversion feature points, calculate a correlation distance between the conversion feature point and the peripheral conversion feature point based on the conversion feature point and the peripheral conversion feature point, and then use the peripheral conversion feature point with the smallest correlation distance as the corresponding adjacent feature point.
The correlation distance refers to the distance between two conversion feature points. In some embodiments, the correlation distance may include multiple forms of euclidean distance, cosine distance, and the like.
In some embodiments, the correlation distance may be used to reflect whether neighboring feature points are outliers. Noise exists in the point cloud data due to interference such as rainy days, foggy days and the like in the acquisition process, and the outlier adjacent characteristic points can be regarded as noise in the point cloud data. For example, the greater the correlation distance, the farther apart an adjacent feature point is from the transformed feature point, the more likely that the adjacent feature point is outlier, and the more likely that the adjacent feature point is a noise point, the more inaccurate the information (e.g., properties of objects in space, such as shape, color, etc.) that the adjacent feature point represents may be.
In some embodiments, the processor may determine the association distance in a variety of ways. For example, the processor may determine the associated distance of the conversion feature point and the neighboring feature point based on the coordinate information of the conversion feature point and the neighboring feature point, based on the euclidean distance, the cosine distance calculation method, or the like.
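A hedged sketch of the association step, following the definition of the adjacent feature point above (the closest other conversion feature point). A KD-tree is used here only as one common way to find Euclidean nearest neighbors; it is not mandated by this description.

```python
import numpy as np
from scipy.spatial import cKDTree

def adjacent_feature_points(transformed):
    """For each conversion feature point, return the Euclidean association
    distance to, and the index of, the closest *other* conversion feature
    point (the "adjacent feature point" defined above)."""
    tree = cKDTree(transformed)
    # k=2: the nearest hit is the query point itself, the second is the neighbor.
    dists, idx = tree.query(transformed, k=2)
    return dists[:, 1], idx[:, 1]

transformed = np.random.rand(100, 3)   # conversion feature points in the target coordinate system
assoc_dist, neighbor_idx = adjacent_feature_points(transformed)
```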
And step 330, eliminating adjacent feature points in response to the correlation distance meeting a preset distance condition, and obtaining optimized feature point cloud data.
In some embodiments, a preset distance condition may be used to define adjacent feature points of an outlier. When the associated distance meets the preset distance condition, and the adjacent characteristic points of the conversion characteristic points are described as outliers, the processor can reject the adjacent characteristic points so as to reduce the number of point clouds and errors and obtain optimized characteristic point cloud data.
In some embodiments, the preset distance condition may include the association distance exceeding a preset association threshold.
The preset association threshold may be a maximum association distance of the non-outlier neighboring feature points with the transition feature point.
In some embodiments, the processor may determine the preset association threshold based on a history of maximum association distances of outliers to the transition feature points, or may determine the preset association threshold in a variety of other ways, such as based on human experience.
In some embodiments, there are a number of ways in which the processor rejects neighboring feature points. In some embodiments, for each conversion feature point, the processor may determine, based on the associated distance between the corresponding adjacent feature points, whether to eliminate the adjacent feature points, thereby evaluating the adjacent feature points in turn, and optimizing the feature point cloud data.
In some embodiments, in response to the association distance not satisfying the preset condition, it may be stated that there are no outlier neighboring feature points, and the processor may determine whether an iteration termination condition is satisfied in order to determine whether the iteration process is ended.
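A minimal sketch of the culling in step 330, assuming the association distances and neighbor indices from the previous step are available as arrays; the threshold value is an illustrative assumption.

```python
import numpy as np

def cull_outlier_neighbors(points, assoc_dist, neighbor_idx, threshold):
    """Remove every adjacent feature point whose association distance to its
    conversion feature point exceeds the preset association threshold,
    yielding the optimized feature point cloud data."""
    outliers = np.unique(neighbor_idx[assoc_dist > threshold])
    keep = np.ones(len(points), dtype=bool)
    keep[outliers] = False
    return points[keep]

points = np.random.rand(100, 3)
assoc_dist = np.random.rand(100)                      # from the association step
neighbor_idx = np.random.randint(0, 100, size=100)    # index of each point's adjacent feature point
optimized = cull_outlier_neighbors(points, assoc_dist, neighbor_idx, threshold=0.9)
```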
In step 340A, in response to the iteration termination condition not being met, optimizing the preset conversion parameter based on the preset optimization algorithm, and taking the optimized preset conversion parameter and the optimized characteristic point cloud data as the preset conversion parameter and the characteristic point cloud data of the next iteration process respectively.
In some embodiments, an iteration termination condition may be used to define the degree of iteration convergence, which may reflect the desired registration result. Correspondingly, in some embodiments, the processor may determine the iteration termination condition in a variety of ways, such as based on historical iteration termination conditions or artificial experience, and the like.
In some embodiments, the iteration termination condition includes: the iteration times meet the preset times conditions; and/or the global associated distance error meets a preset error condition, and the global associated distance error is determined based on the associated distance error of each feature point of the feature point cloud data.
In some embodiments, the preset number of times condition may be a desired number of iterations. When the number of iterations is greater than or equal to the desired number of iterations, the iteration count meets the preset number of times condition, and the processor can confirm convergence of the ICP algorithm and obtain the desired registration result. The desired number of iterations may be determined in a number of ways, such as from historical iterations or human experience.
The associated distance error refers to the distance error between the converted feature points of the feature point cloud data in the target coordinate system and the corresponding adjacent feature points. In some embodiments, the associated distance error may be used to optimize a preset conversion parameter.
In some embodiments, the processor may determine the associated distance error between the transformed feature point and its corresponding neighboring feature point in a variety of ways. For example, the processor may determine an associated distance error between the conversion feature point and its corresponding neighboring feature point based on the euclidean distance error. For example, the associated distance error may be equal to the euclidean distance error. The euclidean distance error calculation formula may include the following formula (2):
ΔD=||R*P(s)+T-P(t′)||, (2)
wherein ΔD is the Euclidean distance error, P(s) is one feature point of the feature point cloud data, P(t') is the adjacent feature point of the conversion feature point P(t) corresponding to P(s), R is the rotation transformation matrix, and T is the translation transformation matrix.
The global associated distance error refers to the overall error of the associated distance error corresponding to all the conversion feature points in the target coordinate system of the feature point cloud data. In some embodiments, the global correlation distance error may be used to reflect the degree of convergence of the ICP algorithm, and the smaller the global correlation distance error, the greater the degree of convergence of the ICP algorithm may be.
In some embodiments, the processor may determine the global correlation error in a number of ways based on the correlation distance error for each transformed feature point of the feature point cloud data. For example, the processor may calculate an average value of the correlation distance errors of all the conversion feature points, and take the average value as the global correlation error.
In some embodiments, the preset error condition may include the global associated error being less than a maximum error threshold. When the global association error meets a preset error condition, the processor can confirm convergence of the ICP algorithm, and can obtain a desired registration result. Wherein the maximum error threshold may be determined based on historical maximum error thresholds or human experience, among other ways.
In the embodiment of the specification, the determined iteration termination condition is richer by setting the preset error condition and the preset times condition, so that whether the iteration reaches the expectations or not can be judged from multiple angles, and the iteration efficiency is improved.
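A compact sketch of the termination check combining the two conditions above, taking the global associated distance error as the mean of the per-point errors (one of the options mentioned); the numeric limits are assumed for illustration.

```python
import numpy as np

def iteration_should_stop(assoc_errors, iteration, max_iters=30, max_error=0.05):
    """Evaluate the two iteration termination conditions described above:
    the iteration count condition and the global associated distance error
    condition (here the mean of the per-point errors). The numeric limits
    are illustrative assumptions."""
    global_error = float(np.mean(assoc_errors))
    return iteration >= max_iters or global_error < max_error

errors = np.abs(np.random.randn(100)) * 0.01   # per-point associated distance errors
print(iteration_should_stop(errors, iteration=5))
```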
In some embodiments, a preset optimization algorithm may be used to optimize preset conversion parameters, so that the optimized preset conversion parameters may meet the registration requirement of the feature point cloud data. In some embodiments, the preset optimization algorithm may include a plurality of optimization algorithms such as a least squares method.
In some embodiments, the processor may calculate the optimized preset conversion parameter by a least square method based on the mean coordinate of the feature point in the feature point cloud data and the mean coordinate of the transfer feature point in the target coordinate system. For example, the processor may solve the optimized translation transformation matrix T 'and the optimized rotation transformation matrix R' using a matrix transformation algorithm based on the mean value of the feature points in the feature point cloud data and the mean value of the transferred feature points in the target coordinate system. The matrix transformation algorithm may include the following equation (3):
R' = V*U^T, T' = P̄(t) - R'*P̄(s), (3)
wherein R' is the optimized rotation transformation matrix, T' is the optimized translation transformation matrix, P̄(s) is the mean coordinate of the feature points, P̄(t) is the mean coordinate of the transfer feature points, and V and U are the singular value decomposition results corresponding to the feature points and the transfer feature points, respectively.
In some embodiments, the processor may perform weighted average on the coordinates of all the feature points, and calculate a mean coordinate of the feature points; and carrying out weighted average on the coordinates of all the transfer characteristic points, and calculating to obtain the mean value coordinates of the transfer characteristic points.
For example, based on the coordinates of the feature points in the feature point cloud data and the coordinates of the transfer feature points in the target coordinate system, the processor may solve the mean coordinate P̄(s) of the feature points and the mean coordinate P̄(t) of the transfer feature points using a weighted average algorithm. The weighted average algorithm may include the following equation (4):
P̄(s) = (Σ_{i=1..n} w_i*P_i(s)) / (Σ_{i=1..n} w_i), P̄(t) = (Σ_{i=1..n} w_i*P_i(t)) / (Σ_{i=1..n} w_i), (4)
wherein n is the number of feature points in the feature point cloud data, which is equal to the total number of transfer feature points, P_i(s) is the coordinate of the i-th feature point, P_i(t) is the coordinate of the i-th transfer feature point, w_i is the weight corresponding to the i-th feature point P_i(s) or the i-th transfer feature point P_i(t), Σ w_i*P_i(s) and Σ w_i*P_i(t) are the weighted coordinate sums of all feature points and of all transfer feature points, respectively, and Σ w_i is the sum of all weights.
Correspondingly, in some embodiments, the processor may determine the singular value decomposition results corresponding to the feature points and the transfer feature points based on a singular value decomposition algorithm. The singular value decomposition algorithm may include the following equation (5):
Y^T*X = U*W*V^T, (5)
wherein X is a parameter matrix corresponding to the transfer feature points, Y is a parameter matrix corresponding to the feature points of the feature point cloud data, and W is a residual matrix.
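A numpy sketch in the spirit of equations (3)-(5): weighted mean coordinates are computed, the centered coordinates are cross-multiplied and decomposed with SVD, and the optimized R' and T' are recovered. Uniform weights and the determinant sign guard are assumptions added to make the sketch usable; they are not stated in this description.

```python
import numpy as np

def optimize_transform(feature_pts, transfer_pts, weights=None):
    """Recover the optimized rotation R' and translation T' following
    equations (3)-(5): weighted means, centered cross term, SVD, then
    R' = V * U^T and T' = mean(t) - R' * mean(s)."""
    n = len(feature_pts)
    if weights is None:
        weights = np.ones(n)            # uniform weights (assumption)
    w = weights / weights.sum()
    s_mean = (w[:, None] * feature_pts).sum(axis=0)    # equation (4)
    t_mean = (w[:, None] * transfer_pts).sum(axis=0)
    Y = feature_pts - s_mean                           # centered feature points
    X = transfer_pts - t_mean                          # centered transfer points
    H = (w[:, None] * Y).T @ X                         # cross term fed to the SVD, equation (5)
    U, _, Vt = np.linalg.svd(H)
    V = Vt.T
    R_opt = V @ U.T                                    # equation (3)
    if np.linalg.det(R_opt) < 0:                       # reflection guard (assumption)
        V[:, -1] *= -1
        R_opt = V @ U.T
    T_opt = t_mean - R_opt @ s_mean
    return R_opt, T_opt

src = np.random.rand(50, 3)
dst = src + np.array([0.2, -0.1, 0.05])                # pure translation for a toy check
R_opt, T_opt = optimize_transform(src, dst)            # R_opt ~ identity, T_opt ~ the shift
```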
In some embodiments, the processor may use the optimized translation transformation matrix T 'and the optimized rotation transformation matrix R' as preset conversion parameters for the next iteration process; and the optimized characteristic point cloud data can be used as characteristic point cloud data for the next round of iterative processing.
In some embodiments, in response to the iteration termination condition not being met, the processor may further optimize the preset distance condition and take the optimized preset distance condition as the preset distance condition for the next round of iterative processing.
In some embodiments, the processor may adjust and optimize the preset distance condition when it is determined that the iteration is not stopped, so that the judgment of the correlation distance is more strictly performed in the next iteration process, thereby further reducing the number of the point cloud data.
In some embodiments, optimizing the preset distance condition includes reducing a preset association threshold. Correspondingly, as the preset association threshold is related to the maximum association distance between the non-outlier adjacent feature points and the conversion feature points, the maximum association distance between the transfer feature points and the adjacent feature points is also reduced along with the reduction of the preset association threshold in the next iteration process, so that the judgment of the outlier adjacent feature points is stricter, and the quantity of the point cloud data is further reduced. In some embodiments, the preset association threshold may be reduced to a number of values, such as one-half, three-quarters, etc. For more details of the maximum correlation distance and the preset correlation threshold, reference may be made to step 330 and the description thereof.
In the embodiment of the specification, more outlier adjacent characteristic points can be removed by optimizing the preset distance condition through each round of iteration, so that the number of point cloud data is further reduced, and the computing resources are saved.
In step 340B, in response to the iteration termination condition being satisfied, a registration result is determined based on the transformed feature points of each feature point in the target coordinate system determined by the round of iterative processing.
In some embodiments, in response to the iteration termination condition being met, the processor may consider the ICP algorithm to converge, and may then use the transformed feature points determined in the round of iterative processing via step 310 described above as a result of the registration of the feature point cloud data.
In the embodiment of the specification, in the iterative processing process, based on the association distance between the conversion feature point and the corresponding adjacent feature point, the adjacent feature point with the association distance meeting the preset distance condition is removed, so that the number of the determined optimized feature point cloud data is reduced, the calculation resource consumed by repeated iteration is reduced, and the iterative convergence speed and the positioning efficiency are improved.
In some embodiments, the processor may further determine a type of the feature point cloud data based on a height distribution of the feature points in the feature point cloud data; and determining an iterative processing scale based on the type of the characteristic point cloud data.
The height distribution of the feature points refers to the numerical distribution condition of the feature points in the feature point cloud data on the ordinate in the coordinate system where the feature points are located. In some embodiments, the height distribution of feature points may be used to reflect the dimensional features corresponding to objects in space. The higher the height distribution of the feature points, the more the object in the space is biased to the three-dimensional feature, such as a street lamp, a vehicle and the like; conversely, the lower the height distribution of the feature points, the more the object in space is biased toward a two-dimensional feature, such as a lane line, zebra line, or the like.
In some embodiments, the processor may determine the height distribution of the feature points in a variety of ways. In some embodiments, the processor may determine the height distribution value based on coordinate information of the feature points in the feature point cloud data. The height distribution value may include parameters such as a height mean value, a height variance, a height range, and the like. For example, the processor may calculate the height average based on the ordinate of all feature points. The higher the height average, the more favored an object in space may be to a three-dimensional feature.
In some embodiments, the types of feature point cloud data may include three-dimensional feature types, two-dimensional feature types. In some embodiments, the partitioning of the types of feature point cloud data may be affected by the dimensional features corresponding to the objects in space. For example, if the dimensional feature corresponding to the object in the space has only two-dimensional feature, the type of the feature point cloud data may be a two-dimensional feature type. In some embodiments, in order to ensure accuracy of the registration result, if the dimensional feature corresponding to the object in the space has both a two-dimensional feature and a three-dimensional feature, the type of the feature point cloud data is a three-dimensional feature type.
In some embodiments, the dimensional feature belongs to a two-dimensional feature or a three-dimensional feature, and may be determined based on whether the coordinates of the feature points of the acquired feature point cloud data include two-dimensional coordinate information or three-dimensional coordinate information.
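A minimal sketch of classifying the type of feature point cloud data from the height (z) distribution; using the height mean alone and the particular threshold value are illustrative assumptions.

```python
import numpy as np

def classify_feature_type(points, height_mean_threshold=0.5):
    """Label feature point cloud data as a two-dimensional or three-dimensional
    feature type from the mean of the z coordinates; the threshold value is an
    assumed illustration, not taken from this text."""
    return "3d" if points[:, 2].mean() > height_mean_threshold else "2d"

lane_like = np.column_stack([np.random.rand(200, 2) * 10.0, np.zeros(200)])
pole_like = np.column_stack([np.random.rand(200, 2), np.random.rand(200) * 5.0])
print(classify_feature_type(lane_like), classify_feature_type(pole_like))   # 2d 3d
```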
In some embodiments, the iterative processing scale refers to the manner in which the iterative processing occurs. To more accurately distinguish between feature point cloud data of different dimensions, in some embodiments, the processor may determine different iterative processing dimensions for the feature point cloud data of different dimensions to obtain a registration result that can reflect the dimensions.
For example, for feature point cloud data of a two-dimensional feature type, the iterative processing scale may include ignoring longitudinal data of the feature point cloud data during iterative processing to obtain a two-dimensional registration result with three degrees of freedom; for the feature point cloud data of the three-dimensional feature type, the iterative processing scale can include retaining longitudinal data of the feature point cloud data during iterative processing to obtain a three-dimensional registration result with six degrees of freedom.
In the embodiment of the specification, different iteration processing scales are adopted for different types of characteristic point cloud data, so that the obtained registration result can more accurately reflect the dimension of the point cloud data, and the positioning accuracy is improved.
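A hedged sketch of applying the iterative processing scale: the longitudinal (z) data are ignored for the two-dimensional feature type and kept for the three-dimensional type; the function name and labels are illustrative.

```python
import numpy as np

def apply_iteration_scale(points, feature_type):
    """Ignore the longitudinal (z) data for a two-dimensional feature type so
    the subsequent registration is effectively three-degree-of-freedom, and
    keep it for the three-dimensional type (six degrees of freedom)."""
    if feature_type == "2d":
        flat = points.copy()
        flat[:, 2] = 0.0
        return flat
    return points

pts = np.random.rand(10, 3)
print(apply_iteration_scale(pts, "2d")[:, 2])   # all zeros: longitudinal data ignored
```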
In some embodiments, the processor may obtain initial point cloud data; responding to the initial point cloud data not meeting the first preset quantity condition, and performing at least one round of downsampling operation on the initial point cloud data to obtain characteristic point cloud data; and responding to the initial point cloud data meeting the first preset quantity condition, and taking the initial point cloud data as characteristic point cloud data.
In some embodiments, the initial point cloud data may be point cloud data acquired based on lidar or the like for reflecting an environment in which a device to be located (e.g., an autonomous vehicle, a transportation device, etc.) is located. For more description of the initial point cloud data, see the description of the feature point cloud data in fig. 2.
In some embodiments, the first preset number of conditions may include the number of initial point cloud data being less than or equal to a first preset number threshold. The first preset point cloud threshold value can be determined according to the number of the historical characteristic point cloud data or manual experience.
For example, if the number of initial point cloud data is less than or equal to the first preset number threshold (i.e., the first preset number condition is satisfied), the amount of initial point cloud data can be regarded as small and there is no need to reduce the number of points. Correspondingly, the processor does not need to perform the downsampling operation and directly uses the initial point cloud data as the characteristic point cloud data. Otherwise, if the number of initial point cloud data is greater than the first preset number threshold (i.e., the first preset number condition is not satisfied), the amount of initial point cloud data can be regarded as large, and there is a need to reduce the number of points.
The down-sampling operation refers to an operation of reducing the size of the data amount by sampling the data. In some embodiments, the downsampling may include sampling the initial point cloud data such that the sampled initial point cloud data is used as the feature point cloud data such that the number of feature point cloud data is less than the number of initial point cloud data, while also maintaining the shape characteristics of the initial point cloud.
Fig. 4 is an exemplary flow chart of a round of downsampling operations according to some embodiments of the present description. As shown in fig. 4, a round of downsampling operations may include the following steps. In some embodiments, the downsampling operation may be performed by a processor.
Step 410, obtaining the voxel downsampling scale corresponding to the round of downsampling operation.
Voxel downsampling scale refers to the way in which downsampling is performed, which can be used to define the size of the sampled voxels. Wherein the sampling voxels may be used to guide the volume element performing the sampling operation, the size of which may affect the downsampling effort. For example, the larger the voxel downsampling scale, the larger the size of the sampled voxels, the larger the downsampling strength, and the smaller the number of point cloud data obtained after sampling.
In some embodiments, the processor may obtain the voxel downsampling scale in a variety of ways, such as based on historical downsampling data or artificial experience.
And step 420, performing voxel downsampling on the initial point cloud data based on the voxel downsampling scale to obtain downsampled point cloud data.
In some embodiments, the processor may divide the initial point cloud data into at least one sampled voxel based on the voxel downsampling scale, where each sampled voxel includes a plurality of feature points of the initial point cloud data. The number of feature points included in each sampled voxel is determined by the voxel downsampling scale: the larger the voxel downsampling scale, the greater the number of feature points included in one sampled voxel.
In some embodiments, for each sampled voxel, the processor may extract at least one feature point from the sampled voxel as a representative point for the sampled voxel. The representative point of a sampled voxel may be used to approximately represent the other feature points of that sampled voxel.
In some embodiments, the processor may determine the representative point of the sampled voxel in a variety of ways. For example, the processor may calculate the centroid of all feature points in a sampled voxel, with the centroid being taken as a representative point for the sampled voxel.
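A minimal sketch of one round of voxel downsampling with the centroid as the representative point of each sampled voxel, assuming the initial point cloud is an N×3 numpy array; the voxel size shown is an illustrative value.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Divide the initial point cloud into cubic sampling voxels of edge
    length `voxel_size` and keep the centroid of each voxel as its
    representative point."""
    voxel_ids = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(voxel_ids, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    sums = np.zeros((inverse.max() + 1, 3))
    np.add.at(sums, inverse, points)
    counts = np.bincount(inverse)
    return sums / counts[:, None]

cloud = np.random.rand(20000, 3) * 50.0          # stand-in for the initial point cloud data
down = voxel_downsample(cloud, voxel_size=2.0)   # far fewer points, same overall shape
```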
In step 430A, in response to the downsampled point cloud data satisfying the second preset number of conditions, the downsampled point cloud data is used as the feature point cloud data.
In some embodiments, the second preset number of conditions is used to define the number of point cloud data after downsampling, so as to ensure the accuracy of registration while reducing the number of point cloud data.
In some embodiments, the second preset number of conditions may include the number of downsampled point cloud data being greater than or equal to a second preset number threshold. The second preset point cloud threshold value can be determined according to the number of the historical characteristic point cloud data or manual experience. In some embodiments, the second preset number threshold may be less than or equal to the first preset number threshold.
In some embodiments, the downsampled point cloud data satisfies a second preset number of conditions, which may reflect that the number of downsampled point cloud data satisfies the registration requirement, and the processor may use the downsampled point cloud data as the feature point cloud data.
And step 430B, in response to the downsampled point cloud data not meeting the second preset number of conditions, reducing the voxel downsampling scale, and taking the reduced voxel downsampling scale as the voxel downsampling scale corresponding to the next downsampling operation to perform downsampling operation on the initial point cloud data again.
In some embodiments, the processor may reduce the voxel downsampling scale to three-quarters, one-half, etc., of its current value and use the result as the reduced voxel downsampling scale. The degree of reduction of the voxel downsampling scale may be determined based on historical downsampling data or artificial experience.
In some embodiments, the downsampled point cloud data does not satisfy the second preset number condition, which may reflect that the number of downsampled point cloud data is too small to satisfy the requirement of accuracy of registration, and the processor may use the downsampled voxel downsampling scale as a voxel downsampling scale corresponding to a next round of downsampling operation, and re-perform the downsampling operation on the initial point cloud data so as to reduce the number of feature point clouds to the greatest extent. For example, the processor may re-execute steps 410-430B to re-perform a round of downsampling operations.
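A hedged sketch of repeating the downsampling operation with a shrinking voxel downsampling scale until the second preset number condition is met; the minimum point count, the shrink factor, and the safety stop are illustrative assumptions.

```python
import numpy as np

def voxel_centroids(points, voxel_size):
    """Voxel downsampling with centroid representative points (as in step 420)."""
    ids = np.floor(points / voxel_size).astype(np.int64)
    _, inv = np.unique(ids, axis=0, return_inverse=True)
    inv = inv.ravel()
    sums = np.zeros((inv.max() + 1, 3))
    np.add.at(sums, inv, points)
    return sums / np.bincount(inv)[:, None]

def adaptive_downsample(initial_points, voxel_size=2.0, min_points=2000, shrink=0.75):
    """Repeat the downsampling operation with a smaller voxel downsampling
    scale until the second preset number condition (here `min_points`) is
    met. All numeric defaults are illustrative assumptions."""
    while True:
        down = voxel_centroids(initial_points, voxel_size)
        if len(down) >= min_points:          # step 430A: condition satisfied
            return down
        voxel_size *= shrink                 # step 430B: reduce the scale and retry
        if voxel_size < 1e-3:                # safety stop, not part of the described method
            return initial_points

cloud = np.random.rand(50000, 3) * 100.0
feature_points = adaptive_downsample(cloud)
```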
In the embodiment of the specification, based on the voxel downsampling scale, the downsampling operation of the initial point cloud data is performed, and then the characteristic point cloud data is determined when the second preset quantity condition is met, so that the quantity of the characteristic point cloud data can be reduced to the greatest extent while the quantity of the characteristic point cloud data meets the requirement of accuracy of registration, the calculation resource consumed by repeated iteration is further reduced, and the iteration convergence speed and the positioning efficiency are improved.
In some embodiments, the first preset number of conditions and/or the second preset number of conditions comprises a point cloud number threshold; the voxel downsampling scale and the point cloud quantity threshold are related to at least one of a point cloud registration scene type, registration accuracy, a two-dimensional feature quantity and a three-dimensional feature quantity.
In some embodiments, the threshold value of the number of point clouds may be a minimum value of the number of feature point cloud data, so as to ensure that the number of obtained feature point cloud data can meet the accuracy requirement of subsequent iterative processing. For example, the first preset number threshold and the second preset number threshold may be the same, and are both the point cloud number thresholds.
The point cloud registration scene type refers to a scene type corresponding to the target coordinate system. In some embodiments, different voxel downsampling scales and point cloud quantity thresholds may be set for different point cloud registration scene types to improve registration accuracy. In some embodiments, the point cloud registration scene types may include harbor, street, indoor, etc., scene types. For example, when the point cloud registration scene type is street, the complexity of objects in space is higher than indoors, and the processor may set a larger voxel downsampling scale and a point cloud quantity threshold to reduce the amount of calculation of registration and improve positioning efficiency.
In some embodiments, the point cloud registration scene type may be obtained in a variety of ways, such as identifying its corresponding environment based on environmental information (e.g., illumination intensity, altitude, etc.) collected by the device to be positioned, to determine the point cloud registration scene type, or based on human experience.
The registration accuracy refers to the accuracy of registering the characteristic point cloud data with the point cloud map established offline. In some embodiments, different registration accuracies may affect the setting of the voxel downsampling scale and the point cloud quantity threshold. For example, when the desired registration accuracy is high, the processor may set a smaller voxel downsampling scale and point cloud quantity threshold to meet the registration accuracy requirements.
In some embodiments, the processor may determine the registration accuracy in a variety of ways, for example, based on historical registration accuracy or on manual experience.
The two-dimensional feature quantity refers to the number of points of the two-dimensional feature type in the initial point cloud data, and the three-dimensional feature quantity refers to the number of points of the three-dimensional feature type in the initial point cloud data.
In some embodiments, the ratio of the two-dimensional feature quantity to the three-dimensional feature quantity may reflect the dimensional distribution of the initial point cloud data, and may also affect the voxel downsampling scale and the point cloud quantity threshold. For example, a larger ratio of the two-dimensional feature quantity to the three-dimensional feature quantity may reflect that two-dimensional features are more widely distributed than three-dimensional features in the initial point cloud data. Since registration based on two-dimensional features is less accurate than registration based on three-dimensional features, the processor may reduce the voxel downsampling scale and increase the point cloud quantity threshold to ensure that there are enough feature points for registration, thereby improving registration accuracy.
In some embodiments, the processor may determine the ratio of the two-dimensional feature quantity to the three-dimensional feature quantity in a variety of ways, for example, based on the height distribution of the initial point cloud data. For example, the ratio of the two-dimensional feature quantity to the three-dimensional feature quantity is the ratio of the number of feature points belonging to the two-dimensional feature type to the number of feature points belonging to the three-dimensional feature type in the feature point cloud data.
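By way of non-limiting illustration, one hypothetical way to estimate this ratio from the height distribution is sketched below in Python; the grid-cell split rule and the thresholds are assumptions made for this example, not rules fixed by this specification.

import numpy as np

def feature_ratio_by_height(points: np.ndarray,
                            cell_size: float = 1.0,
                            height_spread_threshold: float = 0.3) -> float:
    # Points falling in ground-plane cells with a small height spread are counted as
    # two-dimensional features; all remaining points are counted as three-dimensional features.
    cell_idx = np.floor(points[:, :2] / cell_size).astype(np.int64)
    _, inverse = np.unique(cell_idx, axis=0, return_inverse=True)
    two_d = 0
    for cell in range(inverse.max() + 1):
        z = points[inverse == cell, 2]
        if z.max() - z.min() < height_spread_threshold:  # flat cell -> 2D-like points
            two_d += len(z)
    three_d = len(points) - two_d
    return two_d / max(three_d, 1)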
In some embodiments, the division of the two-dimensional features and the three-dimensional features may be related to a field-of-view point position and a field-of-view range threshold, which are related to the point cloud registration scene type.
The field-of-view point position is the shooting position of the camera, and the field-of-view range threshold is the range that can be captured by the camera with the field-of-view point position as the center.
In some embodiments, the field-of-view range threshold is also affected by the point cloud registration scene type. For example, when the point cloud registration scene type is indoor, the range that the camera can capture is smaller than that outdoors, and thus the field-of-view range threshold is smaller. Correspondingly, in some embodiments, each point cloud registration scene type may be assigned a corresponding field-of-view range threshold.
In some embodiments, the division of the two-dimensional features and the three-dimensional features is also affected by the field-of-view point position and the field-of-view range threshold. For example, when the ordinate (height) of the field-of-view point position is high and the field of view is oriented downward, that is, the camera collects the initial point cloud data while looking down, the collected initial point cloud data is more likely to be divided into two-dimensional features.
In some embodiments, the processor may obtain the field-of-view point position and the field-of-view range threshold in a variety of ways, for example, through a positioning device provided on the camera, or based on manual experience.
In the embodiments of the present specification, the division of the two-dimensional features and the three-dimensional features is adjusted based on the field-of-view point position and the field-of-view range threshold, so that the obtained point cloud quantity threshold and voxel downsampling scale can be more accurate, thereby improving positioning accuracy.
In the embodiments of the present specification, the voxel downsampling scale and the point cloud quantity threshold are determined based on guidance information such as the point cloud registration scene type, the registration accuracy, and the ratio of the two-dimensional feature quantity to the three-dimensional feature quantity, so that the processor can reasonably downsample the initial point cloud data and improve registration accuracy.
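By way of non-limiting illustration, a simple rule-based way to combine such guidance information is sketched below; the scene table and all numeric values are placeholders chosen for the example rather than values given in this specification (the model-based alternative is described next).

def determine_downsampling_hyperparameters(scene_type: str,
                                           required_accuracy_m: float,
                                           ratio_2d_to_3d: float):
    # Base voxel downsampling scale and point cloud quantity threshold per scene type (assumed values).
    base = {
        "harbor": (2.0, 8_000),
        "street": (1.5, 10_000),
        "indoor": (0.5, 15_000),
    }
    voxel_scale, point_threshold = base.get(scene_type, (1.0, 10_000))
    # A stricter accuracy requirement (smaller tolerated error) calls for a finer voxel scale.
    if required_accuracy_m < 0.1:
        voxel_scale *= 0.5
    # More two-dimensional than three-dimensional features: keep more points for registration.
    if ratio_2d_to_3d > 1.0:
        voxel_scale *= 0.75
        point_threshold = int(point_threshold * 1.5)
    return voxel_scale, point_threshold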
In some embodiments, the processor may determine the voxel downsampling scale and the point cloud quantity threshold using a hyperparameter determination model based on the point cloud registration scene type, the registration accuracy, and the ratio of the two-dimensional feature quantity to the three-dimensional feature quantity.
In some embodiments, the hyperparameter determination model may be a machine learning model, e.g., a deep neural network (DNN) model.
In some embodiments, the inputs to the hyperparameter determination model may include the point cloud registration scene type, the registration accuracy, and the ratio of the two-dimensional feature quantity to the three-dimensional feature quantity; the outputs may include the voxel downsampling scale and the point cloud quantity threshold. For more details on the point cloud registration scene type, the registration accuracy, and the ratio of the two-dimensional feature quantity to the three-dimensional feature quantity, see the description above.
In some embodiments, the hyperparameter determination model may be trained on a plurality of labeled training samples. The training samples may include a historical point cloud registration scene type, a historical registration accuracy, and a historical ratio of the two-dimensional feature quantity to the three-dimensional feature quantity in a historical downsampling process; the labels of the training samples may include a historical voxel downsampling scale and a historical point cloud quantity threshold. The labels may be obtained by analyzing the voxel downsampling scale and the point cloud quantity threshold used in the historical downsampling process.
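By way of non-limiting illustration, a minimal sketch of such a hyperparameter determination model is given below using PyTorch; the network size, the one-hot encoding of the scene type, and the mean-squared-error training objective are assumptions made for this example.

import torch
import torch.nn as nn

SCENE_TYPES = ["harbor", "street", "indoor"]  # assumed scene-type vocabulary

class HyperparameterModel(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        # Input: one-hot scene type + registration accuracy + 2D/3D feature ratio.
        self.net = nn.Sequential(
            nn.Linear(len(SCENE_TYPES) + 2, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # outputs: voxel downsampling scale, point cloud quantity threshold
        )

    def forward(self, scene_onehot, accuracy, ratio_2d_3d):
        x = torch.cat([scene_onehot, accuracy, ratio_2d_3d], dim=-1)
        return self.net(x)

def train_step(model, optimizer, batch):
    # One supervised step on historical samples labelled with the historical voxel
    # downsampling scale and the historical point cloud quantity threshold.
    scene, accuracy, ratio, label = batch
    prediction = model(scene, accuracy, ratio)
    loss = nn.functional.mse_loss(prediction, label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()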
In some embodiments of the present specification, the voxel downsampling scale and the point cloud quantity threshold are determined by the hyperparameter determination model based on at least one piece of reference information (e.g., the point cloud registration scene type, the registration accuracy, the ratio of the two-dimensional feature quantity to the three-dimensional feature quantity, etc.), so that the voxel downsampling scale and the point cloud quantity threshold do not need to be determined manually. Determining the voxel downsampling scale and the point cloud quantity threshold in this way helps to determine the first preset quantity condition and the second preset quantity condition in the subsequent downsampling operation, so that the feature point cloud data can be determined more accurately, thereby improving positioning accuracy and efficiency.
One or more embodiments of the present specification further provide a positioning registration optimization apparatus including at least one processor and at least one memory; the at least one memory is configured to store computer instructions; the at least one processor is configured to execute at least some of the computer instructions to implement the positioning registration optimization method described in any of the embodiments above.
The processor may serve as the operation and control core of the positioning registration optimization apparatus and is the final execution unit for information processing and program running, for example, a central processing unit (CPU), a graphics processing unit (GPU), or a field programmable gate array (FPGA). In some embodiments, the processor may perform the positioning registration optimization method illustrated in FIGS. 2-4 above; for further details of the method, reference may be made to the related description above.
One or more embodiments of the present specification further provide a computer-readable storage medium storing computer instructions which, when read by a computer, cause the computer to perform the positioning registration optimization method described in any of the embodiments above.
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly described herein, various modifications, improvements, and adaptations of this specification may occur to those skilled in the art. Such modifications, improvements, and adaptations are suggested by this specification and therefore fall within the spirit and scope of the exemplary embodiments of this specification.
Meanwhile, this specification uses specific words to describe its embodiments. References such as "one embodiment," "an embodiment," and/or "some embodiments" mean that a particular feature, structure, or characteristic is related to at least one embodiment of this specification. Thus, it should be emphasized and appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places in this specification do not necessarily refer to the same embodiment. Furthermore, particular features, structures, or characteristics of one or more embodiments of this specification may be combined as appropriate.
Furthermore, the order in which the elements and sequences are processed, the use of numbers or letters, or the use of other designations in this specification is not intended to limit the order of the processes and methods of this specification unless explicitly recited in the claims. While certain presently useful inventive embodiments have been discussed in the foregoing disclosure by way of various examples, it is to be understood that such details are merely illustrative and that the appended claims are not limited to the disclosed embodiments; on the contrary, the claims are intended to cover all modifications and equivalent arrangements that fall within the spirit and scope of the embodiments of this specification. For example, although the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as installing the described system on an existing server or mobile device.
Likewise, it should be noted that, in order to simplify the presentation of this disclosure and thereby aid understanding of one or more inventive embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are recited in the claims. Indeed, the claimed subject matter may lie in less than all features of a single embodiment disclosed above.
In some embodiments, numbers describing quantities of components and attributes are used; it should be understood that such numbers used in the description of the embodiments are, in some examples, modified by the words "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows for a variation of 20%. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending on the desired properties of the individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and employ a general method of preserving digits. Although the numerical ranges and parameters used to confirm the breadth of ranges in some embodiments of this specification are approximations, in specific embodiments such numerical values are set as precisely as practicable.
Each patent, patent application, patent application publication, and other material, such as articles, books, specifications, publications, and documents, cited in this specification is hereby incorporated by reference in its entirety. Excluded are application history documents that are inconsistent with or conflict with the content of this specification, as well as documents (currently or later attached to this specification) that limit the broadest scope of the claims of this specification. It should be noted that if the description, definition, and/or use of a term in material attached to this specification is inconsistent with or conflicts with what is described in this specification, the description, definition, and/or use of the term in this specification controls.
Finally, it should be understood that the embodiments described in this specification are merely illustrative of the principles of the embodiments of this specification. Other variations may also fall within the scope of this specification. Thus, by way of example and not limitation, alternative configurations of the embodiments of this specification may be regarded as consistent with the teachings of this specification. Accordingly, the embodiments of this specification are not limited to those explicitly described and depicted herein.

Claims (10)

1. A positioning registration optimization method, the method comprising:
Acquiring characteristic point cloud data;
performing at least one round of iterative processing on the characteristic point cloud data, and determining a registration result of the characteristic point cloud data in a target coordinate system based on an iterative result;
wherein a round of the iterative process comprises:
determining conversion characteristic points of each characteristic point of the characteristic point cloud data corresponding to the round of iterative processing in the target coordinate system based on preset conversion parameters;
in the target coordinate system, determining the association distance between the conversion characteristic point and the corresponding adjacent characteristic point;
in response to the association distance meeting a preset distance condition, eliminating the adjacent characteristic points to obtain optimized characteristic point cloud data;
in response to the iteration termination condition not being met, optimizing the preset conversion parameters based on a preset optimization algorithm, and taking the optimized preset conversion parameters and the optimized characteristic point cloud data as the preset conversion parameters and the characteristic point cloud data of the next iteration process respectively;
and determining the registration result based on the conversion characteristic points of each characteristic point determined by the round of iterative processing in the target coordinate system in response to the iteration termination condition being satisfied.
2. The positioning registration optimization method of claim 1, wherein the acquiring the characteristic point cloud data comprises:
acquiring initial point cloud data;
in response to the initial point cloud data not meeting a first preset quantity condition, performing at least one round of downsampling operation on the initial point cloud data to acquire the characteristic point cloud data;
and in response to the initial point cloud data meeting the first preset quantity condition, taking the initial point cloud data as the characteristic point cloud data.
3. The positioning registration optimization method of claim 2, wherein a round of the downsampling operation comprises:
acquiring a voxel downsampling scale corresponding to the round of downsampling operation;
performing voxel downsampling on the initial point cloud data based on the voxel downsampling scale to obtain downsampled point cloud data;
in response to the downsampled point cloud data meeting a second preset quantity condition, taking the downsampled point cloud data as the characteristic point cloud data;
and in response to the downsampled point cloud data not meeting the second preset quantity condition, reducing the voxel downsampling scale, and taking the reduced voxel downsampling scale as the voxel downsampling scale corresponding to the next round of downsampling operation, so as to perform the downsampling operation on the initial point cloud data again.
4. The positioning registration optimization method of claim 1, further comprising:
determining the type of the characteristic point cloud data based on the height distribution of the characteristic points in the characteristic point cloud data;
and determining an iterative processing scale based on the type of the characteristic point cloud data.
5. The positioning registration optimization method of claim 1, wherein the preset distance condition comprises the association distance exceeding a preset association threshold.
6. The positioning registration optimization method of claim 5, wherein a round of the iterative processing further comprises:
and in response to the iteration termination condition not being met, optimizing the preset distance condition, and taking the optimized preset distance condition as a preset distance condition of the next round of iteration processing, wherein the optimizing the preset distance condition comprises reducing the preset association threshold.
7. The positioning registration optimization method of claim 1, wherein the iteration termination condition comprises:
the number of iterations satisfying a preset count condition; and/or
a global association distance error satisfying a preset error condition, wherein the global association distance error is determined based on the association distance error of each characteristic point of the characteristic point cloud data.
8. A positioning registration optimization system, comprising:
the acquisition module is used for acquiring the characteristic point cloud data;
the iteration processing module is used for carrying out at least one round of iteration processing on the characteristic point cloud data and determining a registration result of the characteristic point cloud data in a target coordinate system based on an iteration result;
wherein a round of the iterative process comprises:
determining conversion characteristic points of each characteristic point of the characteristic point cloud data corresponding to the round of iterative processing in the target coordinate system based on preset conversion parameters;
in the target coordinate system, determining the association distance between the conversion characteristic point and the corresponding adjacent characteristic point;
in response to the association distance meeting a preset distance condition, eliminating the adjacent characteristic points to obtain optimized characteristic point cloud data;
in response to the iteration termination condition not being met, optimizing the preset conversion parameters based on a preset optimization algorithm, and taking the optimized preset conversion parameters and the optimized characteristic point cloud data as the preset conversion parameters and the characteristic point cloud data of the next iteration process respectively;
and determining the registration result based on the conversion characteristic points of each characteristic point determined by the round of iterative processing in the target coordinate system in response to the iteration termination condition being satisfied.
9. A positioning registration optimization apparatus, comprising at least one memory and at least one processor, wherein the at least one memory is configured to store computer instructions, and the at least one processor is configured to execute the computer instructions or part of the instructions to implement the positioning registration optimization method of any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions which, when read by a computer, cause the computer to perform the positioning registration optimization method of any one of claims 1-7.