CN117761715A - Positioning fusion method and device, standing three-dimensional scanning equipment and storage medium


Info

Publication number
CN117761715A
CN117761715A (application CN202211126897.3A)
Authority
CN
China
Prior art keywords: pose information, navigation module, laser radar, dimensional, calibration
Prior art date
Legal status: Pending
Application number
CN202211126897.3A
Other languages
Chinese (zh)
Inventor
Cui Yan (崔岩)
Current Assignee
China Germany Zhuhai Artificial Intelligence Institute Co ltd
Guangdong Siwei Kanan Intelligent Equipment Co ltd
4Dage Co Ltd
Original Assignee
China Germany Zhuhai Artificial Intelligence Institute Co ltd
Guangdong Siwei Kanan Intelligent Equipment Co ltd
4Dage Co Ltd
Priority date
Filing date
Publication date
Application filed by China Germany Zhuhai Artificial Intelligence Institute Co ltd, Guangdong Siwei Kanan Intelligent Equipment Co ltd, 4Dage Co Ltd
Priority to CN202211126897.3A
Publication of CN117761715A


Landscapes

  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The application belongs to the technical field of image processing and provides a positioning fusion method, a positioning fusion device, a standing three-dimensional scanning device and a storage medium. The method comprises the following steps: performing a first calibration on the laser radar and the inertial navigation module; performing a second calibration on the laser radar and the visual navigation module; acquiring first pose information corresponding to the laser radar; acquiring second pose information corresponding to the visual navigation module; performing positioning fusion on the first pose information and the second pose information to obtain target pose information; and outputting the target pose information. In this way, an initial pose can be output between different shooting point locations of the standing three-dimensional scanner, the different point locations can be accurately aligned on the basis of that initial pose by ICP registration and similar methods, and complete scene point cloud data can be generated directly, which saves the post-processing time of professionals.

Description

Positioning fusion method and device, standing three-dimensional scanning equipment and storage medium
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a positioning fusion method, a positioning fusion device, standing type three-dimensional scanning equipment and a storage medium.
Background
A traditional standing three-dimensional scanner has no positioning system while it is being moved. After three-dimensional laser data are captured at different point locations, the three-dimensional laser point clouds of the different point locations must be fused together manually to complete the fusion of the three-dimensional laser data for the whole shooting scene; accurate alignment of the different point locations of the standing three-dimensional scanner cannot be achieved automatically, nor can fused complete scene point cloud data be generated directly.
Disclosure of Invention
The embodiment of the application provides a positioning fusion method, a positioning fusion device, a standing three-dimensional scanning device and a storage medium, which can solve the problem in the prior art that different point locations of a standing three-dimensional scanner cannot be accurately aligned automatically and that fused complete scene point cloud data cannot be generated directly.
In a first aspect, an embodiment of the present application provides a positioning fusion method, including:
performing first calibration on the laser radar and the inertial navigation module;
performing second calibration on the laser radar and the visual navigation module;
acquiring first pose information corresponding to the laser radar;
acquiring second pose information corresponding to the visual navigation module;
positioning and fusing the first pose information and the second pose information to obtain target pose information;
and outputting the target pose information.
In a possible implementation manner of the first aspect, performing a first calibration on the laser radar and the inertial navigation module includes:
acquiring laser radar three-dimensional point clouds captured at the same position from different angles;
collecting the inclination angle of the inertial navigation unit in a static state;
and calibrating a first external parameter of the laser radar relative to the inertial navigation module by a least square method.
In a possible implementation manner of the first aspect, performing the second calibration on the laser radar and the visual navigation module includes:
acquiring two-dimensional image data and point cloud data of a calibration plate at different positions and at different angles, wherein the two-dimensional image data are acquired by a visual navigation module, the point cloud data are acquired by a laser radar, the calibration plate is a checkerboard calibration plate with M×N angular points, and the angular points are intersection points of checkerboards on the calibration plate;
calibrating an internal reference of the visual navigation module based on the two-dimensional image data;
acquiring positions of four angular points in the point cloud data to acquire two-dimensional checkerboard angular points, and acquiring three-dimensional checkerboard angular points corresponding to the two-dimensional checkerboard angular points according to a pre-established matching relationship between the two-dimensional checkerboard angular points and the three-dimensional checkerboard angular points;
and optimizing second external parameters of the visual navigation module and the laser radar based on the acquired three-dimensional checkerboard angular points.
In a possible implementation manner of the first aspect, acquiring first pose information corresponding to the laser radar includes:
in the moving process of the standing three-dimensional scanning equipment, registering and aligning characteristic points in laser radar data with characteristic points in three-dimensional data of one point position on the standing three-dimensional scanning equipment in real time;
and calculating first pose information according to the laser radar data after registration and alignment.
In a possible implementation manner of the first aspect, obtaining second pose information corresponding to the visual navigation module includes:
acquiring inertial measurement data to be processed of a visual navigation module;
acquiring vision measurement data to be processed of the vision navigation module;
and processing the inertial measurement data to be processed and the visual measurement data to be processed according to a preset visual inertial fusion algorithm to obtain second pose information of the visual navigation module.
In a possible implementation manner of the first aspect, performing positioning fusion on the first pose information and the second pose information to obtain target pose information includes:
aligning the first pose information and the second pose information;
carrying out positioning fusion solution on the aligned first pose information and second pose information to obtain candidate pose information;
and optimizing the candidate pose information to obtain target pose information.
In a second aspect, an embodiment of the present application provides a positioning fusion device, comprising:
the first calibration unit is used for performing first calibration on the laser radar and the inertial navigation module;
the second calibration unit is used for performing second calibration on the laser radar and the visual navigation module;
the first acquisition unit is used for acquiring first pose information corresponding to the laser radar;
the second acquisition unit is used for acquiring second pose information corresponding to the visual navigation module;
the positioning fusion unit is used for performing positioning fusion on the first pose information and the second pose information to obtain target pose information;
and the output unit is used for outputting the target pose information.
In one possible implementation, the first calibration unit includes:
the first acquisition subunit is used for acquiring laser radar three-dimensional point clouds captured at the same position from different angles;
the acquisition subunit is used for acquiring the inclination angle of the inertial navigation unit in a static state;
the first calibration subunit is used for calibrating the first external parameters of the laser radar relative to the inertial navigation module through a least square method.
In one possible implementation, the second calibration unit includes:
the second acquisition subunit is used for acquiring two-dimensional image data and point cloud data obtained by the calibration plate at different positions and at different angles, wherein the two-dimensional image data are acquired by the visual navigation module, the point cloud data are acquired by the laser radar, the calibration plate is a checkerboard calibration plate with M×N angular points, and the angular points are intersection points of checkerboards on the calibration plate;
the second calibration subunit is used for calibrating internal parameters of the visual navigation module based on the two-dimensional image data;
a third obtaining subunit, configured to obtain positions of four corner points in the point cloud data, so as to obtain two-dimensional checkerboard corner points, and obtain a three-dimensional checkerboard corner point corresponding to the two-dimensional checkerboard corner points according to a matching relationship between the two-dimensional checkerboard corner points and the three-dimensional checkerboard corner points, where the matching relationship is established in advance;
a first optimizing subunit, configured to optimize second external parameters of the visual navigation module and the laser radar based on the obtained three-dimensional checkerboard corner points.
In one possible implementation manner, the first obtaining unit includes:
the registration subunit is used for registering and aligning the characteristic points in the laser radar data with the characteristic points in the three-dimensional data of one point position on the standing type three-dimensional scanning equipment in real time in the moving process of the standing type three-dimensional scanning equipment;
and the calculating subunit is used for calculating the first pose information according to the laser radar data after registration and alignment.
In one possible implementation manner, the second obtaining unit includes:
the fourth acquisition subunit is used for acquiring inertial measurement data to be processed of the visual navigation module;
a fifth acquisition subunit, configured to acquire to-be-processed visual measurement data of the visual navigation module;
and the measurement subunit is used for processing the inertial measurement data to be processed and the visual measurement data to be processed according to a preset visual inertial fusion algorithm to obtain second pose information of the visual navigation module.
In one possible implementation, the positioning fusion unit includes:
and the alignment subunit is used for aligning the first pose information and the second pose information.
And the fusion subunit is used for carrying out positioning fusion solution on the aligned first pose information and second pose information to obtain candidate pose information.
And the second optimizing subunit is used for optimizing the candidate pose information to obtain target pose information.
In a third aspect, an embodiment of the present application provides a standing three-dimensional scanning device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the method according to the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, the storage medium storing a computer program which, when executed by a processor, implements a method as described in the first aspect above.
Compared with the prior art, the embodiment of the application has the beneficial effects that:
In the embodiment of the application, a first calibration is performed on the laser radar and the inertial navigation module; a second calibration is performed on the laser radar and the visual navigation module; first pose information corresponding to the laser radar is acquired; second pose information corresponding to the visual navigation module is acquired; positioning fusion is performed on the first pose information and the second pose information to obtain target pose information; and the target pose information is output. Therefore, in the embodiment of the application, an initial pose can be output between different shooting point locations of the standing three-dimensional scanner, the different point locations can be accurately aligned on the basis of this initial pose through ICP registration and similar methods, and complete scene point cloud data can be generated directly, which saves the later-stage processing time of professionals.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the following briefly introduces the drawings needed in the embodiments or in the description of the prior art. Obviously, the drawings in the following description are only some embodiments of the present application, and a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a positioning fusion method provided in an embodiment of the present application;
FIG. 2 is a block diagram of a locating fusion device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a stand-up three-dimensional scanning device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting the [described condition or event]" or "in response to detecting the [described condition or event]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Fig. 1 shows a flow chart of a positioning fusion method provided in an embodiment of the present application. By way of example and not limitation, the method may be applied to a standing three-dimensional scanning device that includes a laser radar, an inertial navigation module and a visual navigation module. The laser radar module adopts the laser radar of the standing three-dimensional scanning system; the inertial navigation module may be an inertial navigation module built into the laser radar or one built into the three-dimensional scanning system; and the visual navigation module may be one or more lens modules (a common lens, a wide-angle lens, a fisheye lens, etc.). The method includes the following steps:
step S101, performing first calibration on the laser radar and the inertial navigation module.
In a specific application, performing a first calibration on the laser radar and the inertial navigation module includes:
step S201, acquiring laser radar three-dimensional point clouds by shooting laser at different angles at the same position.
Step S202, collecting the inclination angle of the inertial navigation unit in a static state.
Step S203, calibrating the first external parameter of the laser radar relative to the inertial navigation module by the least square method.
It can be understood that there is a rigid transformation between the laser radar and the inertial navigation module. Laser three-dimensional point clouds are captured at the same position from different angles, the inclination angle of the inertial navigation unit in a static state is collected, and the external parameters of the laser radar relative to the inertial navigation module are calibrated by a least square method.
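The published text does not give the numerical details of this least-squares step. As a rough illustration of the idea, the rotation part of the laser-radar-to-inertial-navigation external parameter can be estimated by aligning a direction observed by both sensors (for example the ground-plane normal fitted from the point cloud against the gravity direction implied by the static inclination angle) over several placements, solving the alignment in the least-squares sense. The sketch below is an assumption-laden stand-in, not the patent's exact procedure, and all names in it are illustrative.

```python
# Minimal sketch (not the patent's exact algorithm): estimate the rotation part of the
# lidar-to-IMU extrinsic by aligning direction observations made by both sensors over
# several static placements, solved in the least-squares sense (Wahba's problem via SVD).
import numpy as np
from scipy.spatial.transform import Rotation

def calibrate_lidar_imu_rotation(lidar_dirs, imu_dirs):
    """lidar_dirs, imu_dirs: (N, 3) unit vectors of the same physical direction
    (e.g. ground-plane normal / gravity) expressed in each sensor's frame."""
    A = np.asarray(imu_dirs).T @ np.asarray(lidar_dirs)          # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(A)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])      # enforce a proper rotation
    return U @ D @ Vt                                            # R such that imu ~= R @ lidar

# Hypothetical usage: synthetic directions related by a known rotation plus noise.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    R_true = Rotation.random(random_state=0).as_matrix()
    lidar = rng.normal(size=(10, 3)); lidar /= np.linalg.norm(lidar, axis=1, keepdims=True)
    imu = lidar @ R_true.T + 0.01 * rng.normal(size=(10, 3))
    print(calibrate_lidar_imu_rotation(lidar, imu).round(3))     # close to R_true
```

The translation part of the external parameter, which this sketch omits, would need additional constraints (for example known lever arms or motion-based calibration).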
And step S102, performing second calibration on the laser radar and the visual navigation module.
In a specific application, performing a second calibration on the laser radar and the visual navigation module includes:
step S301, two-dimensional image data and point cloud data of a calibration plate are obtained at different positions and at different angles, wherein the two-dimensional image data are collected by a visual navigation module, the point cloud data are collected by a laser radar, the calibration plate is a checkerboard calibration plate with M multiplied by N angular points, and the angular points are intersection points of checkerboards on the calibration plate.
And step S302, calibrating the internal parameters of the visual navigation module based on the two-dimensional image data.
Illustratively, position data of each corner point in the two-dimensional image data are extracted; corner pixel coordinate matching is performed between the acquired corner position data and the standard calibration plate image; and the rotation parameters of the two-dimensional image data relative to the standard calibration plate image are calculated through an SFM algorithm to obtain the internal parameters of the visual navigation module.
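The patent describes the intrinsic calibration only at this level of detail. For orientation, a common practical equivalent detects the checkerboard corners and calibrates the intrinsics with OpenCV's planar calibration; this is an illustrative substitute for the SFM-based computation mentioned above, and the board size and image folder below are assumptions.

```python
# Illustrative sketch only: checkerboard intrinsic calibration with OpenCV.
# Board size and file names are assumptions, not values from the patent.
import glob
import cv2
import numpy as np

PATTERN = (11, 8)                      # inner corners per row/column of an M x N board (assumed)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)   # planar board, unit squares

obj_points, img_points, size = [], [], None
for path in glob.glob("calib_images/*.png"):                   # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        continue
    corners = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1),
                               (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    size = gray.shape[::-1]

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("reprojection RMS:", rms, "\nintrinsics K:\n", K)
```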
Step S303, the positions of four angular points in the point cloud data are acquired to acquire two-dimensional checkerboard angular points, and the three-dimensional checkerboard angular points corresponding to the two-dimensional checkerboard angular points are acquired according to the pre-established matching relationship between the two-dimensional checkerboard angular points and the three-dimensional checkerboard angular points.
Illustratively, the specific steps of obtaining two-dimensional tessellation corner points include:
selecting the four outer corner points (upper left, lower left, lower right and upper right) in the point cloud data in counterclockwise order;
and, according to the plane fitted to these four corner points, uniformly sampling the projection positions of the M×N corner points on that plane to obtain the two-dimensional checkerboard corner points.
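As a concrete illustration of this corner-sampling step (the interpolation scheme is an assumption; the patent does not spell it out), the M×N corner positions can be interpolated bilinearly over the quadrilateral spanned by the four outer corners after they have been projected onto the fitted board plane:

```python
# Sketch under assumptions: given the four outer checkerboard corners picked from the point
# cloud (counterclockwise: upper-left, lower-left, lower-right, upper-right) and projected
# onto the fitted board plane, interpolate an M x N grid of corner positions over the quad.
import numpy as np

def grid_corners_from_quad(ul, ll, lr, ur, m, n):
    """Return an (m*n, 3) array of interpolated corner positions (bilinear over the quad)."""
    ul, ll, lr, ur = map(np.asarray, (ul, ll, lr, ur))
    pts = []
    for i in range(m):                   # along one board edge
        s = i / (m - 1)
        for j in range(n):               # along the other edge
            t = j / (n - 1)
            top = (1 - s) * ul + s * ur
            bottom = (1 - s) * ll + s * lr
            pts.append((1 - t) * top + t * bottom)
    return np.array(pts)

# Hypothetical usage with a slightly tilted board
corners3d = grid_corners_from_quad([0, 1, 0], [0, 0, 0], [1, 0, 0.1], [1, 1, 0.1], m=11, n=8)
print(corners3d.shape)                   # (88, 3)
```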
Step S304, optimizing second external parameters of the visual navigation module and the laser radar based on the acquired three-dimensional checkerboard angular points.
Illustratively, the specific steps of optimizing the external parameters of the visual navigation module and the laser radar include:
the three-dimensional point P_lidar generated by the laser radar is rotated by the external parameters to obtain a three-dimensional point in the coordinate system of the visual navigation module, denoted P_camera;
projecting P_camera into the image coordinate system through the internal parameters of the visual navigation module to obtain P_project, wherein the visual navigation module coordinate system is a three-dimensional rectangular coordinate system whose origin is located at the optical center of the lens, whose X axis and Y axis are respectively parallel to two sides of the image plane, and whose Z axis is the optical axis of the lens, perpendicular to the image plane; the image coordinate system takes the center of the image as its origin, with its X axis and Y axis parallel to the two sides of the image;
optimizing external parameters of the visual navigation module and the laser radar through a cost function;
wherein, the cost function is:
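The published text does not reproduce the formula at this point. A standard reprojection cost consistent with the surrounding description, stated here as an assumption rather than the patent's exact expression, is

$$E(R, t) = \sum_{i} \left\| \pi\!\left( K \left( R\, P_{\mathrm{lidar},i} + t \right) \right) - p_{i} \right\|^{2},$$

where $R$ and $t$ are the second external parameters (rotation and translation from the laser radar to the visual navigation module), $K$ is the calibrated internal parameter matrix, $\pi(\cdot)$ is the perspective projection that yields P_project, $P_{\mathrm{lidar},i}$ is the $i$-th three-dimensional checkerboard corner from the laser radar, and $p_{i}$ is the corresponding corner detected in the two-dimensional image.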
step S103, first pose information corresponding to the laser radar is acquired.
In a specific application, acquiring first pose information corresponding to a laser radar includes:
and S401, registering and aligning the characteristic points in the laser radar data with the characteristic points in the three-dimensional data of one point position on the standing three-dimensional scanning equipment in real time in the moving process of the standing three-dimensional scanning equipment.
The number of laser radars is one or more, and the sampling frequency of the laser radar is 1-20 Hz; the feature points include corner points, plane geometric feature points, and the like. It can be understood that, for each frame of radar data, corner points, plane geometric feature points and the like are extracted from the laser radar data by a feature extraction method.
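The patent does not name a specific feature-extraction method. One common choice (used, for example, in LOAM-style laser odometry pipelines) ranks the points of each scan line by local curvature, taking high-curvature points as corner features and low-curvature points as planar features. The sketch below illustrates that idea only; the thresholds and neighborhood size are arbitrary.

```python
# Assumption-laden sketch of curvature-based feature extraction from one lidar scan line:
# high-curvature points become corner candidates, low-curvature points planar candidates.
import numpy as np

def extract_scan_features(scan_xyz, k=5, edge_thresh=0.005, plane_thresh=1e-4):
    """scan_xyz: (N, 3) points of one scan line, in acquisition order."""
    n = len(scan_xyz)
    curvature = np.zeros(n)              # boundary points keep curvature 0 (planar set)
    for i in range(k, n - k):
        diff = scan_xyz[i - k:i + k + 1].sum(axis=0) - (2 * k + 1) * scan_xyz[i]
        curvature[i] = np.dot(diff, diff) / max(np.dot(scan_xyz[i], scan_xyz[i]), 1e-9)
    corner_idx = np.where(curvature > edge_thresh)[0]
    planar_idx = np.where(curvature < plane_thresh)[0]
    return corner_idx, planar_idx

# Hypothetical usage on a synthetic scan line that bends at index 100
scan = np.c_[np.linspace(1, 5, 200), np.zeros(200), np.zeros(200)]
scan[100:, 1] = np.linspace(0, 2, 100)
corners, planes = extract_scan_features(scan)
print("corner candidates near the bend:", corners)
print("number of planar candidates:", len(planes))
```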
Illustratively, the motion estimation of a visual-inertial odometer (VIO, visual-inertial odometry) is used to denoise and scan the feature points in the laser radar data, so that they can be registered and aligned with the feature points in the three-dimensional data of one point location on the standing three-dimensional scanning device. A loop-closing module performs visual loop detection and initial loop constraint estimation. The global pose graph constraining all laser radar poses is then incrementally optimized through fine registration based on sparse point cloud ICP alignment, yielding a globally corrected trajectory and real-time laser radar pose corrections, and the corrected laser radar pose is used as the first pose information.
Step S402, first pose information is calculated according to the laser radar data after registration and alignment.
It can be understood that the characteristic points in the laser radar data and the characteristic points in the three-dimensional data of one point position on the standing type three-dimensional scanning equipment are registered and aligned in real time, and the real-time pose is calculated to be the first pose information.
Step S104, second pose information corresponding to the visual navigation module is obtained.
In a specific application, obtaining second pose information corresponding to the visual navigation module includes:
step S501, acquiring inertial measurement data to be processed of the visual navigation module.
The inertial measurement data to be processed are IMU data.
Step S502, visual measurement data to be processed of the visual navigation module is obtained.
The visual measurement data to be processed are image data.
Step S503, according to a preset visual inertia fusion algorithm, the inertial measurement data to be processed and the visual measurement data to be processed are processed, and the second pose information of the visual navigation module is obtained.
According to a preset visual inertial fusion algorithm, processing the inertial measurement data to be processed and the visual measurement data to be processed to obtain second pose information of the visual navigation module, wherein the method comprises the following steps:
and step S601, integrating the inertial measurement data to be processed corresponding to adjacent visual key frames in the visual measurement data to be processed by adopting a pre-integration algorithm of SO3 manifold, SO as to obtain a first pose relation between the adjacent visual key frames.
It will be appreciated that the measurement rate of the inertial measurement data to be processed is higher than that of the visual measurement data to be processed; in order to optimize the constraints from the visual measurement data and the inertial measurement data simultaneously in a single frame, the many inertial measurements taken between two adjacent visual key frames must be integrated into a single constraint.
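A minimal sketch of this pre-integration idea is shown below: gyroscope samples between two adjacent visual key frames are composed on SO(3) through the exponential map, and accelerometer samples are accumulated into velocity and position increments. Bias estimation, noise propagation and gravity handling, which a full implementation needs, are deliberately omitted, so this is an illustration rather than the patent's algorithm.

```python
# Minimal sketch of IMU pre-integration between two adjacent visual keyframes: gyro samples
# are composed on SO(3) via the exponential map and accelerometer samples are accumulated
# into velocity/position increments. Bias and gravity handling are omitted (assumptions).
import numpy as np
from scipy.spatial.transform import Rotation

def preintegrate(gyro, accel, dt):
    """gyro, accel: (N, 3) samples between the two keyframes; dt: sample period [s]."""
    dR = Rotation.identity()                      # relative rotation increment
    dv = np.zeros(3)                              # relative velocity increment
    dp = np.zeros(3)                              # relative position increment
    for w, a in zip(gyro, accel):
        acc_start = dR.apply(a)                   # acceleration rotated into the start frame
        dp += dv * dt + 0.5 * acc_start * dt ** 2
        dv += acc_start * dt
        dR = dR * Rotation.from_rotvec(w * dt)    # exponential map of the gyro increment
    return dR, dv, dp

# Hypothetical usage: 200 Hz IMU stream between two keyframes 0.1 s apart
gyro = np.tile([0.0, 0.0, 0.5], (20, 1))          # constant yaw rate
accel = np.tile([0.1, 0.0, 0.0], (20, 1))
dR, dv, dp = preintegrate(gyro, accel, dt=1 / 200)
print(dR.as_rotvec(), dv, dp)
```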
Step S602, extracting feature points of the visual key frames in the visual measurement data to be processed according to the SPHORB algorithm, matching the feature points according to the AKAZE-FREAK algorithm, calculating a transformation matrix T based on the matched feature points according to a reprojection-minimization principle, and estimating the second pose relationship of each visual key frame.
It will be appreciated that the SPHORB algorithm is derived from a geodesic grid and can be regarded as the equal-area hexagonal grid parameterization of the sphere used in climate modeling. Topology shows that any surface can be approximated by triangulation; thus a sphere can also be approximated by triangles, which can be combined into a hexagonal mesh (possibly containing a small number of pentagons). The idea of the SPHORB algorithm is to approximate the spherical image on such a hexagonal spherical grid (similar to a football) and then to build fine-grained, robust features directly on that grid, avoiding time-consuming spherical harmonic computation and the associated bandwidth limitation, thereby achieving very fast performance and high description quality.
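For orientation only, the fragment below sketches the keyframe matching step with building blocks available in OpenCV: AKAZE keypoints described with FREAK binary descriptors and matched by Hamming distance. SPHORB itself has no OpenCV implementation, so this planar-image stand-in is an approximation of the spherical pipeline described above; the file names are hypothetical and FREAK requires the opencv-contrib-python package.

```python
# Illustrative keyframe feature-matching sketch: AKAZE keypoints described with FREAK and
# matched with Hamming-distance brute force. This is a planar-image stand-in, not the
# patent's exact spherical (SPHORB-based) pipeline; file names are hypothetical.
import cv2

img1 = cv2.imread("keyframe_a.png", cv2.IMREAD_GRAYSCALE)    # hypothetical keyframes
img2 = cv2.imread("keyframe_b.png", cv2.IMREAD_GRAYSCALE)

akaze = cv2.AKAZE_create()
freak = cv2.xfeatures2d.FREAK_create()                       # needs opencv-contrib-python

kp1 = akaze.detect(img1, None)
kp2 = akaze.detect(img2, None)
kp1, des1 = freak.compute(img1, kp1)                         # binary FREAK descriptors
kp2, des2 = freak.compute(img2, kp2)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "cross-checked matches; best distance:", matches[0].distance)
```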
And step S603, based on the sliding window, performing motion recovery on the first pose relation between the adjacent visual key frames and the second pose relation of each visual key frame to obtain the second pose information of the visual navigation module.
The second pose information of the visual navigation module comprises initial frame poses in the visual measurement data and the second pose information of the visual navigation module corresponding to each visual key frame.
In a specific application, two visual key frames with sufficient feature differences within the sliding window are selected according to the first pose relationship between adjacent visual key frames and the second pose relationship of each visual key frame. Next, the essential matrix E is recovered using the eight-point method of epipolar geometry. With the scale of the translation fixed, the recovered essential matrix E is used to retrieve the motion pose and to triangulate 3D map points. After a batch of 3D points has been initialized, the pose information of the remaining visual key frames in the sliding window is solved by the perspective-n-point (PnP) method. A global full bundle adjustment is then used to minimize the total reprojection error of all features observed in all frames. At this point, the pose of the initial frame in the visual measurement data and the second pose information of the visual navigation module corresponding to each visual key frame are obtained.
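A hedged sketch of this initialization with OpenCV primitives is given below: the essential matrix is estimated from matched corner points of two key frames, the relative pose is recovered, 3D map points are triangulated, and the pose of a further key frame is solved by PnP. Note that cv2.findEssentialMat uses a five-point RANSAC solver rather than the eight-point method named in the text, and the global bundle adjustment step is not shown, so this is an approximation of the described procedure.

```python
# Approximate sketch of the sliding-window initialization: essential matrix from matched
# points of two keyframes, pose recovery, triangulation of map points, then PnP for a
# further keyframe. Bundle adjustment is omitted; inputs are float (N, 2)/(N, 3) arrays.
import cv2
import numpy as np

def initialize_two_view(pts1, pts2, K):
    """pts1, pts2: (N, 2) matched pixel coordinates; K: 3x3 intrinsic matrix."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)            # pose of frame 2 w.r.t. frame 1
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)       # homogeneous 4xN map points
    X = (X_h[:3] / X_h[3]).T
    return R, t, X

def pose_from_pnp(X, pts, K):
    """Pose of an additional keyframe from 3D map points X and their 2D observations pts."""
    ok, rvec, tvec = cv2.solvePnP(X.astype(np.float64), pts.astype(np.float64), K, None)
    return cv2.Rodrigues(rvec)[0], tvec
```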
In one possible implementation manner, after the motion recovery is performed on the first pose relationship between adjacent visual key frames and the second pose relationship of each visual key frame to obtain the second pose information of the visual navigation module, the method further comprises the following steps:
and performing closed-loop detection on each visual key frame based on a DBow3 algorithm to obtain a closed-loop frame.
It will be appreciated that closed-loop detection mainly solves the problem of position drift over time by identifying previously visited positions and correcting for the drift.
And secondly, optimizing pose information according to the closed-loop frame.
It can be understood that the relative position and attitude error is obtained by comparing the estimated pose of the closed-loop frame with the actual detected pose of the closed-loop frame, and the pose information is optimized according to this error.
In one possible implementation, after optimizing the pose information according to the closed-loop frame, the method further includes:
and calibrating the second pose information of the visual navigation module.
In a specific application, calibrating the second pose information of the visual navigation module includes:
and step one, matching the feature points between the visual key frames to obtain feature points with matching relations.
And secondly, tracking and calibrating the feature points with the matching relation according to an SFM algorithm.
And thirdly, calibrating the second pose information of the visual navigation module according to the characteristic points with the matching relation after tracking and calibration.
It can be appreciated that the embodiment of the application may also calibrate the second pose information of the visual navigation module using the SFM algorithm.
Step S105, carrying out positioning fusion on the first pose information and the second pose information to obtain target pose information.
In a specific application, performing positioning fusion on the first pose information and the second pose information to obtain target pose information, including:
step S701, aligning the first pose information and the second pose information.
And step S702, carrying out positioning fusion solution on the aligned first pose information and second pose information to obtain candidate pose information.
Step S703, optimizing the candidate pose information to obtain the target pose information.
It can be appreciated that the laser radar data and the inertial navigation data are time-synchronized at the millisecond level, and the sampling of the visual navigation module is likewise synchronized with the inertial navigation data at the millisecond level. After the first pose information and the second pose information are acquired in real time, they are aligned in time and then fused. The positioning fusion solves for a pose such that the weighted sum of the visual residual, the inertial navigation residual and the laser radar point-cloud residual is minimized. After positioning fusion, the optimized candidate pose information is transmitted to the laser radar and the visual navigation module, the difference between the optimized pose and the original pose is calculated, and the candidate pose information is further optimized by pose propagation to obtain the target pose information.
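The patent does not give the residual definitions or the weights. Purely to illustrate the "minimize a weighted sum of residuals" idea, the toy sketch below fuses three per-sensor pose estimates by nonlinear least squares; the residual forms, the 6-vector pose parameterization and the weights are all placeholders.

```python
# Highly simplified sketch of the fusion solve: choose the pose that minimizes a weighted
# stack of visual, inertial-navigation and lidar point-cloud residuals. The residual forms
# and weights below are placeholders; the patent does not specify them.
import numpy as np
from scipy.optimize import least_squares

def fuse_pose(pose0, pose_visual, pose_lidar, pose_inertial,
              w_visual=1.0, w_lidar=2.0, w_inertial=0.5):
    """Poses are 6-vectors [tx, ty, tz, rx, ry, rz] (rotation as a rotation vector)."""
    def residuals(x):
        return np.concatenate([
            w_visual * (x - pose_visual),      # visual residual (placeholder)
            w_lidar * (x - pose_lidar),        # lidar point-cloud residual (placeholder)
            w_inertial * (x - pose_inertial),  # inertial-navigation residual (placeholder)
        ])
    return least_squares(residuals, pose0).x

# Hypothetical usage: the fused pose lands between the per-sensor estimates,
# pulled most strongly toward the most heavily weighted one.
fused = fuse_pose(np.zeros(6),
                  pose_visual=np.array([1.00, 0.0, 0.0, 0, 0, 0.01]),
                  pose_lidar=np.array([1.02, 0.0, 0.0, 0, 0, 0.00]),
                  pose_inertial=np.array([0.95, 0.0, 0.0, 0, 0, 0.02]))
print(fused.round(3))
```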
And step S106, outputting target pose information.
Illustratively, the target pose information output by the internal positioning fusion system is read after the standing three-dimensional scanner is moved from one point location to the next. After shooting at the second point location is completed, the pose from the auxiliary positioning system is used as the initial pose for ICP, and the pose of the second point location is finally obtained through optimization, thereby achieving point cloud registration. In this way, after shooting a scene, the standing three-dimensional scanner can directly output three-dimensional point cloud data of the whole scene without any post-processing operation.
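A sketch of this registration step using the Open3D API is shown below: the fused pose is used as the initial transform for ICP between the point clouds of two station positions, and the refined transformation is read from the registration result. The file names and the initial translation are illustrative, and point-to-point ICP is used here for simplicity.

```python
# Sketch of the point-location registration step: the fused pose output by the positioning
# system seeds ICP between the point clouds of two station positions (Open3D API; the file
# names and initial translation are illustrative, not values from the patent).
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("station_2.pcd")   # newly captured point location
target = o3d.io.read_point_cloud("station_1.pcd")   # previously captured point location

T_init = np.eye(4)                                   # initial pose from the positioning fusion
T_init[:3, 3] = [1.5, 0.2, 0.0]                      # illustrative translation between stations

result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.1, init=T_init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
print("fitness:", result.fitness)
print("refined pose:\n", result.transformation)
```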
In the embodiment of the application, a first calibration is performed on the laser radar and the inertial navigation module; a second calibration is performed on the laser radar and the visual navigation module; first pose information corresponding to the laser radar is acquired; second pose information corresponding to the visual navigation module is acquired; positioning fusion is performed on the first pose information and the second pose information to obtain target pose information; and the target pose information is output. Therefore, in the embodiment of the application, an initial pose can be output between different shooting point locations of the standing three-dimensional scanner, the different point locations can be accurately aligned on the basis of this initial pose through ICP registration and similar methods, and complete scene point cloud data can be generated directly, which saves the later-stage processing time of professionals.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the positioning fusion method described in the above embodiments, fig. 2 shows a block diagram of the positioning fusion device provided in the embodiment of the present application, and for convenience of explanation, only the portions related to the embodiment of the present application are shown.
Referring to fig. 2, the apparatus is applied to a standing three-dimensional scanning device including a laser radar, an inertial navigation module, and a visual navigation module, and includes:
a first calibration unit 21, configured to perform a first calibration on the laser radar and the inertial navigation module;
a second calibration unit 22, configured to perform a second calibration on the laser radar and the visual navigation module;
a first obtaining unit 23, configured to obtain first pose information corresponding to the lidar;
a second obtaining unit 24, configured to obtain second pose information corresponding to the visual navigation module;
a positioning fusion unit 25, configured to perform positioning fusion on the first pose information and the second pose information, so as to obtain target pose information;
an output unit 26 for outputting the target pose information.
In one possible implementation, the first calibration unit includes:
the first acquisition subunit is used for acquiring laser radar three-dimensional point clouds captured at the same position from different angles;
the acquisition subunit is used for acquiring the inclination angle of the inertial navigation unit in a static state;
the first calibration subunit is used for calibrating the first external parameters of the laser radar relative to the inertial navigation module through a least square method.
In one possible implementation, the second calibration unit includes:
the second acquisition subunit is used for acquiring two-dimensional image data and point cloud data obtained by the calibration plate at different positions and at different angles, wherein the two-dimensional image data are acquired by the visual navigation module, the point cloud data are acquired by the laser radar, the calibration plate is a checkerboard calibration plate with M×N angular points, and the angular points are intersection points of checkerboards on the calibration plate;
the second calibration subunit is used for calibrating internal parameters of the visual navigation module based on the two-dimensional image data;
a third obtaining subunit, configured to obtain positions of four corner points in the point cloud data, so as to obtain two-dimensional checkerboard corner points, and obtain a three-dimensional checkerboard corner point corresponding to the two-dimensional checkerboard corner points according to a matching relationship between the two-dimensional checkerboard corner points and the three-dimensional checkerboard corner points, where the matching relationship is established in advance;
a first optimizing subunit, configured to optimize second external parameters of the visual navigation module and the laser radar based on the obtained three-dimensional checkerboard corner points.
In one possible implementation manner, the first obtaining unit includes:
the registration subunit is used for registering and aligning the characteristic points in the laser radar data with the characteristic points in the three-dimensional data of one point position on the standing type three-dimensional scanning equipment in real time in the moving process of the standing type three-dimensional scanning equipment;
and the calculating subunit is used for calculating the first pose information according to the laser radar data after registration and alignment.
In one possible implementation manner, the second obtaining unit includes:
the fourth acquisition subunit is used for acquiring inertial measurement data to be processed of the visual navigation module;
a fifth acquisition subunit, configured to acquire to-be-processed visual measurement data of the visual navigation module;
and the measurement subunit is used for processing the inertial measurement data to be processed and the visual measurement data to be processed according to a preset visual inertial fusion algorithm to obtain second pose information of the visual navigation module.
In one possible implementation, the positioning fusion unit includes:
and the alignment subunit is used for aligning the first pose information and the second pose information.
And the fusion subunit is used for carrying out positioning fusion solution on the aligned first pose information and second pose information to obtain candidate pose information.
And the second optimizing subunit is used for optimizing the candidate pose information to obtain target pose information.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein again.
Fig. 3 is a schematic structural diagram of a standing three-dimensional scanning device according to an embodiment of the present application. As shown in fig. 3, the standing three-dimensional scanning device 3 of this embodiment includes: at least one processor 30, a memory 31, and a computer program 32 stored in the memory 31 and executable on the at least one processor 30, wherein the processor 30 implements the steps of any of the various method embodiments described above when executing the computer program 32.
The standing three-dimensional scanning device may include, but is not limited to, the processor 30 and the memory 31. It will be appreciated by those skilled in the art that fig. 3 is merely an example of the standing three-dimensional scanning device 3 and does not limit it; the device may include more or fewer components than shown, combine certain components, or include different components, such as input-output devices, network access devices, etc.
The processor 30 may be a central processing unit (Central Processing Unit, CPU), the processor 30 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 31 may, in some embodiments, be an internal storage unit of the standing three-dimensional scanning device 3, such as a hard disk or a memory of the standing three-dimensional scanning device 3. In other embodiments the memory 31 may also be an external storage device of the standing three-dimensional scanning device 3, for example a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the standing three-dimensional scanning device 3. Further, the memory 31 may include both an internal storage unit and an external storage device of the standing three-dimensional scanning device 3. The memory 31 is used for storing an operating system, application programs, a boot loader (BootLoader), data and other programs, such as the program code of the computer program. The memory 31 may also be used for temporarily storing data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The present application also provides a storage medium storing a computer program which, when executed by a processor, implements the steps of the various method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the present application implements all or part of the flow of the methods of the above embodiments by instructing the related hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to the standing three-dimensional scanning device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer readable media may not include electrical carrier signals and telecommunications signals.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. A positioning fusion method, applied to a standing three-dimensional scanning device comprising a laser radar, an inertial navigation module and a visual navigation module, characterized in that the method comprises the following steps:
performing first calibration on the laser radar and the inertial navigation module;
performing second calibration on the laser radar and the visual navigation module;
acquiring first pose information corresponding to the laser radar;
acquiring second pose information corresponding to the visual navigation module;
positioning and fusing the first pose information and the second pose information to obtain target pose information;
and outputting the target pose information.
2. The positioning fusion method of claim 1, wherein performing a first calibration of the lidar and the inertial navigation module comprises:
acquiring laser radar three-dimensional point clouds captured at the same position from different angles;
collecting the inclination angle of the inertial navigation unit in a static state;
and calibrating a first external parameter of the laser radar relative to the inertial navigation module by a least square method.
3. The positioning fusion method of claim 1, wherein performing a second calibration on the lidar and the visual navigation module comprises:
acquiring two-dimensional image data and point cloud data of a calibration plate at different positions and at different angles, wherein the two-dimensional image data are acquired by a visual navigation module, the point cloud data are acquired by a laser radar, the calibration plate is a checkerboard calibration plate with M×N angular points, and the angular points are intersection points of checkerboards on the calibration plate;
calibrating an internal reference of the visual navigation module based on the two-dimensional image data;
acquiring positions of four angular points in the point cloud data to acquire two-dimensional checkerboard angular points, and acquiring three-dimensional checkerboard angular points corresponding to the two-dimensional checkerboard angular points according to a pre-established matching relationship between the two-dimensional checkerboard angular points and the three-dimensional checkerboard angular points;
and optimizing second external parameters of the visual navigation module and the laser radar based on the acquired three-dimensional checkerboard angular points.
4. The positioning fusion method of claim 1, wherein obtaining the first pose information corresponding to the lidar comprises:
in the moving process of the standing three-dimensional scanning equipment, registering and aligning characteristic points in laser radar data with characteristic points in three-dimensional data of one point position on the standing three-dimensional scanning equipment in real time;
and calculating first pose information according to the laser radar data after registration and alignment.
5. The positioning fusion method of claim 1, wherein obtaining the second pose information corresponding to the visual navigation module comprises:
acquiring inertial measurement data to be processed of a visual navigation module;
acquiring vision measurement data to be processed of the vision navigation module;
and processing the inertial measurement data to be processed and the visual measurement data to be processed according to a preset visual inertial fusion algorithm to obtain second pose information of the visual navigation module.
6. The positioning fusion method of claim 1, wherein performing positioning fusion on the first pose information and the second pose information to obtain target pose information comprises:
aligning the first pose information and the second pose information;
carrying out positioning fusion solution on the aligned first pose information and second pose information to obtain candidate pose information;
and optimizing the candidate pose information to obtain target pose information.
7. A positioning fusion device, applied to a standing three-dimensional scanning device comprising a laser radar, an inertial navigation module and a visual navigation module, characterized in that the device comprises:
the first calibration unit is used for performing first calibration on the laser radar and the inertial navigation module;
the second calibration unit is used for performing second calibration on the laser radar and the visual navigation module;
the first acquisition unit is used for acquiring first pose information corresponding to the laser radar;
the second acquisition unit is used for acquiring second pose information corresponding to the visual navigation module;
the positioning fusion unit is used for performing positioning fusion on the first pose information and the second pose information to obtain target pose information;
and the output unit is used for outputting the target pose information.
8. The positioning fusion device of claim 7, wherein the first calibration unit comprises:
the first acquisition subunit is used for acquiring laser radar three-dimensional point clouds captured at the same position from different angles;
the acquisition subunit is used for acquiring the inclination angle of the inertial navigation unit in a static state;
and the calibration subunit is used for calibrating the first external parameter of the laser radar relative to the inertial navigation module through a least square method.
9. A standing three-dimensional scanning device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any of claims 1 to 6 when executing the computer program.
10. A storage medium storing a computer program which, when executed by a processor, implements the method of any one of claims 1 to 6.
Application CN202211126897.3A, priority date 2022-09-16, filing date 2022-09-16: Positioning fusion method and device, standing three-dimensional scanning equipment and storage medium (status: Pending; published as CN117761715A)

Priority Applications (1)

CN202211126897.3A: Positioning fusion method and device, standing three-dimensional scanning equipment and storage medium (published as CN117761715A)


Publications (1)

CN117761715A, published 2024-03-26

Family ID: 90313164



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination