WO2021212844A1 - Point cloud splicing method, apparatus, device, and storage medium
- Publication number
- WO2021212844A1 (PCT/CN2020/133375)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- point cloud
- cloud data
- optical center
- calibration
- coordinate system
- Prior art date
Classifications
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/002—Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B21/042—Calibration or calibration artifacts
- G01C15/00—Surveying instruments or accessories not provided for in groups G01C1/00 - G01C13/00
- G01C15/002—Active optical surveying means
- G01C25/00—Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Description
- The embodiments of the present invention relate to trajectory calibration technology, and in particular to a point cloud splicing method, apparatus, device, and storage medium.
- An actual building measurement robot, based on a high-precision visual sensing system, performs three-dimensional reconstruction of indoor scenes during the construction phase of a building and processes the three-dimensional point cloud data with measurement algorithms to obtain the various indicators to be tested. Since the building data collected by the measuring robot is almost fully sampled, the measurement results can be quantitatively evaluated by an evaluation algorithm over the building point cloud data.
- The embodiments of the present invention provide a point cloud splicing method, device, equipment, and storage medium to achieve high-precision point cloud splicing.
- An embodiment of the present invention provides a point cloud splicing method, which includes: collecting at least one frame of point cloud data for calibration along a preset scanning trajectory; determining the converted optical center coordinates of each frame of calibration point cloud data in the reference coordinate system of the reference point cloud data; calibrating the mechanism parameters of the acquisition device according to the optical center fitting trajectory of the converted optical center coordinates; determining, according to the mechanism parameters, a target transformation matrix for transforming the original point cloud data collected in any scanning pose to the reference coordinate system; and splicing the original point cloud data according to the target transformation matrix to obtain spliced target point cloud data.
- The collecting of at least one frame of point cloud data for calibration along a preset scanning trajectory includes:
- Based on the preset scanning trajectory, the acquisition device performs point cloud scanning on the area to be reconstructed and collects at least one frame of point cloud data for calibration, wherein mutually different characteristic targets are arranged in the area to be reconstructed.
- Because the preset scanning trajectory is planned in advance and the point cloud scan of the area to be reconstructed follows it, scanning time is saved and scanning efficiency is improved.
- Each of the characteristic targets carries a unique coded identifier, so that the collected point cloud data for calibration can be associated with each characteristic target through that identifier.
- The characteristic targets are placed in the scan overlap area of the area to be reconstructed, where the scan overlap area is determined according to the preset scanning trajectory and the field of view of the acquisition device.
- The determining of the converted optical center coordinates, in the reference coordinate system of the reference point cloud data, of the pre-conversion optical center coordinates of each frame of calibration point cloud data includes:
- determining the respective transformation matrices for converting the other point cloud data to the reference coordinate system of the reference point cloud data, wherein the other point cloud data are the frames of calibration point cloud data other than the reference point cloud data; and determining the converted optical center coordinates of the other point cloud data according to their pre-conversion optical center coordinates and the corresponding transformation matrices.
- In this way, the converted optical center coordinates of any other point cloud data in the reference coordinate system of the reference point cloud data can be obtained accurately, so that the mechanism parameters of the acquisition device can subsequently be calibrated based on these accurate optical center coordinates.
- The determining of the respective transformation matrices includes: performing feature extraction on the reference point cloud data and any of the other point cloud data to determine matching feature pairs for splicing; determining, based on the matching feature pairs, the conversion parameters for converting the other point cloud data to the reference coordinate system; and determining the respective transformation matrices from those conversion parameters.
- The conversion parameters include a rotation angle and a translation amount.
- The calibrating of the mechanism parameters of the acquisition device according to the optical center fitting trajectory of the converted optical center coordinates includes: performing radius-constrained plane circle fitting on at least one of the converted optical center coordinates, determining the fitted plane circle as the optical center fitting trajectory, and calibrating the mechanism parameters of the acquisition device according to that trajectory.
- In this way, the mechanism parameters of the acquisition device can be calibrated from the optical center fitting trajectory, and a high-precision target transformation matrix from the point cloud data in any scanning pose to the reference coordinate system can then be obtained from those parameters.
- The performing of radius-constrained plane circle fitting on at least one of the converted optical center coordinates and determining the fitted plane circle as the optical center fitting trajectory includes: determining, according to at least one converted optical center coordinate and the normal vector of the fitting surface, the projected coordinate point of each converted optical center coordinate on the fitting surface, wherein the fitting surface is a plane fitted to the converted optical center coordinates and each projected coordinate point is the nearest coordinate point obtained by projecting a converted optical center coordinate onto the fitting surface; and performing radius-constrained plane circle fitting on the multiple projected coordinate points and determining the fitted plane circle as the optical center fitting trajectory.
- In this way, high-precision mechanism parameters can be obtained from the optical center fitting trajectory, and the target transformation matrix for transforming the point cloud data in any scanning pose to the reference coordinate system can then be obtained from those mechanism parameters.
- The mechanism parameters include: the axis length of each level of rotation axis of the acquisition device, the axis coordinates of each level of rotation axis, and the included angle between the rotation axes of each level.
- The calibrating of the mechanism parameters of the acquisition device according to the optical center fitting trajectory includes: determining that the radius of the fitted plane circle corresponding to the current optical center fitting trajectory is the axis length of the current-level rotation axis of the acquisition device, and that the center of that fitted plane circle is the axis coordinate of the current-level rotation axis; performing radius-constrained plane circle fitting on the axis coordinates of at least one current-level rotation axis, and determining that the radius of the resulting plane circle is the axis length of the next-level rotation axis and that its center is the axis coordinate of the next-level rotation axis; and determining the included angle between the rotation axes of each level from the normal vectors of the plane circles fitted for each level, wherein, in the acquisition device, the next-level rotation axis is located inside the current-level rotation axis.
- In this way, the axis length and axis coordinates of the current-level rotation axis of the acquisition device are first fitted; a further circle fit on the axis coordinates of the current-level rotation axis then yields the axis length and axis coordinates of the next-level rotation axis; and the included angle between the rotation axes of each level can be determined from the normal vectors of the plane circles fitted for each level. The mechanism parameters of the acquisition device can thus be calibrated from the optical center fitting trajectory, and the target transformation matrix from the point cloud data in any scanning pose to the reference coordinate system can be obtained from those mechanism parameters.
- If the inclination sensor of the acquisition device is used to feed back the actual rotation angle of the motor, the measurement is essentially based on gravity sensing, so the measured angle lies in the world coordinate system rather than in the camera coordinate system; without a complicated calibration conversion, it is difficult to feed back the actual rotation angle of the motor with high precision. In contrast, the captured camera optical center describes the motion trajectory in the camera coordinate system, which directly reflects the actual scanning pose driven by the motor.
- The determining, according to the mechanism parameters, of the target transformation matrix for transforming the original point cloud data collected by the acquisition device in any scanning pose to the reference coordinate system includes: obtaining the rotation angle and displacement of the end scanner of the acquisition device in any scanning pose, and determining the target transformation matrix according to the mechanism parameters, the rotation angle, and the displacement. Based on the target transformation matrix, the original point cloud data collected in any scanning pose can be spliced to perform three-dimensional reconstruction of the area to be reconstructed.
- The splicing of the original point cloud data according to the target transformation matrix to obtain spliced target point cloud data includes: converting, according to the target transformation matrix, the original point cloud data collected by the acquisition device in any scanning pose to the reference coordinate system to obtain the spliced target point cloud data.
- In this way, the original point cloud data in any scanning pose is unified into the reference coordinate system defined by the reference optical center of the reference point cloud data, so that high-precision splicing of the point cloud data in any scanning pose, and hence high-precision three-dimensional reconstruction of the area to be reconstructed, can be achieved.
- an embodiment of the present invention also provides a point cloud splicing device, which includes:
- a calibration point cloud data acquisition module, configured to collect at least one frame of point cloud data for calibration along a preset scanning trajectory;
- an optical center coordinate conversion module, configured to determine, based on the at least one frame of calibration point cloud data, the converted optical center coordinates, in the reference coordinate system of the reference point cloud data, of the pre-conversion optical center coordinates of each frame of calibration point cloud data, wherein the reference point cloud data is any one frame of the at least one frame of calibration point cloud data;
- a mechanism parameter calibration module, configured to calibrate the mechanism parameters of the acquisition device according to the optical center fitting trajectory of the converted optical center coordinates, wherein the optical center fitting trajectory is the trajectory fitted from the converted optical center coordinates;
- a target transformation matrix determination module, configured to determine, according to the mechanism parameters, a target transformation matrix for transforming the original point cloud data collected by the acquisition device in any scanning pose to the reference coordinate system;
- a point cloud data splicing module, configured to splice the original point cloud data according to the target transformation matrix to obtain spliced target point cloud data.
- an embodiment of the present invention also provides a device, which includes:
- one or more processors;
- a storage device, configured to store one or more programs;
- wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the point cloud splicing method described in any of the embodiments of the present invention.
- The embodiments of the present invention also provide a storage medium containing computer-executable instructions, which, when executed by a computer processor, are used to execute the point cloud splicing method described in any of the embodiments of the present invention.
- FIG. 1 is a flowchart of a point cloud splicing method in Embodiment 1 of the present invention
- Figure 2 is a schematic diagram of the arrangement of characteristic targets in the first embodiment of the present invention.
- Fig. 3 is a schematic diagram of a preset scanning trajectory in the first embodiment of the present invention.
- FIG. 4 is a flowchart of a point cloud splicing method in the second embodiment of the present invention.
- FIG. 5 is a schematic diagram of the optical centers of other point cloud data being unified to the reference coordinates in the second embodiment of the present invention.
- FIG. 6 is a flowchart of a point cloud splicing method in Embodiment 3 of the present invention.
- FIG. 7 is a schematic diagram of a dual-axis mechanism in the collection device in the third embodiment of the present invention.
- FIG. 8 is a schematic diagram of a calibration point cloud fitting trajectory in Embodiment 3 of the present invention.
- FIG. 9 is a flowchart of a point cloud splicing method in the fourth embodiment of the present invention.
- FIG. 10 is a schematic structural diagram of a point cloud splicing device in Embodiment 5 of the present invention.
- FIG. 11 is a schematic structural diagram of a device in Embodiment 6 of the present invention.
- FIG. 12 shows a storage unit, according to an embodiment of the present invention, for storing or carrying the program code that implements the point cloud splicing method.
- An actual building measurement robot, based on a high-precision visual sensing system, performs three-dimensional reconstruction of indoor scenes during the construction phase of a building and processes the three-dimensional point cloud data with measurement algorithms to obtain the various indicators to be tested. Since the building data collected by the measuring robot is almost fully sampled, the measurement results can be quantitatively evaluated by an evaluation algorithm over the building point cloud data.
- The transformation matrix is mainly calculated by the following methods: first, an initial point cloud registration matrix is obtained from the pose transformation of the three-dimensional acquisition device fed back by the inertial measurement unit (IMU) and by simultaneous localization and mapping (SLAM); matching point pairs are then extracted through two-dimensional or three-dimensional image features to calculate an optimized matrix for precise registration.
- IMU inertial measurement unit
- SLAM simultaneous localization and mapping
- The above point cloud splicing method has the following problems in architectural surveying. First, because the point clouds of a house usually consist of large mutually perpendicular planes, they contain few image features, which is unfavorable for extracting matching point pairs, so the initial pose information must be highly accurate. Second, the positioning information provided by IMU and SLAM has low accuracy and can hardly meet the requirements of initial point cloud registration in building survey scenes. Third, the conversion relationship between the base coordinate system and the camera coordinate system is currently determined by hand-eye calibration experiments to obtain the initial point cloud registration matrix; however, the calibration object of such an experiment is usually a calibration board, which is very small compared with the field of view of a building survey, and the calibration scan amplitude needed to capture the same calibration board is also very small, so the calibration effect of such an experiment degrades greatly as the field of view increases.
- In view of this, the inventors propose the point cloud splicing method, device, equipment, and storage medium provided by the embodiments of the present invention.
- At least one frame of point cloud data for calibration is collected; the optical center coordinates of these data are unified into the same coordinate system, and the trajectory fitted to the optical center coordinates in that coordinate system is used to calibrate the mechanism parameters of the acquisition device, thereby calibrating the acquisition device.
- Based on the calibrated mechanism parameters, the target transformation matrix from the initial point cloud data in any scanning pose to the reference coordinate system can be obtained, so that high-precision point cloud splicing of the initial point cloud data in any scanning pose in the area to be reconstructed is achieved.
- Fig. 1 is a flowchart of a point cloud splicing method provided by the first embodiment of the present invention. This embodiment is applicable to the case of splicing point clouds measured at any angle.
- the method can be executed by a point cloud splicing device.
- the splicing device can be implemented by software and/or hardware.
- The point cloud splicing device can be configured on a computing device. The method specifically includes the following steps:
- S110 Collect at least one frame of point cloud data for calibration using a preset scanning trajectory.
- the preset scan trajectory may be a pre-planned scan trajectory based on the scene of the area to be reconstructed before scanning the area to be reconstructed.
- the point cloud data for calibration can be any point cloud data of the area to be reconstructed.
- The point cloud data for calibration may be point cloud data of any part of the area to be reconstructed captured by the acquisition device at equal angular steps. The collected calibration point cloud data can subsequently be used to obtain the target transformation matrix for the initial point cloud data in any scanning pose in the area to be reconstructed, and that matrix is then used to splice the initial point cloud data, so that high-precision point cloud splicing of the initial point cloud data in any scanning pose is achieved.
- The collecting of at least one frame of point cloud data for calibration along a preset scanning trajectory may specifically be: based on the preset scanning trajectory, the acquisition device performs point cloud scanning on the area to be reconstructed and collects at least one frame of calibration point cloud data, wherein mutually different feature targets are arranged in the area to be reconstructed.
- The acquisition device may be a device that collects at least one frame of calibration point cloud data of the area to be reconstructed, for example, a camera.
- The area to be reconstructed may be any area that is to be reconstructed based on the collected calibration point cloud data.
- The area to be reconstructed may be any building or any room. Taking a room in a large building as an example, since the point cloud of a room usually consists of large mutually perpendicular planes, it contains few image features, which is unfavorable for extracting matching point pairs, and the initial pose of the acquisition device therefore cannot be determined accurately. For this reason, referring to the schematic diagram of the feature target layout shown in Figure 2, mutually different feature targets can be arranged in the area to be reconstructed to increase the number of image features and thereby obtain an accurate initial pose of the acquisition device.
- A characteristic target here can be a marker placed in the area to be reconstructed, for example a piece of calibration paper, and the calibration paper can also carry a unique identifier, so that the calibration point cloud data can be associated with each characteristic target through that identifier and the calibration point cloud data of the area to be reconstructed can be spliced based on the correspondence between the unique identifiers and the characteristic targets.
- The unique identifier can be, but is not limited to, numbers, characters, character strings, barcodes, or two-dimensional codes.
- the unique identifier may be an AprilTag mark.
- the characteristic target may be set in the scanning overlap area of the area to be reconstructed, wherein the scanning overlap area is determined according to the preset scanning trajectory and the field of view of the acquisition device. In this way, the preset scan trajectory is planned in advance, and the point cloud scan is performed on the area to be reconstructed based on the preset scan trajectory, which saves time and improves scanning efficiency.
- common scanning methods of acquisition devices include single/multi-track linear scanning, single/multi-axis rotary scanning, combined linear and rotary scanning, and the like.
- the trajectories of the above scanning methods are all combinations of straight lines and circles.
- For example, referring to the preset scanning trajectory shown in Fig. 3, the three-dimensional camera rotates through 4 positions in pitch and 12 positions in azimuth to scan the area to be reconstructed in which the characteristic targets are arranged.
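- As a minimal sketch of such a pitch/azimuth scan grid (the step counts follow the example above; the pitch range and function names below are illustrative assumptions, not values from the patent):

```python
import numpy as np

def preset_scan_trajectory(pitch_steps=4, azimuth_steps=12,
                           pitch_range_deg=(-30.0, 30.0)):
    """Generate an illustrative list of (pitch_deg, azimuth_deg) scan poses.

    pitch_range_deg is an assumed range; azimuth steps are evenly spaced
    over a full turn (30 degree units for 12 steps, as in the example).
    """
    pitches = np.linspace(pitch_range_deg[0], pitch_range_deg[1], pitch_steps)
    azimuths = np.arange(azimuth_steps) * (360.0 / azimuth_steps)
    return [(float(p), float(a)) for p in pitches for a in azimuths]

trajectory = preset_scan_trajectory()
print(len(trajectory))  # 48 scanning poses
```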
- S120 Determine the converted optical center coordinates, in the reference coordinate system of the reference point cloud data, of the pre-conversion optical center coordinates of each of the at least one frame of calibration point cloud data, wherein the reference point cloud data is any one frame of the at least one frame of calibration point cloud data.
- The reference point cloud data may be the frame of point cloud data toward which the other point cloud data are moved or converted.
- The reference point cloud data may be any frame selected from the calibration point cloud data; the selected frame is then used as the reference point cloud data. For example, the first frame of calibration point cloud data may be used as the reference point cloud data.
- the optical center coordinates before conversion may be the initial optical center coordinates of at least one frame of point cloud data for calibration.
- the converted optical center coordinates may be the optical center coordinates obtained by converting at least one frame of point cloud data for calibration to the reference coordinate system of the reference point cloud data.
- The optical center coordinates of the collected calibration point cloud data are not in the same coordinate system. If the area to be reconstructed is to be rebuilt by splicing the calibration point cloud data, the optical center coordinates of the calibration point cloud data must first be unified into one coordinate system.
- The pre-conversion optical center coordinates of the calibration point cloud data other than the reference point cloud data can be converted into the coordinate system of the reference point cloud data through a certain translation and/or a certain rotation.
- In this way, the optical center coordinates of the other calibration point cloud data are obtained in the reference coordinate system corresponding to the reference point cloud data, which makes it possible, based on this common coordinate system, to obtain the target transformation matrix used for splicing the initial point cloud data in any scanning pose in the area to be reconstructed, so as to achieve high-precision point cloud splicing of that data.
- The mechanism parameters may be the parameter information of the acquisition device, for example, the axis length of each rotation axis of the acquisition device, the axis coordinates of each rotation axis, and the angle formed between the rotation axes.
- By fitting a circle to the converted optical center coordinates, the normal vector of the fitted circle and the center coordinates and radius of the fitted circle can be obtained. From the correspondence between these fitted quantities and the mechanism parameters of the acquisition device, the mechanism parameters of the acquisition device in the reference coordinate system can be obtained. Based on the obtained mechanism parameters, the target transformation matrix from the initial point cloud data in any scanning pose to the reference coordinate system can be derived, so as to achieve high-precision point cloud splicing of the initial point cloud data in any scanning pose in the area to be reconstructed.
- Fitting a trajectory to the optical center coordinates of the calibration point cloud data in the reference coordinate system, and obtaining the mechanism parameters of the acquisition device from that fitted trajectory, can also be used as a way to check the design values of the mechanism parameters, since the actual mechanism parameters of the acquisition device deviate from the design values due to manufacturing errors and fit clearances. The trajectory fitted to the optical center coordinates in the reference coordinate system directly reflects the actual motion pose of the acquisition device, from which the mechanism parameters are fitted.
- S140 Determine, according to the mechanism parameter, a target transformation matrix for transforming the original point cloud data collected by the collection device in any scanning pose to the reference coordinate system.
- the initial point cloud data may be the point cloud data of the area to be reconstructed collected by the collecting device under any scanning pose.
- the target conversion matrix can be a matrix for converting point cloud data in any scanning pose to a reference coordinate system.
- The target transformation matrix from the initial point cloud data in any scanning pose to the reference coordinate system can be determined from the parameter information of the acquisition device in that scanning pose and from forward kinematics, so that high-precision point cloud splicing of the initial point cloud data in any scanning pose in the area to be reconstructed can subsequently be achieved based on the obtained target transformation matrix.
- the target point cloud data may be the point cloud data obtained after the initial point cloud data is subjected to the target transformation matrix, and the three-dimensional reconstruction of the area to be reconstructed can be realized by using the target point cloud data.
- By transforming the initial point cloud data in any scanning pose with the target transformation matrix, the initial point cloud data in any scanning pose can be unified under the reference coordinate system of the reference point cloud data, completing the registration of the initial point cloud data across all scanning poses and yielding the complete point cloud data of the area to be reconstructed; based on this complete point cloud data, high-precision three-dimensional reconstruction of the area to be reconstructed is performed.
- In the technical solution of this embodiment, the optical center coordinates of at least one frame of calibration point cloud data are unified into the same coordinate system, and the trajectory fitted to the optical center coordinates in that coordinate system is used to calibrate the mechanism parameters of the acquisition device, thereby calibrating the acquisition device. Based on the calibrated mechanism parameters, the target transformation matrix from the initial point cloud data in any scanning pose to the reference coordinate system can be obtained, so that high-precision point cloud splicing of the initial point cloud data in any scanning pose in the area to be reconstructed is achieved.
- Fig. 4 is a flowchart of a point cloud splicing method provided in the second embodiment of the present invention.
- the embodiment of the present invention further optimizes the above-mentioned embodiment on the basis of the above-mentioned embodiment, and specifically includes the following steps:
- S210 Collect at least one frame of point cloud data for calibration using a preset scanning trajectory.
- The reference coordinate system may be the coordinate system in which the reference point cloud data is located.
- The transformation matrix may be a matrix for transforming the other point cloud data into the reference coordinate system corresponding to the reference point cloud data. Since the collected calibration point cloud data are not in the same coordinate system, in order to unify the other point cloud data under the coordinate system of the reference point cloud data, certain calculation rules can be used, such as the Scale-Invariant Feature Transform (SIFT) algorithm and the Iterative Closest Point (ICP) algorithm, to determine the transformation matrix used when each frame of other point cloud data is converted to the reference coordinate system corresponding to the reference point cloud data, so that the mechanism parameters of the acquisition device in the reference coordinate system can subsequently be obtained from these transformation matrices.
- SIFT Scale-invariant feature transform
- ICP Iterative Closest Point
- The determining of the respective transformation matrices may specifically be: performing feature extraction on the reference point cloud data and any of the other point cloud data to determine matching feature pairs for splicing; determining, based on the matching feature pairs, the conversion parameters for converting that other point cloud data to the reference coordinate system; and determining, based on those conversion parameters, the transformation matrix used when that other point cloud data is converted to the reference coordinate system.
- the matching feature pair may be a feature that can be matched between reference point cloud data and any other point cloud data, for example, it may be the feature target in the first embodiment above.
- The position of a characteristic target can be determined from the RGB values of the other point cloud data and of the reference point cloud data; based on the determined center positions of the characteristic targets, the angles and distances between the characteristic targets of any other point cloud data and those of the reference point cloud data can be obtained and used to determine the matching feature pairs.
- The feature target matching can be performed frame by frame between each frame and the previous frame. Taking the reference point cloud data as the first frame of point cloud data as an example, the characteristic targets of the second frame are first matched with the characteristic targets of the first frame, so that the second frame is expressed in the reference coordinate system of the first frame; the characteristic targets of the third frame are then matched with those of the second frame, and so on, until all other point cloud data have been matched. This avoids the situation in which a frame far from the reference point cloud data contains no feature that matches the reference point cloud data and therefore cannot be matched to it directly.
- The conversion parameters may be the parameters needed to convert any other point cloud data to the reference coordinate system; they may include a rotation angle, i.e. the rotation required to convert the other point cloud data to the reference coordinate system, and a translation amount, i.e. the translation distance required to convert the other point cloud data to the reference coordinate system.
- The conversion parameters for converting any other point cloud data to the reference coordinate system may be obtained based on a transformation rule. Specifically, for the matching feature pairs determined from the RGB data, the center coordinates of the matched characteristic targets are extracted and the center distance between the center coordinates of each pair of characteristic targets is computed; a root-mean-square error function is constructed over these distances, and its parameters are fitted by the Iterative Closest Point (ICP) algorithm, iterating until the error falls below a threshold; the parameters of the error function at which the iteration stops are the conversion parameters.
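- As a simplified sketch of this fitting step (one correspondence-known update rather than the full ICP loop), the rotation and translation that minimize the root-mean-square residual over matched target centers can be estimated in closed form with an SVD (Kabsch/Umeyama); the function names below are illustrative:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform between matched target centers.

    src, dst: (N, 3) arrays of matched characteristic-target centers in the
    frame to be converted and in the reference frame, respectively.
    Returns (R, t) minimizing sum_i ||R @ src_i + t - dst_i||^2.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def rms_error(src, dst, R, t):
    """Root-mean-square residual used as the stopping criterion."""
    return np.sqrt(np.mean(np.sum((src @ R.T + t - dst) ** 2, axis=1)))
```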
- ICP Iterative Closest Point
- The conversion parameters can be combined into the transformation matrix corresponding to each frame of other point cloud data, so that the transformation matrix used when converting any other point cloud data to the reference coordinate system corresponding to the reference point cloud data can be obtained accurately.
- The transformation matrix here can take the homogeneous form $T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}$, where $R$ is the rotation matrix corresponding to the rotation angle and $t$ is the translation vector corresponding to the translation amount. In this way, the respective transformation matrices for converting the other point cloud data to the reference coordinate system corresponding to the reference point cloud data can be obtained accurately.
- S230 Determine the converted optical center coordinates of the other point cloud data before conversion in the reference coordinate system according to the optical center coordinates before conversion of the other point cloud data and the corresponding respective transformation matrices.
- The pre-conversion optical center coordinates of the other point cloud data are all (0, 0, 0) in their own camera coordinate systems. Using the obtained transformation matrix of each frame of other point cloud data together with its pre-conversion optical center coordinates, the converted optical center coordinates in the reference coordinate system can be determined by the following calculation: $\tilde{o}' = T \cdot \tilde{o}$, where $\tilde{o} = (0, 0, 0, 1)^\top$ is the pre-conversion optical center in homogeneous form, so that the converted optical center is simply the translation component of $T$.
- The optical center coordinates of the reference point cloud data together with the converted optical center coordinates of the other point cloud data constitute the converted optical center coordinates of the at least one frame of calibration point cloud data. In this way, the optical center coordinates of any other point cloud data in the reference coordinate system of the reference point cloud data can be obtained accurately, so that the subsequent steps can be based on these accurate optical center coordinates.
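- A minimal numeric sketch of this step, assuming NumPy and illustrative function names:

```python
import numpy as np

def homogeneous_transform(R, t):
    """Assemble the 4x4 transformation matrix T = [[R, t], [0, 1]]."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def converted_optical_center(T):
    """Map the pre-conversion optical center (0, 0, 0) into the reference frame.

    Because the optical center is the origin of its own camera frame, the
    result equals the translation column of T.
    """
    o = np.array([0.0, 0.0, 0.0, 1.0])
    return (T @ o)[:3]

# Illustrative usage with made-up values:
R = np.eye(3)
t = np.array([0.1, 0.0, 0.2])
print(converted_optical_center(homogeneous_transform(R, t)))  # -> [0.1 0.  0.2]
```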
- According to the pre-conversion optical center coordinates of the other point cloud data and the corresponding transformation matrices, the converted optical center coordinates of the other point cloud data in the reference coordinate system are determined, so that all the calibration point cloud data are unified under the coordinate system of one frame of the calibration point cloud data.
- It can be understood that, when shooting is performed in a dual-axis pitch-and-rotation mode, and since the two axes of the acquisition device are perpendicular to each other, the optical centers of the acquisition device are actually distributed on a spherical surface whose radius is given by the hypotenuse of the triangle formed by the two axis lengths; the trajectory of the optical center of the acquisition device is therefore a spherical surface.
- In Figure 5, panel a shows all the calibration point cloud data, where A is the reference point cloud data and the remaining frames are the other point cloud data; as can be seen, the calibration point cloud data are initially scattered and unrelated. After the other point cloud data are transformed by their respective transformation matrices, their optical centers are unified and the calibration point cloud data become related to one another, as shown in panel b of Figure 5. Once they are unified, for example onto one spherical surface, the positional relationship between any frame of other point cloud data and any other frame of calibration point cloud data is known clearly, which makes it possible to determine the mechanism parameters of the acquisition device later.
- In the technical solution of this embodiment, the respective transformation matrices are determined for converting the other point cloud data, i.e. the frames of calibration point cloud data other than the reference point cloud data, to the reference coordinate system of the reference point cloud data, so that the mechanism parameters of the acquisition device in the reference coordinate system can subsequently be obtained from these transformation matrices. According to the pre-conversion optical center coordinates of the other point cloud data and the corresponding transformation matrices, the converted optical center coordinates in the reference coordinate system are determined, so that the optical center coordinates of any other point cloud data in the reference coordinate system of the reference point cloud data are obtained accurately and the subsequent steps can be based on them.
- Fig. 6 is a flowchart of a point cloud splicing method provided in the third embodiment of the present invention.
- the embodiment of the present invention further optimizes the above-mentioned embodiment on the basis of the above-mentioned embodiment, and specifically includes the following steps:
- S310 Collect at least one frame of point cloud data for calibration using a preset scanning trajectory.
- S330 Determine the converted optical center coordinates of the other point cloud data before conversion in the reference coordinate system according to the optical center coordinates before conversion of the other point cloud data and the corresponding respective transformation matrices.
- S340 Perform radius-constrained plane circle fitting on at least one of the converted optical center coordinates, and determine that the plane circle obtained by fitting is the optical center fitting trajectory.
- the radius-constrained plane circle fitting may be to perform plane circle fitting on at least one converted optical center coordinate with a certain radius.
- The plane circle obtained by performing radius-constrained plane circle fitting on at least one converted optical center coordinate is the optical center fitting trajectory. Based on this optical center fitting trajectory, the mechanism parameters of the acquisition device can be calibrated.
- The performing of radius-constrained plane circle fitting on at least one of the converted optical center coordinates and determining the fitted plane circle as the optical center fitting trajectory may specifically be: determining, according to at least one converted optical center coordinate and the normal vector of the fitting surface of the converted optical center coordinates, the projected coordinate point of each converted optical center coordinate on the fitting surface, wherein the fitting surface is a plane fitted to the converted optical center coordinates and each projected coordinate point is the nearest coordinate point obtained by projecting the corresponding converted optical center coordinate onto the fitting surface; and performing radius-constrained plane circle fitting on the multiple projected coordinate points and determining the fitted plane circle as the optical center fitting trajectory.
- Taking the azimuth rotation of the dual-axis rotating acquisition device as an example, and assuming that the azimuth angle is stepped through 12 positions in units of 30°, the optical center coordinates of the other point cloud data in the reference coordinate system obtained through the above transformation matrices are P1(x1, y1, z1), ..., P12(x12, y12, z12).
- Three-dimensional circle fitting is performed on these 12 calibration optical center coordinates. When fitting a three-dimensional circle, the three-dimensional rotation plane (its normal vector) is fitted first, and then the three-dimensional circle center is fitted. The rotation plane can be fitted by the least-squares method, minimizing the residual of all calibration optical center coordinates to the fitted surface.
- Fitting a radius-constrained plane circle to the multiple projected coordinate points can specifically be done, for example, by minimizing $\sum_i \left(\lVert P_i - c \rVert - r\right)^2$ over the circle center $c$ (constrained to the fitted plane) and the radius $r$. In this way, the distances from all the calibration optical centers to the center of the fitted plane circle become consistent, and the optical center fitting trajectory is obtained.
- In this way, the projected coordinate point of each converted optical center coordinate on the fitting surface is determined precisely, radius-constrained plane circle fitting is then performed on these precisely determined projected coordinate points, and accurate mechanism parameters of the acquisition device are obtained from the fitted plane circle.
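- A minimal numerical sketch of this two-stage fit (least-squares plane, then a circle in that plane), assuming NumPy/SciPy; the function names and the use of scipy.optimize.least_squares are illustrative choices, not taken from the patent:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_plane(points):
    """Least-squares plane through 3D points: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def project_to_plane(points, centroid, normal):
    """Nearest-neighbor projection of each point onto the fitted plane."""
    d = (points - centroid) @ normal
    return points - np.outer(d, normal)

def fit_plane_circle(points):
    """Radius-constrained circle fit: center (on the plane) and radius that
    minimize sum_i (||P_i - c|| - r)^2 over the projected optical centers."""
    centroid, normal = fit_plane(points)
    proj = project_to_plane(points, centroid, normal)
    # Build a 2D basis in the fitted plane.
    u = np.cross(normal, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    p2d = np.c_[(proj - centroid) @ u, (proj - centroid) @ v]

    def residual(x):
        cx, cy, r = x
        return np.hypot(p2d[:, 0] - cx, p2d[:, 1] - cy) - r

    r0 = np.mean(np.linalg.norm(p2d, axis=1))
    sol = least_squares(residual, x0=[0.0, 0.0, r0])
    cx, cy, radius = sol.x
    center = centroid + cx * u + cy * v   # back to 3D, still on the plane
    return center, radius, normal
```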
- The mechanism parameters here may be the axis length of each level of rotation axis in the acquisition device, the axis coordinates of each level of rotation axis, and the included angle between the rotation axes of each level.
- The calibrating of the mechanism parameters of the acquisition device according to the optical center fitting trajectory may specifically be: determining that the radius of the fitted plane circle corresponding to the current optical center fitting trajectory is the axis length of the current-level rotation axis of the acquisition device, and that the center of that fitted plane circle is the axis coordinate of the current-level rotation axis; performing radius-constrained plane circle fitting on the axis coordinates of at least one current-level rotation axis, and determining that the radius of the plane circle fitted to those axis coordinates is the axis length of the next-level rotation axis and that its center is the axis coordinate of the next-level rotation axis; and determining the included angle between the rotation axes of each level from the normal vectors of the plane circles fitted for each level of rotation axis, wherein, in the acquisition device, the next-level rotation axis is located inside the current-level rotation axis.
- the current rotation axis may be the outermost rotation axis of the collection device, for example, the rotation axis A in FIG. 7.
- the next-level rotation axis can be the rotation axis of the current rotation axis inward in the acquisition device, for example, the rotation axis B in Fig. 7.
- the current rotation axis is located at the outer end of the acquisition device compared to the next level rotation axis.
- the radius of the fitted plane circle corresponding to the current optical center fitting trajectory is the axis length of the current level rotation axis of the acquisition device, and the center of the fitted plane circle is the axis coordinate of the current level rotation axis.
- the rotation axis of the next level is located inside the rotation axis of the current level.
- In this way, the high-precision axis length and axis coordinates of each level of rotation axis and the included angle between the rotation axes of each level can be obtained, and the target transformation matrix for transforming the initial point cloud data in any scanning pose to the reference coordinate system can be derived from them. The obtained axis lengths, axis coordinates, and included angles between the rotation axes can also be used to test the motion accuracy of the motors of the acquisition device.
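- The mapping from fitted circles to mechanism parameters described above (radius to axis length, center to axis coordinate, included angle from the circle normals) can be sketched as follows; this reuses the hypothetical fit_plane_circle helper from the previous sketch, and the data layout is an assumption for illustration:

```python
import numpy as np

def axis_angle(normal_a, normal_b):
    """Included angle (degrees) between two rotation axes, taken from the
    normal vectors of their fitted optical-center circles."""
    na = normal_a / np.linalg.norm(normal_a)
    nb = normal_b / np.linalg.norm(normal_b)
    # abs() because a normal and its negation describe the same axis line.
    return np.degrees(np.arccos(np.clip(abs(na @ nb), -1.0, 1.0)))

def calibrate_dual_axis(current_level_fits):
    """current_level_fits: list of (center, radius, normal) tuples, one per
    circle fitted to the optical centers at the current (outer) level.

    Returns the current-level axis lengths (circle radii) and axis
    coordinates (circle centers), the next-level axis length and axis
    coordinate (from a circle fitted to those centers), and the included
    angle between the two axes.
    """
    centers = np.array([c for c, _, _ in current_level_fits])
    radii = np.array([r for _, r, _ in current_level_fits])
    next_center, next_radius, next_normal = fit_plane_circle(centers)
    angle = axis_angle(current_level_fits[0][2], next_normal)
    return {
        "current_axis_lengths": radii,
        "current_axis_coords": centers,
        "next_axis_length": next_radius,
        "next_axis_coord": next_center,
        "included_angle_deg": angle,
    }
```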
- If the inclination sensor of the acquisition device is used to feed back the actual rotation angle of the motor, the measurement is essentially based on gravity sensing, so the measured angle lies in the world coordinate system rather than in the camera coordinate system; without a complicated calibration conversion, it is difficult to feed back the actual rotation angle of the motor with high precision. In contrast, the captured camera optical center describes the motion trajectory in the reference coordinate system, which reflects the actual scanning pose driven by the motor.
- The center of the plane circle fitted to the optical center coordinates of the current-level rotation axis is determined first; the above fitting method is then applied to those circle centers to fit the next-level rotation axis (when both axes are combined, the optical centers lie on a sphere), and so on. In this way the other point cloud data and the reference point cloud data are unified under the same coordinate system and the positional relationship of each frame of calibration point cloud data is determined. The order of fitting can be understood as proceeding from the outermost rotation axis of the acquisition device inward, level by level, until the innermost rotation axis is fitted; the rotation axes are thus fitted from the outside to the inside.
- In the technical solution of this embodiment, radius-constrained plane circle fitting is performed on at least one of the converted optical center coordinates and the fitted plane circle is determined as the optical center fitting trajectory; based on the correspondence between the parameters of the optical center fitting trajectory and the mechanism parameters of the acquisition device, the mechanism parameters are calibrated. High-precision mechanism parameters are thus obtained, from which the target transformation matrix for converting the initial point cloud data in any scanning pose to the reference coordinate system can be derived.
- Fig. 9 is a flowchart of a point cloud splicing method provided in the fourth embodiment of the present invention.
- the embodiment of the present invention further optimizes the above-mentioned embodiment on the basis of the above-mentioned embodiment, and specifically includes the following steps:
- S410 Collect at least one frame of point cloud data for calibration using a preset scanning trajectory.
- S430 Determine the converted optical center coordinates of the other point cloud data before conversion in the reference coordinate system according to the optical center coordinates before conversion of the other point cloud data and the corresponding respective transformation matrices.
- S440 Perform radius-constrained plane circle fitting on at least one of the converted optical center coordinates, and determine that the plane circle obtained by fitting is the optical center fitting trajectory.
- The rotation angle and displacement of the end scanner of the acquisition device in any scanning pose may be obtained from the sensors of the end scanner of the acquisition device.
- the rotation angle may be the rotation angle of the end scanner of the acquisition device moving from a certain scanning position to the next scanning position.
- the displacement can be the movement distance of the end scanner of the acquisition device from a certain scanning position to the next scanning position.
- The original point cloud data may be the point cloud data collected by the acquisition device in any scanning pose. From the rotation angle and displacement of the end scanner in any scanning pose obtained by the sensors, together with the mechanism parameters, the splicing transformation matrix of the original point cloud data in any scanning pose relative to the reference coordinate system can be obtained. Specifically, it can be composed of: the transformation matrix from the axis coordinate of the azimuth rotation axis of the acquisition device to the origin (the center of the sphere in Figure 8), the rotation matrix of the azimuth rotation axis, the transformation matrix from the axis coordinates of the pitch rotation axis to the axis coordinates of the azimuth rotation axis, the rotation matrix of the pitch rotation axis, and the transformation matrix from the optical center coordinates of any initial point cloud data to the axis coordinates of the pitch rotation axis.
- After the splicing transformation matrix of the original point cloud data in any scanning pose relative to the reference coordinate system is obtained, the target transformation matrix for transforming the original point cloud data collected by the acquisition device in any scanning pose to the reference coordinate system is obtained based on the following formula:
- $T = RT_{PanCent2L} \cdot R_{\alpha} \cdot RT_{TilCent2PanCent} \cdot R_{\beta} \cdot RT_{L2TilCent}$
- where $RT_{PanCent2L}$ is the transformation matrix from the axis coordinate of the azimuth rotation axis to the origin (the center of the sphere in Figure 8), $R_{\alpha}$ is the rotation matrix of the azimuth rotation axis, $RT_{TilCent2PanCent}$ is the transformation matrix from the axis coordinates of the pitch rotation axis to the axis coordinates of the azimuth rotation axis, $R_{\beta}$ is the rotation matrix of the pitch rotation axis, and $RT_{L2TilCent}$ is the transformation matrix from the optical center coordinates of any initial point cloud data to the axis coordinates of the pitch rotation axis.
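- A sketch of composing this chain of matrices, keeping the factor names from the formula; which local axes correspond to azimuth and pitch, and the parameter names, are assumptions of this illustration:

```python
import numpy as np

def rot_z(angle_rad):
    """4x4 rotation about the local z-axis (assumed here to be the azimuth axis)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.eye(4)
    R[:2, :2] = [[c, -s], [s, c]]
    return R

def rot_x(angle_rad):
    """4x4 rotation about the local x-axis (assumed here to be the pitch axis)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.eye(4)
    R[1:3, 1:3] = [[c, -s], [s, c]]
    return R

def translate(t):
    T = np.eye(4)
    T[:3, 3] = t
    return T

def target_transform(alpha, beta, pan_center, tilt_offset, cam_offset):
    """Compose T = RT_PanCent2L @ R_alpha @ RT_TilCent2PanCent @ R_beta @ RT_L2TilCent
    from calibrated mechanism parameters (offsets expressed in their parent frames)."""
    RT_PanCent2L = translate(pan_center)         # azimuth axis coord -> origin
    R_alpha = rot_z(alpha)                       # azimuth rotation
    RT_TilCent2PanCent = translate(tilt_offset)  # pitch axis coord -> azimuth axis coord
    R_beta = rot_x(beta)                         # pitch rotation
    RT_L2TilCent = translate(cam_offset)         # optical center -> pitch axis coord
    return RT_PanCent2L @ R_alpha @ RT_TilCent2PanCent @ R_beta @ RT_L2TilCent
```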
- Taking the acquisition device of any embodiment of the present invention as a dual-axis mechanism as an example, the motion trajectory of the mechanism is the superposition of two circles, and at least three points are sufficient to determine the orientation of a circle and its central axis. The zero-point (origin) coordinates of three frames are equal to their optical center coordinates in the reference coordinate system, and the trajectory curve of the azimuth rotation can be obtained from these three points. For the dual-axis mechanism, a two-axis circular trajectory can therefore be fitted with at least 3 optical centers per axis, i.e. a total of 6 spatial coordinates, and the pose transformation matrix for any pitch and azimuth angle can be obtained. This greatly simplifies the mechanism calibration of the three-dimensional camera acquisition device: as long as the two-axis motors have high repeat positioning accuracy, the three-dimensional reconstruction of a room can be completed.
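- A short sketch of recovering a circle (center, radius, and central-axis direction) from three optical centers, as the minimum-point case above suggests; the helper name is illustrative:

```python
import numpy as np

def circle_from_three_points(p1, p2, p3):
    """Center, radius and axis direction of the circle through three 3D points.

    The center lies in the plane of the points and is equidistant from all
    three; the circle's central axis is the plane normal through the center.
    """
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    v1, v2 = p2 - p1, p3 - p1
    normal = np.cross(v1, v2)
    normal /= np.linalg.norm(normal)
    # Solve for the center c = p1 + a*v1 + b*v2 with |c-p1| = |c-p2| = |c-p3|.
    A = np.array([[v1 @ v1, v1 @ v2],
                  [v1 @ v2, v2 @ v2]])
    rhs = 0.5 * np.array([v1 @ v1, v2 @ v2])
    a, b = np.linalg.solve(A, rhs)
    center = p1 + a * v1 + b * v2
    radius = np.linalg.norm(center - p1)
    return center, radius, normal
```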
- In this way, the rotation angle and displacement of the end scanner of the acquisition device in any scanning pose are obtained, and the target transformation matrix for converting the original point cloud data collected in that scanning pose to the reference coordinate system is determined, so that, based on the target transformation matrix, the initial point cloud data in any scanning pose can be spliced to perform three-dimensional reconstruction of the area to be reconstructed.
- S470 According to the target transformation matrix, convert the original point cloud data collected by the acquisition device in any scanning pose to the reference coordinate system to obtain the target point cloud data after stitching.
- the target point cloud data may be the point cloud data formed by converting the original point cloud data collected by the collecting device in any scanning pose to the reference coordinate system based on the target transformation matrix.
- The initial point cloud data in any scanning pose can thus be converted to, and unified under, the reference coordinate system of the reference point cloud data, so that high-precision splicing of the initial point cloud data in any scanning pose, and in turn high-precision three-dimensional reconstruction of the area to be reconstructed, can be achieved.
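- The splicing step itself reduces to applying each frame's 4x4 target transformation and concatenating the results; a minimal sketch with illustrative names:

```python
import numpy as np

def splice_point_clouds(frames, transforms):
    """Apply each frame's target transformation matrix and concatenate.

    frames: list of (N_i, 3) arrays of original point cloud data, one per
    scanning pose; transforms: the corresponding 4x4 target transformation
    matrices to the reference coordinate system.
    """
    spliced = []
    for pts, T in zip(frames, transforms):
        homo = np.c_[pts, np.ones(len(pts))]   # (N, 4) homogeneous points
        spliced.append((homo @ T.T)[:, :3])    # mapped into the reference frame
    return np.vstack(spliced)                   # spliced target point cloud data
```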
- After splicing, the ICP algorithm described in the second embodiment of the present invention can further be used for fine registration to obtain more accurate and complete point cloud data of the area to be reconstructed.
- In the technical solution of this embodiment, the rotation angle and displacement of the end scanner of the acquisition device in any scanning pose are obtained, and the target transformation matrix from the original point cloud data collected in any scanning pose to the reference coordinate system is determined according to the mechanism parameters, the rotation angle, and the displacement, so that the initial point cloud data in any scanning pose can be spliced based on the target transformation matrix to reconstruct the area to be reconstructed in three dimensions. According to the target transformation matrix, the original point cloud data collected by the acquisition device in any scanning pose is converted to the reference coordinate system to obtain the spliced target point cloud data; the initial point cloud data in any scanning pose is thereby unified under the reference coordinate system of the reference point cloud data, achieving high-precision splicing of the initial point cloud data in any scanning pose and, in turn, high-precision three-dimensional reconstruction of the area to be reconstructed.
- FIG. 10 is a schematic structural diagram of a point cloud splicing device provided by Embodiment 5 of the present invention. As shown in FIG. 10, the device includes: a point cloud data acquisition module 31 for calibration, an optical center coordinate determination module 32, a mechanism parameter calibration module 33, The target transformation matrix determination module 34 and the point cloud data splicing module 35.
- the point cloud data acquisition module 31 for calibration is used to collect at least one frame of point cloud data for calibration along a preset scanning trajectory;
- the optical center coordinate determination module 32 is configured to determine, based on the at least one frame of point cloud data for calibration, the converted optical center coordinates, in the reference coordinate system of the reference point cloud data, of the respective pre-conversion optical center coordinates of the at least one frame of point cloud data for calibration, wherein the reference point cloud data is any one frame of the at least one frame of point cloud data for calibration;
- the mechanism parameter calibration module 33 is used to calibrate the mechanism parameters of the acquisition device according to the optical center fitting trajectory of the converted optical center coordinates, wherein the optical center fitting trajectory is a trajectory fitted from the converted optical center coordinates;
- the target transformation matrix determination module 34 is configured to determine, according to the mechanism parameters, a target transformation matrix for transforming the original point cloud data collected by the acquisition device in any scanning pose to the reference coordinate system;
- the point cloud data splicing module 35 is used for splicing the original point cloud data according to the target transformation matrix to obtain a spliced target point cloud.
- the point cloud data acquisition module 31 for calibration is specifically used for:
- based on the preset scanning trajectory, the acquisition device performs point cloud scanning on the area to be reconstructed and collects at least one frame of point cloud data for calibration, wherein different characteristic targets are set on the area to be reconstructed.
- each of the characteristic targets is provided with a unique code identifier, so that the obtained point cloud data for calibration can be associated with each of the characteristic targets through the unique code identifier.
- the characteristic target is set in a scan overlap area of the area to be reconstructed, wherein the scan overlap area is determined according to a preset scan trajectory and a field of view of the acquisition device.
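- A small sketch of how uniquely coded targets simplify the correspondence step: frames can be paired purely by the decoded code identifiers, so no descriptor matching is needed in the overlap region. The `detections` format (a mapping from code identifier to 3D target center per frame) is an assumption for illustration only.

```python
def match_by_code(detections_a, detections_b):
    """Pair target centres across two frames by their unique code identifiers.

    detections_a, detections_b: dict mapping code id -> (x, y, z) target centre.
    Returns two lists of corresponding centres in the same order.
    """
    shared_ids = sorted(set(detections_a) & set(detections_b))
    src = [detections_a[i] for i in shared_ids]
    dst = [detections_b[i] for i in shared_ids]
    return src, dst
```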
- the optical center coordinate determination module 32 includes:
- the transformation matrix determining unit is used to determine the respective transformation matrices used when the other point cloud data in the at least one frame of point cloud data for calibration are converted to the reference coordinate system of the reference point cloud data, wherein the other point cloud data are the frames of point cloud data for calibration, among the at least one frame of point cloud data for calibration, other than the reference point cloud data;
- the optical center coordinate determining unit is configured to determine, according to the pre-conversion optical center coordinates of the other point cloud data and the corresponding respective transformation matrices, the converted optical center coordinates, in the reference coordinate system, of the pre-conversion optical center coordinates of the other point cloud data.
- the optical center coordinates of the reference point cloud data and the converted optical center coordinates of the other point cloud data constitute the converted optical center coordinates of the at least one frame of calibration point cloud data.
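- Since each frame's pre-conversion optical center is the origin of its own camera coordinate system, the converted optical center can be obtained, as in the sketch below, by applying that frame's transformation matrix to the homogeneous origin, which is simply the translation column of the matrix:

```python
import numpy as np

def converted_optical_center(T_frame_to_ref):
    """Optical centre of one frame expressed in the reference coordinate system."""
    origin = np.array([0.0, 0.0, 0.0, 1.0])        # pre-conversion optical centre (0, 0, 0)
    return (T_frame_to_ref @ origin)[:3]           # equals T_frame_to_ref[:3, 3]
```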
- the transformation matrix determining unit includes:
- the matching feature pair determining subunit is used to perform feature extraction on the reference point cloud data and any of the other point cloud data, and determine a matching feature pair for splicing;
- a conversion parameter determination subunit configured to determine respective conversion parameters when any of the other point cloud data is converted to the reference coordinate system according to the matching feature pair;
- the transformation matrix determining subunit is used to determine the respective transformation matrix when any of the other point cloud data is transformed into the reference coordinate system according to the respective transformation parameters.
- the conversion parameter includes a rotation angle and a translation amount.
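- One standard way to obtain such a rotation and translation from the matched feature pairs is the SVD-based closed-form (Kabsch) estimate sketched below; it is a generic least-squares solution, not necessarily the exact iterative estimator described in the embodiments.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ≈ R @ src + t.

    src, dst: (N, 3) arrays of matched feature-target centres (N >= 3).
    Returns the 4x4 transformation matrix built from R and t.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)            # cross-covariance of centred pairs
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```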
- the mechanism parameter calibration module 33 includes:
- the optical center fitting trajectory fitting unit is configured to perform radius-constrained plane circle fitting on at least one of the converted optical center coordinates, and determine that the plane circle obtained by the fitting is the optical center fitting trajectory;
- the mechanism parameter calibration unit is used to calibrate the mechanism parameters of the acquisition device according to the optical center fitting trajectory.
- the optical center fitting trajectory fitting unit includes:
- the projection coordinate point determination subunit is configured to determine, according to at least one of the converted optical center coordinates and the normal vector of the fitting plane of the at least one converted optical center coordinate, the projected coordinate point of each of the converted optical center coordinates on the fitting plane, wherein the fitting plane is a plane fitted from the converted optical center coordinates, and a projected coordinate point is the nearest-neighbor coordinate point obtained by projecting the corresponding converted optical center coordinate onto the fitting plane;
- the optical center fitting trajectory fitting subunit is used to perform radius-constrained plane circle fitting on multiple projection coordinate points, and determine the plane circle obtained by the fitting as the optical center fitting trajectory.
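- The sketch below illustrates these two subunits under simplifying assumptions: a plane is fitted to the converted optical centers by SVD, each coordinate is projected to its nearest point on that plane, and a circle center and radius are then estimated in 2-D plane coordinates by linear least squares. A fixed radius constraint, as described in the embodiments, could be imposed by replacing the last step with a constrained fit; that detail is omitted here.

```python
import numpy as np

def fit_plane(points):
    """Best-fit plane through (N, 3) points; returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    return centroid, Vt[-1]                        # smallest singular direction = normal

def project_to_plane(points, centroid, normal):
    """Nearest-neighbour projection of each point onto the fitted plane."""
    dist = (points - centroid) @ normal
    return points - np.outer(dist, normal)

def fit_circle_in_plane(points_on_plane, centroid, normal):
    """Least-squares circle through points that already lie on the plane."""
    # build an orthonormal 2-D basis (u, v) inside the plane
    u = np.cross(normal, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-8:
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    xy = np.c_[(points_on_plane - centroid) @ u, (points_on_plane - centroid) @ v]
    # solve x^2 + y^2 + a*x + b*y + c = 0 in the least-squares sense
    A = np.c_[xy, np.ones(len(xy))]
    rhs = -(xy ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    radius = np.sqrt(cx ** 2 + cy ** 2 - c)
    center_3d = centroid + cx * u + cy * v
    return center_3d, radius, normal
```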
- the mechanism parameters include: the axis length of each level of rotation axis in the acquisition device, the axis center coordinates of each level of rotation axis, and the included angles between the rotation axes of the various levels.
- the mechanism parameter calibration unit includes:
- the current rotation axis mechanism parameter determination subunit is configured to determine the radius of the fitted plane circle corresponding to the current optical center fitting trajectory as the axis length of the current-level rotation axis of the acquisition device, and the center of the fitted plane circle as the axis center coordinates of the current-level rotation axis;
- the next-level rotation axis mechanism parameter determination subunit is configured to perform radius-constrained plane circle fitting on at least one axis center coordinate of the current-level rotation axis, and to determine the radius of the plane circle fitted from the at least one axis center coordinate of the current-level rotation axis as the axis length of the next-level rotation axis, and the center of that fitted plane circle as the axis center coordinates of the next-level rotation axis;
- the rotation axis included angle determination subunit is used to determine the included angles between the rotation axes of the various levels according to the normal vectors of the plane circles fitted for the rotation axes of each level.
- in the acquisition device, the next-level rotation axis is located on the inner side of the current-level rotation axis.
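- Reusing the helpers from the previous sketch (assumed to be in scope), the hierarchy described above can be illustrated as follows: fitting the optical centers of one sweep yields the current-level axis length (circle radius), axis center (circle center) and axis direction (circle normal); fitting the collected current-level axis centers in the same way yields the next-level axis; and the included angle between two levels follows from their circle normals.

```python
import numpy as np
# assumes fit_plane, project_to_plane and fit_circle_in_plane from the previous sketch are in scope

def axis_from_points(points):
    """Axis length, axis centre and axis direction estimated from one level's circle fit."""
    centroid, normal = fit_plane(points)
    on_plane = project_to_plane(points, centroid, normal)
    center, radius, normal = fit_circle_in_plane(on_plane, centroid, normal)
    return radius, center, normal                  # axis length, axis centre coordinates, axis direction

def included_angle_deg(normal_a, normal_b):
    """Included angle between two rotation axes, from their fitted-circle normals."""
    cosang = np.dot(normal_a, normal_b) / (np.linalg.norm(normal_a) * np.linalg.norm(normal_b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```

- applying `axis_from_points` first to each sweep of converted optical centers and then to the resulting set of axis centers mirrors the outer-to-inner fitting order described for the acquisition device.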
- the target transformation matrix determination module 34 includes:
- a parameter acquisition unit, configured to acquire the rotation angle and displacement of the end scanner of the acquisition device in any scanning pose;
- the target transformation matrix determining unit is used to determine, according to the mechanism parameters, the rotation angle and the displacement, the target transformation matrix for converting the original point cloud data collected by the acquisition device in any scanning pose to the reference coordinate system.
- the point cloud data splicing module 35 is specifically used for:
- the original point cloud data collected by the acquisition device in any scanning pose is converted to the reference coordinate system to obtain the spliced target point cloud data.
- the point cloud splicing device provided in the embodiment of the present invention can execute the point cloud splicing method provided in any embodiment of the present invention, and has corresponding functional modules and beneficial effects for the execution method.
- FIG. 11 is a schematic structural diagram of a device provided by Embodiment 6 of the present invention.
- the device includes a processor 40, a memory 41, an input device 42, and an output device 43; the number of processors 40 in the device may be one or more, and one processor 40 is taken as an example in FIG. 11; the processor 40, memory 41, input device 42, and output device 43 in the device can be connected by a bus or other means, and a bus connection is taken as an example in FIG. 11.
- as a computer-readable storage medium, the memory 41 can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the point cloud splicing method in the embodiments of the present invention (for example, the point cloud data acquisition module 31 for calibration, the optical center coordinate determination module 32, the mechanism parameter calibration module 33, the target transformation matrix determination module 34, and the point cloud data splicing module 35).
- the processor 40 executes various functional applications and data processing of the device by running the software programs, instructions, and modules stored in the memory 41, that is, realizes the above-mentioned point cloud splicing method.
- the memory 41 may mainly include a program storage area and a data storage area.
- the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the terminal, and the like.
- the memory 41 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
- the memory 41 may further include a memory remotely provided with respect to the processor 40, and these remote memories may be connected to the device through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
- the input device 42 can be used to receive inputted numeric or character information, and generate key signal inputs related to user settings and function control of the device.
- the output device 43 may include a display device such as a display screen.
- the seventh embodiment of the present invention also provides a storage medium containing computer-executable instructions.
- FIG. 12 shows a storage unit according to an embodiment of the present invention for storing or carrying the program code that implements the point cloud splicing method of the embodiments of the present invention, and is a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
- the computer-readable medium 50 stores program code.
- the computer-executable instructions, when executed by a computer processor, are used to execute a point cloud splicing method.
- of course, in the storage medium 50 containing computer-executable instructions provided by the embodiments of the present invention, the computer-executable instructions are not limited to the method operations described above, and can also execute related operations in the point cloud splicing method provided by any embodiment of the present invention.
- the computer software product can be stored in a computer-readable storage medium 50, such as a computer floppy disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a flash memory (FLASH), a hard disk or an optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in each embodiment of the present invention.
- the computer-readable storage medium 50 has storage space for the program code 51 for executing any method steps in the above-mentioned method. These program codes can be read from or written into one or more computer program products.
- the program code 51 may, for example, be compressed in an appropriate form.
Abstract
A point cloud splicing method, apparatus, device and storage medium (50). The method comprises: collecting at least one frame of point cloud data for calibration along a preset scanning trajectory (110, 210, 310); determining converted optical center coordinates, in a reference coordinate system of reference point cloud data, of respective pre-conversion optical center coordinates of the at least one frame of point cloud data for calibration, wherein the reference point cloud data is any one frame of the at least one frame of point cloud data for calibration (120); calibrating mechanism parameters of an acquisition device according to an optical center fitting trajectory of the converted optical center coordinates, wherein the optical center fitting trajectory is a trajectory fitted from the converted optical center coordinates (130, 240); determining, according to the mechanism parameters, a target transformation matrix for converting original point cloud data collected by the acquisition device in any scanning pose to the reference coordinate system (140, 250, 360); and splicing the original point cloud data according to the target transformation matrix to obtain spliced target point cloud data (150, 260, 370). The effect of high-precision point cloud splicing is thereby achieved.
Description
专利申请文件的交叉引用
本申请要求于2020年4月21日提交的申请号为202010317943.2的中国申请的优先权,其在此出于所有目的通过引用将其全部内容并入本文。
本发明实施例涉及轨迹标定技术,尤其涉及一种点云拼接方法、装置、设备和存储介质。
建筑实测实量机器人基于高精度视觉传感系统,对建筑施工阶段的室内数据进行三维重建,通过测量算法处理三维点云数据,从而得到各个待测指标。由于测量机器人采集到的建筑数据近乎全采样,可以通过对建筑点云数据的评测算法量化评估测量结果。
发明内容
本发明实施例提供一种点云拼接方法、装置、设备和存储介质,以实现高精度的点云拼接的效果。
第一方面,本发明实施例提供了一种点云拼接方法,该方法包括:
以预设扫描轨迹采集至少一帧标定用点云数据;
确定所述至少一帧标定用点云数据各自的转换前光心坐标在基准点云数据的基准坐标系下的转换后光心坐标,其中,所述基准点云数据是所述至少一帧标定用点云数据中的任意一帧;
根据所述转换后光心坐标的光心拟合轨迹,标定采集设备的机构参数,其中,所述光心拟合轨迹是根据所述转换后光心坐标拟合成的轨迹;
根据所述机构参数,确定所述采集设备在任意扫描位姿下采集的原始点云数据向所述基准坐标系转换的目标变换矩阵;
根据所述目标变换矩阵,对所述原始点云数据进行拼接,得到拼接后的目标点云数据。
可选的,所述以预设扫描轨迹采集至少一帧标定用点云数据,包括:
基于预设扫描轨迹,由采集设备对待重建区域进行点云扫描,采集至少一帧标定用点云数据,其中,所述待重建区域上设置有互不相同的特征标靶。
这样预先规划好预设扫描轨迹,基于预设扫描轨迹对待重建区域进行点云扫描,节省时间,提高扫描效率。
可选的,每一个所述特征标靶上设置有唯一编码标识,以通过唯一编码标识能将获取的所述标定用点云数据与各所述特征标靶对应。
可选的,所述特征标靶设置在所述待重建区域的扫描重叠区域,其中,所述扫描重叠区域根据所述采集设备的预设扫描轨迹和视场角确定。
可选的,所述确定所述至少一帧标定用点云数据各自的转换前光心坐标在基准点云数据的基准坐标系下的转换后光心坐标,包括:
确定所述至少一帧标定用点云数据中的其他点云数据向所述基准点云数据的基准坐标系进行转换时,各自的变换矩阵,其中,所述其他点云数据是所述至少一帧标定用点云数据中除所述基准点云数据以外的其他帧的标定用点云数据;根据所述其他点云数据的转换前光心坐标,及相应各自的变换矩阵,确定所述其他点云数据的转换前光心坐标在所述基准坐标系下的转换后光心坐标;其中,所述基准点云数据的光心坐标,以及所述其他点云数据的转换后光心坐标,构成了所述至少一帧标定用点云数据的转换后光心坐标。
这样可精确得到任一其他点云数据的转换前光心在基准点云数据的基准坐标系下的转换后光心坐标,以便后续基于该精确的其他点云数据的光心在基准坐标系下的转换后光心坐标,得到采集设备的机构参数。
可选的,所述确定所述至少一帧标定用点云数据中的其他点云数据向所述基准点云数据的基准坐标系进行转换时,各自的变换矩阵,包括:
对所述基准点云数据和任一所述其他点云数据进行特征提取,确定拼接用的匹配特征对;根据所述匹配特征对,确定任一所述其他点云数据转换至所述基准坐标系时各自的转换参数;根据所述各自的转换参数,确定任一所述其他点云数据向所述基准坐标系进行转换时,各自的变换矩阵。
这样可精确得到将其他点云数据向基准点云数据对应的基准坐标系进行转换的变换矩阵。
可选的,所述转换参数包括旋转角度和平移量。
可选的,所述根据所述转换后光心坐标的光心拟合轨迹,标定采集设备的机构参数,包括:
对至少一个所述转换后光心坐标进行半径约束的平面圆拟合,确定拟合得到的平面圆为光心拟合轨迹;根据所述光心拟合轨迹,标定所述采集设备的机构参数。
这样可基于光心拟合轨迹,标定采集设备的机构参数,进而后续可根据得到的机构参数,得到高精度的任意扫描位姿下的点云数据向基准坐标系转换的目标变换矩阵。
可选的,所述对至少一个所述转换后光心坐标进行半径约束的平面圆拟合,确定拟合得到的平面圆为光心拟合轨迹,包括:
根据至少一个所述转换后光心坐标,以及至少一个所述转换后光心坐标的拟合面的法向量,确定各所述转换后光心坐标在所述拟合面上的投影坐标点,其中,所述拟合面是根据所述转换后光心坐标拟合成的平面,所述投影坐标点是所述转换后光心坐标投射至所述拟合面上的最近邻坐标点;对多个投影坐标点进行半径约束的平面圆拟合,确定拟合得到的平面圆为光心拟合轨迹。
获取各转换后光心坐标在拟合面上的投影点坐标,这样以便后续基于该精确确定的投影坐标点来得到精确的机构参数,对多个投影坐标点进行半径约束的平面圆拟合,确定拟合得到的平面圆为光心拟合轨迹,这样以便后续基于该光心拟合轨迹,得到高精度的机构参数,进而以便后续基于该机构参数获取任意扫描位姿下的点云数据向基准坐标系转换的目标变换矩阵。
可选的,所述机构参数包括:所述采集设备的各级旋转轴的轴长、各级旋转轴的轴心坐标和各级旋转轴的夹角;
所述根据所述光心拟合轨迹,标定所述采集设备的机构参数,包括:
确定当前所述光心拟合轨迹对应的拟合的平面圆的半径为所述采集设备的当前级旋转轴的轴长,拟合的平面圆的圆心为所述当前级旋转轴的轴心坐标;对至少一个所述当前级旋转轴的轴心坐标进行半径约束的平面圆拟合,确定至少一个所述当前级旋转轴的轴心坐标所拟合的平面圆的半径为下一级旋转轴的轴长,所述至少一个所述当前级旋转轴的轴心坐标所拟合的平面圆的圆心为下一级旋转轴的轴心坐标;根据各级旋转轴的拟合的平面圆的法向量,确定各级旋转轴之间所成的夹角,其中,在所述采集设备中,所述下一级旋转轴位于所述当前级旋转轴的内侧。
这样根据当前光心拟合轨迹拟合出采集设备当前旋转轴的轴长和轴心坐标,然后基于当前旋转轴的轴心坐标再进行光心拟合轨迹,即可拟合出下一级旋转轴的轴长和轴心坐标,基于各级旋转轴的拟合的平面圆的法向量,可确定 各级旋转轴之间所成的夹角,这样便可基于光心拟合轨迹,标定采集设备的机构参数,以便后续基于该机构参数,获取任意扫描位姿下的点云数据向基准坐标系转换的目标变换矩阵。同时,根据得到的各旋转轴的轴长和轴心坐标,还可以用来检验采集设备的电机运动精度,现有技术中是通过采集设备的倾角传感器来反馈电机实际转角信息,其实质是基于重力传感,这样测得的角度是处于世界坐标系而非相机的坐标系。如果不经繁杂的标定转换,难以高精度地反馈电机实际转角。而捕捉的相机光心表征的是相机坐标系下的运动轨迹,即反映了实际的电机扫描位姿。
可选的,所述根据所述机构参数,确定所述采集设备在任意扫描位姿下采集的原始点云数据向所述基准坐标系转换的目标变换矩阵,包括:
获取所述采集设备的末端扫描器在任意扫描位姿下的旋转角以及位移;根据所述机构参数、所述旋转角和所述位移,确定所述采集设备在任意扫描位姿下采集的原始点云数据向所述基准坐标系转换的目标变换矩阵。
这样以便后续基于该目标变换矩阵,将任意扫描位姿下的初始点云数据进行拼接,以对待重建区域进行三维重建。
可选的,根据所述目标变换矩阵,对所述原始点云数据进行拼接,得到拼接后的目标点云数据,包括:
根据所述目标变换矩阵,将所述采集设备在任意扫描位姿下采集的原始点云数据转换至所述基准坐标系,得到拼接后的目标点云数据。
这样就将任意扫描位姿下的初始点云数据统一到基准点云数据的基准光心坐标的基准坐标系下,这样即可实现将任意扫描位姿下的初始点云数据进行高精度拼接的效果,进而实现待重建区域的高精度的三维重建。
第二方面,本发明实施例还提供了一种点云拼接装置,该装置包括:
标定用点云数据采集模块,用于以预设扫描轨迹采集至少一帧标定用点云数据;
光心坐标转换模块,用于基于所述至少一帧标定用点云数据,确定所述至少一帧标定用点云数据各自的转换前光心坐标在所述基准点云数据的基准坐标系下的转换后光心坐标,其中,所述基准点云数据是所述至少一帧标定用点云数据中的任意一帧;
机构参数标定模块,用于根据所述转换后光心坐标的光心拟合轨迹,标定采集设备的机构参数,其中,所述光心拟合轨迹是根据所述转换后光心坐标拟合成的轨迹;
目标变换矩阵确定模块,用于根据所述机构参数,确定所述采集设备在任意扫描位姿下采集的原始点云数据向所述基准坐标系转换的目标变换矩阵;
点云数据拼接模块,用于根据所述目标变换矩阵,对所述原始点云数据进行拼接,得到拼接后的目标点云。
第三方面,本发明实施例还提供了一种设备,该设备包括:
一个或多个处理器;
存储装置,用于存储一个或多个程序;
当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现本发明实施例中任一所述的点云拼接方法。
第四方面,本发明实施例还提供了一种包含计算机可执行指令的存储介质,所述计算机可执行指令在由计算机处理器执行时用于执行本发明实施例中任一所述的点云拼接方法。
图1是本发明实施例一中的点云拼接方法的流程图;
图2是本发明实施例一中的特征标靶布置示意图;
图3是本发明实施例一中的预设扫描轨迹示意图;
图4是本发明实施例二中的点云拼接方法的流程图;
图5是本发明实施例二中的其他点云数据的光心统一至基准坐标下的示意图;
图6是本发明实施例三中的点云拼接方法的流程图;
图7是本发明实施例三中的采集设备为双轴机构的示意图;
图8是本发明实施例三中的校准点云拟合轨迹示意图;
图9是本发明实施例四中的点云拼接方法的流程图;
图10是本发明实施例五中的点云拼接装置的结构示意图;
图11是本发明实施例六中的一种设备的结构示意图;
图12是本发明实施例的用于保存或者携带实现根据本发明实施例的点云拼接方法的程序代码的存储单元。
下面结合附图和实施例对本发明作进一步的详细说明。可以理解的是,此 处所描述的具体实施例仅仅用于解释本发明,而非对本发明的限定。另外还需要说明的是,为了便于描述,附图中仅示出了与本发明相关的部分而非全部结构。
建筑实测实量机器人基于高精度视觉传感系统,对建筑施工阶段的室内数据进行三维重建,通过测量算法处理三维点云数据,从而得到各个待测指标。由于测量机器人采集到的建筑数据近乎全采样,可以通过对建筑点云数据的评测算法量化评估测量结果。
由于建筑测量的跨度范围通常较大,三维相机单帧拍摄得到的点云视域有限,需要算得多帧点云间的刚体变换矩阵,将其拼接得到完整的室内点云数据。目前主要由以下方式算得变换矩阵:首先通过惯性测量单元(Inertial measurement unit,IMU)和即时定位与地图构建(simultaneous localization and mapping,SLAM)信息反馈三维采集设备的位姿变换,获得初始的点云配准矩阵,再通过二维或三维的图像特征,提取匹配点对,算得精配准的优化矩阵。
在建筑测量使用上述点云拼接方法会存在以下一些问题:一是由于房屋点云通常为相互垂直的大平面,其图像特征较少,不利于匹配点对的提取,因此初始位姿信息需具有较高精度;二是IMU和SLAM所提供的定位信息精度较低,难以满足建筑测量场景下初始点云配准的要求;三是当前基本通过手眼标定试验来确定基坐标系与相机坐标系的转换关系,以求得初始点云配准矩阵,但该试验的标定对象通常是标定板,与建筑测量的视域相比非常小,且为了能拍到同一块标定板,标定的扫描幅度也很小,如此具有差异的标定试验随着视域的增大,其标定效果大幅下降。
针对上述问题,发明人提出了本发明实施例提供的点云拼接方法、装置、设备和存储介质,通过以预设扫描轨迹采集至少一帧标定用点云数据,将至少一帧标定用点云数据的光心坐标统一至同一坐标系下,并通过该同一坐标系下的光心坐标的光心拟合轨迹,标定采集设备的机构参数,实现对采集设备的标定。其中,基于采集设备的机构参数,可得到任意扫描位姿下的初始点云数据向基准坐标系转换的目标变换矩阵,以实现待重建区域中任意扫描位姿下的初始点云数据的高精度点云拼接。
下面对本发明实施例提供的点云拼接方法进行介绍。
实施例一
图1为本发明实施例一提供的点云拼接方法的流程图,本实施例可适用于 任意角度测得的点云进行拼接的情况,该方法可以由点云拼接装置来执行,该点云拼接装置可以由软件和/或硬件来实现,该点云拼接装置可以配置在计算设备上,具体包括如下步骤:
S110、以预设扫描轨迹采集至少一帧标定用点云数据。
示例性的,预设扫描轨迹可以是在对待重建区域进行扫描前,基于待重建区域的场景,预先规划的扫描轨迹。标定用点云数据可以是待重建区域的任意点云数据。该标定用点云数据可以是通过采集设备等角度拍摄的任意待重建区域的点云数据,通过采集的标定用点云数据,以便后续基于该标定用点云数据获得待重建区域中任意扫描位姿下的初始点云数据进行拼接的目标转换矩阵,以实现待重建区域中任意扫描位姿下的初始点云数据进行高精度点云拼接。
可选的,所述以预设扫描轨迹采集至少一帧标定用点云数据,具体可以是:基于预设扫描轨迹,由采集设备对待重建区域进行点云扫描,采集至少一帧标定用点云数据,其中,所述待重建区域上设置有互不相同的特征标靶。
示例性的,采集设备可以是采集待重建区域的至少一帧标定用点云数据的设备,例如可以是相机等。待重建区域可以是根据获取的任意区域的标定用点云数据进行重建的区域,例如,待重建区域可以是任一栋大楼,或者任一间房间等。以待重建区域为大型建筑内的一个房间为例,由于该房间内点云通常为相互垂直的大平面,其图像特征较少,不利于匹配点对的提取,进而无法精准确认采集设备的初始位姿信息,因此,参考图2所示的特征标靶布置示意图,可在待重建区域上设置有互不相同的特征标靶,用于增加图像特征,进而获取精准的采集设备的初始位姿信息,这里的特征标靶可以是在待重建区域内设置的一种用于标记的标识,例如,可以是标定纸,该标定纸上还可设置有唯一标识,以通过唯一编码标识能将获取的标定用点云数据与各特征标靶对应,以便于后续基于该唯一标识与各特征标靶的对应关系,将待重建区域内的标定用点云数据进行拼接,其中,唯一标识可以是但不限于数字、字符、字符串、条形码或二维码等,在一些实施例中,唯一标识可以是AprilTag标记。具体的,该特征标靶可以设置在待重建区域的扫描重叠区域,其中,该扫描重叠区域根据采集设备的预设扫描轨迹和视场角确定。这样预先规划好预设扫描轨迹,基于预设扫描轨迹对待重建区域进行点云扫描,节省时间,提高扫描效率。
示例性的,常见的采集设备的扫描方式包括单/多轨道的直线扫描、单/多轴的旋转扫描、直线与旋转的组合扫描等。上述扫描方式的轨迹都是直线和圆形的组合,例如,以双轴俯仰旋转的拍摄方式为例,如图3所示的预设扫描轨迹 示意图,三维相机以俯仰角旋转4次、方位角旋转12次的方式扫描布置有特征标靶的待重建区域。
S120、确定所述至少一帧标定用点云数据各自的转换前光心坐标在基准点云数据的基准坐标系下的转换后光心坐标,其中,所述基准点云数据是所述至少一帧标定用点云数据中的任意一帧。
示例性的,基准点云数据可以是其他点云数据以此点云数据为基准进行移动或者其他转换的点云数据,这里基准点云数据可以是从标定用点云数据中选取的任一帧点云数据作为基准点云数据,可选的,可以是将首帧标定用点云数据作为基准点云数据。转换前光心坐标可以是至少一帧标定用点云数据的初始光心坐标。转换后光心坐标可以是将至少一帧标定用点云数据转换至基准点云数据的基准坐标系下后的光心坐标。由于采集设备在采集待重建区域的标定用点云数据时,采集设备一直在移动,因此,采集到的各标定用点云数据的光心坐标不在同一坐标系下,若要通过各标定用点云数据经过拼接,进而重建待重建区域,则需将各标定用点云数据的光心坐标统一至同一坐标系下。这里可以是根据基准点云数据,可将除基准点云数据外的其他标定用点云数据的转换前光心坐标通过一定距离的移动和/或一定角度的旋转,转换到基准点云数据对应的基准坐标系下,得到除基准点云数据外的其他标定用点云数据的光心在基准点云数据对应的基准坐标系下的转换后光心坐标,这样以便于后续基于在同一坐标系下的标定用点云数据,获得待重建区域中任意扫描位姿下的初始点云数据进行拼接的目标转换矩阵,以实现待重建区域中任意扫描位姿下的初始点云数据的高精度点云拼接。
S130、根据所述转换后光心坐标的光心拟合轨迹,标定采集设备的机构参数,其中,所述光心拟合轨迹是根据所述转换后光心坐标拟合成的轨迹。
示例性的,机构参数可以是采集设备的参数信息,例如可以是采集设备的各旋转轴的轴长、各旋转轴的轴心坐标和各旋转轴所成的夹角等。将统一至基准坐标系的各标定用点云数据的光心坐标进行平面圆拟合,根据拟合轨迹可得到拟合圆的法向量、拟合圆的圆心坐标和半径,基于得到的拟合圆的法向量、拟合圆的圆心坐标和半径,根据拟合圆的法向量、拟合圆的圆心坐标和半径,与采集设备的机构参数的对应关系,可得到采集设备在基坐标系的机构参数,这样基于获得的采集设备的机构参数,可得到任意扫描位姿下的初始点云数据向基准坐标系转换的目标变换矩阵,以实现待重建区域中任意扫描位姿下的初始点云数据的高精度点云拼接。同时,基于各标定用点云数据在基准坐标系下 的光心坐标进行轨迹拟合,并根据拟合轨迹得到采集设备在基坐标系下的机构参数,也可作为检验采集设备的机构参数设计值的一种方式,采集设备的实际机构参数与设计值存在制造误差和配合间隙,即使对采集设备的外表面进行三维坐标扫描测量,也难以获得采集设备各结构关节间实际的距离,而基于各标定用点云数据在基坐标系下的光心坐标的轨迹拟合直接反映了采集设备的实际运动位姿,由此拟合出采集设备的机构参数。
S140、根据所述机构参数,确定所述采集设备在任意扫描位姿下采集的原始点云数据向所述基准坐标系转换的目标变换矩阵。
示例性的,初始点云数据可以是采集设备在任意扫描位姿下采集的待重建区域的点云数据。目标转换矩阵可以是任意扫描位姿下的点云数据向基准坐标系转换的矩阵。当获得采集设备在基准坐标系下的机构参数后,可根据采集设备在任意扫描位姿下参数信息的和正向运动学确定任意扫描位姿下的初始点云数据向基准坐标系转换的目标转换矩阵,这样以便后续基于获得的目标转换矩阵,来实现待重建区域中任意扫描位姿下的初始点云数据的高精度点云拼接。
S150、根据所述目标变换矩阵,对所述原始点云数据进行拼接,得到拼接后的目标点云数据。
示例性的,目标点云数据可以是初始点云数据经目标变换矩阵后,得到的点云数据,利用目标点云数据可实现对待重建区域的三维重建。可将任意扫描位姿下的初始点云数据的参数信息添加至目标转换矩阵中,由此可将任意扫描位姿下的初始点云数据统一在基准点云数据的基准坐标系下,完成任意扫描位姿下的初始点云数据的点云数据的配准,得到待重建区域的完整点云数据,基于待重建区域的完整点云数据,对待重建区域进行高精度的三维重建。
本发明实施例的技术方案,通过以预设扫描轨迹采集至少一帧标定用点云数据,将至少一帧标定用点云数据的光心坐标统一至同一坐标系下,并通过该同一坐标系下的光心坐标的光心拟合轨迹,标定采集设备的机构参数,实现对采集设备的标定。其中,基于采集设备的机构参数,可得到任意扫描位姿下的初始点云数据向基准坐标系转换的目标变换矩阵,以实现待重建区域中任意扫描位姿下的初始点云数据的高精度点云拼接。
实施例二
图4为本发明实施例二提供的点云拼接方法的流程图,本发明实施例是在上述实施例的基础上,对上述实施例的进一步优化,具体包括如下步骤:
S210、以预设扫描轨迹采集至少一帧标定用点云数据。
S220、确定所述至少一帧标定用点云数据中的其他点云数据向所述基准点云数据的基准坐标系进行转换时,各自的变换矩阵,其中,所述其他点云数据是所述至少一帧标定用点云数据中除所述基准点云数据以外的其他帧的标定用点云数据。
示例性的,基准坐标系可以是基准点云数据所在的坐标系。变换矩阵可以是其他点云数据向基准点云数据对应的基准坐标系进行转换的矩阵。由于采集到的各标定用点云数据不在同一坐标系下,因此为了将其他点云数据统一至基准点云数据的坐标系下,可以基于一定的计算规则,例如可以是利用尺度不变特征转换(Scale-invariant feature transform,SIFT)算法和最近迭代算法(Iterative Closest Point,ICP),确定其他点云数据向基准点云数据对应的基准坐标系进行转换时,各自对应的变换矩阵,这样以便后续基于其他点云数据向基准点云数据对应的基准坐标系进行转换时,各自对应的变换矩阵,来得到采集设备在基准坐标系下的机构参数。
可选的,确定所述至少一帧标定用点云数据中的其他点云数据向所述基准点云数据的基准坐标系进行转换时,各自的变换矩阵,具体可以是:对所述基准点云数据和任一所述其他点云数据进行特征提取,确定拼接用的匹配特征对;基于所述匹配特征对,确定任一所述其他点云数据转换至所述基准坐标系时各自的转换参数;基于所述各自的转换参数,确定任一所述其他点云数据向所述基准坐标系进行转换时,各自的变换矩阵。
示例性的,匹配特征对可以是基准点云数据和任一其他点云数据可进行匹配的特征,例如,可以是上述实施例一中的特征标靶等。具体的可以是基于其他点云数据和基准点云数据的RGB值确定特征标靶的位置,基于确定的特征标靶的中心位置,得到任一其他点云数据和基准点云数据特征标靶相匹配的特征标靶之间的角度和距离,来确定匹配特征对。在特征标靶匹配时,具体可以是,后一帧与前一帧进行特征标靶匹配,例如,以基准点云数据是首帧点云数据为例,先将第2帧点云数据的特征标靶与第1帧的特征标靶进行匹配,匹配完成后,第2帧的点云数据就在第1帧点云数据的基坐标系下,然后再将第3帧点云数据与第2帧点云数据的特征标靶进行匹配,依次类推,直至将所有其他点云数据均匹配完成,这样就可避免在距离基准点云数据较远帧的其他点云数据中可能不存在与基准点云数据进行匹配的匹配特征时,该其他点云数据无法与基准点云数据进行特征匹配的情况。转换参数可以是将任一其他点云数据转换 至基准坐标系下的参数,可选的,转换参数可以是旋转角度和平移量,这里的旋转角度可以是需经多少的旋转角度可将任一其他点云数据转换至基准坐标系下,这里的平移量可以是需经多少的平移距离可将任一其他点云数据转换至基准坐标系下。
示例性的,基于匹配特征对,可基于变换规则,得到任一其他点云数据转换至所述基准坐标系的转换参数,例如具体可以是将基于RGB确定的匹配特征对,提取任一匹配特征对的特征标靶的中心坐标,得到两特征标靶的中心坐标的中心距,构建均方根误差函数,通过最近迭代算法(Iterative Closest Point,ICP)拟合误差函数的参数,直至误差函数小于阈值时停止迭代,停止迭代后获取的误差函数的参数即为转换参数。确定了转换参数后,将转换参数组合可形成任一其他点云数据对应的变换矩阵,这样可以精确的获得任一其他点云数据向基准点云数据对应的基准坐标系进行转换时,各自对应的变换矩阵。例如,这里的变换矩阵可以是:
$T=\begin{bmatrix} R & t \\ \mathbf{0}^{T} & 1 \end{bmatrix}$

其中，$R$ 为旋转角度对应的旋转矩阵，$t$ 为平移量对应的平移向量。这样可得到精确的将其他点云数据向基准点云数据对应的基准坐标系进行转换时，各自的变换矩阵。
S230、根据所述其他点云数据的转换前光心坐标,及相应各自的变换矩阵,确定所述其他点云数据的转换前光心坐标在所述基准坐标系下的转换后光心坐标。
示例性的,其他点云数据的转换前光心坐标均为(0,0,0),利用得到的其他点云数据各自的变换矩阵,以及其他点云数据的转换前光心坐标,根据如下计算公式,即可确定其他点云数据的转换前光心坐标在基准坐标系下的转换后光心坐标:
$P' = T\,P$

其中，$T$ 为其他点云数据各自的变换矩阵，$P$ 为其他点云数据的转换前光心坐标，$P'$ 为其他点云数据的转换前光心坐标在基准坐标系下的转换后光心坐标。其中，基准点云数据的光心坐标，以及其他点云数据的转换后光心坐标，构成了至少一帧标定用点云数据的转换后光心坐标。这样可精确得到任一其他点云数据的光心在基准点云数据的基准坐标系下的光心坐标，以便后续基于该精确的其他点云数据的光心在基准坐标系下的光心坐标，得到采集设备的机构参数。
示例性的,根据其他点云数据的转换前光心坐标,及相应各自的变换矩阵,确定其他点云数据的转换前光心坐标在基准坐标系下的转换后光心坐标,以使将标定用点云数据都统一到某一标定用点云数据的坐标系下,可以理解为,以双轴俯仰旋转的拍摄方式进行拍摄,由于采集设备的双轴相互垂直,采集设备光心实际上分布在以双轴轴长构成的斜三角边为球半径的球面上,因此,采集设备的光心的轨迹即呈现球面。参考图5所示的其他点云数据的光心统一至基准坐标下的示意图,图5中的a图为所有的标定用点云数据,其中,A为基准点云数据,其他的点云数据为其他点云数据,由图5中的a图可以看出,所有标定用点云数据彼此分散,没有关联,其他点云数据经各自的变化矩阵变换后,将其他点云数据的光心统一至基准点云数据的基准坐标系下,这样标定用点云数据彼此之间就有联系,如图5中的b图所示,例如,将他们统一至一个球的面上,这样可很清晰的知道任一其他点云数据和另一标定用点云数据的位置关系,以便后续确定采集设备的机构参数。
S240、根据所述转换后光心坐标的光心拟合轨迹,标定采集设备的机构参数,其中,所述光心拟合轨迹是根据所述转换后光心坐标拟合成的轨迹。
S250、根据所述机构参数,确定所述采集设备在任意扫描位姿下采集的原始点云数据向所述基准坐标系转换的目标变换矩阵。
S260、根据所述目标变换矩阵,对所述原始点云数据进行拼接,得到拼接后的目标点云数据。
本发明实施例的技术方案,通过确定所述至少一帧标定用点云数据中的其他点云数据向所述基准点云数据的基准坐标系进行转换时,各自的变换矩阵,其中,所述其他点云数据是所述至少一帧标定用点云数据中除所述基准点云数据以外的其他帧的标定用点云数据,这样以便后续基于其他点云数据向基准点云数据对应的基准坐标系进行转换时,各自对应的变换矩阵,来得到采集设备在基准坐标系下的机构参数。根据所述其他点云数据的转换前光心坐标,及相应各自的变换矩阵,确定所述其他点云数据的转换前光心坐标在所述基准坐标系下的转换后光心坐标,这样可精确得到任一其他点云数据的光心在基准点云数据的基准坐标系下的光心坐标,以便后续基于该精确的其他点云数据的光心在基准坐标系下的光心坐标,得到采集设备的机构参数。
实施例三
图6为本发明实施例三提供的点云拼接方法的流程图,本发明实施例是在上述实施例的基础上,对上述实施例的进一步优化,具体包括如下步骤:
S310、以预设扫描轨迹采集至少一帧标定用点云数据。
S320、确定所述至少一帧标定用点云数据中的其他点云数据向所述基准点云数据的基准坐标系进行转换时,各自的变换矩阵,其中,所述其他点云数据是所述至少一帧标定用点云数据中除所述基准点云数据以外的其他帧的标定用点云数据。
S330、根据所述其他点云数据的转换前光心坐标,及相应各自的变换矩阵,确定所述其他点云数据的转换前光心坐标在所述基准坐标系下的转换后光心坐标。
S340、对至少一个所述转换后光心坐标进行半径约束的平面圆拟合,确定拟合得到的平面圆为光心拟合轨迹。
示例性的,半径约束的平面圆拟合可以是对至少一个转换后光心坐标以一定的半径进行平面圆拟合。对至少一个转换后光心坐标进行半径约束的平面圆 拟合所得到的平面圆即为光心拟合轨迹。这样根据拟合的光心拟合轨迹,基于光心拟合轨迹中的参数与采集设备的机构参数的对应关系,可对采集设备的机构参数进行标定。
可选的,对至少一个所述转换后光心坐标进行半径约束的平面圆拟合,确定拟合得到的平面圆为光心拟合轨迹,具体可以是:根据至少一个所述转换后光心坐标,以及至少一个所述转换后光心坐标的拟合面的法向量,确定各所述转换后光心坐标在所述拟合面上的投影坐标点,其中,所述拟合面是根据所述转换后光心坐标拟合成的平面,所述投影坐标点是所述转换后光心坐标投射至所述拟合面上的最近邻坐标点;对多个投影坐标点进行半径约束的平面圆拟合,确定拟合得到的平面圆为光心拟合轨迹。
示例性的,以双轴旋转采集设备的方位角旋转为例,假设方位角以单位30°旋转了12个点位,经上述变换矩阵得到其他点云数据的光心在基坐标系下的光心坐标为:P1(x1,y1,z1),…,P12(x12,y12,z12)。由这12个标定用点云数据坐标进行三维圆拟合,在拟合三维圆时,首先需要拟合三维旋转平面(法向量),再拟合三维圆心,旋转平面的拟合可由最小二乘法算得,最小化所有标定用点云数据到拟合面的残差。
可设空间平面的方程为 $ax+by+cz=1$，则可由12个标定用点云数据列出平面矩阵方程 $NX=I$，其中 $N$ 为由12个光心坐标按行排列组成的矩阵，$X=(a,b,c)^{T}$，$I$ 为元素全为1的列向量，其最小二乘解为：

$X=(N^{T}N)^{-1}N^{T}I$

将所得法向量系数单位化，设拟合空间平面方程为 $a'x+b'y+c'z+d=0$，$P_1$ 至 $P_{12}$ 这12个标定用点云数据在拟合平面上的投影点为 $P_1'(x_1',y_1',z_1'),\ldots,P_{12}'(x_{12}',y_{12}',z_{12}')$，投影点坐标为：

$x_i'=x_i+k\,a',\quad y_i'=y_i+k\,b',\quad z_i'=z_i+k\,c'$

式中，$k=-a'x_i-b'y_i-c'z_i-d$。
这样根据至少一个所述转换后光心坐标,以及至少一个所述转换后光心坐标的拟合面的法向量,确定各所述转换后光心坐标在所述拟合面上的投影坐标点,以便后续基于该精确确定的投影坐标点进行半径约束的平面圆拟合,基于拟合得到的平面圆来得到精确的采集设备的机构参数。
S350、根据所述光心拟合轨迹,标定所述采集设备的机构参数。
示例性的,这里的机构参数可以是采集设备中各级旋转轴的轴长、各级旋转轴的轴心坐标和各级旋转轴的夹角。基于光心拟合轨迹,对各标定用点云数据在基准坐标系下的光心坐标进行半径约束的平面圆拟合,即可得到拟合的平面圆的半径及圆心,基于平面圆的半径和圆心,与各级旋转轴的轴长及轴心坐标的关系,即可获得各级旋转轴的轴长,以及各级旋转轴的轴心坐标。这样可得到高精度的机构参数,以便后续基于该机构参数获取任意扫描位姿下的初始点云数据向基准坐标系转换的目标转换矩阵。
可选的,根据所述光心拟合轨迹,标定所述采集设备的机构参数,具体可以是:确定当前所述光心拟合轨迹对应的拟合的平面圆的半径为所述采集设备的当前级旋转轴的轴长,拟合的平面圆的圆心为所述当前级旋转轴的轴心坐标;对至少一个所述当前级旋转轴的轴心坐标进行半径约束的平面圆拟合,确定至少一个所述当前级旋转轴的轴心坐标所拟合的平面圆的半径为下一级旋转轴的轴长,所述至少一个所述当前级旋转轴的轴心坐标所拟合的平面圆的圆心为下一级旋转轴的轴心坐标;根据各级旋转轴的拟合的平面圆的法向量,确定各级旋转轴之间所成的夹角,其中,在所述采集设备中,所述下一级旋转轴位于所述当前级旋转轴的内侧。
示例性的,参考图7所示的采集设备为双轴机构的示意图,这里当前旋转轴可以是采集设备的最外端的旋转轴,例如,如图7中的旋转轴A。下一级旋 转轴可以为采集设备中,当前旋转轴往里层的旋转轴,例如,如图7中的旋转轴B。这里当前旋转轴相较于下一级旋转轴,位于采集设备的外端。当前光心拟合轨迹对应的拟合的平面圆的半径为采集设备的当前级旋转轴的轴长,拟合的平面圆的圆心为当前级旋转轴的轴心坐标,当确定了当前级旋转轴的轴心坐标后,对至少一个当前级旋转轴的轴心坐标进行半径约束的平面圆拟合,将至少一个当前级旋转轴的轴心坐标所拟合的平面圆的半径作为下一级旋转轴的轴长,至少一个当前级旋转轴的轴心坐标所拟合的平面圆的圆心作为下一级旋转轴的轴心坐标,根据各级旋转轴的拟合的平面圆的法向量,即可确定各级旋转轴之间所成的夹角,这里,在采集设备中,下一级旋转轴位于当前级旋转轴的内侧。这样可得到高精度的各级旋转轴的轴长、轴心坐标和各级旋转轴之间所成的夹角,以便后续基于该各级旋转轴的轴长、轴心坐标和各级旋转轴之间所成的夹角获取任意扫描位姿下的初始点云数据向基准坐标系进行转换的目标转换矩阵。同时,根据得到的各级旋转轴的轴长、轴心坐标和各级旋转轴之间所成的夹角,还可以用来检验采集设备的电机运动精度,现有技术中是通过采集设备的倾角传感器来反馈电机实际转角信息,其实质是基于重力传感,这样测得的角度是处于世界坐标系而非相机坐标系。如果不经繁杂的标定转换,难以高精度地反馈电机实际转角。而捕捉的相机光心表征的是基准坐标系下的运动轨迹,即反映了实际的电机扫描位姿。
需要说明的是,参考图8所示的校准点云拟合轨迹示意图,如图8所示,虽然其他点云数据统一至基准点云数据的基准坐标系下,这样可知其他点云数据与基准点云数据的位置关系,但是各其他点云数据间的位置关系不明确,因此,需将其他点云数据和基准点云数据统一至另一个坐标系下,如图8中的a图所示,先基于当前旋转轴的光心坐标利用上述方法确定当前旋转轴的光心坐标拟合的平面圆,当将当前旋转轴的光心坐标拟合的平面圆拟合完成后,确定了当前旋转轴的光心坐标拟合的平面圆的圆心,然后将当前旋转轴的光心坐标拟合的平面圆的圆心,再利用上述拟合方法进行下一级旋转轴的光心坐标的拟合,如图8中的b图所示,拟合成为一个球体,依次类推,即可将其他点云数据和基准点云数据统一至同一个坐标系下,即可确定各标定用点云数据的位置关系,这里可以理解为作拟合平面圆时,是从采集设备的最外端旋转轴不断向内侧一级旋转轴进行拟合,直至拟合至采集设备额最内侧旋转轴,旋转轴的拟合顺序是从外向里。
S360、根据所述机构参数,确定所述采集设备在任意扫描位姿下采集的原 始点云数据向所述基准坐标系转换的目标变换矩阵。
S370、根据所述目标变换矩阵,对所述原始点云数据进行拼接,得到拼接后的目标点云数据。
本发明实施例的技术方案,通过对至少一个所述转换后光心坐标进行半径约束的平面圆拟合,确定拟合得到的平面圆为光心拟合轨迹,这样根据拟合的光心拟合轨迹,基于光心拟合轨迹中的参数与采集设备的机构参数的对应关系,可对采集设备的机构参数进行标定。根据所述光心拟合轨迹,标定所述采集设备的机构参数,这样可得到高精度的机构参数,以便后续基于该机构参数获取任意扫描位姿下的初始点云数据向基准坐标系转换的目标转换矩阵。
实施例四
图9为本发明实施例四提供的点云拼接方法的流程图,本发明实施例是在上述实施例的基础上,对上述实施例的进一步优化,具体包括如下步骤:
S410、以预设扫描轨迹采集至少一帧标定用点云数据。
S420、确定所述至少一帧标定用点云数据中的其他点云数据向所述基准点云数据的基准坐标系进行转换时,各自的变换矩阵,其中,所述其他点云数据是所述至少一帧标定用点云数据中除所述基准点云数据以外的其他帧的标定用点云数据。
S430、根据所述其他点云数据的转换前光心坐标,及相应各自的变换矩阵,确定所述其他点云数据的转换前光心坐标在所述基准坐标系下的转换后光心坐标。
S440、对至少一个所述转换后光心坐标进行半径约束的平面圆拟合,确定拟合得到的平面圆为光心拟合轨迹。
S450、根据所述光心拟合轨迹,标定所述采集设备的机构参数。
S460、获取所述采集设备的末端扫描器在任意扫描位姿下的旋转角以及位移;根据所述机构参数、所述旋转角和所述位移,确定所述采集设备在任意扫描位姿下采集的原始点云数据向所述基准坐标系转换的目标变换矩阵。
示例性的,获取采集设备的末端扫描器在任意扫描位姿下的旋转角以及位移可以是通过采集设备的末端扫描器的传感器获得。旋转角可以是采集设备的末端扫描器从某一扫描位姿移动到下一扫描位姿的旋转角度。位移可以是采集设备的末端扫描器从某一扫描位姿移动到下一扫描位姿的移动距离。原始点云数据可以是采集设备在任意扫描位姿下采集的点云数据。根据传感器获得采集设备的末端扫描器在任意扫描位姿下的旋转角和位移,以及机构参数,可得到 任意扫描位姿下的原始点云数据相对于基准坐标系的拼接变换矩阵,拼接变换矩阵具体可以是:采集设备的方位旋转轴的轴心坐标到原点(图8中的球心)的变换矩阵、方位旋转轴的旋转矩阵、俯仰旋转轴的轴心坐标到方位旋转轴的轴心坐标的变换矩阵、俯仰旋转轴的旋转矩阵和任一初始点云数据的光心坐标到俯仰转轴轴心坐标的变换矩阵。将得到任意扫描位姿下的原始点云数据相对于基准坐标系的拼接变换矩阵,基于如下公式,得到采集设备在任意扫描位姿下采集的原始点云数据向基准坐标系转换的目标变换矩阵:
$f(R,t)=RT_{\mathrm{PanCent2L}}\cdot R_{\varphi}\cdot RT_{\mathrm{TilCent2PanCent}}\cdot R_{\theta}\cdot RT_{\mathrm{L2TilCent}}$

其中，$RT_{\mathrm{PanCent2L}}$ 为方位旋转轴的轴心坐标到原点（图8中的球心）的变换矩阵，$R_{\varphi}$ 为方位旋转轴的旋转矩阵，$RT_{\mathrm{TilCent2PanCent}}$ 为俯仰旋转轴的轴心坐标到方位旋转轴的轴心坐标的变换矩阵，$R_{\theta}$ 为俯仰旋转轴的旋转矩阵，$RT_{\mathrm{L2TilCent}}$ 为任一初始点云数据的光心坐标到俯仰转轴轴心坐标的变换矩阵。
示例性的,如图7所示,其中,A和B分为为采集设备的两个旋转轴,本发明任意实施例中的采集设备为双轴机构为例,其机构的运动轨迹为两个圆周的叠加,至少三个点可以确定一个圆及圆心轴的方向,则可每旋转120°拍摄一幅点云,将360°全方位的点云通过视觉特征拼接即统一至基准图像的基坐标系下,三帧的零点坐标等于基坐标系下的光心坐标,即可由3点坐标求得方位旋转的轨迹曲线。同时,双轴机构可最少由每轴3个光心,共6个空间坐标拟合出的两轴的圆周轨迹,可以得到俯仰、旋转任意角度的位姿变换矩阵,这样在很大程度上简化了三维相机采集设备机构的标定,只要双轴电机具有较高的重复定位精度,即可完成该房间的三维重建。
这样根据所述机构参数、获取采集设备的末端扫描器在任意扫描位姿下的旋转角和位移,确定所述采集设备在任意扫描位姿下采集的原始点云数据向所述基准坐标系转换的目标变换矩阵,这样以便后续基于该目标变换矩阵,将任意扫描位姿下的初始点云数据进行拼接,以对待重建区域进行三维重建。
S470、根据所述目标变换矩阵,将所述采集设备在任意扫描位姿下采集的原始点云数据转换至所述基准坐标系,得到拼接后的目标点云数据。
示例性的,目标点云数据可以是基于目标变换矩阵,将采集设备在任意扫描位姿下采集的原始点云数据转换至基准坐标系下,所形成的点云数据。确定了目标变换矩阵后,可将任意扫描位姿下的初始点云数据转换至基准点云数据的基准坐标系下,这样就将任意扫描位姿下的初始点云数据统一到基准点云数 据的基准坐标系下,这样即可实现将任意扫描位姿下的初始点云数据进行高精度拼接的效果,进而实现待重建区域的高精度的三维重建。
需要说明的是,在将任意扫描位姿下的初始点云数据统一到基准点云数据的基准坐标系下后,实现了任意扫描位姿下的初始点云数据的粗匹配,但是可能还会存在极个别初始点云数据与基准坐标下的其他点云数据不太匹配,这样可以采用本发明实施例二中的ICP算法进行精细匹配,以获得更加精确完整的待重建区域的点云数据。
本发明实施例的技术方案,通过获取所述采集设备的末端扫描器在任意扫描位姿下的旋转角以及位移;根据所述机构参数、所述旋转角和所述位移,确定所述采集设备在任意扫描位姿下采集的原始点云数据向所述基准坐标系转换的目标变换矩阵,这样以便后续基于该目标变换矩阵,将任意扫描位姿下的初始点云数据进行拼接,以对待重建区域进行三维重建。根据所述目标变换矩阵,将所述采集设备在任意扫描位姿下采集的原始点云数据转换至所述基准坐标系,得到拼接后的目标点云数据,这样就将任意扫描位姿下的初始点云数据统一到基准点云数据的基准坐标系下,这样即可实现将任意扫描位姿下的初始点云数据进行高精度拼接的效果,进而实现待重建区域的高精度的三维重建。
实施例五
图10为本发明实施例五提供的点云拼接装置的结构示意图,如图10所示,该装置包括:标定用点云数据采集模块31、光心坐标确定模块32、机构参数标定模块33、目标变换矩阵确定模块34和点云数据拼接模块35。
其中,标定用点云数据采集模块31,用于以预设扫描轨迹采集至少一帧标定用点云数据;
光心坐标确定模块32,用于基于所述至少一帧标定用点云数据,确定所述至少一帧标定用点云数据各自的转换前光心坐标在所述基准点云数据的基准坐标系下的转换后光心坐标,其中,所述基准点云数据是所述至少一帧标定用点云数据中的任意一帧;
机构参数确定模块33,用于根据所述转换后光心坐标的光心拟合轨迹,标定采集设备的机构参数,其中,所述光心拟合轨迹是根据所述转换后光心坐标拟合成的轨迹;
目标变换矩阵确定模块34,用于根据所述机构参数,确定所述采集设备在任意扫描位姿下采集的原始点云数据向所述基准坐标系转换的目标变换矩阵;
点云数据拼接模块35,用于根据所述目标变换矩阵,对所述原始点云数据进行拼接,得到拼接后的目标点云。
在本发明实施例的技术方案的基础上,标定用点云数据采集模块31具体用于:
基于预设扫描轨迹,由采集设备对待重建区域进行点云扫描,采集至少一帧标定用点云数据,其中,所述待重建区域上设置有互不相同的特征标靶。
可选的,每一个所述特征标靶上设置有唯一编码标识,以通过唯一编码标识能将获取的所述标定用点云数据与各所述特征标靶对应。
可选的,所述特征标靶设置在待重建区域的扫描重叠区域,其中,所述扫描重叠区域根据所述采集设备的预设扫描轨迹和视场角确定。
在本发明实施例的技术方案的基础上,光心坐标确定模块32包括:
变换矩阵确定单元,用于确定所述至少一帧标定用点云数据中的其他点云数据向所述基准点云数据的基准坐标系进行转换时,各自的变换矩阵,其中,所述其他点云数据是所述至少一帧标定用点云数据中除所述基准点云数据以外的其他帧的标定用点云数据;
光心坐标确定单元,用于根据所述其他点云数据的转换前光心坐标,及相应各自的变换矩阵,确定所述其他点云数据的转换前光心坐标在所述基准坐标系下的转换后光心坐标。
可选的,所述基准点云数据的光心坐标,以及所述其他点云数据的转换后光心坐标,构成了所述至少一帧标定用点云数据的转换后光心坐标。
在本发明实施例的技术方案的基础上,变换矩阵确定单元包括:
匹配特征对确定子单元,用于对所述基准点云数据和任一所述其他点云数据进行特征提取,确定拼接用的匹配特征对;
转换参数确定子单元,用于根据所述匹配特征对,确定任一所述其他点云数据转换至所述基准坐标系时各自的转换参数;
变换矩阵确定子单元,用于根据所述各自的转换参数,确定任一所述其他点云数据向所述基准坐标系进行转换时,各自的变换矩阵。
可选的,所述转换参数包括旋转角度和平移量。
在本发明实施例的技术方案的基础上,机构参数标定模块33包括:
光心拟合轨迹拟合单元,用于对至少一个所述转换后光心坐标进行半径约束的平面圆拟合,确定拟合得到的平面圆为光心拟合轨迹;
机构参数标定单元,用于根据所述光心拟合轨迹,标定所述采集设备的机 构参数。
在本发明实施例的技术方案的基础上,光心拟合轨迹拟合单元包括:
投影坐标点确定子单元,用于根据至少一个所述转换后光心坐标,以及至少一个所述转换后光心坐标的拟合面的法向量,确定各所述转换后光心坐标在所述拟合面上的投影坐标点,其中,所述拟合面是根据所述转换后光心坐标拟合成的平面,所述投影坐标点是所述转换后光心坐标投射至所述拟合面上的最近邻坐标点;
光心拟合轨迹拟合子单元,用于对多个投影坐标点进行半径约束的平面圆拟合,确定拟合得到的平面圆为光心拟合轨迹。
可选的,所述机构参数包括:所述采集设备中各级旋转轴的轴长、各级旋转轴的轴心坐标和各级旋转轴的夹角。
在本发明实施例的技术方案的基础上,机构参数标定单元包括:
当前旋转轴机构参数确定子单元,确定当前所述光心拟合轨迹对应的拟合的平面圆的半径为所述采集设备的当前级旋转轴的轴长,拟合的平面圆的圆心为所述当前级旋转轴的轴心坐标;
下一级旋转轴机构参数确定子单元,对至少一个所述当前级旋转轴的轴心坐标进行半径约束的平面圆拟合,确定至少一个所述当前级旋转轴的轴心坐标所拟合的平面圆的半径为下一级旋转轴的轴长,所述至少一个所述当前级旋转轴的轴心坐标所拟合的平面圆的圆心为下一级旋转轴的轴心坐标。
各级旋转轴夹角确定子单元,用于根据各级旋转轴的拟合的平面圆的法向量,确定各级旋转轴之间所成的夹角。
可选的,在所述采集设备中,所述下一级旋转轴位于所述当前旋转轴的内侧。
在本发明实施例的技术方案的基础上,目标变换矩阵确定模块34包括:
参数获取单元,获取所述采集设备的末端扫描器在任意扫描位姿下的旋转角以及位移;
目标变换矩阵确定单元,用于根据所述机构参数、所述旋转角和所述位移,确定所述采集设备在任意扫描位姿下采集的原始点云数据向所述基准坐标系转换的目标变换矩阵。
在本发明实施例的技术方案的基础上,点云数据拼接模块35具体用于:
根据所述目标变换矩阵,将所述采集设备在任意扫描位姿下采集的原始点云数据转换至所述基准坐标系,得到拼接后的目标点云数据。
本发明实施例所提供的点云拼接装置可执行本发明任意实施例所提供的点云拼接方法,具备执行方法相应的功能模块和有益效果。
实施例六
图11为本发明实施例六提供的一种设备的结构示意图,如图11所示,该设备包括处理器40、存储器41、输入装置42和输出装置43;设备中处理器40的数量可以是一个或多个,图11中以一个处理器40为例;设备中的处理器40、存储器41、输入装置42和输出装置43可以通过总线或其他方式连接,图11中以通过总线连接为例。
存储器41作为一种计算机可读存储介质,可用于存储软件程序、计算机可执行程序以及模块,如本发明实施例中的点云拼接方法对应的程序指令/模块(例如,标定用点云数据采集模块31、光心坐标确定模块32、机构参数标定模块33、目标变换矩阵确定模块34和点云数据拼接模块35)。处理器40通过运行存储在存储器41中的软件程序、指令以及模块,从而执行设备的各种功能应用以及数据处理,即实现上述的点云拼接方法。
存储器41可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序;存储数据区可存储根据终端的使用所创建的数据等。此外,存储器41可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。在一些实例中,存储器41可进一步包括相对于处理器40远程设置的存储器,这些远程存储器可以通过网络连接至设备。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
输入装置42可用于接收输入的数字或字符信息,以及产生与设备的用户设置以及功能控制有关的键信号输入。输出装置43可包括显示屏等显示设备。
实施例七
本发明实施例七还提供一种包含计算机可执行指令的存储介质,请参阅图12,图12是本发明实施例的用于保存或者携带实现根据本发明实施例的点云拼接方法的程序代码的存储单元,其示出了本申请实施例提供的一种计算机可读存储介质的结构框图。该计算机可读介质50中存储有程序代码。所述计算机可执行指令在由计算机处理器执行时用于执行一种点云拼接方法。
当然,本发明实施例所提供的一种包含计算机可执行指令的存储介质50,其 计算机可执行指令不限于如上所述的方法操作,还可以执行本发明任意实施例所提供的点云拼接方法中的相关操作。
通过以上关于实施方式的描述,所属领域的技术人员可以清楚地了解到,本发明可借助软件及必需的通用硬件来实现,当然也可以通过硬件实现,但很多情况下前者是更佳的实施方式。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品可以存储在计算机可读存储介质50中,如计算机的软盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、闪存(FLASH)、硬盘或光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述的方法。计算机可读存储介质50具有执行上述方法中的任何方法步骤的程序代码51的存储空间。这些程序代码可以从一个或者多个计算机程序产品中读出或者写入到这一个或者多个计算机程序产品中。程序代码51可以例如以适当形式进行压缩。
值得注意的是,上述点云拼接装置的实施例中,所包括的各个单元和模块只是按照功能逻辑进行划分的,但并不局限于上述的划分,只要能够实现相应的功能即可;另外,各功能单元的具体名称也只是为了便于相互区分,并不用于限制本发明的保护范围。
注意,上述仅为本发明的较佳实施例及所运用技术原理。本领域技术人员会理解,本发明不限于这里所述的特定实施例,对本领域技术人员来说能够进行各种明显的变化、重新调整和替代而不会脱离本发明的保护范围。因此,虽然通过以上实施例对本发明进行了较为详细的说明,但是本发明不仅仅限于以上实施例,在不脱离本发明构思的情况下,还可以包括更多其他等效实施例,而本发明的范围由所附的权利要求范围决定。
Claims (20)
- 一种点云拼接方法,其特征在于,包括:以预设扫描轨迹采集至少一帧标定用点云数据;确定所述至少一帧标定用点云数据各自的转换前光心坐标在基准点云数据的基准坐标系下的转换后光心坐标,其中,所述基准点云数据是所述至少一帧标定用点云数据中的任意一帧;根据所述转换后光心坐标的光心拟合轨迹,标定采集设备的机构参数,其中,所述光心拟合轨迹是根据所述转换后光心坐标拟合成的轨迹;根据所述机构参数,确定所述采集设备在任意扫描位姿下采集的原始点云数据向所述基准坐标系转换的目标变换矩阵;根据所述目标变换矩阵,对所述原始点云数据进行拼接,得到拼接后的目标点云数据。
- 根据权利要求1所述的方法,其特征在于,所述以预设扫描轨迹采集至少一帧标定用点云数据,包括:基于预设扫描轨迹,由采集设备对待重建区域进行点云扫描,采集至少一帧标定用点云数据,其中,所述待重建区域上设置有互不相同的特征标靶。
- 根据权利要求2所述的方法,其特征在于,每一个所述特征标靶上设置有唯一编码标识,以通过唯一编码标识能将获取的所述标定用点云数据与各所述特征标靶对应。
- 根据权利要求2所述的方法,其特征在于,所述特征标靶设置在所述待重建区域的扫描重叠区域,其中,所述扫描重叠区域根据所述采集设备的预设扫描轨迹和视场角确定。
- 根据权利要求1-4中任意一项所述的方法,其特征在于,所述确定所述至少一帧标定用点云数据各自的转换前光心坐标在基准点云数据的基准坐标系下的转换后光心坐标,包括:确定所述至少一帧标定用点云数据中的其他点云数据向所述基准点云数据的基准坐标系进行转换时,各自的变换矩阵,其中,所述其他点云数据是所述至少一帧标定用点云数据中除所述基准点云数据以外的其他帧的标定用点云数据;根据所述其他点云数据的转换前光心坐标,及相应各自的变换矩阵,确定所述其他点云数据的转换前光心坐标在所述基准坐标系下的转换后光心坐标;其中,所述基准点云数据的光心坐标,以及所述其他点云数据的转换后光 心坐标,构成了所述至少一帧标定用点云数据的转换后光心坐标。
- 根据权利要求5所述的方法,其特征在于,所述确定所述至少一帧标定用点云数据中的其他点云数据向所述基准点云数据的基准坐标系进行转换时,各自的变换矩阵,包括:通过尺度不变特征转换算法或最近迭代算法,确定所述至少一帧标定用点云数据中的其他点云数据向所述基准点云数据的基准坐标系进行转换时,各自的变换矩阵。
- 根据权利要求5所述的方法,其特征在于,所述确定所述至少一帧标定用点云数据中的其他点云数据向所述基准点云数据的基准坐标系进行转换时,各自的变换矩阵,包括:对所述基准点云数据和任一所述其他点云数据进行特征提取,确定拼接用的匹配特征对;根据所述匹配特征对,确定任一所述其他点云数据转换至所述基准坐标系时各自的转换参数;根据所述各自的转换参数,确定任一所述其他点云数据向所述基准坐标系进行转换时,各自的变换矩阵。
- 根据权利要求7所述的方法,其特征在于,所述对所述基准点云数据和任一所述其他点云数据进行特征提取,确定拼接用的匹配特征对,包括:对所述基准点云数据和任一所述其他点云数据进行特征提取;对所述基准点云数据和任一所述其他点云数据进行特征匹配,确定拼接用的匹配特征对。
- 根据权利要求7所述的方法,其特征在于,一个所述匹配特征对包括两帧点云数据,所述根据所述匹配特征对,确定任一所述其他点云数据转换至所述基准坐标系时各自的转换参数,包括:根据所述匹配特征对提取所述两帧点云数据分别对应的两个特征靶的中心坐标;计算所述两个特征靶的中心坐标之间的中心距离,且依据所述中心距离构建均方根误差函数;通过最近迭代算法拟合所述均方根误差函数的参数;当所述均方根误差函数小于阈值时,确定所述均方根误差函数的参数作为所述转换参数。
- 根据权利要求7或9所述的方法,其特征在于,所述转换参数包括旋 转角度和平移量。
- 根据权利要求1-10任意一项所述的方法,其特征在于,所述根据所述转换后光心坐标的光心拟合轨迹,标定采集设备的机构参数,包括:对至少一个所述转换后光心坐标进行半径约束的平面圆拟合,确定拟合得到的平面圆为光心拟合轨迹;根据所述光心拟合轨迹,标定所述采集设备的机构参数。
- 根据权利要求11所述的方法,其特征在于,所述对至少一个所述转换后光心坐标进行半径约束的平面圆拟合,确定拟合得到的平面圆为光心拟合轨迹,包括:根据至少一个所述转换后光心坐标,以及至少一个所述转换后光心坐标的拟合面的法向量,确定各所述转换后光心坐标在所述拟合面上的投影坐标点,其中,所述拟合面是根据所述转换后光心坐标拟合成的平面,所述投影坐标点是所述转换后光心坐标投射至所述拟合面上的最近邻坐标点;对多个投影坐标点进行半径约束的平面圆拟合,确定拟合得到的平面圆为光心拟合轨迹。
- 根据权利要求12所述的方法,其特征在于,所述机构参数包括:所述采集设备的各级旋转轴的轴长、各级旋转轴的轴心坐标和各级旋转轴的夹角;所述根据所述光心拟合轨迹,标定所述采集设备的机构参数,包括:确定当前所述光心拟合轨迹对应的拟合的平面圆的半径为所述采集设备的当前级旋转轴的轴长,拟合的平面圆的圆心为所述当前级旋转轴的轴心坐标;对至少一个所述当前级旋转轴的轴心坐标进行半径约束的平面圆拟合,确定至少一个所述当前级旋转轴的轴心坐标所拟合的平面圆的半径为下一级旋转轴的轴长,所述至少一个所述当前级旋转轴的轴心坐标所拟合的平面圆的圆心为下一级旋转轴的轴心坐标;根据各级旋转轴的拟合的平面圆的法向量,确定各级旋转轴之间所成的夹角;其中,在所述采集设备中,所述下一级旋转轴位于所述当前级旋转轴的内侧。
- 根据权利要求1或13所述的方法,其特征在于,所述根据所述机构参数,确定所述采集设备在任意扫描位姿下采集的原始点云数据向所述基准坐标系转换的目标变换矩阵,包括:根据各级旋转轴之间所成的夹角,获取任意扫描位姿下的初始点云数据向 基准坐标系进行转换的目标转换矩。
- 根据权利要求11所述的方法,其特征在于,所述根据所述光心拟合轨迹,标定所述采集设备的机构参数,包括:根据所述光心拟合轨迹得到拟合圆的法向量、圆心坐标和半径;基于预设对应关系,得到所述法向量、所述圆心坐标和所述半径对应的所述机构参数,其中,所述预设对应关系包括所述法向量、所述圆心坐标、所述半径和所述机构参数之间的对应关系。
- 根据权利要求1-15任意一项所述的方法,其特征在于,所述根据所述机构参数,确定所述采集设备在任意扫描位姿下采集的原始点云数据向所述基准坐标系转换的目标变换矩阵,包括:获取所述采集设备的末端扫描器在任意扫描位姿下的旋转角以及位移;根据所述机构参数、所述旋转角和所述位移,确定所述采集设备在任意扫描位姿下采集的原始点云数据向所述基准坐标系转换的目标变换矩阵。
- 根据权利要求1-16任意一项所述的方法,其特征在于,根据所述目标变换矩阵,对所述原始点云数据进行拼接,得到拼接后的目标点云数据,包括:根据所述目标变换矩阵,将所述采集设备在任意扫描位姿下采集的原始点云数据转换至所述基准坐标系,得到拼接后的目标点云数据。
- 一种点云拼接装置,其特征在于,包括:标定用点云数据采集模块,用于以预设扫描轨迹采集至少一帧标定用点云数据;光心坐标转换模块,用于基于所述至少一帧标定用点云数据,确定所述至少一帧标定用点云数据各自的转换前光心坐标在所述基准点云数据的基准坐标系下的转换后光心坐标,其中,所述基准点云数据是所述至少一帧标定用点云数据中的任意一帧;机构参数标定模块,用于根据所述转换后光心坐标的光心拟合轨迹,标定采集设备的机构参数,其中,所述光心拟合轨迹是根据所述转换后光心坐标拟合成的轨迹;目标变换矩阵确定模块,用于根据所述机构参数,确定所述采集设备在任意扫描位姿下采集的原始点云数据向所述基准坐标系转换的目标变换矩阵;点云数据拼接模块,用于根据所述目标变换矩阵,对所述原始点云数据进行拼接,得到拼接后的目标点云。
- 一种设备,其特征在于,所述设备包括:一个或多个处理器;存储装置,用于存储一个或多个程序;当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如权利要求1-17中任一所述的点云拼接方法。
- 一种包含计算机可执行指令的存储介质,其特征在于,所述计算机可执行指令在由计算机处理器执行时用于执行如权利要求1-17中任一所述的点云拼接方法。
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010317943.2A CN113532311B (zh) | 2020-04-21 | 2020-04-21 | 点云拼接方法、装置、设备和存储设备 |
CN202010317943.2 | 2020-04-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021212844A1 true WO2021212844A1 (zh) | 2021-10-28 |
Family
ID=78093889
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/133375 WO2021212844A1 (zh) | 2020-04-21 | 2020-12-02 | 点云拼接方法、装置、设备和存储设备 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113532311B (zh) |
WO (1) | WO2021212844A1 (zh) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114092335B (zh) * | 2021-11-30 | 2023-03-10 | 群滨智造科技(苏州)有限公司 | 一种基于机器人标定的图像拼接方法、装置、设备及存储介质 |
CN114413790B (zh) * | 2022-01-31 | 2023-07-04 | 北京航空航天大学 | 固连摄影测量相机的大视场三维扫描装置及方法 |
CN114820307A (zh) * | 2022-04-02 | 2022-07-29 | 杭州汇萃智能科技有限公司 | 3d线扫描相机的点云拼接方法、系统和可读存储介质 |
CN114719775B (zh) * | 2022-04-06 | 2023-08-29 | 新拓三维技术(深圳)有限公司 | 一种运载火箭舱段自动化形貌重建方法及系统 |
CN114708150A (zh) * | 2022-05-02 | 2022-07-05 | 先临三维科技股份有限公司 | 一种扫描数据处理方法、装置、电子设备及介质 |
CN115638725B (zh) * | 2022-10-26 | 2024-07-26 | 成都清正公路工程试验检测有限公司 | 一种基于自动测量系统的目标点位测量方法 |
CN116228831B (zh) * | 2023-05-10 | 2023-08-22 | 深圳市深视智能科技有限公司 | 耳机接缝处的段差测量方法及系统、校正方法、控制器 |
CN116739898B (zh) * | 2023-06-03 | 2024-04-30 | 广东西克智能科技有限公司 | 基于圆柱特征的多相机点云拼接方法和装置 |
CN116486020B (zh) * | 2023-06-21 | 2024-02-13 | 季华实验室 | 一种三维重建方法及相关设备 |
CN118379469A (zh) * | 2024-05-28 | 2024-07-23 | 先临三维科技股份有限公司 | 扫描方法、电子设备和计算机可读存储介质 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102607457A (zh) * | 2012-03-05 | 2012-07-25 | 西安交通大学 | 基于惯性导航技术的大尺寸三维形貌测量装置及方法 |
CN105225269A (zh) * | 2015-09-22 | 2016-01-06 | 浙江大学 | 基于运动机构的三维物体建模系统 |
CN107833181A (zh) * | 2017-11-17 | 2018-03-23 | 沈阳理工大学 | 一种基于变焦立体视觉的三维全景图像生成方法及系统 |
CN109029284A (zh) * | 2018-06-14 | 2018-12-18 | 大连理工大学 | 一种基于几何约束的三维激光扫描仪与相机标定方法 |
CN109708578A (zh) * | 2019-02-25 | 2019-05-03 | 中国农业科学院农业信息研究所 | 一种植株表型参数测量装置、方法及系统 |
CN109901138A (zh) * | 2018-12-28 | 2019-06-18 | 文远知行有限公司 | 激光雷达标定方法、装置、设备和存储介质 |
US20190258225A1 (en) * | 2017-11-17 | 2019-08-22 | Kodak Alaris Inc. | Automated 360-degree dense point object inspection |
CN110751719A (zh) * | 2019-10-22 | 2020-02-04 | 深圳瀚维智能医疗科技有限公司 | 乳房三维点云重建方法、装置、存储介质及计算机设备 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106651752B (zh) * | 2016-09-27 | 2020-01-21 | 深圳市速腾聚创科技有限公司 | 三维点云数据配准方法及拼接方法 |
CN109064400A (zh) * | 2018-07-25 | 2018-12-21 | 博众精工科技股份有限公司 | 三维点云拼接方法、装置及系统 |
CN109253706B (zh) * | 2018-08-24 | 2020-07-03 | 中国科学技术大学 | 一种基于数字图像的隧道三维形貌测量方法 |
CN109509226B (zh) * | 2018-11-27 | 2023-03-28 | 广东工业大学 | 三维点云数据配准方法、装置、设备及可读存储介质 |
CN109856642B (zh) * | 2018-12-20 | 2023-05-16 | 上海海事大学 | 一种基于旋转三维激光测量系统的平面标定方法 |
CN110163797B (zh) * | 2019-05-31 | 2020-03-31 | 四川大学 | 一种标定转台位姿关系实现任意角点云拼接的方法及装置 |
CN111968129B (zh) * | 2020-07-15 | 2023-11-07 | 上海交通大学 | 具有语义感知的即时定位与地图构建系统及方法 |
Cited By (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114037612A (zh) * | 2021-11-02 | 2022-02-11 | 上海建工集团股份有限公司 | 一种大面积三维扫描固定测站点云数据拼接方法 |
CN114299104A (zh) * | 2021-12-23 | 2022-04-08 | 中铭谷智能机器人(广东)有限公司 | 一种基于多个3d视觉的汽车喷涂轨迹生成方法 |
CN114299104B (zh) * | 2021-12-23 | 2024-05-31 | 中铭谷智能机器人(广东)有限公司 | 一种基于多个3d视觉的汽车喷涂轨迹生成方法 |
CN114234862A (zh) * | 2021-12-27 | 2022-03-25 | 苏州方石科技有限公司 | 地坪检测设备及地坪检测设备的使用方法 |
US20230204758A1 (en) * | 2021-12-27 | 2023-06-29 | Suzhou Fangshi Technology Co., Ltd. | Terrace detection device and use method of terrace detection device |
US12000925B2 (en) * | 2021-12-27 | 2024-06-04 | Suzhou Fangshi Technology Co., Ltd. | Terrace detection device and use method of terrace detection device |
CN114383519A (zh) * | 2021-12-29 | 2022-04-22 | 国能铁路装备有限责任公司 | 一种转向架的装配高度差的测量方法、装置和测量设备 |
CN114372916B (zh) * | 2021-12-31 | 2024-05-31 | 易思维(杭州)科技股份有限公司 | 一种自动化点云拼接方法 |
CN114372916A (zh) * | 2021-12-31 | 2022-04-19 | 易思维(杭州)科技有限公司 | 一种自动化点云拼接方法 |
CN114440792A (zh) * | 2022-01-11 | 2022-05-06 | 重庆固高科技长江研究院有限公司 | 多线激光传感的封闭布局结构、扫描拼接及涂胶扫描方法 |
CN114485592A (zh) * | 2022-02-28 | 2022-05-13 | 中国电建集团西北勘测设计研究院有限公司 | 一种确保排水箱涵三维点云坐标转换精度的方法 |
CN114485592B (zh) * | 2022-02-28 | 2023-05-05 | 中国电建集团西北勘测设计研究院有限公司 | 一种确保排水箱涵三维点云坐标转换精度的方法 |
CN114782315A (zh) * | 2022-03-17 | 2022-07-22 | 清华大学 | 轴孔装配位姿精度的检测方法、装置、设备及存储介质 |
CN115035206A (zh) * | 2022-05-09 | 2022-09-09 | 浙江华睿科技股份有限公司 | 一种激光点云的压缩方法、解压方法及相关装置 |
CN115035206B (zh) * | 2022-05-09 | 2024-03-29 | 浙江华睿科技股份有限公司 | 一种激光点云的压缩方法、解压方法及相关装置 |
CN114770516A (zh) * | 2022-05-19 | 2022-07-22 | 梅卡曼德(北京)机器人科技有限公司 | 通过点云获取装置对机器人进行标定的方法以及标定系统 |
CN115033842A (zh) * | 2022-06-17 | 2022-09-09 | 合肥工业大学 | 一种空间6自由度位姿变换数据的拟合方法及拟合系统 |
CN115033842B (zh) * | 2022-06-17 | 2024-02-20 | 合肥工业大学 | 一种空间6自由度位姿变换数据的拟合方法及拟合系统 |
CN115026834A (zh) * | 2022-07-02 | 2022-09-09 | 埃夫特智能装备股份有限公司 | 一种基于机器人模板程序纠偏功能的实现方法 |
CN115439630A (zh) * | 2022-08-04 | 2022-12-06 | 思看科技(杭州)股份有限公司 | 标记点拼接方法、摄影测量方法、装置和电子装置 |
CN115439630B (zh) * | 2022-08-04 | 2024-04-19 | 思看科技(杭州)股份有限公司 | 标记点拼接方法、摄影测量方法、装置和电子装置 |
CN115423934A (zh) * | 2022-08-12 | 2022-12-02 | 北京城市网邻信息技术有限公司 | 户型图生成方法、装置、电子设备及存储介质 |
CN115423934B (zh) * | 2022-08-12 | 2024-03-08 | 北京城市网邻信息技术有限公司 | 户型图生成方法、装置、电子设备及存储介质 |
CN115908482A (zh) * | 2022-10-14 | 2023-04-04 | 荣耀终端有限公司 | 建模错误数据的定位方法和装置 |
CN115908482B (zh) * | 2022-10-14 | 2023-10-20 | 荣耀终端有限公司 | 建模错误数据的定位方法和装置 |
CN115343299A (zh) * | 2022-10-18 | 2022-11-15 | 山东大学 | 一种轻量化公路隧道集成检测系统及方法 |
CN115343299B (zh) * | 2022-10-18 | 2023-03-21 | 山东大学 | 一种轻量化公路隧道集成检测系统及方法 |
CN116071231B (zh) * | 2022-12-16 | 2023-12-29 | 群滨智造科技(苏州)有限公司 | 眼镜框的点油墨工艺轨迹的生成方法、装置、设备及介质 |
CN116071231A (zh) * | 2022-12-16 | 2023-05-05 | 群滨智造科技(苏州)有限公司 | 眼镜框的点油墨工艺轨迹的生成方法、装置、设备及介质 |
CN116721239B (zh) * | 2023-06-12 | 2024-01-26 | 山西阳光三极科技股份有限公司 | 一种基于多个雷达设备的自动化点云拼接方法 |
CN116721239A (zh) * | 2023-06-12 | 2023-09-08 | 山西阳光三极科技股份有限公司 | 一种基于多个雷达设备的自动化点云拼接方法 |
CN116781837B (zh) * | 2023-08-25 | 2023-11-14 | 中南大学 | 一种自动化激光三维扫描系统 |
CN116781837A (zh) * | 2023-08-25 | 2023-09-19 | 中南大学 | 一种自动化激光三维扫描系统 |
CN117197170A (zh) * | 2023-11-02 | 2023-12-08 | 佛山科学技术学院 | 一种单目相机视场角测量方法及系统 |
CN117197170B (zh) * | 2023-11-02 | 2024-02-09 | 佛山科学技术学院 | 一种单目相机视场角测量方法及系统 |
CN117557442A (zh) * | 2023-12-21 | 2024-02-13 | 江苏集萃激光科技有限公司 | 燃料电池极板3d点云模型获取装置及方法 |
CN117470106A (zh) * | 2023-12-27 | 2024-01-30 | 中铁四局集团第二工程有限公司 | 狭小空间点云绝对数据采集方法以及模型建立设备 |
CN117470106B (zh) * | 2023-12-27 | 2024-04-12 | 中铁四局集团有限公司 | 狭小空间点云绝对数据采集方法以及模型建立设备 |
CN117991250A (zh) * | 2024-01-04 | 2024-05-07 | 广州里工实业有限公司 | 一种移动机器人的定位检测方法、系统、设备及介质 |
CN118134973A (zh) * | 2024-01-27 | 2024-06-04 | 南京林业大学 | 基于Gocator传感器的点云拼接与配准系统以及方法 |
CN117745537B (zh) * | 2024-02-21 | 2024-05-17 | 微牌科技(浙江)有限公司 | 隧道设备温度检测方法、装置、计算机设备和存储介质 |
CN117745537A (zh) * | 2024-02-21 | 2024-03-22 | 微牌科技(浙江)有限公司 | 隧道设备温度检测方法、装置、计算机设备和存储介质 |
CN118037729A (zh) * | 2024-04-12 | 2024-05-14 | 法奥意威(苏州)机器人系统有限公司 | 圆形焊缝焊接处理方法、装置、设备和介质 |
CN118485809A (zh) * | 2024-07-09 | 2024-08-13 | 深圳市德壹医疗科技有限公司 | 一种面部理疗机器人6d姿态轨迹自动生成方法 |
CN118485809B (zh) * | 2024-07-09 | 2024-09-27 | 深圳市德壹医疗科技有限公司 | 一种面部理疗机器人6d姿态轨迹自动生成方法 |
CN118513749A (zh) * | 2024-07-24 | 2024-08-20 | 安徽工布智造工业科技有限公司 | 一种管板圆孔焊接方法和系统 |
Also Published As
Publication number | Publication date |
---|---|
CN113532311B (zh) | 2023-06-09 |
CN113532311A (zh) | 2021-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021212844A1 (zh) | 点云拼接方法、装置、设备和存储设备 | |
CN112183171B (zh) | 一种基于视觉信标建立信标地图方法、装置 | |
JP5746477B2 (ja) | モデル生成装置、3次元計測装置、それらの制御方法及びプログラム | |
Grant et al. | Finding planes in LiDAR point clouds for real-time registration | |
JP6011548B2 (ja) | カメラ校正装置、カメラ校正方法およびカメラ校正用プログラム | |
Heller et al. | Structure-from-motion based hand-eye calibration using L∞ minimization | |
US9467682B2 (en) | Information processing apparatus and method | |
CN110246185B (zh) | 图像处理方法、装置、系统、存储介质和标定系统 | |
JP6370038B2 (ja) | 位置姿勢計測装置及び方法 | |
WO2021185217A1 (zh) | 一种基于多激光测距和测角的标定方法 | |
Ahmadabadian et al. | An automatic 3D reconstruction system for texture-less objects | |
JP2012128744A (ja) | 物体認識装置、物体認識方法、学習装置、学習方法、プログラム、および情報処理システム | |
CN109215086A (zh) | 相机外参标定方法、设备及系统 | |
CN111811395A (zh) | 基于单目视觉的平面位姿动态测量方法 | |
JP2017117386A (ja) | 自己運動推定システム、自己運動推定システムの制御方法及びプログラム | |
WO2018233514A1 (zh) | 一种位姿测量方法、设备及存储介质 | |
Covas et al. | 3D reconstruction with fisheye images strategies to survey complex heritage buildings | |
Kukelova et al. | Hand-eye calibration without hand orientation measurement using minimal solution | |
CN115713564A (zh) | 相机标定方法及装置 | |
Wu | Photogrammetry: 3-D from imagery | |
JP2005275789A (ja) | 三次元構造抽出方法 | |
Castanheiro et al. | Modeling hyperhemispherical points and calibrating a dual-fish-eye system for close-range applications | |
Mahinda et al. | Development of an effective 3D mapping technique for heritage structures | |
CN113223163A (zh) | 点云地图构建方法及装置、设备、存储介质 | |
Xu et al. | Automatic registration method for TLS LiDAR data and image-based reconstructed data |
Legal Events
Date | Code | Title | Description
---|---|---|---|
- | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20932571; Country of ref document: EP; Kind code of ref document: A1
- | NENP | Non-entry into the national phase | Ref country code: DE
- | 122 | Ep: pct application non-entry in european phase | Ref document number: 20932571; Country of ref document: EP; Kind code of ref document: A1