CN113706591B - Point cloud-based three-dimensional reconstruction method for surface weak texture satellite - Google Patents


Info

Publication number
CN113706591B
Authority
CN
China
Prior art keywords
point cloud
pose
point
frame
registration
Prior art date
Legal status
Active
Application number
CN202110874322.9A
Other languages
Chinese (zh)
Other versions
CN113706591A
Inventor
易建军
张回
曾飞
丁洪凯
苏林
范体军
Current Assignee
East China University of Science and Technology
Original Assignee
East China University of Science and Technology
Priority date
Filing date
Publication date
Application filed by East China University of Science and Technology
Priority to CN202110874322.9A
Publication of CN113706591A
Application granted
Publication of CN113706591B
Legal status: Active
Anticipated expiration

Classifications

    • G06T7/30 — Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 — Image registration using feature-based methods
    • G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T2207/10028 — Range image; depth image; 3D point clouds
    • Y02T10/40 — Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a point cloud-based three-dimensional reconstruction method for satellites with weakly textured surfaces. The method comprises the steps of point cloud acquisition and preprocessing, inter-frame point cloud registration and key frame selection, loop detection and back-end pose optimization, and model surface reconstruction. It estimates pose from the processed point clouds, screens out key-frame point clouds according to the pose results, performs pose optimization, and finally registers and fuses the point clouds to complete model surface reconstruction. The method enables three-dimensional reconstruction of a satellite under weak surface texture and poor illumination, providing a good foundation for the identification and capture of the satellite's components.

Description

Point cloud-based three-dimensional reconstruction method for surface weak texture satellite
Technical Field
The invention relates to the technical field of three-dimensional reconstruction, in particular to a point cloud-based three-dimensional reconstruction method for a surface weak texture satellite.
Background
In space, a satellite must be reconstructed in three dimensions before its components can be identified and captured. However, owing to factors such as poor illumination conditions in space and the weak texture of satellite surfaces, such satellites cannot be reconstructed with image-based visual three-dimensional reconstruction schemes.
Disclosure of Invention
The invention aims to provide a point cloud-based three-dimensional reconstruction method for satellites with weakly textured surfaces, solving the technical problem that such satellites currently cannot be reconstructed with image-based visual three-dimensional reconstruction schemes.
In order to achieve the above objective, in one embodiment of the present invention, a method for three-dimensional reconstruction of a point cloud-based surface weak texture satellite is provided, comprising the following steps:
the method comprises the steps of obtaining and preprocessing point clouds, namely obtaining single-frame point clouds of satellites, and preprocessing the point clouds;
an inter-frame point cloud registration and key frame selection step, namely performing FPFH (Fast Point Feature Histogram) coarse registration and ICP (Iterative Closest Point) fine registration on the point clouds to obtain the accurate pose of the satellite, and screening key frames; the FPFH coarse registration collects the point cloud features into a unified histogram to obtain a coarse registration pose; the ICP fine registration takes the coarse registration pose as the initial pose and obtains the fine registration pose by iteration;
a loop detection and back-end pose optimization step, namely updating the pose graph and performing loop detection based on the selected key frames, where loop detection finds, among the historical key frames, a frame that is not adjacent to the current key frame but is close to it in position; if a loop occurs, the pose graph is optimized; after the pose graph is optimized, or when no loop occurs, it is judged whether inter-frame point cloud registration has finished; if not, the method returns to the point cloud acquisition and preprocessing step to acquire the next frame of point cloud; if finished, pose graph optimization is carried out; and
a model surface reconstruction step, namely performing TSDF-based surface reconstruction after the pose graph optimization has finished, completing the three-dimensional modeling of the satellite.
Further, the step of preprocessing the point cloud in the step of acquiring and preprocessing the point cloud includes: for each frame of point cloud, firstly, removing the background by utilizing spatial three-dimensional coordinate value screening according to the spatial position of the satellite, and obtaining the point cloud only with the satellite; and then, sequentially performing outlier removal and voxel filtering downsampling on the point cloud of the satellite, and calculating a corresponding normal vector to finish preprocessing of the point cloud.
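The preprocessing described above (background removal by screening on spatial three-dimensional coordinate values, then downsampling) can be sketched in NumPy; the function name `preprocess_cloud` and its parameters are illustrative rather than from the patent, and statistical outlier removal and normal estimation are omitted for brevity:

```python
import numpy as np

def preprocess_cloud(points, bbox_min, bbox_max, voxel_size):
    """Keep only points inside the satellite's bounding box, then
    voxel-grid downsample by averaging the points in each voxel."""
    # Background removal: screen by spatial 3D coordinate values.
    mask = np.all((points >= bbox_min) & (points <= bbox_max), axis=1)
    pts = points[mask]
    # Voxel filtering downsampling: one centroid per occupied voxel.
    keys = np.floor((pts - bbox_min) / voxel_size).astype(int)
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, pts)       # sum points per voxel
    return centroids / counts[:, None]       # voxel centroids
```

A real pipeline would follow this with outlier removal and per-point normal estimation, as the patent specifies.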
Further, the steps of inter-frame point cloud registration and keyframe selection include:
a point cloud data processing step, namely defining a fixed local coordinate system, wherein a single-frame point cloud acquired firstly is a target point cloud, and a point cloud acquired later is a source point cloud;
an FPFH feature coarse registration step, namely extracting the FPFH features of the source point cloud and the target point cloud in the local coordinate system, matching the FPFH features of the source point cloud and adjusting its pose, combining them with the FPFH features of the target point cloud to form corresponding feature point pairs, and, after iteratively updating the corresponding feature point pairs until their positions coincide, forming the fast point feature histogram of the satellite as the final coarse registration pose;
an ICP fine registration step, namely, in the local coordinate system, taking the final coarse registration pose as the initial pose, identifying a satellite edge frame in the initial pose as a historical key frame, transforming the coordinates of the source point cloud with the plane of the historical key frame as reference, and, after iterating for a preset number of iterations or until the distance from the points of the source point cloud to the reference plane is below a distance threshold, forming the accurate pose of the satellite as the fine registration pose; and
and screening the key frames, namely screening satellite edge frames based on the accurate pose of the satellite to serve as current key frames.
Further, the FPFH feature coarse registration step includes an FPFH feature extraction step, where the FPFH feature describes local geometric characteristics of points, and is described by using a 33-dimensional feature vector; the calculation of the FPFH features is divided into two steps:
a) Defining a fixed local coordinate system, calculating a series of α, φ, θ feature values between each query point and its neighborhood points in the point cloud using the following formulas, and collecting the feature values into a unified histogram to obtain the simplified point feature histogram:

α = v · n_t
φ = u · (p_t − p_s) / ||p_t − p_s||
θ = arctan(w · n_t, u · n_t)

wherein p_s is a point in the point cloud;
p_t is a neighborhood point of p_s;
n_s, n_t are the normals at the corresponding points;
u, v, w are the three axes of the local coordinate system constructed with p_s as origin;
α is the angle between n_t and the v axis;
φ is the angle between n_s and (p_t − p_s);
θ is the angle between the u axis and the projection of n_t onto the u–w plane; and

b) Re-determining the k-neighborhood of each point in the point cloud, and computing the FPFH value of the query point p_q from the neighboring SPFH values using the following formula:

FPFH(p_q) = SPFH(p_q) + (1/k) · Σ_{i=1}^{k} (1/ω_i) · SPFH(p_i)

wherein the k-neighborhood is the set of the k points nearest to a point;
p_i is a point in the k-neighborhood of the query point;
ω_i is the weight, representing the distance between the query point p_q and the neighboring point p_i.
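The (α, φ, θ) triplet above can be computed directly from one point pair and its normals. The sketch below follows the usual Darboux-frame convention u = n_s, v = u × d, w = u × v (with d the normalized direction p_t − p_s, assumed not parallel to n_s); the function name is hypothetical:

```python
import numpy as np

def pfh_triplet(p_s, n_s, p_t, n_t):
    """Angle features between a point pair (p_s, p_t) with unit normals
    (n_s, n_t), per the formulas above."""
    d = p_t - p_s
    d = d / np.linalg.norm(d)
    u = n_s                       # first frame axis is the source normal
    v = np.cross(u, d)            # assumes d is not parallel to n_s
    v = v / np.linalg.norm(v)
    w = np.cross(u, v)
    alpha = np.dot(v, n_t)                               # alpha = v . n_t
    phi = np.dot(u, d)                                   # u . d, d normalized
    theta = np.arctan2(np.dot(w, n_t), np.dot(u, n_t))   # arctan(w.n_t, u.n_t)
    return alpha, phi, theta
```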
Further, the FPFH feature coarse registration step includes, after the FPFH feature extraction step, a feature registration step based on random sample consensus, specifically comprising:
a) Firstly, randomly sampling FPFH characteristics of a target point cloud, and inquiring characteristic points corresponding to sampling points in a source point cloud;
b) Then, resolving the pose of the inter-frame point cloud based on the queried characteristic points by adopting a least square method, and carrying out coordinate transformation;
c) Then, in the transformed inter-frame point cloud, searching for all feature matching points of the target point cloud by inquiring a 33-dimensional FPFH feature space of the source point cloud, and realizing feature mismatching elimination based on the Euclidean distance of the corresponding points, the length of a connecting line segment between two features and a normal vector of the feature points; and
d) Counting the number of corresponding feature points remaining after mismatch rejection, i.e. the number of inliers, and judging whether the number of iterations has reached the termination condition; if not, updating the corresponding feature point pairs and repeating steps a) to c); if the iteration has ended, selecting the pose corresponding to the largest number of inliers as the final pose result.
Further, the ICP fine registration step includes:
a) Transforming the coordinates of the source point cloud with the initial pose, then finding the nearest-neighbor corresponding points between the transformed frames; the matching set formed by corresponding points between the target point cloud p and the transformed source point cloud Tq is denoted κ = {(p, q)};
b) Solving the pose matrix T by minimizing, as the objective function E(T), the point-to-plane distance defined over the matching set κ:

E(T) = Σ_{(p,q)∈κ} ((p − Tq) · n_p)²

where E(T) is the objective function of ICP registration, expressed as the point-to-plane distance; T is the pose matrix to be solved by ICP registration, comprising a rotation matrix and a translation matrix; n_p is the normal vector at point p;
c) Judging whether the iteration count or the distance threshold has reached the iteration termination condition; if not, updating the initial pose with the solved pose and repeating the above steps; if the iteration has terminated, the accurate pose is obtained.
Further, when screening the key frames, the matching degree, the root mean square error of the inliers, and the pose change magnitude are used as criteria; the matching degree characterizes the size of the overlapping area of two point cloud frames, specifically the number of matched inliers in the target point cloud P; the root mean square error of the inliers is the root mean square error over all matched inliers; the pose change magnitude characterizes the motion amplitude of the sensor acquiring the point clouds, measured from the pose matrix T_{l,c} by adding the norm of its rotation component to the norm of its translation vector.
Further, in the loop detection and back-end pose optimization step, the loop detection proceeds as follows:
a) When a key frame is detected, a vertex is added to the pose graph and the pose matrix of the frame is recorded; an edge is added connecting the current vertex to the previous vertex, and the transformation pose matrix between the adjacent key frames is recorded; the translation part of the frame's pose matrix is taken as its coordinate and recorded;
b) When the total number of key frames exceeds 10, the 5 key frames whose coordinates are closest to the current key frame are searched with a k-dimensional tree;
c) If the index difference between a found key frame and the current key frame exceeds 10, a loop closure is considered to have occurred; otherwise no loop is formed, and the above steps are repeated;
d) After a loop closure is detected, inter-frame registration is performed between the loop frame point cloud and the current frame point cloud, the two corresponding vertices are connected by an edge, the pose error between the two frames is recorded, the pose graph is updated and optimized, and the above steps are repeated.
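The loop-detection rule above (search the 5 nearest key frames once more than 10 exist, flag a loop when the index gap exceeds 10) can be sketched as follows; a brute-force nearest-neighbor search stands in for the k-dimensional tree, and the function name and defaults are illustrative:

```python
import numpy as np

def detect_loop(translations, cur_idx, k=5, min_gap=10):
    """Return the index of a loop-closure candidate key frame, or None.
    `translations` holds the recorded translation coordinate of each
    key-frame vertex in the pose graph."""
    if cur_idx + 1 <= 10:                 # need more than 10 key frames
        return None
    coords = np.asarray(translations, dtype=float)
    # Distance from the current key frame to all earlier key frames.
    dists = np.linalg.norm(coords[:cur_idx] - coords[cur_idx], axis=1)
    nearest = np.argsort(dists)[:k]       # stand-in for a k-d tree query
    for idx in nearest:
        if cur_idx - idx > min_gap:       # non-adjacent but nearby: loop
            return int(idx)
    return None
```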
Further, in the loop detection and back-end pose optimization step, the pose optimization reduces the accumulated error of point cloud registration by least-squares optimization over the pose graph, distributing the residual over all key frames. In the pose graph optimization, the graph vertices are the optimization variables of the nonlinear least-squares problem, expressed as the key frame pose matrices to be optimized; the edges connecting the vertices are the error terms between the optimization variables, expressed as inter-frame pose estimation errors. With i, j the vertices corresponding to two key frames, T_i and T_j the pose matrices of vertices i and j respectively, and T_ij the transformation matrix between vertices i and j, the error e_ij of the edge connecting the vertices is expressed as

e_ij = ln(T_ij^{-1} · T_i^{-1} · T_j)^∨

wherein the superscript "−1" denotes matrix inversion, and the superscript "∨" denotes the operation of extracting the vector uniquely corresponding to an antisymmetric matrix. The pose graph optimization is completed with the g2o optimization tool, minimizing the errors of the adjacent edges and loop edges to obtain an optimized pose matrix for each key frame.
Further, in the model surface reconstruction step, when the pose graph optimization has finished, the depth maps, the point clouds, and the optimized pose graph are combined, and the surface of the satellite point cloud is reconstructed using the TSDF (truncated signed distance function) representation, with the following specific steps:
a) The whole modeling space is divided into small cubes of a given size, and each cube stores a TSDF value representing the distance from that position to the object surface;
b) The key frame data of the depth maps and point clouds are integrated into the TSDF volume space, the TSDF values are updated, and the weights are accumulated to obtain the fused TSDF values;
c) A TSDF value greater than 0 indicates that the cube lies outside the object, less than 0 inside the object, and equal to 0 on the object surface; the surface of the reconstructed object is therefore extracted with the marching cubes algorithm;
d) The final model of the satellite is rendered by ray tracing.
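Step b)'s weighted TSDF update is conventionally a running weighted average per voxel; a minimal sketch, with illustrative names and an assumed truncation distance (the patent does not give one):

```python
import numpy as np

def update_tsdf(tsdf, weight, new_sdf, new_weight=1.0, trunc=0.05):
    """Fuse one frame's signed distances into the TSDF volume.
    `tsdf` and `weight` are per-voxel arrays; `new_sdf` holds the
    signed distances observed in the current frame."""
    d = np.clip(new_sdf, -trunc, trunc)              # truncate to +/- trunc
    fused = (tsdf * weight + d * new_weight) / (weight + new_weight)
    return fused, weight + new_weight                # running weighted average
```

The zero level set of the fused volume is then extracted with marching cubes, as described in step c).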
The beneficial effects of the method are as follows: the point cloud-based three-dimensional reconstruction method for satellites with weakly textured surfaces estimates pose from the processed point clouds, screens out key-frame point clouds according to the pose results, performs pose optimization, and finally registers and fuses the point clouds to complete model surface reconstruction. It enables three-dimensional reconstruction of a satellite under weak surface texture and poor illumination, providing a good foundation for the identification and capture of the satellite's components.
Drawings
The technical solution and other advantageous effects of the present application will be presented by the detailed description of the specific embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for three-dimensional reconstruction of a point cloud-based surface weak texture satellite according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a three-dimensional reconstruction method of a surface weak texture satellite based on point cloud according to an embodiment of the present application.
Fig. 3 is a schematic diagram of the steps of inter-frame point cloud registration and keyframe selection according to an embodiment of the present application.
Fig. 4 is a flowchart of the steps of inter-frame point cloud registration and keyframe selection provided in an embodiment of the present application.
Fig. 5 shows the fixed local coordinate system of the present application.
Fig. 6 shows the k-neighborhood influence range centered on point p_q.
Fig. 7 is a pose graph diagram of the present application.
Fig. 8 is a schematic diagram of a change of a keyframe track before and after the pose map is optimized.
Fig. 9 is a schematic view of a TSDF voxel model of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
In the description of the present application, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly: for example, fixedly connected, detachably connected, or integrally connected; mechanically connected, electrically connected, or in communication with each other; directly connected, or indirectly connected through an intermediate medium; and may denote communication between the interiors of two elements or an interaction relationship between two elements. The specific meaning of these terms in this application will be understood by those of ordinary skill in the art as the case may be.
Specifically, referring to fig. 1 and 2, an embodiment of the present application provides a method for three-dimensional reconstruction of a point cloud-based surface weak texture satellite, which includes the following steps S1 to S4.
S1, acquiring and preprocessing point clouds, namely acquiring single-frame point clouds of satellites and preprocessing the point clouds;
S2, an inter-frame point cloud registration and key frame selection step, namely performing FPFH (Fast Point Feature Histogram) coarse registration and ICP (Iterative Closest Point) fine registration on the point clouds to obtain the accurate pose of the satellite, and screening key frames; the FPFH coarse registration collects the point cloud features into a unified histogram to obtain a coarse registration pose; the ICP fine registration takes the coarse registration pose as the initial pose and obtains the fine registration pose by iteration;
S3, a loop detection and back-end pose optimization step, namely updating the pose graph and performing loop detection based on the selected key frames, where loop detection finds, among the historical key frames, a frame that is not adjacent to the current key frame but is close to it in position; if a loop occurs, the pose graph is optimized; after the pose graph is optimized, or when no loop occurs, it is judged whether inter-frame point cloud registration has finished; if not, the method returns to the point cloud acquisition and preprocessing step to acquire the next frame of point cloud; if finished, pose graph optimization is carried out; and
S4, a model surface reconstruction step, namely performing TSDF-based surface reconstruction after the pose graph optimization has finished, completing the three-dimensional modeling of the satellite.
1. Point cloud acquisition and preprocessing
In this scheme, a ranging camera such as a Kinect is used as the sensor for acquiring three-dimensional information, yielding a color map and a depth map of the satellite to be reconstructed. To avoid occlusion and blind zones, when acquiring data the handheld sensor is first moved horizontally around the satellite for one full circle, then raised and moved around the satellite for another circle, acquiring all-round three-dimensional information of the satellite. Three-dimensional point cloud data are then computed from the acquired color map, depth map, and camera intrinsics according to the camera model, completing the point cloud acquisition.
For each frame of point cloud, firstly, removing the background by utilizing spatial three-dimensional coordinate value screening according to the spatial position of the satellite, and obtaining the point cloud only with the satellite; and then, sequentially performing outlier removal and voxel filtering downsampling on the point cloud of the satellite, and calculating a corresponding normal vector to finish preprocessing of the point cloud.
2. Inter-frame point cloud registration and keyframe selection
The invention selects the key frame from the acquired data by using the interframe registration result, and then performs subsequent pose optimization and transformation fusion by using the key frame to realize satellite three-dimensional reconstruction. The schematic diagram of the steps of inter-frame point cloud registration and keyframe selection is shown in fig. 3.
The registration result is: inter-frame registration of the previous key frame's point cloud and the current frame's point cloud yields the transformation pose. The specific registration method combines FPFH feature coarse registration with point-to-plane ICP fine registration; that is, the pose transformation matrix computed by coarse registration is used as the input to the ICP algorithm for fine registration, giving the final inter-frame transformation pose. P_l: the previous key frame point cloud; P_c: the current key frame point cloud; T_(l,c): the transformation matrix (rotation + translation) between the two point cloud frames (l, c); FPFH: Fast Point Feature Histogram, informally, a feature describing three-dimensional points; ICP: Iterative Closest Point, a registration algorithm; T_r: the pose transformation matrix (rotation + translation) computed by the coarse registration.
Specifically, referring to fig. 4, the step S2 of inter-frame point cloud registration and keyframe selection includes:
s21, a point cloud data processing step, namely defining a fixed local coordinate system, wherein a single-frame point cloud acquired firstly is a target point cloud, and a point cloud acquired later is a source point cloud;
S22, an FPFH feature coarse registration step, namely extracting the FPFH features of the source point cloud and the target point cloud in the local coordinate system, matching the FPFH features of the source point cloud and adjusting its pose, combining them with the FPFH features of the target point cloud to form corresponding feature point pairs, and, after iteratively updating the corresponding feature point pairs until their positions coincide, forming the fast point feature histogram of the satellite as the final coarse registration pose;
S23, an ICP fine registration step, namely, in the local coordinate system, taking the final coarse registration pose as the initial pose, identifying a satellite edge frame in the initial pose as a historical key frame, transforming the coordinates of the source point cloud with the plane of the historical key frame as reference, and, after iterating for a preset number of iterations or until the distance from the points of the source point cloud to the reference plane is below a distance threshold, forming the accurate pose of the satellite as the fine registration pose; and
s24, screening key frames, namely screening satellite edge frames based on the accurate pose of the satellite to serve as current key frames.
2.1 coarse registration of FPFH features
The FPFH feature coarse registration step S22 includes an FPFH feature extraction step and a feature registration step.
1) FPFH feature extraction
The FPFH features describe the local geometry of the points, using a 33-dimensional feature vector. The calculation of the FPFH features is divided into two steps:
a) Defining a fixed local coordinate system, as shown in fig. 5, then calculating a series of α, φ, θ feature values between each query point and its neighborhood points in the point cloud using the following formulas, and collecting the feature values into a unified histogram to obtain the simplified point feature histogram:

α = v · n_t
φ = u · (p_t − p_s) / ||p_t − p_s||
θ = arctan(w · n_t, u · n_t)

wherein p_s is a point in the point cloud;
p_t is a neighborhood point of p_s;
n_s, n_t are the normals at the corresponding points;
u, v, w are the three axes of the local coordinate system constructed with p_s as origin;
α is the angle between n_t and the v axis;
φ is the angle between n_s and (p_t − p_s);
θ is the angle between the u axis and the projection of n_t onto the u–w plane; and

b) Re-determining the k-neighborhood of each point in the point cloud, and computing the FPFH value of the query point p_q from the neighboring SPFH values using the following formula:

FPFH(p_q) = SPFH(p_q) + (1/k) · Σ_{i=1}^{k} (1/ω_i) · SPFH(p_i)

wherein the k-neighborhood is the set of the k points nearest to a point;
p_i is a point in the k-neighborhood of the query point;
ω_i is the weight, representing the distance between the query point p_q and the neighboring point p_i.
Fig. 6, which shows the k-neighborhood influence range centered on a query point, helps in understanding this weighting. In the concrete computation, the FPFH implementation uses 11 statistical subintervals: the value range of each feature is divided into 11 subintervals, a feature histogram is computed for each feature separately, and the histograms are then concatenated into a 33-dimensional feature vector of floating-point elements, which is the resulting FPFH feature.
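The 11-subinterval binning described above can be sketched as follows; the feature value ranges used here are assumptions (the patent does not state them), and normalizing the histogram to sum to 1 is one common choice:

```python
import numpy as np

def spfh_histogram(alphas, phis, thetas, bins=11):
    """Bin each of the three feature series into 11 subintervals and
    concatenate the three histograms into a 33-dimensional vector."""
    # Assumed value ranges: alpha and phi are cosines, theta an angle.
    ranges = [(-1.0, 1.0), (-1.0, 1.0), (-np.pi, np.pi)]
    feats = [np.asarray(alphas), np.asarray(phis), np.asarray(thetas)]
    hist = np.concatenate([
        np.histogram(f, bins=bins, range=r)[0] for f, r in zip(feats, ranges)
    ]).astype(float)
    return hist / max(hist.sum(), 1.0)   # normalize so the bins sum to 1
```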
2) Feature registration based on random sample consensus (RANSAC)
The feature registration step is feature registration based on random sampling consistency, and specifically comprises the following steps:
a) Firstly, randomly sampling FPFH characteristics of a target point cloud, and inquiring characteristic points corresponding to sampling points in a source point cloud;
b) Then, resolving the pose of the inter-frame point cloud based on the queried characteristic points by adopting a least square method, and carrying out coordinate transformation;
c) Then, in the transformed inter-frame point cloud, searching for all feature matching points of the target point cloud by inquiring a 33-dimensional FPFH feature space of the source point cloud, and realizing feature mismatching elimination based on the Euclidean distance of the corresponding points, the length of a connecting line segment between two features and a normal vector of the feature points; and
d) Counting the number of corresponding feature points remaining after mismatch rejection, i.e. the number of inliers, and judging whether the number of iterations has reached the termination condition; if not, updating the corresponding feature point pairs and repeating steps a) to c); if the iteration has ended, selecting the pose corresponding to the largest number of inliers as the final pose result.
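Steps a) to d) above amount to a RANSAC loop over candidate feature correspondences. The sketch below uses the SVD-based (Kabsch) least-squares rigid transform and counts inliers by Euclidean distance only, omitting the line-length and normal-vector consistency checks; all names and thresholds are illustrative:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch / SVD) mapping src to dst."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_register(src, dst, pairs, n_iter=100, thresh=0.05, seed=0):
    """RANSAC over candidate correspondences `pairs` (rows of (i, j)):
    sample 3 pairs, solve the pose by least squares, count inliers,
    and keep the pose with the most inliers."""
    rng = np.random.default_rng(seed)
    best = (None, None, -1)
    for _ in range(n_iter):
        sel = rng.choice(len(pairs), size=3, replace=False)
        R, t = rigid_transform(src[pairs[sel, 0]], dst[pairs[sel, 1]])
        resid = np.linalg.norm(src[pairs[:, 0]] @ R.T + t
                               - dst[pairs[:, 1]], axis=1)
        n_in = int((resid < thresh).sum())
        if n_in > best[2]:
            best = (R, t, n_in)
    return best
```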
2.2 Point-to-plane ICP fine registration
The ICP fine registration step S23 is point-to-plane ICP fine registration. The point-to-plane ICP registration algorithm takes the pose obtained by coarse registration as the initial pose, and the fine registration pose is obtained by iteration, wherein the iteration comprises the following steps:
a) Carry out coordinate transformation on the source point cloud with the initial pose, then find the nearest-neighbor corresponding points between the transformed inter-frame point clouds; the matching set formed by the points p of the target point cloud (p_q in fig. 6) and the corresponding points of the point cloud Tq obtained by transforming the source point cloud q is denoted as κ = {(p, q)};
b) Solve the pose matrix T by minimizing the point-to-plane distance defined on the matching set κ as the objective function E(T):
E(T) = Σ_{(p,q)∈κ} ((p − Tq) · n_p)²
where E(T) is the objective function of ICP registration, expressed as the point-to-plane distance; T is the pose matrix to be calculated by ICP registration, comprising a rotation part and a translation part; n_p is the normal vector of point p;
c) Judge whether the number of iterations or the distance threshold has reached the iteration termination condition; if the iteration has not terminated, update the initial pose with the solved pose and repeat the above steps; if the iteration has terminated, the accurate pose is obtained.
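One iteration of minimizing E(T) can be solved in closed form after linearization. The sketch below assumes the matched pairs κ have already been found and uses the standard small-angle approximation of the rotation; it illustrates the point-to-plane objective rather than reproducing the patent's exact solver.

```python
import numpy as np

def point_to_plane_step(src, tgt, normals):
    """One Gauss-Newton step of point-to-plane ICP.

    src, tgt: (N, 3) matched point pairs (source already transformed by the
    current pose estimate); normals: (N, 3) target normals n_p.
    Returns a 4x4 incremental pose under the small-angle linearization."""
    # Jacobian rows [q x n_p, n_p]; residuals (p - q) . n_p for pairs (p, q) in kappa
    A = np.hstack([np.cross(src, normals), normals])
    b = np.einsum('ij,ij->i', tgt - src, normals)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimum-norm least-squares solution
    wx, wy, wz, tx, ty, tz = x
    T = np.eye(4)
    T[:3, :3] = [[1.0, -wz, wy], [wz, 1.0, -wx], [-wy, wx, 1.0]]
    T[:3, 3] = [tx, ty, tz]
    return T
```

In practice the returned increment is composed with the current pose and the loop repeats until the iteration count or distance threshold of step c) is reached.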
2.3 Screening key frames
In the step S24 of screening key frames, the matching degree, the inlier root mean square error, and the pose change magnitude are used as the criteria.
The matching degree represents the size of the overlapping area of the two frames of point clouds in the fine registration algorithm, specifically the number of matched inliers in the target point cloud P; the higher the matching degree, the better the point cloud registration effect.
The inlier root mean square error is the root mean square error of all matched inliers; the smaller the inlier root mean square error, the better the point cloud registration effect.
The pose change magnitude represents the motion amplitude of the sensor between frames; it is measured with the pose matrix T_{l,c} by taking the norms of its rotation vector and translation vector and adding them; the larger the value, the larger the motion amplitude:
Norm(T) = |min(‖T_rot‖, 2π − ‖T_rot‖)| + ‖T_trans‖
wherein T_rot is the rotation part (3×3) of the transformation matrix (4×4); T_trans is the translation part (3×1) of the transformation matrix (4×4); ‖T_rot‖ denotes the two-norm of the rotation part; ‖T_trans‖ denotes the two-norm of the translation part; the norm of a matrix can be understood simply as a measure of its magnitude; min takes the smaller of its arguments; |·| takes the absolute value.
If the matching degree is larger than its threshold, the inlier root mean square error is lower than its threshold, and the pose change magnitude is moderate, the frame is added as a key frame for subsequent three-dimensional reconstruction.
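The screening criterion can be sketched as below. Two points are assumptions of the sketch, not statements of the patent: ‖T_rot‖ is read as the norm of the rotation *vector* (the rotation angle, recovered from the trace of the 3×3 block), since the text measures the rotation like an axis-angle quantity, and the numeric thresholds are illustrative placeholders.

```python
import math
import numpy as np

def pose_change_magnitude(T):
    """Norm(T) = |min(||T_rot||, 2*pi - ||T_rot||)| + ||T_trans||.

    Assumption: ||T_rot|| is the rotation angle (norm of the axis-angle
    vector), computed from the trace of the rotation block."""
    R, t = T[:3, :3], T[:3, 3]
    angle = math.acos(max(-1.0, min(1.0, (np.trace(R) - 1.0) / 2.0)))
    return abs(min(angle, 2.0 * math.pi - angle)) + float(np.linalg.norm(t))

def is_keyframe(match_degree, inlier_rmse, norm_t,
                match_min=0.6, rmse_max=0.01, norm_lo=0.05, norm_hi=1.0):
    # Thresholds here are illustrative; the patent does not publish values.
    return (match_degree > match_min and inlier_rmse < rmse_max
            and norm_lo < norm_t < norm_hi)
```

A frame passes only when the overlap is large, the inlier RMSE is small, and the inter-frame motion is neither negligible nor abrupt.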
3. Loop detection and back-end pose optimization
In the process of continuously carrying out multi-frame registration and pose calculation, each calculation depends on the result of the previous registration, so accumulated error is inevitably generated; as the number of key frames increases, the growing accumulated error reduces the precision of the three-dimensional reconstruction model. The invention uses loop detection and pose graph optimization to reduce the influence of accumulated error during reconstruction.
3.1 Loop detection and pose graph construction
Loop detection searches the historical key frames for frames that are not adjacent to the current key frame but are close to it in position. The pose graph represents the nonlinear least squares problem with a graph model from graph theory, consisting of a number of vertices and the edges connecting them; fig. 7 schematically shows a pose graph. During reconstruction, the constructed pose graph is continuously updated and loop detection is carried out, comprising the following steps:
a) When a key frame is detected, add a vertex to the pose graph and record the pose matrix of the frame; add an edge connecting the current vertex with the previous vertex, and record the transformation pose matrix between the adjacent key frames; take the translation part of the frame pose matrix as the coordinate of the frame and record it;
b) When the total number of key frames is more than 10, search for the 5 key frames closest to the coordinate of the current key frame by using a k-dimension tree (KD-Tree); KD-Tree is an abbreviation of k-dimension tree, a tree data structure that stores instance points in a k-dimensional space for quick retrieval;
c) If the index difference between a found key frame and the current key frame is more than 10, a loop is considered to have occurred; otherwise, it is considered that no loop is formed, and the above steps are repeated;
d) After a loop is detected, carry out inter-frame registration between the loop frame point cloud and the current frame point cloud, connect the two corresponding vertices with an edge, record the pose error between the two frames, update and optimize the pose graph, and repeat the above steps.
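Steps b)–c) of the loop check can be sketched as follows; a brute-force k-nearest search stands in for the KD-Tree of the text, and the thresholds follow the stated values (search only once more than 10 key frames exist, take the 5 nearest, flag a loop when the index gap exceeds 10).

```python
def detect_loop(keyframe_coords, current_idx, min_total=10, k=5, min_gap=10):
    """Return the index of a loop-closure candidate key frame, or None.

    keyframe_coords: list of (x, y, z) translation coordinates of key frames.
    Brute-force nearest search here is a stand-in for the KD-Tree query."""
    if len(keyframe_coords) <= min_total:
        return None
    cur = keyframe_coords[current_idx]
    others = [i for i in range(len(keyframe_coords)) if i != current_idx]
    # b) take the k key frames closest to the current key frame coordinate
    others.sort(key=lambda i: sum((a - b) ** 2
                                  for a, b in zip(keyframe_coords[i], cur)))
    for i in others[:k]:
        # c) far apart in index (time) but close in space -> loop candidate
        if current_idx - i > min_gap:
            return i   # step d): register this frame against the current one
    return None
```

On a hit, step d) registers the two point clouds and adds a loop edge between the corresponding pose graph vertices.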
3.2 Pose graph optimization
In the loop detection and back-end pose optimization step S3, pose optimization reduces the accumulated error of point cloud registration by applying a least squares optimization method on the pose graph, distributing the residual error over all key frames. In pose graph optimization, the graph vertices are the optimization variables of the nonlinear least squares problem, expressed as the optimized key frame pose matrices; the edges connecting the vertices are the error terms between the optimization variables, expressed as inter-frame pose estimation errors. Let i and j be the vertices corresponding to two key frames, T_i and T_j the pose matrices corresponding to vertices i and j respectively, and T_ij the transformation matrix between vertices i and j; the edge error e_ij corresponding to the connected vertices is expressed as
e_ij = ln(T_ij^-1 · T_i^-1 · T_j)^∨
wherein the superscript "-1" denotes matrix inversion, and the superscript "∨" denotes the operation of finding the vector uniquely corresponding to an antisymmetric matrix.
Pose graph optimization is completed with the G2O optimization tool, minimizing the errors of adjacent edges and loop edges to obtain the optimized pose matrix of each key frame.
In order to ensure that large deformation is not produced by accumulated error during registration, the pose graph is optimized once each time a loop is detected, and optimized once more after all inter-frame registration is finished; the optimized pose results are updated to all key frames. The trajectories of the key frames before and after pose graph optimization are shown in fig. 8.
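The edge error above can be checked at the matrix level: the product E = T_ij^-1 · T_i^-1 · T_j is the identity exactly when the edge is consistent, and e_ij is its SE(3) logarithm mapped to a 6-vector. The sketch below returns only the matrix E, leaving the logarithm to a library such as the G2O tool the text uses.

```python
import numpy as np

def edge_error_matrix(T_ij, T_i, T_j):
    """Matrix form of the pose-graph edge error: E = inv(T_ij) @ inv(T_i) @ T_j.

    e_ij = ln(E)^v would be the 6-vector error; identity E means zero error.
    The SE(3) log/vee map is intentionally left to an optimization library."""
    return np.linalg.inv(T_ij) @ np.linalg.inv(T_i) @ T_j
```

When the edge measurement T_ij matches the relative pose between T_i and T_j, the residual matrix is the identity and the edge contributes no error to the optimization.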
4. Model surface reconstruction
TSDF (truncated signed distance function) is a surface reconstruction algorithm that utilizes structured point cloud data and expresses a surface parametrically. It maps the point cloud data into a predefined three-dimensional space and uses a truncated signed distance function to represent the region near the surface of the real scene, thereby establishing a surface model. Fig. 9 is a schematic diagram of a TSDF voxel model.
In the model surface reconstruction step S4, after the pose graph is optimized, the depth map, the point cloud and the optimized pose graph are combined, and surface reconstruction is performed on the satellite point cloud by using the TSDF representation, with the following specific steps:
a) Divide the whole modeling space into a number of small cubes of a certain size, each cube storing a TSDF value that represents the distance from that position to the surface of the object;
b) Integrate the key frame data of the depth map and the point cloud into the TSDF volume space, updating the TSDF value by weighted superposition;
c) A TSDF value greater than 0 indicates that the cube lies outside the object, less than 0 inside the object, and equal to 0 on the surface of the object; the surface of the reconstructed object is therefore extracted by the marching cubes algorithm;
d) Render the final model of the satellite by ray tracing.
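The weighted TSDF fusion of steps a)–b) can be sketched as below. `depth_to_surface` is a placeholder for the projection of a voxel centre into the depth map of a key frame (returning its signed distance to the observed surface, or None when unobserved); the unit per-observation weight is likewise an assumption of the sketch.

```python
def tsdf_update(tsdf, weights, voxel_centers, depth_to_surface, trunc=0.05):
    """Fuse one key frame into the TSDF volume by weighted superposition.

    tsdf, weights: per-voxel running TSDF values and accumulated weights;
    voxel_centers: centres of the small cubes of step a);
    depth_to_surface: placeholder mapping a centre to its signed distance."""
    for i, c in enumerate(voxel_centers):
        d = depth_to_surface(c)
        if d is None:                                 # voxel unobserved this frame
            continue
        d = max(-trunc, min(trunc, d)) / trunc        # truncated, normalised SDF
        w = 1.0                                       # simple per-observation weight
        tsdf[i] = (tsdf[i] * weights[i] + d * w) / (weights[i] + w)
        weights[i] += w
    return tsdf, weights
```

After fusing all key frames, the zero crossing of the stored values (step c) gives the surface for extraction.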
The beneficial effects of the method are as follows: the point cloud-based three-dimensional reconstruction method for a surface weak texture satellite performs pose estimation with the processed point cloud, screens out key frame point clouds according to the pose results, performs pose optimization, and finally registers and fuses the point clouds to complete the reconstruction of the model surface. It can realize three-dimensional reconstruction of a satellite under the conditions of weak surface texture and poor illumination, providing a good foundation for the identification and capture of satellite components.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The foregoing has described in detail the solution provided by the embodiments of the present application; specific examples have been applied herein to illustrate its principles and embodiments, and the foregoing examples only aid in understanding the technical solution and core ideas of the present application. Those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not depart the corresponding technical solutions from the scope of the technical solutions of the embodiments of the present application.

Claims (6)

1. The three-dimensional reconstruction method of the surface weak texture satellite based on the point cloud is characterized by comprising the following steps of:
the method comprises the steps of obtaining and preprocessing point clouds, namely obtaining single-frame point clouds of satellites, and preprocessing the point clouds;
performing inter-frame point cloud registration and key frame selection, performing FPFH (fast point feature histogram) feature coarse registration and ICP (iterative closest point) fine registration on the point cloud to obtain the accurate pose of the satellite, and screening key frames; the FPFH feature coarse registration puts the point cloud features into a histogram in a unified manner to obtain the coarse registration pose; the ICP fine registration takes the coarse registration pose as the initial pose and obtains the fine registration pose by iteration;
a loop detection and back-end pose optimization step, namely carrying out pose map updating and loop detection based on the selected key frame, wherein the loop detection is to find out a frame which is not adjacent but close to the current key frame in position in the historical key frame; judging whether loop occurs, optimizing the pose graph when loop occurs, and judging whether inter-frame point cloud registration is finished after the pose graph is optimized and when loop does not occur; returning to the point cloud acquisition and preprocessing step to acquire the next frame of point cloud when the inter-frame point cloud registration is not finished; when the inter-frame point cloud registration is finished, pose map optimization is carried out; and
a model surface reconstruction step, namely performing TSDF surface reconstruction after pose graph optimization is finished, and completing the satellite three-dimensional modeling;
the steps of inter-frame point cloud registration and key frame selection comprise:
a point cloud data processing step, namely defining a fixed local coordinate system, wherein a single-frame point cloud acquired firstly is a target point cloud, and a point cloud acquired later is a source point cloud;
an FPFH feature coarse registration step, namely extracting the FPFH features of the source point cloud and the target point cloud in the local coordinate system, matching and pose-adjusting the FPFH features of the source point cloud, combining them with the FPFH features of the target point cloud to form corresponding feature point pairs, and, after iterative updating of the corresponding feature point pairs, overlapping the positions of the corresponding feature point pairs to form the fast point feature histogram of the satellite as the final coarse registration pose;
an ICP fine registration step, in the local coordinate system, taking the final pose of coarse registration as an initial pose, identifying a satellite edge frame in the initial pose as a historical key frame, carrying out coordinate transformation on the source point cloud by taking a plane in which the historical key frame is positioned as a reference, and forming the accurate pose of a satellite as the fine registration pose after iteratively calculating a preset iteration number or enabling the distance between the point of the source point cloud and the reference plane to be smaller than a distance threshold; and
a step of screening key frames, wherein satellite edge frames are screened as current key frames based on the accurate pose of the satellite;
wherein the FPFH feature coarse registration step comprises an FPFH feature extraction step, the FPFH feature describing the local geometric features of a point with a 33-dimensional feature vector; the calculation of the FPFH feature is divided into two steps:
a) Define a fixed local coordinate system, calculate a series of feature values α, φ and θ between each query point p and its neighborhood points in the point cloud by using the following formulas, and put the feature values into a histogram in a unified manner to obtain the simplified point feature histogram (SPFH):
α = v · n_t
φ = (u · (p_t − p_s)) / ‖p_t − p_s‖
θ = arctan(w · n_t, u · n_t)
wherein p_s is a point in the point cloud;
p_t is a neighborhood point of p_s;
n_s and n_t are the normals of the corresponding points;
u, v, w are the three axes of the local coordinate system built with p_s as the origin;
α is the angle between n_t and the v axis;
φ is the angle between n_s and (p_t − p_s);
θ is the angle between the projection of n_t on the plane u-v and the u axis; and
b) Re-determine the k-neighborhood of each point in the point cloud, and compute the FPFH value of the query point p_q from the neighboring SPFH values by using the following formula:
FPFH(p_q) = SPFH(p_q) + (1/k) Σ_k (1/ω_k) · SPFH(p_k)
wherein the k-neighborhood of a point is the set of the k points nearest to it;
p_k is a point in the k-neighborhood;
ω_k is the weight, representing the distance between the query point p_q and the neighboring point p_k;
wherein the ICP fine registration step comprises:
a) Carrying out coordinate transformation on the source point cloud with the initial pose, then finding the nearest-neighbor corresponding points between the transformed inter-frame point clouds, and denoting the matching set formed by the points p of the target point cloud and the corresponding points of the point cloud Tq obtained by transforming the source point cloud q as κ = {(p, q)};
b) Solving the pose matrix T by minimizing the point-to-plane distance defined on the matching set κ as the objective function E(T):
E(T) = Σ_{(p,q)∈κ} ((p − Tq) · n_p)²
where E(T) is the objective function of ICP registration, expressed as the point-to-plane distance; T is the pose matrix to be calculated by ICP registration, comprising a rotation part and a translation part; n_p is the normal vector of point p;
c) Judging whether the number of iterations or the distance threshold has reached the iteration termination condition; if the iteration has not terminated, updating the initial pose with the solved pose and repeating the above steps; if the iteration has terminated, the accurate pose is obtained;
in the model surface reconstruction step, after the pose graph optimization is finished, the depth map, the point cloud and the optimized pose graph are combined, and surface reconstruction is performed on the satellite point cloud by using the TSDF representation, with the following specific steps:
a) Dividing the whole modeling space into a number of small cubes of a certain size, each cube storing a TSDF value that represents the distance from that position to the surface of the object;
b) Integrating the key frame data of the depth map and the point cloud into the TSDF volume space, updating the TSDF value by weighted superposition;
c) A TSDF value greater than 0 indicating that the cube lies outside the object, less than 0 inside the object, and equal to 0 on the surface of the object, the surface of the reconstructed object is extracted by the marching cubes algorithm;
d) Rendering the final model of the satellite by ray tracing.
2. The method for three-dimensional reconstruction of a point cloud-based surface weak texture satellite according to claim 1, wherein the point cloud preprocessing step in the point cloud acquisition and preprocessing step comprises:
for each frame of point cloud, firstly, removing the background by utilizing spatial three-dimensional coordinate value screening according to the spatial position of the satellite, and obtaining the point cloud only with the satellite; and then, sequentially performing outlier removal and voxel filtering downsampling on the point cloud of the satellite, and calculating a corresponding normal vector to finish preprocessing of the point cloud.
3. The point cloud-based three-dimensional reconstruction method for a surface weak texture satellite according to claim 1, further comprising a feature registration step after the FPFH feature extraction step, wherein the feature registration step is feature registration based on random sample consensus (RANSAC), and specifically comprises:
a) First, randomly sampling the FPFH features of the target point cloud, and querying the feature points corresponding to the sampled points in the source point cloud;
b) Then, solving the pose of the inter-frame point cloud from the queried feature points by the least squares method, and carrying out the coordinate transformation;
c) Next, in the transformed inter-frame point clouds, searching for all feature matching points of the target point cloud by querying the 33-dimensional FPFH feature space of the source point cloud, and eliminating feature mismatches based on the Euclidean distance of the corresponding points, the length of the line segment connecting two features, and the normal vectors of the feature points; and
d) Counting the number of feature corresponding points remaining after mismatch rejection, namely the number of inliers, and judging whether the number of iterations has reached the termination condition; if the iteration has not terminated, updating the corresponding feature point pairs and repeating steps a) to c); if the iteration has terminated, selecting the pose corresponding to the largest number of inliers as the final pose result.
4. The point cloud-based three-dimensional reconstruction method for a surface weak texture satellite according to claim 1, wherein the key frames are screened based on the matching degree, the inlier root mean square error and the pose change magnitude;
the matching degree represents the size of the overlapping area of two frames of point clouds, specifically the number of inliers in the target point cloud P matched with the source point cloud;
the inlier root mean square error is the root mean square error of all matched inliers;
the pose change magnitude represents the motion amplitude of the sensor acquiring the point cloud, measured by taking the norms of the rotation vector and translation vector of the pose matrix T_{l,c} and adding them.
5. The point cloud-based three-dimensional reconstruction method for a surface weak texture satellite according to claim 1, wherein in the loop detection and back-end pose optimization step, the loop detection comprises the following steps:
a) When a key frame is detected, adding a vertex to the pose graph and recording the pose matrix of the frame; adding an edge connecting the current vertex with the previous vertex, and recording the transformation pose matrix between the adjacent key frames; taking the translation part of the frame pose matrix as the coordinate of the frame and recording it;
b) When the total number of key frames is more than 10, searching for the 5 key frames closest to the coordinate of the current key frame by using a k-dimension tree;
c) If the index difference between a found key frame and the current key frame is more than 10, considering that a loop has occurred; otherwise, considering that no loop is formed, and repeating the above steps;
d) After a loop is detected, carrying out inter-frame registration between the loop frame point cloud and the current frame point cloud, connecting the two corresponding vertices with an edge, recording the pose error between the two frames, updating and optimizing the pose graph, and repeating the above steps.
6. The point cloud-based three-dimensional reconstruction method for a surface weak texture satellite according to claim 1, wherein in the loop detection and back-end pose optimization step, the pose optimization reduces the accumulated error of point cloud registration by applying a least squares optimization method on the pose graph, distributing the residual error over all key frames; in pose graph optimization, the graph vertices are the optimization variables of the nonlinear least squares problem, expressed as the optimized key frame pose matrices; the edges connecting the vertices are the error terms between the optimization variables, expressed as inter-frame pose estimation errors; i and j are the vertices corresponding to two key frames, T_i and T_j are the pose matrices corresponding to vertices i and j respectively, and T_ij is the transformation matrix between vertices i and j; the edge error e_ij corresponding to the connected vertices is expressed as e_ij = ln(T_ij^-1 · T_i^-1 · T_j)^∨, wherein the superscript "-1" denotes matrix inversion and the superscript "∨" denotes the operation of finding the vector uniquely corresponding to an antisymmetric matrix; pose graph optimization is completed with the G2O optimization tool, minimizing the errors of adjacent edges and loop edges to obtain the optimized pose matrix of each key frame.
CN202110874322.9A 2021-07-30 2021-07-30 Point cloud-based three-dimensional reconstruction method for surface weak texture satellite Active CN113706591B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110874322.9A CN113706591B (en) 2021-07-30 2021-07-30 Point cloud-based three-dimensional reconstruction method for surface weak texture satellite

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110874322.9A CN113706591B (en) 2021-07-30 2021-07-30 Point cloud-based three-dimensional reconstruction method for surface weak texture satellite

Publications (2)

Publication Number Publication Date
CN113706591A CN113706591A (en) 2021-11-26
CN113706591B true CN113706591B (en) 2024-03-19

Family

ID=78651042

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110874322.9A Active CN113706591B (en) 2021-07-30 2021-07-30 Point cloud-based three-dimensional reconstruction method for surface weak texture satellite

Country Status (1)

Country Link
CN (1) CN113706591B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115880690B (en) * 2022-11-23 2023-08-11 郑州大学 Method for quickly labeling objects in point cloud under assistance of three-dimensional reconstruction
CN115951589B (en) * 2023-03-15 2023-06-06 中科院南京天文仪器有限公司 Star uniform selection method based on maximized Kozachenko-Leonenko entropy
CN117829381B (en) * 2024-03-05 2024-05-14 成都农业科技职业学院 Agricultural greenhouse data optimization acquisition system based on Internet of things

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110930495A (en) * 2019-11-22 2020-03-27 哈尔滨工业大学(深圳) Multi-unmanned aerial vehicle cooperation-based ICP point cloud map fusion method, system, device and storage medium
WO2021088481A1 (en) * 2019-11-08 2021-05-14 南京理工大学 High-precision dynamic real-time 360-degree omnibearing point cloud acquisition method based on fringe projection
CN112907491A (en) * 2021-03-18 2021-06-04 中煤科工集团上海有限公司 Laser point cloud loopback detection method and system suitable for underground roadway

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021088481A1 (en) * 2019-11-08 2021-05-14 南京理工大学 High-precision dynamic real-time 360-degree omnibearing point cloud acquisition method based on fringe projection
CN110930495A (en) * 2019-11-22 2020-03-27 哈尔滨工业大学(深圳) Multi-unmanned aerial vehicle cooperation-based ICP point cloud map fusion method, system, device and storage medium
CN112907491A (en) * 2021-03-18 2021-06-04 中煤科工集团上海有限公司 Laser point cloud loopback detection method and system suitable for underground roadway

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张健 ; 李新乐 ; 宋莹 ; 王仁 ; 朱凡 ; 赵晓燕 ; .基于噪声点云的三维场景重建方法.计算机工程与设计.2020,(04),全文. *
李宜鹏 ; 解永春 ; .基于点云位姿平均的非合作目标三维重构.空间控制技术与应用.2020,(01),全文. *

Also Published As

Publication number Publication date
CN113706591A (en) 2021-11-26

Similar Documents

Publication Publication Date Title
CN113706591B (en) Point cloud-based three-dimensional reconstruction method for surface weak texture satellite
US10706622B2 (en) Point cloud meshing method, apparatus, device and computer storage media
CN103426182B (en) The electronic image stabilization method of view-based access control model attention mechanism
CN104574347B (en) Satellite in orbit image geometry positioning accuracy evaluation method based on multi- source Remote Sensing Data data
CN106709950B (en) Binocular vision-based inspection robot obstacle crossing wire positioning method
CN109118544B (en) Synthetic aperture imaging method based on perspective transformation
CN112686935A (en) Airborne depth sounding radar and multispectral satellite image registration method based on feature fusion
CN112991420A (en) Stereo matching feature extraction and post-processing method for disparity map
CN113838191A (en) Three-dimensional reconstruction method based on attention mechanism and monocular multi-view
CN109003307B (en) Underwater binocular vision measurement-based fishing mesh size design method
CN112163996B (en) Flat angle video fusion method based on image processing
CN116664892A (en) Multi-temporal remote sensing image registration method based on cross attention and deformable convolution
CN113538569A (en) Weak texture object pose estimation method and system
CN110942102B (en) Probability relaxation epipolar matching method and system
CN111126418A (en) Oblique image matching method based on planar perspective projection
CN111739071A (en) Rapid iterative registration method, medium, terminal and device based on initial value
CN114612412A (en) Processing method of three-dimensional point cloud data, application of processing method, electronic device and storage medium
CN114119437A (en) GMS-based image stitching method for improving moving object distortion
CN117132737B (en) Three-dimensional building model construction method, system and equipment
CN110969650B (en) Intensity image and texture sequence registration method based on central projection
CN116091706B (en) Three-dimensional reconstruction method for multi-mode remote sensing image deep learning matching
CN117351078A (en) Target size and 6D gesture estimation method based on shape priori
CN114998630B (en) Ground-to-air image registration method from coarse to fine
CN116452995A (en) Aerial image positioning method based on onboard mission machine
Kang et al. An adaptive fusion panoramic image mosaic algorithm based on circular LBP feature and HSV color system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant