CN113421290A - Power plant boiler interior three-dimensional reconstruction method based on unmanned aerial vehicle image acquisition - Google Patents
Power plant boiler interior three-dimensional reconstruction method based on unmanned aerial vehicle image acquisition Download PDFInfo
- Publication number
- CN113421290A CN113421290A CN202110757224.7A CN202110757224A CN113421290A CN 113421290 A CN113421290 A CN 113421290A CN 202110757224 A CN202110757224 A CN 202110757224A CN 113421290 A CN113421290 A CN 113421290A
- Authority
- CN
- China
- Prior art keywords
- point
- point cloud
- points
- aerial vehicle
- unmanned aerial
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 45
- 238000012545 processing Methods 0.000 claims abstract description 13
- 238000000605 extraction Methods 0.000 claims abstract description 9
- 239000011159 matrix material Substances 0.000 claims description 33
- 238000010606 normalization Methods 0.000 claims description 21
- 230000002159 abnormal effect Effects 0.000 claims description 11
- 230000008569 process Effects 0.000 claims description 11
- 238000013519 translation Methods 0.000 claims description 11
- 150000001875 compounds Chemical class 0.000 claims description 9
- 238000000354 decomposition reaction Methods 0.000 claims description 9
- 230000009466 transformation Effects 0.000 claims description 9
- 238000004364 calculation method Methods 0.000 claims description 8
- 239000013598 vector Substances 0.000 claims description 6
- 238000002955 isolation Methods 0.000 claims description 5
- LTXREWYXXSTFRX-QGZVFWFLSA-N Linagliptin Chemical compound N=1C=2N(C)C(=O)N(CC=3N=C4C=CC=CC4=C(C)N=3)C(=O)C=2N(CC#CC)C=1N1CCC[C@@H](N)C1 LTXREWYXXSTFRX-QGZVFWFLSA-N 0.000 claims description 3
- 238000011156 evaluation Methods 0.000 claims description 3
- 230000009467 reduction Effects 0.000 claims description 3
- 230000011218 segmentation Effects 0.000 claims 1
- 238000010276 construction Methods 0.000 abstract description 3
- 238000001514 detection method Methods 0.000 abstract description 3
- 239000000463 material Substances 0.000 abstract description 2
- 238000010586 diagram Methods 0.000 description 7
- 238000009616 inductively coupled plasma Methods 0.000 description 5
- 239000003245 coal Substances 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 238000013459 approach Methods 0.000 description 2
- 238000004519 manufacturing process Methods 0.000 description 2
- 238000010248 power generation Methods 0.000 description 2
- 238000013473 artificial intelligence Methods 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000005611 electricity Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- XLYOFNOQVPJJNP-UHFFFAOYSA-N water Chemical compound O XLYOFNOQVPJJNP-UHFFFAOYSA-N 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
Abstract
The invention belongs to the technical field of three-dimensional reconstruction, and discloses a method for three-dimensional reconstruction of the interior of a power plant boiler based on unmanned aerial vehicle image acquisition. The method specifically comprises the steps of first flying an unmanned aerial vehicle into the boiler to capture ordered depth images of a designated part, then denoising the point cloud generated from each depth image with a layered denoising method followed by an Isolation Forest (IF) algorithm, then extracting feature points from the point clouds, and performing coarse registration and accurate registration based on an improved ICP algorithm to obtain a complete point cloud and complete the three-dimensional reconstruction. Compared with the traditional mode of manual operation after erecting a large scaffold or a lifting platform, the method reduces the workload of boiler shutdown inspection, shortens the construction period, reduces the operation risk to workers, and saves manpower and material resources.
Description
Technical Field
The invention belongs to the technical field of three-dimensional reconstruction, and particularly relates to a power plant boiler internal three-dimensional reconstruction method based on unmanned aerial vehicle image acquisition.
Background
After more than 20 years of rapid development, China's thermal power generating units have grown to the largest installed capacity in the world. In recent years, integrating new technologies such as big data and artificial intelligence to improve equipment management and production safety, and to promote the intelligent transformation and industrial upgrading of thermal power plants, has become an important and urgent subject for the thermal power generation industry.
The boiler of a thermal power plant is one of the core power generation production facilities of the plant. Its volume is large: a supercritical million-kilowatt-class boiler can be about 100 m high at most, with an internal span of up to 60 m and a depth of about 40 m. The environment in which pulverized coal burns inside the boiler is complex and severe, and the boiler must run stably for long periods under supercritical high-temperature, high-pressure steam conditions. The reliability requirements on the equipment are therefore very high, and ensuring safe operation of the boiler is of great significance. In general, the power plant shuts down and inspects the boiler at intervals. The traditional inspection mode usually requires erecting a large scaffold or a lifting platform before workers enter to operate manually; the workload is large, the construction period is long, the risk coefficient is high, and high skill and experience are required of the inspectors.
Disclosure of Invention
Aiming at the problems, the invention provides a power plant boiler interior three-dimensional reconstruction method based on unmanned aerial vehicle image acquisition.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides a power plant boiler interior three-dimensional reconstruction method based on unmanned aerial vehicle image acquisition, which comprises the following steps:
step 1, utilizing an unmanned aerial vehicle to acquire images;
step 2, denoising the point cloud generated by each depth image acquired in step 1 by utilizing a layered denoising method;
step 3, denoising the point cloud denoised in step 2 again by adopting an Isolation Forest algorithm;
step 4, extracting characteristic points of the point cloud denoised in the step 3, and constructing a characteristic point set;
step 5, performing coarse registration on the point cloud by adopting a PCA algorithm based on the feature point set;
step 6, carrying out accurate registration on the point cloud after the coarse registration by adopting an improved ICP algorithm;
step 7, starting registration and splicing from the first image according to the order of the images shot by the unmanned aerial vehicle, then registering and splicing the newly merged point cloud with the point cloud of the next image, until all point clouds are spliced and the three-dimensional reconstruction is completed.
Further, the specific process of using the unmanned aerial vehicle to acquire images in step 1 is as follows: the unmanned aerial vehicle enters the boiler from a boiler inlet of the power plant, flies to the area needing three-dimensional reconstruction, uses its ultrasonic range finder to keep a distance of 0.5-0.7 m from the furnace wall to be reconstructed, and then carries out ordered depth image shooting.
Furthermore, the depth image shooting method specifically comprises the following steps: and starting from the upper left corner of the determined shooting area, performing depth image acquisition from top to bottom in an s-shaped track, wherein 10% -20% of overlapping parts exist between adjacent pictures.
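For illustration only (not part of the patent), the following Python sketch generates such an S-shaped capture grid with a fixed overlap ratio between adjacent shots; the wall dimensions, per-shot footprint and the 20% overlap in the example are assumptions.

```python
def s_shaped_waypoints(wall_width, wall_height, fov_w, fov_h, overlap=0.2):
    """Return capture centres, top-left first, sweeping left-right then right-left."""
    step_x = fov_w * (1.0 - overlap)   # horizontal spacing between adjacent shots
    step_y = fov_h * (1.0 - overlap)   # vertical spacing between rows
    xs = [i * step_x for i in range(int(wall_width // step_x) + 1)]
    ys = [j * step_y for j in range(int(wall_height // step_y) + 1)]
    waypoints = []
    for row, y in enumerate(ys):
        row_xs = xs if row % 2 == 0 else list(reversed(xs))   # alternate sweep direction
        waypoints.extend((x, y) for x in row_xs)
    return waypoints

# Example: a 6 m x 8 m wall patch imaged with a 1.5 m x 1.0 m footprint per shot.
print(len(s_shaped_waypoints(6.0, 8.0, 1.5, 1.0)))
```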
Further, the step 2 of denoising the point cloud by using a layered denoising method specifically includes the following steps:
step 2.1, carrying out normalization processing on the point cloud generated by the depth image, specifically:
taking a certain vertex of a minimum cube containing point cloud as a coordinate origin o, and constructing a three-dimensional coordinate system oxyz to ensure that coordinates of all points of the point cloud are positive values;
finding the maximum and minimum values of the point cloud along the x-, y- and z-axis directions: x_max, x_min, y_max, y_min, z_max, z_min;
The coordinate information of all points in the point cloud is processed as follows, where x_i, y_i, z_i are the original coordinates of a point in the point cloud and x_new, y_new, z_new are the new coordinates after normalization;
step 2.2, the point cloud normalized in step 2.1 is divided evenly into m layers along the z-axis direction,
where U_Z is the minimum length along the z-axis direction that contains all the normalized points,
and where N represents the number of all points in the point cloud, and S_xoy, S_yoz and S_xoz represent the areas of the normalized point cloud projected onto the xoy, yoz and xoz planes respectively;
step 2.3, calculating a boundary threshold beta of the point cloud after the normalization processing, wherein the calculation formula is as follows:
in the formula, omega is a weight adjustment coefficient;
and 2.4, deleting the noise points, specifically:
neglecting the influence of a z axis, converting the three-dimensional coordinates of the points contained in each layer into two-dimensional coordinates, if only one point exists in the threshold range of the two-dimensional coordinates, judging the point as a noise point, and deleting the noise point; otherwise, judging the points as non-noise points; judging the points contained in each layer according to the method, and deleting all noise points in the point cloud;
and 2.5, performing reverse normalization reduction on the point cloud remained after the noise points are deleted.
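As a rough illustration of steps 2.1-2.5, the following Python sketch normalizes the cloud, splits it into m layers along z, and deletes points with no layer-mate within the boundary threshold. The layer count m and the threshold beta are taken as given parameters here, since their defining formulas appear only as equation images in the original; min-max normalization is assumed.

```python
import numpy as np

def layered_denoise(points, m, beta):
    """points: (N, 3) array; returns the points kept after the per-layer check."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    norm = (points - mins) / (maxs - mins)                       # assumed min-max normalisation
    layer_idx = np.minimum((norm[:, 2] * m).astype(int), m - 1)  # split evenly along z
    keep = np.ones(len(points), dtype=bool)
    for layer in range(m):
        idx = np.where(layer_idx == layer)[0]
        xy = norm[idx, :2]                                       # drop z inside a layer
        for a, i in enumerate(idx):
            # a point with no other layer-mate within beta is treated as a noise point
            d = np.linalg.norm(xy - xy[a], axis=1)
            if np.sum(d < beta) <= 1:                            # count includes the point itself
                keep[i] = False
    return points[keep]                                          # original coordinates, no restore needed
```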
Further, the specific steps of performing denoising again on the point cloud denoised in step 2 by using the Isolation forest algorithm in step 3 are as follows:
step 3.1, constructing iTree to form iForest, which specifically comprises the following steps:
3.1.1 randomly extracting one percent of points from the point cloud as a sample;
3.1.2 randomly selecting one of three coordinate directions of x, y and z from the samples as an attribute q and an arbitrary value p between the maximum value and the minimum value in the samples under the coordinate direction;
3.1.3 dividing the point satisfying the condition that q is less than p into one part, and the point that q is more than or equal to p is the other part;
3.1.4 repeat 3.1.2 and 3.1.3 until one of three conditions is reached: the tree has reached a limited height; there is only one sample on a node; all the characteristics of the samples on the nodes are the same;
3.1.5 repeating the steps 3.1.1-3.1.4, constructing 100 iTrees to form iForest;
step 3.2, passing all points in the point cloud through each constructed iTree, and calculating corresponding abnormal scores l (x)
Wherein g (n) is a normalization constant of a point cloud set with point cloud number n, and E (k (x)) is an average path length of a point x in the point cloud in all constructed itree;
H(i)=ln(i)+0.5772156649
where i is the tree level, L is the maximum distance from the root node to the end node, and e_i is the number of edges that point x passes through from the root node to the end node;
and 3.3, setting a hyper-parameter lambda, wherein the lambda range is between 0 and 1, and deleting the abnormal points when the abnormal score l (x) exceeds lambda.
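A minimal sketch of this second denoising pass using scikit-learn's IsolationForest, which follows the standard Isolation Forest anomaly score l(x) = 2^(-E(k(x))/g(n)) in the notation of step 3.2. The 100 trees and 256-point subsample follow the embodiment below; the score threshold lambda is an assumed value.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def isolation_forest_denoise(points, lam=0.6):
    """points: (N, 3) array; keep points whose anomaly score l(x) does not exceed lam."""
    forest = IsolationForest(n_estimators=100, max_samples=256, random_state=0)
    forest.fit(points)
    # score_samples returns the *negated* anomaly score of the original paper,
    # so -score_samples recovers l(x) in roughly the 0..1 range used above.
    scores = -forest.score_samples(points)
    return points[scores <= lam]
```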
Further, in the step 4, feature point extraction is performed on the point cloud denoised in the step 3, and a specific process of constructing a feature point set is as follows:
step 4.1, because only a small part of each pair of shot depth images overlaps, the overlapping part needs to be segmented out to keep the registration efficient: the denoised point clouds generated by two adjacent depth images are selected in turn, according to the shooting order of the unmanned aerial vehicle, as point cloud p and point cloud q, and the point clouds of the overlapping parts are cropped out as p' and q', the number of points cropped from the overlapping part of each image being 25% of the total number of points in the cloud;
step 4.2, extracting the characteristic points of the point clouds p 'and q', and specifically comprises the following steps:
4.2.1 calculate the curvature measure of point p'_i, where p'_i is any point in the point cloud p',
in the formula, k is the number of points in the point cloud p' nearest to p'_i, found with the K-D tree algorithm; p'_ij is one of the k neighbouring points of p'_i; and the distance term is the distance from point p'_ij to the tangent plane at p'_i;
4.2.2 calculating the evaluation threshold of the characteristic point, wherein the calculation formula is as follows:
in the formula, N represents the number of points in the point cloud p', and α is an adjustment coefficient;
4.2.3 if measure(p'_i) > δ, point p'_i is a feature point; otherwise it is not;
4.2.4 repeat steps 4.2.1-4.2.3, traversing all points to find the feature point set p'' of the point cloud p' and the feature point set q'' of the point cloud q'.
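A minimal sketch of the feature-point selection in step 4.2, assuming the curvature measure is approximated by the mean distance of the k nearest neighbours (found with a k-d tree) to the tangent plane at each point, and the threshold δ by α times the cloud-wide mean of that measure; the patent's exact formulas are given only as equation images, so both choices are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def extract_feature_points(points, k=20, alpha=1.5):
    """Return the points whose curvature-style measure exceeds the threshold delta."""
    tree = cKDTree(points)
    measures = np.empty(len(points))
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k + 1)          # +1: the query returns the point itself
        nbrs = points[idx[1:]]
        centred = nbrs - nbrs.mean(axis=0)
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        normal = vt[-1]                          # smallest principal axis ~ tangent-plane normal
        measures[i] = np.mean(np.abs((nbrs - p) @ normal))  # mean distance to the tangent plane at p
    delta = alpha * measures.mean()              # evaluation threshold (assumed form)
    return points[measures > delta]
```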
Further, the specific process of performing coarse registration on the point cloud by using the PCA algorithm based on the feature point set in the step 5 is as follows:
step 5.1, denote the two feature point sets as p'' and q'', where p'' is an n x 3 matrix and q'' is an n' x 3 matrix, each row representing a data point; calculate the centers of the two feature point sets respectively, with the formula as follows:
step 5.2, respectively calculating covariance matrixes of the two groups of feature point set data sets, wherein the formula is as follows:
step 5.3, perform singular value decomposition on the covariance matrices C_p'' and C_q'' of the two feature point sets and solve for their eigenvectors respectively, with the formula as follows:
where U_p'' and U_q'' are the eigenvector matrices of the 3 x 3 covariance matrices, i.e. the principal directions of the two feature point sets, D_p'' and D_q'' are positive semi-definite 3 x 3 diagonal matrices whose diagonal values are the singular values of C_p'' and C_q'', and the remaining SVD factors are 3 x 3 unitary matrices;
step 5.4, obtain the initial rigid body transformation parameters (R_0, T_0), where R_0 is the initial rotation matrix and T_0 is the initial translation vector, with the formula as follows:
in the formula, U_q''^(-1) is the inverse matrix of U_q''; the feature point set p'' is rotated and translated according to formula (13) to obtain p''_new:
p''_new = p'' * R_0 + T_0    (13)
step 5.5, correct the feature point set in the principal-axis directions: PCA coarse registration of the point cloud can suffer from principal-axis reversal, giving 2^3 = 8 possible cases: (x+, y+, z+), (x+, y+, z-), (x+, y-, z+), (x+, y-, z-), (x-, y+, z+), (x-, y+, z-), (x-, y-, z+), (x-, y-, z-), where + indicates the axis is oriented correctly and - indicates the axis is reversed; in each case the reversed axes are rotated by 180°; the average Euclidean distance between each of the 8 feature point sets obtained from these rotations and the feature point set q'' is calculated, and the set with the minimum distance is the correctly rotated feature point set.
Further, the specific process of performing accurate registration on the coarsely registered point cloud by adopting an improved ICP algorithm based on the feature point sets in step 6 is as follows:
step 6.1, initialize ψ = 0, give a threshold τ, and take the initial rotation-translation matrices as (R_0, T_0);
step 6.2, for each point p''_i in the feature point set p'', find the nearest point q''_i in the feature point set q'' using the Best Bin First algorithm;
step 6.3, calculate the registration point pairs of p'' and q'' by the quaternion method, and solve for the rigid transformation matrices R and T;
step 6.4, solving a new target point set after rotation and translation, wherein the formula is as follows:
p''' = p'' * R + T    (14)
step 6.5, the convergence criterion coefficient is calculated according to the following formula:
in the formula, the two omitted symbols are the unit normal vectors of points p''_i and q''_i in their respective planes, and R^(-1) is the inverse matrix of R; if the error between two iterations satisfies E_ψ - E_(ψ+1) < τ, the iteration stops and the optimal rotation-translation matrix is obtained; if E_ψ - E_(ψ+1) ≥ τ, let ψ = ψ + 1 and repeat steps 6.2-6.5 until E_ψ - E_(ψ+1) < τ.
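A minimal sketch of the refinement loop in steps 6.1-6.5. For brevity it substitutes a k-d tree nearest-neighbour search for the Best Bin First algorithm and an SVD solution of the rigid transform for the quaternion method, and it stops when the decrease of the mean squared error falls below τ, mirroring the E_ψ - E_(ψ+1) < τ criterion; R0 and T0 are assumed to come from the coarse step.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(p, q, R0, T0, tau=1e-6, max_iter=50):
    """Refine the coarse transform; row-vector convention p @ R + T, as in formula (14)."""
    tree = cKDTree(q)
    R, T = np.array(R0, dtype=float), np.array(T0, dtype=float)
    prev_err = np.inf
    for _ in range(max_iter):
        p_cur = p @ R + T
        _, idx = tree.query(p_cur)                # closest-point correspondences
        matched = q[idx]
        mu_p, mu_q = p_cur.mean(axis=0), matched.mean(axis=0)
        H = (p_cur - mu_p).T @ (matched - mu_q)   # cross-covariance of the point pairs
        U, _, Vt = np.linalg.svd(H)
        if np.linalg.det(U @ Vt) < 0:             # guard against a reflection
            U[:, -1] *= -1
        R_step = U @ Vt
        T_step = mu_q - mu_p @ R_step
        R, T = R @ R_step, T @ R_step + T_step    # compose with the running transform
        err = np.mean(np.sum((p @ R + T - matched) ** 2, axis=1))
        if prev_err - err < tau:                  # stop when the error change is below tau
            break
        prev_err = err
    return R, T
```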
Compared with the prior art, the invention has the following advantages:
the invention provides the unmanned aerial vehicle image acquisition method which is less in workload, simple and efficient in consideration of the hardware equipment of the unmanned aerial vehicle and the complex situation in the boiler; a new algorithm (an improved ICP algorithm) is adopted to carry out secondary denoising treatment on the point cloud, so that noise points are removed more accurately and comprehensively under the condition of small calculation amount; the curvature of the points and the K-D tree are used for feature extraction, so that the extraction speed is higher, and the extraction effect is better; the step of point cloud calibration is added in the coarse registration, so that the problem of main shaft reversal of point cloud registration is avoided, and more reliable initial conditions are provided for accurate registration; an improved ICP accurate registration algorithm is provided, the registration efficiency is higher, and the registration result is more accurate. Therefore, compared with the traditional mode that the operation is manually carried out after a large scaffold or a lifting large platform is erected, the method reduces the workload of boiler shutdown detection, shortens the construction period, reduces the operation risk coefficient of workers and saves manpower and material resources.
Drawings
FIG. 1 is a schematic view of the process of the present invention.
Fig. 2 is a schematic diagram of an unmanned aerial vehicle performing depth image acquisition in a power plant boiler.
Fig. 3 shows a depth image capturing method.
FIG. 4 is a flow chart of point cloud denoising.
Fig. 5 is a schematic diagram of the division.
Fig. 6 is a schematic diagram of a threshold boundary after any point in the point cloud is converted into a two-dimensional coordinate.
FIG. 7 is a comparison diagram of point clouds before and after de-noising.
Fig. 8 is a flow chart of point cloud feature point extraction, coarse registration, and accurate registration.
Fig. 9 is a schematic diagram of segmenting overlapping portions of depth images.
FIG. 10 is a three-dimensional modeling effect diagram of a portion of a wall of a dense phase zone inside a boiler.
Detailed Description
The technical solution in the embodiments of the present invention will be described in detail below with reference to the embodiments and the accompanying drawings. It should be noted that variations and modifications can be made by those skilled in the art without departing from the principle of the present invention, and these should also be construed as falling within the scope of the present invention.
Embodiment: three-dimensional reconstruction of a designated wall of the dense-phase zone in the boiler of a certain coal-fired power company
1. As shown in FIG. 2, the unmanned aerial vehicle enters the interior of the boiler through the boiler manhole, keeps a distance of 0.6 m from the designated wall of the dense-phase zone using the ultrasonic range finder, and then performs depth image acquisition in an s-shaped track from top to bottom, starting from the upper left corner of the shooting area in the manner of FIG. 3. The bold black box in FIG. 3 is the size of one depth image, and there is a 20% overlap (shaded) between adjacent pictures.
2. According to the flowchart of fig. 4, denoising the point cloud generated by each shot depth image, firstly denoising the point cloud by a layered denoising method, comprising the following steps:
2.1, carrying out normalization processing on the point cloud generated by the depth image, and specifically comprising the following steps:
2.1.1, constructing a three-dimensional coordinate system oxyz by taking a certain vertex of a minimum cube containing point cloud as a coordinate origin o, so that coordinates of all points of the point cloud are positive values;
2.1.2 finding the maximum and minimum values of the point cloud along the x-, y- and z-axis directions: x_max, x_min, y_max, y_min, z_max, z_min;
2.1.3 the coordinate information of all points in the point cloud is processed as follows, where x_i, y_i, z_i are the original coordinates of a point in the point cloud and x_new, y_new, z_new are the new coordinates after normalization;
2.2, the point cloud after normalization processing is equally divided into m layers along the z-axis direction, as shown in FIG. 5,
where U_Z is the minimum length along the z-axis direction that contains all the normalized points,
and where N represents the number of all points in the point cloud, and S_xoy, S_yoz and S_xoz represent the areas of the normalized point cloud projected onto the xoy, yoz and xoz planes respectively;
2.3 calculating a boundary threshold beta of the three-dimensional point cloud after normalization processing, wherein the calculation formula is as follows:
in the formula, omega is a weight adjustment coefficient;
and in the ith layer of the three-dimensional point cloud, converting the three-dimensional coordinates into two-dimensional coordinates, and neglecting the influence of the z axis. Any point pi,j(xi,j,yi,j) Threshold boundary Q ofi,jAs shown in fig. 6: if at Qi,jIf there is only one point, it is considered as a drift point (noise point) and deleted; otherwise, regarding the points as non-noise points; and judging the points contained in each layer according to the method, and deleting all noise points in the point cloud.
And 2.4, performing inverse normalization reduction on the point cloud remained after the noise points are deleted.
And then denoising the point cloud again by adopting an Isolation Forest (IF) algorithm.
2.5 constructing iTree to form iForest, which specifically comprises the following steps:
2.5.1 randomly extracting one percent of points (256 points) from the point cloud as a sample;
2.5.2 randomly selecting one of three coordinate directions of x, y and z from the samples as an attribute q and an arbitrary value p between the maximum value and the minimum value in the samples under the coordinate direction;
2.5.3 dividing the point satisfying the condition that q is less than p into one part, and dividing the point that q is more than or equal to p into the other part;
2.5.4 repeat 2.5.2 and 2.5.3 until one of three conditions is reached: the tree has reached the height limit; there is only one sample on a node; all the features of the samples on the node are the same;
2.5.5 repeating the steps 2.5.1-2.5.4, and constructing 100 iTrees to form iForest;
and 2.6, passing all points in the point cloud through each constructed iTree, calculating corresponding abnormal scores l (x), wherein the range is 0-1, marking the data points with the scores less than 0.5 as normal data points, and assigning a value of 1. Otherwise, the data points are considered as potential anomalies, where points with a score close to 1 are flagged as anomalies by assigning them a value of-1, the calculation formula is as follows:
where g (n) is a normalization constant of a point cloud set having a point cloud number n, E (k (x)) is an average path length of a point x in the point cloud in all constructed itree, and when E (k (x)) approaches 0, an abnormal score of a certain point approaches 1.
H(i)=ln(i)+0.5772156649
Where i is the tree level, L is the maximum distance from the root node to the end node, eiThe number of edges that the point x passes from the root node to the end node;
and 2.7 setting a hyper-parameter lambda, wherein the range of lambda is between 0 and 1, obtaining the abnormal scores of all the points, and deleting the abnormal scores l (x) which exceed lambda as abnormal points.
And (4) denoising the point clouds generated by each depth picture one by one according to the step 2. FIG. 7 is a comparison diagram of point clouds before and after de-noising.
3. Carrying out feature point extraction, coarse registration and accurate registration on the point cloud according to the process shown in FIG. 8
3.1 Point cloud feature Point extraction
Since only a small part of each pair of captured depth images overlaps, the overlapping part needs to be segmented out first to keep the registration efficient. Specifically: the denoised point clouds generated by two adjacent depth images are selected in turn according to the shooting order of the unmanned aerial vehicle, and their relative relationship is determined. For example, for the two depth images in FIG. 9, the right side of the point cloud corresponding to image A is cropped first; to ensure that the overlapping part is captured completely, the cropped points amount to 25% of the total number and are recorded as p'. The same number of points are then cropped from the left side of the point cloud corresponding to image B and recorded as q'.
Extracting the characteristic points of the point clouds p 'and q', and specifically comprising the following steps:
(1) calculate the curvature measure of point p'_i, where p'_i is any point in the point cloud p',
in the formula, k is the number of points in the point cloud p' nearest to p'_i, found with the K-D tree algorithm; p'_ij is one of the k neighbouring points of p'_i; and the distance term is the distance from point p'_ij to the tangent plane at p'_i;
(2) calculating a characteristic point evaluation threshold value, wherein the calculation formula is as follows:
in the formula, N represents the number of points in the point cloud p', and α is an adjustment coefficient;
If measure(p'_i) > δ, point p'_i is a feature point; otherwise it is not. (3) Steps (1) and (2) are repeated, traversing all points to obtain the feature point set p'' of the point cloud p' and the feature point set q'' of the point cloud q'.
3.2, carrying out coarse registration on the point cloud by adopting a PCA algorithm, wherein the specific process is as follows:
3.2.1 denote the two feature point sets as p'' and q'', where p'' is an n x 3 matrix and q'' is an n' x 3 matrix, each row representing a data point; calculate the centers of the two feature point sets respectively, with the formula as follows:
3.2.2 calculating covariance matrixes of the two sets of feature point set data sets respectively, wherein the formula is as follows:
3.2.3 perform singular value decomposition on the covariance matrices C_p'' and C_q'' of the two feature point sets and solve for their eigenvectors respectively, with the formula as follows:
where U_p'' and U_q'' are the eigenvector matrices of the 3 x 3 covariance matrices, i.e. the principal directions of the two feature point sets, D_p'' and D_q'' are positive semi-definite 3 x 3 diagonal matrices whose diagonal values are the singular values of C_p'' and C_q'', and the remaining SVD factors are 3 x 3 unitary matrices;
3.2.4 obtain the initial rigid body transformation parameters (R_0, T_0), where R_0 is the initial rotation matrix and T_0 is the initial translation vector, with the formula as follows:
in the formula, U_q''^(-1) is the inverse matrix of U_q''; the feature point set p'' is rotated and translated according to formula (13) to obtain p''_new:
p''_new = p'' * R_0 + T_0    (13)
3.2.5 PCA coarse registration of the point cloud can suffer from principal-axis reversal, so a correction in the principal-axis directions of the point cloud is required. The reversal problem gives 2^3 = 8 possible cases: (x+, y+, z+), (x+, y+, z-), (x+, y-, z+), (x+, y-, z-), (x-, y+, z+), (x-, y+, z-), (x-, y-, z+), (x-, y-, z-), where + indicates the axis is oriented correctly and - indicates the axis is reversed, requiring a 180° rotation;
the average Euclidean distance between each of the 8 feature point sets obtained from these rotations and the feature point set q'' is calculated, and the set with the minimum distance is the correctly rotated feature point set.
3.3, accurately registering the point cloud by adopting an improved ICP (Iterative Closest Point) algorithm, which specifically comprises the following steps:
3.3.1 initialize ψ = 0, give a threshold τ, and take the initial rotation-translation matrices as (R_0, T_0);
3.3.2 for each point p''_i in the feature point set p'', find the nearest point q''_i in the feature point set q'' using the Best Bin First algorithm;
3.3.3 calculating registration point pairs of p 'and q' by a quaternion method, and solving rigid transformation matrixes R and T;
3.3.4 solving a new target point set after rotation and translation according to the formula (14),
p''' = p'' * R + T    (14)
3.3.5 calculate the convergence criterion coefficient, the formula is as follows:
in the formula, the two omitted symbols are the unit normal vectors of points p''_i and q''_i in their respective planes, and R^(-1) is the inverse matrix of R; if the error between two iterations satisfies E_ψ - E_(ψ+1) < τ, the iteration stops and the optimal rotation-translation matrix is obtained; if E_ψ - E_(ψ+1) ≥ τ, let ψ = ψ + 1 and repeat steps 3.3.2-3.3.5 until E_ψ - E_(ψ+1) < τ.
4. The accurately registered point clouds are spliced to complete the three-dimensional reconstruction. The complete point clouds from which p'' and q'' were taken are moved according to the optimal rotation-translation matrix and spliced. Registration and splicing start from the first point cloud, following the order of the images shot by the unmanned aerial vehicle, and the newly spliced point cloud is then registered and spliced with the point cloud of the next picture until all point clouds are spliced. The modeling effect of the dense-phase zone wall is shown in FIG. 10.
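Putting the pieces together, the following sketch shows this sequential registration-and-splicing loop. Here denoise(), extract_feature_points(), pca_coarse_registration() and icp_refine() stand for the earlier sketches (denoise() is an assumed wrapper combining both denoising passes), and the overlap cropping of step 3.1 is omitted for brevity.

```python
import numpy as np

def reconstruct(clouds):
    """clouds: list of (N_i, 3) arrays in shooting order; returns the merged point cloud."""
    model = denoise(clouds[0])                       # both denoising passes, see earlier sketches
    for cloud in clouds[1:]:
        cloud = denoise(cloud)
        src = extract_feature_points(cloud)          # features of the new cloud
        dst = extract_feature_points(model)          # features of the accumulated model
        R0, T0 = pca_coarse_registration(src, dst)   # coarse alignment
        R, T = icp_refine(src, dst, R0, T0)          # accurate alignment
        model = np.vstack([model, cloud @ R + T])    # move the whole new cloud and splice it
    return model
```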
Claims (8)
1. A power plant boiler interior three-dimensional reconstruction method based on unmanned aerial vehicle image acquisition is characterized by comprising the following steps:
step 1, utilizing an unmanned aerial vehicle to acquire images;
step 2, denoising the point cloud generated by each depth image acquired in the step 1 by utilizing a layered denoising method;
step 3, denoising the point cloud subjected to denoising treatment in the step 2 again by adopting an Isolation forest algorithm;
step 4, extracting characteristic points of the point cloud denoised in the step 3, and constructing a characteristic point set;
step 5, performing coarse registration on the point cloud by adopting a PCA algorithm based on the feature point set;
step 6, carrying out accurate registration on the point cloud after the coarse registration by adopting an improved ICP algorithm;
and 7, splicing the point clouds subjected to accurate registration one by one according to the sequence of the images acquired by the unmanned aerial vehicle to complete three-dimensional reconstruction.
2. The power plant boiler internal three-dimensional reconstruction method based on unmanned aerial vehicle image acquisition as claimed in claim 1, wherein: the specific process of using the unmanned aerial vehicle to acquire images in step 1 is as follows: the unmanned aerial vehicle enters the boiler from a boiler inlet of the power plant, flies to the area needing three-dimensional reconstruction, uses its ultrasonic range finder to keep a distance of 0.5-0.7 m from the furnace wall to be reconstructed, and then carries out ordered depth image shooting.
3. The power plant boiler internal three-dimensional reconstruction method based on unmanned aerial vehicle image acquisition as claimed in claim 2, characterized in that: the depth image shooting method specifically comprises the following steps: and starting from the upper left corner of the determined shooting area, performing depth image acquisition from top to bottom in an s-shaped track, wherein 10% -20% of overlapping parts exist between adjacent pictures.
4. The power plant boiler internal three-dimensional reconstruction method based on unmanned aerial vehicle image acquisition as claimed in claim 1, wherein: the step 2 of denoising the point cloud by using a layered denoising method specifically comprises the following steps:
step 2.1, carrying out normalization processing on the point cloud generated by the depth image, specifically:
taking a certain vertex of a minimum cube containing point cloud as a coordinate origin o, and constructing a three-dimensional coordinate system oxyz to ensure that coordinates of all points of the point cloud are positive values;
finding the maximum and minimum values of the point cloud along the x-, y- and z-axis directions: x_max, x_min, y_max, y_min, z_max, z_min;
The coordinate information of all points in the point cloud is processed as follows, where x_i, y_i, z_i are the original coordinates of a point in the point cloud and x_new, y_new, z_new are the new coordinates after normalization;
step 2.2, the point cloud normalized in step 2.1 is divided evenly into m layers along the z-axis direction,
where U_Z is the minimum length along the z-axis direction that contains all the normalized points,
and where N represents the number of all points in the point cloud, and S_xoy, S_yoz and S_xoz represent the areas of the normalized point cloud projected onto the xoy, yoz and xoz planes respectively;
step 2.3, calculating a boundary threshold beta of the three-dimensional point cloud after normalization processing, wherein the calculation formula is as follows:
in the formula, omega is a weight adjustment coefficient;
and 2.4, deleting the noise points, specifically:
neglecting the influence of a z axis, converting the three-dimensional coordinates of the points contained in each layer into two-dimensional coordinates, if only one point exists in the threshold range of the two-dimensional coordinates, judging the point as a noise point, and deleting the noise point; otherwise, judging the points as non-noise points; judging the points contained in each layer according to the method, and deleting all noise points in the point cloud;
and 2.5, performing reverse normalization reduction on the point cloud remained after the noise points are deleted.
5. The power plant boiler internal three-dimensional reconstruction method based on unmanned aerial vehicle image acquisition as claimed in claim 1, wherein: the specific steps of performing denoising again on the point cloud denoised in the step 2 by using the Isolation forest algorithm in the step 3 are as follows:
step 3.1, constructing iTree to form iForest, which specifically comprises the following steps:
3.1.1 randomly extracting one percent of points from the point cloud as a sample;
3.1.2 randomly selecting one of three coordinate directions of x, y and z from the samples as an attribute q and an arbitrary value p between the maximum value and the minimum value in the samples under the coordinate direction;
3.1.3 dividing the point satisfying the condition that q is less than p into one part, and the point that q is more than or equal to p is the other part;
3.1.4 repeat 3.1.2 and 3.1.3 until one of three conditions is reached: the tree has reached a limited height; there is only one sample on a node; all the characteristics of the samples on the nodes are the same;
3.1.5 repeating the steps 3.1.1-3.1.4, constructing 100 iTrees to form iForest;
step 3.2, passing all points in the point cloud through each constructed iTree, and calculating corresponding abnormal scores l (x)
Wherein g (n) is a normalization constant of a point cloud set with point cloud number n, and E (k (x)) is an average path length of a point x in the point cloud in all constructed itree;
H(i)=ln(i)+0.5772156649
where i is the tree level, L is the maximum distance from the root node to the end node, and e_i is the number of edges that point x passes through from the root node to the end node;
and 3.3, setting a hyper-parameter lambda, wherein the lambda range is between 0 and 1, and deleting the abnormal points when the abnormal score l (x) exceeds lambda.
6. The power plant boiler internal three-dimensional reconstruction method based on unmanned aerial vehicle image acquisition as claimed in claim 1, wherein: in the step 4, feature point extraction is performed on the point cloud denoised in the step 3, and the specific process of constructing the feature point set is as follows:
step 4.1, segment the overlapping parts of the captured depth images: the denoised point clouds generated by two adjacent depth images are selected in turn, according to the shooting order of the unmanned aerial vehicle, as point cloud p and point cloud q, and the point clouds of the overlapping parts are cropped out as p' and q', the number of points cropped from the overlapping part of each image being 25% of the total number of points in the cloud;
step 4.2, extracting the characteristic points of the point clouds p 'and q', and specifically comprises the following steps:
4.2.1 calculate the curvature measure of point p'_i, where p'_i is any point in the point cloud p',
in the formula, k is the number of points in the point cloud p' nearest to p'_i, found with the K-D tree algorithm; p'_ij is one of the k neighbouring points of p'_i; and the distance term is the distance from point p'_ij to the tangent plane at p'_i;
4.2.2 calculating the evaluation threshold of the characteristic point, wherein the calculation formula is as follows:
in the formula, N represents the number of points in the point cloud p', and α is an adjustment coefficient;
4.2.3 if measure(p'_i) > δ, point p'_i is a feature point; otherwise it is not;
4.2.4 repeat steps 4.2.1-4.2.3, traversing all points to find the feature point set p'' of the point cloud p' and the feature point set q'' of the point cloud q'.
7. The power plant boiler internal three-dimensional reconstruction method based on unmanned aerial vehicle image acquisition as claimed in claim 1, wherein: the specific process of performing coarse registration on the point cloud by adopting the PCA algorithm based on the feature point set in the step 5 is as follows:
step 5.1, denote the two feature point sets as p'' and q'', where p'' is an n x 3 matrix and q'' is an n' x 3 matrix, each row representing a data point; calculate the centers of the two feature point sets respectively according to the following formula:
step 5.2, respectively calculating covariance matrixes of the two groups of feature point set data sets, wherein the formula is as follows:
step 5.3, perform singular value decomposition on the covariance matrices C_p'' and C_q'' of the two feature point sets and solve for their eigenvectors respectively, with the formula as follows:
where U_p'' and U_q'' are the eigenvector matrices of the 3 x 3 covariance matrices, i.e. the principal directions of the two feature point sets, D_p'' and D_q'' are positive semi-definite 3 x 3 diagonal matrices whose diagonal values are the singular values of C_p'' and C_q'', and the remaining SVD factors are 3 x 3 unitary matrices;
step 5.4, obtain the initial rigid body transformation parameters (R_0, T_0), where R_0 is the initial rotation matrix and T_0 is the initial translation vector, with the formula as follows:
in the formula, U_q''^(-1) is the inverse matrix of U_q''; the feature point set p'' is rotated and translated according to formula (13) to obtain p''_new:
p''_new = p'' * R_0 + T_0    (13)
step 5.5, correct the feature point set in the principal-axis directions: PCA coarse registration of the point cloud can suffer from principal-axis reversal, giving 2^3 = 8 possible cases: (x+, y+, z+), (x+, y+, z-), (x+, y-, z+), (x+, y-, z-), (x-, y+, z+), (x-, y+, z-), (x-, y-, z+), (x-, y-, z-), where + indicates the axis is oriented correctly and - indicates the axis is reversed; in each case the reversed axes are rotated by 180°; the average Euclidean distance between each of the 8 feature point sets obtained from these rotations and the feature point set q'' is calculated, and the set with the minimum distance is the correctly rotated feature point set.
8. The power plant boiler internal three-dimensional reconstruction method based on unmanned aerial vehicle image acquisition as claimed in claim 1, wherein: the specific process of performing accurate registration on the coarsely registered point cloud by using the improved ICP algorithm based on the feature point sets in step 6 is as follows:
step 6.1, initialize ψ = 0, give a threshold τ, and take the initial rotation-translation matrices as (R_0, T_0);
step 6.2, for each point p''_i in the feature point set p'', find the nearest point q''_i in the feature point set q'' using the Best Bin First algorithm;
6.3, calculating registration point pairs of p 'and q' by a quaternion method to obtain rigid transformation matrixes R and T;
step 6.4, solving a new target point set after rotation and translation, wherein the formula is as follows:
p''' = p'' * R + T    (14)
step 6.5, the convergence criterion coefficient is calculated according to the following formula:
in the formula, the two omitted symbols are the unit normal vectors of points p''_i and q''_i in their respective planes, and R^(-1) is the inverse matrix of R; if the error between two iterations satisfies E_ψ - E_(ψ+1) < τ, the iteration stops and the optimal rotation-translation matrix is obtained; if E_ψ - E_(ψ+1) ≥ τ, let ψ = ψ + 1 and repeat steps 6.2-6.5 until E_ψ - E_(ψ+1) < τ.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110757224.7A CN113421290A (en) | 2021-07-05 | 2021-07-05 | Power plant boiler interior three-dimensional reconstruction method based on unmanned aerial vehicle image acquisition |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110757224.7A CN113421290A (en) | 2021-07-05 | 2021-07-05 | Power plant boiler interior three-dimensional reconstruction method based on unmanned aerial vehicle image acquisition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113421290A true CN113421290A (en) | 2021-09-21 |
Family
ID=77720382
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110757224.7A Pending CN113421290A (en) | 2021-07-05 | 2021-07-05 | Power plant boiler interior three-dimensional reconstruction method based on unmanned aerial vehicle image acquisition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113421290A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104299260A (en) * | 2014-09-10 | 2015-01-21 | 西南交通大学 | Contact network three-dimensional reconstruction method based on SIFT and LBP point cloud registration |
WO2015188684A1 (en) * | 2014-06-12 | 2015-12-17 | 深圳奥比中光科技有限公司 | Three-dimensional model reconstruction method and system |
CN108830931A (en) * | 2018-05-23 | 2018-11-16 | 上海电力学院 | A kind of laser point cloud compressing method based on dynamic grid k neighborhood search |
CN110517193A (en) * | 2019-06-28 | 2019-11-29 | 西安理工大学 | A kind of bottom mounted sonar Processing Method of Point-clouds |
CN111145232A (en) * | 2019-12-17 | 2020-05-12 | 东南大学 | Three-dimensional point cloud automatic registration method based on characteristic information change degree |
CN112132752A (en) * | 2020-09-29 | 2020-12-25 | 华中科技大学 | Fine splicing method for large complex curved surface multi-view scanning point cloud |
CN112712589A (en) * | 2021-01-08 | 2021-04-27 | 浙江工业大学 | Plant 3D modeling method and system based on laser radar and deep learning |
-
2021
- 2021-07-05 CN CN202110757224.7A patent/CN113421290A/en active Pending
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015188684A1 (en) * | 2014-06-12 | 2015-12-17 | 深圳奥比中光科技有限公司 | Three-dimensional model reconstruction method and system |
CN104299260A (en) * | 2014-09-10 | 2015-01-21 | 西南交通大学 | Contact network three-dimensional reconstruction method based on SIFT and LBP point cloud registration |
CN108830931A (en) * | 2018-05-23 | 2018-11-16 | 上海电力学院 | A kind of laser point cloud compressing method based on dynamic grid k neighborhood search |
CN110517193A (en) * | 2019-06-28 | 2019-11-29 | 西安理工大学 | A kind of bottom mounted sonar Processing Method of Point-clouds |
CN111145232A (en) * | 2019-12-17 | 2020-05-12 | 东南大学 | Three-dimensional point cloud automatic registration method based on characteristic information change degree |
CN112132752A (en) * | 2020-09-29 | 2020-12-25 | 华中科技大学 | Fine splicing method for large complex curved surface multi-view scanning point cloud |
CN112712589A (en) * | 2021-01-08 | 2021-04-27 | 浙江工业大学 | Plant 3D modeling method and system based on laser radar and deep learning |
Non-Patent Citations (5)
Title |
---|
SHENGTAO ZHOU 等: "Non-iterative denoising algorithm based on a dual threshold for a 3D point cloud", 《OPTICS AND LASERS IN ENGINEERING》 * |
SZYMON RUSINKIEWICZ 等: "A Symmetric Objective Function for ICP", 《ACM TRANSACTIONS ON GRAPHICS》 * |
YOUSRA REGAYA 等: "Point-Denoise: Unsupervised outlier detection for 3D point clouds enhancement", 《MULTIMEDIA TOOLS AND APPLICATIONS》 * |
李为民 等: "基于 PCA 的 ICP 点云配准算法的改进研究", 《工业控制计算机》 * |
李义琛 等: "基于二次误差度量的点云简化", 《小型微型计算机系统》 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114170279B (en) | Point cloud registration method based on laser scanning | |
CN110340891B (en) | Mechanical arm positioning and grabbing system and method based on point cloud template matching technology | |
Grant et al. | Finding planes in LiDAR point clouds for real-time registration | |
CN111428748A (en) | Infrared image insulator recognition and detection method based on HOG characteristics and SVM | |
CN103886569A (en) | Parallel and matching precision constrained splicing method for consecutive frames of multi-feature-point unmanned aerial vehicle reconnaissance images | |
CN115546116A (en) | Method and system for extracting and calculating spacing of discontinuous surface of fully-covered rock mass | |
Zhan et al. | A point cloud segmentation method based on vector estimation and color clustering | |
Liu et al. | Application of three-dimensional laser scanning in the protection of multi-dynasty ceramic fragments | |
Agyemang et al. | Enhanced deep convolutional neural network for building component detection towards structural health monitoring | |
CN110942077A (en) | Feature line extraction method based on weight local change degree and L1 median optimization | |
Moritani et al. | Cylinder-based efficient and robust registration and model fitting of laser-scanned point clouds for as-built modeling of piping systems | |
CN111127667B (en) | Point cloud initial registration method based on region curvature binary descriptor | |
CN113111741A (en) | Assembly state identification method based on three-dimensional feature points | |
CN113421290A (en) | Power plant boiler interior three-dimensional reconstruction method based on unmanned aerial vehicle image acquisition | |
Magri et al. | Bending the doming effect in structure from motion reconstructions through bundle adjustment | |
Kawashima et al. | Automatic recognition of piping system from laser scanned point clouds using normal-based region growing | |
Chang et al. | 3D shape registration using regularized medial scaffolds | |
Andre Sorensen et al. | A RANSAC based CAD mesh reconstruction method using point clustering for mesh connectivity | |
Duan et al. | Rotation Initialization and Stepwise Refinement for Universal LiDAR Calibration | |
Ganovelli et al. | Reconstructing power lines from images | |
Li et al. | Optimization of radial distortion self-calibration for structure from motion from uncalibrated UAV images | |
Isa et al. | A review of data structure and filtering in handling 3D big point cloud data for building preservation | |
Soemantoro et al. | An AI automated self-organising, feature imitating approach for point cloud data reduction | |
Bassier et al. | Clustering of wall geometry from unstructured point clouds | |
Liu et al. | Terrain-adaptive ground filtering of airborne LIDAR data based on saliency-aware thin plate spline |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |