CN114170281A - Three-dimensional point cloud data acquisition and processing method - Google Patents

Three-dimensional point cloud data acquisition and processing method Download PDF

Info

Publication number
CN114170281A
CN114170281A (Application CN202111498675.XA)
Authority
CN
China
Prior art keywords
point
point cloud
cloud data
points
steps
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111498675.XA
Other languages
Chinese (zh)
Inventor
杨中凡
王小刚
唐磊
赵俊
奉志强
郭少杰
熊兴中
侯劲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University of Science and Engineering
Original Assignee
Sichuan University of Science and Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University of Science and Engineering filed Critical Sichuan University of Science and Engineering
Priority to CN202111498675.XA
Publication of CN114170281A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a three-dimensional point cloud data acquisition and processing method that addresses the large computational cost and susceptibility to interference of conventional techniques. The method comprises the following steps: step A: acquiring point cloud data through a camera; step B: filtering the obtained point cloud data; step C: registering the filtered point cloud data using a 4PCS algorithm based on ISS key points; step D: fitting the registered point cloud data to obtain an object point cloud image, thereby achieving more accurate processing of real objects.

Description

Three-dimensional point cloud data acquisition and processing method
Technical Field
The invention belongs to the technical field of three-dimensional image processing, and particularly relates to a three-dimensional point cloud data acquisition and processing method.
Background
Machine vision based on two-dimensional image processing has been applied in many fields such as face recognition, defect detection, and target recognition, but because it does not consider three-dimensional depth information, it still falls short of 3D vision in recognition, detection, and similar tasks. 3D vision, inspired by the human visual system, mainly uses a camera to perceive the three-dimensional environment and generate three-dimensional point cloud data; it is at the forefront of artificial intelligence, surveying and mapping, metrology, and related fields. For example, it is of great research significance for unmanned vehicles, autonomously obstacle-avoiding drones, and reverse engineering of products in industrial manufacturing.
One of the mainstream applications of 3D vision is three-dimensional reconstruction, and a variety of reconstruction techniques, such as binocular stereo vision, structured light, laser radar, and Kinect sensors, have been developed based on different principles. The technique adopted depends on the scene to be reconstructed, but the common purpose is to obtain depth information. This project is mainly aimed at the three-dimensional reconstruction of small objects and, weighing precision against cost, finally selects a Kinect-based three-dimensional reconstruction technique. A review of the relevant literature and analysis of the three-dimensional reconstruction principle show that the extrinsic matrix between the camera coordinate system and the world coordinate system is difficult to solve without marker-based positioning. Existing approaches solve the extrinsic matrix directly, for example with a coordinate measuring machine, direct calibration with a three-dimensional target, or calibration with a two-dimensional planar target. Coordinate measuring machines are expensive and complex to operate. The three-dimensional target direct method and the two-dimensional planar target calibration method must solve an overdetermined system of equations, require iterative optimization, and incur a large computational cost. Some methods constrain one coordinate dimension and solve it from constraint conditions, but they require restrictive conditions, such as a known light plane and camera position, and are easily disturbed.
Disclosure of Invention
Aiming at the problems of large computational cost and susceptibility to interference in the prior art, the invention provides a three-dimensional point cloud data acquisition and processing method whose goals are to improve the efficiency and precision of point cloud registration and to achieve more accurate processing of real objects.
The technical scheme adopted by the invention is as follows:
a three-dimensional point cloud data acquisition processing method comprises the following steps:
step A: acquiring point cloud data through a camera;
step B: filtering the obtained point cloud data;
step C: registering the filtered point cloud data using a 4PCS algorithm based on ISS key points;
step D: fitting the registered point cloud data to obtain an object point cloud image.
By adopting this scheme, the ISS-4PCS algorithm improves the registration success rate: first, compared with other key-point detectors, the key points extracted by ISS better represent local information and can be matched into a larger number of correct point pairs; second, when the overlap rate is greater than 20%, ISS key points greatly reduce the instability of the 4PCS algorithm.
The specific steps of the step A are as follows:
step A1: acquiring a depth image and depth coordinate information Z of an object at all angles through Kinect;
step A2: calculating the X and Y coordinates in the world coordinate system from the depth coordinate information Z, wherein X is calculated as:

X = uZ / f

and Y is calculated as:

Y = vZ / f

where u and v are the abscissa and ordinate on the depth image, Z is the measured depth value, f is the focal length of the camera, and X, Y are the abscissa and ordinate in the world coordinate system, respectively.
By adopting this scheme, the Kinect camera acquires image data based on the TOF principle and meets high-precision imaging requirements; the color camera on the Kinect can simultaneously capture color images at frame rates of up to 30 fps, and the emitted infrared light is not easily disturbed by other light sources, so depth data can be acquired accurately.
The specific steps of the step B are as follows:
step B1: removing background noise through conditional filtering;
step B2: removing abnormal values and isolated points through radius filtering and statistical filtering.
By adopting this scheme, the background and noise other than the target object can be removed, retaining only the target object and preventing interference from everything other than the target object when the point cloud data is acquired.
The concrete steps of the step C are as follows:
step C1: extracting characteristic points of the point cloud data to obtain a characteristic point set;
step C2: obtaining optimal congruent four-point matching on the feature point set through a 4PCS algorithm;
step C3: obtaining the optimal rigid body transformation matrix.
By adopting this scheme, the difficulty of registering and fusing single-frame point clouds when performing three-dimensional reconstruction with a Kinect-type sensor is overcome; the 4PCS algorithm in this method achieves better robustness and improves the registration success rate.
The specific steps of the step C1 are as follows:
step C11: suppose the point cloud data P contains n points p_i = (x_i, y_i, z_i), i = 1, 2, ..., n;
step C12: for each point p_i, establish a local coordinate system, and set the same search radius r_frame for all points;
step C13: for each point p_i in the point cloud data P, determine all points within the sphere of radius r_frame centered on p_i, and calculate their weights w_ij as:

w_ij = 1 / ||p_i - p_j||

where p_j is any point within the sphere of radius r_frame centered on p_i;
step C14: calculate the covariance matrix of each point p_i:

cov(p_i) = ( Σ_{||p_i - p_j|| < r_frame} w_ij (p_i - p_j)(p_i - p_j)^T ) / ( Σ_{||p_i - p_j|| < r_frame} w_ij )

where T denotes the transpose of the vector (p_i - p_j);
step C15: calculate the eigenvalues λ_i^1, λ_i^2, λ_i^3 of the covariance matrix cov(p_i) of each point p_i and arrange them in descending order, λ_i^1 ≥ λ_i^2 ≥ λ_i^3;
step C16: set thresholds ε_1 and ε_2; a point p_i is kept as a key point if its eigenvalues satisfy:

λ_i^2 / λ_i^1 ≤ ε_1 and λ_i^3 / λ_i^2 ≤ ε_2;

step C17: repeat steps C11-C16 until all points have been processed.
The specific steps of the step C2 are as follows:
step C21: arbitrarily given point cloud data K and Q;
step C22: randomly select 4 coplanar points B = {a, b, c, d} from the point cloud data K as a coplanar four-point base;
step C23: calculate the two ratio factors r_1 and r_2 of the four-point base:

r_1 = ||a - e|| / ||a - b||, r_2 = ||c - e|| / ||c - d||

where e is the intersection point of lines ab and cd;
step C24: for any point pair {q_1, q_2}, q_1, q_2 ∈ Q, calculate the candidate intermediate points:

e_1 = q_1 + r_1 (q_2 - q_1), e_2 = q_1 + r_2 (q_2 - q_1)

in the formula, e_i ≈ e_j indicates that congruent four points corresponding to B = {a, b, c, d} have been found, where i and j index the long-baseline point pairs in Q;
step C25: find all coplanar four-point sets in the point cloud data K, denoted E = {B_1, B_2, ..., B_m}, where m is the total number of coplanar four-point sets in K; repeat steps C21-C25 to obtain the congruent four-point set D = {C_1, C_2, ..., C_n}, where n is the total number of congruent four-point sets.
The specific steps of step C3 are as follows: in the set D = {C_1, C_2, ..., C_n}, find the optimal congruent four-point matching pair through the LCP (largest common pointset) strategy, calculate the rotation and translation transformation parameters of the congruent four points, apply the four-point transformation to the global point cloud, record the largest consistent region contained in the global registration as the optimal match, and take the transformation matrix corresponding to the optimal match as the optimal rigid body transformation matrix.
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
1. The 4PCS algorithm improves the registration success rate. First, when the overlap rate is below 20%, the robustness of algorithms in the traditional technology is lower than that of the standard 4PCS algorithm, because the detector extracts fewer key points in the overlap region, i.e., the number of correct point pairs decreases. Second, 4PCS is a random-selection algorithm: points are chosen at random in each iteration, so the algorithm carries a certain randomness. Therefore, even at a low overlap rate there remains a certain probability of correct registration, and at a high overlap rate the probability of successful registration is greatly improved.
2. The Kinect camera acquires image data based on the TOF principle and meets high-precision imaging requirements; the color camera on the Kinect can simultaneously capture color images at frame rates of up to 30 fps, and the emitted infrared light is not easily disturbed by other light sources, so depth data can be acquired accurately.
3. The difficulty of registering and fusing single-frame point clouds when performing three-dimensional reconstruction with a Kinect-type sensor is overcome; the 4PCS algorithm in this method achieves better robustness and improves the registration success rate.
Drawings
The invention will now be described, by way of example, with reference to the accompanying drawings, in which:
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a filtered point cloud of the present invention;
FIG. 3 is a frame target point cloud of the present invention;
FIG. 4 is a schematic view of two frame point cloud registration of the present invention;
FIG. 5 is a top view of the target object global point cloud of the present invention;
FIG. 6 is a front view of the overall point cloud of the target object of the present invention.
Detailed Description
All of the features disclosed in this specification, or all of the steps in any method or process so disclosed, may be combined in any combination, except combinations of features and/or steps that are mutually exclusive.
The present invention will be described in detail with reference to fig. 1.
The first embodiment is as follows:
a three-dimensional point cloud data acquisition processing method comprises the following steps:
step A: acquiring point cloud data through a camera;
step B: filtering the obtained point cloud data;
step C: registering the filtered point cloud data using a 4PCS algorithm based on ISS key points;
step D: fitting the registered point cloud data to obtain an object point cloud image.
The specific steps of the step A are as follows:
step A1: acquiring a depth image and depth coordinate information Z of an object at all angles through Kinect;
step A2: calculating the X and Y coordinates in the world coordinate system from the depth coordinate information Z, wherein X is calculated as:

X = uZ / f

and Y is calculated as:

Y = vZ / f

where u and v are the abscissa and ordinate on the depth image, Z is the measured depth value, f is the focal length of the camera, and X, Y are the abscissa and ordinate in the world coordinate system, respectively.
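By way of illustration only, the following minimal sketch back-projects a depth image into a point cloud according to the formulas of step A2. It assumes a NumPy depth array in meters, with pixel coordinates u, v taken relative to the principal point (cx, cy); the function name, the intrinsics, and the use of a single focal length f in pixels are assumptions of this sketch, not details fixed by the patent.

```python
import numpy as np

def depth_to_point_cloud(depth, f, cx, cy):
    """Back-project a depth image (meters) using X = uZ/f, Y = vZ/f,
    with u, v measured relative to the principal point (cx, cy)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]                  # pixel row (v) and column (u)
    Z = depth
    X = (u - cx) * Z / f                       # world-coordinate abscissa
    Y = (v - cy) * Z / f                       # world-coordinate ordinate
    valid = Z > 0                              # keep pixels with measured depth
    return np.stack([X[valid], Y[valid], Z[valid]], axis=1)  # (N, 3) points

# Example with placeholder Kinect-v2-like intrinsics (hypothetical values):
# points = depth_to_point_cloud(depth_m, f=365.0, cx=256.0, cy=212.0)
```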
The specific steps of the step B are as follows:
step B1: removing background noise through conditional filtering;
step B2: removing abnormal values and isolated points through radius filtering and statistical filtering.
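A minimal sketch of the step B filtering chain is shown below using the open-source Open3D library; the patent does not name a library, and the bounding-box extents and filter parameters here are scene-dependent placeholders, not values from the patent.

```python
import open3d as o3d

def filter_point_cloud(pcd):
    # Conditional filtering: keep only points inside a box assumed to
    # enclose the target object (extents here are hypothetical).
    box = o3d.geometry.AxisAlignedBoundingBox(min_bound=(-0.3, -0.3, 0.3),
                                              max_bound=(0.3, 0.3, 0.9))
    pcd = pcd.crop(box)
    # Radius filtering: drop points with fewer than 16 neighbors within 5 mm.
    pcd, _ = pcd.remove_radius_outlier(nb_points=16, radius=0.005)
    # Statistical filtering: drop points whose mean neighbor distance deviates
    # from the average by more than 2 standard deviations.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return pcd
```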
The concrete steps of the step C are as follows:
step C1: extracting characteristic points of the point cloud data to obtain a characteristic point set;
step C2: obtaining optimal congruent four-point matching on the feature point set through a 4PCS algorithm;
step C3: obtaining the optimal rigid body transformation matrix.
The specific steps of the step C1 are as follows:
step C11: suppose the point cloud data P contains n points p_i = (x_i, y_i, z_i), i = 1, 2, ..., n;
step C12: for each point p_i, establish a local coordinate system, and set the same search radius r_frame for all points;
step C13: for each point p_i in the point cloud data P, determine all points within the sphere of radius r_frame centered on p_i, and calculate their weights w_ij as:

w_ij = 1 / ||p_i - p_j||

where p_j is any point within the sphere of radius r_frame centered on p_i;
step C14: calculate the covariance matrix of each point p_i:

cov(p_i) = ( Σ_{||p_i - p_j|| < r_frame} w_ij (p_i - p_j)(p_i - p_j)^T ) / ( Σ_{||p_i - p_j|| < r_frame} w_ij )

where T denotes the transpose of the vector (p_i - p_j);
step C15: calculate the eigenvalues λ_i^1, λ_i^2, λ_i^3 of the covariance matrix cov(p_i) of each point p_i and arrange them in descending order, λ_i^1 ≥ λ_i^2 ≥ λ_i^3;
step C16: set thresholds ε_1 and ε_2; a point p_i is kept as a key point if its eigenvalues satisfy:

λ_i^2 / λ_i^1 ≤ ε_1 and λ_i^3 / λ_i^2 ≤ ε_2;

step C17: repeat steps C11-C16 until all points have been processed.
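The ISS key-point extraction of steps C11-C17 can be sketched directly from the formulas above. The brute-force neighbor search and the default threshold values below are illustrative assumptions; a production implementation would use a k-d tree for the radius queries.

```python
import numpy as np

def iss_keypoints(P, r_frame=0.01, eps1=0.6, eps2=0.6):
    """Return indices of ISS key points of P, an (n, 3) array.

    For each p_i: weights w_ij = 1/||p_i - p_j|| over the r_frame ball,
    weighted covariance cov(p_i), eigenvalues sorted descending, and the
    saliency test l2/l1 <= eps1 and l3/l2 <= eps2 (step C16)."""
    keep = []
    for i, p in enumerate(P):
        d = np.linalg.norm(P - p, axis=1)
        nbr = (d > 0) & (d < r_frame)          # neighbors inside the ball
        if nbr.sum() < 3:
            continue
        w = 1.0 / d[nbr]                        # w_ij = 1 / ||p_i - p_j||
        diff = P[nbr] - p                       # vectors (p_j - p_i)
        cov = (w[:, None, None] * np.einsum('nj,nk->njk', diff, diff)).sum(0)
        cov /= w.sum()                          # weighted covariance matrix
        l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending
        if l2 / l1 <= eps1 and l3 / l2 <= eps2:
            keep.append(i)
    return np.array(keep)
```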
The specific steps of the step C2 are as follows:
step C21: arbitrarily given point cloud data K and Q;
step C22: randomly select 4 coplanar points B = {a, b, c, d} from the point cloud data K as a coplanar four-point base;
step C23: calculate the two ratio factors r_1 and r_2 of the four-point base:

r_1 = ||a - e|| / ||a - b||, r_2 = ||c - e|| / ||c - d||

where e is the intersection point of lines ab and cd;
step C24: for any point pair {q_1, q_2}, q_1, q_2 ∈ Q, calculate the candidate intermediate points:

e_1 = q_1 + r_1 (q_2 - q_1), e_2 = q_1 + r_2 (q_2 - q_1)

in the formula, e_i ≈ e_j indicates that congruent four points corresponding to B = {a, b, c, d} have been found, where i and j index the long-baseline point pairs in Q;
step C25: find all coplanar four-point sets in the point cloud data K, denoted E = {B_1, B_2, ..., B_m}, where m is the total number of coplanar four-point sets in K; repeat steps C21-C25 to obtain the congruent four-point set D = {C_1, C_2, ..., C_n}, where n is the total number of congruent four-point sets.
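A sketch of the geometric core of steps C23 and C24 follows; the least-squares intersection of the base segments and all helper names are assumptions of this sketch. Two pairs whose intermediate points nearly coincide, one computed with r_1 and the other with r_2, form a candidate four-point set approximately congruent to B.

```python
import numpy as np

def base_ratios(a, b, c, d):
    """Return (r1, r2) for a coplanar base whose segments ab and cd intersect.

    e is the (least-squares) intersection of lines ab and cd;
    r1 = ||a - e|| / ||a - b||,  r2 = ||c - e|| / ||c - d||."""
    # Solve a + s(b - a) = c + t(d - c) for s, t in the least-squares sense.
    A = np.stack([b - a, -(d - c)], axis=1)     # 3x2 system matrix
    (s, t), *_ = np.linalg.lstsq(A, c - a, rcond=None)
    e = a + s * (b - a)
    r1 = np.linalg.norm(a - e) / np.linalg.norm(a - b)
    r2 = np.linalg.norm(c - e) / np.linalg.norm(c - d)
    return r1, r2

def intermediate_points(q1, q2, r1, r2):
    """Candidate intermediate points e1, e2 for a pair {q1, q2} in Q (step C24)."""
    return q1 + r1 * (q2 - q1), q1 + r2 * (q2 - q1)
```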
The specific steps of step C3 are as follows: in the set D = {C_1, C_2, ..., C_n}, find the optimal congruent four-point matching pair through the LCP (largest common pointset) strategy, calculate the rotation and translation transformation parameters of the congruent four points, apply the four-point transformation to the global point cloud, record the largest consistent region contained in the global registration as the optimal match, and take the transformation matrix corresponding to the optimal match as the optimal rigid body transformation matrix.
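The patent does not spell out how the rotation and translation parameters are computed from a four-point correspondence or how the LCP score is evaluated; a common choice, shown here purely as an assumption, is an SVD-based least-squares fit scored by the fraction of transformed points that land within a tolerance delta of the other cloud.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_transform(src, dst):
    """Least-squares R, t with R @ src_i + t ~ dst_i (Kabsch/SVD method;
    an assumed implementation, not quoted from the patent)."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

def lcp_score(K, Q, R, t, delta=0.005):
    """Fraction of transformed K points within delta of Q (the LCP criterion)."""
    d, _ = cKDTree(Q).query(K @ R.T + t)
    return np.mean(d < delta)
```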
In the above embodiment, motors 1 and 2 control the movement of the camera, and motors 3 and 4 control the rotation of the object. Motors 3 and 4 are each fitted with a disc, and the object is placed on top of the disc so that it can be photographed from all angles; motors 1 and 2 can be connected to a vertical slide rail and a horizontal slide rail, respectively, to enable full-angle shooting. In this embodiment the camera is a Kinect2; its imaging range is 0.5-4 m, and the closer to 0.5 m, the smaller the precision loss, so in actual use the distance between the camera and the target object can be kept at 0.6 m for the best imaging effect.
During shooting, the Kinect2 camera obtains a color side view and a depth side view of the target object, and the target point cloud is computed from the depth side view and the calibrated camera intrinsic parameters.
The point cloud obtained in the above step contains a large amount of noise in addition to the background point cloud and therefore needs to be preprocessed, i.e., filtered as in step B. In this embodiment, conditional filtering, radius outlier removal, and statistical filtering are used to remove the background, noise, and outliers from the original point cloud; the filtered point cloud is shown in FIG. 2.
In order to reconstruct the complete three-dimensional point cloud of the target, in this embodiment the target is rotated on a mechanical platform and sampled at 45-degree intervals, dividing 0-360 degrees into 8 equal angles and yielding 8 frames of point cloud data, as shown in FIG. 3. After preprocessing, registration is required; the duck model point clouds at 0 and 45 degrees are registered through step C, with the result shown in FIG. 4.
The remaining adjacent angles are registered in the same way and the calculated transformation matrices are recorded; according to the transformation relation between each angle and 0 degrees, the frames are transformed into the same world coordinate system, and the fused final duck model point cloud is shown in FIGS. 5 and 6. From the three-dimensional viewpoint, the method overcomes the inability of traditional two-dimensional vision to obtain object dimensions and distance information, providing a direction for industrial measurement, detection, and identification tasks with higher requirements. While acquiring the point cloud data of a target, it also addresses the long registration time of partially overlapping point clouds, error-prone correspondence matching, and the tendency to fall into local optima; by combining ISS feature points with the 4PCS coarse registration algorithm, more accurate processing of real objects is achieved.
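A sketch of this fusion step is given below, under the assumption that pairwise registration yields, for each adjacent pair of the 8 views, a transform (R, t) mapping frame k+1 into frame k; chaining these transforms maps every frame into the 0-degree coordinate system before concatenation. All names are illustrative.

```python
import numpy as np

def fuse_frames(frames, pairwise):
    """Map all frames into the 0-degree frame and concatenate them.

    frames:   list of (N_k, 3) arrays, one per 45-degree view.
    pairwise: list of (R, t) with R @ p + t taking frame k+1 into frame k."""
    fused = [frames[0]]
    R_acc, t_acc = np.eye(3), np.zeros(3)      # accumulated k -> 0 transform
    for frame, (R, t) in zip(frames[1:], pairwise):
        # Compose: (k+1 -> k) followed by (k -> 0) gives (k+1 -> 0).
        t_acc = R_acc @ t + t_acc
        R_acc = R_acc @ R
        fused.append(frame @ R_acc.T + t_acc)
    return np.concatenate(fused, axis=0)
```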
The above-mentioned embodiments only express specific implementations of the present application, and their description is relatively specific and detailed, but it should not be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several changes and modifications without departing from the technical idea of the present application, all of which fall within the protection scope of the application.

Claims (7)

1. A three-dimensional point cloud data acquisition processing method is characterized by comprising the following steps:
step A: acquiring point cloud data through a camera;
step B: filtering the obtained point cloud data;
step C: registering the filtered point cloud data using a 4PCS algorithm based on ISS key points;
step D: fitting the registered point cloud data to obtain an object point cloud image.
2. The method for acquiring and processing the three-dimensional point cloud data according to claim 1, wherein the specific steps of the step A are as follows:
step A1: acquiring a depth image and depth coordinate information Z of an object at all angles through Kinect;
step A2: calculating the X and Y coordinates in the world coordinate system from the depth coordinate information Z, wherein X is calculated as:

X = uZ / f

and Y is calculated as:

Y = vZ / f

where u and v are the abscissa and ordinate on the depth image, Z is the measured depth value, f is the focal length of the camera, and X, Y are the abscissa and ordinate in the world coordinate system, respectively.
3. The method for acquiring and processing the three-dimensional point cloud data according to claim 1, wherein the step B comprises the following specific steps:
step B1: removing background noise through conditional filtering;
step B2: removing abnormal values and isolated points through radius filtering and statistical filtering.
4. The method for acquiring and processing the three-dimensional point cloud data according to claim 1, wherein the specific steps in the step C are as follows:
step C1: extracting characteristic points of the point cloud data to obtain a characteristic point set;
step C2: obtaining optimal congruent four-point matching on the feature point set through a 4PCS algorithm;
step C3: obtaining the optimal rigid body transformation matrix.
5. The method as claimed in claim 4, wherein the step C1 includes the following steps:
step C11: suppose the point cloud data P contains n points p_i = (x_i, y_i, z_i), i = 1, 2, ..., n;
step C12: for each point p_i, establish a local coordinate system, and set the same search radius r_frame for all points;
step C13: for each point p_i in the point cloud data P, determine all points within the sphere of radius r_frame centered on p_i, and calculate their weights w_ij as:

w_ij = 1 / ||p_i - p_j||

where p_j is any point within the sphere of radius r_frame centered on p_i;
step C14: calculate the covariance matrix of each point p_i:

cov(p_i) = ( Σ_{||p_i - p_j|| < r_frame} w_ij (p_i - p_j)(p_i - p_j)^T ) / ( Σ_{||p_i - p_j|| < r_frame} w_ij )

where T denotes the transpose of the vector (p_i - p_j);
step C15: calculate the eigenvalues λ_i^1, λ_i^2, λ_i^3 of the covariance matrix cov(p_i) of each point p_i and arrange them in descending order, λ_i^1 ≥ λ_i^2 ≥ λ_i^3;
step C16: set thresholds ε_1 and ε_2; a point p_i is kept as a key point if its eigenvalues satisfy:

λ_i^2 / λ_i^1 ≤ ε_1 and λ_i^3 / λ_i^2 ≤ ε_2;

step C17: repeat steps C11-C16 until all points have been processed.
6. The method for acquiring and processing the three-dimensional point cloud data according to claim 5, wherein the step C2 comprises the following steps:
step C21: arbitrarily given point cloud data K and Q;
step C22: randomly select 4 coplanar points B = {a, b, c, d} from the point cloud data K as a coplanar four-point base;
step C23: calculate the two ratio factors r_1 and r_2 of the four-point base:

r_1 = ||a - e|| / ||a - b||, r_2 = ||c - e|| / ||c - d||

where e is the intersection point of lines ab and cd;
step C24: for any point pair {q_1, q_2}, q_1, q_2 ∈ Q, calculate the candidate intermediate points:

e_1 = q_1 + r_1 (q_2 - q_1), e_2 = q_1 + r_2 (q_2 - q_1)

in the formula, e_i ≈ e_j indicates that congruent four points corresponding to B = {a, b, c, d} have been found, where i and j index the long-baseline point pairs in Q;
step C25: find all coplanar four-point sets in the point cloud data K, denoted E = {B_1, B_2, ..., B_m}, where m is the total number of coplanar four-point sets in K; repeat steps C21-C25 to obtain the congruent four-point set D = {C_1, C_2, ..., C_n}, where n is the total number of congruent four-point sets.
7. The method as claimed in claim 6, wherein the specific steps of step C3 are as follows: in the set D = {C_1, C_2, ..., C_n}, find the optimal congruent four-point matching pair through the LCP (largest common pointset) strategy, calculate the rotation and translation transformation parameters of the congruent four points, apply the four-point transformation to the global point cloud, record the largest consistent region contained in the global registration as the optimal match, and take the transformation matrix corresponding to the optimal match as the optimal rigid body transformation matrix.
CN202111498675.XA 2021-12-09 2021-12-09 Three-dimensional point cloud data acquisition and processing method Pending CN114170281A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111498675.XA CN114170281A (en) 2021-12-09 2021-12-09 Three-dimensional point cloud data acquisition and processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111498675.XA CN114170281A (en) 2021-12-09 2021-12-09 Three-dimensional point cloud data acquisition and processing method

Publications (1)

Publication Number Publication Date
CN114170281A true CN114170281A (en) 2022-03-11

Family

ID=80485121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111498675.XA Pending CN114170281A (en) 2021-12-09 2021-12-09 Three-dimensional point cloud data acquisition and processing method

Country Status (1)

Country Link
CN (1) CN114170281A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115050071A (en) * 2022-06-20 2022-09-13 中国工商银行股份有限公司 Emotion recognition method and device, storage medium and electronic equipment
CN115050071B (en) * 2022-06-20 2024-07-05 中国工商银行股份有限公司 Emotion recognition method, device, storage medium and electronic equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination