CN110319772B - Visual large-span distance measurement method based on unmanned aerial vehicle - Google Patents

Visual large-span distance measurement method based on unmanned aerial vehicle

Info

Publication number: CN110319772B (application number CN201910628362.8A)
Authority: CN (China)
Prior art keywords: camera, IMU, unmanned aerial vehicle, data
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN110319772A (application publication)
Inventors: 刘刚 (Liu Gang), 杨执钧 (Yang Zhijun), 白雪 (Bai Xue), 乔丹 (Qiao Dan), 钟韬 (Zhong Tao)
Original and current assignee: Shanghai Electric Power University
Priority and filing date: 2019-07-12 (application filed by Shanghai Electric Power University)
Publication of application CN110319772A: 2019-10-11
Publication of granted patent CN110319772B: 2020-12-15

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The invention relates to a visual large-span distance measurement method based on an unmanned aerial vehicle, applicable to distance measurement in large scenes. An aircraft carrying a visual sensor acquires image data of the scene to be measured, and IMU information is acquired through the flight controller; the data are transmitted to the ground station over the image-transmission link; FAST feature points are tracked by an optical flow method, and key frames are screened and extracted; sensor data are fused through nonlinear optimization; the scene is modeled in three dimensions using multi-view stereo vision; and measuring points are selected in the three-dimensional model for ranging. The method reduces the number of redundant pictures through key-frame and image-blur screening, shortening reconstruction time and improving reconstruction efficiency. Compared with purely image-based three-dimensional reconstruction, it reconstructs a three-dimensional model with real scale and higher accuracy; compared with GPS-image fusion, it can perform three-dimensional reconstruction in environments without GPS signals (such as mountainous areas and indoors), improving the robustness of the system.

Description

Visual large-span distance measurement method based on unmanned aerial vehicle
Technical Field
The invention relates to a surveying and mapping technology, in particular to a visual large-span distance measurement method based on an unmanned aerial vehicle.
Background
Three-dimensional reconstruction of a target scene is widely applied in the surveying and mapping field. Three-dimensional reconstruction builds a three-dimensional model of a spatial object from two-dimensional images shot from multiple viewing angles, from which the size of the object can be measured. Take a flower bed as an example: its diameter and circumference cannot be measured directly because of its size, but aerial photography of the flower bed by an unmanned aerial vehicle allows a three-dimensional model to be reconstructed, and the true dimensions can be obtained by selecting points on the model. There are two common three-dimensional modeling approaches. 1) Modeling entirely from images; the resulting three-dimensional model, however, lacks a scale factor, and a scale must be recovered separately for the model. 2) Computing a three-dimensional model of actual size by fusing GPS and images; this approach, however, demands high GPS positioning accuracy, and reconstruction cannot be performed in environments without GPS signals, such as mountainous areas and indoors.
Disclosure of Invention
Aiming at the problem of measuring distances between spatial objects, the invention provides a visual large-span distance measurement method based on an unmanned aerial vehicle. The three-dimensional reconstruction and ranging method based on fusing IMU (inertial measurement unit) data and images obtains a three-dimensional model with a real scale, increases the robustness of reconstruction, and can perform three-dimensional reconstruction in environments without GPS signals.
The technical scheme of the invention is as follows: a visual large-span distance measurement method based on an unmanned aerial vehicle specifically comprises the following steps:
1) an unmanned aerial vehicle is used for collecting video data and IMU data, the video data are preprocessed, and then the preprocessed video data with feature points and key frames, together with the IMU data, are transmitted to a ground server;
2) selecting key frames in the video by using average parallax, tracking quality and image blur;
3) obtaining a rough pose of the camera through IMU pre-integration and the selected key frames by using a sensor fusion technique, and then performing nonlinear optimization over the feature points in the video frames to obtain the accurate pose of the camera, namely the flight track;
4) finally, carrying out dense reconstruction to obtain a three-dimensional reconstruction model of actual scale, and directly carrying out point-selection ranging on the established three-dimensional model.
The step 1) is realized by the following specific method:
1.1) firstly, calibrating internal parameters and external parameters of a camera and an IMU of the unmanned aerial vehicle by adopting a kalibr toolkit, and carrying out distortion correction on a video shot by the camera through calibrated parameters;
1.2) acquiring corrected video data and IMU data through unmanned aerial vehicle aerial photography, the IMU data comprising the triaxial acceleration and angular velocity of the camera, and the video data comprising the image information;
1.3) after video data and IMU data are obtained, extracting FAST feature points of a first video frame and tracking the FAST feature points through an optical flow method to obtain feature points matched with subsequent frames, screening key frames according to the number of the feature points in the image, and then transmitting the video data and the IMU data with the feature points and the key frames to a ground server through wireless transmission.
The step 2) is realized by the following specific method:
2.1) calculating a rotation matrix and a translation matrix between the current frame and the previous key frame by applying a multi-point perspective imaging algorithm to the matched feature points; if the rotation matrix or the translation matrix tracked between the current frame and the last key frame exceeds a set threshold value, the current frame is regarded as a new key frame;
2.2) frames in which the number of tracked features falls below a set quantity are also regarded as new key frames, so that the tracked features are prevented from being completely lost and the tracking quality is improved;
2.3) in the absence of a reference image, after Laplacian transformation of all the obtained key frames, calculating the average gradient difference between adjacent pixels and using it to measure blur, where the lower the value, the blurrier the frame; when the average gradient difference is smaller than a set gradient threshold value, the key frame is removed.
The step 3) is realized by the following specific method:
3.1) calculating the pose relationship of the key frames in the world coordinate system through IMU pre-integration, and calculating the rough pose of the camera; in the process of acquiring the rough pose of the camera, aligning the IMU data and the key frame data, and using the matching between key frame images to constrain the pre-integration error of the IMU;
and 3.2) optimizing the feature points and the pose of the camera through a nonlinear algorithm to obtain the accurate pose of the camera.
The step 4) is realized by the following specific method:
4.1) adopting an epipolar search and block matching technique: for each pixel point, a 3 × 3 pixel block is compared for similarity, one by one, along the epipolar line in the other key frame pictures to obtain the position of the pixel in each picture;
4.2) triangulating the three-dimensional mapping points of the same pixel point in different key frames multiple times in space, using a depth filtering technique, so that the depth estimate converges to a stable value;
and 4.3) rotating and translating the point cloud under each key frame picture visual angle to a world coordinate system through the camera pose, splicing to obtain a three-dimensional model of an actual scale, and directly performing point selection and ranging on the established three-dimensional model.
The invention has the beneficial effects that: the unmanned aerial vehicle-based visual large-span distance measurement method reduces the number of redundant pictures through key-frame and image-blur screening, shortening reconstruction time and improving reconstruction efficiency; compared with purely image-based three-dimensional reconstruction, it reconstructs a three-dimensional model with real scale and higher accuracy; compared with GPS-image fusion, it can perform three-dimensional reconstruction in environments without GPS signals (such as mountainous areas and indoors), improving the robustness of the system.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a block flow diagram of the method of the present invention;
FIG. 3 is an IMU pre-integration schematic diagram according to an embodiment of the present invention;
FIG. 4 is a camera pose diagram after sensor optimization according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of epipolar constraint according to an embodiment of the present invention.
Detailed Description
As shown in fig. 1 and 2, the unmanned aerial vehicle-based visual large-span distance measurement method measures distances on a three-dimensional model of a given object, including the size of a target object and the distances between multiple objects. The hardware required for the measurement is: an unmanned aerial vehicle, a ground station server, a monocular camera, a wireless router, etc. The specific implementation method comprises the following steps:
1. An unmanned aerial vehicle is used to collect video data and IMU data; the video data undergo image preprocessing, and the preprocessed image data and the IMU data are transmitted to the ground server.
the step 1 is realized by the following steps:
1.1, firstly, calibrating the internal and external parameters of the camera and the IMU of the unmanned aerial vehicle by adopting the kalibr toolkit, and carrying out distortion correction on the video shot by the camera with the calibrated parameters (see the preprocessing sketch after step 1.3);
1.2, acquiring corrected video data and IMU data through unmanned aerial vehicle aerial photography, the IMU data comprising the triaxial acceleration and angular velocity of the camera, and the video data comprising the image information;
1.3, after the video data and IMU data are obtained, extracting FAST feature points of the first video frame and tracking them with an optical flow method to obtain feature points matched in subsequent frames, screening key frames according to the number of feature points in the image, and then transmitting the video data with feature points and key frames, together with the IMU data, to the ground server through wireless transmission.
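By way of illustration only, the following minimal Python/OpenCV sketch mirrors steps 1.1-1.3: undistorting frames with previously calibrated parameters, extracting FAST corners in the first frame, and tracking them with pyramidal Lucas-Kanade optical flow. The intrinsics K, the distortion coefficients D, and the file name flight.mp4 are placeholder assumptions, not parameters from the patent.

```python
import cv2
import numpy as np

# Placeholder intrinsics/distortion; in practice these come from kalibr calibration.
K = np.array([[458.6, 0.0, 367.2],
              [0.0, 457.3, 248.4],
              [0.0, 0.0, 1.0]])
D = np.array([-0.28, 0.07, 0.0, 0.0])  # k1, k2, p1, p2

cap = cv2.VideoCapture("flight.mp4")   # illustrative aerial video file
ok, frame = cap.read()
prev_gray = cv2.cvtColor(cv2.undistort(frame, K, D), cv2.COLOR_BGR2GRAY)

# FAST corners in the first frame, then pyramidal LK tracking into later frames.
fast = cv2.FastFeatureDetector_create(threshold=20)
pts = np.float32([kp.pt for kp in fast.detect(prev_gray)]).reshape(-1, 1, 2)

while True:
    ok, frame = cap.read()
    if not ok or len(pts) == 0:
        break
    gray = cv2.cvtColor(cv2.undistort(frame, K, D), cv2.COLOR_BGR2GRAY)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = nxt[status.ravel() == 1].reshape(-1, 1, 2)  # keep matched feature points
    prev_gray = gray
    # ...keyframe screening by feature count happens here (see step 2)...
```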
2. Selecting key frames in the video by using average parallax, tracking quality and image blur.
the concrete implementation steps are as follows:
2.1, calculating a rotation matrix and a translation matrix between the current frame and the previous key frame by applying a PNP (multi-point perspective imaging) algorithm to the matched feature points. If the rotation matrix or the translation matrix tracked between the current frame and the last key frame exceeds a set threshold value, the current frame is regarded as a new key frame (see the keyframe-test sketch after step 2.3);
2.2, in order to prevent tracking failure caused by too few feature points during trajectory tracking, frames in which the number of tracked features falls below a set quantity are also regarded as new key frames, so that the tracked features are prevented from being completely lost and the tracking quality is improved.
2.3, in the absence of a reference image, first performing Laplacian transformation on all the obtained key frames, then calculating the average gradient difference between adjacent pixels and using it to measure blur, where the lower the value, the blurrier the frame; when the average gradient difference is smaller than a set gradient threshold value, the key frame is rejected.
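The two keyframe tests above can be sketched as follows, assuming matched 3-D map points pts3d and their 2-D projections pts2d are available for the PnP step; all threshold values are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

def is_new_keyframe(pts3d, pts2d, K, t_thresh=0.5, r_thresh=0.2, min_feats=50):
    """Keyframe tests 2.1-2.2: too few tracked features, or rotation/translation
    relative to the last keyframe above a threshold."""
    if len(pts2d) < min_feats:
        return True                      # tracking-quality test (2.2)
    ok, rvec, tvec, _ = cv2.solvePnPRansac(pts3d, pts2d, K, None)
    if not ok:
        return True
    # rvec/tvec magnitudes stand in for the rotation/translation criteria (2.1)
    return np.linalg.norm(tvec) > t_thresh or np.linalg.norm(rvec) > r_thresh

def is_sharp(gray, grad_thresh=4.0):
    """Blur test 2.3: Laplacian response, then the mean absolute difference
    between adjacent pixels; low values indicate a blurred keyframe."""
    lap = cv2.Laplacian(gray, cv2.CV_64F)
    return np.abs(np.diff(lap, axis=1)).mean() >= grad_thresh
```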
3. Obtaining the rough pose of the camera through IMU pre-integration and the selected key frames by using a sensor fusion technique, then carrying out nonlinear optimization over the feature points in the video frames to obtain the precise pose of the camera, namely the flight track.
The concrete implementation steps are as follows:
3.1, calculating the pose relationship of the key frames in the world coordinate system through IMU pre-integration, and computing the rough pose of the camera. In the world coordinate system, the position, velocity and rotation of the IMU at time $k$ are denoted $p^{w}_{b_k}$, $v^{w}_{b_k}$ and $q^{w}_{b_k}$, respectively.

As shown in fig. 3, the pose relationship between the key frames at times $i$ and $j$ can be obtained by IMU pre-integration, which satisfies the iterative equations:

$$p^{w}_{b_{k+1}} = p^{w}_{b_k} + v^{w}_{b_k}\,\Delta t_k + \iint_{t\in[t_k,\,t_{k+1}]}\bigl(R^{w}_{t}(\hat{a}_t - b_{a_t}) - g^{w}\bigr)\,\mathrm{d}t^2 \tag{1}$$

$$v^{w}_{b_{k+1}} = v^{w}_{b_k} + \int_{t\in[t_k,\,t_{k+1}]}\bigl(R^{w}_{t}(\hat{a}_t - b_{a_t}) - g^{w}\bigr)\,\mathrm{d}t \tag{2}$$

$$q^{w}_{b_{k+1}} = q^{w}_{b_k}\otimes\int_{t\in[t_k,\,t_{k+1}]}\tfrac{1}{2}\,\Omega(\hat{\omega}_t - b_{\omega_t})\,q^{b_k}_{t}\,\mathrm{d}t \tag{3}$$

In the formulas: the superscript $w$ denotes the world coordinate system; the subscript $b_k$ denotes the IMU coordinate system at time $k$; the subscript $t$ denotes time $t$; the subscript $b_{k+1}$ denotes the IMU coordinate system at time $k+1$; $R^{w}_{t}$ is the rotation of the IMU in the world coordinate system at time $t$; $\hat{a}_t$ is the acceleration of the IMU at time $t$; $b_{a_t}$ is the bias of the IMU acceleration at time $t$; $g^{w}$ is gravity in the world coordinate system; $\hat{\omega}_t$ is the angular velocity of the IMU at time $t$; $b_{\omega_t}$ is the bias of the IMU angular velocity at time $t$; $\Delta t_k$ is the duration between times $k$ and $k+1$; $\otimes$ denotes quaternion multiplication; and $\Omega(\cdot)$ denotes the quaternion multiplication matrix built from the angular velocity.
That is, the state at the current time can be obtained iteratively from the state at the previous time, and pre-integrating the IMU measurements over the period between two adjacent key frames yields the relative motion of the IMU between the two frames.

In the process of acquiring the rough pose of the camera, the IMU data and the key frame data are aligned, and the matching between key frame images is used to constrain the pre-integration error of the IMU. A numerical sketch of this propagation is given below.
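As a numerical illustration of equations (1)-(3), the minimal sketch below propagates the IMU state sample by sample. It rests on stated assumptions: the biases b_a, b_g and gravity g^w are known, each IMU sample spans a small interval dt, and scipy's Rotation class stands in for the quaternion algebra of equation (3).

```python
import numpy as np
from scipy.spatial.transform import Rotation

g_w = np.array([0.0, 0.0, -9.81])  # gravity expressed in the world frame (assumed)

def propagate(p, v, q, acc, gyro, b_a, b_g, dt):
    """One discrete step of equations (1)-(3).
    p, v: position/velocity in the world frame; q: body-to-world Rotation;
    acc, gyro: raw IMU measurements; b_a, b_g: acceleration/gyro biases."""
    a_w = q.apply(acc - b_a) + g_w                   # bias-corrected acceleration, world frame
    p = p + v * dt + 0.5 * a_w * dt ** 2             # eq. (1)
    v = v + a_w * dt                                 # eq. (2)
    q = q * Rotation.from_rotvec((gyro - b_g) * dt)  # eq. (3), small-angle update
    return p, v, q
```

Applying propagate to every IMU sample recorded between two adjacent key frames yields the relative motion of the IMU between the two frames, which serves as the rough camera pose before the nonlinear refinement of step 3.2.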
3.2, optimizing the feature points and the camera pose through a nonlinear algorithm to obtain the accurate pose of the camera. Firstly, the state quantities such as pose, velocity and camera extrinsic parameters are collected into one state vector for nonlinear optimization. The state vector is as follows:

$$\mathcal{X} = \left[x_0,\, x_1,\, \ldots,\, x_n,\; x^{b}_{c},\; \lambda_0,\, \lambda_1,\, \ldots,\, \lambda_m\right] \tag{4}$$

$$x_k = \left[p^{w}_{b_k},\; v^{w}_{b_k},\; q^{w}_{b_k},\; b_a,\; b_g\right],\quad k\in[0,\,n] \tag{5}$$

$$x^{b}_{c} = \left[p^{b}_{c},\; q^{b}_{c}\right] \tag{6}$$

so that (4) is composed of the IMU states (5) and the camera-IMU extrinsic state (6). In the formulas: $\mathcal{X}$ is the state vector; $x_k$ is the state variable of the IMU at time $k$; $b_g$ is the gyroscope (angular velocity) bias of the IMU; $b_a$ is the acceleration bias of the IMU; $x^{b}_{c}$ is the state variable of the camera relative to the IMU; $p^{b}_{c}$ is the displacement between the camera and the IMU; $q^{b}_{c}$ is the rotation between the camera and the IMU; $\lambda_i$ is the coordinate of feature point $i$, $i = 1, 2, \ldots, m$.

The nonlinear optimization objective function is as follows:

$$\min_{\mathcal{X}}\left\{\sum_{k\in\beta}\left\|r_{\beta}\bigl(\hat{z}^{b_k}_{b_{k+1}},\,\mathcal{X}\bigr)\right\|^2 + \sum_{(l,j)\in C}\left\|r_{C}\bigl(\hat{z}^{c_j}_{l},\,\mathcal{X}\bigr)\right\|^2\right\} \tag{7}$$

In the formula: $\hat{z}^{b_k}_{b_{k+1}}$ is the IMU-based displacement observation; $\hat{z}^{c_j}_{l}$ is the image-based displacement observation; $r_{\beta}$ is the residual function of the IMU; $r_{C}$ is the residual function of the image; $k$ is the time instant, $\beta$ denotes the IMU, $l$ indexes the feature points, $j$ indexes the frames, and $C$ denotes the camera.
The camera pose optimization results are shown in fig. 4.
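To make the structure of objective (7) concrete, the toy sketch below stacks one IMU displacement residual r_β and a set of reprojection residuals r_C into a single least-squares problem over two poses. It is an assumption-laden simplification, not the patent's solver: camera and IMU frames are treated as coincident (the extrinsic state (6) is omitted), each pose is parameterized as a translation plus a rotation vector, and scipy's least_squares stands in for the estimators used in practice.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(x, imu_dp, obs, pts_w, K):
    """x = [p0(3), th0(3), p1(3), th1(3)]: two poses being refined.
    imu_dp: pre-integrated relative displacement between the two key frames.
    obs: (pose index i, point index j, observed pixel uv) tuples.
    pts_w: world coordinates of the feature points; K: camera intrinsics."""
    p = [x[0:3], x[6:9]]
    R = [Rotation.from_rotvec(x[3:6]), Rotation.from_rotvec(x[9:12])]
    res = [(p[1] - p[0]) - imu_dp]                # IMU term, a stand-in for r_beta
    for i, j, uv in obs:                          # image terms, stand-ins for r_C
        pc = R[i].inv().apply(pts_w[j] - p[i])    # world point -> camera frame
        proj = (K @ (pc / pc[2]))[:2]             # pinhole projection to pixels
        res.append(proj - uv)
    return np.concatenate(res)

# x0 = np.zeros(12); sol = least_squares(residuals, x0, args=(imu_dp, obs, pts_w, K))
```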
4. Finally, carrying out dense reconstruction to obtain a three-dimensional reconstruction model of actual scale, and directly carrying out point-selection ranging on the established three-dimensional model.
The concrete implementation steps are as follows:
4.1, adopting an epipolar search and block matching technique: for each pixel point, a 3 × 3 pixel block is compared for similarity, one by one, along the epipolar line in the other key frame pictures to obtain the position of the pixel in each picture (a compact sketch of steps 4.1-4.2 is given after step 4.3). As shown in fig. 5, d represents the depth of a three-dimensional point; o1 and o2 represent the camera centers of the first and second pictures; R and T represent the rotation and translation matrices between the two pictures; p1 and p2 represent the corresponding pixel points in the first and second pictures; l2 represents the epipolar line of p1 in the second picture.
4.2, triangulating the three-dimensional mapping points of the same pixel point in different key frames multiple times in space, using a depth filtering technique, so that the depth estimate converges to a stable value.
4.3, rotating and translating the point cloud under each key frame picture's view angle into the world coordinate system through the camera pose, and splicing to obtain a three-dimensional model of actual scale; point-selection ranging can then be performed directly on the established three-dimensional model.
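The following compact sketch illustrates steps 4.1-4.2 under stated assumptions: grayscale keyframes ref and cur, a known relative pose R, t with intrinsics K, an interior integer pixel (u, v), and a given candidate depth range. Normalized cross-correlation scores the 3 × 3 blocks (the patent's block size), and a simple Gaussian depth filter fuses successive observations.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a, b = a - a.mean(), b - b.mean()
    den = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-10
    return (a * b).sum() / den

def search_depth(ref, cur, u, v, K, R, t, depths):
    """Step 4.1: try each candidate depth, back-project (u, v), reproject into
    cur, and keep the depth whose 3x3 block matches best along the epipolar line."""
    K_inv = np.linalg.inv(K)
    patch_ref = ref[v - 1:v + 2, u - 1:u + 2].astype(np.float64)
    best = (-1.0, None)
    for d in depths:
        pw = d * (K_inv @ np.array([u, v, 1.0]))   # point in the reference camera frame
        uv = K @ (R @ pw + t)                      # projected into the current frame
        u2, v2 = int(uv[0] / uv[2]), int(uv[1] / uv[2])
        if 1 <= u2 < cur.shape[1] - 1 and 1 <= v2 < cur.shape[0] - 1:
            score = ncc(patch_ref, cur[v2 - 1:v2 + 2, u2 - 1:u2 + 2].astype(np.float64))
            if score > best[0]:
                best = (score, d)
    return best[1]

def fuse(mu, sigma2, d_obs, sigma2_obs):
    """Step 4.2 depth-filter update: fuse a new triangulated depth observation
    into the running Gaussian estimate (mean mu, variance sigma2)."""
    mu_new = (sigma2_obs * mu + sigma2 * d_obs) / (sigma2 + sigma2_obs)
    s2_new = sigma2 * sigma2_obs / (sigma2 + sigma2_obs)
    return mu_new, s2_new
```

Repeating fuse over many key frames drives the variance down until the depth converges to a stable value, after which the point can be transformed into the world frame and spliced as in step 4.3.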
Experimental results show that the unmanned aerial vehicle-based visual large-span distance measurement method yields a three-dimensional model on which real-scale distance measurement can be performed. With this ranging system, surveying and mapping personnel can conveniently carry out their work.

Claims (3)

1. A visual large-span distance measurement method based on an unmanned aerial vehicle is characterized by comprising the following steps:
1) an unmanned aerial vehicle is used for collecting video data and IMU data, the video data are preprocessed, and then the preprocessed video data with feature points and key frames, together with the IMU data, are transmitted to a ground server; the specific implementation method comprises the following steps:
1.1) firstly, calibrating internal parameters and external parameters of a camera and an IMU of the unmanned aerial vehicle by adopting a kalibr toolkit, and carrying out distortion correction on a video shot by the camera through calibrated parameters;
1.2) acquiring corrected video data and IMU data through unmanned aerial vehicle aerial photography, the IMU data comprising the triaxial acceleration and angular velocity of the camera, and the video data comprising the image information;
1.3) after video data and IMU data are obtained, extracting FAST feature points of a first video frame and tracking the FAST feature points by an optical flow method to obtain feature points matched with subsequent frames, screening key frames according to the number of the feature points in the image, and then wirelessly transmitting the video data and the IMU data with the feature points and the key frames to a ground server;
2) selecting key frames in the video by using average parallax, tracking quality and image blur; the specific implementation method comprises the following steps:
2.1) calculating a rotation matrix and a translation matrix between the current frame and the previous key frame by applying a multi-point perspective imaging algorithm to the matched feature points; if the rotation matrix or the translation matrix tracked between the current frame and the last key frame exceeds a set threshold value, the current frame is regarded as a new key frame;
2.2) frames in which the number of tracked features falls below a set quantity are also regarded as new key frames, so that the tracked features are prevented from being completely lost and the tracking quality is improved;
2.3) in the absence of a reference image, after Laplacian transformation of all the obtained key frames, calculating the average gradient difference between adjacent pixels and using it to measure blur, where the lower the value, the blurrier the frame; when the average gradient difference is smaller than a set gradient threshold value, the key frame is removed;
3) obtaining a rough pose of the camera through IMU pre-integration and a selected key frame by using a sensor fusion technology, and then performing nonlinear optimization through feature points in a video frame to obtain an accurate pose of the camera, namely a flight track;
4) finally, carrying out dense reconstruction to obtain a three-dimensional reconstruction model of actual scale, and directly carrying out point-selection ranging on the established three-dimensional model.
2. The unmanned aerial vehicle-based visual large-span distance measurement method according to claim 1, wherein the step 3) is implemented as follows:
3.1) calculating the pose relationship of the key frames in the world coordinate system through IMU pre-integration, and calculating the rough pose of the camera; in the process of acquiring the rough pose of the camera, aligning the IMU data and the key frame data, and using the matching between key frame images to constrain the pre-integration error of the IMU;
and 3.2) optimizing the feature points and the pose of the camera through a nonlinear algorithm to obtain the accurate pose of the camera.
3. The unmanned aerial vehicle-based visual large-span distance measurement method according to claim 1 or 2, wherein the step 4) is implemented as follows:
4.1) adopting an epipolar search and block matching technique: for each pixel point, a 3 × 3 pixel block is compared for similarity, one by one, along the epipolar line in the other key frame pictures to obtain the position of the pixel in each picture;
4.2) triangulating the three-dimensional mapping points of the same pixel point in different key frames multiple times in space, using a depth filtering technique, so that the depth estimate converges to a stable value;
and 4.3) rotating and translating the point cloud under each key frame picture visual angle to a world coordinate system through the camera pose, splicing to obtain a three-dimensional model of an actual scale, and directly performing point selection and ranging on the established three-dimensional model.
CN201910628362.8A 2019-07-12 2019-07-12 Visual large-span distance measurement method based on unmanned aerial vehicle Active CN110319772B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910628362.8A | 2019-07-12 | 2019-07-12 | Visual large-span distance measurement method based on unmanned aerial vehicle

Publications (2)

Publication Number | Publication Date
CN110319772A (application) | 2019-10-11
CN110319772B (grant) | 2020-12-15

Family

ID=68122084

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910628362.8A (Active) | Visual large-span distance measurement method based on unmanned aerial vehicle | 2019-07-12 | 2019-07-12

Country Status (1)

Country: CN


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant