CN110319772A - Visual large-span distance measurement method based on unmanned aerial vehicle - Google Patents

Visual large-span distance measurement method based on unmanned aerial vehicle

Info

Publication number
CN110319772A
Authority
CN
China
Prior art keywords
key frame
imu
camera
frame
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910628362.8A
Other languages
Chinese (zh)
Other versions
CN110319772B (en)
Inventor
刘刚
杨执钧
白雪
乔丹
钟韬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai University of Electric Power
Original Assignee
Shanghai University of Electric Power
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai University of Electric Power filed Critical Shanghai University of Electric Power
Priority to CN201910628362.8A priority Critical patent/CN110319772B/en
Publication of CN110319772A publication Critical patent/CN110319772A/en
Application granted granted Critical
Publication of CN110319772B publication Critical patent/CN110319772B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a visual large-span distance measurement method based on an unmanned aerial vehicle (UAV), applicable to distance measurement in large scenes. A vision sensor carried by the aircraft acquires image data of the target scene while IMU data is collected through the flight controller; the data are transmitted to the ground station over the image-transmission link; FAST feature points are tracked by an optical flow method, and key frames are screened and extracted; sensor data are fused through nonlinear optimization; the scene is reconstructed in three dimensions using multi-view stereo techniques; and measurement points are selected in the three-dimensional model for ranging. The method reduces the number of redundant pictures through key-frame and image-blur screening, shortening reconstruction time and improving reconstruction efficiency. Compared with three-dimensional reconstruction from images alone, it reconstructs a model with true scale and higher accuracy; compared with GPS-image fusion, it can reconstruct in environments without GPS signals (such as mountainous areas and indoors), improving the robustness of the system.

Description

Visual large-span distance measurement method based on an unmanned aerial vehicle
Technical field
The present invention relates to surveying and mapping technology, and in particular to a visual large-span distance measurement method based on an unmanned aerial vehicle.
Background art
Three-dimensional reconstruction of a target scene is widely used in the surveying field. It converts two-dimensional images shot from multiple views into a three-dimensional model of a spatial object, on which object dimensions can be measured. Take a flower bed as an example: a person wants to measure its diameter and perimeter, but the flower bed is too large to measure directly. By photographing it from a UAV, a three-dimensional model of the flower bed can be reconstructed, and its true dimensions obtained by selecting points on the model. At present there are two ways of three-dimensional modeling. 1) Modeling entirely from images; the resulting three-dimensional model lacks a scale factor, so a conversion ruler is needed in the model. 2) Fusing GPS and images to compute a three-dimensional model of actual size; however, this method demands high GPS positioning accuracy and cannot reconstruct in environments without GPS signals, such as mountainous areas or indoors.
Summary of the invention
In view of the problems of spatial object ranging, the present invention proposes a visual large-span distance measurement method based on an unmanned aerial vehicle: a three-dimensional reconstruction ranging method based on the fusion of IMU (Inertial Measurement Unit) data and images. It obtains a three-dimensional model of true scale, increases the robustness of the reconstruction, and can also perform three-dimensional reconstruction in environments without GPS signals.
The technical solution of the present invention is as follows. A visual large-span distance measurement method based on an unmanned aerial vehicle specifically includes the following steps:
1) Acquire video data and IMU data with the UAV; after pre-processing the video data, transmit the pre-processed video data, carrying feature points and key frames, together with the IMU data to the ground server.
2) Select key frames from the video using mean parallax, tracking quality and image blur.
3) Using sensor-fusion techniques, obtain a rough camera pose from IMU pre-integration and the selected key frames, then perform nonlinear optimization on the feature points in the video frames to obtain the accurate camera pose, i.e. the flight trajectory.
4) Finally, perform dense reconstruction to obtain a three-dimensional reconstruction model of physical scale, and carry out point-selection ranging directly on the established three-dimensional model.
The concrete implementation of step 1) is as follows:
1.1) First calibrate the intrinsic and extrinsic parameters of the camera and the IMU of the UAV with the kalibr toolkit, and use the calibrated parameters to correct the distortion of the video captured by the camera.
1.2) Obtain the corrected video data and IMU data from the UAV; the video data and IMU data include the three-axis acceleration and angular velocity of the camera and the image information.
1.3) After obtaining the video data and IMU data, first extract FAST feature points from the video frames and track them by an optical flow method to obtain the matched feature points of subsequent frames; screen key frames according to the number of feature points in the image; then transmit the video data carrying feature points and key frames, together with the IMU data, to the ground server over a wireless link.
The concrete implementation of step 2) is as follows:
2.1) Compute the rotation matrix and translation matrix between the current frame and the last key frame from the matched feature points using a perspective-n-point algorithm; if the tracked rotation or translation between the current frame and the last key frame exceeds a set threshold, take the current frame as a new key frame.
2.2) A frame whose number of tracked features falls below a set quantity is also taken as a new key frame; this prevents the tracked features from being lost completely and improves tracking quality.
2.3) Without a reference image, first apply the Laplacian operator to all obtained key frames, then compute the average gradient difference between adjacent pixels; blur is measured by this average gradient difference, a lower value meaning a blurrier frame. When the average gradient difference is below a set gradient threshold, the key frame is rejected.
The concrete implementation of step 3) is as follows:
3.1) Compute the pose relation of the key frames in the world coordinate system by IMU pre-integration to obtain the rough camera pose; in the process of acquiring the rough pose, align the IMU data with the key-frame data, and use the matching between key-frame images to constrain the IMU pre-integration error.
3.2) Optimize the feature points and the camera pose by a nonlinear algorithm to obtain the accurate camera pose.
The concrete implementation of step 4) is as follows:
4.1) Use epipolar search and block matching on the key-frame pictures: for each pixel, take a 3×3 pixel block along its epipolar line and compare its similarity with the other key-frame pictures one by one, obtaining the position of the pixel in each picture.
4.2) For the three-dimensional point in space onto which the same pixel maps in different key frames, repeatedly perform triangulation with a depth-filtering technique so that the depth estimate converges to a stable value.
4.3) Rotate and translate the point cloud seen from each key-frame viewpoint into the world coordinate system by the camera poses; the stitched result is a three-dimensional model of physical scale, on which point-selection ranging can be performed directly.
The beneficial effects of the present invention are as follows. The visual large-span distance measurement method based on an unmanned aerial vehicle reduces the number of redundant pictures through key-frame and image-blur screening, shortening reconstruction time and improving reconstruction efficiency. Compared with three-dimensional reconstruction from pictures alone, it reconstructs a three-dimensional model of true scale with higher model accuracy. Compared with GPS-image fusion, it can perform three-dimensional reconstruction in environments without GPS signals (such as mountainous areas and indoors), improving the robustness of the system.
Brief description of the drawings
Fig. 1 is a flowchart of the method of the present invention;
Fig. 2 is a block diagram of the method flow of the invention;
Fig. 3 is a schematic diagram of IMU pre-integration in the embodiment of the present invention;
Fig. 4 shows the camera poses after sensor optimization in the embodiment of the present invention;
Fig. 5 is a schematic diagram of the epipolar constraint in the embodiment of the present invention.
Specific embodiment
The flow of the visual large-span distance measurement method based on an unmanned aerial vehicle is shown in Figs. 1 and 2. Ranging is performed on the three-dimensional model of a given object, including the size of a target object and the distances between multiple objects. The hardware required for ranging is: an unmanned aerial vehicle, a ground-station server, a monocular camera, a wireless router, etc. The concrete implementation is as follows:
1. Acquire video data and IMU data with the UAV; after pre-processing the images of the video data, transmit the pre-processed image data and IMU data to the ground server.
Step 1 is concretely implemented as follows:
1.1 First calibrate the intrinsic and extrinsic parameters of the camera and the IMU of the UAV with the kalibr toolkit, and use the calibrated parameters to correct the distortion of the video captured by the camera.
1.2 Obtain the corrected video data and IMU data from the UAV; the video data and IMU data include the three-axis acceleration and angular velocity of the camera and the image information.
1.3 After obtaining the video data and IMU data, first extract FAST feature points from the video frames and track them by an optical flow method to obtain the matched feature points of subsequent frames; screen key frames according to the number of feature points in the image; then transmit the video data carrying feature points and key frames, together with the IMU data, to the ground server over a wireless link.
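For illustration, the screening by feature count in step 1.3 can be sketched independently of the particular detector and tracker. In the following minimal pure-Python sketch, a frame becomes a key frame whenever too few of the last key frame's features survive; the `min_tracked` threshold and the synthetic feature sets are illustrative assumptions, not values from the patent:

```python
def screen_keyframes(tracks, min_tracked=50):
    """tracks: list of sets of feature IDs visible in each frame.
    Returns indices of frames selected as key frames.
    The first frame is always a key frame."""
    keyframes = [0]
    ref = tracks[0]
    for i in range(1, len(tracks)):
        surviving = ref & tracks[i]  # features tracked since the last key frame
        if len(surviving) < min_tracked:
            keyframes.append(i)
            ref = tracks[i]          # restart tracking from the new key frame
    return keyframes

# Example: feature sets drift as the camera moves; a key frame is
# inserted whenever fewer than 3 of the reference features survive.
frames = [set(range(10)),          # frame 0: features 0..9
          set(range(2, 10)),       # frame 1: 8 of them survive
          set(range(6, 12)),       # frame 2: 4 of frame 0's features remain
          set(range(8, 14)),       # frame 3: only 2 remain, so new key frame
          set(range(9, 15))]       # frame 4: 5 of frame 3's features survive
print(screen_keyframes(frames, min_tracked=3))
```

In this example a key frame is inserted at frame 3, where only two of the reference features survive.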
2. Select key frames from the video using mean parallax, tracking quality and image blur.
The implementation steps are:
2.1 Compute the rotation matrix and translation matrix between the current frame and the last key frame from the matched feature points using a PNP (perspective-n-point) algorithm. If the tracked rotation or translation between the current frame and the last key frame exceeds a set threshold, take the current frame as a new key frame.
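The threshold test of step 2.1 can be illustrated with a short sketch. Assuming the relative rotation R and translation t from the PNP step are available, the rotation angle is recovered from the trace of R and compared, together with the translation norm, against thresholds; the threshold values below are illustrative, not taken from the patent:

```python
import numpy as np

def is_new_keyframe(R, t, rot_thresh_deg=10.0, trans_thresh=0.5):
    """Decide whether the current frame's relative pose (R, t) w.r.t. the
    last key frame exceeds the rotation or translation threshold.
    R: 3x3 rotation matrix; t: 3-vector translation."""
    # rotation angle from the trace: cos(theta) = (trace(R) - 1) / 2
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_theta))
    return angle_deg > rot_thresh_deg or np.linalg.norm(t) > trans_thresh

# A 15-degree rotation about z with a small translation triggers a key frame:
th = np.radians(15.0)
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
print(is_new_keyframe(Rz, np.array([0.1, 0.0, 0.0])))
```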
2.2 To prevent tracking failure caused by too few tracked feature points, a frame whose number of tracked features falls below a set quantity is also taken as a new key frame; this prevents the tracked features from being lost completely and improves tracking quality.
2.3 Without a reference image, first apply the Laplacian operator to all obtained key frames, then compute the average gradient difference between adjacent pixels; blur is measured by this average gradient difference, a lower value meaning a blurrier frame. When the average gradient difference is below a set gradient threshold, the key frame is rejected.
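A minimal no-reference blur score in the spirit of step 2.3 can be written with NumPy alone. The patent does not give its exact formula, so the Laplacian kernel and the choice of horizontal adjacent-pixel differences below are assumptions:

```python
import numpy as np

def blur_score(img):
    """Mean absolute difference between adjacent pixels of the
    Laplacian-filtered image; a lower value means a blurrier image."""
    img = img.astype(float)
    # 3x3 Laplacian via array slicing (interior pixels only)
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2]
           + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
    # average gradient difference between horizontally adjacent pixels
    return np.mean(np.abs(np.diff(lap, axis=1)))

rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, size=(64, 64)).astype(float)
# a crude blur: average each pixel with four of its neighbours
blurred = (sharp[:-2, 1:-1] + sharp[2:, 1:-1] + sharp[1:-1, :-2]
           + sharp[1:-1, 2:] + sharp[1:-1, 1:-1]) / 5.0
print(blur_score(sharp) > blur_score(blurred))
```

Frames whose score falls below a chosen gradient threshold would be rejected as blurry.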
3. Using sensor-fusion techniques, obtain the rough camera pose from IMU pre-integration and the selected key frames, then perform nonlinear optimization on the feature points in the video frames to obtain the accurate camera pose, i.e. the flight trajectory.
The implementation steps are:
3.1 Compute the pose relation of the key frames in the world coordinate system by IMU pre-integration to obtain the rough camera pose. In the world coordinate system, the position, velocity and rotation of the IMU at time k correspond to p^w_{b_k}, v^w_{b_k} and q^w_{b_k} respectively. As shown in Fig. 3, the pose relation between the key frames at times i and j can be obtained by IMU pre-integration; the states satisfy the iterative relations

p^w_{b_{k+1}} = p^w_{b_k} + v^w_{b_k} Δt_k + ∬_{t∈[t_k, t_{k+1}]} ( R^w_t (â_t − b_{a_t}) − g^w ) dt²   (1)

v^w_{b_{k+1}} = v^w_{b_k} + ∫_{t∈[t_k, t_{k+1}]} ( R^w_t (â_t − b_{a_t}) − g^w ) dt   (2)

q^w_{b_{k+1}} = q^w_{b_k} ⊗ ∫_{t∈[t_k, t_{k+1}]} (1/2) Ω( ω̂_t − b_{ω_t} ) q^{b_k}_t dt   (3)

where superscript w denotes the world coordinate system; subscript b_k denotes the IMU body frame at time k; subscript t denotes time t; subscript b_{k+1} denotes the IMU body frame at time k+1; R^w_t is the rotation of the IMU in the world frame at time t; â_t is the IMU acceleration at time t; b_{a_t} is the IMU acceleration bias at time t; g^w is gravity in the world frame; ω̂_t is the IMU angular velocity at time t; b_{ω_t} is the IMU angular-velocity bias at time t; Δt_k is the duration between times k and k+1.

That is, the state at the current instant can be obtained iteratively from the state at the previous instant. Chaining the IMU pre-integration over the interval between two adjacent key frames yields the relative IMU motion between the two frames.
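The iterative propagation of step 3.1 can be sketched as a plain Euler integration of raw IMU samples between two key frames. This is a simplification: real pre-integration also tracks covariances and bias Jacobians, which are omitted here, and the sample rate and biases below are illustrative:

```python
import numpy as np

def so3_exp(w, dt):
    """Rotation matrix for a constant angular velocity w over dt (Rodrigues)."""
    theta = np.linalg.norm(w) * dt
    if theta < 1e-12:
        return np.eye(3)
    k = w / np.linalg.norm(w)
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def propagate(p, v, R, accs, gyrs, dt, g_w, b_a, b_w):
    """Euler-integrate IMU samples between two key frames: p, v in the
    world frame, R the IMU attitude; biases b_a, b_w are subtracted."""
    for a_hat, w_hat in zip(accs, gyrs):
        a_w = R @ (a_hat - b_a) - g_w       # world-frame acceleration
        p = p + v * dt + 0.5 * a_w * dt**2
        v = v + a_w * dt
        R = R @ so3_exp(w_hat - b_w, dt)    # attitude update
    return p, v, R

# Stationary attitude, specific force of 1 m/s^2 along x on top of gravity:
N, dt = 200, 0.01                            # 2 s of data at 100 Hz
g_w = np.array([0.0, 0.0, 9.81])
accs = np.tile([1.0, 0.0, 9.81], (N, 1))
gyrs = np.zeros((N, 3))
p, v, R = propagate(np.zeros(3), np.zeros(3), np.eye(3),
                    accs, gyrs, dt, g_w, np.zeros(3), np.zeros(3))
print(np.round(p, 2), np.round(v, 2))
```

Starting from rest, two seconds of 1 m/s² acceleration along x give p_x = 2 m and v_x = 2 m/s, as the closed-form kinematics predict.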
In the process of acquiring the rough camera pose, the IMU data are aligned with the key-frame data, and the matching between key-frame images is used to constrain the IMU pre-integration error.
3.2 Optimize the feature points and the camera pose by a nonlinear algorithm to obtain the accurate camera pose. First, integrate the state quantities such as pose, velocity and camera extrinsics into one state vector for nonlinear optimization. The state vector is as follows:

X = [ x_0, x_1, …, x_n, x^b_c, λ_0, λ_1, …, λ_m ]   (4)

x_k = [ p^w_{b_k}, v^w_{b_k}, q^w_{b_k}, b_a, b_g ],  k ∈ [0, n]   (5)

x^b_c = [ p^b_c, q^b_c ]   (6)

where (4) is expanded via (5) and (6); X is the state vector; x_k is the IMU state variable at time k; b_g is the gyroscope bias of the IMU; b_a is the acceleration bias of the IMU; x^b_c is the camera state relative to the IMU; p^b_c is the displacement between the camera and the IMU; q^b_c is the rotation between the camera and the IMU; λ_i is the coordinate of the i-th feature point, i = 1, 2, …, m.

The nonlinear optimization objective function is as follows:

min_X { Σ_{k∈B} ‖ r_β( ẑ^{b_k}_{b_{k+1}}, X ) ‖² + Σ_{(l,j)∈C} ‖ r_c( ẑ^{c_j}_l, X ) ‖² }   (7)

where ẑ^{b_k}_{b_{k+1}} is the displacement observation based on the IMU; ẑ^{c_j}_l is the observation based on the image; r_β is the relation (residual) function of the IMU; r_c is the relation (residual) function of the image; k is the time instant, β denotes the IMU, l denotes which feature point, j denotes which frame, and c denotes the camera; B and C are the sets of IMU and image measurements.
The camera pose optimization results are shown in Fig. 4.
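The nonlinear refinement of step 3.2 can be illustrated on a toy problem. The sketch below runs Gauss-Newton on the reprojection residual of known 3D points to refine only the camera position under a fixed identity rotation; the full system instead optimizes the whole state vector (poses, velocities, biases, extrinsics and feature coordinates). The intrinsics `f` and `c` and the point distribution are illustrative assumptions:

```python
import numpy as np

def project(points_w, t, f=500.0, c=320.0):
    """Pinhole projection of world points seen from a camera at position t
    with identity rotation."""
    pc = points_w - t                      # camera-frame coordinates
    return np.column_stack((f * pc[:, 0] / pc[:, 2] + c,
                            f * pc[:, 1] / pc[:, 2] + c))

def gauss_newton_pose(points_w, obs_uv, t0, iters=10):
    """Refine the camera position by Gauss-Newton on the reprojection
    residual; a toy stand-in for the nonlinear pose optimization."""
    t = t0.astype(float)
    for _ in range(iters):
        r = (project(points_w, t) - obs_uv).ravel()
        # numeric Jacobian of the residual w.r.t. the 3 position parameters
        J = np.empty((r.size, 3))
        eps = 1e-6
        for k in range(3):
            step = np.zeros(3)
            step[k] = eps
            J[:, k] = ((project(points_w, t + step) - obs_uv).ravel() - r) / eps
        t = t - np.linalg.solve(J.T @ J, J.T @ r)   # normal equations
    return t

rng = np.random.default_rng(1)
pts = rng.uniform([-2, -2, 4], [2, 2, 8], size=(20, 3))   # points in front of camera
t_true = np.array([0.3, -0.2, 0.1])
obs = project(pts, t_true)
t_est = gauss_newton_pose(pts, obs, np.zeros(3))
print(np.round(t_est, 3))
```

With noiseless observations the estimate converges to the true position within a few iterations.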
4. Finally, perform dense reconstruction to obtain a three-dimensional reconstruction model of physical scale, and carry out point-selection ranging directly on the established three-dimensional model.
The implementation steps are:
4.1 Use epipolar search and block matching on the key-frame pictures: for each pixel, take a 3×3 pixel block along its epipolar line and compare its similarity with the other key-frame pictures one by one, obtaining the position of the pixel in each picture. As shown in Fig. 5, d denotes the depth of the three-dimensional point; o1 and o2 denote the camera optical centers of the first and second pictures; R and T denote the rotation matrix and translation matrix between the two frames; p1 and p2 denote the pixels in the first and second pictures; and l2 denotes the epipolar line of p1 in the second picture.
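Block matching along an epipolar line can be sketched for the rectified case, where the epipolar line of step 4.1 reduces to an image row. The score below is zero-mean normalized cross-correlation; the patent does not specify the similarity measure, so this choice and the synthetic images are assumptions:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_along_line(ref, other, y, x, half=1):
    """Compare the 3x3 block around (x, y) in `ref` with every candidate
    position on the same row of `other` (a rectified stand-in for the
    epipolar line), returning the best-matching column."""
    block = ref[y - half:y + half + 1, x - half:x + half + 1]
    best_col, best_score = -1, -np.inf
    for cx in range(half, other.shape[1] - half):
        cand = other[y - half:y + half + 1, cx - half:cx + half + 1]
        s = ncc(block, cand)
        if s > best_score:
            best_score, best_col = s, cx
    return best_col

rng = np.random.default_rng(2)
left = rng.uniform(0, 255, size=(20, 40))
right = np.roll(left, -5, axis=1)   # second view: texture shifted 5 px left
print(match_along_line(left, right, y=10, x=20))
```

The block around column 20 in the first view is correctly found at column 15 in the second.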
4.2 For the three-dimensional point in space onto which the same pixel maps in different key frames, repeatedly perform triangulation with a depth-filtering technique so that the depth estimate converges to a stable value.
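The repeated-triangulation depth filter of step 4.2 can be sketched as recursive Gaussian fusion: each new triangulated depth z with measurement variance tau2 is merged into the running estimate, whose variance shrinks toward zero. The Gaussian model and the numbers below are illustrative assumptions:

```python
import numpy as np

def fuse_depth(mu, sigma2, z, tau2):
    """Fuse one new triangulated depth observation z (variance tau2) into
    the Gaussian estimate (mu, sigma2); the product of two Gaussians."""
    mu_new = (tau2 * mu + sigma2 * z) / (sigma2 + tau2)
    sigma2_new = sigma2 * tau2 / (sigma2 + tau2)
    return mu_new, sigma2_new

rng = np.random.default_rng(3)
true_depth, tau2 = 4.0, 0.25
mu, sigma2 = 3.0, 4.0           # poor initial guess, large uncertainty
for _ in range(100):            # repeated triangulations from new key frames
    z = true_depth + rng.normal(0.0, np.sqrt(tau2))
    mu, sigma2 = fuse_depth(mu, sigma2, z, tau2)
print(round(mu, 2), sigma2 < 0.01)
```

After many fusions the estimate settles near the true depth with a small residual variance, i.e. it converges to a stable value.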
4.3 Rotate and translate the point cloud seen from each key-frame viewpoint into the world coordinate system by the camera poses; the stitched result is a three-dimensional model of physical scale, on which point-selection ranging can be performed directly.
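Step 4.3 is a rigid transform and concatenation, which can be sketched directly; the poses and points below are illustrative:

```python
import numpy as np

def stitch_clouds(clouds, poses):
    """Transform each key frame's local point cloud into the world frame
    with its camera pose (R, t) and concatenate: p_w = R @ p_c + t."""
    world = [(R @ pts.T).T + t for pts, (R, t) in zip(clouds, poses)]
    return np.vstack(world)

# Two 'key frames' each contributing a unit square seen from a different pose:
square = np.array([[0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], float)
R90 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # 90 deg about z
poses = [(np.eye(3), np.zeros(3)), (R90, np.array([1.0, 0.0, 0.0]))]
model = stitch_clouds([square, square], poses)
print(model.shape)
```

Distances measured between selected points of the stitched cloud are in the model's physical units, which is what makes direct point-selection ranging possible.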
Experimental results: a three-dimensional model was obtained by the visual large-span distance measurement method based on the UAV, and true-scale ranging can be performed repeatedly on the three-dimensional model. With the ranging system of the invention, surveying operators can carry out mapping work very conveniently.

Claims (5)

1. A visual large-span distance measurement method based on an unmanned aerial vehicle, characterized by comprising the following steps:
1) acquiring video data and IMU data with the unmanned aerial vehicle, and after pre-processing the video data, transmitting the pre-processed video data carrying feature points and key frames, together with the IMU data, to a ground server;
2) selecting key frames from the video using mean parallax, tracking quality and image blur;
3) using sensor-fusion techniques, obtaining a rough camera pose from IMU pre-integration and the selected key frames, then performing nonlinear optimization on the feature points in the video frames to obtain the accurate camera pose, i.e. the flight trajectory;
4) finally performing dense reconstruction to obtain a three-dimensional reconstruction model of physical scale, and carrying out point-selection ranging directly on the established three-dimensional model.
2. The visual large-span distance measurement method based on an unmanned aerial vehicle according to claim 1, characterized in that step 1) is concretely implemented as follows:
1.1) first calibrating the intrinsic and extrinsic parameters of the camera and the IMU of the unmanned aerial vehicle with the kalibr toolkit, and correcting the distortion of the video captured by the camera with the calibrated parameters;
1.2) obtaining the corrected video data and IMU data from the unmanned aerial vehicle, the video data and IMU data including the three-axis acceleration and angular velocity of the camera and the image information;
1.3) after obtaining the video data and IMU data, first extracting FAST feature points from the video frames and tracking them by an optical flow method to obtain the matched feature points of subsequent frames, screening key frames according to the number of feature points in the image, and then transmitting the video data carrying feature points and key frames, together with the IMU data, to the ground server over a wireless link.
3. The visual large-span distance measurement method based on an unmanned aerial vehicle according to claim 1, characterized in that step 2) is concretely implemented as follows:
2.1) computing the rotation matrix and translation matrix between the current frame and the last key frame from the matched feature points using a perspective-n-point algorithm; if the tracked rotation or translation between the current frame and the last key frame exceeds a set threshold, taking the current frame as a new key frame;
2.2) also taking a frame whose number of tracked features falls below a set quantity as a new key frame, thereby preventing the tracked features from being lost completely and improving tracking quality;
2.3) without a reference image, applying the Laplacian operator to all obtained key frames and then computing the average gradient difference between adjacent pixels, blur being measured by the average gradient difference, with a lower value meaning a blurrier frame; when the average gradient difference is below a set gradient threshold, rejecting the key frame.
4. The visual large-span distance measurement method based on an unmanned aerial vehicle according to any one of claims 1 to 3, characterized in that step 3) is concretely implemented as follows:
3.1) computing the pose relation of the key frames in the world coordinate system by IMU pre-integration to obtain the rough camera pose; in the process of acquiring the rough pose, aligning the IMU data with the key-frame data, and using the matching between key-frame images to constrain the IMU pre-integration error;
3.2) optimizing the feature points and the camera pose by a nonlinear algorithm to obtain the accurate camera pose.
5. The visual large-span distance measurement method based on an unmanned aerial vehicle according to any one of claims 1 to 3, characterized in that step 4) is concretely implemented as follows:
4.1) using epipolar search and block matching on the key-frame pictures: for each pixel, taking a 3×3 pixel block along its epipolar line and comparing its similarity with the other key-frame pictures one by one, to obtain the position of the pixel in each picture;
4.2) for the three-dimensional point in space onto which the same pixel maps in different key frames, repeatedly performing triangulation with a depth-filtering technique so that the depth estimate converges to a stable value;
4.3) rotating and translating the point cloud seen from each key-frame viewpoint into the world coordinate system by the camera poses, the stitched result being a three-dimensional model of physical scale, on which point-selection ranging can be performed directly.
CN201910628362.8A 2019-07-12 2019-07-12 Visual large-span distance measurement method based on unmanned aerial vehicle Active CN110319772B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910628362.8A CN110319772B (en) 2019-07-12 2019-07-12 Visual large-span distance measurement method based on unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910628362.8A CN110319772B (en) 2019-07-12 2019-07-12 Visual large-span distance measurement method based on unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN110319772A true CN110319772A (en) 2019-10-11
CN110319772B CN110319772B (en) 2020-12-15

Family

ID=68122084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910628362.8A Active CN110319772B (en) 2019-07-12 2019-07-12 Visual large-span distance measurement method based on unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN110319772B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104424630A (en) * 2013-08-20 2015-03-18 华为技术有限公司 Three-dimension reconstruction method and device, and mobile terminal
CN105953796A (en) * 2016-05-23 2016-09-21 北京暴风魔镜科技有限公司 Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone
CN106123802A (en) * 2016-06-13 2016-11-16 天津大学 A kind of autonomous flow-type 3 D measuring method
CN107193279A (en) * 2017-05-09 2017-09-22 复旦大学 Robot localization and map structuring system based on monocular vision and IMU information
US20180112985A1 (en) * 2016-10-26 2018-04-26 The Charles Stark Draper Laboratory, Inc. Vision-Inertial Navigation with Variable Contrast Tracking Residual
CN108413917A (en) * 2018-03-15 2018-08-17 中国人民解放军国防科技大学 Non-contact three-dimensional measurement system, non-contact three-dimensional measurement method and measurement device
CN109520497A (en) * 2018-10-19 2019-03-26 天津大学 The unmanned plane autonomic positioning method of view-based access control model and imu

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111047620A (en) * 2019-11-15 2020-04-21 广东工业大学 Unmanned aerial vehicle visual odometer method based on depth point-line characteristics
CN113125791A (en) * 2019-12-30 2021-07-16 南京智能情资创新科技研究院有限公司 Motion camera speed measurement method based on characteristic object and optical flow method
CN113125791B (en) * 2019-12-30 2023-10-20 南京智能情资创新科技研究院有限公司 Motion camera speed measuring method based on characteristic object and optical flow method
CN112149495A (en) * 2020-08-07 2020-12-29 中国矿业大学(北京) Video key frame extraction method based on parallax tracking
CN112149495B (en) * 2020-08-07 2023-07-28 中国矿业大学(北京) Video key frame extraction method based on parallax tracking
CN112365537A (en) * 2020-10-13 2021-02-12 天津大学 Active camera repositioning method based on three-dimensional point cloud alignment
CN112365537B (en) * 2020-10-13 2023-06-27 天津大学 Active camera repositioning method based on three-dimensional point cloud alignment
CN112697044A (en) * 2020-12-17 2021-04-23 北京航空航天大学 Static rigid object vision measurement method based on unmanned aerial vehicle platform
CN112505065A (en) * 2020-12-28 2021-03-16 上海工程技术大学 Method for detecting surface defects of large part by indoor unmanned aerial vehicle
CN113284224A (en) * 2021-04-20 2021-08-20 北京行动智能科技有限公司 Automatic mapping method and device based on simplex code, and positioning method and equipment
CN113139949A (en) * 2021-04-30 2021-07-20 逻腾(杭州)科技有限公司 Robot image ambiguity detection method
CN115272494A (en) * 2022-09-29 2022-11-01 腾讯科技(深圳)有限公司 Calibration method and device for camera and inertial measurement unit and computer equipment

Also Published As

Publication number Publication date
CN110319772B (en) 2020-12-15

Similar Documents

Publication Publication Date Title
CN110319772A (en) Visual large-span distance measurement method based on unmanned aerial vehicle
CN112435325B (en) VI-SLAM and depth estimation network-based unmanned aerial vehicle scene density reconstruction method
CN110070615B (en) Multi-camera cooperation-based panoramic vision SLAM method
CN109974693B (en) Unmanned aerial vehicle positioning method and device, computer equipment and storage medium
CN104748728B (en) Intelligent machine attitude matrix calculation method and its applied to photogrammetric method
CN105844696B (en) Image position method and device based on ray model three-dimensionalreconstruction
Teller et al. Calibrated, registered images of an extended urban area
Ventura et al. Wide-area scene mapping for mobile visual tracking
CN108534782B (en) Binocular vision system-based landmark map vehicle instant positioning method
KR100912715B1 (en) Method and apparatus of digital photogrammetry by integrated modeling for different types of sensors
CN109166149A (en) A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU
CN108801274B (en) Landmark map generation method integrating binocular vision and differential satellite positioning
WO2015096806A1 (en) Attitude determination, panoramic image generation and target recognition methods for intelligent machine
CN112461210B (en) Air-ground cooperative building surveying and mapping robot system and surveying and mapping method thereof
CN104501779A (en) High-accuracy target positioning method of unmanned plane on basis of multi-station measurement
CN111815765B (en) Heterogeneous data fusion-based image three-dimensional reconstruction method
CN104361628A (en) Three-dimensional real scene modeling system based on aviation oblique photograph measurement
CN107560603B (en) Unmanned aerial vehicle oblique photography measurement system and measurement method
JP2012118666A (en) Three-dimensional map automatic generation device
CN110044374A (en) A kind of method and odometer of the monocular vision measurement mileage based on characteristics of image
CN107038753B (en) Stereoscopic vision three-dimensional reconstruction system and method
CN109739254A (en) Using the unmanned plane and its localization method of visual pattern positioning in a kind of electric inspection process
CN112129281A (en) High-precision image navigation positioning method based on local neighborhood map
CN112598706B (en) Multi-camera moving target three-dimensional track reconstruction method without accurate time-space synchronization
CN108917753A (en) Method is determined based on the position of aircraft of structure from motion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant