CN109685845B - POS system-based real-time image splicing processing method for FOD detection robot - Google Patents
- Publication number: CN109685845B (application CN201811420436.0A)
- Authority: CN (China)
- Legal status: Active (assumed by Google Patents; not a legal conclusion)
Classifications
- G06T7/70 — Image analysis; determining position or orientation of objects or cameras
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/80 — Geometric correction
- H04N23/80 — Camera processing pipelines; components thereof
- H04N23/81 — Camera processing pipelines; suppressing or minimising disturbance in the image signal generation
Abstract
A POS-system-based real-time image stitching method for an FOD detection robot comprises the steps of photographing the ground under inspection with a line-scan camera, acquiring the robot's position and attitude information, performing interpolation reconstruction, correcting the images, and stitching the images. By combining each image with the robot's real-time position and attitude, the method effectively overcomes the poor stitching quality that a line-scan camera suffers when the FOD detection robot's speed and heading vary during motion.
Description
(I) Technical field:
The invention relates to the technical field of image processing, and in particular to a real-time image stitching processing method for an FOD (foreign object debris) detection robot based on a POS (Position and Orientation System).
(II) Background art:
When the FOD detection robot inspects a pavement for foreign objects, it needs a large field of view, a high line rate, and high accuracy, so a line-scan camera is used for image acquisition. However, a line-scan camera is usually only a few pixels wide, and many captured images must be stitched along the width direction to obtain a wide two-dimensional image. Existing stitching methods for line-scan cameras usually record position information with devices such as gratings in fixed settings such as a lathe; the FOD detection robot, by contrast, operates in an open environment with a high degree of freedom, so the existing stitching methods cannot be used.
A POS (positioning and attitude-determination) system can acquire high-precision position, navigation, attitude, and velocity information of its payload in real time, but it has not yet been widely applied to FOD detection robots, particularly for image processing.
(III) Content of the invention:
The invention aims to provide a POS-system-based real-time image stitching processing method for an FOD detection robot that remedies the defects of the prior art: a simple and practical image-processing method that solves the failure of normal stitching of line-scan images caused by speed changes, turning, or uneven pavement while the FOD detection robot is moving.
The technical scheme of the invention is as follows: a POS-system-based real-time image stitching processing method for an FOD detection robot, characterized by comprising the following steps:
(1) A rotary encoder mounted on a wheel hub of the FOD detection robot triggers the line-scan camera, in step with the wheel's rotation, to photograph the ground under inspection;
(2) While the ground is photographed, the POS system acquires the robot's current position and attitude from GPS signals, so that each image frame remains synchronized with the robot's position and attitude data;
(3) To handle lens distortion of the line-scan camera, distortion parameters are obtained by photographing sampling points at fixed distances and a distortion-correction function is established; the function is applied to every pixel of each acquired image to compute its corrected position, and the pixels are reassembled into a new image according to these corrected positions, yielding the corrected image;
(4) The first frame is taken as the initial position and left unadjusted; every subsequent frame is adjusted according to the robot's position and attitude at the moment the frame was stored: the frame's position and attitude are compared with those of the previous frame to determine the robot's travel speed and heading, and if the two frames differ in angle or position, the frame is rotated and translated accordingly so that the image stays consistent with the robot's heading and displacement;
(5) The current frame and the previous frame are matched by gray level and texture to find the ground regions common to both. If an overlap is found, the position mapping between the two frames is derived from the matched overlap points, the frame's pixels are stitched to the previous frame according to this mapping, duplicate image data are deleted, and the frames are merged into one image. If there is no overlap, the position mapping is derived from the position and attitude information, the frame is stitched to the previous frame, and any gap between the two frames is filled completely by bilinear interpolation from the pixels adjacent to the gap;
If the two frames have no overlap yet their position information coincides, the frame is deleted;
(6) When the number of stitched frames reaches a set value, the stitched image at that moment is output as the final image.
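As a minimal sketch, the six steps can be tied together in a per-frame loop. The helper functions (`correct`, `align`, `match_and_merge`) are hypothetical stand-ins for the distortion correction, pose adjustment, and gray-level matching the steps describe, not the patent's exact implementation:

```python
def stitch_stream(frames, poses, correct, align, match_and_merge, n_frames):
    """Hypothetical driver loop for steps (1)-(6): each encoder-triggered
    frame is distortion-corrected, pose-aligned against the previous frame,
    and merged into the growing mosaic until n_frames have been stitched."""
    mosaic, prev_pose = None, None
    for i, (frame, pose) in enumerate(zip(frames, poses)):  # steps 1-2
        frame = correct(frame)                              # step 3
        if mosaic is None:
            mosaic = frame                                  # step 4: first frame unadjusted
        else:
            frame = align(frame, prev_pose, pose)           # step 4: rotate/translate
            mosaic = match_and_merge(mosaic, frame)         # step 5: match and stitch
        prev_pose = pose
        if i + 1 >= n_frames:                               # step 6: frame budget reached
            break
    return mosaic
```

Plugging in trivial helpers (identity correction and alignment, additive merging) exercises the control flow without any imaging code.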
The distortion parameters in step (3) include radial-distortion, tangential-distortion, and non-planar-distortion parameters.
The distortion function in step (3) is established as follows:

x′ = x − x₀
y′ = y − y₀
r² = x′·x′ + y′·y′
x″ = x′ + k₁x′r² + k₂x′r⁴ + k₃x′r⁶ + p₁(r² + 2x′x′) + 2p₂x′y′ + ap₁x′ + ap₂y′
y″ = y′ + k₁y′r² + k₃y′r⁶ + p₁(r² + 2y′y′) + 2p₂x′y′

where x″ and y″ are the corrected image pixel coordinates, x and y the original image pixel coordinates, (x₀, y₀) the principal-point coordinates, k₁, k₂, k₃ the coefficients of the first three terms of the Taylor-series expansion of the camera lens's radial-distortion model, p₁, p₂ the tangential-distortion parameters of the lens, and ap₁, ap₂ the non-planar-distortion parameters.
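A direct transcription of these formulas can serve as a sketch; the coefficient values themselves come from the calibration in step (3) and are not given in the patent:

```python
def correct_point(x, y, x0, y0, k1, k2, k3, p1, p2, ap1, ap2):
    """Map an original pixel coordinate (x, y) to its corrected position
    (x'', y'') using the distortion model above."""
    xp = x - x0                      # x' = x - x0
    yp = y - y0                      # y' = y - y0
    r2 = xp * xp + yp * yp           # r^2 = x'^2 + y'^2
    xs = (xp + k1 * xp * r2 + k2 * xp * r2**2 + k3 * xp * r2**3
          + p1 * (r2 + 2 * xp * xp) + 2 * p2 * xp * yp
          + ap1 * xp + ap2 * yp)
    ys = (yp + k1 * yp * r2 + k3 * yp * r2**3
          + p1 * (r2 + 2 * yp * yp) + 2 * p2 * xp * yp)
    return xs, ys
```

With all coefficients zero, the mapping reduces to a shift by the principal point, which is a quick sanity check on any implementation.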
The bilinear interpolation in step (5) interpolates over the four nearest known pixels, where (x, y), (x₁, y₁), (x₂, y₂), (x₁, y₂), and (x₂, y₁) are five point coordinates on the two-dimensional image; x₁ and x₂ are horizontal image coordinates, y₁ and y₂ are vertical image coordinates, and f(x, y) denotes the pixel value at the point with coordinates (x, y). The pixel value at (x, y) is unknown, and the other four points are the nearest surrounding pixels with known values.
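The interpolation formula itself appears only as a figure in the source; the standard bilinear formula it describes can be sketched as (`f` here is any mapping from coordinates to pixel values — an assumption about the data layout, not the patent's):

```python
def bilinear(f, x, y, x1, y1, x2, y2):
    """Estimate the unknown pixel value f(x, y) from the four known
    neighbours f(x1, y1), f(x2, y1), f(x1, y2), f(x2, y2)."""
    area = (x2 - x1) * (y2 - y1)
    return (f[(x1, y1)] * (x2 - x) * (y2 - y)
            + f[(x2, y1)] * (x - x1) * (y2 - y)
            + f[(x1, y2)] * (x2 - x) * (y - y1)
            + f[(x2, y2)] * (x - x1) * (y - y1)) / area
```

Each known neighbour is weighted by the area of the sub-rectangle opposite to it, so a query point at the centre of the cell weights all four equally.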
The working principle of the invention is as follows:
1. POS system: a GPS relative-positioning approach is adopted, with two GPS signal receivers: one installed at a fixed observation station in the airport and the other mounted on the FOD detection robot, observing synchronously with the fixed station's receiver to determine the robot's instantaneous position relative to the station. High-precision dynamic relative positioning is performed by carrier-phase pseudo-range relative positioning, with the integer ambiguity of the carrier phase resolved dynamically. The attitude is computed from the three-dimensional velocity output by the receiver mounted on the robot, so that the robot's position and attitude information are both obtained. Since the line-scan camera and the receiver sit on the same platform, the rigid geometric relationship between them can be obtained by calibration.
2. Image correction and stitching: geometric deformation of the image has two main sources: lens distortion, and image-position deviation caused by the FOD robot's travel offset and attitude changes. For lens distortion, sampling points are placed at equal spatial intervals; with the x-coordinate held fixed, a set of marker points is extracted at equal intervals along the y-axis, the markers are moved a fixed distance, and the camera photographs them again to obtain the corresponding pixel points. Using the invariance of the cross ratio, the cross ratio of the actual sampling points and that of the corresponding pixel points are computed to obtain their correspondence; the camera's distortion function is then substituted into this correspondence to build a lens-distortion-correction function that compensates and corrects the lens's distortion error.
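The cross-ratio invariance this calibration relies on can be illustrated briefly. For four collinear points with 1-D coordinates a, b, c, d (names hypothetical), the cross ratio survives any projective mapping such as the lens projection, which is what lets the pixel-side ratios be equated with the object-side ratios:

```python
def cross_ratio(a, b, c, d):
    """Cross ratio (AC/BC) / (AD/BD) of four collinear points;
    invariant under projective transformations such as a camera projection."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))
```

For equally spaced points 0, 1, 2, 3 the cross ratio is 4/3, and it stays 4/3 after mapping the line through any projective transform such as t → (2t + 1)/(t + 2).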
For the robot's travel deviation and attitude changes, the exterior-orientation elements of the line-scan camera at each scan instant are obtained from the real-time position and attitude supplied by the POS system together with the calibrated geometric relationship between the camera and the POS receiver; from these, the position of each scan-line image in the ground coordinate system can be computed. Based on the relative position of each scan line to the previous one, the current scan line is corrected by the appropriate rotation and translation and is finally matched and stitched to the previous scan line.
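The rotation-and-translation correction of a scan line can be sketched as a 2-D rigid transform; the heading change and displacement would come from the POS pose delta between consecutive lines (a minimal illustration under that assumption, not the patent's exact computation):

```python
import math

def correct_scan_line(points, d_heading_rad, dx, dy):
    """Rotate each (x, y) ground point of a scan line by the heading change
    and translate it by the robot's displacement since the previous line."""
    c, s = math.cos(d_heading_rad), math.sin(d_heading_rad)
    return [(c * x - s * y + dx, s * x + c * y + dy) for x, y in points]
```

A quarter-turn heading change with zero displacement, for instance, rotates the point (1, 0) onto (0, 1).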
The invention has the advantage that, by combining images with real-time position and attitude from the POS system, the stitching method effectively overcomes the poor line-scan stitching quality caused by the FOD detection robot's inconsistent speed and heading during motion.
(IV) Description of the drawings:
Fig. 1 is a schematic flow chart of the POS-system-based real-time image stitching processing method for an FOD detection robot according to the invention.
Fig. 2 is a schematic diagram of the parameter relationships of the bilinear interpolation used in the method.
(V) Specific embodiment:
Example: a POS-system-based real-time image stitching processing method for an FOD detection robot, shown in Fig. 1, comprises the following steps:
(1) A rotary encoder mounted on a wheel hub of the FOD detection robot triggers the line-scan camera, in step with the wheel's rotation, to photograph the ground under inspection;
(2) While the ground is photographed, the POS system acquires the robot's current position and attitude from GPS signals, so that each image frame remains synchronized with the robot's position and attitude data;
(3) To handle lens distortion of the line-scan camera, distortion parameters — including radial-, tangential-, and non-planar-distortion parameters — are obtained by photographing sampling points at fixed distances, and a distortion-correction function is established; applying the function to every pixel of an acquired image gives each pixel's corrected position, and reassembling the pixels according to these corrected positions yields the corrected image;
wherein the distortion function is established as follows:

x′ = x − x₀
y′ = y − y₀
r² = x′·x′ + y′·y′
x″ = x′ + k₁x′r² + k₂x′r⁴ + k₃x′r⁶ + p₁(r² + 2x′x′) + 2p₂x′y′ + ap₁x′ + ap₂y′
y″ = y′ + k₁y′r² + k₃y′r⁶ + p₁(r² + 2y′y′) + 2p₂x′y′

where x″ and y″ are the corrected image pixel coordinates, x and y the original image pixel coordinates, (x₀, y₀) the principal-point coordinates, k₁, k₂, k₃ the coefficients of the first three terms of the Taylor-series expansion of the camera lens's radial-distortion model, p₁, p₂ the tangential-distortion parameters of the lens, and ap₁, ap₂ the non-planar-distortion parameters.
(4) The first frame is taken as the initial position and left unadjusted; every subsequent frame is adjusted according to the robot's position and attitude at the moment the frame was stored: the frame's position and attitude are compared with those of the previous frame to determine the robot's travel speed and heading, and if the two frames differ in angle or position, the frame is rotated and translated accordingly so that the image stays consistent with the robot's heading and displacement;
(5) The current frame and the previous frame are matched by gray level and texture to find the ground regions common to both. If an overlap is found, the position mapping between the two frames is derived from the matched overlap points, the frame's pixels are stitched to the previous frame according to this mapping, duplicate image data are deleted, and the frames are merged into one image. If there is no overlap, the position mapping is derived from the position and attitude information, the frame is stitched to the previous frame, and any gap between the two frames is filled completely by bilinear interpolation from the pixels adjacent to the gap;
the bilinear interpolation method comprises the following steps:
wherein, (x, y), (x) 1 ,y 1 )、(x 2 ,y 2 )、(x 1 ,y 2 )、(x 2 ,y 1 ) Respectively, five-point coordinates, x and x, on the two-dimensional image 1 、x 2 Is the image horizontal coordinate value, y 1 、y 2 The coordinate value in the vertical direction of the image is f (x, y), which represents the pixel value of a point with coordinates (x, y) on the image, wherein the pixel value of the point (x, y) is unknown, and the remaining four points are the coordinates of the pixel points of the nearest four pixel values around the point (x, y), as shown in fig. 2:
FIG. 2 is a schematic diagram of an image coordinate system, in which the horizontal and vertical coordinates represent the number of rows and columns of pixels, where the P point is a pixel point with unknown pixel value and the coordinates thereof are (x, y); q 11 、Q 12 、Q 21 、Q 22 The coordinates of four pixel points with known pixel values adjacent to the P point are (x) 1 ,y 1 )、(x 1 ,y 2 )、(x 2 ,y 1 )、(x 2 ,y 2 ) If so, the pixel value of the point P can be obtained by calculation according to the bilinear interpolation formula;
If the two frames have no overlap yet their position information coincides, the frame is deleted;
(6) When the number of stitched frames reaches a set value, the stitched image at that moment is output as the final image.
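The gray-level matching in step (5) can be sketched as a search for the row overlap that maximizes normalized cross-correlation between the tail of the previous frame and the head of the current one. This is a simplified 1-D-offset version of the matching described above, not the patent's exact algorithm:

```python
import numpy as np

def find_overlap(prev, curr, max_rows):
    """Return (rows, score): the overlap depth, in rows, that best aligns
    the end of `prev` with the start of `curr`, scored by normalized
    cross-correlation of gray levels."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom else 0.0
    best, best_score = 0, -2.0
    for s in range(1, max_rows + 1):
        score = ncc(prev[-s:], curr[:s])
        if score > best_score:
            best, best_score = s, score
    return best, best_score
```

If the returned score falls below a threshold, the frames would be treated as non-overlapping and stitched from the POS pose instead, as step (5) specifies.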
Claims (4)
1. A POS-system-based real-time image stitching processing method for an FOD detection robot, characterized by comprising the following steps:
(1) a rotary encoder mounted on a wheel hub of the FOD detection robot triggers a line-scan camera, in step with the wheel's rotation, to photograph the ground under inspection;
(2) while the ground is photographed, the POS system acquires the robot's current position and attitude from GPS signals, so that each image frame remains synchronized with the robot's position and attitude data;
(3) to handle lens distortion of the line-scan camera, distortion parameters are obtained by photographing sampling points at fixed distances and a distortion-correction function is established; the function is applied to every pixel of each acquired image to compute its corrected position, and the pixels are reassembled into a new image according to these corrected positions, yielding the corrected image;
(4) the corrected image from step (3) is taken as the initial position and left unadjusted; every subsequent frame is adjusted according to the robot's position and attitude at the moment the frame was stored: the frame's position and attitude are compared with those of the previous frame to determine the robot's travel speed and heading, and if the two frames differ in angle or position, the frame is rotated and translated accordingly so that the image stays consistent with the robot's heading and displacement;
(5) the frames adjusted according to the position and attitude information are matched by gray level and texture to find the ground regions common to two adjacent frames; if an overlap is found and the position information does not coincide, the position mapping between the two frames is derived from the matched overlap points, the frame's pixels are stitched to the previous frame according to this mapping, duplicate image data are deleted, and the frames are merged into one image; if there is no overlap, the position mapping is derived from the position and attitude information, the frame is stitched to the previous frame, and any gap between the two frames is filled completely by bilinear interpolation from the pixels adjacent to the gap;
if the two frames have no overlap yet their position information coincides, the frame is deleted;
(6) when the number of stitched frames reaches a set value, the stitched image at that moment is output as the final image.
2. The POS-system-based real-time image stitching processing method for an FOD detection robot as claimed in claim 1, wherein the distortion parameters in step (3) include radial-distortion, tangential-distortion, and non-planar-distortion parameters.
3. The POS-system-based real-time image stitching processing method for an FOD detection robot as claimed in claim 1, wherein the distortion function in step (3) is established as follows:

x′ = x − x₀
y′ = y − y₀
r² = x′·x′ + y′·y′
x″ = x′ + k₁x′r² + k₂x′r⁴ + k₃x′r⁶ + p₁(r² + 2x′x′) + 2p₂x′y′ + ap₁x′ + ap₂y′
y″ = y′ + k₁y′r² + k₃y′r⁶ + p₁(r² + 2y′y′) + 2p₂x′y′

where x″ and y″ are the corrected image pixel coordinates, x and y the original image pixel coordinates, (x₀, y₀) the principal-point coordinates, k₁, k₂, k₃ the coefficients of the first three terms of the Taylor-series expansion of the camera lens's radial-distortion model, p₁, p₂ the tangential-distortion parameters of the lens, and ap₁, ap₂ the non-planar-distortion parameters.
4. The POS-system-based real-time image stitching processing method for an FOD detection robot as claimed in claim 1, wherein the bilinear interpolation in step (5) interpolates over the four nearest known pixels, where (x, y), (x₁, y₁), (x₂, y₂), (x₁, y₂), and (x₂, y₁) are five point coordinates on the two-dimensional image; x₁ and x₂ are horizontal image coordinates, y₁ and y₂ are vertical image coordinates, and f(x, y) denotes the pixel value at the point with coordinates (x, y); the pixel value at (x, y) is unknown, and the other four points are the nearest surrounding pixels with known values.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201811420436.0A | 2018-11-26 | 2018-11-26 | POS system-based real-time image splicing processing method for FOD detection robot
Publications (2)
Publication Number | Publication Date
---|---
CN109685845A | 2019-04-26
CN109685845B | 2023-04-07
Family
- ID: 66184970
- Country: CN
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant