CN111899174A - Single-camera rotation splicing method based on deep learning - Google Patents


Info

Publication number
CN111899174A
CN111899174A (application CN202010745129.0A)
Authority
CN
China
Prior art keywords
camera
image
deep learning
images
parameters
Prior art date
Legal status
Pending
Application number
CN202010745129.0A
Other languages
Chinese (zh)
Inventor
郑文涛
林姝含
吴刚
Current Assignee
Beijing Tianrui Kongjian Technology Co ltd
Original Assignee
Beijing Tianrui Kongjian Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Tianrui Kongjian Technology Co ltd filed Critical Beijing Tianrui Kongjian Technology Co ltd
Priority to CN202010745129.0A
Publication of CN111899174A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The invention relates to a single-camera rotational stitching method based on deep learning. The camera pose of the current frame image relative to a reference angle is estimated online with a regression model from image to camera pose; the current frame image is geometrically transformed using the camera intrinsic parameters and the online-estimated camera extrinsic parameters relative to the reference angle, and mapped into the stitched-image coordinate system to form a transformation result image; the transformation result images of all frame images are then fused to form the stitched image. The method uses deep learning to estimate the camera pose online during rotation, enabling fast stitching; the offline-estimated camera intrinsic parameters and the online-estimated camera pose are used to apply a cylindrical projection transformation to the images, avoiding the stitching distortion caused by directly using a homography matrix.

Description

Single-camera rotation splicing method based on deep learning
Technical Field
The invention relates to a single-camera rotation splicing method based on deep learning.
Background
Image stitching is the process of registering multiple images, re-projecting them onto a common surface, fusing them, and finally generating a panoramic image; it can be applied to video surveillance of large-view scenes such as airports, parks, and ports. Common video stitching approaches include stitching images collected by multiple fixed cameras into a panorama, or stitching images collected during the rotation of a single camera into a panorama. In either case, the images to be stitched must go through two basic steps: geometric transformation and color fusion.
In some application scenarios (such as thermal imaging video surveillance), a single camera is expensive, and acquiring a panoramic image with a large field of view and high resolution by multi-camera stitching results in high cost. Rotating a single camera and stitching the captured images is therefore a reasonable choice.
Compared with stitching images from multiple fixed cameras, the pose of a single camera changes continuously during rotation, so its extrinsic parameters change continuously as well; the geometric transformation parameters of the images cannot be obtained by estimating the camera intrinsic and extrinsic matrices in advance, as in multi-fixed-camera stitching. How to estimate the geometric transformation parameters efficiently and dynamically thus becomes the main problem to be solved in single-camera rotational stitching.
If the camera pitch angle is zero and the camera faces the target scene horizontally, cylindrical projection can be applied to the images sampled during the camera rotation; after the projection transformation only displacement offsets remain between the images, and the phase correlation method can be used to estimate the displacement parameters and complete the stitching [1]. This approach is simple and efficient, but it only applies to the special case of no camera pitch. Therefore, in the prior art, estimating the geometric transformation parameters usually relies on time-consuming image feature extraction and matching, and real-time stitching is difficult to guarantee.
To achieve real-time stitching, an image transformation parameter estimation method based on machine learning has been proposed [2]: first, the displacement, rotation, and scale deviations between adjacent sampled images during rotation are estimated with the phase correlation method and converted into absolute deviations relative to a designated reference image; then Support Vector Regression (SVR) is used to predict the homography matrix of the current sampled image relative to the reference image for the stitching transformation. Although this method avoids time-consuming feature extraction and matching, it uses the homography matrix directly for stitching, which is equivalent to adopting a planar projection, so the stitching result is heavily distorted; moreover, estimating the homography matrix with traditional SVR offers limited prediction performance and can hardly meet the accuracy required for stitching.
Disclosure of Invention
To overcome the above defects of the prior art, the invention provides a single-camera rotational stitching method based on deep learning, which reduces the amount of data processing and increases the stitching speed.
The technical scheme of the invention is as follows: a single-camera rotational stitching method based on deep learning, in which the camera pose of the current frame image relative to a reference angle is estimated online with a regression model from image to camera pose; the current frame image is geometrically transformed using the camera intrinsic parameters and the online-estimated camera extrinsic parameters relative to the reference angle, and mapped into the stitched-image coordinate system to form a transformation result image in that coordinate system; and the transformation result images of all frame images are fused to form the stitched image.
The geometric transformation may include:
a) transform the two-dimensional image coordinates (x, y) of image I_n to the three-dimensional camera coordinates (X, Y, Z) corresponding to the reference image I_0:
[X, Y, Z]^T = R_n' * K^(-1) * [x, y, 1]^T
wherein
R_n' = R_n * R_0^(-1)
R_n' is the camera pose parameter of image I_n relative to the reference image I_0, R_n is the camera extrinsic matrix of image I_n obtained by online estimation, R_0 is the camera extrinsic matrix corresponding to the reference image I_0 (i.e., the camera extrinsic matrix at the reference angle), K is the camera intrinsic matrix, and the subscript n is the image number;
b) cylindrically project the three-dimensional coordinates (X, Y, Z) into the stitched-image coordinate system, and compute the resulting image coordinates (u, v) of image I_n according to:
u = f*atan(X/Z)
v = f*Y/sqrt(X^2 + Z^2)
where f is the camera focal length, obtained from the intrinsic matrix K = diag(f, f, 1).
Preferably, before image fusion, a phase correlation method is used to estimate the displacement offset between adjacent transformation result images, deviation compensation is performed on the transformation result images, and image fusion is performed on the compensated transformation result images.
The regression model from image to camera pose can be obtained by end-to-end training based on deep learning.
Preferably, the training of the regression model from image to camera pose is performed offline: the single camera shoots and samples at its actual shooting position to obtain image samples at different shooting angles covering its rotational shooting range (the sampling density can be determined according to the accuracy requirement), the camera intrinsic parameters and the camera extrinsic parameters (camera poses) corresponding to the image samples are estimated, the image samples are used as training samples with the corresponding camera extrinsic parameters as sample labels, and the regression model is trained based on a deep neural network.
The camera intrinsic parameters and the camera extrinsic parameters corresponding to the image samples are estimated as follows: extract image features and perform feature matching, estimate the homography matrices between adjacent images from the image features, estimate the camera intrinsic and extrinsic parameters by decomposing the homography matrices and iterative optimization, and multiply each estimated extrinsic matrix by the inverse of the extrinsic matrix corresponding to the reference image to obtain the camera extrinsic parameters (camera pose) relative to the reference angle.
The camera extrinsic parameters can be represented by a quaternion: q_i = (q_i0; q_i1, q_i2, q_i3),
wherein
q_i0 = sqrt(1 + m_i,11 + m_i,22 + m_i,33) / 2
q_i1 = (m_i,23 - m_i,32) / (4*q_i0)
q_i2 = (m_i,31 - m_i,13) / (4*q_i0)
q_i3 = (m_i,12 - m_i,21) / (4*q_i0)
R_i' = R_i * R_0^(-1)
R_i' = {m_i,st}
where R_i' is the extrinsic matrix of sample image I_i relative to the reference image I_0, m_i,st is the element in row s, column t of R_i', R_i is the extrinsic matrix corresponding to sample image I_i, I_0 is the reference image, and the subscript i is the sample image number.
Preferably, adjacent frame images have an overlap (same scene area) of 30% or more.
The single camera may shoot while rotating continuously and periodically.
A certain shooting angle of the single camera can be designated as the reference angle; the interval from the reference angle until the camera returns to the reference angle again is taken as one stitching period, and all images within the stitching period are stitched (the image captured when the camera returns to the reference angle is not included; it belongs to the next period).
The invention has the following beneficial effects: the amount of online data processing is small and the speed is high, meeting the requirement of fast stitching during single-camera rotation; the geometric transformation of the images is realized by estimating the pose parameters (i.e., camera extrinsic parameters) during camera rotation instead of directly using a homography matrix, avoiding large distortion in the stitching result; the camera pose is estimated directly from the input image with an end-to-end method, overcoming the dependence on hand-crafted features and the limited prediction performance of traditional methods; the camera pitch angle is not constrained, so the method applies to general camera rotation, and real-time stitching can be achieved with GPU parallel computation.
The camera pose during rotation is estimated online using deep learning, enabling fast stitching; the offline-estimated camera intrinsic parameters and the online-estimated camera pose are used to apply a cylindrical projection transformation to the images, avoiding the stitching distortion caused by directly using a homography matrix.
Drawings
FIG. 1 is a schematic flow diagram of the present invention.
Detailed Description
Referring to fig. 1, the processing flow of the method of the present invention includes an off-line part and an on-line part.
In the offline part, a regression model for online camera pose estimation is trained. The steps are as follows:
1) Acquire images at different angles during the camera rotation as image samples.
For the pose estimation model to be accurate enough, the sampling should be as dense as possible and cover the different angles of the camera rotation.
A certain shooting angle during the rotation is designated as the reference angle, and the image captured at that angle is regarded as the reference image.
2) Estimate the camera intrinsic parameters and the camera extrinsic parameters (camera poses) at the different angles.
A common method from image stitching [4] can be adopted: extract and match image features, estimate the homography matrices between adjacent images, and estimate the camera intrinsic and extrinsic parameters by decomposing the homography matrices and iterative optimization.
The camera intrinsic parameters remain unchanged, while the extrinsic parameters vary with the rotation angle; each estimated extrinsic matrix should additionally be multiplied by the inverse of the extrinsic matrix corresponding to the reference image, so that it represents the camera pose relative to the reference angle.
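For illustration only, the sketch below (OpenCV-based, with hypothetical function and variable names, not the patent's code) shows the first half of this offline calibration step: ORB feature matching and homography estimation between adjacent sample images. For a purely rotating camera these homographies can then be decomposed and refined into K and the per-image rotations R_i, e.g. by a standard panorama bundle-adjustment step as in [4].

```python
# Illustrative sketch (assumed OpenCV pipeline, hypothetical names): estimate
# the homography between two adjacent sample images from matched ORB features.
import cv2
import numpy as np

def adjacent_homography(img_a, img_b, max_features=2000):
    """Estimate the 3x3 homography mapping img_a pixel coordinates to img_b."""
    orb = cv2.ORB_create(max_features)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

# For a purely rotating camera H ~ K * R_ab * K^-1, so the focal length (hence K)
# and the per-image rotations R_i can be recovered from the chain of homographies
# and refined by the usual panorama bundle-adjustment step [4].
```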
3) A regression model for estimating the camera pose is trained.
A regression model from images to camera poses is trained with an end-to-end deep learning method. The training samples are the images collected at different angles in step 1); the sample labels are the camera poses of those images relative to the reference angle, as estimated in step 2).
The trained regression model is used to estimate the camera pose online.
In the online part, the captured images are stitched. The steps are as follows:
1) Acquire a new image: images are collected at a constant rotation speed, ensuring an overlap of 30% or more between consecutive frames.
2) Estimate the camera pose: estimate the camera pose of the current image relative to the reference angle using the regression model trained in the offline part.
3) Image transformation: geometrically transform the current image using the offline-estimated camera intrinsic parameters and the online-estimated camera pose, so that it is mapped into the coordinate system of the stitching result.
4) Transform compensation: correct possible deviation of the geometric transformation; the displacement offset between the transformed image of the current frame and that of the previous frame is estimated with the phase correlation method and compensated.
5) Image fusion: image fusion is realized with existing techniques; feathering and similar methods [4] can be adopted to reduce color differences between images and obtain a visually seamless stitching result.
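As a rough illustration of this fusion step, the following sketch performs a simple feather-style weighted blend of a newly warped frame into the mosaic; the weight masks, function name, and exact weighting are assumptions, not the patent's fusion code.

```python
# Simple feather-style weighted blend (illustrative sketch only).
import numpy as np

def feather_blend(mosaic, new_img, mosaic_w, new_w):
    """mosaic/new_img: HxWx3 float arrays already warped into the mosaic frame.
    mosaic_w/new_w: HxW float weights (e.g. distance-to-border ramps) that are
    high near each image's centre and fall to zero at its borders."""
    w_sum = mosaic_w + new_w
    w_sum[w_sum == 0] = 1.0                              # avoid division by zero
    blended = mosaic * mosaic_w[..., None] + new_img * new_w[..., None]
    return blended / w_sum[..., None]
```

In practice the weight masks could be built, for example, with a distance transform of each warped frame's valid-pixel mask so that the weights fall off smoothly toward the frame borders, which is what produces the gradual seam transition.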
Generally, online stitching is performed periodically: one rotation of the camera is called a period; when the camera rotates to the pre-specified reference angle, a new stitching period starts, and the image acquired at the reference angle is used as the reference image. All images acquired within one stitching period are stitched together.
The camera pose estimation, image transformation, and transform compensation involved in the above flow are described in more detail below.
1) Camera pose estimation:
Camera pose estimation involves offline training and online estimation.
a) Off-line training
Let I_i (i = 0, 1, …, N) be the training samples, where I_0 is the reference image; the subscript i in a parameter (variable) denotes the parameter (variable) of the i-th image. After the camera intrinsic matrix K and the extrinsic matrix R_i corresponding to each image I_i are obtained through step 2) of the offline part,
R_i' = R_i * R_0^(-1)
is the camera pose parameter of I_i relative to I_0.
The training process is based on a deep neural network. The network input is each sample image I_i, and the output is the quaternion representation q_i = (q_i0; q_i1, q_i2, q_i3) of the camera pose parameter R_i'; q_i and R_i' satisfy the following conversion formulas [5]:
q_i0 = sqrt(1 + m_i,11 + m_i,22 + m_i,33) / 2
q_i1 = (m_i,23 - m_i,32) / (4*q_i0)
q_i2 = (m_i,31 - m_i,13) / (4*q_i0)
q_i3 = (m_i,12 - m_i,21) / (4*q_i0)
where m_i,st is the element in row s, column t of matrix R_i', i.e., R_i' = {m_i,st}, s, t ∈ {1, 2, 3}.
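A small NumPy sketch of this rotation-matrix-to-quaternion conversion (assuming q_i0 > 0, i.e. a rotation angle below 180 degrees, which holds for poses within one sweep; element m_i,st corresponds to R[s-1, t-1]):

```python
import numpy as np

def rotmat_to_quat(R):
    """3x3 pose matrix R' -> quaternion (q0; q1, q2, q3), scalar part first."""
    q0 = 0.5 * np.sqrt(max(1.0 + np.trace(R), 0.0))
    q1 = (R[1, 2] - R[2, 1]) / (4.0 * q0)
    q2 = (R[2, 0] - R[0, 2]) / (4.0 * q0)
    q3 = (R[0, 1] - R[1, 0]) / (4.0 * q0)
    return np.array([q0, q1, q2, q3])
```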
The network architecture follows PoseNet [3]: the network comprises 22 convolutional layers and 6 Inception units, and the last layer is a fully connected layer with 4 neurons that outputs the 4 parameters of the quaternion. The training loss function is defined as the L2 norm of the difference between the label pose q_i of the input image and the estimated pose q̂_i:
loss_i = || q_i - q̂_i ||_2
where q_i and q̂_i are both quaternion representations of the camera pose; one is the sample label value and the other is the estimate output by the network.
b) On-line estimation
During online estimation, the current sampled image I_n (n is the index of the sampled image within the current stitching period) is input, and the quaternion representation of the camera pose R_n' relative to the reference angle is predicted:
q_n = (q_n0; q_n1, q_n2, q_n3)
R_n' is then recovered from q_n:
R_n' = [ 1 - 2*(q_n2^2 + q_n3^2)    2*(q_n1*q_n2 + q_n0*q_n3)    2*(q_n1*q_n3 - q_n0*q_n2) ]
       [ 2*(q_n1*q_n2 - q_n0*q_n3)    1 - 2*(q_n1^2 + q_n3^2)    2*(q_n2*q_n3 + q_n0*q_n1) ]
       [ 2*(q_n1*q_n3 + q_n0*q_n2)    2*(q_n2*q_n3 - q_n0*q_n1)    1 - 2*(q_n1^2 + q_n2^2) ]
For the reference image I_0, the camera pose matrix R_0' is the identity matrix.
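The corresponding quaternion-to-rotation-matrix conversion, written as a NumPy sketch consistent with the conversion formulas above (the sign convention is an assumption, chosen to invert the rotation-matrix-to-quaternion mapping given earlier):

```python
import numpy as np

def quat_to_rotmat(q):
    """Unit quaternion (q0; q1, q2, q3) -> 3x3 pose matrix R_n'."""
    q0, q1, q2, q3 = q
    return np.array([
        [1 - 2*(q2*q2 + q3*q3), 2*(q1*q2 + q0*q3),     2*(q1*q3 - q0*q2)],
        [2*(q1*q2 - q0*q3),     1 - 2*(q1*q1 + q3*q3), 2*(q2*q3 + q0*q1)],
        [2*(q1*q3 + q0*q2),     2*(q2*q3 - q0*q1),     1 - 2*(q1*q1 + q2*q2)],
    ])
```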
2) Image transformation
Given the camera intrinsic matrix K and the pose matrix R_n' above, the current sampled image I_n can be transformed to the image coordinates of the stitching result in the following two steps:
a) transform the two-dimensional image coordinates (x, y) of I_n to the three-dimensional camera coordinates (X, Y, Z) corresponding to the reference image I_0:
[X, Y, Z]^T = R_n' * K^(-1) * [x, y, 1]^T
b) cylindrically project the three-dimensional coordinates (X, Y, Z) onto the resulting image coordinates (u, v):
u = f*atan(X/Z)
v = f*Y/sqrt(X^2 + Z^2)
where f is the camera focal length, obtained from the intrinsic matrix K = diag(f, f, 1). This transformation avoids the stitching distortion caused by transforming images directly with a homography matrix.
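A NumPy sketch of this two-step mapping (forward form only; a production version would typically build the inverse map and resample, for example with cv2.remap, which is an assumption and not the patent's code):

```python
import numpy as np

def cylindrical_coords(img_shape, K, R_prime):
    """For every pixel (x, y) of the current frame, return its mosaic
    coordinates (u, v) via (X,Y,Z)^T = R' K^-1 (x,y,1)^T and the cylinder."""
    f = K[0, 0]
    h, w = img_shape[:2]
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x N

    X, Y, Z = R_prime @ np.linalg.inv(K) @ pix      # rays in the reference frame
    u = f * np.arctan2(X, Z)
    v = f * Y / np.sqrt(X * X + Z * Z)
    return u.reshape(h, w), v.reshape(h, w)
```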
3) Transform compensation
Because the pose prediction may contain some error, the transformed images cannot be guaranteed to be perfectly aligned. In this step, the phase correlation method [6] is used to estimate the displacement offset between adjacent transformed images and compensate the transformation accordingly.
Let I_n(u, v) be the transformed image of the current frame and I_{n-1}(u, v) the (already compensated) transformed image of the previous frame. Their normalized cross-power spectrum is computed via the FFT, and the displacement offset (u_0, v_0) between them in the spatial domain can be estimated from its phase:
F_n(ξ,η) * conj(F_{n-1}(ξ,η)) / | F_n(ξ,η) * conj(F_{n-1}(ξ,η)) | = exp(-j*2π*(ξ*u_0 + η*v_0))
where F_{n-1}(ξ,η) and F_n(ξ,η) denote the Fourier transforms of I_{n-1}(u, v) and I_n(u, v), respectively.
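A minimal NumPy sketch of this phase-correlation step (integer displacements only; windowing and sub-pixel refinement are omitted, and names are illustrative):

```python
import numpy as np

def phase_correlation(img_prev, img_cur):
    """Estimate the displacement (u0, v0) between two transformed frames."""
    F_prev = np.fft.fft2(img_prev)
    F_cur = np.fft.fft2(img_cur)
    cross = F_cur * np.conj(F_prev)
    cross /= np.abs(cross) + 1e-12                  # normalized cross-power spectrum
    corr = np.abs(np.fft.ifft2(cross))
    dv, du = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = img_cur.shape
    if du > w // 2:                                  # wrap-around -> negative shift
        du -= w
    if dv > h // 2:
        dv -= h
    return du, dv
```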
In summary, the invention provides a stitching method suited to camera rotation: it uses deep learning to estimate the camera pose online during rotation, enabling fast stitching; the offline-estimated camera intrinsic parameters and the online-estimated camera pose are used to apply a cylindrical projection transformation to the images, avoiding the stitching distortion caused by directly using a homography matrix.
Unless otherwise specified, and except where one technical means is a further limitation of another, the technical means disclosed in the invention can be combined arbitrarily to form various different technical solutions.
References
[1] Chen, S. E., "QuickTime VR – an image-based approach to virtual environment navigation", Computer Graphics (SIGGRAPH '95), 1995.
[2] Duitake et al., "Fast image stitching under single-camera rotation monitoring", Journal of Image and Graphics (中国图象图形学报), 21(2), 2016.
[3] Alex Kendall et al., "PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization", International Conference on Computer Vision (ICCV), 2015.
[4] Richard Szeliski, "Image Alignment and Stitching: A Tutorial", Microsoft Technical Report, 2004.
[5] Gao Xiang et al., "14 Lectures on Visual SLAM: From Theory to Practice", Publishing House of Electronics Industry, 2017.
[6] Reddy, B. S. et al., "An FFT-based technique for translation, rotation, and scale-invariant image registration", IEEE Transactions on Image Processing, 5(8), 1996.

Claims (10)

1. A single-camera rotational stitching method based on deep learning, comprising: estimating online the camera pose of the current frame image relative to a reference angle with a regression model from image to camera pose; geometrically transforming the current frame image using the camera intrinsic parameters and the online-estimated camera extrinsic parameters of the current frame image relative to the reference angle, and mapping it into the stitched-image coordinate system to form a transformation result image in the stitched-image coordinate system; and performing image fusion on the transformation result images of all frame images to form a stitched image.
2. The single-camera rotational stitching method based on deep learning of claim 1, wherein the geometric transformation comprises:
a) transforming the two-dimensional image coordinates (x, y) of image I_n to the three-dimensional camera coordinates (X, Y, Z) corresponding to the reference image I_0:
[X, Y, Z]^T = R_n' * K^(-1) * [x, y, 1]^T
wherein
R_n' = R_n * R_0^(-1)
R_n' is the camera pose parameter of image I_n relative to the reference image I_0, R_n is the camera extrinsic matrix of image I_n obtained by online estimation, R_0 is the camera extrinsic matrix corresponding to the reference image I_0 (i.e., the camera extrinsic matrix at the reference angle), K is the camera intrinsic matrix, and the subscript n is the image number;
b) cylindrically projecting the three-dimensional coordinates (X, Y, Z) into the stitched-image coordinate system, and computing the resulting image coordinates (u, v) of image I_n according to:
u = f*atan(X/Z)
v = f*Y/sqrt(X^2 + Z^2)
where f is the camera focal length, obtained from the intrinsic matrix K = diag(f, f, 1).
3. The single-camera rotational stitching method based on deep learning of claim 1, wherein before image fusion, a phase correlation method is used to estimate the displacement offset between adjacent transformation result images, deviation compensation is performed on the transformation result images, and image fusion is performed on the compensated transformation result images.
4. The single-camera rotational stitching method based on deep learning of claim 1, wherein the regression model from image to camera pose is obtained by end-to-end training based on deep learning.
5. The method as claimed in claim 4, wherein the training of the regression model from image to camera pose is performed offline: the single camera shoots and samples at its actual shooting position to obtain image samples at different shooting angles covering its rotational shooting range, the camera intrinsic parameters and the camera extrinsic parameters corresponding to the image samples are estimated, the image samples are used as training samples with the corresponding camera extrinsic parameters as sample labels, and the regression model is trained based on a deep neural network.
6. The single-camera rotational stitching method based on deep learning of claim 5, wherein the camera intrinsic parameters and the camera extrinsic parameters corresponding to the image samples are estimated as follows: extract image features and perform feature matching, estimate the homography matrices between adjacent images from the image features, estimate the camera intrinsic and extrinsic parameters by decomposing the homography matrices and iterative optimization, and multiply each estimated extrinsic matrix by the inverse of the extrinsic matrix corresponding to the reference image to obtain the camera extrinsic parameters relative to the reference angle.
7. The single-camera rotational stitching method based on deep learning of claim 6, wherein the camera extrinsic parameters are represented by a quaternion: q_i = (q_i0; q_i1, q_i2, q_i3),
wherein
q_i0 = sqrt(1 + m_i,11 + m_i,22 + m_i,33) / 2
q_i1 = (m_i,23 - m_i,32) / (4*q_i0)
q_i2 = (m_i,31 - m_i,13) / (4*q_i0)
q_i3 = (m_i,12 - m_i,21) / (4*q_i0)
R_i' = R_i * R_0^(-1)
R_i' = {m_i,st}
m_i,st is the element in row s, column t of matrix R_i', R_i is the extrinsic matrix corresponding to sample image I_i, R_i' is the extrinsic matrix of sample image I_i relative to the reference image I_0, I_0 is the reference image, and the subscript i is the sample image number.
8. The single-camera rotational stitching method based on deep learning as claimed in any one of claims 1 to 7, wherein the adjacent frame images have an overlap degree of more than 30%.
9. The deep learning based single-camera rotational stitching method according to claim 8, characterized in that the shooting of the single camera is a periodic continuous rotational shooting.
10. The single-camera rotational stitching method based on deep learning of claim 9, wherein a certain shooting angle of the single camera is designated as the reference angle, the interval from the reference angle until the single camera returns to the reference angle again is taken as one stitching period, and all images within one stitching period are stitched.
CN202010745129.0A 2020-07-29 2020-07-29 Single-camera rotation splicing method based on deep learning Pending CN111899174A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010745129.0A CN111899174A (en) 2020-07-29 2020-07-29 Single-camera rotation splicing method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010745129.0A CN111899174A (en) 2020-07-29 2020-07-29 Single-camera rotation splicing method based on deep learning

Publications (1)

Publication Number Publication Date
CN111899174A true CN111899174A (en) 2020-11-06

Family

ID=73182959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010745129.0A Pending CN111899174A (en) 2020-07-29 2020-07-29 Single-camera rotation splicing method based on deep learning

Country Status (1)

Country Link
CN (1) CN111899174A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017114508A1 (en) * 2015-12-30 2017-07-06 清华大学 Method and device for three-dimensional reconstruction-based interactive calibration in three-dimensional surveillance system
CN110717936A (en) * 2019-10-15 2020-01-21 哈尔滨工业大学 Image stitching method based on camera attitude estimation
CN110675453A (en) * 2019-10-16 2020-01-10 北京天睿空间科技股份有限公司 Self-positioning method for moving target in known scene
CN110782394A (en) * 2019-10-21 2020-02-11 中国人民解放军63861部队 Panoramic video rapid splicing method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
小白学视觉: "一文详解四元数、欧拉角、旋转矩阵、轴角如何相互转换" (A detailed explanation of converting between quaternions, Euler angles, rotation matrices and axis-angle), 《HTTPS://MP.WEIXIN.QQ.COM/S?__BIZ=MZU0NJGZMDIXMQ==&MID=2247487439&IDX=1&SN=DA8C277D40911B114038A415F5873AC9&CHKSM=FB56ED23CC216435C3BAC0429620492E6B4380A63F30F026C8377F37BD81577A320188E1A404&SCENE=27》, page 6 *
李佳; 盛业华; 张卡; 段平; 吴辉: "基于未标定普通相机的全景图像拼接方法" (Panoramic image stitching method based on an uncalibrated ordinary camera), 系统仿真学报 (Journal of System Simulation), no. 09, pages 2070-2072 *
赵小川: 《MATLAB图像处理-能力提高与应用案例》 (MATLAB Image Processing: Skill Improvement and Application Cases), pages 148-150 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634140A (en) * 2021-03-08 2021-04-09 广州松合智能科技有限公司 High-precision full-size visual image acquisition system and method
CN112634140B (en) * 2021-03-08 2021-08-13 广州松合智能科技有限公司 High-precision full-size visual image acquisition system and method

Similar Documents

Publication Publication Date Title
CN111311666B (en) Monocular vision odometer method integrating edge features and deep learning
CN107980150B (en) Modeling three-dimensional space
CN108711185B (en) Three-dimensional reconstruction method and device combining rigid motion and non-rigid deformation
CN109919911B (en) Mobile three-dimensional reconstruction method based on multi-view photometric stereo
Zhao et al. Deep direct visual odometry
US20090141043A1 (en) Image mosaicing apparatus for mitigating curling effect
CN107767339B (en) Binocular stereo image splicing method
Li et al. A study on automatic UAV image mosaic method for paroxysmal disaster
CN110675453B (en) Self-positioning method for moving target in known scene
CN110717936B (en) Image stitching method based on camera attitude estimation
Elhayek et al. Fully automatic multi-person human motion capture for vr applications
CN111798373A (en) Rapid unmanned aerial vehicle image stitching method based on local plane hypothesis and six-degree-of-freedom pose optimization
CN105894443A (en) Method for splicing videos in real time based on SURF (Speeded UP Robust Features) algorithm
Liu et al. High-speed video generation with an event camera
Molina et al. Persistent aerial video registration and fast multi-view mosaicing
Wan et al. Drone image stitching using local mesh-based bundle adjustment and shape-preserving transform
Guan et al. Minimal solutions for the rotational alignment of IMU-camera systems using homography constraints
CN111899174A (en) Single-camera rotation splicing method based on deep learning
CN110580715A (en) Image alignment method based on illumination constraint and grid deformation
Charco et al. Transfer Learning from Synthetic Data in the Camera Pose Estimation Problem.
Imre et al. Calibration of nodal and free-moving cameras in dynamic scenes for post-production
Halperin et al. Clear Skies Ahead: Towards Real‐Time Automatic Sky Replacement in Video
CN115456870A (en) Multi-image splicing method based on external parameter estimation
Jiazhen et al. Real-time mosaicking for infrared videos from an oblique sweeping camera
Qi et al. Image stitching based on improved SURF algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination