CN103093460A - Moving camera virtual array calibration method based on parallel parallax
- Publication number
- CN103093460A CN103093460A CN2013100037196A CN201310003719A CN103093460A CN 103093460 A CN103093460 A CN 103093460A CN 2013100037196 A CN2013100037196 A CN 2013100037196A CN 201310003719 A CN201310003719 A CN 201310003719A CN 103093460 A CN103093460 A CN 103093460A
- Authority
- CN
- China
- Prior art keywords
- camera
- plane
- parallax
- viewpoint
- matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a moving-camera virtual array calibration method based on parallel parallax, which solves the technical problem that existing fixed camera array calibration methods based on plane plus parallax require a large number of cameras. In the technical scheme, a single camera translating along a linear guide rail simulates a virtual camera array system, and feature points are tracked across the images captured at the individual viewpoints to estimate the multi-view camera poses. First, a reference plane parallel to the direction of the linear guide rail is defined from the estimated viewpoint poses, and the homography induced by this plane between each viewpoint and a chosen reference viewpoint is estimated. Next, the parallax between each viewpoint and the reference viewpoint on the reference plane is computed from this homography. The relative position of each viewpoint is then obtained by singular value decomposition (SVD) of the parallax matrix. When the method is used for see-through imaging of an occluded target in a static scene, a good de-occlusion effect is achieved.
Description
Technical field
The present invention relates to a moving-camera virtual array calibration method, and in particular to a moving-camera virtual array calibration method based on parallel parallax.
Background art
Among the many multi-viewpoint acquisition devices, dense camera arrays are widely used for light field acquisition, free-viewpoint imaging and synthetic aperture imaging. A traditional camera array is usually built from a number of densely arranged fixed cameras, and its calibration methods mainly comprise full calibration of the intrinsic and extrinsic camera parameters and calibration based on plane plus parallax.
The document "Using Plane + Parallax for Calibrating Dense Camera Arrays" (IEEE CVPR 2004, Vol. 1, pp. I-2 to I-9) discloses the structure of a dense camera array and gives a calibration method for fixed camera arrays based on the plane plus parallax (Plane + Parallax) approach. That camera array system is built from 128 coplanar, regularly arranged fixed cameras together with image acquisition nodes, and can be used for applications such as light field acquisition, image-based rendering and synthetic aperture imaging. However, the system still has the following disadvantages. First, because the camera positions are fixed, the viewing angles cannot be changed adaptively and new viewpoints cannot be added according to the characteristics of the captured scene. Second, because of the large number of cameras, system construction, image acquisition and camera parameter control all require considerable manpower and material resources. The document calibrates the camera array by exploiting the fact that the optical centers of the cameras are coplanar. Two virtual planes parallel to the plane of the cameras are defined: a reference plane and a focal plane. The calibration process first uses calibration markers jointly captured by all cameras to estimate the homography, induced by the reference plane, between each camera and a designated reference camera, and then estimates the projection (homology) from the reference plane to the focal plane. The document points out that when the camera plane, the reference plane and the focal plane are mutually parallel, this projection degenerates to a pure translation, which can be computed from the parallax of each camera on the reference plane.
Summary of the invention
To overcome the shortcoming that the existing fixed camera array calibration method based on plane plus parallax requires many cameras, the invention provides a moving-camera virtual array calibration method based on parallel parallax. The method simulates a virtual camera array system with a single camera translating along a linear guide rail, and tracks feature points in the images captured at the individual viewpoints to estimate the multi-view camera poses. Using the estimated viewpoint poses, a reference plane parallel to the direction of the linear guide rail is first defined, and the homography induced by this plane between each viewpoint and a chosen reference viewpoint is estimated. The parallax between each viewpoint and the reference viewpoint on the reference plane is then computed from this homography. The relative position of each viewpoint is obtained by singular value decomposition (SVD) of the parallax matrix. With the method of the invention, see-through imaging of an occluded target in a static scene can be performed, achieving a good de-occlusion effect.
The technical solution adopted by the present invention to solve its technical problem is a moving-camera virtual array calibration method based on parallel parallax, characterized by comprising the following steps:
Step 1: First, perform SIFT feature point detection and matching on the input image sequence, and use the RANSAC robust estimation method to remove outliers from the detected feature points. Then, using the corresponding image features on the two-dimensional images of the individual views, recover the pose and camera intrinsic parameters of each view and the three-dimensional coordinates of a set of sparse points in the scene. Finally, globally optimize the estimates of all frames within a bundle adjustment framework. After this step, the projection equation of a three-dimensional point in space onto the image of each view is established:

x_i = P_i X

where X is the three-dimensional coordinate of the point in the world coordinate system and P_i is the estimated camera pose and intrinsic matrix of the i-th view.
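The projection equation can be illustrated with a short NumPy sketch. This is only an illustrative example with placeholder intrinsic and pose values, not part of the claimed method.

```python
import numpy as np

# Minimal sketch of x_i = P_i X with placeholder values (assumed, not from the patent).
K = np.array([[800.0,   0.0, 320.0],   # assumed intrinsic matrix
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                           # assumed rotation of view i
t = np.array([[0.1], [0.0], [0.0]])     # assumed translation of view i along the rail
P_i = K @ np.hstack([R, t])             # 3x4 projection matrix P_i = K [R | t]

X = np.array([1.0, 2.0, 10.0, 1.0])     # homogeneous 3D point in world coordinates
x_h = P_i @ X                           # homogeneous image coordinates
x_i = x_h[:2] / x_h[2]                  # pixel coordinates of the point in view i
print(x_i)
```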
Step 2: During the motion of the camera along the linear guide rail, the plane containing the optical centers of all viewpoints is the camera plane, and the reference plane parallel to the camera plane is ∏_R. Suppose that during the camera motion, images are acquired at K different views, denoted V_1, ..., V_R, ..., V_K, where V_R is the reference view used for measuring parallax. Let H_i denote the homography from V_i to V_R induced by ∏_R. There are j control points X_R1, ..., X_Rj on the reference plane. Using the matrices P_i estimated in step 1, project X_R1, ..., X_Rj onto the image of each view, and estimate the homography induced by plane ∏_R with the gold standard algorithm.
Step 3: Suppose there are I different planes parallel to the reference plane ∏_R, denoted ∏_1, ..., ∏_I, each plane carrying J different control points, the j-th control point on the i-th plane being denoted X_ij. Using the pose and camera intrinsic parameters of each viewpoint, project X_ij onto the images of view V_k and of the reference view V_R, and let the projections be x_ijk and x_ijR respectively. The parallax p_ijk between views V_k and V_R induced by the three-dimensional point X_ij is expressed on the reference plane ∏_R as follows:

p_ijk = H_k x_ijk - x_ijR

where H_k is the induced homography estimated in step 2 for view V_k. Average the J parallax values on the same plane to obtain the mean parallax p_ik between views V_k and V_R on plane ∏_i.
Write the mean parallaxes of the K different views on the I planes in matrix form to obtain the parallax matrix M, and perform an SVD on M:

M = s d Δx^T, i.e. p_ik = s d_i Δx_k, with d = (d_1, ..., d_I)^T and Δx = (Δx_1, ..., Δx_K)^T,

where s is a scale factor, Δx_k is the relative position of view V_k with respect to the reference view V_R, and d_i is the relative distance of plane ∏_i with respect to the reference plane ∏_R:

d_i = Δz_i / Z

where Δz_i is the distance from ∏_i to ∏_R and Z is the distance from ∏_i to the camera plane.
The beneficial effects of the invention are as follows. Because a single camera translating along a linear guide rail simulates a virtual camera array system, the feature points in the images captured at the individual viewpoints can be tracked to estimate the multi-view camera poses. Using the estimated viewpoint poses, a reference plane parallel to the direction of the linear guide rail is first defined, and the homography induced by this plane between each viewpoint and a chosen reference viewpoint is estimated. The parallax between each viewpoint and the reference viewpoint on the reference plane is then computed from this homography, and the relative position of each viewpoint is obtained by SVD of the parallax matrix. Using the method of the invention for see-through imaging of an occluded target in a static scene gives a good de-occlusion effect.
The present invention is described in detail below in conjunction with an embodiment.
Embodiment
The concrete steps of the moving-camera virtual array calibration method based on parallel parallax according to the present invention are as follows:
1. Multi-view pose estimation.
Estimating camera poses and reconstructing sparse three-dimensional scene points from an image sequence has long been a research hotspot in computer vision, and many mature algorithms exist; only the estimation procedure is sketched here.
First, automatic feature point detection is performed on the input image sequence, and the feature points are then matched between frames. Because moving objects may be present in the scene, some feature matches are incorrect, so outlier analysis and removal must be applied to the matches. The remaining matches are inliers and can be used to estimate the camera poses. Finally, the estimates of all frames are globally optimized within a bundle adjustment framework, yielding a globally optimal solution.
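As a minimal sketch of the feature detection, matching and outlier removal described above, OpenCV can be used as follows. The file names are placeholders, and OpenCV's SIFT and RANSAC routines stand in for whichever implementations the embodiment actually used; this is not the inventors' implementation.

```python
import cv2
import numpy as np

# Feature matching between two views of the sequence (file names are placeholders).
img1 = cv2.imread("view_001.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_002.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# RANSAC on the epipolar geometry removes outliers; the surviving inliers are the
# correspondences fed to the pose estimation and bundle adjustment stage (not shown).
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
pts1_in = pts1[inlier_mask.ravel() == 1]
pts2_in = pts2[inlier_mask.ravel() == 1]
```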
2. Estimation of the relative positions of the cameras.
During the motion of the camera along the linear guide rail, the plane containing the optical centers of all viewpoints is the camera plane, and ∏_R is the reference plane parallel to the camera plane; the calibration process must first estimate the homographies induced by this reference plane. Suppose that during the camera motion, images are acquired at K different views, denoted V_1, ..., V_R, ..., V_K, where V_R is the reference view used for measuring parallax. Let H_i denote the homography from V_i to V_R induced by ∏_R. Suppose there are j control points on the reference plane, denoted X_R1, ..., X_Rj. Because the pose and camera intrinsic parameters of each viewpoint are known, the pixel coordinates of these j control points projected onto the images of views V_i and V_R can be computed by the following equation:

x_Ri = P X_Ri

where P is the camera pose and intrinsic matrix estimated in step 1. It is known from computer vision theory that when j ≥ 4, the homography matrix can be estimated from the corresponding pixel points in the two images.
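A sketch of this projection-and-homography step is given below, under the assumption that the projection matrices and the control points on the reference plane are already available; OpenCV's findHomography (a least-squares DLT fit) stands in for the gold standard algorithm cited above, and all numeric values are placeholders.

```python
import numpy as np
import cv2

def project(P, X_world):
    """Project Nx3 world points with a 3x4 matrix P; returns Nx2 pixel coordinates."""
    X_h = np.hstack([X_world, np.ones((len(X_world), 1))])
    x_h = (P @ X_h.T).T
    return x_h[:, :2] / x_h[:, 2:3]

# Assumed inputs: >= 4 control points on the reference plane and the estimated matrices.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P_R = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # reference view V_R
P_i = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])  # some view V_i on the rail

X_ref = np.column_stack([np.random.rand(20) * 2.0,
                         np.random.rand(20) * 2.0,
                         np.full(20, 5.0)])         # control points on Pi_R (placeholder plane Z = 5)

x_i = project(P_i, X_ref)   # projections x_Ri in view V_i
x_R = project(P_R, X_ref)   # projections in the reference view V_R

# Homography H_i induced by Pi_R, from V_i to V_R (method=0 -> plain least squares)
H_i, _ = cv2.findHomography(x_i.astype(np.float32), x_R.astype(np.float32), 0)
```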
This homography can be used to generate a synthetic aperture image focused on the reference plane. When a composite image focused on a plane other than the reference plane is to be generated, the images of the different views must first be translated by different parallax amounts before being combined. These parallax amounts can be computed from the relative positions between the viewpoints; the estimation of these relative positions is described below.
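The following sketch illustrates this synthetic aperture idea under the assumption that the per-view induced homographies and relative viewpoint positions are already known; the shift amounts follow the plane-plus-parallax relation described below, and the function and argument names are illustrative only.

```python
import numpy as np
import cv2

def synthetic_aperture(images, homographies, rel_positions, d_plane, s=1.0):
    """Warp each view onto the reference plane, shift it by its parallax for the
    desired focal plane, and average. rel_positions[k] is the 2D relative position
    dx_k of view k, d_plane the relative distance d of the chosen focal plane."""
    h, w = images[0].shape[:2]
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, H, dx in zip(images, homographies, rel_positions):
        warped = cv2.warpPerspective(img, H, (w, h))              # align on the reference plane
        shift = s * d_plane * np.asarray(dx, dtype=np.float64)    # parallax for the focal plane
        T = np.float32([[1, 0, shift[0]], [0, 1, shift[1]]])
        acc += cv2.warpAffine(warped, T, (w, h)).astype(np.float64)
    return (acc / len(images)).astype(np.uint8)
```

Objects lying on the selected focal plane stay aligned across the shifted views and remain sharp, while occluders off that plane are averaged away, which yields the de-occlusion effect mentioned above.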
Suppose there are I different planes parallel to the reference plane ∏_R, denoted ∏_1, ..., ∏_I, each plane carrying J different control points, the j-th control point on the i-th plane being denoted X_ij. Using the pose and camera intrinsic parameters of each viewpoint, X_ij can be projected onto the images of view V_k and of the reference view V_R; let these projections be x_ijk and x_ijR respectively. The parallax p_ijk between views V_k and V_R induced by the three-dimensional point X_ij can be expressed on the reference plane ∏_R as follows:

p_ijk = H_k x_ijk - x_ijR

where H_k is the induced homography estimated above for view V_k. Averaging the J parallax values on the same plane gives the mean parallax p_ik between views V_k and V_R on plane ∏_i.
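A sketch of the parallax computation and averaging is shown below, assuming the induced homography and the projected control points are available as NumPy arrays; the names are illustrative only.

```python
import numpy as np

def induced_parallax(H_k, x_k, x_R):
    """Parallax of one control point on the reference plane: warp its projection in
    view V_k by the reference-plane homography, dehomogenize, and subtract its
    projection in the reference view V_R."""
    p = H_k @ np.array([x_k[0], x_k[1], 1.0])
    return p[:2] / p[2] - np.asarray(x_R, dtype=np.float64)

def mean_parallax(H_k, pts_k, pts_R):
    """Mean parallax p_ik over the J control points of one plane Pi_i."""
    return np.mean([induced_parallax(H_k, xk, xR) for xk, xR in zip(pts_k, pts_R)],
                   axis=0)
```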
Writing the mean parallaxes of the K different views on the I planes in matrix form gives the parallax matrix M. Because the parallaxes and the relative camera positions are related through similar triangles, the rank of M is 1, and performing an SVD on M gives

M = s d Δx^T, i.e. p_ik = s d_i Δx_k, with d = (d_1, ..., d_I)^T and Δx = (Δx_1, ..., Δx_K)^T,

where s is a scale factor, Δx_k is the relative position of view V_k with respect to the reference view V_R, and d_i is the relative distance of plane ∏_i with respect to the reference plane ∏_R:

d_i = Δz_i / Z

where Δz_i is the distance from ∏_i to ∏_R and Z is the distance from ∏_i to the camera plane. Therefore, when the d_i can all be measured, the relative positions between the views can be obtained from the SVD of the parallax matrix M.
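A sketch of the rank-one factorization via SVD is shown below; for simplicity the mean parallaxes are treated as scalars (their component along the rail direction), which is an assumption of this illustration rather than a statement of the patent.

```python
import numpy as np

def relative_positions_from_parallax(M):
    """Rank-one factorization of the I x K mean-parallax matrix M via SVD.
    Returns factors proportional to the plane distances d_i and to the relative
    viewpoint positions dx_k; the overall scale is fixed once some d_i is measured."""
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    s = S[0]              # dominant singular value (M is ideally rank one)
    d = U[:, 0]           # proportional to d_i
    dx = Vt[0, :]         # proportional to dx_k
    return s, d, dx

# Synthetic check consistent with p_ik = s * d_i * dx_k
d_true = np.array([0.10, 0.25, 0.40])
dx_true = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
M = np.outer(d_true, dx_true)
s, d, dx = relative_positions_from_parallax(M)
```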
Claims (1)
1. A moving-camera virtual array calibration method based on parallel parallax, characterized by comprising the following steps:
Step 1: first, performing SIFT feature point detection and matching on the input image sequence, and using the RANSAC robust estimation method to remove outliers from the detected feature points; then, using the corresponding image features on the two-dimensional images of the individual views, recovering the pose and camera intrinsic parameters of each view and the three-dimensional coordinates of a set of sparse points in the scene; finally, globally optimizing the estimates of all frames within a bundle adjustment framework; after this step, establishing the projection equation of a three-dimensional point in space onto the image of each view:

x_i = P_i X

wherein X is the three-dimensional coordinate of the point in the world coordinate system and P_i is the estimated camera pose and intrinsic matrix of the i-th view;
Step 2: during the motion of the camera along the linear guide rail, the plane containing the optical centers of all viewpoints is the camera plane, and the reference plane parallel to the camera plane is ∏_R; supposing that during the camera motion, images are acquired at K different views, denoted V_1, ..., V_R, ..., V_K, wherein V_R is the reference view used for measuring parallax; letting H_i denote the homography from V_i to V_R induced by ∏_R; there being j control points X_R1, ..., X_Rj on the reference plane, projecting X_R1, ..., X_Rj onto the image of each view by using the matrices P_i estimated in step 1, and estimating the homography induced by plane ∏_R with the gold standard algorithm;
Step 3: supposing there are I different planes parallel to the reference plane ∏_R, denoted ∏_1, ..., ∏_I, each plane carrying J different control points, the j-th control point on the i-th plane being denoted X_ij; using the pose and camera intrinsic parameters of each viewpoint, projecting X_ij onto the images of view V_k and of the reference view V_R, the projections being x_ijk and x_ijR respectively; the parallax p_ijk between views V_k and V_R induced by the three-dimensional point X_ij being expressed on the reference plane ∏_R as follows:

p_ijk = H_k x_ijk - x_ijR

wherein H_k is the induced homography estimated in step 2 for view V_k; averaging the J parallax values on the same plane to obtain the mean parallax p_ik between views V_k and V_R on plane ∏_i;

writing the mean parallaxes of the K different views on the I planes in matrix form to obtain the parallax matrix M, and performing an SVD on M:

M = s d Δx^T, i.e. p_ik = s d_i Δx_k, with d = (d_1, ..., d_I)^T and Δx = (Δx_1, ..., Δx_K)^T,

wherein s is a scale factor, Δx_k is the relative position of view V_k with respect to the reference view V_R, and d_i is the relative distance of plane ∏_i with respect to the reference plane ∏_R:

d_i = Δz_i / Z

wherein Δz_i is the distance from ∏_i to ∏_R and Z is the distance from ∏_i to the camera plane.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2013100037196A CN103093460A (en) | 2013-01-06 | 2013-01-06 | Moving camera virtual array calibration method based on parallel parallax |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103093460A true CN103093460A (en) | 2013-05-08 |
Family
ID=48205991
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2013100037196A Pending CN103093460A (en) | 2013-01-06 | 2013-01-06 | Moving camera virtual array calibration method based on parallel parallax |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103093460A (en) |
Non-Patent Citations (5)
- Vaibhav Vaish et al., "Using Plane + Parallax for Calibrating Dense Camera Arrays", Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'04)
- Xiaoqiang Zhang et al., "Calibrate a Moving Camera on a Linear Translating Stage Using Virtual Plane + Parallax", ISCIDE'12: Proceedings of the Third Sino-Foreign-Interchange Conference on Intelligent Science and Intelligent Data Engineering
- Xiuwei Zhang et al., "A Convenient Multi-camera Self-calibration Method Based on Human Body Motion Analysis", Fifth International Conference on Image and Graphics (ICIG '09), 2009
- Zhao Pei et al., "A novel method for detecting occluded object by multiple camera arrays", 2012 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD 2012)
- Yang Tao et al., "Real-time Registration Algorithm for Aerial Video Based on Scene Complexity and Invariant Features" (in Chinese), Acta Electronica Sinica (电子学报)
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103413302B (en) * | 2013-07-29 | 2016-01-13 | 西北工业大学 | camera array dynamic focal plane estimation method |
CN103413302A (en) * | 2013-07-29 | 2013-11-27 | 西北工业大学 | Camera array dynamic focal plane estimation method |
CN103426170A (en) * | 2013-07-29 | 2013-12-04 | 西北工业大学 | Hidden target imaging method based on non-structural light field synthesis aperture imaging |
CN103413304A (en) * | 2013-07-30 | 2013-11-27 | 西北工业大学 | Virtual array synthetic aperture perspective imaging method based on color depth fusion |
CN104156972B (en) * | 2014-08-25 | 2017-01-25 | 西北工业大学 | Perspective imaging method based on laser scanning distance measuring instrument and multiple cameras |
CN104156972A (en) * | 2014-08-25 | 2014-11-19 | 西北工业大学 | Perspective imaging method based on laser scanning distance measuring instrument and multiple cameras |
WO2016101892A1 (en) * | 2014-12-23 | 2016-06-30 | Huawei Technologies Co., Ltd. | Computational multi-camera adjustment for smooth view switching and zooming |
CN107430782A (en) * | 2015-04-23 | 2017-12-01 | 奥斯坦多科技公司 | Method for being synthesized using the full parallax squeezed light field of depth information |
CN107430782B (en) * | 2015-04-23 | 2021-06-04 | 奥斯坦多科技公司 | Method for full parallax compressed light field synthesis using depth information |
CN107403423A (en) * | 2017-08-02 | 2017-11-28 | 清华大学深圳研究生院 | A kind of synthetic aperture of light-field camera removes occlusion method |
CN107403423B (en) * | 2017-08-02 | 2019-12-03 | 清华大学深圳研究生院 | A kind of synthetic aperture of light-field camera removes occlusion method |
CN111462317A (en) * | 2020-04-27 | 2020-07-28 | 清华大学 | Camera arrangement method for monitoring space grid structure based on multi-view three-dimensional reconstruction |
CN111462317B (en) * | 2020-04-27 | 2022-06-21 | 清华大学 | Camera arrangement method for monitoring space grid structure based on multi-view three-dimensional reconstruction |
CN111939563A (en) * | 2020-08-13 | 2020-11-17 | 北京像素软件科技股份有限公司 | Target locking method, device, electronic equipment and computer readable storage medium |
CN111939563B (en) * | 2020-08-13 | 2024-03-22 | 北京像素软件科技股份有限公司 | Target locking method, device, electronic equipment and computer readable storage medium |
Legal Events
Code | Title | Description |
---|---|---|
C06 | Publication | |
PB01 | Publication | |
C10 | Entry into substantive examination | |
SE01 | Entry into force of request for substantive examination | |
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20130508 |