CN103310445A - Parameter estimation method of virtual view point camera for drawing virtual view points

Info

Publication number
CN103310445A
Authority
CN
China
Prior art keywords
virtual view
virtual
video camera
rotation
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013102136641A
Other languages
Chinese (zh)
Inventor
赵岩
陈贺新
汪敬媛
王世刚
王学军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN2013102136641A priority Critical patent/CN103310445A/en
Publication of CN103310445A publication Critical patent/CN103310445A/en
Pending legal-status Critical Current

Abstract

The invention relates to a parameter estimation method of a virtual viewpoint camera for rendering virtual viewpoints. The method comprises the steps of extracting matched feature points, establishing a relational model among the viewpoint spatial position, the camera rotation angle and the match-point coordinates, and estimating the virtual-viewpoint camera parameters according to this model. On the basis of this parameter estimation method, the continuity of the rendered image can be enhanced, enabling the viewer to obtain a more realistic and smoother viewing experience.

Description

Virtual-viewpoint camera parameter estimation method for virtual viewpoint rendering
Technical field
The invention belongs to the field of machine vision and relates to methods for estimating virtual-viewpoint camera parameters, in particular to a virtual-viewpoint camera parameter estimation method for virtual viewpoint rendering.
Technical background
In recent years, with the development of digital multimedia technology, people have placed ever higher demands on video interactivity and sensory experience, and free viewpoint video technology has emerged in response. At present, the key to free viewpoint video is the generation of new-viewpoint content, i.e. virtual viewpoint rendering. According to how the scene is represented, virtual viewpoint rendering techniques can be broadly divided into two classes: model-based rendering (MBR) and image-based rendering (IBR). Model-based rendering can produce images of higher quality with a smaller amount of stored data, but the complexity of the modeling is closely tied to the complexity of the scene, and in practical applications it is difficult to model complex scenes in real time. Image-based rendering generates the virtual viewpoint image directly from reference images. Its rendering speed is independent of scene complexity, its hardware requirements are modest, it renders quickly, and the generated images are highly realistic. Depth-image-based rendering (DIBR) renders the target image at the new viewpoint from reference images that carry depth information; by incorporating the depth information of the scene into IBR, it greatly reduces the number of reference images required, saving image storage space and transmission bandwidth.
At present, many papers and patents worldwide address the DIBR technique. According to the direction of the mapping and the underlying principle, this research can be divided into two classes: forward image warping and inverse image warping. Whether forward or inverse mapping is used to render the virtual viewpoint image, however, the work proceeds on the assumption that the camera parameters of the virtual viewpoint are known. For example, in "View generation with 3D warping using depth information for FTV" (viewpoint rendering by 3D warping with depth information for free viewpoint video), Yuji Mori et al. experiment on the ballet and breakdancer test sequences provided by Microsoft Research. The sequences were captured by 8 cameras; camera No. 4 serves as the virtual viewpoint to be synthesized, and the parameters provided for camera No. 4 are taken as the virtual-viewpoint camera parameters. In practical applications, however, the virtual viewpoint does not physically exist and its camera parameters are unknown, and to date no literature has attempted to address this problem.
Summary of the invention
The object of the invention is to propose a camera parameter estimation method for virtual viewpoint rendering that improves the continuity of the rendered images, enabling the viewer to obtain a more realistic and coherent viewing experience.
The virtual-viewpoint camera parameter estimation method for virtual viewpoint rendering proposed by the present invention is characterized by comprising the following steps:
1) Take the provided multi-view images as reference images and select one of them as the processing object; extract its feature points with the Harris operator, choose one of the extracted feature points as the point to be matched, and then use the 3D image warping equation to obtain its match points on the other reference images;
2) Select sample viewpoints as needed; the selected sample viewpoints must satisfy the following conditions: (1) the camera intrinsic parameters are identical; (2) the viewpoints lie on the same horizontal line and differ only by a relative rotation in the horizontal direction. Then, from the selected sample viewpoints, the reference-view image and the match points on it, use the 3D image warping equation to obtain the sample images and the match points on them;
3) Establish a relational model among the viewpoint spatial position, the camera rotation angle and the match-point coordinates;
4) Estimate the virtual-viewpoint camera parameters.
Features and beneficial effects of the present invention:
The present invention is applicable to DIBR-based virtual viewpoint rendering. Because the camera parameters of the virtual viewpoint are reasonably estimated, virtual images of strong continuity can be rendered, improving the realism and comfort of watching stereoscopic video.
Description of drawings:
Fig. 1 is a schematic diagram of the sample-viewpoint camera arrangement of the method of the invention.
Fig. 2 shows the virtual viewpoint images rendered in the embodiment of the invention.
Embodiment:
The core of the present invention is to reasonably estimate the camera parameters of the virtual viewpoint from the spatial positions of the reference viewpoints and the virtual viewpoint together with the camera parameters of the reference viewpoints.
The virtual-viewpoint camera parameter estimation algorithm for virtual viewpoint rendering proposed by the present invention is described in detail below through an embodiment, with reference to the accompanying drawings. The algorithm comprises the following steps:
1) Take the provided multi-view images as reference images and select one of them as the processing object; extract its feature points with the Harris operator, choose one of the extracted feature points as the point to be matched, and then use the 3D image warping equation to obtain its match points on the other reference images (a sketch of this extraction and warping appears after step 2 below);
2) Select sample viewpoints as needed; the selected sample viewpoints must satisfy the following conditions: (1) the camera intrinsic parameters are identical; (2) the viewpoints lie on the same horizontal line and differ only by a relative rotation in the horizontal direction, as shown in Figure 1. Then, from the selected sample viewpoints, the reference-view image and the match points on it, use the 3D image warping equation to obtain the sample images and the match points on them;
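A minimal Python sketch of steps 1 and 2 follows, assuming OpenCV-style camera parameters: the names harris_points and warp_point are illustrative, each camera is described by an intrinsic matrix K, rotation R and translation t, and the depth map stores metric depth per pixel. The patent does not fix these details, so this is one plausible reading rather than the method's prescribed implementation.

```python
import cv2
import numpy as np

def harris_points(gray, k=0.04, thresh_ratio=0.01):
    """Extract Harris corner locations from a grayscale image."""
    resp = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=k)
    ys, xs = np.where(resp > thresh_ratio * resp.max())
    return np.stack([xs, ys], axis=1)          # (N, 2) pixel coordinates (u, v)

def warp_point(p, depth, K_ref, R_ref, t_ref, K_dst, R_dst, t_dst):
    """3D image warping of one pixel: back-project p = (u, v) with its depth
    from the reference camera into world space, then re-project it into the
    destination camera. Cameras follow the convention x_cam = R X_world + t."""
    u, v = p
    z = depth[int(v), int(u)]                  # metric depth at the pixel
    x_cam = z * (np.linalg.inv(K_ref) @ np.array([u, v, 1.0]))
    X = R_ref.T @ (x_cam - t_ref)              # 3D point in world coordinates
    x_dst = K_dst @ (R_dst @ X + t_dst)        # project into destination view
    return x_dst[:2] / x_dst[2]                # match-point coordinates (u', v')
```

With eight reference cameras as in the embodiment, warping the chosen corner of image No. 0 into each of the other seven views yields the match points of step 1; the same warp applied toward each sample viewpoint yields the match points of step 2.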
3) Establish the relational model among the viewpoint spatial position, the camera rotation angle and the match-point coordinates, which specifically comprises the following steps:
31) Take the spatial position of the virtual viewpoint and the camera rotation angle as the independent variables and the match-point abscissa on the virtual viewpoint image as the dependent variable; the spatial positions and camera rotation angles of the sample viewpoints serve as the interpolation nodes, and the match-point abscissas on the sample images are the function values at those nodes;
32) Use bivariate cubic spline interpolation to establish the relational model among the three, as in formula (1):
spot = f(angle, location)    (1)
Here spot is the abscissa of the match point on the virtual viewpoint image, angle is the rotation angle of the virtual-viewpoint camera, and location is the spatial position of the virtual viewpoint (because the sample viewpoints lie on the same horizontal line and differ only by a horizontal rotation, unless otherwise specified the spatial position in this patent denotes only the horizontal coordinate, and the rotation angle denotes only the horizontal rotation angle); f(angle, location) is the bivariate cubic spline function. A sketch of fitting this model follows.
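The sketch below fits formula (1) with SciPy's RectBivariateSpline over the 8 x 8 grid of sample viewpoints used in the embodiment. The node ranges and grid values are synthetic placeholders; in the actual method the function values are the match-point abscissas measured on the warped sample images.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Interpolation nodes: the sample viewpoints' rotation angles and horizontal
# positions (an 8 x 8 grid, as in the embodiment). A smooth synthetic surface
# stands in for the measured match-point abscissas.
angles = np.linspace(5.0, 9.0, 8)      # horizontal rotation angles of samples
locations = np.linspace(4.0, 8.0, 8)   # horizontal positions of samples
A, L = np.meshgrid(angles, locations, indexing="ij")
spots = 400.0 + 20.0 * A - 15.0 * L    # spots[i, j] at (angles[i], locations[j])

# Bivariate cubic spline f(angle, location) of formula (1).
f = RectBivariateSpline(angles, locations, spots, kx=3, ky=3)

spot_pred = f(7.16, 6.1)[0, 0]         # predicted match-point abscissa
```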
4) Estimate the virtual-viewpoint camera parameters, which specifically comprises the following steps:
41) Substitute the spatial position coordinate of the virtual viewpoint, chosen as needed, into the relational model among the viewpoint spatial position, the camera rotation angle and the match-point coordinates, obtaining a new equation relating the camera rotation angle to the match-point abscissa:
spot = τ(angle)    (2)
Here τ(angle) is the resulting univariate cubic spline function.
42) To guarantee the continuity of the image, the match-point abscissa spot_v on the virtual viewpoint image and the match-point abscissas spot_l and spot_r on the sample viewpoint images located on either side of the virtual viewpoint should satisfy formula (3):
spot_v = (1 − q) spot_l + q spot_r    (3)
q = (location_l − location_v) / (location_l − location_r)    (4)
Here q is the weighting parameter in formula (3), defined by formula (4); location_l, location_r and location_v are the spatial position coordinates of the left sample viewpoint, the right sample viewpoint and the virtual viewpoint, respectively;
43) Substitute the estimated spot_v into formula (2) to obtain the camera rotation angle of the virtual viewpoint, angle_v = τ⁻¹(spot_v). Steps 41-43 are sketched below.
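Continuing the spline sketch above, the following lines carry out steps 41-43: fixing the virtual position reduces f to the univariate function τ of formula (2), formulas (3)-(4) interpolate spot_v from the bracketing sample views, and a numeric root-find inverts τ. The bracketing positions are illustrative, and the abscissas are read from the fitted spline as stand-ins for values observed on the sample images.

```python
from scipy.optimize import brentq

# Step 41: fix the virtual viewpoint's horizontal position, reducing
# f(angle, location) to the univariate spline tau(angle) of formula (2).
location_v = 6.1
tau = lambda angle: f(angle, location_v)[0, 0]

# Step 42: estimate spot_v from the left/right sample views, formulas (3)-(4).
location_l, location_r = 6.0, 7.0   # illustrative bracketing sample positions
spot_l = f(7.0, location_l)[0, 0]   # stand-ins for the abscissas observed on
spot_r = f(7.0, location_r)[0, 0]   # the left and right sample images
q = (location_l - location_v) / (location_l - location_r)
spot_v = (1 - q) * spot_l + q * spot_r

# Step 43: invert tau numerically to recover the rotation angle angle_v,
# assuming tau is monotonic over the sampled angle range.
angle_v = brentq(lambda a: tau(a) - spot_v, angles[0], angles[-1])
```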
44) According to the Euler-angle representation of the camera rotation matrix, as in formula (5), obtain the rotation matrix of the virtual-viewpoint camera R = [r_11 r_12 r_13; r_21 r_22 r_23; r_31 r_32 r_33]:

r_11 = cosφ cosθ
r_12 = cosφ sinθ sinψ − sinφ cosψ
r_13 = cosφ sinθ cosψ + sinφ sinψ
r_21 = sinφ cosθ
r_22 = sinφ sinθ sinψ + cosφ cosψ
r_23 = sinφ sinθ cosψ − cosφ sinψ
r_31 = −sinθ
r_32 = cosθ sinψ
r_33 = cosθ cosψ    (5)

Here ψ, θ and φ are the angles by which the camera coordinate system is rotated about the x, y and z axes of the world coordinate system, respectively, and the positive direction of each rotation is defined as counterclockwise when observed from the origin toward the positive direction of the corresponding axis;
45) From the spatial position coordinate of the virtual viewpoint C = (C_x, C_y, C_z)^T and the camera rotation matrix, substitute the position coordinate obtained and the rotation matrix found above into the relation t = −RC to obtain the translation vector t. Steps 44-45 are sketched below.
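A sketch of steps 44-45, under the reading of formula (5) above as R = Rz(φ)·Ry(θ)·Rx(ψ); the function names are illustrative.

```python
import numpy as np

def rotation_from_euler(psi, theta, phi):
    """Rotation matrix of formula (5): R = Rz(phi) @ Ry(theta) @ Rx(psi), with
    psi, theta, phi the rotations about the world x, y and z axes (radians)."""
    cx, sx = np.cos(psi), np.sin(psi)
    cy, sy = np.cos(theta), np.sin(theta)
    cz, sz = np.cos(phi), np.sin(phi)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx   # r11 = cos(phi)cos(theta), r31 = -sin(theta), etc.

def translation_from_center(R, C):
    """Step 45: translation vector t = -R @ C for camera centre C."""
    return -R @ np.asarray(C, dtype=float)
```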
The present embodiment specifically comprises the following steps:
1) Use the Harris operator to extract the feature points on reference image No. 0 (the image captured by camera No. 0), and arbitrarily choose one of them as the point to be matched. Then use the 3D image warping equation to obtain its match points on the other 7 images (the present embodiment uses 8 reference images);
2) Select the sample viewpoints as needed; the selection must satisfy the following conditions: (1) the camera intrinsic parameters are identical; (2) the viewpoints lie on the same horizontal line and differ only by a relative rotation in the horizontal direction. The present embodiment selects 8 sample positions with 8 rotation angles at each position, 64 sample viewpoints in total. Then, from the selected sample viewpoints, the reference-view image and the match points on it, use the 3D image warping equation to obtain the sample images and the match points on them;
3) Establish the relational model among the viewpoint spatial position, the camera rotation angle and the match-point coordinates;
4) Estimate the virtual-viewpoint camera parameters and render the virtual viewpoint images according to the estimated parameters. Figs. 2(a)-(d) are the virtual viewpoint images rendered with the intrinsic matrix K = [1888.9 0.559274 522.471; 0 1892.575 382.6355; 0 0 1]. Fig. 2(a) is the virtual viewpoint image at position (7.100000, 0.001632, 0.187546) with rotation angles (8.081583, −0.631789, 0.404783); Fig. 2(b) is at position (6.100000, 0.001632, 0.187546) with rotation angles (7.164187, −0.631789, 0.404783); Fig. 2(c) is at position (5.100000, 0.001632, 0.187546) with rotation angles (6.240434, −0.631789, 0.404783); and Fig. 2(d) is at position (4.100000, 0.001632, 0.187546) with rotation angles (5.343205, −0.631789, 0.404783).
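As a usage note, the estimated pose of Fig. 2(a) can be assembled into a projection matrix for the DIBR warping, reusing rotation_from_euler and translation_from_center from the sketch above. The patent does not state the ordering or units of the angle triple; an (x, y, z) ordering in degrees is assumed here.

```python
import numpy as np

K = np.array([[1888.9,   0.559274, 522.471],
              [   0.0, 1892.575,  382.6355],
              [   0.0,      0.0,       1.0]])

# Pose of Fig. 2(a); the angle triple is assumed to be rotations about the
# world x, y and z axes, in degrees.
C = np.array([7.100000, 0.001632, 0.187546])
psi, theta, phi = np.deg2rad([8.081583, -0.631789, 0.404783])

R = rotation_from_euler(psi, theta, phi)
t = translation_from_center(R, C)
P = K @ np.hstack([R, t.reshape(3, 1)])   # 3x4 projection matrix for warping
```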

Claims (3)

1. A virtual-viewpoint camera parameter estimation method for virtual viewpoint rendering, characterized by comprising the following steps:
1) taking the provided multi-view images as reference images and selecting one of them as the processing object; extracting its feature points with the Harris operator, choosing one of the extracted feature points as the point to be matched, and then using the 3D image warping equation to obtain its match points on the other reference images;
2) selecting sample viewpoints as needed, wherein the selected sample viewpoints must satisfy the following conditions: (1) the camera intrinsic parameters are identical; (2) the viewpoints lie on the same horizontal line and differ only by a relative rotation in the horizontal direction; and, from the selected sample viewpoints, the reference-view image and the match points on it, using the 3D image warping equation to obtain the sample images and the match points on them;
3) establishing a relational model among the viewpoint spatial position, the camera rotation angle and the match-point coordinates;
4) estimating the virtual-viewpoint camera parameters.
2. The virtual-viewpoint camera parameter estimation method for virtual viewpoint rendering according to claim 1, characterized in that establishing the relational model among the viewpoint spatial position, the camera rotation angle and the match-point coordinates in step 3 specifically comprises the following steps:
31) taking the spatial position of the virtual viewpoint and the camera rotation angle as the independent variables and the match-point abscissa on the virtual viewpoint image as the dependent variable, with the spatial positions and camera rotation angles of the sample viewpoints as the interpolation nodes and the match-point abscissas of the sample images as the function values at those nodes;
32) using bivariate cubic spline interpolation to establish the relational model among the three.
3. The virtual-viewpoint camera parameter estimation method for virtual viewpoint rendering according to claim 1, characterized in that estimating the virtual-viewpoint camera parameters in step 4 specifically comprises the following steps:
41) substituting the spatial position coordinate of the virtual viewpoint, chosen as needed, into the relational model among the viewpoint spatial position, the camera rotation angle and the match-point coordinates, obtaining a new equation relating the camera rotation angle to the match-point abscissa: spot = τ(angle);
42) estimating the abscissa of the match point;
43) obtaining the camera rotation angle of the virtual viewpoint from the estimated match-point abscissa;
44) obtaining the rotation matrix of the virtual-viewpoint camera from the Euler-angle representation of the camera rotation matrix;
45) obtaining the translation vector from the spatial position coordinate of the virtual viewpoint and the rotation matrix of the camera.
CN2013102136641A 2013-06-01 2013-06-01 Parameter estimation method of virtual view point camera for drawing virtual view points Pending CN103310445A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013102136641A CN103310445A (en) 2013-06-01 2013-06-01 Parameter estimation method of virtual view point camera for drawing virtual view points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2013102136641A CN103310445A (en) 2013-06-01 2013-06-01 Parameter estimation method of virtual view point camera for drawing virtual view points

Publications (1)

Publication Number Publication Date
CN103310445A true CN103310445A (en) 2013-09-18

Family

ID=49135622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013102136641A Pending CN103310445A (en) 2013-06-01 2013-06-01 Parameter estimation method of virtual view point camera for drawing virtual view points

Country Status (1)

Country Link
CN (1) CN103310445A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463899A (en) * 2014-12-31 2015-03-25 北京格灵深瞳信息技术有限公司 Target object detecting and monitoring method and device
WO2017020489A1 (en) * 2015-08-03 2017-02-09 京东方科技集团股份有限公司 Virtual reality display method and system
WO2020024744A1 (en) * 2018-08-01 2020-02-06 Oppo广东移动通信有限公司 Image feature point detecting method, terminal device, and storage medium
CN111669564A (en) * 2019-03-07 2020-09-15 阿里巴巴集团控股有限公司 Image reconstruction method, system, device and computer readable storage medium
US11257283B2 (en) 2019-03-07 2022-02-22 Alibaba Group Holding Limited Image reconstruction method, system, device and computer-readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020061131A1 (en) * 2000-10-18 2002-05-23 Sawhney Harpreet Singh Method and apparatus for synthesizing new video and/or still imagery from a collection of real video and/or still imagery
CN102325259A (en) * 2011-09-09 2012-01-18 青岛海信数字多媒体技术国家重点实验室有限公司 Method and device for synthesizing virtual viewpoints in multi-viewpoint video

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020061131A1 (en) * 2000-10-18 2002-05-23 Sawhney Harpreet Singh Method and apparatus for synthesizing new video and/or still imagery from a collection of real video and/or still imagery
CN102325259A (en) * 2011-09-09 2012-01-18 青岛海信数字多媒体技术国家重点实验室有限公司 Method and device for synthesizing virtual viewpoints in multi-viewpoint video

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
RU-SHANG WANG, ET AL: "Multiview Video Sequence Analysis, Compression, and Virtual Viewpoint Synthesis", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 10, no. 3, 30 April 2000 (2000-04-30), XP011014055 *
高凯, 等: "面向虚拟视点合成的深度图编码" (Depth map coding for virtual view synthesis), 《吉林大学学报(信息科学版)》 (Journal of Jilin University (Information Science Edition)), vol. 31, no. 2, 31 March 2013 (2013-03-31) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463899A (en) * 2014-12-31 2015-03-25 北京格灵深瞳信息技术有限公司 Target object detecting and monitoring method and device
CN104463899B (en) * 2014-12-31 2017-09-22 北京格灵深瞳信息技术有限公司 A kind of destination object detection, monitoring method and its device
WO2017020489A1 (en) * 2015-08-03 2017-02-09 京东方科技集团股份有限公司 Virtual reality display method and system
US9881424B2 (en) 2015-08-03 2018-01-30 Boe Technology Group Co., Ltd. Virtual reality display method and system
WO2020024744A1 (en) * 2018-08-01 2020-02-06 Oppo广东移动通信有限公司 Image feature point detecting method, terminal device, and storage medium
CN111669564A (en) * 2019-03-07 2020-09-15 阿里巴巴集团控股有限公司 Image reconstruction method, system, device and computer readable storage medium
US11257283B2 (en) 2019-03-07 2022-02-22 Alibaba Group Holding Limited Image reconstruction method, system, device and computer-readable storage medium
US11341715B2 (en) 2019-03-07 2022-05-24 Alibaba Group Holding Limited Video reconstruction method, system, device, and computer readable storage medium
CN111669564B (en) * 2019-03-07 2022-07-26 阿里巴巴集团控股有限公司 Image reconstruction method, system, device and computer readable storage medium
US11521347B2 (en) 2019-03-07 2022-12-06 Alibaba Group Holding Limited Method, apparatus, medium, and device for generating multi-angle free-respective image data

Similar Documents

Publication Publication Date Title
CN102592275B (en) Virtual viewpoint rendering method
CN101902657B (en) Method for generating virtual multi-viewpoint images based on depth image layering
Lipski et al. Virtual video camera: Image-based viewpoint navigation through space and time
CN104780355A (en) Depth-based cavity repairing method in viewpoint synthesis
CN103310445A (en) Parameter estimation method of virtual view point camera for drawing virtual view points
CN102930593B (en) Based on the real-time drawing method of GPU in a kind of biocular systems
US10127714B1 (en) Spherical three-dimensional video rendering for virtual reality
EP2490452A1 (en) A method and system for rendering a stereoscopic view
Jantet et al. Joint projection filling method for occlusion handling in depth-image-based rendering
CN102316354A (en) Parallelly processable multi-view image synthesis method in imaging technology
CN103731657A (en) Hole filling processing method of hole-containing image processed by DIBR (Depth Image Based Rendering) algorithm
Lei et al. Deep Gradual-Conversion and Cycle Network for Single-View Synthesis
CN104717514A (en) Multi-viewpoint image rendering system and method
CN103945209A (en) DIBR method based on block projection
Wang et al. Virtual view synthesis without preprocessing depth image for depth image based rendering
Sun et al. Seamless view synthesis through texture optimization
Knorr et al. From 2D-to stereo-to multi-view video
Schnyder et al. Depth image based compositing for stereo 3D
Okura et al. Motion parallax representation for indirect augmented reality
Liu et al. Texture-adaptive hole-filling algorithm in raster-order for three-dimensional video applications
Cheng et al. A DIBR method based on inverse mapping and depth-aided image inpainting
Feng et al. Foreground-aware dense depth estimation for 360 images
CN117501313A (en) Hair rendering system based on deep neural network
Gao et al. A newly virtual view generation method based on depth image
Caviedes et al. Real time 2D to 3D conversion: Technical and visual quality requirements

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130918