CN102447927B - Method for warping three-dimensional image with camera calibration parameter - Google Patents

Method for warping three-dimensional image with camera calibration parameter

Info

Publication number
CN102447927B
CN102447927B CN201110278135A
Authority
CN
China
Prior art keywords
pixel
image
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201110278135
Other languages
Chinese (zh)
Other versions
CN102447927A (en)
Inventor
刘然
田逢春
刘阳
鲁国宁
黄扬帆
甘平
谢辉
邰国钦
谭迎春
郭瑞丽
罗雯怡
许小艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Sichuan Hongwei Technology Co Ltd
Original Assignee
Chongqing University
Sichuan Hongwei Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University, Sichuan Hongwei Technology Co Ltd filed Critical Chongqing University
Priority to CN 201110278135 priority Critical patent/CN102447927B/en
Publication of CN102447927A publication Critical patent/CN102447927A/en
Application granted granted Critical
Publication of CN102447927B publication Critical patent/CN102447927B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a method for warping a three-dimensional image with camera calibration parameters. In the method, a pan camera and shift-sensor camera setup is used to generate the destination image; on the premise that the pixel coordinates of the destination image are calculated with a simplified depth-image-based-rendering 3D image warping formula and a pixel depth-value formula, the pixel values of the reference-image points are copied to the corresponding points of the destination image, so that the amount of calculation is reduced and hardware implementation is facilitated.

Description

A three-dimensional image warping method with camera calibration parameters
Technical field
The invention belongs to the technical field of depth-image-based rendering (DIBR) in 3D television systems, and more specifically relates to a three-dimensional image warping method with camera calibration parameters.
Background technology
Depth-image-based rendering generates a new virtual-viewpoint image, i.e. the destination image, from a reference image and its corresponding depth image. Compared with the conventional stereo video format, which synthesizes a stereoscopic image from left-channel and right-channel planar video, the DIBR technique only needs to transmit one video stream and a depth-image sequence to synthesize the stereoscopic image; it also makes switching between two and three dimensions very convenient, and it avoids the computational complexity of the three-dimensional space transformation brought by classic view-generation methods. For these reasons the DIBR technique is widely applied to stereoscopic image synthesis in 3D television, and it attracts more and more interest. A 3D video that relies on the DIBR technique is usually called a depth-image-based 3D video.
The core step of the DIBR technique is 3D image warping. 3D image warping projects the points of the reference image into three-dimensional space, then re-projects the points in three-dimensional space onto the destination image plane, thereby generating the new-viewpoint view, i.e. the destination image.
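This generic two-step warping can be sketched with a standard pinhole-camera model; the matrices K_ref, K_des, R and the vector t below are assumed textbook quantities, not symbols defined in this patent, and the function is an illustrative sketch of the general technique rather than the simplified method claimed here:

```python
import numpy as np

def warp_point_general(u, v, z, K_ref, K_des, R, t):
    """General two-step 3D image warping (textbook pinhole formulation).

    Back-projects reference pixel (u, v) with depth z into 3D space,
    then re-projects it into the destination camera.
    K_ref, K_des: 3x3 intrinsic matrices; R, t: rotation and translation
    from the reference camera to the destination camera (assumed names).
    """
    # back-project the pixel into reference-camera space at depth z
    p_cam = z * (np.linalg.inv(K_ref) @ np.array([u, v, 1.0]))
    # transform into the destination camera and re-project to the image plane
    p_des = K_des @ (R @ p_cam + t)
    return p_des[0] / p_des[2], p_des[1] / p_des[2]
```

With a pure horizontal baseline (R the identity, t along the x axis) the re-projected point only shifts horizontally, which is exactly the situation the simplified formula of the invention exploits.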
However, in traditional 3D image warping methods, computing the destination-image pixels is complicated and computationally expensive; real-time rendering is currently impossible, and hardware implementation is difficult.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art and provide a 3D image warping method with camera calibration parameters whose computational cost is small.
To achieve the above object, the 3D image warping method with camera calibration parameters of the present invention is characterized in that it comprises the following steps:
(1) Initialize the disparity map M: set all of its elements to the hole-point disparity value;
(2) Determine whether the destination image is the left view or the right view. For the left view, set the Boolean variable α to 1 and scan the reference image from right to left and from top to bottom, traversing the pixels of the reference image I_ref row by row; for the right view, set α to 0 and scan from left to right and from top to bottom, traversing the pixels of I_ref row by row;
During the traversal: 2.1) For the pixel p_ref in row v_ref, column u_ref of the reference image I_ref, compute its corresponding matched pixel p_des in the destination image I_des according to formula (1):
u_des = (-1)^α · [2h - (f · s_x · B) / z_w] + u_ref,  v_des = v_ref    (1)
In formula (1), (u_ref, v_ref) and (u_des, v_des) denote respectively the horizontal and vertical coordinates, i.e. the x-axis and y-axis coordinates, of the pixel p_ref in the reference image I_ref and of its corresponding matched pixel p_des in the destination image I_des; h is the number of pixels of horizontal displacement applied when the zero-parallax-setting (ZPS) plane is set with the shift-sensor camera; f is the image focal length; s_x is the number of pixels per unit physical length along the x axis when converting from the image physical coordinate system to the image pixel coordinate system; B is the baseline length; and z_w is the depth value corresponding to the pixel p_ref;
The depth value z_w is determined according to formula (2):
z_w = 1 / ( D(u_ref, v_ref) / (g - 1) × (1/z_min - 1/z_max) + 1/z_max )    (2)
In formula (2), D(u_ref, v_ref) is the gray value of the pixel (u_ref, v_ref) in the depth image, g is the number of gray levels of the depth image, z_min is the nearest depth value and z_max is the farthest depth value. Usually the depth image uses 8-bit gray values, i.e. g = 256.
2.2) Judge whether the pixel p_des falls inside the destination image I_des: if the horizontal coordinate u_des of p_des satisfies 0 ≤ u_des < W_i, then p_des falls inside I_des; in that case copy the pixel value of p_ref to p_des and set the element (u_ref, v_ref) of the disparity map M to u_des - u_ref, where W_i is the number of horizontal pixels of the destination image;
(3) After all pixels of the reference image I_ref have been traversed, output the destination image I_des and the disparity map M.
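The whole procedure of steps (1)-(3) can be sketched in Python with NumPy. The rounding of u_des to an integer and the destination-indexed disparity map follow the embodiment of Fig. 3 (step 8 writes M(u_des, v_des)); the function name and parameter layout are illustrative assumptions, and the code is a reconstruction under those assumptions, not the patented implementation itself:

```python
import numpy as np

HOLE = -128  # hole-point disparity value used to initialize the disparity map M

def warp_3d(ref_img, depth_img, f, s_x, B, h, z_min, z_max, left_view):
    """Sketch of the simplified 3D image warping of formulas (1) and (2).

    ref_img:   H x W reference image I_ref
    depth_img: H x W depth image D with g = 256 gray levels
    left_view: True generates the left view (alpha = 1), False the right view
    """
    H, W = depth_img.shape
    g = 256
    alpha = 1 if left_view else 0
    dst = ref_img.copy()                      # pre-copy I_ref into I_des (step 2 of Fig. 3)
    M = np.full((H, W), HOLE, dtype=np.int8)  # disparity map, all hole points initially

    # Painter's algorithm: right-to-left for the left view, left-to-right for the right view
    cols = range(W - 1, -1, -1) if left_view else range(W)
    for v_ref in range(H):
        for u_ref in cols:
            # formula (2): depth value from the depth-image gray value
            D = float(depth_img[v_ref, u_ref])
            z_w = 1.0 / (D / (g - 1) * (1.0 / z_min - 1.0 / z_max) + 1.0 / z_max)
            # formula (1): horizontal shift only; the vertical coordinate is unchanged
            u_des = int(round((-1) ** alpha * (2 * h - f * s_x * B / z_w))) + u_ref
            if 0 <= u_des < W:
                dst[v_ref, u_des] = ref_img[v_ref, u_ref]
                M[v_ref, u_des] = np.int8(u_des - u_ref)
    return dst, M
```

Because v_des = v_ref, the inner loop only ever moves pixels horizontally within their own row, which is what makes the method cheap compared with the general two-step warping.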
The object of the invention is achieved as follows:
In the 3D image warping method with camera calibration parameters of the present invention, a truck camera and shift-sensor camera setup is used to generate the destination image. When the coordinates of the destination-image pixels are calculated with the simplified depth-image-based-rendering 3D image warping formula and the pixel depth-value formula, only a certain horizontal translation is needed; the pixel values of the reference-image points are then copied onto the corresponding points of the destination image. This reduces the amount of calculation and is favourable for hardware implementation and real-time processing.
Description of drawings
Fig. 1 is a schematic diagram, in the present invention, of generating the destination image as the right view with the reference image as the left view;
Fig. 2 is a schematic diagram, in the present invention, of generating the destination image as the left view with the reference image as the right view;
Fig. 3 is a flow chart of one specific implementation of the 3D image warping method with camera calibration parameters of the present invention.
Embodiment
The specific embodiments of the present invention are described below in conjunction with the accompanying drawings, so that those skilled in the art can better understand the present invention. It should be particularly pointed out that, in the following description, detailed descriptions of known functions and designs are omitted when they might dilute the main content of the present invention.
Fig. 1 is a schematic diagram, in the present invention, of generating the destination image as the right view with the reference image as the left view.
In this embodiment, as shown in Fig. 1, according to the painter's algorithm, when drawing the right view the reference image should be scanned from left to right and from top to bottom, traversing the pixels p_ref of the reference image I_ref row by row, each with coordinates (u_ref, v_ref); the horizontal and vertical coordinates of the corresponding matched pixel p_des in the destination image I_des are then found from the 3D image warping formula, the pixel value of p_ref is copied to p_des, and the element (u_ref, v_ref) of the disparity map M is set to u_des - u_ref.
Fig. 2 is a schematic diagram, in the present invention, of generating the destination image as the left view with the reference image as the right view.
In this embodiment, as shown in Fig. 2, according to the painter's algorithm, when drawing the left view the reference image should be scanned from right to left and from top to bottom, traversing the pixels p_ref of the reference image I_ref row by row, each with coordinates (u_ref, v_ref); the horizontal and vertical coordinates of the corresponding matched pixel p_des in the destination image I_des are then found from the 3D image warping formula, the pixel value of p_ref is copied to p_des, and the element (u_ref, v_ref) of the disparity map M is set to u_des - u_ref.
Fig. 3 is a flow chart of one specific implementation of the 3D image warping method with camera calibration parameters of the present invention.
In this embodiment, as shown in Fig. 3, the 3D image warping method with camera calibration parameters of the present invention is carried out with the camera calibration parameters known, and performs the function of generating one destination image from one reference image and its depth image.
As shown in Fig. 3, the concrete steps are as follows:
1) First input the reference image I_ref and the depth image D, both of resolution W_i × H_i; the focal length f; the number s_x of pixels per unit physical length along the x axis; the baseline length B; the horizontal displacement h (h ≥ 0), in pixels, applied to the reference image in order to set the ZPS plane; the nearest depth value z_min, i.e. the depth of the near clipping plane, and the farthest depth value z_max, i.e. the depth of the far clipping plane; and the scanning-order flag rend_order of the reference image, whose value is obtained from the painter's algorithm;
2) Initialize the parameters.
Initialize the disparity map M by setting all of its elements to -128; the value -128 marks a point as a hole, i.e. all elements of M are set to the hole-point disparity value. Also copy the pixels of the reference image I_ref directly into the destination image I_des, each pixel keeping in I_des the same position it occupies in I_ref, so that I_des starts out identical to I_ref; this makes it possible to fill some small holes better later;
3) Set the vertical coordinate v_ref to 0, i.e. v_ref = 0;
4) Determine the scanning order of the reference image I_ref from the scanning-order flag rend_order, i.e. whether the destination image is the left view or the right view: rend_order = 0 means scanning the reference image from left to right and from top to bottom, generating the right view; rend_order = 1 means scanning from right to left and from top to bottom, generating the left view; the concrete scanning and copying are shown in Fig. 1 and Fig. 2;
5) For generating the right view, set the horizontal coordinate u_ref to 0, i.e. u_ref = 0; for generating the left view, set u_ref to W_i - 1, i.e. u_ref = W_i - 1;
6) Calculate the coordinate u_des of the pixel p_des corresponding to the pixel p_ref according to formula (1);
7) Judge whether the coordinate u_des of the pixel p_des satisfies 0 ≤ u_des < W_i, i.e. whether p_des falls inside the destination image I_des; if it does, go to step 8); otherwise go directly to step 9);
8) Copy the pixel value of p_ref to p_des and set the corresponding element of the disparity map M to u_des - u_ref, i.e. M(u_des, v_des) = u_des - u_ref;
9) For generating the right view, increase the horizontal coordinate u_ref by 1, i.e. u_ref = u_ref + 1; for generating the left view, decrease it by 1, i.e. u_ref = u_ref - 1;
10) For generating the right view, judge whether u_ref is less than the number of horizontal pixels W_i of the reference image I_ref; if so, return to step 6), otherwise go to step 11). For generating the left view, judge whether u_ref is greater than or equal to 0; if so, return to step 6), otherwise go to step 11). Both tests check whether the traversal of one row is complete;
11) Increase the vertical coordinate v_ref by 1, i.e. v_ref = v_ref + 1;
12) Judge whether the vertical coordinate v_ref is less than the number of vertical pixels H_i of the reference image I_ref; if so, return to step 5); otherwise go to step 13);
13) Once all pixels of the reference image I_ref have been traversed, output the destination image I_des and the disparity map M. Each element of the disparity map M is an 8-bit signed integer that indicates whether the corresponding point of the destination image I_des is a hole point: the value -128 marks a hole point, while any other value marks a non-hole point and equals its disparity value.
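As a small illustration of the disparity-map convention of step 13), the toy array below is made-up example data, not data from the patent:

```python
import numpy as np

HOLE = np.int8(-128)  # step 13): -128 marks a hole point in the disparity map M

# A toy 2 x 4 disparity map: two valid disparities, the rest hole points
M = np.full((2, 4), HOLE, dtype=np.int8)
M[0, 1] = 3    # non-hole point with disparity +3
M[1, 2] = -5   # non-hole point with disparity -5

hole_mask = (M == HOLE)           # True where I_des still needs hole filling
num_holes = int(hole_mask.sum())  # how many destination pixels are holes
valid_disparities = M[~hole_mask] # the disparity values of the non-hole points
```

A subsequent hole-filling stage would operate only on the pixels selected by hole_mask.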
Although illustrative embodiments of the present invention have been described above so that those skilled in the art may understand the present invention, it should be clear that the invention is not restricted to the scope of those embodiments; to those skilled in the art, as long as the various changes remain within the spirit and scope of the present invention as defined and determined by the appended claims, these changes are apparent, and all innovations and creations that use the inventive concept fall within the scope of protection.

Claims (2)

1. A three-dimensional image warping method with camera calibration parameters, characterized in that it comprises the following steps:
(1) Initialize the disparity map M: set all of its elements to the hole-point disparity value;
(2) Determine whether the destination image is the left view or the right view. For the left view, set the Boolean variable α to 1 and scan the reference image from right to left and from top to bottom, traversing the pixels of the reference image I_ref row by row; for the right view, set α to 0 and scan from left to right and from top to bottom, traversing the pixels of I_ref row by row;
During the traversal: 2.1) For the pixel p_ref in row v_ref, column u_ref of the reference image I_ref, compute its corresponding matched pixel p_des in the destination image I_des according to formula (1):
u_des = (-1)^α · [2h - (f · s_x · B) / z_w] + u_ref,  v_des = v_ref    (1)
In formula (1), (u_ref, v_ref) and (u_des, v_des) denote respectively the horizontal and vertical coordinates, i.e. the x-axis and y-axis coordinates, of the pixel p_ref in the reference image I_ref and of its corresponding matched pixel p_des in the destination image I_des; h is the number of pixels of horizontal displacement applied when the zero-parallax plane is set with the shift-sensor camera; f is the image focal length; s_x is the number of pixels per unit physical length along the x axis when converting from the image physical coordinate system to the image pixel coordinate system; B is the baseline length; and z_w is the depth value corresponding to the pixel p_ref;
The depth value z_w is determined according to formula (2):
z_w = 1 / ( D(u_ref, v_ref) / (g - 1) × (1/z_min - 1/z_max) + 1/z_max )    (2)
In formula (2), D(u_ref, v_ref) is the gray value of the pixel (u_ref, v_ref) in the depth image, g is the number of gray levels of the depth image, z_min is the nearest depth value and z_max is the farthest depth value;
2.2) Judge whether the pixel p_des falls inside the destination image I_des: if the horizontal coordinate u_des of p_des satisfies 0 ≤ u_des < W_i, then p_des falls inside I_des; in that case copy the pixel value of p_ref to p_des and set the element (u_ref, v_ref) of the disparity map M to u_des - u_ref, where W_i is the number of horizontal pixels of the destination image;
(3) After all pixels of the reference image I_ref have been traversed, output the destination image I_des and the disparity map M.
2. The three-dimensional image warping method with camera calibration parameters according to claim 1, characterized in that in step (1) the pixels of the reference image I_ref are also copied directly into the destination image I_des, each pixel keeping in I_des the same position it occupies in I_ref.
CN 201110278135 2011-09-19 2011-09-19 Method for warping three-dimensional image with camera calibration parameter Expired - Fee Related CN102447927B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110278135 CN102447927B (en) 2011-09-19 2011-09-19 Method for warping three-dimensional image with camera calibration parameter

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110278135 CN102447927B (en) 2011-09-19 2011-09-19 Method for warping three-dimensional image with camera calibration parameter

Publications (2)

Publication Number Publication Date
CN102447927A CN102447927A (en) 2012-05-09
CN102447927B true CN102447927B (en) 2013-11-06

Family

ID=46009947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110278135 Expired - Fee Related CN102447927B (en) 2011-09-19 2011-09-19 Method for warping three-dimensional image with camera calibration parameter

Country Status (1)

Country Link
CN (1) CN102447927B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102724526B (en) * 2012-06-14 2014-09-10 清华大学 Three-dimensional video rendering method and device
JP5977591B2 (en) * 2012-06-20 2016-08-24 オリンパス株式会社 Image processing apparatus, imaging apparatus including the same, image processing method, and computer-readable recording medium recording an image processing program
CN104683788B (en) * 2015-03-16 2017-01-04 四川虹微技术有限公司 Gap filling method based on image re-projection
CN109714587A (en) * 2017-10-25 2019-05-03 杭州海康威视数字技术股份有限公司 A kind of multi-view image production method, device, electronic equipment and storage medium
CN108900825A (en) * 2018-08-16 2018-11-27 电子科技大学 A kind of conversion method of 2D image to 3D rendering

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101404777A (en) * 2008-11-06 2009-04-08 四川虹微技术有限公司 Drafting view synthesizing method based on depth image
CN101938669A (en) * 2010-09-13 2011-01-05 福州瑞芯微电子有限公司 Self-adaptive video converting system for converting 2D into 3D
US20110115886A1 (en) * 2009-11-18 2011-05-19 The Board Of Trustees Of The University Of Illinois System for executing 3d propagation for depth image-based rendering

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101404777A (en) * 2008-11-06 2009-04-08 四川虹微技术有限公司 Drafting view synthesizing method based on depth image
US20110115886A1 (en) * 2009-11-18 2011-05-19 The Board Of Trustees Of The University Of Illinois System for executing 3d propagation for depth image-based rendering
CN101938669A (en) * 2010-09-13 2011-01-05 福州瑞芯微电子有限公司 Self-adaptive video converting system for converting 2D into 3D

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Chen Sili et al., "A DIBR-based virtual viewpoint synthesis algorithm", Journal of Chengdu Electromechanical College, Mar. 2010, vol. 13, no. 1, pp. 15-18. *
Liu Zhanwei et al., "Arbitrary viewpoint rendering based on DIBR and image fusion", Journal of Image and Graphics, Oct. 2007, vol. 12, no. 10, pp. 1696-1700. *
Xu Ping, "Research on key techniques of 2D-to-3D video conversion based on depth-image-based rendering", Master's thesis, Nanjing University of Posts and Telecommunications, Mar. 2011, pp. 26-28. *

Also Published As

Publication number Publication date
CN102447927A (en) 2012-05-09

Similar Documents

Publication Publication Date Title
CN102447927B (en) Method for warping three-dimensional image with camera calibration parameter
CN103337095B (en) The tridimensional virtual display methods of the three-dimensional geographical entity of a kind of real space
CN102625127B (en) Optimization method suitable for virtual viewpoint generation of 3D television
CN100576934C (en) Virtual visual point synthesizing method based on the degree of depth and block information
CN102075779B (en) Intermediate view synthesizing method based on block matching disparity estimation
CN102034265B (en) Three-dimensional view acquisition method
CN103345771A (en) Efficient image rendering method based on modeling
CN102325259A (en) Method and device for synthesizing virtual viewpoints in multi-viewpoint video
CN104754359B (en) A kind of depth map encoding distortion prediction method of Two Dimensional Free viewpoint video
CN102307312A (en) Method for performing hole filling on destination image generated by depth-image-based rendering (DIBR) technology
CN102592275A (en) Virtual viewpoint rendering method
CN102930593B (en) Based on the real-time drawing method of GPU in a kind of biocular systems
CN106875437A (en) A kind of extraction method of key frame towards RGBD three-dimensional reconstructions
CN102547338A (en) DIBR (Depth Image Based Rendering) system suitable for 3D (Three-Dimensional) television
CN104506872B (en) A kind of method and device of converting plane video into stereoscopic video
CN103024421A (en) Method for synthesizing virtual viewpoints in free viewpoint television
CN104270624B (en) A kind of subregional 3D video mapping method
CN102831602B (en) Image rendering method and image rendering device based on depth image forward mapping
CN105894551A (en) Image drawing method and device
CN101383051B (en) View synthesizing method based on image re-projection
CN105989568A (en) OpenGL-based local refresh method and system
CN109345444A (en) The super-resolution stereo-picture construction method of depth perception enhancing
CN103945206B (en) A kind of stereo-picture synthesis system compared based on similar frame
CN101695139B (en) Gradable block-based virtual viewpoint image drawing method
CN101695140A (en) Object-based virtual image drawing method of three-dimensional/free viewpoint television

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20131106

Termination date: 20160919