CN102447927A - Method for warping three-dimensional image with camera calibration parameter - Google Patents
- Publication number: CN102447927A (granted as CN102447927B)
- Authority: CN (China)
- Legal status: Granted
Landscapes
- Image Processing (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
The invention discloses a method for warping a three-dimensional image using camera calibration parameters. In the method, the destination image is generated under a horizontally translated (truck) camera with a shift-sensor camera configuration. On the premise that the pixel coordinates of the destination image are computed with a simplified three-dimensional image warping formula for depth-image-based rendering together with a pixel depth-value formula, the pixel values of the reference image points are copied to the corresponding points of the destination image, so that the amount of calculation is reduced and hardware implementation is facilitated.
Description
Technical field
The invention belongs to the technical field of depth-image-based rendering (DIBR) in 3D television systems, and more specifically relates to a method for warping a three-dimensional image with camera calibration parameters.
Background technology
Depth-image-based rendering generates a new virtual-viewpoint image, i.e. a destination image, from a reference image and its corresponding depth image. Compared with the conventional three-dimensional video format, in which a stereoscopic image is synthesized from left and right planar video streams, the DIBR technique only needs to transmit one video stream and its depth-image sequence in order to synthesize the stereoscopic image. It also allows very convenient switching between two and three dimensions and avoids the computational complexity of the three-dimensional transformations required by classic view-generation methods. For these reasons, DIBR has been widely used for synthesizing stereoscopic images in 3D television and has attracted more and more interest. The 3D video format that requires the DIBR technique is usually called depth-image-based 3D video.
The core procedure of the DIBR technique is three-dimensional image warping (3D image warping). The warping projects the points of the reference image into three-dimensional space and then re-projects the points in three-dimensional space onto the destination image plane, thereby generating the new-viewpoint view, i.e. the destination image.
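The patent text does not reproduce the traditional warping equations. For orientation, the warping between two calibrated cameras is commonly written in the following general form (supplied here as an assumption, not quoted from the patent):

$$z_{des}\,\mathbf{p}_{des} = \mathbf{K}_{des}\,\mathbf{R}\,\mathbf{K}_{ref}^{-1}\,z_{ref}\,\mathbf{p}_{ref} + \mathbf{K}_{des}\,\mathbf{t},$$

where p_ref and p_des are homogeneous pixel coordinates, K_ref and K_des the intrinsic matrices, R and t the rotation and translation between the two viewpoints, and z_ref, z_des the corresponding depths. The method described below simplifies this general relation to a purely horizontal pixel shift.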
However, computing the destination-image pixels with the traditional three-dimensional image warping method is complicated and computationally expensive; real-time rendering cannot yet be achieved, and the method is unfavorable for hardware implementation.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art and provide a method for warping a three-dimensional image with camera calibration parameters that requires a small amount of calculation.
To achieve the above object, the method for warping a three-dimensional image with camera calibration parameters according to the invention is characterized by comprising the following steps:
(1) Initialize the disparity map M, setting all its elements to the hole-point disparity value;
(2) Determine whether the destination image is the left view or the right view. If it is the left view, set the Boolean variable α to 1 and scan the reference image from right to left and from top to bottom, traversing the pixels of the reference image I_ref row by row; if it is the right view, set the Boolean variable α to 0 and scan the reference image from left to right and from top to bottom, traversing the pixels of the reference image I_ref row by row.
During the traversal:
2.1) For the pixel u_ref in row v_ref, column u_ref of the reference image I_ref, compute its corresponding matched pixel u_des in the destination image I_des according to formula (1):
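The image of formula (1) itself is not reproduced in this text. Based on the shift-sensor (parallel) camera geometry and the variables defined below, a plausible reconstruction of formula (1) — stated here as an assumption, not necessarily the patent's exact expression — is:

$$u_{des} = u_{ref} + (2\alpha - 1)\left(\frac{f\, s_x\, B}{z_w} - h\right), \qquad v_{des} = v_{ref},$$

where α = 1 corresponds to generating the left view and α = 0 to generating the right view, so that the warping reduces to a purely horizontal shift of the reference pixel.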
In formula (1), (u_ref, v_ref) and (u_des, v_des) denote the horizontal and vertical coordinates (i.e. the x-axis and y-axis coordinates) of the pixel u_ref in the reference image I_ref and of its corresponding matched pixel u_des in the destination image I_des, respectively; h denotes the number of pixels of horizontal shift applied by the shift-sensor camera to set the zero-parallax-setting (ZPS) plane; f denotes the image focal length; s_x denotes the number of pixels per unit physical length in the x-axis direction used when converting from the image physical coordinate system to the image pixel coordinate system; B denotes the baseline length; and z_w denotes the depth value corresponding to the pixel u_ref.
The depth value z_w is determined according to formula (2):
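The image of formula (2) is likewise not reproduced here. A common DIBR depth-quantization relation consistent with the quantities defined below — again an assumed reconstruction rather than the patent's exact formula — is:

$$z_w = \left[\frac{D(u_{ref}, v_{ref})}{g-1}\left(\frac{1}{z_{min}} - \frac{1}{z_{max}}\right) + \frac{1}{z_{max}}\right]^{-1}.$$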
In formula (2), D(u_ref, v_ref) denotes the gray value of pixel (u_ref, v_ref) in the depth image, g is the number of gray levels of the depth image, z_min is the nearest depth value, and z_max is the farthest depth value. Usually the depth image uses 8-bit gray values, i.e. g = 256.
2.2) Judge whether the pixel u_des falls within the destination image I_des. If the horizontal coordinate u_des of pixel u_des satisfies
0 ≤ u_des < W_i,
then the pixel u_des falls within the destination image I_des; copy the pixel value of u_ref to u_des, and set the element (u_ref, v_ref) of the disparity map M to u_des - u_ref, where W_i is the number of horizontal pixels of the destination image;
(3) After all pixels of the reference image I_ref have been traversed, output the destination image I_des and the disparity map M.
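As an illustration of steps (1)-(3), the following Python sketch implements the traversal, the simplified warping and the pixel copy with NumPy. It relies on the assumed forms of formulas (1) and (2) given above, and all function and variable names (warp_3d, HOLE, left_view, ...) are illustrative, not taken from the patent.

```python
import numpy as np

HOLE = -128  # hole-point disparity value stored in the disparity map M

def warp_3d(ref, depth, f, s_x, B, h, z_min, z_max, left_view):
    """Warp a reference image into a destination view (sketch of steps (1)-(3))."""
    H_i, W_i = depth.shape
    des = ref.copy()  # pre-copy the reference image so small holes are already filled
    M = np.full((H_i, W_i), HOLE, dtype=np.int8)  # step (1): every element is a hole point

    g = 256                                # 8-bit depth image
    sign = 1 if left_view else -1          # assumed sign convention of formula (1)
    # painter's-algorithm scan order: right-to-left for a left view, left-to-right for a right view
    cols = range(W_i - 1, -1, -1) if left_view else range(W_i)

    for v_ref in range(H_i):
        for u_ref in cols:
            # formula (2) (assumed form): gray value -> depth z_w
            d = depth[v_ref, u_ref] / (g - 1)
            z_w = 1.0 / (d * (1.0 / z_min - 1.0 / z_max) + 1.0 / z_max)
            # formula (1) (assumed form): purely horizontal shift
            u_des = int(round(u_ref + sign * (f * s_x * B / z_w - h)))
            # step 2.2: bounds check, pixel copy, disparity record
            if 0 <= u_des < W_i:
                des[v_ref, u_des] = ref[v_ref, u_ref]
                # disparity stored at the destination column (v_des = v_ref), cf. step 8) below
                M[v_ref, u_des] = u_des - u_ref
    return des, M
```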
The object of the invention is achieved as follows:
In the method for warping a three-dimensional image with camera calibration parameters according to the invention, the destination image is generated under a horizontally translated (truck) camera with a shift-sensor camera configuration. When the coordinates of the destination-image pixels are computed with the simplified depth-image-based-rendering warping formula and the pixel depth-value formula, only a horizontal translation is required; the pixel values of the reference image points are then copied to the corresponding points of the destination image. The amount of calculation is thereby reduced, which facilitates hardware implementation and real-time processing.
Description of drawings
Fig. 1 is a schematic diagram of generating the destination image as the right view when the reference image is the left view according to the invention;
Fig. 2 is a schematic diagram of generating the destination image as the left view when the reference image is the right view according to the invention;
Fig. 3 is a flow chart of one specific implementation of the method for warping a three-dimensional image with camera calibration parameters according to the invention.
Embodiment
Specific embodiments of the invention are described below in conjunction with the accompanying drawings so that those skilled in the art can better understand the invention. It should be particularly noted that, in the following description, detailed descriptions of known functions and designs are omitted where they would obscure the main content of the invention.
Fig. 1 is a schematic diagram of generating the destination image as the right view when the reference image is the left view according to the invention.
In this embodiment, as shown in Fig. 1, according to the painter's algorithm, when drawing the right view the reference image should be scanned from left to right and from top to bottom, traversing the pixels u_ref of the reference image I_ref, whose coordinates are (u_ref, v_ref), row by row. The horizontal and vertical coordinates of the corresponding matched pixel u_des in the destination image I_des are then found with the three-dimensional image warping formula, the pixel value of u_ref is copied to u_des, and the element (u_ref, v_ref) of the disparity map M is set to u_des - u_ref.
Fig. 2 is a schematic diagram of generating the destination image as the left view when the reference image is the right view according to the invention.
In this embodiment, as shown in Fig. 2, according to the painter's algorithm, when drawing the left view the reference image should be scanned from right to left and from top to bottom, traversing the pixels u_ref of the reference image I_ref, whose coordinates are (u_ref, v_ref), row by row. The horizontal and vertical coordinates of the corresponding matched pixel u_des in the destination image I_des are then found with the three-dimensional image warping formula, the pixel value of u_ref is copied to u_des, and the element (u_ref, v_ref) of the disparity map M is set to u_des - u_ref.
Fig. 3 is a flow chart of one specific implementation of the method for warping a three-dimensional image with camera calibration parameters according to the invention.
In this embodiment, as shown in Fig. 3, the method according to the invention is implemented, with the camera calibration parameters known, as a function that generates one destination image from one reference image and its depth image.
As shown in Fig. 3, the concrete steps are:
1) First input the reference image I_ref and the depth image D, both with resolution W_i × H_i; the focal length f; the number s_x of pixels per unit physical length in the x-axis direction; the baseline length B; the number h (h ≥ 0) of pixels of horizontal shift applied to the reference image in order to set the ZPS plane; the nearest depth value z_min, i.e. the depth of the near clipping plane; the farthest depth value z_max, i.e. the depth of the far clipping plane; and the scanning-order flag rend_order of the reference image, whose value is determined by the painter's algorithm.
2) Initialize the parameters.
Initialize the disparity map M by setting all its elements to -128; the value -128 denotes a hole, i.e. all elements of M are set to the hole-point disparity value. Copy the pixels of the reference image I_ref directly to the destination image I_des, each pixel keeping the same position in I_des as in I_ref, so that I_des initially equals I_ref; this allows small holes to be filled better (see the initialization sketch after step 13).
3) Set the vertical coordinate v_ref to 0, i.e. v_ref = 0.
4) Determine from the scanning-order flag rend_order the scanning order of the reference image I_ref, i.e. whether the destination image is the left view or the right view. rend_order = 0 means the reference image is scanned from left to right and from top to bottom, generating the right view; rend_order = 1 means the reference image is scanned from right to left and from top to bottom, generating the left view. The concrete scanning and copying are as shown in Fig. 1 and Fig. 2.
5) When generating the right view, set the horizontal coordinate u_ref to 0, i.e. u_ref = 0; when generating the left view, set the horizontal coordinate u_ref to W_i - 1, i.e. u_ref = W_i - 1.
6) Compute the coordinate u_des of the pixel u_des corresponding to the pixel u_ref according to formula (1).
7) Judge whether the coordinate u_des of pixel u_des satisfies 0 ≤ u_des < W_i, i.e. whether the pixel u_des falls within the destination image I_des. If it does, go to step 8); otherwise go directly to step 9).
8) Copy the pixel value of u_ref to u_des, and set the element (u_ref, v_ref) of the disparity map M to u_des - u_ref, i.e. M(u_des, v_des) = u_des - u_ref.
9) When generating the right view, increase the horizontal coordinate u_ref by 1, i.e. u_ref = u_ref + 1; when generating the left view, decrease u_ref by 1, i.e. u_ref = u_ref - 1.
10) When generating the right view, judge whether u_ref is less than the number W_i of horizontal pixels of the reference image I_ref; if so, return to step 6), otherwise go to step 11). When generating the left view, judge whether u_ref is greater than or equal to 0; if so, return to step 6), otherwise go to step 11). In both cases this checks whether the traversal of the current row is complete.
11) Increase the vertical coordinate v_ref by 1, i.e. v_ref = v_ref + 1.
12) Judge whether the vertical coordinate v_ref is less than the number H_i of vertical pixels of the reference image I_ref; if so, return to step 5), otherwise go to step 13).
13) All pixels of the reference image I_ref have now been traversed; output the destination image I_des and the disparity map M. Each element of the disparity map M is an 8-bit signed integer indicating whether the corresponding point in the destination image I_des is a hole point: the value -128 denotes a hole point, while any other value denotes a non-hole point and equals its disparity value.
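A minimal usage sketch for the warp_3d function given earlier, showing the 8-bit signed disparity map initialized to -128 (hole points), the pre-copied destination image, and the hole count after warping; the placeholder images and parameter values are illustrative assumptions only.

```python
import numpy as np

# illustrative calibration parameters (assumed values, not taken from the patent)
f, s_x, B, h = 0.05, 20000.0, 0.06, 12
z_min, z_max = 1.0, 10.0

ref = np.zeros((720, 1280, 3), dtype=np.uint8)  # placeholder reference image, W_i x H_i = 1280 x 720
depth = np.zeros((720, 1280), dtype=np.uint8)   # placeholder 8-bit depth image D

# rend_order = 1: scan right to left and generate the left view
des, M = warp_3d(ref, depth, f, s_x, B, h, z_min, z_max, left_view=True)

holes = np.count_nonzero(M == -128)  # destination points still marked as hole points
print("hole points:", holes, "of", M.size)
```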
Although illustrative embodiments of the invention have been described above so that those skilled in the art can understand the invention, it should be clear that the invention is not limited to the scope of these embodiments. To those skilled in the art, various changes are obvious as long as they remain within the spirit and scope of the invention as defined and determined by the appended claims, and all inventions and creations that make use of the inventive concept fall within the scope of protection.
Claims (2)
1. A method for warping a three-dimensional image with camera calibration parameters, characterized by comprising the following steps:
(1) Initialize the disparity map M, setting all its elements to the hole-point disparity value;
(2) Determine whether the destination image is the left view or the right view. If it is the left view, set the Boolean variable α to 1 and scan the reference image from right to left and from top to bottom, traversing the pixels of the reference image I_ref row by row; if it is the right view, set the Boolean variable α to 0 and scan the reference image from left to right and from top to bottom, traversing the pixels of the reference image I_ref row by row.
During the traversal:
2.1) For the pixel u_ref in row v_ref, column u_ref of the reference image I_ref, compute its corresponding matched pixel u_des in the destination image I_des according to formula (1):
In formula (1), (u_ref, v_ref) and (u_des, v_des) denote the horizontal and vertical coordinates (i.e. the x-axis and y-axis coordinates) of the pixel u_ref in the reference image I_ref and of its corresponding matched pixel u_des in the destination image I_des, respectively; h denotes the number of pixels of horizontal shift applied by the shift-sensor camera to set the zero-parallax-setting (ZPS) plane; f denotes the image focal length; s_x denotes the number of pixels per unit physical length in the x-axis direction used when converting from the image physical coordinate system to the image pixel coordinate system; B denotes the baseline length; and z_w denotes the depth value corresponding to the pixel u_ref.
The depth value z_w is determined according to formula (2):
In formula (2), D(u_ref, v_ref) denotes the gray value of pixel (u_ref, v_ref) in the depth image, g is the number of gray levels of the depth image, z_min is the nearest depth value, and z_max is the farthest depth value.
2.2) Judge whether the pixel u_des falls within the destination image I_des. If the horizontal coordinate u_des of pixel u_des satisfies
0 ≤ u_des < W_i,
then the pixel u_des falls within the destination image I_des; copy the pixel value of u_ref to u_des, and set the element (u_ref, v_ref) of the disparity map M to u_des - u_ref, where W_i is the number of horizontal pixels of the destination image;
(3) After all pixels of the reference image I_ref have been traversed, output the destination image I_des and the disparity map M.
2. The method for warping a three-dimensional image with camera calibration parameters according to claim 1, characterized in that in step (1), the pixels of the reference image I_ref are also copied directly to the destination image I_des, each pixel keeping the same position in I_des as in I_ref.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN 201110278135 CN102447927B (en) | 2011-09-19 | 2011-09-19 | Method for warping three-dimensional image with camera calibration parameter
Publications (2)
Publication Number | Publication Date |
---|---|
CN102447927A true CN102447927A (en) | 2012-05-09 |
CN102447927B CN102447927B (en) | 2013-11-06 |
Family
ID=46009947
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201110278135 Expired - Fee Related CN102447927B (en) | 2011-09-19 | 2011-09-19 | Method for warping three-dimensional image with camera calibration parameter |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102447927B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101404777A (en) * | 2008-11-06 | 2009-04-08 | 四川虹微技术有限公司 | Drafting view synthesizing method based on depth image |
US20110115886A1 (en) * | 2009-11-18 | 2011-05-19 | The Board Of Trustees Of The University Of Illinois | System for executing 3d propagation for depth image-based rendering |
CN101938669A (en) * | 2010-09-13 | 2011-01-05 | 福州瑞芯微电子有限公司 | Self-adaptive video converting system for converting 2D into 3D |
Non-Patent Citations (3)
Title |
---|
刘占伟 et al.: "Arbitrary-viewpoint rendering based on DIBR and image fusion", Journal of Image and Graphics (中国图象图形学报) * |
徐萍: "Research on key technologies of 2D-to-3D video conversion based on depth image rendering", Master's thesis, Nanjing University of Posts and Telecommunications (南京邮电大学) * |
陈思利 et al.: "A DIBR-based virtual view synthesis algorithm", Journal of Chengdu Electromechanical College (成都电子机械高等专科学校学报) * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102724526A (en) * | 2012-06-14 | 2012-10-10 | 清华大学 | Three-dimensional video rendering method and device |
CN104396231A (en) * | 2012-06-20 | 2015-03-04 | 奥林巴斯株式会社 | Image processing device and image processing method |
CN104396231B (en) * | 2012-06-20 | 2018-05-01 | 奥林巴斯株式会社 | Image processing apparatus and image processing method |
CN104683788A (en) * | 2015-03-16 | 2015-06-03 | 四川虹微技术有限公司 | Cavity filling method based on image reprojection |
CN104683788B (en) * | 2015-03-16 | 2017-01-04 | 四川虹微技术有限公司 | Gap filling method based on image re-projection |
CN109714587A (en) * | 2017-10-25 | 2019-05-03 | 杭州海康威视数字技术股份有限公司 | A kind of multi-view image production method, device, electronic equipment and storage medium |
CN108900825A (en) * | 2018-08-16 | 2018-11-27 | 电子科技大学 | A kind of conversion method of 2D image to 3D rendering |
Also Published As
Publication number | Publication date |
---|---|
CN102447927B (en) | 2013-11-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103345771B (en) | A kind of Efficient image rendering intent based on modeling | |
CN100576934C (en) | Virtual visual point synthesizing method based on the degree of depth and block information | |
CN102625127B (en) | Optimization method suitable for virtual viewpoint generation of 3D television | |
CN102034265B (en) | Three-dimensional view acquisition method | |
CN101930620B (en) | Image processing method and associated apparatus for rendering three-dimensional effect using two-dimensional image | |
CN102968809B (en) | The method of virtual information mark and drafting marking line is realized in augmented reality field | |
CN102325259A (en) | Method and device for synthesizing virtual viewpoints in multi-viewpoint video | |
CN102075779B (en) | Intermediate view synthesizing method based on block matching disparity estimation | |
CN102447927B (en) | Method for warping three-dimensional image with camera calibration parameter | |
US20120139906A1 (en) | Hybrid reality for 3d human-machine interface | |
CN102572482A (en) | 3D (three-dimensional) reconstruction method for stereo/multi-view videos based on FPGA (field programmable gata array) | |
CN103402097B (en) | A kind of free viewpoint video depth map encoding method and distortion prediction method thereof | |
CN102307312A (en) | Method for performing hole filling on destination image generated by depth-image-based rendering (DIBR) technology | |
JP2005151534A (en) | Pseudo three-dimensional image creation device and method, and pseudo three-dimensional image display system | |
CN101556700A (en) | Method for drawing virtual view image | |
CN103024421A (en) | Method for synthesizing virtual viewpoints in free viewpoint television | |
CN102930593A (en) | Real-time rendering method based on GPU (Graphics Processing Unit) in binocular system | |
CN105979241B (en) | A kind of quick inverse transform method of cylinder three-dimensional panoramic video | |
CN104270624B (en) | A kind of subregional 3D video mapping method | |
CN103647960B (en) | A kind of method of compositing 3 d images | |
CN102840827A (en) | Monocular machine vision-based non-contact three-dimensional scanning method | |
CN101383051B (en) | View synthesizing method based on image re-projection | |
CN103731657B (en) | A kind of to the filling of the cavity containing the empty image processing method after DIBR algorithm process | |
CN105791798B (en) | A kind of 4K based on GPU surpasses the real-time method for transformation of multiple views 3D videos and device | |
CN103945206B (en) | A kind of stereo-picture synthesis system compared based on similar frame |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20131106; Termination date: 20160919 |