CN104902253A - Three-dimensional image generating method based on improved Bayesian model - Google Patents
- Publication number: CN104902253A
- Application number: CN201510064469.6A
- Authority
- CN
- China
- Prior art keywords
- pixel
- synthesized image
- input image
- image
- picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention belongs to the field of image processing and particularly relates to a three-dimensional image generation method based on an improved Bayesian model, aiming to generate three-dimensional images more efficiently and intuitively. The method comprises the following steps: first, calculating the per-pixel viewpoint position of the synthesized view for a given disparity; second, generating a synthesized image through the improved Bayesian model; and last, generating a red-blue stereo image from the reference image and the synthesized image. Compared with existing methods, the three-dimensional image generation method based on an improved Bayesian model provided by the invention has marked advantages in time performance, quality of the generated images, and generality. With this method the disparity can be specified intuitively pixel by pixel, which facilitates post-processing of stereo images. The method has broad application prospects.
Description
Technical field
The invention belongs to the field of image processing and specifically relates to a stereo image generation method based on an improved Bayesian model that can generate stereo images more efficiently and intuitively.
Background technology
With the popularity of 3D television, film, and games, more and more 3D content is being produced, and a new generation of stereoscopic display devices is bringing this content into the lives of ordinary consumers. However, owing to the complexity of stereoscopic display devices and the lack of post-processing tools for stereo material, creating believable, comfortable 3D content remains challenging, and traditional tools and workflows do not serve 3D content processing well. We therefore need to rethink the workflow of 3D content creation and editing, and to devise simpler and more effective 3D content post-processing methods.
Binocular disparity is the most important visual cue for depth perception, so tools for manipulating disparity are essential. The most common way to control binocular disparity is to set the baseline between two cameras at different viewpoints, but the disparity exhibited by most 3D content already exceeds the range people can comfortably accept, so baseline adjustment is very limited. Reducing the baseline reduces binocular disparity, but it also flattens the whole scene and weakens the sense of depth. A more sophisticated way to control binocular disparity is to remap the disparity of the images and then synthesize new images; the difficulties of this approach lie in computing disparity accurately and in repairing holes in the synthesized images. Changil Kim et al. propose a per-pixel disparity control method in "Multi-Perspective Stereoscopy from Light Fields", but that method is confined to light-field input and suffers from drawbacks such as long running time and a limited disparity adjustment range.
Sergi Pujades et al. propose an image-based view synthesis method in "Bayesian View Synthesis and Image-Based Rendering Principles". The method introduces image error and geometric error into the modeling of the view synthesis process, constructs a complete physical generative model and the corresponding maximum a posteriori estimate, and explains well the heuristic rules used in image-based view synthesis methods. However, the method does not consider stereo image generation, cannot control disparity per pixel, and cannot screen the input images, so it cannot be applied directly to stereo image generation.
Summary of the invention
Aiming at the problem of stereo image generation, the present invention proposes a stereo image generation method based on an improved Bayesian model; the method is general and can control disparity per pixel. First, the per-pixel viewpoint positions of the synthesized view are calculated from the given disparity; then a synthesized image is generated through the improved Bayesian model; finally, the reference image and the synthesized image are used to generate a red-blue stereo image.
The object of the invention is achieved through the following technical solutions.
A stereo image generation method based on an improved Bayesian model according to the present invention, characterized in that it comprises the following steps:
Step 1: obtain the per-pixel viewpoint positions of the synthesized image from the reference image.
The reference image is a specified real image; all pixels of the reference image share the same viewpoint position, denoted by the symbol s.
The synthesized image is an image obtained from a group of input images; its viewpoint position is specified per pixel.
Step 1.1: compute the per-pixel normalized disparity of the reference image (denoted v_0) by formula (1); the normalized disparity, denoted d_0(x), is obtained from the standard depth z_0(x) of each pixel in the reference image and the imaging focal length f of the reference image. Here x is the index of a pixel in v_0, x ∈ [1, N], and N is the number of pixels in v_0.
Step 1.2: given the per-pixel image disparity D(x) between the synthesized image (denoted u) and the reference image v_0, and noting that the viewpoints of the synthesized image and the reference image lie in the same plane with identical imaging planes, compute the per-pixel viewpoint displacement of the synthesized image relative to the reference image, denoted Δs(x), from the image disparity D(x) by formula (2).
Step 1.3: compute the per-pixel viewpoint position of the synthesized image, denoted s′(x), by formula (3):
s′(x) = s + Δs(x)   (3)
where s is the viewpoint position of the reference image.
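As a concrete illustration, the per-pixel viewpoint computation of Step 1 can be sketched as follows. Formulas (1) and (2) are not reproduced in this text, so the forms d_0(x) = f / z_0(x) and Δs(x) = D(x) / d_0(x) used below are assumptions consistent with the surrounding definitions; only formula (3) is taken directly from the text.

```python
import numpy as np

def per_pixel_viewpoint(z0, f, D, s):
    """Sketch of Step 1: per-pixel viewpoint positions of the synthesized view.

    z0 : per-pixel standard depth of the reference image (H x W array)
    f  : imaging focal length of the reference image
    D  : desired per-pixel image disparity between synthesized and reference image
    s  : viewpoint position of the reference image (scalar along the baseline)
    """
    d0 = f / z0         # assumed form of formula (1): normalized disparity
    delta_s = D / d0    # assumed form of formula (2): viewpoint displacement
    return s + delta_s  # formula (3): s'(x) = s + delta_s(x)
```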
Step 2: generate the synthesized image through the improved Bayesian model.
Step 2.1: for the viewpoint position s′(x) of each pixel in the synthesized image, determine a new imaging plane, denoted Γ_j.
Step 2.2: given the geometry estimate of the scene and a group of input images, denoted v_i, compute the mapping φ_ij from each pixel of the imaging plane Ω_i of each input image to the new imaging plane Γ_j, together with the visibility operator m_ij that indicates whether each pixel of Ω_i is visible after being mapped to Γ_j. Each input image v_i contains the same number of pixels as the reference image v_0. Here i ∈ [1, n] with n ≥ 1, and j ∈ [1, N].
The visibility operator m_ij is a matrix whose numbers of rows and columns equal those of the pixel grid of the input image v_i; an element of m_ij with value 1 indicates that the corresponding pixel is visible, and value 0 indicates that it is invisible.
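A toy illustration of how such a binary visibility operator acts on an input image; the shapes and values here are invented for intuition only:

```python
import numpy as np

# Toy visibility operator m_ij for a 3x3 input image: an entry of 1 means the
# corresponding pixel of v_i remains visible after mapping onto the new
# imaging plane Gamma_j; 0 means it is occluded or falls outside the plane.
m_ij = np.array([[1, 1, 0],
                 [1, 1, 1],
                 [0, 1, 1]], dtype=np.uint8)

v_i = np.arange(9, dtype=float).reshape(3, 3)  # toy input image intensities

# Masking with m_ij zeroes out the invisible pixels, so they contribute
# nothing when the input images are blended into the synthesized image.
visible_part = m_ij * v_i
```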
The geometry estimate of the scene corresponding to the input images is the per-pixel depth information of the input images or the per-pixel spatial position information of the input images.
Step 2.3: compute, by formula (4) and formula (5), the mapping τ_i from each pixel of the input image imaging plane Ω_i to the imaging plane of the synthesized image (denoted Γ_0), and the inverse mapping β_i. Because the number of pixels in each input image equals that of the synthesized image and the reference image, x also indexes the pixels of the input images and the synthesized image.
τ_i(x) = φ_ix(x)   (4)
where x indexes the pixels of the input images and the synthesized image.
Step 2.4: compute the estimate of the synthesized image u according to formula (6), where n is the number of input images and ∘ is the function composition operator.
Step 2.5: compute the error factor of every input image v_i relative to the synthesized image u, denoted κ_i, according to formula (7), where ∇ is the gradient operator, τ_i is the vector formed by τ_i(x) for x ∈ [1, N], and z_i is the depth of the pixels in the input image v_i.
Sort the error factors κ_i in ascending order, select the M images with the smallest error factors as the final input images, and record their image numbers, denoted l_k, as shown in formula (8):
l_k = i   (8)
where k ∈ [1, M] and l_k ∈ [1, n].
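The selection rule of Step 2.5 (sort the error factors ascending, keep the M smallest) can be sketched as follows; the computation of κ_i itself depends on formula (7), which is not reproduced in this text, so κ is taken as given:

```python
import numpy as np

def select_inputs(kappa, M):
    """Step 2.5 selection sketch: sort the error factors ascending and keep
    the M input images with the smallest error, returning their 1-based
    image numbers l_k (formula (8)).

    kappa: 1-D array of error factors kappa_i, one per input image.
    """
    order = np.argsort(kappa)  # indices of kappa in ascending order
    l = order[:M] + 1          # keep the M best; convert to 1-based numbers
    return sorted(l.tolist())
```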
Step 2.6: construct the energy term E(u) as shown in formula (9):
E(u) = E_data(u) + λE_prior(u)   (9)
where E_data(u) is obtained by formula (10) and is derived by inference from the Bayesian image generation model; λ is a weighting coefficient with λ > 0; E_prior(u) is a smoothness term used to fill the parts of the synthesized image u that lack input image information, and is obtained by formula (12).
In formula (10), the Gaussian sensor noise variance is a constant; the image error of the input image l_k is calculated by formula (11); and the visibility operator is as defined above.
In formula (11), the geometric error is a known quantity.
Step 2.7: minimize the energy term E(u) constructed by formula (9) to solve for the synthesized image u. Specifically, using the estimate of the synthesized image obtained in step 2.4 as the initial iterate, solve iteratively with the fast iterative shrinkage-thresholding algorithm (FISTA).
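The minimization in Step 2.7 uses FISTA. The patent's concrete E_data and E_prior terms (formulas (10)-(12)) are not reproduced in this text, so the sketch below shows only the generic FISTA iteration shape (gradient step on the smooth data term, proximal step on the prior, momentum extrapolation), with the objective supplied by the caller; all names are illustrative.

```python
import numpy as np

def fista(grad_f, prox_g, u0, L, n_iter=100):
    """Generic FISTA iteration: minimizes f(u) + g(u), where grad_f is the
    gradient of the smooth data term, prox_g(x, step) the proximal operator
    of the prior, u0 the initial iterate, and L a Lipschitz constant of
    grad_f (so 1/L is the step size)."""
    u = u0.copy()
    y = u0.copy()
    t = 1.0
    for _ in range(n_iter):
        u_next = prox_g(y - grad_f(y) / L, 1.0 / L)       # gradient + prox step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = u_next + ((t - 1.0) / t_next) * (u_next - u)  # momentum extrapolation
        u, t = u_next, t_next
    return u
```

For example, with a quadratic data term grad_f(y) = y - b and an L1-style prior whose proximal operator is soft thresholding, the iteration converges to the soft-thresholded b.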
Step 3: use the reference image and the synthesized image to generate a red-blue stereo image satisfying the image disparity D(x). Specifically, let the RGB value of a pixel in the reference image be (R1, G1, B1) and its RGB value in the synthesized image be (R2, G2, B2); then the RGB value of that pixel in the generated red-blue stereo image is (R1, G2, B2).
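Step 3's channel recombination is simple enough to state directly in code; this sketch assumes H x W x 3 RGB arrays of the same shape for the reference and synthesized images:

```python
import numpy as np

def red_blue_anaglyph(ref, syn):
    """Step 3: build the red-blue stereo image by taking the R channel from
    the reference image and the G, B channels from the synthesized image,
    i.e. (R1, G1, B1) + (R2, G2, B2) -> (R1, G2, B2).

    ref, syn: H x W x 3 RGB arrays of identical shape.
    """
    out = syn.copy()
    out[..., 0] = ref[..., 0]  # red channel comes from the reference view
    return out
```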
Beneficial effects
Compared with existing methods, the stereo image generation method based on an improved Bayesian model proposed by the present invention has clear advantages in time performance, synthesized image quality, and generality. The method can also specify disparity intuitively per pixel, which facilitates post-processing of stereo images, and it has broad application prospects.
Brief description of the drawings
Fig. 1 is the flow chart of the stereo image generation method based on an improved Bayesian model in the embodiment of the invention;
Fig. 2 is the reference image in the embodiment;
Fig. 3 is the input image v_1 in the embodiment;
Fig. 4 is the input image v_2 in the embodiment;
Fig. 5 is the input image v_3 in the embodiment;
Fig. 6 is the input image v_4 in the embodiment;
Fig. 7 is the input image v_5 in the embodiment;
Fig. 8 is the input image v_6 in the embodiment;
Fig. 9 is the input image v_7 in the embodiment;
Fig. 10 is the input image v_8 in the embodiment;
Fig. 11 is the input image v_9 in the embodiment;
Fig. 12 is the synthesized image obtained in the embodiment;
Fig. 13 is the red-blue stereo image obtained in the embodiment.
Embodiment
The embodiments of the method of the invention are elaborated below in conjunction with the accompanying drawings.
In this embodiment, the stereo image generation method based on an improved Bayesian model is adopted to generate a stereo image; as shown in Figure 1, its workflow comprises the following steps:
Step 1: obtain the per-pixel viewpoint positions of the synthesized image from the reference image; the reference image is shown in Figure 2.
Step 1.1: compute the per-pixel normalized disparity d_0(x) of the reference image v_0 by formula (1); d_0(x) is obtained from the standard depth z_0(x) of each pixel in the reference image and the imaging focal length f of the reference image.
Step 1.2: given the per-pixel image disparity D(x) between the synthesized image u and the reference image v_0, and noting that the viewpoints of the synthesized image and the reference image lie in the same plane with identical imaging planes, compute the per-pixel viewpoint displacement Δs(x) of the synthesized image relative to the reference image from D(x) by formula (2).
Step 1.3: compute the per-pixel viewpoint position s′(x) of the synthesized image by formula (3).
Step 2: generate the synthesized image through the improved Bayesian model.
Step 2.1: for the viewpoint position s′(x) of each pixel in the synthesized image, determine a new imaging plane Γ_j.
Step 2.2: given the geometry estimate of the scene and nine input images v_i, shown in Fig. 3 to Fig. 11, compute the mapping φ_ij from each pixel of the imaging plane Ω_i of each input image to the new imaging plane Γ_j, together with the visibility operator m_ij that indicates whether each pixel of Ω_i is visible after being mapped to Γ_j. Each input image v_i contains the same number of pixels as the reference image v_0. Here i ∈ [1, 9] and j ∈ [1, N].
The visibility operator m_ij is a matrix whose numbers of rows and columns equal those of the pixel grid of the input image v_i; an element of m_ij with value 1 indicates that the corresponding pixel is visible, and value 0 indicates that it is invisible.
The geometry estimate of the scene corresponding to the input images is the per-pixel depth information of the input images.
Step 2.3: compute, by formula (4) and formula (5), the mapping τ_i from each pixel of the input image imaging plane Ω_i to the imaging plane Γ_0 of the synthesized image, and the inverse mapping β_i.
Step 2.4: compute the estimate of the synthesized image u according to formula (6).
Step 2.5: compute the error factor κ_i of every input image v_i relative to the synthesized image u according to formula (7). Sort the error factors κ_i in ascending order, select the M images with the smallest error factors as the final input images, and record the image numbers l_k = 6, 7, 8, 9.
Step 2.6: construct the energy term E(u) as shown in formula (9).
Step 2.7: minimize the energy term E(u) constructed by formula (9) to solve for the synthesized image u. Specifically, using the estimate of the synthesized image obtained in step 2.4 as the initial iterate, solve iteratively with the FISTA algorithm. First set the number of outer iterations of FISTA; during each outer iteration keep the current estimate fixed, and once the inner-iteration stopping criterion is met, update the estimate and start the next outer iteration. Repeat this iterative process until the set number of outer iterations is reached.
The synthesized image obtained is shown in Figure 12.
Step 3: use the reference image and the synthesized image to generate a red-blue stereo image satisfying the image disparity D(x). Specifically, let the RGB value of a pixel in the reference image be (R1, G1, B1) and its RGB value in the synthesized image be (R2, G2, B2); then the RGB value of that pixel in the generated red-blue stereo image is (R1, G2, B2). The resulting red-blue stereo image is shown in Figure 13.
Claims (5)
1. A stereo image generation method based on an improved Bayesian model, characterized in that it comprises the following steps:
Step 1: obtain the per-pixel viewpoint positions of the synthesized image from the reference image;
the reference image is a specified real image, and all pixels of the reference image share the same viewpoint position;
the synthesized image is an image obtained from a group of input images, and its viewpoint position is specified per pixel;
Step 2: generate the synthesized image through the improved Bayesian model;
Step 2.1: for the viewpoint position s′(x) of each pixel in the synthesized image, determine a new imaging plane Γ_j;
Step 2.2: given the geometry estimate of the scene and a group of input images v_i, compute the mapping φ_ij from each pixel of the imaging plane Ω_i of each input image to the new imaging plane Γ_j, together with the visibility operator m_ij that indicates whether each pixel of Ω_i is visible after being mapped to Γ_j; each input image v_i contains the same number of pixels as the reference image v_0; here i ∈ [1, n] with n ≥ 1, and j ∈ [1, N];
Step 2.3: compute, by formula (4) and formula (5), the mapping τ_i from each pixel of the input image imaging plane Ω_i to the imaging plane Γ_0 of the synthesized image, and the inverse mapping β_i; because the number of pixels in each input image equals that of the synthesized image and the reference image, x also indexes the pixels of the input images and the synthesized image;
τ_i(x) = φ_ix(x)   (4)
where x indexes the pixels of the input images and the synthesized image;
Step 2.4: compute the estimate of the synthesized image u according to formula (6), where n is the number of input images and ∘ is the function composition operator;
Step 2.5: compute the error factor κ_i of every input image v_i relative to the synthesized image u according to formula (7), where ∇ is the gradient operator, τ_i is the vector formed by τ_i(x) for x ∈ [1, N], and z_i is the depth of the pixels in the input image v_i;
sort the error factors κ_i in ascending order, select the M images with the smallest error factors as the final input images, and record the image numbers l_k, as shown in formula (8);
l_k = i   (8)
where k ∈ [1, M] and l_k ∈ [1, n];
Step 2.6: construct the energy term E(u) as shown in formula (9);
E(u) = E_data(u) + λE_prior(u)   (9)
where E_data(u) is obtained by formula (10) and is derived by inference from the Bayesian image generation model; λ is a weighting coefficient with λ > 0; E_prior(u) is a smoothness term used to fill the parts of the synthesized image u that lack input image information, and is obtained by formula (12);
in formula (10), the Gaussian sensor noise variance is a constant; the image error of the input image l_k is calculated by formula (11); and the visibility operator is as defined above;
in formula (11), the geometric error is a known quantity;
Step 2.7: minimize the energy term E(u) constructed by formula (9) to solve for the synthesized image u; specifically, using the estimate of the synthesized image obtained in step 2.4 as the initial iterate, solve iteratively with the fast iterative shrinkage-thresholding algorithm (FISTA);
Step 3: use the reference image and the synthesized image to generate a red-blue stereo image satisfying the image disparity D(x).
2. The stereo image generation method based on an improved Bayesian model according to claim 1, characterized in that the specific operations of obtaining the per-pixel viewpoint positions of the synthesized image from the reference image in step 1 are:
Step 1.1: compute the per-pixel normalized disparity d_0(x) of the reference image v_0 by formula (1); d_0(x) is obtained from the standard depth z_0(x) of each pixel in the reference image and the imaging focal length f of the reference image; here x is the index of a pixel in v_0, x ∈ [1, N], and N is the number of pixels in v_0;
Step 1.2: given the per-pixel image disparity D(x) between the synthesized image u and the reference image v_0, and noting that the viewpoints of the synthesized image and the reference image lie in the same plane with identical imaging planes, compute the per-pixel viewpoint displacement Δs(x) of the synthesized image relative to the reference image from D(x) by formula (2);
Step 1.3: compute the per-pixel viewpoint position s′(x) of the synthesized image by formula (3);
s′(x) = s + Δs(x)   (3)
where s is the viewpoint position of the reference image.
3. The stereo image generation method based on an improved Bayesian model according to claim 1 or 2, characterized in that the visibility operator m_ij described in step 2.2 of step 2 is a matrix whose numbers of rows and columns equal those of the pixel grid of the input image v_i; an element of m_ij with value 1 indicates that the corresponding pixel is visible, and value 0 indicates that it is invisible.
4. The stereo image generation method based on an improved Bayesian model according to claim 1 or 2, characterized in that the geometry estimate of the scene corresponding to the input images described in step 2.2 of step 2 is the per-pixel depth information of the input images or the per-pixel spatial position information of the input images.
5. The stereo image generation method based on an improved Bayesian model according to claim 1 or 2, characterized in that the specific operations in step 3 of generating, from the reference image and the synthesized image, a red-blue stereo image satisfying the image disparity D(x) are: let the RGB value of a pixel in the reference image be (R1, G1, B1) and its RGB value in the synthesized image be (R2, G2, B2); then the RGB value of that pixel in the generated red-blue stereo image is (R1, G2, B2).
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201510064469.6A | 2015-02-09 | 2015-02-09 | Three-dimensional image generating method based on improved Bayesian model |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201510064469.6A | 2015-02-09 | 2015-02-09 | Three-dimensional image generating method based on improved Bayesian model |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN104902253A | 2015-09-09 |
| CN104902253B | 2016-11-09 |
Family
ID=54034614
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date | Status |
|---|---|---|---|---|
| CN201510064469.6A | Three-dimensional image generating method based on improved Bayesian model | 2015-02-09 | 2015-02-09 | Expired - Fee Related |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN104902253B (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108648821A | 2018-03-21 | 2018-10-12 | Beijing Institute of Technology | Intelligent operation decision system for puncture surgery robot and its application method |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20070122001A1 | 2005-11-30 | 2007-05-31 | Microsoft Corporation | Real-time Bayesian 3D pose tracking |
| CN102932657A | 2011-08-08 | 2013-02-13 | Sony Corporation | Image processing apparatus, image processing method, and program |
| CN103383776A | 2013-07-14 | 2013-11-06 | Zhejiang University | Progressive stereo matching algorithm based on segment matching and Bayes estimation |
Non-Patent Citations (1)
| Title |
|---|
| Li Jiao et al., "An efficient stereo matching method based on Bayesian theory", Laser & Optoelectronics Progress |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108648821A | 2018-03-21 | 2018-10-12 | Beijing Institute of Technology | Intelligent operation decision system for puncture surgery robot and its application method |
| CN108648821B | 2018-03-21 | 2020-12-01 | Beijing Institute of Technology | Intelligent operation decision system for puncture surgery robot and application method thereof |
Also Published As
| Publication number | Publication date |
|---|---|
| CN104902253B | 2016-11-09 |
Similar Documents
- Kim et al., Multi-perspective stereoscopy from light fields
- Karsch et al., Depth transfer: depth extraction from video using non-parametric sampling
- CN102307312B, Method for performing hole filling on destination image generated by depth-image-based rendering (DIBR) technology
- CN104504671B, Method for generating virtual-real fusion image for stereo display
- CN101271583B, Fast image drawing method based on depth map
- KR20140088200A, Depth-map generation for an input image using an example approximate depth-map associated with an example similar image
- CN103400409A, 3D visualization method for coverage range based on fast estimation of camera attitude
- Hung et al., Consistent binocular depth and scene flow with chained temporal profiles
- CN102034265A, Three-dimensional view acquisition method
- CN103702103B, Raster stereo printing image synthesis method based on binocular camera
- CN104065946A, Hole filling method based on image sequence
- Shivakumar et al., Real time dense depth estimation by fusing stereo with sparse depth measurements
- CN104506872A, Method and device for converting planar video into stereoscopic video
- Casas et al., 4D Model Flow: Precomputed appearance alignment for real-time 4D video interpolation
- CN105809734A, Mechanical model 3D modeling method based on multi-view interaction
- CN103731657B, Hole filling method for images containing holes after DIBR algorithm processing
- CN102447927B, Method for warping three-dimensional image with camera calibration parameters
- CN104902253A, Three-dimensional image generating method based on improved Bayesian model
- Lee et al., Automatic 2D-to-3D conversion using multi-scale deep neural network
- US8786681B1, Stereoscopic conversion
- Zhou et al., Single-view view synthesis with self-rectified pseudo-stereo
- CN103945209A, DIBR method based on block projection
- Waschbüsch et al., 3D video billboard clouds
- Li et al., DRI-MVSNet: A depth residual inference network for multi-view stereo images
- Xie et al., Depth-tunable three-dimensional display with interactive light field control
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | C14 | Grant of patent or utility model | |
| | GR01 | Patent grant | |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2016-11-09; Termination date: 2020-02-09 |