CN102592275A - Virtual viewpoint rendering method - Google Patents

Virtual viewpoint rendering method

Info

Publication number: CN102592275A
Authority: CN (China)
Prior art keywords: virtual, pixel, view, image, depth map
Legal status: Granted
Application number: CN2011104289981A
Other languages: Chinese (zh)
Other versions: CN102592275B (en)
Inventors: 雷建军, 王来花, 侯春萍, 范科峰, 吴冬燕, 张海龙
Current Assignee: Tianjin University
Original Assignee: Tianjin University
Application filed by Tianjin University
Priority to CN 201110428998; granted as CN102592275B
Publication of CN102592275A; application granted; publication of CN102592275B
Current legal status: Active


Abstract

The invention relates to a virtual viewpoint rendering method, which belongs to the field of image processing and stereoscopic imaging and achieves high-quality virtual viewpoint rendering. The technical scheme comprises the following steps: (1) preprocessing the depth maps; (2) obtaining two virtual viewpoint images from the two reference views and their depth maps through three-dimensional image warping; (3) fusing the two virtual viewpoint images with distance-based weights, eliminating most of the holes and producing a fused initial virtual view; and (4) filling the remaining holes of the initial virtual view with a method based on foreground-background separation to obtain the final virtual viewpoint image. The method is mainly applicable to image processing and stereoscopic imaging.

Description

Virtual viewpoint rendering method
Technical field
The invention belongs to the field of image processing and stereoscopic imaging, and relates to a depth-based virtual viewpoint rendering method.
Background art
Traditional 2-D video describes a three-dimensional scene with planar images and cannot satisfy people's demand for a sense of depth. Compared with traditional 2-D video, stereoscopic video has a more intuitive visual effect: it lets viewers experience vivid depth perception and immersion. Virtual viewpoint rendering is the process of generating, from the video images of the same scene captured from two or more camera viewpoints, the video images of that scene as seen from other camera viewpoints; it is one of the key technologies for stereoscopic-video applications such as autostereoscopic television and free-viewpoint television.
Virtual viewpoint rendering techniques mainly comprise model-based rendering (MBR) and image-based rendering (IBR). Traditional model-based rendering uses computer graphics techniques to define the scene and build a three-dimensional scene model, then applies rendering, shading, illumination and projection to the model to generate the image of a given viewpoint. Whether the geometric model of the scene can be constructed accurately therefore has a decisive influence on the realism of the final image. Because the complexity of a scene is often hard to control, the imaging effect of MBR for complex scenes is not very satisfactory and cannot meet users' ever-growing demand for realism and immersion. Image-based rendering instead studies how to generate the corresponding image of a virtual viewpoint directly from known viewpoint images; its rendering process avoids complicated geometric modeling and computation, and is therefore fast, simple and realistic, making it currently the most active research direction.
Different IBR techniques can be regarded as concrete manifestations of the plenoptic function (Plenoptic Function) under particular conditions. The plenoptic function describes, at time τ, the sum of the light information sampled from a viewpoint at three-dimensional position (V_x, V_y, V_z), along viewing direction (θ, φ), at wavelength λ. A scene can thus be expressed as the 7-dimensional function

P = P(V_x, V_y, V_z, θ, φ, λ, τ),

called the plenoptic function. IBR can be described as the process of generating a continuous representation of the plenoptic function from a given set of its samples; based on this description, the IBR pipeline decomposes into three processes: sampling the plenoptic function, reconstruction, and resampling. In practice, however, the complete plenoptic function of a scene cannot be obtained in advance, and its 7-dimensional form requires enormous storage and computation, so researchers have carried out a great deal of work on its application and simplification. According to how much scene geometry information is used, IBR techniques fall into three classes: 1) IBR methods that use no geometry information; 2) IBR methods that use implicit geometry information; 3) IBR methods that use explicit geometry information.
IBR methods without geometry information render the three-dimensional scene from dense samples of it. They mainly include the 2-dimensional panorama technique (Panoramas), the 3-dimensional concentric mosaics technique (Concentric Mosaics), the 4-dimensional light field (Light Fields) and ray-space representation (Ray Space Representation) methods, and the 5-dimensional plenoptic modeling (Plenoptic Modeling) rendering method.
IBR methods that use implicit geometry information need only very few input images and can generate scene pictures quickly at low cost under bounded error, while the generated image quality is very high. They mainly include view morphing (View Morphing) and view interpolation (View Interpolation). View morphing uses the principle of geometric projection to reconstruct, from the images of two different viewpoints, the image of any new viewpoint on the line joining the optical centers. View interpolation is based on the continuity between adjacent images: using the depth information of each image pixel, it establishes the pixel correspondences between neighbouring sample photo-realistic images through basic principles of vision.
IBR methods that use explicit geometry information mainly include texture mapping (Texture Mapping), layered depth images (Layered Depth Image, LDI) and depth-image-based 3D image warping (3D Image Warping). The essence of texture mapping is to map images back onto the surfaces of the 3-D scene, showing complex surface detail with texture. The layered depth image technique layers the image according to depth information; the geometry it uses is the depth information on a few separate layers. Depth-image-based rendering (DIBR) incorporates the scene's depth information into IBR: it generates the virtual viewpoint view from source images and their corresponding depth images through 3-D image warping, thereby significantly reducing the number of reference images needed and saving storage space and transmission bandwidth.
DIBR has developed rapidly in recent years and is a focus of current research. McMillan first proposed the 3D image warping equation, obtaining the virtual view through image mapping, which greatly reduces the complexity of generating new views but leaves open the problem of effective hole filling. Cooke proposed a multi-view synthesis method that uses a sampling-density map to choose the best surfaces for generating the virtual view; because it does not consider the influence of holes in the individual generated views, a complete sampling-density map cannot be obtained, and the method cannot be applied successfully to complex scenes. Muller proposed deriving the virtual-view camera parameters from the parameters of the two surrounding reference views before 3D warping, so that a virtual view at any viewpoint between the two reference views can be obtained, but the hole problem is still not solved effectively. Mori performed virtual viewpoint rendering with 3D warping and addressed the hole problem by projecting the depth map to the virtual image plane and post-filtering the warped depth map; however, post-filtering blurs the depth map and degrades the rendering quality of the virtual view.
Summary of the invention
The present invention aims to overcome the deficiencies of the prior art by providing a virtual viewpoint rendering method that achieves high-quality virtual viewpoint rendering. To achieve this objective, the technical scheme adopted by the invention is a virtual viewpoint rendering method comprising the following steps:
(1) preprocessing the depth maps with a boundary reconstruction algorithm based on edge detection;
(2) obtaining the virtual viewpoint views from the two-viewpoint reference views and their corresponding depth maps through three-dimensional image warping:
when rendering the virtual viewpoint image, the 3D image warping method is adopted; using the depth images and the calibrated camera parameters, the pixels of a reference image are mapped into the target image: each pixel of the source reference image is first re-projected, using its depth and the reference camera's parameters, to its corresponding three-dimensional point; these three-dimensional points are then projected onto the virtual camera's image plane, using the virtual viewpoint's position and camera parameters, to form the virtual viewpoint image;
(3) fusing the two virtual viewpoint images to eliminate most of the holes and generate one initial virtual view:
the two virtual views are fused using distance-based weights, giving the fused initial virtual view;
(4) filling the holes of the initial virtual view with a method based on foreground-background separation, obtaining the final virtual viewpoint image.
The preprocessing of the depth map in step (1) with the boundary reconstruction algorithm based on edge detection is specifically:
First, edge detection is performed on the depth map with the Canny operator to extract the edges of foreground objects. Depth-boundary reconstruction filtering is then applied to the extracted edges, using the cost function J_recon(k) = J_F(k) + J_S(k) + J_C(k): for each pixel on the extracted edges, the cost of each neighbouring brightness value is computed, and the value with the maximum cost is chosen as the best brightness value to replace the current pixel's brightness. Here k denotes a pixel brightness value and J_recon(k) the cost when the pixel value is k. The first sub-cost function J_F represents the frequency of occurrence of each value, obtained from its occurrence count; the second sub-cost function J_S represents the brightness similarity between the current pixel and its neighbouring pixels; the third sub-cost function J_C represents the spatial closeness between the current pixel and its neighbours. Finally, the boundary-reconstructed depth map is smoothed with median filtering and morphological processing of its edges, yielding a depth map of better quality.
The hole filling of the initial virtual view in step (4) with the method based on foreground-background separation, obtaining the final virtual viewpoint image, is specifically:
First, using the depth values of the fused virtual view's depth map, a threshold γ is chosen and the depth map is partitioned, i.e. the background region and the foreground objects are separated from each other; the separated background region is then filled by neighbourhood interpolation; finally the foreground region is placed back into the virtual view, and, to ensure that all holes are filled, the whole virtual view is hole-filled once more by neighbourhood interpolation, giving the final virtual view.
The cost functions of the boundary reconstruction algorithm in step (1) are specifically:
The first sub-cost function J_F is defined as:

N_OC(k) = Σ_{i=0}^{n×n−1} δ[k, w_{n×n}(i)], with δ[k, w_{n×n}(i)] = 1 if k = w_{n×n}(i) and 0 otherwise,

J_F(k) = (N_OC(k) − N_OC(min)) / (N_OC(max) − N_OC(min)),

where w_{n×n}(i) is the brightness value of pixel i in the window w_{n×n}, and N_OC(max) and N_OC(min) are the maximum and minimum N_OC values respectively;
The second sub-cost function J_S is defined as:

S(k) = |I_cur − I_k|,

J_S(k) = (S(max) − S(k)) / (S(max) − S(min)),

where S(max) and S(min) are the maximum and minimum S(k) values respectively;
The third sub-cost function J_C is defined as:

C(k) = (1 / N_OC(k)) Σ_{i=0}^{N_OC(k)−1} √((x_cur − x_i)² + (y_cur − y_i)²),

J_C(k) = (C(max) − C(k)) / (C(max) − C(min)),

where C(max) and C(min) are the maximum and minimum C(k) values respectively.
The present invention has the following technical effects:
The invention preprocesses the depth map with a boundary reconstruction algorithm based on edge detection, improving the quality of the depth map; and it processes the holes in the virtual view with a hole-filling algorithm based on foreground-background separation, achieving high-quality virtual viewpoint rendering.
Description of drawings
Fig. 1 shows a magnified part of a virtual viewpoint view rendered without the method of the embodiment of the invention;
Fig. 2 shows a magnified part of a virtual viewpoint view rendered with the method of the embodiment of the invention;
Fig. 3 shows the full virtual viewpoint view rendered with the method of the embodiment of the invention;
Fig. 4 shows the virtual viewpoint rendering flow of the present invention.
Embodiment
The objective of the invention is to overcome the deficiencies of the prior art by proposing a new virtual viewpoint rendering method. The invention first preprocesses the depth maps with a boundary reconstruction algorithm based on edge detection to improve depth-map quality; it then maps the left and right reference views with the preprocessed depth maps to obtain the images at the virtual viewpoint position, and fuses the two virtual images into one view by image fusion; finally it processes the holes in the virtual view with a hole-filling algorithm based on foreground-background separation, achieving high-quality virtual viewpoint rendering.
A virtual viewpoint rendering method, comprising the following steps:
(1) preprocessing the depth maps with a boundary reconstruction algorithm based on edge detection;
First, edge detection is performed on the depth map with the Canny operator to extract the edges of foreground objects. Depth-boundary reconstruction filtering is then applied to the extracted edges, with the cost function designed as J_recon(k) = J_F(k) + J_S(k) + J_C(k): for each pixel on the extracted edges, the cost of each neighbouring brightness value is computed, and the value with the maximum cost replaces the current pixel's brightness. Here k denotes a pixel brightness value and J_recon(k) the cost when the pixel value is k. The first sub-cost function J_F represents the frequency of occurrence of each value, obtained from its occurrence count; the second sub-cost function J_S represents the brightness similarity between the current pixel and its neighbouring pixels; the third sub-cost function J_C represents the spatial closeness between the current pixel and its neighbours. Finally, the boundary-reconstructed depth map is smoothed with median filtering and morphological processing of its edges, yielding a depth map of better quality.
(2) obtaining the virtual viewpoint views from the two-viewpoint reference views and their corresponding depth maps through three-dimensional image warping;
When rendering the virtual viewpoint image, the 3D image warping method is adopted: using the depth images and the calibrated camera parameters, the pixels of a reference image are mapped into the target image. Each pixel of the source reference image is first re-projected, using its depth and the reference camera's parameters, to its corresponding three-dimensional point; these three-dimensional points are then projected onto the virtual camera's image plane, using the virtual viewpoint's position and camera parameters, to form the virtual viewpoint image.
During 3D warping, several reference-view pixels may map to the same position in the virtual viewpoint image; a pixel projected later can then cover a pixel projected earlier, causing serious distortion in the generated virtual view. To address this problem, the Z-buffer method is adopted to compare depth values and choose the pixel with the smaller depth value as the required pixel.
(3) fusing the two virtual viewpoint images to eliminate most of the holes and generate one initial virtual view;
In the process of warping a reference image into the virtual viewpoint image, regions that are invisible in the reference image can become visible in the destination image; such regions cannot find corresponding pixel information in the reference image. Therefore the two virtual views are fused using distance-based weights, giving the fused initial virtual view.
(4) filling the holes of the initial virtual view with a method based on foreground-background separation, obtaining the final virtual viewpoint image;
First, using the depth values of the fused virtual view's depth map, a threshold γ is chosen and the depth map is partitioned, i.e. the background region and the foreground objects are separated from each other. Then the separated background region is filled by neighbourhood interpolation, so that the background parts that were holes are filled with pixels of the adjacent background region. Finally, the foreground region is placed back into the virtual view; to ensure that all holes are filled, the whole virtual view is hole-filled once more by neighbourhood interpolation, giving the final virtual view.
To make the objectives, technical scheme and advantages of the invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
(1) Preprocess the depth maps with a boundary reconstruction algorithm based on edge detection.
First, edge detection is performed on the depth map with the Canny operator to extract the edges of foreground objects. Depth-boundary reconstruction filtering is then applied to the extracted edges. Considering three measures (the frequency of occurrence, the similarity, and the closeness), the cost function is defined as J_recon(k) = J_F(k) + J_S(k) + J_C(k): for each pixel on the extracted edges, the cost of each neighbouring brightness value is computed, and the value with the maximum cost is chosen as the best brightness value to replace the current pixel's brightness. Here k denotes a pixel brightness value and J_recon(k) the cost when the pixel value is k. The first sub-cost function J_F represents the frequency of occurrence of each value, obtained from its occurrence count, and is defined as:

N_OC(k) = Σ_{i=0}^{n×n−1} δ[k, w_{n×n}(i)], with δ[k, w_{n×n}(i)] = 1 if k = w_{n×n}(i) and 0 otherwise,

J_F(k) = (N_OC(k) − N_OC(min)) / (N_OC(max) − N_OC(min)),

where w_{n×n}(i) is the brightness value of pixel i in the window w_{n×n}, and N_OC(max) and N_OC(min) are the maximum and minimum N_OC values respectively.
The second sub-cost function J_S represents the brightness similarity between the current pixel I_cur and its neighbouring pixel I_k, defined as:

S(k) = |I_cur − I_k|,

J_S(k) = (S(max) − S(k)) / (S(max) − S(min)),

where S(max) and S(min) are the maximum and minimum S(k) values respectively.
The third sub-cost function J_C represents the distance feature between the current pixel (x_cur, y_cur) and its neighbours (x_i, y_i), defined as:

C(k) = (1 / N_OC(k)) Σ_{i=0}^{N_OC(k)−1} √((x_cur − x_i)² + (y_cur − y_i)²),

J_C(k) = (C(max) − C(k)) / (C(max) − C(min)),

where C(max) and C(min) are the maximum and minimum C(k) values respectively.
Finally, the boundary-reconstructed depth map is smoothed: median filtering and morphological processing are applied to its edges, yielding a depth map of better quality.
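The cost-based boundary reconstruction above can be illustrated with a short NumPy sketch on a toy neighbourhood. This is a minimal illustration under stated assumptions, not the patented implementation: the 3×3 window size, the exclusion of the centre pixel from the candidate set, and the guarded min-max normalisation are choices made here for concreteness.

```python
import numpy as np

def reconstruct_edge_pixel(depth, y, x, n=3):
    """Choose the replacement brightness for the edge pixel (y, x) by
    maximising J(k) = J_F(k) + J_S(k) + J_C(k) over the candidate
    values k found among its n*n neighbours."""
    half = n // 2
    y0, y1 = max(0, y - half), min(depth.shape[0], y + half + 1)
    x0, x1 = max(0, x - half), min(depth.shape[1], x + half + 1)
    win = depth[y0:y1, x0:x1].astype(float)
    cur = float(depth[y, x])

    valid = np.ones(win.shape, dtype=bool)
    valid[y - y0, x - x0] = False              # neighbours only, not the centre
    cands = np.unique(win[valid])              # candidate brightness values k

    stats = []
    for k in cands:
        mask = (win == k) & valid
        yy, xx = np.nonzero(mask)
        noc = mask.sum()                       # N_OC(k): occurrences of k
        s = abs(cur - k)                       # S(k): brightness difference
        # C(k): mean distance from (y, x) to the pixels valued k
        c = np.mean(np.sqrt((yy + y0 - y) ** 2 + (xx + x0 - x) ** 2))
        stats.append((noc, s, c))
    noc, s, c = (np.asarray(v, dtype=float) for v in zip(*stats))

    def norm(v):                               # guarded min-max normalisation
        rng = v.max() - v.min()
        return (v - v.min()) / rng if rng else np.zeros_like(v)

    j_f = norm(noc)                 # J_F: frequent values score high
    j_s = 1.0 - norm(s)             # J_S: values similar to the centre score high
    j_c = 1.0 - norm(c)             # J_C: spatially close values score high
    return cands[np.argmax(j_f + j_s + j_c)]
```

On a toy 3×3 patch whose centre is an outlier, the similarity and closeness terms outweigh the frequency term, so the candidate that is both nearer in brightness and nearer in space wins; the full algorithm applies this pixel by pixel along the Canny edges before the median/morphological smoothing pass.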
(2) Obtain the virtual viewpoint views from the two-viewpoint reference views and their corresponding depth maps through three-dimensional image warping.
When rendering the virtual viewpoint image, the 3D image warping method is adopted. 3D image warping describes the transformation between the projections of an arbitrary three-dimensional point onto different image planes; using the depth images, the pixels of a reference image are mapped into the target image. Each pixel of the source reference image is first re-projected, using its depth and the reference camera's parameters, to its corresponding three-dimensional point; these points are then projected onto the virtual camera's image plane according to the virtual viewpoint's position and camera parameters, forming the virtual viewpoint image. The algorithm is realized as follows:

w·u = P^r_11·X + P^r_12·Y + P^r_13·Z + P^r_14
w·v = P^r_21·X + P^r_22·Y + P^r_23·Z + P^r_24
w   = P^r_31·X + P^r_32·Y + P^r_33·Z + P^r_34

u_v = (P^v_11·X + P^v_12·Y + P^v_13·Z + P^v_14) / (P^v_31·X + P^v_32·Y + P^v_33·Z + P^v_34)
v_v = (P^v_21·X + P^v_22·Y + P^v_23·Z + P^v_24) / (P^v_31·X + P^v_32·Y + P^v_33·Z + P^v_34)

where (u, v)^T are the pixel coordinates in the reference image coordinate system, (u_v, v_v)^T are the coordinates of the corresponding point in the resulting virtual view, w is a scale factor, (X, Y, Z, 1)^T are the homogeneous coordinates of the world point (X, Y, Z)^T, P^r_ij (i = 1, 2, 3; j = 1, 2, 3, 4) are the elements of the reference-view projection matrix, and P^v_ij (i = 1, 2, 3; j = 1, 2, 3, 4) are the elements of the virtual-view projection matrix. The projection matrix P is obtained by combining the intrinsic matrix K (3×3), the rotation matrix R (3×3) and the translation vector t (3×1), i.e. P = K[R | t].
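The composition of the projection matrix and the homogeneous projection it defines can be sketched as follows; the focal length and principal point used in the usage example are made-up placeholder values.

```python
import numpy as np

def projection_matrix(K, R, t):
    """Compose the 3x4 projection matrix P = K [R | t] from the 3x3
    intrinsic matrix K, 3x3 rotation R and length-3 translation t."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P, X):
    """Project the 3-D world point X = (X, Y, Z) to pixel coordinates
    via the homogeneous mapping w*(u, v, 1)^T = P (X, Y, Z, 1)^T."""
    p = P @ np.append(X, 1.0)
    return p[:2] / p[2]          # divide out the scale factor w
```

With a camera at the origin looking down +Z, focal length 100 px and principal point (50, 50), the world point (1, 2, 5) lands at pixel (70, 90), matching u = f·X/Z + c_x, v = f·Y/Z + c_y.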
During 3D warping, several reference-view pixels may map to the same position in the virtual viewpoint image; a pixel projected later can then cover a pixel projected earlier, causing serious distortion in the generated virtual view. To address this problem, the Z-buffer method is adopted to compare depth values and choose the pixel with the smaller depth value as the required pixel.
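The per-pixel warping loop with the Z-buffer test can be sketched as below. This is a simplified sketch under assumed conventions: R and t are taken to map reference-camera coordinates into virtual-camera coordinates, depth is a per-pixel metric depth for the reference view, and nearest-pixel rounding stands in for any sub-pixel handling.

```python
import numpy as np

def forward_warp(color, depth, K_ref, K_virt, R, t):
    """Forward-warp every reference pixel into the virtual view.
    A Z-buffer keeps the nearest surface whenever several source
    pixels land on the same target pixel."""
    h, w = depth.shape
    out = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf)
    K_ref_inv = np.linalg.inv(K_ref)
    for v in range(h):
        for u in range(w):
            z = depth[v, u]
            if z <= 0:
                continue                      # no depth: nothing to warp
            # back-project the pixel to a 3-D point in the reference frame
            X = z * (K_ref_inv @ np.array([u, v, 1.0]))
            Xv = R @ X + t                    # into the virtual camera frame
            if Xv[2] <= 0:
                continue                      # behind the virtual camera
            p = K_virt @ Xv                   # project onto the image plane
            uu = int(round(p[0] / p[2]))
            vv = int(round(p[1] / p[2]))
            if 0 <= uu < w and 0 <= vv < h and Xv[2] < zbuf[vv, uu]:
                zbuf[vv, uu] = Xv[2]          # Z-buffer: keep the nearer pixel
                out[vv, uu] = color[v, u]
    return out, zbuf
```

Positions never reached by any source pixel stay at zero and at infinite z-buffer depth; these are exactly the holes handled by the fusion and filling steps that follow.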
(3) Fuse the two virtual viewpoint images to eliminate most of the holes and generate one initial virtual view.
In the process of warping a reference image into the virtual viewpoint image, regions that are invisible in the reference image can become visible in the destination image; such regions cannot find corresponding pixel information in the reference image. Therefore the two virtual views are fused using distance-based weights, giving the fused initial virtual view. The fusion formula is:

I(u, v) = α·I_L(u_L, v_L) + (1 − α)·I_R(u_R, v_R),

where α is the distance weight determined by the position of the virtual viewpoint between the two reference viewpoints, I(u, v) is the pixel value of the virtual viewpoint image at coordinates (u, v), I_L and I_R are the reference images of the two neighbouring cameras, (u_L, v_L) and (u_R, v_R) are the points corresponding to the virtual-image pixel (u, v) in the target images generated from the two reference images, and Z_R and Z_L are the depth values at coordinates (u_R, v_R) in I_R and (u_L, v_L) in I_L, respectively.
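The fusion step can be sketched as follows. The boolean hole masks and the scalar weight `alpha` are assumptions of this sketch: `alpha` stands in for the distance weight, and pixels empty in both warped views are left at zero for the later filling stage.

```python
import numpy as np

def fuse(left, right, hole_l, hole_r, alpha):
    """Blend two warped virtual views into one initial view.
    hole_l / hole_r are True where the corresponding warped view has
    no information; alpha in [0, 1] is the distance weight of the
    left view (the virtual camera's relative position on the baseline)."""
    out = alpha * left + (1.0 - alpha) * right     # both views visible
    out = np.where(hole_l & ~hole_r, right, out)   # hole on the left: take right
    out = np.where(hole_r & ~hole_l, left, out)    # hole on the right: take left
    out = np.where(hole_l & hole_r, 0.0, out)      # still a hole: left for filling
    return out
```

With `alpha = 0.5` the visible pixels are simply averaged, while one-sided holes fall back to the single view that saw them; only mutual holes survive into step (4).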
(4) Fill the holes of the initial virtual view with a method based on foreground-background separation, obtaining the final virtual viewpoint image.
Image fusion eliminates most of the holes in the virtual view, but some holes lie in regions invisible in both reference images and cannot be eliminated by image fusion. To perform high-quality hole filling while avoiding filling foreground pixels into the background, the following hole-filling method is designed:
First, using the depth values of the fused virtual view's depth map, a threshold γ is chosen and the depth map is partitioned, i.e. the background region and the foreground objects are separated from each other. Then the separated background region is filled by neighbourhood interpolation, so that the background parts that were holes are filled with pixels of the adjacent background region. Finally, the foreground region is placed back into the virtual view; to ensure that all holes are filled, the whole virtual view is hole-filled once more by neighbourhood interpolation, giving the final virtual view.
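The background-restricted filling can be sketched as below. Two simplifications are assumptions of this sketch, not details fixed by the patent: the neighbourhood interpolation is restricted to a horizontal scan per row, and the background mask is derived by thresholding the fused depth map at gamma under the convention that larger depth-map values are nearer (foreground).

```python
import numpy as np

def fill_holes(view, depth, holes, gamma):
    """Fill hole pixels from background neighbours only, so foreground
    colours are never smeared into disoccluded background regions."""
    filled = view.astype(float).copy()
    background = depth < gamma                 # threshold gamma separates layers
    h, w = view.shape
    for y in range(h):
        for x in range(w):
            if not holes[y, x]:
                continue
            # nearest non-hole background neighbours on this row
            left = next((filled[y, i] for i in range(x - 1, -1, -1)
                         if not holes[y, i] and background[y, i]), None)
            right = next((filled[y, i] for i in range(x + 1, w)
                          if not holes[y, i] and background[y, i]), None)
            vals = [v for v in (left, right) if v is not None]
            if vals:
                filled[y, x] = sum(vals) / len(vals)  # neighbourhood interpolation
    return filled
```

A hole row bounded by background values 10 and 20 fills to their mean, 15; foreground pixels are skipped as interpolation sources, matching the goal of the separation step. A final unrestricted interpolation pass (not shown) would catch any pixel still empty.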
To make the effect of the embodiment comparable, the cam3 and cam5 viewpoints of the Ballet dataset are used to render the image of the cam4 viewpoint. A magnified part of a virtual viewpoint view rendered without the method of the embodiment is shown in Fig. 1, a magnified part of a virtual viewpoint view rendered with the method of the embodiment is shown in Fig. 2, and the full virtual viewpoint view rendered with the method of the embodiment is shown in Fig. 3.
Fig. 1 is a magnified part of a virtual viewpoint image rendered directly without the method of the invention; in the marked region, foreground pixels have been filled into the background, degrading the visual effect of the virtual view.
Fig. 2 is a magnified part of a virtual viewpoint image rendered with the method of the invention; in the marked region, filling foreground pixels into the background has been avoided, and the rendering quality of the virtual view is better.
Fig. 3 is the full virtual viewpoint view rendered with the method of the invention; as can be seen from the figure, a good virtual viewpoint rendering effect is obtained.

Claims (4)

1. A virtual viewpoint rendering method, characterized in that it comprises the following steps:
(1) preprocessing the depth maps with a boundary reconstruction algorithm based on edge detection;
(2) obtaining the virtual viewpoint views from the two-viewpoint reference views and their corresponding depth maps through three-dimensional image warping:
when rendering the virtual viewpoint image, the 3D image warping method is adopted; using the depth images and the calibrated camera parameters, the pixels of a reference image are mapped into the target image: each pixel of the source reference image is first re-projected, using its depth and the reference camera's parameters, to its corresponding three-dimensional point; these three-dimensional points are then projected onto the virtual camera's image plane, using the virtual viewpoint's position and camera parameters, to form the virtual viewpoint image;
(3) fusing the two virtual viewpoint images to eliminate most of the holes and generate one initial virtual view:
the two virtual views are fused using distance-based weights, giving the fused initial virtual view;
(4) filling the holes of the initial virtual view with a method based on foreground-background separation, obtaining the final virtual viewpoint image.
2. the method for claim 1 is characterized in that, the said employing in the step (1) is carried out pre-service based on the border restructing algorithm of rim detection to depth map and is specially:
At first utilize the canny operator to carry out rim detection, extract the edge of foreground object in the depth map, carry out depth boundary reconstruct filtering then the edge that extracts is handled, adopt following cost function: J depth map Recon(k)=J F(k)+J S(k)+J C(k),, find best brightness value to replace the brightness value of current pixel with maximum cost to the neighbor cost value of the edge calculations current pixel that extracts, wherein, the brightness value of k represent pixel, J ReconCost function when the represent pixel value is k, first filial generation valency function J FRepresent the frequency of occurrences of each pixel, it is obtained by the occurrence number of pixel; Second sub-cost function J SRepresent current pixel point to be adjacent the similarity of the brightness between the pixel; The 3rd sub-cost function J CRepresent between current pixel and the neighbor near degree, last, for the depth map after the smooth boundary reconstruct, utilize medium filtering and morphology to handle the edge of depth of smoothness figure, thereby acquisition quality depth map preferably.
3. the method for claim 1 is characterized in that, said in the step (4) carries out hole-filling to the utilization of initial virtual view based on the method for background separation, obtains final virtual visual point image and is specially:
At first utilize the depth value of the depth map that merges the back virtual view, choose threshold value γ, depth map is divided, be about to background area and foreground object and be separated from each other; Fill through the method for neighborhood interpolation isolated then background area; At last foreground area is reset in the virtual view,, utilizes the method for neighborhood interpolation that whole virtual view is carried out hole-filling again, obtain final virtual view for guaranteeing that all cavities are all filled.
4. the method for claim 1 is characterized in that, the said border restructing algorithm cost function in the step (1) is specially: first filial generation valency function J FDefine as follows:
N OC ( k ) = Σ i = 0 n × n - 1 δ [ k , w n × n ( i ) ] ,
Figure FDA0000120991300000012
J F ( k ) = N OC ( k ) - N OC ( min ) N OC ( max ) - N OC ( min )
Wherein, w N * n(i) represent window w N * nThe brightness value of middle pixel i, N OC(max) and N OC(min) represent minimum and maximum N respectively OCValue;
The second sub-cost function J_S is defined as follows:

$$S(k) = \left|I_{cur} - I_k\right|$$

$$J_S(k) = \frac{S(\max) - S(k)}{S(\max) - S(\min)}$$

where $S(\max)$ and $S(\min)$ represent the maximum and minimum $S(k)$ values, respectively;
The third sub-cost function J_C is defined as follows:

$$C(k) = \frac{1}{N_{OC}(k)} \sum_{i=0}^{N_{OC}(k)-1} \sqrt{\left(x_{cur} - x_i\right)^2 + \left(y_{cur} - y_i\right)^2}$$

$$J_C(k) = \frac{C(\max) - C(k)}{C(\max) - C(\min)}$$

where $C(\max)$ and $C(\min)$ represent the maximum and minimum $C(k)$ values, respectively.
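The three sub-cost definitions above can be evaluated directly for a small window. The sketch below assumes at least two distinct brightness values in the window (otherwise the min-max normalizations would divide by zero) and takes the window center as the current pixel; both are assumptions of this illustration rather than statements of the claim.

```python
import numpy as np

def sub_costs(window, center_value):
    """Evaluate J_F, J_S, J_C for every candidate value k in an n*n window.

    Returns three dicts mapping each brightness value k found in the
    flattened window to its normalized sub-cost, following the min-max
    normalized definitions of the claim term by term.
    """
    n = int(np.sqrt(window.size))
    ys, xs = np.divmod(np.arange(window.size), n)
    x_cur = y_cur = n // 2  # current pixel taken as the window center
    ks = np.unique(window)

    # N_OC(k): number of occurrences of value k in the window
    n_oc = np.array([(window == k).sum() for k in ks], float)
    # S(k) = |I_cur - I_k|: brightness difference to the current pixel
    s = np.abs(ks.astype(float) - float(center_value))
    # C(k): mean Euclidean distance from the current pixel to pixels of value k
    c = np.array([np.sqrt((x_cur - xs[window == k]) ** 2 +
                          (y_cur - ys[window == k]) ** 2).mean() for k in ks])

    j_f = (n_oc - n_oc.min()) / (n_oc.max() - n_oc.min())
    j_s = (s.max() - s) / (s.max() - s.min())
    j_c = (c.max() - c) / (c.max() - c.min())
    return dict(zip(ks, j_f)), dict(zip(ks, j_s)), dict(zip(ks, j_c))
```

Note the opposite orientations: J_F rewards frequent values, while J_S and J_C reward values that are similar in brightness and spatially close to the current pixel, so an isolated outlier scores high on J_S only.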
CN 201110428998 2011-12-16 2011-12-16 Virtual viewpoint rendering method Active CN102592275B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110428998 CN102592275B (en) 2011-12-16 2011-12-16 Virtual viewpoint rendering method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110428998 CN102592275B (en) 2011-12-16 2011-12-16 Virtual viewpoint rendering method

Publications (2)

Publication Number Publication Date
CN102592275A true CN102592275A (en) 2012-07-18
CN102592275B CN102592275B (en) 2013-12-25

Family

ID=46480865

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110428998 Active CN102592275B (en) 2011-12-16 2011-12-16 Virtual viewpoint rendering method

Country Status (1)

Country Link
CN (1) CN102592275B (en)

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102819843A (en) * 2012-08-08 2012-12-12 天津大学 Stereo image parallax estimation method based on boundary control belief propagation
CN102917175A (en) * 2012-09-13 2013-02-06 西北工业大学 Sheltering multi-target automatic image matting method based on camera array synthetic aperture imaging
CN103345736A (en) * 2013-05-28 2013-10-09 天津大学 Virtual viewpoint rendering method
CN103414909A (en) * 2013-08-07 2013-11-27 电子科技大学 Hole filling method for three-dimensional video virtual viewpoint synthesis
CN103873867A (en) * 2014-03-31 2014-06-18 清华大学深圳研究生院 Free viewpoint video depth map distortion prediction method and free viewpoint video depth map coding method
CN104112275A (en) * 2014-07-15 2014-10-22 青岛海信电器股份有限公司 Image segmentation method and device
CN105261008A (en) * 2015-09-15 2016-01-20 天津大学 Method for estimating boundary protection depth video based on structure information
CN105898331A (en) * 2016-05-12 2016-08-24 天津大学 Bit allocation and rate control method for deep video coding
CN106028020A (en) * 2016-06-21 2016-10-12 电子科技大学 Multi-direction prediction based virtual visual-angle image cavity filling method
CN106355552A (en) * 2016-08-27 2017-01-25 天津大学 Depth map sampling method based on virtual-view drawing measurement,
CN106791773A (en) * 2016-12-30 2017-05-31 浙江工业大学 A kind of novel view synthesis method based on depth image
WO2017201751A1 (en) * 2016-05-27 2017-11-30 北京大学深圳研究生院 Hole filling method and device for virtual viewpoint video or image, and terminal
CN108537835A (en) * 2018-03-29 2018-09-14 香港光云科技有限公司 Hologram image generation, display methods and equipment
CN109003228A (en) * 2018-07-16 2018-12-14 杭州电子科技大学 A kind of micro- big visual field automatic Mosaic imaging method of dark field
CN109565605A (en) * 2016-08-10 2019-04-02 松下电器(美国)知识产权公司 Technique for taking generation method and image processor
CN109712067A (en) * 2018-12-03 2019-05-03 北京航空航天大学 A kind of virtual viewpoint rendering method based on depth image
CN109769109A (en) * 2019-03-05 2019-05-17 东北大学 Method and system based on virtual view synthesis drawing three-dimensional object
CN109905719A (en) * 2013-03-15 2019-06-18 谷歌有限责任公司 Generate the video with multiple viewpoints
CN110246146A (en) * 2019-04-29 2019-09-17 北京邮电大学 Full parallax light field content generating method and device based on multiple deep image rendering
CN110709893A (en) * 2017-11-01 2020-01-17 谷歌有限责任公司 High quality layered depth image texture rasterization
CN111405265A (en) * 2020-03-24 2020-07-10 杭州电子科技大学 Novel image drawing technology
CN111429513A (en) * 2020-04-26 2020-07-17 广西师范大学 Light field drawing method capable of optimizing visual occlusion scene
CN111667438A (en) * 2019-03-07 2020-09-15 阿里巴巴集团控股有限公司 Video reconstruction method, system, device and computer readable storage medium
CN112470189A (en) * 2018-04-17 2021-03-09 上海科技大学 Occlusion cancellation for light field systems
CN112565623A (en) * 2020-12-09 2021-03-26 深圳市达特照明股份有限公司 Dynamic image display system
CN112637582A (en) * 2020-12-09 2021-04-09 吉林大学 Three-dimensional fuzzy surface synthesis method for monocular video virtual view driven by fuzzy edge
CN113179396A (en) * 2021-03-19 2021-07-27 杭州电子科技大学 Double-viewpoint stereo video fusion method based on K-means model
CN113223132A (en) * 2021-04-21 2021-08-06 浙江大学 Indoor scene virtual roaming method based on reflection decomposition
CN113507599A (en) * 2021-07-08 2021-10-15 四川纵横六合科技股份有限公司 Education cloud service platform based on big data analysis
CN113837979A (en) * 2021-09-28 2021-12-24 北京奇艺世纪科技有限公司 Live image synthesis method and device, terminal device and readable storage medium
WO2022042413A1 (en) * 2020-08-24 2022-03-03 阿里巴巴集团控股有限公司 Image reconstruction method and apparatus, and computer readable storage medium, and processor
WO2022052620A1 (en) * 2020-09-10 2022-03-17 北京达佳互联信息技术有限公司 Image generation method and electronic device
WO2022110514A1 (en) * 2020-11-27 2022-06-02 叠境数字科技(上海)有限公司 Image interpolation method and apparatus employing rgb-d image and multi-camera system
WO2022155950A1 (en) * 2021-01-25 2022-07-28 京东方科技集团股份有限公司 Virtual viewpoint synthesis method, electronic device and computer readable medium
CN115908162A (en) * 2022-10-28 2023-04-04 中山职业技术学院 Virtual viewpoint generation method and system based on background texture recognition
CN116934984A (en) * 2023-09-19 2023-10-24 成都中轨轨道设备有限公司 Intelligent terminal and method for constructing virtual panoramic scene space

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271583A (en) * 2008-04-28 2008-09-24 清华大学 Fast image drafting method based on depth drawing

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271583A (en) * 2008-04-28 2008-09-24 清华大学 Fast image drafting method based on depth drawing

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
EDDIE COOKE, ET AL.: "Multi-view synthesis: A novel view creation approach for free viewpoint video", 《SIGNAL PROCESSING: IMAGE COMMUNICATION》 *
YUJI MORI, ET AL.: "View generation with 3D warping using depth information for FTV", 《SIGNALPROCESSING: IMAGE COMMUNICATION》 *
刘然,等: "基于图像重投影的视图合成", 《计算机应用》 *
许小艳,等: "基于深度图像绘制的视图合成", 《系统仿真学报》 *

Cited By (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102819843B (en) * 2012-08-08 2014-10-29 天津大学 Stereo image parallax estimation method based on boundary control belief propagation
CN102819843A (en) * 2012-08-08 2012-12-12 天津大学 Stereo image parallax estimation method based on boundary control belief propagation
CN102917175A (en) * 2012-09-13 2013-02-06 西北工业大学 Sheltering multi-target automatic image matting method based on camera array synthetic aperture imaging
CN109905719B (en) * 2013-03-15 2021-05-07 谷歌有限责任公司 Generating video with multiple viewpoints
CN109905719A (en) * 2013-03-15 2019-06-18 谷歌有限责任公司 Generate the video with multiple viewpoints
CN103345736B (en) * 2013-05-28 2016-08-31 天津大学 A kind of virtual viewpoint rendering method
CN103345736A (en) * 2013-05-28 2013-10-09 天津大学 Virtual viewpoint rendering method
CN103414909A (en) * 2013-08-07 2013-11-27 电子科技大学 Hole filling method for three-dimensional video virtual viewpoint synthesis
CN103414909B (en) * 2013-08-07 2015-08-05 电子科技大学 A kind of hole-filling method being applied to dimensional video virtual viewpoint synthesis
CN103873867A (en) * 2014-03-31 2014-06-18 清华大学深圳研究生院 Free viewpoint video depth map distortion prediction method and free viewpoint video depth map coding method
CN103873867B (en) * 2014-03-31 2017-01-25 清华大学深圳研究生院 Free viewpoint video depth map distortion prediction method and free viewpoint video depth map coding method
CN104112275A (en) * 2014-07-15 2014-10-22 青岛海信电器股份有限公司 Image segmentation method and device
CN104112275B (en) * 2014-07-15 2017-07-04 青岛海信电器股份有限公司 A kind of method and device for generating viewpoint
CN105261008A (en) * 2015-09-15 2016-01-20 天津大学 Method for estimating boundary protection depth video based on structure information
CN105898331A (en) * 2016-05-12 2016-08-24 天津大学 Bit allocation and rate control method for deep video coding
WO2017201751A1 (en) * 2016-05-27 2017-11-30 北京大学深圳研究生院 Hole filling method and device for virtual viewpoint video or image, and terminal
CN106028020A (en) * 2016-06-21 2016-10-12 电子科技大学 Multi-direction prediction based virtual visual-angle image cavity filling method
CN109565605B (en) * 2016-08-10 2021-06-29 松下电器(美国)知识产权公司 Imaging technology generation method and image processing device
CN109565605A (en) * 2016-08-10 2019-04-02 松下电器(美国)知识产权公司 Technique for taking generation method and image processor
CN106355552A (en) * 2016-08-27 2017-01-25 天津大学 Depth map sampling method based on virtual-view drawing measurement,
CN106355552B (en) * 2016-08-27 2019-08-02 天津大学 A kind of depth map top sampling method based on virtual viewpoint rendering quality
CN106791773B (en) * 2016-12-30 2018-06-01 浙江工业大学 A kind of novel view synthesis method based on depth image
CN106791773A (en) * 2016-12-30 2017-05-31 浙江工业大学 A kind of novel view synthesis method based on depth image
CN110709893A (en) * 2017-11-01 2020-01-17 谷歌有限责任公司 High quality layered depth image texture rasterization
CN110709893B (en) * 2017-11-01 2024-03-15 谷歌有限责任公司 High-quality layered depth image texture rasterization method
CN108537835A (en) * 2018-03-29 2018-09-14 香港光云科技有限公司 Hologram image generation, display methods and equipment
CN112470189A (en) * 2018-04-17 2021-03-09 上海科技大学 Occlusion cancellation for light field systems
CN112470189B (en) * 2018-04-17 2024-03-29 上海科技大学 Occlusion cancellation for light field systems
CN109003228B (en) * 2018-07-16 2023-06-13 杭州电子科技大学 Dark field microscopic large-view-field automatic stitching imaging method
CN109003228A (en) * 2018-07-16 2018-12-14 杭州电子科技大学 A kind of micro- big visual field automatic Mosaic imaging method of dark field
CN109712067A (en) * 2018-12-03 2019-05-03 北京航空航天大学 A kind of virtual viewpoint rendering method based on depth image
CN109769109A (en) * 2019-03-05 2019-05-17 东北大学 Method and system based on virtual view synthesis drawing three-dimensional object
CN111667438A (en) * 2019-03-07 2020-09-15 阿里巴巴集团控股有限公司 Video reconstruction method, system, device and computer readable storage medium
CN111667438B (en) * 2019-03-07 2023-05-26 阿里巴巴集团控股有限公司 Video reconstruction method, system, device and computer readable storage medium
CN110246146A (en) * 2019-04-29 2019-09-17 北京邮电大学 Full parallax light field content generating method and device based on multiple deep image rendering
CN110246146B (en) * 2019-04-29 2021-07-30 北京邮电大学 Full-parallax light field content generation method and device based on multiple-time depth image rendering
CN111405265A (en) * 2020-03-24 2020-07-10 杭州电子科技大学 Novel image drawing technology
CN111429513B (en) * 2020-04-26 2022-09-13 广西师范大学 Light field drawing method capable of optimizing visual occlusion scene
CN111429513A (en) * 2020-04-26 2020-07-17 广西师范大学 Light field drawing method capable of optimizing visual occlusion scene
WO2022042413A1 (en) * 2020-08-24 2022-03-03 阿里巴巴集团控股有限公司 Image reconstruction method and apparatus, and computer readable storage medium, and processor
WO2022052620A1 (en) * 2020-09-10 2022-03-17 北京达佳互联信息技术有限公司 Image generation method and electronic device
WO2022110514A1 (en) * 2020-11-27 2022-06-02 叠境数字科技(上海)有限公司 Image interpolation method and apparatus employing rgb-d image and multi-camera system
CN112637582A (en) * 2020-12-09 2021-04-09 吉林大学 Three-dimensional fuzzy surface synthesis method for monocular video virtual view driven by fuzzy edge
CN112565623A (en) * 2020-12-09 2021-03-26 深圳市达特照明股份有限公司 Dynamic image display system
CN112637582B (en) * 2020-12-09 2021-10-08 吉林大学 Three-dimensional fuzzy surface synthesis method for monocular video virtual view driven by fuzzy edge
CN115176459A (en) * 2021-01-25 2022-10-11 京东方科技集团股份有限公司 Virtual viewpoint synthesis method, electronic device, and computer-readable medium
US20230162338A1 (en) * 2021-01-25 2023-05-25 Beijing Boe Optoelectronics Technology Co., Ltd. Virtual viewpoint synthesis method, electronic apparatus, and computer readable medium
WO2022155950A1 (en) * 2021-01-25 2022-07-28 京东方科技集团股份有限公司 Virtual viewpoint synthesis method, electronic device and computer readable medium
CN113179396A (en) * 2021-03-19 2021-07-27 杭州电子科技大学 Double-viewpoint stereo video fusion method based on K-means model
CN113223132A (en) * 2021-04-21 2021-08-06 浙江大学 Indoor scene virtual roaming method based on reflection decomposition
CN113507599A (en) * 2021-07-08 2021-10-15 四川纵横六合科技股份有限公司 Education cloud service platform based on big data analysis
CN113837979A (en) * 2021-09-28 2021-12-24 北京奇艺世纪科技有限公司 Live image synthesis method and device, terminal device and readable storage medium
CN113837979B (en) * 2021-09-28 2024-03-29 北京奇艺世纪科技有限公司 Live image synthesis method, device, terminal equipment and readable storage medium
CN115908162A (en) * 2022-10-28 2023-04-04 中山职业技术学院 Virtual viewpoint generation method and system based on background texture recognition
CN115908162B (en) * 2022-10-28 2023-07-04 中山职业技术学院 Virtual viewpoint generation method and system based on background texture recognition
CN116934984B (en) * 2023-09-19 2023-12-08 成都中轨轨道设备有限公司 Intelligent terminal and method for constructing virtual panoramic scene space
CN116934984A (en) * 2023-09-19 2023-10-24 成都中轨轨道设备有限公司 Intelligent terminal and method for constructing virtual panoramic scene space

Also Published As

Publication number Publication date
CN102592275B (en) 2013-12-25

Similar Documents

Publication Publication Date Title
CN102592275B (en) Virtual viewpoint rendering method
US10846913B2 (en) System and method for infinite synthetic image generation from multi-directional structured image array
CN109003325B (en) Three-dimensional reconstruction method, medium, device and computing equipment
CN113706714B (en) New view angle synthesizing method based on depth image and nerve radiation field
CN103400409B (en) A kind of coverage 3D method for visualizing based on photographic head attitude Fast estimation
US20170148186A1 (en) Multi-directional structured image array capture on a 2d graph
US9485497B2 (en) Systems and methods for converting two-dimensional images into three-dimensional images
CN105279789B (en) A kind of three-dimensional rebuilding method based on image sequence
CN103810685A (en) Super resolution processing method for depth image
CN106060509B (en) Introduce the free view-point image combining method of color correction
Bleyer et al. Temporally consistent disparity maps from uncalibrated stereo videos
CN103761766A (en) Three-dimensional object model texture mapping algorithm based on tone mapping and image smoothing
CN109461197B (en) Cloud real-time drawing optimization method based on spherical UV and re-projection
Lu et al. A survey on multiview video synthesis and editing
Chang et al. A review on image-based rendering
Gu et al. Ue4-nerf: Neural radiance field for real-time rendering of large-scale scene
Lu et al. Depth-based view synthesis using pixel-level image inpainting
CN103945209A (en) DIBR method based on block projection
CN110149508A (en) A kind of array of figure generation and complementing method based on one-dimensional integrated imaging system
Sun et al. Seamless view synthesis through texture optimization
CN105163104A (en) Intermediate view synthesis method without generating cavity
CN117501313A (en) Hair rendering system based on deep neural network
Wang et al. Identifying and filling occlusion holes on planar surfaces for 3-D scene editing
Chen et al. A quality controllable multi-view object reconstruction method for 3D imaging systems
Lai et al. Surface-based background completion in 3D scene

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant