WO2020187339A1 - Naked-eye 3D virtual viewpoint image generation method and portable terminal - Google Patents

Naked-eye 3D virtual viewpoint image generation method and portable terminal

Info

Publication number
WO2020187339A1
Authority
WO
WIPO (PCT)
Prior art keywords
reference image
virtual viewpoint
depth map
image
filling
Prior art date
Application number
PCT/CN2020/090416
Other languages
English (en)
French (fr)
Inventor
高瑞东
谢亮
Original Assignee
影石创新科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 影石创新科技股份有限公司 filed Critical 影石创新科技股份有限公司
Publication of WO2020187339A1 publication Critical patent/WO2020187339A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/271Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays

Definitions

  • the invention belongs to the field of image processing, and in particular relates to a method for generating a naked eye 3D virtual viewpoint image and a portable terminal.
  • the prior art naked-eye 3D virtual viewpoint images are usually generated in the following manner: cameras capture two reference images, left and right; the two reference images are rectified; a stereo matching algorithm is used to obtain the depth maps; and a depth-based virtual viewpoint rendering algorithm synthesizes images under different virtual viewpoints, generating the virtual viewpoint images.
  • however, because of projection errors in the computation and object occlusion, the synthesized virtual viewpoint image mainly suffers from two problems. ① Cracks: projection errors during reprojection leave some pixels without assigned pixel values, so the synthesized virtual viewpoint image contains cracks. ② Voids: owing to the front-to-back occlusion relationships between objects in real space, the same object may appear in only one reference image, or in neither; in such cases the depth value of the occluded object cannot be computed accurately. In the synthesized virtual viewpoint image, because of the camera position offset, large voids appear where the occluded object should be.
  • the purpose of the present invention is to provide a naked-eye 3D virtual viewpoint image generation method, a computer-readable storage medium and a portable terminal, aiming to solve the problem of cracks and holes in the virtual viewpoint image synthesized by the prior art.
  • the present invention provides a naked-eye 3D virtual viewpoint image generation method, the method includes:
  • S103 Generate left and right virtual viewpoint images according to the left and right reference images and their depth maps respectively, and perform crack elimination processing and void filling processing in the process of generating the left and right virtual viewpoint images;
  • S104 Perform linear weighting fusion on the left and right virtual viewpoint images to obtain a naked eye 3D virtual viewpoint image;
  • the hole filling processing is specifically: segmenting foreground and background based on the depth map of the reference image to detect the regions where holes may appear in the reference image and its depth map, and filling the holes with a multi-scale window filtering algorithm to obtain the blur-filled depth map of the reference image and the blur-filled reference image;
  • the crack elimination processing is specifically: performing forward mapping on the depth map of the reference image and on the blur-filled depth map of the reference image, respectively, to eliminate the cracks.
  • the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the naked-eye 3D virtual viewpoint image generation method described in the first aspect.
  • the present invention provides a portable terminal, including:
  • one or more processors; a memory; and
  • one or more computer programs, the processor and the memory being connected by a bus, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors;
  • when the processor executes the computer program, the steps of the naked-eye 3D virtual viewpoint image generation method described in the first aspect are implemented.
  • the hole filling processing is specifically: segmenting foreground and background based on the depth map of the reference image to detect the regions where holes may appear in the reference image and its depth map, and filling the holes with a multi-scale window filtering algorithm to obtain the blur-filled depth map of the reference image and the blur-filled reference image;
  • the crack elimination processing is specifically: performing forward mapping on the depth map of the reference image and on the blur-filled depth map of the reference image, respectively, to eliminate the cracks. The synthesized virtual viewpoint image therefore contains no cracks or holes, and the image quality is good for naked-eye viewing.
  • FIG. 1 is a flowchart of a method for generating a naked-eye 3D virtual viewpoint image according to Embodiment 1 of the present invention.
  • Fig. 2 is a specific structural block diagram of a portable terminal provided in the third embodiment of the present invention.
  • the method for generating a naked-eye 3D virtual viewpoint image includes the following steps:
  • S103 Generate left and right virtual viewpoint images according to the left and right reference images and their depth maps respectively, and perform crack elimination processing and void filling processing in the process of generating the left and right virtual viewpoint images;
  • S104 Perform linear weighted fusion on the left and right virtual view point images to obtain a naked eye 3D virtual view point image.
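The linear weighted fusion of step S104 can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation: the assumption that each view's weight varies linearly with the virtual viewpoint's normalized position alpha between the left (alpha = 0) and right (alpha = 1) cameras is ours, since the text only states that the two views are linearly weighted and fused.

```python
import numpy as np

def fuse_views(left_virt, right_virt, alpha):
    """Blend the left and right virtual viewpoint images.

    alpha: assumed normalized position of the virtual viewpoint between
    the left (0.0) and right (1.0) reference cameras.
    """
    l = np.asarray(left_virt, dtype=np.float64)
    r = np.asarray(right_virt, dtype=np.float64)
    # Linear weighting: the reference view closer to the virtual position dominates.
    return (1.0 - alpha) * l + alpha * r
```

A virtual viewpoint halfway between the two cameras would simply average the two views.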
  • for the process of generating the left virtual viewpoint image or the right virtual viewpoint image, S103 specifically includes the following steps:
  • the pixel values of the pixels at the virtual viewpoint position are obtained by interpolating the neighborhood pixels at the corresponding positions in the reference image I and the blur-filled reference image I_blur, respectively, thereby obtaining the view image_virt at the virtual viewpoint position and the blur-filled view image_blur_virt.
  • since the blur-filled depth map depth_blur of the reference image no longer contains depth-discontinuous regions, and the forward-mapped blur-filled depth map depth_blur_virt at the virtual viewpoint position has also had its crack regions eliminated, the resulting blur-filled view image_blur_virt contains no void regions. The view image_virt at the virtual viewpoint position, however, contains many void regions.
  • in the fused virtual viewpoint image img_out, the occluded regions that would otherwise be voids are taken from the blurred regions of the blur-filled view image_blur_virt, while the non-occluded regions are taken from the view image_virt at the virtual viewpoint position.
  • in this way the background information fills the holes caused by occlusion, while the authenticity of the non-occluded regions is preserved and they are not blurred by the hole filling.
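The composition rule of S1035 described above can be sketched as follows. The explicit hole_mask argument is our assumption for illustration; in the patent the holes are the pixels of image_virt that received no value during forward mapping.

```python
import numpy as np

def merge_with_blur_fill(image_virt, image_blur_virt, hole_mask):
    """S1035 sketch: non-hole pixels come from the sharp virtual view,
    hole pixels from the blur-filled virtual view."""
    out = image_blur_virt.copy()
    out[~hole_mask] = image_virt[~hole_mask]  # keep authentic non-occluded pixels
    return out
```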
  • the position of the virtual viewpoint is continuously translated, and the three-dimensional translation matrix changes accordingly, so as to obtain a series of virtual viewpoint images under different viewpoints.
  • S1031 specifically includes the following steps:
  • S10311 Perform boundary detection on the depth map depth of the reference image to obtain the main boundary.
  • S10311 may specifically be: detecting the main boundaries in the depth map depth of the reference image with the Sobel operator, and obtaining a sequence of n depth values of the main boundary regions ListD{D1,D2,D3,...,Dn}, where n is a natural number.
  • S10312 Segment the reference image I and the depth map depth of the reference image according to the main boundary.
  • S10312 may specifically be:
  • the reference image I and the depth map depth of the reference image are segmented separately into n local reference images with foreground and background removed, ListI{I1, I2, I3,...,In}, and the corresponding depth map sequence of the local reference images, Listd{d1,d2,d3,...,dn}.
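The boundary detection of S10311 and the threshold segmentation of S10312 can be sketched as below. The plain 3x3 Sobel magnitude and the "keep pixels at or beyond each boundary depth" layering rule are our simplifications of the patent's description, not its exact procedure.

```python
import numpy as np

def sobel_mag(depth):
    """3x3 Sobel gradient magnitude, a simple depth-boundary detector."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(depth.astype(float), 1, mode="edge")
    h, w = depth.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

def layer_masks(depth, boundary_depths):
    """Split the depth map into layers: layer k keeps only pixels whose
    depth is at least the k-th boundary depth (ascending thresholds;
    this layering strategy is our assumption)."""
    return [depth >= t for t in sorted(boundary_depths)]
```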
  • S10313 Perform foreground hole filling on the local reference image sequence and the depth map sequence of the local reference image respectively.
  • S10313 may specifically be:
  • the foreground region of each local reference image and of its depth map is filled with 0, and the foreground region is then filled with neighborhood background information using a multi-scale window filtering algorithm.
  • Step 1: set the initial filter window size to the image width, and mean-filter the reference image to obtain a first filter result map;
  • Step 2: halve the filter window size and mean-filter the reference image to obtain a second filter result map; if unfilled void regions remain in the second filter result map, fill them with the pixels at the corresponding positions in the first filter result map;
  • Step 3: repeat step 2, halving the filter window size each time and filling any remaining void regions with the pixels at the corresponding positions in the previous filter result map; stop the loop when the filter window size is less than 3.
  • the mean filtering of the reference image can be computed quickly with the integral image method, and the integral image also lends itself to parallel acceleration.
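Steps 1 to 3 can be sketched with an integral image, as the text suggests. This is a compact, unoptimized sketch under our own assumptions (holes are encoded as zero-count pixels, and only valid pixels are averaged in each window; the patent does not spell out these details), keeping the coarser fill wherever a finer window finds no data.

```python
import numpy as np

def box_mean(valid_vals, valid_cnt, win):
    """Mean of valid (non-hole) pixels in a win x win box via integral images."""
    def integral(a):
        return np.pad(a, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    r = win // 2
    h, w = valid_vals.shape
    iv, ic = integral(valid_vals), integral(valid_cnt)
    out = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - r), min(h, i + r + 1)
            j0, j1 = max(0, j - r), min(w, j + r + 1)
            s = iv[i1, j1] - iv[i0, j1] - iv[i1, j0] + iv[i0, j0]
            c = ic[i1, j1] - ic[i0, j1] - ic[i1, j0] + ic[i0, j0]
            out[i, j] = s / c if c else 0.0
            cnt[i, j] = c
    return out, cnt

def fill_holes_multiscale(img, hole_mask):
    """Steps 1-3: start with window = image width, halve until < 3,
    keeping the coarser fill wherever the finer window has no data."""
    vals = np.where(hole_mask, 0.0, img.astype(float))
    cnts = (~hole_mask).astype(float)
    win = img.shape[1]
    filled, _ = box_mean(vals, cnts, win)
    while win // 2 >= 3:
        win //= 2
        f, c = box_mean(vals, cnts, win)
        filled = np.where(c > 0, f, filled)  # keep coarser fill where no data
    out = img.astype(float).copy()
    out[hole_mask] = filled[hole_mask]
    return out
```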
  • after the filling of S10313, the void regions of the images in the local reference image sequence ListI are all filled with the corresponding background information, giving the filled local reference image sequence ListI_fill{I1_fill, I2_fill, I3_fill,...,In_fill}; pixels closer to the void boundary receive larger weights, and pixels farther from the void boundary receive smaller weights.
  • the fusion may specifically copy the filled void region of the previous layer to the next layer, for example copying the filled void region of In-1_fill to the corresponding position in In_fill, and so on until all sequences are fused, producing a view in which all foreground regions are blur-filled.
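The layer-by-layer fusion just described can be sketched as follows. Representing the layers as a list ordered from the first segmentation outward, each with a mask of its filled hole region, is our illustration choice rather than something the patent prescribes.

```python
import numpy as np

def fuse_layers(filled_layers, hole_masks):
    """Fuse the hole-filled layer sequence into one image: the filled
    hole region of each previous layer is carried into the next, so the
    final view has every foreground region blur-filled."""
    out = filled_layers[0].copy()
    acc = hole_masks[0].copy()
    for img, m in zip(filled_layers[1:], hole_masks[1:]):
        nxt = img.copy()
        nxt[acc] = out[acc]  # copy previous layers' filled regions forward
        out, acc = nxt, acc | m
    return out
```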
  • S1032 specifically includes the following steps:
  • the camera intrinsic matrix camK is obtained by camera calibration: camK = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]], where fx, fy are the focal lengths of the camera and (cx, cy) are the principal point coordinates (usually at the image center). Given the two-dimensional image coordinates (U, V) and depth value D of a pixel in the reference image, the pixel is projected to the three-dimensional space point P(X, Y, Z), where X = (U - cx)·D/fx, Y = (V - cy)·D/fy, Z = D.
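Under the standard pinhole model, the mapping from a reference pixel (U, V) with depth D to a pixel in the translated virtual view can be sketched as below. The formulas follow the usual intrinsic-matrix conventions (fx, fy focal lengths, cx, cy principal point); treating the viewpoint change as a pure 3-vector translation added to P is our reading of the surrounding description.

```python
import numpy as np

def backproject(u, v, d, fx, fy, cx, cy):
    """Lift pixel (u, v) with depth d to the camera-space point P = (X, Y, Z)."""
    return np.array([(u - cx) * d / fx, (v - cy) * d / fy, d])

def reproject(p, fx, fy, cx, cy):
    """Project a camera-space point back to pixel coordinates."""
    x, y, z = p
    return fx * x / z + cx, fy * y / z + cy

def map_to_virtual(u, v, d, t, fx, fy, cx, cy):
    """Forward-map a reference pixel into the virtual view translated by t."""
    p = backproject(u, v, d, fx, fy, cx, cy) + np.asarray(t, dtype=float)
    return reproject(p, fx, fy, cx, cy)
```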
  • the panel method is used for the mapping: a bilinear interpolation algorithm computes, from a region in the depth map of the reference image, the depth value of each pixel in the corresponding region of the depth map at the virtual viewpoint position, and likewise computes, from a region in the blur-filled depth map of the reference image, the depth value of each pixel in the corresponding region of the blur-filled depth map at the virtual viewpoint position.
  • D_dstR = f(D_srcR), where D_dstR represents the depth value of the target region, D_srcR the depth value of the source region, and f() the bilinear interpolation algorithm.
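The interpolation function f() can be illustrated concretely as ordinary bilinear interpolation within a 2x2 neighborhood; the parameter names and fractional offsets (ax, ay) here are ours, chosen only for illustration.

```python
def bilerp(d00, d01, d10, d11, ax, ay):
    """Bilinear interpolation of a 2x2 depth region at fractional
    offsets ax (horizontal) and ay (vertical), both in [0, 1]."""
    top = d00 * (1 - ax) + d01 * ax   # interpolate along the top row
    bot = d10 * (1 - ax) + d11 * ax   # interpolate along the bottom row
    return top * (1 - ay) + bot * ay  # blend the two rows vertically
```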
  • the depth map depth_virt of the image at the virtual viewpoint position and the blur-filled depth map depth_blur_virt at the virtual viewpoint position, both computed with the panel method, effectively eliminate the cracks that would appear in depth-continuous regions.
  • the hole filling processing is specifically: segmenting foreground and background based on the depth map of the reference image to detect the regions where holes may appear in the reference image and its depth map, and filling the holes with a multi-scale window filtering algorithm to obtain the blur-filled depth map of the reference image and the blur-filled reference image;
  • the crack elimination processing is specifically: performing forward mapping on the depth map of the reference image and on the blur-filled depth map of the reference image, respectively, to eliminate the cracks. The synthesized virtual viewpoint image therefore contains no cracks or holes, and the image quality is good for naked-eye viewing.
  • the second embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the naked-eye 3D virtual viewpoint image generation method provided in the first embodiment of the present invention.
  • the computer-readable storage medium may be a non-transitory computer-readable storage medium.
  • FIG. 2 shows a specific structural block diagram of a portable terminal provided in the third embodiment of the present invention.
  • a portable terminal 100 includes: one or more processors 101, a memory 102, and one or more computer programs, where the processor 101 and the memory 102 are connected by a bus, and the one or more computer programs are stored in the memory 102 and configured to be executed by the one or more processors 101;
  • the processor 101, when executing the computer program, implements the steps of the naked-eye 3D virtual viewpoint image generation method provided in the first embodiment of the present invention.
  • all or part of the steps in the methods of the above embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium can include: read-only memory (ROM), random access memory (RAM), a magnetic disk or an optical disc, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

A naked-eye 3D virtual viewpoint image generation method and a portable terminal. The method includes: acquiring two reference images, left and right, captured by cameras, and rectifying them; obtaining the depth maps of the two rectified reference images respectively by means of a stereo matching algorithm; generating left and right virtual viewpoint images from the left and right reference images and their depth maps respectively, performing crack elimination processing and hole filling processing in the process of generating the two virtual viewpoint images; and performing linear weighted fusion of the left and right virtual viewpoint images to obtain the naked-eye 3D virtual viewpoint image. The virtual viewpoint image synthesized by this method contains no cracks or holes, and the image quality is good for naked-eye viewing.

Description

Naked-eye 3D virtual viewpoint image generation method and portable terminal
Technical Field
The present invention belongs to the field of image processing, and in particular relates to a naked-eye 3D virtual viewpoint image generation method and a portable terminal.
Background Art
Naked-eye 3D virtual viewpoint images in the prior art are usually generated as follows: cameras capture two reference images, left and right, which are then rectified; a stereo matching algorithm obtains the depth maps; and a depth-based virtual viewpoint rendering algorithm synthesizes images under different virtual viewpoints to generate the virtual viewpoint images.
However, because of projection errors in the computation and object occlusion, the synthesized virtual viewpoint image mainly suffers from two problems:
① Cracks: projection errors during reprojection leave some pixels without assigned pixel values, so the synthesized virtual viewpoint image contains cracks.
② Holes: owing to the front-to-back occlusion relationships between objects in real space, the same object may appear in only one reference image, or in neither; in such cases the depth value of the occluded object cannot be computed accurately. In the synthesized virtual viewpoint image, because of the camera position offset, large holes appear where the occluded object should be.
Technical Problem
The purpose of the present invention is to provide a naked-eye 3D virtual viewpoint image generation method, a computer-readable storage medium and a portable terminal, aiming to solve the problem that the virtual viewpoint images synthesized by the prior art contain cracks and holes.
Technical Solution
In a first aspect, the present invention provides a naked-eye 3D virtual viewpoint image generation method, the method comprising:
S101: acquiring two reference images, left and right, captured by cameras, and rectifying them;
S102: obtaining the depth maps of the left and right reference images respectively by a stereo matching algorithm from the two rectified reference images;
S103: generating left and right virtual viewpoint images from the left and right reference images and their depth maps respectively, and performing crack elimination processing and hole filling processing in the process of generating the two virtual viewpoint images;
S104: performing linear weighted fusion of the left and right virtual viewpoint images to obtain the naked-eye 3D virtual viewpoint image;
the hole filling processing being specifically: segmenting foreground and background based on the depth map of the reference image to detect the regions where holes may appear in the reference image and its depth map, and filling the holes with a multi-scale window filtering algorithm to obtain the blur-filled depth map of the reference image and the blur-filled reference image;
the crack elimination processing being specifically: performing forward mapping on the depth map of the reference image and on the blur-filled depth map of the reference image, respectively, to eliminate the cracks.
In a second aspect, the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the naked-eye 3D virtual viewpoint image generation method described in the first aspect.
In a third aspect, the present invention provides a portable terminal, comprising:
one or more processors;
a memory; and
one or more computer programs, the processor and the memory being connected by a bus, the one or more computer programs being stored in the memory and configured to be executed by the one or more processors, the processor implementing, when executing the computer program, the steps of the naked-eye 3D virtual viewpoint image generation method described in the first aspect.
Beneficial Effects
In the present invention, crack elimination processing and hole filling processing are performed in the process of generating the left and right virtual viewpoint images; the hole filling processing is specifically: segmenting foreground and background based on the depth map of the reference image to detect the regions where holes may appear in the reference image and its depth map, and filling the holes with a multi-scale window filtering algorithm to obtain the blur-filled depth map of the reference image and the blur-filled reference image; the crack elimination processing is specifically: performing forward mapping on the depth map of the reference image and on the blur-filled depth map of the reference image, respectively, to eliminate the cracks. The synthesized virtual viewpoint image therefore contains no cracks or holes, and the image quality is good for naked-eye viewing.
Brief Description of the Drawings
Fig. 1 is a flowchart of the naked-eye 3D virtual viewpoint image generation method provided in Embodiment 1 of the present invention.
Fig. 2 is a structural block diagram of the portable terminal provided in Embodiment 3 of the present invention.
Embodiments of the Present Invention
In order to make the purpose, technical solution and beneficial effects of the present invention clearer, the present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it.
To illustrate the technical solution of the present invention, specific embodiments are described below.
Embodiment 1:
Referring to Fig. 1, the naked-eye 3D virtual viewpoint image generation method provided in Embodiment 1 of the present invention comprises the following steps:
S101: acquire two reference images, left and right, captured by cameras, and rectify them;
S102: from the two rectified reference images, obtain the depth maps of the left and right reference images respectively by a stereo matching algorithm;
S103: generate left and right virtual viewpoint images from the left and right reference images and their depth maps respectively, performing crack elimination processing and hole filling processing in the process of generating the two virtual viewpoint images;
S104: perform linear weighted fusion of the left and right virtual viewpoint images to obtain the naked-eye 3D virtual viewpoint image.
In Embodiment 1 of the present invention, for the process of generating the left or right virtual viewpoint image, S103 specifically comprises the following steps:
S1031: segment foreground and background based on the depth map depth of the reference image to detect the regions where holes may appear in the reference image I and its depth map depth, and fill the holes with a multi-scale window filtering algorithm to obtain the blur-filled depth map depth_blur of the reference image and the blur-filled reference image I_blur.
S1032: perform forward mapping on the depth map depth of the reference image and on the blur-filled depth map depth_blur, respectively, to eliminate cracks, obtaining the depth map depth_virt of the image at the virtual viewpoint position and the blur-filled depth map depth_blur_virt at the virtual viewpoint position.
S1033: using a backward mapping method, reproject the pixels at the virtual viewpoint position into three-dimensional space by means of depth_virt and depth_blur_virt, translate them, and reproject them to the reference viewpoint position.
S1034: using a bilinear interpolation algorithm, interpolate the pixel values of the pixels at the virtual viewpoint position from the neighborhood pixels at the corresponding positions in the reference image I and the blur-filled reference image I_blur, respectively, thereby obtaining the view image_virt at the virtual viewpoint position and the blur-filled view image_blur_virt.
Since the blur-filled depth map depth_blur no longer contains depth-discontinuous regions, and the forward-mapped blur-filled depth map depth_blur_virt at the virtual viewpoint position has also had its crack regions eliminated, the resulting blur-filled view image_blur_virt contains no hole regions. The view image_virt at the virtual viewpoint position, however, contains many hole regions.
S1035: copy the hole-free regions of the view image_virt at the virtual viewpoint position into the blur-filled view image_blur_virt to obtain a fused virtual viewpoint image img_out.
In the virtual viewpoint image img_out, the occluded regions that would otherwise be holes are taken from the blurred regions of the blur-filled view image_blur_virt, and the non-occluded regions are taken from the view image_virt at the virtual viewpoint position. In this way the background information fills the holes caused by occlusion, while the authenticity of the non-occluded regions is preserved and they are not blurred by the hole filling.
S1036: continuously translate the virtual viewpoint position, the three-dimensional translation matrix changing accordingly, to obtain a series of virtual viewpoint images under different viewpoints.
In Embodiment 1 of the present invention, S1031 specifically comprises the following steps:
S10311: perform boundary detection on the depth map depth of the reference image to obtain the main boundaries.
Since occluded regions tend to appear in depth-discontinuous regions, which show up as boundary regions in the depth map, S10311 may specifically be:
detect the main boundaries in the depth map depth of the reference image with the Sobel operator, and obtain a sequence of n depth values of the main boundary regions ListD{D1,D2,D3,…,Dn}, where n is a natural number.
S10312: segment the reference image I and the depth map depth of the reference image according to the main boundaries.
S10312 may specifically be:
taking the depth values of the boundary regions as segmentation thresholds and proceeding from the smallest threshold to the largest, segment the reference image I and the depth map depth respectively into n local reference images with foreground and background removed, ListI{I1,I2,I3,…,In}, and the corresponding depth map sequence of the local reference images, Listd{d1,d2,d3,…,dn}.
S10313: perform foreground hole filling on the local reference image sequence and on the depth map sequence of the local reference images respectively.
S10313 may specifically be:
in the local reference image sequence ListI and the local depth map sequence Listd, the foreground region of each local reference image and of its depth map is filled with 0, and is then filled with neighborhood background information using a multi-scale window filtering algorithm.
For each local reference image and its depth map, the following steps are performed:
Step 1: set the initial filter window size to the image width, and mean-filter the reference image to obtain a first filter result map;
Step 2: halve the filter window size and mean-filter the reference image to obtain a second filter result map; if unfilled hole regions remain in the second filter result map, fill them with the pixels at the corresponding positions in the first filter result map;
Step 3: repeat step 2, halving the filter window size each time and filling any remaining hole regions with the pixels at the corresponding positions in the previous filter result map; stop the loop when the filter window size is less than 3.
The mean filtering of the reference image can be computed quickly with the integral image method, and the integral image also lends itself to parallel acceleration.
After the filling of S10313, the hole regions of the images in the local reference image sequence ListI are all filled with the corresponding background information, giving the filled local reference image sequence ListI_fill{I1_fill,I2_fill,I3_fill,…,In_fill}; pixels closer to the hole boundary receive larger weights, and pixels farther from the hole boundary receive smaller weights.
S10314: fuse the foreground-hole-filled local reference image sequence and the local depth map sequence respectively, obtaining one blur-filled reference image I_blur and the blur-filled depth map depth_blur of the reference image.
The fusion may specifically copy the filled hole region of the previous layer to the next layer, for example copying the filled hole region of In-1_fill to the corresponding position in In_fill, and so on until all sequences are fused, producing a view in which all foreground regions are blur-filled.
In Embodiment 1 of the present invention, S1032 specifically comprises the following steps:
S10321: compute the map from the reference viewpoint to the virtual viewpoint from the camera intrinsic matrix and the three-dimensional translation matrix between the virtual viewpoint position and the reference viewpoint position.
The camera intrinsic matrix camK is obtained by camera calibration:
camK = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
where fx, fy are the focal lengths of the camera and (cx, cy) are the principal point coordinates (usually at the image center). Given the two-dimensional image coordinates (U, V) and depth value D of a pixel in the reference image, the pixel is projected to the three-dimensional space point P(X, Y, Z), where
X = (U - cx)·D/fx, Y = (V - cy)·D/fy, Z = D.
According to the three-dimensional translation matrix T between the virtual viewpoint position and the reference viewpoint position, the point P is translated to P'(X', Y', Z'), P' = P + T;
P' is then reprojected onto the virtual viewpoint image plane to obtain (U', V'),
U' = fx·X'/Z' + cx, V' = fy·Y'/Z' + cy.
S10322: according to the map from the reference viewpoint to the virtual viewpoint, perform the mapping with the panel method: use a bilinear interpolation algorithm to compute, from a region in the depth map of the reference image, the depth value of each pixel in the corresponding region of the depth map at the virtual viewpoint position, and use a bilinear interpolation algorithm to compute, from a region in the blur-filled depth map of the reference image, the depth value of each pixel in the corresponding region of the blur-filled depth map at the virtual viewpoint position.
For example, take a 2×2 region srcRegion in the depth map of the reference image, then find the corresponding region dstRegion of the virtual viewpoint image in the map from the reference viewpoint to the virtual viewpoint, and compute the depth value of each pixel in dstRegion from srcRegion by bilinear interpolation: D_dstR = f(D_srcR), where D_dstR is the depth value of the target region, D_srcR the depth value of the source region, and f() the bilinear interpolation algorithm.
The depth map depth_virt of the image at the virtual viewpoint position and the blur-filled depth map depth_blur_virt at the virtual viewpoint position, computed with the panel method, effectively eliminate the cracks that would appear in depth-continuous regions.
In the present invention, crack elimination processing and hole filling processing are performed in the process of generating the left and right virtual viewpoint images; the hole filling processing is specifically: segmenting foreground and background based on the depth map of the reference image to detect the regions where holes may appear in the reference image and its depth map, and filling the holes with a multi-scale window filtering algorithm to obtain the blur-filled depth map of the reference image and the blur-filled reference image; the crack elimination processing is specifically: performing forward mapping on the depth map of the reference image and on the blur-filled depth map of the reference image, respectively, to eliminate the cracks. The synthesized virtual viewpoint image therefore contains no cracks or holes, and the image quality is good for naked-eye viewing.
Embodiment 2:
Embodiment 2 of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the naked-eye 3D virtual viewpoint image generation method provided in Embodiment 1 of the present invention; the computer-readable storage medium may be a non-transitory computer-readable storage medium.
Embodiment 3:
Fig. 2 shows a structural block diagram of the portable terminal provided in Embodiment 3 of the present invention. A portable terminal 100 comprises: one or more processors 101, a memory 102, and one or more computer programs, the processor 101 and the memory 102 being connected by a bus, the one or more computer programs being stored in the memory 102 and configured to be executed by the one or more processors 101; the processor 101, when executing the computer program, implements the steps of the naked-eye 3D virtual viewpoint image generation method provided in Embodiment 1 of the present invention.
A person of ordinary skill in the art will understand that all or part of the steps of the methods in the above embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium can include: read-only memory (ROM), random access memory (RAM), a magnetic disk or an optical disc, etc.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention; any modification, equivalent replacement and improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

  1. A naked-eye 3D virtual viewpoint image generation method, characterized in that the method comprises:
    S101: acquiring two reference images, left and right, captured by cameras, and rectifying them;
    S102: obtaining the depth maps of the left and right reference images respectively by a stereo matching algorithm from the two rectified reference images;
    S103: generating left and right virtual viewpoint images from the left and right reference images and their depth maps respectively, and performing crack elimination processing and hole filling processing in the process of generating the two virtual viewpoint images;
    S104: performing linear weighted fusion of the left and right virtual viewpoint images to obtain the naked-eye 3D virtual viewpoint image;
    the hole filling processing being specifically: segmenting foreground and background based on the depth map of the reference image to detect the regions where holes may appear in the reference image and its depth map, and filling the holes with a multi-scale window filtering algorithm to obtain the blur-filled depth map of the reference image and the blur-filled reference image;
    the crack elimination processing being specifically: performing forward mapping on the depth map of the reference image and on the blur-filled depth map of the reference image, respectively, to eliminate the cracks.
  2. The method according to claim 1, characterized in that, for the process of generating the left or right virtual viewpoint image, S103 specifically comprises the following steps:
    S1031: segmenting foreground and background based on the depth map of the reference image to detect the regions where holes may appear in the reference image and its depth map, and filling the holes with a multi-scale window filtering algorithm to obtain the blur-filled depth map of the reference image and the blur-filled reference image;
    S1032: performing forward mapping on the depth map of the reference image and on the blur-filled depth map of the reference image, respectively, to eliminate cracks, obtaining the depth map of the image at the virtual viewpoint position and the blur-filled depth map at the virtual viewpoint position;
    S1033: using a backward mapping method, reprojecting the pixels at the virtual viewpoint position into three-dimensional space by means of the depth map of the image at the virtual viewpoint position and the blur-filled depth map at the virtual viewpoint position, translating them, and reprojecting them to the reference viewpoint position;
    S1034: using a bilinear interpolation algorithm, interpolating the pixel values of the pixels at the virtual viewpoint position from the neighborhood pixels at the corresponding positions in the reference image and the blur-filled reference image, respectively, thereby obtaining the view at the virtual viewpoint position and the blur-filled view;
    S1035: copying the hole-free regions of the view at the virtual viewpoint position into the blur-filled view to obtain a fused virtual viewpoint image;
    S1036: continuously translating the virtual viewpoint position, the three-dimensional translation matrix changing accordingly, to obtain a series of virtual viewpoint images under different viewpoints.
  3. The method according to claim 2, characterized in that S1031 specifically comprises the following steps:
    S10311: performing boundary detection on the depth map of the reference image to obtain the main boundaries;
    S10312: segmenting the reference image and the depth map of the reference image according to the main boundaries;
    S10313: performing foreground hole filling on the local reference image sequence and on the depth map sequence of the local reference images respectively;
    S10314: fusing the foreground-hole-filled local reference image sequence and the depth map sequence of the local reference images respectively, to obtain one blur-filled reference image and the blur-filled depth map of the reference image.
  4. The method according to claim 3, characterized in that S10311 is specifically:
    detecting the main boundaries in the depth map of the reference image with the Sobel operator, and obtaining a sequence of n depth values of the main boundary regions, n being a natural number.
  5. The method according to claim 3, characterized in that S10312 is specifically:
    taking the depth values of the boundary regions as segmentation thresholds and proceeding from the smallest threshold to the largest, segmenting the reference image and the depth map of the reference image respectively into n local reference images with foreground and background removed and the corresponding depth map sequence of the local reference images.
  6. The method according to claim 5, characterized in that S10313 is specifically:
    in the local reference image sequence and the depth map sequence of the local reference images, filling the foreground region of each local reference image and of its depth map with 0, and filling the foreground region with neighborhood background information using a multi-scale window filtering algorithm.
  7. The method according to claim 6, characterized in that, for each local reference image and its depth map, the following steps are performed:
    step 1: setting the initial filter window size to the image width, and mean-filtering the reference image to obtain a first filter result map;
    step 2: halving the filter window size and mean-filtering the reference image to obtain a second filter result map; if unfilled hole regions remain in the second filter result map, filling them with the pixels at the corresponding positions in the first filter result map;
    step 3: performing step 2 repeatedly, halving the filter window size each time and filling any remaining hole regions with the pixels at the corresponding positions in the previous filter result map, and stopping the loop when the filter window size is less than 3.
  8. The method according to claim 2, characterized in that S1032 specifically comprises the following steps:
    S10321: computing the map from the reference viewpoint to the virtual viewpoint from the camera intrinsic matrix and the three-dimensional translation matrix between the virtual viewpoint position and the reference viewpoint position;
    S10322: according to the map from the reference viewpoint to the virtual viewpoint, performing the mapping with the panel method, using a bilinear interpolation algorithm to compute, from a region in the depth map of the reference image, the depth value of each pixel in the corresponding region of the depth map at the virtual viewpoint position, and using a bilinear interpolation algorithm to compute, from a region in the blur-filled depth map of the reference image, the depth value of each pixel in the corresponding region of the blur-filled depth map at the virtual viewpoint position.
  9. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the naked-eye 3D virtual viewpoint image generation method according to any one of claims 1 to 8 are implemented; the computer-readable storage medium may be a non-transitory computer-readable storage medium.
  10. A portable terminal, comprising:
    one or more processors;
    a memory; and
    one or more computer programs, the one or more computer programs being stored in the memory and configured to be executed by the one or more processors, characterized in that the processor, when executing the computer program, implements the steps of the naked-eye 3D virtual viewpoint image generation method according to any one of claims 1 to 8.
PCT/CN2020/090416 2019-03-18 2020-05-15 Naked-eye 3D virtual viewpoint image generation method and portable terminal WO2020187339A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910203920.6A CN109982064B (zh) 2019-03-18 2019-03-18 Naked-eye 3D virtual viewpoint image generation method and portable terminal
CN201910203920.6 2019-03-18

Publications (1)

Publication Number Publication Date
WO2020187339A1 true WO2020187339A1 (zh) 2020-09-24

Family

ID=67079327

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/090416 WO2020187339A1 (zh) 2019-03-18 2020-05-15 Naked-eye 3D virtual viewpoint image generation method and portable terminal

Country Status (2)

Country Link
CN (1) CN109982064B (zh)
WO (1) WO2020187339A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109982064B (zh) * 2019-03-18 2021-04-27 影石创新科技股份有限公司 Naked-eye 3D virtual viewpoint image generation method and portable terminal
CN112188186B (zh) * 2020-09-28 2023-01-24 南京工程学院 Method for obtaining a naked-eye 3D composite image based on normalized infinite viewpoints
CN113382227A (zh) * 2021-06-03 2021-09-10 天翼阅读文化传播有限公司 Smartphone-based naked-eye 3D panoramic video rendering apparatus and method
CN113450274B (zh) * 2021-06-23 2022-08-05 山东大学 Deep-learning-based adaptive viewpoint fusion method and system

Citations (5)

Publication number Priority date Publication date Assignee Title
CN103581651A * 2013-10-28 2014-02-12 西安交通大学 Virtual viewpoint synthesis method for a vehicle-mounted multi-camera surround view system
US20140198977A1 (en) * 2012-03-21 2014-07-17 Texas Instruments Incorporated Enhancement of Stereo Depth Maps
CN106791774A * 2017-01-17 2017-05-31 湖南优象科技有限公司 Depth-map-based virtual viewpoint image generation method
CN107018401A * 2017-05-03 2017-08-04 曲阜师范大学 Inverse-mapping-based virtual viewpoint hole filling method
CN109982064A * 2019-03-18 2019-07-05 深圳岚锋创视网络科技有限公司 Naked-eye 3D virtual viewpoint image generation method and portable terminal

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
CN101556700B * 2009-05-15 2012-02-15 宁波大学 Virtual viewpoint image rendering method
CN101635859B * 2009-08-21 2011-04-27 清华大学 Method and apparatus for converting planar video into stereoscopic video
JP2012215852A * 2011-03-25 2012-11-08 Semiconductor Energy Lab Co Ltd Image processing method and display device
CN102325259A * 2011-09-09 2012-01-18 青岛海信数字多媒体技术国家重点实验室有限公司 Method and apparatus for virtual viewpoint synthesis in multi-view video
CN102447925B * 2011-09-09 2014-09-10 海信集团有限公司 Virtual viewpoint image synthesis method and apparatus
CN103024421B * 2013-01-18 2015-03-04 山东大学 Virtual viewpoint synthesis method for free viewpoint television
CN103581648B * 2013-10-18 2015-08-26 清华大学深圳研究生院 Hole filling method for rendering new viewpoints
CN106023299B * 2016-05-04 2019-01-04 上海玮舟微电子科技有限公司 Depth-map-based virtual view rendering method and system

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US20140198977A1 (en) * 2012-03-21 2014-07-17 Texas Instruments Incorporated Enhancement of Stereo Depth Maps
CN103581651A * 2013-10-28 2014-02-12 西安交通大学 Virtual viewpoint synthesis method for a vehicle-mounted multi-camera surround view system
CN106791774A * 2017-01-17 2017-05-31 湖南优象科技有限公司 Depth-map-based virtual viewpoint image generation method
CN107018401A * 2017-05-03 2017-08-04 曲阜师范大学 Inverse-mapping-based virtual viewpoint hole filling method
CN109982064A * 2019-03-18 2019-07-05 深圳岚锋创视网络科技有限公司 Naked-eye 3D virtual viewpoint image generation method and portable terminal

Non-Patent Citations (2)

Title
IZQUIERDO, EBROUL M. ET AL.: "Virtual 3D-view Generation from Stereoscopic Video Data", SMC'98 Conference Proceedings, 1998 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No.98CH36218), 31 December 1998, XP010311066 *
WIENTAPPER, FOLKER ET AL.: "Reconstruction and Accurate Alignment of Feature Maps for Augmented Reality", 2011 International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, 31 December 2011, XP031896478 *

Also Published As

Publication number Publication date
CN109982064B (zh) 2021-04-27
CN109982064A (zh) 2019-07-05

Similar Documents

Publication Publication Date Title
WO2020187339A1 (zh) Naked-eye 3D virtual viewpoint image generation method and portable terminal
US10368062B2 (en) Panoramic camera systems
JP7403528B2 (ja) シーンの色及び深度の情報を再構成するための方法及びシステム
US9445071B2 (en) Method and apparatus generating multi-view images for three-dimensional display
EP1303839B1 (en) System and method for median fusion of depth maps
US8791941B2 (en) Systems and methods for 2-D to 3-D image conversion using mask to model, or model to mask, conversion
JP5651909B2 (ja) エッジの検出およびシェーダの再利用による多視点光線追跡
US9013482B2 (en) Mesh generating apparatus, method and computer-readable medium, and image processing apparatus, method and computer-readable medium
IL259401A (en) Methods and systems for determining large scale rgbd camera presentations
TWI398158B (zh) 產生立體影像之影像深度的方法
KR20090052889A (ko) 이미지들로부터 깊이 맵을 결정하기 위한 방법 및 깊이 맵을 결정하기 위한 디바이스
JP2011060216A (ja) 画像処理装置および画像処理方法
US10796496B2 (en) Method of reconstrucing 3D color mesh and apparatus for same
WO2020125637A1 (zh) 一种立体匹配方法、装置和电子设备
CN111462030A (zh) 多图像融合的立体布景视觉新角度构建绘制方法
JP7116262B2 (ja) 画像深度推定方法および装置、電子機器、ならびに記憶媒体
JP4796072B2 (ja) 画像セグメンテーションに基づく画像レンダリング
JP2022509329A (ja) 点群融合方法及び装置、電子機器、コンピュータ記憶媒体並びにプログラム
JP2020098421A (ja) 三次元形状モデル生成装置、三次元形状モデル生成方法、及びプログラム
US11475629B2 (en) Method for 3D reconstruction of an object
Muddala et al. Depth-based inpainting for disocclusion filling
WO2022155950A1 (zh) Virtual viewpoint synthesis method, electronic device and computer-readable medium
RU2791081C2 (ru) Способ трехмерной реконструкции объекта
AU2022368363B2 (en) Method and system for three-dimensional reconstruction of target object
US20230419586A1 (en) Apparatus and method for generating texture map for 3d wide area terrain

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20774483

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09.11.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 20774483

Country of ref document: EP

Kind code of ref document: A1