CN108900825A - A method for converting 2D images to 3D images - Google Patents

A method for converting 2D images to 3D images Download PDF

Info

Publication number
CN108900825A
CN108900825A CN201810933341.2A
Authority
CN
China
Prior art keywords
image
depth map
original
depth
generate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810933341.2A
Other languages
Chinese (zh)
Inventor
李建平
顾小丰
胡健
刘丹
李伟
王晓明
赖志龙
孙睿男
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201810933341.2A priority Critical patent/CN108900825A/en
Publication of CN108900825A publication Critical patent/CN108900825A/en
Pending legal-status Critical Current

Links

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for converting 2D images to 3D images. The invention proposes a new fast algorithm for converting 2D content into 3D content. The algorithm is both novel and fast: it reduces time complexity and memory complexity, lowers computation cost, makes high-definition images/video more lifelike, improves the quality of the depth map, and improves the real-time performance of the 3D output.

Description

A method for converting 2D images to 3D images

Technical Field

The present invention relates to an image processing method, and in particular to a method for converting a 2D image to a 3D image.

Background Art

Nowadays, 3D technology is becoming very popular; it greatly enhances people's visual experience in daily life, and the term has become ubiquitous. Due to its high demand and popularity, this field is attracting attention, with the primary aim of creating high-quality visual effects. But this is not easy: it involves challenging tasks that must be handled to achieve the desired goal. Existing methods can achieve the desired goal, but converting 2D content to 3D content takes more time.

Another problem related to conversion is that the generated depth looks artificial, suppressing the realistic character of the 3D content. This can severely affect the overall display of the image/video and can also pose health risks to the viewer.

Summary of the Invention

In view of the above shortcomings in the prior art, the present invention provides a method for converting 2D images to 3D images, which solves the problems that converting 2D content into 3D content takes more time and that the image quality is poor.

To achieve the above object of the invention, the technical solution adopted by the present invention is a method for converting a 2D image to a 3D image, comprising the following steps:

S1. Obtain the depth map of the original 2D image;

S2. Generate a right image and a left image from the depth map and the original 2D image through the DIBR unit;

S3. Perform hole filling on the left image and the right image, and resize the left image and the right image to the size of the original 2D image;

S4. Merge the left image and the right image to generate a 3D image.

Further, the specific steps of step S1 are:

S11. Reduce the size of the original 2D image to generate a shrunken image, where the size of the original 2D image is 720×1280 and the size of the shrunken image is 320×360;

S12. Convert the RGB of the shrunken image to YCbCr and right-shift by 2 bits; the conversion formula is:

In the above formula, Y is the luma component of the color, Cb is the blue-difference chroma component, Cr is the red-difference chroma component, R is the red component, G is the green component, and B is the blue component;

S13. Perform approximate edge detection on the YCbCr image to obtain a front depth map and an edge depth map, merge the front depth map and the edge depth map, and left-shift by 2 bits to generate the depth map.

Further, in step S2 the depth map and the original 2D image generate the left image and the right image after an offset calculation; the formula for the offset value Xview is:

In formula (4), Xc is the horizontal coordinate of the intermediate view, n is the number of virtual views, δ is odd or even, i is the order in which the virtual camera is placed relative to the center, α is the value needed to determine whether Xview corresponds to the horizontal coordinate of the left view or of the right view, tx is the distance between the left and right virtual cameras, f is the camera focal length, vf is the minimum depth value in the foreground or the maximum depth value in the background, and v is the depth value of the pixel; the formulas for α and δ are:

In formula (5), Xl is the horizontal coordinate of the left image and Xr is the horizontal coordinate of the right image.

Further, the hole filling in step S3 is performed with a 2D Gaussian smoothing filter.

The beneficial effects of the present invention are as follows: the present invention proposes a new fast algorithm for converting 2D content into 3D content, which is both novel and fast; it reduces time complexity and memory complexity, lowers computation cost, makes high-definition images/videos more lifelike, improves the quality of the depth map, and improves the real-time performance of the 3D output.

Brief Description of the Drawings

Fig. 1 is the overall flowchart of the present invention;

Fig. 2 shows test images with different depth perception in an embodiment of the present invention;

Fig. 3 shows the depth images of the test images with different depth perception in an embodiment of the present invention;

Fig. 4 shows the left images of the test images with different depth perception in an embodiment of the present invention;

Fig. 5 shows the right images of the test images with different depth perception in an embodiment of the present invention;

Fig. 6 shows the 3D images of the test images with different depth perception in an embodiment of the present invention;

Fig. 7 compares the structural similarity of the present invention with the edge-based algorithm and the real-time algorithm;

Fig. 8 compares the peak signal-to-noise ratio of the present invention with the edge-based algorithm and the real-time algorithm;

Fig. 9 compares the correlation of the present invention with the edge-based algorithm and the real-time algorithm;

Fig. 10 shows the average subjective analysis ratings of the test images of the present invention.

Detailed Description of Embodiments

Specific embodiments of the present invention are described below so that those skilled in the art can understand the present invention. It should be clear, however, that the present invention is not limited to the scope of the specific embodiments. For those of ordinary skill in the art, as long as various changes fall within the spirit and scope of the present invention as defined and determined by the appended claims, these changes are obvious, and all inventions and creations that make use of the concept of the present invention are under protection.

As shown in Fig. 1, a method for converting a 2D image to a 3D image comprises the following steps:

S1. Obtain the depth map of the original 2D image. The specific steps are:

S11. Reduce the size of the original 2D image to generate a shrunken image, where the size of the original 2D image is 720×1280 and the size of the shrunken image is 320×360;

S12. Convert the RGB of the shrunken image to YCbCr and right-shift by 2 bits; the conversion formula is:

In the above formula, Y is the luma component of the color, Cb is the blue-difference chroma component, Cr is the red-difference chroma component, R is the red component, G is the green component, and B is the blue component;
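A minimal Python sketch of step S12, assuming the standard BT.601 conversion coefficients (the patent's exact matrix appears in a formula that is not reproduced here):

import numpy as np

def rgb_to_ycbcr_shifted(rgb):
    # Convert an H x W x 3 uint8 RGB image to YCbCr (BT.601 coefficients
    # assumed) and right-shift each channel by 2 bits, as in step S12.
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    cr = 0.500 * r - 0.419 * g - 0.081 * b + 128.0
    ycbcr = np.clip(np.stack([y, cb, cr], axis=-1), 0, 255).astype(np.uint8)
    return ycbcr >> 2  # values now occupy 0..63, reducing later arithmetic cost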

S13. Perform approximate edge detection on the YCbCr image to obtain a front depth map and an edge depth map, merge the front depth map and the edge depth map, and left-shift by 2 bits to generate the depth map.
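A sketch of step S13 under stated assumptions: the patent does not name the approximate edge-detection operator, so a Sobel operator is assumed here, and the front depth map is modeled as a vertical ramp (a common prior in DIBR literature), which is also an assumption:

import numpy as np
from scipy import ndimage

def estimate_depth(ycbcr_shifted):
    # Approximate edge detection on the luma channel (Sobel assumed).
    y = ycbcr_shifted[..., 0].astype(np.float32)
    gx = ndimage.sobel(y, axis=1)
    gy = ndimage.sobel(y, axis=0)
    edge_depth = np.hypot(gx, gy)
    edge_depth *= 63.0 / (edge_depth.max() + 1e-6)
    # Hypothetical front depth map: top-to-bottom ramp (nearer at the bottom).
    h, w = y.shape
    front_depth = np.repeat(np.linspace(0.0, 63.0, h, dtype=np.float32)[:, None], w, axis=1)
    # Merge the two maps, then left-shift by 2 bits to restore the 8-bit range.
    merged = np.clip(0.5 * (front_depth + edge_depth), 0, 63).astype(np.uint8)
    return merged << 2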

S2. Generate a right image and a left image from the depth map and the original 2D image through the DIBR unit. The depth map and the original 2D image generate the left image and the right image after an offset calculation; the formula for the offset value Xview is:

In formula (4), Xc is the horizontal coordinate of the intermediate view, n is the number of virtual views, δ is odd or even, i is the order in which the virtual camera is placed relative to the center, α is the value needed to determine whether Xview corresponds to the horizontal coordinate of the left view or of the right view, tx is the distance between the left and right virtual cameras, f is the camera focal length, vf is the minimum depth value in the foreground or the maximum depth value in the background, and v is the depth value of the pixel; the formulas for α and δ are:

In formula (5), Xl is the horizontal coordinate of the left image and Xr is the horizontal coordinate of the right image.
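Formula (4) is not reproduced above; as a simplified sketch of the DIBR view synthesis of step S2, the code below maps each pixel's depth linearly to a horizontal disparity, which is an assumption standing in for the patent's exact offset calculation:

import numpy as np

def dibr_views(image, depth, max_disparity=16):
    # Shift each pixel horizontally by half the disparity in each direction
    # to synthesize the left and right views; unwritten pixels remain zero
    # and become the "holes" filled in step S3.
    h, w = depth.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    disparity = (depth.astype(np.float32) / 255.0 * max_disparity).astype(np.int32)
    xs = np.arange(w)
    for row in range(h):
        xl = np.clip(xs + disparity[row] // 2, 0, w - 1)
        xr = np.clip(xs - disparity[row] // 2, 0, w - 1)
        left[row, xl] = image[row, xs]
        right[row, xr] = image[row, xs]
    return left, right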

S3. Perform hole filling on the left image and the right image, and resize the left image and the right image to the size of the original 2D image; the hole filling is performed with a 2D Gaussian smoothing filter.
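A sketch of step S3, treating zero-valued pixels as holes and using sigma = 3 for the 2D Gaussian smoothing filter (both choices are assumptions; the patent fixes neither):

import numpy as np
from scipy import ndimage

def fill_holes_and_resize(view, out_hw):
    # Fill disocclusion holes with a Gaussian-smoothed copy of the view,
    # then resize back to the original 2D image size (bilinear zoom).
    sigma = (3, 3) + (0,) * (view.ndim - 2)  # do not smooth across channels
    smoothed = ndimage.gaussian_filter(view.astype(np.float32), sigma=sigma)
    filled = view.copy()
    holes = view == 0
    filled[holes] = smoothed[holes].astype(view.dtype)
    zoom = (out_hw[0] / view.shape[0], out_hw[1] / view.shape[1]) + (1,) * (view.ndim - 2)
    return ndimage.zoom(filled, zoom, order=1).astype(view.dtype)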

S4. Merge the left image and the right image to generate a 3D image.
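The patent does not fix the output format of the merge in step S4; a red-cyan anaglyph is assumed in this sketch (side-by-side packing would be an equally valid choice):

import numpy as np

def merge_views(left, right):
    # Red channel from the left view, green and blue from the right view.
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]
    anaglyph[..., 1:] = right[..., 1:]
    return anaglyph

Chaining the sketches above reproduces the S1-S4 pipeline: depth estimation on the shrunken image, DIBR view synthesis, hole filling with resizing back to 720×1280, and the final merge.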

The present invention was implemented using MATLAB, and subjective and objective analyses were performed for different depth perceptions and different test images. Here, results are generated by applying the invention to images with different depth perception as experiments. The images in the test set include cinematic images, high-depth images, frontal images, natural images, and low-depth images. At the algorithm level, subjective and objective analyses were carried out separately. The test images are shown in Fig. 2. All the images used in the experiments have different perceptions; on the basis of the depth-image method, we are able to obtain the depth images of the test images, and Fig. 3 shows the depth information of all test images. Fig. 4 and Fig. 5 show the generated left and right views of the test images. From the test images we can see that their depth perception varies; each image has a different depth perception, which increases the credibility of the algorithm. Results with 3D output are generated for the full set of test images with different depth perceptions, as shown in Fig. 6.

The objective analysis comprises structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and correlation analysis.

In the present invention, structural similarity (SSIM) can be interpreted in terms of pixel structure. It represents the structure of objects in the scene, independent of the average luminance and contrast. Luminance is taken as the average intensity of the pixels, with its standard deviation normalizing contrast and structure. The structural similarity (SSIM) index takes values between 0 and 1. The comparison of the present invention with the edge-based algorithm and the real-time algorithm is shown in Fig. 7. The structural similarity (SSIM) is computed as:

SSIM(x, y) = [(2μxμy + C1)(2σx,y + C2)] / [(μx² + μy² + C1)(σx + σy + C2)] (7)

In formula (7), SSIM(x, y) is the structural similarity index of images X and Y, μx is the mean of image X, μy is the mean of image Y, C1 is a constant, σx,y is the covariance of images X and Y, σx is the variance of image X, σy is the variance of image Y, C2 is a constant, and C1 and C2 are 0 for natural images.
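A sketch of a global SSIM computation consistent with formula (7); the stabilizing constants default to the common choice C1 = (0.01·255)² and C2 = (0.03·255)², an assumption (the patent sets both to 0 for natural images):

import numpy as np

def global_ssim(x, y, c1=6.5025, c2=58.5225):
    # Means, variances, and covariance over whole images (a windowed SSIM
    # would average this same expression over local patches).
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))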

In the present invention, the peak signal-to-noise ratio (PSNR) can be described as the ratio between the maximum possible value (power) of a signal and the power of the distortion noise that degrades the quality of its representation. PSNR is usually expressed on a logarithmic decibel scale. The comparison of the present invention with the edge-based algorithm and the real-time algorithm is shown in Fig. 8. The peak signal-to-noise ratio (PSNR) is computed as:

PSNR = 10·log10(255² / MSE) (8)

In formula (8), M is the height of the image, N is the width of the image, MSE = (1/(M·N))·ΣΣ(f(i,j) − f′(i,j))² is the mean square error between the original image and the processed image, and the maximum image color value, with 8-bit samples, is 255.
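A sketch of the PSNR of formula (8) for 8-bit images:

import numpy as np

def psnr(original, processed):
    # Mean square error over the M x N image, then PSNR in decibels
    # against the 8-bit peak value 255.
    diff = original.astype(np.float64) - processed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)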

Correlation is a comparative measure of the statistical relationship between images, which yields a similarity index. This parameter can be used to indicate the relationship between images. The comparison of the present invention with the edge-based algorithm and the real-time algorithm is shown in Fig. 9.
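The patent does not name the exact correlation estimator; a Pearson correlation coefficient over flattened pixel values is assumed in this sketch:

import numpy as np

def correlation(img_a, img_b):
    # Pearson correlation between two images of the same shape.
    a = img_a.astype(np.float64).ravel()
    b = img_b.astype(np.float64).ravel()
    return float(np.corrcoef(a, b)[0, 1])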

In summary, the results of the present invention outperform the edge-based algorithm and the real-time algorithm in both time and performance: memory complexity is reduced by 39%, and time complexity is reduced by 35%.

Subjective analysis is a method for checking the quality and visual comfort of the generated output. In 2D-to-3D conversion, subjective analysis is also a very important part, because it directly covers the impact of the generated 3D content on human health. Using this analysis, we can check the visual quality and depth of the generated 3D content.

As shown in Fig. 10, (a) is the average depth rating of the test images and (b) is the average visual rating. In this analysis, the average score is computed from the scores of 20 people. In the present invention, the visual score generated for each image is between 70 and 78.

According to the ITU subjective analysis, the visual rating scale ranges from 0 to 100 and is divided into five groups, labeled 0-20 very uncomfortable, 21-40 uncomfortable, 41-60 mildly comfortable, 61-80 comfortable, and 81-100 very comfortable. The range obtained by the proposed method therefore falls in the comfortable region. Hence, the method achieves favorable results in creating 3D content from 2D content.

According to the ITU subjective analysis, the depth-rating score generated for each image is similarly between 75 and 80; the depth rating ranges from 0 to 100 and is likewise divided into five groups: 0-20 bad, 21-40 poor, 41-60 fair, 61-80 good, and 81-100 excellent.

The average depth rating of the present invention falls into the good category. The subjective analysis of the two parameters therefore shows that, by reaching the good and comfortable regions, the generated depth corresponds to a real view rather than a virtual depth.

Claims (4)

1. A method for converting a 2D image to a 3D image, characterized by comprising the following steps:
S1. obtaining the depth map of the original 2D image;
S2. generating a right image and a left image from the depth map and the original 2D image through a DIBR unit;
S3. performing hole filling on the left image and the right image, and resizing the left image and the right image to the size of the original 2D image;
S4. merging the left image and the right image to generate a 3D image.

2. The method for converting a 2D image to a 3D image according to claim 1, characterized in that the specific steps of step S1 are:
S11. reducing the size of the original 2D image to generate a shrunken image, wherein the size of the original 2D image is 720×1280 and the size of the shrunken image is 320×360;
S12. converting the RGB of the shrunken image to YCbCr and right-shifting by 2 bits, wherein in the conversion formula Y is the luma component of the color, Cb is the blue-difference chroma component, Cr is the red-difference chroma component, R is the red component, G is the green component, and B is the blue component;
S13. performing approximate edge detection on the YCbCr image to obtain a front depth map and an edge depth map, merging the front depth map and the edge depth map, and left-shifting by 2 bits to generate the depth map.

3. The method for converting a 2D image to a 3D image according to claim 1, characterized in that in step S2 the depth map and the original 2D image generate the left image and the right image after an offset calculation, wherein in formula (4) for the offset value Xview, Xc is the horizontal coordinate of the intermediate view, n is the number of virtual views, δ is odd or even, i is the order in which the virtual camera is placed relative to the center, α is the value needed to determine whether Xview corresponds to the horizontal coordinate of the left view or of the right view, tx is the distance between the left and right virtual cameras, f is the camera focal length, vf is the minimum depth value in the foreground or the maximum depth value in the background, and v is the depth value of a pixel; and wherein in formula (5) for α and δ, Xl is the horizontal coordinate of the left image and Xr is the horizontal coordinate of the right image.

4. The method for converting a 2D image to a 3D image according to claim 1, characterized in that the hole filling in step S3 is performed with a 2D Gaussian smoothing filter.
CN201810933341.2A 2018-08-16 2018-08-16 A method for converting 2D images to 3D images Pending CN108900825A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810933341.2A CN108900825A (en) A method for converting 2D images to 3D images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810933341.2A CN108900825A (en) A method for converting 2D images to 3D images

Publications (1)

Publication Number Publication Date
CN108900825A true CN108900825A (en) 2018-11-27

Family

ID=64354669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810933341.2A Pending CN108900825A (en) A method for converting 2D images to 3D images

Country Status (1)

Country Link
CN (1) CN108900825A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102307312A (en) * 2011-08-31 2012-01-04 四川虹微技术有限公司 Method for performing hole filling on destination image generated by depth-image-based rendering (DIBR) technology
CN102447927A (en) * 2011-09-19 2012-05-09 四川虹微技术有限公司 Method for warping three-dimensional image with camera calibration parameter
CN102790896A (en) * 2012-07-19 2012-11-21 彩虹集团公司 Conversion method for converting 2D (Two Dimensional) into 3D (Three Dimensional)
CN103903256A (en) * 2013-09-22 2014-07-02 四川虹微技术有限公司 Depth estimation method based on relative height-depth clue
CN103714573A (en) * 2013-12-16 2014-04-09 华为技术有限公司 Virtual view generating method and virtual view generating device
CN105069808A (en) * 2015-08-31 2015-11-18 四川虹微技术有限公司 Video image depth estimation method based on image segmentation

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110312117A (en) * 2019-06-12 2019-10-08 北京达佳互联信息技术有限公司 Method for refreshing data and device
CN110312117B (en) * 2019-06-12 2021-06-18 北京达佳互联信息技术有限公司 Data refreshing method and device
CN111970503A (en) * 2020-08-24 2020-11-20 腾讯科技(深圳)有限公司 Method, device and equipment for three-dimensionalizing two-dimensional image and computer readable storage medium
WO2022042062A1 (en) * 2020-08-24 2022-03-03 腾讯科技(深圳)有限公司 Three-dimensional processing method and apparatus for two-dimensional image, device, and computer readable storage medium
JP2023519728A 2023-05-12 Tencent Technology (Shenzhen) Company Limited 2D image 3D conversion method, apparatus, equipment, and computer program
CN111970503B (en) * 2020-08-24 2023-08-22 腾讯科技(深圳)有限公司 Three-dimensional method, device and equipment for two-dimensional image and computer readable storage medium
JP7432005B2 2024-02-15 Tencent Technology (Shenzhen) Company Limited Methods, devices, equipment and computer programs for converting two-dimensional images into three-dimensional images
US12113953B2 (en) 2020-08-24 2024-10-08 Tencent Technology (Shenzhen) Company Limited Three-dimensionalization method and apparatus for two-dimensional image, device and computer-readable storage medium

Similar Documents

Publication Publication Date Title
CN110930301B (en) Image processing method, device, storage medium and electronic equipment
CN107509031B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
US10565742B1 (en) Image processing method and apparatus
CN108537749B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
WO2018082185A1 (en) Image processing method and device
CN108830800B (en) A method for enhancing the brightness of images in dark scenes
CN108431751B (en) background removal
CN110910336B (en) Three-dimensional high dynamic range imaging method based on full convolution neural network
CN115063331B (en) Ghost-free multi-exposure image fusion method based on multi-scale block LBP operator
CN115205160A (en) Reference-free low-light image enhancement method based on local scene perception
US20240296531A1 (en) System and methods for depth-aware video processing and depth perception enhancement
US10074209B2 (en) Method for processing a current image of an image sequence, and corresponding computer program and processing device
CN108234884A (en) A kind of automatic focusing method of camera of view-based access control model conspicuousness
CN102223545B (en) Rapid multi-view video color correction method
CN108900825A (en) A method for converting 2D images to 3D images
CN106686320B (en) A Tone Mapping Method Based on Number Density Equalization
CN116468636A (en) Low-illumination enhancement method, device, electronic equipment and readable storage medium
WO2016113805A1 (en) Image processing method, image processing apparatus, image pickup apparatus, program, and storage medium
CN111147924B (en) Video enhancement processing method and system
CN112435173B (en) Image processing and live broadcasting method, device, equipment and storage medium
CN110796689B (en) Video processing method, electronic device and storage medium
CN105721863B (en) Method for evaluating video quality
CN108898566B (en) A Low-Illumination Color Video Enhancement Method Using Spatio-temporal Illuminance Map
CN107886476A (en) Method of texture synthesis and image processing apparatus using the same
CN117689550A (en) Low-light image enhancement method and device based on progressive generation countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181127