CN108900825A - A method for converting a 2D image to a 3D image - Google Patents
A method for converting a 2D image to a 3D image
- Publication number
- CN108900825A (application CN201810933341.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- depth
- rendering
- original
- depth map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a method for converting a 2D image to a 3D image. It proposes a new, fast algorithm for converting 2D content into 3D content; the algorithm is both novel and quick, reduces time complexity and memory complexity, lowers computational cost, makes high-definition images/video more lifelike, improves the quality of the depth map, and improves the real-time performance of the 3D output.
Description
Technical field
The present invention relates to image processing methods, and in particular to a method for converting a 2D image to a 3D image.
Background technique
Nowadays, 3D technology is becoming very popular. It significantly enhances people's visual experience in daily life, and the term itself has entered everyday use. Because of its high demand and ubiquity, the field attracts considerable attention, the primary purpose being to create high-quality visual effects. This is not easy, however, and involves challenging processing tasks. Existing methods can achieve the intended goal, but converting 2D content into 3D content takes a long time.
Another problem associated with the conversion is that the generated depth looks artificial, which suppresses the real-world character of the 3D content. This can seriously affect the overall appearance of the image/video and can also cause discomfort and health problems for the viewer.
Summary of the invention
To address the above deficiencies of the prior art, the present invention provides a method for converting a 2D image to a 3D image, which solves the problems that converting 2D content into 3D content takes a long time and that the image quality is poor.
To achieve the above objective, the technical solution adopted by the present invention is a method for converting a 2D image to a 3D image, comprising the following steps:
S1: obtain the depth map of the original 2D image;
S2: generate a right image and a left image from the depth map and the original 2D image with a DIBR (depth-image-based rendering) unit;
S3: perform hole filling on the left and right images, and resize the left and right images to the size of the original 2D image;
S4: merge the left and right images to generate the 3D image.
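The merging operation in step S4 is not specified beyond producing a 3D image. One common way to combine a stereo pair, shown here purely as an illustrative sketch and not necessarily the patent's method, is a red-cyan anaglyph that takes the red channel from the left view and the green and blue channels from the right view:

```python
def anaglyph(left, right):
    """Merge left/right RGB images into a red-cyan anaglyph
    (one common stereo format; the patent does not specify which
    merge it uses). Images are nested lists of (r, g, b) tuples."""
    return [
        [(lp[0], rp[1], rp[2]) for lp, rp in zip(lrow, rrow)]
        for lrow, rrow in zip(left, right)
    ]

left = [[(10, 20, 30)]]
right = [[(40, 50, 60)]]
print(anaglyph(left, right))  # red from left, green/blue from right
```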
Further, step S1 specifically comprises:
S11: shrink the original 2D image to produce a reduced image; the size of the original 2D image is 720 × 1280 and the size of the reduced image is 320 × 360;
S12: convert the RGB of the reduced image to YCbCr and right-shift the result by 2 bits; the conversion formula is:
In the above formula, Y is the luma component of the color, Cb is the blue-difference chroma component, Cr is the red-difference chroma component, R is the red component, G is the green component, and B is the blue component;
S13: perform approximate edge detection on the YCbCr image to obtain a front depth map and an edge depth map, merge the front depth map and the edge depth map, and left-shift by 2 bits to generate the depth map.
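The RGB-to-YCbCr conversion and 2-bit right shift of step S12 can be sketched as follows. The BT.601 full-range coefficients are an assumption here, since the patent's formula image is not reproduced in this text:

```python
def rgb_to_ycbcr_shifted(r, g, b):
    """Convert one RGB pixel (0-255) to YCbCr using the standard
    BT.601 full-range transform (assumed; the patent's exact
    coefficients are not reproduced), then right-shift each
    component by 2 bits as described in step S12."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # Truncate to integers, then apply the 2-bit right shift.
    return int(y) >> 2, int(cb) >> 2, int(cr) >> 2

print(rgb_to_ycbcr_shifted(100, 150, 200))  # (35, 40, 24)
```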
Further, in step S2 the depth map and the original 2D image generate the left image and the right image after an offset calculation; the calculation formula of the offset Xview is:
In formula (4), Xc is the horizontal coordinate of the intermediate view, n is the number of virtual views, δ is odd or even, i is the index of the virtual camera relative to the center, α determines whether Xview corresponds to the horizontal coordinate of the left view or the right view, tx is the distance between the left and right virtual cameras, f is the camera focal length, vf is the minimum depth value in the foreground or the maximum depth value in the background, and v is the depth value of the pixel; the calculation formulas of α and δ are:
In formula (5), Xl is the horizontal coordinate of the left image and Xr is the horizontal coordinate of the right image.
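The offset computation can be illustrated with the textbook DIBR warping relations, in which a center-view column is shifted left and right in proportion to the camera baseline tx, the focal length f, and the pixel depth. This is a generic sketch with assumed parameter values, not the patent's exact formulas (4) and (5):

```python
def dibr_offsets(x_c, depth, t_x=0.1, f=500.0):
    """Textbook DIBR warping (an assumed, simplified form of the
    patent's formula (4)): shift the center-view column x_c into
    left- and right-view columns based on the pixel depth.
    t_x and f are illustrative parameter values."""
    shift = (t_x / 2.0) * (f / depth)
    x_l = x_c + shift  # left-view horizontal coordinate (Xl)
    x_r = x_c - shift  # right-view horizontal coordinate (Xr)
    return x_l, x_r

# Nearer pixels (smaller depth) get larger disparity than far ones.
print(dibr_offsets(100, depth=10.0))   # (102.5, 97.5)
print(dibr_offsets(100, depth=100.0))  # (100.25, 99.75)
```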
Further, the hole filling in step S3 is performed by a 2D Gaussian filter.
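The 2D-Gaussian hole filling of step S3 can be sketched as a Gaussian-weighted average of the valid neighbors of each hole pixel. The kernel radius and σ below are assumed values, since the patent does not state them:

```python
import math

def fill_holes_gaussian(img, hole=0, radius=1, sigma=1.0):
    """Fill hole pixels (value == hole) with a Gaussian-weighted
    average of their valid neighbors. The kernel radius and sigma
    are assumptions; the patent only says a 2D Gaussian filter
    is used. img is a nested list of grayscale values."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] != hole:
                continue
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and img[ny][nx] != hole:
                        wgt = math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
                        num += wgt * img[ny][nx]
                        den += wgt
            if den > 0:
                out[y][x] = num / den
    return out

# A single hole surrounded by 100s is filled with 100.
img = [[100, 100, 100], [100, 0, 100], [100, 100, 100]]
print(fill_holes_gaussian(img)[1][1])
```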
The beneficial effects of the present invention are as follows: the invention proposes a new, fast algorithm for converting 2D content into 3D content that is both novel and quick; it reduces time complexity and memory complexity, lowers computational cost, makes high-definition images/video more lifelike, improves the quality of the depth map, and improves the real-time performance of the 3D output.
Brief description of the drawings
Fig. 1 is the overall flowchart of the present invention;
Fig. 2 shows the test images with different depth perception in an embodiment of the present invention;
Fig. 3 shows the depth images of the test images with different depth perception;
Fig. 4 shows the left images of the test images with different depth perception;
Fig. 5 shows the right images of the test images with different depth perception;
Fig. 6 shows the 3D images of the test images with different depth perception;
Fig. 7 compares the structural similarity of the present invention with the edge-based algorithm and the real-time algorithm;
Fig. 8 compares the peak signal-to-noise ratio of the present invention with the edge-based algorithm and the real-time algorithm;
Fig. 9 compares the correlation of the present invention with the edge-based algorithm and the real-time algorithm;
Fig. 10 shows the mean subjective analysis grades of the test images.
Specific embodiment
Specific embodiments of the invention are described below to help those skilled in the art understand the invention. It should be apparent that the invention is not limited to the scope of these specific embodiments; to those skilled in the art, as long as the various changes fall within the spirit and scope of the invention as defined by the appended claims, such changes are obvious, and all innovations that make use of the inventive concept are within its protection.
As shown in Fig. 1, a method for converting a 2D image to a 3D image comprises the following steps.
S1: obtain the depth map of the original 2D image, specifically:
S11: shrink the original 2D image to produce a reduced image; the size of the original 2D image is 720 × 1280 and the size of the reduced image is 320 × 360;
S12: convert the RGB of the reduced image to YCbCr and right-shift the result by 2 bits; the conversion formula is:
In the above formula, Y is the luma component of the color, Cb is the blue-difference chroma component, Cr is the red-difference chroma component, R is the red component, G is the green component, and B is the blue component;
S13: perform approximate edge detection on the YCbCr image to obtain a front depth map and an edge depth map, merge the front depth map and the edge depth map, and left-shift by 2 bits to generate the depth map.
S2: generate a right image and a left image from the depth map and the original 2D image with a DIBR unit; the depth map and the original 2D image generate the left image and the right image after an offset calculation, and the calculation formula of the offset Xview is:
In formula (4), Xc is the horizontal coordinate of the intermediate view, n is the number of virtual views, δ is odd or even, i is the index of the virtual camera relative to the center, α determines whether Xview corresponds to the horizontal coordinate of the left view or the right view, tx is the distance between the left and right virtual cameras, f is the camera focal length, vf is the minimum depth value in the foreground or the maximum depth value in the background, and v is the depth value of the pixel; the calculation formulas of α and δ are:
In formula (5), Xl is the horizontal coordinate of the left image and Xr is the horizontal coordinate of the right image.
S3: perform hole filling on the left and right images with a 2D Gaussian filter, and resize the left and right images to the size of the original 2D image.
S4: merge the left and right images to generate the 3D image.
The present invention was implemented in MATLAB, and subjective and objective analyses were performed on different test images with different depth perception. Results were generated by applying the present invention to images with different depth perception. The test set includes: a film image, a high-depth image, a front-view image, a natural image, and a low-depth image. At the algorithm level, both subjective and objective analyses were carried out. The test images are shown in Fig. 2. All images used in the experiments have different depth perception; on the basis of the depth-image method, the depth image of each test image can be obtained, and Fig. 3 shows the depth information of all test images. Fig. 4 and Fig. 5 show the generated left and right views of the test images. It can be seen that the depth perception of the test images differs; each image having a different depth perception raises the credibility of the algorithm. A 3D output is generated for the whole set of test images with different depth perception, as shown in Fig. 6.
The objective analysis includes structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and correlation analysis.
Structural similarity (SSIM) is interpreted here in terms of structure-based pixels. It represents the structure of objects in the scene, independent of average luminance and contrast. Luminance is taken as the mean intensity of the pixels, and the standard deviation normalizes contrast and structure. The SSIM index value lies between 0 and 1. The comparison of the present invention with the edge-based algorithm and the real-time algorithm is shown in Fig. 7; the calculation formula of SSIM is:
In formula (7), SSIM(x, y) is the structural similarity, μx is the mean of image X, μy is the mean of image Y, σx,y is the covariance of images X and Y, σx is the variance of image X, σy is the variance of image Y, and C1 and C2 are constants, which are 0 for natural images.
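The SSIM computation (formula (7) is not reproduced in this text) can be sketched with the standard single-window SSIM index, using the constants C1 = C2 = 0 stated above for natural images. Note that with zero constants the index is undefined for constant images:

```python
def ssim_global(x, y, c1=0.0, c2=0.0):
    """Single-window SSIM (standard definition, assumed to match the
    patent's formula (7)). x and y are flat lists of pixel values;
    c1 and c2 default to 0 as the text states for natural images."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n          # variance of X
    vy = sum((b - my) ** 2 for b in y) / n          # variance of Y
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

# Identical (non-constant) images give SSIM = 1.
img = [10, 50, 90, 130]
print(ssim_global(img, img))
```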
Peak signal-to-noise ratio (PSNR) is described here as the ratio between the maximum possible power of a signal and the power of the distortion noise that affects the quality of its representation. PSNR is usually expressed on a logarithmic decibel scale. The comparison of the present invention with the edge-based algorithm and the real-time algorithm is shown in Fig. 8; the calculation formula of PSNR is:
In formula (8), M is the height of the image, N is the width of the image, the mean square error is computed between the original image and the processed image, and the maximum color value for 8-bit sampling is 255.
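The description above matches the standard PSNR definition, 10·log10(255²/MSE) for 8-bit samples; a minimal sketch, assuming this is the form of formula (8):

```python
import math

def psnr(original, processed, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE),
    the standard definition assumed to match the patent's
    formula (8). Inputs are flat lists of pixel values."""
    n = len(original)
    mse = sum((a - b) ** 2 for a, b in zip(original, processed)) / n
    return 10.0 * math.log10(peak * peak / mse)

# Every pixel off by 5 -> MSE = 25 -> PSNR is about 34.15 dB.
print(round(psnr([0, 10, 20, 30], [5, 15, 25, 35]), 2))
```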
Correlation is a statistical measure of the relationship between images, which yields a similarity index. This parameter can indicate the relationship between the images; the comparison of the present invention with the edge-based algorithm and the real-time algorithm is shown in Fig. 9.
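The correlation measure can be sketched with the Pearson correlation coefficient between the pixel values of two images; the exact statistic used is an assumption, since the patent does not name it:

```python
import math

def pearson_corr(x, y):
    """Pearson correlation coefficient between two flat pixel lists
    (an assumed form of the patent's correlation measure)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A perfectly linearly related pair has correlation 1.
print(pearson_corr([1, 2, 3, 4], [10, 20, 30, 40]))
```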
In summary, the present invention outperforms the edge-based algorithm and the real-time algorithm in both time and performance: memory complexity is reduced by 39% and time complexity by 35%.
Subjective analysis is a method for checking the quality of the generated output and the viewing comfort. In 2D-to-3D conversion, subjective analysis is also a very important part, because it directly concerns the influence of the generated 3D content on human health. Using this analysis, the visual quality and depth of the generated 3D content can be checked.
As shown in Fig. 10, (a) is the average depth grade of the test images and (b) is the average visual grade. In this analysis, the average score is computed from the scores of 20 people. With the present invention, each image obtains a visual score between 70 and 78.
According to the ITU subjective analysis, the visual assessment scale ranges from 0 to 100 and is divided into five groups: 0-20 very uncomfortable, 21-40 uncomfortable, 41-60 slightly comfortable, 61-80 comfortable, and 81-100 very comfortable. The range obtained by the proposed method therefore falls in the comfort zone, so favorable results are achieved in creating 3D content from 2D content.
Likewise according to the ITU subjective analysis, the depth-grade score of each image lies between 75 and 80. The depth-grade scale also ranges from 0 to 100 and is divided into five groups: 0-20 bad, 21-40 poor, 41-60 medium, 61-80 good, and 81-100 excellent.
The average depth grade of the invention falls in the good range. Therefore, the subjective analysis of these two parameters shows that both the good and the comfort zones are reached, indicating that the generated depth is a true view rather than a virtual depth.
Claims (4)
1. A method for converting a 2D image to a 3D image, characterized by comprising the following steps:
S1: obtain the depth map of the original 2D image;
S2: generate a right image and a left image from the depth map and the original 2D image with a DIBR unit;
S3: perform hole filling on the left and right images, and resize the left and right images to the size of the original 2D image;
S4: merge the left and right images to generate the 3D image.
2. The method for converting a 2D image to a 3D image according to claim 1, characterized in that step S1 specifically comprises:
S11: shrink the original 2D image to produce a reduced image; the size of the original 2D image is 720 × 1280 and the size of the reduced image is 320 × 360;
S12: convert the RGB of the reduced image to YCbCr and right-shift the result by 2 bits; the conversion formula is:
In the above formula, Y is the luma component of the color, Cb is the blue-difference chroma component, Cr is the red-difference chroma component, R is the red component, G is the green component, and B is the blue component;
S13: perform approximate edge detection on the YCbCr image to obtain a front depth map and an edge depth map, merge the front depth map and the edge depth map, and left-shift by 2 bits to generate the depth map.
3. The method for converting a 2D image to a 3D image according to claim 1, characterized in that in step S2 the depth map and the original 2D image generate the left image and the right image after an offset calculation, and the calculation formula of the offset Xview is:
In formula (4), Xc is the horizontal coordinate of the intermediate view, n is the number of virtual views, δ is odd or even, i is the index of the virtual camera relative to the center, α determines whether Xview corresponds to the horizontal coordinate of the left view or the right view, tx is the distance between the left and right virtual cameras, f is the camera focal length, vf is the minimum depth value in the foreground or the maximum depth value in the background, and v is the depth value of the pixel; the calculation formulas of α and δ are:
In formula (5), Xl is the horizontal coordinate of the left image and Xr is the horizontal coordinate of the right image.
4. The method for converting a 2D image to a 3D image according to claim 1, characterized in that the hole filling in step S3 is performed by a 2D Gaussian filter.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810933341.2A CN108900825A (en) | 2018-08-16 | 2018-08-16 | A kind of conversion method of 2D image to 3D rendering |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810933341.2A CN108900825A (en) | 2018-08-16 | 2018-08-16 | A kind of conversion method of 2D image to 3D rendering |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108900825A (en) | 2018-11-27
Family
ID=64354669
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810933341.2A Pending CN108900825A (en) | 2018-08-16 | 2018-08-16 | A kind of conversion method of 2D image to 3D rendering |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108900825A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102307312A (en) * | 2011-08-31 | 2012-01-04 | 四川虹微技术有限公司 | Method for performing hole filling on destination image generated by depth-image-based rendering (DIBR) technology |
CN102447927A (en) * | 2011-09-19 | 2012-05-09 | 四川虹微技术有限公司 | Method for warping three-dimensional image with camera calibration parameter |
CN102790896A (en) * | 2012-07-19 | 2012-11-21 | 彩虹集团公司 | Conversion method for converting 2D (Two Dimensional) into 3D (Three Dimensional) |
CN103714573A (en) * | 2013-12-16 | 2014-04-09 | 华为技术有限公司 | Virtual view generating method and virtual view generating device |
CN103903256A (en) * | 2013-09-22 | 2014-07-02 | 四川虹微技术有限公司 | Depth estimation method based on relative height-depth clue |
CN105069808A (en) * | 2015-08-31 | 2015-11-18 | 四川虹微技术有限公司 | Video image depth estimation method based on image segmentation |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110312117A (en) * | 2019-06-12 | 2019-10-08 | 北京达佳互联信息技术有限公司 | Method for refreshing data and device |
CN110312117B (en) * | 2019-06-12 | 2021-06-18 | 北京达佳互联信息技术有限公司 | Data refreshing method and device |
CN111970503A (en) * | 2020-08-24 | 2020-11-20 | 腾讯科技(深圳)有限公司 | Method, device and equipment for three-dimensionalizing two-dimensional image and computer readable storage medium |
WO2022042062A1 (en) * | 2020-08-24 | 2022-03-03 | 腾讯科技(深圳)有限公司 | Three-dimensional processing method and apparatus for two-dimensional image, device, and computer readable storage medium |
JP2023519728A (en) * | 2020-08-24 | 2023-05-12 | ▲騰▼▲訊▼科技(深▲セン▼)有限公司 | 2D image 3D conversion method, apparatus, equipment, and computer program |
CN111970503B (en) * | 2020-08-24 | 2023-08-22 | 腾讯科技(深圳)有限公司 | Three-dimensional method, device and equipment for two-dimensional image and computer readable storage medium |
JP7432005B2 (en) | 2020-08-24 | 2024-02-15 | ▲騰▼▲訊▼科技(深▲セン▼)有限公司 | Methods, devices, equipment and computer programs for converting two-dimensional images into three-dimensional images |
US12113953B2 (en) | 2020-08-24 | 2024-10-08 | Tencent Technology (Shenzhen) Company Limited | Three-dimensionalization method and apparatus for two-dimensional image, device and computer-readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10855909B2 (en) | Method and apparatus for obtaining binocular panoramic image, and storage medium | |
CN104756491B (en) | Depth cue based on combination generates depth map from monoscopic image | |
CN108537155B (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
US9030469B2 (en) | Method for generating depth maps from monocular images and systems using the same | |
US8711204B2 (en) | Stereoscopic editing for video production, post-production and display adaptation | |
CN112884682B (en) | Stereo image color correction method and system based on matching and fusion | |
US20150245007A1 (en) | Image processing method, image processing device, and electronic apparatus | |
CN111199518B (en) | Image presentation method, device and equipment of VR equipment and computer storage medium | |
US20110090216A1 (en) | Pseudo 3D image creation apparatus and display system | |
TWI531212B (en) | System and method of rendering stereoscopic images | |
CN105046708A (en) | Color correction objective assessment method consistent with subjective perception | |
CN108022223A (en) | A kind of tone mapping method based on the processing fusion of logarithmic mapping function piecemeal | |
WO2022126674A1 (en) | Method and system for evaluating quality of stereoscopic panoramic image | |
IL257304A (en) | 2d-to-3d video frame conversion | |
TWI457853B (en) | Image processing method for providing depth information and image processing system using the same | |
CN109345502A (en) | A kind of stereo image quality evaluation method based on disparity map stereochemical structure information extraction | |
US20240296531A1 (en) | System and methods for depth-aware video processing and depth perception enhancement | |
US10074209B2 (en) | Method for processing a current image of an image sequence, and corresponding computer program and processing device | |
CN110866882A (en) | Layered joint bilateral filtering depth map restoration algorithm based on depth confidence | |
Jung | A modified model of the just noticeable depth difference and its application to depth sensation enhancement | |
CN106686320B (en) | A kind of tone mapping method based on number density equilibrium | |
CN108900825A (en) | A kind of conversion method of 2D image to 3D rendering | |
CN105139368B (en) | A kind of mixed type tone mapping method available for machine vision | |
JP2013172214A (en) | Image processing device and image processing method and program | |
CN110060291B (en) | Three-dimensional apparent distance resolving method considering human factors |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2018-11-27