CN111405264B - 3D video comfort level improving method based on depth adjustment - Google Patents
- Publication number
- CN111405264B (application CN202010068774.3A)
- Authority
- CN
- China
- Prior art keywords
- depth
- image
- map
- viewpoint
- filtering
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/128—Adjusting depth or disparity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/15—Processing image signals for colour aspects of image signals
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Generation (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a 3D video comfort improvement method based on depth adjustment, addressing the problems that typical 3D video is uncomfortable to watch and gives a poor visual experience. The invention comprises the following steps: S1: preprocess the depth maps of the left and right viewpoint images to obtain preprocessed maps; S2: apply depth filtering to the preprocessed maps to obtain filtered maps; S3: apply depth de-texturing to the filtered maps to obtain de-textured maps; S4: render a virtual viewpoint from the de-textured maps to obtain a virtual right viewpoint color map; S5: perform replacement with the virtual right viewpoint color map and the original viewpoint images to obtain the improved 3D video. The method reduces the influence of parallax, gradient change and texture on the viewing experience and improves overall comfort.
Description
Technical Field
The invention relates to the field of three-dimensional video comfort enhancement and 3D image processing, in particular to a 3D video comfort improvement method based on depth adjustment.
Background
With the growing application of 3D technologies such as virtual reality, augmented reality and holographic projection in daily life, the demand for viewing 3D content keeps increasing. At present, 3D video generation and display technologies are still imperfect and of uneven quality, and watching poor-quality 3D images and videos for a long time causes uncomfortable physiological reactions. For example, application CN201510275341.4 describes a stereoscopic video comfort enhancement method based on adjusting the continuity of disparity change: during stereoscopic video decoding it extracts the disparity and motion-vector information of the frames in each investigation period, computes the average comfort score of each period with a comfort evaluation model, decides whether the frames in that period are uncomfortable, and performs disparity adjustment on the uncomfortable frames. The process is complex, and the disparity information obtained through such subsequent calculation is generally inaccurate.
Disclosure of Invention
The invention addresses the problems of viewing discomfort and poor visual experience in typical 3D videos by providing a 3D video comfort improvement method based on depth adjustment.
In order to solve the above technical problems, the technical scheme of the invention is as follows: a 3D video comfort improvement method based on depth adjustment, wherein the 3D video comprises a left viewpoint image and a right viewpoint image, each comprising a color image and a depth image, and the improvement method comprises the following steps:
s1: preprocessing the depth maps of the left and right viewpoint images to obtain preprocessed maps;
s2: performing depth filtering on the preprocessed maps to obtain filtered maps;
s3: performing depth de-texturing on the filtered maps to obtain de-textured maps;
s4: rendering a virtual viewpoint from the de-textured maps to obtain a virtual right viewpoint color map;
s5: performing replacement with the virtual right viewpoint color map and the original viewpoint images to obtain the improved 3D video.
The original depth map may suffer from excessive parallax; the preprocessing step adjusts the depth values appropriately so as to reduce the influence of excessive parallax on 3D video comfort. Depth filtering is needed because it alleviates the discomfort caused by gradient changes in the depth map. When there is too much texture information, the human eyes have obvious difficulty fusing the 3D images, so applying a de-texturing operation to the images improves comfort. Finally, rendering the virtual viewpoint and performing the associated replacement improve the overall viewing experience.
As a preferred scheme, the preprocessing substitutes the depth value of each pixel of the depth map into a preprocessing objective function; the image obtained after this operation is the preprocessed map, and the preprocessing objective function is:
wherein Z_pro represents the depth value after preprocessing, Z represents the original depth value, and Zm represents the minimum depth value in the depth image. In existing depth maps the stored pixel value is inversely related to the depth: the smaller the depth, the larger the pixel value and the closer it is to white. The ratio 0.8 in the formula means that the portion of the depth larger than half of the maximum value is shifted back to 80% of its distance; that is, when the parallax is too large it is adjusted accordingly.
As a preferred scheme, the depth filtering substitutes each pixel of the preprocessed map into a depth filtering objective function; the image obtained after this operation is the filtered map, and the depth filtering objective function is:
O(x, y) = ∑_(m,n) Z(x + m, y + n) * K(m, n)
where O(x, y) represents the filtered pixel output value at position (x, y), (x, y) represents the position of a depth map pixel, and Z(x + m, y + n) represents the depth value at position (x + m, y + n) of the preprocessed map. K(m, n) represents the filter kernel, and (m, n) represents coordinates in the filter kernel position coordinate diagram; the filter kernel uses Gaussian filtering. Gaussian filtering is a linear filter that performs a weighted average over the image; unlike mean filtering, where all template coefficients equal 1, the coefficients of the Gaussian template vary and decrease with the pixel's distance from the template center. Gaussian filtering is used because it smooths gradient changes and thereby reduces discomfort.
As a preferred scheme, the Gaussian filtering blurs with a selected 3 × 3 Gaussian kernel. The two-dimensional Gaussian function corresponding to the 3 × 3 kernel is h(x, y) = (1 / (2πσ²)) * exp(−(x² + y²) / (2σ²)), where (x, y) represents coordinates in the 3 × 3 Gaussian kernel position coordinate diagram and σ represents the standard deviation; σ = 0.8 is used for the coordinate correction, a value found to be optimal in repeated tests. K(m, n) in the depth filtering objective function takes this two-dimensional Gaussian form, i.e. K(m, n) = h(m, n).
As a preferable mode of the above, the coordinate correction includes the steps of:
s51: substituting the 9 coordinates in the 3 × 3 Gaussian kernel position coordinate graph into the two-dimensional Gaussian function to obtain the corresponding 9 values;
s52: summing the 9 values, and taking the reciprocal of the sum as alpha;
s53: multiplying each of the 9 values by alpha to obtain the 9 normalized coefficients of the 3 × 3 Gaussian kernel;
s54: substituting the 9 normalized coefficients into the objective function of the depth filtering to perform the corresponding calculation.
The weight calculation essentially yields 9 coefficients. These 9 coefficients are substituted into the depth filtering objective function: taking each pixel as the center, the pixel and the 8 surrounding pixels at the corresponding offsets are each multiplied by their coefficient, and the 9 products are summed to obtain the pixel output value at the center position.
As a preferable aspect of the foregoing solution, the depth de-texturing includes the steps of:
s61: using Z = [Z(i, j)]_(W×H) to represent a given filtered map with resolution W × H, and performing a DCT transform;
the objective function of the DCT transform is the standard two-dimensional DCT:
F(u, v) = c(u) * c(v) * ∑_{i=0…W−1} ∑_{j=0…H−1} Z(i, j) * cos[(2i + 1)uπ / (2W)] * cos[(2j + 1)vπ / (2H)]
wherein:
c(u) = √(1/W) for u = 0 and √(2/W) for u > 0; c(v) = √(1/H) for v = 0 and √(2/H) for v > 0;
W and H represent the two resolution values, (i, j) represents the coordinates of a pixel in the filtered map, (u, v) represents the coordinates of a coefficient after the DCT transform, u is the horizontal frequency of the two-dimensional wave, and v is the vertical frequency of the two-dimensional wave.
S62: limiting the DCT result with a threshold and then performing an inverse DCT to obtain the de-textured map, wherein the threshold-limiting objective function is:
where DM denotes the de-texture map and T is an experimentally selected threshold.
When there is too much texture information, the human eyes have difficulty fusing the 3D images; therefore, applying the de-texturing operation to the images makes the 3D images easier for the eyes to fuse and improves comfort.
As a preferred scheme, the virtual viewpoint rendering adopts DIBR rendering and comprises the following steps: s71: adjusting LeftNearestDepthValue, LeftFarthestDepthValue, RightNearestDepthValue and RightFarthestDepthValue, wherein the adjustment objective function is:
DV_ad = DV_or − DV_or × 0.1
wherein DV_ad represents the adjusted maximum or minimum depth value and DV_or represents the original maximum or minimum depth value; LeftNearestDepthValue represents the minimum depth of the left viewpoint de-textured map, LeftFarthestDepthValue the maximum depth of the left viewpoint de-textured map, RightNearestDepthValue the minimum depth of the right viewpoint de-textured map, and RightFarthestDepthValue the maximum depth of the right viewpoint de-textured map;
s72: performing DIBR rendering according to LeftNearestDepthValue, LeftFarthestDepthValue, RightNearestDepthValue and RightFarthestDepthValue to obtain the virtual right viewpoint color map. Rendering the virtual right viewpoint color map with DIBR also reduces the parallax to some extent.
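The patent relies on a standard DIBR renderer; as an illustration only, the following heavily simplified sketch warps the left color map to a virtual right view using a per-pixel horizontal disparity derived from depth. The camera parameters (baseline, focal) and the depth-to-distance mapping are assumptions introduced for the example, and hole filling is omitted; in the patent the renderer is driven by the adjusted nearest/farthest depth values.

```python
import numpy as np

def render_dibr(left_color, left_depth, baseline=0.05, focal=500.0,
                z_near=50.0, z_far=200.0):
    """Very simplified DIBR sketch: forward-warp the left color map to a virtual
    right view. Parameters and the depth-to-distance mapping are illustrative
    assumptions, not values taken from the patent."""
    h, w = left_depth.shape
    # map 8-bit depth (255 = nearest) to an assumed metric distance in [z_near, z_far]
    z = z_far - (left_depth.astype(np.float32) / 255.0) * (z_far - z_near)
    disparity = (baseline * focal / z).astype(np.int32)    # horizontal shift in pixels
    virtual = np.zeros_like(left_color)
    xs = np.arange(w)
    for y in range(h):
        x_new = np.clip(xs - disparity[y], 0, w - 1)        # right view shifts left
        virtual[y, x_new] = left_color[y, xs]               # forward warp (holes not filled)
    return virtual
```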
As a preferred scheme, the replacement with the virtual right viewpoint color map and the original viewpoint images means replacing the right viewpoint color map, the left viewpoint depth map and the right viewpoint depth map of the original 3D video with the virtual right viewpoint color map, the left viewpoint de-textured map and the right viewpoint de-textured map, while retaining the original left viewpoint color map. Combining the virtual right viewpoint color map with the original left viewpoint and the left and right de-textured maps, and substituting them for the original images, improves the overall comfort of the 3D video.
Compared with the prior art, the invention has the following beneficial effects:
1. the preprocessing reduces the influence of excessive parallax on 3D video comfort and brings the parallax into a comfortable range;
2. the Gaussian filtering reduces the discomfort caused by gradient changes in the depth map;
3. the de-texturing reduces the negative impact of excessive texture features on the viewing experience;
4. the virtual viewpoint rendering and the associated replacement improve the overall comfort.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic representation of the Gaussian kernel position coordinates of the present invention.
Detailed Description
The technical solution of the present invention is further described below by way of examples with reference to the accompanying drawings.
Embodiment: in this embodiment, a 3D video comfort improvement method based on depth adjustment is applied to a 3D video comprising a left viewpoint image and a right viewpoint image, each of which includes a color image and a depth image. The method comprises the following steps. The first step: preprocess the depth maps of the left and right viewpoint images to obtain preprocessed maps. The preprocessing substitutes the depth value of each pixel of the depth map into a preprocessing objective function; the image obtained after this operation is the preprocessed map, and the preprocessing objective function is:
wherein Z_pro represents the depth value after preprocessing, Z represents the original depth value, and Zm represents the minimum depth value in the depth image. In existing depth maps the stored pixel value is inversely related to the depth: the smaller the depth, the larger the pixel value and the closer it is to white. The ratio 0.8 in the formula means that the portion of the depth larger than half of the maximum value is shifted back to 80% of its distance; that is, when the parallax is too large it is adjusted accordingly.
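The exact preprocessing objective function appears as a formula image in the original patent and is not reproduced in this text; the sketch below therefore encodes only one plausible reading of the description (an 8-bit depth map in which a larger pixel value means a closer object, with values above half of the maximum scaled by 0.8 to push the nearest content back). Treat the mapping as an assumption, not the patented formula.

```python
import numpy as np

def preprocess_depth(depth, ratio=0.8):
    """Illustrative pre-processing sketch (assumed reading of the patent's
    description, since the exact formula is not reproduced here)."""
    depth = depth.astype(np.float32)
    z_max = depth.max()
    out = depth.copy()
    near = depth > 0.5 * z_max           # region read as "too close" / excessive parallax
    out[near] = depth[near] * ratio      # shift it back to 80% (illustrative interpretation)
    return np.clip(out, 0, 255).astype(np.uint8)
```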
The second step: perform depth filtering on the preprocessed map to obtain a filtered map. The depth filtering substitutes each pixel of the preprocessed map into the depth filtering objective function; the image obtained after the operation is the filtered map, and the depth filtering objective function is:
O(x, y) = ∑_(m,n) Z(x + m, y + n) * K(m, n)
where O(x, y) represents the filtered pixel output value at position (x, y), (x, y) represents the position of a depth map pixel, and Z(x + m, y + n) represents the depth value at position (x + m, y + n) of the preprocessed map. K(m, n) represents the filter kernel, and (m, n) represents coordinates in the filter kernel position coordinate diagram; the filter kernel uses Gaussian filtering. Gaussian filtering is a linear filter that performs a weighted average over the image; unlike mean filtering, where all template coefficients equal 1, the coefficients of the Gaussian template vary and decrease with the pixel's distance from the template center. Gaussian filtering is used because it smooths gradient changes and thereby reduces discomfort.
The Gaussian filtering blurs with a 3 × 3 Gaussian kernel. The two-dimensional Gaussian function corresponding to the 3 × 3 kernel is h(x, y) = (1 / (2πσ²)) * exp(−(x² + y²) / (2σ²)), where (x, y) represents coordinates in the 3 × 3 Gaussian kernel position coordinate diagram shown in fig. 2 and σ represents the standard deviation; σ = 0.8 is used for the coordinate correction, a value found to be optimal in repeated tests. K(m, n) in the depth filtering objective function takes this two-dimensional Gaussian form, i.e. K(m, n) = h(m, n).
The coordinate correction substitutes the 9 coordinates of the 3 × 3 Gaussian kernel position coordinate diagram into the two-dimensional Gaussian function to obtain 9 corresponding values, sums these 9 values and takes the reciprocal of the sum as α, then multiplies each of the 9 values by α to obtain the 9 normalized coefficients of the 3 × 3 Gaussian kernel, and finally substitutes these coefficients into the depth filtering objective function for the corresponding calculation. The weight calculation essentially yields 9 coefficients: taking each pixel as the center, the pixel and the 8 surrounding pixels at the corresponding offsets are each multiplied by their coefficient, and the 9 products are summed to obtain the pixel output value at the center position.
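A minimal NumPy/SciPy sketch of the kernel construction, normalization and filtering described above (3 × 3 kernel, σ = 0.8). The use of scipy.ndimage.convolve is an implementation choice for the example, not part of the patent.

```python
import numpy as np
from scipy.ndimage import convolve  # used only to apply the 3x3 kernel

def gaussian_kernel_3x3(sigma=0.8):
    """Build the 3x3 Gaussian kernel and apply the coordinate correction
    (normalisation by alpha = 1 / sum of the 9 Gaussian values)."""
    coords = np.arange(-1, 2)                          # offsets -1, 0, 1
    x, y = np.meshgrid(coords, coords)
    h = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    alpha = 1.0 / h.sum()                              # reciprocal of the sum of the 9 values
    return h * alpha                                   # 9 normalised coefficients

def depth_filter(depth_map, sigma=0.8):
    """Depth filtering: O(x, y) = sum_{m,n} Z(x+m, y+n) * K(m, n); the Gaussian
    kernel is symmetric, so convolution and correlation coincide."""
    kernel = gaussian_kernel_3x3(sigma)
    return convolve(depth_map.astype(np.float32), kernel, mode='nearest')
```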
The third step: perform depth de-texturing on the filtered map to obtain a de-textured map. The depth de-texturing comprises the following steps:
1. Use Z = [Z(i, j)]_(W×H) to represent a given filtered map with resolution W × H and perform a DCT transform;
the objective function of the DCT transform is the standard two-dimensional DCT:
F(u, v) = c(u) * c(v) * ∑_{i=0…W−1} ∑_{j=0…H−1} Z(i, j) * cos[(2i + 1)uπ / (2W)] * cos[(2j + 1)vπ / (2H)]
wherein:
c(u) = √(1/W) for u = 0 and √(2/W) for u > 0; c(v) = √(1/H) for v = 0 and √(2/H) for v > 0;
W and H represent the two resolution values, (i, j) represents the coordinates of a pixel in the filtered map, (u, v) represents the coordinates of a coefficient after the DCT transform, u is the horizontal frequency of the two-dimensional wave, and v is the vertical frequency of the two-dimensional wave.
2. Limit the DCT result with a threshold and then perform an inverse DCT to obtain the de-textured map, wherein the threshold-limiting objective function is:
where DM denotes the de-texture map and T is an experimentally selected threshold.
When there is too much texture information, the human eyes have difficulty fusing the 3D images; therefore, applying the de-texturing operation to the images makes the 3D images easier for the eyes to fuse and improves comfort.
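An illustrative sketch of the DCT-based de-texturing follows. The patent's threshold-limiting function is not reproduced in this text, so the low-pass form used here (zeroing coefficients whose frequency index sum exceeds T) and the value T = 32 are assumptions made for the example.

```python
import numpy as np
from scipy.fft import dctn, idctn

def de_texture(filtered_map, T=32):
    """Illustrative de-texturing sketch: forward DCT, threshold-limit the
    coefficients (assumed low-pass form), inverse DCT."""
    Z = filtered_map.astype(np.float32)
    F = dctn(Z, norm='ortho')                       # forward 2-D DCT
    u = np.arange(F.shape[0])[:, None]
    v = np.arange(F.shape[1])[None, :]
    F[u + v > T] = 0.0                              # assumed threshold limiting: drop fine texture
    dm = idctn(F, norm='ortho')                     # inverse DCT -> de-textured map DM
    return np.clip(dm, 0, 255).astype(np.uint8)
```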
The fourth step: render a virtual viewpoint from the de-textured maps to obtain a virtual right viewpoint color map. The virtual viewpoint rendering adopts DIBR rendering and comprises the following steps:
1. Adjust LeftNearestDepthValue, LeftFarthestDepthValue, RightNearestDepthValue and RightFarthestDepthValue, wherein the adjustment objective function is:
DV_ad = DV_or − DV_or × 0.1
wherein DV_ad represents the adjusted maximum or minimum depth value and DV_or represents the original maximum or minimum depth value; LeftNearestDepthValue represents the minimum depth of the left viewpoint de-textured map, LeftFarthestDepthValue the maximum depth of the left viewpoint de-textured map, RightNearestDepthValue the minimum depth of the right viewpoint de-textured map, and RightFarthestDepthValue the maximum depth of the right viewpoint de-textured map;
2. Perform DIBR rendering according to LeftNearestDepthValue, LeftFarthestDepthValue, RightNearestDepthValue and RightFarthestDepthValue to obtain the virtual right viewpoint color map. Rendering the virtual right viewpoint color map with DIBR also reduces the parallax to some extent.
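A small sketch of the depth-range adjustment DV_ad = DV_or − DV_or × 0.1 applied to the four DIBR parameters; the DIBR renderer that consumes these values is not shown here (see the simplified warp sketch in the disclosure above).

```python
def adjust_depth_range(left_dt, right_dt):
    """Adjust the four DIBR depth-range parameters: DV_ad = DV_or - DV_or * 0.1.
    The inputs are the left and right de-textured depth maps."""
    def adj(dv):
        return dv - dv * 0.1
    return {
        'LeftNearestDepthValue':   adj(float(left_dt.min())),
        'LeftFarthestDepthValue':  adj(float(left_dt.max())),
        'RightNearestDepthValue':  adj(float(right_dt.min())),
        'RightFarthestDepthValue': adj(float(right_dt.max())),
    }   # passed to the DIBR renderer
```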
The last step: the replacement with the virtual right viewpoint color map and the original viewpoint images replaces the right viewpoint color map, the left viewpoint depth map and the right viewpoint depth map of the original 3D video with the virtual right viewpoint color map, the left viewpoint de-textured map and the right viewpoint de-textured map, retaining the original left viewpoint color map; the improved 3D video is then obtained. Combining the virtual right viewpoint color map with the original left viewpoint and the left and right de-textured maps, and substituting them for the original images, finally improves the comfort of the 3D video as a whole.
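Putting the steps together, the following is a minimal sketch of the full pipeline using the helper functions sketched above (preprocess_depth, depth_filter, de_texture, render_dibr, adjust_depth_range); the function names and call structure are illustrative, not the patented implementation.

```python
def improve_3d_comfort(left_color, right_color, left_depth, right_depth):
    """Illustrative end-to-end flow of steps S1-S5."""
    # S1: pre-process both depth maps to reduce excessive parallax
    left_pre, right_pre = preprocess_depth(left_depth), preprocess_depth(right_depth)
    # S2: 3x3 Gaussian depth filtering to smooth gradient changes
    left_f, right_f = depth_filter(left_pre), depth_filter(right_pre)
    # S3: DCT-based depth de-texturing
    left_dt, right_dt = de_texture(left_f), de_texture(right_f)
    # S4: render the virtual right viewpoint color map with DIBR; in the patent the
    #     renderer is driven by the adjusted nearest/farthest depth values
    #     (see adjust_depth_range above)
    virtual_right_color = render_dibr(left_color, left_dt)
    # S5: replacement -- keep the original left color map; the virtual right color
    #     map and the de-textured maps replace the original right color map and the
    #     left/right depth maps
    return left_color, virtual_right_color, left_dt, right_dt
```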
Claims (8)
1. A 3D video comfort improvement method based on depth adjustment, wherein the 3D video comprises a left viewpoint image and a right viewpoint image, each comprising a color image and a depth image, and the improvement method comprises the following steps:
s1: preprocessing the depth maps of the left and right viewpoint images to obtain preprocessed maps;
s2: performing depth filtering on the preprocessed maps to obtain filtered maps;
s3: performing depth de-texturing on the filtered maps to obtain de-textured maps;
s4: rendering a virtual viewpoint from the de-textured maps to obtain a virtual right viewpoint color map;
s5: performing replacement with the virtual right viewpoint color map and the original viewpoint images to obtain the improved 3D video.
2. The method according to claim 1, wherein the preprocessing substitutes the depth value of each pixel of the depth map into a preprocessing objective function, the map obtained after the operation being the preprocessed map, and the preprocessing objective function is:
wherein Z_pro represents the depth value after preprocessing, Z represents the original depth value, and Zm represents the minimum depth value in the depth image.
3. The method according to claim 1, wherein the depth filtering substitutes each pixel of the preprocessed map into a depth filtering objective function, the map obtained after the operation being the filtered map, and the depth filtering objective function is:
O(x, y) = ∑_(m,n) Z(x + m, y + n) * K(m, n)
wherein O (x, y) represents a pixel output value of an (x, y) position after filtering, (x, y) represents a position of a pixel point of the depth map, and Z (x + m, y + n) represents a depth value at the position of the depth map (x + m, y + n);
K(m, n) represents the filter kernel, and (m, n) represents coordinates in the filter kernel position coordinate diagram, wherein the filter kernel adopts Gaussian filtering.
4. The method of claim 3, wherein the Gaussian filtering performs blurring with a selected 3 × 3 Gaussian kernel, and the two-dimensional Gaussian function corresponding to the 3 × 3 Gaussian kernel is h(m, n) = (1 / (2πσ²)) * exp(−(m² + n²) / (2σ²)), where (m, n) represents coordinates in the 3 × 3 Gaussian kernel position coordinate diagram and σ represents the standard deviation, with σ = 0.8 used for the coordinate correction.
5. The method of claim 4, wherein the coordinate correction comprises the following steps:
s51: substituting the 9 coordinates in the 3 × 3 Gaussian kernel position coordinate diagram into the two-dimensional Gaussian function to obtain the corresponding 9 values;
s52: summing the 9 values, and taking the reciprocal of the sum as alpha;
s53: multiplying each of the 9 values by alpha to obtain the 9 normalized coefficients of the 3 × 3 Gaussian kernel;
s54: substituting the 9 normalized coefficients into the objective function of the depth filtering to perform the corresponding calculation.
6. The method of claim 1, wherein the depth de-texturing comprises:
s61: using Z = [Z(i, j)]_(W×H) to represent a given filtered map with resolution W × H and performing a DCT transform;
the objective function of the DCT transform is the standard two-dimensional DCT:
F(u, v) = c(u) * c(v) * ∑_{i=0…W−1} ∑_{j=0…H−1} Z(i, j) * cos[(2i + 1)uπ / (2W)] * cos[(2j + 1)vπ / (2H)]
wherein:
c(u) = √(1/W) for u = 0 and √(2/W) for u > 0; c(v) = √(1/H) for v = 0 and √(2/H) for v > 0;
W and H represent the two resolution values, (i, j) represents the coordinates of a pixel in the filtered map, (u, v) represents the coordinates of a coefficient after the DCT transform, u is the horizontal coordinate of the two-dimensional wave in the frequency domain, and v is the vertical coordinate of the two-dimensional wave in the frequency domain;
s62: limiting the DCT result with a threshold and then performing an inverse DCT to obtain the de-textured map, wherein the threshold-limiting objective function is:
where DM denotes the de-texture map and T is an experimentally selected threshold.
7. The method of claim 1, wherein the virtual viewpoint rendering adopts DIBR rendering, and comprises the following steps:
s71: adjusting LeftNearestDepthValue, LeftFarthestDepthValue, RightNearestDepthValue and RightFarthestDepthValue, wherein the adjustment objective function is:
DV_ad = DV_or − DV_or × 0.1
wherein DV_ad represents the adjusted maximum or minimum depth value and DV_or represents the original maximum or minimum depth value; LeftNearestDepthValue represents the minimum depth of the left viewpoint de-textured map, LeftFarthestDepthValue the maximum depth of the left viewpoint de-textured map, RightNearestDepthValue the minimum depth of the right viewpoint de-textured map, and RightFarthestDepthValue the maximum depth of the right viewpoint de-textured map;
s72: DIBR rendering is performed according to LeftNearestDepthValue, LeftFarthestDepthValue, RightNearestDepthValue and RightFarthestDepthValue to obtain the virtual right viewpoint color image.
8. The method as claimed in claim 1, wherein the replacement with the virtual right viewpoint color image and the original viewpoint images comprises replacing the right viewpoint color map, the left viewpoint depth map and the right viewpoint depth map in the original 3D video with the virtual right viewpoint color map, the left viewpoint de-textured map and the right viewpoint de-textured map, while retaining the original left viewpoint color map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010068774.3A CN111405264B (en) | 2020-01-20 | 2020-01-20 | 3D video comfort level improving method based on depth adjustment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010068774.3A CN111405264B (en) | 2020-01-20 | 2020-01-20 | 3D video comfort level improving method based on depth adjustment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111405264A CN111405264A (en) | 2020-07-10 |
CN111405264B true CN111405264B (en) | 2022-04-12 |
Family
ID=71428381
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010068774.3A Active CN111405264B (en) | 2020-01-20 | 2020-01-20 | 3D video comfort level improving method based on depth adjustment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111405264B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112040091B (en) * | 2020-09-01 | 2023-07-21 | 先临三维科技股份有限公司 | Camera gain adjusting method and device and scanning system |
CN117591385B (en) * | 2024-01-19 | 2024-04-16 | 深圳清华大学研究院 | Control system for VR projection |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102256149A (en) * | 2011-07-13 | 2011-11-23 | 深圳创维-Rgb电子有限公司 | Three-dimensional (3D) display effect regulation method, device and television |
CN102333230A (en) * | 2011-09-21 | 2012-01-25 | 山东大学 | Method for improving quality of synthetized virtual views in three-dimensional video system |
CN105208369A (en) * | 2015-09-23 | 2015-12-30 | 宁波大学 | Method for enhancing visual comfort of stereoscopic image |
CN109510981A (en) * | 2019-01-23 | 2019-03-22 | 杭州电子科技大学 | A kind of stereo-picture comfort level prediction technique based on multiple dimensioned dct transform |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8798158B2 (en) * | 2009-03-11 | 2014-08-05 | Industry Academic Cooperation Foundation Of Kyung Hee University | Method and apparatus for block-based depth map coding and 3D video coding method using the same |
US9525858B2 (en) * | 2011-07-06 | 2016-12-20 | Telefonaktiebolaget Lm Ericsson (Publ) | Depth or disparity map upscaling |
- 2020-01-20: CN application CN202010068774.3A filed; patent CN111405264B; status Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102256149A (en) * | 2011-07-13 | 2011-11-23 | 深圳创维-Rgb电子有限公司 | Three-dimensional (3D) display effect regulation method, device and television |
CN102333230A (en) * | 2011-09-21 | 2012-01-25 | 山东大学 | Method for improving quality of synthetized virtual views in three-dimensional video system |
CN105208369A (en) * | 2015-09-23 | 2015-12-30 | 宁波大学 | Method for enhancing visual comfort of stereoscopic image |
CN109510981A (en) * | 2019-01-23 | 2019-03-22 | 杭州电子科技大学 | A kind of stereo-picture comfort level prediction technique based on multiple dimensioned dct transform |
Non-Patent Citations (1)
Title |
---|
"Stereoscopic Visual Discomfort Prediction Using Multi-scale"; Yang Zhou et al.; Proceedings of the 27th ACM International Conference on Multimedia, ACM; 2019-10-31; pp. 184-191 *
Also Published As
Publication number | Publication date |
---|---|
CN111405264A (en) | 2020-07-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8711204B2 (en) | Stereoscopic editing for video production, post-production and display adaptation | |
JP4966431B2 (en) | Image processing device | |
CN109462747B (en) | DIBR system cavity filling method based on generation countermeasure network | |
CN112543317B (en) | Method for converting high-resolution monocular 2D video into binocular 3D video | |
US20110304708A1 (en) | System and method of generating stereo-view and multi-view images for rendering perception of depth of stereoscopic image | |
CN103053165B (en) | Method for converting 2D into 3D based on image motion information | |
JP5673032B2 (en) | Image processing apparatus, display apparatus, image processing method, and program | |
CN102098528B (en) | Method and device for converting planar image into stereoscopic image | |
US9041773B2 (en) | Conversion of 2-dimensional image data into 3-dimensional image data | |
Blum et al. | The effect of out-of-focus blur on visual discomfort when using stereo displays | |
JPWO2014083949A1 (en) | Stereoscopic image processing apparatus, stereoscopic image processing method, and program | |
WO2024022065A1 (en) | Virtual expression generation method and apparatus, and electronic device and storage medium | |
CN111405264B (en) | 3D video comfort level improving method based on depth adjustment | |
KR20110014067A (en) | Method and system for transformation of stereo content | |
WO2012036176A1 (en) | Reducing viewing discomfort | |
CN105657401A (en) | Naked eye 3D display method and system and naked eye 3D display device | |
Tam et al. | Stereoscopic image rendering based on depth maps created from blur and edge information | |
Liu et al. | A retargeting method for stereoscopic 3D video | |
Zhang et al. | Deep learning-based perceptual video quality enhancement for 3D synthesized view | |
Guo et al. | Adaptive estimation of depth map for two-dimensional to three-dimensional stereoscopic conversion | |
CN112308957A (en) | Optimal fat and thin face portrait image automatic generation method based on deep learning | |
CN109257591A (en) | Based on rarefaction representation without reference stereoscopic video quality method for objectively evaluating | |
Seitner et al. | Trifocal system for high-quality inter-camera mapping and virtual view synthesis | |
Xue et al. | Disparity-based just-noticeable-difference model for perceptual stereoscopic video coding using depth of focus blur effect | |
Chen et al. | Virtual view quality assessment based on shift compensation and visual masking effect |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |