CN103997653A - Depth video encoding method based on edges and oriented toward virtual visual rendering - Google Patents
Depth video encoding method based on edges and oriented toward virtual visual rendering
- Publication number
- CN103997653A CN103997653A CN201410197819.1A CN201410197819A CN103997653A CN 103997653 A CN103997653 A CN 103997653A CN 201410197819 A CN201410197819 A CN 201410197819A CN 103997653 A CN103997653 A CN 103997653A
- Authority
- CN
- China
- Prior art keywords
- depth map
- macro block
- value
- depth
- filter function
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses an edge-based depth video coding method oriented toward virtual view rendering. The method comprises the following steps: (1) edge detection — the macroblocks of the depth map are processed with the Sobel edge detection algorithm, and the edge values of the depth-map macroblocks are detected; (2) classification of the depth-map macroblock types — a threshold λ for classifying macroblocks is set, the edge values of the depth-map macroblocks are compared with the set threshold λ, and the macroblocks are classified into edge regions and flat regions; (3) encoding — the depth-map macroblocks are encoded with different coded prediction modes for the two region types, obtaining the coded depth-map macroblocks; (4) median trilateral filtering — a median trilateral filter is used to remove the blocking artifacts of the coded edge-region depth-map macroblocks and protect the edges. On the premise that the subjective quality of the virtual-view video is essentially unchanged, the algorithm improves the compression speed of the depth map and the coding quality of the depth map for virtual view synthesis.
Description
Technical field
The present invention relates to a depth video coding method, in particular to an edge-based depth video coding method oriented toward virtual view rendering.
Background technology
3D video provides viewers with a visual effect of three-dimensional depth, enhancing visual presence and realism. In an existing depth-based multi-view stereo system, a depth estimation algorithm first obtains depth information from two channels of colour video; one or more channels of colour video together with the depth information are then encoded and transmitted; finally, at the decoding end, virtual viewpoints are synthesized with the depth-image-based rendering (DIBR) technique. Compared with a multi-view stereo system based on colour video alone, a depth-based multi-view stereo system therefore transmits a markedly smaller amount of data and can achieve real-time transmission within a limited bandwidth.
A depth video consists of frame-by-frame depth maps. A depth map is an 8-bit quantized grey-scale image with grey values from 0 to 255 representing the distance information of the scene: the grey value of each pixel represents the relative distance between that pixel and the camera. A grey value of 255 (white) means the pixel is closest to the camera; a grey value of 0 (black) means the pixel is farthest from the camera. Compared with traditional colour video, a depth map has uniform texture, sharp edges and few feature points. In a 3DV system the depth map is not displayed directly but is used for virtual view synthesis at the decoding end, so the quality of the depth map is crucial to the quality of the view synthesized at the terminal.
Coding methods for depth video fall into two kinds. The first encodes the depth video as an independent grey-scale video, applying texture-video coding methods directly to the depth video; the second encodes the depth video and the texture video jointly. Because the characteristics of depth maps and texture maps differ, encoding the depth video as an independent grey-scale video yields low coding efficiency and poor reconstructed depth-map quality; joint coding of depth and texture video overcomes the deficiency of independent depth coding, but its algorithmic complexity is too high. For example, document [1] proposes a depth video coding algorithm based on an adaptive spatial transform framework. The method belongs to the joint depth-and-texture class: it exploits the similarity of edge structure between the depth map and the texture map and decomposes the depth map using the edges in the colour image. The method makes full use of the sharp edge transitions and smooth regions of the depth map and thereby improves depth-map coding efficiency while guaranteeing virtual-view quality, but the improvement in coding efficiency is limited.
Document [1]: Kwan-Jung Oh, A. Vetro, Yo-Sung Ho. Depth Coding Using a Boundary Reconstruction Filter for 3-D Video Systems [J]. IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 3, pp. 350-359, March 2011.
Summary of the invention
The object of the present invention is to provide an edge-based depth video coding method oriented toward virtual view rendering. Compared with the traditional method of encoding the depth video as an independent grey-scale video, the present invention is a depth video coding method designed for virtual view rendering: an edge detection algorithm detects the edge values of the macroblocks of the depth map; the depth map is divided into edge regions and flat regions; coded prediction modes are selected separately for the edge regions and the flat regions; and finally the depth map is processed with median trilateral filtering. On the premise that the subjective quality of the virtual-view video is essentially unchanged, the algorithm improves the compression speed and the virtual-view quality of the depth map.
To achieve the above object, the conception of the present invention is as follows: first, an edge detection algorithm divides the macroblocks (MB) of the depth map into edge regions and flat regions, and different coded prediction modes are adopted for the edge regions and the flat regions respectively; then, exploiting the similarity between the edges of the depth map and of the texture map, a median trilateral filter is constructed to improve coding efficiency and depth-map coding quality.
According to the above conception, the technical scheme of the present invention is realized as follows:
An edge-based depth video coding method oriented toward virtual view rendering, whose concrete steps are:
(1), edge detection: apply the Sobel edge detection algorithm to the macroblocks of the depth map and detect the edge values of the depth-map macroblocks;
(2), classify the macroblock types of the depth map: set a threshold λ for classifying macroblocks, and compare the edge value G obtained in step (1) for each depth-map macroblock with the set threshold λ. If the edge value G is greater than λ, the depth-map macroblock is judged to be an edge region; if G is less than λ, it is judged to be a flat region. The macroblocks are thus divided into edge regions and flat regions: if a depth-map macroblock is an edge region, go to step (3-1); if it is a flat region, go to step (3-2);
(3), encode the macroblocks of the depth map to obtain the coded depth-map macroblocks, as follows:
(3-1). Encode the depth-map macroblocks judged in step (2) to be edge regions with the SKIP prediction mode or the 16 × 16, 16 × 8, 8 × 16 or 4 × 4 coded prediction mode, obtaining the coded edge-region depth-map macroblocks;
(3-2). Encode the depth-map macroblocks judged in step (2) to be flat regions with the SKIP prediction mode or the 16 × 16 coded prediction mode, obtaining the coded flat-region depth-map macroblocks;
(4), apply median trilateral filtering to the macroblocks of the depth map: construct a median trilateral filter by normalizing the product of a pixel-position spatial filter function, a depth-value filter function and a texture-map filter function; then use this median trilateral filter to remove the blocking artifacts of the coded edge-region depth-map macroblocks of step (3) and protect the edges.
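The classification-and-mode-selection flow of steps (2) and (3) can be sketched as follows; the threshold value and the mode-name strings are illustrative assumptions, not values fixed by the patent:

```python
# Hypothetical sketch (not the patent's reference code): given the edge value G
# of a 16x16 depth-map macroblock, compare it with the threshold lam (the
# patent's lambda; the default value here is assumed) and return the candidate
# coded-prediction-mode set for that region type.
EDGE_MODES = ("SKIP", "16x16", "16x8", "8x16", "4x4")  # step (3-1)
FLAT_MODES = ("SKIP", "16x16")                          # step (3-2)

def pick_modes(edge_value: float, lam: float = 1000.0):
    """Step (2) comparison: G > lam -> edge region, otherwise flat region."""
    if edge_value > lam:
        return "edge", EDGE_MODES
    return "flat", FLAT_MODES
```

Restricting flat-region macroblocks to SKIP and 16 × 16 is what shortens the mode search and hence the coding time.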
Compared with the prior art, the present invention has the following obvious substantive features and remarkable advantages:

The virtual-view quality obtained after rendering with a depth map produced by a fast depth-map coding method declines noticeably. The depth video coding method for virtual view rendering provided by the invention obtains a median trilateral filter by normalizing the pixel-position spatial filter function, the depth-value filter function and the texture-map filter function, effectively protects the edge information of the depth map, and can improve the quality of the reconstructed virtual view. With the virtual-view quality unchanged, the coded-prediction-mode selection improves coding efficiency significantly, shortening the coding time by 54.1% in the tests. In addition, the method adds no especially complicated coding procedure and improves video coding compression efficiency with little extra complexity.
Brief description of the drawings
Fig. 1 lists the parameters of the test sequences: the Akko & Kayo, Breakdancers and Ballet sequences.
Fig. 2 is the flow chart of the edge-based depth video coding method oriented toward virtual view rendering.
Fig. 3a is an edge image of the video sequence Ballet detected with the Sobel algorithm in the present invention.
Fig. 3b is another edge image of the video sequence Ballet detected with the Sobel algorithm in the present invention.
Fig. 4 compares the objective quality of the virtual views rendered from the depth maps reconstructed with the JM method and with the present invention.
Fig. 5 shows the variation of the PSNR (dB) of the virtual views rendered with the JM method and with the present invention.
Fig. 6 shows the speed-up of the present invention relative to the JM method for different test sequences under different QP values.
Fig. 7a is the original, uncompressed depth map of the Ballet sequence.
Fig. 7b is the Ballet depth map reconstructed after compression with the JM method.
Fig. 7c is the Ballet depth map reconstructed after compression with the present invention.
Fig. 8a is a detail of the original, uncompressed depth map of the Ballet sequence.
Fig. 8b is a detail of the Ballet depth map reconstructed after compression with the JM method.
Fig. 8c is a detail of the Ballet depth map reconstructed after compression with the present invention.
Fig. 9a is the virtual view synthesized from the original, uncompressed depth map of the Ballet sequence.
Fig. 9b is the virtual view synthesized from the Ballet depth map compressed and reconstructed with the JM method.
Fig. 9c is the virtual view synthesized from the Ballet depth map compressed and reconstructed with the method of the present invention.
Fig. 10a is a detail of the virtual view synthesized from the original, uncompressed depth map of the Ballet sequence.
Fig. 10b is a detail of the virtual view synthesized from the Ballet depth map compressed with the JM method.
Fig. 10c is a detail of the virtual view synthesized from the Ballet depth map compressed with the method of the present invention.
Embodiment
One embodiment of the present invention is described below.
The present invention takes the coding reference software JM18.0 and the virtual view synthesis reference software VSRS3.5 as the experimental platform. The test video sequences are shown in Fig. 1, which tabulates the parameters of the Akko & Kayo, Breakdancers and Ballet sequences. The three sequences have 50, 100 and 100 frames respectively, with resolutions of 640 × 480, 1024 × 768 and 1024 × 768. The coded views of the Akko & Kayo sequence are viewpoints 27 and 29; the coded views of the Breakdancers and Ballet sequences are viewpoints 0 and 2.
Referring to Fig. 2, the steps of the edge-based depth video coding method of the present invention oriented toward virtual view rendering are:
(1), edge detection: apply the Sobel edge detection algorithm to the macroblocks (MB) of the depth map and detect the edge values of the macroblocks of the depth map of the video sequence Ballet, as shown for example in Fig. 3a and Fig. 3b. The concrete steps are as follows:

(1-1). Let the transverse matrix of the Sobel operator be

S_x = [ -1 0 +1 ; -2 0 +2 ; -1 0 +1 ];

let the longitudinal matrix of the Sobel operator be

S_y = [ -1 -2 -1 ; 0 0 0 ; +1 +2 +1 ];

and let MB denote a macroblock of the depth map, where f(i, j) is the depth value at position (i, j) of the macroblock. Convolve the transverse and longitudinal Sobel matrices in the plane with the macroblock MB to obtain the horizontal luminance difference value G_x and the vertical luminance difference value G_y of the macroblock:

G_x = S_x * MB (1)
G_y = S_y * MB (2)

(1-2). Compute the edge value G of the macroblock of the depth map:

G = sqrt(G_x^2 + G_y^2) (3)
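A minimal sketch of the edge-value computation of step (1) follows, using the standard Sobel kernels; the convention that the macroblock edge value G accumulates the per-pixel gradient magnitude over the block is our assumption:

```python
import numpy as np

# Standard Sobel kernels: Sx (transverse/horizontal), Sy (longitudinal/vertical).
SX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])

def edge_value(mb: np.ndarray) -> float:
    """Edge value G of a depth-map macroblock, taken here as the sum of
    per-pixel gradient magnitudes sqrt(Gx^2 + Gy^2) over interior positions
    (an assumed aggregation; the patent compares one G per macroblock)."""
    h, w = mb.shape
    g = 0.0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = mb[i - 1:i + 2, j - 1:j + 2]
            gx = float((SX * win).sum())
            gy = float((SY * win).sum())
            g += (gx * gx + gy * gy) ** 0.5
    return g

flat = np.full((16, 16), 128, dtype=np.int32)   # constant depth: G is 0
step = flat.copy(); step[:, 8:] = 200           # vertical depth edge: large G
```

A flat macroblock yields G = 0 and would be classified as a flat region for any positive λ, while the step block produces a large G and falls in the edge region.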
(2), classify the macroblock types of the depth map: set a threshold λ for classifying macroblocks, and compare the edge value G obtained in step (1) for each depth-map macroblock with the set threshold λ. If the edge value G is greater than λ, the depth-map macroblock is judged to be an edge region; if G is less than λ, it is judged to be a flat region. The macroblocks are thus divided into edge regions and flat regions: if a depth-map macroblock is an edge region, go to step (3-1); if it is a flat region, go to step (3-2);
(3), encode the macroblocks of the depth map to obtain the coded depth-map macroblocks, as follows:
(3-1). Encode the depth-map macroblocks judged in step (2) to be edge regions with the SKIP prediction mode or the 16 × 16, 16 × 8, 8 × 16 or 4 × 4 coded prediction mode, obtaining the coded edge-region depth-map macroblocks;
(3-2). Encode the depth-map macroblocks judged in step (2) to be flat regions with the SKIP prediction mode or the 16 × 16 coded prediction mode, obtaining the coded flat-region depth-map macroblocks;
(4), apply median trilateral filtering to the macroblocks of the depth map: construct a median trilateral filter by normalizing the product of a pixel-position spatial filter function, a depth-value filter function and a texture-map filter function; then use this median trilateral filter to remove the blocking artifacts of the coded edge-region depth-map macroblocks of step (3) and protect the edges. The concrete steps are as follows:

(4-1). Set the pixel-position spatial filter function f_s(p, q), the depth-value filter function f_d(p, q) and the texture-map filter function f_t(p, q), where p is the centre pixel of the depth map and q is a pixel in the neighbourhood N(p). Each function has its own threshold: f_s compares the spatial distance between p and q with the threshold σ_s of the pixel-position spatial filter function; f_d compares the depth value D(p) of the depth-map centre pixel with the depth value D(q) of the neighbourhood pixel q against the threshold σ_d of the depth-value filter function; f_t compares the pixel value T(p) at p of the texture map corresponding to the depth map with its pixel value T(q) at the neighbourhood pixel q against the threshold σ_t of the texture-map filter function.

(4-2). Set the rectangular function δ(x, σ), which takes the value 1 when the value x computed by a filter function lies within the threshold σ and 0 otherwise; the three filter functions of step (4-1) are all expressed through this rectangular function.

(4-3). Normalize the product of the pixel-position spatial filter function, the depth-value filter function and the texture-map filter function to obtain the median trilateral filter

w(p, q) = f_s(p, q) · f_d(p, q) · f_t(p, q) / k_p, (3)

where k_p is the normalization coefficient and N(p) is the neighbourhood of the depth-map centre pixel p.

(4-4). Use the median trilateral filter to remove the blocking artifacts of the coded depth-map macroblocks of step (3) and protect the edges.
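Under one reading of step (4) — box (rectangular) kernels on spatial distance, depth difference and co-located texture difference, with the median taken over the neighbours that pass all three kernels — the deblocking filter can be sketched as follows; the threshold values and this median interpretation are assumptions, not values fixed by the patent:

```python
import numpy as np

def rect(x: float, tau: float) -> float:
    """Rectangular function of step (4-2): 1 when the filter-function value
    lies within the threshold, 0 otherwise."""
    return 1.0 if abs(x) <= tau else 0.0

def median_trilateral(depth, texture, p, radius=2,
                      tau_s=2.0, tau_d=10.0, tau_t=10.0):
    """Gather the neighbours q of centre pixel p whose spatial distance,
    depth difference and co-located texture difference all pass their box
    kernels, then return the median of the surviving depth values."""
    i, j = p
    h, w = depth.shape
    kept = []
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            qi, qj = i + di, j + dj
            if not (0 <= qi < h and 0 <= qj < w):
                continue
            w_s = rect((di * di + dj * dj) ** 0.5, tau_s)
            w_d = rect(float(depth[qi, qj]) - float(depth[i, j]), tau_d)
            w_t = rect(float(texture[qi, qj]) - float(texture[i, j]), tau_t)
            if w_s * w_d * w_t > 0:           # neighbour survives all kernels
                kept.append(float(depth[qi, qj]))
    return float(np.median(kept))
```

Because a neighbour whose depth or co-located texture differs sharply from the centre is excluded, smoothing never crosses a depth edge, which is what protects the edges while the blocking artifacts inside each region are averaged out.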
To verify the effect of the algorithm of the present invention, the proposed method is compared with the existing standard coding method (the JM method) and with no compression; the test sequences are the Akko & Kayo, Breakdancers and Ballet sequences.
Fig. 4 lists the objective-quality comparison of the virtual views rendered from the depth maps of the three test sequences reconstructed with the JM method and with the method of the present invention. In the table, the first column gives the coded sequences; the second column gives the QP values of the test sequences, which in this experiment are 22, 27, 32 and 37; the third and fourth columns give the experimental results of the JM method and of the proposed method respectively, where PSNR is the peak signal-to-noise ratio and SSIM is the structural similarity index — both are measures of image quality, and larger values indicate better quality; the fifth column gives the quality of the views rendered from the uncompressed depth maps; and the remaining columns give the bit rate and the coding time saved by the proposed method relative to the JM method.
Fig. 5 shows the variation of the PSNR of the virtual views rendered under different QP values with the method of the present invention and with the JM method. The results show that, compared with the JM method, the proposed method improves the objective quality of the rendered views: the PSNR improves by 0.226 dB, 0.050 dB and 0.039 dB for the three sequences respectively, and the objective quality of the rendered views of the Akko & Kayo sequence in particular improves markedly.
Fig. 6 shows that, for the different test sequences under different QP values, the compression speed of the present invention is roughly twice that of the JM method, improving the coding speed. Fig. 7a, Fig. 7b, Fig. 7c and Fig. 8a, Fig. 8b, Fig. 8c compare the subjective quality of the Ballet depth maps obtained with the different coding methods: relative to the original depth map, both the JM method and the proposed method smooth the edges well and remove their high-frequency parts. Fig. 9a, Fig. 9b, Fig. 9c and Fig. 10a, Fig. 10b, Fig. 10c compare the subjective quality of the virtual views rendered for the Ballet sequence with the different coding methods: relative to no compression and to the JM method, the proposed method obtains clearer rendering results at edges and in regions of complex texture.
Claims (5)
1. An edge-based depth video coding method oriented toward virtual view rendering, characterized in that, first, an edge detection algorithm divides the macroblocks (MB) of the depth map into edge regions and flat regions, and different coded prediction modes are adopted for the edge regions and the flat regions respectively; then, exploiting the similarity between the edges of the depth map and of the texture map, a median trilateral filter is constructed to improve coding efficiency and depth-map coding quality; its concrete steps are:
(1), edge detection: apply the Sobel edge detection algorithm to the macroblocks of the depth map and detect the edge values of the depth-map macroblocks;
(2), classify the macroblock types of the depth map: set a threshold λ for classifying macroblocks, and compare the edge value G obtained in step (1) for each depth-map macroblock with the set threshold λ. If the edge value G is greater than λ, the depth-map macroblock is judged to be an edge region; if G is less than λ, it is judged to be a flat region. The macroblocks are thus divided into edge regions and flat regions: if a depth-map macroblock is an edge region, go to step (3-1); if it is a flat region, go to step (3-2);
(3), encode the macroblocks of the depth map to obtain the coded depth-map macroblocks;
(4), apply median trilateral filtering to the macroblocks of the depth map: construct a median trilateral filter by normalizing the product of a pixel-position spatial filter function, a depth-value filter function and a texture-map filter function; then use this median trilateral filter to remove the blocking artifacts of the coded edge-region depth-map macroblocks of step (3) and protect the edges.
2. The edge-based depth video coding method oriented toward virtual view rendering according to claim 1, characterized in that the edge detection of step (1) applies the Sobel edge detection algorithm to the macroblocks (MB) of the depth map and detects the edge values of the macroblocks of the depth map of the video sequence Ballet, as shown for example in Fig. 3a and Fig. 3b; the concrete steps are as follows:
(1-1). Let the transverse matrix of the Sobel operator be

S_x = [ -1 0 +1 ; -2 0 +2 ; -1 0 +1 ];

let the longitudinal matrix of the Sobel operator be

S_y = [ -1 -2 -1 ; 0 0 0 ; +1 +2 +1 ];

and let MB denote a macroblock of the depth map, where f(i, j) is the depth value at position (i, j) of the macroblock. Convolve the transverse and longitudinal Sobel matrices in the plane with the macroblock MB to obtain the horizontal luminance difference value G_x and the vertical luminance difference value G_y of the macroblock:

G_x = S_x * MB (1)
G_y = S_y * MB (2)

(1-2). Compute the edge value G of the macroblock of the depth map:

G = sqrt(G_x^2 + G_y^2). (3)
3. The edge-based depth video coding method oriented toward virtual view rendering according to claim 1, characterized in that the classification of the macroblock types of the depth map in step (2) proceeds as follows:
Set a threshold λ for classifying macroblocks, and compare the edge value G obtained in step (1) for each depth-map macroblock with the set threshold λ. If the edge value G is greater than λ, the depth-map macroblock is judged to be an edge region; if G is less than λ, it is judged to be a flat region. The macroblocks are thus divided into edge regions and flat regions: if a depth-map macroblock is an edge region, go to step (3-1); if it is a flat region, go to step (3-2).
4. The edge-based depth video coding method oriented toward virtual view rendering according to claim 1, characterized in that the encoding of the depth-map macroblocks in step (3) to obtain the coded depth-map macroblocks proceeds as follows:
(3-1). Encode the depth-map macroblocks judged in step (2) to be edge regions with the SKIP prediction mode or the 16 × 16, 16 × 8, 8 × 16 or 4 × 4 coded prediction mode, obtaining the coded edge-region depth-map macroblocks;
(3-2). Encode the depth-map macroblocks judged in step (2) to be flat regions with the SKIP prediction mode or the 16 × 16 coded prediction mode, obtaining the coded flat-region depth-map macroblocks.
5. The edge-based depth video coding method oriented toward virtual view rendering according to claim 1, characterized in that the median trilateral filtering of the depth-map macroblocks in step (4) proceeds as follows: construct a median trilateral filter by normalizing the product of a pixel-position spatial filter function, a depth-value filter function and a texture-map filter function; then use this median trilateral filter to remove the blocking artifacts of the coded edge-region depth-map macroblocks of step (3) and protect the edges. The concrete steps are:

(4-1). Set the pixel-position spatial filter function f_s(p, q), the depth-value filter function f_d(p, q) and the texture-map filter function f_t(p, q), where p is the centre pixel of the depth map and q is a pixel in the neighbourhood N(p). Each function has its own threshold: f_s compares the spatial distance between p and q with the threshold σ_s of the pixel-position spatial filter function; f_d compares the depth value D(p) of the depth-map centre pixel with the depth value D(q) of the neighbourhood pixel q against the threshold σ_d of the depth-value filter function; f_t compares the pixel value T(p) at p of the texture map corresponding to the depth map with its pixel value T(q) at the neighbourhood pixel q against the threshold σ_t of the texture-map filter function.

(4-2). Set the rectangular function δ(x, σ), which takes the value 1 when the value x computed by a filter function lies within the threshold σ and 0 otherwise; the three filter functions of step (4-1) are all expressed through this rectangular function.

(4-3). Normalize the product of the pixel-position spatial filter function, the depth-value filter function and the texture-map filter function to obtain the median trilateral filter

w(p, q) = f_s(p, q) · f_d(p, q) · f_t(p, q) / k_p, (3)

where k_p is the normalization coefficient and N(p) is the neighbourhood of the depth-map centre pixel p.

(4-4). Use the median trilateral filter to remove the blocking artifacts of the coded depth-map macroblocks of step (3) and protect the edges.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410197819.1A CN103997653A (en) | 2014-05-12 | 2014-05-12 | Depth video encoding method based on edges and oriented toward virtual visual rendering |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103997653A true CN103997653A (en) | 2014-08-20 |
Family
ID=51311639
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410197819.1A Pending CN103997653A (en) | 2014-05-12 | 2014-05-12 | Depth video encoding method based on edges and oriented toward virtual visual rendering |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103997653A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104717478A (en) * | 2015-02-04 | 2015-06-17 | 四川长虹电器股份有限公司 | Method for detecting and repairing blocking effect in 3D player |
CN105611287A (en) * | 2015-12-29 | 2016-05-25 | 上海大学 | Low-complexity depth video and multiview video encoding method |
CN106331729A (en) * | 2016-09-06 | 2017-01-11 | 山东大学 | Method of adaptively compensating stereo video frame rate up conversion based on correlation |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101742052A (en) * | 2009-12-02 | 2010-06-16 | 北京中星微电子有限公司 | Method and device for suppressing pseudo color based on edge detection |
US20110069142A1 (en) * | 2009-09-24 | 2011-03-24 | Microsoft Corporation | Mapping psycho-visual characteristics in measuring sharpness feature and blurring artifacts in video streams |
Non-Patent Citations (3)
Title |
---|
Zhang Qiuwen et al., "Depth estimation of multi-view images for coding and rendering", Journal of Optoelectronics · Laser * |
Zhang Yan et al., "Depth map optimization method based on the just-noticeable-depth-difference model", Journal of Optoelectronics · Laser * |
Wu Fuqiong et al., "Fast depth coding oriented toward virtual view rendering in real-time 3DV systems", Journal of Signal Processing * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104717478A (en) * | 2015-02-04 | 2015-06-17 | 四川长虹电器股份有限公司 | Method for detecting and repairing blocking effect in 3D player |
CN105611287A (en) * | 2015-12-29 | 2016-05-25 | 上海大学 | Low-complexity depth video and multiview video encoding method |
CN106331729A (en) * | 2016-09-06 | 2017-01-11 | 山东大学 | Method of adaptively compensating stereo video frame rate up conversion based on correlation |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication |
| PB01 | Publication |
| C10 | Entry into substantive examination |
| SE01 | Entry into force of request for substantive examination |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20140820