CN111225217B - 3D-HEVC error concealment method based on virtual viewpoint rendering - Google Patents

3D-HEVC error concealment method based on virtual viewpoint rendering

Info

Publication number: CN111225217B (application CN201911296384.5A)
Authority: CN (China)
Prior art keywords: block, frame, pixel, difference, blocks
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201911296384.5A
Other languages: Chinese (zh)
Other versions: CN111225217A
Inventors: 周洋, 崔金鹏, 梁文青, 张博文
Current and original assignee: Gongzhi Yukong Technology Suzhou Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Gongzhi Yukong Technology Suzhou Co., Ltd.; priority to CN201911296384.5A
Publication of CN111225217A, then grant and publication of CN111225217B

Classifications

    • H04N19/895: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression, involving detection of transmission errors at the decoder in combination with error concealment (H: Electricity; H04: Electric communication technique; H04N: Pictorial communication, e.g. television)
    • H04N19/597: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding, specially adapted for multi-view video sequence encoding

Abstract

The invention provides a 3D-HEVC error concealment method based on virtual viewpoint rendering, which comprises the following steps: S1, color image compensation: the lost whole-frame color image is rendered to generate a rendered viewpoint, and because the rendered view contains occlusion blocks and distortion, the rendered color image is compensated using information from the forward and backward reference frames of the current viewpoint and the rendered viewpoint; S2, occlusion-block judgment and local filling: the small occlusion blocks of the compensated color image are extracted and locally filled. The invention breaks through the limitation of the disparity vector: without using disparity vectors, it improves the judgment and filling of the occlusion blocks produced by virtual viewpoint rendering, improves error concealment accuracy, reduces overall complexity, and better eliminates occlusion blocks and image distortion.

Description

3D-HEVC error concealment method based on virtual viewpoint rendering
Technical Field
The invention relates to the technical field of viewpoint rendering, in particular to a 3D-HEVC error concealment method based on virtual viewpoint rendering.
Background
During transmission over a bandwidth-limited communication system, video data packets are lost because of network congestion and routing problems, which reduces transmission efficiency. Existing error control methods such as automatic repeat request (ARQ), forward error correction (FEC), and multiple description coding (MDC) are designed to improve error robustness. While these methods generally work well at low error rates, their performance degrades as the error rate increases. Another way to alleviate these problems in image and video transmission is error concealment. Error concealment uses the inter-frame and intra-frame redundancy of the video stream at the decoder to conceal erroneous regions in a decoded frame, so that the loss becomes subjectively imperceptible.
For whole-frame loss, one study grades the importance level of B frames using the motion-vector correlation of macroblocks in adjacent viewpoints and applies different error concealment methods to frames of different levels. Another proposes choosing the error concealment method according to the structural similarity of the macroblock. One error concealment method combines virtual viewpoint rendering with a joint motion decision for image blocks and adds the average pixel difference of each image block to the rendered image. Another exploits the motion consistency of objects across adjacent viewpoints and projects the frame difference between the neighboring-view frame of the lost frame and its previous frame onto the current lost frame. Yet another study uses motion information from the temporal domain and from the neighboring-view frames of the lost frame to distinguish moving from stationary regions: stationary regions are copied directly, while moving regions are further divided into occlusion-boundary and non-occlusion regions, recovered by motion compensation and disparity compensation respectively. These methods rely on inter-view correlation and disparity vectors, whose limitation is that little information can be referenced between views, typically none at image boundaries, so recovering the whole frame is difficult.
The method that combines virtual viewpoint rendering with a joint motion decision for image blocks has the drawback that the average pixel difference can deviate greatly from the individual pixel differences within a motion block, so the compensation works poorly. Moreover, its way of judging the occlusion blocks produced by virtual viewpoint rendering is imperfect.
Disclosure of Invention
Aiming at the problems in the prior art that the limitation of the disparity vector makes whole-frame recovery difficult and that the joint motion decision for image blocks lets the average pixel difference in a motion block deviate too far from the individual pixel differences, the invention provides a 3D-HEVC error concealment method based on virtual viewpoint rendering. It breaks through the limitation of the disparity vector: without using disparity vectors, it improves the judgment and filling of the occlusion blocks produced by virtual viewpoint rendering, reduces overall complexity while improving error concealment accuracy, and better eliminates occlusion blocks and image distortion.
In order to achieve the above purpose, the present invention provides the following technical solutions:
A 3D-HEVC error concealment method based on virtual viewpoint rendering comprises the following steps:
S1, color image compensation: the lost whole-frame color image is rendered to generate a rendered viewpoint; because the rendered view contains occlusion blocks and distortion, the rendered color image is compensated using information from the forward and backward reference frames of the current viewpoint and the rendered viewpoint;
S2, occlusion-block judgment and local filling: the small occlusion blocks of the compensated color image are extracted and locally filled.
Preferably, the step S1 specifically includes:
S101: obtain the pixel differences front_difference(T-1) and back_difference(T+1) between the color maps of the current viewpoint View3 and the rendered viewpoint DIBRView3 for the forward frame T-1 and the backward frame T+1:
front_difference(T-1)=View3(T-1)-DIBRView3(T-1);
back_difference(T+1)=View3(T+1)-DIBRView3(T+1);
Introducing front_difference(T-1) and back_difference(T+1) into the color map compensation lets the pixel differences of the reference frames in both directions jointly determine the total pixel difference of the current T frame, eliminating the chance error of using only a single-direction reference frame's pixel difference as the total pixel difference.
S102: extrapolate the forward and backward pixel differences to the current frame using the motion vectors MV;
because the T frame of View3 is lost, its motion vectors cannot be obtained directly, but they can be obtained by reference to the forward frame's motion vectors: the motion vector MV pointing from the T-1 frame to the T-2 frame is inverted into a motion vector MV pointing from the T-1 frame to the T frame;
let the forward frame pixel coordinate be (x1, y1) and its motion vector MV be (mv(x1, T-1), mv(y1, T-1)); the pixel coordinate extrapolated to the T frame is (x1 - mv(x1, T-1), y1 - mv(y1, T-1)), and the pixel difference extrapolated to the T frame is
front_MVEdifference(x1 - mv(x1, T-1), y1 - mv(y1, T-1), T) = front_difference(x1, y1, T-1);
the backward frame motion vector MV pointing from the T+1 frame to the T frame can be obtained directly; let the backward frame pixel coordinate be (x2, y2); the pixel difference extrapolated to the T frame is
back_MVEdifference(x2 + mv(x2, T+1), y2 + mv(y2, T+1), T) = back_difference(x2, y2, T+1);
In the extrapolation step of color map compensation, the forward frame MV is obtained by inverting the MV pointing from the T-1 frame to the T-2 frame into one pointing from the T-1 frame to the T frame, and this inverted MV sometimes cannot accurately reflect T frame information. The backward frame motion vector pointing from the T+1 frame to the T frame reflects T frame information more accurately and is obtained directly without MV inversion, improving accuracy while reducing overall complexity.
S103: apply weighted square-root processing to the extrapolated pixel differences to obtain the pixel difference of the current frame, and add it to the current frame of the rendered viewpoint to obtain the compensated color map;
let the T frame pixel coordinate be (x, y); because the forward and backward motion vectors MV may differ, front_MVEdifference(x, y, T) and back_MVEdifference(x, y, T) differ for some moving pixels. The invention therefore applies weighted square-root processing, so that the final pixel difference is closer to the larger of the two than their average, to better eliminate occlusion blocks and image distortion.
Final T frame pixel difference:
pixel_difference(x, y, T) = sqrt[(front_MVEdifference(x, y, T)^2 + back_MVEdifference(x, y, T)^2) / 2]
Compensated T frame pixels
View3(x,y,T)=DIBRView3(x,y,T)+pixel_difference(x,y,T)。
Preferably, the step S2 specifically includes:
S201: divide the image block into several depth co-located blocks;
S202: judge whether the depth co-located blocks contain an occlusion block: first identify non-occlusion blocks, then judge blocks that are not non-occlusion blocks as occlusion blocks;
S203: locally fill the occlusion blocks. By the temporal correlation of the video sequence, the information of the background region occluded by a moving object can be obtained from the adjacent reference frame. First extract the background region, whose pixel values are smaller within the depth co-located blocks corresponding to the occlusion block: let the minimum pixel value among the depth co-located blocks be Dmin; pixels whose value is less than T are regarded as background pixels, where T = Dmin + 40, obtained through an image binarization experiment. These pixels are then filled with the co-located pixels of the adjacent reference frame. Because the depth values of the background region are small, the background region is extracted more accurately from the minimum depth value in the block than from the maximum.
Although image distortion and large occlusion blocks are repaired during compensation, the MVs obtained for moving pixels are not accurate enough, so the occlusion blocks produced where a moving foreground object occludes the background cannot be completely repaired, and small occlusion blocks remain in the compensated color image. To repair them, the invention adopts a local filling algorithm based on depth-image pixel values: given that the depth frame of the current viewpoint is not lost and the current viewpoint differs horizontally from the adjacent viewpoint, comparing the local depth pixel differences within a block judges and fills the occlusion blocks better, at the precision of the image block.
Preferably, the depth co-located blocks are obtained by dividing image blocks: each image block is divided into 32x32 blocks, and each divided block is further divided into 16 8x8 blocks. During occlusion judgment the blocks are thus subdivided: the original 64x64 image block is split into 32x32 blocks and then into 16 8x8 blocks. If the image block were kept large (64x64), an occlusion block would be misjudged as a non-occlusion block and would not be filled afterwards.
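By way of illustration (not part of the patent text), the subdivision described above can be sketched in NumPy; the function name and array representation are assumptions:

```python
import numpy as np

def depth_colocated_blocks(depth64):
    """Split a 64x64 depth image block into four 32x32 blocks, then split
    each 32x32 block into its 16 8x8 depth co-located blocks."""
    blocks = []
    for by in range(0, 64, 32):
        for bx in range(0, 64, 32):
            b32 = depth64[by:by + 32, bx:bx + 32]
            subs = [b32[y:y + 8, x:x + 8]
                    for y in range(0, 32, 8)
                    for x in range(0, 32, 8)]
            blocks.append(subs)
    return blocks
```

Each of the four 32x32 blocks is then judged independently, which is what keeps the occlusion test at sub-block precision.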
Preferably, the step S202 specifically includes the following steps:
If |block_1 - block_2| < M, |block_1 - block_3| < M, |block_1 - block_4| < M, |block_5 - block_6| < M, |block_5 - block_7| < M and |block_5 - block_8| < M, the whole image block is judged to be a non-occlusion block; otherwise it is an occlusion block to be processed,
where block_i (i = 1, 2, ..., 8) is the average pixel value of each 8x8 block, M is the pixel difference threshold, and M = 25, obtained through an image binarization experiment.
If occlusion were judged directly from the pixel differences of the corner blocks at the two boundaries of the image block, an occlusion block whose two boundary blocks belong to the foreground while its middle blocks belong to the background would be misjudged as a non-occlusion block. To improve the judgment, the invention reasons in reverse: first identify non-occlusion blocks, then judge the blocks that are not non-occlusion blocks as occlusion blocks. By comparing the local depth pixel differences within a block, occlusion blocks are judged and filled better, at the precision of the block.
Preferably, in the color map compensation, only the backward pixel difference is extrapolated to the current frame using the motion vector MV and used as the total pixel difference of the current frame, i.e. pixel_difference(x, y, T) = back_MVEdifference(x, y, T).
Preferably, when the position of the current viewpoint relative to the adjacent viewpoint differs vertically instead, the step S202 specifically includes the following steps:
if |block_1 - block_2| < M, |block_1 - block_3| < M, |block_1 - block_4| < M, |block_5 - block_6| < M, |block_5 - block_7| < M and |block_5 - block_8| < M, the whole image block is judged to be a non-occlusion block; otherwise it is an occlusion block to be processed,
where block_i (i = 1, 2, ..., 8) is the average pixel value of each 8x8 block, M is the pixel difference threshold, and M = 25, obtained through an image binarization experiment.
The invention has the following beneficial effects: front_difference(T-1) and back_difference(T+1) are introduced, so that the pixel differences of the reference frames in both directions jointly determine the total pixel difference of the current T frame, eliminating the chance error of using only a single-direction reference frame's pixel difference as the total; the backward frame motion vector pointing from the T+1 frame to the T frame reflects T frame information more accurately and is obtained directly without MV inversion, reducing overall complexity while improving accuracy; because the depth values of the background region are small, the background region is extracted more accurately from the minimum depth value in the block than from the maximum; and by comparing the local depth pixel differences within a block, occlusion blocks are judged and filled better, at the precision of the block.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic block diagram of example 1;
FIG. 3 is a schematic block diagram of example 2.
Detailed Description
Example 1:
This embodiment provides a 3D-HEVC error concealment method based on virtual viewpoint rendering which, with reference to fig. 1, includes the following steps:
S1, color image compensation: the lost whole-frame color image is rendered to generate a rendered viewpoint; because the rendered view contains occlusion blocks and distortion, the rendered color image is compensated using information from the forward and backward reference frames of the current viewpoint and the rendered viewpoint;
step S1 specifically includes:
S101: obtain the pixel differences front_difference(T-1) and back_difference(T+1) between the color maps of the current viewpoint View3 and the rendered viewpoint DIBRView3 for the forward frame T-1 and the backward frame T+1:
front_difference(T-1)=View3(T-1)-DIBRView3(T-1);
back_difference(T+1)=View3(T+1)-DIBRView3(T+1);
Introducing front_difference(T-1) and back_difference(T+1) into the color map compensation lets the pixel differences of the reference frames in both directions jointly determine the total pixel difference of the current T frame, eliminating the chance error of using only a single-direction reference frame's pixel difference as the total pixel difference.
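Step S101 above can be sketched as follows; a minimal NumPy illustration, with the frame-array names assumed:

```python
import numpy as np

def reference_differences(view3_prev, dibr_prev, view3_next, dibr_next):
    """front_difference(T-1) and back_difference(T+1): per-pixel differences
    between the current viewpoint's received color frames and the rendered
    (DIBR) viewpoint's frames, computed in signed integers so that negative
    differences are kept."""
    front_difference = view3_prev.astype(np.int32) - dibr_prev.astype(np.int32)
    back_difference = view3_next.astype(np.int32) - dibr_next.astype(np.int32)
    return front_difference, back_difference
```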
S102: extrapolate the forward and backward pixel differences to the current frame using the motion vectors MV;
because the T frame of View3 is lost, its motion vectors cannot be obtained directly, but they can be obtained by reference to the forward frame's motion vectors: the motion vector MV pointing from the T-1 frame to the T-2 frame is inverted into a motion vector MV pointing from the T-1 frame to the T frame;
let the forward frame pixel coordinate be (x1, y1) and its motion vector MV be (mv(x1, T-1), mv(y1, T-1)); the pixel coordinate extrapolated to the T frame is (x1 - mv(x1, T-1), y1 - mv(y1, T-1)), and the pixel difference extrapolated to the T frame is
front_MVEdifference(x1 - mv(x1, T-1), y1 - mv(y1, T-1), T) = front_difference(x1, y1, T-1);
the backward frame motion vector MV pointing from the T+1 frame to the T frame can be obtained directly; let the backward frame pixel coordinate be (x2, y2); the pixel difference extrapolated to the T frame is
back_MVEdifference(x2 + mv(x2, T+1), y2 + mv(y2, T+1), T) = back_difference(x2, y2, T+1);
In the extrapolation step of color map compensation, the forward frame MV is obtained by inverting the MV pointing from the T-1 frame to the T-2 frame into one pointing from the T-1 frame to the T frame, and this inverted MV sometimes cannot accurately reflect T frame information. The backward frame motion vector pointing from the T+1 frame to the T frame reflects T frame information more accurately and is obtained directly without MV inversion, improving accuracy while reducing overall complexity.
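The motion-vector extrapolation of S102 can be sketched as below. This is an illustrative sketch only, assuming dense per-pixel integer MV fields; targets that fall outside the frame are simply dropped:

```python
import numpy as np

def extrapolate_to_T(diff_ref, mv_x, mv_y, sign):
    """Project a reference-frame difference field onto the lost T frame.
    sign = -1 for the forward frame (inverted MV: target x1 - mv(x1, T-1)),
    sign = +1 for the backward frame (target x2 + mv(x2, T+1))."""
    h, w = diff_ref.shape
    out = np.zeros_like(diff_ref)
    for y in range(h):
        for x in range(w):
            tx = x + sign * mv_x[y, x]
            ty = y + sign * mv_y[y, x]
            if 0 <= tx < w and 0 <= ty < h:
                out[ty, tx] = diff_ref[y, x]
    return out
```

Running it once with sign = -1 on front_difference and once with sign = +1 on back_difference yields the front_MVEdifference and back_MVEdifference fields used in S103.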
S103: apply weighted square-root processing to the extrapolated pixel differences to obtain the pixel difference of the current frame, and add it to the current frame of the rendered viewpoint to obtain the compensated color map;
let the T frame pixel coordinate be (x, y); because the forward and backward motion vectors MV may differ, front_MVEdifference(x, y, T) and back_MVEdifference(x, y, T) differ for some moving pixels. The invention therefore applies weighted square-root processing, so that the final pixel difference is closer to the larger of the two than their average, to better eliminate occlusion blocks and image distortion.
Final T frame pixel difference:
pixel_difference(x, y, T) = sqrt[(front_MVEdifference(x, y, T)^2 + back_MVEdifference(x, y, T)^2) / 2]
Compensated T frame pixels
View3(x,y,T)=DIBRView3(x,y,T)+pixel_difference(x,y,T);
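A hedged sketch of the S103 combination step. The patent presents its weighted square-root formula only as an image, so the root-mean-square form below is an assumption, chosen because it matches the stated property that the result lies closer to the larger of the two differences than their average:

```python
import numpy as np

def compensate_frame(dibr_T, front_mve, back_mve):
    """Combine the two extrapolated difference fields with a root-mean-square
    (an assumed reading of the patent's weighted square-root processing) and
    add the result to the rendered frame:
    View3(x, y, T) = DIBRView3(x, y, T) + pixel_difference(x, y, T)."""
    pixel_difference = np.sqrt((front_mve.astype(np.float64) ** 2
                                + back_mve.astype(np.float64) ** 2) / 2.0)
    return dibr_T + pixel_difference
```

For magnitudes 3 and 4, the RMS is about 3.54, above the plain average of 3.5, so the combined difference indeed leans toward the larger input.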
S2, occlusion-block judgment and local filling: the small occlusion blocks of the compensated color image are extracted and locally filled.
Step S2 specifically includes:
S201: divide the image block into several depth co-located blocks;
Referring to fig. 2, the image block is divided into 32x32 blocks, each then divided into 16 8x8 depth co-located blocks.
S202: judge whether the depth co-located blocks contain an occlusion block: first identify non-occlusion blocks, then judge blocks that are not non-occlusion blocks as occlusion blocks. During occlusion judgment the blocks are subdivided: the original 64x64 image block is divided into 32x32 blocks and then into 16 8x8 blocks. If the image block were kept large (64x64), an occlusion block would be misjudged as a non-occlusion block and would not be filled afterwards.
Step S202 specifically includes the following steps:
If |block_1 - block_2| < M, |block_1 - block_3| < M, |block_1 - block_4| < M, |block_5 - block_6| < M, |block_5 - block_7| < M and |block_5 - block_8| < M, the whole image block is judged to be a non-occlusion block; otherwise it is an occlusion block to be processed,
where block_i (i = 1, 2, ..., 8) is the average pixel value of each 8x8 block, M is the pixel difference threshold, and M = 25, obtained through an image binarization experiment.
If occlusion were judged directly from the pixel differences of the corner blocks at the two boundaries of the image block, an occlusion block whose two boundary blocks belong to the foreground while its middle blocks belong to the background would be misjudged as a non-occlusion block. To improve the judgment, the invention reasons in reverse: first identify non-occlusion blocks, then judge the blocks that are not non-occlusion blocks as occlusion blocks. By comparing the local depth pixel differences within a block, occlusion blocks are judged and filled better, at the precision of the block.
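The reverse judgment above can be sketched as follows, assuming block_1 to block_8 denote the mean depths of the designated boundary-row (or boundary-column) 8x8 sub-blocks and that mean differences are compared in absolute value:

```python
def is_occlusion_block(b, M=25):
    """b[0]..b[7] stand for block_1..block_8 (average pixel value of each
    8x8 depth co-located block). The block is non-occluded only when all six
    listed mean differences stay below the threshold M (M = 25 in the patent);
    every other block is treated as an occlusion block to be filled."""
    non_occluded = (abs(b[0] - b[1]) < M and abs(b[0] - b[2]) < M and
                    abs(b[0] - b[3]) < M and abs(b[4] - b[5]) < M and
                    abs(b[4] - b[6]) < M and abs(b[4] - b[7]) < M)
    return not non_occluded
```

Note the asymmetry of the test: a single middle sub-block with a depth far from the boundary rows is enough to mark the block for filling, which is exactly the case the corner-block comparison would miss.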
S203: locally fill the occlusion blocks. By the temporal correlation of the video sequence, the information of the background region occluded by a moving object can be obtained from the adjacent reference frame. First extract the background region, whose pixel values are smaller within the depth co-located blocks corresponding to the occlusion block: let the minimum pixel value within the 32x32 depth co-located block be Dmin; pixels whose value is less than T are regarded as background pixels, where T = Dmin + 40, obtained through an image binarization experiment. These pixels are then filled with the co-located pixels of the adjacent reference frame. Because the depth values of the background region are small, the background region is extracted more accurately from the minimum depth value in the block than from the maximum.
Although image distortion and large occlusion blocks are repaired during compensation, the MVs obtained for moving pixels are not accurate enough, so the occlusion blocks produced where a moving foreground object occludes the background cannot be completely repaired, and small occlusion blocks remain in the compensated color image. To repair them, the invention adopts a local filling algorithm based on depth-image pixel values: given that the depth frame of the current viewpoint is not lost and the current viewpoint differs horizontally from the adjacent viewpoint, comparing the local depth pixel differences within a block judges and fills the occlusion blocks better, at the precision of the image block.
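The local filling of S203 can be sketched as below; the function name, slice-based block selection, and in-place update are assumptions, while the threshold T = Dmin + 40 follows the text:

```python
import numpy as np

def fill_occlusion_block(color_T, color_ref, depth_block, ys, xs, offset=40):
    """Fill the background pixels of one occlusion block in place.
    ys, xs are slices selecting the 32x32 block; depth_block is its depth.
    Depth pixels below Dmin + offset are treated as background and replaced
    with the co-located pixels of the adjacent reference frame."""
    d_min = int(depth_block.min())
    background = depth_block < d_min + offset   # T = Dmin + 40 (binarization experiment)
    patch = color_T[ys, xs].copy()
    patch[background] = color_ref[ys, xs][background]
    color_T[ys, xs] = patch
    return color_T
```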
Example 2:
In this embodiment, based on embodiment 1, the position of the current viewpoint relative to the adjacent viewpoint differs vertically instead; referring to fig. 3, step S202 changes and specifically includes the following steps:
if |block_1 - block_2| < M, |block_1 - block_3| < M, |block_1 - block_4| < M, |block_5 - block_6| < M, |block_5 - block_7| < M and |block_5 - block_8| < M, the whole image block is judged to be a non-occlusion block; otherwise it is an occlusion block to be processed,
where block_i (i = 1, 2, ..., 8) is the average pixel value of each 8x8 block, M is the pixel difference threshold, and M = 25, obtained through an image binarization experiment.
Meanwhile, in step S1, only the backward pixel difference is extrapolated to the current frame using the motion vector MV and used as the total pixel difference of the current frame, i.e. pixel_difference(x, y, T) = back_MVEdifference(x, y, T), making the algorithm simpler.
Therefore, the invention has the following beneficial effects: front_difference(T-1) and back_difference(T+1) are introduced, so that the pixel differences of the reference frames in both directions jointly determine the total pixel difference of the current T frame, eliminating the chance error of using only a single-direction reference frame's pixel difference as the total; the backward frame motion vector pointing from the T+1 frame to the T frame reflects T frame information more accurately and is obtained directly without MV inversion, reducing overall complexity while improving accuracy; because the depth values of the background region are small, the background region is extracted more accurately from the minimum depth value in the block than from the maximum; and by comparing the local depth pixel differences within a block, occlusion blocks are judged and filled better, at the precision of the block.

Claims (6)

1. A 3D-HEVC error concealment method based on virtual viewpoint rendering, characterized by comprising the following steps:
S1, color image compensation: the lost whole-frame color image is rendered to generate a rendered viewpoint; because the rendered view contains occlusion blocks and distortion, the rendered color image is compensated using information from the forward and backward reference frames of the current viewpoint and the rendered viewpoint; the step S1 specifically includes:
S101: obtain the pixel differences front_difference(T-1) and back_difference(T+1) between the color maps of the current viewpoint View3 and the rendered viewpoint DIBRView3 for the forward frame T-1 and the backward frame T+1:
front_difference(T-1)=View3(T-1)-DIBRView3(T-1);
back_difference(T+1)=View3(T+1)-DIBRView3(T+1);
S102: extrapolate the forward and backward pixel differences to the current frame using the motion vectors MV;
the motion vector MV pointing from the T-1 frame to the T-2 frame is inverted into a motion vector MV pointing from the T-1 frame to the T frame;
let the forward frame pixel coordinate be (x1, y1) and its motion vector MV be (mv(x1, T-1), mv(y1, T-1)); the pixel coordinate extrapolated to the T frame is (x1 - mv(x1, T-1), y1 - mv(y1, T-1)), and the pixel difference extrapolated to the T frame is
front_MVEdifference(x1 - mv(x1, T-1), y1 - mv(y1, T-1), T) = front_difference(x1, y1, T-1);
the backward frame motion vector MV pointing from the T+1 frame to the T frame can be obtained directly; let the backward frame pixel coordinate be (x2, y2); the pixel difference extrapolated to the T frame is
back_MVEdifference(x2 + mv(x2, T+1), y2 + mv(y2, T+1), T) = back_difference(x2, y2, T+1);
S103: apply weighted square-root processing to the extrapolated pixel differences to obtain the pixel difference of the current frame, and add it to the current frame of the rendered viewpoint to obtain the compensated color map;
final T frame pixel difference:
pixel_difference(x, y, T) = sqrt[(front_MVEdifference(x, y, T)^2 + back_MVEdifference(x, y, T)^2) / 2]
Compensated T frame pixels
View3(x,y,T)=DIBRView3(x,y,T)+pixel_difference(x,y,T);
S2, occlusion-block judgment and local filling: the occlusion blocks still present in the compensated color image are extracted and locally filled.
2. The method as claimed in claim 1, wherein the step S2 further includes:
S201: divide the image block into several depth co-located blocks;
S202: judge whether the depth co-located blocks contain an occlusion block: first identify non-occlusion blocks, then judge blocks that are not non-occlusion blocks as occlusion blocks;
S203: locally fill the occlusion blocks; by the temporal correlation of the video sequence, the information of the background region occluded by a moving object can be obtained from the adjacent reference frame; first extract the background region, whose pixel values are smaller within the corresponding depth co-located blocks: let the minimum pixel value among the depth co-located blocks be Dmin, and regard pixels whose value is less than T as background pixels, where T = Dmin + 40; then fill these pixels with the co-located pixels of the adjacent reference frame.
3. The method of claim 2, wherein the depth co-located blocks are obtained by partitioning image blocks: each image block is partitioned into 32x32 blocks, each further partitioned into 16 blocks of 8x8 size.
4. The method of claim 3, wherein the step S202 further comprises the following steps:
when the position of the current viewpoint relative to the adjacent viewpoint differs horizontally:
if |block_1 - block_2| < M, |block_1 - block_3| < M, |block_1 - block_4| < M, |block_5 - block_6| < M, |block_5 - block_7| < M and |block_5 - block_8| < M, the whole image block is judged to be a non-occlusion block; otherwise it is an occlusion block to be processed;
block_1 to block_4 are the first row of 8x8 sub-blocks in left-to-right order; block_5 to block_8 are the fourth row in left-to-right order;
where block_i (i = 1, 2, ..., 8) is the average pixel value of each 8x8 block, M is the pixel difference threshold, and M = 25.
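The non-occlusion test of claim 4 can be sketched as follows for the horizontal case. This assumes the claimed inequalities compare absolute differences of sub-block averages (the claim text says only "less than M"); the function name is illustrative.

```python
import numpy as np

def is_non_occlusion_block(image_block, m=25):
    """Occlusion test for a 32x32 block, horizontal-difference case.

    The block is split into a 4x4 grid of 8x8 sub-blocks; block_1..block_4
    are the first row of sub-block averages left to right, block_5..block_8
    the fourth row. The block is non-occluded when every average in each of
    these rows differs from the row's leftmost average by less than m.
    """
    # 4x4 grid of 8x8 sub-block mean pixel values
    avg = image_block.reshape(4, 8, 4, 8).mean(axis=(1, 3))
    first_row, fourth_row = avg[0], avg[3]
    row1_ok = all(abs(first_row[0] - first_row[i]) < m for i in (1, 2, 3))
    row4_ok = all(abs(fourth_row[0] - fourth_row[i]) < m for i in (1, 2, 3))
    return row1_ok and row4_ok
```

The vertical case of claim 6 is identical with columns `avg[:, 0]` and `avg[:, 3]` in place of the two rows.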
5. The method as claimed in claim 1, wherein in the color map compensation process, only the backward pixel differences are extrapolated to the current frame through motion vectors (MVs) and taken as the total pixel difference of the current frame, i.e. pixel_difference(x, y, T) = back_MVEdifference(x, y, T).
6. The method as claimed in claim 3, wherein the step S202 further comprises the following steps: when the position of the current viewpoint relative to the adjacent viewpoint differs vertically,
if |block_1 - block_2| < M, |block_1 - block_3| < M, |block_1 - block_4| < M, |block_5 - block_6| < M, |block_5 - block_7| < M and |block_5 - block_8| < M, the whole image block is judged to be a non-occlusion block; otherwise it is an occlusion block to be processed; block_1 to block_4 are the first column of 8x8 sub-blocks in top-to-bottom order; block_5 to block_8 are the fourth column in top-to-bottom order;
where block_i (i = 1, 2, ..., 8) is the average pixel value of each 8x8 block, M is the pixel difference threshold, and M = 25.
CN201911296384.5A 2019-12-16 2019-12-16 3D-HEVC error concealment method based on virtual viewpoint rendering Active CN111225217B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911296384.5A CN111225217B (en) 2019-12-16 2019-12-16 3D-HEVC error concealment method based on virtual viewpoint rendering

Publications (2)

Publication Number Publication Date
CN111225217A CN111225217A (en) 2020-06-02
CN111225217B true CN111225217B (en) 2022-05-10

Family

ID=70827808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911296384.5A Active CN111225217B (en) 2019-12-16 2019-12-16 3D-HEVC error concealment method based on virtual viewpoint rendering

Country Status (1)

Country Link
CN (1) CN111225217B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101312540A (en) * 2008-07-03 2008-11-26 浙江大学 Virtual visual point synthesizing method based on depth and block information
TW201044880A (en) * 2009-06-11 2010-12-16 Univ Nat Central Multiview video encode/decode method
CN104969556A (en) * 2013-01-08 2015-10-07 Lg电子株式会社 Method and apparatus for processing video signal
CN108924568A (en) * 2018-06-01 2018-11-30 杭州电子科技大学 A kind of deep video error concealing method based on 3D-HEVC frame
CN109819230A (en) * 2019-01-28 2019-05-28 杭州电子科技大学 A kind of stereoscopic three-dimensional video error concealment method based on HEVC standard
CN110062219A (en) * 2019-03-12 2019-07-26 杭州电子科技大学 In conjunction with virtual viewpoint rendering 3D-HEVC entire frame loss error concealing method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101676830B1 (en) * 2010-08-16 2016-11-17 삼성전자주식회사 Image processing apparatus and method
US20130100245A1 (en) * 2011-10-25 2013-04-25 Samsung Electronics Co., Ltd. Apparatus and method for encoding and decoding using virtual view synthesis prediction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Depth Map Error Concealment for 3-D High Efficiency Video Coding; Zhou Yang, Wu Jiayi, Lu Yu, Yin Haibing; Journal of Electronics & Information Technology; November 2019; full text *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220407

Address after: 215000 room 203-2, building 2, No.8 Changting Road, high tech Zone, Suzhou City, Jiangsu Province

Applicant after: Gongzhi Yukong Technology (Suzhou) Co.,Ltd.

Address before: 310018 Xiasha Higher Education Zone, Jianggan District, Hangzhou, Zhejiang

Applicant before: HANGZHOU DIANZI University

GR01 Patent grant