CN102769749A - Post-processing method for depth image - Google Patents

Post-processing method for depth image

Info

Publication number
CN102769749A
CN102769749A
Authority
CN
China
Prior art keywords
depth
image
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012102260184A
Other languages
Chinese (zh)
Other versions
CN102769749B (en)
Inventor
邵枫
蒋刚毅
郁梅
彭宗举
李福翠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Luyake Fire Vehicle Manufacturing Co ltd
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University
Priority to CN201210226018.4A
Publication of CN102769749A
Application granted
Publication of CN102769749B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a post-processing method for a depth image. The method comprises the following steps: coding an acquired color image and the corresponding depth image to obtain a coded code stream; obtaining coding distortion compensation parameters for the depth image and coding them to obtain a parameter code stream; decoding the coded code stream and the parameter code stream to obtain a decoded color image, a decoded depth image, and decoded coding distortion compensation parameters; compensating the decoded depth image with the coding distortion compensation parameters to obtain a depth-compensated image; and filtering the depth-compensated image to obtain a depth-filtered image, which is used for rendering virtual viewpoint images. The method has the advantage that, while maintaining the compression efficiency of the depth image, the influence of coding distortion on virtual viewpoint rendering is reduced, so the rendering performance of the virtual viewpoint image is greatly improved.

Description

Post-processing method of depth image
Technical Field
The present invention relates to an image processing method, and in particular, to a depth image post-processing method.
Background
Three-dimensional video (3DV) is an advanced visual medium that gives viewers a sense of depth and immersion when watching images on a screen, and satisfies the desire to view three-dimensional (3D) scenes from different angles. A typical three-dimensional video system is shown in Fig. 1 and mainly comprises modules for video capture, video encoding, transmission and decoding, virtual viewpoint rendering, and interactive display.
Multi-view video plus depth (MVD) is the 3D scene representation adopted by current ISO/MPEG recommendations. MVD data augments multi-viewpoint color images with depth information for the corresponding viewpoints; there are currently two main ways to obtain this depth information: 1) capture with a depth camera; 2) generation from ordinary two-dimensional (2D) video by a depth-estimation method. Depth-image-based rendering (DIBR) generates a virtual viewpoint image from the color image of a reference viewpoint and the depth information corresponding to each pixel point in that color image, thereby synthesizing a virtual view of the three-dimensional scene.
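For orientation, DIBR at its core warps each reference-view pixel by the disparity implied by its depth value. The following is a minimal sketch under simplifying assumptions (a rectified, horizontal-only camera pair, 8-bit inverse-coded depth per the usual MPEG convention, no occlusion ordering or hole filling); the function name and parameters are illustrative, not from the patent:

```python
import numpy as np

def dibr_warp_1d(color, depth, baseline, focal, z_near, z_far):
    """Warp a reference view to a horizontally shifted virtual view.

    color: (H, W, 3) array; depth: (H, W) uint8 with 255 = nearest plane.
    Occlusion ordering and hole filling are deliberately omitted."""
    h, w = depth.shape
    virtual = np.zeros_like(color)
    # 8-bit depth -> metric Z (inverse-depth quantization convention).
    z = 1.0 / (depth / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    disparity = np.round(baseline * focal / z).astype(int)
    for y in range(h):
        for x in range(w):
            xv = x + disparity[y, x]
            if 0 <= xv < w:
                virtual[y, xv] = color[y, x]  # last-writer-wins, no z-test
    return virtual
```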
Compared with color images, depth images have simple textures and contain more flat regions. However, owing to the limitations of depth acquisition methods, depth images generally suffer from poor temporal continuity, depth discontinuities, and similar problems; more importantly, depth images are not viewed directly but are used to assist DIBR and 3D display. Researchers have proposed preprocessing methods for depth images, such as symmetric and asymmetric Gaussian filtering; however, these methods focus on improving coding performance, and such improvements inevitably sacrifice virtual viewpoint rendering performance.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a depth image post-processing method that can effectively improve the rendering performance of virtual viewpoint images while maintaining the compression efficiency of the depth image.
The technical scheme adopted by the invention to solve this problem is a post-processing method for a depth image whose processing procedure is as follows: first, encode the acquired color image and its corresponding depth image to obtain a coded code stream; then obtain the coding distortion compensation parameters of the depth image and encode them to obtain a parameter code stream; then decode the coded code stream and the parameter code stream to obtain a decoded color image, a decoded depth image, and the decoded coding distortion compensation parameters of the depth image; then compensate the decoded depth image with the coding distortion compensation parameters to obtain a depth-compensated image, and filter the depth-compensated image to obtain a depth-filtered image, which is used for rendering a virtual viewpoint image.
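Read as a pipeline, the scheme splits into a sender side (coding plus parameter estimation) and a receiver side (decoding, compensation, filtering). The sketch below only fixes that data flow; every helper it receives (the video codec, the wiener-filter estimator, the parameter coder, the compensation and filtering routines) is an injected stand-in, not an API from the patent or any real codec:

```python
def sender(color, depth, codec, estimator, param_codec):
    """codec/estimator/param_codec are hypothetical stand-ins for the HBP
    video codec, the wiener-filter estimator and the CABAC parameter coder."""
    bitstream = codec.encode(color, depth)            # coded code stream
    _, depth_dec = codec.decode(bitstream)            # local reconstruction
    params = estimator(depth, depth_dec)              # distortion compensation
    return bitstream, param_codec.encode(params)      # parameter code stream

def receiver(bitstream, param_stream, codec, param_codec, compensate, bfilter):
    color_dec, depth_dec = codec.decode(bitstream)
    params = param_codec.decode(param_stream)
    depth_comp = compensate(depth_dec, params)        # wavelet-domain step
    depth_filt = bfilter(depth_comp, color_dec)       # edge-gated bilateral
    return color_dec, depth_filt                      # inputs to DIBR
```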
The post-processing method comprises the following specific steps:
① Acquire the K color images in YUV color space of the K reference viewpoints at time t and the K corresponding depth images; record the color image of the kth reference viewpoint at time t as $\{I_{R,t,i}^{k}(x,y)\}$ and the depth image of the kth reference viewpoint at time t as $\{D_{R,t}^{k}(x,y)\}$, where 1 ≤ k ≤ K and the initial value of k is 1; i = 1, 2, 3 denote the three components of the YUV color space, the 1st component being the luminance component, denoted Y, the 2nd the first chrominance component, denoted U, and the 3rd the second chrominance component, denoted V; (x, y) denotes the coordinate position of a pixel point in the color image and the depth image, 1 ≤ x ≤ W, 1 ≤ y ≤ H, where W denotes the width and H the height of the color image and the depth image; $I_{R,t,i}^{k}(x,y)$ denotes the value of the ith component of the pixel point at coordinate position (x, y) in the color image $\{I_{R,t,i}^{k}(x,y)\}$ of the kth reference viewpoint at time t; and $D_{R,t}^{k}(x,y)$ denotes the depth value of the pixel point at coordinate position (x, y) in the depth image $\{D_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t;
② Encode the K color images in YUV color space of the K reference viewpoints at time t and the K corresponding depth images according to a set coding prediction structure, output the color image code streams and the depth image code streams frame by frame to obtain the coded code stream, and have the server transmit the coded code stream to the user terminal over a network;
③ From the K depth images of the K reference viewpoints at time t and the K depth images of the K reference viewpoints at time t obtained by decoding after encoding, predict the coding distortion compensation parameters of the K depth images of the K reference viewpoints at time t with a wiener filter; then encode the coding distortion compensation parameters of the K depth images with the CABAC lossless compression method, output the parameter code stream frame by frame, and finally have the server transmit the parameter code stream to the user terminal over a network;
④ The user terminal decodes the coded code stream sent by the server to obtain the decoded K color images and the corresponding K depth images of the K reference viewpoints at time t; record the decoded color image and the corresponding depth image of the kth reference viewpoint at time t as $\{\tilde{I}_{R,t,i}^{k}(x,y)\}$ and $\{\tilde{D}_{R,t}^{k}(x,y)\}$, where $\tilde{I}_{R,t,i}^{k}(x,y)$ denotes the value of the ith component of the pixel point at coordinate position (x, y) in the decoded color image $\{\tilde{I}_{R,t,i}^{k}(x,y)\}$ of the kth reference viewpoint at time t, and $\tilde{D}_{R,t}^{k}(x,y)$ denotes the depth value of the pixel point at coordinate position (x, y) in the decoded depth image $\{\tilde{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t;
⑤ The user terminal decodes the parameter code stream sent by the server to obtain the coding distortion compensation parameters of the K depth images of the K reference viewpoints at time t, then compensates the decoded K depth images of the K reference viewpoints at time t with these parameters to obtain the decoded K depth-compensated images; record the decoded depth-compensated image of the kth reference viewpoint at time t as $\{\hat{D}_{R,t}^{k}(x,y)\}$, where $\hat{D}_{R,t}^{k}(x,y)$ denotes the depth value of the pixel point at coordinate position (x, y) in the decoded depth-compensated image $\{\hat{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t;
⑥ Apply a bilateral filter to each of the decoded K depth-compensated images of the K reference viewpoints at time t to obtain the decoded K depth-filtered images; record the decoded depth-filtered image of the kth reference viewpoint at time t as $\{\bar{D}_{R,t}^{k}(x,y)\}$, where $\bar{D}_{R,t}^{k}(x,y)$ denotes the depth value of the pixel point at coordinate position (x, y) in the decoded depth-filtered image $\{\bar{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t.
The coding distortion compensation parameters of the K depth images of the K reference viewpoints at time t in step ③ are obtained as follows:
③-1. Define the depth image $\{D_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint currently being processed among the K depth images of the K reference viewpoints at time t as the current depth image;

③-2. Apply a 3-level wavelet transform to the current depth image $\{D_{R,t}^{k}(x,y)\}$ to obtain the wavelet coefficient matrices of the 3 directional subbands of each level of the wavelet transform, the 3 directional subbands comprising a horizontal subband, a vertical subband, and a diagonal subband; record the wavelet coefficient matrix of the nth directional subband of $\{D_{R,t}^{k}(x,y)\}$ obtained after the mth-level wavelet transform as $\{C_{m,n}^{k}(x,y)\}$, where 1 ≤ m ≤ 3, 1 ≤ n ≤ 3, and $C_{m,n}^{k}(x,y)$ denotes the wavelet coefficient at coordinate position (x, y) in $\{C_{m,n}^{k}(x,y)\}$;
③-3. Apply a 3-level wavelet transform to the depth image $\{\tilde{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t obtained by decoding after encoding to obtain the wavelet coefficient matrices of the 3 directional subbands of each level of the wavelet transform, the 3 directional subbands comprising a horizontal subband, a vertical subband, and a diagonal subband; record the wavelet coefficient matrix of the nth directional subband of $\{\tilde{D}_{R,t}^{k}(x,y)\}$ obtained after the mth-level wavelet transform as $\{\tilde{C}_{m,n}^{k}(x,y)\}$, where 1 ≤ m ≤ 3, 1 ≤ n ≤ 3, and $\tilde{C}_{m,n}^{k}(x,y)$ denotes the wavelet coefficient at coordinate position (x, y) in $\{\tilde{C}_{m,n}^{k}(x,y)\}$;
③-4. Use a wiener filter to predict, for the decoded depth image $\{\tilde{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t, the coding distortion compensation parameter of the wavelet coefficient matrix of each directional subband of each level of the wavelet transform; record the coding distortion compensation parameter of $\{\tilde{C}_{m,n}^{k}(x,y)\}$ as $w_{m,n}^{k}$:

$$w_{m,n}^{k} = \arg\min_{w}\; E\!\left[\Big(C_{m,n}^{k}(x,y) - \sum_{p=-L}^{L}\sum_{q=-L}^{L} w(p,q)\,\tilde{C}_{m,n}^{k}(x+p,\,y+q)\Big)^{2}\right],$$

where L denotes the filtering length range of the wiener filter, E[·] denotes the mathematical expectation, $\tilde{C}_{m,n}^{k}(x+p,\,y+q)$ denotes the wavelet coefficient at coordinate position (x+p, y+q) in $\{\tilde{C}_{m,n}^{k}(x,y)\}$, and argmin(X) denotes the parameter that minimizes the function X;
③-5. From the coding distortion compensation parameters of the wavelet coefficient matrices of the directional subbands of each level of the wavelet transform of the decoded depth image $\{\tilde{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t, obtain the coding distortion compensation parameters $\{w_{m,n}^{k}\}$ of the current depth image $\{D_{R,t}^{k}(x,y)\}$; then take the depth image of the next reference viewpoint to be processed among the K depth images of the K reference viewpoints at time t as the current depth image and return to step ③-2 to continue, until the depth images of all reference viewpoints among the K depth images of the K reference viewpoints at time t have been processed, where the initial value of k′ is 0.
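The argmin in step ③-4 is a linear least-squares problem, so the compensation parameters can equivalently be characterized by the Wiener-Hopf normal equations. The restatement below (with the subband indices m, n, k dropped for readability, and the correlation notation ours) is a standard derivation, not text from the patent:

```latex
% Normal equations for the taps w(p,q), -L <= p,q <= L:
\frac{\partial}{\partial w(p,q)}\,
E\!\left[\Big(C(x,y)-\sum_{p',q'=-L}^{L} w(p',q')\,\tilde{C}(x+p',y+q')\Big)^{2}\right]=0
\;\Longrightarrow\;
\sum_{p',q'=-L}^{L} w(p',q')\,
E\big[\tilde{C}(x+p',y+q')\,\tilde{C}(x+p,y+q)\big]
= E\big[C(x,y)\,\tilde{C}(x+p,y+q)\big].
```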
Step ⑤ obtains the decoded depth-compensated image $\{\hat{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t as follows:
⑤-1. Apply a 3-level wavelet transform to the decoded depth image $\{\tilde{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t to obtain the wavelet coefficient matrices of the 3 directional subbands of each level of the wavelet transform, the 3 directional subbands comprising a horizontal subband, a vertical subband, and a diagonal subband; record the wavelet coefficient matrix of the nth directional subband of $\{\tilde{D}_{R,t}^{k}(x,y)\}$ obtained after the mth-level wavelet transform as $\{\tilde{C}_{m,n}^{k}(x,y)\}$, where 1 ≤ m ≤ 3, 1 ≤ n ≤ 3, and $\tilde{C}_{m,n}^{k}(x,y)$ denotes the wavelet coefficient at coordinate position (x, y) in $\{\tilde{C}_{m,n}^{k}(x,y)\}$;
⑤-2. Use the decoded coding distortion compensation parameters to compensate the wavelet coefficient matrices of the directional subbands of each level of the wavelet transform of the decoded depth image $\{\tilde{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t; record the compensated version of the wavelet coefficient matrix $\{\tilde{C}_{m,n}^{k}(x,y)\}$ as $\{\hat{C}_{m,n}^{k}(x,y)\}$:

$$\hat{C}_{m,n}^{k}(x,y) = \sum_{p=-L}^{L}\sum_{q=-L}^{L} w_{m,n}^{k}(p,q)\,\tilde{C}_{m,n}^{k}(x+p,\,y+q),$$

where $\tilde{C}_{m,n}^{k}(x+p,\,y+q)$ denotes the wavelet coefficient at coordinate position (x+p, y+q) in $\{\tilde{C}_{m,n}^{k}(x,y)\}$;
⑤-3. Apply the inverse wavelet transform to the compensated wavelet coefficient matrices of the directional subbands of each level of the wavelet transform of the decoded depth image $\{\tilde{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t to obtain the decoded depth-compensated image of the kth reference viewpoint at time t, recorded as $\{\hat{D}_{R,t}^{k}(x,y)\}$, where $\hat{D}_{R,t}^{k}(x,y)$ denotes the depth value of the pixel point at coordinate position (x, y) in the decoded depth-compensated image $\{\hat{D}_{R,t}^{k}(x,y)\}$.
The bilateral filtering of the decoded depth-compensated image $\{\hat{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t in step ⑥ proceeds as follows:
⑥-1. Define the currently processed pixel point in the decoded depth-compensated image $\{\hat{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t as the current pixel point;
⑥-2. Record the coordinate position of the current pixel point as p′ and the coordinate position of a neighborhood pixel point of the current pixel point as q′; then use the gradient template $G_x$ to perform a convolution at the depth value $\hat{D}_{R,t}^{k}(p')$ of the current pixel point, obtaining the gradient value $gx(p')$ of the current pixel point; then judge whether $|gx(p')| \ge T$ holds: if so, execute step ⑥-3, otherwise execute step ⑥-4. Here

$$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix},$$

'*' is the convolution operation symbol, '| |' is the absolute-value operation symbol, and T is the gradient magnitude threshold;
⑥-3. Apply a bilateral filter with standard deviations $(\sigma_{s1}, \sigma_{r1})$ to the depth values of the neighborhood pixel points of the current pixel point to obtain the filtered depth value of the current pixel point, recorded as $\bar{D}_{R,t}^{k}(p')$:

$$\bar{D}_{R,t}^{k}(p') = r_{s1}(p') \sum_{q' \in N(p')} G_{\sigma s1}\big(\|p'-q'\|\big)\, G_{\sigma r1}\big(|\tilde{I}_{R,t,i}^{k}(p') - \tilde{I}_{R,t,i}^{k}(q')|\big)\, \hat{D}_{R,t}^{k}(q'),$$

with the normalization factor

$$r_{s1}(p') = 1 \Big/ \sum_{q' \in N(p')} G_{\sigma s1}\big(\|p'-q'\|\big)\, G_{\sigma r1}\big(|\tilde{I}_{R,t,i}^{k}(p') - \tilde{I}_{R,t,i}^{k}(q')|\big),$$

where $G_{\sigma s1}(\|p'-q'\|) = \exp\!\big(-\|p'-q'\|^{2}/(2\sigma_{s1}^{2})\big)$ is a Gaussian function with standard deviation $\sigma_{s1}$, $\|p'-q'\|$ denotes the Euclidean distance between coordinate positions p′ and q′ and '‖ ‖' is the Euclidean distance symbol; $G_{\sigma r1}(|\tilde{I}_{R,t,i}^{k}(p') - \tilde{I}_{R,t,i}^{k}(q')|) = \exp\!\big(-|\tilde{I}_{R,t,i}^{k}(p') - \tilde{I}_{R,t,i}^{k}(q')|^{2}/(2\sigma_{r1}^{2})\big)$ is a Gaussian function with standard deviation $\sigma_{r1}$ and '| |' is the absolute-value operation symbol; $\tilde{I}_{R,t,i}^{k}(p')$ and $\tilde{I}_{R,t,i}^{k}(q')$ denote the values of the ith component of the pixel points at coordinate positions p′ and q′ in the decoded color image $\{\tilde{I}_{R,t,i}^{k}(x,y)\}$ of the kth reference viewpoint at time t; $\hat{D}_{R,t}^{k}(q')$ denotes the depth value of the pixel point at coordinate position q′ in the decoded depth-compensated image $\{\hat{D}_{R,t}^{k}(x,y)\}$; exp() denotes the exponential function with base e, e = 2.71828183; and N(p′) denotes a 7 × 7 neighborhood window centered on the pixel point at coordinate position p′. Then execute step ⑥-5;
⑥-4. Take the depth value $\hat{D}_{R,t}^{k}(p')$ of the current pixel point directly as the filtered depth value $\bar{D}_{R,t}^{k}(p')$, i.e. $\bar{D}_{R,t}^{k}(p') = \hat{D}_{R,t}^{k}(p')$, where the '=' in this expression is the assignment symbol. Then execute step ⑥-5;
⑥-5. Take the next pixel point to be processed in the decoded depth-compensated image $\{\hat{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t as the current pixel point and return to step ⑥-2 to continue, until all pixel points in the decoded depth-compensated image $\{\hat{D}_{R,t}^{k}(x,y)\}$ have been processed; the filtered depth image thus obtained is recorded as $\{\bar{D}_{R,t}^{k}(x,y)\}$.
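The gradient template $G_x$ in step ⑥-2 is the horizontal Sobel kernel, so the edge test can be sketched directly. A minimal NumPy/SciPy version follows; the function name and border mode are our own choices, and T = 5 is taken from the embodiment described later:

```python
import numpy as np
from scipy.ndimage import convolve

# Horizontal Sobel kernel, matching G_x in step 6-2.
GX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)

def edge_mask(depth_comp, t=5.0):
    """True where |gx(p')| >= T, i.e. where the bilateral filter of
    step 6-3 is applied; elsewhere step 6-4 keeps the value unchanged."""
    gx = convolve(depth_comp.astype(float), GX, mode='nearest')
    return np.abs(gx) >= t
```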
The coding prediction structure set in step ② is the HBP coding prediction structure.
Compared with the prior art, the invention has the following advantages:
1) The method obtains coding distortion compensation parameters for the depth image, compensates the decoded depth image with them, filters the resulting depth-compensated image, and uses the depth-filtered image for rendering virtual viewpoint images, thereby reducing the influence of coding distortion on virtual viewpoint rendering while maintaining the compression efficiency of the depth image, and greatly improving rendering performance.
2) The method uses a wiener filter to predict the coding distortion compensation parameters of the wavelet coefficient matrices of the different subbands of the depth image, encodes these parameters losslessly, and then compensates the decoded depth image at the user terminal, reducing the influence of coding distortion on virtual viewpoint rendering.
3) Considering that the edge regions of a depth image are discontinuous and that depth distortion in edge regions strongly affects virtual viewpoint rendering, the method applies a bilateral filter to the depth value of each pixel point in the edge regions of the depth-compensated image, effectively improving the rendering performance of virtual viewpoint images.
Drawings
FIG. 1 is a block diagram of the basic components of a typical three-dimensional video system;
FIG. 2a is a color image of the 8th reference viewpoint of the "Bookarrival" three-dimensional video test sequence;
FIG. 2b is a color image of the 10th reference viewpoint of the "Bookarrival" three-dimensional video test sequence;
FIG. 2c is the depth image corresponding to the color image shown in FIG. 2a;
FIG. 2d is the depth image corresponding to the color image shown in FIG. 2b;
FIG. 3a is a color image of the 8th reference viewpoint of the "Altmoabit" three-dimensional video test sequence;
FIG. 3b is a color image of the 10th reference viewpoint of the "Altmoabit" three-dimensional video test sequence;
FIG. 3c is the depth image corresponding to the color image shown in FIG. 3a;
FIG. 3d is the depth image corresponding to the color image shown in FIG. 3b;
FIG. 4a is a decoded depth image of the 8th reference viewpoint of the "Bookarrival" three-dimensional video test sequence;
FIG. 4b is the depth-filtered image of the 8th reference viewpoint of the "Bookarrival" three-dimensional video test sequence obtained by the method of the present invention;
FIG. 5a is a decoded depth image of the 8th reference viewpoint of the "Altmoabit" three-dimensional video test sequence;
FIG. 5b is the depth-filtered image of the 8th reference viewpoint of the "Altmoabit" three-dimensional video test sequence obtained by the method of the present invention;
FIG. 6a is a virtual viewpoint image of the 9th reference viewpoint of the "Bookarrival" three-dimensional video test sequence rendered with the original depth image;
FIG. 6b is a virtual viewpoint image of the 9th reference viewpoint of the "Bookarrival" three-dimensional video test sequence rendered with the decoded depth image;
FIG. 6c is a virtual viewpoint image of the 9th reference viewpoint of the "Bookarrival" three-dimensional video test sequence rendered with the method of the present invention;
FIG. 7a is a virtual viewpoint image of the 9th reference viewpoint of the "Altmoabit" three-dimensional video test sequence rendered with the original depth image;
FIG. 7b is a virtual viewpoint image of the 9th reference viewpoint of the "Altmoabit" three-dimensional video test sequence rendered with the decoded depth image;
FIG. 7c is a virtual viewpoint image of the 9th reference viewpoint of the "Altmoabit" three-dimensional video test sequence rendered with the method of the present invention;
FIG. 8a is an enlarged view of a portion of FIG. 6a;
FIG. 8b is an enlarged view of a portion of FIG. 6b;
FIG. 8c is an enlarged view of a portion of FIG. 6c;
FIG. 9a is an enlarged view of a portion of FIG. 7a;
FIG. 9b is an enlarged view of a portion of FIG. 7b;
FIG. 9c is an enlarged view of a portion of FIG. 7c.
Detailed Description
The invention is described in further detail below with reference to the drawings and embodiments.
The invention provides a post-processing method for a depth image, whose processing procedure is as follows: first, encode the acquired color image and its corresponding depth image to obtain a coded code stream; then obtain the coding distortion compensation parameters of the depth image and encode them to obtain a parameter code stream; then decode the coded code stream and the parameter code stream to obtain the decoded color image, the decoded depth image, and the decoded coding distortion compensation parameters of the depth image; then compensate the decoded depth image with the coding distortion compensation parameters to obtain a depth-compensated image, and filter the depth-compensated image to obtain a depth-filtered image, which is used for rendering a virtual viewpoint image. That is, the virtual viewpoint image is obtained by depth-image-based rendering from the decoded color image and the depth-filtered image. The method specifically comprises the following steps:
① Acquire the K color images in YUV color space of the K reference viewpoints at time t and the K corresponding depth images; record the color image of the kth reference viewpoint at time t as $\{I_{R,t,i}^{k}(x,y)\}$ and the depth image of the kth reference viewpoint at time t as $\{D_{R,t}^{k}(x,y)\}$, where 1 ≤ k ≤ K and the initial value of k is 1; i = 1, 2, 3 denote the three components of the YUV color space, the 1st component being the luminance component, denoted Y, the 2nd the first chrominance component, denoted U, and the 3rd the second chrominance component, denoted V; (x, y) denotes the coordinate position of a pixel point in the color image and the depth image, 1 ≤ x ≤ W, 1 ≤ y ≤ H, where W denotes the width and H the height of the color image and the depth image; $I_{R,t,i}^{k}(x,y)$ denotes the value of the ith component of the pixel point at coordinate position (x, y) in the color image $\{I_{R,t,i}^{k}(x,y)\}$ of the kth reference viewpoint at time t; and $D_{R,t}^{k}(x,y)$ denotes the depth value of the pixel point at coordinate position (x, y) in the depth image $\{D_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t.
Here, the three-dimensional video test sequences "Bookarrival" and "Altmoabit" provided by the HHI laboratory in Germany are used. Each sequence includes 16 color images of 16 reference viewpoints and the 16 corresponding depth images, each with a resolution of 1024 × 768 and a frame rate of 15 frames per second (15 fps); both are standard test sequences recommended by ISO/MPEG. FIGS. 2a and 2b show the color images of the 8th and 10th reference viewpoints of "Bookarrival"; FIGS. 2c and 2d show the corresponding depth images; FIGS. 3a and 3b show the color images of the 8th and 10th reference viewpoints of "Altmoabit"; FIGS. 3c and 3d show the corresponding depth images.
② Encode the K color images in YUV color space of the K reference viewpoints at time t and the K corresponding depth images according to the set coding prediction structure, then output the color image code streams and the depth image code streams frame by frame to obtain the coded code stream, which the server transmits to the user terminal over a network.
Here, the set coding prediction structure is the well-known HBP (hierarchical B-picture) coding prediction structure.
③ Coding the depth images reduces the quality of the decoded depth images and inevitably degrades the rendering performance of virtual viewpoint images. Therefore, the invention uses a wiener filter to predict the coding distortion compensation parameters of the K depth images of the K reference viewpoints at time t from those K depth images and from the K depth images of the K reference viewpoints at time t obtained by decoding after encoding; the coding distortion compensation parameters of the K depth images are then encoded with the CABAC (Context-based Adaptive Binary Arithmetic Coding) lossless compression method, the parameter code stream is output frame by frame, and finally the server transmits the parameter code stream to the user terminal over a network.
In this specific embodiment, the coding distortion compensation parameters of the K depth images of the K reference viewpoints at time t are obtained in step ③ as follows:
③-1. Define the depth image $\{D_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint currently being processed among the K depth images of the K reference viewpoints at time t as the current depth image.

③-2. Apply a 3-level wavelet transform to the current depth image $\{D_{R,t}^{k}(x,y)\}$ to obtain the wavelet coefficient matrices of the 3 directional subbands of each level of the wavelet transform, the 3 directional subbands comprising a horizontal subband, a vertical subband, and a diagonal subband; record the wavelet coefficient matrix of the nth directional subband of $\{D_{R,t}^{k}(x,y)\}$ obtained after the mth-level wavelet transform as $\{C_{m,n}^{k}(x,y)\}$, where 1 ≤ m ≤ 3, 1 ≤ n ≤ 3, and $C_{m,n}^{k}(x,y)$ denotes the wavelet coefficient at coordinate position (x, y) in $\{C_{m,n}^{k}(x,y)\}$.
③-3. Apply a 3-level wavelet transform to the depth image $\{\tilde{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t obtained by decoding after encoding to obtain the wavelet coefficient matrices of the 3 directional subbands of each level of the wavelet transform, the 3 directional subbands comprising a horizontal subband, a vertical subband, and a diagonal subband; record the wavelet coefficient matrix of the nth directional subband of $\{\tilde{D}_{R,t}^{k}(x,y)\}$ obtained after the mth-level wavelet transform as $\{\tilde{C}_{m,n}^{k}(x,y)\}$, where 1 ≤ m ≤ 3, 1 ≤ n ≤ 3, and $\tilde{C}_{m,n}^{k}(x,y)$ denotes the wavelet coefficient at coordinate position (x, y) in $\{\tilde{C}_{m,n}^{k}(x,y)\}$.
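Steps ③-2 and ③-3 each amount to one 3-level 2-D DWT whose detail bands are kept per level and direction. A sketch with PyWavelets follows; the choice of wavelet family ('haar') is our assumption, since the text does not specify one:

```python
import pywt

def subbands_3level(image, wavelet='haar'):
    """3-level 2-D DWT; returns {(m, n): coeff_matrix} with n = 1, 2, 3 for
    the horizontal, vertical and diagonal subbands of level m."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=3)
    # coeffs = [cA3, (cH3, cV3, cD3), (cH2, cV2, cD2), (cH1, cV1, cD1)]
    out = {}
    for level_idx, (ch, cv, cd) in enumerate(coeffs[1:]):
        m = 3 - level_idx  # wavedec2 lists the coarsest level first
        out[(m, 1)], out[(m, 2)], out[(m, 3)] = ch, cv, cd
    return out
```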
③-4. Use a wiener filter to predict, for the decoded depth image $\{\tilde{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t, the coding distortion compensation parameter of the wavelet coefficient matrix of each directional subband of each level of the wavelet transform; record the coding distortion compensation parameter of $\{\tilde{C}_{m,n}^{k}(x,y)\}$ as $w_{m,n}^{k}$:

$$w_{m,n}^{k} = \arg\min_{w}\; E\!\left[\Big(C_{m,n}^{k}(x,y) - \sum_{p=-L}^{L}\sum_{q=-L}^{L} w(p,q)\,\tilde{C}_{m,n}^{k}(x+p,\,y+q)\Big)^{2}\right],$$

where L denotes the filtering length range of the wiener filter, E[·] denotes the mathematical expectation, $\tilde{C}_{m,n}^{k}(x+p,\,y+q)$ denotes the wavelet coefficient at coordinate position (x+p, y+q) in $\{\tilde{C}_{m,n}^{k}(x,y)\}$, and argmin(X) denotes the parameter that minimizes the function X, i.e. $w_{m,n}^{k}$ is the set of filter taps that makes the expected squared error minimal.
③-5. From the coding distortion compensation parameters of the wavelet coefficient matrices of the directional subbands of each level of the wavelet transform of the decoded depth image $\{\tilde{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t, obtain the coding distortion compensation parameters $\{w_{m,n}^{k}\}$ of the current depth image $\{D_{R,t}^{k}(x,y)\}$; then take the depth image of the next reference viewpoint to be processed among the K depth images of the K reference viewpoints at time t as the current depth image and return to step ③-2 to continue, until the depth images of all reference viewpoints among the K depth images of the K reference viewpoints at time t have been processed, where the initial value of k′ is 0.
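One way to realize the prediction in steps ③-4/③-5 is to solve the least-squares problem per subband directly. The sketch below uses np.linalg.lstsq over all coefficient positions instead of an explicit correlation-matrix solve; its wrap-around border handling (np.roll) is a simplification. Looping it over the (m, n) dictionary from the decomposition sketch above yields the parameter set of step ③-5:

```python
import numpy as np

def estimate_taps(c_orig, c_dec, L=1):
    """Least-squares taps w(p, q) minimising E[(C - sum w * shifted C~)^2]
    for one subband; c_orig / c_dec are same-shape coefficient matrices."""
    shifts = [(p, q) for p in range(-L, L + 1) for q in range(-L, L + 1)]
    # Column for (p, q) holds c_dec shifted so that row y, col x reads
    # c_dec[y + q, x + p] (wrap-around at the borders).
    cols = [np.roll(np.roll(c_dec, -q, axis=0), -p, axis=1).ravel()
            for p, q in shifts]
    A = np.stack(cols, axis=1)
    taps, *_ = np.linalg.lstsq(A, c_orig.ravel(), rcond=None)
    return dict(zip(shifts, taps))
```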
④ The user terminal decodes the coded code stream sent by the server to obtain the decoded K color images and the corresponding K depth images of the K reference viewpoints at time t, and records the decoded color image and the corresponding depth image of the kth reference viewpoint at time t as $\{\tilde{I}_{R,t,i}^{k}(x,y)\}$ and $\{\tilde{D}_{R,t}^{k}(x,y)\}$, where $\tilde{I}_{R,t,i}^{k}(x,y)$ denotes the value of the ith component of the pixel point at coordinate position (x, y) in the decoded color image $\{\tilde{I}_{R,t,i}^{k}(x,y)\}$ of the kth reference viewpoint at time t, and $\tilde{D}_{R,t}^{k}(x,y)$ denotes the depth value of the pixel point at coordinate position (x, y) in the decoded depth image $\{\tilde{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t.
⑤ The user terminal decodes the parameter code stream sent by the server to obtain the coding distortion compensation parameters of the K depth images of the K reference viewpoints at time t, then compensates the decoded K depth images of the K reference viewpoints at time t with these parameters to obtain the decoded K depth-compensated images; the decoded depth-compensated image of the kth reference viewpoint at time t is recorded as $\{\hat{D}_{R,t}^{k}(x,y)\}$, where $\hat{D}_{R,t}^{k}(x,y)$ denotes the depth value of the pixel point at coordinate position (x, y) in the decoded depth-compensated image $\{\hat{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t.
In this embodiment, the decoded depth-compensated image $\{\hat{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t is obtained in step ⑤ as follows:
⑤-1. Apply a 3-level wavelet transform to the decoded depth image $\{\tilde{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t to obtain the wavelet coefficient matrices of the 3 directional subbands of each level of the wavelet transform, the 3 directional subbands comprising a horizontal subband, a vertical subband, and a diagonal subband; record the wavelet coefficient matrix of the nth directional subband of $\{\tilde{D}_{R,t}^{k}(x,y)\}$ obtained after the mth-level wavelet transform as $\{\tilde{C}_{m,n}^{k}(x,y)\}$, where 1 ≤ m ≤ 3, 1 ≤ n ≤ 3, and $\tilde{C}_{m,n}^{k}(x,y)$ denotes the wavelet coefficient at coordinate position (x, y) in $\{\tilde{C}_{m,n}^{k}(x,y)\}$.
⑤-2. Use the decoded coding distortion compensation parameters to compensate the wavelet coefficient matrices of the directional subbands of each level of the wavelet transform of the decoded depth image $\{\tilde{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t; record the compensated version of the wavelet coefficient matrix $\{\tilde{C}_{m,n}^{k}(x,y)\}$ as $\{\hat{C}_{m,n}^{k}(x,y)\}$:

$$\hat{C}_{m,n}^{k}(x,y) = \sum_{p=-L}^{L}\sum_{q=-L}^{L} w_{m,n}^{k}(p,q)\,\tilde{C}_{m,n}^{k}(x+p,\,y+q),$$

where $\tilde{C}_{m,n}^{k}(x+p,\,y+q)$ denotes the wavelet coefficient at coordinate position (x+p, y+q) in $\{\tilde{C}_{m,n}^{k}(x,y)\}$.
⑤-3. Apply the inverse wavelet transform to the compensated wavelet coefficient matrices of the directional subbands of each level of the wavelet transform of the decoded depth image $\{\tilde{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t to obtain the decoded depth-compensated image of the kth reference viewpoint at time t, recorded as $\{\hat{D}_{R,t}^{k}(x,y)\}$, where $\hat{D}_{R,t}^{k}(x,y)$ denotes the depth value of the pixel point at coordinate position (x, y) in the decoded depth-compensated image $\{\hat{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t.
⑥ Because of the limitations of depth acquisition methods, the edge regions of a depth image are discontinuous; at the same time there is a strong correlation between the depth image and the color image, and the boundaries of moving objects in the two are consistent, so the edge information of the color image can be used to assist the filtering of the depth image. A bilateral filter is therefore applied to the decoded K depth-compensated images of the K reference viewpoints at time t to obtain the decoded K depth-filtered images; the decoded depth-filtered image of the kth reference viewpoint at time t is recorded as $\{\bar{D}_{R,t}^{k}(x,y)\}$, where $\bar{D}_{R,t}^{k}(x,y)$ denotes the depth value of the pixel point at coordinate position (x, y) in $\{\bar{D}_{R,t}^{k}(x,y)\}$. When rendering a virtual viewpoint image, it can be obtained by depth-image-based rendering from the decoded K color images and the decoded K depth-filtered images of the K reference viewpoints at time t.
In this embodiment, the bilateral filtering of the decoded depth-compensated image $\{\hat{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t in step ⑥ proceeds as follows:
⑥-1. Define the currently processed pixel point in the decoded depth-compensated image $\{\hat{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t as the current pixel point.
⑥-2. Record the coordinate position of the current pixel point as p′ and the coordinate position of a neighborhood pixel point of the current pixel point as q′; then use the gradient template $G_x$ to perform a convolution at the depth value $\hat{D}_{R,t}^{k}(p')$ of the current pixel point, obtaining the gradient value $gx(p')$ of the current pixel point; then judge whether $|gx(p')| \ge T$ holds: if so, execute step ⑥-3, otherwise execute step ⑥-4. Here

$$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix},$$

'*' is the convolution operation symbol, '| |' is the absolute-value operation symbol, and T is the gradient magnitude threshold; in this embodiment, T = 5.
⑥-3. Apply a bilateral filter with standard deviations $(\sigma_{s1}, \sigma_{r1})$ to the depth values of the neighborhood pixel points of the current pixel point to obtain the filtered depth value of the current pixel point, recorded as $\bar{D}_{R,t}^{k}(p')$:

$$\bar{D}_{R,t}^{k}(p') = r_{s1}(p') \sum_{q' \in N(p')} G_{\sigma s1}\big(\|p'-q'\|\big)\, G_{\sigma r1}\big(|\tilde{I}_{R,t,i}^{k}(p') - \tilde{I}_{R,t,i}^{k}(q')|\big)\, \hat{D}_{R,t}^{k}(q'),$$

with the normalization factor

$$r_{s1}(p') = 1 \Big/ \sum_{q' \in N(p')} G_{\sigma s1}\big(\|p'-q'\|\big)\, G_{\sigma r1}\big(|\tilde{I}_{R,t,i}^{k}(p') - \tilde{I}_{R,t,i}^{k}(q')|\big),$$

where $G_{\sigma s1}(\|p'-q'\|) = \exp\!\big(-\|p'-q'\|^{2}/(2\sigma_{s1}^{2})\big)$ is a Gaussian function with standard deviation $\sigma_{s1}$, $\|p'-q'\|$ denotes the Euclidean distance between coordinate positions p′ and q′ and '‖ ‖' is the Euclidean distance symbol; $G_{\sigma r1}(|\tilde{I}_{R,t,i}^{k}(p') - \tilde{I}_{R,t,i}^{k}(q')|) = \exp\!\big(-|\tilde{I}_{R,t,i}^{k}(p') - \tilde{I}_{R,t,i}^{k}(q')|^{2}/(2\sigma_{r1}^{2})\big)$ is a Gaussian function with standard deviation $\sigma_{r1}$ and '| |' is the absolute-value operation symbol; $\tilde{I}_{R,t,i}^{k}(p')$ and $\tilde{I}_{R,t,i}^{k}(q')$ denote the values of the ith component of the pixel points at coordinate positions p′ and q′ in the decoded color image $\{\tilde{I}_{R,t,i}^{k}(x,y)\}$ of the kth reference viewpoint at time t; $\hat{D}_{R,t}^{k}(q')$ denotes the depth value of the pixel point at coordinate position q′ in the decoded depth-compensated image $\{\hat{D}_{R,t}^{k}(x,y)\}$; exp() denotes the exponential function with base e, e = 2.71828183; and N(p′) denotes a 7 × 7 neighborhood window centered on the pixel point at coordinate position p′. In actual processing, neighborhood windows of other sizes could be adopted, but extensive experiments show that the 7 × 7 neighborhood window achieves the best effect. Then execute step ⑥-5.
In the present embodiment, the standard deviations are $(\sigma_{s1}, \sigma_{r1}) = (5, 0.1)$.
⑥-4. Take the depth value $\hat{D}_{R,t}^{k}(p')$ of the current pixel point directly as the filtered depth value $\bar{D}_{R,t}^{k}(p')$, i.e. $\bar{D}_{R,t}^{k}(p') = \hat{D}_{R,t}^{k}(p')$, where the '=' in this expression is the assignment symbol. Then execute step ⑥-5.
⑥-5. Take the next pixel point to be processed in the decoded depth-compensated image $\{\hat{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t as the current pixel point and return to step ⑥-2 to continue, until all pixel points in the decoded depth-compensated image $\{\hat{D}_{R,t}^{k}(x,y)\}$ have been processed; the filtered depth image thus obtained is recorded as $\{\bar{D}_{R,t}^{k}(x,y)\}$.
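Putting step ⑥ together: at pixels passing the gradient test, the depth is re-estimated with spatial weights and color-guided range weights over a 7 × 7 window, while the other pixels keep their compensated value (step ⑥-4). In this sketch the guiding luminance is assumed normalized to [0, 1] (otherwise σr1 = 0.1 would zero out the range kernel); that normalization is our assumption, not stated in the text. The mask argument is the edge_mask output from the earlier sketch:

```python
import numpy as np

def edge_gated_bilateral(depth_comp, color_lum, mask,
                         sigma_s=5.0, sigma_r=0.1, radius=3):
    """Cross-bilateral pass of step 6: depth re-estimated at edge pixels,
    range-weighted by decoded colour (luminance) differences."""
    out = depth_comp.astype(float).copy()
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    d = np.pad(depth_comp.astype(float), radius, mode='edge')
    c = np.pad(color_lum.astype(float), radius, mode='edge')
    for y, x in zip(*np.nonzero(mask)):
        dwin = d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
        cwin = c[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
        rng = np.exp(-(cwin - c[y + radius, x + radius])**2
                     / (2 * sigma_r**2))
        wgt = spatial * rng
        out[y, x] = (wgt * dwin).sum() / wgt.sum()  # r_s1(p') normalization
    return out
```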
Filtering experiments were performed on the depth images of the "Bookarrival" and "Altmoabit" three-dimensional video test sequences. FIG. 4a shows a decoded depth image of the 8th reference viewpoint of "Bookarrival" and FIG. 4b the depth-filtered image of the same viewpoint obtained by the method of the present invention; FIG. 5a shows a decoded depth image of the 8th reference viewpoint of "Altmoabit" and FIG. 5b the corresponding depth-filtered image obtained by the method of the present invention. As can be seen from FIGS. 4a to 5b, the depth images filtered by the method of the present invention, i.e., the depth-filtered images, preserve the important geometric features of the depth images and produce satisfactory edges and smooth contours.
The subjective rendering performance of virtual viewpoint images obtained with the method of the present invention is compared on the "Bookarrival" and "Altmoabit" three-dimensional video test sequences.
The virtual viewpoint images obtained with the method of the present invention are compared with those obtained without it (i.e., rendering directly from the decoded images). FIG. 6a shows the virtual viewpoint image of the 9th reference viewpoint of "Bookarrival" rendered from the original depth image, FIG. 6b the one rendered from the decoded depth image, and FIG. 6c the one rendered with the method of the present invention; FIGS. 7a, 7b, and 7c show the corresponding images for the 9th reference viewpoint of "Altmoabit"; FIGS. 8a, 8b, and 8c show enlarged partial details of FIGS. 6a, 6b, and 6c, and FIGS. 9a, 9b, and 9c show enlarged partial details of FIGS. 7a, 7b, and 7c, respectively. As can be seen from FIGS. 6a to 9c, the virtual viewpoint images obtained with the method of the present invention preserve object contour information better, reducing the covering of the foreground by the background that depth image distortion produces during mapping; and because the edge regions of the depth image are bilaterally filtered according to the edge information of the color image, stripe noise in the rendered virtual viewpoint image is effectively eliminated.
The peak signal-to-noise ratio (PSNR) of the virtual viewpoint images obtained with the method of the present invention is compared with that of the images obtained without it; the results are listed in Table 1. As Table 1 shows, the quality of the virtual viewpoint images obtained with the method of the present invention is significantly better, which is sufficient to show that the method is effective and feasible.
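PSNR here is the usual 8-bit definition, computed between the virtual view rendered from the original depth and the view under test; for reference:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio (dB), standard 8-bit definition."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)
```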
TABLE 1. Comparison of peak signal-to-noise ratio with and without the method of the invention

Claims (6)

1. A post-processing method for a depth image, characterized in that the processing procedure is as follows: first, encoding the acquired color image and its corresponding depth image to obtain a coded code stream; then obtaining coding distortion compensation parameters of the depth image and encoding them to obtain a parameter code stream; then decoding the coded code stream and the parameter code stream to obtain a decoded color image, a decoded depth image, and the decoded coding distortion compensation parameters of the depth image; and then compensating the decoded depth image with the coding distortion compensation parameters to obtain a depth-compensated image, and filtering the depth-compensated image to obtain a depth-filtered image, the depth-filtered image being used for rendering a virtual viewpoint image.
2. The post-processing method for a depth image according to claim 1, comprising the following specific steps:
① acquiring the K color images in YUV color space of the K reference viewpoints at time t and the K corresponding depth images; recording the color image of the kth reference viewpoint at time t as $\{I_{R,t,i}^{k}(x,y)\}$ and the depth image of the kth reference viewpoint at time t as $\{D_{R,t}^{k}(x,y)\}$, where 1 ≤ k ≤ K and the initial value of k is 1; i = 1, 2, 3 denote the three components of the YUV color space, the 1st component being the luminance component, denoted Y, the 2nd the first chrominance component, denoted U, and the 3rd the second chrominance component, denoted V; (x, y) denotes the coordinate position of a pixel point in the color image and the depth image, 1 ≤ x ≤ W, 1 ≤ y ≤ H, where W denotes the width and H the height of the color image and the depth image; $I_{R,t,i}^{k}(x,y)$ denotes the value of the ith component of the pixel point at coordinate position (x, y) in the color image $\{I_{R,t,i}^{k}(x,y)\}$ of the kth reference viewpoint at time t; and $D_{R,t}^{k}(x,y)$ denotes the depth value of the pixel point at coordinate position (x, y) in the depth image $\{D_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t;
② encoding the K color images in YUV color space of the K reference viewpoints at time t and the K corresponding depth images according to a set coding prediction structure, outputting the color image code streams and the depth image code streams frame by frame to obtain the coded code stream, and having the server transmit the coded code stream to the user terminal over a network;
③ from the K depth images of the K reference viewpoints at time t and the K depth images of the K reference viewpoints at time t obtained by decoding after encoding, predicting the coding distortion compensation parameters of the K depth images of the K reference viewpoints at time t with a wiener filter; then encoding the coding distortion compensation parameters of the K depth images with the CABAC lossless compression method, outputting the parameter code stream frame by frame, and finally having the server transmit the parameter code stream to the user terminal over a network;
④ the user terminal decoding the coded code stream sent by the server to obtain the decoded K color images and the corresponding K depth images of the K reference viewpoints at time t; recording the decoded color image and the corresponding depth image of the kth reference viewpoint at time t as $\{\tilde{I}_{R,t,i}^{k}(x,y)\}$ and $\{\tilde{D}_{R,t}^{k}(x,y)\}$, where $\tilde{I}_{R,t,i}^{k}(x,y)$ denotes the value of the ith component of the pixel point at coordinate position (x, y) in the decoded color image $\{\tilde{I}_{R,t,i}^{k}(x,y)\}$ of the kth reference viewpoint at time t, and $\tilde{D}_{R,t}^{k}(x,y)$ denotes the depth value of the pixel point at coordinate position (x, y) in the decoded depth image $\{\tilde{D}_{R,t}^{k}(x,y)\}$ of the kth reference viewpoint at time t;
⑤ The user terminal decodes the parameter code stream sent by the server to obtain the coding distortion compensation parameters of the K depth images of the K reference viewpoints at time t, then compensates the K decoded depth images of the K reference viewpoints at time t with these parameters to obtain the K decoded depth compensated images of the K reference viewpoints at time t, and records the decoded depth compensated image of the k-th reference viewpoint at time t as $\{\hat{D}_{R,t}^k(x,y)\}$, where $\hat{D}_{R,t}^k(x,y)$ denotes the depth value of the pixel point at coordinate position (x, y) in the decoded depth compensated image $\{\hat{D}_{R,t}^k(x,y)\}$ of the k-th reference viewpoint at time t;
⑥ Apply bilateral filtering with a bilateral filter to each of the K decoded depth compensated images of the K reference viewpoints at time t to obtain the K depth filtered images of the K reference viewpoints at time t, and record the depth filtered image of the k-th reference viewpoint at time t as $\{\bar{D}_{R,t}^k(x,y)\}$, where $\bar{D}_{R,t}^k(x,y)$ denotes the depth value of the pixel point at coordinate position (x, y) in the depth filtered image $\{\bar{D}_{R,t}^k(x,y)\}$ of the k-th reference viewpoint at time t.
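Claims 3 to 5 below all operate on 3-level wavelet decompositions of depth images. For orientation, a minimal sketch of that shared decomposition is given here; it is illustrative only, not the patented implementation — the choice of Python with PyWavelets and of the db1 wavelet are assumptions, since the claims fix only the number of levels and the three directional subbands (horizontal, vertical, diagonal).

```python
import numpy as np
import pywt  # PyWavelets


def subbands_3level(depth_image: np.ndarray, wavelet: str = "db1"):
    """3-level 2-D wavelet transform of a depth image.

    Returns the approximation band and, for each of the 3 transform
    levels (coarsest first), the coefficient matrices of the 3
    directional subbands: horizontal (cH), vertical (cV), diagonal (cD).
    """
    coeffs = pywt.wavedec2(depth_image.astype(np.float64), wavelet, level=3)
    approx, details = coeffs[0], coeffs[1:]
    return approx, details
```

The same decomposition is applied to the original depth image in step ③-2, to the decoded depth image in steps ③-3 and ⑤-1, and inverted in step ⑤-3.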
3. The method of claim 2, wherein the specific process in step ③ of obtaining the coding distortion compensation parameters of the K depth images of the K reference viewpoints at time t comprises:
③-1. Define the currently processed depth image $\{D_{R,t}^k(x,y)\}$ of the k-th reference viewpoint among the K depth images of the K reference viewpoints at time t as the current depth image;
③-2. Apply a 3-level wavelet transform to the current depth image $\{D_{R,t}^k(x,y)\}$ to obtain the wavelet coefficient matrix of each of the 3 directional subbands of each transform level, the 3 directional subbands being the horizontal subband, the vertical subband and the diagonal subband; record the wavelet coefficient matrix of the n-th directional subband obtained at the m-th transform level of $\{D_{R,t}^k(x,y)\}$ as $\{H_{m,n}(x,y)\}$, where 1 ≤ m ≤ 3, 1 ≤ n ≤ 3, and $H_{m,n}(x,y)$ denotes the wavelet coefficient at coordinate position (x, y) in $\{H_{m,n}(x,y)\}$;
③-3. Apply a 3-level wavelet transform to the depth image $\{\tilde{D}_{R,t}^k(x,y)\}$ of the k-th reference viewpoint at time t obtained by encoding and then decoding to obtain the wavelet coefficient matrix of each of the 3 directional subbands (horizontal, vertical, diagonal) of each transform level; record the wavelet coefficient matrix of the n-th directional subband obtained at the m-th transform level of $\{\tilde{D}_{R,t}^k(x,y)\}$ as $\{\tilde{H}_{m,n}(x,y)\}$, where 1 ≤ m ≤ 3, 1 ≤ n ≤ 3, and $\tilde{H}_{m,n}(x,y)$ denotes the wavelet coefficient at coordinate position (x, y) in $\{\tilde{H}_{m,n}(x,y)\}$;
③-4. Using a Wiener filter, predict the coding distortion compensation parameter of the wavelet coefficient matrix of each directional subband of each transform level of the decoded depth image $\{\tilde{D}_{R,t}^k(x,y)\}$ of the k-th reference viewpoint at time t; record the coding distortion compensation parameter of $\{\tilde{H}_{m,n}(x,y)\}$ as $w_{m,n}$,
$$ w_{m,n} = \arg\min_{w}\; E\!\left[\left(H_{m,n}(x,y) - \sum_{p=-L}^{L}\sum_{q=-L}^{L} w(p,q)\,\tilde{H}_{m,n}(x+p,\,y+q)\right)^{2}\right], $$
where L denotes the filtering length range of the Wiener filter, $E[\cdot]$ denotes taking the mathematical expectation, $\tilde{H}_{m,n}(x+p,\,y+q)$ denotes the wavelet coefficient at coordinate position (x+p, y+q) in $\{\tilde{H}_{m,n}(x,y)\}$, and argmin(X) denotes the parameter that minimizes the function X;
③-5. From the coding distortion compensation parameters of the wavelet coefficient matrices of all directional subbands of all transform levels of the decoded depth image $\{\tilde{D}_{R,t}^k(x,y)\}$ of the k-th reference viewpoint at time t, obtain the coding distortion compensation parameter set of the current depth image $\{D_{R,t}^k(x,y)\}$, recorded as $W^k = \{w_{m,n} \mid 1 \le m \le 3,\ 1 \le n \le 3\}$; then take the depth image of the next reference viewpoint to be processed among the K depth images of the K reference viewpoints at time t as the current depth image, and return to step ③-2 to continue until the depth images of all reference viewpoints among the K depth images of the K reference viewpoints at time t have been processed, where the initial value of k' is 0.
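A least-squares reading of step ③-4 might look as follows. This is a sketch under assumptions, not the patent's implementation: the window half-width L, the edge padding, and the use of an ordinary least-squares solve (the sample version of the expected squared error minimized in the claim) are all choices of the sketch.

```python
import numpy as np


def estimate_kernel(orig_band: np.ndarray, deco_band: np.ndarray,
                    L: int = 1) -> np.ndarray:
    """Estimate the (2L+1)x(2L+1) compensation kernel w minimizing
    E[(H(x,y) - sum_{p,q} w(p,q) * H~(x+p, y+q))^2] over one wavelet
    subband, via linear least squares.

    orig_band: coefficients of the original depth image (step 3-2)
    deco_band: coefficients of the decoded depth image (step 3-3)
    """
    H, W = deco_band.shape
    pad = np.pad(deco_band, L, mode="edge")
    # One column of the design matrix per kernel tap: a shifted copy
    # of the decoded subband, so A @ w.ravel() is the compensated band.
    cols = []
    for p in range(-L, L + 1):
        for q in range(-L, L + 1):
            cols.append(pad[L + p:L + p + H, L + q:L + q + W].ravel())
    A = np.stack(cols, axis=1)          # shape (H*W, (2L+1)**2)
    b = orig_band.ravel()               # target: original coefficients
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w.reshape(2 * L + 1, 2 * L + 1)
```

One kernel is estimated per (level, direction) pair, giving the parameter set $W^k$ that is CABAC-coded into the parameter code stream in step ③.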
4. The method for post-processing a depth image according to claim 2 or 3, wherein the specific process of obtaining in step ⑤ the decoded depth compensated image $\{\hat{D}_{R,t}^k(x,y)\}$ of the k-th reference viewpoint at time t comprises the following steps:
⑤-1. Apply a 3-level wavelet transform to the decoded depth image $\{\tilde{D}_{R,t}^k(x,y)\}$ of the k-th reference viewpoint at time t to obtain the wavelet coefficient matrix of each of the 3 directional subbands of each transform level, the 3 directional subbands being the horizontal subband, the vertical subband and the diagonal subband; record the wavelet coefficient matrix of the n-th directional subband obtained at the m-th transform level of $\{\tilde{D}_{R,t}^k(x,y)\}$ as $\{\tilde{H}_{m,n}(x,y)\}$, where 1 ≤ m ≤ 3, 1 ≤ n ≤ 3, and $\tilde{H}_{m,n}(x,y)$ denotes the wavelet coefficient at coordinate position (x, y) in $\{\tilde{H}_{m,n}(x,y)\}$;
⑤-2. Compensate the wavelet coefficient matrix of each directional subband of each transform level of the decoded depth image $\{\tilde{D}_{R,t}^k(x,y)\}$ of the k-th reference viewpoint at time t; record the compensated wavelet coefficient matrix of $\{\tilde{H}_{m,n}(x,y)\}$ as $\{\hat{H}_{m,n}(x,y)\}$,
$$ \hat{H}_{m,n}(x,y) = \sum_{p=-L}^{L}\sum_{q=-L}^{L} w_{m,n}(p,q)\,\tilde{H}_{m,n}(x+p,\,y+q), $$
where $\tilde{H}_{m,n}(x+p,\,y+q)$ denotes the wavelet coefficient at coordinate position (x+p, y+q) in $\{\tilde{H}_{m,n}(x,y)\}$;
⑤-3. Apply the inverse wavelet transform to the compensated wavelet coefficient matrices of all directional subbands of all transform levels of the decoded depth image $\{\tilde{D}_{R,t}^k(x,y)\}$ of the k-th reference viewpoint at time t to obtain the decoded depth compensated image of the k-th reference viewpoint at time t, recorded as $\{\hat{D}_{R,t}^k(x,y)\}$, where $\hat{D}_{R,t}^k(x,y)$ denotes the depth value of the pixel point at coordinate position (x, y) in the decoded depth compensated image $\{\hat{D}_{R,t}^k(x,y)\}$ of the k-th reference viewpoint at time t.
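Steps ⑤-1 to ⑤-3 amount to filtering each decoded subband with its compensation kernel and inverting the transform. A sketch under the same assumptions as above (PyWavelets, db1 wavelet; `kernels` holds one (kH, kV, kD) triple per level, ordered as pywt.wavedec2 returns the detail bands, coarsest first) is:

```python
import numpy as np
import pywt
from scipy.ndimage import correlate


def compensate_depth(decoded_depth: np.ndarray, kernels,
                     wavelet: str = "db1") -> np.ndarray:
    """Apply coding-distortion compensation kernels in the wavelet
    domain and reconstruct the depth compensated image by inverse DWT.
    Cross-correlation matches the claim's sum over H~(x+p, y+q)."""
    coeffs = pywt.wavedec2(decoded_depth.astype(np.float64), wavelet, level=3)
    new_coeffs = [coeffs[0]]                       # approximation band kept
    for (cH, cV, cD), (kH, kV, kD) in zip(coeffs[1:], kernels):
        new_coeffs.append((correlate(cH, kH, mode="nearest"),
                           correlate(cV, kV, mode="nearest"),
                           correlate(cD, kD, mode="nearest")))
    comp = pywt.waverec2(new_coeffs, wavelet)
    # waverec2 can overshoot by one row/column for odd sizes; crop back.
    return comp[:decoded_depth.shape[0], :decoded_depth.shape[1]]
```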
5. The method of claim 4, wherein the specific process of applying bilateral filtering in step ⑥ to the decoded depth compensated image $\{\hat{D}_{R,t}^k(x,y)\}$ of the k-th reference viewpoint at time t is as follows:
⑥-1. Define the currently processed pixel point in the decoded depth compensated image $\{\hat{D}_{R,t}^k(x,y)\}$ of the k-th reference viewpoint at time t as the current pixel point;
⑥-2. Record the coordinate position of the current pixel point as p' and the coordinate position of a neighborhood pixel point of the current pixel point as q'; then convolve the depth values of the depth compensated image with the gradient template $G_x$ to obtain the gradient value of the current pixel point, $gx(p') = [G_x * \hat{D}_{R,t}^k](p')$; then judge whether |gx(p')| ≥ T: if so, execute step ⑥-3, otherwise execute step ⑥-4, where
$$ G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, $$
'*' is the convolution operator, '| |' is the absolute-value operator, and T is the gradient magnitude threshold;
⑥-3. Filter the depth values of the neighborhood pixel points of the current pixel point with a bilateral filter of standard deviations (σ_s1, σ_r1) to obtain the filtered depth value of the current pixel point, recorded as $\bar{D}_{R,t}^k(p')$:
$$ \bar{D}_{R,t}^k(p') = r_{s1}(p') \sum_{q' \in N(p')} G_{\sigma_{s1}}(\|p'-q'\|)\, G_{\sigma_{r1}}\!\big(|\tilde{I}_{R,t,i}^k(p') - \tilde{I}_{R,t,i}^k(q')|\big)\, \hat{D}_{R,t}^k(q'), $$
with the normalization factor
$$ r_{s1}(p') = 1 \Big/ \sum_{q' \in N(p')} G_{\sigma_{s1}}(\|p'-q'\|)\, G_{\sigma_{r1}}\!\big(|\tilde{I}_{R,t,i}^k(p') - \tilde{I}_{R,t,i}^k(q')|\big), $$
where $G_{\sigma_{s1}}(\|p'-q'\|)$ denotes the Gaussian function with standard deviation σ_s1, $G_{\sigma_{s1}}(\|p'-q'\|) = \exp\!\big(-\tfrac{\|p'-q'\|^2}{2\sigma_{s1}^2}\big)$, $\|p'-q'\|$ denotes the Euclidean distance between coordinate positions p' and q', '‖ ‖' is the Euclidean distance symbol, $G_{\sigma_{r1}}(|\tilde{I}_{R,t,i}^k(p') - \tilde{I}_{R,t,i}^k(q')|)$ denotes the Gaussian function with standard deviation σ_r1, $G_{\sigma_{r1}}\!\big(|\tilde{I}_{R,t,i}^k(p') - \tilde{I}_{R,t,i}^k(q')|\big) = \exp\!\Big(-\tfrac{|\tilde{I}_{R,t,i}^k(p') - \tilde{I}_{R,t,i}^k(q')|^2}{2\sigma_{r1}^2}\Big)$, '| |' is the absolute-value operator, $\tilde{I}_{R,t,i}^k(p')$ denotes the value of the i-th component of the pixel point at coordinate position p' in the decoded color image $\{\tilde{I}_{R,t,i}^k(x,y)\}$ of the k-th reference viewpoint at time t, $\tilde{I}_{R,t,i}^k(q')$ denotes the value of the i-th component of the pixel point at coordinate position q' in that color image, $\hat{D}_{R,t}^k(q')$ denotes the depth value of the pixel point at coordinate position q' in the decoded depth compensated image $\{\hat{D}_{R,t}^k(x,y)\}$ of the k-th reference viewpoint at time t, exp() denotes the exponential function with base e, e = 2.71828183, and N(p') denotes the 7 × 7 neighborhood window centered on the pixel point at coordinate position p'; then execute step ⑥-5;
⑥-4. Take the depth value $\hat{D}_{R,t}^k(p')$ of the current pixel point directly as its filtered depth value, i.e. $\bar{D}_{R,t}^k(p') = \hat{D}_{R,t}^k(p')$, where '=' in $\bar{D}_{R,t}^k(p') = \hat{D}_{R,t}^k(p')$ is the assignment symbol; then execute step ⑥-5;
⑥-5. Take the next pixel point to be processed in the decoded depth compensated image $\{\hat{D}_{R,t}^k(x,y)\}$ of the k-th reference viewpoint at time t as the current pixel point, and return to step ⑥-2 to continue until all pixel points in the decoded depth compensated image $\{\hat{D}_{R,t}^k(x,y)\}$ of the k-th reference viewpoint at time t have been processed, obtaining the filtered depth filtered image, recorded as $\{\bar{D}_{R,t}^k(x,y)\}$.
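A direct, unoptimized reading of the edge-selective filtering of claim 5 might be as follows. The parameter values (T, σ_s1, σ_r1) and the use of the luminance component alone as the range guide are assumptions of the sketch; the claim leaves T and the standard deviations as parameters and computes the range kernel on the decoded color image.

```python
import numpy as np
from scipy.ndimage import convolve

# Gradient template G_x of step 6-2 (a horizontal Sobel operator).
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)


def joint_bilateral_filter(depth: np.ndarray, luma: np.ndarray,
                           T: float = 30.0, sigma_s: float = 3.0,
                           sigma_r: float = 10.0, r: int = 3) -> np.ndarray:
    """Edge-selective joint bilateral filter (sketch of claim 5).

    Pixels whose gradient magnitude |gx| reaches T are replaced by a
    bilateral average over the (2r+1)x(2r+1) window (r=3 gives the 7x7
    window N(p') of the claim), with the range kernel computed on the
    decoded color image (luminance used here); all other pixels keep
    their compensated depth value (step 6-4).
    """
    H, W = depth.shape
    gx = convolve(depth.astype(np.float64), SOBEL_X, mode="nearest")
    out = depth.astype(np.float64).copy()
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    Gs = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))   # spatial kernel
    dpad = np.pad(depth.astype(np.float64), r, mode="edge")
    lpad = np.pad(luma.astype(np.float64), r, mode="edge")
    for y in range(H):
        for x in range(W):
            if abs(gx[y, x]) < T:
                continue                               # step 6-4: keep value
            dwin = dpad[y:y + 2 * r + 1, x:x + 2 * r + 1]
            lwin = lpad[y:y + 2 * r + 1, x:x + 2 * r + 1]
            Gr = np.exp(-((lwin - lpad[y + r, x + r]) ** 2)
                        / (2 * sigma_r**2))            # range kernel
            wgt = Gs * Gr
            out[y, x] = (wgt * dwin).sum() / wgt.sum() # step 6-3
    return out
```

Pixels that fail the gradient test keep their compensated depth value, so the smoothing is confined to edge regions, where coding distortion most disturbs virtual viewpoint rendering.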
6. The method as claimed in claim 5, wherein the coding prediction structure set in step ② is the HBP (hierarchical B picture) coding prediction structure.
CN201210226018.4A 2012-06-29 2012-06-29 Post-processing method for depth image Expired - Fee Related CN102769749B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210226018.4A CN102769749B (en) 2012-06-29 2012-06-29 Post-processing method for depth image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210226018.4A CN102769749B (en) 2012-06-29 2012-06-29 Post-processing method for depth image

Publications (2)

Publication Number Publication Date
CN102769749A true CN102769749A (en) 2012-11-07
CN102769749B CN102769749B (en) 2015-03-18

Family

ID=47096985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210226018.4A Expired - Fee Related CN102769749B (en) 2012-06-29 2012-06-29 Post-processing method for depth image

Country Status (1)

Country Link
CN (1) CN102769749B (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101170702A (en) * 2007-11-23 2008-04-30 四川虹微技术有限公司 Multi-view video coding method
WO2010008134A2 (en) * 2008-07-15 2010-01-21 Samsung Electronics Co., Ltd. Image processing method
CN101888566A (en) * 2010-06-30 2010-11-17 清华大学 Estimation method of distortion performance of stereo video encoding rate
CN101937578A (en) * 2010-09-08 2011-01-05 宁波大学 Method for drawing virtual view color image
CN102158712A (en) * 2011-03-22 2011-08-17 宁波大学 Multi-viewpoint video signal coding method based on vision
CN102271254A (en) * 2011-07-22 2011-12-07 宁波大学 Depth image preprocessing method
CN102333233A (en) * 2011-09-23 2012-01-25 宁波大学 Stereo image quality objective evaluation method based on visual perception
CN102523468A (en) * 2011-12-16 2012-06-27 宁波大学 Method for ensuring optimal code rate proportion of three-dimensional video coding

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
D. V. S. X. De Silva, et al.: "A Depth Map Post-Processing Technique for 3D-TV Systems based on Compression Artifact Analysis", IEEE Journal of Selected Topics in Signal Processing *
GU Shanbo, et al.: "An objective stereo image quality assessment based on minimum just noticeable distortion" (in Chinese), Journal of Optoelectronics·Laser *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103813149A (en) * 2012-11-15 2014-05-21 中国科学院深圳先进技术研究院 Image and video reconstruction method of encoding and decoding system
CN103813149B (en) * 2012-11-15 2016-04-13 中国科学院深圳先进技术研究院 Image and video reconstruction method of an encoding/decoding system
CN103177440A (en) * 2012-12-20 2013-06-26 香港应用科技研究院有限公司 System and method of generating image depth map
CN103177440B (en) * 2012-12-20 2015-09-16 香港应用科技研究院有限公司 The system and method for synthetic image depth map
CN104102068A (en) * 2013-04-11 2014-10-15 聚晶半导体股份有限公司 Automatic focusing method and automatic focusing device
CN104102068B (en) * 2013-04-11 2017-06-30 聚晶半导体股份有限公司 Automatic focusing method and automatic focusing mechanism
CN103391446A (en) * 2013-06-24 2013-11-13 南京大学 Depth image optimizing method based on natural scene statistics
CN103369341A (en) * 2013-07-09 2013-10-23 宁波大学 Post-processing method of range image
CN109963135A (en) * 2017-12-22 2019-07-02 宁波盈芯信息科技有限公司 Depth network camera device and method based on RGB-D

Also Published As

Publication number Publication date
CN102769749B (en) 2015-03-18

Similar Documents

Publication Publication Date Title
Cheng et al. Learning image and video compression through spatial-temporal energy compaction
CN102769749B (en) Post-processing method for depth image
KR101484606B1 (en) Methods and apparatus for adaptive reference filtering
CN102271254B (en) Depth image preprocessing method
US9681154B2 (en) System and method for depth-guided filtering in a video conference environment
JPH07203435A (en) Method and apparatus for enhancing distorted graphic information
WO2012090181A1 (en) Depth map coding
CN105096280A (en) Method and device for processing image noise
US20200404339A1 (en) Loop filter apparatus and method for video coding
CN102438167B (en) Three-dimensional video encoding method based on depth image rendering
CN103002306A (en) Depth image coding method
Aziz et al. Motion estimation and motion compensated video compression using DCT and DWT
Zhang et al. Low bit-rate compression of underwater imagery based on adaptive hybrid wavelets and directional filter banks
CN102710949B (en) Visual sensation-based stereo video coding method
US9894384B2 (en) Multiview video signal encoding method and decoding method, and device therefor
Yuan et al. Object shape approximation and contour adaptive depth image coding for virtual view synthesis
EP2735144B1 (en) Adaptive filtering based on pattern information
WO2018117893A1 (en) Mixed domain collaborative post filter for lossy still image coding
Lan et al. Multisensor collaboration network for video compression based on wavelet decomposition
CN103826135A (en) Three-dimensional video depth map coding method based on just distinguishable parallax error estimation
US20170103499A1 (en) Method and apparatus for de-noising an image using video epitome
Jang et al. FDQM: Fast quality metric for depth maps without view synthesis
EP2375746A1 (en) Method for encoding texture data of free viewpoint television signals, corresponding method for decoding and texture encoder and decoder
CN117528079A (en) Image processing apparatus and method for performing quality-optimized deblocking
Zhang et al. An efficient depth map filtering based on spatial and texture features for 3D video coding

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20191217

Address after: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000

Patentee after: Huzhou You Yan Intellectual Property Service Co.,Ltd.

Address before: 315211 Zhejiang Province, Ningbo Jiangbei District Fenghua Road No. 818

Patentee before: Ningbo University

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20221222

Address after: 276000 NO.119 Jinhu Industrial Park, West Jiefang Road, high tech Zone, Linyi City, Shandong Province

Patentee after: Luyake Fire Vehicle Manufacturing Co.,Ltd.

Address before: Room 1,020, Nanxun Science and Technology Pioneering Park, 666 Chaoyang Road, Nanxun Town, Huzhou City, Zhejiang Province

Patentee before: Huzhou You Yan Intellectual Property Service Co.,Ltd.

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150318

CF01 Termination of patent right due to non-payment of annual fee