KR20130123240A - Apparatus and method for processing depth image - Google Patents

Apparatus and method for processing depth image

Info

Publication number
KR20130123240A
Authority
KR
South Korea
Prior art keywords
pixel
weight
pixels
depth
depth value
Prior art date
Application number
KR1020120046534A
Other languages
Korean (ko)
Inventor
위호천
이재준
송윤석
이천
호요성
Original Assignee
Samsung Electronics Co., Ltd. (삼성전자주식회사)
Gwangju Institute of Science and Technology (광주과학기술원)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd. (삼성전자주식회사) and Gwangju Institute of Science and Technology (광주과학기술원)
Priority to KR1020120046534A
Publication of KR20130123240A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/128Adjusting depth or disparity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An apparatus and a method for processing depth images are disclosed. The depth image processing apparatus comprises: a pixel setting unit for setting second pixels around first pixels existing on the boundary of an object; a weight determining unit for, by using the second pixels, determining at least one weight among a first weight based on the distance between pixels, a second weight based on a difference in depth values between pixels, and a third weight based on a pixel slope and the directionality of the boundary; and a depth value adjusting unit for adjusting the depth values of the first pixels using the determined weight. [Reference numerals] (100) Depth image processing device;(110) Deblocking filter;(120) Distance weighted value;(130) Depth weighted value;(140) Direction weighted value;(150) Depth value correction;(160) Depth image

Description

[0001] APPARATUS AND METHOD FOR PROCESSING DEPTH IMAGE

The following embodiments relate to an apparatus and method for processing a depth image, and more particularly to an apparatus and method for processing errors that occur in a depth image during encoding.

Unlike a two-dimensional image, a three-dimensional image provides a more realistic experience to the user through the sense of depth. In a 3D image, the depth of a virtual view corresponding to the 3D image is determined using a left-view depth image and a right-view depth image, and the virtual view is generated by synthesizing a left-view color image and a right-view color image according to that depth.

In this case, the depth image may be distorted at the boundary of an object due to errors introduced during encoding.

When a 3D image is synthesized from a depth image in which distortion has occurred, distortion may also appear at the boundaries of objects included in the 3D image.

Therefore, a method is needed that can process the errors introduced into a depth image by encoding.

In one embodiment, a depth image processing apparatus includes a first pixel determiner configured to determine, as first pixels, pixels located at a boundary of an object among pixels included in a depth image; a second pixel determination unit which determines pixels of a predetermined area around the first pixel as second pixels; and a depth value controller configured to adjust the depth value of the first pixel using the depth values of the second pixels.

The depth value controller of the depth image processing apparatus may adjust the depth value of the first pixel using a first weight, proportional to the distance between the first pixel and the second pixel, together with the depth values of the second pixels.

The depth value controller of the depth image processing apparatus according to an exemplary embodiment may adjust the depth value of the first pixel using a second weight, inversely proportional to the difference between the depth value of the first pixel and the depth value of the second pixel, together with the depth values of the second pixels.

The depth value controller of the depth image processing apparatus according to an exemplary embodiment may adjust the depth value of the first pixel using a third weight, determined according to the direction of the boundary including the first pixel and the slope of the second pixel, together with the depth values of the second pixels.

In one embodiment, a depth image processing apparatus includes a pixel setting unit configured to set second pixels around first pixels on a boundary of an object; a weight determination unit configured to determine, using the second pixels, at least one of a first weight based on the distance between pixels, a second weight based on the difference between pixel depth values, and a third weight based on the directionality of the boundary and the pixel slope; and a depth value controller configured to adjust the depth values of the first pixels using the determined weight.

The weight determiner of the depth image processing apparatus may determine a first weight of a second pixel far from the first pixel to be higher than a first weight of a second pixel close to the first pixel.

The weight determiner of the depth image processing apparatus may further include a direction determiner configured to determine the direction of a boundary using the gradient of an area including the second pixels; and an inclination determination unit configured to determine the inclination of a second pixel using the position of the first pixel and the position of the second pixel.

When the horizontal coordinate of the first pixel and the horizontal coordinate of the second pixel are the same, the inclination determiner of the depth image processing apparatus according to an exemplary embodiment may determine the difference between the vertical coordinate of the first pixel and the vertical coordinate of the second pixel as the slope.

According to an embodiment, a depth image processing method may include: determining pixels located at a boundary of an object from among pixels included in a depth image as a first pixel; Determining pixels of a predetermined area as a second pixel with respect to the first pixel; And adjusting the depth value of the first pixel using the depth value of the second pixels.

According to an embodiment, a depth image processing method may include: setting second pixels around first pixels at a boundary of an object; Determining, using the second pixels, at least one of a first weight based on the inter-pixel distance, a second weight based on the inter-pixel depth value difference, and a third weight based on the directionality of the boundary and the pixel slope; And adjusting the depth value of the first pixels using the determined weight.

1 is a diagram illustrating an operation of a depth image processing apparatus according to an exemplary embodiment.
2 is a diagram illustrating a structure of a depth image processing apparatus according to an exemplary embodiment.
3 is a diagram illustrating a structure of a pixel setting unit according to an exemplary embodiment.
4 is a diagram illustrating a structure of a weight determining unit according to an embodiment.
5 illustrates an example of determining a distance weight according to an embodiment.
6 illustrates an example of determining a direction weight according to an embodiment.
7 is a diagram illustrating a structure of an encoding apparatus, according to an embodiment.
8 is a diagram illustrating a structure of a decoding apparatus according to an embodiment.
9 is a diagram illustrating a depth image processing method, according to an exemplary embodiment.
10 illustrates a method of determining a direction weight according to an embodiment.
11 is a diagram illustrating an encoding method, according to an embodiment.
12 is a diagram illustrating a decoding method according to an embodiment.

Hereinafter, embodiments will be described in detail with reference to the accompanying drawings.

1 is a diagram illustrating an operation of a depth image processing apparatus according to an exemplary embodiment.

As shown in FIG. 1, the depth image processing apparatus 100 may post-process a depth image that has passed through the deblocking filter 110 during decoding, thereby preventing distortion from occurring at the boundaries of objects in the depth image.

In detail, the depth image processing apparatus 100 may set the second pixels using the first pixel, which is a pixel on which the deblocking filter 110 has performed deblocking filtering. Next, the depth image processing apparatus 100 may adjust the depth value of the first pixel (150) using at least one of the distance weight 120, the depth value weight 130, and the direction weight 140 of the second pixels, and may output a depth image 160 in which distortion does not occur at the boundaries of objects. Here, the distance weight 120 is a weight based on the distance between pixels, the depth value weight 130 is a weight based on the difference between depth values of pixels, and the direction weight 140 is a weight based on the directionality of the boundary and the pixel slope.

In this case, the first pixels may be pixels that are located at the boundary of an object among the pixels included in the depth image and that have a large depth-value difference from their adjacent pixels. The second pixels may be pixels located within a predetermined area around the first pixel.

The distortion generated at the boundary of an object arises because the difference between the depth value of a pixel corresponding to the object and the depth value of a pixel corresponding to the background is too large across the boundary. Therefore, the depth image processing apparatus 100 according to an exemplary embodiment may adjust the depth value of a first pixel positioned between a pixel corresponding to the object and a pixel corresponding to the background, thereby preventing distortion from occurring at the boundary due to an abrupt difference in depth values.

2 is a diagram illustrating a structure of a depth image processing apparatus according to an exemplary embodiment.

Referring to FIG. 2, the depth image processing apparatus 100 may include a pixel setting unit 210, a weight determiner 220, and a depth value adjuster 230.

The pixel setting unit 210 may set the second pixels using the first pixel, which is a pixel on which the deblocking filter 110 has performed deblocking filtering. In detail, the pixel setting unit 210 may set pixels located within a predetermined area around the first pixel as the second pixels corresponding to that first pixel.

A detailed configuration of the pixel setting unit 210 will be described in detail with reference to FIG. 3 below.

The weight determination unit 220 may use the second pixels set by the pixel setting unit 210 to determine at least one of the distance weight based on the distance between pixels, the depth value weight based on the difference between depth values of pixels, and the direction weight.

A configuration in which the weight determination unit 220 determines the distance weight, the depth value weight, and the direction weight will be described in detail with reference to FIG. 4.

The depth value controller 230 may adjust the depth values of the first pixels using at least one of the distance weight, the depth value weight, and the direction weight determined by the weight determiner 220. For example, the depth value controller 230 may adjust the depth value of the first pixel by using Equation 1.

[Equation 1: rendered as an image in the original; the adjusted depth value D_p,new of the first pixel is a weighted combination of the second-pixel depth values D_q using the weights W_ran, W_dep, and W_dir.]

In this case, p may be a first pixel and q may be a second pixel. D_p,new may be the depth value of the first pixel as adjusted by the depth value controller 230, and D_q may be the depth value of the second pixel. W_ran may be the distance weight, W_dep the depth value weight, and W_dir the direction weight. For example, W_ran may be determined according to Equation 2 and W_dep according to Equation 3. W_dir may be determined using one of Equations 8 to 11 according to the direction of the boundary.

The details of W_ran, W_dep, and W_dir and their determination are described in more detail later.
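As a concrete illustration, the weighted adjustment of Equation 1 can be sketched as follows. Since Equation 1 is rendered as an image in the original, the normalized weighted-average form below is an assumption, and the names `adjust_depth`, `w_ran`, `w_dep`, and `w_dir` are hypothetical.

```python
def adjust_depth(p, depth, neighbors, w_ran, w_dep, w_dir):
    """Adjust the depth value of first pixel p as a normalized weighted
    average of the depth values of its second pixels (a bilateral-filter-like
    form; the exact Equation 1 is an image in the source, so this is assumed).

    depth maps a pixel coordinate to its depth value; w_ran, w_dep, and
    w_dir are the distance, depth value, and direction weight functions."""
    num = 0.0
    den = 0.0
    for q in neighbors:
        w = w_ran(p, q) * w_dep(depth[p], depth[q]) * w_dir(p, q)
        num += w * depth[q]
        den += w
    # Fall back to the original depth if all weights vanish.
    return num / den if den > 0 else depth[p]
```

With all three weights held constant, the adjusted value reduces to the plain mean of the second-pixel depths, which is a quick sanity check on the normalization.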

3 is a diagram illustrating a structure of a pixel setting unit according to an exemplary embodiment.

Referring to FIG. 3, the pixel setting unit 210 includes a first pixel determination unit 310 and a second pixel determination unit 320.

The first pixel determiner 310 may determine, as a first pixel, pixels located at a boundary of the object among pixels included in the depth image. In this case, the pixels located at the boundary of the object may be pixels filtered by the deblocking filter.

The second pixel determiner 320 may determine, as a second pixel, pixels included in a predetermined area around the first pixel determined by the first pixel determiner 310. In detail, the second pixel determiner 320 may apply a fixed window or a variable window around the first pixel, and determine the pixels included in the window as the second pixel.
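The fixed-window case described above can be sketched as follows; the window radius and the function name `second_pixels` are illustrative choices, since the patent allows either a fixed or a variable window.

```python
def second_pixels(p, window, height, width):
    """Return the pixels inside a fixed (2*window+1) x (2*window+1) region
    centred on first pixel p, clipped to the image bounds and excluding p
    itself. Pixels are (y, x) tuples; the window radius is an assumption."""
    py, px = p
    pixels = []
    for y in range(max(0, py - window), min(height, py + window + 1)):
        for x in range(max(0, px - window), min(width, px + window + 1)):
            if (y, x) != p:
                pixels.append((y, x))
    return pixels
```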

4 is a diagram illustrating a structure of a weight determining unit according to an embodiment.

Referring to FIG. 4, the weight determiner 220 includes a distance weight determiner 410, a depth value weight determiner 420, a direction determiner 430, a slope determiner 440, and a direction weight determiner 450.

The distance weight determination unit 410 may determine the distance weight of the second pixel according to the distance between the first pixel and the second pixel.

For example, the distance weight determiner 410 may determine the distance weight of the second pixel using Equation 2, which is based on a Gaussian function and the Euclidean distance.

[Equation 2: rendered as an image in the original; a Gaussian-type function of the Euclidean distance between the first pixel (p_x, p_y) and the second pixel (q_x, q_y), with dispersion value δ_ran².]

In this case, W_ran may be the distance weight, and exp may be the exponential function. p_x and p_y may be the x and y coordinates of the first pixel, and q_x and q_y the x and y coordinates of the second pixel. δ_ran² may be a dispersion value for the distance between the first pixel and the second pixel.

In this case, the distance weight determination unit 410 may determine the distance weight of the second pixel to be proportional to the distance between the first pixel and the second pixel.

For example, the distance weight determiner 410 may assign a higher distance weight to a second pixel farther from the first pixel than to a second pixel closer to the first pixel.
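A minimal sketch of this distance weight is shown below. Because Equation 2 is an image in the original, the inverted-Gaussian form and the dispersion parameter `sigma_ran` are assumptions that only reproduce the stated behavior: the weight grows with the Euclidean distance between the first and second pixel.

```python
import math

def distance_weight(p, q, sigma_ran=2.0):
    """Distance weight W_ran that increases with the Euclidean distance
    between first pixel p and second pixel q, so second pixels far from
    the boundary contribute more. The 1 - Gaussian form and sigma_ran
    are assumed; Equation 2 is an image in the source."""
    d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return 1.0 - math.exp(-d2 / (2.0 * sigma_ran ** 2))
```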

Since the depth value of a second pixel close to the first pixel is similar to the depth value of the first pixel, conventional methods set the weight of nearby second pixels high. However, being close to the first pixel also means being close to the boundary of the object, where the possibility of distortion caused by the boundary is higher.

Accordingly, the distance weight determination unit 410 may assign the highest distance weight to the second pixel that is farthest from the first pixel among the second pixels.

In this case, the distance weight determining unit 410 determines the highest distance weight for the second pixel farthest from the first pixel among the second pixels only, rather than among all pixels included in the depth image, thereby preventing a high weight from being given to pixels that have little relevance to the first pixel.

The second pixels include pixels corresponding to the object and pixels corresponding to the background, and the second pixel to which the distance weight determining unit 410 assigns the highest distance weight is the second pixel at the position farthest from the boundary. Accordingly, the distance weight determination unit 410 may determine the highest distance weight both for the pixel farthest from the first pixel among the second pixels corresponding to the object and for the pixel farthest from the first pixel among the second pixels corresponding to the background.

In this case, since the region containing the second pixels is set around the first pixel, the pixels with the highest distance weights are at the same distance from the first pixel. That is, the distance weight determination unit 410 may assign the highest distance weight to two pixels that have different depth values.

Therefore, by using the depth value weight together with the distance weight, the depth value adjusting unit 230 can adjust the first pixel using the depth value of whichever high-distance-weight second pixel has the smaller difference from the depth value of the first pixel.

A process of determining the distance weight by the distance weight determiner 410 will be described in detail with reference to FIG. 5.

The depth value weight determining unit 420 may determine the depth value weight of the second pixel according to the difference between the depth value of the first pixel and the depth value of the second pixel.

In this case, the depth value weight may be inversely proportional to the absolute difference between the depth value of the first pixel and the depth value of the second pixel. That is, as the absolute difference between the two depth values becomes smaller, the depth value weight of the second pixel increases; as the absolute difference becomes larger, the depth value weight of the second pixel decreases.

For example, the depth value weight determining unit 420 may determine the depth value weight of the second pixel using Equation 3.

[Equation 3: rendered as an image in the original; a Gaussian-type function of the difference between the depth values D_p and D_q, with dispersion value δ_dep².]

In this case, W_dep may be the depth value weight, and exp may be the exponential function. D_p may be the depth value of the first pixel and D_q the depth value of the second pixel. δ_dep² may be a dispersion value for the difference between the depth values of the first pixel and the second pixel.
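The depth value weight can be sketched as follows. Equation 3 is an image in the original, so the Gaussian form and the dispersion parameter `sigma_dep` are assumptions that only reproduce the stated inverse relation to the depth difference.

```python
import math

def depth_weight(d_p, d_q, sigma_dep=10.0):
    """Depth value weight W_dep: large when the depth values of the first
    and second pixel are similar, small when they differ. The Gaussian
    form and sigma_dep are assumed; Equation 3 is an image in the source."""
    return math.exp(-((d_p - d_q) ** 2) / (2.0 * sigma_dep ** 2))
```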

The direction determiner 430 may determine the directionality of the boundary including the first pixel using the depth values of the second pixels.

In a depth image, the depth values of pixels corresponding to the object differ from the depth values of pixels corresponding to the background across the boundary of the object.

For example, among second pixels located at the same distance from the first pixel, a second pixel lying on the boundary that includes the first pixel, a second pixel corresponding to the object, and a second pixel corresponding to the background all have different depth values.

Accordingly, the direction determining unit 430 may determine the directionality of the boundary according to the shape of the object boundary, so that the weight of a second pixel can reflect its direction from the first pixel. In this case, the direction determining unit 430 may determine the direction of the boundary as one of vertical, horizontal, diagonal toward the left, or diagonal toward the right.

In detail, the direction determination unit 430 may determine the horizontal gradient and the vertical gradient of the area including the second pixels using an edge detection process. In this case, the direction determination unit 430 may use a Sobel operator for edge detection.

For example, the direction determination unit 430 may determine G x , which is a horizontal gradient, using Equation 4 and determine G y , which is a vertical gradient, using Equation 5.

[Equation 4: rendered as an image in the original; G_x, presumably the horizontal Sobel kernel [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] convolved with A.]

[Equation 5: rendered as an image in the original; G_y, presumably the vertical Sobel kernel [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]] convolved with A.]

In this case, A may be a depth image.

Next, the direction determining unit 430 may determine the direction of the gradient using a horizontal gradient and a vertical gradient. For example, the direction determining unit 430 may determine θ, which is the direction of the gradient, using Equation 6.

[Equation 6: rendered as an image in the original; presumably θ = arctan(G_y / G_x).]

Finally, the direction determining unit 430 may determine the direction of the boundary using the direction of the gradient.

For example, when the direction of the gradient is 0° or 180°, the direction determining unit 430 may determine the direction of the boundary to be horizontal; when the direction of the gradient is 90° or 270°, it may determine the direction of the boundary to be vertical.

In addition, when the direction of the gradient is between 90° and 180° or between 270° and 360°, the direction determining unit 430 may determine the direction of the boundary to be a diagonal toward the left. Here, a diagonal toward the left is a diagonal whose left end is higher than its right end.

When the direction of the gradient is between 0° and 90° or between 180° and 270°, the direction determining unit 430 may determine the direction of the boundary to be a diagonal toward the right. Here, a diagonal toward the right is a diagonal whose right end is higher than its left end.

A process of determining the directionality of the boundary by the direction determining unit 430 will be described in detail with reference to FIG. 6.

The slope determiner 440 may determine the slope of the second pixel using the position of the first pixel and the position of the second pixel.

In detail, the slope determiner 440 may determine the slope of the second pixel using the vertical distance and the horizontal distance between the first pixel and the second pixel. When the horizontal coordinate of the first pixel and the horizontal coordinate of the second pixel are the same, the denominator would be 0 and the slope of the second pixel would be infinite. Therefore, in that case the tilt determination unit 440 sets the denominator to 1, so that the difference between the vertical coordinates of the first pixel and the second pixel is determined as the slope of the second pixel.

For example, the slope determination unit 440 may determine the slope of the second pixel using Equation 7.

[Equation 7: rendered as an image in the original; presumably the slope of the second pixel is |p_y - q_y| / |p_x - q_x|, with the denominator set to 1 when p_x = q_x.]

In this case, p_x and p_y may be the x and y coordinates of the first pixel, and q_x and q_y the x and y coordinates of the second pixel.
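The slope computation with the divide-by-zero guard described above can be sketched as follows; the absolute-value form is an assumption, since Equation 7 is an image in the original.

```python
def pixel_slope(p, q):
    """Slope of second pixel q relative to first pixel p: vertical distance
    over horizontal distance, with the denominator set to 1 when the
    horizontal coordinates coincide, as the text describes. p and q are
    (x, y) tuples; the absolute-value form is assumed."""
    dy = abs(p[1] - q[1])  # vertical distance
    dx = abs(p[0] - q[0])  # horizontal distance
    return dy / (dx if dx != 0 else 1)
```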

The direction weight determination unit 450 may determine the direction weight of the second pixel using the directionality of the boundary and the slope of the second pixel.

In this case, the direction weight determiner 450 may determine the direction weight of a second pixel by inputting the slope determined by the slope determiner 440 into the formula, among the equations defining the direction weights, that corresponds to the direction of the boundary determined by the direction determiner 430. The equations defining the direction weights may be Equations 8 to 11.

[Equation 8: rendered as an image in the original; defines W_dir_vertical, the direction weight used when the boundary direction is vertical.]

In this case, W_dir_vertical may be used as the direction weight W_dir of the second pixel when the boundary direction is vertical. Equation 8 may be a formula that makes the direction weight of the second pixel higher as its slope is closer to horizontal and lower as its slope is closer to vertical.

[Equation 9: rendered as an image in the original; defines W_dir_horizontal, the direction weight used when the boundary direction is horizontal.]

In this case, W_dir_horizontal may be used as the direction weight W_dir of the second pixel when the boundary direction is horizontal. Equation 9 may be a formula that makes the direction weight of the second pixel higher as its slope is closer to vertical and lower as its slope is closer to horizontal.

[Equation 10: rendered as an image in the original; defines W_dir_diagonal_upleft, the direction weight used when the boundary is a diagonal toward the left.]

In this case, W_dir_diagonal_upleft may be used as the direction weight W_dir of the second pixel when the boundary is a diagonal toward the left. Equation 10 may be a formula that makes the direction weight of the second pixel low when its slope is close to a diagonal whose left end is higher.

[Equation 11: rendered as an image in the original; defines W_dir_diagonal_upright, the direction weight used when the boundary is a diagonal toward the right.]

In this case, W_dir_diagonal_upright may be used as the direction weight W_dir of the second pixel when the boundary is a diagonal toward the right. Equation 11 may be a formula that makes the direction weight of the second pixel low when its slope is close to a diagonal whose right end is higher.

For example, if the direction of the boundary is horizontal and the slope of the second pixel is 0.8, the direction weight determiner 450 may determine the direction weight of the second pixel to be 0.3 according to Equation 9. Even with the same slope of 0.8, when the boundary direction is vertical, the direction weight determining unit 450 may determine the direction weight of the second pixel to be 0.1 according to Equation 8.

In addition, when the inclination of the second pixel is 0.8 and the direction of the boundary is a diagonal line in the left direction, the direction weight determining unit 450 may determine the direction weight of the second pixel as 0.7 according to Equation 10.

As in the above examples, the direction weight determining unit 450 may determine the direction weight of the second pixel using a different equation according to the direction of the boundary, so that the direction weight changes with the direction of the boundary.
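The per-direction dispatch can be sketched as follows. Equations 8 to 11 are images in the original, so the monotone mappings below are hypothetical stand-ins: they only reproduce the described trends (weight high when the pixel slope runs across the boundary, low when it runs along it), not the patent's exact values.

```python
def direction_weight(boundary, slope):
    """Direction weight of a second pixel given the boundary direction and
    the (non-negative) pixel slope. These mappings are assumed stand-ins
    for Equations 8-11, which are images in the source."""
    if boundary == "vertical":
        # Eq. 8 trend: higher for near-horizontal (small) slopes.
        return 1.0 / (1.0 + slope)
    if boundary == "horizontal":
        # Eq. 9 trend: higher for near-vertical (large) slopes.
        return slope / (1.0 + slope)
    # Eqs. 10-11 trend: low when the slope is close to the diagonal
    # (slope near 1). With absolute slopes the two diagonal cases
    # coincide in this sketch.
    return abs(slope - 1.0) / (1.0 + slope)
```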

5 illustrates an example of determining a distance weight according to an embodiment.

The distance weight determination unit 410 may determine the distance weight of the second pixel according to the distance between the first pixel 510 and the second pixel.

For example, as illustrated in FIG. 5, the distance weight determiner 410 may determine the distance weight of the second pixel 521, located at the outer edge of the region 520 containing the second pixels, to be higher than the distance weight of the second pixel 522, which is closer to the first pixel 510.

Since the second pixel 522, being close to the first pixel 510, is also close to the boundary 500 of the object included in the region 520 as shown in FIG. 5, the possibility of distortion caused by the boundary 500 is high. Accordingly, the distance weight determination unit 410 may determine the distance weight of the second pixel 521 at the outer edge of the region 520 to be higher than the distance weight of the second pixel 522 near the first pixel 510.

Across the boundary 500, the pixels corresponding to the object may lie on one side and the pixels corresponding to the background on the other.

Accordingly, the distance weight determination unit 410 may assign the highest distance weight both to the second pixel 523, which is farthest from the first pixel among the second pixels corresponding to the object, and to the second pixel 521, which is farthest from the first pixel among the second pixels corresponding to the background.

In this case, the depth value adjuster 230 may refer to the depth value of whichever of the second pixel 523 and the second pixel 521 has the higher depth value weight when adjusting the depth value of the first pixel.

6 illustrates an example of determining a direction weight according to an embodiment.

As illustrated in FIG. 6, when the object boundary 611 of the region including the second pixels is formed horizontally (610), the direction determiner 430 may determine the directionality of the boundary to be horizontal.

Also, the tilt determiner 440 may determine the slope 602 of the second pixel using the position of the second pixel 601 and the position of the first pixel 600.

In this case, the direction weight determining unit 450 may determine, as the direction weight of the second pixel, the value obtained by inputting the slope 602 of the second pixel into Equation 9, which determines the direction weight when the boundary direction is horizontal. Since the slope 602 of the second pixel 601 differs from the advancing direction of the boundary 611, the direction weight determining unit 450 may determine the direction weight to be high.

In addition, when the object boundary 621 of the region including the second pixels is formed vertically (620), as illustrated in FIG. 6, the direction determiner 430 may determine the directionality of the boundary to be vertical.

In this case, the direction weight determining unit 450 may determine, as the direction weight of the second pixel, the value obtained by inputting the slope 602 of the second pixel into Equation 8, which determines the direction weight when the boundary direction is vertical. Since the slope 602 of the second pixel 601 differs from the advancing direction of the boundary 621, the direction weight determining unit 450 may determine the direction weight to be high.

In addition, when the object boundary 631 of the region including the second pixels is formed at an angle between 0° and 90° or between 180° and 270° (630), as illustrated in FIG. 6, the direction determiner 430 may determine the direction of the boundary to be a diagonal toward the right.

At this time, the direction weight determining unit 450 may determine, as the direction weight of the second pixel, the value obtained by inputting the slope 602 of the second pixel into Equation 11, which determines the direction weight when the boundary direction is a diagonal toward the right. Since the slope 602 of the second pixel 601 differs from the advancing direction of the boundary 631, the direction weight determining unit 450 may determine the direction weight to be high.

In addition, when the object boundary 641 of the region including the second pixels is formed at an angle between 90° and 180° or between 270° and 360° (640), as illustrated in FIG. 6, the direction determiner 430 may determine the direction of the boundary to be a diagonal toward the left.

At this time, the direction weight determining unit 450 may determine, as the direction weight of the second pixel, the value obtained by inputting the slope 602 of the second pixel into Equation 10, which determines the direction weight when the direction of the boundary is a diagonal toward the left. As shown in FIG. 6, however, the advancing direction of the boundary 641 is close to the slope of the second pixel 601, so the direction weight determiner 450 may determine the direction weight of the second pixel 601 to be low.

FIG. 7 is a diagram illustrating a structure of an encoding apparatus according to an embodiment.

The residual information determiner 710 of the encoding apparatus may generate a prediction image by predicting a value of the next frame of the depth image, and determine the residual information by comparing the prediction image with the actual next frame of the depth image. In this case, the residual information determiner 710 may generate the prediction image using either the inter mode or the intra mode. The residual information may be the difference between the actual next frame of the depth image and the prediction image.
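As a minimal sketch of the residual computation described above (assuming simple 8-bit depth arrays; the actual codec operates block-wise and in a prediction loop):

```python
import numpy as np

def residual_info(prediction, next_frame):
    """Residual information: per-pixel difference between the actual next
    frame of the depth image and its prediction. A signed int16 type is
    used here so that negative differences survive."""
    return next_frame.astype(np.int16) - prediction.astype(np.int16)

prediction = np.array([[100, 100], [100, 100]], dtype=np.uint8)
next_frame = np.array([[102, 99], [100, 101]], dtype=np.uint8)
residual = residual_info(prediction, next_frame)

# A decoder holding the same prediction recovers the frame exactly
# (ignoring quantization loss) as prediction + residual.
recovered = (prediction.astype(np.int16) + residual).astype(np.uint8)
```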

Next, the encoder 720 may quantize and encode the residual information determined by the residual information determiner 710, and transmit it to the decoding apparatus.
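The information loss that quantization introduces — the source of the boundary distortion the later filters must correct — can be illustrated with a toy uniform quantizer. The step size and the quantizer itself are illustrative assumptions; the codec's actual quantization is far more elaborate.

```python
import numpy as np

def quantize(residual, step=4):
    """Toy uniform quantizer for the residual information (illustrative)."""
    return np.round(residual / step).astype(np.int16)

def dequantize(levels, step=4):
    """Inverse of the toy quantizer; reconstruction error is bounded by step/2."""
    return (levels * step).astype(np.int16)

residual = np.array([-7, -2, 0, 3, 9], dtype=np.int16)
rec = dequantize(quantize(residual))
```

The reconstructed residual only approximates the original (e.g. -7 becomes -8), which is exactly the lossy behavior that later distorts object boundaries in the decoded depth image.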

The decoder 730 may entropy-decode and dequantize the information encoded by the encoder 720 to reconstruct the residual information.

The depth image determiner 740 may combine the residual information decoded by the decoder 730 with the prediction image generated by the residual information determiner 710 to determine the depth image. In this case, the determined depth image may be in a state where distortion occurs at the boundary of the object due to information loss in the encoding process.

Therefore, the first filter 750 may reduce the distortion of the depth image by filtering it with a deblocking filter.

The second filter 760 may post-process the depth image that has passed through the deblocking filter using the depth image processing apparatus 100, thereby preventing the distortion that occurs at the boundary of the object due to a sudden difference in depth values.

In detail, the second filter 760 may set the second pixels using the first pixel, which is the pixel on which the first filter 750 performs the deblocking filtering. Next, the second filter 760 may output a depth image in which the depth value of the first pixel is adjusted using at least one of the distance weight, the depth value weight, and the direction weight of the second pixels.
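The adjustment of the first pixel's depth value can be sketched as a normalized weighted sum over the second pixels. The sketch below uses only the depth-value weight (inversely proportional to the absolute depth difference, here via an exponential falloff); the distance and direction weights would be multiplied in at the marked line. The function name, window shape, and the `sigma_d` falloff parameter are assumptions, not taken from the patent.

```python
import numpy as np

def adjust_first_pixel(depth, y, x, radius=1, sigma_d=2.0):
    """Sketch: replace the depth of the first pixel (y, x) with a
    normalized weighted sum over the second pixels in a window."""
    h, w = depth.shape
    center = float(depth[y, x])
    acc, norm = 0.0, 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                d = float(depth[ny, nx])
                # depth-value weight; distance and direction weights
                # would multiply this factor
                weight = np.exp(-abs(d - center) / sigma_d)
                acc += weight * d
                norm += weight
    return acc / norm
```

On a uniform region the pixel is unchanged; an outlier at a boundary is pulled toward its neighbors, which is the smoothing behavior the second filter relies on.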

In this case, the residual information determiner 710 may refer to the depth image output by the second filter 760 when generating the prediction image of the next frame.

FIG. 8 is a diagram illustrating a structure of a decoding apparatus according to an embodiment.

The decoder 810 may decode the residual information from the information received from the encoding apparatus. In this case, the decoder 810 may decode the residual information by using entropy decoding, reordering, and inverse quantization.

The prediction image generator 820 may generate the prediction image of the next frame using the depth image decoded from the received information. In this case, the prediction image generator 820 may generate the prediction image by using one of the inter mode and the intra mode. Also, the prediction image generator 820 may generate the prediction image using the same mode as the residual information determiner 710 of the encoding apparatus.

The depth image determiner 830 may determine the depth image by combining the residual information decoded by the decoder 810 with the prediction image generated by the prediction image generator 820. In this case, the determined depth image may be in a state where distortion occurs at the boundary of the object due to information loss in the encoding process.

Accordingly, the first filter 840 may reduce the distortion of the depth image by filtering it with a deblocking filter.

The second filter 850 may post-process the depth image that has passed through the deblocking filter using the depth image processing apparatus 100, thereby preventing the distortion that occurs at the boundary of the object due to a sudden difference in depth values.

In detail, the second filter 850 may set the second pixels using the first pixel, which is the pixel on which the first filter 840 performed the deblocking filtering. Next, the second filter 850 may output a depth image in which the depth value of the first pixel is adjusted using at least one of the distance weight, the depth value weight, and the direction weight of the second pixels.

In this case, when the prediction image generator 820 generates the prediction image of the next frame, the prediction image generator 820 may refer to the depth image output by the second filter 850.

FIG. 9 is a diagram illustrating a depth image processing method according to an exemplary embodiment.

In operation 910, the first pixel determiner 310 may determine, as the first pixel, pixels located at a boundary of the object among pixels included in the depth image. In this case, the pixels located at the boundary of the object may be pixels filtered by the deblocking filter.

In operation 920, the second pixel determiner 320 may determine, as the second pixel, pixels included in a predetermined area around the first pixel determined by the first pixel determiner 310. In detail, the second pixel determiner 320 may apply a fixed window or a variable window around the first pixel, and determine the pixels included in the window as the second pixel.
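A fixed-window version of operation 920 can be sketched as below. The (2·radius+1)×(2·radius+1) window size is illustrative; the patent also allows a variable window.

```python
import numpy as np

def second_pixels(depth, y, x, radius=1):
    """Collect the second pixels: all pixel coordinates inside a fixed
    square window centred on the first pixel (y, x), clipped at the
    image border."""
    h, w = depth.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    return [(ny, nx) for ny in range(y0, y1) for nx in range(x0, x1)]
```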

In operation 930, the distance weight determiner 410 may determine the distance weight of the second pixel according to the distance between the first pixel and the second pixel. In this case, the distance weight determination unit 410 may determine the distance weight of the second pixel to be proportional to the distance between the first pixel and the second pixel.

In operation 940, the depth value determiner 420 may determine the depth value weight of the second pixel according to the difference between the depth value of the first pixel and the depth value of the second pixel. In this case, the depth value weight may be inversely proportional to the absolute difference between the depth value of the first pixel and the depth value of the second pixel.
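Operations 930 and 940 can be sketched together: a first weight proportional to the inter-pixel distance and a second weight inversely proportional to the absolute depth difference. The proportionality constants and the `eps` regularizer (which avoids division by zero for equal depths) are assumptions; the patent's exact equations are not reproduced here.

```python
import math

def distance_weight(p1, p2):
    """First weight: proportional to the Euclidean distance between the
    first pixel p1 and the second pixel p2 (unit constant assumed)."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def depth_value_weight(d1, d2, eps=1.0):
    """Second weight: inversely proportional to the absolute depth
    difference; eps is a hypothetical regularizer."""
    return 1.0 / (abs(d1 - d2) + eps)
```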

Also in operation 940, the direction weight determiner 450 may determine the direction weight of the second pixel using the direction of the boundary including the first pixel and the slope of the second pixel.

The process of determining the direction weight will be described in detail with reference to FIG. 10.

In operation 950, the depth value adjusting unit 230 may adjust the depth value of the first pixel using at least one of the distance weight determined in operation 930 and the depth value weight and direction weight determined in operation 940.

In this case, operations 930 through 950 need not be performed sequentially; they may be performed in parallel or in a different order.

FIG. 10 illustrates a method of determining a direction weight according to an embodiment. Operations 1010 through 1030 of FIG. 10 may be included in operation 940 of FIG. 9.

In operation 1010, the direction determiner 430 may determine the direction of the boundary including the first pixel by using the depth values of the second pixels.

In detail, the direction determiner 430 may determine the horizontal gradient and the vertical gradient of the area including the second pixels using an edge detection process. Next, the direction determiner 430 may determine the direction of the gradient using the horizontal gradient and the vertical gradient. Finally, the direction determiner 430 may determine the direction of the boundary using the direction of the gradient.
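A Sobel-style sketch of this edge-detection step is shown below. The boundary runs perpendicular to the gradient direction; the subsequent binning into horizontal, vertical, and diagonal classes is an assumption rather than the patent's equations.

```python
import numpy as np

def boundary_gradient(region):
    """Horizontal and vertical gradients of a 3x3 region containing the
    second pixels, plus the gradient angle in degrees."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    gx = float((region * kx).sum())          # horizontal gradient
    gy = float((region * ky).sum())          # vertical gradient
    angle = np.degrees(np.arctan2(gy, gx))   # direction of the gradient
    return gx, gy, angle
```

For a vertical step edge the vertical gradient vanishes and the gradient points horizontally (angle 0°), so the boundary direction would be classed as vertical.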

In operation 1020, the slope determiner 440 may determine the slope of the second pixel using the position of the first pixel and the position of the second pixel.

In detail, the slope determiner 440 may determine the slope of the second pixel using the vertical distance between the first pixel and the second pixel and the horizontal distance between the first pixel and the second pixel.
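A sketch of the slope computation, using (row, column) coordinates: vertical distance over horizontal distance. Following claim 6, when the horizontal coordinates coincide the slope is taken to be the vertical-coordinate difference itself (an assumed reading, in place of an undefined infinite slope).

```python
def pixel_slope(p1, p2):
    """Slope of the second pixel p2 relative to the first pixel p1."""
    dy = p2[0] - p1[0]   # vertical distance (row difference)
    dx = p2[1] - p1[1]   # horizontal distance (column difference)
    if dx == 0:
        # same horizontal coordinate: use the vertical difference (claim 6)
        return float(dy)
    return dy / dx
```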

In operation 1030, the direction weight determination unit 450 may determine the direction weight of the second pixel using the direction of the boundary determined in operation 1010 and the slope of the second pixel determined in operation 1020.

In this case, the direction weight determiner 450 may determine the direction weight of the second pixel by substituting the slope of the second pixel determined by the slope determiner 440 into the equation that corresponds, among the equations defining the direction weights, to the directionality of the boundary determined by the direction determiner 430.

FIG. 11 is a diagram illustrating an encoding method according to an embodiment.

In operation 1110, the residual information determiner 710 may generate a prediction image by predicting a value of the next frame of the depth image, and determine the residual information by comparing the prediction image with the actual next frame of the depth image.

In operation 1120, the encoder 720 may quantize and encode the residual information determined in operation 1110, and transmit it to the decoding apparatus.

In operation 1130, the decoder 730 may entropy-decode and dequantize the information encoded in operation 1120 to reconstruct the residual information.

In operation 1140, the depth image determiner 740 may combine the residual information decoded in operation 1130 with the prediction image generated by the residual information determiner 710 to determine the depth image.

In operation 1150, the first filter 750 may reduce the distortion of the depth image by filtering the depth image determined in operation 1140 using a deblocking filter.

In operation 1160, the second filter 760 may post-process the depth image filtered in operation 1150 using the depth image processing apparatus 100, thereby preventing the distortion that occurs at the object boundary due to a sudden difference in depth values.

In detail, the second filter 760 may set the second pixels using the first pixel, which is the pixel on which the first filter 750 performs the deblocking filtering. Next, the second filter 760 may output a depth image in which the depth value of the first pixel is adjusted using at least one of the distance weight, the depth value weight, and the direction weight of the second pixels.

FIG. 12 is a diagram illustrating a decoding method according to an embodiment.

In operation 1210, the decoder 810 may decode the residual information from the information received from the encoding apparatus.

In operation 1220, the prediction image generator 820 may generate the prediction image of the next frame using the depth image decoded from the received information.

In operation 1230, the depth image determiner 830 may combine the residual information decoded in operation 1210 with the prediction image generated in operation 1220 to determine the depth image.

In operation 1240, the first filter 840 may filter the depth image determined in operation 1230 using a deblocking filter to reduce distortion of the depth image.

In operation 1250, the second filter 850 may post-process the depth image filtered in operation 1240 using the depth image processing apparatus 100, thereby preventing the distortion that occurs at the object boundary due to a sudden difference in depth values.

In detail, the second filter 850 may set the second pixels using the first pixel, which is the pixel on which the first filter 840 performed the deblocking filtering. Next, the second filter 850 may output a depth image in which the depth value of the first pixel is adjusted using at least one of the distance weight, the depth value weight, and the direction weight of the second pixels.

The apparatus described above may be implemented as a hardware component, a software component, and/or a combination of hardware and software components. For example, the apparatus and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. The processing device may execute an operating system (OS) and one or more software applications running on the operating system. The processing device may also access, store, manipulate, process, and generate data in response to execution of the software. For ease of understanding, the processing device may be described as being used singly, but those skilled in the art will recognize that the processing device may include a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device may comprise a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.

The software may include a computer program, code, instructions, or a combination of one or more of these, and may independently or collectively instruct or configure the processing device to operate as desired. The software and/or data may be embodied permanently or temporarily in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or in a transmitted signal wave, in order to be interpreted by the processing device or to provide instructions or data to the processing device. The software may be distributed over networked computer systems and stored or executed in a distributed manner. The software and data may be stored on one or more computer-readable recording media.

The method according to an embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and configured for the embodiments, or may be those known and available to those skilled in the art of computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine language code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. For example, appropriate results may be achieved even if the described techniques are performed in a different order than the described method, and/or if components of the described systems, structures, devices, circuits, and the like are combined in a different form than the described method, or are replaced or substituted by other components or equivalents.

Therefore, other implementations, other embodiments, and equivalents to the claims are also within the scope of the following claims.

100: depth image processing device
110: pixel setting unit
120: weight setting unit
130: depth value control unit

Claims (21)

A depth image processing apparatus comprising:
a first pixel determiner configured to determine, as first pixels, pixels located at a boundary of an object among pixels included in a depth image;
a second pixel determiner configured to determine pixels of a predetermined area around the first pixel as second pixels; and
a depth value adjusting unit configured to adjust a depth value of the first pixel using depth values of the second pixels.
The apparatus of claim 1, wherein the depth value adjusting unit adjusts the depth value of the first pixel using a first weight proportional to a distance between the first pixel and the second pixel, together with the depth values of the second pixels.
The apparatus of claim 1, wherein the depth value adjusting unit adjusts the depth value of the first pixel using a second weight inversely proportional to a difference between the depth value of the first pixel and the depth value of the second pixel, together with the depth values of the second pixels.
The apparatus of claim 1, wherein the depth value adjusting unit adjusts the depth value of the first pixel using a third weight according to the directionality of the boundary including the first pixel and the slope of the second pixel, together with the depth values of the second pixels.
The apparatus of claim 4, wherein the directionality of the boundary is determined according to a gradient of the region including the second pixels.
The apparatus of claim 4, wherein, when the horizontal coordinates of the first pixel and the second pixel are the same, the slope of the second pixel is a difference between the vertical coordinates of the first pixel and the second pixel.
A depth image processing apparatus comprising:
a pixel setting unit configured to set second pixels around first pixels at a boundary of an object;
a weight determination unit configured to determine, using the second pixels, at least one of a first weight based on the distance between pixels, a second weight based on the difference between pixel depth values, and a third weight based on the directionality of the boundary and the pixel slope; and
a depth value adjusting unit configured to adjust a depth value of the first pixels using the determined weight.
The apparatus of claim 7, wherein the weight determination unit determines a first weight of a second pixel far from the first pixel to be higher than a first weight of a second pixel close to the first pixel.
The apparatus of claim 7, wherein the weight determination unit comprises:
a direction determination unit configured to determine the directionality of the boundary using a gradient of an area including the second pixels; and
a slope determination unit configured to determine the slope of the second pixel using the position of the first pixel and the position of the second pixel.
The apparatus of claim 9, wherein the slope determination unit determines, when the horizontal coordinates of the first pixel and the second pixel are the same, a difference between the vertical coordinates of the first pixel and the second pixel as the slope of the second pixel.
A depth image processing method comprising:
determining, as first pixels, pixels located at a boundary of an object among pixels included in a depth image;
determining pixels of a predetermined area around the first pixel as second pixels; and
adjusting a depth value of the first pixel using depth values of the second pixels.
The method of claim 11, wherein the adjusting of the depth value comprises adjusting the depth value of the first pixel using a first weight proportional to a distance between the first pixel and the second pixel, together with the depth values of the second pixels.
The method of claim 11, wherein the adjusting of the depth value comprises adjusting the depth value of the first pixel using a second weight inversely proportional to a difference between the depth value of the first pixel and the depth value of the second pixel, together with the depth values of the second pixels.
The method of claim 11, wherein the adjusting of the depth value comprises adjusting the depth value of the first pixel using a third weight according to the directionality of the boundary including the first pixel and the slope of the second pixel, together with the depth values of the second pixels.
The method of claim 14, wherein the directionality of the boundary is information determined according to a gradient of an area including the second pixels.
The method of claim 14, wherein, when the horizontal coordinates of the first pixel and the second pixel are the same, the slope of the second pixel is a difference between the vertical coordinates of the first pixel and the second pixel.
A depth image processing method comprising:
setting second pixels around first pixels at a boundary of an object;
determining, using the second pixels, at least one of a first weight based on the inter-pixel distance, a second weight based on the inter-pixel depth value difference, and a third weight based on the directionality of the boundary and the pixel slope; and
adjusting a depth value of the first pixels using the determined weight.
The method of claim 17, wherein the determining of the weight comprises determining a first weight of a second pixel far from the first pixel to be higher than a first weight of a second pixel close to the first pixel.
The method of claim 17, further comprising:
determining a directionality of the boundary using a gradient of the region in which the second pixels are included; and
determining the slope of the second pixel using the position of the first pixel and the position of the second pixel.
The method of claim 19, wherein the determining of the slope comprises determining, when the horizontal coordinates of the first pixel and the second pixel are the same, a difference between the vertical coordinates of the first pixel and the second pixel as the slope of the second pixel.
A computer-readable recording medium having recorded thereon a program for executing the method of any one of claims 11 to 20.
KR1020120046534A 2012-05-02 2012-05-02 Apparatus and method for processing depth image KR20130123240A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020120046534A KR20130123240A (en) 2012-05-02 2012-05-02 Apparatus and method for processing depth image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020120046534A KR20130123240A (en) 2012-05-02 2012-05-02 Apparatus and method for processing depth image

Publications (1)

Publication Number Publication Date
KR20130123240A true KR20130123240A (en) 2013-11-12

Family

ID=49852618

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020120046534A KR20130123240A (en) 2012-05-02 2012-05-02 Apparatus and method for processing depth image

Country Status (1)

Country Link
KR (1) KR20130123240A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113938665A (en) * 2020-07-14 2022-01-14 宏达国际电子股份有限公司 Method and electronic device for transmitting reduced depth information
CN113938665B (en) * 2020-07-14 2023-10-13 宏达国际电子股份有限公司 Method and electronic device for transmitting reduced depth information
US11869167B2 (en) 2020-07-14 2024-01-09 Htc Corporation Method for transmitting reduced depth information and electronic device
CN116503570A (en) * 2023-06-29 2023-07-28 聚时科技(深圳)有限公司 Three-dimensional reconstruction method and related device for image
CN116503570B (en) * 2023-06-29 2023-11-24 聚时科技(深圳)有限公司 Three-dimensional reconstruction method and related device for image

Similar Documents

Publication Publication Date Title
CN109804633B (en) Method and apparatus for omni-directional video encoding and decoding using adaptive intra-prediction
AU2020309130B2 (en) Sample padding in adaptive loop filtering
KR101957873B1 (en) Apparatus and method for image processing for 3d image
CN113301334B (en) Method and apparatus for adaptive filtering of video coding samples
US11212497B2 (en) Method and apparatus for producing 360 degree image content on rectangular projection by selectively applying in-loop filter
CN108293129B (en) Coding sequence coding method and its equipment and decoding method and its equipment
ES2607451T3 (en) Video encoding using an image gradation reduction loop filter
ES2583129T3 (en) Method and apparatus for noise filtering in video coding
US9225967B2 (en) Multi-view image processing apparatus, method and computer-readable medium
US10185145B2 (en) Display apparatus and operating method of display apparatus
US20210297663A1 (en) Systems and methods for image coding
WO2020249124A1 (en) Handling video unit boundaries and virtual boundaries based on color format
US20140267808A1 (en) Video transmission apparatus
JPWO2009037828A1 (en) Image coding apparatus and image decoding apparatus
US10123021B2 (en) Image encoding apparatus for determining quantization parameter, image encoding method, and program
WO2017128634A1 (en) Deblocking filter method and apparatus
KR102522098B1 (en) Method and apparatus for measuring image quality base on perceptual sensitivity
KR20130123240A (en) Apparatus and method for processing depth image
US11330295B2 (en) Determining inter-view prediction areas in images captured with a multi-camera device
CN114827603A (en) CU block division method, device and medium based on AVS3 texture information
KR20120087084A (en) Image processing apparatus and method for defining distortion function for synthesis image of intermediate view
JP6239838B2 (en) Moving picture encoding apparatus, control method thereof, and imaging apparatus
KR102564477B1 (en) Method for detecting object and apparatus thereof
EP3854087A1 (en) Method and apparatus of encoding or decoding using reference samples determined by predefined criteria
TW202007149A (en) Boundary filtering for sub-block

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination