KR20130091500A - Apparatus and method for processing depth image - Google Patents

Apparatus and method for processing depth image Download PDF

Info

Publication number
KR20130091500A
KR20130091500A (application KR1020120012824A)
Authority
KR
South Korea
Prior art keywords
depth image
filter
autocorrelation
input
depth
Prior art date
Application number
KR1020120012824A
Other languages
Korean (ko)
Inventor
임일순
위호천
이재준
Original Assignee
Samsung Electronics Co., Ltd. (삼성전자주식회사)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd. (삼성전자주식회사)
Priority to KR1020120012824A
Publication of KR20130091500A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/128 Adjusting depth or disparity

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A depth image processing apparatus for depth map compression may include an input unit that receives a depth image and a color image, a boundary emphasis unit that emphasizes a boundary in the input depth image using the input depth image and the color image, and a noise removing unit that removes noise from the depth image in which the boundary is emphasized.

Description

APPARATUS AND METHOD FOR PROCESSING DEPTH IMAGE

The present invention relates to a depth map compression method and to a method of supporting a reconstruction function in an in-loop filter.

A stereoscopic image compression system compresses color video and depth video (depth maps). Color images can be compressed efficiently by methods such as H.264/AVC, H.264/MVC, and HEVC, but the characteristics of depth images differ significantly from those of color images.

The existing video compression (encoding) standards are H.261, H.263, MPEG-1, MPEG-2, MPEG-4, H.264, and HEVC (High Efficiency Video Coding).

Although the existing compression standards vary slightly, they generally share a similar structure consisting of motion estimation and compensation, transform coding, and entropy coding.

In particular, the deblocking filter adopted in H.264 and HEVC is known to minimize the block boundary distortion present in the reconstructed image, which improves not only the subjective image quality but also the precision of prediction in the motion estimation and compensation process, thereby improving overall coding efficiency.

Such a deblocking filter performs well on low-bitrate images, but contributes little on high-quality images and can even degrade encoding performance.

The adaptive loop filter (ALF) adopted in recent compression standards minimizes the error between the reconstructed image and the original image and is effective in high-quality images, where the deblocking filter is not.

A typical adaptive loop filter is a Wiener-filter-based restoration filter.

Recently, an adaptive loop filter applied at the stage after the deblocking filter has been proposed to improve objective image quality.

However, such a deblocking filter and a Wiener-filter-based adaptive loop filter over-smooth the image and thus fail to reflect the visual preference of viewers for sharp image quality.

Therefore, the deblocking filter and the Wiener-filter-based adaptive loop filter fail to improve subjective image quality and, more critically, cause incorrect rendering results when the depth map of a 3D image is used for rendering.

Therefore, the conventional deblocking filter and adaptive loop filter degrade the subjective quality or the rendering quality in the process of minimizing the error between the reconstructed picture and the original picture, thereby reducing compression efficiency.

A depth image processing apparatus according to an embodiment of the present invention may include an input unit that receives a depth image and a color image, a boundary emphasis unit that emphasizes a boundary in the input depth image using the input depth image and the color image, and a noise removing unit that removes noise from the depth image in which the boundary is emphasized.

In one embodiment, a method of operating a depth image processing apparatus includes receiving a depth image and a color image, emphasizing a boundary in the input depth image using the input depth image and the color image, and removing noise from the depth image in which the boundary is emphasized.

FIG. 1 is a block diagram illustrating a depth image processing apparatus according to an exemplary embodiment of the present invention.
FIG. 2 is a diagram illustrating an embodiment in which a depth image processing apparatus according to an embodiment of the present invention is implemented as a loop filter applied at an in-loop position in an encoder of a compression system.
FIG. 3 is a diagram illustrating an embodiment in which a depth image processing apparatus according to an embodiment of the present invention is implemented as a loop filter applied at an in-loop position in a decoder of a compression system.
FIG. 4 is a diagram illustrating an embodiment in which a depth image processing apparatus according to an embodiment of the present invention is implemented as a post filter applied after an encoder of a compression system.
FIG. 5 is a diagram illustrating an embodiment in which a depth image processing apparatus according to an embodiment of the present invention is implemented as a post filter applied after a decoder of a compression system.
FIG. 6 is a block diagram illustrating a boundary emphasis unit in detail according to an embodiment of the present invention.
FIG. 7 is a diagram for explaining an embodiment of calculating autocorrelation in an autocorrelation calculator according to an embodiment of the present invention.
FIG. 8 is a diagram illustrating the direction of autocorrelation calculated by an autocorrelation calculator according to an embodiment of the present invention.
FIG. 9 is a diagram illustrating a filter structure for each direction according to the autocorrelation of a depth image.
FIG. 10 is a flowchart illustrating an operation of a noise removing unit according to an embodiment of the present invention.
FIG. 11 is a flowchart illustrating a method of operating a depth image processing apparatus according to an exemplary embodiment.

Hereinafter, preferred embodiments according to the present invention will be described in detail with reference to the accompanying drawings.

In the following description of the present invention, detailed description of known functions and configurations incorporated herein will be omitted when it may make the subject matter of the present invention rather unclear. The terminologies used herein are terms used to properly represent preferred embodiments of the present invention, which may vary depending on the user, the intent of the operator, or the practice of the field to which the present invention belongs. Therefore, the definitions of the terms should be made based on the contents throughout the specification. Like reference symbols in the drawings denote like elements.

FIG. 1 is a block diagram illustrating a depth image processing apparatus 100 according to an exemplary embodiment.

The depth image processing apparatus 100 according to an embodiment of the present invention generates and applies an in-loop filter that minimizes the error between the compressed image and the original image during image encoding. By removing encoding errors, it supports an in-loop filter function that enables more accurate prediction in the motion estimation and compensation of subsequent images, thereby improving coding efficiency.

To this end, the depth image processing apparatus 100 according to an embodiment of the present invention may have a two-stage structure: the first stage performs the boundary enhancement function and the second stage performs the noise removal function.

In detail, the depth image processing apparatus 100 according to the exemplary embodiment may include an input unit 110, a boundary emphasis unit 120, and a noise remover 130.

The input unit 110 according to an embodiment of the present invention may receive a depth image and a color image.

The boundary emphasis unit 120 according to an embodiment of the present invention may emphasize the boundary in the input depth image by using the input depth image and the color image.

The noise removing unit 130 according to an embodiment of the present invention may remove noise from the depth image in which the boundary is emphasized.
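For illustration only, the two-stage structure of the apparatus 100 can be sketched as follows. This is a minimal sketch, not the disclosed implementation: the class name, function signatures, and the NumPy array representation of the images are assumptions introduced for this example.

```python
import numpy as np

class DepthImageProcessor:
    """Hypothetical sketch of the two-stage structure: stage 1 emphasizes
    boundaries in the depth image using the color image as guidance,
    stage 2 removes noise from the boundary-emphasized result."""

    def __init__(self, boundary_enhancer, noise_remover):
        # boundary_enhancer(depth, color) -> boundary-emphasized depth image
        # noise_remover(depth) -> denoised depth image
        self.boundary_enhancer = boundary_enhancer
        self.noise_remover = noise_remover

    def process(self, depth: np.ndarray, color: np.ndarray) -> np.ndarray:
        # Stage 1: boundary emphasis using both the depth and color images.
        emphasized = self.boundary_enhancer(depth, color)
        # Stage 2: noise removal on the boundary-emphasized depth image.
        return self.noise_remover(emphasized)
```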

The depth image processing apparatus 100 according to an embodiment of the present invention can be used in the fields of image production, compression, transmission, and display.

For example, it can be used in all fields of 3D stereoscopic imaging, such as 3D TV, multi-view video, super multi-view video (SMV), and free viewpoint TV (FTV).

In particular, because bandwidth is limited, it is useful wherever the bit rate of an image must be reduced.

FIG. 2 is a diagram illustrating an embodiment in which a depth image processing apparatus according to an embodiment of the present invention is implemented as a loop filter applied at an in-loop position in an encoder of a compression system.

Referring to FIG. 2, a depth image processing apparatus according to an embodiment of the present invention may be included as a loop filter 200 in an encoder of a compression system.

The encoder of the compression system may include a prediction unit 201, a motion estimation and compensation unit 202, a subtractor 203, a transform and quantization unit 204, an entropy coding unit 207, an inverse quantization and inverse transform unit 205, an adder 206, a loop filter 200, and a picture buffer 208.

The encoder of the compression system may apply a reconstruction filter operation to the input compressed depth image and store the reconstructed depth image in the picture buffer 208. The additional information determined by the encoder of the compression system may be recorded in the bitstream and transmitted to the decoder of the compression system.

The prediction unit 201 broadly performs intra prediction and inter prediction.

The prediction image output from the prediction unit 201 and the difference image output from the transform and quantization unit 204 are combined to generate a compressed image.

The depth image processing apparatus according to an embodiment of the present invention applies a reconstruction filter to the compressed image, stores the resulting image in the picture buffer 208, and passes additional information to the entropy coding unit 207.

FIG. 3 is a diagram illustrating an embodiment in which a depth image processing apparatus according to an embodiment of the present invention is implemented as a loop filter applied at an in-loop position in a decoder of a compression system.

Referring to FIG. 3, a depth image processing apparatus according to an embodiment of the present invention may be included in a decoder of a compression system in the form of a loop filter 300.

The decoder of the compression system may include an entropy decoding unit 301, an inverse quantization and inverse transform unit 302, a motion estimation and compensation unit 303, an adder 304, a loop filter 300, and a picture buffer 305.

The decoder of the compression system may receive a bit stream transmitted from an encoder of the compression system and obtain additional information from the received bit stream. In addition, the decoder of the compression system may reconstruct the depth image using the additional information, and may store the reconstructed depth image in the picture buffer 305.

The loop filter 300 has a two-stage structure. Stage 1 performs the boundary emphasis function, and the iteration equation in the decoder is executed for the number of iterations received as additional information. Stage 2 performs noise removal, and the filter parameter that determines the amount of noise removal is set to the optimal parameter received as additional information.

FIG. 4 is a diagram illustrating an embodiment in which a depth image processing apparatus according to an embodiment of the present invention is implemented as a post filter applied after an encoder of a compression system.

Referring to FIG. 4, a depth image processing apparatus according to an embodiment of the present invention may be included as a post filter 420 in an encoder of a compression system.

The encoder of the compression system may include a prediction unit 401, a motion estimation and compensation unit 402, a subtractor 403, a transform and quantization unit 404, an entropy coding unit 407, an inverse quantization and inverse transform unit 405, an adder 406, a post filter 420, and a picture buffer 408.

The encoder of the compression system may apply a reconstruction filter operation to the input compressed depth image and store the reconstructed depth image in the picture buffer 408. The additional information determined by the encoder of the compression system may be recorded in the bitstream and transmitted to the decoder of the compression system.

The prediction unit 401 broadly performs intra prediction and inter prediction.

The predicted image output from the predictor 401 and the differential image output from the transform and quantizer 404 are combined to generate a compressed image.

The depth image processing apparatus according to an embodiment of the present invention applies a reconstruction filter to the compressed image, stores the resulting image in the picture buffer 408, and passes additional information to the entropy coding unit 407.

FIG. 5 is a diagram illustrating an embodiment in which a depth image processing apparatus according to an embodiment of the present invention is implemented as a post filter applied after a decoder of a compression system.

Referring to FIG. 5, a depth image processing apparatus according to an embodiment of the present invention may be included in a decoder of a compression system in the form of a post filter 520.

The decoder of the compression system may include an entropy decoding unit 501, an inverse quantization and inverse transform unit 502, a motion estimation and compensation unit 503, an adder 504, a post filter 520, and a picture buffer 505.

The decoder of the compression system may receive a bit stream transmitted from an encoder of the compression system and obtain additional information from the received bit stream. In addition, the decoder of the compression system may reconstruct the depth image using the additional information, and may store the reconstructed depth image in the picture buffer 505.

FIG. 6 is a block diagram illustrating the boundary emphasis unit 600 in detail according to an embodiment of the present invention.

The boundary emphasis unit 600 according to an embodiment of the present invention may include an autocorrelation calculator 610, a cross correlation calculator 620, and an iteration equation performer 630.

First, the autocorrelation calculator 610 calculates a first autocorrelation of the input depth image and a second autocorrelation of the input color image; for this purpose, the autocorrelation calculator 610 may include an autocorrelation calculation unit 612.

The cross correlation calculator 620 according to an embodiment of the present invention may calculate the cross correlation based on the first autocorrelation and the second autocorrelation.

The iteration equation performing unit 630 according to an embodiment of the present invention may perform the iteration equation based on the calculated cross correlation, using a number of iterations proportional to its magnitude.

The boundary of the depth image has an important influence on the synthesis of the virtual viewpoint image. However, when the signal is mixed with the encoding noise, it becomes difficult to distinguish the signal from the noise.

The boundary emphasis unit 600 according to an embodiment of the present invention may address the difficulty of distinguishing the signal from noise when the signal is mixed with encoding noise.

The boundary emphasis unit 600 according to an embodiment of the present invention may use the cross correlation C_xy as a quantitative measure for distinguishing the signal from noise.

The boundary emphasis unit 600 according to an embodiment of the present invention may adjust the edge enhancement intensity of the depth image according to the magnitude of the cross correlation.

As used herein, autocorrelation and cross correlation may be interpreted as measures indicating the magnitude and direction of local structure in an image.

The magnitude of the cross correlation C_xy is a function of the autocorrelation C_xx of the depth image and the autocorrelation C_yy of the corresponding color image, and can be expressed by Equation 1 below. In Equation 1, T_x and T_y are predetermined thresholds.

[Equation 1]

Figure pat00001

FIG. 7 illustrates an embodiment 700 for calculating autocorrelation in an autocorrelation calculator according to an embodiment of the present invention.

FIG. 8 is a diagram 800 illustrating the direction of autocorrelation calculated by an autocorrelation calculator according to an embodiment of the present invention.

FIG. 7 is a diagram illustrating the process of calculating autocorrelation.

The autocorrelation calculator according to an embodiment of the present invention calculates a horizontal gradient and a vertical gradient with respect to the input image when the input image is received.

The autocorrelation calculator according to an embodiment of the present invention defines a sum of absolute values of gradients in each direction as autocorrelation.

That is, the autocorrelation calculator according to an embodiment of the present invention calculates at least one gradient in each direction for the input depth image and computes the first autocorrelation as the sum of the absolute values of each of the calculated gradients.

Likewise, the autocorrelation calculator according to an embodiment of the present invention calculates at least one gradient in each direction for the input color image and computes the second autocorrelation as the sum of the absolute values of each of the calculated gradients.

In addition, the autocorrelation calculator according to an embodiment of the present invention determines the final cross correlation from the autocorrelations of the depth image and the color image through [Equation 1].
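A minimal sketch of the gradient-based autocorrelation computation described above, assuming simple finite-difference gradients over a block. The thresholded combination returned by cross_correlation is only a placeholder: the actual Equation 1 (a function of C_xx, C_yy and the thresholds T_x, T_y) is reproduced only as an image in the publication, so the exact form below is an assumption.

```python
import numpy as np

def autocorrelation(block: np.ndarray) -> float:
    """Sum of absolute horizontal and vertical gradients over the block,
    used as the autocorrelation measure (C_xx for depth, C_yy for color)."""
    gy, gx = np.gradient(block.astype(np.float64))  # vertical, horizontal gradients
    return float(np.sum(np.abs(gx)) + np.sum(np.abs(gy)))

def cross_correlation(depth_block: np.ndarray, color_block: np.ndarray,
                      t_x: float, t_y: float) -> float:
    """Placeholder for Equation 1: combine the thresholded autocorrelations.
    The min-of-thresholded-values form is an assumption, not the patent's formula."""
    c_xx = autocorrelation(depth_block)
    c_yy = autocorrelation(color_block)
    return min(max(c_xx - t_x, 0.0), max(c_yy - t_y, 0.0))
```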

The autocorrelation calculator according to an embodiment of the present invention calculates the direction of the autocorrelation of the depth image using [Equation 2]. In Equation 2, G_x and G_y denote the x-direction and y-direction gradient values shown in FIG. 7, and tan^-1 denotes the arctangent.

[Equation 2]

Figure pat00002

The resulting angle θ is output as a value between 0° and 180°, converted to DirIdx, and later used for the range calculation in Equation 8.

The relationship between the angle and DirIdx is expressed by Equation 3. In Equation 3, ⌊·⌋ denotes the floor operator.

[Equation 3]

Figure pat00003
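The direction computation of Equations 2 and 3 might be sketched as follows. The eight 22.5° bins (plus a non-directional index 0 for nearly flat blocks) are an assumption consistent with the nine directions of FIG. 9; the equations themselves are shown only as images.

```python
import numpy as np

def autocorrelation_angle(gx: float, gy: float) -> float:
    """Equation 2: theta = arctan(Gy / Gx), folded into the range [0, 180)."""
    return np.degrees(np.arctan2(gy, gx)) % 180.0

def direction_index(theta: float, gradient_magnitude: float,
                    flat_threshold: float = 1e-3) -> int:
    """Equation 3 (assumed form): map theta to one of eight directional
    indices with a floor operation; DirIdx 0 means no dominant direction."""
    if gradient_magnitude < flat_threshold:   # nearly flat block -> square filter
        return 0
    return int(np.floor(theta / 22.5)) + 1    # eight 22.5-degree bins -> 1..8
```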

FIG. 9 is a diagram illustrating a filter structure for each direction according to the autocorrelation of a depth image.

The autocorrelation direction 900 is always determined as one of nine directions.

As shown in 910, if DirIdx is 0, there is no directivity and it has a square filter structure.

The remaining eight indices have directionality, which determines the filter structure.

For example, if the correlation is in the vertical direction, the filter structure has a vertically long structure.

If the correlation is diagonal, the filter structure has a long structure in the diagonal direction.

If the correlation is in the horizontal direction, the filter structure is horizontally long.

Equation 4 is an equation for performing the boundary enhancement function.

With the initial condition I(x, y, 0) = I0(x, y) and the Neumann boundary condition, the boundary enhancement function can be performed by solving the partial differential equation.

[Equation 4]

Figure pat00004

Figure pat00005

Figure pat00006

In Equation 4, I is the image, (x, y) are the spatial coordinates of the image, t is time, and c is the diffusion coefficient; the remaining symbols (shown as images in the original publication) denote the normal direction of the boundary line, the partial derivative with respect to t, and the norm operator, respectively.

[Equation 5] is a discrete approximation of [Equation 4] that performs the boundary enhancement function; the accuracy of the solution can be increased by iterative calculation.

[Equation 5]

Figure pat00010

In Equation 5, the first symbol denotes the convergence factor, Q(·) is the quantization symbol function, A(·) is the quantization magnitude function, and the remaining symbols (shown as images in the original publication) denote the first and second derivatives in the corresponding direction.

The quantization symbol function is defined by [Equation 6].

[Equation 6]

Figure pat00015

In Equation 6, T_a is the quantization response threshold, i.e., the threshold at which the output signal starts responding to the input signal. Because no output is produced for small input signals, a noise-resistant system can be implemented.

The quantization magnitude function can be defined by Equation 7, in which min denotes the minimum operator (the remaining symbol is shown as an image in the original publication).

[Equation 7]

Figure pat00017

The quantization magnitude function measures the magnitude of the increment or decrement and outputs its absolute value, while the quantization symbol function measures and outputs its direction (sign).
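A hedged sketch of the quantization symbol function Q(·) of Equation 6 and the quantization magnitude function A(·) of Equation 7: Q is taken as a sign function with a dead zone of width T_a, exactly as the text describes, while the clamp value max_step in A stands in for the min-operator bound that is shown only as an image.

```python
def quantization_symbol(x: float, t_a: float) -> int:
    """Q(x): output the direction (sign) of the increment, but produce no
    output for small inputs |x| < T_a, which makes the scheme noise resistant."""
    if abs(x) < t_a:
        return 0
    return 1 if x > 0 else -1

def quantization_magnitude(x: float, max_step: float) -> float:
    """A(x): output the absolute magnitude of the increment, limited through
    a min operator; max_step is an assumed bound, not the original parameter."""
    return min(abs(x), max_step)
```

In the iteration of Equation 5, each pixel update would then combine these two outputs, scaled by the convergence factor.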

The number of iterations n of the iteration equation is an important factor that determines the slope of the curve: the boundary enhancement performance depends on the number of iterations, and the slope becomes steeper as the number of iterations increases. The number of iterations is not the same for all pixels of a frame; it differs per pixel. In the present invention, the number of iterations is chosen to be proportional to the magnitude of the cross correlation. Table 1 shows the relationship between the number of iterations and the cross correlation.

[Table 1]

Figure pat00018
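Putting Equations 5 through 7 together, a heavily hedged sketch of the per-pixel iteration is given below. The increment computed from the gradients, the convergence factor beta, and the linear mapping from cross correlation to iteration count are placeholders; the actual update terms of Equation 5 and the mapping of Table 1 are shown only as images.

```python
import numpy as np

def boundary_enhance(depth: np.ndarray, cross_corr: np.ndarray,
                     beta: float, t_a: float, max_step: float,
                     max_iters: int = 8) -> np.ndarray:
    """Iteratively sharpen boundaries: each pixel is updated by a quantized
    step whose sign and magnitude follow Q(.) and A(.), and the per-pixel
    iteration count grows with the local cross correlation (cf. Table 1)."""
    out = depth.astype(np.float64).copy()
    # Assumed mapping: normalize the cross correlation into [0, max_iters].
    iters = np.ceil(max_iters * cross_corr / (cross_corr.max() + 1e-12)).astype(int)
    for n in range(max_iters):
        gy, gx = np.gradient(out)
        increment = gx + gy  # crude directional increment (placeholder)
        step = (np.where(np.abs(increment) < t_a, 0.0, np.sign(increment))
                * np.minimum(np.abs(increment), max_step))
        out = np.where(iters > n, out + beta * step, out)  # per-pixel iteration count
    return out
```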

FIG. 10 is a flowchart illustrating an operation of the noise removing unit according to an embodiment of the present invention.

The amount of noise mixed in the signal depends on the size of the signal.

Based on this, the noise removing unit according to an embodiment of the present invention may determine that more noise is mixed in when the signal is large, and accordingly increase the strength of noise removal.

The noise mixed in a signal has characteristics along the same direction as the signal.

Based on this, the noise removing unit according to an embodiment of the present invention adaptively adjusts the filter structure in the noise removal step (step 1001).

For example, the noise removing unit according to an embodiment of the present invention may adjust the filter structure according to the autocorrelation direction of the depth image, as shown in FIG. 9.

After the filter structure is determined, the actual noise removal is performed using a range filter or a bilateral filter (step 1002).

The calculation through the range filter is shown in [Equation 8].

[Equation 8]

Figure pat00019

In Equation 8, M(x) denotes the set of neighboring pixels, K denotes a normalization constant, and I(x) denotes the depth pixel at the current position x.

In other words, M(x) is the set of neighboring pixels centered on x and forms the filter structure, whose shape may be determined according to the direction of the autocorrelation.
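The range-filter operation of Equation 8 can be sketched as follows, assuming a Gaussian range kernel and a few illustrative directional neighborhoods M(x); the actual kernel and the per-direction window shapes are given only by the equation image and FIG. 9, so the offsets and the kernel choice below are assumptions.

```python
import numpy as np

# Illustrative directional neighborhoods M(x): (dy, dx) offsets around the center.
# DirIdx 0 uses a square window; the directional indices use elongated windows.
NEIGHBORHOODS = {
    0: [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)],  # square (no direction)
    1: [(0, dx) for dx in range(-2, 3)],                      # horizontally elongated
    5: [(dy, 0) for dy in range(-2, 3)],                      # vertically elongated
}

def range_filter(depth: np.ndarray, y: int, x: int,
                 dir_idx: int, sigma_r: float) -> float:
    """Equation 8 (sketch): weighted average over M(x) with weights that decay
    with the depth difference to the center pixel; K is the normalization."""
    center = float(depth[y, x])
    num, k = 0.0, 0.0
    for dy, dx in NEIGHBORHOODS.get(dir_idx, NEIGHBORHOODS[0]):
        ny, nx = y + dy, x + dx
        if 0 <= ny < depth.shape[0] and 0 <= nx < depth.shape[1]:
            w = np.exp(-((float(depth[ny, nx]) - center) ** 2) / (2.0 * sigma_r ** 2))
            num += w * float(depth[ny, nx])
            k += w
    return num / k if k > 0 else center
```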

The optimal parameter of the range filter or bilateral filter (denoted by the symbol shown as an image) is determined so as to minimize distortion.

One optimal parameter is generated per frame or picture.

Compared with generating the parameter for the entire video or for every block of a picture, generating one parameter per frame improves the quality of the depth image while reducing overhead.

According to an embodiment of the present invention, the noise removing unit calculates the distortion for each filter parameter within the possible range of filter parameters.

The noise removing unit according to an embodiment of the present invention determines in step 1003 whether the calculated distortion is the minimum distortion, and branches back to step 1001 if it is not. When the minimum is found, the parameter that yields the smallest distortion among the calculated distortions is determined as the optimal parameter.

The distortion is defined as the sum of squared differences (SSD) between the original depth image and the reconstructed depth image. Alternatively, the distortion can be defined as the SSD between the image synthesized from the original color image and the original depth image and the image synthesized from the compressed color image and the reconstructed depth image.
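The exhaustive parameter search of steps 1001 through 1003 can be sketched as below, using the SSD distortion just defined; the candidate parameter range and the apply_filter callback are placeholders for the range or bilateral filter applied with each candidate parameter.

```python
import numpy as np

def ssd(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of squared differences between two images."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return float(np.sum(d * d))

def find_optimal_parameter(original_depth: np.ndarray,
                           reconstructed_depth: np.ndarray,
                           apply_filter, candidate_params):
    """Evaluate each candidate filter parameter and keep the one that minimizes
    the distortion against the original depth image (one parameter per frame)."""
    best_param, best_distortion = None, float("inf")
    for p in candidate_params:
        filtered = apply_filter(reconstructed_depth, p)  # e.g. the range filter above
        d = ssd(original_depth, filtered)
        if d < best_distortion:
            best_param, best_distortion = p, d
    return best_param
```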

Another method of determining the optimal parameter of the range filter or bilateral filter is a modeling-based method, which also generates one optimal parameter per frame or picture. The modeling-based optimal parameter is determined according to Equation 9 below. In Equation 9, QP is the quantization parameter of the current frame (picture), th is a predetermined threshold, and the remaining symbols (shown as an image) denote predetermined parameters.

[Equation 9]

Figure pat00023
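A hedged sketch of the modeling-based alternative of Equation 9: the quantization parameter of the current frame is compared against thresholds and one of several predetermined filter parameters is selected. The threshold values and parameters below are illustrative stand-ins, not the values of the disclosure.

```python
def model_based_parameter(qp: int, th1: int = 30, th2: int = 40,
                          params=(2.0, 4.0, 8.0)) -> float:
    """Select a predetermined filter parameter from the frame QP.
    Thresholds and parameter values are assumptions for illustration."""
    if qp < th1:
        return params[0]   # low QP (high quality): weak filtering
    if qp < th2:
        return params[1]   # mid-range QP
    return params[2]       # high QP (coarse quantization): strong filtering
```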

The noise removing unit according to an embodiment of the present invention stores the reconstructed depth image in the picture buffer for use as a reference image and records the additional information in the bitstream through an entropy coding process. The bitstream is transmitted to the receiver through a channel and used for decoding.

Table 2 summarizes the additional information recorded in the bitstream proposed by the present invention. The additional information in Table 2 consists of new elements added to the syntax of the compression system.

[Table 2]

Figure pat00024

FIG. 11 is a flowchart illustrating a method of operating a depth image processing apparatus according to an exemplary embodiment.

In the method of operating the depth image processing apparatus according to an exemplary embodiment, a depth image and a color image are first received (step 1001).

Next, in the method of operating the depth image processing apparatus according to the exemplary embodiment of the present invention, the boundary is emphasized in the input depth image using the input depth image and the color image (step 1002), and noise may be removed from the depth image in which the boundary is emphasized (step 1003).

In one example, the method of operating the depth image processing apparatus according to an embodiment of the present invention calculates a first autocorrelation of the input depth image and a second autocorrelation of the input color image in order to emphasize the boundary in the input depth image.

In addition, in the method of operating the depth image processing apparatus according to the exemplary embodiment, the cross correlation may be calculated based on the first autocorrelation and the second autocorrelation.

In addition, the method of operating the depth image processing apparatus according to an embodiment of the present invention performs the iteration equation based on the calculated cross correlation, with a number of iterations proportional to its magnitude, to emphasize the boundary in the input depth image.

To remove noise from the depth image in which the boundary is emphasized, the method of operating the depth image processing apparatus according to an exemplary embodiment of the present invention determines the structure of the filter based on the direction of the first autocorrelation and forms a range filter based on the determined filter structure.

In addition, the method of operating the depth image processing apparatus according to the exemplary embodiment may remove noise from the depth image in which the boundary is emphasized by using the formed range filter.

As a result, according to the present invention, it is possible to improve the compression efficiency of an image by improving the reconstruction function of an encoder and a decoder of an image compression system.

The operating method of the depth image processing apparatus according to the exemplary embodiment of the present invention may be implemented in the form of program instructions that can be executed by various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and constructed for the present invention or may be known and available to those skilled in computer software. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM and DVD; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include not only machine code generated by a compiler but also high-level language code that can be executed by a computer using an interpreter. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the present invention, and vice versa.

As described above, the present invention has been described with reference to limited embodiments and drawings, but the present invention is not limited to the above embodiments, and those skilled in the art to which the present invention pertains can make various modifications and variations from these descriptions.

Therefore, the scope of the present invention should not be limited to the described embodiments, but should be determined by the appended claims and their equivalents.

100: depth image processing device
110: input unit
120: boundary emphasis unit
130: noise removing unit

Claims (12)

A depth image processing apparatus comprising:
an input unit for receiving a depth image and a color image;
a boundary emphasis unit for emphasizing a boundary in the input depth image using the input depth image and the color image; and
a noise removing unit for removing noise from the depth image in which the boundary is emphasized.
The depth image processing apparatus of claim 1, wherein the boundary emphasis unit comprises:
an autocorrelation calculator for calculating a first autocorrelation of the input depth image and a second autocorrelation of the input color image; and
a cross correlation calculator for calculating a cross correlation based on the first autocorrelation and the second autocorrelation.
The depth image processing apparatus of claim 2, wherein the autocorrelation calculator calculates at least one gradient in each direction for the input depth image and calculates the first autocorrelation as a sum of absolute values of each of the at least one gradient.
The depth image processing apparatus of claim 2, wherein the autocorrelation calculator calculates at least one gradient in each direction for the input color image and calculates the second autocorrelation based on a sum of absolute values of each of the calculated at least one gradient.
The depth image processing apparatus of claim 2, further comprising an iteration equation performer that performs an iteration equation based on the calculated cross correlation, with a number of iterations proportional to the magnitude of the calculated cross correlation.
The depth image processing apparatus of claim 2, wherein the noise removing unit determines a structure of a filter based on the direction of the first autocorrelation to form a range filter or a bilateral filter, and removes noise from the depth image in which the boundary is emphasized by using the formed range filter or bilateral filter.
A method of operating a depth image processing apparatus, the method comprising:
receiving a depth image and a color image;
emphasizing a boundary in the input depth image by using the input depth image and the color image; and
removing noise from the depth image in which the boundary is emphasized.
The method of claim 7, wherein the emphasizing of the boundary in the input depth image comprises:
calculating a first autocorrelation of the input depth image;
calculating a second autocorrelation of the input color image;
calculating a cross correlation based on the first autocorrelation and the second autocorrelation; and
performing an iteration equation based on the calculated cross correlation, with a number of iterations proportional to the magnitude of the calculated cross correlation.
The method of claim 7, wherein the removing of the noise from the depth image in which the boundary is emphasized comprises:
determining a structure of a filter based on the direction of the first autocorrelation;
forming a range filter or a bilateral filter based on the determined filter structure; and
removing noise from the depth image in which the boundary is emphasized by using the formed range filter or bilateral filter.
The method of claim 9, wherein the forming of the range filter or the bilateral filter based on the determined filter structure comprises determining an optimal parameter based on modeling to form the range filter or the bilateral filter.
The method of claim 10, wherein the determining of the optimal parameter based on modeling comprises comparing a quantization parameter (QP) of the current frame with a plurality of predetermined thresholds th1 and th2, and determining the optimal parameter by selecting a specific parameter from among predetermined parameters.
A computer-readable recording medium having recorded thereon a program for performing the method of any one of claims 7 to 11.
KR1020120012824A 2012-02-08 2012-02-08 Apparatus and method for processing depth image KR20130091500A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020120012824A KR20130091500A (en) 2012-02-08 2012-02-08 Apparatus and method for processing depth image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020120012824A KR20130091500A (en) 2012-02-08 2012-02-08 Apparatus and method for processing depth image

Publications (1)

Publication Number Publication Date
KR20130091500A true KR20130091500A (en) 2013-08-19

Family

ID=49216671

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020120012824A KR20130091500A (en) 2012-02-08 2012-02-08 Apparatus and method for processing depth image

Country Status (1)

Country Link
KR (1) KR20130091500A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101532686B1 (en) * 2013-12-30 2015-07-09 재단법인대구경북과학기술원 Device and method for processing image using discontinuity adaptive filter

Similar Documents

Publication Publication Date Title
KR102185954B1 (en) Apparatus and method for image coding and decoding
US11240496B2 (en) Low complexity mixed domain collaborative in-loop filter for lossy video coding
JP5535625B2 (en) Method and apparatus for adaptive reference filtering
US9241160B2 (en) Reference processing using advanced motion models for video coding
JP5312468B2 (en) Improved in-loop fidelity for video compression
CN112042199A (en) Adaptive interpolation filter
KR20120003147A (en) Depth map coding and decoding apparatus using loop-filter
TWI452907B (en) Optimized deblocking filters
CN114143538A (en) Inter-frame prediction method and related video processing device
RU2684193C1 (en) Device and method for motion compensation in video content
US20150365698A1 (en) Method and Apparatus for Prediction Value Derivation in Intra Coding
US20140071233A1 (en) Apparatus and method for processing image using correlation between views
KR20130091500A (en) Apparatus and method for processing depth image
Zhang et al. Artifact reduction of compressed video via three-dimensional adaptive estimation of transform coefficients
Lim et al. Region-based adaptive bilateral filter in depth map coding
Aflaki et al. Adaptive spatial resolution selection for stereoscopic video compression with MV-HEVC: a frequency based approach
KR20190109373A (en) Apparatus for image coding/decoding and the method thereof
Ma et al. Zero-synthesis view difference aware view synthesis optimization for HEVC based 3D video compression
Shen et al. Efficient depth coding in 3D video to minimize coding bitrate and complexity
KR102668077B1 (en) Apparatus and method for image coding and decoding
KR20200004348A (en) Method and apparatus for processing video signal through target region correction
Oh An adaptive quantization algorithm without side information for depth map coding
KR20130098121A (en) Device and method for encoding/decoding image using adaptive interpolation filters
KR20140128041A (en) Apparatus and method of improving quality of image
KR20130029572A (en) Post filter, loop filter, video data encoding/decoding apparatus and method thereof

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination