KR20140010856A - Image processing apparatus and method - Google Patents


Info

Publication number
KR20140010856A
Authority
KR
South Korea
Prior art keywords
depth
image
contrast
luminance value
value
Prior art date
Application number
KR1020120131505A
Other languages
Korean (ko)
Other versions
KR101978176B1 (en)
Inventor
홍지영
조양호
이호영
Original Assignee
삼성전자주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 삼성전자주식회사 filed Critical 삼성전자주식회사
Priority to CN201310292218.4A priority Critical patent/CN103546736B/en
Priority to US13/940,456 priority patent/US9661296B2/en
Publication of KR20140010856A publication Critical patent/KR20140010856A/en
Application granted granted Critical
Publication of KR101978176B1 publication Critical patent/KR101978176B1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/128 Adjusting depth or disparity

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

An image processing apparatus is provided. An object separation unit of the image processing apparatus may separate an object region and a background region from each other using at least one of a color image and a depth image associated with the color image. A contrast calculation unit of the image processing apparatus calculates a contrast between the object region and the background region. A depth adjustment unit adjusts the depth of the depth image using the contrast. [Reference numerals] (110) Object separation unit; (120) Contrast calculation unit; (130) Depth adjustment unit

Description

IMAGE PROCESSING APPARATUS AND METHOD

The following description relates to an image processing apparatus and method, and more particularly, to an image processing apparatus and method for adjusting the depth of a three-dimensional (3D) image.

A 3D display is highly realistic, but may suffer from deterioration of image quality or viewing fatigue. Therefore, there is a need for a technique of readjusting the depth to minimize viewing fatigue while minimizing image quality deterioration of the 3D display.

US Patent No. 7,557,824, "Method and apparatus for generating a stereoscopic image," one conventional method of readjusting depth, obtains a region of interest (ROI) using the depth information and viewing distance of the image, and proposes a depth mapping that readjusts the sense of depth in the near and far regions based on the obtained ROI.

According to one aspect, an image processing apparatus includes an object separator configured to separate an object region and a background region using at least one of a color image and a depth image associated with the color image; a contrast calculator configured to calculate a contrast between the object region and the background region; and a depth adjuster configured to adjust the depth of the depth image using the contrast.

According to an embodiment, the object separator may separate the object region from the background region to generate a mask and store the mask in a buffer.

According to an embodiment, the object separator may separate the object region from the background region using a visual attention map based on a human visual cognitive characteristic model.

According to another embodiment, the object separator may separate the object region from the background region by using a region of interest.

According to another embodiment, the object separator may segment depth values into levels using the depth image and remove a horizontal plane using a horizontal plane equation, thereby separating the object region from the background region.

The contrast calculator may include a color value converter configured to convert color values of at least one pixel of the input color image into luminance values reflecting display characteristics associated with the image processing apparatus.

According to an embodiment, the color value converter may convert a color value of the at least one pixel into a luminance value reflecting the display characteristic using a Piecewise Linear interpolation assuming Constant Chromaticity (PLCC) model.

In this case, the contrast calculator may calculate a first luminance value representing the luminance value of the object region and a second luminance value representing the luminance value of the background region using the luminance values, and may calculate the contrast based on the difference between the first luminance value and the second luminance value.

According to an embodiment, the contrast calculator may calculate the contrast by applying the first luminance value and the second luminance value to a Michelson contrast calculation method.

The depth adjuster may include: a depth normalizer configured to normalize the depth image by using a maximum depth value that can be represented by a display associated with the image processing apparatus; A scale factor determiner that determines a scale factor using the contrast; And a depth rescaling unit to rescale the depth value of the depth image using the scale factor.

In this case, the scale factor may be determined to be smaller as the contrast becomes larger, using a database constructed by theoretical and/or experimental study of the just noticeable difference (JND) in depth value.

The scale factor determination unit may determine a predetermined scale factor corresponding to a contrast level including the contrast among a plurality of predetermined contrast levels.

The image processing apparatus may further include an image renderer configured to render a 3D image corresponding to at least one viewpoint based on the color image and a result of the depth adjuster adjusting the depth of the depth image.

In this case, the image renderer may render the 3D image by using a depth image based rendering (DIBR) technique.

According to another aspect, an image processing method includes separating an object region and a background region using at least one of a color image and a depth image associated with the color image; calculating a contrast between the object region and the background region; and adjusting the depth of the depth image using the contrast.

The calculating of the contrast may include converting a color value of at least one pixel of the input color image into a luminance value reflecting a display characteristic associated with the image processing apparatus.

In this case, the converting of the color values may include converting the color values of the at least one pixel into luminance values reflecting the display characteristics using a Piecewise Linear interpolation assuming Constant Chromaticity (PLCC) model. The calculating may include calculating a first luminance value representing a luminance value of the object region and a second luminance value representing a luminance value of the background region using the luminance values, and calculating the contrast based on the difference between the first luminance value and the second luminance value.

The adjusting of the depth may include: normalizing the depth image using a maximum depth value that can be represented by a display associated with the image processing apparatus; Determining a scale factor using the contrast; And rescaling a depth value of the depth image by using the scale factor.

In this case, the scale factor may be determined to be smaller as the contrast becomes larger, using a database constructed by theoretical and/or experimental study of the just noticeable difference (JND) in depth value.

FIG. 1 is a block diagram of an image processing apparatus according to an embodiment.
FIG. 2 is a conceptual diagram referred to for describing an image input to an image processing apparatus, according to an exemplary embodiment.
FIG. 3 is a conceptual diagram illustrating a result of separating an object region and a background region, according to an exemplary embodiment.
FIG. 4 is a conceptual diagram illustrating a process of separating an object region and a background region and storing them in a buffer, according to an embodiment.
FIG. 5 is a block diagram illustrating a process of performing color value conversion by a color value converter, according to an embodiment.
FIG. 6 is a detailed block diagram of a depth adjuster according to an exemplary embodiment.
FIG. 7 is a block diagram illustrating an image processing apparatus according to another exemplary embodiment.
FIG. 8 is a flowchart illustrating an image processing method, according to an exemplary embodiment.

In the following, some embodiments will be described in detail with reference to the accompanying drawings. However, the present disclosure is not limited to or by these embodiments. Like reference symbols in the drawings denote like elements.

FIG. 1 is a block diagram of an image processing apparatus 100 according to an embodiment.

According to an exemplary embodiment, the object separator 110 of the image processing apparatus 100 may separate the object region and the background region by using at least one of an input color image and a depth image.

Various embodiments may exist in the method of separating the object region and the background region by the object separator 110. For example, a background and an object may be distinguished using a visual attention map based on a human visual perception characteristic. In addition, regions may be separated using a region of interest (ROI), or regions may be separated using only depth information. These embodiments will be described later in more detail with reference to FIGS. 2 and 3.

The contrast calculator 120 calculates a representative luminance value of each of the separated object area and the background area, and calculates a contrast that is a difference between the two values. Various embodiments of the contrast calculation method will be described later in more detail with reference to FIGS. 5 and 6.

The depth adjuster 130 readjusts the depth of the input image by using the calculated contrast. This process may be referred to as re-scaling below.

According to an embodiment, the depth adjuster 130 performs the depth adjustment using the just noticeable difference (JND), the minimum depth difference that can be perceived at a given contrast, which is one of the human visual perception characteristics.

Three-dimensional perception, in terms of human visual cognitive characteristics, may be understood as a mechanism that reproduces reality by combining the principle of binocular disparity with empirical perceptual factors.

In embodiments, one of these human cognitive traits that may be exploited is that human vision is sensitive to brightness. In such embodiments, contrast masking is used to calculate the contrast of the input image, and the minimum depth difference found theoretically and/or experimentally for each contrast is stored in a database. When the contrast between the object region and the background region of the input image is calculated, the depth is adjusted using this database.

Various embodiments of the depth adjustment process will be described later in more detail with reference to FIGS. 5 and 6.

According to the image processing apparatus 100 of these embodiments, deterioration of image quality due to holes or the like, which may occur when adjusting the depth of a stereoscopic image and/or converting 2D content to 3D, may be prevented, and viewing fatigue may be reduced.

FIG. 2 is a conceptual diagram referred to for describing an image input to an image processing apparatus, according to an exemplary embodiment.

The input color image 210 may include an object area 211 and a background area 212. For example, the input color image 210 may include a color value based on an RGB (Red, Green, Blue) model.

The depth image 220 may be input together with the input color image 210. However, this is only an example, and a disparity map (not shown) that may replace the depth image 220 may be input. In this case, the depth adjustment described later may correspond to disparity adjustment, which is obvious to those skilled in the art.

Furthermore, according to another exemplary embodiment, another color image (not shown) having a viewpoint different from that of the input color image 210 may be input instead of the depth image 220 or the disparity map. This may be understood as the case where a stereoscopic image is input. In this case, the depth image 220 or the disparity map can easily be obtained from the stereoscopic image in a known manner.

Therefore, hereinafter, an embodiment in which the depth image 220 is input and its depth is adjusted is described as an example. However, the present disclosure is not limited thereto, and other inputs, such as a disparity map or a stereoscopic image, may be used.

The input depth image 220 includes depth values matching the input color image 210. These depth values are the information used to generate a 3D image. When the depth is excessive, viewing fatigue may be caused; when the depth is too small, the realism of the 3D image may be reduced.

Therefore, a process of readjusting the depth may be required. In the past, the depth was often adjusted without considering human visual perception.

According to an embodiment, the image processing apparatus 100 may analyze the contrast of the input color image and adaptively readjust the depth of the depth image 220 based on the human visual perception characteristic.

According to an exemplary embodiment, the object separator 110 separates the object region 221 included in the depth image 220 from the background region. This process will be described later in more detail with reference to FIG. 3.

FIG. 3 is a conceptual diagram illustrating a result 300 of separating an object region and a background region, according to an exemplary embodiment.

The object region 310 may be understood as a foreground region in some cases. There may be a plurality of such object regions 310 depending on the contents of the image. In this case, according to an embodiment, a representative object region may be selected in consideration of the user's viewpoint and the movement, size, etc. of the plurality of object regions. According to another exemplary embodiment, the image processing described below may be performed in consideration of the plurality of object regions.

There are various embodiments in which the object separator 110 separates the object region 310 and the background region 320.

For example, a background and an object may be distinguished by using a visual attention map (not shown) generated based on a human visual perception characteristic.

According to another embodiment, region segmentation using a region of interest (ROI), which may be predetermined or determined in real time, may be used.

Furthermore, according to another exemplary embodiment, pixels having a depth estimated as the depth of the object may be determined using depth information of the depth image 220.

In this embodiment, pixels whose depth is estimated as the depth of an object are separated from the depth image 220, and a plane equation representing the horizontal plane is obtained and used to remove background similar to the object, so that the object may be accurately separated from the background.

In addition, various pre-processing and/or post-processing may be added to separate the object region 310 from the background region 320. Since such processes are well known, a detailed description is omitted.

The result of separating the object region 310 and the background region 320 by the object separator 110 may be stored in the form of a mask map of the same scale as the result 300 illustrated in FIG. 3. This is described with reference to FIG. 4.

FIG. 4 is a conceptual diagram illustrating a process in which an object region and a background region are separated and stored in the buffer 400, according to an embodiment.

For example, the buffer 400 may store a digital value '1' for the object region 310 and a digital value '0' for the background region 320. Of course, the digital levels of the binary data stored in the buffer may be set opposite to each other.

According to an embodiment, one bit of binary data stored in the buffer 400 may represent one pixel of the input color image 210 or the input depth image 220.

However, this is only an example; to speed up image processing, or for other purposes, one bit of binary data stored in the buffer 400 may represent a plurality of pixels of the input color image 210 or the input depth image 220. For example, pixels of the image may be grouped in units of blocks, and whether each block corresponds to the object region 310 or the background region 320 may be stored in the buffer 400, as sketched below.
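As a concrete illustration, the following Python sketch shows one way such a mask buffer could be built. The depth-threshold separation rule, the majority vote per block, and all function names are assumptions made for the sketch, not the claimed method.

import numpy as np

def build_object_mask(depth, threshold):
    # Pixel-level mask: '1' for object pixels, '0' for background pixels.
    # Assumes larger depth values mean nearer pixels; embodiments may instead
    # use a visual attention map or a region of interest for this separation.
    return (depth > threshold).astype(np.uint8)

def block_mask(mask, block=8):
    # Block-level mask: one bit per block-by-block group of pixels, set to '1'
    # when the majority of the group's pixels belong to the object region.
    h, w = mask.shape
    hb, wb = h // block, w // block
    trimmed = mask[:hb * block, :wb * block]
    votes = trimmed.reshape(hb, block, wb, block).mean(axis=(1, 3))
    return (votes >= 0.5).astype(np.uint8)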

The result of separating the object region 310 and the background region 320 may be used to determine a representative depth value of the object region. In addition, the result may be basic information for calculating a contrast between the object region and the background region.

FIG. 5 is a block diagram illustrating a process of performing color value conversion by a color value converter, according to an embodiment.

The color value converter 510 converts the color values of each pixel, classified into the object region 310 and the background region 320, through the following equation, so that X, Y, and Z values may be derived.

[Equation 1] (equation image not reproduced: conversion of R, G, B color values into X, Y, and Z values)

According to an embodiment, in Equation 1, a Piecewise Linear interpolation assuming Constant Chromaticity (PLCC) model may be used for the X, Y, and Z conversion.

The PLCC model may convert input R, G, and B values into X, Y, and Z values using previously measured values that reflect the display characteristics associated with the image processing apparatus 100.

At least some of these values may be obtained by measuring the display characteristics in advance. For example, X, Y, and Z values may be measured at sampled digital levels between 0 and 255 for each of the R, G, and B channels, together with the black level.

The Y value calculated by this process corresponds to the luminance value. When the luminance value representing the object region 310 is compared with the luminance value representing the background region 320, the contrast between the object region 310 and the background region 320 may be calculated. The luminance value representing each region may be, for example, the average of the region's luminance values, or, in another example, may be determined in consideration of the distribution frequency of the luminance values. More details will be described later with reference to FIG. 6.
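As an illustration, the following sketch converts the R, G, B values of one pixel into a luminance value Y by per-channel piecewise linear interpolation over previously measured display data, and then derives a representative luminance per region. The measurement arrays, the black-offset correction term, and the use of a simple mean are assumptions made for the sketch.

import numpy as np

def plcc_luminance(rgb, levels, y_r, y_g, y_b, y_black):
    # levels: digital levels at which the display was measured, e.g. 0, 32, ..., 255
    # y_r, y_g, y_b: measured Y values of the red, green, blue channels at those levels
    # y_black: measured Y value of the display's black level
    r, g, b = rgb
    y = (np.interp(r, levels, y_r)
         + np.interp(g, levels, y_g)
         + np.interp(b, levels, y_b)
         - 2.0 * y_black)  # additive channel mixing with an assumed black-offset correction
    return max(float(y), 0.0)

def representative_luminance(y_map, region_mask):
    # Representative luminance of a region; the mean is used here, though the
    # text also allows weighting by the distribution frequency of Y values.
    return float(y_map[region_mask == 1].mean())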

FIG. 6 is a detailed block diagram of the depth adjuster 130 according to an embodiment.

According to an embodiment, the depth adjuster 130 may include a depth normalizer 610.

The depth normalizer 610 normalizes the depth of the input image based on the maximum depth and/or disparity that the display associated with the image processing apparatus 100 can represent.

Since the maximum depth that each display can represent may differ depending on hardware characteristics or calibration level, such a depth normalization process may be performed.

When the depth normalization is performed, the scale factor determiner 620 determines a scale factor for the normalized depth, using the contrast between the object region and the background region calculated by the contrast calculator 120.

According to an embodiment, the contrast calculator 120 calculates a contrast using the representative luminance values of each of the object region and the background region described with reference to FIG. 5.

Representative luminance values of each of the object region and the background region are determined using 'Y' values corresponding to luminance or brightness among the X, Y, and Z values derived as described with reference to FIG. 5.

As described above, according to the exemplary embodiment, the representative luminance value may be obtained as an average of Y values of pixels belonging to each region, or may be calculated using a Y value distribution frequency of the pixels.

According to an embodiment, the contrast calculator 120 may calculate the contrast using a Michelson Contrast calculation method, which is a well-known contrast calculation method.

[Equation 2]

C_Michelson = (Ymax - Ymin) / (Ymax + Ymin)

Referring to Equation 2, the Michelson contrast C_Michelson of the input image may be calculated using Ymax and Ymin, where Ymax is the larger of the representative Y value of the object region and the representative Y value of the background region, and Ymin is the smaller.

However, this calculation method is only one embodiment, and other methods capable of calculating the contrast are not excluded.
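A direct transcription of Equation 2, given the two representative luminance values (the function name is assumed):

def michelson_contrast(y_obj, y_bg):
    # Michelson contrast of the representative luminance values of the
    # object region and the background region (Equation 2).
    y_max, y_min = max(y_obj, y_bg), min(y_obj, y_bg)
    return 0.0 if y_max + y_min == 0 else (y_max - y_min) / (y_max + y_min)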

The scale factor determination unit 620 determines the scale factor using the calculated contrast.

In the scale factor determination process, human visual perception characteristics may be reflected according to embodiments.

In psychophysics, the just noticeable difference (JND) means the minimum difference in stimulus that a human or animal can distinguish.

The JND was first studied by Weber, and numerous experiments were conducted measuring the brightness value L and delta L, the JND at that brightness value.

[Equation 3]

ΔL / L = k

This means that the ratio of the JND ΔL to any brightness value L is a constant k. For example, if k = 0.02, a brightness change of about 2 is needed to be noticeable at L = 100, and about 4 at L = 200. Later researchers have suggested that this constant k actually changes with the brightness value L, but Equation 3 may still be considered valid unless a high level of precision is required.

Theoretical and/or experimental studies on the correlation between brightness and depth show that the larger the contrast, that is, the difference between the brightness values of the object region and the background region, the smaller the JND of the depth value. In other words, the greater the contrast between the object region and the background region, the more sensitive a human is to a change in depth value.

According to an exemplary embodiment, based on this finding, the scale factor determiner 620 determines a smaller scale factor for depth adjustment as the contrast between the object region and the background region increases.

A mapping table and/or calculation data for determining the value of the scale factor according to the calculated contrast may be stored in the database 601. In some embodiments, the database 601 may be stored in storage and/or memory included in the image processing apparatus 100.

Meanwhile, according to one embodiment, the calculated contrast may be determined to fall into one of several predetermined contrast levels.

For example, the scale factor determiner 620 may divide the contrast levels into several predetermined groups, such as high contrast, middle contrast, low contrast, and negligible contrast, and predetermine the scale factor corresponding to each group.

In this case, the scale factor determiner 620 determines which of the groups the contrast calculated by the contrast calculator 120 belongs to, and the scale factor corresponding to that group may be determined as the final scale factor for adjusting the depth of the input image, as sketched below.
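The following sketch illustrates such a group-based lookup. The group boundaries and scale factor values are invented placeholders; in practice they would come from the JND database described above.

CONTRAST_GROUPS = [
    # (upper bound of contrast group, predetermined scale factor) -- placeholder values
    (0.05, 1.00),  # negligible contrast: leave the depth unchanged
    (0.30, 0.90),  # low contrast
    (0.60, 0.75),  # middle contrast
    (1.00, 0.60),  # high contrast: depth changes are most noticeable, so scale down most
]

def determine_scale_factor(contrast):
    # Larger contrast -> smaller scale factor, reflecting the smaller depth JND.
    for upper_bound, factor in CONTRAST_GROUPS:
        if contrast <= upper_bound:
            return factor
    return CONTRAST_GROUPS[-1][1]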

Then, the depth rescaling unit 630 readjusts the depth of the input image according to the scale factor thus determined.

According to an embodiment, the depth rescaling unit 630 may adjust the depth values so that the maximum depth value of the input image conforms to the determined scale factor.

According to an embodiment, since the depth value of the input depth image has been normalized, the depth adjustment may be performed by multiplying the normalized depth value by the scale factor, as in the sketch below.
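A minimal sketch of the normalize-and-rescale step, assuming the depth image and the display's maximum representable depth value are given (function name assumed):

import numpy as np

def rescale_depth(depth, display_max_depth, scale_factor):
    # Normalize by the maximum depth the display can represent (depth
    # normalizer 610), rescale by the contrast-derived scale factor, and
    # map back to depth units for rendering.
    normalized = depth.astype(np.float64) / display_max_depth
    return normalized * scale_factor * display_max_depth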

When the image is reproduced according to the adjusted depth values, the contrast of the input image and human visual perception characteristics are reflected, so that an image with less viewing fatigue can be provided while minimizing image quality deterioration and maintaining the perceived depth.

The reproduction of the image will be described later in more detail with reference to FIG. 7.

FIG. 7 is a block diagram illustrating an image processing apparatus 700 according to another exemplary embodiment.

According to an exemplary embodiment, the image processing apparatus 700 may further include an image renderer 710 in addition to the object separator 110, the contrast calculator 120, and the depth adjuster 130 described with reference to FIGS. 1 to 6.

According to an embodiment, the image renderer 710 re-renders the 3D image in which the depth is adjusted using the depth values adjusted by the depth adjuster 130.

For example, the image renderer 710 may render an image at an arbitrary viewpoint from a reference image and a depth image containing distance information corresponding to each pixel of the reference image, using a depth image based rendering (DIBR) method.

The process of rendering an image at any point of time using the DIBR method may be understood with reference to the following equation.

[Equation 4]

u_v = u_r + β · d

In Equation 4, u_v denotes a pixel position in the arbitrary virtual view to be obtained, u_r denotes the corresponding position in the input reference image, d denotes the depth, and β denotes an arbitrary number that can be changed to set the desired view.

For example, after reprojecting the image into three-dimensional space using the depth information of one reference image and its depth image, the left image moves objects in the image to the right, relative to the original color image, according to their depth, and the right image moves objects in the image to the left. The amount of movement, that is, the disparity, is determined in proportion to the depth value.
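The following forward-warping sketch follows Equation 4, shifting each pixel horizontally by β·d; the sign of β selects the left or right view. Occlusion ordering and the hole filling that DIBR implementations require are omitted, and the function name is an assumption.

import numpy as np

def dibr_render(color, depth, beta):
    # Warp the reference color image to a virtual view: u_v = u_r + beta * d.
    # Nearer (larger d) pixels move farther, producing disparity in
    # proportion to depth; unfilled holes are left as zeros.
    h, w = depth.shape
    out = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            xv = x + int(round(beta * depth[y, x]))
            if 0 <= xv < w:
                out[y, xv] = color[y, x]
    return out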

FIG. 8 is a flowchart illustrating an image processing method, according to an exemplary embodiment.

In operation 810, an input color image and an input depth image are received. As described above, in various modified embodiments the input depth image may be replaced by, for example, another color image input for a stereoscopic configuration or a disparity map input.

In operation 820, the object separator 110 of the image processing apparatus 100 may separate the object region and the background region by using at least one of an input color image and a depth image.

As described above, various embodiments may exist in the method of separating the object region and the background region by the object separator 110, and the embodiments are the same as described above with reference to FIGS. 1 to 3.

Then, in operation 831, the color value converter 510 of FIG. 5 may convert the RGB color values of the input image into X, Y, and Z values reflecting display characteristics, as described with reference to FIG. 5.

In operation 832, the contrast calculator 120 calculates a representative luminance value of each of the separated object region and the background region, and calculates a contrast that is a difference between the two values. Various embodiments of the contrast calculation method have been described above with reference to FIGS. 1, 5, 6, and the like.

In operation 833, the scale factor determiner 620 of the depth adjuster 130 may determine a scale factor corresponding to the calculated contrast. Various embodiments thereof have been described above with reference to FIGS. 5 and 6.

The depth adjuster 130 may calculate a representative depth value of the object region in operation 840. The representative depth value may be determined as an average of the depths included in the object region, or may be determined in consideration of the frequency of the depth values. This process is also as described above.

In operation 842, the depth normalizer 610 of the depth adjuster 130 normalizes the depth value by using the maximum depth value that can be expressed by the display, and the details are as described above with reference to FIG. 6.

When the scale factor is determined and the depth value is normalized, the depth rescaling unit 630 may rescale the depth of the input depth image (850).

According to an embodiment, in operation 860, the image renderer 710 may reproduce the 3D image using the adjusted depth together with the reference image. In this process, it is also possible to render a 3D image at an arbitrary viewpoint, as described above with reference to FIG. 7. A combined sketch of the whole flow follows.
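Putting the operations together, a hypothetical end-to-end pipeline mirroring operations 810 through 860, composed from the sketches above (all helper names were introduced in this document for illustration, and the mean-depth threshold is an added assumption):

import numpy as np

def adjust_and_render(color, depth, display_max_depth,
                      levels, y_r, y_g, y_b, y_black, beta):
    mask = build_object_mask(depth, threshold=float(depth.mean()))     # 820
    y_map = np.apply_along_axis(                                       # 831
        lambda px: plcc_luminance(px, levels, y_r, y_g, y_b, y_black),
        2, color)
    y_obj = representative_luminance(y_map, mask)                      # 832
    y_bg = representative_luminance(y_map, 1 - mask)
    contrast = michelson_contrast(y_obj, y_bg)
    factor = determine_scale_factor(contrast)                          # 833
    new_depth = rescale_depth(depth, display_max_depth, factor)        # 842, 850
    return dibr_render(color, new_depth, beta)                         # 860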

According to these embodiments, since the depth is adjusted adaptively using the just noticeable difference (JND), the minimum perceivable depth difference among the human visual perception characteristics, viewing fatigue can be reduced without image quality deterioration or unnecessary adjustment of the sense of depth.

The apparatus described above may be implemented as a hardware component, a software component, and/or a combination of hardware and software components. For example, the apparatus and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. The processing device may execute an operating system (OS) and one or more software applications running on the operating system. The processing device may also access, store, manipulate, process, and generate data in response to execution of the software. For ease of understanding, the processing device may be described as being used singly, but those skilled in the art will recognize that the processing device may include a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device may include a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.

The software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired, or may command the processing device independently or collectively. The software and/or data may be embodied, permanently or temporarily, in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or transmitted signal wave, to be interpreted by the processing device or to provide instructions or data to it. The software may be distributed over networked computer systems and stored or executed in a distributed manner. The software and data may be stored on one or more computer-readable recording media.

The method according to an embodiment may be implemented in the form of program instructions that can be executed by various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and configured for the embodiments, or may be known and available to those skilled in computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine language code produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.

While the embodiments have been described with reference to a limited number of drawings, those skilled in the art can make various modifications and variations from the above description. For example, appropriate results may be achieved even if the described techniques are performed in an order different from the described method, and/or components of the described systems, structures, devices, circuits, and the like are combined or coupled in a form different from the described method, or are replaced or substituted by other components or equivalents.

 Therefore, other implementations, other embodiments, and equivalents to the claims are also within the scope of the following claims.

Claims (20)

An image processing apparatus comprising:
an object separator configured to separate an object region and a background region using at least one of a color image and a depth image associated with the color image;
a contrast calculator configured to calculate a contrast between the object region and the background region; and
a depth adjuster configured to adjust a depth of the depth image using the contrast.
The apparatus of claim 1,
wherein the object separator is configured to separate the object region and the background region to generate a mask, and to store the mask in a buffer.
The apparatus of claim 1,
The object separator is further configured to separate the object region and the background region using a visual attention map based on a human visual cognitive characteristic model.
The apparatus of claim 1,
And the object separator is configured to separate the object region and the background region by using a region of interest.
The apparatus of claim 1,
wherein the object separator is configured to segment depth values into levels using the depth image, and to remove a horizontal plane using a horizontal plane equation, to separate the object region from the background region.
The apparatus of claim 1,
And the contrast calculator comprises a color value converter configured to convert color values of at least one pixel of the input color image into luminance values reflecting display characteristics associated with the image processing apparatus.
The apparatus of claim 6,
wherein the color value converter is configured to convert a color value of the at least one pixel into a luminance value reflecting the display characteristic using a Piecewise Linear interpolation assuming Constant Chromaticity (PLCC) model.
The apparatus of claim 7,
wherein the contrast calculator is configured to calculate a first luminance value representing a luminance value of the object region and a second luminance value representing a luminance value of the background region using the luminance values, and to calculate the contrast based on a difference between the first luminance value and the second luminance value.
The apparatus of claim 8,
And the contrast calculator is configured to calculate the contrast by applying the first luminance value and the second luminance value to a Michelson contrast calculation method.
The apparatus of claim 1,
wherein the depth adjuster comprises:
a depth normalizer configured to normalize the depth image using a maximum depth value that can be represented by a display associated with the image processing apparatus;
a scale factor determiner configured to determine a scale factor using the contrast; and
a depth rescaling unit configured to rescale a depth value of the depth image using the scale factor.
The apparatus of claim 10,
wherein the scale factor is determined to be smaller as the contrast is larger, using a database constructed by experimentation on the just noticeable difference in depth value.
The apparatus of claim 10,
The scale factor determiner is configured to determine a predetermined scale factor corresponding to a contrast level including the contrast among a plurality of predetermined contrast levels.
The apparatus of claim 1,
An image renderer that renders a 3D image corresponding to at least one viewpoint using the color image and a result of the depth adjuster adjusting the depth of the depth image,
Further comprising:
The apparatus of claim 13,
And the image renderer renders the 3D image by using a depth image based rendering (DIBR) technique.
An image processing method comprising:
separating an object region and a background region using at least one of a color image and a depth image associated with the color image;
calculating a contrast between the object region and the background region; and
adjusting a depth of the depth image using the contrast.
The method of claim 15,
The calculating of the contrast may include converting a color value of at least one pixel of the input color image into a luminance value reflecting display characteristics associated with the image processing apparatus.
The method of claim 16,
wherein the converting of the color values comprises converting color values of the at least one pixel into luminance values reflecting the display characteristics using a Piecewise Linear interpolation assuming Constant Chromaticity (PLCC) model, and
the calculating of the contrast comprises calculating a first luminance value representing a luminance value of the object region and a second luminance value representing a luminance value of the background region using the luminance values, and calculating the contrast based on a difference between the first luminance value and the second luminance value.
The method of claim 15,
wherein the adjusting of the depth comprises:
normalizing the depth image using a maximum depth value that can be represented by a display associated with the image processing apparatus;
determining a scale factor using the contrast; and
rescaling a depth value of the depth image using the scale factor.
The method of claim 18,
wherein the scale factor is determined to be smaller as the contrast is larger, using a database constructed by experimentation on the just noticeable difference in depth value.
A computer-readable medium comprising a program for performing the image processing method according to any one of claims 15 to 19.
KR1020120131505A 2012-07-12 2012-11-20 Image processing apparatus and method KR101978176B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201310292218.4A CN103546736B (en) 2012-07-12 2013-07-12 Image processing equipment and method
US13/940,456 US9661296B2 (en) 2012-07-12 2013-07-12 Image processing apparatus and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261670830P 2012-07-12 2012-07-12
US61/670,830 2012-07-12

Publications (2)

Publication Number Publication Date
KR20140010856A 2014-01-27
KR101978176B1 2019-08-29

Family

ID=50143366

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020120131505A KR101978176B1 (en) 2012-07-12 2012-11-20 Image processing apparatus and method

Country Status (1)

Country Link
KR (1) KR101978176B1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180066479A (en) * 2016-12-09 2018-06-19 한국전자통신연구원 Automatic object separation method and apparatus using plenoptic refocus

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080015078A (en) * 2005-06-17 2008-02-18 마이크로소프트 코포레이션 Image segmentation
JP2008033897A (en) * 2006-06-29 2008-02-14 Matsushita Electric Ind Co Ltd Image processor, image processing method, program, storage medium, and integrated circuit
KR20110051416A (en) * 2009-11-10 2011-05-18 삼성전자주식회사 Image processing apparatus and method
KR20110059531A (en) * 2009-11-27 2011-06-02 소니 주식회사 Image processing apparatus, image processing method and program
KR20110094957A (en) * 2010-02-18 2011-08-24 중앙대학교 산학협력단 Apparatus and method for object segmentation from range image
JP2011223566A (en) * 2010-04-12 2011-11-04 Samsung Electronics Co Ltd Image converting device and three-dimensional image display device including the same


Also Published As

Publication number Publication date
KR101978176B1 (en) 2019-08-29

Similar Documents

Publication Publication Date Title
US9661296B2 (en) Image processing apparatus and method
US9445075B2 (en) Image processing apparatus and method to adjust disparity information of an image using a visual attention map of the image
CA2727218C (en) Methods and systems for reducing or eliminating perceived ghosting in displayed stereoscopic images
US9171372B2 (en) Depth estimation based on global motion
US8638329B2 (en) Auto-stereoscopic interpolation
US9123115B2 (en) Depth estimation based on global motion and optical flow
US9137512B2 (en) Method and apparatus for estimating depth, and method and apparatus for converting 2D video to 3D video
US8977039B2 (en) Pulling keys from color segmented images
JP2015156607A (en) Image processing method, image processing apparatus, and electronic device
Jung et al. Depth sensation enhancement using the just noticeable depth difference
US10210654B2 (en) Stereo 3D navigation apparatus and saliency-guided camera parameter control method thereof
US11908051B2 (en) Image processing system and method for generating image content
TW201622418A (en) Processing of disparity of a three dimensional image
KR20180064028A (en) Method and apparatus of image processing
KR101978176B1 (en) Image processing apparatus and method
KR101849696B1 (en) Method and apparatus for obtaining informaiton of lighting and material in image modeling system
KR20060114708A (en) Method and scaling unit for scaling a three-dimensional model
US8482574B2 (en) System, method, and computer program product for calculating statistics associated with a surface to be rendered utilizing a graphics processor
KR100914312B1 (en) Method and system of generating saliency map using graphics hardware and programmable shader and recording medium therewith
US20140198176A1 (en) Systems and methods for generating a depth map and converting two-dimensional data to stereoscopic data
US9137519B1 (en) Generation of a stereo video from a mono video

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant