WO2010070732A1 - Image processing device, display device, image processing method, program therefor, and recording medium containing the program - Google Patents

Image processing device, display device, image processing method, program therefor, and recording medium containing the program

Info

Publication number
WO2010070732A1
WO2010070732A1 (PCT/JP2008/072846)
Authority
WO
WIPO (PCT)
Prior art keywords
unit
processing
image
focus
image processing
Prior art date
Application number
PCT/JP2008/072846
Other languages
French (fr)
Japanese (ja)
Inventor
賢司 奥道
Original Assignee
Pioneer Corporation (パイオニア株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pioneer Corporation
Priority to PCT/JP2008/072846 priority Critical patent/WO2010070732A1/en
Publication of WO2010070732A1 publication Critical patent/WO2010070732A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/142Edging; Contouring
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4318Generation of visual interfaces for content selection or interaction; Content or additional data rendering by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/44Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/57Control of contrast or brightness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20004Adaptive image processing
    • G06T2207/20012Locally adaptive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation

Definitions

  • the present invention relates to an image processing device, a display device, an image processing method, a program thereof, and a recording medium on which the program is recorded.
  • Conventionally, there have been known configurations in which contour enhancement processing is performed on a subject in an image (see, for example, Patent Document 1 and Patent Document 2).
  • In Patent Document 1, the level of the peak point of the video signal is calculated, and a correction value is calculated from this level with the rated video signal level taken as 100%. The level of the contour emphasis signal is then corrected according to this correction value.
  • In Patent Document 2, in scenes where the brightness and contrast of the subject are high, the center frequency of the aperture signal is raised so that a fine aperture signal makes fine detail easier to see. In scenes where the brightness and contrast of the subject are low, the center frequency of the aperture signal is lowered, and a thick, clear aperture signal yields a sharp image with little noise.
  • Patent Document 1: JP 2004-193769 A; Patent Document 2: JP H6-30303 A
  • An object of the present invention is to provide an image processing device, a display device, an image processing method, a program thereof, and a recording medium on which the program is recorded, which can effectively enhance the sense of depth of the image.
  • An image processing apparatus according to the present invention includes a focus state recognition unit that recognizes a focus area in which the subject is in focus in an input image and an unfocus area in which the subject is not in focus, and a depth processing unit that performs contour enhancement processing on the recognized focus area and generates an output image in which blurring processing has been performed on the unfocus area.
  • a display device includes the above-described image processing device and a display unit that displays an output image generated by the image processing device.
  • The image processing method of the present invention is an image processing method in which a calculation unit generates an output image by performing predetermined processing on an input image, the calculation unit executing a focus state recognition step of recognizing a focus area in which the subject is in focus in the input image and an unfocus area in which the subject is not in focus, and a depth processing step of generating the output image by performing contour enhancement processing on the recognized focus area and blurring processing on the unfocus area.
  • An image processing program according to the present invention is characterized by causing an arithmetic means to execute the above-described image processing method.
  • The image processing program of the present invention is characterized in that it causes the calculation means to function as the above-described image processing apparatus.
  • the recording medium on which the image processing program of the present invention is recorded is characterized in that the above-described image processing program is recorded so as to be readable by the arithmetic means.
  • FIG. 2 is a block diagram illustrating a schematic configuration of a focus state recognition unit in the first embodiment, together with a schematic diagram showing the band characteristic calculation blocks in the input image.
  • FIG. 10 is a schematic diagram showing first to fourth regions in an input image in the fourth embodiment, together with block diagrams showing schematic configurations of the focus state recognition units according to the modification and the other modification described later.
  • FIG. 1 is a block diagram illustrating a schematic configuration of a display device.
  • FIG. 2 is a schematic diagram of an input image.
  • FIG. 3 is a schematic diagram of an output image.
  • FIG. 4 is a block diagram illustrating a schematic configuration of the focus state recognition unit.
  • FIG. 5 is a schematic diagram showing a band characteristic calculation block in the input image.
  • FIG. 6 is a schematic diagram showing the relationship between the frequency band characteristics after Hadamard transform and the high-frequency component and low-frequency component.
  • FIG. 7 is a block diagram illustrating a schematic configuration of the polarity gain determination unit.
  • FIG. 8 is a schematic diagram showing a setting state of the degree of focus for each block in each setting unit block.
  • FIG. 9 is a schematic diagram showing a setting state of a difference value between the average value of the degree of focus for each block and the degree of focus for each block of FIG.
  • FIG. 10 is a schematic diagram showing a state in which the absolute value of the difference value in FIG. 9 is taken.
  • FIG. 11 is a schematic diagram illustrating a positional relationship between a processing target pixel which is a linear interpolation processing target and a central pixel.
  • FIG. 12 is a schematic diagram for explaining a method of calculating a data conversion value.
  • the display device 100 includes a display unit 110 and an image processing device 120 as a calculation unit.
  • the display unit 110 displays the image processed by the image processing device 120.
  • Examples of the display unit 110 include a PDP (Plasma Display Panel), a liquid crystal panel, an organic EL (Electro Luminescence) panel, a CRT (Cathode-Ray Tube), an FED (Field Emission Display), and an electrophoretic display panel.
  • As shown in FIG. 2, the image processing apparatus 120 processes the input image Pi, which includes a focus area Af where the subject is in focus and an unfocus area Au where the subject is not in focus, and outputs to the display unit 110 an output image Po, shown in FIG. 3, in which the sense of depth is emphasized.
  • the image processing apparatus 120 includes a focus state recognition unit 130, a time constant processing unit 140, an inter-block linear interpolation unit 150, and a depth processing unit 160 that are configured from various programs.
  • The focus state recognition unit 130 includes a Hadamard transform unit 131, a high frequency component accumulation unit 132, a low frequency component accumulation unit 133, a focus degree calculation unit 134, a block-by-block accumulation unit 135, and a polarity gain determination unit 136.
  • The Hadamard transform unit 131 acquires the input signal Vi of the input image Pi from the image signal output unit 10. Based on this input signal Vi, as shown in FIG. 5, Hadamard transform processing is performed for each band characteristic calculation block Bf composed of a total of 64 pixels (not shown), 8 in the vertical direction by 8 in the horizontal direction, to calculate the frequency band components F of the input signal Vi as shown in FIG. 6. Since the square portion (indicated by a black square) at the upper left corner in FIG. 6 is the direct current band, its component is replaced with 0. The frequency band components F are then output to the high frequency component accumulating unit 132 and the low frequency component accumulating unit 133.
  • The high-frequency component accumulating unit 132 recognizes components of the frequency band components F whose frequency band is equal to or higher than a preset threshold as the high frequency component Fh (the square portions hatched with lines extending diagonally up-left in FIG. 6), and outputs the high frequency component accumulated value Sh, obtained by accumulating the high frequency component Fh for each band characteristic calculation block Bf, to the focus degree calculation unit 134.
  • The low-frequency component accumulating unit 133 recognizes components of the frequency band components F below the threshold as the low frequency component Fl (the square portions hatched with lines extending diagonally up-right in FIG. 6), and outputs the low frequency component accumulated value Sl, obtained by accumulating the low frequency component Fl for each band characteristic calculation block Bf, to the focus degree calculation unit 134.
  • The focus degree calculation unit 134 outputs, for each band characteristic calculation block Bf, the value obtained by dividing the high-frequency component accumulated value Sh by the low-frequency component accumulated value Sl to the block-by-block accumulation unit 135 as the focus degree Du.
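  • As a concrete illustration, the following Python sketch computes the focus degree Du for one 8×8 band characteristic calculation block. The Sylvester-ordered Hadamard matrix from SciPy and the index-sum band split are assumptions standing in for the patent's fixed high/low band mask of FIG. 6.

```python
import numpy as np
from scipy.linalg import hadamard

def focus_degree(block: np.ndarray, split: int = 7) -> float:
    """Focus degree Du of one 8x8 band characteristic calculation block Bf.

    block: 8x8 luminance patch. `split` and the index-sum band mask are
    illustrative assumptions standing in for the fixed band mask of FIG. 6.
    """
    H = hadamard(8)                          # 8x8 Hadamard matrix (Sylvester order)
    F = H @ block.astype(float) @ H.T        # 2-D Hadamard transform: band components F
    F[0, 0] = 0.0                            # upper-left square is the DC band -> replaced with 0
    u, v = np.meshgrid(range(8), range(8), indexing="ij")
    high = (u + v) >= split                  # assumed split into high/low frequency bands
    Sh = np.abs(F[high]).sum()               # high frequency component accumulated value Sh
    Sl = np.abs(F[~high]).sum()              # low frequency component accumulated value Sl
    return Sh / Sl if Sl > 0 else 0.0        # focus degree Du = Sh / Sl
```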
  • The block-by-block accumulating unit 135 outputs the block-specific focus degree Ds, obtained by accumulating the focus degree Du for each setting unit block Bs composed of (a × b) band characteristic calculation blocks Bf as shown in FIG. 8, to the polarity gain determination unit 136.
  • the setting unit block Bs is a block obtained by dividing the input image Pi into five pieces in the vertical direction and eight pieces in the horizontal direction.
  • the block-specific focus degree Ds is set to a value of 0 to 255 (8 bits).
  • the polarity gain determination unit 136 sets polarity information and a gain value for each setting unit block Bs based on the block-specific focus degree Ds. As shown in FIG. 7, the polarity gain determination unit 136 includes an average calculation unit 136A, a difference calculation unit 136B, a polarity determination unit 136C, and a gain corresponding value setting unit 136D.
  • The average calculation unit 136A calculates the average value of the block-specific focus degrees Ds and outputs the calculation result to the difference calculation unit 136B. For example, for the block-specific focus degrees Ds shown in FIG. 8, the average value over all setting unit blocks Bs is calculated as 64.
  • The difference calculation unit 136B calculates, as the difference value of each setting unit block Bs, the value obtained by subtracting the average value from the average calculation unit 136A from the block-specific focus degree Ds of that setting unit block Bs supplied by the block-by-block accumulation unit 135, and outputs the calculation results to the polarity determination unit 136C and the gain corresponding value setting unit 136D. For example, subtracting the average value 64 from the block-specific focus degrees Ds shown in FIG. 8 yields the difference values shown in FIG. 9.
  • The polarity determination unit 136C acquires the difference values from the difference calculation unit 136B, determines that a setting unit block Bs whose difference value is 0 or more belongs to the focus area Af (the square portions hatched with lines extending diagonally up-right in FIGS. 9, 10, and 12), and outputs polarity information Km of “+1” indicating this to the time constant processing unit 140. It determines that a setting unit block Bs whose difference value is less than 0 belongs to the unfocus area Au (the square portions hatched with lines extending diagonally up-left in FIGS. 9, 10, and 12), and outputs polarity information Km of “−1” to the time constant processing unit 140.
  • The gain corresponding value setting unit 136D acquires the difference values from the difference calculation unit 136B and calculates the absolute value of the difference value of each setting unit block Bs as shown in FIG. 10. When the absolute value for a setting unit block Bs is 127 or less, this absolute value is set as the gain corresponding value; when the absolute value is 128 or more, 127 is set as the gain corresponding value. That is, the absolute value represented by 8 bits is limited to 7 bits. The gain corresponding value setting unit 136D then outputs the gain corresponding value Gm to the time constant processing unit 140.
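  • The polarity and gain determination of FIGS. 8 to 10 reduces to a few array operations; a minimal sketch (the 8-bit/7-bit ranges follow the text):

```python
import numpy as np

def polarity_and_gain(Ds: np.ndarray):
    """Ds: block-specific focus degrees (0..255), one entry per setting unit block Bs."""
    diff = Ds.astype(int) - int(Ds.mean())   # difference from the average (FIG. 9)
    Km = np.where(diff >= 0, 1, -1)          # polarity: +1 = focus area Af, -1 = unfocus area Au
    Gm = np.minimum(np.abs(diff), 127)       # gain corresponding value, limited to 7 bits (FIG. 10)
    return Km, Gm
```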
  • the time constant processing unit 140 determines whether or not the continuous input image Pi corresponds to a moving image scene change.
  • Here, a scene change means that the relevance between successive input images Pi is low, such as when a moving image scene changes. If it is determined that a scene change has occurred, the polarity information Km and the gain corresponding value Gm are output as they are to the inter-block linear interpolation unit 150 as the time constant processing polarity information Kj and the time constant processing gain corresponding value Gj.
  • When the time constant processing unit 140 determines that a scene change has not occurred, it performs time constant processing on each of the polarity information Km and the gain corresponding value Gm to calculate the time constant processing polarity information Kj and the time constant processing gain corresponding value Gj. Specifically, the time constant processing polarity information Kj and the time constant processing gain corresponding value Gj are calculated by applying IIR (Infinite Impulse Response) filter processing in the time direction to the polarity information Km and the gain corresponding value Gm.
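  • The patent does not give filter coefficients; the sketch below uses a first-order IIR with an assumed smoothing factor alpha, plus the scene-change bypass described above:

```python
def time_constant_step(prev_Gj: float, Gm: float, scene_change: bool,
                       alpha: float = 0.75) -> float:
    """One time-direction IIR update of the gain corresponding value.

    alpha is an assumed smoothing factor (larger = stronger smoothing); the
    text only specifies an IIR filter in the time direction, bypassed at
    scene changes. The same update applies to the polarity information Km.
    """
    if scene_change:
        return Gm                            # low relevance between frames: pass through unchanged
    return alpha * prev_Gj + (1.0 - alpha) * Gm
```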
  • The inter-block linear interpolation unit 150 performs linear interpolation processing on each of the time constant processing polarity information Kj and the time constant processing gain corresponding value Gj, and calculates linear interpolation polarity information Ks and a linear interpolation gain corresponding value Gs for each pixel Q shown in FIG. 11. Note that the size of the pixels Q relative to the setting unit blocks Bs is exaggerated for ease of understanding. Specifically, the inter-block linear interpolation unit 150 calculates the linear interpolation polarity information Ks and the linear interpolation gain corresponding value Gs of a processing target pixel (hereinafter, processing target pixel) Qs included in the setting unit block Bs2. It is assumed that the time constant processing polarity information Kj and the time constant processing gain corresponding value Gj are set for the central pixels Q1, Q3, and Q4 of the setting unit blocks Bs1, Bs3, and Bs4 adjacent to the setting unit block Bs2 and for the pixel Q2 positioned at the center of the setting unit block Bs2 (hereinafter, central pixels). The data conversion value H is calculated by substituting the time constant processing polarity information Kj and the time constant processing gain corresponding value Gj of each setting unit block Bs into equation (1).
  • In the illustrated example, the data conversion values H1, H2, H3, and H4 of the setting unit blocks Bs1 to Bs4 are 104, 150, 104, and 162, respectively.
  • The inter-block linear interpolation unit 150 then calculates the first linear interpolation value I of the processing target pixel Qs by substituting the vertical distance m from the processing target pixel Qs to the central pixel Q2, the vertical distance n to the central pixel Q4, the horizontal distance x to the central pixel Q1, the horizontal distance y to the central pixel Q3, and the data conversion values H1, H2, H3, and H4 into equation (2).
  • The inter-block linear interpolation unit 150 calculates the second linear interpolation value by subtracting 128 from the first linear interpolation value I. When the second linear interpolation value is 0 or more, the inter-block linear interpolation unit 150 outputs to the polarity adjustment unit 162 linear interpolation polarity information Ks indicating that the polarity of the processing target pixel Qs is “1”, that is, that the processing target pixel Qs is included in the focus area Af. It also outputs the absolute value of the second linear interpolation value as the linear interpolation gain corresponding value Gs to the later-described gain adjustment unit 163 of the depth processing unit 160. For example, in the case shown in FIGS. 11 and 12, the second linear interpolation value is calculated as 12.75, so the inter-block linear interpolation unit 150 outputs linear interpolation polarity information Ks indicating that the polarity of the processing target pixel Qs is “1” to the polarity adjustment unit 162, and outputs the value 12.75 to the gain adjustment unit 163 as the linear interpolation gain corresponding value Gs.
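  • Equations (1) and (2) are not reproduced in this text. The sketch below therefore uses an assumed data conversion H = 128 + Kj·Gj and an assumed standard bilinear blend over the four central pixels of FIG. 11, which is consistent with the 128 offset and the sign/absolute-value split described above:

```python
def data_conversion(Kj: int, Gj: float) -> float:
    """Assumed form of equation (1): fold polarity and gain into one value."""
    return 128 + Kj * Gj

def interpolate_pixel(H1, H2, H3, H4, m, n, x, y):
    """Assumed form of equation (2): bilinear blend of the data conversion
    values at the four central pixels (FIG. 11). m, n are vertical distances
    to Q2 and Q4; x, y are horizontal distances to Q1 and Q3 (geometry assumed).
    """
    top = (y * H1 + x * H2) / (x + y)        # horizontal blend along the Q1/Q2 row
    bottom = (y * H4 + x * H3) / (x + y)     # horizontal blend along the Q4/Q3 row
    I = (n * top + m * bottom) / (m + n)     # first linear interpolation value I
    second = I - 128                         # second linear interpolation value
    Ks = 1 if second >= 0 else -1            # linear interpolation polarity information
    Gs = abs(second)                         # linear interpolation gain corresponding value
    return Ks, Gs
```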
  • the depth processing unit 160 includes a high frequency component detection unit 161, a polarity adjustment unit 162, a gain adjustment unit 163, and an addition unit 164.
  • The high-frequency component detection unit 161 acquires the input signal Vi from the image signal output unit 10 and performs high-pass filter (HPF) processing such as a Laplacian filter to extract the high frequency component Vh, that is, the components constituting the contours of the subject in the input image Pi, and outputs it to the polarity adjustment unit 162.
  • the polarity adjustment unit 162 acquires the high frequency component Vh from the high frequency component detection unit 161 and the linear interpolation polarity information Ks from the inter-block linear interpolation unit 150.
  • When the linear interpolation polarity information Ks of the pixel Q corresponding to the high frequency component Vh is “1”, that is, when the high frequency component Vh is in the focus area Af, the polarity adjustment unit 162 outputs the high frequency component Vh to the gain adjusting unit 163 as the polarity-adjusted high frequency component Vk without changing its polarity (positive/negative). When the linear interpolation polarity information Ks of the pixel Q corresponding to the high frequency component Vh is “−1”, that is, when the high frequency component Vh is in the unfocus area Au, the polarity of the high frequency component Vh is inverted, and the resulting polarity-adjusted high frequency component Vk is output to the gain adjustment unit 163.
  • The gain adjustment unit 163 acquires the linear interpolation gain corresponding value Gs of each pixel Q from the inter-block linear interpolation unit 150, and sets the value obtained by dividing the linear interpolation gain corresponding value Gs by 64 as the gain value. It then outputs to the addition unit 164 the gain adjustment high frequency component Vg obtained by multiplying this gain value by the polarity-adjusted high frequency component Vk of the corresponding pixel Q from the polarity adjustment unit 162.
  • The addition unit 164 acquires the input signal Vi from the image signal output unit 10 and the gain adjustment high-frequency component Vg from the gain adjustment unit 163. It then adds the gain adjustment high-frequency component Vg corresponding to each pixel Q to the high-frequency component Vh corresponding to it in the input signal Vi, generates the output signal Vo of the output image Po, and outputs it to the display unit 110.
  • In the focus area Af, the polarity of the high frequency component Vh is the same as the polarity of the gain adjustment high frequency component Vg. Therefore, when the high frequency component Vh and the gain adjustment high frequency component Vg are added, the high frequency component of the focus area Af in the output image Po becomes larger than that of the input image Pi, and the outline of the focus area Af in the output image Po is emphasized.
  • In the unfocus area Au, the polarity of the high frequency component Vh differs from the polarity of the gain adjustment high frequency component Vg, so the gain adjustment high frequency component Vg is effectively subtracted from the high frequency component Vh. The high frequency component of the unfocus area Au in the output image Po therefore becomes smaller than that of the input image Pi, and the contour of the unfocus area Au in the output image Po is blurred.
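  • Putting the first embodiment's depth processing together, here is a per-pixel sketch; SciPy's Laplacian stands in for the HPF, and the 1/64 gain scaling follows the text:

```python
import numpy as np
from scipy.ndimage import laplace

def depth_process(Vi: np.ndarray, Ks: np.ndarray, Gs: np.ndarray) -> np.ndarray:
    """Vi: input image; Ks: per-pixel polarity (+1 focus / -1 unfocus);
    Gs: per-pixel linear interpolation gain corresponding values.
    """
    Vh = -laplace(Vi.astype(float))   # high frequency component Vh (sign chosen so that
                                      # adding it sharpens, per the usual kernel convention)
    Vk = Ks * Vh                      # polarity adjustment: inverted in the unfocus area
    Vg = (Gs / 64.0) * Vk             # gain adjustment (gain value = Gs / 64)
    return Vi + Vg                    # addition unit: output signal Vo
```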
  • As described above, the image processing device 120 of the display device 100 recognizes the focus area Af and the unfocus area Au in the input image Pi, performs contour enhancement processing on the focus area Af, and generates the output image Po by performing blurring processing on the unfocus area Au. For this reason, it is possible to effectively enhance the sense of depth in the output image Po by simultaneously performing the contour enhancement processing and the blurring processing on the input image Pi.
  • Further, the image processing device 120 calculates the time constant processing gain corresponding value Gj by performing time constant processing on the gain corresponding value Gm of the continuously input images Pi, and generates the output image Po subjected to contour enhancement processing and blurring processing based on the time constant processing gain corresponding value Gj. That is, the image processing apparatus 120 performs time constant processing on the degrees of contour enhancement and blurring applied to the input images Pi, and generates the output image Po based on these time-constant-processed degrees.
  • If contour emphasis processing or blurring processing were performed directly based on the gain corresponding value Gm, the degree of contour emphasis or blurring could change greatly between successive input images Pi, and a moving image with a sense of incongruity might be displayed.
  • By performing contour emphasis processing and blurring processing based on the time constant processing gain corresponding value Gj, the change in the degree of contour emphasis and blurring between input images Pi can be reduced compared with processing based on the gain corresponding value Gm, and a moving image with a reduced sense of incongruity can be displayed.
  • the image processing device 120 does not perform time constant processing when it is determined that the input image Pi corresponds to a scene change, and performs time constant processing when it is determined that the input image Pi does not correspond to a scene change.
  • If time constant processing were performed at the time of a scene change, contour enhancement processing and blurring processing would be performed on a given input image Pi based on a time constant processing gain corresponding value Gj reflecting the gain corresponding value Gm of the immediately preceding, weakly related input image Pi, and a moving image with a sense of incongruity might be displayed. By skipping time constant processing at a scene change, contour emphasis processing and blurring processing can be performed on the given input image Pi without reflecting the gain corresponding value Gm of the immediately preceding, weakly related input image Pi, and a moving image with a reduced sense of incongruity can be displayed.
  • the image processing apparatus 120 calculates a linear interpolation gain corresponding value Gs by performing linear interpolation processing on the time constant processing gain corresponding value Gj of a predetermined adjacent pixel Q of the input image Pi. Then, based on the linear interpolation gain corresponding value Gs, an output image Po that has undergone contour enhancement processing and blurring processing is generated. That is, the image processing apparatus 120 performs linear interpolation processing on the degree of contour enhancement and the degree of blurring on adjacent pixels Q in the input image Pi, and performs contour enhancement processing and processing based on the degree of linear interpolation processing. An output image Po subjected to the blurring process is generated.
  • If contour enhancement processing or blurring processing were performed based only on the per-block time constant processing gain corresponding values Gj, the degree of contour enhancement or blurring could be discontinuous at the boundaries between pixels Q, and a moving image with discontinuous boundaries might be displayed.
  • By performing contour enhancement processing and blurring processing based on the linear interpolation gain corresponding value Gs, the continuity of the degree of contour enhancement and blurring at the boundary portions between pixels Q can be increased compared with processing based on the time constant processing gain corresponding value Gj, and a moving image with continuous boundaries can be displayed.
  • FIG. 13 is a block diagram illustrating a schematic configuration of the display device.
  • FIG. 14 is a block diagram illustrating a schematic configuration of the focus state recognition unit.
  • FIG. 15 is a schematic diagram for explaining a method of calculating the horizontal band centroid value and the vertical band centroid value.
  • FIG. 16 is a schematic diagram illustrating a setting state of the filter horizontal band and the unsharp horizontal band.
  • FIG. 17 is a schematic diagram illustrating an adjustment state of the filter horizontal band and the unsharp horizontal band with respect to the focus area and the unfocus area.
  • the display device 200 includes a display unit 110 and an image processing device 220 as a calculation unit.
  • the image processing apparatus 220 includes a focus state recognition unit 230 and a depth processing unit 240 as an outline emphasis processing unit and a blur processing unit, which are configured by various programs.
  • the focus state recognition unit 230 includes a Hadamard transform unit 131, a horizontal / vertical band centroid calculation unit 232, a blur band adjustment unit 233, an edge enhancement band adjustment unit 234, and a high frequency component accumulation unit 132.
  • The Hadamard transform unit 131 performs Hadamard transform processing on one input image Pi for each band characteristic calculation block Bf, and outputs the 64 frequency band components F obtained by this processing to the horizontal / vertical band centroid calculation unit 232, the high frequency component accumulation unit 132, and the low frequency component accumulation unit 133.
  • The horizontal / vertical band centroid calculation unit 232 normalizes the 64 frequency band components after replacing the DC band component with 0 in the frequency band components F obtained by the Hadamard transform processing. Specifically, the horizontal / vertical band centroid calculating unit 232 obtains the maximum value Max among the 64 frequency band components F, and calculates each frequency band component F multiplied by the value obtained by dividing 4096 by the maximum value Max as a normalized frequency band component, as shown in FIG. 15. The numerical values in the square portions of FIG. 15 represent the normalized frequency band components; since the square portion at the upper left corner is the direct current band, its component is replaced with 0.
  • The horizontal / vertical band centroid calculation unit 232 calculates the horizontal band centroid value Jh and the vertical band centroid value Jv based on the normalized frequency band components, and outputs them to the blur band adjustment unit 233 and the contour enhancement band adjustment unit 234. Specifically, when calculating the horizontal band centroid value Jh, the horizontal / vertical band centroid calculating unit 232 calculates, as shown in FIG. 15, the vertical accumulated value obtained by accumulating the components arranged in the vertical direction, and calculates 61944, obtained by accumulating these vertical accumulated values, as the overall accumulated value. Each vertical multiplication value is then calculated by multiplying the vertical accumulated value of the T-th horizontal band (T = 1 to 8) by (T − 1), and 145541, obtained by accumulating these vertical multiplication values, is calculated as the vertical multiplication cumulative value. Dividing the vertical multiplication cumulative value by the overall accumulated value yields 2.35, which is calculated as the horizontal band centroid value Jh. Similarly, the horizontal / vertical band centroid calculating unit 232 calculates 2.49, obtained by dividing the horizontal multiplication cumulative value (accumulated from the horizontal multiplication values) by the total accumulated value (accumulated from the horizontal accumulated values), as the vertical band centroid value Jv.
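  • The centroid computation can be checked against the worked numbers above (61944, 145541, Jh ≈ 2.35); a sketch assuming an 8×8 array of components:

```python
import numpy as np

def band_centroids(F: np.ndarray):
    """Horizontal/vertical band centroid values Jh, Jv from 8x8 components F."""
    A = np.abs(F).astype(float)
    A[0, 0] = 0.0                         # DC band replaced with 0
    A *= 4096.0 / A.max()                 # normalization described in the text
    col = A.sum(axis=0)                   # vertical accumulated value of each horizontal band
    row = A.sum(axis=1)                   # horizontal accumulated value of each vertical band
    w = np.arange(8)                      # multiplier (T - 1) = 0..7
    Jh = (w * col).sum() / col.sum()      # e.g. 145541 / 61944 = 2.35
    Jv = (w * row).sum() / row.sum()
    return Jh, Jv
```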
  • Based on the horizontal band centroid value Jh and the vertical band centroid value Jv, the blur band adjusting unit 233 sets, for each band characteristic calculation block Bf, the blurring unsharp horizontal band Ub1, the blurring unsharp vertical band Ub2, the blurring filter horizontal band Ub3, and the blurring filter vertical band Ub4. Specifically, when the level of each horizontal band of the input signal Vi is set as shown in FIG. 16, the blur band adjusting unit 233 calculates, as the horizontal band centroid frequency, the value obtained by multiplying the maximum horizontal band frequency (half the sampling frequency) Fmax by the horizontal band centroid value Jh and dividing the result by 7.
  • frequencies lower than the horizontal band centroid frequency by a predetermined value are set as the blurring unsharp horizontal band Ub1 and the blurring filter horizontal band Ub3.
  • Similarly, the blur band adjusting unit 233 calculates the vertical band centroid frequency by multiplying the vertical band maximum frequency Fmax by the vertical band centroid value Jv and dividing the result by 7, and sets frequencies lower than the vertical band centroid frequency by a predetermined value as the blurring unsharp vertical band Ub2 and the blurring filter vertical band Ub4. The blurring band adjustment unit 233 then outputs the blurring unsharp horizontal band Ub1 and the blurring unsharp vertical band Ub2 to the later-described unsharp processing unit 242 of the depth processing unit 240, and outputs the blurring filter horizontal band Ub3 and the blurring filter vertical band Ub4 to the later-described filter processing unit 241 of the depth processing unit 240.
  • Based on the horizontal band centroid value Jh and the vertical band centroid value Jv, the contour emphasis band adjusting unit 234 sets, for each band characteristic calculation block Bf, the contour-emphasis unsharp horizontal band Ur1, the contour-emphasis unsharp vertical band Ur2, the contour-emphasis filter horizontal band Ur3, and the contour-emphasis filter vertical band Ur4. Specifically, when the frequency levels of the input signal Vi are set as shown in FIG. 16, the contour enhancement band adjustment unit 234 calculates the horizontal band centroid frequency in the same manner as the blur band adjustment unit 233 and sets a frequency higher than the horizontal band centroid frequency by a predetermined value as the contour-emphasis unsharp horizontal band Ur1. That is, the contour-emphasis unsharp horizontal band Ur1 is set so that the horizontal band centroid frequency corresponds to the centroid between the blurring filter horizontal band Ub3 and the contour-emphasis unsharp horizontal band Ur1. A frequency higher than the contour-emphasis unsharp horizontal band Ur1 by a predetermined value is set as the contour-emphasis filter horizontal band Ur3. Similarly, the contour enhancement band adjustment unit 234 sets a frequency higher than the vertical band centroid frequency by a predetermined value as the contour-emphasis unsharp vertical band Ur2, such that the vertical band centroid frequency corresponds to the centroid between the blurring filter vertical band Ub4 and the contour-emphasis unsharp vertical band Ur2, and sets a frequency higher than the contour-emphasis unsharp vertical band Ur2 by a predetermined value as the contour-emphasis filter vertical band Ur4. The contour emphasis band adjusting unit 234 then outputs the contour-emphasis unsharp horizontal band Ur1 and the contour-emphasis unsharp vertical band Ur2 to the unsharp processing unit 242, and outputs the contour-emphasis filter horizontal band Ur3 and the contour-emphasis filter vertical band Ur4 to the filter processing unit 241.
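  • How the four bands sit around the centroid frequency can be summarized as below; the offset d (the patent's "predetermined value") and the equality Ub1 = Ub3 are assumptions, chosen so the centroid lies midway between Ub3 and Ur1 as the text requires:

```python
def horizontal_bands(Jh: float, Fmax: float, d: float):
    """Cutoffs around the horizontal band centroid frequency (FIGS. 16/17).
    d is the assumed 'predetermined value'; the same scheme applies vertically.
    """
    fc = Fmax * Jh / 7.0          # horizontal band centroid frequency
    Ub1 = Ub3 = fc - d            # blurring unsharp / filter horizontal bands
    Ur1 = fc + d                  # contour-emphasis unsharp band: fc is midway Ub3..Ur1
    Ur3 = Ur1 + d                 # contour-emphasis filter band, higher still
    return Ub1, Ub3, Ur1, Ur3
```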
  • The polarity gain determining unit 136 supplies the polarity information Km generated by the polarity determining unit 136C to the filter processing unit 241 and the unsharp processing unit 242, and outputs the gain corresponding value Gm set by the gain corresponding value setting unit 136D to the later-described multiplication unit 244 of the depth processing unit 240.
  • the depth processing unit 240 includes a filter processing unit 241, an unsharp processing unit 242, a subtraction unit 243, a multiplication unit 244, and an addition unit 245.
  • The filter processing unit 241 acquires the input signal Vi. It also acquires, for each band characteristic calculation block Bf of the input signal Vi, the blurring filter horizontal band Ub3, the blurring filter vertical band Ub4, the contour-emphasis filter horizontal band Ur3, and the contour-emphasis filter vertical band Ur4, as well as the polarity information Km for each setting unit block Bs. When the filter processing unit 241 recognizes based on the polarity information Km that a band characteristic calculation block Bf belongs to the focus area Af, as shown in FIG. 17 it outputs to the subtracting unit 243 and the adding unit 245 the post-filter contour emphasis horizontal component Sx1, obtained by removing components at or above the contour-emphasis filter horizontal band Ur3 from the horizontal component of the block, and the post-filter contour emphasis vertical component Sx2, obtained by removing components at or above the contour-emphasis filter vertical band Ur4 from the vertical component. When the block belongs to the unfocus area Au, it outputs to the subtracting unit 243 and the adding unit 245 the post-filter blurring horizontal component Sx3, obtained by removing components at or above the blurring filter horizontal band Ub3 from the horizontal component, and the post-filter blurring vertical component Sx4, obtained by removing components at or above the blurring filter vertical band Ub4 from the vertical component of the band characteristic calculation block Bf.
  • The unsharp processing unit 242 acquires the input signal Vi and, for each band characteristic calculation block Bf of the input signal Vi, the blurring unsharp horizontal band Ub1, the blurring unsharp vertical band Ub2, the contour-emphasis unsharp horizontal band Ur1, and the contour-emphasis unsharp vertical band Ur2, as well as the polarity information Km for each setting unit block Bs. When the unsharp processing unit 242 recognizes based on the polarity information Km that a band characteristic calculation block Bf belongs to the focus area Af, as shown in FIG. 17 it outputs to the subtracting unit 243 the post-mask contour emphasis horizontal component Sy1, obtained by removing components at or above the contour-emphasis unsharp horizontal band Ur1 from the horizontal component of the block, and the post-mask contour emphasis vertical component Sy2, obtained by removing components at or above the contour-emphasis unsharp vertical band Ur2 from the vertical component. When the block belongs to the unfocus area Au, it outputs to the subtracting unit 243 the post-mask blurring horizontal component Sy3, obtained by removing components at or above the blurring unsharp horizontal band Ub1 from the horizontal component, and the post-mask blurring vertical component Sy4, obtained by removing components at or above the blurring unsharp vertical band Ub2 from the vertical component of the band characteristic calculation block Bf.
  • For the focus area Af, the subtraction unit 243 calculates the post-subtraction contour enhancement horizontal component Sg1 by subtracting the post-mask contour enhancement horizontal component Sy1 from the unsharp processing unit 242 from the post-filter contour enhancement horizontal component Sx1 from the filter processing unit 241, and outputs it to the multiplication unit 244.
  • the subtracting unit 243 calculates a post-subtraction contour emphasizing vertical component Sg2 obtained by subtracting the post-mask contour emphasizing vertical component Sy2 from the post-filtering contour emphasizing vertical component Sx2, and outputs it to the multiplying unit 244.
  • the post-filter contour enhancement horizontal component Sx1 and the post-mask contour enhancement horizontal component Sy1 are different, and the post-filter contour enhancement vertical component Sx2 and the post-mask contour enhancement vertical component Sy2 are different.
  • the subtractor 243 subtracts the post-mask blur horizontal component Sy3 from the post-filter blur horizontal component Sx3, and subtracts the post-mask blur vertical component Sy4 from the post-filter blur vertical component Sx4.
  • the post-filter blur horizontal component Sx3 and the post-mask blur horizontal component Sy3 are the same, and the post-filter blur vertical component Sx4 and the post-mask blur vertical component Sy4 are the same.
  • The multiplication unit 244 outputs to the addition unit 245 the gain-adjusted horizontal component Sp1 obtained by multiplying the post-subtraction contour emphasis horizontal component Sg1 by the gain corresponding value Gm of the band characteristic calculation block Bf, and likewise outputs the gain-adjusted vertical component Sp2 obtained by multiplying the post-subtraction contour emphasis vertical component Sg2 by the gain corresponding value Gm. That is, the multiplying unit 244 outputs the gain-adjusted horizontal component Sp1 and the gain-adjusted vertical component Sp2 corresponding to the band characteristic calculation blocks Bf of the focus area Af. Since the components from the subtracting unit 243 corresponding to the band characteristic calculation blocks Bf of the unfocus area Au are 0, the multiplying unit 244 outputs 0, the result of multiplying these components by the gain corresponding value Gm, to the addition unit 245.
  • The adder 245 acquires the components Sx1, Sx2, Sx3, and Sx4 from the filter processor 241. For the band characteristic calculation blocks Bf of the focus area Af, it adds the post-filter contour emphasis horizontal component Sx1 and the gain-adjusted horizontal component Sp1, and adds the post-filter contour emphasis vertical component Sx2 and the gain-adjusted vertical component Sp2. For the band characteristic calculation blocks Bf of the unfocus area Au, the adding unit 245 adds the post-filter blurring horizontal component Sx3 and the 0 output from the multiplying unit 244, and adds the post-filter blurring vertical component Sx4 and 0.
  • In this way, the output signal Vo of the output image Po is generated and output to the display unit 110.
  • In the focus area Af, the gain-adjusted horizontal component Sp1 and the gain-adjusted vertical component Sp2 are not zero, so the horizontal and vertical components of the focus area Af in the output image Po are larger than the components Sx1 and Sx2, that is, larger than those of the input image Pi, and the outline of the focus area Af in the output image Po is emphasized. In the unfocus area Au, the above-described addition results are simply the post-filter blurring horizontal component Sx3 and the post-filter blurring vertical component Sx4. For this reason, the horizontal and vertical components of the unfocus area Au in the output image Po are smaller than those of the input image Pi, and the unfocus area Au in the output image Po is blurred.
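  • The net effect of the filter/unsharp/subtract/multiply/add chain is an unsharp-mask style emphasis in the focus area and a pure band-limited blur in the unfocus area. A per-block sketch, where lowpass(sig, cutoff) is a hypothetical helper returning the components below the given band:

```python
def depth_process_block(x, lowpass, Km: int, Gm: float):
    """Second embodiment, one band characteristic calculation block (one direction).

    lowpass is a hypothetical band-limiting helper. In the focus area the
    filter band Ur3 sits above the unsharp band Ur1, so their difference is a
    mid-band detail term; in the unfocus area both cutoffs coincide (Ub3 = Ub1)
    and the difference vanishes, leaving only the blurred component.
    """
    if Km > 0:                             # focus area Af
        Sx = lowpass(x, "Ur3")             # post-filter contour emphasis component
        Sy = lowpass(x, "Ur1")             # post-mask contour emphasis component
    else:                                  # unfocus area Au
        Sx = lowpass(x, "Ub3")             # post-filter blurring component
        Sy = lowpass(x, "Ub1")             # identical band -> Sx - Sy == 0
    return Sx + Gm * (Sx - Sy)             # adding unit output
```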
  • FIG. 18 is a block diagram illustrating a schematic configuration of the display device.
  • The image processing device 320 as a calculation unit of the display device 300 has a configuration in which a face detection unit 321 and a gain weighting unit 322 as a face correspondence correction control unit are added to the image processing device 120 of the first embodiment.
  • The face detection unit 321 detects human or animal faces present in the input image Pi of the input signal Vi, and outputs to the gain weighting unit 322 face information Gu containing whether a face has been detected and, if so, its position.
  • The gain weighting unit 322 acquires the linear interpolation gain corresponding value Gs from the inter-block linear interpolation unit 150 and the face information Gu from the face detection unit 321. Based on the face information Gu, it determines whether or not a face has been detected in the input image Pi. If a face has been detected, it calculates a weighting gain corresponding value Gk as face corresponding correction information by multiplying the linear interpolation gain corresponding value Gs of the pixels Q corresponding to the face by a first face correction value (for example, less than 1), and multiplying the linear interpolation gain corresponding value Gs of the pixels Q other than the face by a second face correction value (for example, 1) larger than the first face correction value. If no face is detected, it calculates the weighting gain corresponding value Gk by multiplying the linear interpolation gain corresponding value Gs of all the pixels Q by the second face correction value. That is, the correction makes the linear interpolation gain corresponding value Gs of the pixels Q corresponding to a face smaller than when no face is detected. The gain weighting unit 322 then outputs the weighting gain corresponding value Gk to the gain adjustment unit 163. Alternatively, the weighting gain corresponding value Gk may be calculated by subtracting a predetermined value from the linear interpolation gain corresponding value Gs of only the pixels Q where a face is detected, or by adding a predetermined value to the linear interpolation gain corresponding value Gs of only the pixels Q where no face is detected, so that the linear interpolation gain corresponding value Gs of the pixels Q corresponding to a face is reduced compared with the case where no face is detected.
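  • A sketch of the face-corresponding correction; the numeric correction values are assumed examples (the text only requires the face value to be smaller):

```python
import numpy as np

def weight_gain(Gs: np.ndarray, face_mask: np.ndarray,
                k_face: float = 0.5, k_other: float = 1.0) -> np.ndarray:
    """Weighting gain corresponding value Gk.

    face_mask is True on pixels the face detection unit attributes to a face;
    k_face (< 1) and k_other are assumed example correction values.
    """
    return np.where(face_mask, Gs * k_face, Gs * k_other)
```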
  • The gain adjustment unit 163 sets the value obtained by dividing the weighting gain corresponding value Gk of each pixel Q by 64 as the gain value, and outputs to the adding unit 164 the gain adjustment high frequency component Vg based on this gain value and the polarity-adjusted high frequency component Vk.
  • In the pixels Q corresponding to a face in the focus area Af, a gain adjustment high-frequency component Vg smaller than that of the pixels Q not corresponding to a face is added to the high-frequency component Vh, so the degree of contour emphasis on the face in the output image Po is lower than on portions other than the face. Similarly, in the unfocus area Au, a gain adjustment high-frequency component Vg smaller than that of the pixels Q not corresponding to a face is subtracted from the high-frequency component Vh, so the degree of blurring of the face in the output image Po is lower than that of portions other than the face.
  • the image processing device 320 makes the degree of the contour enhancement processing for the face in the focus area Af of the input image Pi lower than the degree of the contour enhancement processing for the part other than the face in the focus area Af. Furthermore, the degree of blurring processing for the face in the unfocused area Au is set lower than the degree of blurring processing for the part other than the face in the unfocused area Au.
  • Generally, humans are more sensitive to faces than to portions other than faces. For this reason, if excessive contour emphasis processing or blurring processing is performed on a face, an output image Po with a sense of incongruity is generated.
  • By reducing the degree of contour enhancement processing and blurring processing on the face compared with portions other than the face, it is possible to enhance the sense of depth while avoiding this sense of incongruity.
  • FIG. 19 is a block diagram illustrating a schematic configuration of the focus state recognition unit.
  • FIG. 20 is a schematic diagram showing first to fourth regions in the input image.
  • The image processing device 420 as a calculation unit of the display device 400 includes a focus state recognition unit 430 in which a screen position weight addition unit 437 as a position correspondence correction control unit is added to the focus state recognition unit 130 of the image processing device 120 of the first embodiment.
  • The screen position weight addition unit 437 acquires the focus degree Du for each band characteristic calculation block Bf from the focus degree calculation unit 134. As shown in FIG. 20, it recognizes a horizontally long elliptical first region R1 located at the center of the input image Pi, a ring-shaped second region R2 surrounding the first region R1, a ring-shaped third region R3 surrounding the second region R2, and a ring-shaped fourth region R4 surrounding the third region R3. The screen position weight addition unit 437 then calculates, as position correspondence correction information, a weight-added focus degree Dg obtained by multiplying the focus degree of the pixels Q in the first region R1 by a first position correction value (for example, 1), the pixels Q in the second region R2 by a second position correction value smaller than the first, the pixels Q in the third region R3 by a third position correction value smaller than the second, and the pixels Q in the fourth region R4 by a fourth position correction value smaller than the third. The screen position weight addition unit 437 outputs the weight-added focus degree Dg to the block accumulation unit 135.
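  • A sketch of the position-corresponding correction; the elliptical ring boundaries and the correction values are assumptions (only the ordering k1 > k2 > k3 > k4 is specified):

```python
import numpy as np

def position_weight(Du: np.ndarray, k=(1.0, 0.8, 0.6, 0.4)) -> np.ndarray:
    """Weight-added focus degree Dg over a 2-D focus degree map Du."""
    h, w = Du.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot((xx - w / 2) / (w / 2), (yy - h / 2) / (h / 2))  # elliptical radius
    region = np.clip((r * 4).astype(int), 0, 3)   # 0..3 -> regions R1..R4 (assumed boundaries)
    return Du * np.take(np.asarray(k), region)
```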
  • the block accumulation unit 135 outputs the block-specific focus degree Ds obtained by accumulating the weight-added focus degree Dg for each set unit block Bs to the polarity gain determination unit 136.
  • the polarity gain determination unit 136 sets the polarity information Km and the gain corresponding value Gm for each set unit block Bs based on the block-specific focus degree Ds, and outputs it to the time constant processing unit 140.
  • the gain adjustment unit 163 outputs a gain adjustment high frequency component Vg that is smaller as the block-specific focus degree Ds is smaller to the addition unit 164.
  • the gain adjustment high frequency component Vg is the smallest in the fourth region R4 and the largest in the first region R1.
  • the degree of contour emphasis in the output image Po decreases as the distance from the center increases.
  • In the unfocus area Au, the gain adjustment high frequency component Vg, which is smaller for pixels Q farther from the center of the input image Pi, is subtracted from the high frequency component Vh, so the degree of blurring in the output image Po decreases with distance from the center.
  • the image processing apparatus 420 decreases the degree of the contour enhancement process as the distance from the center of the input image Pi increases.
  • In general, the focus area Af is often at the center of the input image Pi. For this reason, by increasing the degree of processing at the center of the input image Pi, erroneous detection of the portions to be contour-emphasized and the portions to be blurred can be suppressed, and the detection accuracy can be increased.
  • display devices 500 and 600 including image processing devices 520 and 620 may be used instead of the image processing device 120 of the first embodiment.
  • the image processing apparatuses 520 and 620 have a configuration in which focus state recognition units 530 and 630 are provided instead of the focus state recognition unit 130.
  • The focus state recognition unit 530 of the display device 500 has the same configuration as the focus state recognition unit 130 of the first embodiment, except that a Fourier transform unit 531 is provided instead of the Hadamard transform unit 131.
  • Based on the input signal Vi of the input image Pi from the image signal output unit 10, the Fourier transform unit 531 performs Fourier transform processing for each band characteristic calculation block Bf as shown in FIG. 5, for example, and calculates the frequency band components F of the input signal Vi. Whereas the Hadamard transform yields frequency band components F having only positive frequency components, the Fourier transform yields frequency band components F having positive and negative frequency components.
  • the Fourier transform unit 531 outputs the frequency band component F to the high frequency component accumulation unit 132 and the low frequency component accumulation unit 133.
  • the high frequency component accumulating unit 132 and the low frequency component accumulating unit 133 specify only the positive frequency component of the frequency band component F.
  • The high frequency component accumulating unit 132 recognizes, among the specified frequency components, components whose frequency band is equal to or higher than a predetermined threshold as the high frequency component Fh, and outputs the high frequency component accumulated value Sh obtained by accumulating the high frequency component Fh for each band characteristic calculation block Bf to the focus degree calculation unit 134. The low frequency component accumulating unit 133 recognizes components below the threshold as the low frequency component Fl and outputs the low frequency component accumulated value Sl to the focus degree calculation unit 134. Even with such a configuration, effects similar to those of the first embodiment can be expected.
  • The focus state recognition unit 630 of the display device 600 includes a high frequency edge detection unit 631, a low frequency edge detection unit 632, a focus degree calculation unit 134, a block-by-block accumulation unit 135, and a polarity gain determination unit 136.
  • the high frequency edge detection unit 631 is an HPF (High Pass Filter) or the like, and detects a high frequency component Fh as shown in FIG.
  • the low frequency edge detection unit 632 is an LPF (low pass filter) or the like, and detects a low frequency component Fl as shown in FIG.
  • the high frequency edge detection unit 631 and the low frequency edge detection unit 632 detect the positive and negative high frequency components Fh and the positive and negative low frequency components Fl, respectively.
  • The high frequency edge detection unit 631 and the low frequency edge detection unit 632 calculate the absolute values of the high frequency component Fh and the low frequency component Fl, respectively, and output them to the focus degree calculation unit 134. Even with such a configuration, effects similar to those of the first embodiment can be expected.
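  • A sketch of this modification, with SciPy filters standing in for the HPF/LPF edge detectors (the specific kernels are assumptions):

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def edge_based_focus_degree(block: np.ndarray) -> float:
    """Focus degree from HPF/LPF edge detectors instead of a block transform."""
    Fh = np.abs(laplace(block.astype(float)))         # |high frequency component Fh| via HPF
    Fl = np.abs(uniform_filter(block.astype(float)))  # |low frequency component Fl| via LPF
    Sh, Sl = Fh.sum(), Fl.sum()
    return Sh / Sl if Sl > 0 else 0.0
```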
  • At least one of the time constant processing unit 140 and the inter-block linear interpolation unit 150 may not be provided, or the time constant processing based on the scene change may not be performed.
  • Alternatively, the time constant processing and the linear interpolation processing may be applied only to the degree of the contour enhancement processing or only to that of the blurring processing. Likewise, the adjustment based on face detection may be applied only to the contour enhancement processing or only to the blurring processing.
  • each function described above is constructed as a program, but it may be configured by hardware such as a circuit board or an element such as one IC (Integrated Circuit), and can be used in any form.
  • As described above, the image processing device 120 of the display device 100 recognizes the focus area Af and the unfocus area Au in the input image Pi, performs contour enhancement processing on the focus area Af, and generates the output image Po by performing blurring processing on the unfocus area Au. For this reason, it is possible to effectively enhance the sense of depth in the output image Po by simultaneously performing the contour enhancement processing and the blurring processing on the input image Pi.
  • the present invention can be used as an image processing device, a display device, an image processing method, a program thereof, and a recording medium on which the program is recorded.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The image processing device of a display device recognizes a focus area (Af) and an unfocus area (Au) in an input image, applies contour emphasizing processing to the focus area (Af), and generates an output image (Po) in which blurring processing is applied to the unfocus area (Au). Thus, the image processing device can effectively emphasize the sense of depth in the output image (Po) by applying the contour emphasizing processing to the focus area (Af) and the blurring processing to the unfocus area (Au) at the same time.

Description

画像処理装置、表示装置、画像処理方法、そのプログラム、および、そのプログラムを記録した記録媒体Image processing apparatus, display apparatus, image processing method, program thereof, and recording medium recording the program
The present invention relates to an image processing device, a display device, an image processing method, a program therefor, and a recording medium on which the program is recorded.
Conventionally, configurations that perform subject contour enhancement processing on an image have been known (see, for example, Patent Document 1 and Patent Document 2).
In the device described in Patent Document 1, the level of the peak point of the video signal is calculated, and a correction value is calculated from this level with the rated video signal level taken as 100%. The level of the contour enhancement signal is then corrected according to this correction value.
In the device described in Patent Document 2, in a scene where the brightness and contrast of the subject are large, the center frequency of the aperture signal is raised so that fine detail is made easier to see by the fine aperture signal. In a scene where the brightness and contrast of the subject are small, the center frequency of the aperture signal is lowered, and a thick, well-defined aperture signal yields a sharp image with little noise.
Patent Document 1: JP 2004-193769 A
Patent Document 2: JP H06-30303 A
The presence in an image of both a focus area, where the subject is in focus, and an unfocus area, where the subject is out of focus, means that the distance from the subject in the focus area to the imaging means differs from the distance from the subject in the unfocus area to the imaging means, so a sense of depth is expressed in the image. With recent advances in image processing technology, a need can be expected to arise to process such images and further emphasize this sense of depth.
However, in configurations such as those of Patent Documents 1 and 2, contour enhancement processing may be applied regardless of whether a region is a focus area or an unfocus area, so the sense of depth cannot be enhanced effectively.
An object of the present invention is to provide an image processing device, a display device, an image processing method, a program therefor, and a recording medium on which the program is recorded, each capable of effectively enhancing the sense of depth of an image.
An image processing device according to the present invention includes: a focus state recognition unit that recognizes a focus area in which the subject is in focus and an unfocus area in which the subject is not in focus in an input image; and a depth processing unit that performs contour enhancement processing on the recognized focus area and generates an output image in which blurring processing has been performed on the unfocus area.
A display device according to the present invention includes the above-described image processing device and a display unit that displays the output image generated by the image processing device.
An image processing method according to the present invention is an image processing method in which a calculation means generates an output image by performing predetermined processing on an input image, wherein the calculation means performs: a focus state recognition step of recognizing a focus area in which the subject is in focus and an unfocus area in which the subject is not in focus in the input image; and a depth processing step of performing contour enhancement processing on the recognized focus area and generating an output image in which blurring processing has been performed on the unfocus area.
An image processing program according to the present invention causes a calculation means to execute the above-described image processing method.
An image processing program according to the present invention causes a calculation means to function as the above-described image processing device.
A recording medium according to the present invention records the above-described image processing program so as to be readable by a calculation means.
FIG. 1 is a block diagram showing a schematic configuration of a display device according to the first and fourth embodiments of the present invention, a modification, and another modification.
FIG. 2 is a schematic diagram of an input image in the first embodiment.
FIG. 3 is a schematic diagram of an output image in the first embodiment.
FIG. 4 is a block diagram showing a schematic configuration of the focus state recognition unit in the first embodiment.
FIG. 5 is a schematic diagram showing band characteristic calculation blocks in the input image in the first embodiment.
FIG. 6 is a schematic diagram showing the relationship between the frequency band characteristics after the Hadamard transform and the high-band and low-band components in the first embodiment.
FIG. 7 is a block diagram showing a schematic configuration of the polarity gain determination unit in the first embodiment.
FIG. 8 is a schematic diagram showing the setting state of the block-specific focus degree in each setting unit block in the first embodiment.
FIG. 9 is a schematic diagram showing the setting state of the difference values between the average of the block-specific focus degrees and the block-specific focus degrees of FIG. 8 in the first embodiment.
FIG. 10 is a schematic diagram showing the state in which the absolute values of the difference values of FIG. 9 are taken in the first embodiment.
FIG. 11 is a schematic diagram showing the positional relationship between the processing target pixel subject to linear interpolation processing and the center pixels in the first embodiment.
FIG. 12 is a schematic diagram for explaining the method of calculating the data conversion value in the first embodiment.
FIG. 13 is a block diagram showing a schematic configuration of a display device according to the second embodiment of the present invention.
FIG. 14 is a block diagram showing a schematic configuration of the focus state recognition unit in the second embodiment.
FIG. 15 is a schematic diagram for explaining the method of calculating the horizontal band centroid value and the vertical band centroid value in the second embodiment.
FIG. 16 is a schematic diagram showing the setting state of the filter horizontal band and the unsharp horizontal band in the second embodiment.
FIG. 17 is a schematic diagram showing the adjustment state of the filter horizontal band and the unsharp horizontal band for the focus area and the unfocus area in the second embodiment.
FIG. 18 is a block diagram showing a schematic configuration of a display device according to the third embodiment of the present invention.
FIG. 19 is a block diagram showing a schematic configuration of the focus state recognition unit in the fourth embodiment.
FIG. 20 is a schematic diagram showing the first to fourth regions in the input image in the fourth embodiment.
FIG. 21 is a block diagram showing a schematic configuration of the focus state recognition unit according to the modification.
FIG. 22 is a block diagram showing a schematic configuration of the focus state recognition unit according to the other modification.
Explanation of Symbols
 100, 200, 300, 400, 500, 600: display device
 110: display unit
 120, 220, 320, 420, 520, 620: image processing device as a calculation means
 130, 230, 430, 530, 630: focus state recognition unit
 140: time constant processing unit
 150: inter-block linear interpolation unit
 321: face detection unit
 322: gain weighting unit as a face correspondence correction control unit
 437: screen position weight addition unit as a position correspondence correction control unit
 Dg: weight-added focus degree as position correspondence correction information
 Gk: weighted gain correspondence value as face correspondence correction information
[First Embodiment]
A first embodiment according to the present invention will be described below with reference to the drawings.
In this first embodiment and the second to fourth embodiments described later, a display device including the image processing device of the present invention, configured to emphasize the sense of depth of an image, is described as an example.
FIG. 1 is a block diagram showing a schematic configuration of the display device. FIG. 2 is a schematic diagram of an input image. FIG. 3 is a schematic diagram of an output image. FIG. 4 is a block diagram showing a schematic configuration of the focus state recognition unit. FIG. 5 is a schematic diagram showing band characteristic calculation blocks in the input image. FIG. 6 is a schematic diagram showing the relationship between the frequency band characteristics after the Hadamard transform and the high-band and low-band components. FIG. 7 is a block diagram showing a schematic configuration of the polarity gain determination unit. FIG. 8 is a schematic diagram showing the setting state of the block-specific focus degree in each setting unit block. FIG. 9 is a schematic diagram showing the setting state of the difference values between the average of the block-specific focus degrees and the block-specific focus degrees of FIG. 8. FIG. 10 is a schematic diagram showing the state in which the absolute values of the difference values of FIG. 9 are taken. FIG. 11 is a schematic diagram showing the positional relationship between the processing target pixel subject to linear interpolation processing and the center pixels. FIG. 12 is a schematic diagram for explaining the method of calculating the data conversion value.
[Configuration of the Display Device]
As shown in FIG. 1, the display device 100 includes a display unit 110 and an image processing device 120 as a calculation means.
The display unit 110 displays the image processed by the image processing device 120. Examples of the display unit 110 include a PDP (Plasma Display Panel), a liquid crystal panel, an organic EL (Electro Luminescence) panel, a CRT (Cathode-Ray Tube), an FED (Field Emission Display), and an electrophoretic display panel.
The image processing device 120 processes an input image Pi that includes a focus area Af, where the subject is in focus, and an unfocus area Au, where the subject is out of focus, as shown in FIG. 2, and outputs to the display unit 110 an output image Po in which the sense of depth is emphasized, as shown in FIG. 3. The image processing device 120 includes a focus state recognition unit 130, a time constant processing unit 140, an inter-block linear interpolation unit 150, and a depth processing unit 160, each configured from various programs.
As shown in FIG. 4, the focus state recognition unit 130 includes a Hadamard transform unit 131, a high-band component accumulation unit 132, a low-band component accumulation unit 133, a focus degree calculation unit 134, a block-by-block accumulation unit 135, and a polarity gain determination unit 136.
The Hadamard transform unit 131 acquires the input signal Vi of the input image Pi from the image signal output unit 10. Based on this input signal Vi, as shown in FIG. 5, it performs Hadamard transform processing in units of band characteristic calculation blocks Bf, each consisting of 64 pixels (not shown) arranged 8 vertically by 8 horizontally, and calculates the frequency band components F of the input signal Vi as shown in FIG. 6. Here, the square at the upper left corner of FIG. 6 (shown as a black square) corresponds to the DC band, so its component is replaced with 0. The frequency band components F are then output to the high-band component accumulation unit 132 and the low-band component accumulation unit 133.
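As a concrete illustration (not part of the patent disclosure), this block-wise transform can be sketched as follows, assuming NumPy and an 8×8 pixel block as input; building the Hadamard matrix by Kronecker products is a standard technique rather than a detail taken from this document:

    import numpy as np

    def hadamard8():
        # Build the 8x8 Hadamard matrix from repeated Kronecker products of [[1, 1], [1, -1]].
        h2 = np.array([[1, 1], [1, -1]])
        return np.kron(np.kron(h2, h2), h2)

    def band_components(block):
        # block: 8x8 array of pixel values (one band characteristic calculation block Bf).
        h = hadamard8()
        f = h @ block @ h.T   # 2-D Hadamard transform: 64 frequency band components F
        f[0, 0] = 0           # replace the DC band (upper-left corner) with 0
        return f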
The high-band component accumulation unit 132 recognizes, among the frequency band components F, those whose frequency band is at or above a preset threshold as high-band components Fh (the squares hatched with lines slanting up to the left in FIG. 6), and outputs to the focus degree calculation unit 134 a high-band accumulated value Sh obtained by accumulating the high-band components Fh for each band characteristic calculation block Bf.
The low-band component accumulation unit 133 recognizes the components below the threshold as low-band components Fl (the squares hatched with lines slanting up to the right in FIG. 6), and outputs to the focus degree calculation unit 134 a low-band accumulated value Sl obtained by accumulating the low-band components Fl for each band characteristic calculation block Bf.
The focus degree calculation unit 134 outputs to the block-by-block accumulation unit 135, for each band characteristic calculation block Bf, the value obtained by dividing the high-band accumulated value Sh by the low-band accumulated value Sl as the focus degree Du.
The block-by-block accumulation unit 135 outputs to the polarity gain determination unit 136 a block-specific focus degree Ds obtained by accumulating the focus degrees Du for each setting unit block Bs, which consists of (a × b) band characteristic calculation blocks Bf as shown in FIG. 5. The setting unit blocks Bs are obtained by dividing the input image Pi into 5 blocks vertically and 8 blocks horizontally. The block-specific focus degree Ds is set to a value from 0 to 255 (8 bits).
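For illustration, the band split and the focus degree can be sketched as follows; the band-index threshold and the use of absolute component magnitudes are assumptions, since the text above only specifies a preset frequency-band threshold:

    import numpy as np

    def focus_degree(f, threshold=4):
        # f: 8x8 Hadamard components of one block Bf, DC already zeroed.
        # Components whose combined band index reaches the threshold count as
        # high-band (Fh); the rest count as low-band (Fl).
        idx = np.add.outer(np.arange(8), np.arange(8))
        sh = np.abs(f)[idx >= threshold].sum()   # high-band accumulated value Sh
        sl = np.abs(f)[idx < threshold].sum()    # low-band accumulated value Sl
        return sh / sl if sl else 0.0            # focus degree Du = Sh / Sl

    def block_focus_degree(du_values):
        # Accumulate Du over the (a x b) blocks Bf of one setting unit block Bs,
        # keeping the result in the 0-255 (8-bit) range.
        return int(np.clip(np.sum(du_values), 0, 255))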
The polarity gain determination unit 136 sets polarity information and a gain value for each setting unit block Bs based on the block-specific focus degree Ds. As shown in FIG. 7, the polarity gain determination unit 136 includes an average calculation unit 136A, a difference calculation unit 136B, a polarity determination unit 136C, and a gain correspondence value setting unit 136D.
The average calculation unit 136A calculates the average of the block-specific focus degrees Ds and outputs the result to the difference calculation unit 136B. For example, when the block-specific focus degrees Ds (the numerical values in the squares) of the setting unit blocks Bs shown as squares are set as in FIG. 8, the average of the block-specific focus degrees Ds of all setting unit blocks Bs is calculated as 64.
The difference calculation unit 136B calculates, as the difference value of each setting unit block Bs, the value obtained by subtracting the average value from the average calculation unit 136A from the block-specific focus degree Ds of each setting unit block Bs from the block-by-block accumulation unit 135, and outputs the results to the polarity determination unit 136C and the gain correspondence value setting unit 136D. For example, subtracting the average value of 64 from the block-specific focus degree Ds of each setting unit block Bs shown in FIG. 8 gives the difference values shown in FIG. 9.
The polarity determination unit 136C acquires the difference values from the difference calculation unit 136B, determines that a setting unit block Bs with a difference value of 0 or more belongs to the focus area Af (the squares hatched with lines slanting up to the right in FIGS. 9, 10, and 12), and outputs polarity information Km of "+1" indicating this to the time constant processing unit 140. It determines that a setting unit block Bs with a difference value of less than 0 belongs to the unfocus area Au (the squares hatched with lines slanting up to the left in FIGS. 9, 10, and 12), and outputs polarity information Km of "-1" indicating this to the time constant processing unit 140.
The gain correspondence value setting unit 136D acquires the difference values from the difference calculation unit 136B and calculates the absolute value of the difference value of each setting unit block Bs as shown in FIG. 10. When the absolute value for a setting unit block Bs is 127 or less, this absolute value is set as the gain correspondence value. When the absolute value is 128 or more, 127 is set as the gain correspondence value; that is, the absolute value, represented in 8 bits, is limited to 7 bits. The gain correspondence value setting unit 136D then outputs the gain correspondence value Gm to the time constant processing unit 140.
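A compact sketch of this determination, assuming the block-specific focus degrees are held in a 5×8 NumPy array (one entry per setting unit block Bs):

    import numpy as np

    def polarity_and_gain(ds):
        # ds: 5x8 array of block-specific focus degrees Ds (0-255).
        diff = ds - ds.mean()               # difference from the average (FIG. 9)
        km = np.where(diff >= 0, 1, -1)     # polarity information Km: +1 focus, -1 unfocus
        gm = np.minimum(np.abs(diff), 127)  # gain correspondence value Gm, limited to 7 bits (FIG. 10)
        return km, gm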
Based on the input signal Vi, the time constant processing unit 140 determines whether successive input images Pi correspond to a scene change in the moving image. Here, a scene change means a point at which the relationship between successive input images Pi is weak, such as when the scene of the moving image changes. When the unit determines that a scene change applies, it outputs the polarity information Km and the gain correspondence value Gm unchanged to the inter-block linear interpolation unit 150 as the time-constant-processed polarity information Kj and the time-constant-processed gain correspondence value Gj.
When the time constant processing unit 140 determines that a scene change does not apply, it performs time constant processing on each of the polarity information Km and the gain correspondence value Gm to calculate the time-constant-processed polarity information Kj and the time-constant-processed gain correspondence value Gj.
Specifically, it calculates the time-constant-processed polarity information Kj and the time-constant-processed gain correspondence value Gj by applying IIR (Infinite Impulse Response) filter processing in the time direction to the polarity information Km and the gain correspondence value Gm. The time constant processing unit 140 then outputs the time-constant-processed polarity information Kj and the time-constant-processed gain correspondence value Gj to the inter-block linear interpolation unit 150.
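The time constant processing amounts to first-order recursive smoothing in the time direction, bypassed on a scene change. A sketch follows, in which the smoothing coefficient alpha is an assumption, since the text does not specify the IIR filter coefficients:

    def time_constant(prev, current, scene_change, alpha=0.25):
        # prev: value output for the previous frame (Kj or Gj);
        # current: value determined for the current frame (Km or Gm).
        if scene_change:
            return current   # pass Km and Gm through unchanged on a scene change
        # First-order IIR filter in the time direction.
        return prev + alpha * (current - prev)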
The inter-block linear interpolation unit 150 performs linear interpolation processing on the time-constant-processed polarity information Kj and the time-constant-processed gain correspondence value Gj, and calculates linear interpolation polarity information Ks and a linear interpolation gain correspondence value Gs for each pixel Q shown in FIG. 11. The size of the pixels Q relative to the setting unit blocks Bs is exaggerated in the figure to make the content easier to understand.
Specifically, when calculating the linear interpolation polarity information Ks and the linear interpolation gain correspondence value Gs of a pixel to be processed (hereinafter referred to as the processing target pixel) Qs contained in a setting unit block Bs2, the inter-block linear interpolation unit 150 assumes that the time-constant-processed polarity information Kj and the time-constant-processed gain correspondence value Gj are set at the pixel located at the center of the setting unit block Bs2 (hereinafter referred to as the center pixel) Q2 and at the center pixels Q1, Q3, and Q4 of the setting unit blocks Bs1, Bs3, and Bs4 adjacent to the setting unit block Bs2. It then calculates a data conversion value H by substituting the time-constant-processed polarity information Kj and the time-constant-processed gain correspondence value Gj of each setting unit block Bs into the following equation (1).
(Equation 1)
 H = (Kj × 128) + Gj … (1)
For example, when the time-constant-processed polarity information Kj and the time-constant-processed gain correspondence values Gj of the setting unit blocks Bs1 to Bs4 are set as shown in FIG. 12, the data conversion values H1, H2, H3, and H4 of the setting unit blocks Bs1 to Bs4 are 104, 150, 104, and 162, respectively.
The inter-block linear interpolation unit 150 then calculates the first linear interpolation value I of the processing target pixel Qs by substituting into the following equation (2) the vertical distance m from the processing target pixel Qs to the center pixel Q2, the vertical distance n to the center pixel Q4, the horizontal distance x to the center pixel Q1, the horizontal distance y to the center pixel Q2, and the data conversion values H1, H2, H3, and H4.
(Equation 2)
 I = (((H1 × y + H2 × x) / (x + y)) × n + ((H3 × y + H4 × x) / (x + y)) × m) / (n + m) … (2)
For example, substituting x = 96, y = 32, m = 32, n = 96, and the values shown in FIG. 12 (H1 = 104, H2 = 150, H3 = 104, H4 = 162) gives a first linear interpolation value I of 140.75.
The inter-block linear interpolation unit 150 also calculates a second linear interpolation value by subtracting 128 from the first linear interpolation value I.
When the second linear interpolation value is 0 or more, the inter-block linear interpolation unit 150 outputs to a polarity adjustment unit 162 of the depth processing unit 160, described later, linear interpolation polarity information Ks indicating that the polarity of the processing target pixel Qs is "1", that is, that the processing target pixel Qs is contained in the focus area Af. When the second linear interpolation value is less than 0, it outputs linear interpolation polarity information Ks indicating that the polarity of the processing target pixel Qs is "-1", that is, that the processing target pixel Qs is contained in the unfocus area Au.
Furthermore, the inter-block linear interpolation unit 150 outputs the absolute value of the second linear interpolation value as the linear interpolation gain correspondence value Gs to a gain adjustment unit 163 of the depth processing unit 160, described later.
For example, in the case shown in FIGS. 11 and 12, the second linear interpolation value is calculated as 12.75, and the inter-block linear interpolation unit 150 outputs to the polarity adjustment unit 162 linear interpolation polarity information Ks indicating that the polarity of the processing target pixel Qs is "1", and outputs the value 12.75 to the gain adjustment unit 163 as the linear interpolation gain correspondence value Gs.
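The worked example above can be checked with a short script; the numbers follow FIGS. 11 and 12 exactly:

    def data_conversion(kj, gj):
        return kj * 128 + gj   # equation (1): H = (Kj x 128) + Gj

    def first_interpolation(h1, h2, h3, h4, x, y, m, n):
        # Equation (2): interpolate between the four center pixels Q1 to Q4.
        top = (h1 * y + h2 * x) / (x + y)
        bottom = (h3 * y + h4 * x) / (x + y)
        return (top * n + bottom * m) / (n + m)

    i = first_interpolation(104, 150, 104, 162, x=96, y=32, m=32, n=96)
    second = i - 128                 # second linear interpolation value
    ks = 1 if second >= 0 else -1    # linear interpolation polarity information Ks
    gs = abs(second)                 # linear interpolation gain correspondence value Gs
    print(i, ks, gs)                 # 140.75 1 12.75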
As shown in FIG. 1, the depth processing unit 160 includes a high-frequency component detection unit 161, a polarity adjustment unit 162, a gain adjustment unit 163, and an addition unit 164.
The high-frequency component detection unit 161 acquires the input signal Vi from the image signal output unit 10 and outputs to the polarity adjustment unit 162 the high-frequency component Vh obtained by applying HPF (High-Pass Filter) processing such as a Laplacian filter, that is, the components that form the contours and the like of the subjects in the input image Pi.
The polarity adjustment unit 162 acquires the high-frequency component Vh from the high-frequency component detection unit 161 and the linear interpolation polarity information Ks from the inter-block linear interpolation unit 150. When the linear interpolation polarity information Ks of the pixel Q corresponding to the high-frequency component Vh is "1", that is, when the high-frequency component Vh belongs to the focus area Af, the unit outputs the high-frequency component Vh to the gain adjustment unit 163 as the polarity-adjusted high-frequency component Vk without changing its polarity (sign). When the linear interpolation polarity information Ks of the pixel Q corresponding to the high-frequency component Vh is "-1", that is, when the high-frequency component Vh belongs to the unfocus area Au, the unit inverts the polarity of the high-frequency component Vh and outputs it to the gain adjustment unit 163 as the polarity-adjusted high-frequency component Vk.
The gain adjustment unit 163 acquires the linear interpolation gain correspondence value Gs of a given pixel Q from the inter-block linear interpolation unit 150, and sets the value obtained by dividing this linear interpolation gain correspondence value Gs by 64 as the gain value. It then outputs to the addition unit 164 a gain-adjusted high-frequency component Vg obtained by multiplying this gain value by the polarity-adjusted high-frequency component Vk corresponding to the pixel Q from the polarity adjustment unit 162.
The addition unit 164 acquires the input signal Vi from the image signal output unit 10 and the gain-adjusted high-frequency component Vg from the gain adjustment unit 163. It then adds the gain-adjusted high-frequency component Vg corresponding to each pixel Q to the high-frequency component Vh of the input signal Vi corresponding to that gain-adjusted high-frequency component Vg, generates the output signal Vo of the output image Po, and outputs it to the display unit 110.
Through the above processing, in the focus area Af the polarity of the high-frequency component Vh and the polarity of the gain-adjusted high-frequency component Vg are the same. The high-frequency component Vh and the gain-adjusted high-frequency component Vg are therefore added together, the high-frequency components of the focus area Af in the output image Po become larger than those in the input image Pi, and the contours in the focus area Af of the output image Po are emphasized.
In the unfocus area Au, on the other hand, the polarity of the high-frequency component Vh differs from that of the gain-adjusted high-frequency component Vg. The gain-adjusted high-frequency component Vg is therefore subtracted from the high-frequency component Vh, the high-frequency components of the unfocus area Au in the output image Po become smaller than those in the input image Pi, and the contours in the unfocus area Au of the output image Po are blurred.
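Putting the first embodiment's per-pixel depth processing together, a minimal sketch follows; the 3×3 Laplacian kernel and the use of SciPy's convolution are assumptions standing in for the unspecified HPF:

    import numpy as np
    from scipy.ndimage import convolve

    LAPLACIAN = np.array([[0, -1, 0],
                          [-1, 4, -1],
                          [0, -1, 0]])

    def depth_process(vi, ks, gs):
        # vi: input image as a 2-D array; ks: per-pixel polarity (+1 or -1);
        # gs: per-pixel linear interpolation gain correspondence value.
        vh = convolve(vi.astype(float), LAPLACIAN)  # high-frequency component Vh
        vk = ks * vh                                # polarity adjustment: sign flipped in unfocus areas
        vg = (gs / 64.0) * vk                       # gain-adjusted high-frequency component Vg
        return vi + vg                              # output Vo: contours enhanced in Af, blurred in Au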
[Operational Effects of the First Embodiment]
As described above, the first embodiment provides the following operational effects.
(1) The image processing device 120 of the display device 100 recognizes the focus area Af and the unfocus area Au in the input image Pi, performs contour enhancement processing on the focus area Af, and generates an output image Po in which blurring processing has been performed on the unfocus area Au.
By applying the contour enhancement processing and the blurring processing to the input image Pi simultaneously, the sense of depth in the output image Po can therefore be enhanced effectively.
(2) The image processing device 120 calculates the time-constant-processed gain correspondence value Gj by performing time constant processing on the gain correspondence values Gm of the input images Pi that make up a moving image and are input successively. It then generates the output image Po, subjected to the contour enhancement processing and the blurring processing, based on the time-constant-processed gain correspondence value Gj. In other words, the image processing device 120 applies time constant processing to the degrees of contour enhancement and blurring for the input image Pi, and generates an output image Po in which the contour enhancement processing and the blurring processing are applied based on these time-constant-processed degrees.
Here, when the difference in the gain correspondence value Gm between successive input images Pi is large, applying the contour enhancement or blurring processing directly based on Gm would cause the degree of enhancement or blurring to change sharply between input images Pi, and an unnatural-looking moving image might be displayed.
By contrast, applying the contour enhancement and blurring processing based on the time-constant-processed gain correspondence value Gj makes the change in the degree of enhancement or blurring between input images Pi smaller than processing based on Gm, so a moving image with less unnaturalness can be displayed.
(3) The image processing device 120 does not perform the time constant processing when it determines that the input image Pi corresponds to a scene change, and performs the time constant processing when it determines that it does not.
Here, if the time constant processing were applied at a scene change, the contour enhancement and blurring processing for a given input image Pi would be based on a time-constant-processed gain correspondence value Gj that reflects the gain correspondence value Gm of the immediately preceding, weakly related input image Pi, so an unnatural-looking moving image might be displayed.
By contrast, by not performing the time constant processing at a scene change, the contour enhancement and blurring processing can be applied to a given input image Pi without reflecting the gain correspondence value Gm of the immediately preceding, weakly related input image Pi, so a moving image with less unnaturalness can be displayed.
(4) The image processing device 120 calculates the linear interpolation gain correspondence value Gs by performing linear interpolation processing on the time-constant-processed gain correspondence values Gj for given neighboring pixels Q of the input image Pi. It then generates the output image Po, subjected to the contour enhancement processing and the blurring processing, based on the linear interpolation gain correspondence value Gs. In other words, the image processing device 120 applies linear interpolation processing to the degrees of contour enhancement and blurring for neighboring pixels Q in the input image Pi, and generates an output image Po in which the contour enhancement processing and the blurring processing are applied based on these interpolated degrees.
Here, when the difference in the time-constant-processed gain correspondence value Gj between neighboring pixels Q is large, applying the contour enhancement or blurring processing directly based on Gj would make the degree of enhancement or blurring discontinuous at the boundaries between pixels Q, and a moving image with discontinuous boundaries might be displayed.
By contrast, applying the contour enhancement and blurring processing based on the linear interpolation gain correspondence value Gs improves the continuity of the degree of enhancement and blurring at the boundaries between pixels Q compared with processing based on Gj, so a moving image with continuous boundaries can be displayed.
[Second Embodiment]
Next, a second embodiment according to the present invention will be described with reference to the drawings.
Configurations identical to those of the first embodiment are given the same names and reference numerals, and their description is omitted or simplified.
FIG. 13 is a block diagram showing a schematic configuration of the display device. FIG. 14 is a block diagram showing a schematic configuration of the focus state recognition unit. FIG. 15 is a schematic diagram for explaining the method of calculating the horizontal band centroid value and the vertical band centroid value. FIG. 16 is a schematic diagram showing the setting state of the filter horizontal band and the unsharp horizontal band. FIG. 17 is a schematic diagram showing the adjustment state of the filter horizontal band and the unsharp horizontal band for the focus area and the unfocus area.
[Configuration of the Display Device]
As shown in FIG. 13, the display device 200 includes a display unit 110 and an image processing device 220 as a calculation means.
The image processing device 220 includes a focus state recognition unit 230 and a depth processing unit 240, serving as a contour enhancement processing unit and a blurring processing unit, each configured from various programs.
As shown in FIG. 14, the focus state recognition unit 230 includes a Hadamard transform unit 131, a horizontal/vertical band centroid calculation unit 232, a blur band adjustment unit 233, a contour enhancement band adjustment unit 234, a high-band component accumulation unit 132, a low-band component accumulation unit 133, a focus degree calculation unit 134, a block-by-block accumulation unit 135, and a polarity gain determination unit 136.
The Hadamard transform unit 131 performs Hadamard transform processing on one input image Pi in units of band characteristic calculation blocks Bf, and outputs the 64 frequency band components F obtained by this processing to the horizontal/vertical band centroid calculation unit 232, the high-band component accumulation unit 132, and the low-band component accumulation unit 133.
The horizontal/vertical band centroid calculation unit 232 replaces the DC band component of the frequency band components F obtained by the Hadamard transform processing with 0 and normalizes the 64 frequency band components.
Specifically, the horizontal/vertical band centroid calculation unit 232 obtains the maximum value Max of the 64 frequency band components F, and calculates each frequency band component F multiplied by the value of 4096 divided by Max as a normalized frequency band component, as shown in FIG. 15. The numerical values inside the squares in FIG. 15 represent the normalized frequency band components. The square at the upper left corner corresponds to the DC band, so its component is replaced with 0.
The horizontal/vertical band centroid calculation unit 232 also calculates the horizontal band centroid value Jh and the vertical band centroid value Jv based on the normalized frequency band components, and outputs them to the blur band adjustment unit 233 and the contour enhancement band adjustment unit 234.
Specifically, when calculating the horizontal band centroid value Jh, the horizontal/vertical band centroid calculation unit 232 calculates, as shown in FIG. 15, vertical accumulated values by accumulating the components lined up in the vertical direction. It then calculates 61944, the sum of these vertical accumulated values, as the overall accumulated value. It also calculates vertical multiplication values by multiplying the vertical accumulated value of the T-th lowest horizontal band (T is 1 to 8) by (T - 1), and calculates 145541, the sum of these vertical multiplication values, as the vertical multiplication accumulated value. Dividing the vertical multiplication accumulated value by the overall accumulated value gives 2.35, which is calculated as the horizontal band centroid value Jh.
In the same way, the horizontal/vertical band centroid calculation unit 232 calculates 2.49 as the vertical band centroid value Jv by dividing the horizontal multiplication accumulated value, obtained by accumulating the horizontal multiplication values, by the overall accumulated value obtained by accumulating the horizontal accumulated values.
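A sketch of this centroid computation over the 8×8 normalized components (NumPy assumed); applied to the example of FIG. 15, where the overall accumulated value is 61944 and the vertical multiplication accumulated value is 145541, it reproduces Jh ≈ 2.35:

    import numpy as np

    def band_centroids(f_norm):
        # f_norm: 8x8 array of normalized frequency band components, DC at [0, 0] set to 0.
        weights = np.arange(8)              # band index T - 1 for T = 1..8
        total = f_norm.sum()                # overall accumulated value
        col = f_norm.sum(axis=0)            # vertical accumulated values, one per horizontal band
        row = f_norm.sum(axis=1)            # horizontal accumulated values, one per vertical band
        jh = (col * weights).sum() / total  # horizontal band centroid value Jh
        jv = (row * weights).sum() / total  # vertical band centroid value Jv
        return jh, jv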
Based on the horizontal band centroid value Jh and the vertical band centroid value Jv, the blur band adjustment unit 233 sets, for each band characteristic calculation block Bf, a blurring unsharp horizontal band Ub1, a blurring unsharp vertical band Ub2, a blurring filter horizontal band Ub3, and a blurring filter vertical band Ub4.
Specifically, when the level of each horizontal band of the input signal Vi is set as shown in FIG. 16, the blur band adjustment unit 233 calculates the horizontal band centroid frequency as the maximum horizontal band frequency Fmax (half the sampling frequency) multiplied by the horizontal band centroid value Jh and divided by 7. It then sets a frequency lower than the horizontal band centroid frequency by a predetermined value as the blurring unsharp horizontal band Ub1 and the blurring filter horizontal band Ub3.
In the same way, the blur band adjustment unit 233 calculates the vertical band centroid frequency as the maximum vertical band frequency Fmax multiplied by the vertical band centroid value Jv and divided by 7, and sets a frequency lower than the vertical band centroid frequency by a predetermined value as the blurring unsharp vertical band Ub2 and the blurring filter vertical band Ub4.
The blur band adjustment unit 233 then outputs the blurring unsharp horizontal band Ub1 and the blurring unsharp vertical band Ub2 to an unsharp processing unit 242 of the depth processing unit 240, described later, and the blurring filter horizontal band Ub3 and the blurring filter vertical band Ub4 to a filter processing unit 241 of the depth processing unit 240, described later.
Based on the horizontal band centroid value Jh and the vertical band centroid value Jv, the contour enhancement band adjustment unit 234 sets, for each band characteristic calculation block Bf, a contour enhancement unsharp horizontal band Ur1, a contour enhancement unsharp vertical band Ur2, a contour enhancement filter horizontal band Ur3, and a contour enhancement filter vertical band Ur4.
Specifically, when the frequency levels of the input signal Vi are set as shown in FIG. 16, the contour enhancement band adjustment unit 234 calculates the horizontal band centroid frequency in the same way as the blur band adjustment unit 233, and sets a frequency higher than the horizontal band centroid frequency by a predetermined value as the contour enhancement unsharp horizontal band Ur1. The contour enhancement unsharp horizontal band Ur1 is set so that the horizontal band centroid frequency corresponds to the centroid of the blurring filter horizontal band Ub3 and the contour enhancement unsharp horizontal band Ur1. A frequency higher than the contour enhancement unsharp horizontal band Ur1 by a predetermined value is then set as the contour enhancement filter horizontal band Ur3.
In the same way, the contour enhancement band adjustment unit 234 sets a frequency higher than the vertical band centroid frequency by a predetermined value as the contour enhancement unsharp vertical band Ur2. The contour enhancement unsharp vertical band Ur2 is set so that the vertical band centroid frequency corresponds to the centroid of the blurring filter vertical band Ub4 and the contour enhancement unsharp vertical band Ur2. A frequency higher than the contour enhancement unsharp vertical band Ur2 by a predetermined value is set as the contour enhancement filter vertical band Ur4.
The contour enhancement band adjustment unit 234 then outputs the contour enhancement unsharp horizontal band Ur1 and the contour enhancement unsharp vertical band Ur2 to the unsharp processing unit 242, and the contour enhancement filter horizontal band Ur3 and the contour enhancement filter vertical band Ur4 to the filter processing unit 241.
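The relationship between the centroid frequency and the derived bands can be sketched as follows; the single fixed offset delta is an assumption, since the text only says "by a predetermined value":

    def set_bands(j, fmax, delta):
        # j: band centroid value (Jh or Jv); fmax: maximum band frequency
        # (half the sampling frequency); delta: the assumed predetermined offset.
        centroid = fmax * j / 7.0
        ub_unsharp = ub_filter = centroid - delta  # Ub1/Ub2 and Ub3/Ub4 (blurring side)
        ur_unsharp = centroid + delta              # Ur1/Ur2; with a symmetric offset the
                                                   # centroid sits midway between Ub3/Ub4 and Ur1/Ur2
        ur_filter = ur_unsharp + delta             # Ur3/Ur4 (contour enhancement side)
        return ub_unsharp, ub_filter, ur_unsharp, ur_filter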
The polarity gain determination unit 136 outputs the polarity information Km generated by the polarity determination unit 136C to the filter processing unit 241 and the unsharp processing unit 242, and the gain correspondence value Gm set by the gain correspondence value setting unit 136D to a multiplication unit 244 of the depth processing unit 240, described later.
As shown in FIG. 13, the depth processing unit 240 includes a filter processing unit 241, an unsharp processing unit 242, a subtraction unit 243, a multiplication unit 244, and an addition unit 245.
The filter processing unit 241 acquires the input signal Vi. It also acquires, for each band characteristic calculation block Bf of the input signal Vi, the blurring filter horizontal band Ub3, the blurring filter vertical band Ub4, the contour enhancement filter horizontal band Ur3, and the contour enhancement filter vertical band Ur4. It further acquires the polarity information Km for each setting unit block Bs of the input signal Vi.
When the filter processing unit 241 recognizes, based on the polarity information Km, that a band characteristic calculation block Bf belongs to the focus area Af, it outputs to the subtraction unit 243 and the addition unit 245, as shown in FIG. 17, a filtered contour enhancement horizontal component Sx1 obtained by removing from the horizontal components of the block Bf the components at or above the contour enhancement filter horizontal band Ur3, and a filtered contour enhancement vertical component Sx2 obtained by removing from the vertical components of the block Bf the components at or above the contour enhancement filter vertical band Ur4.
When the filter processing unit 241 recognizes that a band characteristic calculation block Bf belongs to the unfocus area Au, it outputs to the subtraction unit 243 and the addition unit 245, as shown in FIG. 17, a filtered blurring horizontal component Sx3 obtained by removing from the horizontal components of the block Bf the components at or above the blurring filter horizontal band Ub3, and a filtered blurring vertical component Sx4 obtained by removing from the vertical components of the block Bf the components at or above the blurring filter vertical band Ub4.
The unsharp processing unit 242 acquires the input signal Vi, and, for each band characteristic calculation block Bf of the input signal Vi, the blurring unsharp horizontal band Ub1, the blurring unsharp vertical band Ub2, the contour enhancement unsharp horizontal band Ur1, and the contour enhancement unsharp vertical band Ur2, together with the polarity information Km for each setting unit block Bs.
When the unsharp processing unit 242 recognizes, based on the polarity information Km, that a band characteristic calculation block Bf belongs to the focus area Af, it outputs to the subtraction unit 243, as shown in FIG. 17, a masked contour enhancement horizontal component Sy1 obtained by removing from the horizontal components of the block Bf the components at or above the contour enhancement unsharp horizontal band Ur1, and a masked contour enhancement vertical component Sy2 obtained by removing from the vertical components of the block Bf the components at or above the contour enhancement unsharp vertical band Ur2.
When the unsharp processing unit 242 recognizes that a band characteristic calculation block Bf belongs to the unfocus area Au, it outputs to the subtraction unit 243 a masked blurring horizontal component Sy3 obtained by removing from the horizontal components of the block Bf the components at or above the blurring unsharp horizontal band Ub1, and a masked blurring vertical component Sy4 obtained by removing from the vertical components of the block Bf the components at or above the blurring unsharp vertical band Ub2.
As shown in FIG. 17, the subtraction unit 243 calculates a post-subtraction contour emphasis horizontal component Sg1 by subtracting the post-mask contour emphasis horizontal component Sy1 from the unsharp processing unit 242 from the post-filter contour emphasis horizontal component Sx1 from the filter processing unit 241, and outputs it to the multiplication unit 244. Likewise, the subtraction unit 243 calculates a post-subtraction contour emphasis vertical component Sg2 by subtracting the post-mask contour emphasis vertical component Sy2 from the post-filter contour emphasis vertical component Sx2, and outputs it to the multiplication unit 244. Since the post-filter contour emphasis horizontal component Sx1 differs from the post-mask contour emphasis horizontal component Sy1, and the post-filter contour emphasis vertical component Sx2 differs from the post-mask contour emphasis vertical component Sy2, neither the post-subtraction contour emphasis horizontal component Sg1 nor the post-subtraction contour emphasis vertical component Sg2 becomes zero.
The subtraction unit 243 also subtracts the post-mask blurring horizontal component Sy3 from the post-filter blurring horizontal component Sx3, and subtracts the post-mask blurring vertical component Sy4 from the post-filter blurring vertical component Sx4. Since the post-filter blurring horizontal component Sx3 equals the post-mask blurring horizontal component Sy3, and the post-filter blurring vertical component Sx4 equals the post-mask blurring vertical component Sy4, the subtraction unit 243 outputs to the multiplication unit 244 an indication that these subtraction results are zero.
The multiplication unit 244 outputs to the addition unit 245 a gain-adjusted horizontal component Sp1 obtained by multiplying the post-subtraction contour emphasis horizontal component Sg1 of the band characteristic calculation block Bf by the gain corresponding value Gm, and a gain-adjusted vertical component Sp2 obtained by multiplying the post-subtraction contour emphasis vertical component Sg2 by the gain corresponding value Gm. That is, the multiplication unit 244 outputs the gain-adjusted horizontal component Sp1 and the gain-adjusted vertical component Sp2 corresponding to the band characteristic calculation blocks Bf of the focus area Af.
For the band characteristic calculation blocks Bf of the unfocus area Au, the component from the subtraction unit 243 is zero, so the multiplication unit 244 outputs to the addition unit 245 an indication that the result of multiplying this component by the gain corresponding value Gm is zero.
The addition unit 245 acquires the components Sx1, Sx2, Sx3, and Sx4 from the filter processing unit 241. For each band characteristic calculation block Bf of the focus area Af, it adds the post-filter contour emphasis horizontal component Sx1 and the gain-adjusted horizontal component Sp1, and adds the post-filter contour emphasis vertical component Sx2 and the gain-adjusted vertical component Sp2.
For each band characteristic calculation block Bf of the unfocus area Au, the addition unit 245 adds the post-filter blurring horizontal component Sx3 and the zero output from the multiplication unit 244, and adds the post-filter blurring vertical component Sx4 and zero.
Based on these addition results for the band characteristic calculation blocks Bf, the addition unit 245 generates the output signal Vo of the output image Po and outputs it to the display unit 110.
In the focus area Af, the gain-adjusted horizontal component Sp1 and the gain-adjusted vertical component Sp2 are nonzero. The horizontal and vertical components of the focus area Af in the output image Po are therefore larger than the components Sx1 and Sx2, that is, larger than those of the input image Pi, so the contours of the focus area Af in the output image Po are emphasized.
In the unfocus area Au, the addition results are simply the post-filter blurring horizontal component Sx3 and the post-filter blurring vertical component Sx4. The horizontal and vertical components of the unfocus area Au in the output image Po are therefore smaller than those of the input image Pi, so the unfocus area Au in the output image Po is blurred.
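Putting the subtraction, multiplication, and addition stages together, the per-block combination can be sketched as follows. This is a minimal illustration under the assumption that the band-limited components Sx and Sy for a block have already been computed (for example, by a helper like the hypothetical `filter_block` above); the names and the scalar gain handling are assumptions, not the patent's implementation.
```python
def combine_block(sx, sy, gain_gm):
    """Sketch of subtraction unit 243, multiplication unit 244,
    and addition unit 245 for one band-limited component of a block.
    sx: post-filter component (Sx1..Sx4 from unit 241)
    sy: post-mask component (Sy1..Sy4 from unit 242)
    gain_gm: gain corresponding value Gm for the block"""
    sg = sx - sy       # unit 243: zero in the unfocus area,
                       # since there sx == sy by construction
    sp = gain_gm * sg  # unit 244: gain-adjusted component
    return sx + sp     # unit 245: emphasized output (focus area) or
                       # the unchanged band-limited output (unfocus)
```
In the unfocus area the subtraction result is zero, so the output reduces to the blurred post-filter component, exactly as the text above describes.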
[Effects of Second Embodiment]
As described above, the second embodiment can achieve the following operational effects in addition to effect (1) of the first embodiment.
(5) By varying the frequency band to be emphasized or blurred according to the picture content of the input image Pi, optimum enhancement processing and blurring processing can always be performed.
[Third Embodiment]
Next, a third embodiment of the present invention will be described with reference to the drawings.
Components identical to those of the first and second embodiments are given the same names and reference numerals, and their description is omitted or simplified.
FIG. 18 is a block diagram showing a schematic configuration of the display device.
[Configuration of Display Device]
As shown in FIG. 18, an image processing device 320 serving as the calculation means of a display device 300 has a configuration in which a face detection unit 321 and a gain weighting unit 322 serving as a face correspondence correction control unit are added to the image processing device 120 of the first embodiment.
The face detection unit 321 detects a human or animal face present in the input image Pi of the input signal Vi, and outputs to the gain weighting unit 322 face information Gu containing information on whether a face was detected and, if so, information on its position.
The gain weighting unit 322 acquires the linear interpolation gain corresponding value Gs from the inter-block linear interpolation unit 150 and the face information Gu from the face detection unit 321. Based on the face information Gu, it determines whether a face has been detected in the input image Pi. If a face has been detected, it calculates, as face correspondence correction information, a weighting gain corresponding value Gk by multiplying the linear interpolation gain corresponding value Gs of each pixel Q corresponding to the face by a first face correction value (for example, less than 1), and by multiplying the linear interpolation gain corresponding value Gs of each pixel Q other than the face by a second face correction value (for example, 1) that is larger than the first face correction value. If no face has been detected, it calculates the weighting gain corresponding value Gk by multiplying the linear interpolation gain corresponding value Gs of every pixel Q by the second face correction value. In other words, it corrects the linear interpolation gain corresponding value Gs of the pixels Q corresponding to a face so that it becomes smaller than when no face is detected. The gain weighting unit 322 then outputs the weighting gain corresponding value Gk to the gain adjustment unit 163.
Alternatively, the linear interpolation gain corresponding value Gs of the pixels Q corresponding to a face may be corrected to be smaller than when no face is detected by, for example, calculating the weighting gain corresponding value Gk by subtracting a predetermined value from the linear interpolation gain corresponding value Gs of only the pixels Q where a face was detected, or by adding a predetermined value to the linear interpolation gain corresponding value Gs of only the pixels Q where no face was detected.
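As a concrete illustration of this weighting, the following minimal Python sketch scales a per-pixel gain map by a face mask. The array names, the boolean-mask format, and the correction values are assumptions for illustration only.
```python
import numpy as np

def weight_gains_by_face(gs_map, face_mask,
                         first_face_corr=0.5, second_face_corr=1.0):
    """Sketch of gain weighting unit 322.
    gs_map: per-pixel linear interpolation gain values Gs
    face_mask: boolean array, True where a face was detected
    Returns the weighting gain corresponding values Gk."""
    gk = gs_map * second_face_corr  # default: non-face pixels
    gk[face_mask] = gs_map[face_mask] * first_face_corr
    return gk

# If no face was detected, face_mask is all False and every pixel
# is simply multiplied by the second face correction value.
```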
The gain adjustment unit 163 sets, as a gain value, the value obtained by dividing the weighting gain corresponding value Gk of a given pixel Q by 64, and outputs to the addition unit 164 a gain-adjusted high-frequency component Vg based on this gain value and the polarity-adjusted high-frequency component Vk.
Through the processing in the addition unit 164, for a pixel Q corresponding to a face in the focus area Af, a gain-adjusted high-frequency component Vg smaller than that of a pixel Q not corresponding to a face in the focus area Af is added to the high-frequency component Vh. The degree of contour emphasis applied to a face in the output image Po is therefore lower than that applied to anything other than a face.
Likewise, for a pixel Q corresponding to a face in the unfocus area Au, a gain-adjusted high-frequency component Vg smaller than that of a pixel Q not corresponding to a face in the unfocus area Au is subtracted from the high-frequency component Vh, so the degree of blurring applied to a face in the output image Po is lower than that applied to anything other than a face.
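The division by 64 suggests a fixed-point gain representation. A minimal sketch of this step follows; the pixel-wise application, and the handling of add versus subtract through the sign of the polarity-adjusted component, are assumptions for illustration.
```python
def gain_adjust(vk, gk):
    """Sketch of gain adjustment unit 163 for one pixel Q.
    vk: polarity-adjusted high-frequency component Vk (its sign,
        derived from the polarity information Km, is assumed here to
        determine whether the result enhances or attenuates the pixel)
    gk: weighting gain corresponding value Gk (fixed-point, /64 scale)"""
    gain = gk / 64.0
    return gain * vk  # gain-adjusted high-frequency component Vg

def add_back(vh, vg):
    """Sketch of addition unit 164: combine the high-frequency
    component Vh with Vg to emphasize or blur the pixel."""
    return vh + vg
```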
[Effects of Third Embodiment]
As described above, the third embodiment can achieve the following operational effects in addition to effects equivalent to (1) through (4) of the first embodiment.
(6) The image processing device 320 makes the degree of contour enhancement processing for a face in the focus area Af of the input image Pi lower than the degree of contour enhancement processing for the portions of the focus area Af other than the face. It further makes the degree of blurring processing for a face in the unfocus area Au lower than the degree of blurring processing for the portions of the unfocus area Au other than the face.
In general, human visual sensitivity to faces, such as those of people, is higher than to portions other than faces. Applying excessive contour enhancement or blurring to a face therefore produces an output image Po that looks unnatural.
By contrast, making the degree of contour enhancement and blurring applied to faces lower than that applied to other portions enhances the sense of depth while avoiding this unnaturalness.
[Fourth Embodiment]
Next, a fourth embodiment of the present invention will be described with reference to the drawings.
Components identical to those of the first and second embodiments are given the same names and reference numerals, and their description is omitted or simplified.
FIG. 19 is a block diagram showing a schematic configuration of the focus state recognition unit. FIG. 20 is a schematic diagram showing first to fourth regions in the input image.
[Configuration of Display Device]
As shown in FIGS. 1 and 19, an image processing device 420 serving as the calculation means of a display device 400 includes a focus state recognition unit 430 in which a screen position weight addition unit 437 serving as a position correspondence correction control unit is added to the focus state recognition unit 130 of the image processing device 120 of the first embodiment.
The screen position weight addition unit 437 acquires the focus degree Du for each band characteristic calculation block Bf from the focus degree calculation unit 134. As shown in FIG. 20, it recognizes a horizontally long elliptical first region R1 located at the center of the input image Pi, a ring-shaped second region R2 surrounding the first region R1, a ring-shaped third region R3 surrounding the second region R2, and a ring-shaped fourth region R4 surrounding the third region R3.
The screen position weight addition unit 437 then calculates, as position correspondence correction information, a weight-added focus degree Dg for the pixels Q of the first region R1 by multiplying by a first position correction value (for example, 1). It likewise calculates a weight-added focus degree Dg for the pixels Q of the second region R2 by multiplying by a second position correction value smaller than the first position correction value, for the pixels Q of the third region R3 by multiplying by a third position correction value smaller than the second position correction value, and for the pixels Q of the fourth region R4 by multiplying by a fourth position correction value smaller than the third position correction value.
The screen position weight addition unit 437 then outputs the weight-added focus degree Dg to the block accumulation unit 135.
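A minimal sketch of this center-weighted correction follows; the elliptical region test, the mapping from distance to region index, and the correction values are assumptions for illustration.
```python
import numpy as np

def position_weight(du_map, corr=(1.0, 0.8, 0.6, 0.4)):
    """Sketch of screen position weight addition unit 437.
    du_map: 2-D array of focus degrees Du
    corr: position correction values for regions R1..R4
    Returns the weight-added focus degrees Dg."""
    h, w = du_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized elliptical distance from the image center
    # (0 at the center, growing outward); iso-contours are
    # horizontally long ellipses when w > h.
    dist = np.sqrt(((xx - w / 2) / (w / 2)) ** 2
                   + ((yy - h / 2) / (h / 2)) ** 2)
    # Map distance to a region index 0..3 (R1 innermost, R4 outermost).
    region = np.clip((dist * len(corr)).astype(int), 0, len(corr) - 1)
    return du_map * np.asarray(corr)[region]
```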
The block accumulation unit 135 outputs to the polarity gain determination unit 136 a block-specific focus degree Ds obtained by accumulating the weight-added focus degree Dg for each set unit block Bs. The polarity gain determination unit 136 sets the polarity information Km and the gain corresponding value Gm for each set unit block Bs based on the block-specific focus degree Ds and outputs them to the time constant processing unit 140. When the polarity-adjusted high-frequency components Vk of the first region R1 through the fourth region R4 have the same value, the gain adjustment unit 163 outputs to the addition unit 164 a gain-adjusted high-frequency component Vg that is smaller the smaller the block-specific focus degree Ds is. That is, the gain-adjusted high-frequency component Vg is smallest in the fourth region R4 and largest in the first region R1.
Through the processing in the addition unit 164, when the polarity-adjusted high-frequency components Vk of the first region R1 through the fourth region R4 have the same value, a smaller gain-adjusted high-frequency component Vg is added to the high-frequency component Vh for pixels Q farther from the center of the input image Pi. The degree of contour emphasis in the output image Po therefore decreases with distance from the center.
Likewise, when the polarity-adjusted high-frequency components Vk of the first region R1 through the fourth region R4 have the same value, a smaller gain-adjusted high-frequency component Vg is subtracted from the high-frequency component Vh for pixels Q farther from the center of the input image Pi, so the degree of blurring in the output image Po decreases with distance from the center.
[Effects of Fourth Embodiment]
As described above, the fourth embodiment can achieve the following operational effects in addition to effects equivalent to (1) through (4) of the first embodiment.
(7) The image processing device 420 lowers the degree of contour enhancement processing with increasing distance from the center of the input image Pi.
In general, the focus area Af is often at the center of the input image Pi.
Raising the degree of enhancement processing at the center of the input image Pi therefore prevents erroneous detection of the portions to be edge-enhanced and the portions to be blurred, and improves detection accuracy.
[Modifications of the Embodiments]
The present invention is not limited to the first and second embodiments described above, and includes the following modifications within a range in which the object of the present invention can be achieved.
Specifically, as shown in FIG. 1, display devices 500 and 600 including image processing devices 520 and 620 may be used in place of the image processing device 120 of the first embodiment. The image processing devices 520 and 620 have configurations in which focus state recognition units 530 and 630 are provided in place of the focus state recognition unit 130.
As shown in FIG. 21, the focus state recognition unit 530 of the display device 500 has the same configuration as the focus state recognition unit 130 of the first embodiment except that a Fourier transform unit 531 is provided in place of the Hadamard transform unit 131.
Based on the input signal Vi of the input image Pi from the image signal output unit 10, the Fourier transform unit 531 performs Fourier transform processing in units of the band characteristic calculation blocks Bf shown, for example, in FIG. 5, and calculates the frequency band components F of the input signal Vi. Whereas the Hadamard transform processing yields frequency band components F having only positive frequency components, the Fourier transform processing yields frequency band components F having both positive and negative frequency components. The number of pixels constituting the band characteristic calculation block Bf may differ from that of the first embodiment.
The Fourier transform unit 531 then outputs the frequency band components F to the high-frequency component accumulation unit 132 and the low-frequency component accumulation unit 133.
The high-frequency component accumulation unit 132 and the low-frequency component accumulation unit 133 select only the positive frequency components of the frequency band components F. The high-frequency component accumulation unit 132 recognizes, among the selected frequency components, those whose frequency band is at or above a preset threshold as high-frequency components Fh, and outputs to the focus degree calculation unit 134 a high-frequency component accumulated value Sh obtained by accumulating the high-frequency components Fh for each band characteristic calculation block Bf. Similarly, the low-frequency component accumulation unit 133 recognizes components below the preset threshold as low-frequency components Fl and outputs a low-frequency component accumulated value Sl to the focus degree calculation unit 134.
The same operational effects as in the first embodiment can be expected from this configuration as well.
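The per-block frequency analysis can be sketched as follows. This minimal Python example (the block handling, the threshold, and the closing focus-degree ratio are assumptions for illustration, not the patent's formulas) accumulates the magnitudes of positive-frequency components above and below a threshold for one block.
```python
import numpy as np

def block_band_accumulate(block, threshold_ratio=0.5):
    """Sketch of the Fourier transform unit 531 and the high/low
    frequency component accumulation units 132 and 133 for one
    band characteristic calculation block Bf.
    Returns (Sh, Sl): accumulated high- and low-band magnitudes."""
    f = np.fft.fft2(block)
    h, w = block.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Keep only positive-frequency components, as units 132/133 do.
    positive = (fy > 0) & (fx > 0)
    radius = np.sqrt(fy ** 2 + fx ** 2)
    cutoff = threshold_ratio * radius[positive].max()
    mag = np.abs(f)
    sh = mag[positive & (radius >= cutoff)].sum()  # high band -> Sh
    sl = mag[positive & (radius < cutoff)].sum()   # low band  -> Sl
    return sh, sl

# A focus degree Du could then be formed from Sh relative to Sl,
# e.g. Du = Sh / (Sl + eps); this ratio is an assumption, since the
# exact formula of the focus degree calculation unit 134 is not
# reproduced in this passage.
```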
As shown in FIG. 22, the focus state recognition unit 630 of the display device 600 includes a high-frequency edge detection unit 631, a low-frequency edge detection unit 632, the focus degree calculation unit 134, the block accumulation unit 135, and the polarity gain determination unit 136.
The high-frequency edge detection unit 631 is an HPF (high-pass filter) or the like, and detects high-frequency components Fh such as those shown in FIG. 6 from the frequency band components F. The low-frequency edge detection unit 632 is an LPF (low-pass filter) or the like, and detects low-frequency components Fl such as those shown in FIG. 6 from the frequency band components F. The high-frequency edge detection unit 631 and the low-frequency edge detection unit 632 detect both positive and negative high-frequency components Fh and both positive and negative low-frequency components Fl, respectively.
The high-frequency edge detection unit 631 and the low-frequency edge detection unit 632 then calculate the absolute values of the high-frequency components Fh and the low-frequency components Fl and output them to the focus degree calculation unit 134.
The same operational effects as in the first embodiment can be expected from this configuration as well.
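A filter-based variant of the same high/low split might be sketched as follows; the specific convolution kernels are assumptions for illustration (any HPF/LPF pair would serve equally well).
```python
import numpy as np
from scipy.ndimage import convolve

# Illustrative kernels (assumptions): a Laplacian as the HPF and a
# box average as the LPF.
HPF_KERNEL = np.array([[0, -1, 0],
                       [-1, 4, -1],
                       [0, -1, 0]], dtype=float)
LPF_KERNEL = np.full((3, 3), 1.0 / 9.0)

def edge_components(image):
    """Sketch of high/low frequency edge detection units 631 and 632.
    Returns the absolute values of the high- and low-frequency
    components, as passed to the focus degree calculation unit 134."""
    fh = convolve(image.astype(float), HPF_KERNEL)  # signed Fh
    fl = convolve(image.astype(float), LPF_KERNEL)  # signed Fl
    return np.abs(fh), np.abs(fl)
```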
For example, in the first embodiment, at least one of the time constant processing unit 140 and the inter-block linear interpolation unit 150 may be omitted, or the time constant processing based on scene changes may not be performed.
The time constant processing and the linear interpolation processing may also be applied only to the degree of the contour enhancement processing or only to the degree of the blurring processing; a sketch of the time constant processing appears below.
Furthermore, based on the face detection, the degree may be adjusted only for the contour enhancement processing or only for the blurring processing.
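The time constant processing mentioned here (and defined in claims 4 and 5) smooths the per-block degrees over successive frames and bypasses the smoothing at scene changes. The following is a minimal sketch under those assumptions; the smoothing coefficient and the scene change flag are hypothetical.
```python
def time_constant_step(prev_gm, new_gm, scene_change, alpha=0.125):
    """Sketch of time constant processing unit 140 for one block's
    gain corresponding value Gm across successive frames.
    prev_gm: Gm output for the previous frame
    new_gm:  Gm computed for the current frame
    scene_change: True when the frames correspond to a scene change
    alpha: smoothing coefficient (assumption)"""
    if scene_change:
        # Per claims 4-5: no time constant processing at a scene
        # change, so the new value is used as-is.
        return new_gm
    # Otherwise move gradually toward the new value, so the degree of
    # enhancement or blurring changes smoothly between frames.
    return prev_gm + alpha * (new_gm - prev_gm)
```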
Although a configuration in which the image processing device of the present invention is applied to a display device has been exemplified, it may also be applied to, for example, a playback device, a recording and playback device, or a video camera.
In addition, although each function described above is constructed as a program, it may instead be configured by hardware such as a circuit board or by an element such as a single IC (Integrated Circuit), and can be used in any of these forms. A configuration in which the program is read from a separate recording medium makes handling easy and readily allows wider use.
In addition, the specific structures and procedures for carrying out the present invention may be changed as appropriate to other structures and the like within a range in which the object of the present invention can be achieved.
[Effects of the Embodiments]
As described above, in the embodiments described above, the image processing device 120 of the display device 100 recognizes the focus area Af and the unfocus area Au in the input image Pi, and generates an output image Po in which contour enhancement processing has been applied to the focus area Af and blurring processing has been applied to the unfocus area Au.
By thus applying contour enhancement processing and blurring processing to the input image Pi at the same time, the sense of depth in the output image Po can be effectively enhanced.
The present invention can be used as an image processing device, a display device, an image processing method, a program therefor, and a recording medium on which the program is recorded.

Claims (11)

  1.  An image processing apparatus comprising:
     a focus state recognition unit that recognizes a focus area in which a subject in an input image is in focus and an unfocus area in which the subject is not in focus; and
     a depth processing unit that generates an output image in which contour enhancement processing has been performed on the recognized focus area and blurring processing has been performed on the unfocus area.
  2.  The image processing apparatus according to claim 1, further comprising:
     a face detection unit that detects that the subject includes the face of at least one of a person and an animal; and
     a face correspondence correction control unit that generates face correspondence correction information indicating that at least one of the following processes is to be performed: when the focus area includes a face, a process of making the degree of contour enhancement for the face lower than the degree of contour enhancement for portions of the focus area other than the face; and, when the unfocus area includes a face, a process of making the degree of blurring for the face lower than the degree of blurring for portions of the unfocus area other than the face,
     wherein the depth processing unit generates an output image in which the at least one process based on the face correspondence correction information has been performed.
  3.  The image processing apparatus according to claim 1, further comprising:
     a position correspondence correction control unit that generates position correspondence correction information indicating that processing is to be performed to increase the degree of contour enhancement for portions of the focus area farther from the center of the input image,
     wherein the depth processing unit generates an output image in which the contour enhancement processing based on the position correspondence correction information has been performed.
  4.  The image processing apparatus according to any one of claims 1 to 3, further comprising:
     a time constant processing unit that performs time constant processing on at least one of the degree of the contour enhancement and the degree of the blurring for input images that constitute a moving image and are input successively,
     wherein the depth processing unit generates an output image in which processing based on the degree subjected to the time constant processing has been performed.
  5.  The image processing apparatus according to claim 4,
     wherein the time constant processing unit generates an output image without performing the time constant processing when determining that the successively input images correspond to a scene change of the moving image, and generates an output image in which the time constant processing has been performed when determining that they do not correspond to a scene change.
  6.  The image processing apparatus according to any one of claims 1 to 5, further comprising:
     a linear interpolation unit that performs linear interpolation processing on at least one of the degree of the contour enhancement and the degree of the blurring for predetermined adjacent portions in the input image,
     wherein the depth processing unit generates an output image in which processing based on the degree subjected to the linear interpolation processing has been performed.
  7.  A display device comprising:
     the image processing apparatus according to any one of claims 1 to 6; and
     a display unit that displays an output image generated by the image processing apparatus.
  8.  An image processing method for generating, by a calculation means, an output image in which predetermined processing has been performed on an input image, wherein the calculation means performs:
     a focus state recognition step of recognizing a focus area in which a subject in the input image is in focus and an unfocus area in which the subject is not in focus; and
     a depth processing step of generating an output image in which contour enhancement processing has been performed on the recognized focus area and blurring processing has been performed on the unfocus area.
  9.  An image processing program that causes a calculation means to execute the image processing method according to claim 8.
  10.  An image processing program that causes a calculation means to function as the image processing apparatus according to any one of claims 1 to 6.
  11.  A recording medium on which the image processing program according to claim 9 or claim 10 is recorded so as to be readable by a calculation means.
PCT/JP2008/072846 2008-12-16 2008-12-16 Image processing device, display device, image processing method, program therefor, and recording medium containing the program WO2010070732A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2008/072846 WO2010070732A1 (en) 2008-12-16 2008-12-16 Image processing device, display device, image processing method, program therefor, and recording medium containing the program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2008/072846 WO2010070732A1 (en) 2008-12-16 2008-12-16 Image processing device, display device, image processing method, program therefor, and recording medium containing the program

Publications (1)

Publication Number Publication Date
WO2010070732A1 true WO2010070732A1 (en) 2010-06-24

Family

ID=42268423

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2008/072846 WO2010070732A1 (en) 2008-12-16 2008-12-16 Image processing device, display device, image processing method, program therefor, and recording medium containing the program

Country Status (1)

Country Link
WO (1) WO2010070732A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11196294A (en) * 1998-01-05 1999-07-21 Hitachi Ltd Video signal processing unit
JP2003116022A (en) * 1993-01-19 2003-04-18 Matsushita Electric Ind Co Ltd Method for displaying image
JP2004266842A (en) * 2004-03-15 2004-09-24 Sony Corp Image processing apparatus
JP2007259404A (en) * 2006-02-21 2007-10-04 Kyocera Corp Image pickup apparatus
JP2008004996A (en) * 2006-06-20 2008-01-10 Casio Comput Co Ltd Imaging apparatus and program thereof
JP2008258830A (en) * 2007-04-03 2008-10-23 Nikon Corp Image processing apparatus, imaging apparatus, and program

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2752815A1 (en) * 2013-01-02 2014-07-09 Samsung Electronics Co., Ltd Display method and display apparatus
WO2014111871A1 (en) 2013-01-17 2014-07-24 Aurigene Discovery Technologies Limited 4,5-dihydroisoxazole derivatives as nampt inhibitors
EP4040775A1 (en) 2013-07-23 2022-08-10 Sony Group Corporation Image processing device, method of processing image, image processing program, and imaging device
US11418714B2 (en) 2013-07-23 2022-08-16 Sony Group Corporation Image processing device, method of processing image, image processing program, and imaging device
JP2017102642A (en) * 2015-12-01 2017-06-08 カシオ計算機株式会社 Image processor, image processing method and program

Similar Documents

Publication Publication Date Title
JP4152398B2 (en) Image stabilizer
US9727984B2 (en) Electronic device and method for processing an image
JP2006129236A (en) Ringing eliminating device and computer readable recording medium with ringing elimination program recorded thereon
JP2007288595A (en) Frame circulation noise reduction device
KR100791388B1 (en) Apparatus and method for improving clarity of image
CN107979712B (en) Video noise reduction method and device
JP2007088829A (en) Blurring detecting device
WO2011099202A1 (en) Signal processing device, control programme, and integrated circuit
JP2014021928A (en) Image processor, image processing method and program
WO2010070732A1 (en) Image processing device, display device, image processing method, program therefor, and recording medium containing the program
JP2010041327A (en) Noise reduction apparatus and noise reduction method
JP2009025862A (en) Image processor, image processing method, image processing program and image display device
WO2014102876A1 (en) Image processing device and image processing method
JP2015012318A (en) Image processing apparatus
JP5279830B2 (en) Video signal processing device and video display device
JP4246178B2 (en) Image processing apparatus and image processing method
JP2012032739A (en) Image processing device, method and image display device
JP2007179211A (en) Image processing device, image processing method, and program for it
KR20150122656A (en) Image processing device, image processing method
JP2015177528A (en) image processing apparatus, image processing method, and program
JP5559275B2 (en) Image processing apparatus and control method thereof
JP5131300B2 (en) Image processing apparatus, imaging apparatus, and image processing program
JP2019140605A (en) Image processing system, image processing method and program
JP5294595B2 (en) Outline enhancement method
US11159724B2 (en) Image blur correction device, control method thereof, and imaging apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08878900

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: JP

122 Ep: pct application non-entry in european phase

Ref document number: 08878900

Country of ref document: EP

Kind code of ref document: A1