WO2012153661A1 - Image correction device, image correction display device, image correction method, program, and recording medium - Google Patents

Image correction device, image correction display device, image correction method, program, and recording medium

Info

Publication number
WO2012153661A1
Authority
WO
WIPO (PCT)
Prior art keywords
correction
target area
data
input image
signal
Application number
PCT/JP2012/061447
Other languages
French (fr)
Japanese (ja)
Inventor
張 小▲忙▼
上野 雅史
宮田 英利
Original Assignee
シャープ株式会社 (Sharp Corporation)
Application filed by シャープ株式会社 (Sharp Corporation)
Publication of WO2012153661A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/14: Picture signal circuitry for video frequency region
    • H04N 5/21: Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • G06T 5/70
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/14: Picture signal circuitry for video frequency region
    • H04N 5/144: Movement detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10024: Color image

Definitions

  • the present invention relates to an image correction apparatus that corrects image data, an image correction display apparatus, and an image correction method.
  • In conventional techniques, the entire image is corrected uniformly.
  • As a result, the image quality of one part of the image is improved, but the image quality of other parts of the image may be degraded.
  • For example, noise reduction, which removes noise components contained in image data, is commonly performed. If noise reduction is applied uniformly to the entire image, natural correction is achieved for blue sky, clouds, and human skin in the image data, but detail is lost in areas such as brick and lawn. A conventional technique that corrects the entire image uniformly therefore cannot bring out the full potential of a high-performance display.
  • Patent Document 1 below discloses a technique relating to an imaging device that corrects the hue of the color information signal corresponding to a human face area, out of the color information signals of an image, based on a calculated hue correction value.
  • FIG. 13 is a block diagram showing the main components of the imaging device 90 disclosed in Patent Document 1.
  • The imaging device 90 includes a face area extraction unit 91 that extracts the color information signal corresponding to a person's face area, a hue correction value calculation unit 92 that calculates a hue correction value for correcting the hue of the color information signal corresponding to the face area, and a hue correction unit 93 that corrects the hue of the color information signal corresponding to the face area based on the hue correction value.
  • As described above, when the technique disclosed in Patent Document 1 is used, a face area in the image is identified and color correction is performed on the identified face area, so at least the face area of a person can be corrected appropriately compared with the case where the entire image is corrected uniformly.
  • Patent Document 2 discloses a technique relating to a digital camera that performs color correction on a skin color region representing the exposed skin of a person, the region being detected based on a high-luminance region, extracted from an infrared image, that indicates a high-temperature portion of the subject.
  • As described above, when the technique disclosed in Patent Document 2 is used, the area corresponding to the exposed skin of a person is identified and color correction is performed on the identified area, so at least the exposed skin region of a person can be corrected appropriately compared with the case where the entire image is corrected uniformly.
  • However, in the technique of Patent Document 1, an area whose color information signal corresponds to skin color, whose size occupies a predetermined range of the entire image, and whose aspect ratio has a predetermined value (for example, length : width ≈ 1 : 1) is determined to be a face area. It is difficult to extract only face areas by discriminating areas that satisfy these conditions. That is, in the technique described in Patent Document 1, an area that corresponds to a skin-color information signal but does not actually contain a face may be determined to be a face area.
  • In the technique of Patent Document 2, by contrast, when an area whose color information signal corresponds to skin color occupies a size in a predetermined range of the entire image, has an aspect ratio with a predetermined value (for example, length : width ≈ 1 : 1), and is also a high-luminance region indicating a high-temperature portion, the face area can be identified with very high probability.
  • The present invention has been made to solve the above problems. Its main object is to provide an image correction device that, even when the image contains motion, can extract data indicating a correction target area from image data for each characteristic, and perform correction suited to the extracted target area data.
  • In order to solve the above problems, an image correction device according to the present invention includes: motion vector detection means for detecting a motion vector between frames of input image data; target area extraction means for extracting, from the input image data, target area data including specific features, by referring to a target feature database stored in a storage unit that contains feature data indicating the features specific to areas to be extracted as target area data; and correction processing means for performing image correction on the target area data extracted by the target area extraction means in the input image data, by referring to a correction content database, stored in the storage unit, in which the correction contents for the target area data are defined. The device is characterized in that, using the motion vector detected by the motion vector detection means, the target area extraction means extracts, from the input image data of the current frame, target area data including the same specific features as the target area data extracted from the input image data of the immediately preceding frame.
  • According to this configuration, only data matching the feature data in the target feature database, that is, feature data indicating the features specific to the area (correction target area) to be extracted as target area data, is extracted from the input image data as target area data. Therefore, when a face area is extracted as target area data, for example, erroneous detection of a part of the body of the same color as the face area, or of a wall area of the same color as the face, can be prevented.
  • correction based on the correction content included in the correction content database can be performed on the extracted target area data.
  • The correction contents may suit the characteristics of the correction target area while being unsuitable for areas other than the correction target area. Since data other than the target area data is not subjected to such unsuitable correction, the unnatural output image that would result from applying the same correction uniformly to the entire input image data is prevented.
  • target area data to be corrected can be extracted from the input image data, and correction suitable for the extracted target area data can be performed.
  • Using the motion vector between the current frame and the immediately preceding frame detected by the motion vector detection means, the target area extraction means extracts, from the input image data of the current frame, target area data containing the same specific features as the target area data extracted from the input image data of the immediately preceding frame. Thus, when the target area data extracted from the input image data of the immediately preceding frame is also included in the input image data of the current frame, that target area data can be reliably extracted. In other words, the target area extraction means tracks whether the target area data extracted from the immediately preceding frame is included in the input image data of the current frame, which reduces missed target area data.
  • Moreover, even when certain target area data cannot be extracted from the current frame by feature comparison, if target area data having the same specific features was extracted from the input image data of the immediately preceding frame, the target area extraction means can still extract it based on the motion vector. Examples of target area data that cannot be extracted by feature comparison include area data that does not satisfy the predetermined relative distances between the eyes, nose, and mouth, that is, area data of a person's profile, or area data of a person's face partially hidden by an object.
  • The input image data includes moving image data, and also data in which still images and moving images are mixed, such as when a still image is displayed on part of a moving image or a moving image is displayed on part of a still image.
  • In order to solve the above problems, an image correction method according to the present invention is an image correction method for an image correction device that corrects an input image, and includes: a motion vector detection step of detecting a motion vector between frames of input image data; a target area extraction step of extracting target area data including specific features from the input image data, by referring to a target feature database, stored in a storage unit, containing feature data indicating the features specific to areas extracted as target area data; and a correction processing step of performing image correction on the extracted target area data by referring to a correction content database in which correction contents are defined.
  • As described above, the image correction device according to the present invention includes: motion vector detection means for detecting a motion vector between frames of input image data; target area extraction means for extracting target area data including specific features from the input image data by referring to a target feature database, stored in a storage unit, containing feature data indicating the features specific to the correction target area; and correction processing means for performing image correction on the target area data extracted by the target area extraction means in the input image data, by referring to a correction content database in which correction contents are defined. Using the motion vector between the current frame and the immediately preceding frame detected by the motion vector detection means, the target area extraction means extracts, from the input image data of the current frame, target area data including the same specific features as the target area data extracted from the input image data of the immediately preceding frame.
  • Therefore, target area data to be corrected can be extracted from the input image data, and correction suited to the extracted target area data can be performed.
  • FIG. 5 is a flowchart illustrating an example of the flow of correction target area extraction processing in the target area extraction unit of the image correction device shown in FIG. 2.
  • FIG. 6 is a flowchart illustrating an example of the flow of image correction processing in the correction processing unit of the image correction device shown in FIG. 2.
  • FIG. 7 shows the color difference correction applied to hue information of face area data in the CbCr coordinate system in the color difference correction processing unit of the image correction device shown in FIG. 2: (a) shows the range of the skin color hue information of the face area data in the CbCr coordinate system, and (b) shows the ranges of the hue information before and after the skin color of the face area data is corrected.
  • FIG. 8 is a block diagram showing details of the configuration of an image correction device according to a modification of one embodiment of the present invention; a further figure shows details of the configuration of an image correction device according to another modification of the same embodiment.
  • FIG. 13 is a block diagram illustrating the main components of the imaging device disclosed in Patent Document 1.
  • FIG. 1 is a block diagram illustrating a configuration of an image correction apparatus 1 according to the present embodiment.
  • FIG. 2 is a block diagram showing details of the configuration of the image correction apparatus 1 according to the present embodiment.
  • The image correction device 1 is mounted on an image correction display device, such as a television receiver or an information processing device, that includes a display unit (not shown) for displaying the input image data corrected by the image correction device 1.
  • The image correction device 1 is mounted on, for example, a television receiver or an information processing device, and corrects the image quality of input image data included in a broadcast signal or in the output signal of an image output device.
  • As shown in FIG. 2, the image correction device 1 includes a target area extraction unit 10 (target area extraction means), a correction processing unit 20 (correction processing means), a storage unit 30, a motion vector detection unit 50 (motion vector detection means), and a frame memory 51.
  • the storage unit 30 stores a target feature database 31 and a correction content database 32.
  • the image correction apparatus 1 further includes an RGB conversion unit 40 (first color space signal conversion means).
  • The correction processing unit 20 of the image correction device 1 includes a luminance correction processing unit 21 (luminance correction processing means), a color difference correction processing unit 22 (color difference correction processing means), and a noise reduction processing unit 23 (noise reduction processing means). The overall data flow through these units is sketched below.
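The following skeleton summarizes how the units of FIG. 2 hand data to one another. It is a minimal sketch, not the patented implementation; the helper functions (detect_motion_vectors, extract_target_areas, apply_corrections, ycbcr_to_rgb) are hypothetical names standing in for the units described in this section and are sketched further below.

```python
class ImageCorrectionDevice:
    """Minimal sketch of the unit layout of FIG. 2 (all names illustrative)."""

    def __init__(self, target_feature_db, correction_content_db):
        self.target_feature_db = target_feature_db          # database 31 in storage unit 30
        self.correction_content_db = correction_content_db  # database 32 in storage unit 30
        self.frame_memory = None                            # frame memory 51

    def process_frame(self, ycbcr_frame):
        # motion vector detection unit 50: compare current frame with frame memory
        mv_field = detect_motion_vectors(self.frame_memory, ycbcr_frame)
        # target area extraction unit 10: feature comparison plus motion-vector tracking
        regions = extract_target_areas(ycbcr_frame, self.target_feature_db, mv_field)
        # correction processing unit 20: units 21 (luminance), 22 (color difference), 23 (NR)
        corrected = apply_corrections(ycbcr_frame, regions, self.correction_content_db)
        self.frame_memory = ycbcr_frame                     # keep the frame for the next pass
        # RGB conversion unit 40: convert corrected YCbCr data to RGB output image data
        return ycbcr_to_rgb(corrected)
```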
  • the target area extraction unit 10 is means for extracting correction target area data (target area data) indicating a target to be corrected from input image data.
  • Specifically, the target area extraction unit 10 refers to the target feature database 31 stored in the storage unit 30, and extracts from the input image data, as correction target area data, data whose values fall within the range values indicated by the feature data predetermined in the target feature database 31, that is, the feature data indicating the features specific to an area to be extracted as correction target area data (a correction target area).
  • the target feature database 31 will be described later.
  • In addition, based on the motion vector supplied from the motion vector detection unit 50 described later, the target area extraction unit 10 extracts, as correction target area data, area data in the currently input image data that has the same specific features as the correction target area data extracted from the input image data input immediately before. That is, the target area extraction unit 10 uses the motion vector to track whether area data having the same specific features as the correction target area data extracted from the immediately preceding input image data is included in the currently input image data, and extracts it as correction target area data if it is. A specific method for extracting correction target area data will be described later.
  • The correction processing unit 20 is means for performing image quality correction processing (hereinafter also referred to as target area correction processing) on the correction target area data. Specifically, for the correction target area data extracted by the target area extraction unit 10, the correction processing unit 20 determines the correction contents most suitable for the target indicated by that data by referring to the correction content database 32 stored in the storage unit 30, and executes the target area correction processing based on the determined contents.
  • Here, the target area correction processing means adjusting at least one of the luminance signal, which includes luminance information (a brightness value) indicating the degree of brightness of the correction target area data, and the color difference signal, which quantitatively indicates the perceptual color difference of the correction target area data. The color difference signal includes hue information (a hue value) indicating the hue of the correction target area data, that is, the attribute that characterizes the color, and saturation information (a saturation value) indicating the vividness of the correction target area data.
  • the correction content database 32 will be described later.
  • The luminance correction processing unit 21 is provided in the correction processing unit 20 and performs correction processing on the luminance information included in the luminance signal of the correction target area data extracted by the target area extraction unit 10. Specifically, by correcting the luminance information in the correction target area data, the luminance correction processing unit 21 performs thick contour emphasis correction that emphasizes thick contours (for example, thick edges such as a face outline), fine contour emphasis correction that emphasizes thin, sharp contours (for example, thin and sharp edges such as eyelashes), and texture emphasis correction that emphasizes texture (for example, fine edges such as lawn and brick).
  • The luminance correction processing unit 21 preferably includes band-pass filters and a high-pass filter.
  • For example, the luminance correction processing unit 21 may perform thick contour emphasis correction using a band-pass filter, fine contour emphasis correction using a band-pass filter with a higher pass band than the one used for thick contour emphasis, and texture emphasis correction using a high-pass filter.
  • However, the configuration of the luminance correction processing unit 21 is not limited to this. One way to realize such a filter bank is sketched below.
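As a concrete illustration of the filter arrangement above, the following sketch approximates the two band-pass filters as differences of Gaussians and the high-pass filter as the residual of a Gaussian blur. The sigma and gain values are illustrative assumptions, not values from the specification.

```python
import numpy as np
from scipy import ndimage

def emphasize_luminance(y, kind):
    """y: 2-D luminance plane as floats in [0, 1]; kind selects the correction."""
    if kind == "thick_contour":        # thick edges, e.g. a face outline
        detail = ndimage.gaussian_filter(y, 2.0) - ndimage.gaussian_filter(y, 4.0)
        gain = 0.8
    elif kind == "fine_contour":       # thin, sharp edges, e.g. eyelashes
        # band-pass with a higher pass band than the thick-contour filter
        detail = ndimage.gaussian_filter(y, 0.7) - ndimage.gaussian_filter(y, 1.4)
        gain = 0.6
    else:                              # "texture": fine edges, e.g. lawn or brick
        detail = y - ndimage.gaussian_filter(y, 1.0)   # high-pass residual
        gain = 0.4
    return np.clip(y + gain * detail, 0.0, 1.0)        # add scaled detail back
```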
  • The color difference correction processing unit 22 is provided in the correction processing unit 20 and performs correction processing on the hue information and the saturation information included in the color difference signal of the correction target area data extracted by the target area extraction unit 10. Specifically, the color difference correction processing unit 22 performs hue correction processing on the hue information of the correction target area data, and saturation correction processing on the saturation information of the correction target area data.
  • the value of the hue information included in the correction target area data is corrected to a value within an appropriate range that is predetermined for the specific feature of the correction target area data.
  • Specific features include, for example, face, lawn, and blue sky.
  • Examples of the hue correction processing include skin color hue correction that brings the facial hue into an appropriate range, blue sky hue correction, and lawn hue correction, but the processing is not limited to these.
  • The saturation correction processing is performed by multiplying the color difference signal by a positive coefficient.
  • When the coefficient is larger than 1, the color is corrected to become more vivid; when the coefficient is smaller than 1, the color is corrected to become paler.
  • When the coefficient is 1, saturation correction is effectively not performed.
  • In equation (1), which expresses the color difference signal in polar form, r is the saturation and θ is the hue; a sketch follows.
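In the polar form of the CbCr plane that r and θ refer to (r = √(Cb² + Cr²), θ = atan2(Cr, Cb)), multiplying the color difference signal by a coefficient k scales r while leaving θ unchanged. A minimal sketch, assuming zero-centered Cb/Cr values; equation (1) itself is not reproduced in this excerpt.

```python
import numpy as np

def adjust_saturation(cb, cr, k):
    """Scale saturation by k in the CbCr plane (cb, cr centered around 0).

    r = np.hypot(cb, cr) is the saturation and theta = np.arctan2(cr, cb)
    is the hue; scaling both components by k multiplies r by k and leaves
    theta unchanged. k > 1 makes colors more vivid, 0 < k < 1 makes them
    paler, and k = 1 leaves the signal untouched.
    """
    return k * cb, k * cr
```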
  • The noise reduction processing unit 23 removes noise in the luminance signal and the color difference signal included in the correction target area data. Specifically, the noise reduction processing unit 23 removes the noise in the correction target area data by removing noise in the luminance signal (flicker, graininess, and the like) and noise in the color difference signal (spurious color information and the like).
  • the noise reduction processing unit 23 preferably includes a low-pass filter or a median filter.
  • The median filter arranges the density values of the pixels within a mask of size n × n (n is a natural number; for example, 3 × 3 or 5 × 5) in ascending order, and removes noise by setting the median as the output density of the target pixel.
  • The larger the mask size, the greater the noise removal effect of the median filter.
  • The low-pass filter removes noise by weighted averaging: a coefficient is set for each pixel of the n × n mask so that the coefficients sum to 1 and the coefficient of the target pixel is the maximum. When the coefficients are uniform, the noise removal effect is greatest; when the coefficient of the target pixel is 1 and the other coefficients are 0, the noise removal effect is lost.
  • For a flat target area, filter coefficients with a strong noise removal effect may be used; for a target area with many edges, filter coefficients with a weak noise removal effect may be used.
  • In this way, when the noise reduction processing unit 23 uses a low-pass filter, correction by weighted average processing can be performed on the correction target area data, and when it uses a median filter, correction that removes fine noise can be performed. Both options are sketched below.
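A minimal sketch of the two options, using SciPy's median filter for flat target areas and a weak, center-weighted averaging kernel for edge-rich ones. The mask sizes and coefficients are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def reduce_noise(channel, flat_region=True):
    """channel: one 2-D plane (luminance or color difference)."""
    if flat_region:
        # flat target areas (e.g. blue sky, skin): median filter with a
        # larger mask for a stronger noise removal effect
        return ndimage.median_filter(channel, size=5)
    # edge-rich target areas (e.g. lawn, brick): weighted average with a
    # dominant center coefficient, so fine detail is largely preserved
    kernel = np.array([[1.0, 1.0, 1.0],
                       [1.0, 8.0, 1.0],
                       [1.0, 1.0, 1.0]])
    kernel /= kernel.sum()          # coefficients sum to 1
    return ndimage.convolve(channel, kernel)
```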
  • The storage unit 30 stores the target feature database 31, which is referred to when the target area extraction unit 10 extracts correction target area data, and the correction content database 32, which is referred to when the correction processing unit 20 executes the target area correction processing.
  • The storage unit 30 also serves as a computer-readable recording medium that records a program for operating a computer as the image correction device 1, that is, a program that causes the computer to function as each unit of the image correction device 1.
  • the RGB conversion unit 40 converts the color space signal of the color system indicating the input image data corrected by the correction processing unit 20 into a color space signal of another color system and outputs it as output image data.
  • Specifically, the RGB conversion unit 40 converts the color space signal (Y, Cb, Cr) of the input image data, expressed in the YCbCr color space, into a color space signal (R, G, B) of the RGB color system.
  • In the present embodiment, the RGB conversion unit 40 is described, by way of example, as converting a color space signal expressed in the YCbCr color space into a color space signal of the RGB color system.
  • However, the present invention is not limited to this.
  • For example, a configuration that converts to a color space signal of the CIE L*a*b* color system may be adopted. A sketch of the YCbCr-to-RGB case follows.
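A minimal sketch of the conversion performed by the RGB conversion unit 40, assuming 8-bit full-range BT.601 coefficients; the specification does not fix a particular conversion matrix.

```python
import numpy as np

def ycbcr_to_rgb(y, cb, cr):
    """Convert 8-bit YCbCr planes to an RGB image (BT.601 full range assumed)."""
    y, cb, cr = (p.astype(np.float64) for p in (y, cb, cr))
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```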
  • the motion vector detection unit 50 is a means for detecting a motion vector between frames from input image data.
  • Specifically, the motion vector detection unit 50 detects a motion vector, including information such as moving speed, moving direction, and moving distance, from the difference between the input image data of the current frame and the input image data of the immediately preceding frame stored in the frame memory 51.
  • the motion vector detection unit 50 supplies the detected motion vector to the target region extraction unit 10.
  • the frame memory 51 functions as a storage device that temporarily stores input image data in units of frames.
  • the input image data stored at a certain time in the frame memory 51 is read out by the motion vector detecting unit 50 when the input image data stored immediately after the certain time is input. That is, when the input image data of the current frame is input, the input image data of the immediately previous frame stored in the frame memory 51 is read by the motion vector detection unit 50.
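One common way to realize such inter-frame detection is full-search block matching between the frame read from the frame memory 51 and the current frame. The sketch below makes that assumption; the specification does not prescribe a particular detection algorithm, and the block and search sizes are illustrative.

```python
import numpy as np

def block_motion_vector(prev_y, cur_y, top, left, block=16, search=8):
    """Full-search block matching on luminance planes.

    Returns the displacement (dy, dx) of the block at (top, left) in the
    previous frame that best matches the current frame; the vector gives
    the moving direction and distance, and its magnitude per frame
    interval gives the moving speed.
    """
    ref = prev_y[top:top + block, left:left + block].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            t, l = top + dy, left + dx
            if t < 0 or l < 0 or t + block > cur_y.shape[0] or l + block > cur_y.shape[1]:
                continue                       # candidate block out of bounds
            cand = cur_y[t:t + block, l:l + block].astype(np.int32)
            sad = np.abs(cand - ref).sum()     # sum of absolute differences
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv
```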
  • FIG. 3 is a diagram showing an example of the target feature database 31 in the present embodiment.
  • FIG. 4 is a diagram showing an example of the correction content database 32 in the present embodiment.
  • As shown in FIG. 3, the target feature database 31 is a database in which a plurality of pieces of feature information for areas to be extracted as correction target area data from the input image data are set in advance: for example, face feature information referred to when extracting a human face, blue sky feature information referred to when extracting blue sky, and lawn feature information referred to when extracting lawn.
  • In the face feature information, for example, a range of hue information values indicating the skin color of the face, a range of saturation information values indicating the saturation of the face, a range of luminance information values indicating the brightness of the face, and a range for the relative distance relationships of facial features such as the eyes, nose, and mouth are set.
  • In the blue sky feature information, for example, a range of hue information values, a range of saturation information values, a range of luminance information values, a range for the high-frequency component, and a range for the position on the screen are set.
  • For the high-frequency component range in the blue sky feature information, a threshold indicating that the proportion of high-frequency components is below a certain level (a small value indicating that high-frequency components other than those caused by noise are essentially absent) may be set, and for the on-screen position range, a value specifying that the area is located above a predetermined position on the screen may be set.
  • Similarly, in the lawn feature information, a range of hue information values, a range of saturation information values, a range of luminance information values, a range for the high-frequency component, and a range for the position on the screen are set.
  • For the high-frequency component range in the lawn feature information, a threshold indicating that the proportion of high-frequency components is at or above a certain level may be set, and for the on-screen position range, a value specifying that the area is located below a predetermined position on the screen may be set. A sketch of such a database follows.
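The target feature database 31 can be pictured as a small structured table following the fields just described. Every numeric range below is an illustrative placeholder; FIG. 3's actual values are not reproduced in this excerpt.

```python
# Illustrative sketch of target feature database 31 (all numbers are placeholders).
TARGET_FEATURE_DB = {
    "face": {
        "hue_range": (110.0, 150.0),         # skin-color hue, degrees in the CbCr plane
        "saturation_range": (0.1, 0.5),
        "luminance_range": (60, 230),
        "eye_nose_mouth_distance_ratio": (0.8, 1.25),   # relative distance relationships
    },
    "blue_sky": {
        "hue_range": (200.0, 260.0),
        "saturation_range": (0.2, 0.9),
        "luminance_range": (100, 255),
        "high_freq_ratio_max": 0.05,         # at or below: few high-frequency components
        "position": "upper",                 # above a predetermined screen position
    },
    "lawn": {
        "hue_range": (70.0, 160.0),
        "saturation_range": (0.2, 0.9),
        "luminance_range": (30, 200),
        "high_freq_ratio_min": 0.2,          # at or above: many high-frequency components
        "position": "lower",                 # below a predetermined screen position
    },
}
```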
  • The correction contents included in the correction content database include at least one of luminance correction contents indicating corrections to luminance information, color difference correction contents indicating corrections to color difference information, and noise reduction contents indicating corrections to both luminance information and color difference information.
  • As shown in FIG. 4, in the correction content database 32, correction content information is predetermined, such as face correction contents referred to when correcting a person's face in the correction target area data, blue sky correction contents referred to when correcting blue sky, and lawn correction contents referred to when correcting lawn.
  • values within an appropriate range are determined in advance as correction content information for the unique features of each correction target area data.
  • In the face correction contents, for example, the following are set: hue correction processing on the hue information of the parts other than the eyes, nose, and mouth, correcting (for example, compressing) the hue into the appropriate range of a standard skin color; hue correction on the hue information of the lip part, selecting the correcting color from several predetermined lip colors; edge emphasis on the eyes, nose, and mouth; and edge emphasis on the eyelashes.
  • the blue sky correction content is set such that the hue information indicating the blue sky is corrected to the appropriate range of the standard blue, and the noise reduction is performed on the hue information indicating the blue sky.
  • the lawn correction contents are set such that the hue information indicating the lawn is corrected to a proper range of standard green, and the high frequency component is emphasized for the hue information indicating the lawn.
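The correction content database 32 can be sketched the same way, pairing each specific feature with the luminance, color difference, and noise reduction contents described above. The entries are illustrative placeholders, not FIG. 4's actual contents.

```python
# Illustrative sketch of correction content database 32 (entries are placeholders).
CORRECTION_CONTENT_DB = {
    "face": {
        "hue": "compress skin hue into the standard skin-color range; "
               "replace lip hue with the closest of several preset lip colors",
        "luminance": ["thick_contour_emphasis",    # eyes, nose, mouth outlines
                      "fine_contour_emphasis"],    # eyelashes
        "noise_reduction": "flat",                 # smooth skin areas
    },
    "blue_sky": {
        "hue": "correct into the appropriate range of a standard blue",
        "luminance": [],                           # no contour emphasis
        "noise_reduction": "flat",                 # noise reduction on sky hue
    },
    "lawn": {
        "hue": "correct into the appropriate range of a standard green",
        "luminance": ["texture_emphasis"],         # emphasize high-frequency components
        "noise_reduction": None,                   # pass through; keep fine texture
    },
}
```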
  • In this way, correction of luminance information, correction of hue information and saturation information, or correction of all three can each be performed as needed according to the characteristics of the correction target area, so that natural correction can be applied to the input image data.
  • For correction processing that is unnecessary or unsuitable given the characteristics of the correction target area, the corresponding processing unit may simply pass the signal through without performing correction.
  • Since the target area extraction unit 10 and the correction processing unit 20 can perform their processing based on luminance information and color difference information, the luminance signal and color difference signal constituting the input image data can be processed without first being converted to a color space signal of a color system different from the one in which they are represented.
  • The luminance signal and color difference signal processed by the correction processing unit 20 can then be converted into a color space signal of a different color system and output as output image data.
  • FIG. 5 is a flowchart illustrating an example of the flow of the correction target area extraction process in the target area extraction unit 10 included in the image correction apparatus 1 according to the present embodiment.
  • the target area extraction unit 10 acquires the face feature information, the blue sky feature information, and the lawn feature information shown in FIG. 3 from the target feature database 31 as the feature information.
  • the feature information acquired by the target region extraction unit 10 in the present invention is not limited to this.
  • the target region extraction unit 10 may adopt a configuration in which which feature information is acquired can be set in advance by a setting unit (not shown).
  • the target area extracting unit 10 acquires feature information defined in the target feature database 31 stored in the storage unit 30 (step S1).
  • the target region extraction unit 10 compares the luminance information, the hue information, and the saturation information included in the luminance signal and the color difference signal of the supplied input image data with the feature information (step S2).
  • Based on the comparison result, the target area extraction unit 10 determines whether the area data indicated by the luminance information, hue information, and saturation information included in the luminance signal and color difference signal of the input image data has values within the respective range values of the face feature information (step S3).
  • If the target area extraction unit 10 determines that the area data indicated by the luminance information, hue information, and saturation information in the input image data has values within the respective range values of the face feature information (YES in step S3), it determines whether the features of the eyes, nose, and mouth can be extracted from that area data (step S4).
  • If the eye, nose, and mouth features can be extracted (YES in step S4), the target area extraction unit 10 determines whether the relative distance relationships between the eyes, nose, and mouth are values within the range values of the face feature information (step S5).
  • If they are (YES in step S5), the target area extraction unit 10 extracts, as face area data to be corrected, the area data in the input image data whose luminance information, hue information, and saturation information have values within the respective range values of the face feature information and whose eye, nose, and mouth relative distance relationships are values within the range values of the face feature information (step S6).
  • If the target area extraction unit 10 determines that the eye, nose, and mouth features cannot be extracted (NO in step S4), or determines that the relative distance relationships between the eyes, nose, and mouth are not within the range values of the face feature information (NO in step S5), or has extracted face area data to be corrected, it determines whether the luminance information, hue information, and saturation information of the supplied input image data have been compared with all the feature information (step S7).
  • If it determines that the luminance information, hue information, and saturation information of the supplied input image data have not been compared with all the feature information (NO in step S7), the target area extraction unit 10 repeats the processing from step S2.
  • For areas in the input image data whose area data indicated by the luminance information, hue information, and saturation information is determined not to have values within the range values of the face feature information (NO in step S3), the target area extraction unit 10 determines whether that area data has values within the respective range values of the blue sky feature information (step S8).
  • If the target area extraction unit 10 determines that the area data indicated by the luminance information, hue information, and saturation information in the input image data has values within the respective range values of the blue sky feature information (YES in step S8), it determines whether high-frequency components are included in that area data (step S9). For example, the proportion of high-frequency components included in the area data may be compared with the threshold set as the high-frequency component range of the blue sky feature information, and if the proportion is at or below the threshold, it may be determined that high-frequency components are not included.
  • If it determines that high-frequency components are not included, the target area extraction unit 10 determines whether the area data has a value within the range values indicating that it is located in the upper part of the image represented by the input image data (step S10). For example, the determination in step S10 may be made by determining whether a predetermined percentage or more of the area data determined not to contain high-frequency components lies above the center of the image represented by the input image data.
  • If the area data determined not to contain high-frequency components has a value within the range values indicating that it is located in the upper part of the image represented by the input image data (YES in step S10), the target area extraction unit 10 extracts, as blue sky area data to be corrected, the area data whose luminance information, hue information, and saturation information have values within the respective range values of the blue sky feature information, which contains no high-frequency components, and which is located in the upper part of the image represented by the input image data (step S11).
  • If the area data does not have a value within the range values indicating that it is located in the upper part of the image (NO in step S10), or after the blue sky area data to be corrected has been extracted, the target area extraction unit 10 again determines whether the luminance information, hue information, and saturation information of the supplied input image data have been compared with all the feature information (step S7).
  • If it determines that the luminance information, hue information, and saturation information included in the luminance signal and color difference signal of the supplied input image data have not been compared with all the feature information (NO in step S7), the target area extraction unit 10 repeats the processing from step S2.
  • For areas in the input image data determined not to be within the range values of the face feature information (NO in step S3) and not within the respective range values of the blue sky feature information (NO in step S8), the target area extraction unit 10 determines whether the area data indicated by the luminance information, hue information, and saturation information has values within the respective range values of the lawn feature information (step S12).
  • If the target area extraction unit 10 determines that the area data indicated by the luminance information, hue information, and saturation information in the input image data has values within the respective range values of the lawn feature information (YES in step S12), it determines whether high-frequency components are included in that area data (step S13). For example, the proportion of high-frequency components included in the area data may be compared with the threshold set for the high-frequency component range of the lawn feature information, and if the proportion is at or above the threshold, it may be determined that high-frequency components are included.
  • When it determines that high-frequency components are included (YES in step S13), the target area extraction unit 10 determines whether the area data determined to include high-frequency components has a value within the range values indicating that it is located in the lower part of the image represented by the input image data (step S14). For example, the determination in step S14 may be made by determining whether a predetermined percentage or more of that area data lies below the center of the image represented by the input image data.
  • If it does (YES in step S14), the target area extraction unit 10 extracts, as lawn area data to be corrected, the area data whose luminance information, hue information, and saturation information have values within the respective range values of the lawn feature information, which includes high-frequency components, and which is located in the lower part of the image represented by the input image data (step S15).
  • If the area data is determined not to have values within the respective range values of the lawn feature information (NO in step S12), if it is determined that high-frequency components are not included (NO in step S13), if the area data is determined not to be located in the lower part of the image (NO in step S14), or after the lawn area data to be corrected has been extracted, the target area extraction unit 10 again determines whether the luminance information, hue information, and saturation information of the supplied input image data have been compared with all the feature information (step S7).
  • If it determines that the luminance information, hue information, and saturation information included in the luminance signal and color difference signal of the supplied input image data have not been compared with all the feature information (NO in step S7), the target area extraction unit 10 repeats the processing from step S2.
  • If the comparison with all the feature information is complete (YES in step S7), the target area extraction unit 10 executes the tracking extraction processing of the correction target area data (step S16). The tracking extraction processing of the correction target area data will be described later.
  • the target area extraction unit 10 ends the correction target area extraction process after executing the tracking extraction process of the correction target area data.
  • luminance information, hue information, and saturation information are first compared with face feature information, then compared with blue sky feature information, and further compared with lawn feature information.
  • However, the order of comparison is not limited to this and may be changed as appropriate. A condensed sketch of the comparison flow (steps S2 to S15) follows.
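The comparison flow of steps S2 to S15 can be condensed into a single classification routine applied to each candidate area. This is a minimal sketch under two assumptions: per-area statistics have already been measured, and the database has the illustrative layout sketched earlier; all names are hypothetical.

```python
def classify_region(stats, db):
    """stats: measured values for one candidate area of the current frame.
    Expected keys: hue, saturation, luminance, high_freq_ratio,
    center_y (0.0 = top of the image, 1.0 = bottom), and landmark_ratio
    (eye/nose/mouth relative distance measure, or None if not found).
    db: a dict with the layout of the TARGET_FEATURE_DB sketch above."""
    def in_range(value, bounds):
        low, high = bounds
        return low <= value <= high

    def matches_color(kind):
        f = db[kind]
        return (in_range(stats["hue"], f["hue_range"])
                and in_range(stats["saturation"], f["saturation_range"])
                and in_range(stats["luminance"], f["luminance_range"]))

    # steps S3-S6: face = color match plus eye/nose/mouth distance relation
    if matches_color("face"):
        ratio = stats.get("landmark_ratio")
        if ratio is not None and in_range(ratio, db["face"]["eye_nose_mouth_distance_ratio"]):
            return "face"
        return None        # skin-colored but not a face (NO at step S4 or S5)

    # steps S8-S11: blue sky = color match, few high-freq components, upper part
    if matches_color("blue_sky"):
        if (stats["high_freq_ratio"] <= db["blue_sky"]["high_freq_ratio_max"]
                and stats["center_y"] < 0.5):
            return "blue_sky"
        return None

    # steps S12-S15: lawn = color match, many high-freq components, lower part
    if matches_color("lawn"):
        if (stats["high_freq_ratio"] >= db["lawn"]["high_freq_ratio_min"]
                and stats["center_y"] > 0.5):
            return "lawn"
    return None
```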
  • the target area extraction unit 10 tracks whether or not the correction target area data extracted from the input image data of the immediately preceding frame is included in the input image data of the current frame using the motion vector. If so, the correction target area data is extracted. Note that the input image data of the immediately preceding frame is also referred to as immediately preceding input image data.
  • Next, the tracking extraction processing of the correction target area data (step S16 shown in FIG. 5) executed by the target area extraction unit 10 will be described.
  • First, using the motion vector, the target area extraction unit 10 generates, from the correction target area data extracted from the immediately preceding input image data, tracking correction target area data that may be present in the current input image data.
  • In the following, the correction target area data extracted from the immediately preceding input image data and the tracking correction target area data generated from it are also referred to as the immediately preceding correction target area data.
  • Next, the target area extraction unit 10 compares the correction target area data extracted in the correction target area extraction processing (steps S1 to S15 shown in FIG. 5), which extracts correction target area data by referring to the target feature database 31, with the immediately preceding correction target area data. That is, the target area extraction unit 10 determines whether the immediately preceding correction target area data contains area data other than area data having the same specific features as the correction target area data extracted in the correction target area extraction processing.
  • If the immediately preceding correction target area data contains no area data other than area data having the same specific features as the correction target area data extracted from the input image data of the current frame by the correction target area extraction processing, the target area extraction unit 10 ends the tracking extraction processing of the correction target area data.
  • If the immediately preceding correction target area data does contain correction target area data other than area data having the same specific features as the correction target area data extracted from the input image data of the current frame by the correction target area extraction processing, the target area extraction unit 10 treats that data as candidates for correction target area data to be extracted from the input image data of the current frame.
  • Using the motion vector, the target area extraction unit 10 then determines whether area data having the same specific features as each correction target area data candidate exists in the input image data of the current frame. When it determines that such area data exists, the target area extraction unit 10 extracts that area data as correction target area data.
  • In this way, the target area extraction unit 10 performs the tracking extraction processing of the correction target area data.
  • Thus, the target area extraction unit 10 can track, based on the motion vector, whether the correction target area data included in the input image data of the (n−1)th frame (where n is an integer greater than or equal to 2) is included in the input image data of the nth frame. When area data having the same specific features as the target area data included in the (n−1)th input image data is included in the input image data of the nth frame, the target area extraction unit 10 can extract that data from the input image data of the nth frame as correction target area data, as sketched below.
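A minimal sketch of this tracking step: each correction target area from frame n−1 is shifted by its motion vector and kept as an extraction result for frame n if the feature-based pass did not already cover it. The rectangle representation of areas and the overlap test are illustrative assumptions.

```python
def track_target_areas(prev_regions, motion_vectors, current_regions):
    """prev_regions: [(label, (top, left, height, width)), ...] from frame n-1.
    motion_vectors: {index_into_prev_regions: (dy, dx)} from the motion
    vector detection unit. current_regions: areas already extracted from
    frame n by feature comparison. Returns current_regions plus the
    tracked areas the feature-based pass missed."""
    def overlaps(a, b):
        (at, al, ah, aw), (bt, bl, bh, bw) = a, b
        return not (at + ah <= bt or bt + bh <= at or
                    al + aw <= bl or bl + bw <= al)

    tracked = list(current_regions)
    for i, (label, (top, left, height, width)) in enumerate(prev_regions):
        dy, dx = motion_vectors.get(i, (0, 0))
        moved = (top + dy, left + dx, height, width)   # area shifted into frame n
        # keep the shifted area only if no already-extracted area of the
        # same label covers it (e.g. a face that has turned to profile)
        if not any(lab == label and overlaps(moved, box) for lab, box in tracked):
            tracked.append((label, moved))
    return tracked
```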
  • In the present embodiment, the target area extraction unit 10 performs the tracking extraction processing of the correction target area data after executing the correction target area extraction processing that extracts correction target area data by referring to the target feature database.
  • However, the target area extraction unit 10 may instead execute the correction target area extraction processing after executing the tracking extraction processing of the correction target area data.
  • In that case, area data having the same specific features as the immediately preceding correction target area data is first extracted from the current input image data, and the correction target area extraction processing then extracts correction target area data only from the area data not already extracted by the tracking extraction processing. Since this reduces the amount of area data to be examined in the correction target area extraction processing, the load on the target area extraction unit 10 for that processing can be reduced.
  • FIG. 6 is a flowchart illustrating an example of the flow of target area correction processing in the correction processing unit 20 of the image correction apparatus 1 according to the present embodiment.
  • a target area correction process performed on the correction target face area data extracted by the target area extraction unit 10 will be described as an example.
  • First, the correction processing unit 20 acquires the correction content information defined in the correction content database 32 stored in the storage unit 30 (step S21).
  • the correction processing unit 20 supplies information regarding correction for luminance information and correction target area data among the acquired correction content information to the luminance correction processing unit 21. In addition, the correction processing unit 20 supplies information regarding correction for hue information and saturation information and correction target area data among the acquired correction content information to the color difference correction processing unit 22. Further, the correction processing unit 20 supplies information regarding noise removal (noise reduction) and correction target area data among the acquired correction content information to the noise reduction processing unit 23.
  • Next, the luminance correction processing unit 21 performs luminance correction processing on the luminance information included in the correction target area data extracted by the target area extraction unit 10 (step S22). For example, when the correction target area data is face area data, the luminance correction processing unit 21 refers to the face correction content information, performs thick contour emphasis correction that emphasizes the thick contours of the eyes, nose, and mouth, and performs fine contour emphasis correction that emphasizes thin, sharp contours such as the eyelashes. Note that the luminance correction processing unit 21 need not perform correction when no correction content for the luminance information is defined in the correction content information. The luminance correction processing unit 21 supplies the input image data to the noise reduction processing unit 23 regardless of whether the luminance correction processing has been performed.
  • the color difference correction processing unit 22 performs a hue correction for correcting the hue information included in the correction target region data extracted by the target region extraction unit 10 (step S23).
  • For example, when the correction target area data is face area data, the color difference correction processing unit 22 refers to the face correction content information and performs the correction calculation using the above-described equation (1), thereby correcting the hue of the parts other than the eyes, nose, and mouth into the appropriate range of the standard skin color.
  • For the lip part, the color difference correction processing unit 22 refers to the face correction content information and performs hue correction by selecting, with reference to the lip color of the original image, the most natural lip color from the predetermined colors and applying it.
  • the color difference correction processing unit 22 does not need to perform correction when the correction content for the hue information is not defined in the correction content information.
  • FIG. 7 is a diagram showing the color difference correction applied to the hue information of the face area data in the CbCr coordinate system.
  • FIG. 7(a) is a diagram showing the range of the skin color hue information of the face area data in the CbCr coordinate system.
  • FIG. 7(b) is a diagram showing the ranges of the hue information before and after the skin color of the face area data is corrected in the CbCr coordinate system.
  • As shown in FIG. 7(a), the hue θ given by the hue information of the face area data takes values in the range from θ1 − Δθ1 to θ1 + Δθ2, centered on θ1, and the saturation r given by the saturation information takes values in the range from r1 to r2. Letting a be the intersection of θ1 + Δθ2 and r1, b the intersection of θ1 − Δθ1 and r1, c the intersection of θ1 − Δθ1 and r2, and d the intersection of θ1 + Δθ2 and r2, the skin color hue information of the face area data takes values within the region abcd.
  • As shown in FIG. 7(b), the color difference correction processing unit 22 corrects the value of the hue θ into the range from θ1 − Δθ1′ to θ1 + Δθ2′, centered on θ1 (where Δθ1 ≥ Δθ1′ and Δθ2 ≥ Δθ2′). That is, letting e be the intersection of θ1 + Δθ2′ and r1, f the intersection of θ1 − Δθ1′ and r1, g the intersection of θ1 − Δθ1′ and r2, and h the intersection of θ1 + Δθ2′ and r2, the color difference correction processing unit 22 performs the correction calculation using equation (1) so that hue information with values in the region abcd is corrected to values in the region efgh.
  • In this way, the hue value can be corrected so as to stay within the coordinate region of the CbCr coordinate system in which the input image data remains a natural image. A sketch of this fan-shaped compression follows.
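A minimal sketch of the fan-shaped compression in the CbCr plane, implemented as a linear remapping of the hue offset from θ1 so that the region abcd maps onto efgh. Leaving pixels outside the original fan unchanged is an assumption, since equation (1) itself is not reproduced in this excerpt.

```python
import numpy as np

def compress_hue(cb, cr, theta1, d1, d2, d1p, d2p):
    """Map hues in [theta1 - d1, theta1 + d2] onto the narrower range
    [theta1 - d1p, theta1 + d2p] (d1p <= d1, d2p <= d2, all in radians,
    d1 and d2 assumed positive), preserving the saturation r.
    cb, cr are zero-centered arrays."""
    r = np.hypot(cb, cr)                    # saturation
    theta = np.arctan2(cr, cb)              # hue
    offset = theta - theta1
    # compress the positive and negative sides of the fan separately
    scaled = np.where(offset >= 0, offset * (d2p / d2), offset * (d1p / d1))
    inside = (offset >= -d1) & (offset <= d2)   # only remap pixels in abcd
    theta_out = np.where(inside, theta1 + scaled, theta)
    return r * np.cos(theta_out), r * np.sin(theta_out)
```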
  • the color difference correction processing unit 22 performs saturation correction for correcting the saturation information included in the correction target region data extracted by the target region extraction unit 10 (step S24). For example, when the correction target area data is face area data, the color difference correction processing unit 22 performs correction with reference to the face correction content information. Note that the color difference correction processing unit 22 may not perform correction when the correction content for the saturation information is not defined.
  • Next, the noise reduction processing unit 23 performs noise reduction, that is, correction that removes noise in the luminance information and color difference information included in the correction target area data extracted by the target area extraction unit 10 (step S25). For example, when the correction target area data is face area data, the noise reduction processing unit 23 performs correction by referring to the face correction content information. Note that the noise reduction processing unit 23 need not perform correction when no noise reduction content is defined in the correction content information.
  • the correction processing unit 20 determines whether or not the target region correction processing has been performed on all the correction target region data extracted by the target region extraction unit 10 (step S26).
  • If it determines that the target area correction processing has not been performed on all the correction target area data extracted by the target area extraction unit 10 (NO in step S26), the correction processing unit 20 executes the processing of steps S22 to S26 on the correction target area data on which the target area correction processing has not yet been performed.
  • That is, the correction processing unit 20 repeats steps S22 to S26 until it determines that the target area correction processing has been performed on all the correction target area data extracted by the target area extraction unit 10.
  • When the target area correction processing has been performed on all the correction target area data (YES in step S26), the correction processing unit 20 ends the target area correction processing.
  • As described above, when the image correction device 1 extracts correction target area data from the input image data, only data matching the feature data in the target feature database 31, that is, feature data indicating the features specific to the correction target area, is extracted as correction target area data. Since only input image data applicable to the feature data is extracted as correction target area data, when a face area is extracted as correction target area data, for example, erroneous detection of a part of the body of the same color as the face area, or of a wall area of the same color as the face, can be prevented.
  • the correction processing unit 20 can perform correction based on the correction content included in the correction content database 32 on the extracted correction target region data.
  • The correction contents may suit the characteristics of the correction target area while being unsuitable for areas other than the correction target area. Since inadequate correction is not performed on data other than the correction target area data, the unnatural output image that would result from applying the same correction uniformly to the entire input image data can be prevented.
As described above, the image correction apparatus 1 can extract the correction target area data to be corrected from the input image data and perform correction suited to the extracted correction target area data. Furthermore, a device such as a television receiver or an information processing device equipped with the image correction apparatus 1 according to this embodiment can display the input image data corrected by the image correction apparatus 1, and can therefore display a more natural image.
In addition, based on the motion vector detected by the motion vector detection unit 50, the target area extraction unit 10 can extract, from the input image data of the current frame, target area data having the same specific features as the target area data extracted from the input image data of the immediately preceding frame. That is, the target area extraction unit 10 tracks whether the target area data extracted from the input image data of the immediately preceding frame is included in the input image data currently being input, and can extract that target area data when it is included. Furthermore, if target area data that cannot be extracted from the current frame by referring to the target feature database was extracted from the input image data of the immediately preceding frame, the target area extraction unit 10 can extract that target area data based on the motion vector.
Examples of target area data that cannot otherwise be extracted include area data that does not match the defined relative distances between the eyes, nose, and mouth, that is, area data of a person's profile, and area data of a person's face that has been partially hidden by a crossing object. For example, when the target area extraction unit 10 refers to a target feature database containing feature data for the front of a person's face, area data of a person's profile cannot be extracted as correction target area data by that reference alone. Even in this case, the target area extraction unit 10 can extract the profile area data as correction target area data based on the motion vector supplied from the motion vector detection unit 50.
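As a rough illustration of this tracking, the sketch below shifts the pixel mask of a region extracted in the previous frame by the detected motion vector to obtain the corresponding region in the current frame; the function name and the one-vector-per-region simplification are assumptions for this sketch, not the patent's method.

```python
import numpy as np

def track_region(prev_mask, motion_vec):
    """Shift the boolean mask of a target region found in the previous
    frame by the detected motion vector (dy, dx) so the region can still
    be treated as a correction target in the current frame, even when
    feature matching alone fails (e.g. the face has turned to a profile
    or is partly hidden by a crossing object)."""
    dy, dx = motion_vec
    mask = np.zeros_like(prev_mask)
    h, w = prev_mask.shape
    ys, xs = np.nonzero(prev_mask)
    ys = np.clip(ys + dy, 0, h - 1)   # clip keeps shifted pixels in frame
    xs = np.clip(xs + dx, 0, w - 1)
    mask[ys, xs] = True
    return mask
```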
FIG. 8 is a block diagram showing the details of the configuration of the image correction apparatus 1a according to this modification. Except that the image correction apparatus 1a includes a correction processing unit 20a in place of the correction processing unit 20 and an RGB conversion unit 40a in place of the RGB conversion unit 40, its configuration is the same as that of the image correction apparatus 1.
The correction processing unit 20a is means for performing the target area correction process on the correction target area data. Specifically, for the correction target area data extracted by the target area extraction unit 10 based on the target feature database 31 and the motion vector, the correction processing unit 20a determines the correction content most suitable for the target indicated by the correction target area data by referring to the correction content database 32 stored in the storage unit 30, and executes the target area correction process based on the determined content.

The correction processing unit 20a includes a luminance correction processing unit 21a, a color difference correction processing unit 22a, and a noise reduction processing unit 23a.

The luminance correction processing unit 21a is provided in the correction processing unit 20a and executes correction processing on the luminance information included in the luminance signal of the correction target area data extracted from the input image data by the target area extraction unit 10. The luminance correction processing unit 21a then supplies the input image data in which the correction target area data has been corrected to the RGB conversion unit 40a.

The color difference correction processing unit 22a is provided in the correction processing unit 20a and executes correction processing on the hue information and the saturation information included in the color difference signal of the correction target area data extracted from the input image data by the target area extraction unit 10. The color difference correction processing unit 22a likewise supplies the input image data in which the correction target area data has been corrected to the RGB conversion unit 40a.

The RGB conversion unit 40a converts the color space signal of the color system representing the input image data, in which the correction target area data has been corrected by the luminance correction processing unit 21a and the color difference correction processing unit 22a, into a color space signal of another color system. For example, the RGB conversion unit 40a converts the color space signal (Y, Cb, Cr) of the input image data, input in the color system represented by the YCbCr color space, into the color space signal (R, G, B) of the RGB color system. The RGB conversion unit 40a then supplies the input image data of the converted color space signal to the noise reduction processing unit 23a.
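The patent does not state which conversion matrix such a unit uses; the sketch below assumes full-range ITU-R BT.601 coefficients, and includes the inverse direction for reference, since a YCbCr conversion unit performing it appears in a later modification.

```python
import numpy as np

def ycbcr_to_rgb(y, cb, cr):
    """YCbCr -> RGB, assuming full-range ITU-R BT.601 coefficients
    (the patent does not specify the conversion matrix)."""
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

def rgb_to_ycbcr(rgb):
    """RGB -> YCbCr, the direction performed by a unit such as the
    YCbCr conversion unit 41 (same BT.601 assumption)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) / 1.772
    cr = (r - y) / 1.402
    return y, cb, cr
```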
The noise reduction processing unit 23a is provided in the correction processing unit 20a and removes, from the input image data supplied from the RGB conversion unit 40a, the noise in the luminance signal and the color difference signal included in the correction target area data. The noise reduction processing unit 23a then outputs the input image data from which the noise has been removed as output image data.
(Correction target area extraction processing, target area correction processing)

The correction target area extraction process in this modification is the same as the correction target area extraction process in the first embodiment shown in FIG. 5, and a description thereof is therefore omitted.

The target area correction process in this modification is the same as the target area correction process according to the first embodiment shown in FIG. 6, except that it includes a step, between the saturation correction process (step S24) and the noise reduction process (step S25), in which the RGB conversion unit 40a converts the color space signal of the color system representing the input image data into a color space signal of another color system.

Thereby, the noise reduction processing unit 23a can perform the noise reduction process using a color space signal of the color system converted from the color system represented by the luminance signal and the color difference signal.
FIG. 9 is a block diagram showing the details of the configuration of the image correction apparatus 1b according to this modification. The image correction apparatus 1b has the same configuration as the image correction apparatus 1 according to the first embodiment except that it includes a correction processing unit 20b in place of the correction processing unit 20.
The correction processing unit 20b is means for performing the target area correction process on the correction target area data. Specifically, for the correction target area data extracted by the target area extraction unit 10 based on the target feature database 31 and the motion vector, the correction processing unit 20b determines the correction content most suitable for the target indicated by the correction target area data by referring to the correction content database 32 stored in the storage unit 30, and executes the target area correction process based on the determined content.

The correction processing unit 20b includes a luminance correction processing unit 21b, a color difference correction processing unit 22b, and a noise reduction processing unit 23b.

The noise reduction processing unit 23b is provided in the correction processing unit 20b and removes, from the input image data, the noise in the luminance signal and the color difference signal of the correction target area data extracted by the target area extraction unit 10. The noise reduction processing unit 23b then supplies the input image data from which the noise has been removed to the luminance correction processing unit 21b and the color difference correction processing unit 22b.

The luminance correction processing unit 21b is provided in the correction processing unit 20b and executes correction processing on the luminance information included in the luminance signal of the correction target area data extracted by the target area extraction unit 10, out of the input image data supplied from the noise reduction processing unit 23b. The luminance correction processing unit 21b then supplies the input image data in which the correction target area data has been corrected to the RGB conversion unit 40b.

The color difference correction processing unit 22b is provided in the correction processing unit 20b and executes correction processing on the hue information and the saturation information included in the color difference signal of the correction target area data extracted by the target area extraction unit 10, out of the input image data supplied from the noise reduction processing unit 23b. The color difference correction processing unit 22b likewise supplies the input image data in which the correction target area data has been corrected to the RGB conversion unit 40b.
(Correction target area extraction processing, target area correction processing)

The correction target area extraction process in this modification is the same as the correction target area extraction process in the first embodiment shown in FIG. 5, and a description thereof is therefore omitted.

The target area correction process in this modification is the same as the target area correction process according to the first embodiment shown in FIG. 6, except that the luminance correction process (step S22), the hue correction process (step S23), and the saturation correction process (step S24) are performed after the noise reduction process (step S25).

Thereby, the correction processing unit 20b can perform the correction of the luminance signal and the correction of the color difference signal after the noise reduction process has been performed in the noise reduction processing unit 23b.
FIG. 10 is a block diagram showing the details of the configuration of the image correction apparatus 2 according to this modification. The image correction apparatus 2 has the same configuration as the image correction apparatus 1 according to the first embodiment except that it further includes a YCbCr conversion unit 41 (second color space signal conversion means).

The YCbCr conversion unit 41 converts the color space signal of the color system representing the input image data into a color space signal of another color system. For example, the YCbCr conversion unit 41 converts a color space signal of the RGB color system representing the input image data into a color space signal of the color system represented by the YCbCr color space. The YCbCr conversion unit 41 then supplies the input image data of the converted color space signal to the target area extraction unit 10, the correction processing unit 20, the motion vector detection unit 50, and the frame memory 51.
(Correction target area extraction processing, target area correction processing)

The correction target area extraction process and the target area correction process in this modification are the same as the correction target area extraction process in the first embodiment shown in FIG. 5 and the target area correction process in the first embodiment shown in FIG. 6, and a description thereof is therefore omitted.

According to the above configuration, even when the input image data is a color space signal of a color system different from the color system represented by the YCbCr color space, that is, different from the color system represented by the luminance signal and the color difference signal, the YCbCr conversion unit 41 can convert it into the luminance signal and the color difference signal representing the YCbCr color space. Therefore, the input image data can be corrected by correcting the luminance signal and the color difference signal, regardless of the color system of the color space signal in which the input image data is input.
FIG. 11 is a block diagram showing the details of the configuration of the image correction apparatus 2a according to this modification. The image correction apparatus 2a has the same configuration as the image correction apparatus 1 according to the first embodiment except that it includes a correction processing unit 20a in place of the correction processing unit 20 and an RGB conversion unit 40a in place of the RGB conversion unit 40, and further includes a YCbCr conversion unit 41.
The YCbCr conversion unit 41 converts the color space signal of the color system representing the input image data into a color space signal of another color system. For example, the YCbCr conversion unit 41 converts a color space signal of the RGB color system representing the input image data into a color space signal of the color system represented by the YCbCr color space. The YCbCr conversion unit 41 then supplies the input image data of the converted color space signal to the target area extraction unit 10, the correction processing unit 20a, the motion vector detection unit 50, and the frame memory 51.
The correction processing unit 20a is means for performing the target area correction process on the correction target area data. Specifically, for the correction target area data extracted by the target area extraction unit 10 based on the target feature database 31 and the motion vector, the correction processing unit 20a determines the correction content most suitable for the target indicated by the correction target area data by referring to the correction content database 32 stored in the storage unit 30, and executes the target area correction process based on the determined content.

The correction processing unit 20a includes a luminance correction processing unit 21a, a color difference correction processing unit 22a, and a noise reduction processing unit 23a.

The luminance correction processing unit 21a is provided in the correction processing unit 20a and executes correction processing on the luminance information included in the luminance signal of the correction target area data extracted from the input image data by the target area extraction unit 10. The luminance correction processing unit 21a then supplies the input image data in which the correction target area data has been corrected to the RGB conversion unit 40a.

The color difference correction processing unit 22a is provided in the correction processing unit 20a and executes correction processing on the hue information and the saturation information included in the color difference signal of the correction target area data extracted from the input image data by the target area extraction unit 10. The color difference correction processing unit 22a likewise supplies the input image data in which the correction target area data has been corrected to the RGB conversion unit 40a.

The RGB conversion unit 40a converts the color space signal of the color system representing the input image data, in which the correction target area data has been corrected by the luminance correction processing unit 21a and the color difference correction processing unit 22a, into a color space signal of another color system. For example, the RGB conversion unit 40a converts the color space signal (Y, Cb, Cr) of the input image data, input in the color system represented by the YCbCr color space, into the color space signal (R, G, B) of the RGB color system. The RGB conversion unit 40a then supplies the input image data of the converted color space signal to the noise reduction processing unit 23a.

The noise reduction processing unit 23a is provided in the correction processing unit 20a and removes, from the input image data supplied from the RGB conversion unit 40a, the noise in the luminance signal and the color difference signal included in the correction target area data. The noise reduction processing unit 23a then outputs the input image data from which the noise has been removed as output image data.
(Correction target area extraction processing, target area correction processing)

The correction target area extraction process in this modification is the same as the correction target area extraction process in the first embodiment shown in FIG. 5, and a description thereof is therefore omitted.

The target area correction process in this modification is the same as the target area correction process according to the first embodiment shown in FIG. 6, except that it includes a step, between the saturation correction process (step S24) and the noise reduction process (step S25), in which the RGB conversion unit 40a converts the color space signal of the color system representing the input image data into a color space signal of another color system.
FIG. 12 is a block diagram showing the details of the configuration of the image correction apparatus 2b according to this modification. The image correction apparatus 2b has the same configuration as the image correction apparatus 1 according to the first embodiment except that it includes a correction processing unit 20b in place of the correction processing unit 20 and further includes a YCbCr conversion unit 41.
The YCbCr conversion unit 41 converts the color space signal of the color system representing the input image data into a color space signal of another color system. For example, the YCbCr conversion unit 41 converts a color space signal of the RGB color system representing the input image data into a color space signal of the color system represented by the YCbCr color space. The YCbCr conversion unit 41 then supplies the input image data of the converted color space signal to the target area extraction unit 10, the correction processing unit 20b, the motion vector detection unit 50, and the frame memory 51.

The correction processing unit 20b is means for performing the target area correction process on the correction target area data. Specifically, for the correction target area data extracted by the target area extraction unit 10 based on the target feature database 31 and the motion vector, the correction processing unit 20b determines the correction content most suitable for the target indicated by the correction target area data by referring to the correction content database 32 stored in the storage unit 30, and executes the target area correction process based on the determined content.

The correction processing unit 20b includes a luminance correction processing unit 21b, a color difference correction processing unit 22b, and a noise reduction processing unit 23b.

The noise reduction processing unit 23b is provided in the correction processing unit 20b and removes, from the input image data, the noise in the luminance signal and the color difference signal of the correction target area data extracted by the target area extraction unit 10. The noise reduction processing unit 23b then supplies the input image data from which the noise has been removed to the luminance correction processing unit 21b and the color difference correction processing unit 22b.

The luminance correction processing unit 21b is provided in the correction processing unit 20b and executes correction processing on the luminance information included in the luminance signal of the correction target area data extracted by the target area extraction unit 10, out of the input image data supplied from the noise reduction processing unit 23b. The luminance correction processing unit 21b then supplies the input image data in which the correction target area data has been corrected to the RGB conversion unit 40b.

The color difference correction processing unit 22b is provided in the correction processing unit 20b and executes correction processing on the hue information and the saturation information included in the color difference signal of the correction target area data extracted by the target area extraction unit 10, out of the input image data supplied from the noise reduction processing unit 23b. The color difference correction processing unit 22b likewise supplies the input image data in which the correction target area data has been corrected to the RGB conversion unit 40b.
(Correction target area extraction processing, target area correction processing)

The correction target area extraction process in this modification is the same as the correction target area extraction process in the first embodiment shown in FIG. 5, and a description thereof is therefore omitted.

The target area correction process in this modification is the same as the target area correction process according to the first embodiment shown in FIG. 6, except that the luminance correction process (step S22), the hue correction process (step S23), and the saturation correction process (step S24) are performed after the noise reduction process (step S25).
Each block of the image correction apparatus 1 may be realized in hardware by a logic circuit formed on an integrated circuit (IC chip), or may be realized in software using a CPU (Central Processing Unit).
In the latter case, the image correction apparatus 1 includes a CPU that executes the instructions of a program realizing each function, a ROM (Read Only Memory) that stores the program, a RAM (Random Access Memory) into which the program is expanded, and a storage device (recording medium) such as a memory that stores the program and various data.
An object of the present invention can also be achieved by supplying the image correction apparatus 1 with a recording medium on which the program code (an executable program, an intermediate code program, or a source program) of the control program of the image correction apparatus 1, which is software realizing the functions described above, is recorded in a computer-readable manner, and by having the computer (or a CPU or an MPU) read and execute the program code recorded on the recording medium.
Examples of the recording medium include tapes such as magnetic tapes and cassette tapes; disks including magnetic disks such as floppy (registered trademark) disks and hard disks, and optical disks such as CD-ROM, MO, MD, DVD, and CD-R; cards such as IC cards (including memory cards); semiconductor memories such as mask ROM, EPROM, EEPROM, and flash ROM; and logic circuits such as PLDs (Programmable Logic Devices) and FPGAs (Field Programmable Gate Arrays).
Alternatively, the program code may be supplied to the image correction apparatus 1 via a communication network. The communication network is not particularly limited as long as it can transmit the program code. For example, the Internet, an intranet, an extranet, a LAN, an ISDN, a VAN, a CATV communication network, a virtual private network, a telephone line network, a mobile communication network, a satellite communication network, and the like can be used. The transmission medium constituting the communication network may also be any medium that can transmit the program code, and is not limited to a specific configuration or type.
For example, wired media such as IEEE 1394, USB, power line carrier, cable TV lines, telephone lines, and ADSL (Asymmetric Digital Subscriber Line) lines, and wireless media such as infrared links including IrDA and remote controls, Bluetooth (registered trademark), IEEE 802.11 wireless, HDR (High Data Rate), NFC (Near Field Communication), DLNA (Digital Living Network Alliance), mobile phone networks, satellite lines, and terrestrial digital networks can be used.
As described above, the image correction apparatus according to the present invention includes: motion vector detection means for detecting a motion vector between frames of input image data; target area extraction means for extracting, from the input image data, target area data including specific features by referring to a target feature database that is stored in a storage unit and includes feature data indicating features specific to an area to be extracted as target area data; and correction processing means for performing image correction on the target area data extracted from the input image data by the target area extraction means, by referring to a correction content database that is stored in the storage unit and in which correction contents for the target area data are defined. The target area extraction means extracts, from the input image data of the current frame, target area data including the same specific features as the target area data extracted from the input image data of the immediately preceding frame, using the motion vector between the current frame and the immediately preceding frame detected by the motion vector detection means.
According to the above configuration, when the target area data is extracted from the input image data, data corresponding to the feature data, included in the target feature database, that indicates features specific to the correction target area can be extracted as the target area data. Since only input image data matching the feature data is extracted as the target area data, it is possible, for example when a face area is extracted as the target area data, to prevent erroneous detection of a body part of the same color as the face area or a wall area of the same color as the face.
In addition, correction based on the correction content included in the correction content database can be performed on the extracted target area data. The content of such a correction suits the characteristics of the correction target area, but may be unsuitable for areas other than the correction target area. Since no unsuitable correction is applied to data other than the target area data, the problem in which the same correction is applied uniformly to the entire input image data and produces an unnatural output image can be prevented. Therefore, the target area data to be corrected can be extracted from the input image data, and correction suited to the extracted target area data can be performed.
Furthermore, by using the motion vector between the current frame and the immediately preceding frame detected by the motion vector detection means, the target area extraction means extracts, from the input image data of the current frame, target area data including the same specific features as the target area data extracted from the input image data of the immediately preceding frame; thus, when the target area data extracted from the immediately preceding frame is included in the input image data of the current frame, that target area data can be reliably extracted. In other words, the target area extraction means tracks whether the target area data extracted from the input image data of the immediately preceding frame is included in the input image data of the current frame, and extracts it when it is; omissions in extracting the target area data can therefore be reduced.
Consequently, even when there is target area data that cannot be extracted from the input image data of the current frame by referring to the target feature database, if that target area data is included in the input image data of the immediately preceding frame, the target area extraction means can use the motion vector to extract target area data having the same specific features as the target area data that could not otherwise be extracted. Accordingly, using the motion vector reduces both omissions and erroneous extractions of target area data. Note that when the feature database defines, as feature data, for example the relative distance relationship between the eyes, nose, and mouth of a human face viewed from the front, examples of target area data that cannot otherwise be extracted include area data that does not match the defined relative distances between the eyes, nose, and mouth, that is, area data of a person's profile, and area data of a person's face partially hidden by a crossing object.
The input image data includes moving image data, and also includes combinations of still image data and moving image data, such as when a still image is displayed on a part of a moving image or a moving image is displayed on a part of a still image.
In the image correction apparatus according to the present invention, it is preferable that the target feature database includes a plurality of types of feature data, that the correction content database defines correction contents for each of a plurality of types of target area data, that the target area extraction means extracts, as the plurality of types of target area data corresponding to the plurality of types of feature data, target area data having the same specific features as the target area data extracted from the input image data of the frame immediately before the current frame by using the motion vector, and that the correction processing means performs the image correction corresponding to each piece of target area data extracted by the target area extraction means.
According to the above configuration, using the target feature database and the motion vector, the correction processing means can perform correction suited to each of the specific features of the plurality of types of target area data extracted by the target area extraction means. The correction processing means can thus appropriately correct a plurality of correction target areas having different specific features, and can therefore perform more natural correction on the input image data.
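As an illustration of how correction contents might be registered per type of target area, a hypothetical correction content database could be a simple lookup table; the region kind "face" follows the patent's own example, while the other kind and all parameter names are invented for this sketch.

```python
# Hypothetical correction content database: one entry per kind of target
# region, each bundling luminance, color difference, and noise reduction
# contents. Parameter names and values are invented for illustration.
CORRECTION_CONTENT_DB = {
    "face":       {"y_gain": 1.05, "y_offset": 0.02, "hue_shift": -0.03,
                   "sat_gain": 1.10, "noise_filter": "lowpass"},
    "background": {"y_gain": 1.00, "y_offset": 0.00, "hue_shift": 0.00,
                   "sat_gain": 1.00, "noise_filter": None},  # leave detail intact
}
```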
In the image correction apparatus according to the present invention, it is preferable that the target area data includes a luminance signal and a color difference signal, and that the correction content includes at least one of luminance correction content indicating correction content for the luminance signal, color difference correction content indicating correction content for the color difference signal, and noise reduction content indicating correction content for both the luminance signal and the color difference signal.
In the image correction apparatus according to the present invention, it is preferable that the correction processing means includes luminance correction processing means for correcting the luminance signal of the target area data based on the luminance correction content, color difference correction processing means for correcting the color difference signal of the target area data based on the color difference correction content, and noise reduction processing means for correcting, based on the noise reduction content, both the luminance signal supplied from the luminance correction processing means and the color difference signal supplied from the color difference correction processing means, and that the apparatus further includes first color space signal conversion means for converting the luminance signal and the color difference signal of the input image data supplied from the correction processing means into a color space signal of a color system different from the color system represented by the luminance signal and the color difference signal.
According to the above configuration, since the correction processing means includes the luminance correction processing means, the color difference correction processing means, and the noise reduction processing means, correction of the luminance value indicated by the luminance signal, correction of the hue value and the saturation value indicated by the color difference signal, and correction of the luminance value, the hue value, and the saturation value together can each be performed in accordance with the features of the correction target area, so that the input image data can be corrected naturally. Furthermore, since the target area extraction means and the correction processing means can perform their processing based on the luminance signal and the color difference signal, the luminance signal and the color difference signal constituting the input image data can be processed without being converted into a color space signal of a color system different from the color system they represent.
Note that the color system represented by the color space signal of the luminance signal and the color difference signal is, for example, the color system represented by the YCbCr color space. As color space signals of different color systems, for example, color space signals of the CIE L*a*b* color system (an L* signal, an a* signal, and a b* signal) and color space signals of the RGB color system (R (red), G (green), and B (blue)) can be used.
Since the apparatus includes the first color space signal conversion means, the luminance signal and the color difference signal processed by the correction processing means can be converted into a color space signal of a color system different from the color system represented by the luminance signal and the color difference signal (for example, an RGB signal of the RGB color system) and then output as output image data.
In the image correction apparatus according to the present invention, it is also preferable that the correction processing means includes luminance correction processing means for correcting the luminance signal of the target area data based on the luminance correction content, color difference correction processing means for correcting the color difference signal of the target area data based on the color difference correction content, and noise reduction processing means for correcting, based on the noise reduction content, the color space signal supplied from the first color space signal conversion means, and that the first color space signal conversion means converts the luminance signal and the color difference signal of the input image data supplied from the correction processing means into a color space signal of a color system different from the color system represented by the luminance signal and the color difference signal.
According to the above configuration, the noise reduction processing means can perform the noise reduction process using a color space signal of the color system converted from the color system represented by the luminance signal and the color difference signal.
In the image correction apparatus according to the present invention, it is also preferable that the correction processing means includes noise reduction processing means for correcting the luminance signal and the color difference signal based on the noise reduction content, luminance correction processing means for correcting the luminance signal supplied from the noise reduction processing means based on the luminance correction content, and color difference correction processing means for correcting the color difference signal supplied from the noise reduction processing means based on the color difference correction content, and that the apparatus further includes first color space signal conversion means for converting the luminance signal and the color difference signal of the input image data supplied from the correction processing means into a color space signal of a color system different from the color system represented by the luminance signal and the color difference signal.
According to the above configuration, the luminance signal and the color difference signal can be corrected after the noise reduction process has been performed.
It is preferable that the image correction apparatus according to the present invention further includes second color space signal conversion means for converting a color space signal of input image data, input as a color space signal of a color system different from the color system represented by the luminance signal and the color difference signal, into the luminance signal and the color difference signal.
According to the above configuration, even when the input image data is a color space signal of a color system different from the color system represented by the YCbCr color space, that is, by the luminance signal and the color difference signal, it can be converted into the luminance signal and the color difference signal representing the YCbCr color space. Therefore, the input image data can be corrected by correcting the luminance signal and the color difference signal, regardless of the color system of the input color space signal.
In the image correction apparatus according to the present invention, it is preferable that the luminance correction processing means is a band-pass filter and a high-pass filter. According to the above configuration, the band-pass filter can correct data indicating the contour of the specific target included in the target area data, and the high-pass filter can correct data indicating the texture of the specific target included in the target area data. As a result, the necessary correction can be performed on each kind of data included in the target area data, which prevents the problem in which the same correction is applied to the entire target area data, resulting in an unnatural output image.
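A minimal sketch of such a luminance correction follows; the difference-of-Gaussians band-pass, the blur-residual high-pass, and the gain values are illustrative choices, not filters taken from the patent.

```python
import numpy as np
from scipy import ndimage

def enhance_luminance(y, contour_gain=0.5, texture_gain=0.3):
    """Illustrative luminance correction with a band-pass and a high-pass
    component (gains are invented). The band-pass output, a difference of
    two Gaussian blurs, responds mainly to contours; the high-pass output,
    the residual of a single blur, responds mainly to fine texture."""
    blur_fine = ndimage.gaussian_filter(y, sigma=1.0)
    blur_coarse = ndimage.gaussian_filter(y, sigma=3.0)
    band_pass = blur_fine - blur_coarse     # contour component
    high_pass = y - blur_fine               # texture component
    return np.clip(y + contour_gain * band_pass + texture_gain * high_pass,
                   0.0, 1.0)
```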
In the image correction apparatus according to the present invention, it is preferable that the color difference correction processing means performs a correction operation on the saturation and the hue in the CbCr coordinate system. According to the above configuration, the saturation and hue values can be corrected within the appropriate range of the coordinate area in which the input image data appears as a natural image in the CbCr coordinate system.
In the image correction apparatus according to the present invention, the noise reduction processing means is preferably a low-pass filter or a median filter. According to the above configuration, when the noise reduction processing means is a low-pass filter, the target area data can be corrected by a weighted-average process, and when the noise reduction processing means is a median filter, correction that removes fine noise can be performed.
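For example, both variants can be sketched as follows; the 3x3 kernel weights and the filter size are common choices rather than values from the patent.

```python
import numpy as np
from scipy import ndimage

# 3x3 weighted-average (low-pass) kernel; the weights are one common
# choice, not ones taken from the patent.
LOWPASS_KERNEL = np.array([[1, 2, 1],
                           [2, 4, 2],
                           [1, 2, 1]], dtype=float) / 16.0

def reduce_noise(plane, method="lowpass"):
    """Noise reduction over one signal plane (Y, Cb, or Cr): a weighted
    average for the low-pass case, a median filter for fine noise."""
    if method == "lowpass":
        return ndimage.convolve(plane, LOWPASS_KERNEL, mode="nearest")
    return ndimage.median_filter(plane, size=3)
```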
In the image correction apparatus according to the present invention, it is preferable that the color difference signal includes hue information and saturation information, and that the color difference correction processing means corrects the hue information by setting the value of the hue information in the target area data to a value within an appropriate range determined in advance according to the specific features, and corrects the saturation information by multiplying the color difference signal by a positive coefficient.
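A minimal sketch of this color difference correction, with an invented hue range and saturation coefficient (e.g. for a face region), might be:

```python
import numpy as np

def correct_color_difference(cb, cr, hue_range=(0.2, 0.6), sat_coeff=1.1):
    """Clamp the hue angle into a predetermined appropriate range and
    multiply the color difference signal by a positive coefficient to
    adjust saturation. The range and coefficient are invented values;
    the clamp assumes the range does not wrap around +/- pi."""
    hue = np.arctan2(cr, cb)
    sat = np.hypot(cb, cr) * sat_coeff              # saturation: positive coefficient
    hue = np.clip(hue, hue_range[0], hue_range[1])  # hue: keep within the range
    return sat * np.cos(hue), sat * np.sin(hue)
```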
An image correction display device according to the present invention preferably includes the image correction apparatus described above and display means for displaying the input image data corrected by the image correction apparatus.
An image correction method according to the present invention is an image correction method for an image correction apparatus that corrects an input image, and includes: a motion vector detection step of detecting a motion vector between frames of input image data; a target area extraction step of extracting, from the input image data, target area data including specific features by referring to a target feature database that is stored in a storage unit and includes feature data indicating features specific to an area to be extracted as target area data; and a correction processing step of performing image correction on the target area data extracted in the target area extraction step by referring to a correction content database that is stored in the storage unit and in which correction contents for the target area data are defined. In the target area extraction step, target area data including the same specific features as the target area data extracted from the input image data of the immediately preceding frame is extracted from the input image data of the current frame, using the motion vector between the current frame and the immediately preceding frame detected in the motion vector detection step.
A program for causing a computer to operate as the image correction apparatus according to the present invention, the program causing the computer to function as each means of the image correction apparatus, and a computer-readable recording medium on which the program is recorded are also included in the scope of the present invention.
The image correction apparatus according to the present invention can be suitably applied to television receivers, personal computers, car navigation systems, mobile phones, digital cameras, digital video cameras, and the like.
10 Target area extraction unit (target area extraction means)
20 Correction processing unit (correction processing means)
21 Luminance correction processing unit (luminance correction processing means)
22 Color difference correction processing unit (color difference correction processing means)
23 Noise reduction processing unit (noise reduction processing means)
30 Storage unit
31 Target feature database
32 Correction content database
40 RGB conversion unit (first color space signal conversion means)
41 YCbCr conversion unit (second color space signal conversion means)
50 Motion vector detection unit (motion vector detection means)
51 Frame memory
90 Imaging device
91 Face area extraction unit
92 Hue correction value calculation unit
93 Hue correction unit

Abstract

One embodiment of the present invention pertains to an image correction device (1) comprising a target region extraction unit (10) that extracts, from input image data for the current frame, target region data including the same specific characteristics as target region data extracted from input image data for the immediately preceding frame, using a motion vector between the current frame and the immediately preceding frame and by referring to a target characteristics database (31).

Description

FIG. 13 is a block diagram showing the main components of the imaging device 90 disclosed in Patent Document 1. As shown in FIG. 13, the imaging device 90 includes a face area extraction unit 91 that extracts a color information signal corresponding to a person's face area, a hue correction value calculation unit 92 that calculates a hue correction value for correcting the hue of the color information signal corresponding to the person's face area, and a hue correction unit 93 that corrects the hue of the color information signal corresponding to the face area based on the hue correction value.
As described above, when the technique disclosed in Patent Document 1 is used, a face area in an image is specified and color correction is performed on the specified face area; therefore, at least the correction of a person's face area can be performed more appropriately than when the entire image is corrected uniformly as in the conventional art.

Patent Document 2 listed below discloses a technique relating to a digital camera that performs color correction on a skin color area representing a person's exposed skin, detected based on a high-luminance area indicating a high-temperature portion of the subject extracted from an infrared image.

Thus, when the technique disclosed in Patent Document 2 is used, an area corresponding to a person's exposed skin is specified and color correction is performed on the specified area; therefore, at least the correction of a person's exposed skin area can be performed more appropriately than when the entire image is corrected uniformly as in the conventional art.
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2004-180114 (published June 24, 2004)
Patent Document 2: Japanese Unexamined Patent Application Publication No. 2004-207953 (published July 22, 2004)
However, in the technique described in Patent Document 1, among the areas corresponding to color information signals equivalent to skin color in the image, an area that occupies a predetermined proportion of the entire image and whose aspect ratio is a predetermined ratio (for example, height:width of approximately 1:1) is determined to be a face area. With this method of determining areas that satisfy these conditions, it is difficult to extract only the face area. That is, with the technique of Patent Document 1, an area other than one actually containing a face may be determined to be a face area among the areas corresponding to skin-color information signals. As a result, areas other than the face area become correction targets, and unnatural correction that should not be applied outside the face area may be applied to the skin-color information signals corresponding to those areas.

Further, in the technique described in Patent Document 2, all exposed skin portions of a person are detected and corrected, so the person's face, arms, and so on cannot be detected individually, and correction suited to each part cannot be performed.

Furthermore, when the technique of Patent Document 2 is applied to the technique of Patent Document 1, if an area corresponding to a skin-color information signal occupies a predetermined proportion of the entire image, has an aspect ratio of a predetermined ratio (for example, height:width of approximately 1:1), and is a high-luminance area indicating a high-temperature portion, the face area can be specified with very high probability.

Therefore, the situation in which areas other than the face area become correction targets and unnatural correction that should not be applied outside the face area is applied to the color information signals can be avoided.

However, it is still impossible to specify detailed areas, such as whether a skin area other than the face area is a hand or a foot.
That is, even when Patent Documents 1 and 2 are combined, although the face area can be specified, other areas cannot be specified, and appropriate correction corresponding to areas other than the face area cannot be performed.

As described above, although the conventional techniques make it possible to specify the face area, they cannot specify the other areas of an image by their characteristics, and therefore cannot perform appropriate correction according to the characteristics of each image area.

In addition, with the conventional techniques, when the image includes a moving image, the area to be corrected may move from frame to frame, making it difficult to track and specify the area to be corrected, so uniform correction must be applied to the entire screen.

The present invention has been made to solve the above problems, and its main object is to provide an image correction apparatus that, even when the image includes a moving image, can extract data indicating a target area to be corrected from image data for each characteristic and perform correction suited to the data indicating the extracted target area.
 本発明の一態様に係る画像補正装置は、上記の課題を解決するために、入力画像データのフレーム間における動きベクトルを検出する動きベクトル検出手段と、記憶部に格納されている、対象領域データとして抽出される領域に特有の特徴を示す特徴データを含む対象特徴データベースを参照することによって、上記入力画像データから上記特有の特徴を含む対象領域データを抽出する対象領域抽出手段と、上記記憶部に格納されている、上記対象領域データに対する補正内容が定められた補正内容データベースを参照することによって、上記入力画像データのうち、上記対象領域抽出手段において抽出された対象領域データに対する画像補正を行う補正処理手段と、を備え、上記対象領域抽出手段は、上記動きベクトル検出手段が検出した現在のフレームと直前のフレームとの間における動きベクトルを用いて、現在のフレームの入力画像データから、直前のフレームの入力画像データから抽出した対象領域データと同じ特有の特徴を含む対象領域データを抽出することを特徴としている。 In order to solve the above problem, an image correction apparatus according to an aspect of the present invention includes a motion vector detection unit that detects a motion vector between frames of input image data, and target region data stored in a storage unit. Target area extraction means for extracting target area data including the specific features from the input image data by referring to a target feature database including feature data indicating characteristics specific to the area extracted as the storage area, and the storage unit The image correction is performed on the target area data extracted by the target area extraction means in the input image data by referring to the correction content database in which the correction contents for the target area data are defined. Correction processing means, and the target region extraction means is detected by the motion vector detection means. Using the motion vector between the current frame and the previous frame, the target area data including the same specific features as the target area data extracted from the input image data of the previous frame is input from the input image data of the current frame. It is characterized by extracting.
 上記の構成によれば、上記入力画像データから上記対象領域データを抽出する際に、上記対象特徴データベースに含まれる対象領域データとして抽出される領域(補正対象領域)に特有の特徴を示す特徴データを含む(あてはまる)データを、上記対象領域データとして抽出することができる。上記特徴データにあてはまる入力画像データを上記対象領域データとして抽出するため、例えば、上記対象領域データとして顔領域を抽出する場合に、顔領域と同じ色の体の一部の領域又は顔と同じ色の壁の領域などを誤って検出することを防ぐことができる。 According to the above configuration, when extracting the target area data from the input image data, feature data indicating characteristics peculiar to the area (correction target area) extracted as the target area data included in the target feature database Can be extracted as the target region data. In order to extract input image data applicable to the feature data as the target area data, for example, when a face area is extracted as the target area data, the same color as the partial area of the body or the face of the same color as the face area It is possible to prevent erroneous detection of the wall area of the camera.
 また、抽出された上記対象領域データに対し、上記補正内容データベースに含まれる補正内容に基づいた補正を行うことができる。その補正内容は、補正対象領域の特性に適合している一方で、補正対象領域以外の領域には不適合であり得る。したがって、上記対象領域データ以外のデータには、不適合な補正が施されないため、上記入力画像データ全体に同じ補正が一律に行われ、不自然な出力画像になるという不具合を防ぐことが出来る。 Further, correction based on the correction content included in the correction content database can be performed on the extracted target area data. The content of the correction may be incompatible with the area other than the correction target area while being compatible with the characteristics of the correction target area. Therefore, since the data other than the target area data is not subjected to inadequate correction, the same correction is uniformly performed on the entire input image data, thereby preventing an unnatural output image.
 したがって、入力画像データから補正を行う対象領域データを抽出し、抽出した対象領域データに適した補正を行うことができる。 Therefore, target area data to be corrected can be extracted from the input image data, and correction suitable for the extracted target area data can be performed.
 Furthermore, the target area extraction means uses the motion vector between the current frame and the immediately preceding frame detected by the motion vector detection means to extract, from the input image data of the current frame, target area data containing the same peculiar characteristics as the target area data extracted from the input image data of the immediately preceding frame. This ensures that, when the target area data extracted from the immediately preceding frame is also present in the input image data of the current frame, it is reliably extracted. In other words, the target area extraction means tracks whether the target area data extracted from the input image data of the immediately preceding frame is included in the input image data of the current frame, and extracts it when it is, so missed extractions of target area data can be reduced.
 Consequently, even when the input image data of the current frame contains target area data that cannot be extracted by referring to the target feature database, the target area extraction means can still extract target area data having the same peculiar characteristics by using the motion vector, provided that the corresponding target area data was included in the input image data of the immediately preceding frame.
 Therefore, using the motion vector reduces both missed extractions and erroneous extractions of target area data.
 Note that when, for example, the relative distance relationship among the eyes, nose, and mouth of a human face viewed from the front is registered as feature data in the feature database, examples of target area data that cannot be extracted include area data that does not fit the registered relative distance relationship, such as area data of a person's profile, or area data of a face partially hidden by a passing object.
 The input image data includes not only moving image data but also combinations of still image data and moving image data, such as when a still image is displayed within part of a moving image or a moving image is displayed within part of a still image.
 In order to solve the above problem, an image correction method of an image correction apparatus according to one aspect of the present invention is an image correction method of an image correction apparatus that corrects an input image, the method including: a motion vector detection step of detecting a motion vector between frames of input image data; a target area extraction step of extracting target area data containing peculiar characteristics from the input image data by referring to a target feature database stored in a storage unit, the database containing feature data indicating characteristics peculiar to an area to be extracted as target area data; and a correction processing step of performing image correction on the target area data extracted in the target area extraction step, out of the input image data, by referring to a correction content database stored in the storage unit in which correction content for the target area data is defined, wherein in the target area extraction step, target area data containing the same peculiar characteristics as the target area data extracted from the input image data of the immediately preceding frame is extracted from the input image data of the current frame, using the motion vector between the current frame and the immediately preceding frame detected in the motion vector detection step.
 According to the above configuration, the same effects as those of the image correction apparatus described above are obtained.
 In order to solve the above problem, an image correction apparatus according to the present invention includes: motion vector detection means for detecting a motion vector between frames of input image data; target area extraction means for extracting target area data containing peculiar characteristics from the input image data by referring to a target feature database stored in a storage unit, the database containing feature data indicating characteristics peculiar to a correction target area; and correction processing means for performing image correction on the target area data extracted by the target area extraction means, out of the input image data, by referring to a correction content database stored in the storage unit in which correction content for the target area data is defined, wherein the target area extraction means extracts, from the input image data of the current frame, target area data containing the same peculiar characteristics as the target area data extracted from the input image data of the immediately preceding frame, using the motion vector between the current frame and the immediately preceding frame detected by the motion vector detection means.
 This provides the effect that target area data to be corrected can be extracted from the input image data and correction suited to the extracted target area data can be performed.
FIG. 1 is a block diagram showing the configuration of an image correction apparatus according to one embodiment of the present invention.
FIG. 2 is a block diagram showing the configuration of the image correction apparatus according to one embodiment of the present invention in detail.
FIG. 3 is a diagram showing an example of the target feature database in the image correction apparatus shown in FIG. 2.
FIG. 4 is a diagram showing an example of the correction content database in the image correction apparatus shown in FIG. 2.
FIG. 5 is a flowchart showing an example of the flow of correction target area extraction processing in the target area extraction unit of the image correction apparatus shown in FIG. 2.
FIG. 6 is a flowchart showing an example of the flow of image correction processing in the correction processing unit of the image correction apparatus shown in FIG. 2.
FIG. 7 is a diagram showing color difference correction of the hue information of face area data in the CbCr coordinate system in the color difference correction processing unit of the image correction apparatus shown in FIG. 2; (a) shows the range of skin-color hue information of the face area data in the CbCr coordinate system, and (b) shows the range of the hue information before and after skin-color correction of the face area data in the CbCr coordinate system.
FIG. 8 is a block diagram showing in detail the configuration of an image correction apparatus according to a modification of one embodiment of the present invention.
FIG. 9 is a block diagram showing in detail the configuration of an image correction apparatus according to another modification of one embodiment of the present invention.
FIG. 10 is a block diagram showing in detail the configuration of an image correction apparatus according to another embodiment of the present invention.
FIG. 11 is a block diagram showing in detail the configuration of an image correction apparatus according to a modification of another embodiment of the present invention.
FIG. 12 is a block diagram showing in detail the configuration of an image correction apparatus according to another modification of another embodiment of the present invention.
FIG. 13 is a block diagram showing the main components of the face color correction device disclosed in Patent Document 1.
 <Embodiment 1>
 An image correction apparatus according to an embodiment of the present invention will be described with reference to FIGS. 1 to 7. Note that, unless otherwise specified, the configurations described in this embodiment are merely illustrative examples and are not intended to limit the scope of the present invention.
 (Configuration of the image correction apparatus)
 The configuration of the image correction apparatus according to the present embodiment will be described with reference to FIGS. 1 and 2. FIG. 1 is a block diagram showing the configuration of an image correction apparatus 1 according to the present embodiment. FIG. 2 is a block diagram showing the configuration of the image correction apparatus 1 in detail.
 The image correction apparatus 1 is mounted in an image correction display apparatus, such as a television receiver or an information processing apparatus, that includes a display unit (not shown) for displaying the input image data corrected by the image correction apparatus.
 That is, the image correction apparatus 1 is mounted in, for example, a television receiver or an information processing apparatus, and corrects the image quality of input image data contained in a broadcast signal or in the output signal of an image output apparatus. For this purpose, as shown in FIG. 1, the image correction apparatus 1 includes a target area extraction unit 10 (target area extraction means), a correction processing unit 20 (correction processing means), a storage unit 30, a motion vector detection unit 50 (motion vector detection means), and a frame memory 51. As also shown in FIG. 1, the storage unit 30 stores a target feature database 31 and a correction content database 32.
 As shown in FIG. 2, the image correction apparatus 1 further includes an RGB conversion unit 40 (first color space signal conversion means). The correction processing unit 20 of the image correction apparatus 1 comprises a luminance correction processing unit 21 (luminance correction processing means), a color difference correction processing unit 22 (color difference correction processing means), and a noise reduction processing unit 23 (noise reduction processing means).
 The target area extraction unit 10 is a means for extracting, from the input image data, correction target area data (target area data) indicating the target to be corrected. Specifically, it refers to the target feature database 31 stored in the storage unit 30 and extracts from the input image data, as correction target area data, data whose values fall within the range values of the feature information predefined in the target feature database 31 (feature data indicating characteristics peculiar to the area to be extracted as correction target area data, i.e., the correction target area). The target feature database 31 is described later.
 When extracting correction target area data from the currently input image data, the target area extraction unit 10 also extracts, as correction target area data, area data having the same peculiar characteristics as the correction target area data extracted from the input image data input immediately before, based on the motion vector supplied from the motion vector detection unit 50 described later. In other words, the target area extraction unit 10 uses the motion vector to track whether area data having the same peculiar characteristics as the correction target area data extracted from the immediately preceding input image data is included in the currently input image data, and extracts it as correction target area data when it is. A specific extraction method for correction target area data is described later.
 The correction processing unit 20 is a means for performing image quality correction processing (hereinafter also referred to as target area correction processing) on the correction target area data. Specifically, for the correction target area data extracted by the target area extraction unit 10, the correction processing unit 20 refers to the correction content database 32 stored in the storage unit 30 to determine the correction content best suited to the target indicated by the correction target area data, and executes the target area correction processing based on the determined content.
 Here, target area correction processing refers to adjusting at least one of the luminance signal, which contains luminance information (a luminance value) indicating the degree of brightness of the correction target area data, and the color difference signal, which quantitatively expresses perceptual differences in the color of the correction target area data. The color difference signal contains hue information (a hue value) indicating the tint and the characterizing color attributes of the correction target area data, and saturation information (a saturation value) indicating the degree of vividness of the correction target area data. The correction content database 32 is described later.
 The luminance correction processing unit 21 is provided in the correction processing unit 20 and executes correction processing on the luminance information contained in the luminance signal of the correction target area data extracted by the target area extraction unit 10. Specifically, by correcting the luminance information of the correction target area data, the luminance correction processing unit 21 performs thick-contour enhancement correction that emphasizes thick contours (e.g., thick edges such as the outline of a face), fine-contour enhancement correction that emphasizes thin, sharp contours (e.g., thin, sharp edges such as eyelashes), and texture enhancement correction that emphasizes texture (e.g., fine edges such as lawn and brick).
 The luminance correction processing unit 21 preferably includes band-pass filters and a high-pass filter. It may perform the thick-contour enhancement correction using a band-pass filter, the fine-contour enhancement correction using a band-pass filter with a higher pass band than the one used for the thick-contour enhancement correction, and the texture enhancement correction using a high-pass filter. Of course, the configuration of the luminance correction processing unit 21 is not limited to this.
 This makes it possible to apply only the necessary corrections to each kind of data contained in the correction target area data, preventing the unnatural output image that would result from applying the same correction to the entire correction target area data.
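 As an illustration only (the patent specifies filter types, not kernels), the following sketch realizes the three enhancements with differences of box blurs standing in for the band-pass filters and a residual standing in for the high-pass filter; all kernel sizes and gains are hypothetical.

```python
# Illustrative sketch of the three luminance enhancements. Kernel sizes and
# gains are hypothetical, not values taken from the patent.
import numpy as np

def box_blur(y: np.ndarray, k: int) -> np.ndarray:
    """Simple k x k mean filter with edge padding (k odd)."""
    pad = k // 2
    padded = np.pad(y, pad, mode="edge")
    out = np.zeros(y.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + y.shape[0], dx:dx + y.shape[1]]
    return out / (k * k)

def enhance_luminance(y: np.ndarray) -> np.ndarray:
    blur_wide = box_blur(y, 9)
    blur_mid = box_blur(y, 5)
    blur_fine = box_blur(y, 3)
    thick_edges = blur_mid - blur_wide   # band-pass: thick contours
    fine_edges = blur_fine - blur_mid    # higher band-pass: fine contours
    texture = y - blur_fine              # high-pass: texture
    # Hypothetical gains per correction type.
    out = y + 0.5 * thick_edges + 0.8 * fine_edges + 0.3 * texture
    return np.clip(out, 0, 255)
```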
 The color difference correction processing unit 22 is provided in the correction processing unit 20 and executes correction processing on the hue information and the saturation information contained in the color difference signal of the correction target area data extracted by the target area extraction unit 10. Specifically, the color difference correction processing unit 22 performs hue correction processing on the hue information of the correction target area data and saturation correction processing on its saturation information.
 In the hue correction processing, the value of the hue information contained in the correction target area data is corrected to a value within an appropriate range predefined for the peculiar characteristic of that correction target area data. Examples of such peculiar characteristics include, but are not limited to, a face, lawn, and blue sky. Examples of hue correction processing include, but are not limited to, skin-color hue correction that brings the hue of a face into its appropriate range, blue-sky hue correction, and lawn hue correction.
 The saturation correction processing is performed by multiplying the color difference signal by a positive coefficient. When the coefficient is greater than 1, the color is corrected to become more vivid; when it is less than 1, the color is corrected to become paler. When the coefficient is 1, no saturation correction is performed.
 The hue correction processing is performed using Equation (1), which, with the color difference signal expressed in polar form and reconstructed here from the coefficient description that follows, is

  θ′ = k·θ + (1 − k)·θ1  … (1)

 Here, r is the saturation and θ is the hue. The coefficient k takes an arbitrary value in the range 0 ≤ k ≤ 1; the smaller k is, the stronger the hue correction effect. When k = 0 the hue correction effect is strongest and every hue contained in the correction target area data is corrected to θ1; when k = 1, no hue correction is performed.
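 A minimal sketch of the color difference correction, assuming the reconstructed form of Equation (1) and Cb/Cr samples centered at zero; the target hue theta1, the coefficient k, and the saturation coefficient s are hypothetical inputs.

```python
# Minimal sketch of hue/saturation correction in the CbCr plane, assuming the
# reconstructed Equation (1). theta1 (radians), k, and s are hypothetical.
import math

def correct_cbcr(cb: float, cr: float,
                 theta1: float, k: float = 0.5, s: float = 1.1):
    """Pull the hue toward theta1 (Eq. 1) and scale the saturation by s."""
    r = math.hypot(cb, cr)         # saturation: radius in the CbCr plane
    theta = math.atan2(cr, cb)     # hue: angle in the CbCr plane
    theta_out = k * theta + (1.0 - k) * theta1   # Equation (1)
    r_out = s * r                  # saturation correction (s > 1: more vivid)
    return r_out * math.cos(theta_out), r_out * math.sin(theta_out)
```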
 The noise reduction processing unit 23 removes noise from the luminance signal and the color difference signal contained in the correction target area data. Specifically, it removes noise in the correction target area data by removing noise in the luminance signal (flicker, graininess, etc.) and noise in the color difference signal (spurious color information, etc.). The noise reduction processing unit 23 preferably includes a low-pass filter or a median filter.
 Here, the median filter removes noise by sorting the density values of the pixels within a mask of size n × n (n is a natural number; e.g., 3 × 3 or 5 × 5) in ascending order and taking the middle value as the output density of the pixel of interest at the center of the mask. The larger the mask size, the stronger the noise removal effect of the median filter.
 The low-pass filter removes noise by performing weighted averaging, assigning a coefficient to each pixel of the n × n mask such that the coefficients sum to 1 and the coefficient of the pixel of interest is the largest. The noise removal effect is greatest when the coefficients are uniform; if the coefficient of the pixel of interest is set to 1, the other coefficients become 0 and the noise removal effect disappears. For flat target areas with few edges, typified by faces and blue sky, filter coefficients with a strong noise removal effect may be used; for target areas with abundant texture and edges, typified by lawn, filter coefficients with a weak noise removal effect may be used.
 Thus, when the noise reduction processing unit 23 is a low-pass filter, the correction target area data can be corrected by weighted averaging, and when it is a median filter, correction that removes fine noise can be performed.
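 The two filters described above map directly to code. The sketch below implements both as described (ascending sort and middle value for the median filter; coefficients summing to 1 with the largest weight at the pixel of interest for the low-pass filter); the mask sizes and the center weight are hypothetical.

```python
# Sketch of the two noise reduction filters. Mask sizes and coefficients are
# hypothetical examples, not patent values.
import numpy as np

def median_filter(y: np.ndarray, n: int = 3) -> np.ndarray:
    """n x n median: sort the mask values, output the middle one."""
    pad = n // 2
    padded = np.pad(y, pad, mode="edge")
    out = np.empty(y.shape, dtype=np.float64)
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            out[i, j] = np.median(padded[i:i + n, j:j + n])
    return out

def weighted_lowpass(y: np.ndarray, center_weight: float = 0.2) -> np.ndarray:
    """3 x 3 weighted average: coefficients sum to 1, center is largest."""
    rest = (1.0 - center_weight) / 8.0
    kernel = np.full((3, 3), rest)
    kernel[1, 1] = center_weight
    padded = np.pad(y, 1, mode="edge")
    out = np.zeros(y.shape, dtype=np.float64)
    for di in range(3):
        for dj in range(3):
            out += kernel[di, dj] * padded[di:di + y.shape[0],
                                           dj:dj + y.shape[1]]
    return out
```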
 The storage unit 30 is a readable and writable memory that stores the target feature database 31, referred to when the target area extraction unit 10 extracts correction target area data, and the correction content database 32, referred to when the correction processing unit 20 executes the target area correction processing.
 The storage unit 30 also serves as a computer-readable recording medium recording a program for operating a computer as the image correction apparatus 1, that is, a program that causes the computer to function as each of the above means of the image correction apparatus 1.
 The RGB conversion unit 40 converts the color space signal of the color system representing the input image data corrected by the correction processing unit 20 into a color space signal of another color system, and outputs it as output image data. For example, the RGB conversion unit 40 converts the color space signal (Y, Cb, Cr) of input image data supplied in the color system represented by the YCbCr color space into a color space signal (R, G, B) of the RGB color system.
 In the present embodiment, a configuration in which the RGB conversion unit 40 converts a YCbCr color space signal into an RGB color space signal is described as an example, but the present invention is not limited to this; for example, a configuration that converts to a CIE L*a*b* color space signal may be adopted.
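 A minimal sketch of the conversion performed by the RGB conversion unit 40. The patent does not fix a conversion matrix, so the widely used BT.601 full-range coefficients are assumed here.

```python
# Minimal YCbCr -> RGB sketch for RGB conversion unit 40. The patent does not
# specify a matrix; the BT.601 full-range coefficients are an assumption.
def ycbcr_to_rgb(y: float, cb: float, cr: float):
    """Convert 8-bit YCbCr (Cb/Cr centered at 128) to 8-bit RGB."""
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    clip = lambda v: max(0.0, min(255.0, v))
    return clip(r), clip(g), clip(b)
```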
 The motion vector detection unit 50 is a means for detecting motion vectors between frames from the input image data. It detects motion vectors containing information such as movement speed, movement direction, and movement distance from the difference between the input image data of the current frame and the input image data of the immediately preceding frame stored in the frame memory 51, and supplies the detected motion vectors to the target area extraction unit 10.
 The frame memory 51 functions as a storage device that temporarily stores input image data in units of frames. The input image data stored at a given time is read out by the motion vector detection unit 50 when the input image data of the next frame arrives; that is, when the input image data of the current frame is input, the input image data of the immediately preceding frame stored in the frame memory 51 is read out by the motion vector detection unit 50.
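 The patent does not name a motion estimation algorithm, only that motion vectors are derived from the difference between the current frame and the frame held in the frame memory 51. The sketch below assumes exhaustive block matching with a sum-of-absolute-differences cost; the block size and search radius are hypothetical.

```python
# Sketch of motion vector detection between the previous and current frames.
# Exhaustive SAD block matching is an assumption; block size and search
# radius are hypothetical.
import numpy as np

def block_motion_vector(prev: np.ndarray, cur: np.ndarray,
                        top: int, left: int,
                        block: int = 16, radius: int = 8):
    """Return (dy, dx) minimizing the SAD for one block of the previous frame."""
    ref = prev[top:top + block, left:left + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y0, x0 = top + dy, left + dx
            if y0 < 0 or x0 < 0 or y0 + block > cur.shape[0] \
                    or x0 + block > cur.shape[1]:
                continue
            cand = cur[y0:y0 + block, x0:x0 + block].astype(np.int32)
            sad = int(np.abs(ref - cand).sum())
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv
```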
 (Target feature database)
 The target feature database 31 and the correction content database 32 stored in the storage unit 30 will be described with reference to FIGS. 3 and 4. FIG. 3 is a diagram showing an example of the target feature database 31 in the present embodiment. FIG. 4 is a diagram showing an example of the correction content database 32 in the present embodiment.
 As shown in FIG. 3, the target feature database 31 is a database in which multiple pieces of feature information are set in advance as the feature information of areas to be extracted from the input image data as correction target area data: for example, face feature information referred to when extracting a person's face, blue sky feature information referred to when extracting blue sky, and lawn feature information referred to when extracting lawn.
 As the face feature information, for example, a range of hue information values indicating facial skin color, a range of saturation information values indicating facial saturation, a range of luminance information values indicating facial brightness, and a range for the relative distance relationships among facial features such as the eyes, nose, and mouth are set.
 As the blue sky feature information, for example, ranges for the hue information values, saturation information values, luminance information values, high-frequency components, and on-screen position are set. The range for high-frequency components in the blue sky feature information may be set to a threshold indicating that the proportion of high-frequency components is at or below a certain level (for example, a small value indicating that no high-frequency components other than those caused by noise are present), and the range for the on-screen position may be set to a predetermined position value specifying that the area lies above a predetermined position on the screen.
 As the lawn feature information, for example, ranges for the hue information values, saturation information values, luminance information values, high-frequency components, and on-screen position are likewise set. The range for high-frequency components in the lawn feature information may be set to a threshold indicating that the proportion of high-frequency components is at or above a certain level, and the range for the on-screen position may be set to a predetermined position value specifying that the area lies below a predetermined position on the screen.
 This makes it possible to extract multiple types of correction target area data and to perform image correction corresponding to each extracted type. That is, multiple correction target areas can be corrected, and more natural correction can be performed on the input image data.
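 Since FIG. 3 is not reproduced here, the following sketch only illustrates how such per-category range records might be organized; every range value is a hypothetical placeholder, not a value from the figure.

```python
# Hypothetical layout for target feature database 31; all range values are
# illustrative placeholders, not values taken from FIG. 3.
TARGET_FEATURE_DB = {
    "face": {
        "hue_deg": (10, 40), "saturation": (0.15, 0.6), "luma": (60, 230),
        "eye_nose_mouth_ratio": (0.8, 1.2),  # relative distance relationship
    },
    "blue_sky": {
        "hue_deg": (190, 250), "saturation": (0.2, 0.9), "luma": (100, 255),
        "max_highfreq_ratio": 0.05,          # at most this much high frequency
        "position": "upper",                 # above a given screen position
    },
    "lawn": {
        "hue_deg": (80, 150), "saturation": (0.2, 0.9), "luma": (40, 200),
        "min_highfreq_ratio": 0.2,           # at least this much high frequency
        "position": "lower",                 # below a given screen position
    },
}
```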
 (Correction content database)
 The correction content contained in the correction content database includes at least one of: luminance correction content, indicating corrections to the luminance information; color difference correction content, indicating corrections to the color difference information; and noise reduction content, indicating corrections to both the luminance information and the color difference information.
 This makes it possible to perform correction according to the characteristics peculiar to each piece of correction target area data. For example, correction target area data that should not undergo noise reduction can be spared it simply by not including noise reduction content in its correction content.
 Specifically, as shown in FIG. 4, the correction content database 32 predefines correction content information such as face correction content referred to when correcting a person's face, blue sky correction content referred to when correcting blue sky, and lawn correction content referred to when correcting lawn, among the correction target area data. In this way, for the peculiar characteristics of each piece of correction target area data, values within an appropriate range are predefined in the correction content database 32 as correction content information.
 For example, the face correction content specifies: performing hue correction processing on the hue information of the parts other than the eyes, nose, and mouth to correct (for example, compress) the hue into the appropriate range of a standard skin color; selecting, from several predetermined colors and with reference to the hue information of the lip area, a color with which to correct the lip hue information, and performing hue correction with it; performing contour enhancement on the eyes, nose, and mouth; and performing edge enhancement on the eyelashes.
 The blue sky correction content specifies, for example, performing hue correction on the hue information indicating blue sky to bring it into the appropriate range of a standard blue, and performing noise reduction on the hue information indicating blue sky.
 The lawn correction content specifies, for example, performing hue correction on the hue information indicating lawn to bring it into the appropriate range of a standard green, and emphasizing the high-frequency components of the hue information indicating lawn.
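 A companion sketch for the correction content database 32 and a dispatch loop in the spirit of the correction processing unit 20; the operation names and identity stubs are hypothetical stand-ins for the entries of FIG. 4.

```python
# Hypothetical layout for correction content database 32 plus a dispatch loop;
# operation names are placeholders, and the stubs do no real processing.
CORRECTION_CONTENT_DB = {
    "face": ["hue_to_standard_skin", "lip_hue_select", "contour_enhance",
             "eyelash_edge_enhance"],
    "blue_sky": ["hue_to_standard_blue", "noise_reduction"],
    "lawn": ["hue_to_standard_green", "highfreq_enhance"],
}

# Identity stubs keyed by operation name; real ones would adjust the
# luminance signal and/or color difference signal.
APPLY = {op: (lambda pixels: pixels)
         for ops in CORRECTION_CONTENT_DB.values() for op in ops}

def correct_region(region_pixels, category: str):
    """Look up the category's correction content and apply each operation."""
    for op in CORRECTION_CONTENT_DB.get(category, []):
        region_pixels = APPLY[op](region_pixels)
    return region_pixels
```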
 According to the above configuration, corrections to luminance information, corrections to hue and saturation information, and corrections to luminance, hue, and saturation information can each be executed as required by the characteristics of the correction target area, and natural correction can be performed on the input image data.
 For any of the above luminance correction content, color difference correction content, and noise reduction content that is unnecessary or unsuitable given the characteristics of the correction target area, the corresponding processing unit simply passes the signal through without correcting it.
 Furthermore, because the target area extraction unit 10 and the correction processing unit 20 can operate on luminance information and color difference information directly, the luminance signal and color difference signal constituting the input image data can be processed without being converted into a color space signal of a color system different from the one they represent.
 Also, by providing the RGB conversion unit 40, the luminance signal and color difference signal containing the luminance information and color difference information processed by the correction processing unit 20 can be converted into a color space signal of a different color system and then output as output image data.
 (Correction target area extraction processing)
 Next, the flow of the correction target area extraction processing in the target area extraction unit 10 will be described with reference to FIG. 5. FIG. 5 is a flowchart showing an example of the flow of the correction target area extraction processing in the target area extraction unit 10 of the image correction apparatus 1 according to the present embodiment.
 In the present embodiment, as shown in FIG. 5, the case where the target area extraction unit 10 acquires the face feature information, blue sky feature information, and lawn feature information shown in FIG. 3 from the target feature database 31 is described as an example, but the feature information acquired by the target area extraction unit 10 in the present invention is not limited to this. A configuration may also be adopted in which which feature information the target area extraction unit 10 acquires can be set in advance in a setting unit (not shown).
 As shown in FIG. 5, when input image data is supplied, the target area extraction unit 10 acquires the feature information defined in the target feature database 31 stored in the storage unit 30 (step S1).
 Having acquired the feature information, the target area extraction unit 10 compares the luminance information, hue information, and saturation information contained in the luminance signal and color difference signal of the supplied input image data against the feature information (step S2).
 Based on the comparison result, the target area extraction unit 10 determines whether the area data indicated by the luminance, hue, and saturation information of the input image data has values within the respective ranges of the face feature information (step S3).
 For area data determined to have values within the respective ranges of the face feature information (YES in step S3), the target area extraction unit 10 determines whether eye, nose, and mouth features can be extracted from that area data (step S4).
 When it determines that the eye, nose, and mouth features can be extracted (YES in step S4), the target area extraction unit 10 determines whether the relative distance relationship among the eyes, nose, and mouth is within the range defined in the face feature information (step S5).
 When it determines that the relative distance relationship among the eyes, nose, and mouth is within the range of the face feature information (YES in step S5), the target area extraction unit 10 extracts, from the input image data indicated by the luminance, hue, and saturation information, the area data that has values within the respective ranges of the face feature information and whose eye-nose-mouth relative distance relationship is within the range of the face feature information, as face area data to be corrected (step S6).
 When the target area extraction unit 10 determines that the eye, nose, and mouth features cannot be extracted (NO in step S4), that the relative distance relationship among the eyes, nose, and mouth is not within the range of the face feature information (NO in step S5), or after it has extracted the face area data to be corrected, it determines whether the luminance, hue, and saturation information of the supplied input image data has been compared against all the feature information (step S7).
 If it determines that the luminance, hue, and saturation information of the supplied input image data has not been compared against all the feature information (NO in step S7), the target area extraction unit 10 repeats the processing from step S2.
 For the areas whose area data was determined not to have values within the respective ranges of the face feature information (NO in step S3), the target area extraction unit 10 determines whether the area data indicated by the luminance, hue, and saturation information has values within the respective ranges of the blue sky feature information (step S8).
 For area data determined to have values within the respective ranges of the blue sky feature information (YES in step S8), the target area extraction unit 10 determines whether that area data contains high-frequency components (step S9). For example, it may compare the proportion of high-frequency components contained in the area data with the threshold set as the high-frequency component range of the blue sky feature information, and determine that no high-frequency components are contained when the proportion is at or below the threshold.
 When it determines that no high-frequency components are contained (NO in step S9), the target area extraction unit 10 determines whether that area data has values within the range indicating that it is located in the upper part of the image represented by the input image data (step S10). For example, the determination in step S10 may be made by determining whether a predetermined proportion of the area data judged to contain no high-frequency components lies above the center of the image represented by the input image data.
 When it determines that the area data judged to contain no high-frequency components has values within the range indicating that it is located in the upper part of the image (YES in step S10), the target area extraction unit 10 extracts, as blue sky area data to be corrected, the area data whose luminance, hue, and saturation information values are within the respective ranges of the blue sky feature information, which contains no high-frequency components, and which has values within the range indicating that it is located in the upper part of the image (step S11).
 When the target area extraction unit 10 determines that high-frequency components are contained (YES in step S9), that the area data does not have values within the range indicating that it is located in the upper part of the image (NO in step S10), or after it has extracted the blue sky area data to be corrected, it again determines whether the luminance, hue, and saturation information of the supplied input image data has been compared against all the feature information (step S7).
 If it determines that the luminance, hue, and saturation information contained in the luminance signal and color difference signal of the supplied input image data has not been compared against all the feature information (NO in step S7), the target area extraction unit 10 repeats the processing from step S2.
 For the areas determined not to be within the ranges of the face feature information (NO in step S3) and not within the ranges of the blue sky feature information (NO in step S8), the target area extraction unit 10 determines whether the area data indicated by the luminance, hue, and saturation information has values within the respective ranges of the lawn feature information (step S12).
 For area data determined to have values within the respective ranges of the lawn feature information (YES in step S12), the target area extraction unit 10 determines whether that area data contains high-frequency components (step S13). For example, it may compare the proportion of high-frequency components contained in the area data with the threshold set as the high-frequency component range of the lawn feature information, and determine that high-frequency components are contained when the proportion is at or above the threshold.
 When it determines that high-frequency components are contained (YES in step S13), the target area extraction unit 10 determines whether that area data has values within the range indicating that it is located in the lower part of the image represented by the input image data (step S14). For example, the determination in step S14 may be made by determining whether a predetermined proportion of the area data judged to contain high-frequency components lies below the center of the image represented by the input image data.
 When it determines that the area data judged to contain high-frequency components has values within the range indicating that it is located in the lower part of the image (YES in step S14), the target area extraction unit 10 extracts, as lawn area data to be corrected, the area data whose luminance, hue, and saturation information values are within the respective ranges of the lawn feature information, which contains high-frequency components, and which has values within the range indicating that it is located in the lower part of the image (step S15).
 When the input image data contains no area data determined to have values within the respective ranges of the lawn feature information (NO in step S12), when it determines that no high-frequency components are contained (NO in step S13), when it determines that the area data does not have values within the range indicating that it is located in the lower part of the image (NO in step S14), or after it has extracted the lawn area data to be corrected, the target area extraction unit 10 again determines whether the luminance, hue, and saturation information of the supplied input image data has been compared against all the feature information (step S7).
 If it determines that the luminance, hue, and saturation information contained in the luminance signal and color difference signal of the supplied input image data has not been compared against all the feature information (NO in step S7), the target area extraction unit 10 repeats the processing from step S2.
 When it determines that the luminance, hue, and saturation information contained in the luminance signal and color difference signal of the supplied input image data has been compared against all the feature information (YES in step S7), the target area extraction unit 10 executes the tracking extraction processing of correction target area data (step S16). The tracking extraction processing of correction target area data is described later.
 After executing the tracking extraction processing of the correction target area data, the target area extraction unit 10 ends the correction target area extraction processing.
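 The S1 to S15 branch structure condenses naturally into code. In the sketch below, the range comparisons against the target feature database are reduced to stub lookups on a region descriptor, so it shows only the control flow of FIG. 5, not the actual feature tests.

```python
# Condensed sketch of the S1-S15 classification flow. The checks are stubs
# standing in for the comparisons against target feature database 31.
def in_ranges(region: dict, feature: str) -> bool:
    """Stub: are the region's luminance/hue/saturation in the feature ranges?"""
    return region.get("category_hint") == feature

def classify_region(region: dict):
    if in_ranges(region, "face"):                          # step S3
        if region.get("eyes_nose_mouth_ok", False):        # steps S4-S5
            return "face"                                  # step S6
    elif in_ranges(region, "blue_sky"):                    # step S8
        if not region.get("high_freq", False) \
                and region.get("above_center", False):     # steps S9-S10
            return "blue_sky"                              # step S11
    elif in_ranges(region, "lawn"):                        # step S12
        if region.get("high_freq", False) \
                and region.get("below_center", False):     # steps S13-S14
            return "lawn"                                  # step S15
    return None                                            # not a correction target
```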
 In the present embodiment, a configuration in which the luminance, hue, and saturation information is compared first with the face feature information, then with the blue sky feature information, and then with the lawn feature information has been described as an example, but the comparison order is not limited to this and may be rearranged as appropriate.
 When feature information other than the face, blue sky, and lawn feature information is defined in the target feature database, a step of comparing the luminance, hue, and saturation information of the input image data against that additional feature information may simply be added.
 When multiple pieces of feature information are defined in the target feature database, a configuration may be adopted in which it is possible to select in advance which of them the correction target area extraction processing is based on. In that case, by comparing the luminance, hue, and saturation information only with the preselected feature information, only areas matching the preselected features are extracted as correction target areas.
 (Tracking extraction processing of correction target area data)
 As described above, the target area extraction unit 10 uses the motion vector to track whether the correction target area data extracted from the input image data of the immediately preceding frame is included in the input image data of the current frame, and extracts the correction target area data when it is. The input image data of the immediately preceding frame is also referred to as the immediately preceding input image data.
 Here, the tracking extraction processing of correction target area data executed by the target area extraction unit 10 (step S16 in FIG. 5) will be described.
 When the tracking extraction processing of correction target area data starts, the target area extraction unit 10 first uses the motion vector to generate, from the correction target area data extracted from the immediately preceding input image data, tracking correction target area data that may be extractable from the current input image data. The correction target area data extracted from the immediately preceding input image data and the generated tracking correction target area data are together referred to as the immediately preceding correction target area data.
 Next, the target area extraction unit 10 compares the correction target area data extracted in the correction target area extraction processing that refers to the target feature database 31 (steps S1 to S15 in FIG. 5) with the immediately preceding correction target area data. That is, it determines whether the immediately preceding correction target area data contains area data other than area data having the same peculiar characteristics as the correction target area data extracted in the correction target area extraction processing.
 When the immediately preceding correction target area data contains no area data other than area data having the same peculiar characteristics as the correction target area data extracted from the input image data of the current frame by the correction target area extraction processing, the target area extraction unit 10 ends the tracking extraction processing of correction target area data.
 対象領域抽出部10は、現在フレームの入力画像データから補正対象領域抽出処理にて抽出した補正対象領域データと同じ特有な特徴を有する領域データ以外の補正対象領域データが、直前補正対象領域データに含まれている場合、当該現在フレームの入力画像データから補正対象領域抽出処理にて抽出した補正対象領域データと同じ特有な特徴を有する領域データ以外の補正対象領域データを、現在フレームの入力画像データの補正対象領域データ候補として抽出する。 The target area extraction unit 10 converts correction target area data other than area data having the same characteristic as the correction target area data extracted from the input image data of the current frame by the correction target area extraction processing into the immediately previous correction target area data. If it is included, the correction target area data other than the area data having the same characteristic as the correction target area data extracted from the input image data of the current frame by the correction target area extraction process is input to the input image data of the current frame. Are extracted as correction target area data candidates.
 次に、対象領域抽出部10は、補正対象領域データ候補と同じ特有な特徴を有する領域データが現在フレームの入力画像データに存在するか否かを、動きベクトルを用いて判定する。対象領域抽出部10は、補正対象領域データ候補と同じ特有な特徴を有する領域データが現在フレームの入力画像データに存在すると判定した場合、当該領域データを補正対象領域データとして抽出する。 Next, the target area extraction unit 10 determines whether or not area data having the same characteristic as the correction target area data candidate exists in the input image data of the current frame using the motion vector. When it is determined that area data having the same unique characteristics as the correction target area data candidate exists in the input image data of the current frame, the target area extraction unit 10 extracts the area data as correction target area data.
 上述のようにして、対象領域抽出部10は、補正対象領域データの追跡抽出処理を実行する。 As described above, the target area extraction unit 10 performs the tracking extraction process of the correction target area data.
 このようにして、例えば、入力画像データが複数のフレームを含む動画像データである場合、対象領域抽出部10は、n-1(ただし、nは2以上の整数)フレーム目の入力画像データに含まれる補正対象領域データが、nフレーム目の入力画像データに含まれているか否かを上記動きベクトルに基づいて追跡することができる。そして、対象領域抽出部10は、n-1の入力画像データに含まれる対象領域データと同じ特有な特徴を有する領域データがnフレーム目の入力画像データに含まれている場合には、当該領域データをnフレーム目の入力画像データから補正対象領域データとして抽出することができる。 In this way, for example, when the input image data is moving image data including a plurality of frames, the target region extraction unit 10 applies the input image data of the (n−1) th (where n is an integer greater than or equal to 2) frame. Whether or not the correction target area data included is included in the input image data of the nth frame can be tracked based on the motion vector. Then, the target area extracting unit 10, when the area data having the same characteristic as the target area data included in the n−1 input image data is included in the input image data of the nth frame, Data can be extracted from the input image data of the nth frame as correction target area data.
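A minimal sketch of this tracking step, assuming per-pixel motion vectors and a boolean region mask (the function name and array layout are illustrative choices, not taken from the disclosure):

```python
import numpy as np

def track_mask(prev_mask, mv):
    """Predict the nth frame's correction target area by shifting the
    (n-1)th frame's mask along the detected motion vectors.

    prev_mask: H x W boolean mask of the previous frame's target area.
    mv:        H x W x 2 array of per-pixel (dy, dx) motion vectors
               (block-based vectors would first be upsampled to pixels).
    """
    h, w = prev_mask.shape
    tracked = np.zeros_like(prev_mask)
    ys, xs = np.nonzero(prev_mask)
    ny = np.clip(ys + np.rint(mv[ys, xs, 0]).astype(int), 0, h - 1)
    nx = np.clip(xs + np.rint(mv[ys, xs, 1]).astype(int), 0, w - 1)
    tracked[ny, nx] = True  # candidate target area in the current frame
    return tracked
```

A mask tracked this way would then be checked against the distinctive features, as described above, before being accepted as correction target area data.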
 In the present embodiment, the target area extraction unit 10 has been described as executing the correction target area extraction processing, which references the target feature database, before executing the tracking extraction processing of correction target area data; however, the present invention is not limited to this. For example, the target area extraction unit 10 may execute the tracking extraction processing first and then execute the correction target area extraction processing.

 In that case, area data having the same distinctive features as the immediately preceding correction target area data is first extracted from the current input image data in the tracking extraction processing, and the correction target area data is then extracted, in the correction target area extraction processing, from the area data not yet extracted by the tracking extraction processing. Since this reduces the amount of area data subject to the correction target area extraction processing, the load placed on the target area extraction unit 10 by that processing can be reduced.
 (Target area correction processing)
 Next, the flow of the target area correction processing in the correction processing unit 20 will be described with reference to FIGS. 6 and 7. FIG. 6 is a flowchart illustrating an example of the flow of the target area correction processing in the correction processing unit 20 of the image correction apparatus 1 according to the present embodiment. In the present embodiment, the target area correction processing performed on correction target face area data extracted by the target area extraction unit 10 is described as an example.
 As shown in FIG. 6, when the correction target area data is supplied, the correction processing unit 20 acquires the correction content information defined in the correction content database 32 stored in the storage unit 30 (step S21).

 Of the acquired correction content information, the correction processing unit 20 supplies the information relating to the correction of luminance information, together with the correction target area data, to the luminance correction processing unit 21. It likewise supplies the information relating to the correction of hue information and saturation information, together with the correction target area data, to the color difference correction processing unit 22, and the information relating to noise removal (noise reduction), together with the correction target area data, to the noise reduction processing unit 23.

 The luminance correction processing unit 21 performs luminance correction processing on the luminance information included in the correction target area data extracted by the target area extraction unit 10 (step S22). For example, when the correction target area data is face area data, the luminance correction processing unit 21 refers to the face correction content information and performs thick contour emphasis correction, which emphasizes the thick contours of the eyes, nose, and mouth, and fine contour emphasis correction, which emphasizes fine, steep contours such as those of the eyelashes. When no correction content for the luminance information is defined in the correction content information, the luminance correction processing unit 21 need not perform any correction. Regardless of whether luminance correction has been performed, the luminance correction processing unit 21 supplies the input image data to the noise reduction processing unit 23.
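The disclosure does not give the contour emphasis at the equation level, so the following is only a plausible stand-in: two-scale unsharp masking, where a large blur radius picks out broad contours (eyes, nose, mouth) and a small radius picks out fine, steep contours (eyelashes). The radii and gains are arbitrary illustrative values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def contour_emphasis(y, mask, sigma_thick=3.0, sigma_fine=0.8,
                     gain_thick=0.5, gain_fine=0.8):
    """Emphasize thick and fine contours in the luminance channel,
    applied only inside the correction target mask."""
    y = y.astype(np.float32)
    thick = y - gaussian_filter(y, sigma_thick)  # broad edge component
    fine = y - gaussian_filter(y, sigma_fine)    # fine edge component
    out = np.clip(y + gain_thick * thick + gain_fine * fine, 0.0, 255.0)
    return np.where(mask, out, y)
```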
 The color difference correction processing unit 22 performs hue correction, which corrects the hue information included in the correction target area data extracted by the target area extraction unit 10 (step S23). For example, when the correction target area data is face area data, the color difference correction processing unit 22 refers to the face correction content information and performs a correction calculation using the above-described equation (1), thereby correcting the hue of the parts other than the eyes, nose, and mouth into the appropriate range of a standard skin color.

 When the correction target area data is face area data, the color difference correction processing unit 22 also refers to the face correction content information and performs hue correction on the lip portion by selecting the most natural lip color from a plurality of predetermined colors, with reference to the lip color in the original image, and applying it.

 When no correction content for the hue information is defined in the correction content information, the color difference correction processing unit 22 need not perform any correction.

 Here, the correction calculation on the hue information of face area data in the CbCr coordinate system will be described with reference to FIG. 7. FIG. 7 illustrates the color difference correction of the hue information of face area data in the CbCr coordinate system. Part (a) of FIG. 7 shows the range of the skin-color hue information of the face area data in the CbCr coordinate system, and part (b) shows the range of that hue information before and after the skin-color correction.

 As shown in FIG. 7(a), the hue θ indicated by the hue information of the face area data is a value within the range from θ1-Δθ1 to θ1+Δθ2 centered on θ1, and the saturation r indicated by the saturation information is a value within the range from r1 to r2. As shown in FIG. 7(b), letting a be the intersection of θ1+Δθ2 and r1, b the intersection of θ1-Δθ1 and r1, c the intersection of θ1-Δθ1 and r2, and d the intersection of θ1+Δθ2 and r2, the skin-color hue information of the face area data takes values within the region abcd.

 By performing correction using the above-described equation (1), the color difference correction processing unit 22 can correct the hue θ into the range from θ1-Δθ1' to θ1+Δθ2' centered on θ1, as shown in FIG. 7(a) (where Δθ1 ≥ Δθ1' and Δθ2 ≥ Δθ2'). That is, as shown in FIG. 7(b), letting e be the intersection of θ1+Δθ2' and r1, f the intersection of θ1-Δθ1' and r1, g the intersection of θ1-Δθ1' and r2, and h the intersection of θ1+Δθ2' and r2, the correction calculation using equation (1) maps hue information with values in the region abcd onto values in the narrower region efgh.

 In this way, the hue values can be corrected, in the CbCr coordinate system, into a coordinate region in which the input image data appears as a natural image.
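Equation (1) itself is given earlier in the document and is not reproduced here. As a sketch of the effect it describes, the following compresses hue angles inside the band [θ1-Δθ1, θ1+Δθ2] toward θ1 by a simple linear scaling, leaving saturation unchanged; the linear form is an assumption made for illustration only.

```python
import numpy as np

def compress_hue(cb, cr, theta1, d1, d2, d1p, d2p, r1, r2):
    """Map hues in [theta1-d1, theta1+d2] (with saturation in [r1, r2],
    i.e. region abcd) into the narrower band [theta1-d1p, theta1+d2p]
    (region efgh). Angles are in radians, d1 and d2 are positive, and
    cb/cr are float arrays already centered on zero; hue wrap-around
    near +/-pi is ignored, which is acceptable for a skin-tone band."""
    r = np.hypot(cb, cr)        # saturation r
    theta = np.arctan2(cr, cb)  # hue angle theta
    dev = theta - theta1
    inside = (dev >= -d1) & (dev <= d2) & (r >= r1) & (r <= r2)
    scaled = np.where(dev >= 0.0, dev * (d2p / d2), dev * (d1p / d1))
    theta_new = np.where(inside, theta1 + scaled, theta)
    return r * np.cos(theta_new), r * np.sin(theta_new)
```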
 The color difference correction processing unit 22 performs saturation correction, which corrects the saturation information included in the correction target area data extracted by the target area extraction unit 10 (step S24). For example, when the correction target area data is face area data, the color difference correction processing unit 22 performs the correction with reference to the face correction content information. When no correction content for the saturation information is defined, it need not perform any correction.
 The noise reduction processing unit 23 performs noise reduction, a correction that removes noise from the luminance information and color difference information included in the correction target area data extracted by the target area extraction unit 10 (step S25). For example, when the correction target area data is face area data, the noise reduction processing unit 23 performs the correction with reference to the face correction content information. When no correction content relating to noise reduction is defined, it need not perform any correction.
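The filter used for the noise reduction is not specified at this level of detail. A minimal sketch, assuming a plain Gaussian filter stands in for whatever the correction content information prescribes, restricts the smoothing to the extracted region:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def masked_noise_reduction(channel, mask, sigma=1.0):
    """Smooth one Y/Cb/Cr channel only inside the correction target
    mask; pixels outside the region pass through unchanged."""
    channel = channel.astype(np.float32)
    smoothed = gaussian_filter(channel, sigma=sigma)
    return np.where(mask, smoothed, channel)
```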
 The correction processing unit 20 determines whether the target area correction processing has been performed on all the correction target area data extracted by the target area extraction unit 10 (step S26).

 If it determines that the target area correction processing has not yet been performed on all the extracted correction target area data (NO in step S26), the correction processing unit 20 executes the processing of steps S22 to S26 on the correction target area data not yet processed. That is, the correction processing unit 20 repeats steps S22 to S26 until it determines that the target area correction processing has been performed on all the correction target area data extracted by the target area extraction unit 10.

 For example, when blue sky area data and lawn area data have been extracted by the target area extraction unit 10 in addition to the face area data, steps S22 to S26 are repeated so that the target area correction processing is also performed on the blue sky area data and the lawn area data.

 If it determines that the target area correction processing has been performed on all the extracted correction target area data (YES in step S26), the correction processing unit 20 ends the target area correction processing.
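Putting steps S21 through S26 together, the control flow can be summarized as below. The dictionary-shaped correction content records and the `processors` callbacks are hypothetical stand-ins for the processing units 21 to 23; only the ordering (luminance, hue, saturation, noise reduction, repeated for every extracted region) follows the flowchart of FIG. 6.

```python
def target_area_correction(image, regions, correction_db, processors):
    """Steps S21 to S26: apply per-region corrections to every extracted
    correction target area (e.g. face, blue sky, and lawn area data).

    regions:       objects with a .kind label and a boolean .mask.
    correction_db: maps region kind -> {step name: parameters or None}.
    processors:    maps step name -> function(image, mask, parameters),
                   standing in for processing units 21, 22, and 23.
    """
    for region in regions:
        content = correction_db[region.kind]  # step S21
        for step in ("luminance", "hue", "saturation", "noise"):  # S22-S25
            params = content.get(step)
            if params is not None:  # skip steps with no defined content
                processors[step](image, region.mask, params)
    return image  # S26: every extracted region has been processed
```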
 According to the above configuration, when correction target area data is extracted from the input image data, data matching the feature data contained in the target feature database 31, which indicates features specific to the correction target area, can be extracted as correction target area data. Because only input image data matching the feature data is extracted, it is possible, for example when extracting a face area as the correction target area data, to prevent a body part of the same color as the face or a wall area of the same color as the face from being erroneously detected.

 The correction processing unit 20 can then apply, to the extracted correction target area data, correction based on the correction content contained in the correction content database 32. That correction content suits the characteristics of the correction target area, but may be unsuitable for areas other than the correction target area. Since no unsuitable correction is applied to data other than the correction target area data, the problem of the same correction being applied uniformly to the entire input image data and producing an unnatural output image can be prevented.

 According to the above configuration, the image correction apparatus 1 according to the present embodiment can extract the correction target area data to be corrected from the input image data and perform correction suited to the extracted correction target area data.

 In addition, an apparatus such as a television receiver or an information processing apparatus incorporating the image correction apparatus 1 according to the present embodiment can display the input image data corrected by the image correction apparatus 1, and can therefore display input image data corrected so as to produce a more natural image.
 Furthermore, based on the motion vector detected by the motion vector detection unit 50, the target area extraction unit 10 can extract, from the current frame's input image data, target area data having the same distinctive features as the target area data extracted from the input image data of the immediately preceding frame. In other words, the target area extraction unit 10 tracks whether the target area data extracted from the immediately preceding frame's input image data is included in the currently input image data, and extracts the target area data if it is.

 Therefore, even when the current frame's input image data contains target area data that cannot be extracted by referring to the target feature database, the target area extraction unit 10 can extract that data based on the motion vector, provided it was extracted from the immediately preceding frame's input image data.

 Accordingly, using the motion vector makes it possible to reduce both missed extractions and erroneous extractions of target area data.

 When the feature database defines, as feature data, for example the relative distance relationship of the eyes, nose, and mouth of a human face viewed from the front, target area data that cannot be extracted includes area data that does not fit the defined relative distance relationship, such as area data of a person's profile, or area data of a face partially hidden by an object passing in front of it.

 For example, when the current frame's input image data contains area data of a person's profile, the target area extraction unit 10 may be unable to extract it as correction target area data by referring to a target feature database containing feature data for the front of a human face. Even in such a case, the target area extraction unit 10 can extract the area data of the person's profile as correction target area data based on the motion vector supplied from the motion vector detection unit 50.
 [Modification 1]
 A modification of the present embodiment will be described with reference to FIG. 8. For convenience of explanation, components having the same functions as those of Embodiment 1 are given the same reference numerals, and their description is omitted. The description below focuses mainly on the differences from Embodiment 1.
 (Configuration of the image correction apparatus)
 FIG. 8 is a block diagram showing the details of the configuration of the image correction apparatus 1a according to this modification. As shown in FIG. 8, the image correction apparatus 1a has the same configuration as the image correction apparatus 1 according to Embodiment 1, except that it includes a correction processing unit 20a instead of the correction processing unit 20 and an RGB conversion unit 40a instead of the RGB conversion unit 40.
 The correction processing unit 20a is a means for performing the target area correction processing on the correction target area data. Specifically, for the correction target area data extracted by the target area extraction unit 10 based on the target feature database 31 and the motion vector, it determines the correction content most suitable for the target indicated by that data by referring to the correction content database 32 stored in the storage unit 30, and executes the target area correction processing based on the determined content.

 As shown in FIG. 8, the correction processing unit 20a includes a luminance correction processing unit 21a, a color difference correction processing unit 22a, and a noise reduction processing unit 23a.

 The luminance correction processing unit 21a, provided in the correction processing unit 20a, executes correction processing on the luminance information included in the luminance signal of the correction target area data extracted from the input image data by the target area extraction unit 10, and supplies the input image data on which this correction has been performed to the RGB conversion unit 40a.

 The color difference correction processing unit 22a, provided in the correction processing unit 20a, executes correction processing on the hue information and saturation information included in the color difference signal of the correction target area data extracted from the input image data by the target area extraction unit 10, and supplies the input image data on which this correction has been performed to the RGB conversion unit 40a.

 The RGB conversion unit 40a converts the color space signal of the color system representing the input image data that has undergone the correction processing in the luminance correction processing unit 21a and the color difference correction processing unit 22a into a color space signal of another color system. For example, the RGB conversion unit 40a converts the color space signals (Y, Cb, Cr) of input image data expressed in the color system of the YCbCr color space into color space signals (R, G, B) of the RGB color system. The RGB conversion unit 40a then supplies the input image data with the converted color space signals to the noise reduction processing unit 23a.

 The noise reduction processing unit 23a, provided in the correction processing unit 20a, removes noise in the luminance signal and color difference signal included in the correction target area data of the input image data supplied from the RGB conversion unit 40a, and outputs the noise-removed input image data as output image data.
 (Correction target area extraction processing, target area correction processing)
 The correction target area extraction processing in this modification is the same as the correction target area extraction processing in Embodiment 1 shown in FIG. 5, and its description is therefore omitted.
 The target area correction processing in this modification is the same as the target area correction processing according to Embodiment 1, except that, between the saturation correction processing (step S24) and the noise reduction processing (step S25) shown in the target area correction processing of FIG. 6, a step is included in which the RGB conversion unit 40a converts the color space signal of the color system representing the input image data into a color space signal of another color system.

 This allows the noise reduction processing unit 23a to perform the noise reduction processing using color space signals of the color system converted from the color system represented by the luminance signal and the color difference signals.
 [Modification 2]
 Another modification of the present embodiment will be described with reference to FIG. 9. For convenience of explanation, components having the same functions as those of Embodiment 1 are given the same reference numerals, and their description is omitted. The description below focuses mainly on the differences from Embodiment 1.
 (Configuration of the image correction apparatus)
 FIG. 9 is a block diagram showing the details of the configuration of the image correction apparatus 1b according to this modification. As shown in FIG. 9, the image correction apparatus 1b has the same configuration as the image correction apparatus 1 according to Embodiment 1, except that it includes a correction processing unit 20b instead of the correction processing unit 20.
 The correction processing unit 20b is a means for performing the target area correction processing on the correction target area data. Specifically, for the correction target area data extracted by the target area extraction unit 10 based on the target feature database 31 and the motion vector, it determines the correction content most suitable for the target indicated by that data by referring to the correction content database 32 stored in the storage unit 30, and executes the target area correction processing based on the determined content.

 As shown in FIG. 9, the correction processing unit 20b includes a luminance correction processing unit 21b, a color difference correction processing unit 22b, and a noise reduction processing unit 23b.

 The noise reduction processing unit 23b, provided in the correction processing unit 20b, removes noise in the luminance signal and color difference signal of the correction target area data extracted from the input image data by the target area extraction unit 10, and supplies the noise-removed input image data to the luminance correction processing unit 21b and the color difference correction processing unit 22b.

 The luminance correction processing unit 21b, provided in the correction processing unit 20b, executes correction processing on the luminance information included in the luminance signal of the correction target area data extracted by the target area extraction unit 10 from the input image data supplied from the noise reduction processing unit 23b, and supplies the input image data on which this correction has been performed to the RGB conversion unit 40b.

 The color difference correction processing unit 22b, provided in the correction processing unit 20b, executes correction processing on the hue information and saturation information included in the color difference signal of the correction target area data extracted by the target area extraction unit 10 from the input image data supplied from the noise reduction processing unit 23b, and supplies the input image signal on which this correction has been performed to the RGB conversion unit 40b.
 (Correction target area extraction processing, target area correction processing)
 The correction target area extraction processing in this modification is the same as the correction target area extraction processing in Embodiment 1 shown in FIG. 5, and its description is therefore omitted.
 The target area correction processing in this modification is the same as the target area correction processing according to Embodiment 1, except that, in the target area correction processing shown in FIG. 6, the luminance correction processing (step S22), the hue correction processing (step S23), and the saturation correction processing (step S24) are performed after the noise reduction processing (step S25).

 This allows the correction processing unit 20b to perform the correction of the luminance signal and the correction of the color difference signal after the noise reduction processing has been performed in the noise reduction processing unit 23b.
 <Embodiment 2>
 Another embodiment of the present invention will be described with reference to FIG. 10. For convenience of explanation, components having the same functions as those of Embodiment 1 are given the same reference numerals, and their description is omitted. The description below focuses mainly on the differences from Embodiment 1.
 (Configuration of the image correction apparatus)
 FIG. 10 is a block diagram showing the details of the configuration of the image correction apparatus 2 according to the present embodiment. As shown in FIG. 10, the image correction apparatus 2 has the same configuration as the image correction apparatus 1 according to Embodiment 1, except that it further includes a YCbCr conversion unit 41 (second color space signal conversion means).
 The YCbCr conversion unit 41 converts the color space signal of the color system representing the input image data into a color space signal of another color system. For example, the YCbCr conversion unit 41 converts the RGB color system color space signals representing the input image data into color space signals of the color system expressed by the YCbCr color space. The YCbCr conversion unit 41 then supplies the input image data with the converted color space signals to the target area extraction unit 10, the correction processing unit 20, the motion vector detection unit 50, and the frame memory 51.
 (Correction target area extraction processing, target area correction processing)
 The correction target area extraction processing and the target area correction processing in the present embodiment are the same as the correction target area extraction processing in Embodiment 1 shown in FIG. 5 and the target area correction processing in Embodiment 1 shown in FIG. 6, and their description is therefore omitted.
 With the above configuration, when the input image data is a color space signal of a color system different from the color system expressed by the YCbCr color space, which is represented by a luminance signal and color difference signals, that color space signal can be converted into the luminance signal and color difference signals representing the YCbCr color space. As a result, whatever color system the input image data's color space signal belongs to, the input image data can be corrected by correcting the luminance signal and the color difference signals.
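The disclosure does not fix a particular conversion matrix. As one common concrete choice, a full-range BT.601-style conversion between the RGB color system and the YCbCr color space (8-bit scale, Cb/Cr offset by 128) can be sketched as follows:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 RGB -> YCbCr (what the YCbCr conversion unit 41
    would compute under this assumed matrix). rgb is float, H x W x 3."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycbcr):
    """Inverse conversion (the role of the RGB conversion unit 40)."""
    y = ycbcr[..., 0]
    cb = ycbcr[..., 1] - 128.0
    cr = ycbcr[..., 2] - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.stack([r, g, b], axis=-1)
```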
 [Modification 1]
 A modification of the present embodiment will be described with reference to FIG. 11. For convenience of explanation, components having the same functions as those of Embodiment 1 are given the same reference numerals, and their description is omitted. The description below focuses mainly on the differences from Embodiment 1.
 (Configuration of the image correction apparatus)
 FIG. 11 is a block diagram showing the details of the configuration of the image correction apparatus 2a according to this modification. As shown in FIG. 11, the image correction apparatus 2a has the same configuration as the image correction apparatus 1 according to Embodiment 1, except that it includes a correction processing unit 20a instead of the correction processing unit 20 and an RGB conversion unit 40a instead of the RGB conversion unit 40, and further includes a YCbCr conversion unit 41.
 The YCbCr conversion unit 41 converts the color space signal of the color system representing the input image data into a color space signal of another color system. For example, the YCbCr conversion unit 41 converts the RGB color system color space signals representing the input image data into color space signals of the color system expressed by the YCbCr color space, and supplies the input image data with the converted color space signals to the target area extraction unit 10, the correction processing unit 20a, the motion vector detection unit 50, and the frame memory 51.

 The correction processing unit 20a is a means for performing the target area correction processing on the correction target area data. Specifically, for the correction target area data extracted by the target area extraction unit 10 based on the target feature database 31 and the motion vector, it determines the correction content most suitable for the target indicated by that data by referring to the correction content database 32 stored in the storage unit 30, and executes the target area correction processing based on the determined content.

 As shown in FIG. 11, the correction processing unit 20a includes a luminance correction processing unit 21a, a color difference correction processing unit 22a, and a noise reduction processing unit 23a.

 The luminance correction processing unit 21a, provided in the correction processing unit 20a, executes correction processing on the luminance information included in the luminance signal of the correction target area data extracted from the input image data by the target area extraction unit 10, and supplies the input image data on which this correction has been performed to the RGB conversion unit 40a.

 The color difference correction processing unit 22a, provided in the correction processing unit 20a, executes correction processing on the hue information and saturation information included in the color difference signal of the correction target area data extracted from the input image data by the target area extraction unit 10, and supplies the input image data on which this correction has been performed to the RGB conversion unit 40a.

 The RGB conversion unit 40a converts the color space signal of the color system representing the input image data that has undergone the correction processing in the luminance correction processing unit 21a and the color difference correction processing unit 22a into a color space signal of another color system. For example, the RGB conversion unit 40a converts the color space signals (Y, Cb, Cr) of input image data expressed in the color system of the YCbCr color space into color space signals (R, G, B) of the RGB color system, and supplies the input image data with the converted color space signals to the noise reduction processing unit 23a.

 The noise reduction processing unit 23a, provided in the correction processing unit 20a, removes noise in the luminance signal and color difference signal included in the correction target area data of the input image data supplied from the RGB conversion unit 40a, and outputs the noise-removed input image data as output image data.
 (Correction target area extraction processing, target area correction processing)
 The correction target area extraction processing in this modification is the same as the correction target area extraction processing in Embodiment 1 shown in FIG. 5, and its description is therefore omitted.
 The target area correction processing in this modification is the same as the target area correction processing according to Embodiment 1, except that, between the saturation correction processing (step S24) and the noise reduction processing (step S25) shown in the target area correction processing of FIG. 6, a step is included in which the RGB conversion unit 40a converts the color space signal of the color system representing the input image data into a color space signal of another color system.
 [Modification 2]
 Another modification of the present embodiment will be described with reference to FIG. 12. For convenience of explanation, components having the same functions as those of Embodiment 1 are given the same reference numerals, and their description is omitted. The description below focuses mainly on the differences from Embodiment 1.
 (Configuration of the image correction apparatus)
 FIG. 12 is a block diagram showing the details of the configuration of the image correction apparatus 2b according to this modification. As shown in FIG. 12, the image correction apparatus 2b has the same configuration as the image correction apparatus 1 according to Embodiment 1, except that it includes a correction processing unit 20b instead of the correction processing unit 20 and further includes a YCbCr conversion unit 41.
 The YCbCr conversion unit 41 converts the color space signal of the color system representing the input image data into a color space signal of another color system. For example, the YCbCr conversion unit 41 converts the RGB color system color space signals representing the input image data into color space signals of the color system expressed by the YCbCr color space, and supplies the input image data with the converted color space signals to the target area extraction unit 10, the correction processing unit 20b, the motion vector detection unit 50, and the frame memory 51.

 The correction processing unit 20b is a means for performing the target area correction processing on the correction target area data. Specifically, for the correction target area data extracted by the target area extraction unit 10 based on the target feature database 31 and the motion vector, it determines the correction content most suitable for the target indicated by that data by referring to the correction content database 32 stored in the storage unit 30, and executes the target area correction processing based on the determined content.

 As shown in FIG. 12, the correction processing unit 20b includes a luminance correction processing unit 21b, a color difference correction processing unit 22b, and a noise reduction processing unit 23b.

 The noise reduction processing unit 23b, provided in the correction processing unit 20b, removes noise in the luminance signal and color difference signal of the correction target area data extracted from the input image data by the target area extraction unit 10, and supplies the noise-removed input image data to the luminance correction processing unit 21b and the color difference correction processing unit 22b.

 The luminance correction processing unit 21b, provided in the correction processing unit 20b, executes correction processing on the luminance information included in the luminance signal of the correction target area data extracted by the target area extraction unit 10 from the input image data supplied from the noise reduction processing unit 23b, and supplies the input image data on which this correction has been performed to the RGB conversion unit 40b.

 The color difference correction processing unit 22b, provided in the correction processing unit 20b, executes correction processing on the hue information and saturation information included in the color difference signal of the correction target area data extracted by the target area extraction unit 10 from the input image data supplied from the noise reduction processing unit 23b, and supplies the input image signal on which this correction has been performed to the RGB conversion unit 40b.
 (Correction target area extraction processing, target area correction processing)
 The correction target area extraction processing in this modification is the same as the correction target area extraction processing in Embodiment 1 shown in FIG. 5, and its description is therefore omitted.
 The target area correction processing in this modification is the same as the target area correction processing according to Embodiment 1, except that, in the target area correction processing shown in FIG. 6, the luminance correction processing (step S22), the hue correction processing (step S23), and the saturation correction processing (step S24) are performed after the noise reduction processing (step S25).
 (Program, recording medium)
 Each block of the image correction apparatus 1 may be realized in hardware by logic circuits formed on an integrated circuit (IC chip), or may be realized in software using a CPU (Central Processing Unit).
 In the latter case, the image correction apparatus 1 includes a CPU that executes the instructions of a program realizing each function, a ROM (Read Only Memory) storing the program, a RAM (Random Access Memory) into which the program is loaded, and a storage device (recording medium) such as a memory storing the program and various data. The object of the present invention can also be achieved by supplying the image correction apparatus 1 with a recording medium on which the program code (executable program, intermediate code program, or source program) of the control program of the image correction apparatus 1, which is software realizing the functions described above, is recorded in a computer-readable manner, and by having the computer (or a CPU or MPU) read and execute the program code recorded on the recording medium.

 Examples of the recording medium include tapes such as magnetic tape and cassette tape; disks including magnetic disks such as floppy (registered trademark) disks and hard disks, and optical disks such as CD-ROM, MO, MD, DVD, and CD-R; cards such as IC cards (including memory cards) and optical cards; semiconductor memories such as mask ROM, EPROM, EEPROM, and flash ROM; and logic circuits such as PLDs (Programmable Logic Devices) and FPGAs (Field Programmable Gate Arrays).

 The program code may also be supplied to the image correction apparatus 1 via a communication network. The communication network is not particularly limited as long as it can transmit the program code. For example, the Internet, an intranet, an extranet, a LAN, ISDN, VAN, a CATV communication network, a virtual private network, a telephone line network, a mobile communication network, or a satellite communication network can be used. The transmission medium constituting the communication network may also be any medium capable of transmitting the program code, and is not limited to a specific configuration or type. For example, wired media such as IEEE 1394, USB, power line carrier, cable TV lines, telephone lines, and ADSL (Asymmetric Digital Subscriber Line) lines can be used, as can wireless media such as infrared (IrDA or remote control), Bluetooth (registered trademark), IEEE 802.11 wireless, HDR (High Data Rate), NFC (Near Field Communication), DLNA (Digital Living Network Alliance), mobile phone networks, satellite links, and terrestrial digital networks.
 [Additional Notes]
 As described above, an image correction apparatus according to one aspect of the present invention includes: motion vector detection means for detecting a motion vector between frames of input image data; target area extraction means for extracting, from the input image data, target area data containing distinctive features, by referring to a target feature database stored in a storage unit and containing feature data indicating features distinctive of the areas to be extracted as target area data; and correction processing means for performing image correction on the target area data extracted from the input image data by the target area extraction means, by referring to a correction content database stored in the storage unit in which correction contents for the target area data are defined, wherein the target area extraction means uses the motion vector between the current frame and the immediately preceding frame detected by the motion vector detection means to extract, from the current frame's input image data, target area data containing the same distinctive features as the target area data extracted from the immediately preceding frame's input image data.
 上記の構成によれば、上記入力画像データから上記対象領域データを抽出する際に、上記対象特徴データベースに含まれる補正対象領域に特有の特徴を示す特徴データにあてはまるデータを、上記対象領域データとして抽出することができる。上記特徴データにあてはまる入力画像データを上記対象領域データとして抽出するため、例えば、上記対象領域データとして顔領域を抽出する場合に、顔領域と同じ色の体の一部の領域又は顔と同じ色の壁の領域などを誤って検出することを防ぐことができる。 According to the above configuration, when the target area data is extracted from the input image data, data corresponding to feature data indicating characteristics specific to the correction target area included in the target feature database is used as the target area data. Can be extracted. In order to extract input image data applicable to the feature data as the target area data, for example, when a face area is extracted as the target area data, the same color as the partial area of the body or the face of the same color as the face area It is possible to prevent erroneous detection of the wall area of the camera.
 また、抽出された上記対象領域データに対し、上記補正内容データベースに含まれる補正内容に基づいた補正を行うことができる。その補正内容は、補正対象領域の特性に適合している一方で、補正対象領域以外の領域には不適合であり得る。したがって、上記対象領域データ以外のデータには、不適合な補正が施されないため、上記入力画像データ全体に同じ補正が一律に行われ、不自然な出力画像になるという不具合を防ぐことが出来る。 Further, correction based on the correction content included in the correction content database can be performed on the extracted target area data. The content of the correction may be incompatible with the area other than the correction target area while being compatible with the characteristics of the correction target area. Therefore, since the data other than the target area data is not subjected to inadequate correction, the same correction is uniformly performed on the entire input image data, thereby preventing an unnatural output image.
 したがって、入力画像データから補正を行う対象領域データを抽出し、抽出した対象領域データに適した補正を行うことができる。 Therefore, target area data to be corrected can be extracted from the input image data, and correction suitable for the extracted target area data can be performed.
 Furthermore, the target area extraction means uses the motion vector between the current frame and the immediately preceding frame detected by the motion vector detection means to extract, from the input image data of the current frame, target area data containing the same specific feature as the target area data extracted from the input image data of the immediately preceding frame. This ensures that when target area data extracted from the immediately preceding frame is also present in the current frame, it is reliably extracted. In other words, the target area extraction means tracks whether target area data extracted from the input image data of the immediately preceding frame is contained in the input image data of the current frame, and extracts it when it is, so that missed extractions of target area data can be reduced.
 For this reason, even when target area data exists that cannot be extracted from the input image data of the current frame by referring to the target feature database, if that target area data is contained in the input image data of the immediately preceding frame, the target area extraction means can use the motion vector to extract target area data having the same specific feature as the data that could not otherwise be extracted.
 Therefore, by using the motion vector, both missed extractions and erroneous extractions of target area data can be reduced.
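 The tracking step can be sketched as follows, assuming the motion vector detection means yields a single whole-region vector (dy, dx); the patent does not fix the vector's granularity, and per-block vectors would be applied the same way piecewise.

```python
import numpy as np

def propagate_mask(prev_mask, motion_vec):
    """Shift the previous frame's target-area mask along the detected
    frame-to-frame motion vector, so a region extracted in frame t-1
    (say, a face that has since turned to profile) can still be located
    in frame t even when the feature matcher misses it."""
    dy, dx = motion_vec
    h, w = prev_mask.shape
    cur_mask = np.zeros_like(prev_mask)
    ys, xs = np.nonzero(prev_mask)
    ys, xs = ys + dy, xs + dx
    inside = (ys >= 0) & (ys < h) & (xs >= 0) & (xs < w)
    cur_mask[ys[inside], xs[inside]] = True
    return cur_mask

prev = np.zeros((5, 8), dtype=bool)
prev[1:3, 1:4] = True                 # region extracted in frame t-1
print(propagate_mask(prev, (0, 3)))   # its expected location in frame t
```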
 For example, when the feature database defines, as feature data, the relative distance relationships among the eyes, nose, and mouth of a human face viewed from the front, target area data that cannot be extracted from the feature database alone includes area data that does not fit the defined relationships, such as the area data of a person's profile, or the area data of a face partially hidden by a passing object.
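 Such a relative-distance check could look like the following; the reference ratio of 0.8 and the 25% tolerance are illustrative assumptions, not values from the patent.

```python
import math

def matches_frontal_face(left_eye, right_eye, mouth, ratio_ref=0.8, tol=0.25):
    """Accept a candidate only if the mouth-to-eyeline distance, relative
    to the eye spacing, is close to the registered frontal-face ratio."""
    eye_span = math.dist(left_eye, right_eye)
    eye_mid = ((left_eye[0] + right_eye[0]) / 2,
               (left_eye[1] + right_eye[1]) / 2)
    ratio = math.dist(eye_mid, mouth) / eye_span
    return abs(ratio - ratio_ref) <= tol * ratio_ref

# A frontal face passes; in profile the eye span collapses, the ratio
# blows up, and the database check alone misses the face, which is
# exactly the case the motion-vector tracking above recovers.
print(matches_frontal_face((10, 10), (30, 10), (20, 26)))   # True
print(matches_frontal_face((10, 10), (14, 10), (20, 26)))   # False
```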
 Note that the input image data includes not only moving image data but also combinations of still image data and moving image data, such as when a still image is displayed within part of a moving image or a moving image is displayed within part of a still image.
 In the image correction apparatus according to an aspect of the present invention, it is preferable that the target feature database contains a plurality of types of feature data including the feature data described above, that the correction content database contains a plurality of types of correction content, one for each of the plurality of types of target area data, that the target area extraction means extracts the plurality of types of target area data corresponding to the respective types of feature data and, using the motion vector, extracts target area data having the same specific feature as the target area data extracted from the input image data of the frame immediately preceding the current frame, and that the correction processing means performs image correction corresponding to each piece of target area data extracted by the target area extraction means.
 According to the above configuration, the correction processing means can use the target feature database and the motion vector to apply, to each of the plurality of types of target area data extracted by the target area extraction means, image correction corresponding to its specific feature. That is, the correction processing means can appropriately correct a plurality of correction target areas whose specific features differ, and can therefore perform more natural correction on the input image data.
 In the image correction apparatus according to an aspect of the present invention, it is preferable that the target area data contains a luminance signal and a color difference signal, and that the correction content includes at least one of luminance correction content indicating correction to the luminance signal, color difference correction content indicating correction to the color difference signal, and noise reduction content indicating correction to both the luminance signal and the color difference signal.
 According to the above configuration, correction corresponding to the specific feature of each piece of target area data can be performed. For example, for target area data that should not undergo noise reduction, noise reduction can be suppressed simply by omitting the noise reduction content from its correction content.
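 One way to realize this selectivity is to apply only the items present in a region's correction content; the dictionary layout and the region kinds below are assumptions for illustration.

```python
def apply_content(y, cb, cr, content):
    """Apply only the correction items a region's correction content
    defines; items it omits (e.g. no "nr" key) are passed through, so
    detail in lawns or brickwork is never blurred by unwanted noise
    reduction."""
    if "luma_gain" in content:
        y = y * content["luma_gain"]
    if "chroma_gain" in content:
        cb = 128 + (cb - 128) * content["chroma_gain"]
        cr = 128 + (cr - 128) * content["chroma_gain"]
    if "nr" in content:                 # a smoothing callable, if any
        y, cb, cr = content["nr"](y), content["nr"](cb), content["nr"](cr)
    return y, cb, cr

# "face" carries noise reduction; "lawn" deliberately omits it.
correction_db = {
    "face": {"chroma_gain": 1.05, "nr": lambda ch: ch},  # identity stand-in
    "lawn": {"luma_gain": 1.1},
}
y2, cb2, cr2 = apply_content(120.0, 110.0, 150.0, correction_db["face"])
```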
 In the image correction apparatus according to an aspect of the present invention, the correction processing means preferably includes: luminance correction processing means for correcting the luminance signal of the target area data based on the luminance correction content; color difference correction processing means for correcting the color difference signal of the target area data based on the color difference correction content; and noise reduction processing means for correcting, based on the noise reduction content, both the luminance signal supplied from the luminance correction processing means and the color difference signal supplied from the color difference correction processing means. The apparatus preferably further includes first color space signal conversion means for converting the luminance signal and the color difference signal of the input image data supplied from the correction processing means into a color space signal of a color system different from the color system represented by the luminance signal and the color difference signal.
 According to the above configuration, since the correction processing means includes the luminance correction processing means, the color difference correction processing means, and the noise reduction processing means, correction of the luminance value indicated by the luminance signal, correction of the hue and saturation values indicated by the color difference signal, and correction of all of these values can each be performed as the characteristics of the correction target area require, so that natural correction can be applied to the input image data.
 Note that, among the luminance correction content, color difference correction content, and noise reduction content described above, for any correction content that is unnecessary or unsuitable given the characteristics of the correction target area, the corresponding processing means performs no correction and simply passes the signal through.
 In addition, since the target area extraction means and the correction processing means can operate directly on the luminance signal and the color difference signal, processing can be performed without converting the luminance signal and the color difference signal constituting the input image data into a color space signal of a different color system.
 Here, the color system represented by the color space signals of the luminance signal and the color difference signal is, for example, the color system represented by the YCbCr color space. Color space signals of different color systems include, for example, the CIE L*a*b* color space signals (L* signal, a* signal, b* signal) and the RGB color space signals (R (red), G (green), and B (blue)).
 Further, by providing the first color space signal conversion means, the luminance signal and the color difference signal processed by the correction processing means can be converted into a color space signal of a different color system (for example, RGB signals of the RGB color system) and then output as output image data.
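 The two conversion directions can be sketched with full-range BT.601 coefficients, which are an assumption here; the patent leaves the exact conversion matrix to the implementation.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Second conversion means: RGB input -> Y/Cb/Cr for processing
    (full-range BT.601, illustrative)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycbcr):
    """First conversion means: corrected Y/Cb/Cr -> RGB for output."""
    y = ycbcr[..., 0]
    cb, cr = ycbcr[..., 1] - 128.0, ycbcr[..., 2] - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 255.0)

# The two matrices are exact inverses, so a round trip is lossless.
img = np.random.randint(0, 256, (4, 4, 3)).astype(np.float64)
assert np.allclose(ycbcr_to_rgb(rgb_to_ycbcr(img)), img, atol=1e-6)
```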
 In the image correction apparatus according to an aspect of the present invention, the correction processing means preferably includes: luminance correction processing means for correcting the luminance signal of the target area data based on the luminance correction content; color difference correction processing means for correcting the color difference signal of the target area data based on the color difference correction content; and noise reduction processing means for correcting, based on the noise reduction content, the color space signal supplied from the first color space signal conversion means. In this case, the first color space signal conversion means converts the luminance signal and the color difference signal of the input image data supplied from the correction processing means into a color space signal of a color system different from the color system represented by the luminance signal and the color difference signal.
 According to the above configuration, the noise reduction processing means can perform noise reduction using a color space signal of the color system obtained by conversion from the color system represented by the luminance signal and the color difference signal.
 Note that, among the luminance correction content, color difference correction content, and noise reduction content described above, for any correction content that is unnecessary or unsuitable given the characteristics of the correction target area, the corresponding processing means performs no correction and simply passes the signal through.
 In the image correction apparatus according to an aspect of the present invention, the correction processing means preferably includes: noise reduction processing means for correcting the luminance signal and the color difference signal based on the noise reduction content; luminance correction processing means for correcting the luminance signal supplied from the noise reduction processing means based on the luminance correction content; and color difference correction processing means for correcting the color difference signal supplied from the noise reduction processing means based on the color difference correction content. The apparatus preferably further includes first color space signal conversion means for converting the luminance signal and the color difference signal of the input image data supplied from the correction processing means into a color space signal of a color system different from the color system represented by the luminance signal and the color difference signal.
 According to the above configuration, the luminance signal and the color difference signal can be corrected after noise reduction has been performed in the noise reduction processing means.
 Note that, among the luminance correction content, color difference correction content, and noise reduction content described above, for any correction content that is unnecessary or unsuitable given the characteristics of the correction target area, the corresponding processing means performs no correction and simply passes the signal through.
 The image correction apparatus according to an aspect of the present invention preferably further includes second color space signal conversion means for converting the color space signal of input image data supplied as a color space signal of a color system different from the color system represented by the luminance signal and the color difference signal into the luminance signal and the color difference signal.
 With this configuration, when the input image data is a color space signal of a color system different from the color system represented by the YCbCr color space of the luminance signal and the color difference signal, that signal can be converted into the luminance signal and the color difference signal representing the YCbCr color space. Consequently, whatever the color system of the input image data, the input image data can be corrected by correcting the luminance signal and the color difference signal.
 In the image correction apparatus according to an aspect of the present invention, the luminance correction processing means is preferably a band-pass filter and a high-pass filter.
 According to the above configuration, for example, the band-pass filter can correct the data representing the contour of the specific target contained in the target area data, and the high-pass filter can correct the data representing the texture of that target. Each component of the target area data thus receives the correction it requires, which prevents the unnatural output image that would result from applying the same correction to the entire target area data.
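 A minimal one-dimensional illustration of the split, with kernel sizes and gains chosen arbitrarily for the example; a real implementation would use two-dimensional filters tuned to the display.

```python
import numpy as np

def enhance_luma(y, edge_gain=0.5, texture_gain=0.3):
    """Sharpen a luminance row: a band-pass component emphasizes object
    contours, a high-pass component emphasizes fine texture."""
    blur3 = np.convolve(y, np.ones(3) / 3, mode="same")
    blur7 = np.convolve(y, np.ones(7) / 7, mode="same")
    band_pass = blur3 - blur7          # mid frequencies: contours
    high_pass = y - blur3              # high frequencies: texture
    return y + edge_gain * band_pass + texture_gain * high_pass

row = np.array([10, 10, 10, 10, 200, 200, 200, 200], dtype=np.float64)
print(enhance_luma(row))               # the step edge comes out steepened
```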
 In the image correction apparatus according to an aspect of the present invention, the color difference correction processing means preferably performs correction operations on the saturation and hue in the CbCr coordinate system.
 According to the above configuration, the saturation and hue values can be corrected, in the CbCr coordinate system, into the appropriate range of the coordinate area in which the input image data appears as a natural image.
 In the image correction apparatus according to an aspect of the present invention, the noise reduction processing means is preferably a low-pass filter or a median filter.
 According to the above configuration, when the noise reduction processing means is a low-pass filter, for example, the target area data can be corrected by weighted-average processing; when the noise reduction processing means is a median filter, correction that removes fine noise can be performed.
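 Both variants in a minimal one-dimensional sketch; the kernel weights and window size are assumptions.

```python
import numpy as np

def lowpass_nr(y, weights=(1, 2, 1)):
    """Weighted-average noise reduction: a small low-pass FIR kernel."""
    k = np.asarray(weights, dtype=np.float64)
    return np.convolve(y, k / k.sum(), mode="same")

def median_nr(y, size=3):
    """Median noise reduction: removes isolated speckle (fine noise)
    while preserving step edges, unlike the weighted average."""
    pad = size // 2
    yp = np.pad(y, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(yp, size)
    return np.median(windows, axis=-1)

noisy = np.array([10.0, 10, 10, 250, 10, 10, 10])  # one impulse of noise
print(lowpass_nr(noisy))   # impulse smeared over its neighbours
print(median_nr(noisy))    # impulse removed outright
```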
 In the image correction apparatus according to an aspect of the present invention, it is preferable that the color difference signal contains hue information and saturation information, and that the color difference correction processing means corrects the hue information by setting its value to a value within an appropriate range predetermined according to the specific feature of the target area data, and corrects the saturation information by multiplying the color difference signal by a positive coefficient.
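 Both operations can be pictured in the CbCr plane as below; the hue range and the coefficient are illustrative assumptions, and the clamp is written for ranges that do not cross the ±180° seam.

```python
import numpy as np

def correct_chroma(cb, cr, hue_range_deg=(100.0, 140.0), sat_coeff=1.1):
    """Clamp the hue angle into the appropriate range predetermined for
    the region's feature, then scale saturation by a positive
    coefficient. Cb/Cr are centred values (offset of 128 removed)."""
    hue = np.degrees(np.arctan2(cr, cb))   # hue = angle in the CbCr plane
    sat = np.hypot(cb, cr)                 # saturation = radius
    hue = np.clip(hue, *hue_range_deg)     # pull hue into the range
    sat = sat * sat_coeff                  # positive coefficient
    rad = np.radians(hue)
    return sat * np.cos(rad), sat * np.sin(rad)

cb = np.array([-20.0, 5.0])
cr = np.array([25.0, 30.0])
cb2, cr2 = correct_chroma(cb, cr)
print(np.degrees(np.arctan2(cr2, cb2)))    # both hues now inside the range
```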
 An image correction display device according to an aspect of the present invention preferably includes the above image correction apparatus and a display unit that displays the input image data corrected by the image correction apparatus.
 According to the above configuration, input image data that the image correction apparatus has corrected to appear more natural can be displayed.
 As described above, an image correction method according to an aspect of the present invention is an image correction method of an image correction apparatus that corrects an input image, the method including: a motion vector detection step of detecting a motion vector between frames of input image data; a target area extraction step of extracting, from the input image data, target area data containing a specific feature by referring to a target feature database stored in a storage unit, the database containing feature data indicating features specific to the areas to be extracted as target area data; and a correction processing step of performing image correction on the target area data extracted in the target area extraction step, by referring to a correction content database, stored in the storage unit, in which correction content for the target area data is defined. In the target area extraction step, the motion vector between the current frame and the immediately preceding frame detected in the motion vector detection step is used to extract, from the input image data of the current frame, target area data containing the same specific feature as the target area data extracted from the input image data of the immediately preceding frame.
 The above configuration provides the same effects as the image correction apparatus described above.
 A program for causing a computer to operate as the image correction apparatus according to the present invention, the program causing the computer to function as each means of the image correction apparatus, and a computer-readable recording medium on which such a program is recorded, also fall within the scope of the present invention.
 The present invention is not limited to the embodiments described above, and various modifications are possible within the scope of the claims.
 The image correction apparatus according to the present invention can be suitably applied to television receivers, personal computers, car navigation systems, mobile phones, digital cameras, digital video cameras, and the like.
 [Description of Reference Numerals]
 1, 2: Image correction device
 10: Target area extraction unit (target area extraction means)
 20: Correction processing unit (correction processing means)
 21: Luminance correction processing unit (luminance correction processing means)
 22: Color difference correction processing unit (color difference correction processing means)
 23: Noise reduction processing unit (noise reduction processing means)
 30: Storage unit
 31: Target feature database
 32: Correction content database
 40: RGB conversion unit (first color space signal conversion means)
 41: YCbCr conversion unit (second color space signal conversion means)
 50: Motion vector detection unit (motion vector detection means)
 51: Frame memory
 90: Imaging device
 91: Face area extraction unit
 92: Hue correction value calculation unit
 93: Hue correction unit

Claims (15)

  1.  An image correction apparatus comprising:
     motion vector detection means for detecting a motion vector between frames of input image data;
     target area extraction means for extracting, from the input image data, target area data containing a specific feature by referring to a target feature database stored in a storage unit, the database containing feature data indicating features specific to the areas to be extracted as target area data; and
     correction processing means for performing image correction on the target area data extracted by the target area extraction means, by referring to a correction content database, stored in the storage unit, in which correction content for the target area data is defined,
     wherein the target area extraction means uses the motion vector between the current frame and the immediately preceding frame detected by the motion vector detection means to extract, from the input image data of the current frame, target area data containing the same specific feature as the target area data extracted from the input image data of the immediately preceding frame.
  2.  The image correction apparatus according to claim 1, wherein:
     the target feature database contains a plurality of types of feature data including said feature data;
     the correction content database contains a plurality of types of correction content, one for each of a plurality of types of target area data;
     the target area extraction means extracts the plurality of types of target area data corresponding to the respective types of feature data and, using the motion vector, extracts target area data having the same specific feature as the target area data extracted from the input image data of the frame immediately preceding the current frame; and
     the correction processing means performs image correction corresponding to each piece of target area data extracted by the target area extraction means.
  3.  The image correction apparatus according to claim 1 or 2, wherein:
     the target area data contains a luminance signal and a color difference signal; and
     the correction content includes at least one of luminance correction content indicating correction to the luminance signal, color difference correction content indicating correction to the color difference signal, and noise reduction content indicating correction to both the luminance signal and the color difference signal.
  4.  The image correction apparatus according to claim 3, wherein the correction processing means comprises:
     luminance correction processing means for correcting the luminance signal of the target area data based on the luminance correction content;
     color difference correction processing means for correcting the color difference signal of the target area data based on the color difference correction content; and
     noise reduction processing means for correcting, based on the noise reduction content, both the luminance signal supplied from the luminance correction processing means and the color difference signal supplied from the color difference correction processing means,
     the apparatus further comprising first color space signal conversion means for converting the luminance signal and the color difference signal of the input image data supplied from the correction processing means into a color space signal of a color system different from the color system represented by the luminance signal and the color difference signal.
  5.  The image correction apparatus according to claim 3, wherein the correction processing means comprises:
     luminance correction processing means for correcting the luminance signal of the target area data based on the luminance correction content;
     color difference correction processing means for correcting the color difference signal of the target area data based on the color difference correction content; and
     noise reduction processing means for correcting, based on the noise reduction content, the color space signal supplied from first color space signal conversion means,
     wherein the first color space signal conversion means converts the luminance signal and the color difference signal of the input image data supplied from the correction processing means into a color space signal of a color system different from the color system represented by the luminance signal and the color difference signal.
  6.  The image correction apparatus according to claim 3, wherein the correction processing means comprises:
     noise reduction processing means for correcting the luminance signal and the color difference signal based on the noise reduction content;
     luminance correction processing means for correcting the luminance signal supplied from the noise reduction processing means based on the luminance correction content; and
     color difference correction processing means for correcting the color difference signal supplied from the noise reduction processing means based on the color difference correction content,
     the apparatus further comprising first color space signal conversion means for converting the luminance signal and the color difference signal of the input image data supplied from the correction processing means into a color space signal of a color system different from the color system represented by the luminance signal and the color difference signal.
  7.  The image correction apparatus according to any one of claims 4 to 6, further comprising second color space signal conversion means for converting the color space signal of input image data supplied as a color space signal of a color system different from the color system represented by the luminance signal and the color difference signal into the luminance signal and the color difference signal.
  8.  The image correction apparatus according to any one of claims 4 to 7, wherein the luminance correction processing means is a band-pass filter and a high-pass filter.
  9.  The image correction apparatus according to any one of claims 4 to 8, wherein the color difference correction processing means performs correction operations on the saturation and hue in the CbCr coordinate system.
  10.  The image correction apparatus according to any one of claims 4 to 9, wherein the noise reduction processing means is a low-pass filter or a median filter.
  11.  The image correction apparatus according to any one of claims 4 to 10, wherein:
     the color difference signal contains hue information and saturation information; and
     the color difference correction processing means corrects the hue information by setting its value to a value within an appropriate range predetermined for the specific feature of the target area data, and corrects the saturation information by multiplying the color difference signal by a positive coefficient.
  12.  An image correction display device comprising:
     the image correction apparatus according to any one of claims 1 to 11; and
     a display unit that displays the input image data corrected by the image correction apparatus.
  13.  An image correction method of an image correction apparatus that corrects an input image, the method comprising:
     a motion vector detection step of detecting a motion vector between frames of input image data;
     a target area extraction step of extracting, from the input image data, target area data containing a specific feature by referring to a target feature database stored in a storage unit, the database containing feature data indicating features specific to the areas to be extracted as target area data; and
     a correction processing step of performing image correction on the target area data extracted in the target area extraction step, by referring to a correction content database, stored in the storage unit, in which correction content for the target area data is defined,
     wherein, in the target area extraction step, the motion vector between the current frame and the immediately preceding frame detected in the motion vector detection step is used to extract, from the input image data of the current frame, target area data containing the same specific feature as the target area data extracted from the input image data of the immediately preceding frame.
  14.  A program for causing a computer to operate as the image correction apparatus according to any one of claims 1 to 11, the program causing the computer to function as each means of the image correction apparatus.
  15.  A computer-readable recording medium on which the program according to claim 14 is recorded.
PCT/JP2012/061447 2011-05-06 2012-04-27 Image correction device, image correction display device, image correction method, program, and recording medium WO2012153661A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011103528 2011-05-06
JP2011-103528 2011-05-06

Publications (1)

Publication Number Publication Date
WO2012153661A1 true WO2012153661A1 (en) 2012-11-15

Family

ID=47139145

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/061447 WO2012153661A1 (en) 2011-05-06 2012-04-27 Image correction device, image correction display device, image correction method, program, and recording medium

Country Status (1)

Country Link
WO (1) WO2012153661A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0678320A (en) * 1992-08-25 1994-03-18 Matsushita Electric Ind Co Ltd Color adjustment device
JP2002101422A (en) * 2000-09-26 2002-04-05 Minolta Co Ltd Image processing unit, image processing method and computer-readable recording medium for recording image processing program
JP2003348614A (en) * 2002-03-18 2003-12-05 Victor Co Of Japan Ltd Video correction apparatus and method, video correction program, and recording medium for recording the same
JP2005228138A (en) * 2004-02-13 2005-08-25 Konica Minolta Photo Imaging Inc Image processing method, image processing apparatus and image processing program
JP2007164628A (en) * 2005-12-15 2007-06-28 Canon Inc Image processing device and method
JP2009005239A (en) * 2007-06-25 2009-01-08 Sony Computer Entertainment Inc Encoder and encoding method
JP2010147660A (en) * 2008-12-17 2010-07-01 Nikon Corp Image processor, electronic camera and image processing program

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180288287A1 (en) * 2014-12-08 2018-10-04 Sharp Kabushiki Kaisha Video processing device
WO2016171259A1 (en) * 2015-04-23 2016-10-27 株式会社ニコン Image processing device and imaging apparatus
JP2017102642A (en) * 2015-12-01 2017-06-08 カシオ計算機株式会社 Image processor, image processing method and program
CN106815812A (en) * 2015-12-01 2017-06-09 卡西欧计算机株式会社 IMAGE PROCESSING APPARATUS and image processing method
US20200027244A1 (en) * 2018-07-23 2020-01-23 Kabushiki Kaisha Toshiba Information processing apparatus, information processing method, and computer program product
US11069089B2 (en) * 2018-07-23 2021-07-20 Kabushiki Kaisha Toshiba Information processing apparatus, information processing method, and computer program product

Similar Documents

Publication Publication Date Title
US8059911B2 (en) Depth-based image enhancement
US9569827B2 (en) Image processing apparatus and method, and program
WO2020125631A1 (en) Video compression method and apparatus, and computer-readable storage medium
US10565742B1 (en) Image processing method and apparatus
US7933469B2 (en) Video processing
CN107507144B (en) Skin color enhancement processing method and device and image processing device
US20120093433A1 (en) Dynamic Adjustment of Noise Filter Strengths for use with Dynamic Range Enhancement of Images
JP2003230160A (en) Color picture saturation adjustment apparatus and method therefor
US20170154437A1 (en) Image processing apparatus for performing smoothing on human face area
CN110070507B (en) Matting method and device for video image, storage medium and matting equipment
US20080056566A1 (en) Video processing
JP2004310475A (en) Image processor, cellular phone for performing image processing, and image processing program
WO2012153661A1 (en) Image correction device, image correction display device, image correction method, program, and recording medium
WO2018100950A1 (en) Image processing device, digital camera, image processing program, and recording medium
CN109636739B (en) Detail processing method and device for enhancing image saturation
WO2012099013A1 (en) Image correction device, image correction display device, image correction method, program, and recording medium
JP5286215B2 (en) Outline extracting apparatus, outline extracting method, and outline extracting program
CN110298812B (en) Image fusion processing method and device
JP5327766B2 (en) Memory color correction in digital images
WO2023000868A1 (en) Image processing method and apparatus, device, and storage medium
TWI531246B (en) Color adjustment method and its system
JPWO2006106750A1 (en) Image processing apparatus, image processing method, and image processing program
CN105631812B (en) Control method and control device for color enhancement of display image
CN113781330A (en) Image processing method, device and electronic system
KR102135155B1 (en) Display apparatus and control method for the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12781696

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12781696

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP