WO2008018459A1 - Image processing method, image processing apparatus, image processing program, and image pickup apparatus - Google Patents

Image processing method, image processing apparatus, image processing program, and image pickup apparatus Download PDF

Info

Publication number
WO2008018459A1
WO2008018459A1 PCT/JP2007/065446
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
face
image processing
processing method
Prior art date
Application number
PCT/JP2007/065446
Other languages
French (fr)
Japanese (ja)
Inventor
Akihiko Utsugi
Original Assignee
Nikon Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nikon Corporation filed Critical Nikon Corporation
Publication of WO2008018459A1 publication Critical patent/WO2008018459A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Definitions

  • Image processing method, image processing apparatus, image processing program, and imaging apparatus
  • The present invention relates to an image processing method, an image processing apparatus, an image processing program, and an imaging apparatus that determine whether or not a specific image is present in an acquired image.
  • In face determination by template matching, the determination is performed by calculating the degree of matching between a template face image and the determination target image.
  • In face determination by SVM, the degree of coincidence between many support vectors (template face images and non-face images selected from learning samples) and the determination target image is calculated, which makes it possible to respond flexibly to various variations of the determination target face image.
  • The operation of face determination by a neural network (three-layer perceptron) is somewhat similar to SVM, but uses multiple weighting coefficient maps obtained by learning instead of support vectors.
  • The operation of face determination by AdaBoost is somewhat similar to a neural network, but uses multiple rectangular filters selected by learning instead of the weighting coefficient maps obtained by learning.
  • Patent Document 1 Japanese Patent Laid-Open No. 2005-44330
  • However, face determination by template matching cannot flexibly cope with the various variations of the determination target face image.
  • In particular, when the illumination conditions of the template face image and the determination target face image differ, the determination often cannot be made correctly.
  • Face determination by SVM has the problem that an enormous processing time is required to calculate the degree of coincidence between the many support vectors and the determination target image.
  • Face detection by a neural network is faster than SVM, but has the problem that the processing time required for determination is somewhat longer than that of AdaBoost and the like.
  • Face determination by AdaBoost using rectangular filters is faster than SVM or a neural network, but still requires processing time because the rectangular filters must be recomputed each time the determination target area is changed.
  • According to a first aspect, an image processing method for determining whether an image is a specific type of image acquires an image composed of a plurality of pixels, stores a lookup table indicating the degree of likelihood of the specific type of image for each pixel value and each pixel position, generates a determination image based on the acquired image, obtains the degree of likelihood of the specific type of image for each pixel of the determination image from the lookup table based on the pixel's value and position, integrates the obtained degrees over the pixels of the determination image, and determines, based on the integration result, whether the input image is the specific type of image.
  • The specific type of image is preferably a face image.
  • The determination image is preferably generated by extracting an edge component of the acquired image.
  • The determination image is also preferably generated by extracting an edge component of a concave structure, in which the pixel value is locally recessed relative to its surroundings in the acquired image.
  • Preferably, at pixel positions corresponding to characteristic elements of the specific type of image, the lookup table gives the degree of likelihood of the specific type of image a larger value when the edge component of the pixel is large than when it is small, and at pixel positions other than those corresponding to the characteristic elements, it gives a smaller value when the edge component is large than when it is small.
  • Preferably, the determination image is generated by extracting an edge component of the acquired image; at pixel positions corresponding to any of the eye, nose, and mouth regions, the degree of likelihood of a face image is set to a larger value when the edge component of the pixel is large than when it is small, and at pixel positions corresponding to regions other than the eyes, nose, and mouth, it is set to a smaller value when the edge component is large than when it is small.
  • The lookup table is preferably generated by statistical processing based on a determination target image sample group belonging to the specific type of image and a non-determination target image sample group not belonging to the specific type of image.
  • In the statistical processing, preferably, a first image sample group is generated from the determination target image sample group and a second image sample group is generated from the non-determination target image sample group, each by processing equivalent to that used to generate the determination image; the frequency P1(x,y)(E) at which the pixel value at pixel position (x,y) of the first image sample group equals E and the frequency P2(x,y)(E) at which the pixel value at pixel position (x,y) of the second image sample group equals E are obtained; and the lookup table L(x,y)(E), which gives the degree of likelihood V(x,y) = L(x,y)(E) of the specific type of image for pixel value E at pixel position (x,y), is generated as L(x,y)(E) = f(P1(x,y)(E), P2(x,y)(E)), where the function f is substantially a monotonically non-decreasing function of P1(x,y)(E) and substantially a monotonically non-increasing function of P2(x,y)(E).
  • Preferably, f(P1(x,y)(E), P2(x,y)(E)) = log{ (P1(x,y)(E) + ε1) / (P2(x,y)(E) + ε2) }, where ε1 and ε2 are predetermined constants.
  • Preferably, a plurality of lookup tables corresponding to degrees of contrast are stored, the contrast of the acquired image is calculated, and a lookup table corresponding to that contrast is selected from the plurality of lookup tables.
  • According to another aspect, an image processing method for determining whether an image is a specific type of image acquires an image composed of a plurality of pixels, stores a lookup table indicating the degree of likelihood of the specific type of image for each pixel value and each pixel position, generates a plurality of reduced images of the acquired image at a plurality of different reduction ratios, generates determination images based on the reduced images, sets a determination target area on a first reduced image that is one of the reduced images, obtains the degree of likelihood of the specific type of image for each pixel from the lookup table based on the pixel's value and its position within the determination target area, integrates the obtained degrees over the pixels of the determination target area, and determines, based on the integration result, whether the image corresponding to the determination target area in the acquired image is the specific type of image.
  • Preferably, a second determination target area corresponding to the determination target area is further set on a second reduced image that is further reduced than the first reduced image; a second lookup table indicating the degree of likelihood of the specific type of image for each pixel value and each pixel position within the second determination target area is further stored; the degree of likelihood of the specific type of image for each pixel of the second determination target area is obtained from the second lookup table based on the pixel's value and position; the obtained degrees are integrated over the pixels of the second determination target area; and whether the image corresponding to the determination target area is the specific type of image is determined based on both integration results, that for the determination target area and that for the second determination target area.
  • According to a thirteenth aspect, an image processing program causes a computer to execute the image processing method of any one of the first to twelfth aspects.
  • According to a fourteenth aspect, an image processing apparatus is equipped with the image processing program of the thirteenth aspect.
  • According to a fifteenth aspect, an imaging apparatus is equipped with the image processing program of the thirteenth aspect.
  • Because the present invention is configured as described above, a specific type of image can be determined at high speed without being affected by various illumination conditions.
  • FIG. 1 is a diagram showing an image processing apparatus according to an embodiment of the present invention.
  • FIG. 2 is a diagram showing a flowchart of an image processing program executed by the personal computer 1.
  • FIG. 3 is a diagram showing an edge extraction target pixel and peripheral pixels with coordinates xy.
  • FIG. 4 is a diagram showing the results of creating the luminance concave image E1(x,y) for various luminance structures.
  • FIG. 5 is a diagram showing four types of edge images E1(x,y) to E4(x,y) generated for a specific face luminance image.
  • FIG. 6 is a diagram showing an example in which the face-likeness V(x,y) is generated for a specific edge image and the face-likeness Vsum is calculated.
  • FIG. 7 is a diagram showing specific values of the lookup table L(E) for each edge size.
  • FIG. 8 is a diagram showing a flowchart of the processing, in the face determination processing of step S6 of FIG. 2, after the face-likenesses Vsum1 to Vsum4 of a partial image have been obtained.
  • FIG. 9 is a diagram showing a flowchart of the processing for obtaining the face-likeness lookup table L(E).
  • FIG. 10 is a diagram showing a flowchart of an image processing program according to a second embodiment executed by the personal computer 1.
  • FIG. 11 is a diagram showing a flowchart of the processing after the face-likeness of each partial image has been obtained for the normal-size edge images and the reduced-size edge images.
  • FIG. 12 is a diagram showing the configuration of a digital camera 100, which is an imaging apparatus.
  • FIG. 1 is a diagram showing an image processing apparatus according to an embodiment of the present invention.
  • The image processing apparatus is realized by the personal computer 1.
  • The personal computer 1 is connected to a digital camera 2, a recording medium 3 such as a CD-ROM, another computer 4, and the like, and is supplied with various images (image data).
  • The personal computer 1 performs the image processing described below on the supplied images.
  • The computer 4 is connected via the Internet or another telecommunication line 5.
  • The image processing program executed by the personal computer 1 is supplied from a recording medium such as a CD-ROM, or from another computer via the Internet or another telecommunication line, in a configuration similar to that of FIG. 1, and is installed in the personal computer 1.
  • The personal computer 1 comprises a CPU (not shown) and its peripheral circuits (not shown), and the CPU executes the installed program.
  • When the program is provided via the Internet or another telecommunication line, it is transmitted after being converted into a signal on a carrier wave over the telecommunication line, that is, over a transmission medium.
  • In this way, the program is supplied as a computer-readable computer program product in various forms such as a recording medium and a carrier wave.
  • The personal computer 1 performs image processing for detecting a face image in a photographed image. Specifically, an edge component is extracted from the input image to generate an edge image, and whether there is a face image is determined based on the generated edge image.
  • The processing in the present embodiment is characterized by the edge component extraction method and by the face determination method based on the edge image.
  • Here, an edge is a location (area, pixel) where the luminance value or pixel value is smaller than that of the surroundings, that is, recessed; a location where the value is larger than the surroundings, that is, protruding; or a location where there is a step difference.
  • The recessed location (area, pixel) is called a concave edge, and the protruding location (area, pixel) is called a convex edge.
  • FIG. 2 is a diagram showing a flowchart of an image processing program executed by the personal computer 1.
  • In step S1, an image (image data) that is the target of face detection, photographed (captured) with a digital camera or the like, is input (acquired).
  • Each pixel of the input image contains R, G, and B color components, and each color component takes a value from 0 to 255.
  • In step S2, a luminance image Y is generated from the R, G, and B components of the input image by a predetermined formula. That is, the luminance image Y plane is generated.
  • In step S3, the generated luminance image is hierarchically reduced and output. For example, reduction ratios of 0.9^n are given for integers n from 0 to 31, and the luminance images reduced by these 32 reduction ratios are output.
  • As the reduction method, for example, cubic scaling or linear scaling may be used. The reason multiple reduced images are generated in this way is that it is not known what size of face image the input image contains; the multiple scales make it possible to handle face images of any size.
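To make steps S2 and S3 concrete, the following is a minimal sketch. The patent does not reproduce its luminance formula, so the standard ITU-R BT.601 weights are assumed here, and nearest-neighbour resampling stands in for the cubic or linear scaling mentioned above; the function names are illustrative only.

```python
import numpy as np

def luminance(rgb):
    # Step S2 (sketch): weighted sum of R, G, B per pixel. The BT.601 weights
    # below are an assumption; the patent's own formula is not given here.
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def luminance_pyramid(y, levels=32, ratio=0.9):
    # Step S3: hierarchically reduced luminance images at reduction ratios
    # 0.9**n for integers n = 0..31.
    out = []
    for n in range(levels):
        scale = ratio ** n
        h = max(1, int(round(y.shape[0] * scale)))
        w = max(1, int(round(y.shape[1] * scale)))
        rows = np.linspace(0, y.shape[0] - 1, h).round().astype(int)
        cols = np.linspace(0, y.shape[1] - 1, w).round().astype(int)
        out.append(y[rows][:, cols])  # nearest-neighbour resampling
    return out
```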
  • In step S4, four kinds of edge images E1(x,y) to E4(x,y) are generated from each reduced luminance image Y(x,y) by the following procedure.
  • Here, the x direction is the horizontal direction of the image and the y direction is the vertical direction.
  • First, smoothed luminance images are generated:
  Y'(x,y) = {Y(x,y-1) + 2·Y(x,y) + Y(x,y+1)} / 4
  Y''(x,y) = {Y(x-1,y) + 2·Y(x,y) + Y(x+1,y)} / 4
  • The edge image E1(x,y) = γ(E1'(x,y)) is then generated, where
  E1'(x,y) = Min(Y(x,y-1), Y(x,y+2)) − Min(Y(x,y), Y(x,y+1)).
  Each pixel of an edge image is called an edge pixel.
  • Similarly, a vertical adjacent-pixel difference image E2(x,y), a horizontal luminance concave image E3(x,y), and a horizontal adjacent-pixel difference image E4(x,y) are generated from the corresponding equations; the adjacent-pixel difference images sum the luminance differences with the adjacent pixels.
  • Min() is a function that returns the minimum value of its arguments.
  • γ(E) is a function that performs gamma conversion and clipping; it outputs an integer from 0 to 31.
  • This Min() processing is a nonlinear filter processing; including the gamma conversion and the clipping, the whole can also be called nonlinear filter processing.
  • FIG. 3 is a diagram in which the edge extraction target pixel and its peripheral pixels are represented by coordinates x, y.
  • The above E1'(x,y) is, on the luminance image Y(x,y) plane, the difference between the minimum value of the outer two of the four vertical pixels Y(x,y-1), Y(x,y), Y(x,y+1), Y(x,y+2), namely Y(x,y-1) and Y(x,y+2), and the minimum value of the inner two pixels Y(x,y) and Y(x,y+1).
  • A positive value of E1'(x,y) indicates that the value in the vicinity of the target pixel (x,y) is smaller than the values of the vertically surrounding pixels, that is, that the pixel value is recessed from its surroundings in the vertical direction. Treating the value of E1(x,y) generated in this way as a pixel value, the generated image is therefore called the vertical luminance concave image.
  • E2(x,y) is a value obtained by summing the differences in luminance value with the adjacent pixels; that is, it becomes large where the luminance value changes greatly between vertically adjacent pixels. Treating the value of E2(x,y) as a pixel value, the generated image is therefore called the vertical adjacent-pixel difference image.
  • The vertical adjacent-pixel difference image detects the edges of concave structures, convex structures, and steps without distinction.
  • Similarly, E3(x,y) is called the horizontal luminance concave image and E4(x,y) the horizontal adjacent-pixel difference image.
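As a sketch of the concave edge extraction just described, the code below computes the vertical luminance concave image from the Min-filter difference and clips negative values to 0. The text specifies only that the gamma conversion and clipping output an integer from 0 to 31, not the exact curve, so a simple compressive curve is assumed here.

```python
import numpy as np

# Assumed gamma table: a compressive mapping of an 8-bit edge amount to 0..31.
GAMMA_LUT = np.minimum(31, 2.0 * np.sqrt(np.arange(256))).astype(int)

def vertical_concave_edge(y):
    # E1'(x,y) = Min(Y(x,y-1), Y(x,y+2)) - Min(Y(x,y), Y(x,y+1)):
    # positive only where the inner pixel values dip below the outer ones.
    e = np.zeros(y.shape, dtype=float)
    e[1:-2, :] = (np.minimum(y[0:-3, :], y[3:, :])
                  - np.minimum(y[1:-2, :], y[2:-1, :]))
    e = np.clip(e, 0, 255)            # negatives (convex/step parts) -> 0
    return GAMMA_LUT[e.astype(int)]   # gamma conversion to an integer 0..31
```

The horizontal concave image follows the same pattern with the axes swapped, and the adjacent-pixel difference images replace the Min-filter difference with sums of neighbouring differences.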
  • FIG. 4 is a diagram showing the results of creating the luminance concave image E1(x,y) for various luminance structures.
  • Fig. 4(a) shows a case where the luminance is concave, Fig. 4(b) a case where the luminance protrudes, and Fig. 4(c) a case where the luminance is stepped.
  • The luminance concave image has a positive value only where the luminance is concave. Therefore, if the negative values of the intermediate image E1' are clipped to 0, an edge image E1(x,y) that reacts only to luminance recesses is generated.
  • The reaction is particularly strong at locally dark spots such as the eyes, nostrils, and mouth.
  • FIG. 5 shows the four types of edge images E1(x,y) to E4(x,y) generated for a specific face luminance image.
  • The luminance concave images have sharp peaks at the positions of the eyes and nose.
  • In the vertical luminance concave image E1 of FIG. 5, the image reacts to the eyes, nostrils, mouth, and so on, and among these it reacts strongly to the eyes, nostrils, etc., which appear white; in other words, the value of E1 at those positions is large. Therefore, the face can be detected with high accuracy by analyzing such a luminance concave image.
  • The reason the edge image is gamma-converted is to convert the edge amount into an appropriate feature amount E.
  • In image analysis, a subtle difference in edge amount at a location with almost no edge carries more meaning than a slight difference in edge amount at a location with a large edge.
  • The gamma conversion therefore converts a difference in edge amount at a location with almost no edge into a large difference in the feature amount E, and a difference in edge amount at a location with a large edge into a small difference in the feature amount E.
  • In step S5, a face determination target area of 19 × 19 pixels is set at every other pixel of each reduced image, and the partial images of the edge images in that area are output. This is performed for all the reduced images.
  • When the area contains a face, the 19 × 19 pixel face determination target area is suitable for detecting the eyes, nostrils, mouth, and the like, each of which then spans about 2 pixels.
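A sketch of the scan in step S5, assuming the edge images for one reduced image are already available; the step of 2 implements "every other pixel":

```python
def scan_determination_areas(edge_images, size=19, step=2):
    # Step S5: slide a 19x19 face determination target area over the reduced
    # image every other pixel and yield the edge-image patches in that area.
    h, w = edge_images[0].shape
    for y0 in range(0, h - size + 1, step):
        for x0 in range(0, w - size + 1, step):
            yield x0, y0, [e[y0:y0 + size, x0:x0 + size] for e in edge_images]
```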
  • In step S6, it is determined for each partial image of the edge images output in step S5 whether the area is a face image.
  • The determination of the face image is performed by the method described below.
  • V(x,y) is a numerical expression of the face-likeness at each pixel position, and indicates the degree of face-likeness.
  • V(x,y) may also be regarded as a likelihood representing the degree of likelihood of being a face.
  • V(x,y) = L(x,y)(E1(x,y))
  • L(x,y)(E) is a lookup table created in advance for each pixel position (x,y) (0 ≤ x ≤ 18, 0 ≤ y ≤ 18) by the statistical processing described later; it represents the face-likeness when the edge value E1(x,y) equals E.
  • The generated face-likeness V(x,y) is integrated over all pixels (x,y) (0 ≤ x ≤ 18, 0 ≤ y ≤ 18) to calculate the face-likeness Vsum1.
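The per-pixel lookup and integration can be written compactly. A minimal sketch, assuming the lookup table is stored as a 19 × 19 × 32 array indexed by pixel position and quantized edge value:

```python
import numpy as np

def face_likeness_sum(edge_patch, lut):
    # edge_patch: 19x19 integers in 0..31; lut: array of shape (19, 19, 32).
    ys, xs = np.mgrid[0:19, 0:19]
    v = lut[ys, xs, edge_patch]   # V(x,y) = L_{x,y}(E(x,y)) for every pixel
    return float(v.sum())         # Vsum: face-likeness integrated over pixels
```

Each pixel contributes one table read and one addition, which is why the determination stays fast no matter where the determination target area is placed.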
  • FIG. 6 is a diagram illustrating an example in which the above processing is performed on specific edge images.
  • In the face-likeness images of FIG. 6, portions that look like a face are displayed in white, and portions that do not look like a face are displayed in black.
  • The face-likeness image generated from the face edge image shown in Fig. 6(a) has large values overall; that is, the image is whitish overall.
  • The face-likeness image generated from the non-face edge image shown in Fig. 6(b) has small values in some places; that is, the image is dark in some places.
  • FIG. 7 shows specific values of the lookup table L(x,y)(E) for each edge size. In FIG. 7, the larger the face-likeness value, the whiter the display; the left side shows the face-likeness when the edge is small, and the right side shows the face-likeness when the edge is large.
  • In the diagram on the left, the face-likeness at the eyes, nose, and mouth is small. This means that if the edges at the eyes, nose, and mouth are small, the area is unlikely to be a face. For example, in the non-face example of Fig. 6(b), the edge of the part corresponding to the nose is small, so that part does not look like a face.
  • In the diagram on the right, the face-likeness at parts other than the eyes, nose, and mouth is small. This means that if the edge at a part other than the eyes, nose, or mouth is large, that part does not look like a face.
  • In the non-face example of Fig. 6(b), the edges of the parts corresponding to the space between the eyes and to both sides of the mouth are large, so those parts are unlikely to be a face.
  • Generalizing, when the face image is the specific type of image and the eyes, nose, mouth, and so on are the characteristic elements of the specific type of image: at pixel positions corresponding to the characteristic elements, the degree of likelihood of the specific type of image when the edge component of the pixel is large is set to a value larger than when the edge component is small; and at other pixel positions, the degree of likelihood when the edge component is large is set to a value smaller than when the edge component is small.
  • For each pixel, the lookup table value L(x,y)(E) corresponding to the edge value E is selected from the 32 entries of the table.
  • The face-likeness Vsum1 of the partial image is generated based on the edge image E1(x,y). Similarly, the face-likenesses Vsum2 to Vsum4 are generated based on the edge images E2(x,y) to E4(x,y).
  • FIG. 8 is a diagram showing a flowchart of the processing, in the face determination processing of step S6 of FIG. 2, after the face-likenesses Vsum1 to Vsum4 of a partial image have been obtained.
  • In the face determination processing of step S6, the face-likenesses Vsum1 to Vsum4 are accumulated stage by stage, and if the integrated evaluation value exceeds the thresholds, the area is determined to be a face.
  • As shown in FIG. 8, the process of comparing the evaluation value with a threshold is performed at each stage, so that images that are clearly not faces are excluded at an early stage, enabling efficient processing.
  • In step S11, the evaluation value for determining whether the partial image is a face image is set to the face-likeness Vsum1 of the edge image E1(x,y).
  • In step S12, it is determined whether this evaluation value is larger than a predetermined threshold th1. If it is larger than th1, the process proceeds to step S13; if not, it is determined that the partial image is not a face image, and the face determination processing for this partial image ends.
  • In step S13, the face-likeness Vsum2 of the edge image E2(x,y) is added to the evaluation value of step S11.
  • In step S14, it is determined whether this evaluation value is larger than a predetermined threshold th2. If it is larger than th2, the process proceeds to step S15; if not, it is determined that the partial image is not a face image, and the face determination processing for this partial image ends.
  • In step S15, the face-likeness Vsum3 of the edge image E3(x,y) is added to the evaluation value of step S13.
  • In step S16, it is determined whether this evaluation value is larger than a predetermined threshold th3. If it is larger than th3, the process proceeds to step S17; if not, it is determined that the partial image is not a face image, and the face determination processing for this partial image ends.
  • In step S17, the face-likeness Vsum4 of the edge image E4(x,y) is added to the evaluation value of step S15.
  • In step S18, it is determined whether this evaluation value is larger than a predetermined threshold th4. If it is larger than th4, it is finally determined that the partial image is a face image; if not, it is determined that the partial image is not a face image, and the face determination processing for this partial image ends.
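The staged comparison of steps S11 to S18 amounts to a short rejection cascade. A sketch, with the four face-likenesses and the four thresholds passed in as sequences:

```python
def staged_face_decision(vsums, thresholds):
    # Steps S11-S18: accumulate Vsum1..Vsum4 stage by stage and reject as
    # soon as an intermediate evaluation value fails its threshold.
    evaluation = 0.0
    for vsum, th in zip(vsums, thresholds):  # (Vsum1, th1) ... (Vsum4, th4)
        evaluation += vsum
        if evaluation <= th:
            return False   # clearly not a face: excluded at an early stage
    return True            # passed th1..th4: determined to be a face image
```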
  • In step S7, if a partial image has been determined to be a face in step S6, the size S of the face and its coordinates (X, Y) in the input image are output.
  • In this way, the position and size of the face image are detected and output.
  • FIG. 9 is a diagram showing a flowchart of the processing for obtaining the face-likeness lookup table L(x,y)(E). This processing is performed in advance.
  • In step S21, images of several hundred or more faces are acquired. That is, several hundred or more faces are photographed (captured) with a digital camera or the like, and the images (image data) are acquired.
  • The acquired images are composed of the same color components as the image input in step S1 of FIG. 2.
  • In step S22, each photographed face image is scaled so that the size of the face area becomes 19 × 19 pixels, and the partial images cut out at the face areas are taken as the face image sample group.
  • In step S23, several hundred or more patterns of 19 × 19 pixel non-face image samples are acquired. These are extracted as appropriate from images other than photographed faces to form the non-face image sample group. It is also possible to extract them from an image showing a face while avoiding the face area; in this case, the user may designate non-face image areas as appropriate from the image displayed on a monitor.
  • In step S24, edge components are extracted from the face image sample group to generate a face edge image sample group.
  • This processing is the same as the processing for generating the edge images E1(x,y) to E4(x,y) in the face detection processing.
  • In step S25, edge components are extracted from the non-face image sample group to generate a non-face edge image sample group. This processing is also performed in the same manner as the processing for generating the edge images in the face detection processing.
  • In step S26, for the face edge image sample group, the frequency P1(x,y,E) at which the edge value at (x,y) equals E is obtained.
  • In step S27, for the non-face edge image sample group, the frequency P2(x,y,E) at which the edge value at (x,y) equals E is obtained.
  • In step S28, the face-likeness L(x,y)(E) of a pixel whose edge value E(x,y) at pixel position (x,y) equals E is calculated by the following equation:
  L(x,y)(E) = log{ (P1(x,y,E) + ε1) / (P2(x,y,E) + ε2) }
  • ε1 and ε2 are predetermined constants, introduced to suppress logarithmic divergence and overlearning.
  • The value of ε1 should be set to about 1/1000 of the average value of P(x,y,E), and the value of ε2 should be set to several tens of times the value of ε1.
  • L(x,y)(E) is thus a function whose value increases monotonically with the frequency of face image samples whose edge value E(x,y) at pixel position (x,y) equals E, and decreases monotonically with the corresponding frequency of non-face image samples.
  • The frequency distributions of the image samples are normalized.
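A sketch of the lookup table construction of steps S26 to S28, assuming the face and non-face edge image samples have already been quantized to integers 0 to 31. The assignment of the two epsilon constants follows one reading of the guidance above, and the factor of 30 is an assumed value within "several tens of times":

```python
import numpy as np

def build_face_lut(face_edges, nonface_edges, n_levels=32):
    # face_edges, nonface_edges: arrays of shape (n_samples, 19, 19) holding
    # quantized edge values 0..31.
    rows = np.arange(19)[:, None]
    cols = np.arange(19)[None, :]

    def frequencies(samples):
        p = np.zeros((19, 19, n_levels))
        for patch in samples:                      # steps S26/S27: histogram
            np.add.at(p, (rows, cols, patch), 1)
        return p / len(samples)

    p1 = frequencies(face_edges)      # P1(x,y,E) over the face sample group
    p2 = frequencies(nonface_edges)   # P2(x,y,E) over the non-face group
    eps1 = p1.mean() / 1000.0         # suppresses logarithmic divergence
    eps2 = 30.0 * eps1                # suppresses overlearning (assumed factor)
    # Step S28: L(x,y)(E) = log{(P1 + eps1) / (P2 + eps2)}
    return np.log((p1 + eps1) / (p2 + eps2))
```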
  • As described above, the luminance concave image has sharp peaks at the positions of the eyes and nose, so the face can be detected with high accuracy by analyzing such a luminance concave image.
  • Furthermore, by using edge images created by conventional methods together with the luminance concave image, rather than the luminance concave image alone, the face can be determined with even higher accuracy.
  • In addition, the edge amount is gamma-converted.
  • In image analysis, a subtle difference in edge amount at a location with few edges carries more meaning than a slight difference in edge amount at a location with a large edge.
  • With the gamma conversion, a difference in edge amount at a location with almost no edge is converted into a large difference in the feature amount E, and a difference in edge amount at a location with a large edge is converted into a small difference in the feature amount E.
  • As a result, the differences in the feature amount match the differences in image structure, and the accuracy of the face determination increases.
  • The luminance concave image has a positive value only where the luminance is concave. Therefore, in this embodiment, the negative values of the intermediate image E1' are clipped to 0, so that an edge image E1(x,y) that reacts only to luminance recesses is generated, and the processing using the edge image E1 is fast.
  • In this way, a face image can be detected by simple, high-speed processing in which the pixel values of the edge images are converted into face-likeness using lookup tables and integrated. Moreover, making the determination on edge images has the effect of suppressing the influence of the illumination conditions at the time of shooting.
  • In the second embodiment, a face determination method that is robust to variations in the contrast of the determination target image will be described.
  • The second embodiment is realized by the personal computer 1, as in the first embodiment. Therefore, for the configuration of the image processing apparatus of the second embodiment, refer to FIG. 1 of the first embodiment.
  • First, a gain is applied to the pixel values of the face image sample group so that the variance of the pixel values becomes about 100.
  • Alternatively, a face image sample group whose pixel value variance is less than 200 is extracted.
  • Using the face image sample group adjusted or extracted in this way and the previously obtained non-face image sample group, a lookup table for face determination is created in the same manner as in steps S24 to S28 of FIG. 9.
  • The lookup table obtained in this way is called the low-contrast face determination lookup table.
  • Next, a gain different from the above is applied to the pixel values of the face image sample group so that the variance of the pixel values becomes a larger predetermined value.
  • Alternatively, a face image sample group whose pixel value variance is 200 or more is extracted.
  • Using these, a lookup table for face determination is created in the same manner as in steps S24 to S28 of FIG. 9. The lookup table obtained in this way is called the high-contrast face determination lookup table.
  • FIG. 10 is a diagram illustrating a flowchart of the image processing program according to the second embodiment executed by the personal computer 1.
  • Steps S31 to S34 are the same as steps S1 to S4 in FIG. 2 of the first embodiment.
  • In step S38, an integral image I1(x,y) of the luminance image and an integral image I2(x,y) of the squares of the pixel values of the luminance image are created.
  • In step S35, a face determination target region is set in the same manner as in step S5 of FIG. 2 of the first embodiment.
  • In step S39, the variance σ² of the pixel values of the luminance image Y(x,y) in the face determination target region is calculated.
  • For the face determination target region, the value Ysum obtained by integrating the luminance image Y(x,y) over the region and the value Ysum2 obtained by integrating the square of the luminance image over the region are computed from the integral images.
  • Each sum can be obtained simply by adding and subtracting the values of the integral image at the four corner points of the region, so high-speed calculation is possible. The variance σ² of the pixel values of the luminance image Y(x,y) in the face determination target region is then obtained from Ysum and Ysum2.
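A sketch of steps S38 and S39. The defining expressions for the integral images are not reproduced in the text, so the usual cumulative-sum construction is assumed, and the final line uses the standard identity σ² = E[Y²] − (E[Y])²:

```python
import numpy as np

def integral_images(y):
    # Step S38: integral image of Y and of Y squared (assumed cumulative sums).
    i1 = y.astype(float).cumsum(axis=0).cumsum(axis=1)
    i2 = (y.astype(float) ** 2).cumsum(axis=0).cumsum(axis=1)
    return i1, i2

def box_sum(ii, x0, y0, size):
    # Sum over the window from the four corner values of the integral image.
    a = ii[y0 + size - 1, x0 + size - 1]
    b = ii[y0 - 1, x0 + size - 1] if y0 > 0 else 0.0
    c = ii[y0 + size - 1, x0 - 1] if x0 > 0 else 0.0
    d = ii[y0 - 1, x0 - 1] if (x0 > 0 and y0 > 0) else 0.0
    return a - b - c + d

def window_variance(i1, i2, x0, y0, size=19):
    # Step S39: sigma^2 = Ysum2/N - (Ysum/N)^2 over the determination area.
    n = size * size
    ysum = box_sum(i1, x0, y0, size)
    ysum2 = box_sum(i2, x0, y0, size)
    return ysum2 / n - (ysum / n) ** 2
```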
  • In step S40, if the variance σ² is less than 200, the low-contrast face determination lookup table is selected; if the variance σ² is 200 or more, the high-contrast face determination lookup table is selected. A large variance σ² indicates a high-contrast image, and a small variance σ² indicates a low-contrast image.
  • In step S36, the face determination processing is performed in the same manner as in step S6 of the first embodiment, using the face determination lookup table selected in step S40.
  • The detection result is then output as in step S7 of the first embodiment.
  • In the second embodiment, the contrast of the face determination target region is measured at high speed, and a face determination lookup table is selected according to that contrast, which makes it possible to perform highly accurate determination over various contrasts without increasing the determination processing time.
  • Different lookup tables are used depending on the contrast because edges come out larger in a high-contrast image. In other words, a high-contrast face can be correctly determined by using the high-contrast lookup table for high-contrast images.
  • In the third embodiment, a method for performing face determination with higher accuracy by using images of a plurality of different resolutions will be described.
  • The third embodiment is realized by the personal computer 1, as in the first embodiment. Therefore, for the configuration of the image processing apparatus of the third embodiment, refer to FIG. 1 of the first embodiment.
  • First, a lookup table for face determination similar to that of the first embodiment is created in the same manner as in steps S21 to S28 of FIG. 9.
  • This lookup table is referred to as the normal-size face determination lookup table.
  • Next, the face image sample group acquired in step S22 of FIG. 9 is reduced to a size of about 12 × 12 pixels.
  • Similarly, the non-face image sample group acquired in step S23 of FIG. 9 is reduced to a size of about 12 × 12 pixels.
  • Using these, a lookup table for face determination is created in the same manner as in steps S24 to S28 of FIG. 9.
  • The lookup table obtained in this way is referred to as the reduced-size face determination lookup table.
  • Steps S1 to S4 are the same as steps S1 to S4 in the first embodiment.
  • In step S5, a 19 × 19 pixel face determination target area is set at every other pixel of each reduced image, and the partial images of the edge images E1 to E4 in that area are output. The edge images output here are called normal-size edge images.
  • Further, on the second reduced image, which is reduced by a further factor of 0.9^4 relative to the first reduced image, a 12 × 12 pixel reduced-size face determination target area corresponding to the same subject as the 19 × 19 pixel face determination target area is set, and the partial images of the edge images E1 to E4 created for the second reduced image in that area are output.
  • The edge images output here are called reduced-size edge images.
  • In step S6, the face-likeness is calculated for the normal-size edge images using the normal-size face determination lookup table, in the same manner as in the first embodiment. Further, the face-likeness is calculated for the reduced-size edge images using the reduced-size face determination lookup table, likewise in the same manner as in the first embodiment.
  • FIG. 11 is a diagram showing a flowchart of the processing after the face-likeness of each partial image has been obtained for the normal-size edge images and the reduced-size edge images as described above.
  • The face-likenesses are accumulated in stages, and if the integrated evaluation value is greater than the thresholds, the area is determined to be a face.
  • Images that are clearly not faces are excluded at an early stage, making efficient processing possible.
  • In step S51, the evaluation value for determining whether the partial image is a face image is set to the face-likeness Vsum1 of the reduced-size edge image E1(x,y).
  • In step S52, it is determined whether this evaluation value is larger than a predetermined threshold th1. If it is larger than th1, the process proceeds to step S53; if not, it is determined that the partial image is not a face image, and the face determination processing for this partial image ends.
  • In step S53, the face-likeness Vsum2 of the reduced-size edge image E2(x,y) is added to the evaluation value of step S51.
  • In step S54, it is determined whether this evaluation value is larger than a predetermined threshold th2. If it is larger than th2, the process proceeds to step S55; if not, it is determined that the partial image is not a face image, and the face determination processing for this partial image ends.
  • In step S55, the face-likeness Vsum3 of the reduced-size edge image E3(x,y) is added to the evaluation value of step S53.
  • In step S56, it is determined whether this evaluation value is larger than a predetermined threshold th3. If it is larger than th3, the process proceeds to step S57; if not, it is determined that the partial image is not a face image, and the face determination processing for this partial image ends.
  • In step S57, the face-likeness Vsum4 of the reduced-size edge image E4(x,y) is added to the evaluation value of step S55.
  • In step S58, it is determined whether this evaluation value is larger than a predetermined threshold th4. If it is larger than th4, the process proceeds to step S59; if not, it is determined that the partial image is not a face image, and the face determination processing for this partial image ends.
  • Steps S59 to S66 perform the same processing for the normal-size edge images, with thresholds th5 to th8.
  • If the evaluation value is larger than the threshold th8 in step S66, it is finally determined that the partial image is a face image. If not, it is determined that the partial image is not a face image, and the face determination processing for this partial image ends.
  • In step S7, the face detection result is output as in the first embodiment.
  • In the third embodiment, compared to the first embodiment, the face determination processing can be performed with higher accuracy by additionally evaluating the face-likeness of the reduced-size edge images.
  • When the face determination target area is 19 × 19 pixels, the eyes are about 2 pixels in size and easy to detect, but the mouth is about 4 pixels and difficult to detect.
  • When the same subject is viewed in the 12 × 12 pixel face determination target area, the mouth becomes about 2 pixels and is easy to detect. Therefore, by adding the evaluation of the face-likeness of the reduced-size edge images, concave structures such as the mouth become easy to detect, and the face determination processing can be performed with higher accuracy.
  • The present invention can also be applied to images other than face images.
  • That is, the present invention can be applied to determining whether a certain specific type of image is present in the acquired image.
  • In that case, a lookup table indicating the degree of likelihood of the specific type of image for each pixel value and pixel position is prepared by statistical processing, and this lookup table is used to obtain the degree of likelihood of the specific type of image for each pixel of the determination image.
  • In the embodiments described above, a luminance concave image is generated as the edge image, so that locally dark spots such as the eyes and nostrils of the face are appropriately determined.
  • However, a mouth laughing with its teeth showing, or a nose shining in the light, is locally brighter than its surroundings.
  • For such cases, a luminance convex image may be generated as the edge image by a corresponding formula, and the face-likeness may be obtained in the same manner, as sketched below.
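The convex-image formula itself is not reproduced in the text. A plausible counterpart, obtained by applying the concave extraction to the negated image (so Max takes the place of Min and the sign of the difference flips), would be:

```python
import numpy as np

def vertical_convex_edge_raw(y):
    # Hypothetical mirror of E1': positive only where the inner pixel values
    # rise above the outer ones, i.e. a locally bright (convex) structure.
    e = np.zeros(y.shape, dtype=float)
    e[1:-2, :] = (np.maximum(y[1:-2, :], y[2:-1, :])
                  - np.maximum(y[0:-3, :], y[3:, :]))
    return np.clip(e, 0, None)   # negatives (concave/step parts) clipped to 0
```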
  • As stated above, L(x,y)(E) = log{ (P1(x,y,E) + ε1) / (P2(x,y,E) + ε2) } is a monotonically increasing function of the first term P1(x,y,E) and a monotonically decreasing function of the second term P2(x,y,E).
  • In the embodiments described above, the personal computer 1 performs the image processing for detecting a face image in a captured image.
  • However, the above-described processing may also be performed on the captured image within an imaging apparatus such as a digital still camera.
  • FIG. 12 is a diagram showing a configuration of a digital camera 100 that is such an imaging apparatus.
  • the digital camera 100 includes a photographing lens 102, an image sensor 103 including a CCD, a control device 104 including a CPU and peripheral circuits, a memory 105, and the like.
  • the image sensor 103 captures (captures) the subject 101 via the photographing lens 102 and outputs the captured image data to the control device 104.
  • The control device 104 performs the image processing for detecting a face image described above on the image (image data) captured by the image sensor 103. Based on the detection result of the face image, the control device 104 then performs white balance adjustment and various other image processing on the captured image, and records the image data after the image processing.
  • the image processing program executed by the control device 104 is stored in a ROM (not shown).
  • The processing described above can also be applied to a video camera. Furthermore, it can be applied to a surveillance camera that monitors suspicious persons, and to a device that identifies individuals from captured face images or estimates gender, age, and facial expression. That is, the present invention can be applied to any device, such as an image processing apparatus or an imaging apparatus, that extracts and processes a specific type of image such as a face image.

Abstract

An image processing method for determining whether an image is a particular type of image. The image processing method comprises acquiring an image consisting of a plurality of pixels; storing a look-up table indicative of a likelihood of a particular type of image for each pixel value and for each pixel position; generating, based on the acquired image, an image to be determined; obtaining, based on the pixel values and pixel positions of the pixels of the image to be determined, likelihoods of the particular type of image on those pixels by use of the look-up table; totalizing the obtained likelihoods of the particular type of image as to the pixels of the image to be determined; and determining, based on the totalization result, whether the input image is the particular type of image.

Description

Specification

Image processing method, image processing apparatus, image processing program, and imaging apparatus
Technical Field

[0001] The present invention relates to an image processing method, an image processing apparatus, an image processing program, and an imaging apparatus that determine whether or not a specific image is present in an acquired image.
Background Art

[0002] In digital image processing, there is high demand for processing that detects a face image within a photographed image. Examples include processing in a digital camera that converts a detected face area into a preferable color or gradation, processing that extracts the scenes in which a specific person appears in a video image, and processing in a surveillance camera that extracts images of a suspicious person.

[0003] As image processing for detecting a face, a common method is to reduce the input image at various magnifications, set face determination areas at various positions in the reduced images, and determine by a predetermined face determination method whether each face determination area is a face. As conventional face determination methods, template matching, SVM (support vector machine), neural networks, AdaBoost, and the like have been proposed.

[0004] In face determination by template matching, the determination is performed by calculating the degree of matching between a template face image and the determination target image. In face determination by SVM, the degree of coincidence between many support vectors (template face images and non-face images selected from learning samples) and the determination target image is calculated, which makes it possible to respond flexibly to various variations of the determination target face image.

[0005] The operation of face determination by a neural network (three-layer perceptron) is somewhat similar to SVM, but uses multiple weighting coefficient maps obtained by learning instead of support vectors. The operation of face determination by AdaBoost is somewhat similar to a neural network, but uses multiple rectangular filters selected by learning instead of the weighting coefficient maps obtained by learning.

[0006] Patent Document 1: Japanese Patent Laid-Open No. 2005-44330
Disclosure of the Invention
Problems to be Solved by the Invention

[0007] However, face determination by template matching cannot flexibly cope with the various variations of the determination target face image. In particular, when the illumination conditions of the template face image and the determination target face image differ, the determination often cannot be made correctly. In addition, face determination by SVM has the problem that an enormous processing time is required to calculate the degree of coincidence between the many support vectors and the determination target image.

[0008] Face detection by a neural network is faster than SVM, but has the problem that the processing time required for determination is somewhat longer than that of AdaBoost and the like. Face determination by AdaBoost using rectangular filters is faster than SVM or a neural network, but still requires processing time because the rectangular filters must be recomputed each time the determination target area is changed.
課題を解決するための手段  Means for solving the problem
[0009] 本発明の第 1の態様によると、特定種類の画像であるかどうかを判定する画像処理 方法は、複数の画素からなる画像を取得し、特定種類の画像らしさの度合いを画素 値および画素位置ごとに示すルックアップテーブルを格納し、取得した画像に基づ いて判定用画像を生成し、判定用画像の画素の画素値および画素位置に基づき、 ルックアップテーブルを用いて、その画素における特定種類の画像らしさの度合いを 求め、求めた判定用画像の画素の画像らしさの度合いを積算し、積算した結果に基 づき、入力画像が特定種類の画像であるかどうかを判定する。 [0009] According to the first aspect of the present invention, the image processing method for determining whether or not the image is of a specific type obtains an image composed of a plurality of pixels, and determines the degree of image quality of the specific type as a pixel value and A lookup table shown for each pixel position is stored, a determination image is generated based on the acquired image, and the pixel value and pixel position of the pixel of the determination image are used to determine the pixel value of the pixel. The degree of likelihood of a specific type of image is obtained, the degree of likelihood of pixels of the obtained image for determination is integrated, and based on the result of integration, it is determined whether the input image is a specific type of image.
本発明の第 2の態様によると、第 1の態様の画像処理方法において、特定種類の 画像は、顔の画像であるのが好ましい。  According to the second aspect of the present invention, in the image processing method according to the first aspect, the specific type of image is preferably a face image.
本発明の第 3の態様によると、第 1または第 2の態様の画像処理方法において、判 定用画像は、取得した画像のエッジ成分を抽出して生成されるのが好ましい。  According to the third aspect of the present invention, in the image processing method according to the first or second aspect, the determination image is preferably generated by extracting an edge component of the acquired image.
本発明の第 4の態様によると、第 1または第 2の態様の画像処理方法において、判 定用画像は、取得した画像の局所的に周辺より画素値がへこんでいる凹構造のエツ ジ成分を抽出して生成されるのが好ましい。  According to the fourth aspect of the present invention, in the image processing method according to the first or second aspect, the determination image is an edge component having a concave structure in which the pixel value is recessed locally from the periphery of the acquired image. It is preferable to generate by extracting.
本発明の第 5の態様によると、第 1の態様の画像処理方法において、ルックアップ テーブルは、特定種類の画像の特徴的な要素に対応する画素位置では、その画素 のエッジ成分が大きい場合の特定種類の画像らしさの度合いを、エッジ成分が小さ い場合の特定種類の画像らしさの度合いに比べて大きな値とし、特定種類の画像の 特徴的な要素以外に対応する画素位置では、その画素のエッジ成分が大き!/、場合 の特定種類の画像らしさの度合!/、を、エッジ成分が小さ!/、場合の特定種類の画像ら しさの度合いに比べて小さな値とするのが好ましレ、。 According to a fifth aspect of the present invention, in the image processing method according to the first aspect, the look-up table includes a pixel at a pixel position corresponding to a characteristic element of a specific type of image. If the edge component of the image is large, the degree of image-likeness of the specific type is set to a value larger than the degree of image-likeness of the specific type of image when the edge component is small. At the pixel position, the edge component of the pixel is large! /, The degree of image-likeness of a particular type of case! /, And the edge component is small! /, A value smaller than the degree of image-likeness of a particular type of It ’s better to do that.
本発明の第 6の態様によると、第 2の態様の画像処理方法において、判定用画像 は、取得した画像のエッジ成分を抽出して生成され、 目鼻口のいずれかの領域に対 応する画素位置では、その画素のエッジ成分が大きレ、場合の顔の画像らしさの度合 いを、エッジ成分が小さい場合の顔の画像らしさの度合いに比べて大きな値とし、 目 鼻口以外の領域に対応する画素位置では、その画素のエッジ成分が大き!/、場合の 顔の画像らしさの度合レ、を、エッジ成分が小さレ、場合の顔の画像らしさの度合いに比 ベて小さな値とするのが好ましレ、。  According to the sixth aspect of the present invention, in the image processing method according to the second aspect, the determination image is generated by extracting an edge component of the acquired image, and corresponds to any region of the eye-nose mouth. At the position, if the edge component of the pixel is large, the degree of the image quality of the face is larger than the degree of the image quality of the face when the edge component is small, and it corresponds to the area other than the eyes and nose and mouth At the pixel position where the edge component of the pixel is large! /, The degree of the image quality of the face in the case is set to a small value compared to the degree of the image quality of the face in the case of the small edge component. Is preferred.
本発明の第 7の態様によると、第 1から第 6のいずれかの態様の画像処理方法にお いて、ルックアップテーブルは、特定種類の画像に属する判定対象画像サンプル群 と特定種類の画像に属さない非判定対象画像サンプル群とに基づく統計処理により 生成されるのが好ましい。  According to the seventh aspect of the present invention, in the image processing method according to any one of the first to sixth aspects, the lookup table includes a determination target image sample group belonging to a specific type of image and a specific type of image. It is preferably generated by statistical processing based on a non-determination target image sample group that does not belong.
本発明の第 8の態様によると、第 7の態様の画像処理方法において、統計処理に おいて、判定用画像を生成するときと等価な処理により、判定対象画像サンプル群 に基づいて第 1の画像サンプル群を生成し、非判定対象画像サンプル群に基づいて 第 2の画像サンプル群を生成し、第 1の画像サンプル群の画素位置 (x,y)における画 素値が Eとなる頻度 P (x,y)(E)と、第 2の画像サンプル群の画素位置 (x,y)における画  According to the eighth aspect of the present invention, in the image processing method according to the seventh aspect, in the statistical process, the first process is performed based on the determination target image sample group by a process equivalent to that for generating the determination image. Generate an image sample group, generate a second image sample group based on the non-determination target image sample group, and the frequency P at which the pixel value at the pixel position (x, y) of the first image sample group is E (x, y) (E) and the image at pixel position (x, y) of the second image sample group.
1  1
素値が Eとなる頻度 P (x,y)(E)とを求め、判定用画像の画素位置 (x,y)における画素値 Eに対してその画素における特定種類の画像らしさの度合い V(x,y)を、 V(x,y) = L(x,y )(E)で与える画素位置 (x,y)におけるルックアップテーブル L(x,y)(E)を、 L(x,y)(E) = f( P (x,y)(E), P (x,y)(E) )により生成し、関数 ί( P (x,y)(E), P (x,y)(E) )は、 P (x,y)(E)につThe frequency P (x, y) (E) at which the prime value is E is obtained, and the degree V of the image-likeness of a specific type at that pixel with respect to the pixel value E at the pixel position (x, y) of the judgment image V ( x, y) is represented by V (x, y) = L (x, y) (E), and the lookup table L (x, y) (E) at the pixel position (x, y) is represented by L (x, y) y) (E) = f (P (x, y) (E), P (x, y) (E)), and the function ί (P (x, y) (E), P (x, y ) (E)) is assigned to P (x, y) (E)
1 2 1 2 1 1 2 1 2 1
いて実質的に広義の単調増加関数であり、 P2(xy)(E)について実質的に広義の単調 減少関数であるのが好ましレ、。 It is preferably a monotonically increasing function in a broad sense, and substantially monotonically decreasing in a broad sense with respect to P 2 ( x , y ) ( E ).
本発明の第 9の態様によると、第 8の態様の画像処理方法において、関数 P (x,y)( E), P2(x,y)(E) )は、 f( Pi(x,y)(E), P2(Xy)(E) ) = log{ ε )According to a ninth aspect of the present invention, in the image processing method according to the eighth aspect, the function P (x, y) ( E), P 2 (x, y) (E)) is f (P i (x, y) (E), P 2 ( X , y ) (E)) = log {ε)
Figure imgf000006_0001
Figure imgf000006_0001
}であり、 ε と ε は所定の定数であるのが好ましい。  }, And ε and ε are preferably predetermined constants.
1 2  1 2
本発明の第 10の態様によると、第 1の態様の画像処理方法において、コントラスト の程度に応じた複数のルックアップテーブルを格納し、取得した画像のコントラストを 算出し、複数のルックアップテーブルからコントラストに応じたルックアップテーブルを 選択するのが好ましい。  According to a tenth aspect of the present invention, in the image processing method of the first aspect, a plurality of lookup tables corresponding to the degree of contrast are stored, the contrast of the acquired image is calculated, and the plurality of lookup tables are calculated. It is preferable to select a lookup table according to the contrast.
本発明の第 1 1の態様によると、特定種類の画像であるかどうかを判定する画像処 理方法は、複数の画素からなる画像を取得し、特定種類の画像らしさの度合いを画 素値および画素位置ごとに示すルックアップテーブルを格納し、複数の異なる縮小 倍率により取得した画像の複数の縮小画像を生成し、複数の縮小画像に基づ!/、て 判定用画像を生成し、複数の縮小画像の 1つである第 1の縮小画像に対して判定対 象領域を設定し、判定対象領域の画素の画素値および判定対象領域内の画素位置 に基づき、ルックアップテーブルを用いて、その画素における特定種類の画像らしさ の度合いを求め、求めた判定対象領域の画素の特定種類の画像らしさの度合いを 積算し、積算した結果に基づき、取得した画像内の判定対象領域に対応する画像が 特定種類の画像であるかどうかを判定する。  According to the first aspect of the present invention, the image processing method for determining whether or not the image is a specific type of image obtains an image composed of a plurality of pixels, and determines the degree of the image quality of the specific type as the pixel value and Stores a look-up table for each pixel position, generates multiple reduced images of images acquired at multiple different reduction magnifications, generates a judgment image based on multiple reduced images, and A determination target area is set for the first reduced image, which is one of the reduced images, and based on the pixel value of the pixel in the determination target area and the pixel position in the determination target area, the look-up table is used. The degree of image-likeness of a particular type of pixel is obtained, the degree of image-likeness of the particular type of pixel in the obtained determination target area is integrated, and an image corresponding to the determination target area in the acquired image is obtained based on the integration result.It is determined whether the image is of a specific type.
According to the twelfth aspect of the present invention, in the image processing method according to the eleventh aspect, it is preferable to further set, on a second reduced image that is reduced further than the first reduced image, a second determination target region corresponding to the determination target region; to further store a second lookup table indicating the degree of resemblance to the specific type of image for each pixel value and each pixel position corresponding to the second determination target region; to obtain, using the second lookup table, the degree to which each pixel in the second determination target region resembles the specific type of image based on its pixel value and its pixel position within the second determination target region; to integrate the obtained degrees over the pixels of the second determination target region; and to determine whether the image corresponding to the determination target region in the acquired image is the specific type of image based on both the integration result for the determination target region and the integration result for the second determination target region.
According to the thirteenth aspect of the present invention, an image processing program causes a computer to execute the image processing method of any one of the first to twelfth aspects. According to the fourteenth aspect of the present invention, an image processing apparatus is equipped with the image processing program of the thirteenth aspect.
According to the fifteenth aspect of the present invention, an imaging apparatus is equipped with the image processing program of the thirteenth aspect.
Effect of the Invention
[0010] Since the present invention is configured as described above, a specific type of image can be determined at high speed without being affected by various variations in illumination conditions.
Brief Description of the Drawings
[0011] [FIG. 1] A diagram showing an image processing apparatus according to an embodiment of the present invention.
[FIG. 2] A flowchart of the image processing program executed by the personal computer 1.
[FIG. 3] A diagram showing an edge extraction target pixel and its peripheral pixels in coordinates (x, y).
[FIG. 4] A diagram showing the results of creating the luminance concave image E′(x,y) for various luminance structures.
[FIG. 5] A diagram showing an example in which the four types of edge images E1(x,y) to E4(x,y) are generated for a concrete facial luminance image.
[FIG. 6] A diagram showing an example in which the facial likelihood V1(x,y) is generated for concrete edge images and the facial likelihood V_SUM1 is calculated.
[FIG. 7] A diagram showing concrete values of the lookup table L1(x,y)(E) for each edge magnitude.
[FIG. 8] A flowchart of the processing performed after the facial likelihoods Vsum1 to Vsum4 of a partial image are obtained in the face determination processing of step S6 in FIG. 2.
[FIG. 9] A flowchart of the processing for obtaining the facial likelihood L1(x,y)(E).
[FIG. 10] A flowchart of the image processing program of the second embodiment executed by the personal computer 1.
[FIG. 11] A flowchart of the processing performed after the facial likelihood of each partial image is obtained for the normal-size edge images and the reduced-size edge images.
[FIG. 12] A diagram showing the configuration of a digital camera 100, which is an imaging apparatus.
BEST MODE FOR CARRYING OUT THE INVENTION
[0012] First Embodiment
FIG. 1 is a diagram showing an image processing apparatus according to an embodiment of the present invention. The image processing apparatus is realized by a personal computer 1. The personal computer 1 is connected to a digital camera 2, a recording medium 3 such as a CD-ROM, another computer 4, and the like, and receives various images (image data) from them. The personal computer 1 performs the image processing described below on the provided images. The computer 4 is connected via the Internet or another telecommunication line 5.
[0013] The program that the personal computer 1 executes for the image processing is provided from a recording medium such as a CD-ROM, or from another computer via the Internet or another telecommunication line, as in the configuration of FIG. 1, and is installed in the personal computer 1. The personal computer 1 comprises a CPU (not shown) and its peripheral circuits (not shown), and the CPU executes the installed program.
[0014] When the program is provided via the Internet or another telecommunication line, the program is converted into a signal on a carrier wave carried by the transmission medium, i.e., the telecommunication line, and is transmitted. In this way, the program is supplied as a computer-readable computer program product in various forms such as a recording medium and a carrier wave.
[0015] The personal computer 1 of the present embodiment performs image processing that detects face images from within a photographed image. Specifically, edge components are extracted from the input image to generate edge images, and whether a face image is present is determined based on the generated edge images. The processing in the present embodiment is characterized by this edge component extraction method and by the face determination method based on the edge images.
[0016] In the following, the processing is described as image processing performed on an image, but in practice this means that the image processing is performed on the input image data. In the present embodiment, an edge means a location (region, pixel) whose luminance value or pixel value is smaller than its surroundings and is thus depressed, a location (region, pixel) whose value is larger than its surroundings and thus protrudes, or a location (region, pixel) that forms a step. In particular, a location (region, pixel) depressed relative to its surroundings is called an edge of concave structure, and a location (region, pixel) protruding relative to its surroundings is called an edge of convex structure.
[0017] Image processing in which the personal computer 1 of the present embodiment detects face images from within a photographed image is described in detail below. FIG. 2 is a flowchart of the image processing program executed by the personal computer 1.
[0018] In step S1, an image (image data) to be examined for faces, photographed (captured) with a digital camera or the like, is input (acquired). Each pixel of the input image contains R, G, and B color components, and each color component ranges from 0 to 255. In step S2, a luminance image Y, i.e., a luminance image Y plane, is generated from R, G, and B of the input image by the following expression:

Y = (R + 2G + B) / 4
[0019] In step S3, the generated luminance image is hierarchically reduced and output. For example, the reduction magnification κ is given by 0.9^n for integers n from 0 to 31, and luminance images reduced at these 32 reduction magnifications are output. As the reduction method, for example, cubic scaling or linear scaling may be used. Multiple reduced images are generated in this way because it is unknown what size of face image, if any, the input image contains, and this makes it possible to handle face images of any size.
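As an illustration only (not part of the disclosed method), the luminance conversion of step S2 and the hierarchical reduction of step S3 might be sketched as follows; the use of NumPy and Pillow, and all function names, are assumptions of this sketch.

import numpy as np
from PIL import Image

def luminance(rgb):
    # Step S2: Y = (R + 2G + B) / 4, with R, G, B in 0..255
    r = rgb[:, :, 0].astype(np.uint16)
    g = rgb[:, :, 1].astype(np.uint16)
    b = rgb[:, :, 2].astype(np.uint16)
    return ((r + 2 * g + b) // 4).astype(np.uint8)

def reduced_pyramid(y, n_levels=32):
    # Step S3: reduced luminance images at kappa = 0.9**n, n = 0..31
    h, w = y.shape
    pil = Image.fromarray(y)
    levels = []
    for n in range(n_levels):
        kappa = 0.9 ** n
        size = (max(1, round(w * kappa)), max(1, round(h * kappa)))
        levels.append(np.asarray(pil.resize(size, Image.BICUBIC)))
    return levels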
[0020] In step S4, four types of edge images E1(x,y) to E4(x,y) are generated from each reduced luminance image Y(x,y) by the following procedure. In the following, the x direction is the horizontal direction of the image and the y direction is the vertical direction.
[0021] First, an image Y_LV(x,y) smoothed in the vertical direction and an image Y_LH(x,y) smoothed in the horizontal direction are generated by the following expressions. This is because, to extract vertical edge components, it is preferable to use image data smoothed in the horizontal direction, and to extract horizontal edge components, it is preferable to use image data smoothed in the vertical direction.

Y_LV(x,y) = { Y(x,y-1) + 2 × Y(x,y) + Y(x,y+1) } / 4
Y_LH(x,y) = { Y(x-1,y) + 2 × Y(x,y) + Y(x+1,y) } / 4
[0022] Next, using the horizontally smoothed image Y_LH(x,y), a vertical edge image E1(x,y) is generated by the following expressions. Each pixel of an edge image is called an edge pixel.

E1′(x,y) = Min( Y_LH(x,y-1), Y_LH(x,y+2) ) - Min( Y_LH(x,y), Y_LH(x,y+1) )
E1(x,y) = γ( E1′(x,y) )
[0023] Next, a vertical edge image E2(x,y) is generated by the following expressions:

E2′(x,y) = | Y_LH(x,y-1) - Y_LH(x,y) | + | Y_LH(x,y+1) - Y_LH(x,y) |
E2(x,y) = γ( E2′(x,y) )
[0024] Next, using the vertically smoothed image Y_LV(x,y), a horizontal edge image E3(x,y) is generated by the following expressions:

E3′(x,y) = Min( Y_LV(x-1,y), Y_LV(x+2,y) ) - Min( Y_LV(x,y), Y_LV(x+1,y) )
E3(x,y) = γ( E3′(x,y) )
[0025] Next, a horizontal edge image E4(x,y) is generated by the following expressions:

E4′(x,y) = | Y_LV(x-1,y) - Y_LV(x,y) | + | Y_LV(x+1,y) - Y_LV(x,y) |
E4(x,y) = γ( E4′(x,y) )
[0026] Here, Min() is a function that returns the smallest of the values given to it. γ(E) is a function that performs γ conversion and clipping; it performs the following operations and outputs an integer from 0 to 31. This Min() processing is a nonlinear filter processing, and the whole, including the γ conversion and the clipping, may also be called nonlinear filter processing.

γ(E) = 0 when E < 0
γ(E) = 31 when E > 63
γ(E) = (int)(4 × √E) when 0 ≤ E ≤ 63
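To make the expressions of [0021] to [0026] concrete, the following sketch (an illustration under stated assumptions, not the disclosed implementation) computes the smoothed images, the raw edge amounts E1′ to E4′, and the γ-converted edge images E1 to E4. For brevity it uses wrap-around shifts, so the image borders are not handled as a real implementation would.

import numpy as np

def shift(a, dx, dy):
    # returns an array whose value at (x, y) is a's value at (x + dx, y + dy);
    # np.roll wraps around at the borders (an assumption of this sketch)
    return np.roll(a, shift=(-dy, -dx), axis=(0, 1))

def gamma(e):
    # gamma conversion and clipping of [0026]: integer output in 0..31
    e = np.clip(e, 0, 63)
    return (4.0 * np.sqrt(e)).astype(np.int32)

def edge_images(y):
    y = y.astype(np.int32)                                  # y[row, col] = Y(x, y)
    ylv = (shift(y, 0, -1) + 2 * y + shift(y, 0, 1)) // 4   # vertical smoothing
    ylh = (shift(y, -1, 0) + 2 * y + shift(y, 1, 0)) // 4   # horizontal smoothing
    # E1': vertical luminance concavity (min of outer pair minus min of inner pair)
    e1 = np.minimum(shift(ylh, 0, -1), shift(ylh, 0, 2)) - np.minimum(ylh, shift(ylh, 0, 1))
    # E2': vertical adjacent-pixel differences
    e2 = np.abs(shift(ylh, 0, -1) - ylh) + np.abs(shift(ylh, 0, 1) - ylh)
    # E3', E4': the same with vertical and horizontal interchanged
    e3 = np.minimum(shift(ylv, -1, 0), shift(ylv, 2, 0)) - np.minimum(ylv, shift(ylv, 1, 0))
    e4 = np.abs(shift(ylv, -1, 0) - ylv) + np.abs(shift(ylv, 1, 0) - ylv)
    return [gamma(e) for e in (e1, e2, e3, e4)]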
[0027] The generation of the above edge images is described in more detail with reference to FIG. 3. FIG. 3 is a diagram showing an edge extraction target pixel and its peripheral pixels in coordinates (x, y). E1′(x,y) is, on the luminance image Y_LH(x,y) plane, the difference between the minimum of the outer two pixels Y_LH(x,y-1) and Y_LH(x,y+2) and the minimum of the inner two pixels Y_LH(x,y) and Y_LH(x,y+1), taken among the four vertically aligned pixels Y_LH(x,y-1), Y_LH(x,y), Y_LH(x,y+1), and Y_LH(x,y+2) around the target pixel (x, y).

[0028] A positive value of E1′(x,y) indicates that the values near the target pixel (x, y) are smaller than the values of the vertically surrounding pixels, i.e., that the pixel value is depressed relative to its vertical surroundings. Accordingly, the value of E1(x,y) generated in this way is treated as a pixel value, and the generated image is called a vertical luminance concave image.
[0029] E2′(x,y) is, on the luminance image Y_LH(x,y) plane, the sum of the absolute differences in luminance value between the target pixel (x, y) and its vertically adjacent pixels. That is, a large value is generated when the luminance value changes greatly between vertically adjacent pixels. Accordingly, the value of E2(x,y) generated in this way is treated as a pixel value, and the generated image is called a vertical adjacent pixel difference image. The vertical adjacent pixel difference image detects edges of concave structure, edges of convex structure, and step edges without distinction.
[0030] E3′(x,y) and E3(x,y), and E4′(x,y) and E4(x,y), are for generating the horizontal edge images. They are obtained by the same computations as E1′(x,y) and E1(x,y), and E2′(x,y) and E2(x,y), respectively, with the vertical and horizontal directions interchanged. Accordingly, the E3(x,y) generated in this way is called a horizontal luminance concave image, and E4(x,y) is called a horizontal adjacent pixel difference image.
[0031] FIG. 4 is a diagram showing the results of creating the luminance concave image E′(x,y) for various luminance structures. FIG. 4(a) is a case where the luminance is depressed, FIG. 4(b) is a case where the luminance protrudes, and FIG. 4(c) is a case where the luminance forms a step. FIG. 4 shows that the luminance concave image takes a positive value only when the luminance is depressed. Therefore, if the negative values of the luminance concave image E′ are clipped to 0, an edge image E(x,y) that responds only to luminance depressions is generated.
[0032] The luminance concave image responds particularly well to locally dark locations such as the eyes, nostrils, and mouth. FIG. 5 is a diagram showing an example in which the above four types of edge images E1(x,y) to E4(x,y) are generated for a concrete facial luminance image. Indeed, the luminance concave images have sharp peaks at the positions of the eyes, nose, and mouth. In particular, the vertical luminance concave image E1 in FIG. 5 responds to the eyes, nostrils, mouth, and so on, and among these it responds strongly to the eyes and the nostrils, which appear white; that is, the value of E1 at those positions is large. Therefore, a face can be detected with high accuracy by analyzing such luminance concave images. However, it is desirable not to use the luminance concave images alone but to also use edge images created by conventional methods together with them.
[0033] The reason the edge image is obtained by gamma-converting the edge amount E′ is to convert the edge amount E′ into an appropriate feature amount E. In image analysis, a subtle difference in edge amount at a location with almost no edges has greater significance than a modest difference in edge amount at a location with large edges. Applying gamma conversion to the edge amount E′ realizes this effect: a difference in edge amount at a location with almost no edges is converted into a large difference in the feature amount E, and a difference in edge amount at a location with large edges is converted into a small difference in the feature amount E.
[0034] Next, returning to FIG. 2, in step S5 a 19 × 19 pixel face determination target region is set at every other pixel of each reduced image, and the partial images of the edge images in that region are output. This is performed for all the reduced images. The 19 × 19 pixel face determination target region is a size suited to detecting the eyes, nose, mouth, and so on at about 2 pixels each when the region contains a face.
[0035] In step S6, for each partial image of the edge images output in step S5, it is determined whether that region is a face image. In the present embodiment, this face image determination is performed by the method described below.
[0036] First, for each pixel position (x,y) (0 ≤ x ≤ 18, 0 ≤ y ≤ 18) of the partial image of the edge image E1(x,y), the facial likelihood V1(x,y) of that position is generated based on the following expression. The facial likelihood V1(x,y) is a numerical expression of how face-like each pixel position is, indicating the degree of facial likelihood. V1(x,y) may also be regarded as a likelihood representing how plausible the position is as part of a face.

V1(x,y) = L1(x,y)( E1(x,y) )

Here, L1(x,y)(E) is a lookup table created in advance for each pixel position (x,y) (0 ≤ x ≤ 18, 0 ≤ y ≤ 18) by the statistical processing described later, and represents the facial likelihood of the location when the edge E1(x,y) at pixel position (x,y) is E.
[0037] Then, the generated facial likelihoods V1(x,y) are integrated over all pixels (x,y) (0 ≤ x ≤ 18, 0 ≤ y ≤ 18) to calculate the facial likelihood V_SUM1.

[0038] FIG. 6 is a diagram showing an example in which the above processing is performed on concrete edge images. In the facial likelihood images of FIG. 6, face-like locations are displayed white and non-face-like locations are displayed black. The facial likelihood image generated from the facial edge image shown in FIG. 6(a) has large values overall; that is, it is an overall whitish image. However, the facial likelihood image generated from the non-facial edge image shown in FIG. 6(b) has small values in places; that is, it is an image that is blackish in places.

[0039] In the non-face example of FIG. 6(b), the regions corresponding to the space between the eyes, the nose, and both sides of the mouth are judged not face-like, and in the facial likelihood image the pixel values of those regions are small and appear black. Accordingly, the value V_SUM1 obtained by integrating the facial likelihood image of the non-face image over all pixels is small.
[0040] FIG. 7 is a diagram showing concrete values of the lookup table L1(x,y)(E) for each edge magnitude. In FIG. 7, the larger the facial likelihood value, the whiter the display. In FIG. 7, the left side is the facial likelihood when the edge is small, and the right side is the facial likelihood when the edge is large. If all the values of the lookup table L1(x,y)(E) were illustrated, 32 diagrams, L1(x,y)(0) to L1(x,y)(31), could be drawn, since the edges are generated in the range 0 to 31 as described above. In FIG. 7, however, only 8 of them are shown for convenience of illustration.
[0041] The lookup table L1(x,y)(E) of FIG. 7 is a visual representation of its concrete values for each edge magnitude. In practice, a table of values indexed by pixel position (x, y) is stored in memory for each edge value; that is, 32 tables indexed by pixel position (x, y) are stored in memory.
[0042] In FIG. 7, the diagram on the left represents the facial likelihood when the edge is small. In the left diagram, the facial likelihood at the locations of the eyes, nose, and mouth has small values. This means that when the edges at the locations of the eyes, nose, and mouth are small, those locations are not face-like. For example, in the non-face example of FIG. 6(b), the edge at the location corresponding to the nose is small, so that location is judged not face-like.
[0043] The diagram on the right side of FIG. 7 represents the facial likelihood when the edge is large. In the right diagram, the facial likelihood at locations other than the eyes, nose, and mouth has small values. This means that when the edges at locations other than the eyes, nose, and mouth are large, those locations are not face-like. For example, in the non-face example of FIG. 6(b), the edges at the locations corresponding to the space between the eyes and both sides of the mouth are large, so those locations are judged not face-like.
[0044] That is, regarding the face image as the specific type of image and the eyes, nose, mouth, and so on as characteristic elements of the specific type of image: at pixel positions corresponding to characteristic elements of the specific type of image, the degree of resemblance to the specific type of image when the edge component of the pixel is large is set to a larger value than when the edge component is small; and at pixel positions corresponding to anything other than characteristic elements of the specific type of image, the degree of resemblance to the specific type of image when the edge component of the pixel is large is set to a smaller value than when the edge component is small.
[0045] To summarize the process of referring to the lookup table: first, in the partial image of the edge image E1(x,y), the value of the edge E1 at x = 0, y = 0 is obtained. Next, the lookup table L1(x,y)(E1) corresponding to this value of the edge E1 is selected from among the 32 lookup tables. Once the lookup table L1(x,y)(E1) is determined, the value of this lookup table at pixel position (0, 0) is obtained. This is the facial likelihood value at pixel position (0, 0) of the edge image E1(x,y). This process is performed sequentially from the pixel at x = 0, y = 0 to the pixel at x = 18, y = 18 to obtain the facial likelihood image V1(x,y). Then all the values of V1(x,y) are integrated to obtain Vsum1.
[0046] Through the above processing, the facial likelihood Vsum1 of the partial image is generated based on the edge image E1(x,y). The processing that generates the facial likelihoods Vsum2 to Vsum4 of the partial image based on the edge images E2(x,y) to E4(x,y) is performed in the same way.
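A sketch of the lookup and accumulation of [0036] to [0046] follows; the storage layout lut[E, y, x] for L(x,y)(E) is an assumption of this illustration, not one specified in the text.

import numpy as np

def vsum(edge, lut):
    # edge: 19 x 19 integer array (values 0..31); lut: 32 x 19 x 19 array,
    # lut[E, y, x] = L(x,y)(E).  Returns the accumulated facial likelihood Vsum.
    ys, xs = np.indices(edge.shape)
    v = lut[edge, ys, xs]        # V(x,y) = L(x,y)(E(x,y)) at every pixel
    return float(v.sum())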
[0047] FIG. 8 is a flowchart of the processing performed after the facial likelihoods Vsum1 to Vsum4 of a partial image are obtained in the face determination processing of step S6 in FIG. 2. In the face determination processing of step S6, as described above, the facial likelihoods Vsum1 to Vsum4 are generated in stages, and the partial image is judged to be a face if the evaluation value obtained by accumulating them is larger than a threshold. By comparing the evaluation value with a threshold at each stage as shown in FIG. 8, however, images that are clearly not faces are excluded at an early stage, enabling efficient processing.
[0048] First, in step S11, the evaluation value for determining whether the partial image is a face image is set to the facial likelihood Vsum1 of the edge image E1(x,y). In step S12, it is determined whether the evaluation value is larger than a predetermined threshold th1; if the evaluation value is larger than the threshold th1, the processing proceeds to step S13, and if the evaluation value is not larger than the threshold th1, the partial image is judged not to be a face image and the face determination processing for the target partial image ends.
[0049] In step S13, the evaluation value is set to the evaluation value of step S11 plus the facial likelihood Vsum2 of the edge image E2(x,y). In step S14, it is determined whether this evaluation value is larger than a predetermined threshold th2; if the evaluation value is larger than the threshold th2, the processing proceeds to step S15, and if the evaluation value is not larger than the threshold th2, the partial image is judged not to be a face image and the face determination processing for the target partial image ends.
[0050] In step S15, the evaluation value is set to the evaluation value of step S13 plus the facial likelihood Vsum3 of the edge image E3(x,y). In step S16, it is determined whether this evaluation value is larger than a predetermined threshold th3; if the evaluation value is larger than the threshold th3, the processing proceeds to step S17, and if the evaluation value is not larger than the threshold th3, the partial image is judged not to be a face image and the face determination processing for the target partial image ends.
[0051] In step S17, the evaluation value is set to the evaluation value of step S15 plus the facial likelihood Vsum4 of the edge image E4(x,y). In step S18, it is determined whether this evaluation value is larger than a predetermined threshold th4. If the evaluation value is larger than the threshold th4 in step S18, the partial image is finally judged to be a face image. If the evaluation value is not larger than the threshold th4, the partial image is judged not to be a face image and the face determination processing for the target partial image ends.
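The staged comparisons of steps S11 to S18 can be sketched as below; the threshold values th1 to th4 are assumed to be given, since the text does not fix them.

def is_face(vsums, thresholds):
    # vsums = [Vsum1, Vsum2, Vsum3, Vsum4]; thresholds = [th1, th2, th3, th4]
    evaluation = 0.0
    for v, th in zip(vsums, thresholds):
        evaluation += v          # add the next facial likelihood to the evaluation value
        if evaluation <= th:     # not larger than the threshold: reject early
            return False
    return True                  # passed every stage: judged to be a face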
[0052] The partial-image face determination processing described above is performed for every partial image, shifted one position at a time, in each reduced image; all partial images that can be judged to be face images are extracted, and the processing proceeds to step S7.
[0053] In step S7, when a partial image has been judged to be a face in step S6, the face size S and coordinates (X, Y) of that partial image with respect to the input image are output. S, X, and Y are given by the following expressions using the face size S′ = 19 in the reduced image, the coordinates (X′, Y′) of the region judged to be a face, and the reduction magnification κ:

S = S′ / κ
X = X′ / κ
Y = Y′ / κ
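The back-conversion of step S7 is direct; the sketch below merely restates the three expressions above.

def to_input_coords(x_r, y_r, kappa, s_r=19):
    # (x_r, y_r): coordinates of the face region in the reduced image;
    # s_r: face size there; kappa: reduction magnification of that image
    return s_r / kappa, x_r / kappa, y_r / kappa   # S, X, Y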
[0054] As described above, when the input image contains a face image, the position and size of that face image are detected and output.
[0055] <Statistical processing>
Next, the statistical processing mentioned above is described; that is, the method of obtaining the facial likelihood L1(x,y)(E) of a pixel when the edge E1(x,y) at pixel position (x,y) is E. FIG. 9 is a flowchart of the processing for obtaining this facial likelihood L1(x,y)(E). This processing is executed on the personal computer 1.
[0056] In step S21, face images of several hundred or more people are acquired. That is, several hundred or more faces are photographed (captured) with a digital camera or the like, and the images (image data) are acquired. The acquired images are composed of the same color components as the image input in step S1 of FIG. 2. In step S22, each image in which a face is photographed is scaled so that the size of the face region becomes 19 × 19 pixels, and the partial images cut out from the face regions are taken as the face image sample group.
[0057] In step S23, several hundred or more patterns of 19 × 19 pixel non-face image samples are acquired. These are extracted as appropriate from images, photographed with a digital camera, that are not faces, and taken as the non-face image sample group. They may also be extracted from images containing faces while avoiding the face regions; in that case, the user may designate non-face image regions as appropriate in an image displayed on a monitor.
[0058] In step S24, edge components are extracted from the face image sample group to generate a face edge image sample group. This processing is performed in the same way as the processing that generates the edge image E1(x,y) in the face detection processing. In step S25, edge components are extracted from the non-face image sample group to generate a non-face edge image sample group. This processing is also performed in the same way as the processing that generates the edge image E1(x,y) in the face detection processing.
[0059] In step S26, for the face edge image sample group, the frequency P_face(x,y,E) with which the edge at (x,y) is E is obtained; that is, the number of images in which the value of the pixel (x, y) is E is counted. In step S27, the frequency P_nonface(x,y,E) with which the edge at (x,y) is E is similarly obtained for the non-face edge image sample group.
[0060] In step S28, the facial likelihood L1(x,y)(E) of a pixel when the edge E1(x,y) at pixel position (x,y) is E is calculated by the following expression:

L1(x,y)(E) = log{ ( P_face(x,y,E) + ε1 ) / ( P_nonface(x,y,E) + ε2 ) }

Here, ε1 and ε2 are predetermined constants, introduced to suppress logarithmic divergence and overfitting. The value of ε1 may be set to about one thousandth of the average value of P_face(x,y,E), and the value of ε2 may be set to several tens of times the value of ε1.
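Steps S26 to S28 might be sketched as follows; the array shapes, the value 30 chosen for the ratio ε2/ε1, and the table layout [E, y, x] are assumptions of this illustration.

import numpy as np

def build_lut(face_edges, nonface_edges, n_values=32):
    # face_edges, nonface_edges: (N, 19, 19) integer arrays with values 0..31
    p_face = np.stack([(face_edges == e).sum(axis=0) for e in range(n_values)])
    p_nonface = np.stack([(nonface_edges == e).sum(axis=0) for e in range(n_values)])
    eps1 = p_face.mean() / 1000.0      # about 1/1000 of the average frequency
    eps2 = 30.0 * eps1                 # several tens of times eps1 (assumed: 30)
    # L(x,y)(E) = log{(P_face + eps1) / (P_nonface + eps2)}, laid out as [E, y, x]
    return np.log((p_face + eps1) / (p_nonface + eps2))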
[0061] In the above expression for L1(x,y)(E), log{ ( P_face(x,y,E) + ε1 ) } is a monotonically increasing function, and log{ 1 / ( P_nonface(x,y,E) + ε2 ) } is a monotonically decreasing function. That is, the facial likelihood L1(x,y)(E) is a function whose value increases monotonically as the distribution of face image samples whose edge E1(x,y) at pixel position (x,y) is E grows, and decreases monotonically as the distribution of non-face image samples whose edge E1(x,y) at pixel position (x,y) is E grows. The distribution of face image samples whose edge E1(x,y) at pixel position (x,y) is E, and the distribution of non-face image samples whose edge E1(x,y) at pixel position (x,y) is E, are usually normally distributed.
[0062] To generate the lookup tables L2(x,y)(E) to L4(x,y)(E) that convert the edge images E2(x,y) to E4(x,y) into facial likelihoods, the edge component extraction processing of steps S24 and S25 above is performed in the same way as the processing that generates the edge images E2(x,y) to E4(x,y) in the face detection processing.
[0063] The processing of the first embodiment described above provides the following effects.
(1) The positions of the eyes, nose, mouth, and so on in a face image are locally darker than their surroundings. Conventional edge extraction methods could not distinguish whether an edge structure was a locally dark structure, a locally bright structure, or some other structure. However, by detecting edges of concave structure and generating concave images as edge images as described above, the locally dark structures of a face image, such as the eyes, nose, and mouth, can be extracted appropriately. As a result, face images can be determined accurately.
[0064] (2) The luminance concave images respond particularly well to locally dark locations such as the eyes, nose, and mouth. Indeed, the luminance concave images have sharp peaks at the positions of the eyes, nose, and mouth. Therefore, a face can be detected with high accuracy by analyzing such luminance concave images. In the present embodiment, the luminance concave images are not used alone; edge images created by conventional methods are also used together with them, which makes even more accurate face determination possible.
[0065] (3) The reason the edge amount E′ is gamma-converted is to convert the edge amount E′ into an appropriate feature amount E. In image analysis, a subtle difference in edge amount at a location with almost no edges has greater significance than a modest difference in edge amount at a location with large edges. By applying gamma conversion to the edge amount E′, a difference in edge amount at a location with almost no edges is converted into a large difference in the feature amount E, and a difference in edge amount at a location with large edges is converted into a small difference in the feature amount E. As a result, differences in edge amount come to match differences in image structure, and the accuracy of face determination also increases.
[0066] (4) As is clear from FIG. 4 of the above embodiment, the luminance concave image takes a positive value only when the luminance is depressed. Therefore, in the present embodiment, the negative values of the luminance concave image E′ are clipped to 0. As a result, an edge image E(x,y) that responds only to luminance depressions is generated, and the processing that uses the edge image E becomes easier.
[0067] (5) A face image can be detected by the simple and fast processing of converting the pixel values of edge images into facial likelihoods using lookup tables and integrating them. In addition, making the determination on edge images has the effect of suppressing the influence of the illumination conditions under which the image was photographed.
[0068] Second Embodiment
The second embodiment describes a face determination method that is robust to variations in the contrast of the determination target image. Like the first embodiment, the second embodiment is realized by the personal computer 1. Accordingly, for the configuration of the image processing apparatus of the second embodiment, refer to FIG. 1 of the first embodiment.
[0069] <Statistical processing>
First, the statistical processing described below is performed to create the lookup tables (LUTs) for face determination of the second embodiment. The creation of the lookup tables of the second embodiment is described below with reference to FIG. 9 of the first embodiment.
[0070] First, in the same way as steps S21 to S23 of FIG. 9, a group of several hundred or more 19 × 19 pixel face image samples and a group of several hundred or more 19 × 19 pixel non-face image samples are acquired.
[0071] Next, a gain is applied to the pixel values of the face image sample group so that the variance of the pixel values is adjusted to about 100. Alternatively, face image samples whose pixel value variance is less than 200 are extracted from the face image sample group. Using the face image sample group adjusted or extracted in this way and the previously obtained non-face image sample group, a lookup table for face determination is created in the same way as steps S24 to S28 of FIG. 9. The lookup table obtained in this way is called the low-contrast face determination lookup table.
[0072] Next, a gain different from the above is applied to the pixel values of the face image sample group, adjusting the variance of the pixel values to a larger value. Alternatively, face image samples whose pixel value variance is 200 or more are extracted from the face image sample group. Using the face image sample group adjusted or extracted in this way and the previously obtained non-face image sample group, a lookup table for face determination is created in the same way as steps S24 to S28 of FIG. 9. The lookup table obtained in this way is called the high-contrast face determination lookup table.
[0073] Next, image processing that detects a face image from within a photographed image using the low-contrast face determination lookup table and the high-contrast face determination lookup table obtained as described above is described. FIG. 10 is a flowchart of the image processing program of the second embodiment executed by the personal computer 1.
[0074] Steps S31 to S34 are the same as steps S1 to S4 of FIG. 2 of the first embodiment. In step S38, the integral image I(x,y) of the luminance image and the integral image I2(x,y) of the squared pixel values of the luminance image are created based on the following expressions:

I(x,y) = Σ (i = 0 to x) Σ (j = 0 to y) Y(i,j)
I2(x,y) = Σ (i = 0 to x) Σ (j = 0 to y) Y(i,j)²
[0075] In step S35, the face determination target region is set in the same way as step S5 of FIG. 2 of the first embodiment. In step S39, the variance σ² of the pixel values of the luminance image Y(x,y) within the face determination target region is calculated. When the face determination target region is the rectangular region whose vertices are the four points (x,y), (x+w,y), (x,y+h), and (x+w,y+h), the value Ysum obtained by integrating the luminance image Y(x,y) over that region and the value Ysum2 obtained by integrating the square of the luminance image over that region are calculated by the following expressions:

Ysum = I(x+w, y+h) - I(x-1, y+h) - I(x+w, y-1) + I(x-1, y-1)
Ysum2 = I2(x+w, y+h) - I2(x-1, y+h) - I2(x+w, y-1) + I2(x-1, y-1)
[0076] According to the above computation, each integral is obtained merely by adding and subtracting the pixel values at four points, so high-speed computation is possible. The variance σ² of the pixel values of the luminance image Y(x,y) within the face determination target region is then given by the following expression, where N is the number of pixels in the region:

σ² = Ysum2 / N - ( Ysum / N )²
[0077] In step S40, when the above variance σ² is less than 200, the low-contrast face detection lookup table is selected. When the variance σ² is 200 or more, the high-contrast face detection lookup table is selected. A large variance σ² indicates a high-contrast image, and a small variance σ² indicates a low-contrast image.
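Steps S38 to S40 might be sketched as below; the cumulative-sum construction of I and I2 and the pixel count n are assumptions consistent with the expressions above, and only the threshold 200 comes from the text.

import numpy as np

def integral_images(y):
    y = y.astype(np.float64)
    i1 = y.cumsum(axis=0).cumsum(axis=1)         # I(x, y)
    i2 = (y * y).cumsum(axis=0).cumsum(axis=1)   # I2(x, y)
    return i1, i2

def region_variance(i1, i2, x, y, w, h):
    # rectangle with vertices (x, y) and (x + w, y + h); requires x >= 1, y >= 1
    def box(i):
        return (i[y + h, x + w] - i[y - 1, x + w]
                - i[y + h, x - 1] + i[y - 1, x - 1])
    n = (w + 1) * (h + 1)                        # number of pixels in the rectangle
    ysum, ysum2 = box(i1), box(i2)
    return ysum2 / n - (ysum / n) ** 2           # variance sigma^2

# table selection of step S40 (200 is the threshold given in the text):
# lut = lut_low_contrast if region_variance(...) < 200 else lut_high_contrast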
[0078] In step S36, face determination processing is performed using the face detection lookup table selected in step S40, in the same way as step S6 of the first embodiment. In step S37, the detection result is output in the same way as step S7 of the first embodiment.
[0079] According to the present embodiment, the contrast of the face determination target region is measured at high speed, and the face detection lookup table is selected according to that contrast; this makes highly accurate determination possible for various contrasts while keeping the determination processing time down.

[0080] Different lookup tables are used depending on the contrast because, in a high-contrast image, the edges come out too large. That is, using a lookup table suited to high contrast for high-contrast images makes highly accurate face determination possible.
[0081] Third Embodiment
The third embodiment describes a method of performing more accurate face determination by using images of a plurality of different resolutions. Like the first embodiment, the third embodiment is realized by the personal computer 1. Accordingly, for the configuration of the image processing apparatus of the third embodiment, refer to FIG. 1 of the first embodiment.
[0082] <Statistical processing>
First, the statistical processing described below is performed to create the lookup tables (LUTs) for face determination of the third embodiment. The creation of the lookup tables of the third embodiment is described below with reference to FIG. 9 of the first embodiment.
[0083] First, a face determination lookup table similar to that of the first embodiment is created in the same way as steps S21 to S28 of FIG. 9. Hereinafter, this lookup table is called the normal-size face determination lookup table.
[0084] Next, the face image sample group acquired at the stage of step S22 of FIG. 9 is reduced to a size of about 12 × 12 pixels. Similarly, the non-face image sample group acquired at the stage of step S23 of FIG. 9 is reduced to a size of about 12 × 12 pixels. Using the face image sample group and non-face image sample group created in this way, a lookup table for face determination is created in the same way as steps S24 to S28 of FIG. 9. The lookup table obtained in this way is called the reduced-size face determination lookup table.
[0085] Next, image processing that detects a face image from within a photographed image using the normal-size face determination lookup table and the reduced-size face determination lookup table obtained as described above is described. The image processing program of the third embodiment executed by the personal computer 1 follows the same processing flow as the flowchart of FIG. 2 of the first embodiment, and is therefore described below with reference to FIG. 2.
[0086] Steps S1 to S4 are the same as steps S1 to S4 of the first embodiment.
[0087] In step S5, a 19 × 19 pixel face determination target region is set at every other pixel of each reduced image, and the partial images of the edge images E1 to E4 in that region are output. The edge images output here are called normal-size edge images. Furthermore, on a second reduced image obtained by further reducing that reduced image at a reduction magnification of 0.9^4, a 12 × 12 pixel reduced-size face determination target region corresponding to the same subject as the above 19 × 19 pixel face determination target region is set, and the partial images of the edge images E1 to E4 created for the second reduced image in that region are output. The edge images output here are called reduced-size edge images.
[0088] In step S6, the facial likelihood is calculated for the normal-size edge images using the normal-size face determination lookup table, in the same way as in the first embodiment. Furthermore, the facial likelihood is calculated for the reduced-size edge images using the reduced-size face determination lookup table, in the same way as in the first embodiment.
[0089] FIG. 11 is a flowchart of the processing performed after the facial likelihood of each partial image is obtained for the normal-size edge images and the reduced-size edge images as described above. In the face determination processing of step S6, as in the first embodiment, the facial likelihoods are generated in stages, and the partial image is judged to be a face if the evaluation value obtained by accumulating them is larger than a threshold. By comparing the evaluation value with a threshold at each stage as shown in FIG. 11, however, images that are clearly not faces are excluded at an early stage, enabling efficient processing.
[0090] First, in step S51, the evaluation value for determining whether the partial image is a face image is set to the facial likelihood Vsum1 of the reduced-size edge image E1(x,y). In step S52, it is determined whether the evaluation value is larger than a predetermined threshold th1; if the evaluation value is larger than the threshold th1, the processing proceeds to step S53, and if the evaluation value is not larger than the threshold th1, the partial image is judged not to be a face image and the face determination processing for the target partial image ends.
[0091] In step S53, the evaluation value is updated by adding the face likelihood Vsum2 of the reduced-size edge image E2(x,y) to the evaluation value of step S51. In step S54, it is determined whether this evaluation value is larger than a predetermined threshold th2. If the evaluation value is larger than the threshold th2, the processing proceeds to step S55; if not, the partial image is determined not to be a face image, and the face determination processing for that partial image ends.

[0092] In step S55, the evaluation value is updated by adding the face likelihood Vsum3 of the reduced-size edge image E3(x,y) to the evaluation value of step S53. In step S56, it is determined whether this evaluation value is larger than a predetermined threshold th3. If the evaluation value is larger than the threshold th3, the processing proceeds to step S57; if not, the partial image is determined not to be a face image, and the face determination processing for that partial image ends.

[0093] In step S57, the evaluation value is updated by adding the face likelihood Vsum4 of the reduced-size edge image E4(x,y) to the evaluation value of step S55. In step S58, it is determined whether this evaluation value is larger than a predetermined threshold th4. If the evaluation value is larger than the threshold th4, the processing proceeds to step S59; if not, the partial image is determined not to be a face image, and the face determination processing for that partial image ends.
[0094] Steps S59 to S66 perform the same processing for the normal-size edge images. As a result, if the evaluation value is larger than the threshold th8 in step S66, the partial image is finally determined to be a face image. If the evaluation value is not larger than the threshold th8, the partial image is determined not to be a face image, and the face determination processing for that partial image ends.
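Taken together, steps S51 to S66 form an eight-stage early-exit cascade. The following sketch mirrors that control flow using the face_likelihood helper above; the argument names and the thresholds th1 to th8 are placeholders, since the embodiment does not publish their values:

```python
def is_face(small_edges, normal_edges, small_luts, normal_luts, thresholds):
    """Eight-stage early-exit face decision following FIG. 11.

    small_edges / normal_edges: the four reduced-size and four
    normal-size edge images E1..E4 for one determination region.
    small_luts / normal_luts: the matching lookup tables.
    thresholds: assumed stage thresholds th1..th8.
    """
    stages = list(zip(small_edges, small_luts)) + list(zip(normal_edges, normal_luts))
    evaluation = 0.0
    for (edge, lut), th in zip(stages, thresholds):
        evaluation += face_likelihood(edge, lut)  # add this stage's Vsum
        if evaluation <= th:  # clearly not a face: reject early
            return False
    return True  # survived all eight comparisons: judged a face
```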
[0095] In step S7, the face detection result is output, as in the first embodiment.
[0096] According to the third embodiment, even more accurate face determination processing can be performed than in the first embodiment by additionally evaluating the face likelihood of the reduced-size edge images. For example, when the face determination target region is 19 × 19 pixels, an eye occupies about 2 pixels and is easy to detect, but a mouth occupies about 4 pixels and is difficult to detect. When the same face determination target region becomes 12 × 12 pixels, however, the mouth occupies about 2 pixels and becomes easy to detect. Therefore, adding the evaluation of the face likelihood of such reduced-size edge images makes concave structures such as the mouth easier to detect, enabling even more accurate face determination processing.
[0097] Variations

In the above embodiments, an example of determining face images was described. However, the present invention is also applicable to images other than face images, that is, to determining whether a specific type of image is present in an acquired image. In such a case, a lookup table indicating the degree of likelihood of that specific type of image for each pixel value and pixel position is prepared by statistical processing, and that lookup table is used to obtain the degree of specific-type-image likelihood at each pixel of the determination image.
[0098] In the above embodiments, an example was described in which a luminance concavity image is generated as an edge image so that locally dark portions of a face, such as the eyes, nose, and mouth, are determined appropriately. However, in a mouth smiling with the teeth showing, or in a cheek or nose catching the light, the luminance is locally brighter than the surroundings. In order to detect such locally bright portions of a face appropriately as well, a luminance convexity image may be generated as an edge image by the following expressions, and the face likelihood may be obtained in the same manner:
E5'(x,y) = Max( Y_LH(x,y-1), Y_LH(x,y+2) ) - Max( Y_LH(x,y), Y_LH(x,y+1) )

E5(x,y) = γ( E5'(x,y) )
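As a sketch, the expressions above can be evaluated over a 2-D luminance plane as follows. Y_LH is the band-limited luminance plane referred to in the expressions, and the tone mapping γ is stood in by a simple clip, since its exact definition is not restated here:

```python
import numpy as np

def convexity_edge(y_lh, gamma=lambda v: np.clip(v, 0.0, 255.0)):
    """Evaluate the expressions above over a 2-D luminance plane Y_LH.

    gamma is a stand-in for the tone mapping γ, which the text does
    not restate; np.clip is only a placeholder choice.
    """
    h, _ = y_lh.shape
    e = np.zeros_like(y_lh, dtype=float)
    for yy in range(1, h - 2):  # keep rows yy-1 and yy+2 in bounds
        outer = np.maximum(y_lh[yy - 1], y_lh[yy + 2])
        inner = np.maximum(y_lh[yy], y_lh[yy + 1])
        e[yy] = outer - inner   # E5'(x, y) exactly as written above
    return gamma(e)             # E5(x, y) = γ(E5'(x, y))
```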
[0099] In the above embodiments, an example was described in which the face likelihood L_(x,y)(E) is calculated by the following expression:

L_(x,y)(E) = log{ (P_face(x,y,E) + ε1) / (P_nonface(x,y,E) + ε2) }

However, an expression such as the following may also be used:

L_(x,y)(E) = √{ P_face(x,y,E) } - √{ P_nonface(x,y,E) }

The first term, √{P_face(x,y,E)}, is likewise a monotonically increasing function, and the second term, -√{P_nonface(x,y,E)}, is likewise a monotonically decreasing function.
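Either scoring rule can be tabulated once from the face/non-face frequency histograms. In the sketch below, p_face, p_nonface, and the ε constants are assumed inputs for illustration rather than values from the embodiment:

```python
import numpy as np

def build_lut(p_face, p_nonface, eps1=1e-4, eps2=1e-4, log_ratio=True):
    """Tabulate L(x,y)(E) from per-position pixel-value frequencies.

    p_face, p_nonface: arrays of shape (H, W, n_values) holding
    P_face(x,y,E) and P_nonface(x,y,E); eps1/eps2 are assumed constants.
    """
    if log_ratio:
        # log{(P_face + eps1) / (P_nonface + eps2)}
        return np.log((p_face + eps1) / (p_nonface + eps2))
    # alternative form: sqrt(P_face) - sqrt(P_nonface)
    return np.sqrt(p_face) - np.sqrt(p_nonface)
```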
[0100] In the above embodiments, an example was described in which the personal computer 1 performs the image processing for detecting face images in a captured image. However, the processing described above may also be performed on a captured image inside an imaging apparatus such as a digital still camera.
[0101] FIG. 12 is a diagram showing the configuration of a digital camera 100 as such an imaging apparatus. The digital camera 100 comprises a photographing lens 102, an image sensor 103 such as a CCD, a control device 104 composed of a CPU and peripheral circuits, a memory 105, and the like.
[0102] The image sensor 103 photographs (captures) the subject 101 through the photographing lens 102 and outputs the captured image data to the control device 104. The control device 104 performs the image processing for detecting face images described above on the image (image data) captured by the image sensor 103. The control device 104 then performs white balance adjustment and various other kinds of image processing on the captured image based on the face image detection result, and stores the processed image data in the memory 105 as appropriate. The control device 104 can also use the face image detection result for autofocus processing and the like. The image processing program executed by the control device 104 is stored in a ROM (not shown).
[0103] The processing described above can also be applied to a video camera. It can further be applied to surveillance cameras that monitor suspicious persons, and to apparatuses that identify individuals or estimate gender, age, or facial expression based on captured face images. In other words, the present invention is applicable to any apparatus, such as an image processing apparatus or an imaging apparatus, that extracts and processes a specific type of image such as a face image.
[0104] While various embodiments and variations have been described above, the present invention is not limited to their contents. Other forms conceivable within the scope of the technical idea of the present invention are also included within the scope of the present invention.
[0105] The disclosure of the following priority application is incorporated herein by reference:
Japanese Patent Application No. 2006-215943 (filed August 8, 2006)

Claims

[1] An image processing method for determining whether an image is an image of a specific type, comprising:
acquiring an image composed of a plurality of pixels;
storing a lookup table indicating, for each pixel value and each pixel position, a degree of likelihood of the specific type of image;
generating a determination image based on the acquired image;
obtaining, for each pixel of the determination image, the degree of specific-type-image likelihood of that pixel by using the lookup table based on the pixel value and the pixel position of the pixel;
integrating the obtained degrees over the pixels of the determination image; and
determining, based on the result of the integration, whether the input image is the specific type of image.

[2] The image processing method according to claim 1, wherein the specific type of image is a face image.

[3] The image processing method according to claim 1 or 2, wherein the determination image is generated by extracting edge components of the acquired image.

[4] The image processing method according to claim 1 or 2, wherein the determination image is generated by extracting edge components of a concave structure in which pixel values of the acquired image are locally depressed relative to their surroundings.

[5] The image processing method according to claim 1, wherein the lookup table:
at pixel positions corresponding to characteristic elements of the specific type of image, sets the degree of specific-type-image likelihood when the edge component of the pixel is large to a larger value than when the edge component is small; and
at pixel positions corresponding to other than the characteristic elements of the specific type of image, sets the degree of specific-type-image likelihood when the edge component of the pixel is large to a smaller value than when the edge component is small.

[6] The image processing method according to claim 2, wherein the determination image is generated by extracting edge components of the acquired image, and wherein:
at pixel positions corresponding to any of the eye, nose, and mouth regions, the degree of face-image likelihood when the edge component of the pixel is large is set to a larger value than when the edge component is small; and
at pixel positions corresponding to regions other than the eyes, nose, and mouth, the degree of face-image likelihood when the edge component of the pixel is large is set to a smaller value than when the edge component is small.

[7] The image processing method according to any one of claims 1 to 6, wherein the lookup table is generated by statistical processing based on a determination-target image sample group belonging to the specific type of image and a non-determination-target image sample group not belonging to the specific type of image.

[8] The image processing method according to claim 7, wherein, in the statistical processing:
a first image sample group is generated from the determination-target image sample group, and a second image sample group is generated from the non-determination-target image sample group, by processing equivalent to that used to generate the determination image;
a frequency P1(x,y)(E) with which the pixel value at pixel position (x,y) in the first image sample group equals E, and a frequency P2(x,y)(E) with which the pixel value at pixel position (x,y) in the second image sample group equals E, are obtained;
the lookup table L(x,y)(E) at pixel position (x,y), which gives the degree V(x,y) of specific-type-image likelihood at a pixel of the determination image whose pixel value at pixel position (x,y) is E as V(x,y) = L(x,y)(E), is generated as L(x,y)(E) = f( P1(x,y)(E), P2(x,y)(E) ); and
the function f( P1(x,y)(E), P2(x,y)(E) ) is substantially a monotonically increasing function in the broad sense with respect to P1(x,y)(E) and substantially a monotonically decreasing function in the broad sense with respect to P2(x,y)(E).

[9] The image processing method according to claim 8, wherein the function f( P1(x,y)(E), P2(x,y)(E) ) is
f( P1(x,y)(E), P2(x,y)(E) ) = log{ (P1(x,y)(E) + ε1) / (P2(x,y)(E) + ε2) },
where ε1 and ε2 are predetermined constants.

[10] The image processing method according to claim 1, further comprising:
storing a plurality of lookup tables corresponding to degrees of contrast;
calculating the contrast of the acquired image; and
selecting, from the plurality of lookup tables, a lookup table corresponding to the calculated contrast.

[11] An image processing method for determining whether an image is an image of a specific type, comprising:
acquiring an image composed of a plurality of pixels;
storing a lookup table indicating, for each pixel value and each pixel position, a degree of specific-type-image likelihood;
generating a plurality of reduced images of the acquired image at a plurality of different reduction ratios;
generating a determination image based on the plurality of reduced images;
setting a determination target region in a first reduced image that is one of the plurality of reduced images;
obtaining, for each pixel of the determination target region, the degree of specific-type-image likelihood of that pixel by using the lookup table based on the pixel value of the pixel and its pixel position within the determination target region;
integrating the obtained degrees of specific-type-image likelihood over the pixels of the determination target region; and
determining, based on the result of the integration, whether the image corresponding to the determination target region within the acquired image is the specific type of image.

[12] The image processing method according to claim 11, further comprising:
setting, in a second reduced image reduced further than the first reduced image, a second determination target region corresponding to the determination target region;
storing a second lookup table indicating the degree of specific-type-image likelihood for each pixel value and each pixel position corresponding to the second determination target region;
obtaining, for each pixel of the second determination target region, the degree of specific-type-image likelihood of that pixel by using the second lookup table based on the pixel value of the pixel and its pixel position within the second determination target region;
integrating the obtained degrees of specific-type-image likelihood over the pixels of the second determination target region; and
determining, based on the integration result for the determination target region and the integration result for the second determination target region, whether the image corresponding to the determination target region within the acquired image is the specific type of image.

[13] An image processing program causing a computer to execute the image processing method according to any one of claims 1 to 12.

[14] An image processing apparatus equipped with the image processing program according to claim 13.

[15] An imaging device equipped with the image processing program according to claim 13.
PCT/JP2007/065446 2006-08-08 2007-08-07 Image processing method, image processing apparatus, image processing program, and image pickup apparatus WO2008018459A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2006-215943 2006-08-08
JP2006215943A JP2009258770A (en) 2006-08-08 2006-08-08 Image processing method, image processor, image processing program, and imaging device

Publications (1)

Publication Number Publication Date
WO2008018459A1 true WO2008018459A1 (en) 2008-02-14

Family

ID=39032989

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2007/065446 WO2008018459A1 (en) 2006-08-08 2007-08-07 Image processing method, image processing apparatus, image processing program, and image pickup apparatus

Country Status (2)

Country Link
JP (1) JP2009258770A (en)
WO (1) WO2008018459A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7315631B1 (en) 2006-08-11 2008-01-01 Fotonation Vision Limited Real-time face tracking in a digital image acquisition device
US7403643B2 (en) 2006-08-11 2008-07-22 Fotonation Vision Limited Real-time face tracking in a digital image acquisition device


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06348851A (en) * 1993-04-12 1994-12-22 Sony Corp Object position detecting method
JPH0883341A (en) * 1994-09-12 1996-03-26 Nippon Telegr & Teleph Corp <Ntt> Method and device for extracting object area and object recognizing device
JP2000259833A (en) * 1999-03-08 2000-09-22 Toshiba Corp Face image processor and processing method therefor
JP2001175869A (en) * 1999-12-07 2001-06-29 Samsung Electronics Co Ltd Device and method for detecting speaker's hand position
JP2001192898A (en) * 2000-01-03 2001-07-17 Motorola Inc Method for forming semi-conductor device
JP2006092095A (en) * 2004-09-22 2006-04-06 Sony Corp Image processor, image processing method and program

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103325249A (en) * 2012-03-22 2013-09-25 日本电气株式会社 Capture image processing device and capture image processing method
CN112165571A (en) * 2020-09-09 2021-01-01 支付宝实验室(新加坡)有限公司 Certificate image acquisition method, device and equipment
CN112165571B (en) * 2020-09-09 2022-02-08 支付宝实验室(新加坡)有限公司 Certificate image acquisition method, device and equipment

Also Published As

Publication number Publication date
JP2009258770A (en) 2009-11-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 07792115; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
NENP Non-entry into the national phase (Ref country code: RU)
NENP Non-entry into the national phase (Ref country code: JP)
122 Ep: pct application non-entry in european phase (Ref document number: 07792115; Country of ref document: EP; Kind code of ref document: A1)