WO2005055144A1 - Method for detecting the chin in a person's face, chin detection system, and chin detection program - Google Patents

Method for detecting the chin in a person's face, chin detection system, and chin detection program Download PDF

Info

Publication number
WO2005055144A1
WO2005055144A1 PCT/JP2004/018451
Authority
WO
WIPO (PCT)
Prior art keywords
chin
face
edge
detection
image
Prior art date
Application number
PCT/JP2004/018451
Other languages
English (en)
Japanese (ja)
Inventor
Toshinori Nagahashi
Takashi Hyuga
Original Assignee
Seiko Epson Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Seiko Epson Corporation filed Critical Seiko Epson Corporation
Publication of WO2005055144A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20004Adaptive image processing
    • G06T2207/20012Locally adaptive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • TECHNICAL FIELD: A chin detection method, a chin detection system, and a chin detection program for a human face
  • The present invention relates to pattern-recognition and object-recognition technology, and more particularly to a chin detection method, a chin detection system, and a chin detection program for accurately detecting the chin position of a person's face from a face image in which the face is captured.
  • In one conventional technique, the presence or absence of a flesh-color area is determined, a mosaic size is automatically chosen for that area, and the area is mosaicked. The presence or absence of a human face is then determined by calculating the distance between the mosaicked area and a human-face dictionary. By extracting faces in this way, erroneous extraction due to the influence of the background and the like is reduced, and human faces are found automatically and efficiently in the image.
  • A face photograph (face image) of a person, which is indispensable for a passport or an ID card, has the size, orientation, and position of the person's face specified in detail.
  • The standard specifies, for example, that there be no background and no accessories such as hats, that the face of the person in the picture be facing the front and centered in the photo, and that the position of the chin of the face lie within a certain range from the lower frame of the photo. In principle, photos (face images) that deviate from the standard are not accepted.
  • A face image of the required person can be obtained directly as digital image data with a digital still camera using an electronic imaging device such as a CCD or CMOS sensor, or an analog photograph in which the face was captured beforehand can be converted into digital image data using an electro-optical image reading device such as a scanner. This digital image data can then be processed with an image processing system consisting of a general-purpose computer such as a PC and general-purpose software.
  • The processing operation can be performed directly by a human using general-purpose input/output devices such as a mouse, a keyboard, and a monitor; when the number of images is huge, however, the processing must be performed automatically using the conventional technology described above.
  • However, the outline of the chin may be unclear: depending on the facial features, a relatively strong edge may be detected between the lips and the bottom of the chin, and a strong edge is also detected at the border between a clothing collar and the neck. Moreover, depending on age and body type, a stronger edge is often produced by neck wrinkles than by the chin contour, and these may be erroneously detected as the chin contour.
  • The present invention has been devised to solve these problems effectively.
  • Its purpose is to provide a new chin detection method, chin detection system, and chin detection program capable of accurately and quickly detecting the bottom of the chin, robustly even for face images from which the chin outline is difficult to detect, as described above. Disclosure of the invention
  • The human face chin detection method of invention 1 is a method for detecting the lower bottom of the chin of a person's face from an image including the face, comprising: detecting a face image in a range that includes both eyes and lips of the face and does not include the chin; setting a chin detection window at the bottom of that face image; obtaining the intensity distribution of edges in the chin detection window; detecting, from the edge intensity distribution, pixels with an edge intensity equal to or greater than a threshold value; obtaining an approximation curve that best fits the distribution of the detected pixels; and taking the lowest point of the approximation curve as the lower bottom of the chin of the person's face.
  • That is, according to the present invention, a portion very likely to include the chin of the human face is first selected, a chin detection window is set on that portion, and the intensity distribution of the edges in the chin detection window is determined.
  • In general, the contour including the lower bottom of the chin changes sharply in contrast relative to its surroundings, so its edge strength is high. By obtaining the intensity distribution of the edges in the chin detection window, it is therefore possible to easily and reliably select candidate regions for the contour, including the lower bottom of the chin of the face contained in the window.
  • Next, pixels having an edge intensity equal to or greater than a threshold value are detected from this distribution. Since the contour including the lower bottom of the chin generally has high edge strength, selecting only pixels at or above a certain threshold and excluding the rest leaves only the pixels likely to correspond to that contour.
  • The human face chin detection method of invention 2 is a method for detecting the lower bottom of the chin of a person's face from an image including the face, comprising: detecting a face image in a range that includes both eyes and lips of the face and does not include the chin; setting, at the bottom of the face image, a chin detection window large enough to include the chin; obtaining the intensity distribution of first-derivative edges in the chin detection window; obtaining a threshold from that distribution and detecting the pixels whose edge strength meets it; narrowing down the pixels to be used by the sign inversion of the second-derivative edge; obtaining, by the least squares method, an approximate curve that best fits the distribution of the narrowed-down pixels; and taking the lowest point of the approximate curve as the lower bottom of the chin of the person's face.
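Read as an algorithm, the claimed steps can be sketched roughly as follows. This is an illustrative reconstruction, not code from the patent: the function name `detect_chin_bottom`, the per-column top-down scan, and the use of NumPy's `polyfit` are assumptions, and the second-derivative narrowing step is omitted for brevity.

```python
import numpy as np

def detect_chin_bottom(edge_strength, threshold):
    """edge_strength: 2-D first-derivative edge magnitudes inside the
    chin detection window; returns the (x, y) vertex of a quadratic
    fitted by least squares to the selected edge pixels."""
    h, w = edge_strength.shape
    xs, ys = [], []
    for x in range(w):                        # scan each column top-down
        rows = np.nonzero(edge_strength[:, x] >= threshold)[0]
        if rows.size:                         # earliest pixel over threshold
            xs.append(x)
            ys.append(rows[0])
    if len(xs) < 3:                           # not enough points to fit
        return None
    a, b, c = np.polyfit(xs, ys, 2)           # least-squares quadratic
    x_v = -b / (2.0 * a)                      # vertex of the parabola
    y_v = np.polyval([a, b, c], x_v)
    return x_v, y_v
```

In image coordinates y grows downward, so the vertex row of the fitted parabola corresponds to the "lowest bottom" of the chin contour.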
  • Invention 2 describes more specifically, relative to invention 1, the method of calculating the edge intensity distribution (first-derivative type), the method of selecting pixels (second-derivative type), and the method of calculating the approximate curve (least squares method).
  • The chin detection window of invention 3 has a horizontally long rectangular shape, with a width wider than the face width of the human face and a height smaller than that width.
  • This ensures that the lower bottom of the chin to be detected is reliably captured within the chin detection window, so the lower bottom of the chin can be detected more accurately.
  • In invention 4, the first-derivative edge intensity distribution is computed using the Sobel edge detection operator.
  • The most typical method for detecting a sudden change of shading in an image is to compute a derivative of the shading. Since differentiation of a digital image is approximated by differences, taking the first-order difference of the original image within the chin detection window effectively detects the edge portions of the image where the shading changes rapidly.
  • As this first-order differential edge detection operator (filter), the present invention uses the well-known Sobel edge detection operator, which has excellent detection performance; thereby the edge portions in the chin detection window can be detected reliably.
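As a toy illustration of "differentiation replaced by difference" (assumed NumPy usage, not the patent's own code): a sharp luminance change along a 1-D scan line shows up as a large first difference.

```python
import numpy as np

# Luminance values along one scan line; a dark-to-bright step sits
# between index 3 and index 4.
scanline = np.array([10, 10, 11, 10, 80, 82, 81, 80], dtype=float)
diff = np.abs(np.diff(scanline))       # first-order difference
edge_index = int(np.argmax(diff))      # position of the strongest edge
print(edge_index, diff[edge_index])    # → 3 70.0
```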
  • In invention 5, the second-derivative edge is computed using a Laplacian edge detection operator.
  • In invention 6, the approximate curve is obtained by the least squares method using a quadratic function.
  • That is, the present invention uses the least squares method with a quadratic function, whereby the contour of the chin of the human face in the chin detection window can be obtained at high speed.
  • The "least squares method" is, as generally understood, a method of finding the coefficients that minimize the sum of squared errors between a function and a set of sampled points. For example, a phenomenon that behaves as a quadratic with respect to the experimental data can be fitted with a quadratic equation, and behavior expected to be exponential can be handled by taking logarithms.
  • The calculation of the approximate curve by the least squares method can be realized easily by directly using the software (programs) already incorporated in many scientific calculators and spreadsheet applications.
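As a minimal least-squares example (NumPy usage assumed; the patent itself only names the method): fit y = ax² + bx + c to sampled points and read off the extremum, just as the method does for the chin-contour pixels.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = 2.0 * (x - 3.0) ** 2 + 1.0        # exact quadratic, vertex at (3, 1)
a, b, c = np.polyfit(x, y, 2)          # minimizes the sum of squared errors
x_v = -b / (2.0 * a)                   # extremum of the fitted parabola
y_v = np.polyval([a, b, c], x_v)
print(round(float(x_v), 3), round(float(y_v), 3))
```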
  • Invention 7: The human face chin detection system
  • The system detects the lower bottom of the chin of a person's face from an image containing the face, and comprises: an image reading means for reading an image containing the person's face; a face detection means for detecting, from the image read by the image reading means, a range that includes both eyes and lips of the face and does not include the chin, and for setting a face detection frame in the detected range; a chin detection window setting means for setting, below the face detection frame, a chin detection window of a size that includes the chin of the person's face; an edge calculating means for obtaining the intensity distribution of edges in the chin detection window; a pixel selecting means for selecting pixels having an edge strength equal to or greater than a threshold from the obtained edge strength distribution; a curve approximating means for obtaining an approximate curve that best fits the distribution of the pixels selected by the pixel selecting means; and a chin detecting means for detecting the lowest point of the approximation curve obtained by the curve approximating means as the lower bottom of the chin of the person's face.
  • Invention 8: The human face chin detection system
  • This system is characterized in that the pixel selecting means obtains a threshold value from the distribution of first-derivative edge intensities calculated by the edge calculating means, detects the pixels having an edge intensity not less than the threshold, and selects the pixels to be used from among them by utilizing the sign inversion of the second-derivative edge.
  • Invention 9 is a program that causes a computer, in order to detect the bottom of the chin of a person's face from an image containing the face, to execute: an image reading step of reading an image including the face; a face detection step of detecting, from the image read in the image reading step, a range that includes both eyes and lips of the face and does not include the chin, and setting a face detection frame in the detected range; a chin detection window setting step of setting, below the face detection frame, a chin detection window of a size that includes the chin; an edge calculation step of obtaining the edge intensity distribution within the chin detection window; a pixel selection step of selecting pixels having an edge intensity equal to or greater than a threshold from the edge intensity distribution obtained in the edge calculation step; a curve approximation step of finding an approximate curve that best fits the distribution of the selected pixels; and a chin detecting step of detecting the lowest point of the approximate curve as the lower bottom of the chin.
  • Invention 10 is a human face chin detection program
  • This program is characterized in that a threshold value is obtained from the distribution of first-derivative edge strengths calculated in the edge calculation step, pixels having an edge strength not less than the threshold are detected, and the pixels to be used are selected from among them by using the sign inversion of the second-derivative edge.
  • FIG. 1 is a block diagram showing an embodiment of a jaw detection system according to the present invention.
  • FIG. 2 is a configuration diagram showing hardware constituting the chin detection system.
  • FIG. 3 is a flowchart showing an embodiment of a jaw detection method according to the present invention.
  • FIG. 4 is a graph showing the relationship between the luminance and the pixel position in the face image.
  • FIG. 5 is a graph showing the relationship between the edge intensity in the face image and the pixel position.
  • FIG. 6 is a diagram showing an example of a face image to be a chin detection target.
  • FIG. 7 is a diagram illustrating a state in which a face detection frame is set in a face image.
  • FIG. 8 is a diagram showing a state in which a chin detection window is set below the face detection frame.
  • FIG. 9 is a diagram showing a state in which the lower bottom of the chin is detected and its position is corrected.
  • FIG. 10 is a diagram showing a chin detection window displaying only pixels having edge strengths equal to or greater than a threshold value.
  • FIG. 11 is a diagram showing a chin detection window that displays only selected pixels as a result of sign inversion.
  • FIG. 12 is a diagram showing the Sobel edge detection filter.
  • FIG. 1 shows an embodiment of a human face chin detection system 100 according to the present invention.
  • The chin detection system 100 comprises: an image reading means 10 for reading a face image G containing the person's face; a face detection means 12 for detecting the human face from the face image G read by the image reading means 10 and setting a face detection frame F around it; a chin detection window setting means 14 for setting, below the face detection frame F, a chin detection window W of a size that includes the chin of the person's face; and an edge calculation means 16 for obtaining the intensity distribution of edges in the chin detection window W.
  • The image reading means 10 reads, as the face image G, an identification face photograph of the kind attached to a public identification card such as a passport or driver's license, or to a private identification card such as an employee ID card, student ID card, or membership card (that is, an image with no background containing only a large, front-facing face), and provides a function of acquiring digital image data consisting of R (red), G (green), and B (blue) pixel data using an imaging sensor such as a CCD (Charge Coupled Device) or CMOS sensor.
  • Specifically, the image reading means 10 is a digital still camera or digital video camera using a CCD or CMOS sensor, a vidicon camera, an image scanner, a drum scanner, or the like, and provides a function of A/D-converting the face image G read optically by the imaging sensor and sequentially transmitting the resulting digital image data to the face detection means 12.
  • The image reading means 10 also has a data storage function, and the read face image data can be stored as appropriate in a storage device such as a hard disk drive (HDD) or on a storage medium such as a DVD-ROM. When the face image is supplied as digital image data via a network or a storage medium, the image reading means 10 becomes unnecessary or instead functions as a communication means, an interface (I/F), or the like.
  • the face detection means 12 detects a human face from the face image G read by the image reading means 10 and sets a face detection frame F in the relevant part.
  • the face detection frame F has a size (area) including both eyes and lips around the nose of the human face and not including the chin of the human face.
  • the algorithm for detecting a human face by the face detection means 12 is not particularly limited, but, for example, a conventional method as shown in the following literature or the like can be used as it is.
  • For example, a known method can be used in which face images of a region that includes both eyes and lips of a human face and does not include the chin are created, a neural network is trained on these images, and human faces are then detected using the trained neural network.
  • a region from both eyes to the lips is detected as a face image region.
  • the size of the face detection frame F is not invariable, and is appropriately increased or decreased according to the size of the target face image.
  • The chin detection window setting means 14 sets, below the face detection frame F set by the face detection means 12, a chin detection window W of a size that includes the chin of the person's face.
  • a target area for accurately detecting the contour including the lower bottom of the chin of the human face by the following means is selected from the face image G using the chin detection window W.
  • The edge calculation means 16 provides a function of obtaining the intensity distribution of the edges of the image in the chin detection window W; for example, as described later, it calculates the first-derivative edge intensity distribution using the Sobel edge detection operator.
  • The pixel selection means 18 provides a function of selecting pixels having an edge strength equal to or greater than a threshold value from the edge strength distribution obtained by the edge calculation means 16. As described later, a Laplacian filter is then used to narrow down the candidate pixels obtained with the Sobel edge detection operator by detecting the sign inversion of the edge.
  • The curve approximation means 20 provides a function of obtaining an approximate curve that best fits the distribution of the pixels selected by the pixel selection means 18; specifically, as described later, a quadratic function fitted by the least squares method is used.
  • The chin detection means 22 provides a function of detecting the lowest point of the approximation curve obtained by the curve approximation means 20 as the lower bottom of the chin of the person's face. A noticeable marker M or the like may be applied to the lower bottom of the chin to indicate it explicitly.
  • The means 10 to 22 constituting the chin detection system 100 are actually realized by a computer system such as a personal computer (PC), consisting of hardware such as a CPU and RAM together with a dedicated computer program (software), as shown in FIG. 2. That is, the hardware for realizing this chin detection system 100 includes a CPU (Central Processing Unit) 40, the central processing unit that performs various control and arithmetic processing, a main storage device 41 (RAM, Random Access Memory), and a ROM (Read Only Memory).
  • An auxiliary storage device (secondary storage) 43, such as a hard disk drive (HDD) or semiconductor memory, and an output device 44 such as a monitor (an LCD (liquid crystal display) or CRT (cathode ray tube)).
  • An input device 45 consisting of a keyboard, a mouse, and an imaging sensor such as a CCD (Charge Coupled Device) or CMOS sensor or an image scanner, together with input/output interfaces for these devices.
  • These are connected by various internal and external buses 47, such as a processor bus, a memory bus, a system bus, and an input/output bus, including, for example, an ISA (Industry Standard Architecture) bus.
  • Various control programs and data, supplied via a storage medium such as a CD-ROM, DVD-ROM, or flexible disk (FD), or via a communication network (LAN, WAN, the Internet, etc.), are installed in the auxiliary storage device 43 and the like, and are loaded into the main storage device 41 as needed.
  • The CPU 40 performs predetermined control and arithmetic processing, making full use of the various resources, according to the program loaded into the main storage device 41, outputs the processing results (processing data) to the output device 44 via the bus 47, and displays them. The data are also stored and saved (updated) as necessary in a database formed by the auxiliary storage device 43.
  • FIG. 3 is a flowchart showing an example of a chin detection method for a face image G to be actually detected.
  • First, in step S101, the face detection means 12 described above detects the human face contained in a face image G read beforehand by the image reading means 10, and then sets the face detection frame F to identify the detected face.
  • Note that the images targeted for chin detection by the present invention are limited to images in which one person's face is shown.
  • the position of the person's face is first determined by the face detection means 12.
  • a rectangular face detection frame F is set on the person's face as shown in FIG.
  • the size (area) is such that it includes both the eyes and lips around the nose of the human face and does not include the chin of the human face.
  • As long as the face detection frame F does not include the chin portion of the person's face, it need not have exactly the size and shape exemplified here.
  • In the illustrated example, the size of the person's face and its horizontal position are within the standard, but the position of the chin is too low, indicating that it has not reached the standard position.
  • Next, in step S103, as shown in FIG. 8, a rectangular chin detection window W is set below the face detection frame F, and the position of the chin of the person's face is localized.
  • the size and shape of the chin detection window W are not strict, and are not particularly limited as long as the size and shape are below the lower lip of the person's face and always include the lower bottom of the chin.
  • If the window is too large, many lines and contours easily confused with the chin, such as chin shadows, neck wrinkles, and shirt collars, will appear in the chin detection window W, and the subsequent edge detection will take much longer. Conversely, if it is too small, the lower base of the chin to be detected may fall outside the window owing to individual differences.
  • In this embodiment, the chin detection window W is set in close contact with the lower side of the face detection frame F.
  • the chin detection window W does not necessarily need to be in close contact with the face detection frame F. In short, it is only necessary that the chin detection window W keeps a predetermined positional relationship with respect to the face detection frame F.
  • the process proceeds to the next step S105, in which the luminance of each pixel in the chin detection window W is determined.
  • Specifically, the first-derivative edge intensity distribution in the chin detection window W is calculated using a first-order differential (difference-type) edge detection operator, as represented by the Sobel edge detection operator.
  • FIGS. 12 (a) and 12 (b) show this “Sobel edge detection operator”.
  • One of these operators weights the three pixel values in the upper and lower rows to enhance horizontal edges, while the other weights the three pixel values in the left and right columns to enhance vertical edges.
  • Figure 4 shows the relationship between the luminance (vertical axis) and the pixel position (horizontal axis) of the face image G. Since the luminance changes greatly at edge portions of the image such as the chin outline, the portions of large luminance change can be extracted, by using a first-order differential edge detection operator such as the Sobel operator, as the approximately parabolic edge intensity curve shown in Fig. 5(a).
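The standard 3×3 Sobel operators (which, on our reading, are what Fig. 12 depicts) can be applied as below. The direct double loop is an illustrative sketch, not the patent's implementation.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)  # responds to vertical edges
SOBEL_Y = SOBEL_X.T                             # responds to horizontal edges

def edge_strength(img):
    """First-derivative edge magnitude by direct 3x3 correlation;
    the one-pixel border is left at zero."""
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx = float(np.sum(SOBEL_X * patch))
            gy = float(np.sum(SOBEL_Y * patch))
            out[y, x] = np.hypot(gx, gy)        # gradient magnitude
    return out
```

For a vertical step edge the response is 4 at the pixels adjoining the boundary and 0 in flat regions.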
  • The process then proceeds to step S107, and a threshold value is obtained from the edge intensity distribution. As described above, the edge strength is greatly affected by the shooting (illumination) conditions and the like, so it is difficult to identify the edge corresponding to the chin contour from absolute edge strength alone.
  • The threshold value for selecting pixels is not particularly limited; for example, 1/10 of the maximum edge intensity detected in the chin detection window W may be set as the threshold value, and pixels having an edge stronger than this threshold are selected as candidate pixels for obtaining the lower part of the chin.
  • When the threshold for selecting pixels has been determined in this way, the process proceeds to step S111 and, as shown in FIG. 8, all pixels constituting the upper side of the chin detection window W are used as base points for scanning in the vertical direction; only pixels whose edge intensity exceeds the threshold are selected, and pixels at or below the threshold are excluded.
  • Fig. 10 shows the pixel distribution selected in this way (exceeding the threshold) in an easy-to-understand manner.
  • That is, scanning starts from the upper left of the chin detection window W, proceeding in the X direction and then sequentially in the Y direction; the pixels in each row are scanned in a non-interlaced manner, pixel by pixel, and pixels having an edge intensity equal to or greater than the threshold are identified and displayed.
  • The search starts from the upper left of the chin detection window W so that, in each column, the earliest-appearing pixel in the Y direction that meets the threshold is selected as a valid candidate for the chin contour. In other words, the edges most easily confused with the chin contour, such as neck wrinkles and shirt collars, appear below the actual chin contour rather than above it, and starting from the top reduces the priority of those edges.
  • When the pixels having an edge strength exceeding the threshold have been selected in this way, the process proceeds to step S113 and, for each column (Y direction) of selected pixels, the sign of the second-derivative edge is examined in order to narrow the selection down to the pixel with the strongest edge.
  • Specifically, a second-order differential edge detection filter (a Laplacian filter), as shown in FIG. 13, is used to detect the sign inversion of the edge, and one pixel per column is determined (FIG. 11). For example, assuming that a plurality of pixels was selected in each of the columns "a" through "g" as a result of searching for pixels with edge strength at or above the threshold, as shown in FIG. 10, then, as a result of detecting the sign inversion of the edge, the uppermost pixel is selected in FIG. 11 as the candidate pixel constituting the chin outline in the "a", "b", "d", "f", and "g" columns.
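The per-column narrowing by second-derivative sign inversion might look like the sketch below. All names are illustrative assumptions; the patent specifies only that a Laplacian-type filter detects the sign inversion of the edge and one pixel is kept per column, and "topmost sign inversion wins" is one reading of Figs. 10 and 11.

```python
import numpy as np

def narrow_by_sign_inversion(intensity, candidates):
    """intensity: 2-D luminance array; candidates: {column: [rows that
    passed the edge-strength threshold]}. Keeps, per column, the topmost
    candidate row at which the 1-D second difference (a discrete
    Laplacian along the column) changes sign."""
    kept = {}
    for x, rows in candidates.items():
        lap = np.diff(intensity[:, x].astype(float), n=2)
        for y in sorted(rows):                 # topmost candidate first
            i = y - 1                          # lap[i] is centered on row y
            if 0 <= i < len(lap) - 1 and lap[i] * lap[i + 1] < 0:
                kept[x] = y                    # sign inversion found
                break
        else:
            kept[x] = sorted(rows)[0]          # fallback: topmost candidate
    return kept
```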
  • Finally, in step S115, the approximate curve described above is fitted by the least squares method to the distribution of the selected pixels (FIG. 11), and the lowest point of the curve determines the bottom of the chin.
  • When the bottom of the chin has been detected in this way, a marker M is placed on it as shown in FIGS. 9(a) and 9(b), and the entire face is moved so that the marker M comes to the same height as the specified chin-bottom position.
  • In FIG. 9(a), the lower part of the chin of the person's face lies considerably below the specified position, so moving the person's face vertically upward, as shown in FIG. 9(b), brings the lower part of the chin to the specified position.
  • In FIG. 9(a) and the like, the image below the person's neck is cut off, but the hidden part of the image is assumed actually to exist as it is.
  • As described above, the present invention sets a chin detection window using a known face detection method and then detects the lower bottom of the chin based on the intensity distribution of edges within that window; even for face images in which the chin outline is difficult to detect, the lower part of the chin can be detected accurately, at high speed, and robustly.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

According to the invention, a person's face is detected and a chin detection window is set at the lower part of the face. The edge intensity distribution within the chin detection window is computed. Pixels whose edge intensity is equal to or greater than a threshold value are detected from the edge intensity distribution. An approximation curve is computed so as to fit the distribution of the detected pixels. The lowest part of the approximation curve is designated as the lower part of the person's chin. The lower part of the chin on a person's face can thus be detected quickly and accurately.
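The final fitting step, in which an approximation curve is matched to the candidate-pixel distribution and its lowest point taken as the chin bottom, can be sketched as follows. The candidate coordinates are hypothetical, and numpy's generic least-squares `polyfit` is used in place of whatever curve model the patent actually employs; larger row index `y` means lower in the image.

```python
import numpy as np

# Hypothetical candidate chin-contour pixels (column x, row y);
# a larger y is lower in the image.
xs = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
ys = np.array([3, 4, 5, 6, 5, 4, 3], dtype=float)

# Fit a parabola y = a*x^2 + b*x + c to the candidate distribution.
a, b, c = np.polyfit(xs, ys, deg=2)

# The chin bottom is the curve's extremum (largest y, i.e. the lowest
# image point): the vertex at x* = -b / (2a).
x_star = -b / (2 * a)
y_star = a * x_star**2 + b * x_star + c
print(round(x_star, 3), round(y_star, 3))
```

Here the fitted parabola opens downward (a < 0), so its vertex is the lowest image point of the chin contour.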
PCT/JP2004/018451 2003-12-05 2004-12-03 Method for detecting the chin on a person's face, chin detection system, and chin detection program WO2005055144A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2003407911A JP2005165983A (ja) 2003-12-05 2003-12-05 Chin detection method, chin detection system, and chin detection program for a human face
JP2003-407911 2003-12-05

Publications (1)

Publication Number Publication Date
WO2005055144A1 true WO2005055144A1 (fr) 2005-06-16

Family

ID=34650325

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2004/018451 WO2005055144A1 (fr) 2003-12-05 2004-12-03 Method for detecting the chin on a person's face, chin detection system, and chin detection program

Country Status (4)

Country Link
US (1) US20060010582A1 (fr)
JP (1) JP2005165983A (fr)
TW (1) TW200527319A (fr)
WO (1) WO2005055144A1 (fr)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5095182B2 (ja) * 2005-12-01 2012-12-12 Shiseido Co., Ltd. Face classification device, face classification program, and recording medium on which the program is recorded
US7953253B2 (en) * 2005-12-31 2011-05-31 Arcsoft, Inc. Face detection on mobile devices
US7643659B2 (en) * 2005-12-31 2010-01-05 Arcsoft, Inc. Facial feature detection on mobile devices
US8417033B2 (en) * 2007-04-27 2013-04-09 Hewlett-Packard Development Company, L.P. Gradient based background segmentation and enhancement of images
JP2009239394A (ja) * 2008-03-26 2009-10-15 Seiko Epson Corp Coloring-book production apparatus and coloring-book production method
CN102914286B (zh) * 2012-09-12 2014-09-10 Fujian NetDragon Computer Network Information Technology Co., Ltd. Method for automatically detecting a user's sitting posture based on a handheld device
JP6307873B2 (ja) * 2013-12-24 2018-04-11 Fujitsu Ltd Target line detection device, method, and program
CN106156692B (zh) 2015-03-25 2019-12-13 Alibaba Group Holding Ltd. Method and device for locating facial edge feature points

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05151352A (ja) * 1991-11-26 1993-06-18 Glory Ltd Edge detection method and image recognition method using the same
JPH0877334A (ja) * 1994-09-09 1996-03-22 Konica Corp Method for automatically extracting feature points of a face image
JPH11306372A (ja) * 1998-04-17 1999-11-05 Sharp Corp Image processing apparatus, image processing method, and storage medium storing the method
JP2001184512A (ja) * 1999-12-24 2001-07-06 Sanyo Electric Co Ltd Image processing apparatus and method, and recording medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3735893B2 (ja) * 1995-06-22 2006-01-18 Seiko Epson Corp Face image processing method and face image processing apparatus
US5642441A (en) * 1995-10-24 1997-06-24 Neopath, Inc. Separation apparatus and method for measuring focal plane
US6330348B1 (en) * 1999-01-21 2001-12-11 Resolution Sciences Corporation Method and apparatus for measurement of microtome performance

Also Published As

Publication number Publication date
TW200527319A (en) 2005-08-16
US20060010582A1 (en) 2006-01-19
JP2005165983A (ja) 2005-06-23

Similar Documents

Publication Publication Date Title
TWI550549B (zh) Image processing device and image processing method
KR100480781B1 (ko) Method for extracting a tooth region from a tooth image, and identification method and apparatus using a tooth image
JP4505362B2 (ja) Red-eye detection device, method, and program
US7460705B2 (en) Head-top detecting method, head-top detecting system and a head-top detecting program for a human face
JP4307496B2 (ja) Face part detection device and program
US20050196044A1 (en) Method of extracting candidate human region within image, system for extracting candidate human region, program for extracting candidate human region, method of discerning top and bottom of human image, system for discerning top and bottom, and program for discerning top and bottom
US9858680B2 (en) Image processing device and imaging apparatus
JP4739870B2 (ja) Sunglasses detection device and face center position detection device
US20050220346A1 (en) Red eye detection device, red eye detection method, and recording medium with red eye detection program
CN107368806B (zh) Image correction method and device, computer-readable storage medium, and computer device
JP2007114029A (ja) Face center position detection device, face center position detection method, and program
JP2007272435A (ja) Facial feature extraction device and facial feature extraction method
JP3459950B2 (ja) Face detection and face tracking method and device therefor
CN112001853A (zh) Image processing device, image processing method, imaging device, and storage medium
WO2005055144A1 (fr) Method for detecting the chin on a person's face, chin detection system, and chin detection program
JP2005134966A (ja) Face image candidate area search method, search system, and search program
JP2014102713A (ja) Facial component extraction device, facial component extraction method, and program
RU2329535C2 (ru) Method for automatic cropping of photographs
JP2005316958A (ja) Red-eye detection device, method, and program
EP2541469A2 (fr) Image recognition device, image recognition method, and image recognition program
JP2007048108A (ja) Image evaluation system, image evaluation method, and image evaluation program
JP5128454B2 (ja) Eyelid detection device, eyelid detection method, and program
JP3963789B2 (ja) Eye detection device, eye detection program, recording medium recording the program, and eye detection method
JP2007219899A (ja) Personal identification device, personal identification method, and personal identification program
JP5995610B2 (ja) Subject recognition device and control method therefor, imaging device, display device, and program

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

122 Ep: pct application non-entry in european phase

Ref document number: 04819981

Country of ref document: EP

Kind code of ref document: A1

WWW Wipo information: withdrawn in national office

Ref document number: 4819981

Country of ref document: EP