WO2002069266A1 - Method for extracting a face image, and device, recording medium, and program therefor - Google Patents

Method for extracting a face image, and device, recording medium, and program therefor

Info

Publication number
WO2002069266A1
WO2002069266A1 (PCT/JP2001/001541)
Authority
WO
WIPO (PCT)
Prior art keywords
face
extracted
eye
image
template
Prior art date
Application number
PCT/JP2001/001541
Other languages
English (en)
Japanese (ja)
Inventor
Nobuyuki Matsui
Takeshi Torada
Original Assignee
Step One Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Step One Co., Ltd. filed Critical Step One Co., Ltd.
Priority to PCT/JP2001/001541 priority Critical patent/WO2002069266A1/fr
Publication of WO2002069266A1 publication Critical patent/WO2002069266A1/fr

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation

Definitions

  • The present invention relates to a method, an apparatus, a storage medium, and a program for extracting a face from a color image.
  • Extracting a human face from a color image captured by a computer using an image input device such as a digital camera is required in various fields.
  • In the security field, it is required as a pre-process for authenticating individuals.
  • In the monitoring field, it is required to improve safety and the like while promoting automation.
  • In the automotive field, it is required to detect the movement of the driver's face and automatically apply braking.
  • In the robotics field, it is required for a humanoid robot to locate a speaker and recognize the face of its interlocutor.
  • In one conventional method, pixels of skin color or a color close to skin color are extracted from a color image (for example, an RGB image) taken into a computer by a digital camera or the like, and the extracted pixels are connected by labeling to obtain the face region.
  • In another conventional method, a luminance value conversion process is performed on the color image (for example, an RGB image) captured into the computer by a digital camera or the like to generate a luminance value image, and the face region is extracted by performing a matching process on the luminance value image.
  • When the face area is extracted by the former method, the extraction is affected by individual differences such as lighter and darker skin, by the influence of the light source (changes in light amount), and by the color adjustment of the digital camera or the like; as a result, there is a disadvantage that the extraction accuracy of the face region is reduced. The latter method, which matches against the captured luminance value image, likewise has disadvantages in extraction accuracy and in the time it requires.
  • The present invention has been made in view of the above problems, and it is an object of the present invention to provide a face extraction method, device, storage medium, and program that can extract a face region with high accuracy without being affected by individual differences, the influence of the light source, or the color adjustment of a digital camera or the like, and that can significantly reduce the required time.
  • The face extraction method of claim 1 performs a luminance value conversion on a color image including a face region to generate a luminance value image, creates a first-derivative image from the luminance value image using information between adjacent pixels, and extracts the face region by performing a matching process using a first-derivative face template.
  • The face extraction method of claim 2 ignores, when creating the first-derivative image, points at or above the highest luminance value of the first-derivative face template.
  • The face extraction method of claim 3 extracts the face region using a first-derivative face template lightened by a fixed light color ratio.
  • In the face extraction method of claim 4, a point corresponding to the lowest luminance value is extracted as an eye candidate position from the face region extracted by the method of any one of claims 1 to 3, the eye position is extracted by performing template matching based on the extracted eye candidate position, and the face is extracted by normalization based on the extracted distance between the eyes.
  • In the face extraction method of claim 5, the face region extracted by the method of any one of claims 1 to 3 is divided into left and right halves, and a point corresponding to the lowest luminance value in each divided region is extracted as an eye candidate position.
  • In the face extraction method of claim 6, the face region extracted by the method of any one of claims 1 to 3 is divided into left and right halves, the minimum luminance positions in each divided region are analyzed, eye candidate positions are extracted from the analysis information by template matching, each eye candidate position is expressed using the average luminance value, and the eye position is extracted by performing eye template matching.
  • The face extraction method of claim 7 extracts a cluster of low-luminance points as the eyeball position.
  • The face extraction device of claim 8 comprises luminance value image creating means for performing a luminance value conversion on a color image including a face region to create a luminance value image, first-derivative image creating means for creating a first-derivative image from the luminance value image using information between adjacent pixels, and face region extracting means for extracting the face region by performing a matching process using a first-derivative face template.
  • In the face extraction device of claim 9, the first-derivative image creating means ignores, when creating the first-derivative image, points at or above the highest luminance value of the first-derivative face template.
  • In the face extraction device of claim 10, the face region extracting means extracts the face region using a first-derivative face template lightened by a fixed light color ratio.
  • The face extraction device of claim 11 comprises eye candidate position extracting means for extracting a point corresponding to the lowest luminance value as an eye candidate position from the face region extracted by the device of any one of claims 8 to 10, eye position extracting means for extracting the eye position by performing template matching based on the extracted eye candidate position, and face extracting means for extracting the face by normalization based on the extracted distance between both eyes.
  • In the face extraction device of claim 12, the eye candidate position extracting means divides the face region extracted by the device of any one of claims 8 to 10 into left and right halves and extracts a point corresponding to the lowest luminance value in each divided region as an eye candidate position.
  • In the face extraction device of claim 13, the eye candidate position extracting means divides the face region extracted by the device of any one of claims 8 to 10 into left and right halves, analyzes the minimum luminance positions in each divided region, and extracts eye candidate positions from the analysis information by template matching, and the eye position extracting means extracts the eye position by performing eye template matching.
  • In the face extraction device of claim 14, the eye position extracting means extracts a cluster of low-luminance points as the eyeball position.
  • The storage medium of claim 15 stores a computer program for causing a computer to execute the processing procedure of any one of claims 1 to 7.
  • The program of claim 16 causes a computer to execute the processing procedure of any one of claims 1 to 7.
  • According to the face extraction method of claim 2, points at or above the highest luminance value of the first-derivative face template are ignored when the first-derivative image is created, so the influence of background images is eliminated, the extraction accuracy of the face region can be further improved, and the time required for extracting the face region can be greatly reduced.
  • According to the face extraction method of claim 3, since the lightened template's pixel values always stay below those of the detected image, the comparison with the comparison target can be performed in a fixed direction.
  • According to the face extraction method of claim 4, a point corresponding to the lowest luminance value is extracted as an eye candidate position from the face region extracted by the method of any one of claims 1 to 3, template matching is performed based on the extracted eye candidate position to extract the eye position, and the face is extracted by normalization based on the extracted distance between the eyes; the face region can therefore be extracted with higher accuracy, and the time required to extract the face region can be significantly reduced.
  • According to the face extraction method of claim 5, the face region extracted by the method of any one of claims 1 to 3 is divided into left and right halves, and a point corresponding to the lowest luminance value in each divided region is extracted as an eye candidate position; therefore, in addition to the effect of claim 4, the time required for extracting the face region can be further reduced.
  • According to the face extraction method of claim 6, the face region extracted by the method of any one of claims 1 to 3 is divided into left and right halves, the minimum luminance positions in each divided region are analyzed, eye candidate positions are extracted from the analysis information by template matching, each eye candidate position is expressed using the average luminance value, and the eye position is extracted by performing eye template matching; therefore, in addition to the effect of claim 4 or claim 5, the extraction accuracy of the face region can be further enhanced.
  • According to the face extraction method of claim 7, the extraction accuracy of the eyeball position can be improved, and as a result the extraction accuracy of the face region can be improved.
  • According to the face extraction device of claim 8, the luminance value image creating means generates a luminance value image by performing a luminance value conversion on the color image including the face region, the first-derivative image creating means creates a first-derivative image from the luminance value image using information between adjacent pixels, and the face region extracting means can extract the face region by performing a matching process using a first-derivative face template.
  • According to the face extraction device of claim 9, the first-derivative image creating means ignores points at or above the highest luminance value of the first-derivative face template when creating the first-derivative image, so the influence of background images is eliminated, the extraction accuracy of the face region is further improved, and the time required for extracting the face region can be significantly reduced.
  • According to the face extraction device of claim 10, the face region extracting means extracts the face region using a first-derivative face template lightened by a fixed light color ratio, so the comparison can be performed in a fixed direction with respect to the comparison target.
  • According to the face extraction device of claim 11, the eye candidate position extracting means extracts the point corresponding to the lowest luminance value from the face region extracted by the device of any one of claims 8 to 10, the eye position extracting means extracts the eye position by performing template matching based on the extracted eye candidate position, and the face can then be extracted by normalization based on the extracted distance between the eyes; the face region can therefore be extracted with higher accuracy, and the time required for extracting the face region can be significantly reduced.
  • According to the face extraction device of claim 12, the eye candidate position extracting means divides the face region extracted by the device of any one of claims 8 to 10 into left and right halves and extracts the point corresponding to the lowest luminance value in each divided region as an eye candidate position; therefore, in addition to the effect of claim 11, the time required to extract the face region can be significantly reduced.
  • According to the face extraction device of claim 13, the eye candidate position extracting means divides the face region extracted by the device of any one of claims 8 to 10 into left and right halves, analyzes the minimum luminance positions in each divided region, and extracts eye candidate positions from the analysis information by template matching, and the eye position extracting means extracts the eye position by performing eye template matching; therefore, in addition to the effect of claim 11 or claim 12, the extraction accuracy of the face region can be further improved.
  • According to the face extraction device of claim 14, the eye position extracting means extracts a cluster of low-luminance points as the eyeball position; in addition to the effect of any one of claims 11 to 13, the extraction accuracy of the eyeball position, and consequently of the face region, can be improved.
  • FIG. 1 is a block diagram showing an embodiment of the face extraction device of the present invention.
  • FIG. 2 is a flowchart illustrating an embodiment of the face extraction method of the present invention.
  • FIG. 3 is a flowchart for explaining in detail the processing of step SP2 in the flowchart of FIG.
  • FIG. 4 is a flowchart illustrating in detail a part of the process in step SP3 of the flowchart in FIG.
  • FIG. 5 is a flowchart for explaining in detail the rest of the process of step SP3 in the flowchart of FIG.
  • FIG. 6 is a diagram illustrating a specific example of a process for obtaining a reduced image by applying an average luminance method to a luminance value image.
  • FIG. 7 is a diagram for explaining a specific example of a process for obtaining a primary differential image and a specific example of a process for ignoring pixels exceeding a maximum differential value.
  • FIG. 8 is a diagram for explaining a specific example of a process for creating a primary differential face template.
  • FIG. 9 is a diagram for explaining a process of scanning a primary differential image using a primary differential face template.
  • FIG. 10 is a diagram for explaining a specific example of the process of expanding and dividing the rough face area.
  • FIG. 11 is a diagram showing an example of a minimum point group.
  • FIG. 12 is a diagram showing an example of the minimum luminance template of the eye.
  • FIG. 13 is a diagram for explaining a specific example of a process of scanning the minimum point group with the minimum luminance template of the eye.
  • FIG. 14 is a diagram for explaining a specific example of a process of extracting an eye region.
  • FIG. 15 is a diagram illustrating a specific example of a process for extracting an eye region more accurately.
  • FIG. 16 is a diagram illustrating a specific example of a face region extraction process based on the distance between the eyes.
  • FIG. 1 is a block diagram showing an embodiment of a face extraction device according to the present invention.
  • This face extraction device comprises an image input unit 1 such as a digital camera; a luminance value image creating unit 2 that performs a luminance value conversion on the digital color image (for example, an RGB image) input by the image input unit 1 to generate a luminance value image; a first-derivative image creating unit 3 that creates a first-derivative image by calculating luminance differences using information between adjacent pixels of the luminance value image; a first-derivative face template holding unit 4 that holds a first-derivative face template; a rough face area extracting unit 5 that extracts a rough face area by performing a matching process on the first-derivative image using the first-derivative face template; a minimum luminance value point extracting unit 6 that extracts points of minimum luminance value from the rough face area in the luminance value image; an eyeball extracting unit 7 that extracts a portion where the minimum luminance value points are concentrated as an eyeball; an inter-eye distance detecting unit 8 that detects and outputs the distance between both eyes; and a face region extracting unit 9 that extracts the face region by performing a normalization process based on the inter-eye distance.
  • FIG. 2 is a flowchart schematically illustrating one embodiment of the face extraction method of the present invention.
  • First, a digital color image (for example, an RGB image) is input by an image input unit such as a digital camera (step SP1). In step SP2, a rough face area is extracted using a face template based on the input digital color image. In step SP3, the exact positions of both eyes are extracted from the rough face area, and in step SP4, an accurate face region is extracted based on the distance between the eyes.
  • FIG. 3 is a flowchart illustrating in detail the process of step SP2 in the flowchart of FIG.
  • In step SP1, a luminance value image is created by performing a luminance value conversion process on the RGB image (for example, the Y value of the YIQ conversion is used as the luminance value). In step SP2, a reduced image is obtained by applying the average luminance method to the luminance value image, and in step SP3, luminance differences are calculated using information between adjacent pixels of the reduced image to create a first-derivative image.
  • As the differential operator used here, the conventionally well-known Roberts operator, for example, can be employed.
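For illustration only (this sketch is not part of the patent text), steps SP1 and SP3 might look as follows in Python with NumPy; the BT.601 weights for the Y value and all function names are assumptions of this sketch:

```python
import numpy as np

def luminance_image(rgb):
    """Step SP1: luminance value conversion. The Y value of the YIQ
    conversion is used as the luminance value (standard BT.601 weights;
    the patent does not spell out the coefficients)."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def roberts_gradient(lum):
    """Step SP3: first-derivative image via the Roberts operator, i.e.
    luminance differences between diagonally adjacent pixels."""
    lum = lum.astype(np.float64)
    gx = lum[:-1, :-1] - lum[1:, 1:]    # one diagonal difference
    gy = lum[:-1, 1:] - lum[1:, :-1]    # the other diagonal difference
    return np.sqrt(gx ** 2 + gy ** 2)   # gradient magnitude per pixel
```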
  • Next, the maximum differential value is extracted from the first-derivative face template created in advance.
  • The first-derivative face template is created, for example, by merging a plurality of first-derivative face samples. It is preferable to lighten each pixel of the first-derivative face template using a fixed light color ratio: the template's overall pixel values are then kept lower than those of the detected image, and the comparison accuracy can be improved.
  • In step SP5, among all the pixels included in the first-derivative image, the value of any pixel exceeding the maximum differential value is replaced with 0, thereby ignoring that pixel. By this processing, strong edge expressions in the background disappear, which prevents the false detection in which the rough face area found by the template matching process described later deviates from the actual face.
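A minimal sketch of this step SP5 suppression, under the same assumptions as above (not from the patent):

```python
import numpy as np

def suppress_strong_edges(diff_img, template):
    """Step SP5: any pixel of the first-derivative image exceeding the
    maximum differential value of the first-derivative face template is
    replaced with 0, so strong background edges cannot attract the
    template matching performed later."""
    out = diff_img.copy()
    out[out > template.max()] = 0
    return out
```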
  • In step SP7, the position with the highest degree of matching is detected as the approximate position of the face; specifically, for example, a simple scalar distance between the image vector and the first-derivative face template vector is calculated, the position giving the minimum value is detected as the approximate position of the face, and the rough face area is thereby extracted.
  • The series of processes then ends.
  • FIGS. 4 and 5 are flowcharts for explaining in detail the processing of step SP3 in the flowchart of FIG.
  • In step SP1, a rectangle corresponding to the rough face area is extracted from the luminance value image; in step SP2, the extracted rectangle is expanded vertically and horizontally; and in step SP3, the expanded rectangular area is partitioned into a cross shape and the upper-left partition is selected.
  • In step SP4, a first luminance threshold for detecting an eye is calculated from the luminance value at the center of the cross and the typical luminance difference of the eyes. In step SP5, within the selected partition, the pixel having the minimum luminance value (the minimum luminance point) is extracted for each column, and among the extracted minimum luminance points, only those whose luminance value is lower than the first luminance threshold are stored as the minimum point group.
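As a rough illustration of this per-column minimum luminance point extraction (the function name and the boolean-map representation are assumptions of this sketch, not the patent's implementation):

```python
import numpy as np

def minimum_point_group(region, first_threshold):
    """For each column of the luminance region, keep the darkest pixel,
    but only if its luminance is below the first luminance threshold."""
    rows = np.argmin(region, axis=0)             # darkest row in each column
    cols = np.arange(region.shape[1])
    keep = region[rows, cols] < first_threshold  # sufficiently dark points only
    points = np.zeros(region.shape, dtype=bool)
    points[rows[keep], cols[keep]] = True
    return points
```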
  • In step SP6, the created minimum point group is scanned using a previously prepared minimum luminance template of a typical eye (see, for example, FIG. 12).
  • As the scanning method, it is preferable to adopt, for example, a method using an AND condition on the minimum points; if the eye position were determined from the scan result alone, eyeglasses, eyebrows, or the hairline could be detected erroneously, and this inconvenience is thereby prevented. Then, in step SP7, among the scanned regions exceeding a preset matching degree threshold, up to three regions from the top are extracted as eye candidate regions.
  • In step SP8, it is determined whether or not there are a plurality of eye candidate regions. If so, in step SP9, the average luminance value of the region of the luminance value image corresponding to each eye candidate region is calculated, and 3/4 and 1/2 of the average luminance value are set as the second and third luminance thresholds. In step SP10, the luminance value of each pixel in the region corresponding to each eye candidate region is expressed as a ternary value by comparison with the second and third luminance thresholds.
  • In step SP11, the optimal eye candidate region is narrowed down from the candidate regions using a prepared ternary eye template (selecting the eye candidate region that best matches the ternary eye template prevents erroneous detection of, for example, eyeglass temples). In step SP12, the optimal eye candidate region is expressed again as a binary value, and the portion where the low-luminance points are most dense is extracted as the eyeball. In step SP13, it is determined whether or not the extraction of both eyes has been completed.
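The ternary representation of steps SP9 and SP10 could look like the following sketch (an illustration under the thresholds stated above, not the patent's implementation):

```python
import numpy as np

def ternarize(region):
    """Steps SP9-SP10: 3/4 and 1/2 of the region's average luminance serve
    as the second and third luminance thresholds; each pixel is then
    expressed with one of three levels (0 = dark, 1 = mid, 2 = bright)."""
    avg = region.mean()
    tern = np.full(region.shape, 2, dtype=np.uint8)
    tern[region < 0.75 * avg] = 1
    tern[region < 0.5 * avg] = 0
    return tern
```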
  • If it is determined in step SP13 that the extraction of both eyes has not been completed, the upper-right partition of the cross-divided area is selected in step SP14 and the processing is repeated from step SP4. Conversely, if it is determined in step SP13 that the extraction of both eyes has been completed, the series of processes ends.
  • If it is determined in step SP8 that there is only one eye candidate region, the process proceeds directly to step SP12.
  • In step SP4 of the flowchart of FIG. 2, the distance between the eyes is detected based on the eyes extracted as described above, the face area is determined using a typical aspect ratio of the face, and the final face area is extracted by enlarging that area by a preset magnification.
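A sketch of this final step; the aspect ratio, the assumed width of two inter-eye distances, and placing the eyes at the upper third of the rectangle are all illustrative assumptions, since the patent only speaks of a "typical aspect ratio" and a "preset magnification":

```python
def face_rectangle(left_eye, right_eye, aspect=1.4, magnification=1.2):
    """Derive the face rectangle (x, y, width, height) from the two
    eye centers, then enlarge it by the preset magnification."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    d = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5  # distance between the eyes
    cx, cy = (lx + rx) / 2, (ly + ry) / 2         # midpoint between the eyes
    width = 2.0 * d * magnification               # assumed: width ~ 2 eye distances
    height = width * aspect                       # typical face aspect ratio
    return cx - width / 2, cy - height / 3, width, height
```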
  • FIG. 6 is a diagram illustrating a specific example of a process for obtaining a reduced image by applying the average luminance method to a luminance value image.
  • By averaging the luminance of adjacent pixels as shown in (A) of FIG. 6, the luminance value image can be reduced as shown in (B) of FIG. 6. FIG. 6 illustrates reduction to 1/2, but reduction to 1/n (n being an integer of 3 or more) is possible as necessary. Reducing the luminance value image in this manner decreases the number of pixels handled in subsequent processing, so the processing time can be shortened.
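For n = 2, the average luminance method amounts to mean-pooling 2 x 2 blocks; a minimal sketch (illustrative, with edge rows and columns that do not fill a block simply dropped):

```python
import numpy as np

def reduce_average(lum, n=2):
    """Average luminance method: replace each n x n block of the luminance
    value image by its mean, shrinking the image to 1/n per side."""
    h, w = (lum.shape[0] // n) * n, (lum.shape[1] // n) * n
    blocks = lum[:h, :w].reshape(h // n, n, w // n, n)
    return blocks.mean(axis=(1, 3))
```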
  • FIG. 7 is a diagram for explaining a specific example of a process for obtaining a primary differential image and a specific example of a process for ignoring pixels exceeding a maximum differential value.
  • By setting the maximum differential value obtained from the first-derivative face template as a threshold and setting the value of any pixel exceeding this threshold to 0, an image excluding the background signal can be obtained, as shown in (C) of FIG. 7.
  • Since no strong edge expression remains in the background, the inconvenience of extracting an area other than the face when acquiring the rough face area using the template can be prevented.
  • FIG. 8 is a view for explaining a specific example of a process for creating a primary differential face template.
  • Each pixel of the first-derivative face template shown in (E) of FIG. 8 is lightened using a fixed light color ratio: the luminance value of each pixel is multiplied by the light color ratio (a value less than 1.0), making the image uniformly lighter and yielding the lightened first-derivative face template shown in (F) of FIG. 8.
  • the position with the nose at the center can be determined as the approximate position of the face.
  • FIG. 9 is a diagram for explaining the process of scanning the primary differential image using the primary differential face template.
  • The first-derivative face template is shifted pixel by pixel from the upper-left corner to the lower-right corner of the first-derivative image, as shown by the arrow in FIG. 9, and at each position a simple scalar distance between the template vector and the image vector is calculated, namely the Euclidean distance (norm), i.e. the square root of the sum of the squares of the element-wise differences; the position giving the minimum value is determined as the approximate face position F.
  • That is, denoting the luminance values of the pixels of an n x m window of the image as v11, ..., vn1, ..., v1m, ..., vnm and the corresponding values of the first-derivative face template as t11, ..., tnm, the scalar distance is d = sqrt(Σ (tij - vij)²), and the window position for which d is minimal is taken as the approximate face position F.
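Putting the scan together as a sketch (the light color ratio of 0.8 is an illustrative assumption, since the patent only requires a value below 1.0, and a real implementation would vectorize the double loop):

```python
import numpy as np

def approximate_face_position(diff_img, template, light_color_ratio=0.8):
    """Scan the first-derivative image with the lightened first-derivative
    face template from the upper-left to the lower-right corner; the
    position minimizing the Euclidean (norm) distance between the two
    pixel vectors is the approximate face position F."""
    t = template * light_color_ratio   # lightened template, cf. FIG. 8 (E)->(F)
    th, tw = t.shape
    best, best_pos = float("inf"), (0, 0)
    for y in range(diff_img.shape[0] - th + 1):
        for x in range(diff_img.shape[1] - tw + 1):
            window = diff_img[y:y + th, x:x + tw]
            d = np.sqrt(((window - t) ** 2).sum())  # scalar (norm) distance
            if d < best:
                best, best_pos = d, (y, x)
    return best_pos
```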
  • FIG. 10 is a diagram for explaining a specific example of the expansion and division processing of the face outline area.
  • The rectangle obtained as the approximate face position {see (A) in FIG. 10} is applied to the corresponding position of the luminance value image, and this area is enlarged by multiplying it by a real factor in all directions from its center {see (B) in FIG. 10}. The enlarged area is then divided into a cross shape for subsequent processing {see (C) in FIG. 10}.
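A sketch of the enlargement and cross division (the factor of 1.5 is an illustrative assumption; the patent only says the area is multiplied by a real number in all directions):

```python
def expand_and_split(rect, img_h, img_w, factor=1.5):
    """Enlarge the approximate face rectangle (x, y, w, h) about its center,
    clip it to the image, and split it into four quadrants by a cross."""
    x, y, w, h = rect
    cx, cy = x + w / 2, y + h / 2
    nw, nh = w * factor, h * factor
    nx, ny = max(0.0, cx - nw / 2), max(0.0, cy - nh / 2)
    nw, nh = min(img_w - nx, nw), min(img_h - ny, nh)
    mx, my = nx + nw / 2, ny + nh / 2   # center of the cross
    return {
        "upper_left":  (nx, ny, mx - nx, my - ny),
        "upper_right": (mx, ny, nx + nw - mx, my - ny),
        "lower_left":  (nx, my, mx - nx, ny + nh - my),
        "lower_right": (mx, my, nx + nw - mx, ny + nh - my),
    }
```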
  • FIG. 13 is a view for explaining a specific example of a process of scanning the minimum point group with the minimum luminance template of the eye.
  • FIG. 14 is a diagram illustrating a specific example of a process of extracting an eye region.
  • When a minimum point group is obtained as shown in (B) of FIG. 14, the minimum luminance template of the eye is scanned over it with a preset matching degree threshold, and up to three regions from the top exceeding the threshold are extracted as eye candidate regions D1 and D2 {see (C) in FIG. 14}. The regions of the luminance value image corresponding to the eye candidate regions D1 and D2 are then extracted {see (E1) and (E2) in FIG. 14}, and 3/4 and 1/3 of the average luminance value in each eye candidate region are used to produce a ternary representation {see (F1) and (F2) in FIG. 14}. Matching is performed against the ternary eye template shown in (G) of FIG. 14, and one eye candidate region is selected as the eye region as shown in (H) of FIG. 14.
  • FIG. 15 is a diagram illustrating a specific example of a process of extracting an eye region more accurately.
  • Once the eye region is selected, it is expressed again by binarization {see (B) in FIG. 15}, and the portion D where the low-luminance points are most concentrated is extracted as the eyeball position {see (C) in FIG. 15}, whereby the eye region is extracted more accurately.
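A sketch of the binarization and densest-cluster search (the binarization threshold at half the average luminance and the 5 x 5 density window are illustrative assumptions of this sketch):

```python
import numpy as np

def eyeball_position(region, win=5):
    """Binarize the selected eye region (dark = 1) and return the point
    where the low-luminance points are most concentrated (cf. FIG. 15)."""
    dark = (region < 0.5 * region.mean()).astype(np.int32)
    pad = win // 2
    padded = np.pad(dark, pad)
    # Density of dark pixels in a win x win neighbourhood (box sum).
    density = sum(padded[dy:dy + dark.shape[0], dx:dx + dark.shape[1]]
                  for dy in range(win) for dx in range(win))
    y, x = np.unravel_index(np.argmax(density), density.shape)
    return y, x
```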
  • FIG. 16 is a view for explaining a specific example of a face region extraction process based on the distance between the eyes.
  • The face area A1 is obtained using a typical aspect ratio of the face, and the final face area A2 is obtained by enlarging A1 by a predetermined magnification about its center.
  • By making each pixel of the face template lighter by a fixed ratio, differences in the first derivative caused by the sharpness of the image due to camera characteristics can be absorbed, and the face region can be accurately extracted regardless of the size of the face included in the detected image.
  • Focusing on the fact that the eyeball absorbs light, template matching is performed on the points showing the lowest luminance in the surrounding area; the position of the eye can therefore be accurately extracted regardless of individual differences, light amount, light source, and the like. Consequently, the face region can be accurately extracted, and the processing time can be reduced.
  • A computer program for executing each processing procedure of the face extraction method described above can be stored in a storage medium such as a floppy disk, CD-ROM, or MO. By having a computer execute the computer program stored in the storage medium, the same effects as above can be achieved.
  • the invention of claim 1 makes it possible to extract a face region with high accuracy by largely eliminating the effects of individual differences, the influence of the light source, and the color adjustment of a digital camera or the like, and has the unique effect that the required time can be significantly reduced.
  • the invention of claim 2 has the unique effects that the extraction accuracy of the face region can be further improved by eliminating the influence of various background images, and that the time required for extracting the face region can be greatly reduced.
  • the invention of claim 3 has, in addition to the effect of claim 1 or claim 2, the unique effect that the comparison can be performed in a fixed direction with respect to the comparison target.
  • according to the invention of claim 4, the face region can be extracted with higher accuracy, and the time required for extracting the face region can be significantly reduced.
  • the invention of claim 5 has a unique effect that the time required for extracting the face region can be significantly reduced in addition to the effect of claim 4.
  • the invention of claim 6 has a unique effect that the extraction accuracy of the face region can be further enhanced in addition to the effect of claim 4 or claim 5.
  • the invention of claim 7 has, in addition to the effect of any one of claims 4 to 6, the unique effect that the extraction accuracy of the eyeball position, and consequently of the face region, can be increased.
  • according to the invention of claim 8, the face area can be extracted with high accuracy by largely eliminating the influence of individual differences, the light source, the color adjustment of a digital camera, and the like, and the required time can be significantly reduced.
  • the invention of claim 9 has the unique effects that the extraction accuracy of the face region can be further improved by eliminating the influence of various background images, and that the time required for extracting the face region can be significantly reduced.
  • the invention of claim 11 has a specific effect that the face area can be extracted with higher accuracy and the time required for extracting the face area can be greatly reduced.
  • the invention of claim 12 has a unique effect that the time required for extracting the face region can be further greatly reduced, in addition to the effect of claim 11.
  • the invention of claim 13 has a unique effect that the extraction accuracy of the face region can be further improved in addition to the effect of claim 11 or claim 12.
  • the invention of claim 15 has the same effect as any of claims 1 to 7.
  • the invention of claim 16 has the same effect as any of claims 1 to 7.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Once a digital color image has been taken by a digital camera, the rough outline of the face is extracted from it using a face template applied to the image. The exact position of the eyes is determined from the area inside the outline, and the face is deduced from the distance between the eyes. The face region can thus be extracted with high accuracy, without being influenced by differences between individuals, by the light source, or by the color adjustment of the camera, and in a relatively short time.
PCT/JP2001/001541 2001-02-28 2001-02-28 Method for extracting a face image, and device, recording medium, and program therefor WO2002069266A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2001/001541 WO2002069266A1 (fr) 2001-02-28 2001-02-28 Method for extracting a face image, and device, recording medium, and program therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2001/001541 WO2002069266A1 (fr) 2001-02-28 2001-02-28 Method for extracting a face image, and device, recording medium, and program therefor

Publications (1)

Publication Number Publication Date
WO2002069266A1 true WO2002069266A1 (fr) 2002-09-06

Family

ID=11737079

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2001/001541 WO2002069266A1 (fr) 2001-02-28 2001-02-28 Method for extracting a face image, and device, recording medium, and program therefor

Country Status (1)

Country Link
WO (1) WO2002069266A1 (fr)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5982912A (en) * 1996-03-18 1999-11-09 Kabushiki Kaisha Toshiba Person identification apparatus and method using concentric templates and feature point candidates

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1703480A3 (fr) * 2005-03-17 2007-02-14 Delphi Technologies, Inc. System and method for determining the state of awareness
US7697766B2 (en) 2005-03-17 2010-04-13 Delphi Technologies, Inc. System and method to determine awareness
US8363957B2 (en) 2009-08-06 2013-01-29 Delphi Technologies, Inc. Image classification system and method thereof

Similar Documents

Publication Publication Date Title
KR102279350B1 (ko) Method and apparatus for generating an image data set for CNN training for obstacle detection in autonomous driving situations, and test method and test apparatus using the same
KR102592076B1 (ko) Deep-learning-based image processing apparatus and method, and learning apparatus
JP2020126614A (ja) Method for auto-labeling training images for use in a deep learning network to analyze images with high precision, and auto-labeling device using the same
JP5047005B2 (ja) Image processing method, pattern detection method, pattern recognition method, and image processing apparatus
JP4410732B2 (ja) Face image detection device, face image detection method, and face image detection program
JP4708909B2 (ja) Object detection method, apparatus, and program for digital images
WO2021016873A1 (fr) Attention detection method based on a cascaded neural network, computer device, and computer-readable storage medium
US8452091B2 (en) Method and apparatus for converting skin color of image
JPH10191020A (ja) Subject image extraction method and apparatus
CN113689436B (zh) Image semantic segmentation method, apparatus, device, and storage medium
JP2023535084A (ja) Apparatus and method for analyzing symbols included in a facility floor plan
CN113269089A (zh) Real-time gesture recognition method and system based on deep learning
CN114549557A (zh) Portrait segmentation network training method, apparatus, device, and medium
Escalera et al. Fast greyscale road sign model matching and recognition
JP2008003749A (ja) Feature point detection apparatus, method, and program
JP7084444B2 (ja) 3D image labeling method and 3D image labeling device based on labeling information of 2D images
CN113989814A (zh) Image generation method, apparatus, computer device, and storage medium
CN112434581A (zh) Outdoor target color recognition method, system, electronic device, and storage medium
US20150023558A1 (en) System and method for face detection and recognition using locally evaluated zernike and similar moments
CN116824333A (zh) Nasopharyngeal carcinoma detection system based on a deep learning model
JPH10222678A (ja) Object detection device and object detection method
WO2002069266A1 (fr) Method for extracting a face image, and device, recording medium, and program therefor
CN113095119A (zh) Face recognition system with correction of the face cropping frame
JP2006285959A (ja) Learning method for a face discrimination device, face discrimination method and device, and program
JPH11272800A (ja) Character recognition device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AT CA CN IL IN JP KR NO RU SG US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP