WO2006013913A1 - Object image detection device, face image detection program, and face image detection method - Google Patents
- Publication number
- WO2006013913A1 (PCT/JP2005/014271)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- detection target
- detection
- object image
- face
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
Definitions
- the present invention relates to pattern recognition and object recognition technology.
- the present invention relates to an object image detection device, a face image detection program, and a face image detection method for detecting whether or not an object such as a human face is included.
- As described in Japanese Patent Laid-Open No. 2003-271933, a face image was conventionally detected by matching the detection target image against a template representing an average face image.
- The object image detection device of the present invention determines whether or not an object image exists in a detection target image for which it is not known whether the object image is included.
- The object image detection device comprises: an image reading unit that reads a predetermined region of the detection target image as a detection target region; feature vector generation means that divides the image of the detection target region, normalized to a predetermined size, into a plurality of blocks, calculates for each block a representative value of an image feature amount indicating a predetermined image characteristic, and generates from those representative values a feature vector indicating the features of the image characteristic in the detection target region; and at least two identification means that identify, based on different criteria, whether or not an object image exists in the detection target region from the image characteristics indicated by the feature vector.
- According to this configuration, the feature vector indicating the image feature amounts of the detection target region is generated from the representative values of the region divided into a plurality of blocks, and is input to two or more identification means that identify with different criteria. Since the detection target region is thus identified based on at least two different criteria, the object image detection device of the present invention can detect the object in the detection target region with high reliability regardless of the object's orientation.
- The object image detection device of the present invention preferably further includes an identification means selection unit that selects the identification means in accordance with a statistical property of the image characteristics indicated by the feature vector.
- Preferably, the feature vector generation means includes: a normalization unit that normalizes the image in the detection target region to a predetermined size; an image characteristic calculation unit that calculates the predetermined image characteristic of the image as numerical values; and an average/variance calculation unit that divides the detection target region into a plurality of blocks and calculates the average value or variance value of the numerical values for each block.
- With this configuration, the image of the detection target region, normalized to a predetermined size, is divided into a plurality of blocks, and the feature vector is formed by representing each block by the average or variance of its characteristic values. The characteristic value of each block is therefore retained while the amount of computation needed to calculate the vector is drastically reduced, so the feature vector can be calculated accurately and at high speed.
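The block-representative-value step above can be sketched as follows. This is a minimal illustration rather than the patent's implementation; the function name, the 4×3 block size (taken from the later description of Fig. 8), and the use of NumPy are assumptions.

```python
import numpy as np

def block_feature_vector(image, block_shape=(4, 3), use_variance=False):
    """Divide a normalized grayscale image into blocks and represent each
    block by its mean (or variance), as the average/variance calculation
    unit does. `image` is a 2-D array whose dimensions are assumed to be
    divisible by `block_shape`."""
    bh, bw = block_shape
    h, w = image.shape
    # Reshape so each block becomes one row, then reduce per block.
    blocks = image.reshape(h // bh, bh, w // bw, bw).swapaxes(1, 2)
    blocks = blocks.reshape(-1, bh * bw)
    return blocks.var(axis=1) if use_variance else blocks.mean(axis=1)

# A 24x24 normalized region divided into 4x3 blocks yields 48 values.
img = np.arange(24 * 24, dtype=float).reshape(24, 24)
vec = block_feature_vector(img)
print(vec.shape)  # (48,)
```

Representing each 12-pixel block by one number compresses the 576-dimensional region to 48 dimensions, which is the "drastic reduction" the text refers to.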
- the image characteristic calculation unit includes a luminance calculation unit that calculates a luminance value in each pixel constituting the image in the detection target region.
- the image characteristic calculation unit includes an edge intensity calculation unit that calculates the intensity of the edge in the detection target region.
- With these configurations, when an object image exists in the detection target region, it can be identified accurately and at high speed.
- Preferably, the edge strength is calculated using a Sobel operator at each pixel constituting the image in the detection target region.
- With this configuration, the edge strength can be detected with high accuracy.
- the identification means comprises a support vector machine that has previously learned a plurality of learning sample object images and sample non-object images.
- Preferably, the identification means includes a front-stage classifier and a rear-stage classifier, the front-stage classifier performing identification processing faster than the rear-stage classifier, and the rear-stage classifier performing identification processing with higher accuracy than the front-stage classifier.
- Preferably, the rear-stage classifier identifies only the feature vectors that the front-stage classifier has judged as possibly containing an object.
- With this configuration, the identification processing can be performed efficiently.
- Preferably, the front-stage classifier includes a linear kernel function as the identification function of its support vector machine.
- Preferably, the rear-stage classifier includes a nonlinear kernel function as the identification function of its support vector machine.
- With this configuration, the calculation is simplified and the amount of computation is reduced, so high-speed identification processing is possible.
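The two-stage arrangement above can be sketched as a simple cascade: a cheap classifier rejects most regions, and only survivors reach the expensive one. The function and the toy scoring rules below are illustrative assumptions, not the patent's code.

```python
def cascade_classify(x, fast_clf, accurate_clf):
    """Two-stage cascade: the fast (e.g. linear-kernel) classifier rejects
    most non-face regions cheaply; only candidates it accepts are passed
    to the slower, more accurate (e.g. RBF-kernel) classifier.
    Both classifiers return a signed score; non-negative means 'face'."""
    if fast_clf(x) < 0:          # cheap early rejection
        return False
    return accurate_clf(x) >= 0  # expensive confirmation

# Toy stand-ins for the two stages (hypothetical decision functions).
fast = lambda x: x - 0.2       # coarse, permissive linear score
accurate = lambda x: x - 0.5   # stricter, more accurate score
print(cascade_classify(0.1, fast, accurate))  # False (rejected early)
print(cascade_classify(0.7, fast, accurate))  # True
```

The efficiency gain comes from the fact that most windows in a real image are non-faces and never invoke the nonlinear kernel evaluation at all.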
- Preferably, the identification means includes one classifier and another classifier arranged after it, each classifier identifying based on a different image characteristic.
- Preferably, the other classifier includes a support vector machine that has learned the learning object images together with the non-object images misidentified by the one classifier.
- With this configuration, the other classifier can learn effectively so as not to misidentify the previously misidentified images again.
- The face image detection program of the present invention detects whether or not a face image exists in a detection target image for which it is not known whether a face image is included.
- The program causes a computer to function as: an image reading unit that reads a predetermined area of the detection target image as a detection target area; feature vector generation means that divides the image of the detection target area, normalized to a predetermined size, into a plurality of blocks, calculates for each block a representative value of the image feature amount indicating a predetermined image characteristic, and generates from those representative values a feature vector indicating the features of the image characteristic in the detection target area; and at least two identification means that identify, based on different criteria, whether or not a face image exists in the detection target area from the image characteristics indicated by the feature vector.
- According to this configuration, the feature vector indicating the image feature amounts of the detection target area is generated from the representative values of the area divided into a plurality of blocks, and is input to two or more identification means that identify with different criteria. Since the detection target area is thus identified based on at least two different criteria, the face image detection program of the present invention can detect the object in the detection target area with high reliability regardless of the object's orientation.
- The face image detection method of the present invention detects whether or not a face image exists in a detection target image for which it is not known whether a face image is included.
- The method comprises the steps of: selecting a predetermined area in the detection target image as a detection target area; normalizing the image in the detection target area to a predetermined size; dividing the detection target area into a plurality of blocks and calculating a representative value of the image feature amount contained in each block; generating from those representative values a feature vector indicating the features of the image in the detection target area; and identifying the feature vector with at least two classifiers having different identification criteria.
- According to this configuration, the feature vector indicating the image feature amounts of the detection target area is generated from the representative values of the area divided into a plurality of blocks, and is input to two or more classifiers that identify with different criteria. Since the detection target area is thus identified based on at least two different criteria, the face image detection method of the present invention can detect the object in the detection target area with high reliability regardless of the object's orientation.
- FIG. 1 is a block diagram showing an embodiment of a face image detecting apparatus according to the present invention.
- FIG. 2 is a diagram illustrating a hardware configuration of the face image detection apparatus.
- FIG. 3 is a schematic configuration diagram showing the configuration of the SVM in the present embodiment.
- FIG. 4 is a diagram for explaining a learning method for SVM.
- FIG. 5 is a flowchart showing an example of a face image detection method for an image to be searched.
- FIG. 6 is a flowchart showing the generation of image feature vectors.
- FIG. 7 is a diagram illustrating the shapes of the Sobel filters.
- FIG. 8 is a diagram showing the detection target area.
Best Mode for Carrying Out the Invention
- FIG. 1 is a block diagram showing a configuration of an embodiment of a face image detection apparatus 1 according to the present invention.
- This face image detection device 1 comprises: image reading means 10 for reading the learning image 80, which is a sample image for learning, and the detection target image 90; feature vector generation means 30 for generating a feature vector of the image divided into a plurality of blocks; a plurality of SVMs (support vector machines) 50; and an identification means selection unit 40 that selects the SVM 50 to be used for identification from among the plurality of SVMs 50.
- The image reading means 10 is, for example, a digital still camera or digital video camera equipped with a CCD (Charge Coupled Device), a vidicon camera, an image scanner, or the like. It reads a predetermined area of the detection target image 90 as well as a plurality of face images and non-face images as the learning image 80, performs A/D conversion on the read data, and sends the result to the feature vector generation means 30.
- The feature vector generation means 30 includes: a normalization unit 31 that normalizes the image read by the image reading means 10 to a predetermined size; an image characteristic calculation unit 32 that converts the image characteristics of the normalized image into numerical values; and an average/variance calculation unit 38 that divides the pixel area into a plurality of blocks of a predetermined size and calculates the average value or variance value of the numerical values for each block.
- The image characteristic calculation unit 32 includes a luminance calculation unit 34 that calculates the luminance in the image and an edge strength calculation unit 36 that calculates the strength of edges in the image.
- Image feature vectors indicating the image characteristics of the detection target image 90 and the learning image 80 are generated by these functional units and sequentially sent to the identification means selection unit 40.
- The normalization unit 31 normalizes the sizes of the detection target image 90 and the learning image 80 to a predetermined size (for example, 24×24 pixels). As the normalization method, the bilinear method or the bicubic method, which are interpolation methods between image pixels, can be employed. Details of the other functions will be described later.
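The bilinear option mentioned above can be sketched in a few lines. This is an illustrative implementation under the usual definition of bilinear interpolation, not code from the patent; the function name and NumPy usage are assumptions.

```python
import numpy as np

def bilinear_resize(image, out_h, out_w):
    """Resize a 2-D grayscale image with bilinear interpolation, one
    common choice for normalizing a region to the reference size."""
    in_h, in_w = image.shape
    # Sample positions in the source image for each output pixel.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Gather the four neighbours and blend horizontally, then vertically.
    a = image[np.ix_(y0, x0)]; b = image[np.ix_(y0, x1)]
    c = image[np.ix_(y1, x0)]; d = image[np.ix_(y1, x1)]
    top = a * (1 - wx) + b * wx
    bot = c * (1 - wx) + d * wx
    return top * (1 - wy) + bot * wy

# Normalize an arbitrary region to the 24x24 reference size.
region = np.random.rand(60, 48)
print(bilinear_resize(region, 24, 24).shape)  # (24, 24)
```

Bicubic interpolation differs only in using a 4×4 neighbourhood with cubic weights; the overall normalization step is otherwise the same.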
- The identification means selection unit 40 selects an appropriate SVM 50 from the plurality of SVMs 50 according to the statistical properties of the image characteristics indicated by the generated image feature vector. The process for selecting the SVM 50 will be described later.
- The SVM 50 learns a large number of face images and non-face images from the learning image 80 and, based on the learning result, identifies whether or not a predetermined area of the detection target image 90, represented by the feature vector generated by the feature vector generation means 30, is a face image.
- The SVM 50 used in the present invention will now be described in detail.
- The SVM was proposed by V. Vapnik of AT&T in 1995 within the framework of statistical learning theory.
- The SVM 50 is a learning machine that can find the optimal hyperplane for linearly separating all input data of two classes using the margin as an indicator, and is known to be one of the learning models with the highest generalization ability. Even when linear separation is impossible, it is known to demonstrate high discrimination ability by using a technique called the kernel trick.
- The SVM 50 used in the present embodiment operates in the following two steps.
- First, in the learning step, a learning image 80 comprising a number of face images and non-face images as learning samples is read by the image reading means 10, and the feature vector of the normalized learning image 80 is generated by the feature vector generation means 30 and learned as an image feature vector.
- Next, in the identification step, predetermined regions of the detection target image 90 are sequentially read, normalized, and their image feature vectors are generated by the feature vector generation means 30 and input as feature vectors. It is then determined on which side of the identification hyperplane each input image feature vector falls, and based on this result it is determined whether or not the region is likely to be a face image.
- The SVM 50 will be explained in more detail based on the description in "Statistics of Pattern Recognition and Learning" (Iwanami Shoten; Hideki Aso, Koji Tsuda, Noboru Murata), pp. 107-118. When the object to be identified is nonlinear, the SVM 50 can use a nonlinear kernel function, and the identification function in this case is the standard kernel SVM discriminant:
- f(x) = Σ_i α_i y_i K(x, x_i) + b, where α_i are the learned coefficients, y_i the class labels of the support vectors, and b a bias term.
- When the value of the identification function is "0", the point lies on the identification hyperplane; otherwise, the value gives the distance from the identification hyperplane calculated from the given image feature vector. If the result of the identification function is non-negative, the region is a face image; if it is negative, it is a non-face image.
- Here, x is a feature vector, x_i is a support vector, and K is a kernel function, for which a polynomial or Gaussian function is generally used.
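The discriminant described above can be sketched directly. The model below is a tiny hypothetical one (two hand-picked support vectors, coefficients folding α_i·y_i together); only the general form of the kernel decision function comes from the text.

```python
import numpy as np

def svm_decision(x, support_vectors, coeffs, bias, kernel):
    """Kernel SVM discriminant: f(x) = sum_i coeffs_i * K(x, sv_i) + bias.
    A non-negative value is classified as 'face'; the magnitude acts as a
    distance-like margin from the separating hyperplane."""
    return sum(a * kernel(x, sv) for a, sv in zip(coeffs, support_vectors)) + bias

linear = lambda u, v: float(np.dot(u, v))
rbf = lambda u, v, gamma=0.5: float(np.exp(-gamma * np.sum((u - v) ** 2)))

# Tiny hypothetical model: two support vectors with opposite labels.
svs = [np.array([1.0, 1.0]), np.array([-1.0, -1.0])]
coeffs = [1.0, -1.0]  # each entry is alpha_i * y_i
x = np.array([0.8, 0.9])
print(svm_decision(x, svs, coeffs, 0.0, linear) >= 0)  # True: 'face' side
```

Swapping `linear` for `rbf` is exactly the kernel-trick substitution the embodiment uses to trade speed for accuracy.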
- In the present embodiment, the high-speed SVM (50A in FIG. 3), which performs identification processing at high speed, uses a linear kernel function, while the high-performance SVM (50B) uses the RBF function, which is a nonlinear kernel function. The identification characteristics of the plurality of SVMs 50 in this embodiment will be described later.
- FIG. 2 is a diagram showing a hardware configuration of the face image detection apparatus 1.
- The hardware of the face image detection device 1 comprises: a CPU (Central Processing Unit) 60 responsible for various controls and arithmetic processing; a RAM (Random Access Memory) 64 used as main storage; a ROM (Read Only Memory) 62 as a read-only storage device; auxiliary (secondary) storage 66 such as a hard disk drive (HDD) or semiconductor memory; an output device 72 such as a monitor (an LCD (liquid crystal display) or CRT (cathode ray tube)); an input device 74 such as an image scanner, keyboard, mouse, or an image sensor such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor; and an input/output interface (I/F) 68.
- These components are connected by various buses 70, such as a processor bus, a memory bus, a system bus, and input/output buses including PCI (Peripheral Component Interconnect), ISA (Industry Standard Architecture), and USB (Universal Serial Bus) buses.
- The CPU 60 uses these various resources to perform predetermined control and arithmetic processing.
- The image reading means 10, feature vector generation means 30, identification means selection unit 40, SVMs 50, and the like constituting the face image detection device 1 are actually realized by the CPU 60 executing a program using these hardware resources.
- FIG. 3 is a schematic configuration diagram showing the configuration of the SVM 50 in the present embodiment.
- The plurality of SVMs 50 constituting the face image detection device 1 can be roughly classified into two types by identification characteristics: the high-speed SVM 50A, whose generalization (estimation) performance is not necessarily high but which is capable of high-speed identification processing, and the high-performance SVM 50B, which requires more processing time than the high-speed SVM 50A but has high generalization performance and can perform identification processing with high accuracy.
- Each type further includes luminance-corresponding SVMs (50E, 50G) that handle image feature vectors generated from the luminance value at each pixel of the image, and edge-strength-corresponding SVMs (50F, 50H) that handle image feature vectors generated from the edge strength of the image.
- The high-performance SVM 50B can further be divided into dedicated SVMs 50 according to the statistical properties of the image features. For a human face, for example, one is the Japanese face SVM 50C, which has learned Japanese faces as sample images, and the other is the Western face SVM 50D, which has learned Western faces as sample images.
- The statistical properties of such image features can be obtained as follows: the Euclidean distance between a learning image and the average face image of each group is computed, and the learning image is determined to belong to the group with the minimum distance.
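The nearest-mean assignment above can be sketched as follows. The group names and 3-D mean vectors are made-up illustrations; only the minimum-Euclidean-distance rule comes from the text.

```python
import numpy as np

def assign_group(feature, group_means):
    """Assign a feature vector to the group whose mean ('average face')
    is nearest in Euclidean distance."""
    names = list(group_means)
    dists = [np.linalg.norm(feature - group_means[n]) for n in names]
    return names[int(np.argmin(dists))]

# Hypothetical 3-D mean feature vectors for two groups.
means = {"japanese": np.array([0.2, 0.5, 0.1]),
         "western": np.array([0.7, 0.1, 0.6])}
print(assign_group(np.array([0.25, 0.45, 0.15]), means))  # japanese
```

The same rule can route a detection-time candidate to the dedicated SVM (50C or 50D) trained on the statistically closest group.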
- The SVMs 50 described above are appropriately selected by the identification means selection unit 40 according to the stage and hierarchy of each process.
- The identification means selection unit 40 is not limited to selecting only one SVM 50; it can perform multistage identification processing by selecting SVMs 50 in multiple stages. The following combinations can be applied as the multistage selection patterns.
- (1) In the "serial pattern of high-speed processing and high-precision processing", the high-speed SVM 50A is selected as the first stage of face image selection and identifies with coarse accuracy. The high-performance SVM 50B is then selected as the subsequent stage and identifies, with high accuracy, the images passed by the high-speed SVM 50A.
- (2) In the "serial pattern of processing with different image characteristics", for example, the luminance-corresponding SVM 50E is selected as the first-stage processing within the high-performance SVM 50B and identifies face images. The edge-strength-corresponding SVM 50F is then selected as the subsequent stage for the images so identified, and performs identification with higher accuracy on them.
- FIG. 4 is a diagram illustrating a learning method for the SVMs 50.
- Here, the combination of SVMs 50 follows (2), the "serial pattern of processing with different image characteristics": the edge-strength-corresponding SVM 50F is applied as the preceding process, and the luminance-corresponding SVM 50E as the subsequent process.
- The explanation is limited to the case of the Japanese face SVM 50C, which learns using Japanese faces as sample images; the Western face SVM 50D, which learns Western faces as sample images, can be trained in the same way.
- First, the edge strength calculation unit 36 calculates the edge strength of the normalized image and generates a feature vector having the calculated edge strength as the image characteristic. This feature vector is input to the edge-strength-corresponding SVM 50F, which is trained on it.
- Next, the non-face images 83 used in training the edge-strength-corresponding SVM 50F are divided into a correctly recognized group 85B and a misrecognized group 85A. The misrecognized non-face images 83 of group 85A are then used as the learning images 80B for the subsequent-stage SVM. The face images 81 used in training the subsequent stage are the same face images 81 used in training the preceding stage.
- Next, training is performed on the luminance-corresponding SVM 50E within the Japanese face SVM 50C. That is, a face image 81 or a non-face image 83 is selected from the learning images 80B for the subsequent-stage SVM, and the selected image is normalized by the normalization unit 31 of the feature vector generation means 30. The luminance calculation unit 34 then calculates the luminance of the normalized image and generates a feature vector having the calculated luminance as the image characteristic. This feature vector is input to the luminance-corresponding SVM 50E, which is trained on it.
- By training on the images misrecognized by the preceding stage, the estimation capability of the luminance-corresponding SVM 50E is improved.
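The training procedure above, sometimes called hard-negative mining, can be sketched abstractly. The function, the toy "models" (scalar thresholds), and the sample values are all illustrative assumptions; only the flow (train front stage, collect its false positives, train the rear stage on them plus the same faces) comes from the text.

```python
def train_cascade(front_train, back_train, faces, non_faces, front_predict):
    """Train the front-stage SVM on all samples, then train the back-stage
    SVM only on the faces plus the non-faces the front stage misrecognized
    as faces (its false positives)."""
    front = front_train(faces, non_faces)
    hard_negatives = [x for x in non_faces if front_predict(front, x)]
    back = back_train(faces, hard_negatives)
    return front, back, hard_negatives

# Toy stand-ins: a 'trained model' is just a threshold on a scalar score.
front_train = lambda f, n: 0.4
back_train = lambda f, n: 0.6
front_predict = lambda model, x: x >= model  # True = judged 'face'
faces = [0.7, 0.8]
non_faces = [0.1, 0.5, 0.45]
_, _, hard = train_cascade(front_train, back_train, faces, non_faces, front_predict)
print(hard)  # the non-faces the front stage got wrong
```

Concentrating the rear stage's capacity on exactly the cases the front stage cannot handle is what yields the "highly reliable learning effect" claimed in the effects section.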
- FIG. 5 is a flowchart showing an example of a face image detection method for an image to be actually searched.
- Before being used for identification as described above, the SVMs 50 must go through the learning step in which the face images and non-face images serving as learning samples are learned. In this learning, a feature vector is generated for each sample face image and non-face image and is input together with the information of whether it comes from a face image or a non-face image.
- The learning images used for learning are images that have been subjected to the same processing as the regions of the actual detection target image. Since the image regions to be identified in the present invention are dimensionally compressed, identification can be performed faster and more accurately by using images compressed in advance to the same dimension.
- The identification process by the SVMs 50 first identifies with (1), the "serial pattern of high-speed processing and high-precision processing", and then performs the processing of (3), the "parallel pattern using the statistical properties of image features". Within this parallel pattern, (2), the "serial pattern of processing with different image characteristics", is combined to perform composite identification processing.
- When use of the face image detection device 1 starts, a process of inputting a detection target image is first executed. That is, the face image detection device 1 reads the detection target image 90, which is the target of face detection, from the input device 74 with the image reading means 10 (step S100).
- Next, the face image detection device 1 sets a detection target area in the detection target image. The method for determining the detection target area is not particularly limited: an area obtained by other face image identification means can be used as it is, or an area arbitrarily specified in the detection target image by a user of this system may be adopted.
- In principle, however, in most cases it is not known whether a face image is included in the detection target image, nor, if so, where it is included. It is therefore desirable, for example, to start from an area at the upper left corner of the detection target image and search all areas while sequentially shifting by a certain number of pixels in the horizontal and vertical directions. The size of the area does not have to be constant and may be changed as appropriate during the search (step S102).
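The exhaustive scan described above can be sketched as a sliding-window generator. The function names and the concrete sizes are illustrative assumptions; only the shift-by-a-fixed-step, multiple-window-size strategy comes from the text.

```python
def sliding_windows(img_h, img_w, win, step):
    """Enumerate top-left corners of all win x win regions, shifting by
    `step` pixels horizontally and vertically from the upper-left corner."""
    for y in range(0, img_h - win + 1, step):
        for x in range(0, img_w - win + 1, step):
            yield (y, x)

def multi_scale_windows(img_h, img_w, sizes, step):
    """The window size need not be constant: repeat the scan per size."""
    for win in sizes:
        for pos in sliding_windows(img_h, img_w, win, step):
            yield (win, pos)

# A 64x64 image scanned with 32-pixel windows every 16 pixels: 3x3 regions.
print(sum(1 for _ in sliding_windows(64, 64, 32, 16)))  # 9
```

Each yielded region would then be normalized to the 24×24 reference size and passed down the identification cascade.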
- Next, the face image detection device 1 normalizes (resizes) the size of the first detection target area to a predetermined size, for example 24×24 pixels, with the normalization unit 31. That is, as a general rule, not only is it unknown whether a face image is included in the detection target image, but the size of any face image is also unknown, so the number of pixels can differ significantly depending on the size of the face in the selected area. The selected area is therefore normalized to the reference size of 24×24 pixels (step S104).
- Next, the face image detection device 1 uses the feature vector generation means to generate an image feature vector from the edge strength of the image, which is one of the image characteristics. The generation of this feature vector will be described later (step S106).
- Next, the face image detection device 1 selects the high-speed SVM 50A with the identification means selection unit 40 and inputs the image feature vector to determine whether or not a face image exists in the first detection target area (step S108).
- In step S110, if it is not determined that a face image exists (No in step S110), the process proceeds to the step of determining that the image in the first detection target area is a non-face image (step S126). If it is determined that a face image exists (Yes in step S110), the process proceeds to the next step (step S112).
- Next, the face image detection device 1 uses the identification means selection unit 40 to select one of the dedicated high-performance SVMs (50C, 50D) based on the statistical properties of the face image feature amounts (step S112). Since the subsequent steps are the same regardless of which one is selected, it is assumed for convenience that the Japanese face SVM 50C, trained on Japanese faces, is selected.
- Next, the face image detection device 1 inputs the image feature vector generated from the edge strength of the image to the edge-strength-corresponding SVM 50F to determine whether or not a face image exists in the first detection target region (step S114).
- In step S116, if it is not determined that a face image exists (No in step S116), the process proceeds to the step of determining that the image in the first detection target area is a non-face image (step S126). If it is determined that a face image exists (Yes in step S116), the process proceeds to the next step (step S118).
- Next, the face image detection device 1 generates an image feature vector using the luminance of the image, which is the other image characteristic, with the feature vector generation means. The generation of this image feature vector will be described later (step S118).
- Next, the face image detection device 1 inputs the image feature vector generated from the luminance of the image to the luminance-corresponding SVM 50E to determine whether a face image exists in the detection target area (step S120).
- In step S122, if it is not determined that a face image exists (No in step S122), the process proceeds to the step of determining that the image in the first detection target region is a non-face image (step S126). Otherwise, the process proceeds to the step of determining that it is a face image (step S124).
- Each determination result is displayed as soon as the determination is made, or, together with the other determination results, the identification result 95 is output to the user of the face image detection device 1 via the output device 72, and the process proceeds to the next step (step S128).
- Next, the face image detection device 1 determines whether or not detection has been completed in all the set detection target areas (step S128). If not (No in step S128), the process returns in step S130 to the step in which the normalization unit 31 normalizes the size of the detection target area to the predetermined size (step S104). If it is determined that detection in all the detection target areas of the set size is completed (Yes in step S128), the face image detection process ends.
- Next, the generation of the image feature vector will be described. First, a step of determining whether to adopt edge strength or pixel luminance as the image characteristic is executed (step S140).
- If it is determined that edge strength is adopted as the image characteristic (Yes in step S140), the face image detection device 1 calculates, with the edge strength calculation unit 36, the edge strength of the image of the detection target area as the image feature amount (step S142). Thereafter, the process proceeds to the next step (step S146).
- For the edge strength calculation, the "Sobel operator", one of the differential edge detection operators shown in FIG. 7, can be applied. The operator (filter) shown in FIG. 7(a) weights the three pixel values located in the left and right columns of the eight pixels surrounding the pixel of interest at the second row and second column, thereby emphasizing vertical edges. The operator shown in FIG. 7(b) weights the three pixel values located in the upper and lower rows of the eight surrounding pixels, thereby emphasizing horizontal edges. The two edge responses are squared, summed, and the square root is taken to obtain the edge strength at each pixel. It is known that image feature vectors generated from the edge strength, or its variance value, at each pixel enable detection with high accuracy.
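The Sobel edge-strength computation just described can be sketched directly (a naive loop version for clarity; the function name and test image are illustrative).

```python
import numpy as np

# Sobel differential edge-detection operators (cf. FIG. 7):
# SOBEL_X emphasizes vertical edges, SOBEL_Y horizontal edges.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def edge_strength(image):
    """Edge strength at each interior pixel: square the two Sobel
    responses, sum them, and take the square root."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            patch = image[y:y + 3, x:x + 3]
            gx = np.sum(patch * SOBEL_X)
            gy = np.sum(patch * SOBEL_Y)
            out[y, x] = np.sqrt(gx * gx + gy * gy)
    return out

# A vertical step edge yields strong responses next to the boundary.
img = np.zeros((5, 5)); img[:, 3:] = 1.0
print(edge_strength(img))
```

The per-pixel strengths would then be block-averaged (or their variance taken) to form the edge-strength feature vector.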
- If, on the other hand, luminance is adopted as the image characteristic (No in step S140), the face image detection device 1 calculates, with the luminance calculation unit 34, the luminance of the pixels in the detection target area as the image feature amount (step S144). Thereafter, the process proceeds to the next step (step S146).
- the face image detection apparatus 1 divides the detection target area into a plurality of blocks.
- FIG. 8 is a diagram showing the block area, and the normalized pixel 9 2 in the detection target area 9 OA is divided by the block 9 4 of 4 ⁇ 3 pixels (pixels) (steps). S 1 4 6).
- Image detection-Outfit is the average' Variance calculation part. 3:: 8. (Step S 1 4 8). Here, whether the average value or the variance value is adopted as the representative value is determined in advance before the image feature vector generation process is started.
- step S 1 4 8 If it is determined that the average value is to be adopted according to the above decision (Yes in step S 1 4 8), the average value of the image feature values in each block is calculated (step S 1 5 0), and the image The feature vector generation process ends.
- If it is determined that the variance value is to be adopted (No in step S148), the variance value of the image feature amounts in each block is calculated (step S152), and the image feature vector generation process ends.
- The embodiment described above has the following effects. (1) In the learning of the SVMs 50 connected in series, the learning images used for the rear-stage SVM 50E include the non-face images 83 that were misrecognized by the front-stage SVM 50F. This makes effective learning possible in the latter stage and yields a highly reliable learning effect in which a previously misrecognized non-face image 83 is not misrecognized again.
- (2) Since the front-stage SVM and the rear-stage SVM use different image feature amounts, face images can be identified reliably.
- (3) By using a dedicated SVM 50 suited to the statistical properties of the images, the classification hyperplane is simplified and the identification performance can be improved.
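The serial arrangement behind effects (1) and (2) can be sketched in plain Python with the classifiers as plug-in callables. All names here are illustrative; in the embodiment the stages would be trained SVMs, each paired with its own image feature amount:

```python
def rear_stage_training_set(front_clf, faces, non_faces):
    """Effect (1): the rear stage is trained on the non-face samples that
    the front stage misrecognized as faces (the hard negatives)."""
    hard_negatives = [x for x in non_faces if front_clf(x)]
    return faces, hard_negatives

def cascade_detect(region, stages):
    """Effect (2): each stage pairs its own feature extractor with its own
    classifier; a region is accepted as a face only if every stage accepts
    it, and an early rejection skips the remaining stages."""
    for feature_fn, clf in stages:
        if not clf(feature_fn(region)):
            return False
    return True
```

With toy thresholds, a region passing both an edge-based and a luminance-based stage is accepted; failing either stage rejects it without evaluating the rest.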
- In the embodiment described above, the detection target image is an image of the entire human face, but the present invention is not limited to this. It can also be applied to a specific part of a human face or body, and to face recognition of animals other than humans.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP05768665A EP1775683A1 (en) | 2004-08-04 | 2005-07-28 | Object image detection device, face image detection program, and face image detection method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004227567A JP2006048322A (ja) | 2004-08-04 | 2004-08-04 | Object image detection device, face image detection program, and face image detection method |
JP2004-227567 | 2004-08-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006013913A1 true WO2006013913A1 (ja) | 2006-02-09 |
Family
ID=35757456
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/014271 WO2006013913A1 (ja) | Object image detection device, face image detection program, and face image detection method |
Country Status (6)
Country | Link |
---|---|
US (1) | US20060029276A1 (ja) |
EP (1) | EP1775683A1 (ja) |
JP (1) | JP2006048322A (ja) |
CN (1) | CN1973300A (ja) |
TW (1) | TW200609849A (ja) |
WO (1) | WO2006013913A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8705886B2 (en) | 2006-09-25 | 2014-04-22 | Samsung Electronics Co., Ltd. | System, medium, and method compensating brightness of an image |
Families Citing this family (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7599527B2 (en) * | 2005-09-28 | 2009-10-06 | Facedouble, Inc. | Digital image search system and method |
US7587070B2 (en) * | 2005-09-28 | 2009-09-08 | Facedouble, Inc. | Image classification and information retrieval over wireless digital networks and the internet |
US8600174B2 (en) | 2005-09-28 | 2013-12-03 | Facedouble, Inc. | Method and system for attaching a metatag to a digital image |
US8311294B2 (en) | 2009-09-08 | 2012-11-13 | Facedouble, Inc. | Image classification and information retrieval over wireless digital networks and the internet |
US20070121094A1 (en) * | 2005-11-30 | 2007-05-31 | Eastman Kodak Company | Detecting objects of interest in digital images |
JP4811651B2 (ja) * | 2006-03-30 | 2011-11-09 | National Institute of Advanced Industrial Science and Technology | Wheelchair user detection system using a stereo camera |
KR100842258B1 (ko) | 2006-10-26 | 2008-06-30 | 한국전자통신연구원 | 얼굴 영상의 위조 여부를 판별하는 방법 및 그 장치 |
US8249359B2 (en) * | 2007-04-13 | 2012-08-21 | Panasonic Corporation | Detector for detecting a predetermined image in an input image, and detection method and integrated circuit for performing the same |
TWI399704B (zh) * | 2007-12-31 | 2013-06-21 | Hon Hai Prec Ind Co Ltd | 影像雜質分析系統及方法 |
JP2009199232A (ja) * | 2008-02-20 | 2009-09-03 | Seiko Epson Corp | Image processing device |
US8265339B2 (en) * | 2008-04-25 | 2012-09-11 | Panasonic Corporation | Image processing device, image processing method, and integrated circuit for processing images |
JP4663756B2 (ja) * | 2008-04-28 | 2011-04-06 | Hitachi, Ltd. | Abnormal behavior detection device |
US7890512B2 (en) * | 2008-06-11 | 2011-02-15 | Microsoft Corporation | Automatic image annotation using semantic distance learning |
JP2010186288A (ja) * | 2009-02-12 | 2010-08-26 | Seiko Epson Corp | Image processing for changing a predetermined texture feature amount of a face image |
JP5472976B2 (ja) * | 2009-08-18 | 2014-04-16 | NEC Soft, Ltd. | Object detection device, object detection method, program, and recording medium |
JP5381498B2 (ja) * | 2009-08-24 | 2014-01-08 | Nikon Corp | Image processing device, image processing program, and image processing method |
CN102713974B (zh) | 2010-01-06 | 2015-09-30 | NEC Corp | Learning device, identification device, learning-identification system, and learning-identification device |
JP2011141809A (ja) * | 2010-01-08 | 2011-07-21 | Sumitomo Electric Ind Ltd | Image data analysis device and image data analysis method |
US9053384B2 (en) | 2011-01-20 | 2015-06-09 | Panasonic Intellectual Property Management Co., Ltd. | Feature extraction unit, feature extraction method, feature extraction program, and image processing device |
JP5668932B2 (ja) * | 2011-05-23 | 2015-02-12 | Morpho, Inc. | Image identification device, image identification method, image identification program, and recording medium |
JP5849558B2 (ja) * | 2011-09-15 | 2016-01-27 | Omron Corp | Image processing device, image processing method, control program, and recording medium |
KR101289087B1 (ko) * | 2011-11-03 | 2013-08-07 | Intel Corporation | Face detection method and apparatus, and computer-readable recording medium for executing the method |
KR101877981B1 (ko) * | 2011-12-21 | 2018-07-12 | Electronics and Telecommunications Research Institute | System and method for recognizing forged/altered faces using Gabor features and an SVM classifier |
US11321772B2 (en) * | 2012-01-12 | 2022-05-03 | Kofax, Inc. | Systems and methods for identification document processing and business workflow integration |
US9165187B2 (en) | 2012-01-12 | 2015-10-20 | Kofax, Inc. | Systems and methods for mobile image capture and processing |
US9213892B2 (en) * | 2012-12-21 | 2015-12-15 | Honda Motor Co., Ltd. | Real-time bicyclist detection with synthetic training data |
US9230383B2 (en) * | 2012-12-28 | 2016-01-05 | Konica Minolta Laboratory U.S.A., Inc. | Document image compression method and its application in document authentication |
US10783615B2 (en) | 2013-03-13 | 2020-09-22 | Kofax, Inc. | Content-based object detection, 3D reconstruction, and data extraction from digital images |
US10127636B2 (en) | 2013-09-27 | 2018-11-13 | Kofax, Inc. | Content-based detection and three dimensional geometric reconstruction of objects in image and video data |
US11620733B2 (en) * | 2013-03-13 | 2023-04-04 | Kofax, Inc. | Content-based object detection, 3D reconstruction, and data extraction from digital images |
US10013078B2 (en) * | 2014-04-11 | 2018-07-03 | Pixart Imaging Inc. | Optical navigation device and failure identification method thereof |
JP6361387B2 (ja) * | 2014-09-05 | 2018-07-25 | Omron Corp | Identification device and control method for identification device |
CN104463136B (zh) * | 2014-12-19 | 2019-03-29 | ThunderSoft Co., Ltd. | Character image recognition method and device |
US10242285B2 (en) | 2015-07-20 | 2019-03-26 | Kofax, Inc. | Iterative recognition-guided thresholding and data extraction |
US10467465B2 (en) | 2015-07-20 | 2019-11-05 | Kofax, Inc. | Range and/or polarity-based thresholding for improved data extraction |
EP3422254B1 (en) | 2017-06-29 | 2023-06-14 | Samsung Electronics Co., Ltd. | Method and apparatus for separating text and figures in document images |
JP6907774B2 (ja) * | 2017-07-14 | 2021-07-21 | Omron Corp | Object detection device, object detection method, and program |
CN107909011B (zh) * | 2017-10-30 | 2021-08-24 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Face recognition method and related products |
US10803350B2 (en) | 2017-11-30 | 2020-10-13 | Kofax, Inc. | Object detection and image cropping using a multi-detector approach |
KR102093208B1 (ko) * | 2018-11-22 | 2020-03-26 | Dongguk University Industry-Academic Cooperation Foundation | Person recognition device based on pixel analysis and operation method thereof |
CN112101360B (zh) * | 2020-11-17 | 2021-04-27 | Zhejiang Dahua Technology Co., Ltd. | Target detection method and device, and computer-readable storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0530355A (ja) * | 1991-03-07 | 1993-02-05 | Fuji Photo Film Co Ltd | Method for determining image points within a subject image |
JPH05205057A (ja) * | 1992-01-25 | 1993-08-13 | Mitsubishi Kasei Corp | Pattern recognition device |
JP2001216515A (ja) * | 2000-02-01 | 2001-08-10 | Matsushita Electric Ind Co Ltd | Method and device for detecting a human face |
JP2003271933A (ja) * | 2002-03-18 | 2003-09-26 | Sony Corp | Face detection device, face detection method, and robot device |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6181805B1 (en) * | 1993-08-11 | 2001-01-30 | Nippon Telegraph & Telephone Corporation | Object image detecting method and system |
US6157921A (en) * | 1998-05-01 | 2000-12-05 | Barnhill Technologies, Llc | Enhancing knowledge discovery using support vector machines in a distributed network environment |
US6501857B1 (en) * | 1999-07-20 | 2002-12-31 | Craig Gotsman | Method and system for detecting and classifying objects in an image |
US6618490B1 (en) * | 1999-09-16 | 2003-09-09 | Hewlett-Packard Development Company, L.P. | Method for efficiently registering object models in images via dynamic ordering of features |
US6795567B1 (en) * | 1999-09-16 | 2004-09-21 | Hewlett-Packard Development Company, L.P. | Method for efficiently tracking object models in video sequences via dynamic ordering of features |
US20020172419A1 (en) * | 2001-05-15 | 2002-11-21 | Qian Lin | Image enhancement using face detection |
US7130446B2 (en) * | 2001-12-03 | 2006-10-31 | Microsoft Corporation | Automatic detection and tracking of multiple individuals using multiple cues |
EP3196805A3 (en) * | 2003-06-12 | 2017-11-01 | Honda Motor Co., Ltd. | Target orientation estimation using depth sensing |
2004
- 2004-08-04 JP JP2004227567A patent/JP2006048322A/ja active Pending
2005
- 2005-07-22 TW TW094124950A patent/TW200609849A/zh unknown
- 2005-07-28 CN CNA2005800206164A patent/CN1973300A/zh active Pending
- 2005-07-28 WO PCT/JP2005/014271 patent/WO2006013913A1/ja active Application Filing
- 2005-07-28 EP EP05768665A patent/EP1775683A1/en not_active Withdrawn
- 2005-08-03 US US11/197,671 patent/US20060029276A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
CN1973300A (zh) | 2007-05-30 |
TW200609849A (en) | 2006-03-16 |
US20060029276A1 (en) | 2006-02-09 |
JP2006048322A (ja) | 2006-02-16 |
EP1775683A1 (en) | 2007-04-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2006013913A1 (ja) | Object image detection device, face image detection program, and face image detection method | |
US7190829B2 (en) | Speedup of face detection in digital images | |
JP5406705B2 (ja) | Data correction device and method | |
US8594432B2 (en) | Image processing apparatus and image processing method | |
JP4933186B2 (ja) | Image processing device, image processing method, program, and storage medium | |
EP1596323B1 (en) | Specified object detection apparatus | |
US20090226047A1 (en) | Apparatus and Method of Processing Image and Human Face Detection System using the same | |
TWI254891B (en) | Face image detection method, face image detection system, and face image detection program | |
US20050084133A1 (en) | Object measuring apparatus, object measuring method, and program product | |
JP5517504B2 (ja) | Image processing device, image processing method, and program | |
US20110211233A1 (en) | Image processing device, image processing method and computer program | |
US20100074479A1 (en) | Hierarchical face recognition training method and hierarchical face recognition method thereof | |
WO2005038715A1 (ja) | Face image candidate area search method, face image candidate area search system, and face image candidate area search program | |
JP2011053953A (ja) | Image processing device and program | |
CN108256454B (zh) | Training method based on a CNN model, and face pose estimation method and device | |
US20090232400A1 (en) | Image evaluation apparatus, method, and program | |
US7403636B2 (en) | Method and apparatus for processing an image | |
WO2005041128A1 (ja) | Face image candidate area search method, face image candidate area search system, and face image candidate area search program | |
JP5100688B2 (ja) | Object detection device and program | |
JP5201184B2 (ja) | Image processing device and program | |
CN110909568A (zh) | Image detection method and apparatus for face recognition, electronic device, and medium | |
JP4749884B2 (ja) | Learning method for a face discrimination device, face discrimination method and device, and program | |
JP2006323779A (ja) | Image processing method and image processing device | |
JP2012243285A (ja) | Feature point position determination device, feature point position determination method, and program | |
CN111652080A (zh) | Target tracking method and device based on RGB-D images | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2005768665 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 200580020616.4 Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWP | Wipo information: published in national office |
Ref document number: 2005768665 Country of ref document: EP |