WO2013122009A1 - Reliability acquisition device, reliability acquisition method, and reliability acquisition program - Google Patents
Reliability acquisition device, reliability acquisition method, and reliability acquisition program
- Publication number
- WO2013122009A1 (PCT/JP2013/053105; JP2013053105W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- reliability
- information
- image
- feature point
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/167—Detection; Localisation; Normalisation using comparisons between temporally consecutive images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
Definitions
- the present invention relates to a reliability acquisition device, a reliability acquisition method, and a reliability acquisition program.
- a discriminator used for face detection is normally generated by preparing a large number of images obtained by cutting out face regions, together with images that do not include faces, and performing learning on them.
- however, it is difficult to prepare a learning image group that completely covers the information necessary for determining whether an image is a face.
- a discriminator constructed in this way is therefore accompanied by a certain degree of detection error.
- there are two types of such detection errors: cases where a face region cannot be determined to be a face and is overlooked (non-detection), and cases where a non-face region is erroneously determined to be a face (false detection).
- several techniques are known for reducing these detection errors.
- Patent Document 1 describes a method for determining whether the color of a region detected as a face is a skin color as a method for reducing erroneous detection of a face.
- Patent Document 2 describes a method of using a statistical model related to the texture and shape of a face as a method for reducing erroneous face detection.
- a parameter of a statistical model is adjusted, and a difference in luminance value between a face image generated from the model and an image cut out as a face region based on the face detection result is minimized.
- the face image generated from the model and the image cut out as the face area based on the face detection result are normalized with respect to the face shape.
- when the minimized luminance-value difference is greater than or equal to a predetermined threshold, the detection result is determined to be a false detection.
- this exploits the fact that a statistical model related to a face has poor expressiveness for images other than faces.
- Non-Patent Document 1 describes, as a method of reducing false detection of a face, learning a discriminator for face false detection with a support vector machine (SVM) and applying that discriminator to the region cut out based on the face detection result.
- specifically, an image feature amount extracted by the Gabor wavelet transform is learned by the SVM, and a discriminator that identifies whether or not the texture of the target region is a face is constructed.
- a difference related to a luminance image between a detected face area and a statistical model related to a face is used.
- however, the luminance values of a face region also change in environments where lighting, facial expressions, and the like vary in a complex manner, so it is difficult to fix a threshold on the luminance-image difference between the detected face region and the statistical model above which the result should be judged a false detection. The method of determining whether a region is a face based on the luminance-image difference value therefore has the problem that sufficient accuracy cannot be obtained.
- an image feature amount by Gabor wavelet transform is extracted from the entire detected face area, and the feature amount is used for determining whether or not the face is a face.
- when the face detector mistakenly identifies a non-face area such as the background as a face, the cause of the false detection is thought to be that the area contains textures that are likely to be a face.
- in such a case, the image feature amount extracted from the entire detected face area may still indicate that the target area is likely to be a face. The method of Non-Patent Document 1, which judges based on the image feature amount extracted from the entire detected face region, therefore also cannot provide sufficient accuracy, and there was a problem that it is difficult to reduce face detection errors.
- the present invention has been made to solve the above-described problems, and its purpose is to obtain, with high accuracy, a reliability for determining whether or not an input image is an image to be detected (for example, a face image), in order to reduce detection errors such as false detection.
- a reliability acquisition device according to the present invention includes: a discriminator storage unit that stores information on a discriminator that, when applied to an image, outputs a part-likeness related to a predetermined part to be detected; a part-likeness acquisition unit that obtains the part-likeness for an image region included in an input image based on the information of the discriminator; a part-position determination unit that determines the position of the predetermined part in the input image based on the obtained part-likeness; a difference acquisition unit that obtains difference information between a reference position of the predetermined part and the determined position; and a reliability acquisition unit that obtains a reliability indicating the possibility that the input image is a detection-target image based on the difference information.
- in the reliability acquisition method according to the present invention, with reference to a discriminator storage unit that stores information on a discriminator that outputs a part-likeness related to a predetermined part to be detected, the part-likeness is obtained for an image region included in an input image based on the information of the discriminator; the position of the predetermined part in the input image is determined based on the obtained part-likeness; with reference to a reference position information storage unit that stores information on a reference position of the predetermined part, difference information between the reference position of the predetermined part and the determined position of the predetermined part is obtained based on the information on the reference position; and a reliability indicating the possibility that the input image is the detection-target image is obtained based on the difference information.
- a program according to the present invention causes a computer to refer to a discriminator storage unit that stores information on a discriminator that, when applied to an image, outputs a part-likeness related to a predetermined part to be detected, and to realize the functions of obtaining the part-likeness, determining the position of the predetermined part, obtaining the difference information, and obtaining the reliability.
- in this specification, a "unit" does not simply mean a physical means, but also includes the case where the function of the "unit" is realized by software. In addition, the function of one "unit" or device may be realized by two or more physical means or devices, and the functions of two or more "units" or devices may be realized by one physical means or device.
- the reliability for determining whether or not the input image is a detection target image can be obtained with high accuracy.
- in the following embodiment, a case where the detection target is a face and the feature-point likeness (face feature point reliability) of feature points (face feature points) corresponding to face parts such as the eyes and nose is used as the part-likeness related to the face parts is taken as an example.
- FIG. 1 is a diagram illustrating a configuration example of a face area reliability calculation apparatus 1 according to an embodiment of the present invention.
- the face area reliability calculation device 1 includes a data processing device 100 and a storage device 200.
- the data processing apparatus 100 includes a face image input unit 110, a face feature point reliability generation unit 120, a face feature point position determination unit 130, a face feature point position difference calculation unit 140, and a face area reliability calculation unit 150.
- the storage device 200 includes a face feature point discriminator storage unit 210 and a face shape model storage unit 220.
- the face area reliability calculation device 1 may include a conventional face detector.
- the face feature point discriminator storage unit 210 stores, for each face feature point, information on a face feature point discriminator that outputs the likelihood of a face feature point when applied to an image.
- a facial feature point classifier can be generated using various conventional techniques.
- the face shape model storage unit 220 stores information on the face shape model that defines the reference position of the face feature point based on the statistical distribution related to the position (coordinates) of the face feature point as information on the reference position of the face part.
- as the face shape model, various models can be considered: for example, a model that specifies, as the reference position, the average coordinates of each of a plurality of face feature points; a model that defines a subspace obtained by principal component analysis of a vector X whose elements are the position coordinates of each face feature point, computed from many face images; a model that defines the reference positions of the face feature points by a parametric function; and, in an environment where the size of the input image and the shooting position of the face are fixed, a model that holds the average positions of the face feature points (positions in the image coordinate system).
- the face image input unit 110 acquires an input image to be processed.
- the face feature point reliability generation unit 120 obtains the face feature point likeness for the image region (each pixel) included in the input image based on the information of the face feature point classifier stored in the face feature point classifier storage unit 210, and generates a reliability image representing the distribution of the face feature point likeness.
- the face feature point position determination unit 130 determines the position of the face feature point (detection feature point) in the input image based on the reliability image generated by the face feature point reliability generation unit 120.
- the face feature point position difference calculation unit 140 obtains the reference position of each face feature point based on the face shape model stored in the face shape model storage unit 220, and obtains difference information between the reference position and the position of the detected feature point determined by the face feature point position determination unit 130.
- the face area reliability calculation unit 150 calculates a face area reliability indicating the possibility that the input image is a face image based on the difference information obtained by the face feature point position difference calculation unit 140.
- FIG. 2 is a flowchart showing the operation of the face area reliability calculation apparatus 1 shown in FIG.
- the face image input unit 110 acquires an input image to be processed (step S111).
- the face feature point reliability generation unit 120 determines the likelihood of the face feature point for the image region included in the input image based on the information of the face feature point classifier stored in the face feature point classifier storage unit 210. Then, a reliability image representing the distribution of the likelihood of facial feature points is generated (step S112).
- the face feature point position determination unit 130 determines the position of the face feature point (detected feature point) in the input image based on the reliability image generated in S112 (step S113).
- next, the face feature point position difference calculation unit 140 obtains the reference position of each face feature point based on the face shape model stored in the face shape model storage unit 220, and obtains difference information between the reference position and the position of the detected feature point determined in S113 (step S114).
- the face area reliability calculation unit 150 calculates the face area reliability indicating the possibility that the input image is a face image based on the difference information obtained in S114 (step S115).
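As a rough illustration of steps S112–S115, the pipeline might be sketched as follows. This is a minimal Python/NumPy sketch: the function names, the toy classifier interface, and the sigmoid parameters are illustrative assumptions rather than anything specified by the patent, and the Helmert alignment described later is omitted.

```python
import numpy as np

def generate_reliability_images(image, classifiers):
    # S112: one reliability map per face feature point; each "classifier" is
    # any callable returning a per-pixel likeness map the same shape as image.
    return [clf(image) for clf in classifiers]

def determine_feature_positions(reliability_images):
    # S113: take the (x, y) argmax position of each reliability map.
    return np.array([np.unravel_index(np.argmax(r), r.shape)[::-1]
                     for r in reliability_images], dtype=float)

def position_differences(detected, model_points):
    # S114: per-point Euclidean distance to the model reference positions
    # (alignment by the Helmert transform described later is omitted here).
    return np.linalg.norm(detected - model_points, axis=1)

def face_area_reliability(diffs, a=-1.0, b=2.0):
    # S115: median of a sigmoid of each difference (one assumed form of phi).
    return float(np.median(1.0 / (1.0 + np.exp(-(a * np.asarray(diffs) + b)))))
```

With well-placed feature points the differences are small and the resulting reliability is high; scattered or misplaced points drive it down.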
- as described above, the face area reliability calculation unit calculates the face area reliability based on whether the positions of the face feature points of parts such as the eyes, nose, and mouth are arranged like a face, so the face area reliability can be calculated with higher practical accuracy. By using such a face area reliability, even when the face detector mistakenly detects a non-face area such as the background as a face area, it is possible to accurately judge whether the detected face area is a true face.
- the data processing device 100 can be configured using an information processing device such as a personal computer or a portable information terminal.
- the storage device 200 (the face feature point discriminator storage unit 210 and the face shape model storage unit 220) can be configured by, for example, a semiconductor memory or a hard disk.
- the face image input unit 110, the face feature point reliability generation unit 120, the face feature point position determination unit 130, the face feature point position difference calculation unit 140, and the face area reliability calculation unit 150 are, for example, in the data processing apparatus 100. It can be realized by executing a program stored in a storage unit by a CPU (Central Processing Unit). Note that some or all of the units of the data processing apparatus 100 may be realized as hardware.
- the face image input unit 110 may include an imaging unit such as a digital camera or a scanner, and may include a communication module for acquiring an input image by communicating with an external device.
- the face image input unit 110 acquires a target input image for calculating the face area reliability.
- the acquired input image may be a face area image detected by a conventional face detector, or an image obtained by capturing a person with a digital camera or the like.
- FIG. 3 is a diagram illustrating an example of an input image.
- the input image may include a background.
- when the face area reliability calculation device 1 includes a face detector, a face area image can be cut out by performing face detection processing on an image captured by a digital camera or the like, and the cut-out image can be used as the input image.
- the face feature point reliability generation unit 120 applies the face feature point classifier stored in the face feature point classifier storage unit 210 to the input image acquired by the face image input unit 110, for example, For each pixel, the likelihood of facial feature points corresponding to each part of the face such as eyes and nose is obtained.
- FIG. 4 is a diagram showing an example of facial feature points.
- the facial feature points are indicated by crosses.
- 14 points on both ends of the left and right eyebrows, the center and both ends of the left and right eyes, the lower part of the nose, and both ends and the center of the mouth are used as face feature points.
- the face feature points are not limited to the example shown in FIG. 4, and may be one point other than 14 points, for example.
- the face feature point reliability generation unit 120 generates a reliability image having the face feature point likelihood as a pixel value for each face feature point.
- 14 reliability images are generated.
- as a method for calculating the face feature point likeness by applying a face feature point classifier, various conventionally proposed methods can be used.
- for example, a reliability image may be generated by applying, to the entire input image region, a classifier for each face feature point constructed using AdaBoost based on the Haar-like features of Viola and Jones.
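The per-pixel scoring just described can be sketched generically as follows. The `score_fn` argument is a stand-in for the per-feature-point classifier (a real implementation would use the Haar-like/AdaBoost classifier mentioned above); the window size and the whole interface are illustrative assumptions.

```python
import numpy as np

def reliability_image(image, score_fn, patch=3):
    # Slide a small window over the image and record, for each pixel, the
    # score returned by score_fn on the surrounding patch. score_fn plays the
    # role of a per-feature-point classifier and can be any callable mapping
    # a patch to a likeness value.
    h, w = image.shape
    r = patch // 2
    padded = np.pad(image, r, mode='edge')  # keep output the same size
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = score_fn(padded[y:y + patch, x:x + patch])
    return out
```

One such map is produced per face feature point, giving the 14 reliability images described above.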
- FIG. 5 is a diagram showing an example of a reliability image corresponding to the center portion of the right eye.
- in FIG. 5, higher face feature point likeness is shown in darker black.
- the figure shows that the face feature point likeness for the center of the right eye is large (i.e., the region seems likely to be the center of the right eye) not only near the center of the right eye but also near the center of the left eye, near the right eyebrow, and under the nose.
- the face feature point position determination unit 130 determines the position of the face feature point in the input image based on the reliability image generated by the face feature point reliability generation unit 120.
- the face feature point position determination unit 130 can determine, for example, the position at which the corresponding face feature point likeness is maximal in each reliability image generated by the face feature point reliability generation unit 120 as the position of that face feature point.
- alternatively, the centroid position of the face feature point likeness in the reliability image, rather than the position at which the likeness is maximal, may be determined as the face feature point position.
- FIG. 6 is a diagram showing a position where the likelihood of a facial feature point is maximized in a reliability image corresponding to the central portion of the right eye with a cross.
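Both readings of the position-determination step can be sketched as follows; the likeness-weighted centroid variant is our assumed interpretation of the alternative mentioned above.

```python
import numpy as np

def feature_point_position(reliability_image):
    # Position (x, y) at which the face feature point likeness is maximal.
    y, x = np.unravel_index(np.argmax(reliability_image),
                            reliability_image.shape)
    return int(x), int(y)

def feature_point_centroid(reliability_image):
    # Alternative reading: the likeness-weighted centroid of the map.
    ys, xs = np.indices(reliability_image.shape)
    total = reliability_image.sum()
    return (float((xs * reliability_image).sum() / total),
            float((ys * reliability_image).sum() / total))
```

For a sharply peaked map the two coincide; the centroid is less sensitive to single-pixel noise.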
- the face feature point position difference calculation unit 140 obtains, for each face feature point, difference information between the reference position of the face feature point determined from the face shape model stored in the face shape model storage unit 220 (the face shape model feature point) and the position of the face feature point in the input image (the detected feature point) determined by the face feature point position determination unit 130.
- the calculation of the difference information of the face feature point position is performed as follows, for example.
- the reference position of the face feature point is directly recorded in the face shape model storage unit 220 as the face shape model.
- two-dimensional coordinate values (28 values) are recorded for the 14 face feature points shown in FIG. 4 as the face shape model.
- the positions of the detected feature points determined by the face feature point position determination unit 130 are two-dimensional coordinate values (28 values) for the 14 face feature points shown in FIG.
- here, the coordinate transformation p is a Helmert transformation, i.e., a coordinate transformation consisting of translation in the x-axis direction, translation in the y-axis direction, rotation in the image plane, and scaling.
- the coordinate transformation p is specified by the four parameters (pa, pb, pc, pd) that define the Helmert transformation shown in Equation (1).
- t is a coordinate before conversion
- u is a coordinate after conversion.
- the parameters of the Helmert transformation p are obtained by the least squares method.
- that is, the transformation p that minimizes the square error (Equation (2)) between the face shape model feature points and the detected feature points t transformed by the coordinate transformation p gives the Helmert transformation parameters to be obtained.
- the coordinate transformation p that minimizes the square error represented by Equation (2) can be obtained analytically from Equation (3).
- n is the number of data points used in the least squares calculation.
- [z] denotes the average value of z.
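Equations (1)–(3) are not reproduced in this text. Assuming Equation (1) has the standard Helmert (similarity) form u = [[pa, −pb], [pb, pa]]·t + (pc, pd), the analytic least-squares solution has the well-known closed form sketched below; this is our reconstruction under that assumed form, not a verbatim transcription of Equation (3).

```python
import numpy as np

def fit_helmert(t, k):
    # Least-squares Helmert transform mapping points t -> k, assuming
    # u = [[pa, -pb], [pb, pa]] @ t + (pc, pd).
    # t, k: (n, 2) arrays of detected / model feature point coordinates.
    t_mean, k_mean = t.mean(axis=0), k.mean(axis=0)
    tc, kc = t - t_mean, k - k_mean          # centered coordinates
    denom = (tc ** 2).sum()
    pa = (tc[:, 0] * kc[:, 0] + tc[:, 1] * kc[:, 1]).sum() / denom
    pb = (tc[:, 0] * kc[:, 1] - tc[:, 1] * kc[:, 0]).sum() / denom
    pc, pd = k_mean - np.array([[pa, -pb], [pb, pa]]) @ t_mean
    return pa, pb, pc, pd

def apply_helmert(p, t):
    # Apply the transform to row-vector points t of shape (n, 2).
    pa, pb, pc, pd = p
    return t @ np.array([[pa, pb], [-pb, pa]]) + np.array([pc, pd])
```

For noise-free data the fit recovers the generating parameters exactly; with noisy detected points it gives the least-squares alignment used before computing the per-point differences.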
- the Euclidean distance between the face shape model feature point coordinate k and the detected feature point coordinate t is obtained as a difference ⁇ for each face feature point according to the equation (4).
- when the face feature point position difference calculation unit 140 calculates the difference δ between the coordinate k of the face shape model feature point and the coordinate t of the detected feature point, another distance measure such as the Mahalanobis distance may be used instead of the Euclidean distance.
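The per-point difference of Equation (4), together with the Mahalanobis alternative just mentioned, can be sketched as follows; the plain Euclidean form of Equation (4) is an assumption, since the equation body is not reproduced in this text.

```python
import numpy as np

def point_differences(u, k, cov_inv=None):
    # delta per feature point: Euclidean distance between aligned detected
    # points u and model points k, or the Mahalanobis distance when an
    # inverse covariance matrix is supplied.
    d = u - k
    if cov_inv is None:
        return np.linalg.norm(d, axis=1)
    return np.sqrt(np.einsum('ni,ij,nj->n', d, cov_inv, d))
```

With the identity matrix as `cov_inv` the Mahalanobis variant reduces to the Euclidean one.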
- the face feature point position difference calculation unit 140 may treat as outliers face feature points whose positions the face feature point position determination unit 130 failed to determine correctly (for example, when a face feature point is occluded by sunglasses or a mask, or when the input image is unclear and it is difficult to specify the position of the face feature point, so that an incorrect position is determined as the face feature point).
- for example, a method is conceivable in which face feature points whose face feature point likeness at the position determined by the face feature point position determination unit 130 is equal to or smaller than a predetermined threshold are not used when obtaining the coordinate transformation p from the detected feature points t to the face shape model feature points k.
- in addition, even when the face feature point position determination unit 130 determines a face feature point at a position significantly different from its true position (for example, when the face feature point of the right eye is determined to exist at a position near the left eye), the coordinate transformation p from the coordinates t of the detected feature points to the coordinates k of the face shape model feature points can be obtained by a robust estimation method.
- as the robust estimation method, various conventionally proposed methods can be used.
- first, two face feature points are randomly selected from the 14 face feature points shown in FIG. 4. Below, the subscripts a and b denote the two selected feature points.
- the pair of coordinates corresponding to the two randomly selected face feature points among the detected feature points is denoted (ta, tb), and the pair of coordinates corresponding to the same two face feature points among the face shape model feature points is denoted (ka, kb). Note that ka, kb, ta, and tb are each two-dimensional vectors representing coordinate values.
- next, the parameters of the Helmert transformation p from the coordinate pair (ta, tb) to the coordinate pair (ka, kb) are obtained.
- for a transformation from two points to two points, the parameters are obtained uniquely.
- the 14 coordinates t of the detected feature points are then transformed by the obtained Helmert transformation p, and the transformed coordinates are denoted u.
- the Euclidean distance between the coordinates u and k is obtained for each of the 14 face feature points, and the median of the 14 distances is retained. By repeating this random selection and adopting the transformation p with the smallest retained median, an estimate that is robust to outliers can be obtained.
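The random two-point procedure above amounts to a least-median-of-squares style estimate. A sketch follows, with the number of random trials as an assumed parameter; the two-point Helmert fit is done in complex form, where z = pa + i·pb encodes scale and rotation.

```python
import numpy as np

def helmert_from_two(ta, tb, ka, kb):
    # Unique Helmert transform mapping the pair (ta, tb) onto (ka, kb):
    # solve k = z*t + w over the complex numbers (z = pa + i*pb).
    z = (complex(*kb) - complex(*ka)) / (complex(*tb) - complex(*ta))
    w = complex(*ka) - z * complex(*ta)
    return z.real, z.imag, w.real, w.imag

def apply_helmert(p, t):
    pa, pb, pc, pd = p
    return t @ np.array([[pa, pb], [-pb, pa]]) + np.array([pc, pd])

def robust_helmert(t, k, n_trials=50, rng=None):
    # Repeatedly fit from a random 2-point sample, score by the median
    # distance over all points, and keep the best transform.
    if rng is None:
        rng = np.random.default_rng(0)
    best, best_med = None, np.inf
    n = len(t)
    for _ in range(n_trials):
        a, b = rng.choice(n, size=2, replace=False)
        p = helmert_from_two(t[a], t[b], k[a], k[b])
        med = np.median(np.linalg.norm(apply_helmert(p, t) - k, axis=1))
        if med < best_med:
            best, best_med = p, med
    return best, best_med
```

A single grossly misplaced feature point barely affects the retained median, so the recovered transform stays close to the true one.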
- the face area reliability calculation unit 150 calculates a face area reliability J, indicating the possibility that the input image is a face image, based on the positional difference information for each face feature point calculated by the face feature point position difference calculation unit 140, and stores it in the storage unit.
- the face area reliability J stored in the storage unit can be read out by various applications such as face recognition, and can be used according to the purpose.
- for example, the face area reliability J can be calculated, from the positional difference δ of each face feature point calculated by the face feature point position difference calculation unit 140, as the median of the values obtained by converting each difference δ with a function φ, according to Equation (5).
- the function ⁇ is a function such that the value of the function ⁇ decreases as the value of the difference ⁇ increases.
- the sigmoid function shown in Expression (6) is used.
- a and b in Expression (6) are parameters for adjusting how much the value of the function ⁇ is decreased when the value of the difference ⁇ increases, and a is a negative number.
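A sketch of Equations (5) and (6) follows, under the assumption that Equation (6) is the logistic sigmoid φ(δ) = 1/(1 + exp(−(aδ + b))), which is decreasing in δ when a is negative, as the text requires; the default values of a and b are illustrative.

```python
import numpy as np

def phi(delta, a=-1.0, b=4.0):
    # Assumed form of Eq. (6): logistic sigmoid, decreasing in delta for a < 0.
    return 1.0 / (1.0 + np.exp(-(a * np.asarray(delta, dtype=float) + b)))

def face_area_reliability(deltas, a=-1.0, b=4.0):
    # Eq. (5): median over the feature points of the converted differences.
    return float(np.median(phi(deltas, a, b)))
```

Small positional differences map near 1, large ones near 0, so J directly reflects how face-like the feature-point arrangement is.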
- the face area reliability calculation unit 150 may also use, as the face area reliability J, the average of the values obtained by converting each difference δ with the function φ, instead of their median.
- in addition to the positional difference δ for each face feature point obtained by the face feature point position difference calculation unit 140, the face area reliability calculation unit 150 may also use the face feature point likeness at the position of the detected feature point obtained by the face feature point position determination unit 130.
- in this case, with δ denoting the positional difference for each face feature point and s denoting the face feature point likeness for each face feature point, the face area reliability J can be calculated according to Equation (7) as a weighted sum of the median (or average) of the values obtained by converting each difference δ with the function φ and the median (or average) of the face feature point likeness s.
- c in Equation (7) is a parameter for adjusting the balance between the positional difference δ of the face feature points and the face feature point likeness s. Note that c is a real value in the range of 0 to 1.
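Equation (7) as described — a weighted sum of the median of the converted differences and the median of the feature-point likenesses — might look like the sketch below; the sigmoid parameterization is again an assumption.

```python
import numpy as np

def combined_reliability(deltas, likenesses, c=0.5, a=-1.0, b=4.0):
    # Assumed Eq. (7): J = c * median(phi(delta)) + (1 - c) * median(s),
    # with the balance parameter c in [0, 1].
    phi = 1.0 / (1.0 + np.exp(-(a * np.asarray(deltas, dtype=float) + b)))
    return c * float(np.median(phi)) + (1 - c) * float(np.median(likenesses))
```

Setting c = 1 recovers the purely position-based reliability of Equation (5); c = 0 uses only the feature-point likenesses.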
- furthermore, the face area reliability calculation unit 150 may obtain an integrated face area reliability by using, in addition to the face area reliability J obtained from the positional difference δ for each face feature point (and, optionally, the face feature point likeness s at the positions obtained by the face feature point position determination unit 130), one or more additional face area reliabilities obtained by a different method (for example, a value representing face-likeness output by a conventional face detection device).
- for example, the integrated face area reliability J-hat may be calculated from the face area reliability J and the additional face area reliability J0 according to Equation (8).
- d in Equation (8) is a parameter for adjusting the balance between the face area reliability J and the additional face area reliability J0. Note that d is a real value in the range of 0 to 1.
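The exact form of Equation (8) is not reproduced in this text; a plausible linear combination, consistent with d being a balance parameter in [0, 1], is sketched below as an assumption.

```python
def integrated_reliability(J, J0, d=0.5):
    # Assumed Eq. (8): weighted combination of the face area reliability J and
    # an additional reliability J0, balanced by d in [0, 1].
    return d * J + (1 - d) * J0
```

At d = 1 only the feature-point-based reliability is used; at d = 0 only the external detector's score is used.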
- as described above, the face area reliability acquisition unit does not calculate the reliability based on whether the input image as a whole contains textures that are likely to be the detection target (a face in this embodiment). Instead, the face area reliability is obtained based on whether the positions of the predetermined parts detected in the input image (in this embodiment, the positions of the face feature points corresponding to the eyes, nose, mouth, and so on) are arranged like the predetermined parts of the detection target, or based on the part-likeness at the detected positions (in this embodiment, the face feature point likeness s). The face area reliability can therefore be calculated with higher practical accuracy. In addition, by using such a face area reliability, it is possible to accurately determine whether a detected face area is a true face even when a face detector erroneously detects a non-face area such as the background as a face area.
- the face area reliability calculation device, face area reliability calculation method, and face area reliability calculation program according to this embodiment are widely applicable for improving accuracy in processing that uses face images, such as face detection, face authentication, and facial expression recognition.
- this embodiment is intended to facilitate understanding of the present invention, and is not intended to limit its interpretation.
- the present invention can be changed / improved without departing from the spirit thereof, and the present invention includes equivalents thereof.
- the face area reliability calculation device has been described as an example in which the facial feature point likelihood is obtained as the part likelihood, but the present invention is not limited to such a configuration.
- for example, a target other than a face can be set as the detection target, and the part-likeness can be an area-likeness instead of a feature-point likeness; the present invention can be applied to such detection targets and parts to obtain the reliability.
- (Supplementary note 1) A reliability acquisition device comprising: a discriminator storage unit that stores information on a discriminator that, when applied to an image, outputs a part-likeness related to a predetermined part to be detected; a part-likeness acquisition unit that obtains the part-likeness for an image region included in an input image based on the information of the discriminator; a part-position determination unit that determines the position of the predetermined part in the input image based on the obtained part-likeness; a reference position information storage unit that stores information on a reference position of the predetermined part; a difference acquisition unit that obtains, based on the information on the reference position, difference information between the reference position of the predetermined part and the determined position of the predetermined part; and a reliability acquisition unit that obtains, based on the difference information, a reliability indicating the possibility that the input image is a detection-target image.
- the reliability acquisition device according to Supplementary note 1, characterized by the acquisition target described above.
- the reliability acquisition device according to Supplementary note 1 or 2, wherein the discriminator storage unit and the reference position information storage unit store discriminator information and reference position information, respectively, for a plurality of predetermined parts, and the part-likeness acquisition unit, the part-position determination unit, and the difference acquisition unit respectively acquire the part-likeness, determine the position of the predetermined part, and obtain the difference information for each of the predetermined parts.
- the reliability acquisition device according to any one of Supplementary notes 1 to 3, wherein the reliability acquisition unit obtains the reliability.
- the reliability acquisition device according to any one of the preceding Supplementary notes, wherein the reliability acquisition unit obtains the reliability.
- the position information storage unit based on the information on the reference position, difference information between the reference position of the predetermined part and the determined position of the predetermined part is obtained, and based on the difference information, an input image is obtained.
- a reliability acquisition method for obtaining a reliability indicating the possibility that is an image to be detected An image region included in an input image with reference to a discriminator storage unit that stores information on a discriminator that outputs a part-likeness related to a predetermined part to be detected when applied to an image on a computer
- a function for obtaining a reliability indicating the possibility that the input image is an image to be detected based on the program An image region included in an input image with reference to a discriminator storage unit that stores information on a discriminator that outputs a part-likeness related to a predetermined part to be detected when applied to an image on a computer
- DESCRIPTION OF SYMBOLS: 1 Face region reliability calculation device; 100 Data processing device; 110 Face image input unit; 120 Face feature point reliability generation unit
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
(Supplementary Note 1) A reliability acquisition apparatus comprising: a classifier storage unit that stores information on a classifier which, when applied to an image, outputs a part-likeness for a predetermined part of a detection target; a part-likeness acquisition unit that obtains the part-likeness for an image region included in an input image, based on the classifier information; a part position determination unit that determines a position of the predetermined part in the input image, based on the obtained part-likeness; a reference position information storage unit that stores information on a reference position of the predetermined part; a difference acquisition unit that obtains, based on the reference position information, difference information between the reference position of the predetermined part and the determined position of the predetermined part; and a reliability acquisition unit that obtains, based on the difference information, a reliability indicating a possibility that the input image is an image of the detection target.
(Supplementary Note 2) The reliability acquisition apparatus according to Supplementary Note 1, wherein the detection target is a face.
(Supplementary Note 3) The reliability acquisition apparatus according to Supplementary Note 1 or 2, wherein the classifier storage unit and the reference position information storage unit respectively store classifier information and reference position information for a plurality of predetermined parts, and the part-likeness acquisition unit, the part position determination unit, and the difference acquisition unit respectively acquire the part-likeness, determine the position of the predetermined part, and obtain the difference information, for each of the predetermined parts.
(Supplementary Note 4) The reliability acquisition apparatus according to any one of Supplementary Notes 1 to 3, wherein the reliability acquisition unit obtains the reliability based on the difference information and the obtained part-likeness.
(Supplementary Note 5) The reliability acquisition apparatus according to any one of Supplementary Notes 1 to 4, wherein the reliability acquisition unit obtains an integrated reliability based on the reliability and one or more additional reliabilities obtained by a method different from that used for the reliability.
(Supplementary Note 6) The reliability acquisition apparatus according to any one of Supplementary Notes 1 to 5, wherein the difference acquisition unit obtains the difference information based on whether the part position determined by the position determination unit is an outlier, using a robust estimation technique.
(Supplementary Note 7) A reliability acquisition method in which a computer: obtains, with reference to a classifier storage unit that stores information on a classifier which, when applied to an image, outputs a part-likeness for a predetermined part of a detection target, the part-likeness for an image region included in an input image, based on the classifier information; determines a position of the predetermined part in the input image based on the obtained part-likeness; obtains, with reference to a reference position information storage unit that stores information on a reference position of the predetermined part, difference information between the reference position of the predetermined part and the determined position of the predetermined part, based on the reference position information; and obtains, based on the difference information, a reliability indicating a possibility that the input image is an image of the detection target.
(Supplementary Note 8) A program for causing a computer to realize: a function of obtaining, with reference to a classifier storage unit that stores information on a classifier which, when applied to an image, outputs a part-likeness for a predetermined part of a detection target, the part-likeness for an image region included in an input image, based on the classifier information; a function of determining a position of the predetermined part in the input image based on the obtained part-likeness; a function of obtaining, with reference to a reference position information storage unit that stores information on a reference position of the predetermined part, difference information between the reference position of the predetermined part and the determined position of the predetermined part, based on the reference position information; and a function of obtaining, based on the difference information, a reliability indicating a possibility that the input image is an image of the detection target.
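The processing flow laid out in Supplementary Notes 1 to 8 (classifier scores per part, position determination, differences from the reference positions, and a reliability derived from those differences, optionally combined with the part-likeness as in Supplementary Note 4) can be sketched as follows. This is a hypothetical illustration only: the function name, the exponential weighting of the mean squared difference, and the `scale` parameter are assumptions, not taken from the specification.

```python
import math

def face_region_reliability(detected, reference, likeness, scale=1.0):
    """Reliability that an input region depicts the detection target
    (e.g. a face), from per-part position differences.

    detected, reference: dicts mapping part name -> (x, y)
    likeness: dict mapping part name -> classifier score in [0, 1]
    """
    # Squared distance between each determined part position and its
    # reference position (face shape model), normalized by `scale`.
    sq = [((detected[p][0] - rx) ** 2 + (detected[p][1] - ry) ** 2) / scale ** 2
          for p, (rx, ry) in reference.items()]
    diff_term = math.exp(-sum(sq) / len(sq))      # 1.0 when all parts match
    likeness_term = sum(likeness.values()) / len(likeness)
    # Supplementary Note 4: combine difference information with part-likeness.
    return diff_term * likeness_term
```

Small deviations from the shape model reduce the value smoothly, while a non-face region, whose determined positions fit the model poorly, yields a large mean squared difference and hence a low reliability.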
100 Data processing device
110 Face image input unit
120 Face feature point reliability generation unit
130 Face feature point position determination unit
140 Face feature point position difference calculation unit
150 Face region reliability calculation unit
200 Storage device
210 Face feature point classifier storage unit
220 Face shape model storage unit
Claims (8)
- A reliability acquisition apparatus comprising:
a classifier storage unit that stores information on a classifier which, when applied to an image, outputs a part-likeness for a predetermined part of a detection target;
a part-likeness acquisition unit that obtains the part-likeness for an image region included in an input image, based on the classifier information;
a part position determination unit that determines a position of the predetermined part in the input image, based on the obtained part-likeness;
a reference position information storage unit that stores information on a reference position of the predetermined part;
a difference acquisition unit that obtains, based on the reference position information, difference information between the reference position of the predetermined part and the determined position of the predetermined part; and
a reliability acquisition unit that obtains, based on the difference information, a reliability indicating a possibility that the input image is an image of the detection target.
- The reliability acquisition apparatus according to claim 1, wherein the detection target is a face.
- The reliability acquisition apparatus according to claim 1 or 2, wherein the classifier storage unit and the reference position information storage unit respectively store classifier information and reference position information for a plurality of predetermined parts, and
the part-likeness acquisition unit, the part position determination unit, and the difference acquisition unit respectively acquire the part-likeness, determine the position of the predetermined part, and obtain the difference information, for each of the predetermined parts.
- The reliability acquisition apparatus according to any one of claims 1 to 3, wherein the reliability acquisition unit obtains the reliability based on the difference information and the obtained part-likeness.
- The reliability acquisition apparatus according to any one of claims 1 to 4, wherein the reliability acquisition unit obtains an integrated reliability based on the reliability and one or more additional reliabilities obtained by a method different from that used for the reliability.
- The reliability acquisition apparatus according to any one of claims 1 to 5, wherein the difference acquisition unit obtains the difference information based on whether the part position determined by the position determination unit is an outlier, using a robust estimation technique.
- A reliability acquisition method in which a computer:
obtains, with reference to a classifier storage unit that stores information on a classifier which, when applied to an image, outputs a part-likeness for a predetermined part of a detection target, the part-likeness for an image region included in an input image, based on the classifier information; determines a position of the predetermined part in the input image based on the obtained part-likeness; obtains, with reference to a reference position information storage unit that stores information on a reference position of the predetermined part, difference information between the reference position of the predetermined part and the determined position of the predetermined part, based on the reference position information; and obtains, based on the difference information, a reliability indicating a possibility that the input image is an image of the detection target.
- A program for causing a computer to realize:
a function of obtaining, with reference to a classifier storage unit that stores information on a classifier which, when applied to an image, outputs a part-likeness for a predetermined part of a detection target, the part-likeness for an image region included in an input image, based on the classifier information;
a function of determining a position of the predetermined part in the input image, based on the obtained part-likeness;
a function of obtaining, with reference to a reference position information storage unit that stores information on a reference position of the predetermined part, difference information between the reference position of the predetermined part and the determined position of the predetermined part, based on the reference position information; and
a function of obtaining, based on the difference information, a reliability indicating a possibility that the input image is an image of the detection target.
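Claim 6 specifies that the difference acquisition unit uses a robust estimation technique, deciding whether a determined part position is an outlier before computing the difference information. One common robust rule is the median/MAD modified z-score; the sketch below is a hypothetical illustration of that idea, and the function name, the threshold 3.5, and the constant 0.6745 are conventional choices rather than values taken from the specification.

```python
import math
import statistics

def outlier_mask(detected, reference, thresh=3.5):
    """Flag determined part positions whose deviation from the face
    shape model's reference position is an outlier, using the
    median / median-absolute-deviation (MAD) modified z-score."""
    parts = list(reference)
    # Euclidean deviation of each determined position from its reference.
    dev = [math.hypot(detected[p][0] - reference[p][0],
                      detected[p][1] - reference[p][1]) for p in parts]
    med = statistics.median(dev)
    mad = statistics.median(abs(d - med) for d in dev) or 1e-9
    # 0.6745 rescales MAD to a standard-deviation estimate for
    # normally distributed deviations.
    scores = [0.6745 * (d - med) / mad for d in dev]
    return {p: s > thresh for p, s in zip(parts, scores)}
```

A difference acquisition unit could then exclude, or down-weight, the flagged parts before computing the difference information on which the reliability is based.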
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013558675A JP6071002B2 (ja) | 2012-02-16 | 2013-02-08 | 信頼度取得装置、信頼度取得方法および信頼度取得プログラム |
US14/379,304 US9858501B2 (en) | 2012-02-16 | 2013-02-08 | Reliability acquiring apparatus, reliability acquiring method, and reliability acquiring program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012031319 | 2012-02-16 | ||
JP2012-031319 | 2012-02-16 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013122009A1 true WO2013122009A1 (ja) | 2013-08-22 |
Family
ID=48984121
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/053105 WO2013122009A1 (ja) | 2012-02-16 | 2013-02-08 | 信頼度取得装置、信頼度取得方法および信頼度取得プログラム |
Country Status (3)
Country | Link |
---|---|
US (1) | US9858501B2 (ja) |
JP (1) | JP6071002B2 (ja) |
WO (1) | WO2013122009A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019003973A1 (ja) * | 2017-06-26 | 2019-01-03 | 日本電気株式会社 | 顔認証装置、顔認証方法およびプログラム記録媒体 |
JP2020527792A (ja) * | 2017-11-23 | 2020-09-10 | ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド | 目標物認識方法、装置、記憶媒体および電子機器 |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9904843B2 (en) * | 2012-03-27 | 2018-02-27 | Nec Corporation | Information processing device, information processing method, and program |
US9020213B1 (en) | 2013-10-17 | 2015-04-28 | Daon Holdings Limited | Methods and systems for detecting biometric characteristics in an image |
JP6770521B2 (ja) * | 2015-02-12 | 2020-10-14 | コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. | 堅牢な分類器 |
US11521460B2 (en) | 2018-07-25 | 2022-12-06 | Konami Gaming, Inc. | Casino management system with a patron facial recognition system and methods of operating same |
AU2019208182B2 (en) | 2018-07-25 | 2021-04-08 | Konami Gaming, Inc. | Casino management system with a patron facial recognition system and methods of operating same |
KR20210157052A (ko) * | 2020-06-19 | 2021-12-28 | 삼성전자주식회사 | 객체 인식 방법 및 객체 인식 장치 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000339476A (ja) * | 1999-05-28 | 2000-12-08 | Oki Electric Ind Co Ltd | 目位置及び顔位置検出装置 |
JP2005149506A (ja) * | 2003-11-14 | 2005-06-09 | Fuji Photo Film Co Ltd | 対象物自動認識照合方法および装置 |
JP2009089077A (ja) * | 2007-09-28 | 2009-04-23 | Fujifilm Corp | 画像処理装置、撮像装置、画像処理方法及び画像処理プログラム |
JP2011130203A (ja) * | 2009-12-17 | 2011-06-30 | Canon Inc | 映像情報処理方法及びその装置 |
WO2011148596A1 (ja) * | 2010-05-26 | 2011-12-01 | 日本電気株式会社 | 顔特徴点位置補正装置、顔特徴点位置補正方法および顔特徴点位置補正プログラム |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4999570B2 (ja) * | 2007-06-18 | 2012-08-15 | キヤノン株式会社 | 表情認識装置及び方法、並びに撮像装置 |
JP2009123081A (ja) | 2007-11-16 | 2009-06-04 | Fujifilm Corp | 顔検出方法及び撮影装置 |
JP5202037B2 (ja) * | 2008-02-29 | 2013-06-05 | キヤノン株式会社 | 特徴点位置決定方法及び装置 |
JP4513898B2 (ja) * | 2008-06-09 | 2010-07-28 | 株式会社デンソー | 画像識別装置 |
JP2010191592A (ja) | 2009-02-17 | 2010-09-02 | Seiko Epson Corp | 顔の特徴部位の座標位置を検出する画像処理装置 |
2013
- 2013-02-08 WO PCT/JP2013/053105 patent/WO2013122009A1/ja active Application Filing
- 2013-02-08 US US14/379,304 patent/US9858501B2/en active Active
- 2013-02-08 JP JP2013558675A patent/JP6071002B2/ja active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000339476A (ja) * | 1999-05-28 | 2000-12-08 | Oki Electric Ind Co Ltd | 目位置及び顔位置検出装置 |
JP2005149506A (ja) * | 2003-11-14 | 2005-06-09 | Fuji Photo Film Co Ltd | 対象物自動認識照合方法および装置 |
JP2009089077A (ja) * | 2007-09-28 | 2009-04-23 | Fujifilm Corp | 画像処理装置、撮像装置、画像処理方法及び画像処理プログラム |
JP2011130203A (ja) * | 2009-12-17 | 2011-06-30 | Canon Inc | 映像情報処理方法及びその装置 |
WO2011148596A1 (ja) * | 2010-05-26 | 2011-12-01 | 日本電気株式会社 | 顔特徴点位置補正装置、顔特徴点位置補正方法および顔特徴点位置補正プログラム |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019003973A1 (ja) * | 2017-06-26 | 2019-01-03 | 日本電気株式会社 | 顔認証装置、顔認証方法およびプログラム記録媒体 |
JPWO2019003973A1 (ja) * | 2017-06-26 | 2020-03-26 | 日本電気株式会社 | 顔認証装置、顔認証方法およびプログラム |
US11210498B2 (en) | 2017-06-26 | 2021-12-28 | Nec Corporation | Facial authentication device, facial authentication method, and program recording medium |
US11915518B2 (en) | 2017-06-26 | 2024-02-27 | Nec Corporation | Facial authentication device, facial authentication method, and program recording medium |
JP2020527792A (ja) * | 2017-11-23 | 2020-09-10 | ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド | 目標物認識方法、装置、記憶媒体および電子機器 |
US11182592B2 (en) | 2017-11-23 | 2021-11-23 | Beijing Sensetime Technology Development Co., Ltd. | Target object recognition method and apparatus, storage medium, and electronic device |
JP6994101B2 (ja) | 2017-11-23 | 2022-01-14 | ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド | 目標物認識方法、装置、記憶媒体および電子機器 |
Also Published As
Publication number | Publication date |
---|---|
US20150023606A1 (en) | 2015-01-22 |
US9858501B2 (en) | 2018-01-02 |
JPWO2013122009A1 (ja) | 2015-05-11 |
JP6071002B2 (ja) | 2017-02-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6071002B2 (ja) | 信頼度取得装置、信頼度取得方法および信頼度取得プログラム | |
Chakraborty et al. | An overview of face liveness detection | |
JP6544900B2 (ja) | オブジェクト識別装置、オブジェクト識別方法及びプログラム | |
JP5772821B2 (ja) | 顔特徴点位置補正装置、顔特徴点位置補正方法および顔特徴点位置補正プログラム | |
WO2020000908A1 (zh) | 一种人脸活体检测方法及装置 | |
JP4459137B2 (ja) | 画像処理装置及びその方法 | |
US7912253B2 (en) | Object recognition method and apparatus therefor | |
WO2015149696A1 (en) | Method and system for extracting characteristic of three-dimensional face image | |
US20080219516A1 (en) | Image matching apparatus, image matching method, computer program and computer-readable storage medium | |
US20180075291A1 (en) | Biometrics authentication based on a normalized image of an object | |
KR101612605B1 (ko) | 얼굴 특징점 추출 방법 및 이를 수행하는 장치 | |
JP2008191816A (ja) | 画像処理装置、および画像処理方法、並びにコンピュータ・プログラム | |
US11126827B2 (en) | Method and system for image identification | |
JP6822482B2 (ja) | 視線推定装置、視線推定方法及びプログラム記録媒体 | |
JP6351243B2 (ja) | 画像処理装置、画像処理方法 | |
WO2020195732A1 (ja) | 画像処理装置、画像処理方法、およびプログラムが格納された記録媒体 | |
CN110751069A (zh) | 一种人脸活体检测方法及装置 | |
JP2015197708A (ja) | オブジェクト識別装置、オブジェクト識別方法及びプログラム | |
JP5791361B2 (ja) | パターン識別装置、パターン識別方法およびプログラム | |
JP2013015891A (ja) | 画像処理装置、画像処理方法及びプログラム | |
US20160292529A1 (en) | Image collation system, image collation method, and program | |
JP4816874B2 (ja) | パラメータ学習装置、パラメータ学習方法、およびプログラム | |
JP2006293720A (ja) | 顔検出装置、顔検出方法、及び顔検出プログラム | |
Hashim et al. | Local and semi-global feature-correlative techniques for face recognition | |
JP7103443B2 (ja) | 情報処理装置、情報処理方法、およびプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13748970 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2013558675 Country of ref document: JP Kind code of ref document: A |
NENP | Non-entry into the national phase |
Ref country code: DE |
WWE | Wipo information: entry into national phase |
Ref document number: 14379304 Country of ref document: US |
122 | Ep: pct application non-entry in european phase |
Ref document number: 13748970 Country of ref document: EP Kind code of ref document: A1 |