WO2002007096A1 - Device for searching for a feature point on a face - Google Patents

Device for searching for a feature point on a face (Dispositif de recherche d'un point caracteristique sur un visage)

Info

Publication number
WO2002007096A1
WO2002007096A1 PCT/JP2000/004798
Authority
WO
WIPO (PCT)
Prior art keywords
feature point
face
specific pattern
person
image
Prior art date
Application number
PCT/JP2000/004798
Other languages
English (en)
Japanese (ja)
Inventor
Kentaro Hayashi
Kazuhiko Sumi
Manabu Hashimoto
Original Assignee
Mitsubishi Denki Kabushiki Kaisha
Priority date
Filing date
Publication date
Application filed by Mitsubishi Denki Kabushiki Kaisha filed Critical Mitsubishi Denki Kabushiki Kaisha
Priority to PCT/JP2000/004798 priority Critical patent/WO2002007096A1/fr
Publication of WO2002007096A1 publication Critical patent/WO2002007096A1/fr

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions

Definitions

  • The present invention relates to a technique for tracking feature points of a face in an input image, and in particular to a facial feature point tracking device that detects and tracks characteristic portions of a person's face such as the eyes and nose. Background Art
  • A conventional device of this kind is disclosed in Japanese Patent Application Laid-Open No. 9-251534, whose configuration is shown in the block diagram of FIG. 7.
  • This device consists of an image input unit 111, a face region extraction unit 112, a feature point extraction unit 113, a feature point set candidate selection unit 114, a pattern evaluation unit 115, a normalization generation unit 116, and a recognition unit 117.
  • an image of a person to be recognized is input by the image input unit 111.
  • the face region extraction unit 112 extracts the person's face region from the input image
  • the feature point extraction unit 113 extracts facial feature point candidates, such as the pupils (irises) and the nostrils, from the extracted face region using a separability filter.
  • the feature point set candidate selection unit 114 narrows down the feature point set candidates from among the feature point candidates extracted by the feature point extraction unit 113 using structural constraints of the face.
  • the similarity calculation unit 115a calculates, for each feature point set selected by the feature point set candidate selection unit 114, the similarity between partial regions extracted around each feature point, such as the eye, nose, and mouth areas, and the corresponding partial templates 115b; a weighted sum of these similarities gives a degree of consistency, and the feature point set with the highest consistency is selected as the correct feature point set.
  • the normalization generation unit 116 generates a normalized image using the feature point set selected as correct.
  • the similarity calculation unit 117a calculates the similarity between the normalized image obtained by the normalization generation unit 116 and the dictionary images 117b of registrants registered in advance, and the person whose dictionary image has a high similarity is identified as the person in the image.
  • In the conventional device, facial feature points are thus extracted from the input image: feature point candidates such as the pupils and nostrils are extracted first, and the correct feature point set is then selected from the extracted feature point sets by the pattern evaluation unit 115 using pattern matching. Consequently, if the pupils cannot be extracted because the eyes are closed, or the nostrils cannot be extracted because of the angle of the face, the pattern evaluation at the next stage cannot be performed accurately, or cannot be performed at all, and there is a problem that the facial feature points cannot be tracked stably and accurately.
  • The present invention has been made to solve the above problems, and its object is to provide a facial feature point tracking device that can track feature points stably even when a feature point cannot be detected, for example because an eye is closed or because a nostril is hidden owing to the positional relationship between the face and the image input unit (camera). Disclosure of the Invention
  • A first facial feature point tracking device according to the present invention is a device that sequentially detects and tracks the position of a characteristic portion of a person's face from images of the face captured in time series, and comprises: specific pattern position detecting means which, taking an image pattern that includes a characteristic portion of the person's face and is stored in advance as a specific pattern, detects at which position in the captured image the specific pattern or an image pattern close to it exists; invariant feature point position detecting means for detecting the position of an invariant feature point included in the characteristic portion of the person's face; and output position integrating means for determining the position of the characteristic portion in the captured image from the outputs of these two detecting means and storing the image of the detected characteristic portion of the person's face as a new specific pattern.
  • With this arrangement, a change in the position of a characteristic portion of the face can be detected as long as at least one of the specific pattern position detecting means and the invariant feature point position detecting means succeeds, so detection is more stable.
  • Moreover, since the specific pattern is updated every time a detection is made, the specific pattern position detecting means can detect the characteristic portion of the face even if it changes over time, and robust tracking becomes possible.
  • A second facial feature point tracking device according to the present invention is the first device further comprising: nose position detecting means for detecting the position of the nose from the captured image of the person's face; and feature point detecting means for detecting the position of a characteristic portion of the person's face other than the nose from the nose position detected by the nose position detecting means.
  • With this arrangement, the position of the nose, which is relatively easy to detect, is found first, and the characteristic portions of the face other than the nose are then detected from the relative positional relationship between the nose and those portions, which makes the detection easier than detecting the positions of those feature points directly from the captured image.
  • A third facial feature point tracking device according to the present invention is the first device further comprising: movement state detecting means which examines the change in the position of the characteristic portion of the person's face obtained from the output position integrating means and accumulates the time taken for the position change whenever that change is smaller than a predetermined threshold; and setting means which, when the accumulated time becomes longer than a preset time, updates the preset time to the accumulated time and updates the predetermined threshold to the minimum amount of positional change observed during the accumulated time.
  • FIG. 1 is a block diagram showing the configuration of a face feature point tracking device according to the first embodiment of the present invention
  • FIG. 2 is a flowchart for explaining the operation of the face feature point tracking device according to the first embodiment of the present invention.
  • FIG. 3 is a block diagram showing the configuration of a face feature point tracking device according to the second embodiment of the present invention.
  • FIG. 4 is a flowchart for explaining the operation of the face feature point tracking device according to the second embodiment of the present invention.
  • FIG. 5 is a block diagram showing the configuration of the face feature point tracking device according to the third embodiment of the present invention.
  • FIG. 6 is a flowchart for explaining the operation of the face feature point tracking device according to the third embodiment of the present invention.
  • FIG. 7 is a block diagram showing the configuration of a conventional person authentication device. BEST MODE FOR CARRYING OUT THE INVENTION
  • FIG. 1 is a block diagram showing the configuration of a face feature point tracking device according to the first embodiment of the present invention
  • FIG. 2 is a flowchart for explaining the operation of the face feature point tracking device according to the first embodiment of the present invention.
  • reference numeral 1 denotes an imaging means for inputting an image of a person's face, which is, for example, a CCD camera.
  • Reference numeral 2 denotes invariant feature point position detecting means. A portion that is included in the input image obtained by the imaging means 1 when the face is captured and whose shape varies relatively little over time, such as the pupil of an eye (the iris) or a nostril of the nose, is called an invariant feature point, and this means detects such invariant feature points and their positions. After detecting an invariant feature point (pupil or nostril), the invariant feature point position detecting means 2 calculates and outputs the position of its center point (center of gravity or the like).
  • Reference numeral 3 denotes specific pattern position detecting means for detecting the position of a specific pattern on the face, such as an eye or the nose. The specific pattern is, for example, a region (such as a rectangle) surrounding an invariant feature point, and its position is represented by, for example, the position of the center point (center of gravity or the like) of this region. Reference numeral 4 denotes output position integrating means that determines the position of the characteristic portion of the face in the captured image from the outputs of the invariant feature point position detecting means 2 and the specific pattern position detecting means 3, and stores the image of that portion as a new specific pattern.
  • the invariant feature point position detecting means 2, the specific pattern position detecting means 3, and the output position integrating means 4 are realized by, for example, a computer.
  • FIG. 2 is a diagram for explaining the operation of the facial feature point tracking apparatus according to the first embodiment.
  • In this embodiment, the invariant feature points are the pupils and the nostrils, the specific patterns are the eyes and the nose, and the eyes and nose of the face are tracked.
  • In step 1, it is checked whether feature point positions were detected in the previous calculation. If they were, search areas for the feature points to be tracked are set in the input image based on the feature point positions (the eye and nose positions) obtained in the previous calculation: a rectangular area of a certain size surrounding the eyes and a rectangular area of a certain size surrounding the nose are set. The rectangular area to be searched (referred to as a search rectangular area) is usually a rectangle of fixed size centered on the feature point position obtained in the previous calculation, and its size may be chosen appropriately according to the capture interval of the input image, the interval at which feature points are detected, the speed at which the person moves the face, and so on. When no feature point position was detected in the previous calculation, that is, when feature points are detected for the first time, the input image is binarized by binarizing means (not shown), the nostril positions are detected, and the eye search rectangular areas are set from them using general knowledge of the facial structure.
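  • As a rough illustration (not something specified in the patent) of how such a search rectangle might be set, the Python sketch below clips a fixed-size window centered on the previously detected feature point to the image bounds; the function name search_rect and the window size are assumptions made for this example.

      import numpy as np

      def search_rect(prev_pos, half_size, image_shape):
          """Search rectangle of fixed half-size centered on the feature point
          position from the previous calculation, clipped to the image bounds.
          Returns (top, bottom, left, right)."""
          row, col = prev_pos
          h, w = image_shape[:2]
          half_h, half_w = half_size
          top, bottom = max(0, row - half_h), min(h, row + half_h + 1)
          left, right = max(0, col - half_w), min(w, col + half_w + 1)
          return top, bottom, left, right

      # Example: a 41 x 61 pixel window around the previously detected eye position.
      frame = np.zeros((480, 640), dtype=np.uint8)   # stand-in for a captured image
      prev_eye = (200, 300)                          # (row, col) from the previous calculation
      t, b, l, r = search_rect(prev_eye, (20, 30), frame.shape)
      eye_search_area = frame[t:b, l:r]              # 41 x 61 region to be searched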
  • In step 2, a search area for the invariant feature points is set within the search rectangular area set in step 1. For example, a rectangular area that is centered on the feature point position used in step 1 and is smaller than the rectangle set in step 1 is used.
  • Since an invariant feature point usually lies within the specific pattern and occupies a smaller region than the specific pattern, the position of the invariant feature point can be calculated in a shorter time than if the entire search area set in step 1 were searched for it.
  • When the invariant feature point is a pupil, for example, a shape (a circle) corresponding to the pupil is assumed, and a rectangular area containing that shape around the feature point position is set as the search area for the invariant feature point. The size of this rectangular area may be determined appropriately according to the capture interval of the input image, the interval at which the invariant feature points are detected, the speed at which the pupil moves, and so on.
  • Alternatively, the search area may be set based on the relative positional relationship between the invariant feature point position and the specific pattern position obtained in the previous calculation: the region in which the invariant feature point is most likely to be present is estimated, for example statistically, and the smallest rectangular area containing the region where the probability of the invariant feature point being present is sufficiently high is used as the search area.
  • In step 3, invariant feature points are detected within the search area set in step 2, and their positions are output. The invariant feature points are detected using, for example, shapes stored in advance (a pupil shape for a pupil, a nostril shape for a nostril); more specifically, the search area set in step 2 is examined to find where a shape matching the stored shape appears, the invariant feature point is thereby detected, and the center position of the detected shape is calculated.
  • A separability filter (not shown) may also be used; applying it to the image within the search area set in step 2 allows the invariant feature points to be detected with higher accuracy.
  • When the invariant feature point is the pupil, part or all of the pupil is hidden while the eye is closing, for example during a blink. The stored shape of the invariant feature point then differs from the shape of the pupil in the actual input image, and the invariant feature point is not detected as described above. If no invariant feature point can be detected in the search area set in step 2, it is therefore determined that a blink is in progress, and the position of the invariant feature point is not output for that frame. This additional processing improves the reliability of the detection in step 3.
  • In this way, the positions of invariant feature points such as the pupils and nostrils can be detected from the image.
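  • A minimal sketch, not the patent's algorithm, of how steps 2 and 3 could be realized for a pupil is shown below: the darkest pixels inside the small search area are thresholded and their centroid is returned as the invariant feature point position, while a missing or implausibly sized dark blob is reported as None and treated as a blink. The helper name detect_pupil and all numeric thresholds are assumptions.

      import numpy as np

      def detect_pupil(search_area, dark_offset=30, min_area=20, max_area=400):
          """Return the center (row, col) of a pupil-like dark blob in the search
          area, or None if nothing plausible is found (e.g. the eye is closed)."""
          area = search_area.astype(np.float32)
          # Threshold relative to the darkest pixel so that overall brightness
          # changes shift the threshold along with the image.
          mask = area <= area.min() + dark_offset
          n_dark = int(mask.sum())
          if not (min_area <= n_dark <= max_area):
              return None                      # no pupil-sized dark region: treat as a blink
          rows, cols = np.nonzero(mask)
          return float(rows.mean()), float(cols.mean())   # centroid = center of gravity

      # Synthetic example: a bright patch with a dark 10 x 10 "pupil" in the middle.
      img = np.full((40, 60), 200, dtype=np.uint8)
      img[15:25, 25:35] = 10
      print(detect_pupil(img))                 # roughly (19.5, 29.5)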
  • Step 4 is performed immediately before steps 2 and 3, or while they are being performed. In step 4, a specific pattern is detected in the search rectangular area set in step 1, and the center position of the specific pattern is output.
  • Using the specific pattern obtained in the previous calculation as a template image pattern, the specific pattern is detected in the search area by a method such as pattern matching, and the center position of the detected specific pattern is output as the position of the specific pattern contained in the current input image.
  • Pattern matching detects the specific pattern present in the search area of the input image by finding the position at which the sum of the differences between the pixel values of the specific pattern and the corresponding pixel values of the input image is small.
  • In this way, the positions where the specific patterns of the eyes and the nose exist can be detected from the image.
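  • A brute-force sketch of this sum-of-differences matching is given below for clarity; it illustrates the idea rather than the patent's implementation, and in practice an optimized routine such as OpenCV's cv2.matchTemplate would normally be used instead.

      import numpy as np

      def match_pattern(search_area, pattern):
          """Locate `pattern` inside `search_area` (both 2-D arrays, with the
          search area at least as large as the pattern) by minimizing the sum of
          absolute pixel differences; returns the center (row, col) of the
          best-matching window."""
          sa = search_area.astype(np.int32)
          pt = pattern.astype(np.int32)
          ph, pw = pt.shape
          best_pos, best_cost = None, None
          for r in range(sa.shape[0] - ph + 1):
              for c in range(sa.shape[1] - pw + 1):
                  cost = np.abs(sa[r:r + ph, c:c + pw] - pt).sum()
                  if best_cost is None or cost < best_cost:
                      best_cost, best_pos = cost, (r, c)
          r, c = best_pos
          return r + ph // 2, c + pw // 2      # center of the detected specific pattern

      # Example: find a 5 x 5 bright template inside a 20 x 20 search area.
      sa = np.zeros((20, 20), dtype=np.uint8)
      sa[5:10, 8:13] = 255
      template = np.full((5, 5), 255, dtype=np.uint8)
      print(match_pattern(sa, template))       # (7, 10)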
  • the position of the invariant feature point obtained in step 3 and the position of the specific pattern obtained in step 4 should ideally coincide with each other, but in reality this may not be the case.
  • In step 5, for example, the midpoint of the positions obtained in steps 3 and 4 is adopted as the feature point position. Instead of the midpoint, an internally dividing point weighted so as to lie closer to the position obtained from one of the two steps may be used.
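  • Step 5 can be sketched as an internal division of the two detected positions: with weight 0.5 it is the midpoint, and if one detection failed (for example during a blink) the other position is used on its own. The function name and weight are illustrative.

      def integrate_positions(invariant_pos, pattern_pos, w=0.5):
          """Combine the invariant feature point position and the specific pattern
          position; w is the weight given to the invariant feature point
          (0.5 = midpoint).  If one detection failed (None), use the other."""
          if invariant_pos is None:
              return pattern_pos
          if pattern_pos is None:
              return invariant_pos
          return tuple(w * a + (1.0 - w) * b
                       for a, b in zip(invariant_pos, pattern_pos))

      print(integrate_positions((10.0, 20.0), (14.0, 24.0)))   # (12.0, 22.0)
      print(integrate_positions(None, (14.0, 24.0)))           # blink: pattern position only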
  • The feature point position obtained in step 5 is stored as feature point position information in step 6 and is kept ready for the feature point detection performed next. In this way, the positions of the feature points of the face can be tracked stably.
  • If no invariant feature point is detected in the search area set in step 2, it is determined that a blink is in progress and the position of the invariant feature point is not output; in that case, the position of the specific pattern output in step 4 is used as the feature point position, so the feature point can be tracked even when the position of the invariant feature point cannot be detected because of blinking or the like.
  • The output of step 5 is used in steps 6, 7, and 9.
  • In step 6, the feature point position information output in step 5 is retained and updated, and it is used as the feature point position for step 1 in the next detection, so that the search area in step 1 can be set at an appropriate position.
  • In step 7, a specific pattern (template) is acquired from the image information around the feature point position obtained in step 5. In step 8, the acquired pattern is stored as the new specific pattern and is used when detecting the specific pattern the next time, so that even if the characteristic portion of the face changes over time it can still be detected appropriately.
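  • Steps 7 and 8 amount to cutting a window of fixed size around the newly integrated position out of the current frame and keeping it as the template for the next detection. A minimal sketch follows; the window size and function name are assumptions.

      import numpy as np

      def update_specific_pattern(frame, feature_pos, half_size=(15, 25)):
          """Cut out the region around the integrated feature point position and
          return it as the new specific pattern (template) used by the pattern
          matching in the next frame."""
          r, c = int(round(feature_pos[0])), int(round(feature_pos[1]))
          hh, hw = half_size
          h, w = frame.shape[:2]
          top, bottom = max(0, r - hh), min(h, r + hh + 1)
          left, right = max(0, c - hw), min(w, c + hw + 1)
          return frame[top:bottom, left:right].copy()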
  • In step 9, the feature point position obtained in step 5 is output, and the process ends.
  • The face feature point tracking operation described above is performed at every interval at which the imaging means 1 captures images in time series, for example every 1/30 second.
  • the flowchart shown in FIG. 2 is an example, and another flowchart may be used as long as the input / output relationship of each step is appropriate.
  • As described above, even when the center position of the pupil or nostril based on the invariant feature point detected by the invariant feature point position detecting means 2 differs from the position obtained by pattern matching in the specific pattern position detecting means 3, the output position integrating means 4 adjusts the output to a value that takes the results of both detecting means 2 and 3 into account, so feature points far from the true position are not output and facial feature points such as the eyes and nose can be tracked robustly.
  • Since the invariant feature point position detecting means 2 and the specific pattern position detecting means 3 detect the invariant feature point position and the specific pattern position independently of each other, the detection result of the other detecting means 3 (or 2) is still available even when detection by one detecting means 2 (or 3) is impossible, so the feature points can be tracked stably.
  • Furthermore, the feature point position obtained by the output position integrating means 4 and the specific pattern acquired based on that position are updated and used in the next detection by the invariant feature point position detecting means 2 and the specific pattern position detecting means 3. The specific pattern position detecting means 3 can therefore follow a characteristic portion of the person's face even when it changes over time, and robust tracking becomes possible.
  • In addition, the time needed to detect the specific pattern is shorter than when the entire image of the person's face is used as the search area. Embodiment 2.
  • FIG. 3 is a block diagram showing the configuration of a face feature point tracking device according to the second embodiment of the present invention
  • FIG. 4 is a flowchart for explaining the operation of the face feature point tracking device according to the second embodiment of the present invention.
  • In FIG. 3, reference numeral 5 denotes nose position detecting means for detecting the position of the nose from the facial image obtained by the imaging means 1, and reference numeral 6 denotes eye position detecting means for detecting the positions of the eyes from the facial image obtained by the imaging means 1, using the nose position detected by the nose position detecting means 5.
  • the nose position detecting means 5 and the eye position detecting means 6 are both realized by a computer.
  • The facial feature point tracking device configured as described above can be operated, for example, in the order shown in the flowchart of FIG. 4.
  • When the feature point positions were obtained in the previous calculation, the facial feature points are tracked by the imaging means 1, the invariant feature point position detecting means 2, the specific pattern position detecting means 3, and the output position integrating means 4 in the same manner as in the first embodiment; otherwise, the positions of both eyes and the nose are detected by the nose position detecting means 5 and the eye position detecting means 6, and the feature point positions are calculated from those detection results.
  • FIG. 4 is a flowchart for explaining the operation of the facial feature point tracking apparatus according to the second embodiment.
  • In step 11, it is determined whether or not the feature point positions were obtained in the previous calculation.
  • the procedure for detecting a feature point when the feature point position has been previously obtained is the same as that described in the first embodiment with reference to FIG. 2, and a description thereof will be omitted.
  • The nostrils appear as black regions in the image; to detect the nose region it is therefore sufficient to detect these two black regions, which can be done with relatively high accuracy.
  • In step 12, a candidate region in which the nose is likely to be present is extracted, and the image in that region is binarized with a specific threshold. To cope with changes in the brightness of the image, the threshold in step 12 may be set to a value obtained by adding a specific offset to the darkest (minimum) pixel value in the image.
  • In step 13, regions whose area lies within a specific range are extracted from the image binarized in step 12, and two such regions lying to the left and right of each other within a certain distance are detected. These two regions are the nostrils, and the position of the nose can be represented by, for example, the midpoint of the centers of gravity of the two regions.
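  • Steps 12 and 13 might look roughly like the sketch below: the candidate region is binarized relative to its darkest pixel, blobs whose area falls within a given range are kept, and the midpoint of a horizontally separated pair of blob centroids is returned as the nose position. Every numeric parameter and the helper name detect_nose are assumptions, and SciPy's ndimage.label is used for the connected-component step.

      import numpy as np
      from scipy import ndimage

      def detect_nose(region, dark_offset=40, min_area=15, max_area=300,
                      min_sep=10, max_sep=60):
          """Find two nostril-like dark blobs lying side by side in the nose
          candidate region and return the nose position as the midpoint of their
          centers of gravity, or None if no such pair is found."""
          reg = region.astype(np.int32)
          # Binarize relative to the darkest pixel (cf. the brightness-robust
          # threshold of step 12).
          mask = reg <= reg.min() + dark_offset
          labels, n = ndimage.label(mask)
          blobs = []
          for i in range(1, n + 1):
              ys, xs = np.nonzero(labels == i)
              if min_area <= ys.size <= max_area:        # area within a specific range
                  blobs.append((ys.mean(), xs.mean()))
          # Look for a left/right pair separated by a plausible horizontal distance.
          for a in blobs:
              for b in blobs:
                  dx, dy = b[1] - a[1], abs(b[0] - a[0])
                  if min_sep <= dx <= max_sep and dy <= dx:
                      return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
          return None

      # Synthetic example: two dark 4 x 4 "nostrils" on a bright background.
      patch = np.full((40, 80), 180, dtype=np.uint8)
      patch[20:24, 25:29] = 5
      patch[20:24, 50:54] = 5
      print(detect_nose(patch))                          # roughly (21.5, 39.0)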
  • In step 14, regions in which the eyes are likely to be present are set relative to the position of the nose detected in step 13. In step 15, these regions are binarized. In step 16, regions whose area lies within a specific range are extracted from the image binarized in step 15, the two such regions arranged left and right within a certain distance are detected as the pupils, and the center position of each pupil is taken as a feature point position.
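  • Step 14 can be sketched as fixed offsets from the detected nose position; within each returned box the pupil would then be found as a dark-blob centroid, as in the pupil sketch given for Embodiment 1. The offsets below encode an assumed facial layout and are not values taken from the patent.

      def eye_search_regions(nose_pos, dy=-60, dx=45, half=(20, 30)):
          """Return (top, bottom, left, right) boxes above and to either side of
          the nose position where the eyes are likely to be found; clip the
          boxes to the image bounds before use."""
          r, c = nose_pos
          boxes = []
          for side in (-1, +1):
              cr, cc = r + dy, c + side * dx
              boxes.append((int(cr - half[0]), int(cr + half[0] + 1),
                            int(cc - half[1]), int(cc + half[1] + 1)))
          return boxes                  # [left-eye box, right-eye box]

      print(eye_search_regions((260.0, 320.0)))
      # [(180, 221, 245, 306), (180, 221, 335, 396)]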
  • In this way, the positions of the nose and both eyes are detected, and these positions are output as the feature point positions in step 9. The feature point positions are retained in step 6 in preparation for the next tracking, and a region of predetermined size is cut out around each feature point position and acquired as a specific pattern.
  • Because the position of the nose, which is relatively easy to detect, is detected first and the other feature points, for example the positions of the eyes, are then detected from it, stable and robust tracking can be performed.
  • the flowchart shown in FIG. 4 is an example, and another flowchart may be used as long as the input / output relationship of each step is appropriate.
  • In this embodiment, the positions of the eyes and the nose are tracked as the feature point positions, but the present invention is not limited to this; the position of the mouth, for example, may be used instead, in which case the position of the nose, which is relatively easy to detect, is detected first and the position of the mouth is detected based on it.
  • As described above, the position of the nose, which is relatively easy to detect, is detected first, and the positions of the characteristic portions of the person's face other than the nose are then determined from the relative positional relationship between the nose and those portions, which makes them easier to detect than if their positions were detected directly from the captured image. Embodiment 3.
  • FIG. 5 is a block diagram showing a configuration of a facial feature point tracking apparatus according to Embodiment 3 of the present invention.
  • FIG. 6 is a flowchart for explaining the operation of the facial feature point tracking device according to the third embodiment of the present invention.
  • In FIG. 5, reference numeral 7 denotes movement width detecting means corresponding to the movement state detecting means, 8 denotes a reference movement width, 9 denotes a reference tracking time, 10 denotes stable tracking determination means, 11 denotes reference movement width setting means, and 12 denotes reference tracking time setting means; these are realized by, for example, a computer and together judge whether the face is being tracked stably. The reference movement width setting means 11 and the reference tracking time setting means 12 constitute the setting means.
  • The movement width detecting means 7 detects the movement width of the feature point position from the output of the output position integrating means 4, and based on that result, together with the reference movement width 8 and the reference tracking time 9, the stable tracking determination means 10 determines whether stable tracking is being performed and outputs the result.
  • The reference movement width setting means 11 updates the reference movement width 8, and the reference tracking time setting means 12 updates the reference tracking time 9.
  • In step 21, it is determined whether this stable tracking judgment is being performed for the first time; if so, in step 22 the reference tracking time and the reference movement width are set to predetermined fixed values, the change in the feature point position is then examined over a period corresponding to the reference tracking time, and the movement width of the feature point position is initialized, for example, to a value smaller than the reference movement width.
  • In step 23, the movement width of the feature point position is calculated from the feature point positions obtained in the previous and current calculations. If, for example, the movement width is defined as the relative distance of the current feature point position from the previous one, it is the length of the vector representing the difference between the two positions.
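  • In other words, for positions treated as two-dimensional points the movement width is simply the Euclidean length of the displacement vector, as in this small sketch (function name assumed):

      import math

      def movement_width(prev_pos, curr_pos):
          """Movement width of a feature point: the length of the vector from the
          previous position to the current one."""
          return math.hypot(curr_pos[0] - prev_pos[0], curr_pos[1] - prev_pos[1])

      print(movement_width((100.0, 200.0), (103.0, 204.0)))   # 5.0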
  • In step 24, it is checked whether the movement width obtained in step 22 or step 23 is smaller than the reference movement width.
  • If it is not, the tracking time is reset to 0 in step 25, the fact that stable tracking is not being performed is output in step 26, and the process returns to step 21.
  • If the movement width is smaller than the reference movement width in step 24, the time taken for the position change, that is, the difference between the time at which the previous feature point position was calculated and the time at which the current feature point position was calculated, is added to the accumulated tracking time in step 27.
  • In step 28, it is checked whether this tracking time is longer than the reference tracking time. If it is determined to be equal to or shorter than the reference tracking time, the fact that stable tracking is not being performed is output in step 26, and the process returns to step 21.
  • If it is determined in step 28 that the tracking time is longer than the reference tracking time, the tracking time is set as the new reference tracking time in step 29, and the updated reference tracking time is held in step 30. The reference movement width is likewise updated (for example, to the minimum movement width observed during the accumulated tracking time, as described above), the updated reference movement width is held in step 32, and in step 33 the fact that stable tracking is being performed is output.
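  • One possible reading of steps 21 to 33 is sketched below as a small state object: the tracking time is accumulated while the movement width stays below the reference movement width, and once it exceeds the reference tracking time, stable tracking is reported and both references are updated. The class name, initial values, and the way the minimum movement width is tracked between updates are assumptions, since the patent leaves those details to the flowchart of FIG. 6.

      class StableTrackingJudge:
          """Rough sketch of the Embodiment 3 stability judgment."""

          def __init__(self, ref_width=5.0, ref_time=1.0):
              self.ref_width = ref_width      # reference movement width 8
              self.ref_time = ref_time        # reference tracking time 9
              self.tracking_time = 0.0
              self.min_width = None

          def update(self, width, dt):
              """width: movement width since the last detection; dt: elapsed time.
              Returns True while stable tracking is judged to be in progress."""
              if width >= self.ref_width:
                  self.tracking_time = 0.0    # step 25: reset, not stable
                  self.min_width = None
                  return False                # step 26
              self.tracking_time += dt        # step 27: accumulate the tracking time
              self.min_width = width if self.min_width is None else min(self.min_width, width)
              if self.tracking_time <= self.ref_time:
                  return False                # step 26: not yet judged stable
              # Steps 29-33: update the references and report stable tracking.
              self.ref_time = self.tracking_time
              self.ref_width = self.min_width
              return True

      judge = StableTrackingJudge(ref_width=5.0, ref_time=1.0)
      for w in (2.0, 1.5, 2.5, 1.0):
          print(judge.update(w, dt=0.5))      # False, False, True, True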
  • The present invention can be used to track the feature points of a face in order to track the three-dimensional direction of the face, enabling stable and robust tracking of that direction, and it can also be used for feature point detection and tracking for individual identification of faces, enabling stable and robust personal identification.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A device for locating a feature point on a face comprises: specific pattern position detecting means (3) for detecting the position of a specific pattern, namely an image pattern that includes a characteristic portion of a person's face and is stored in advance, or of an image pattern close to the specific pattern, within a captured image; invariant feature point position detecting means (2) for detecting the position of an invariant feature point included in the characteristic portion of the person's face; and output position integrating means (4) for determining the position of the characteristic portion in the captured image on the basis of the output of the invariant feature point position detecting means and the output of the specific pattern position detecting means, and for storing in the latter the image of the characteristic portion as a new specific pattern.
PCT/JP2000/004798 2000-07-17 2000-07-17 Dispositif de recherche d'un point caracteristique sur un visage WO2002007096A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2000/004798 WO2002007096A1 (fr) 2000-07-17 2000-07-17 Dispositif de recherche d'un point caracteristique sur un visage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2000/004798 WO2002007096A1 (fr) 2000-07-17 2000-07-17 Dispositif de recherche d'un point caracteristique sur un visage

Publications (1)

Publication Number Publication Date
WO2002007096A1 true WO2002007096A1 (fr) 2002-01-24

Family

ID=11736270

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2000/004798 WO2002007096A1 (fr) 2000-07-17 2000-07-17 Dispositif de recherche d'un point caracteristique sur un visage

Country Status (1)

Country Link
WO (1) WO2002007096A1 (fr)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04174309A (ja) * 1990-11-07 1992-06-22 Nissan Motor Co Ltd 運転車の眼位置検出装置及び状態検出装置
JPH06337998A (ja) * 1993-05-31 1994-12-06 Nec Corp 車両検出装置および車両追跡装置および車両監視装置
JPH08287216A (ja) * 1995-04-18 1996-11-01 Sanyo Electric Co Ltd 顔面内部位認識方法
JPH1139469A (ja) * 1997-07-24 1999-02-12 Mitsubishi Electric Corp 顔画像処理装置
US5926251A (en) * 1997-08-12 1999-07-20 Mitsubishi Denki Kabushiki Kaisha Eye image tracking apparatus
JP2000148977A (ja) * 1998-11-10 2000-05-30 Meidensha Corp 車両検出方法及びそれを実現するソフトウェアを記録した記録媒体並びにその方法を実現するための装置

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005078311A (ja) * 2003-08-29 2005-03-24 Fujitsu Ltd 顔部位の追跡装置、眼の状態判定装置及びコンピュータプログラム
JP2008501172A (ja) * 2004-05-28 2008-01-17 ソニー・ユナイテッド・キングダム・リミテッド 画像比較方法
JP2007212430A (ja) * 2006-08-07 2007-08-23 Kurabo Ind Ltd 写真測量装置及び写真測量システム
JP2010026804A (ja) * 2008-07-18 2010-02-04 Olympus Corp 画像処理装置、画像処理プログラム、および画像処理方法
US9111150B2 (en) 2010-02-12 2015-08-18 Yoichiro Ito Authentication system, and method for registering and matching authentication information
US9008360B2 (en) 2010-02-12 2015-04-14 Yoichiro Ito Authentication system, and method for registering and matching authentication information
WO2012032747A1 (fr) * 2010-09-06 2012-03-15 日本電気株式会社 Système de sélection de point caractéristique, procédé de sélection de point caractéristique, programme de sélection de point caractéristique
JP2012254221A (ja) * 2011-06-09 2012-12-27 Canon Inc 画像処理装置、画像処理装置の制御方法、およびプログラム
JP2013179687A (ja) * 2013-05-22 2013-09-09 Yoichiro Ito 光情報差分から生成される連結線を用いた認証方法およびシステム
JP2013225316A (ja) * 2013-05-22 2013-10-31 Yoichiro Ito 個体差認証のための光情報差分の抽出方法およびシステム
JP2017027602A (ja) * 2015-07-24 2017-02-02 株式会社リコー 対象追跡方法及び装置
KR20210060554A (ko) * 2018-12-10 2021-05-26 텐센트 테크놀로지(센젠) 컴퍼니 리미티드 얼굴 랜드마크 검출 방법과 장치, 컴퓨터 장치, 및 저장 매체
JP2022502751A (ja) * 2018-12-10 2022-01-11 ▲騰▼▲訊▼科技(深▲セン▼)有限公司 顔キーポイント検出方法、装置、コンピュータ機器及びコンピュータプログラム
KR102592270B1 (ko) * 2018-12-10 2023-10-19 텐센트 테크놀로지(센젠) 컴퍼니 리미티드 얼굴 랜드마크 검출 방법과 장치, 컴퓨터 장치, 및 저장 매체
US11915514B2 (en) 2018-12-10 2024-02-27 Tencent Technology (Shenzhen) Company Limited Method and apparatus for detecting facial key points, computer device, and storage medium

Similar Documents

Publication Publication Date Title
CN107438854B (zh) 使用移动设备捕获的图像执行基于指纹的用户认证的系统和方法
Kawaguchi et al. Detection of eyes from human faces by Hough transform and separability filter
US7127086B2 (en) Image processing apparatus and method
JP4863423B2 (ja) 虹彩を用いた身元確認システム及び方法並びにその方法を実行するための身元確認プログラムを格納したコンピュータ読取り可能な記録媒体
US8515136B2 (en) Image processing device, image device, image processing method
JP2006133946A (ja) 動体認識装置
KR102554391B1 (ko) 홍채 인식 기반 사용자 인증 장치 및 방법
WO2002071316A1 (fr) Procede de reconnaissance de l'iris humain de type sans contact reposant sur la correction de l'image irienne en rotation
KR101626837B1 (ko) 손가락 마디 및 지정맥 기반의 융합형 생체 인증 방법 및 그 장치
US20190392129A1 (en) Identity authentication method
JP2007140823A (ja) 顔照合装置、顔照合方法及びプログラム
US11756338B2 (en) Authentication device, authentication method, and recording medium
JP4860289B2 (ja) ロボット装置
JP2007004321A (ja) 画像処理装置及び入退室管理システム
WO2020195732A1 (fr) Dispositif de traitement d'image, procédé de traitement d'image, et support d'enregistrement dans lequel un programme est stocké
CN110796101A (zh) 一种嵌入式平台的人脸识别方法及系统
WO2002007096A1 (fr) Dispositif de recherche d'un point caracteristique sur un visage
US11380133B2 (en) Domain adaptation-based object recognition apparatus and method
JP3823760B2 (ja) ロボット装置
JP2008090452A (ja) 検出装置、方法およびプログラム
WO2022130616A1 (fr) Procédé d'authentification, dispositif de traitement d'informations et programme d'authentification
US20230386253A1 (en) Image processing device, image processing method, and program
CN112418078A (zh) 分数调制方法、人脸识别方法、装置及介质
KR101718244B1 (ko) 얼굴 인식을 위한 광각 영상 처리 장치 및 방법
JP2000067237A (ja) 人物識別装置

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase