JP2007080087A - Facial parts extraction method and face authentication device - Google Patents

Facial parts extraction method and face authentication device

Info

Publication number
JP2007080087A
Authority
JP
Japan
Prior art keywords
face
image
pixels
differential
facial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2005268969A
Other languages
Japanese (ja)
Other versions
JP4470848B2 (en)
Inventor
Atsuyuki Hirono
淳之 広野
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Electric Works Co Ltd
Original Assignee
Matsushita Electric Works Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Works Ltd
Priority to JP2005268969A
Publication of JP2007080087A
Application granted
Publication of JP4470848B2
Legal status: Expired - Fee Related

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a facial part extraction method capable of extracting a facial part image consisting of a stable binary image, unaffected by image brightness, contrast, illumination direction, and the like.
SOLUTION: A facial part extraction section 4 converts a grayscale image into a differential intensity image by differential processing, sorts the pixels in descending order of differential intensity within a predefined area of the differential intensity image that contains the facial part to be extracted, then sets a threshold so that the region selected by a specified number of pixels associated with the facial part is treated as a region of intense density change and the remaining region as a region of mild density change, and outputs the differential intensity image binarized with this threshold.
COPYRIGHT: (C)2007,JPO&INPIT

Description

The present invention relates to a facial part extraction method and a face authentication apparatus required for face authentication.

As a device for searching for facial parts such as the eyes and mouth used in face authentication, a facial part search method has been proposed that detects the gray levels of the image within the face region and detects the facial parts on the basis of those gray levels (Patent Document 1).

An apparatus has also been proposed that determines the eye and mouth regions using pixel-value information such as gray-level changes and histograms, and cuts out the facial part regions (Patent Document 2).
JP 2003-281539 A (left column of page 1); JP H08-221591 A (paragraphs 0015 and 0023)

However, in methods such as those disclosed in Patent Documents 1 and 2 that detect or cut out facial parts using gray levels, the result is affected by image brightness, contrast, illumination direction, and so on. When a facial part image is extracted as a binarized image, for example, there is the problem that the binarized image varies under these influences.

The present invention has been made in view of the above points, and its object is to provide a facial part extraction method and a face authentication apparatus capable of extracting a facial part image consisting of a stable binarized image without being affected by image brightness, contrast, illumination direction, and the like.

To achieve the above object, the facial part extraction method of claim 1 is a method in which, after a facial region is searched for in a grayscale image of a person, the pixels of intense density change in the grayscale image of the facial region and their neighboring pixels are selected, on the assumption that they carry information indicating the facial parts, and a binarized image is extracted as the facial part image. After the facial region search, the grayscale image of the facial region is differentiated into a differential intensity image; within a predetermined area of this differential intensity image containing the facial part to be extracted, the pixels are sorted in order of differential intensity; after sorting, pixels are selected in order of differential intensity up to the designated pixel count associated with the facial part; and a differential intensity binarized image, in which the selected region and the remaining region are binarized, is extracted as the facial part image.

According to the invention of claim 1, a facial part image binarized with a threshold derived from the number of pixels of the facial part contained in the predetermined area subject to the differential intensity sorting can be extracted, so stable facial part extraction is possible without being affected by image brightness, contrast, illumination direction, and the like.

In the facial part extraction method of claim 2, in the invention of claim 1, the differential intensities are sorted in descending order, the designated pixel count associated with the facial part is selected starting from the largest differential intensities, and binarization is performed with the selected region treated as a region of intense density change and the remaining region as a flat region of mild density change.

According to the invention of claim 2, facial parts are extracted by focusing on regions of large differential intensity and intense density change, and, as in claim 1, stable facial part extraction is possible without being affected by image brightness, contrast, illumination direction, and the like.

In the facial part extraction method of claim 3, in the invention of claim 1, the differential intensities are sorted in ascending order, the designated pixel count associated with the facial part is selected starting from the smallest differential intensities, and binarization is performed with the selected region treated as a flat region of mild density change and the remaining region as a region of intense density change.

According to the invention of claim 3, facial parts are extracted by focusing on flat regions of small differential intensity and mild density change, and, as in claim 1, stable facial part extraction is possible without being affected by image brightness, contrast, illumination direction, and the like.

In the facial part extraction method of claim 4, the number of pixels constituting the facial part in any of claims 1 to 3 is obtained in advance from a reference face image, and the designated pixel count is determined from this reference pixel count according to the size of the face to be extracted.

According to the invention of claim 4, the binarization threshold can be set according to the size of the face to be extracted, so facial part extraction can be performed accurately.

In the facial part extraction method of claim 5, in any of the inventions of claims 1 to 4, for parts that are effective for face authentication but vary greatly between individuals, the number of pixels required for face authentication is obtained in advance, separately from the above designated pixel count.

According to the invention of claim 5, when face authentication is performed on the basis of the extracted facial part image, authentication is also possible using facial parts such as the wrinkles beside the nose, the jaw line, and eyeglass frames, whose positions vary greatly between individuals and are therefore difficult to specify by position.

In the facial part extraction method of claim 6, in any of the inventions of claims 1 to 5, the region extracted as the differential intensity binarized image is subjected to dilation processing.

According to the invention of claim 6, information unique to the facial part carried by the pixels surrounding those of large differential intensity can also be used.
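The dilation of claim 6 can be sketched as a standard 3x3 binary dilation. This is an illustrative implementation, not code from the patent; the 3x3 structuring element is an assumption.

```python
def dilate(binary):
    """3x3 binary dilation: a pixel becomes 1 if any pixel in its 3x3
    neighborhood is 1, growing the extracted region so that the neighbors
    of high-differential-intensity pixels are also included."""
    h, w = len(binary), len(binary[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(
                binary[yy][xx]
                for yy in range(max(0, y - 1), min(h, y + 2))
                for xx in range(max(0, x - 1), min(w, x + 2))))
    return out
```

Applied to a single isolated pixel, the region grows to its full 3x3 neighborhood, which is what allows the surrounding pixels' information to enter the matching step.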

In the facial part extraction method of claim 7, in any of the inventions of claims 1 to 6, the predetermined area is one of a set of areas grouped so that each area is affected in the same way by the illumination direction.

According to the invention of claim 7, the sorting by differential intensity can be performed free of the influence of the brightness distribution, and a facial part image unaffected by differences in brightness distribution can be extracted.

In the facial part extraction method of claim 8, in any of the inventions of claims 1 to 7, when, after the sorting, the differential intensity near the lowest rank within the designated pixel count does not satisfy a predetermined differential intensity value, the imaging parameters of the imaging camera are controlled so that it exceeds the predetermined differential intensity value.

According to the invention of claim 8, the differential intensity required for proper facial part extraction can be secured.

In the facial part extraction method of claim 9, in the inventions of claims 1 to 8, when the density value distribution of the grayscale image does not span a predetermined range, the imaging parameters of the imaging camera are controlled so that the density value distribution spans the predetermined range.

According to the invention of claim 9, the differential intensity required for proper facial part extraction can likewise be secured.

The face authentication apparatus of claim 10 comprises facial part extraction means for extracting facial parts as a binarized image by the facial part extraction method of any of claims 1 to 9, and performs face authentication judgment processing by taking in the binarized image from the facial part extraction means and applying gray-level matching or density gradient direction matching to the pixel values at the locations associated with the facial parts.

According to the invention of claim 10, face authentication is performed using facial parts that are extracted stably without being affected by image brightness, contrast, illumination direction, and the like, so face authentication can be performed with high accuracy.

Since the present invention can extract a facial part image binarized with a threshold derived from the number of pixels of the facial part contained in the predetermined area subject to the differential intensity sorting, it has the effect of providing a facial part extraction method capable of stable facial part extraction without being affected by image brightness, contrast, illumination direction, and the like, and a face authentication apparatus capable of face authentication with high accuracy.

Embodiments of the present invention are described below.

(Embodiment 1)
FIG. 1 shows the overall configuration of a face authentication apparatus X employing the facial part extraction method of this embodiment. The face authentication apparatus X comprises an imaging camera 1, an image data input unit 2, a face position search unit 3, a facial part extraction unit 4 employing this embodiment, a face authentication unit 5 that performs face authentication using the extracted facial part image, and so on.

The face authentication apparatus X performs authentication processing according to the flowchart shown in FIG. 2(a).

That is, captured image data consisting of, for example, a grayscale image of the upper body of the person to be authenticated, photographed by the imaging camera 1, is input to the image data input unit 2 (step S1). The image data input unit 2 stores the captured image data in an image data buffer 2a.
Next, the face position search unit 3 searches for and cuts out the face position from the captured image in the image data buffer 2a (step S3).

The face position search method used by the face position search unit 3 is not particularly limited, and any known method may be adopted as appropriate. In one such method, a density gradient direction image is extracted from the captured image and from a face detection template image prepared in advance; for the template, the distance between a reference point on the extracted density gradient direction image and each referenced coordinate point, and the angle at which the line connecting the reference point and the coordinate point crosses the horizontal axis through the coordinate point, are extracted; this distance and angle information is separated into shape features for each density gradient direction value of the face detection template image; a voting process for reference point candidates in the density gradient direction image of the captured image is then performed based on the density gradient direction value at each referenced coordinate point and the above shape features; and the face position is detected from the voting result.

The grayscale image at the face position found by the face position search unit 3, that is, the face image (see, for example, FIG. 3(a)), is sent to the facial part extraction unit 4, which extracts binarized images of facial parts such as the eyes and mouth (step S4).

Here, the facial part extraction unit 4 selects the pixels of the face image with intense density change and their neighboring pixels, on the assumption that they carry information indicating the facial parts, and extracts a binarized image as the facial part image, operating according to the flowchart of FIG. 2(b). That is, the grayscale image of the facial region is first input (step S30) and differentiated in step S31 into a differential intensity image; then, within a predetermined area of this differential intensity image containing the facial part to be extracted, the pixels are sorted in descending order of differential intensity (step S32).
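Steps S31 and S32 can be sketched as follows. The patent does not specify the differential operator in this passage, so a simple forward-difference L1 gradient magnitude is assumed here purely for illustration.

```python
def differential_intensity(gray):
    """Step S31 (illustrative): approximate the differential intensity of a
    grayscale image with forward differences, |dI/dx| + |dI/dy|, clamping
    at the image border."""
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = gray[y][min(x + 1, w - 1)] - gray[y][x]
            dy = gray[min(y + 1, h - 1)][x] - gray[y][x]
            out[y][x] = abs(dx) + abs(dy)
    return out

def sort_pixels_by_intensity(intensity):
    """Step S32: list (value, y, x) for every pixel of the area and sort in
    descending order of differential intensity."""
    pixels = [(v, y, x) for y, row in enumerate(intensity)
              for x, v in enumerate(row)]
    pixels.sort(reverse=True)
    return pixels
```

A vertical edge, for example, yields a column of large intensity values, and those pixels appear first in the sorted list.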

Here, the predetermined area is one of a set of areas grouped so that the difference in brightness distribution produced by the direction of the illumination cast on the subject at the time of shooting is the same within each area. For a face, for example, the following three arrangements of face and illumination are assumed:
(1) When the illumination is placed to the side of the face, either the left or the right side of the face is bright and the opposite side is dark.
(2) When the illumination is placed above or below the face, either the upper or the lower part of the face is bright and the opposite part is dark.
(3) When the illumination is placed diagonally above or below the face, dividing the face into four, for example, the side near the illumination is bright and the opposite side is dark.

If the face is divided into four, it is easy to see that the same result is obtained whether the illumination is placed to the side or above or below.

If the relationship between the illumination and the face position is known in advance, the area may be divided accordingly; if it is not known, a division that handles any arrangement, for example division into four, is adopted.

Furthermore, if it can be assumed that there is little difference in brightness between the vertical and horizontal directions, the whole face may be treated as a single area.

And when the face is divided into four or two, it is easy to see that the same effect is obtained even if the difference in brightness between top, bottom, left, and right is small.
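The grouping into four areas can be sketched as plain image slicing; a hypothetical illustration, assuming an even half-and-half split of the face image.

```python
def split_into_quadrants(img):
    """Divide a face image into four areas (upper-left, upper-right,
    lower-left, lower-right) so that the differential intensity sorting
    can be run per area, each area sharing one illumination condition."""
    h, w = len(img), len(img[0])
    t, l = h // 2, w // 2
    return [
        [row[:l] for row in img[:t]],  # upper-left
        [row[l:] for row in img[:t]],  # upper-right
        [row[:l] for row in img[t:]],  # lower-left
        [row[l:] for row in img[t:]],  # lower-right
    ]
```

A left-right split for case (1) would keep only the first column split; treating the whole face as one area simply skips the split.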

FIG. 3(a) shows the case where it is known in advance that there is no large difference in brightness between the upper, lower, left, and right parts of the face, so the whole face is treated as a single area over which the differential intensities are sorted; FIG. 3(b) shows the face divided into left and right halves; and FIG. 3(c) shows the face divided into four quadrants.

Bubble sort, for example, is adopted as the algorithm for sorting by differential intensity. In this bubble sort, the value A of the last element of a randomly ordered array is compared with the value B of the element one position above; if A &gt; B, the two elements are swapped. Next, the element now one position above the last (the element with value A) is compared with the value C of the element two positions above the last; if, for example, A &gt; C, those two elements are swapped. When such comparisons and swaps have been carried out up to the first element of the array, the first element holds the largest value. Starting again from the last element and repeating up to the second element, the second element comes to hold the second largest value. By repeating these comparisons and swaps in this way, all the elements of the array are sorted.
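The bubble sort described above can be written directly; a minimal sketch in which each outer pass bubbles the largest remaining value from the end of the array toward the front, producing descending order.

```python
def bubble_sort_desc(values):
    """Bubble sort into descending order, as described in the text:
    each pass compares from the last element upward, swapping whenever the
    lower element is larger, so pass i leaves the i-th largest value at
    position i."""
    a = list(values)
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1, i, -1):
            if a[j] > a[j - 1]:
                a[j], a[j - 1] = a[j - 1], a[j]
    return a
```

In practice the (value, y, x) pixel tuples would be sorted this way; any O(n log n) sort gives the same ordering, bubble sort is simply the example the text chooses.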

When the sorting is finished as described above, step S34 judges whether the differential intensity condition is adequate. If it is judged adequate, a threshold is set so that the region selected from the top of the differential intensity ranking, up to the designated pixel count associated with the facial part to be extracted, is treated as a region of intense density change and the remaining region as a flat region of mild density change, and the differential intensity image binarized with this threshold is output (step S35). The pixel count designated here is obtained by apportioning the pixel count associated with each facial part among the grouped areas. The pixel counts designated for the facial parts are the numbers of pixels making up the eyes, mouth, eyebrows, and nostrils. For parts such as the wrinkles beside the nose and mouth, the jaw line, and eyeglass frames, whose positions vary more between individuals than the eyes and mouth and are hard to specify by position but which are effective for face authentication, the number of pixels needed to select pixels effective for authentication may be measured separately and used at authentication time.
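Step S35 amounts to selecting the top-k pixels of the differential intensity image and binarizing, where k is the designated pixel count; a sketch under that reading (the tie-breaking by pixel position is an implementation assumption).

```python
def binarize_top_k(intensity, k):
    """Set the k pixels of highest differential intensity to 1 (intense
    density change) and all others to 0 (flat region); the binarization
    threshold is thus implied by the designated pixel count k rather than
    being a fixed intensity value."""
    ranked = sorted(
        ((v, y, x) for y, row in enumerate(intensity)
         for x, v in enumerate(row)),
        reverse=True)
    out = [[0] * len(intensity[0]) for _ in intensity]
    for _, y, x in ranked[:k]:
        out[y][x] = 1
    return out
```

Because exactly k pixels are marked regardless of the absolute intensity values, the same facial part pixels survive even when overall brightness or contrast shifts all intensities up or down, which is the stability the text claims.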

FIG. 4 shows binarized differential intensity images corresponding to the grouping of FIG. 3(b).

For example, the grayscale image of the face position captured without illumination is FIG. 4(a-1), and the image sorted by differential intensity and binarized under the above conditions is FIG. 4(a-2). With illumination cast on the person's face from the left, the grayscale image is FIG. 4(b-1) and the binarized image is FIG. 4(b-2); with illumination from the front, the grayscale image is FIG. 4(c-1) and the binarized image is FIG. 4(c-2); and with illumination from the right, the grayscale image is FIG. 4(d-1) and the binarized image is FIG. 4(d-2).

The designated pixel count used to set the threshold described above is obtained by determining in advance, using a reference face image, the number of pixels making up each facial part to be extracted, and registering it in a storage unit 4a. By setting the binarization threshold after sorting by differential intensity using this pixel count, the required facial parts can be extracted stably, as shown in FIG. 4, without being affected by image brightness, contrast, or illumination direction (illumination conditions).

By contrast, when the differential intensity image is binarized with a fixed threshold, the grayscale image of the face position captured without illumination, FIG. 5(a-1), yields FIG. 5(a-2); the image with illumination from the left, FIG. 5(b-1), yields FIG. 5(b-2); the image with illumination from the front, FIG. 5(c-1), yields FIG. 5(c-2); and the image with illumination from the right, FIG. 5(d-1), yields FIG. 5(d-2).

In FIG. 5 it can be seen that the differential intensity binarized image changes under the influence of the illumination direction. With such a fixed threshold, the result is also affected by image brightness and contrast: in a high-contrast grayscale image the differential intensity values become large, while in a low-contrast grayscale image they become small, so the resulting differential intensity binarized image varies. In other words, the facial part images used for face authentication are not stable, and the accuracy of face authentication falls.

When step S34 described above judges that the differential intensity is not adequate, in this embodiment the facial part extraction unit 4 sends a signal to the imaging camera 1 to change its imaging parameters (for example, contrast and offset value) (step S36), and the processing from step S1 of FIG. 2 is performed again.

That is, the lowest differential intensity value may fail to satisfy the predetermined differential intensity value when the mid-tones have low contrast due to backlighting, or when the image is saturated at the upper or lower limit because it is too bright or too dark. For example, the luminance histogram of the properly contrasted face image of FIG. 6(a) is as in FIG. 6(b) and its differential intensity histogram as in FIG. 6(c), whereas the luminance histogram of the low-contrast face image of FIG. 7(a) is as in FIG. 7(b) and its differential intensity histogram as in FIG. 7(c). A state of small differential intensity values is a flat state with no brightness change, in which the direction values may scatter because of slight noise. In such cases, as described above, the facial part extraction unit 4 sends imaging parameters to the imaging camera 1 to obtain adequate differential intensity.
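The adequacy check behind step S34 can be sketched as a simple histogram heuristic; the specific thresholds below are illustrative assumptions, not values from the patent.

```python
def contrast_adequate(gray, min_spread=64, low=5, high=250, max_saturated=0.5):
    """Heuristic check on an 8-bit grayscale image: reject frames whose
    gray levels are too flat (low-contrast mid-tones, e.g. backlighting)
    or largely stuck at the limits (too bright or too dark); the caller
    would then adjust the camera's contrast/offset parameters and retry."""
    values = [v for row in gray for v in row]
    spread = max(values) - min(values)
    saturated = sum(1 for v in values if v <= low or v >= high) / len(values)
    return spread >= min_spread and saturated < max_saturated
```

A perfectly flat frame fails on spread, and a mostly clipped frame fails on the saturated fraction, mirroring the two failure modes the text describes.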

The determination to control the imaging camera 1 may also be made in other ways: based on the average value of the differential intensity distribution, based on an estimate of the lowest value inferred from the differential intensity distribution partway through the sorting of the differential intensities, or based on the state of the distribution of the pixel values (gray values) of the grayscale pixels before differential processing, for example controlling the imaging camera 1 when the distribution does not satisfy a predetermined breadth.

As described above, in the facial part extraction of the present embodiment, after the differential intensity values within the predetermined area including the facial part are sorted, a specified number of pixels, associated in advance with the facial part of an actually measured reference face, are selected from the top of the differential intensity values; the selected portion is binarized as a region of sharp density change and the remainder as a flat region without sharp density change, and the facial part image is extracted. A binarized image containing the facial parts necessary for face authentication is therefore obtained stably, without being affected by image brightness, contrast, illumination direction, and the like, and as a result the accuracy of face authentication is also improved.
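The sort-and-select binarization described above can be sketched in code. The following is a minimal illustration in Python with NumPy, not part of the patent disclosure; the function name top_k_binarize and the use of a full sort are illustrative assumptions.

```python
import numpy as np

def top_k_binarize(diff_intensity, k):
    """Binarize a differential-intensity image by rank rather than by a
    fixed threshold: the k pixels with the largest differential intensity
    become 1 (sharp density change, i.e. facial-part candidates) and all
    other pixels become 0 (flat regions)."""
    flat = diff_intensity.ravel()
    mask = np.zeros(flat.shape, dtype=np.uint8)
    if k > 0:
        # Sort pixel indices by descending differential intensity; the
        # intensity of the k-th ranked pixel acts as the effective threshold.
        order = np.argsort(flat)[::-1]
        mask[order[:k]] = 1
    return mask.reshape(diff_intensity.shape)
```

Because the threshold is the rank-k intensity rather than an absolute value, the same number of facial-part pixels is selected regardless of overall image brightness or contrast, which is the point of the method.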

In the case described above, the number of facial-part pixels used as the binarization threshold is a value measured in advance on a reference facial part; however, if the size of the facial part to be extracted changes, the specified number of pixels may be varied in proportion. For example, if the size of the face to be authenticated is measured by its area or the like, compared with the size of the face on which the reference number of pixels was measured, and the reference number of pixels is scaled according to the comparison result to determine the specified number of pixels, a facial part image matching the size of the facial parts of the person to be authenticated can be extracted.
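The proportional adjustment described in this paragraph amounts to a simple scaling; a one-function sketch, illustrative only, assuming face size is measured as an area in pixels:

```python
def scaled_pixel_count(base_count, base_face_area, measured_face_area):
    """Scale the reference facial-part pixel count in proportion to the
    ratio of the measured face area to the reference face area."""
    return round(base_count * measured_face_area / base_face_area)
```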

Further, in the cases described above, the sorting of differential intensities is performed over the left and right halves of the face image, over its upper, lower, left and right regions, or over the whole image as the predetermined areas; however, as shown in FIG. 8, eye, mouth, and further nose regions may be defined in advance on a reference face image, the number of pixels in each region (that is, the number of pixels of the facial part) and the position of each region registered in the storage unit 4a, and the facial part regions of this registered reference face image used as the predetermined areas in which the sorting by differential intensity value is performed.

FIG. 9 shows cases where the sorting by differential intensity order is performed in the eye and mouth regions. The grayscale image of the face position captured without illumination, shown in FIG. 9(a-1), yields FIG. 9(a-2); the grayscale image captured with illumination from the left side of the person's face, shown in FIG. 9(b-1), yields FIG. 9(b-2); the grayscale image captured with illumination from the front of the person's face, shown in FIG. 9(c-1), yields FIG. 9(c-2); and the grayscale image captured with illumination from the right side of the person's face, shown in FIG. 9(d-1), yields FIG. 9(d-2).

The sorting by differential intensity described above arranges pixels in descending order of differential intensity value, but they may instead be sorted in ascending order. This is effective when many pixels of large differential intensity are to be selected, for example when the pixels constituting the facial parts included in the predetermined sorting area make up 50% or more of the total number of pixels; in that case the time for the sorting process can be shortened by sorting in ascending rather than descending order of differential intensity. When the bubble-sort method described above is used, it suffices to swap positions in the array when the compared element value is smaller. In this case, the portion selected from the bottom of the order, by the specified number of pixels associated with the facial part measured in advance, is binarized as a flat region without sharp density change, and the remaining portion as a region of sharp density change.

Further, while the method described above can extract the regions of large differential intensity, a density change may well still be present one pixel outside the extracted region. That is, where the differential intensity is small the direction values vary widely and lack reliability, so such pixels are sorted out and excluded as flat regions; yet pixels carrying some differential intensity can be considered to hold information specific to the object. Therefore, after the binarized image described above is extracted, a dilation process may be applied, and the dilated region extracted as the region of sharp density change, that is, as the facial part region.
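The dilation step can be sketched as follows; this is an illustrative pure-Python version assuming a 4-neighbour structuring element, which the patent does not specify.

```python
def dilate(mask, iterations=1):
    """Binary dilation with a 4-neighbour cross: grow the extracted
    facial-part region outward by one pixel per iteration, so that
    density changes just outside the binarized region are included."""
    h, w = len(mask), len(mask[0])
    for _ in range(iterations):
        out = [row[:] for row in mask]  # keep the original pixels set
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            out[ny][nx] = 1
        mask = out
    return mask
```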

FIG. 10 shows cases where the sorting by differential intensity order is performed in the eye and mouth regions and a dilation process is applied. The grayscale image of the face position captured without illumination, shown in FIG. 10(a-1), yields FIG. 10(a-2); the grayscale image captured with illumination from the left side of the person's face, shown in FIG. 10(b-1), yields FIG. 10(b-2); the grayscale image captured with illumination from the front of the person's face, shown in FIG. 10(c-1), yields FIG. 10(c-2); and the grayscale image captured with illumination from the right side of the person's face, shown in FIG. 10(d-1), yields FIG. 10(d-2).

When the binarized image of differential intensities has been created by the facial part extraction unit 4 as described above, the face authentication unit 5 takes in that binarized image and performs face authentication determination processing by grayscale matching or by the density gradient direction matching described below, using the pixel values at the locations associated with the facial parts (step S4), and then outputs the authentication result to the outside (step S5).

Any appropriate method may be used for face authentication in the face authentication unit 5; one example is template matching using density gradient direction values. In this case, the density gradient direction value is determined from the magnitudes of the difference values between adjacent pixels, as shown in the following equations. Reliable template matching therefore requires reliable density gradient direction values, and for that purpose this example requires a facial part image with high contrast between adjacent pixels.

dx = (c + 2f + i) − (a + 2d + g)
dy = (g + 2h + i) − (a + 2b + c)
θ = tan⁻¹(dy/dx)
|G(i,j)| = [dx²(i,j) + dy²(i,j)]^(1/2)

The above equations are for a Sobel filter with a mask size of 3×3, where dx and dy are the differential values of the pixel in the x and y directions, a to i are the pixel values (density values) of the pixel of interest and its eight neighbouring pixels, θ is the density gradient direction, and |G(i,j)| is the differential intensity value at pixel (i,j).
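The Sobel computation above can be checked directly in code. A minimal illustrative sketch, not part of the patent; atan2 is used in place of tan⁻¹(dy/dx) so that the quadrant of the gradient direction is resolved:

```python
import math

def sobel_at(window):
    """window: 3x3 neighbourhood [[a, b, c], [d, e, f], [g, h, i]] of
    pixel values around the pixel of interest e.
    Returns (dx, dy, theta, magnitude) per the Sobel equations above."""
    (a, b, c), (d, e, f), (g, h, i) = window
    dx = (c + 2 * f + i) - (a + 2 * d + g)  # right column minus left column
    dy = (g + 2 * h + i) - (a + 2 * b + c)  # bottom row minus top row
    theta = math.atan2(dy, dx)              # density gradient direction
    magnitude = math.hypot(dx, dy)          # differential intensity |G|
    return dx, dy, theta, magnitude
```

For example, a vertical edge (dark left column, bright right column) gives a large dx and zero dy, so the gradient direction points along the x axis.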

Next, an example of how the face authentication unit 5 computes the correlation value for template matching will be briefly described, for the case where face authentication is performed after the white portions indicating the facial parts in the differential intensity binarized image output from the facial part extraction unit 4 have been determined as the portions to be template-matched.

Example 1

Let the input image A(I,J), which the face authentication unit 5 receives from the image input unit 2 through the image data buffer 2a, and the template image B(I,J), consisting of a face image registered for authentication, both have 256 gray levels; let the per-pixel difference value be C(I,J) = |A(I,J) − B(I,J)|; and let N be the total number of pixels subject to correlation computation. The template matching correlation value is then computed as

correlation value = 1 − (ΣC(I,J))/N/256.

Example 2

This example uses a computation with different weights. With the input image A(I,J), which the face authentication unit 5 receives from the image input unit 2 through the image data buffer 2a, and the template image B(I,J), consisting of a face image registered for authentication, both at 256 gray levels, a coefficient α is applied when the per-pixel difference value C(I,J) is computed. This α is a specified value of 0 or more and less than 1 for the dilated regions described above, and 1 for the regions that were computation targets from the start. The per-pixel difference value C(I,J) is therefore

C(I,J) = α × |A(I,J) − B(I,J)|.

The total number N of pixels subject to correlation computation is the sum of the number of pixels n before dilation and the number of pixels m added by dilation (N = n + m).

The template matching correlation value is then computed as

correlation value = 1 − (ΣC(I,J))/K/256   (where K = n + α × m).

It is reasonable for the computation method of this example to weight the correlation computation to the extent that the dilated pixels lack reliability, that is, to weight them so that they influence the correlation value less than the pixels that were computation targets from the start.

For example, let the difference value at the pixels that were computation targets from the start, before dilation, be C(I,J) = 64 with a total of N = n = 100 pixels, and let the difference value at the pixels added by dilation be C(I,J) = 32 with a total of m = 100 pixels. The correlation value before dilation is then 0.75, and the correlation value after dilation (computed without weighting) is 0.8125.

On the other hand, with the weighting coefficient α set to 0.1 the correlation value after dilation becomes 0.7614 (rounded at the fifth decimal place), and with α set to 0.8 it becomes 0.8056 (rounded at the fifth decimal place).

That is, with the computation method of this example, making the coefficient α small yields a correlation value that takes the dilated pixels into account and yet remains close to the value before dilation.
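The weighted correlation of Example 2 (Example 1 is the special case α = 1 with no dilated pixels) can be sketched and checked against the worked numbers above. The uniform per-pixel difference values are the simplification used in the example, and the function name is illustrative.

```python
def weighted_correlation(diff_orig, n, diff_dilated, m, alpha):
    """Correlation value per Example 2: the n original pixels carry
    weight 1 and the m dilated pixels carry weight alpha, with the
    normalizer K = n + alpha * m. diff_orig and diff_dilated are the
    per-pixel |A - B| difference values, assumed uniform here."""
    total = diff_orig * n + alpha * diff_dilated * m
    k = n + alpha * m
    return 1 - total / k / 256
```

With the figures above (C = 64 over n = 100 original pixels, C = 32 over m = 100 dilated pixels) this reproduces 0.75 before dilation, 0.8125 unweighted, 0.7614 for α = 0.1, and 0.8056 for α = 0.8.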

FIG. 1 is an overall block diagram of a face authentication apparatus using one embodiment.
FIG. 2(a) is a flowchart explaining the operation of the face authentication apparatus using one embodiment, and FIG. 2(b) is a flowchart explaining the operation of the facial part extraction unit in the face authentication apparatus.
FIG. 3 is an explanatory diagram of the grouping of a face image in one embodiment.
FIG. 4 is an explanatory diagram of an example of facial part image extraction in one embodiment, where the predetermined areas in which differential intensities are sorted are the left and right halves of the face image.
FIG. 5 is an explanatory diagram of an example of facial part image extraction in a comparative example.
FIG. 6 is an explanatory diagram of a face image of appropriate contrast used for facial part extraction in one embodiment, where (a) is an example face image, (b) is a luminance distribution histogram, and (c) is a differential intensity distribution histogram.
FIG. 7 is an explanatory diagram of a face image of insufficient contrast used for facial part extraction in one embodiment, where (a) is an example face image, (b) is a luminance distribution histogram, and (c) is a differential intensity distribution histogram.
FIG. 8 is an explanatory diagram for the case in one embodiment where the predetermined areas in which differential intensities are sorted are the eyes and the mouth.
FIG. 9 is an explanatory diagram of an example of facial part image extraction in one embodiment where the predetermined areas in which differential intensities are sorted are the eyes and the mouth.
FIG. 10 is an explanatory diagram of an example of facial part image extraction in one embodiment where a dilation process is performed and the predetermined areas in which differential intensities are sorted are the eyes and the mouth.

Explanation of symbols

1 imaging camera
2 image data input unit
2a image data buffer
3 face position search unit
4 facial part extraction unit
4a storage unit
5 face authentication unit
X face authentication apparatus

Claims (10)

1. A facial part extraction method in which, after a face region is searched for in a grayscale image of a captured person, pixels of sharp density change in the grayscale image of the face region and their neighbouring pixels are selected, on the assumption that these pixels contain information indicating facial parts, and a binarized image is extracted as a facial part image, the method comprising: after the face region search, differentiating the grayscale image of the face region into a differential intensity image; sorting the pixels by differential intensity within a predetermined area of the differential intensity image that includes the facial part to be extracted; after the sorting, selecting, on the basis of the differential intensity order, a specified number of pixels associated with the facial part; and extracting, as the facial part image, a differential intensity binarized image in which the selected portion and the remaining portion are binarized.
2. The facial part extraction method according to claim 1, wherein the sorting is in descending order of differential intensity, the specified number of pixels associated with the facial part are selected from the largest differential intensities, and the selected portion is binarized as a region of sharp density change and the remaining portion as a flat region without sharp density change.
3. The facial part extraction method according to claim 1, wherein the sorting is in ascending order of differential intensity, the specified number of pixels associated with the facial part are selected from the smallest differential intensities, and the selected portion is binarized as a flat region without sharp density change and the remaining portion as a region of sharp density change.
4. The facial part extraction method according to any one of claims 1 to 3, wherein the number of pixels constituting the facial part is obtained in advance from a reference face image, and the specified number of pixels is determined according to the size of the face to be extracted on the basis of this reference number of pixels.
5. The facial part extraction method according to any one of claims 1 to 4, wherein, for a portion that is effective for face authentication and shows large individual differences, the number of pixels necessary for face authentication is obtained in advance separately from the specified number of pixels.
6. The facial part extraction method according to any one of claims 1 to 5, wherein the region extracted as the differential intensity binarized image is subjected to a dilation process.
7. The facial part extraction method according to any one of claims 1 to 6, wherein the predetermined area is an area grouped from regions that are affected in the same way by the illumination direction.
8. The facial part extraction method according to any one of claims 1 to 7, wherein, when after the sorting the differential intensity near the lowest position within the specified number of pixels does not satisfy a predetermined differential intensity value, an imaging parameter of the imaging camera is controlled so that the predetermined differential intensity value is exceeded.
9. The facial part extraction method according to any one of claims 1 to 8, wherein, when the density value distribution of the grayscale image does not satisfy a predetermined breadth of distribution, an imaging parameter of the imaging camera is controlled so that the density value distribution satisfies the predetermined breadth of distribution.
10. A face authentication apparatus comprising facial part extraction means for extracting facial parts as a binarized image by the facial part extraction method according to any one of claims 1 to 9, the apparatus taking in the binarized image from the facial part extraction means and performing face authentication determination processing by grayscale matching or density gradient direction matching using the pixel values at locations associated with the facial parts.

JP2005268969A 2005-09-15 2005-09-15 Facial part extraction method and face authentication device Expired - Fee Related JP4470848B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2005268969A JP4470848B2 (en) 2005-09-15 2005-09-15 Facial part extraction method and face authentication device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2005268969A JP4470848B2 (en) 2005-09-15 2005-09-15 Facial part extraction method and face authentication device

Publications (2)

Publication Number Publication Date
JP2007080087A true JP2007080087A (en) 2007-03-29
JP4470848B2 JP4470848B2 (en) 2010-06-02

Family

ID=37940311

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2005268969A Expired - Fee Related JP4470848B2 (en) 2005-09-15 2005-09-15 Facial part extraction method and face authentication device

Country Status (1)

Country Link
JP (1) JP4470848B2 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010026858A (en) * 2008-07-22 2010-02-04 Panasonic Corp Authentication imaging apparatus
JP2011233059A (en) * 2010-04-30 2011-11-17 Seiko Epson Corp Image processing apparatus, image processing method, holding robot and program
US8641622B2 (en) 2004-10-06 2014-02-04 Guided Therapy Systems, Llc Method and system for treating photoaged tissue
JP2014179090A (en) * 2013-03-14 2014-09-25 Zazzle Com Inc Segmentation of image based on color and color difference
JP2015525914A (en) * 2012-06-28 2015-09-07 アルカテル−ルーセント Method and system for generating a high resolution video stream
US9262665B2 (en) 2010-06-30 2016-02-16 Opticon Sensors Europe B.V. Decoding method and decoding processing device
CN112863010A (en) * 2020-12-29 2021-05-28 宁波友好智能安防科技有限公司 Video image processing system of anti-theft lock


Also Published As

Publication number Publication date
JP4470848B2 (en) 2010-06-02

Similar Documents

Publication Publication Date Title
CN105956578B (en) A kind of face verification method of identity-based certificate information
CN111401372B (en) Method for extracting and identifying image-text information of scanned document
JP4470848B2 (en) Facial part extraction method and face authentication device
KR101656566B1 (en) Device to extract biometric feature vector, method to extract biometric feature vector and program to extract biometric feature vector
US7492926B2 (en) Method for identifying a person from a detected eye image
EP2500862B1 (en) Fake-finger determination device, fake-finger determination method and fake-finger determination program
US8879847B2 (en) Image processing device, method of controlling image processing device, and program for enabling computer to execute same method
KR101445281B1 (en) Face image detecting device, face image detecting method, and computer readable medium with face image detecting program
CN110998598A (en) Detection of manipulated images
US20150016679A1 (en) Feature extraction device, feature extraction method, and feature extraction program
US9633284B2 (en) Image processing apparatus and image processing method of identifying object in image
JP4783331B2 (en) Face recognition device
JP2013522754A (en) Iris recognition apparatus and method using a plurality of iris templates
CN109584202A (en) Image processing apparatus, method and non-transitory computer-readable storage media
JP6123975B2 (en) Feature amount extraction apparatus and feature amount extraction method
US11532148B2 (en) Image processing system
JP5955031B2 (en) Face image authentication device
CN105718931A (en) System And Method For Determining Clutter In An Acquired Image
WO2017108222A1 (en) Image processing system
CN108154483A (en) Image Processing Apparatus, Image Processing Method And Recording Medium
JP2010026805A (en) Character recognition device and character recognition method
KR100955257B1 (en) Method for Iris Recognition based on Individual Tensile Properties of Iris-Patterns
KR101473991B1 (en) Method and apparatus for detecting face
CN113610071A (en) Face living body detection method and device, electronic equipment and storage medium
KR101985474B1 (en) A Robust Detection Method of Body Areas Using Adaboost

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20080423

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20091113

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20091117

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20100118

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20100209

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20100222

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130312

Year of fee payment: 3

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140312

Year of fee payment: 4

LAPS Cancellation because of no payment of annual fees