JP2010117981A - Face detector - Google Patents
Face detector
- Publication number: JP2010117981A
- Application number: JP2008292173A
- Authority: JP (Japan)
- Prior art keywords: face, likelihood, detection, image, area
- Legal status: Granted (assumed status; Google has not performed a legal analysis)
Landscapes
- Image Processing (AREA)
- Studio Devices (AREA)
- Image Analysis (AREA)
Abstract
Description
The present invention relates to a face detection device that detects face image regions in image data acquired by an image capturing device such as a surveillance camera.
A face detection device detects face image regions in each frame of image data acquired by an image capturing device such as a surveillance camera. Three indices are commonly used to characterize the performance of such a device: the detection rate of face image regions (hereinafter, the detection rate), the false detection rate of non-face image regions (hereinafter, the false detection rate), and the miss rate of face image regions (hereinafter, the miss rate).
In general, a face detection device compares the true position and true size of a rectangular face image region with the position and size of a rectangular detection region detected as a face image region. When the position and size of the detection region match the true position and true size of the face image region, the detection region is classified as a "detection"; when they do not match, it is classified as a "false detection". Let Ndetected be the number of detection regions detected as face image regions, Ntrue the number of those Ndetected regions classified as detections, and Nfalse the number classified as false detections, so that Ndetected = Ntrue + Nfalse. Further, let n be the number of face image regions that should be detected, n0 the number of those actually detected, and n1 (= n - n0) the number that were not detected. With these definitions, this specification defines the detection rate, false detection rate, and miss rate per frame as follows.
Detection rate (%) = (n0 / n) × 100;
False detection rate (count) = Nfalse;
Miss rate (%) = (n1 / n) × 100.
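As a rough illustration (a sketch, not part of the patent; the function and variable names are ours), these per-frame indices can be computed directly from the counts defined above:

```python
# Per-frame performance indices as defined above: n faces should be detected,
# n0 of them are actually detected (n1 = n - n0 are missed), and Nfalse
# detection regions are classified as false detections.
def frame_metrics(n, n0, n_false):
    n1 = n - n0
    detection_rate = 100.0 * n0 / n   # percent of true faces found
    miss_rate = 100.0 * n1 / n        # percent of true faces missed
    return detection_rate, n_false, miss_rate

# Example: 10 faces in the frame, 9 found, 3 spurious detections.
print(frame_metrics(10, 9, 3))        # (90.0, 3, 10.0)
```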
Whether a true face image region and a detection region match is judged, for example, by whether the following conditions are satisfied:

0.8 < (a / a0) < 1.2 and D < a0 × 0.2
Here, a is a parameter representing the size (for example, the width) of the true face image region, a0 is a parameter representing the size of the detection region, and D is the distance between the centroid of the true face image region and the centroid of the detection region.
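For instance, the match test can be sketched as follows (an illustration only; the helper name and argument layout are ours):

```python
import math

# A detection counts as a correct detection when the ratio of the true size a
# to the detected size a0 lies in (0.8, 1.2) and the centroid distance D is
# below 0.2 * a0.
def is_match(a, a0, true_center, det_center):
    D = math.hypot(true_center[0] - det_center[0],
                   true_center[1] - det_center[1])
    return 0.8 < (a / a0) < 1.2 and D < a0 * 0.2
```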
Non-Patent Document 1 discloses a technique in which classifiers that each have a relatively high detection rate (for example, 99 percent) and a false detection rate of several tens of percent, and that therefore offer relatively low detection performance but fast computation, are connected in series in multiple stages. This eliminates most of the non-face image regions contained in an image while limiting the drop in the detection rate of face image regions, realizing a face detector that combines high speed with high performance.
A face detection device must detect face image regions at a high detection rate even when the image pattern of a face image region changes because the orientation of the face varies in a changing lighting environment. In general, however, raising the detection rate for face image regions with such diverse image patterns also raises the false detection rate.
To address this problem, Non-Patent Document 1 connects in series, in multiple stages, individual classifiers that each have a high detection rate of about 99 percent and a false detection rate of about 40 percent, offering low detection performance but fast computation. With m stages connected, the result is a fast, high-performance face detector with a detection rate of 0.99^m × 100 percent and a low false detection rate of 0.4^m × 100 percent.
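The stage arithmetic can be checked directly; for example, with m = 10 stages:

```python
# A window must pass every stage, so the per-stage rates multiply:
# detection 0.99 per stage, false acceptance 0.4 per stage.
m = 10
print(0.99 ** m)  # ~0.904   -> about 90 percent overall detection rate
print(0.40 ** m)  # ~1.05e-4 -> about 0.01 percent overall false rate
```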
In practice, however, when the face image pattern varies greatly with the lighting environment and the face orientation, as in image data captured by a surveillance camera, it has been difficult to realize individual classifiers that each have a detection rate close to 100 percent together with a false detection rate of only several tens of percent.
An object of the present invention is to solve the above problems and to provide a face detection device that can detect face image regions at a high detection rate, without increasing the false detection rate relative to the prior art, even when the face image pattern varies greatly with the lighting environment and the face orientation.
A face detection device according to the present invention comprises: face likelihood calculation means for calculating a predetermined first image feature in a detection region of input image data and, based on the first image feature, calculating with a first face detection algorithm a face likelihood indicating how face-like the detection region is; face likelihood determination means for comparing the calculated face likelihood with a predetermined first threshold and determining that a detection region whose face likelihood is below the first threshold is a non-face image region; face periphery likelihood calculation means for setting a peripheral detection region containing a detection region whose face likelihood is at or above the first threshold, calculating a predetermined second image feature in the peripheral detection region and, based on the second image feature, calculating a face periphery likelihood indicating how face-like the peripheral detection region is, using a second face detection algorithm that is slower than the first face detection algorithm but has a detection rate for face image regions higher than that of the first face detection algorithm and a false detection rate for non-face image regions lower than that of the first face detection algorithm; and face periphery likelihood determination means for comparing the calculated face periphery likelihood with a predetermined second threshold and determining that a detection region contained in a peripheral detection region whose face periphery likelihood is below the second threshold is a non-face image region.
Because the face detection device according to the present invention comprises the face likelihood calculation means described above, together with the face periphery likelihood calculation means that sets a peripheral detection region containing a detection region whose face likelihood is at or above the first threshold and calculates its face periphery likelihood with the slower but more accurate second face detection algorithm, face image regions can be detected quickly and at a high detection rate, without increasing the false detection rate relative to the prior art, even when the face image pattern varies greatly with the lighting environment and the face orientation.
Embodiments of the present invention are described below with reference to the drawings. In the following embodiments, identical components are denoted by the same reference numerals.
FIG. 1 is a block diagram showing the configuration of a face detection device according to Embodiment 1 of the present invention, and FIG. 2 is a flowchart showing the face detection process executed by the CPU 20 of FIG. 1. FIG. 3 shows an example of image data processed by the face detection process of FIG. 2, together with the detection region A1 and the peripheral detection region A2 used in that process.
As shown in FIG. 1, the face detection device according to this embodiment comprises an image input unit 1, a CPU 20, and a determination result output unit 8. The image input unit 1 is an interface circuit that receives image data from an image capturing device such as a surveillance camera (not shown), performs interface processing on the image data for the face detection device, and outputs it to the CPU 20. The CPU 20 comprises an image scanning unit 2, a face likelihood calculation unit 3, a face likelihood determination unit 4, a face periphery likelihood calculation unit 5, a face periphery likelihood determination unit 6, and an integrated likelihood determination unit 7, and detects the position and size of each face image region in the input image data through the face detection process described in detail below, outputting them to the determination result output unit 8. The determination result output unit 8 is a display device that outputs and displays the detected face image regions on the image of the input image data.
The parameters of a face image region detected by the face detection process of this and the following embodiments are the position (x, y) in the image data of the upper-left corner of the rectangular face image region and the size (width w and height h) of the region. The X and Y axes in the image data are defined as shown in FIG. 3.
The face detection device according to this embodiment comprises:
(a) a face likelihood calculation unit 3 that calculates a first image feature in a detection region A1 of the input image data and, based on the first image feature, calculates with a first face detection algorithm a face likelihood Pface indicating how face-like the detection region A1 is;
(b) a face likelihood determination unit 4 that compares the calculated face likelihood Pface with a threshold α and determines that a detection region A1 whose face likelihood Pface is below α is a non-face image region;
(c) a face periphery likelihood calculation unit 5 that sets a peripheral detection region A2 containing a detection region A1 whose face likelihood Pface is at or above α, calculates a second image feature in the peripheral detection region A2 and, based on the second image feature, calculates a face periphery likelihood Pneighbor indicating how face-like the peripheral detection region A2 is, using a second face detection algorithm that is slower than the first face detection algorithm but has a higher detection rate for face image regions and a lower false detection rate for non-face image regions than the first face detection algorithm; and
(d) a face periphery likelihood determination unit 6 that compares the calculated face periphery likelihood Pneighbor with a threshold β and determines that a detection region A1 contained in a peripheral detection region A2 whose face periphery likelihood Pneighbor is below β is a non-face image region.
The face detection process according to this embodiment is now described with reference to FIG. 2. In step S1, the image scanning unit 2 scans the image data input via the image input unit 1. Specifically, the image scanning unit 2 varies the X coordinate x, the Y coordinate y, the width w, and the height h of the detection region A1 as shown in equations (1) to (4) below and in FIG. 3, so that the detection region A1 sweeps the entire image area.
Here, x0, y0, w0, h0, T, L, and N are constants taking positive integer values, s is a real constant of 1 or more, and all are set so that the detection region A1 sweeps the entire image area. The width w0 and the height h0 give the minimum size of the detection region A1.
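Since equations (1) to (4) are not reproduced above, the following sketch assumes the conventional stride-and-scale form of such a scan; the stride and the exact roles of T, L, and N are assumptions made for illustration:

```python
# Sweep the detection region A1 over the whole image: the window starts at
# (x0, y0) with the minimum size (w0, h0), is stepped in x and y, and is
# enlarged by the scale factor s at each of N size levels.
def scan_windows(img_w, img_h, x0=0, y0=0, w0=24, h0=24, step=4, s=1.25, N=8):
    for k in range(N):                          # size levels
        w, h = int(w0 * s ** k), int(h0 * s ** k)
        if w > img_w or h > img_h:
            break
        for y in range(y0, img_h - h + 1, step):
            for x in range(x0, img_w - w + 1, step):
                yield (x, y, w, h)              # one candidate region A1
```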
Next, the CPU 20 performs steps S2 to S8 below for each detection region A1 to determine whether that detection region A1 is a face image region or a non-face image region. First, in step S2, the face likelihood calculation unit 3 calculates, in each detection region A1 of the scanned image data, the first image feature of that region and, based on the calculated first image feature, calculates with the first face detection algorithm the face likelihood Pface, the likelihood that the detection region A1 is a face image region, and outputs it to the face likelihood determination unit 4. Next, in step S3, the face likelihood determination unit 4 judges whether the calculated face likelihood Pface is at or above a predetermined threshold α; if YES, it extracts the detection region A1 and the calculated Pface and outputs them to the face periphery likelihood calculation unit 5, and the process proceeds to step S4; if NO, the detection region A1 is determined to be a non-face image region and the process proceeds to step S8.
Further, in step S4, the face periphery likelihood calculation unit 5 sets a peripheral detection region A2 containing the detection region A1 whose face likelihood Pface is at or above α, calculates the second image feature in that peripheral detection region A2 and, based on the calculated second image feature, calculates with the second face detection algorithm the face periphery likelihood Pneighbor, the likelihood that the peripheral detection region A2 is a face image region, and outputs it to the face periphery likelihood determination unit 6. Here, the peripheral detection region A2 is the detection region A1 enlarged about its center by a predetermined ratio; this ratio is a constant set in advance so that the peripheral detection region A2 contains, for example, only the face contour and the head, or the face contour, the head, and the shoulders. Next, in step S5, the face periphery likelihood determination unit 6 judges whether the face periphery likelihood Pneighbor is at or above a predetermined threshold β; if YES, it extracts the detection region A1 and the calculated Pneighbor and outputs them to the integrated likelihood determination unit 7, and the process proceeds to step S6; if NO, the detection region A1 is determined to be a non-face image region and the process proceeds to step S8.
Then, in step S6, the integrated likelihood determination unit 7 multiplies the face likelihood Pface and the face periphery likelihood Pneighbor calculated for a detection region A1 whose Pface is at or above α and whose Pneighbor is at or above β, thereby calculating the integrated likelihood Pcombined (= Pneighbor × Pface), which jointly reflects both likelihoods, and judges whether Pcombined is at or above a threshold γ. If YES, in step S7 the detection region A1 is determined to be a face image region, and the position (x, y) and size (w, h) of the detection region A1 are output via the determination result output unit 8 as the position and size of a face image region in the image data; if NO, the process proceeds to step S8. In step S8, the detection region A1 is determined to be a non-face image region, and this determination result is output via the determination result output unit 8. As detailed above, the face detection process of FIG. 2 extracts as face image regions those detection regions A1 of the input image data whose face likelihood Pface is at or above α, whose face periphery likelihood Pneighbor is at or above β, and whose integrated likelihood Pcombined is at or above γ, and outputs the position and size of each extracted detection region A1.
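The decision flow of steps S2 to S8 can be summarized in the following sketch, where first_stage, second_stage, and expand_region are placeholders for the two face detection algorithms and the A1-to-A2 enlargement, none of which this sketch specifies:

```python
# Two-stage detection per FIG. 2; alpha, beta, gamma correspond to the
# thresholds α, β, γ, and regions come from the step-S1 scan.
def detect_faces(image, regions, first_stage, second_stage,
                 expand_region, alpha, beta, gamma):
    faces = []
    for a1 in regions:
        p_face = first_stage(image, a1)        # S2: face likelihood Pface
        if p_face < alpha:                     # S3: cheap rejection
            continue
        a2 = expand_region(a1)                 # S4: peripheral region A2
        p_neighbor = second_stage(image, a2)   #     periphery likelihood
        if p_neighbor < beta:                  # S5: rejection
            continue
        if p_face * p_neighbor >= gamma:       # S6: integrated likelihood
            faces.append(a1)                   # S7: accept as a face region
    return faces
```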
Next, the first and second face detection algorithms used in steps S2 and S4 of FIG. 2 are described. The first face detection algorithm extracts face image regions, which are very small compared with the non-face image regions that occupy most of the image, faster than the second face detection algorithm does. For this purpose, the first face detection algorithm should be able to detect face image regions quickly with a relatively small miss rate (that is, with a relatively high detection rate), even if its false detection rate is several tens of percent. For example, the methods of Non-Patent Documents 1 to 3 may be used as the first face detection algorithm. In Non-Patent Document 1, for example, Haar features, image features based on the distribution of luminance values in the detection region, are calculated, and the face likelihood Pface is calculated from the calculated Haar features using AdaBoost as the learning algorithm.
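As one concrete, purely illustrative stand-in for such a first stage, OpenCV ships a boosted Haar cascade in this lineage; the level weights returned below are used here only as a rough analogue of the face likelihood Pface, not as the patent's own score:

```python
import cv2

# First-stage candidates from OpenCV's stock frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_stage_candidates(gray):
    boxes, levels, weights = cascade.detectMultiScale3(
        gray, scaleFactor=1.25, minNeighbors=1,
        minSize=(24, 24), outputRejectLevels=True)
    # Each (x, y, w, h) box is a candidate region A1 with a cascade score.
    return list(zip(boxes, weights))
```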
The image pattern inside a face image region in image data acquired by an image capturing device such as a surveillance camera varies greatly with changes of the face orientation in the up, down, left, and right directions and with variations of the lighting environment such as the brightness, darkness, and uniformity of the illumination. Consequently, the detection regions A1 whose face likelihood Pface calculated with the first face detection algorithm is at or above the threshold α include non-face image regions as well as face image regions.
This embodiment exploits the fact that the false detection rate differs from one face detection algorithm to another: for the detection regions A1 whose face likelihood Pface is at or above the threshold α (see step S3 of FIG. 2), the face periphery likelihood Pneighbor is calculated with the second face detection algorithm. As the second face detection algorithm, an algorithm is chosen that, when combined with the first face detection algorithm taken as a prior probability, can efficiently lower the false detection rate and yield a comparatively high posterior probability. Specifically, in terms of the joint probability with the first face detection algorithm, the second face detection algorithm has a detection rate higher than that of the first algorithm and a false detection rate lower than that of the first algorithm. This lowers only the false detection rate, without lowering the detection rate, relative to the prior art. In addition, the first face detection algorithm is faster than the second, so face image regions can be detected faster than with the prior art.
Further, the face likelihood calculation unit 3 calculates the face likelihood Pface based on the first image feature of the detection region A1, and the face periphery likelihood calculation unit 5 calculates and outputs the face periphery likelihood Pneighbor based on the predetermined second image feature of the peripheral detection region A2. The second image feature is less affected by changes of the face image pattern inside the detection region A1 than the first image feature. In this embodiment, the first image feature represents the distribution of luminance values in the detection region A1 and is more susceptible to variations of the lighting environment than the second image feature, whereas the second image feature is a histogram of the orientations of the luminance gradient in the peripheral detection region A2. The second image feature therefore represents the shape of the face, including its contour, is insensitive to variations of the lighting environment and of the face orientation, keeps an almost constant value regardless of changes of the image pattern inside the face, and in this embodiment is computed from edge data calculated from the luminance values. The learning algorithm combined with the second image feature in the second face detection algorithm is not particularly restricted; for example, a face detector can be generated using a Support Vector Machine (SVM). Further, by the method of Non-Patent Document 3, the value output by the classifier can be converted into the face periphery likelihood Pneighbor using the score from the support vector machine.
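A minimal sketch of such a second stage, with gradient-orientation histogram features over A2 scored by an SVM whose output is mapped to a likelihood, might look as follows; the library choices (scikit-image HOG, scikit-learn SVC) and all parameters are our assumptions, and the SVM is assumed to have been trained beforehand on face and non-face A2 patches:

```python
from skimage.feature import hog
from sklearn.svm import SVC

svm = SVC(probability=True)  # assumed trained offline on A2 patches

def second_stage(gray_patch_a2):
    """Face periphery likelihood Pneighbor for a peripheral region A2."""
    feat = hog(gray_patch_a2, orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    # predict_proba maps the SVM score to a probability (Platt scaling),
    # in the spirit of converting the classifier output to a likelihood.
    return svm.predict_proba(feat.reshape(1, -1))[0, 1]
```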
According to this embodiment, the face likelihood calculation unit 3 calculates the first image feature in each detection region A1 of the scanned image data and, based on the calculated first image feature, calculates the face likelihood Pface of the detection region A1 with the first face detection algorithm, and the face periphery likelihood calculation unit 5 sets, for each detection region A1 whose Pface is at or above the threshold α, a peripheral detection region A2 containing that detection region A1, calculates the second image feature in the peripheral detection region A2, and calculates and outputs the face periphery likelihood Pneighbor of A2 based on the calculated second image feature. Here, the second face detection algorithm is slower than the first face detection algorithm but has a detection rate higher than that of the first and a false detection rate lower than that of the first, so face image regions can be detected quickly and at a high detection rate, without increasing the false detection rate relative to the prior art, even when the face image pattern varies greatly with the lighting environment and the face orientation.
In addition, the face periphery likelihood calculation unit 5 sets the peripheral detection region A2 to be wider than the detection region A1, and the second image feature is less affected by changes of the face image pattern inside the detection region A1 than the predetermined first image feature. Face image regions can therefore be detected at a higher detection rate than with the prior art.
Furthermore, the device comprises the integrated likelihood determination unit 7, which calculates, from the face likelihood Pface and the face periphery likelihood Pneighbor obtained for a detection region A1 whose Pface is at or above α and whose Pneighbor is at or above β, the integrated likelihood Pcombined, a likelihood that jointly reflects Pface and Pneighbor, compares the calculated Pcombined with the threshold γ, and determines that a detection region A1 whose Pcombined is at or above γ is a face image region. Face image regions can therefore be detected at a higher detection rate than with the prior art.
In step S1 of FIG. 2, the entire image area was scanned by varying the position and size of the detection region A1, but the present invention is not limited to this; a method called an image pyramid may be used instead. In an image pyramid, the image area itself is reduced in size step by step, and a rectangular detection region of fixed size is moved over each of the differently sized image areas. This makes it possible to detect face image regions of different sizes in the image area.
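A sketch of the pyramid variant (window size, stride, and scale factor are illustrative assumptions):

```python
import cv2

# Image pyramid: shrink the image by s per level and slide a fixed-size
# window over each level; coordinates are mapped back to the original scale.
def pyramid_windows(image, win=24, stride=4, s=1.25, levels=8):
    scale = 1.0
    for _ in range(levels):
        h, w = image.shape[:2]
        if h < win or w < win:
            break
        for y in range(0, h - win + 1, stride):
            for x in range(0, w - win + 1, stride):
                yield (int(x * scale), int(y * scale),
                       int(win * scale), int(win * scale))
        image = cv2.resize(image, (int(w / s), int(h / s)))
        scale *= s
```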
Embodiment 2.
FIG. 4 is a block diagram showing the configuration of a face detection device according to Embodiment 2 of the present invention, and FIG. 5 is a flowchart showing the face detection process executed by the CPU 20A of FIG. 4. FIG. 6 shows an example of image data processed by the face detection process of FIG. 5, together with face image regions F1 and F2 that can exist and face image regions F3 and F4 that cannot. As shown in FIG. 4, the face detection device of Embodiment 2 comprises an image input unit 1, a CPU 20A, and a face distribution database memory 21, and differs from the face detection device of Embodiment 1 in the following respects.
(a) It further comprises a face distribution database memory 21 that stores in advance a face distribution database relating the position (x, y) of the detection region A1 in the image data and a parameter sp representing the size of the detection region A1 to the in-image-space face likelihood Pcamera1, the probability that a face image region having the position (x, y) and the size represented by sp exists in the image data.
(b) It further comprises an in-image-space face likelihood calculation unit 11 that refers to the face distribution database and calculates the in-image-space face likelihood Pcamera1 of the detection region A1 of the input image data.
(c) It further comprises an in-image-space face likelihood determination unit 12 that compares the in-image-space face likelihood Pcamera1 with a threshold η and determines that a detection region A1 whose Pcamera1 is below η is a non-face image region.
(d) It further comprises an integrated likelihood determination unit 7A that, for a detection region A1 whose face likelihood Pface is at or above α, whose face periphery likelihood Pneighbor is at or above β, and whose in-image-space face likelihood Pcamera1 is at or above η, calculates from the calculated Pface, Pneighbor, and Pcamera1 the integrated likelihood Pcombined1, a likelihood that jointly reflects the three, compares the calculated Pcombined1 with the predetermined threshold γ, and determines that a detection region A1 whose Pcombined1 is at or above γ is a face image region.
(e) The face likelihood calculation unit 3 calculates the face likelihood Pface not in every detection region A1 of the input image data but only in detection regions A1 whose in-image-space face likelihood Pcamera1 is at or above the threshold η.
A detection region A1 is represented by its position (x, y) and by a parameter sp representing its size (w, h). Here, sp is expressed by the following equation.
For example, in the case of image data of a corridor captured by a surveillance camera, as shown in FIG. 6, the face of a person walking along the corridor normally moves from the top of the image area toward the bottom and grows larger as the person approaches the camera. The face image region then changes from the face image region F1 (x1, y1, sp1) to the face image region F2 (x2, y2, sp2). Consequently, the probability that a relatively large face image region such as the face image region F3 (x3, y3, sp3) exists in the upper part of the image area, and the probability that a face image region such as the face image region F4 (x4, y4, sp4) exists in the lower part, are both extremely small.
Here, the size parameter sp of a face image region that can exist at a position (x, y) follows a normal distribution with mean spmean and standard deviation spsigma. In this embodiment, the in-image-space face likelihood Pcamera1(x, y, sp), the probability (plausibility) that a face image region of the size represented by the parameter sp exists at the position (x, y), is expressed by equation (5) below.
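Given the stated normal distribution, equation (5) is presumably the Gaussian density in sp; the sketch below assumes that form, with the per-position lookups spmean and spsigma drawn from the face distribution database:

```python
import math

# In-image-space face likelihood Pcamera1(x, y, sp): a normal density in the
# size parameter sp with position-dependent mean and standard deviation.
def p_camera1(x, y, sp, spmean, spsigma):
    m, s = spmean(x, y), spsigma(x, y)   # database lookups (assumed)
    return math.exp(-(sp - m) ** 2 / (2 * s ** 2)) / (math.sqrt(2 * math.pi) * s)
```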
In FIG. 4, the face distribution database memory 21 stores in advance a face distribution database relating the position (x, y) of the detection region A1 and the parameter sp representing its size (w, h) to the in-image-space face likelihood Pcamera1(x, y, sp), the probability that a face image region of the size represented by sp exists at the position (x, y). In the example of FIG. 6, the in-image-space face likelihoods Pcamera1(x1, y1, sp1) of the image region F1 and Pcamera1(x2, y2, sp2) of the image region F2 are larger than Pcamera1(x3, y3, sp3) of the image region F3 and Pcamera1(x4, y4, sp4) of the image region F4.
The face detection process of FIG. 5 differs from the face detection process of FIG. 2 only in that steps S11 and S12 are added between steps S1 and S2 and that steps S6 and S7 are replaced by steps S6A and S7A. Following step S1, the CPU 20A performs steps S11, S12, and S2 to S8 below for each detection region A1 to determine whether that detection region A1 is a face image region or a non-face image region.
In step S11, the in-image-space face likelihood calculation unit 11 refers to the face distribution database in the face distribution database memory 21 and calculates the in-image-space face likelihood Pcamera1(x, y, sp) for the detection region A1 (x, y, sp), outputting it to the in-image-space face likelihood determination unit 12. Next, in step S12, the in-image-space face likelihood determination unit 12 judges whether Pcamera1(x, y, sp) is at or above a predetermined threshold η; if YES, it extracts the detection region A1 and the calculated Pcamera1(x, y, sp) and outputs them to the face likelihood calculation unit 3, and the process proceeds to step S2; if NO, the detection region A1 is determined to be a non-face image region and the process proceeds to step S8.
In step S6A, the integrated likelihood determination unit 7A multiplies the face likelihood Pface, the face periphery likelihood Pneighbor, and the in-image-space face likelihood Pcamera1 calculated for a detection region A1 whose Pface is at or above α, whose Pneighbor is at or above β, and whose Pcamera1 is at or above η, thereby calculating the integrated likelihood Pcombined1 (= Pneighbor × Pface × Pcamera1), and compares the calculated Pcombined1 with the predetermined threshold γ. If YES, in step S7A the detection region A1 whose Pcombined1 is at or above γ is determined to be a face image region, and the position (x, y) and size (w, h) of the detection region A1 are output via the determination result output unit 8 as the position and size of a face image region in the image data; if NO, the process proceeds to step S8.
According to this embodiment, the device comprises the in-image-space face likelihood calculation unit 11, which refers to the face distribution database in the face distribution database memory 21 and calculates the in-image-space face likelihood Pcamera1 in each detection region A1 of the scanned image data, and the in-image-space face likelihood determination unit 12, which judges whether Pcamera1 is at or above the threshold η. Face image regions can therefore be detected with an even lower false detection rate than in Embodiment 1.
Moreover, because the in-image-space face likelihood calculation unit 11 and the in-image-space face likelihood determination unit 12 are placed before the face likelihood calculation unit 3, the face likelihood Pface need not be calculated for detection regions A1 whose in-image-space face likelihood Pcamera1 is below the threshold η, so face image regions can be detected even faster than in Embodiment 1.
Furthermore, the integrated likelihood Pcombined1 is calculated from the face likelihood Pface, the face periphery likelihood Pneighbor, and the in-image-space face likelihood Pcamera1, and detection regions A1 whose Pcombined1 is at or above the threshold γ are output as face image regions, so face image regions can be detected at an even higher detection rate than in Embodiment 1.
The face distribution database 31 in the face distribution database memory 21 may also be created from data of actually detected face image regions.
Embodiment 3.
FIG. 7 is a block diagram showing the configuration of a face detection device according to Embodiment 3 of the present invention, and FIG. 8 is a flowchart showing the face detection process executed by the CPU 20B of FIG. 7. As shown in FIG. 7, the face detection device of Embodiment 3 comprises an image input unit 1, a CPU 20B, a face distribution database memory 21A, and a spatial position information input unit 22, and differs from the face detection device of Embodiment 2 in the following respects.
(a) It further comprises a spatial position information input unit 22 that inputs spatial position data of an object present in the detection region A1.
(b) In place of the face distribution database memory 21, it comprises a face distribution database memory 21A that stores in advance a face distribution database relating the position (x, y) of the detection region A1 in the image data, a parameter sp representing the size of the detection region A1, and the spatial position data to the in-image-space face likelihood Pcamera2, the probability that a face image region having the position (x, y) and the size represented by sp exists in the detection region A1 corresponding to that spatial position data.
(c) In place of the in-image-space face likelihood determination unit 12, it comprises an in-image-space face likelihood determination unit 12A that compares the in-image-space face likelihood Pcamera2 with the threshold η and determines that a detection region A1 whose Pcamera2 is below η is a non-face image region.
(d) In place of the integrated likelihood determination unit 7A, it comprises an integrated likelihood determination unit 7B that, for a detection region A1 whose face likelihood Pface is at or above α, whose face periphery likelihood Pneighbor is at or above β, and whose in-image-space face likelihood Pcamera2 is at or above η, calculates from the calculated Pface, Pneighbor, and Pcamera2 the integrated likelihood Pcombined2, a likelihood that jointly reflects the three, compares the calculated Pcombined2 with the predetermined threshold γ, and determines that a detection region A1 whose Pcombined2 is at or above γ is a face image region.
(e) The face likelihood calculation unit 3 calculates the face likelihood Pface in detection regions A1 whose in-image-space face likelihood Pcamera2, rather than Pcamera1, is at or above the threshold η.
In FIG. 7, the spatial position information input unit 22 receives, from a spatial position detection device (not shown) having a distance sensor such as a stereo camera device or an ultrasonic sensor, data on the distance d from the surveillance camera to an object present in the detection region A1. The face distribution database memory 21A stores in advance a face distribution database relating the position (x, y) and size (w, h) of the detection region A1 and the distance d from the surveillance camera to the object present in that detection region A1 to the in-image-space face likelihood Pcamera2(x, y, sp, d).
The face detection process of FIG. 8 differs from the face detection process of FIG. 5 in having steps S11A, S12A, S6B, and S7B in place of steps S11, S12, S6A, and S7A. In step S11A, the in-image-space face likelihood calculation unit 11A refers, based on the distance data from the spatial position information input unit 22, to the face distribution database in the face distribution database memory 21A and calculates the in-image-space face likelihood Pcamera2(x, y, sp, d) for the detection region A1 (x, y, sp), outputting it to the in-image-space face likelihood determination unit 12A. Next, in step S12A, the in-image-space face likelihood determination unit 12A judges whether Pcamera2(x, y, sp, d) is at or above the predetermined threshold η; if YES, it extracts the detection region A1 and the calculated Pcamera2(x, y, sp, d) and outputs them to the face likelihood calculation unit 3, and the process proceeds to step S2; if NO, the detection region A1 is determined to be a non-face image region and the process proceeds to step S8.
In step S6B, the integrated likelihood determination unit 7B multiplies the face likelihood Pface, the face periphery likelihood Pneighbor, and the in-image-space face likelihood Pcamera2 calculated for a detection region A1 whose Pface is at or above α, whose Pneighbor is at or above β, and whose Pcamera2 is at or above η, thereby calculating the integrated likelihood Pcombined2 (= Pneighbor × Pface × Pcamera2), and compares the calculated Pcombined2 with the predetermined threshold γ. If YES, in step S7B the detection region A1 whose Pcombined2 is at or above γ is determined to be a face image region, and the position (x, y) and size (w, h) of the detection region A1 are output via the determination result output unit 8 as the position and size of a face image region in the image data; if NO, the process proceeds to step S8.
According to this embodiment, the device comprises the in-image-space face likelihood calculation unit 11A, which refers to the face distribution database in the face distribution database memory 21A based on the distance d and calculates the in-image-space face likelihood Pcamera2 of the detection region A1, so face image regions can be detected with an even lower false detection rate than in Embodiment 2.
Modifications.
In the face detection process of Embodiment 2 (FIG. 5), steps S11 and S12 may be executed after step S3. Likewise, in the face detection process of Embodiment 3 (FIG. 8), steps S11A and S12A may be executed after step S3.
Furthermore, although the face detection devices according to the above embodiments detect the position and size of face image regions containing a human face, the present invention is not limited to this and may be configured to detect image regions containing pedestrians, vehicles, road signs, billboards, and the like.
As described in detail above, the face detection device according to the present invention comprises face likelihood calculation means for calculating a predetermined first image feature in a detection region of input image data and, based on the first image feature, calculating with a first face detection algorithm a face likelihood indicating how face-like the detection region is, and face periphery likelihood calculation means for setting a peripheral detection region containing a detection region whose face likelihood is at or above the first threshold, calculating a predetermined second image feature in the peripheral detection region and, based on the second image feature, calculating a face periphery likelihood indicating how face-like the peripheral detection region is, using a second face detection algorithm that is slower than the first face detection algorithm but has a detection rate for face image regions higher than that of the first face detection algorithm and a false detection rate for non-face image regions lower than that of the first face detection algorithm. Face image regions can therefore be detected quickly and at a high detection rate, without increasing the false detection rate relative to the prior art, even when the face image pattern varies greatly with the lighting environment and the face orientation.
DESCRIPTION OF REFERENCE NUMERALS: 1 image input unit; 2 image scanning unit; 3 face likelihood calculation unit; 4 face likelihood determination unit; 5 face periphery likelihood calculation unit; 6 face periphery likelihood determination unit; 7, 7A, 7B integrated likelihood determination unit; 8 determination result output unit; 11 in-image-space face likelihood calculation unit; 12, 12A in-image-space face likelihood determination unit; 20, 20A, 20B CPU; 21, 21A face distribution database memory; 22 spatial position information input unit; A1 detection area; A2 periphery detection area.
Claims (7)
1. A face detection device comprising:
face likelihood calculating means for calculating a predetermined first image feature amount in a detection area of input image data and, on the basis of the first image feature amount, calculating with a first face detection algorithm a face likelihood indicating how likely the detection area is to be a face image area;
face likelihood determination means for comparing the calculated face likelihood with a predetermined first threshold value and judging a detection area having a face likelihood below the first threshold value to be a non-face image area;
face periphery likelihood calculating means for setting a periphery detection area containing a detection area having a face likelihood equal to or greater than the first threshold value, calculating a predetermined second image feature amount in the periphery detection area and, on the basis of the second image feature amount, calculating a face periphery likelihood indicating how likely the periphery detection area is to be a face image area, using a second face detection algorithm that is slower than the first face detection algorithm but has a higher detection rate for face image areas and a lower false detection rate for non-face image areas than the first face detection algorithm; and
face periphery likelihood determination means for comparing the calculated face periphery likelihood with a predetermined second threshold value and judging a detection area contained in a periphery detection area having a face periphery likelihood below the second threshold value to be a non-face image area.
2. The face detection device according to claim 1, wherein the face periphery likelihood calculating means sets the periphery detection area to be wider than the detection area, and the second image feature amount is an image feature amount that is less susceptible than the first image feature amount to changes in the pattern of the face image within the detection area.
3. The face detection device according to claim 2, wherein the first image feature amount is an image feature amount representing the distribution of luminance values within the detection area, and the second image feature amount is an image feature amount representing the distribution of the gradient directions of the luminance values within the periphery detection area.
4. The face detection device according to any one of claims 1 to 3, further comprising:
a face distribution database memory that stores in advance a face distribution database representing the relationship between parameters indicating the position and size of a detection area within the image data and an in-image-space face likelihood, which is the probability that a face image area having that position and size exists in the image data;
in-image-space face likelihood calculating means for calculating the in-image-space face likelihood of a detection area of the input image data by referring to the face distribution database; and
in-image-space face likelihood determination means for comparing the in-image-space face likelihood with a predetermined third threshold value and judging a detection area having an in-image-space face likelihood below the third threshold value to be a non-face image area,
wherein the face likelihood calculating means calculates the face likelihood in a detection area having an in-image-space face likelihood equal to or greater than the third threshold value.
5. The face detection device according to any one of claims 1 to 3, further comprising:
spatial position information input means for inputting spatial position data of an object present in a detection area;
a face distribution database memory that stores in advance a face distribution database representing the relationship among parameters indicating the position and size of a detection area within the image data, the spatial position data, and an in-image-space face likelihood, which is the probability that a face image area having that position and size within the image data exists in the detection area corresponding to the spatial position data;
in-image-space face likelihood calculating means for calculating the in-image-space face likelihood of a detection area of the input image data by referring to the face distribution database on the basis of the input spatial position data; and
in-image-space face likelihood determination means for comparing the calculated in-image-space face likelihood with a predetermined third threshold value and judging a detection area having an in-image-space face likelihood below the third threshold value to be a non-face image area,
wherein the face likelihood calculating means calculates the face likelihood in a detection area having an in-image-space face likelihood equal to or greater than the third threshold value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008292173A (granted as JP4903192B2) | 2008-11-14 | 2008-11-14 | Face detection device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008292173A (granted as JP4903192B2) | 2008-11-14 | 2008-11-14 | Face detection device |
Publications (2)
Publication Number | Publication Date |
---|---|
JP2010117981A (en) | 2010-05-27 |
JP4903192B2 (en) | 2012-03-28 |
Family
ID=42305605
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2008292173A (granted as JP4903192B2; Expired - Fee Related) | Face detection device | 2008-11-14 | 2008-11-14 |
Country Status (1)
Country | Link |
---|---|
JP (1) | JP4903192B2 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2023016415A (en) | 2021-07-21 | 2023-02-02 | キヤノン株式会社 | Identification device, method for identification, method for learning, program, model, and data structure |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006323779A (en) * | 2005-05-20 | 2006-11-30 | Canon Inc | Image processing method and device |
JP2007066010A (en) * | 2005-08-31 | 2007-03-15 | Fujifilm Corp | Learning method for discriminator, object discrimination apparatus, and program |
JP2008033785A (en) * | 2006-07-31 | 2008-02-14 | Seiko Epson Corp | Object detector, object detection method and object detection program |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012060463A1 (en) * | 2010-11-05 | 2012-05-10 | グローリー株式会社 | Subject detection method and subject detection device |
JP2012099070A (en) * | 2010-11-05 | 2012-05-24 | Glory Ltd | Subject detection method and subject detecting device |
JP2013029924A (en) * | 2011-07-27 | 2013-02-07 | Dainippon Printing Co Ltd | Individual identification apparatus, individual identification target, individual identification method and program |
JP2015507271A (en) * | 2012-01-13 | 2015-03-05 | 富士通株式会社 | Object recognition method and object recognition apparatus |
JP2014013469A (en) * | 2012-07-04 | 2014-01-23 | Mitsubishi Electric Corp | Image processor |
CN110651300A (en) * | 2017-07-14 | 2020-01-03 | 欧姆龙株式会社 | Object detection device, object detection method, and program |
CN110651300B (en) * | 2017-07-14 | 2023-09-12 | 欧姆龙株式会社 | Object detection device, object detection method, and program |
JP2020119030A (en) * | 2019-01-18 | 2020-08-06 | ソフトバンク株式会社 | Control program of information processing device, control method of information processing device, and information processing device |
WO2020230340A1 (en) * | 2019-05-13 | 2020-11-19 | 株式会社マイクロネット | Facial recognition system, facial recognition method, and facial recognition program |
JP2020187479A (en) * | 2019-05-13 | 2020-11-19 | 株式会社マイクロネット | Face recognition system, face recognition method and face recognition program |
US11455828B2 (en) | 2019-05-13 | 2022-09-27 | Micronet Co., Ltd. | Face recognition system, face recognition method and face recognition program |
Also Published As
Publication number | Publication date |
---|---|
JP4903192B2 (en) | 2012-03-28 |
Similar Documents
Publication | Title |
---|---|
JP4903192B2 (en) | Face detection device |
JP6088792B2 (en) | Image detection apparatus, control program, and image detection method |
JP5726125B2 (en) | Method and system for detecting an object in a depth image |
US20150253864A1 (en) | Image Processor Comprising Gesture Recognition System with Finger Detection and Tracking Functionality |
CN110348270B (en) | Image object identification method and image object identification system |
JP2006146922A (en) | Template-based face detection method |
JP5671928B2 (en) | Learning device, learning method, identification device, identification method, and program |
JP2006350434A (en) | Hand-shape recognition device and its method |
CN104036284A (en) | Adaboost algorithm based multi-scale pedestrian detection method |
JP6351243B2 (en) | Image processing apparatus and image processing method |
JP2013215549A (en) | Image processing device, image processing program, and image processing method |
US20160026857A1 (en) | Image processor comprising gesture recognition system with static hand pose recognition based on dynamic warping |
JP6071002B2 (en) | Reliability acquisition device, reliability acquisition method, and reliability acquisition program |
JP2009265732A (en) | Image processor and method thereof |
JP6110174B2 (en) | Image detection apparatus, control program, and image detection method |
JP2011053951A (en) | Image processing apparatus |
JP2007025902A (en) | Image processor and image processing method |
JP5100688B2 (en) | Object detection apparatus and program |
CN107368832A (en) | Target detection and sorting technique based on image |
JP2018036901A (en) | Image processor, image processing method and image processing program |
JP2013015891A (en) | Image processing apparatus, image processing method, and program |
Devrari et al. | Fast face detection using graphics processor |
CN109657577B (en) | Animal detection method based on entropy and motion offset |
JP2014142760A (en) | Image detection device, control program and image detection method |
JP2015041226A (en) | Image recognition device, image recognition method, and computer program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
2010-09-27 | A621 | Written request for application examination | JAPANESE INTERMEDIATE CODE: A621 |
2011-08-19 | A977 | Report on retrieval | JAPANESE INTERMEDIATE CODE: A971007 |
2011-08-30 | A131 | Notification of reasons for refusal | JAPANESE INTERMEDIATE CODE: A131 |
2011-10-28 | A521 | Request for written amendment filed | JAPANESE INTERMEDIATE CODE: A523 |
| TRDD | Decision of grant or rejection written | |
2011-12-06 | A01 | Written decision to grant a patent or to grant a registration (utility model) | JAPANESE INTERMEDIATE CODE: A01 |
| A01 | Written decision to grant a patent or to grant a registration (utility model) | JAPANESE INTERMEDIATE CODE: A01 |
2012-01-04 | A61 | First payment of annual fees (during grant procedure) | JAPANESE INTERMEDIATE CODE: A61 |
| R150 | Certificate of patent or registration of utility model | Ref document number: 4903192; Country of ref document: JP; JAPANESE INTERMEDIATE CODE: R150 |
| FPAY | Renewal fee payment (event date is renewal date of database) | PAYMENT UNTIL: 2015-01-13; Year of fee payment: 3 |
| R250 | Receipt of annual fees (six events recorded) | JAPANESE INTERMEDIATE CODE: R250 |
| LAPS | Cancellation because of no payment of annual fees | |