TWI225222B - Robust face detection algorithm for real-time video sequence - Google Patents

Robust face detection algorithm for real-time video sequence

Info

Publication number
TWI225222B
TWI225222B
Authority
TW
Taiwan
Prior art keywords
eye
face
eyes
area
image
Prior art date
Application number
TW92129221A
Other languages
Chinese (zh)
Other versions
TW200515301A (en)
Inventor
Shih-Ching Sun
Mei-Juan Chen
Original Assignee
Leadtek Research Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Leadtek Research Inc filed Critical Leadtek Research Inc
Priority to TW92129221A priority Critical patent/TWI225222B/en
Application granted granted Critical
Publication of TWI225222B publication Critical patent/TWI225222B/en
Publication of TW200515301A publication Critical patent/TW200515301A/en

Landscapes

  • Image Analysis (AREA)

Abstract

The invention is directed to a face detection method. In the method, image data in a YCbCr color space is received, wherein a Y component of the image data is analyzed to obtain a motion region and a CbCr component of the image data is analyzed to obtain a skin color region. The motion region and the skin color region are combined to produce a face candidate. An eye detection process is performed on the image to detect eye candidates. An eye-pair verification process is then performed to find an eye-pair candidate among the eye candidates, wherein the eye-pair candidate also lies within a region of the face candidate.

Description

FIELD OF THE INVENTION

The present invention relates to image processing, and more particularly to a technique for detecting faces in an image.

DESCRIPTION OF RELATED ART

In recent years, human face detection has become increasingly common. Automatic detection of human faces is very important in various applications, such as video surveillance, human-machine interfaces, face recognition, and facial image database management.
In face recognition applications, the position of the human face must be known before processing. Face tracking applications likewise require an established face position. In facial image database management, because the amount of data in the database is large, human faces must be found as quickly as possible. Although several methods are already used for face detection, many factors still make it difficult, such as size, position, orientation (upright or rotated), expression, eyeglasses, and tilted heads. Various face detection methods have been proposed in recent years, but almost none takes all of the above factors into account, even though a face detection technique used in any real-time application must cope with them. Skin color is now widely used to accelerate face detection, but misclassifying skin color is unavoidable. Neural networks can detect faces in grayscale images; however, their computation is very complex, because the network must process many small region windows in the image.

With conventional face detection algorithms, detection errors and long computation times prevent correct face detection, let alone real-time face detection. A better face detection algorithm is still needed to detect faces more efficiently.

SUMMARY OF THE INVENTION

The present invention provides a face detection method suitable for a video sequence. The method detects faces effectively and quickly: within the motion region, faces are detected in real time with a greatly reduced error rate.

The invention provides a face detection method comprising: receiving image data in a YCbCr color space, wherein a Y component of the image data is analyzed to obtain a motion region and a CbCr component of the image data is analyzed to obtain a skin color region; combining the motion region and the skin color region to produce a face candidate; performing an eye detection process on the image to detect eye candidates; and performing an eye-pair verification process to find, among the eye candidates, an eye-pair candidate that also lies within a region of the face candidate.

In the above method, using the Y component of the image data comprises performing a frame difference process on the Y component, wherein an infinite impulse response (IIR) type filter is applied to enhance the frame difference, compensating for a weakness of the skin color region.

The above method may further comprise a labeling process that labels face positions and discards face candidates having relatively small labels.

In the above method, the eye detection process comprises: checking the eye area, wherein eye regions outside a given range are discarded; checking the ratio of the eye region, wherein elongated preliminary eye candidates are discarded; and checking a density regulation, wherein each eye candidate has a minimal rectangle box (MRB) fitted to it, and a preliminary eye candidate with a small area but a large MRB is discarded.
In the above method, the eye-pair verification process comprises: finding a preliminary eye-pair candidate by requiring the slope between the paired eyes to be within ±45°; discarding the preliminary eye-pair candidate when the eye areas of its two eye candidates have a large ratio; generating a face polygon from the preliminary eye-pair candidate, and discarding the candidate when the polygon lies outside the region of the face candidate; and sampling a luminance image over a pixel region comprising a middle region and two side regions, computing the difference between the average luminance of the middle region and the average luminance of the two side regions, and accepting the preliminary eye-pair candidate as the eye-pair candidate if the difference lies within a predetermined range.

In addition, the invention provides a method of performing face detection on an image, comprising: detecting a face candidate; performing an eye detection process on the image to detect at least two eye candidates; and performing an eye-pair verification process to find, among the eye candidates, an eye-pair candidate that also lies within a region of the face candidate.

It is to be understood that both the foregoing and the following descriptions are exemplary only and are intended to provide further explanation of the invention. To make the above and other objects, features, and advantages of the invention more apparent, a preferred embodiment is described in detail below together with the accompanying drawings.

DESCRIPTION OF THE EMBODIMENTS

The present invention proposes a new method for robust face detection. The proposed face detection algorithm consists of skin color segmentation, motion region segmentation, and facial feature detection. The algorithm detects faces in common interchange format (CIF) video in real time (30 frames per second), covering varying expressions, head turning, and different face sizes. Skin color segmentation and motion region segmentation quickly locate face candidates, a robust eye detection algorithm detects the eyes, and eye-pair verification finally decides the validity of each face candidate. The embodiment is as follows.

The invention proposes a fast face detection algorithm based on skin color, motion, and facial feature analysis. First, a range of chrominance values yields the skin color region. Second, a new method that segments the motion region using an enhanced frame difference is proposed. The skin color region and the motion region are then combined to locate face candidates. A robust eye detection method detects the eyes within the detected face candidate regions, and the eyes are then checked for pairing to decide the validity of each face candidate.

An overview of this face detection algorithm is shown in Figure 1. It consists of two main modules: (1) face localization, which finds the face candidates; and (2) facial feature detection, which verifies the detected face candidates, as sketched below.
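The two-module flow can be outlined as a skeleton. Every helper name below (detect_skin, enhance_frame_difference, and so on) is a hypothetical stand-in for a step detailed in the following paragraphs, several of which are sketched individually later in this description; this is an illustrative outline, not the patented implementation.

```python
def detect_faces(y, cb, cr, prev_y, prev_diff):
    """Illustrative skeleton of the Figure 1 flow (steps 100-114).

    y, cb, cr: planes of the current YCbCr frame (step 100).
    prev_y: Y plane of the previous frame; prev_diff: IIR feedback state.
    Every helper called here is a hypothetical stand-in for a step of the
    description, not a function defined by the patent.
    """
    # Module 1 -- face localization (steps 102-108).
    enh_diff = enhance_frame_difference(y, prev_y, prev_diff)   # step 102
    motion_mask = segment_motion_region(enh_diff)               # step 104
    skin_mask = detect_skin(cb, cr)                             # step 106
    candidates = label_face_candidates(motion_mask, skin_mask)  # step 108

    # Module 2 -- facial feature detection (steps 110-112).
    faces = []
    for region in candidates:
        eyes = detect_eye_candidates(cb, region)                # step 110
        if verify_eye_pair(y, eyes, region):                    # step 112
            faces.append(region)                                # face region (114)
    return faces, enh_diff
```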
First, in step 100, image data is received or input by the face localization module. The image data lies in a color space, such as the YCbCr color space, and can be split into components that are sensitive to luminance and to color, respectively. With YCbCr taken as the preferred color space, the Y component is sensitive to luminance while the Cb and Cr components are sensitive to color.

In step 102, the Y component undergoes frame difference enhancement. The frame difference is enhanced by an infinite impulse response (IIR) type filter, and the motion region is segmented by the proposed motion segmentation method (step 104). Meanwhile, a common skin color model classifies pixels into skin pixels and non-skin pixels (step 106). The motion region and the skin color region of the image are then combined (step 108) to obtain more accurate face candidates. Each face candidate is then verified by eye detection 110 and eye-pair verification 112, and regions that pass the face verification are taken as face regions.

Skin color segmentation is described in more detail below. Modeling skin color requires choosing an appropriate color space and identifying the cluster associated with skin color in that space. The YCbCr color space is used here because it is widely adopted in image compression standards such as MPEG and JPEG. Moreover, skin color regions can be identified by the presence of a compact, continuous cluster of chrominance values (Cb and Cr) in the YCbCr color space. The most suitable ranges over all input images are RCb = [77, 127] and RCr = [133, 173]. If both the Cb and Cr values of a pixel fall within RCb and RCr, the pixel is classified as a skin pixel, as in the sketch below.
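This per-pixel chrominance test is a one-liner in array form. A minimal sketch, assuming 8-bit Cb and Cr planes already extracted from the frame:

```python
import numpy as np

def detect_skin(cb, cr):
    """Skin-pixel mask from the ranges RCb = [77, 127] and RCr = [133, 173]."""
    # A pixel is a skin pixel only when BOTH chrominance values are in range.
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

# Example on a CIF-sized (288 x 352) frame with random chrominance planes.
cb = np.random.randint(0, 256, (288, 352), dtype=np.uint8)
cr = np.random.randint(0, 256, (288, 352), dtype=np.uint8)
skin_mask = detect_skin(cb, cr)   # boolean (288, 352) array
```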

Motion region segmentation is described in detail next. Although the skin color technique quickly locates face regions, it may also pick up false candidates in the background. The invention therefore proposes a motion region segmentation algorithm based on frame differences to compensate for this weakness of using skin color alone.

Frame differencing finds the motion region effectively, but it has two serious drawbacks. One is that frame differences usually appear on edge regions; the other is that when an object moves only slightly, the frame difference sometimes becomes very weak, as shown in Figure 2(b). An IIR-type filter is therefore used to enhance the frame difference. The idea of the IIR-type filter is a feedback loop: each output value is fed back as the next input value. For an M×N image, the IIR-type filter of the invention can be simplified and described as:

Ot(x, y) = It(x, y) + ω × Ot-1(x, y)

where x = 0, ..., M-1 and y = 0, ..., N-1; It(x, y) is the original t-th frame difference at pixel (x, y), and Ot(x, y) is the enhanced t-th frame difference. Here ω is a weighting factor, which can be set to, for example, 0.9. Figure 2(c) shows the result after frame difference enhancement: the motion region clearly becomes more prominent than in the original and easier to extract.
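A minimal sketch of this recursion, assuming the raw frame difference It is taken as the absolute luminance difference between consecutive frames (the patent text does not fix this choice):

```python
import numpy as np

def enhance_frame_difference(curr_y, prev_y, prev_out, w=0.9):
    """IIR-style enhancement: O_t = I_t + w * O_(t-1).

    curr_y, prev_y: luminance planes of the current and previous frames.
    prev_out: previous enhanced difference O_(t-1); zeros for the first frame.
    Taking I_t as the absolute luminance difference is an assumption.
    """
    i_t = np.abs(curr_y.astype(np.float32) - prev_y.astype(np.float32))
    return i_t + w * prev_out   # weak motion accumulates frame after frame
```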
An averaging filter and a dilation operation then remove noise and enhance the image, yielding a bitmap O1(x, y) in which each pixel of value 1 is a motion pixel and each pixel of value 0 is a non-motion pixel. A scanning procedure then extracts the motion region. The procedure scans in two directions, vertical and horizontal, described as follows. In the vertical scan, the top and bottom boundaries of the motion pixels in each column of the bitmap O1(x, y) are located. Once the two boundaries are found, every pixel between the top and bottom boundaries is set as a motion pixel with value 1, while pixels outside the two boundaries are set as non-motion pixels with value 0. This produces a bitmap denoted O2(x, y). The horizontal scan consists of a left-to-right pass and a right-to-left pass. The left-to-right pass is:

O2(x, y) = 0,  if O1(x, y) = 0 and O2(x-1, y) = 0

where x = 1, ..., M-1 and y = 0, ..., N-1. The right-to-left pass then proceeds as:

O2(x, y) = 0,  if O1(x, y) = 0 and O2(x+1, y) = 0

where x = M-2, ..., 0 and y = 0, ..., N-1. Pixels that do not satisfy these conditions keep their values. Next, all of the shortest runs of consecutive pixels with value 1 in the bitmap O2(x, y) are found and removed, which ensures that the correct geometry of the motion region is obtained. Figure 3(a) shows the result of motion region segmentation; the motion region appears in white and the non-motion region in black.

The skin color region and the motion region, as in Figure 3(b), are combined to locate the face candidates. A labeling technique then labels the face positions and removes small labels to obtain the face candidates. Figure 3(c) shows the face candidates after combining the motion and skin color regions.

Eye detection 110 (see Figure 1) is described in detail below. Its purpose is to find facial features that verify the presence of a face. The idea is to detect possible eye candidates within each face candidate, and then to consider the pairwise relation between two eye candidates, which is used to decide the validity of the face candidate.

Most conventional algorithms detect facial features from the luminance component. In the present invention, however, the luminance component often leads to false alarms and noise. In practice, although the low luminance of the eye region can be detected by a valley detector, edge regions near the area under examination also have low luminance. Moreover, the luminance component is affected by lighting changes and shadows. The present invention therefore detects the eyes from a chrominance component rather than the luminance component. Analysis of the chrominance components shows that high Cb values are found around the eyes, so a peak detector is preferably used to detect regions of high Cb value. The peak field of the image Cb(x, y) is obtained as:

P(x, y) = Cb²(x, y) − [(Cb²(x, y) ⊖ g(x, y)) ⊕ g(x, y)]

where g(x, y) is the structuring element, ⊖ denotes erosion, and ⊕ denotes dilation: the input Cb² image is eroded and then dilated before being subtracted from the image itself. Figure 4 shows the results of this morphological operation on different components of the YCbCr color space. Clearly, the Cb component yields fewer and more compact eye candidates than the Y and Cr components. In the Y component, because there are brighter pixels near the eyes, a valley detector always produces scattered eye candidates, as shown in Figure 4(b).
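This formula is a morphological top-hat: the opening (erosion followed by dilation) flattens the narrow bright peaks of the squared Cb plane, so subtracting the opened image from the original leaves exactly those peaks. A sketch using SciPy's grayscale morphology; the 3×3 flat structuring element is an assumed choice, since the patent does not specify g(x, y):

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def detect_eye_peaks(cb, size=(3, 3)):
    """Top-hat peak field P = Cb^2 - opening(Cb^2) on the Cb plane."""
    cb2 = cb.astype(np.float32) ** 2
    # Opening = erosion followed by dilation; it flattens narrow bright peaks.
    opened = grey_dilation(grey_erosion(cb2, size=size), size=size)
    return cb2 - opened   # large values mark candidate eye locations
```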
Several criteria are used to eliminate false eye candidates:

1. Eye area: any eye candidate that is too large or too small is eliminated.
2. Ratio of the eye region: elongated eye candidates are likewise eliminated.
3. Density regulation: each eye candidate has a minimal rectangle box (MRB) fitted to it; if an eye candidate has a small area but a large MRB, it is eliminated.

Figure 5(a) shows the image of the eye candidates after peak detection.

In the subsequent step, eye-pair candidates are selected and verified as correct pairs of eyes. Several further criteria help find the correct eye pairs. If the slope between the two eyes is within ±45°, the eye-pair candidate can be regarded as a possible correct pair of eyes. If the ratio between the areas of the two eyes is too large, the eye-pair candidate is removed. Each eye-pair candidate is extended to generate a face box (Figure 5(b)); if the face box lies within a face candidate, it can be regarded as a correct face box.

Based on the positions of the eye pair, a luminance image is sampled, for example of size 20×10 pixels. The average difference between the middle region and the two side regions of the sampled image is then computed as:

Diff = (1/80) Σ(y=0..9) Σ(x=6..13) Y(x, y) − (1/120) Σ(y=0..9) [ Σ(x=0..5) Y(x, y) + Σ(x=14..19) Y(x, y) ]

A correct eye pair must have a relatively high average difference, because the eyes usually have low luminance. If the average difference of the pair lies between the predetermined thresholds Diff_up and Diff_down, the pair is regarded as a correct pair of eyes. The actual values of Diff_up and Diff_down can be decided according to the actual design and size of the luminance image; for example, Diff_up and Diff_down can be 64 and 0, respectively.

Moreover, if face boxes (or squares, or even polygons) overlap within one face candidate, the following criteria decide which one is correct. The number of edge pixels of each sampled eye image is counted and denoted E. For each sampled image, a symmetry value S is obtained:

S = Σ(y=0..9) Σ(x=0..9) |Y(x, y) − Y(19-x, y)| / (Ymax − Ymin + 1)

where Y is the luminance, and Ymax and Ymin are the maximum and minimum luminance of the sampled eye image, respectively. In general, a true eye image has a high E value and a low S value, both caused by the facial features. A face score can then be computed as:

FaceScore = E / S
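A sketch of the three measurements above on a 20×10 sampled luminance patch, stored here as a (10, 20) row-major array (rows along the short side, an assumed layout); the edge-pixel count E is taken as given, since the patent does not name a particular edge detector:

```python
import numpy as np

def luminance_difference(patch):
    """Diff between the middle 8 columns and the 12 side columns of the patch."""
    patch = patch.astype(np.float32)            # shape (10, 20): rows, columns
    middle = patch[:, 6:14].sum() / 80.0        # 8 columns x 10 rows
    sides = (patch[:, :6].sum() + patch[:, 14:].sum()) / 120.0  # 12 x 10
    return middle - sides

def symmetry_value(patch):
    """S: left/right asymmetry normalized by the luminance span of the patch."""
    patch = patch.astype(np.float32)
    half, mirror = patch[:, :10], patch[:, 19:9:-1]   # pairs Y(x,y), Y(19-x,y)
    return np.abs(half - mirror).sum() / (patch.max() - patch.min() + 1)

def face_score(edge_pixel_count, patch):
    """FaceScore = E / S; true eyes give a high E and a low S."""
    s = symmetry_value(patch)
    return edge_pixel_count / s if s > 0 else float("inf")

# A pair passes the luminance test when Diff lies between Diff_down and
# Diff_up, e.g. 0 and 64 as suggested above.
```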

The eye pair with the largest FaceScore value is then regarded as the true pair of eyes, and the associated face box is retained. Figure 6(c) shows the result of the overlap decision.

EXPERIMENTAL RESULTS

Experimental results are presented in this section. The experiments comprise two sets, Set 1 and Set 2. In Set 1, six QCIF sequences are tested, including four standard benchmark sequences and two captured sequences. In Set 2, twelve CIF sequences are captured by a web camera. The spatial sampling of Y, Cb, and Cr is 4:2:0. Nc, Nm, and Nf denote the numbers of correctly detected, missed, and falsely detected faces, respectively. The detection rate (DR) and the false rate (FR) are defined as:

DR = Nc / (Nc + Nm)
FR = Nf / (Nc + Nf)
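As a worked check of these definitions, the snippet below reproduces the Suzie row of Table 1 from one count combination consistent with it; the actual Nc, Nm, and Nf counts are not reported, so these values are illustrative:

```python
def detection_rate(n_correct, n_missed):
    return n_correct / (n_correct + n_missed)

def false_rate(n_false, n_correct):
    return n_false / (n_correct + n_false)

# One count combination consistent with the Suzie row of Table 1 below:
assert round(detection_rate(91, 9), 3) == 0.910   # DR = 91.0%
assert round(false_rate(4, 91), 3) == 0.042       # FR = 4.2%
```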
In Set 1, Figure 7 shows the experimental results for the test QCIF sequences: Suzie, Claire, Carphone, Salesman, and two captured test sequences. The first 100 frames of each sequence are tested and statistics are collected. These sequences include various head poses, such as raising, turning, lowering, and tilting the head, as well as zooming in and out. Because of the varied head poses, a few detection errors occur in some frames. Table 1 shows the detection rates for the selected benchmark sequences and captured sequences; all detection rates are above 80%. Missed frames are usually caused by blinking, by the eyes suddenly disappearing, or by the eyes being covered by hair. The two sequences in Figures 7(e) and 7(f) were captured by a web camera under different lighting conditions. For the QCIF sequences, the average detection time is 8.1 ms per frame on a PC with a P4 2.4 GHz CPU.

Table 1. Face detection results for the QCIF sequences

Sequence          DR      FR
Suzie             91.0%   4.2%
Claire            86.0%   9.5%
Carphone          91.0%   5.2%
Salesman          86.0%   1.1%
Test sequence 1   93.0%   5.1%
Test sequence 2   80.0%   14.0%
Average           87.8%   6.6%

In Set 2, 3500 frames covering 10 different people are tested. Figure 8 shows some of the face detection results for the test CIF sequences, and the detection rates are shown in Table 2. The sequences include various expressions (Figures 8(a) and 8(b)), head poses (Figures 8(c) to 8(f)), head turning (Figures 8(g) to 8(i)), and multiple people (Figures 8(k) and 8(l)). The average detection rate is 94.95% and the average false rate is 2.11%. Moreover, the average detection time for the CIF sequences is 32 ms per frame.

Table 2. Face detection results for the CIF sequences

Sequence   DR      FR        Sequence   DR      FR
(a)        99.2%   0.8%      (g)        91.6%   5.0%
(b)        88.0%   3.9%      (h)        90.4%   6.2%
(c)        98.4%   0.4%      (i)        97.2%   1.2%
(d)        96.8%   2.0%      (j)        97.2%   1.2%
(e)        94.4%   1.3%      (k)        94.0%   1.1%
(f)        95.6%   2.0%      (l)        96.6%   0.2%

Average DR: 94.95%   Average FR: 2.11%

The proposed algorithm focuses on real-time face detection, and efficient motion region segmentation and eye detection methods are proposed. The experimental results show that the face detection algorithm of the invention has a high detection rate and a fast detection speed, and that it runs in real time without being affected by the environment. False detections occur only in very few frames. The algorithm is therefore robust, practical, and effective.

Although the invention has been disclosed above by way of a preferred embodiment, the embodiment is not intended to limit the invention. Anyone skilled in the art may make minor changes and refinements without departing from the spirit and scope of the invention, so the scope of protection of the invention is defined by the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a flowchart of the face detection method according to a preferred embodiment of the invention.
Figure 2 compares the frame difference and the enhanced frame difference according to a preferred embodiment of the invention.
Figure 3 shows the results of face localization.
Figure 4 shows the results of the morphological operation on different components of the YCbCr color space.
Figure 5 shows the results of face verification.
Figure 6 shows the results of the overlap decision.
Figure 7 shows the experimental results for the test QCIF sequences.
Figure 8 shows some face detection results for the test CIF sequences.

DESCRIPTION OF REFERENCE NUMERALS

100: input image
102: frame difference enhancement
104: motion region segmentation
106: skin color segmentation
108: combination
110: eye detection
112: eye-pair verification
114: face image

Claims (18)

1. A face detection method, suitable for a video sequence, the method comprising: receiving image data in a YCbCr color space; analyzing a Y component of the image data to obtain a motion region; analyzing a CbCr component of the image data to obtain a skin color region; combining the motion region and the skin color region to produce a face candidate; performing an eye detection process on the image to detect eye candidates; and performing an eye-pair verification process to find an eye-pair candidate among the eye candidates, wherein the eye-pair candidate also lies within a region of the face candidate.

2. The face detection method of claim 1, wherein in the step of using the CbCr component of the image data, a Cb value is between 77 and 127 and a Cr value is between 133 and 173.

3. The face detection method of claim 1, wherein the step of using the Y component of the image data comprises: performing a frame difference process on the Y component, wherein an infinite impulse response (IIR) type filter is applied to enhance the frame difference so as to compensate for a weakness of the skin color region.

4. The face detection method of claim 1, further comprising a labeling process to label a face position, so as to discard face candidates having relatively small labels.

5. The face detection method of claim 1, wherein the step of performing the eye detection process comprises: checking an eye area, wherein eye regions outside a given range are discarded; checking a ratio of the eye region, wherein elongated preliminary eye candidates are discarded; and checking a density regulation, wherein each eye candidate has a minimal rectangle box (MRB) fitted to it, and a preliminary eye candidate having a small area but a large MRB is discarded.

6. The face detection method of claim 1, wherein the step of performing the eye-pair verification process comprises: finding a preliminary eye-pair candidate by requiring the slope between the paired eyes to be within ±45°; discarding the preliminary eye-pair candidate when the eye areas of its two eye candidates have a large ratio; generating a face polygon from the preliminary eye-pair candidate, and discarding the preliminary eye-pair candidate when the face polygon is outside a region of the face candidate; and sampling a luminance image within a pixel region, wherein the luminance image comprises a middle region and two side regions, computing a difference between an average luminance of the middle region and an average luminance of the two side regions, and taking the preliminary eye-pair candidate as the eye-pair candidate if the difference is within a predetermined range.

7. The face detection method of claim 6, wherein after the eye-pair candidate is determined and when multiple face polygons overlap, a face symmetry check is further performed.

8. The face detection method of claim 7, wherein a number E of edge pixels of an eye image of the eye-pair candidate is divided by a symmetry value S to produce a face score, and the face polygon having the largest face score is selected.

9. The face detection method of claim 6, wherein the face polygon comprises a rectangle or a square.

10. The face detection method of claim 6, wherein the luminance image is a 20×10 image region in pixels.

11. The face detection method of claim 10, wherein the middle region is the middle 8 pixels along a long side.

12. The face detection method of claim 10, wherein the middle region corresponds to a region between the two eyes.

13. A face detection method, comprising: receiving image data in a color space; analyzing a first color component of the image data to obtain a motion region; analyzing a second color component of the image data to obtain a skin color region; combining the motion region and the skin color region to produce a face candidate; performing an eye detection process on the image to detect eye candidates; and performing an eye-pair verification process to find an eye-pair candidate among the eye candidates, wherein the eye-pair candidate also lies within a region of the face candidate.

14. A method of performing face detection on an image, comprising: detecting a face candidate; performing an eye detection process on the image to detect at least two eye candidates; and performing an eye-pair verification process to find an eye-pair candidate among the eye candidates, wherein the eye-pair candidate also lies within a region of the face candidate.

15. The method of claim 14, wherein the step of performing the eye detection process comprises: checking an eye area, wherein eye regions outside a given range are discarded; checking a ratio of the eye region, wherein elongated preliminary eye candidates are discarded; and checking a density regulation, wherein each eye candidate has a minimal rectangle box (MRB) fitted to it, and a preliminary eye candidate having a small area but a large MRB is discarded.

16. The method of claim 14, wherein the step of performing the eye-pair verification process comprises: finding a preliminary eye-pair candidate by requiring the slope between the paired eyes to be within ±45°; discarding the preliminary eye-pair candidate when the eye areas of its two eye candidates have a large ratio; generating a face polygon from the preliminary eye-pair candidate, and discarding the preliminary eye-pair candidate when the face polygon is outside a region of the face candidate; and sampling a luminance image within a pixel region, wherein the luminance image comprises a middle region and two side regions, computing a difference between an average luminance of the middle region and an average luminance of the two side regions, and taking the preliminary eye-pair candidate as the eye-pair candidate if the difference is within a predetermined range.

17. The method of claim 16, wherein after the eye-pair candidate is determined and when multiple face polygons overlap, a face symmetry check is further performed.

18. The method of claim 16, wherein the face polygon comprises a rectangle or a square.
TW92129221A 2003-10-22 2003-10-22 Robust face detection algorithm for real-time video sequence TWI225222B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW92129221A TWI225222B (en) 2003-10-22 2003-10-22 Robust face detection algorithm for real-time video sequence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW92129221A TWI225222B (en) 2003-10-22 2003-10-22 Robust face detection algorithm for real-time video sequence

Publications (2)

Publication Number Publication Date
TWI225222B true TWI225222B (en) 2004-12-11
TW200515301A TW200515301A (en) 2005-05-01

Family

ID=34568557

Family Applications (1)

Application Number Title Priority Date Filing Date
TW92129221A TWI225222B (en) 2003-10-22 2003-10-22 Robust face detection algorithm for real-time video sequence

Country Status (1)

Country Link
TW (1) TWI225222B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102015113183A1 (en) 2015-08-10 2017-02-16 Yuan-Hung WEN Cable operated drive assembly for vehicles
DE102015017184A1 (en) 2015-08-10 2017-03-02 Yuan-Hung WEN Cable operated drive assembly for vehicles
US10699430B2 (en) 2018-10-09 2020-06-30 Industrial Technology Research Institute Depth estimation apparatus, autonomous vehicle using the same, and depth estimation method thereof
TWI709943B (en) * 2018-10-09 2020-11-11 財團法人工業技術研究院 Depth estimation apparatus, autonomous vehicle using the same, and depth estimation method thereof

Also Published As

Publication number Publication date
TW200515301A (en) 2005-05-01

Similar Documents

Publication Publication Date Title
CN108038456B (en) Anti-deception method in face recognition system
US6526161B1 (en) System and method for biometrics-based facial feature extraction
US7936926B2 (en) Apparatus, method, and program for face feature point detection
JP3761059B2 (en) Method and apparatus for detecting human face and observer tracking display
JP3999964B2 (en) Multi-mode digital image processing method for eye detection
US7860280B2 (en) Facial feature detection method and device
US7376270B2 (en) Detecting human faces and detecting red eyes
US7653221B2 (en) Method and apparatus for automatic eyeglasses detection and removal
JP4597391B2 (en) Facial region detection apparatus and method, and computer-readable recording medium
US20050063568A1 (en) Robust face detection algorithm for real-time video sequence
JP7197485B2 (en) Detection system, detection device and method
US8290277B2 (en) Method and apparatus for setting a lip region for lip reading
JP2008234208A (en) Facial region detection apparatus and program
JP2004348674A (en) Region detection method and its device
EP3241151A1 (en) An image face processing method and apparatus
US20160026859A1 (en) Image processing apparatus, image processing method and image processing program
CN112001853A (en) Image processing apparatus, image processing method, image capturing apparatus, and storage medium
TWI225222B (en) Robust face detection algorithm for real-time video sequence
JP2005165387A (en) Method and device for detecting stripe defective of picture and display device
JP6527765B2 (en) Wrinkle state analyzer and method
KR101206736B1 (en) Apparatus and method for detecting rotated face region
EP1865443A2 (en) Facial feature detection method and device
JP4831344B2 (en) Eye position detection method
Rahman et al. Real-time face-based auto-focus for digital still and cell-phone cameras
JPH11283036A (en) Object detector and object detection method

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees