TWI689285B - Facial symmetry detection method and system thereof - Google Patents


Info

Publication number
TWI689285B
Authority
TW
Taiwan
Prior art keywords
feature
image
face
mouth
complex
Prior art date
Application number
TW107140636A
Other languages
Chinese (zh)
Other versions
TW202019338A (en)
Inventor
張傳育
鄭曼汝
馬惠明
Original Assignee
國立雲林科技大學
Priority date
Filing date
Publication date
Application filed by 國立雲林科技大學
Priority to TW107140636A
Application granted
Publication of TWI689285B
Publication of TW202019338A

Landscapes

  • Image Analysis (AREA)

Abstract

A facial symmetry detection method includes a detecting step and a determining step, where the detecting step includes a pre-processing step, a feature-extracting step and a feature-selecting step. In the pre-processing step, an image is captured by an image capture device and pre-processed to produce a processed image. In the feature-extracting step, a plurality of image features are extracted from the processed image to produce an image feature set, which includes a plurality of feature symmetry indexes and a plurality of block similarities. In the feature-selecting step, a subset of the image features is selected from the image feature set to produce a determining feature set, which is input into a classifier. In the determining step, the classifier produces a determining result according to the determining feature set; the result is either an asymmetrical state or a normal state. The accuracy of the facial symmetry detection method and of its system is thereby increased.

Description

Facial symmetry detection method and system

The invention relates to a facial stroke detection method and system, and in particular to a facial stroke detection method and system that make the determination based on a plurality of feature symmetry indexes and a plurality of feature-block similarities.

Because the most obvious signs of a facial stroke are a skewed eye and a skewed mouth, while traditional stroke detection methods and systems judge left-right symmetry only from the facial features of the original image, the accuracy of traditional methods and systems is low, which causes facial-stroke patients to miss the best time for treatment.

In view of this, it is very important to develop a facial stroke detection method and system with high accuracy.

Therefore, an object of the present invention is to provide a facial stroke detection method and system that make the determination based on a plurality of feature symmetry indexes and a plurality of feature-block similarities, thereby improving the accuracy of facial stroke detection and preventing facial-stroke patients from delaying treatment because of a misjudgment.

According to an embodiment of a method aspect of the present invention, a facial stroke detection method is provided, which includes a detection step and a determination step, where the detection step includes a pre-processing step, a feature extraction step and a feature selection step. In the pre-processing step, image data are captured through an image capture device and pre-processed to obtain a processed image. In the feature extraction step, features are extracted from the processed image to generate an image feature set, which includes a plurality of feature symmetry indexes and a plurality of feature-block similarities. In the feature selection step, feature selection is performed on the image feature set to form a determination feature set, which is input into a classifier. In the determination step, the classifier makes a determination according to the determination feature set to produce a determination result, which is either a stroke state or a normal state.

Thereby, the facial stroke detection method of the present invention improves its accuracy by inputting into the classifier a determination feature set selected from the image feature set.

According to the facial stroke detection method of the preceding paragraph, the pre-processing step includes a face detection step, a normalization processing step, a feature-point detection step and a correction processing step. The face detection step performs face detection on the image data and captures an inner-face image. The normalization processing step normalizes the inner-face image to obtain a normalized inner-face image. The feature-point detection step performs feature-point detection on the normalized inner-face image to obtain a feature-point face image that contains a plurality of facial feature points. The correction processing step uses at least two of the facial feature points to correct the feature-point face image to obtain the processed image.

According to the facial stroke detection method of the preceding paragraph, the feature-point detection uses an ensemble of regression trees to obtain the feature-point face image, and the number of facial feature points may be 60.

According to the facial stroke detection method of the preceding paragraph, the plurality of feature symmetry indexes include a mouth slope, a mouth area ratio, a mouth distance ratio, a two-eye distance ratio and a two-eye area ratio, and the plurality of feature-block similarities include an eye color similarity index, an eye ternarization similarity index, a plurality of eye Gabor similarity indexes, a mouth color similarity index, a mouth ternarization similarity index and a plurality of mouth Gabor similarity indexes.

According to the facial stroke detection method of the preceding paragraph, the classifier may be a support vector machine, a random forest or a Bayes classifier.

According to the facial stroke detection method of the preceding paragraph, the method further includes a modeling step. The modeling step includes a database establishment step, a pre-training processing step, a training feature extraction step and a training feature selection step. The database establishment step establishes a stroke detection database, which may include a plurality of stroke images and a plurality of normal images. The pre-training processing step performs pre-training processing on each stroke image or each normal image to form a processed stroke detection image. The training feature extraction step extracts training features from the processed stroke detection image to generate a stroke detection feature set, which may include a plurality of training symmetry indexes and a plurality of training block similarities. The training feature selection step performs training feature selection on the stroke detection feature set to generate the determination feature set, and uses the determination feature set to train the classifier.

According to the facial stroke detection method of the preceding paragraph, the training feature selection uses a randomly generated sequential forward floating selection algorithm to generate the determination feature set.
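The patent names a randomly generated sequential forward floating selection variant. As a rough illustration of the forward-selection core only — the random initial subsets and the floating (conditional backward removal) step of the patent's algorithm are omitted, and the scoring function is a placeholder, not the patent's — a greedy sketch might look like this:

```python
def forward_select(features, score, k):
    """Greedy forward selection: repeatedly add the candidate feature that
    maximizes score(subset) until k features are selected.
    Simplified sketch; the patent's variant also uses random initial
    subsets and a floating (conditional removal) step, not shown here."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: the score simply counts overlap with a known-useful set
# (a real score would be classifier accuracy on validation data).
useful = {"mouth_slope", "eye_area_ratio"}
picked = forward_select(
    ["mouth_slope", "mouth_area_ratio", "eye_area_ratio"],
    score=lambda subset: len(set(subset) & useful),
    k=2,
)
```

In practice the score would be cross-validated accuracy of the classifier trained on the candidate subset.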

According to an embodiment of a structural aspect of the present invention, a facial stroke detection system is provided, which includes an image capture device and a processor, where the processor includes a pre-processing module, a feature extraction module and a feature selection module. The pre-processing module pre-processes the image data to obtain a processed image. The feature extraction module extracts features from the processed image to generate an image feature set. The feature selection module performs feature selection on the image feature set to form a determination feature set, and inputs the determination feature set into a classifier to produce a determination result.

Thereby, the feature selection module of the facial stroke detection system of the present invention improves the system's accuracy by inputting into the classifier a determination feature set selected from the image feature set.

According to the facial stroke detection system of the preceding paragraph, the system further includes a stroke detection database, which contains a plurality of stroke images and a plurality of normal images.

According to the facial stroke detection system of the preceding paragraph, the image capture device may be a camera, and the classifier may be a support vector machine, a random forest or a Bayes classifier.

s100‧‧‧Facial stroke detection method

s110‧‧‧Detection step

s111‧‧‧Pre-processing step

s1111‧‧‧Face detection step

s1112‧‧‧Normalization processing step

s1113‧‧‧Feature-point detection step

s1114‧‧‧Correction processing step

s112‧‧‧Feature extraction step

s113‧‧‧Feature selection step

s120‧‧‧Determination step

s130‧‧‧Modeling step

s131‧‧‧Database establishment step

s132‧‧‧Pre-training processing step

s1321‧‧‧Training face detection step

314‧‧‧Right-mouth block

400‧‧‧Facial stroke detection system

410‧‧‧Image capture device

420‧‧‧Processor

421‧‧‧Pre-processing module

422‧‧‧Feature extraction module

423‧‧‧Feature selection module

430‧‧‧Stroke detection database

P28, P29, P33, P36, P37, P38, P39, P40, P41, P42, P43, P44, P45, P46, P47, P48, P49, P50, P51, P52, P53, P54, P55, P56, P57, P58, P59‧‧‧Facial feature points

s1322‧‧‧Training normalization processing step

s1323‧‧‧Training feature-point detection step

s1324‧‧‧Training correction processing step

s133‧‧‧Training feature extraction step

s134‧‧‧Training feature selection step

310a‧‧‧Eye blocks

310b‧‧‧Mouth blocks

311‧‧‧Left-eye block

312‧‧‧Right-eye block

313‧‧‧Left-mouth block

f1‧‧‧First center point

f2‧‧‧Second center point

f3‧‧‧Third center point

roi_LE‧‧‧Left-eye block starting point

roi_RE‧‧‧Right-eye block starting point

roi_LM‧‧‧Left-mouth block starting point

roi_RM‧‧‧Right-mouth block starting point

P_LE‧‧‧Left-eye block reference point

P_RE‧‧‧Right-eye block reference point

M‧‧‧Vertical line

FIG. 1 is a flowchart of the steps of a facial stroke detection method according to an embodiment of a method aspect of the present invention; FIG. 2 is a flowchart of the pre-processing step of the facial stroke detection method according to the embodiment of FIG. 1; FIG. 3 is a schematic diagram of the feature points of the feature-point face image obtained in the feature-point detection step of the pre-processing step of the facial stroke detection method according to the embodiment of FIG. 1; FIG. 4 is a schematic diagram of the feature points of the processed image obtained by the pre-processing step of the facial stroke detection method according to the embodiment of FIG. 1; FIG. 5 is a schematic diagram of the feature points of the eye blocks and mouth blocks of the processed image of the facial stroke detection method according to the embodiment of FIG. 4; FIG. 6 is a flowchart of the steps of a facial stroke detection method according to another embodiment of a method aspect of the present invention; FIG. 7 is a flowchart of the pre-training processing step of the modeling step of the facial stroke detection method according to the embodiment of FIG. 6; FIG. 8 is a block diagram of a facial stroke detection system according to an embodiment of a structural aspect of the present invention; and FIG. 9 is a block diagram of a facial stroke detection system according to another embodiment of a structural aspect of the present invention.

FIG. 1 is a flowchart of the steps of a facial stroke detection method s100 according to an embodiment of a method aspect of the present invention. As shown in FIG. 1, the facial stroke detection method s100 includes a detection step s110 and a determination step s120.

In detail, the detection step s110 includes a pre-processing step s111, a feature extraction step s112 and a feature selection step s113. In the pre-processing step s111, image data are captured through an image capture device 410 (labeled in FIG. 8) and pre-processed to obtain a processed image. In the feature extraction step s112, features are extracted from the processed image to generate an image feature set, which includes a plurality of feature symmetry indexes and a plurality of feature-block similarities. In the feature selection step s113, feature selection is performed on the image feature set to form a determination feature set, which is input into a classifier 424 (labeled in FIG. 8). In the determination step s120, the classifier 424 makes a determination according to the determination feature set to produce a determination result, which is either a stroke state or a normal state. Inputting the determination feature set formed by the feature selection step s113 into the classifier 424 thereby improves the accuracy of the facial stroke detection method s100.
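The flow of steps s110-s120 described above can be sketched in Python. Every function body below is a stand-in stub (the real pre-processing, features and classifier are described in the following paragraphs), so only the control flow reflects the method:

```python
def preprocess(image_data):
    # s111 stub: face detection, normalization, feature-point detection
    # and correction would happen here; this just passes data through.
    return image_data

def extract_features(processed):
    # s112 stub: would compute symmetry indexes and block similarities.
    return {"mouth_slope": 0.0, "eye_area_ratio": 1.0, "other": 0.5}

def select_features(feature_set):
    # s113 stub: keep only the features chosen during training.
    chosen = ("mouth_slope", "eye_area_ratio")
    return [feature_set[name] for name in chosen]

def classify(determination_features):
    # s120 stub: a trained SVM / random forest / Bayes classifier would
    # be used here instead of a fixed threshold.
    return "stroke" if abs(determination_features[0]) > 0.2 else "normal"

def detect(image_data):
    return classify(select_features(extract_features(preprocess(image_data))))
```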

FIG. 2 is a flowchart of the pre-processing step s111 of the facial stroke detection method s100 according to the embodiment of FIG. 1, FIG. 3 is a schematic diagram of the facial feature points of the feature-point face image obtained in the feature-point detection step s1113 of the pre-processing step s111, and FIG. 4 is a schematic diagram of the feature points of the processed image obtained by the pre-processing step s111. As shown in FIG. 2, FIG. 3 and FIG. 4, the pre-processing step s111 may include a face detection step s1111, a normalization processing step s1112, a feature-point detection step s1113 and a correction processing step s1114. The face detection step s1111 performs face detection on the image data to capture an inner-face image, where the inner-face image is the face region cropped out after face detection; the face detection method may be the Histogram of Oriented Gradients (HOG). The normalization processing step s1112 normalizes the inner-face image to obtain a normalized inner-face image; its purpose is to adjust the size of the inner-face image, and the normalization method may be nearest-neighbor interpolation. The feature-point detection step s1113 performs feature-point detection on the normalized inner-face image to obtain a feature-point face image that contains a plurality of facial feature points; the feature-point detection method may be an ensemble of regression trees (ERT), and the number of facial feature points may be 60. The correction processing step s1114 uses at least two of the facial feature points to correct the feature-point face image to obtain the processed image. The correction may be based on the slope between the inner corners of the left and right eyes, i.e. the facial feature points (P39, P42): first the inner-eye-corner slope is computed, and then the correction rotation angle of the feature-point face image is computed from that slope so as to correct the feature-point face image and produce the processed image. The inner-eye-corner slope is given by equation (1):

$EyeM = \dfrac{y_{P_{42}} - y_{P_{39}}}{x_{P_{42}} - x_{P_{39}}}$ (1)

where EyeM is the inner-eye-corner slope, $y_{P_{42}}$ is the y-axis coordinate of the facial feature point P42, $y_{P_{39}}$ is the y-axis coordinate of the facial feature point P39, $x_{P_{42}}$ is the x-axis coordinate of the facial feature point P42, and $x_{P_{39}}$ is the x-axis coordinate of the facial feature point P39. The correction rotation angle is given by equation (2):

$angle = \tan^{-1}(EyeM)$ (2)

where angle is the correction rotation angle. It should be noted that in the following paragraphs, $x_{P_i}$ denotes the x coordinate of the facial feature point Pi and $y_{P_i}$ denotes its y coordinate, where i ranges from 0 to 59; this notation will not be repeated below.
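Equations (1) and (2) can be checked with a few lines of Python. This sketch assumes equation (2) is the arctangent of the slope expressed in degrees, and the coordinates below are hypothetical pixel positions, not values from the patent:

```python
import math

def correction_angle(p39, p42):
    """Rotation angle used to level the face.
    p39, p42: (x, y) coordinates of the inner corners of the left and
    right eyes (facial feature points P39 and P42)."""
    eye_m = (p42[1] - p39[1]) / (p42[0] - p39[0])  # equation (1): slope
    return math.degrees(math.atan(eye_m))          # equation (2), assumed arctan

# A face tilted so the right inner eye corner sits 40 px lower than the
# left one over a 100 px horizontal span:
angle = correction_angle((120, 200), (220, 240))
```

The image would then be rotated by `-angle` about its center (e.g. with an affine warp) to level the eyes.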

Referring to FIG. 1, FIG. 3 and FIG. 4 together, the image feature set generated in the feature extraction step s112 may include a plurality of feature symmetry indexes and a plurality of feature-block similarities, where the feature symmetry indexes include a mouth slope, a mouth area ratio, a mouth distance ratio, a two-eye distance ratio and a two-eye area ratio, and the feature-block similarities may include an eye color similarity index, an eye ternarization similarity index, a plurality of eye Gabor similarity indexes, a mouth color similarity index, a mouth ternarization similarity index and a plurality of mouth Gabor similarity indexes.

The mouth slope is the slope between the two mouth-corner feature points, i.e. the facial feature points (P54, P48), and is given by equation (3):

$MouthM = \dfrac{y_{P_{54}} - y_{P_{48}}}{x_{P_{54}} - x_{P_{48}}}$ (3)

where MouthM is the mouth slope.

The mouth area ratio is obtained from the area of the left half of the mouth and the area of the right half of the mouth. The left-mouth area is computed from the facial feature points (P48, P49, P50, P51, P57, P58, P59) and the right-mouth area from the facial feature points (P51, P52, P53, P54, P55, P56, P57). The left-mouth area is given by equation (4):

$A_{LM} = \dfrac{1}{2}\left|\sum_{k=1}^{7}\left(x_k\,y_{k+1} - x_{k+1}\,y_k\right)\right|$ (4)

where $A_{LM}$ is the left-mouth area and $(x_k, y_k)$, k = 1-7, are the coordinates of the vertices (P48, P49, P50, P51, P57, P58, P59) taken in order, with $(x_8, y_8) = (x_1, y_1)$ (the shoelace formula). The right-mouth area is given by equation (5):

$A_{RM} = \dfrac{1}{2}\left|\sum_{k=1}^{7}\left(x_k\,y_{k+1} - x_{k+1}\,y_k\right)\right|$ (5)

where $A_{RM}$ is the right-mouth area, computed in the same way over the vertices (P51, P52, P53, P54, P55, P56, P57). The mouth area ratio is given by equation (6):

$ratio_{MA} = \dfrac{A_{LM}}{A_{RM}}$ (6)

where $ratio_{MA}$ is the mouth area ratio.

The mouth distance ratio is obtained from the average distance between the left-mouth feature points and the average distance between the right-mouth feature points: the feature-point pairs (P49, P59) and (P50, P58) give the left-mouth average distance, and the pairs (P52, P56) and (P53, P55) give the right-mouth average distance. The left-mouth average distance is given by equation (7):

$D_{LM} = \dfrac{D(P_{49},P_{59}) + D(P_{50},P_{58})}{2}$ (7)

where $D_{LM}$ is the left-mouth average distance, and $D(P_{49},P_{59})$ and $D(P_{50},P_{58})$ are the Euclidean distances between the facial feature points (P49, P59) and (P50, P58), respectively. The right-mouth average distance is given by equation (8):

$D_{RM} = \dfrac{D(P_{52},P_{56}) + D(P_{53},P_{55})}{2}$ (8)

where $D_{RM}$ is the right-mouth average distance, and $D(P_{52},P_{56})$ and $D(P_{53},P_{55})$ are the Euclidean distances between the facial feature points (P52, P56) and (P53, P55), respectively. The mouth distance ratio is given by equation (9):

$ratio_{MD} = \dfrac{D_{LM}}{D_{RM}}$ (9)

where $ratio_{MD}$ is the mouth distance ratio.

To compute the two-eye distance ratio, the feature-point pairs (P37, P41) and (P38, P40) give the left-eye average distance, and the pairs (P43, P47) and (P44, P46) give the right-eye average distance. The left-eye average distance is given by equation (10):

$D_{LE} = \dfrac{D(P_{37},P_{41}) + D(P_{38},P_{40})}{2}$ (10)

where $D_{LE}$ is the left-eye average distance, and $D(P_{37},P_{41})$ and $D(P_{38},P_{40})$ are the Euclidean distances between the facial feature points (P37, P41) and (P38, P40), respectively. The right-eye average distance is given by equation (11):

$D_{RE} = \dfrac{D(P_{43},P_{47}) + D(P_{44},P_{46})}{2}$ (11)

where $D_{RE}$ is the right-eye average distance, and $D(P_{43},P_{47})$ and $D(P_{44},P_{46})$ are the Euclidean distances between the facial feature points (P43, P47) and (P44, P46), respectively. The two-eye distance ratio is given by equation (12):

$ratio_{ED} = \dfrac{D_{LE}}{D_{RE}}$ (12)

where $ratio_{ED}$ is the two-eye distance ratio.

The two-eye area ratio is obtained from the area of the left eye and the area of the right eye. The left-eye area is computed from the facial feature points (P36, P37, P38, P39, P40, P41) and the right-eye area from the facial feature points (P42, P43, P44, P45, P46, P47). The left-eye area is given by equation (13):

$A_{LE} = \dfrac{1}{2}\left|\sum_{k=1}^{6}\left(x_k\,y_{k+1} - x_{k+1}\,y_k\right)\right|$ (13)

where $A_{LE}$ is the left-eye area and $(x_k, y_k)$, k = 1-6, are the coordinates of the vertices (P36, P37, P38, P39, P40, P41) taken in order, with $(x_7, y_7) = (x_1, y_1)$. The right-eye area is given by equation (14):

$A_{RE} = \dfrac{1}{2}\left|\sum_{k=1}^{6}\left(x_k\,y_{k+1} - x_{k+1}\,y_k\right)\right|$ (14)

where $A_{RE}$ is the right-eye area, computed in the same way over the vertices (P42, P43, P44, P45, P46, P47). The two-eye area ratio is given by equation (15):

$ratio_{EA} = \dfrac{A_{LE}}{A_{RE}}$ (15)

where $ratio_{EA}$ is the two-eye area ratio.
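The ratio features above reduce to two helpers: a Euclidean distance and a polygon area. The sketch below assumes the areas are computed with the shoelace formula and that each ratio divides the left-side quantity by the right-side one; `landmarks`, a dict mapping feature-point index to (x, y), and its values are hypothetical:

```python
import math

def dist(p, q):
    """Euclidean distance D(Pi, Pj) used in equations (7), (8), (10), (11)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def polygon_area(points):
    """Shoelace area of the polygon through `points` taken in order
    (assumed form of the area equations (4), (5), (13), (14))."""
    total = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

def eye_distance_ratio(lm):
    """Equations (10)-(12): average eyelid openings, left over right."""
    d_le = (dist(lm[37], lm[41]) + dist(lm[38], lm[40])) / 2
    d_re = (dist(lm[43], lm[47]) + dist(lm[44], lm[46])) / 2
    return d_le / d_re

# Hypothetical perfectly symmetric landmarks: both eyes open 10 px,
# so the ratio should be exactly 1.
landmarks = {37: (30, 40), 41: (30, 50), 38: (40, 40), 40: (40, 50),
             43: (70, 40), 47: (70, 50), 44: (80, 40), 46: (80, 50)}
ratio_ed = eye_distance_ratio(landmarks)
```

A ratio far from 1 (in either direction) indicates left-right asymmetry, which is why these features feed the classifier.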

FIG. 5 is a schematic diagram of the feature points of the eye blocks 310a and the mouth blocks 310b of the processed image according to the facial stroke detection method s100 of the embodiment of FIG. 4. As shown in FIG. 5, the processed image may further include the images of the eye blocks 310a and of the mouth blocks 310b, where the eye blocks 310a may include a left-eye block 311 and a right-eye block 312, and the mouth blocks 310b may include a left-mouth block 313 and a right-mouth block 314.

Among the plurality of feature-block similarities, the eye color similarity index, the eye ternarization similarity index and the plurality of eye Gabor similarity indexes are computed from the image of the left-eye block 311 and the image of the right-eye block 312, where the left-eye block 311 includes a left-eye block reference point P_LE and a left-eye block starting point roi_LE, and the right-eye block 312 includes a right-eye block reference point P_RE and a right-eye block starting point roi_RE. To obtain the two block images, first the vertical line M through the facial feature point P28 is used to find the first center point f1, the point of M at the shortest distance from the facial feature point P39, and the second center point f2, the point of M at the shortest distance from the facial feature point P42. The x coordinate of the left-eye block reference point P_LE takes the x coordinate of the first center point f1 as its reference, the x coordinate of the right-eye block reference point P_RE takes the x coordinate of the second center point f2 as its reference, and the y coordinates of both reference points take the y coordinate of the facial feature point P29 as their reference; the coordinates of the reference points can therefore be expressed as $P_{LE} = (x_{f_1}, y_{P_{29}})$ and $P_{RE} = (x_{f_2}, y_{P_{29}})$. The coordinate positions of the left-eye block starting point roi_LE and of the right-eye block starting point roi_RE are then computed from these reference points; the size of the left-eye block 311 and of the right-eye block 312 may be 35×35, and the starting point of each block is the top-left corner of the 35×35 region determined by its reference point.

The mouth color similarity index, the mouth ternary similarity index, and the plurality of mouth Gabor similarity indexes among the plurality of feature block similarities can be calculated from the image of the left-mouth block 313 and the image of the right-mouth block 314. The center point of facial feature point P 33 and facial feature point P 51 may be a third center point f 3. The y coordinates of the left-mouth block start point roi LM and the right-mouth block start point roi RM take the y coordinate of the third center point f 3 as their reference; the x coordinate of roi LM takes the x coordinate of facial feature point P 50 as its reference point, and the x coordinate of roi RM takes the x coordinate of facial feature point P 52 as its reference point. The left-mouth block 313 and the right-mouth block 314 may be 20×20, and roi LM and roi RM are given by the corresponding equation images in the original specification (Figure 107140636-A0101-12-0012-20 and Figure 107140636-A0101-12-0012-21).

The eye color similarity index is the result of evaluating the similarity index between the image of the left-eye block 311 and the image of the right-eye block 312, and the mouth color similarity index is the result of evaluating the similarity index between the image of the left-mouth block 313 and the image of the right-mouth block 314. The similarity index evaluation conforms to equation (16).

SSIM(G 1 , G 2 ) = [(2·μ G1 ·μ G2 + C 1 )(2·σ G1G2 + C 2 )] / [(μ G1 ² + μ G2 ² + C 1 )(σ G1 ² + σ G2 ² + C 2 )] 式(16)

where G 1 and G 2 are the two input images of the similarity index evaluation being compared, which may be the image of the left-eye block 311 and the image of the right-eye block 312 respectively, or the image of the left-mouth block 313 and the image of the right-mouth block 314 respectively; SSIM(G 1 , G 2 ) is the similarity index; C 1 and C 2 are constants, where C 1 may be 6.5025 and C 2 may be 58.5225; μ G1 is the mean of the input image G 1, μ G2 is the mean of the input image G 2, σ G1 is the standard deviation of the input image G 1, σ G2 is the standard deviation of the input image G 2, and σ G1G2 is their covariance. It is worth mentioning that, before the similarity index evaluation is calculated, one of the two input images must first be mirrored left-to-right.

The eye ternary similarity index is obtained by applying local ternarization to the image of the left-eye block 311 and the image of the right-eye block 312 respectively, and then evaluating the similarity index between the ternarized left-eye block image and the ternarized right-eye block image. The mouth ternary similarity index is obtained by applying local ternarization to the image of the left-mouth block 313 and the image of the right-mouth block 314 respectively, and then evaluating the similarity index between the ternarized left-mouth block image and the ternarized right-mouth block image. The purpose of locally ternarizing the eye block 310a and the mouth block 310b is to reduce the influence of varying illumination, thereby effectively suppressing noise and enhancing texture features. The local ternarization method conforms to equations (17) and (18).

Figure 107140636-A0101-12-0013-28 式(17)

s(n i ) = 1 if n i > n c + t; 0 if n c − t ≤ n i ≤ n c + t; −1 if n i < n c − t 式(18)

where LTP R,N (u,v) is the result of local ternarization; R, N denotes N neighboring pixel values on a circle of radius R; n c is the pixel value of the center point (u,v); n i is the pixel value of the i-th neighboring point; and t is the threshold value, which may be 5, so the interval is [n c − t, n c + t]. s(x) is the result of ternarizing a neighboring point. In other words, when n i is greater than n c + t, s(x) = 1; when n i falls within the interval, s(x) = 0; and when n i is less than n c − t, s(x) = −1.
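The three-way mapping of equation (18) is straightforward to express in code. A minimal numpy sketch follows; the assembly of the N ternarized neighbors into the final LTP code of equation (17) follows the original equation image and is not reproduced here.

```python
import numpy as np

def ternarize(neighbors, n_c, t=5):
    # s(x) per equation (18): 1 above n_c + t, -1 below n_c - t,
    # and 0 inside the interval [n_c - t, n_c + t].
    n = np.asarray(neighbors, dtype=int)
    return np.where(n > n_c + t, 1, np.where(n < n_c - t, -1, 0))
```

With the default threshold t = 5, small illumination-induced fluctuations around the center pixel value map to 0, which is exactly the noise-suppression effect described above.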

The eye Gabor similarity indexes are obtained by applying Gabor filter transforms to the image of the left-eye block 311 and the image of the right-eye block 312 to produce a plurality of left-eye texture feature maps and a plurality of right-eye texture feature maps, and then evaluating the similarity index between the left-eye texture feature maps and the right-eye texture feature maps, where the left-eye texture feature maps enhance the inconspicuous regions in the image of the left-eye block 311 and the right-eye texture feature maps enhance the inconspicuous regions in the image of the right-eye block 312. The mouth Gabor similarity indexes are obtained by applying Gabor filter transforms to the image of the left-mouth block 313 and the image of the right-mouth block 314 to form a plurality of left-mouth texture feature maps and a plurality of right-mouth texture feature maps, and then evaluating the similarity index between the left-mouth texture feature maps and the right-mouth texture feature maps, where the left-mouth texture feature maps enhance the inconspicuous regions in the image of the left-mouth block 313 and the right-mouth texture feature maps enhance the inconspicuous regions in the image of the right-mouth block 314. The Gabor filter transform conforms to equation (19).

G θ,s (x,y) = ∫∫ φ s,θ (x,y)·f(x,y) dx dy 式(19)

where G θ,s (x,y) is the output image after the input image is transformed by the Gabor filter; φ s,θ (x,y) is the Gabor filter; s is the scale, with s greater than or equal to 0 and less than or equal to 4; and θ is the orientation index, with θ greater than or equal to 0 and less than or equal to 7, giving 40 Gabor filters of different scales and orientations. f(x,y) is the input image of the Gabor filter transform, which may be the image of the left-eye block 311, the image of the right-eye block 312, the image of the left-mouth block 313, or the image of the right-mouth block 314.
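To make the 5-scale × 8-orientation filter bank concrete, a sketch of a real-valued Gabor kernel generator follows. The kernel size, envelope width, and wavelength spacing here are illustrative assumptions; the patent does not specify the parameters of φ s,θ.

```python
import numpy as np

def gabor_kernel(s, theta_idx, ksize=21, sigma=4.0, base_lambda=6.0, gamma=0.5):
    # Real part of a Gabor filter at scale s (0..4) and orientation
    # index theta_idx (0..7); the parameter choices are illustrative.
    theta = theta_idx * np.pi / 8
    lam = base_lambda * (1.4 ** s)            # assumed scale spacing
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam)

# 5 scales x 8 orientations = the 40 filters mentioned above.
bank = [gabor_kernel(s, o) for s in range(5) for o in range(8)]
```

Convolving an eye or mouth block with each kernel in the bank yields the 40 texture feature maps per region from which the Gabor similarity indexes are computed.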

Please refer to Table 1. As shown in Table 1, the image feature set may include the mouth slope, the two-eye area ratio, the two-eye distance ratio, the eye color similarity index, the eye ternary similarity index, the eye Gabor similarity indexes, the mouth area ratio, the mouth distance ratio, the mouth color similarity index, the mouth ternary similarity index, and the mouth Gabor similarity indexes, where the eye Gabor similarity indexes and the mouth Gabor similarity indexes may number 40 each, so the number of features in the image feature set may be 89. The feature selection step s113 of the facial stroke detection method s100 performs feature selection on the image feature set to form a judgment feature set, and inputs the judgment feature set to the classifier 424 to produce a judgment result. The classifier 424 may be a support vector machine, a random forest, or a Bayesian classifier, and different classifiers 424 may have different judgment feature sets.

Figure 107140636-A0101-12-0015-30
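The composition of the 89-dimensional image feature set summarized in Table 1 can be tallied as follows; the feature names are illustrative labels, not identifiers from the patent.

```python
scalar_features = [
    "mouth_slope", "two_eye_area_ratio", "two_eye_distance_ratio",
    "eye_color_similarity", "eye_ternary_similarity",
    "mouth_area_ratio", "mouth_distance_ratio",
    "mouth_color_similarity", "mouth_ternary_similarity",
]
# 5 scales x 8 orientations = 40 Gabor similarity indexes per region.
gabor_features = [f"eye_gabor_{i}" for i in range(40)] + \
                 [f"mouth_gabor_{i}" for i in range(40)]
image_feature_set = scalar_features + gabor_features
```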

FIG. 6 is a flowchart of the steps of a facial stroke detection method s100 according to another embodiment of a method aspect of the present invention. As shown in FIG. 6, the facial stroke detection method s100 includes a modeling step s130, a detection step s110, and a judgment step s120.

Please refer to FIG. 1, FIG. 2, FIG. 3, FIG. 4, and FIG. 5 together. In the embodiment of FIG. 6, the detection step s110 and the judgment step s120 are the same as the corresponding steps in FIG. 1 through FIG. 5 and are not repeated here. The modeling step s130 includes a database establishment step s131, a pre-training processing step s132, a training feature extraction step s133, and a training feature selection step s134. The database establishment step s131 establishes a stroke detection database 430 (labeled in FIG. 9), which contains a plurality of stroke images and a plurality of normal images. The pre-training processing step s132 performs pre-training processing on each stroke image or each normal image to form a processed stroke detection image. The training feature extraction step s133 performs training feature extraction on the processed stroke detection image to produce a stroke detection feature set, which includes a plurality of training symmetry indexes and a plurality of training block similarities; the training feature extraction method is the same as the feature extraction method and is not repeated here, so the number of features in the stroke detection feature set may be 89. The training feature selection step s134 performs training feature selection on the stroke detection feature set to produce the judgment feature set, and trains the classifier 424 with the judgment feature set.

FIG. 7 is a flowchart of the pre-training processing step s132 of the modeling step s130 of the facial stroke detection method s100 according to the embodiment of FIG. 6. The pre-training processing step s132 may include a training face detection step s1321, a training normalization processing step s1322, a training feature point detection step s1323, and a training correction processing step s1324. The training face detection step s1321 performs face detection on a stroke image or a normal image to capture a training inner-face image; the face detection method is the same as the face detection step s1111 of FIG. 2 and is not repeated here. The training normalization processing step s1322 normalizes the training inner-face image to obtain a training normalized inner-face image; its function and normalization method are the same as the normalization processing step s1112 of FIG. 2 and are not repeated here. The training feature point detection step s1323 performs feature point detection on the training normalized inner-face image to obtain a training feature-point inner-face image, which contains a plurality of facial feature points; the feature point detection method is the same as the feature point detection step s1113 of FIG. 2 and is not repeated here. The training correction processing step s1324 corrects the training feature-point inner-face image with at least two of the facial feature points to obtain the processed stroke detection image; the correction method is the same as the correction processing step s1114 of FIG. 2 and is not repeated here.

The training feature selection method of the training feature selection step s134 may be a randomly generated sequential forward floating selection algorithm. The training feature selection step s134 combines the randomly generated sequential forward floating selection algorithm with the classifier 424 to select, from the stroke detection feature set, the feature set most suitable for the classifier 424, forming the judgment feature set, and trains the classifier 424 with the judgment feature set. In this way, the number of features in the judgment feature set can be reduced, lowering the pre-training processing and recognition time of the classifier 424. That is, the facial stroke detection method s100 can be applied to different classifiers 424, and a different judgment feature set can be selected for each classifier 424, so that the facial stroke detection method s100 maintains good accuracy across different classifiers 424. The randomly generated sequential forward floating selection algorithm may include a generation step, an inclusion step, and an exclusion step.

The generation step randomly selects k features from the stroke detection feature set as a test feature set. The number of features of the stroke detection feature set is D, which may be 89; the D−k features of the stroke detection feature set that are not selected are candidate features, and the set formed by the candidate features is the candidate feature set.

The inclusion step finds the best test feature among the D−k candidate features and adds the best test feature to the test feature set to form the best test feature set, that is, the set that gives the classifier 424 the highest accuracy. The operation of the inclusion step conforms to equation (20).

T + = arg max J(B k + α), α ∈ A − B k 式(20)

where T + is the best test feature, A is the stroke detection feature set, B k is the test feature set, k is the dimension (k may be 2 to 15), α is a candidate feature, and J(B k + α) is the accuracy of the classifier 424 when judging with the feature set B k + α. It is worth mentioning that, after the inclusion operation is performed, the resulting test feature set B k+1 is the previous test feature set B k plus the best test feature T +, and the dimension of the test feature set increases, that is, B k+1 = B k + T + and k = k + 1, after which the exclusion step is performed.

The exclusion step finds the worst test feature in the test feature set, so that the test feature set forms the best test feature set after the worst test feature is discarded. The operation of the exclusion step conforms to equation (21).

T − = arg max J(B k − β), β ∈ B k 式(21)

where T − is the worst test feature, β is one of the features in the test feature set, and J(B k − β) is the accuracy of the classifier 424 when judging with the feature set B k − β. It is worth mentioning that, after the exclusion operation is performed, when J(B k − T −) is greater than J(B k−1 ), discarding T − yields higher accuracy, so B k−1 = B k − T − and k = k − 1; when T − and T + are the same, that is, the inclusion step and the exclusion step find the same feature, T − becomes a discarded feature and ψ = ψ + 1, and the exclusion step is performed again. Here J(B k − T −) is the accuracy of the classifier 424 when judging with the feature set B k − T −, J(B k−1 ) is the accuracy of the classifier 424 when judging with the feature set B k−1 , and ψ is the number of discarded features. When J(B k − T −) is less than J(B k−1 ), the feature set B k − T − is no better than the feature set B k−1 . If the sum of the dimension k and the number of discarded features ψ is not equal to the number of features D of the stroke detection feature set, that is, k + ψ ≠ D, the inclusion step is performed; when k + ψ = D, B k is the judgment feature set.
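A compact sketch of the selection loop described by equations (20) and (21) follows. It is a simplified reading of the algorithm — random generation of an initial subset, then alternating inclusion of the best candidate and exclusion of the worst member while either strictly improves the criterion J — with a toy scoring function standing in for the classifier's accuracy.

```python
import random

def floating_selection(all_features, score, k_init=3, seed=0):
    # Generation: start from a random k_init-feature subset;
    # `score` stands in for the classifier accuracy J.
    rng = random.Random(seed)
    selected = set(rng.sample(sorted(all_features), k_init))
    improved = True
    while improved:
        improved = False
        # Inclusion (eq. 20): add the candidate maximising J(B_k + a).
        candidates = set(all_features) - selected
        if candidates:
            best = max(candidates, key=lambda a: score(selected | {a}))
            if score(selected | {best}) > score(selected):
                selected.add(best)
                improved = True
        # Exclusion (eq. 21): drop the member maximising J(B_k - b).
        if len(selected) > 1:
            worst = max(selected, key=lambda b: score(selected - {b}))
            if score(selected - {worst}) > score(selected):
                selected.remove(worst)
                improved = True
    return selected

# Toy criterion: features 1 and 3 are informative, the rest add noise.
toy_score = lambda s: len(s & {1, 3}) - 0.1 * len(s - {1, 3})
chosen = floating_selection(set(range(6)), toy_score)
```

Because every accepted change strictly increases the score, the loop terminates, and with the toy criterion it settles on exactly the two informative features regardless of the random initial subset.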

Please refer to Table 2, which compares accuracies when the classifier 424 of the facial stroke detection method s100 is a support vector machine: the first embodiment trains the classifier 424 with the randomly generated sequential forward floating selection algorithm, the first comparative example trains the classifier with the stroke detection feature set as the judgment feature set, and the second comparative example trains the classifier with the sequential forward floating search algorithm. As shown in Table 2, the number of features in the judgment feature set of the first embodiment is 53 and the accuracy reaches 100%; that is, training the classifier 424 with the randomly generated sequential forward floating selection algorithm gives the facial stroke detection method s100 high accuracy with fewer required features, so the pre-training processing and recognition time of the classifier 424 in the first embodiment are lower than those of the first and second comparative examples.

Figure 107140636-A0101-12-0019-33
Figure 107140636-A0101-12-0020-34

Please refer to Table 3, which compares accuracies when the classifier 424 of the facial stroke detection method s100 is a random forest: the second embodiment trains the classifier 424 with the randomly generated sequential forward floating selection algorithm, the third comparative example trains the classifier with the stroke detection feature set as the judgment feature set, and the fourth comparative example trains the classifier with the sequential forward floating search algorithm. As shown in Table 3, the accuracy of the second embodiment is higher than that of the third and fourth comparative examples; that is, training the classifier 424 with the randomly generated sequential forward floating selection algorithm yields higher accuracy.

Figure 107140636-A0101-12-0020-35

Please refer to Table 4, which compares accuracies when the classifier 424 of the facial stroke detection method s100 is a Bayesian classifier: the third embodiment trains the classifier 424 with the randomly generated sequential forward floating selection algorithm, the fifth comparative example trains the classifier with the stroke detection feature set as the judgment feature set, and the sixth comparative example trains the classifier with the sequential forward floating search algorithm. As shown in Table 4, the accuracy of the third embodiment is higher than that of the fifth comparative example; the third embodiment matches the sixth comparative example in accuracy but uses fewer features, so the recognition time of the classifier 424 in the third embodiment can be lower than that of the sixth comparative example.

Table 4

Figure 107140636-A0101-12-0021-36

FIG. 8 is a block diagram of a facial stroke detection system 400 according to an embodiment of a structural aspect of the present invention. As shown in FIG. 8, the facial stroke detection system 400 includes an image capture device 410 and a processor 420, where the image capture device 410 captures image data and the processor 420 is electrically connected to the image capture device 410.

In detail, the processor 420 may include a pre-processing module 421, a feature extraction module 422, a feature selection module 423, and a classifier 424. The pre-processing module 421 pre-processes the image data to obtain a processed image. The feature extraction module 422 performs feature extraction on the processed image to produce an image feature set. The feature selection module 423 performs feature selection on the image feature set to form a judgment feature set, and inputs the judgment feature set to the classifier 424 to produce a judgment result, which is classified as a stroke state or a normal state. In this way, the facial stroke detection system 400 can achieve high judgment accuracy and avoid misjudgments that would delay the golden window for treating facial stroke patients.

FIG. 9 is a block diagram of a facial stroke detection system 400 according to another embodiment of a structural aspect of the present invention. As shown in FIG. 9, the facial stroke detection system 400 may include an image capture device 410, a processor 420, and a stroke detection database 430, where the stroke detection database 430 may contain a plurality of stroke images and a plurality of normal images.

Please refer to FIG. 6 and FIG. 8 together. In the embodiment of FIG. 9, the image capture device 410 and the processor 420 are the same as the corresponding structures in FIG. 8 and are not repeated here. In particular, the pre-processing module 421 of the processor 420 can further execute the pre-training processing step s132, which performs pre-training processing on each stroke image or each normal image to form a processed stroke detection image; the feature extraction module 422 of the processor 420 can further execute the training feature extraction step s133, which performs training feature extraction on the processed stroke detection image to produce a stroke detection feature set; and the feature selection module 423 of the processor 420 can further execute the training feature selection step s134, which performs training feature selection on the stroke detection feature set to produce a judgment feature set and trains the classifier 424 with the judgment feature set.

To allow users to perform facial stroke detection at any time and to improve the accuracy of the facial stroke detection system 400, the facial stroke detection system 400 can be applied to a computer or a mobile phone, its image capture device 410 may be a camera, and the classifier 424 it uses may be a support vector machine, a random forest, or a Bayesian classifier, so that users can perform facial stroke detection at any time with a reduced probability of misjudgment.

In summary, the facial stroke detection method and facial stroke detection system of the present invention provide the following advantages:

(1) The judgment feature set formed by performing feature selection on the image feature set improves the accuracy of the facial stroke detection method and its system.

(2) Combining the randomly generated sequential forward floating selection algorithm with the classifier finds the judgment feature set with the fewest features, thereby reducing the number of features in the judgment feature set and the recognition time of the classifier.

Although the present invention has been disclosed above by way of embodiments, they are not intended to limit the present invention. Anyone skilled in the art may make various changes and modifications without departing from the spirit and scope of the present invention; therefore, the protection scope of the present invention shall be defined by the appended claims.

s100: facial stroke detection method

s110: detection step

s111: pre-processing step

s112: feature extraction step

s113: feature selection step

s120: judgment step

Claims (9)

1. A facial symmetry detection method, comprising: a detection step, comprising: a pre-processing step of capturing image data through an image capture device and pre-processing the image data to obtain a processed image; a feature extraction step of performing feature extraction on the processed image to produce an image feature set, the image feature set comprising a plurality of feature symmetry indexes and a plurality of feature block similarities, the plurality of feature symmetry indexes comprising a mouth slope, a mouth area ratio, a mouth distance ratio, a two-eye distance ratio, and a two-eye area ratio, the plurality of feature block similarities comprising an eye color similarity index, an eye ternary similarity index, a plurality of eye Gabor similarity indexes, a mouth color similarity index, a mouth ternary similarity index, and a plurality of mouth Gabor similarity indexes; and a feature selection step of performing feature selection on the image feature set to form a judgment feature set and inputting the judgment feature set to a classifier; and a judgment step in which the classifier produces a judgment result according to the judgment feature set, the judgment result being classified as an asymmetric state or a normal state.
2. The facial symmetry detection method of claim 1, wherein the pre-processing step comprises: a face detection step of performing face detection on the image data and capturing an inner-face image; a normalization processing step of normalizing the inner-face image to obtain a normalized inner-face image; a feature point detection step of performing feature point detection on the normalized inner-face image to obtain a feature-point inner-face image, the feature-point inner-face image comprising a plurality of facial feature points; and a correction processing step of correcting the feature-point inner-face image with at least two of the facial feature points to obtain the processed image. 3. The facial symmetry detection method of claim 2, wherein the feature point detection uses regression trees to obtain the feature-point inner-face image, and the number of facial feature points is 60. 4. The facial symmetry detection method of claim 1, wherein the classifier is a support vector machine, a random forest, or a Bayesian classifier.
5. The facial symmetry detection method of claim 1, further comprising: a modeling step, comprising: a database establishing step, in which a facial symmetry detection database is established, the facial symmetry detection database comprising a plurality of facially asymmetric images and a plurality of normal images; a training pre-processing step, in which training pre-processing is performed on each facially asymmetric image or each normal image to form a processed facial symmetry detection image; a training feature extraction step, in which training feature extraction is performed on the processed facial symmetry detection image to produce a facial symmetry detection feature set, the facial symmetry detection feature set comprising a plurality of training symmetry indexes and a plurality of training block similarities; and a training feature selection step, in which training feature selection is performed on the facial symmetry detection feature set to produce the determining feature set, and the determining feature set is used to train the classifier. 

6. The facial symmetry detection method of claim 5, wherein the training feature selection uses a randomly generated sequential forward floating selection algorithm to produce the determining feature set. 
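Claim 6's feature selection is a sequential forward floating algorithm; a minimal sketch of the forward-plus-conditional-exclusion loop with a toy scoring function (the feature names and score are illustrative, and the patent's random-generation variation is omitted):

```python
def sffs(features, score, k):
    """Sequential forward floating selection: greedily add the best feature,
    then conditionally drop any feature whose removal improves the score
    (the 'floating' step). `score` maps a frozenset of names to a value."""
    selected = []
    while len(selected) < k:
        # Forward step: add the single best remaining feature.
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(frozenset(selected + [f])))
        selected.append(best)
        # Floating step: drop features whose removal now improves the score.
        improved = True
        while improved and len(selected) > 2:
            improved = False
            for f in list(selected):
                rest = frozenset(x for x in selected if x != f)
                if score(rest) > score(frozenset(selected)):
                    selected.remove(f)
                    improved = True
    return selected

# Toy score: rewards one informative pair, penalizes redundant extras.
def toy_score(s):
    base = len(s & {"mouth_slope", "eye_gabor"})
    return base - 0.1 * len(s - {"mouth_slope", "eye_gabor"})

pool = ["mouth_slope", "mouth_area_ratio", "eye_gabor", "eye_color_sim"]
print(sffs(pool, toy_score, 2))  # picks the informative pair
```

In the patent's setting the score would be the classifier's validation accuracy on the training database, so the selected subset becomes the determining feature set.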
7. A facial symmetry detection system, comprising: an image capture device for capturing image data; and a processor, electrically connected to the image capture device and comprising a classifier, the processor comprising: a pre-processing module, which performs pre-processing on the image data to obtain a post-processing image; a feature extraction module, which performs feature extraction on the post-processing image to produce an image feature set, the image feature set comprising a plurality of feature symmetry indexes and a plurality of feature-block similarities, wherein the feature symmetry indexes comprise a mouth slope, a mouth area ratio, a mouth distance ratio, a two-eye distance ratio and a two-eye area ratio, and the feature-block similarities comprise an eye color similarity index, an eye ternarization similarity index, a plurality of eye Gabor similarity indexes, a mouth color similarity index, a mouth ternarization similarity index and a plurality of mouth Gabor similarity indexes; and a feature selection module, which performs feature selection on the image feature set to form a determining feature set and inputs the determining feature set into the classifier to produce a determining result. 

8. The facial symmetry detection system of claim 7, further comprising a facial symmetry detection database, the facial symmetry detection database comprising a plurality of facially asymmetric images and a plurality of normal images. 
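The Gabor similarity indexes of claim 7 compare filter responses of a facial region against its horizontally mirrored counterpart. A sketch under assumed parameters (the kernel size, wavelength and correlation-based similarity measure are illustrative choices, not taken from the patent):

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=9, theta=0.0, lam=4.0, sigma=2.0):
    """Real Gabor kernel: a sinusoid along `theta` under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_similarity(left_patch, right_patch, theta=0.0):
    """Normalized correlation between Gabor responses of the left patch and
    the mirrored right patch (1.0 = perfectly symmetric)."""
    k = gabor_kernel(theta=theta)
    a = convolve2d(left_patch, k, mode="valid").ravel()
    b = convolve2d(np.fliplr(right_patch), k, mode="valid").ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

rng = np.random.default_rng(0)
patch = rng.random((20, 20))
# A patch compared with its own mirror image scores 1.0.
print(round(gabor_similarity(patch, np.fliplr(patch)), 6))
```

Repeating this over several orientations `theta` yields the plurality of eye and mouth Gabor similarity indexes the claim describes.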
9. The facial symmetry detection system of claim 7, wherein the image capture device is a camera and the classifier is a support vector machine, a random forest or a Bayesian classifier.
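Claim 9 allows the classifier to be a support vector machine; a sketch of training one on a synthetic stand-in for the determining feature set (the two features, cluster positions and labels are invented for illustration):

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for the determining feature set: rows are faces,
# columns are selected features (e.g. mouth slope, eye Gabor similarity).
rng = np.random.default_rng(1)
normal = rng.normal(loc=[0.0, 0.95], scale=0.05, size=(50, 2))      # level mouth, high similarity
asymmetric = rng.normal(loc=[0.3, 0.60], scale=0.05, size=(50, 2))  # drooped mouth, low similarity
X = np.vstack([normal, asymmetric])
y = np.array([0] * 50 + [1] * 50)   # 0 = normal state, 1 = asymmetric state

clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict([[0.28, 0.58], [0.01, 0.94]])
print(pred)  # expected: asymmetric (1), then normal (0)
```

A random forest or naive Bayes classifier would slot into the same `fit`/`predict` interface, which is why the claim can leave the classifier choice open.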
TW107140636A 2018-11-15 2018-11-15 Facial symmetry detection method and system thereof TWI689285B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW107140636A TWI689285B (en) 2018-11-15 2018-11-15 Facial symmetry detection method and system thereof

Publications (2)

Publication Number Publication Date
TWI689285B true TWI689285B (en) 2020-04-01
TW202019338A TW202019338A (en) 2020-06-01

Family

ID=71132503

Family Applications (1)

Application Number Title Priority Date Filing Date
TW107140636A TWI689285B (en) 2018-11-15 2018-11-15 Facial symmetry detection method and system thereof

Country Status (1)

Country Link
TW (1) TWI689285B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112545491A (en) * 2020-11-05 2021-03-26 上海信产管理咨询有限公司 Early stroke self-detection device and detection method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105138973A (en) * 2015-08-11 2015-12-09 北京天诚盛业科技有限公司 Face authentication method and device
CN106250819A (en) * 2016-07-20 2016-12-21 上海交通大学 Based on face's real-time monitor and detection facial symmetry and abnormal method
CN107713984A (en) * 2017-02-07 2018-02-23 王俊 Facial paralysis objective evaluation method and its system


Also Published As

Publication number Publication date
TW202019338A (en) 2020-06-01

Similar Documents

Publication Publication Date Title
US11176406B2 (en) Edge-based recognition, systems and methods
US12056954B2 (en) System and method for selecting images for facial recognition processing
CN108491786B (en) Face detection method based on hierarchical network and cluster merging
JP2011134114A (en) Pattern recognition method and pattern recognition apparatus
Lee et al. Enhanced iris recognition method by generative adversarial network-based image reconstruction
WO2013075295A1 (en) Clothing identification method and system for low-resolution video
US9135562B2 (en) Method for gender verification of individuals based on multimodal data analysis utilizing an individual's expression prompted by a greeting
CN106023184A (en) Depth significance detection method based on anisotropy center-surround difference
TWI689285B (en) Facial symmetry detection method and system thereof
Ives et al. Iris recognition using the ridge energy direction (RED) algorithm
CN108875488B (en) Object tracking method, object tracking apparatus, and computer-readable storage medium
WO2021054217A1 (en) Image processing device, image processing method and program
Ng et al. An effective segmentation method for iris recognition system
CN112990090A (en) Face living body detection method and device
Das et al. Face liveness detection based on frequency and micro-texture analysis
Dosi et al. Seg-dgdnet: Segmentation based disguise guided dropout network for low resolution face recognition
Moorhouse et al. The nose on your face may not be so plain: Using the nose as a biometric
CN116778533A (en) Palm print full region-of-interest image extraction method, device, equipment and medium
Choi et al. Improved pupil center localization method for eye-gaze tracking-based human-device interaction
CN113435361A (en) Mask identification method based on depth camera
US10846518B2 (en) Facial stroking detection method and system thereof
Li et al. Predict and improve iris recognition performance based on pairwise image quality assessment
JP2021051376A (en) Image processing apparatus, image processing method, and program
JP2021051375A (en) Image processing apparatus, image processing method, and program
CN104573682A (en) Face anti-counterfeiting method based on face similarity