TWI731461B - Identification method of real face and identification device using the same - Google Patents

Identification method of real face and identification device using the same

Info

Publication number
TWI731461B
TWI731461B
Authority
TW
Taiwan
Prior art keywords
face
target
processor
area
real
Prior art date
Application number
TW108139763A
Other languages
Chinese (zh)
Other versions
TW202119287A (en)
Inventor
陳信志
楊宗翰
何亮融
Original Assignee
宏碁股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 宏碁股份有限公司 filed Critical 宏碁股份有限公司
Priority to TW108139763A priority Critical patent/TWI731461B/en
Priority to CN202010140606.0A priority patent/CN112784661B/en
Publication of TW202119287A publication Critical patent/TW202119287A/en
Application granted granted Critical
Publication of TWI731461B publication Critical patent/TWI731461B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An identification method of real face and an identification device using the same are provided. The method includes: obtaining a face image of a target face; obtaining depth information of a target region in the face image; analyzing the depth information to obtain at least one characteristic value related to a quadratic curve which reflects a depth distribution status of the target region; determining whether the at least one characteristic value meets a default condition; if the at least one characteristic value meets the default condition, determining that the target face is a face in a photograph; and if the at least one characteristic value does not meet the default condition, determining that the target face is a real face.

Description

Real face recognition method and real face recognition device

The present invention relates to an image recognition technology, and particularly to a real face recognition method and a real face recognition device.

With the advancement of technology, using face recognition for login authentication on electronic devices has become increasingly popular. A user only needs to present his or her face in front of the lens of the electronic device to log in directly through the face verification mechanism. However, malicious parties may present photos downloaded from the Internet, or photos of legitimate users, for face scanning in an attempt to log in to other people's electronic devices. Therefore, how to improve the efficiency of recognizing real human faces during login verification is one of the topics to which those skilled in the art devote their research.

The invention provides a real face recognition method and a real face recognition device, which can effectively improve the efficiency of determining whether the face in front of the lens is a real face or a face in a photo.

An embodiment of the present invention provides a real face recognition method, which includes: obtaining a face image of a target face; obtaining depth information of a target area in the face image; analyzing the depth information to obtain at least one characteristic value related to a quadratic curve, wherein the quadratic curve reflects the depth distribution state of the target area; determining whether the at least one characteristic value meets a preset condition; if the at least one characteristic value meets the preset condition, determining that the target face is a face in a photo; and if the at least one characteristic value does not meet the preset condition, determining that the target face is a real face.

An embodiment of the present invention further provides a real face recognition device, which includes a depth camera and a processor. The processor is coupled to the depth camera. The processor is configured to obtain a face image of a target face through the depth camera. The processor is further configured to obtain depth information of a target area in the face image through the depth camera. The processor is further configured to analyze the depth information to obtain at least one characteristic value related to a quadratic curve. The quadratic curve reflects the depth distribution state of the target area. The processor is further configured to determine whether the at least one characteristic value meets a preset condition. If the at least one characteristic value meets the preset condition, the processor is further configured to determine that the target face is a face in a photo. If the at least one characteristic value does not meet the preset condition, the processor is further configured to determine that the target face is a real face.

Based on the above, when the face image of the target face is obtained, the depth information of the target area in the face image can be obtained at the same time. By analyzing the depth information, at least one characteristic value related to a quadratic curve can be obtained, and the quadratic curve reflects the depth distribution state of the target area. Then, by determining whether the at least one characteristic value meets a preset condition, it can be effectively determined whether the target face is a real face or a face in a photo. In this way, the efficiency of recognizing whether the face in front of the lens is a real face or a face in a photo can be effectively improved.

FIG. 1 is a schematic diagram of an electronic device according to an embodiment of the invention. Referring to FIG. 1, the electronic device (also referred to as a real face recognition device) 10 may be a notebook computer, a desktop computer, a tablet computer, a smartphone, a game console, an information kiosk, or any other electronic device equipped with a depth camera and a processor; the type of the electronic device 10 is not limited to the above.

The electronic device 10 includes a depth camera 11, a storage device 12, and a processor 13. The depth camera 11 can be used to capture images carrying depth information. For example, when a human face (also referred to as a target face) is present in front of the lens of the depth camera 11, the captured image may be a face image, and at least one pixel in the face image may carry depth information of the corresponding position. For example, the depth camera 11 may include at least one lens, at least one photosensitive element, and/or at least one depth sensor to accomplish the above functions.

The storage device 12 is used to store data. For example, the storage device 12 may include a non-volatile memory module and a volatile memory module. The non-volatile memory module can store data in a non-volatile manner; for example, it may include read-only memory (ROM), a solid-state drive (SSD), and/or a traditional hard disk drive (HDD). The volatile memory module can store data temporarily; for example, it may include dynamic random access memory (DRAM). In addition, the non-volatile memory module and/or the volatile memory module may also include other types of storage media, which is not limited by the present invention.

In one embodiment, the storage device 12 stores a deep learning model 101. The deep learning model 101 is also called an artificial intelligence model. The deep learning model 101 may have a neural-network architecture and can be used for image recognition. In one embodiment, the deep learning model 101 can be used to recognize human faces. In one embodiment, the deep learning model 101 can be used to identify at least one facial organ (for example, the eyes (or pupils), nose, mouth, and/or ears) in a human face. In addition, the deep learning model 101 can gradually improve the accuracy of image recognition through training. In one embodiment, the deep learning model 101 can also be implemented as a hardware circuit (such as a chip), which is not limited by the present invention.

The processor 13 is coupled to the depth camera 11 and the storage device 12. The processor 13 may be a central processing unit (CPU), a graphics processing unit (GPU), or another programmable general-purpose or special-purpose microprocessor, digital signal processor (DSP), programmable controller, application-specific integrated circuit (ASIC), programmable logic device (PLD), other similar devices, or a combination of these devices. The processor 13 can control all or part of the operation of the electronic device 10. For example, the processor 13 may run the deep learning model 101 to perform image recognition.

In one embodiment, the electronic device 10 further includes at least one input/output interface to receive or output signals. For example, the input/output interface may include a screen, a touch screen, a touchpad, a mouse, a keyboard, physical buttons, a speaker, a microphone, a wired network card, and/or a wireless network card, and the type of the input/output interface is not limited thereto.

When a human face (i.e., the target face) is present in front of the lens of the depth camera 11, the processor 13 can obtain a face image of the target face through the depth camera 11. The processor 13 can also obtain the depth information of a specific area (also referred to as the target area) in the face image through the depth camera 11. It should be noted that the present invention does not limit the number and/or shape of target areas in a single face image. The processor 13 can analyze the depth information to obtain at least one characteristic value related to a certain quadratic curve, where the quadratic curve reflects the depth distribution state of the target area.

After obtaining the characteristic value, the processor 13 can determine whether the characteristic value meets a preset condition. If the characteristic value meets the preset condition, the processor 13 can determine that the target face is a face in a photo. If the characteristic value does not meet the preset condition, the processor 13 can determine that the target face is a real face (i.e., not a face in a photo). For example, if a user presents a face in a photo displayed on a mobile phone screen (or a face in a printed photo) in front of the lens of the depth camera 11, the processor 13 can determine, according to the above operations, that the face currently in front of the lens is a face in a photo rather than a real face. In this way, erroneous actions caused by misjudging a face in a photo as a real face can be reduced.

In one embodiment, the processor 13 can analyze the face image through the deep learning model 101 to obtain the position of at least one facial organ of the target face. Then, the processor 13 can determine the target area according to the position of the at least one facial organ.

FIG. 2 is a schematic diagram of a face image according to an embodiment of the invention. Referring to FIG. 1 and FIG. 2, a target face 22 is presented in the face image 21. The processor 13 can use the deep learning model 101 to identify facial organs in the target face 22, such as the eyes, nose, mouth, and/or ears.

In one embodiment, the processor 13 can set reference points 201-205 according to the positions of at least some of the recognized facial organs. For example, the reference point 201 may be set at the position of the left eye in the target face 22, the reference point 202 may be set at the position of the right eye, the reference point 203 may be set at the position of the nose, the reference point 204 may be set at the left corner of the mouth, and the reference point 205 may be set at the right corner of the mouth. It should be noted that, in other embodiments, the reference points 201-205 may also be set at other positions in the target face 22, and/or the number of reference points may be more or fewer; the present invention is not limited in this regard.

FIG. 3 is a schematic diagram of a target area according to an embodiment of the invention. Referring to FIG. 1 to FIG. 3, in one embodiment, at least one of the line segments 301-306 can be determined according to the set reference points 201-205. For example, the processor 13 may set the line segment 301 as the line connecting the midpoint between the reference points 201 and 202 and the midpoint between the reference points 204 and 205, the line segment 302 as the line connecting the reference points 201 and 205, the line segment 303 as the line connecting the reference points 202 and 204, the line segment 304 as the line connecting the midpoint between the reference points 201 and 204 and the midpoint between the reference points 202 and 205, the line segment 305 as the line connecting the reference points 201 and 204, and the line segment 306 as the line connecting the reference points 202 and 205. The path traversed by at least one of the line segments 301-306 can be determined as the target area. In other words, the target area may include the pixels (or pixel positions) traversed or covered by at least one of the line segments 301-306. In addition, at least one pixel in the target area can be regarded as a sampling point, and each sampling point can have depth information (for example, a depth value) that reflects the depth at the location of that sampling point, as sketched in the example below.
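
By way of illustration only (the patent does not prescribe any particular implementation), the following sketch shows how depth values might be collected along one of these line segments. It assumes the depth image is available as a 2-D NumPy array indexed as [row, column], that the reference points are (x, y) pixel coordinates, and that 100 sampling points per segment are used, as in the example of FIG. 4; all names are illustrative.

```python
import numpy as np

def sample_depth_along_segment(depth_map, p_start, p_end, num_samples=100):
    """Collect depth values at evenly spaced positions along the line
    from p_start to p_end (both given as (x, y) pixel coordinates).

    depth_map is assumed to be a 2-D array indexed as [row, column].
    Returns a 1-D array of num_samples depth values (the sampling points).
    """
    xs = np.linspace(p_start[0], p_end[0], num_samples)
    ys = np.linspace(p_start[1], p_end[1], num_samples)
    # Round to the nearest pixel; bilinear interpolation would also work.
    cols = np.clip(np.round(xs).astype(int), 0, depth_map.shape[1] - 1)
    rows = np.clip(np.round(ys).astype(int), 0, depth_map.shape[0] - 1)
    return depth_map[rows, cols]

def midpoint(p1, p2):
    """Midpoint of two (x, y) reference points, used for segments 301 and 304."""
    return ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)
```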

In one embodiment, the target area can be divided into at least one first area and at least one second area. The first area includes the position of the nose of the target face; for example, the paths traversed by the line segments 301-304 in FIG. 3 can be regarded as first areas. The second area does not include the position of the nose of the target face; for example, the paths traversed by the line segments 305 and 306 in FIG. 3 can be regarded as second areas. The processor 13 can obtain at least one characteristic value by analyzing the depth information of the first area and/or the second area.

FIG. 4 is a schematic diagram of curves reflecting the depth distribution state according to an embodiment of the invention. Referring to FIG. 1 to FIG. 4, assume that the sampling points 1-100, 101-200, 201-300, 301-400, 401-500, and 501-600 are located in the target areas traversed by the line segments 301-306, respectively. The depth values corresponding to these sampling points can be represented by the curves 401-406, respectively. In other words, the curve 401 reflects the depth distribution state of the sampling points 1-100 on the path traversed by the line segment 301, the curve 406 reflects the depth distribution state of the sampling points 501-600 on the path traversed by the line segment 306, and so on.

It should be noted that, in the embodiment of FIG. 4, it is assumed that the face image 21 in FIG. 2 is obtained by photographing a real face (i.e., the target face 22 is a real face). The paths traversed by the line segments 301-304 therefore include the position of the nose in the target face 22 (where the depth value is smaller), so the curves 401-404 are bent in a shape similar to a quadratic curve, and the opening of the curve 401 points upward. In addition, since the paths traversed by the line segments 305 and 306 do not include the position of the nose in the target face 22 (i.e., the line segments 305 and 306 pass through the cheek regions of the real face, where the depth variation is small), the curves 405 and 406 are relatively flat.

However, the curves 401-406 in FIG. 4 are merely examples and are not intended to limit the present invention. In other embodiments, the depth values corresponding to any of the curves 401-406 may be different, and/or the number of sampling points corresponding to any of the curves 401-406 may be different; the present invention is not limited in this regard. Alternatively, in another embodiment of FIG. 4, if the face image 21 in FIG. 2 is obtained by photographing a face in a photo (i.e., the target face 22 is not a real face), the depth distribution states of the curves 401-406 will be significantly different.

In one embodiment, the processor 13 can use a quadratic curve to model or approximate at least one of the curves 401-406, so as to obtain a characteristic value related to at least one of the curves 401-406. In one embodiment, the characteristic values include a first characteristic value and a second characteristic value. The first characteristic value reflects the opening direction and the degree of curvature of the quadratic curve. The second characteristic value reflects the position (or relative position) of the extreme value of the quadratic curve within the quadratic curve.

FIG. 5 is a schematic diagram of a quadratic curve according to an embodiment of the invention. Referring to FIG. 1 to FIG. 5, taking the curve 401 as an example, the processor 13 can use the quadratic curve 501 to model or approximate the curve 401. The quadratic curve 501 can be described by the following equation (1.1).

y = a(x - b)^2 + c   (1.1)

In equation (1.1), the parameter y represents the depth value of the quadratic curve 501 along the vertical axis, the parameter x represents the sampling point of the quadratic curve 501 along the horizontal axis, the parameter a reflects the opening direction and the degree of curvature of the quadratic curve 501, the parameter b reflects the position of the extreme value of the quadratic curve 501 within the quadratic curve 501, and the parameter c is a constant. In the embodiment of FIG. 5, a positive value of the parameter a indicates that the opening of the quadratic curve 501 points upward, the value of the parameter a is positively correlated with the degree of curvature of the quadratic curve 501, and the value of the parameter b indicates that the minimum depth value of the quadratic curve 501 occurs at the b-th sampling point.
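
One possible way to obtain the parameters a, b, and c of equation (1.1) is an ordinary least-squares parabola fit over the sampled depth values, converting the standard-form coefficients into the vertex form of equation (1.1). The patent does not specify a fitting method, so the sketch below (using NumPy) is only an illustrative assumption.

```python
import numpy as np

def fit_vertex_parabola(depths):
    """Fit y = a*(x - b)**2 + c to the sampled depth values, where x is
    the sampling-point index (1..N) and y is the depth at that index."""
    x = np.arange(1, len(depths) + 1, dtype=float)
    # Standard-form fit: y = A*x**2 + B*x + C (highest degree first).
    A, B, C = np.polyfit(x, np.asarray(depths, dtype=float), deg=2)
    if abs(A) < 1e-12:
        # Nearly flat profile (e.g. a cheek segment); by convention place
        # the vertex at the middle of the segment.
        return A, len(depths) / 2.0, C
    a = A                   # opening direction and degree of curvature
    b = -B / (2.0 * A)      # sampling-point index of the extreme value
    c = C - A * b ** 2      # depth value at the extreme point
    return a, b, c
```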

In one embodiment, the processor 13 can obtain the first characteristic value related to the curve 401 (or the quadratic curve 501) according to the parameter a, and obtain the second characteristic value related to the curve 401 (or the quadratic curve 501) according to the parameter b. In one embodiment, the processor 13 can obtain the characteristic values related to any of the curves 402-406 in FIG. 4 in the same manner, which will not be repeated here. The processor 13 can determine whether the target face 22 in FIG. 2 is a real face or a face in a photo according to the first characteristic value and the second characteristic value.

In one embodiment, the processor 13 can take the parameter a as the first characteristic value. In one embodiment, the processor 13 can divide the parameter b by the total number of sampling points corresponding to the curve 401 (for example, 100) and take the result as the second characteristic value. Therefore, in one embodiment, the first characteristic value may be the parameter a, and the second characteristic value may be the parameter p, where p = b / (the total number of sampling points, e.g., 100). It should be noted that, in other embodiments, the first characteristic value and the second characteristic value may also be obtained by performing other logical operations on the parameters a and b, respectively; the present invention is not limited in this regard.
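
Continuing the sketch above, the two characteristic values could then be read directly off the fitted parameters; the normalisation of b by the number of sampling points follows the p = b / 100 example given for curve 401, and the helper name is an assumption.

```python
def characteristic_values(depths):
    """Return (C1, C2) for one target-area segment: C1 is the fitted
    parameter a (opening direction / curvature), and C2 is the vertex
    position b normalised by the total number of sampling points."""
    a, b, _c = fit_vertex_parabola(depths)   # sketch defined earlier
    return a, b / float(len(depths))
```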

In one embodiment, the processor 13 can determine whether the first characteristic value (denoted by the parameter C1) and/or the second characteristic value (denoted by the parameter C2) meets a preset condition. In one embodiment, the preset conditions corresponding to different target areas (i.e., line segments) can be expressed as in Table 1 below.

Table 1

  Line segment (target area)   First characteristic value C1   Second characteristic value C2
  301                          C1 > V1                         |C2 - V4| > V5
  302                          C1 > V1                         |C2 - V4| > V5
  303                          C1 > V1                         |C2 - V4| > V5
  304                          C1 > V2                         |C2 - V4| > V6
  305                          C1 > V3                         (no condition)
  306                          C1 > V3                         (no condition)

In one embodiment, the parameter V1 may be 0.015, the parameter V2 may be 0.03, the parameter V3 may be 0.02, the parameter V4 may be 0.5, the parameter V5 may be 0.3, and/or the parameter V6 may be 0.2. However, in other embodiments, the parameters V1-V6 may take other values; the present invention is not limited in this regard. In one embodiment, the processor 13 can use multiple training face images to train the deep learning model 101. From the training results, the processor 13 can derive parameters V1-V6 that can be used to distinguish faces in photos from real faces.
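
The preset conditions of Table 1 might then be evaluated as follows, using the example thresholds V1-V6 listed above; the dictionary layout and function name are illustrative assumptions rather than part of the patent.

```python
# Example thresholds taken from the embodiment described above.
V1, V2, V3, V4, V5, V6 = 0.015, 0.03, 0.02, 0.5, 0.3, 0.2

# Per-segment conditions from Table 1: a C1 threshold and, where given,
# a (V_offset, V_margin) pair for the |C2 - V_offset| > V_margin test.
CONDITIONS = {
    301: (V1, (V4, V5)),
    302: (V1, (V4, V5)),
    303: (V1, (V4, V5)),
    304: (V2, (V4, V6)),
    305: (V3, None),
    306: (V3, None),
}

def photo_conditions_met(features):
    """features maps segment id -> (C1, C2).  Returns the ids of the
    segments for which at least one Table 1 condition is satisfied."""
    hits = []
    for seg, (c1, c2) in features.items():
        c1_thr, c2_rule = CONDITIONS[seg]
        c1_hit = c1 > c1_thr
        c2_hit = c2_rule is not None and abs(c2 - c2_rule[0]) > c2_rule[1]
        if c1_hit or c2_hit:
            hits.append(seg)
    return hits
```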

In one embodiment, as long as any one of the conditions listed in Table 1 is met, the target face 22 in FIG. 2 can be determined to be a face in a photo. Alternatively, in one embodiment, the target face 22 in FIG. 2 is determined to be a face in a photo only when at least two of the conditions listed in Table 1 are met. Alternatively, in one embodiment, the target face 22 in FIG. 2 is determined to be a face in a photo only when all of the conditions listed in Table 1 are met.
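
The three alternative decision policies described in this paragraph could be combined with the previous sketch as follows; again, this is only an illustrative assumption.

```python
def is_photo(features, policy="any"):
    """Combine the per-segment results under one of the three policies
    described above: "any", "at_least_two", or "all"."""
    hits = photo_conditions_met(features)
    if policy == "any":
        return len(hits) >= 1
    if policy == "at_least_two":
        return len(hits) >= 2
    return len(hits) == len(CONDITIONS)   # "all"
```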

For example, in one embodiment, assuming that the first characteristic value C1 related to the curve 401 meets the condition C1 > V1 corresponding to the line segment 301 in Table 1, then in response to this condition being met, the processor 13 can determine that the target face 22 in FIG. 2 is a face in a photo rather than a real face. Alternatively, in one embodiment, assuming that the first characteristic value C1 related to the curve 405 meets the condition C1 > V3 corresponding to the line segment 305 in Table 1, then in response to this condition being met, the processor 13 can determine that the target face 22 in FIG. 2 is a face in a photo rather than a real face. Alternatively, in one embodiment, assuming that the second characteristic value C2 related to the curve 402 meets the condition |C2 - V4| > V5 corresponding to the line segment 302 in Table 1, then in response to this condition being met, the processor 13 can determine that the target face 22 in FIG. 2 is a face in a photo rather than a real face.

In one embodiment, if the target face is determined to be a real face, the processor 13 can allow subsequent operations related to face verification or face image registration to continue. For example, after determining that the target face 22 in FIG. 2 is a real face, the processor 13 can allow the face image 21 to be used for face verification and/or face image registration. Conversely, if the target face is determined to be a face in a photo (i.e., not a real face), the processor 13 can stop the subsequent operations related to face verification or face image registration. In this way, erroneous actions caused by misjudging a face in a photo as a real face can be reduced.

FIG. 6 is a flowchart of a real face recognition method according to an embodiment of the invention. Referring to FIG. 6, in step S601, a face image of a target face is obtained. In step S602, depth information of a target area in the face image is obtained. In step S603, the depth information is analyzed to obtain at least one characteristic value related to a quadratic curve, where the quadratic curve reflects the depth distribution state of the target area. In step S604, it is determined whether the at least one characteristic value meets a preset condition. If the at least one characteristic value meets the preset condition, in step S605, the target face is determined to be a face in a photo. If the at least one characteristic value does not meet the preset condition, in step S606, the target face is determined to be a real face.
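
Putting steps S601-S606 together, a top-level sketch of the flow might look like the following. It reuses the helper sketches above and assumes a detect_reference_points routine (for example, backed by the deep learning model 101) that returns the five reference points 201-205; that interface is an assumption, not something specified in this form by the patent.

```python
def is_real_face(depth_map, detect_reference_points, policy="any"):
    """Top-level sketch of the flow of FIG. 6.

    detect_reference_points(depth_map) is assumed to return a dict that
    maps the reference-point ids 201-205 to (x, y) pixel coordinates.
    """
    pts = detect_reference_points(depth_map)                          # S601
    segments = {                                                      # FIG. 3
        301: (midpoint(pts[201], pts[202]), midpoint(pts[204], pts[205])),
        302: (pts[201], pts[205]),
        303: (pts[202], pts[204]),
        304: (midpoint(pts[201], pts[204]), midpoint(pts[202], pts[205])),
        305: (pts[201], pts[204]),
        306: (pts[202], pts[205]),
    }
    features = {}
    for seg, (p_start, p_end) in segments.items():
        depths = sample_depth_along_segment(depth_map, p_start, p_end)  # S602
        features[seg] = characteristic_values(depths)                   # S603
    return not is_photo(features, policy)                 # S604 -> S605/S606
```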

The steps in FIG. 6 have been described in detail above and will not be repeated here. It is worth noting that each step in FIG. 6 can be implemented as multiple program codes or as circuits; the present invention is not limited in this regard. In addition, the method of FIG. 6 can be used together with the above exemplary embodiments or used alone; the present invention is not limited in this regard.

In summary, the embodiments of the present invention can effectively filter out faces in photos presented in front of the lens and/or recognize real faces in front of the lens, thereby reducing erroneous actions caused by misjudging a face in a photo as a real face.

10: Electronic device
11: Depth camera
12: Storage device
13: Processor
101: Deep learning model
21: Face image
22: Target face
201-205: Reference points
301-306: Line segments
401-406: Curves
501: Quadratic curve
S601-S606: Steps

FIG. 1 is a schematic diagram of an electronic device according to an embodiment of the invention. FIG. 2 is a schematic diagram of a face image according to an embodiment of the invention. FIG. 3 is a schematic diagram of a target area according to an embodiment of the invention. FIG. 4 is a schematic diagram of curves reflecting the depth distribution state according to an embodiment of the invention. FIG. 5 is a schematic diagram of a quadratic curve according to an embodiment of the invention. FIG. 6 is a flowchart of a real face recognition method according to an embodiment of the invention.

S601-S606: Steps

Claims (8)

1. A real face recognition method, comprising: obtaining a face image of a target face; obtaining depth information of a target area in the face image; analyzing the depth information to obtain at least one characteristic value related to a quadratic curve, wherein the quadratic curve reflects a depth distribution state of the target area; determining whether the at least one characteristic value meets a preset condition; if the at least one characteristic value meets the preset condition, determining that the target face is a face in a photo; and if the at least one characteristic value does not meet the preset condition, determining that the target face is a real face, wherein the at least one characteristic value comprises a first characteristic value and a second characteristic value, and wherein the step of analyzing the depth information to obtain the at least one characteristic value related to the quadratic curve comprises: describing the quadratic curve by the equation y = a(x - b)^2 + c; obtaining the first characteristic value according to the parameter a in the equation; and obtaining the second characteristic value according to the parameter b in the equation.

2. The real face recognition method according to claim 1, wherein the target area comprises at least one first area and at least one second area, the at least one first area comprises the position of the nose of the target face, and the at least one second area does not comprise the position of the nose of the target face.

3. The real face recognition method according to claim 1, wherein the first characteristic value reflects an opening direction of the quadratic curve and a degree of curvature of the quadratic curve, and the second characteristic value reflects the position of an extreme value of the quadratic curve within the quadratic curve.

4. The real face recognition method according to claim 1, further comprising: analyzing the face image by a deep learning model to obtain the position of at least one facial organ of the target face; and determining the target area according to the position of the at least one facial organ.
5. A real face recognition device, comprising: a depth camera; and a processor coupled to the depth camera, wherein the processor is configured to obtain a face image of a target face through the depth camera, the processor is further configured to obtain depth information of a target area in the face image through the depth camera, the processor is further configured to analyze the depth information to obtain at least one characteristic value related to a quadratic curve, wherein the quadratic curve reflects a depth distribution state of the target area, the processor is further configured to determine whether the at least one characteristic value meets a preset condition, if the at least one characteristic value meets the preset condition, the processor is further configured to determine that the target face is a face in a photo, and if the at least one characteristic value does not meet the preset condition, the processor is further configured to determine that the target face is a real face, wherein the at least one characteristic value comprises a first characteristic value and a second characteristic value, and wherein the operation of the processor analyzing the depth information to obtain the at least one characteristic value related to the quadratic curve comprises: describing the quadratic curve by the equation y = a(x - b)^2 + c; obtaining the first characteristic value according to the parameter a in the equation; and obtaining the second characteristic value according to the parameter b in the equation.

6. The real face recognition device according to claim 5, wherein the target area comprises at least one first area and at least one second area, the at least one first area comprises the position of the nose of the target face, and the at least one second area does not comprise the position of the nose of the target face.

7. The real face recognition device according to claim 5, wherein the first characteristic value reflects an opening direction of the quadratic curve and a degree of curvature of the quadratic curve, and the second characteristic value reflects the position of an extreme value of the quadratic curve within the quadratic curve.

8. The real face recognition device according to claim 5, wherein the processor is further configured to analyze the face image by a deep learning model to obtain the position of at least one facial organ of the target face, and the processor is further configured to determine the target area according to the position of the at least one facial organ.
TW108139763A 2019-11-01 2019-11-01 Identification method of real face and identification device using the same TWI731461B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
TW108139763A TWI731461B (en) 2019-11-01 2019-11-01 Identification method of real face and identification device using the same
CN202010140606.0A CN112784661B (en) 2019-11-01 2020-03-03 Real face recognition method and real face recognition device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW108139763A TWI731461B (en) 2019-11-01 2019-11-01 Identification method of real face and identification device using the same

Publications (2)

Publication Number Publication Date
TW202119287A TW202119287A (en) 2021-05-16
TWI731461B true TWI731461B (en) 2021-06-21

Family

ID=75749984

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108139763A TWI731461B (en) 2019-11-01 2019-11-01 Identification method of real face and identification device using the same

Country Status (2)

Country Link
CN (1) CN112784661B (en)
TW (1) TWI731461B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200604960A (en) * 2004-07-20 2006-02-01 Jing-Jing Fang Feature-based head structure and texturing head
US8090160B2 (en) * 2007-10-12 2012-01-03 The University Of Houston System Automated method for human face modeling and relighting with application to face recognition
US20150326570A1 (en) * 2014-05-09 2015-11-12 Eyefluence, Inc. Systems and methods for discerning eye signals and continuous biometric identification
TW201727537A (en) * 2016-01-22 2017-08-01 鴻海精密工業股份有限公司 Face recognition system and face recognition method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570489A (en) * 2016-11-10 2017-04-19 腾讯科技(深圳)有限公司 Living body determination method and apparatus, and identity authentication method and device
CN109558764B (en) * 2017-09-25 2021-03-16 杭州海康威视数字技术股份有限公司 Face recognition method and device and computer equipment
CN107844748B (en) * 2017-10-17 2019-02-05 平安科技(深圳)有限公司 Auth method, device, storage medium and computer equipment
CN108376239B (en) * 2018-01-25 2021-10-15 努比亚技术有限公司 Face recognition method, mobile terminal and storage medium
CN108416291B (en) * 2018-03-06 2021-02-19 广州逗号智能零售有限公司 Face detection and recognition method, device and system
CN109117755B (en) * 2018-07-25 2021-04-30 北京飞搜科技有限公司 Face living body detection method, system and equipment

Also Published As

Publication number Publication date
CN112784661A (en) 2021-05-11
TW202119287A (en) 2021-05-16
CN112784661B (en) 2024-01-19

Similar Documents

Publication Publication Date Title
CN110826519B (en) Face shielding detection method and device, computer equipment and storage medium
WO2021077984A1 (en) Object recognition method and apparatus, electronic device, and readable storage medium
US11747898B2 (en) Method and apparatus with gaze estimation
WO2018028546A1 (en) Key point positioning method, terminal, and computer storage medium
US11074436B1 (en) Method and apparatus for face recognition
CN106897658B (en) Method and device for identifying human face living body
CN111767900B (en) Face living body detection method, device, computer equipment and storage medium
JP6550094B2 (en) Authentication device and authentication method
US11367305B2 (en) Obstruction detection during facial recognition processes
US20200082157A1 (en) Periocular facial recognition switching
CN106934376B (en) A kind of image-recognizing method, device and mobile terminal
WO2020103700A1 (en) Image recognition method based on micro facial expressions, apparatus and related device
CN109345553B (en) Palm and key point detection method and device thereof, and terminal equipment
WO2020199611A1 (en) Liveness detection method and apparatus, electronic device, and storage medium
CN105612533A (en) In-vivo detection method, in-vivo detection system and computer programe products
US11126827B2 (en) Method and system for image identification
WO2019200702A1 (en) Descreening system training method and apparatus, descreening method and apparatus, device, and medium
CN105335719A (en) Living body detection method and device
CN114495241B (en) Image recognition method and device, electronic equipment and storage medium
WO2021179719A1 (en) Face detection method, apparatus, medium, and electronic device
WO2021042544A1 (en) Facial verification method and apparatus based on mesh removal model, and computer device and storage medium
CN115050064A (en) Face living body detection method, device, equipment and medium
WO2020244160A1 (en) Terminal device control method and apparatus, computer device, and readable storage medium
US20180075295A1 (en) Detection apparatus, detection method, and computer program product
CN110059607A (en) Living body multiple detection method, device, computer equipment and storage medium