TWI807851B - A feature disentanglement system, method and computer-readable medium thereof for domain generalized face anti-spoofing - Google Patents

A feature disentanglement system, method and computer-readable medium thereof for domain generalized face anti-spoofing

Info

Publication number
TWI807851B
TWI807851B
Authority
TW
Taiwan
Prior art keywords
domain
living body
loss
contour
image data
Prior art date
Application number
TW111121276A
Other languages
Chinese (zh)
Other versions
TW202349261A (en)
Inventor
鄭玉欣
蘇亞凡
柳恆崧
陳尚甫
王鈺強
Original Assignee
中華電信股份有限公司
Priority date
Filing date
Publication date
Application filed by 中華電信股份有限公司 filed Critical 中華電信股份有限公司
Priority to TW111121276A priority Critical patent/TWI807851B/en
Application granted granted Critical
Publication of TWI807851B publication Critical patent/TWI807851B/en
Publication of TW202349261A publication Critical patent/TW202349261A/en


Abstract

The present invention provides a feature disentanglement system, method, and computer-readable medium for domain-generalized face anti-spoofing, comprising a liveness encoder, a content encoder, a domain encoder, a liveness classifier, a content decoder, and a domain classifier. A liveness loss, a liveness confusion loss, a content loss, a content confusion loss, a domain loss, and a domain confusion loss are calculated to update the encoders and classifiers until they converge, completing the training of the neural network. After training, the present invention can accurately extract the liveness features of a human face from an image, so that liveness judgment of the face is not disturbed by domain or environmental factors, thereby achieving domain-generalized face anti-spoofing.

Description

A feature disentanglement system, method and computer-readable medium thereof for domain generalized face anti-spoofing

The present invention relates to a feature analysis technology for face anti-spoofing, and more particularly to a feature disentanglement system, method, and computer-readable medium for domain-generalized face anti-spoofing.

As face recognition systems have matured, more and more systems and devices (such as mobile phones and building access control) use face recognition for verification. This raises new security issues: for example, a malicious attacker may try to deceive a face recognition model with a printed face or a face displayed on a screen. Face anti-spoofing has therefore become an important technology for improving information security.

However, existing face anti-spoofing models are often degraded by differences between domains. For example, a model trained on face data captured with a camera may, when applied to face recognition on surveillance footage, be affected by differences in lighting, shooting angle, and the focal length or resolution of the device. This problem is known as domain difference.

How to avoid the problem of domain difference and realize domain-generalized face anti-spoofing, so as to accurately achieve the effect of domain-generalized face anti-spoofing, has therefore become an urgent problem for the industry.

To solve the aforementioned technical problems of the prior art or provide related effects, the present invention provides a feature disentanglement system for domain-generalized face anti-spoofing, comprising: a liveness feature extraction module, which receives image data of a face image whose liveness is to be judged and extracts liveness features from the image data; a liveness classifier module, communicatively connected to the liveness feature extraction module, which receives the liveness features of the image data and computes a liveness classification result for the image data from those features; and a processing module, communicatively connected to the liveness classifier module, which receives the liveness classification result of the image data and judges from it whether the face image in the image data is live.

The present invention further provides a feature disentanglement method for domain-generalized face anti-spoofing, comprising: receiving, by a liveness feature extraction module, image data of a face image whose liveness is to be judged, and extracting liveness features from the image data; receiving, by a liveness classifier module, the liveness features of the image data, and computing a liveness classification result for the image data from those features; and receiving, by a processing module, the liveness classification result of the image data, and judging from it whether the face image in the image data is live.

In an embodiment, the feature disentanglement system and method for domain-generalized face anti-spoofing further comprise: a contour feature extraction module communicatively connected to the liveness classifier module, a facial contour reconstruction module communicatively connected to the contour feature extraction module, a domain feature extraction module communicatively connected to the liveness classifier module, and a domain classifier module communicatively connected to the liveness feature extraction module, the contour feature extraction module, and the domain feature extraction module.

In an embodiment, the processing module further calculates a liveness classification loss, a liveness confusion loss, a contour reconstruction loss, a contour confusion loss, a domain classification loss, and a domain confusion loss, and updates the liveness feature extraction module, the contour feature extraction module, the domain feature extraction module, the liveness classifier module, the facial contour reconstruction module, and the domain classifier module according to at least one of these losses.

In an embodiment, the liveness feature extraction module receives at least one item of training image data and extracts its liveness features, and the liveness classifier module and the domain classifier module compute from those liveness features a first liveness classification result and a first domain classification result, respectively, so that the processing module calculates the liveness classification loss and the liveness confusion loss from the first liveness classification result and the first domain classification result, respectively.

In an embodiment, the contour feature extraction module receives at least one item of training image data and extracts the contour features of the face image from it, and the facial contour reconstruction module reconstructs a facial contour from those contour features, so that the processing module calculates the contour reconstruction loss from the reconstructed facial contour.

In an embodiment, the liveness classifier module and the domain classifier module compute from the contour features a second liveness classification result and a second domain classification result, respectively, so that the processing module calculates the contour confusion loss from the second liveness classification result and the second domain classification result.

In an embodiment, the domain feature extraction module receives at least one item of training image data and extracts domain features from it, and the liveness classifier module and the domain classifier module compute from those domain features a third liveness classification result and a third domain classification result, respectively, so that the processing module calculates the domain classification loss and the domain confusion loss from the third domain classification result and the third liveness classification result, respectively.

The present invention further provides a computer-readable medium for use in a computer or computing device having a processor and/or memory, the computer or computing device executing a target program and the computer-readable medium through the processor and/or memory, such that executing the computer-readable medium performs the feature disentanglement method for domain-generalized face anti-spoofing described above.

As can be seen from the above, the feature disentanglement system, method, and computer-readable medium for domain-generalized face anti-spoofing of the present invention update the above modules through the calculated loss functions (i.e., the liveness classification loss, liveness confusion loss, contour reconstruction loss, contour confusion loss, domain classification loss, and domain confusion loss) until the modules converge, thereby completing the training of their neural networks. After training, the system can accurately extract the domain-generalized liveness features of a human face from an image, so that liveness judgment of the face in the image is not disturbed by domain or environmental factors; the present invention can therefore be applied in any domain, achieving the effect of domain generalization.

1: Feature disentanglement system for domain-generalized face anti-spoofing

11: Liveness feature extraction module

12: Contour feature extraction module

13: Domain feature extraction module

14: Liveness classifier module

15: Facial contour reconstruction module

16: Domain classifier module

17: Processing module

S21A~S23A and S21B~S23B: steps

S31A~S33A and S31B~S33B: steps

S41A~S43A and S41B~S43B: steps

S51~S511: steps

FIG. 1 and FIG. 1-1 show the feature disentanglement system for domain-generalized face anti-spoofing of the present invention.

FIG. 2A is a schematic flowchart of the calculation of the liveness classification loss of the present invention.

FIG. 2B is a schematic flowchart of the calculation of the liveness confusion loss of the present invention.

FIG. 3A is a schematic flowchart of the calculation of the contour reconstruction loss of the present invention.

FIG. 3B is a schematic flowchart of the calculation of the contour confusion loss of the present invention.

FIG. 4A is a schematic flowchart of the calculation of the domain classification loss of the present invention.

FIG. 4B is a schematic flowchart of the calculation of the domain confusion loss of the present invention.

FIG. 5 is a schematic flowchart of the method for training the neural networks of the modules in the feature disentanglement system for domain-generalized face anti-spoofing of the present invention.

The implementation of the present invention is described below through specific embodiments; those skilled in the art can readily understand other advantages and effects of the present invention from the content disclosed in this specification.

Note that the structures, proportions, and sizes depicted in the drawings of this specification are provided only to accompany the disclosed content for the understanding of those skilled in the art, and are not intended to limit the conditions under which the present invention can be implemented; they therefore carry no substantive technical significance. Any modification of structure, change of proportion, or adjustment of size that does not affect the effects and objectives achievable by the present invention shall still fall within the scope covered by the technical content disclosed herein. Likewise, terms such as "a", "first", "second", "upper", and "lower" cited in this specification are used only for clarity of description and not to limit the implementable scope of the present invention; changes or adjustments of their relative relationships, without substantive alteration of the technical content, shall be regarded as within the implementable scope of the present invention.

FIG. 1 shows the feature disentanglement system 1 for domain-generalized face anti-spoofing of the present invention, comprising: a liveness feature extraction module 11, a contour feature extraction module 12, a domain feature extraction module 13, a liveness classifier module 14, a facial contour reconstruction module 15, and a domain classifier module 16, each of which is built from an artificial neural network (ANN; also called a neural network, NN). In an embodiment, the system 1 further comprises a processing module 17 communicatively connected to these modules (as shown in FIG. 1-1), which is used to calculate the loss functions of the modules.

In an embodiment, the feature disentanglement system 1 for domain-generalized face anti-spoofing can be established on the same (or different) servers (such as general-purpose servers, file servers, or storage servers), computers, or other electronic equipment with a suitable computing mechanism. Each module in the system 1 may be software, hardware, or firmware; if hardware, it may be a processing unit, processor, computer, or server with data processing and computing capability; if software or firmware, it may comprise instructions executable by a processing unit, processor, computer, or server, and may be installed on one hardware device or distributed across a plurality of hardware devices.

The liveness feature extraction module 11 is a neural network with input dimension H×W×3 and output dimension D, where H and W are the height and width of the image data received by the system 1, 3 denotes the three color channels of a color image, and D is the vector length (feature length) of the liveness feature. In this embodiment, the neural network used by the liveness feature extraction module 11 is ResNet18 with its last layer removed, and the feature length D is 512. Specifically, the system 1 receives an input image (containing a face image), and the liveness feature extraction module 11 extracts from the complex image data a liveness feature of length D, which serves as the information used to judge whether the face in the image data is live.
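
As a concrete illustration of the encoder just described, the following is a minimal sketch assuming a PyTorch/torchvision implementation; the patent does not name a framework, so the class name and all identifiers are illustrative rather than the patented implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class LivenessEncoder(nn.Module):
    """ResNet18 with its final fully connected layer removed: maps an
    H x W x 3 image to a D = 512 liveness feature vector, as described above."""
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        # Keep everything up to and including the global average pool;
        # dropping the last (classification) layer leaves a 512-dim output.
        self.features = nn.Sequential(*list(backbone.children())[:-1])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, H, W) -> (batch, 512)
        return self.features(x).flatten(1)
```

The contour feature extraction module 12 and domain feature extraction module 13 described below share the same architecture, so they would simply be separate instances of such a class.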

In this embodiment, so that the liveness features extracted from the image data contain no irrelevant domain information, the processing module 17 calculates a liveness classification loss and a liveness confusion loss, and the liveness feature extraction module 11 is trained according to these two losses, which are calculated as follows:

1. Liveness classification loss:

As shown in FIG. 2A, the calculation of the liveness classification loss comprises:

In step S21A, an item of training image data containing a face image is input.

In step S22A, the liveness feature extraction module 11 extracts liveness features from the training image data, and the liveness classifier module 14 then computes a first liveness classification result from those features. In detail, the liveness classifier module 14 is a neural network whose input is a liveness feature of dimension D and whose output is a prediction with the same dimension as the liveness label $y_i$. Taking the liveness classifier module 14 as a single fully connected layer, the first liveness classification result it predicts is $\theta_L \times E_L(x_{i,j}) + b_L$, where $\theta_L$ and $b_L$ are parameters of the liveness classifier module 14.

In step S23A, the processing module 17 uses the loss function (1) to calculate the gap between the first liveness classification result and the liveness label of the training image data, yielding the liveness classification loss. In detail, the liveness classification loss ensures that the liveness features contain the important information in the input image needed for liveness judgment. An example equation for the loss function (1) is:

$\mathcal{L}_{live} = \sum_{i=1}^{S} \sum_{j=1}^{N} \mathrm{CE}\big( C_L(E_L(x_{i,j})),\; y_i \big)$  (1)

where $x_{i,j}$ is the input training image data; $E_L$ is the liveness feature extraction module 11; $C_L$ is the liveness classifier module 14; $y_i$ is the liveness label of the training image data; $S$ is the number of domains used in training, here the number of training image datasets; $N$ is the number of training images input at a time when the neural network computes in parallel (the batch size); and $\mathcal{L}_{live}$ is the liveness classification loss, i.e., the gap between the first liveness classification result computed by the liveness classifier module 14 from the liveness features and the liveness label.
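
For illustration, a minimal sketch of loss (1) under the assumptions above: the classifier is one fully connected layer, and the "gap" to the label is measured with cross-entropy (the patent leaves the gap measure open, so cross-entropy is an assumption here).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

liveness_classifier = nn.Linear(512, 2)  # theta_L and b_L: one fully connected layer

def liveness_classification_loss(liveness_feats: torch.Tensor,
                                 liveness_labels: torch.Tensor) -> torch.Tensor:
    # liveness_feats: (S*N, 512), the features E_L(x_ij) of one parallel pass
    # liveness_labels: (S*N,), y_i with 0 = spoof, 1 = live
    logits = liveness_classifier(liveness_feats)  # theta_L x E_L(x_ij) + b_L
    return F.cross_entropy(logits, liveness_labels)
```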

2. Liveness confusion loss:

As shown in FIG. 2B, the calculation of the liveness confusion loss comprises:

In step S21B, an item of training image data is input.

In step S22B, the liveness feature extraction module 11 extracts liveness features from the training image data, and the domain classifier module 16 then computes a first domain classification result from those features. In detail, the domain classifier module 16 is a neural network whose input is a liveness feature of dimension D and whose output is a prediction with the same dimension as the domain label $m_i$. Taking the domain classifier module 16 as a single fully connected layer, the first domain classification result it predicts is $\theta_D \times E_L(x_{i,j}) + b_D$, where $\theta_D$ and $b_D$ are parameters of the domain classifier module 16.

In step S23B, the processing module 17 uses the loss function (2) to calculate the gap between the first domain classification result and a uniform distribution, yielding the liveness confusion loss. In detail, the liveness confusion loss ensures that the liveness features of the training image data contain none of its domain information (such as the lighting at capture time or the focal length and resolution of the device used), so the first domain classification result produced by the domain classifier module 16 should be close to a uniform distribution. An example equation for the loss function (2) is:

$\tilde{\mathcal{L}}_{live} = \sum_{i=1}^{S} \sum_{j=1}^{N} \mathrm{CE}\big( C_D(E_L(x_{i,j})),\; u \big)$  (2)

where $x_{i,j}$ is the input training image data; $E_L$ is the liveness feature extraction module 11; $C_D$ is the domain classifier module 16; $S$ is the number of training image datasets; $N$ is the number of training images input at a time; $u$ denotes the uniform distribution; and $\tilde{\mathcal{L}}_{live}$ is the liveness confusion loss, i.e., the gap between the first domain classification result computed by the domain classifier module 16 from the liveness features and a uniform distribution.

In an embodiment, the first liveness classification result and the first domain classification result may be probability values between 0 and 1, or values expressed as percentages; the numerical presentation of the classification results is not limited here.

The contour feature extraction module 12 is likewise a neural network with input dimension H×W×3 and output dimension D. In this embodiment, the neural network used by the contour feature extraction module 12 is ResNet18 with its last layer removed, and the feature length D is 512. Specifically, the system 1 receives an input image, and the contour feature extraction module 12 extracts from the complex image data a contour feature of length D, which serves as the information used to reconstruct the facial contour in the image data.

In this embodiment, so that the contour features extracted from the image data contain no irrelevant liveness or domain information, the processing module 17 calculates a contour reconstruction loss and a contour confusion loss, which are calculated as follows:

1. Contour reconstruction loss:

As shown in FIG. 3A, the calculation of the contour reconstruction loss comprises:

In step S31A, an item of training image data is input.

In step S32A, the contour feature extraction module 12 extracts contour features from the training image data, and the facial contour reconstruction module 15 then reconstructs a facial contour from those features.

In detail, the facial contour reconstruction module 15 is a neural network whose input is a contour feature of dimension D and whose output prediction has the same contour dimension as the contour $\Phi(x_{i,j})$ generated by the pre-trained facial contour generation module $\Phi$. Taking the facial contour reconstruction module 15 as a single fully connected layer with contour dimension H×W, the facial contour it reconstructs is $\theta_C \times E_C(x_{i,j}) + b_C$, where $\theta_C$ and $b_C$ are parameters of the facial contour reconstruction module 15, $\theta_C$ has dimension H×W×D, and $b_C$ has dimension H×W, so the reconstructed contour is a prediction of size H×W.

In step S33A, the processing module 17 uses the loss function (3) to calculate the gap between the reconstructed facial contour and the facial contour of the training image data, yielding the contour reconstruction loss. In detail, the contour reconstruction loss ensures that the contour features contain the important information in the input training image data needed to reconstruct the facial contour. An example equation for the loss function (3) is:

$\mathcal{L}_{cont} = \sum_{i=1}^{S} \sum_{j=1}^{N} \big\| D_C(E_C(x_{i,j})) - \Phi(x_{i,j}) \big\|$  (3)

where $x_{i,j}$ is the input training image data; $E_C$ is the contour feature extraction module 12; $D_C$ is the facial contour reconstruction module 15; $\Phi$ is a pre-trained facial contour generation module (not shown) that extracts a facial contour from the input training image data, against which the reconstructed facial contour is compared when computing the contour reconstruction loss; $S$ is the number of training image datasets; $N$ is the number of training images input at a time; and $\mathcal{L}_{cont}$ is the contour reconstruction loss, i.e., the gap between the facial contour reconstructed by the facial contour reconstruction module 15 from the contour features and the facial contour extracted by the pre-trained facial contour generation module.

In an embodiment, the pre-trained facial contour generation module is a neural network built with PRNet, with input dimension D and output dimension H×W×3.
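
A sketch of loss (3) under the stated assumptions: the decoder stands in for $D_C$ as one fully connected layer producing an H×W contour, and the gap to the contour $\Phi(x)$ from the pre-trained generator is measured with mean squared error (the patent does not fix the norm, so MSE is an assumption).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

H, W = 224, 224                          # illustrative image size
contour_decoder = nn.Linear(512, H * W)  # theta_C (H*W x D) and b_C (H*W)

def contour_reconstruction_loss(contour_feats: torch.Tensor,
                                target_contour: torch.Tensor) -> torch.Tensor:
    # contour_feats: (batch, 512), the features E_C(x_ij)
    # target_contour: (batch, H, W), the contour Phi(x_ij) from the generator
    reconstructed = contour_decoder(contour_feats).view(-1, H, W)
    return F.mse_loss(reconstructed, target_contour)
```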

2. Contour confusion loss:

As shown in FIG. 3B, the calculation of the contour confusion loss comprises:

In step S31B, an item of training image data is input.

In step S32B, the contour feature extraction module 12 extracts contour features from the training image data, and the liveness classifier module 14 and the domain classifier module 16 then compute a second liveness classification result and a second domain classification result, respectively, from those features. Specifically, taking both the liveness classifier module 14 and the domain classifier module 16 as single fully connected layers, the second liveness classification result predicted by the liveness classifier module 14 is $\theta_L \times E_C(x_{i,j}) + b_L$, and the second domain classification result predicted by the domain classifier module 16 is $\theta_D \times E_C(x_{i,j}) + b_D$, where $\theta_L$ and $b_L$ are parameters of the liveness classifier module 14, and $\theta_D$ and $b_D$ are parameters of the domain classifier module 16.

In step S33B, the processing module 17 uses the loss function (4) to calculate the gaps between the second liveness classification result, the second domain classification result, and a uniform distribution, yielding the contour confusion loss. In detail, the contour confusion loss ensures that the contour features contain no liveness or domain information from the input image, so both the second liveness classification result and the second domain classification result should be close to a uniform distribution. An example equation for the loss function (4) is:

$\tilde{\mathcal{L}}_{cont} = \sum_{i=1}^{S} \sum_{j=1}^{N} \Big[ \mathrm{CE}\big( C_L(E_C(x_{i,j})),\; u \big) + \mathrm{CE}\big( C_D(E_C(x_{i,j})),\; u \big) \Big]$  (4)

where $x_{i,j}$ is the input training image data; $E_C$ is the contour feature extraction module 12; $C_L$ is the liveness classifier module 14; $C_D$ is the domain classifier module 16; $S$ is the number of training image datasets; $N$ is the number of training images input at a time; $u$ denotes the uniform distribution; and $\tilde{\mathcal{L}}_{cont}$ is the contour confusion loss, i.e., the sum of the gaps between a uniform distribution and the second liveness classification result and second domain classification result computed by the liveness classifier module 14 and the domain classifier module 16 from the contour features.

In an embodiment, the second liveness classification result and the second domain classification result may be probability values between 0 and 1, or values expressed as percentages; the numerical presentation of the classification results is not limited here.

The domain feature extraction module 13 is likewise a neural network with input dimension H×W×3 and output dimension D. In this embodiment, the neural network used by the domain feature extraction module 13 is ResNet18 with its last layer removed, and the feature length D is 512. Specifically, the system 1 receives an input image, and the domain feature extraction module 13 extracts from the complex image data a domain feature of length D, which serves to identify the domain information of the image data (such as the lighting at capture time or the focal length and resolution of the device used).

In this embodiment, so that the domain features extracted from the image data contain no irrelevant liveness information, the processing module 17 calculates a domain classification loss and a domain confusion loss, which are calculated as follows:

1. Domain classification loss:

As shown in FIG. 4A, the calculation of the domain classification loss comprises:

In step S41A, an item of training image data is input.

In step S42A, the domain feature extraction module 13 extracts domain features from the training image data, and the domain classifier module 16 then computes a third domain classification result from those features. Specifically, taking the domain classifier module 16 as a single fully connected layer, the third domain classification result it predicts is $\theta_D \times E_D(x_{i,j}) + b_D$, where $\theta_D$ and $b_D$ are parameters of the domain classifier module 16.

In step S43A, the processing module 17 uses the loss function (5) to calculate the gap between the third domain classification result and the domain label of the training image data, yielding the domain classification loss. In detail, the domain classification loss ensures that the domain features contain the important information in the input image needed for domain judgment. An example equation for the loss function (5) is:

$\mathcal{L}_{dom} = \sum_{i=1}^{S} \sum_{j=1}^{N} \mathrm{CE}\big( C_D(E_D(x_{i,j})),\; m_i \big)$  (5)

where $x_{i,j}$ is the input training image data; $E_D$ is the domain feature extraction module 13; $C_D$ is the domain classifier module 16; $m_i$ is the domain label of the training image data; $S$ is the number of training image datasets; $N$ is the number of training images input at a time; and $\mathcal{L}_{dom}$ is the domain classification loss, i.e., the gap between the third domain classification result computed by the domain classifier module 16 from the domain features and the domain label.
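
A sketch of loss (5), mirroring the loss (1) sketch but with integer domain labels $m_i$; cross-entropy is again an assumed gap measure, and the domain count is illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

domain_classifier = nn.Linear(512, 5)  # theta_D and b_D; 5 domains, illustrative

def domain_classification_loss(domain_feats: torch.Tensor,
                               domain_labels: torch.Tensor) -> torch.Tensor:
    # domain_feats: (batch, 512), the features E_D(x_ij)
    # domain_labels: (batch,), integer domain labels m_i in [0, S)
    return F.cross_entropy(domain_classifier(domain_feats), domain_labels)
```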

2. Domain confusion loss:

As shown in FIG. 4B, the calculation of the domain confusion loss comprises:

In step S41B, an item of training image data is input.

In step S42B, the domain feature extraction module 13 extracts domain features from the training image data, and the liveness classifier module 14 then computes a third liveness classification result from those features. Specifically, taking the liveness classifier module 14 as a single fully connected layer, the third liveness classification result it predicts is $\theta_L \times E_D(x_{i,j}) + b_L$, where $\theta_L$ and $b_L$ are parameters of the liveness classifier module 14.

In step S43B, the processing module 17 uses the loss function (6) to calculate the gap between the third liveness classification result and a uniform distribution, yielding the domain confusion loss. In detail, the domain confusion loss ensures that the domain features of the training image data contain none of its liveness information, so the third liveness classification result produced by the liveness classifier module 14 should be close to a uniform distribution. An example equation for the loss function (6) is:

$\tilde{\mathcal{L}}_{dom} = \sum_{i=1}^{S} \sum_{j=1}^{N} \mathrm{CE}\big( C_L(E_D(x_{i,j})),\; u \big)$  (6)

where $x_{i,j}$ is the input training image data; $E_D$ is the domain feature extraction module 13; $C_L$ is the liveness classifier module 14; $S$ is the number of training image datasets; $N$ is the number of training images input at a time; $u$ denotes the uniform distribution; and $\tilde{\mathcal{L}}_{dom}$ is the domain confusion loss, i.e., the gap between the third liveness classification result computed by the liveness classifier module 14 from the domain features and a uniform distribution.
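
A sketch of loss (6): the liveness prediction on domain features is pushed toward uniform over the two liveness classes, with the same assumed KL gap as in the earlier confusion-loss sketches.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

liveness_head = nn.Linear(512, 2)  # C_L: liveness classifier head

def domain_confusion_loss(domain_feats: torch.Tensor) -> torch.Tensor:
    # domain_feats: (batch, 512), the features E_D(x_ij)
    log_p = F.log_softmax(liveness_head(domain_feats), dim=1)
    uniform = torch.full_like(log_p, 0.5)  # uniform over {live, spoof}
    return F.kl_div(log_p, uniform, reduction="batchmean")
```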

In an embodiment, the third domain classification result and the third liveness classification result may be probability values between 0 and 1, or values expressed as percentages; the numerical presentation of the classification results is not limited here.

In another embodiment, after the losses of loss functions (1) to (6) (i.e., the liveness classification loss, liveness confusion loss, contour reconstruction loss, contour confusion loss, domain classification loss, and domain confusion loss) are obtained from the input training image data, the processing module 17 uses neural-network backpropagation (BP) together with gradient descent to compute the gradients of loss functions (1) to (6). Feeding these gradients back into the optimization method, it updates the weights in the liveness feature extraction module 11, the contour feature extraction module 12, the domain feature extraction module 13, the liveness classifier module 14, the facial contour reconstruction module 15, and the domain classifier module 16, thereby reducing the loss functions (1) to (6). Moreover, the weights given to loss functions (1) to (6) can be adjusted according to the training image data and the design of the neural networks.
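
As an illustration of this update scheme, a minimal sketch of one optimization step: the six losses are combined with tunable weights and minimized jointly by backpropagation. The dict-based interface and the choice of optimizer are assumptions; the patent only states that the loss weights may be adjusted.

```python
import torch

def train_step(optimizer: torch.optim.Optimizer,
               losses: dict, weights: dict) -> torch.Tensor:
    # losses / weights: dicts keyed by the six loss names, e.g. "live",
    # "live_conf", "cont", "cont_conf", "dom", "dom_conf" (names illustrative).
    total = sum(weights[name] * losses[name] for name in losses)
    optimizer.zero_grad()
    total.backward()   # backpropagation: gradients of all six loss terms
    optimizer.step()   # gradient-descent update of every registered module
    return total.detach()
```

The optimizer would be constructed over the parameters of all six modules, for example by chaining their `parameters()` iterators into a single `torch.optim.SGD` or `torch.optim.Adam` instance.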

In detail, when the liveness feature extraction module 11, contour feature extraction module 12, domain feature extraction module 13, liveness classifier module 14, facial contour reconstruction module 15, and domain classifier module 16 are updated, the loss functions used for each are as follows:

1. Liveness feature extraction module 11: liveness classification loss and liveness confusion loss

2. Liveness classifier module 14: liveness classification loss

3. Contour feature extraction module 12: contour reconstruction loss and contour confusion loss

4. Facial contour reconstruction module 15: contour reconstruction loss

5. Domain feature extraction module 13: domain classification loss and domain confusion loss

6. Domain classifier module 16: domain classification loss

Next, a plurality of training image data items are continuously input into the liveness feature extraction module 11, contour feature extraction module 12, domain feature extraction module 13, liveness classifier module 14, facial contour reconstruction module 15, and domain classifier module 16, and the training process described in the above embodiments is carried out on these modules until loss functions (1) to (6) converge, thereby completing the training of their neural networks. In practice, the downward trend of the loss functions is usually observed as the reference for whether the model has converged during training.

In an embodiment, whether the training of the neural networks of the liveness feature extraction module 11, contour feature extraction module 12, domain feature extraction module 13, liveness classifier module 14, facial contour reconstruction module 15, and domain classifier module 16 has converged is judged from the total magnitude of loss functions (1) to (6). Specifically, since the values of loss functions (1) to (6) oscillate in practice, their evolution over update steps is plotted as a line chart and the curve is smoothed; the processing module 17 then checks whether the decrease of the smoothed losses within a fixed number of updates (e.g., 500) is smaller than a threshold, and judges that training has converged when the decrease falls below that threshold. The threshold can be set according to the required training accuracy and is not limited here.
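
A sketch of this convergence test: smooth the oscillating loss curve and declare convergence once the drop over a window of updates falls below a threshold. The exponential-moving-average smoother and all numeric defaults are illustrative, not values fixed by the patent.

```python
def has_converged(loss_history: list[float],
                  window: int = 500,      # number of updates, as in the example
                  threshold: float = 1e-3,
                  alpha: float = 0.01) -> bool:
    if len(loss_history) <= window:
        return False
    # Smooth the line chart of total loss vs. update count.
    smoothed, ema = [], loss_history[0]
    for value in loss_history:
        ema = alpha * value + (1 - alpha) * ema
        smoothed.append(ema)
    drop = smoothed[-window - 1] - smoothed[-1]
    return drop < threshold  # converged when the decrease is below threshold
```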

In an embodiment, the processing module 17 may also judge whether the training of the liveness feature extraction module 11, contour feature extraction module 12, domain feature extraction module 13, liveness classifier module 14, facial contour reconstruction module 15, and domain classifier module 16 is complete from the quality of the facial contour reconstructed by the facial contour reconstruction module 15 and/or the accuracy of the classification results produced by the liveness classifier module 14 and/or the domain classifier module 16.

FIG. 5 is a schematic flowchart of the method for training the neural networks of the modules in the feature disentanglement system for domain-generalized face anti-spoofing of the present invention, described with reference to FIGS. 1 to 4B. Details shared with the above embodiments are not repeated; the training method comprises the following steps S51 to S511:

In step S51, an item of training image data is input, containing a face image and domain information beyond the face image (such as the lighting at capture time or the focal length and resolution of the device used).

In step S52, the liveness feature extraction module 11 extracts the liveness features of the face image from the training image data.

In step S53, the liveness classifier module 14 computes a first liveness classification result from the liveness features, and the gap between the first liveness classification result and the liveness label of the training image data is then calculated to obtain the liveness classification loss.

In step S54, the domain classifier module 16 computes a first domain classification result from the liveness features of the training image data, and the gap between the first domain classification result and a uniform distribution is then calculated to obtain the liveness confusion loss.

In step S55, the contour feature extraction module 12 extracts the contour features of the face image from the training image data.

In step S56, the facial contour reconstruction module 15 reconstructs a facial contour from the contour features of the training image data, and the gap between the reconstructed facial contour and the facial contour of the training image data is then calculated to obtain the contour reconstruction loss.

In step S57, the liveness classifier module 14 and the domain classifier module 16 compute a second liveness classification result and a second domain classification result, respectively, from the contour features of the training image data, and the gaps between the second liveness classification result, the second domain classification result, and a uniform distribution are then calculated to obtain the contour confusion loss.

In step S58, the domain feature extraction module 13 extracts domain features from the complex image data.

In step S59, the domain classifier module 16 computes a third domain classification result from the domain features of the training image data, and the gap between the third domain classification result and the domain label of the training image data is then calculated to obtain the domain classification loss.

In step S510, the liveness classifier module 14 computes a third liveness classification result from the domain features of the training image data, and the gap between the third liveness classification result and a uniform distribution is then calculated to obtain the domain confusion loss.

In step S511, the weights in the liveness feature extraction module 11, contour feature extraction module 12, domain feature extraction module 13, liveness classifier module 14, facial contour reconstruction module 15, and domain classifier module 16 are updated according to at least one of the liveness classification loss, liveness confusion loss, contour reconstruction loss, contour confusion loss, domain classification loss, and domain confusion loss, until the modules converge.
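
A condensed sketch of steps S51 to S511 for one batch, assuming the module and loss sketches above (encoders E_L, E_C, E_D; classifier heads C_L, C_D; contour decoder D_C; pre-trained contour generator phi). Everything here is illustrative glue code under those assumptions, not the patented implementation.

```python
import torch
import torch.nn.functional as F

def uniform_kl(logits: torch.Tensor) -> torch.Tensor:
    # Assumed "gap to uniform" measure, as in the earlier sketches.
    log_p = F.log_softmax(logits, dim=1)
    return F.kl_div(log_p, torch.full_like(log_p, 1.0 / logits.size(1)),
                    reduction="batchmean")

def train_batch(x, y_live, m_dom, E_L, E_C, E_D, C_L, C_D, D_C, phi, optimizer):
    f_live, f_cont, f_dom = E_L(x), E_C(x), E_D(x)               # S52, S55, S58
    loss = (F.cross_entropy(C_L(f_live), y_live)                 # S53: liveness classification
            + uniform_kl(C_D(f_live))                            # S54: liveness confusion
            + F.mse_loss(D_C(f_cont), phi(x))                    # S56: contour reconstruction
            + uniform_kl(C_L(f_cont)) + uniform_kl(C_D(f_cont))  # S57: contour confusion
            + F.cross_entropy(C_D(f_dom), m_dom)                 # S59: domain classification
            + uniform_kl(C_L(f_dom)))                            # S510: domain confusion
    optimizer.zero_grad()
    loss.backward()                                              # S511: update all modules
    optimizer.step()
    return loss.detach()
```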

In addition, the present invention also discloses a computer-readable medium for use in a computing device or computer having a processor (e.g., a CPU or GPU) and/or memory. The medium stores instructions, and the computing device or computer can execute it through the processor and/or memory so that, when the computer-readable medium is executed, the method and steps described above are performed.

The following is an application embodiment of the feature disentanglement system 1 for domain-generalized face anti-spoofing of the present invention, described with reference to FIG. 1.

In this embodiment, the modules of the feature disentanglement system 1 for domain-generalized face anti-spoofing have already been trained as described in the above embodiments. Upon receiving image data whose liveness is to be judged, the liveness feature extraction module 11 extracts liveness features from the image data; because the trained system 1 can separate out the contour features and domain features unrelated to liveness, the liveness features are not affected by facial contour or domain differences.

Furthermore, the liveness classifier module 14 receives the liveness features extracted by the liveness feature extraction module 11 and computes a liveness classification result, from which it can be judged whether the image is live. Specifically, a threshold can be taken: if the liveness classification result is above the threshold, the processing module 17 judges the image data to be abnormal, i.e., no live face is present in the image data; if it is below the threshold, the processing module judges the image data to be normal, i.e., a live face is present in the image data.
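
A sketch of this inference-time decision, keeping the convention just stated (a score above the threshold means abnormal, i.e., no live face; below means normal, i.e., a live face); the 0.5 default is illustrative, since the patent leaves the threshold configurable.

```python
def is_live(liveness_score: float, threshold: float = 0.5) -> bool:
    # liveness_score: probability-like value from the liveness classifier.
    # Above the threshold -> abnormal (spoof); below -> normal (live face).
    return liveness_score < threshold
```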

In an embodiment, the liveness classification result and the threshold may be probability values between 0 and 1 or values expressed as percentages, and the threshold may be set according to requirements; neither the numerical presentation of the classification result and threshold nor the size of the threshold is limited here.

In summary, the feature disentanglement system, method, and computer-readable medium for domain-generalized face anti-spoofing of the present invention update the liveness feature extraction module, contour feature extraction module, domain feature extraction module, liveness classifier module, facial contour reconstruction module, and domain classifier module through the calculated loss functions (i.e., the liveness classification loss, liveness confusion loss, contour reconstruction loss, contour confusion loss, domain classification loss, and domain confusion loss) until the modules converge, thereby completing the training of their neural networks. After training, the system can accurately extract the domain-generalized liveness features of a human face from an image, so that liveness judgment of the face in the image is not disturbed by domain or environmental factors; the present invention can therefore be applied in any domain, achieving domain-generalized face anti-spoofing.

Moreover, the prior art uses a single feature extractor to extract and judge the liveness features of a face, so its application is limited to the domains used during training, and the liveness features it extracts also contain irrelevant facial contour features and domain features (such as the lighting at capture time or the focal length and resolution of the device used), which degrades the anti-spoofing judgment. The present invention, by contrast, uses a separate contour feature extraction module and domain feature extraction module to separate out facial contour information and domain features unrelated to the liveness features, so that the liveness features are free from interference by contour and domain differences and the system can focus on reading the liveness features of the face.

The present invention can therefore be applied in different domains to judge the liveness features of faces, preventing ill-intentioned parties from passing verification with non-live face photos or videos, and thereby achieving domain-generalized face anti-spoofing.

The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone skilled in the art may modify and change the above embodiments without departing from the spirit and scope of the present invention. The scope of protection of the present invention shall therefore be as listed in the claims.


Claims (13)

1. A feature disentanglement system for domain-generalized face anti-spoofing, comprising: a liveness feature extraction module, which receives image data of a face image whose liveness is to be judged and extracts liveness features from the image data; a liveness classifier module, communicatively connected to the liveness feature extraction module, which receives the liveness features of the image data and computes a liveness classification result for the image data from those features; a domain classifier module, communicatively connected to the liveness feature extraction module; and a processing module, communicatively connected to the liveness classifier module, which receives the liveness classification result of the image data and judges from it whether the face image in the image data is live, wherein the liveness feature extraction module receives at least one item of training image data and extracts its liveness features, and the liveness classifier module and the domain classifier module compute from those liveness features a first liveness classification result and a first domain classification result, respectively, so that the processing module calculates a liveness classification loss and a liveness confusion loss from the first liveness classification result and the first domain classification result, respectively.

2. The feature disentanglement system for domain-generalized face anti-spoofing of claim 1, further comprising: a contour feature extraction module communicatively connected to the liveness classifier module, a facial contour reconstruction module communicatively connected to the contour feature extraction module, a domain feature extraction module communicatively connected to the liveness classifier module, and the domain classifier module communicatively connected to the contour feature extraction module and the domain feature extraction module, wherein the processing module further calculates a contour reconstruction loss, a contour confusion loss, a domain classification loss, and a domain confusion loss.
3. The feature disentanglement system for domain-generalized face anti-spoofing of claim 2, wherein the processing module updates the liveness feature extraction module according to the liveness classification loss and the liveness confusion loss, updates the liveness classifier module according to the liveness classification loss, updates the contour feature extraction module according to the contour reconstruction loss and the contour confusion loss, updates the facial contour reconstruction module according to the contour reconstruction loss, updates the domain feature extraction module according to the domain classification loss and the domain confusion loss, and updates the domain classifier module according to the domain classification loss.

4. The feature disentanglement system for domain-generalized face anti-spoofing of claim 2, wherein the contour feature extraction module receives at least one piece of training image data so as to extract contour features of the face image therefrom, and the facial contour reconstruction module reconstructs a facial contour according to the contour features, so that the processing module computes the contour reconstruction loss according to the reconstructed facial contour.

5. The feature disentanglement system for domain-generalized face anti-spoofing of claim 4, wherein the liveness classifier module and the domain classifier module respectively compute a second liveness classification result and a second domain classification result according to the contour features, so that the processing module computes the contour confusion loss according to the second liveness classification result and the second domain classification result.

6. The feature disentanglement system for domain-generalized face anti-spoofing of claim 2, wherein the domain feature extraction module receives at least one piece of training image data so as to extract domain features therefrom, and the liveness classifier module and the domain classifier module respectively compute a third liveness classification result and a third domain classification result according to the domain features, so that the processing module respectively computes the domain classification loss and the domain confusion loss according to the third domain classification result and the third liveness classification result.
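Claims 3 to 6 fix which losses update which modules: each encoder is trained with its task loss plus its confusion loss, while each classifier or decoder is trained with its task loss only. The sketch below, reusing the modules and `liveness_losses()` above, routes gradients with `torch.autograd.grad` so that a loss touches only the modules named for it. The plain SGD update and its learning rate, the unit loss weights, and the MSE contour reconstruction loss are illustrative assumptions.

```python
# Hypothetical training step with the loss-to-module routing of claim 3.
import itertools
import torch
import torch.nn.functional as F

def step(modules, loss, lr=1e-4):
    """Apply gradients of `loss` to `modules` only (plain SGD, assumed lr)."""
    params = list(itertools.chain(*(m.parameters() for m in modules)))
    grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
    with torch.no_grad():
        for p, g in zip(params, grads):
            if g is not None:
                p.sub_(lr * g)

def training_step(images, live_labels, domain_labels, contour_gt):
    # Liveness branch (claim 1).
    l_cls, l_conf = liveness_losses(images, live_labels)

    # Contour branch (claims 4-5): reconstruct the facial contour, and keep
    # contour features uninformative for both classifiers.
    f_cont = contour_encoder(images)
    l_recon = F.mse_loss(contour_decoder(f_cont), contour_gt)
    l_cconf = (-F.log_softmax(liveness_classifier(f_cont), dim=1).mean()
               - F.log_softmax(domain_classifier(f_cont), dim=1).mean())

    # Domain branch (claim 6): predict the source domain, and keep domain
    # features uninformative for the liveness classifier.
    f_dom = domain_encoder(images)
    l_dcls = F.cross_entropy(domain_classifier(f_dom), domain_labels)
    l_dconf = -F.log_softmax(liveness_classifier(f_dom), dim=1).mean()

    # Claim 3: route each loss to the modules it is claimed to update.
    step([liveness_encoder], l_cls + l_conf)
    step([liveness_classifier], l_cls)
    step([contour_encoder], l_recon + l_cconf)
    step([contour_decoder], l_recon)
    step([domain_encoder], l_dcls + l_dconf)
    step([domain_classifier], l_dcls)
    return float(l_cls + l_conf + l_recon + l_cconf + l_dcls + l_dconf)
```

Explicit routing via `torch.autograd.grad` is one of several ways to realize the claimed scheme; it ensures, for example, that the contour confusion loss shapes the contour encoder without also training the two classifiers it probes.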
7. A feature disentanglement method for domain-generalized face anti-spoofing, comprising: receiving, by a liveness feature extraction module, image data of a face image to be judged for liveness, so as to extract liveness features from the image data; receiving, by a liveness classifier module, the liveness features of the image data, and computing a liveness classification result of the image data according to the liveness features of the image data; and receiving, by a processing module, the liveness classification result of the image data, and determining, according to the liveness classification result of the image data, whether the face image in the image data is live, wherein the liveness feature extraction module receives at least one piece of training image data so as to extract liveness features of the training image data therefrom; the liveness classifier module and a domain classifier module respectively compute a first liveness classification result and a first domain classification result according to the liveness features of the training image data; and the processing module respectively computes a liveness classification loss and a liveness confusion loss according to the first liveness classification result and the first domain classification result.

8. The feature disentanglement method for domain-generalized face anti-spoofing of claim 7, further comprising computing, by the processing module, a contour reconstruction loss, a contour confusion loss, a domain classification loss, and a domain confusion loss.

9. The feature disentanglement method for domain-generalized face anti-spoofing of claim 8, further comprising updating, by the processing module, the liveness feature extraction module according to the liveness classification loss and the liveness confusion loss, the liveness classifier module according to the liveness classification loss, a contour feature extraction module according to the contour reconstruction loss and the contour confusion loss, a facial contour reconstruction module according to the contour reconstruction loss, a domain feature extraction module according to the domain classification loss and the domain confusion loss, and the domain classifier module according to the domain classification loss.

10. The feature disentanglement method for domain-generalized face anti-spoofing of claim 9, further comprising: receiving, by the contour feature extraction module, at least one piece of training image data so as to extract contour features of the face image therefrom; and reconstructing, by the facial contour reconstruction module, a facial contour according to the contour features, so that the processing module computes the contour reconstruction loss according to the reconstructed facial contour.
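At inference time, the method of claim 7 needs only the liveness branch: extract liveness features, classify, and let the processing module decide. A minimal sketch follows, assuming label 1 means "live" and a 0.5 threshold on the softmax probability, neither of which is fixed by the claim.

```python
# Hypothetical inference path for claim 7; threshold and label order assumed.
import torch
import torch.nn.functional as F

@torch.no_grad()
def is_live(image: torch.Tensor) -> bool:
    """image: (3, H, W) face crop; True if the face is judged to be live."""
    feats = liveness_encoder(image.unsqueeze(0))             # liveness features
    p_live = F.softmax(liveness_classifier(feats), dim=1)[0, 1]
    return bool(p_live > 0.5)                                # processing module's decision
```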
11. The feature disentanglement method for domain-generalized face anti-spoofing of claim 10, further comprising: computing, by the liveness classifier module and the domain classifier module, a second liveness classification result and a second domain classification result respectively according to the contour features; and computing, by the processing module, the contour confusion loss according to the second liveness classification result and the second domain classification result.

12. The feature disentanglement method for domain-generalized face anti-spoofing of claim 9, further comprising: receiving, by the domain feature extraction module, at least one piece of training image data so as to extract domain features therefrom; computing, by the liveness classifier module and the domain classifier module, a third liveness classification result and a third domain classification result respectively according to the domain features; and computing, by the processing module, the domain classification loss and the domain confusion loss respectively according to the third domain classification result and the third liveness classification result.

13. A computer-readable medium for use in a computing device or a computer, the computer-readable medium storing instructions for executing the feature disentanglement method for domain-generalized face anti-spoofing of any one of claims 7 to 12.
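Claim 13 covers a medium storing instructions that carry out the whole method. A driver that repeats `training_step()` from the earlier sketch until the summed loss stops improving could look like the following; the epoch cap, the tolerance, and the batch format yielded by the loader are all assumptions.

```python
# Hypothetical training driver; convergence test and loader format assumed.
def train(loader, max_epochs: int = 100, tol: float = 1e-4) -> None:
    """loader yields (images, live_labels, domain_labels, contour_gt) batches."""
    prev = float("inf")
    for _ in range(max_epochs):
        total = sum(training_step(*batch) for batch in loader)
        if abs(prev - total) < tol:   # crude convergence criterion
            break
        prev = total
```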
TW111121276A 2022-06-08 2022-06-08 A feature disentanglement system, method and computer-readable medium thereof for domain generalized face anti-spoofing TWI807851B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW111121276A TWI807851B (en) 2022-06-08 2022-06-08 A feature disentanglement system, method and computer-readable medium thereof for domain generalized face anti-spoofing

Publications (2)

Publication Number Publication Date
TWI807851B true TWI807851B (en) 2023-07-01
TW202349261A TW202349261A (en) 2023-12-16

Family

ID=88149292

Country Status (1)

Country Link
TW (1) TWI807851B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW202026948A (en) * 2018-12-29 2020-07-16 大陸商北京市商湯科技開發有限公司 Methods and devices for biological testing and storage medium thereof
CN112287765A (en) * 2020-09-30 2021-01-29 新大陆数字技术股份有限公司 Face living body detection method, device and equipment and readable storage medium
US20210200995A1 (en) * 2017-03-16 2021-07-01 Beijing Sensetime Technology Development Co., Ltd Face anti-counterfeiting detection methods and systems, electronic devices, programs and media
CN113255511A (en) * 2021-05-21 2021-08-13 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for living body identification

Similar Documents

Publication Publication Date Title
TWI686774B (en) Human face live detection method and device
US11106920B2 (en) People flow estimation device, display control device, people flow estimation method, and recording medium
WO2019109743A1 (en) Url attack detection method and apparatus, and electronic device
WO2021232985A1 (en) Facial recognition method and apparatus, computer device, and storage medium
US9633044B2 (en) Apparatus and method for recognizing image, and method for generating morphable face images from original image
JP5899472B2 (en) Person attribute estimation system and learning data generation apparatus
JP6678246B2 (en) Semantic segmentation based on global optimization
CN105225222B (en) Automatic assessment of perceptual visual quality of different image sets
Li et al. Face spoofing detection with image quality regression
WO2019056503A1 (en) Store monitoring evaluation method, device and storage medium
WO2019200702A1 (en) Descreening system training method and apparatus, descreening method and apparatus, device, and medium
WO2016172923A1 (en) Video detection method, video detection system, and computer program product
WO2018078857A1 (en) Line-of-sight estimation device, line-of-sight estimation method, and program recording medium
WO2022078168A1 (en) Identity verification method and apparatus based on artificial intelligence, and computer device and storage medium
CN107316029A (en) A kind of live body verification method and equipment
Parde et al. Face and image representation in deep CNN features
CN114925748A (en) Model training and modal information prediction method, related device, equipment and medium
CN111680544B (en) Face recognition method, device, system, equipment and medium
CN113139462A (en) Unsupervised face image quality evaluation method, electronic device and storage medium
US20210150238A1 (en) Methods and systems for evaluatng a face recognition system using a face mountable device
WO2021042544A1 (en) Facial verification method and apparatus based on mesh removal model, and computer device and storage medium
Parde et al. Deep convolutional neural network features and the original image
CN111382791A (en) Deep learning task processing method, image recognition task processing method and device
TWI807851B (en) A feature disentanglement system, method and computer-readable medium thereof for domain generalized face anti-spoofing
CN110147740B (en) Face recognition method, device, equipment and storage medium