TW202014917A - Authentication method and electronic device using the same - Google Patents
- Publication number
- TW202014917A (Application TW107136107A)
- Authority
- TW
- Taiwan
- Prior art keywords
- optical flow
- flow information
- action
- video stream
- electronic device
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
Description
The invention relates to an identity verification method and an electronic device using the method.
In the past, identity verification on electronic devices was performed by entering a personal password through an input device such as a keyboard, touch screen, or mouse. The process is cumbersome, and forgotten passwords frequently inconvenience users.
In recent years, many biometric identity verification methods have been developed, including fingerprint recognition, voiceprint recognition, iris recognition, and even facial recognition, each with its own advantages and disadvantages. For example, facial recognition extracts facial features from static photos for comparison, but the accuracy of existing facial recognition algorithms still needs improvement, and they do not work equally well for all groups of users (for example, accuracy can vary with the user's skin color).
Therefore, providing a fast and accurate identity verification method is a common goal of those skilled in the art.
In view of this, the invention provides an identity verification method and an electronic device using the method that are fast, accurate, and applicable to users of all groups.
The identity verification method of an embodiment of the invention is applicable to an electronic device and includes the following steps: issuing an action prompt message; activating an image capturing element of the electronic device to obtain a first video stream; calculating first optical flow information from two consecutive images in the first video stream; and inputting the first optical flow information into a neural network model to obtain an identity verification result, wherein the neural network model includes an action label corresponding to the action prompt message, at least one piece of second optical flow information corresponding to the action label, and identity registration information corresponding to each piece of second optical flow information.
In an embodiment of the invention, the identity verification method further includes the following steps before the step of issuing the action prompt message: collecting at least one second video stream of at least one user according to a plurality of different action labels; calculating at least one piece of second optical flow information from two consecutive images in each second video stream; and training the neural network model with the at least one piece of second optical flow information.
In an embodiment of the invention, the action prompt message corresponds to one of the action labels.
In an embodiment of the invention, the step of training the neural network model with the at least one piece of second optical flow information further includes assigning identity registration information to the corresponding second optical flow information.
In an embodiment of the invention, the processor triggers an execution event of the electronic device according to the identity verification result.
The electronic device of an embodiment of the invention includes a prompt element, an image capturing element, and a processor. The prompt element issues an action prompt message. After the action prompt message is issued, the image capturing element is activated to obtain a first video stream. The processor is signal-connected to the prompt element and the image capturing element and runs a neural network model, wherein the neural network model includes an action label corresponding to the action prompt message, at least one piece of second optical flow information corresponding to the action label, and identity registration information corresponding to each piece of second optical flow information. The processor performs the following steps: calculating first optical flow information from two consecutive images in the first video stream, and inputting the first optical flow information into the neural network model to obtain an identity verification result.
Based on the above, the identity verification method and the electronic device using the method proposed in embodiments of the invention exploit the fact that different individuals perform the same action instruction differently. A video stream of the user performing the action is captured, optical flow information is calculated from adjacent images in the video stream, and this optical flow information is fed into a trained neural network model as the basis for identity verification. The method is therefore applicable to users of all appearances and verifies quickly.
To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
Different individuals perform the same action instruction (for example, nodding, shaking the head, or turning the head) differently, and if the actions are filmed, these differences are reflected in the optical flow information of the corresponding video stream. The identity verification method of the embodiments of the invention therefore uses the optical flow information of a video stream as the basis for identity verification, and a classifier trained by machine learning can then accurately produce the identity verification result.
FIG. 1 is a schematic diagram of an electronic device according to an embodiment of the invention.
Referring to FIG. 1, the electronic device 100 includes a processor 110, an image capturing element 120, a prompt element 130, and a storage element 140, where the image capturing element 120, the prompt element 130, and the storage element 140 are all signal-connected to the processor 110, and the processor 110 runs a neural network model 150. In some embodiments, the electronic device 100 is, for example, a personal computer, a server, a notebook computer, a tablet computer, or a smartphone; the invention is not limited in this respect.
The processor 110 cooperates with the other elements of the electronic device 100 to perform the identity verification method of the embodiments of the invention. The processor 110 may be, for example, a central processing unit (CPU) of any type such as dual-core, quad-core, or octa-core, a system-on-chip (SoC), an application processor, a media processor, a microprocessor, a digital signal processor, a programmable controller, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), another similar device, or a combination of these devices; the invention does not limit the type of processor used in an implementation. In some embodiments, the processor 110 is, for example, responsible for the overall operation of the electronic device 100.
The image capturing element 120 captures video streams. The image capturing element 120 may, for example, be built into or externally connected to the electronic device 100 and be a camera lens equipped with a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) sensor, or another type of photosensitive element, but the invention is not limited thereto. In some embodiments, the image capturing element 120 is, for example, a camera embedded above the screen of the electronic device 100.
The prompt element 130 issues action prompt messages to prompt the user to perform the corresponding action. The prompt element 130 may be, for example, a display or a speaker that issues the action prompt message as a moving image, a still image, text, or sound, but the invention is not limited thereto.
The storage element 140 records the data required by the identity verification method, such as the users' identity registration data and the weights or parameters of the classifier. The storage element 140 may be, for example, any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, hard disk, another similar device, or a combination of these devices, but the invention is not limited thereto.
Since the electronic device 100 can perform the identity verification method of the embodiments of the invention, the method is described in detail below with reference to the elements of the electronic device 100. It is worth noting, however, that the invention does not limit the device to which the identity verification method applies.
For clarity, the identity verification method is described below in two parts: training the neural network model 150, and performing the actual identity verification step. In some embodiments, training the neural network model 150 serves, for example, to tune the parameters and weights used by the processor 110 when running the neural network model 150, while the actual identity verification step uses the trained neural network model 150 to verify identity.
FIG. 2 shows a flowchart of training a neural network model according to an embodiment of the invention.
Referring to FIG. 2, in step S210 the processor 110 collects at least one second video stream of at least one user according to a plurality of different action labels. In some embodiments, the action labels include, for example, nodding, shaking the head, turning the head, or drawing a circle with the face.
In some embodiments, step S210 is, for example, the user's registration step. The processor 110 may, for example, prompt the user through the prompt element 130 to perform one of the action labels (for example, nodding, shaking the head, turning the head, or drawing a circle with the face) and then capture a second video stream through the image capturing element 120, thereby collecting a second video stream of the user performing the action corresponding to the action prompt message. In some embodiments, the user's second video stream may also be pre-stored, for example, in the storage element 140 or on a cloud drive (not shown), in which case the processor 110 obtains the user's second video stream from the storage element 140 or the cloud drive in step S210; the invention is not limited in this respect.
For example, suppose four users A, B, C, and D register, and the processor 110 collects the second video streams of users A, B, C, and D corresponding to the different action labels. Taking user A as an example, the processor 110 may, for the action label "nod," prompt user A to nod several times and correspondingly obtain a second video stream of user A nodding; for the action label "shake head," prompt user A to shake the head several times and correspondingly obtain a second video stream of user A shaking the head; and so on. In this way, the processor 110 obtains multiple nodding video streams, multiple head-shaking video streams, and so on, for each of users A, B, C, and D.
In some embodiments, the processor 110, for example, stores all the collected second video streams in the storage element 140 and organizes them by folder naming. For example, when user A registers, the processor 110 may record all of user A's second video streams labeled "nod" in a "nod" subfolder of folder "A" (for example, the data path is A->nod) and record all of registered user A's second video streams labeled "shake head" in a "shake head" subfolder of folder "A" (for example, the data path is A->shake head); when user B registers, the processor 110 may record all of registered user B's second video streams labeled "nod" in a "nod" subfolder of folder "B" (for example, the data path is B->nod), and so on.
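A folder convention like the one above can be turned directly into training labels. The sketch below is illustrative only: the data root, folder names, and file extension are assumptions for this example, not part of the patent.

```python
import os

def label_from_path(path):
    """Derive a (user, action) training label from a registration clip path.

    Assumes the hypothetical layout <root>/<user>/<action>/<clip>,
    e.g. "data/A/nod/clip_001.mp4".
    """
    parts = os.path.normpath(path).split(os.sep)
    if len(parts) < 3:
        raise ValueError("expected a path of the form <root>/<user>/<action>/<clip>")
    user, action = parts[-3], parts[-2]
    return user, action
```

With this convention, walking the data root with `os.walk` yields every clip already paired with the identity registration information and action label needed in step S231.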
In step S220, the processor 110 calculates, from two consecutive images in each second video stream, the second optical flow information of each collected second video stream corresponding to its action label. In detail, for each collected second video stream, the processor 110 may first sample it (for example, but not limited to, at 30 frames per second) to obtain multiple images of the second video stream. The processor 110 then calculates the displacement vectors of the optical flow points between adjacent consecutive images as the second optical flow information corresponding to the action label.
For example, if the processor 110 obtains 10 images after sampling a second video stream, it calculates the displacement vectors of multiple optical flow points between the 2nd image and the 1st image, between the 3rd image and the 2nd image, and so on. The processor 110 thus obtains 9 sets of displacement vectors of optical flow points between pairs of consecutive images, and these 9 sets can serve as the second optical flow information of the second video stream.
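The bookkeeping described above can be sketched as follows. This is a minimal sketch that assumes the positions of the tracked optical flow points in each frame are already available (in practice a library routine such as OpenCV's `calcOpticalFlowPyrLK` would supply those correspondences; how the points are found is outside this sketch and outside the patent's example).

```python
def displacements(points_prev, points_next):
    """Per-point displacement vectors (dx, dy) between two consecutive frames.

    points_prev / points_next: lists of (x, y) positions of the same tracked
    optical flow points in frame t and frame t+1.
    """
    if len(points_prev) != len(points_next):
        raise ValueError("each tracked point needs a position in both frames")
    return [(x1 - x0, y1 - y0)
            for (x0, y0), (x1, y1) in zip(points_prev, points_next)]

def clip_flow(frames_points):
    """For a clip sampled into N frames, return the N-1 sets of displacement
    vectors between consecutive frame pairs -- the "optical flow information"
    of the clip in the sense used above.
    """
    return [displacements(a, b)
            for a, b in zip(frames_points, frames_points[1:])]
```

For a clip sampled into 10 frames, `clip_flow` returns exactly 9 sets of displacement vectors, matching the example in the text.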
As for how optical flow points are defined and how optical flow information is calculated, those of ordinary skill in the art are well able to implement them from the relevant literature, so they are not elaborated here.
For example, after collecting the second video streams of the four users A, B, C, and D, the processor 110 calculates the second optical flow information of all the second video streams in the manner described above, and each piece of second optical flow information corresponds to one action label of one of the users A, B, C, and D.
In step S230, the processor 110 trains the neural network model 150 with the obtained second optical flow information. To train the neural network model 150, in step S231 the processor 110 first assigns identity registration information to the corresponding second optical flow information, and then in step S233 trains the neural network model 150 with each piece of second optical flow information and its identity registration information (for example, by optimizing the weights or parameters of the classifier). It should be noted that those of ordinary skill in the art can determine and implement the specific way of training the neural network model 150 from their knowledge of machine learning, so the details are not elaborated here.
In some embodiments, the identity registration information assigned to each piece of second optical flow information includes, for example, the personal identity information, such as the name, of the user to whom it belongs. For example, the identity registration information of all second optical flow information corresponding to user A nodding includes A's name, the identity registration information of all second optical flow information corresponding to user B shaking the head includes B's name, and so on.
In some embodiments, the trained neural network model 150 is, for example, a recurrent neural network (RNN) with long short-term memory (LSTM), which includes neurons with memory. A trained LSTM recurrent neural network can therefore classify optical flow information more accurately. It should be noted that the invention does not limit the type of the neural network model 150.
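The "memory" of an LSTM neuron comes from its gating. The toy below is a scalar (hidden size 1) LSTM cell stepped over a short input sequence; real models use vector states and learned weight matrices, and the weight values here are arbitrary assumptions for illustration, not values from the patent.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a scalar LSTM cell.

    x: current input (e.g. one component of a flow displacement vector);
    h_prev, c_prev: previous hidden and cell state;
    w: dict of per-gate input weights, recurrent weights, and biases.
    """
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])       # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])       # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])       # output gate
    c_tilde = math.tanh(w["wc"] * x + w["uc"] * h_prev + w["bc"])  # candidate
    c = f * c_prev + i * c_tilde  # old memory kept by f, new content added by i
    h = o * math.tanh(c)          # hidden state exposed to the next layer
    return h, c

# Step the cell over a short sequence with arbitrary illustrative weights.
w = {k: 0.5 for k in ("wf", "uf", "bf", "wi", "ui", "bi",
                      "wo", "uo", "bo", "wc", "uc", "bc")}
h, c = 0.0, 0.0
for x in (1.0, -0.5, 0.25):
    h, c = lstm_step(x, h, c, w)
```

Because the cell state `c` carries information across steps, the final hidden state depends on the whole displacement sequence, which is why a sequence model is suited to motion data like optical flow.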
The neural network model 150 obtained after completing step S230 amounts to a set of optimized weights or parameters of the neural network model 150, and the processor 110 records these optimized weights or parameters in the storage element 140.
FIG. 3 illustrates a flowchart of identity verification according to an embodiment of the invention.
Referring to FIG. 3, in step S310 the prompt element 130 issues an action prompt message to prompt the user to perform an action, and the action prompt message corresponds to one of the action labels. After the action prompt message is issued, the image capturing element 120 is activated; in step S320, the image capturing element 120 films the user performing the action corresponding to the action prompt message to obtain a first video stream and transmits the first video stream to the processor 110.
In some embodiments, the neural network model 150 is trained with, for example, multiple action labels, such as nodding, shaking the head, turning the head, or drawing a circle with the face. Therefore, when verifying identity, the processor 110 issues an action prompt message through the prompt element 130 to prompt the user to perform one of the action labels. After issuing the action prompt message, the processor 110 obtains the first video stream through the image capturing element 120 within a specific period after the prompt (for example, 10 seconds by default, but not limited thereto). For example, the image capturing element 120 may film continuously from the moment the electronic device 100 wakes up or a specific application is opened, and the processor 110 takes the video stream covering the specific period from the moment the action prompt message is issued for subsequent verification.
In step S330, the processor 110 calculates first optical flow information from adjacent consecutive images in the first video stream. The detailed way of calculating the first optical flow information from adjacent consecutive images in the first video stream is similar to the way the second optical flow information of a second video stream is calculated in step S220, so it is not repeated here.
In step S340, the processor 110 runs the neural network model 150 on the calculated first optical flow information to output an identity verification result. In some embodiments, the trained neural network model 150 can classify the input first optical flow information into one of the pieces of personal identity information or into other user information, where classification as other user information means the first optical flow information cannot be attributed to any registered user. For example, since the identity registration information used to train the neural network model 150 includes A, B, C, and D, the trained neural network model 150 can calculate the probability that the input first optical flow information belongs to each of users A, B, C, and D. If the probability that the input first optical flow information belongs to user A is the highest and exceeds a preset threshold, the neural network model 150, for example, classifies the input first optical flow information as user A, and likewise for users B, C, and D. If none of the probabilities that the input first optical flow information belongs to A, B, C, or D exceeds the preset threshold, the neural network model 150, for example, classifies the input first optical flow information as "other."
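The decision rule of step S340 (highest probability wins, but only above a preset threshold) can be sketched as a small helper; the threshold value of 0.8 is an illustrative assumption, as the patent does not specify one.

```python
def classify(probs, threshold=0.8):
    """Map per-user probabilities to an identity verification result.

    probs: dict mapping registered identities (e.g. "A".."D") to the model's
    output probabilities; threshold: the preset acceptance threshold.
    Returns the best-matching identity, or "other" when no probability
    exceeds the threshold.
    """
    best_user = max(probs, key=probs.get)
    if probs[best_user] > threshold:
        return best_user
    return "other"
```

For example, probabilities of {A: 0.9, B: 0.05, C: 0.03, D: 0.02} yield "A," while a flat distribution with no value above the threshold yields "other," i.e. verification fails.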
In step S350, the processor 110 triggers an execution event of the electronic device 100 according to the identity verification result output by the neural network model.
In some embodiments, the identity verification result indicates, for example, whether the user passed verification. For example, when the identity verification result classifies the first optical flow information as one of the pieces of identity registration information (for example, one of the users' names such as A, B, C, or D), or classifies the optical flow information as "registered," the processor 110 outputs an identity verification result indicating that the user passed verification. On the other hand, when the identity verification result classifies the first optical flow information as matching no identity registration information (for example, "other"), or classifies the first optical flow information as "unregistered," the processor 110 outputs an identity verification result indicating that the user failed verification. Accordingly, the processor 110 can, for example, decide whether to trigger an execution event of the electronic device 100 according to the identity verification result; the execution event is, for example, allowing the user to log in to the electronic device 100 or run a specific application.
In summary, the identity verification method and the electronic device using the method proposed in embodiments of the invention exploit the fact that different individuals perform the same action instruction differently: the user performing the action is filmed as a video stream, the optical flow information of the video stream is calculated, and this optical flow information serves as the basis for identity verification. The method is therefore applicable to users of all appearances and verifies quickly. In addition, the embodiments of the invention further use a deep-learning artificial neural network model, which greatly improves the accuracy of verifying identity with optical flow information.
Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Anyone of ordinary skill in the art may make some changes and modifications without departing from the spirit and scope of the invention, so the scope of protection of the invention shall be as defined by the appended claims.
100: electronic device
110: processor
120: image capturing element
130: prompt element
140: storage element
150: neural network model
S210, S220, S230, S231, S233: steps of training the neural network model in the identity verification method
S310, S320, S330, S340, S350: steps of performing identity verification in the identity verification method
FIG. 1 is a schematic diagram of an electronic device according to an embodiment of the invention.
FIG. 2 shows a flowchart of training a neural network model according to an embodiment of the invention.
FIG. 3 illustrates a flowchart of identity verification according to an embodiment of the invention.
Claims (10)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW107136107A TW202014917A (en) | 2018-10-12 | 2018-10-12 | Authentication method and electronic device using the same |
CN201910556051.5A CN111046898A (en) | 2018-10-12 | 2019-06-25 | Identity authentication method and electronic device using same |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW107136107A TW202014917A (en) | 2018-10-12 | 2018-10-12 | Authentication method and electronic device using the same |
Publications (1)
Publication Number | Publication Date |
---|---|
TW202014917A (en) | 2020-04-16 |
Family
ID=70231718
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW107136107A TW202014917A (en) | 2018-10-12 | 2018-10-12 | Authentication method and electronic device using the same |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111046898A (en) |
TW (1) | TW202014917A (en) |
- 2018-10-12: TW application TW107136107A filed (published as TW202014917A)
- 2019-06-25: CN application CN201910556051.5A filed (published as CN111046898A, pending)
Also Published As
Publication number | Publication date |
---|---|
CN111046898A (en) | 2020-04-21 |