TW202014917A - Authentication method and electronic device using the same - Google Patents

Authentication method and electronic device using the same

Info

Publication number
TW202014917A
Authority
TW
Taiwan
Prior art keywords
optical flow
flow information
action
video stream
electronic device
Prior art date
Application number
TW107136107A
Other languages
Chinese (zh)
Inventor
鄭圳州
李明隆
Original Assignee
和碩聯合科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 和碩聯合科技股份有限公司
Priority to TW107136107A
Priority to CN201910556051.5A
Publication of TW202014917A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Abstract

An authentication method is provided. The authentication method includes: issuing a motion prompt message; activating an image capturing element of an electronic device to acquire a first video stream; calculating first optical flow information from two consecutive images within the first video stream; and inputting the first optical flow information into a neural network model to obtain an authentication result, where the neural network model includes a motion tag corresponding to the motion prompt message, at least one piece of second optical flow information corresponding to the motion tag, and identity registration information corresponding to each piece of second optical flow information. In addition, an electronic device using the authentication method is also provided.

Description

Identity verification method and electronic device using the method

The invention relates to an identity verification method and an electronic device using the method.

In the past, identity verification on an electronic device was performed by entering a personal password through an input device such as a keyboard, touch screen, or mouse. The process is cumbersome, and forgotten passwords frequently inconvenience users.

In recent years, many biometric identity verification methods have been developed, including fingerprint recognition, voiceprint recognition, iris recognition, and even facial recognition, each with its own advantages and disadvantages. For example, facial recognition extracts facial features from static photographs for comparison, but the accuracy of existing facial recognition algorithms still needs improvement, and they do not work equally well for users of all groups (for example, accuracy may vary with the user's skin tone).

Therefore, providing a fast and accurate identity verification method is a common goal of those skilled in the art.

In view of this, the invention provides an identity verification method and an electronic device using the identity verification method that are fast, accurate, and suitable for users of all groups.

The identity verification method of an embodiment of the invention is applicable to an electronic device and includes the following steps: issuing a motion prompt message; activating an image capturing element of the electronic device to obtain a first video stream; calculating first optical flow information from two consecutive images in the first video stream; and inputting the first optical flow information into a neural network model to obtain an identity verification result, where the neural network model includes a motion tag corresponding to the motion prompt message, at least one piece of second optical flow information corresponding to the motion tag, and identity registration information corresponding to each piece of second optical flow information.

In an embodiment of the invention, before the step of issuing the motion prompt message, the identity verification method further includes the following steps: collecting at least one second video stream of at least one user according to a plurality of different motion tags; calculating at least one piece of second optical flow information of the at least one second video stream from two consecutive images in each second video stream; and training the neural network model with the at least one piece of second optical flow information.

In an embodiment of the invention, the motion prompt message corresponds to one of the motion tags.

In an embodiment of the invention, the step of training the neural network model with the at least one piece of second optical flow information further includes the following step: assigning identity registration information to the corresponding second optical flow information.

In an embodiment of the invention, the processor triggers an execution event of the electronic device according to the identity verification result.

The electronic device of an embodiment of the invention includes a prompt element, an image capturing element, and a processor. The prompt element issues a motion prompt message. After the motion prompt message is issued, the image capturing element is activated to obtain a first video stream. The processor is signal-connected to the prompt element and the image capturing element and runs a neural network model, where the neural network model includes a motion tag corresponding to the motion prompt message, at least one piece of second optical flow information corresponding to the motion tag, and identity registration information corresponding to each piece of second optical flow information. The processor performs the following steps: calculating first optical flow information from two consecutive images in the first video stream; and inputting the first optical flow information into the neural network model to obtain an identity verification result.

Based on the above, the identity verification method and electronic device of the embodiments of the invention exploit the fact that different individuals perform the same motion command in slightly different ways. A video stream of the user performing the motion is captured, the optical flow information of two adjacent images in the video stream is calculated, and this optical flow information is fed into a trained neural network model as the basis for identity verification. The method is therefore applicable to users of all appearances and offers high verification speed.

To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

Different individuals perform the same motion command (for example, nodding, shaking the head, or turning the head) in different ways. If the motion is filmed, these differences are reflected in the optical flow information of the resulting video stream. The identity verification method of the embodiments of the invention therefore uses the optical flow information of a video stream as the basis for identity verification; using machine learning, a trained classifier can then produce an accurate identity verification result.

FIG. 1 is a schematic diagram of an electronic device according to an embodiment of the invention.

Referring to FIG. 1, the electronic device 100 includes a processor 110, an image capturing element 120, a prompt element 130, and a storage element 140, where the image capturing element 120, the prompt element 130, and the storage element 140 are all signal-connected to the processor 110, and the processor 110 runs a neural network model 150. In some embodiments, the electronic device 100 is, for example, a personal computer, a server, a notebook computer, a tablet computer, or a smartphone; the invention is not limited thereto.

The processor 110 cooperates with the other elements of the electronic device 100 to carry out the identity verification method of the embodiments of the invention. The processor 110 may be, for example, a dual-core, quad-core, or octa-core central processing unit (CPU), a system-on-chip (SoC), an application processor, a media processor, a microprocessor, a digital signal processor (DSP), a programmable controller, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), another similar device, or a combination of these devices; the invention does not limit the type of processor used in an implementation. In some embodiments, the processor 110 is, for example, responsible for the overall operation of the electronic device 100.

The image capturing element 120 captures video streams. The image capturing element 120 may be, for example, built into or external to the electronic device 100, and may be a camera lens equipped with a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) sensor, or another kind of photosensitive element, but the invention is not limited thereto. In some embodiments, the image capturing element 120 is, for example, a camera embedded above the screen of the electronic device 100.

The prompt element 130 issues a motion prompt message to prompt the user to perform the corresponding motion. The prompt element 130 may be, for example, a display or a speaker, issuing the motion prompt message as a moving image, a still image, text, or sound, but the invention is not limited thereto.

The storage element 140 records the data required by the identity verification method, such as the users' identity registration data and the weight values or parameters of the classifier. The storage element 140 may be, for example, any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, hard disk, another similar device, or a combination of these devices, but the invention is not limited thereto.

Since the electronic device 100 can execute the identity verification method of the embodiments of the invention, the method is described in detail below with reference to the elements of the electronic device 100. It is worth noting, however, that the invention does not limit the device to which the identity verification method is applied.

For clarity, the identity verification method is described below in two parts: training the neural network model 150, and performing the actual identity verification. In some embodiments, training the neural network model 150 is used, for example, to tune the parameters and weight values the processor 110 uses when running the model, while the actual identity verification step uses, for example, the trained neural network model 150 to verify identity.

FIG. 2 is a flowchart of training a neural network model according to an embodiment of the invention.

Referring to FIG. 2, in step S210 the processor 110 collects at least one second video stream of at least one user according to a plurality of different motion tags. In some embodiments, the motion tags include, for example, nodding, shaking the head, turning the head, or drawing a circle with the face.

In some embodiments, step S210 is, for example, a user registration step. The processor 110 may prompt the user through the prompt element 130 to perform one of the motion tags (for example, nodding, shaking the head, turning the head, or drawing a circle with the face) and then film a second video stream through the image capturing element 120, thereby collecting the second video stream of the motion the user performs in response to the motion prompt message. In some embodiments, the user's second video streams may also, for example, be pre-stored in the storage element 140 or on a cloud drive (not shown), in which case the processor 110 retrieves the user's second video streams from the storage element 140 or the cloud drive in step S210; the invention is not limited thereto.

For example, suppose four users "A", "B", "C", and "D" register. The processor 110 collects the second video streams of users "A", "B", "C", and "D" corresponding to the different motion tags. Taking user "A" as an example, the processor 110 may use the motion tag "nod" to prompt user "A" to nod several times and obtain the corresponding second video streams of user "A" nodding; it may then use the motion tag "shake head" to prompt user "A" to shake the head several times and obtain the corresponding second video streams of user "A" shaking the head, and so on. In this way, the processor 110 obtains multiple nodding video streams, multiple head-shaking video streams, and so forth for each of "A", "B", "C", and "D".

In some embodiments, the processor 110 stores, for example, all of the collected second video streams in the storage element 140 and organizes them by folder name. For example, when user "A" registers, the processor 110 may record all second video streams of user "A" tagged "nod" in a "nod" subfolder of the "A" folder (for example, data path A -> nod), and record all second video streams of registered user "A" tagged "shake head" in a "shake head" subfolder of the "A" folder (for example, data path A -> shake head); when user "B" registers, the processor 110 may record all second video streams of registered user "B" tagged "nod" in a "nod" subfolder of the "B" folder (for example, data path B -> nod), and so on.

In step S220, the processor 110 calculates, from two consecutive images in each second video stream, the second optical flow information of each collected second video stream corresponding to the motion tag. Specifically, for each collected second video stream, the processor 110 first samples the stream (for example, but not limited to, 30 frames per second) to obtain multiple images. The processor 110 then calculates the displacement vectors of the optical flow points between adjacent consecutive images as the second optical flow information of the corresponding motion tag.

For example, when the processor 110 samples a second video stream and obtains 10 images, it calculates the displacement vectors of multiple optical flow points between the second image and the first image, the displacement vectors of multiple optical flow points between the third image and the second image, and so on. The processor 110 thus obtains nine sets of displacement vectors of optical flow points between pairs of images, and these nine sets of displacement vectors serve as the second optical flow information of the second video stream.

As for how optical flow points are defined and how optical flow information is calculated, a person of ordinary skill in the art is capable of implementing these from the relevant literature, so they are not elaborated here.
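The patent leaves the choice of flow algorithm to the literature. As one illustration, the frame-pair bookkeeping of step S220 (10 sampled frames yielding 9 sets of displacement vectors) can be sketched with a toy single-window Lucas-Kanade estimator in NumPy. The function names and the fixed grid of flow points are assumptions; a real system would more likely use a pyramidal or dense optical flow implementation:

```python
import numpy as np

def lucas_kanade_point(prev, curr, y, x, win=9):
    """Estimate the displacement vector (dx, dy) of one optical flow point
    between two consecutive grayscale frames, using a single
    Lucas-Kanade window centred on (y, x)."""
    h = win // 2
    p = prev[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    c = curr[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    iy, ix = np.gradient(p)                      # spatial gradients of the patch
    it = c - p                                   # temporal gradient
    A = np.stack([ix.ravel(), iy.ravel()], axis=1)
    b = -it.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)    # solve Ix*dx + Iy*dy = -It
    return v                                     # (dx, dy)

def stream_to_flow(frames, points, win=9):
    """Turn N sampled frames into N-1 sets of per-point displacement
    vectors, as in the step S220 example (10 frames -> 9 sets)."""
    flows = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        flows.append([lucas_kanade_point(prev, curr, y, x, win)
                      for (y, x) in points])
    return np.array(flows)                       # shape (N-1, n_points, 2)
```

On a smooth synthetic image translated by one pixel per frame, the recovered displacement vectors are close to (1, 0), which is the kind of per-pair motion signature the model consumes.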

For example, after collecting the second video streams of users "A", "B", "C", and "D", the processor 110 calculates, in the manner described above, the second optical flow information of every second video stream, and each piece of second optical flow information corresponds to one motion tag of one of the users "A", "B", "C", and "D".

In step S230, the processor 110 trains the neural network model 150 with the obtained second optical flow information. To train the neural network model 150, the processor 110 first assigns identity registration information to the corresponding second optical flow information in step S231, and then trains the neural network model 150 with each piece of second optical flow information and its identity registration information in step S233 (for example, optimizing multiple weight values or parameters of the classifier). It should be noted that a person of ordinary skill in the art can determine and implement the specific way of training the neural network model 150 from their knowledge of machine learning, so the details are not elaborated here.

In some embodiments, the identity registration information assigned to each piece of second optical flow information includes, for example, the personal identity information of the user it belongs to, such as a name. For example, the identity registration information of all second optical flow information corresponding to user "A" nodding includes the name of "A", the identity registration information of all second optical flow information corresponding to user "B" shaking the head includes the name of "B", and so on.

In some embodiments, the trained neural network model 150 is, for example, a recurrent neural network (RNN) with long short-term memory (LSTM), which includes neurons with memory capability. A trained LSTM recurrent neural network can therefore provide more accurate classification of the optical flow information. It should be noted that the invention does not limit the type of the neural network model 150.
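As a shape-level sketch only, the LSTM-over-flow-sequence idea above can be illustrated with a minimal NumPy LSTM cell: each timestep consumes one frame pair's flattened displacement vectors, and the final hidden state is mapped to per-identity probabilities. The weights here are random stand-ins (in the patent they would be learned in step S233), and the class count, dimensions, and single-layer layout are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyLSTMClassifier:
    """Untrained sketch of an LSTM sequence classifier over optical flow."""
    def __init__(self, in_dim, hid_dim, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        # One fused weight matrix for the input/forget/cell/output gates.
        self.W = rng.normal(0, 0.1, (4 * hid_dim, in_dim + hid_dim))
        self.b = np.zeros(4 * hid_dim)
        self.Wo = rng.normal(0, 0.1, (n_classes, hid_dim))  # output layer
        self.hid = hid_dim

    def forward(self, seq):
        h = np.zeros(self.hid)
        c = np.zeros(self.hid)
        for x in seq:                                   # one step per frame pair
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # cell state ("memory")
            h = sigmoid(o) * np.tanh(c)
        logits = self.Wo @ h
        p = np.exp(logits - logits.max())               # stable softmax
        return p / p.sum()                              # identity probabilities
```

The recurrent cell state is what gives the model its "memory" over the motion, which is why an LSTM suits a temporal signal like optical flow.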

Completing step S230 yields the optimized weight values or parameters of the neural network model 150, and the processor 110 records these optimized weight values or parameters in the storage element 140.

FIG. 3 is a flowchart of identity verification according to an embodiment of the invention.

Referring to FIG. 3, in step S310 the prompt element 130 issues a motion prompt message to prompt the user to perform a motion, and the motion prompt message corresponds to one of the motion tags. After the motion prompt message is issued, the image capturing element 120 is activated; in step S320, the image capturing element 120 films the user performing the motion corresponding to the motion prompt message to obtain a first video stream, and transmits the first video stream to the processor 110.

In some embodiments, the neural network model 150 is trained with multiple motion tags, such as nodding, shaking the head, turning the head, or drawing a circle with the face. When verifying identity, the processor 110 therefore issues a motion prompt message through the prompt element 130 to prompt the user to perform one of the motion tags. After the processor 110 issues the motion prompt message, it obtains the first video stream, for example through the image capturing element 120, within a specific period after the prompt (for example, a default of 10 seconds, but not limited thereto). For example, the image capturing element 120 may film continuously from the moment the electronic device 100 is woken up or a specific application is opened, and the processor 110 takes the video stream for the specific period starting from the moment the motion prompt message is issued for subsequent verification.
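The timestamp-window behaviour described above can be sketched as a simple filter over a continuously captured stream. The function name and the `(timestamp, frame)` representation are assumptions made for illustration:

```python
def frames_in_window(timestamped_frames, prompt_time, window=10.0):
    """Keep only the frames captured within `window` seconds after the
    motion prompt message was issued (the example above uses a default
    of 10 seconds). `timestamped_frames` is an iterable of
    (timestamp, frame) pairs from a continuously running camera."""
    return [frame for t, frame in timestamped_frames
            if prompt_time <= t <= prompt_time + window]
```

The frames selected this way form the first video stream passed on to the optical flow step.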

In step S330, the processor 110 calculates first optical flow information from two adjacent consecutive images in the first video stream. The detailed way of calculating the first optical flow information from two adjacent consecutive images in the first video stream is similar to the way the second optical flow information of a second video stream is calculated in step S220, so it is not repeated here.

In step S340, the processor 110 runs the neural network model 150 on the calculated first optical flow information to output an identity verification result. In some embodiments, the trained neural network model 150 classifies the input first optical flow information as belonging to one of the pieces of personal identity information, or as other user information; classification as other user information means the first optical flow information cannot be attributed to any registered user. For example, the identity registration information used to train the neural network model 150 includes "A", "B", "C", and "D", so after training the model can calculate the probability that the input first optical flow information belongs to each of the users "A", "B", "C", and "D".
If the probability that the input first optical flow information belongs to user "A" is the highest and exceeds a preset threshold, the neural network model 150 classifies it as user "A"; the same applies to users "B", "C", and "D". If none of the probabilities for "A", "B", "C", and "D" exceeds the preset threshold, the neural network model 150 classifies the input first optical flow information as "other".
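The thresholded decision rule described for users "A" through "D" can be sketched as follows. The 0.5 default is a hypothetical value; the patent only speaks of a preset threshold without fixing one:

```python
def classify_identity(probs, names, threshold=0.5):
    """Return the registered user whose probability is highest, provided
    it exceeds the preset threshold; otherwise return "other", meaning
    the input optical flow cannot be attributed to any registered user."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    return names[best] if probs[best] > threshold else "other"
```

Routing low-confidence inputs to "other" is what keeps an unregistered person from being matched to the closest registered user.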

In step S350, the processor 110 triggers an execution event of the electronic device 100 according to the identity verification result output by the neural network model.

In some embodiments, the identity verification result indicates, for example, whether the user passed verification. For example, when the identity verification result classifies the first optical flow information as one of the pieces of identity registration information (for example, a user name such as "A", "B", "C", or "D"), or classifies the optical flow information as "registered", the processor 110 outputs an identity verification result indicating that the user passed verification. Conversely, when the identity verification result classifies the first optical flow information as having no identity registration information (for example, "other"), or classifies the first optical flow information as "unregistered", the processor 110 outputs an identity verification result indicating that the user failed verification. Accordingly, the processor 110 may, for example, decide whether to trigger an execution event of the electronic device 100 according to the identity verification result; the execution event is, for example, allowing the user to log in to the electronic device 100 or run a specific application.

In summary, the identity verification method and electronic device of the embodiments of the invention exploit the fact that different individuals perform the same motion command in different ways: the user performing the motion is filmed as a video stream, the optical flow information of the video stream is calculated, and this optical flow information serves as the basis for identity verification. The method is therefore applicable to users of all appearances and offers high verification speed. In addition, the embodiments of the invention use a deep-learning artificial neural network model, which greatly improves the accuracy of identity verification based on optical flow information.

Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Any person having ordinary knowledge in the relevant technical field may make minor changes and refinements without departing from the spirit and scope of the invention; the scope of protection of the invention shall therefore be defined by the appended claims.

100: electronic device 110: processor 120: image capturing element 130: prompt element 140: storage element 150: neural network model S210, S220, S230, S231, S233: steps of training the neural network model in the identity verification method S310, S320, S330, S340, S350: steps of performing identity verification in the identity verification method

FIG. 1 is a schematic diagram of an electronic device according to an embodiment of the invention. FIG. 2 is a flowchart of training a neural network model according to an embodiment of the invention. FIG. 3 is a flowchart of performing identity verification according to an embodiment of the invention.


Claims (10)

An identity verification method, adapted to an electronic device, the identity verification method comprising: issuing an action prompt message; activating an image capturing element of the electronic device to obtain a first video stream; computing first optical flow information from two consecutive images in the first video stream; and inputting the first optical flow information into a neural network model to obtain an identity verification result, wherein the neural network model comprises an action label corresponding to the action prompt message, at least one piece of second optical flow information corresponding to the action label, and a piece of identity registration information corresponding to each piece of the at least one piece of second optical flow information. The identity verification method as described in claim 1, further comprising, before the step of issuing the action prompt message: collecting at least one second video stream of at least one user according to a plurality of different action labels; computing the at least one piece of second optical flow information of the at least one second video stream from two consecutive images in each of the at least one second video stream; and training the neural network model with the at least one piece of second optical flow information. The identity verification method as described in claim 2, wherein the action prompt message corresponds to one of the action labels.
The identity verification method as described in claim 2, wherein the step of training the neural network model with the at least one piece of second optical flow information comprises: defining a piece of identity registration information for the corresponding second optical flow information. The identity verification method as described in claim 1, wherein the processor triggers an execution event of the electronic device according to the identity verification result. An electronic device, comprising: a prompt element for issuing an action prompt message; an image capturing element which, after the action prompt message is issued, is activated to obtain a first video stream; and a processor, signal-connected to the prompt element and the image capturing element, for running a neural network model, wherein the neural network model comprises an action label corresponding to the action prompt message, at least one piece of second optical flow information corresponding to the action label, and a piece of identity registration information corresponding to each piece of the at least one piece of second optical flow information, the processor being configured to perform the following steps: computing first optical flow information from two consecutive images in the first video stream; and inputting the first optical flow information into the neural network model to obtain an identity verification result.
The electronic device as described in claim 6, wherein before the prompt element issues the action prompt message, the processor is further configured to perform the following steps: collecting at least one second video stream of at least one user according to a plurality of different action labels; computing the at least one piece of second optical flow information of the at least one second video stream from two consecutive images in each of the at least one second video stream; and training the neural network model with the at least one piece of second optical flow information. The electronic device as described in claim 7, wherein the action prompt message issued by the prompt element corresponds to one of the action labels. The electronic device as described in claim 7, wherein, when training the neural network model with the at least one piece of second optical flow information, the processor is further configured to perform the following step: defining a piece of identity registration information for the corresponding second optical flow information. The electronic device as described in claim 6, wherein the processor triggers an execution event of the electronic device according to the identity verification result.
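The enrollment procedure recited in claims 2, 4, 7, and 9 (collect video streams per action label, convert consecutive frames to optical flow, attach identity registration information, then train) can be sketched as the assembly of training samples below. `compute_flow` is a placeholder for any optical-flow routine, and the nested-dictionary input layout is an assumption, not the patent's data format.

```python
# Hedged sketch of assembling training data for the claimed enrollment flow.
# One flow sample is produced per pair of consecutive frames, labeled with the
# action label and the user's identity registration information.

def build_training_set(recordings, compute_flow):
    """recordings: {action_label: {identity: [frame, frame, ...]}}.
    Returns a list of (action_label, flow, identity) training samples."""
    samples = []
    for action_label, by_identity in recordings.items():
        for identity, frames in by_identity.items():
            # each consecutive frame pair yields one optical-flow sample
            for prev, curr in zip(frames, frames[1:]):
                samples.append((action_label, compute_flow(prev, curr), identity))
    return samples
```

The resulting `(action_label, flow, identity)` triples correspond to the claims' pairing of each piece of second optical flow information with an action label and a piece of identity registration information before the neural network model is trained.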
TW107136107A 2018-10-12 2018-10-12 Authentication method and electronic device using the same TW202014917A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
TW107136107A TW202014917A (en) 2018-10-12 2018-10-12 Authentication method and electronic device using the same
CN201910556051.5A CN111046898A (en) 2018-10-12 2019-06-25 Identity authentication method and electronic device using same


Publications (1)

Publication Number Publication Date
TW202014917A true TW202014917A (en) 2020-04-16

Family

ID=70231718


Country Status (2)

Country Link
CN (1) CN111046898A (en)
TW (1) TW202014917A (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWM463878U (en) * 2009-03-12 2013-10-21 Tlj Intertech Inc Living body identification system and identity authentication device
CN103049758B (en) * 2012-12-10 2015-09-09 北京工业大学 Merge the remote auth method of gait light stream figure and head shoulder mean shape
US9294475B2 (en) * 2013-05-13 2016-03-22 Hoyos Labs Ip, Ltd. System and method for generating a biometric identifier
KR20150103507A (en) * 2014-03-03 2015-09-11 삼성전자주식회사 Method of unlocking an electronic device based on motion recognitions, motion recognition unlocking system, and electronic device including the same

Also Published As

Publication number Publication date
CN111046898A (en) 2020-04-21
