TWI617999B - Double identification door access system and method thereof - Google Patents
- Publication number
- TWI617999B (application TW106127010A)
- Authority
- TW
- Taiwan
- Prior art keywords
- private key
- identification
- data
- user
- management host
- Prior art date
Landscapes
- Collating Specific Patterns (AREA)
Abstract
A dual-identification access control system and method thereof, in which the image-capture unit of a portable smart mobile device captures a three-dimensional (3D) image containing the user's biometric features, and gesture recognition is used for identity verification, to unlock the door and admit a specific user. The biometric features of the user's 3D image serve as a biometric key and are transmitted over a mobile communication network for first-stage identification. Once that stage succeeds, the data are sent to the door-control device at the user's location for second-stage identification against a gesture regarded as a private key. Only when both stages succeed is the door lock released and the user admitted.
Description
The invention relates to an access control identification system and a method thereof, specifically an access control system using interactive dual identification based on three-dimensional (3D) images.
Current access control systems mostly rely on fixed cameras performing face recognition. Because that recognition is two-dimensional, it is not necessarily secure: photographs of other people are easy to obtain and can confuse a 2D recognition system. Moreover, such systems cannot identify users while they are walking, which makes them quite inconvenient to use.
Similar concepts appear in the following prior patent documents: the biometric door lock system of Taiwan Patent I560353; the access control authentication method, and the mobile electronic device and access control system applying it, of Taiwan Patent I355624; the biometric access control system of Taiwan Patent Application TW200409045; and the multi-factor access control method of Taiwan Patent I476734.
It can thus be seen that the above conventional approaches still have many shortcomings; they are not well designed and are in urgent need of improvement.
In view of this, the object of the present invention is to provide an access control system, and a method thereof, that performs dual identification using an interactive 3D-image identification device.
The dual-identification access control system that achieves the above object comprises at least the following steps. The image-capture unit of a portable smart mobile device captures a 3D image of biometric features from any part of the user's body and transmits the biometric data to a background management host over a mobile communication network. After the management host matches the data successfully, it sends the user's private-key data over the network to an identification controller, and the user then opens the door lock with the private key and enters.
The image-capture unit of the portable smart mobile device captures, at any position, a 3D image containing the user's biometric features, which serves to unlock the biometric key. After that verification passes, the private-key comparison takes place; only after it also passes is the door lock released. This achieves a two-layer verification mechanism for the door control.
To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
FIG. 1 is a system architecture diagram of an access control system with smart mobile device identification according to an embodiment of the present invention. The system comprises a portable smart mobile device 1, a background management host 2, an identification controller 3, and a private key 4.
The portable smart mobile device 1 may be any mobile device with network and image-capture capability, such as a smartphone or tablet. Its main function is to capture, through an image-capture unit (for example, a camera), a 3D image containing the user's biometric features, and to transmit the biometric data to the background management host 2 over a mobile communication network through a wireless communication unit (supporting, for example, third- or fourth-generation mobile communication, 3G/4G). The biometric feature is any uniquely identifiable marker of the user captured by the device's image-capture unit; it is not limited to a face or a fingerprint, and may be chosen according to the needs of those applying the embodiments of the invention.
The background management host 2 may be a desktop computer, workstation, or server. As shown in FIG. 2, it includes a biometric identification module 11 and a private-key distribution module 12. The host executes these software modules on a processor such as a central processing unit (CPU), microprocessor, digital signal processor (DSP), or programmable controller, and has a communication unit corresponding to the portable smart mobile device 1 for receiving the data that the device transmits.
The identification controller 3 is a controller configured to obtain the private key 4. In this embodiment the private key 4 is gesture recognition data, and the identification controller 3 is equipped with an image-capture unit such as a camera to acquire gesture images, together with a dedicated processor to compare the acquired gesture image with the data of the private key 4. Note that the background management host 2 and the identification controller 3 each have a corresponding wired or wireless communication unit (for example, WiFi or Ethernet) for exchanging data with each other (for example, the private key 4 and comparison results).
When the user approaches the access-controlled area, he or she can hold the portable smart mobile device 1 and capture 3D image data containing biometric features through its image-capture unit. During capture, the device automatically processes the image to obtain the best recognizable picture, improving the recognition rate. The image data are then transmitted to the background management host 2.
The background management host 2 of this embodiment uses the Local Binary Pattern (LBP) operation as its main algorithm. The method has two steps: feature-value computation and statistical operation.
Feature-value computation subtracts the value of each pixel in the image from each of its neighbours in a 3×3 neighbourhood; if a difference is negative it is marked 0, and otherwise 1. The bits are then read clockwise as a binary code, starting from the pixel to the right of the centre. The computation follows equation (1-1):

$$LBP = \sum_{p=0}^{P-1} s(g_p - g_c)\,2^p,\qquad s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases} \tag{1-1}$$

where $g_p$ are the pixel values neighbouring the reference point, $g_c$ is the pixel value of the reference point, and $P$ is the number of neighbouring pixels.
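As a minimal sketch of the 3×3 LBP code described by equation (1-1) — the function name and the convention of mapping a zero difference to bit 1 are illustrative assumptions:

```python
def lbp_value(patch):
    """Compute the LBP code of a 3x3 patch (3 rows of 3 ints).

    Each neighbour is thresholded against the centre pixel; the bits
    are read clockwise starting from the pixel to the right of the
    centre, as described for equation (1-1).
    """
    center = patch[1][1]
    # clockwise neighbour coordinates, starting at the right neighbour
    coords = [(1, 2), (2, 2), (2, 1), (2, 0), (1, 0), (0, 0), (0, 1), (0, 2)]
    code = 0
    for bit, (r, c) in enumerate(coords):
        if patch[r][c] - center >= 0:  # non-negative difference -> bit 1
            code |= 1 << bit
    return code
```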
The statistical operation divides the image into many blocks, computes a histogram of the feature values in each block, and concatenates the histograms of all blocks into a single data record. In this embodiment both the face image and the depth image (that is, the aforementioned 3D image data) are divided into nine equal parts; the LBP feature values of each part are computed and normalized, then combined into one record. The computed LBP values are also stored in the database for later comparison. The background management host 2 compares and identifies the user's image data — that is, the 3D image data — against the established records. If the user's data exist in the database and the comparison succeeds, the background management host 2 sends the user's private-key data to the identification controller 3 over a wired or wireless network.
The background management host 2 compares a query against the database with equation (1-2):

$$d_j = \sum_{i=1}^{N} \left(x_{i}^{(j)} - y_{i}\right)^2,\qquad j = 1, 2, 3, \dots, M \tag{1-2}$$

where $x_i^{(j)}$ are the LBP feature values of the $j$-th record in the database, $y_i$ are the LBP feature values to be identified, $N$ is the feature dimension, and $M$ is the number of records compared in the database. The host compares the face depth information with the records in the built database one by one and selects the biometric record with the smallest error.
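The blockwise histogram statistics and minimum-error database matching described above can be sketched as follows; the squared-error distance and all names are illustrative assumptions, not the patent's exact implementation:

```python
def lbp_histogram(codes, bins=256):
    """Histogram of the LBP codes of one block, normalised to sum to 1."""
    hist = [0.0] * bins
    for c in codes:
        hist[c] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def match(database, query):
    """Return the index of the database entry with the smallest
    squared-error distance to the query feature vector."""
    best, best_err = -1, float("inf")
    for j, entry in enumerate(database):
        err = sum((x - y) ** 2 for x, y in zip(entry, query))
        if err < best_err:
            best, best_err = j, err
    return best
```

In practice each of the nine image parts would contribute one histogram, and the concatenation of all nine would form the feature vector passed to `match`.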
It should be noted that there are many biometric comparison techniques; those applying the embodiments of the invention may adopt other techniques as required, and the invention is not limited to the LBP operation.
When the background management host 2 receives the user's biometric data, the biometric identification module 11 (software) determines whether the user is legitimate. If so, the private-key distribution module 12 sends the private-key data (for example, an identification code) to the identification controller 3 for use with the user's private key 4 (the identification controller 3 and the background management host 2 exchanging messages as described above), giving the identification controller 3 the authority to decide whether to open the door (that is, to release the access control). Once the identification controller 3 has used it, the data conferring that authority are automatically deleted, so the private-key data cannot be stolen; the background management host 2 reloads the door-opening private-key data only when the next user arrives. Only when both stages (biometric and private-key comparison) match does the identification controller 3 release the door lock and let the user pass.
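The two-stage handoff described above — stage-one biometric verification at the host, one-time loading of the private key into the controller, and automatic deletion after use — can be sketched as follows; all class and field names are hypothetical:

```python
class ManagementHost:
    """Stage 1: biometric check, then push the private key to the door."""

    def __init__(self, enrolled):
        # enrolled: user_id -> {"bio": biometric record, "key": private key}
        self.enrolled = enrolled

    def verify_biometric(self, user_id, feature, controller):
        record = self.enrolled.get(user_id)
        if record is not None and feature == record["bio"]:
            controller.load_key(record["key"])  # one-time authority to open
            return True
        return False

class DoorController:
    """Stage 2: gesture comparison against the loaded private key."""

    def __init__(self):
        self.pending_key = None

    def load_key(self, key):
        self.pending_key = key

    def verify_gesture(self, gesture_key):
        ok = self.pending_key is not None and gesture_key == self.pending_key
        self.pending_key = None  # auto-delete after one use, as in the text
        return ok
```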
In this embodiment the private key 4 is gesture recognition, and the gesture recognition method can consist of two parts: finding feature points and finding straight lines. The feature points are endpoints and corner points, found as follows: eight directions and a 19×19 square window are defined; if only one direction is occupied the point is an endpoint; if two directions are occupied and they differ by four the point lies on a straight line; if they differ by other than four the point is a corner.
To find straight-line paths, let $n$ be the number of line feature points; the number of possible straight paths is then given by equation (1-3):

$$N_{\text{lines}} = \binom{n}{2} = \frac{n(n-1)}{2} \tag{1-3}$$
The angle of each line can also be obtained, and each pixel of the image is then substituted into equation (1-4):

$$\rho = x\cos\theta + y\sin\theta \tag{1-4}$$

where $(x, y)$ is the pixel position and $\theta$ the line angle.
By accumulating a weight for each possible straight path, the presence or absence of each line is determined; finally, based on the comparison of the number of lines and their angles, the door-opening action can be performed.
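The feature-point rules and the candidate-path count above can be sketched as follows; the reading of equation (1-3) as counting unordered pairs of feature points, and all names, are illustrative assumptions:

```python
from math import comb

def line_path_count(n):
    """Number of candidate straight paths through n line feature points,
    taken as one path per unordered pair of points (equation (1-3))."""
    return comb(n, 2)

def classify_point(directions):
    """Classify a feature point from the set of occupied directions
    (0..7) found in its 19x19 window, per the rules in the text:
    one direction -> endpoint; two directions differing by four ->
    on a straight line; two directions otherwise -> corner."""
    if len(directions) == 1:
        return "endpoint"
    if len(directions) == 2:
        a, b = sorted(directions)
        return "line" if b - a == 4 else "corner"
    return "other"
```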
This embodiment exploits the strong ability of artificial neural networks to learn and handle nonlinear problems, building an access control system with front-end smart mobile device identification to provide accurate recognition. The most basic unit of a neural network is the neuron of the neural network architecture 21 shown in FIG. 3. After an input value $X$ and a weight value $W$ enter the neuron, it computes according to equation (1-5):

$$Y_j = f\!\left(\sum_{i=1}^{n} W_{ij} X_i - \theta_j\right) \tag{1-5}$$

where $W_{ij}$ is a neuron weight value, $X_i$ a feature input value, $\theta_j$ the bias value of the neuron, $Y_j$ the output value of the neuron, $n$ the number of inputs to the neuron, and $j$ a positive integer.
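The single-neuron computation of equation (1-5), using the sigmoid transfer function that the later steps adopt, can be sketched as:

```python
import math

def neuron_output(x, w, theta):
    """Weighted sum of the inputs minus the bias, passed through the
    sigmoid activation used by the back-propagation network."""
    net = sum(wi * xi for wi, xi in zip(w, x)) - theta
    return 1.0 / (1.0 + math.exp(-net))
```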
This embodiment uses a neural network architecture and program to build a back-propagation neural network. Building this model involves six items: the learning algorithm, the number of hidden-layer neurons, the number of hidden layers, the learning rate, and the transfer functions of the hidden layer and of the output layer. The learning algorithm is divided into eight steps, described below:
Step 1: decide the number of layers in the network and the number of neurons in each layer. Here the network is assumed to consist of an input layer, one hidden layer, and an output layer — that is, a three-layer architecture — with $N_i$ neurons in the input layer, $N_h$ in the hidden layer, and $N_j$ in the output layer.
Step 2: set the network's initial weight values and initial bias values with uniformly distributed random numbers. Because neurons in different layers are connected to one another, if $W_{ih}$ denotes the weight between the $i$-th input-layer neuron and the $h$-th hidden-layer neuron, then since there are $N_i$ input neurons and $N_h$ hidden neurons, a double loop can set all initial weights between the input and hidden layers as follows:

for $i$ = 1 to $N_i$
  for $h$ = 1 to $N_h$
    $W_{ih}$ = uniformly distributed random number
Likewise, if we let $W_{hj}$ denote the weight between the $h$-th hidden-layer neuron and the $j$-th output-layer neuron, all initial weights between the hidden and output layers are set as follows:

for $h$ = 1 to $N_h$
  for $j$ = 1 to $N_j$
    $W_{hj}$ = uniformly distributed random number
When setting the network's initial bias values, note that only the hidden and output layers have biases; the input layer has none. In fact, the input layer performs no computation: it merely passes the signal received by each neuron in parallel to the neurons of the hidden layer. If $\theta_h$ is the bias of the $h$-th hidden-layer neuron and $\theta_j$ the bias of the $j$-th output-layer neuron, the initial biases are set as follows:

for $h$ = 1 to $N_h$
  $\theta_h$ = uniformly distributed random number
for $j$ = 1 to $N_j$
  $\theta_j$ = uniformly distributed random number
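The initialization loops of step two can be sketched as follows; the range [-1, 1] is an assumption, since the text only specifies a uniform distribution:

```python
import random

def init_network(n_in, n_hidden, n_out, lo=-1.0, hi=1.0):
    """Initialise all weights and biases with uniformly distributed
    random numbers, mirroring the double loops of step two.
    Only the hidden and output layers receive biases."""
    w_ih = [[random.uniform(lo, hi) for _ in range(n_hidden)] for _ in range(n_in)]
    w_ho = [[random.uniform(lo, hi) for _ in range(n_out)] for _ in range(n_hidden)]
    theta_h = [random.uniform(lo, hi) for _ in range(n_hidden)]
    theta_o = [random.uniform(lo, hi) for _ in range(n_out)]
    return w_ih, w_ho, theta_h, theta_o
```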
Step 3: input the training sample $X_1, X_2, \dots, X_{N_i}$ and the target output values $T_1, T_2, \dots, T_{N_j}$. The input values may be arbitrary real numbers, but because the back-propagation network uses the sigmoid function as the nonlinear transfer function of its neurons, the network's inferred output values must fall in $[0, 1]$, so the target output values $T_1, T_2, \dots, T_{N_j}$ must also fall in $[0, 1]$.
Step 4: compute the network's inferred output values $Y_1, Y_2, \dots, Y_{N_j}$. First compute the output values of the hidden layer:

for $h$ = 1 to $N_h$:
$$net_h = \sum_{i=1}^{N_i} W_{ih} X_i - \theta_h \tag{1-6}$$
for $h$ = 1 to $N_h$:
$$H_h = f(net_h) = \frac{1}{1 + e^{-net_h}} \tag{1-7}$$

where $net_h$ is the weighted product sum of the $h$-th hidden-layer neuron and $H_h$ is its output value, obtained by applying the nonlinear transformation to the collected weighted product sum $net_h$.
Since the input signals of the output layer come from the hidden-layer outputs, the inferred output values are computed as:

for $j$ = 1 to $N_j$:
$$net_j = \sum_{h=1}^{N_h} W_{hj} H_h - \theta_j \tag{1-9}$$
for $j$ = 1 to $N_j$:
$$Y_j = f(net_j) = \frac{1}{1 + e^{-net_j}} \tag{1-10}$$

where $net_j$ and $Y_j$ are respectively the weighted product sum and the inferred output value of the $j$-th output-layer neuron.
Step 5: compute the error terms of the output and hidden layers. The output-layer error term is computed as:

for $j$ = 1 to $N_j$:
$$\delta_j = Y_j (1 - Y_j)(T_j - Y_j) \tag{1-11}$$

where $\delta_j$ is the error term of the $j$-th output-layer neuron. In equation (1-11), $(T_j - Y_j)$ represents the error between the target output value and the network's inferred output value, so $\delta_j$ expresses a measure of the error between $T_j$ and $Y_j$.
The hidden-layer error term is computed as:

for $h$ = 1 to $N_h$:
$$\delta_h = H_h (1 - H_h) \sum_{j=1}^{N_j} W_{hj}\, \delta_j \tag{1-12}$$

where $\delta_h$ is the error term of the $h$-th hidden-layer neuron. Note that equation (1-12) contains the sub-expression $\sum_{j=1}^{N_j} W_{hj}\,\delta_j$, the weighted product sum of the output-layer error terms. The computation of $\delta_h$ thus depends on the output-layer error terms, meaning that the output-layer error is propagated back to the hidden layer to compute its error term.
Step 6: compute the weight and bias corrections between the layers. If $\Delta W_{hj}$ denotes the weight correction between the $h$-th hidden-layer neuron and the $j$-th output-layer neuron, and $\Delta\theta_j$ the bias correction of the $j$-th output-layer neuron, all such corrections are computed as:

for $h$ = 1 to $N_h$
  for $j$ = 1 to $N_j$:
$$\Delta W_{hj} = \eta\, \delta_j H_h \tag{1-13}$$
$$\Delta\theta_j = -\eta\, \delta_j \tag{1-14}$$

where $\eta$ is the learning rate, typically between 0.1 and 1.0. To accelerate the network's convergence, equations (1-13) and (1-14) can be rewritten as

$$\Delta W_{hj}(t) = \eta\, \delta_j H_h + \alpha\, \Delta W_{hj}(t-1) \tag{1-15}$$
$$\Delta\theta_j(t) = -\eta\, \delta_j + \alpha\, \Delta\theta_j(t-1) \tag{1-16}$$

where $\alpha$ is the momentum (inertia) factor, typically between 0.0 and 0.9.
Likewise, if $\Delta W_{ih}$ denotes the weight correction between the $i$-th input-layer neuron and the $h$-th hidden-layer neuron, and $\Delta\theta_h$ the bias correction of the $h$-th hidden-layer neuron, all such corrections are computed as:

for $i$ = 1 to $N_i$
  for $h$ = 1 to $N_h$:
$$\Delta W_{ih} = \eta\, \delta_h X_i \tag{1-17}$$
for $h$ = 1 to $N_h$:
$$\Delta\theta_h = -\eta\, \delta_h \tag{1-18}$$
Step 7: update the weights and biases between the layers. The weights between the hidden and output layers, and the output-layer biases, are updated as:

for $h$ = 1 to $N_h$
  for $j$ = 1 to $N_j$:
$$W_{hj} \leftarrow W_{hj} + \Delta W_{hj} \tag{1-19}$$
for $j$ = 1 to $N_j$:
$$\theta_j \leftarrow \theta_j + \Delta\theta_j$$
Likewise, the weights between the input and hidden layers, and the hidden-layer biases, are updated as:

for $i$ = 1 to $N_i$
  for $h$ = 1 to $N_h$:
$$W_{ih} \leftarrow W_{ih} + \Delta W_{ih} \tag{1-20}$$
for $h$ = 1 to $N_h$:
$$\theta_h \leftarrow \theta_h + \Delta\theta_h$$
Step 8: repeat steps 3 through 7 until the network converges. Learning usually proceeds one training sample at a time; one pass through all the training samples is called a learning cycle, and the network repeats learning cycles until it converges. To test whether the network has converged, the following error function is defined to express the network's learning quality:

$$E = \frac{1}{2} \sum_{j=1}^{N_j} (T_j - Y_j)^2 \tag{1-21}$$

This expression is the sum of the squared errors of the output-layer neurons. Since during learning we want the network's inferred output values $Y_j$ to be as close as possible to the target output values $T_j$, the value computed by equation (1-21) should fall below a reasonable threshold. The number of hidden-layer neurons is obtained by the averaging method; the number of hidden layers, the learning rate, and the hidden- and output-layer transfer functions are all tuned to their best values by trial and error.
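Steps three through seven, applied to one training sample, can be sketched as a single function; momentum (equations (1-15)/(1-16)) is omitted for brevity, and all names are illustrative:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(x, t, w_ih, w_ho, th_h, th_o, eta=0.5):
    """One back-propagation iteration on a single sample: forward pass,
    error terms per equations (1-11)/(1-12), then weight/bias updates.
    Returns the squared-error measure of equation (1-21) before the update."""
    I, H, O = len(x), len(th_h), len(th_o)
    # forward pass (equations (1-6) to (1-10))
    h = [sigmoid(sum(w_ih[i][k] * x[i] for i in range(I)) - th_h[k]) for k in range(H)]
    y = [sigmoid(sum(w_ho[k][j] * h[k] for k in range(H)) - th_o[j]) for j in range(O)]
    # output-layer and hidden-layer error terms
    d_o = [y[j] * (1 - y[j]) * (t[j] - y[j]) for j in range(O)]
    d_h = [h[k] * (1 - h[k]) * sum(w_ho[k][j] * d_o[j] for j in range(O)) for k in range(H)]
    # updates (equations (1-13)/(1-14) and (1-17) to (1-20))
    for k in range(H):
        for j in range(O):
            w_ho[k][j] += eta * d_o[j] * h[k]
    for j in range(O):
        th_o[j] += -eta * d_o[j]
    for i in range(I):
        for k in range(H):
            w_ih[i][k] += eta * d_h[k] * x[i]
    for k in range(H):
        th_h[k] += -eta * d_h[k]
    return 0.5 * sum((t[j] - y[j]) ** 2 for j in range(O))
```

Repeating this step over all samples until the returned error falls below a threshold corresponds to the learning cycles of step eight.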
It should be noted that the private key in this embodiment uses gesture recognition technology; in other embodiments the private key may be a specific identification code, a specific pattern, or the like, adjusted by those applying this embodiment according to their needs.
Features and effects
1. The access control system with smart mobile device identification of the present invention, and its method, use the portable smart mobile device 1 to capture a 3D image containing the user's biometric features and transmit it over a mobile communication network for biometric identification, making system identification faster.
2. The access control system with smart mobile device identification of the present invention, and its method, can capture 3D images containing the user's biometric features at any position, giving the system flexibility.
3. The access control system with smart mobile device identification of the present invention, and its method, employ two-factor authentication, making the system more secure.
Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Anyone with ordinary knowledge in the relevant technical field may make minor changes and refinements without departing from the spirit and scope of the invention; the scope of protection of the invention is therefore defined by the appended claims.
1‧‧‧Portable smart mobile device
2‧‧‧Background management host
3‧‧‧Identification controller
4‧‧‧Private key
11‧‧‧Biometric identification module
12‧‧‧Private key distribution module
21‧‧‧Neural network architecture
FIG. 1 is a system architecture diagram of an access control system with portable smart mobile device identification according to an embodiment of the present invention; FIG. 2 is an architecture diagram of the background management host according to an embodiment of the present invention; FIG. 3 is a neural network architecture according to an embodiment of the present invention.
Claims (6)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW106127010A | 2017-08-10 | 2017-08-10 | Double identification door access system and method thereof |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| TWI617999B | 2018-03-11 |
| TW201911124A | 2019-03-16 |
Family
ID=62189062
Cited By (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI656272B | 2018-05-18 | 2019-04-11 | 黃暐皓 | Access control device |
Citations (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TW200829781A | 2006-08-29 | 2008-07-16 | Toshiba Kk | Entry control system and entry control method |
| TWM439229U | 2012-05-11 | 2012-10-11 | Shinsoft Co Ltd | Security apparatus with mulitple safety controls and system using the same |
| CN204808397U | 2015-06-23 | 2015-11-25 | 北京国信实为通讯技术有限公司 | Computer lab basic station corollary equipment centralized management system |
| US20160358391A1 | 2015-06-05 | 2016-12-08 | Dean Drako | Geo-Location Estimate (GLE) Sensitive Physical Access Control Apparatus, System, and Method of Operation |
| CN106599797A | 2016-11-24 | 2017-04-26 | 北京航空航天大学 | Infrared face identification method based on local parallel nerve network |