TWI731511B - Finger vein or palm vein identification processing and neural network training method - Google Patents

Finger vein or palm vein identification processing and neural network training method

Info

Publication number
TWI731511B
TWI731511B
Authority
TW
Taiwan
Prior art keywords
training
learning rate
finger
weight
layer
Prior art date
Application number
TW108145581A
Other languages
Chinese (zh)
Other versions
TW202123096A (en)
Inventor
張振豪
張智瑋
王佳裕
Original Assignee
國立中興大學
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 國立中興大學 filed Critical 國立中興大學
Priority to TW108145581A priority Critical patent/TWI731511B/en
Publication of TW202123096A publication Critical patent/TW202123096A/en
Application granted granted Critical
Publication of TWI731511B publication Critical patent/TWI731511B/en

Landscapes

  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

This patent proposes a new training method, called "Ripple-Freeze," which assigns different gradient descent rates to different layers. It mitigates excessive gradient changes across layers between epochs, preventing the small oscillations of the shallow layers from being passed on to the deep layers, where they would make the parameters change even more aggressively. Experimental results indicate that the proposed training method can effectively accelerate the training procedure, with acceleration rates reaching as much as 20%. The results demonstrate the feasibility of this training method and take a step toward practical application of palm vein identification systems.

Description

Finger/palm vein recognition processing and neural network training method therefor

The present invention relates to finger/palm vein recognition processing and, more particularly, to deep neural network training applied to finger/palm vein recognition processing, which the present invention names Ripple-Freeze Training (RFT).

With the advance of technology, biometric identification has become the mainstream way of ensuring security, whether for airport immigration clearance, bank ATM operation, company access control, or unlocking all manner of technology products. Among biometric modalities, palm vein identification is an important branch of the field. It is a non-contact method, which makes it convenient and hygienic; its biggest difference from other biometric methods is that the veins lie beneath the skin, making them much harder to forge, and vein physiology changes very little over time, so the subject being identified is sufficiently stable.

Recognition of finger veins or palm veins from vein images involves the following steps: acquiring a near-infrared image, extracting features, and obtaining an ROI image, followed by feature classification to delineate the vein features; the geometry or topological structure of the vein features is then compared for one-to-many identification.

Feature classification feeds the ROI image dataset into a neural network for training, so that prediction, statistics, and accuracy evaluation can be performed. Transfer learning, one of the strategies for training neural networks, retrains an already pretrained model so that it can be applied to a new task. Common transfer learning strategies decide which convolutional layers' weights to freeze at the very start of training. Once that decision is made, nothing is changed for the entire training process, from the first epoch (one epoch = one forward pass and one backward pass over all training samples) to the end of training.
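
For contrast with the method proposed below, a minimal sketch of such a fixed, decide-once freezing strategy, assuming PyTorch and a torchvision ResNet-50; the choice of which layers to freeze is illustrative, not prescribed by the prior art described here.

```python
import torch
import torchvision

# Conventional transfer learning: decide once, before training, which layers stay frozen.
model = torchvision.models.resnet50(pretrained=True)

# Hypothetical choice: freeze everything except the last residual stage and the classifier.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("layer4") or name.startswith("fc")

# Only the unfrozen parameters are given to the optimizer; the choice never changes afterwards.
optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-3)
```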

In a neural network, the bottom-layer features are low-level and general, while the top-layer features are high-level and specific, and those high-level, specific features sit closer to the fully connected layers. It is therefore reasonable to infer that the network classifies mainly by its high-level features. High-level features form from mid-level features, and mid-level features form from low-level features; under a conventional transfer learning strategy, if the low- or mid-level features change too drastically, the high-level features have difficulty forming.

The present invention proposes a palm vein recognition system combined with a deep learning neural network, trained with the invention's novel Ripple-Freeze Training method (RFT), which progressively freezes the weights of the convolutional layers of a convolutional neural network in the order bottom, middle, top. At first none of the convolutional layers' weights are frozen; after several epochs of training, the lower convolutional layers are frozen to a greater degree so that the middle convolutional layers are trained; after several more epochs, the middle convolutional layers are also frozen so that the top convolutional layers are trained. Assigning a different gradient descent speed to each layer of the network prevents the gradients of the layers (bottom, middle, top) from changing too drastically between epochs and keeps parameter oscillations in the bottom layers from propagating to the top layers, where they would cause even more violent oscillation. Experimental results show that the proposed training method effectively accelerates neural network training, with an acceleration rate of up to 20% and a Top-1 accuracy of up to 99%.

A neural network training method for finger/palm vein recognition processing according to the present invention trains a convolutional neural network, the convolutional neural network being used to perform feature classification on a dataset of finger/palm vein infrared images; the convolutional neural network comprises a plurality of convolutional layers arranged in sequence as bottom, middle, and top layers; the training method includes:

Step 1: the weights of all the convolutional layers are trained for a predetermined number of epochs to obtain initial weights;

Step 2: the weights of the bottom layers are frozen;

Step 3: the middle layers are trained for a predetermined number of epochs, the training ignoring the bottom-layer variables as a result of Step 2;

Step 4: the weights of the middle layers are frozen;

Step 5: the top layers are trained for a predetermined number of epochs, the training of Step 5 ignoring the middle-layer variables as a result of Step 4, so that the high-level features of the finger/palm vein infrared images are classified.
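
A minimal sketch of Steps 1 through 5, assuming PyTorch and an Adam optimizer; the stage lengths, the bottom/middle/top grouping, and the helper names (`layer_groups`, `train_one_epoch`) are illustrative assumptions. Freezing is realized here by setting a group's learning rate to zero, consistent with the learning-rate-based freezing detailed later in the specification.

```python
import torch

def set_group_lr(optimizer, group_index, lr):
    """Set the learning rate of one parameter group; lr = 0.0 effectively freezes it."""
    optimizer.param_groups[group_index]["lr"] = lr

def ripple_freeze_train(model, layer_groups, train_one_epoch, base_lr=1e-3, epochs_per_stage=10):
    """layer_groups = [bottom_params, middle_params, top_params]; train_one_epoch is user-supplied."""
    optimizer = torch.optim.Adam([{"params": g, "lr": base_lr} for g in layer_groups])

    # Step 1: train the weights of all convolutional layers for a predetermined number of epochs.
    for _ in range(epochs_per_stage):
        train_one_epoch(model, optimizer)

    # Step 2: freeze the bottom layers.  Step 3: keep training; bottom weights no longer change.
    set_group_lr(optimizer, 0, 0.0)
    for _ in range(epochs_per_stage):
        train_one_epoch(model, optimizer)

    # Step 4: freeze the middle layers.  Step 5: only the top layers keep learning.
    set_group_lr(optimizer, 1, 0.0)
    for _ in range(epochs_per_stage):
        train_one_epoch(model, optimizer)
```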

A finger/palm vein recognition process according to the present invention includes:

STP-1: acquire a near-infrared (NIR) image of the finger/palm veins;

STP-2: perform image pre-processing on the NIR image;

STP-3: perform vein image feature extraction on the pre-processed image;

STP-4: crop the feature-extracted image to define the palm vein region of interest (ROI);

STP-5: perform data augmentation on the palm vein ROI;

STP-6: perform data pre-processing on the data obtained from the data augmentation, and generate a training dataset, a validation dataset, and a testing dataset;

STP-7: feed all of the above datasets into a convolutional neural network for training; the training method includes:

STP-7.1: arrange the plurality of convolutional layers of the convolutional neural network in sequence as bottom, middle, and top layers;

STP-7.2: train the weights of all the convolutional layers for a predetermined number of epochs to obtain initial weights;

STP-7.3: freeze the weights of the bottom layers;

STP-7.4: train the middle layers for a predetermined number of epochs, the training ignoring the bottom-layer variables as a result of STP-7.3;

STP-7.5: freeze the weights of the middle layers;

STP-7.6: train the top layers for a predetermined number of epochs, the training ignoring the middle-layer variables as a result of STP-7.5, so that the high-level features of the finger/palm vein infrared images are classified;

STP-8: perform a one-to-many identification comparison on the classified high-level features.
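
A minimal sketch of what STP-8 can look like once the network is trained, assuming the classification head outputs one score per enrolled person; the softmax rejection threshold is a hypothetical addition, not part of the claimed method.

```python
import torch

def identify(model, roi_image, id_labels, threshold=0.9):
    """One-to-many identification: return the best-matching enrolled identity, or None if rejected."""
    model.eval()
    with torch.no_grad():
        logits = model(roi_image.unsqueeze(0))        # shape (1, number_of_enrolled_people)
        probs = torch.softmax(logits, dim=1).squeeze(0)
    best = int(torch.argmax(probs))
    # Hypothetical rejection rule: refuse the match if even the best class is not confident.
    return id_labels[best] if float(probs[best]) >= threshold else None
```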

Figure 1 is a schematic diagram of the CASIA multi-spectral imaging device used in the present invention.

Figure 2 shows the number of images at each stage of the palm vein recognition system of the present invention using the CASIA data.

Figure 3 is the accuracy learning curve of the present invention on ResNet-50 using the 100-person dataset.

Figure 4 is the loss learning curve of the present invention on ResNet-50 using the 100-person dataset.

Figure 5 is the learning rate decay curve of the present invention on ResNet-50 using the 100-person dataset.

Figure 6 is the accuracy learning curve of Retrain plus RFT according to the present invention.

Figure 7 is the loss learning curve of Retrain plus RFT according to the present invention.

Figure 8 is the ROC curve of Retrain plus RFT according to the present invention.

Figure 9 is the learning rate decay curve of each convolutional layer with Retrain plus RFT according to the present invention.

Figure 10 is the learning rate decay curve of each batch normalization layer with Retrain plus RFT according to the present invention.

The new training method proposed by the present invention is applied to palm vein or finger vein recognition processing. The finger/palm vein recognition flow may follow the known standard flow for vein image recognition processing, including:

Perform image pre-processing on the acquired infrared vein image (NIR image). The pre-processing mainly consists of binarization to obtain a binary image, after which edge detection and isolation/enhancement are performed on the binary image.
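
A minimal sketch of this pre-processing chain using OpenCV (an assumed library choice; the Otsu thresholding, Canny thresholds, and morphological kernel are illustrative, not the patent's settings).

```python
import cv2

def preprocess_nir(nir_gray):
    """Binarize an NIR vein image, then run edge detection and a simple enhancement step."""
    # Binarization: Otsu picks the threshold automatically from the grayscale histogram.
    _, binary = cv2.threshold(nir_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Edge detection on the binary image.
    edges = cv2.Canny(binary, 50, 150)
    # Isolation / enhancement: morphological opening removes small isolated specks.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    enhanced = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    return binary, edges, enhanced
```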

Perform vein image feature extraction on the pre-processed image. Feature extraction methods include, but are not limited to, repeated line tracking, maximum curvature, the wide line detector, and the Gabor filter.
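
Of the methods listed, the Gabor filter is the simplest to sketch; a hedged OpenCV example with illustrative kernel parameters (not the patent's settings).

```python
import cv2
import numpy as np

def gabor_vein_features(img, n_orientations=8):
    """Filter the image with a small Gabor bank and keep the strongest response per pixel."""
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0)
        responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))
    # Vein segments respond most strongly to the kernel aligned with their local direction.
    return np.max(np.stack(responses), axis=0)
```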

Perform image cropping on the feature-extracted image to define the palm vein region of interest (ROI), obtaining the ROI image through a palm center tracking step, a finger valley tracking step, and a repeated sampling step.

Perform data augmentation on the ROI image. Data augmentation applies a wide variety of transformations to the ROI image, such as Gaussian blur, sharpening, affine transformation, shaking, added Gaussian noise, and coarse dropout (randomly dropping image regions). Processing the data in these ways increases the amount and diversity of the training data and helps keep the neural network from overfitting during the training phase.
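
The listed transformations map almost one-to-one onto the imgaug library; a hedged sketch with illustrative magnitudes (imgaug is an assumed choice, not named by the patent).

```python
import imgaug.augmenters as iaa

# One augmentation pipeline covering the transformations named above.
augmenter = iaa.Sequential([
    iaa.GaussianBlur(sigma=(0.0, 1.0)),                     # image blur
    iaa.Sharpen(alpha=(0.0, 0.5)),                          # image sharpening
    iaa.Affine(rotate=(-10, 10), translate_percent=0.05),   # affine transform / shake
    iaa.AdditiveGaussianNoise(scale=(0, 0.05 * 255)),       # added Gaussian noise
    iaa.CoarseDropout(0.02, size_percent=0.1),              # random coarse dropout of regions
], random_order=True)

# Usage: augmented_batch = augmenter(images=batch_of_uint8_roi_images)
```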

Perform data pre-processing, such as normalization, standardization, and labeling, on the data obtained from the data augmentation; this helps the optimization algorithm in the neural network descend faster in the high-dimensional feature space. This step produces the training dataset, the validation dataset, and the testing dataset.
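
A minimal NumPy sketch of this step for a batch of images in NHWC layout (the layout and the integer-label scheme are assumptions).

```python
import numpy as np

def prepare(images, person_ids):
    """Normalize to [0, 1], standardize per channel, and map person IDs to integer class labels."""
    x = images.astype(np.float32) / 255.0                                  # normalization
    x = (x - x.mean(axis=(0, 1, 2))) / (x.std(axis=(0, 1, 2)) + 1e-8)      # standardization
    classes = sorted(set(person_ids))
    labels = np.array([classes.index(p) for p in person_ids])              # labeling
    return x, labels
```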

The processed datasets, together with their labels, are fed into the neural network for training, performance evaluation, and prediction/classification.

The present invention uses two deep convolutional neural network architectures, ResNet-50 (Residual Network) and DenseNet (Dense Convolutional Network), for training as verification cases.
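
A hedged sketch of how ResNet-50 might be instantiated and partitioned into the bottom/middle/top groups that RFT needs; the grouping shown is one plausible choice, not the patent's specification, and DenseNet can be split analogously.

```python
import torch
import torchvision

def resnet50_layer_groups(num_classes):
    """Load ResNet-50 and return (model, [bottom, middle, top]) parameter groups for RFT."""
    model = torchvision.models.resnet50(pretrained=True)
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)   # new classification head
    bottom = (list(model.conv1.parameters()) + list(model.bn1.parameters())
              + list(model.layer1.parameters()))
    middle = list(model.layer2.parameters()) + list(model.layer3.parameters())
    top = list(model.layer4.parameters()) + list(model.fc.parameters())
    return model, [bottom, middle, top]
```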

The present invention proposes a new training method. During training, none of the convolutional layers' weights are frozen at first. After several epochs, the lower convolutional layers are frozen so that the middle convolutional layers are trained; after several more epochs, the middle convolutional layers are also frozen so that the top convolutional layers are trained. The basic concept is that the front of the network trains the middle of the network, the middle of the network then trains the back of the network, and finally the classifier is trained.

What distinguishes the proposed strategy from the two transfer learning training strategies is that it freezes weights progressively during training. In general, it can be observed that in a neural network the bottom-layer features are low-level and general, the top-layer features are high-level and specific, and the high-level, specific features sit closer to the fully connected layers. It is therefore reasonable to infer that the network classifies mainly by its high-level features. High-level features form from mid-level features, and mid-level features form from low-level features, so it can be inferred that if the low- and mid-level features change too drastically, the high-level features will be hard to form; conversely, if the low- and mid-level features are frozen once they have been given a certain amount of training time, so that training concentrates on forming the high-level features, it is expected that training of the neural network should be accelerated.

In practice there are many ways to freeze the weights of a convolutional layer; the present invention freezes the weights by changing the learning rate.

In each backpropagation pass, after the gradient is computed, it is first multiplied by the learning rate and the weights are then updated. The original weight update formula is equation (1.1), where $v_t$ is the first moment estimate of the gradient at time $t$, representing the mean, and $s_t$ is the second moment estimate of the gradient at time $t$, representing the variance:

$$v_t = \beta_1 v_{t-1} + (1-\beta_1)\,g_t, \qquad s_t = \beta_2 s_{t-1} + (1-\beta_2)\,g_t^{2}$$

where $g_t$ is the gradient at time $t$ and $\beta_1$ and $\beta_2$ are the drag (decay) coefficients over time $t$. At the very beginning of training, especially when the decay rate is small, $v_t$ and $s_t$ are strongly biased toward 0, so a bias correction is applied, giving $\hat{v}_t = v_t/(1-\beta_1^{t})$ and $\hat{s}_t = s_t/(1-\beta_2^{t})$. Here $\eta$ denotes the learning rate, $W_t$ the weight coefficient before adjustment, $W_{t+1}$ the weight coefficient after adjustment, and $\varepsilon$ a smoothing value:

$$W_{t+1} = W_t - \eta\,\frac{\hat{v}_t}{\sqrt{\hat{s}_t}+\varepsilon} \tag{1.1}$$
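
A literal NumPy transcription of this update, with β₁, β₂, and ε set to commonly used defaults (assumed values, not taken from the patent).

```python
import numpy as np

def adam_step(W, grad, v, s, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One weight update following equation (1.1); v and s are the running moment estimates."""
    v = beta1 * v + (1 - beta1) * grad            # first moment estimate (mean)
    s = beta2 * s + (1 - beta2) * grad ** 2       # second moment estimate (variance)
    v_hat = v / (1 - beta1 ** t)                  # bias correction, important early in training
    s_hat = s / (1 - beta2 ** t)
    W = W - lr * v_hat / (np.sqrt(s_hat) + eps)   # equation (1.1)
    return W, v, s
```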

The training method proposed by the present invention additionally multiplies the learning rate $\eta$ by a parameter $\alpha_l$, as in equation (1.2), redefining the global learning rate $\eta$, originally a global variable, as an independent layer-wise learning rate $\eta_l$ for each convolutional layer. The network then updates its weights with equation (1.3); the bias-corrected moment estimates $\hat{v}_t$ and $\hat{s}_t$, the weight coefficients $W_t$ and $W_{t+1}$, and the smoothing value $\varepsilon$ are as defined for equation (1.1).

$$\eta_l = \eta \times \alpha_l \tag{1.2}$$

$$W_{t+1} = W_t - \eta_l\,\frac{\hat{v}_t}{\sqrt{\hat{s}_t}+\varepsilon} \tag{1.3}$$
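
Equation (1.2) maps directly onto per-group learning rates in an optimizer; a minimal PyTorch sketch, where the α_l values are placeholders (the patent's concrete settings appear in Table 4).

```python
import torch

def build_rft_optimizer(layer_groups, global_lr=1e-3, alphas=(1.0, 1.0, 1.0)):
    """One Adam parameter group per layer group, each with its own rate eta_l = eta * alpha_l."""
    return torch.optim.Adam(
        [{"params": params, "lr": global_lr * alpha}
         for params, alpha in zip(layer_groups, alphas)]
    )

def freeze_group(optimizer, group_index):
    """Setting eta_l to zero in equation (1.3) means the group's weights stop being updated."""
    optimizer.param_groups[group_index]["lr"] = 0.0
```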

Whether equation (1.1) or equation (1.3) is used, the way the learning rate decays stays the same: if the loss is observed not to keep decreasing during training, the learning rate is decayed once, as in equation (1.4), where the subscript T denotes the epoch at which the loss was observed not to keep decreasing; the learning rate is lowered for the next epoch. In practice, the initial global learning rate is set to $10^{-3}$ and the minimum global learning rate to $10^{-5}$.

[Equation (1.4): the learning-rate decay rule, rendered as an image in the original and not reproduced here; it lowers the learning rate for the epoch after T relative to the learning rate at epoch T.]
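
A minimal sketch of this plateau-based decay; the decay factor 0.5 is a placeholder, since equation (1.4)'s actual constant is not reproduced above, and frozen groups with a zero learning rate are deliberately left untouched.

```python
def decay_on_plateau(optimizer, factor=0.5, min_lr=1e-5):
    """One learning-rate decay step in the spirit of equation (1.4), applied to a torch.optim
    optimizer; parameter groups that have been frozen (lr == 0) stay frozen."""
    for group in optimizer.param_groups:
        if group["lr"] > 0.0:
            group["lr"] = max(group["lr"] * factor, min_lr)

# Usage inside the training loop (pseudocode):
#   if the validation loss did not decrease this epoch:
#       decay_on_plateau(optimizer)
```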

The training method proposed by the present invention adjusts the learning rate inside the optimization algorithm, and every neural network uses an optimization algorithm to update its weights, so the method is not tied to any particular network architecture. Plotting the experimental results with the learning rates of the different convolutional layers on the vertical axis and the epoch on the horizontal axis traces how the learning rates change over the whole training run; the picture looks like a ripple, which is why the present invention names this training method Ripple-Freeze Training (RFT).

The near-infrared palm images used in the present invention are taken from the Multi-Spectral Palmprint Image Database provided by the Institute of Automation, Chinese Academy of Sciences (CASIA). The dataset contains 100 subjects imaged at six light wavelengths: 460 nm, 630 nm, 700 nm, 850 nm, 940 nm, and white light. Each subject has six grayscale images in each band, captured with imaging equipment designed by CASIA itself, as shown in Figure 1. Of the six 850 nm grayscale images of each subject, the present invention assigns images 1 to 4 to the training dataset, image 5 to the validation dataset, and image 6 to the testing dataset. The number of images output at each stage is shown in Figure 2. The data after the various image pre-processing steps are listed in Table 1 below.
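
A sketch of that split, assuming each subject's six 850 nm images are stored as <subject>/<index>.jpg with the index running from 1 to 6 (the directory layout is an assumption, not CASIA's documented format).

```python
from pathlib import Path

train, val, test = [], [], []
for path in sorted(Path("casia_850nm").glob("*/*.jpg")):   # hypothetical layout: <subject>/<index>.jpg
    index = int(path.stem)            # 1..6
    if index <= 4:
        train.append(path)            # images 1-4: training dataset
    elif index == 5:
        val.append(path)              # image 5: validation dataset
    else:
        test.append(path)             # image 6: testing dataset
```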

Table 1: Dataset
[Table 1: dataset image counts; rendered as an image in the original and not reproduced here.]

Using the CASIA 850 nm 100-person database and the ResNet-50 architecture, without Ripple-Freeze Training: the training results are shown in Table 2, the accuracy learning curve of the training run in Figure 3, the loss learning curve in Figure 4, and the learning rate decay curve in Figure 5.

Table 2: Training results of the CASIA 100-person dataset on ResNet-50
[Table 2: rendered as an image in the original and not reproduced here.]

Table 3 compares the experimental results with the state of the art. Compared with other palm vein recognition systems, this system reaches 99% accuracy. Reference [48] encodes the vein images with LBP and LDP and then computes the Hamming distance against the dataset images; compared with it, this system improves accuracy by 1.8%, precision by 0.49%, and recall by 1.8%. Reference [49] extracts vein features with a matched filter and then performs XOR matching on the images; compared with it, this system improves accuracy by 0.2%, lowers precision slightly by 0.83%, and improves recall by 0.2%.

Reference [48]: L. Mirmohamadsadeghi and A. Drygajlo, "Palm vein recognition with local binary patterns and local derivative patterns," in Proc. 2011 International Joint Conference on Biometrics (IJCB), Washington, DC, 2011, pp. 1-6.

Reference [49]: Y.-B. Zhang, Q. Li, J. You, and P. Bhattacharya, "Palm vein extraction and matching for personal authentication," Advances in Visual Information Systems, vol. 4781, 2007, pp. 154-164.

Table 3: Comparison of the ResNet-50 experimental results with the state of the art
[Table 3: rendered as an image in the original and not reproduced here.]

Under the Retrain strategy, training was carried out both with and without Ripple-Freeze Training. Learning rates are set for the convolutional layers and the batch normalization layers, while the pooling layers have no learning rate to set; the settings are summarized in Table 4. The accuracy comparison of the experimental results is given in Table 5 and the training time comparison in Table 6. From Tables 5 and 6 it can be observed that in the 60-person application scenario, using Ripple-Freeze Training improves accuracy by 0.03% and shortens the training time from 4 hours 17 minutes to 3 hours 42 minutes, an acceleration rate of about 13.48%. The acceleration rate is computed with equation (1.5), where t denotes the training time without Ripple-Freeze Training and t_RF the training time with Ripple-Freeze Training.

[Equation (1.5): the definition of the acceleration rate in terms of t and t_RF, rendered as an image in the original and not reproduced here.]

In the 80-person application scenario, using Ripple-Freeze Training lowers accuracy by 1.25% and shortens the training time from 5 hours 10 minutes to 4 hours 25 minutes, an acceleration rate of about 11.85%. In the 100-person application scenario, the largest number of classes the experiment can distinguish, using Ripple-Freeze Training does not lower accuracy at all and shortens the training time from 11 hours 5 minutes to 8 hours 48 minutes, an acceleration rate of about 20.61%.

Table 4: RFT learning rate settings on ResNet-50
[Table 4: rendered as an image in the original and not reproduced here.]

Table 5: Accuracy comparison for Retrain plus RFT
[Table 5: rendered as an image in the original and not reproduced here.]

Table 6: Training time comparison for Retrain plus RFT
[Table 6: rendered as an image in the original and not reproduced here.]

Finally, for the experimental results with Ripple-Freeze Training added, the accuracy learning curve is shown in Figure 6, the loss learning curve in Figure 7, the learning rate decay curve of the convolutional layers in Figure 9, the learning rate decay curve of the batch normalization layers in Figure 10, and the ROC curve in Figure 8.

Claims (10)

1. A neural network training method for finger/palm vein recognition, the neural network training method training a convolutional neural network to perform feature classification on a dataset of finger/palm vein infrared images, the convolutional neural network comprising a plurality of convolutional layers arranged in sequence as bottom, middle, and top layers, the neural network training method comprising: Step 1, training the weights of all of the convolutional layers for a predetermined number of epochs to obtain initial weights; Step 2, freezing the weights of the bottom layers; Step 3, training the middle layers for a predetermined number of epochs, the training ignoring the bottom-layer variables as a result of Step 2; Step 4, freezing the weights of the middle layers; and Step 5, training the top layers for a predetermined number of epochs, the training of Step 5 ignoring the middle-layer variables as a result of Step 4, thereby performing high-level feature classification on the dataset of finger/palm vein infrared images; wherein the weight freezing of Step 2 and Step 4 is carried out by changing the learning rate.

2. The neural network training method for finger/palm vein recognition of claim 1, wherein the weight freezing of Step 2 and Step 4 is carried out by changing the learning rate in that, in each backpropagation pass, after the gradient is computed, it is first multiplied by a learning rate and the pre-adjustment weight coefficient (W_t) is then updated to obtain a post-adjustment weight coefficient (W_{t+1}), the learning rate being changed such that the bottom, middle, and top layers of the convolutional layers have different learning rates.

3. The neural network training method for finger/palm vein recognition of claim 2, wherein the learning rate is changed by multiplying the global learning rate η, which originally belongs to the bottom, middle, and top layers of the convolutional layers as a global variable, by a parameter α_l, defining an independent layer-wise learning rate η_l for each convolutional layer.

4. The neural network training method for finger/palm vein recognition of claim 3, wherein the weight freezing is carried out by changing the learning rate, and the pre-adjustment weight coefficient (W_t) and the post-adjustment weight coefficient (W_{t+1}) are related by the weight update of equation (1.3):

$$W_{t+1} = W_t - \eta_l\,\frac{\hat{v}_t}{\sqrt{\hat{s}_t}+\varepsilon} \tag{1.3}$$

where $\hat{v}_t$ is the bias-corrected first moment estimate of the gradient at time t, representing the mean, $\hat{s}_t$ is the bias-corrected second moment estimate of the gradient at time t, representing the variance, and ε denotes a smoothing value; and wherein, in equation (1.3), the decay of these learning rates remains consistent.

5. The neural network training method for finger/palm vein recognition of claim 4, wherein, if the loss is observed not to keep decreasing during training, the learning rate of the next epoch is lowered according to equation (1.4), where the subscript T denotes the epoch at which the loss was observed not to keep decreasing. [Equation (1.4), rendered as an image in the original, is not reproduced here.]
6. A finger/palm vein recognition processing method, comprising: Step 1, acquiring a near-infrared (NIR) image of the finger/palm veins; Step 2, performing image pre-processing on the NIR image; Step 3, performing vein image feature extraction on the pre-processed image; Step 4, cropping the feature-extracted image to define a palm vein region of interest (ROI); Step 5, performing data augmentation on the palm vein ROI; Step 6, performing data pre-processing on the data obtained from the data augmentation and generating a training dataset, a validation dataset, and a testing dataset; Step 7, feeding all of the above datasets into a convolutional neural network for training, the training comprising: Step 7.1, arranging the plurality of convolutional layers of the convolutional neural network in sequence as bottom, middle, and top layers; Step 7.2, training the weights of all of the convolutional layers for a predetermined number of epochs to obtain initial weights; Step 7.3, freezing the weights of the bottom layers; Step 7.4, training the middle layers for a predetermined number of epochs, the training ignoring the bottom-layer variables as a result of Step 7.3; Step 7.5, freezing the weights of the middle layers; Step 7.6, training the top layers for a predetermined number of epochs, the training ignoring the middle-layer variables as a result of Step 7.5, thereby classifying the high-level features of the finger/palm vein infrared images; and Step 8, performing a one-to-many identification comparison on the classified high-level features; wherein the weight freezing of Step 7.3 and Step 7.5 is carried out by changing the learning rate.

7. The finger/palm vein recognition processing method of claim 6, wherein the weight freezing of Step 7.3 and Step 7.5 is carried out by changing the learning rate in that, in each backpropagation pass, after the gradient is computed, it is first multiplied by a learning rate and the pre-adjustment weight coefficient (W_t) is then updated to obtain a post-adjustment weight coefficient (W_{t+1}), the learning rate being changed such that the bottom, middle, and top layers of the convolutional layers have different learning rates.

8. The finger/palm vein recognition processing method of claim 7, wherein the learning rate is changed by multiplying the global learning rate η, which originally belongs to the bottom, middle, and top layers of the convolutional layers as a global variable, by a parameter α_l, defining an independent layer-wise learning rate η_l for each convolutional layer.

9. The finger/palm vein recognition processing method of claim 8, wherein the weight freezing is carried out by changing the learning rate, and the pre-adjustment weight coefficient (W_t) and the post-adjustment weight coefficient (W_{t+1}) are related by the weight update of equation (1.3):

$$W_{t+1} = W_t - \eta_l\,\frac{\hat{v}_t}{\sqrt{\hat{s}_t}+\varepsilon} \tag{1.3}$$

where $\hat{v}_t$ is the bias-corrected first moment estimate of the gradient at time t, representing the mean, $\hat{s}_t$ is the bias-corrected second moment estimate of the gradient at time t, representing the variance, and ε denotes a smoothing value; and wherein, in equation (1.3), the decay of these learning rates remains consistent.

10. The finger/palm vein recognition processing method of claim 9, wherein, if the loss is observed not to keep decreasing during training, the learning rate of the next epoch is lowered according to equation (1.4), where the subscript T denotes the epoch at which the loss was observed not to keep decreasing. [Equation (1.4), rendered as an image in the original, is not reproduced here.]
TW108145581A 2019-12-12 2019-12-12 Finger vein or palm vein identification processing and neural network training method TWI731511B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW108145581A TWI731511B (en) 2019-12-12 2019-12-12 Finger vein or palm vein identification processing and neural network training method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW108145581A TWI731511B (en) 2019-12-12 2019-12-12 Finger vein or palm vein identification processing and neural network training method

Publications (2)

Publication Number Publication Date
TW202123096A TW202123096A (en) 2021-06-16
TWI731511B true TWI731511B (en) 2021-06-21

Family

ID=77516947

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108145581A TWI731511B (en) 2019-12-12 2019-12-12 Finger vein or palm vein identification processing and neural network training method

Country Status (1)

Country Link
TW (1) TWI731511B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201211913A (en) * 2010-09-03 2012-03-16 Univ Nat Taiwan Science Tech A method for recognizing the identity of user by palm vein biometric
CN106529468A (en) * 2016-11-07 2017-03-22 重庆工商大学 Finger vein identification method and system based on convolutional neural network
CN109146000A (en) * 2018-09-07 2019-01-04 电子科技大学 A kind of method and device for improving convolutional neural networks based on frost weight
TW201905758A (en) * 2016-07-22 2019-02-01 美商美國Nec實驗室有限公司 Active detection for anti-scratch facial recognition
CN109359520A (en) * 2018-09-04 2019-02-19 汇纳科技股份有限公司 People counting method, system, computer readable storage medium and server


Also Published As

Publication number Publication date
TW202123096A (en) 2021-06-16

Similar Documents

Publication Publication Date Title
Bazrafkan et al. An end to end deep neural network for iris segmentation in unconstrained scenarios
Jacob Capsule network based biometric recognition system
Minaee et al. Fingernet: Pushing the limits of fingerprint recognition using convolutional neural network
Omran et al. An iris recognition system using deep convolutional neural network
Podder et al. An efficient iris segmentation model based on eyelids and eyelashes detection in iris recognition system
Kumar et al. Fingerprint matching using multi-dimensional ANN
Huang et al. Robust finger vein recognition based on deep CNN with spatial attention and bias field correction
Lazimul et al. Fingerprint liveness detection using convolutional neural network and fingerprint image enhancement
Gowroju et al. Review on secure traditional and machine learning algorithms for age prediction using IRIS image
Madenda et al. Retinal biometric identification using convolutional neural network
Waheed et al. Robust extraction of blood vessels for retinal recognition
TWI731511B (en) Finger vein or palm vein identification processing and neural network training method
Pathak et al. Entropy based CNN for segmentation of noisy color eye images using color, texture and brightness contour features
Al-jaberi et al. Palm vein recognition based on convolution neural network
Thiyaneswaran et al. Iris Recognition using Left and Right Iris Feature of the Human Eye for Biometric Security System
Borah et al. Retina and fingerprint based biometric identification system
Tayade et al. Sclera feature extraction using DWT co-efficients
Prasanth et al. Fusion of iris and periocular biometrics authentication using CNN
Roy et al. Multibiometric system using level set, modified LBP and random forest
Babalola et al. A palm vein recognition approach by multiple convolutional neural network models
Babalola et al. Dorsal hand vein biometrics with a novel deep learning approach for person identification
Sable et al. Pretrained Deep Neural Networks for Age Prediction from Iris Biometrics
Hatode et al. Evolution and Testimony of Deep Learning Algorithm for Diabetic Retinopathy Detection
Alhanjouri Improved HMM by deep learning for ear classification
Benaouda et al. A CNN Approach for the Identification of Dorsal Veins of the Hand