TW202111498A - Fingerprint recognition method, chip and electronic device - Google Patents

Fingerprint recognition method, chip and electronic device

Info

Publication number
TW202111498A
Authority
TW
Taiwan
Prior art keywords
image, sub-image, fingerprint, gradient, gray
Application number
TW108142024A
Other languages
Chinese (zh)
Other versions
TWI737040B
Inventor
李准
翟劍鋒
龍文勇
Original Assignee
大陸商敦泰電子(深圳)有限公司
Application filed by 大陸商敦泰電子(深圳)有限公司
Publication of TW202111498A
Application granted
Publication of TWI737040B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/1365 Matching; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/1347 Preprocessing; Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present invention provides a fingerprint recognition method. The method includes: acquiring an image to be recognized collected by a fingerprint collection device and dividing the image into a plurality of sub-images; extracting feature values of each sub-image; inputting the feature values of each sub-image into a preset machine learning model and determining, for each sub-image, whether it is a finger image; determining that the image is a finger image if the number of sub-images determined to be finger images exceeds a preset value; and recognizing a fingerprint in the image. The invention also provides a fingerprint recognition chip and an electronic device.

Description

Fingerprint recognition method, chip and electronic device

The present invention relates to the technical field of image recognition, and in particular to a fingerprint recognition method, a chip and an electronic device.

Owing to its convenience and security, biometric recognition has broad application prospects in fields such as identity authentication and network security. Fingerprint recognition, as one of the biometric recognition technologies, is applied to various terminal devices such as smartphones to implement identity authentication functions such as unlocking the phone and online payment. However, when fingerprint recognition is applied to terminal devices such as smartphones, body parts other than a finger, or other objects, may accidentally touch the fingerprint collection device; for example, a clothing pocket, smooth skin outside the finger area, a key or a piece of fruit peel may contact the fingerprint collection device for various reasons. If fingerprint recognition and matching are performed directly on these non-finger objects, computing resources are wasted, fingerprint recognition efficiency suffers, and a non-finger image may even be misidentified as a fingerprint.

In view of the above problems, the present invention proposes a fingerprint recognition method, chip and electronic device to improve the efficiency and accuracy of fingerprint detection.

A first aspect of the present application provides a fingerprint recognition method, the method including: acquiring an image to be recognized collected by a fingerprint collection device, and dividing the image into a plurality of sub-images; extracting a feature value of each sub-image; inputting the feature value of each sub-image into a preset machine learning model, and determining, for each sub-image, whether it is a finger image; obtaining the determination results of the sub-images, and, when more than a preset number of sub-images are determined to be finger images, determining that the image to be recognized is a finger image; and performing fingerprint recognition on the finger image.

Preferably, extracting the feature value of each sub-image includes: generating a normalized grayscale image from each sub-image; generating a normalized gradient image of each sub-image; generating a normalized gray-gradient co-occurrence matrix from the normalized grayscale image and the normalized gradient image; and extracting the feature value of each sub-image based on the normalized gray-gradient co-occurrence matrix.

Preferably, the feature values include any one or more of the following: small gradient dominance, large gradient dominance, gray distribution non-uniformity, gradient distribution non-uniformity, energy, gray mean, gradient mean, gray mean square deviation, gradient mean square deviation, correlation, gray entropy, gradient entropy, mixed entropy, difference moment, and inverse difference moment.

Optionally, the feature value of each sub-image may also be extracted based on a gray-level co-occurrence matrix.

Preferably, the preset machine learning model includes any one or more of a neural network model, a support vector machine model, a decision-tree-based classification model, and a Bayesian classification model.

Preferably, the training method of the machine learning model includes: acquiring a sample data set including positive samples and negative samples, where a positive sample is the feature value corresponding to a finger image and its label is a first label, and a negative sample is the feature value corresponding to a non-finger image and its label is a second label; dividing the sample data set into a training set and a test set; training the machine learning model with the training set; and testing the trained machine learning model with the test set, and adjusting the parameters of the trained machine learning model according to the test results.

Preferably, after the image to be recognized is determined to be a finger image, the fingerprint recognition method further detects the valid area of the fingerprint in the finger image, including: filtering the image to be recognized to obtain a filtered grayscale image; calculating the gradient of each pixel in the filtered grayscale image to obtain a gradient direction image; dividing the gradient direction image into multiple sub-blocks and calculating the variance of each sub-block; and comparing the variance of each sub-block with a set threshold, and, if the variance of the sub-block is greater than the threshold, determining that the sub-block is a valid area and setting the flag bit of the sub-block to a first value; otherwise, determining that the sub-block is an invalid area and setting the flag bit of the sub-block to a second value.

Preferably, the fingerprint recognition method further includes: for an image sub-block determined to be an invalid area, re-determining the value of the flag bit of the invalid area according to the values of the flag bits of its neighbouring areas, using a four-neighbourhood judgment method or an eight-neighbourhood judgment method.

A second aspect of the present application provides a fingerprint recognition chip, the chip including: an image segmentation module, configured to acquire a fingerprint image to be recognized and divide the fingerprint image into a plurality of sub-images; a feature extraction module, configured to extract the feature value of each sub-image; an input module, configured to input the feature value of each sub-image into a preset machine learning model and determine, for each sub-image, whether it is a finger image; and a determination module, configured to obtain the determination results of the sub-images and, when more than a preset number of sub-images are determined to be fingers, determine that the fingerprint image is a finger image.

A third aspect of the present invention provides an electronic device, the electronic device including: a fingerprint collection unit for collecting fingerprint images; a processor; and a memory storing a plurality of program modules, the plurality of program modules being loaded by the processor to execute the fingerprint recognition method described above.

The fingerprint recognition method, chip and electronic device of the present invention divide the image to be recognized into sub-images, extract feature values, and automatically recognize non-finger regions with a machine learning model. This prevents attacks from living body parts or objects other than fingers and reduces the misjudgment rate of fingerprint recognition.

In order to understand the above objectives, features and advantages of the present invention more clearly, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, where there is no conflict, the embodiments of the application and the features in the embodiments may be combined with each other.

Many specific details are set forth in the following description to facilitate a full understanding of the present invention; the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present invention.

Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of the present invention. The terms used in the description of the present invention are only for the purpose of describing specific embodiments and are not intended to limit the present invention.

Please refer to FIG. 1, which is a schematic flowchart of a fingerprint recognition method provided by an embodiment of the present invention. According to different needs, the order of the steps in the flowchart may be changed and some steps may be omitted. For ease of description, only the parts related to the embodiment of the present invention are shown.

As shown in FIG. 1, the fingerprint recognition method includes the following steps.

Step S1: acquire the image to be recognized collected by the fingerprint collection device, and divide the image into a plurality of sub-images.

The fingerprint collection device may be installed in a smart terminal such as a mobile phone, a tablet computer or industrial equipment, and is used to collect the user's fingerprint for identity authentication. The fingerprint collection device may also be installed in an attendance device to collect the user's fingerprint for attendance recording. The fingerprint collection device may collect fingerprint images by means of optical fingerprint collection technology, capacitive sensor fingerprint collection technology, ultrasonic fingerprint collection technology, electromagnetic wave fingerprint collection technology, or the like.

In an embodiment of the present invention, the image to be recognized is a grayscale image. Among existing fingerprint collection devices, some collect grayscale images and some do not; when the collected image is not a grayscale image, step S1 further includes converting the image collected by the fingerprint collection device into a grayscale image.

After the image to be recognized is acquired, the image is equally divided into m*n non-overlapping sub-images, where m and n are both positive integers greater than 1, and the values of m and n may be the same or different. For example, an acquired image may be equally divided into four sub-images in a 2*2 manner, or into six sub-images in a 2*3 manner.
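A minimal sketch of this splitting step, assuming the image is held as a NumPy array whose height and width are divisible by m and n (the array shape and function name are illustrative, not taken from the patent):

```python
import numpy as np

def split_into_subimages(image: np.ndarray, m: int, n: int) -> list[np.ndarray]:
    """Divide a grayscale image into m*n equal, non-overlapping sub-images."""
    h, w = image.shape
    bh, bw = h // m, w // n  # sub-image height and width
    return [image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            for i in range(m) for j in range(n)]

# Example: a 160x160 capture split 2*2 yields four 80x80 sub-images.
sub_images = split_into_subimages(np.zeros((160, 160), dtype=np.uint8), 2, 2)
```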

Step S2: extract the feature value of each sub-image.

In the first embodiment of the present invention, the feature value of each sub-image is extracted by calculating the gray-gradient co-occurrence matrix of each sub-image. As shown in FIG. 2, extracting the feature value of each sub-image through the gray-gradient co-occurrence matrix specifically includes the following steps.

Step S201: generate a normalized grayscale image of each sub-image.

The normalized grayscale image may be generated using a mean-variance normalization algorithm or a gray-level transformation normalization algorithm.

Mean-variance normalization converts images collected at different times and under different brightness into a standard image with the same gray mean and variance; gray-level transformation normalization uses gray-level stretching to expand the gray distribution of the original image to span the entire gray-level range. Both algorithms are existing algorithms and are not described in detail here.
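A minimal sketch of mean-variance normalization; the target mean and variance are implementation choices and the values below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def mean_variance_normalize(image: np.ndarray,
                            target_mean: float = 128.0,
                            target_var: float = 2000.0) -> np.ndarray:
    """Map a grayscale image to a fixed gray mean and variance."""
    img = image.astype(np.float64)
    mean, var = img.mean(), img.var()
    if var == 0:
        return np.full_like(img, target_mean)
    # Pixels above/below the image mean are shifted symmetrically around target_mean.
    delta = np.sqrt(target_var * (img - mean) ** 2 / var)
    return np.where(img > mean, target_mean + delta, target_mean - delta)
```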

Step S202: generate a normalized gradient image of each sub-image.

In one embodiment, the normalized gradient image may be generated using the Sobel operator. The Sobel operator is a discrete first-order difference operator used to compute an approximation of the first-order gradient of the image brightness function. The Sobel operator is used to obtain the gradient direction image G(x,y) of the grayscale image f(x,y), where x represents the abscissa of a pixel in the grayscale image and y represents its ordinate.

The gradient direction image G(x,y) is calculated as follows:

G(x,y) = √(Gx² + Gy²),

where Gx and Gy are the gradient magnitudes of the original grayscale image along the x-axis and the y-axis respectively, calculated as follows:
Gx = {f(x+1,y-1)+2f(x+1,y)+f(x+1,y+1)} - {f(x-1,y-1)+2f(x-1,y)+f(x-1,y+1)};
Gy = {f(x-1,y+1)+2f(x,y+1)+f(x+1,y+1)} - {f(x-1,y-1)+2f(x,y-1)+f(x+1,y-1)}.

In other embodiments, the gradient operator may also be one of the Roberts operator, the Prewitt operator, or the Laplace operator.
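A minimal sketch of the Sobel gradient computation, written directly from the Gx and Gy expressions above; leaving border pixels at zero is an illustrative choice, not something specified by the patent:

```python
import numpy as np

def sobel_gradient(f: np.ndarray) -> np.ndarray:
    """Return the gradient image G(x, y) = sqrt(Gx^2 + Gy^2) of a grayscale image."""
    f = f.astype(np.float64)
    g = np.zeros_like(f)
    rows, cols = f.shape
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            # f[row, col] corresponds to f(x, y) with x = column, y = row.
            gx = (f[y - 1, x + 1] + 2 * f[y, x + 1] + f[y + 1, x + 1]) \
               - (f[y - 1, x - 1] + 2 * f[y, x - 1] + f[y + 1, x - 1])
            gy = (f[y + 1, x - 1] + 2 * f[y + 1, x] + f[y + 1, x + 1]) \
               - (f[y - 1, x - 1] + 2 * f[y - 1, x] + f[y - 1, x + 1])
            g[y, x] = np.hypot(gx, gy)
    return g
```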

Step S203: generate a normalized gray-gradient co-occurrence matrix from the normalized grayscale image and the normalized gradient image.

The gray-gradient co-occurrence matrix (GGCM) is a matrix formed from the gray levels nGray and the gradient levels nGrad. The element H(i,j) of the gray-gradient co-occurrence matrix is defined as the number of pixels that have gray value i in the normalized grayscale image F(x,y) and gradient value j in the normalized gradient image G(x,y), that is:

H(i,j) = number of pixels (x,y) with F(x,y) = i and G(x,y) = j,

where F(x,y) is the value in the normalized grayscale image, G(x,y) is the value in the normalized gradient image, nGray is the number of levels of the normalized grayscale image, and nGrad is the number of levels of the normalized gradient image.

Let Ĥ(i,j) be the normalized gray-gradient co-occurrence matrix, with

Ĥ(i,j) = H(i,j) / (Σi Σj H(i,j)).
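A minimal sketch of building the normalized gray-gradient co-occurrence matrix, assuming the grayscale and gradient images have already been quantized to nGray and nGrad discrete integer levels (the quantization scheme itself is an implementation choice not specified here):

```python
import numpy as np

def gray_gradient_comatrix(gray_q: np.ndarray, grad_q: np.ndarray,
                           n_gray: int, n_grad: int) -> np.ndarray:
    """Count pixels with gray level i and gradient level j, then normalize so the sum is 1."""
    h = np.zeros((n_gray, n_grad), dtype=np.float64)
    for i, j in zip(gray_q.ravel(), grad_q.ravel()):
        h[i, j] += 1
    return h / h.sum()  # normalized GGCM
```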

Step S204: extract the feature value of each sub-image based on the gray-gradient co-occurrence matrix.

The feature values extracted through the gray-gradient co-occurrence matrix include one or more of the following: small gradient dominance, large gradient dominance, gray distribution non-uniformity, gradient distribution non-uniformity, energy, gray mean, gradient mean, gray mean square deviation, gradient mean square deviation, correlation, gray entropy, gradient entropy, mixed entropy, difference moment, and inverse difference moment.

The fifteen feature values are computed from the gray-gradient co-occurrence matrix H(i,j) and its normalized form Ĥ(i,j), with the gray level index i and the gradient level index j running from 1 to nGray and from 1 to nGrad respectively, in the standard way:
1) Small gradient dominance: T1 = [Σi Σj H(i,j)/j²] / [Σi Σj H(i,j)];
2) Large gradient dominance: T2 = [Σi Σj j²·H(i,j)] / [Σi Σj H(i,j)];
3) Gray distribution non-uniformity: T3 = [Σi (Σj H(i,j))²] / [Σi Σj H(i,j)];
4) Gradient distribution non-uniformity: T4 = [Σj (Σi H(i,j))²] / [Σi Σj H(i,j)];
5) Energy: T5 = Σi Σj Ĥ(i,j)²;
6) Gray mean: T6 = Σi i·[Σj Ĥ(i,j)];
7) Gradient mean: T7 = Σj j·[Σi Ĥ(i,j)];
8) Gray mean square deviation: T8 = { Σi (i - T6)²·[Σj Ĥ(i,j)] }^(1/2);
9) Gradient mean square deviation: T9 = { Σj (j - T7)²·[Σi Ĥ(i,j)] }^(1/2);
10) Correlation: T10 = Σi Σj (i - T6)(j - T7)·Ĥ(i,j);
11) Gray entropy: T11 = -Σi [Σj Ĥ(i,j)]·log[Σj Ĥ(i,j)];
12) Gradient entropy: T12 = -Σj [Σi Ĥ(i,j)]·log[Σi Ĥ(i,j)];
13) Mixed entropy: T13 = -Σi Σj Ĥ(i,j)·log Ĥ(i,j);
14) Difference moment: T14 = Σi Σj (i - j)²·Ĥ(i,j);
15) Inverse difference moment: T15 = Σi Σj Ĥ(i,j) / [1 + (i - j)²].

It can be understood that steps S201 and S202 may be performed in either order. The gray-gradient co-occurrence matrix can extract rich fingerprint information from the finger image, which helps to improve the accuracy of finger recognition. Gray level is the basis of an image, the gradient is the element that constitutes the edge contours of an image, and the main information of an image is provided by its edge contours. The gray-gradient co-occurrence matrix clearly describes the distribution of the gray level and gradient of each pixel in the image, and also gives the spatial relationship between each pixel and its neighbouring pixels; it characterizes the texture of the image well, and directional textures are reflected in the gradient direction.
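A minimal sketch computing a few of these statistics from the normalized matrix built in the previous sketch; it shows only energy, gray mean, gradient mean and mixed entropy, under the standard definitions listed above, and the remaining features follow the same pattern:

```python
import numpy as np

def ggcm_features(h_norm: np.ndarray) -> dict[str, float]:
    """A few texture statistics of a normalized gray-gradient co-occurrence matrix."""
    n_gray, n_grad = h_norm.shape
    i = np.arange(1, n_gray + 1)[:, None]   # gray level index (1-based, matching the formulas)
    j = np.arange(1, n_grad + 1)[None, :]   # gradient level index
    energy = float((h_norm ** 2).sum())
    gray_mean = float((i * h_norm).sum())
    grad_mean = float((j * h_norm).sum())
    nz = h_norm[h_norm > 0]                 # avoid log(0) in the entropy term
    mixed_entropy = float(-(nz * np.log(nz)).sum())
    return {"energy": energy, "gray_mean": gray_mean,
            "grad_mean": grad_mean, "mixed_entropy": mixed_entropy}
```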

In the second embodiment of the present invention, the feature value of each sub-image may also be extracted by calculating the gray-level co-occurrence matrix of each sub-image.

The gray-level co-occurrence matrix (GLCM) is a method of describing texture by studying the spatial correlation characteristics of image gray levels. The gray-level co-occurrence matrix is an L*L matrix, where L is the number of gray levels after quantization. The dimension of the gray-level co-occurrence matrix is independent of the size of the image to be processed (a grayscale image or an image converted to grayscale) and is determined by the number of gray levels L: if the image to be processed has size a*b and the number of gray levels is L, the size of its gray-level co-occurrence matrix is L*L. The larger the gray level L, the clearer the image and the more faithfully it reflects the original; however, the larger L is, the larger the dimension of the gray-level co-occurrence matrix, which greatly increases the amount of computation. Generally, the gray values of a grayscale image range from 0 to 255, and the number of gray levels is typically chosen from [8, 16, 32, 64, 128, 256]. When calculating the gray-level co-occurrence matrix and its feature values, a gray level of 8 or 16 is preferred in order to reduce the amount of computation.

In the second embodiment, a gray level of L=8 is preferably used. The gray values in the range 0-255 can be quantized into the 8 quantization levels 0-7 as follows:

Ng = ⌊g / 32⌋,

where g represents the original gray value and Ng represents the quantized gray level.
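A minimal sketch of this quantization step; the ⌊g/32⌋ mapping above is one natural reading of the 0-255 to 0-7 reduction:

```python
import numpy as np

def quantize_gray(image: np.ndarray, levels: int = 8) -> np.ndarray:
    """Quantize 8-bit gray values (0-255) into `levels` discrete gray levels."""
    step = 256 // levels          # 32 when levels == 8
    return (image.astype(np.int32) // step).clip(0, levels - 1)
```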

It can be understood that, in other embodiments, the gray values of the image may also be quantized into other numbers of gray levels; the present invention does not limit this, and embodiments with other quantization levels all fall within the protection scope of the present invention.

In the second embodiment, the values of the elements of the gray-level co-occurrence matrix are obtained as follows: count the number p of point pairs (two pixels form a point pair) in the grayscale image that satisfy the following conditions: the distance between the two pixels of the pair is d (d is also called the step length or displacement), the direction is θ (i.e., the angle between the line connecting the two points and the horizontal axis), and their quantized gray levels are i and j respectively. Then p is the element value in the i-th row and j-th column of the gray-level co-occurrence matrix. The displacement d and the direction θ need to be initialized and remain unchanged during the calculation.

In the second embodiment, the displacement is preferably d=1, that is, two adjacent pixels form a point pair. The direction θ takes the values 0 degrees, 45 degrees, 90 degrees and 135 degrees. In other words, in this embodiment, each element (i,j) of the gray-level co-occurrence matrix represents the number of times that gray level i and gray level j are adjacent in direction θ in the image.

In one embodiment, the feature values may be extracted from the gray-level co-occurrence matrix of a single direction. In another embodiment, the gray-level co-occurrence matrix is obtained by averaging the gray-level co-occurrence matrices of several different directions; for example, the gray-level co-occurrence matrix of the sub-image is obtained by averaging the matrices of the four directions 0 degrees, 45 degrees, 90 degrees and 135 degrees, i.e. the gray-level co-occurrence matrix GLCM is:

GLCM = [GLCM(0°) + GLCM(45°) + GLCM(90°) + GLCM(135°)] / 4.
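A minimal sketch of building such a direction-averaged GLCM with d=1, written in plain NumPy so that no particular library version is assumed; the (row, column) offsets used to encode the four directions are an assumption about the angle convention:

```python
import numpy as np

# (row, col) offsets for directions 0, 45, 90, 135 degrees at distance d = 1
OFFSETS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}

def glcm_one_direction(q: np.ndarray, levels: int, offset: tuple[int, int]) -> np.ndarray:
    """Co-occurrence counts of quantized gray levels for a single direction."""
    dr, dc = offset
    glcm = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = q.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                glcm[q[r, c], q[r2, c2]] += 1
    return glcm

def averaged_glcm(q: np.ndarray, levels: int = 8) -> np.ndarray:
    """Average of the four directional GLCMs, normalized to probabilities."""
    glcm = sum(glcm_one_direction(q, levels, off) for off in OFFSETS.values()) / 4
    return glcm / glcm.sum()
```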

In the second embodiment, the feature values of the sub-image extracted based on the gray-level co-occurrence matrix include contrast, correlation, homogeneity, and maximum probability.

The contrast is calculated as:

Contrast = Σi Σj (i - j)²·P(i,j),

where P(i,j) denotes an element of the normalized gray-level co-occurrence matrix.

The correlation is calculated as:

Correlation = Σi Σj (i - μi)(j - μj)·P(i,j) / (σi·σj),

where μi and μj are the means, and σi and σj the standard deviations, of the row and column marginal distributions of P(i,j).

The homogeneity is calculated as:

Homogeneity = Σi Σj P(i,j) / (1 + |i - j|).

The maximum probability is calculated as:

Maximum probability = max over (i,j) of P(i,j).

In practical applications, 14 kinds of feature values can generally be extracted from a gray-level co-occurrence matrix. It can therefore be understood that, in other embodiments, other feature values extracted based on the gray-level co-occurrence matrix may also be used as the feature values of the sub-image; such embodiments all fall within the protection scope of the present invention.
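A minimal sketch of the four statistics used in this embodiment, following the common definitions given above; the 1/(1+|i-j|) weighting for homogeneity is one of the usual variants, assumed here:

```python
import numpy as np

def glcm_features(p: np.ndarray) -> dict[str, float]:
    """Contrast, correlation, homogeneity and maximum probability of a normalized GLCM."""
    levels = p.shape[0]
    i = np.arange(levels)[:, None]
    j = np.arange(levels)[None, :]
    contrast = float(((i - j) ** 2 * p).sum())
    mu_i, mu_j = float((i * p).sum()), float((j * p).sum())
    sd_i = float(np.sqrt(((i - mu_i) ** 2 * p).sum()))
    sd_j = float(np.sqrt(((j - mu_j) ** 2 * p).sum()))
    correlation = float(((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j + 1e-12))
    homogeneity = float((p / (1.0 + np.abs(i - j))).sum())
    max_probability = float(p.max())
    return {"contrast": contrast, "correlation": correlation,
            "homogeneity": homogeneity, "max_probability": max_probability}
```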

In other embodiments of the present invention, the feature values of the sub-images may also be extracted by other feature extraction methods, for example by extracting LBP (Local Binary Pattern) features of the image.

Step S3: input the feature value of each sub-image into the preset machine learning model, and determine, for each sub-image, whether it is a finger image.

The input of the machine learning model is the feature values of the image. In the first embodiment, the feature values are those extracted from the gray-gradient co-occurrence matrix. In the second embodiment, the feature values include the contrast, correlation, homogeneity and maximum probability extracted based on the gray-level co-occurrence matrix. The output of the machine learning model is the classification result of the image, i.e., whether it is a finger image.

In embodiments of the present invention, the machine learning model may be, but is not limited to, a BP neural network model, a support vector machine model, a decision-tree-based classification model, a Bayesian classification model, or the like. The present invention does not enumerate the machine learning models one by one; all related variant embodiments fall within the protection scope of the present invention.

FIG. 3 is a schematic flowchart of a training method for the neural network model in an embodiment of the present invention. In this embodiment, the training method of the neural network model includes the following steps.

Step S301: acquire a sample data set including positive samples and negative samples, where a positive sample is the feature values extracted based on the gray-level co-occurrence matrix for a finger image, with label 1, and a negative sample is the feature values extracted based on the gray-level co-occurrence matrix for a non-finger image, with label 0. For example, pictures of non-finger objects such as fruit peel or conductive foam correspond to label 0.

In this embodiment, the feature values of the positive and negative sample images include the contrast, correlation, homogeneity and maximum probability extracted based on the gray-level co-occurrence matrix.

Step S302: divide the sample data set into a training set and a test set.

For example, for the positive and negative sample images described above, 70% of them are grouped into one class as the training set, whose purpose is to train the neural network model, and the remaining 30% are grouped into another class as the test set, whose purpose is to test the classification performance of the neural network model.

Step S303: train the neural network model with the training set.

The process of inputting the training set into the established neural network model for training can be implemented by means in the prior art and is not described in detail here. In some embodiments, training the neural network model with the training set may further include: deploying the training of the deep neural network model on multiple graphics processing units (GPUs) for distributed training. For example, based on TensorFlow's distributed training mechanism, the training of the model can be deployed on multiple graphics processors, which shortens the training time of the model and accelerates its convergence.

Step S304: test the trained neural network model with the test set, and adjust the parameters of the trained neural network model according to the test results.

In this embodiment, testing the trained neural network model with the test set may include: feeding the sample data of the test set into the trained neural network model, which classifies the feature values of each sample image, i.e., judges whether the sample image is a finger image; comparing this result with the sample's own label gives the classification accuracy of the neural network model. Whether training needs to continue is determined according to the accuracy; if the accuracy reaches a preset value, the model training is complete.
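A minimal sketch of this train/test flow using scikit-learn as a stand-in classifier; the MLP architecture, the feature file names, the 70/30 split and the 0.95 accuracy target are illustrative assumptions, not values from the patent:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# X: feature vectors (e.g. contrast, correlation, homogeneity, max probability)
# y: labels, 1 = finger sub-image, 0 = non-finger sub-image
X = np.load("features.npy")          # hypothetical pre-extracted features
y = np.load("labels.npy")            # hypothetical labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

accuracy = accuracy_score(y_test, model.predict(X_test))
if accuracy < 0.95:                   # illustrative target accuracy
    print(f"accuracy {accuracy:.3f} below target, adjust parameters and retrain")
```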

The training of the machine learning model may be completed offline. After the neural network model has been trained, a sub-image to be recognized is input into the neural network model, which identifies whether the sub-image is a finger image, realizing automatic image recognition. The whole process saves time and labour; it not only improves the efficiency of finger image recognition but also greatly improves its accuracy.

Step S4: obtain the determination results of the sub-images, and, when more than a preset number of sub-images are determined to be fingers, determine that the fingerprint image is a finger image.

The preset number can be set as required. When the number of sub-images determined to be fingers is less than the preset number, the fingerprint image is determined to be a non-finger image. For example, when the fingerprint image is divided into four sub-images, if three or more of the four sub-images are determined to be fingers, the fingerprint image is determined to be a finger image; if only two or fewer of the four sub-images are determined to be fingers, the fingerprint image is determined to be a non-finger image.

In some embodiments of the present invention, the sub-images of the image to be recognized may be input into the machine learning model one by one; once more than the preset number of sub-images have been determined to be finger images, the input and judgment of the remaining sub-images can be stopped and the image to be recognized is directly determined to be a finger image, which further reduces the amount of computation and improves the efficiency of finger image recognition. For example, when the image to be recognized is divided into six sub-images, if the first four sub-images are all determined to be finger images and the preset number has already been reached, the remaining two sub-images need not be input into the machine learning model, which saves computation time.
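A minimal sketch of this voting decision with early stopping, reusing a trained classifier and a feature-extraction function from the sketches above; the default threshold of 3 (out of 4 sub-images) is the example given in the text:

```python
import numpy as np

def is_finger_image(sub_images: list[np.ndarray], extract_features, model,
                    preset_number: int = 3) -> bool:
    """Return True as soon as enough sub-images are classified as finger images."""
    finger_votes = 0
    for sub in sub_images:
        features = extract_features(sub).reshape(1, -1)
        if model.predict(features)[0] == 1:      # 1 = finger
            finger_votes += 1
            if finger_votes >= preset_number:    # early stop: skip remaining sub-images
                return True
    return False
```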

The present invention divides the image collected by the fingerprint collection device into multiple sub-images for feature extraction, and uses a machine learning model to identify whether it is a finger image; fingerprint recognition and matching authentication are performed only when the image is identified as a finger image, and non-finger images are excluded. This effectively avoids attacks by foreign objects such as clothing pockets, fruit peel or smooth skin, reduces the amount of computation required for fingerprint recognition, and improves the accuracy of fingerprint recognition.

Further, when it is determined that the image to be recognized is a finger image rather than an image of some other, non-finger object, fingerprint recognition and matching can begin. However, during fingerprint matching, the fingerprint texture in some regions is blurred while in others it is clear. Regions with blurred fingerprint texture easily lead to fingerprint recognition errors, and recognizing and matching the fingerprint of the whole finger also requires a large amount of computation.

Therefore, in order to further improve the accuracy and efficiency of fingerprint recognition, in this embodiment, after the image to be recognized is determined to be a finger image, the fingerprint recognition method preferably further includes step S5: detecting the valid area of the fingerprint in the finger image. The valid area refers to the area of the image where the fingerprint ridges are clear, and the invalid area refers to the area where the fingerprint ridges are unclear. It can be understood that in other embodiments step S5 may also be omitted.

Specifically, refer to FIG. 4, which is a schematic flowchart of a method for detecting the valid area of a fingerprint in an embodiment of the present invention.

In this embodiment, the method for detecting the valid area of the fingerprint includes the following steps.

Step S401: filter the image to be recognized collected by the fingerprint collection device to obtain a filtered grayscale image.

The filtering method may be, but is not limited to, any one of median filtering, mean filtering, Gaussian filtering and bilateral filtering. Filtering the image to be recognized eliminates the influence of noise. In a preferred embodiment, mean filtering is used to filter the image to be recognized.

Step S402: calculate the gradient of each pixel in the filtered grayscale image to obtain the gradient direction image.

Specifically, the Sobel operator can be used to obtain the gradient direction image G(x,y) of the grayscale image f(x,y), where x represents the abscissa of a pixel in the grayscale image and y represents its ordinate.

The gradient direction image G(x,y) is calculated as follows:

G(x,y) = √(Gx² + Gy²),

where Gx and Gy are the gradient magnitudes of the original grayscale image along the x-axis and the y-axis respectively, calculated as follows:
Gx = {f(x+1,y-1)+2f(x+1,y)+f(x+1,y+1)} - {f(x-1,y-1)+2f(x-1,y)+f(x-1,y+1)};
Gy = {f(x-1,y+1)+2f(x,y+1)+f(x+1,y+1)} - {f(x-1,y-1)+2f(x,y-1)+f(x+1,y-1)}.

In other embodiments, other operators may also be used to calculate the gradient direction image.

Step S403: divide the gradient direction image into multiple sub-blocks, and calculate the variance of each sub-block.

The present invention does not specifically limit how the gradient direction image is divided. In a preferred embodiment, the gradient direction image may be divided into 16*16 sub-blocks; in other embodiments it may be divided into another number of sub-blocks. The variance of each sub-block can be calculated with formulas in the prior art and is not described in detail here.

Step S404: compare the variance of each sub-block with a set threshold; if it is greater than the threshold, determine that the sub-block is a valid area and set the flag bit of the sub-block to the first value; otherwise, determine that the sub-block is an invalid area and set the flag bit of the sub-block to the second value. In this embodiment, the first value is 1 and the second value is 0.

The threshold can be set as required, and the present invention does not specifically limit it.
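A minimal sketch of steps S401-S404, using a simple mean filter, the Sobel gradient sketch from earlier, and a 16*16 block grid; the concrete filter size and variance threshold are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def detect_valid_blocks(image: np.ndarray, blocks: int = 16,
                        threshold: float = 100.0) -> np.ndarray:
    """Return a blocks x blocks flag map: 1 = valid (clear ridges), 0 = invalid."""
    filtered = uniform_filter(image.astype(np.float64), size=3)   # S401: mean filtering
    grad = sobel_gradient(filtered)                               # S402: gradient image (see earlier sketch)
    bh, bw = grad.shape[0] // blocks, grad.shape[1] // blocks
    flags = np.zeros((blocks, blocks), dtype=np.uint8)
    for r in range(blocks):
        for c in range(blocks):
            block = grad[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]   # S403: sub-block
            flags[r, c] = 1 if block.var() > threshold else 0        # S404: variance test
    return flags
```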

After the valid area with clear fingerprint ridges in the finger image has been detected, fingerprint recognition and matching can be performed on the valid area, which improves the accuracy of fingerprint recognition; moreover, recognizing and matching only the valid area requires far less computation than recognizing and comparing the entire finger image, so the computational efficiency is also improved.

In a preferred embodiment of the present invention, in order to further improve the accuracy of fingerprint recognition, after the valid area of the fingerprint has been detected, the method further includes, for the sub-blocks determined to be invalid areas: re-determining each invalid area with the four-neighbourhood judgment method according to the values of the flag bits of its four adjacent areas above, below, to the left and to the right.

The four-neighbourhood judgment method works as follows: when the flag bits of the four neighbours of an area all belong to one class, the area is considered to belong to the same class as its neighbourhood, so the area is assigned to that class and the value of its flag bit is set to the value of the neighbours' flag bits; otherwise, the value of the area's flag bit remains unchanged. For example, as shown in FIG. 5, suppose the position of an invalid area is f(x,y) and the value of its flag bit is 0. If the flag bits of its four adjacent areas, i.e. the area above f(x,y-1), the area below f(x,y+1), the area to the left f(x-1,y) and the area to the right f(x+1,y), are all 1, then the value of the flag bit of the invalid area is re-determined to be 1 according to the four neighbours; that is, according to the four-neighbourhood judgment method, the invalid area is re-determined to be a valid area.

When the invalid areas in the image are re-evaluated with the four-neighbourhood method, the number of available neighbours depends on the position of the invalid area in the image, and some image blocks have fewer than four neighbours; for example, a block in the top-left corner of the original image may only have a right neighbour and a lower neighbour. As shown in FIG. 6, an invalid area may lie in the top-left, top-right, bottom-left or bottom-right corner, on the top, bottom, left or right boundary, or in the central region of the image to be recognized. For these special cases, when the value of an invalid area is re-determined according to the four-neighbourhood judgment method, the nine positions are handled separately: 1) top-left corner; 2) top-right corner; 3) bottom-left corner; 4) bottom-right corner; 5) top boundary; 6) bottom boundary; 7) left boundary; 8) right boundary; 9) central region. In each case, if the flag bits of the neighbouring blocks that exist at that position satisfy the corresponding condition, the flag bit f(x,y) of the invalid block is set to 1, i.e. the block is re-determined to be valid; a sketch of this re-judgment is given below.
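A minimal sketch of the re-judgment applied to the flag map from the previous sketch. Because the exact corner and boundary conditions are given only as equation figures in the original publication, this sketch uses the simplifying assumption that an invalid block is revalidated when all of its existing four-neighbours are valid:

```python
import numpy as np

def rejudge_invalid_blocks(flags: np.ndarray) -> np.ndarray:
    """Re-mark an invalid block (0) as valid (1) when all existing 4-neighbours are valid."""
    result = flags.copy()
    rows, cols = flags.shape
    for r in range(rows):
        for c in range(cols):
            if flags[r, c] == 0:
                neighbours = [flags[nr, nc]
                              for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                              if 0 <= nr < rows and 0 <= nc < cols]
                if neighbours and all(v == 1 for v in neighbours):
                    result[r, c] = 1   # surrounded by valid blocks: treat as valid
    return result
```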

In other embodiments, the invalid areas may also be re-determined according to an eight-neighbourhood judgment method.

FIGS. 1-6 describe the fingerprint recognition method of the present invention in detail; with this method, the efficiency and accuracy of fingerprint recognition can be improved. The functional modules of the software chip and the hardware device architecture that implement the fingerprint recognition method are introduced below with reference to FIG. 7 and FIG. 8. It should be understood that the embodiments are for illustration only, and the scope of the patent application is not limited by this structure.

FIG. 7 is a structural diagram of a fingerprint recognition chip provided by an embodiment of the present invention.

In some embodiments, the fingerprint recognition chip 200 may include a plurality of functional modules composed of program code segments to implement the fingerprint recognition function.

Referring to FIG. 7, in this embodiment the fingerprint recognition chip 200 can be divided into a plurality of functional modules according to the functions it performs, and each functional module is used to perform the corresponding steps of the embodiment of FIG. 1 to implement the fingerprint recognition function. In this embodiment, the functional modules of the fingerprint recognition chip 200 include: an image segmentation module 201, a feature extraction module 202, an input module 203, a determination module 204, and a valid area detection module 205. The functions of each functional module are described in detail below.

The image segmentation module 201 is used to acquire the fingerprint image to be recognized and divide the fingerprint image into a plurality of sub-images.

The feature extraction module 202 is used to extract the feature value of each sub-image.

In one embodiment, the feature values are extracted using the gray-gradient co-occurrence matrix described above; the feature values include small gradient dominance, large gradient dominance, gray distribution non-uniformity, gradient distribution non-uniformity, energy, gray mean, gradient mean, gray mean square deviation, gradient mean square deviation, correlation, gray entropy, gradient entropy, mixed entropy, difference moment and inverse difference moment.

In another embodiment, the feature values are extracted using the gray-level co-occurrence matrix described above.

The input module 203 is used to input the feature value of each sub-image into the preset machine learning model and determine, for each sub-image, whether it is a finger image.

The determination module 204 is used to obtain the determination results of the sub-images; when more than the preset number of sub-images are determined to be fingers, it determines that the fingerprint image is a finger image.

Further, the valid area detection module 205 is used to detect the valid area of the fingerprint in the image when the image to be recognized is determined to be a finger image.

Specifically, the detection of the valid fingerprint area by the valid area detection module 205 includes: filtering the original grayscale fingerprint image to obtain a filtered grayscale image; calculating the gradient of each pixel of the filtered grayscale image to obtain a gradient direction image; dividing the gradient direction image into multiple sub-blocks and calculating the variance of each sub-block; and comparing the variance of each sub-block with a set threshold: if it is greater than the threshold, determining that the sub-block is a valid area and setting the flag bit of the sub-block to the first value; otherwise, determining that the sub-block is an invalid area and setting the flag bit of the sub-block to the second value. The first value is 1 and the second value is 0.

Further, the detection of the valid fingerprint area by the valid area detection module 205 also includes: after the valid area of the fingerprint has been detected, for the sub-blocks determined to be invalid areas, re-determining each invalid area with the four-neighbourhood judgment method according to the values of the flag bits of its four adjacent areas above, below, to the left and to the right.

FIG. 8 is a schematic diagram of the functional modules of an electronic device provided by an embodiment of the present invention. The electronic device 10 includes a fingerprint collection unit 11, a memory 12, a processor 13, and a computer program 14, such as a fingerprint recognition program, stored in the memory 12 and executable on the processor 13.

In this embodiment, the electronic device 10 may be, but is not limited to, a smartphone, a tablet computer, smart industrial equipment, a fingerprint attendance machine, or the like.

The fingerprint collection unit 11 is used to collect fingerprint images. The fingerprint collection unit 11 may collect fingerprint images by means of optical fingerprint collection technology, capacitive sensor fingerprint collection technology, ultrasonic fingerprint collection technology, electromagnetic wave fingerprint collection technology, or the like.

When the processor 13 executes the computer program 14, it implements the steps of the fingerprint recognition method in the foregoing method embodiment, so as to recognize the fingerprint images collected by the fingerprint collection unit 11. Alternatively, when the processor 13 executes the computer program 14, it realizes the functions of the modules/units in the foregoing chip embodiment.

Exemplarily, the computer program 14 may be divided into one or more modules/units, which are stored in the memory 12 and executed by the processor 13 to carry out the present invention. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 14 in the electronic device 10. For example, the computer program 14 may be divided into the modules 201-205 shown in FIG. 7.

Those skilled in the art will understand that the schematic diagram is merely an example of the electronic device 10 and does not limit it; the electronic device 10 may include more or fewer components than shown, may combine certain components, or may use different components. For example, the electronic device 10 may also include input and output devices.

The processor 13 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 13 is the control center of the electronic device 10 and connects the various parts of the entire electronic device 10 through various interfaces and lines.

The memory 12 may be used to store the computer program 14 and/or the modules/units. The processor 13 realizes the various functions of the electronic device 10 by running or executing the computer programs and/or modules/units stored in the memory 12 and by calling the data stored in the memory 12. The memory 12 may include an external storage medium and may also include internal memory. In addition, the memory 12 may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.

If the modules/units integrated in the electronic device 10 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the present invention implements all or part of the processes of the foregoing method embodiments, which may also be accomplished by instructing the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, it can implement the steps of the foregoing method embodiments. It should be noted that the content contained in the computer-readable medium may be appropriately added or removed according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.

Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention and are not limiting. Although the present invention has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that modifications or equivalent replacements may be made to the technical solutions of the present invention without departing from the spirit and scope of those technical solutions.

10: electronic device; 11: fingerprint collection unit; 12: memory; 13: processor; 14: computer program; 200: fingerprint recognition chip; 201: image segmentation module; 202: feature extraction module; 203: input module; 204: determination module; 205: effective area detection module; S1-S5, S201-S204, S301-S304, S401-S404: steps

FIG. 1 is a schematic flowchart of a fingerprint recognition method according to an embodiment of the present invention. FIG. 2 is a schematic flowchart of a method for extracting feature values of sub-images according to an embodiment of the present invention. FIG. 3 is a schematic diagram of a machine learning model training method according to an embodiment of the present invention. FIG. 4 is a schematic flowchart of a method for detecting the effective fingerprint area according to an embodiment of the present invention. FIG. 5 is a schematic diagram of the principle of the method for detecting the effective fingerprint area according to an embodiment of the present invention. FIG. 6 is another schematic diagram of the principle of the method for detecting the effective fingerprint area according to an embodiment of the present invention. FIG. 7 is a schematic diagram of a fingerprint recognition chip according to an embodiment of the present invention. FIG. 8 is a schematic diagram of the architecture of an electronic device according to an embodiment of the present invention.

S1-S5: steps

Claims (12)

1. A fingerprint recognition method, the method comprising: acquiring an image to be recognized collected by a fingerprint collection device, and dividing the image into a plurality of sub-images; extracting feature values of each sub-image; inputting the feature values of each sub-image into a preset machine learning model, and determining whether each sub-image is a finger image; obtaining the judgment results of the sub-images, and when the number of sub-images judged to be finger images exceeds a preset value, determining that the image to be recognized is a finger image; and performing fingerprint recognition on the finger image.

2. The fingerprint recognition method according to claim 1, wherein extracting the feature values of each sub-image comprises: generating a normalized grayscale image from each sub-image; generating a normalized gradient image of each sub-image; generating a normalized gray-gradient co-occurrence matrix from the normalized grayscale image and the normalized gradient image; and extracting the feature values of each sub-image based on the normalized gray-gradient co-occurrence matrix.

3. The fingerprint recognition method according to claim 2, wherein the feature values comprise any one or more of the following: small gradient dominance, large gradient dominance, gray distribution non-uniformity, gradient distribution non-uniformity, energy, gray mean, gradient mean, gray mean square deviation, gradient mean square deviation, correlation, gray entropy, gradient entropy, mixed entropy, difference moment, and inverse difference moment.

4. The fingerprint recognition method according to claim 1, wherein the feature values of each sub-image are extracted based on a gray-level co-occurrence matrix.

5. The fingerprint recognition method according to claim 1, wherein the preset machine learning model comprises one or more of the following: a neural network model, a support vector machine model, a decision-tree-based classification model, and a Bayesian classification model.

6. The fingerprint recognition method according to claim 5, wherein the training method of the machine learning model comprises: acquiring a sample data set, the sample data set including positive samples and negative samples, the positive samples being feature values corresponding to finger images with a corresponding first label, and the negative samples being feature values corresponding to non-finger images with a corresponding second label; dividing the sample data set into a training set and a test set; training the machine learning model with the training set; and testing the trained machine learning model with the test set, and adjusting the parameters of the trained machine learning model according to the test results.

7. The fingerprint recognition method according to claim 1, wherein, after the image to be recognized is determined to be a finger image, the fingerprint recognition method further detects the effective area of the fingerprint in the finger image, including: filtering the image to be recognized to obtain a filtered grayscale image; calculating the gradient of each pixel in the filtered grayscale image to obtain a gradient direction image; dividing the gradient direction image into a plurality of sub-blocks, and calculating the variance of each sub-block; and comparing the variance of each sub-block with a set threshold: if the variance of the sub-block is greater than the threshold, determining the sub-block to be a valid area and setting the flag bit of the sub-block to a first value; otherwise, determining the sub-block to be an invalid area and setting the flag bit of the sub-block to a second value.

8. The fingerprint recognition method according to claim 7, further comprising: for an image sub-block determined to be an invalid area, re-determining the value of the flag bit of the invalid area according to the values of the flag bits of the areas adjacent to the invalid area, by a four-neighborhood judgment method or an eight-neighborhood judgment method.

9. A fingerprint recognition chip, the chip comprising: an image segmentation module, configured to acquire a fingerprint image to be recognized and divide the fingerprint image into a plurality of sub-images; a feature extraction module, configured to extract the feature values of each sub-image; an input module, configured to input the feature values of each sub-image into a preset machine learning model and determine whether each sub-image is a finger image; and a determination module, configured to obtain the judgment results of the sub-images and, when the number of sub-images judged to be finger images exceeds a preset value, determine that the fingerprint image is a finger image.

10. The fingerprint recognition chip according to claim 9, wherein the chip further comprises: an effective area detection module, configured to further detect the effective area of the fingerprint in the image after the image to be recognized is determined to be a finger image, including: filtering the original grayscale fingerprint image to obtain a filtered grayscale image; calculating the gradient of each pixel of the filtered grayscale image to obtain a gradient direction image; dividing the gradient direction image into a plurality of sub-blocks, and calculating the variance of each sub-block; and comparing the variance of each sub-block with a set threshold: if it is greater than the threshold, determining the sub-block to be a valid area and setting the flag bit of the sub-block to a first value; otherwise, determining the sub-block to be an invalid area and setting the flag bit of the sub-block to a second value.

11. The fingerprint recognition chip according to claim 10, wherein the effective area detection module is further configured to, after the effective area of the fingerprint has been detected, re-determine, for a sub-block determined to be an invalid area, the value of the flag bit of the invalid area according to the values of the flag bits of the areas adjacent to the invalid area, by a four-neighborhood judgment method or an eight-neighborhood judgment method.

12. An electronic device, the electronic device comprising: a fingerprint collection unit, configured to collect fingerprint images; a processor; and a memory in which a plurality of program modules are stored, the plurality of program modules being loaded by the processor to execute the fingerprint recognition method according to any one of claims 1 to 8.
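For illustration only and not as part of the claims, the sketch below shows one way the normalized gray-gradient co-occurrence matrix of claim 2 and a few of the feature values listed in claim 3 could be computed. The quantization levels, the gradient operator, and the exact feature formulas are assumptions of this sketch (they loosely follow common gray-gradient co-occurrence definitions and are not specified by the claims).

import numpy as np


def gray_gradient_features(sub_image: np.ndarray, gray_levels: int = 16, grad_levels: int = 16) -> np.ndarray:
    """Build a normalized gray-gradient co-occurrence matrix and derive sample feature values."""
    img = sub_image.astype(np.float64)
    # Normalized (quantized) grayscale image.
    gray_q = np.clip((img / 256.0 * gray_levels).astype(int), 0, gray_levels - 1)
    # Gradient magnitude via finite differences, then normalized and quantized.
    gy, gx = np.gradient(img)
    grad = np.hypot(gx, gy)
    grad_q = np.clip((grad / (grad.max() + 1e-9) * grad_levels).astype(int), 0, grad_levels - 1)
    # Co-occurrence counts of (gray level, gradient level) pairs, normalized to probabilities.
    H = np.zeros((gray_levels, grad_levels), dtype=np.float64)
    np.add.at(H, (gray_q.ravel(), grad_q.ravel()), 1.0)
    P = H / H.sum()
    j = np.arange(grad_levels, dtype=np.float64)
    # Sample feature values from claim 3.
    small_gradient_dominance = (P / (j + 1.0) ** 2).sum()
    large_gradient_dominance = (P * j ** 2).sum()
    gray_nonuniformity = (P.sum(axis=1) ** 2).sum()
    gradient_nonuniformity = (P.sum(axis=0) ** 2).sum()
    energy = (P ** 2).sum()
    mixed_entropy = -(P[P > 0] * np.log2(P[P > 0])).sum()
    return np.array([small_gradient_dominance, large_gradient_dominance,
                     gray_nonuniformity, gradient_nonuniformity, energy, mixed_entropy])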
TW108142024A 2019-09-12 2019-11-19 Fingerprint recognition method, chip and electronic device TWI737040B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910867355.3A CN110765857A (en) 2019-09-12 2019-09-12 Fingerprint identification method, chip and electronic device
CN201910867355.3 2019-09-12

Publications (2)

Publication Number Publication Date
TW202111498A 2021-03-16
TWI737040B (en) 2021-08-21

Family

ID=69329521

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108142024A TWI737040B (en) 2019-09-12 2019-11-19 Fingerprint recognition method, chip and electronic device

Country Status (2)

Country Link
CN (1) CN110765857A (en)
TW (1) TWI737040B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113312946A (en) * 2020-02-27 2021-08-27 敦泰电子(深圳)有限公司 Fingerprint image feature extraction method and device and computer readable storage medium
CN111488798B (en) * 2020-03-11 2023-12-29 天津极豪科技有限公司 Fingerprint identification method, fingerprint identification device, electronic equipment and storage medium
CN113496172A (en) * 2020-04-03 2021-10-12 深圳爱根斯通科技有限公司 Fingerprint foreign matter detection method and device, electronic equipment and storage medium
CN113591514B (en) * 2020-04-30 2024-03-29 杭州海康威视数字技术股份有限公司 Fingerprint living body detection method, fingerprint living body detection equipment and storage medium
CN111753722B (en) * 2020-06-24 2024-03-26 上海依图网络科技有限公司 Fingerprint identification method and device based on feature point type
CN111860272B (en) * 2020-07-13 2023-10-20 敦泰电子(深圳)有限公司 Image processing method, chip and electronic device
CN112766398B (en) * 2021-01-27 2022-09-16 无锡中车时代智能装备研究院有限公司 Generator rotor vent hole identification method and device
CN112560813B (en) * 2021-02-19 2021-05-25 深圳阜时科技有限公司 Identification method of narrow-strip fingerprint, storage medium and electronic equipment
TWI831059B (en) * 2021-10-12 2024-02-01 大陸商北京集創北方科技股份有限公司 Fingerprint identification method, fingerprint identification device and information processing device
TWI813042B (en) * 2021-10-20 2023-08-21 鴻海精密工業股份有限公司 Neural network partitioning method, system, terminal equipment and storage medium
TWI817656B (en) * 2022-08-16 2023-10-01 大陸商北京集創北方科技股份有限公司 Fingerprint authenticity identification method, fingerprint identification device and information processing device that can prevent fake fingerprint attacks
CN115331269B (en) * 2022-10-13 2023-01-13 天津新视光技术有限公司 Fingerprint identification method based on gradient vector field and application

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI531985B (en) * 2011-01-28 2016-05-01 Univ Ishou Palm biometric method
CN103116744B (en) * 2013-02-05 2016-04-13 浙江工业大学 Based on the false fingerprint detection method of MRF and SVM-KNN classification
CN103324944B (en) * 2013-06-26 2016-11-16 电子科技大学 A kind of based on SVM with the false fingerprint detection method of rarefaction representation
CN104598870A (en) * 2014-07-25 2015-05-06 北京智膜科技有限公司 Living fingerprint detection method based on intelligent mobile information equipment
US9672409B2 (en) * 2015-07-03 2017-06-06 Fingerprint Cards Ab Apparatus and computer-implemented method for fingerprint based authentication
CN105389541B (en) * 2015-10-19 2018-05-01 广东欧珀移动通信有限公司 The recognition methods of fingerprint image and device
CN105718848B (en) * 2015-10-21 2020-08-07 深圳芯启航科技有限公司 Quality evaluation method and device for fingerprint image
CN105913520A (en) * 2016-04-13 2016-08-31 时建华 Elevator car using fingerprint for identification
CN106295555A (en) * 2016-08-08 2017-01-04 深圳芯启航科技有限公司 A kind of detection method of vital fingerprint image
CN106682473A (en) * 2016-12-20 2017-05-17 深圳芯启航科技有限公司 Method and device for identifying identity information of users
CN106657056A (en) * 2016-12-20 2017-05-10 深圳芯启航科技有限公司 Biological feature information management method and system
CN108304759A (en) * 2017-01-11 2018-07-20 神盾股份有限公司 Identify the method and electronic device of finger
WO2019078769A1 (en) * 2017-10-18 2019-04-25 Fingerprint Cards Ab Differentiating between live and spoof fingers in fingerprint analysis by machine learning
CN108427923B (en) * 2018-03-08 2022-03-25 广东工业大学 Palm print identification method and device

Also Published As

Publication number Publication date
TWI737040B (en) 2021-08-21
CN110765857A (en) 2020-02-07

Similar Documents

Publication Publication Date Title
TWI737040B (en) Fingerprint recognition method, chip and electronic device
US11783639B2 (en) Liveness test method and apparatus
Alvarez-Betancourt et al. A keypoints-based feature extraction method for iris recognition under variable image quality conditions
WO2017202196A1 (en) Method and device for fingerprint unlocking and user terminal
WO2022042365A1 (en) Method and system for recognizing certificate on basis of graph neural network
CN109829448B (en) Face recognition method, face recognition device and storage medium
WO2021248733A1 (en) Live face detection system applying two-branch three-dimensional convolutional model, terminal and storage medium
JP2015513754A (en) Face recognition method and device
US9875418B2 (en) Method and apparatus for detecting biometric region for user authentication
WO2017161636A1 (en) Fingerprint-based terminal payment method and device
CN113614731A (en) Authentication verification using soft biometrics
CN111626163A (en) Human face living body detection method and device and computer equipment
CN111783629A (en) Human face in-vivo detection method and device for resisting sample attack
CN106709431A (en) Iris recognition method and device
Oldal et al. Hand geometry and palmprint-based authentication using image processing
CN111814682A (en) Face living body detection method and device
CN108960246B (en) Binarization processing device and method for image recognition
WO2020237481A1 (en) Method for determining color inversion region, fingerprint chip, and electronic device
CN102214292B (en) Illumination processing method for human face images
TW201941018A (en) Control method of fingerprint identification module being suitable for the fingerprint identification module and a control module including a classifier
Lomte et al. Biometric fingerprint authentication by minutiae extraction using USB token system
Verma et al. Static Signature Recognition System for User Authentication Based Two Level Cog, Hough Tranform and Neural Network
Vera et al. Iris recognition algorithm on BeagleBone Black
Patel et al. Fingerprint matching using two methods
Suzuki et al. Illumination-invariant face identification using edge-based feature vectors in pseudo-2D Hidden Markov Models