WO2017067287A1 - A Method, Apparatus and Terminal for Fingerprint Recognition - Google Patents

A Method, Apparatus and Terminal for Fingerprint Recognition

Info

Publication number
WO2017067287A1
WO2017067287A1 (PCT/CN2016/093746)
Authority
WO
WIPO (PCT)
Prior art keywords
variance
pixel
image
column
row
Prior art date
Application number
PCT/CN2016/093746
Other languages
English (en)
French (fr)
Inventor
张强
王立中
周海涛
蒋奎
贺威
Original Assignee
广东欧珀移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东欧珀移动通信有限公司
Publication of WO2017067287A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1365Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/13Sensors therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1382Detecting the live character of the finger, i.e. distinguishing from a fake or cadaver finger

Definitions

  • Embodiments of the present invention relate to an application technology of an electronic device, and in particular, to a method, an apparatus, and a terminal for fingerprint recognition.
  • fingerprint recognition technology is widely applied to smart terminals.
  • the user unlocks the smart terminal through fingerprint recognition.
  • a capacitive fingerprint sensor is used for fingerprint recognition. Since the human body is a conductor, when the finger presses the capacitive fingerprint sensor, the fingerprint sensor can obtain the texture of the finger, and then perform subsequent fingerprint recognition operations according to the texture.
  • when the smart terminal is placed in a pocket, the fabric texture of the pocket will be captured and identified by the fingerprint sensor in the smart terminal, causing unnecessary recognition and wasting system resources.
  • the invention provides a method, a device and a terminal for fingerprint recognition, so as to effectively identify captured images and improve resource utilization of the intelligent terminal.
  • an embodiment of the present invention provides a fingerprint identification method, including:
  • the texture image is determined to be a fingerprint image.
  • the embodiment of the present invention further provides a device for fingerprint identification, including:
  • a target image acquiring unit configured to acquire a target image, where the target image is included in the texture image
  • a variance calculation unit configured to determine a row variance according to the pixel value of each row of pixel points in the target image acquired by the target image acquisition unit, and/or determine a column variance according to the pixel value of each column of pixel points in the target image acquired by the target image acquisition unit
  • a determining unit configured to determine that the texture image is a fingerprint image if the row variance and/or column variance obtained by the variance calculation unit is greater than a preset variance.
  • an embodiment of the present invention further provides a terminal, where the terminal includes a memory, a processor, and executable files corresponding to the processes of one or more applications stored in the memory and configured to be executed by the processor, the processor including instructions for performing the following steps:
  • the texture image is determined to be a fingerprint image.
  • FIG. 1 is a flowchart of a method for fingerprint recognition in Embodiment 1 of the present invention
  • FIG. 2 is a schematic diagram of coordinates of a texture image in the first embodiment of the present invention.
  • FIG. 3 is a flowchart of a first fingerprint identification method in Embodiment 2 of the present invention.
  • FIG. 4 is a flowchart of a second fingerprint identification method in Embodiment 2 of the present invention.
  • FIG. 5 is a schematic diagram of dividing a texture image in Embodiment 2 of the present invention.
  • FIG. 6 is a schematic diagram showing the division of another texture image in Embodiment 2 of the present invention.
  • FIG. 7 is a flowchart of a third fingerprint identification method in Embodiment 2 of the present invention.
  • FIG. 8 is a schematic diagram showing the position of a preset position area in Embodiment 2 of the present invention.
  • FIG. 9 is a flowchart of a fourth fingerprint identification method in Embodiment 2 of the present invention.
  • FIG. 10 is a schematic structural diagram of an apparatus for fingerprint identification in a third embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of a second fingerprint recognition apparatus in Embodiment 3 of the present invention.
  • FIG. 12 is a schematic structural diagram of a third fingerprint recognition apparatus in Embodiment 3 of the present invention.
  • FIG. 13 is a schematic structural diagram of a terminal according to Embodiment 4 of the present invention.
  • Embodiments of the present invention provide a fingerprint identification method, including:
  • the texture image is determined to be a fingerprint image.
  • determining the row variance according to the pixel value of each row of pixel points in the target image, and/or determining the column variance according to the pixel value of each column of pixel points in the target image including:
  • the method further includes:
  • determining the row variance according to the pixel value of each row of pixel points in the target image, and/or determining the column variance according to the pixel value of each column of pixel points in the target image including:
  • the row variance is determined according to the pixel value of each row of pixel points in the binarized target image, and/or the column variance is determined according to the pixel value of each column of pixel points in the target image.
  • performing binarization processing on the target image includes:
  • a pixel in the pixel group whose pixel value is greater than or equal to the threshold value is set to white, and a pixel in the pixel group whose pixel value is smaller than the threshold value is set to black.
  • the obtaining the target image from the captured texture image comprises:
  • determining that the texture image is a fingerprint image comprises:
  • the texture image is a fingerprint image.
  • determining the at least one target area from the texture image comprises:
  • a preset position area in the texture image is determined as the target area, the preset position area having the same geometric center as the texture image.
  • the length of the preset location area is one-half of the length of the texture image, the width of the preset location area is one-half of the width of the texture image, and the diagonal intersection of the preset location area coincides with the diagonal intersection of the texture image.
  • the embodiment of the invention further provides a device for fingerprint identification, which comprises:
  • a target image acquiring unit configured to acquire a target image, where the target image is included in the texture image
  • a variance calculation unit configured to determine a row variance according to the pixel value of each row of pixel points in the target image acquired by the target image acquisition unit, and/or determine a column variance according to the pixel value of each column of pixel points in the target image acquired by the target image acquisition unit
  • a determining unit configured to determine that the texture image is a fingerprint image if the row variance and/or column variance obtained by the variance calculation unit is greater than a preset variance.
  • variance calculation unit is configured to:
  • the device further includes:
  • a binary processing unit configured to perform binarization processing on the target image acquired by the target image acquiring unit
  • the variance calculation unit is further configured to determine a row variance according to a pixel value of each row of pixel points in the target image binarized by the binary processing unit, and/or, according to pixels of each column of pixel points in the target image The value determines the column variance.
  • the binary processing unit is specifically configured to:
  • a pixel in the pixel group whose pixel value is greater than or equal to the threshold value is set to white, and a pixel in the pixel group whose pixel value is smaller than the threshold value is set to black.
  • the target image acquiring unit includes:
  • a target area determining subunit configured to determine at least one target area from the texture image
  • a target image determining subunit configured to determine, respectively, an image in each target area divided by the target area determining subunit as a target image
  • the determining unit is further configured to: if the row variance and/or the column variance corresponding to each target image determined by the target image determining subunit is greater than the preset variance, determine that the texture image is a fingerprint image .
  • the target area determining subunit is further configured to determine a preset position area in the texture image as the target area, where the preset position area has the same geometric center as the texture image.
  • the length of the preset location area is one-half of the length of the texture image, the width of the preset location area is one-half of the width of the texture image, and the diagonal intersection of the preset location area coincides with the diagonal intersection of the texture image.
  • An embodiment of the present invention further provides a terminal, where the terminal includes a memory, a processor, and executable files corresponding to the processes of one or more applications stored in the memory and configured to be executed by the processor, the processor including instructions for performing the following steps:
  • the texture image is determined to be a fingerprint image.
  • determining the row variance according to the pixel value of each row of pixel points in the target image, and/or determining the column variance according to the pixel value of each column of pixel points in the target image including:
  • the method further includes:
  • determining the row variance according to the pixel value of each row of pixel points in the target image, and/or determining the column variance according to the pixel value of each column of pixel points in the target image including:
  • the row variance is determined according to the pixel value of each row of pixel points in the binarized target image, and/or the column variance is determined according to the pixel value of each column of pixel points in the target image.
  • performing binarization processing on the target image includes:
  • a pixel in the pixel group whose pixel value is greater than or equal to the threshold value is set to white, and a pixel in the pixel group whose pixel value is smaller than the threshold value is set to black.
  • the obtaining the target image from the captured texture image comprises:
  • determining that the texture image is a fingerprint image comprises:
  • the texture image is a fingerprint image.
  • determining the at least one target area from the texture image comprises:
  • a preset position area in the texture image is determined as the target area, the preset position area having the same geometric center as the texture image.
  • FIG. 1 is a flowchart of a method for fingerprint identification according to Embodiment 1 of the present invention.
  • the present embodiment is applicable to the case where fingerprint recognition is performed by a smart terminal, and the method may be performed by a smart terminal having a fingerprint recognition function, such as a smartphone or a tablet computer; the method specifically includes the following steps:
  • Step 110 Acquire a target image from the captured texture image.
  • the smart terminal acquires a texture image through a fingerprint sensor.
  • the texture image can be a grayscale image.
  • the target image may be a texture image or a sub-image in the texture image.
  • Step 120 Determine a row variance according to pixel values of each row of pixel points in the target image, and/or determine a column variance according to pixel values of each column of pixel points in the target image.
  • the color of each pixel in the captured image is represented by a red, green and blue (R, G, B) triplet.
  • the (R, G, B) triplet of each pixel is converted into a gray value (i.e., a pixel value) by any one of the common conversion methods, for example the weighted average Gray = 0.299R + 0.587G + 0.114B or the simple average Gray = (R + G + B)/3.
  • the gray value corresponding to each pixel point can be obtained by any of these methods.
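The conversion step described above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the source only says "any one of the conversion methods", so the specific 0.299/0.587/0.114 luminance weights and the function names here are assumptions.

```python
def rgb_to_gray(r, g, b):
    """Weighted-average conversion of an (R, G, B) triplet to a gray
    value (pixel value). The BT.601-style weights below are a common
    convention, assumed here since the source does not fix them."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def rgb_to_gray_average(r, g, b):
    """Simple-average alternative: Gray = (R + G + B) / 3."""
    return round((r + g + b) / 3)

print(rgb_to_gray(255, 255, 255))      # pure white -> 255
print(rgb_to_gray(0, 0, 0))            # pure black -> 0
print(rgb_to_gray_average(30, 60, 90)) # -> 60
```

Either method yields one scalar gray value per pixel, which is what the row/column sums below operate on.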
  • Each coordinate point in the texture image corresponds to one pixel, and each pixel has a unique pixel value, such as a gray value Gray.
  • the texture image coordinates shown in FIG. 2 are used in this embodiment and subsequent embodiments.
  • the texture image is composed of a pixel matrix of m rows and n columns, containing m × n pixels in total; the pixel at row m, column n, i.e., at coordinate (xn, ym), has pixel value Gmn.
  • the sums of the pixel values of the pixels in each row are calculated row by row.
  • first, all the pixel points [(x1, y1), (x2, y1) ... (xn, y1)] in the first row and their pixel values [G11, G12 ... G1n] are obtained, and the sum A1 of the first-row pixel values [G11, G12 ... G1n] is calculated; the sums A2 to Am of the remaining rows are obtained in the same way.
  • M is the average of the row sums: M = (A1 + A2 + ... + Am)/m, where A1 to Am sequentially represent the sums of the pixel values of the pixels of each row, from the first row to the mth row.
  • H² is the row variance: H² = [(A1 − M)² + (A2 − M)² + ... + (Am − M)²]/m.
  • similarly, the sums of the pixel values of the pixels in each column are calculated column by column. First, all the pixel points [(x1, y1), (x1, y2) ... (x1, ym)] in the first column and their pixel values [G11, G21 ... Gm1] are obtained, and the sum B1 of the first-column pixel values [G11, G21 ... Gm1] is calculated; the sums B2 to Bn of the remaining columns are obtained in the same way.
  • N is the average of the column sums: N = (B1 + B2 + ... + Bn)/n, where B1 to Bn sequentially represent the sums of the pixel values of the pixels of each column, from the first column to the nth column.
  • L² is the column variance, obtained by substituting the average N into the same variance formula: L² = [(B1 − N)² + (B2 − N)² + ... + (Bn − N)²]/n.
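The row-sum/column-sum variance scheme above can be sketched in pure Python (the function and variable names are mine, not the patent's):

```python
def row_and_column_variance(image):
    """Compute the row variance H^2 and column variance L^2 of an
    m-row by n-column pixel matrix, following the scheme above:
    A_i = sum of pixel values in row i, M = mean of the A_i,
    H^2 = mean squared deviation of the A_i from M; columns likewise
    with B_j, N and L^2."""
    m, n = len(image), len(image[0])
    row_sums = [sum(row) for row in image]                  # A_1 .. A_m
    col_sums = [sum(image[i][j] for i in range(m))          # B_1 .. B_n
                for j in range(n)]
    M = sum(row_sums) / m
    N = sum(col_sums) / n
    H2 = sum((a - M) ** 2 for a in row_sums) / m            # row variance
    L2 = sum((b - N) ** 2 for b in col_sums) / n            # column variance
    return H2, L2

# A perfectly regular "fabric-like" stripe pattern: every row is
# identical, so the row sums are all equal and the row variance is 0.
regular = [[1, 0, 1, 0],
           [1, 0, 1, 0],
           [1, 0, 1, 0]]
print(row_and_column_variance(regular))  # (0.0, 2.25)
```

An irregular fingerprint-like pattern would instead give row and column sums that fluctuate, producing the larger variances the method relies on.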
  • Step 130 If the row variance and/or the column variance are greater than the preset variance, determine that the texture image is a fingerprint image.
  • compared with a pocket fabric with a regular texture, a fingerprint image yields a larger calculated variance than the regular fabric texture does.
  • therefore, the preset variance may be defined with reference to the variance values produced by regular patterns.
  • for example, the preset variance may take a value in the range of 0 to 10, preferably 6.
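With the example preset variance from this paragraph, the decision of Step 130 can be sketched as follows. This is a minimal sketch: the function name is mine, and the rule that both variances must exceed the preset when both are computed follows Step 130b below; the claim's "and/or" also allows using only one of the two.

```python
PRESET_VARIANCE = 6  # document's example value, from the 0-10 range

def is_fingerprint(row_var=None, col_var=None, preset=PRESET_VARIANCE):
    """Step 130: judge the texture image a fingerprint image when the
    computed variance(s) exceed the preset variance. Either variance
    may be omitted, reflecting the "and/or" in the claims; when both
    are given, both must exceed the preset (Step 130b)."""
    checks = [v > preset for v in (row_var, col_var) if v is not None]
    return bool(checks) and all(checks)

print(is_fingerprint(row_var=14.2, col_var=9.8))  # True: irregular texture
print(is_fingerprint(row_var=1.3))                # False: regular fabric
```

Only images passing this check would be handed to the (more expensive) full fingerprint-matching stage.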
  • the technical solution provided by this embodiment obtains the row variance and/or column variance of a target image within the texture image before performing fingerprint recognition.
  • if the variance is greater than the preset variance, the texture image is determined to be a fingerprint image, and the subsequent fingerprint recognition process is started.
  • in the prior art, the captured texture image is directly subjected to fingerprint recognition.
  • in contrast, this embodiment can determine whether the acquired texture image is a fingerprint image before fingerprint recognition, and performs fingerprint recognition only when a fingerprint image is determined, which reduces unnecessary fingerprint recognition and improves system resource utilization and recognition efficiency.
  • the mobile phone in addition to being in contact with the fabric in the pocket, may also contact the skin of other parts of the human body, such as the palm, face, nose, and the like. Since the skin texture of the above human body part is relatively uniform, the calculated row variance and/or column variance is still smaller than the row variance and/or column variance corresponding to the fingerprint. Therefore, the above embodiment can also avoid contact with human skin to cause false touches to enable the smart terminal to perform fingerprint recognition, thereby improving recognition efficiency.
  • the embodiment of the present invention further provides a method for fingerprint identification.
  • the method further includes:
  • Step 150 Perform binarization processing on the target image.
  • a threshold T is set, and the pixels of the target image are divided by T into two groups: a pixel group whose pixel values are greater than or equal to the threshold T and a pixel group whose pixel values are less than the threshold T.
  • the pixels of the first group are set to white (or black), and the pixels of the second group are set to black (or white).
  • the pixel value of the pixel in the target image is 0 or 1. If the target image is a fingerprint image, the pixel value of the pixel corresponding to the texture of the fingerprint is 1 (or 0), and the pixel value corresponding to the gap between the fingerprint textures is 0 (or 1).
  • when the target image is a grayscale image, the threshold T takes a grayscale value in the range 0 to 255; an exemplary value of T is 120.
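The binarization described above, with the document's example threshold of 120, can be sketched as (helper name is mine; the source leaves the white/black assignment interchangeable, so mapping "at or above threshold" to 1 is a choice):

```python
def binarize(image, threshold=120):
    """Binarize a grayscale target image: pixels with value >= threshold
    become 1 (white), pixels below become 0 (black). Threshold 120 is
    the example value from the text (gray range 0-255)."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

gray = [[200, 30],
        [119, 120]]
print(binarize(gray))  # [[1, 0], [0, 1]]
```

After this step every row sum is at most n and every column sum is at most m, which is what makes the subsequent variance computation cheaper.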
  • the row variance is determined according to the pixel value of each row of pixel points in the target image, and/or the column variance is determined according to the pixel value of each column of the pixel in the target image, which can be implemented by:
  • Step 120a Determine a row variance according to a pixel value of each row of pixel points in the binarized target image, and/or determine a column variance according to a pixel value of each column of pixel points in the target image.
  • the target image is a matrix image of m rows and n columns
  • the sum of the pixel values of each row of pixels in the binarized target image is at most n
  • the sum of the pixel values of each column of pixels is at most m.
  • the target image can be binarized to obtain a binary image. Since the pixel values of the pixels in the binary image are 0 or 1, the sums of the pixel values of each row and each column of pixels are reduced, which lowers the complexity of the variance calculation, improves the speed of the variance calculation, and thus improves the recognition efficiency of the image.
  • step 110: acquiring a target image from the captured texture image includes:
  • Step 111 Determine at least one target area from the texture image.
  • the area of the target area is smaller than the area of the texture image.
  • the texture image can be divided into a plurality of target areas. For example, as shown in FIG. 5, the texture image is divided into the upper half and the lower half from the center line position, and the upper half is determined as the target area. For another example, as shown in FIG. 6, the texture image is divided into four parts of the upper left part, the lower left part, the upper right part, and the lower right part, and the upper left part and the lower right part are determined as the target area.
  • the target area can also be determined from the texture image. For example, determining a rectangular coordinate region composed of [(a 1 , b 1 ), (a 2 , b 2 ), (a 3 , b 3 ), (a 4 , b 4 )] at a predetermined position in the texture image For the target area.
  • Step 112 Determine an image in each target area as a target image.
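The divisions described for FIG. 5 (halves) and FIG. 6 (quadrants) amount to simple slicing of the pixel matrix. A sketch, assuming a row-major list-of-lists image with even dimensions; the helper names are mine:

```python
def split_halves(image):
    """FIG. 5 style: split along the horizontal center line into an
    upper half and a lower half; the upper half can then be taken as
    the target area."""
    m = len(image)
    return image[:m // 2], image[m // 2:]

def split_quadrants(image):
    """FIG. 6 style: split into upper-left, upper-right, lower-left
    and lower-right quarters; e.g. upper-left and lower-right can be
    taken as target areas."""
    m, n = len(image), len(image[0])
    top, bottom = image[:m // 2], image[m // 2:]
    ul = [row[:n // 2] for row in top]
    ur = [row[n // 2:] for row in top]
    ll = [row[:n // 2] for row in bottom]
    lr = [row[n // 2:] for row in bottom]
    return ul, ur, ll, lr

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
ul, ur, ll, lr = split_quadrants(img)
print(ul)  # [[1, 2], [5, 6]]
print(lr)  # [[11, 12], [15, 16]]
```

Each sub-image returned here is one "target image" of Step 112, to which the variance test is then applied independently.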
  • step 130 if the row variance and/or the column variance are greater than the preset variance, determining that the texture image is a fingerprint image can be implemented by:
  • Step 130a If the row variance and/or the column variance corresponding to each target image are greater than the preset variance, determine that the texture image is a fingerprint image.
  • the row variance (or column variance) of each region is calculated separately. If the row variance (or column variance) of every target region is greater than the preset variance, the images in all the target regions are irregular images, and the texture image is therefore determined to be a fingerprint image.
  • the calculation amount can be reduced, the calculation speed of the variance can be improved, and the recognition speed of the fingerprint image can be improved.
  • step 111, determining at least one target area from the texture image, may also be implemented in the following manner:
  • Step 111′ Determine a preset position area in the texture image as the target area, where the preset position area has the same geometric center as the texture image.
  • the central area of the fingerprint image is an image composed of a fingerprint texture and a gap between textures. Since the image of the central area can accurately represent the texture distribution feature of the fingerprint image, the preset position area of the central area is determined as the target area. The size of the preset position area can be determined according to the rated recognition range of the fingerprint sensor.
  • the length of the preset location area is one-half of the length of the fingerprint sensor and the width of the preset location area is one-half of the width of the fingerprint sensor, and the diagonal of the preset location area is The line intersection coincides with the diagonal intersection of the texture image.
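The half-length, half-width center area sharing the texture image's geometric center can be sketched as a center crop (the function name is mine; the source describes the even-dimension case, so the rounding for odd sizes is my choice):

```python
def center_preset_area(image):
    """Crop the preset position area: half the length and half the
    width of the texture image, with its diagonal intersection
    coinciding with that of the full image."""
    m, n = len(image), len(image[0])
    r0, c0 = m // 4, n // 4  # offsets that center an (m/2) x (n/2) crop
    return [row[c0:c0 + n // 2] for row in image[r0:r0 + m // 2]]

img = [[c + 4 * r for c in range(4)] for r in range(4)]
print(center_preset_area(img))  # [[5, 6], [9, 10]]
```

Restricting the variance test to this central region keeps the part of the image that best represents the fingerprint's texture distribution while quartering the number of pixels processed.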
  • the technical solution provided by the embodiment can extract a target area whose fingerprint feature is easier to distinguish from the texture image, and improve the accuracy of the recognition while reducing the calculation amount.
  • An optional implementation of the foregoing embodiments is described below with a usage scenario, as shown in FIG. 9, including:
  • Step 110 Acquire a target image from the captured texture image.
  • Step 120b Determine a row variance according to a pixel value of each row of pixel points in the target image.
  • Step 120c Determine a column variance according to a pixel value of each column of pixels in the target image.
  • Step 130b If both the row variance and the column variance are greater than the preset variance, determine that the texture image is a fingerprint image.
  • only when both variances exceed the preset variance is the texture image determined to be a fingerprint image.
  • jointly judging whether the image is a fingerprint image by the row variance and the column variance together can further improve the recognition accuracy of the fingerprint image.
  • the embodiment of the present invention further provides a device 1 for fingerprint identification, which is used to implement the method shown in the above embodiment and is located in a smart terminal.
  • the device 1 includes:
  • the target image acquiring unit 11 is configured to acquire a target image, where the target image is included in the texture image;
  • a variance calculation unit 12 configured to determine a row variance according to a pixel value of each row of pixel points in the target image acquired by the target image acquisition unit 11, and/or according to the target acquired by the target image acquisition unit 11 The pixel value of each column of pixels in the image determines the column variance;
  • a determining unit 13, configured to determine that the texture image is a fingerprint image if the row variance and/or column variance obtained by the variance calculation unit 12 is greater than the preset variance.
  • variance calculation unit 12 is configured to:
  • the device further includes:
  • the binary processing unit 14 is configured to perform binarization processing on the target image acquired by the target image acquiring unit 11;
  • the variance calculation unit 12 is further configured to determine a row variance according to the pixel value of each row of pixel points in the target image binarized by the binary processing unit 14, and/or determine a column variance according to the pixel value of each column of pixel points in the target image.
  • the target image acquiring unit 11 includes:
  • a target area determining subunit 111 configured to determine at least one target area from the texture image
  • a target image determining sub-unit 112 configured to determine, respectively, an image in each target area divided by the target area determining sub-unit 111 as a target image
  • the determining unit 13 is further configured to: if the row variance and/or the column variance corresponding to each target image determined by the target image determining subunit 112 is greater than the preset variance, determine that the texture image is Fingerprint image.
  • the target area determining sub-unit 111 is further configured to determine a preset position area in the texture image as the target area, where the preset position area has the same geometric center as the texture image.
  • the foregoing apparatus can perform the methods provided in Embodiment 1 and Embodiment 2 of the present invention, and has the corresponding functional modules and beneficial effects of performing the foregoing methods.
  • FIG. 13 is a schematic structural diagram of a terminal according to Embodiment 4 of the present invention.
  • the terminal 20 includes: a memory 21 and a processor 22.
  • the processor 22 can load the executable files corresponding to the processes of one or more applications stored in the memory 21.
  • Processor 22 includes instructions for performing the following steps:
  • the texture image is determined to be a fingerprint image.
  • determining the row variance according to the pixel value of each row of pixel points in the target image, and/or determining the column variance according to the pixel value of each column of pixel points in the target image including:
  • the method further includes:
  • determining the row variance according to the pixel value of each row of pixel points in the target image, and/or determining the column variance according to the pixel value of each column of pixel points in the target image including:
  • the row variance is determined according to the pixel value of each row of pixel points in the binarized target image, and/or the column variance is determined according to the pixel value of each column of pixel points in the target image.
  • performing binarization processing on the target image includes:
  • a pixel in the pixel group whose pixel value is greater than or equal to the threshold value is set to white, and a pixel in the pixel group whose pixel value is smaller than the threshold value is set to black.
  • the obtaining the target image from the captured texture image comprises:
  • determining that the texture image is a fingerprint image comprises:
  • the texture image is a fingerprint image.
  • determining the at least one target area from the texture image comprises:
  • a preset position area in the texture image is determined as the target area, the preset position area having the same geometric center as the texture image.
  • the length of the preset location area is one-half of the length of the texture image and the width of the preset location area is one-half of the width of the texture image, and the preset location area The diagonal intersection coincides with the diagonal intersection of the texture image.
  • the terminal provided in this embodiment may further include: a radio frequency (RF) circuit, a memory including one or more computer-readable storage media, an input unit, a display unit, a sensor, an audio circuit, a wireless fidelity (WiFi) module, a processor including one or more processing cores, and a power supply.
  • the terminal provided in this embodiment can obtain the row variance and/or the column variance in the target image in the texture image.
  • the texture image is determined to be a fingerprint image, and subsequent fingerprint recognition is started.
  • the fingerprint image of the acquired texture image is directly fingerprinted, and the invention can determine whether the acquired texture image is a fingerprint image before fingerprint recognition, and when determining the fingerprint image, perform fingerprint recognition to reduce unnecessary Fingerprint identification improves the utilization of system resources and the efficiency of recognition.
  • the term "and/or" herein merely describes an association between associated objects, indicating that three relationships are possible.
  • for example, A and/or B may indicate that A exists alone, that A and B exist simultaneously, or that B exists alone.
  • the character "/" in this article generally indicates that the contextual object is an "or" relationship.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Collating Specific Patterns (AREA)
  • Image Input (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present invention discloses a method, an apparatus and a terminal for fingerprint recognition. The method includes: acquiring a target image from a captured texture image; determining a row variance according to the pixel values of each row of pixel points in the target image, and/or determining a column variance according to the pixel values of each column of pixel points in the target image; and, if the row variance and/or the column variance is greater than a preset variance, determining that the texture image is a fingerprint image.

Description

A Method, Apparatus and Terminal for Fingerprint Recognition
This application claims priority to Chinese Patent Application No. CN201510679995.3, filed with the Chinese Patent Office on October 19, 2015 and entitled "A Method, Apparatus and Terminal for Fingerprint Recognition", the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present invention relate to electronic device application technology, and in particular to a method, an apparatus and a terminal for fingerprint recognition.
Background
With the development of electronic devices, fingerprint recognition technology is widely applied in smart terminals. A user unlocks the smart terminal and performs other operations through fingerprint recognition.
In the prior art, a capacitive fingerprint sensor is used for fingerprint recognition. Since the human body is a conductor, when a finger presses the capacitive fingerprint sensor, the sensor can obtain the texture of the finger and then perform subsequent fingerprint recognition operations according to that texture.
When the smart terminal is placed in a pocket, the fabric texture of the pocket is captured and recognized by the fingerprint sensor in the smart terminal, causing unnecessary recognition and wasting system resources.
Summary
The present invention provides a method, an apparatus and a terminal for fingerprint recognition, so as to effectively identify captured images and improve the resource utilization of the smart terminal.
In a first aspect, an embodiment of the present invention provides a fingerprint recognition method, including:
acquiring a target image from a captured texture image;
determining a row variance according to the pixel values of each row of pixel points in the target image, and/or determining a column variance according to the pixel values of each column of pixel points in the target image; and
if the row variance and/or the column variance is greater than a preset variance, determining that the texture image is a fingerprint image.
In a second aspect, an embodiment of the present invention further provides a fingerprint recognition apparatus, including:
a target image acquiring unit, configured to acquire a target image, the target image being contained in a texture image;
a variance calculation unit, configured to determine a row variance according to the pixel values of each row of pixel points in the target image acquired by the target image acquiring unit, and/or determine a column variance according to the pixel values of each column of pixel points in the target image acquired by the target image acquiring unit; and
a determining unit, configured to determine that the texture image is a fingerprint image if the row variance and/or column variance obtained by the variance calculation unit is greater than a preset variance.
In a third aspect, an embodiment of the present invention further provides a terminal, the terminal including a memory and a processor, the memory storing executable files corresponding to the processes of one or more applications, the executable files being configured to be executed by the processor, the processor including instructions for performing the following steps:
acquiring a target image from a captured texture image;
determining a row variance according to the pixel values of each row of pixel points in the target image, and/or determining a column variance according to the pixel values of each column of pixel points in the target image; and
if the row variance and/or the column variance is greater than a preset variance, determining that the texture image is a fingerprint image.
附图说明
图1是本发明实施例一中的一个指纹识别的方法的流程图;
图2是本发明实施例一中的一个纹理图像的坐标示意图;
图3是本发明实施例二中的第一个指纹识别的方法的流程图;
图4是本发明实施例二中的第二个指纹识别的方法的流程图;
图5是本发明实施例二中的一个纹理图像的划分示意图;
图6是本发明实施例二中的另一个纹理图像的划分示意图;
图7是本发明实施例二中的第三个指纹识别的方法的流程图;
图8是本发明实施例二中的预设位置区域的位置示意图;
图9是本发明实施例二中的第四个指纹识别的方法的流程图;
图10是本发明实施例三中的第一个指纹识别的装置的结构示意图;
图11是本发明实施例三中的第二个指纹识别的装置的结构示意图;
图12是本发明实施例三中的第三个指纹识别的装置的结构示意图。
图13为本发明实施例四提供的一种终端的结构示意图。
具体实施方式
下面结合附图和实施例对本发明作进一步的详细说明。可以理解的是,此处所描述的具体实施例仅仅用于解释本发明,而非对本发明的限定。另外还需要说明的是,为了便于描述,附图中仅示出了与本发明相关的部分而非全部结构。
本发明实施例提供一种指纹识别的方法,其中包括:
从捕获的纹理图像中获取目标图像;
根据所述目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差;
如果所述行方差和/或所述列方差大于预设方差,则确定所述纹理图像为指纹图像。
进一步的,所述根据所述目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差,包括:
计算所述目标图像中每行像素点的像素值之和,根据所述每行像素点的像素值之和确定行方差;和/或,
计算所述目标图像中每列像素点的像素值之和,根据所述每列像素点的像素值之和确定列方差。
进一步的,在从捕获的纹理图像中获取目标图像之后,所述方法还包括:
对所述目标图像进行二值化处理;
相应的,所述根据所述目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差,包括:
根据二值化的目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差。
进一步的,对所述目标图像进行二值化处理,包括:
将所述目标图像的像素点分为:像素值大于或等于阈值的像素群和像素值小于阈值的像素群;
将所述像素值大于或等于阈值的像素群中的像素设定为白色,将所述像素值小于阈值的像素群中的像素设定为黑色。
进一步的,所述从捕获的纹理图像中获取目标图像,包括:
从所述纹理图像中确定至少一个目标区域;
将每个目标区域中的图像分别确定为目标图像;
相应的,所述如果所述行方差和/或所述列方差大于预设方差,则确定所述纹理图像为指纹图像,包括:
如果每个所述目标图像对应的行方差和/或列方差均大于所述预设方差,则确定所述纹理图像为指纹图像。
进一步的,所述从所述纹理图像中确定至少一个目标区域,包括:
将所述纹理图像中的预设位置区域确定为所述目标区域,所述预设位置区域与所述纹理图像具有相同的几何中心。
进一步的,所述预设位置区域的长度为所述纹理图像长度的二分之一且所述预设位置区域的宽度为所述纹理图像宽度的二分之一,且所述预设位置区域的对角线交点与所述纹理图像的对角线交点重合。
本发明实施例还提供一种指纹识别的装置,其中包括:
目标图像获取单元,用于获取目标图像,所述目标图像包含于纹理图像中;
方差计算单元,用于根据所述目标图像获取单元获取的所述目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像获取单元获取的所述目标图像中每列像素点的像素值确定列方差;
确定单元,用于如果所述方差计算单元得到的所述行方差和/或列方差大于预设方差,则确定所述纹理图像为指纹图像。
进一步的,所述方差计算单元用于:
计算所述目标图像获取单元获取的所述目标图像中每行像素点的像素值之和,根据所述每行像素点的像素值之和确定行方差;和/或,
计算所述目标图像获取单元获取的所述目标图像中每列像素点的像素值之和,根据所述每列像素点的像素值之和确定列方差。
进一步的,所述装置还包括:
二值处理单元,用于对所述目标图像获取单元获取的所述目标图像进行二值化处理;
所述方差计算单元还用于,根据所述二值处理单元二值化的目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差。
进一步的,所述二值处理单元,具体用于:
将所述目标图像的像素点分为:像素值大于或等于阈值的像素群和像素值小于阈值的像素群;
将所述像素值大于或等于阈值的像素群中的像素设定为白色,将所述像素值小于阈值的像素群中的像素设定为黑色。
进一步的,所述目标图像获取单元,包括:
目标区域确定子单元,用于从所述纹理图像中确定至少一个目标区域;
目标图像确定子单元,用于将所述目标区域确定子单元划分出的每个目标区域中的图像分别确定为目标图像;
所述确定单元还用于,如果所述目标图像确定子单元确定的每个所述目标图像对应的行方差和/或列方差均大于所述预设方差,则确定所述纹理图像为指纹图像。
进一步的,所述目标区域确定子单元还用于,将所述纹理图像中的预设位置区域确定为所述目标区域,所述预设位置区域与所述纹理图像具有相同的几何中心。
进一步的,所述预设位置区域的长度为所述纹理图像长度的二分之一且所述预设位置区域的宽度为所述纹理图像宽度的二分之一,且所述预设位置区域的对角线交点与所述纹理图像的对角线交点重合。
本发明实施例还提供了一种终端,其中,所述终端包括存储器、处理器,所述存储器中存储有一个或一个以上的应用程序的进程对应的可执行文件,并被配置成由所述处理器执行,所述处理器包括用于执行以下步骤的指令:
从捕获的纹理图像中获取目标图像;
根据所述目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差;
如果所述行方差和/或所述列方差大于预设方差,则确定所述纹理图像为指纹图像。
进一步的,所述根据所述目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差,包括:
计算所述目标图像中每行像素点的像素值之和,根据所述每行像素点的像素值之和确定行方差;和/或,
计算所述目标图像中每列像素点的像素值之和,根据所述每列像素点的像素值之和确定列方差。
进一步的,在从捕获的纹理图像中获取目标图像之后,所述方法还包括:
对所述目标图像进行二值化处理;
相应的,所述根据所述目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差,包括:
根据二值化的目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差。
进一步的,对所述目标图像进行二值化处理,包括:
将所述目标图像的像素点分为:像素值大于或等于阈值的像素群和像素值小于阈值的像素群;
将所述像素值大于或等于阈值的像素群中的像素设定为白色,将所述像素值小于阈值的像素群中的像素设定为黑色。
进一步的,所述从捕获的纹理图像中获取目标图像,包括:
从所述纹理图像中确定至少一个目标区域;
将每个目标区域中的图像分别确定为目标图像;
相应的,所述如果所述行方差和/或所述列方差大于预设方差,则确定所述纹理图像为指纹图像,包括:
如果每个所述目标图像对应的行方差和/或列方差均大于所述预设方差,则确定所述纹理图像为指纹图像。
进一步的,所述从所述纹理图像中确定至少一个目标区域,包括:
将所述纹理图像中的预设位置区域确定为所述目标区域,所述预设位置区域与所述纹理图像具有相同的几何中心。
实施例一
图1为本发明实施例一提供的指纹识别的方法的流程图,本实施例可适用于通过智能终端进行指纹识别的情况,该方法可以由具有指纹识别功能的智能终端来执行,智能终端如智能手机、平板电脑等,该方法具体包括如下步骤:
步骤110、从捕获的纹理图像中获取目标图像。
智能终端通过指纹传感器获取纹理图像。纹理图像可以为灰度图。目标图像可以为纹理图像,也可以为纹理图像中的子图像。
步骤120、根据目标图像中每行像素点的像素值确定行方差,和/或,根据目标图像中每列像素点的像素值确定列方差。
具体可通过下述方式进行实施:
计算所述目标图像中每行像素点的像素值之和,根据所述每行像素点的像素值之和确定行方差;和/或,计算所述目标图像中每列像素点的像素值之和,根据所述每列像素点的像素值之和确定列方差。
灰度图中每个像素点的色彩由红绿蓝RGB三元组表示。为了方便计算,将每个像素的(R,G,B)三元组通过下述任意一种转换方式得到与(R,G,B)三元组对应的灰度值(即像素值)Gray:
方式一:浮点算法:Gray=R×0.3+G×0.59+B×0.11
方式二:整数方法:Gray=(R×30+G×59+B×11)÷100
方式三:平均值法:Gray=(R+G+B)÷3
方式四:仅取绿色:Gray=G
通过上述任意一种方式可得到像素点对应的灰度值,即像素点的像素值。纹理图像中的每个坐标点对应一个像素点,每个像素点具有唯一的像素值,如灰度值Gray。为了方便说明,本实施例及后续实施例中采用图2所示的纹理图像坐标进行说明,如图2所示,纹理图像由m行n列的像素点矩阵组成,共包含m×n个像素点,位于第m行第n列的像素点(xn,ym)对应的像素值为Gmn。可选的,m=n=480。
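上述四种灰度转换方式可以用如下 Python 草稿示意(仅为示例性实现,函数名与参数约定为本示例的假设,并非本发明限定的方案):

```python
def rgb_to_gray(r, g, b, method="float"):
    """将 (R, G, B) 三元组按文中四种方式之一转换为灰度值(示意实现)。"""
    if method == "float":   # 方式一:浮点算法
        return r * 0.3 + g * 0.59 + b * 0.11
    if method == "int":     # 方式二:整数方法(整除取整)
        return (r * 30 + g * 59 + b * 11) // 100
    if method == "avg":     # 方式三:平均值法
        return (r + g + b) // 3
    if method == "green":   # 方式四:仅取绿色
        return g
    raise ValueError("unknown method")
```

例如,纯灰色 (100, 100, 100) 在浮点算法下灰度值仍为 100.0。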
在一种确定行方差的实现方式中,以行为单元,分别计算位于同一行的像素点的像素值的和。首先获取第一行中的全部像素点[(x1,y1)、(x2,y1)…(xn,y1)]及其像素值[G11、G12…G1n],并计算第一行各像素点像素值[G11、G12…G1n]的和A1。然后,获取第二行中的全部像素点[(x1,y2)、(x2,y2)…(xn,y2)]及其像素值[G21、G22…G2n],并计算第二行各像素点像素值[G21、G22…G2n]的和A2。以此类推,得到第三行至第m行各像素点像素值之和[A3、A4…Am]。
在计算行方差时,首先根据公式一计算每行像素点的和的平均值M。然后将公式一得到的平均值M代入公式二中计算行方差。
公式一:
$$M = \frac{1}{m}\sum_{i=1}^{m} A_i$$
其中,M为每行像素点的和的平均值,A1至Am依次表示第一行至第m行每行的像素点像素值的和。
公式二:
$$H^2 = \frac{1}{m}\sum_{i=1}^{m} \left(A_i - M\right)^2$$
其中,H2为行方差,A1至Am依次表示第一行至第m行每行的像素点像素值的和。
在一种确定列方差的实现方式中,以列为单元,分别计算位于同一列的像素点的像素值的和。首先获取第一列中的全部像素点[(x1,y1)、(x1,y2)…(x1,ym)]及其像素值[G11、G21…Gm1],并计算第一列各像素点像素值[G11、G21…Gm1]的和B1。然后,获取第二列中的全部像素点[(x2,y1)、(x2,y2)…(x2,ym)]及其像素值[G12、G22…Gm2],并计算第二列各像素点像素值[G12、G22…Gm2]的和B2。以此类推,得到第三列至第n列各像素点像素值之和[B3、B4…Bn]。
在计算列方差时,首先根据公式三计算每列像素点的和的平均值N。然后将公式三得到的平均值N代入公式四中计算列方差。
公式三:
$$N = \frac{1}{n}\sum_{j=1}^{n} B_j$$
其中,N为每列像素点的和的平均值,B1至Bn依次表示第一列至第n列每列的像素点像素值的和。
公式四:
$$L^2 = \frac{1}{n}\sum_{j=1}^{n} \left(B_j - N\right)^2$$
其中,L2为列方差,B1至Bn依次表示第一列至第n列每列的像素点像素值的和。
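公式一至公式四的计算过程可用如下 Python 草稿示意(假设目标图像以 m×n 的嵌套列表表示,方差按总体方差即除以个数计算,均为示例性约定):

```python
def row_variance(image):
    """按公式一、公式二计算行方差:先求每行像素值之和 Ai,再求其方差 H^2。"""
    sums = [sum(row) for row in image]             # A1..Am
    m = len(sums)
    mean = sum(sums) / m                           # 公式一:M
    return sum((a - mean) ** 2 for a in sums) / m  # 公式二:H^2

def col_variance(image):
    """按公式三、公式四计算列方差:先求每列像素值之和 Bj,再求其方差 L^2。"""
    sums = [sum(col) for col in zip(*image)]       # B1..Bn
    n = len(sums)
    mean = sum(sums) / n                           # 公式三:N
    return sum((b - mean) ** 2 for b in sums) / n  # 公式四:L^2
```

例如,对行间像素值差异大而列间差异为零的图像,行方差大、列方差为 0。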
步骤130、如果行方差和/或列方差大于预设方差,则确定纹理图像为指纹图像。
当纹理图像为指纹图像时,由于指纹图像中存在不规则的纹理及纹理之间的缝隙,与具有规律纹理的口袋面料相比,指纹图像计算出的方差将大于规则纹理的口袋面料的方差。预设方差可参考规则图形对应的方差值进行定义,例如预设方差的取值为0-10,优选为6。
本实施例提供的技术方案通过在进行指纹识别之前,获取纹理图像中目标图像的行方差和/或列方差,当行方差和/或列方差大于预设方差时,确定纹理图像为指纹图像,并启动后续的指纹识别流程。与现有技术直接对获取的纹理图像进行指纹识别相比,本实施例能够在进行指纹识别之前判断获取的纹理图像是否为指纹图像,当确定为指纹图像时,再进行指纹识别,减少不必要的指纹识别,提高系统资源的利用率以及识别效率。
此外,现有技术中手机除了会与口袋中的面料接触,还可能与人体的其他部位的皮肤接触,例如手掌、脸部、鼻子等。由于上述人体部位的皮肤纹理较均匀,因此计算出的行方差和/或列方差仍然小于指纹对应的行方差和/或列方差。因此,上述实施例还可避免与人体皮肤接触造成误触使智能终端进行指纹识别,提高识别效率。
实施例二
本发明实施例还提供了一种指纹识别的方法,作为对上述实施例的进一步说明,如图3所示,在步骤110、获取目标图像之后,所述方法还包括:
步骤150、对目标图像进行二值化处理。
设定阈值T,用阈值T将目标图像的像素点分成两部分:像素值大于或等于阈值T的像素群和像素值小于阈值T的像素群。将像素值大于或等于阈值T的像素群的像素设定为白色(或者黑色),像素值小于阈值T的像素群的像素设定为黑色(或者白色)。经二值化处理后,目标图像中的像素点的像素值为0或1。如果目标图像为指纹图像,则指纹的纹理对应的像素点的像素值为1(或者为0),指纹纹理之间的缝隙对应的像素点的像素值为0(或者为1)。当目标图像为灰度图时,阈值T对应的灰度值(即像素值)的取值范围为0-255,示例性的阈值T的取值为120。
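上述二值化处理可用如下 Python 草稿示意(阈值 T=120 为文中的示例取值;此处以 1 表示白、0 表示黑,属于本示例的假设约定):

```python
def binarize(image, threshold=120):
    """将灰度图二值化:像素值 >= 阈值记为 1(白),否则记为 0(黑)。"""
    return [[1 if px >= threshold else 0 for px in row] for row in image]
```

二值化后每行(列)像素值之和最大即为该行(列)的像素点个数,可降低后续方差计算的数值规模。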
相应的,步骤120、根据目标图像中每行像素点的像素值确定行方差,和/或,根据目标图像中每列像素点的像素值确定列方差,可通过下述方式进行实施:
步骤120a、根据二值化的目标图像中每行像素点的像素值确定行方差,和/或,根据目标图像中每列像素点的像素值确定列方差。
如果目标图像为m行n列的矩阵图像,则二值化的目标图像中每行像素点的像素值之和最大为n,每列像素点的像素值之和最大为m。
本实施例提供的技术方案,能够将目标图像进行二值化处理,得到二值图像。由于二值图像中像素点的像素值为0或为1,因此能够降低每行像素点的像素值之和以及每列像素点的像素值之和的数值,进而降低步骤120中计算方差的复杂度,提高方差计算的速度,进而提高图片的识别效率。
本发明实施例还提供了一种指纹识别的方法,作为对上述实施例的进一步说明,如图4所示,步骤110、从捕获的纹理图像中获取目标图像,包括:
步骤111、从纹理图像中确定至少一个目标区域。
其中,目标区域的面积小于纹理图像的面积。在进行划分时,可以将纹理图像划分为多个目标区域。例如:如图5所示,从中线位置将纹理图像划分为上半部和下半部,并将其中的上半部确定为目标区域。又例如:如图6所示,将纹理图像划分为左上部、左下部、右上部、右下部四个部分,并将其中的左上部和右下部确定为目标区域。
也可以从纹理图像中确定目标区域。例如:在纹理图像中的预定位置,将[(a1,b1)、(a2,b2)、(a3,b3)、(a4,b4)]组成的矩形坐标区域确定为目标区域。
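图5、图6所示的划分方式可用如下 Python 草稿示意(以嵌套列表表示图像,划分为上下两半或四个象限;函数名与等分约定均为示例性假设):

```python
def split_halves(image):
    """从中线位置将图像划分为上半部和下半部(行数为偶数时等分)。"""
    mid = len(image) // 2
    return image[:mid], image[mid:]

def split_quadrants(image):
    """将图像划分为左上、右上、左下、右下四个部分。"""
    mid_r, mid_c = len(image) // 2, len(image[0]) // 2
    top, bottom = image[:mid_r], image[mid_r:]
    return ([r[:mid_c] for r in top], [r[mid_c:] for r in top],
            [r[:mid_c] for r in bottom], [r[mid_c:] for r in bottom])
```

划分后可任选其中一个或多个部分(如左上部和右下部)作为目标区域。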
步骤112、将每个目标区域中的图像分别确定为目标图像。
相应的,步骤130、如果行方差和/或列方差大于预设方差,则确定纹理图像为指纹图像,可通过下述方式进行实施:
步骤130a、如果每个目标图像对应的行方差和/或列方差均大于预设方差,则确定纹理图像为指纹图像。
当确定了多个目标区域时,分别计算每个区域的行方差(或者列方差)。如果该多个目标区域的行方差(或者列方差)均大于预设方差,则说明该多个目标区域中的图像均为不规则图像,进而确定纹理图像为指纹图像。
本实施例提供的技术方案,当从纹理图像中确定一个目标区域时,由于目标区域的面积小于纹理图像的面积,因此能够减少计算量,提高方差的计算速度,提高指纹图像的识别速度。当从纹理图像中确定多个目标区域时,能够在提高指纹图像的识别速度的基础上,提高识别的准确性。
本发明实施例还提供了一种指纹识别的方法,作为对上述实施例的进一步说明,如图7所示,步骤111、从纹理图像中确定至少一个目标区域,还可通过下述方式进行实施:
步骤111′、将纹理图像中的预设位置区域确定为目标区域,预设位置区域与纹理图像具有相同的几何中心。
当纹理图像为指纹图像时,指纹图像的中心区域为指纹纹理及纹理间缝隙组成的图像。由于该中心区域的图像能够较为准确地体现指纹图像的纹理分布特点,因此将中心区域的预设位置区域确定为目标区域。预设位置区域的大小可以根据指纹传感器的额定识别范围确定。可选的,如图8所示,预设位置区域的长度为指纹传感器长度的二分之一且预设位置区域的宽度为指纹传感器宽度的二分之一,且预设位置区域的对角线交点与纹理图像的对角线交点重合。
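提取与纹理图像同几何中心、长宽各为其二分之一的预设位置区域,可用如下 Python 草稿示意(假设图像行列数均为 4 的倍数,便于整除;该假设仅为示例):

```python
def center_region(image):
    """截取与原图几何中心重合、长宽各为原图二分之一的中心区域。"""
    m, n = len(image), len(image[0])
    r0, c0 = m // 4, n // 4  # 中心区域的起始行、列
    return [row[c0:c0 + n // 2] for row in image[r0:r0 + m // 2]]
```

例如对 480×480 的纹理图像,截取出的中心区域为 240×240。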
本实施例提供的技术方案,能够从纹理图像中提取指纹特征较容易辨别的目标区域,在降低计算量的同时提高识别的准确性。
下面通过一个使用场景给出上述实施例的一个可选方式,如图9所示,包括:
步骤110、从捕获的纹理图像中获取目标图像。
步骤120b、根据所述目标图像中每行像素点的像素值确定行方差。
步骤120c、根据所述目标图像中每列像素点的像素值确定列方差。
步骤130b、如果行方差和列方差均大于预设方差,则确定纹理图像为指纹图像。
上述使用场景中在行方差和列方差均大于预设方差时,确定纹理图像为指纹图像。与仅根据行方差(或者列方差)确定指纹图像相比,通过行方差和列方差共同确定是否为指纹图像,能够进一步提高指纹图像的识别准确度。
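图9所示的使用场景(行方差与列方差均大于预设方差时判定为指纹图像)可组合上述各步骤,用如下自包含的 Python 草稿示意(预设方差取 6 为文中示例值,方差按总体方差计算,均为示例性约定):

```python
def is_fingerprint(image, preset_variance=6):
    """当目标图像的行方差与列方差均大于预设方差时,判定纹理图像为指纹图像。"""
    def variance(sums):
        mean = sum(sums) / len(sums)
        return sum((s - mean) ** 2 for s in sums) / len(sums)
    row_var = variance([sum(row) for row in image])        # 行方差 H^2
    col_var = variance([sum(col) for col in zip(*image)])  # 列方差 L^2
    return row_var > preset_variance and col_var > preset_variance
```

规律纹理(各行、各列像素值之和接近)将被判为非指纹图像,从而跳过后续指纹识别流程。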
实施例三
本发明实施例还提供了一种指纹识别的装置1,该装置1用于实施上述实施例所示的方法且位于智能终端中,如图10所示,该装置1包括:
目标图像获取单元11,用于获取目标图像,所述目标图像包含于纹理图像中;
方差计算单元12,用于根据所述目标图像获取单元11获取的所述目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像获取单元11获取的所述目标图像中每列像素点的像素值确定列方差;
确定单元13,用于如果所述方差计算单元12得到的所述行方差和/或列方差大于预设方差,则确定所述纹理图像为指纹图像。
进一步的,所述方差计算单元12用于:
计算所述目标图像获取单元11获取的所述目标图像中每行像素点的像素值之和,根据所述每行像素点的像素值之和确定行方差;和/或,
计算所述目标图像获取单元11获取的所述目标图像中每列像素点的像素值之和,根据所述每列像素点的像素值之和确定列方差。
进一步的,如图11所示,所述装置还包括:
二值处理单元14,用于对所述目标图像获取单元11获取的所述目标图像进行二值化处理;
所述方差计算单元12还用于,根据所述二值处理单元14二值化的目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差。
进一步的,如图12所示,所述目标图像获取单元11,包括:
目标区域确定子单元111,用于从所述纹理图像中确定至少一个目标区域;
目标图像确定子单元112,用于将所述目标区域确定子单元111划分出的每个目标区域中的图像分别确定为目标图像;
所述确定单元13还用于,如果所述目标图像确定子单元112确定的每个所述目标图像对应的行方差和/或列方差均大于所述预设方差,则确定所述纹理图像为指纹图像。
进一步的,所述目标区域确定子单元111还用于,将所述纹理图像中的预设位置区域确定为所述目标区域,所述预设位置区域与所述纹理图像具有相同的几何中心。
上述装置可执行本发明实施例一和实施例二所提供的方法,具备执行上述方法相应的功能模块和有益效果。未在本实施例中详尽描述的技术细节,可参见本发明实施例一和实施例二所提供的方法。
实施例四
图13为本发明实施例四提供的一种终端的结构示意图,该终端20包括:存储器21、处理器22,处理器22可以将一个或一个以上的应用程序的进程对应的可执行文件加载到存储器21中,并被配置成由所述处理器22执行,所述处理器22包括用于执行以下步骤的指令:
从捕获的纹理图像中获取目标图像;
根据所述目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差;
如果所述行方差和/或所述列方差大于预设方差,则确定所述纹理图像为指纹图像。
进一步的,所述根据所述目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差,包括:
计算所述目标图像中每行像素点的像素值之和,根据所述每行像素点的像素值之和确定行方差;和/或,
计算所述目标图像中每列像素点的像素值之和,根据所述每列像素点的像素值之和确定列方差。
进一步的,在从捕获的纹理图像中获取目标图像之后,所述方法还包括:
对所述目标图像进行二值化处理;
相应的,所述根据所述目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差,包括:
根据二值化的目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差。
进一步的,对所述目标图像进行二值化处理,包括:
将所述目标图像的像素点分为:像素值大于或等于阈值的像素群和像素值小于阈值的像素群;
将所述像素值大于或等于阈值的像素群中的像素设定为白色,将所述像素值小于阈值的像素群中的像素设定为黑色。
进一步的,所述从捕获的纹理图像中获取目标图像,包括:
从所述纹理图像中确定至少一个目标区域;
将每个目标区域中的图像分别确定为目标图像;
相应的,所述如果所述行方差和/或所述列方差大于预设方差,则确定所述纹理图像为指纹图像,包括:
如果每个所述目标图像对应的行方差和/或列方差均大于所述预设方差,则确定所述纹理图像为指纹图像。
进一步的,所述从所述纹理图像中确定至少一个目标区域,包括:
将所述纹理图像中的预设位置区域确定为所述目标区域,所述预设位置区域与所述纹理图像具有相同的几何中心。
进一步的,所述预设位置区域的长度为所述纹理图像长度的二分之一且所述预设位置区域的宽度为所述纹理图像宽度的二分之一,且所述预设位置区域的对角线交点与所述纹理图像的对角线交点重合。
此外,本实施例提供的终端还可以包括:射频(RF,Radio Frequency)电路、包括有一个或一个以上计算机可读存储介质的存储器、输入单元、显示单元、传感器、音频电路、无线保真(WiFi,Wireless Fidelity)模块、包括有一个或者一个以上处理核心的处理器、以及电源等部件。
本实施例提供的终端,可以获取纹理图像中目标图像的行方差和/或列方差,当行方差和/或列方差大于预设方差时,确定纹理图像为指纹图像,并启动后续的指纹识别流程。与现有技术直接对获取的纹理图像进行指纹识别相比,本发明能够在进行指纹识别之前判断获取的纹理图像是否为指纹图像,当确定为指纹图像时,再进行指纹识别,减少不必要的指纹识别,提高系统资源的利用率以及识别效率。
本发明的各个实施例中,术语“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系。例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。
注意,上述仅为本发明的较佳实施例及所运用技术原理。本领域技术人员会理解,本发明不限于这里所述的特定实施例,对本领域技术人员来说能够进行各种明显的变化、重新调整和替代而不会脱离本发明的保护范围。因此,虽然通过以上实施例对本发明进行了较为详细的说明,但是本发明不仅仅限于以上实施例,在不脱离本发明构思的情况下,还可以包括更多其他等效实施例,而本发明的范围由所附的权利要求范围决定。

Claims (20)

  1. 一种指纹识别的方法,其中包括:
    从捕获的纹理图像中获取目标图像;
    根据所述目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差;
    如果所述行方差和/或所述列方差大于预设方差,则确定所述纹理图像为指纹图像。
  2. 根据权利要求1所述的指纹识别的方法,其中所述根据所述目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差,包括:
    计算所述目标图像中每行像素点的像素值之和,根据所述每行像素点的像素值之和确定行方差;和/或,
    计算所述目标图像中每列像素点的像素值之和,根据所述每列像素点的像素值之和确定列方差。
  3. 根据权利要求1所述的指纹识别的方法,其中在从捕获的纹理图像中获取目标图像之后,所述方法还包括:
    对所述目标图像进行二值化处理;
    相应的,所述根据所述目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差,包括:
    根据二值化的目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差。
  4. 根据权利要求3所述的指纹识别的方法,其中对所述目标图像进行二值化处理,包括:
    将所述目标图像的像素点分为:像素值大于或等于阈值的像素群和像素值小于阈值的像素群;
    将所述像素值大于或等于阈值的像素群中的像素设定为白色,将所述像素值小于阈值的像素群中的像素设定为黑色。
  5. 根据权利要求1所述的指纹识别的方法,其中所述从捕获的纹理图像中获取目标图像,包括:
    从所述纹理图像中确定至少一个目标区域;
    将每个目标区域中的图像分别确定为目标图像;
    相应的,所述如果所述行方差和/或所述列方差大于预设方差,则确定所述纹理图像为指纹图像,包括:
    如果每个所述目标图像对应的行方差和/或列方差均大于所述预设方差,则确定所述纹理图像为指纹图像。
  6. 根据权利要求5所述的指纹识别的方法,其中所述从所述纹理图像中确定至少一个目标区域,包括:
    将所述纹理图像中的预设位置区域确定为所述目标区域,所述预设位置区域与所述纹理图像具有相同的几何中心。
  7. 根据权利要求6所述的指纹识别的方法,其中所述预设位置区域的长度为所述纹理图像长度的二分之一且所述预设位置区域的宽度为所述纹理图像宽度的二分之一,且所述预设位置区域的对角线交点与所述纹理图像的对角线交点重合。
  8. 一种指纹识别的装置,其中包括:
    目标图像获取单元,用于获取目标图像,所述目标图像包含于纹理图像中;
    方差计算单元,用于根据所述目标图像获取单元获取的所述目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像获取单元获取的所述目标图像中每列像素点的像素值确定列方差;
    确定单元,用于如果所述方差计算单元得到的所述行方差和/或列方差大于预设方差,则确定所述纹理图像为指纹图像。
  9. 根据权利要求8所述的指纹识别的装置,其中所述方差计算单元用于:
    计算所述目标图像获取单元获取的所述目标图像中每行像素点的像素值之和,根据所述每行像素点的像素值之和确定行方差;和/或,
    计算所述目标图像获取单元获取的所述目标图像中每列像素点的像素值之和,根据所述每列像素点的像素值之和确定列方差。
  10. 根据权利要求8所述的指纹识别的装置,其中所述装置还包括:
    二值处理单元,用于对所述目标图像获取单元获取的所述目标图像进行二值化处理;
    所述方差计算单元还用于,根据所述二值处理单元二值化的目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差。
  11. 根据权利要求10所述的指纹识别的装置,其中所述二值处理单元,具体用于:
    将所述目标图像的像素点分为:像素值大于或等于阈值的像素群和像素值小于阈值的像素群;
    将所述像素值大于或等于阈值的像素群中的像素设定为白色,将所述像素值小于阈值的像素群中的像素设定为黑色。
  12. 根据权利要求8所述的指纹识别的装置,其中所述目标图像获取单元,包括:
    目标区域确定子单元,用于从所述纹理图像中确定至少一个目标区域;
    目标图像确定子单元,用于将所述目标区域确定子单元划分出的每个目标区域中的图像分别确定为目标图像;
    所述确定单元还用于,如果所述目标图像确定子单元确定的每个所述目标图像对应的行方差和/或列方差均大于所述预设方差,则确定所述纹理图像为指纹图像。
  13. 根据权利要求12所述的指纹识别的装置,其中所述目标区域确定子单元还用于,将所述纹理图像中的预设位置区域确定为所述目标区域,所述预设位置区域与所述纹理图像具有相同的几何中心。
  14. 根据权利要求13所述的指纹识别的装置,其中所述预设位置区域的长度为所述纹理图像长度的二分之一且所述预设位置区域的宽度为所述纹理图像宽度的二分之一,且所述预设位置区域的对角线交点与所述纹理图像的对角线交点重合。
  15. 一种终端,其中,所述终端包括存储器、处理器,所述存储器中存储有一个或一个以上的应用程序的进程对应的可执行文件,并被配置成由所述处理器执行,所述处理器包括用于执行以下步骤的指令:
    从捕获的纹理图像中获取目标图像;
    根据所述目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差;
    如果所述行方差和/或所述列方差大于预设方差,则确定所述纹理图像为指纹图像。
  16. 根据权利要求15所述的终端,其中所述根据所述目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差,包括:
    计算所述目标图像中每行像素点的像素值之和,根据所述每行像素点的像素值之和确定行方差;和/或,
    计算所述目标图像中每列像素点的像素值之和,根据所述每列像素点的像素值之和确定列方差。
  17. 根据权利要求15所述的终端,其中在从捕获的纹理图像中获取目标图像之后,所述方法还包括:
    对所述目标图像进行二值化处理;
    相应的,所述根据所述目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差,包括:
    根据二值化的目标图像中每行像素点的像素值确定行方差,和/或,根据所述目标图像中每列像素点的像素值确定列方差。
  18. 根据权利要求17所述的终端,其中对所述目标图像进行二值化处理,包括:
    将所述目标图像的像素点分为:像素值大于或等于阈值的像素群和像素值小于阈值的像素群;
    将所述像素值大于或等于阈值的像素群中的像素设定为白色,将所述像素值小于阈值的像素群中的像素设定为黑色。
  19. 根据权利要求15所述的终端,其中所述从捕获的纹理图像中获取目标图像,包括:
    从所述纹理图像中确定至少一个目标区域;
    将每个目标区域中的图像分别确定为目标图像;
    相应的,所述如果所述行方差和/或所述列方差大于预设方差,则确定所述纹理图像为指纹图像,包括:
    如果每个所述目标图像对应的行方差和/或列方差均大于所述预设方差,则确定所述纹理图像为指纹图像。
  20. 根据权利要求19所述的终端,其中所述从所述纹理图像中确定至少一个目标区域,包括:
    将所述纹理图像中的预设位置区域确定为所述目标区域,所述预设位置区域与所述纹理图像具有相同的几何中心。
PCT/CN2016/093746 2015-10-19 2016-08-05 一种指纹识别的方法、装置及终端 WO2017067287A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510679995.3 2015-10-19
CN201510679995.3A CN105260720B (zh) 2015-10-19 2015-10-19 指纹识别的方法及装置

Publications (1)

Publication Number Publication Date
WO2017067287A1 true WO2017067287A1 (zh) 2017-04-27

Family

ID=55100401

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/093746 WO2017067287A1 (zh) 2015-10-19 2016-08-05 一种指纹识别的方法、装置及终端

Country Status (2)

Country Link
CN (1) CN105260720B (zh)
WO (1) WO2017067287A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516765A (zh) * 2021-06-25 2021-10-19 深圳市优必选科技股份有限公司 一种地图管理方法、地图管理装置及智能设备
CN114881981A (zh) * 2022-05-19 2022-08-09 常州市新创智能科技有限公司 一种玻纤布面的蚊虫检测方法及装置

Families Citing this family (6)

Publication number Priority date Publication date Assignee Title
CN105260720B (zh) * 2015-10-19 2017-02-22 广东欧珀移动通信有限公司 指纹识别的方法及装置
CN106055947A (zh) * 2016-05-23 2016-10-26 广东欧珀移动通信有限公司 通过指纹解锁移动终端的方法、装置及移动终端
CN106066685B (zh) * 2016-05-30 2018-01-19 广东欧珀移动通信有限公司 一种解锁控制方法及终端设备
CN107895143A (zh) * 2017-10-27 2018-04-10 维沃移动通信有限公司 一种指纹信息处理方法、移动终端及计算机可读存储介质
CN108731772B (zh) * 2018-03-20 2019-08-23 奥菲(泰州)光电传感技术有限公司 液面高度大数据测量系统
CN112561821B (zh) * 2020-12-17 2024-05-17 中国电子产品可靠性与环境试验研究所((工业和信息化部电子第五研究所)(中国赛宝实验室)) 基于近场扫描的芯片表面电磁数据降噪方法

Citations (4)

Publication number Priority date Publication date Assignee Title
CN101833649A (zh) * 2009-03-09 2010-09-15 杭州晟元芯片技术有限公司 一种指纹残留甄别方法
WO2014101839A1 (en) * 2012-12-31 2014-07-03 Tsinghua University Method for registering fingerprint image
CN104899029A (zh) * 2015-05-28 2015-09-09 广东欧珀移动通信有限公司 一种屏幕控制方法及装置
CN105260720A (zh) * 2015-10-19 2016-01-20 广东欧珀移动通信有限公司 指纹识别的方法及装置

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN1297937C (zh) * 2004-07-02 2007-01-31 电子科技大学 一种基于方向信息的指纹图像分割方法
CN1267849C (zh) * 2004-07-02 2006-08-02 清华大学 基于断纹检测的指纹识别方法
CN1327387C (zh) * 2004-07-13 2007-07-18 清华大学 指纹多特征识别方法

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN101833649A (zh) * 2009-03-09 2010-09-15 杭州晟元芯片技术有限公司 一种指纹残留甄别方法
WO2014101839A1 (en) * 2012-12-31 2014-07-03 Tsinghua University Method for registering fingerprint image
CN104899029A (zh) * 2015-05-28 2015-09-09 广东欧珀移动通信有限公司 一种屏幕控制方法及装置
CN105260720A (zh) * 2015-10-19 2016-01-20 广东欧珀移动通信有限公司 指纹识别的方法及装置

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN113516765A (zh) * 2021-06-25 2021-10-19 深圳市优必选科技股份有限公司 一种地图管理方法、地图管理装置及智能设备
CN113516765B (zh) * 2021-06-25 2023-08-11 深圳市优必选科技股份有限公司 一种地图管理方法、地图管理装置及智能设备
CN114881981A (zh) * 2022-05-19 2022-08-09 常州市新创智能科技有限公司 一种玻纤布面的蚊虫检测方法及装置
CN114881981B (zh) * 2022-05-19 2023-03-10 常州市新创智能科技有限公司 一种玻纤布面的蚊虫检测方法及装置

Also Published As

Publication number Publication date
CN105260720B (zh) 2017-02-22
CN105260720A (zh) 2016-01-20

Similar Documents

Publication Publication Date Title
WO2017067287A1 (zh) 一种指纹识别的方法、装置及终端
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
US9697416B2 (en) Object detection using cascaded convolutional neural networks
WO2017067270A1 (zh) 指纹图像的识别的方法、装置及终端
WO2017088804A1 (zh) 人脸图像中检测眼镜佩戴的方法及装置
JP2015513754A (ja) 顔認識方法及びデバイス
CN110751024B (zh) 基于手写签名的用户身份识别方法、装置及终端设备
CN104331690B (zh) 一种基于单张图像的肤色人脸检测方法及系统
US20140321770A1 (en) System, method, and computer program product for generating an image thumbnail
WO2018068304A1 (zh) 一种图像匹配的方法及装置
CN108334879B (zh) 一种区域提取方法、系统及终端设备
CN110298785A (zh) 图像美化方法、装置及电子设备
WO2020125062A1 (zh) 一种图像融合方法及相关装置
CN103218600B (zh) 一种实时人脸检测算法
US20190122041A1 (en) Coarse-to-fine hand detection method using deep neural network
CN109903283A (zh) 一种基于图像法向量的掩模图形边缘缺陷检测方法
US10262185B2 (en) Image processing method and image processing system
WO2021051580A1 (zh) 基于分组批量的图片检测方法、装置及存储介质
US20150253861A1 (en) Detecting device and detecting method
US9734610B2 (en) Image processing device, image processing method, and image processing program
CN108288024A (zh) 人脸识别方法及装置
CN111008987A (zh) 基于灰色背景中边缘图像提取方法、装置及可读存储介质
CN110288552A (zh) 视频美化方法、装置及电子设备
CN115840550A (zh) 一种自适应角度的显示屏显示方法、装置及介质
US10706315B2 (en) Image processing device, image processing method, and computer program product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16856722

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16856722

Country of ref document: EP

Kind code of ref document: A1