CN112418191A - Fingerprint identification model construction method, storage medium and computer equipment - Google Patents


Info

Publication number
CN112418191A
CN112418191A (application CN202110083231.3A)
Authority
CN
China
Prior art keywords
sample
samples
resnet
anchor
training
Prior art date
Legal status
Granted
Application number
CN202110083231.3A
Other languages
Chinese (zh)
Other versions
CN112418191B (en)
Inventor
侯舒文
尹鹏帅
陈子豪
杨光兴
Current Assignee
Shenzhen Fushi Technology Co Ltd
Original Assignee
Shenzhen Fushi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Fushi Technology Co Ltd filed Critical Shenzhen Fushi Technology Co Ltd
Priority to CN202110083231.3A priority Critical patent/CN112418191B/en
Publication of CN112418191A publication Critical patent/CN112418191A/en
Application granted granted Critical
Publication of CN112418191B publication Critical patent/CN112418191B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 - Fingerprints or palmprints
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 - Fingerprints or palmprints
    • G06V40/1365 - Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application provides a fingerprint identification model construction method based on Resnet and Triplet Loss, which comprises: constructing N groups of initial samples; training the N groups of initial samples with Triplet Loss to obtain N groups of training samples; inputting the N groups of training samples into an initial Resnet model to train it and obtain a Resnet model of the target to be tested; inputting two groups of test samples into the Resnet model of the target to be tested for calculation to obtain two groups of image characteristic quantities; calculating the maximum cosine similarity between the two groups of image characteristic quantities with a preset algorithm; judging whether the maximum cosine similarity is greater than or equal to a preset value; if so, generating the target Resnet model from the Resnet model of the target to be tested; otherwise, constructing M groups of initial samples and inputting M groups of training samples into the Resnet model of the target to be tested for further training. The application also provides a storage medium and a computer device. Through the fingerprint identification model jointly constructed by Resnet and Triplet Loss, the method effectively solves the problem of matching and identifying narrow-edge fingerprints.

Description

Fingerprint identification model construction method, storage medium and computer equipment
Technical Field
The present application relates to the field of fingerprint identification, and in particular, to a method for constructing a fingerprint identification model, a storage medium, and a computer device.
Background
In modern society, science and technology develop rapidly, electronic devices such as mobile phones and computers are updated frequently, and identity authentication by fingerprint is increasingly popular. The fingerprint is a highly distinctive and unique human biometric feature that is easy to acquire and offers high security, so fingerprint identification has become one of the main modes of current biometric identification technology. With the development of biometric identification technology, under-screen fingerprint identification is trending toward side-mounted fingerprint sensors. A side fingerprint collection device has a smaller sensing area and is more convenient to use, but the collected fingerprint image is a long, narrow strip, so it carries less useful fingerprint information, and the direction and position of the finger may differ greatly between enrollment and unlocking, which poses challenges for false rejection and false acceptance in fingerprint identification.
Therefore, how to obtain a model capable of accurately matching and identifying narrow-edge fingerprints is a problem that urgently needs to be solved.
Disclosure of Invention
The application provides a fingerprint identification model construction method, a storage medium and computer equipment, which can effectively realize the matching and identification of narrow-edge fingerprints.
In a first aspect, an embodiment of the present application provides a method for constructing a fingerprint identification model based on Resnet and Triplet Loss, where the method for constructing a fingerprint identification model based on Resnet and Triplet Loss includes:
constructing N groups of initial samples, wherein each group of initial samples in the N groups of initial samples comprises an anchor sample, a positive sample and a negative sample, the anchor sample, the positive sample and the negative sample are square images, and N is an integer greater than 1;
training the N groups of initial samples by using Triplet Loss to obtain N groups of training samples, wherein each group of training samples in the N groups of training samples comprises a first training sample and a second training sample, the first training sample is generated from an anchor sample and a positive sample, and the second training sample is generated from the anchor sample and a negative sample;
inputting N groups of training samples into an initial Resnet model to train the initial Resnet model to obtain a Resnet model of a target to be tested;
inputting two groups of test samples into a Resnet model of a target to be tested for calculation to obtain two groups of image characteristic quantities, wherein the two groups of test samples are sampled on the same finger, each group of test samples in the two groups of test samples comprises X square images, and X is an integer greater than 1;
calculating the maximum cosine similarity between the two groups of image characteristic quantities by using a preset algorithm;
judging whether the maximum cosine similarity is greater than or equal to a preset value;
when the maximum cosine similarity is greater than or equal to the preset value, generating a target Resnet model from the Resnet model of the target to be tested;
and when the maximum cosine similarity is smaller than the preset value, constructing M groups of initial samples and inputting M groups of training samples into the Resnet model of the target to be tested to train it further, wherein M is an integer greater than 1.
In a second aspect, an embodiment of the present application provides a storage medium, where the storage medium stores program instructions, and the program instructions are executed by a processor to implement the above fingerprint identification model construction method based on Resnet and Triplet Loss.
In a third aspect, an embodiment of the present application provides a computer device, where the computer device includes:
a memory for storing program instructions;
and a processor for executing the program instructions to enable the computer device to implement the above fingerprint identification model construction method based on Resnet and Triplet Loss.
In a fourth aspect, an embodiment of the present application provides a fingerprint identification model construction apparatus based on Resnet and Triplet Loss, where the apparatus includes:
an initial sample construction module, configured to construct an initial sample to be input into the Triplet Loss module, wherein the initial sample comprises an anchor sample, a positive sample and a negative sample, and the anchor sample, the positive sample and the negative sample are all square images;
a Triplet Loss module, configured to train the initial sample into a training sample, wherein the training sample comprises a first training sample and a second training sample, the first training sample is generated from the anchor sample and the positive sample, and the second training sample is generated from the anchor sample and the negative sample;
a training module, configured to input the training sample into an initial Resnet model to train the initial Resnet model and obtain a Resnet model of the target to be tested;
a detection module, configured to judge whether the recognition capability of the Resnet model of the target to be tested reaches a preset standard; when the recognition capability reaches the preset standard, the Resnet model of the target to be tested is taken as the target Resnet model; when the recognition capability does not reach the preset standard, the initial sample construction module continues to construct initial samples and inputs them into the Triplet Loss module.
The fingerprint identification model construction scheme based on Resnet and Triplet Loss provides a narrow-edge fingerprint matching algorithm combining a Resnet network with Triplet Loss to solve the problem that narrow-edge fingerprints are difficult to match for unlocking. First, Triplet Loss learning reduces the distance between positive samples (from the same finger) and enlarges the distance between negative samples (from different fingers), so that positive samples share clearly common features while negative samples show clearly distinct features. Second, the trained positive and negative samples are used to train the Resnet network, and the trained Resnet network is checked with two groups of fingerprint images from the same finger until the features it outputs can indicate that fingerprint images collected at different angles come from the same finger, at which point the construction of the fingerprint identification model is complete. Because the positive and negative samples learned through Triplet Loss are used when training the Resnet network, the trained network is more sensitive and accurate when extracting features from fingerprint images of one finger or of different fingers, and is suitable for comparison between smaller fingerprint images. In addition, the Resnet network used here is lightweight, so the computation required to extract fingerprint features is small and the storage space is reduced. Finally, because the trained network model is verified, it can be copied directly into a module and run, which improves the portability of the Resnet and Triplet Loss fingerprint identification model.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is to be understood that the drawings in the following description are merely exemplary of the application and that other drawings may be derived from the structure shown in the drawings by one of ordinary skill in the art without the exercise of inventive faculty.
Fig. 1 is a flowchart of a fingerprint identification model construction method based on Resnet and Triplet Loss according to the first embodiment of the present application.
Fig. 2 is a first sub-flowchart of the fingerprint identification model construction method based on Resnet and Triplet Loss according to the first embodiment of the present application.
Fig. 3 is a second sub-flowchart of the fingerprint identification model construction method based on Resnet and Triplet Loss according to the first embodiment of the present application.
Fig. 4 is a third sub-flowchart of the fingerprint identification model construction method based on Resnet and Triplet Loss according to the first embodiment of the present application.
Fig. 5 is a fourth sub-flowchart of the fingerprint identification model construction method based on Resnet and Triplet Loss according to the first embodiment of the present application.
Fig. 6 is a sub-flowchart of a fingerprint identification model construction method based on Resnet and Triplet Loss according to the second embodiment of the present application.
Fig. 7 is a schematic diagram of obtaining a positive sample according to the first embodiment of the present application.
Fig. 8 is a schematic diagram of an initial sample provided in the first embodiment of the present application.
Fig. 9 is a schematic diagram of a computer device according to a first embodiment of the present application.
Reference numerals for the various elements in the figures
900 Computer device    901 Memory
902 Processor    903 Bus
904 Display assembly    905 Communication assembly
700 Sample fingerprint image    701 Positive sample obtained by left-right translation
702 Positive sample obtained by up-down translation    703 Positive sample obtained by random rotation
704 Positive sample obtained by random translation    710 Anchor sample
720 Positive sample    730 Negative sample
800 Initial sample
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the descriptions in this application referring to "first", "second", etc. are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, technical solutions between various embodiments may be combined with each other, but must be realized by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope of the present application.
Please refer to fig. 1, which illustrates a fingerprint identification model construction method based on Resnet and Triplet Loss according to the first embodiment of the present application. The method provided by the first embodiment specifically comprises the following steps.
Step S101, N groups of initial samples are constructed, each group of initial samples in the N groups comprises an anchor sample, a positive sample and a negative sample, the anchor sample, the positive sample and the negative sample are square images, and N is an integer greater than 1. For the specific content of constructing the N groups of initial samples, refer to steps S1011-S1014. In the present application, the initial fingerprint images from which the anchor, positive and negative samples are collected, and the test fingerprint images from which the test samples are collected, are strip-shaped fingerprint images. A strip-shaped fingerprint image can be acquired by a side fingerprint sensor or obtained from a fingerprint sample library. Referring to fig. 8, each group of initial samples 800 includes an anchor sample 710, a positive sample 720, and a negative sample 730. An anchor sample may be grouped with any one of a plurality of positive samples 720 and any one of a plurality of negative samples 730 into a group of initial samples 800. The positive sample 720 and the negative sample 730 of each group of initial samples 800 are different. The square images of the anchor, positive and negative samples and the square images in each group of test samples have the same side length (in pixels) and the same area (in pixels squared).
Step S102, training the N groups of initial samples by using Triplet Loss to obtain N groups of training samples, wherein each group of training samples comprises a first training sample and a second training sample, the first training sample is generated from an anchor sample and a positive sample, and the second training sample is generated from the anchor sample and a negative sample.
Triplet Loss, i.e., the triplet loss, has three elements: the Anchor, the Positive and the Negative. Triplet Loss is a loss function in deep learning used to train on samples with small differences, such as fingerprints. After Triplet Loss learning, the distance between the anchor sample and the positive sample decreases, while the distance between the anchor sample and the negative sample increases. Here, the anchor sample is a randomly drawn sample, the positive sample is a sample of the same class as the anchor sample, and the negative sample is a sample of a different class. Triplet Loss training is performed on the N groups of initial samples to obtain N groups of training samples, which constitute the Triplet Loss training set. In this embodiment, (A_P) is defined as the first training sample and (A_N) as the second training sample, and together they constitute the N groups of Triplet Loss training samples. Triplet Loss makes the distance between the feature expressions of (A_P) as small as possible and the distance between the feature expressions of (A_N) as large as possible, so as to better separate the two classes of image features.
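The pull-together/push-apart behavior described above can be sketched in plain Python; the 2-D embeddings, the Euclidean metric and the margin value of 0.2 are illustrative assumptions, not values from the patent:

```python
import math

def euclidean(u, v):
    # Euclidean distance between two embedding vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    # L = max(d(a, p) - d(a, n) + margin, 0): the loss is zero once the
    # negative is at least `margin` farther from the anchor than the positive
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)

# toy 2-D embeddings: the positive sits close to the anchor, the negative far away
a, p, n = [0.0, 0.0], [0.1, 0.0], [1.0, 1.0]
satisfied = triplet_loss(a, p, n)   # triplet already separated, loss 0
violated = triplet_loss(a, n, p)    # roles swapped, so the loss is positive
```

Minimizing this quantity over many triplets is what drives the anchor-positive distance down and the anchor-negative distance up.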
Step S103, inputting the N groups of training samples into the initial Resnet model and training it to obtain a Resnet model of the target to be tested. In this embodiment of the deep residual network (Resnet), the shallow residual network Resnet18 is selected, where 18 denotes an 18-layer network with weight parameters, composed of convolutional layers (17) and a fully-connected layer (1), with Batch Normalization (BN) and pooling layers interposed between them; BN alleviates the shift in the data distribution of intermediate layers during training, mitigates vanishing or exploding gradients, and accelerates training.
In this embodiment, the fingerprint identification model is implemented based on the Resnet18 network and Triplet Loss. For convolutional neural networks, a deep network generally performs better than a shallow one, and the width and depth of a network are usually increased to improve its performance; however, simply increasing the depth of the original network causes the network model to degrade. The Resnet18 network introduced in this embodiment not only makes the network easier to fit but also solves the degradation problem. Because the images used to train on fingerprints are small and contain a limited amount of feature information, the shallow residual network Resnet18 is selected, which reduces computation, improves the learning efficiency of the fingerprint identification model, and avoids the performance degradation a CNN model suffers as the number of layers of its deep learning network increases. In addition, the Resnet18 network is lightweight, so the computation needed to extract fingerprint features is small and the storage space is reduced.
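The residual idea that motivates choosing Resnet18 can be illustrated with a minimal sketch: the block computes y = relu(F(x) + x), so when the residual mapping F is zero the block reduces to the identity, which is what prevents degradation in deeper stacks. The toy vectors and mappings below are assumptions for illustration only:

```python
def relu(x):
    return [max(v, 0.0) for v in x]

def residual_block(x, f):
    # identity shortcut: add the input back onto the residual mapping F(x)
    fx = f(x)
    return relu([a + b for a, b in zip(fx, x)])

# with a zero residual mapping the block passes its input straight through,
# so stacking more such blocks cannot make the mapping worse
identity_out = residual_block([1.0, 2.0], lambda x: [0.0] * len(x))
shifted_out = residual_block([1.0, -3.0], lambda x: [0.5, 0.5])
```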
Step S104, inputting two groups of test samples into the Resnet model of the target to be tested for calculation to obtain two groups of image characteristic quantities, wherein the two groups of test samples are sampled from the same finger, each group of test samples comprises X square images, and X is an integer greater than 1.
Step S105, calculating the maximum cosine similarity between the two groups of image feature quantities by using a preset algorithm. In this embodiment, the two groups of image feature quantities comprise a horizontal image feature quantity set and a vertical image feature quantity set; the horizontal set comprises T1 horizontal image feature quantities and the vertical set comprises T2 vertical image feature quantities, where T1 and T2 are integers greater than 1. Using a preset algorithm to calculate the maximum cosine similarity between the two groups specifically comprises calculating, by a cyclic traversal method, the maximum cosine similarity between the T1 horizontal image feature quantities and the T2 vertical image feature quantities.
In this embodiment, a cosine similarity scoring method based on the feature vectors output by the Resnet18 network is provided, and the maximum cosine similarity value is calculated by cyclic traversal, so that whether the test samples come from the same finger can be judged quickly and accurately.
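A minimal sketch of this cyclic-traversal scoring, assuming plain Python lists as feature vectors (the toy vectors are illustrative, not real Resnet18 outputs):

```python
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def max_cosine_similarity(feats_a, feats_b):
    # cyclic traversal: score every pairing between the two groups of
    # feature vectors and keep the best score as the matching score
    return max(cosine_similarity(u, v) for u in feats_a for v in feats_b)

enrolled = [[1.0, 0.0], [0.0, 1.0]]   # e.g. T1 = 2 feature vectors
probe = [[0.6, 0.8], [1.0, 0.1]]      # e.g. T2 = 2 feature vectors
score = max_cosine_similarity(enrolled, probe)
```

The resulting maximum score is then compared against the preset value in step S106.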
In some other possible embodiments, the Euclidean distance between the image feature quantities output by the Resnet18 network may also be calculated, and a preset Euclidean distance threshold may be used to separate the "overlap regions" of the same finger from those of different fingers, so as to quickly and accurately determine whether the two images come from the same finger.
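That alternative decision rule can be sketched the same way; the threshold value used here is an assumed placeholder, not one given by the patent:

```python
import math

def same_finger_by_distance(feat_a, feat_b, threshold=0.5):
    # small Euclidean distance between feature vectors suggests the two
    # images come from the same finger; large distance suggests otherwise
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(feat_a, feat_b)))
    return dist <= threshold

match = same_finger_by_distance([0.2, 0.4], [0.25, 0.38])
mismatch = same_finger_by_distance([0.0, 0.0], [1.0, 1.0])
```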
Step S106, judging whether the maximum cosine similarity is greater than or equal to a preset value.
Step S107, when the maximum cosine similarity is greater than or equal to the preset value, generating the target Resnet model from the Resnet model of the target to be tested.
Step S108, when the maximum cosine similarity is smaller than the preset value, constructing M groups of initial samples and inputting M groups of training samples into the Resnet model of the target to be tested to train it further, where M is an integer greater than 1.
In this embodiment, fingerprint feature information is extracted by a deep-learning-based method, which effectively solves the problem of matching and identifying small-area fingerprint images such as (side-mounted) narrow-edge fingerprints. A Resnet18 network is combined with Triplet Loss in a CNN network for fingerprint matching: the strip-shaped fingerprint image is divided into several square images, the corresponding data sets are trained with Triplet Loss, and the initial Resnet18 network is then trained with these data sets to obtain the target Resnet18 network.
Please refer to fig. 2, which is a flowchart illustrating the sub-steps of step S101 according to an embodiment of the present application. Step S101 constructs N groups of initial samples, specifically comprising the following steps.
In step S1011, N1 anchor samples are captured on the sample fingerprint image by using a preset square, wherein the side length of the preset square is equal to the length of the short side of the sample fingerprint image, N1< N, and N1 is an integer greater than 1.
In step S1012, N2 positive samples corresponding to each of N1 anchor samples are obtained according to a preset first rule, where N2 is an integer greater than 1. Please refer to steps S10121-S10123 for obtaining the content of N2 positive samples corresponding to each anchor sample.
In step S1013, N3 negative samples corresponding to each of N1 anchor samples are obtained according to a preset second rule, where N3 is an integer greater than 1. Please refer to steps S10131-S10133 for obtaining the content of the N3 negative samples corresponding to each anchor sample.
In step S1014, N groups of initial samples are generated from N1 anchor samples and N2 positive samples and N3 negative samples corresponding to each of the N1 anchor samples according to a preset third rule.
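Capturing a square anchor whose side equals the short side of the strip (step S1011) can be sketched as follows; the 2 x 6 toy strip stands in for the 32 x 160 images described later:

```python
def crop_anchor(image, x0):
    # take a square window whose side equals the image height (the short
    # side of the strip), starting at column x0 along the length direction
    side = len(image)
    return [row[x0:x0 + side] for row in image]

strip = [
    [0, 1, 2, 3, 4, 5],
    [6, 7, 8, 9, 10, 11],
]
anchor = crop_anchor(strip, 1)   # a 2 x 2 square starting at column 1
```

Sliding x0 along the length direction yields the N1 different anchor samples.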
Please refer to fig. 3 in combination, which is a flowchart illustrating the sub-steps of step S1012 according to an embodiment of the present application. Step S1012 obtains N2 positive samples corresponding to each of the N1 anchor samples according to a preset first rule, and specifically includes the following steps.
S10121, performing left-right translation, up-down translation, random translation or random rotation on each anchor sample to obtain N20 positive samples, wherein N2 is less than N20, and N20 is an integer greater than 1. Referring to fig. 7, an anchor sample 710 is arbitrarily captured on the sample fingerprint image 700, wherein the positive sample 701 is obtained by left-right translation, the positive sample 702 is obtained by up-down translation, the positive sample 703 is obtained by random rotation, and the positive sample 704 is obtained by random translation.
S10122, calculating an image coincidence ratio between each of the N20 positive samples and the corresponding anchor sample. Specifically, image coincidence ratios between the positive sample 701 obtained by left-right translation, the positive sample 702 obtained by up-down translation, the positive sample 703 obtained by random rotation, and the positive sample 704 obtained by random translation and the anchor sample 710 are calculated.
S10123, screening N2 positive samples with the image coincidence rate larger than the first preset value.
In this embodiment, the size of the collected sample fingerprint image is 32 × 160 pixels. An anchor sample, a square image of size 32 × 32, is randomly extracted along the length direction, and different positive samples are then generated by random transformations according to preset transformation rules. In this embodiment, the image coincidence rate between each positive sample and the anchor sample is guaranteed to be no less than 70%, which ensures the quality of the positive samples.
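For translated positives, the coincidence rate reduces to the overlap of two equal square windows; the sketch below screens offsets at the 70% threshold stated above (window side 32 as in the embodiment, the offsets themselves are assumed, and rotated positives would need a different overlap computation):

```python
def overlap_ratio(side, dx, dy):
    # coincidence rate of two side x side crop windows whose top-left
    # corners differ by (dx, dy): overlap area divided by window area
    ox = max(side - abs(dx), 0)
    oy = max(side - abs(dy), 0)
    return (ox * oy) / float(side * side)

def screen_positives(side, offsets, min_ratio=0.7):
    # keep only the translated positives that overlap the anchor enough
    return [(dx, dy) for dx, dy in offsets if overlap_ratio(side, dx, dy) >= min_ratio]

# small shifts survive the screening; a 20-pixel shift of a 32-pixel window does not
kept = screen_positives(32, [(4, 0), (0, 4), (20, 0), (4, 4)])
```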
Please refer to fig. 4, which is a flowchart illustrating a sub-step of step S1013 provided in an embodiment of the present application. Step S1013 acquires N3 negative samples corresponding to each of the N1 anchor samples according to a preset second rule, which specifically includes the following steps.
S10131, moving the preset square over other sample fingerprint images that were not used to obtain the current anchor sample, to obtain N30 negative samples, wherein N3 < N30 and N30 is an integer greater than 1.
And S10132, calculating the similarity between each negative sample of the N30 negative samples and the corresponding anchor sample. For the calculation of the similarity, refer to steps S101321-S101322.
S10133, screening N3 negative samples with the similarity larger than a second preset value.
Please refer to fig. 5 in combination, which is a flowchart illustrating sub-steps of step S10132 according to an embodiment of the present application. Step S10132 calculates the similarity between each of the N30 negative samples and the corresponding anchor sample, and includes the following steps.
S101321, calculating a negative sample gray value of each negative sample and an anchor sample gray value of the anchor sample corresponding to each negative sample.
S101322, calculating the similarity between the gray value of the negative sample and the gray value of the anchor sample according to a gray average difference matching method.
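Steps S101321 and S101322 can be sketched as follows. This is a hedged illustration: the text does not give the exact normalisation used by the gray average difference matching method, so mapping the mean-gray difference onto [0, 1] over the 8-bit range is an assumption, as are the function names.

```python
import numpy as np

def gray_mean_diff_similarity(neg_patch, anchor_patch):
    """Similarity in [0, 1] from the mean gray values of two patches:
    identical means give 1.0; the difference is normalised to the 8-bit range."""
    g_neg = float(np.mean(neg_patch))        # negative sample gray value (S101321)
    g_anchor = float(np.mean(anchor_patch))  # anchor sample gray value
    return 1.0 - abs(g_neg - g_anchor) / 255.0

def screen_negatives(candidates, anchor_patch, threshold=0.7):
    """Keep candidates whose gray-value similarity exceeds the preset value (S10133)."""
    return [c for c in candidates
            if gray_mean_diff_similarity(c, anchor_patch) > threshold]
```

Keeping only the negatives that are *most similar* to the anchor in gray value selects hard negatives from other fingers, which is what makes them useful for Triplet Loss training.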
This embodiment provides a side fingerprint matching algorithm based on Resnet18 and Triplet Loss, a training set that meets the requirements of Triplet Loss, and a method for training on that sample set with Triplet Loss. Positive samples are screened for an image coincidence rate of at least 70%, and negative samples are screened for a gray-value similarity of at least 70%, with negative samples whose gray values meet a preset gray-value threshold selected from different fingers. This improves the reliability of the positive and negative samples used in training.
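The Triplet Loss referred to throughout can be illustrated in its standard hinge form. This NumPy sketch shows the loss for a single (anchor, positive, negative) feature triple; it is the generic formulation, not the patent's exact training code, and the margin value is an assumption.

```python
import numpy as np

def triplet_loss(f_anchor, f_pos, f_neg, margin=0.2):
    """Standard hinge-form Triplet Loss on feature vectors: push the
    anchor-negative distance to exceed the anchor-positive distance by margin."""
    d_ap = np.sum((f_anchor - f_pos) ** 2)  # squared distance anchor -> positive
    d_an = np.sum((f_anchor - f_neg) ** 2)  # squared distance anchor -> negative
    return float(max(d_ap - d_an + margin, 0.0))
```

The loss is zero once the negative is already farther from the anchor than the positive by at least the margin, so well-separated triples stop contributing gradients.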
Please refer to fig. 6, which shows a method for constructing a fingerprint identification model based on Resnet and Triplet Loss according to a second embodiment of the present application. It differs from the method of the first embodiment in that, before the two sets of test samples are input into the target Resnet model to calculate the two sets of image feature quantities, it further includes the following steps.
S601, acquiring a horizontal test sample and a vertical test sample from the same finger, where the sampling angle difference between them is 90°. Specifically, the finger fingerprint is recorded horizontally and verified vertically: in this embodiment, the horizontally recorded fingerprint image is the horizontal test sample, and the vertically captured fingerprint image is the vertical test sample, so the two sampling directions cross each other. In practical applications, the sampling angles of the two groups of fingerprint images can be adjusted according to actual requirements.
S602, dividing the horizontal and vertical test samples into T1 horizontal square images and T2 vertical square images, where the side length of each square image equals the length of the short side of the test fingerprint image, and T1 and T2 are integers greater than 1. In this embodiment, the sample fingerprint image and the test fingerprint image have the same length and width (in pixels) and the same area. Two 32 × 160 fingerprint images overlap by at most 32 × 32 pixels, so the fingerprint image is divided into 32 × 32 squares both when training the network and when unlocking with a fingerprint. Here T1 = 5 and T2 = 5.
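The segmentation in step S602 can be sketched with a small NumPy helper; the function name is an assumption, and the tiling simply walks the long edge in steps of the short side.

```python
import numpy as np

def split_into_squares(fp_image):
    """Divide a fingerprint image into square tiles whose side equals the
    image's short side: a 32 x 160 image yields T = 5 tiles of 32 x 32."""
    side, long_edge = min(fp_image.shape), max(fp_image.shape)
    t = long_edge // side
    if fp_image.shape[0] == side:  # short side is the height: tile along the width
        return [fp_image[:, i * side:(i + 1) * side] for i in range(t)]
    return [fp_image[i * side:(i + 1) * side, :] for i in range(t)]
```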
In this embodiment, the horizontal and vertical test samples from the same finger are processed into square images that can be input directly into the model for fingerprint matching, which facilitates fingerprint identification and matching of the input test fingerprint samples by the Resnet18 network.
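The loop-traversal calculation of the maximum cosine similarity between the two groups of image feature quantities, which is used later to judge whether the two groups match, can be sketched as follows (names assumed; the feature vectors stand in for the Resnet18 outputs):

```python
import numpy as np

def max_cosine_similarity(feats_a, feats_b):
    """Loop-traversal search for the maximum cosine similarity between two
    groups of image feature vectors (e.g. T1 horizontal vs T2 vertical)."""
    best = -1.0
    for fa in feats_a:
        for fb in feats_b:
            cos = float(np.dot(fa, fb) /
                        (np.linalg.norm(fa) * np.linalg.norm(fb)))
            best = max(best, cos)
    return best
```

Comparing the maximum over all T1 × T2 pairs against a preset value then decides whether the two groups of squares come from the same finger.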
The present application also provides a storage medium. The storage medium stores program instructions, and the program instructions are executed by a processor to implement any one of the above methods for constructing a fingerprint identification model based on Resnet and Triplet Loss. Since the storage medium adopts all the technical solutions of all the above embodiments, it achieves at least all the beneficial effects of those embodiments, which are not repeated here.
The present application also provides a computer device 900. The computer device 900 comprises a memory 901 and a processor 902; the memory 901 is used to store program instructions, and the processor 902 executes them so that the computer device implements any one of the above methods for constructing a fingerprint identification model based on Resnet and Triplet Loss. Since the computer device 900 adopts all the technical solutions of all the above embodiments, it achieves at least all the beneficial effects of those embodiments, which are not repeated here. Please refer to fig. 9, which is a schematic diagram of the internal structure of the computer device 900 according to the first embodiment of the present application.
The memory 901 includes at least one type of readable storage medium, which includes a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 901 may in some embodiments be an internal storage unit of the computer device 900, such as a hard disk of the computer device 900. The memory 901 may also be an external storage device of the computer device 900 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital Card (SD), a Flash memory Card (Flash Card), etc., provided on the computer device 900. Further, the memory 901 may also include both internal storage units and external storage devices of the computer device 900. The memory 901 may be used to store not only application software installed in the computer apparatus 900 and various types of data, such as program instructions of a fingerprint recognition model construction method based on Resnet and triple Loss, but also temporarily store data that has been output or will be output.
The processor 902 may be, in some embodiments, a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data processing chip that executes program instructions or processes data stored in the memory 901. Specifically, the processor 902 executes the program instructions of the fingerprint identification model construction method based on Resnet and Triplet Loss to control the computer device 900 to implement that method.
The bus 903 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 9, but this does not indicate only one bus or one type of bus.
Further, computer device 900 may also include a display component 904. The display component 904 may be an LED (Light Emitting Diode) display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light Emitting Diode) touch panel, or the like. The display component 904 may also be referred to as a display device or display unit, as appropriate, for displaying information processed in the computer device 900 and for displaying a visual user interface, among other things.
Further, the computer device 900 may also include a communication component 905, and the communication component 905 may optionally include a wired communication component and/or a wireless communication component (e.g., a WI-FI communication component, a bluetooth communication component, etc.), typically used for establishing a communication connection between the computer device 900 and other computer devices.
While fig. 9 illustrates only a computer device 900 having components 901-905 and program instructions for implementing a fingerprint identification model construction method based on Resnet and Triplet Loss, those skilled in the art will appreciate that the architecture illustrated in fig. 9 is not intended to be limiting of the computer device 900, and may include fewer or more components than those illustrated, or some components in combination, or a different arrangement of components.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product of the fingerprint identification model construction method based on Resnet and Triplet Loss comprises one or more program instructions. When the program instructions are loaded and executed on a computer device, the procedures or functions according to the embodiments of the present application are generated in whole or in part. The computer device may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The program instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above described systems, apparatuses and units may refer to the corresponding processes in the above described method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described embodiment of the fingerprint identification model construction method based on Resnet and triple Loss is merely illustrative, for example, the division of the unit is only a logical function division, and there may be other division ways in actual implementation, for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method of the embodiments of the present application. And the aforementioned storage medium includes: a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program instructions.
In addition, the present application also provides a fingerprint identification model construction system based on Resnet and Triplet Loss, which comprises the following modules:
An initial sample construction module: constructs the initial samples input into the Triplet Loss module, where each initial sample comprises an anchor sample, a positive sample, and a negative sample, all of which are square images.
A Triplet Loss module: trains the initial samples into training samples, where the training samples comprise a first training sample and a second training sample, the first training sample being generated from the anchor sample and the positive sample, and the second training sample being generated from the anchor sample and the negative sample.
A training module: inputs the training samples into the initial Resnet model to train it and obtain the Resnet model of the target to be tested.
A detection module: judges whether the recognition capability of the Resnet model of the target to be tested reaches a preset standard. When it does, the Resnet model of the target to be tested is taken as the target Resnet model. When it does not, the initial sample construction module continues to construct initial samples and inputs them into the Triplet Loss module.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, to the extent that such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, it is intended that the present application also encompass such modifications and variations.
The above-mentioned embodiments are only examples of the present invention and do not limit the scope of its claims; equivalent variations made according to the claims of the present invention still fall within the scope of the present invention.

Claims (10)

1. A method for constructing a fingerprint identification model based on Resnet and Triplet Loss, characterized in that the method comprises the following steps:
constructing N groups of initial samples, wherein each group of initial samples in the N groups of initial samples comprises an anchor sample, a positive sample and a negative sample, the anchor sample, the positive sample and the negative sample are square images, and N is an integer greater than 1;
training the N groups of initial samples by using Triplet Loss to obtain N groups of training samples, wherein each group of training samples in the N groups of training samples comprises a first training sample and a second training sample, the first training sample is a sample generated by the anchor sample and the positive sample, and the second training sample is a sample generated by the anchor sample and the negative sample;
inputting the N groups of training samples into an initial Resnet model to train the initial Resnet model to obtain a Resnet model of a target to be tested;
inputting two groups of test samples into the Resnet model of the target to be tested for calculation to obtain two groups of image characteristic quantities, wherein the two groups of test samples are sampled on the same finger, each group of test samples in the two groups of test samples comprises X square images, and X is an integer greater than 1;
calculating the maximum cosine similarity between the two groups of image characteristic quantities by using a preset algorithm;
judging whether the maximum cosine similarity is larger than or equal to a preset value;
when the maximum cosine similarity is larger than or equal to the preset value, generating the target Resnet model from the Resnet model of the target to be tested; and
when the maximum cosine similarity is smaller than the preset value, constructing M groups of initial samples, training them into M groups of training samples, and inputting the M groups of training samples into the Resnet model of the target to be tested to continue training it, wherein M is an integer greater than 1.
2. The method for constructing a fingerprint recognition model based on Resnet and Triplet Loss as claimed in claim 1, wherein constructing the N groups of initial samples specifically comprises:
intercepting N1 anchor samples on a sample fingerprint image by using a preset square, wherein the side length of the preset square is equal to the length of the short side of the sample fingerprint image, N1< N, and N1 is an integer greater than 1;
acquiring N2 positive samples corresponding to each of the N1 anchor samples according to a preset first rule, wherein N2 is an integer greater than 1;
acquiring N3 negative samples corresponding to each of the N1 anchor samples according to a preset second rule, wherein N3 is an integer greater than 1; and
generating the N sets of initial samples from the N1 anchor samples and N2 positive samples and N3 negative samples corresponding to each of the N1 anchor samples according to a preset third rule.
3. The method as claimed in claim 2, wherein the obtaining N2 positive samples corresponding to each of the N1 anchor samples according to a preset first rule comprises:
performing left-right translation, up-down translation, random translation or random rotation on each anchor sample to obtain N20 positive samples, wherein N2 < N20, and N20 is an integer greater than 1;
calculating an image coincidence rate between each of the N20 positive samples and a corresponding anchor sample; and
and screening the N2 positive samples with the image coincidence rate larger than a first preset value.
4. The method for constructing a fingerprint recognition model based on Resnet and Triplet Loss as claimed in claim 2, wherein obtaining N3 negative samples corresponding to each of the N1 anchor samples according to a preset second rule specifically comprises:
randomly moving the preset square on other sample fingerprint images which are not used for obtaining the current anchor sample to obtain N30 negative samples, wherein N3 is less than N30, and N30 is an integer greater than 1;
calculating the similarity between each negative sample of the N30 negative samples and the corresponding anchor sample; and
and screening the N3 negative samples with the similarity larger than a second preset value.
5. The method for constructing a fingerprint recognition model based on Resnet and Triplet Loss as claimed in claim 4, wherein calculating the similarity between each negative sample of the N30 negative samples and the corresponding anchor sample comprises:
calculating the gray value of the negative sample of each negative sample and the gray value of the anchor sample corresponding to each negative sample; and
and calculating the similarity of the gray value of the negative sample and the gray value of the anchor sample according to the gray average difference matching method.
6. The method as claimed in claim 1, wherein before inputting two sets of test samples into the target Resnet model for calculation to obtain two sets of image feature quantities, the method further comprises:
acquiring a horizontal test sample and a vertical test sample from the same finger, wherein the sampling angle difference between the horizontal test sample and the vertical test sample is 90 degrees; and
segmenting the horizontal and vertical test samples into T1 horizontal square images and T2 vertical square images, the side lengths of the square images being equal to the length of the short side of the test fingerprint image, T1 and T2 being integers greater than 1.
7. The method for constructing a fingerprint recognition model based on Resnet and Triplet Loss according to claim 6, wherein the two sets of image feature quantities include a set of horizontal image feature quantities and a set of vertical image feature quantities, the set of horizontal image feature quantities includes T1 horizontal image feature quantities, and the set of vertical image feature quantities includes T2 vertical image feature quantities, wherein calculating the maximum cosine similarity between the two sets of image feature quantities by using a preset algorithm specifically includes:
calculating the maximum cosine similarity between the T1 horizontal image feature quantities and the T2 vertical image feature quantities by using a loop traversal method.
8. A storage medium having stored thereon program instructions for execution by a processor to implement a method for constructing a Resnet and Triplet Loss based fingerprinting model as claimed in any of claims 1-7.
9. A computer device, characterized in that the computer device comprises:
a memory for storing program instructions; and
a processor for executing the program instructions to enable the computer device to implement the method for constructing a fingerprint identification model based on Resnet and Triplet Loss according to any one of claims 1 to 7.
10. A fingerprint identification model construction system based on Resnet and Triplet Loss, characterized in that the system comprises the following modules:
an initial sample construction module, configured to construct an initial sample input into a Triplet Loss module, wherein the initial sample comprises an anchor sample, a positive sample and a negative sample, and the anchor sample, the positive sample and the negative sample are square images;
the Triplet Loss module, configured to train the initial sample into training samples, wherein the training samples comprise a first training sample and a second training sample, the first training sample is a sample generated by the anchor sample and the positive sample, and the second training sample is a sample generated by the anchor sample and the negative sample;
a training module, configured to input the training samples into an initial Resnet model to train the initial Resnet model and obtain a Resnet model of a target to be tested; and
a detection module, configured to judge whether the recognition capability of the Resnet model of the target to be tested reaches a preset standard; when the recognition capability reaches the preset standard, the Resnet model of the target to be tested is taken as the target Resnet model; and when the recognition capability does not reach the preset standard, the initial sample construction module continues to construct initial samples and inputs them into the Triplet Loss module.
CN202110083231.3A 2021-01-21 2021-01-21 Fingerprint identification model construction method, storage medium and computer equipment Active CN112418191B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110083231.3A CN112418191B (en) 2021-01-21 2021-01-21 Fingerprint identification model construction method, storage medium and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110083231.3A CN112418191B (en) 2021-01-21 2021-01-21 Fingerprint identification model construction method, storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN112418191A true CN112418191A (en) 2021-02-26
CN112418191B CN112418191B (en) 2021-04-20

Family

ID=74782867

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110083231.3A Active CN112418191B (en) 2021-01-21 2021-01-21 Fingerprint identification model construction method, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN112418191B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113361497A (en) * 2021-08-09 2021-09-07 北京惠朗时代科技有限公司 Intelligent tail box application method and device based on training sample fingerprint identification
CN117312976A (en) * 2023-10-12 2023-12-29 国家电网有限公司华东分部 Internet of things equipment fingerprint identification system and method based on small sample learning

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392082A (en) * 2017-04-06 2017-11-24 杭州景联文科技有限公司 A kind of small area fingerprint comparison method based on deep learning
CN109711361A (en) * 2018-12-29 2019-05-03 重庆集诚汽车电子有限责任公司 Intelligent cockpit embedded fingerprint feature extracting method based on deep learning
CN109871490A (en) * 2019-03-08 2019-06-11 腾讯科技(深圳)有限公司 Media resource matching process, device, storage medium and computer equipment
CN110610709A (en) * 2019-09-26 2019-12-24 浙江百应科技有限公司 Identity distinguishing method based on voiceprint recognition
WO2020022956A1 (en) * 2018-07-27 2020-01-30 Aioz Pte Ltd Method and apparatus for video content validation
CN111178129A (en) * 2019-11-25 2020-05-19 浙江工商大学 Multi-modal personnel identification method based on face and posture
US20200184278A1 (en) * 2014-03-18 2020-06-11 Z Advanced Computing, Inc. System and Method for Extremely Efficient Image and Pattern Recognition and Artificial Intelligence Platform
CN111428701A (en) * 2020-06-10 2020-07-17 深圳市诺赛特系统有限公司 Small-area fingerprint image feature extraction method, system, terminal and storage medium
CN112052771A (en) * 2020-08-31 2020-12-08 腾讯科技(深圳)有限公司 Object re-identification method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
RAMI M. JOMAA ET AL: "End-to-End Deep Learning Fusion of Fingerprint and Electrocardiogram Signals for Presentation Attack Detection", 《SENSORS》 *
ZHONG DEXING ET AL: "A Survey of Research Progress in Palmprint Recognition", 《PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113361497A (en) * 2021-08-09 2021-09-07 北京惠朗时代科技有限公司 Intelligent tail box application method and device based on training sample fingerprint identification
CN113361497B (en) * 2021-08-09 2021-12-07 北京惠朗时代科技有限公司 Intelligent tail box application method and device based on training sample fingerprint identification
CN117312976A (en) * 2023-10-12 2023-12-29 国家电网有限公司华东分部 Internet of things equipment fingerprint identification system and method based on small sample learning

Also Published As

Publication number Publication date
CN112418191B (en) 2021-04-20

Similar Documents

Publication Publication Date Title
US10762387B2 (en) Method and apparatus for processing image
CN110728209B (en) Gesture recognition method and device, electronic equipment and storage medium
WO2021012526A1 (en) Face recognition model training method, face recognition method and apparatus, device, and storage medium
WO2019109526A1 (en) Method and device for age recognition of face image, storage medium
US10198615B2 (en) Fingerprint enrollment method and apparatus
US8792722B2 (en) Hand gesture detection
US20140267009A1 (en) Authenticating a user using hand gesture
US20210334604A1 (en) Facial recognition method and apparatus
CN112418191B (en) Fingerprint identification model construction method, storage medium and computer equipment
CN103295022B (en) Image similarity calculation system and method
WO2018090937A1 (en) Image processing method, terminal and storage medium
TW202111498A (en) Fingerprint recognition method, chip and electronic device
CN111680675B (en) Face living body detection method, system, device, computer equipment and storage medium
CN108062544A (en) For the method and apparatus of face In vivo detection
US20200005078A1 (en) Content aware forensic detection of image manipulations
CN113051998A (en) Robust anti-spoofing technique using polarization cues in near infrared and visible wavelength bands in biometric identification techniques
CN112464803A (en) Image comparison method and device
CN111626163A (en) Human face living body detection method and device and computer equipment
US20140056487A1 (en) Image processing device and image processing method
CN115862075A (en) Fingerprint identification model training method, fingerprint identification device and related equipment
CN112949576B (en) Attitude estimation method, apparatus, device and storage medium
CN112507987A (en) Fingerprint identification method, storage medium and computer equipment
CN113642639A (en) Living body detection method, living body detection device, living body detection apparatus, and storage medium
CN108596127B (en) Fingerprint identification method, identity verification method and device and identity verification machine
CN112257561B (en) Human face living body detection method and device, machine readable medium and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant