CN115439894B - Method, electronic device, program product, and medium for training fingerprint matching model - Google Patents

Method, electronic device, program product, and medium for training fingerprint matching model Download PDF

Info

Publication number
CN115439894B
CN115439894B CN202211391045.7A
Authority
CN
China
Prior art keywords
fingerprint
fingerprint image
loss
image set
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211391045.7A
Other languages
Chinese (zh)
Other versions
CN115439894A (en)
Inventor
李丹洪
谢字希
邸皓轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202211391045.7A
Publication of CN115439894A
Application granted
Publication of CN115439894B
Active legal status
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1347Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1365Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The embodiments of the present application provide a method for training a fingerprint matching model, an electronic device, a program product, and a computer-readable storage medium, which can improve the quality of the fingerprints output by the fingerprint matching model. The method comprises the following steps: acquiring four simulated fingerprint image sets according to the first fingerprint image set, the third fingerprint image set, and the generation network, and calculating a first geometric structure loss and a second geometric structure loss according to the four simulated fingerprint image sets; acquiring a target discrimination loss, a first generation loss, and a second generation loss according to the first simulated fingerprint image set, the second fingerprint image set, the fourth fingerprint image set, and the discrimination network; training the discrimination network according to the target discrimination loss; and training the generation network according to a weighted sum of the first generation loss, the second generation loss, the first geometric structure loss, and the second geometric structure loss, so as to establish the fingerprint matching model.

Description

Method, electronic device, program product, and medium for training fingerprint matching model
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method, an electronic device, a program product, and a computer-readable storage medium for training a fingerprint matching model.
Background
A generative adversarial network (GAN) includes a generation network and a discrimination network: the generation network generates a simulated sample from a random sample, and the discrimination network determines whether a given sample is real or simulated.
One current method for training a fingerprint matching model is as follows: acquire a random fingerprint image and a real fingerprint image; input the random fingerprint image into the generation network, which generates a simulated fingerprint image; input the simulated fingerprint image and the real fingerprint image into the discrimination network; train the discrimination network according to the discrimination results for the simulated fingerprint image and the real fingerprint image; train the generation network according to the discrimination result for the simulated fingerprint image; and establish the fingerprint matching model from the trained generation network and the trained discrimination network.
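For concreteness, the conventional training step described above can be sketched as follows. This is a minimal illustration only, not the implementation of any particular product: the generator G, the discriminator D (assumed to end in a sigmoid), the optimizers, and the tensor shapes are all placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def baseline_gan_step(G, D, opt_G, opt_D, random_fp, real_fp):
    """One training step of the conventional scheme sketched above.

    G and D are assumed generator/discriminator modules (D ending in a
    sigmoid); opt_G and opt_D are their optimizers; random_fp and real_fp
    are batches of random and real fingerprint images.
    """
    # Generate a simulated fingerprint image from the random fingerprint image.
    fake_fp = G(random_fp)

    # Train the discrimination network: real images should score 1, simulated 0.
    d_real = D(real_fp)
    d_fake = D(fake_fp.detach())
    d_loss = (F.binary_cross_entropy(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Train the generation network according to the discrimination result
    # of the simulated fingerprint image: it should fool the discriminator.
    g_loss = F.binary_cross_entropy(D(fake_fp), torch.ones_like(d_fake))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```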
In practical applications, however, a fingerprint matching model trained in this way readily generates fingerprints of poor quality.
Disclosure of Invention
The present application provides a method, an electronic device, a program product, and a computer-readable storage medium for training a fingerprint matching model, aiming to solve the problem of the fingerprint matching model generating poor-quality fingerprints.
In order to achieve the above object, the present application provides the following technical solutions:
A first aspect provides a method of training a fingerprint matching model, the method comprising: acquiring a first fingerprint image set and a second fingerprint image set; performing a first geometric transformation on the first fingerprint image set to obtain a third fingerprint image set; performing the first geometric transformation on the second fingerprint image set to obtain a fourth fingerprint image set; inputting the first fingerprint image set into a generation network, and outputting a first simulated fingerprint image set through the generation network; inputting the third fingerprint image set into the generation network, and outputting a second simulated fingerprint image set through the generation network; performing the first geometric transformation on the first simulated fingerprint image set to obtain a third simulated fingerprint image set; performing a second geometric transformation on the second simulated fingerprint image set to obtain a fourth simulated fingerprint image set; calculating a first geometric structure loss between the first simulated fingerprint image set and the fourth simulated fingerprint image set and a second geometric structure loss between the second simulated fingerprint image set and the third simulated fingerprint image set; inputting the first simulated fingerprint image set and the second fingerprint image set into a discrimination network, and outputting a first discrimination loss and a first generation loss through the discrimination network; inputting the second simulated fingerprint image set and the fourth fingerprint image set into the discrimination network, and outputting a second discrimination loss and a second generation loss through the discrimination network; taking a weighted sum of the first discrimination loss and the second discrimination loss as a target discrimination loss, and training the discrimination network according to the target discrimination loss; taking a weighted sum of the first generation loss, the second generation loss, the first geometric structure loss, and the second geometric structure loss as a target generation loss, and training the generation network according to the target generation loss; and establishing a fingerprint matching model according to the trained generation network and the trained discrimination network.
The quality score of each fingerprint image in the first fingerprint image set is lower than a preset quality score, the quality score of each fingerprint image in the second fingerprint image set is higher than the preset quality score, and the second geometric transformation is the inverse transformation of the first geometric transformation.
In this embodiment, the fourth simulated fingerprint image set and the first simulated fingerprint image set both correspond to the first fingerprint image set and may be regarded as a group of fingerprint images under a geometric constraint, so the first geometric structure loss calculated from the first simulated fingerprint image set and the fourth simulated fingerprint image set represents the difference in their geometric structures. The second simulated fingerprint image set and the third simulated fingerprint image set both correspond to the third fingerprint image set and may likewise be regarded as a group of fingerprint images under a geometric constraint, so the second geometric structure loss calculated from them represents the difference in their geometric structures. A generation network trained on the weighted sum of the first generation loss, the second generation loss, the first geometric structure loss, and the second geometric structure loss is constrained by the geometrically transformed fingerprints and therefore better preserves the geometric structure of the fingerprint, so that the fingerprints output by the fingerprint matching model have a clearer geometric structure.
In a possible implementation manner, the first geometric structure loss is the grayscale root mean square error between the first simulated fingerprint image set and the fourth simulated fingerprint image set, and the second geometric structure loss is the grayscale root mean square error between the second simulated fingerprint image set and the third simulated fingerprint image set.
In another possible implementation manner, the first geometric structure loss is the grayscale mean absolute error between the first simulated fingerprint image set and the fourth simulated fingerprint image set, and the second geometric structure loss is the grayscale mean absolute error between the second simulated fingerprint image set and the third simulated fingerprint image set.
In another possible implementation, the first geometric transformation is a vertical flip, a rotation of 90 degrees, or a rotation of 180 degrees.
In another possible implementation, acquiring the first fingerprint image set and the second fingerprint image set includes:
calculating the quality score of each fingerprint image in the fingerprint image library according to the fingerprint quality parameters;
and selecting a first fingerprint image set and a second fingerprint image set from a fingerprint image library according to the fingerprint quality scores.
In combination with the preceding possible implementation manner, in another possible implementation manner, the fingerprint quality parameters include the grayscale mean of the fingerprint image and the grayscale variance of the fingerprint image;
calculating the quality score of each fingerprint image in the fingerprint image library according to the fingerprint quality parameters includes: acquiring the grayscale mean and the grayscale variance of each fingerprint image; and, for any fingerprint image, determining the quality score of the fingerprint image as a weighted sum of the grayscale mean and the grayscale variance of the fingerprint image.
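As an illustrative sketch of this scoring scheme in Python (the weighting coefficients w_mean and w_var are assumed placeholder values, not values specified by the application):

```python
import numpy as np

def fingerprint_quality_score(img, w_mean=0.5, w_var=0.5):
    """Quality score as a weighted sum of grayscale mean and grayscale variance.

    img is a 2-D grayscale fingerprint image; w_mean and w_var are
    assumed placeholder weights.
    """
    img = img.astype(np.float64)
    return w_mean * img.mean() + w_var * img.var()
```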
In another possible implementation manner, the method further includes: acquiring a fingerprint to be processed; inputting the fingerprint to be processed into a generation network of the fingerprint matching model, and generating the matching fingerprint of the fingerprint to be processed through the generation network of the fingerprint matching model.
A second aspect provides an electronic device comprising an acquisition unit and a processing unit;
the acquisition unit is configured to acquire a first fingerprint image set and a second fingerprint image set, wherein the quality score of each fingerprint image in the first fingerprint image set is lower than a preset quality score, and the quality score of each fingerprint image in the second fingerprint image set is higher than the preset quality score;
the processing unit is configured to: perform a first geometric transformation on the first fingerprint image set to obtain a third fingerprint image set; perform the first geometric transformation on the second fingerprint image set to obtain a fourth fingerprint image set; input the first fingerprint image set into a generation network and output a first simulated fingerprint image set through the generation network; input the third fingerprint image set into the generation network and output a second simulated fingerprint image set through the generation network; perform the first geometric transformation on the first simulated fingerprint image set to obtain a third simulated fingerprint image set; perform a second geometric transformation on the second simulated fingerprint image set to obtain a fourth simulated fingerprint image set, the second geometric transformation being an inverse transformation of the first geometric transformation; calculate a first geometric structure loss between the first simulated fingerprint image set and the fourth simulated fingerprint image set and a second geometric structure loss between the second simulated fingerprint image set and the third simulated fingerprint image set; input the first simulated fingerprint image set and the second fingerprint image set into a discrimination network and output a first discrimination loss and a first generation loss through the discrimination network; input the second simulated fingerprint image set and the fourth fingerprint image set into the discrimination network and output a second discrimination loss and a second generation loss through the discrimination network; take a weighted sum of the first discrimination loss and the second discrimination loss as a target discrimination loss and train the discrimination network according to the target discrimination loss; take a weighted sum of the first generation loss, the second generation loss, the first geometric structure loss, and the second geometric structure loss as a target generation loss and train the generation network according to the target generation loss; and establish a fingerprint matching model according to the trained generation network and the trained discrimination network.
In one possible implementation, the first geometric structure loss is the grayscale root mean square error between the first simulated fingerprint image set and the fourth simulated fingerprint image set, and the second geometric structure loss is the grayscale root mean square error between the second simulated fingerprint image set and the third simulated fingerprint image set.
In another possible implementation, the first geometric structure loss is the grayscale mean absolute error between the first simulated fingerprint image set and the fourth simulated fingerprint image set, and the second geometric structure loss is the grayscale mean absolute error between the second simulated fingerprint image set and the third simulated fingerprint image set.
In another possible implementation, the first geometric transformation is a vertical flip, a rotation of 90 degrees or a rotation of 180 degrees.
In another possible implementation manner, the acquisition unit is specifically configured to calculate the quality score of each fingerprint image in a fingerprint image library according to fingerprint quality parameters, and to select the first fingerprint image set and the second fingerprint image set from the fingerprint image library according to the quality scores.
In another possible implementation manner, the acquisition unit is specifically configured to acquire the grayscale mean and the grayscale variance of each fingerprint image in a case that the fingerprint quality parameters include the grayscale mean of the fingerprint image and the grayscale variance of the fingerprint image; and, for any fingerprint image, to determine the quality score of the fingerprint image as a weighted sum of the grayscale mean and the grayscale variance of the fingerprint image.
In another possible implementation manner, the acquisition unit is further configured to acquire a fingerprint to be processed, and the processing unit is further configured to input the fingerprint to be processed into the generation network of the fingerprint matching model and generate the matching fingerprint of the fingerprint to be processed through the generation network of the fingerprint matching model.
A third aspect provides an electronic device comprising a processor and a memory, the memory being configured to store a program; the processor is configured to implement the method of the first aspect by executing the program.
A fourth aspect provides a computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the method of the first aspect.
A fifth aspect provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the first aspect.
Drawings
FIGS. 1a and 1b are schematic diagrams of generating paired fingerprints according to an embodiment of the present application;
FIG. 2 is a flowchart of training a fingerprint matching model provided in the present application;
FIG. 3 is a timing diagram of training a fingerprint matching model provided herein;
FIG. 4 is a schematic diagram of generating a paired fingerprint as provided herein;
FIG. 5 is a block diagram of an electronic device provided herein;
FIG. 6 is a hardware structure diagram of an electronic device provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. The terminology used in the following examples is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, such as "one or more", unless the context clearly indicates otherwise. It should also be understood that in the embodiments of the present application, "one or more" means one, two, or more than two; "and/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, "A and/or B" may represent: A exists alone, A and B exist simultaneously, or B exists alone, where A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather mean "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In the embodiments of the present application, "a plurality of" means two or more. It should be noted that, in the description of the embodiments of the present application, terms such as "first" and "second" are used only to distinguish one description from another, and are not to be construed as indicating or implying relative importance or order.
The method for training a fingerprint matching model in this application can be applied to an electronic device. The electronic device may be a terminal such as a mobile phone, a tablet computer, a desktop computer, a vehicle-mounted computer, a wearable device, a virtual reality device, an augmented reality device, or a mixed reality device, or may be a server, such as a central server, an edge server, or a local server of a local data center.
In the following, an application scenario of generating paired fingerprints is described. Referring to FIG. 1a and FIG. 1b, in an example, a mobile phone is deployed with a fingerprint matching model. After clicking the "enter fingerprint" virtual button on the mobile phone screen, user A enters a fingerprint 101 and then clicks the "generate matching fingerprint" virtual button; the fingerprint matching model generates a fingerprint 102 from the fingerprint 101, and the geometric structure of the fingerprint 102 is clearer than that of the fingerprint 101, so that the paired fingerprint 101 and fingerprint 102 are obtained, as shown in FIG. 1b.
Referring to FIG. 2, the process of training a fingerprint matching model according to the present application is described below. In an embodiment, a method of training a fingerprint matching model according to the present application includes:
step 201, acquiring a first fingerprint image set and a second fingerprint image set.
The quality score of each fingerprint image in the first fingerprint image set is lower than a preset quality score, and the quality score of each fingerprint image in the second fingerprint image set is higher than the preset quality score.
Step 202, performing a first geometric transformation on the first fingerprint image set to obtain a third fingerprint image set.
Step 203, performing the first geometric transformation on the second fingerprint image set to obtain a fourth fingerprint image set.
That is, the first geometric transformation is applied to each fingerprint image in the first fingerprint image set and the second fingerprint image set to obtain the third fingerprint image set and the fourth fingerprint image set, respectively.
Step 204, inputting the first fingerprint image set into a generation network, and outputting a first simulated fingerprint image set through the generation network.
Step 205, inputting the third fingerprint image set into the generation network, and outputting a second simulated fingerprint image set through the generation network.
Step 206, performing the first geometric transformation on the first simulated fingerprint image set to obtain a third simulated fingerprint image set.
Step 207, performing a second geometric transformation on the second simulated fingerprint image set to obtain a fourth simulated fingerprint image set.
In this application, the first geometric transformation is a vertical flip, a 90-degree rotation, or a 180-degree rotation, and the second geometric transformation is the inverse of the first geometric transformation. Specifically, when the first geometric transformation is a 90-degree clockwise rotation, the second geometric transformation is a 90-degree counterclockwise rotation; when the first geometric transformation is a 90-degree counterclockwise rotation, the second geometric transformation is a 90-degree clockwise rotation; and when the first geometric transformation is a top-to-bottom vertical flip, the second geometric transformation is a bottom-to-top vertical flip.
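Using NumPy, the transformation/inverse pairs described above might be expressed as follows; this is a sketch for illustration, and the application does not prescribe any particular implementation:

```python
import numpy as np

# Each candidate first geometric transformation T, paired with its inverse T_inv.
TRANSFORM_PAIRS = {
    "rot90_cw": (lambda im: np.rot90(im, k=-1),   # 90 degrees clockwise
                 lambda im: np.rot90(im, k=1)),   # undone by 90 degrees counterclockwise
    "rot180":   (lambda im: np.rot90(im, k=2),
                 lambda im: np.rot90(im, k=2)),   # a 180-degree rotation is self-inverse
    "flip_vert": (np.flipud, np.flipud),          # a vertical flip is self-inverse
}

T, T_inv = TRANSFORM_PAIRS["rot90_cw"]
img = np.arange(16).reshape(4, 4)
assert np.array_equal(T_inv(T(img)), img)  # T_inv undoes T
```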
Each fingerprint image set may include one or more fingerprint images, and a fingerprint image set in this application may be understood as a sequence of fingerprints. It should be understood that the i-th fingerprint image of the first simulated fingerprint image set corresponds to the i-th fingerprint image of the fourth simulated fingerprint image set: they have the same image orientation and similar fingerprint geometry. Similarly, the i-th fingerprint image of the second simulated fingerprint image set corresponds to the i-th fingerprint image of the third simulated fingerprint image set, with the same image orientation and similar fingerprint geometry.
Step 208, calculating a first geometric structure loss between the first simulated fingerprint image set and the fourth simulated fingerprint image set and a second geometric structure loss between the second simulated fingerprint image set and the third simulated fingerprint image set.
Optionally, the first geometric structure loss is the grayscale root mean square error between the first simulated fingerprint image set and the fourth simulated fingerprint image set, and the second geometric structure loss is the grayscale root mean square error between the second simulated fingerprint image set and the third simulated fingerprint image set. The root mean square error is based on the L2 norm.
Optionally, the first geometric structure loss is the grayscale mean absolute error between the first simulated fingerprint image set and the fourth simulated fingerprint image set, and the second geometric structure loss is the grayscale mean absolute error between the second simulated fingerprint image set and the third simulated fingerprint image set. The mean absolute error is based on the L1 norm.
For example, the first geometric structure loss, the first simulated fingerprint image set, and the fourth simulated fingerprint image set satisfy the following formula:

\[ L_{geo,1} = \mathbb{E}_{x \sim X_1}\big[\,\lVert \hat{X}_1 - \hat{X}_4 \rVert\,\big] \]

where \(L_{geo,1}\) is the first geometric structure loss, \(\hat{X}_1\) is the first simulated fingerprint image set, \(\hat{X}_4\) is the fourth simulated fingerprint image set, \(\mathbb{E}\) represents the mathematical expectation, and \(X_1\) is the first fingerprint image set.
Similarly, the second geometric structure loss, the second simulated fingerprint image set, and the third simulated fingerprint image set satisfy:

\[ L_{geo,2} = \mathbb{E}_{x \sim X_1}\big[\,\lVert \hat{X}_2 - \hat{X}_3 \rVert\,\big] \]

where \(L_{geo,2}\) is the second geometric structure loss, \(\hat{X}_2\) is the second simulated fingerprint image set, \(\hat{X}_3\) is the third simulated fingerprint image set, \(\mathbb{E}\) represents the mathematical expectation, and \(X_1\) is the first fingerprint image set.
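A sketch of the two optional variants of the geometric structure loss, computed over corresponding grayscale images of two simulated fingerprint image sets (NumPy, assuming equally sized arrays; the function and variable names are illustrative):

```python
import numpy as np

def geometry_loss_rmse(set_a, set_b):
    """Grayscale root mean square error (the L2-based variant) between
    corresponding images of two simulated fingerprint image sets."""
    sq = [(a.astype(np.float64) - b.astype(np.float64)) ** 2
          for a, b in zip(set_a, set_b)]
    return float(np.sqrt(np.mean([d.mean() for d in sq])))

def geometry_loss_mae(set_a, set_b):
    """Grayscale mean absolute error (the L1-based variant)."""
    ab = [np.abs(a.astype(np.float64) - b.astype(np.float64))
          for a, b in zip(set_a, set_b)]
    return float(np.mean([d.mean() for d in ab]))

# First geometric structure loss: first vs. fourth simulated sets.
# loss_geo_1 = geometry_loss_rmse(sim_set_1, sim_set_4)
# Second geometric structure loss: second vs. third simulated sets.
# loss_geo_2 = geometry_loss_rmse(sim_set_2, sim_set_3)
```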
Step 209, inputting the first simulated fingerprint image set and the second fingerprint image set into a discrimination network, and outputting a first discrimination loss and a first generation loss through the discrimination network.
Specifically, the first simulated fingerprint image set is input into the discrimination network, and a first discrimination result is output through the discrimination network; the second fingerprint image set is input into the discrimination network, and a second discrimination result is output through the discrimination network. The first discrimination loss is calculated according to the first discrimination result, the second discrimination result, and the discrimination loss function; and the first generation loss is calculated according to the first discrimination result and the generation loss function.
Optionally, the function for calculating the first discrimination loss is:

\[ L_{D,1} = -\,\mathbb{E}_{y \sim X_2}\big[\log D(y)\big] \;-\; \mathbb{E}_{x \sim X_1}\big[\log\big(1 - D(G(x))\big)\big] \]

where \(L_{D,1}\) is the first discrimination loss, \(D(y)\) is the second discrimination result, \(X_2\) is the second fingerprint image set, \(y\) is a fingerprint image belonging to \(X_2\), \(D(G(x))\) is the first discrimination result, \(X_1\) is the first fingerprint image set, and \(x\) is a fingerprint image belonging to \(X_1\).
Optionally, the generation loss function is:

\[ L_{G,i} = -\,\mathbb{E}_{x \sim X_i}\big[\log D(G(x))\big] \]

where \(L_{G,i}\) is the generation loss calculated on the basis of the \(i\)-th fingerprint image set, \(D(G(x))\) is the discrimination result of the simulated fingerprint images generated from the \(i\)-th fingerprint image set, \(X_i\) is the \(i\)-th fingerprint image set, and \(x\) is a fingerprint image of the \(i\)-th fingerprint image set.
Step 210, inputting the second simulated fingerprint image set and the fourth fingerprint image set into the discrimination network, and outputting a second discrimination loss and a second generation loss through the discrimination network.
Specifically, the second simulated fingerprint image set is input into the discrimination network, and a third discrimination result is output through the discrimination network; the fourth fingerprint image set is input into the discrimination network, and a fourth discrimination result is output through the discrimination network. The second discrimination loss is calculated according to the third discrimination result, the fourth discrimination result, and the discrimination loss function; and the second generation loss is calculated according to the third discrimination result and the generation loss function.
Optionally, the function for calculating the second discrimination loss is:

\[ L_{D,2} = -\,\mathbb{E}_{y \sim X_4}\big[\log D(y)\big] \;-\; \mathbb{E}_{x \sim X_3}\big[\log\big(1 - D(G(x))\big)\big] \]

where \(L_{D,2}\) is the second discrimination loss, \(D(y)\) is the fourth discrimination result, \(X_4\) is the fourth fingerprint image set, \(y\) is a fingerprint image belonging to \(X_4\), \(D(G(x))\) is the third discrimination result, \(X_3\) is the third fingerprint image set, and \(x\) is a fingerprint image belonging to \(X_3\).
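Both discrimination losses and both generation losses above have the standard adversarial form; a sketch in PyTorch, assuming the discrimination network ends in a sigmoid so that its outputs are probabilities:

```python
import torch

def discrimination_loss(d_real, d_fake):
    """-E[log D(y)] - E[log(1 - D(G(x)))]; eps guards against log(0)."""
    eps = 1e-8
    return -(torch.log(d_real + eps).mean()
             + torch.log(1.0 - d_fake + eps).mean())

def generation_loss(d_fake):
    """-E[log D(G(x))]: the generator is rewarded when its images look real."""
    eps = 1e-8
    return -torch.log(d_fake + eps).mean()

# First discrimination loss: D on the second set (real) vs. the first simulated set.
# Second discrimination loss: D on the fourth set (real) vs. the second simulated set.
```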
Step 211, training the discrimination network according to a target discrimination loss, where the target discrimination loss is a weighted sum of the first discrimination loss and the second discrimination loss.
Optionally, the first discrimination loss \(L_{D,1}\), the second discrimination loss \(L_{D,2}\), and the target discrimination loss \(L_D\) satisfy the following formula:

\[ L_D = \lambda_1 L_{D,1} + \lambda_2 L_{D,2} \]

where \(\lambda_1\) and \(\lambda_2\) are weighting coefficients; each of \(\lambda_1\) and \(\lambda_2\) may take any value in \([0, 1]\) and can be set according to the actual situation.
Step 212, training the generation network according to a target generation loss, where the target generation loss is a weighted sum of the first generation loss, the second generation loss, the first geometric structure loss, and the second geometric structure loss.
Optionally, the first generation loss \(L_{G,1}\), the second generation loss \(L_{G,2}\), the first geometric structure loss \(L_{geo,1}\), the second geometric structure loss \(L_{geo,2}\), and the target generation loss \(L_G\) satisfy the following formula:

\[ L_G = \mu_1 L_{G,1} + \mu_2 L_{G,2} + \mu_3 L_{geo,1} + \mu_4 L_{geo,2} \]

where \(\mu_1\), \(\mu_2\), \(\mu_3\), and \(\mu_4\) are weighting coefficients; each may take any value in \([0, 1]\) and can be set according to the actual situation.
Steps 201 to 212 constitute one round of training for the discrimination network and the generation network; executing steps 201 to 212 iteratively trains both networks multiple times. After N rounds of training, two new fingerprint image sets can be acquired, and training of the current discrimination network and generation network continues with these sets according to the procedure of steps 201 to 212 until a training end condition is reached. Optionally, the training end condition is a preset number of training rounds. Optionally, the training end condition includes the target discrimination loss being less than or equal to a preset discrimination loss threshold and the target generation loss being less than or equal to a preset generation loss threshold.
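Putting steps 201 to 212 together, one training iteration might be sketched as below, reusing the discrimination_loss and generation_loss helpers sketched earlier; the transformation callables T and T_inv, the batch tensors x1 and x2, and the weighting coefficients lam and mu are all illustrative assumptions:

```python
def train_step(G, D, opt_G, opt_D, x1, x2, T, T_inv,
               lam=(0.5, 0.5), mu=(0.25, 0.25, 0.25, 0.25)):
    """One iteration of steps 201-212 (illustrative; all names assumed)."""
    x3, x4 = T(x1), T(x2)               # steps 202-203: transformed sets
    sim1, sim2 = G(x1), G(x3)           # steps 204-205: simulated sets
    sim3, sim4 = T(sim1), T_inv(sim2)   # steps 206-207

    # Step 208: geometric structure losses (RMSE variant).
    loss_geo1 = ((sim1 - sim4) ** 2).mean().sqrt()
    loss_geo2 = ((sim2 - sim3) ** 2).mean().sqrt()

    # Steps 209-210: discrimination losses (simulated sets detached for D).
    d_loss1 = discrimination_loss(D(x2), D(sim1.detach()))
    d_loss2 = discrimination_loss(D(x4), D(sim2.detach()))

    # Step 211: train the discrimination network on the weighted sum.
    d_loss = lam[0] * d_loss1 + lam[1] * d_loss2
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Step 212: train the generation network on the weighted sum.
    g_loss = (mu[0] * generation_loss(D(sim1))
              + mu[1] * generation_loss(D(sim2))
              + mu[2] * loss_geo1 + mu[3] * loss_geo2)
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```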
Step 213, establishing a fingerprint matching model according to the trained generation network and the trained discrimination network. The fingerprint matching model comprises the trained generation network and the trained discrimination network.
In this embodiment, the fourth simulated fingerprint image set and the first simulated fingerprint image set both correspond to the first fingerprint image set, and the first geometric structure loss calculated from them represents the difference in their geometric structures. The second simulated fingerprint image set and the third simulated fingerprint image set both correspond to the third fingerprint image set, and the second geometric structure loss calculated from them represents the difference in their geometric structures. The target generation loss calculated from the first generation loss, the second generation loss, the first geometric structure loss, and the second geometric structure loss reflects the generation loss of the generation network more accurately; training on this target generation loss allows the generation network to better reproduce the geometric structure of fingerprints, which improves the quality of the generated fingerprints, speeds up convergence of the generation network, and reduces the possibility of model collapse.
For ease of understanding, the method for training the fingerprint matching model is described below with reference to the timing diagram of FIG. 3. In one example, the first geometric transformation is a 90-degree clockwise rotation. The first fingerprint image set is processed by the generation network into the first simulated fingerprint image set, and all images of the first simulated fingerprint image set are rotated 90 degrees clockwise to obtain the third simulated fingerprint image set. All images of the first fingerprint image set are rotated 90 degrees clockwise to obtain the third fingerprint image set. The third fingerprint image set is processed by the generation network into the second simulated fingerprint image set, and all images of the second simulated fingerprint image set are rotated 90 degrees counterclockwise to obtain the fourth simulated fingerprint image set.
The grayscale difference between the 1st fingerprint image of the first simulated fingerprint image set and the 1st fingerprint image of the fourth simulated fingerprint image set is calculated, and so on, until all grayscale differences between the first simulated fingerprint image set and the fourth simulated fingerprint image set have been calculated; the grayscale root mean square error of the two sets (that is, the first geometric structure loss) is then calculated from all the grayscale differences. The grayscale root mean square error of the second simulated fingerprint image set and the third simulated fingerprint image set (that is, the second geometric structure loss) is calculated in the same way.
The first simulated fingerprint image set is processed by the discrimination network into a first discrimination result, the second fingerprint image set is processed by the discrimination network into a second discrimination result, and the first discrimination loss is calculated according to the first discrimination result and the second discrimination result. The second simulated fingerprint image set is processed by the discrimination network into a third discrimination result, the fourth fingerprint image set is processed by the discrimination network into a fourth discrimination result, and the second discrimination loss is calculated according to the third discrimination result and the fourth discrimination result. The weighted sum of the first discrimination loss and the second discrimination loss is taken as the target discrimination loss, and the weights of the discrimination network are updated according to the target discrimination loss, completing one round of training of the discrimination network.
The first generation loss is calculated according to the first discrimination result, and the second generation loss is calculated according to the third discrimination result. The weighted sum of the first generation loss, the second generation loss, the first geometric structure loss, and the second geometric structure loss is taken as the target generation loss, and the weights of the generation network are updated according to the target generation loss, completing one round of training of the generation network.
The above process can be executed iteratively to train the discrimination network and the generation network multiple times. Other fingerprint image sets can also be acquired and used to train the two networks multiple times according to the above procedure, until the target discrimination loss and the target generation loss are each close or equal to 0. The generation network and the discrimination network obtained after training form the fingerprint matching model.
In this application, the first fingerprint image set and the second fingerprint image set may be obtained in advance, or may be obtained automatically by the electronic device from a fingerprint image library. To describe the method of automatically acquiring the first fingerprint image set and the second fingerprint image set: in an alternative embodiment, step 201 includes calculating the quality score of each fingerprint image in the fingerprint image library according to fingerprint quality parameters, and selecting the first fingerprint image set and the second fingerprint image set from the fingerprint image library according to the quality scores.
The fingerprint quality parameters may include one or more of the grayscale mean of the fingerprint image, the grayscale variance of the fingerprint image, and the standard deviation of the fingerprint image. Optionally, calculating the quality score of each fingerprint image in the fingerprint image library according to the fingerprint quality parameters includes: when the fingerprint quality parameters include the grayscale mean and the grayscale variance of the fingerprint image, acquiring the grayscale mean and the grayscale variance of each fingerprint image; and, for any fingerprint image, determining the quality score of the fingerprint image as a weighted sum of the grayscale mean and the grayscale variance of the fingerprint image.
This embodiment calculates a quality score for each fingerprint image from the fingerprint quality parameters and then selects a low-quality fingerprint image set (the first fingerprint image set) and a high-quality fingerprint image set (the second fingerprint image set) according to the quality scores and a quality score threshold. It should be understood that the quality score may be calculated from one or more fingerprint quality parameters, and the method of calculating the quality score from the fingerprint quality parameters is not limited to the above examples.
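A sketch of this selection step, reusing the fingerprint_quality_score helper sketched earlier (the threshold value is an assumed placeholder; images whose score equals the threshold fall into neither set, matching the strict "lower than"/"higher than" wording above):

```python
def split_by_quality(library, threshold=120.0):
    """Split a fingerprint image library into the first (low-quality) set
    and the second (high-quality) set by a preset quality score."""
    first_set = [im for im in library
                 if fingerprint_quality_score(im) < threshold]
    second_set = [im for im in library
                  if fingerprint_quality_score(im) > threshold]
    return first_set, second_set
```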
In another optional embodiment, the method of training a fingerprint matching model in this application further includes: after a fingerprint to be processed is obtained, inputting the fingerprint to be processed into the generation network of the fingerprint matching model, and generating the matching fingerprint of the fingerprint to be processed through the generation network of the fingerprint matching model.
Referring to FIG. 4, in an example, the fingerprint matching model 40 includes a generation network 401 and a discrimination network 402; the fingerprint to be processed is processed by the generation network 401 to obtain the matching fingerprint. Paired fingerprints can therefore be generated end to end and acquired at low cost.
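Inference then requires only the trained generation network; a minimal sketch, where pairing_model_G and the tensor layout are assumptions:

```python
import torch

@torch.no_grad()
def generate_paired_fingerprint(pairing_model_G, fingerprint):
    """Generate the matching fingerprint for a to-be-processed fingerprint.

    pairing_model_G is the trained generation network of the fingerprint
    matching model; fingerprint is assumed to be a (1, 1, H, W) tensor.
    """
    return pairing_model_G(fingerprint)
```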
The present application further provides an electronic device 500 for training a fingerprint matching model. Referring to FIG. 5, in one embodiment, the electronic device 500 includes an acquisition unit 501 and a processing unit 502.
The acquisition unit 501 is configured to acquire a first fingerprint image set and a second fingerprint image set, wherein the quality score of each fingerprint image in the first fingerprint image set is lower than a preset quality score, and the quality score of each fingerprint image in the second fingerprint image set is higher than the preset quality score;
the processing unit 502 is configured to perform a first geometric transformation on the first fingerprint image set to obtain a third fingerprint image set; performing first geometric transformation on the second fingerprint image set to obtain a fourth fingerprint image set; inputting the first fingerprint image set into a generation network, and outputting a first analog fingerprint image set through the generation network; inputting the third fingerprint image set into a generation network, and outputting a second analog fingerprint image set through the generation network; performing first geometric transformation on the first analog fingerprint image set to obtain a third analog fingerprint image set; performing second geometric transformation on the second simulated fingerprint image set to obtain a fourth simulated fingerprint image set, wherein the second geometric transformation is an inverse transformation of the first geometric transformation; calculating a first geometric loss between the first set of simulated fingerprint images and the fourth set of simulated fingerprint images and a second geometric loss between the second set of simulated fingerprint images and the third set of simulated fingerprint images; inputting the first analog fingerprint image set and the second fingerprint image set into a discrimination network, and outputting a first discrimination loss and a first generation loss through the discrimination network; inputting the second analog fingerprint image set and the fourth fingerprint image set into a discrimination network, and outputting a second discrimination loss and a second generation loss through the discrimination network; training a discrimination network according to a target discrimination loss, wherein the target discrimination loss is a weighted sum of a first discrimination loss and a second discrimination loss; training the generation network according to a target generation loss, wherein the target generation loss is a weighted sum of a first generation loss, a second generation loss, a first geometric structure loss and a second geometric structure loss; and establishing a fingerprint matching model according to the trained generation network and the trained discrimination network.
As shown in FIG. 6, in one embodiment, the present application provides an electronic device 600 comprising: a bus 602, a processor 604, a memory 606, and a communication interface 608. The processor 604, the memory 606, and the communication interface 608 communicate over the bus 602. It should be understood that the present application does not limit the number of processors, memories, and communication interfaces in the electronic device 600.
The bus 602 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one line is shown in FIG. 6, but it is not intended that there be only one bus or one type of bus. Bus 602 may include a pathway to transfer information between various components of electronic device 600 (e.g., memory 606, processor 604, communication interface 608).
The processor 604 may include any one or more of a central processing unit (CPU), a graphics processing unit (GPU), a neural-network processing unit (NPU), a microprocessor (MP), or a digital signal processor (DSP).
The memory 606 may include volatile memory, such as random access memory (RAM). The memory 606 may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
The memory 606 stores executable program code, and the processor 604 executes the executable program code to implement the functions of the aforementioned acquisition unit 501 and processing unit 502, thereby implementing the method of training a fingerprint matching model. That is, the memory 606 stores instructions for performing the method of training a fingerprint matching model.
The communication interface 608 enables communication between the electronic device 600 and other devices or communication networks using transceiver modules such as, but not limited to, network interface cards, transceivers, and the like.
The embodiment of the present application further provides a computer program product containing instructions. The computer program product may be software or a program product containing instructions that can run on a computer or be stored in any available medium. When the computer program product runs on a computer, the computer is caused to perform the method of training a fingerprint matching model.
The embodiment of the present application further provides a computer-readable storage medium. The computer-readable storage medium may be any available medium that a computer can access, or a data storage device, such as a data center, containing one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state drive), among others. The computer-readable storage medium includes instructions that direct a computer to perform the method of training a fingerprint matching model.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the present disclosure as defined by the appended claims.

Claims (12)

1. A method of training a fingerprint matching model, comprising:
acquiring a first fingerprint image set and a second fingerprint image set, wherein the quality score of each fingerprint image in the first fingerprint image set is lower than a preset quality score, and the quality score of each fingerprint image in the second fingerprint image set is higher than the preset quality score;
performing a first geometric transformation on the first fingerprint image set to obtain a third fingerprint image set;
performing the first geometric transformation on the second fingerprint image set to obtain a fourth fingerprint image set;
inputting the first fingerprint image set into a generation network, and outputting a first simulated fingerprint image set through the generation network;
inputting the third fingerprint image set into the generation network, and outputting a second simulated fingerprint image set through the generation network;
performing the first geometric transformation on the first simulated fingerprint image set to obtain a third simulated fingerprint image set;
performing a second geometric transformation on the second simulated fingerprint image set to obtain a fourth simulated fingerprint image set, the second geometric transformation being an inverse transformation of the first geometric transformation;
calculating a first geometric structure loss between the first simulated fingerprint image set and the fourth simulated fingerprint image set and a second geometric structure loss between the second simulated fingerprint image set and the third simulated fingerprint image set;
inputting the first simulated fingerprint image set and the second fingerprint image set into a discrimination network, and outputting a first discrimination loss and a first generation loss through the discrimination network;
inputting the second simulated fingerprint image set and the fourth fingerprint image set into the discrimination network, and outputting a second discrimination loss and a second generation loss through the discrimination network;
training the discrimination network according to a target discrimination loss, wherein the target discrimination loss is a weighted sum of the first discrimination loss and the second discrimination loss;
training the generation network according to a target generation loss, wherein the target generation loss is a weighted sum of the first generation loss, the second generation loss, the first geometric structure loss, and the second geometric structure loss;
and establishing a fingerprint matching model according to the trained generation network and the trained discrimination network.
2. The method of claim 1, wherein the first geometric structure loss is the grayscale root mean square error between the first simulated fingerprint image set and the fourth simulated fingerprint image set, and the second geometric structure loss is the grayscale root mean square error between the second simulated fingerprint image set and the third simulated fingerprint image set.
3. The method of claim 1, wherein the first geometric transformation is a vertical flip, a 90 degree rotation, or a 180 degree rotation.
4. The method according to any one of claims 1 to 3, characterized in that the acquiring a first fingerprint image set and a second fingerprint image set comprises:
calculating the quality score of each fingerprint image in the fingerprint image library according to the fingerprint quality parameters;
and selecting a first fingerprint image set and a second fingerprint image set from the fingerprint image library according to the fingerprint quality scores.
5. The method of claim 4, wherein the fingerprint quality parameters comprise a grayscale mean of the fingerprint image and a grayscale variance of the fingerprint image;
the calculating the fingerprint image quality score of each fingerprint image in the fingerprint image library according to the fingerprint quality parameters comprises the following steps: acquiring the gray average value and the gray variance of each fingerprint image; for any fingerprint image, determining the quality of the fingerprint image to be divided into a weighted sum of the mean and variance of the gray levels of the fingerprint image.
6. The method according to any one of claims 1 to 3, further comprising:
acquiring a fingerprint to be processed;
inputting the fingerprint to be processed into the generation network of the fingerprint matching model, and generating the matching fingerprint of the fingerprint to be processed through the generation network of the fingerprint matching model.
7. An electronic device, comprising:
the fingerprint identification device comprises an acquisition unit, a comparison unit and a comparison unit, wherein the acquisition unit is used for acquiring a first fingerprint image set and a second fingerprint image set, the quality score of each fingerprint image in the first fingerprint image set is lower than a preset quality score, and the quality score of each fingerprint image in the second fingerprint image set is higher than the preset quality score;
a processing unit, configured to: perform a first geometric transformation on the first fingerprint image set to obtain a third fingerprint image set; perform the first geometric transformation on the second fingerprint image set to obtain a fourth fingerprint image set; input the first fingerprint image set into a generation network and output a first simulated fingerprint image set through the generation network; input the third fingerprint image set into the generation network and output a second simulated fingerprint image set through the generation network; perform the first geometric transformation on the first simulated fingerprint image set to obtain a third simulated fingerprint image set; perform a second geometric transformation on the second simulated fingerprint image set to obtain a fourth simulated fingerprint image set, the second geometric transformation being an inverse transformation of the first geometric transformation; calculate a first geometric structure loss between the first simulated fingerprint image set and the fourth simulated fingerprint image set and a second geometric structure loss between the second simulated fingerprint image set and the third simulated fingerprint image set; input the first simulated fingerprint image set and the second fingerprint image set into a discrimination network and output a first discrimination loss and a first generation loss through the discrimination network; input the second simulated fingerprint image set and the fourth fingerprint image set into the discrimination network and output a second discrimination loss and a second generation loss through the discrimination network; train the discrimination network according to a target discrimination loss, wherein the target discrimination loss is a weighted sum of the first discrimination loss and the second discrimination loss; train the generation network according to a target generation loss, wherein the target generation loss is a weighted sum of the first generation loss, the second generation loss, the first geometric structure loss, and the second geometric structure loss; and establish a fingerprint matching model according to the trained generation network and the trained discrimination network.
8. The electronic device according to claim 7, wherein the acquisition unit is specifically configured to calculate the quality score of each fingerprint image in a fingerprint image library according to fingerprint quality parameters; and select the first fingerprint image set and the second fingerprint image set from the fingerprint image library according to the quality scores.
9. The electronic device according to claim 8, wherein the fingerprint quality parameter comprises a grayscale mean and a grayscale variance of the fingerprint image, and the acquisition unit is specifically configured to: acquire the grayscale mean and the grayscale variance of each fingerprint image; and, for any fingerprint image, determine the quality score of the fingerprint image as a weighted sum of the grayscale mean and the grayscale variance of the fingerprint image.
10. The electronic device according to any one of claims 7 to 9, wherein:
the acquisition unit is further configured to acquire a fingerprint to be processed; and
the processing unit is further configured to input the fingerprint to be processed into the generation network of the fingerprint matching model, and generate a matching fingerprint of the fingerprint to be processed through the generation network of the fingerprint matching model.
11. An electronic device, comprising a processor and a memory, wherein the memory is configured to store instructions and the processor is configured to execute the instructions, so that the electronic device performs the method according to any one of claims 1 to 6.
12. A computer-readable storage medium having instructions stored thereon which, when executed on a computer, cause the computer to perform the method according to any one of claims 1 to 6.
CN202211391045.7A 2022-11-08 2022-11-08 Method, electronic device, program product, and medium for training fingerprint matching model Active CN115439894B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211391045.7A CN115439894B (en) 2022-11-08 2022-11-08 Method, electronic device, program product, and medium for training fingerprint matching model

Publications (2)

Publication Number Publication Date
CN115439894A (en) 2022-12-06
CN115439894B (en) 2023-04-11

Family

ID=84252166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211391045.7A Active CN115439894B (en) 2022-11-08 2022-11-08 Method, electronic device, program product, and medium for training fingerprint matching model

Country Status (1)

Country Link
CN (1) CN115439894B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111445007A (en) * 2020-03-03 2020-07-24 平安科技(深圳)有限公司 Training method and system for resisting generation of neural network
CN112967174A (en) * 2021-01-21 2021-06-15 北京达佳互联信息技术有限公司 Image generation model training method, image generation device and storage medium
CN113469897A (en) * 2021-05-24 2021-10-01 苏州市科远软件技术开发有限公司 Training method and device of image enhancement model, image enhancement method and device and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111489290B (en) * 2019-04-02 2023-05-16 长信智控网络科技有限公司 Face image super-resolution reconstruction method and device and terminal equipment
CN111325668B (en) * 2020-02-06 2023-04-18 北京字节跳动网络技术有限公司 Training method and device for image processing deep learning model and electronic equipment
CN115131218A (en) * 2021-03-25 2022-09-30 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer readable medium and electronic equipment
CN115170388A (en) * 2022-07-28 2022-10-11 西南大学 Character line draft generation method, device, equipment and medium

Also Published As

Publication number Publication date
CN115439894A (en) 2022-12-06

Similar Documents

Publication Publication Date Title
CN108898186B (en) Method and device for extracting image
CN108229296B (en) Face skin attribute identification method and device, electronic equipment and storage medium
CN109859305B (en) Three-dimensional face modeling and recognizing method and device based on multi-angle two-dimensional face
CN108427927B (en) Object re-recognition method and apparatus, electronic device, program, and storage medium
CN112329619B (en) Face recognition method and device, electronic equipment and readable storage medium
CN109272016B (en) Target detection method, device, terminal equipment and computer readable storage medium
CN109345553B (en) Palm and key point detection method and device thereof, and terminal equipment
CN107679466B (en) Information output method and device
JP2020537204A (en) Deep Neural Network Normalization Methods and Devices, Instruments, and Storage Media
CN114186632B (en) Method, device, equipment and storage medium for training key point detection model
CN108229301B (en) Eyelid line detection method and device and electronic equipment
WO2019146057A1 (en) Learning device, system for generating captured image classification device, device for generating captured image classification device, learning method, and program
CN111382791B (en) Deep learning task processing method, image recognition task processing method and device
CN111353325A (en) Key point detection model training method and device
WO2021043023A1 (en) Image processing method and device, classifier training method, and readable storage medium
CN113469091A (en) Face recognition method, training method, electronic device and storage medium
CN115439894B (en) Method, electronic device, program product, and medium for training fingerprint matching model
CN109753561B (en) Automatic reply generation method and device
CN110717817A (en) Pre-loan approval method and device, electronic equipment and computer-readable storage medium
CN114882273B (en) Visual identification method, device, equipment and storage medium applied to narrow space
CN113298098B (en) Fundamental matrix estimation method and related product
WO2022126917A1 (en) Deep learning-based face image evaluation method and apparatus, device, and medium
CN115063847A (en) Training method and device for facial image acquisition model
CN116263938A (en) Image processing method, device and computer readable storage medium
CN114373034A (en) Image processing method, image processing apparatus, image processing device, storage medium, and computer program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant